Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1566–1576, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Effective Measures of Domain Similarity for Parsing Barbara Plank University of Groningen The Netherlands [email protected] Gertjan van Noord University of Groningen The Netherlands [email protected] Abstract It is well known that parsing accuracy suffers when a model is applied to out-of-domain data. It is also known that the most beneficial data to parse a given domain is data that matches the domain (Sekine, 1997; Gildea, 2001). Hence, an important task is to select appropriate domains. However, most previous work on domain adaptation relied on the implicit assumption that domains are somehow given. As more and more data becomes available, automatic ways to select data that is beneficial for a new (unknown) target domain are becoming attractive. This paper evaluates various ways to automatically acquire related training data for a given test set. The results show that an unsupervised technique based on topic models is effective – it outperforms random data selection on both languages examined, English and Dutch. Moreover, the technique works better than manually assigned labels gathered from meta-data that is available for English. 1 Introduction and Motivation Previous research on domain adaptation has focused on the task of adapting a system trained on one domain, say newspaper text, to a particular new domain, say biomedical data. Usually, some amount of (labeled or unlabeled) data from the new domain was given – which has been determined by a human. However, with the growth of the web, more and more data is becoming available, where each document “is potentially its own domain” (McClosky et al., 2010). It is not straightforward to determine which data or model (in case we have several source domain models) will perform best on a new (unknown) target domain. Therefore, an important issue that arises is how to measure domain similarity, i.e. whether we can find a simple yet effective method to determine which model or data is most beneficial for an arbitrary piece of new text. Moreover, if we had such a measure, a related question is whether it can tell us something more about what is actually meant by “domain”. So far, it was mostly arbitrarily used to refer to some kind of coherent unit (related to topic, style or genre), e.g.: newspaper text, biomedical abstracts, questions, fiction. Most previous work on domain adaptation, for instance Hara et al. (2005), McClosky et al. (2006), Blitzer et al. (2006), Daum´e III (2007), sidestepped this problem of automatic domain selection and adaptation. For parsing, to our knowledge only one recent study has started to examine this issue (McClosky et al., 2010) – we will discuss their approach in Section 2. Rather, an implicit assumption of all of these studies is that domains are given, i.e. that they are represented by the respective corpora. Thus, a corpus has been considered a homogeneous unit. As more data is becoming available, it is unlikely that domains will be ‘given’. Moreover, a given corpus might not always be as homogeneous as originally thought (Webber, 2009; Lippincott et al., 2010). For instance, recent work has shown that the well-known Penn Treebank (PT) Wall Street Journal (WSJ) actually contains a variety of genres, including letters, wit and short verse (Webber, 2009). In this study we take a different approach. 
Rather than viewing a given corpus as a monolithic entity, 1566 we break it down to the article-level and disregard corpora boundaries. Given the resulting set of documents (articles), we evaluate various ways to automatically acquire related training data for a given test set, to find answers to the following questions: • Given a pool of data (a collection of articles from unknown domains) and a test article, is there a way to automatically select data that is relevant for the new domain? If so: • Which similarity measure is good for parsing? • How does it compare to human-annotated data? • Is the measure also useful for other languages and/or tasks? To this end, we evaluate measures of domain similarity and feature representations and their impact on dependency parsing accuracy. Given a collection of annotated articles, and a new article that we want to parse, we want to select the most similar articles to train the best parser for that new article. In the following, we will first compare automatic measures to human-annotated labels by examining parsing performance within subdomains of the Penn Treebank WSJ. Then, we extend the experiments to the domain adaptation scenario. Experiments were performed on two languages: English and Dutch. The empirical results show that a simple measure based on topic distributions is effective for both languages and works well also for Part-of-Speech tagging. As the approach is based on plain surfacelevel information (words) and it finds related data in a completely unsupervised fashion, it can be easily applied to other tasks or languages for which annotated (or automatically annotated) data is available. 2 Related Work The work most related to ours is McClosky et al. (2010). They try to find the best combination of source models to parse data from a new domain, which is related to Plank and Sima’an (2008). In the latter, unlabeled data was used to create several parsers by weighting trees in the WSJ according to their similarity to the subdomain. McClosky et al. (2010) coined the term multiple source domain adaptation. Inspired by work on parsing accuracy prediction (Ravi et al., 2008), they train a linear regression model to predict the best (linear interpolation) of source domain models. Similar to us, McClosky et al. (2010) regard a target domain as mixture of source domains, but they focus on phrasestructure parsing. Furthermore, our approach differs from theirs in two respects: we do not treat source corpora as one entity and try to mix models, but rather consider articles as base units and try to find subsets of related articles (the most similar articles); moreover, instead of creating a supervised model (in their case to predict parsing accuracy), our approach is ‘simplistic’: we apply measures of domain similarity directly (in an unsupervised fashion), without the necessity to train a supervised model. Two other related studies are (Lippincott et al., 2010; Van Asch and Daelemans, 2010). Van Asch and Daelemans (2010) explore a measure of domain difference (Renyi divergence) between pairs of domains and its correlation to Part-of-Speech tagging accuracy. Their empirical results show a linear correlation between the measure and the performance loss. Their goal is different, but related: rather than finding related data for a new domain, they want to estimate the loss in accuracy of a PoS tagger when applied to a new domain. We will briefly discuss results obtained with the Renyi divergence in Section 5.1. Lippincott et al. 
(2010) examine subdomain variation in biomedical corpora and propose that NLP tools be made aware of such variation. However, they did not yet evaluate the effect on a practical task, thus our study is somewhat complementary to theirs. The issue of data selection has recently been examined for Language Modeling (Moore and Lewis, 2010). A subset of the available data is automatically selected as training data for a Language Model based on a scoring mechanism that compares cross-entropy scores. Their approach considerably outperformed random selection and two previously proposed approaches, both based on perplexity scoring. (We tested data selection by perplexity scoring, but found the Language Models too small to be useful in our setting.)

3 Measures of Domain Similarity

3.1 Measuring Similarity Automatically

Feature Representations A similarity function may be defined over any set of events that are considered to be relevant for the task at hand. For parsing, these might be words, characters, n-grams (of words or characters), Part-of-Speech (PoS) tags, bilexical dependencies, syntactic rules, etc. However, to obtain more abstract types such as PoS tags or dependency relations, one would first need to gather the respective labels. The necessary tools for this are again trained on particular corpora, and will suffer from domain shifts, rendering labels noisy. Therefore, we want to gauge the effect of the simplest representation possible: plain surface characteristics (unlabeled text). This has the advantage that we do not need to rely on additional supervised tools; moreover, it is interesting to know how far we can get with this level of information only. We examine the following feature representations: relative frequencies of words, relative frequencies of character tetragrams, and topic models. Our motivation was as follows. Relative frequencies of words are a simple and effective representation used e.g. in text classification (Manning and Schütze, 1999), while character n-grams have proven successful in genre classification (Wu et al., 2010). Topic models (Blei et al., 2003; Steyvers and Griffiths, 2007) can be considered an advanced model over word distributions: every article is represented by a topic distribution, which in turn is a distribution over words. Similarity between documents can be measured by comparing topic distributions.

Similarity Functions There are many possible similarity (or distance) functions. They fall broadly into two categories: probabilistically-motivated and geometrically-motivated functions. The similarity functions examined in this study are described in the following. The Kullback-Leibler (KL) divergence D(q||r) is a classical measure of 'distance' between two probability distributions (it is not a proper distance metric, since it is asymmetric), and is defined as: D(q||r) = Σ_y q(y) log(q(y)/r(y)). It is a non-negative, additive, asymmetric measure, and 0 iff the two distributions are identical. However, the KL-divergence is undefined if there exists an event y such that q(y) > 0 but r(y) = 0, which is a property that "makes it unsuitable for distributions derived via maximum-likelihood estimates" (Lee, 2001). One option to overcome this limitation is to apply smoothing techniques to gather non-zero estimates for all y. The alternative, examined in this paper, is to consider an approximation to the KL divergence, such as the Jensen-Shannon (JS) divergence (Lin, 1991) and the skew divergence (Lee, 2001).
The Jensen-Shannon divergence, which is symmetric, computes the KL-divergence between q, r, and the average of the two. We use the JS divergence as defined in Lee (2001): JS(q, r) = ½ [D(q||avg(q, r)) + D(r||avg(q, r))]. The asymmetric skew divergence s_α, proposed by Lee (2001), mixes one distribution with the other by a degree defined by α ∈ [0, 1): s_α(q, r) = D(q || αr + (1 − α)q). As α approaches 1, the skew divergence approximates the KL-divergence. An alternative way to measure similarity is to consider the distributions as vectors and apply geometrically-motivated distance functions. This family of similarity functions includes the cosine, cos(q, r) = Σ_y q(y)·r(y) / (||q|| ||r||), the euclidean distance, euc(q, r) = √(Σ_y (q(y) − r(y))²), and the variational (also known as L1 or Manhattan) distance, defined as var(q, r) = Σ_y |q(y) − r(y)|.

3.2 Human-annotated data

In contrast to the automatic measures devised in the previous section, we might have access to human-annotated data, that is, label information such as topic or genre that defines the set of similar articles.

Genre For the Penn Treebank (PT) Wall Street Journal (WSJ) section, more specifically the subset available in the Penn Discourse Treebank, there exists a partition of the data by genre (Webber, 2009). Every article is assigned one of the following genre labels: news, letters, highlights, essays, errata, wit and short verse, quarterly progress reports, notable and quotable. This classification has been made on the basis of meta-data (Webber, 2009). It is well known that there is no meta-data directly associated with the individual WSJ files in the Penn Treebank. However, meta-data can be obtained by looking at the articles in the ACL/DCI corpus (LDC99T42), and a mapping file that aligns document numbers of DCI (DOCNO) to WSJ keys (Webber, 2009). An example document is given in Figure 1. The meta-data field HL contains headlines, SO source info, and the IN field includes topic markers.

<DOC><DOCNO> 891102-0186. </DOCNO>
<WSJKEY> wsj_0008 </WSJKEY>
<AN> 891102-0186. </AN>
<HL> U.S. Savings Bonds Sales @ Suspended by Debt Limit </HL>
<DD> 11/02/89 </DD>
<SO> WALL STREET JOURNAL (J) </SO>
<IN> FINANCIAL, ACCOUNTING, LEASING (FIN) BOND MARKET NEWS (BON) </IN>
<GV> TREASURY DEPARTMENT (TRE) </GV>
<DATELINE> WASHINGTON </DATELINE>
<TXT> <p><s> The federal government suspended sales of U.S. savings bonds because Congress hasn't lifted the ceiling on government debt.</s></p> [...]
Figure 1: Example of ACL/DCI article. We have augmented it with the WSJ filename (WSJKEY).

Topic On the basis of the same meta-data, we devised a classification of the Penn Treebank WSJ by topic. That is, while the genre division has been made mostly on the basis of headlines, we use the information in the IN field. Every article is assigned one, more than one, or none of a predefined set of keywords. While their origin remains unclear,3 these keywords seem to come from a controlled vocabulary. There are 76 distinct topic markers. The three most frequent keywords are: TENDER OFFERS, MERGERS, ACQUISITIONS (TNM), EARNINGS (ERN), STOCK MARKET, OFFERINGS (STK). This reflects the fact that a lot of articles come from the financial domain. But the corpus also contains articles from more distant domains, like MARKETING, ADVERTISING (MKT), COMPUTERS AND INFORMATION TECHNOLOGY (CPR), HEALTH CARE PROVIDERS, MEDICINE, DENTISTRY (HEA), PETROLEUM (PET).
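To make the measures of Section 3.1 concrete, the sketch below implements the probabilistically- and geometrically-motivated functions over relative word frequencies. It is an illustrative reimplementation under our own simplifying assumptions (dictionary-based distributions, no smoothing), not the code used in the experiments.

```python
import math

def relative_freqs(tokens):
    """Plain surface representation: relative frequency of each word."""
    counts = {}
    for w in tokens:
        counts[w] = counts.get(w, 0) + 1
    total = float(len(tokens))
    return {w: c / total for w, c in counts.items()}

def kl(q, r):
    """D(q||r) = sum_y q(y) log(q(y)/r(y)); infinite if q(y) > 0 but r(y) = 0."""
    total = 0.0
    for y, qy in q.items():
        if qy == 0.0:
            continue
        ry = r.get(y, 0.0)
        if ry == 0.0:
            return float("inf")
        total += qy * math.log(qy / ry)
    return total

def js(q, r):
    """Jensen-Shannon divergence: average KL to the averaged distribution."""
    keys = set(q) | set(r)
    avg = {y: 0.5 * (q.get(y, 0.0) + r.get(y, 0.0)) for y in keys}
    return 0.5 * (kl(q, avg) + kl(r, avg))

def skew(q, r, alpha=0.99):
    """Skew divergence s_alpha(q, r) = D(q || alpha*r + (1 - alpha)*q)."""
    keys = set(q) | set(r)
    mix = {y: alpha * r.get(y, 0.0) + (1.0 - alpha) * q.get(y, 0.0) for y in keys}
    return kl(q, mix)

def variational(q, r):
    """L1 (Manhattan) distance."""
    keys = set(q) | set(r)
    return sum(abs(q.get(y, 0.0) - r.get(y, 0.0)) for y in keys)

def euclidean(q, r):
    keys = set(q) | set(r)
    return math.sqrt(sum((q.get(y, 0.0) - r.get(y, 0.0)) ** 2 for y in keys))

def cosine(q, r):
    dot = sum(qy * r.get(y, 0.0) for y, qy in q.items())
    nq = math.sqrt(sum(v * v for v in q.values()))
    nr = math.sqrt(sum(v * v for v in r.values()))
    return dot / (nq * nr) if nq and nr else 0.0
```

The same functions apply unchanged to the topic-model representation, where every article is a distribution over topics rather than over words.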
4 Experimental Setup 4.1 Tools & Evaluation The parsing system used in this study is the MST parser (McDonald et al., 2005), a state-of-the-art data-driven graph-based dependency parser. It is 3It is not known what IN stands for, as also stated in Mark Liberman’s notes in the readme of the ACL/DCI corpus. However, a reviewer suggested that IN might stand for “index terms” which seems plausible. a system that can be trained on a variety of languages given training data in CoNLL format (Buchholz and Marsi, 2006). Additionally, the parser implements both projective and non-projective parsing algorithms. The projective algorithm is used for the experiments on English, while the non-projective variant is used for Dutch. We train the parser using default settings. MST takes PoS-tagged data as input; we use gold-standard tags in the experiments. We estimate topic models using Latent Dirichlet Allocation (Blei et al., 2003) implemented in the MALLET4 toolkit. Like Lippincott et al. (2010), we set the number of topics to 100, and otherwise use standard settings (no further optimization). We experimented with the removal of stopwords, but found no deteriorating effect while keeping them. Thus, all experiments are carried out on data where stopwords were not removed. We implemented the similarity measures presented in Section 3.1. For skew divergence, that requires parameter α, we set α = .99 (close to KL divergence) since that has shown previously to work best (Lee, 2001). Additionally, we evaluate the approach on English PoS tagging using two different taggers: MXPOST, the MaxEnt tagger of Ratnaparkhi5 and Citar,6 a trigram HMM tagger. In all experiments, parsing performance is measured as Labeled Attachment Score (LAS), the percentage of tokens with correct dependency edge and label. To compute LAS, we use the CoNLL 2007 evaluation script7 with punctuation tokens excluded from scoring (as was the default setting in CoNLL 2006). PoS tagging accuracy is measured as the percentage of correctly labeled words out of all words. Statistical significance is determined by Approximate Randomization Test (Noreen, 1989; Yeh, 2000) with 10,000 iterations. 4.2 Data English - WSJ For English, we use the portion of the Penn Treebank Wall Street Journal (WSJ) that has been made available in the CoNLL 2008 shared 4http://mallet.cs.umass.edu/ 5ftp://ftp.cis.upenn.edu/pub/adwait/jmx/ 6Citar has been implemented by Dani¨el de Kok and is available at: https://github.com/danieldk/citar 7http://nextens.uvt.nl/depparse-wiki/ 1569 task. This data has been automatically converted8 into dependency structure, and contains three files: the training set (sections 02-21), development set (section 24) and test set (section 23). Since we use articles as basic units, we actually split the data to get back original article boundaries.9 This led to a total of 2,034 articles (1 million words). Further statistics on the datasets are given in Table 1. In the first set of experiments on WSJ subdomains, we consider articles from section 23 and 24 that contain at least 50 sentences as test sets (target domains). This amounted to 22 test articles. EN: WSJ WSJ+G+B Dutch articles 2,034 3,776 51,454 sentences 43,117 77,422 1,663,032 words 1,051,997 1,784,543 20,953,850 Table 1: Overview of the datasets for English and Dutch. To test whether we have a reasonable system, we performed a sanity check and trained the MST parser on the training section (02-21). 
The result on the standard test set (section 23) is identical to previously reported results (excluding punctuation tokens: LAS 87.50, Unlabeled Attachment Score (UAS) 90.75; with punctuation tokens: LAS 87.07, UAS 89.95). The latter has been reported in (Surdeanu and Manning, 2010). English - Genia (G) & Brown (B) For the Domain Adaptation experiments, we added 1,552 articles from the GENIA10 treebank (biomedical abstracts from Medline) and 190 files from the Brown corpus to the pool of data. We converted the data to CoNLL format with the LTH converter (Johansson and Nugues, 2007). The size of the test files is, respectively: Genia 1,360 sentences with an average number of 26.20 words per sentence; the Brown test set is the same as used in the CoNLL 2008 shared task and contains 426 sentences with a mean of 16.80 words. 8Using the LTH converter: http://nlp.cs.lth.se/ software/treebank_converter/ 9This was a non-trivial task, as we actually noticed that some sentences have been omitted from the CoNLL 2008 shared task. 10We use the GENIA distribution in Penn Treebank format available at http://bllip.cs.brown.edu/download/ genia1.0-division-rel1.tar.gz 5 Experiments on English 5.1 Experiments within the WSJ In the first set of experiments, we focus on the WSJ and evaluate the similarity functions to gather related data for a given test article. We have 22 WSJ articles as test set, sampled from sections 23 and 24. Regarding feature representations, we examined three possibilities: relative frequencies of words, relative frequencies of character tetragrams (both unsmoothed) and document topic distributions. In the following, we only discuss representations based on words or topic models as we found character tetragrams less stable; they performed sometimes like their word-based counterparts but other times, considerably worse. Results of Similarity Measures Table 2 compares the effect of the different ways to select related data in comparison to the random baseline for increasing amounts of training data. The table gives the average over 22 test articles (rather than showing individual tables for the 22 articles). We select articles up to various thresholds that specify the total number of sentences selected in each round (e.g. 0.3k, 1.2k, etc.).11 In more detail, Table 2 shows the result of applying various similarity functions (introduced in Section 3.1) over the two different feature representations (w: words; tm: topic model) for increasing amounts of data. We additionally provide results of using the Renyi divergence.12 Clearly, as more and more data is selected, the differences become smaller, because we are close to the data limit. However, for all data points less than 38k (97%), selection by jensen-shannon, variational and cosine similarity outperform random data selection significantly for both types of feature representations (words and topic model). For selection by topic models, this additionally holds for the euclidean measure. From the various measures we can see that selection by jensen-shannon divergence and variational distance perform best, followed by cosine similarity, skew divergence, euclidean and renyi. 11Rather than choosing k articles, as article length may differ. 12The Renyi divergence (R´enyi, 1961), also used by Van Asch and Daelemans (2010), is defined as Dα(q, r) = 1/(α − 1) log(P qαr1−α). 
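The selection procedure itself is straightforward: rank the pool by similarity to the test article and accumulate the most similar articles up to a sentence budget. The sketch below illustrates this; the Article container is a hypothetical stand-in for however articles and their representations are actually stored, and js/relative_freqs refer to the earlier sketch.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Article:
    """Hypothetical container for one training article: its annotated
    sentences and a precomputed representation (relative word frequencies
    or an LDA topic distribution)."""
    sentences: List[str]
    representation: Dict

def select_training_data(test_repr: Dict,
                         pool: List[Article],
                         divergence: Callable[[Dict, Dict], float],
                         max_sentences: int) -> List[Article]:
    """Rank the pool by divergence from the test article and keep the most
    similar articles until roughly max_sentences sentences are selected
    (a sentence budget rather than a fixed number of articles, since
    article length varies; cf. footnote 11)."""
    ranked = sorted(pool, key=lambda a: divergence(test_repr, a.representation))
    selected, n_sent = [], 0
    for article in ranked:
        if n_sent >= max_sentences:
            break
        selected.append(article)
        n_sent += len(article.sentences)
    return selected

# Word-based selection with Jensen-Shannon and a 1.2k-sentence budget:
#   train = select_training_data(relative_freqs(test_tokens), pool, js, 1200)
# For topic-model selection, `representation` holds the article's topic
# distribution and `variational` (or `js`) replaces the word-based measure.
```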
1570 1% 3% 25% 49% 97% (0.3k) (1.2k) (9.6k) (19.2k) (38k) random 70.61 77.21 82.98 84.48 85.51 w-js 74.07⋆ 79.41⋆ 83.98⋆ 84.94⋆ 85.68 w-var 74.07⋆ 79.60⋆ 83.82⋆ 84.94⋆ 85.45 w-skw 74.20⋆ 78.95⋆ 83.68⋆ 84.60 85.55 w-cos 73.77⋆ 79.30⋆ 83.87⋆ 84.96⋆ 85.59 w-euc 73.85⋆ 78.90⋆ 83.52⋆ 84.68 85.57 w-ryi 73.41⋆ 78.31 83.76⋆ 84.46 85.46 tm-js 74.23⋆ 79.49⋆ 84.04⋆ 85.01⋆ 85.45 tm-var 74.29⋆ 79.59⋆ 83.93⋆ 84.94⋆ 85.43 tm-skw 74.13⋆ 79.42⋆ 84.13⋆ 84.82 85.73 tm-cos 74.04⋆ 79.27⋆ 84.14⋆ 84.99⋆ 85.42 tm-euc 74.27⋆ 79.53⋆ 83.93⋆ 85.15⋆ 85.62 tm-ryi 71.26 78.64⋆ 83.79⋆ 84.85 85.58 Table 2: Comparison of similarity measures based on words (w) and topic model (tm): parsing accuracy for increasing amounts of training data as average over 22 WSJ articles (js=jensen-shannon; cos=cosine; skw=skew; var=variational; euc=euclidean; ryi=renyi). Best score (per representation) underlined, best overall score bold; ⋆indicates significantly better (p < 0.05) than random. Renyi divergence does not perform as well as other probabilistically-motivated functions. Regarding feature representations, the representation based on topic models works slightly better than the respective word-based measure (cf. Table 2) and often achieves the overall best score (boldface). Overall, the differences in accuracy between the various similarity measures are small; but interestingly, the overlap between them is not that large. Table 3 and Table 4 show the overlap (in terms of proportion of identically selected articles) between pairs of similarity measures. As shown in Table 3, for all measures there is only a small overlap with the random baseline (around 10%-14%). Despite similar performance, topic model selection has interestingly no substantial overlap with any other wordbased similarity measures: their overlap is at most 41.6%. Moreover, Table 4 compares the overlap of the various similarity functions within a certain feature representation (here x stands for either topic model – left value – or words – right value). The table shows that there is quite some overlap between jensen-shannon, variational and skew divergence on one side, and cosine and euclidean on the other side, i.e. between probabilistically- and geometrically-motivated functions. Variational has a higher overlap with the probabilistic functions. Interestingly, the ‘peaks’ in Table 4 (underlined, i.e. the highest pair-wise overlaps) are the same for the different feature representations. In the following we analyze selection by topic model and words, as they are relatively different from each other, despite similar performance. For the word-based model, we use jensen-shannon as similarity function, as it turned out to be the best measure. For topic model, we use the simpler variational metric. However, very similar results were achieved using jensen-shannon. Cosine and euclidean did not perform as well. ran w-js w-var w-skw w-cos w-euc ran – 10.3 10.4 10.0 10.4 10.2 tm-js 12.1 41.6 39.6 36.0 29.3 28.6 tm-var 12.3 40.8 39.3 34.9 29.3 28.5 tm-skw 11.8 40.9 39.7 36.8 30.0 30.1 tm-cos 14.0 31.7 30.7 27.3 24.1 23.2 tm-euc 14.6 27.5 27.2 23.4 22.6 22.1 Table 3: Average overlap (in %) of similarity measure: random selection (ran) vs. measures based on words (w) and topic model (tm). x=tm/w x-js x-var x-skw x-cos x-euc tm/w-var 76/74 – 60/63 55/48 49/47 tm/w-skw 69/72 60/63 – 48/41 42/42 tm/w-cos 57/42 55/48 48/41 – 62/71 tm/w-euc 47/41 49/47 42/42 62/71 – Table 4: Average overlap (in %) for different feature representations x as tm/w, where tm=topic model and w=words. 
Highest pair-wise overlap is underlined. Automatic Measure vs. Human labels The next question is how these automatic measures compare to human-annotated data. We compare word-based and topic model selection (by using jensen-shannon and variational, respectively) to selection based on human-given labels: genre and topic. For genre, we randomly select larger amounts of training data for a given test article from the same genre. For topic, the approach is similar, but as an article might have 1571 several topic markers (keywords in the IN field), we rank articles by proportion of overlapping keywords. G G G G G G 0 5000 10000 15000 20000 76 78 80 82 84 86 Average number of sentences Accuracy G random words−js topic model−var genre topic (IN fields) Figure 2: Comparison of automatic measures (words using jensen-shannon and topic model using variational) with human-annotated labels (genre/topic). Automatic measures outperform human labels (p < 0.05). Figure 2 shows that human-labels do actually not perform better than the automatic measures. Both are close to random selection. Moreover, the line of selection by topic marker (IN fields) stops early – we believe the reason for this is that the IN fields are too fine-grained, which limits the number of articles that are considered relevant for a given test article. However, manually aggregating articles on similar topics did not improve topic-based selection either. We conclude that the automatic selection techniques perform significantly better than humanannotated data, at least within the WSJ domain considered here. 5.2 Domain Adaptation Results Until now, we compared similarity measures by restricting ourselves to articles from the WSJ. In this section, we extend the experiments to the domain adaptation scenario. We augment the pool of WSJ articles with articles coming from two other corpora: Genia and Brown. We want to gauge the effectiveness of the domain similarity measures in the multidomain setting, where articles are selected from the pool of data without knowing their identity (which corpus the articles came from). The test sets are the standard evaluation sets from the three corpora: the standard WSJ (section 23) and Brown test set from CoNLL 2008 (they contain 2,399 and 426 sentences, respectively) and the Genia test set (1,370 sentences). As a reference, we give results of models trained on the respective corpora (per-corpus models; i.e. if we consider corpora boundaries and train a model on the respective domain – this model is ‘supervised’ in the sense that it knows from which corpus the test article came from) as well as a baseline model trained on all data, i.e. the union of all three corpora (wsj+genia+brown), which is a standard baseline in domain adaptation (Daum´e III, 2007; McClosky et al., 2010). WSJ Brown Genia (38k) (28k) (19k) random 86.58 73.81 83.77 per-corpus 87.50 81.55 86.63 union 87.05 79.12 81.57 topic model (var) 87.11⋆ 81.76♦ 86.77♦ words (js) 86.30 81.47♦ 86.44♦ Table 5: Domain Adaptation Results on English (significantly better: ⋆than random; ♦than random and union). The learning curves are shown in Figure 3, the scores for a specific amount of data are given in Table 5. The performance of the reference models (per-corpus and union in Table 5) are indicated in Figure 3 with horizontal lines: the dashed line represents the per-corpus performance (‘supervised’ model); the solid line shows the performance of the union baseline trained on all available data (77k sentences). 
For the former, the vertical dashed lines indicate the amount of data the model was trained on (e.g. 23k sentences for Brown). Simply taking all available data has a deteriorating effect: on all three test sets, the performance of the union model is below the presumably best performance of a model trained on the respective corpus (per-corpus model). The empirical results show that automatic data selection by topic model outperforms random selection on all three test sets and the union baseline in two out of three cases. More specifically, selection by topic model outperforms random selection significantly on all three test sets and all points in the graph (p < 0.001). Selection by the word-based measure (words-js) achieves a significant improve1572 G G G G G G 0 10000 20000 30000 40000 80 82 84 86 88 wsj23all number of sentences Accuracy G G G G G G G 0 10000 20000 30000 40000 70 75 80 brown number of sentences Accuracy G G G G G G G 0 10000 20000 30000 40000 76 78 80 82 84 86 88 genia number of sentences Accuracy G random words−js topic model−var per−corpus model union (wsj+genia+brown) Figure 3: Domain Adaptation Results for English Parsing with Increasing Amounts of Training Data. The vertical line represents the amount of data the per-corpus model is trained on. ment over the random baseline on two out of the three test sets – it falls below the random baseline on the WSJ test set. Thus, selection by topic model performs best – it achieves better performance than the union baseline with comparatively little data (Genia: 4k; Brown: 19k – in comparison: union has 77k). Moreover, it comes very close to the supervised percorpus model performance13 with a similar amount of data (cf. vertical dashed line). This is a very good result, given that the technique disregards the origin of the articles and just uses plain words as information. It automatically finds data that is beneficial for an unknown target domain. So far we examined domain similarity measures for parsing, and concluded that selection by topic model performs best, closely followed by wordbased selection using the jensen-shannon divergence. The question that remains is whether the measure is more widely applicable: How does it perform on another language and task? PoS tagging We perform similar Domain Adaptation experiments on WSJ, Genia and Brown for PoS tagging. We use two taggers (HMM and MaxEnt) and the same three test articles as before. The results are shown in Figure 4 (it depicts the average over the three test sets, WSJ, Genia, Brown, for space reasons). The left figure shows the performance of the HMM tagger; on the right is the MaxEnt tagger. The graphs show that automatic training data selection outperforms random data selec13On Genia and Brown (cf. Table 5) there is no significant difference between topic model and per-corpus model. tion, and again topic model selection performs best, closely followed by words-js. This confirms previous findings and shows that the domain similarity measures are effective also for this task. G G G G G G G G 0 10000 20000 30000 40000 0.90 0.92 0.94 0.96 0.98 Average HMM tagger number of sentences Accuracy G random words−js topic model−var G G G G G G G 0 10000 20000 30000 40000 0.90 0.92 0.94 0.96 0.98 Average MXPOST tagger number of sentences Accuracy G random words−js topic model−var Figure 4: PoS tagging results, average over 3 test sets. 6 Experiments on Dutch For Dutch, we evaluate the approach on a bigger and more varied dataset. 
It contains in total over 50k articles and 20 million words (cf. Table 1). In contrast to the English data, only a small portion of the dataset is manually annotated: 281 articles.14 Since we want to evaluate the performance of different similarity measures, we want to keep the influence of noise as low as possible. Therefore, we annotated the remaining articles with a parsing system that is more accurate (Plank and van Noord, 2010), the Alpino parser (van Noord, 2006). Note that using a more accurate parsing system to train another parser has recently also been proposed by Petrov et al. (2010) as uptraining. Alpino is a 14http://www.let.rug.nl/vannoord/Lassy/ 1573 parser tailored to Dutch, that has been developed over the last ten years, and reaches an accuracy level of 90% on general newspaper text. It uses a conditional MaxEnt model as parse selection component. Details of the parser are given in (van Noord, 2006). G G G G G G G 0 5000 10000 15000 20000 25000 30000 74 76 78 80 82 84 86 Average number of sentences Accuracy G random topic model−var words−js Figure 5: Result on Dutch; average over 30 articles. Data and Results The Dutch dataset contains articles from a variety of sources: Wikipedia15, EMEA16 (documents from the European Medicines Agency) and the Dutch parallel corpus17 (DPC), that covers a variety of subdomains. The Dutch articles were parsed with Alpino and automatically converted to CoNLL format with the treebank conversion software from CoNLL 2006, where PoS tags have been replaced with more fine-grained Alpino tags as that had a positive effect on MST. The 281 annotated articles come from all three sources. As with English, we consider as test set articles with at least 50 sentences, from which 30 are randomly sampled. The results on Dutch are shown in Figure 5. Domain similarity measures clearly outperform random data selection also in this setting with another language and a considerably larger pool of data (20 million words; 51k articles). 7 Discussion In this paper we have shown the effectiveness of a simple technique that considers only plain words as domain selection measure for two tasks, dependency 15http://ilps.science.uva.nl/WikiXML/ 16http://urd.let.rug.nl/tiedeman/OPUS/EMEA.php 17http://www.kuleuven-kortrijk.be/DPC parsing and PoS tagging. Interestingly, humanannotated labels did not perform better than the automatic measures. The best technique is based on topic models, and compares document topic distributions estimated by LDA (Blei et al., 2003) using the variational metric (very similar results were obtained using jensen-shannon). Topic model selection significantly outperforms random data selection on both examined languages, English and Dutch, and has a positive effect on PoS tagging. Moreover, it outperformed a standard Domain Adaptation baseline (union) on two out of three test sets. Topic model is closely followed by the word-based measure using jensen-shannon divergence. By examining the overlap between word-based and topic model-based techniques, we found that despite similar performance their overlap is rather small. Given these results and the fact that no optimization has been done for the topic model itself, results are encouraging: there might be an even better measure that exploits the information from both techniques. So far, we tested a simple combination of the two by selecting half of the articles by a measure based on words and the other half by a measure based on topic models (by testing different metrics). 
However, this simple combination technique did not improve results yet – topic model alone still performed best. Overall, plain surface characteristics seem to carry important information of what kind of data is relevant for a given domain. Undoubtedly, parsing accuracy will be influenced by more factors than lexical information. Nevertheless, as we have seen, lexical differences constitute an important factor. Applying divergence measures over syntactic patterns, adding additional articles to the pool of data (by uptraining (Petrov et al., 2010), selftraining (McClosky et al., 2006) or active learning (Hwa, 2004)), gauging the effect of weighting instances according to their similarity to the test data (Jiang and Zhai, 2007; Plank and Sima’an, 2008), as well as analyzing differences between gathered data are venues for further research. Acknowledgments The authors would like to thank Bonnie Webber and the three anonymous reviewers for their valuable comments on earlier drafts of this paper. 1574 References David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3:993–1022. John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain Adaptation with Structural Correspondence Learning. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, Sydney, Australia. Sabine Buchholz and Erwin Marsi. 2006. CoNLL-X Shared Task on Multilingual Dependency Parsing. In Proceedings of the 10th Conference on Computational Natural Language Learning (CoNLL-X), pages 149– 164, New York City. Hal Daum´e III. 2007. Frustratingly Easy Domain Adaptation. In Proceedings of the 45th Meeting of the Association for Computational Linguistics, Prague, Czech Republic. Daniel Gildea. 2001. Corpus Variation and Parser Performance. In Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing, Pittsburgh, PA. Tadayoshi Hara, Yusuke Miyao, and Jun’ichi Tsujii. 2005. Adapting a Probabilistic Disambiguation Model of an HPSG Parser to a New Domain. In Robert Dale, Kam-Fai Wong, Jian Su, and Oi Yee Kwong, editors, Natural Language Processing IJCNLP 2005, volume 3651 of Lecture Notes in Computer Science, pages 199–210. Springer Berlin / Heidelberg. Rebecca Hwa. 2004. Sample Selection for Statistical Parsing. Compututational Linguistics, 30:253–276, September. Jing Jiang and ChengXiang Zhai. 2007. Instance Weighting for Domain Adaptation in NLP. In Proceedings of the 45th Meeting of the Association for Computational Linguistics, pages 264–271, Prague, Czech Republic, June. Association for Computational Linguistics. Richard Johansson and Pierre Nugues. 2007. Extended Constituent-to-dependency Conversion for English. In Proceedings of NODALIDA, Tartu, Estonia. Lillian Lee. 2001. On the Effectiveness of the Skew Divergence for Statistical Language Analysis. In In Artificial Intelligence and Statistics 2001, pages 65–72, Key West, Florida. J. Lin. 1991. Divergence measures based on the Shannon entropy. Information Theory, IEEE Transactions on, 37(1):145 –151, January. Tom Lippincott, Diarmuid ´O S´eaghdha, Lin Sun, and Anna Korhonen. 2010. Exploring variation across biomedical subdomains. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 689–697, Beijing, China, August. Christopher D. Manning and Hinrich Sch¨utze. 1999. Foundations of Statistical Natural Language Processing. MIT Press, Cambridge Mass. David McClosky, Eugene Charniak, and Mark Johnson. 2006. 
Effective Self-Training for Parsing. In Proceedings of Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 152–159, Brooklyn, New York. Association for Computational Linguistics. David McClosky, Eugene Charniak, and Mark Johnson. 2010. Automatic Domain Adaptation for Parsing. In Proceedings of Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 28–36, Los Angeles, California, June. Association for Computational Linguistics. Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajiˇc. 2005. Non-projective Dependency Parsing using Spanning Tree Algorithms. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 523–530, Vancouver, British Columbia, Canada, October. Association for Computational Linguistics. Robert C. Moore and William Lewis. 2010. Intelligent Selection of Language Model Training Data. In Proceedings of the ACL 2010 Conference Short Papers, pages 220–224, Uppsala, Sweden, July. Association for Computational Linguistics. Eric W. Noreen. 1989. Computer-Intensive Methods for Testing Hypotheses: An Introduction. WileyInterscience. Slav Petrov, Pi-Chuan Chang, Michael Ringgaard, and Hiyan Alshawi. 2010. Uptraining for Accurate Deterministic Question Parsing. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 705–713, Cambridge, MA, October. Association for Computational Linguistics. Barbara Plank and Khalil Sima’an. 2008. Subdomain Sensitive Statistical Parsing using Raw Corpora. In Proceedings of the 6th International Conference on Language Resources and Evaluation, Marrakech, Morocco, May. Barbara Plank and Gertjan van Noord. 2010. GrammarDriven versus Data-Driven: Which Parsing System Is More Affected by Domain Shifts? In Proceedings of the 2010 Workshop on NLP and Linguistics: Finding the Common Ground, pages 25–33, Uppsala, Sweden, July. Association for Computational Linguistics. Sujith Ravi, Kevin Knight, and Radu Soricut. 2008. Automatic Prediction of Parser Accuracy. In EMNLP ’08: Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 887– 1575 896, Morristown, NJ, USA. Association for Computational Linguistics. A. R´enyi. 1961. On measures of information and entropy. In Proceedings of the 4th Berkeley Symposium on Mathematics, Statistics and Probability, pages 547–561, Berkeley. Satoshi Sekine. 1997. The Domain Dependence of Parsing. In In Proceedings of the Fifth Conference on Applied Natural Language Processing, pages 96–102, Washington D.C. Mark Steyvers and Tom Griffiths, 2007. Probabilistic Topic Models. Lawrence Erlbaum Associates. Mihai Surdeanu and Christopher D. Manning. 2010. Ensemble Models for Dependency Parsing: Cheap and Good? In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 649–652, Los Angeles, California, June. Association for Computational Linguistics. Vincent Van Asch and Walter Daelemans. 2010. Using Domain Similarity for Performance Estimation. In Proceedings of the 2010 Workshop on Domain Adaptation for Natural Language Processing, pages 31–36, Uppsala, Sweden, July. Association for Computational Linguistics. Gertjan van Noord. 2006. At Last Parsing Is Now Operational. 
In TALN 2006 Verbum Ex Machina, Actes De La 13e Conference sur Le Traitement Automatique des Langues naturelles, pages 20–42, Leuven. Bonnie Webber. 2009. Genre distinctions for Discourse in the Penn TreeBank. In Proceedings of the 47th Meeting of the Association for Computational Linguistics, pages 674–682, Suntec, Singapore, August. Association for Computational Linguistics. Zhili Wu, Katja Markert, and Serge Sharoff. 2010. FineGrained Genre Classification Using Structural Learning Algorithms. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 749–759, Uppsala, Sweden, July. Association for Computational Linguistics. Alexander Yeh. 2000. More accurate tests for the statistical significance of result differences. In Proceedings of the 18th conference on Computational linguistics, pages 947–953, Morristown, NJ, USA. Association for Computational Linguistics. 1576
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1577–1585, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Efficient CCG Parsing: A* versus Adaptive Supertagging Michael Auli School of Informatics University of Edinburgh [email protected] Adam Lopez HLTCOE Johns Hopkins University [email protected] Abstract We present a systematic comparison and combination of two orthogonal techniques for efficient parsing of Combinatory Categorial Grammar (CCG). First we consider adaptive supertagging, a widely used approximate search technique that prunes most lexical categories from the parser’s search space using a separate sequence model. Next we consider several variants on A*, a classic exact search technique which to our knowledge has not been applied to more expressive grammar formalisms like CCG. In addition to standard hardware-independent measures of parser effort we also present what we believe is the first evaluation of A* parsing on the more realistic but more stringent metric of CPU time. By itself, A* substantially reduces parser effort as measured by the number of edges considered during parsing, but we show that for CCG this does not always correspond to improvements in CPU time over a CKY baseline. Combining A* with adaptive supertagging decreases CPU time by 15% for our best model. 1 Introduction Efficient parsing of Combinatorial Categorial Grammar (CCG; Steedman, 2000) is a longstanding problem in computational linguistics. Even with practical CCG that are strongly context-free (Fowler and Penn, 2010), parsing can be much harder than with Penn Treebank-style context-free grammars, since the number of nonterminal categories is generally much larger, leading to increased grammar constants. Where a typical Penn Treebank grammar may have fewer than 100 nonterminals (Hockenmaier and Steedman, 2002), we found that a CCG grammar derived from CCGbank contained nearly 1600. The same grammar assigns an average of 26 lexical categories per word, resulting in a very large space of possible derivations. The most successful strategy to date for efficient parsing of CCG is to first prune the set of lexical categories considered for each word, using the output of a supertagger, a sequence model over these categories (Bangalore and Joshi, 1999; Clark, 2002). Variations on this approach drive the widelyused, broad coverage C&C parser (Clark and Curran, 2004; Clark and Curran, 2007). However, pruning means approximate search: if a lexical category used by the highest probability derivation is pruned, the parser will not find that derivation (§2). Since the supertagger enforces no grammaticality constraints it may even prefer a sequence of lexical categories that cannot be combined into any derivation (Figure 1). Empirically, we show that supertagging improves efficiency by an order of magnitude, but the tradeoff is a significant loss in accuracy (§3). Can we improve on this tradeoff? The line of investigation we pursue in this paper is to consider more efficient exact algorithms. In particular, we test different variants of the classical A* algorithm (Hart et al., 1968), which has met with success in Penn Treebank parsing with context-free grammars (Klein and Manning, 2003; Pauls and Klein, 2009a; Pauls and Klein, 2009b). 
We can substitute A* for standard CKY on either the unpruned set of lexical categories, or the pruned set resulting from su1577 Valid supertag-sequences Valid parses High scoring supertags High scoring parses Desirable parses Attainable parses Figure 1: The relationship between supertagger and parser search spaces based on the intersection of their corresponding tag sequences. pertagging. Our empirical results show that on the unpruned set of lexical categories, heuristics employed for context-free grammars show substantial speedups in hardware-independent metrics of parser effort (§4). To understand how this compares to the CKY baseline, we conduct a carefully controlled set of timing experiments. Although our results show that improvements on hardware-independent metrics do not always translate into improvements in CPU time due to increased processing costs that are hidden by these metrics, they also show that when the lexical categories are pruned using the output of a supertagger, then we can still improve efficiency by 15% with A* techniques (§5). 2 CCG and Parsing Algorithms CCG is a lexicalized grammar formalism encoding for each word lexical categories which are either basic (eg. NN, JJ) or complex. Complex lexical categories specify the number and directionality of arguments. For example, one lexical category (of over 100 in our model) for the transitive verb like is (S\NP2)/NP1, specifying the first argument as an NP to the right and the second as an NP to the left. In parsing, adjacent spans are combined using a small number of binary combinatory rules like forward application or composition on the spanning categories (Steedman, 2000; Fowler and Penn, 2010). In the first derivation below, (S\NP)/NP and NP combine to form the spanning category S\NP, which only requires an NP to its left to form a complete sentence-spanning S. The second derivation uses type-raising to change the category type of I. I like tea NP (S\NP)/NP NP > S\NP < S I like tea NP (S\NP)/NP NP >T S/(S\NP) >B S/NP > S Because of the number of lexical categories and their complexity, a key difficulty in parsing CCG is that the number of analyses for each span of the sentence quickly becomes extremely large, even with efficient dynamic programming. 2.1 Adaptive Supertagging Supertagging (Bangalore and Joshi, 1999) treats the assignment of lexical categories (or supertags) as a sequence tagging problem. Once the supertagger has been run, lexical categories that apply to each word in the input sentence are pruned to contain only those with high posterior probability (or other figure of merit) under the supertagging model (Clark and Curran, 2004). The posterior probabilities are then discarded; it is the extensive pruning of lexical categories that leads to substantially faster parsing times. Pruning the categories in advance this way has a specific failure mode: sometimes it is not possible to produce a sentence-spanning derivation from the tag sequences preferred by the supertagger, since it does not enforce grammaticality. A workaround for this problem is the adaptive supertagging (AST) approach of Clark and Curran (2004). It is based on a step function over supertagger beam ratios, relaxing the pruning threshold for lexical categories whenever the parser fails to find an analysis. The process either succeeds and returns a parse after some iteration or gives up after a predefined number of iterations. As Clark and Curran (2004) show, most sentences can be parsed with a very small number of supertags per word. 
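The control loop just described, with a beam step function like the AST setting given later in Table 3, amounts to the following sketch. The supertagger and parser callables are hypothetical interfaces (not the actual C&C or Hockenmaier and Steedman APIs), shown only to make the retry-on-failure logic explicit.

```python
# Beam step function for the AST setting of Table 3: (beta, dictionary cutoff k).
AST_BEAM_STEPS = [(0.075, 20), (0.03, 20), (0.01, 20), (0.005, 20), (0.001, 150)]

def parse_with_adaptive_supertagging(sentence, supertag, parse, steps=AST_BEAM_STEPS):
    """Try increasingly permissive supertagger beams until a spanning
    analysis is found. `supertag(sentence, beta, k)` is assumed to return,
    per word, the lexical categories whose posterior is within a factor
    beta of the best one (capped at k via the tag dictionary);
    `parse(sentence, supertags)` is assumed to return None when no spanning
    derivation exists. Both are hypothetical interfaces."""
    for beta, k in steps:
        supertags = supertag(sentence, beta, k)
        derivation = parse(sentence, supertags)
        if derivation is not None:
            return derivation
    return None  # give up after the least restrictive setting
```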
However, the technique is inherently approximate: it will return a lower probability parse under the parsing model if a higher probability parse can only be constructed from a supertag sequence returned by a subsequent iteration. In this way it prioritizes speed over accuracy, although the tradeoff can be modified by adjusting the beam step function. 2.2 A* Parsing Irrespective of whether lexical categories are pruned in advance using the output of a supertagger, the CCG parsers we are aware of all use some vari1578 ant of the CKY algorithm. Although CKY is easy to implement, it is exhaustive: it explores all possible analyses of all possible spans, irrespective of whether such analyses are likely to be part of the highest probability derivation. Hence it seems natural to consider exact algorithms that are more efficient than CKY. A* search is an agenda-based best-first graph search algorithm which finds the lowest cost parse exactly without necessarily traversing the entire search space (Klein and Manning, 2003). In contrast to CKY, items are not processed in topological order using a simple control loop. Instead, they are processed from a priority queue, which orders them by the product of their inside probability and a heuristic estimate of their outside probability. Provided that the heuristic never underestimates the true outside probability (i.e. it is admissible) the solution is guaranteed to be exact. Heuristics are model specific and we consider several variants in our experiments based on the CFG heuristics developed by Klein and Manning (2003) and Pauls and Klein (2009a). 3 Adaptive Supertagging Experiments Parser. For our experiments we used the generative CCG parser of Hockenmaier and Steedman (2002). Generative parsers have the property that all edge weights are non-negative, which is required for A* techniques.1 Although not quite as accurate as the discriminative parser of Clark and Curran (2007) in our preliminary experiments, this parser is still quite competitive. It is written in Java and implements the CKY algorithm with a global pruning threshold of 10−4 for the models we use. We focus on two parsing models: PCFG, the baseline of Hockenmaier and Steedman (2002) which treats the grammar as a PCFG (Table 1); and HWDep, a headword dependency model which is the best performing model of the parser. The PCFG model simply generates a tree top down and uses very simple structural probabilities while the HWDep model conditions node expansions on headwords and their lexical categories. Supertagger. For supertagging we used Dennis Mehay’s implementation, which follows Clark 1Indeed, all of the past work on A* parsing that we are aware of uses generative parsers (Pauls and Klein, 2009b, inter alia). (2002).2 Due to differences in smoothing of the supertagging and parsing models, we occasionally drop supertags returned by the supertagger because they do not appear in the parsing model 3. Evaluation. All experiments were conducted on CCGBank (Hockenmaier and Steedman, 2007), a right-most normal-form CCG version of the Penn Treebank. Models were trained on sections 2-21, tuned on section 00, and tested on section 23. Parsing accuracy is measured using labelled and unlabelled predicate argument structure recovery (Clark and Hockenmaier, 2002); we evaluate on all sentences and thus penalise lower coverage. All timing experiments reported in the paper were run on a 2.5 GHz Xeon machine with 32 GB memory and are averaged over ten runs4. 
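For reference, the significance test used throughout (approximate randomization with 10,000 iterations) can be sketched as follows. This is a generic stratified-shuffling version over per-sentence (correct, total) token counts; the exact aggregation in the evaluation scripts may differ, so treat it as illustrative.

```python
import random

def approximate_randomization(sys_a, sys_b, iterations=10000, seed=0):
    """Approximate randomization test (Noreen, 1989; Yeh, 2000).

    sys_a, sys_b: parallel lists with one (correct, total) token count per
    test sentence for the two systems being compared. Returns an estimated
    p-value for the observed accuracy difference.
    """
    rng = random.Random(seed)

    def accuracy(scores):
        correct = sum(c for c, _ in scores)
        total = sum(t for _, t in scores)
        return correct / total if total else 0.0

    observed = abs(accuracy(sys_a) - accuracy(sys_b))
    at_least_as_extreme = 0
    for _ in range(iterations):
        shuf_a, shuf_b = [], []
        for a, b in zip(sys_a, sys_b):
            if rng.random() < 0.5:   # swap the systems' outputs for this sentence
                a, b = b, a
            shuf_a.append(a)
            shuf_b.append(b)
        if abs(accuracy(shuf_a) - accuracy(shuf_b)) >= observed:
            at_least_as_extreme += 1
    # Add-one smoothing, as is conventional for randomization tests.
    return (at_least_as_extreme + 1) / (iterations + 1)
```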
3.1 Results Supertagging has been shown to improve the speed of a generative parser, although little analysis has been reported beyond the speedups (Clark, 2002) We ran experiments to understand the time/accuracy tradeoff of adaptive supertagging, and to serve as baselines. Adaptive supertagging is parametrized by a beam size β and a dictionary cutoff k that bounds the number of lexical categories considered for each word (Clark and Curran, 2007). Table 3 shows both the standard beam levels (AST) used for the C&C parser and looser beam levels: AST-covA, a simple extension of AST with increased coverage and AST-covB, also increasing coverage but with better performance for the HWDep model. Parsing results for the AST settings (Tables 4 and 5) confirm that it improves speed by an order of magnitude over a baseline parser without AST. Perhaps surprisingly, the number of parse failures decreases with AST in some cases. This is because the parser prunes more aggressively as the search space increases.5 2http://code.google.com/p/statopenccg 3Less than 2% of supertags are affected by this. 4The timing results reported differ from an earlier draft since we used a different machine 5Hockenmaier and Steedman (2002) saw a similar effect. 1579 Expansion probability p(exp|P) exp ∈{leaf, unary, left-head, right-head} Head probability p(H|P, exp) H is the head daughter Non-head probability p(S|P, exp, H) S is the non-head daughter Lexical probability p(w|P) Table 1: Factorisation of the PCFG model. H,P, and S are categories, and w is a word. Expansion probability p(exp|P, cP #wP ) exp ∈{leaf, unary, left-head, right-head} Head probability p(H|P, exp, cP #wP ) H is the head daughter Non-head probability p(S|P, exp, H#cP #wP ) S is the non-head daughter Lexcat probability p(cS|S#P, H, S) p(cTOP |P=TOP) Headword probability p(wS|cS#P, H, S, wP ) p(wTOP |cTOP ) Table 2: Headword dependency model factorisation, backoff levels are denoted by ’#’ between conditioning variables: A # B # C indicates that ˆP(. . . |A, B, C) is interpolated with ˆP(. . . |A, B), which is an interpolation of ˆP . . . |A, B) and ˆP(. . . |A). Variables cP and wP represent, respectively, the head lexical category and headword of category P. Condition Parameter Iteration 1 2 3 4 5 6 AST β (beam width) 0.075 0.03 0.01 0.005 0.001 k (dictionary cutoff) 20 20 20 20 150 AST-covA β 0.075 0.03 0.01 0.005 0.001 0.0001 k 20 20 20 20 150 150 AST-covB β 0.03 0.01 0.005 0.001 0.0001 0.0001 k 20 20 20 20 20 150 Table 3: Beam step function used for standard (AST) and high-coverage (AST-covA and AST-covB) supertagging. Time(sec) Sent/sec Cats/word Fail UP UR UF LP LR LF PCFG 290 6.6 26.2 5 86.4 86.5 86.5 77.2 77.3 77.3 PCFG (AST) 65 29.5 1.5 14 87.4 85.9 86.6 79.5 78.0 78.8 PCFG (AST-covA) 67 28.6 1.5 6 87.3 86.9 87.1 79.1 78.8 78.9 PCFG (AST-covB) 69 27.7 1.7 5 87.3 86.2 86.7 79.1 78.1 78.6 HWDep 1512 1.3 26.2 5 90.2 90.1 90.2 83.2 83.0 83.1 HWDep (AST) 133 14.4 1.5 16 89.8 88.0 88.9 82.6 80.9 81.8 HWDep (AST-covA) 139 13.7 1.5 9 89.8 88.3 89.0 82.6 81.1 81.9 HWDep (AST-covB) 155 12.3 1.7 7 90.1 88.7 89.4 83.0 81.8 82.4 Table 4: Results on CCGbank section 00 when applying adaptive supertagging (AST) to two models of a generative CCG parser. Performance is measured in terms of parse failures, labelled and unlabelled precision (LP/UP), recall (LR/UR) and F-score (LF/UF). Evaluation is based only on sentences for which each parser returned an analysis. 3.2 Efficiency versus Accuracy The most interesting result is the effect of the speedup on accuracy. 
As shown in Table 6, the vast majority of sentences are actually parsed with a very tight supertagger beam, raising the question of whether many higher-scoring parses are pruned.6 Despite this, labeled F-score improves by up to 1.6 F-measure for the PCFG model, although it harms accuracy for HWDep as expected.
In order to understand this effect, we filtered section 00 to include only sentences of between 18 and 26 words (resulting in 610 sentences) for which we can perform exhaustive search without pruning,7 and for which we could parse without failure at all of the tested beam settings. We then measured the log probability of the highest probability parse found under a variety of beam settings, relative to the log probability of the unpruned exact parse, along with the labeled F-score of the Viterbi parse under these settings (Figure 2). The results show that PCFG actually finds worse results as it considers more of the search space. In other words, the supertagger can actually “fix” a bad parsing model by restricting it to a small portion of the search space. With the more accurate HWDep model, this does not appear to be a problem and there is a clear opportunity for improvement by considering the larger search space. The next question is whether we can exploit this larger search space without paying as high a cost in efficiency.
6Similar results are reported by Clark and Curran (2007).
7The fact that only a subset of short sentences could be exhaustively parsed demonstrates the need for efficient search algorithms.

                  Time(sec)  Sent/sec  Cats/word  Fail   UP    UR    UF    LP    LR    LF
PCFG                  326       7.4       25.7     29   85.9  85.4  85.7  76.6  76.2  76.4
PCFG (AST)             82      29.4        1.5     34   86.7  84.8  85.7  78.6  76.9  77.7
PCFG (AST-covA)        85      28.3        1.6     15   86.6  85.5  86.0  78.5  77.5  78.0
PCFG (AST-covB)        86      27.9        1.7     14   86.6  85.6  86.1  78.1  77.3  77.7
HWDep                1754       1.4       25.7     30   90.2  89.3  89.8  83.5  82.7  83.1
HWDep (AST)           167      14.4        1.5     27   89.5  87.6  88.5  82.3  80.6  81.5
HWDep (AST-covA)      177      13.6        1.6     14   89.4  88.1  88.8  82.2  81.1  81.7
HWDep (AST-covB)      188      12.8        1.7     14   89.7  88.5  89.1  82.5  81.4  82.0
Table 5: Results on CCGbank section 23 when applying adaptive supertagging (AST) to two models of a CCG parser.

β               Cats/word   Parses     %
0.075              1.33       1676   87.6
0.03               1.56        114    6.0
0.01               1.97         60    3.1
0.005              2.36         15    0.8
0.001 (k=150)      3.84         32    1.7
Fail                —           16    0.9
Table 6: Breakdown of the number of sentences parsed for the HWDep (AST) model (see Table 4) at each of the supertagger beam levels from the most to the least restrictive setting.
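To make the adaptive supertagging loop behind Tables 3 and 6 concrete, here is a minimal illustrative sketch; it is not the C&C or StatOpenCCG implementation, and supertag_categories and parse_with_categories are assumed placeholders for a real supertagger and chart parser.

# Sketch of adaptive supertagging: start with the tightest supertagger beam
# and loosen it only when the parser fails to find a spanning analysis.
def adaptive_supertag_parse(sentence, supertag_categories, parse_with_categories,
                            schedule=((0.075, 20), (0.03, 20), (0.01, 20),
                                      (0.005, 20), (0.001, 150))):
    # schedule lists (beta, k) pairs from most to least restrictive,
    # mirroring the AST condition of Table 3.
    for beta, k in schedule:
        # Keep, per word, categories within a factor beta of the best one,
        # limited to at most k categories by the dictionary cutoff.
        categories = supertag_categories(sentence, beam=beta, dict_cutoff=k)
        parse = parse_with_categories(sentence, categories)
        if parse is not None:        # spanning analysis found: stop loosening
            return parse, beta
    return None, None                # failure at every beam level (the Fail row of Table 6)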
[Figure 2 appears here: labelled F-score (79 to 88) and % of optimal log probability (95 to 100) for the PCFG and HWDep models, plotted against the supertagger beam from β = 0.075 down to exact search.]
Figure 2: Log-probability of parses relative to exact solution vs. labelled F-score at each supertagging beam-level.

4 A* Parsing Experiments

To compare approaches, we extended our baseline parser to support A* search. Following Klein and Manning (2003), we restrict our experiments to sentences on which we can perform exact search, using the same subset of section 00 as in §3.2. Before considering CPU time, we first evaluate the amount of work done by the parser using three hardware-independent metrics. We measure the number of edges pushed (Pauls and Klein, 2009a) and edges popped, corresponding to the insert/decrease-key operations and remove operation of the priority queue, respectively. Finally, we measure the number of traversals, which counts the number of edge weights computed, regardless of whether the weight is discarded due to the prior existence of a better weight. This latter metric seems to be the most accurate account of the work done by the parser.
Due to differences in the PCFG and HWDep models, we considered different A* variants: for the PCFG model we use a simple A* with a precomputed heuristic, while for the more complex HWDep model, we used a hierarchical A* algorithm (Pauls and Klein, 2009a; Felzenszwalb and McAllester, 2007) based on a simple grammar projection that we designed.

4.1 Hardware-Independent Results: PCFG

For the PCFG model, we compared three agenda-based parsers: EXH prioritizes edges by their span length, thereby simulating the exhaustive CKY algorithm; NULL prioritizes edges by their inside probability; and SX is an A* parser that prioritizes edges by their inside probability times an admissible outside probability estimate.8 We use the SX estimate devised by Klein and Manning (2003) for CFG parsing, where they found it offered very good performance for relatively little computation. It gives a bound on the outside probability of a nonterminal P with i words to the right and j words to the left, and can be computed from a grammar using a simple dynamic program. The parsers are tested with and without adaptive supertagging, where the former can be seen as performing exact search (via A*) over the pruned search space created by AST.
8The NULL parser is a special case of A*, also called uniform cost search, which in the case of parsing corresponds to Knuth's algorithm (Knuth, 1977; Klein and Manning, 2001), the extension of Dijkstra's algorithm to hypergraphs.
Table 7 shows that A* with the SX heuristic decreases the number of edges pushed by up to 39% on the unpruned search space. Although encouraging, this is not as impressive as the 95% speedup obtained by Klein and Manning (2003) with this heuristic on their CFG. On the other hand, the NULL heuristic works better for CCG than for CFG, with speedups of 29% and 11%, respectively. These results carry over to the AST setting, which shows that A* can improve search even on the highly pruned search graph. Note that A* only saves work in the final iteration of AST, since for earlier iterations it must process the entire agenda to determine that there is no spanning analysis.
Since there are many more categories in the CCG grammar we might have expected the SX heuristic to work better than for a CFG. Why doesn't it? We can measure how well a heuristic bounds the true cost in terms of slack: the difference between the true and estimated outside cost.
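As a rough illustration of how these metrics relate to the agenda, the sketch below shows a generic agenda-driven loop in which EXH, NULL and SX would differ only in the priority function (span length, inside cost, or inside cost plus outside estimate, with costs taken as negative log probabilities so that a min-heap pops the best item first). The Edge objects and the combine_with helper are assumed for the example, and real CCG chart details are omitted.

import heapq, itertools

# Sketch of an agenda-based parser instrumented with the three metrics:
# edges pushed, edges popped, and traversals (edge weights computed).
def agenda_parse(initial_edges, combine_with, priority, is_goal):
    agenda, chart, tie = [], {}, itertools.count()
    pushed = popped = traversals = 0
    for edge in initial_edges:
        heapq.heappush(agenda, (priority(edge), next(tie), edge)); pushed += 1
    while agenda:
        _, _, edge = heapq.heappop(agenda); popped += 1
        if edge.signature in chart:      # a better-weighted copy was already popped
            continue
        chart[edge.signature] = edge
        if is_goal(edge):
            return edge, {"pushed": pushed, "popped": popped, "traversals": traversals}
        for new_edge in combine_with(edge, chart.values()):
            traversals += 1              # every weight computed counts as a traversal
            heapq.heappush(agenda, (priority(new_edge), next(tie), new_edge)); pushed += 1
    return None, {"pushed": pushed, "popped": popped, "traversals": traversals}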
[Figure 3 appears here: average slack of the SX heuristic (y-axis 0 to 0.8) plotted against the size of the outside span in words (x-axis 1 to 25).]
Figure 3: Average slack of the SX heuristic. The figure aggregates the ratio of the difference between the estimated outside cost and true outside costs relative to the true cost across the development set.

Lower slack means that the heuristic bounds the true cost better and guides us to the exact solution more quickly. Figure 3 plots the average slack for the SX heuristic against the number of words in the outside context. Comparing this with an analysis of the same heuristic when applied to a CFG by Klein and Manning (2003), we find that it is less effective in our setting.9 There is a steep increase in slack for outside contexts with size more than one. The main reason for this is that a single word in the outside context is in many cases the full stop at the end of the sentence, which is very predictable. However, for longer spans the flexibility of CCG to analyze spans in many different ways means that the outside estimate for a nonterminal can be based on many high probability outside derivations which do not bound the true probability very well.
9Specifically, we refer to Figure 9 of their paper, which uses a slightly different representation of estimate sharpness.

            Edges pushed               Edges popped               Traversals
          Std     %    AST     %     Std     %    AST     %     Std     %    AST     %
EXH        34    100   6.3   100    15.7   100   4.2   100   133.4   100   13.3   100
NULL     24.3     71   4.9    78    13.5    86   3.5    83   113.8    85   11.1    83
SX       20.9     61   4.3    68    10.0    64   2.6    62    96.5    72    9.7    73
Table 7: Exhaustive search (EXH), A* with no heuristic (NULL) and with the SX heuristic in terms of millions of edges pushed, popped and traversals computed using the PCFG grammar with and without adaptive supertagging.

4.2 Hardware-Independent Results: HWDep

Lexicalization in the HWDep model makes the precomputed SX estimate impractical, so for this model we designed two hierarchical A* (HA*) variants based on simple grammar projections of the model. The basic idea of HA* is to compute Viterbi inside probabilities using the easier-to-parse projected grammar, use these to compute Viterbi outside probabilities for the simple grammar, and then use these as outside estimates for the true grammar; all computations are prioritized in a single agenda following the algorithm of Felzenszwalb and McAllester (2007) and Pauls and Klein (2009a). We designed two simple grammar projections, each simplifying the HWDep model: PCFGProj completely removes lexicalization and projects the grammar to a PCFG, while LexcatProj removes only the headwords but retains the lexical categories.
Figure 4 compares exhaustive search, A* with no heuristic (NULL), and HA*. For HA*, parsing effort is broken down into the different edge types computed at each stage: we distinguish between the work carried out to compute the inside and outside edges of the projection, where the latter represent the heuristic estimates, and finally, the work to compute the edges of the target grammar. We find that A* NULL saves about 44% of edges pushed, which makes it slightly more effective than for the PCFG model. However, the effort to compute the grammar projections outweighs their benefit. We suspect that this is due to the large difference between the target grammar and the projection: the PCFG projection is a simple grammar and so we improve the probability of a traversal less often than in the target grammar. The Lexcat projection performs worst, for two reasons.
First, the projection requires about as much work to compute as the target grammar without a heuristic (NULL). Second, the projection itself does not save a large amount of work, as can be seen in the statistics for the target grammar.

5 CPU Timing Experiments

Hardware-independent metrics are useful for understanding agenda-based algorithms, but what we actually care about is CPU time. We were not aware of any past work that measures A* parsers in terms of CPU time, but as this is the real objective we feel that experiments of this type are important. This is especially true in real implementations because the savings in edges processed by an agenda parser come at a cost: operations on the priority queue data structure can add significant runtime.
Timing experiments of this type are very implementation-dependent, so we took care to implement the algorithms as cleanly as possible and to reuse as much of the existing parser code as we could. An important implementation decision for agenda-based algorithms is the data structure used to implement the priority queue. Preliminary experiments showed that a Fibonacci heap implementation outperformed several alternatives: Brodal queues (Brodal, 1996), binary heaps, binomial heaps, and pairing heaps.10
10We used the Fibonacci heap implementation at http://www.jgrapht.org
We carried out timing experiments on the best A* parsers for each model (SX and NULL for PCFG and HWDep, respectively), comparing them with our CKY implementation and the agenda-based CKY simulation EXH; we used the same data as in §3.2. Table 8 presents the cumulative running times with and without adaptive supertagging, averaged over ten runs, while Table 9 reports F-scores.

Figure 4: Comparison between a CKY simulation (EXH), A* with no heuristic (NULL), and hierarchical A* (HA*) using two grammar projections for standard search (left) and AST (right). The breakdown of the inside/outside edges for the grammar projection as well as the target grammar shows that the projections, serving as the heuristic estimates for the target grammar, are costly to compute.

            Standard            AST
            PCFG    HWDep       PCFG   HWDep
CKY          536    24489         34     143
EXH         1251    26889         41     155
A* NULL     1032    21830         36     121
A* SX        889      —           34      —
Table 8: Parsing time in seconds of CKY and agenda-based parsers with and without adaptive supertagging.

            Standard            AST
            PCFG    HWDep       PCFG   HWDep
CKY         80.4    85.5        81.7    83.8
EXH         79.4    85.5        80.3    83.8
A* NULL     79.6    85.5        80.7    83.8
A* SX       79.4     —          80.4     —
Table 9: Labelled F-score of exact CKY and agenda-based parsers with/without adaptive supertagging. All parses have the same probabilities, thus variances are due to implementation-dependent differences in tie-breaking.

The results (Table 8) are striking. Although the timing results of the agenda-based parsers track the hardware-independent metrics, they start at a significant disadvantage to exhaustive CKY with a simple control loop. This is most evident when looking at the timing results for EXH, which in the case of the full PCFG model requires more than twice the time of the CKY algorithm that it simulates. A* makes modest CPU-time improvements in parsing the full space of the HWDep model. Although this decreases the time required to obtain the highest accuracy, it is still a substantial tradeoff in speed compared with AST. On the other hand, the AST tradeoff improves significantly: by combining AST with A* we observe a decrease in running time of 15% for the A* NULL parser of the HWDep model over CKY.
As the CKY baseline with AST is very strong, this result shows that A* holds real promise for CCG parsing. 6 Conclusion and Future Work Adaptive supertagging is a strong technique for efficient CCG parsing. Our analysis confirms tremendous speedups, and shows that for weak models, it can even result in improved accuracy. However, for better models, the efficiency gains of adaptive supertagging come at the cost of accuracy. One way to look at this is that the supertagger has good precision with respect to the parser’s search space, but low recall. For instance, we might combine both parsing and supertagging models in a principled way to exploit these observations, eg. by making the supertagger output a soft constraint on the parser rather than a hard constraint. Principled, efficient search algorithms will be crucial to such an approach. To our knowledge, we are the first to measure A* parsing speed both in terms of running time and commonly used hardware-independent metrics. It is clear from our results that the gains from A* do not come as easily for CCG as for CFG, and that agenda-based algorithms like A* must make very large reductions in the number of edges processed to result in realtime savings, due to the added expense of keeping a priority queue. However, we 1584 have shown that A* can yield real improvements even over the highly optimized technique of adaptive supertagging: in this pruned search space, a 44% reduction in the number of edges pushed results in a 15% speedup in CPU time. Furthermore, just as A* can be combined with adaptive supertagging, it should also combine easily with other search-space pruning methods, such as those of Djordjevic et al. (2007), Kummerfeld et al. (2010), Zhang et al. (2010) and Roark and Hollingshead (2009). In future work we plan to examine better A* heuristics for CCG, and to look at principled approaches to combine the strengths of A*, adaptive supertagging, and other techniques to the best advantage. Acknowledgements We would like to thank Prachya Boonkwan, Juri Ganitkevitch, Philipp Koehn, Tom Kwiatkowski, Matt Post, Mark Steedman, Emily Thomforde, and Luke Zettlemoyer for helpful discussion related to this work and comments on previous drafts; Julia Hockenmaier for furnishing us with her parser; and the anonymous reviewers for helpful commentary. We also acknowledge funding from EPSRC grant EP/P504171/1 (Auli); the EuroMatrixPlus project funded by the European Commission, 7th Framework Programme (Lopez); and the resources provided by the Edinburgh Compute and Data Facility (http://www.ecdf.ed.ac.uk/). The ECDF is partially supported by the eDIKT initiative (http://www.edikt.org.uk/). References S. Bangalore and A. K. Joshi. 1999. Supertagging: An Approach to Almost Parsing. Computational Linguistics, 25(2):238–265, June. G. S. Brodal. 1996. Worst-case efficient priority queues. In Proc. of SODA, pages 52–58. S. Clark and J. R. Curran. 2004. The importance of supertagging for wide-coverage CCG parsing. In Proc. of COLING. S. Clark and J. R. Curran. 2007. Wide-coverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics, 33(4):493–552. S. Clark and J. Hockenmaier. 2002. Evaluating a widecoverage CCG parser. In Proc. of LREC Beyond Parseval Workshop, pages 60–66. S. Clark. 2002. Supertagging for Combinatory Categorial Grammar. In Proc. of TAG+6, pages 19–24. B. Djordjevic, J. R. Curran, and S. Clark. 2007. Improving the efficiency of a wide-coverage CCG parser. In Proc. of IWPT. P. F. Felzenszwalb and D. 
McAllester. 2007. The Generalized A* Architecture. In Journal of Artificial Intelligence Research, volume 29, pages 153–190. T. A. D. Fowler and G. Penn. 2010. Accurate contextfree parsing with combinatory categorial grammar. In Proc. of ACL. P. Hart, N. Nilsson, and B. Raphael. 1968. A formal basis for the heuristic determination of minimum cost paths. Transactions on Systems Science and Cybernetics, 4, Jul. J. Hockenmaier and M. Steedman. 2002. Generative models for statistical parsing with Combinatory Categorial Grammar. In Proc. of ACL, pages 335–342. Association for Computational Linguistics. J. Hockenmaier and M. Steedman. 2007. CCGbank: A corpus of CCG derivations and dependency structures extracted from the Penn Treebank. Computational Linguistics, 33(3):355–396. D. Klein and C. D. Manning. 2001. Parsing and hypergraphs. In Proc. of IWPT. D. Klein and C. D. Manning. 2003. A* parsing: Fast exact Viterbi parse selection. In Proc. of HLT-NAACL, pages 119–126, May. D. E. Knuth. 1977. A generalization of Dijkstra’s algorithm. Information Processing Letters, 6:1–5. J. K. Kummerfeld, J. Roesner, T. Dawborn, J. Haggerty, J. R. Curran, and S. Clark. 2010. Faster parsing by supertagger adaptation. In Proc. of ACL. A. Pauls and D. Klein. 2009a. Hierarchical search for parsing. In Proc. of HLT-NAACL, pages 557–565, June. A. Pauls and D. Klein. 2009b. k-best A* Parsing. In Proc. of ACL-IJCNLP, ACL-IJCNLP ’09, pages 958– 966. B. Roark and K. Hollingshead. 2009. Linear complexity context-free parsing pipelines via chart constraints. In Proc. of HLT-NAACL. M. Steedman. 2000. The syntactic process. MIT Press, Cambridge, MA. Y. Zhang, B.-G. Ahn, S. Clark, C. V. Wyk, J. R. Curran, and L. Rimell. 2010. Chart pruning for fast lexicalised-grammar parsing. In Proc. of COLING. 1585
| 2011 | 158 |
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1586–1596, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Improving Arabic Dependency Parsing with Form-based and Functional Morphological Features Yuval Marton T.J. Watson Research Center IBM [email protected] Nizar Habash and Owen Rambow Center for Computational Learning Systems Columbia University {habash,rambow}@ccls.columbia.edu Abstract We explore the contribution of morphological features – both lexical and inflectional – to dependency parsing of Arabic, a morphologically rich language. Using controlled experiments, we find that definiteness, person, number, gender, and the undiacritzed lemma are most helpful for parsing on automatically tagged input. We further contrast the contribution of form-based and functional features, and show that functional gender and number (e.g., “broken plurals”) and the related rationality feature improve over form-based features. It is the first time functional morphological features are used for Arabic NLP. 1 Introduction Parsers need to learn the syntax of the modeled language in order to project structure on newly seen sentences. Parsing model design aims to come up with features that best help parsers to learn the syntax and choose among different parses. One aspect of syntax, which is often not explicitly modeled in parsing, involves morphological constraints on syntactic structure, such as agreement, which often plays an important role in morphologically rich languages. In this paper, we explore the role of morphological features in parsing Modern Standard Arabic (MSA). For MSA, the space of possible morphological features is fairly large. We determine which morphological features help and why. We also explore going beyond the easily detectable, regular form-based (“surface”) features, by representing functional values for some morphological features. We expect that representing lexical abstractions and inflectional features participating in agreement relations would help parsing quality, but other inflectional features would not help. We further expect functional features to be superior to surfaceonly features. The paper is structured as follows. We first present the corpus we use (Section 2), then relevant Arabic linguistic facts (Section 3); we survey related work (Section 4), describe our experiments (Section 5), and conclude with an analysis of parsing error types (Section 6). 2 Corpus We use the Columbia Arabic Treebank (CATiB) (Habash and Roth, 2009). Specifically, we use the portion converted automatically from part 3 of the Penn Arabic Treebank (PATB) (Maamouri et al., 2004) to the CATiB format, which enriches the CATiB dependency trees with full PATB morphological information. CATiB’s dependency representation is based on traditional Arabic grammar and emphasizes syntactic case relations. It has a reduced POS tagset (with six tags only – henceforth CATIB6), but a standard set of eight dependency relations: SBJ and OBJ for subject and (direct or indirect) object, respectively, (whether they appear preor post-verbally); IDF for the idafa (possessive) relation; MOD for most other modifications; and other less common relations that we will not discuss here. For more information, see Habash et al. (2009). The CATiB treebank uses the word segmentation of the PATB: it splits off several categories of orthographic clitics, but not the definite article È@ Al. 
In all of the experiments reported in this paper, we use the gold 1586 VRB ÉÒªK tςml ‘work’ MOD PRT ú ¯ fy ‘in’ OBJ NOM P@YÖÏ@ AlmdArs ‘the-schools’ MOD NOM éJ ÓñºmÌ'@ AlHkwmyℏ ‘the-governmental’ SBJ NOM H@YJ ®k HfydAt ‘granddaughters’ MOD NOM HAJ » YË@ AlðkyAt ‘smart’ IDF NOM I. KA¾Ë@ AlkAtb ‘the-writer’ Figure 1: CATiB Annotation example (tree display from right to left). éJ ÓñºmÌ'@ P@YÖÏ@ ú ¯ HAJ » YË@ I. KA¾Ë@ H@YJ ®k ÉÒªK tςml HfydAt AlkAtb AlðkyAt fy AlmdArs AlHkwmyℏ‘The writer’s smart granddaughters work for public schools.’ segmentation. An example CATiB dependency tree is shown in Figure 1. 3 Relevant Linguistic Concepts In this section, we present the linguistic concepts relevant to our discussion of Arabic parsing. Orthography The Arabic script uses optional diacritics to represent short vowels, consonantal doubling and the indefininteness morpheme (nunation). For example, the word I. J» kataba ‘he wrote’ is often written as I. J»ktb, which can be ambiguous with other words such as I. J» kutub˜u ‘books’. In news text, only around 1.6% of all words have any diacritic (Habash, 2010). As expected, the lack of diacritics contributes heavily to Arabic’s morphological ambiguity. In this work, we only use undiacritized text; however, some of our parsing features which are derived through morphological disambiguation include diacritics (specifically, lemmas, see below). Morphemes Words can be described in terms of their morphemes; in Arabic, in addition to concatenative prefixes and suffixes, there are templatic morphemes called root and pattern. For example, the word àñJ.KA¾K yu+kAtib+uwn ‘they correspond’ has one prefix and one suffix, in addition to a stem composed of the root H. H¼ k-t-b ‘writing related’ and the pattern 1A2i3.1 Lexeme and features Alternatively, Arabic words can be described in terms of lexemes and inflectional features. The set of word forms that only vary inflectionally among each other is called the lexeme. A lemma is a specific word form chosen to represent the lexeme word set; for example, Arabic verb lemmas are third person masculine singular perfective. We explore using both the diacritized lemma and the undiacritized lemma (hereafter LMM). Just as the lemma abstracts over inflectional morphology, the root abstracts over both inflectional and derivational morphology and thus provides a deeper level of lexical abstraction, indicating the “core” meaning of the word. The pattern is a generally complementary abstraction sometimes indicating sematic notions such causation and reflexiveness. We use the pattern of the lemma, not of the word form. We group the ROOT, PATTERN, LEMMA and LMM in our discussion as lexical features. Nominal lexemes can also be classified into two groups: rational (i.e., human) or irrational (i.e.„ non-human).2 The rationality feature interacts with syntactic agreement and other inflectional features (discussed next); as such, we group it with those features in this paper’s experiments. The inflectional features define the the space of variations of the word forms associated with a lexeme. PATB-tokenized words vary along nine dimensions: GENDER and NUMBER (for nominals and verbs); PERSON, ASPECT, VOICE and MOOD (for verbs); and CASE, STATE, and the attached definite article proclitic DET (for nominals). Inflectional features abstract away from the specifics of morpheme forms. Some inflectional features affect more than one morpheme in the same word. 
For example, changing the value of the ASPECT feature in the example above from imperfective to perfective yields the word form @ñJ.KA¿ kAtab+uwA ‘they corresponded’, which differs in terms of prefix, suffix and pattern.
1The digits in the pattern correspond to the positions at which the root radicals are inserted.
2Note that rationality (‘human-ness’ ‘ɯA« Q «/ɯA«’) is narrower than animacy; its expression is widespread in Arabic, but less so in English, where it mainly shows in pronouns (he/she vs. it) and relativizers (the student who... vs. the desk/bird which...).
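As a purely illustrative summary of the feature space just described, a single PATB-tokenized word can be pictured as a bundle of lexical and inflectional attributes; every value below is invented for the example rather than produced by a real analyzer.

# Invented illustration of the attribute bundle discussed in Section 3:
# lexical abstractions plus the nine inflectional dimensions and rationality.
word_analysis = {
    "form": "HfydAt",                 # undiacritized surface form, 'granddaughters'
    "lemma": "Hafiyd",                # diacritized lemma (LMM would be its undiacritized form)
    "root": "H-f-d",                  # hypothetical root
    "pattern": "1a2i3",               # hypothetical pattern of the lemma
    "gender": "f", "number": "p", "person": "3",
    "aspect": "na", "voice": "na", "mood": "na",   # verb-only features, N/A for a noun
    "det": "no", "case": "n", "state": "c",        # nominal features (values invented)
    "rationality": "rational",
}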
@ Âzraq+φ ‘blue’ is ZA¯P P zarqA’+φ not é¯P P
@* *Âzraq+aℏ. To address this inconsistency in the correspondence between inflectional features and morphemes, and inspired by (Smrž, 2007), we distinguish between two types of inflectional features: surface (or form-based)4 features and functional features. Most available Arabic NLP tools and resources model morphology using surface inflectional features and do not mark rationality; this includes the PATB (Maamouri et al., 2004), the Buckwalter morphological analyzer (BAMA) (Buckwalter, 2004) and tools using them such as the Morphological Analysis and Disambiguation for Arabic (MADA) system (Habash and Rambow, 2005). The ElixirFM analyzer (Smrž, 2007) readily provides the functional inflectional number feature, but not full functional gender (only for adjectives and verbs but not for nouns), nor rationality. Most recently, Alkuhlani and Habash (2011) present a version of the PATB (part 3) that is annotated for functional gender, num3We ignore duals, which are regular in Arabic, and case/state variations in this discussion for simplicity. 4Smrž (2007) uses the term illusory for surface features. ber and rationality features for Arabic. We use this resource in modeling these features in Section 5.5. Morpho-syntactic interactions Inflectional features and rationality interact with syntax in two ways. In agreement relations, two words in a specific syntactic configuration have coordinated values for specific sets of features. MSA has standard (i.e., matching value) agreement for subject-verb pairs on PERSON, GENDER, and NUMBER, and for nounadjective pairs on NUMBER, GENDER, CASE, and DET. There are three very common cases of exceptional agreement: verbs preceding subjects are always singular, adjectives of irrational plural nouns are always feminine singular, and verbs whose subjects are irrational plural are also always feminine singular. See the example in Figure 1: the adjective, HAJ » YË@ AlðkyAt ‘smart’, of the feminine plural (and rational) H@YJ ®k HafiydAt ‘granddaughters’ is feminine plural; but the adjective, éJ ÓñºmÌ'@ AlHkwmyℏ ‘the-governmental’, of the feminine plural (and irrational) P@YÓ madAris ‘schools’ is feminine singular. These agreement rules always refer to functional morphology categories; they are orthogonal to the morpheme-feature inconsistency discussed above. MSA exhibits marking relations in CASE and STATE marking. Different types of dependents have different CASE, e.g., verbal subjects are always marked NOMINATIVE. CASE and STATE are rarely explicitly manifested in undiacritized MSA. The DET feature plays an important role in distinguishing between the N-N idafa (possessive) construction, in which only the last noun may bear the definite article, and the N-A modifier construction, in which both elements generally exhibit agreement in definiteness. Lexical features do not constrain syntactic structure as inflectional features do. Instead, bilexical dependencies are used to model semantic relations which often are the only way to disambiguate among different possible syntactic structures. Lexical abstraction also reduces data sparseness. The core POS tagsets Words also have associated part-of-speech (POS) tags, e.g., “verb”, which further abstract over morphologically and syntactically similar lexemes. Traditional Arabic grammars often describe a very general three-way distinction into verbs, nominals and particles. 
In com1588 parison, the tagset of the Buckwalter Morphological Analyzer (Buckwalter, 2004) used in the PATB has a core POS set of 44 tags (before morphological extension). Cross-linguistically, a core set containing around 12 tags is often assumed, including: noun, proper noun, verb, adjective, adverb, preposition, particles, connectives, and punctuation. Henceforth, we reduce CORE44 to such a tagset, and dub it CORE12. The CATIB6 tagset can be viewed as a further reduction, with the exception that CATIB6 contains a passive voice tag; however, this tag constitutes only 0.5% of the tags in the training. Extended POS tagsets The notion of “POS tagset” in natural language processing usually does not refer to a core set. Instead, the Penn English Treebank (PTB) uses a set of 46 tags, including not only the core POS, but also the complete set of morphological features (this tagset is still fairly small since English is morphologically impoverished). In PATB-tokenized MSA, the corresponding type of tagset (core POS extended with a complete description of morphology) would contain upwards of 2,000 tags, many of which are extremely rare (in our training corpus of about 300,000 words, we encounter only 430 of such POS tags with complete morphology). Therefore, researchers have proposed tagsets for MSA whose size is similar to that of the English PTB tagset, as this has proven to be a useful size computationally. These tagsets are hybrids in the sense that they are neither simply the core POS, nor the complete morphologically enriched tagset, but instead they selectively enrich the core POS tagset with only certain morphological features. A full dicussion of how these tagsets affect parsing is presented in Marton et al. (2010); we summarize the main points here. The following are the various tagsets we use in this paper: (a) the core POS tagset CORE12; (b) the CATiB treebank tagset CATIBEX, a newly introduced extension of CATIB6 (Habash and Roth, 2009) by simple regular expressions of the word form, indicating particular morphemes such as the prefix È@ Al+ or the suffix àð +wn; this tagset is the best-performing tagset for Arabic on predicted values. (c) the PATB full tagset (BW), size ≈2000+ (Buckwalter, 2004); We only discuss here the best performing tagsets (on predicted values), and BW for comparison. 4 Related Work Much work has been done on the use of morphological features for parsing of morphologically rich languages. Collins et al. (1999) report that an optimal tagset for parsing Czech consists of a basic POS tag plus a CASE feature (when applicable). This tagset (size 58) outperforms the basic Czech POS tagset (size 13) and the complete tagset (size ≈3000+). They also report that the use of gender, number and person features did not yield any improvements. We got similar results for CASE in the gold experimental setting (Marton et al., 2010) but not when using predicted POS tags (POS tagger output). This may be a result of CASE tagging having a lower error rate in Czech (5.0%) (Hajiˇc and Vidová-Hladká, 1998) compared to Arabic (≈14.0%, see Table 2). Similarly, Cowan and Collins (2005) report that the use of a subset of Spanish morphological features (number for adjectives, determiners, nouns, pronouns, and verbs; and mode for verbs) outperforms other combinations. Our approach is comparable to their work in terms of its systematic exploration of the space of morphological features. We also find that the number feature helps for Arabic. 
Looking at Hebrew, a Semitic language related to Arabic, Tsarfaty and Sima’an (2007) report that extending POS and phrase structure tags with definiteness information helps unlexicalized PCFG parsing. As for work on Arabic, results have been reported on PATB (Kulick et al., 2006; Diab, 2007; Green and Manning, 2010), the Prague Dependency Treebank (PADT) (Buchholz and Marsi, 2006; Nivre, 2008) and the Columbia Arabic Treebank (CATiB) (Habash and Roth, 2009). Recently, Green and Manning (2010) analyzed the PATB for annotation consistency, and introduced an enhanced split-state constituency grammar, including labels for short Idafa constructions and verbal or equational clauses. Nivre (2008) reports experiments on Arabic parsing using his MaltParser (Nivre et al., 2007), trained on the PADT. His results are not directly comparable to ours because of the different treebanks’ representations, even though all the experiments reported here were performed using MaltParser. Our results agree with previous work on Arabic and Hebrew in that marking the definite article is helpful for parsing. However, we go beyond previous work in that 1589 we also extend this morphologically enhanced feature set to include additional lexical and inflectional features. Previous work with MaltParser in Russian, Turkish and Hindi showed gains with case but not with agreement features (Nivre et al., 2008; Eryigit et al., 2008; Nivre, 2009). Our work is the first using MaltParser to show gains using agreement-oriented features (Marton et al., 2010), and the first to use functional features for this task (this paper). 5 Experiments Throughout this section, we only report results using predicted input feature values (e.g., generated automatically by a POS tagger). After presenting the parser we use (Section 5.1), we examine a large space of settings in the following order: the contribution of numerous inflectional features in a controlled fashion (Section 5.2);5 the contribution of the lexical features in a similar fashion, as well as the combination of lexical and inflectional features (Section 5.3); an extension of the DET feature (Section 5.4); using functional NUMBER and GENDER feature values, as well as the RATIONALITY feature (Section 5.5); finally, putting best feature combinations to test with the best-performing POS tagset, and on an unseen test set (Section 5.6). All results are reported mainly in terms of labeled attachment accuracy score (parent word and the dependency relation to it, a.k.a. LAS). Unlabeled attachment accuracy score (UAS) is also given. We use McNemar’s statistical significance test as implemented by Nilsson and Nivre (2008), and denote p < 0.05 and p < 0.01 with +and ++, respectively. 5.1 Parser For all experiments reported here we used the syntactic dependency parser MaltParser v1.3 (Nivre, 2003; Nivre, 2008; Kübler et al., 2009) – a transition-based parser with an input buffer and a stack, using SVM classifiers to predict the next state in the parse derivation. All experiments were done using the Nivre "eager" algorithm.6 For training, de5In this paper, we do not examine the contribution of different POS tagsets, see Marton et al. (2010) for details. 6Nivre (2008) reports that non-projective and pseudoprojective algorithms outperform the "eager" projective algorithm in MaltParser, but our training data did not contain any non-projective dependencies. The Nivre "standard" algorithm velopment and testing, we follow the splits used by Roth et al. (2008) for PATB part 3 (Maamouri et al., 2004). 
We kept the test unseen during training. There are five default attributes, in the MaltParser terminology, for each token in the text: word ID (ordinal position in the sentence), word form, POS tag, head (parent word ID), and deprel (the dependency relation between the current word and its parent). There are default MaltParser features (in the machine learning sense),7 which are the values of functions over these attributes, serving as input to the MaltParser internal classifiers. The most commonly used feature functions are the top of the input buffer (next word to process, denoted buf[0]), or top of the stack (denoted stk[0]); following items on buffer or stack are also accessible (buf[1], buf[2], stk[1], etc.). Hence MaltParser features are defined as POS tag at stk[0], word form at buf[0], etc. Kübler et al. (2009) describe a “typical” MaltParser model configuration of attributes and features.8 Starting with it, in a series of initial controlled experiments, we settled on using buf[0-1] + stk[0-1] for wordforms, and buf[0-3] + stk[0-2] for POS tags. For features of new MaltParser-attributes (discussed later), we used buf[0] + stk[0]. We did not change the features for deprel. This new MaltParser configuration resulted in gains of 0.3-1.1% in labeled attachment accuracy (depending on the POS tagset) over the default MaltParser configuration.9 All experiments reported below were conducted using this new configuration. 5.2 Inflectional features In order to explore the contribution of inflectional and lexical information in a controlled manner, we focused on the best performing core (“morphologyfree”) POS tagset, CORE12, as baseline; using three is also reported to do better on Arabic, but in a preliminary experimentation, it did similarly or slightly worse than the "eager” one, perhaps due to high percentage of right branching (left headed structures) in our Arabic training set – an observation already noted in Nivre (2008). 7The terms “feature” and “attribute” are overloaded in the literature. We use them in the linguistic sense, unless specifically noted otherwise, e.g., “MaltParser feature(s)”. 8It is slightly different from the default configuration. 9We also experimented with normalizing word forms (Alif Maqsura conversion to Ya, and hamza removal from Alif forms) as is common in parsing and statistical machine translation literature – but it resulted in a similar or slightly decreased performance, so we settled on using non-normalized word forms. 1590 setup LAS LASdiff UAS All CORE12 78.68 — 82.48 + all inflectional features 77.91 -0.77 82.14 Sep +DET 79.82++ 1.14 83.18 +STATE 79.34++ 0.66 82.85 +GENDER 78.75 0.07 82.35 +PERSON 78.74 0.06 82.45 +NUMBER 78.66 -0.02 82.39 +VOICE 78.64 -0.04 82.41 +ASPECT 78.60 -0.08 82.39 +MOOD 78.54 -0.14 82.35 +CASE 75.81 -2.87 80.24 Greedy +DET+STATE 79.42++ 0.74 82.84 +DET+GENDER 79.90++ 1.22 83.20 +DET+GENDER+PERSON 79.94++ 1.26 83.21 +DET+PNG 80.11++ 1.43 83.29 +DET+PNG+VOICE 79.96++ 1.28 83.18 +DET+PNG+ASPECT 80.01++ 1.33 83.20 +DET+PNG+MOOD 80.03++ 1.35 83.21 Table 1: CORE12 with inflectional features, predicted input. Top: Adding all nine features to CORE12. Second part: Adding each feature separately, comparing difference from CORE12. Third part: Greedily adding best features from second part. different setups, we added nine morphological features with values predicted by MADA: DET (presence of the definite determiner), PERSON, ASPECT, VOICE, MOOD, GENDER, NUMBER, STATE (morphological marking as head of an idafa construction), and CASE. 
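To make the buffer/stack feature functions concrete, the sketch below re-implements the configuration described above in plain Python; it is not MaltParser's actual feature-specification format. Here buffer and stack are lists of token indices with the front/top at index 0, and tokens maps an index to a dictionary of attributes.

# Sketch of the classifier input: word forms at buf[0-1] + stk[0-1],
# POS at buf[0-3] + stk[0-2], and each added morphological attribute at buf[0] + stk[0];
# extra_attrs may be any subset of the MADA features (four shown as an example).
def extract_features(buffer, stack, tokens, extra_attrs=("DET", "PERSON", "NUMBER", "GENDER")):
    def at(seq, i):
        return tokens[seq[i]] if i < len(seq) else None
    feats = {}
    for name, seq, n in (("buf", buffer, 2), ("stk", stack, 2)):
        for i in range(n):
            tok = at(seq, i)
            feats[f"form:{name}[{i}]"] = tok["form"] if tok else "NONE"
    for name, seq, n in (("buf", buffer, 4), ("stk", stack, 3)):
        for i in range(n):
            tok = at(seq, i)
            feats[f"pos:{name}[{i}]"] = tok["pos"] if tok else "NONE"
    for attr in extra_attrs:
        for name, seq in (("buf", buffer), ("stk", stack)):
            tok = at(seq, 0)
            feats[f"{attr}:{name}[0]"] = tok.get(attr, "NA") if tok else "NONE"
    return feats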
In setup All, we augmented the baseline model with all nine MADA features (as nine additional MaltParser attributes); in setup Sep, we augmented the baseline model with the MADA features, one at a time; and in setup Greedy, we combined them in a greedy heuristic (since the entire feature space is too vast to exhaust): starting with the most gainful feature from Sep, adding the next most gainful feature, keeping it if it helped, or discarding it otherwise, and continuing through the least gainful feature. See Table 1. Somewhat surprisingly, setup All hurts performance. This can be explained if one examines the prediction accuracy of each feature (top of Table 2). Features which are not predicted with very high accuracy, such as CASE (86.3%), can dominate the negative contribution, even though they are top contributors when provided as gold input (Marton et al., 2010); when all features are provided as gold input, All actually does better than individual features, which puts to rest a concern that its decrease here feature acc set size DET 99.6 3* PERSON 99.1 4* ASPECT 99.1 5* VOICE 98.9 4* MOOD 98.6 5* GENDER 99.3 3* NUMBER 99.5 4* STATE 95.6 4* CASE 86.3 5* ROOT 98.4 9646 PATTERN 97.0 338 LEMMA (diacritized) 96.7 16837 LMM (undiacritized lemma) 98.3 15305 normalized word form (A,Y) 99.3 29737 non-normalized word form 98.9 29980 Table 2: Feature prediction accuracy and set sizes. * = The set includes a "N/A" value. setup LAS LASdiff UAS All CORE12 (repeated) 78.68 — 82.48 + all lexical features 78.85 0.17 82.46 Sep +LMM 78.96+ 0.28 82.54 +ROOT 78.94+ 0.26 82.64 +LEMMA 78.80 0.12 82.42 +PATTERN 78.59 -0.09 82.39 Greedy +LMM+ROOT 79.04++ 0.36 82.63 +LMM+ROOT+LEMMA 79.05++ 0.37 82.63 +LMM+ROOT+PATTERN 78.93 0.25 82.58 Table 3: Lexical features. Top part: Adding each feature separately; difference from CORE12 (predicted). Bottom part: Greedily adding best features from previous part. is due to data sparseness. Here, when features are predicted, the DET feature (determiner), followed by the STATE (construct state, idafa) feature, are top individual contributors in setup Sep. Adding DET and the so-called φ-features (PERSON, NUMBER, GENDER, also shorthanded PNG) in the Greedy setup, yields 1.43% gain over the CORE12 baseline. 5.3 Lexical features Next, we experimented with adding the lexical features, which involve semantic abstraction to some degree: LEMMA, LMM (the undiacritized lemma), and ROOT. We experimented with the same setups as above: All, Sep, and Greedy. Adding all four features yielded a minor gain in setup All. LMM was the best single contributor, closely followed by ROOT in Sep. CORE12+LMM+ROOT (with or with1591 CORE12 + . . . LAS LASdiff UAS +DET+PNG (repeated) 80.11++ 1.43 83.29 +DET+PNG+LMM 80.23++ 1.55 83.34 +DET+PNG+LMM +ROOT 80.10++ 1.42 83.25 +DET+PNG+LMM +PATTERN 80.03++ 1.35 83.15 Table 4: Inflectional+lexical features together. CORE12 + . . . LAS LASdiff UAS +DET (repeated) 79.82++ — 83.18 +DET2 80.13++ 0.31 83.49 +DET+PNG+LMM (repeated) 80.23++ — 83.34 +DET2+PNG+LMM 80.21++ -0.02 83.39 Table 5: Extended inflectional features. out LEMMA) was the best greedy combination in setup Greedy. See Table 3. All lexical features are predicted with high accuracy (bottom of Table 2). Following the same greedy heuristic, we augmented the best inflection-based model CORE12+DET+PNG with lexical features, and found that only the undiacritized lemma (LMM) alone improved performance (80.23%). See Table 4. 
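The Sep/Greedy procedure can be paraphrased as a small greedy forward-selection loop. The sketch below is our reading of the heuristic rather than the authors' code, and evaluate_model stands in for training and scoring a parser (e.g., returning LAS) with a given feature set.

# Greedy heuristic: rank features by their individual (Sep) gains, then try
# adding them one at a time, keeping a feature only if it improves the score.
def greedy_feature_selection(base_features, candidates, evaluate_model):
    sep_scores = {f: evaluate_model(list(base_features) + [f]) for f in candidates}
    selected = list(base_features)
    best = evaluate_model(selected)
    for feat in sorted(candidates, key=sep_scores.get, reverse=True):
        score = evaluate_model(selected + [feat])
        if score > best:                 # keep the feature only if it helps
            selected.append(feat)
            best = score
    return selected, best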
5.4 Inflectional feature engineering So far we experimented with morphological feature values as predicted by MADA. However, it is likely that from a machine-learning perspective, representing similar categories with the same tag may be useful for learning. Therefore, we next experimented with modifying inflectional features that proved most useful. As DET may help distinguish the N-N idafa construction from the N-A modifier construction, we attempted modeling also the DET values of previous and next elements (as MaltParser’s stk[1] + buf[1], in addition to stk[0] + buf[0]). This variant, denoted DET2, indeed helps: when added to the CORE12, DET2 improves non-gold parsing quality by more than 0.3%, compared to DET (Table 5). This improvement unfortunately does not carry over to our best feature combination to date, CORE12+DET+PNG+LMM. However, in subsequent feature combinations, we see that DET2 helps again, or at least, doesn’t hurt: LAS goes up by 0.06% in conjunction with features LMM+PERSON +FN*NGR in Table 6. CORE12 + . . . LAS LASdiff UAS CORE12 (repeated) 78.68 – 82.48 +PERSON (repeated) 78.74 0.06 82.45 +GENDER (repeated) 78.75 0.07 82.35 +NUMBER (repeated) 78.66 -0.02 82.39 +FN*GENDER 78.96++ 0.28 82.53 +FN*NUMBER 78.88+ 0.20 82.53 +FN*NUMDGTBIN 78.87 0.19 82.53 +FN*RATIONALITY 78.91+ 0.23 82.60 +FN*GNR 79.32++ 0.64 82.78 +PERSON+FN*GNR 79.34++ 0.66 82.82 +DET+LMM+PERSON+FN*NGR 80.47++ 1.79 83.57 +DET2+LMM+PERSON+FN*NGR 80.53++ 1.85 83.66 +DET2+LMM+PERSON+FN*NG 80.43++ 1.75 83.56 +DET2+LMM+PNG+FN*NGR 80.51++ 1.83 83.66 CATIBEX 79.74 – 83.30 +DET2+LMM +PERSON+FN*NGR 80.83++ 1.09 84.02 BW 72.64 – 77.91 +DET2+LMM +PERSON+FN*NGR 74.40++ 1.76 79.40 Table 6: Functional features: gender, number, rationality. We also experimented with PERSON. We changed the values of proper names from “N/A” to “3” (third person), but it resulted in a similar or slightly decreased performance, so it was abandoned. 5.5 Functional feature values The NUMBER and GENDER features we have used so far only reflect surface (as opposed to functional) values, e.g., broken plurals are marked as singular. This might have a negative effect on learning generalizations over the complex agreement patterns in MSA (see Section 3), beyond memorization of word pairs seen together in training. Predicting functional features To predict functional GENDER, functional NUMBER and RATIONALITY, we build a simple maximum likelihood estimate (MLE) model using these annotations in the corpus created by Alkuhlani and Habash (2011). We train using the same training data we use throughout this paper. For all three features, we select the most seen value in training associated with the triple word-CATIBEX-lemma; we back off to CATIBEXlemma and then to lemma. For gender and number, we further back off to the surface values; for rationality, we back off to the most common value (irrational). On our predicted dev set, the overall accuracy baseline of predicting correct functional gender-number-rationality using surface features is 1592 85.1% (for all POS tags). Our MLE model reduces the error by two thirds reaching an overall accuracy of 95.5%. The high accuracy may be a result of the low percentage of words in the dev set that do not appear in training (around 4.6%). Digit tokens (e.g., “4”) are also marked singular by default. They don’t show surface agreement, even though the corresponding number-word token (éªK.P@ Arbςℏ‘four.fem.sing’) would. We further observe that MSA displays complex agreement patterns with numbers (Dada, 2007). 
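A minimal sketch of this backoff MLE predictor, under our own simplifying assumptions about the data structures (annotated tokens as dictionaries, counts as Counters keyed by backoff context), might look as follows:

from collections import Counter, defaultdict

# Backoff chain: word+CATIBEX+lemma -> CATIBEX+lemma -> lemma -> fallback,
# where the fallback is the surface value (gender/number) or 'irrational'.
def train_counts(annotated_tokens):
    counts = defaultdict(Counter)
    for t in annotated_tokens:           # each t has word, catibex, lemma and gold values
        for feature in ("gender", "number", "rationality"):
            v = t[feature]
            counts[(t["word"], t["catibex"], t["lemma"], feature)][v] += 1
            counts[(t["catibex"], t["lemma"], feature)][v] += 1
            counts[(t["lemma"], feature)][v] += 1
    return counts

def predict_functional(counts, word, catibex, lemma, feature, surface_value):
    for key in ((word, catibex, lemma, feature),
                (catibex, lemma, feature),
                (lemma, feature)):
        if counts[key]:
            return counts[key].most_common(1)[0][0]
    return surface_value if feature in ("gender", "number") else "irrational"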
Therefore, we alternatively experimented with binning the digit tokens’ NUMBER value accordingly: • the number 0 and numbers ending with 00 • the number 1 and numbers ending with 01 • the number 2 and numbers ending with 02 • the numbers 3-10 and those ending with 03-10 • the numbers, and numbers ending with, 11-99 • all other number tokens (e.g., 0.35 or 7/16) and denoted these experiments with NUMDGTBIN. Almost 1.5% of the tokens are digit tokens in the training set, and 1.2% in the dev set.10 Results using these new features are shown in Table 6. The first part repeats the CORE12 baseline. The second part repeats previous experiments with surface morphological features. The third part uses the new functional morphological features instead. The performance using NUMBER and GENDER increases by 0.21% and 0.22%, respectively, as we replace surface features with functional features. (Recall that there is no functional PERSON.) We then see that the change in the representation of digits does not help; in the large space of experiments we have performed, we saw some improvement through the use of this alternative representation, but not in any of the feature combinations that performed best and that we report on in this paper. We then use just the RATIONALITY feature, which results in an increase over the baseline. The combination of all three functional features (NUMBER, GENDER, RATIONALITY) provides for a nice cumulative effect. Adding PERSON hardly improves further. In the fourth part of the table, we include the other features which we found previously to be helpful, 10We didn’t mark the number-words since in our training data there were less than 30 lemmas of less than 2000 such tokens, so presumably their agreement patterns can be more easily learned. namely DET and LMM. Here, using DET2 instead of DET (see Section 5.4) gives us a slight improvement, providing our best result using the CORE12 POS tagset: 80.53%. This is a 1.85% improvement over using only the CORE12 POS tags (an 8.7% error reduction); of this improvement, 0.3% absolute (35% relative) is due to the use of functional features. We then use the best configuration, but without the RATIONALITY feature; we see that this feature on its own contributes 0.1% absolute, confirming its place in Arabic syntax. In gold experiments which we do not report here, the contribution was even higher (0.6-0.7%). The last row in the fourth part of Table 6 shows that using both surface and functional variants of NUMBER and GENDER does not help (hurts, in fact); the functional morphology features carry sufficient information for syntactic disambiguation. The last part of the table revalidates the gains achieved of the best feature combination using the two other POS tagsets mentioned in Section 3: CATIBEX (the best performing tagset with predicted values), and BW (the best POS tagset with gold values in Marton et al. (2010), but results shown here are with predicted values). The CATIBEX result of 80.83% is our overall best result. The result using BW reconfirms that BW is not the best tagset to use for parsing Arabic with current prediction ability. 5.6 Validating results on unseen test set Once experiments on the development set were done, we ran the best performing models on the previously unseen test set (Section 5.1). Table 7 shows that the same trends hold on this set as well. 
Model LAS LASdiff UAS CATIBEX 78.46 — 81.81 +DET2+LMM+PER+FN*NGR 79.45++ 0.99 82.56 Table 7: Results on unseen test set for models which performed best on dev set – predicted input. 6 Error Analysis We analyze the attachment accuracy by attachment type. We show the accuracy for selected attachment types in Table 8. Using just CORE12, we see that some attachments (subject, modifications) are harder than others (objects, idafa). We see that by 1593 Features SBJ OBJ MN MP IDF Tot. CORE12 67.9 90.4 72.0 70.3 94.5 78.7 CORE12 + LMM 68.8 90.4 72.6 70.9 94.6 79.0 CORE12 + DET2 +LMM+PNG 71.7 91.0 74.9 72.4 95.5 80.2 CORE12 + DET2 +LMM+PERS +FN*NGR 72.3 91.0 76.0 73.3 95.4 80.5 Table 8: Error analysis: Accuracy by attachment type (selected): subject, object, modification by a noun, modification (of a verb or a noun) by a preposition, idafa, and overall results (which match previously shown results) adding LMM, all attachment types improve a little bit; this is as expected, since this feature provides a slight lexical abstraction. We then add features designed to improve idafa and those relations subject to agreement, subject and nominal modification (DET2, PERSON, NUMBER, GENDER). We see that as expected, subject, nominal modification (MN), and idafa reduce error by substantial margins (error reduction over CORE12+LMM greater than 8%, in the case of idafa the error reduction is 16.7%), while object and prepositional attachment (MP) improve to a lesser degree (error reduction of 6.2% or less). We assume that the relations not subject to agreement (object and prepositional attachment) improve because of the overall improvement in the parse due to the improvements in the other relations. When we move to the functional features, we again see a reduction in the attachments which are subject to agreement, namely subject and nominal modification (error reductions over surface features of 2.1% and 4.4%, respectively). Idafa decreases slightly (since this relation is not affected by the functional features), while object stays the same. Surprisingly, prepositional attachment also improves, with an error reduction of 3.3%. Again, we can only explain this by proposing that the improvement in nominal modification attachment has the indirect effect of ruling out some bad prepositional attachments as well. In summary, we see that not only do morphological features – and functional morphology features in particular – improve parsing, but they improve parsing in the way that we expect: those relations subject to agreement improve more than those that are not. Last, we point out that MaltParser does not model generalized feature checking or matching directly, i.e., it has not learned that certain syntactic relations require identical (functional) morphological feature values. The gains in parsing quality reflect that the MaltParser SVM classifier has learned that the pairing of specific morphological feature values – e.g., fem.sing. for both the verb and its subject – is useful, with no generalization from each specific value to other values, or to general pair-wise value matching. 7 Conclusions and Future Work We explored the contribution of different morphological (inflectional and lexical) features to dependency parsing of Arabic. We find that definiteness (DET), φ-features (PERSON, NUMBER, GENDER), and undiacritized lemma (LMM) are most helpful for Arabic dependency parsing on predicted input. 
We further find that functional morphology features and rationality improve over surface morphological features, as predicted by the complex agreement rules of Arabic. To our knowledge, this is the first result in Arabic NLP that uses functional morphology features, and that shows an improvement over surface features. In future work, we intend to improve the prediction of functional morphological features in order to improve parsing accuracy. We also intend to investigate how these features can be integrated into other parsing frameworks; we expect them to help independently of the framework. We plan to make our parser available to other researchers. Please contact the authors if interested. Acknowledgments This work was supported by the DARPA GALE program, contract HR0011-08-C-0110. We thank Joakim Nivre for his useful remarks, Otakar Smrž for his help with Elixir-FM, Ryan Roth and Sarah Alkuhlani for their help with data, and three anonymous reviewers for useful comments. Part of the work was done while the first author was at Columbia University. 1594 References Sarah Alkuhlani and Nizar Habash. 2011. A corpus for modeling morpho-syntactic agreement in Arabic: gender, number and rationality. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics (ACL), Portland, Oregon, USA. Sabine Buchholz and Erwin Marsi. 2006. CoNLLX shared task on multilingual dependency parsing. In Proceedings of Computational Natural Language Learning (CoNLL), pages 149–164. Timothy A. Buckwalter. 2004. Buckwalter Arabic Morphological Analyzer Version 2.0. Linguistic Data Consortium, University of Pennsylvania, 2002. LDC Cat alog No.: LDC2004L02, ISBN 1-58563-324-0. Michael Collins, Jan Hajic, Lance Ramshaw, and Christoph Tillmann. 1999. A statistical parser for czech. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics (ACL), College Park, Maryland, USA, June. Brooke Cowan and Michael Collins. 2005. Morphology and reranking for the statistical parsing of spanish. In Proceedings of Human Language Technology (HLT) and the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 795–802. Ali Dada. 2007. Implementation of Arabic numerals and their syntax in GF. In Proceedings of the Workshop on Computational Approaches to Semitic Languages, pages 9–16, Prague, Czech Republic. Mona Diab. 2007. Towards an optimal pos tag set for modern standard arabic processing. In Proceedings of Recent Advances in Natural Language Processing (RANLP), Borovets, Bulgaria. Gülsen Eryigit, Joakim Nivre, and Kemal Oflazer. 2008. Dependency parsing of turkish. Computational Linguistics, 34(3):357–389. Spence Green and Christopher D. Manning. 2010. Better Arabic parsing: Baselines, evaluations, and analysis. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING), pages 394– 402, Beijing, China. Nizar Habash and Owen Rambow. 2005. Arabic Tokenization, Part-of-Speech Tagging and Morphological Disambiguation in One Fell Swoop. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL), pages 573–580, Ann Arbor, Michigan. Nizar Habash and Ryan Roth. 2009. Catib: The columbia arabic treebank. In Proceedings of the ACLIJCNLP 2009 Conference Short Papers, pages 221– 224, Suntec, Singapore, August. Nizar Habash, Reem Faraj, and Ryan Roth. 2009. Syntactic Annotation in the Columbia Arabic Treebank. 
In Proceedings of MEDAR International Conference on Arabic Language Resources and Tools, Cairo, Egypt. Nizar Habash. 2010. Introduction to Arabic Natural Language Processing. Morgan & Claypool Publishers. Jan Hajiˇc and Barbora Vidová-Hladká. 1998. Tagging Inflective Languages: Prediction of Morphological Categories for a Rich, Structured Tagset. In Proceedings of the International Conference on Computational Linguistics (COLING)- the Association for Computational Linguistics (ACL), pages 483–490. Sandra Kübler, Ryan McDonald, and Joakim Nivre. 2009. Dependency Parsing. Synthesis Lectures on Human Language Technologies. Morgan and Claypool Publishers. Seth Kulick, Ryan Gabbard, and Mitch Marcus. 2006. Parsing the Arabic Treebank: Analysis and improvements. In Proceedings of the Treebanks and Linguistic Theories Conference, pages 31–42, Prague, Czech Republic. Mohamed Maamouri, Ann Bies, Timothy A. Buckwalter, and Wigdan Mekki. 2004. The Penn Arabic Treebank: Building a Large-Scale Annotated Arabic Corpus. In Proceedings of the NEMLAR Conference on Arabic Language Resources and Tools, pages 102–109, Cairo, Egypt. Yuval Marton, Nizar Habash, and Owen Rambow. 2010. Improving Arabic dependency parsing with inflectional and lexical morphological features. In Proceedings of Workshop on Statistical Parsing of Morphologically Rich Languages (SPMRL) at the 11th Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL) - Human Language Technology (HLT), Los Angeles, USA. Jens Nilsson and Joakim Nivre. 2008. MaltEval: An evaluation and visualization tool for dependency parsing. In Proceedings of the sixth International Conference on Language Resources and Evaluation (LREC), Marrakech, Morocco. Joakim Nivre, Johan Hall, Jens Nilsson, Atanas Chanev, Gulsen Eryigit, Sandra Kubler, Svetoslav Marinov, and Erwin Marsi. 2007. MaltParser: A languageindependent system for data-driven dependency parsing. Natural Language Engineering, 13(2):95–135. Joakim Nivre, Igor M. Boguslavsky, and Leonid K. Iomdin. 2008. Parsing the SynTagRus Treebank of Russian. In Proceedings of the 22nd International Conference on Computational Linguistics (COLING), pages 641–648. Joakim Nivre. 2003. An efficient algorithm for projective dependency parsing. In Proceedings of the 8th International Conference on Parsing Technologies (IWPT), pages 149–160, Nancy, France. Joakim Nivre. 2008. Algorithms for deterministic incremental dependency parsing. Computational Linguistics, 34(4). 1595 Joakim Nivre. 2009. Parsing Indian languages with MaltParser. In Proceedings of the ICON09 NLP Tools Contest: Indian Language Dependency Parsing, pages 12–18. Ryan Roth, Owen Rambow, Nizar Habash, Mona Diab, and Cynthia Rudin. 2008. Arabic morphological tagging, diacritization, and lemmatization using lexeme models and feature ranking. In Proceedings of ACL08: HLT, Short Papers, pages 117–120, Columbus, Ohio, June. Association for Computational Linguistics. Otakar Smrž. 2007. Functional Arabic Morphology. Formal System and Implementation. Ph.D. thesis, Charles University, Prague. Reut Tsarfaty and Khalil Sima’an. 2007. Threedimensional parametrization for parsing morphologically rich languages. In Proceedings of the 10th International Conference on Parsing Technologies (IWPT), pages 156–167, Morristown, NJ, USA. 1596
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 151–160, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Target-dependent Twitter Sentiment Classification Long Jiang1 Mo Yu2 Ming Zhou1 Xiaohua Liu1 Tiejun Zhao2 1 Microsoft Research Asia 2 School of Computer Science & Technology Beijing, China Harbin Institute of Technology Harbin, China {longj,mingzhou,xiaoliu}@microsoft.com {yumo,tjzhao}@mtlab.hit.edu.cn Abstract Sentiment analysis on Twitter data has attracted much attention recently. In this paper, we focus on target-dependent Twitter sentiment classification; namely, given a query, we classify the sentiments of the tweets as positive, negative or neutral according to whether they contain positive, negative or neutral sentiments about that query. Here the query serves as the target of the sentiments. The state-ofthe-art approaches for solving this problem always adopt the target-independent strategy, which may assign irrelevant sentiments to the given target. Moreover, the state-of-the-art approaches only take the tweet to be classified into consideration when classifying the sentiment; they ignore its context (i.e., related tweets). However, because tweets are usually short and more ambiguous, sometimes it is not enough to consider only the current tweet for sentiment classification. In this paper, we propose to improve target-dependent Twitter sentiment classification by 1) incorporating target-dependent features; and 2) taking related tweets into consideration. According to the experimental results, our approach greatly improves the performance of target-dependent sentiment classification. 1 Introduction Twitter, as a micro-blogging system, allows users to publish tweets of up to 140 characters in length to tell others what they are doing, what they are thinking, or what is happening around them. Over the past few years, Twitter has become very popular. According to the latest Twitter entry in Wikipedia, the number of Twitter users has climbed to 190 million and the number of tweets published on Twitter every day is over 65 million1. As a result of the rapidly increasing number of tweets, mining people’s sentiments expressed in tweets has attracted more and more attention. In fact, there are already many web sites built on the Internet providing a Twitter sentiment search service, such as Tweetfeel2, Twendz3, and Twitter Sentiment4. In those web sites, the user can input a sentiment target as a query, and search for tweets containing positive or negative sentiments towards the target. The problem needing to be addressed can be formally named as Target-dependent Sentiment Classification of Tweets; namely, given a query, classifying the sentiments of the tweets as positive, negative or neutral according to whether they contain positive, negative or neutral sentiments about that query. Here the query serves as the target of the sentiments. The state-of-the-art approaches for solving this problem, such as (Go et al., 20095; Barbosa and Feng, 2010), basically follow (Pang et al., 2002), who utilize machine learning based classifiers for the sentiment classification of texts. However, their classifiers actually work in a target-independent way: all the features used in the classifiers are independent of the target, so the sentiment is decided no matter what the target is. 
Since (Pang et al., 2002) (or later research on sentiment classification 1 http://en.wikipedia.org/wiki/Twitter 2 http://www.tweetfeel.com/ 3 http://twendz.waggeneredstrom.com/ 4 http://twittersentiment.appspot.com/ 5 The algorithm used in Twitter Sentiment 151 of product reviews) aim to classify the polarities of movie (or product) reviews and each movie (or product) review is assumed to express sentiments only about the target movie (or product), it is reasonable for them to adopt the target-independent approach. However, for target-dependent sentiment classification of tweets, it is not suitable to exactly adopt that approach. Because people may mention multiple targets in one tweet or comment on a target in a tweet while saying many other unrelated things in the same tweet, target-independent approaches are likely to yield unsatisfactory results: 1. Tweets that do not express any sentiments to the given target but express sentiments to other things will be considered as being opinionated about the target. For example, the following tweet expresses no sentiment to Bill Gates but is very likely to be classified as positive about Bill Gates by targetindependent approaches. "People everywhere love Windows & vista. Bill Gates" 2. The polarities of some tweets towards the given target are misclassified because of the interference from sentiments towards other targets in the tweets. For example, the following tweet expresses a positive sentiment to Windows 7 and a negative sentiment to Vista. However, with targetindependent sentiment classification, both of the targets would get positive polarity. “Windows 7 is much better than Vista!” In fact, it is easy to find many such cases by looking at the output of Twitter Sentiment or other Twitter sentiment analysis web sites. Based on our manual evaluation of Twitter Sentiment output, about 40% of errors are because of this (see Section 6.1 for more details). In addition, tweets are usually shorter and more ambiguous than other sentiment data commonly used for sentiment analysis, such as reviews and blogs. Consequently, it is more difficult to classify the sentiment of a tweet only based on its content. For instance, for the following tweet, which contains only three words, it is difficult for any existing approaches to classify its sentiment correctly. “First game: Lakers!” However, relations between individual tweets are more common than those in other sentiment data. We can easily find many related tweets of a given tweet, such as the tweets published by the same person, the tweets replying to or replied by the given tweet, and retweets of the given tweet. These related tweets provide rich information about what the given tweet expresses and should definitely be taken into consideration for classifying the sentiment of the given tweet. In this paper, we propose to improve targetdependent sentiment classification of tweets by using both target-dependent and context-aware approaches. Specifically, the target-dependent approach refers to incorporating syntactic features generated using words syntactically connected with the given target in the tweet to decide whether or not the sentiment is about the given target. For instance, in the second example, using syntactic parsing, we know that “Windows 7” is connected to “better” by a copula, while “Vista” is connected to “better” by a preposition. By learning from training data, we can probably predict that “Windows 7” should get a positive sentiment and “Vista” should get a negative sentiment. 
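To make this concrete, the sketch below shows how the same sentence can yield different features for the two targets once the syntactic connection to the opinion word is taken into account. The relation labels and feature strings here are schematic illustrations only; the actual feature templates used in this paper are defined in Section 4.2.

def target_features(deps, target):
    """Collect opinion-word features that are syntactically tied to a target.
    deps is a list of (head, relation, dependent) triples."""
    feats = []
    for head, rel, dep in deps:
        if dep == target and rel == "cop_subj":     # target is the subject of a copula
            feats.append(head + "_cp_arg1")         # e.g. better_cp_arg1
        elif dep == target and rel == "prep_than":  # target is only reached via "than"
            feats.append(head + "_prep_than")       # a different, weaker cue
    return feats

# Schematic dependency triples for "Windows 7 is much better than Vista"
deps = [("better", "cop_subj", "Windows 7"),
        ("better", "prep_than", "Vista")]
print(target_features(deps, "Windows 7"))   # ['better_cp_arg1']
print(target_features(deps, "Vista"))       # ['better_prep_than']

With features like these, the classifier can separate the copula-linked target from the "than"-linked one even though both occur in the same tweet.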
In addition, we also propose to incorporate the contexts of tweets into classification, which we call a context-aware approach. By considering the sentiment labels of the related tweets, we can further boost the performance of the sentiment classification, especially for very short and ambiguous tweets. For example, in the third example we mentioned above, if we find that the previous and following tweets published by the same person are both positive about the Lakers, we can confidently classify this tweet as positive. The remainder of this paper is structured as follows. In Section 2, we briefly summarize related work. Section 3 gives an overview of our approach. We explain the target-dependent and contextaware approaches in detail in Sections 4 and 5 respectively. Experimental results are reported in Section 6 and Section 7 concludes our work. 2 Related Work In recent years, sentiment analysis (SA) has become a hot topic in the NLP research community. A lot of papers have been published on this topic. 152 2.1 Target-independent SA Specifically, Turney (2002) proposes an unsupervised method for classifying product or movie reviews as positive or negative. In this method, sentimental phrases are first selected from the reviews according to predefined part-of-speech patterns. Then the semantic orientation score of each phrase is calculated according to the mutual information values between the phrase and two predefined seed words. Finally, a review is classified based on the average semantic orientation of the sentimental phrases in the review. In contrast, (Pang et al., 2002) treat the sentiment classification of movie reviews simply as a special case of a topic-based text categorization problem and investigate three classification algorithms: Naive Bayes, Maximum Entropy, and Support Vector Machines. According to the experimental results, machine learning based classifiers outperform the unsupervised approach, where the best performance is achieved by the SVM classifier with unigram presences as features. 2.2 Target-dependent SA Besides the above mentioned work for targetindependent sentiment classification, there are also several approaches proposed for target-dependent classification, such as (Nasukawa and Yi, 2003; Hu and Liu, 2004; Ding and Liu, 2007). (Nasukawa and Yi, 2003) adopt a rule based approach, where rules are created by humans for adjectives, verbs, nouns, and so on. Given a sentiment target and its context, part-of-speech tagging and dependency parsing are first performed on the context. Then predefined rules are matched in the context to determine the sentiment about the target. In (Hu and Liu, 2004), opinions are extracted from product reviews, where the features of the product are considered opinion targets. The sentiment about each target in each sentence of the review is determined based on the dominant orientation of the opinion words appearing in the sentence. As mentioned in Section 1, target-dependent sentiment classification of review sentences is quite different from that of tweets. In reviews, if any sentiment is expressed in a sentence containing a feature, it is very likely that the sentiment is about the feature. However, the assumption does not hold in tweets. 2.3 SA of Tweets As Twitter becomes more popular, sentiment analysis on Twitter data becomes more attractive. (Go et al., 2009; Parikh and Movassate, 2009; Barbosa and Feng, 2010; Davidiv et al., 2010) all follow the machine learning based approach for sentiment classification of tweets. 
Specifically, (Davidiv et al., 2010) propose to classify tweets into multiple sentiment types using hashtags and smileys as labels. In their approach, a supervised KNN-like classifier is used. In contrast, (Barbosa and Feng, 2010) propose a two-step approach to classify the sentiments of tweets using SVM classifiers with abstract features. The training data is collected from the outputs of three existing Twitter sentiment classification web sites. As mentioned above, these approaches work in a target-independent way, and so need to be adapted for target-dependent sentiment classification. 3 Approach Overview The problem we address in this paper is targetdependent sentiment classification of tweets. So the input of our task is a collection of tweets containing the target and the output is labels assigned to each of the tweets. Inspired by (Barbosa and Feng, 2010; Pang and Lee, 2004), we design a three-step approach in this paper: 1. Subjectivity classification as the first step to decide if the tweet is subjective or neutral about the target; 2. Polarity classification as the second step to decide if the tweet is positive or negative about the target if it is classified as subjective in Step 1; 3. Graph-based optimization as the third step to further boost the performance by taking the related tweets into consideration. In each of the first two steps, a binary SVM classifier is built to perform the classification. To train the classifiers, we use SVM-Light 6 with a linear kernel; the default setting is adopted in all experiments. 6 http://svmlight.joachims.org/ 153 3.1 Preprocessing In our approach, rich feature representations are used to distinguish between sentiments expressed towards different targets. In order to generate such features, much NLP work has to be done beforehand, such as tweet normalization, POS tagging, word stemming, and syntactic parsing. In our experiments, POS tagging is performed by the OpenNLP POS tagger7. Word stemming is performed by using a word stem mapping table consisting of about 20,000 entries. We also built a simple rule-based model for tweet normalization which can correct simple spelling errors and variations into normal form, such as “gooood” to “good” and “luve” to “love”. For syntactic parsing we use a Maximum Spanning Tree dependency parser (McDonald et al., 2005). 3.2 Target-independent Features Previous work (Barbosa and Feng, 2010; Davidiv et al., 2010) has discovered many effective features for sentiment analysis of tweets, such as emoticons, punctuation, prior subjectivity and polarity of a word. In our classifiers, most of these features are also used. Since these features are all generated without considering the target, we call them targetindependent features. In both the subjectivity classifier and polarity classifier, the same targetindependent feature set is used. Specifically, we use two kinds of target-independent features: 1. Content features, including words, punctuation, emoticons, and hashtags (hashtags are provided by the author to indicate the topic of the tweet). 2. Sentiment lexicon features, indicating how many positive or negative words are included in the tweet according to a predefined lexicon. In our experiments, we use the lexicon downloaded from General Inquirer8. 
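As a rough illustration of these target-independent features, the sketch below counts sentiment-lexicon hits, hashtags, punctuation and emoticons for a single tweet. The tiny word lists are placeholders standing in for the General Inquirer lexicon, and the tokenization is deliberately simplistic.

import re

POS_WORDS = {"love", "good", "awesome", "great"}   # placeholder lexicon entries
NEG_WORDS = {"hate", "bad", "awful", "worse"}

def target_independent_features(tweet):
    tokens = re.findall(r"[#@]?\w+|[:;][-']?[)(DP]", tweet.lower())
    feats = {
        "num_pos_words": sum(t in POS_WORDS for t in tokens),
        "num_neg_words": sum(t in NEG_WORDS for t in tokens),
        "num_hashtags": sum(t.startswith("#") for t in tokens),
        "has_exclamation": "!" in tweet,
        "has_positive_emoticon": bool(re.search(r"[:;][-']?[)D]", tweet)),
    }
    for t in tokens:                 # unigram presence (content features)
        feats["word=" + t] = 1
    return feats

print(target_independent_features("I love my iPad :) #apple"))

In the actual system these counts and presence indicators become entries of the feature vector shared by the subjectivity and polarity classifiers.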
4 Target-dependent Sentiment Classification Besides target-independent features, we also incorporate target-dependent features in both the subjec 7 http://opennlp.sourceforge.net/projects.html 8 http://www.wjh.harvard.edu/~inquirer/ tivity classifier and polarity classifier. We will explain them in detail below. 4.1 Extended Targets It is quite common that people express their sentiments about a target by commenting not on the target itself but on some related things of the target. For example, one may express a sentiment about a company by commenting on its products or technologies. To express a sentiment about a product, one may choose to comment on the features or functionalities of the product. It is assumed that readers or audiences can clearly infer the sentiment about the target based on those sentiments about the related things. As shown in the tweet below, the author expresses a positive sentiment about “Microsoft” by expressing a positive sentiment directly about “Microsoft technologies”. “I am passionate about Microsoft technologies especially Silverlight.” In this paper, we define those aforementioned related things as Extended Targets. Tweets expressing positive or negative sentiments towards the extended targets are also regarded as positive or negative about the target. Therefore, for targetdependent sentiment classification of tweets, the first thing is identifying all extended targets in the input tweet collection. In this paper, we first regard all noun phrases, including the target, as extended targets for simplicity. However, it would be interesting to know under what circumstances the sentiment towards the target is truly consistent with that towards its extended targets. For example, a sentiment about someone’s behavior usually means a sentiment about the person, while a sentiment about someone’s colleague usually has nothing to do with the person. This could be a future work direction for target-dependent sentiment classification. In addition to the noun phrases including the target, we further expand the extended target set with the following three methods: 1. Adding mentions co-referring to the target as new extended targets. It is common that people use definite or demonstrative noun phrases or pronouns referring to the target in a tweet and express sentiments directly on them. For instance, in “Oh, Jon Stewart. How I love you so.”, the author expresses 154 a positive sentiment to “you” which actually refers to “Jon Stewart”. By using a simple co-reference resolution tool adapted from (Soon et al., 2001), we add all the mentions referring to the target into the extended target set. 2. Identifying the top K nouns and noun phrases which have the strongest association with the target. Here, we use Pointwise Mutual Information (PMI) to measure the association. ) ( ) ( ) , ( log ) , ( t p w p t w p t w PMI Where p(w,t), p(w), and p(t) are probabilities of w and t co-occurring, w appearing, and t appearing in a tweet respectively. In the experiments, we estimate them on a tweet corpus containing 20 million tweets. We set K = 20 in the experiments based on empirical observations. 3. Extracting head nouns of all extended targets, whose PMI values with the target are above some predefined threshold, as new extended targets. For instance, suppose we have found “Microsoft Technologies” as the extended target, we will further add “technologies” into the extended target set if the PMI value for “technologies” and “Microsoft” is above the threshold. 
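Written out in display form, the PMI score used above for ranking candidate extended targets is presumably the standard formulation, with p(w,t), p(w) and p(t) estimated from the 20-million-tweet corpus as described:

\mathrm{PMI}(w, t) \;=\; \log \frac{p(w, t)}{p(w)\, p(t)}

Both the top-K ranking and the head-noun threshold above are computed with this same score.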
Similarly, we can find “price” as the extended targets for “iPhone” from “the price of iPhone” and “LoveGame” for “Lady Gaga” from “LoveGame by Lady Gaga”. 4.2 Target-dependent Features Target-dependent sentiment classification needs to distinguish the expressions describing the target from other expressions. In this paper, we rely on the syntactic parse tree to satisfy this need. Specifically, for any word stem wi in a tweet which has one of the following relations with the given target T or any from the extended target set, we generate corresponding target-dependent features with the following rules: wi is a transitive verb and T (or any of the extended target) is its object; we generate a feature wi _arg2. “arg” is short for “argument”. For example, for the target iPhone in “I love iPhone”, we generate “love_arg2” as a feature. wi is a transitive verb and T (or any of the extended target) is its subject; we generate a feature wi_arg1 similar to Rule 1. wi is a intransitive verb and T (or any of the extended target) is its subject; we generate a feature wi_it_arg1. wi is an adjective or noun and T (or any of the extended target) is its head; we generate a feature wi_arg1. wi is an adjective or noun and it (or its head) is connected by a copula with T (or any of the extended target); we generate a feature wi_cp_arg1. wi is an adjective or intransitive verb appearing alone as a sentence and T (or any of the extended target) appears in the previous sentence; we generate a feature wi_arg. For example, in “John did that. Great!”, “Great” appears alone as a sentence, so we generate “great_arg” for the target “John”. wi is an adverb, and the verb it modifies has T (or any of the extended target) as its subject; we generate a feature arg1_v_wi. For example, for the target iPhone in the tweet “iPhone works better with the CellBand”, we will generate the feature “arg1_v_well”. Moreover, if any word included in the generated target-dependent features is modified by a negation9, then we will add a prefix “neg-” to it in the generated features. For example, for the target iPhone in the tweet “iPhone does not work better with the CellBand”, we will generate the features “arg1_v_neg-well” and “neg-work_it_arg1”. To overcome the sparsity of target-dependent features mentioned above, we design a special binary feature indicating whether or not the tweet contains at least one of the above target-dependent features. Target-dependent features are binary features, each of which corresponds to the presence of the feature in the tweet. If the feature is present, the entry will be 1; otherwise it will be 0. 9 Seven negations are used in the experiments: not, no, never, n’t, neither, seldom, hardly. 155 5 Graph-based Sentiment Optimization As we mentioned in Section 1, since tweets are usually shorter and more ambiguous, it would be useful to take their contexts into consideration when classifying the sentiments. In this paper, we regard the following three kinds of related tweets as context for a tweet. 1. Retweets. Retweeting in Twitter is essentially the forwarding of a previous message. People usually do not change the content of the original tweet when retweeting. So retweets usually have the same sentiment as the original tweets. 2. Tweets containing the target and published by the same person. Intuitively, the tweets published by the same person within a short timeframe should have a consistent sentiment about the same target. 3. Tweets replying to or replied by the tweet to be classified. 
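A minimal sketch of how a context graph over the input tweet collection might be assembled from these three relation types is given below. The field names (id, author, time, reply_to, retweet_of) are hypothetical stand-ins for whatever metadata the Twitter API actually returns.

from collections import defaultdict
from datetime import timedelta

def build_context_graph(tweets, window=timedelta(days=7)):
    """tweets: list of dicts with hypothetical keys id, author, time, reply_to, retweet_of."""
    edges = defaultdict(set)

    def connect(a, b):
        edges[a].add(b)
        edges[b].add(a)

    by_id = {t["id"]: t for t in tweets}
    for t in tweets:
        for key in ("retweet_of", "reply_to"):          # retweet and reply relations
            if t.get(key) in by_id:
                connect(t["id"], t[key])
        for u in tweets:                                # same author, within the time window
            if (u["id"] != t["id"] and u["author"] == t["author"]
                    and abs(u["time"] - t["time"]) <= window):
                connect(t["id"], u["id"])
    return edges

On such a graph, the optimization step described next combines each tweet's own classifier scores with the labels of its immediate neighbors; the quantity it iteratively re-estimates is presumably of the form p(c | τ, G) ∝ p(c | τ) Σ_{N(d)} p(c | N(d)) p(N(d)), with c, G, N(d) and τ as defined in the text.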
Based on these three kinds of relations, we can construct a graph using the input tweet collection of a given target. As illustrated in Figure 1, each circle in the graph indicates a tweet. The three kinds of edges indicate being published by the same person (solid line), retweeting (dash line), and replying relations (round dotted line) respectively. Figure 1. An example graph of tweets about a target If we consider that the sentiment of a tweet only depends on its content and immediate neighbors, we can leverage a graph-based method for sentiment classification of tweets. Specifically, the probability of a tweet belonging to a specific sentiment class can be computed with the following formula: ) ( )) ( ( )) ( | ( ) | ( ) , | ( d N d N p d N c p c p G c p Where c is the sentiment label of a tweet which belongs to {positive, negative, neutral}, G is the tweet graph, N(d) is a specific assignment of sentiment labels to all immediate neighbors of the tweet, and τ is the content of the tweet. We can convert the output scores of a tweet by the subjectivity and polarity classifiers into probabilistic form and use them to approximate p(c| τ). Then a relaxation labeling algorithm described in (Angelova and Weikum, 2006) can be used on the graph to iteratively estimate p(c|τ,G) for all tweets. After the iteration ends, for any tweet in the graph, the sentiment label that has the maximum p(c| τ,G) is considered the final label. 6 Experiments Because there is no annotated tweet corpus publicly available for evaluation of target-dependent Twitter sentiment classification, we have to create our own. Since people are most interested in sentiments towards celebrities, companies and products, we selected 5 popular queries of these kinds: {Obama, Google, iPad, Lakers, Lady Gaga}. For each of those queries, we downloaded 400 English tweets10 containing the query using the Twitter API. We manually classify each tweet as positive, negative or neutral towards the query with which it is downloaded. After removing duplicate tweets, we finally obtain 459 positive, 268 negative and 1,212 neutral tweets. Among the tweets, 100 are labeled by two human annotators for inter-annotator study. The results show that for 86% of them, both annotators gave identical labels. Among the 14 tweets which the two annotators disagree on, only 1 case is a positive-negative disagreement (one annotator considers it positive while the other negative), and the other 13 are all neutral-subjective disagreement. This probably indicates that it is harder for humans to decide if a tweet is neutral or subjective than to decide if it is positive or negative. 10 In this paper, we use sentiment classification of English tweets as a case study; however, our approach is applicable to other languages as well. 156 6.1 Error Analysis of Twitter Sentiment Output We first analyze the output of Twitter Sentiment (TS) using the five test queries. For each query, we randomly select 20 tweets labeled as positive or negative by TS. We also manually classify each tweet as positive, negative or neutral about the corresponding query. Then, we analyze those tweets that get different labels from TS and humans. Finally we find two major types of error: 1) Tweets which are totally neutral (for any target) are classified as subjective by TS; 2) sentiments in some tweets are classified correctly but the sentiments are not truly about the query. The two types take up about 35% and 40% of the total errors, respectively. 
The second type is actually what we want to resolve in this paper. After further checking those tweets of the second type, we found that most of them are actually neutral for the target, which means that the dominant error in Twitter Sentiment is classifying neutral tweets as subjective. Below are several examples of the second type where the bolded words are the targets. “No debate needed, heat can't beat lakers or celtics” (negative by TS but positive by human) “why am i getting spams from weird people asking me if i want to chat with lady gaga” (positive by TS but neutral by human) “Bringing iPhone and iPad apps into cars? http://www.speakwithme.com/ will be out soon and alpha is awesome in my car.” (positive by TS but neutral by human) “Here's a great article about Monte Veronese cheese. It's in Italian so just put the url into Google translate and enjoy http://ow.ly/3oQ77” (positive by TS but neutral by human) 6.2 Evaluation of Subjectivity Classification We conduct several experiments to evaluate subjectivity classifiers using different features. In the experiments, we consider the positive and negative tweets annotated by humans as subjective tweets (i.e., positive instances in the SVM classifiers), which amount to 727 tweets. Following (Pang et al., 2002), we balance the evaluation data set by randomly selecting 727 tweets from all neutral tweets annotated by humans and consider them as objective tweets (i.e., negative instances in the classifiers). We perform 10-fold cross-validations on the selected data. Following (Go et al., 2009; Pang et al., 2002), we use accuracy as a metric in our experiments. The results are listed below. Features Accuracy (%) Content features 61.1 + Sentiment lexicon features 63.8 + Target-dependent features 68.2 Re-implementation of (Barbosa and Feng, 2010) 60.3 Table 1. Evaluation of subjectivity classifiers. As shown in Table 1, the classifier using only the content features achieves an accuracy of 61.1%. Adding sentiment lexicon features improves the accuracy to 63.8%. Finally, the best performance (68.2%) is achieved by combining targetdependent features and other features (t-test: p < 0.005). This clearly shows that target-dependent features do help remove many sentiments not truly about the target. We also re-implemented the method proposed in (Barbosa and Feng, 2010) for comparison. From Table 1, we can see that all our systems perform better than (Barbosa and Feng, 2010) on our data set. One possible reason is that (Barbosa and Feng, 2010) use only abstract features while our systems use more lexical features. To further evaluate the contribution of target extension, we compare the system using the exact target and all extended targets with that using only the exact target. We also eliminate the extended targets generated by each of the three target extension methods and reevaluate the performances. Target Accuracy (%) Exact target 65.6 + all extended targets 68.2 - co-references 68.0 - targets found by PMI 67.8 - head nouns 67.3 Table 2. Evaluation of target extension methods. As shown in Table 2, without extended targets, the accuracy is 65.6%, which is still higher than those using only target-independent features. After adding all extended targets, the accuracy is improved significantly to 68.2% (p < 0.005), which suggests that target extension does help find indi157 rectly expressed sentiments about the target. In addition, all of the three methods contribute to the overall improvement, with the head noun method contributing most. 
However, the other two methods do not contribute significantly. 6.3 Evaluation of Polarity Classification Similarly, we conduct several experiments on positive and negative tweets to compare the polarity classifiers with different features, where we use 268 negative and 268 randomly selected positive tweets. The results are listed below. Features Accuracy (%) Content features 78.8 + Sentiment lexicon features 84.2 + Target-dependent features 85.6 Re-implementation of (Barbosa and Feng, 2010) 83.9 Table 3. Evaluation of polarity classifiers. From Table 3, we can see that the classifier using only the content features achieves the worst accuracy (78.8%). Sentiment lexicon features are shown to be very helpful for improving the performance. Similarly, we re-implemented the method proposed by (Barbosa and Feng, 2010) in this experiment. The results show that our system using both content features and sentiment lexicon features performs slightly better than (Barbosa and Feng, 2010). The reason may be same as that we explained above. Again, the classifier using all features achieves the best performance. Both the classifiers with all features and with the combination of content and sentiment lexicon features are significantly better than that with only the content features (p < 0.01). However, the classifier with all features does not significantly outperform that using the combination of content and sentiment lexicon features. We also note that the improvement by target-dependent features here is not as large as that in subjectivity classification. Both of these indicate that targetdependent features are more useful for improving subjectivity classification than for polarity classification. This is consistent with our observation in Subsection 6.2 that most errors caused by incorrect target association are made in subjectivity classification. We also note that all numbers in Table 3 are much bigger than those in Table 1, which suggests that subjectivity classification of tweets is more difficult than polarity classification. Similarly, we evaluated the contribution of target extension for polarity classification. According to the results, adding all extended targets improves the accuracy by about 1 point. However, the contributions from the three individual methods are not statistically significant. 6.4 Evaluation of Graph-based Optimization As seen in Figure 1, there are several tweets which are not connected with any other tweets. For these tweets, our graph-based optimization approach will have no effect. The following table shows the percentages of the tweets in our evaluation data set which have at least one related tweet according to various relation types. Relation type Percentage Published by the same person11 41.6 Retweet 23.0 Reply 21.0 All 66.2 Table 4. Percentages of tweets having at least one related tweet according to various relation types. According to Table 4, for 66.2% of the tweets concerning the test queries, we can find at least one related tweet. That means our context-aware approach is potentially useful for most of the tweets. To evaluate the effectiveness of our contextaware approach, we compared the systems with and without considering the context. System Accuracy F1-score (%) pos neu neg Target-dependent sentiment classifier 66.0 57.5 70.1 66.1 +Graph-based optimization 68.3 63.5 71.0 68.5 Table 5. Effectiveness of the context-aware approach. As shown in Table 5, the overall accuracy of the target-dependent classifiers over three classes is 66.0%. 
The graph-based optimization improves the performance by over 2 points (p < 0.005), which clearly shows that the context information is very 11 We limit the time frame from one week before to one week after the post time of the current tweet. 158 useful for classifying the sentiments of tweets. From the detailed improvement for each sentiment class, we find that the context-aware approach is especially helpful for positive and negative classes. Relation type Accuracy (%) Published by the same person 67.8 Retweet 66.0 Reply 67.0 Table 6. Contribution comparison between relations. We further compared the three types of relations for context-aware sentiment classification; the results are reported in Table 6. Clearly, being published by the same person is the most useful relation for sentiment classification, which is consistent with the percentage distribution of the tweets over relation types; using retweet only does not help. One possible reason for this is that the retweets and their original tweets are nearly the same, so it is very likely that they have already got the same labels in previous classifications. 7 Conclusions and Future Work Twitter sentiment analysis has attracted much attention recently. In this paper, we address targetdependent sentiment classification of tweets. Different from previous work using targetindependent classification, we propose to incorporate syntactic features to distinguish texts used for expressing sentiments towards different targets in a tweet. According to the experimental results, the classifiers incorporating target-dependent features significantly outperform the previous targetindependent classifiers. In addition, different from previous work using only information on the current tweet for sentiment classification, we propose to take the related tweets of the current tweet into consideration by utilizing graph-based optimization. According to the experimental results, the graph-based optimization significantly improves the performance. As mentioned in Section 4.1, in future we would like to explore the relations between a target and any of its extended targets. We are also interested in exploring relations between Twitter accounts for classifying the sentiments of the tweets published by them. Acknowledgments We would like to thank Matt Callcut for refining the language of this paper, and thank Yuki Arase and the anonymous reviewers for many valuable comments and helpful suggestions. We would also thank Furu Wei and Xiaolong Wang for their help with some of the experiments and the preparation of the camera-ready version of the paper. References Ralitsa Angelova, Gerhard Weikum. 2006. Graph-based text classification: learn from your neighbors. SIGIR 2006: 485-492 Luciano Barbosa and Junlan Feng. 2010. Robust Sentiment Detection on Twitter from Biased and Noisy Data. Coling 2010. Christopher Burges. 1998. A Tutorial on Support Vector Machines for Pattern Recognition. Data Mining and Knowledge Discovery, 2(2):121-167. Yejin Choi, Claire Cardie, Ellen Riloff, and Siddharth Patwardhan S. 2005. Identifying sources of opinions with conditional random fields and extraction patterns. In Proc. of the 2005 Human Language Technology Conf. and Conf. on Empirical Methods in Natural Language Processing (HLT/EMNLP 2005). pp. 355-362 Dmitry Davidiv, Oren Tsur and Ari Rappoport. 2010. Enhanced Sentiment Learning Using Twitter Hashtags and Smileys. Coling 2010. Xiaowen Ding and Bing Liu. 2007. The Utility of Linguistic Rules in Opinion Mining. 
SIGIR-2007 (poster paper), 23-27 July 2007, Amsterdam. Alec Go, Richa Bhayani, Lei Huang. 2009. Twitter Sentiment Classification using Distant Supervision. Vasileios Hatzivassiloglou and Kathleen.R. McKeown. 2002. Predicting the semantic orientation of adjectives. In Proceedings of the 35th ACL and the 8th Conference of the European Chapter of the ACL. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD-2004, full paper), Seattle, Washington, USA, Aug 22-25, 2004. Thorsten Joachims. Making Large-scale Support Vector Machine Learning Practical. In B. SchÄolkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in kernel methods: support vector learning, pages 169184. MIT Press, Cambridge, MA, USA, 1999. 159 Soo-Min Kim and Eduard Hovy 2006. Extracting opinions, opinion holders, and topics expressed in online news media text, In Proc. of ACL Workshop on Sentiment and Subjectivity in Text, pp.1-8, Sydney, Australia. Ryan McDonald, F. Pereira, K. Ribarov, and J. Hajiˇc. 2005. Non-projective dependency parsing using spanning tree algorithms. In Proc. HLT/EMNLP. Tetsuya Nasukawa, Jeonghee Yi. 2003. Sentiment analysis: capturing favorability using natural language processing. In Proceedings of K-CAP. Bo Pang, Lillian Lee. 2004. A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts. In Proceedings of ACL 2004. Bo Pang, Lillian Lee, Shivakumar Vaithyanathan. 2002. Thumbs up? Sentiment Classification using Machine Learning Techniques. Ravi Parikh and Matin Movassate. 2009. Sentiment Analysis of User-Generated Twitter Updates using Various Classification Techniques. Wee. M. Soon, Hwee. T. Ng, and Danial. C. Y. Lim. 2001. A Machine Learning Approach to Coreference Resolution of Noun Phrases. Computational Linguistics, 27(4):521–544. Peter D. Turney. 2002. Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews. In proceedings of ACL 2002. Janyce Wiebe. 2000. Learning subjective adjectives from corpora. In Proceedings of AAAI-2000. Theresa Wilson, Janyce Wiebe, Paul Hoffmann. 2005. Recognizing Contextual Polarity in Phrase-Level Sentiment Analysis. In Proceedings of NAACL 2005. 160
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1597–1606, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Partial Parsing from Bitext Projections Prashanth Mannem and Aswarth Dara Language Technologies Research Center International Institute of Information Technology Hyderabad, AP, India - 500032 {prashanth,abhilash.d}@research.iiit.ac.in Abstract Recent work has shown how a parallel corpus can be leveraged to build syntactic parser for a target language by projecting automatic source parse onto the target sentence using word alignments. The projected target dependency parses are not always fully connected to be useful for training traditional dependency parsers. In this paper, we present a greedy non-directional parsing algorithm which doesn’t need a fully connected parse and can learn from partial parses by utilizing available structural and syntactic information in them. Our parser achieved statistically significant improvements over a baseline system that trains on only fully connected parses for Bulgarian, Spanish and Hindi. It also gave a significant improvement over previously reported results for Bulgarian and set a benchmark for Hindi. 1 Introduction Parallel corpora have been used to transfer information from source to target languages for Part-Of-Speech (POS) tagging, word sense disambiguation (Yarowsky et al., 2001), syntactic parsing (Hwa et al., 2005; Ganchev et al., 2009; Jiang and Liu, 2010) and machine translation (Koehn, 2005; Tiedemann, 2002). Analysis on the source sentences was induced onto the target sentence via projections across word aligned parallel corpora. Equipped with a source language parser and a word alignment tool, parallel data can be used to build an automatic treebank for a target language. The parse trees given by the parser on the source sentences in the parallel data are projected onto the target sentence using the word alignments from the alignment tool. Due to the usage of automatic source parses, automatic word alignments and differences in the annotation schemes of source and target languages, the projected parses are not always fully connected and can have edges missing (Hwa et al., 2005; Ganchev et al., 2009). Nonliteral translations and divergences in the syntax of the two languages also lead to incomplete projected parse trees. Figure 1 shows an English-Hindi parallel sentence with correct source parse, alignments and target dependency parse. For the same sentence, Figure 2 is a sample partial dependency parse projected using an automatic source parser on aligned text. This parse is not fully connected with the words banaa, kottaige and dikhataa left without any parents. para bahuta hai The cottage built on the hill looks very beautiful pahaada banaa huaa kottaige sundara dikhataa Figure 1: Word alignment with dependency parses for an English-Hindi parallel sentence To train the traditional dependency parsers (Yamada and Matsumoto, 2003; Eisner, 1996; Nivre, 2003), the dependency parse has to satisfy four constraints: connectedness, single-headedness, acyclicity and projectivity (Kuhlmann and Nivre, 2006). Projectivity can be relaxed in some parsers (McDonald et al., 2005; Nivre, 2009). But these parsers can not directly be used to learn from partially connected parses (Hwa et al., 2005; Ganchev et al., 2009). In the projected Hindi treebank (section 4) that was extracted from English-Hindi parallel text, only 5.9% of the sentences had full trees. 
In 1597 Spanish and Bulgarian projected data extracted by Ganchev et al. (2009), the figures are 3.2% and 12.9% respectively. Learning from data with such high proportions of partially connected dependency parses requires special parsing algorithms which are not bound by connectedness. Its only during learning that the constraint doesn’t satisfy. For a new sentence (i.e. during inference), the parser should output fully connected dependency tree. para bahuta hai pahaada banaa huaa kottaige sundara dikhataa on cottage very beautiful build look hill PastPart. Be.Pres. Figure 2: A sample dependency parse with partial parses In this paper, we present a dependency parsing algorithm which can train on partial projected parses and can take rich syntactic information as features for learning. The parsing algorithm constructs the partial parses in a bottom-up manner by performing a greedy search over all possible relations and choosing the best one at each step without following either left-to-right or right-to-left traversal. The algorithm is inspired by earlier nondirectional parsing works of Shen and Joshi (2008) and Goldberg and Elhadad (2010). We also propose an extended partial parsing algorithm that can learn from partial parses whose yields are partially contiguous. Apart from bitext projections, this work can be extended to other cases where learning from partial structures is required. For example, while bootstrapping parsers high confidence parses are extracted and trained upon (Steedman et al., 2003; Reichart and Rappoport, 2007). In cases where these parses are few, learning from partial parses might be beneficial. We train our parser on projected Hindi, Bulgarian and Spanish treebanks and show statistically significant improvements in accuracies between training on fully connected trees and learning from partial parses. 2 Related Work Learning from partial parses has been dealt in different ways in the literature. Hwa et al. (2005) used post-projection completion/transformation rules to get full parse trees from the projections and train Collin’s parser (Collins, 1999) on them. Ganchev et al. (2009) handle partial projected parses by avoiding committing to entire projected tree during training. The posterior regularization based framework constrains the projected syntactic relations to hold approximately and only in expectation. Jiang and Liu (2010) refer to alignment matrix and a dynamic programming search algorithm to obtain better projected dependency trees. They deal with partial projections by breaking down the projected parse into a set of edges and training on the set of projected relations rather than on trees. While Hwa et al. (2005) requires full projected parses to train their parser, Ganchev et al. (2009) and Jiang and Liu (2010) can learn from partially projected trees. However, the discriminative training in (Ganchev et al., 2009) doesn’t allow for richer syntactic context and it doesn’t learn from all the relations in the partial dependency parse. By treating each relation in the projected dependency data independently as a classification instance for parsing, Jiang and Liu (2010) sacrifice the context of the relations such as global structural context, neighboring relations that are crucial for dependency analysis. Due to this, they report that the parser suffers from local optimization during training. The parser proposed in this work (section 3) learns from partial trees by using the available structural information in it and also in neighboring partial parses. 
We evaluated our system (section 5) on Bulgarian and Spanish projected dependency data used in (Ganchev et al., 2009) for comparison. The same could not be carried out for Chinese (which was the language (Jiang and Liu, 2010) worked on) due to the unavailability of projected data used in their work. Comparison with the traditional dependency parsers (McDonald et al., 2005; Yamada and Matsumoto, 2003; Nivre, 2003; Goldberg and Elhadad, 2010) which train on complete dependency parsers is out of the scope of this work. 3 Partial Parsing A standard dependency graph satisfies four graph constraints: connectedness, single-headedness, acyclicity and projectivity (Kuhlmann and Nivre, 2006). In our work, we assume the dependency graph for a sentence only satisfies the single1598 a) para pahaada banaa huaa kottaige bahuta sundara dikhataa hai hill on build PastPart. cottage very beautiful look Be.Pres. b) para bahuta hai pahaada banaa huaa kottaige sundara dikhataa c) para hai banaa huaa kottaige sundara dikhataa pahaada bahuta d) hai banaa huaa kottaige sundara dikhataa pahaada bahuta para e) hai banaa kottaige sundara dikhataa pahaada bahuta para huaa f) banaa kottaige sundara dikhataa pahaada bahuta para huaa hai g) sundara bahuta hai pahaada para banaa kottaige dikhataa huaa h) hai pahaada para sundara bahuta banaa kottaige dikhataa huaa Figure 3: Steps taken by GNPPA. The dashed arcs indicate the unconnected words in unConn. The dotted arcs indicate the candidate arcs in candidateArcs and the solid arcs are the high scoring arcs that are stored in builtPPs headedness, acyclicity and projectivity constraints while not necessarily being connected i.e. all the words need not have parents. Given a sentence W=w0 · · · wn with a set of directed arcs A on the words in W, wi →wj denotes a dependency arc from wi to wj, (wi,wj) ϵ A. wi is the parent in the arc and wj is the child in the arc. ∗−→denotes the reflexive and transitive closure of the arc. wi ∗−→wj says that wi dominates wj, i.e. there is (possibly empty) path from wi to wj. A node wi is unconnected if it does not have an incoming arc. R is the set of all such unconnected nodes in the dependency graph. For the example in Figure 2, R={banaa, kottaige, dikhataa}. A partial parse rooted at node wi denoted by ρ(wi) is the set of arcs that can be traversed from node wi. The yield of a partial parse ρ(wi) is the set of nodes dominated by it. We use π(wi) to refer to the yield of ρ(wi) arranged in the linear order of their occurrence in the sentence. The span of the partial tree is the first and last words in its yield. The dependency graph D can now be represented in terms of partial parses by D = (W, R, ϱ(R)) where W={w0 · · · wn} is the sentence, R={r1 · · · rm} is the set of unconnected nodes and ϱ(R)= {ρ(r1) · · · ρ(rm)} is the set of partial parses rooted at these unconnected nodes. w0 is a dummy word added at the beginning of W to behave as a root of a fully connected parse. A fully connected dependency graph would have only one element w0 in R and the dependency graph rooted at w0 as the only (fully connected) parse in ϱ(R). We assume the combined yield of ϱ(R) spans the entire sentence and each of the partial parses in ϱ(R) to be contiguous and non-overlapping with one another. A partial parse is contiguous if its yield is contiguous i.e. if a node wj ϵ π(wi), then all the words between wi and wj also belong to π(wi). 
A partial parse ρ(wi) is non-overlapping if the intersection of its yield π(wi) with yields of all other partial parses is empty. 3.1 Greedy Non-directional Partial Parsing Algorithm (GNPPA) Given the sentence W and the set of unconnected nodes R, the parser follows a non-directional greedy approach to establish relations in a bottom up manner. The parser does a greedy search over all the possible relations and picks the one with 1599 the highest score at each stage. This process is repeated until parents for all the nodes that do not belong to R are chosen. Algorithm 1 lists the outline of the greedy nondirectional partial parsing algorithm (GNPPA). builtPPs maintains a list of all the partial parses that have been built. It is initialized in line 1 by considering each word as a separate partial parse with just one node. candidateArcs stores all the arcs that are possible at each stage of the parsing process in a bottom up strategy. It is initialized in line 2 using the method initCandidateArcs(w0 · · · wn). initCandidateArcs(w0 · · · wn) adds two candidate arcs for each pair of consecutive words with each other as parent (see Figure 3b). If an arc has one of the nodes in R as the child, it isn’t included in candidateArcs. Algorithm 1 Partial Parsing Algorithm Input: sentence w0 · · · wn and set of partial tree roots unConn={r1 · · · rm} Output: set of partial parses whose roots are in unConn (builtPPs = {ρ(r1) · · · ρ(rm)}) 1: builtPPs = {ρ(r1) · · · ρ(rn)} ←{w0 · · · wn} 2: candidateArcs = initCandidateArcs(w0 · · · wn) 3: while candidateArcs.isNotEmpty() do 4: bestArc = argmax ci ϵ candidateArcs score(ci, −→ w ) 5: builtPPs.remove(bestArc.child) 6: builtPPs.remove(bestArc.parent) 7: builtPPs.add(bestArc) 8: updateCandidateArcs(bestArc, candidateArcs, builtPPs, unConn) 9: end while 10: return builtPPs Once initialized, the candidate arc with the highest score (line 4) is chosen and accepted into builtPPs. This involves replacing the best arc’s child partial parse ρ(arc.child) and parent partial parse ρ(arc.parent) over which the arc has been formed with the arc ρ(arc.parent) → ρ(arc.child) itself in builtPPs (lines 5-7). In Figure 3f, to accept the best candidate arc ρ(banaa) → ρ(pahaada), the parser would remove the nodes ρ(banaa) and ρ(pahaada) in builtPPs and add ρ(banaa) →ρ(pahaada) to builtPPs (see Figure 3g). After the best arc is accepted, the candidateArcs has to be updated (line 8) to remove the arcs that are no longer valid and add new arcs in the context of the updated builtPPs. Algorithm 2 shows the update procedure. First, all the arcs that end on the child are removed (lines 3-7) along with the arc from child to parent. Then, the immediately previous and next partial parses of the best arc in builtPPs are retrieved (lines 8-9) to add possible candidate arcs between them and the partial parse representing the best arc (lines 10-23). In the example, between Figures 3b and 3c, the arcs ρ(kottaige) →ρ(bahuta) and ρ(bahuta) →ρ(sundara) are first removed and the arc ρ(kottaige) →ρ(sundara) is added to candidateArcs. Care is taken to avoid adding arcs that end on unconnected nodes listed in R. The entire GNPPA parsing process for the example sentence in Figure 2 is shown in Figure 3. 
Algorithm 2 updateCandidateArcs(bestArc, candidateArcs, builtPPs, unConn) 1: baChild = bestArc.child 2: baParent = bestArc.parent 3: for all arc ϵ candidateArcs do 4: if arc.child = baChild or (arc.parent = baChild and arc.child = baParent) then 5: remove arc 6: end if 7: end for 8: prevPP = builtPPs.previousPP(bestArc) 9: nextPP = builtPPs.nextPP(bestArc) 10: if bestArc.direction == LEFT then 11: newArc1 = new Arc(prevPP,baParent) 12: newArc2 = new Arc(baParent,prevPP) 13: end if 14: if bestArc.direction == RIGHT then 15: newArc1 = new Arc(nextPP,baParent) 16: newArc2 = new Arc(baParent,nextPP) 17: end if 18: if newArc1.parent /∈unConn then 19: candidateArcs.add(newArc1) 20: end if 21: if newArc2.parent /∈unConn then 22: candidateArcs.add(newArc2) 23: end if 24: return candidateArcs 3.2 Learning The algorithm described in the previous section uses a weight vector −→ w to compute the best arc from the list of candidate arcs. This weight vector is learned using a simple Perceptron like algorithm similar to the one used in (Shen and Joshi, 2008). Algorithm 3 lists the learning framework for GNPPA. For a training sample with sentence w0 · · · wn, projected partial parses projectedPPs={ρ(ri) · · · ρ(rm)}, unconnected words unConn and weight vector −→ w , the builtPPs and candidateArcs are initiated as in algorithm 1. Then the arc with the highest score is selected. If this arc belongs to the parses in projectedPPs, builtPPs and candidateArcs are updated similar to the operations in 1600 a) para hai pahaada banaa huaa kottaige bahuta sundara dikhataa hill on build PastPart. cottage very beautiful look Be.Pres. b) para hai pahaada banaa huaa kottaige bahuta sundara dikhataa c) hai bahuta pahaada para banaa huaa kottaige sundara dikhataa d) hai para bahuta pahaada banaa huaa kottaige sundara dikhataa Figure 4: First four steps taken by E-GNPPA. The blue colored dotted arcs are the additional candidate arcs that are added to candidateArcs algorithm 1. If it doesn’t, it is treated as a negative sample and a corresponding positive candidate arc which is present both projectedPPs and candidateArcs is selected (lines 11-12). The weights of the positive candidate arc are increased while that of the negative sample (best arc) are decreased. To reduce over fitting, we use averaged weights (Collins, 2002) in algorithm 1. Algorithm 3 Learning for Non-directional Greedy Partial Parsing Algorithm Input: sentence w0 · · · wn, projected partial parses projectedPPs, unconnected words unConn, current −→ w Output: updated −→ w 1: builtPPs = {ρ(r1) · · · ρ(rn)} ←{w0 · · · wn} 2: candidateArcs = initCandidateArcs(w0 · · · wn) 3: while candidateArcs.isNotEmpty() do 4: bestArc = argmax ci ϵ candidateArcs score(ci, −→ w ) 5: if bestArc ∈projectedPPs then 6: builtPPs.remove(bestArc.child) 7: builtPPs.remove(bestArc.parent) 8: builtPPs.add(bestArc) 9: updateCandidateArcs(bestArc, candidateArcs, builtPPs, unConn) 10: else 11: allowedArcs = {ci | ci ϵ candidateArcs && ci ϵ projectedArcs} 12: compatArc = argmax ci ϵ allowedArcs score(ci, −→ w ) 13: promote(compatArc,−→ w ) 14: demote(bestArc,−→ w ) 15: end if 16: end while 17: return builtPPs 3.3 Extended GNPPA (E-GNPPA) The GNPPA described in section 3.1 assumes that the partial parses are contiguous. The example in Figure 5 has a partial tree ρ(dikhataa) which isn’t contiguous. Its yield doesn’t contain bahuta and sundara. We call such noncontiguous partial parses whose yields encompass the yield of an other partial parse as partially contiguous. 
Partially contiguous parses are common in the projected data and would not be parsable by the algorithm 1 (ρ(dikhataa) →ρ(kottaige) would not be identified). para bahuta hai pahaada banaa huaa kottaige sundara dikhataa hill on build cottage very beautiful look PastPart. Be.Pres. Figure 5: Dependency parse with a partially contiguous partial parse In order to identify and learn from relations which are part of partially contiguous partial parses, we propose an extension to GNPPA. The extended GNPAA (E-GNPPA) broadens its scope while searching for possible candidate arcs given R and builtPPs. If the immediate previous or the next partial parses over which arcs are to be formed are designated unconnected nodes, the parser looks further for a partial parse over which it can form arcs. For example, in Figure 4b, the arc ρ(para) →ρ(banaa) can not be added to the candidateArcs since banaa is a designated unconnected node in unConn. The E-GNPPA looks over the unconnected node and adds the arc ρ(para) →ρ(huaa) to the candidate arcs list candidateArcs. E-GNPPA differs from algorithm 1 in lines 2 and 8. The E-GNPPA uses an extended initialization method initCandidateArcsExtended(w0) for 1601 Parent and Child par.pos, chd.pos, par.lex, chd.lex Sentence Context par-1.pos, par-2.pos, par+1.pos, par+2.pos, par-1.lex, par+1.lex chd-1.pos, chd-2.pos, chd+1.pos, chd+2.pos, chd-1.lex, chd+1.lex Structural Info leftMostChild(par).pos, rightMostChild(par).pos, leftSibling(chd).pos, rightSibling(chd).pos Partial Parse Context previousPP().pos, previousPP().lex, nextPP().pos, nextPP().lex Table 1: Information on which features are defined. par denotes the parent in the relation and chd the child. .pos and .lex is the POS and word-form of the corresponding node. +/-i is the previous/next ith word in the sentence. leftMostChild() and rightMostChild() denote the left most and right most children of a node. leftSibling() and rightSibling() get the immediate left and right siblings of a node. previousPP() and nextPP() return the immediate previous and next partial parses of the arc in builtPPs at the state. candidateArcs in line 2 and an extended procedure updateCandidateArcsExtended to update the candidateArcs after each step in line 8. Algorithm 4 shows the changes w.r.t algorithm 2. Figure 4 presents the steps taken by the E-GNPPA parser for the example parse in Figure 5. Algorithm 4 updateCandidateArcsExtended ( bestArc, candidateArcs, builtPPs,unConn ) · · · lines 1 to 7 of Algorithm 2 · · · prevPP = builtPPs.previousPP(bestArc) while prevPP ∈unConn do prevPP = builtPPs.previousPP(prevPP) end while nextPP = builtPPs.nextPP(bestArc) while nextPP ∈unConn do nextPP = builtPPs.nextPP(nextPP) end while · · · lines 10 to 24 of Algorithm 2 · · · 3.4 Features Features for a relation (candidate arc) are defined on the POS tags and lexical items of the nodes in the relation and those in its context. Two kinds of context are used a) context from the input sentence (sentence context) b) context in builtPPs i.e. nearby partial parses (partial parse context). Information from the partial parses (structural info) such as left and right most children of the parent node in the relation, left and right siblings of the child node in the relation are also used. Table 1 lists the information on which features are defined in the various configurations of the three language parsers. The actual features are combinations of the information present in the table. 
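As an illustration of how such combinations might be assembled, the sketch below builds feature strings for a candidate arc. Tokens are assumed to be (word, POS) pairs, and the structural and partial-parse context values are passed in precomputed because they depend on the parser state; the exact conjunctions used for each language are not given beyond Table 1, so the templates below are representative rather than the actual ones.

def arc_features(sent, par, chd, struct, pp_ctx):
    """Build feature strings for a candidate arc (par -> chd).
    sent:     list of (word, pos) tuples
    par, chd: indices of parent and child in sent
    struct:   dict of structural info, e.g. 'lmc_pos' for leftMostChild(par).pos
    pp_ctx:   dict of partial parse context, e.g. 'prev_pp_pos', 'next_pp_lex'."""
    def pos(i):
        return sent[i][1] if 0 <= i < len(sent) else "NULL"
    def lex(i):
        return sent[i][0] if 0 <= i < len(sent) else "NULL"
    f = []
    # Parent and child information, alone and conjoined.
    f += ["pp=" + pos(par), "cp=" + pos(chd),
          "pw=" + lex(par), "cw=" + lex(chd),
          "pp|cp=" + pos(par) + "|" + pos(chd),
          "pw|cw=" + lex(par) + "|" + lex(chd)]
    # Sentence context around parent and child.
    for off in (-2, -1, 1, 2):
        f.append("pp%+d=%s" % (off, pos(par + off)))
        f.append("cp%+d=%s" % (off, pos(chd + off)))
    f += ["pw-1=" + lex(par - 1), "pw+1=" + lex(par + 1),
          "cw-1=" + lex(chd - 1), "cw+1=" + lex(chd + 1)]
    # Structural and partial-parse context, alone and conjoined with the arc POS pair.
    for name, val in list(struct.items()) + list(pp_ctx.items()):
        f.append(name + "=" + str(val))
        f.append(name + "|pp|cp=" + str(val) + "|" + pos(par) + "|" + pos(chd))
    return f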
The set varies depending on the language and whether its GNPPA or E-GNPPA approach. While training, no features are defined on whether a node is unconnected (present in unConn) or not as this information isn’t available during testing. 4 Hindi Projected Dependency Treebank We conducted experiments on English-Hindi parallel data by transferring syntactic information from English to Hindi to build a projected dependency treebank for Hindi. The TIDES English-Hindi parallel data containing 45,000 sentences was used for this purpose 1 (Venkatapathy, 2008). Word alignments for these sentences were obtained using the widely used GIZA++ toolkit in grow-diag-final-and mode (Och and Ney, 2003). Since Hindi is a morphologically rich language, root words were used instead of the word forms. A bidirectional English POS tagger (Shen et al., 2007) was used to POS tag the source sentences and the parses were obtained using the first order MST parser (McDonald et al., 2005) trained on dependencies extracted from Penn treebank using the head rules of Yamada and Matsumoto (2003). A CRF based Hindi POS tagger (PVS. and Gali, 2007) was used to POS tag the target sentences. English and Hindi being morphologically and syntactically divergent makes the word alignment and dependency projection a challenging task. The source dependencies are projected using an approach similar to (Hwa et al., 2005). While they use post-projection transformations on the projected parse to account for annotation differences, we use pre-projection transformations on the source parse. The projection algorithm pro1The original data had 50,000 parallel sentences. It was later refined by IIIT-Hyderabad to remove repetitions and other trivial errors. The corpus is still noisy with typographical errors, mismatched sentences and unfaithful translations. 1602 duces acyclic parses which could be unconnected and non-projective. 4.1 Annotation Differences in Hindi and English Before projecting the source parses onto the target sentence, the parses are transformed to reflect the annotation scheme differences in English and Hindi. While English dependency parses reflect the PTB annotation style (Marcus et al., 1994), we project them to Hindi to reflect the annotation scheme described in (Begum et al., 2008). The differences in the annotation schemes are with respect to three phenomena: a) head of a verb group containing auxiliary and main verbs, b) prepositions in a prepositional phrase (PP) and c) coordination structures. In the English parses, the auxiliary verb is the head of the main verb while in Hindi, the main verb is the head of the auxiliary in the verb group. For example, in the Hindi parse in Figure 1, dikhataa is the head of the auxiliary verb hai. The prepositions in English are realized as postpositions in Hindi. While prepositions are the heads in a preposition phrase, post-positions are the modifiers of the preceding nouns in Hindi. In pahaada para (on the hill), hill is the head of para. In coordination structures, while English differentiates between how NP coordination and VP coordination structures behave, Hindi annotation scheme is consistent in its handling. Leftmost verb is the head of a VP coordination structure in English whereas the rightmost noun is the head in case of NP coordination. In Hindi, the conjunct is the head of the two verbs/nouns in the coordination structure. 
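Each of these three divergences amounts to a head swap applied to the source tree before projection. The sketch below operates on a simple head-array representation and assumes the relevant constructions (auxiliary/main verb pairs, adpositions with their nouns, coordination) have already been identified from POS tags and dependency labels; it also glosses over exactly which dependents should move along with the swapped head, so it is illustrative rather than the system's actual transformation code.

def flip_head(heads, old_head, new_head, move_siblings=True):
    """Swap the head relation between old_head and its dependent new_head.
    heads: list where heads[i] is the parent index of word i (-1 for root).
    If move_siblings is True, the other dependents of old_head are
    re-attached to new_head as well (a simplification)."""
    heads[new_head] = heads[old_head]   # new_head takes over old_head's parent
    heads[old_head] = new_head          # old_head now depends on new_head
    if move_siblings:
        for i, h in enumerate(heads):
            if h == old_head and i != new_head:
                heads[i] = new_head
    return heads

# a) Verb groups: the main verb, not the auxiliary, heads the group in Hindi.
#    flip_head(heads, aux_index, main_verb_index)
# b) Adpositions: the noun, not the adposition, is the head in Hindi, with the
#    postposition attached as the noun's modifier.
#    flip_head(heads, prep_index, noun_index)
# c) Coordination: the conjunction heads both conjuncts in Hindi.
#    flip_head(heads, english_head_conjunct_index, conjunction_index)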
These three cases are identified in the source tree and appropriate transformations are made to the source parse itself before projecting the relations using word alignments. 5 Experiments We carried out all our experiments on parallel corpora belonging to English-Hindi, EnglishBulgarian and English-Spanish language pairs. While the Hindi projected treebank was obtained using the method described in section 4, Bulgarian and Spanish projected datasets were obtained using the approach in (Ganchev et al., 2009). The datasets of Bulgarian and Spanish that contributed to the best accuracies for Ganchev et al. (2009) Statistic Hindi Bulgarian Spanish N(Words) 226852 71986 133124 N(Parent==-1) 44607 30268 54815 P(Parent==-1) 19.7 42.0 41.1 N(Full trees) 593 1299 327 N(GNPPA) 30063 10850 19622 P(GNPPA) 16.4 26.0 25.0 N(E-GNPPA) 35389 12281 24577 P(E-GNPPA) 19.3 29.4 30.0 Table 2: Statistics of the Hindi, Bulgarian and Spanish projected treebanks used for experiments. Each of them has 10,000 randomly picked parses. N(X) denotes number of X and P(X) denotes percentage of X. N(Words) is the number of words. N(Parents==-1) is the number of words without a parent. N(Full trees) is the number of parses which are fully connected. N(GNPPA) is the number of relations learnt by GNPPA parser and N(E-GNPPA) is the number of relations learnt by E-GNPPA parser. Note that P(GNPPA) is calculated as N(GNPPA)/(N(Words) - N(Parents==-1)). were used in our work (7 rules dataset for Bulgarian and 3 rules dataset for Spanish). The Hindi, Bulgarian and Spanish projected dependency treebanks have 44760, 39516 and 76958 sentences respectively. Since we don’t have confidence scores for the projections on the sentences, we picked 10,000 sentences randomly in each of the three datasets for training the parsers2. Other methods of choosing the 10K sentences such as those with the max. no. of relations, those with least no. of unconnected words, those with max. no. of contiguous partial trees that can be learned by GNPPA parser etc. were tried out. Among all these, random selection was consistent and yielded the best results. The errors introduced in the projected parses by errors in word alignment, source parser and projection are not consistent enough to be exploited to select the better parses from the entire projected data. Table 2 gives an account of the randomly chosen 10k sentences in terms of the number of words, words without parents etc. Around 40% of the words spread over 88% of sentences in Bulgarian and 97% of sentences in Spanish have no parents. Traditional dependency parsers which only train from fully connected trees would not be able to learn from these sentences. P(GNPPA) is the percentage of relations in the data that are learned by the GNPPA parser satisfying the contiguous partial tree constraint and P(E-GNPPA) is the per2Exactly 10K sentences were selected in order to compare our results with those of (Ganchev et al., 2009). 1603 Parser Hindi Bulgarian Spanish Punct NoPunct Punct NoPunct Punct NoPunct Baseline 78.70 77.39 51.85 55.15 41.60 45.61 GNPPA 80.03* 78.81* 77.03* 79.06* 65.49* 68.70* E-GNPPA 81.10*† 79.94*† 78.93*† 80.11*† 67.69*† 70.90*† Table 3: UAS for Hindi, Bulgarian and Spanish with the baseline, GNPPA and E-GNPPA parsers trained on 10k parses selected randomly. Punct indicates evaluation with punctuation whereas NoPunct indicates without punctuation. * next to an accuracy denotes statistically significant (McNemar’s and p < 0.05) improvement over the baseline. 
† denotes significance over GNPPA centage that satisfies the partially contiguous constraint. E-GNPPA parser learns around 2-5% more no. of relations than GNPPA due to the relaxation in the constraints. The Hindi test data that was released as part of the ICON-2010 Shared Task (Husain et al., 2010) was used for evaluation. For Bulgarian and Spanish, we used the same test data that was used in the work of Ganchev et al. (2009). These test datasets had sentences from the training section of the CoNLL Shared Task (Nivre et al., 2007) that had lengths less than or equal to 10. All the test datasets have gold POS tags. A baseline parser was built to compare learning from partial parses with learning from fully connected parses. Full parses are constructed from partial parses in the projected data by randomly assigning parents to unconnected parents, similar to the work in (Hwa et al., 2005). The unconnected words in the parse are selected randomly one by one and are assigned parents randomly to complete the parse. This process is repeated for all the sentences in the three language datasets. The parser is then trained with the GNPPA algorithm on these fully connected parses to be used as the baseline. Table 3 lists the accuracies of the baseline, GNPPA and E-GNPPA parsers. The accuracies are unlabeled attachment scores (UAS): the percentage of words with the correct head. Table 4 compares our accuracies with those reported in (Ganchev et al., 2009) for Bulgarian and Spanish. 5.1 Discussion The baseline reported in (Ganchev et al., 2009) significantly outperforms our baseline (see Table 4) due to the different baselines used in both the works. In our work, while creating the data for the baseline by assigning random parents to unconnected words, acyclicity and projectivity conParser Bulgarian Spanish Ganchev-Baseline 72.6 69.0 Baseline 55.15 45.61 Ganchev-Discriminative 78.3 72.3 GNPPA 79.06 68.70 E-GNPPA 80.11 70.90 Table 4: Comparison of baseline, GNPPA and EGNPPA with baseline and discriminative model from (Ganchev et al., 2009) for Bulgarian and Spanish. Evaluation didn’t include punctuation. straints are not enforced. Ganchev et al. (2009)’s baseline is similar to the first iteration of their discriminative model and hence performs better than ours. Our Bulgarian E-GNPPA parser achieved a 1.8% gain over theirs while the Spanish results are lower. Though their training data size is also 10K, the training data is different in both our works due to the difference in the method of choosing 10K sentences from the large projected treebanks. The GNPPA accuracies (see table 3) for all the three languages are significant improvements over the baseline accuracies. This shows that learning from partial parses is effective when compared to imposing the connected constraint on the partially projected dependency parse. Even while projecting source dependencies during data creation, it is better to project high confidence relations than look to project more relations and thereby introduce noise. The E-GNPPA which also learns from partially contiguous partial parses achieved statistically significant gains for all the three languages. The gains across languages is due to the fact that in the 10K data that was used for training, E-GNPPA parser could learn 2 −5% more relations over GNPPA (see Table 2). 
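For concreteness, the baseline data preparation discussed above — completing each partially projected parse by assigning random parents to its unconnected words — can be sketched as follows. As in the description, no acyclicity or projectivity constraints are enforced; this is an illustrative sketch rather than the code used in the experiments.

import random

def complete_randomly(heads, unconnected, rng=random):
    """Complete a partial parse by giving every unconnected word a random
    parent, as in the baseline data creation.
    heads:       list where heads[i] is word i's parent index, or None if unknown
    unconnected: indices of words that currently have no parent."""
    n = len(heads)
    # Unconnected words are picked in random order and assigned parents randomly.
    for child in rng.sample(list(unconnected), len(unconnected)):
        # Any other word may serve as the parent; acyclicity and projectivity
        # are deliberately not checked, matching the baseline description.
        parent = rng.choice([i for i in range(n) if i != child])
        heads[child] = parent
    return heads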
Figure 6 shows the accuracies of baseline and E1604 30 40 50 60 70 80 0 1 2 3 4 5 6 7 8 9 10 Unlabeled Accuracy Thousands of sentences Bulgarian Hindi Spanish hn-baseline bg-baseline es-baseline Figure 6: Accuracies (without punctuation) w.r.t varying training data sizes for baseline and EGNPPA parsers. GNPPA parser for the three languages when training data size is varied. The parsers peak early with less than 1000 sentences and make small gains with the addition of more data. 6 Conclusion We presented a non-directional parsing algorithm that can learn from partial parses using syntactic and contextual information as features. A Hindi projected dependency treebank was developed from English-Hindi bilingual data and experiments were conducted for three languages Hindi, Bulgarian and Spanish. Statistically significant improvements were achieved by our partial parsers over the baseline system. The partial parsing algorithms presented in this paper are not specific to bitext projections and can be used for learning from partial parses in any setting. References R. Begum, S. Husain, A. Dhwaj, D. Sharma, L. Bai, and R. Sangal. 2008. Dependency annotation scheme for indian languages. In In Proceedings of The Third International Joint Conference on Natural Language Processing (IJCNLP), Hyderabad, India. Michael John Collins. 1999. Head-driven statistical models for natural language parsing. Ph.D. thesis, University of Pennsylvania, Philadelphia, PA, USA. AAI9926110. Michael Collins. 2002. Discriminative training methods for hidden markov models: theory and experiments with perceptron algorithms. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing - Volume 10, EMNLP ’02, pages 1–8, Morristown, NJ, USA. Association for Computational Linguistics. Jason M. Eisner. 1996. Three new probabilistic models for dependency parsing: an exploration. In Proceedings of the 16th conference on Computational linguistics - Volume 1, pages 340–345, Morristown, NJ, USA. Association for Computational Linguistics. Kuzman Ganchev, Jennifer Gillenwater, and Ben Taskar. 2009. Dependency grammar induction via bitext projection constraints. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1 - Volume 1, ACL-IJCNLP ’09, pages 369– 377, Morristown, NJ, USA. Association for Computational Linguistics. Yoav Goldberg and Michael Elhadad. 2010. An efficient algorithm for easy-first non-directional dependency parsing. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT ’10, pages 742–750, Morristown, NJ, USA. Association for Computational Linguistics. Samar Husain, Prashanth Mannem, Bharath Ambati, and Phani Gadde. 2010. Icon 2010 tools contest on indian language dependency parsing. In Proceedings of ICON 2010 NLP Tools Contest. Rebecca Hwa, Philip Resnik, Amy Weinberg, Clara Cabezas, and Okan Kolak. 2005. Bootstrapping parsers via syntactic projection across parallel texts. Nat. Lang. Eng., 11:311–325, September. Wenbin Jiang and Qun Liu. 2010. Dependency parsing and projection based on word-pair classification. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL ’10, pages 12–20, Morristown, NJ, USA. Association for Computational Linguistics. P. Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. 
In MT summit, volume 5. Citeseer. Marco Kuhlmann and Joakim Nivre. 2006. Mildly non-projective dependency structures. In Proceedings of the COLING/ACL on Main conference poster sessions, pages 507–514, Morristown, NJ, USA. Association for Computational Linguistics. Mitchell P. Marcus, Beatrice Santorini, and Mary A. Marcinkiewicz. 1994. Building a large annotated corpus of english: The penn treebank. Computational Linguistics, 19(2):313–330. R. McDonald, K. Crammer, and F. Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). 1605 Jens Nilsson and Joakim Nivre. 2008. Malteval: an evaluation and visualization tool for dependency parsing. In Proceedings of the Sixth International Language Resources and Evaluation (LREC’08), Marrakech, Morocco, may. European Language Resources Association (ELRA). http://www.lrecconf.org/proceedings/lrec2008/. Joakim Nivre, Johan Hall, Sandra K¨ubler, Ryan Mcdonald, Jens Nilsson, Sebastian Riedel, and Deniz Yuret. 2007. The CoNLL 2007 shared task on dependency parsing. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007, pages 915–932, Prague, Czech Republic. Association for Computational Linguistics. Joakim Nivre. 2003. An Efficient Algorithm for Projective Dependency Parsing. In Eighth International Workshop on Parsing Technologies, Nancy, France. Joakim Nivre. 2009. Non-projective dependency parsing in expected linear time. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 351–359, Suntec, Singapore, August. Association for Computational Linguistics. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. Avinesh PVS. and Karthik Gali. 2007. Part-Of-Speech Tagging and Chunking using Conditional Random Fields and Transformation-Based Learning. In Proceedings of the IJCAI and the Workshop On Shallow Parsing for South Asian Languages (SPSAL), pages 21–24. Roi Reichart and Ari Rappoport. 2007. Self-training for enhancement and domain adaptation of statistical parsers trained on small datasets. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 616–623, Prague, Czech Republic, June. Association for Computational Linguistics. Libin Shen and Aravind Joshi. 2008. LTAG dependency parsing with bidirectional incremental construction. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 495–504, Honolulu, Hawaii, October. Association for Computational Linguistics. L. Shen, G. Satta, and A. Joshi. 2007. Guided learning for bidirectional sequence classification. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL). Mark Steedman, Miles Osborne, Anoop Sarkar, Stephen Clark, Rebecca Hwa, Julia Hockenmaier, Paul Ruhlen, Steven Baker, and Jeremiah Crim. 2003. Bootstrapping statistical parsers from small datasets. In Proceedings of the tenth conference on European chapter of the Association for Computational Linguistics - Volume 1, EACL ’03, pages 331– 338, Morristown, NJ, USA. Association for Computational Linguistics. Jrg Tiedemann. 2002. MatsLex - a multilingual lexical database for machine translation. 
In Proceedings of the 3rd International Conference on Language Resources and Evaluation (LREC’2002), volume VI, pages 1909–1912, Las Palmas de Gran Canaria, Spain, 29-31 May. Sriram Venkatapathy. 2008. Nlp tools contest - 2008: Summary. In Proceedings of ICON 2008 NLP Tools Contest. Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical Dependency Analysis with Support Vector Machines. In In Proceedings of IWPT, pages 195–206. David Yarowsky, Grace Ngai, and Richard Wicentowski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of the first international conference on Human language technology research, HLT ’01, pages 1–8, Morristown, NJ, USA. Association for Computational Linguistics. 1606
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1607–1615, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Ranking Class Labels Using Query Sessions Marius Pas¸ca Google Inc. 1600 Amphitheatre Parkway Mountain View, California 94043 [email protected] Abstract The role of search queries, as available within query sessions or in isolation from one another, in examined in the context of ranking the class labels (e.g., brazilian cities, business centers, hilly sites) extracted from Web documents for various instances (e.g., rio de janeiro). The co-occurrence of a class label and an instance, in the same query or within the same query session, is used to reinforce the estimated relevance of the class label for the instance. Experiments over evaluation sets of instances associated with Web search queries illustrate the higher quality of the query-based, re-ranked class labels, relative to ranking baselines using documentbased counts. 1 Introduction Motivation: The offline acquisition of instances (rio de janeiro, porsche cayman) and their corresponding class labels (brazilian cities, locations, vehicles, sports cars) from text has been an active area of research. In order to extract fine-grained classes of instances, existing methods often apply manuallycreated (Banko et al., 2007; Talukdar et al., 2008) or automatically-learned (Snow et al., 2006) extraction patterns to text within large document collections. In Web search, the relative ranking of documents returned in response to a query directly affects the outcome of the search. Similarly, the quality of the relative ranking among class labels extracted for a given instance influences any applications (e.g., query refinements or structured extraction) using the extracted data. But due to noise in Web data and limitations of extraction techniques, class labels acquired for a given instance (e.g., oil shale) may fail to properly capture the semantic classes to which the instance may belong (Kozareva et al., 2008). Inevitably, some of the extracted class labels will be less useful (e.g., sources, mutual concerns) or incorrect (e.g., plants for the instance oil shale). In previous work, the relative ranking of class labels for an instance is determined mostly based on features derived from the source Web documents from which the data has been extracted, such as variations of the frequency of co-occurrence or diversity of extraction patterns producing a given pair (Etzioni et al., 2005). Contributions: This paper explores the role of Web search queries, rather than Web documents, in inducing superior ranking among class labels extracted automatically from documents for various instances. It compares two sources of indirect ranking evidence available within anonymized query logs: a) co-occurrence of an instance and its class label in the same query; and b) co-occurrence of an instance and its class label, as separate queries within the same query session. The former source is a noisy attempt to capture queries that narrow the search results to a particular class of the instance (e.g., jaguar car maker). In comparison, the latter source noisily identifies searches that specialize from a class (e.g., car maker) to an instance (e.g., jaguar) or, conversely, generalize from an instance to a class. To our knowledge, this is the first study comparing inherently-noisy queries and query sessions for the purpose of ranking of open-domain, labeled class instances. 
1607 The remainder of the paper is organized as follows. Section 2 introduces intuitions behind an approach using queries for ranking class labels of various instances, and describes associated ranking functions. Sections 3 and 4 describe the experimental setting and evaluation results over evaluation sets of instances associated with Web search queries. The results illustrate the higher quality of the querybased, re-ranked lists of class labels, relative to alternative ranking methods using only document-based counts. 2 Instance Class Ranking via Query Logs Ranking Hypotheses: We take advantage of anonymized query logs, to induce superior ranking among the class labels associated with various class instances within an IsA repository acquired from Web documents. Given a class instance I, the functions used for the ranking of its class labels are chosen following several observations. • Hypothesis H1: If C is a prominent class of an instance I, then C and I are likely to occur in text in contexts that are indicative of an IsA relation. • Hypothesis H2: If C is a prominent class of an instance I, and I is ambiguous, then a fraction of the queries about I may also refer to and contain C. • Hypothesis H3: If C is a prominent class of an instance I, then a fraction of the queries about I may be followed by queries about C, and vice-versa. Ranking Functions: The ranking functions follow directly from the above hypotheses. • Ranking based on H1 (using documents): The first hypothesis H1 is a reformulation of findings from previous work (Etzioni et al., 2005). In practice, a class label is deemed more relevant for an instance if the pair is extracted more frequently and by multiple patterns, with the scoring formula: ScoreH1(C, I) = Freq(C, I) × Size({Pattern(C)})2 (1) where Freq(C, I) is the frequency of extraction of C for the instance I, and Size({Pattern(C)}) is the number of unique patterns extracting the class label C for the instance I. The patterns are hand-written, following (Hearst, 1992): ⟨[..] C [such as|including] I [and|,|.]⟩, where I is a potential instance (e.g., diderot) and C is a potential class label (e.g., writers). The boundaries are approximated from the part-of-speech tags of the sentence words, for potential class labels C; and identified by checking that I occurs as an entire query in query logs, for instances I (Van Durme and Pas¸ca, 2008). The application of the scoring formula (1) to candidates extracted from the Web produces a ranked list of class labels LH1(I). • Ranking based on H2 (using queries): Intuitively, Web users searching for information about I sometimes add some or all terms of C to a search query already containing I, either to further specify their query, or in response to being presented with sets of search results spanning several meanings of an ambiguous instance. Examples of such queries are happiness emotion and diderot philosopher. Moreover, queries like happiness positive psychology and diderot enlightenment may be considered to weakly and partially reinforce the relevance of the class labels positive emotions and enlightenment writers of the instances happiness and diderot respectively. In practice, a class label is deemed more relevant if its individual terms occur in popular queries containing the instance. More precisely, for each term within any class label from LH1(I), we compute a score TermQueryScore. 
The score is the frequency sum of the term within anonymized queries containing the instance I as a prefix, and the term anywhere else in the queries. Terms are stemmed before the computation. Each class label C is assigned the geometric mean of the scores of its N terms Ti, after ignoring stop words: ScoreH2(C, I) = ( N Y i=1 TermQueryScore(Ti))1/N (2) The geometric mean is preferred to the arithmetic mean, because the latter is more strongly affected by outlier values. The class labels are ranked according to the means, resulting in a ranked list LH2(I). In case of ties, LH2(I) keeps the relative ranking from LH1(I). • Ranking based on H3 (using query sessions): Given the third hypothesis H3, Web users searching for information about I may subsequently search for more general information about one of its classes C. Conversely, users may specialize their search from a class C to one of its instances I. Examples of such queries are happiness followed later by emotions, or diderot followed by philosophers; or emo1608 tions followed later by happiness, or philosophers followed by diderot. In practice, a class label is deemed more relevant if its individual terms occur as part of queries that are in the same query session as a query containing only the instance. More precisely, for each term within any class label from LH1(I), we compute a score TermSessionScore, equal to the frequency sum of the anonymized queries from the query sessions that contain the term and are: a) either the initial query of the session, with the instance I being one of the subsequent queries from the same session; or b) one of the subsequent queries of the session, with the instance I being the initial query of the same session. Before computing the frequencies, the class label terms are stemmed. Each class label C is assigned the geometric mean of the scores of its terms, after ignoring stop words: ScoreH3(C, I) = ( N Y i=1 TermSessionScore(Ti))1/N (3) The class labels are ranked according to the geometric means, resulting in a ranked list LH3(I). In case of ties, LH3(I) preserves the relative ranking from LH1(I). Unsupervised Ranking: Given an instance I, the ranking hypotheses and corresponding functions LH1(I), LH2(I) and LH3(I) (or any combination of them) can be used together to generate a merged, ranked list of class labels per instance I. The score of a class label in the merged list is determined by the inverse of the average rank in the lists LH1(I) and LH2(I) and LH3(I), computed with the following formula: ScoreH1+H2+H3(C, I) = N PN i Rank(C, LHi) (4) where N is the number of input lists of class labels (in this case, 3), and Rank(C, LHi) is the rank of C in the input list of class labels LHi (LH1, LH2 or LH3). The rank is set to 1000, if C is not present in the list LHi. By using only the relative ranks and not the absolute scores of the class labels within the input lists, the outcome of the merging is less sensitive to how class labels of a given instance are numerically scored within the input lists. In case of ties, the scores of the class labels from LH1(I) serve as a secondary ranking criterion. Thus, every instance I from the IsA repository is associated with a ranked list of class labels computed according to this ranking formula. Conversely, each class label C from the IsA repository is associated with a ranked list of class instances computed with the earlier scoring formula (1) used to generate lists LH1(I). 
Note that the ranking formula can also consider only a subset of the available input lists. For instance, ScoreH1+H2 would use only LH1(I) and LH2(I) as input lists; ScoreH1+H3 would use only LH1(I) and LH3(I) as input lists; etc. 3 Experimental Setting Textual Data Sources: The acquisition of the IsA repository relies on unstructured text available within Web documents and search queries. The queries are fully-anonymized queries in English submitted to Google by Web users in 2009, and are available in two collections. The first collection is a random sample of 50 million unique queries that are independent from one another. The second collection is a random sample of 5 million query sessions. Each session has an initial query and a series of subsequent queries. A subsequent query is a query that has been submitted by the same Web user within no longer than a few minutes after the initial query. Each subsequent query is accompanied by its frequency of occurrence in the session, with the corresponding initial query. The document collection consists of a sample of 100 million documents in English. Experimental Runs: The experimental runs correspond to different methods for extracting and ranking pairs of an instance and a class: • from the repository extracted here, with class labels of an instance ranked based on the frequency and the number of extraction patterns (ScoreH1 from Equation (1) in Section 2), in run Rd; • from the repository extracted here, with class labels of an instance ranked via the rank-based merging of: ScoreH1+H2 from Section 2, in run Rp, which corresponds to re-ranking using cooccurrence of an instance and its class label in the same query; ScoreH1+H3 from Section 2, in run Rs, which corresponds to re-ranking using cooccurrence of an instance and its class label, as separate queries within the same query session; and ScoreH1+H2+H3 from Section 2, in run Ru, which corresponds to re-ranking using both types of cooccurrences in queries. 1609 Evaluation Procedure: The manual evaluation of open-domain information extraction output is time consuming (Banko et al., 2007). A more practical alternative is an automatic evaluation procedure for ranked lists of class labels, based on existing resources and systems. Assume that there is a gold standard, containing gold class labels that are each associated with a gold set of their instances. The creation of such gold standards is discussed later. Based on the gold standard, the ranked lists of class labels available within an IsA repository can be automatically evaluated as follows. First, for each gold label, the ranked lists of class labels of individual gold instances are retrieved from the IsA repository. Second, the individual retrieved lists are merged into a ranked list of class labels, associated with the gold label. The merged list can be computed, e.g., using an extension of the ScoreH1+H2+H3 formula (Equation (4)) described earlier in Section 2. Third, the merged list is compared against the gold label, to estimate the accuracy of the merged list. Intuitively, a ranked list of class labels is a better approximation of a gold label, if class labels situated at better ranks in the list are closer in meaning to the gold label. Evaluation Metric: Given a gold label and a list of class labels, if any, derived from the IsA repository, the rank of the highest class label that matches the gold label determines the score assigned to the gold label, in the form of the reciprocal rank of the match. 
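Concretely, the rank-based merging behind runs Rp, Rs and Ru (Equation (4) of Section 2) scores each class label by the inverse of its average rank across the input lists, with a default rank of 1000 when a list does not contain the label. A minimal sketch, leaving out the tie-breaking by LH1 scores:

def merge_by_rank(input_lists, default_rank=1000):
    """Merge ranked lists of class labels (e.g., LH1, LH2, LH3) by the
    inverse of the average rank, as in Equation (4).
    input_lists: list of lists of class labels, best label first.
    Returns the labels sorted by the merged score, highest first."""
    labels = set(l for lst in input_lists for l in lst)
    n = len(input_lists)
    def score(label):
        ranks = [lst.index(label) + 1 if label in lst else default_rank
                 for lst in input_lists]
        return float(n) / sum(ranks)
    return sorted(labels, key=score, reverse=True)

# For example, run Ru for an instance would merge its three lists:
# merged = merge_by_rank([lh1_labels, lh2_labels, lh3_labels])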
Thus, if the gold label matches a class label at rank 1, 2 or 3 in the computed list, the gold label receives a score of 1, 0.5 or 0.33 respectively. The score is 0 if the gold label does not match any of the top 20 class labels. The overall score over the entire set of gold labels is the mean reciprocal rank (MRR) score over all gold labels from the set. Two types of MRR scores are automatically computed: • MRRf considers a gold label and a class label to match, if they are identical; • MRRp considers a gold label and a class label to match, if one or more of their tokens that are not stop words are identical. During matching, all string comparisons are caseinsensitive, and all tokens are first converted to their singular form (e.g., european countries to european country) using WordNet (Fellbaum, 1998). Thus, insurance carriers and insurance companies are conQuery Set: Sample of Queries Qe (807 queries): 2009 movies, amino acids, asian countries, bank, board games, buildings, capitals, chemical functional groups, clothes, computer language, dairy farms near modesto ca, disease, egyptian pharaohs, eu countries, fetishes, french presidents, german islands, hawaiian islands, illegal drugs, irc clients, lakes, macintosh models, mobile operator india, nba players, nobel prize winners, orchids, photo editors, programming languages, renaissance artists, roller costers, science fiction tv series, slr cameras, soul singers, states of india, taliban members, thomas edison inventions, u.s. presidents, us president, water slides Qm (40 queries): actors, actresses, airlines, american presidents, antibiotics, birds, cars, celebrities, colors, computer languages, digital camera, dog breeds, dogs, drugs, elements, endangered animals, european countries, flowers, fruits, greek gods, horror movies, idioms, ipods, movies, names, netbooks, operating systems, park slope restaurants, planets, presidents, ps3 games, religions, renaissance artists, rock bands, romantic movies, states, universities, university, us cities, vitamins Table 1: Size and composition of evaluation sets of queries associated with non-filtered (Qe) or manuallyfiltered (Qm) instances sidered to not match in MRRf scores, but match in MRRp scores. On the other hand, MRRp scores may give credit to less relevant class labels, such as insurance policies for the gold label insurance carriers. Therefore, MRRp is an optimistic, and MRRf is a pessimistic estimate of the actual usefulness of the computed ranked lists of class labels as approximations of the gold labels. 4 Evaluation IsA Repository: The IsA repository, extracted from the document collection, covers a total of 4.04 million instances associated with 7.65 million class labels. The number of class labels available per instance and vice-versa follows a long-tail distribution, indicating that 2.12 million of the instances each have two or more class labels (with an average of 19.72 class labels per instance). Evaluation Sets of Queries: Table 1 shows samples of two query sets, introduced in (Pas¸ca, 2010) and used in the evaluation. 
The first set, denoted Qe, 1610 Query Set Min Max Avg Median Number of Gold Instances: Qe 10 100 70.4 81 Qm 8 33 16.9 17 Number of Query Tokens: Qe 1 8 2.0 2 Qm 1 3 1.4 1 Table 2: Number of gold instances (upper part) and number of query tokens (lower part) available per query, over the evaluation sets of queries associated with non-filtered gold instances (Qe) or manually-filtered gold instances (Qm) is obtained from a random sample of anonymized, class-seeking queries submitted by Web users to Google Squared. The set contains 807 queries, each associated with a ranked list of between 10 and 100 gold instances automatically extracted by Google Squared. Since the gold instances available as input for each query as part of Qe are automatically extracted, they may or may not be true instances of the respective queries. As described in (Pas¸ca, 2010), the second evaluation set Qm is a subset of 40 queries from Qe, such that the gold instances available for each query in Qm are found to be correct after manual inspection. The 40 queries from Qm are associated with between 8 and 33 human-validated instances. As shown in the upper part of Table 2, the queries from Qe are up to 8 tokens in length, with an average of 2 tokens per query. Queries from Qm are comparatively shorter, both in maximum (3 tokens) and average (1.4 tokens) length. The lower part of Table 2 shows the number of gold instances available as input, which average around 70 and 17 per query, for queries from Qe and Qm respectively. To provide another view on the distribution of the queries from evaluation sets, Table 3 lists tokens that are not stop words, which occur in most queries from Qe. Comparatively, few query tokens occur in more than one query in Qm. Evaluation Procedure: Following the general evaluation procedure, each query from the sets Qe and Qm acts as a gold class label associated with the corresponding set of instances. Given a query and its instances I from the evaluation sets Qe or Qm, a merged, ranked lists of class labels is computed out of the ranked lists of class labels available in the Query Cnt. Examples of Queries Containing the Token Token countries 22 african countries, eu countries, poor countries cities 21 australian cities, cities in california, greek cities presidents 18 american presidents, korean presidents, presidents of the south korea restaurants 15 atlanta restaurants, nova scotia restaurants, restaurants 10024 companies 14 agriculture companies, gas utility companies, retail companies states 14 american states, states of india, united states national parks prime 11 australian prime ministers, indian prime ministers, prime ministers cameras 10 cameras, digital cameras olympus, nikon cameras movies 10 2009 movies, movies, romantic movies american 9 american authors, american president, american revolution battles ministers 9 australian prime ministers, indian prime ministers, prime ministers Table 3: Query tokens occurring most frequently in queries from the Qe evaluation set, along with the number (Cnt) and examples of queries containing the tokens underlying IsA repository for each instance I. The evaluation compares the merged lists of class labels, with the corresponding queries from Qe or Qm. Accuracy of Lists of Class Labels: Table 4 summarizes results from comparative experiments, quantifying a) horizontally, the impact of alternative parameter settings on the computed lists of class labels; and b) vertically, the comparative accuracy of the experimental runs over the query sets. 
The experimental parameters are the number of input instances from the evaluation sets that are used for retrieving class labels, I-per-Q, set to 3, 5, 10; and the number of class labels retrieved per input instance, C-per-I, set to 5, 10, 20. Four conclusions can be derived from the results. First, the scores over Qm are higher than those over Qe, confirming the intuition that the higher-quality 1611 Accuracy I-per-Q 3 5 10 C-per-I 5 10 20 5 10 20 5 10 20 MRRf computed over Qe: Rd 0.186 0.195 0.198 0.198 0.207 0.210 0.204 0.214 0.218 Rp 0.202 0.211 0.216 0.232 0.238 0.244 0.245 0.255 0.257 Rs 0.258 0.260 0.261 0.278 0.277 0.276 0.279 0.280 0.282 Ru 0.234 0.241 0.244 0.260 0.263 0.270 0.274 0.275 0.278 MRRp computed over Qe: Rd 0.489 0.495 0.495 0.517 0.528 0.529 0.541 0.553 0.557 Rp 0.520 0.531 0.533 0.564 0.573 0.578 0.590 0.601 0.602 Rs 0.576 0.584 0.583 0.612 0.616 0.614 0.641 0.636 0.628 Ru 0.561 0.570 0.571 0.606 0.614 0.617 0.640 0.641 0.636 MRRf computed over Qm: Rd 0.406 0.436 0.442 0.431 0.447 0.466 0.467 0.470 0.501 Rp 0.423 0.426 0.429 0.436 0.483 0.508 0.500 0.526 0.530 Rs 0.590 0.601 0.594 0.578 0.604 0.595 0.624 0.612 0.624 Ru 0.481 0.502 0.508 0.531 0.539 0.545 0.572 0.588 0.575 MRRp computed over Qm: Rd 0.667 0.662 0.660 0.675 0.677 0.699 0.702 0.695 0.716 Rp 0.711 0.703 0.680 0.734 0.731 0.748 0.733 0.797 0.782 Rs 0.841 0.822 0.820 0.835 0.828 0.823 0.850 0.856 0.844 Ru 0.800 0.810 0.781 0.795 0.794 0.779 0.806 0.827 0.816 Table 4: Accuracy of instance set labeling, as full-match (MRRf) or partial-match (MRRp) scores over the evaluation sets of queries associated with non-filtered instances (Qe) or manually-filtered instances (Qm), for various experimental runs (I-per-Q=number of gold instances available in the input evaluation sets that are used for retrieving class labels; C-per-I=number of class labels retrieved from IsA repository per input instance) input set of instances available in Qm relative to Qe should lead to higher-quality class labels for the corresponding queries. Second, when I-per-Q is fixed, increasing C-per-I leads to small, if any, score improvements. Third, when C-per-I is fixed, even small values of I-per-Q, such as 3 (that is, very small sets of instances provided as input) produce scores that are competitive with those obtained with a higher value like 10. This suggests that useful class labels can be generated even in extreme scenarios, where the number of instances available as input is as small as 3 or 5. Fourth and most importantly, for most combinations of parameter settings and on both query sets, the runs that take advantage of query logs (Rp, Rs, Ru) produce the highest scores. In particular, when I-per-Q is set to 10 and C-per-I to 20, run Ru identifies the original query as an exact match among the top three to four class labels returned (score 0.278); and as a partial match among the top one to two class labels returned (score 0.636), as an average over the Qe set. The corresponding MRRf score of 0.278 over the Qe set obtained with run Ru is 27% higher than with run Rd. In all experiments, the higher scores of Rp, Rs and Ru can be attributed to higher-quality lists of class labels, relative to Rd. Among combinations of parameter settings described in Table 4, values around 10 for I-per-Q and 20 for C-per-I give the highest scores over both Qe and Qm. Among the query-based runs Rp, Rs and Ru, the highest scores in Table 4 are obtained mostly for run Rs. 
Thus, between the presence of a class label and an instance either in the same query, or as separate queries within the same query session, it is the latter that provides a more useful signal during the reranking of class labels of each instance. Table 5 illustrates the top class labels from the ranked lists generated in run Rs for various queries from both Qe and Qm. The table suggests that the computed class labels are relatively resistant to noise and variation within the input set of gold instances. For example, the top elements of the lists of class la1612 Query Query Gold Instances Top Labels Generated Using Top 10 Gold Instances Set Cnt. Sample from Top Gold Instances actors Qe 100 abe vigoda, ben kingsley, bill hickman actors, stars, favorite actors, celebrities, movie stars Qm 28 al pacino, christopher walken, danny devito actors, celebrities, favorite actors, movie stars, stars computer languages Qe 59 acm transactions on mathematical software, applescript, c languages, programming languages, programs, standard programming languages, computer programming languages Qm 17 applescript, eiffel, haskell languages, programming languages, computer languages, modern programming languages, high-level languages european countries Qe 60 abkhazia, armenia, bosnia & herzegovina countries, european countries, eu countries, foreign countries, western countries Qm 19 belgium, finland, greece countries, european countries, eu countries, foreign countries, western countries endangered animals Qe 98 arkive, arabian oryx, bagheera species, animals, endangered species, animal species, endangered animals Qm 21 arabian oryx, blue whale, giant hispaniolan galliwasp animals, endangered species, species, endangered animals, rare animals park slope restaurants Qe 100 12th street bar & grill, aji bar lounge, anthony’s businesses, departments Qm 18 200 fifth restaurant bar, applewood restaurant, beet thai restaurant (none) renaissance artists Qe 95 michele da verona, andrea sansovino, andrea del sarto artists, famous artists, great artists, renaissance artists, italian artists Qm 11 botticelli, filippo lippi, giorgione artists, famous artists, renaissance artists, great artists, italian artists rock bands Qe 65 blood doll, nightmare, rockaway beach songs, hits, films, novels, famous songs Qm 15 arcade fire, faith no more, indigo girls bands, rock bands, favorite bands, great bands, groups Table 5: Examples of gold instances available in the input, and actual ranked lists of class labels produced by run Rs for various queries from the evaluation sets of queries associated with non-filtered gold instances (Qe) or manually-filtered gold instances (Qm) bels generated for computer languages are relevant and also quite similar for Qe vs. Qm, although the list of gold instances in Qe may contain incorrect items (e.g., acm transactions on mathematical software). Similarly, the class labels computed for european countries are almost the same for Qe vs. Qm, although the overlap of the respective lists of 10 gold instances used as input is not large. The table shows at least one query (park slope restaurants) for which the output is less than optimal, either because the class labels (e.g., businesses) are quite distant semantically from the query (for Qe), or because no output is produced at all, due to no class labels being found in the IsA repository for any of the 10 input gold instances (for Qm). 
For many queries, however, the computed class labels arguably capture the meaning of the original query, although not necessarily in the exact same lexical form, and sometimes only partially. For example, for the query endangered animals, only the fourth class label from Qm identifies the query exactly. However, class labels preceding endangered animals already capture the notion of animals or species (first and third labels), or that they are endangered (second label). 1613 0.062 0.125 0.250 0.500 1.000 2.000 4.000 8.000 16.000 32.000 64.000 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 (not in top 20) Percentage of queries Rank Query evaluation set: Qe Full-match Partial-match 0.062 0.125 0.250 0.500 1.000 2.000 4.000 8.000 16.000 32.000 64.000 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 (not in top 20) Percentage of queries Rank Query evaluation set: Qm Full-match Partial-match Figure 1: Percentage of queries from the evaluation sets, for which the earliest class labels from the computed ranked lists of class labels, which match the queries, occur at various ranks in the ranked lists returned by run Rs Figure 1 provides a detailed view on the distribution of queries from the Qe and Qm evaluation sets, for which the class label that matches the query occurs at a particular rank in the computed list of class labels. In the first graph of Figure 1, for Qe, the query matches the automatically-generated class label at ranks 1, 2, 3, 4 and 5 for 18.9%, 10.3%, 5.7%, 3.7% and 1.2% of the queries respectively, with full string matching, i.e., corresponding to MRRf; and for 52.6%, 12.4%, 5.3%, 3.7% and 1.7% respectively, with partial string matching, corresponding to MRRp. The second graph confirms that higher MRR scores are obtained for Qm than for Qe. In particular, the query matches the class label at rank 1 and 2 for 50.0% and 17.5% (or a combined 67.5%) of the queries from Qm, with full string matching; and for 52.6% and 12.4% (or a combined 67%), with partial string matching. Discussion: The quality of lists of items extracted from documents can benefit from query-driven ranking, particularly for the task of ranking class labels of instances within IsA repositories. The use of queries for ranking is generally applicable: it can be seen as a post-processing stage that enhances the ranking of the class labels extracted for various instances by any method into any IsA repository. Open-domain class labels extracted from text and re-ranked as described in this paper are useful in a variety of applications. Search tools such as Google Squared return a set of instances, in response to class-seeking queries (e.g., insurance companies). The labeling of the returned set of instances, using the re-ranked class labels available per instances, allows for the generation of query refinements (e.g., insurers). In search over semi-structured data (Cafarella et al., 2008), the labeling of column cells is useful to infer the semantics of a table column, when the subject row of the table in which the column appears is either absent or difficult to detect. 5 Related Work The role of anonymized query logs in Web-based information extraction has been explored in tasks such as class attribute extraction (Pas¸ca and Van Durme, 2007), instance set expansion (Pennacchiotti and Pantel, 2009) and extraction of sets of similar entities (Jain and Pennacchiotti, 2010). Our work compares the usefulness of queries and query sessions for ranking class labels in extracted IsA repositories. 
It shows that query sessions produce betterranked class labels than isolated queries do. A task complementary to class label ranking is entity ranking (Billerbeck et al., 2010), also referred to as ranking for typed search (Demartini et al., 2009). The choice of search queries and query substitutions is often influenced by, and indicative of, various semantic relations holding among full queries or query terms (Jones et al., 2006). Semantic relations may be loosely defined, e.g., by exploring the acquisition of untyped, similarity-based relations from query logs (Baeza-Yates and Tiberi, 2007). In comparison, queries are used here to re-rank class labels capturing a well-defined type of open-domain relations, namely IsA relations. 6 Conclusion In an attempt to bridge the gap between information stated in documents and information requested 1614 in search queries, this study shows that inherentlynoisy queries are useful in re-ranking class labels extracted from Web documents for various instances, with query sessions leading to higher quality than isolated queries. Current work investigates the impact of ambiguous input instances (Vyas and Pantel, 2009) on the quality of the generated class labels. References R. Baeza-Yates and A. Tiberi. 2007. Extracting semantic relations from query logs. In Proceedings of the 13th ACM Conference on Knowledge Discovery and Data Mining (KDD-07), pages 76–85, San Jose, California. M. Banko, Michael J Cafarella, S. Soderland, M. Broadhead, and O. Etzioni. 2007. Open information extraction from the Web. In Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI-07), pages 2670–2676, Hyderabad, India. B. Billerbeck, G. Demartini, C. Firan, T. Iofciu, and R. Krestel. 2010. Ranking entities using Web search query logs. In Proceedings of the 14th European Conference on Research and Advanced Technology for Digital Libraries (ECDL-10), pages 273–281, Glasgow, Scotland. M. Cafarella, A. Halevy, D. Wang, E. Wu, and Y. Zhang. 2008. WebTables: Exploring the power of tables on the Web. In Proceedings of the 34th Conference on Very Large Data Bases (VLDB-08), pages 538–549, Auckland, New Zealand. G. Demartini, T. Iofciu, and A. de Vries. 2009. Overview of the INEX 2009 Entity Ranking track. In INitiative for the Evaluation of XML Retrieval Workshop, pages 254–264, Brisbane, Australia. O. Etzioni, M. Cafarella, D. Downey, A. Popescu, T. Shaked, S. Soderland, D. Weld, and A. Yates. 2005. Unsupervised named-entity extraction from the Web: an experimental study. Artificial Intelligence, 165(1):91–134. C. Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database and Some of its Applications. MIT Press. M. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of the 14th International Conference on Computational Linguistics (COLING-92), pages 539–545, Nantes, France. A. Jain and M. Pennacchiotti. 2010. Open entity extraction from Web search query logs. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING-10), pages 510–518, Beijing, China. R. Jones, B. Rey, O. Madani, and W. Greiner. 2006. Generating query substitutions. In Proceedings of the 15h World Wide Web Conference (WWW-06), pages 387– 396, Edinburgh, Scotland. Z. Kozareva, E. Riloff, and E. Hovy. 2008. Semantic class learning from the Web with hyponym pattern linkage graphs. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics (ACL-08), pages 1048–1056, Columbus, Ohio. M. 
Pas¸ca and B. Van Durme. 2007. What you seek is what you get: Extraction of class attributes from query logs. In Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI-07), pages 2832–2837, Hyderabad, India. M. Pas¸ca. 2010. The role of queries in ranking labeled instances extracted from text. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING-10), pages 955–962, Beijing, China. M. Pennacchiotti and P. Pantel. 2009. Entity extraction via ensemble semantics. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP-09), pages 238–247, Singapore. R. Snow, D. Jurafsky, and A. Ng. 2006. Semantic taxonomy induction from heterogenous evidence. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics (COLINGACL-06), pages 801–808, Sydney, Australia. P. Talukdar, J. Reisinger, M. Pas¸ca, D. Ravichandran, R. Bhagat, and F. Pereira. 2008. Weakly-supervised acquisition of labeled class instances using graph random walks. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing (EMNLP-08), pages 582–590, Honolulu, Hawaii. B. Van Durme and M. Pas¸ca. 2008. Finding cars, goddesses and enzymes: Parametrizable acquisition of labeled instances for open-domain information extraction. In Proceedings of the 23rd National Conference on Artificial Intelligence (AAAI-08), pages 1243– 1248, Chicago, Illinois. V. Vyas and P. Pantel. 2009. Semi-automatic entity set refinement. In Proceedings of the 2009 Conference of the North American Association for Computational Linguistics (NAACL-HLT-09), pages 290–298, Boulder, Colorado. 1615
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1616–1625, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Insights from Network Structure for Text Mining Zornitsa Kozareva and Eduard Hovy USC Information Sciences Institute 4676 Admiralty Way Marina del Rey, CA 90292-6695 {kozareva,hovy}@isi.edu Abstract Text mining and data harvesting algorithms have become popular in the computational linguistics community. They employ patterns that specify the kind of information to be harvested, and usually bootstrap either the pattern learning or the term harvesting process (or both) in a recursive cycle, using data learned in one step to generate more seeds for the next. They therefore treat the source text corpus as a network, in which words are the nodes and relations linking them are the edges. The results of computational network analysis, especially from the world wide web, are thus applicable. Surprisingly, these results have not yet been broadly introduced into the computational linguistics community. In this paper we show how various results apply to text mining, how they explain some previously observed phenomena, and how they can be helpful for computational linguistics applications. 1 Introduction Text mining / harvesting algorithms have been applied in recent years for various uses, including learning of semantic constraints for verb participants (Lin and Pantel, 2002) related pairs in various relations, such as part-whole (Girju et al., 2003), cause (Pantel and Pennacchiotti, 2006), and other typical information extraction relations, large collections of entities (Soderland et al., 1999; Etzioni et al., 2005), features of objects (Pasca, 2004) and ontologies (Carlson et al., 2010). They generally start with one or more seed terms and employ patterns that specify the desired information as it relates to the seed(s). Several approaches have been developed specifically for learning patterns, including guided pattern collection with manual filtering (Riloff and Shepherd, 1997) automated surface-level pattern induction (Agichtein and Gravano, 2000; Ravichandran and Hovy, 2002) probabilistic methods for taxonomy relation learning (Snow et al., 2005) and kernel methods for relation learning (Zelenko et al., 2003). Generally, the harvesting procedure is recursive, in which data (terms or patterns) gathered in one step of a cycle are used as seeds in the following step, to gather more terms or patterns. This method treats the source text as a graph or network, consisting of terms (words) as nodes and inter-term relations as edges. Each relation type induces a different network1. Text mining is a process of network traversal, and faces the standard problems of handling cycles, ranking search alternatives, estimating yield maxima, etc. The computational properties of large networks and large network traversal have been studied intensively (Sabidussi, 1966; Freeman, 1979; Watts and Strogatz, 1998) and especially, over the past years, in the context of the world wide web (Page et al., 1999; Broder et al., 2000; Kleinberg and Lawrence, 2001; Li et al., 2005; Clauset et al., 2009). Surprisingly, except in (Talukdar and Pereira, 2010), this work has not yet been related to text mining research in the computational linguistics community. The work is, however, relevant in at least two ways. 
It sometimes explains why text mining algo1These networks are generally far larger and more densely interconnected than the world wide web’s network of pages and hyperlinks. 1616 rithms have the limitations and thresholds that are empirically found (or suspected), and it may suggest ways to improve text mining algorithms for some applications. In Section 2, we review some related work. In Section 3 we describe the general harvesting procedure, and follow with an examination of the various statistical properties of implicit semantic networks in Section 4, using our implemented harvester to provide illustrative statistics. In Section 5 we discuss implications for computational linguistics research. 2 Related Work The Natural Language Processing knowledge harvesting community has developed a good understanding of how to harvests various kinds of semantic information and use this information to improve the performance of tasks such as information extraction (Riloff, 1993), textual entailment (Zanzotto et al., 2006), question answering (Katz et al., 2003), and ontology creation (Suchanek et al., 2007), among others. Researchers have focused on the automated extraction of semantic lexicons (Hearst, 1992; Riloff and Shepherd, 1997; Girju et al., 2003; Pasca, 2004; Etzioni et al., 2005; Kozareva et al., 2008). While clustering approaches tend to extract general facts, pattern based approaches have shown to produce more constrained but accurate lists of semantic terms. To extract this information, (Lin and Pantel, 2002) showed the effect of using different sizes and genres of corpora such as news and Web documents. The latter has been shown to provide broader and more complete information. Researchers outside computational linguistics have studied complex networks such as the World Wide Web, the Social Web, the network of scientific papers, among others. They have investigated the properties of these text-based networks with the objective of understanding their structure and applying this knowledge to determine node importance/centrality, connectivity, growth and decay of interest, etc. In particular, the ability to analyze networks, identify influential nodes, and discover hidden structures has led to important scientific and technological breakthroughs such as the discovery of communities of like-minded individuals (Newman and Girvan, 2004), the identification of influential people (Kempe et al., 2003), the ranking of scientists by their citation indexes (Radicchi et al., 2009), and the discovery of important scientific papers (Walker et al., 2006; Chen et al., 2007; Sayyadi and Getoor, 2009). Broder et al. (2000) demonstrated that the Web link structure has a “bow-tie” shape, while (2001) classified Web pages into authorities (pages with relevant information) and hubs (pages with useful references). These findings resulted in the development of the PageRank (Page et al., 1999) algorithm which analyzes the structure of the hyperlinks of Web documents to find pages with authoritative information. PageRank has revolutionized the whole Internet search society. However, no-one has studied the properties of the text-based semantic networks induced by semantic relations between terms with the objective of understanding their structure and applying this knowledge to improve concept discovery. 
Most relevant to this theme is the work of Steyvers and Tenenbaum (Steyvers and Tenenbaum, 2004), who studied three manually built lexical networks (association norms, WordNet, and Roget’s Thesaurus (Roget, 1911)) and proposed a model of the growth of the semantic structure over time. These networks are limited to the semantic relations among nouns. In this paper we take a step further to explore the statistical properties of semantic networks relating proper names, nouns, verbs, and adjectives. Understanding the semantics of nouns, verbs, and adjectives has been of great interest to linguists and cognitive scientists such as (Gentner, 1981; Levin and Somers, 1993; Gasser and Smith, 1998). We implement a general harvesting procedure and show its results for these word types. A fundamental difference with the work of (Steyvers and Tenenbaum, 2004) is that we study very large semantic networks built ‘naturally’ by (millions of) users rather than ‘artificially’ by a small set of experts. The large networks capture the semantic intuitions and knowledge of the collective mass. It is conceivable that an analysis of this knowledge can begin to form the basis of a large-scale theory of semantic meaning and its interconnections, support observation of the process of lexical development and usage in humans, and even suggest explanations of how knowledge is organized in our brains, especially when performed for differ1617 ent languages on the WWW. 3 Inducing Semantic Networks in the Web Text mining algorithms such as those mentioned above raise certain questions, such as: Why are some seed terms more powerful (provide a greater yield) than others?, How can one find high-yield terms?, How many steps does one need, typically, to learn all terms for a given relation?, Can one estimate the total eventual yield of a given relation?, and so on. On the face of it, one would need to know the structure of the network a priori to be able to provide answers. But research has shown that some surprising regularities hold. For example, in the text mining community, (Kozareva and Hovy, 2010b) have shown that one can obtain a quite accurate estimate of the eventual yield of a pattern and seed after only five steps of harvesting. Why is this? They do not provide an answer, but research from the network community does. To illustrate the properties of networks of the kind induced by semantic relations, and to show the applicability of network research to text harvesting, we implemented a harvesting algorithm and applied it to a representative set of relations and seeds in two languages. Since the goal of this paper is not the development of a new text harvesting algorithm, we implemented a version of an existing one: the so-called DAP (doubly-anchored pattern) algorithm (Kozareva et al., 2008), because it (1) is easy to implement, (2) requires minimum input (one pattern and one seed example), (3) achieves very high precision compared to existing methods (Pasca, 2004; Etzioni et al., 2005; Pasca, 2007), (4) enriches existing semantic lexical repositories such as WordNet and Yago (Suchanek et al., 2007), (5) can be formulated to learn semantic lexicons and relations for noun, verb and verb+preposition syntactic constructions; (6) functions equally well in different languages. Next we describe the knowledge harvesting procedure and the construction of the text-mined semantic networks. 
3.1 Harvesting to Induce Semantic Networks For a given semantic class of interest say singers, the algorithm starts with a seed example of the class, say Madonna. The seed term is inserted in the lexicosyntactic pattern “class such as seed and *”, which learns on the position of the ∗new terms of type class. The newly learned terms are then individually placed into the position of the seed in the pattern, and the bootstrapping process is repeated until no new terms are found. The output of the algorithm is a set of terms for the semantic class. The algorithm is implemented as a breadth-first search and its mechanism is described as follows: 1. Given: a language L={English, Spanish} a pattern Pi={such as, including, verb prep, noun} a seed term seed for Pi 2. Build a query for Pi using template Ti ‘class such as seed and *’, ‘class including seed and *’, ‘* and seed verb prep’, ‘* and seed noun’, ‘seed and * noun’ 3. Submit Ti to Yahoo! or other search engine 4. Extract terms occupying the * position 5. Feed terms from 4. into 2. 6. Repeat steps 2–5. until no new terms are found The output of the knowledge harvesting algorithm is a network of semantic terms interconnected by the semantic relation captured in the pattern. We can represent the traversed (implicit) network as a directed graph G(V, E) with nodes V (|V | = n) and edges E(|E| = m). A node u in the network corresponds to a term discovered during bootstrapping. An edge (u, v) ∈E represents an existing link between two terms. The direction of the edge indicates that the term v was generated by the term u. For example, given the sentence (where the pattern is in italics and the extracted term is underlined) “He loves singers such as Madonna and Michael Jackson”, two nodes Madonna and Michael Jackson with an edge e=(Madonna, Michael Jackson) would be created in the graph G. Figure 1 shows a small example of the singer network. The starting seed term Madonna is shown in red color and the harvested terms are in blue. 3.2 Data We harvested data from the Web for a representative selection of semantic classes and relations, of 1618 !"#$%%"& '()$%&*$+%& !,-+".(& *"-/0$%& 1.(,%.&2,$%& '33"& 45,%-.& 6.7$%-.& 8"57& 9,+"%%"& :5.##,.& !.5-;57& <(,-,"&8.70& =+"/,5"& 2>>& 9,-/.7& !"5?%& =).@,.& A$%#.5& B,%"&B;5%.5& 2$((7&4")$%& Figure 1: Harvesting Procedure. the type used in (Etzioni et al., 2005; Pasca, 2007; Kozareva and Hovy, 2010a): • semantic classes that can be learned using different seeds (e.g., “singers such as Madonna and *” and “singers such as Placido Domingo and *”); • semantic classes that are expressed through different lexico-syntactic patterns (e.g., “weapons such as bombs and *” and “weapons including bombs and *”); • verbs and adjectives characterizing the semantic class (e.g., “expensive and * car”, “dogs run and *”); • semantic relations with more complex lexicosyntactic structure (e.g., “* and Easyjet fly to”, “* and Sam live in”); • semantic classes that are obtained in different languages, such as English and Spanish (e.g., “singers such as Madonna and *” and “cantantes como Madonna y *”); While most of these variations have been explored in individual papers, we have found no paper that covers them all, and none whatsoever that uses verbs and adjectives as seeds. Using the above procedure to generate the data, each pattern was submitted as a query to Yahoo!Boss. For each query the top 1000 text snippets were retrieved. The algorithm ran until exhaustion. 
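To make the procedure above concrete, the following is a minimal sketch of the breadth-first DAP loop and of the directed graph it induces. It is not the authors' implementation: `search_snippets` is a stand-in for the Yahoo!Boss query step (top 1000 snippets per query), and the regular expression only illustrates the "class such as seed and *" template, not the full pattern set or the part-of-speech-based term extraction.

```python
import re
from collections import deque

def search_snippets(query):
    """Stand-in for the search-engine call; it should return the top-1000
    text snippets retrieved for the query (Yahoo!Boss in the paper)."""
    raise NotImplementedError

def harvest(class_name, seed):
    """Breadth-first DAP bootstrapping: every harvested term is placed back
    into the pattern until no new terms are found.  Returns the directed
    semantic network as a dict mapping a term u to the set of terms it
    generated, i.e. an edge u -> v means u discovered v."""
    graph = {seed: set()}
    seen = {seed}
    queue = deque([seed])
    while queue:
        term = queue.popleft()
        graph.setdefault(term, set())
        # '<class> such as <term> and *': capture the term in the * position
        pattern = re.compile(r"%s\s+such as\s+%s\s+and\s+([A-Z][\w.\- ]+)"
                             % (re.escape(class_name), re.escape(term)))
        for snippet in search_snippets("%s such as %s and" % (class_name, term)):
            for match in pattern.finditer(snippet):
                new_term = match.group(1).strip()
                graph[term].add(new_term)     # term discovered new_term
                if new_term not in seen:      # bootstrap until exhaustion
                    seen.add(new_term)
                    queue.append(new_term)
    return graph

# e.g. harvest("singers", "Madonna") would traverse the singer network of Figure 1
```

Each pattern in Table 1 below corresponds to one such run, yielding one semantic network per pattern and initial seed.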
In total, we collected 10GB of data which was partof-speech tagged with Treetagger (Schmid, 1994) and used for the semantic term extraction. Table 1 summarizes the number of nodes and edges learned for each semantic network using pattern Pi and the initial seed shown in italics. Lexico-Syntactic Pattern Nodes Edges P1=“singers such as Madonna and *” 1115 1942 P2=“singers such as Placido Domingo and *” 815 1114 P3=“emotions including anger and *” 113 250 P4=“emotions such as anger and *” 748 2547 P5=“diseases such as malaria and *” 3168 6752 P6=“drugs such as ibuprofen and *” 2513 9428 P7=“expensive and * cars” 4734 22089 P8=“* and tasty fruits” 1980 7874 P9=“whales swim and *” 869 2163 P10=“dogs chase and *” 4252 20212 P11=“Britney Spears dances and *” 354 540 P12=“John reads and *” 3894 18545 P13=“* and Easyjet fly to” 3290 6480 P14=“* and Charlie work for” 2125 3494 P15=“* and Sam live in” 6745 24348 P16=“cantantes como Madonna y *” 240 318 P17=“gente como Jorge y *” 572 701 Table 1: Size of the Semantic Networks. 4 Statistical Properties of Text-Mined Semantic Networks In this section we apply a range of relevant measures from the network analysis community to the networks described above. 4.1 Centrality The first statistical property we explore is centrality. It measures the degree to which the network structure determines the importance of a node in the network (Sabidussi, 1966; Freeman, 1979). We explore the effect of two centrality measures: indegree and outdegree. The indegree of a node u denoted as indegree(u)=P(v, u) considers the sum of all incoming edges to u and captures the ability of a semantic term to be discovered by other semantic terms. The outdegree of a node u denoted as outdegree(u)=P(u, v) considers the number of outgoing edges of the node u and measures the ability of a semantic term to discover new terms. Intuitively, the more central the node u is, the more confident we are that it is a correct term. Since harvesting algorithms are notorious for extracting erroneous information, we use the two centrality measures to rerank the harvested elements. Table 2 shows the accuracy2 of the singer semantic terms at different ranks using the in and out degree measures. Consistently, outdegree outperforms indegree and reaches higher accuracy. This 2Accuracy is calculated as the number of correct terms at rank R divided by the total number of terms at rank R. 1619 shows that for the text-mined semantic networks, the ability of a term to discover new terms is more important than the ability to be discovered. @rank in-degree out-degree 10 .92 1.0 25 .91 1.0 50 .90 .97 75 .90 .96 100 .89 .96 150 .88 .95 Table 2: Accuracy of the Singer Terms. This poses the question “What are the terms with high and low outdegree?”. Table 3 shows the top and bottom 10 terms of the semantic class. Semantic Class top 10 outDegree bottom 10 outDegree Singers Frank Sinatra Alanis Morisette Ella Fitzgerald Christine Agulera Billie Holiday Buffy Sainte-Marie Britney Spears Cece Winans Aretha Franklin Wolfman Jack Michael Jackson Billie Celebration Celine Dion Alejandro Sanz Beyonce France Gall Bessie Smith Peter Joni Mitchell Sarah Table 3: Singer Term Ranking with Centrality Measures. The nodes with high outdegree correspond to famous or contemporary singers. 
The lower-ranked nodes are mostly spelling errors such as Alanis Morisette and Christine Agulera, less known singers such as Buffy Sainte-Marie and Cece Winans, nonAmerican singers such as Alejandro Sanz and France Gall, extractions due to part-of-speech tagging errors such as Billie Celebration, and general terms such as Peter and Sarah. Potentially, knowing which terms have a high outdegree allows one to rerank candidate seeds for more effective harvesting. 4.2 Power-law Degree Distribution We next study the degree distributions of the networks. Similarly to the Web (Broder et al., 2000) and social networks like Orkut and Flickr, the textmined semantic networks also exhibit a power-law distribution. This means that while a few terms have a significantly high degree, the majority of the semantic terms have small degree. Figure 2 shows the indegree and outdegree distributions for different semantic classes, lexico-syntactic patterns, and languages (English and Spanish). For each semantic network, we plot the best-fitting power-law function (Clauset et al., 2009) which fits well all degree distributions. Table 4 shows the power-law exponent values for all text-mined semantic networks. Patt. γin γout Patt. γin γout P1 2.37 1.27 P10 1.65 1.12 P2 2.25 1.21 P11 2.42 1.41 P3 2.20 1.76 P12 1.60 1.13 P4 2.28 1.18 P13 2.26 1.20 P5 2.49 1.18 P14 2.43 1.25 P6 2.42 1.30 P15 2.51 1.43 P7 1.95 1.20 P16 2.74 1.31 P8 1.94 1.07 P17 2.90 1.20 P9 1.96 1.30 Table 4: Power-Law Exponents of Semantic Networks. It is interesting to note that the indegree powerlaw exponents for all semantic networks fall within the same range (γin ≈2.4), and similarly for the outdegree exponents (γout ≈1.3). However, the values of the indegree and outdegree exponents differ from each other. This observation is consistent with Web degree distributions (Broder et al., 2000). The difference in the distributions can be explained by the link asymmetry of semantic terms: A discovering B does not necessarily mean that B will discover A. In the text-mined semantic networks, this asymmetry is caused by patterns of language use, such as the fact that people use first adjectives of the size and then of the color (e.g., big red car), or prefer to place male before female proper names. Harvesting patterns should take into account this tendency. 4.3 Sparsity Another relevant property of the semantic networks concerns sparsity. Following Preiss (Preiss, 1999), a graph is sparse if |E| = O(|V |k) and 1 < k < 2, where |E| is the number of edges and |V | is the number of nodes, otherwise the graph is dense. For the studied text-semantic networks, k is ≈1.08. Sparsity can be also captured through the density of the semantic network which is computed as |E| V (V −1). All networks have low density which suggests that the networks exhibit a sparse connectivity pattern. On average a node (semantic term) is connected to a very small percentage of other nodes. Similar behavior was reported for the WordNet and Roget’s semantic networks (Steyvers and Tenenbaum, 2004). 
1620 0 50 100 150 200 250 300 350 400 450 500 0 10 20 30 40 50 60 70 80 90 Number of Nodes Indegree 'emotions' power-law exponent=2.28 0 20 40 60 80 100 120 0 20 40 60 80 100 120 Number of Nodes Outdegree 'emotions' power-law exponent=1.18 0 500 1000 1500 2000 2500 0 10 20 30 40 50 60 Number of Nodes Indegree 'travel_to' power-law exponent=2.26 0 100 200 300 400 500 600 700 0 5 10 15 20 25 30 35 Number of Nodes Outdegree 'fly_to' power-law exponent=1.20 0 50 100 150 200 250 300 350 400 450 500 1 2 3 4 5 6 7 8 Number of Nodes Indegree 'gente' power-law exponent=2.90 0 20 40 60 80 100 120 0 2 4 6 8 10 12 14 Number of Nodes Outdegree 'gente' power-law exponent=1.20 Figure 2: Degree Distributions of Semantic Networks. 4.4 Connectedness For every network, we computed the strongly connected component (SCC) such that for all nodes (semantic terms) in the SCC, there is a path from any node to another node in the SCC considering the direction of the edges between the nodes. For each network, we found that there is only one SCC. The size of the component is shown in Table 5. Unlike WordNet and Roget’s semantic networks where the SCC consists 96% of all semantic terms, in the text-mined semantic networks only 12 to 55% of the terms are in the SCC. This shows that not all nodes can reach (discover) every other node in the network. This also explains the findings of (Kozareva et al., 2008; Vyas et al., 2009) why starting with a good seed is important. 4.5 Path Lengths and Diameter Next, we describe the properties of the shortest paths between the semantic terms in the SCC. The distance between two nodes in the SCC is measured as the length of the shortest path connecting the terms. The direction of the edges between the terms is taken into consideration. The average distance is the average value of the shortest path lengths over all pairs of nodes in the SCC. The diameter of the SCC is calculated as the maximum distance over all pairs of nodes (u, v), such that a node v is reachable from node u. Table 5 shows the average distance and the diameter of the semantic networks. Patt. #nodes in SCC SCC Average Distance SCC Diameter P1 364 (.33) 5.27 16 P2 285 (.35) 4.65 13 P3 48 (.43) 2.85 6 P4 274 (.37) 2.94 7 P5 1249 (.38) 5.99 17 P6 1471 (.29) 4.82 15 P7 2255 (.46 ) 3.51 11 P8 1012 (.50) 3.87 11 P9 289 (.33) 4.93 13 P10 2342 (.55) 4.50 12 P11 87 (.24) 5.00 11 P12 1967 (.51) 3.20 13 P13 1249 (.38) 4.75 13 P14 608 (.29) 7.07 23 P15 1752 (.26) 5.32 15 P16 56 (.23) 4.79 12 P17 69 (.12 ) 5.01 13 Table 5: SCC, SCC Average Distance and SCC Diameter of the Semantic Networks. The diameter shows the maximum number of steps necessary to reach from any node to any other, while the average distance shows the number of steps necessary on average. Overall, all networks have very short average path lengths and small diameters that are consistent with Watt’s finding for small-world networks. Therefore, the yield of harvesting seeds can be predicted within five steps explaining (Kozareva and Hovy, 2010b; Vyas et al., 2009). We also compute for any randomly selected node in the semantic network on average how many hops (steps) are necessary to reach from one node to another. Figure 3 shows the obtained results for some of the studied semantic networks. 4.6 Clustering The clustering coefficient (C) is another measure to study the connectivity structure of the networks (Watts and Strogatz, 1998). This measure captures the probability that the two neighbors of a randomly selected node will be neighbors. 
The clustering coefficient of a node u is calculated as Cu= |eij| ku(ku−1) 1621 0 10 20 30 40 50 60 70 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Number of Nodes Distance (Hops) Britney Spears (verb harvesting) 0 50 100 150 200 250 300 350 400 1 2 3 4 5 6 7 8 9 10 11 Number of Nodes Distance (Hops) fruits (adjective harvesting) 0 50 100 150 200 250 2 4 6 8 10 12 14 16 18 20 22 24 Number of Nodes Distance (Hops) work for 0 5 10 15 20 25 30 2 4 6 8 10 12 14 16 18 20 Number of Nodes Distance (Hops) gente Figure 3: Hop Plot of the Semantic Networks. : vi, vj ∈Nu, eij ∈E, where ku is the total degree of the node u and Nu is the neighborhood of u. The clustering coefficient C for the whole semantic network is the average clustering coefficient of all its nodes, C= 1 n P Ci. The value of the clustering coefficient ranges between [0, 1], where 0 indicates that the nodes do not have neighbors which are themselves connected, while 1 indicates that all nodes are connected. Table 6 shows the clustering coefficient for all text-mined semantic networks together with the number of closed and open triads3. The analysis suggests the presence of a strong local cluster, however there are few possibilities to form overlapping neighborhoods of nodes. The clustering coefficient of WordNet (Steyvers and Tenenbaum, 2004) is similar to those of the text-mined networks. 4.7 Joint Degree Distribution In social networks, understanding the preferential attachment of nodes is important to identify the speed with which epidemics or gossips spread. Similarly, we are interested in understanding how the nodes of the semantic networks connect to each other. For this purpose, we examine the Joint Degree Distribution (JDD) (Li et al., 2005; Newman, 2003). JDD is approximated by the degree correlation function knn which maps the outdegree and the average 3A triad is three nodes that are connected by either two (open triad) or three (closed triad) directed ties. Patt. C ClosedTriads OpenTriads P1 .01 14096 (.97) 388 (.03) P2 .01 6487 (.97) 213 (.03) P3 .30 1898 (.94) 129 (.06) P4 .33 60734 (.94) 3944 (.06) P5 .10 79986 (.97) 2321 (.03) P6 .11 78716 (.97) 2336 (.03) P7 .17 910568 (.95) 43412 (.05) P8 .19 21138 (.95) 10728 (.05) P9 .20 27830 (.95) 1354 (.05) P10 .15 712227 (.96) 62101(.04) P11 .09 3407 (.98) 63 (.02) P12 .15 734724 (.96) 32517 (.04) P13 .06 66162 (.99) 858 (.01) P14 .05 28216 (.99) 408 (.01) P15 .09 1336679 (.97) 47110 (.03) P16 .09 1525 (.98) 37 ( .02) P17 .05 2222 (.99) 21 (.01) Table 6: Clustering Coefficient of the Semantic Networks. indegree of all nodes connected to a node with that outdegree. High values of knn indicate that high-degree nodes tend to connect to other highdegree nodes (forming a “core” in the network), while lower values of knn suggest that the highdegree nodes tend to connect to low-degree ones. Figure 4 shows the knn for the singer, whale, live in, cars, cantantes, and gente networks. The figure plots the outdegree and the average indegree of the semantic terms in the networks on a log-log scale. We can see that for all networks the high-degree nodes tend to connect to other high-degree ones. This explains why text mining algorithms should focus their effort on high-degree nodes. 4.8 Assortivity The property of the nodes to connect to other nodes with similar degrees can be captured through the assortivity coefficient r (Newman, 2003). The range of r is [−1, 1]. 
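The connectivity measures of Sections 4.4–4.8 (strongly connected component, average distance and diameter within the SCC, clustering coefficient, and the assortativity coefficient r) can be reproduced with standard graph tooling. The sketch below is ours, not the paper's code: it assumes the edge list of one harvested network and uses NetworkX, and its clustering value is the undirected approximation of the per-node definition given above.

```python
import networkx as nx

def network_statistics(edges):
    """edges: iterable of (u, v) pairs, meaning term u discovered term v."""
    G = nx.DiGraph()
    G.add_edges_from(edges)

    # 4.4 Connectedness: the (largest) strongly connected component
    scc_nodes = max(nx.strongly_connected_components(G), key=len)
    scc = G.subgraph(scc_nodes)

    return {
        "nodes": G.number_of_nodes(),
        "edges": G.number_of_edges(),
        "scc_size": scc.number_of_nodes(),
        # 4.5 Average distance and diameter, measured inside the SCC
        "scc_avg_distance": nx.average_shortest_path_length(scc),
        "scc_diameter": nx.diameter(scc),
        # 4.6 Clustering coefficient C (undirected approximation)
        "clustering": nx.average_clustering(G.to_undirected()),
        # 4.8 Assortativity coefficient r over node degrees
        "assortativity": nx.degree_assortativity_coefficient(G),
    }
```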
A positive assortivity coefficient means that the nodes tend to connect to nodes of similar degree, while negative coefficient means that nodes are likely to connect to nodes with degree very different from their own. We find that the assortivitiy coefficient of our semantic networks is positive, ranging from 0.07 to 0.20. In this respect, the semantic networks differ from the Web, which has a negative assortivity (Newman, 2003). This implies a difference in text mining and web search traversal strategies: since starting from a highly-connected seed term will tend to lead to other highly-connected terms, text mining algorithms should prefer depthfirst traversal, while web search algorithms starting 1622 1 10 100 1 10 100 knn Outdegree singer (seed is Madonna) 1 10 100 1 10 100 knn Outdegree whale (verb harvesting) 1 10 100 1 10 100 knn Outdegree live in 1 10 100 1 10 100 knn Outdegree cars (adjective harvesting) 1 10 1 10 knn Outdegree cantantes 1 10 1 10 knn Outdegree gente Figure 4: Joint Degree Distribution of the Semantic Networks. from a highly-connected seed page should prefer a breadth-first strategy. 5 Discussion The above studies show that many of the properties discovered of the network formed by the web hold also for the networks induced by semantic relations in text mining applications, for various semantic classes, semantic relations, and languages. We can therefore apply some of the research from network analysis to text mining. The small-world phenomenon, for example, holds that any node is connected to any other node in at most six steps. Since as shown in Section 4.5 the semantic networks also exhibit this phenomenon, we can explain the observation of (Kozareva and Hovy, 2010b) that one can quite accurately predict the relative ‘goodness’ of a seed term (its eventual total yield and the number of steps required to obtain that) within five harvesting steps. We have shown that due to the strongly connected components in text mining networks, not all elements within the harvested graph can discover each other. This implies that harvesting algorithms have to be started with several seeds to obtain adequate Recall (Vyas et al., 2009). We have shown that centrality measures can be used successfully to rank harvested terms to guide the network traversal, and to validate the correctness of the harvested terms. In the future, the knowledge and observations made in this study can be used to model the lexical usage of people over time and to develop new semantic search technology. 6 Conclusion In this paper we describe the implicit ‘hidden’ semantic network graph structure induced over the text of the web and other sources by the semantic relations people use in sentences. We describe how term harvesting patterns whose seed terms are harvested and then applied recursively can be used to discover these semantic term networks. Although these networks differ considerably from the web in relation density, type, and network size, we show, somewhat surprisingly, that the same power-law, smallworld effect, transitivity, and most other characteristics that apply to the web’s hyperlinked network structure hold also for the implicit semantic term graphs—certainly for the semantic relations and languages we have studied, and most probably for almost all semantic relations and human languages. 
This rather interesting observation leads us to surmise that the hyperlinks people create in the web are of essentially the same type as the semantic relations people use in normal sentences, and that they form an extension of normal language that was not needed before because people did not have the ability within the span of a single sentence to ‘embed’ structures larger than a clause—certainly not a whole other page’s worth of information. The principal exception is the academic citation reference (lexicalized as “see”), which is not used in modern webpages. Rather, the ‘lexicalization’ now used is a formatting convention: the hyperlink is colored and often underlined, facilities offered by computer screens but not available to speech or easy in traditional typesetting. 1623 Acknowledgments We acknowledge the support of DARPA contract number FA8750-09-C-3705 and NSF grant IIS0429360. We would like to thank Sujith Ravi for his useful comments and suggestions. References Eugene Agichtein and Luis Gravano. 2000. Snowball: Extracting relations from large plain-text collections. pages 85–94. Andrei Broder, Ravi Kumar, Farzin Maghoul, Prabhakar Raghavan, Sridhar Rajagopalan, Raymie Stata, Andrew Tomkins, and Janet Wiener. 2000. Graph structure in the web. Comput. Netw., 33(1-6):309–320. Andrew Carlson, Justin Betteridge, Richard C. Wang, Estevam R. Hruschka Jr., and Tom M. Mitchell. 2010. Coupled semi-supervised learning for information extraction. pages 101–110. Peng Chen, Huafeng Xie, Sergei Maslov, and Sid Redner. 2007. Finding scientific gems with google’s pagerank algorithm. Journal of Informetrics, 1(1):8–15, January. Aaron Clauset, Cosma Rohilla Shalizi, and M. E. J. Newman. 2009. Power-law distributions in empirical data. SIAM Rev., 51(4):661–703. Oren Etzioni, Michael Cafarella, Doug Downey, AnaMaria Popescu, Tal Shaked, Stephen Soderland, Daniel S. Weld, and Alexander Yates. 2005. Unsupervised named-entity extraction from the web: an experimental study. Artificial Intelligence, 165(1):91–134, June. Linton Freeman. 1979. Centrality in social networks conceptual clarification. Social Networks, 1(3):215– 239. Michael Gasser and Linda B. Smith. 1998. Learning nouns and adjectives: A connectionist account. In Language and Cognitive Processes, pages 269–306. Demdre Gentner. 1981. Some interesting differences between nouns and verbs. Cognition and Brain Theory, pages 161–178. Roxana Girju, Adriana Badulescu, and Dan Moldovan. 2003. Learning semantic constraints for the automatic discovery of part-whole relations. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, pages 1–8. Marti Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of the 14th conference on Computational linguistics, pages 539– 545. Boris Katz, Jimmy Lin, Daniel Loreto, Wesley Hildebrandt, Matthew Bilotti, Sue Felshin, Aaron Fernandes, Gregory Marton, and Federico Mora. 2003. Integrating web-based and corpus-based techniques for question answering. In Proceedings of the twelfth text retrieval conference (TREC), pages 426–435. David Kempe, Jon Kleinberg, and ´Eva Tardos. 2003. Maximizing the spread of influence through a social network. In KDD ’03: Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 137–146. Jon Kleinberg and Steve Lawrence. 2001. The structure of the web. Science, 29:1849–1850. Zornitsa Kozareva and Eduard Hovy. 2010a. 
Learning arguments and supertypes of semantic relations using recursive patterns. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL 2010, pages 1482–1491, July. Zornitsa Kozareva and Eduard Hovy. 2010b. Not all seeds are equal: Measuring the quality of text mining seeds. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 618–626. Zornitsa Kozareva, Ellen Riloff, and Eduard Hovy. 2008. Semantic class learning from the web with hyponym pattern linkage graphs. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics ACL-08: HLT, pages 1048–1056. Beth Levin and Harold Somers. 1993. English verb classes and alternations: A preliminary investigation. Lun Li, David Alderson, Reiko Tanaka, John C. Doyle, and Walter Willinger. 2005. Towards a Theory of Scale-Free Graphs: Definition, Properties, and Implications (Extended Version). Internet Mathematica, 2(4):431–523. Dekang Lin and Patrick Pantel. 2002. Concept discovery from text. In Proc. of the 19th international conference on Computational linguistics, pages 1–7. Mark E. Newman and Michelle Girvan. 2004. Finding and evaluating community structure in networks. Physical Review, 69(2). Mark Newman. 2003. Mixing patterns in networks. Physical Review E, 67. Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The pagerank citation ranking: Bringing order to the web. Patrick Pantel and Marco Pennacchiotti. 2006. Espresso: Leveraging generic patterns for automatically harvesting semantic relations. pages 113–120. Marius Pasca. 2004. Acquisition of categorized named entities for web search. In Proceedings of the thirteenth ACM international conference on Information and knowledge management, pages 137–145. 1624 Marius Pasca. 2007. Weakly-supervised discovery of named entities using web search queries. In Proceedings of the Sixteenth ACM Conference on Information and Knowledge Management, CIKM 2007, pages 683– 690. Bruno R. Preiss. 1999. Data structures and algorithms with object-oriented design patterns in C++. Filippo Radicchi, Santo Fortunato, Benjamin Markines, and Alessandro Vespignani. 2009. Diffusion of scientific credits and the ranking of scientists. In Phys. Rev. E 80, 056103. Deepack Ravichandran and Eduard H. Hovy. 2002. Learning surface text patterns for a question answering system. pages 41–47. Ellen Riloff and Jessica Shepherd. 1997. A corpus-based approach for building semantic lexicons. In Proceedings of the Empirical Methods for Natural Language Processing, pages 117–124. Ellen Riloff. 1993. Automatically constructing a dictionary for information extraction tasks. pages 811–816. Peter Mark Roget. 1911. Roget’s thesaurus of English Words and Phrases. New York Thomas Y. Crowell company. Gert Sabidussi. 1966. The centrality index of a graph. Psychometrika, 31(4):581–603. Hassan Sayyadi and Lise Getoor. 2009. Future rank: Ranking scientific articles by predicting their future pagerank. In 2009 SIAM International Conference on Data Mining (SDM09). Helmut Schmid. 1994. Probabilistic part-of-speech tagging using decision trees. In Proceedings of the International Conference on New Methods in Language Processing, pages 44–49. Rion Snow, Daniel Jurafsky, and Andrew Y. Ng. 2005. Learning syntactic patterns for automatic hypernym discovery. pages 1297–1304. Stephen Soderland, Claire Cardie, and Raymond Mooney. 1999. 
Learning information extraction rules for semi-structured and free text. Machine Learning, 34(1-3), pages 233–272. Mark Steyvers and Joshua B. Tenenbaum. 2004. The large-scale structure of semantic networks: Statistical analyses and a model of semantic growth. Cognitive Science, 29:41–78. Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowledge. In WWW ’07: Proceedings of the 16th international conference on World Wide Web, pages 697–706. Partha Pratim Talukdar and Fernando Pereira. 2010. Graph-based weakly-supervised methods for information extraction and integration. pages 1473–1481. Vishnu Vyas, Patrick Pantel, and Eric Crestan. 2009. Helping editors choose better seed sets for entity set expansion. In Proceedings of the 18th ACM Conference on Information and Knowledge Management, CIKM, pages 225–234. Dylan Walker, Huafeng Xie, Koon-Kiu Yan, and Sergei Maslov. 2006. Ranking scientific publications using a simple model of network traffic. December. Duncan Watts and Steven Strogatz. 1998. Collective dynamics of ’small-world’ networks. Nature, 393(6684):440–442. Fabio Massimo Zanzotto, Marco Pennacchiotti, and Maria Teresa Pazienza. 2006. Discovering asymmetric entailment relations between verbs using selectional preferences. In ACL-44: Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 849–856. Dmitry Zelenko, Chinatsu Aone, Anthony Richardella, Jaz K, Thomas Hofmann, Tomaso Poggio, and John Shawe-taylor. 2003. Kernel methods for relation extraction. Journal of Machine Learning Research 3. 1625
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1626–1635, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Event Extraction as Dependency Parsing David McClosky, Mihai Surdeanu, and Christopher D. Manning Department of Computer Science Stanford University Stanford, CA 94305 {mcclosky,mihais,manning}@stanford.edu Abstract Nested event structures are a common occurrence in both open domain and domain specific extraction tasks, e.g., a “crime” event can cause a “investigation” event, which can lead to an “arrest” event. However, most current approaches address event extraction with highly local models that extract each event and argument independently. We propose a simple approach for the extraction of such structures by taking the tree of event-argument relations and using it directly as the representation in a reranking dependency parser. This provides a simple framework that captures global properties of both nested and flat event structures. We explore a rich feature space that models both the events to be parsed and context from the original supporting text. Our approach obtains competitive results in the extraction of biomedical events from the BioNLP’09 shared task with a F1 score of 53.5% in development and 48.6% in testing. 1 Introduction Event structures in open domain texts are frequently highly complex and nested: a “crime” event can cause an “investigation” event, which can lead to an “arrest” event (Chambers and Jurafsky, 2009). The same observation holds in specific domains. For example, the BioNLP’09 shared task (Kim et al., 2009) focuses on the extraction of nested biomolecular events, where, e.g., a REGULATION event causes a TRANSCRIPTION event (see Figure 1a for a detailed example). Despite this observation, many stateof-the-art supervised event extraction models still extract events and event arguments independently, ignoring their underlying structure (Bj¨orne et al., 2009; Miwa et al., 2010b). In this paper, we propose a new approach for supervised event extraction where we take the tree of relations and their arguments and use it directly as the representation in a dependency parser (rather than conventional syntactic relations). Our approach is conceptually simple: we first convert the original representation of events and their arguments to dependency trees by creating dependency arcs between event anchors (phrases that anchor events in the supporting text) and their corresponding arguments.1 Note that after conversion, only event anchors and entities remain. Figure 1 shows a sentence and its converted form from the biomedical domain with four events: two POSITIVE REGULATION events, anchored by the phrase “acts as a costimulatory signal,” and two TRANSCRIPTION events, both anchored on “gene transcription.” All events take either protein entity mentions (PROT) or other events as arguments. The latter is what allows for nested event structures. Existing dependency parsing models can be adapted to produce these semantic structures instead of syntactic dependencies. We built a global reranking parser model using multiple decoders from MSTParser (McDonald et al., 2005; McDonald et al., 2005b). The main contributions of this paper are the following: 1. We demonstrate that parsing is an attractive approach for extracting events, both nested and otherwise. 1While our approach only works on trees, we show how we can handle directed acyclic graphs in Section 5. 
1626 (a) Original sentence with nested events (b) After conversion to event dependencies Figure 1: Nested events in the text fragment: “...the HTLV-1 transactivator protein, tax, acts as a costimulatory signal for GM-CSF and IL-2 gene transcription ...” Throughout this paper, bold text indicates instances of event anchors and italicized text denotes entities (PROTEINs in the BioNLP’09 domain). Note that in (a) there are two copies of each type of event, which are merged to single nodes in the dependency tree (Section 3.1). 2. We propose a wide range of features for event extraction. Our analysis indicates that features which model the global event structure yield considerable performance improvements, which proves that modeling event structure jointly is beneficial. 3. We evaluate on the biomolecular event corpus from the the BioNLP’09 shared task and show that our approach obtains competitive results. 2 Related Work The pioneering work of Miller et al. (1997) was the first, to our knowledge, to propose parsing as a framework for information extraction. They extended the syntactic annotations of the Penn Treebank corpus (Marcus et al., 1993) with entity and relation mentions specific to the MUC-7 evaluation (Chinchor et al., 1997) — e.g., EMPLOYEE OF relations that hold between person and organization named entities — and then trained a generative parsing model over this combined syntactic and semantic representation. In the same spirit, Finkel and Manning (2009) merged the syntactic annotations and the named entity annotations of the OntoNotes corpus (Hovy et al., 2006) and trained a discriminative parsing model for the joint problem of syntactic parsing and named entity recognition. However, both these works require a unified annotation of syntactic and semantic elements, which is not always feasible, and focused only on named entities and binary relations. On the other hand, our approach focuses on event structures that are nested and have an arbitrary number of arguments. We do not need a unified syntactic and semantic representation (but we can and do extract features from the underlying syntactic structure of the text). Finkel and Manning (2009b) also proposed a parsing model for the extraction of nested named entity mentions, which, like this work, parses just the corresponding semantic annotations. In this work, we focus on more complex structures (events instead of named entities) and we explore more global features through our reranking layer. In the biomedical domain, two recent papers proposed joint models for event extraction based on Markov logic networks (MLN) (Riedel et al., 2009; Poon and Vanderwende, 2010). Both works propose elegant frameworks where event anchors and arguments are jointly predicted for all events in the same sentence. One disadvantage of MLN models is the requirement that a human expert develop domainspecific predicates and formulas, which can be a cumbersome process because it requires thorough domain understanding. On the other hand, our approach maintains the joint modeling advantage, but our model is built over simple, domain-independent features. We also propose and analyze a richer feature space that captures more information on the global event structure in a sentence. Furthermore, since our approach is agnostic to the parsing model used, it could easily be tuned for various scenarios, e.g., models with lower inference overhead such as shift-reduce parsers. 
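Before the approach is detailed in Section 3, the target representation can be made concrete on the Figure 1 fragment. The encoding below is purely illustrative — our reading of Figure 1(b), not data or code from the paper — with each multi-word anchor reduced to a representative phrase and the two event copies of each type already merged.

```python
# Nodes of the event-dependency tree: event anchors and PROTEIN mentions
# from the Figure 1 fragment (plus the virtual ROOT introduced in Section 3.1).
nodes = {
    "acts": "POSITIVE_REGULATION",           # anchor "acts as a costimulatory signal"
    "gene transcription": "TRANSCRIPTION",
    "tax": "PROTEIN",
    "GM-CSF": "PROTEIN",
    "IL-2": "PROTEIN",
}

# Labeled arcs (head, slot, modifier); an event serving as the argument of
# another event is what makes the structure nested.
edges = [
    ("ROOT", "ROOT-LABEL", "acts"),                  # top-level event
    ("acts", "CAUSE", "tax"),
    ("acts", "THEME", "gene transcription"),         # event-as-argument
    ("gene transcription", "THEME", "GM-CSF"),
    ("gene transcription", "THEME", "IL-2"),
]
```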
Our work is conceptually close to the recent CoNLL shared tasks on semantic role labeling, where the predicate frames were converted to se1627 Events
to
Dependencies
Parser
1
…
Reranker
Dependencies
to
Events
Parser
k
Dependencies
to
Events
Event
Trigger
Recognizer
En8ty
Recognizer
Figure 2: Overview of the approach. Rounded rectangles indicate domain-independent components; regular rectangles mark domain-specific modules; blocks in dashed lines surround components not necessary for the domain presented in this paper. mantic dependencies between predicates and their arguments (Surdeanu et al., 2008; Hajic et al., 2009). In this representation the dependency structure is a directed acyclic graph (DAG), i.e., the same node can be an argument to multiple predicates, and there are no explicit dependencies between predicates. Due to this representation, all joint models proposed for semantic role labeling handle semantic frames independently. 3 Approach Figure 2 summarizes our architecture. Our approach converts the original event representation to dependency trees containing both event anchors and entity mentions, and trains a battery of parsers to recognize these structures. The trees are built using event anchors predicted by a separate classifier. In this work, we do not discuss entity recognition because in the BioNLP’09 domain used for evaluation entities (PROTEINs) are given (but including entity recognition is an obvious extension of our model). Our parsers are several instances of MSTParser2 (McDonald et al., 2005; McDonald et al., 2005b) configured with different decoders. However, our approach is agnostic to the actual parsing models used and could easily be adapted to other dependency parsers. The output from the reranking parser is 2http://sourceforge.net/projects/mstparser/ converted back to the original event representation and passed to a reranker component (Collins, 2000; Charniak and Johnson, 2005), tailored to optimize the task-specific evaluation metric. Note that although we use the biomedical event domain from the BioNLP’09 shared task to illustrate our work, the core of our approach is almost domain independent. Our only constraints are that each event mention be activated by a phrase that serves as an event anchor, and that the event-argument structures be mapped to a dependency tree. The conversion between event and dependency structures and the reranker metric are the only domain dependent components in our approach. 3.1 Converting between Event Structures and Dependencies As in previous work, we extract event structures at sentence granularity, i.e., we ignore events which span sentences (Bj¨orne et al., 2009; Riedel et al., 2009; Poon and Vanderwende, 2010). These form approximately 5% of the events in the BioNLP’09 corpus. For each sentence, we convert the BioNLP’09 event representation to a graph (representing a labeled dependency tree) as follows. The nodes in the graph are protein entity mentions, event anchors, and a virtual ROOT node. Thus, the only words in this dependency tree are those which participate in events. We create edges in the graph in the following way. For each event anchor, we create one link to each of its arguments labeled with the slot name of the argument (for example, connecting gene transcription to IL-2 with the label THEME in Figure 1b). We link the ROOT node to each entity that does not participate in an event using the ROOTLABEL dependency label. Finally, we link the ROOT node to each top-level event anchor, (those which do not serve as arguments to other events) again using the ROOT-LABEL label. We follow the convention that the source of each dependency arc is the head while the target is the modifier. The output of this process is a directed graph, since a phrase can easily play a role in two or more events. 
Furthermore, the graph may contain self-referential edges (self-loops) due to related events sharing the same anchor (example below). To guarantee that the output of this process is a tree, we must post-process the above graph with the follow-
Then we create one event for each combination of THEME arguments in different groups. 3.2 Recognition of Event Anchors For anchor detection, we used a multiclass classifier that labels each token independently.3 Since over 92% of the anchor phrases in our evaluation domain contain a single word, we simplify the task by reducing all multi-word anchor phrases in the training corpus to their syntactic head word (e.g., “acts” for the anchor “acts as a costimulatory signal”). We implemented this model using a logistic regression classifier with L2 regularization over the following features: 3We experimented with using conditional random fields as a sequence labeler but did not see improvements in the biomedical domain. We hypothesize that the sequence tagger fails to capture potential dependencies between anchor labels – which are its main advantage over an i.i.d. classifier – because anchor words are typically far apart in text. This result is consistent with observations in previous work (Bj¨orne et al., 2009). 1629 • Token-level: The form, lemma, and whether the token is present in a gazetteer of known anchor words.4 • Surface context: The above token features extracted from a context of two words around the current token. Additionally, we build token bigrams in this context window, and model them with similar features. • Syntactic context: We model all syntactic dependency paths up to depth two starting from the token to be classified. These paths are built from Stanford syntactic dependencies (Marneffe and Manning, 2008). We extract token features from the first and last token in these paths. We also generate combination features by concatenating: (a) the last token in each path with the sequence of dependency labels along the corresponding path; and (b) the word to be classified, the last token in each path, and the sequence of dependency labels in that path. • Bag-of-word and entity count: Extracted from (a) the entire sentence, and (b) a window of five words around the token to be classified. 3.3 Parsing Event Structures Given the entities and event anchors from the previous stages in the pipeline, the parser generates labeled dependency links between them. Many dependency parsers are available and we chose MSTParser for its ability to produce non-projective and n-best parses directly. MSTParser frames parsing as a graph algorithm. To parse a sentence, MSTParser finds the tree covering all the words (nodes) in the sentence (graph) with the largest sum of edge weights, i.e., the maximum weighted spanning tree. Each labeled, directed edge in the graph represents a possible dependency between its two endpoints and has an associated score (weight). Scores for edges come from the dot product between the edge’s corresponding feature vector and learned feature weights. As a result, all features for MSTParser must be edgefactored, i.e., functions of both endpoints and the label connecting them. McDonald et al. (2006) extends the basic model to include second-order dependencies (i.e., two adjacent sibling nodes and their 4These are automatically extracted from the training corpus. parent). Both first and second-order modes include projective and non-projective decoders. Our features for MSTParser use both the event structures themselves as well as the surrounding English sentences which include them. By mapping event anchors and entities back to the original text, we can incorporate information from the original English sentence as well its syntactic tree and corresponding Stanford dependencies. 
Both forms of context are valuable and complementary. MSTParser comes with a large number of features which, in our setup, operate on the event structure level (since this is the “sentence” from the parser’s point of view). The majority of additional features that we introduced take advantage of the original text as context (primarily its associated Stanford dependencies). Our system includes the following first-order features: • Path: Syntactic paths in the original sentence between nodes in an event dependency (as in previous work by Bj¨orne et al. (2009)). These have many variations including using Stanford dependencies (“collapsed” and “uncollapsed”) or constituency trees as sources, optionally lexicalizing the path, and using words or relation names along the path. Additionally, we include the bucketed length of the paths. • Original sentence words: Words from the full English sentence surrounding and between the nodes in event dependencies, and their bucketed distances. This additional context helps compensate for how our anchor detection provides only the head word of each anchor, which does not necessarily provide the full context for event disambiguation. • Graph: Parents, children, and siblings of nodes in the Stanford dependencies graph along with label of the edge. This provides additional syntactic context. • Consistency: Soft constraints on edges between anchors and their arguments (e.g., only regulation events can have edges labeled with CAUSE). These features fire if their constraints are violated. • Ontology: Generalized types of the endpoints of edges using a given type hierarchy (e.g., POSITIVE REGULATION is a COM1630 PLEX EVENT5 is an EVENT). Values of this feature are coded with the types of each of the endpoints on an edge, running over the cross-product of types for each endpoint. For instance, an edge between a BINDING event anchor and a POSITIVE REGULATION could cause this feature to fire with the values [head:EVENT, child:COMPLEX EVENT] or [head:SIMPLE EVENT, child:EVENT].6 The latter feature can capture generalizations such as “simple event anchors cannot take other events as arguments.” Both Consistency and Ontology feature classes include domain-specific information but can be used on other domains under different constraints and type hierarchies. When using second-order dependencies, we use additional Path and Ontology features. We include the syntactic paths between sibling nodes (adjacent arguments of the same event anchor). These Path features are as above but differentiated as paths between sibling nodes. The second-order Ontology features use the type hierarchy information on both sibling nodes and their parent. For example, a POSITIVE REGULATION anchor attached to a PROTEIN and a BINDING event would produce an Ontology feature with the value [parent:COMPLEX EVENT, child1:PROTEIN, child2:SIMPLE EVENT] (among several other possible combinations). To prune the number of features used, we employ a simple entropy-based measure. Our intuition is that good features should typically appear with only one edge label.7 Given all edges enumerated during training and their gold labels, we obtain a distribution over edge labels (df) for each feature f. Given this distribution and the frequency of a feature, we can score the feature with the following: score(f) = α × log2 freq(f) −H(df) The α parameter adjusts the relative weight of the two components. 
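To make the pruning score concrete, the sketch below computes score(f) = α · log2 freq(f) − H(df) from a feature's observed edge-label counts; the counts are hypothetical toy values, while α = 0.001 is the setting reported in Section 4.

import math
from collections import Counter

def feature_score(label_counts, alpha=0.001):
    # label_counts: gold edge-label counts observed with this feature.
    freq = sum(label_counts.values())
    entropy = -sum((c / freq) * math.log2(c / freq) for c in label_counts.values())
    return alpha * math.log2(freq) - entropy

def prune_features(feature_label_counts, threshold):
    # Keep all features whose score is above the threshold.
    return {f for f, counts in feature_label_counts.items()
            if feature_score(counts) > threshold}

# A feature seen almost always with one label scores higher than one whose
# occurrences are split evenly between two labels.
print(feature_score(Counter({"THEME": 38, "NULL": 2})))    # ~ -0.28
print(feature_score(Counter({"THEME": 20, "NULL": 20})))   # ~ -0.99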
The log frequency component favors more frequent features while the entropy component favors features with low entropy in their edge 5We define complex events are those which can accept other events are arguments. Simple events can only take PROTEINs. 6We omit listing the other two combinations. 7Labels include ROOT-LABEL, THEME, CAUSE, and NULL. We assign the NULL label to edges which aren’t in the gold data. label distribution. Features are pruned by accepting all features with a score above a certain threshold. 3.4 Reranking Event Structures When decoding, the parser finds the highest scoring tree which incorporates global properties of the sentence. However, its features are edge-factored and thus unable to take into account larger contexts. To incorporate arbitrary global features, we employ a two-step reranking parser. For the first step, we extend our parser to output its n-best parses instead of just its top scoring parse. In the second step, a discriminative reranker rescores each parse and reorders the n-best list. Rerankers have been successfully used in syntactic parsing (Collins, 2000; Charniak and Johnson, 2005; Huang, 2008) and semantic role labeling (Toutanova et al., 2008). Rerankers provide additional advantages in our case due to the mismatch between the dependency structures that the parser operates on and their corresponding event structures. We convert the output from the parser to event structures (Section 3.1) before including them in the reranker. This allows the reranker to capture features over the actual event structures rather than their original dependency trees which may contain extraneous portions.8 Furthermore, this lets the reranker optimize the actual BioNLP F1 score. The parser, on the other hand, attempts to optimize the Labeled Attachment Score (LAS) between the dependency trees and converted gold dependency trees. LAS is approximate for two reasons. First, it is much more local than the BioNLP metric.9 Second, the converted gold dependency trees lose information that doesn’t transfer to trees (specifically, that event structures are really multi-DAGs and not trees). We adapt the maximum entropy reranker from Charniak and Johnson (2005) by creating a customized feature extractor for event structures — in all other ways, the reranker model is unchanged. We use the following types of features in the reranker: • Source: Score and rank of the parse from the 8For instance, event anchors with no arguments could be proposed by the parser. These event anchors are automatically dropped by the conversion process. 9As an example, getting an edge label between an anchor and its argument correct is unimportant if the anchor is missing other arguments. 1631 Unreranked Reranked Decoder(s) R P F1 R P F1 1P 65.6 76.7 70.7 68.0 77.6 72.5 2P 67.4 77.1 71.9 67.9 77.3 72.3 1N 67.5 76.7 71.8 — — — 2N 68.9 77.1 72.7 — — — 1P, 2P, 2N — — — 68.5 78.2 73.1 (a) Gold event anchors Unreranked Reranked Decoder(s) R P F1 R P F1 1P 44.7 62.2 52.0 47.8 59.6 53.1 2P 45.9 61.8 52.7 48.4 57.5 52.5 1N 46.0 61.2 52.5 — — — 2N 38.6 66.6 48.8 — — — 1P, 2P, 2N — — — 48.7 59.3 53.5 (b) Predicted event anchors Table 1: BioNLP recall, precision, and F1 scores of individual decoders and the best decoder combination on development data with the impact of event anchor detection and reranking. Decoder names include the features order (1 or 2) followed by the projectivity (P = projective, N = non-projective). decoder; number of different decoders producing the parse (when using multiple decoders). 
• Event path: Path from each node in the event tree up to the root. Unlike the Path features in the parser, these paths are over event structures, not the syntactic dependency graphs from the original English sentence. Variations of the Event path features include whether to include word forms (e.g., “binds”), types (BINDING), and/or argument slot names (THEME). We also include the path length as a feature. • Event frames: Event anchors with all their arguments and argument slot names. • Consistency: Similar to the parser Consistency features, but capable of capturing larger classes of errors (e.g., incorrect number or types of arguments). We include the number of violations from four different classes of errors. To improve performance and robustness, features are pruned as in Charniak and Johnson (2005): selected features must distinguish a parse with the highest F1 score in a n-best list, from a parse with a suboptimal F1 score at least five times. Rerankers can also be used to perform model combination (Toutanova et al., 2008; Zhang et al., 2009; Johnson and Ural, 2010). While we use a single parsing model, it has multiple decoders.10 When combining multiple decoders, we concatenate their n-best lists and extract the unique parses. 10We only have n-best versions of the projective decoders. For the non-projective decoders, we use their 1-best parse. 4 Experimental Results Our experiments use the BioNLP’09 shared task corpus (Kim et al., 2009) which includes 800 biomedical abstracts (7,449 sentences, 8,597 events) for training and 150 abstracts (1,450 sentences, 1,809 events) for development. The test set includes 260 abstracts, 2,447 sentences, and 3,182 events. Throughout our experiments, we report BioNLP F1 scores with approximate span and recursive event matching (as described in the shared task definition). For preprocessing, we parsed all documents using the self-trained biomedical McClosky-CharniakJohnson reranking parser (McClosky, 2010). We bias the anchor detector to favor recall, allowing the parser and reranker to determine which event anchors will ultimately be used. When performing nbest parsing, n = 50. For parser feature pruning, α = 0.001. Table 1a shows the performance of each of the decoders when using gold event anchors. In both cases where n-best decoding is available, the reranker improves performance over the 1-best parsers. We also present the results from a reranker trained from multiple decoders which is our highest scoring model.11 In Table 1b, we present the output for the predicted anchor scenario. In the case of the 2P decoder, the reranker does not improve performance, though the drop is minimal. This is because the reranker chose an unfortunate regularization constant during crossvalidation, most likely due to the small size of the training data. In later experiments where more 11Including the 1N decoder as well provided no gains, possibly because its outputs are mostly subsumed by the 2N decoder. 1632 data is available, the reranker consistently improves accuracy (McClosky et al., 2011). As before, the reranker trained from multiple decoders outperforms unreranked models and reranked single decoders. All in all, our best model in Table 1a scores 1 F1 point higher than the best system at the BioNLP’09 shared task, and the best model in Table 1b performs similarly to the best shared task system (Bj¨orne et al., 2009), which also scores 53.5% on development. We show the effects of each system component in Table 2. 
Note how our upper limit is 87.1% due to our conversion process, which enforces the tree constraint, drops events spanning sentences, and performs approximate reconstruction of BINDING events. Given that state-of-the-art systems on this task currently perform in the 50-60% range, we are not troubled by this number as it still allows for plenty of potential.12 Bj¨orne et al. (2009) list 94.7% as the upper limit for their system. Considering this relatively large difference, we find the results in the previous table very encouraging. As in other BioNLP’09 systems, our performance drops when switching from gold to predicted anchor information. Our decrease is similar to the one seen in Bj¨orne et al. (2009). To show the potential of reranking, we provide oracle reranker scores in Table 3. An oracle reranker picks the highest scoring parse from the available parses. We limit the n-best lists to the top k parses where k ∈{1, 2, 10, All}. For single decoders, “All” uses the entire 50-best list. For multiple decoders, the n-best lists are concatenated together. The oracle score with multiple decoders and gold anchors is only 0.4% lower than our upper limit (see Table 2). This indicates that parses which could have achieved that limit were nearly always present. Improving the features in the reranker as well as the original parsers will help us move closer to the limit. With predicated anchors, the oracle score is about 13% lower but still shows significant potential. Our final results on the test set, broken down by class, are shown in Table 4. As with other systems, complex events (e.g., REGULATION) prove harder than simple events. To get a complex event correct, one must correctly detect and parse all events in 12Additionally, improvements such as document-level parsing and DAG parsing would eliminate the need for much of the approximate and lossy portions of the conversion process. AD Parse RR Conv R P F1 ✓ ✓ ✓ 45.9 61.8 52.7 ✓ ✓ ✓ ✓ 48.7 59.3 53.5 G ✓ ✓ 68.9 77.1 72.7 G ✓ ✓ ✓ 68.5 78.2 73.1 G G G ✓ 81.6 93.4 87.1 Table 2: Effect of each major component to the overall performance in the development corpus. Components shown: AD — event anchor detection; Parse — best individual parsing model; RR — reranking multiple parsers; Conv — conversion between the event and dependency representations. ‘G’ indicates that gold data was used; ‘✓’ indicates that the actual component was used. n-best parses considered Anchors Decoder(s) 1 2 10 All Gold 1P 70.7 76.6 84.0 85.7 2P 71.8 77.5 84.8 86.2 1P, 2P, 2N — — — 86.7 Predicted 1P 52.0 60.3 69.9 72.5 2P 52.7 60.7 70.1 72.5 1P, 2P, 2N — — — 73.4 Table 3: Oracle reranker BioNLP F1 scores for our n-best decoders and their combinations before reranking on the development corpus. the event subtree allowing small errors to have large effects. Top systems on this task obtain F1 scores of 52.0% at the shared task evaluation (Bj¨orne et al., 2009) and 56.3% post evaluation (Miwa et al., 2010a). However, both systems are tailored to the biomedical domain (the latter uses multiple syntactic parsers), whereas our system has a design that is virtually domain independent. 5 Discussion We believe that the potential of our approach is higher than what the current experiments show. For example, the reranker can be used to combine not only several parsers but also multiple anchor recognizers. This passes the anchor selection decision to the reranker, which uses global information not available to the current anchor recognizer or parser. 
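The combination scheme described above (and the oracle scores of Table 3) amounts to pooling the decoders' n-best lists, dropping duplicate parses, and letting a scorer choose among the survivors. A minimal sketch follows; the parse encoding and the two scorers (the learned reranker score and a BioNLP F1 oracle) are hypothetical stand-ins.

def combine_and_select(nbest_lists, scorer):
    # nbest_lists: one list of candidate parses per decoder (1-best only for
    # the non-projective decoders); duplicates across decoders are removed.
    seen, candidates = set(), []
    for nbest in nbest_lists:
        for parse in nbest:
            key = frozenset(parse["edges"])   # hypothetical parse encoding
            if key not in seen:
                seen.add(key)
                candidates.append(parse)
    return max(candidates, key=scorer)

# Reranked output:   combine_and_select(lists, reranker_score)
# Oracle of Table 3: combine_and_select(lists, lambda p: bionlp_f1(p, gold_events))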
Furthermore, our approach can be adapted to parse event structures in entire documents (instead of in1633 Event Class Count R P F1 Gene Expression 722 68.6 75.8 72.0 Transcription 137 42.3 51.3 46.4 Protein Catabolism 14 64.3 75.0 69.2 Phosphorylation 135 80.0 82.4 81.2 Localization 174 44.8 78.8 57.1 Binding 347 42.9 51.7 46.9 Regulation 291 23.0 36.6 28.3 Positive Regulation 983 28.4 42.5 34.0 Negative Regulation 379 29.3 43.5 35.0 Total 3,182 42.6 56.6 48.6 Table 4: Results in the test set broken by event class; scores generated with the main official metric of approximate span and recursive event matching. dividual sentences) by using a representation with a unique ROOT node for all event structures in a document. This representation has the advantage that it maintains cross-sentence events (which account for 5% of BioNLP’09 events), and it allows for document-level features that model discourse structure. We plan to explore these ideas in future work. One current limitation of the proposed model is that it constrains event structures to map to trees. In the BioNLP’09 corpus this leads to the removal of almost 5% of the events, which generate DAGs instead of trees. Local event extraction models (Bj¨orne et al., 2009) do not have this limitation, because their local decisions are blind to (and hence not limited by) the global event structure. However, our approach is agnostic to the actual parsing models used, so we can easily incorporate models that can parse DAGs (Sagae and Tsujii, 2008). Additionally, we are free to incorporate any new techniques from dependency parsing. Parsing using dual-decomposition (Rush et al., 2010) seems especially promising in this area. 6 Conclusion In this paper we proposed a simple approach for the joint extraction of event structures: we converted the representation of events and their arguments to dependency trees with arcs between event anchors and event arguments, and used a reranking parser to parse these structures. Despite the fact that our approach has very little domain-specific engineering, we obtain competitive results. Most importantly, we showed that the joint modeling of event structures is beneficial: our reranker outperforms parsing models without reranking in five out of the six configurations investigated. Acknowledgments The authors would like to thank Mark Johnson for helpful discussions on the reranker component and the BioNLP shared task organizers, Sampo Pyysalo and Jin-Dong Kim, for answering questions. We gratefully acknowledge the support of the Defense Advanced Research Projects Agency (DARPA) Machine Reading Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-09C-0181. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the view of DARPA, AFRL, or the US government. References Jari Bj¨orne, Juho Heimonen, Filip Ginter, Antti Airola, Tapio Pahikkala, and Tapio Salakoski. 2009. Extracting Complex Biological Events with Rich GraphBased Feature Sets. Proceedings of the Workshop on BioNLP: Shared Task. Nate Chambers and Dan Jurafsky. 2009. Unsupervised Learning of Narrative Schemas and their Participants. Proceedings of ACL. Eugene Charniak and Mark Johnson. 2005. Coarse-tofine n-best parsing and MaxEnt discriminative reranking. In Proceedings of the 2005 Meeting of the Association for Computational Linguistics (ACL), pages 173–180 Nancy Chinchor. 1997. Overview of MUC-7. 
Proceedings of the Message Understanding Conference (MUC-7). Michael Collins. 2000. Discriminative reranking for natural language parsing. In Machine Learning: Proceedings of the Seventeenth International Conference (ICML 2000), pages 175–182. Jenny R. Finkel and Christopher D. Manning. 2009. Joint Parsing and Named Entity Recognition. Proceedings of NAACL. Jenny R. Finkel and Christopher D. Manning. 2009b. Nested Named Entity Recognition. Proceedings of EMNLP. Jan Hajiˇc, Massimiliano Ciaramita, Richard Johansson, Daisuke Kawahara, Maria A. Marti, Lluis Marquez, Adam Meyers, Joakim Nivre, Sebastian Pado, Jan Stepanek, Pavel Stranak, Mihai Surdeanu, Nianwen 1634 Xue, and Yi Zhang. 2009. The CoNLL-2009 Shared Task: Syntactic and Semantic Dependencies in Multiple Languages. Proceedings of CoNLL. Eduard Hovy, Mitchell P. Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. OntoNotes: The 90% Solution. Proceedings of the NAACL-HLT. Liang Huang. 2008. Forest reranking: Discriminative parsing with non-local features. In Proceedings of ACL-08: HLT, pages 586–594, Association for Computational Linguistics. Mark Johnson and Ahmet Engin Ural. 2010. Reranking the Berkeley and Brown Parsers. Proceedings of NAACL. Jin-Dong Kim, Tomoko Ohta, Sampo Pyysalo, Yoshinobu Kano, and Jun’ichi Tsujii. 2009. Overview of the BioNLP’09 Shared Task on Event Extraction. Proceedings of the NAACL-HLT 2009 Workshop on Natural Language Processing in Biomedicine (BioNLP’09). Mitchell P. Marcus, Beatrice Santorini, and Marry Ann Marcinkiewicz. 1993. Building a Large Annotated Corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. Marie-Catherine de Marneffe and Christopher D. Manning. 2008. Stanford Typed Hierarchies Representation. Proceedings of the COLING Workshop on Cross-Framework and Cross-Domain Parser Evaluation. David McClosky. 2010. Any Domain Parsing: Automatic Domain Adaptation for Natural Language Parsing. PhD thesis, Department of Computer Science, Brown University. David McClosky, Mihai Surdeanu, and Christopher D. Manning. 2011. Event extraction as dependency parsing in BioNLP 2011. In BioNLP 2011 Shared Task (submitted). Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online Large-Margin Training of Dependency Parsers. Proceedings of ACL. Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajiˇc. 2005b. Non-projective Dependency Parsing using Spanning Tree Algorithms. Proceedings of HLT/EMNLP. Ryan McDonald and Fernando Pereira. 2006. Online Learning of Approximate Dependency Parsing Algorithms. Proceedings of EACL. Scott Miller, Michael Crystal, Heidi Fox, Lance Ramshaw, Richard Schwartz, Rebecca Stone, and Ralph Weischedel. 1997. BBN: Description of the SIFT System as Used for MUC-7. Proceedings of the Message Understanding Conference (MUC-7). Makoto Miwa, Sampo Pyysalo, Tadayoshi Hara, and Jun’ichi Tsujii. 2010a. A Comparative Study of Syntactic Parsers for Event Extraction. Proceedings of the 2010 Workshop on Biomedical Natural Language Processing. Makoto Miwa, Rune Saetre, Jin-Dong Kim, and Jun’ichi Tsujii. 2010b. Event Extraction with Complex Event Classification Using Rich Features. Journal of Bioinformatics and Computational Biology, 8 (1). Hoifung Poon and Lucy Vanderwende. 2010. Joint Inference for Knowledge Extraction from Biomedical Literature. Proceedings of NAACL. Sebastian Riedel, Hong-Woo Chun, Toshihisa Takagi, and Jun’ichi Tsujii. 2009. A Markov Logic Approach to Bio-Molecular Event Extraction. 
Proceedings of the Workshop on BioNLP: Shared Task. Alexander M. Rush, David Sontag, Michael Collins, and Tommi Jaakkola. 2010. On dual decomposition and linear programming relaxations for natural language processing. Proceedings of EMNLP. Kenji Sagae and Jun’ichi Tsujii. 2008. Shift-Reduce Dependency DAG Parsing. Proceedings of the COLING. Mihai Surdeanu, Richard Johansson, Adam Meyers, Lluis Marquez, and Joakim Nivre. 2008. The CoNLL2008 Shared Task on Joint Parsing of Syntactic and Semantic Dependencies. Proceedings of CoNLL. Kristina Toutanova, Aria Haghighi, and Christopher D. Manning. 2008. A Global Joint Model for Semantic Role Labeling. Computational Linguistics 34(2). Zhang, H. and Zhang, M. and Tan, C.L. and Li, H. 2009. K-best combination of syntactic parsers. Proceedings of Empirical Methods in Natural Language Processing. 1635
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1636–1644, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Extracting Comparative Entities and Predicates from Texts Using Comparative Type Classification Seon Yang Youngjoong Ko Department of Computer Engineering, Department of Computer Engineering, Dong-A University, Dong-A University, Busan, Korea Busan, Korea [email protected] [email protected] Abstract The automatic extraction of comparative information is an important text mining problem and an area of increasing interest. In this paper, we study how to build a Korean comparison mining system. Our work is composed of two consecutive tasks: 1) classifying comparative sentences into different types and 2) mining comparative entities and predicates. We perform various experiments to find relevant features and learning techniques. As a result, we achieve outstanding performance enough for practical use. 1 Introduction Almost every day, people are faced with a situation that they must decide upon one thing or the other. To make better decisions, they probably attempt to compare entities that they are interesting in. These days, many web search engines are helping people look for their interesting entities. It is clear that getting information from a large amount of web data retrieved by the search engines is a much better and easier way than the traditional survey methods. However, it is also clear that directly reading each document is not a perfect solution. If people only have access to a small amount of data, they may get a biased point of view. On the other hand, investigating large amounts of data is a timeconsuming job. Therefore, a comparison mining system, which can automatically provide a summary of comparisons between two (or more) entities from a large quantity of web documents, would be very useful in many areas such as marketing. We divide our work into two tasks to effectively build a comparison mining system. The first task is related to a sentence classification problem and the second is related to an information extraction problem. Task 1. Classifying comparative sentences into one non-comparative class and seven comparative classes (or types); 1) Equality, 2) Similarity, 3) Difference, 4) Greater or lesser, 5) Superlative, 6) Pseudo, and 7) Implicit comparisons. The purpose of this task is to efficiently perform the following task. Task 2. Mining comparative entities and predicates taking into account the characteristics of each type. For example, from the sentence “Stock-X is worth more than stock-Y.” belonging to “4) Greater or lesser” type, we extract “stockX” as a subject entity (SE), “stock-Y” as an object entity (OE), and “worth” as a comparative predicate (PR). These tasks are not easy or simple problems as described below. Classifying comparative sentences (Task 1): For the first task, we extract comparative sentences from text documents and then classify the extracted comparative sentences into seven 1636 comparative types. Our basic idea is a keyword search. Since Ha (1999a) categorized dozens of Korean comparative keywords, we easily build an initial keyword set as follows: ▪ Кling = {“같 ([gat]: same)”, “보다 ([bo-da]: than)”, “가장 ([ga-jang]: most)”, …} In addition, we easily match each of these keywords to a particular type anchored to Ha‟s research, e.g., “같 ([gat]: same)” to “1) Equality”, “보다 ([bo-da]: than)” to “4) Greater or lesser”. 
However, any method that depends on just these linguistic-based keywords has obvious limitations as follows: 1) Кling is insufficient to cover all of the actual comparison expressions. 2) There are many non-comparative sentences that contain some elements of Кling. 3) There is no one-to-one relationship between keyword types and sentence types. Mining comparative entities and predicates (Task 2): Our basic idea for the second task is selecting candidates first and finding answers from the candidates later. We regard each of noun words as a candidate for SE/OE, and each of adjective (or verb) words as a candidate for PR. However, this candidate detection has serious problems as follows: 4) There are many actual SEs, OEs, and PRs that consist of multiple words. 5) There are many sentences with no OE, especially among superlative sentences. It means that the ellipsis is frequently occurred in superlative sentences. We focus on solving the above five problems. We perform various experiments to find relevant features and proper machine learning techniques. The final experimental results in 5-fold cross validation show the overall accuracy of 88.59% for the first task and the overall accuracy of 86.81% for the second task. The remainder of the paper is organized as follows. Section 2 briefly introduces related work. Section 3 and Section 4 describe our first task and second task in detail, respectively. Section 5 reports our experimental results and finally Section 6 concludes. 2 Related Work Linguistic researchers focus on defining the syntax and semantics of comparative constructs. Ha (1999a; 1999b) classified the structures of Korean comparative sentences into several classes and arranged comparison-bearing words from a linguistic perspective. Since he summarized the modern Korean comparative studies, his research helps us have a linguistic point of view. We also refer to Jeong (2000) and Oh (2004). Jeong classified adjective superlatives using certain measures, and Oh discussed the gradability of comparatives. In computer engineering, we found five previous studies related to comparison mining. Jindal and Liu (2006a; 2006b) studied to mine comparative relations from English text documents. They used comparative and superlative POS tags, and some additional keywords. Their methods applied Class Sequential Rules and Label Sequential Rules. Yang and Ko (2009; 2011) studied to extract comparative sentences in Korean text documents. Li et al. (2010) studied to mine comparable entities from English comparative questions that users posted online. They focused on finding a set of comparable entities given a user‟s input entity. Opinion mining is also related to our work because many comparative sentences also contain the speaker‟s opinion/sentiment. Lee et al. (2008) surveyed various techniques that have been developed for the key tasks of opinion mining. Kim and Hovy (2006) introduced a methodology for analyzing judgment opinion. Riloff and Wiebe (2003) presented a bootstrapping process that learns linguistically rich extraction patterns for subjective expressions. In this study, three learning techniques are employed: the maximum entropy method (MEM) as a representative probabilistic model, the support vector machine (SVM) as a kernel model, and transformation-based learning (TBL) as a rulebased model. Berger et al. (1996) presented a Maximum Entropy Approach to natural language processing. Joachims (1998) introduced SVM for text classification. Various TBL studies have been performed. 
Brill (1992; 1995) first introduced TBL and presented a case study on part-of-speech 1637 tagging. Ramshaw and Marcus (1995) applied TBL for locating chunks in tagged texts. Black and Vasilakopoulos (2002) used a modified TBL technique for Named Entity Recognition. 3 Classifying Comparative Sentences (Task 1) We first classify the sentences into comparatives and non-comparatives by extracting only comparatives from text documents. Then we classify the comparatives into seven types. 3.1 Extracting comparative sentences from text documents Our strategy is to first detect Comparative Sentence candidates (CS-candidates), and then eliminate non-comparative sentences from the candidates. As mentioned in the introduction section, we easily construct a linguistic-based keyword set, Кling. However, we observe that Кling is not enough to capture all the actual comparison expressions. Hence, we build a comparison lexicon as follows: ▪ Comparison Lexicon = Кling U {Additional keywords that are frequently used for actual comparative expressions} This lexicon is composed of three parts. The first part includes the elements of Кling and their synonyms. The second part consists of idioms. For example, an idiom “X 가 먼저 웃었다 [X-ga meon-jeo u-seot-da]” commonly means “The winner is X” while it literally means “X laughed first”. The last part consists of long-distance-words sequences, e.g., “<X 는 [X-neun], 지만 [ji-man], Y 는 [Y-neun], 다 [da]>”. This sequence means that the sentence is formed as < S(X) + V + but + S(Y) + V > in English (S: subject phrase; V: verb phrase; X, Y: proper nouns). We could regard a word, “지만 ([jiman]: but),” as a single keyword. However, this word also captures numerous non-comparative sentences. Namely, the precision value can fall too much due to this word. By using long-distancewords sequences instead of single keywords, we can keep the precision value from dropping seriously low. The comparison lexicon finally has a total of 177 elements. We call each element “CK” hereafter. Note that our lexicon does not include comparative/superlative POS tags. Unlike English, there is no Korean comparative/superlative POS tag from POS tagger commonly. Our lexicon covers 95.96% of the comparative sentences in our corpus. It means that we successfully defined a comparison lexicon for CS-candidate detection. However, the lexicon shows a relatively low precision of 68.39%. While detecting CScandidates, the lexicon also captures many noncomparative sentences, e.g., following Ex1: ▪ Ex1. “내일은 주식이 오를 것 같다.” ([nai-il-eun jusik-i o-reul-geot gat-da]: I think stock price will rise tomorrow.) This sentence is a non-comparative sentence even though it contains a CK, “같[gat].” This CK generally means “same,” but it often expresses “conjecture.” Since it is an adjective in both cases, it is difficult to distinguish the difference. To effectively filter out non-comparative sentences from CS-candidates, we use the sequences of “continuous POS tags within a radius of 3 words from each CK” as features. Each word in the sequence is replaced with its POS tag in order to reflect various expressions. However, as CKs play the most important role, they are represented as a combination of their lexicalization and POS tag, e.g., “같/pa1.” Finally, the feature has the form of “X y” (“X” means a sequence and “y” means a class; y1: comparative, y2: noncomparative). For instance, “<pv etm nbn 같/pa ef sf 2 > y2” is one of the features from Ex1 sentence. Finally, we achieved an f1-score of 90.23% using SVM. 
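The feature construction can be sketched as follows: the POS tags within a radius of three words around a CK are concatenated, with the CK itself kept as a lexicalized word/tag pair, and the resulting sequence is passed to the SVM. The morphological segmentation of sentence Ex1 below is approximate and given only for illustration.

def ck_context_feature(tagged, ck_index, radius=3):
    # tagged: list of (morpheme, POS tag) pairs; ck_index: position of the CK.
    lo, hi = max(0, ck_index - radius), min(len(tagged), ck_index + radius + 1)
    parts = []
    for i in range(lo, hi):
        morpheme, tag = tagged[i]
        parts.append(f"{morpheme}/{tag}" if i == ck_index else tag)
    return " ".join(parts)

# Ex1: "내일은 주식이 오를 것 같다." (approximate analysis)
tagged = [("내일", "nc"), ("은", "jx"), ("주식", "nc"), ("이", "jcs"),
          ("오르", "pv"), ("ㄹ", "etm"), ("것", "nbn"), ("같", "pa"),
          ("다", "ef"), (".", "sf")]
print(ck_context_feature(tagged, ck_index=7))   # -> "pv etm nbn 같/pa ef sf"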
3.2 Classifying comparative sentences into seven types As we extract comparative sentences successfully, the next step is to classify the comparatives into different types. We define seven comparative types and then employ TBL for comparative sentence classification. We first define six broad comparative types based on modern Korean linguistics: 1) Equality, 2) Similarity, 3) Difference, 4) Greater or lesser, 5) Superlative, 6) Pseudo comparisons. The first five types can be understood intuitively, whereas 1 The POS tag “pa” means “the stem of an adjective”. 2 The labels such as “pv”, “etm” are Korean POS Tags. 1638 the sixth type needs more explanation. “6) Pseudo” comparison includes comparative sentences that compare two (or more) properties of one entity such as “Smartphone-X is a computer rather than a phone.” This type of sentence is often classified into “4) Greater or lesser.” However, since this paper focuses on comparisons between different entities, we separate “6) Pseudo” type from “4) Greater or lesser” type. The seventh type is “7) Implicit” comparison. It is added with the goal of covering literally “implicit” comparisons. For example, the sentence “Shopping Mall X guarantees no fee full refund, but Shopping Mall Y requires refund-fee” does not directly compare two shopping malls. It implicitly gives a hint that X is more beneficial to use than Y. It can be considered as a non-comparative sentence from a linguistic point of view. However, we conclude that this kind of sentence is as important as the other explicit comparisons from an engineering point of view. After defining the seven comparative types, we simply match each sentences to a particular type based on the CK types; e.g., a sentence which contains the word “가장 ([ga-jang]: most)” is matched to “Superlative” type. However, a method that uses just the CK information has a serious problem. For example, although we easily match the CK “보다 ([bo-da]: than)” to “Greater or lesser” without doubt, we observe that the type of CK itself does not guarantee the correct type of the sentence as we can see in the following three sentences: ▪ Ex2. “X 의 품질은 Y 보다 좋지도 나쁘지도 않다.” ([Xeui pum-jil-eun Y-bo-da jo-chi-do na-ppeu-ji-do anta]: The quality of X is neither better nor worse than that of Y.) It can be interpreted as “The quality of X is similar to that of Y.” (Similarity) ▪ Ex3. “X 가 Y 보다 품질이 좋다.” ([X-ga Y-bo-da pumjil-I jo-ta]: The quality of X is better than that of Y.) It is consistent with the CK type (Greater or lesser) ▪ Ex4. “X 는 다른 어떤 카메라보다 품질이 좋다.” ([Xneun da-reun eo-tteon ka-me-ra-bo-da pum-jil-i jota]: X is better than any other cameras in quality.) It can be interpreted as “X is the best camera in quality.” (Superlative) If we only rely on the CK type, we should label the above three sentences as “Greater or lesser”. However, each of these three sentences belongs to a different type. This fact addresses that many CKs could have an ambiguity problem just like the CK of “보다 ([bo-da]: than).” To solve this ambiguity problem, we employ TBL. We first roughly annotate the type of sentences using the type of CK itself. After this initial annotating, TBL generates a set of errordriven transformation rules, and then a scoring function ranks the rules. We define our scoring function as Equation (1): Score(ri) = Ci - Ei (1) Here, ri is the i-th transformation rule, Ci is the number of corrected sentences after ri is applied, and Ei is the number of the opposite case. The ranking process is executed iteratively. 
The iterations stop when the scoring function reaches a certain threshold. We finally set up the threshold value as 1 after tuning. This means that we use only the rules whose score is 2 or more. 4 Mining Comparative Entities and Predicates (Task 2) This section explains how to extract comparative entities and predicates. Our strategy is to first detect Comparative Element candidates (CEcandidates), and then choose the answer among the candidates. In this paper, we only present the results of two types: “Greater or lesser” and “Superlative.” As we will see in the experiment section, these two types cover 65.8% of whole comparative sentences. We are still studying the other five types and plan to report their results soon. 4.1 Comparative elements We extract three kinds of comparative elements in this paper: SE, OE and PR ▪ Ex5. “X 파이가 Y 파이보다 싸고 맛있다.” ([X-pa-i-ga Y-pa-i-bo-da ssa-go mas-it-da]: Pie X is cheaper and more delicious than Pie Y.) ▪ Ex6. “대선 후보들 중 Z 가 가장 믿음직하다.” ([daiseon hu-bo-deul jung Z-ga ga-jang mit-eum-jikha-da]: “Z is the most trustworthy among the presidential candidates.”) 1639 In Ex5 sentence, “X 파이 (Pie X)” is a SE, “Y 파이 (Pie Y)” is an OE, and “싸고 맛있다 (cheaper and more delicious)” is a PR. In Ex6 sentence, “Z” is a SE, “대선 후보들 (the presidential candidates)” is an OE, and “믿음직하다 (trustworthy)” is a PR. Note that comparative elements are not limited to just one word. For example, “싸고 맛있다 (cheaper and more delicious)” and “대선 후보들 (the presidential candidates)” are composed of multiple words. After investigating numerous actual comparison expressions, we conclude that SEs, OEs, and PRs should not be limited to a single word. It can miss a considerable amount of important information to restrict comparative elements to only one word. Hence, we define as follows: ▪ Comparative elements (SE, OE, and PR) are composed of one or more consecutive words. It should also be noted that a number of superlative sentences are expressed without OE. In our corpus, the percentage of the Superlative sentences without any OE is close to 70%. Hence, we define as follows: ▪ OEs can be omitted in the Superlative sentences. 4.2 Detecting CE-candidates As comparative elements are allowed to have multiple words, we need some preprocessing steps for easy detection of CE-candidates. We thus apply some simplification processes. Through the simplification processes, we represent potential SEs/OEs as one “N” and potential PRs as one “P”. The following process is one of the simplification processes for making “N” - Change each noun (or each noun compound) to a symbol “N”. And, the following two example processes are for “P”. - Change “pa (adjective)” and “pv (verb)” to a symbol “P”. - Change “P + ecc (a suffix whose meaning is “and”) + P” to one “P”, e.g., “cheaper and more delicious” is tagged as one “P”. In addition to the above examples, several processes are performed. We regard all the “N”s as CE-candidates for SE/OE and all the “P”s as CEcandidates for PR. It is possible that a more analytic method is used instead of this simplification task, e.g., by a syntactic parser. We leave this to our future work. 4.3 Finding final answers We now generate features. The patterns that consist of POS tags, CKs, and “P”/“N” sequences within a radius of 4 POS tags from each “N” or “P” are considered as features. Original sentence “X 파이가 Y 파이보다 싸고 맛있다.” (Pie X is cheaper and more delicious than Pie Y.) 
After POS tagging X 파이/nq + 가/jcs + Y 파이/nq + 보다/jca + 싸/pa + 고/ecc + 맛있/pa + 다/ef +./sf After simplification process X 파이/N(SE) + 가/jcs + Y 파이/N(OE) + 보다/jca + 싸고맛있다/P(PR) + ./sf Patterns for SE <N(SE), jcs, N, 보다/jca,P>, …, <N(SE), jcs> Patterns for OE <N, jcs, N(OE), 보다/jca,P, sf>, …, <N(OE), 보다/jca > Patterns for PR <N, jcs, N, 보다/jca,P(PR), sf>, …, <P(PR), sf> Table 1: Feature examples for mining comparative elements Table 1 lists some examples. Since the CKs play an important role, they are represented as a combination of their lexicalization and POS tag. After feature generation, we calculate each probability value of all CE-candidates using SVM. For example, if a sentence has three “P”s, one “P” with the highest probability value is selected as the answer PR. 5 Experimental Evaluation 5.1 Experimental Settings The experiments are conducted on 7,384 sentences collected from the web by three trained human labelers. Firstly, two labelers annotated the corpus. A Kappa value of 0.85 showed that it was safe to say that the two labelers agreed in their judgments. 1640 Secondly, the third labeler annotated the conflicting part of the corpus. All three labelers discussed any conflict, and finally reached an agreement. Table 2 lists the distribution of the corpus. Comparative Types Sentence Portion Non-comparative: 5,001 (67.7%) Comparative: 2,383 (32.3%) Total (Corpus) 7,384 (100%) Among Comparative Sentences 1) Equality 3.6% 2) Similarity 7.2% 3) Difference 4.8% 4) Greater or lesser 54.5% 5) Superlative 11.3% 6) Pseudo 1.3% 7) Implicit 17.5% Total (Comparative) 100% Table 2: Distribution of the corpus 5.2 Classifying comparative sentences Our experimental results for Task 1 showed an f1score of 90.23% in extracting comparative sentences from text documents and an accuracy of 81.67% in classifying the comparative sentences into seven comparative types. The integrated results showed an accuracy of 88.59%. Non-comparative sentences were regarded as an eighth comparative type in this integrated result. It means that we classify entire sentences into eight types (seven comparative types and one non-comparative type). 5.2.1 Extracting comparative sentences. Before evaluating our proposed method for comparative sentence extraction, we conducted four experiments with all of the lexical unigrams and bigrams using MEM and SVM. Among these four cases, SVM with lexical unigrams showed the highest performance, an f1-score of 79.49%. We regard this score as our baseline performance. Next, we did experiments using all of the continuous lexical sequences and using all of the POS tags sequences within a radius of n words from each CK as features (n=1,2,3,4,5). Among these ten cases, “the POS tags sequences within a radius of 3” showed the best performance. Besides, as SVM showed the better performance than MEM in overall experiments, we employ SVM as our proposed learning technique. Table 3 summarizes the overall results. Systems Precision Recall F1-score baseline 87.86 72.57 79.49 comparison lexicon only 68.39 95.96 79.87 comparison lexicon & SVM (proposed) 92.24 88.31 90.23 Table 3: Final results in comparative sentence extraction (%) As given above, we successfully detected CScandidates with considerably high recall by using the comparison lexicon. We also successfully filtered the candidates with high precision while still preserving high recall by applying machine learning technique. Finally, we could achieve an outstanding performance, an f1-score of 90.23%. 
5.2.2 Classifying comparative sentences into seven types. Like the previous comparative sentence extraction task, we also conducted experiments for type classification using the same features (continuous POS tags sequences within a radius of 3 words from each CK) and the same learning technique (SVM). Here, we achieved an accuracy of 73.64%. We regard this score as our baseline performance. Next, we tested a completely different technique, the TBL method. TBL is well-known to be relatively strong in sparse problems. We observed that the performance of type classification can be influenced by very subtle differences in many cases. Hence, we think that an error-driven approach can perform well in comparative type classification. Experimental results showed that TBL actually performed better than SVM or MEM. In the first step, we roughly annotated the type of a sentence using the type of the CK itself. Then, we generated error-driven transformation rules from the incorrectly annotated sentences. Transformation templates we defined are given in Table 4. Numerous transformation rules were generated on the basis of the templates. For example, “Change the type of the current sentence from “Greater or lesser” to “Superlative” if this sentence holds the CK of “보다 ([bo-da]: than)”, 1641 and the second preceding word of the CK is tagged as mm” is a transformation rule generated by the third template. Change the type of the current sentence from x to y if this sentence holds the CK of k, and … 1. the preceding word of k is tagged z. 2. the following word of k is tagged z. 3. the second preceding word of k is tagged z. 4. the second following word of k is tagged z. 5. the preceding word of k is tagged z, and the following word of k is tagged w. 6. the preceding word of k is tagged z, and the second preceding word of k is tagged w. 7. the following word of k is tagged z, and the second following word of k is tagged w. Table 4: Transformation templates For evaluation of threshold values, we performed experiments with three options as given in Table 5. Threshold 0 1 2 Accuracy 79.99 81.67 80.04 Table 5: Evaluation of threshold option (%); Threshold n means that the learning iterations continues while Ci-Ei ≥ n+1 We achieved the best performance with the threshold option 1. Finally, we classified comparative sentences into seven types using TBL with an accuracy of 81.67%. 5.2.3 Integrated results of Task 1 We sum up our proposed method for Task 1 as two steps as follows; 1) The comparison lexicon detects CS-candidates in text documents, and then SVM eliminates the non-comparative sentences from the candidates. Thus, all of the sentences are divided into two classes: a comparative class and a non-comparative class. 2) TBL then classifies the sentences placed in the comparative class in the previous step into seven comparative types. The integrated results showed an overall accuracy of 88.59% for the eight-type classification. To evaluate the effectiveness of our two-step processing, we performed one-step processing experiments using SVM and TBL. Table 6 shows a comparison of the results. Processing Accuracy One-step processing (classifying eight types at a time) comparison lexicon & SVM 75.64 comparison lexicon & TBL 72.49 Two-step processing (proposed) 88.59 Table 6: Integrated results for Task 1 (%) As shown above, Task 1 was successfully divided into two steps. 
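For concreteness, the rule selection of Equation (1) can be sketched as below: every transformation rule generated from the templates in Table 4 is scored as Score(r) = Cr − Er over the training sentences, and with threshold option 1 only rules scoring 2 or more are retained. The rule and sentence representations are hypothetical.

def score_rule(rule, sentences):
    # Score(r) = C_r - E_r: corrections minus newly introduced errors.
    corrected = errors = 0
    for s in sentences:
        new_type = rule.apply(s)              # None if the rule does not fire
        if new_type is None or new_type == s.predicted_type:
            continue
        if new_type == s.gold_type:
            corrected += 1
        elif s.predicted_type == s.gold_type:
            errors += 1
    return corrected - errors

def select_rules(candidate_rules, sentences, threshold=1):
    # Threshold option 1: keep rules with C_r - E_r >= threshold + 1, i.e. 2.
    return [r for r in candidate_rules
            if score_rule(r, sentences) >= threshold + 1]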
5.3 Mining comparative entities and predicates For the mining task of comparative entities and predicates, we used 460 comparative sentences (Greater or lesser: 300, Superlative: 160). As previously mentioned, we allowed multiple-word comparative elements. Table 7 lists the portion of multiple-word comparative elements. Multi-word rate SE OE PR Greater or lesser 30.0 31.3 8.3 Superlative 24.4 9.4 (32.6) 8.1 Table 7: Portion (%) of multiple-word comparative elements As given above, each multiple-word portion, especially in SEs and OEs, is quite high. This fact proves that it is absolutely necessary to allow multiple-word comparative elements. Relatively lower rate of 9.4% in Superlative-OEs is caused by a number of omitted OEs. If sentences that do not have any OEs are excluded, the portion of multiple-words becomes 32.6% as written in parentheses. Table 8 shows the effectiveness of simplification processes. We calculated the error rates of CEcandidate detection before and after simplification processes. 1642 Simplification processes SE OE PR Greater or lesser Before 34.7 39.3 10.0 After 4.7 8.0 1.7 Superlative Before 26.3 85.0 (38.9) 9.4 After 1.9 75.6 (6.3) 1.3 Table 8: Error rate (%) in CE-candidate detection Here, the first value of 34.7% means that the real SEs of 104 sentences (among total 300 Greater or lesser sentences) were not detected by CEcandidate detection before simplification processes. After the processes, the error rate decreased to 4.7%. The significant differences between before and after indicate that we successfully detect CEcandidates through the simplification processes. Although the Superlative-OEs still show the seriously high rate of 75.6%, it is also caused by a number of omitted OEs. If sentences that do not have any OEs are excluded, the error rate is only 6.3% as written in parentheses. The final results for Task 2 are reported in Table 9. We calculated each probability of CE-candidates using MEM and SVM. Both MEM and SVM showed outstanding performance; there was no significant difference between the two machine learning methods (SVM and MEM). Hence, we only report the results of SVM. Note that many sentences do not contain any OE. To identify such sentences, if SVM tagged every “N” in a sentence as “not OE”, we tagged the sentence as “no OE”. Final Results SE OE PR Greater or lesser 86.00 89.67 92.67 Superlative 84.38 71.25 90.00 Total 85.43 83.26 91.74 Table 9: Final results of Task 2 (Accuracy, %) As shown above, we successfully extracted the comparative entities and predicates with outstanding performance, an overall accuracy of 86.81%. 6 Conclusions and Future Work This paper has studied a Korean comparison mining system. Our proposed system achieved an accuracy of 88.59% for classifying comparative sentences into eight types (one non-comparative type and seven comparative types), and an accuracy of 86.81% for mining comparative entities and predicates. These results demonstrated that our proposed method could be used effectively in practical applications. Since the comparison mining is an area of increasing interest around the world, our study can contribute greatly to text mining research. In our future work, we have the following plans. Our first plan is to complete the mining process on all the types of sentences. The second one is to conduct more experiments for obtaining better performance. The final one is about an integrated system. Since we perform Task 1 and Task 2 separately, we need to build an end-to-end system. 
Acknowledgment This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2010-0015613) References Adam L. Berger, Stephen A. Della Pietra and Vicent J. Della Pietra. 1996. A Maximum Entropy Approach to Natural Language Processing. Computational Linguistics, 22(1):39-71. William J. Black and Argyrios Vasilakopoulos. 2002. Language-Independent named Entity Classification by modified Transformation-based Learning and by Decision Tree Induction. In Proceedings of CoNLL’02, 24:1-4. Eric Brill. 1992. A simple rule-based part of speech tagger. In Proceedings of ANLP’92, 152-155. Eric Brill. 1995. Transformation-based Error-Driven Learning and Natural language Processing: A Case Study in Part-of-Speech tagging. Computational Linguistics, 543-565. Gil-jong Ha. 1999a. Korean Modern Comparative Syntax, Pijbook Press, Seoul, Korea. Gil-jong Ha. 1999b. Research on Korean Equality Comparative Syntax, Association for Korean Linguistics, 5:229-265. In-su Jeong. 2000. Research on Korean Adjective Superlative Comparative Syntax. Korean Han-minjok Eo-mun-hak, 36:61-86. 1643 Nitin Jindal and Bing Liu. 2006. Identifying Comparative Sentences in Text Documents, In Proceedings of SIGIR’06, 244-251. Nitin Jindal and Bing Liu. 2006. Mining Comparative Sentences and Relations, In Proceedings of AAAI’06, 1331-1336. Thorsten Joachims. 1998. Text Categorization with Support Vector Machines: Learning with Many relevant Features. In Proceedings of ECML’98, 137142 Soomin Kim and Eduard Hovy. 2006. Automatic Detection of Opinion Bearing Words and Sentences. In Proceedings of ACL’06. Dong-joo Lee, OK-Ran Jeong and Sang-goo Lee. 2008. Opinion Mining of Customer Feedback Data on the Web. In Proceedings of ICUIMC’08, 247-252. Shasha Li, Chin-Yew Lin, Young-In Song and Zhoujun Li. 2010. Comparable Entity Mining from Comparative Questions. In Proceedings of ACL’10, 650-658. Kyeong-sook Oh. 2004. The Difference between „Mankum‟ Comparative and „Cheo-rum‟ Comparative. Society of Korean Semantics, 14:197-221. Lance A. Ramshaw and Mitchell P. Marcus. 1995. Text Chunking using Transformation-Based Learning. In Proceedings of NLP/VLC’95, 82-94. Ellen Riloff and Janyce Wiebe. 2003. Learning Extraction Patterns for Subjective Expressions. In Proceedings of EMNLP’03. Seon Yang and Youngjoong Ko. 2009. Extracting Comparative Sentences from Korean Text Documents Using Comparative Lexical Patterns and Machine Learning Techniques. In Proceedings of ACL-IJNLP:Short Papers, 153-156 Seon Yang and Youngjoong Ko. 2011. Finding relevant features for Korean comparative sentence extraction. Pattern Recognition Letters, 32(2):293-296 1644
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 161–170, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics A Comprehensive Dictionary of Multiword Expressions Kosho Shudo1, Akira Kurahone2, and Toshifumi Tanabe1 1Fukuoka University, Nanakuma, Jonan-ku, Fukuoka, 814-0180, JAPAN {shudo,tanabe}@fukuoka-u.ac.jp 2TechTran Ltd., Ikebukuro, Naka-ku, Yokohama, 231-0834, JAPAN [email protected] Abstract It has been widely recognized that one of the most difficult and intriguing problems in natural language processing (NLP) is how to cope with idiosyncratic multiword expressions. This paper presents an overview of the comprehensive dictionary (JDMWE) of Japanese multiword expressions. The JDMWE is characterized by a large notational, syntactic, and semantic diversity of contained expressions as well as a detailed description of their syntactic functions, structures, and flexibilities. The dictionary contains about 104,000 expressions, potentially 750,000 expressions. This paper shows that the JDMWE’s validity can be supported by comparing the dictionary with a large-scale Japanese N-gram frequency dataset, namely the LDC2009T08, generated by Google Inc. (Kudo et al. 2009). 1 Introduction Linguistically idiosyncratic multiword expressions occur in authentic sentences with an unexpectedly high frequency. Since (Sag et al. 2002), we have become aware that a proper solution of idiosyncratic multiword expressions (MWEs) is one of the most difficult and intriguing problems in NLP. In principle, the nature of the idiosyncrasy of MWEs is twofold: one is idiomaticity, i.e., noncompositionality of meaning; the other is the strong probabilistic affinity between component words. Many attempts have been made to extract these expressions from corpora, mainly using automated methods that exploit statistical means. However, to our knowledge, no reliable, extensive solution has yet been made available, presumably because of the difficulty of extracting correctly without any human insight. Recognizing the crucial importance of such expressions, one of the authors of the current paper began in the 1970s to construct a Japanese electronic dictionary with comprehensive inclusion of idioms, idiom-like expressions, and probabilistically idiosyncratic expressions for general use. In this paper, we begin with an overview of the JDMWE (Japanese Dictionary of Multi-Word Expressions). It has approximately 104,000 dictionary entries and covers potentially at least 750,000 expressions. The most important features of the JDMWE are: 1. A large notational, syntactic, and semantic diversity of contained expressions 2. A detailed description of syntactic function and structure for each entry expression 3. An indication of the syntactic flexibility of entry expressions (i.e., possibility of internal modification of constituent words) of entry expressions. In section 2, we outline the main features of the present study, first presenting a brief summary of significant previous work on this topic. In section 3, we propose and describe the criteria for selecting MWEs and introduce a number of classes of multiword expressions. In section 4, we outline the format and contents of the JDMWE, discussing the information on notational variants, syntactic functions, syntactic structures, and the syntactic flexibility of MWEs. In section 5, we describe and explain the contextual conditions stipulated in the JDMWE. 
In section 6, we illustrate some important statistical properties of the JDMWE by comparing the dictionary with a large-scale Japanese N-gram frequency dataset, the LDC2009T08, generated by Google Inc. (Kudo et al. 2009). The paper ends with concluding remarks in section 7. 161 2 Related Work Gross (1986) analyzed French compound adverbs and compound verbs. According to his estimate, the lexical stock of such words in French would be respectively 3.3 and 1.7 times greater than that of single-word adverbs and single-word verbs. Jackendoff (1997) notes that an English speaker’s lexicon would contain as many MWEs as single words. Sag et al. (2002) pointed out that 41% of the entries of WordNet 1.7 (Fellbaum 1999) are multiword; and Uchiyama et al. (2003) reported that 44% of Japanese verbs are VV-type compounds. These and other similar observations underscore the great need for a well-designed, extensive MWE lexicon for practical natural language processing. In the past, attempts have been made to produce an MWE dictionary. Examples include the following: Gross (1986) reported on a dictionary of French verbal MWEs with description of 22 syntactic structures; Kuiper et al. (2003) constructed a database of 13,000 English idioms tagged with syntactic structures; Villavicencio (2004) attempted to compile lexicons of English idioms and verb-particle constructions (VPCs) by augmenting existing single-word dictionaries with specific tables; Baptista et al. (2004) reported on a dictionary of 3,500 Portuguese verbal MWEs with ten syntactic structures; Fellbaum et al. (2006) reported corpus-based studies in developing German verb phrase idiom resources; and recently, Laporte et al. (2008) have reported on a dictionary of 6,800 French adverbial MWEs annotated with 15 syntactic structures. Our JDMWE approach differs from these studies in that it can treat more comprehensive types of MWEs. Our system can handle almost all types of MWEs except compositional compounds, named entities, acronyms, blends, politeness expressions, and functional expressions; in contrast, the types of MWEs that most of the other studies can deal with are limited to verb-object idioms, VPCs, verbal MWEs, support-verb constructions (SVCs) and so forth. Many attempts have been made to extract MWEs automatically using statistical corpus-based methods. For example, Pantel et al. (2001) sought to extract Chinese compounds using mutual information and the log-likelihood measure. Fazly et al. (2006) attempted to extract English verbobject type idioms by recognizing their structural fixedness in terms of mutual information and relative entropy. Bannard (2007) tried to extract English syntactically fixed verb-noun combinations using pointwise mutual information, and so on. In spite of these and many similar efforts, it is still difficult to adequately extract MWEs from corpora using a statistical approach, because regarding the types of multiword expressions, realistically speaking, the corpus-wide distribution can be far from exhaustive. Paradoxically, to compile an MWE lexicon we need a reliable standard MWE lexicon, as it is impossible to evaluate the automatic extraction by recall rate without such a reference. The conventional idiom dictionaries published for human readers have been occasionally used for the evaluation of automatic extraction methods in some past studies. 
However, no conventional Japanese dictionary of idioms would suffice for an MWE lexicon for the practical NLP because they lack entries related to the diverse MWE objects we frequently encounter in common textual materials, such as quasi-idioms, quasi-clichés, metaphoric fixed or partly fixed expressions. In addition, they provide no systematic information on the notational variants, syntactic functions, or syntactic structures of the entry expressions. The JDMWE is intended to circumvent these problems. In past Japanese MWE studies, Shudo et al. (1980) compiled a lexicon of 3,500 functional multiword expressions and used the lexicon for a morphological analysis of Japanese. Koyama et al. (1998) made a seven-point increase in the precision rate of kana-to-kanji conversion for a commercial Japanese word processor by using a prototype of the JDMWE with 65,000 MWEs. Baldwin et al. (2003) discussed the treatment of Japanese MWEs in the framework of Sag et al. (2002). Shudo et al. (2004) pointed out the importance of the auxiliary-verbal MWEs and their non-propositional meanings (i.e., modality in a generalized sense). Hashimoto et al. (2009) studied a disambiguation method of semantically ambiguous idioms using 146 basic idioms. 3 MWEs Selected for the JDMWE The human deliberate judgment is indispensable for the correct, extensive extraction of MWEs. In 162 view of this, we have manually extracted multiword expressions that have definite syntactic, semantic, or communicative functions and are linguistically idiosyncratic from a variety of publications, such as newspaper articles, journals, magazines, novels, and dictionaries. In principle, the idiosyncrasy of MWEs is twofold: first, the semantic non-compositionality (i.e., idiomaticity); second, the strong probabilistic affinity between component words. Here we have treated them differently. The number of words included in a MWE ranges from two to eighteen. The length distribution is shown in Figure 1. 0 5 10 15 20 25 30 35 40 45 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 Length Constituent Ratio (%) Figure 1: Length distribution of MWEs type example Idiom: Semantically Non-Compositional Expression 赤-の-他人aka-no-tanin (lit. red stranger) “complete stranger” Morphologically or Syntactically NonCompositional Expression, CranberryType Expression と-は-いえto-ha-ie “however” SVC: Support-Verb Construction 批判- を- 加える hihan-wo-kuwaeru (lit. add criticism) “criticize” Compound Noun; Compound Verb; Compound Adjective; Compound AdjectiveVerb 打ち-拉がれる uti-hisigareru (lit. be hit and smashed) “become depressed” Four-Character-Idiom 支離-滅裂 siri-meturetu “incoherence” Metaphorical Expression 命-の-限り inoti-no-kagiri (lit. limit of life) “at the risk of life” Quasi-Idiom 辞書-を-引くjisho-wo-hiku (lit. pull dictionary) “look up in a dictionary” Table 1: Non-Compositional Expressions 3.1 Non-Compositional MWEs In our approach, we use non-substitutability criterion to define a word string as an MWE, the logic being that an MWE expression is usually fixed in its form and the substitution of one of its constituent words would yield a meaningless expression or an expression with a meaning that is completely different from that of the original MWE expression. Formally, a word string w1w2… wi…wn (2≤n≤18) is an MWE if it has a definite syntactic, semantic, or communicative function of its own, and if w1w2 … wi’ … wn is either meaningless or has a meaning completely different from that of w1w2…wi…wn for some i, where wi’ is any synonym or synonymous phrase of wi. 
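The substitution judgments behind this criterion were made entirely by hand, but the candidate strings a judge has to inspect can be generated mechanically. The following minimal Python sketch assumes a small, hypothetical synonym table (not part of the JDMWE); it only enumerates the variants w1...wi'...wn, while the meaning comparison itself remains a human decision. The worked example that follows uses the same expression.

    SYNONYMS = {
        "aka": ["sinku", "reddo"],   # illustrative synonyms of "red"; not JDMWE data
    }

    def substitution_variants(words):
        # Yield every string w1 ... wi' ... wn obtained by replacing one constituent
        # with one of its listed synonyms; a judge then checks whether any variant
        # keeps the meaning of the original candidate MWE.
        for i, w in enumerate(words):
            for s in SYNONYMS.get(w, []):
                yield "-".join(words[:i] + [s] + words[i + 1:])

    for v in substitution_variants(["aka", "no", "tanin"]):   # aka-no-tanin
        print(v)    # sinku-no-tanin, reddo-no-tanin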
For example, 赤(w1)-の-他人 aka-no-tanin (lit. “red stranger”) is selected because it has a definite nominal meaning of “complete stranger” and neither 真紅(w1’)-の-他人sinku-no-tanin nor レ ッド(w1’)- の- 他人 reddo-no-tanin means “complete stranger”. The evaluation of semantic relevance of MWEs was carried out by human judges entirely. It is just too difficult to judge the semantic relevance automatically and correctly. Table 1 shows a number of MWEs of this type.1 3.2 Probabilistically Idiosyncratic MWEs An MWE must form a linguistic unit of its own. This and the following transition probability condition constitute another criterion that we adopt to define what an MWE is. Formally, a word string w1w2…wi…wn (2≤n≤18) is an MWE if it has a definite syntactic, semantic, or communicative function of its own, and if its forward or backward transition probability pf(wi+1|w1…wi) or pb(wi|wi+1 …wn), respectively is judged to be in the relatively high range for some i. With this definition, for example, 手-を-拱くte-wo-komaneku “fold arms” is selected as an MWE because it is a well-formed verb phrase and pb( 手| を- 拱く) is judged empirically to be very high. No general probabilistic threshold value can be fixed a priori because the value is expression-dependent. Although the probabilistic judgment was performed, for each expression in turn, on the basis of the developer’s empirical language model, the resulting dataset is consistent with this criterion on 1 These classes are not necessarily disjoint. 163 the whole as shown in section 6.1. Table 2 lists some MWEs of this type.2 type example Cliché, Stereotyped, Hackneyed, or Set Expression 風前-の-灯 fuuzen-no-tomosibi (lit. light in front of the wind) “candle flickering in the wind” Proverb, Old-Saying 急が-ば-回れ isoga-ba-maware (lit. make a detour when in a hurry) “more haste, less speed” Onomatopoeic or Mimetic Expression ノロノロ-と-歩く noronoro-to-aruku (lit. slouchingly walk) “walk slowly” Quasi-Cliché, Institutionalized Phrase 肩-の-荷-を-下ろす kata-no-ni-wo-orosu (lit. lower lord from the shoulder) “take a big load off one’s mind” Table 2: Probabilistically Idiosyncratic Expressions With entries like these, an NLP system can use the JDMWE as a reliable reference while effectively disambiguating the structures in the syntactic analysis process. Of the MWEs in the JDMWE, approximately 38% and 92% of them were judged to meet criterion 3.1 and criterion 3.2, respectively. These are illustrated in Figure 2. Figure 2: Approximate constituent ratio of noncompositional MWEs and probabilistically bound MWEs Figure 3: Example JDMWE entry 2 These classes are not necessarily disjoint. 4 Contents of the JDMWE The JDMWE has approximately 104,000 entries, one for each MWE, composed of six fields, namely, Field-H, -N, -F, -S, -Cf, and -Cb. The dictionary entry form of an MWE is stated in Field-H in the form of a non-segmented hira-kana (phonetic character) string. An example is given in Figure 3. 4.1 Notational Information (Field-N) Japanese has three notational options: hira-kana, kata-kana, and kanji. The two kanas are phonological syllabaries. Kanji are originally Chinese idiographic characters. As we have many kanji characters that are both homophonic and synonymous, sentences can contain kanji replaceable by others. In addition, the inflectional suffix of some verbs can be absent in some contexts. The JDMWE has flexible conventions to cope with these characteristics. 
It uses brackets to indicate an optional word (or a series of interchangeable words marked off by the slash “/”) in the Field-N description. Therefore, the entry whose Field-H (the first field) is きのいいやつkino-ii-yatu (lit. “a guy who has a good spirit”) “good-natured guy”, can have (き/気)-の-(い/良/ 好/善)い-(やつ/奴/ヤツ) in its Field-N. The dash “-” is used as a word boundary indicator. This example can stand for twenty-four combinatorial variants, i.e., きのいいやつ,…, 気の良い奴,…, 気の善いヤツ. If fully expanded with this information, the JDMWE’s total number of MWEs can exceed 750,000. 4.2 Functional Information (Field-F) Linguistic functions of MWEs can be simply classified by means of codes, as shown in Tables 3 and 4. Field-F is filled with one of those codes which corresponds to a root node label in the syntactic tree representation of a MWE. code function size example Cdis Discourse- Connective 1,000 言い-換えれ-ば ii-kaere-ba (lit. if (I) paraphrase) “in other words” Adv Adverbial 6,000 不思議-と fusigi-to “strangely enough” Pren Prenominal- Adjectival 13,700 確-たる kaku-taru “definite” 164 Nom Nominal 12,000 灰汁-の-強さ aku-no-tuyosa (lit. strong taste of lye) “strong harshness” Nd Nominal/ Dynamic 4,700 一目-惚れ hitome-bore “love at first sight” Nk Nominal/State- describing 5,400 二-枚-舌 ni-mai-jita “being double-tongued” Ver Verbal 49,000 油-を-売る abura-wo-uru (lit. sell oil) “idle away” Adj Adjectival 4,600 眼-に-入れ-ても-痛く-ない me-ni-ire-temo-itaku-nai (lit. have no pain even if put into eyes) “an apple in ones eye” K Adjective- Verbal 3,500 経験-豊か keiken-yutaka “abundant in experience” Ono Onomatopoeic or Mimetic Expression 1,300 スラスラ-と surasura-to “smoothly”, “easily”, “fluently” Table 3: Syntactic Functions and Examples code function size example _P Proverb, Old-Saying 2,300 百聞-は-一見-に-如か-ず hyakubun-ha-ikken-ni-sika-zu (lit. hearing about something a hundred times is not as good as seeing it once) “a picture is worth a thousand words” _Self Soliloquy, Monologue 200 困っ-た-なあ komat-ta-naa “Oh boy, we’re in trouble!” _Call Call, Yell 150 済み-ませ-ん-が sumi-mase-n-ga “Excuse me.” _Grt Greeting 200 いらっしゃい-ませ irasshai-mase “Welcome!” _Res Response 350 どう-いたし-まし-て dou-itasi-masi-te “You’re welcome.” Table 4: Communicative Functions and Examples 4.3 Structural Information (Field-S) 4.3.1 Dependency Structure The dependency structure of an MWE is given in Field-S by a phrase marker bracketing the modifier-head pairs, using POS symbols for conceptual words.3 For example, an idiom 真っ赤な- 嘘 makka-na-uso (lit. “crimson lie”) “downright lie” is given a marker [[K00 na] N]. This description represents the structure shown in Figure 4, where K00 and N are POS symbols denoting an adjective-verb stem and a noun, respectively. 3 The intra-sentential dependency relation in Japanese is unilateral, i.e., the left modifier depends on the right head. The JDMWE contains 49,000 verbal entries, making this the largest functional class in the JDMWE. For these verbal entries, more than 90 patterns are actually used as structural descriptors in Field-S. This fact can indicate the broadness of the structural spectrum of Japanese verbal MWEs. Some examples are shown in Table 5. Figure 4: Example of dependency structure given in Field-S example of structural pattern of verbal MWE example of MWE [[N wo] V30] 異-を-唱える i-wo-tonaeru (lit. chant the difference) “raise an objection” [[N ga] V30] 撚り-が-戻る yori-ga-modoru (lit. the twist comes undone) “get reconciled” [[N ni] V30] 手-に-入れる te-ni-ireru (lit. 
put...into hands) “get”, “obtain” [[[[N no] N] ga] V30] 化け-の-皮-が-剥げる bake-no-kawaga-hageru (lit. peel off disguise) “expose the true colors” [[[[N no] N] ni] V30] 玉-の-輿-に-乗る tama-no-kosi-ninoru (lit. ride on a palanquin for the nobility) “marry into wealth” [[N de][[N wo] V30]] 顎-で-人-を-使うago-de-hito-wo-tukau (lit. use person by a chin) “order a person around” [[N ni][[N ga] V30]] 尻-に-火-が-付く siri-ni-higa-tuku (lit. buttocks catch fire) “get in great haste” [[V23 te] V30] 切っ-て-落とす ki-te-otosu (lit. cut and drop) “cut off” [[V23 ba] V30] 打て-ば-響く ute-ba-hibiku (lit. reverberate if hit) “respond quickly” [[[[N ni] V23] te] V30] 束-に-なっ-て-掛かる taba-ni-nat-te-kakaru (lit. attack someone by becoming a bunch) “attack all at once” [Adv [[N ga] V30]] どっと-疲れ-が-出る dotto-tukare-ga-deru (lit. fatigue bursts out) “being suddenly overcome with fatigue” Table 5: Examples of structural types of verbal MWEs (N: noun, V23: verb (adverbial form), V30: verb (end form), Adv: adverb, wo, ga, ni, no, de, te, and ba: particle) 165 4.3.2 Coordinate Structure Approximately 2,500 MWEs in the JDMWE contain internal coordinate structures. This information is described in Field-S by bracketing with “<” and “>”, and the coordinated parts by “(” and “)”. The coordinative phrase specification usually requires that the conjuncts must be parallel with respect to the syntactic function of the constituents appearing in the bracketed description. For example, an expression 後-は-野-と-なれ-山と-なれ ato-ha-no-to-nare-yama-to-nare (lit. “the rest might become either a field or a mountain”) “what will be, will be”, has an internal coordinate structure. Thus, its Field-S is [[N ha]<([[N to] V60])([[N to] V60])>]. This description represents the structure shown in Figure 5, where V60 denotes an imperative form of the verb. Figure 5: Example of the coordinate structure shown by “<” and “>” in Field-S 4.3.3 Non-phrasal Structure Approximately 250 MWEs in the JDMWE are syntactically ill-formed in the sense of context-free grammar but still form a syntactic unit on their own. For example, 揺り籠- から- 墓場- まで yurikago-kara-hakaba-made “from the cradle to the grave” is an adjunct of two postpositional phrases but is often used as a state-describing noun as in 揺り籠-から-墓場-まで-の-保証yurikagokara-hakaba-made-no-hoshou (lit. security of from cradle to grave) “security from the cradle to the grave”. Thus Field-F and Field-S have a functional code Nk and a description [[N kara][[N made] $]], respectively. The symbol “$” denotes a null constituent occupying the position of the governor on which this MWE depends. This structure is shown in Figure 6. Figure 6: Example of a non-phrasal expression with a null constituent marked with “$” in Field-S The total number of structural types specified in Field-S is nearly 6,000. This indicates that Japanese MWEs present a wide structural variety. 4.3.4 Internal Modifiability Some MWEs are not fixed-length word strings, but allow the occurrence of phrasal modifiers internally. In our system, this aspect is captured by prefixing a modifiable element of the structural description stated in the Field-S with an asterisk “*”. An adverbial MWE 上-に-述べ-た-様-に ueni-nobe-ta-you-ni “as I explained above” is one such MWE and thus has a description [[[[[N ni] *V23] ta] N] ni] in Field-S, meaning that the third element V23 is a verb that can be modified internally by adverb phrases. 
Since the asterisk designates such optional phrasal modification, our system allows a derivative expression like 理 由 -を-上-に-詳しく-述べ-た-様-に riyuu-woue-ni-kuwasiku-nobe-ta-you-ni “as I explained in detail the reason above”, which contains two additional, internal modifiers. The structure is shown in Figure 7.4 Figure 7: Example of internal modifiability marked by “*” in Field-S 4 The positions to be taken by an internal modifier can be easily decided by the structural description given in Field-S along with the nest structure requirement. 166 Roughly speaking, 30,000 MWEs in the JDMWE have no asterisk in their Field-S. Our rigid examination reveals that internal modification is not allowed for them. 5 Contextual Condition (Field-Cf , Cb) Approximately 6,700 MWEs need to be classified differently because they require particular forward contexts, i.e., they require co-occurrence of a particular syntactic phrase in the context that immediately precedes them. For example, 顔-をする kao-wo-suru (lit. “do face”) which is a support-verb construction, cannot occur without an immediately preceding adnominal modifier, e.g., the adjective 悲しい kanasii “sad”, yielding 悲し い-顔-を-する kanasii-kao-wo-suru (lit. “do sad face”) “make a sad face”. This adnominal modifier co-occurrence requirement is stipulated in Field-Cf by a code <adnom. modifier>. There are about 30 of these forward contextual requirements. Similarly, backward contextual requirements, of which there are about 70, are stated in Field-Cb. Approximately 300 MWEs require particular backward contexts. 6 Statistical Properties Without a rule system of semantic composition, it is difficult to evaluate the validity of the JDMWE concerning idiomaticity. However, we can confirm that 3,600 Japanese standard idioms that Sato (2007) listed from five Japanese idiom dictionaries published for human readers are included in the JDMWE as a proper subset. In addition, the JDMWE contains the information about their syntactic functions, structures, and flexibilities. 6.1 Comparison with Web N-gram Frequency Data We examined the statistical properties of the JDMWE using the Japanese Web N-gram, version 1: LDC2009T08, which is a word N-gram (1≤N≤7) frequency dataset generated from 2 × 1010 sentences in a Japanese Web corpus, supplied by Google Inc. (Kudo et al. 2009). We will refer to this (or the Web corpus examined) subsequently as GND. We will refer to trigram w1w2w3 as an NpVtrigram only when w1 and w3 are restricted to a noun and a verb (end form), respectively, and w2 is one of the following case-particles: accusative を wo, subjective がga, or dative にni.5 We write the number of occurrences of an expression x, counted in the GND, as C(x). First, we obtain from the GND sets G, T, D, B, and Ri’s defined below, using a Japanese word dictionary IPADIC (Asahara et al. 2003): G={w1w2w3| w1w2w3 ∊ GND, w1w2w3 is an NpV-trigram.} T={w1w2w3| w1w2w3 ∊JDMWE, w1w2w3 is an NpV-trigram.} D={w1w2|∃w3, w1w2w3∊G} B={w1w2|∃w3, w1w2w3∊T} Ri={w1w2w3| w1w2w3∊T, C(w1w2w3) is the i-th largest among C(w1w2v)’s for all w1w2v ∊G}. We then found the following data: ・|B|=10,548 ・|D|=110,822 ・|R1|=4,983, |R2|=1,495, |R3|=786, |R4|=433, … From these, we realize, for example, that 47.2% =(|R1|/|B|)×100 of trigrams in T have verbs that occur most frequently in the GND, succeeding the individual bigrams. An example of such a trigram is アクション-を-起こす akushon-wo-okosu (lit. “raise action”) “take action”. 
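The rank statistics reported here can be recomputed mechanically once G and T are available. The following Python sketch is a minimal illustration of that computation, assuming the GND counts and the JDMWE trigram list have already been read into memory; the variable names are ours and not part of either resource.

    from collections import defaultdict

    # Assumed inputs: gnd_counts maps each NpV-trigram (noun, particle, verb) in G
    # to its GND count C(w1w2w3); jdmwe_trigrams is the set T of NpV-trigrams
    # whose concatenation is a JDMWE entry.
    def rank_ratios(gnd_counts, jdmwe_trigrams):
        verbs_by_bigram = defaultdict(dict)        # bigram prefixes of G and their verbs
        for (n, p, v), c in gnd_counts.items():
            verbs_by_bigram[(n, p)][v] = c
        rank_hist = defaultdict(int)               # |Ri| for each rank i
        for (n, p, v) in jdmwe_trigrams:
            verbs = verbs_by_bigram.get((n, p), {})
            if v not in verbs:
                continue                           # trigram missing from the GND
            ranked = sorted(verbs, key=verbs.get, reverse=True)
            rank_hist[ranked.index(v) + 1] += 1
        b_size = len({(n, p) for (n, p, _) in jdmwe_trigrams})    # |B|
        return {i: 100.0 * r / b_size for i, r in sorted(rank_hist.items())}

Ties among verb counts are broken arbitrarily in this sketch. With the published counts, rank 1 corresponds to the 47.2% just mentioned; the remaining ranks are reported next.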
Similarly, 14.0% = (|R2|/|B|)×100 have the second most frequent verbs, 7.5% have the third most frequent verbs, and so on. Figure 8(a) illustrates the results. From this, we can assume that the higher the probability pf(w3|w1w2) of a trigram w1w2w3, the more likely w3 is to be the verb chosen for w1w2 in the JDMWE. This is consistent with what we wrote in section 3.2. Figure 8(b) is the cumulative variant of Figure 8(a). Extrapolating Figure 8(b) suggests that 10% of the NpV-trigrams in the JDMWE do not occur in the GND. This implies that the size of the Web corpus underlying the GND, i.e., 2 × 10^10 sentences, is not sufficiently large to allow MWE extraction.6

[Footnote 5: The NpV-trigrams represent the typical forms of the shortest Japanese sentences, corresponding roughly to subject-verb, verb-object/direct, and verb-object/indirect constructions in English. Footnote 6: Otherwise, the frequency cut-off point of 20 adopted in the GND is too high.]

Figure 8 (a): Constituent ratio (|Ri|/|B|)×100 for rank i (1≤i≤20) of probability pf(w3|w1w2); (b): cumulative variant of (a) for rank i of probability pf(w3|w1w2)

Second, we calculate the (normalized) entropy Hf(w3|w1w2) for each w1w2 ∊ D, defined below, where the probability pf(w3|w1w2) is estimated by C(w1w2w3)/C(w1w2). This provides a measure of the flatness of the pf(w3|w1w2) distribution, cancelling out the influence of the number N of verb types w3:

Hf(w3|w1w2) = − ( Σw3 pf(w3|w1w2) log pf(w3|w1w2) ) / log N

After arranging the 110,822 bigrams in D in ascending order of Hf(w3|w1w2), we divided them into 20 intervals A1, A2, …, A20, each with an equal number of bigrams (5,542). We then examined how many bigrams in B were included in each interval. Figures 9(a) and (b) plot the resulting constituent ratio of the bigrams in B and the mean value of the Hf(w3|w1w2) values in each interval, respectively. We found, for example, that 1,262 out of 5,542 bigrams are in B for the first interval, i.e., the constituent ratio is 22.8% = (1,262/5,542)×100. Similarly, we obtain 22.5% = (1,248/5,542)×100 for the second interval, 20.5% = (1,136/5,542)×100 for the third, and so on. From this we observe the macroscopic tendency that the larger the entropy Hf(w3|w1w2) of a bigram w1w2, or equivalently the higher the perplexity of the succeeding verb w3, the less likely the bigram is to be adopted as a prefix of a trigram in T. Taking the results in Figure 8 and Figure 9 together, we can presume that verbs that occur not only frequently but also exclusively after a given bigram are the preferred choice in T.

Figure 9 (a): Constituent ratio of the bigrams in B among the bigrams in D in interval k (1≤k≤20); (b): Mean value of the entropies Hf(w3|w1w2) in interval k (1≤k≤20)

Given its relative compactness, this suggests that the JDMWE is generally feasible for effectively disambiguating the syntactic structures of input word strings. The above investigations were carried out on the forward conditional probabilities for restricted types of MWEs. However, the results imply a general validity of the JDMWE, since the same selection criteria were applied to all kinds of multiword expressions.

6.2 Occurrences in Newspapers

We examined 2,500 randomly selected sentences from Nikkei newspaper articles (published in 2009) to determine how many MWE tokens of the JDMWE occur in them.
We found that in 100 sentences an average of 74 tokens of our MWEs were used. This suggests a large lexical coverage of the JDMWE. 7 Concluding Remarks The JDMWE is a slotted tree bank for idiosyncratic multiword expressions, annotated with detailed notational, syntactic information. The idea underlying the JDMWE is that the volume and meticulousness of the lexical resource crucially affects the outcome of the rule-oriented, large-scale NLP. In view of this, the JDMWE was designed to encompass the wide range of linguistic objects related to Japanese MWEs, by placing importance on the recall rate in the selection of the 168 candidate expressions. 7 The statistical properties clarified in this paper imply the general feasibility of the JDMWE at least in the probabilistic respect. Possible fields of application of the JDMWE include, for example: ・Phrase-based machine translation ・Phrase-based speech recognition ・Phrase-based kana-to-kanji conversion ・Search engine for Japanese corpus ・Paraphrasing system ・Japanese dialoguer ・Japanese language education system Another aspect of the JDMWE is that it would provide linguists with lexicological data. For example, the usage of Japanese onomatopoeic adverbs, which are mostly bound probabilistically to specific verbs or adjectives, is extensively catalogued in the JDMWE. The first version of the JDMWE will be released after proofreading.8 If possible, we would like to add further information to each MWE on morphological variants, passivization, relativization, decomposability, paraphrasing, and semantic disambiguation for future versions. Acknowledgments We would like to thank the late Professors Toshihiko Kurihara and Sho Yoshida, who inspired our current research in the 1970s. Similar thanks go to Makoto Nagao. We are also grateful to everyone who assisted in the development of the JDMWE. Further special thanks go to Akira Shimazu, Takano Ogino, and Kenji Yoshimura for their encouragement and useful discussions, to those who worked on the LDC2009T08 and IPADIC, to the three anonymous reviewers for their valuable comments and advice, and to Stephan Howe for advice on matters of English style in the current paper. References Asahara, M. and Matsumoto, Y. 2003. IPADIC version 2.7.0 User’s Manual (in Japanese). NAIST, Information Science Division. 7 The time required to compile this dictionary is estimated at 24,000 working hours. 8 A portion of the JDMWE is available at http://jefi.info/. Baldwin, T. and Bond, F. 2003. Multiword Expressions: Some Problems for Japanese NLP. Proceedings of the 8th Annual Meeting of the Association for Natural Language Processing (Japan): 379–382. Bannard, C. 2007. A Measure of Syntactic Flexibility for Automatically Identifying Multiword Expressions in Corpora. Proceedings of A Broader Perspective on Multiword Expressions, Workshop at the ACL 2007 Conference: 1–8. Baptista, J., Correia, A., and Fernandes, G. 2004. Frozen Sentences of Portuguese: Formal Descriptions for NLP. Proceedings of ACL 2004 Workshop on Multiword Expressions: Integrating Processing: 72– 79. Fazly, A. and Stevenson, S. 2006. Automatically Constructing a Lexicon of Verb Phrase Idiomatic Combinations. Proceedings of the 11th Conference of the European Chapter of the ACL: 337–344. Fellbaum, C. (ed.) 1999. WordNet. An Electronic Lexical Database, Cambridge, MA: MIT Press. Fellbaum, C., Geyken, A., Herold, A., Koerner, F., and Neumann, G. 2006. Corpus-Based Studies of German Idioms and Light Verbs. International Journal of Lexicography, Vol. 19, No. 
4: 349-360. Gross, M. 1986. Lexicon-Grammar. The Representation of Compound Words. Proceedings of the 11th International Conference on Computational Linguistics, COLING86:1–6. Hashimoto, C. and Kawahara, D. 2009. Compilation of an Idiom Example Database for Supervised Idiom Identification. Language Resource and Evaluation Vol. 43, No. 4 : 355-384. Jackendoff, R. 1997. The Architecture of Language Faculty. Cambridge, MA: MIT Press. Koyama, Y., Yasutake, M., Yoshimura, K., and Shudo, K. 1998. Large Scale Collocation Data and Their Application to Japanese Word Processor Technology. Proceedings of the 17th International Conference on Computational Linguistics, COLING98: 694–698. Kudo, T. and Kazawa, H. 2009. Japanese Web N-gram Version 1. Linguistic Data Consortium, Philadelphia. Kuiper, K., McCan, H., Quinn, H., Aitchison, T., and Van der Veer, K. 2003. SAID: A Syntactically Anno tated Idiom Dataset. Linguistic Data Consortium 2003T10. Laporte, É. and Voyatzi, S. 2008. An Electronic Dictionary of French Multiword Adverbs. Proceedings of the LREC Workshop towards a Shared Task for Multiword Expressions (MWE 2008): 31–34. 169 Pantel, P. and Lin, D. 2001. A Statistical Corpus-Based Term Extractor. Proceedings of the 14th Biennial Conference of the Canadian Society on Computational Studies of Intelligence, SpringerVerlag: 36–46. Sag, I. A., Baldwin, T., Bond, F., Copestake, A., and Flickinger, D. 2002. Multiword Expressions: A Pain in the Neck for NLP. Proceedings of the 3rd International Conference on Intelligent Text Processing and Computational Linguistics, CICLING2002: 1–15. Sato, S. 2007. Compilation of a Comparative List of Basic Japanese Idioms from Five Sources (in Japanese). IPSJ SIG Notes 178: 1-6. Shudo, K., Narahara, T., and Yoshida, S. 1980. Morphological Aspect of Japanese Language Processing. Proceedings of the 8th International Conference on Computational Linguistics, COLING80: 1–8. Shudo, K., Tanabe, T., Takahashi, M., and Yoshimura, K. 2004. MWEs as Non-Propositional Content Indicators. Proceedings of ACL 2004 Workshop on Multiword Expressions: Integrating Processing: 31– 39. Uchiyama, K. and Ishizaki, S. 2003. A Disambiguation of Compound Verbs. Proceedings of ACL 2003. Workshop on Multiword Expressions: Analysis, Acquisition and Treatment: 81–88. Villavicencio, A. 2004. Lexical Encoding of MWEs. Proceedings of ACL 2004 Workshop on Multiword Expressions: Integrating Processing: 80–87. 170
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 171–179, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Multi-Modal Annotation of Quest Games in Second Life Sharon Gower Small, Jennifer Stromer-Galley and Tomek Strzalkowski ILS Institute State University of New York at Albany Albany, NY 12222 [email protected], [email protected], [email protected] Abstract We describe an annotation tool developed to assist in the creation of multimodal actioncommunication corpora from on-line massively multi-player games, or MMGs. MMGs typically involve groups of players (5-30) who control their avatars1, perform various activities (questing, competing, fighting, etc.) and communicate via chat or speech using assumed screen names. We collected a corpus of 48 group quests in Second Life that jointly involved 206 players who generated over 30,000 messages in quasisynchronous chat during approximately 140 hours of recorded action. Multiple levels of coordinated annotation of this corpus (dialogue, movements, touch, gaze, wear, etc) are required in order to support development of automated predictors of selected real-life social and demographic characteristics of the players. The annotation tool presented in this paper was developed to enable efficient and accurate annotation of all dimensions simultaneously. 1 Introduction The aim of our project is to predict the real world characteristics of players of massively-multiplayer online games, such as Second Life (SL). We sought to predict actual player attributes like age or education levels, and personality traits including leadership or conformity. Our task was to do so using only the behaviors, communication, and interaction among the players produced during game play. To do so, we logged all players’ avatar movements, 1 All avatar names seen in this paper have been changed to protect players’ identities. “touch events” (putting on or taking off clothing items, for example), and their public chat messages (i.e., messages that can be seen by all players in the group). Given the complex nature of interpreting chat in an online game environment, we required a tool that would allow annotators to have a synchronized view of both the event action as well as the chat utterances. This would allow our annotators to correlate the events and the chat by marking them simultaneously. More importantly, being able to view game events enables more accurate chat annotation; and conversely, viewing chat utterances helps to interpret the significance of certain events in the game, e.g., one avatar following another. For example, an exclamation of: “I can’t do it!” could be simply a response (rejection) to a request from another player; however, when the game action is viewed and the speaker is seen attempting to enter a building without success, another interpretation may arise (an assertion, a call for help, etc.). The Real World (RW) characteristics of SL players (and other on-line games) may be inferred to varying degrees from the appearance of their avatars, the behaviors they engage in, as well as from their on-line chat communications. For example, the avatar gender generally matches the gender of the owner; on the other hand, vocabulary choices in chat are rather poor predictors of a player’s age, even though such correlation is generally seen in real life conversation. 
Second Life2 was the chosen platform because of the ease of creating objects, controlling the play environment, and collecting players’ movement, chat, and other behaviors. We generated a corpus of chat and movement data from 48 quests comprised of 206 participants who generated over 30,000 2 An online Virtual World developed and launched in 2003, by Linden Lab, San Francisco, CA. http://secondlife.com 171 messages and approximately 140 hours of recorded action. We required an annotation tool to help us efficiently annotate dialogue acts and communication links in chat utterances as well as avatar movements from such a large corpus. Moreover, we required correlation between these two dimensions of chat and movement since movement and other actions may be both causes and effects of verbal communication. We developed a multimodal event and chat annotation tool (called RAT, the Relational Annotation Tool), which will simultaneously display a 2D rendering of all movement activity recorded during our Second Life studies, synchronized with the chat utterances. In this way both chat and movements can be annotated simultaneously: the avatar movement actions can be reviewed while making dialogue act annotations. This has the added advantage of allowing the annotator to see the relationships between chat, behavior, and location/movement. This paper will describe our annotation process and the RAT tool. 2 Related Work Annotation tools have been built for a variety of purposes. The CSLU Toolkit (Sutton et al., 1998) is a suite of tools used for annotating spoken language. Similarly, the EMU System (Cassidy and Harrington, 2001) is a speech database management system that supports multi-level annotations. Systems have been created that allow users to readily build their own tools such as AGTK (Bird et al., 2001). The multi-modal tool DAT (Core and Allen, 1997) was developed to assist testing of the DAMSL annotation scheme. With DAT, annotators were able to listen to the actual dialogues as well as view the transcripts. While these tools are all highly effective for their respective tasks, ours is unique in its synchronized view of both event action and chat utterances. Although researchers studying online communication use either off-the shelf qualitative data analysis programs like Atlas.ti or NVivo, a few studies have annotated chat using custom-built tools. One approach uses computer-mediated discourse analysis approaches and the Dynamic Topic Analysis tool (Herring, 2003; Herring & Nix; 1997; StromerGalley & Martison, 2009), which allows annotators to track a specific phenomenon of online interaction in chat: topic shifts during an interaction. The Virtual Math Teams project (Stahl, 2009) created a ated a tool that allowed for the simultaneous playback of messages posted to a quasi-synchronous discussion forum with whiteboard drawings that student math team members used to illustrate their ideas or visualize the math problem they were trying to solve (Çakir, 2009). A different approach to data capture of complex human interaction is found in the AMI Meeting Corpus (Carletta, 2007). It captures participants’ head movement information from individual headmounted cameras, which allows for annotation of nodding (consent, agreement) or shaking (disagreement), as well as participants’ locations within the room; however, no complex events involving series of movements or participant proximity are considered. 
We are unaware of any other tools that facilitate the simultaneous playback of multi-modes of communication and behavior. 3 Second Life Experiments To generate player data, we rented an island in Second Life and developed an approximately two hour quest, the Case of the Missing Moonstone. In this quest, small groups of 4 to 5 players, who were previously unacquainted, work their way together through the clues and puzzles to solve a murder mystery. We recruited Second Life players in-game through advertising and setting up a shop that interested players could browse. We also used Facebook ads, which were remarkably effective. The process of the quest experience for players started after they arrived in a starting area of the island (the quest was open only to players who were made temporary members of our island) where they met other players, browsed questappropriate clothing to adorn their avatars, and received information from one of the researchers. Once all players arrived, the main quest began, progressing through five geographic areas in the island. Players were accompanied by a “training sergeant”, a researcher using a robot avatar, that followed players through the quest and provided hints when groups became stymied along their investigation but otherwise had little interaction with the group. The quest was designed for players to encounter obstacles that required coordinated action, such as all players standing on special buttons to activate a door, or the sharing of information between players, such as solutions to a word puzzle, in order to advance to the next area of the quest (Figure 1). 172 Slimy Roastbeef: “who’s got the square gear?” Kenny Superstar: “I do, but I’m stuck” Slimy Roastbeef: “can you hand it to me?” Kenny Superstar: “i don’t know how” Slimy Roastbeef: “open your inventory, click and drag it onto me” Figure 1: Excerpt of dialogue during a coordination activity Quest activities requiring coordination among the players were common and also necessary to ensure a sufficient degree of movement and message traffic to provide enough material to test our predictions, and to allow us to observe particular social characteristics of players. Players answered a survey before and then again after the quest, providing demographic and trait information and evaluating other members of their group on the characteristics of interest. 3.1 Data Collection We recorded all players’ avatar movements as they purposefully moved avatars through the virtual spaces of the game environment, their public chat, and their “touch events”, which are the actions that bring objects out of player inventories, pick up objects to put in their inventories, or to put objects, such as hats or clothes, onto the avatars, and the like. We followed Yee and Bailenson’s (2008) technical approach for logging player behavior. To get a sense of the volume of data generated, 206 players generated over 30,000 messages into the group’s public chat from the 48 sessions. We compiled approximately 140 hours of recorded action. The avatar logger was implemented to record each avatar’s location through their (x,y,z) coordinates, recorded at two second intervals. This information was later used to render the avatar’s position on our 2D representation of the action (section 4.1). 4 RAT The Relational Annotation Tool (RAT) was built to assist in annotating the massive collection of data collected during the Second Life experiments. 
A tool was needed that would allow annotators to see the textual transcripts of the chat while at the same time view a 2D representation of the action. Additionally, we had a textual transcript for a select set of events: touch an object, stand on an object, attach an object, etc., that we needed to make available to the annotator for review. These tool characteristics were needed for several reasons. First, in order to fully understand the communication and interaction occurring between players in the game environment and accurately annotate those messages, we needed annotators to have as much information about the context as possible. The 2D map coupled with the events information made it easier to understand. For example, in the quest, players in a specific zone, encounter a dead, maimed body. As annotators assigned codes to the chat, they would sometimes encounter exclamations, such as “ew” or “gross”. Annotators would use the 2D map and the location of the exclaiming avatar to determine if the exclamation was a result of their location (in the zone with the dead body) or because of something said or done by another player. Location of avatars on the 2D map synchronized with chat was also helpful for annotators when attempting to disambiguate communicative links. For example, in one subzone, mad scribblings are written on a wall. If player A says “You see that scribbling on the wall?” the annotator needs to use the 2D map to see who the player is speaking to. If player A and player C are both standing in that subzone, then the annotator can make a reasonable assumption that player A is directing the question to player C, and not player B who is located in a different subzone. Second, we annotated coordinated avatar movement actions (such as following each other into a building or into a room), and the only way to readily identify such complex events was through the 2D map of avatar movements. The overall RAT interface, Figure 2, allows the annotator to simultaneously view all modes of representation. There are three distinct panels in this interface. The left hand panel is the 2D representation of the action (section 4.1). The upper right hand panel displays the chat and event transcripts (section 4.2), while the lower right hand portion is reserved for the three annotator sub-panels (section 4.3). 173 Figure 2: RAT interface 4.1 The 2D Game Representation The 2D representation was the most challenging of the panels to implement. We needed to find the proper level of abstraction for the action, while maintaining its usefulness for the annotator. Too complex a representation would cause cognitive overload for the annotator, thus potentially deteriorating the speed and quality of the annotations. Conversely, an overly abstract representation would not be of significant value in the annotation process. There were five distinct geographic areas on our Second Life Island: Starting Area, Mansion, Town Center, Factory and Apartments. An overview of the area in Second Life is displayed in Figure 3. We decided to represent each area separately as each group moves between the areas together, and it was therefore never necessary to display more than one area at a time. The 2D representation of the Mansion Area is displayed in Figure 4 below. Figure 5 is an exterior view of the actual Mansion in Second Life. Each area’s fixed representation was rendered using Java Graphics, reading in the Second Life (x,y,z) coordinates from an XML data file. 
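The panel itself was drawn with Java Graphics, as noted above; the short Python sketch below only illustrates the equivalent coordinate handling, and the XML layout and field names shown are assumptions made for this illustration rather than the actual file format.

    import xml.etree.ElementTree as ET

    # Hypothetical layout file: <area><wall x1="..." y1="..." x2="..." y2="..."/>...</area>
    def load_walls(path):
        return [(float(w.get("x1")), float(w.get("y1")),
                 float(w.get("x2")), float(w.get("y2")))
                for w in ET.parse(path).getroot().iter("wall")]

    def to_panel(x, y, scale=4.0, origin=(0.0, 0.0)):
        # Map Second Life metres onto panel pixels; the z value is handled
        # separately (floor detection).
        return int((x - origin[0]) * scale), int((y - origin[1]) * scale)

Each two-second avatar sample is converted with a mapping of this kind before being drawn onto the fixed area rendering.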
We represented the walls of the buildings as connected solid black lines with openings left for doorways. Key item locations were marked and labeled, e.g. Kitten, maid, the Idol, etc. Even though annotators visited the island to familiarize themselves with the layout, many mansion rooms were labeled to help the annotator recall the layout of the building, and minimize error of annotation based on flawed recall. Finally, the exact time of the action that is currently being represented is displayed in the lower left hand corner. Figure 3: Second Life overview map 174 Figure 4: 2D representation of Second Life action inside the Mansion/Manor Figure 5: Second Life view of Mansion exterior Avatar location was recorded in our log files as an (x,y,z) coordinate at a two second interval. Avatars were represented in our 2D panel as moving solid color circles, using the x and y coordinates. A color coded avatar key was displayed below the 2D representation. This key related the full name of every avatar to its colored circle representation. The z coordinate was used to determine if the avatar was on the second floor of a building. If the z value indicated an avatar was on a second floor, their icon was modified to include the number “2” for the duration of their time on the second floor. Also logged was the avatar’s degree of rotation. Using this we were able to represent which direction the avatar was looking by a small black dot on their colored circle. As the annotators stepped through the chat and event annotation, the action would move forward, in synchronized step in the 2D map. In this way at any given time the annotator could see the avatar action corresponding to the chat and event transcripts appearing in the right panels. The annotator had the option to step forward or backward through the data at any step interval, where each step corresponded to a two second increment or decrement, to provide maximum flexibility to the annotator in viewing and reviewing the actions and communications to be annotated. Additionally, “Play” and “Stop” buttons were added to the tool so the annotator may simply watch the action play forward rather than manually stepping through. 4.2 The Chat & Event Panel Avatar utterances along with logged Second Life events were displayed in the Chat and Event Panel (Figure 6). Utterances and events were each displayed in their own column. Time was recorded for every utterance and event, and this was displayed in the first column of the Chat and Event Panel. All avatar names in the utterances and events were color coded, where the colors corresponded to the avatar color used in the 2D panel. This panel was synchronized with the 2D Representation panel and as the annotator stepped through the game action on the 2D display, the associated utterances and events populated the Chat and Event panel. 175 Figure 6: Chat & Event Panel 4.3 The Annotator Panels The Annotator Panels (Figures 7 and 10) contains all features needed for the annotator to quickly annotate the events and dialogue. Annotators could choose from a number of categories to label each dialogue utterance. Coding categories included communicative links, dialogue acts, and selected multi-avatar actions. In the following we briefly outline each of these. A more detailed description of the chat annotation scheme is available in (Shaikh et al., 2010). 4.3.1 Communicative Links One of the challenges in multi-party dialogue is to establish which user an utterance is directed towards. 
Users do not typically add addressing information in their utterances, which leads to ambiguity while creating a communication link between users. With this annotation level, we asked the annotators to determine whether each utterance was addressed to some user, in which case they were asked to mark which specific user it was addressed to; was in response to another prior utterance by a different user, which required marking the specific utterance responded to; or a continuation of the user’s own prior utterance. Communicative link annotation allows for accurate mapping of dialogue dynamics in the multiparty setting, and is a critical component of tracking such social phenomena as disagreements and leadership. 4.3.2 Dialogue Acts We developed a hierarchy of 19 dialogue acts for annotating the functional aspect of the utterance in the discussion. The tagset we adopted is loosely based on DAMSL (Allen & Core, 1997) and SWBD (Jurafsky et al., 1997), but greatly reduced and also tuned significantly towards dialogue pragmatics and away from more surface characteristics of utterances. In particular, we ask our annotators what is the pragmatic function of each utterance within the dialogue, a decision that often depends upon how earlier utterances were classified. Thus augmented, DA tags become an important source of evidence for detecting language uses and such social phenomena as conformity. Examples of dialogue act tags include Assertion-Opinion, Acknowledge, Information-Request, and Confirmation-Request. Using the augmented DA tagset also presents a fairly challenging task to our annotators, who need to be trained for many hours before an acceptable rate of inter-annotator agreement is achieved. For this reason, we consider our current DA tagging as a work in progress. 4.3.3 Zone coding Each of the five main areas had a corresponding set of subzones. A subzone is a building, a room within a building, or any other identifiable area within the playable spaces of the quest, e.g. the Mansion has the subzones: Hall, Dining Room, Kitchen, Outside, Ghost Room, etc. The subzone was determined based on the avatar(s) (x,y,z) coordinates and the known subzone boundaries. This additional piece of data allowed for statistical analysis at different levels: avatar, dialogue unit, and subzone. 176 Figure 7: Chat Annotation Sub-Panel 4.3.4 Multi-avatar events As mentioned, in addition to chat we also were interested in having the annotators record composite events involving multiple avatars over a span of time and space. While the design of the RAT tool will support annotation of any event of interest with only slight modifications, for our purposes, we were interested in annotating two types of events that we considered significant for our research hypotheses. The first type of event was the multiavatar entry (or exit) into a sub-zone, including the order in which the avatars moved. Figure 8 shows an example of a “Moves into Subzone” annotation as displayed in the Chat & Event Panel. Figure 9 shows the corresponding series of progressive moments in time portraying entry into the Bank subzone as represented in RAT. In the annotation, each avatar name is recorded in order of its entry into the subzone (here, the Bank). Additionally, we record the subzone name and the time the event is completed3. The second type of event we annotated was the “follow X” event, i.e., when one or more avatars appeared to be following one another within a subzone. 
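Both event types could in principle be pre-computed from the logged coordinates; in our workflow they were marked by the annotators from the 2D playback, but a sketch of the automatic variant clarifies what the annotations capture. The subzone rectangle below is an illustrative assumption, not the real boundary data.

    # Illustrative subzone table: name -> (xmin, ymin, xmax, ymax) in region coordinates.
    SUBZONES = {"Bank": (10.0, 40.0, 25.0, 55.0)}     # made-up boundary values

    def subzone_of(x, y):
        for name, (x0, y0, x1, y1) in SUBZONES.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                return name
        return None

    def entry_order(samples):
        # samples: (time, avatar, x, y) tuples logged every two seconds.
        # Returns, for each subzone, the avatars in the order they first entered it.
        orders, seen = {}, {}
        for t, avatar, x, y in sorted(samples):
            zone = subzone_of(x, y)
            if zone and avatar not in seen.setdefault(zone, set()):
                seen[zone].add(avatar)
                orders.setdefault(zone, []).append(avatar)
        return orders

A "follows" event can be approximated in the same way, by checking whether one avatar's positions stay within a small distance of another avatar's positions a few samples earlier.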
These two types of events were of particular interest because we hypothesized that players who are leaders are likely to enter first into a subzone and be followed around once inside. In addition, support for annotation of other types of composite events can be added as needed; for example, group forming and splitting, or certain 3 We are also able to record the start time of any event but for our purposes we were only concerned with the end time. joint activities involving objects, etc. were fairly common in quests and may be significant for some analyses (although not for our hypotheses). For each type of event, an annotation subpanel is created to facilitate speedy markup while minimizing opportunities for error (Figure 10). A “Moves Into Subzone” event is annotated by recording the ordinal (1, 2, 3, etc.) for each avatar. Similarly, a “Follows” event is coded as avatar group “A” follows group “B’, where each group will contain one or more avatars. Figure 8: The corresponding annotation for Figure 9 event, as displayed in the Chat & Event Panel 5 The Annotation Process To annotate the large volume of data generated from the Second Life quests, we developed an annotation guide that defined and described the annotation categories and decision rules annotators were to follow in categorizing the data units (following previous projects (Shaikh et al., 2010). Two students were hired and trained for approximately 60 hours, during which time they learned how to use the annotation tool and the categories and rules for the annotation process. After establishing a satisfactory level of interrater reliability (average Krippendorff’s alpha of all measures was <0.8. Krippendorff’s alpha accounts for the probability of 177 chance agreement and is therefore a conservative measure of agreement), the two students then annotated the 48 groups over a four-month period. It took approximately 230 hours to annotate the sessions, and they assigned over 39,000 dialogue act tags. Annotators spent roughly 7 hours marking up the movements and chat messages per 2.5 hour quest session. Figure 9: A series of progressive moments in time portraying avatar entry into the Bank subzone Figure 10: Event Annotation Sub-Panel, currently showing the “Moves Into Subzone” event from figure 9, as well as: “Kenny follows Elliot in Vault” 5.1 The Annotated Corpus The current version of the annotated corpus consists of thousands of tagged messages including: 4,294 action-directives, 17,129 assertion-opinions, 4,116 information requests, 471 confirmation requests, 394 offer-commits, 3,075 responses to information requests, 1,317 agree-accepts, 215 disagree-rejects, and 2,502 acknowledgements, from 30,535 presplit utterances (31,801 post-split). We also assigned 4,546 following events. 6 Conclusion In this paper we described the successful implementation and use of our multi-modal annotation tool, RAT. Our tool was used to accurately and simultaneously annotate over 30,000 messages and approximately 140 hours of action. For each hour spent annotating, our annotators were able to tag approximately 170 utterances as well as 36 minutes of action. The annotators reported finding the tool highly functional and very efficient at helping them easily assign categories to the relevant data units, and that they could assign those categories without producing too many errors, such as accidentally assigning the wrong category or selecting the wrong avatar. 
The function allowing for the synchronized playback of the chat and movement data coupled with the 2D map increased comprehension of utterances 178 and behavior of the players during the quest, improving validity and reliability of the results. Acknowledgements This research is part of an Air Force Research Laboratory sponsored study conducted by Colorado State University, Ohio University, the University at Albany, SUNY, and Lockheed Martin. References Steven Bird, Kazuaki Maeda, Xiaoyi Ma and Haejoong Lee. 2001. annotation tools based on the annotation graph API. In Proceedings of ACL/EACL 2001 Workshop on Sharing Tools and Resources for Research and Education. M. P. Çakir. 2009. The organization of graphical, narrative and symbolic interactions. In Studying virtual math teams (pp. 99-140). New York, Springer. J. Carletta. 2007. Unleashing the killer corpus: experiences in creating the multi-everything AMI Meeting Corpus. Language Resources and Evaluation Journal 41(2): 181-190. Mark G. Core and James F. Allen. 1997. Coding dialogues with the DAMSL annotation scheme. In Proceedings of AAAI Fall 1997 Symposium. Steve Cassidy and Jonathan Harrington. 2001. Multilevel annotation in the Emu speech database management system. Speech Communication, 33:61-77. S. C. Herring. 2003. Dynamic topic analysis of synchronous chat. Paper presented at the New Research for New Media: Innovative Research Symposium. Minneapolis, MN. S. C. Herring and Nix, C. G. 1997. Is “serious chat” an oxymoron? Pedagogical vs. social use of internet relay chat. Paper presented at the American Association of Applied Linguistics, Orlando, FL. Samira Shaikh, Strzalkowski, T., Broadwell, A., Stromer-Galley, J., Taylor, S., and Webb, N. 2010. MPC: A Multi-party chat corpus for modeling social phenomena in discourse. Proceedings of the Seventh Conference on International Language Resources and Evaluation. Valletta, Malta: European Language Resources Association. G. Stahl. 2009. The VMT vision. In G. Stahl, (Ed.), Studying virtual math teams (pp. 17-29). New York, Springer. Stephen Sutton, Ronald Cole, Jacques De Villiers, Johan Schalkwyk, Pieter Vermeulen, Mike Macon, Yonghong Yan, Ed Kaiser, Brian RunRundle, Khaldoun Shobaki, Paul Hosom, Alex Kain, Johan Wouters, Dominic Massaro, Michael Cohen. 1998. Universal Speech Tools: The CSLU toolkit. Proceedings of the 5th ICSLP, Australia. Jennifer Stromer-Galley and Martinson, A. 2009. Coherence in political computer-mediated communication: Comparing topics in chat. Discourse & Communication, 3, 195-216. N. Yee and Bailenson, J. N. 2008. A method for longitudinal behavioral data collection in Second Life. Presence, 17, 594-596. 179
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 180–189, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics A New Dataset and Method for Automatically Grading ESOL Texts Helen Yannakoudakis Computer Laboratory University of Cambridge United Kingdom [email protected] Ted Briscoe Computer Laboratory University of Cambridge United Kingdom [email protected] Ben Medlock iLexIR Ltd Cambridge United Kingdom [email protected] Abstract We demonstrate how supervised discriminative machine learning techniques can be used to automate the assessment of ‘English as a Second or Other Language’ (ESOL) examination scripts. In particular, we use rank preference learning to explicitly model the grade relationships between scripts. A number of different features are extracted and ablation tests are used to investigate their contribution to overall performance. A comparison between regression and rank preference models further supports our method. Experimental results on the first publically available dataset show that our system can achieve levels of performance close to the upper bound for the task, as defined by the agreement between human examiners on the same corpus. Finally, using a set of ‘outlier’ texts, we test the validity of our model and identify cases where the model’s scores diverge from that of a human examiner. 1 Introduction The task of automated assessment of free text focuses on automatically analysing and assessing the quality of writing competence. Automated assessment systems exploit textual features in order to measure the overall quality and assign a score to a text. The earliest systems used superficial features, such as word and sentence length, as proxies for understanding the text. More recent systems have used more sophisticated automated text processing techniques to measure grammaticality, textual coherence, prespecified errors, and so forth. Deployment of automated assessment systems gives a number of advantages, such as the reduced workload in marking texts, especially when applied to large-scale assessments. Additionally, automated systems guarantee the application of the same marking criteria, thus reducing inconsistency, which may arise when more than one human examiner is employed. Often, implementations include feedback with respect to the writers’ writing abilities, thus facilitating self-assessment and self-tutoring. Implicitly or explicitly, previous work has mostly treated automated assessment as a supervised text classification task, where training texts are labelled with a grade and unlabelled test texts are fitted to the same grade point scale via a regression step applied to the classifier output (see Section 6 for more details). Different techniques have been used, including cosine similarity of vectors representing text in various ways (Attali and Burstein, 2006), often combined with dimensionality reduction techniques such as Latent Semantic Analysis (LSA) (Landauer et al., 2003), generative machine learning models (Rudner and Liang, 2002), domain-specific feature extraction (Attali and Burstein, 2006), and/or modified syntactic parsers (Lonsdale and Strong-Krause, 2003). A recent review identifies twelve different automated free-text scoring systems (Williamson, 2009). Examples include e-Rater (Attali and Burstein, 2006), Intelligent Essay Assessor (IEA) (Landauer et al., 2003), IntelliMetric (Elliot, 2003; Rudner et al., 2006) and Project Essay Grade (PEG) (Page, 2003). 
Several of these are now deployed in highstakes assessment of examination scripts. Although there are many published analyses of the perfor180 mance of individual systems, as yet there is no publically available shared dataset for training and testing such systems and comparing their performance. As it is likely that the deployment of such systems will increase, standardised and independent evaluation methods are important. We make such a dataset of ESOL examination scripts available1 (see Section 2 for more details), describe our novel approach to the task, and provide results for our system on this dataset. We address automated assessment as a supervised discriminative machine learning problem and particularly as a rank preference problem (Joachims, 2002). Our reasons are twofold: Discriminative classification techniques often outperform non-discriminative ones in the context of text classification (Joachims, 1998). Additionally, rank preference techniques (Joachims, 2002) allow us to explicitly learn an optimal ranking model of text quality. Learning a ranking directly, rather than fitting a classifier score to a grade point scale after training, is both a more generic approach to the task and one which exploits the labelling information in the training data efficiently and directly. Techniques such as LSA (Landauer and Foltz, 1998) measure, in addition to writing competence, the semantic relevance of a text written in response to a given prompt. However, although our corpus of manually-marked texts was produced by learners of English in response to prompts eliciting free-text answers, the marking criteria are primarily based on the accurate use of a range of different linguistic constructions. For this reason, we believe that an approach which directly measures linguistic competence will be better suited to ESOL text assessment, and will have the additional advantage that it may not require retraining for new prompts or tasks. As far as we know, this is the first application of a rank preference model to automated assessment (hereafter AA). In this paper, we report experiments on rank preference Support Vector Machines (SVMs) trained on a relatively small amount of data, on identification of appropriate feature types derived automatically from generic text processing tools, on comparison with a regression SVM model, and on the robustness of the best model to ‘outlier’ texts. 1http://www.ilexir.com/ We report a consistent, comparable and replicable set of results based entirely on the new dataset and on public-domain tools and data, whilst also experimentally motivating some novel feature types for the AA task, thus extending the work described in (Briscoe et al., 2010). In the following sections we describe in more detail the dataset used for training and testing, the system developed, the evaluation methodology, as well as ablation experiments aimed at studying the contribution of different feature types to the AA task. We show experimentally that discriminative models with appropriate feature types can achieve performance close to the upper bound, as defined by the agreement between human examiners on the same test corpus. 2 Cambridge Learner Corpus The Cambridge Learner Corpus2 (CLC), developed as a collaborative project between Cambridge University Press and Cambridge Assessment, is a large collection of texts produced by English language learners from around the world, sitting Cambridge Assessment’s English as a Second or Other Language (ESOL) examinations3. 
For the purpose of this work, we extracted scripts produced by learners taking the First Certificate in English (FCE) exam, which assesses English at an upper-intermediate level. The scripts, which are anonymised, are annotated using XML and linked to meta-data about the question prompts, the candidate’s grades, native language and age. The FCE writing component consists of two tasks asking learners to write either a letter, a report, an article, a composition or a short story, between 200 and 400 words. Answers to each of these tasks are annotated with marks (in the range 1–40), which have been fitted to a RASCH model (Fischer and Molenaar, 1995) to correct for inter-examiner inconsistency and comparability. In addition, an overall mark is assigned to both tasks, which is the one we use in our experiments. Each script has been also manually tagged with information about the linguistic errors committed, 2http://www.cup.cam.ac.uk/gb/elt/catalogue/subject/custom/ item3646603/Cambridge-International-Corpus-CambridgeLearner-Corpus/?site locale=en GB 3http://www.cambridgeesol.org/ 181 using a taxonomy of approximately 80 error types (Nicholls, 2003). The following is an example errorcoded sentence: In the morning, you are <NS type = “TV”> waken|woken</NS> up by a singing puppy. In this sentence, TV denotes an incorrect tense of verb error, where waken can be corrected to woken. Our data consists of 1141 scripts from the year 2000 for training written by 1141 distinct learners, and 97 scripts from the year 2001 for testing written by 97 distinct learners. The learners’ ages follow a bimodal distribution with peaks at approximately 16–20 and 26–30 years of age. The prompts eliciting the free text are provided with the dataset. However, in this paper we make no use of prompt information and do not make any attempt to check that the text answer is appropriate to the prompt. Our focus is on developing an accurate AA system for ESOL text that does not require prompt-specific or topic-specific training. There is no overlap between the prompts used in 2000 and in 2001. A typical prompt taken from the 2000 training dataset is shown below: Your teacher has asked you to write a story for the school’s English language magazine. The story must begin with the following words: “Unfortunately, Pat wasn’t very good at keeping secrets”. 3 Approach We treat automated assessment of ESOL text (see Section 2) as a rank preference learning problem (see Section 1). In the experiments reported here we use Support Vector Machines (SVMs) (Vapnik, 1995) through the SVMlight package (Joachims, 1999). Using the dataset described in Section 2, a number of linguistic features are automatically extracted and their contribution to overall performance is investigated. 3.1 Rank preference model SVMs have been extensively used for learning classification, regression and ranking functions. In its basic form, a binary SVM classifier learns a linear threshold function that discriminates data points of two categories. By using a different loss function, the ε-insensitive loss function (Smola, 1996), SVMs can also perform regression. SVMs in regression mode estimate a function that outputs a real number based on the training data. In both cases, the model generalises by computing a hyperplane that has the largest (soft-)margin. In rank preference SVMs, the goal is to learn a ranking function which outputs a score for each data point, from which a global ordering of the data is constructed. 
This procedure requires a set R consisting of training samples $\vec{x}_n$ and their target rankings $r_n$: $R = \{(\vec{x}_1, r_1), (\vec{x}_2, r_2), \ldots, (\vec{x}_n, r_n)\}$ (1) such that $\vec{x}_i \succ_R \vec{x}_j$ when $r_i < r_j$, where $1 \leq i, j \leq n$ and $i \neq j$. A rank preference model is not trained directly on this set of data objects and their labels; rather a set of pair-wise difference vectors is created. The goal of a linear ranking model is to compute a weight vector $\vec{w}$ that maximises the number of correctly ranked pairs: $\forall(\vec{x}_i \succ_R \vec{x}_j) : \vec{w}(\vec{x}_i - \vec{x}_j) > 0$ (2) This is equivalent to solving the following optimisation problem: Minimise: $\frac{1}{2}\|\vec{w}\|^2 + C \sum \xi_{ij}$ (3) Subject to the constraints: $\forall(\vec{x}_i \succ_R \vec{x}_j) : \vec{w}(\vec{x}_i - \vec{x}_j) \geq 1 - \xi_{ij}$ (4) and $\xi_{ij} \geq 0$ (5) The factor C allows a trade-off between the training error and the margin size, while $\xi_{ij}$ are non-negative slack variables that measure the degree of misclassification. The optimisation problem is equivalent to that for the classification model on pair-wise difference vectors. In this case, generalisation is achieved by maximising the differences between closely-ranked data pairs. The principal advantage of applying rank preference learning to the AA task is that we explicitly model the grade relationships between scripts and do not need to apply a further regression step to fit the classifier output to the scoring scheme. The results reported in this paper are obtained by learning a linear classification function. 3.2 Feature set We parsed the training and test data (see Section 2) using the Robust Accurate Statistical Parsing (RASP) system with the standard tokenisation and sentence boundary detection modules (Briscoe et al., 2006) in order to broaden the space of candidate features suitable for the task. The features used in our experiments are mainly motivated by the fact that lexical and grammatical features should be highly discriminative for the AA task. Our full feature set is as follows: i. Lexical ngrams (a) Word unigrams (b) Word bigrams ii. Part-of-speech (PoS) ngrams (a) PoS unigrams (b) PoS bigrams (c) PoS trigrams iii. Features representing syntax (a) Phrase structure (PS) rules (b) Grammatical relation (GR) distance measures iv. Other features (a) Script length (b) Error-rate Word unigrams and bigrams are lower-cased and used in their inflected forms. PoS unigrams, bigrams and trigrams are extracted using the RASP tagger, which uses the CLAWS4 tagset. The most probable posterior tag per word is used to construct PoS ngram features, but we use the RASP parser’s option to analyse words assigned multiple tags when the posterior probability of the highest ranked tag is less than 0.9, and the next n tags have probability greater than 1/50 of it. 4http://ucrel.lancs.ac.uk/claws/ Based on the most likely parse for each identified sentence, we extract the rule names from the phrase structure (PS) tree. RASP’s rule names are semi-automatically generated and encode detailed information about the grammatical constructions found (e.g. V1/modal bse/+-, ‘a VP consisting of a modal auxiliary head followed by an (optional) adverbial phrase, followed by a VP headed by a verb with base inflection’). Moreover, rule names explicitly represent information about peripheral or rare constructions (e.g. S/pp-ap s-r, ‘a S with preposed PP with adjectival complement, e.g. for better or worse, he left’), as well as about fragmentary and likely extragrammatical sequences (e.g. T/txt-frag, ‘a text unit consisting of 2 or more subanalyses that cannot be combined using any rule in the grammar’).
Therefore, we believe that many (longer-distance) grammatical constructions and errors found in texts can be (implicitly) captured by this feature type. In developing our AA system, a number of different grammatical complexity measures were extracted from parses, and their impact on the accuracy of the system was explored. For the experiments reported here, we use complexity measures representing the sum of the longest distance in word tokens between a head and dependent in a grammatical relation (GR) from the RASP GR output, calculated for each GR graph from the top 10 parses per sentence. In particular, we extract the mean and median values of these distances per sentence and use the maximum values per script. Intuitively, this feature captures information about the grammatical sophistication of the writer. However, it may also be confounded in cases where sentence boundaries are not identified through, for example, poor punctuation. Although the CLC contains information about the linguistic errors committed (see Section 2), we try to extract an error-rate in a way that doesn’t require manually tagged data. However, we also use an error-rate calculated from the CLC error tags to obtain an upper bound for the performance of an automated error estimator (true CLC error-rate). In order to estimate the error-rate, we build a trigram language model (LM) using ukWaC (ukWaC LM) (Ferraresi et al., 2008), a large corpus of English containing more than 2 billion tokens. Next, we extend our language model with trigrams extracted from a subset of the texts contained in the 183 Features Pearson’s Spearman’s correlation correlation word ngrams 0.601 0.598 +PoS ngrams 0.682 0.687 +script length 0.692 0.689 +PS rules 0.707 0.708 +complexity 0.714 0.712 Error-rate features +ukWaC LM 0.735 0.758 +CLC LM 0.741 0.773 +true CLC error-rate 0.751 0.789 Table 1: Correlation between the CLC scores and the AA system predicted values. CLC (CLC LM). As the CLC contains texts produced by second language learners, we only extract frequently occurring trigrams from highly ranked scripts to avoid introducing erroneous ones to our language model. A word trigram in test data is counted as an error if it is not found in the language model. We compute presence/absence efficiently using a Bloom filter encoding of the language models (Bloom, 1970). Feature instances of types i and ii are weighted using the tf*idf scheme and normalised by the L2 norm. Feature type iii is weighted using frequency counts, while iii and iv are scaled so that their final value has approximately the same order of magnitude as i and ii. The script length is based on the number of words and is mainly added to balance the effect the length of a script has on other features. Finally, features whose overall frequency is lower than four are discarded from the model. 4 Evaluation In order to evaluate our AA system, we use two correlation measures, Pearson’s product-moment correlation coefficient and Spearman’s rank correlation coefficient (hereafter Pearson’s and Spearman’s correlation respectively). Pearson’s correlation determines the degree to which two linearly dependent variables are related. As Pearson’s correlation is sensitive to the distribution of data and, due to outliers, its value can be misleading, we also report Spearman’s correlation. 
The latter is a nonparametric robust measure of association which is Ablated Pearson’s Spearman’s feature correlation correlation none 0.741 0.773 word ngrams 0.713 0.762 PoS ngrams 0.724 0.737 script length 0.734 0.772 PS rules 0.712 0.731 complexity 0.738 0.760 ukWaC+CLC LM 0.714 0.712 Table 2: Ablation tests showing the correlation between the CLC and the AA system. sensitive only to the ordinal arrangement of values. As our data contains some tied values, we calculate Spearman’s correlation by using Pearson’s correlation on the ranks. Table 1 presents the Pearson’s and Spearman’s correlation between the CLC scores and the AA system predicted values, when incrementally adding to the model the feature types described in Section 3.2. Each feature type improves the model’s performance. Extending our language model with frequent trigrams extracted from the CLC improves Pearson’s and Spearman’s correlation by 0.006 and 0.015 respectively. The addition of the error-rate obtained from the manually annotated CLC error tags on top of all the features further improves performance by 0.01 and 0.016. An evaluation of our best error detection method shows a Pearson correlation of 0.611 between the estimated and the true CLC error counts. This suggests that there is room for improvement in the language models we developed to estimate the error-rate. In the experiments reported hereafter, we use the ukWaC+CLC LM to calculate the error-rate. In order to assess the independent as opposed to the order-dependent additive contribution of each feature type to the overall performance of the system, we run a number of ablation tests. An ablation test consists of removing one feature of the system at a time and re-evaluating the model on the test set. Table 2 presents Pearson’s and Spearman’s correlation between the CLC and our system, when removing one feature at a time. All features have a positive effect on performance, while the error-rate has a big impact, as its absence is responsible for a 0.061 decrease of Spearman’s correlation. In addition, the 184 Model Pearson’s Spearman’s correlation correlation Regression 0.697 0.706 Rank preference 0.741 0.773 Table 3: Comparison between regression and rank preference model. removal of either the word ngrams, the PS rules, or the error-rate estimate contributes to a large decrease in Pearson’s correlation. In order to test the significance of the improved correlations, we ran one-tailed t-tests with a = 0.05 for the difference between dependent correlations (Williams, 1959; Steiger, 1980). The results showed that PoS ngrams, PS rules, the complexity measures, and the estimated error-rate contribute significantly to the improvement of Spearman’s correlation, while PS rules also contribute significantly to the improvement of Pearson’s correlation. One of the main approaches adopted by previous systems involves the identification of features that measure writing skill, and then the application of linear or stepwise regression to find optimal feature weights so that the correlation with manually assigned scores is maximised. We trained a SVM regression model with our full set of feature types and compared it to the SVM rank preference model. The results are given in Table 3. The rank preference model improves Pearson’s and Spearman’s correlation by 0.044 and 0.067 respectively, and these differences are significant, suggesting that rank preference is a more appropriate model for the AA task. 
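The pair-wise construction behind the rank preference model translates almost directly into code. The sketch below is purely illustrative and is not the authors' setup: it stands in scikit-learn's LinearSVC on explicitly built difference vectors for the SVMlight ranking mode used in the paper, and it uses random arrays in place of the real feature matrices and CLC marks.

```python
# Minimal sketch of a rank-preference model trained on pair-wise
# difference vectors, evaluated with Pearson's and Spearman's correlation
# as in Section 4. Toy data stands in for the real features and marks.
import itertools
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.svm import LinearSVC

def pairwise_differences(X, scores):
    """Build difference vectors x_i - x_j for every pair with unequal scores."""
    diffs, labels = [], []
    for i, j in itertools.combinations(range(len(scores)), 2):
        if scores[i] == scores[j]:
            continue  # ties give no ordering information
        diffs.append(X[i] - X[j])
        labels.append(1 if scores[i] > scores[j] else -1)
    return np.array(diffs), np.array(labels)

# Random placeholders for tf*idf-weighted feature vectors and gold marks.
rng = np.random.RandomState(0)
X_train, y_train = rng.rand(40, 50), rng.randint(1, 41, size=40)
X_test, y_test = rng.rand(10, 50), rng.randint(1, 41, size=10)

D, L = pairwise_differences(X_train, y_train)
ranker = LinearSVC(C=1.0, fit_intercept=False)  # hyperplane through the origin
ranker.fit(D, L)

# The learned weight vector scores new scripts; a global ordering follows.
predicted = X_test @ ranker.coef_.ravel()
print("Pearson:", pearsonr(predicted, y_test)[0])
print("Spearman:", spearmanr(predicted, y_test)[0])
```

Training an ordinary linear classifier on the difference vectors, with no intercept term, mirrors the constrained optimisation in Equations (2)-(5); the section above notes the same equivalence between the ranking problem and classification on pair-wise differences.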
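The ablation results also show that the error-rate estimate is the feature whose removal hurts most. That estimate rests on trigram look-ups against a language model encoded as a Bloom filter (Section 3.2). A minimal sketch of the idea follows; the hash scheme, filter size, and the three-trigram "language model" are invented for illustration and are unrelated to the ukWaC and CLC models used in the paper.

```python
# Illustrative sketch of the error-rate feature: a word trigram is counted
# as an error if it is absent from the language model, with membership
# tested through a Bloom filter.
import hashlib

class BloomFilter:
    def __init__(self, size_bits=2 ** 20, num_hashes=4):
        self.size, self.num_hashes = size_bits, num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        for i in range(self.num_hashes):
            h = hashlib.md5(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

def estimated_error_rate(tokens, lm_filter):
    trigrams = [" ".join(tokens[i:i + 3]) for i in range(len(tokens) - 2)]
    if not trigrams:
        return 0.0
    errors = sum(1 for t in trigrams if t not in lm_filter)
    return errors / len(trigrams)

lm = BloomFilter()
for trigram in ["in the morning", "the morning you", "you are woken"]:
    lm.add(trigram)

script = "in the morning you are waken up".split()
print(estimated_error_rate(script, lm))
```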
Four senior and experienced ESOL examiners remarked the 97 FCE test scripts drawn from 2001 exams, using the marking scheme from that year (see Section 2). In order to obtain a ceiling for the performance of our system, we calculate the average correlation between the CLC and the examiners’ scores, and find an upper bound of 0.796 and 0.792 Pearson’s and Spearman’s correlation respectively. In order to evaluate the overall performance of our system, we calculate its correlation with the four senior examiners in addition to the RASCH-adjusted CLC scores. Tables 4 and 5 present the results obtained. The average correlation of the AA system with the CLC and the examiner scores shows that it is close CLC E1 E2 E3 E4 AA CLC 0.820 0.787 0.767 0.810 0.741 E1 0.820 0.851 0.845 0.878 0.721 E2 0.787 0.851 0.775 0.788 0.730 E3 0.767 0.845 0.775 0.779 0.747 E4 0.810 0.878 0.788 0.779 0.679 AA 0.741 0.721 0.730 0.747 0.679 Avg 0.785 0.823 0.786 0.782 0.786 0.723 Table 4: Pearson’s correlation of the AA system predicted values with the CLC and the examiners’ scores, where E1 refers to the first examiner, E2 to the second etc. CLC E1 E2 E3 E4 AA CLC 0.801 0.799 0.788 0.782 0.773 E1 0.801 0.809 0.806 0.850 0.675 E2 0.799 0.809 0.744 0.787 0.724 E3 0.788 0.806 0.744 0.794 0.738 E4 0.782 0.850 0.787 0.794 0.697 AA 0.773 0.675 0.724 0.738 0.697 Avg 0.788 0.788 0.772 0.774 0.782 0.721 Table 5: Spearman’s correlation of the AA system predicted values with the CLC and the examiners’ scores, where E1 refers to the first examiner, E2 to the second etc. to the upper bound for the task. Human–machine agreement is comparable to that of human–human agreement, with the exception of Pearson’s correlation with examiner E4 and Spearman’s correlation with examiners E1 and E4, where the discrepancies are higher. It is likely that a larger training set and/or more consistent grading of the existing training data would help to close this gap. However, our system is not measuring some properties of the scripts, such as discourse cohesion or relevance to the prompt eliciting the text, that examiners will take into account. 5 Validity tests The practical utility of an AA system will depend strongly on its robustness to subversion by writers who understand something of its workings and attempt to exploit this to maximise their scores (independently of their underlying ability). Surprisingly, there is very little published data on the robustness of existing systems. However, Powers et al. (2002) invited writing experts to trick the scoring 185 capabilities of an earlier version of e-Rater (Burstein et al., 1998). e-Rater (see Section 6 for more details) assigns a score to a text based on linguistic feature types extracted using relatively domain-specific techniques. Participants were given a description of these techniques as well as of the cue words that the system uses. The results showed that it was easier to fool the system into assigning higher than lower scores. Our goal here is to determine the extent to which knowledge of the feature types deployed poses a threat to the validity of our system, where certain text generation strategies may give rise to large positive discrepancies. As mentioned in Section 2, the marking criteria for FCE scripts are primarily based on the accurate use of a range of different grammatical constructions relevant to specific communicative goals, but our system assesses this indirectly. We extracted 6 high-scoring FCE scripts from the CLC that do not overlap with our training and test data. 
Based on the features used by our system and without bias towards any modification, we modified each script in one of the following ways: i. Randomly order: (a) word unigrams within a sentence (b) word bigrams within a sentence (c) word trigrams within a sentence (d) sentences within a script ii. Swap words that have the same PoS within a sentence Although the above modifications do not exhaust the potential challenges a deployed AA system might face, they represent a threat to the validity of our system since we are using a highly related feature set. In total, we create 30 such ‘outlier’ texts, which were given to an ESOL examiner for marking. Using the ‘outlier’ scripts as well as their original/unmodified versions, we ran our system on each modification separately and calculated the correlation between the predicted values and the examiner’s scores. Table 6 presents the results. The predicted values of the system have a high correlation with the examiner’s scores when tested on ‘outlier’ texts of modification types i(a), i(b) and Modification Pearson’s Spearman’s correlation correlation i(a) 0.960 0.912 i(b) 0.938 0.914 i(c) 0.801 0.867 i(d) 0.08 0.163 ii 0.634 0.761 Table 6: Correlation between the predicted values and the examiner’s scores on ‘outlier’ texts. i(c). However, as i(c) has a lower correlation compared to i(a) and i(b), it is likely that a random ordering of ngrams with N > 3 will further decrease performance. A modification of type ii, where words with the same PoS within a sentence are swapped, results in a Pearson and Spearman correlation of 0.634 and 0.761 respectively. Analysis of the results showed that our system predicted higher scores than the ones assigned by the examiner. This can be explained by the fact that texts produced using modification type ii contain a small portion of correct sentences. However, the marking criteria are based on the overall writing quality. The final case, where correct sentences are randomly ordered, receives the lowest correlation. As our system is not measuring discourse cohesion, discrepancies are much higher; the system’s predicted scores are high whilst the ones assigned by the examiner are very low. However, for a writer to be able to generate text of this type already requires significant linguistic competence, whilst a number of generic methods for assessing text and/or discourse cohesion have been developed and could be deployed in an extended version of our system. It is also likely that highly creative ‘outlier’ essays may give rise to large negative discrepancies. Recent comments in the British media have focussed on this issue, reporting that, for example, one deployed essay marking system assigned Winston Churchill’s speech ‘We Shall Fight on the Beaches’ a low score because of excessive repetition5. Our model predicted a high passing mark for this text, but not the highest one possible, that some journalists clearly feel it deserves. 5http://news.bbc.co.uk/1/hi/education/8356572.stm 186 6 Previous work In this section we briefly discuss a number of the more influential and/or better described approaches. P´erez-Mar´ın et al. (2009), Williamson (2009), Dikli (2006) and Valenti et al. (2003) provide a more detailed overview of existing AA systems. Project Essay Grade (PEG) (Page, 2003), one of the earliest systems, uses a number of manuallyidentified mostly shallow textual features, which are considered to be proxies for intrinsic qualities of writing competence. 
Linear regression is used to assign optimal feature weights that maximise the correlation with the examiner’s scores. The main issue with this system is that features such as word length and script length are easy to manipulate independently of genuine writing ability, potentially undermining the validity of the system. In e-Rater (Attali and Burstein, 2006), texts are represented using vectors of weighted features. Each feature corresponds to a different property of texts, such as an aspect of grammar, style, discourse and topic similarity. Additional features, representing stereotypical grammatical errors for example, are extracted using manually-coded task-specific detectors based, in part, on typical marking criteria. An unmarked text is scored based on the cosine similarity between its weighted vector and the ones in the training set. Feature weights and/or scores can be fitted to a marking scheme by stepwise or linear regression. Unlike our approach, e-Rater models discourse structure, semantic coherence and relevance to the prompt. However, the system contains manually developed task-specific components and requires retraining or tuning for each new prompt and assessment task. Intelligent Essay Assessor (IEA) (Landauer et al., 2003) uses Latent Semantic Analysis (LSA) (Landauer and Foltz, 1998) to compute the semantic similarity between texts, at a specific grade point, and a test text. In LSA, text is represented by a matrix, where rows correspond to words and columns to context (texts). Singular Value Decomposition (SVD) is used to obtain a reduced dimension matrix clustering words and contexts. The system is trained on topic and/or prompt specific texts while test texts are assigned a score based on the ones in the training set that are most similar. The overall score, which is calculated using regression techniques, is based on the content score as well as on other properties of texts, such as style, grammar, and so forth, though the methods used to assess these are not described in any detail in published work. Again, the system requires retraining or tuning for new prompts and assessment tasks. Lonsdale and Strong-Krause (2003) use a modified syntactic parser to analyse and score texts. Their method is based on a modified version of the Link Grammar parser (Sleator and Templerley, 1995) where the overall score of a text is calculated as the average of the scores assigned to each sentence. Sentences are scored on a five-point scale based on the parser’s cost vector, which roughly measures the complexity and deviation of a sentence from the parser’s grammatical model. This approach bears some similarities to our use of grammatical complexity and extragrammaticality features, but grammatical features represent only one component of our overall system, and of the task. The Bayesian Essay Test Scoring sYstem (BETSY) (Rudner and Liang, 2002) uses multinomial or Bernoulli Naive Bayes models to classify texts into different classes (e.g. pass/fail, grades A– F) based on content and style features such as word unigrams and bigrams, sentence length, number of verbs, noun–verb pairs etc. Classification is based on the conditional probability of a class given a set of features, which is calculated using the assumption that each feature is independent of the other. 
This system shows that treating AA as a text classification problem is viable, but the feature types are all fairly shallow, and the approach doesn’t make efficient use of the training data as a separate classifier is trained for each grade point. Recently, Chen et al. (2010) has proposed an unsupervised approach to AA of texts addressing the same topic, based on a voting algorithm. Texts are clustered according to their grade and given an initial Z-score. A model is trained where the initial score of a text changes iteratively based on its similarity with the rest of the texts as well as their Zscores. The approach might be better described as weakly supervised as the distribution of text grades in the training data is used to fit the final Z-scores to grades. The system uses a bag-of-words representation of text, so would be easy to subvert. Never187 theless, exploration of the trade-offs between degree of supervision required in training and grading accuracy is an important area for future research. 7 Conclusions and future work Though many of the systems described in Section 6 have been shown to correlate well with examiners’ marks on test data in many experimental contexts, no cross-system comparisons are available because of the lack of a shared training and test dataset. Furthermore, none of the published work of which we are aware has systematically compared the contribution of different feature types to the AA task, and only one (Powers et al., 2002) assesses the ease with which the system can be subverted given some knowledge of the features deployed. We have shown experimentally how rank preference models can be effectively deployed for automated assessment of ESOL free-text answers. Based on a range of feature types automatically extracted using generic text processing techniques, our system achieves performance close to the upper bound for the task. Ablation tests highlight the contribution of each feature type to the overall performance, while significance of the resulting improvements in correlation with human scores has been calculated. A comparison between regression and rank preference models further supports our approach. Preliminary experiments based on a set of ‘outlier’ texts have shown the types of texts for which the system’s scoring capability can be undermined. We plan to experiment with better error detection techniques, since the overall error-rate of a script is one of the most discriminant features. Briscoe et al. (2010) describe an approach to automatic offprompt detection which does not require retraining for each new question prompt and which we plan to integrate with our system. It is clear from the ‘outlier’ experiments reported here that our system would benefit from features assessing discourse coherence, and to a lesser extent from features assessing semantic (selectional) coherence over longer bounds than those captured by ngrams. The addition of an incoherence metric to the feature set of an AA system has been shown to improve performance significantly (Miltsakaki and Kukich, 2000; Miltsakaki and Kukich, 2004). Finally, we hope that the release of the training and test dataset described here will facilitate further research on the AA task for ESOL free text and, in particular, precise comparison of different systems, feature types, and grade fitting methods. Acknowledgements We would like to thank Cambridge ESOL, a division of Cambridge Assessment, for permission to use and distribute the examination scripts. 
We are also grateful to Cambridge Assessment for arranging for the test scripts to be remarked by four of their senior examiners. Finally, we would like to thank Marek Rei, Øistein Andersen and the anonymous reviewers for their useful comments. References Yigal Attali and Jill Burstein. 2006. Automated essay scoring with e-rater v.2. Journal of Technology, Learning, and Assessment, 4(3):1–30. Burton H. Bloom. 1970. Space/time trade-offs in hash coding with allowable errors. Communications of the ACM, 13(7):422–426, July. E.J. Briscoe, J. Carroll, and R Watson. 2006. The second release of the RASP system. In ACL-Coling’06 Interactive Presentation Session, pages 77–80, Sydney, Australia. E.J. Briscoe, B. Medlock, and Ø. Andersen. 2010. Automated Assessment of ESOL Free Text Examinations. Cambridge University, Computer Laboratory, TR-790. Jill Burstein, Karen Kukich, Susanne Wolff, Chi Lu, Martin Chodorow, Lisa Braden-Harder, and Mary Dee Harris. 1998. Automated scoring using a hybrid feature identification technique. Proceedings of the 36th annual meeting on Association for Computational Linguistics, pages 206–210. YY Chen, CL Liu, TH Chang, and CH Lee. 2010. An Unsupervised Automated Essay Scoring System. IEEE Intelligent Systems, pages 61–67. Semire Dikli. 2006. An overview of automated scoring of essays. Journal of Technology, Learning, and Assessment, 5(1). S. Elliot. 2003. IntelliMetric: From here to validity. In M.D. Shermis and J.C. Burstein, editors, Automated essay scoring: A cross-disciplinary perspective, pages 71–86. Adriano Ferraresi, Eros Zanchetta, Marco Baroni, and Silvia Bernardini. 2008. Introducing and evaluating ukWaC, a very large web-derived corpus of English. 188 In S. Evert, A. Kilgarriff, and S. Sharoff, editors, Proceedings of the 4th Web as Corpus Workshop (WAC-4). G.H. Fischer and I.W. Molenaar. 1995. Rasch models: Foundations, recent developments, and applications. Springer. Thorsten Joachims. 1998. Text categorization with support vector machines: Learning with many relevant features. In Proceedings of the European Conference on Machine Learning, pages 137–142. Springer. Thorsten Joachims. 1999. Making large scale SVM learning practical. In B. Sch¨olkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning. MIT Press. Thorsten Joachims. 2002. Optimizing search engines using clickthrough data. In Proceedings of the ACM Conference on Knowledge Discovery and Data Mining (KDD), pages 133–142. ACM. T.K. Landauer and P.W. Foltz. 1998. An introduction to latent semantic analysis. Discourse processes, pages 259–284. T.K. Landauer, D. Laham, and P.W. Foltz. 2003. Automated scoring and annotation of essays with the Intelligent Essay Assessor. In M.D. Shermis and J.C. Burstein, editors, Automated essay scoring: A crossdisciplinary perspective, pages 87–112. Deryle Lonsdale and D. Strong-Krause. 2003. Automated rating of ESL essays. In Proceedings of the HLT-NAACL 2003 Workshop: Building Educational Applications Using Natural Language Processing. Eleni Miltsakaki and Karen Kukich. 2000. Automated evaluation of coherence in student essays. In Proceedings of LREC 2000. Eleni Miltsakaki and Karen Kukich. 2004. Evaluation of text coherence for electronic essay scoring systems. Natural Language Engineering, 10(01):25–55, March. D. Nicholls. 2003. The Cambridge Learner Corpus: Error coding and analysis for lexicography and ELT. In Proceedings of the Corpus Linguistics 2003 conference, pages 572–581. E.B. Page. 2003. Project essay grade: PEG. 
In M.D. Shermis and J.C. Burstein, editors, Automated essay scoring: A cross-disciplinary perspective, pages 43– 54. D. P´erez-Mar´ın, Ismael Pascual-Nieto, and P. Rodr´ıguez. 2009. Computer-assisted assessment of free-text answers. The Knowledge Engineering Review, 24(04):353–374, December. D.E. Powers, J.C. Burstein, M. Chodorow, M.E. Fowles, and K. Kukich. 2002. Stumping e-rater: challenging the validity of automated essay scoring. Computers in Human Behavior, 18(2):103–134. L.M. Rudner and Tahung Liang. 2002. Automated essay scoring using Bayes’ theorem. The Journal of Technology, Learning and Assessment, 1(2):3–21. L.M. Rudner, Veronica Garcia, and Catherine Welch. 2006. An Evaluation of the IntelliMetric Essay Scoring System. Journal of Technology, Learning, and Assessment, 4(4):1–21. D.D.K. Sleator and D. Templerley. 1995. Parsing English with a link grammar. Proceedings of the 3rd International Workshop on Parsing Technologies, ACL. AJ Smola. 1996. Regression estimation with support vector learning machines. Master’s thesis, Technische Universit¨at Munchen. J.H. Steiger. 1980. Tests for comparing elements of a correlation matrix. Psychological Bulletin, 87(2):245– 251. Salvatore Valenti, Francesca Neri, and Alessandro Cucchiarelli. 2003. An overview of current research on automated essay grading. Journal of Information Technology Education, 2:3–118. Vladimir N. Vapnik. 1995. The nature of statistical learning theory. Springer. E. J. Williams. 1959. The Comparison of Regression Variables. Journal of the Royal Statistical Society. Series B (Methodological), 21(2):396–399. DM Williamson. 2009. A Framework for Implementing Automated Scoring. In Annual Meeting of the American Educational Research Association and the National Council on Measurement in Education, San Diego, CA. 189
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 12–21, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Deciphering Foreign Language Sujith Ravi and Kevin Knight University of Southern California Information Sciences Institute Marina del Rey, California 90292 {sravi,knight}@isi.edu Abstract In this work, we tackle the task of machine translation (MT) without parallel training data. We frame the MT problem as a decipherment task, treating the foreign text as a cipher for English and present novel methods for training translation models from nonparallel text. 1 Introduction Bilingual corpora are a staple of statistical machine translation (SMT) research. From these corpora, we estimate translation model parameters: wordto-word translation tables, fertilities, distortion parameters, phrase tables, syntactic transformations, etc. Starting with the classic IBM work (Brown et al., 1993), training has been viewed as a maximization problem involving hidden word alignments (a) that are assumed to underlie observed sentence pairs (e, f): arg max θ Y e,f Pθ(f|e) (1) = arg max θ Y e,f X a Pθ(f, a|e) (2) Brown et al. (1993) give various formulas that boil Pθ(f, a|e) down to the specific parameters to be estimated. Of course, for many language pairs and domains, parallel data is not available. In this paper, we address the problem of learning a full translation model from non-parallel data, and we use the learned model to translate new foreign strings. As successful work develops along this line, we expect more domains and language pairs to be conquered by SMT. How can we learn a translation model from nonparallel data? Intuitively, we try to construct translation model tables which, when applied to observed foreign text, consistently yield sensible English. This is essentially the same approach taken by cryptanalysts and epigraphers when they deal with source texts. In our case, we observe a large number of foreign strings f, and we apply maximum likelihood training: arg max θ Y f Pθ(f) (3) Following Weaver (1955), we imagine that this corpus of foreign strings “is really written in English, but has been coded in some strange symbols,” thus: arg max θ Y f X e P(e) · Pθ(f|e) (4) The variable e ranges over all possible English strings, and P(e) is a language model built from large amounts of English text that is unrelated to the foreign strings. Re-writing for hidden alignments, we get: arg max θ Y f X e P(e) · X a Pθ(f, a|e) (5) Note that this formula has the same free Pθ(f, a|e) parameters as expression (2). We seek to manipulate these parameters in order to learn the 12 same full translation model. We note that for each f, not only is the alignment a still hidden, but now the English translation e is hidden as well. A language model P(e) is typically used in SMT decoding (Koehn, 2009), but here P(e) actually plays a central role in training translation model parameters. To distinguish the two, we refer to (5) as decipherment, rather than decoding. We can now draw on previous decipherment work for solving simpler substitution/transposition ciphers (Bauer, 2006; Knight et al., 2006). We must keep in mind, however, that foreign language is a much more demanding code, involving highly nondeterministic mappings and very large substitution tables. 
The contributions of this paper are therefore: • We give first results for training a full translation model from non-parallel text, and we apply the model to translate previously-unseen text. This work is thus distinguished from prior work on extracting or augmenting partial lexicons using non-parallel corpora (Rapp, 1995; Fung and McKeown, 1997; Koehn and Knight, 2000; Haghighi et al., 2008). It also contrasts with self-training (McClosky et al., 2006), which requires a parallel seed and often does not engage in iterative maximization. • We develop novel methods to deal with large-scale vocabularies inherent in MT problems. 2 Word Substitution Decipherment Before we tackle machine translation without parallel data, we first solve a simpler problem—word substitution decipherment. Here, we do not have to worry about hidden alignments since there is only one alignment. In a word substitution cipher, every word in the natural language (plaintext) sequence is substituted by a cipher token, according to a substitution key. The key is deterministic—there exists a 1-to-1 mapping between cipher units and the plaintext words they encode. For example, the following English plaintext sequences: I SAW THE BOY . THE BOY RAN . may be enciphered as: xyzz fxyy crqq tmnz lxwz crqq tmnz gdxx lxwz according to the key: THE →crqq, SAW →fxyy, RAN →gdxx, . →lxwz, BOY →tmnz, I →xyzz The goal of word substitution decipherment is to guess the original plaintext from given cipher data without any knowledge of the substitution key. Word substitution decipherment is a good test-bed for unsupervised statistical NLP techniques for two reasons—(1) we face large vocabularies and corpora sizes typically seen in large-scale MT problems, so our methods need to scale well, (2) similar decipherment techniques can be applied for solving NLP problems such as unsupervised part-of-speech tagging. Probabilistic decipherment: Our decipherment method follows a noisy-channel approach. We first model the process by which the ciphertext sequence c = c1...cn is generated. The generative story for decipherment is described here: 1. Generate an English plaintext sequence e = e1...en, with probability P(e). 2. Substitute each plaintext word ei with a ciphertext token ci, with probability Pθ(ci|ei) in order to generate the ciphertext sequence c = c1...cn. We model P(e) using a statistical word n-gram English language model (LM). During decipherment, our goal is to estimate the channel model parameters θ. Re-writing Equations 3 and 4 for word substitution decipherment, we get: $\arg\max_{\theta} \prod_{c} P_\theta(c)$ (6) $= \arg\max_{\theta} \prod_{c} \sum_{e} P(e) \cdot \prod_{i=1}^{n} P_\theta(c_i|e_i)$ (7) Challenges: Unlike letter substitution ciphers (having only 26 plaintext letters), here we have to deal with large-scale vocabularies (10k-1M word types) and corpora sizes (100k cipher tokens). This poses some serious scalability challenges for word substitution decipherment. We propose novel methods that can deal with these challenges effectively and solve word substitution ciphers: 1. EM solution: We would like to use the Expectation Maximization (EM) algorithm (Dempster et al., 1977) to estimate θ from Equation 7, but EM training is not feasible in our case. First, EM cannot scale to such large vocabulary sizes (running the forward-backward algorithm for each iteration requires $O(V^2)$ time). Secondly, we need to instantiate the entire channel and resulting derivation lattice before we can run EM, and this is too big to be stored in memory.
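To make the scalability argument concrete, the sketch below spells out what maximising Equation 7 with plain EM involves: a dense channel table over every (cipher word, plaintext word) pair and an E-step that sums over all candidate plaintext sequences. For simplicity the sketch enumerates those sequences by brute force rather than running forward-backward, and the vocabulary, bigram language model, and cipher sentences are all invented for illustration; none of this is the authors' code.

```python
# Toy illustration of the full-EM formulation in Equations 6-7: the channel
# is a dense V_plain x V_cipher table and the E-step considers every possible
# plaintext sequence. Both blow up at realistic vocabulary sizes, which is
# the motivation for Iterative EM.
from itertools import product
from collections import defaultdict

plain_vocab = ["the", "boy", "saw", "ran", "."]
cipher_corpus = [["crqq", "tmnz", "gdxx", "lxwz"], ["crqq", "tmnz", "lxwz"]]
cipher_vocab = sorted({c for sent in cipher_corpus for c in sent})

# Bigram LM P(e_i | e_{i-1}); "<s>" marks the sentence start. Toy numbers.
bigram = defaultdict(lambda: 1e-3)
bigram.update({("<s>", "the"): 0.6, ("the", "boy"): 0.7, ("boy", "ran"): 0.3,
               ("boy", "saw"): 0.2, ("ran", "."): 0.8, ("saw", "the"): 0.6})

def lm_prob(e):
    p = 1.0
    for prev, cur in zip(["<s>"] + e[:-1], e):
        p *= bigram[(prev, cur)]
    return p

# Uniform initialisation of the full channel table P(c | e).
channel = {(c, e): 1.0 / len(cipher_vocab) for e in plain_vocab for c in cipher_vocab}

for _ in range(10):                                   # EM iterations
    counts = defaultdict(float)
    for c_sent in cipher_corpus:
        post = {}
        for e_sent in product(plain_vocab, repeat=len(c_sent)):  # intractable at scale
            p = lm_prob(list(e_sent))
            for e, c in zip(e_sent, c_sent):
                p *= channel[(c, e)]
            post[e_sent] = p
        z = sum(post.values())
        for e_sent, p in post.items():                # E-step: expected counts
            for e, c in zip(e_sent, c_sent):
                counts[(c, e)] += p / z
    for e in plain_vocab:                             # M-step: renormalise P(.|e)
        total = sum(counts[(c, e)] for c in cipher_vocab) or 1.0
        for c in cipher_vocab:
            channel[(c, e)] = counts[(c, e)] / total

# Rough readout: most likely plaintext word for each cipher type.
print({c: max(plain_vocab, key=lambda e: channel[(c, e)]) for c in cipher_vocab})
```

For the five-word toy vocabulary this runs instantly; at the 10k-1M word types discussed above, the channel table alone has on the order of 10^8 to 10^12 entries.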
So, we introduce a new training method (Iterative EM) that fixes these problems. 2. Bayesian decipherment: We also propose a novel decipherment approach using Bayesian inference. Typically, Bayesian inference is very slow when applied to such large-scale problems. Our method overcomes these challenges and does fast, efficient inference using (a) a novel strategy for selecting sampling choices, and (b) a parallelized sampling scheme. In the next two sections, we describe these methods in detail. 2.1 Iterative EM We devise a method which overcomes memory and running time efficiency issues faced by EM. Instead of instantiating the entire channel model (with all its parameters), we iteratively train the model in small steps. The training procedure is described here: 1. Identify the top K frequent word types in both the plaintext and ciphertext data. Replace all other word tokens with Unknown. Now, instantiate a small channel with just (K + 1)2 parameters and use the EM algorithm to train this model to maximize likelihood of cipher data. 2. Extend the plaintext and ciphertext vocabularies from the previous step by adding the next K most frequent word types (so the new vocabulary size becomes 2K + 1). Regenerate the plaintext and ciphertext data. 3. Instantiate a new (2K +1)×(2K +1) channel model. From the previous EM-trained channel, identify all the e →c mappings that were assigned a probability P(c|e) > 0.5. Fix these mappings in the new channel, i.e. set P(c|e) = 1.0. From the new channel, eliminate all other parameters e →cj associated with the plaintext word type e (where cj ̸= c). This yields a much smaller channel with size < (2K + 1)2. Retrain the new channel using EM algorithm. 4. Goto Step 2 and repeat the procedure, extending the channel size iteratively in each stage. Finally, we decode the given ciphertext c by using the Viterbi algorithm to choose the plaintext decoding e that maximizes P(e) · Pθtrained(c|e)3, stretching the channel probabilities (Knight et al., 2006). 2.2 Bayesian Decipherment Bayesian inference methods have become popular in natural language processing (Goldwater and Griffiths, 2007; Finkel et al., 2005; Blunsom et al., 2009; Chiang et al., 2010; Snyder et al., 2010). These methods are attractive for their ability to manage uncertainty about model parameters and allow one to incorporate prior knowledge during inference. Here, we propose a novel decipherment approach using Bayesian learning. Our method holds several other advantages over the EM approach—(1) inference using smart sampling strategies permits efficient training, allowing us to scale to large data/vocabulary sizes, (2) incremental scoring of derivations during sampling allows efficient inference even when we use higher-order n-gram LMs, (3) there are no memory bottlenecks since the full channel model and derivation lattice are never instantiated during training, and (4) prior specification allows us to learn skewed distributions that are useful here—word substitution ciphers exhibit 1-to-1 correspondence between plaintext and cipher types. We use the same generative story as before for decipherment, except that we use Chinese Restaurant Process (CRP) formulations for the source and channel probabilities. We use an English word bigram LM as the base distribution (P0) for the source model and specify a uniform P0 distribution for the 14 channel.1 We perform inference using point-wise Gibbs sampling (Geman and Geman, 1984). 
We define a sampling operator that samples plaintext word choices for every cipher token, one at a time. Using the exchangeability property, we efficiently score the probability of each derivation in an incremental fashion. In addition, we make further improvements to the sampling procedure which makes it faster. Smart sample-choice selection: In the original sampling step, for each cipher token we have to sample from a list of all possible plaintext choices (10k1M English words). There are 100k cipher tokens in our data which means we have to perform ∼109 sampling operations to make one entire pass through the data. We have to then repeat this process for 2000 iterations. Instead, we now reduce our choices in each sampling step. Say that our current plaintext hypothesis contains English words X, Y and Z at positions i −1, i and i+1 respectively. In order to sample at position i, we choose the top K English words Y ranked by P(X Y Z), which can be computed offline from a statistical word bigram LM. If this probability is 0 (i.e., X and Z never co-occurred), we randomly pick K words from the plaintext vocabulary. We set K = 100 in our experiments. This significantly reduces the sampling possibilities (10k-1M reduces to 100) at each step and allows us to scale to large plaintext vocabulary sizes without enumerating all possible choices at every cipher position.2 Parallelized Gibbs sampling: Secondly, we parallelize our sampling step using a Map-Reduce framework. In the past, others have proposed parallelized sampling schemes for topic modeling applications (Newman et al., 2009). In our method, we split the entire corpus into separate chunks and we run the sampling procedure on each chunk in parallel. At 1For word substitution decipherment, we want to keep the language model probabilities fixed during training, and hence we set the prior on that model to be high (α = 104). We use a sparse Dirichlet prior for the channel (β = 0.01). We use the output from Iterative EM decoding (using 101 x 101 channel) as initial sample and run the sampler for 2000 iterations. During sampling, we use a linear annealing schedule decreasing the temperature from 1 →0.08. 2Since we now sample from an approximate distribution, we have to correct this with the Metropolis-Hastings algorithm. But in practice we observe that samples from our proposal distribution are accepted with probability > 0.99, so we skip this step. the end of each sampling iteration, we combine the samples corresponding to each chunk and collect the counts of all events—this forms our cache for the next sampling iteration. In practice, we observe that the parallelized sampling run converges quickly and runs much faster than the conventional point-wise sampling—for example, 3.1 hours (using 10 nodes) versus 11 hours for one of the word substitution experiments. We also notice a higher speedup when scaling to larger vocabularies.3 Decoding the ciphertext: After the sampling run has finished, we choose the final sample and extract a trained version of the channel model Pθ(c|e) from this sample following the technique of Chiang et al. (2010). We then use the Viterbi algorithm to choose the English plaintext e that maximizes P(e) · Pθtrained(c|e)3. 2.3 Experiments and Results Data: For the word substitution experiments, we use two corpora: • Temporal expression corpus containing short English temporal expressions such as “THE NEXT MONTH”, “THE LAST THREE YEARS”, etc. The cipher data contains 5000 expressions (9619 tokens, 153 word types). 
We also have access to a separate English corpus (which is not parallel to the ciphertext) containing 125k temporal expressions (242k word tokens, 201 word types) for LM training. • Transtac corpus containing full English sentences. The data consists of 10k cipher sentences (102k tokens, 3397 word types); and a plaintext corpus of 402k English sentences (2.7M word tokens, 25761 word types) for LM training. We use all the cipher data for decipherment training but evaluate on the first 1000 cipher sentences. The cipher data was originally generated from English text by substituting each English word with a unique cipher word. We use the plaintext corpus to 3Type sampling could be applied on top of our methods to further optimize performance. But more complex problems like MT do not follow the same principles (1-to-1 key mappings) as seen in word substitution ciphers, which makes it difficult to identify type dependencies. 15 Method Decipherment Accuracy (%) Temporal expr. Transtac 9k 100k 0. EM with 2-gram LM 87.8 Intractable 1. Iterative EM with 2-gram LM 87.8 70.5 71.8 2. Bayesian with 2-gram LM 88.6 60.1 80.0 with 3-gram LM 82.5 Figure 1: Comparison of word substitution decipherment results using (1) Iterative EM, and (2) Bayesian method. For the Transtac corpus, decipherment performance is also shown for different training data sizes (9k versus 100k cipher tokens). build an English word n-gram LM, which is used in the decipherment process. Evaluation: We compute the accuracy of a particular decipherment as the percentage of cipher tokens that were correctly deciphered from the whole corpus. We run the two methods (Iterative EM4 and Bayesian) and then compare them in terms of word substitution decipherment accuracies. Results: Figure 1 compares the word substitution results from Iterative EM and Bayesian decipherment. Both methods achieve high accuracies, decoding 70-90% of the two word substitution ciphers. Overall, Bayesian decipherment (with sparse priors) performs better than Iterative EM and achieves the best results on this task. We also observe that both methods benefit from better LMs and more (cipher) training data. Figure 2 shows sample outputs from Bayesian decipherment. 3 Machine Translation as a Decipherment Task We now turn to the problem of MT without parallel data. From a decipherment perspective, machine translation is a much more complex task than word substitution decipherment and poses several technical challenges: (1) scalability due to large corpora sizes and huge translation tables, (2) nondeterminism in translation mappings (a word can have multiple translations), (3) re-ordering of words 4For Iterative EM, we start with a channel of size 101x101 (K=100) and in every pass we iteratively increase the vocabulary sizes by 50, repeating the training procedure until the channel size becomes 351x351. C: 3894 9411 4357 8446 5433 O: a diploma that’s good . D: a fence that’s good . C: 8593 7932 3627 9166 3671 O: three families living here ? D: three brothers living here ? C: 6283 8827 7592 6959 5120 6137 9723 3671 O: okay and what did they tell you ? D: okay and what did they tell you ? C: 9723 3601 5834 5838 3805 4887 7961 9723 3174 4518 9067 4488 9551 7538 7239 9166 3671 O: you mean if we come to see you in the afternoon after five you’ll be here ? D: i mean if we come to see you in the afternoon after thirty you’ll be here ? ... 
Figure 2: Comparison of the original (O) English plaintext with output from Bayesian word substitution decipherment (D) for a few samples cipher (C) sentences from the Transtac corpus. or phrases, (4) a single word can translate into a phrase, and (5) insertion/deletion of words. Problem Formulation: We formulate the MT decipherment problem as—given a foreign text f (i.e., foreign word sequences f1...fm) and a monolingual English corpus, our goal is to decipher the foreign text and produce an English translation. Probabilistic decipherment: Unlike parallel training, here we have to estimate the translation model Pθ(f|e) parameters using only monolingual data. During decipherment training, our objective is to estimate the model parameters θ in order to maximize the probability of the foreign corpus f. From Equation 4 we have: arg max θ Y f X e P(e) · Pθ(f|e) For P(e), we use a word n-gram LM trained on monolingual English data. We then estimate parameters of the translation model Pθ(f|e) during training. Next, we present two novel decipherment approaches for MT training without parallel data. 1. EM Decipherment: We propose a new translation model for MT decipherment which can be efficiently trained using the EM algorithm. 2. Bayesian Decipherment: We introduce a novel method for estimating IBM Model 3 parameters without parallel data, using Bayesian learning. Unlike EM, this method does not face any 16 memory issues and we use sampling to perform efficient inference during training. 3.1 EM Decipherment For the translation model Pθ(f|e), we would like to use a well-known statistical model such as IBM Model 3 and subsequently train it using the EM algorithm. But without parallel training data, EM training for IBM Model 3 becomes intractable due to (1) scalability and efficiency issues because of large-sized fertility and distortion parameter tables, and (2) the resulting derivation lattices become too big to be stored in memory. Instead, we propose a simpler generative story for MT without parallel data. Our model accounts for (word) substitutions, insertions, deletions and local re-ordering during the translation process but does not incorporate fertilities or global re-ordering. We describe the generative process here: 1. Generate an English string e = e1...el, with probability P(e). 2. Insert a NULL word at any position in the English string, with uniform probability. 3. For each English word token ei (including NULLs), choose a foreign word translation fi, with probability Pθ(fi|ei). The foreign word may be NULL. 4. Swap any pair of adjacent foreign words fi−1, fi, with probability Pθ(swap). We set this value to 0.1. 5. Output the foreign string f = f1...fm, skipping over NULLs. We use the EM algorithm to estimate all the parameters θ in order to maximize likelihood of the foreign corpus. Finally, we use the Viterbi algorithm to decode the foreign sentence f and produce an English translation e that maximizes P(e) · Pθtrained(f|e). Linguistic knowledge for decipherment: To help limit translation model size and deal with data sparsity problem, we use prior linguistic knowledge. We use identity mappings for numeric values (for example, “8” maps to “8”), and we split nouns into morpheme units prior to decipherment training (for example, “YEARS” →“YEAR” “+S”). Whole-segment Language Models: When using word n-gram models of English for decipherment, we find that some of the foreign sentences are decoded into sequences (such as “THANK YOU TALKING ABOUT ?”) that are not good English. 
This stems from the fact that n-gram LMs have no global information about what constitutes a valid English segment. To learn this information automatically, we build a P(e) model that only recognizes English whole-segments (entire sentences or expressions) observed in the monolingual training data. We then use this model (in place of word ngram LMs) for decipherment training and decoding. 3.2 Bayesian Method Brown et al. (1993) provide an efficient algorithm for training IBM Model 3 translation model when parallel sentence pairs are available. But we wish to perform IBM Model 3 training under non-parallel conditions, which is intractable using EM training. Instead, we take a Bayesian approach. Following Equation 5, we represent the translation model as Pθ(f, a|e) in terms of hidden alignments a. Recall the generative story for IBM Model 3 translation which has the following formula: Pθ(f, a|e) = lY i=0 tθ(faj|ei) · lY i=1 nθ(φi|ei) · m Y aj̸=0,j=1 dθ(aj|i, l, m) · lY i=0 φi! · 1 φ0! · m −φ0 φ0 ·pφ0 1θ · pm−2φ0 0θ (8) The alignment a is represented as a vector; aj = i implies that the foreign word fj is produced by the English word ei during translation. Bayesian Formulation: Our goal is to learn the probability tables t (translation parameters) n (fertility parameters), d (distortion parameters), and p (English NULL word probabilities) without parallel data. In order to apply Bayesian inference for decipherment, we model each of these tables using a 17 Chinese Restaurant Process (CRP) formulation. For example, to model the translation probabilities, we use the formula: tθ(fj|ei) = α · P0(fj|ei) + Chistory(ei, fj) α + Chistory(ei) (9) where, P0 represents the base distribution (which is set to uniform) and Chistory represents the count of events occurring in the history (cache). Similarly, we use CRP formulations for the other probabilities (n, d and p). We use sparse Dirichlet priors for all these models (i.e., low values for α) and plug these probabilities into Equation 8 to get Pθ(f, a|e). Sampling IBM Model 3: We use point-wise Gibbs sampling to estimate the IBM Model 3 parameters. The sampler is seeded with an initial English sample translation and a corresponding alignment for every foreign sentence. We define several sampling operators, which are applied in sequence one after the other to generate English samples for the entire foreign corpus. Some of the sampling operators are described below: • TranslateWord(j): Sample a new English word translation for foreign word fj, from all possibilities (including NULL). • SwapSegment(i1, i2): Swap the alignment links for English words ei1 and ei2. • JoinWords(i1, i2): Eliminate the English word ei1 and transfer its links to the word ei2. During sampling, we apply each of these operators to generate a new derivation e, a for the foreign text f and compute its score as P(e) · Pθ(f, a|e). These small-change operators are similar to the heuristic techniques used for greedy decoding by German et al. (2001). But unlike the greedy method, which can easily get stuck, our Bayesian approach guarantees that once the sampler converges we will be sampling from the true posterior distribution. As with Bayesian decipherment for word substitution, we compute the probability of each new derivation incrementally, which makes sampling efficient. We also apply blocked sampling on top of point-wise sampling—we treat all occurrences of a particular foreign sentence as a single block and sample a single derivation for the entire block. 
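The CRP formulation in Equation 9 that drives this sampler can be maintained incrementally as alignment links are added and removed between samples. The sketch below is illustrative only; the class and variable names, and the uniform base distribution over foreign word types, are assumptions rather than the authors' implementation:

```python
from collections import defaultdict

class CRPTable:
    """Chinese Restaurant Process cache for t(f|e), following Equation 9."""
    def __init__(self, alpha, num_foreign_types):
        self.alpha = alpha                   # sparse Dirichlet prior, e.g. 0.01
        self.p0 = 1.0 / num_foreign_types    # uniform base distribution P0(f|e)
        self.pair_counts = defaultdict(int)  # C_history(e, f)
        self.e_counts = defaultdict(int)     # C_history(e)

    def prob(self, f, e):
        # t(f|e) = (alpha * P0(f|e) + C(e, f)) / (alpha + C(e))
        return (self.alpha * self.p0 + self.pair_counts[(e, f)]) / \
               (self.alpha + self.e_counts[e])

    def observe(self, f, e):
        # add an (e -> f) event to the cache after accepting a sample
        self.pair_counts[(e, f)] += 1
        self.e_counts[e] += 1

    def forget(self, f, e):
        # remove an event from the cache before resampling it
        self.pair_counts[(e, f)] -= 1
        self.e_counts[e] -= 1
```

The fertility, distortion, and NULL-probability tables (n, d, p) can be handled with the same observe/forget pattern during point-wise Gibbs sampling.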
We also parallelize the sampling procedure (as described in Section 2.2).5 Choosing the best translation: Once the sampling run finishes, we select the final sample and extract the corresponding English translations for every foreign sentence. This yields the final decipherment output. 3.3 MT Experiments and Results Data: We work with the Spanish/English language pair and use the following corpora in our MT experiments: • Time corpus: We mined English newswire text on the Web and collected 295k temporal expressions such as “LAST YEAR”, “THE FOURTH QUARTER”, “IN JAN 1968”, etc. We first process the data and normalize numbers and names of months/weekdays—for example, “1968” is replaced with “NNNN”, “JANUARY” with “[MONTH]”, and so on. We then translate the English temporal phrases into Spanish using an automatic translation software (Google Translate) followed by manual annotation to correct mistakes made by the software. We create the following splits out of the resulting parallel corpus: TRAIN (English): 195k temporal expressions (7588 unique), 382k word tokens, 163 types. TEST (Spanish): 100k temporal expressions (2343 unique), 204k word tokens, 269 types. • OPUS movie subtitle corpus: This is a large open source collection of parallel corpora available for multiple language pairs (Tiedemann, 2009). We downloaded the parallel Spanish/English subtitle corpus which consists of aligned Spanish/English sentences from a collection of movie subtitles. For our MT experiments, we select only Spanish/English sentences with frequency > 10 and create the following train/test splits: 5For Bayesian MT decipherment, we set a high prior value on the language model (104) and use sparse priors for the IBM 3 model parameters t, n, d, p (0.01, 0.01, 0.01, 0.01). We use the output from EM decipherment as the initial sample and run the sampler for 2000 iterations, during which we apply annealing with a linear schedule (2 →0.08). 18 Method Decipherment Accuracy Time expressions OPUS subtitles 1a. Parallel training (MOSES) with 2-gram LM 5.6 (85.6) 26.8 (63.6) with 5-gram LM 4.7 (88.0) 1b. Parallel training (IBM 3 without distortion) with 2-gram LM 10.1 (78.9) 29.9 (59.6) with whole-segment LM 9.0 (79.2) 2a. Decipherment (EM) with 2-gram LM 37.6 (44.6) 67.2 (15.3) with whole-segment LM 28.7 (48.7) 65.1 (19.3) 2b. Decipherment (Bayesian IBM 3) with 2-gram LM 34.0 (30.2) 66.6 (15.1) Figure 3: Comparison of Spanish/English MT performance on the Time and OPUS test corpora achieved by various MT systems trained under (1) parallel—(a) MOSES, (b) IBM 3 without distortion, and (2) decipherment settings— (a) EM, (b) Bayesian. The scores reported here are normalized edit distance values with BLEU scores shown in parentheses. TRAIN (English): 19770 sentences (1128 unique), 62k word tokens, 411 word types. TEST (Spanish): 13181 sentences (1127 unique), 39k word tokens, 562 word types. Both Spanish/English sides of TRAIN are used for parallel MT training, whereas decipherment uses only monolingual English data for training LMs. MT Systems: We build and compare different MT systems under two training scenarios: 1. Parallel training using: (a) MOSES, a phrase translation system (Koehn et al., 2007) widely used in MT literature, and (b) a simpler version of IBM Model 3 (without distortion parameters) which can be trained tractably using the strategy of Knight and Al-Onaizan (1998). 2. Decipherment without parallel data using: (a) EM method (from Section 3.1), and (b) Bayesian method (from Section 3.2). 
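The number and month/weekday normalization applied to the Time corpus above is essentially a dictionary-plus-regex substitution. A minimal sketch follows; the placeholder tokens "NNNN" and "[MONTH]" come from the examples in the text, while the "[WEEKDAY]" token, the abbreviation list, and the exact rules are assumptions for illustration:

```python
import re

MONTHS = (r"(JANUARY|FEBRUARY|MARCH|APRIL|MAY|JUNE|JULY|AUGUST|SEPTEMBER|"
          r"OCTOBER|NOVEMBER|DECEMBER|JAN|FEB|MAR|APR|JUN|JUL|AUG|SEP|OCT|NOV|DEC)")
WEEKDAYS = r"(MONDAY|TUESDAY|WEDNESDAY|THURSDAY|FRIDAY|SATURDAY|SUNDAY)"

def normalize_temporal(expr):
    """Map surface forms to placeholder tokens before LM / decipherment training."""
    expr = expr.upper()
    expr = re.sub(r"\b" + MONTHS + r"\b", "[MONTH]", expr)
    expr = re.sub(r"\b" + WEEKDAYS + r"\b", "[WEEKDAY]", expr)
    expr = re.sub(r"\d", "N", expr)  # "1968" -> "NNNN"
    return expr

print(normalize_temporal("IN JAN 1968"))  # -> "IN [MONTH] NNNN"
```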
Evaluation: All the MT systems are run on the Spanish test data and the quality of the resulting English translations are evaluated using two different measures—(1) Normalized edit distance score (Navarro, 2001),6 and (2) BLEU (Papineni et 6When computing edit distance, we account for substitutions, insertions, deletions as well as local-swap edit operations required to convert a given English string into the (gold) reference translation. al., 2002), a standard MT evaluation measure. Results: Figure 3 compares the results of various MT systems (using parallel versus decipherment training) on the two test corpora in terms of edit distance scores (a lower score indicates closer match to the gold translation). The figure also shows the corresponding BLEU scores in parentheses for comparison (higher scores indicate better MT output). We observe that even without parallel training data, our decipherment strategies achieve MT accuracies comparable to parallel-trained systems. On the Time corpus, the best decipherment (Method 2a in the figure) achieves an edit distance score of 28.7 (versus 4.7 for MOSES). Better LMs yield better MT results for both parallel and decipherment training—for example, using a segment-based English LM instead of a 2-gram LM yields a 24% reduction in edit distance and a 9% improvement in BLEU score for EM decipherment. We also investigate how the performance of different MT systems vary with the size of the training data. Figure 4 plots the BLEU scores versus training sizes for different MT systems on the Time corpus. Clearly, using more training data yields better performance for all systems. However, higher improvements are observed when using parallel data in comparison to decipherment training which only uses monolingual data. We also notice that the scores do not improve much when going beyond 10,000 train19 Figure 4: Comparison of training data size versus MT accuracy in terms of BLEU score under different training conditions: (1) Parallel training—(a) MOSES, (b) IBM Model 3 without distortion, and (2) Decipherment without parallel data using EM method (from Section 3.1). ing instances for this domain. It is interesting to quantify the value of parallel versus non-parallel data for any given MT task. In other words, “how much non-parallel data is worth how much parallel data in order to achieve the same MT accuracy?” Figure 4 provides a reasonable answer to this question for the Spanish/English MT task described here. We see that deciphering with 10k monolingual Spanish sentences yields the same performance as training with around 200-500 parallel English/Spanish sentence pairs. This is the first attempt at such a quantitative comparison for MT and our results are encouraging. We envision that further developments in unsupervised methods will help reduce this gap further. 4 Conclusion Our work is the first attempt at doing MT without parallel data. We discussed several novel decipherment approaches for achieving this goal. Along the way, we developed efficient training methods that can deal with large-scale vocabularies and data sizes. For future work, it will be interesting to see if we can exploit both parallel and non-parallel data to improve on both. Acknowledgments This material is based in part upon work supported by the National Science Foundation (NSF) under Grant No. IIS-0904684 and the Defense Advanced Research Projects Agency (DARPA) through the Department of Interior/National Business Center under Contract No. NBCHD040058. 
Any opinion, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Defense Advanced Research Projects Agency (DARPA), or the Department of the Interior/National Business Center. References Friedrich L. Bauer. 2006. Decrypted Secrets: Methods and Maxims of Cryptology. Springer-Verlag. Phil Blunsom, Trevor Cohn, Chris Dyer, and Miles Osborne. 2009. A Gibbs sampler for phrasal synchronous grammar induction. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing (ACL-IJCNLP), pages 782–790. Peter Brown, Vincent Della Pietra, Stephen Della Pietra, and Robert Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational linguistics, 19(2):263–311. David Chiang, Jonathan Graehl, Kevin Knight, Adam Pauls, and Sujith Ravi. 2010. Bayesian inference for finite-state transducers. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics - Human Language Technologies (NAACL/HLT), pages 447–455. Arthur P. Dempster, Nan M. Laird, and Donald B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38. Jenny Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by Gibbs sampling. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL), pages 363– 370. Pascal Fung and Kathleen McKeown. 1997. Finding terminology translations from non-parallel corpora. In Proceedings of the Fifth Annual Workshop on Very Large Corpora, pages 192–202. 20 Stuart Geman and Donald Geman. 1984. Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6(6):721–741. Ulrich Germann, Michael Jahr, Kevin Knight, Daniel Marcu, and Kenji Yamada. 2001. Fast decoding and optimal decoding for machine translation. In Proceedings of the 39th Annual Meeting on Association for Computational Linguistics, pages 228–235. Sharon Goldwater and Thomas Griffiths. 2007. A fully Bayesian approach to unsupervised part-of-speech tagging. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 744– 751. Aria Haghighi, Percy Liang, Taylor Berg-Kirkpatrick, and Dan Klein. 2008. Learning bilingual lexicons from monolingual corpora. In Proceedings of the Annual Meeting of the Association for Computational Linguistics - Human Language Technologies (ACL/HLT), pages 771–779. Kevin Knight and Yaser Al-Onaizan. 1998. Translation with finite-state devices. In David Farwell, Laurie Gerber, and Eduard Hovy, editors, Machine Translation and the Information Soup, volume 1529 of Lecture Notes in Computer Science, pages 421–437. Springer Berlin / Heidelberg. Kevin Knight, Anish Nair, Nishit Rathod, and Kenji Yamada. 2006. Unsupervised analysis for decipherment problems. In Proceedings of the Joint Conference of the International Committee on Computational Linguistics and the Association for Computational Linguistics, pages 499–506. Philipp Koehn and Kevin Knight. 2000. Estimating word translation probabilities from unrelated monolingual corpora using the EM algorithm. 
In Proceedings of the Seventeenth National Conference on Artificial Intelligence and Twelfth Conference on Innovative Applications of Artificial Intelligence, pages 711–715. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions. Philip Koehn. 2009. Statistical Machine Translation. Cambridge University Press. David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 152–159. Gonzalo Navarro. 2001. A guided tour to approximate string matching. ACM Computing Surveys, 33:31–88, March. David Newman, Arthur Asuncion, Padhraic Smyth, and Max Welling. 2009. Distributed algorithms for topic models. Journal of Machine Learning Research, 10:1801–1828. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311–318. Reinhard Rapp. 1995. Identifying word translations in non-parallel texts. In Proceedings of the Conference of the Association for Computational Linguistics, pages 320–322. Benjamin Snyder, Regina Barzilay, and Kevin Knight. 2010. A statistical model for lost language decipherment. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1048–1057. J¨org Tiedemann. 2009. News from OPUS - A collection of multilingual parallel corpora with tools and interfaces. In N. Nicolov, K. Bontcheva, G. Angelova, and R. Mitkov, editors, Recent Advances in Natural Language Processing, volume V, pages 237–248. John Benjamins, Amsterdam/Philadelphia. Warren Weaver. 1955. Translation (1949). Reproduced in W.N. Locke, A.D. Booth (eds.). In Machine Translation of Languages, pages 15–23. MIT Press. 21
| 2011 | 2 |
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 190–200, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Collecting Highly Parallel Data for Paraphrase Evaluation David L. Chen Department of Computer Science The University of Texas at Austin Austin, TX 78712, USA [email protected] William B. Dolan Microsoft Research One Microsoft Way Redmond, WA 98052, USA [email protected] Abstract A lack of standard datasets and evaluation metrics has prevented the field of paraphrasing from making the kind of rapid progress enjoyed by the machine translation community over the last 15 years. We address both problems by presenting a novel data collection framework that produces highly parallel text data relatively inexpensively and on a large scale. The highly parallel nature of this data allows us to use simple n-gram comparisons to measure both the semantic adequacy and lexical dissimilarity of paraphrase candidates. In addition to being simple and efficient to compute, experiments show that these metrics correlate highly with human judgments. 1 Introduction Machine paraphrasing has many applications for natural language processing tasks, including machine translation (MT), MT evaluation, summary evaluation, question answering, and natural language generation. However, a lack of standard datasets and automatic evaluation metrics has impeded progress in the field. Without these resources, researchers have resorted to developing their own small, ad hoc datasets (Barzilay and McKeown, 2001; Shinyama et al., 2002; Barzilay and Lee, 2003; Quirk et al., 2004; Dolan et al., 2004), and have often relied on human judgments to evaluate their results (Barzilay and McKeown, 2001; Ibrahim et al., 2003; Bannard and Callison-Burch, 2005). Consequently, it is difficult to compare different systems and assess the progress of the field as a whole. Despite the similarities between paraphrasing and translation, several major differences have prevented researchers from simply following standards that have been established for machine translation. Professional translators produce large volumes of bilingual data according to a more or less consistent specification, indirectly fueling work on machine translation algorithms. In contrast, there are no “professional paraphrasers”, with the result that there are no readily available large corpora and no consistent standards for what constitutes a high-quality paraphrase. In addition to the lack of standard datasets for training and testing, there are also no standard metrics like BLEU (Papineni et al., 2002) for evaluating paraphrase systems. Paraphrase evaluation is inherently difficult because the range of potential paraphrases for a given input is both large and unpredictable; in addition to being meaning-preserving, an ideal paraphrase must also diverge as sharply as possible in form from the original while still sounding natural and fluent. Our work introduces two novel contributions which combine to address the challenges posed by paraphrase evaluation. First, we describe a framework for easily and inexpensively crowdsourcing arbitrarily large training and test sets of independent, redundant linguistic descriptions of the same semantic content. Second, we define a new evaluation metric, PINC (Paraphrase In N-gram Changes), that relies on simple BLEU-like n-gram comparisons to measure the degree of novelty of automatically generated paraphrases. 
We believe that this metric, along with the sentence-level paraphrases provided by our data collection approach, will make it possi190 ble for researchers working on paraphrasing to compare system performance and exploit the kind of automated, rapid training-test cycle that has driven work on Statistical Machine Translation. In addition to describing a mechanism for collecting large-scale sentence-level paraphrases, we are also making available to the research community 85K parallel English sentences as part of the Microsoft Research Video Description Corpus 1. The rest of the paper is organized as follows. We first review relevant work in Section 2. Section 3 then describes our data collection framework and the resulting data. Section 4 discusses automatic evaluations of paraphrases and introduces the novel metric PINC. Section 5 presents experimental results establishing a correlation between our automatic metric and human judgments. Sections 6 and 7 discuss possible directions for future research and conclude. 2 Related Work Since paraphrase data are not readily available, various methods have been used to extract parallel text from other sources. One popular approach exploits multiple translations of the same data (Barzilay and McKeown, 2001; Pang et al., 2003). Examples of this kind of data include the Multiple-Translation Chinese (MTC) Corpus 2 which consists of Chinese news stories translated into English by 11 translation agencies, and literary works with multiple translations into English (e.g. Flaubert’s Madame Bovary.) Another method for collecting monolingual paraphrase data involves aligning semantically parallel sentences from different news articles describing the same event (Shinyama et al., 2002; Barzilay and Lee, 2003; Dolan et al., 2004). While utilizing multiple translations of literary work or multiple news stories of the same event can yield significant numbers of parallel sentences, this data tend to be noisy, and reliably identifying good paraphrases among all possible sentence pairs remains an open problem. On the other hand, multiple translations on the sentence level such as the MTC Corpus provide good, natural paraphrases, but rela1Available for download at http://research. microsoft.com/en-us/downloads/ 38cf15fd-b8df-477e-a4e4-a4680caa75af/ 2Linguistic Data Consortium (LDC) Catalog Number LDC2002T01, ISBN 1-58563-217-1. tively little data of this type exists. Finally, some approaches avoid the need for monolingual paraphrase data altogether by using a second language as the pivot language (Bannard and Callison-Burch, 2005; Callison-Burch, 2008; Kok and Brockett, 2010). Phrases that are aligned to the same phrase in the pivot language are treated as potential paraphrases. One limitation of this approach is that only words and phrases are identified, not whole sentences. While most work on evaluating paraphrase systems has relied on human judges (Barzilay and McKeown, 2001; Ibrahim et al., 2003; Bannard and Callison-Burch, 2005) or indirect, task-based methods (Lin and Pantel, 2001; Callison-Burch et al., 2006), there have also been a few attempts at creating automatic metrics that can be more easily replicated and used to compare different systems. ParaMetric (Callison-Burch et al., 2008) compares the paraphrases discovered by an automatic system with ones annotated by humans, measuring precision and recall. 
This approach requires additional human annotations to identify the paraphrases within parallel texts (Cohn et al., 2008) and does not evaluate the systems at the sentence level. The more recently proposed metric PEM (Paraphrase Evaluation Metric) (Liu et al., 2010) produces a single score that captures the semantic adequacy, fluency, and lexical dissimilarity of candidate paraphrases, relying on bilingual data to learn semantic equivalences without using n-gram similarity between candidate and reference sentences. In addition, the metric was shown to correlate well with human judgments. However, a significant drawback of this approach is that PEM requires substantial in-domain bilingual data to train the semantic adequacy evaluator, as well as sample human judgments to train the overall metric. We designed our data collection framework for use on crowdsourcing platforms such as Amazon’s Mechanical Turk. Crowdsourcing can allow inexpensive and rapid data collection for various NLP tasks (Ambati and Vogel, 2010; Bloodgood and Callison-Burch, 2010a; Bloodgood and CallisonBurch, 2010b; Irvine and Klementiev, 2010), including human evaluations of NLP systems (CallisonBurch, 2009; Denkowski and Lavie, 2010; Zaidan and Callison-Burch, 2009). Of particular relevance are the paraphrasing work by Buzek et al. (2010) 191 and Denkowski et al. (2010). Buzek et al. automatically identified problem regions in a translation task and had workers attempt to paraphrase them, while Denkowski et al. asked workers to assess the validity of automatically extracted paraphrases. Our work is distinct from these earlier efforts both in terms of the task – attempting to collect linguistic descriptions using a visual stimulus – and the dramatically larger scale of the data collected. 3 Data Collection Since our goal was to collect large numbers of paraphrases quickly and inexpensively using a crowd, our framework was designed to make the tasks short, simple, easy, accessible and somewhat fun. For each task, we asked the annotators to watch a very short video clip (usually less than 10 seconds long) and describe in one sentence the main action or event that occurred in the video clip We deployed the task on Amazon’s Mechanical Turk, with video segments selected from YouTube. A screenshot of our annotation task is shown in Figure 1. On average, annotators completed each task within 80 seconds, including the time required to watch the video. Experienced annotators were even faster, completing the task in only 20 to 25 seconds. One interesting aspect of this framework is that each annotator approaches the task from a linguistically independent perspective, unbiased by the lexical or word order choices in a pre-existing description. The data thus has some similarities to parallel news descriptions of the same event, while avoiding much of the noise inherent in news. It is also similar in spirit to the ‘Pear Stories’ film used by Chafe (1997). Crucially, our approach allows us to gather arbitrarily many of these independent descriptions for each video, capturing nearly-exhaustive coverage of how native speakers are likely to summarize a small action. It might be possible to achieve similar effects using images or panels of images as the stimulus (von Ahn and Dabbish, 2004; Fei-Fei et al., 2007; Rashtchian et al., 2010), but we believed that videos would be more engaging and less ambiguous in their focus. 
In addition, videos have been shown to be more effective in prompting descriptions of motion and contact verbs, as well as verbs that are generally not imageable (Ma and Cook, 2009). Watch and describe a short segment of a video You will be shown a segment of a video clip and asked to describe the main action/event in that segment in ONE SENTENCE. Things to note while completing this task: The video will play only a selected segment by default. You can choose to watch the entire clip and/or with sound although this is not necessary. Please only describe the action/event that occurred in the selected segment and not any other parts of the video. Please focus on the main person/group shown in the segment If you do not understand what is happening in the selected segment, please skip this HIT and move onto the next one Write your description in one sentence Use complete, grammatically-correct sentences You can write the descriptions in any language you are comfortable with Examples of good descriptions: A woman is slicing some tomatoes. A band is performing on a stage outside. A dog is catching a Frisbee. The sun is rising over a mountain landscape. Examples of bad descriptions (With the reasons why they are bad in parentheses): Tomato slicing (Incomplete sentence) This video is shot outside at night about a band performing on a stage (Description about the video itself instead of the action/event in the video) I like this video because it is very cute (Not about the action/event in the video) The sun is rising in the distance while a group of tourists standing near some railings are taking pictures of the sunrise and a small boy is shivering in his jacket because it is really cold (Too much detail instead of focusing only on the main action/event) Segment starts: 25 | ends: 30 | length: 5 seconds Play Segment · Play Entire Video Please describe the main event/action in the selected segment (ONE SENTENCE): Note: If you have a hard time typing in your native language on an English keyboard, you may find Google's transliteration service helpful. http://www.google.com/transliterate Language you are typing in (e.g. English, Spanish, French, Hindi, Urdu, Mandarin Chinese, etc): Your one-sentence description: Please provide any comments or suggestions you may have below, we appreciate your input! Figure 1: A screenshot of our annotation task as it was deployed on Mechanical Turk. 3.1 Quality Control One of the main problems with collecting data using a crowd is quality control. While the cost is very low compared to traditional annotation methods, workers recruited over the Internet are often unqualified for the tasks or are incentivized to cheat in order to maximize their rewards. To encourage native and fluent contributions, we asked annotators to write the descriptions in the language of their choice. The result was a significant amount of translation data, unique in its multilingual parallelism. While included in our data release, we leave aside a full discussion of this multilingual data for future work. 192 To ensure the quality of the annotations being produced, we used a two-tiered payment system. The idea was to reward workers who had shown the ability to write quality descriptions and the willingness to work on our tasks consistently. While everyone had access to the Tier-1 tasks, only workers who had been manually qualified could work on the Tier-2 tasks. 
The tasks were identical in the two tiers but each Tier-1 task only paid 1 cent while each Tier-2 task paid 5 cents, giving the workers a strong incentive to earn the qualification. The qualification process was done manually by the authors. We periodically evaluated the workers who had submitted the most Tier-1 tasks (usually on the order of few hundred submissions) and granted them access to the Tier-2 tasks if they had performed well. We assessed their work mainly on the grammaticality and spelling accuracy of the submitted descriptions. Since we had hundreds of submissions to base our decisions on, it was fairly easy to identify the cheaters and people with poor English skills 3. Workers who were rejected during this process were still allowed to work on the Tier-1 tasks. While this approach requires significantly more manual effort initially than other approaches such as using a qualification test or automatic postannotation filtering, it creates a much higher quality workforce. Moreover, the initial effort is amortized over time as these quality workers are retained over the entire duration of the data collection. Many of them annotated all the available videos we had. 3.2 Video Collection To find suitable videos to annotate, we deployed a separate task. Workers were asked to submit short (generally 4-10 seconds) video segments depicting single, unambiguous events by specifying links to YouTube videos, along with the start and end times. We again used a tiered payment system to reward and retain workers who performed well. Since the scope of this data collection effort extended beyond gathering English data alone, we 3Everyone who submitted descriptions in a foreign language was granted access to the Tier-2 tasks. This was done to encourage more submissions in different languages and also because we could not verify the quality of those descriptions other than using online translation services (and some of the languages were not available to be translated). • Someone
is coating a pork chop in a glass bowl of flour.
• A person breads a pork chop.
• Someone is breading a piece of meat with a white powdery substance.
• A chef seasons a slice of meat.
• Someone is putting flour on a piece of meat.
• A woman is adding flour to meat.
• A woman is coating a piece of pork with breadcrumbs.
• A man dredges meat in bread crumbs.
• A person breads a piece of meat.
• A woman is breading some meat.
• Someone is breading meat.
• A woman coats a meat cutlet in a dish.
• A woman is coating a pork loin in bread crumbs.
• The laldy coated the meat in bread crumbs.
• The woman is breading pork chop.
• A woman adds a mixture to some meat.
• The lady put the batter on the meat.
Figure 2: Examples of English descriptions collected for a particular video segment. tried to collect videos that could be understood regardless of the annotator’s linguistic or cultural background. In order to avoid biasing lexical choices in the descriptions, we muted the audio and excluded videos that contained either subtitles or overlaid text. Finally, we manually filtered the submitted videos to ensure that each met our criteria and was free of inappropriate content. 3.3 Data We deployed our data collection framework on Mechanical Turk over a two-month period from July to September in 2010, collecting 2,089 video segments and 85,550 English descriptions. The rate of data collection accelerated as we built up our workforce, topping 10K descriptions a day when we ended our data collection. Of the descriptions, 33,855 were from Tier-2 tasks, meaning they were provided by workers who had been manually identified as good performers. Examples of some of the descriptions collected are shown in Figure 2. Overall, 688 workers submitted at least one English description. Of these workers, 113 submitted at least 100 descriptions and 51 submitted at least 500. The largest number of descriptions submitted by a single worker was 3496 4. Out of the 688 workers, 50 were granted access to the Tier-2 tasks. The 4This number exceeds the total number of videos because the worker completed both Tier-1 and Tier-2 tasks for the same videos 193 Tier 1 Tier 2 pay $0.01 $0.05 # workers (English) 683 50 # workers (total) 835 94 # submitted (English) 51510 33829 # submitted (total) 68578 55682 # accepted (English) 51052 33825 # accepted (total) 67968 55658 Table 1: Statistics for the two video description tasks success of our data collection effort was in part due to our ability to retain these good workers, building a reliable and efficient workforce. Table 1 shows some statistics for the Tier-1 and Tier-2 tasks 5. Overall, we spent under $5,000 including Amazon’s service fees, some pilot experiments and surveys. On average, 41 descriptions were produced for each video, with at least 27 for over 95% of the videos. Even limiting the set to descriptions produced from the Tier-2 tasks, there are still 16 descriptions on average for each video, with at least 12 descriptions for over 95% of the videos. For most clusters, then, we have a dozen or more high-quality parallel descriptions that can be paired with one another to create monolingual parallel training data. 4 Paraphrase Evaluation Metrics One of the limitations to the development of machine paraphrasing is the lack of standard metrics like BLEU, which has played a crucial role in driving progress in MT. Part of the issue is that a good paraphrase has the additional constraint that it should be lexically dissimilar to the source sentence while preserving the meaning. These can become competing goals when using n-gram overlaps to establish semantic equivalence. Thus, researchers have been unable to rely on BLEU or some derivative: the optimal paraphrasing engine under these terms would be one that simply returns the input. To combat such problems, Liu et al. (2010) have proposed PEM, which uses a second language as pivot to establish semantic equivalence. Thus, no n-gram overlaps are required to determine the semantic adequacy of the paraphrase candidates. PEM 5The numbers for the English data are slightly underestimated since the workers sometimes incorrectly filled out the form when reporting what language they were using. 
also separately measures lexical dissimilarity and fluency. Finally, all three scores are combined using a support vector machine (SVM) trained on human ratings of paraphrase pairs. While PEM was shown to correlate well with human judgments, it has some limitations. It only models paraphrasing at the phrase level and not at the sentence level. Further, while it does not need reference sentences for the evaluation dataset, PEM does require suitable bilingual data to train the metric. The result is that training a successful PEM becomes almost as challenging as the original paraphrasing problem, since paraphrases need to be learned from bilingual data. The highly parallel nature of our data suggests a simpler solution to this problem. To measure semantic equivalence, we simply use BLEU with multiple references. The large number of reference paraphrases capture a wide space of sentences with equivalent meanings. While the set of reference sentences can of course never be exhaustive, our data collection method provides a natural distribution of common phrases that might be used to describe an action or event. A tight cluster with many similar parallel descriptions suggests there are only few common ways to express that concept. In addition to measuring semantic adequacy and fluency using BLEU, we also need to measure lexical dissimilarity with the source sentence. We introduce a new scoring metric PINC that measures how many n-grams differ between the two sentences. In essence, it is the inverse of BLEU since we want to minimize the number of n-gram overlaps between the two sentences. Specifically, for source sentence s and candidate sentence c: PINC(s, c) = 1 N N X n=1 1 −| n-grams ∩n-gramc | | n-gramc | where N is the maximum n-gram considered and ngrams and n-gramc are the lists of n-grams in the source and candidate sentences, respectively. We use N = 4 in our evaluations. The PINC score computes the percentage of ngrams that appear in the candidate sentence but not in the source sentence. This score is similar to the Jaccard distance, except that it excludes n-grams that only appear in the source sentence and not in the candidate sentence. In other words, it rewards candi194 dates for introducing new n-grams but not for omitting n-grams from the original sentence. The results for each n are averaged arithmetically. PINC evaluates single sentences instead of entire documents because we can reliably measure lexical dissimilarity at the sentence level. Also notice that we do not put additional constraints on sentence length: while extremely short and extremely long sentences are likely to score high on PINC, they still must maintain semantic adequacy as measured by BLEU. We use BLEU and PINC together as a 2dimensional scoring metric. A good paraphrase, according to our evaluation metric, has few n-gram overlaps with the source sentence but many n-gram overlaps with the reference sentences. This is consistent with our requirement that a good paraphrase should be lexically dissimilar from the source sentence while preserving its semantics. Unlike Liu et al. (2010), we treat these two criteria separately, since different applications might have different preferences for each. For example, a paraphrase suggestion tool for a word processing software might be more concerned with semantic adequacy, since presenting a paraphrase that does not preserve the meaning would likely result in a negative user experience. 
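As defined above, PINC is inexpensive to compute per sentence pair. The following sketch uses whitespace tokenization and type-level (set) n-gram matching as simplifying assumptions:

```python
def ngrams(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def pinc(source, candidate, max_n=4):
    """Average, over n = 1..max_n, of the fraction of candidate n-grams
    that do NOT appear in the source sentence."""
    src, cand = source.split(), candidate.split()
    scores = []
    for n in range(1, max_n + 1):
        cand_ngrams = ngrams(cand, n)
        if not cand_ngrams:          # candidate shorter than n tokens
            continue
        overlap = len(cand_ngrams & ngrams(src, n))
        scores.append(1.0 - overlap / len(cand_ngrams))
    return sum(scores) / len(scores) if scores else 0.0

print(pinc("a man fires a revolver", "a man is shooting a gun"))
```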
On the other hand, a query expansion algorithm might be less concerned with preserving the precise meaning so long as additional relevant terms are added to improve search recall. 5 Experiments To verify the usefulness of our paraphrase corpus and the BLEU/PINC metric, we built and evaluated several paraphrase systems and compared the automatic scores to human ratings of the generated paraphrases. We also investigated the pros and cons of collecting paraphrases using video annotation rather than directly eliciting them. 5.1 Building paraphrase models We built 4 paraphrase systems by training English to English translation models using Moses (Koehn et al., 2007) with the default settings. Using our paraphrase corpus to train and to test, we divided the sentence clusters associated with each video into 90% for training and 10% for testing. We restricted our attention to sentences produced from the Tier-2 tasks 1
[Figure 3 plot showing BLEU (68.9–69.9) against PINC (44.5–48.5) for systems trained on 1, 5, 10, and all parallel sentences.]
Figure 3: Evaluation of paraphrase systems trained on different numbers of parallel sentences. As more training pairs are used, the model produces more varied sentences (PINC) but preserves the meaning less well (BLEU) in order to avoid excessive noise in the datasets, resulting in 28,785 training sentences and 3,367 test sentences. To construct the training examples, we randomly paired each sentence with 1, 5, 10, or all parallel descriptions of the same video segment. This corresponds to 28K, 143K, 287K, and 449K training pairs respectively. For the test set, we used each sentence once as the source sentence with all parallel descriptions as references (there were 16 references on average, with a minimum of 10 and a maximum of 31.) We also included the source sentence as a reference for itself. Overall, all the trained models produce reasonable paraphrase systems, even the model trained on just 28K single parallel sentences. Examples of the outputs produced by the models trained on single parallel sentences and on all parallel sentences are shown in Table 2. Some of the changes are simple word substitutions, e.g. rabbit for bunny or gun for revolver, while others are phrasal, e.g. frying meat for browning pork or made a basket for scores in a basketball game. One interesting result of using videos as the stimulus to collect training data is that sometimes the learned paraphrases are not based on linguistic closeness, but rather on visual similarity, e.g. substituting cricket for baseball. To evaluate the results quantitatively, we used the BLEU/PINC metric. The performance of all the trained models is shown in Figure 3. Unsurprisingly, there is a tradeoff between preserving the meaning 195 Original sentence Trained on 1 parallel sentence Trained on all parallel sentences a bunny is cleaning its paw a rabbit is licking its paw a rabbit is cleaning itself a man fires a revolver a man is shooting targets a man is shooting a gun a big turtle is walking a huge turtle is walking a large tortoise is walking a guy is doing a flip over a park bench a man does a flip over a bench a man is doing stunts on a bench milk is being poured into a mixer a man is pouring milk into a mixer a man is pouring milk into a bowl children are practicing baseball children are doing a cricket children are playing cricket a boy is doing karate a man is doing karate a boy is doing martial arts a woman is browning pork in a pan a woman is browning pork in a pan a woman is frying meat in a pan a player scores in a basketball game a player made a basketball game a player made a basket Table 2: Examples of paraphrases generated by the trained models. and producing more varied paraphrases. Systems trained on fewer parallel sentences are more conservative and make fewer mistakes. On the other hand, systems trained on more parallel sentences often produce very good paraphrases but are also more likely to diverge from the original meaning. As a comparison, evaluating each human description as a paraphrase for the other descriptions in the same cluster resulted in a BLEU score of 52.9 and a PINC score of 77.2. Thus, all the systems performed very well in terms of retaining semantic content, although not as well in producing novel sentences. To validate the results suggested by the automatic metrics, we asked two fluent English speakers to rate the generated paraphrases on the following categories: semantic, dissimilarity, and overall. 
Semantic measures how well the paraphrase preserves the original meaning while dissimilarity measures how much the paraphrase differs from the source sentence. Each category is rated from 1 to 4, with 4 being the best. A paraphrase identical to the source sentence would receive a score of 4 for meaning and 1 for dissimilarity and overall. We randomly selected 200 source sentences and generated 2 paraphrases for each, representing the two extremes: one paraphrase produced by the model trained with single parallel sentences, and the other by the model trained with all parallel sentences. The average scores of the two human judges are shown in Table 3. The results confirm our finding that the system trained with single parallel sentences preserves the meaning better but is also more conservative. 5.2 Correlation with human judgments Having established rough correspondences between BLEU/PINC scores and human judgments of seSemantic Dissimilarity Overall 1 3.09 2.65 2.51 All 2.91 2.89 2.43 Table 3: Average human ratings of the systems trained on single parallel sentences and on all parallel sentences. mantic equivalence and lexical dissimilarity, we quantified the correlation between these automatic metrics and human ratings using Pearson’s correlation coefficient, a measure of linear dependence between two random variables. We computed the inter-annotator agreement as well as the correlation between BLEU, PINC, PEM (Liu et al., 2010) and the average human ratings on the sentence level. Results are shown in Table 4. In order to measure correlation, we need to score each paraphrase individually. Thus, we recomputed BLEU on the sentence level and left the PINC scores unchanged. While BLEU is typically not reliable at the single sentence level, our large number of reference sentences makes BLEU more stable even at this granularity. Empirically, BLEU correlates fairly well with human judgments of semantic equivalence, although still not as well as the inter-annotator agreement. On the other hand, PINC correlates as well as humans agree with each other in assessing lexical dissimilarity. We also computed each metric’s correlation with the overall ratings, although neither should be used alone to assess the overall quality of paraphrases. PEM had the worst correlation with human judgments of all the metrics. Since PEM was trained on newswire data, its poor adaptation to this domain is expected. However, given the large amount of training data needed (PEM was trained on 250K Chinese196 Semantic Dissimilarity Overall Judge A vs. B 0.7135 0.6319 0.4920 BLEU vs. Human 0.5095 N/A 0.2127 PINC vs. Human N/A 0.6672 0.0775 PEM vs. Human N/A N/A 0.0654 PINC vs. Human (BLEU > threshold) threshold = 0 N/A 0.6541 0.1817 threshold = 30 N/A 0.6493 0.1984 threshold = 60 N/A 0.6815 0.3986 threshold = 90 N/A 0.7922 0.4350 Combined BLEU and PINC vs. Human Arithmetic Mean N/A N/A 0.3173 Geometric Mean N/A N/A 0.3003 Harmonic Mean N/A N/A 0.3036 PINC × Sigmoid(BLEU) N/A N/A 0.3532 Table 4: Correlation between the human judges as well as between the automatic metrics and the human judges. English sentence pairs and 2400 human ratings of paraphrase pairs), it is difficult to use PEM as a general metric. Adapting PEM to a new domain would require sufficient in-domain bilingual data to support paraphrase extraction. In contrast, our approach only requires monolingual data, and evaluation can be performed using arbitrarily small, highly-parallel datasets. 
Moreover, PEM requires sample human ratings in training, thereby lessening the advantage of having automatic metrics. Since lexical dissimilarity is only desirable when the semantics of the original sentence is unchanged, we also computed correlation between PINC and the human ratings when BLEU is above certain thresholds. As we restrict our attention to the set of paraphrases with higher BLEU scores, we see an increase in correlation between PINC and the human assessments. This confirms our intuition that PINC is a more useful measure when semantic content has been preserved. Finally, while we do not believe any single score could adequately describe the quality of a paraphrase outside of a specific application, we experimented with different ways of combining BLEU and PINC into a single score. Almost any simple combination, such as taking the average of the two, yielded decent correlation with the human ratings. The best correlation was achieved by taking the product of PINC and a sigmoid function of BLEU. This follows the intuition that semantic preservation is closer to a -‐0.1
[Figure 4 plot: Pearson's correlation (-0.1 to 0.6) against number of references for BLEU (1–12, All); series: BLEU with source vs. Semantic, BLEU without source vs. Semantic, BLEU with source vs. Overall, BLEU without source vs. Overall.]
Figure 4: Correlation between BLEU and human judgments as we vary the number of reference sentences. binary decision (i.e. a paraphrase either preserves the meaning or it does not, in which case PINC does not matter at all) than a linear function. We used an oracle to pick the best logistic function in our experiment. In practice, some sample human ratings would be required to tune this function. Other more complicated methods for combining BLEU and PINC are also possible with sample human ratings, such as using a SVM as was done in PEM. We quantified the utility of our highly parallel data by computing the correlation between BLEU and human ratings when different numbers of references were available. The results are shown in Figure 4. As the number of references increases, the correlation with human ratings also increases. The graph also shows the effect of adding the source sentence as a reference. If our goal is to assess semantic equivalence only, then it is better to include the source sentence. If we are trying to assess the overall quality of the paraphrase, it is better to exclude the source sentence, since otherwise the metric will tend to favor paraphrases that introduce fewer changes. 5.3 Direct paraphrasing versus video annotation In addition to collecting paraphrases through video annotations, we also experimented with the more traditional task of presenting a sentence to an annotator and explicitly asking for a paraphrase. We randomly selected a thousand sentences from our data and collected two paraphrases of each using Mechanical Turk. We conducted a post-annotation sur197 vey of workers who had completed both the video description and the direct paraphrasing tasks, and found that paraphrasing was considered more difficult and less enjoyable than describing videos. Of those surveyed, 92% found video annotations more enjoyable, and 75% found them easier. Based on the comments, the only drawback of the video annotation task is the time required to load and watch the videos. Overall, half of the workers preferred the video annotation task while only 16% of the workers preferred the paraphrasing task. The data produced by the direct paraphrasing task also diverged less, since the annotators were inevitably biased by lexical choices and word order in the original sentences. On average, a direct paraphrase had a PINC score of 70.08, while a parallel description of the same video had a score of 78.75. 6 Discussions and Future Work While our data collection framework yields useful parallel data, it also has some limitations. Finding appropriate videos is time-consuming and remains a bottleneck in the process. Also, more abstract actions such as reducing the deficit or fighting for justice cannot be easily captured by our method. One possible solution is to use longer video snippets or other visual stimuli such as graphs, schemas, or illustrated storybooks to convey more complicated information. However, the increased complexity is also likely to reduce the semantic closeness of the parallel descriptions. Another limitation is that sentences produced by our framework tend to be short and follow similar syntactic structures. Asking annotators to write multiple descriptions or longer descriptions would result in more varied data but at the cost of more noise in the alignments. Other than descriptions, we could also ask the annotators for more complicated responses such as “fill in the blanks” in a dialogue (e.g. 
“If you were this person in the video, what would you say at this point?”), their opinion of the event shown, or the moral of the story. However, as with the difficulty of aligning news stories, finding paraphrases within these more complex responses could require additional annotation efforts. In our experiments, we only used a subset of our corpus to avoid dealing with excessive noise. However, a significant portion of the remaining data is useful. Thus, an automatic method for filtering those sentences could allow us to utilize even more of the data. For example, sentences from the Tier-2 tasks could be used as positive examples to train a string classifier to determine whether a noisy sentence belongs in the same cluster or not. We have so far used BLEU to measure semantic adequacy since it is the most common MT metric. However, other more advanced MT metrics that have shown higher correlation with human judgments could also be used. In addition to paraphrasing, our data collection framework could also be used to produces useful data for machine translation and computer vision. By pairing up descriptions of the same video in different languages, we obtain parallel data without requiring any bilingual skills. Another application for our data is to apply it to computer vision tasks such as video retrieval. The dataset can be readily used to train and evaluate systems that can automatically generate full descriptions of unseen videos. As far as we know, there are currently no datasets that contain whole-sentence descriptions of open-domain video segments. 7 Conclusion We introduced a data collection framework that produces highly parallel data by asking different annotators to describe the same video segments. Deploying the framework on Mechanical Turk over a two-month period yielded 85K English descriptions for 2K videos, one of the largest paraphrase data resources publicly available. In addition, the highly parallel nature of the data allows us to use standard MT metrics such as BLEU to evaluate semantic adequacy reliably. Finally, we also introduced a new metric, PINC, to measure the lexical dissimilarity between the source sentence and the paraphrase. Acknowledgments We are grateful to everyone in the NLP group at Microsoft Research and Natural Language Learning group at UT Austin for helpful discussions and feedback. We thank Chris Brockett, Raymond Mooney, Katrin Erk, Jason Baldridge and the anonymous reviewers for helpful comments on a previous draft. 198 References Vamshi Ambati and Stephan Vogel. 2010. Can crowds build parallel corpora for machine translation systems? In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk. Colin Bannard and Chris Callison-Burch. 2005. Paraphrasing with bilingual parallel corpora. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL-05). Regina Barzilay and Lillian Lee. 2003. Learning to paraphrase: An unsupervised approach using multiplesequence alignment. In Proceedings of Human Language Technology Conference / North American Association for Computational Linguistics Annual Meeting (HLT-NAACL-2003). Regina Barzilay and Kathleen McKeown. 2001. Extracting paraphrases from a parallel corpus. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics (ACL-2001). Michael Bloodgood and Chris Callison-Burch. 2010a. Bucking the trend: Large-scale cost-focused active learning for statistical machine translation. 
In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL-2010). Michael Bloodgood and Chris Callison-Burch. 2010b. Using Mechanical Turk to build machine translation evaluation sets. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk. Olivia Buzek, Philip Resnik, and Benjamin B. Bederson. 2010. Error driven paraphrase annotation using Mechanical Turk. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk. Chris Callison-Burch, Philipp Koehn, and Miles Osborne. 2006. Improved statistical machine translation using paraphrases. In Proceedings of Human Language Technology Conference / North American Chapter of the Association for Computational Linguistics Annual Meeting (HLT-NAACL-06). Chris Callison-Burch, Trevor Cohn, and Mirella Lapata. 2008. Parametric: An automatic evaluation metric for paraphrasing. In Proceedings of the 22nd International Conference on Computational Linguistics (COLING-2008). Chris Callison-Burch. 2008. Syntactic constraints on paraphrases extracted from parallel corpora. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing (EMNLP-2008). Chris Callison-Burch. 2009. Fast, cheap, and creative: Evaluating translation quality using Amazon’s Mechanical Turk. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP-2009). Wallace L. Chafe. 1997. The Pear Stories: Cognitive, Cultural and Linguistic Aspects of Narrative Production. Ablex, Norwood, NJ. Trevor Cohn, Chris Callison-Burch, and Mirella Lapata. 2008. Constructing corpora for the development and evaluation of paraphrase systems. Computational Linguistics, 34:597–614, December. Michael Denkowski and Alon Lavie. 2010. Exploring normalization techniques for human judgments of machine translation adequacy collected using Amazon Mechanical Turk. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk. Michael Denkowski, Hassan Al-Haj, and Alon Lavie. 2010. Turker-assisted paraphrasing for EnglishArabic machine translation. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk. Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In Proceedings of the 20th International Conference on Computational Linguistics (COLING-2004). Li Fei-Fei, Asha Iyer, Christof Koch, and Pietro Perona. 2007. What do we perceive in a glance of a real-world scene? Journal of Vision, 7(1):1–29. Ali Ibrahim, Boris Katz, and Jimmy Lin. 2003. Extracting structural paraphrases from aligned monolingual corpora. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL03). Ann Irvine and Alexandre Klementiev. 2010. Using Mechanical Turk to annotate lexicons for less commonly used languages. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. 
In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL-07). Stanley Kok and Chris Brockett. 2010. Hitting the right paraphrases in good time. In Proceedings of Human Language Technologies: The Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT-2010). Dekang Lin and Patrick Pantel. 2001. DIRT-discovery of inference rules from text. In Proceedings of the 199 Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-2001). Chang Liu, Daniel Dahlmeier, and Hwee Tou Ng. 2010. PEM: A paraphrase evaluation metric exploiting parallel texts. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing (EMNLP-2010). Xiaojuan Ma and Perry R. Cook. 2009. How well do visual verbs work in daily communication for young and old adults. In Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems. Bo Pang, Kevin Knight, and Daniel Marcu. 2003. Syntax-based alignment of multiple translations: Extracting paraphrases and generating new sentences. In Proceedings of Human Language Technology Conference / North American Association for Computational Linguistics Annual Meeting (HLT-NAACL-2003). Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL-2002), pages 311–318, Philadelphia, PA, July. Chris Quirk, Chris Brockett, and William Dolan. 2004. Monolingual machine translation for paraphrase generation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP-2004). Cyrus Rashtchian, Peter Young, Micah Hodosh, and Julia Hockenmaier. 2010. Collecting image annotations using Amazon’s Mechanical Turk. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk. Yusuke Shinyama, Satoshi Sekine, Kiyoshi Sudo, and Ralph Grishman. 2002. Automatic paraphrase acquisition from news articles. In Proceedings of Human Language Technology Conference. Luis von Ahn and Laura Dabbish. 2004. Labeling images with a computer game. In Proceedings of ACM CHI 2004 Conference on Human Factors in Computing Systems. Omar F. Zaidan and Chris Callison-Burch. 2009. Feasibility of human-in-the-loop minimum error rate training. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP-2009). 200
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 201–210, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics A Large Scale Distributed Syntactic, Semantic and Lexical Language Model for Machine Translation Ming Tan Wenli Zhou Lei Zheng Shaojun Wang Kno.e.sis Center Department of Computer Science and Engineering Wright State University Dayton, OH 45435, USA {tan.6,zhou.23,lei.zheng,shaojun.wang}@wright.edu Abstract This paper presents an attempt at building a large scale distributed composite language model that simultaneously accounts for local word lexical information, mid-range sentence syntactic structure, and long-span document semantic content under a directed Markov random field paradigm. The composite language model has been trained by performing a convergent N-best list approximate EM algorithm that has linear time complexity and a followup EM algorithm to improve word prediction power on corpora with up to a billion tokens and stored on a supercomputer. The large scale distributed composite language model gives drastic perplexity reduction over ngrams and achieves significantly better translation quality measured by the BLEU score and “readability” when applied to the task of re-ranking the N-best list from a state-of-theart parsing-based machine translation system. 1 Introduction The Markov chain (n-gram) source models, which predict each word on the basis of previous n-1 words, have been the workhorses of state-of-the-art speech recognizers and machine translators that help to resolve acoustic or foreign language ambiguities by placing higher probability on more likely original underlying word strings. Research groups (Brants et al., 2007; Zhang, 2008) have shown that using an immense distributed computing paradigm, up to 6grams can be trained on up to billions and trillions of words, yielding consistent system improvements, but Zhang (2008) did not observe much improvement beyond 6-grams. Although the Markov chains are efficient at encoding local word interactions, the n-gram model clearly ignores the rich syntactic and semantic structures that constrain natural languages. As the machine translation (MT) working groups stated on page 3 of their final report (Lavie et al., 2006), “These approaches have resulted in small improvements in MT quality, but have not fundamentally solved the problem. There is a dire need for developing novel approaches to language modeling.” Wang et al. (2006) integrated n-gram, structured language model (SLM) (Chelba and Jelinek, 2000) and probabilistic latent semantic analysis (PLSA) (Hofmann, 2001) under the directed MRF framework (Wang et al., 2005) and studied the stochastic properties for the composite language model. They derived a generalized inside-outside algorithm to train the composite language model from a general EM (Dempster et al., 1977) by following Jelinek’s ingenious definition of the inside and outside probabilities for SLM (Jelinek, 2004) with 6th order of sentence length time complexity. Unfortunately, there are no experimental results reported. In this paper, we study the same composite language model. Instead of using the 6th order generalized inside-outside algorithm proposed in (Wang et al., 2006), we train this composite model by a convergent N-best list approximate EM algorithm that has linear time complexity and a follow-up EM algorithm to improve word prediction power. 
We conduct comprehensive experiments on corpora with 44 million tokens, 230 million tokens, and 1.3 billion tokens and compare perplexity results with n-grams (n=3,4,5 respectively) on these three corpora, we obtain drastic perplexity reductions. Finally, we ap201 ply our language models to the task of re-ranking the N-best list from Hiero (Chiang, 2005; Chiang, 2007), a state-of-the-art parsing-based MT system, we achieve significantly better translation quality measured by the BLEU score and “readability”. 2 Composite language model The n-gram language model is essentially a word predictor that given its entire document history it predicts next word wk+1 based on the last n-1 words with probability p(wk+1|wk k−n+2) where wk k−n+2 = wk−n+2, · · · , wk. The SLM (Chelba and Jelinek, 1998; Chelba and Jelinek, 2000) uses syntactic information beyond the regular n-gram models to capture sentence level long range dependencies. The SLM is based on statistical parsing techniques that allow syntactic analysis of sentences; it assigns a probability p(W, T) to every sentence W and every possible binary parse T. The terminals of T are the words of W with POS tags, and the nodes of T are annotated with phrase headwords and non-terminal labels. Let W be a sentence of length n words to which we have prepended the sentence beginning marker <s> and appended the sentence end marker </s> so that w0 =<s> and wn+1 =</s>. Let Wk = w0, · · · , wk be the word k-prefix of the sentence – the words from the beginning of the sentence up to the current position k and WkTk the word-parse k-prefix. A word-parse k-prefix has a set of exposed heads h−m, · · · , h−1, with each head being a pair (headword, non-terminal label), or in the case of a root-only tree (word, POS tag). An m-th order SLM (m-SLM) has three operators to generate a sentence: WORDPREDICTOR predicts the next word wk+1 based on the m left-most exposed headwords h−1 −m = h−m, · · · , h−1 in the word-parse k-prefix with probability p(wk+1|h−1 −m), and then passes control to the TAGGER; the TAGGER predicts the POS tag tk+1 to the next word wk+1 based on the next word wk+1 and the POS tags of the m left-most exposed headwords h−1 −m in the word-parse k-prefix with probability p(tk+1|wk+1, h−m.tag, · · · , h−1.tag); the CONSTRUCTOR builds the partial parse Tk from Tk−1, wk, and tk in a series of moves ending with NULL, where a parse move a is made with probability p(a|h−1 −m); a ∈A={(unary, NTlabel), (adjoinleft, NTlabel), (adjoin-right, NTlabel), null}. Once the CONSTRUCTOR hits NULL, it passes control to the WORD-PREDICTOR. See detailed description in (Chelba and Jelinek, 2000). A PLSA model (Hofmann, 2001) is a generative probabilistic model of word-document cooccurrences using the bag-of-words assumption described as follows: (i) choose a document d with probability p(d); (ii) SEMANTIZER: select a semantic class g with probability p(g|d); and (iii) WORD-PREDICTOR: pick a word w with probability p(w|g). Since only one pair of (d, w) is being observed, as a result, the joint probability model is a mixture of log-linear model with the expression p(d, w) = p(d) P g p(w|g)p(g|d). Typically, the number of documents and vocabulary size are much larger than the size of latent semantic class variables. Thus, latent semantic class variables function as bottleneck variables to constrain word occurrences in documents. 
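To make the PLSA component concrete, the following minimal Python sketch evaluates the joint probability p(d, w) = p(d) Σ_g p(w|g) p(g|d) from given parameter tables. It is an illustration only, not the authors' implementation; the toy dimensions and randomly drawn distributions are assumptions made for the example.

```python
import numpy as np

# PLSA joint probability p(d, w) = p(d) * sum_g p(w|g) * p(g|d).
# Toy dimensions and randomly drawn distributions are for illustration only.
rng = np.random.default_rng(0)
n_docs, n_topics, n_words = 4, 3, 10
p_d = np.full(n_docs, 1.0 / n_docs)                     # p(d)
p_g_given_d = rng.dirichlet(np.ones(n_topics), n_docs)  # row d: p(g|d)
p_w_given_g = rng.dirichlet(np.ones(n_words), n_topics) # row g: p(w|g)

def plsa_joint(d, w):
    """Probability of observing word w in document d."""
    return p_d[d] * np.dot(p_g_given_d[d], p_w_given_g[:, w])

# Each document's word distribution is a topic mixture, so rows sum to one.
p_w_given_d = p_g_given_d @ p_w_given_g
assert np.allclose(p_w_given_d.sum(axis=1), 1.0)
print(plsa_joint(0, 5))
```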
When combining n-gram, m order SLM and PLSA models together to build a composite generative language model under the directed MRF paradigm (Wang et al., 2005; Wang et al., 2006), the TAGGER and CONSTRUCTOR in SLM and SEMANTIZER in PLSA remain unchanged; however the WORD-PREDICTORs in n-gram, m-SLM and PLSA are combined to form a stronger WORDPREDICTOR that generates the next word, wk+1, not only depending on the m left-most exposed headwords h−1 −m in the word-parse k-prefix but also its n-gram history wk k−n+2 and its semantic content gk+1. The parameter for WORD-PREDICTOR in the composite n-gram/m-SLM/PLSA language model becomes p(wk+1|wk k−n+2h−1 −mgk+1). The resulting composite language model has an even more complex dependency structure but with more expressive power than the original SLM. Figure 1 illustrates the structure of a composite n-gram/mSLM/PLSA language model. The composite n-gram/m-SLM/PLSA language model can be formulated as a directed MRF model (Wang et al., 2006) with local normalization constraints for the parameters of each model component, WORDPREDICTOR, TAGGER, CONSTRUCTOR, SEMANTIZER, i.e., P w∈V p(w|w−1 −n+1h−1 −mg) = 1, P t∈O p(t|wh−1 −m.tag) = 1, P a∈A p(a|h−1 −m) = 1, P g∈G p(g|d) = 1. 202 ...... ...... ...... g w g g g ...... ...... ...... ...... </s> d k k−n+2 j+1 ...... <s> w1 i i ...... ...... g1 wk wk+1 gk+1 h−1 h−2 h−m j+1 w wj gj ...... k−n+2 w ...... Figure 1: A composite n-gram/m-SLM/PLSA language model where the hidden information is the parse tree T and semantic content g. The WORD-PREDICTOR generates the next word wk+1 with probability p(wk+1|wk k−n+2h−1 −mgk+1) instead of p(wk+1|wk k−n+2), p(wk+1|h−1 −m) and p(wk+1|gk+1) respectively. 3 Training algorithm Under the composite n-gram/m-SLM/PLSA language model, the likelihood of a training corpus D, a collection of documents, can be written as L(D, p) = Y d∈D Y l X Gl X T l Pp(W l, T l, Gl|d) !!! (1) where (W l, T l, Gl, d) denote the joint sequence of the lth sentence W l with its parse tree structure T l and semantic annotation string Gl in document d. This sequence is produced by a unique sequence of model actions: WORD-PREDICTOR, TAGGER, CONSTRUCTOR, SEMANTIZER moves, its probability is obtained by chaining the probabilities of these moves Pp(W l, T l, Gl|d) = Y g∈G 0 @p(g|d)#(g,W l,Gl,d) Y h−1,··· ,h−m∈H Y w,w−1,··· ,w−n+1∈V p(w|w−1 −n+1h−1 −mg)#(w−1 −n+1wh−1 −mg,W l,T l,Gl,d) Y t∈O p(t|wh−1 −m.tag)#(t,wh−1 −m.tag,W l,T l,d) Y a∈A p(a|h−1 −m)#(a,h−1 −m,W l,T l,d) ! where #(g, W l, Gl, d) is the count of semantic content g in semantic annotation string Gl of the lth sentence W l in document d, #(w−1 −n+1wh−1 −mg, W l, T l, Gl, d) is the count of n-grams, its m most recent exposed headwords and semantic content g in parse T l and semantic annotation string Gl of the lth sentence W l in document d, #(twh−1 −m.tag, W l, T l, d) is the count of tag t predicted by word w and the tags of m most recent exposed headwords in parse tree T l of the lth sentence W l in document d, and finally #(ah−1 −m, W l, T l, d) is the count of constructor move a conditioning on m exposed headwords h−1 −m in parse tree T l of the lth sentence W l in document d. The objective of maximum likelihood estimation is to maximize the likelihood L(D, p) respect to model parameters. For a given sentence, its parse tree and semantic content are hidden and the number of parse trees grows faster than exponential with sentence length, Wang et al. 
(2006) have derived a generalized inside-outside algorithm by applying the standard EM algorithm. However, the complexity of this algorithm is 6th order of sentence length, thus it is computationally too expensive to be practical for a large corpus even with the use of pruning on charts (Jelinek and Chelba, 1999; Jelinek, 2004). 3.1 N-best list approximate EM Similar to SLM (Chelba and Jelinek, 2000), we adopt an N-best list approximate EM re-estimation with modular modifications to seamlessly incorporate the effect of n-gram and PLSA components. Instead of maximizing the likelihood L(D, p), we maximize the N-best list likelihood, max T ′N L(D, p, T ′ N) = Y d∈D Y l max T ′l N ∈T ′N X Gl 0 @ X T l∈T ′l N ,||T ′l N ||=N Pp(W l, T l, Gl|d) 1 A 1 A 1 A where T ′l N is a set of N parse trees for sentence W l in document d and || · || denotes the cardinality and T ′N is a collection of T ′l N for sentences over entire corpus D. The N-best list approximate EM involves two steps: 1. N-best list search: For each sentence W in document d, find N-best parse trees, T l N = arg max T ′l N n X Gl X T l∈T ′l N Pp(W l, T l, Gl|d), ||T ′l N|| = N o and denote TN as the collection of N-best list parse trees for sentences over entire corpus D under model parameter p. 2. EM update: Perform one iteration (or several iterations) of EM algorithm to estimate model 203 parameters that maximizes N-best-list likelihood of the training corpus D, ˜L(D, p, TN) = Y d∈D ( Y l ( X Gl ( X T l∈T l N ∈TN Pp(W l, T l, Gl|d)))) That is, (a) E-step: Compute the auxiliary function of the N-best-list likelihood ˜Q(p′, p, TN) = X d∈D X l X Gl X T l∈T l N ∈TN Pp(T l, Gl|W l, d) log Pp′(W l, T l, Gl|d) (b) M-step: Maximize ˜Q(p′, p, TN) with respect to p′ to get new update for p. Iterate steps (1) and (2) until the convergence of the N-best-list likelihood. Due to space constraints, we omit the proof of the convergence of the N-best list approximate EM algorithm which uses Zangwill’s global convergence theorem (Zangwill, 1969). N-best list search strategy: To extract the Nbest parse trees, we adopt a synchronous, multistack search strategy that is similar to the one in (Chelba and Jelinek, 2000), which involves a set of stacks storing partial parses of the most likely ones for a given prefix Wk and the less probable parses are purged. Each stack contains hypotheses (partial parses) that have been constructed by the same number of WORD-PREDICTOR and the same number of CONSTRUCTOR operations. The hypotheses in each stack are ranked according to the log(P Gk Pp(Wk, Tk, Gk|d)) score with the highest on top, where Pp(Wk, Tk, Gk|d) is the joint probability of prefix Wk = w0, · · · , wk with its parse structure Tk and semantic annotation string Gk = g1, · · · , gk in a document d. A stack vector consists of the ordered set of stacks containing partial parses with the same number of WORD-PREDICTOR operations but different number of CONSTRUCTOR operations. In WORD-PREDICTOR and TAGGER operations, some hypotheses are discarded due to the maximum number of hypotheses the stack can contain at any given time. In CONSTRUCTOR operation, the resulting hypotheses are discarded due to either finite stack size or the log-probability threshold: the maximum tolerable difference between the log-probability score of the top-most hypothesis and the bottom-most hypothesis at any given state of the stack. 
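The pruning inside each stack can be illustrated with a short sketch: hypotheses survive only if they are among the top max_size partial parses and within a log-probability margin of the best one. The stack size, threshold value, and the string placeholders standing in for partial parses are illustrative assumptions, not the settings used in the paper.

```python
import math

# Sketch of per-stack pruning in the synchronous multi-stack search: keep at
# most max_size partial parses, then drop any hypothesis whose log score
# falls more than log_prob_threshold below the best one. The numeric settings
# and the string placeholders for partial parses are illustrative only.
def prune_stack(stack, max_size=10, log_prob_threshold=math.log(1e4)):
    stack = sorted(stack, key=lambda h: h[0], reverse=True)[:max_size]
    if not stack:
        return stack
    best_score = stack[0][0]
    return [h for h in stack if best_score - h[0] <= log_prob_threshold]

hyps = [(-2.3, "parse-a"), (-2.5, "parse-b"), (-30.0, "parse-c")]
print(prune_stack(hyps, max_size=2))   # the two best survive
```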
EM update: Once we have the N-best parse trees for each sentence in document d and N-best topics for document d, we derive the EM algorithm to estimate model parameters. In E-step, we compute the expected count of each model parameter over sentence W l in document d in the training corpus D. For the WORDPREDICTOR and the SEMANTIZER, the number of possible semantic annotation sequences is exponential, we use forward-backward recursive formulas that are similar to those in hidden Markov models to compute the expected counts. We define the forward vector αl(g|d) to be αl k+1(g|d) = X Gl k Pp(W l k, T l k, wk k−n+2wk+1h−1 −mg, Gl k|d) that can be recursively computed in a forward manner, where W l k is the word k-prefix for sentence W l, T l k is the parse for k-prefix. We define backward vector βl(g|d) to be βl k+1(g|d) = X Gl k+1,· Pp(W l k+1,·, T l k+1,·, Gl k+1,·|wk k−n+2wk+1h−1 −mg, d) that can be computed in a backward manner, here W l k+1,· is the subsequence after k+1th word in sentence W l, T l k+1,· is the incremental parse structure after the parse structure T l k+1 of word k+1prefix W l k+1 that generates parse tree T l, Gl k+1,· is the semantic subsequence in Gl relevant to W l k+1,·. Then, the expected count of w−1 −n+1wh−1 −mg for the WORD-PREDICTOR on sentence W l in document d is X Gl Pp(T l, Gl|W l, d)#(w−1 −n+1wh−1 −mg, W l, T l, Gl, d) = X l X k αl k+1(g|d)βl k+1(g|d)p(g|d) δ(wk k−n+2wk+1h−1 −mgk+1 = w−1 −n+1wh−1 −mg)/Pp(W l|d) where δ(·) is an indicator function and the expected count of g for the SEMANTIZER on sentence W l in document d is X Gl Pp(T l, Gl|W l, d)#(g, W l, Gl, d) = j−1 X k=0 αl k+1(g|d)βl k+1(g|d)p(g|d)/Pp(W l|d) For the TAGGER and the CONSTRUCTOR, the expected count of each event of twh−1 −m.tag and ah−1 −m over parse T l of sentence W l in 204 document d is the real count appeared in parse tree T l of sentence W l in document d times the conditional distribution Pp(T l|W l, d) = Pp(T l, W l|d)/ P T l∈T l Pp(T l, W l|d) respectively. In M-step, the recursive linear interpolation scheme (Jelinek and Mercer, 1981) is used to obtain a smooth probability estimate for each model component, WORD-PREDICTOR, TAGGER, and CONSTRUCTOR. The TAGGER and CONSTRUCTOR are conditional probabilistic models of the type p(u|z1, · · · , zn) where u, z1, · · · , zn belong to a mixed set of words, POS tags, NTtags, CONSTRUCTOR actions (u only), and z1, · · · , zn form a linear Markov chain. The recursive mixing scheme is the standard one among relative frequency estimates of different orders k = 0, · · · , n as explained in (Chelba and Jelinek, 2000). The WORD-PREDICTOR is, however, a conditional probabilistic model p(w|w−1 −n+1h−1 −mg) where there are three kinds of context w−1 −n+1, h−1 −m and g, each forms a linear Markov chain. The model has a combinatorial number of relative frequency estimates of different orders among three linear Markov chains. We generalize Jelinek and Mercer’s original recursive mixing scheme (Jelinek and Mercer, 1981) and form a lattice to handle the situation where the context is a mixture of Markov chains. 3.2 Follow-up EM As explained in (Chelba and Jelinek, 2000), for the SLM component, a large fraction of the partial parse trees that can be used for assigning probability to the next word do not survive in the synchronous, multistack search strategy, thus they are not used in the N-best approximate EM algorithm for the estimation of WORD-PREDICTOR to improve its predictive power. 
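As a simplified illustration of the E-step above, consider the SEMANTIZER expected counts when the parse and n-gram context are ignored: in that PLSA-only case the topic posterior at each position factorizes as p(g|d)p(w_k|g) up to normalization, whereas the full composite model requires the forward-backward recursion just described. The sketch below uses toy, randomly drawn parameters and is not the authors' code.

```python
import numpy as np

# Simplified E-step for the SEMANTIZER when parse and n-gram context are
# ignored: the topic posterior at position k is proportional to
# p(g|d) * p(w_k|g), and the expected count of topic g in a sentence is the
# sum of these posteriors. The full composite model instead needs the
# forward-backward recursion described above. Parameters are toy values.
rng = np.random.default_rng(1)
n_topics, vocab_size = 3, 8
p_g_given_d = rng.dirichlet(np.ones(n_topics))        # p(g|d), one document
p_w_given_g = rng.dirichlet(np.ones(vocab_size), n_topics)

def expected_topic_counts(word_ids):
    counts = np.zeros(n_topics)
    for w in word_ids:
        posterior = p_g_given_d * p_w_given_g[:, w]   # unnormalized p(g|w,d)
        counts += posterior / posterior.sum()
    return counts

print(expected_topic_counts([0, 3, 3, 7]))            # sums to sentence length
```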
To remedy this weakness, we estimate WORD-PREDICTOR using the algorithm below. The language model probability assignment for the word at position k+1 in the input sentence of document d can be computed as Pp(wk+1|Wk, d) = X h−1 −m∈Tk;Tk∈Zk,gk+1∈Gd p(wk+1|wk k−n+2h−1 −mgk+1) Pp(Tk|Wk, d)p(gk+1|d) (2) where Pp(Tk|Wk, d) = P Gk Pp(Wk,Tk,Gk|d) P Tk∈Zk P Gk Pp(Wk,Tk,Gk|d) and Zk is the set of all parses present in the stacks at the current stage k during the synchronous multistack pruning strategy and it is a function of the word k-prefix Wk. The likelihood of a training corpus D under this language model probability assignment that uses partial parse trees generated during the process of the synchronous, multi-stack search strategy can be written as ˜L(D, p) = Y d∈D Y l “ X k Pp(w(l) k+1|W l k, d) ” (3) We employ a second stage of parameter reestimation for p(wk+1|wk k−n+2h−1 −mgk+1) and p(gk+1|d) by using EM again to maximize Equation (3) to improve the predictive power of WORD-PREDICTOR. 3.3 Distributed architecture When using very large corpora to train our composite language model, both the data and the parameters can’t be stored in a single machine, so we have to resort to distributed computing. The topic of large scale distributed language models is relatively new, and existing works are restricted to n-grams only (Brants et al., 2007; Emami et al., 2007; Zhang et al., 2006). Even though all use distributed architectures that follow the client-server paradigm, the real implementations are in fact different. Zhang et al. (2006) and Emami et al. (2007) store training corpora in suffix arrays such that one sub-corpus per server serves raw counts and test sentences are loaded in a client. This implies that when computing the language model probability of a sentence in a client, all servers need to be contacted for each ngram request. The approach by Brants et al. (2007) follows a standard MapReduce paradigm (Dean and Ghemawat, 2004): the corpus is first divided and loaded into a number of clients, and n-gram counts are collected at each client, then the n-gram counts mapped and stored in a number of servers, resulting in exactly one server being contacted per n-gram when computing the language model probability of a sentence. We adopt a similar approach to Brants et al. and make it suitable to perform iterations of N-best list approximate EM algorithm, see Figure 2. The corpus is divided and loaded into a number of clients. We use a public available parser to parse the sentences in each client to get the initial counts for w−1 −n+1wh−1 −mg etc., finish the Map part, and then the counts for a particular w−1 −n+1wh−1 −mg at different clients are summed up and stored in one 205 Server 2 Server 1 Server L Client 1 Client 2 Client M Figure 2: Distributed architecture is essentially a MapReduce paradigm: clients store partitioned data and perform E-step: compute expected counts, this is Map; servers store parameters (counts) for M-step where counts of w−1 −n+1wh−1 −mg are hashed by word w−1 (or h−1) and its topic g to evenly distribute these model parameters into servers as much as possible, this is Reduce. of the servers by hashing through the word w−1 (or h−1) and its topic g, finish the Reduce part. This is the initialization of the N-best list approximate EM step. Each client then calls the servers for parameters to perform synchronous multi-stack search for each sentence to get the N-best list parse trees. 
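The count-shuffling pattern just described, with expected counts computed at the clients and summed at a server chosen by hashing the history word and topic, can be sketched as follows. The key layout, server count, and use of Python's built-in hash are illustrative assumptions; a real system would use a stable hash across machines.

```python
from collections import Counter

# Toy sketch of the count shuffling: each client emits fractional expected
# counts keyed by (history word, topic, predicted word); a count is routed to
# the server selected by hashing the history word and topic, and the server
# sums contributions from all clients. Key layout and server count are
# illustrative; a real system would use a stable hash across machines.
NUM_SERVERS = 4

def server_for(key):
    history_word, topic, _predicted = key
    return hash((history_word, topic)) % NUM_SERVERS

def reduce_counts(per_client_counts):
    servers = [Counter() for _ in range(NUM_SERVERS)]
    for client_counts in per_client_counts:          # "Map" output per client
        for key, count in client_counts.items():
            servers[server_for(key)][key] += count   # "Reduce" by summation
    return servers

client_a = {("refers", 2, "to"): 0.7, ("process", 1, "also"): 0.4}
client_b = {("refers", 2, "to"): 0.3}
print(reduce_counts([client_a, client_b]))
```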
Again, the expected count for a particular parameter of w−1 −n+1wh−1 −mg at the clients are computed, thus we finish a Map part, then summed up and stored in one of the servers by hashing through the word w−1 (or h−1) and its topic g, thus we finish the Reduce part. We repeat this procedure until convergence. Similarly, we use a distributed architecture as in Figure 2 to perform the follow-up EM algorithm to re-estimate WORD-PREDICTOR. 4 Experimental results We have trained our language models using three different training sets: one has 44 million tokens, another has 230 million tokens, and the other has 1.3 billion tokens. An independent test set which has 354 k tokens is chosen. The independent check data set used to determine the linear interpolation coefficients has 1.7 million tokens for the 44 million tokens training corpus, 13.7 million tokens for both 230 million and 1.3 billion tokens training corpora. All these data sets are taken from the LDC English Gigaword corpus with non-verbalized punctuation and we remove all punctuation. Table 1 gives the detailed information on how these data sets are chosen from the LDC English Gigaword corpus. The vocabulary sizes in all three cases are: • word (also WORD-PREDICTOR operation) 1.3 BILLION TOKENS TRAINING CORPUS AFP 19940512.0003 ∼19961015.0568 AFW 19941111.0001 ∼19960414.0652 NYT 19940701.0001 ∼19950131.0483 NYT 19950401.0001 ∼20040909.0063 XIN 19970901.0001 ∼20041125.0119 230 MILLION TOKENS TRAINING CORPUS AFP 19940622.0336 ∼19961031.0797 APW 19941111.0001 ∼19960419.0765 NYT 19940701.0001 ∼19941130.0405 44 MILLION TOKENS TRAINING CORPUS AFP 19940601.0001 ∼19950721.0137 13.7 MILLION TOKENS CHECK CORPUS NYT 19950201.0001 ∼19950331.0494 1.7 MILLION TOKENS CHECK CORPUS AFP 19940512.0003 ∼19940531.0197 354 K TOKENS TEST CORPUS CNA 20041101.0006 ∼20041217.0009 Table 1: The corpora used in our experiments are selected from the LDC English Gigaword corpus and specified in this table, AFP, AFW, NYT, XIN and CNA denote the sections of the LDC English Gigaword corpus. vocabulary: 60 k, open - all words outside the vocabulary are mapped to the <unk> token, these 60 k words are chosen from the most frequently occurred words in 44 millions tokens corpus; • POS tag (also TAGGER operation) vocabulary: 69, closed; • non-terminal tag vocabulary: 54, closed; • CONSTRUCTOR operation vocabulary: 157, closed. Similar to SLM (Chelba and Jelinek, 2000), after the parses undergo headword percolation and binarization, each model component of WORDPREDICTOR, TAGGER, and CONSTRUCTOR is initialized from a set of parsed sentences. We use the “openNLP” software (Northedge, 2005) to parse a large amount of sentences in the LDC English Gigaword corpus to generate an automatic treebank, which has a slightly different word-tokenization than that of the manual treebank such as the Upenn Treebank used in (Chelba and Jelinek, 2000). For the 44 and 230 million tokens corpora, all sentences are automatically parsed and used to initialize model parameters, while for 1.3 billion tokens corpus, we parse the sentences from a portion of the corpus that 206 contain 230 million tokens, then use them to initialize model parameters. The parser at ”openNLP” is trained by Upenn treebank with 1 million tokens and there is a mismatch between Upenn treebank and LDC English Gigaword corpus. Nevertheless, experimental results show that this approach is effective to provide initial values of model parameters. 
As we have explained, the proposed EM algorithms can be naturally cast into a MapReduce framework, see more discussion in (Lin and Dyer, 2010). If we have access to a large cluster of machines with Hadoop installed that are powerful enough to process a billion tokens level corpus, we just need to specify a map function and a reduce function etc., Hadoop will automatically parallelize and execute programs written in this functional style. Unfortunately, we don’t have this kind of resources available. Instead, we have access to a supercomputer at a supercomputer center with MPI installed that has more than 1000 core processors usable. Thus we implement our algorithms using C++ under MPI on the supercomputer, where we have to write C++ codes for Map part and Reduce part, and the MPI is used to take care of massage passing, scheduling, synchronization, etc. between clients and servers. This involves a fair amount of programming work, even though our implementation under MPI is not as reliable as under Hadoop but it is more efficient. We use up to 1000 core processors to train the composite language models for 1.3 billion tokens corpus where 900 core processors are used to store the parameters alone. We decide to use linearly smoothed trigram as the baseline model for 44 million token corpus, linearly smoothed 4-gram as the baseline model for 230 million token corpus, and linearly smoothed 5-gram as the baseline model for 1.3 billion token corpus. Model size is a big issue, we have to keep only a small set of topics due to the consideration in both computational time and resource demand. Table 2 shows the perplexity results and computation time of composite n-gram/PLSA language models that are trained on three corpora when the pre-defined number of total topics is 200 but different numbers of most likely topics are kept for each document in PLSA, the rest are pruned. For composite 5-gram/PLSA model trained on 1.3 billion tokens corpus, 400 cores have to be used to keep top 5 most likely topics. For composite trigram/PLSA model trained on 44M tokens corpus, the computation time increases drastically with less than 5% percent perplexity improvement. So in the following experiments, we keep top 5 topics for each document from total 200 topics and all other 195 topics are pruned. All composite language models are first trained by performing N-best list approximate EM algorithm until convergence, then EM algorithm for a second stage of parameter re-estimation for WORDPREDICTOR and SEMANTIZER until convergence. We fix the size of topics in PLSA to be 200 and then prune to 5 in the experiments, where the unpruned 5 topics in general account for 70% probability in p(g|d). Table 3 shows comprehensive perplexity results for a variety of different models such as composite n-gram/m-SLM, n-gram/PLSA, mSLM/PLSA, their linear combinations, etc., where we use online EM with fixed learning rate to reestimate the parameters of the SEMANTIZER of test document. The m-SLM performs competitively with its counterpart n-gram (n=m+1) on large scale corpus. In Table 3, for composite n-gram/m-SLM model (n = 3, m = 2 and n = 4, m = 3) trained on 44 million tokens and 230 million tokens, we cut off its fractional expected counts that are less than a threshold 0.005, this significantly reduces the number of predictor’s types by 85%. 
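The two pruning steps used here, keeping the top-k topics of p(g|d) per document and discarding fractional expected counts below a cut-off, can be sketched as follows. Whether p(g|d) is renormalized after pruning is not stated, so the renormalization below is an assumption, and the toy values are illustrative.

```python
import numpy as np

# Two pruning steps: keep the k most likely topics of p(g|d) per document,
# and drop fractional expected counts below a cut-off threshold. The
# renormalization of p(g|d) after pruning is an assumption; values are toy.
def prune_topics(p_g_given_d, k=5):
    p = np.asarray(p_g_given_d, dtype=float)
    keep = np.argsort(p)[-k:]               # indices of the k largest topics
    pruned = np.zeros_like(p)
    pruned[keep] = p[keep]
    return pruned / pruned.sum()

def prune_counts(expected_counts, threshold=0.005):
    return {key: c for key, c in expected_counts.items() if c >= threshold}

p = np.random.default_rng(2).dirichlet(np.ones(200))
print(np.count_nonzero(prune_topics(p)))    # 5 topics survive
print(prune_counts({("w", "h", "g"): 0.004, ("w2", "h2", "g2"): 0.2}))
```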
When we train the composite language on 1.3 billion tokens corpus, we have to both aggressively prune the parameters of WORD-PREDICTOR and shrink the order of n-gram and m-SLM in order to store them in a supercomputer having 1000 cores. In particular, for composite 5-gram/4-SLM model, its size is too big to store, thus we use its approximation, a linear combination of 5-gram/2-SLM and 2-gram/4-SLM, and for 5-gram/2-SLM or 2-gram/4-SLM, again we cut off its fractional expected counts that are less than a threshold 0.005, this significantly reduces the number of predictor’s types by 85%. For composite 4SLM/PLSA model, we cut off its fractional expected counts that are less than a threshold 0.002, again this significantly reduces the number of predictor’s types by 85%. For composite 4-SLM/PLSA model or its linear combination with models, we ignore all the tags and use only the words in the 4 head words. In this table, we have three items missing (marked by —), since the size of corresponding model is 207 CORPUS n # OF PPL TIME # OF # OF # OF TYPES TOPICS (HOURS) SERVERS CLIENTS OF ww−1 −n+1g 44M 3 5 196 0.5 40 100 120.1M 3 10 194 1.0 40 100 218.6M 3 20 190 2.7 80 100 537.8M 3 50 189 6.3 80 100 1.123B 3 100 189 11.2 80 100 1.616B 3 200 188 19.3 80 100 2.280B 230M 4 5 146 25.6 280 100 0.681B 1.3B 5 2 111 26.5 400 100 1.790B 5 5 102 75.0 400 100 4.391B Table 2: Perplexity (ppl) results and time consumed of composite n-gram/PLSA language model trained on three corpora when different numbers of most likely topics are kept for each document in PLSA. LANGUAGE MODEL 44M REDUC230M REDUC1.3B REDUCn=3,m=2 TION n=4,m=3 TION n=5,m=4 TION BASELINE n-GRAM (LINEAR) 262 200 138 n-GRAM (KNESER-NEY) 244 6.9% 183 8.5% — — m-SLM 279 -6.5% 190 5.0% 137 0.0% PLSA 825 -214.9% 812 -306.0% 773 -460.0% n-GRAM+m-SLM 247 5.7% 184 8.0% 129 6.5% n-GRAM+PLSA 235 10.3% 179 10.5% 128 7.2% n-GRAM+m-SLM+PLSA 222 15.3% 175 12.5% 123 10.9% n-GRAM/m-SLM 243 7.3% 171 14.5% (125) 9.4% n-GRAM/PLSA 196 25.2% 146 27.0% 102 26.1% m-SLM/PLSA 198 24.4% 140 30.0% (103) 25.4% n-GRAM/PLSA+m-SLM/PLSA 183 30.2% 140 30.0% (93) 32.6% n-GRAM/m-SLM+m-SLM/PLSA 183 30.2% 139 30.5% (94) 31.9% n-GRAM/m-SLM+n-GRAM/PLSA 184 29.8% 137 31.5% (91) 34.1% n-GRAM/m-SLM+n-GRAM/PLSA 180 31.3% 130 35.0% — — +m-SLM/PLSA n-GRAM/m-SLM/PLSA 176 32.8% — — — — Table 3: Perplexity results for various language models on test corpus, where + denotes linear combination, / denotes composite model; n denotes the order of n-gram and m denotes the order of SLM; the topic nodes are pruned from 200 to 5. too big to store in the supercomputer. The composite n-gram/m-SLM/PLSA model gives significant perplexity reductions over baseline n-grams, n = 3, 4, 5 and m-SLMs, m = 2, 3, 4. The majority of gains comes from PLSA component, but when adding SLM component into n-gram/PLSA, there is a further 10% relative perplexity reduction. We have applied our composite 5-gram/2SLM+2-gram/4-SLM+5-gram/PLSA language model that is trained by 1.3 billion word corpus for the task of re-ranking the N-best list in statistical machine translation. We used the same 1000-best list that is used by Zhang et al. (2006). This list was generated on 919 sentences from the MT03 Chinese-English evaluation set by Hiero (Chiang, 2005; Chiang, 2007), a state-of-the-art parsing-based translation model. Its decoder uses a trigram language model trained with modified Kneser-Ney smoothing (Kneser and Ney, 1995) on a 200 million tokens corpus. Each translation has 11 features and language model is one of them. 
We substitute our language model and use MERT (Och, 2003) to optimize the BLEU score (Papineni et al., 2002). We partition the data into ten pieces, 9 pieces are used as training data to optimize the BLEU score (Papineni et al., 2002) by MERT (Och, 208 2003), a remaining single piece is used to re-rank the 1000-best list and obtain the BLEU score. The cross-validation process is then repeated 10 times (the folds), with each of the 10 pieces used exactly once as the validation data. The 10 results from the folds then can be averaged (or otherwise combined) to produce a single estimation for BLEU score. Table 4 shows the BLEU scores through 10-fold cross-validation. The composite 5-gram/2-SLM+2gram/4-SLM+5-gram/PLSA language model gives 1.57% BLEU score improvement over the baseline and 0.79% BLEU score improvement over the 5-gram. This is because there is not much diversity on the 1000-best list, and essentially only 20 ∼30 distinct sentences are there in the 1000-best list. Chiang (2007) studied the performance of machine translation on Hiero, the BLEU score is 33.31% when n-gram is used to re-rank the N-best list, however, the BLEU score becomes significantly higher 37.09% when the n-gram is embedded directly into Hiero’s one pass decoder, this is because there is not much diversity in the N-best list. It is expected that putting the our composite language into a one pass decoder of both phrase-based (Koehn et al., 2003) and parsing-based (Chiang, 2005; Chiang, 2007) MT systems should result in much improved BLEU scores. SYSTEM MODEL MEAN (%) BASELINE 31.75 5-GRAM 32.53 5-GRAM/2-SLM+2-GRAM/4-SLM 32.87 5-GRAM/PLSA 33.01 5-GRAM/2-SLM+2-GRAM/4-SLM 33.32 +5-GRAM/PLSA Table 4: 10-fold cross-validation BLEU score results for the task of re-ranking the N-best list. Besides reporting the BLEU scores, we look at the “readability” of translations similar to the study conducted by Charniak et al. (2003). The translations are sorted into four groups: good/bad syntax crossed with good/bad meaning by human judges, see Table 5. We find that many more sentences are perfect, many more are grammatically correct, and many more are semantically correct. The syntactic language model (Charniak, 2001; Charniak, 2003) only improves translations to have good grammar, but does not improve translations to preserve meaning. The composite 5-gram/2-SLM+2-gram/4-SLM+5gram/PLSA language model improves both significantly. Bear in mind that Charniak et al. (2003) integrated Charniak’s language model with the syntaxbased translation model Yamada and Knight proposed (2001) to rescore a tree-to-string translation forest, whereas we use only our language model for N-best list re-ranking. Also, in the same study in (Charniak, 2003), they found that the outputs produced using the n-grams received higher scores from BLEU; ours did not. The difference between human judgments and BLEU scores indicate that closer agreement may be possible by incorporating syntactic structure and semantic information into the BLEU score evaluation. For example, semantically similar words like “insure” and “ensure” in the example of BLEU paper (Papineni et al., 2002) should be substituted in the formula, and there is a weight to measure the goodness of syntactic structure. This modification will lead to a better metric and such information can be provided by our composite language models. 
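A bare-bones sketch of the 10-fold re-ranking protocol described above is given below; the tune_weights and rerank_and_score callables stand in for MERT tuning and BLEU scoring and are placeholders, not real APIs.

```python
# Placeholder 10-fold protocol: tune on nine pieces, re-rank and score the
# held-out piece, average the per-fold scores. tune_weights and
# rerank_and_score are stand-ins for MERT and BLEU, not real APIs.
def cross_validated_bleu(pieces, tune_weights, rerank_and_score, k=10):
    scores = []
    for i in range(k):
        held_out = pieces[i]
        train = [p for j, p in enumerate(pieces) if j != i]
        weights = tune_weights(train)                      # MERT on 9 pieces
        scores.append(rerank_and_score(held_out, weights)) # BLEU on 1 piece
    return sum(scores) / k

pieces = [f"piece-{i}" for i in range(10)]
print(cross_validated_bleu(pieces, lambda tr: None, lambda p, w: 30.0))
```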
SYSTEM MODEL P S G W BASELINE 95 398 20 406 5-GRAM 122 406 24 367 5-GRAM/2-SLM 151 425 33 310 +2-GRAM/4-SLM +5-GRAM/PLSA Table 5: Results of “readability” evaluation on 919 translated sentences, P: perfect, S: only semantically correct, G: only grammatically correct, W: wrong. 5 Conclusion As far as we know, this is the first work of building a complex large scale distributed language model with a principled approach that is more powerful than ngrams when both trained on a very large corpus with up to a billion tokens. We believe our results still hold on web scale corpora that have trillion tokens, since the composite language model effectively encodes long range dependencies of natural language that n-gram is not viable to consider. Of course, this implies that we have to take a huge amount of resources to perform the computation, nevertheless this becomes feasible, affordable, and cheap in the era of cloud computing. 209 References L. Bahl and J. Baker,F. Jelinek and R. Mercer. 1977. Perplexityła measure of difficulty of speech recognition tasks. 94th Meeting of the Acoustical Society of America, 62:S63, Supplement 1. T. Brants et al.. 2007. Large language models in machine translation. The 2007 Conference on Empirical Methods in Natural Language Processing (EMNLP), 858-867. E. Charniak. 2001. Immediate-head parsing for language models. The 39th Annual Conference on Association of Computational Linguistics (ACL), 124-131. E. Charniak, K. Knight and K. Yamada. 2003. Syntaxbased language models for statistical machine translation. MT Summit IX., Intl. Assoc. for Machine Translation. C. Chelba and F. Jelinek. 1998. Exploiting syntactic structure for language modeling. The 36th Annual Conference on Association of Computational Linguistics (ACL), 225-231. C. Chelba and F. Jelinek. 2000. Structured language modeling. Computer Speech and Language, 14(4):283-332. D. Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. The 43th Annual Conference on Association of Computational Linguistics (ACL), 263-270. D. Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201-228. J. Dean and S. Ghemawat. 2004. MapReduce: Simplified data processing on large clusters. Operating Systems Design and Implementation (OSDI), 137-150. A. Dempster, N. Laird and D. Rubin. 1977. Maximum likelihood estimation from incomplete data via the EM algorithm. Journal of Royal Statistical Society, 39:138. A. Emami, K. Papineni and J. Sorensen. 2007. Largescale distributed language modeling. The 32nd IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), IV:37-40. T. Hofmann. 2001. Unsupervised learning by probabilistic latent semantic analysis. Machine Learning, 42(1):177-196. F. Jelinek and R. Mercer. 1981. Interpolated estimation of Markov source parameters from sparse data. Pattern Recognition in Practice, 381-397. F. Jelinek and C. Chelba. 1999. Putting language into language modeling. Sixth European Conference on Speech Communication and Technology (EUROSPEECH), Keynote Paper 1. F. Jelinek. 2004. Stochastic analysis of structured language modeling. Mathematical Foundations of Speech and Language Processing, 37-72, Springer-Verlag. D. Jurafsky and J. Martin. 2008. Speech and Language Processing, 2nd Edition, Prentice Hall. R. Kneser and H. Ney. 1995. Improved backing-off for m-gram language modeling. The 20th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 181-184. P. Koehn, F. Och and D. Marcu. 
2003. Statistical phrasebased translation. The Human Language Technology Conference (HLT), 48-54. S. Khudanpur and J. Wu. 2000. Maximum entropy techniques for exploiting syntactic, semantic and collocational dependencies in language modeling. Computer Speech and Language, 14(4):355-372. A. Lavie et al. 2006. MINDS Workshops Machine Translation Working Group Final Report. http://wwwnlpir.nist.gov/MINDS/FINAL/MT.web.pdf J. Lin and C. Dyer. 2010. Data-Intensive Text Processing with MapReduce. Morgan and Claypool Publishers. R. Northedge. 2005. OpenNLP software http://www.codeproject.com/KB/recipes/englishpar sing.aspx F. Och. 2003. Minimum error rate training in statistical machine translation. The 41th Annual meeting of the Association for Computational Linguistics (ACL), 311-318. K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. The 40th Annual meeting of the Association for Computational Linguistics (ACL), 311-318. B. Roark. 2001. Probabilistic top-down parsing and language modeling. Computational Linguistics, 27(2):249-276. S. Wang et al. 2005. Exploiting syntactic, semantic and lexical regularities in language modeling via directed Markov random fields. The 22nd International Conference on Machine Learning (ICML), 953-960. S. Wang et al. 2006. Stochastic analysis of lexical and semantic enhanced structural language model. The 8th International Colloquium on Grammatical Inference (ICGI), 97-111. K. Yamada and K. Knight. 2001. A syntax-based statistical translation model. The 39th Annual Conference on Association of Computational Linguistics (ACL), 1067-1074. W. Zangwill. 1969. Nonlinear Programming: A Unified Approach. Prentice-Hall. Y. Zhang, A. Hildebrand and S. Vogel. 2006. Distributed language modeling for N-best list re-ranking. The 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP), 216-223. Y. Zhang, 2008. Structured language models for statistical machine translation. Ph.D. dissertation, CMU. 210
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 211–219, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Goodness: A Method for Measuring Machine Translation Confidence Nguyen Bach∗ Language Technologies Institute Carnegie Mellon University Pittsburgh, PA 15213, USA [email protected] Fei Huang and Yaser Al-Onaizan IBM T.J. Watson Research Center 1101 Kitchawan Rd Yorktown Heights, NY 10567, USA {huangfe, onaizan}@us.ibm.com Abstract State-of-the-art statistical machine translation (MT) systems have made significant progress towards producing user-acceptable translation output. However, there is still no efficient way for MT systems to inform users which words are likely translated correctly and how confident it is about the whole sentence. We propose a novel framework to predict wordlevel and sentence-level MT errors with a large number of novel features. Experimental results show that the MT error prediction accuracy is increased from 69.1 to 72.2 in F-score. The Pearson correlation between the proposed confidence measure and the human-targeted translation edit rate (HTER) is 0.6. Improvements between 0.4 and 0.9 TER reduction are obtained with the n-best list reranking task using the proposed confidence measure. Also, we present a visualization prototype of MT errors at the word and sentence levels with the objective to improve post-editor productivity. 1 Introduction State-of-the-art Machine Translation (MT) systems are making progress to generate more usable translation outputs. In particular, statistical machine translation systems (Koehn et al., 2007; Bach et al., 2007; Shen et al., 2008) have advanced to a state that the translation quality for certain language pairs (e.g. SpanishEnglish, French-English, Iraqi-English) in certain domains (e.g. broadcasting news, force-protection, travel) is acceptable to users. However, a remaining open question is how to predict confidence scores for machine translated words and sentences. An MT system typically returns the best translation candidate from its search space, but still has no reliable way to inform users which word is likely to be correctly translated and how confident it is about the whole sentence. Such information is vital ∗Work done during an internship at IBM T.J. Watson Research Center to realize the utility of machine translation in many areas. For example, a post-editor would like to quickly identify which sentences might be incorrectly translated and in need of correction. Other areas, such as cross-lingual question-answering, information extraction and retrieval, can also benefit from the confidence scores of MT output. Finally, even MT systems can leverage such information to do n-best list reranking, discriminative phrase table and rule filtering, and constraint decoding (Hildebrand and Vogel, 2008). Numerous attempts have been made to tackle the confidence estimation problem. The work of Blatz et al. (2004) is perhaps the best known study of sentence and word level features and their impact on translation error prediction. Along this line of research, improvements can be obtained by incorporating more features as shown in (Quirk, 2004; Sanchis et al., 2007; Raybaud et al., 2009; Specia et al., 2009). Soricut and Echihabi (2010) developed regression models which are used to predict the expected BLEU score of a given translation hypothesis. 
Improvement also can be obtained by using target part-of-speech and null dependency link in a MaxEnt classifier (Xiong et al., 2010). Ueffing and Ney (2007) introduced word posterior probabilities (WPP) features and applied them in the n-best list reranking. From the usability point of view, back-translation is a tool to help users to assess the accuracy level of MT output (Bach et al., 2007). Literally, it translates backward the MT output into the source language to see whether the output of backward translation matches the original source sentence. However, previous studies had a few shortcomings. First, source-side features were not extensively investigated. Blatz et al.(2004) only investigated source ngram frequency statistics and source language model features, while other work mainly focused on target side features. Second, previous work attempted to incorporate more features but faced scalability issues, i.e., to train many features we need many training examples and to train discriminatively we need to search through all possible translations of each training example. Another issue of previous work was that they are all trained with BLEU/TER score computing against 211 the translation references which is different from predicting the human-targeted translation edit rate (HTER) which is crucial in post-editing applications (Snover et al., 2006; Papineni et al., 2002). Finally, the backtranslation approach faces a serious issue when forward and backward translation models are symmetric. In this case, back-translation will not be very informative to indicate forward translation quality. In this paper, we predict error types of each word in the MT output with a confidence score, extend it to the sentence level, then apply it to n-best list reranking task to improve MT quality, and finally design a visualization prototype. We try to answer the following questions: • Can we use a rich feature set such as sourceside information, alignment context, and dependency structures to improve error prediction performance? • Can we predict more translation error types i.e substitution, insertion, deletion and shift? • How good do our prediction methods correlate with human correction? • Do confidence measures help the MT system to select a better translation? • How confidence score can be presented to improve end-user perception? In Section 2, we describe the models and training method for the classifier. We describe novel features including source-side, alignment context, and dependency structures in Section 3. Experimental results and analysis are reported in Section 4. Section 5 and 6 present applications of confidence scores. 2 Confidence Measure Model 2.1 Problem setting Confidence estimation can be viewed as a sequential labelling task in which the word sequence is MT output and word labels can be Bad/Good or Insertion/Substitution/Shift/Good. We first estimate each individual word confidence and extend it to the whole sentence. Arabic text is fed into an ArabicEnglish SMT system and the English translation outputs are corrected by humans in two phases. In phase one, a bilingual speaker corrects the MT system translation output. In phase two, another bilingual speaker does quality checking for the correction done in phase one. If bad corrections were spotted, they correct them again. In this paper we use the final correction data from phase two as the reference thus HTER can be used as an evaluation metric. 
We have 75 thousand sentences with 2.4 million words in total from the human correction process described above. We obtain training labels for each word by performing TER alignment between MT output and the phasetwo human correction. From TER alignments we observed that out of total errors are 48% substitution, 28% deletion, 13% shift, and 11% insertion errors. Based on the alignment, each word produced by the MT system has a label: good, insertion, substitution and shift. Since a deletion error occurs when it only appears in the reference translation, not in the MT output, our model will not predict deletion errors in the MT output. 2.2 Word-level model In our problem, a training instance is a word from MT output, and its label when the MT sentence is aligned with the human correction. Given a training instance x, y is the true label of x; f stands for its feature vector f(x, y); and w is feature weight vector. We define a feature-rich classifier score(x, y) as follow score(x, y) = w.f(x, y) (1) To obtain the label, we choose the class with the highest score as the predicted label for that data instance. To learn optimized weights, we use the Margin Infused Relaxed Algorithm or MIRA (Crammer and Singer, 2003; McDonald et al., 2005) which is an online learner closely related to both the support vector machine and perceptron learning framework. MIRA has been shown to provide state-of-the-art performance for sequential labelling task (Rozenfeld et al., 2006), and is also able to provide an efficient mechanism to train and optimize MT systems with lots of features (Watanabe et al., 2007; Chiang et al., 2009). In general, weights are updated at each step time t according to the following rule: wt+1 = arg minwt+1 ||wt+1 −wt|| s.t. score(x, y) ≥score(x, y′) + L(y, y′) (2) where L(y, y′) is a measure of the loss of using y′ instead of the true label y. In this problem L(y, y′) is 0-1 loss function. More specifically, for each instance xi in the training data at a time t we find the label with the highest score: y′ = arg max y score(xi, y) (3) the weight vector is updated as follow wt+1 = wt + τ(f(xi, y) −f(xi, y′)) (4) τ can be interpreted as a step size; when τ is a large number we want to update our weights aggressively, otherwise weights are updated conservatively. τ = max(0, α) α = min ( C, L(y,y′)−(score(xi,y)−score(xi,y′)) ||f(xi,y)−f(xi,y′)||2 2 ) (5) where C is a positive constant used to cap the maximum possible value of τ. In practice, a cut-off threshold n is the parameter which decides the number of features kept (whose occurrence is at least n) during 212 training. Note that MIRA is sensitive to constant C, the cut-off feature threshold n, and the number of iterations. The final weight is typically normalized by the number of training iterations and the number of training instances. These parameters are tuned on a development set. 2.3 Sentence-level model Given the feature sets and optimized weights, we use the Viterbi algorithm to find the best label sequence. To estimate the confidence of a sentence S we rely on the information from the forward-backward inference. One approach is to directly use the conditional probabilities of the whole sequence. However, this quantity is the confidence measure for the label sequence predicted by the classifier and it does not represent the goodness of the whole MT output. 
Another more appropriated method is to use the marginal probability of Good label which can be defined as follow: p(yi = Good|S) = α(yi|S)β(yi|S) P j α(yj|S)β(yj|S) (6) p(yi = Good|S) is the marginal probability of label Good at position i given the MT output sentence S. α(yi|S) and β(yi|S) are forward and backward values. Our confidence estimation for a sentence S of k words is defined as follow goodness(S) = Pk i=1 p(yi = Good|S) k (7) goodness(S) is ranging between 0 and 1, where 0 is equivalent to an absolutely wrong translation and 1 is a perfect translation. Essentially, goodness(S) is the arithmetic mean which represents the goodness of translation per word in the whole sentence. 3 Confidence Measure Features Features are generated from feature types: abstract templates from which specific features are instantiated. Features sets are often parameterized in various ways. In this section, we describe three new feature sets introduced on top of our baseline classifier which has WPP and target POS features (Ueffing and Ney, 2007; Xiong et al., 2010). 3.1 Source-side features From MT decoder log, we can track which source phrases generate target phrases. Furthermore, one can infer the alignment between source and target words within the phrase pair using simple aligners such as IBM Model-1 alignment. Source phrase features: These features are designed to capture the likelihood that source phrase and target word co-occur with a given error label. The intuition behind them is that if a large percentage of the source phrase and target have often been seen together with the Source POS and Phrases WPP: 1.0 0.67 1.0 1.0 1.0 0.67 … Target POS: PRP VBZ IN DT NN RB VBZ TO DT NN IN DT JJ JJ NNS VBP IN DT DTNN RB VBP IN NN NN DTJJ DTJJ DTNNS DTJJ MT output Source POS Source He adds that this process also refers to the inability of the multinational naval forces wydyf an hdhh alamlyt ayda tshyr aly adm qdrt almtaddt aljnsyt alqwat albhryt (a) Source phrase Source POS and Phrases WPP: 1.0 0.67 1.0 1.0 1.0 0.67 … Target POS: PRP VBZ IN DT NN RB VBZ TO DT NN IN DT JJ JJ NNS 1 if source-POS-sequence = “DT DTNN” f125(target-word = “process”) = 0 otherwise MT output Source POS Source wydyf an hdhh alamlyt ayda tshyr aly adm qdrt almtaddt aljnsyt alqwat albhryt He adds that this process also refers to the inability of the multinational naval forces VBP IN DT DTNN RB VBP IN NN NN DTJJ DTJJ DTNNS DTJJ (b) Source POS Source POS and Phrases WPP: 1.0 0.67 1.0 1.0 1.0 0.67 … Target POS: PRP VBZ IN DT NN RB VBZ TO DT NN IN DT JJ JJ NNS MT output Source POS Source He adds that this process also refers to the inability of the multinational naval forces VBP IN DT DTNN RB VBP IN NN NN DTJJ DTJJ DTNNS DTJJ wydyf an hdhh alamlyt ayda tshyr aly adm qdrt almtaddt aljnsyt alqwat albhryt (c) Source POS and phrase in right context Figure 1: Source-side features. same label, then the produced target word should have this label in the future. Figure 1a illustrates this feature template where the first line is source POS tags, the second line is the Buckwalter romanized source Arabic sequence, and the third line is MT output. The source phrase feature is defined as follow f102(process) = 1 if source-phrase=“hdhh alamlyt” 0 otherwise Source POS: Source phrase features might be susceptible to sparseness issues. We can generalize source phrases based on their POS tags to reduce the number of parameters. 
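Indicator features of this kind can be instantiated from a single template, as in the hypothetical sketch below; the dictionary-based instance format and function names are assumptions for illustration, and the POS-generalized feature f103 discussed next follows the same template.

```python
# Hypothetical instantiation of the binary indicator-feature templates: a
# template field, a context value and a target word define a feature that
# fires on matching instances. The dict-based instance format is assumed for
# illustration; values are taken from the running example of Figure 1.
def indicator_feature(field, context_value, target_word):
    def f(instance):
        return int(instance[field] == context_value
                   and instance["target_word"] == target_word)
    return f

f102 = indicator_feature("source_phrase", "hdhh alamlyt", "process")
f103 = indicator_feature("source_pos", "DT DTNN", "process")

instance = {"source_phrase": "hdhh alamlyt",
            "source_pos": "DT DTNN",
            "target_word": "process"}
print(f102(instance), f103(instance))   # both features fire: 1 1
```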
For example, the example in Figure 1a is generalized as in Figure 1b and we have the following feature: f103(process) = 1 if source-POS=“ DT DTNN ” 0 otherwise Source POS and phrase context features: This feature set allows us to look at the surrounding context of the source phrase. For example, in Figure 1c we have “hdhh alamlyt” generates “process”. We also have other information such as on the right hand side the next two phrases are “ayda” and “tshyr” or the sequence of source target POS on the right hand side is “RB VBP”. An example of this type of feature is f104(process) = 1 if source-POS-context=“ RB VBP ” 0 otherwise 3.2 Alignment context features The IBM Model-1 feature performed relatively well in comparison with the WPP feature as shown by Blatz et al. (2004). In our work, we incorporate not only the 213 Alignment Context WPP: 1.0 0.67 1.0 1.0 1.0 0.67 … PRP VBZ IN DT NN RB VBZ TO DT NN IN DT JJ JJ NNS VBP IN DT DTNN RB VBP IN NN NN DTJJ DTJJ DTNNS DTJJ wydyf an hdhh alamlyt ayda tshyr aly adm qdrt almtaddt aljnsyt alqwat albhryt He adds that this process also refers to the inability of the multinational naval forces MT output Source POS Source Target POS (a) Left source Alignment Context WPP: 1.0 0.67 1.0 1.0 1.0 0.67 … PRP VBZ IN DT NN RB VBZ TO DT NN IN DT JJ JJ NNS VBP IN DT DTNN RB VBP IN NN NN DTJJ DTJJ DTNNS DTJJ wydyf an hdhh alamlyt ayda tshyr aly adm qdrt almtaddt aljnsyt alqwat albhryt He adds that this process also refers to the inability of the multinational naval forces MT output Source POS Source Target POS (b) Right source Alignment Context WPP: 1.0 0.67 1.0 1.0 1.0 0.67 … PRP VBZ IN DT NN RB VBZ TO DT NN IN DT JJ JJ NNS VBP IN DT DTNN RB VBP IN NN NN DTJJ DTJJ DTNNS DTJJ wydyf an hdhh alamlyt ayda tshyr aly adm qdrt almtaddt aljnsyt alqwat albhryt He adds that this process also refers to the inability of the multinational naval forces MT output Source POS Source Target POS (c) Left target Alignment Context WPP: 1.0 0.67 1.0 1.0 1.0 0.67 … PRP VBZ IN DT NN RB VBZ TO DT NN IN DT JJ JJ NNS wydyf an hdhh alamlyt ayda tshyr aly adm qdrt almtaddt aljnsyt alqwat albhryt MT output Source POS Source Target POS He adds that this process also refers to the inability of the multinational naval forces VBP IN DT DTNN RB VBP IN NN NN DTJJ DTJJ DTNNS DTJJ (d) Source POS & right target Figure 2: Alignment context features. IBM Model-1 feature but also the surrounding alignment context. The key intuition is that collocation is a reliable indicator for judging if a target word is generated by a particular source word (Huang, 2009). Moreover, the IBM Model-1 feature was already used in several steps of a translation system such as word alignment, phrase extraction and scoring. Also the impact of this feature alone might fade away when the MT system is scaled up. We obtain word-to-word alignments by applying IBM Model-1 to bilingual phrase pairs that generated the MT output. The IBM Model-1 assumes one target word can only be aligned to one source word. Therefore, given a target word we can always identify which source word it is aligned to. Source alignment context feature: We anchor the target word and derive context features surrounding its source word. For example, in Figure 2a and 2b we have an alignment between “tshyr” and “refers” The source contexts “tshyr” with a window of one word are “ayda” to the left and “aly” to the right. 
Target alignment context feature: Similar to the source alignment context features, we anchor the source word and derive context features surrounding the aligned target word. Figure 2c shows a left target context feature of the word "refers". Our features are derived from a window of four words.

Combining alignment context with POS tags: Instead of using lexical context, we also have features that look at the source and target POS alignment context. For instance, the feature in Figure 2d is

f_{141}(\text{refers}) = \begin{cases} 1 & \text{if source-POS} = \text{"VBP" and target-context} = \text{"to"} \\ 0 & \text{otherwise} \end{cases}

3.3 Source and target dependency structure features

The contextual and source information in the previous sections only takes into account the surface structure of the source and target sentences. Meanwhile, dependency structures have been extensively used in various translation systems (Shen et al., 2008; Ma et al., 2008; Bach et al., 2009). The adoption of dependency structures might enable the classifier to utilize deep structures to predict translation errors. Source and target structures are unlikely to be isomorphic, as shown in Figure 3a. However, we expect that some high-level linguistic structures are likely to transfer across certain language pairs. For example, prepositional phrases (PP) in Arabic and English are similar in the sense that PPs generally appear at the end of the sentence (after all the verbal arguments) and, to a lesser extent, at its beginning (Habash and Hu, 2009). We use the Stanford parser to obtain dependency trees and POS tags (Marneffe et al., 2006).

[Figure 3: Dependency structure features: (a) source-target dependency trees, (b) child-father agreement, (c) children agreement (here with 2 aligned children). Each panel shows the same aligned source sentence and MT output as Figures 1 and 2, together with their dependency structures.]

Child-father agreement: The motivation is to take advantage of long-distance dependency relations between source and target words. Given an alignment between a source word s_i and a target word t_j, a child-father agreement exists when s_k is aligned to t_l, where s_k and t_l are the fathers of s_i and t_j in the source and target dependency trees, respectively. Figure 3b illustrates that "tshyr" and "refers" have a child-father agreement. To verify our intuition, we analysed 243K words of manually aligned Arabic-English bitext. We observed that 29.2% of words have child-father agreements. In terms of structure types, we found that 27.2% of copula verb structures and 30.2% of prepositional structures (including object of a preposition, prepositional modifier, and prepositional complement) have child-father agreements.
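The child-father agreement check itself is simple; the sketch below assumes hypothetical head maps standing in for the Stanford parser output and an alignment given as a set of index pairs (these data structures are our own, not the paper's):

    def child_father_agreement(i, j, alignment, src_head, tgt_head):
        """Given an aligned pair (source word i, target word j), return True if
        their fathers in the two dependency trees are also aligned to each other.
        src_head / tgt_head map a word index to its father index (None for root);
        alignment is a set of (source_index, target_index) pairs."""
        fi, fj = src_head.get(i), tgt_head.get(j)
        return fi is not None and fj is not None and (fi, fj) in alignment

    # In the spirit of Figure 3b: "tshyr" is aligned to "refers", and their
    # fathers are aligned too, so the child-father agreement feature fires.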
Children agreement: In the child-father agreement feature we look up the dependency tree; with a similar motivation, we can also look down the dependency tree. Essentially, given an alignment between a source word s_i and a target word t_j, how many children of s_i and t_j are aligned together? For example, "tshyr" and "refers" have 2 aligned children, namely "ayda-also" and "aly-to", as shown in Figure 3c.

4 Experiments

4.1 Arabic-English translation system

The SMT engine is a phrase-based system similar to the description in (Tillmann, 2006), where various features are combined within a log-linear framework. These features include the source-to-target phrase translation score, source-to-target and target-to-source word-to-word translation scores, language model score, distortion model scores, and word count. The training data for these features consists of 7M Arabic-English sentence pairs, mostly newswire and UN corpora released by LDC. The parallel sentences have word alignments automatically generated with HMM and MaxEnt word aligners (Ge, 2004; Ittycheriah and Roukos, 2005). Bilingual phrase translations are extracted from these word-aligned parallel corpora. The language model is a 5-gram model trained on roughly 3.5 billion English words.

Our training data contains 72k sentences of Arabic-English machine translation with human corrections, comprising 2.2M words in the newswire and weblog domains. We have a development set of 2,707 sentences, 80K words (dev), and an unseen test set of 2,707 sentences, 79K words (test). Feature selection and parameter tuning were done on the development set, in which we experimented with values of C, n, and the number of iterations in the ranges [0.5:10], [1:5], and [50:200], respectively. The final MIRA classifier was trained using the pocket crf toolkit (http://pocket-crf-1.sourceforge.net/) with 100 iterations, hyper-parameter C set to 5, and cut-off feature threshold n set to 1.

We use precision (P), recall (R) and F-score (F) to evaluate the classifier performance, computed as follows:

P = \frac{\text{number of correctly tagged labels}}{\text{number of tagged labels}}, \quad R = \frac{\text{number of correctly tagged labels}}{\text{number of reference labels}}, \quad F = \frac{2 \cdot P \cdot R}{P + R}   (8)

4.2 Contribution of feature sets

We designed our experiments to show the impact of each feature set separately as well as their cumulative impact. We trained two types of classifiers to predict the error type of each word in the MT output, namely Good/Bad with a binary classifier and Good/Insertion/Substitution/Shift with a 4-class classifier. Each classifier is trained with different feature sets as follows:

• WPP: we reimplemented the WPP calculation based on n-best lists as described in (Ueffing and Ney, 2007).
• WPP + target POS: only WPP and target POS features are used. This is a feature set similar to that used by Xiong et al. (2010).
• Our features: the classifier has source-side, alignment context, and dependency structure features; WPP and target POS features are excluded.
• WPP + our features: adding our features on top of WPP.
• WPP + target POS + our features: using all features.

Table 1: Contribution of different feature sets, measured in F-score.

                               binary           4-class
                               dev     test     dev     test
  WPP                          69.3    68.7     64.4    63.7
    + source side              72.1    71.6     66.2    65.7
    + alignment context        71.4    70.9     65.7    65.3
    + dependency structures    69.9    69.5     64.9    64.3
  WPP + target POS             69.6    69.1     64.4    63.9
    + source side              72.3    71.8     66.3    65.8
    + alignment context        71.9    71.2     66.0    65.6
    + dependency structures    70.4    70.0     65.1    64.4
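The per-label precision, recall, and F-score of Equation 8, as reported for individual classes later in Table 2, can be computed with a small helper given parallel lists of predicted and reference word labels (this function is our own sketch; the paper does not specify its implementation):

    def prf(predicted, reference, label="Good"):
        """Precision, recall, and F-score of Equation 8 for one label."""
        tagged = sum(1 for p in predicted if p == label)
        ref = sum(1 for r in reference if r == label)
        correct = sum(1 for p, r in zip(predicted, reference) if p == r == label)
        precision = correct / tagged if tagged else 0.0
        recall = correct / ref if ref else 0.0
        f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        return precision, recall, f

    # prf(["Good", "Bad", "Good"], ["Good", "Good", "Good"], label="Good")
    # -> (1.0, 0.666..., 0.8)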
To evaluate the effectiveness of each feature set, we apply them to two different baseline systems, using WPP and WPP+target POS, respectively. We augment each baseline with our feature sets separately. Table 1 shows the contribution in F-score of our proposed feature sets. Improvements are consistently obtained when combining the proposed features with the baseline features. Experimental results also indicate that source-side information, alignment context, and dependency structures offer distinct and effective levers for improving the classifier performance. Among the three proposed feature sets, we observe that the source-side information contributes the largest gain, followed by the alignment context and dependency structure features.

4.3 Performance of classifiers

We trained several classifiers with our proposed feature sets as well as baseline features. We compare their performance, including a naive All-Good baseline classifier, in which all words in the MT output are labelled as good translations. Figure 4 shows the performance of the different classifiers trained with different feature sets on the development and unseen test sets.

[Figure 4: Performance of binary (a) and 4-class (b) classifiers trained with different feature sets (All-Good, WPP, WPP+target POS, Our features, WPP+Our features, WPP+target POS+Our features) on the development and unseen test sets, measured in F-score.]

On the unseen test set, our proposed features outperform the WPP and target POS features by 2.8 and 2.4 absolute F-score points, respectively. The improvements from our features are consistent on the development and unseen sets, as well as for the binary and 4-class classifiers. We reach the best performance by combining our proposed features with the WPP and target POS features. Experiments indicate that the gaps in F-score between our best system and the naive All-Good system are 12.9 and 6.8 in the binary and 4-class cases, respectively. Table 2 presents the precision, recall, and F-score of each individual class for the best binary and 4-class classifiers. It shows that the Good label is predicted better than the other labels; meanwhile, Substitution is generally easier to predict than Insertion and Shift.

Table 2: Detailed performance in precision, recall and F-score of binary and 4-class classifiers with WPP+target POS+Our features on the unseen test set.

            Label          P      R      F
  Binary    Good           74.7   80.6   77.5
            Bad            68.0   60.1   63.8
  4-class   Good           70.8   87.0   78.1
            Insertion      37.5   16.9   23.3
            Substitution   57.8   44.9   50.5
            Shift          35.2   14.1   20.1

4.4 Correlation between Goodness and HTER

We estimate the sentence-level confidence score based on Equation 7. Figure 5 illustrates the correlation between our proposed goodness sentence-level confidence score and the human-targeted translation edit rate (HTER). The Pearson correlation between goodness and HTER is 0.6, while the correlation of WPP and HTER is 0.52. This experiment shows that goodness correlates well with HTER. In Figure 5, the black bar is the linear regression line; blue and red bars are the thresholds used to visualize good and bad sentences, respectively.
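The correlation analysis of this section can be reproduced with a small helper, assuming per-sentence lists of goodness and HTER scores are available (this is our own sketch, not the paper's code):

    def pearson(xs, ys):
        """Pearson correlation coefficient between two equal-length score lists,
        e.g. sentence-level goodness scores and HTER scores."""
        n = len(xs)
        mx = sum(xs) / n
        my = sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs) ** 0.5
        vy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (vx * vy)

    # Example: pearson(goodness_scores, hter_scores) compares sentence-level
    # goodness against HTER over the same set of sentences.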
We also experimented with computing goodness in Equation 7 using the geometric mean and the harmonic mean; their Pearson correlation values are 0.5 and 0.35, respectively.

[Figure 5: Correlation between goodness and HTER. Each sentence is plotted by its goodness score against its HTER, together with a linear regression fit and the thresholds marking good and bad sentences.]

5 Improving MT quality with N-best list reranking

The experiments reported in Section 4 indicate that the proposed confidence measure has a high correlation with HTER. However, it is not very clear whether the core MT system can benefit from the confidence measure by providing better translations. To investigate this question we present experimental results for the n-best list reranking task. The MT system generates the top n hypotheses, and for each hypothesis we compute a sentence-level confidence score. The best candidate is the hypothesis with the highest confidence score. Table 3 shows the performance of reranking systems using goodness scores from our best classifier with various n-best sizes. We obtained a 0.7 TER reduction and a 0.4 BLEU point improvement on the development set with a 5-best list. On the unseen test set, we obtained a 0.6 TER reduction and a 0.2 BLEU point improvement. Although the BLEU improvement is not pronounced, the TER reductions are consistent on both the development and unseen sets.

Table 3: Reranking performance with the goodness score.

               Dev              Test
               TER    BLEU      TER    BLEU
  Baseline     49.9   31.0      50.2   30.6
  2-best       49.5   31.4      49.9   30.8
  5-best       49.2   31.4      49.6   30.8
  10-best      49.2   31.2      49.5   30.8
  20-best      49.1   31.0      49.3   30.7
  30-best      49.0   31.0      49.3   30.6
  40-best      49.0   31.0      49.4   30.5
  50-best      49.1   30.9      49.4   30.5
  100-best     49.0   30.9      49.3   30.5

Figure 6 shows the improvement of reranking with the goodness score. The figure also illustrates the upper- and lower-bound performance in the TER metric, where the lower bound is our baseline system and the upper bound is the best hypothesis in a given n-best list. Oracle scores for each n-best list are computed by choosing the translation candidate with the lowest TER score.

[Figure 6: A comparison between reranking and oracle scores for different n-best sizes in the TER metric on the development set.]

6 Visualizing translation errors

Besides the application of confidence scores to the n-best list reranking task, we propose a method to visualize translation errors using confidence scores. Our purpose is to visualize word- and sentence-level confidence scores with the following objectives: 1) make it easy to spot translation errors; 2) be simple and intuitive; and 3) help post-editing productivity. We define three categories of translation quality (good/bad/decent) on both the word and sentence levels. On the word level, the marginal probability of the Good label is used to visualize translation errors as follows:

L_i = \begin{cases} \text{good} & \text{if } p(y_i = \mathrm{Good} \mid S) \ge 0.8 \\ \text{bad} & \text{if } p(y_i = \mathrm{Good} \mid S) \le 0.45 \\ \text{decent} & \text{otherwise} \end{cases}

On the sentence level, the goodness score is used as follows:

L_S = \begin{cases} \text{good} & \text{if } \mathrm{goodness}(S) \ge 0.7 \\ \text{bad} & \text{if } \mathrm{goodness}(S) \le 0.5 \\ \text{decent} & \text{otherwise} \end{cases}

Different font sizes and colors are used to catch the attention of post-editors whenever translation errors are likely to appear, as shown in Table 4.

Table 4: Choices of layout.

              Choices   Intention
  Font size   big       bad
              small     good
              medium    decent
  Colors      red       bad
              black     good
              orange    decent

Colors are applied on the word level, while font size is applied on both the word and sentence levels.
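A minimal sketch of this categorization and of the layout mapping in Table 4 follows; the thresholds are taken directly from the definitions above, while the function names and dictionary layout are ours:

    def word_category(p_good):
        """Word-level label L_i from the marginal probability of Good."""
        if p_good >= 0.8:
            return "good"
        if p_good <= 0.45:
            return "bad"
        return "decent"

    def sentence_category(goodness):
        """Sentence-level label L_S from the goodness score."""
        if goodness >= 0.7:
            return "good"
        if goodness <= 0.5:
            return "bad"
        return "decent"

    # Layout choices of Table 4: colors on the word level,
    # font size on both the word and sentence levels.
    FONT_SIZE = {"bad": "big", "good": "small", "decent": "medium"}
    COLOR = {"bad": "red", "good": "black", "decent": "orange"}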
The idea of using font size and colour to visualize translation confidence is similar to the idea of using a tag/word cloud to describe the content of websites (http://en.wikipedia.org/wiki/Tag_cloud). The reason we use a big font size and red color is to attract post-editors' attention and help them find translation errors quickly. Figure 7 shows an example of visualizing confidence scores by font size and colours. It shows that "not to deprive yourself", displayed in a big font and red color, is likely to be a bad translation. Meanwhile, other words, such as "you", "different", "from", and "assimilation", displayed in a small font and black color, are likely to be good translations. Words in a medium font and orange color are decent translations.
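The rendering itself can be sketched as a small HTML generator; the markup and pixel sizes below are illustrative choices of ours, and the thresholds repeat those of Section 6 so that the fragment stands alone:

    import html

    # Table 4 layout: color and an illustrative pixel size per category.
    STYLE = {"bad": ("red", 22), "decent": ("orange", 16), "good": ("black", 12)}

    def categorize(p_good):
        return "good" if p_good >= 0.8 else ("bad" if p_good <= 0.45 else "decent")

    def render_sentence(words, word_probs):
        """Render one MT sentence as an HTML fragment, styling each word by
        the color and font size associated with its predicted category."""
        spans = []
        for w, p in zip(words, word_probs):
            color, size = STYLE[categorize(p)]
            spans.append('<span style="color:%s;font-size:%dpx">%s</span>'
                         % (color, size, html.escape(w)))
        return "<p>" + " ".join(spans) + "</p>"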
[Figure 7: MT errors visualization based on confidence scores. Two examples are shown, each with the Arabic source, the MT output rendered with the proposed font-size and color scheme, and the human correction. (a) MT output: "you totally different from zaid amr , and not to deprive yourself in a basement of imitation and assimilation ."; human correction: "you are quite different from zaid and amr , so do not cram yourself in the tunnel of simulation , imitation and assimilation ." (b) MT output: "the poll also showed that most of the participants in the developing countries are ready to introduce qualitative changes in the pattern of their lives for the sake of reducing the effects of climate change."; human correction: "the survey also showed that most of the participants in developing countries are ready to introduce changes to the quality of their lifestyle in order to reduce the effects of climate change ."]

7 Conclusions

In this paper we proposed a method to predict confidence scores for machine-translated words and sentences based on a feature-rich classifier using linguistic and context features. Our major contributions are three novel feature sets: source-side information, alignment context, and dependency structures. Experimental results show that by combining the source-side information, alignment context, and dependency structure features with word posterior probability and target POS context features (Ueffing and Ney, 2007; Xiong et al., 2010), the MT error prediction accuracy is increased from 69.1 to 72.2 in F-score. Our framework is able to predict error types, namely insertion, substitution and shift. The Pearson correlation with human judgement increases from 0.52 to 0.6. Furthermore, we show that the proposed confidence scores can help the MT system to select better translations, and as a result improvements between 0.4 and 0.9 TER reduction are obtained. Finally, we demonstrate a prototype to visualize translation errors.

This work can be expanded in several directions. First, we plan to apply confidence estimation to perform a second-pass constrained decoding. After the first-pass decoding, our confidence estimation model can label which words are likely to be correctly translated. The second-pass decoding then utilizes the confidence information to constrain the search space and can hopefully find a better hypothesis than in the first pass. This idea is very similar to the multi-pass decoding strategy employed by speech recognition engines. Moreover, we also intend to perform a user study on our visualization prototype to see whether it increases the productivity of post-editors.

Acknowledgements

We would like to thank Christoph Tillmann and the IBM machine translation team for their support. We would also like to thank the anonymous reviewers, Qin Gao, Joy Zhang, and Stephan Vogel for their helpful comments.

References

Nguyen Bach, Matthias Eck, Paisarn Charoenpornsawat, Thilo Köhler, Sebastian Stüker, ThuyLinh Nguyen, Roger Hsiao, Alex Waibel, Stephan Vogel, Tanja Schultz, and Alan Black. 2007. The CMU TransTac 2007 Eyes-free and Hands-free Two-way Speech-to-Speech Translation System. In Proceedings of IWSLT'07, Trento, Italy.

Nguyen Bach, Qin Gao, and Stephan Vogel. 2009. Source-side dependency tree reordering models with subtree movements and constraints. In Proceedings of MT Summit XII, Ottawa, Canada, August. International Association for Machine Translation.

John Blatz, Erin Fitzgerald, George Foster, Simona Gandrabur, Cyril Goutte, Alex Kulesza, Alberto Sanchis, and Nicola Ueffing. 2004. Confidence estimation for machine translation. In The JHU Workshop Final Report, Baltimore, Maryland, USA, April.

David Chiang, Kevin Knight, and Wei Wang. 2009. 11,001 new features for statistical machine translation. In Proceedings of HLT-ACL, pages 218–226, Boulder, Colorado, June. Association for Computational Linguistics.
Koby Crammer and Yoram Singer. 2003. Ultraconservative online algorithms for multiclass problems. Journal of Machine Learning Research, 3:951–991.

Niyu Ge. 2004. Max-posterior HMM alignment for machine translation. In Presentation given at the DARPA/TIDES NIST MT Evaluation workshop.

Nizar Habash and Jun Hu. 2009. Improving Arabic-Chinese statistical machine translation using English as pivot language. In Proceedings of the 4th Workshop on Statistical Machine Translation, pages 173–181, Morristown, NJ, USA. Association for Computational Linguistics.

Almut Silja Hildebrand and Stephan Vogel. 2008. Combination of machine translation systems via hypothesis selection from combined n-best lists. In Proceedings of the 8th Conference of the AMTA, pages 254–261, Waikiki, Hawaii, October.

Fei Huang. 2009. Confidence measure for word alignment. In Proceedings of ACL-IJCNLP'09, pages 932–940, Morristown, NJ, USA. Association for Computational Linguistics.

Abraham Ittycheriah and Salim Roukos. 2005. A maximum entropy word aligner for Arabic-English machine translation. In Proceedings of HLT-EMNLP'05, pages 89–96, Morristown, NJ, USA. Association for Computational Linguistics.

Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of ACL'07, pages 177–180, Prague, Czech Republic, June.

Yanjun Ma, Sylwia Ozdowska, Yanli Sun, and Andy Way. 2008. Improving word alignment using syntactic dependencies. In Proceedings of ACL-08: HLT SSST-2, pages 69–77, Columbus, OH.

Marie-Catherine Marneffe, Bill MacCartney, and Christopher Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of LREC'06, Genoa, Italy.

Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Flexible text segmentation with structured multilabel classification. In Proceedings of the Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 987–994, Vancouver, British Columbia, Canada, October. Association for Computational Linguistics.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of ACL'02, pages 311–318, Philadelphia, PA, July.

Chris Quirk. 2004. Training a sentence-level machine translation confidence measure. In Proceedings of the 4th LREC.

Sylvain Raybaud, Caroline Lavecchia, David Langlois, and Kamel Smaili. 2009. Error detection for statistical machine translation using linguistic features. In Proceedings of the 13th EAMT, Barcelona, Spain, May.

Binyamin Rozenfeld, Ronen Feldman, and Moshe Fresko. 2006. A systematic cross-comparison of sequence classifiers. In Proceedings of the SDM, pages 563–567, Bethesda, MD, USA, April.

Alberto Sanchis, Alfons Juan, and Enrique Vidal. 2007. Estimation of confidence measures for machine translation. In Proceedings of MT Summit XI, Copenhagen, Denmark.

Libin Shen, Jinxi Xu, and Ralph Weischedel. 2008. A new string-to-dependency machine translation algorithm with a target dependency language model. In Proceedings of ACL-08: HLT, pages 577–585, Columbus, Ohio, June. Association for Computational Linguistics.
Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of AMTA'06, pages 223–231, August.

Radu Soricut and Abdessamad Echihabi. 2010. TrustRank: Inducing trust in automatic translations via ranking. In Proceedings of the 48th ACL, pages 612–621, Uppsala, Sweden, July. Association for Computational Linguistics.

Lucia Specia, Zhuoran Wang, Marco Turchi, John Shawe-Taylor, and Craig Saunders. 2009. Improving the confidence of machine translation quality estimates. In Proceedings of MT Summit XII, Ottawa, Canada.

Christoph Tillmann. 2006. Efficient dynamic programming search algorithms for phrase-based SMT. In Proceedings of the Workshop on Computationally Hard Problems and Joint Inference in Speech and Language Processing, pages 9–16, Morristown, NJ, USA. Association for Computational Linguistics.

Nicola Ueffing and Hermann Ney. 2007. Word-level confidence estimation for machine translation. Computational Linguistics, 33(1):9–40.

Taro Watanabe, Jun Suzuki, Hajime Tsukada, and Hideki Isozaki. 2007. Online large-margin training for statistical machine translation. In Proceedings of EMNLP-CoNLL, pages 764–773, Prague, Czech Republic, June. Association for Computational Linguistics.

Deyi Xiong, Min Zhang, and Haizhou Li. 2010. Error detection for statistical machine translation using linguistic features. In Proceedings of the 48th ACL, pages 604–611, Uppsala, Sweden, July. Association for Computational Linguistics.
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 220–229, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics MEANT: An inexpensive, high-accuracy, semi-automatic metric for evaluating translation utility via semantic frames Chi-kiu Lo and Dekai Wu HKUST Human Language Technology Center Department of Computer Science and Engineering Hong Kong University of Science and Technology {jackielo,dekai}@cs.ust.hk Abstract We introduce a novel semi-automated metric, MEANT, that assesses translation utility by matching semantic role fillers, producing scores that correlate with human judgment as well as HTER but at much lower labor cost. As machine translation systems improve in lexical choice and fluency, the shortcomings of widespread n-gram based, fluency-oriented MT evaluation metrics such as BLEU, which fail to properly evaluate adequacy, become more apparent. But more accurate, nonautomatic adequacy-oriented MT evaluation metrics like HTER are highly labor-intensive, which bottlenecks the evaluation cycle. We first show that when using untrained monolingual readers to annotate semantic roles in MT output, the non-automatic version of the metric HMEANT achieves a 0.43 correlation coefficient with human adequacy judgments at the sentence level, far superior to BLEU at only 0.20, and equal to the far more expensive HTER. We then replace the human semantic role annotators with automatic shallow semantic parsing to further automate the evaluation metric, and show that even the semiautomated evaluation metric achieves a 0.34 correlation coefficient with human adequacy judgment, which is still about 80% as closely correlated as HTER despite an even lower labor cost for the evaluation procedure. The results show that our proposed metric is significantly better correlated with human judgment on adequacy than current widespread automatic evaluation metrics, while being much more cost effective than HTER. 1 Introduction In this paper we show that evaluating machine translation by assessing the translation accuracy of each argument in the semantic role framework correlates with human judgment on translation adequacy as well as HTER, at a significantly lower labor cost. The correlation of this new metric, MEANT, with human judgment is far superior to BLEU and other automatic n-gram based evaluation metrics. We argue that BLEU (Papineni et al., 2002) and other automatic n-gram based MT evaluation metrics do not adequately capture the similarity in meaning between the machine translation and the reference translation—which, ultimately, is essential for MT output to be useful. Ngram based metrics assume that “good” translations tend to share the same lexical choices as the reference translations. While BLEU score performs well in capturing the translation fluency, Callison-Burch et al. (2006) and Koehn and Monz (2006) report cases where BLEU strongly disagree with human judgment on translation quality. The underlying reason is that lexical similarity does not adequately reflect the similarity in meaning. As MT systems improve, the shortcomings of the n-gram based evaluation metrics are becoming more apparent. State-of-the-art MT systems are often able to output fluent translations that are nearly grammatical and contain roughly the correct words, but still fail to express meaning that is close to the input. 
At the same time, although HTER (Snover et al., 2006) is more adequacy-oriented, it is only employed in very large scale MT system evaluation instead of day-to-day research activities. The underlying reason is that it requires rigorously trained human experts to make difficult combinatorial decisions on the minimal number of edits so as to make the MT output convey the same meaning as the reference translation—a highly labor-intensive, costly process that bottlenecks the evaluation cycle. Instead, with MEANT, we adopt at the outset the principle that a good translation is one that is useful, in the sense that human readers may successfully understand at least the basic event structure—“who did what to whom, when, where and why” (Pradhan et al., 2004)—representing the central meaning of the source utterances. It is true that limited tasks might exist for which inadequate translations are still useful. But for meaningful tasks, generally speaking, for a translation to be useful, at least the basic event structure must be correctly understood. Therefore, our objective is to evaluate translation utility: from a user’s point of view, how well is 220 the most essential semantic information being captured by machine translation systems? In this paper, we detail the methodology that underlies MEANT, which extends and implements preliminary directions proposed in (Lo and Wu, 2010a) and (Lo and Wu, 2010b). We present the results of evaluating translation utility by measuring the accuracy within a semantic role labeling (SRL) framework. We show empirically that our proposed SRL based evaluation metric, which uses untrained monolingual humans to annotate semantic frames in MT output, correlates with human adequacy judgments as well as HTER, and far better than BLEU and other commonly used metrics. Finally, we show that replacing the human semantic role labelers with an automatic shallow semantic parser in our proposed metric yields an approximation that is about 80% as closely correlated with human judgment as HTER, at an even lower cost—and is still far better correlated than n-gram based evaluation metrics. 2 Related work Lexical similarity based metrics BLEU (Papineni et al., 2002) is the most widely used MT evaluation metric despite the fact that a number of large scale metaevaluations (Callison-Burch et al., 2006; Koehn and Monz, 2006) report cases where it strongly disagree with human judgment on translation accuracy. Other lexical similarity based automatic MT evaluation metrics, like NIST (Doddington, 2002), METEOR (Banerjee and Lavie, 2005), PER (Tillmann et al., 1997), CDER (Leusch et al., 2006) and WER (Nießen et al., 2000), also perform well in capturing translation fluency, but share the same problem that although evaluation with these metrics can be done very quickly at low cost, their underlying assumption—that a “good” translation is one that shares the same lexical choices as the reference translation—is not justified semantically. Lexical similarity does not adequately reflect similarity in meaning. State-of-the-art MT systems are often able to output translations containing roughly the correct words, yet expressing meaning that is not close to that of the input. We argue that a translation metric that reflects meaning similarity is better based on similarity in semantic structure, rather than simply flat lexical similarity. HTER (non-automatic) Despite the fact that Humantargeted Translation Edit Rate (HTER) as proposed by Snover et al. 
(2006) shows a high correlation with human judgment on translation adequacy, it is not widely used in day-to-day machine translation evaluation because of its high labor cost. HTER not only requires human experts to understand the meaning expressed in both the reference translation and the machine translation, but also requires them to propose the minimum number of edits to the MT output such that the post-edited MT output conveys the same meaning as the reference translation. Requiring such heavy manual decision making greatly increases the cost of evaluation, bottlenecking the evaluation cycle. To reduce the cost of evaluation, we aim to reduce any human decisions in the evaluation cycle to be as simple as possible, such that even untrained humans can quickly complete the evaluation. The human decisions should also be defined in a way that can be closely approximated by automatic methods, so that similar objective functions might potentially be used for tuning in MT system development cycles. Task based metrics (non-automatic) Voss and Tate (2006) proposed a task-based approach to MT evaluation that is in some ways similar in spirit to ours, but rather than evaluating how well people understand the meaning as a whole conveyed by a sentence translation, they measured the recall with which humans can extract one of the who, when, or where elements from MT output—and without attaching them to any predicate or frame. A large number of human subjects were instructed to extract only one particular type of wh-item from each sentence. They evaluated only whether the role fillers were correctly identified, without checking whether the roles were appropriately attached to the correct predicate. Also, the actor, experiencer, and patient were all conflated into the undistinguished who role, while other crucial elements, like the action, purpose, manner, were ignored. Instead, we argue, evaluating meaning similarity should be done by evaluating the semantic structure as a whole: (a) all core semantic roles should be checked, and (b) not only should we evaluate the presence of semantic role fillers in isolation, but also their relations to the frames’ predicates. Syntax based metrics Unlike Voss and Tate, Liu and Gildea (2005) proposed a structural approach, but it was based on syntactic rather than semantic structure, and focused on checking the correctness of the role structure without checking the correctness of the role fillers. Their subtree metric (STM) and headword chain metric (HWC) address the failure of BLEU to evaluate translation grammaticality; however, the problem remains that a grammatical translation can achieve a high syntax-based score even if contains meaning errors arising from confusion of semantic roles. STM was the first proposed metric to incorporate syntactic features in MT evaluation, and STM underlies most other recently proposed syntactic MT evaluation metrics, for example the evaluation metric based on lexicalfunctional grammar of Owczarzak et al. (2008). STM is a precision-based metric that measures what fraction of subtree structures are shared between the parse trees of 221 machine translations and reference translations (averaging over subtrees up to some depth threshold). Unlike Voss and Tate, however, STM does not check whether the role fillers are correctly translated. HWC is similar, but is based on dependency trees containing lexical as well as syntactic information. 
HWC measures what fraction of headword chains (a sequence of words corresponding to a path in the dependency tree) also appear in the reference dependency tree. This can be seen as a similarity measure on n-grams of dependency chains. Note that the HWC’s notion of lexical similarity still requires exact word match. Although STM-like syntax-based metrics are an improvement over flat lexical similarity metrics like BLEU, they are still more fluency-oriented than adequacyoriented. Similarity of syntactic rather than semantic structure still inadequately reflects meaning preservation. Moreover, properly measuring translation utility requires verifying whether role fillers have been correctly translated—verifying only the abstract structures fails to penalize when role fillers are confused. Semantic roles as features in aggregate metrics Gim´enez and M`arquez (2007, 2008) introduced ULC, an automatic MT evaluation metric that aggregates many types of features, including several shallow semantic similarity features: semantic role overlapping, semantic role matching, and semantic structure overlapping. Unlike Liu and Gildea (2007) who use discriminative training to tune the weight on each feature, ULC uses uniform weights. Although the metric shows an improved correlation with human judgment of translation quality (Callison-Burch et al., 2007; Gim´enez and M`arquez, 2007; Callison-Burch et al., 2008; Gim´enez and M`arquez, 2008), it is not commonly used in large-scale MT evaluation campaigns, perhaps due to its high time cost and/or the difficulty of interpreting its score because of its highly complex combination of many heterogenous types of features. Specifically, note that the feature based representations of semantic roles used in these aggregate metrics do not actually capture the structural predicate-argument relations. “Semantic structure overlapping” can be seen as the shallow semantic version of STM: it only measures the similarity of the tree structure of the semantic roles, without considering the lexical realization. “Semantic role overlapping” calculates the degree of lexical overlap between semantic roles of the same type in the machine translation and its reference translation, using simple bagof-words counting; this is then aggregated into an average over all semantic role types. “Semantic role matching” is just like “semantic role overlapping”, except that bagof-words degree of similarity is replaced (rather harshly) by a boolean indicating whether the role fillers are an exact string match. It is important to note that “semantic role overlapping” and “semantic role matching” both use flat feature based representations which do not capture the structural relations in semantic frames, i.e., the predicateargument relations. Like system combination approaches, ULC is a vastly more complex aggregate metric compared to widely used metrics like BLEU or STM. We believe it is important to retain a focus on developing simpler metrics which not only correlate well with human adequacy judgments, but nevertheless still directly provide representational transparency via simple, clear, and transparent scoring schemes that are (a) easily human readable to support error analysis, and (b) potentially directly usable for automatic credit/blame assignment in tuning tree-structured SMT systems. 
We also believe that to provide a foundation for better design of efficient automated metrics, making use of humans for annotating semantic roles and judging the role translation accuracy in MT output is an essential step that should not be bypassed, in order to adequately understand the upper bounds of such techniques. We agree with Przybocki et al. (2010), who observe in the NIST MetricsMaTr 2008 report that “human [adequacy] assessments only pertain to the translations evaluated, and are of no use even to updated translations from the same systems”. Instead, we aim for MT evaluation metrics that provide fine-grained scores in a way that also directly reflects interpretable insights on the strengths and weaknesses of MT systems rather than simply replicating human assessments. 3 MEANT: SRL for MT evaluation A good translation is one from which human readers may successfully understand at least the basic event structure—“who did what to whom, when, where and why” (Pradhan et al., 2004)—which represents the most essential meaning of the source utterances. MEANT measures this as follows. First, semantic role labeling is performed (either manually or automatically) on both the reference translation and the machine translation. The semantic frame structures thus obtained for the MT output are compared to those in the reference translations, frame by frame, argument by argument. The frame translation accuracy is a weighted sum of the number of correctly translated arguments. Conceptually, MEANT is defined in terms of f-score, with respect to the precision/recall for sentence translation accuracy as calculated by averaging the translation accuracy for all frames in the MT output across the number of frames in the MT output/reference translations. Details are given below. 3.1 Annotating semantic frames In designing a semantic MT evaluation metric, one important issue that should be addressed is how to evaluate the similarity of meaning objectively and systematically 222 Figure 1: Example of source sentence and reference translation with reconstructed semantic frames in Propbank format and MT output with reconstructed semantic frames by minimal trained human annotators. Following Propbank, there are no semantic frames for MT3 because there is no predicate. using fine-grained measures. We adopted the Propbank SRL style predicate-argument framework, which captures the basic event structure in a sentence in a way that clearly indicates many strengths and weaknesses of MT. Figure 1 shows the reference translation with reconstructed semantic frames in Propbank format and the corresponding MT output with reconstructed semantic frames by minimal trained human annotators. 3.2 Comparing semantic frames After annotating the semantic frames, we must determine the translation accuracy for each semantic role filler in the reference and machine translations. Although ultimately it would be nice to do this automatically, it is essential to first understand extremely well the upper bound of accuracy for MT evaluation via semantic frame theory. Thus, instead of resorting to excessively permissive bagof-words matching or excessively restrictive exact string matching, for the experiments reported here we employed a group of human judges to evaluate the correctness of each role filler translation between the reference and machine translations. 
In order to facilitate a finer-grained measurement of utility, the human judges were not only allowed to mark each role filler translation as “correct” or “incorrect”, but also “partial”. Translations of role fillers are judged “correct” if they express the same meaning as that of the reference translations (or the original source input, in the bilinguals experiment discussed later). Translations may also be judged “partial” if only part of the meaning is correctly translated. Extra meaning in a role filler is not penalized unless it belongs in another role. We also assume that a wrongly translated predicate means that the entire semantic frame is incorrect; therefore, the “correct” and “partial” argument counts are collected only if their associated predicate is correctly translated in the first place. Table 1 shows an example of SRL annotation of MT1 in Figure 1 by one of the annotators, along with the human judgment on translation accuracy of each argument. The predicate ceased in the reference translation did not match with any predicate annotated in MT1, while the predicate resumed matched with the predicate resume annotated in MT1. All arguments of the untranslated ceased are automatically considered incorrect (with no need to consider each argument individually), under our assumption that a wrongly translated predicate causes the entire event frame to be considered mistranslated. The ARGM-TMP argument, Until after their sales had ceased in mainland China for almost two months, in the reference translation is partially translated to ARGM-TMP argument, So far , nearly two months, in MT1. Similar decisions are made for the ARG1 argument and the other ARGM-TMP argument; now in the reference translation is missing in MT1. 3.3 Quantifying semantic frame match To quantify the above in a summary metric, we define MEANT in terms of an f-score that balances the precision and recall analysis of the comparative matrices collected from the human judges, as follows. Ci,j = # correct fillers of ARG j for PRED i in MT Pi,j = # partial fillers of ARG j for PRED i in MT Mi,j = total # fillers of ARG j for PRED i in MT Ri,j = total # fillers of ARG j of PRED i in REF 223 Table 1: SRL annotation of MT1 in Figure 1 and the human judgment of translation accuracy for each argument (see text). SRL REF MT1 Decision PRED (Action) ceased – no match PRED (Action) resumed resume match ARG0 (Agent) – sk - ii the sale of products in the mainland of China incorrect ARG1 (Experiencer) sales of complete range of SK - II products sales partial ARGM-TMP (Temporal) Until after , their sales had ceased in mainland China for almost two months So far , nearly two months partial ARGM-TMP (Temporal) now – incorrect Cprecision = ∑ matched i wpred + ∑ j wjCi,j wpred + ∑ j wjMi,j Crecall = ∑ matched i wpred + ∑ j wjCi,j wpred + ∑ j wjRi,j Pprecision = ∑ matched i ∑ j wjPi,j wpred + ∑ j wjMi,j Precall = ∑ matched i ∑ j wjPi,j wpred + ∑ j wjRi,j precision = Cprecision + (wpartial × Pprecision) total # predicates in MT recall = Crecall + (wpartial × Precall) total # predicates in REF f-score = 2 ∗precision ∗recall precision + recall Cprecision, Pprecision, Crecall and Precall are the sum of the fractional counts of correctly or partially translated semantic frames in the MT output and the reference, respectively, which can be viewed as the true positive for precision and recall of the whole semantic structure in one source utterence. 
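The combination of these per-frame counts into the MEANT sentence-level precision, recall, and f-score described in this section can be sketched as follows. The data layout (a list of matched frames, each holding per-role counts) and the function itself are our own illustrative reconstruction, not the authors' code; w_pred, w_role, and w_partial correspond to the weights discussed in the text.

    def meant_fscore(matched_frames, n_mt_preds, n_ref_preds,
                     w_pred, w_role, w_partial=0.5):
        """Sketch of the MEANT sentence-level f-score. Each matched frame is a
        list of tuples (role_type, n_correct, n_partial, n_mt_fillers, n_ref_fillers)."""
        c_prec = c_rec = p_prec = p_rec = 0.0
        for frame in matched_frames:
            wc = sum(w_role[r] * c for r, c, p, m, ref in frame)
            wp = sum(w_role[r] * p for r, c, p, m, ref in frame)
            wm = sum(w_role[r] * m for r, c, p, m, ref in frame)
            wr = sum(w_role[r] * ref for r, c, p, m, ref in frame)
            c_prec += (w_pred + wc) / (w_pred + wm)   # correct-filler precision term
            c_rec += (w_pred + wc) / (w_pred + wr)    # correct-filler recall term
            p_prec += wp / (w_pred + wm)              # partial-filler precision term
            p_rec += wp / (w_pred + wr)               # partial-filler recall term
        precision = (c_prec + w_partial * p_prec) / n_mt_preds if n_mt_preds else 0.0
        recall = (c_rec + w_partial * p_rec) / n_ref_preds if n_ref_preds else 0.0
        if precision + recall == 0:
            return 0.0
        return 2 * precision * recall / (precision + recall)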
Therefore, the SRL based MT evaluation metric is equivalent to the f-score, i.e., the translation accuracy for the whole predicate-argument structure. Note that wpred, wj and wpartial are the weights for the matched predicate, arguments of type j, and partial translations. These weights can be viewed as the importance of meaning preservation for each different category of semantic roles, and the penalty for partial translations. We will describe below how these weights are estimated. If all the reconstructed semantic frames in the MT output are completely identical to those annotated in the reference translation, and all the arguments in the reconstructed frames express the same meaning as the corresponding arguments in the reference translations, then the f-score will be equal to 1. For instance, consider MT1 in Figure 1. The number of frames in MT1 and the reference translation are 1 and 2, respectively. The total number of participants (including both predicates and arguments) of the resume frame in both MT1 and the reference translation is 4 (one predicate and three arguments), with 2 of the arguments (one ARG1/experiencer and one ARGM-TMP/temporal) only partially translated. Assuming for now that the metric aggregates ten types of semantic roles with uniform weight for each role (optimization of weights will be discussed later), then wpred = wj = 0.1, and so Cprecision and Crecall are both zero while Pprecision and Precallare both 0.5. If we further assume that wpartial = 0.5, then precison and recall are 0.25 and 0.125 respectively. Thus the f-score for this example is 0.17. Both human and semi-automatic variants of the MEANT translation evaluation metric were metaevaluated, as described next. 4 Meta-evaluation methodology 4.1 Evaluation Corpus We leverage work from Phase 2.5 of the DARPA GALE program in which both a subset of the Chinese source sentences, as well as their English reference, are being annotated with semantic role labels in Propbank style. The corpus also includes three participating stateof-the-art MT systems’ output. For present purposes, we randomly drew 40 sentences from the newswire genre of the corpus to form a meta-evaluation corpus. To maintain a controlled environment for experiments and consistent comparison, the evaluation corpus is fixed throughout this work. 4.2 Correlation with human judgements on adequacy We followed the benchmark assessment procedure in WMT and NIST MetricsMaTr (Callison-Burch et al., 2008, 2010), assessing the performance of the proposed evaluation metric at the sentence level using ranking preference consistency, which also known as Kendall’s τ rank correlation coefficient, to evaluate the correlation of the proposed metric with human judgments on translation adequacy ranking. A higher value for τ indicates more similarity to the ranking by the evaluation metric to the human judgment. The range of possible values of correlation coefficient is [-1,1], where 1 means the systems are ranked 224 Table 2: List of semantic roles that human judges are requested to label. Label Event Label Event Agent who Location where Action did Purpose why Experiencer what Manner how Patient whom Degree or Extent how Temporal when Other adverbial arg. how in the same order as the human judgment and -1 means the systems are ranked in the reverse order as the human judgment. 
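As a small illustration of the meta-evaluation criterion, Kendall's τ over paired system rankings can be computed as follows; this helper is ours, ignores ties for simplicity, and any off-the-shelf implementation would serve equally well.

    def kendall_tau(rank_a, rank_b):
        """Kendall's tau rank correlation between two rankings of the same items,
        given as parallel lists of ranks (1 = best), without tie handling."""
        n = len(rank_a)
        concordant = discordant = 0
        for i in range(n):
            for j in range(i + 1, n):
                s = (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j])
                if s > 0:
                    concordant += 1
                elif s < 0:
                    discordant += 1
        return (concordant - discordant) / (n * (n - 1) / 2)

    # Identical rankings give 1.0; completely reversed rankings give -1.0,
    # e.g. kendall_tau([1, 2, 3], [3, 2, 1]) -> -1.0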
5 Experiment: Using human SRL The first experiment aims to provide a more concrete understanding of one of the key questions as to the upper bounds of the proposed evaluation metric: how well can human annotators perform in reconstructing the semantic frames in MT output? This is important since MT output is still not close to perfectly grammatical for a good syntactic parsing—applying automatic shallow semantic parsers, which are trained on grammatical input and valid syntactic parse trees, on MT output may significantly underestimate translation utility. 5.1 Experimental setup We thus introduce HMEANT, a variant of MEANT based on the idea that semantic role labeling can be simplified into a task that is easy and fast even for untrained humans. The human annotators are given only very simple instructions of less than half a page, along with two examples. Table 2 shows the list of labels annotators are requested to annotate, where the semantic role labeling instructions are given in the intuitive terms of “who did what to whom, when, where, why and how”. To facilitate the inter-annotator agreement experiments discussed later, each sentence is independently assigned to at least two annotators. After calculating the SRL scores based on the confusion matrix collected from the annotation and evaluation, we estimate the weights using grid search to optimize correlation with human adequacy judgments. 5.2 Results: Correlation with human judgement Table 3 shows results indicating that HMEANT correlates with human judgment on adequacy as well as HTER does (0.432), and is far superior to BLEU (0.198) or other surface-oriented metrics. Inspection of the cross validation results shown in Table 4 indicates that the estimated weights are not overfitting. Recall that the weights used in HMEANT are globally estimated (by grid search) using the evaluation Table 3: Sentence-level correlation with human adequacy judgments, across the evaluation metrics. Metrics Kendall τ HMEANT 0.4324 HTER 0.4324 NIST 0.2883 BLEU 0.1982 METEOR 0.1982 TER 0.1982 PER 0.1982 CDER 0.1171 WER 0.0991 Table 4: Analysis of stability for HMEANT’s weight settings, with RHMEANT rank and Kendall’s τ correlation scores (see text). Fold 0 Fold 1 Fold 2 Fold 3 RHMEANT 3 1 3 5 distinct R 16 29 19 17 τHMEANT 0.33 0.48 0.48 0.40 τHTER 0.59 0.41 0.44 0.30 τCV train 0.45 0.42 0.40 0.43 τCV test 0.33 0.37 0.48 0.40 corpus. To analyze stability, the corpus is also partitioned randomly into four folds of equal size. For each fold, another grid search is also run. RHMEANT is the rank at which the Kendall’s correlation for HMEANT is found, if the Kendall’s correlations for all points in the grid search space are sorted. Many similar weightvectors produce the same Kendall’s correlation score, so “distinct R” shows how many distinct Kendall’s correlation scores exist in each case—between 16 and 29. HMEANT’s weight settings always produce Kendall’s correlation scores among the top 5, regardless of which fold is chosen, indicating good stability of HMEANT’s weight-vector. Next, Kendall’s τ correlation scores are shown for HMEANT on each fold. They vary from 0.33 to 0.48, and are at least as stable as those shown for HTER, where τ varies from 0.30 to 0.59. Finally, τCV shows Kendall’s correlations if the weightvector is instead subjected to full cross-validation training and testing, again demonstrating good stability. In fact, the correlations for the training set in three of the folds (0, 2, and 3) are identical to those for HMEANT. 
5.3 Results: Cost of evaluating The time needed for training non-expert humans to carry out our annotation protocol is significantly less than HTER and gold standard Propbank annotation. The halfpage instructions given to annotators required only between 5 to 15 minutes for all annotators, including time 225 for asking questions if necessary. Aside from providing two annotated examples, no further training was given. Similarly, the time needed for running the evaluation metric is also significantly less than HTER—under at most 5 minutes per sentence, even for non-expert humans using no computer-assisted UI tools. The average time used for annotating each sentence was lower bounded by 2 minutes and upper bounded by 3 minutes, and the time used for determing the translation accuracy of role fillers averaged under 2 minutes. Note that these figures are for unskilled non-experts. These times tend to diminish significantly after annotators acquire experience. 6 Experiment: Monolinguals vs. bilinguals We now show that using monolingual annotators is essentially just as effective as using more expensive bilingual annotators. We study the cost/benefit trade-off of using human annotators from different language backgrounds for the proposed evaluation metric, and compare whether providing the original source text helps. Note that this experiment focuses on the SRL annotation step, rather than the judgments of role filler paraphrasing accuracy, because the latter is only a simple three-way decision between “correct”, “partial”, and “incorrect” that is far less sensitive to the annotators’ language backgrounds. MT output is typically poor. Therefore, readers of MT output often guess the original meaning in the source input using their own language background knowledge. Readers’ language background thus affects their understanding of the translation, which could affect the accuracy of capturing the key semantic roles in the translation. 6.1 Experimental Setup Both English monolinguals and Chinese-English bilinguals (Chinese as first language and English as second language) were employed to annotate the semantic roles. For bilinguals, we also experimented with the difference in guessing constraints by optionally providing the original source input together with the translation. Therefore, there are three variations in the experiment setup: monolinguals seeing translation output only; bilinguals seeing translation output only; and bilinguals seeing both input and output. The aim here is to do a rough sanity check on the effect of the variation of language background of the annotators; thus for these experiments we have not run the weight estimation step after SRL based f-score calculation. Instead, we simply assigned a uniform weight to all the semantic elements, and evaluated the variation under the same weight settings. (The correlation scores reported in this section are thus expected to be lower than that reported in the last section.) Table 5: Sentence-level correlation with human adequacy judgments, for monolinguals vs. bilinguals. Uniform rather than optimized weights are used. Metrics Kendall τ HMEANT - bilinguals 0.3514 HMEANT - monolinguals 0.3153 HMEANT - bilinguals with input 0.3153 6.2 Results Table 5 of our results shows that using more expensive bilinguals for SRL annotation instead of monolinguals improves the correlation only slightly. 
The correlation coefficient of the SRL based evaluation metric driven by bilingual human annotators (0.351) is slightly better than that driven by monolingual human annotators (0.315); however, using bilinguals in the evaluation process is more costly than using monolinguals. The results show that even allowing the bilinguals to see the input as well as the translation output for SRL annotation does not help the correlation. The correlation coefficient of the SRL based evaluation metric driven by bilingual human annotators who see also the source input sentences is 0.315 which is the same as that driven by monolingual human annotators. We find that the correlation coefficient of the proposed with human judgment on adequacy drops when bilinguals are shown to the source input sentence during annotation. Error analyses lead us to believe that annotators will drop some parts of the meaning in the translations when trying to align them to the source input. This suggests that HMEANT requires only monolingual English annotators, who can be employed at low cost. 7 Inter-annotator agreement One of the concerns of the proposed metric is that, given only minimal training on the task, humans would annotate the semantic roles so inconsistently as to reduce the reliability of the evaluation metric. Inter-annotator agreement (IAA) measures the consistency of human in performing the annotation task. A high IAA suggests that the annotation is consistent and the evaluation results are reliable and reproducible. To obtain a clear analysis on where any inconsistency might lie, we measured IAA in two steps: role identification and role classification. 7.1 Experimental setup Role identification Since annotators are not consistent in handling articles or punctuation at the beginning or the end of the annotated arguments, the agreement of semantic role identification is counted over the matching of 226 Table 6: Inter-annotator agreement rate on role identification (matching of word span) Experiments REF MT bilinguals working on output only 76% 72% monolinguals working on output only 93% 75% bilinguals working on input-output 75% 73% Table 7: Inter-annotator agreement rate on role classification (matching of role label associated with matched word span) Experiments Ref MT bilinguals working on output only 69% 65% monolinguals working on output only 88% 70% bilinguals working on input-output 70% 69% word span in the annotated role fillers with a tolerance of ±1 word in mismatch. The inter-annotator agreement rate (IAA) on the role identification task is calculated as follows. A1 and A2 denote the number of annotated predicates and arguments by annotator 1 and annotator 2 respectively. Mspan denotes the number of annotated predicates and arguments with matching word span between annotators. Pidentification = Mspan A1 Ridentification = Mspan A2 IAAidentification = 2 ∗Pidentification ∗Ridentification Pidentification + Ridentification Role classification The agreement of classified roles is counted over the matching of the semantic role labels within two aligned word spans. The IAA on the role classification task is calculated as follows. Mlabel denotes the number of annotated predicates and arguments with matching role label between annotators. 
Pclassification = Mlabel A1 Rclassification = Mlabel A2 IAAclassification = 2 ∗Pclassification ∗Rclassification Pclassification + Rclassification 7.2 Results The high inter-annotator agreement suggests that the annotation instructions provided to the annotators are in general sufficient and the evaluation is repeatable and could be automated in the future. Table 6 and 7 show the annotators reconstructed the semantic frames quite consistently, even they were given only simple and minimal training. We have noticed that the agreement on role identification is higher than that on role classification. This suggests that there are role confusion errors among the annotators. We expect a slightly more detailed instructions and explanations on different roles will further improve the IAA on role classification. The results also show that monolinguals seeing output only have the highest IAA in semantic frame reconstruction. Data analyses lead us to believe the monolinguals are the most constrained group in the experiments. The monolingual annotators can only guess the meaning in the MT output using their English language knowledge. Therefore, they all understand the translation almost the same way, even if the translation is incorrect. On the other hand, bilinguals seeing both the input and output discover the mistranslated portions, and often unconsciously try to compensate by re-interpreting the MT output with information not necessarily appearing in the translation, in order to better annotate what they think it should have conveyed. Since there are many degrees of freedom in this sort of compensatory re-interpretation, this group achieved a lower IAA than the monolinguals. Bilinguals seeing only output appear to take this even a step further: confronted with a poor translation, they often unconsciously try to guess what the original input might have been. Consequently, they agree the least, because they have the most freedom in applying their own knowledge of the unseen input language, when compensating for poor translations. 8 Experiment: Using automatic SRL In the previous experiment, we showed that the proposed evaluation metric driven by human semantic role annotators performed as well as HTER. It is now worth asking a deeper question: can we further reduce the labor cost of MEANT by using automatic shallow semantic parsing instead of humans for semantic role labeling? Note that this experiment focuses on understanding the cost/benefit trade-off for the semantic frame reconstruction step. For SRL annotation, we replace humans with automatic shallow semantic parsing. We decouple this from the ternary judgments of role filler accuracy, which are still made by humans. However, we believe the evaluation of role filler accuracy will also be automatable. 8.1 Experimental setup We performed three variations of the experiments to assess the performance degradation from the automatic approximation of semantic frame reconstruction in each translation (reference translation and MT output): we applied automatic shallow semantic parsing on the MT output only; on the reference translation only; and on both reference translation and MT output. For the semantic 227 Table 8: Sentence-level correlation with human adequacy judgments. *The weights for individual roles in the metric are tuned by optimizing the correlation. 
Metrics Kendall τ HTER 0.4324 HMEANT gold - monolinguals * 0.4324 HMEANT auto - monolinguals * 0.3964 MEANT gold - auto * 0.3694 MEANT auto - auto * 0.3423 NIST 0.2883 BLEU / METEOR / TER / PER 0.1982 CDER 0.1171 WER 0.0991 parser, we used ASSERT (Pradhan et al., 2004) which achieves roughly 87% semantic role labeling accuracy. 8.2 Results Table 8 shows that the proposed SRL based evaluation metric correlates slightly worse than HTER with a much lower labor cost. The correlation with human judgment on adequacy of the fully automated SRL annotation version, i.e., applying ASSERT on both the reference translation and the MT output, of the SRL based evaluation metric is about 80% of that of HTER. The results also show that the correlation with human judgment on adequacy of either one side of translation using automatic SRL is in the 85% to 95% range of that HTER. 9 Conclusion We have presented MEANT, a novel semantic MT evaluation metric that assesses the translation accuracy via Propbank-style semantic predicates, roles, and fillers. MEANT provides an intuitive picture on how much information is correctly translated in the MT output. MEANT can be run using inexpensive untrained monolinguals and yet correlates with human judgments on adequacy as well as HTER with a lower labor cost. In contrast to HTER, which requires rigorous training of human experts to find a minimum edit of the translation (an exponentially large search space), MEANT requires untrained humans to make well-defined, bounded decisions on annotating semantic roles and judging translation correctness. The process by which MEANT reconstructs the semantic frames in a translation and then judges translation correctness of the role fillers conceptually models how humans read and understand translation output. We also showed that using automatic shallow semantic parser to further reduce the labor cost of the proposed metric successfully approximates roughly 80% of the correlation with human judgment on adequacy. The results suggest future potential for a fully automatic variant of MEANT that could out-perform current automatic MT evaluation metrics and still perform near the level of HTER. Numerous intriguing questions arise from this work. A further investigation into the correlation of each of the individual roles to human adequacy judgments is detailed elsewhere, along with additional improvements to the MEANT family of metrics (Lo and Wu, 2011). Another interesting investigation would then be to similarly replicate this analysis of the impact of each individual role, but using automatically rather than manually labeled semantic roles, in order to ascertain whether the more difficult semantic roles for automatic semantic parsers might also correspond to the less important aspects of end-to-end MT utility. Acknowledgments This material is based upon work supported in part by the Defense Advanced Research Projects Agency (DARPA) under GALE Contract Nos. HR0011-06C-0022 and HR0011-06-C-0023 and by the Hong Kong Research Grants Council (RGC) research grants GRF621008, GRF612806, DAG03/04.EG09, RGC6256/00E, and RGC6083/99E. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the Defense Advanced Research Projects Agency. References Satanjeev Banerjee and Alon Lavie. METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments. 
In Proceedings of the 43th Annual Meeting of the Association of Computational Linguistics (ACL-05), pages 65–72, 2005. Chris Callison-Burch, Miles Osborne, and Philipp Koehn. Re-evaluating the role of BLEU in Machine Translation Research. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics (EACL-06), pages 249–256, 2006. Chris Callison-Burch, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. (Meta-) evaluation of Machine Translation. In Proceedings of the 2nd Workshop on Statistical Machine Translation, pages 136–158, 2007. Chris Callison-Burch, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. Further Metaevaluation of Machine Translation. In Proceedings of the 3rd Workshop on Statistical Machine Translation, pages 70–106, 2008. Chris Callison-Burch, Philipp Koehn, Christof Monz, Kay Peterson, Mark Pryzbocki, and Omar Zaidan. 228 Findings of the 2010 Joint Workshop on Statistical Machine Translation and Metrics for Machine Translation. In Proceedings of the Joint 5th Workshop on Statistical Machine Translation and MetricsMATR, pages 17–53, Uppsala, Sweden, 15-16 July 2010. G. Doddington. Automatic Evaluation of Machine Translation Quality using N-gram Co-occurrence Statistics. In Proceedings of the 2nd International Conference on Human Language Technology Research (HLT-02), pages 138–145, San Francisco, CA, USA, 2002. Morgan Kaufmann Publishers Inc. Jes´us Gim´enez and Llu´is M`arquez. Linguistic Features for Automatic Evaluation of Heterogenous MT Systems. In Proceedings of the 2nd Workshop on Statistical Machine Translation, pages 256–264, Prague, Czech Republic, June 2007. Association for Computational Linguistics. Jes´us Gim´enez and Llu´is M`arquez. A Smorgasbord of Features for Automatic MT Evaluation. In Proceedings of the 3rd Workshop on Statistical Machine Translation, pages 195–198, Columbus, OH, June 2008. Association for Computational Linguistics. Philipp Koehn and Christof Monz. Manual and Automatic Evaluation of Machine Translation between European Languages. In Proceedings of the Workshop on Statistical Machine Translation, pages 102–121, 2006. Gregor Leusch, Nicola Ueffing, and Hermann Ney. CDer: Efficient MT Evaluation Using Block Movements. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics (EACL-06), 2006. Ding Liu and Daniel Gildea. Syntactic Features for Evaluation of Machine Translation. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, page 25, 2005. Ding Liu and Daniel Gildea. Source-Language Features and Maximum Correlation Training for Machine Translation Evaluation. In Proceedings of the 2007 Conference of the North American Chapter of the Association of Computational Linguistics (NAACL-07), 2007. Chi-kiu Lo and Dekai Wu. Evaluating machine translation utility via semantic role labels. In Seventh International Conference on Language Resources and Evaluation (LREC-2010), pages 2873–2877, Malta, May 2010. Chi-kiu Lo and Dekai Wu. Semantic vs. syntactic vs. n-gram structure for machine translation evaluation. In Dekai Wu, editor, Proceedings of SSST-4, Fourth Workshop on Syntax and Structure in Statistical Translation (at COLING 2010), pages 52–60, Beijing, Aug 2010. Chi-kiu Lo and Dekai Wu. SMT vs. AI redux: How semantic frames evaluate MT more accurately. 
In 22nd International Joint Conference on Artificial Intelligence (IJCAI-11), Barcelona, Jul 2011. To appear. Sonja Nießen, Franz Josef Och, Gregor Leusch, and Hermann Ney. A Evaluation Tool for Machine Translation: Fast Evaluation for MT Research. In Proceedings of the 2nd International Conference on Language Resources and Evaluation (LREC-2000), 2000. Karolina Owczarzak, Josef van Genabith, and Andy Way. Evaluating machine translation with LFG dependencies. Machine Translation, 21:95–119, 2008. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. BLEU: A Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL-02), pages 311–318, 2002. Sameer Pradhan, Wayne Ward, Kadri Hacioglu, James H. Martin, and Dan Jurafsky. Shallow Semantic Parsing Using Support Vector Machines. In Proceedings of the 2004 Conference on Human Language Technology and the North American Chapter of the Association for Computational Linguistics (HLT-NAACL-04), 2004. Mark Przybocki, Kay Peterson, S´ebastien Bronsart, and Gregory Sanders. The NIST 2008 Metrics for Machine Translation Challenge - Overview, Methodology, Metrics, and Results. Machine Tr, 23:71–103, 2010. Matthew Snover, Bonnie J. Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. A Study of Translation Edit Rate with Targeted Human Annotation. In Proceedings of the 7th Conference of the Association for Machine Translation in the Americas (AMTA-06), pages 223–231, 2006. Christoph Tillmann, Stephan Vogel, Hermann Ney, Arkaitz Zubiaga, and Hassan Sawaf. Accelerated DP Based Search For Statistical Translation. In Proceedings of the 5th European Conference on Speech Communication and Technology (EUROSPEECH-97), 1997. Clare R. Voss and Calandra R. Tate. Task-based Evaluation of Machine Translation (MT) Engines: Measuring How Well People Extract Who, When, Where-Type Elements in MT Output. In Proceedings of the 11th Annual Conference of the European Association for Machine Translation (EAMT-2006), pages 203–212, Oslo, Norway, June 2006. 229
| 2011 | 23 |
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 230–238, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics An exponential translation model for target language morphology Michael Subotin Paxfire, Inc. Department of Linguistics & UMIACS, University of Maryland [email protected] Abstract This paper presents an exponential model for translation into highly inflected languages which can be scaled to very large datasets. As in other recent proposals, it predicts targetside phrases and can be conditioned on sourceside context. However, crucially for the task of modeling morphological generalizations, it estimates feature parameters from the entire training set rather than as a collection of separate classifiers. We apply it to English-Czech translation, using a variety of features capturing potential predictors for case, number, and gender, and one of the largest publicly available parallel data sets. We also describe generation and modeling of inflected forms unobserved in training data and decoding procedures for a model with non-local target-side feature dependencies. 1 Introduction Translation into languages with rich morphology presents special challenges for phrase-based methods. Thus, Birch et al (2008) find that translation quality achieved by a popular phrase-based system correlates significantly with a measure of targetside, but not source-side morphological complexity. Recently, several studies (Bojar, 2007; Avramidis and Koehn, 2009; Ramanathan et al., 2009; Yeniterzi and Oflazer, 2010) proposed modeling targetside morphology in a phrase-based factored models framework (Koehn and Hoang, 2007). Under this approach linguistic annotation of source sentences is analyzed using heuristics to identify relevant structural phenomena, whose occurrences are in turn used to compute additional relative frequency (maximum likelihood) estimates predicting targetside inflections. This approach makes it difficult to handle the complex interplay between different predictors for inflections. For example, the accusative case is usually preserved in translation, so that nouns appearing in the direct object position of English clauses tend to be translated to words with accusative case markings in languages with richer morphology, and vice versa. However, there are exceptions. For example, some verbs that place their object in the accusative case in Czech may be rendered as prepositional constructions in English (Naughton, 2005): David was looking for Jana David hledal Janu David searched Jana-ACC Conversely, direct objects of some English verbs can be translated by nouns with genitive case markings in Czech: David asked Jana where Karel was David zeptal se Jany kde je Karel David asked SELF Jana-GEN where is Karel Furthermore, English noun modifiers are often rendered by Czech possessive adjectives and a verbal complement in one language is commonly translated by a nominalizing complement in another language, so that the part of speech (POS) of its head need not be preserved. These complications make it difficult to model morphological phenomena using 230 closed-form estimates. This paper presents an alternative approach based on exponential phrase models, which can straightforwardly handle feature sets with arbitrarily elaborate source-side dependencies. 
2 Hierarchical phrase-based translation We take as our starting point David Chiang’s Hiero system, which generalizes phrase-based translation to substrings with gaps (Chiang, 2007). Consider for instance the following set of context-free rules with a single non-terminal symbol: ⟨A , A ⟩→⟨A1 A2 , A1 A2 ⟩ ⟨A , A ⟩→⟨d′ A1 id´ees A2 , A1 A2 ideas ⟩ ⟨A , A ⟩→⟨incolores , colorless ⟩ ⟨A , A ⟩→⟨vertes , green ⟩ ⟨A , A ⟩→⟨dorment A , sleep A ⟩ ⟨A , A ⟩→⟨furieusement , furiously ⟩ It is one of many rule sets that would suffice to generate the English translation 1b for the French sentence 1a. 1a. d’ incolores id´ees vertes dorment furieusement 1b. colorless green ideas sleep furiously As shown by Chiang (2007), a weighted grammar of this form can be collected and scored by simple extensions of standard methods for phrase-based translation and efficiently combined with a language model in a CKY decoder to achieve large improvements over a state-of-the-art phrase-based system. The translation is chosen to be the target-side yield of the highest-scoring synchronous parse consistent with the source sentence. Although a variety of scores interpolated into the decision rule for phrasebased systems have been investigated over the years, only a handful have been discovered to be consistently useful. In this work we concentrate on extending the target-given-source phrase model1. 3 Exponential phrase models with shared features The model used in this work is based on the familiar equation for conditional exponential models: 1To avoid confusion with features of the exponential models described below we shall use the term “model” rather than “feature” for the terms interpolated using MERT. p(Y |X) = e⃗w·⃗f(X,Y ) P Y ′∈GEN(X) e⃗w·⃗f(X,Y ′) where ⃗f(X, Y ) is a vector of feature functions, ⃗w is a corresponding weight vector, so that ⃗w · ⃗f(X, Y ) = P i wifi(X, Y ), and GEN(X) is a set of values corresponding to Y . For a targetgiven-source phrase model the predicted outcomes are target-side phrases ry, the model is conditioned on a source-side phrase rx together with some context, and each GEN(X) consists of target phrases ry co-occurring with a given source phrase rx in the grammar. Maximum likelihood estimation for exponential model finds the values of weights that maximize the likelihood of the training data, or, equivalently, its logarithm: LL(⃗w) = log M Y m=1 p(Ym|Xm) = M X m=1 log p(Ym|Xm) where the expressions range over all training instances {m}. In this work we extend the objective using an ℓ2 regularizer (Ng, 2004; Gao et al., 2007). We obtain the counts of instances and features from the standard heuristics used to extract the grammar from a word-aligned parallel corpus. Exponential models and other classifiers have been used in several recent studies to condition phrase model probabilities on source-side context (Chan et al 2007; Carpuat and Wu 2007a; Carpuat and Wu 2007b). However, this has been generally accomplished by training independent classifiers associated with different source phrases. This approach is not well suited to modeling targetlanguage inflections, since parameters for the features associated with morphological markings and their predictors would be estimated separately from many, generally very small training sets, thereby preventing the model from making precisely the kind of generalization beyond specific phrases that we seek to obtain. 
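As a minimal sketch of these equations — not the production implementation, which works from the counts of the extracted grammar and the optimizer described in Section 8 — the conditional probability and the ℓ2-regularized log-likelihood can be written as follows. The sparse dictionary representation, feat_fn, and gen are assumptions made here purely for illustration.

```python
import math

def score(weights, feats):
    """w . f(X, Y) for a sparse feature dictionary."""
    return sum(weights.get(name, 0.0) * val for name, val in feats.items())

def prob(weights, feat_fn, x, y, gen):
    """p(Y|X) under the conditional exponential model.

    gen[x] is GEN(X): the target phrases co-occurring with source
    phrase x in the grammar.  feat_fn(x, y) returns the sparse
    feature vector f(X, Y); because features such as lemma and
    inflection features are shared across source phrases, a single
    weight vector is used for every GEN(X).
    """
    log_num = score(weights, feat_fn(x, y))
    log_denom = math.log(sum(math.exp(score(weights, feat_fn(x, y2)))
                             for y2 in gen[x]))
    return math.exp(log_num - log_denom)

def regularized_ll(weights, feat_fn, data, gen, l2=1.0):
    """Log-likelihood of training instances minus an l2 penalty."""
    ll = sum(math.log(prob(weights, feat_fn, x, y, gen)) for x, y in data)
    return ll - l2 * sum(w * w for w in weights.values())
```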
Instead we continue the approach proposed in Subotin (2008), where a single model defined by the equations above is trained on all of the data, so that parameters for features that are shared by rule sets with difference source sides reflect cumulative feature counts, while the standard relative 231 frequency model can be obtained as a special case of maximum likelihood estimation for a model containing only the features for rules.2 Recently, Jeong et al (2010) independently proposed an exponential model with shared features for target-side morphology in application to lexical scores in a treelet-based system. 4 Features The feature space for target-side inflection models used in this work consists of features tracking the source phrase and the corresponding target phrase together with its complete morphological tag, which will be referred to as rule features for brevity. The feature space also includes features tracking the source phrase together with the lemmatized representation of the target phrase, called lemma features below. Since there is little ambiguity in lemmatization for Czech, the lemma representations were for simplicity based on the most frequent lemma for each token. Finally, we include features associating aspects of source-side annotation with inflections of aligned target words. The models include features for three general classes of morphological types: number, case, and gender. We add inflection features for all words aligned to at least one English verb, adjective, noun, pronoun, or determiner, excepting definite and indefinite articles. A separate feature type marks cases where an intended inflection category is not applicable to a target word falling under these criteria due to a POS mismatch between aligned words. 4.1 Number The inflection for number is particularly easy to model in translating from English, since it is generally marked on the source side, and POS taggers based on the Penn treebank tag set attempt to infer it in cases where it is not. For word pairs whose source-side word is a verb, we add a feature marking the number of its subject, with separate features for noun and pronoun subjects. For word pairs whose source side is an adjective, we add a feature marking the number of the head of the smallest noun phrase that contains it. 2Note that this model is estimated from the full parallel corpus, rather than a held-out development set. 4.2 Case Among the inflection types of Czech nouns, the only type that is not generally observed in English and does not belong to derivational morphology is inflection for case. Czech marks seven cases: nominal, genitive, dative, accusative, vocative, locative, and instrumental. Not all of these forms are overtly distinguished for all lexical items, and some words that function syntactically as nouns do not inflect at all. Czech adjectives also inflect for case and their case has to match the case of their governing noun. However, since the source sentence and its annotation contain a variety of predictors for case, we model it using only source-dependent features. The following feature types for case were included: • The structural role of the aligned source word or the head of the smallest noun phrase containing the aligned source word. Features were included for the roles of subject, direct object, and nominal predicate. • The preposition governing the smallest noun phrase containing the aligned source word, if it is governed by a preposition. 
• An indicator for the presence of a possessive marker modifying the aligned source word or the head of the smallest noun phrase containing the aligned source word. • An indicator for the presence of a numeral modifying the aligned source word or the head of the smallest noun phrase containing the aligned source word. • An indication that aligned source word modified by quantifiers many, most, such, or half. These features would be more properly defined based on the identity of the target word aligned to these quantifiers, but little ambiguity seems to arise from this substitution in practice. • The lemma of the verb governing the aligned source word or the head of the smallest noun phrase containing the aligned source word. This is the only lexicalized feature type used in the model and we include only those features which occur over 1,000 times in the training data. 232 wx 1 wx 2 wx 3 wy 1 wy 2 wy 3 wx 4 r1 r2 observed dependency: wx 2 → wx 3 assumed dependency: wy 1 → wy 3 Figure 1: Inferring syntactic dependencies. Features corresponding to aspects of the source word itself and features corresponding to aspects of the head of a noun phrase containing it were treated as separate types. 4.3 Gender Czech nouns belong to one of three cases: feminine, masculine, and neuter. Verbs and adjectives have to agree with nouns for gender, although this agreement is not marked in some forms of the verb. In contrast to number and case, Czech gender generally cannot be predicted from any aspect of the English source sentence, which necessitates the use of features that depend on another target-side word. This, in turn, requires a more elaborate decoding procedure, described in the next section. For verbs we add a feature associating the gender of the verb with the gender of its subject. For adjectives, we add a feature tracking the gender of the governing nouns. These dependencies are inferred from source-side annotation via word alignments, as depicted in figure 1, without any use of target-side dependency parses. 5 Decoding with target-side model dependencies The procedure for decoding with non-local targetside feature dependencies is similar in its general outlines to the standard method of decoding with a language model, as described in Chiang (2007). The search space is organized into arrays called charts, each containing a set of items whose scores can be compared with one another for the purposes of pruning. Each rule that has matched the source sentence belongs to a rule chart associated with its location-anchored sequence of non-terminal and terminal source-side symbols and any of its aspects which may affect the score of a translation hypothesis when it is combined with another rule. In the case of the language model these aspects include any of its target-side words that are part of still incomplete n-grams. In the case of non-local target-side dependencies this includes any information about features needed for this rule’s estimate and tracking some target-side inflection beyond it or features tracking target-side inflections within this rule and needed for computation of another rule’s estimate. We shall refer to both these types of information as messages, alluding to the fact that it will need to be conveyed to another point in the derivation to finish the computation. Thus, a rule chart for a rule with one nonterminal can be denoted as as D xi1 i+1Axj j1+1, µ E , where we have introduced the symbol µ to represent the set of messages associated with a given item in the chart. 
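One possible in-memory representation of such chart items and their message sets is sketched below; the field names are hypothetical and only illustrate the bookkeeping, not our actual data structures, and the partial score field anticipates the score s discussed next.

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    """One piece of deferred information attached to a chart item.

    destination: the derivation position whose model estimate still
        needs this information.
    payload: e.g. a target-side inflection value produced inside this
        span, or a pointer to a rule whose estimate is waiting for
        such a value.  (Illustrative only.)
    """
    destination: int
    payload: object

@dataclass
class RuleChartItem:
    """An entry in the rule chart for one matched rule."""
    source_symbols: tuple                         # anchored source-side sequence
    messages: list = field(default_factory=list)  # the message set mu
    score: float = 0.0                            # partial score s from submodels
                                                  # and heuristic estimates
```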
Each item in the chart is associated with a score s, based on any submodels and heuristic estimates that can already be computed for that item and used to arrange the chart items into a priority queue. Combinations of one or more rules that span a substring of terminals are arranged into a different type of chart which we shall call span charts. A span chart has the form [i1, j1; µ1], where µ1 is a set of messages, and its items are likewise prioritized by a partial score s1. The decoding procedure used in this work is based on the cube pruning method, fully described in Chiang (2007). Informally, whenever a rule chart is combined with one or more span charts corresponding to its non-terminals, we select best-scoring items from each chart and update derivation scores by performing any model computations that become possible once we combine the corresponding items. Crucially, whenever an item in one of the charts crosses a pruning threshold, we discard the rest of that chart’s items, even though one of them could generate a better-scoring partial derivation in com233 bination with an item from another chart. It is therefore important to estimate incomplete model scores as well as we can. We estimate these scores by computing exponential models using all features without non-local dependencies. Schematically, our decoding procedure can be illustrated by three elementary cases. We take the example of computing an estimate for a rule whose only terminal on both sides is a verb and which requires a feature tracking the target-side gender inflection of the subject. We make use of a cache storing all computed numerators and denominators of the exponential model, which makes it easy to recompute an estimate given an additional feature and use the difference between it and the incomplete estimate to update the score of the partial derivation. In the simplest case, illustrated in figure 2, the non-local feature depends on the position within the span of the rule’s non-terminal symbol, so that its model estimate can be computed when its rule chart is combined with the span chart for its non-terminal symbol. This is accomplished using a feature message, which indicates the gender inflection for the subject and is denoted as mf(i), where the index i refers to the position of its “recipient”. Figure 3 illustrates the case where the non-local feature lies outside the rule’s span, but the estimated rule lies inside a non-terminal of the rule which contains the feature dependency. This requires sending a rule message mr(i), which includes information about the estimated rule (which also serves as a pointer to the score cache) and its feature dependency. The final example, shown in figure 4, illustrates the case where both types of messages need to be propagated until we reach a rule chart that spans both ends of the dependency. In this case, the full estimate for a rule is computed while combining charts neither of which corresponds directly to that rule. A somewhat more formal account of the decoding procedure is given in figure 5, which shows a partial set of inference rules, generally following the formalism used in Chiang (2007), but simplifying it in several ways for brevity. Aside from the notation introduced above, we also make use of two updating functions. 
The message-updating function um(µ) takes a set of messages and outputs another set that includes those messages mr(k) and mf(k) whose destination k lies outside the span i, j of the A Sb A V 1 2 mf(2) Score cache Figure 2: Non-local dependency, case A. A Sb A V 1 2 mr(1) Score cache Figure 3: Non-local dependency, case B. A Sb A V 1 2 Score cache mr(1) Adv A 3 mf(3) Figure 4: Non-local dependency, case C. 234 Figure 5: Simplified set of inference rules for decoding with target-side model dependencies. chart. The score-updating function us(µ) computes those model estimates which can be completed using a message in the set µ and returns the difference between the new and old scores. 6 Modeling unobserved target inflections As a consequence of translating into a morphologically rich language, some inflected forms of target words are unobserved in training data and cannot be generated by the decoder under standard phrasebased approaches. Exponential models with shared features provide a straightforward way to estimate probabilities of unobserved inflections. This is accomplished by extending the sets of target phrases GEN(X) over which the model is normalized by including some phrases which have not been observed in the original sets. When additional rule features with these unobserved target phrases are included in the model, their weights will be estimated even though they never appear in the training examples (i.e, in the numerator of their likelihoods). We generate unobserved morphological variants for target phrases starting from a generation procedure for target words. Morphological variants for words were generated using the ´UFAL MORPHO tool (Kolovratn´ık and Pˇrikryl, 2008). The forms produced by the tool from the lemma of an observed inflected word form were subjected to several restrictions: • For nouns, generated forms had to match the original form for number. • For verbs, generated forms had to match the original form for tense and negation. • For adjectives, generated forms had to match the original form for degree of comparison and negation. • For pronouns, excepting relative and interrogative pronouns, generated forms had to match the original form for number, case, and gender. • Non-standard inflection forms for all POS were excluded. The following criteria were used to select rules for which expanded inflection sets were generated: • The target phrase had to contain exactly one word for which inflected forms could be generated according to the criteria given above. • If the target phrase contained prepositions or numerals, they had to be in a position not adjacent to the inflected word. The rationale for this criterion was the tendency of prepositions and numerals to determine the inflection of adjacent words. • The lemmatized form of the phrase had to account for at least 25% of target phrases extracted for a given source phrase. The standard relative frequency estimates for the p(X|Y ) phrase model and the lexical models do not provide reasonable values for the decoder scores for unobserved rules and words. In contrast, exponential models with surface and lemma features can be straightforwardly trained for all of them. For the experiments described below we trained an exponential model for the p(Y |X) lexical model. 
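The following sketch illustrates how GEN(X) might be extended with generated variants under the restrictions above. Here generate_forms stands in for the MORPHO generation tool and analyses for the target-side lemmas and tags; both interfaces, like the noun-number check shown, are hypothetical simplifications of the procedure described in this section.

```python
def expand_gen(gen, analyses, generate_forms):
    """Add unobserved inflected variants of observed target phrases.

    gen[x] is the observed GEN(X).  analyses[phrase] is assumed to
    return the phrase's single inflectable word with its lemma and
    morphological tag; generate_forms(lemma) yields (form, tag) pairs.
    Only the noun-number restriction is shown; the other restrictions
    of Section 6 would be applied in the same way.
    """
    for x, targets in gen.items():
        new = set()
        for phrase in targets:
            word, lemma, tag = analyses[phrase]
            for form, form_tag in generate_forms(lemma):
                # e.g. generated noun forms must match the original number
                if tag.pos == "N" and form_tag.number != tag.number:
                    continue
                new.add(phrase.replace(word, form))
        gen[x] = list(set(targets) | new)
    return gen
```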
For greater speed we estimate the probabilities for the other two models using interpolated Kneser-Ney smoothing (Chen and Goodman, 1998), where the surface form of a rule or an aligned word pair plays to role of a trigram, the pairing of the source surface form with the lemmatized target form plays the role of a bigram, and the source surface form alone plays the role of a unigram. 235 7 Corpora and baselines We investigate the models using the 2009 edition of the parallel treebank from ´UFAL (Bojar and ˇZabokrtsk´y, 2009), containing 8,029,801 sentence pairs from various genres. The corpus comes with automatically generated annotation and a randomized split into training, development, and testing sets. Thus, the annotation for the development and testing sets provides a realistic reflection of what could be obtained for arbitrary source text. The English-side annotation follows the standards of the Penn Treebank and includes dependency parses and structural role labels such as subject and object. The Czech side is labeled with several layers of annotation, of which only the morphological tags and lemmas are used in this study. The Czech tags follow the standards of the Prague Dependency Treebank 2.0. The impact of the models on translation accuracy was investigated for two experimental conditions: • Small data set: trained on the news portion of the data, containing 140,191 sentences; development and testing sets containing 1500 sentences of news text each. • Large data set: trained on all the training data; developing and testing sets each containing 1500 sentences of EU, news, and fiction data in equal portions. The other genres were excluded from the development and testing sets because manual inspection showed them to contain a considerable proportion of non-parallel sentences pairs. All conditions use word alignments produced by sequential iterations of IBM model 1, HMM, and IBM model 4 in GIZA++, followed by “diag-and” symmetrization (Koehn et al., 2003). Thresholds for phrase extraction and decoder pruning were set to values typical for the baseline system (Chiang, 2007). Unaligned words at the outer edges of rules or gaps were disallowed. A 5-gram language model with modified interpolated Kneser-Ney smoothing (Chen and Goodman, 1998) was trained by the SRILM toolkit (Stolcke, 2002) on a set of 208 million running words of text obtained by combining the monolingual Czech text distributed by the 2010 ACL MT workshop with the Czech portion of the training data. The decision rule was based on the standard log-linear interpolation of several models, with weights tuned by MERT on the development set (Och, 2003). The baselines consisted of the language model, two phrase translation models, two lexical models, and a brevity penalty. The proposed exponential phrase model contains several modifications relative to a standard phrase model (called baseline A below) with potential to improve translation accuracy, including smoothed estimates and estimates incorporating target-side tags. To gain better insight into the role played by different elements of the model, we also tested a second baseline phrase model (baseline B), which attempted to isolate the exponential model itself from auxiliary modifications. Baseline B was different from the experimental condition in using a grammar limited to observed inflections and in replacing the exponential p(Y |X) phrase model by a relative frequency phrase model. 
It was different from baseline A in computing the frequencies for the p(Y |X) phrase model based on counts of tagged target phrases and in using the same smoothed estimates in the other models as were used in the experimental condition. 8 Parameter estimation Parameter estimation was performed using a modified version of the maximum entropy module from SciPy (Jones et al., 2001) and the LBFGS-B algorithm (Byrd et al., 1995). The objective included an ℓ2 regularizer with the regularization trade-off set to 1. The amount of training data presented a practical challenge for parameter estimation. Several strategies were pursued to reduce the computational expenses. Following the approach of Mann et al (2009), the training set was split into many approximately equal portions, for which parameters were estimated separately and then averaged for features observed in multiple portions. The sets of target phrases for each source phrase prior to generation of additional inflected variants were truncated by discarding extracted rules which were observed with frequency less than the 200-th most frequent target phrase for that source phrase. Additional computational challenges remained 236 due to an important difference between models with shared features and usual phrase models. Features appearing with source phrases found in development and testing data share their weights with features appearing with other source phrases, so that filtering the training set for development and testing data affects the solution. Although there seems to be no reason why this would positively affect translation accuracy, to be methodologically strict we estimate parameters for rule and lemma features without inflection features for larger models, and then combine them with weights for inflection feature estimated from a smaller portion of training data. This should affect model performance negatively, since it precludes learning trade-offs between evidence provided by the different kinds of features, and therefore it gives a conservative assessment of the results that could be obtained at greater computational costs. The large data model used parameters for the inflection features estimated from the small data set. In the runs where exponential models were used they replaced the corresponding baseline phrase translation model. 9 Results and discussion Table 1 shows the results. Aside from the two baselines described in section 7 and the full exponential model, the table also reports results for an exponential model that excluded gender-based features (and hence non-local target-side dependencies). The highest scores were achieved by the full exponential model, although baseline B produced surprisingly disparate effects for the two data sets. This suggests a complex interplay of the various aspects of the model and training data whose exploration could further improve the scores. Inclusion of genderbased features produced small but consistent improvements. Table 2 shows a summary of the grammars. We further illustrate general properties of these models using toy examples and the actual parameters estimated from the large data set. Table 3 shows representative rules with two different source sides. The column marked “no infl.” shows model estimates computed without inflection features. 
One can see that for both rule sets the estimated probabilities for rules observed a single time is only slightly Condition Small set Large set Baseline A 0.1964 0.2562 Baseline B 0.2067 0.2522 Expon-gender 0.2114 0.2598 Expon+gender 0.2128 0.2615 Table 1: BLUE scores on testing. See section 7 for a description of the baselines. Condition Total rules Observed rules Small set 17,089,850 3,983,820 Large set 39,349,268 23,679,101 Table 2: Grammar sizes after and before generation of unobserved inflections (all filtered for dev/test sets). higher than probabilities for generated unobserved rules. However, rules with relatively high counts in the second set receive proportionally higher estimates, while the difference between the singleton rule and the most frequent rule in the second set, which was observed 3 times, is smoothed away to an even greater extent. The last two columns show model estimates when various inflection features are included. There is a grammatical match between nominative case for the target word and subject position for the aligned source word and between accusative case for the target word and direct object role for the aligned source word. The other pairings represent grammatical mismatches. One can see that the probabilities for rules leading to correct case matches are considerably higher than the alternatives with incorrect case matches. rx Count Case No infl. Sb Obj 1 1 Dat 0.085 0.037 0.035 1 3 Acc 0.086 0.092 0.204 1 0 Nom 0.063 0.416 0.063 2 1 Instr 0.007 0.002 0.003 2 31 Nom 0.212 0.624 0.169 2 0 Acc 0.005 0.002 0.009 Table 3: The effect of inflection features on estimated probabilities. 237 10 Conclusion This paper has introduced a scalable exponential phrase model for target languages with complex morphology that can be trained on the full parallel corpus. We have showed how it can provide estimates for inflected forms unobserved in the training data and described decoding procedures for features with non-local target-side dependencies. The results suggest that the model should be especially useful for languages with sparser resources, but that performance improvements can be obtained even for a very large parallel corpus. Acknowledgments I would like to thank Philip Resnik, Amy Weinberg, Hal Daum´e III, Chris Dyer, and the anonymous reviewers for helpful comments relating to this work. References E. Avramidis and P. Koehn. 2008. Enriching Morphologically Poor Languages for Statistical Machine Translation. In Proc. ACL 2008. A. Birch, M. Osborne and P. Koehn. 2008. Predicting Success in Machine Translation. The Conference on Empirical Methods in Natural Language Processing (EMNLP), 2008. O. Bojar. 2007. English-to-Czech Factored Machine Translation. In Proceedings of the Second Workshop on Statistical Machine Translation. O. Bojar and Z. ˇZabokrtsk´y. 2009. Large Parallel Treebank with Rich Annotation. Charles University, Prague. http://ufal.mff.cuni.cz/czeng/czeng09/, 2009. R. H. Byrd, P. Lu and J. Nocedal. 1995. A Limited Memory Algorithm for Bound Constrained Optimization. SIAM Journal on Scientific and Statistical Computing, 16(5), pp. 1190-1208. M. Carpuat and D. Wu. 2007a. Improving Statistical Machine Translation using Word Sense Disambiguation. Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL 2007). M. Carpuat and D. Wu. 2007b. How Phrase Sense Disambiguation outperforms Word Sense Disambiguation for Statistical Machine Translation. 
11th Conference on Theoretical and Methodological Issues in Machine Translation (TMI 2007) Y.S. Chan, H.T. Ng, and D. Chiang. 2007. Word sense disambiguation improves statistical machine translation. In Proc. ACL 2007. S.F. Chen and J.T. Goodman. 1998. An Empirical Study of Smoothing Techniques for Language Modeling. Technical Report TR-10-98, Computer Science Group, Harvard University. D. Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201-228. J. Gao, G. Andrew, M. Johnson and K. Toutanova. 2007. A Comparative Study of Parameter Estimation Methods for Statistical Natural Language Processing. In Proc. ACL 2007. M. Jeong, K. Toutanova, H. Suzuki, and C. Quirk. 2010. A Discriminative Lexicon Model for Complex Morphology. The Ninth Conference of the Association for Machine Translation in the Americas (AMTA-2010). E. Jones, T. Oliphant, P. Peterson and others. SciPy: Open source scientific tools for Python. http://www.scipy.org/ P. Koehn and H. Hoang. 2007. Factored translation models. The Conference on Empirical Methods in Natural Language Processing (EMNLP), 2007. P. Koehn, F.J. Och, and D. Marcu. 2003. Statistical Phrase-Based Translation. In Proceedings of the Human Language Technology Conference (HLT-NAACL 2003). D. Kolovratn´ık and L. Pˇrikryl. 2008. Program´atorsk´a dokumentace k projektu Morfo. http://ufal.mff.cuni.cz/morfo/, 2008. G. Mann, R. McDonald, M. Mohri, N. Silberman, D. Walker. 2009. Efficient Large-Scale Distributed Training of Conditional Maximum Entropy Models. Advances in Neural Information Processing Systems (NIPS), 2009. J. Naughton. 2005. Czech. An Essential Grammar. Routledge, 2005. A.Y. Ng. 2004. Feature selection, L1 vs. L2 regularization, and rotational invariance. In Proceedings of the Twenty-first International Conference on Machine Learning. F.J. Och. 2003. Minimum Error Rate Training for Statistical Machine Translation. In Proc. ACL 2003. A. Ramanathan, H. Choudhary, A. Ghosh, P. Bhattacharyya. 2009. Case markers and Morphology: Addressing the crux of the fluency problem in EnglishHindi SMT. In Proc. ACL 2009. A. Stolcke. 2002. SRILM – An Extensible Language Modeling Toolkit. International Conference on Spoken Language Processing, 2002. M. Subotin. 2008. Generalizing Local Translation Models. Proceedings of SSST-2, Second Workshop on Syntax and Structure in Statistical Translation. R. Yeniterzi and K. Oflazer. 2010. Syntax-toMorphology Mapping in Factored Phrase-Based Statistical Machine Translation from English to Turkish. In Proc. ACL 2010. 238
| 2011 | 24 |
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 239–247, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Bayesian Inference for Zodiac and Other Homophonic Ciphers Sujith Ravi and Kevin Knight University of Southern California Information Sciences Institute Marina del Rey, California 90292 {sravi,knight}@isi.edu Abstract We introduce a novel Bayesian approach for deciphering complex substitution ciphers. Our method uses a decipherment model which combines information from letter n-gram language models as well as word dictionaries. Bayesian inference is performed on our model using an efficient sampling technique. We evaluate the quality of the Bayesian decipherment output on simple and homophonic letter substitution ciphers and show that unlike a previous approach, our method consistently produces almost 100% accurate decipherments. The new method can be applied on more complex substitution ciphers and we demonstrate its utility by cracking the famous Zodiac-408 cipher in a fully automated fashion, which has never been done before. 1 Introduction Substitution ciphers have been used widely in the past to encrypt secrets behind messages. These ciphers replace (English) plaintext letters with cipher symbols in order to generate the ciphertext sequence. There exist many published works on automatic decipherment methods for solving simple lettersubstitution ciphers. Many existing methods use dictionary-based attacks employing huge word dictionaries to find plaintext patterns within the ciphertext (Peleg and Rosenfeld, 1979; Ganesan and Sherman, 1993; Jakobsen, 1995; Olson, 2007). Most of these methods are heuristic in nature and search for the best deterministic key during decipherment. Others follow a probabilistic decipherment approach. Knight et al. (2006) use the Expectation Maximization (EM) algorithm (Dempster et al., 1977) to search for the best probabilistic key using letter n-gram models. Ravi and Knight (2008) formulate decipherment as an integer programming problem and provide an exact method to solve simple substitution ciphers by using letter n-gram models along with deterministic key constraints. Corlett and Penn (2010) work with large ciphertexts containing thousands of characters and provide another exact decipherment method using an A* search algorithm. Diaconis (2008) presents an analysis of Markov Chain Monte Carlo (MCMC) sampling algorithms and shows an example application for solving simple substitution ciphers. Most work in this area has focused on solving simple substitution ciphers. But there are variants of substitution ciphers, such as homophonic ciphers, which display increasing levels of difficulty and present significant challenges for decipherment. The famous Zodiac serial killer used one such cipher system for communication. In 1969, the killer sent a three-part cipher message to newspapers claiming credit for recent shootings and crimes committed near the San Francisco area. The 408-character message (Zodiac-408) was manually decoded by hand in the 1960’s. Oranchak (2008) presents a method for solving the Zodiac-408 cipher automatically with a dictionary-based attack using a genetic algorithm. However, his method relies on using plaintext words from the known solution to solve the cipher, which departs from a strict decipherment scenario. In this paper, we introduce a novel method for 239 solving substitution ciphers using Bayesian learning. 
Our novel contributions are as follows: • We present a new probabilistic decipherment approach using Bayesian inference with sparse priors, which can be used to solve different types of substitution ciphers. • Our new method combines information from word dictionaries along with letter n-gram models, providing a robust decipherment model which offsets the disadvantages faced by previous approaches. • We evaluate the Bayesian decipherment output on three different types of substitution ciphers and show that unlike a previous approach, our new method solves all the ciphers completely. • Using the Bayesian decipherment, we show for the first time a truly automated system that successfully solves the Zodiac-408 cipher. 2 Letter Substitution Ciphers We use natural language processing techniques to attack letter substitution ciphers. In a letter substitution cipher, every letter p in the natural language (plaintext) sequence is replaced by a cipher token c, according to some substitution key. For example, an English plaintext “H E L L O W O R L D ...” may be enciphered as: “N O E E I T I M E L ...” according to the key: p: ABCDEFGHIJKLMNOPQRSTUVWXYZ c: XYZLOHANBCDEFGIJKMPQRSTUVW where, “ ” represents the space character (word boundary) in the English and ciphertext messages. If the recipients of the ciphertext message have the substitution key, they can use it (in reverse) to recover the original plaintext. The sender can encrypt the message using one of many different cipher systems. The particular type of cipher system chosen determines the properties of the key. For example, the substitution key can be deterministic in both the encipherment and decipherment directions as shown in the above example—i.e., there is a 1-to1 correspondence between the plaintext letters and ciphertext symbols. Other types of keys exhibit nondeterminism either in the encipherment (or decipherment) or both directions. 2.1 Simple Substitution Ciphers The key used in a simple substitution cipher is deterministic in both the encipherment and decipherment directions, i.e., there is a 1-to-1 mapping between plaintext letters and ciphertext symbols. The example shown earlier depicts how a simple substitution cipher works. Data: In our experiments, we work with a 414letter simple substitution cipher. We encrypt an original English plaintext message using a randomly generated simple substitution key to create the ciphertext. During the encipherment process, we preserve spaces between words and use this information for decipherment—i.e., plaintext character “ ” maps to ciphertext character “ ”. Figure 1 (top) shows a portion of the ciphertext along with the original plaintext used to create the cipher. 2.2 Homophonic Ciphers A homophonic cipher uses a substitution key that maps a plaintext letter to more than one cipher symbol. For example, the English plaintext: “H E L L O W O R L D ...” may be enciphered as: “65 82 51 84 05 60 54 42 51 45 ...” according to the key: A: 09 12 33 47 53 67 78 92 B: 48 81 ... E: 14 16 24 44 46 55 57 64 74 82 87 ... L: 51 84 ... Z: 02 Here, “ ” represents the space character in both English and ciphertext. Notice the non-determinism involved in the enciphering direction—the English 240 letter “L” is substituted using different symbols (51, 84) at different positions in the ciphertext. These ciphers are more complex than simple substitution ciphers. Homophonic ciphers are generated via a non-deterministic encipherment process—the key is 1-to-many in the enciphering direction. 
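To make the 1-to-many key concrete, here is a small illustrative sketch built from a fragment of the key above (the space character passes through unchanged, as in our data); it is not part of our decipherment system.

```python
import random

# A fragment of a homophonic key like the one shown above (illustrative).
key = {
    "H": ["65"], "E": ["14", "82", "87"], "L": ["51", "84"],
    "O": ["05", "54"], "W": ["60"], "R": ["42"], "D": ["45"],
    " ": [" "],
}

def encipher(plaintext):
    # 1-to-many in the enciphering direction: any substitute may be chosen.
    return [random.choice(key[p]) for p in plaintext]

def decipher(ciphertext):
    # Deterministic in the deciphering direction: each cipher symbol
    # maps back to exactly one plaintext letter.
    reverse = {c: p for p, symbols in key.items() for c in symbols}
    return "".join(reverse[c] for c in ciphertext)

cipher = encipher("HELLO WORLD")
print(cipher)            # e.g. ['65', '82', '51', '84', '05', ' ', ...]
print(decipher(cipher))  # HELLO WORLD
```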
The number of potential cipher symbol substitutes for a particular plaintext letter is often proportional to the frequency of that letter in the plaintext language— for example, the English letter “E” is assigned more cipher symbols than “Z”. The objective of this is to flatten out the frequency distribution of ciphertext symbols, making a frequency-based cryptanalysis attack difficult. The substitution key is, however, deterministic in the decipherment direction—each ciphertext symbol maps to a single plaintext letter. Since the ciphertext can contain more than 26 types, we need a larger alphabet system—we use a numeric substitution alphabet in our experiments. Data: For our decipherment experiments on homophonic ciphers, we use the same 414-letter English plaintext used in Section 2.1. We encrypt this message using a homophonic substitution key (available from http://www.simonsingh.net/The Black Chamber/ho mophoniccipher.htm). As before, we preserve spaces between words in the ciphertext. Figure 1 (middle) displays a section of the homophonic cipher (with spaces) and the original plaintext message used in our experiments. 2.3 Homophonic Ciphers without spaces (Zodiac-408 cipher) In the previous two cipher systems, the wordboundary information was preserved in the cipher. We now consider a more difficult homophonic cipher by removing space characters from the original plaintext. The English plaintext from the previous example now looks like this: “HELLOWORLD ...” and the corresponding ciphertext is: “65 82 51 84 05 60 54 42 51 45 ...” Without the word boundary information, typical dictionary-based decipherment attacks fail on such ciphers. Zodiac-408 cipher: Homophonic ciphers without spaces have been used extensively in the past to encrypt secret messages. One of the most famous homophonic ciphers in history was used by the infamous Zodiac serial killer in the 1960’s. The killer sent a series of encrypted messages to newspapers and claimed that solving the ciphers would reveal clues to his identity. The identity of the Zodiac killer remains unknown to date. However, the mystery surrounding this has sparked much interest among cryptanalysis experts and amateur enthusiasts. The Zodiac messages include two interesting ciphers: (1) a 408-symbol homophonic cipher without spaces (which was solved manually by hand), and (2) a similar looking 340-symbol cipher that has yet to be solved. Here is a sample of the Zodiac-408 cipher message: ... and the corresponding section from the original English plaintext message: I L I K E K I L L I N G P E O P L E B E C A U S E I T I S S O M U C H F U N I T I S M O R E F U N T H A N K I L L I N G W I L D G A M E I N T H E F O R R E S T B E C A U S E M A N I S T H E M O S T D A N G E R O U E A N A M A L O F A L L T O K I L L S O M E T H I N G G I ... Besides the difficulty with missing word boundaries and non-determinism associated with the key, the Zodiac-408 cipher poses several additional challenges which makes it harder to solve than any standard homophonic cipher. There are spelling mistakes in the original message (for example, the English word “PARADISE” is misspelt as 241 “PARADICE”) which can divert a dictionary-based attack. Also, the last 18 characters of the plaintext message does not seem to make any sense (“EBEORIETEMETHHPITI”). Data: Figure 1 (bottom) displays the Zodiac-408 cipher (consisting of 408 tokens, 54 symbol types) along with the original plaintext message. 
We run the new decipherment method (described in Section 3.1) and show that our approach can successfully solve the Zodiac-408 cipher. 3 Decipherment Given a ciphertext message c1...cn, the goal of decipherment is to uncover the hidden plaintext message p1...pn. The size of the keyspace (i.e., number of possible key mappings) that we have to navigate during decipherment is huge—a simple substitution cipher has a keyspace size of 26!, whereas a homophonic cipher such as the Zodiac-408 cipher has 2654 possible key mappings. Next, we describe a new Bayesian decipherment approach for tackling substitution ciphers. 3.1 Bayesian Decipherment Bayesian inference methods have become popular in natural language processing (Goldwater and Griffiths, 2007; Finkel et al., 2005; Blunsom et al., 2009; Chiang et al., 2010). Snyder et al. (2010) proposed a Bayesian approach in an archaeological decipherment scenario. These methods are attractive for their ability to manage uncertainty about model parameters and allow one to incorporate prior knowledge during inference. A common phenomenon observed while modeling natural language problems is sparsity. For simple letter substitution ciphers, the original substitution key exhibits a 1-to-1 correspondence between the plaintext letters and cipher types. It is not easy to model such information using conventional methods like EM. But we can easily specify priors that favor sparse distributions within the Bayesian framework. Here, we propose a novel approach for deciphering substitution ciphers using Bayesian inference. Rather than enumerating all possible keys (26! for a simple substitution cipher), our Bayesian framework requires us to sample only a small number of keys during the decipherment process. Probabilistic Decipherment: Our decipherment method follows a noisy-channel approach. We are faced with a ciphertext sequence c = c1...cn and we want to find the (English) letter sequence p = p1...pn that maximizes the probability P(p|c). We first formulate a generative story to model the process by which the ciphertext sequence is generated. 1. Generate an English plaintext sequence p = p1...pn, with probability P(p). 2. Substitute each plaintext letter pi with a ciphertext token ci, with probability P(ci|pi) in order to generate the ciphertext sequence c = c1...cn. We build a statistical English language model (LM) for the plaintext source model P(p), which assigns a probability to any English letter sequence. Our goal is to estimate the channel model parameters θ in order to maximize the probability of the observed ciphertext c: arg max θ P(c) = arg max θ X p Pθ(p, c) (1) = arg max θ X p P(p) · Pθ(c|p) (2) = arg max θ X p P(p) · n Y i=1 Pθ(ci|pi) (3) We estimate the parameters θ using Bayesian learning. In our decipherment framework, a Chinese Restaurant Process formulation is used to model both the source and channel. The detailed generative story using CRPs is shown below: 1. i ←1 2. Generate the English plaintext letter p1, with probability P0(p1) 3. Substitute p1 with cipher token c1, with probability P0(c1|p1) 4. i ←i + 1 5. Generate English plaintext letter pi, with probability α · P0(pi|pi−1) + Ci−1 1 (pi−1, pi) α + Ci−1 1 (pi−1) 242 Plaintext: D E C I P H E R M E N T I S T H E A N A L Y S I S O F D O C U M E N T S W R I T T E N I N A N C I E N T L A N G U A G E S W H E R E T H E ... Ciphertext: i n g c m p n q s n w f c v f p n o w o k t v c v h u i h g z s n w f v r q c f f n w c w o w g c n w f k o w a z o a n v r p n q n f p n ... 
Bayesian solution: D E C I P H E R M E N T I S T H E A N A L Y S I S O F D O C U M E N T S W R I T T E N I N A N C I E N T L A N G U A G E S W H E R E T H E ... Plaintext: D E C I P H E R M E N T I S T H E A N A L Y S I S O F D O C U M E N T S W R I T T E N I N ... Ciphertext: 79 57 62 93 95 68 44 77 22 74 59 97 32 86 85 56 82 67 59 67 84 52 86 73 11 99 10 45 90 13 61 27 98 71 49 19 60 80 88 85 20 55 59 32 91 ... Bayesian solution: D E C I P H E R M E N T I S T H E A N A L Y S I S O F D O C U M E N T S W R I T T E N I N ... Ciphertext: Plaintext: Bayesian solution (final decoding): I L I K E K I L L I N G P E O P L E B E C A U S E I T I S S O M U C H F U N I T I A M O R E F U N T H A N K I L L I N G W I L D G A M E I N T H E F O R R E S T B E C A U S E M A N I S T H E M O A T D A N G E R T U E A N A M A L O F A L L ... (with spaces shown): I L I K E K I L L I N G P E O P L E B E C A U S E I T I S S O M U C H F U N I T I A M O R E F U N T H A N K I L L I N G W I L D G A M E I N T H E F O R R E S T B E C A U S E M A N I S T H E M O A T D A N G E R T U E A N A M A L O F A L L ... Figure 1: Samples from the ciphertext sequence, corresponding English plaintext message and output from Bayesian decipherment (using word+3-gram LM) for three different ciphers: (a) Simple Substitution Cipher (top), (b) Homophonic Substitution Cipher with spaces (middle), and (c) Zodiac-408 Cipher (bottom). 243 6. Substitute pi with cipher token ci, with probability β · P0(ci|pi) + Ci−1 1 (pi, ci) β + Ci−1 1 (pi) 7. With probability Pquit, quit; else go to Step 4. This defines the probability of any given derivation, i.e., any plaintext hypothesis corresponding to the given ciphertext sequence. The base distribution P0 represents prior knowledge about the model parameter distributions. For the plaintext source model, we use probabilities from an English language model and for the channel model, we specify a uniform distribution (i.e., a plaintext letter can be substituted with any given cipher type with equal probability). Ci−1 1 represents the count of events occurring before plaintext letter pi in the derivation (we call this the “cache”). α and β represent Dirichlet prior hyperparameters over the source and channel models respectively. A large prior value implies that characters are generated from the base distribution P0, whereas a smaller value biases characters to be generated with reference to previous decisions inside the cache (favoring sparser distributions). Efficient inference via type sampling: We use a Gibbs sampling (Geman and Geman, 1984) method for performing inference on our model. We could follow a point-wise sampling strategy, where we sample plaintext letter choices for every cipher token, one at a time. But we already know that the substitution ciphers described here exhibit determinism in the deciphering direction,1 i.e., although we have no idea about the key mappings themselves, we do know that there exists only a single plaintext letter mapping for every cipher symbol type in the true key. So sampling plaintext choices for every cipher token separately is not an efficient strategy— our sampler may spend too much time exploring invalid keys (which map the same cipher symbol to different plaintext letters). Instead, we use a type sampling technique similar to the one proposed by Liang et al. (2010). Under 1This assumption does not strictly apply to the Zodiac-408 cipher where a few cipher symbols exhibit non-determinism in the decipherment direction as well. 
this scheme, we sample plaintext letter choices for each cipher symbol type. In every step, we sample a new plaintext letter for a cipher type and update the entire plaintext hypothesis (i.e., plaintext letters at all corresponding positions) to reflect this change. For example, if we sample a new choice pnew for a cipher symbol which occurs at positions 4, 10, 18, then we update plaintext letters p4, p10 and p18 with the new choice pnew. Using the property of exchangeability, we derive an incremental formula for re-scoring the probability of a new derivation based on the probability of the old derivation—when sampling at position i, we pretend that the area affected (within a context window around i) in the current plaintext hypothesis occurs at the end of the corpus, so that both the old and new derivations share the same cache.2 While we may make corpus-wide changes to a derivation in every sampling step, exchangeability allows us to perform scoring in an efficient manner. Combining letter n-gram language models with word dictionaries: Many existing probabilistic approaches use statistical letter n-gram language models of English to assign P(p) probabilities to plaintext hypotheses during decipherment. Other decryption techniques rely on word dictionaries (using words from an English dictionary) for attacking substitution ciphers. Unlike previous approaches, our decipherment method combines information from both sources— letter n-grams and word dictionaries. We build an interpolated word+n-gram LM and use it to assign P(p) probabilities to any plaintext letter sequence p1...pn.3 The advantage is that it helps direct the sampler towards plaintext hypotheses that resemble natural language—high probability letter sequences which form valid words such as “H E L L O” instead of sequences like “‘T X H R T”. But in addition to this, using letter n-gram information makes 2The relevant context window that is affected when sampling at position i is determined by the word boundaries to the left and right of i. 3We set the interpolation weights for the word and n-gram LM as (0.9, 0.1). The word-based LM is constructed from a dictionary consisting of 9,881 frequently occurring words collected from Wikipedia articles. We train the letter n-gram LM on 50 million words of English text available from the Linguistic Data Consortium. 244 our model robust against variations in the original plaintext (for example, unseen words or misspellings as in the case of Zodiac-408 cipher) which can easily throw off dictionary-based attacks. Also, it is hard for a point-wise (or type) sampler to “find words” starting from a random initial sample, but easier to “find n-grams”. Sampling for ciphers without spaces: For ciphers without spaces, dictionaries are hard to use because we do not know where words start and end. We introduce a new sampling operator which counters this problem and allows us to perform inference using the same decipherment model described earlier. In a first sampling pass, we sample from 26 plaintext letter choices (e.g., “A”, “B”, “C”, ...) for every cipher symbol type as before. We then run a second pass using a new sampling operator that iterates over adjacent plaintext letter pairs pi−1, pi in the current hypothesis and samples from two choices—(1) add a word boundary (space character “ ”) between pi−1 and pi, or (2) remove an existing space character between pi−1 and pi. For example, given the English plaintext hypothesis “... 
A B O Y ...”, there are two sampling choices for the letter pair A,B in the second step. If we decide to add a word boundary, our new plaintext hypothesis becomes “... A B O Y ...”. We compute the derivation probability of the new sample using the same efficient scoring procedure described earlier. The new strategy allows us to apply Bayesian decipherment even to ciphers without spaces. As a result, we now have a new decipherment method that consistently works for a range of different types of substitution ciphers. Decoding the ciphertext: After the sampling run has finished,4 we choose the final sample as our English plaintext decipherment output. 4For letter substitution decipherment we want to keep the language model probabilities fixed during training, and hence we set the prior on that model to be high (α = 104). We use a sparse prior for the channel (β = 0.01). We instantiate a key which matches frequently occurring plaintext letters to frequent cipher symbols and use this to generate an initial sample for the given ciphertext and run the sampler for 5000 iterations. We use a linear annealing schedule during sampling decreasing the temperature from 10 →1. 4 Experiments and Results We run decipherment experiments on different types of letter substitution ciphers (described in Section 2). In particular, we work with the following three ciphers: (a) 414-letter Simple Substitution Cipher (b) 414-letter Homophonic Cipher (with spaces) (c) Zodiac-408 Cipher Methods: For each cipher, we run and compare the output from two different decipherment approaches: 1. EM Method using letter n-gram LMs following the approach of Knight et al. (2006). They use the EM algorithm to estimate the channel parameters θ during decipherment training. The given ciphertext c is then decoded by using the Viterbi algorithm to choose the plaintext decoding p that maximizes P(p)·Pθ(c|p)3, stretching the channel probabilities. 2. Bayesian Decipherment method using word+n-gram LMs (novel approach described in Section 3.1). Evaluation: We evaluate the quality of a particular decipherment as the percentage of cipher tokens that are decoded correctly. Results: Figure 2 compares the decipherment performance for the EM method with Bayesian decipherment (using type sampling and sparse priors) on three different types of substitution ciphers. Results show that our new approach (Bayesian) outperforms the EM method on all three ciphers, solving them completely. Even with a 3-gram letter LM, our method yields a +63% improvement in decipherment accuracy over EM on the homophonic cipher with spaces. We observe that the word+3-gram LM proves highly effective when tackling more complex ciphers and cracks the Zodiac-408 cipher. Figure 1 shows samples from the Bayesian decipherment output for all three ciphers. For ciphers without spaces, our method automatically guesses the word boundaries for the plaintext hypothesis. 245 Method LM Accuracy (%) on 414-letter Simple Substitution Cipher Accuracy (%) on 414-letter Homophonic Substitution Cipher (with spaces) Accuracy (%) on Zodiac408 Cipher 1. EM 2-gram 83.6 30.9 3-gram 99.3 32.6 0.3∗ (∗28.8 with 100 restarts) 2. 
Bayesian 3-gram 100.0 95.2 23.0 word+2-gram 100.0 100.0 word+3-gram 100.0 100.0 97.8 Figure 2: Comparison of decipherment accuracies for EM versus Bayesian method when using different language models of English on the three substitution ciphers: (a) 414-letter Simple Substitution Cipher, (b) 414-letter Homophonic Substitution Cipher (with spaces), and (c) the famous Zodiac-408 Cipher. For the Zodiac-408 cipher, we compare the performance achieved by Bayesian decipherment under different settings: • Letter n-gram versus Word+n-gram LMs— Figure 2 shows that using a word+3-gram LM instead of a 3-gram LM results in +75% improvement in decipherment accuracy. • Sparse versus Non-sparse priors—We find that using a sparse prior for the channel model (β = 0.01 versus 1.0) helps for such problems and produces better decipherment results (97.8% versus 24.0% accuracy). • Type versus Point-wise sampling—Unlike point-wise sampling, type sampling quickly converges to better decipherment solutions. After 5000 sampling passes over the entire data, decipherment output from type sampling scores 97.8% accuracy compared to 14.5% for the point-wise sampling run.5 We also perform experiments on shorter substitution ciphers. On a 98-letter simple substitution cipher, EM using 3-gram LM achieves 41% accuracy, whereas the method from Ravi and Knight (2009) scores 84% accuracy. Our Bayesian method performs the best in this case, achieving 100% with word+3-gram LM. 5 Conclusion In this work, we presented a novel Bayesian decipherment approach that can effectively solve a va5Both sampling runs were seeded with the same random initial sample. riety of substitution ciphers. Unlike previous approaches, our method combines information from letter n-gram language models and word dictionaries and provides a robust decipherment model. We empirically evaluated the method on different substitution ciphers and achieve perfect decipherments on all of them. Using Bayesian decipherment, we can successfully solve the Zodiac-408 cipher—the first time this is achieved by a fully automatic method in a strict decipherment scenario. For future work, there are other interesting decipherment tasks where our method can be applied. One challenge is to crack the unsolved Zodiac-340 cipher, which presents a much harder problem than the solved version. Acknowledgements The authors would like to thank the reviewers for their comments. This research was supported by NSF grant IIS-0904684. References Phil Blunsom, Trevor Cohn, Chris Dyer, and Miles Osborne. 2009. A Gibbs sampler for phrasal synchronous grammar induction. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing (ACL-IJCNLP), pages 782–790. David Chiang, Jonathan Graehl, Kevin Knight, Adam Pauls, and Sujith Ravi. 2010. Bayesian inference for finite-state transducers. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics - Human Language Technologies (NAACL/HLT), pages 447–455. 246 Eric Corlett and Gerald Penn. 2010. An exact A* method for deciphering letter-substitution ciphers. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1040–1047. Arthur P. Dempster, Nan M. Laird, and Donald B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38. 
Persi Diaconis. 2008. The Markov Chain Monte Carlo revolution. Bulletin of the American Mathematical Society, 46(2):179–205. Jenny Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by Gibbs sampling. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL), pages 363– 370. Ravi Ganesan and Alan T. Sherman. 1993. Statistical techniques for language recognition: An introduction and guide for cryptanalysts. Cryptologia, 17(4):321– 366. Stuart Geman and Donald Geman. 1984. Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6(6):721–741. Sharon Goldwater and Thomas Griffiths. 2007. A fully Bayesian approach to unsupervised part-of-speech tagging. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 744– 751. Thomas Jakobsen. 1995. A fast method for cryptanalysis of substitution ciphers. Cryptologia, 19(3):265–274. Kevin Knight, Anish Nair, Nishit Rathod, and Kenji Yamada. 2006. Unsupervised analysis for decipherment problems. In Proceedings of the Joint Conference of the International Committee on Computational Linguistics and the Association for Computational Linguistics, pages 499–506. Percy Liang, Michael I. Jordan, and Dan Klein. 2010. Type-based MCMC. In Proceedings of the Conference on Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 573– 581. Edwin Olson. 2007. Robust dictionary attack of short simple substitution ciphers. Cryptologia, 31(4):332– 342. David Oranchak. 2008. Evolutionary algorithm for decryption of monoalphabetic homophonic substitution ciphers encoded as constraint satisfaction problems. In Proceedings of the 10th Annual Conference on Genetic and Evolutionary Computation, pages 1717–1718. Shmuel Peleg and Azriel Rosenfeld. 1979. Breaking substitution ciphers using a relaxation algorithm. Comm. ACM, 22(11):598–605. Sujith Ravi and Kevin Knight. 2008. Attacking decipherment problems optimally with low-order n-gram models. In Proceedings of the Empirical Methods in Natural Language Processing (EMNLP), pages 812– 819. Sujith Ravi and Kevin Knight. 2009. Probabilistic methods for a Japanese syllable cipher. In Proceedings of the International Conference on the Computer Processing of Oriental Languages (ICCPOL), pages 270– 281. Benjamin Snyder, Regina Barzilay, and Kevin Knight. 2010. A statistical model for lost language decipherment. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1048–1057. 247
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 248–257, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Interactive Topic Modeling Yuening Hu Department of Computer Science University of Maryland [email protected] Jordan Boyd-Graber iSchool University of Maryland [email protected] Brianna Satinoff Department of Computer Science University of Maryland [email protected] Abstract Topic models have been used extensively as a tool for corpus exploration, and a cottage industry has developed to tweak topic models to better encode human intuitions or to better model data. However, creating such extensions requires expertise in machine learning unavailable to potential end-users of topic modeling software. In this work, we develop a framework for allowing users to iteratively refine the topics discovered by models such as latent Dirichlet allocation (LDA) by adding constraints that enforce that sets of words must appear together in the same topic. We incorporate these constraints interactively by selectively removing elements in the state of a Markov Chain used for inference; we investigate a variety of methods for incorporating this information and demonstrate that these interactively added constraints improve topic usefulness for simulated and actual user sessions. 1 Introduction Probabilistic topic models, as exemplified by probabilistic latent semantic indexing (Hofmann, 1999) and latent Dirichlet allocation (LDA) (Blei et al., 2003) are unsupervised statistical techniques to discover the thematic topics that permeate a large corpus of text documents. Topic models have had considerable application beyond natural language processing in computer vision (Rob et al., 2005), biology (Shringarpure and Xing, 2008), and psychology (Landauer et al., 2006) in addition to their canonical application to text. For text, one of the few real-world applications of topic models is corpus exploration. Unannotated, noisy, and ever-growing corpora are the norm rather than the exception, and topic models offer a way to quickly get the gist a large corpus.1 1For examples, see Rexa http://rexa.info/, JSTOR Contrary to the impression given by the tables shown in topic modeling papers, topics discovered by topic modeling don’t always make sense to ostensible end users. Part of the problem is that the objective function of topic models doesn’t always correlate with human judgements (Chang et al., 2009). Another issue is that topic models — with their bagof-words vision of the world — simply lack the necessary information to create the topics as end-users expect. There has been a thriving cottage industry adding more and more information to topic models to correct these shortcomings; either by modeling perspective (Paul and Girju, 2010; Lin et al., 2006), syntax (Wallach, 2006; Gruber et al., 2007), or authorship (Rosen-Zvi et al., 2004; Dietz et al., 2007). Similarly, there has been an effort to inject human knowledge into topic models (Boyd-Graber et al., 2007; Andrzejewski et al., 2009; Petterson et al., 2010). However, these are a priori fixes. They don’t help a frustrated consumer of topic models staring at a collection of topics that don’t make sense. In this paper, we propose interactive topic modeling (ITM), an in situ method for incorporating human knowledge into topic models. 
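As a preview of the constraint mechanism developed in the following sections, the small simulation below (an illustration only; the vocabulary and hyperparameter values are arbitrary assumptions) shows why drawing each topic's word distribution from a tree-structured prior makes the probabilities of constrained words rise and fall together.

```python
import random

def draw_constrained_topic(vocab, constraint, beta=0.1, eta=100.0,
                           rng=random.Random(0)):
    """Draw one topic's word distribution with the words in `constraint` tied.

    Top level: a sparse Dirichlet over unconstrained words plus one branch for
    the constraint. Within the constraint: a near-uniform Dirichlet (large eta),
    so the constrained words share whatever mass the branch receives."""
    others = [w for w in vocab if w not in constraint]
    top = [rng.gammavariate(beta, 1.0) for _ in range(len(others) + 1)]
    top = [x / sum(top) for x in top]                    # Dirichlet via Gammas
    inner = [rng.gammavariate(eta, 1.0) for _ in constraint]
    inner = [x / sum(inner) for x in inner]
    probs = dict(zip(others, top[:-1]))
    probs.update({w: top[-1] * p for w, p in zip(constraint, inner)})
    return probs

vocab = ["dog", "bark", "tree", "plant", "factory", "leash"]
for _ in range(3):
    topic = draw_constrained_topic(vocab, ["plant", "factory"])
    # "plant" and "factory" come out both high or both low in every draw
    print(round(topic["plant"], 3), round(topic["factory"], 3))
```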
In Section 2, we review prior work on creating probabilistic models that incorporate human knowledge, which we extend in Section 3 to apply to ITM sessions. Section 4 discusses the implementation of this process during the inference process. Via a motivating example in Section 5, simulated ITM sessions in Section 6, and a real interactive test in Section 7, we demonstrate that our approach is able to focus a user’s desires in a topic model, better capture the key properties of a corpus, and capture diverse interests from users on the web. http://showcase.jstor.org/blei/, and the NIH https://app.nihmaps.org/nih/. 248 2 Putting Knowledge in Topic Models At a high level, topic models such as LDA take as input a number of topics K and a corpus. As output, a topic model discovers K distributions over words — the namesake topics — and associations between documents and topics. In LDA both of these outputs are multinomial distributions; typically they are presented to users in summary form by listing the elements with highest probability. For an example of topics discovered from a 20-topic model of New York Times editorials, see Table 1. When presented with poor topics learned from data, users can offer a number of complaints:2 these documents should have similar topics but don’t (Daum´e III, 2009); this topic should have syntactic coherence (Gruber et al., 2007; Boyd-Graber and Blei, 2008); this topic doesn’t make any sense at all (Newman et al., 2010); this topic shouldn’t be associated with this document but is (Ramage et al., 2009); these words shouldn’t be the in same topic but are (Andrzejewski et al., 2009); or these words should be in the same topic but aren’t (Andrzejewski et al., 2009). Many of these complaints can be addressed by using “must-link” constraints on topics, retaining Andrzejewski et al’s (2009) terminology borrowed from the database literature. A “must-link” constraint is a group of words whose probability must be correlated in the topic. For example, Figure 1 shows an example constraint: {plant, factory}. After this constraint is added, the probabilities of “plant” and “factory” in each topic are likely to both be high or both be low. It’s unlikely for “plant” to have high probability in a topic and “factory” to have a low probability. In the next section, we demonstrate how such constraints can be built into a model and how they can even be added while inference is underway. In this paper, we view constraints as transitive; if “plant” is in a constraint with “factory” and “factory” is in a constraint with “production,” then “plant” is in a constraint with “production.” Making this assumption can simplify inference slightly, which we take advantage of in Section 3.1, but the real reason for this assumption is because not doing so would 2Citations in this litany of complaints are offline solutions for addressing the problem; the papers also give motivation why such complaints might arise. Constraints Prior Structure {} dog bark tree plant factory leash β β β β β β {plant, factory} dog bark tree plant factory leash β β β η 2ββ η {plant, factory} {dog, bark, leash} dog bark tree plant factory leash η η β η 2β η η 3β Figure 1: How adding constraints (left) creates new topic priors (right). The trees represent correlated distributions (assuming η >> β). After the {plant, factory} constraint is added, it is now highly unlikely for a topic drawn from the distribution to have a high probability for “plant” and a low probability for “factory” or vice versa. 
The bottom panel adds an additional constraint, so now dog-related words are also correlated. Notice that the two constraints themselves are uncorrelated. It’s possible for both, either, or none of “bark” and “plant” (for instance) to have high probability in a topic. introduce ambiguity over the path associated with an observed token in the generative process. As long as a word is either in a single constraint or in the general vocabulary, there is only a single path. The details of this issue are further discussed in Section 4. 3 Constraints Shape Topics As discussed above, LDA views topics as distributions over words, and each document expresses an admixture of these topics. For “vanilla” LDA (no constraints), these are symmetric Dirichlet distributions. A document is composed of a number of observed words, which we call tokens to distinguish specific observations from the more abstract word (type) associated with each token. Because LDA assumes a document’s tokens are interchangeable, it treats the document as a bag-of-words, ignoring potential relations between words. This problem with vanilla LDA can be solved by encoding constraints, which will “guide” different words into the same topic. Constraints can be added to vanilla LDA by replacing the multinomial distribution over words for each topic with a collection of 249 tree-structured multinomial distributions drawn from a prior as depicted in Figure 1. By encoding word distributions as a tree, we can preserve conjugacy and relatively simple inference while encouraging correlations between related concepts (Boyd-Graber et al., 2007; Andrzejewski et al., 2009; Boyd-Graber and Resnik, 2010). Each topic has a top-level distribution over words and constraints, and each constraint in each topic has second-level distribution over the words in the constraint. Critically, the perconstraint distribution over words is engineered to be non-sparse and close to uniform. The top level distribution encodes which constraints (and unconstrained words) to include; the lower-level distribution forces the probabilities to be correlated for each of the constraints. In LDA, a document’s token is produced in the generative process by choosing a topic z and sampling a word from the multinomial distribution φz of topic z. For a constrained topic, the process now can take two steps. First, a first-level node in the tree is selected from φz. If that is an unconstrained word, the word is emitted and the generative process for that token is done. Otherwise, if the first level node is constraint l, then choose a word to emit from the constraint’s distribution over words πz,l. More concretely, suppose for a corpus with M documents we have a set of constraints Ω. The prior structure has B branches (one branch for each word not in a constraint and one for each constraint). Then the generative process for constrained LDA is: 1. For each topic i ∈{1, . . . K}: (a) draw a distribution over the B branches (words and constraints) φi ∼Dir(⃗β), and (b) for each constraint Ωj ∈Ω, draw a distribution over the words in the constraint πi,j ∼Dir(η), where πi,j is a distribution over the words in Ωj 2. Then for each document d ∈{1, . . . M}: (a) first draw a distribution over topics θd ∼Dir(α), (b) then for each token n ∈{1, . . . Nd}: i. choose a topic assignment zd,n ∼Mult(θd), and then ii. choose either a constraint or word from Mult(φzd,n): A. if we chose a word, emit that word wd,n B. 
otherwise if we chose a constraint index ld,n, emit a word wd,n from the constraint’s distribution over words in topic zd,n: wd,n ∼ Mult(πzd,n,ld,n). In this model, α, β, and η are Dirichlet hyperparameters set by the user; their role is explained below. 3.1 Gibbs Sampling for Topic Models In topic modeling, collapsed Gibbs sampling (Griffiths and Steyvers, 2004) is a standard procedure for obtaining a Markov chain over the latent variables in the model. Given certain technical conditions, the stationary distribution of the Markov chain is the posterior (Neal, 1993). Given M documents the state of a Gibbs sampler for LDA consists of topic assignments for each token in the corpus and is represented as Z = {z1,1 . . . z1,N1, z2,1, . . . zM,NM }. In each iteration, every token’s topic assignment zd,n is resampled based on topic assignments for all the tokens except for zd,n. (This subset of the state is denoted Z−(d,n)). The sampling equation for zd,n is p(zd,n = k|Z−(d,n), α, β) ∝Td,k + α Td,· + Kα Pk,wd,n + β Pk,· + V β (1) where Td,k is the number of times topic k is used in document d, Pk,wd,n is the number of times the type wd,n is assigned to topic k, and α, β are the hyperparameters of the two Dirichlet distributions, and B is the number of top-level branches (this is the vocabulary size for vanilla LDA). When a dot replaces a subscript of a count, it represents the marginal sum over all possible topics or words, e.g. Td,· = P k Td,k. The count statistics P and T provide summaries of the state. Typically, these only change based on assignments of latent variables in the sampler; in Section 4 we describe how changes in the model’s structure (in addition to the latent state) can be reflected in these count statistics. Contrasting with the above inference is the inference for a constrained model. (For a derivation, see Boyd-Graber, Blei, and Zhu (2007) for the general case or Andrzejewski, Zhu, and Craven (2009) for the specific case of constraints.) In this case the sampling equation for zd,n is changed to p(zd,n = k|Z−(d,n), α, β, η) ∝ Td,k+α Td,·+Kα Pk,wd,n +β Pk,·+V β if ∀l, wd,n ̸∈Ωl Td,k+α Td,·+Kα Pk,l+Clβ Pk,·+V β Wk,l,wd,n +η Wk,l,·+Clη wd,n ∈Ωl , (2) where Pk,wd,n is the number of times the unconstrained word wd,n appears in topic k; Pk,l is the 250 number of times any word of constraint Ωl appears in topic k; Wk,l,wd,n is the number of times word wd,n appears in constraint Ωl in topic k; V is the vocabulary size; Cl is the number of words in constraint Ωl. Note the differences between these two samplers for constrained words; however, for unconstrained LDA and for unconstrained words in constrained LDA, the conditional probability is the same. In order to make the constraints effective, we set the constraint word-distribution hyperparameter η to be much larger than the hyperparameter for the distribution over constraints and vocabulary β. This gives the constraints higher weight. Normally, estimating hyperparameters is important for topic modeling (Wallach et al., 2009). However, in ITM, sampling hyperparameters often (but not always) undoes the constraints (by making η comparable to β), so we keep the hyperparameters fixed. 4 Interactively adding constraints For a static model, inference in ITM is the same as in previous models (Andrzejewski et al., 2009). In this section, we detail how interactively changing constraints can be accommodated in ITM, smoothly transitioning from unconstrained LDA (n.b. Equation 1) to constrained LDA (n.b. 
Equation 2) with one constraint, to constrained LDA with two constraints, etc. A central tool that we will use is the strategic unassignment of states, which we call ablation (distinct from feature ablation in supervised learning). As described in the previous section, a sampler stores the topic assignment of each token. In the implementation of a Gibbs sampler, unassignment is done by setting a token’s topic assignment to an invalid topic (e.g. -1, as we use here) and decrementing any counts associated with that word. The constraints created by users implicitly signal that words in constraints don’t belong in a given topic. In other models, this input is sometimes used to “fix,” i.e. deterministically hold constant topic assignments (Ramage et al., 2009). Instead, we change the underlying model, using the current topic assignments as a starting position for a new Markov chain with some states strategically unassigned. How much of the existing topic assignments we use leads to four different options, which are illustrated in Figure 2. Previous New [bark:2, dog:3, leash:3 dog:2] [bark:2, bark:2, plant:2, tree:3] [tree:2,play:2,forest:1,leash:2] [bark:2, dog:3, leash:3 dog:2] [bark:2, bark:2, plant:2, tree:3] [tree:2,play:2,forest:1,leash:2] [bark:2, dog:3, leash:3 dog:2] [bark:2, bark:2, plant:2, tree:3] [tree:2,play:2,forest:1,leash:2] [bark:-1, dog:-1, leash:-1 dog:-1] [bark:-1, bark:-1, plant:-1, tree:-1] [tree:2,play:2,forest:1,leash:2] [bark:2, dog:3, leash:3 dog:3] [bark:2, bark:2, plant:2, tree:3] [tree:2,play:2,forest:1,leash:2] [bark:-1, dog:-1, leash:3 dog:-1] [bark:-1, bark:-1, plant:2, tree:3] [tree:2,play:2,forest:1,leash:2] [bark:2, dog:3, leash:3 dog:2] [bark:2, bark:2, plant:2, tree:3] [tree:2,play:2,forest:1,leash:2] [bark:-1, dog:-1, leash:-1 dog:-1] [bark:-1, bark:-1, plant:-1, tree:-1] [tree:-1,play:-1,forest:-1,leash:-1] None Term Doc All Figure 2: Four different strategies for state ablation after the words “dog” and “bark” are added to the constraint {“leash,” “puppy”} to make the constraint {“dog,” “bark,” “leash,” “puppy”}. The state is represented by showing the current topic assignment after each word (e.g. “leash” in the first document has topic 3, while “forest” in the third document has topic 1). On the left are the assignments before words were added to constraints, and on the right are the ablated assignments. Unassigned words are given the new topic assignment -1 and are highlighted in red. All We could revoke all state assignments, essentially starting the sampler from scratch. This does not allow interactive refinement, as there is nothing to enforce that the new topics will be in any way consistent with the existing topics. Once the topic assignments of all states are revoked, the counts for T, P and W (as described in Section 3.1) will be zero, retaining no information about the state the user observed. Doc Because topic models treat the document context as exchangeable, a document is a natural context for partial state ablation. Thus if a user adds a set of words S to constraints, then we have reason to suspect that all documents containing any one of S may have incorrect topic assignments. This is reflected in the state of the sampler by performing the UNASSIGN (Algorithm 1) operation for each word in any document containing a word added to a constraint. 
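A small, runnable rendering of the UNASSIGN operation in Algorithm 1 follows; the plain-dictionary count tables, the single word-to-constraint map (which encodes the transitivity assumption), and the example values are simplifications assumed for illustration rather than the implementation used in the experiments.

```python
from collections import defaultdict

# Count tables from Section 3.1:
#   T[(d, k)]    -- how often topic k is used in document d
#   P[(k, b)]    -- top-level branch b (an unconstrained word or a constraint id)
#   W[(k, l, w)] -- word w under constraint l in topic k
T, P, W = defaultdict(int), defaultdict(int), defaultdict(int)

def unassign(d, n, w, z, constraints, assignments):
    """Revoke token n's topic assignment z in document d (Algorithm 1).

    `constraints` maps each word to its constraint id, or to None if the word
    is unconstrained; it should describe the tree under which the counts were
    accumulated (the "old" constraints in Algorithm 1)."""
    T[(d, z)] -= 1
    l = constraints.get(w)
    if l is None:
        P[(z, w)] -= 1            # unconstrained word: its own top-level branch
    else:
        P[(z, l)] -= 1            # constrained word: the constraint's branch ...
        W[(z, l, w)] -= 1         # ... and the word's count inside the constraint
    assignments[(d, n)] = -1      # mark the token as unassigned

# Example: "bark" (topic 2) in document 0, constrained with dog/leash/puppy.
assignments = {(0, 1): 2}
T[(0, 2)], P[(2, "c0")], W[(2, "c0", "bark")] = 1, 1, 1
unassign(0, 1, "bark", 2,
         {"bark": "c0", "dog": "c0", "leash": "c0", "puppy": "c0"}, assignments)
print(assignments[(0, 1)], T[(0, 2)], P[(2, "c0")], W[(2, "c0", "bark")])  # -1 0 0 0
```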
Algorithm 1 UNASSIGN(d, n, wd,n, zd,n = k) 1: T : Td,k ←Td,k −1 2: If wd,n /∈Ωold, P : Pk,wd,n ←Pk,wd,n −1 3: Else: suppose wd,n ∈Ωold m , P : Pk,m ←Pk,m −1 W : Wk,m,wd,n ←Wk,m,wd,n −1 251 This is equivalent to the Gibbs2 sampler of Yao et al. (2009) for incorporating new documents in a streaming context. Viewed in this light, a user is using words to select documents that should be treated as “new” for this refined model. Term Another option is to perform ablation only on the topic assignments of tokens whose words have added to a constraint. This applies the unassignment operation (Algorithm 1) only to tokens whose corresponding word appears in added constraints (i.e. a subset of the Doc strategy). This makes it less likely that other tokens in similar contexts will follow the words explicitly included in the constraints to new topic assignments. None The final option is to move words into constraints but keep the topic assignments fixed. Thus, P and W change, but not T, as described in Algorithm 2.3 This is arguably the simplest option, and in principle is sufficient, as the Markov chain should find a stationary distribution regardless of the starting position. In practice, however, this strategy is less interactive, as users don’t feel that their constraints are actually incorporated in the model, and inertia can keep the chain from reflecting the constraints. Algorithm 2 MOVE(d, n, wd,n, zd,n = k, Ωl) 1: If wd,n /∈Ωold, P : Pk,wd,n ←Pk,wd,n −1, Pk,l ←Pk,l + 1 W : Wk,l,wd,n ←Wk,l,wd,n + 1 2: Else, suppose wd,n ∈Ωold m , P : Pk,m ←Pk,m −1, Pk,l ←Pk,l + 1 W : Wk,m,wd,n ←Wk,m,wd,n −1 Wk,l,wd,n ←Wk,l,wd,n + 1 Regardless of what ablation scheme is used, after the state of the Markov chain is altered, the next step is to actually run inference forward, sampling assignments for the unassigned tokens for the “first” time and changing the topic assignment of previously assigned tokens. How many additional iterations are 3This assumes that there is only one possible path in the constraint tree that can generate a word; in other words, this assumes that constraints are transitive, as discussed at the end of Section 2. In the more general case, when words lack a unique path in the constraint tree, an additional latent variable specifies which possible paths in the constraint tree produced the word; this would have to be sampled. All other updating strategies are immune to this complication, as the assignments are left unassigned. required after adding constraints is a delicate tradeoff between interactivity and effectiveness, which we investigate further in the next sections. 5 Motivating Example To examine the viability of ITM, we begin with a qualitative demonstration that shows the potential usefulness of ITM. For this task, we used a corpus of about 2000 New York Times editorials from the years 1987 to 1996. We started by finding 20 initial topics with no constraints, as shown in Table 1 (left). Notice that topics 1 and 20 both deal with Russia. Topic 20 seems to be about the Soviet Union, with topic 1 about the post-Soviet years. We wanted to combine the two into a single topic, so we created a constraint with all of the clearly Russian or Soviet words (boris, communist, gorbachev, mikhail, russia, russian, soviet, union, yeltsin ). Running inference forward 100 iterations with the Doc ablation strategy yields the topics in Table 1 (right). The two Russia topics were combined into Topic 20. 
This combination also pulled in other relevant words that not near the top of either topic before: “moscow” and “relations.” Topic 1 is now more about elections in countries other than Russia. The other 18 topics changed little. While we combined the Russian topics, other researchers analyzing large corpora might preserve the Soviet vs. post-Soviet distinction but combine topics about American government. ITM allows tuning for specific tasks. 6 Simulation Experiment Next, we consider a process for evaluating our ITM using automatically derived constraints. These constraints are meant to simulate a user with a predefined list of categories (e.g. reviewers for journal submissions, e-mail folders, etc.). The categories grow more and more specific during the session as the simulated users add more constraint words. To test the ability of ITM to discover relevant subdivisions in a corpus, we use a dataset with predefined, intrinsic labels and assess how well the discovered latent topic structure can reproduce the corpus’s inherent structure. Specifically, for a corpus with M classes, we use the per-document topic distribution as a feature vector in a supervised classi252 Topic Words 1 election, yeltsin, russian, political, party, democratic, russia, president, democracy, boris, country, south, years, month, government, vote, since, leader, presidential, military 2 new, york, city, state, mayor, budget, giuliani, council, cuomo, gov, plan, year, rudolph, dinkins, lead, need, governor, legislature, pataki, david 3 nuclear, arms, weapon, defense, treaty, missile, world, unite, yet, soviet, lead, secretary, would, control, korea, intelligence, test, nation, country, testing 4 president, bush, administration, clinton, american, force, reagan, war, unite, lead, economic, iraq, congress, america, iraqi, policy, aid, international, military, see ... 20 soviet, lead, gorbachev, union, west, mikhail, reform, change, europe, leaders, poland, communist, know, old, right, human, washington, western, bring, party Topic Words 1 election, democratic, south, country, president, party, africa, lead, even, democracy, leader, presidential, week, politics, minister, percent, voter, last, month, years 2 new, york, city, state, mayor, budget, council, giuliani, gov, cuomo, year, rudolph, dinkins, legislature, plan, david, governor, pataki, need, cut 3 nuclear, arms, weapon, treaty, defense, war, missile, may, come, test, american, world, would, need, lead, get, join, yet, clinton, nation 4 president, administration, bush, clinton, war, unite, force, reagan, american, america, make, nation, military, iraq, iraqi, troops, international, country, yesterday, plan ... 20 soviet, union, economic, reform, yeltsin, russian, lead, russia, gorbachev, leaders, west, president, boris, moscow, europe, poland, mikhail, communist, power, relations Table 1: Five topics from a 20 topic topic model on the editorials from the New York times before adding a constraint (left) and after (right). After the constraint was added, which encouraged Russian and Soviet terms to be in the same topic, non-Russian terms gained increased prominence in Topic 1, and “Moscow” (which was not part of the constraint) appeared in Topic 20. fier (Hall et al., 2009). The lower the classification error rate, the better the model has captured the structure of the corpus.4 6.1 Generating automatic constraints We used the 20 Newsgroups corpus, which contains 18846 documents divided into 20 constituent newsgroups. 
We use these newsgroups as ground-truth labels.5 We simulate a user’s constraints by ranking words in the training split by their information gain (IG).6 After ranking the top 200 words for each class by IG, we delete words associated with multiple labels to prevent constraints for different labels from merging. The smallest class had 21 words remaining after removing duplicates (due to high 4Our goal is to understand the phenomena of ITM, not classification, so these classification results are well below state of the art. However, adding interactively selected topics to the state of the art features (tf-idf unigrams) gives a relative error reduction of 5.1%, while just adding topics from vanilla LDA gives a relative error reduction of 1.1%. Both measurements were obtained without tuning or weighting features, so presumably better results are possible. 5http://people.csail.mit.edu/jrennie/20Newsgroups/ In preprocessing, we deleted short documents, leaving 15160 documents, including 9131 training documents and 6029 test documents (default split). Tokenization, lemmatization, and stopword removal was performed using the Natural Language Toolkit (Loper and Bird, 2002). Topic modeling was performed using the most frequent 5000 lemmas as the vocabulary. 6IG is computed by the Rainbow toolbox http://www.cs.umass.edu/ mccallum/bow/rainbow/ overlaps of 125 words between “talk.religion.misc” and “soc.religion.christian,” and 110 words between “talk.religion.misc” and “alt.atheism”), so the top 21 words for each class were the ingredients for our simulated constraints. For example, for the class “soc.religion.christian,” the 21 constraint words include “catholic, scripture, resurrection, pope, sabbath, spiritual, pray, divine, doctrine, orthodox.” We simulate a user’s ITM session by adding a word to each of the 20 constraints until each of the constraints has 21 words. 6.2 Simulation scheme Starting with 100 base iterations, we perform successive rounds of refinement. In each round a new constraint is added corresponding to the newsgroup labels. Next, we perform one of the strategies for state ablation, add additional iterations of Gibbs sampling, use the newly obtained topic distribution of each document as the feature vector, and perform classification on the test / train split. We do this for 21 rounds until each label has 21 constraint words. The number of LDA topics is set to 20 to match the number of newsgroups. The hyperparameters for all experiments are α = 0.1, β = 0.01, and η = 100. At 100 iterations, the chain is clearly not converged. However, we chose this number of iterations because it more closely matches the likely use case as users do not wait for convergence. Moreover, while investigations showed that the patterns shown in Fig253 ure 4 were broadly consistent with larger numbers of iterations, such configurations sometimes had too much inertia to escape from local extrema. More iterations make it harder for the constraints to influence the topic assignment. 6.3 Investigating Ablation Strategies First, we investigate which ablation strategy best allows constraints to be incorporated. Figure 3 shows the classification error of six different ablation strategies based on the number of words in each constraint, ranging from 0 to 21. Each is averaged over five different chains using 10 additional iterations of Gibbs sampling per round (other numbers of iterations are discussed in Section 6.4). 
The model runs forward 10 iterations after the first round, another 10 iterations after the second round, etc. In general, as the number of words per constraint increases, the error decreases as models gain more information about the classes. Strategy Null is the non-interactive baseline that contains no constraints (vanilla LDA), but runs inference for a comparable number of rounds. All Initial and All Full are non-interactive baselines with all constraints known a priori. All Initial runs the model for the only the initial number of iterations (100 iterations in this experiment), while All Full runs the model for the total number of iterations added for the interactive version. (That is, if there were 21 rounds and each round of interactive modeling added 10 iterations, All Full would have 210 iterations more than All Initial). While Null sees no constraints, it serves as an upper baseline for the error rate (lower error being better) but shows the effect of additional inference. All Full is a lower baseline for the error rate since it both sees the constraints at the beginning and also runs for the maximum number of total iterations. All Initial sees the constraints before the other ablation techniques but it has fewer total iterations. The Null strategy does not perform as well as the interactive versions, especially with larger constraints. Both All Initial and All Full, however, show a larger variance (as denoted by error bands around the average trends) than the interactive schemes. This can be viewed as akin to simulated annealing, as the interactive search has more freedom to explore in early rounds. As more constraint words are added each round, the model is less free to explore. Words per constraint Error 0.38 0.40 0.42 0.44 0.46 0.48 0.50 0 5 10 15 20 Strategy All Full All Initial Doc None Null Term Figure 3: Error rate (y-axis, lower is better) using different ablation strategies as additional constraints are added (xaxis). Null represents standard LDA, as the unconstrained baseline. All Initial and All Full are non-interactive, constrained baselines. The results of None, Term, Doc are more stable (as denoted by the error bars), and the error rate is reduced gradually as more constraint words are added. The error rate of each interactive ablation strategy is (as expected) between the lower and upper baselines. Generally, the constraints will influence not only the topics of the constraint words, but also the topics of the constraint words’ context in the same document. Doc ablation gives more freedom for the constraints to overcome the inertia of the old topic distribution and move towards a new one influenced by the constraints. 6.4 How many iterations do users have to wait? Figure 4 shows the effect of using different numbers of Gibbs sampling iterations after changing a constraint. For each of the ablation strategies, we run {10, 20, 30, 50, 100} additional Gibbs sampling iterations. As expected, more iterations reduce error, although improvements diminish beyond 100 iterations. With more constraints, the impact of additional iterations is lessened, as the model has more a priori knowledge to draw upon. For all numbers of additional iterations, while the Null serves as the upper baseline on the error rate in all cases, the Doc ablation clearly outperforms the other ablation schemes, consistently yielding a lower error rate. Thus, there is a benefit when the model has a chance to relearn the document context when constraints are added. 
The difference is even larger with more iterations, suggesting Doc needs more iterations to “recover” from unassignment. The luxury of having hundreds or thousands of additional iterations for each constraint would be im254 Words per constraint Error 0.40 0.42 0.44 0.46 0.48 0.50 10 0 5 10 15 20 20 0 5 10 15 20 30 0 5 10 15 20 50 0 5 10 15 20 100 0 5 10 15 20 Strategy Doc None Null Term Figure 4: Classification accuracy by strategy and number of additional iterations. The Doc ablation strategy performs best, suggesting that the document context is important for ablation constraints. While more iterations are better, there is a tradeoff with interactivity. practical. For even moderately sized datasets, even one iteration per second can tax the patience of individuals who want to use the system interactively. Based on these results and an ad hoc qualitative examination of the resulting topics, we found that 30 additional iterations of inference was acceptable; this is used in later experiments. 7 Getting Humans in the Loop To move beyond using simulated users adding the same words regardless of what topics were discovered by the model, we needed to expose the model to human users. We solicited approximately 200 judgments from Mechanical Turk, a popular crowdsourcing platform that has been used to gather linguistic annotations (Snow et al., 2008), measure topic quality (Chang et al., 2009), and supplement traditional inference techniques for topic models (Chang, 2010). After presenting our interface for collecting judgments, we examine the results from these ITM sessions both quantitatively and qualitatively. 7.1 Interface for soliciting refinements Figure 5 shows the interface used in the Mechanical Turk tests. The left side of the screen shows the current topics in a scrollable list, with the top 30 words displayed for each topic. Users create constraints by clicking on words from the topic word lists. The word lists use a color-coding scheme to help the users keep track of which words they are currently grouping into constraints. The right side of the screen displays the existing constraints. Users can click on icons to edit or delete each one. The constraint currently being built is also shown. Figure 5: Interface for Mechanical Turk experiments. Users see the topics discovered by the model and select words (by clicking on them) to build constraints to be added to the model. Clicking on a word will remove that word from the current constraint. As in Section 6, we can compute the classification error for these users as they add words to constraints. The best users, who seemed to understand the task well, were able to decrease classification error. (Figure 6). The median user, however, had an error reduction indistinguishable from zero. Despite this, we can examine the users’ behavior to better understand their goals and how they interact with the system. 7.2 Untrained users and ITM Most of the large (10+ word) user-created constraints corresponded to the themes of the individual newsgroups, which users were able to infer from the discovered topics. Common constraint themes that 255 Round Relative Error 0.94 0.96 0.98 1.00 0 1 2 3 4 Best Session 10 Topics 20 Topics 50 Topics 75 Topics Figure 6: The relative error rate (using round 0 as a baseline) of the best Mechanical Turk user session for each of the four numbers of topics. While the 10-topic model does not provide enough flexibility to create good constraints, the best users could clearly improve classification with more topics. 
matched specific newsgroups included religion, space exploration, graphics, and encryption. Other common themes were broader than individual newsgroups (e.g. sports, government and computers). Others matched sub-topics of a single newsgroup, such as homosexuality, Israel or computer programming. Some users created inscrutable constraints, like (“better, people, right, take, things”) and (“fbi, let, says”). They may have just clicked random words to finish the task quickly. While subsequent users could delete poor constraints, most chose not to. Because we wanted to understand broader behavior we made no effort to squelch such responses. The two-word constraints illustrate an interesting contrast. Some pairs are linked together in the corpus, like (“jesus, christ”) and (“solar, sun”). With others, like (“even, number”) and (“book, list”), the users seem to be encouraging collocations to be in the same topic. However, the collocations may not be in any document in this corpus. Another user created a constraint consisting of male first names. A topic did emerge with these words, but the rest of the words in that topic seemed random, as male first names are not likely to co-occur in the same document. Not all sensible constraints led to successful topic changes. Many users grouped “mac” and “windows” together, but they were almost never placed in the same topic. The corpus includes separate newsgroups for Macintosh and Windows hardware, and divergent contexts of “mac” and “windows” overpowered the prior distribution. The constraint size ranged from one word to over 40. In general, the more words in the constraint, the more likely it was to noticeably affect the topic distribution. This observation makes sense given our ablation method. A constraint with more words will cause the topic assignments to be reset for more documents. 8 Discussion In this work, we introduced a means for end-users to refine and improve the topics discovered by topic models. ITM offers a paradigm for non-specialist consumers of machine learning algorithms to refine models to better reflect their interests and needs. We demonstrated that even novice users are able to understand and build constraints using a simple interface and that their constraints can improve the model’s ability to capture the latent structure of a corpus. As presented here, the technique for incorporating constraints is closely tied to inference with Gibbs sampling. However, most inference techniques are essentially optimization problems. As long as it is possible to define a transition on the state space that moves from one less-constrained model to another more-constrained model, other inference procedures can also be used. We hope to engage these algorithms with more sophisticated users than those on Mechanical Turk to measure how these models can help them better explore and understand large, uncurated data sets. As we learn their needs, we can add more avenues for interacting with topic models. Acknowledgements We would like to thank the anonymous reviewers, Edmund Talley, Jonathan Chang, and Philip Resnik for their helpful comments on drafts of this paper. This work was supported by NSF grant #0705832. Jordan Boyd-Graber is also supported by the Army Research Laboratory through ARL Cooperative Agreement W911NF-09-2-0072 and by NSF grant #1018625. Any opinions, findings, conclusions, or recommendations expressed are the authors’ and do not necessarily reflect those of the sponsors. 
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 258–267, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Faster and Smaller N-Gram Language Models Adam Pauls Dan Klein Computer Science Division University of California, Berkeley {adpauls,klein}@cs.berkeley.edu Abstract N-gram language models are a major resource bottleneck in machine translation. In this paper, we present several language model implementations that are both highly compact and fast to query. Our fastest implementation is as fast as the widely used SRILM while requiring only 25% of the storage. Our most compact representation can store all 4 billion n-grams and associated counts for the Google n-gram corpus in 23 bits per n-gram, the most compact lossless representation to date, and even more compact than recent lossy compression techniques. We also discuss techniques for improving query speed during decoding, including a simple but novel language model caching technique that improves the query speed of our language models (and SRILM) by up to 300%. 1 Introduction For modern statistical machine translation systems, language models must be both fast and compact. The largest language models (LMs) can contain as many as several hundred billion n-grams (Brants et al., 2007), so storage is a challenge. At the same time, decoding a single sentence can trigger hundreds of thousands of queries to the language model, so speed is also critical. As always, trade-offs exist between time, space, and accuracy, with many recent papers considering smallbut-approximate noisy LMs (Chazelle et al., 2004; Guthrie and Hepple, 2010) or small-but-slow compressed LMs (Germann et al., 2009). In this paper, we present several lossless methods for compactly but efficiently storing large LMs in memory. As in much previous work (Whittaker and Raj, 2001; Hsu and Glass, 2008), our methods are conceptually based on tabular trie encodings wherein each n-gram key is stored as the concatenation of one word (here, the last) and an offset encoding the remaining words (here, the context). After presenting a bit-conscious basic system that typifies such approaches, we improve on it in several ways. First, we show how the last word of each entry can be implicitly encoded, almost entirely eliminating its storage requirements. Second, we show that the deltas between adjacent entries can be efficiently encoded with simple variable-length encodings. Third, we investigate block-based schemes that minimize the amount of compressed-stream scanning during lookup. To speed up our language models, we present two approaches. The first is a front-end cache. Caching itself is certainly not new to language modeling, but because well-tuned LMs are essentially lookup tables to begin with, naive cache designs only speed up slower systems. We present a direct-addressing cache with a fast key identity check that speeds up our systems (or existing fast systems like the widelyused, speed-focused SRILM) by up to 300%. Our second speed-up comes from a more fundamental change to the language modeling interface. Where classic LMs take word tuples and produce counts or probabilities, we propose an LM that takes a word-and-context encoding (so the context need not be re-looked up) and returns both the probability and also the context encoding for the suffix of the original query. 
This setup substantially accelerates the scrolling queries issued by decoders, and also exploits language model state equivalence (Li and Khudanpur, 2008). Overall, we are able to store the 4 billion n-grams of the Google Web1T (Brants and Franz, 2006) cor258 pus, with associated counts, in 10 GB of memory, which is smaller than state-of-the-art lossy language model implementations (Guthrie and Hepple, 2010), and significantly smaller than the best published lossless implementation (Germann et al., 2009). We are also able to simultaneously outperform SRILM in both total size and speed. Our LM toolkit, which is implemented in Java and compatible with the standard ARPA file formats, is available on the web.1 2 Preliminaries Our goal in this paper is to provide data structures that map n-gram keys to values, i.e. probabilities or counts. Maps are fundamental data structures and generic implementations of mapping data structures are readily available. However, because of the sheer number of keys and values needed for n-gram language modeling, generic implementations do not work efficiently “out of the box.” In this section, we will review existing techniques for encoding the keys and values of an n-gram language model, taking care to account for every bit of memory required by each implementation. To provide absolute numbers for the storage requirements of different implementations, we will use the Google Web1T corpus as a benchmark. This corpus, which is on the large end of corpora typically employed in language modeling, is a collection of nearly 4 billion n-grams extracted from over a trillion tokens of English text, and has a vocabulary of about 13.5 million words. 2.1 Encoding Values In the Web1T corpus, the most frequent n-gram occurs about 95 billion times. Storing this count explicitly would require 37 bits, but, as noted by Guthrie and Hepple (2010), the corpus contains only about 770 000 unique counts, so we can enumerate all counts using only 20 bits, and separately store an array called the value rank array which converts the rank encoding of a count back to its raw count. The additional array is small, requiring only about 3MB, but we save 17 bits per n-gram, reducing value storage from around 16GB to about 9GB for Web1T. We can rank encode probabilities and back-offs in the same way, allowing us to be agnostic to whether 1http://code.google.com/p/berkeleylm/ we encode counts, probabilities and/or back-off weights in our model. In general, the number of bits per value required to encode all value ranks for a given language model will vary – we will refer to this variable as v . 2.2 Trie-Based Language Models The data structure of choice for the majority of modern language model implementations is a trie (Fredkin, 1960). Tries or variants thereof are implemented in many LM tool kits, including SRILM (Stolcke, 2002), IRSTLM (Federico and Cettolo, 2007), CMU SLM (Whittaker and Raj, 2001), and MIT LM (Hsu and Glass, 2008). Tries represent collections of n-grams using a tree. Each node in the tree encodes a word, and paths in the tree correspond to n-grams in the collection. Tries ensure that each n-gram prefix is represented only once, and are very efficient when n-grams share common prefixes. Values can also be stored in a trie by placing them in the appropriate nodes. Conceptually, trie nodes can be implemented as records that contain two entries: one for the word in the node, and one for either a pointer to the parent of the node or a list of pointers to children. 
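To make this conceptual picture concrete, the sketch below shows such a record-style node in Java (the language in which our toolkit is implemented); the class and its members are invented purely for illustration and do not correspond to the code of any toolkit discussed in this paper.

    import java.util.ArrayList;
    import java.util.List;

    // A deliberately naive record-style trie node: one field for the word at the
    // node, one for the value stored there, and a list of pointers to children.
    // In Java, every such node also pays per-object and per-list overhead.
    public class NaiveTrieNode {
        int word;                                  // id of the word at this node
        float value;                               // probability or count at this node
        List<NaiveTrieNode> children = new ArrayList<>();

        NaiveTrieNode child(int w) {               // linear scan over the children
            for (NaiveTrieNode c : children) if (c.word == w) return c;
            return null;
        }

        NaiveTrieNode addChild(int w) {            // add a child for word w if absent
            NaiveTrieNode c = child(w);
            if (c == null) { c = new NaiveTrieNode(); c.word = w; children.add(c); }
            return c;
        }
    }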
At a low level, however, naive implementations of tries can waste significant amounts of space. For example, the implementation used in SRILM represents a trie node as a C struct containing a 32-bit integer representing the word, a 64-bit memory2 pointer to the list of children, and a 32-bit floating point number representing the value stored at a node. The total storage for a node alone is 16 bytes, with additional overhead required to store the list of children. In total, the most compact implementation in SRILM uses 33 bytes per n-gram of storage, which would require around 116 GB of memory to store Web1T. While it is simple to implement a trie node in this (already wasteful) way in programming languages that offer low-level access to memory allocation like C/C++, the situation is even worse in higher level programming languages. In Java, for example, Cstyle structs are not available, and records are most naturally implemented as objects that carry an additional 64 bits of overhead. 2While 32-bit architectures are still in use today, their limited address space is insufficient for modern language models and we will assume all machines use a 64-bit architecture. 259 Despite its relatively large storage requirements, the implementation employed by SRILM is still widely in use today, largely because of its speed – to our knowledge, SRILM is the fastest freely available language model implementation. We will show that we can achieve access speeds comparable to SRILM but using only 25% of the storage. 2.3 Implicit Tries A more compact implementation of a trie is described in Whittaker and Raj (2001). In their implementation, nodes in a trie are represented implicitly as entries in an array. Each entry encodes a word with enough bits to index all words in the language model (24 bits for Web1T), a quantized value, and a 32-bit3 offset that encodes the contiguous block of the array containing the children of the node. Note that 32 bits is sufficient to index all n-grams in Web1T; for larger corpora, we can always increase the size of the offset. Effectively, this representation replaces systemlevel memory pointers with offsets that act as logical pointers that can reference other entries in the array, rather than arbitrary bytes in RAM. This representation saves space because offsets require fewer bits than memory pointers, but more importantly, it permits straightforward implementation in any higherlevel language that provides access to arrays of integers.4 2.4 Encoding n-grams Hsu and Glass (2008) describe a variant of the implicit tries of Whittaker and Raj (2001) in which each node in the trie stores the prefix (i.e. parent). This representation has the property that we can refer to each n-gram wn 1 by its last word wn and the offset c(wn−1 1 ) of its prefix wn−1 1 , often called the context. At a low-level, we can efficiently encode this pair (wn, c(wn−1 1 )) as a single 64-bit integer, where the first 24 bits refer to wn and the last 40 bits 3The implementation described in the paper represents each 32-bit integer compactly using only 16 bits, but this representation is quite inefficient, because determining the full 32-bit offset requires a binary search in a look up table. 4Typically, programming languages only provide support for arrays of bytes, not bits, but it is of course possible to simulate arrays with arbitrary numbers of bits using byte arrays and bit manipulation. encode c(wn−1 1 ). We will refer to this encoding as a context encoding. 
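A minimal sketch of this packing, using the 24/40-bit split quoted above for Web1T, is given below. The helper names, and the choice to place the word in the high-order bits, are our own illustrative conventions rather than a description of any released implementation.

    // The (word, context-offset) pair is packed into a single 64-bit key: 24 bits
    // for the id of the last word and 40 bits for the offset of its prefix in the
    // (n-1)-gram array. Placing the word in the high-order bits is an illustrative choice.
    public final class ContextEncoding {
        static final int WORD_BITS = 24;
        static final int CONTEXT_BITS = 40;
        static final long WORD_MASK = (1L << WORD_BITS) - 1;
        static final long CONTEXT_MASK = (1L << CONTEXT_BITS) - 1;

        static long encode(int word, long contextOffset) {
            assert (word & ~WORD_MASK) == 0 && (contextOffset & ~CONTEXT_MASK) == 0;
            return ((long) word << CONTEXT_BITS) | contextOffset;
        }

        static int word(long key)           { return (int) (key >>> CONTEXT_BITS); }
        static long contextOffset(long key) { return key & CONTEXT_MASK; }
    }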
Note that typically, n-grams are encoded in tries in the reverse direction (first-rest instead of lastrest), which enables a more efficient computation of back-offs. In our implementations, we found that the speed improvement from switching to a first-rest encoding and implementing more efficient queries was modest. However, as we will see in Section 4.2, the last-rest encoding allows us to exploit the scrolling nature of queries issued by decoders, which results in speedups that far outweigh those achieved by reversing the trie. 3 Language Model Implementations In the previous section, we reviewed well-known techniques in language model implementation. In this section, we combine these techniques to build simple data structures in ways that are to our knowledge novel, producing language models with stateof-the-art memory requirements and speed. We will also show that our data structures can be very effectively compressed by implicitly encoding the word wn, and further compressed by applying a variablelength encoding on context deltas. 3.1 Sorted Array A standard way to implement a map is to store an array of key/value pairs, sorted according to the key. Lookup is carried out by performing binary search on a key. For an n-gram language model, we can apply this implementation with a slight modification: we need n sorted arrays, one for each n-gram order. We construct keys (wn, c(wn−1 1 )) using the context encoding described in the previous section, where the context offsets c refer to entries in the sorted array of (n −1)-grams. This data structure is shown graphically in Figure 1. Because our keys are sorted according to their context-encoded representation, we cannot straightforwardly answer queries about an n-gram w without first determining its context encoding. We can do this efficiently by building up the encoding incrementally: we start with the context offset of the unigram w1, which is simply its integer representation, and use that to form the context encoding of the bigram w2 1 = (w2, c(w1)). We can find the offset of 260 “ran” w c 15180053 w c 24 bits 40 bits 64 bits “cat” 15176585 “dog” 6879 6879 6879 6879 6879 6880 6880 6880 6879 00004598 “slept” 00004588 00004568 00004530 00004502 00004668 00004669 00004568 00004577 6879 00004498 15176583 15176585 15176593 15176613 15179801 15180051 15176589 15176591 “had” “the” “left” 1933 . . . . . . . . . . . . “slept” 1933 1933 1933 1933 1933 1933 1935 1935 1935 . . val val val v bits . . “dog” 3-grams 2-grams 1-grams w Figure 1: Our SORTED implementation of a trie. The dotted paths correspond to “the cat slept”, “the cat ran”, and “the dog ran”. Each node in the trie is an entry in an array with 3 parts: w represents the word at the node; val represents the (rank encoded) value; and c is an offset in the array of n −1 grams that represents the parent (prefix) of a node. Words are represented as offsets in the unigram array. the bigram using binary search, and form the context encoding of the trigram, and so on. Note, however, that if our queries arrive in context-encoded form, queries are faster since they involve only one binary search in the appropriate array. We will return to this later in Section 4.2 This implementation, SORTED, uses 64 bits for the integer-encoded keys and v bits for the values. Lookup is linear in the length of the key and logarithmic in the number of n-grams. For Web1T (v = 20), the total storage is 10.5 bytes/n-gram or about 37GB. 
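The incremental lookup just described can be sketched as follows. This is an illustration rather than our actual implementation: it assumes each key array is sorted by Java's natural ordering on longs, reuses the 24/40-bit packing from the previous sketch, and returns the offset of the queried n-gram within its order's array, or -1 if the n-gram (or one of its prefixes) is absent.

    import java.util.Arrays;

    // Lookup in the SORTED implementation: one sorted array of packed keys per
    // n-gram order; the context offset is built up incrementally with one binary
    // search per order. A unigram's offset is simply its word id.
    public class SortedArrayLm {
        private final long[][] keysByOrder;   // keysByOrder[k]: sorted keys of all (k+1)-grams

        public SortedArrayLm(long[][] keysByOrder) { this.keysByOrder = keysByOrder; }

        private static long key(int word, long contextOffset) {
            return ((long) word << 40) | contextOffset;   // same 24/40-bit packing as above
        }

        // Returns the offset of the n-gram 'words' within its order's array, or -1.
        public long offsetOf(int[] words) {
            long contextOffset = words[0];                // unigram offset = word id
            for (int i = 1; i < words.length; i++) {
                int pos = Arrays.binarySearch(keysByOrder[i], key(words[i], contextOffset));
                if (pos < 0) return -1;                   // this order's n-gram is not in the model
                contextOffset = pos;                      // becomes the context of the next order
            }
            return contextOffset;
        }
    }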
3.2 Hash Table Hash tables are another standard way to implement associative arrays. To enable the use of our context encoding, we require an implementation in which we can refer to entries in the hash table via array offsets. For this reason, we use an open address hash map that uses linear probing for collision resolution. As in the sorted array implementation, in order to insert an n-gram wn 1 into the hash table, we must form its context encoding incrementally from the offset of w1. However, unlike the sorted array implementation, at query time, we only need to be able to check equality between the query key wn 1 = (wn, c(wn−1 1 )) and a key w ′n 1 = (w′ n, c(w ′n−1 1 )) in the table. Equality can easily be checked by first checking if wn = w′ n, then recursively checking equality between wn−1 1 and w ′n−1 1 , though again, equality is even faster if the query is already contextencoded. This HASH data structure also uses 64 bits for integer-encoded keys and v bits for values. However, to avoid excessive hash collisions, we also allocate additional empty space according to a userdefined parameter that trades off speed and time – we used about 40% extra space in our experiments. For Web1T, the total storage for this implementation is 15 bytes/n-gram or about 53 GB total. Look up in a hash map is linear in the length of an n-gram and constant with respect to the number 261 of n-grams. Unlike the sorted array implementation, the hash table implementation also permits efficient insertion and deletion, making it suitable for stream-based language models (Levenberg and Osborne, 2009). 3.3 Implicitly Encoding wn The context encoding we have used thus far still wastes space. This is perhaps most evident in the sorted array representation (see Figure 1): all ngrams ending with a particular word wi are stored contiguously. We can exploit this redundancy by storing only the context offsets in the main array, using as many bits as needed to encode all context offsets (32 bits for Web1T). In auxiliary arrays, one for each n-gram order, we store the beginning and end of the range of the trie array in which all (wi, c) keys are stored for each wi. These auxiliary arrays are negligibly small – we only need to store 2n offsets for each word. The same trick can be applied in the hash table implementation. We allocate contiguous blocks of the main array for n-grams which all share the same last word wi, and distribute keys within those ranges using the hashing function. This representation reduces memory usage for keys from 64 bits to 32 bits, reducing overall storage for Web1T to 6.5 bytes/n-gram for the sorted implementation and 9.1 bytes for the hashed implementation, or about 23GB and 32GB in total. It also increases query speed in the sorted array case, since to find (wi, c), we only need to search the range of the array over which wi applies. Because this implicit encoding reduces memory usage without a performance cost, we will assume its use for the rest of this paper. 3.4 A Compressed Implementation 3.4.1 Variable-Length Coding The distribution of value ranks in language modeling is Zipfian, with far more n-grams having low counts than high counts. If we ensure that the value rank array sorts raw values by descending order of frequency, then we expect that small ranks will occur much more frequently than large ones, which we can exploit with a variable-length encoding. To compress n-grams, we can exploit the context encoding of our keys. 
In Figure 2, we show a portion w c val 1933 15176585 3 1933 15176587 2 1933 15176593 1 1933 15176613 8 1933 15179801 1 1935 15176585 298 1935 15176589 1 1933 15176585 563097887 956 3 0 +0 +2 2 +0 +5 1 +0 +40 8 !w !c val 1933 15176585 3 +0 +2 1 +0 +5 1 +0 +40 8 +0 +188 1 +2 15176585 298 +0 +4 1 |!w| |!c| |val| 24 40 3 2 3 3 2 3 3 2 9 6 2 12 3 4 36 15 2 6 3 . . . (a) Context-Encoding (b) Context Deltas (c) Bits Required (d) Compressed Array Number of bits in this block Value rank for header key Header key Logical offset of this block True if all !w in block are 0 Figure 2: Compression using variable-length encoding. (a) A snippet of an (uncompressed) context-encoded array. (b) The context and word deltas. (c) The number of bits required to encode the context and word deltas as well as the value ranks. Word deltas use variable-length block coding with k = 1, while context deltas and value ranks use k = 2. (d) A snippet of the compressed encoding array. The header is outlined in bold. of the key array used in our sorted array implementation. While we have already exploited the fact that the 24 word bits repeat in the previous section, we note here that consecutive context offsets tend to be quite close together. We found that for 5-grams, the median difference between consecutive offsets was about 50, and 90% of offset deltas were smaller than 10000. By using a variable-length encoding to represent these deltas, we should require far fewer than 32 bits to encode context offsets. We used a very simple variable-length coding to encode offset deltas, word deltas, and value ranks. Our encoding, which is referred to as “variablelength block coding” in Boldi and Vigna (2005), works as follows: we pick a (configurable) radix r = 2k. To encode a number m, we determine the number of digits d required to express m in base r. We write d in unary, i.e. d −1 zeroes followed by a one. We then write the d digits of m in base r, each of which requires k bits. For example, using k = 2, we would encode the decimal number 7 as 010111. We can choose k separately for deltas and value indices, and also tune these parameters to a given language model. We found this encoding outperformed other standard prefix codes, including Golomb codes (Golomb, 1966; Church et al., 2007) 262 and Elias γ and δ codes. We also experimented with the ζ codes of Boldi and Vigna (2005), which modify variable-length block codes so that they are optimal for certain power law distributions. We found that ζ codes performed no better than variable-length block codes and were slightly more complex. Finally, we found that Huffman codes outperformed our encoding slightly, but came at a much higher computational cost. 3.4.2 Block Compression We could in principle compress the entire array of key/value pairs with the encoding described above, but this would render binary search in the array impossible: we cannot jump to the mid-point of the array since in order to determine what key lies at a particular point in the compressed bit stream, we would need to know the entire history of offset deltas. Instead, we employ block compression, a technique also used by Harb et al. (2009) for smaller language models. In particular, we compress the key/value array in blocks of 128 bytes. 
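Before describing the block layout in more detail, the variable-length block code itself can be sketched as follows. For readability the sketch writes bits to a string rather than packing them into a byte array, and it is illustrative code, not the encoder used in our experiments; it does, however, reproduce the worked example above, encoding the decimal number 7 with k = 2 as 010111.

    // Variable-length block coding: to encode m with radix 2^k, write the number of
    // base-2^k digits d in unary (d-1 zeroes then a one), then the d digits, k bits each.
    public class VariableLengthBlockCode {
        public static String encode(long m, int k) {
            long radix = 1L << k;
            int d = 1;                                        // number of base-2^k digits of m
            for (long t = m; t >= radix; t >>>= k) d++;
            StringBuilder out = new StringBuilder();
            for (int i = 0; i < d - 1; i++) out.append('0');  // d in unary ...
            out.append('1');                                  // ... terminated by a one
            for (int i = d - 1; i >= 0; i--) {                // digits, most significant first
                long digit = (m >>> (i * k)) & (radix - 1);
                for (int b = k - 1; b >= 0; b--) out.append((digit >>> b) & 1L);
            }
            return out.toString();
        }

        public static void main(String[] args) {
            System.out.println(encode(7, 2));                 // prints 010111
        }
    }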
At the beginning of the block, we write out a header consisting of: an explicit 64-bit key that begins the block; a 32-bit integer representing the offset of the header key in the uncompressed array;5 the number of bits of compressed data in the block; and the variablelength encoding of the value rank of the header key. The remainder of the block is filled with as many compressed key/value pairs as possible. Once the block is full, we start a new block. See Figure 2 for a depiction. When we encode an offset delta, we store the delta of the word portion of the key separately from the delta of the context offset. When an entire block shares the same word portion of the key, we set a single bit in the header that indicates that we do not encode any word deltas. To find a key in this compressed array, we first perform binary search over the header blocks (which are predictably located every 128 bytes), followed by a linear search within a compressed block. Using k = 6 for encoding offset deltas and k = 5 for encoding value ranks, this COMPRESSED implementation stores Web1T in less than 3 bytes per n-gram, or about 10.2GB in total. This is about 5We need this because n-grams refer to their contexts using array offsets. 6GB less than the storage required by Germann et al. (2009), which is the best published lossless compression to date. 4 Speeding up Decoding In the previous section, we provided compact and efficient implementations of associative arrays that allow us to query a value for an arbitrary n-gram. However, decoders do not issue language model requests at random. In this section, we show that language model requests issued by a standard decoder exhibit two patterns we can exploit: they are highly repetitive, and also exhibit a scrolling effect. 4.1 Exploiting Repetitive Queries In a simple experiment, we recorded all of the language model queries issued by the Joshua decoder (Li et al., 2009) on a 100 sentence test set. Of the 31 million queries, only about 1 million were unique. Therefore, we expect that keeping the results of language model queries in a cache should be effective at reducing overall language model latency. To this end, we added a very simple cache to our language model. Our cache uses an array of key/value pairs with size fixed to 2b −1 for some integer b (we used 24). We use a b-bit hash function to compute the address in an array where we will always place a given n-gram and its fully computed language model score. Querying the cache is straightforward: we check the address of a key given by its b-bit hash. If the key located in the cache array matches the query key, then we return the value stored in the cache. Otherwise, we fetch the language model probability from the language model and place the new key and value in the cache, evicting the old key in the process. This scheme is often called a direct-mapped cache because each key has exactly one possible address. Caching n-grams in this way reduces overall latency for two reasons: first, lookup in the cache is extremely fast, requiring only a single evaluation of the hash function, one memory lookup to find the cache key, and one equality check on the key. In contrast, even our fastest (HASH) implementation may have to perform multiple memory lookups and equality checks in order to resolve collisions. 
Second, when calculating the probability for an n-gram 263 the cat + fell down the cat fell cat fell down 18569876 fell 35764106 down LM 0.76 LM 0.12 LM LM 0.76 0.12 “the cat” “cat fell” 3576410 Explicit Representation Context Encoding Figure 3: Queries issued when scoring trigrams that are created when a state with LM context “the cat” combines with “fell down”. In the standard explicit representation of an n-gram as list of words, queries are issued atomically to the language model. When using a contextencoding, a query from the n-gram “the cat fell” returns the context offset of “cat fell”, which speeds up the query of “cat fell down”. not in the language model, language models with back-off schemes must in general perform multiple queries to fetch the necessary back-off information. Our cache retains the full result of these calculations and thus saves additional computation. Federico and Cettolo (2007) also employ a cache in their language model implementation, though based on traditional hash table cache with linear probing. Unlike our cache, which is of fixed size, their cache must be cleared after decoding a sentence. We would not expect a large performance increase from such a cache for our faster models since our HASH implementation is already a hash table with linear probing. We found in our experiments that a cache using linear probing provided marginal performance increases of about 40%, largely because of cached back-off computation, while our simpler cache increases performance by about 300% even over our HASH LM implementation. More timing results are presented in Section 5. 4.2 Exploiting Scrolling Queries Decoders with integrated language models (Och and Ney, 2004; Chiang, 2005) score partial translation hypotheses in an incremental way. Each partial hypothesis maintains a language model context consisting of at most n −1 target-side words. When we combine two language model contexts, we create several new n-grams of length of n, each of which generate a query to the language model. These new WMT2010 Order #n-grams 1gm 4,366,395 2gm 61,865,588 3gm 123,158,761 4gm 217,869,981 5gm 269,614,330 Total 676,875,055 WEB1T Order #n-grams 1gm 13,588,391 2gm 314,843,401 3gm 977,069,902 4gm 1,313,818,354 5gm 1,176,470,663 Total 3,795,790,711 Table 1: Sizes of the two language models used in our experiments. n-grams exhibit a scrolling effect, shown in Figure 3: the n −1 suffix words of one n-gram form the n −1 prefix words of the next. As discussed in Section 3, our LM implementations can answer queries about context-encoded ngrams faster than explicitly encoded n-grams. With this in mind, we augment the values stored in our language model so that for a key (wn, c(wn−1 1 )), we store the offset of the suffix c(wn 2) as well as the normal counts/probabilities. Then, rather than represent the LM context in the decoder as an explicit list of words, we can simply store context offsets. When we query the language model, we get back both a language model score and context offset c(ˆwn−1 1 ), where ˆwn−1 1 is the the longest suffix of wn−1 1 contained in the language model. We can then quickly form the context encoding of the next query by simply concatenating the new word with the offset c(ˆwn−1 1 ) returned from the previous query. 
In addition to speeding up language model queries, this approach also automatically supports an equivalence of LM states (Li and Khudanpur, 2008): in standard back-off schemes, whenever we compute the probability for an n-gram (wn, c(wn−1 1 )) when wn−1 1 is not in the language model, the result will be the same as the result of the query (wn, c(ˆwn−1 1 ). It is therefore only necessary to store as much of the context as the language model contains instead of all n −1 words in the context. If a decoder maintains LM states using the context offsets returned by our language model, then the decoder will automatically exploit this equivalence and the size of the search space will be reduced. This same effect is exploited explicitly by some decoders (Li and Khudanpur, 2008). 264 WMT2010 LM Type bytes/ bytes/ bytes/ Total key value n-gram Size SRILM-H – – 42.2 26.6G SRILM-S – – 33.5 21.1G HASH 5.6 6.0 11.6 7.5G SORTED 4.0 4.5 8.5 5.5G TPT – – 7.5∗∗ 4.7G∗∗ COMPRESSED 2.1 3.8 5.9 3.7G Table 2: Memory usages of several language model implementations on the WMT2010 language model. A ∗∗indicates that the storage in bytes per n-gram is reported for a different language model of comparable size, and the total size is thus a rough projection. 5 Experiments 5.1 Data To test our LM implementations, we performed experiments with two different language models. Our first language model, WMT2010, was a 5gram Kneser-Ney language model which stores probability/back-off pairs as values. We trained this language model on the English side of all FrenchEnglish corpora provided6 for use in the WMT 2010 workshop, about 2 billion tokens in total. This data was tokenized using the tokenizer.perl script provided with the data. We trained the language model using SRILM. We also extracted a countbased language model, WEB1T, from the Web1T corpus (Brants and Franz, 2006). Since this data is provided as a collection of 1- to 5-grams and associated counts, we used this data without further preprocessing. The make up of these language models is shown in Table 1. 5.2 Compression Experiments We tested our three implementations (HASH, SORTED, and COMPRESSED) on the WMT2010 language model. For this language model, there are about 80 million unique probability/back-off pairs, so v ≈36. Note that here v includes both the cost per key of storing the value rank as well as the (amortized) cost of storing two 32 bit floating point numbers (probability and back-off) for each unique value. The results are shown in Table 2. 6www.statmt.org/wmt10/translation-task.html WEB1T LM Type bytes/ bytes/ bytes/ Total key value n-gram Size Gzip – – 7.0 24.7G T-MPHR† – – 3.0 10.5G COMPRESSED 1.3 1.6 2.9 10.2G Table 3: Memory usages of several language model implementations on the WEB1T. A † indicates lossy compression. We compare against three baselines. The first two, SRILM-H and SRILM-S, refer to the hash tableand sorted array-based trie implementations provided by SRILM. The third baseline is the TightlyPacked Trie (TPT) implementation of Germann et al. (2009). Because this implementation is not freely available, we use their published memory usage in bytes per n-gram on a language model of similar size and project total usage. The memory usage of all of our models is considerably smaller than SRILM – our HASH implementation is about 25% the size of SRILM-H, and our SORTED implementation is about 25% the size of SRILM-S. Our COMPRESSED implementation is also smaller than the state-of-the-art compressed TPT implementation. 
In Table 3, we show the results of our COMPRESSED implementation on WEB1T and against two baselines. The first is compression of the ASCII text count files using gzip, and the second is the Tiered Minimal Perfect Hash (T-MPHR) of Guthrie and Hepple (2010). The latter is a lossy compression technique based on Bloomier filters (Chazelle et al., 2004) and additional variable-length encoding that achieves the best published compression of WEB1T to date. Our COMPRESSED implementation is even smaller than T-MPHR, despite using a lossless compression technique. Note that since TMPHR uses a lossy encoding, it is possible to reduce the storage requirements arbitrarily at the cost of additional errors in the model. We quote here the storage required when keys7 are encoded using 12bit hash codes, which gives a false positive rate of about 2−12 =0.02%. 7Guthrie and Hepple (2010) also report additional savings by quantizing values, though we could perform the same quantization in our storage scheme. 265 LM Type No Cache Cache Size COMPRESSED 9264±73ns 565±7ns 3.7G SORTED 1405±50ns 243±4ns 5.5G HASH 495±10ns 179±6ns 7.5G SRILM-H 428±5ns 159±4ns 26.6G HASH+SCROLL 323±5ns 139±6ns 10.5G Table 4: Raw query speeds of various language model implementations. Times were averaged over 3 runs on the same machine. For HASH+SCROLL, all queries were issued to the decoder in context-encoded form, which speeds up queries that exhibit scrolling behaviour. Note that memory usage is higher than for HASH because we store suffix offsets along with the values for an n-gram. LM Type No Cache Cache Size COMPRESSED 9880±82s 1547±7s 3.7G SRILM-H 1120±26s 938±11s 26.6G HASH 1146±8s 943±16s 7.5G Table 5: Full decoding times for various language model implementations. Our HASH LM is as fast as SRILM while using 25% of the memory. Our caching also reduces total decoding time by about 20% for our fastest models and speeds up COMPRESSED by a factor of 6. Times were averaged over 3 runs on the same machine. 5.3 Timing Experiments We first measured pure query speed by logging all LM queries issued by a decoder and measuring the time required to query those n-grams in isolation. We used the the Joshua decoder8 with the WMT2010 model to generate queries for the first 100 sentences of the French 2008 News test set. This produced about 30 million queries. We measured the time9 required to perform each query in order with and without our direct-mapped caching, not including any time spent on file I/O. The results are shown in Table 4. As expected, HASH is the fastest of our implementations, and comparable10 in speed to SRILM-H, but using sig8We used a grammar trained on all French-English data provided for WMT 2010 using the make scripts provided at http://sourceforge.net/projects/joshua/files /joshua/1.3/wmt2010-experiment.tgz/download 9All experiments were performed on an Amazon EC2 HighMemory Quadruple Extra Large instance, with an Intel Xeon X5550 CPU running at 2.67GHz and 8 MB of cache. 10Because we implemented our LMs in Java, we issued queries to SRILM via Java Native Interface (JNI) calls, which introduces a performance overhead. When called natively, we found that SRILM was about 200 ns/query faster. Unfortunificantly less space. SORTED is slower but of course more memory efficient, and COMPRESSED is the slowest but also the most compact representation. In HASH+SCROLL, we issued queries to the language model using the context encoding, which speeds up queries substantially. 
Finally, we note that our direct-mapped cache is very effective. The query speed of all models is boosted substantially. In particular, our COMPRESSED implementation with caching is nearly as fast as SRILM-H without caching, and even the already fast HASH implementation is 300% faster in raw query speed with caching enabled. We also measured the effect of LM performance on overall decoder performance. We modified Joshua to optionally use our LM implementations during decoding, and measured the time required to decode all 2051 sentences of the 2008 News test set. The results are shown in Table 5. Without caching, SRILM-H and HASH were comparable in speed, while COMPRESSED introduces a performance penalty. With caching enabled, overall decoder speed is improved for both HASH and SRILMH, while the COMPRESSED implementation is only about 50% slower that the others. 6 Conclusion We have presented several language model implementations which are state-of-the-art in both size and speed. Our experiments have demonstrated improvements in query speed over SRILM and compression rates against state-of-the-art lossy compression. We have also described a simple caching technique which leads to performance increases in overall decoding time. Acknowledgements This work was supported by a Google Fellowship for the first author and by BBN under DARPA contract HR001106-C-0022. We would like to thank David Chiang, Zhifei Li, and the anonymous reviewers for their helpful comments. nately, it is not completely fair to compare our LMs against either of these numbers: although the JNI overhead slows down SRILM, implementing our LMs in Java instead of C++ slows down our LMs. In the tables, we quote times which include the JNI overhead, since this reflects the true cost to a decoder written in Java (e.g. Joshua). 266 References Paolo Boldi and Sebastiano Vigna. 2005. Codes for the world wide web. Internet Mathematics, 2. Thorsten Brants and Alex Franz. 2006. Google web1t 5-gram corpus, version 1. In Linguistic Data Consortium, Philadelphia, Catalog Number LDC2006T13. Thorsten Brants, Ashok C. Popat, Peng Xu, Franz J. Och, and Jeffrey Dean. 2007. Large language models in machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Bernard Chazelle, Joe Kilian, Ronitt Rubinfeld, and Ayellet Tal. 2004. The Bloomier filter: an efficient data structure for static support lookup tables. In Proceedings of the fifteenth annual ACM-SIAM symposium on Discrete algorithms. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In The Annual Conference of the Association for Computational Linguistics. Kenneth Church, Ted Hart, and Jianfeng Gao. 2007. Compressing trigram language models with golomb coding. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Marcello Federico and Mauro Cettolo. 2007. Efficient handling of n-gram language models for statistical machine translation. In Proceedings of the Second Workshop on Statistical Machine Translation. Edward Fredkin. 1960. Trie memory. Communications of the ACM, 3:490–499, September. Ulrich Germann, Eric Joanis, and Samuel Larkin. 2009. Tightly packed tries: how to fit large models into memory, and make them load fast, too. In Proceedings of the Workshop on Software Engineering, Testing, and Quality Assurance for Natural Language Processing. S. W. Golomb. 1966. Run-length encodings. IEEE Transactions on Information Theory, 12. David Guthrie and Mark Hepple. 2010. 
Storing the web in memory: space efficient language models with constant time retrieval. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Boulos Harb, Ciprian Chelba, Jeffrey Dean, and Sanjay Ghemawat. 2009. Back-off language model compression. In Proceedings of Interspeech. Bo-June Hsu and James Glass. 2008. Iterative language model estimation: Efficient data structure and algorithms. In Proceedings of Interspeech. Abby Levenberg and Miles Osborne. 2009. Streambased randomised language models for smt. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Zhifei Li and Sanjeev Khudanpur. 2008. A scalable decoder for parsing-based machine translation with equivalent language model state maintenance. In Proceedings of the Second Workshop on Syntax and Structure in Statistical Translation. Zhifei Li, Chris Callison-Burch, Chris Dyer, Juri Ganitkevitch, Sanjeev Khudanpur, Lane Schwartz, Wren N. G. Thornton, Jonathan Weese, and Omar F. Zaidan. 2009. Joshua: an open source toolkit for parsingbased machine translation. In Proceedings of the Fourth Workshop on Statistical Machine Translation. Franz Josef Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computationl Linguistics, 30:417–449, December. Andreas Stolcke. 2002. SRILM: An extensible language modeling toolkit. In Proceedings of Interspeech. E. W. D. Whittaker and B. Raj. 2001. Quantizationbased language model compression. In Proceedings of Eurospeech. 267
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 268–277, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Learning to Win by Reading Manuals in a Monte-Carlo Framework S.R.K. Branavan David Silver * Regina Barzilay Computer Science and Artificial Intelligence Laboratory Massachusetts Institute of Technology {branavan, regina}@csail.mit.edu * Department of Computer Science University College London [email protected] Abstract This paper presents a novel approach for leveraging automatically extracted textual knowledge to improve the performance of control applications such as games. Our ultimate goal is to enrich a stochastic player with highlevel guidance expressed in text. Our model jointly learns to identify text that is relevant to a given game state in addition to learning game strategies guided by the selected text. Our method operates in the Monte-Carlo search framework, and learns both text analysis and game strategies based only on environment feedback. We apply our approach to the complex strategy game Civilization II using the official game manual as the text guide. Our results show that a linguistically-informed game-playing agent significantly outperforms its language-unaware counterpart, yielding a 27% absolute improvement and winning over 78% of games when playing against the builtin AI of Civilization II. 1 1 Introduction In this paper, we study the task of grounding linguistic analysis in control applications such as computer games. In these applications, an agent attempts to optimize a utility function (e.g., game score) by learning to select situation-appropriate actions. In complex domains, finding a winning strategy is challenging even for humans. Therefore, human players typically rely on manuals and guides that describe promising tactics and provide general advice about the underlying task. Surprisingly, such textual information has never been utilized in control algorithms despite its potential to greatly improve performance. 1The code, data and complete experimental setup for this work are available at http://groups.csail.mit.edu/rbg/code/civ. The natural resources available where a population settles affects its ability to produce food and goods. Build your city on a plains or grassland square with a river running through it if possible. Figure 1: An excerpt from the user manual of the game Civilization II. Consider for instance the text shown in Figure 1. This is an excerpt from the user manual of the game Civilization II.2 This text describes game locations where the action “build-city” can be effectively applied. A stochastic player that does not have access to this text would have to gain this knowledge the hard way: it would repeatedly attempt this action in a myriad of states, thereby learning the characterization of promising state-action pairs based on the observed game outcomes. In games with large state spaces, long planning horizons, and high-branching factors, this approach can be prohibitively slow and ineffective. An algorithm with access to the text, however, could learn correlations between words in the text and game attributes – e.g., the word “river” and places with rivers in the game – thus leveraging strategies described in text to better select actions. The key technical challenge in leveraging textual knowledge is to automatically extract relevant information from text and incorporate it effectively into a control algorithm. 
Approaching this task in a supervised framework, as is common in traditional information extraction, is inherently difficult. Since the game’s state space is extremely large, and the states that will be encountered during game play cannot be known a priori, it is impractical to manually annotate the information that would be relevant to those states. Instead, we propose to learn text analysis based on a feedback signal inherent to the control application, such as game score. 2http://en.wikipedia.org/wiki/Civilization II 268 Our general setup consists of a game in a stochastic environment, where the goal of the player is to maximize a given utility function R(s) at state s. We follow a common formulation that has been the basis of several successful applications of machine learning to games. The player’s behavior is determined by an action-value function Q(s, a) that assesses the goodness of an action a in a given state s based on the features of s and a. This function is learned based solely on the utility R(s) collected via simulated game-play in a Monte-Carlo framework. An obvious way to enrich the model with textual information is to augment the action-value function with word features in addition to state and action features. However, adding all the words in the document is unlikely to help since only a small fraction of the text is relevant for a given state. Moreover, even when the relevant sentence is known, the mapping between raw text and the action-state representation may not be apparent. This representation gap can be bridged by inducing a predicate structure on the sentence—e.g., by identifying words that describe actions, and those that describe state attributes. In this paper, we propose a method for learning an action-value function augmented with linguistic features, while simultaneously modeling sentence relevance and predicate structure. We employ a multilayer neural network where the hidden layers represent sentence relevance and predicate parsing decisions. Despite the added complexity, all the parameters of this non-linear model can be effectively learned via Monte-Carlo simulations. We test our method on the strategy game Civilization II, a notoriously challenging game with an immense action space.3 As a source of knowledge for guiding our model, we use the official game manual. As a baseline, we employ a similar MonteCarlo search based player which does not have access to textual information. We demonstrate that the linguistically-informed player significantly outperforms the baseline in terms of number of games won. Moreover, we show that modeling the deeper linguistic structure of sentences further improves performance. In full-length games, our algorithm yields a 27% improvement over a language unaware base3Civilization II was #3 in IGN’s 2007 list of top video games of all time (http://top100.ign.com/2007/ign top game 3.html) line, and wins over 78% of games against the builtin, hand-crafted AI of Civilization II.4 2 Related Work Our work fits into the broad area of grounded language acquisition where the goal is to learn linguistic analysis from a situated context (Oates, 2001; Siskind, 2001; Yu and Ballard, 2004; Fleischman and Roy, 2005; Mooney, 2008a; Mooney, 2008b; Branavan et al., 2009; Vogel and Jurafsky, 2010). Within this line of work, we are most closely related to reinforcement learning approaches that learn language by proactively interacting with an external environment (Branavan et al., 2009; Branavan et al., 2010; Vogel and Jurafsky, 2010). 
Like the above models, we use environment feedback (in the form of a utility function) as the main source of supervision. The key difference, however, is in the language interpretation task itself. Previous work has focused on the interpretation of instruction text where input documents specify a set of actions to be executed in the environment. In contrast, game manuals provide high-level advice but do not directly describe the correct actions for every potential game state. Moreover, these documents are long, and use rich vocabularies with complex grammatical constructions. We do not aim to perform a comprehensive interpretation of such documents. Rather, our focus is on language analysis that is sufficiently detailed to help the underlying control task. The area of language analysis situated in a game domain has been studied in the past (Eisenstein et al., 2009). Their method, however, is different both in terms of the target interpretation task, and the supervision signal it learns from. They aim to learn the rules of a given game, such as which moves are valid, given documents describing the rules. Our goal is more open ended, in that we aim to learn winning game strategies. Furthermore, Eisenstein et al. (2009) rely on a different source of supervision – game traces collected a priori. For complex games, like the one considered in this paper, collecting such game traces is prohibitively expensive. Therefore our approach learns by actively playing the game. 4In this paper, we focus primarily on the linguistic aspects of our task and algorithm. For a discussion and evaluation of the non-linguistic aspects please see Branavan et al. (2011). 269 3 Monte-Carlo Framework for Computer Games Our method operates within the Monte-Carlo search framework (Tesauro and Galperin, 1996), which has been successfully applied to complex computer games such as Go, Poker, Scrabble, multi-player card games, and real-time strategy games, among others (Gelly et al., 2006; Tesauro and Galperin, 1996; Billings et al., 1999; Sheppard, 2002; Sch¨afer, 2008; Sturtevant, 2008; Balla and Fern, 2009). Since Monte-Carlo search forms the foundation of our approach, we briefly describe it in this section. Game Representation The game is defined by a large Markov Decision Process ⟨S, A, T, R⟩. Here S is the set of possible states, A is the space of legal actions, and T(s′|s, a) is a stochastic state transition function where s, s′ ∈S and a ∈A. Specifically, a state encodes attributes of the game world, such as available resources and city locations. At each step of the game, a player executes an action a which causes the current state s to change to a new state s′ according to the transition function T(s′|s, a). While this function is not known a priori, the program encoding the game can be viewed as a black box from which transitions can be sampled. Finally, a given utility function R(s) ∈R captures the likelihood of winning the game from state s (e.g., an intermediate game score). Monte-Carlo Search Algorithm The goal of the Monte-Carlo search algorithm is to dynamically select the best action for the current state st. This selection is based on the results of multiple roll-outs which measure the outcome of a sequence of actions in a simulated game – e.g., simulations played against the game’s built-in AI. Specifically, starting at state st, the algorithm repeatedly selects and executes actions, sampling state transitions from T. 
On game completion at time τ, we measure the final utility R(sτ).5 The actual game action is then selected as the one corresponding to the roll-out with the best final utility. See Algorithm 1 for details. The success of Monte-Carlo search is based on its ability to make a fast, local estimate of the ac5In general, roll-outs are run till game completion. However, if simulations are expensive as is the case in our domain, rollouts can be truncated after a fixed number of steps. procedure PlayGame () Initialize game state to fixed starting state s1 ←s0 for t = 1 . . . T do Run N simulated games for i = 1 . . . N do (ai, ri) ←SimulateGame(s) end Compute average observed utility for each action at ←arg max a 1 Na X i:ai=a ri Execute selected action in game st+1 ←T(s′|st, at) end procedure SimulateGame (st) for u = t . . . τ do Compute Q function approximation Q(s, a) = ⃗w · ⃗f(s, a) Sample action from action-value function in ϵ-greedy fashion: au ∼ ( uniform(a ∈A) with probability ϵ arg max a Q(s, a) otherwise Execute selected action in game: su+1 ←T(s′|su, au) if game is won or lost break end Update parameters ⃗w of Q(s, a) Return action and observed utility: return at, R(sτ) Algorithm 1: The general Monte-Carlo algorithm. tion quality at each step of the roll-outs. States and actions are evaluated by an action-value function Q(s, a), which is an estimate of the expected outcome of action a in state s. This action-value function is used to guide action selection during the roll-outs. While actions are usually selected to maximize the action-value function, sometimes other actions are also randomly explored in case they are more valuable than predicted by the current estimate of Q(s, a). As the accuracy of Q(s, a) improves, the quality of action selection improves and vice 270 versa, in a cycle of continual improvement (Sutton and Barto, 1998). In many games, it is sufficient to maintain a distinct action-value for each unique state and action in a large search tree. However, when the branching factor is large it is usually beneficial to approximate the action-value function, so that the value of many related states and actions can be learned from a reasonably small number of simulations (Silver, 2009). One successful approach is to model the action-value function as a linear combination of state and action attributes (Silver et al., 2008): Q(s, a) = ⃗w · ⃗f(s, a). Here ⃗f(s, a) ∈Rn is a real-valued feature function, and ⃗w is a weight vector. We take a similar approach here, except that our feature function includes latent structure which models language. The parameters ⃗w of Q(s, a) are learned based on feedback from the roll-out simulations. Specifically, the parameters are updated by stochastic gradient descent by comparing the current predicted Q(s, a) against the observed utility at the end of each rollout. We provide details on parameter estimation in the context of our model in Section 4.2. The roll-outs themselves are fully guided by the action-value function. At every step of the simulation, actions are selected by an ϵ-greedy strategy: with probability ϵ an action is selected uniformly at random; otherwise the action is selected greedily to maximize the current action-value function, arg maxa Q(s, a). 4 Adding Linguistic Knowledge to the Monte-Carlo Framework In this section we describe how we inform the simulation-based player with information automatically extracted from text – in terms of both model structure and parameter estimation. 
4.1 Model Structure To inform action selection with the advice provided in game manuals, we modify the action-value function Q(s, a) to take into account words of the document in addition to state and action information. Conditioning Q(s, a) on all the words in the document is unlikely to be effective since only a small Hidden layer encoding sentence relevance Output layer Input layer: Deterministic feature layer: Hidden layer encoding predicate labeling Figure 2: The structure of our model. Each rectangle represents a collection of units in a layer, and the shaded trapezoids show the connections between layers. A fixed, real-valued feature function ⃗x(s, a, d) transforms the game state s, action a, and strategy document d into the input vector ⃗x. The first hidden layer contains two disjoint sets of units ⃗y and ⃗z corresponding to linguistic analyzes of the strategy document. These are softmax layers, where only one unit is active at any time. The units of the second hidden layer ⃗f(s, a, d, yi, zi) are a set of fixed real valued feature functions on s, a, d and the active units yi and zi of ⃗y and ⃗z respectively. fraction of the document provides guidance relevant to the current state, while the remainder of the text is likely to be irrelevant. Since this information is not known a priori, we model the decision about a sentence’s relevance to the current state as a hidden variable. Moreover, to fully utilize the information presented in a sentence, the model identifies the words that describe actions and those that describe state attributes, discriminating them from the rest of the sentence. As with the relevance decision, we model this labeling using hidden variables. As shown in Figure 2, our model is a four layer neural network. The input layer ⃗x represents the current state s, candidate action a, and document d. The second layer consists of two disjoint sets of units ⃗y and ⃗z which encode the sentence-relevance and predicate-labeling decisions respectively. Each of these sets of units operates as a stochastic 1-of-n softmax selection layer (Bridle, 1990) where only a single unit is activated. The activation function for units in this layer is the standard softmax function: p(yi = 1|⃗x) = e⃗ui·⃗x . X k e⃗uk·⃗x, where yi is the ith hidden unit of ⃗y, and ⃗ui is the weight vector corresponding to yi. Given this acti271 vation function, the second layer effectively models sentence relevance and predicate labeling decisions via log-linear distributions, the details of which are described below. The third feature layer ⃗f of the neural network is deterministically computed given the active units yi and zj of the softmax layers, and the values of the input layer. Each unit in this layer corresponds to a fixed feature function fk(st, at, d, yi, zj) ∈R. Finally the output layer encodes the action-value function Q(s, a, d), which now also depends on the document d, as a weighted linear combination of the units of the feature layer: Q(st, at, d) = ⃗w · ⃗f, where ⃗w is the weight vector. Modeling Sentence Relevance Given a strategy document d, we wish to identify a sentence yi that is most relevant to the current game state st and action at. This relevance decision is modeled as a loglinear distribution over sentences as follows: p(yi|st, at, d) ∝e⃗u·φ(yi,st,at,d). Here φ(yi, st, at, d) ∈Rn is a feature function, and ⃗u are the parameters we need to estimate. 
Modeling Predicate Structure Our goal here is to label the words of a sentence as either action-description, state-description or background. Since these word label assignments are likely to be mutually dependent, we model predicate labeling as a sequence prediction task. These dependencies do not necessarily follow the order of words in a sentence, and are best expressed in terms of a syntactic tree. For example, words corresponding to state-description tend to be descendants of action-description words. Therefore, we label words in dependency order – i.e., starting at the root of a given dependency tree, and proceeding to the leaves. This allows a word's label decision to condition on the label of the corresponding dependency tree parent.

Given sentence yi and its dependency parse qi, we model the distribution over predicate labels ⃗ei as:

  p(⃗ei | yi, qi) = Πj p(ej | j, ⃗e1:j−1, yi, qi),
  p(ej | j, ⃗e1:j−1, yi, qi) ∝ e^(⃗v · ψ(ej, j, ⃗e1:j−1, yi, qi)).

Here ej is the predicate label of the jth word being labeled, and ⃗e1:j−1 is the partial predicate labeling constructed so far for sentence yi.

In the second layer of the neural network, the units ⃗z represent a predicate labeling ⃗ei of every sentence yi ∈ d. However, our intention is to incorporate, into action-value function Q, information from only the most relevant sentence. Thus, in practice, we only perform a predicate labeling of the sentence selected by the relevance component of the model. Given the sentence selected as relevant and its predicate labeling, the output layer of the network can now explicitly learn the correlations between textual information, and game states and actions – for example, between the word "grassland" in Figure 1, and the action of building a city. This allows our method to leverage the automatically extracted textual information to improve game play.

4.2 Parameter Estimation

Learning in our method is performed in an online fashion: at each game state st, the algorithm performs a simulated game roll-out, observes the outcome of the game, and updates the parameters ⃗u, ⃗v and ⃗w of the action-value function Q(st, at, d). These three steps are repeated a fixed number of times at each actual game state. The information from these roll-outs is used to select the actual game action. The algorithm re-learns Q(st, at, d) for every new game state st. This specializes the action-value function to the subgame starting from st.

Since our model is a non-linear approximation of the underlying action-value function of the game, we learn model parameters by applying non-linear regression to the observed final utilities from the simulated roll-outs. Specifically, we adjust the parameters by stochastic gradient descent, to minimize the mean-squared error between the action-value Q(s, a) and the final utility R(sτ) for each observed game state s and action a. The resulting update to model parameters θ is of the form:

  ∆θ = −(α/2) ∇θ [R(sτ) − Q(s, a)]²
     = α [R(sτ) − Q(s, a)] ∇θ Q(s, a; θ),

where α is a learning rate parameter. This minimization is performed via standard error backpropagation (Bryson and Ho, 1969; Rumelhart et al., 1986), which results in the following online updates for the output layer parameters ⃗w:

  ⃗w ← ⃗w + αw [Q − R(sτ)] ⃗f(s, a, d, yi, zj),

where αw is the learning rate, and Q = Q(s, a, d). The corresponding updates for the sentence relevance and predicate labeling parameters ⃗u and ⃗v are:

  ⃗ui ← ⃗ui + αu [Q − R(sτ)] Q ⃗x [1 − p(yi | ·)],
  ⃗vi ← ⃗vi + αv [Q − R(sτ)] Q ⃗x [1 − p(zi | ·)].
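Taking the update equations above at face value, a sketch of the per-roll-out parameter updates looks as follows; the sparse-dict representation of ⃗f and ⃗x and the learning-rate arguments are assumptions about representation, not the authors' released code, and the signs simply mirror the formulas as written.

```python
def update_output_weights(w, f, q, final_utility, alpha_w):
    """w <- w + alpha_w [Q - R(s_tau)] f(s, a, d, y_i, z_j), with f a sparse feature dict."""
    err = q - final_utility
    for k, v in f.items():
        w[k] = w.get(k, 0.0) + alpha_w * err * v
    return w

def update_softmax_weights(u_i, x, q, final_utility, p_i, alpha):
    """u_i <- u_i + alpha [Q - R(s_tau)] Q x [1 - p(y_i | .)]; the v_i update has the same form."""
    scale = alpha * (q - final_utility) * q * (1.0 - p_i)
    for k, v in x.items():
        u_i[k] = u_i.get(k, 0.0) + scale * v
    return u_i
```

After each simulated roll-out, the three updates would be applied in turn to ⃗w, to the weight vector of the selected relevance unit, and to the weight vectors of the active predicate-labeling units, before the next roll-out is started from the current game state.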
5 Applying the Model

We apply our model to playing the turn-based strategy game, Civilization II. We use the official manual of the game (www.civfanatics.com/content/civ2/reference/Civ2manual.zip) as the source of textual strategy advice for the language aware algorithms. Civilization II is a multi-player game set on a grid-based map of the world. Each grid location represents a tile of either land or sea, and has various resources and terrain attributes. For example, land tiles can have hills with rivers running through them. In addition to multiple cities, each player controls various units – e.g., settlers and explorers. Games are won by gaining control of the entire world map. In our experiments, we consider a two-player game of Civilization II on a grid of 1000 squares, where we play against the built-in AI player.

Game States and Actions We define the game state of Civilization II to be the map of the world, the attributes of each map tile, and the attributes of each player's cities and units. Some examples of the attributes of states and actions are shown in Figure 3. The space of possible actions for a given city or unit is known given the current game state. The actions of a player's cities and units combine to form the action space of that player. In our experiments, on average a player controls approximately 18 units, and each unit can take one of 15 actions. This results in a very large action space for the game – i.e., 10^21. To effectively deal with this large action space, we assume that given the state, the actions of a single unit are independent of the actions of all other units of the same player.

[Figure 3: Example attributes of the game (box above), and features computed using the game manual and these attributes (box below).
  Map tile attributes: terrain type (e.g. grassland, mountain, etc); tile resources (e.g. wheat, coal, wildlife, etc).
  City attributes: city population; amount of food produced.
  Unit attributes: unit type (e.g., worker, explorer, archer, etc); is unit in a city?
  Example features:
    1 if action=build-city & tile-has-river=true & action-words={build,city} & state-words={river,hill}; 0 otherwise
    1 if action=build-city & tile-has-river=true & words={build,city,river}; 0 otherwise
    1 if label=action & word-type='build' & parent-label=action; 0 otherwise]

Utility Function The Monte-Carlo algorithm uses the utility function to evaluate the outcomes of simulated game roll-outs. In the typical application of the algorithm, the final game outcome is used as the utility function (Tesauro and Galperin, 1996). Given the complexity of Civilization II, running simulation roll-outs until game completion is impractical. The game, however, provides each player with a game score, which is a noisy indication of how well they are currently playing. Since we are playing a two-player game, we use the ratio of the game score of the two players as our utility function.

Features The sentence relevance features ⃗φ and the action-value function features ⃗f consider the attributes of the game state and action, and the words of the sentence. Some of these features compute text overlap between the words of the sentence, and text labels present in the game. The feature function ⃗ψ used for predicate labeling on the other hand operates only on a given sentence and its dependency parse. It computes features which are the Cartesian product of the candidate predicate label with word attributes such as type, part-of-speech tag, and dependency parse information.
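The indicator features of Figure 3 can be read as conjunctions of game attributes with words from the selected manual sentence. The sketch below is purely illustrative: the attribute accessors (tile_under, has_river, action.name) and feature keys are invented stand-ins for whatever the instrumented game actually exposes.

```python
def action_value_features(state, action, sentence_words, action_words, state_words):
    """Sparse binary features conjoining game attributes with words of the selected sentence.
    All attribute and accessor names here are hypothetical, chosen only for illustration."""
    feats = {}
    tile = state.tile_under(action.unit)                             # hypothetical accessor
    feats[("action", action.name)] = 1.0
    feats[("action", action.name, "tile-has-river", tile.has_river)] = 1.0
    for word in sentence_words:
        # text-overlap features: a manual word paired with the candidate action or a tile attribute
        feats[("word-action", word, action.name)] = 1.0
        if word in action_words:                                     # word labeled as action-description
            feats[("action-word", word, action.name)] = 1.0
        if word in state_words:                                      # word labeled as state-description
            feats[("state-word", word, "tile-has-river", tile.has_river)] = 1.0
    return feats
```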
Overall, ⃗f, ⃗φ and ⃗ψ compute approximately 306,800, 158,500, and 7,900 features respectively. Figure 3 shows some examples of these features.

6 Experimental Setup

Datasets We use the official game manual for Civilization II as our strategy guide. This manual uses a large vocabulary of 3638 words, and is composed of 2083 sentences, each on average 16.9 words long.

Experimental Framework To apply our method to the Civilization II game, we use the game's open source implementation Freeciv (http://freeciv.wikia.com, game version 2.2). We instrument the game to allow our method to programmatically measure the current state of the game and to execute game actions. The Stanford parser (de Marneffe et al., 2006) was used to generate the dependency parse information for sentences in the game manual.

Across all experiments, we start the game at the same initial state and run it for 100 steps. At each step, we perform 500 Monte-Carlo roll-outs. Each roll-out is run for 20 simulated game steps before halting the simulation and evaluating the outcome. For our method, and for each of the baselines, we run 200 independent games in the above manner, with evaluations averaged across the 200 runs. We use the same experimental settings across all methods, and all model parameters are initialized to zero. The test environment consisted of typical PCs with single Intel Core i7 CPUs (4 hyper-threaded cores each), with the algorithms executing 8 simulation roll-outs in parallel. In this setup, a single game of 100 steps runs in approximately 1.5 hours.

Evaluation Metrics We wish to evaluate two aspects of our method: how well it leverages textual information to improve game play, and the accuracy of the linguistic analysis it produces. We evaluate the first aspect by comparing our method against various baselines in terms of the percentage of games won against the built-in AI of Freeciv. This AI is a fixed algorithm designed using extensive knowledge of the game, with the intention of challenging human players. As such, it provides a good open-reference baseline. Since full games can last for multiple days, we compute the percentage of games won within the first 100 game steps as our primary evaluation. To confirm that performance under this evaluation is meaningful, we also compute the percentage of full games won over 50 independent runs, where each game is run to completion.

Table 1: Win rate of our method and several baselines within the first 100 game steps, while playing against the built-in game AI. Games that are neither won nor lost are still ongoing. Our model's win rate is statistically significant against all baselines except sentence relevance. All results are averaged across 200 independent game runs. The standard errors shown are for percentage wins.

  Method               % Win   % Loss   Std. Err.
  Random                 0      100        —
  Built-in AI            0        0        —
  Game only             17.3      5.3     ± 2.7
  Sentence relevance    46.7      2.8     ± 3.5
  Full model            53.7      5.9     ± 3.5
  Random text           40.3      4.3     ± 3.4
  Latent variable       26.1      3.7     ± 3.1

Table 2: Win rate of our method and two baselines on 50 full length games played against the built-in AI.

  Method             % Wins   Standard Error
  Game only           45.7        ± 7.0
  Latent variable     62.2        ± 6.9
  Full model          78.8        ± 5.8

7 Results

Game performance As shown in Table 1, our language aware Monte-Carlo algorithm substantially outperforms several baselines – on average winning 53.7% of all games within the first 100 steps.
The dismal performance, on the other hand, of both the random baseline and the game's own built-in AI (playing against itself) is an indicator of the difficulty of the task. This evaluation is an underestimate since it assumes that any game not won within the first 100 steps is a loss. As shown in Table 2, our method wins over 78% of full length games.

To characterize the contribution of the language components to our model's performance, we compare our method against two ablative baselines. The first of these, game-only, does not take advantage of any textual information. It attempts to model the action value function Q(s, a) only in terms of the attributes of the game state and action. The performance of this baseline – a win rate of 17.3% – effectively confirms the benefit of automatically extracted textual information in the context of our task. The second ablative baseline, sentence-relevance, is identical to our model, but lacks the predicate labeling component. This method wins 46.7% of games, showing that while identifying the text relevant to the current game state is essential, a deeper structural analysis of the extracted text provides substantial benefits.

One possible explanation for the improved performance of our method is that the non-linear approximation simply models game characteristics better, rather than modeling textual information. We directly test this possibility with two additional baselines. The first, random-text, is identical to our full model, but is given a document containing random text. We generate this text by randomly permuting the word locations of the actual game manual, thereby maintaining the document's overall statistical properties. The second baseline, latent variable, extends the linear action-value function Q(s, a) of the game only baseline with a set of latent variables – i.e., it is a four layer neural network, where the second layer's units are activated only based on game information. As shown in Table 1 both of these baselines significantly underperform with respect to our model, confirming the benefit of automatically extracted textual information in the context of this task.

[Figure 4: Examples of our method's sentence relevance and predicate labeling decisions. The box above shows two sentences (identified by check marks) which were predicted as relevant, and two which were not. The box below shows the predicted predicate structure of three sentences, with "S" indicating state description, "A" action description and background words unmarked. Mistakes are identified with crosses.]

Sentence Relevance Figure 4 shows examples of the sentence relevance decisions produced by our method. To evaluate the accuracy of these decisions, we ideally require a ground-truth relevance annotation of the game's user manual.
This, however, is impractical since the relevance decision is dependent on the game context, and is hence specific to each time step of each game instance. Therefore, for the purposes of this evaluation, we modify the game manual by adding to it sentences randomly selected from the Wall Street Journal corpus (Marcus et al., 1993) – sentences that are highly unlikely to be relevant to game play. We then evaluate the accuracy with which sentences from the original manual are picked as relevant. In this evaluation, our method achieves an average accuracy of 71.8%. Given that our model only has to differentiate between the game manual text and the Wall Street Journal, this number may seem disappointing. Furthermore, as can be seen from Figure 5, the sentence relevance accuracy varies widely as the game progresses, with a high average of 94.2% during the initial 25 game steps.

[Figure 5: Accuracy of our method's sentence relevance predictions, averaged over 100 independent runs.]

In reality, this pattern of high initial accuracy followed by a lower average is not entirely surprising: the official game manual for Civilization II is written for first time players. As such, it focuses on the initial portion of the game, providing little strategy advice relevant to subsequent game play. (This is reminiscent of opening books for games like Chess or Go, which aim to guide the player to a playable middle game.) If this is the reason for the observed sentence relevance trend, we would also expect the final layer of the neural network to emphasize game features over text features after the first 25 steps of the game. This is indeed the case, as can be seen from Figure 6.

[Figure 6: Difference between the norms of the text features and game features of the output layer of the neural network. Beyond the initial 25 steps of the game, our method relies increasingly on game features.]

To further test this hypothesis, we perform an experiment where the first 50 steps of the game are played using our full model, and the subsequent 50 steps are played without using any textual information. This hybrid method performs as well as our full model, achieving a 53.3% win rate, confirming that textual information is most useful during the initial phase of the game. This shows that our method is able to accurately identify relevant sentences when the information they contain is most pertinent to game play.

Predicate Labeling Figure 4 shows examples of the predicate structure output of our model. We evaluate the accuracy of this labeling by comparing it against a gold-standard annotation of the game manual. Table 3 shows the performance of our method in terms of how accurately it labels words as state, action or background, and also how accurately it differentiates between state and action words. In addition to showing a performance improvement over the random baseline, these results display two clear trends: first, under both evaluations, labeling accuracy is higher during the initial stages of the game. This is to be expected since the model relies heavily on textual features only during the beginning of the game (see Figure 6). Second, the model clearly performs better in differentiating between state and action words, rather than in the three-way labeling.
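The diagnostic behind Figure 6 amounts to splitting the output-layer weight vector ⃗w into its text-related and game-only coordinates and comparing the norms of the two parts. A minimal sketch, assuming each feature key records whether it involves words from the manual (the is_text_feature predicate is an invented stand-in):

```python
import math

def text_vs_game_norm_difference(w, is_text_feature):
    """Difference between the norms of the text-feature and game-feature parts of the output layer.
    is_text_feature is an assumed predicate over feature keys; w is a sparse weight dict."""
    text_sq = sum(v * v for k, v in w.items() if is_text_feature(k))
    game_sq = sum(v * v for k, v in w.items() if not is_text_feature(k))
    return math.sqrt(text_sq) - math.sqrt(game_sq)   # positive when text features dominate
```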
To verify the usefulness of our method’s predicate labeling, we perform a final set of experiments where predicate labels are selected uniformly at random within our full model. This random labeling results in a win rate of 44% – a performance similar to the sentence relevance model which uses no predicate information. This confirms that our method is able identify a predicate structure which, while noisy, provides information relevant to game play. Method S/A/B S/A Random labeling 33.3% 50.0% Model, first 100 steps 45.1% 78.9% Model, first 25 steps 48.0% 92.7% Table 3: Predicate labeling accuracy of our method and a random baseline. Column “S/A/B” shows performance on the three-way labeling of words as state, action or background, while column “S/A” shows accuracy on the task of differentiating between state and action words. state: grassland "city" state: grassland "build" action: settlers_build_city "city" action: set_research "discovery" game attribute word Figure 7: Examples of word to game attribute associations that are learned via the feature weights of our model. Figure 7 shows examples of how this textual information is grounded in the game, by way of the associations learned between words and game attributes in the final layer of the full model. 8 Conclusions In this paper we presented a novel approach for improving the performance of control applications by automatically leveraging high-level guidance expressed in text documents. Our model, which operates in the Monte-Carlo framework, jointly learns to identify text relevant to a given game state in addition to learning game strategies guided by the selected text. We show that this approach substantially outperforms language-unaware alternatives while learning only from environment feedback. Acknowledgments The authors acknowledge the support of the NSF (CAREER grant IIS-0448168, grant IIS-0835652), DARPA Machine Reading Program (FA8750-09C-0172) and the Microsoft Research New Faculty Fellowship. Thanks to Michael Collins, Tommi Jaakkola, Leslie Kaelbling, Nate Kushman, Sasha Rush, Luke Zettlemoyer, the MIT NLP group, and the ACL reviewers for their suggestions and comments. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations. 276 References R. Balla and A. Fern. 2009. UCT for tactical assault planning in real-time strategy games. In 21st International Joint Conference on Artificial Intelligence. Darse Billings, Lourdes Pe˜na Castillo, Jonathan Schaeffer, and Duane Szafron. 1999. Using probabilistic knowledge and simulation to play poker. In 16th National Conference on Artificial Intelligence, pages 697–703. S.R.K Branavan, Harr Chen, Luke Zettlemoyer, and Regina Barzilay. 2009. Reinforcement learning for mapping instructions to actions. In Proceedings of ACL, pages 82–90. S.R.K Branavan, Luke Zettlemoyer, and Regina Barzilay. 2010. Reading between the lines: Learning to map high-level instructions to commands. In Proceedings of ACL, pages 1268–1277. S.R.K. Branavan, David Silver, and Regina Barzilay. 2011. Non-linear monte-carlo search in civilization ii. In Proceedings of IJCAI. John S. Bridle. 1990. Training stochastic model recognition algorithms as networks can lead to maximum mutual information estimation of parameters. In Advances in NIPS, pages 211–217. Arthur E. Bryson and Yu-Chi Ho. 1969. Applied optimal control: optimization, estimation, and control. Blaisdell Publishing Company. 
Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In LREC 2006.
Jacob Eisenstein, James Clarke, Dan Goldwasser, and Dan Roth. 2009. Reading to learn: Constructing features from semantic abstracts. In Proceedings of EMNLP, pages 958–967.
Michael Fleischman and Deb Roy. 2005. Intentional context in situated natural language learning. In Proceedings of CoNLL, pages 104–111.
S. Gelly, Y. Wang, R. Munos, and O. Teytaud. 2006. Modification of UCT with patterns in Monte-Carlo Go. Technical Report 6062, INRIA.
Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330.
Raymond J. Mooney. 2008a. Learning language from its perceptual context. In Proceedings of ECML/PKDD.
Raymond J. Mooney. 2008b. Learning to connect language and perception. In Proceedings of AAAI, pages 1598–1601.
James Timothy Oates. 2001. Grounding knowledge in sensors: Unsupervised learning for language and planning. Ph.D. thesis, University of Massachusetts Amherst.
David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. 1986. Learning representations by back-propagating errors. Nature, 323:533–536.
J. Schäfer. 2008. The UCT algorithm applied to games with imperfect information. Diploma Thesis. Otto-von-Guericke-Universität Magdeburg.
B. Sheppard. 2002. World-championship-caliber Scrabble. Artificial Intelligence, 134(1-2):241–275.
D. Silver, R. Sutton, and M. Müller. 2008. Sample-based learning and search with permanent and transient memories. In 25th International Conference on Machine Learning, pages 968–975.
D. Silver. 2009. Reinforcement Learning and Simulation-Based Search in the Game of Go. Ph.D. thesis, University of Alberta.
Jeffrey Mark Siskind. 2001. Grounding the lexical semantics of verbs in visual perception using force dynamics and event logic. Journal of Artificial Intelligence Research, 15:31–90.
N. Sturtevant. 2008. An analysis of UCT in multi-player games. In 6th International Conference on Computers and Games, pages 37–49.
Richard S. Sutton and Andrew G. Barto. 1998. Reinforcement Learning: An Introduction. The MIT Press.
G. Tesauro and G. Galperin. 1996. On-line policy improvement using Monte-Carlo search. In Advances in Neural Information Processing 9, pages 1068–1074.
Adam Vogel and Daniel Jurafsky. 2010. Learning to follow navigational directions. In Proceedings of the ACL, pages 806–814.
Chen Yu and Dana H. Ballard. 2004. On the integration of grounding language and learning objects. In Proceedings of AAAI, pages 488–493.
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 278–287, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Creative Language Retrieval: A Robust Hybrid of Information Retrieval and Linguistic Creativity Tony Veale School of Computer Science and Informatics, University College Dublin, Belfield, Dublin D4, Ireland. [email protected] Abstract Information retrieval (IR) and figurative language processing (FLP) could scarcely be more different in their treatment of language and meaning. IR views language as an open-ended set of mostly stable signs with which texts can be indexed and retrieved, focusing more on a text’s potential relevance than its potential meaning. In contrast, FLP views language as a system of unstable signs that can be used to talk about the world in creative new ways. There is another key difference: IR is practical, scalable and robust, and in daily use by millions of casual users. FLP is neither scalable nor robust, and not yet practical enough to migrate beyond the lab. This paper thus presents a mutually beneficial hybrid of IR and FLP, one that enriches IR with new operators to enable the non-literal retrieval of creative expressions, and which also transplants FLP into a robust, scalable framework in which practical applications of linguistic creativity can be implemented. 1 Introduction Words should not always be taken at face value. Figurative devices like metaphor can communicate far richer meanings than are evident from a superficial – and perhaps literally nonsensical – reading. Figurative Language Processing (FLP) thus uses a variety of special mechanisms and representations, to assign non-literal meanings not just to metaphors, but to similes, analogies, epithets, puns and other creative uses of language (see Martin, 1990; Fass, 1991; Way, 1991; Indurkhya, 1992; Fass, 1997; Barnden, 2006; Veale and Butnariu, 2010). Computationalists have explored heterodox solutions to the procedural and representational challenges of metaphor, and FLP more generally, ranging from flexible representations (e.g. the preference semantics of Wilks (1978) and the collative semantics of Fass (1991, 1997)) to processes of cross-domain structure alignment (e.g. structure mapping theory; see Gentner (1983) and Falkenhainer et al. 1989) and even structural inversion (Veale, 2006). Though thematically related, each approach to FLP is broadly distinct, giving computational form to different cognitive demands of creative language: thus, some focus on interdomain mappings (e.g. Gentner, 1983) while others focus more on intra-domain inference (e.g. Barnden, 2006). However, while computationally interesting, none has yet achieved the scalability or robustness needed to make a significant practical impact outside the laboratory. Moreover, such systems tend to be developed in isolation, and are rarely designed to cohere as part of a larger framework of creative reasoning (e.g. Boden, 1994). In contrast, Information Retrieval (IR) is both scalable and robust, and its results translate easily from the laboratory into practical applications (e.g. see Salton, 1968; Van Rijsbergen, 1979). Whereas FLP derives its utility and its fragility from its attempts to identify deeper meanings beneath the surface, the widespread applicability of IR stems directly from its superficial treatment of language 278 and meaning. 
IR does not distinguish between creative and conventional uses of language, or between literal and non-literal meanings. IR is also remarkably modular: its components are designed to work together interchangeably, from stemmers and indexers to heuristics for query expansion and document ranking. Yet, because IR treats all language as literal language, it relies on literal matching between queries and the texts that they retrieve. Documents are retrieved precisely because they contain stretches of text that literally resemble the query. This works well in the main, but it means that IR falls flat when the goal of retrieval is not to identify relevant documents but to retrieve new and creative ways of expressing a given idea. To retrieve creative language, and to be potentially surprised or inspired by the results, one needs to facilitate a non-literal relationship between queries and the texts that they match. The complementarity of FLP and IR suggests a productive hybrid of both paradigms. If the most robust elements of FLP are used to provide new non-literal query operators for IR, then IR can be used to retrieve potentially new and creative ways of speaking about a topic from a large text collection. In return, IR can provide a stable, robust and extensible platform on which to use these operators to build FLP systems that exhibit linguistic creativity. In the next section we consider the related work on which the current realization of these ideas is founded, before presenting a specific trio of new semantic query operators in section 3. We describe three simple but practical applications of this creative IR paradigm in section 4. Empirical support for the FLP intuitions that underpin our new operators is provided in section 5. The paper concludes with some closing observations about future goals and developments in section 6. 2 Related Work and Ideas IR works on the premise that a user can turn an information need into an effective query by anticipating the language that is used to talk about a given topic in a target collection. If the collection uses creative language in speaking about a topic, then a query must also contain the seeds of this creative language. Veale (2004) introduces the idea of creative information retrieval to explore how an IR system can itself provide a degree of creative anticipation, acting as a mediator between the literal specification of a meaning and the retrieval of creative articulations of this meaning. This anticipation ranges from simple re-articulation (e.g. a text may implicitly evoke “Qur’an” even if it only contains “Muslim bible”) to playful allusions and epithets (e.g. the CEO of a rubber company may be punningly described as a “rubber baron”). A creative IR system may even anticipate out-ofdictionary words, like chocoholic and sexoholic. Conventional IR systems use a range of query expansion techniques to automatically bolster a user’s query with additional keywords or weights, to permit the retrieval of relevant texts it might not otherwise match (e.g. Vernimb, 1977; Voorhees, 1994). Techniques vary, from the use of stemmers and morphological analysis to the use of thesauri (such as WordNet; see Fellbaum, 1998; Voorhees, 1998) to pad a query with synonyms, to the use of statistical analysis to identify more appropriate context-sensitive associations and near-synonyms (e.g. Xu and Croft, 1996). 
While some techniques may suggest conventional metaphors that have become lexicalized in a language, they are unlikely to identify relatively novel expressions. Crucially, expansion improves recall at the expense of overall precision, making automatic techniques even more dangerous when the goal is to retrieve results that are creative and relevant. Creative IR must balance a need for fine user control with the statistical breadth and convenience of automatic expansion. Fortunately, statistical corpus analysis is an obvious area of overlap for IR and FLP. Distributional analyses of large corpora have been shown to produce nuanced models of lexical similarity (e.g. Weeds and Weir, 2005) as well as contextsensitive thesauri for a given domain (Lin, 1998). Hearst (1992) shows how a pattern like “Xs and other Ys” can be used to construct more fluid, context-specific taxonomies than those provided by WordNet (e.g. “athletes and other celebrities” suggests a context in which athletes are viewed as stars). Mason (2004) shows how statistical analysis can automatically detect and extract conventional metaphors from corpora, though creative metaphors still remain a tantalizing challenge. Hanks (2005) shows how the “Xs like A, B and C” construction allows us to derive flexible ad-hoc categories from corpora, while Hanks (2006) argues for a gradable conception of metaphoricity based on word-sense distributions in corpora. 279 Veale and Hao (2007) exploit the simile frame “as X as Y” to harvest a great many common similes and their underlying stereotypes from the web (e.g. “as hot as an oven”), while Veale and Hao (2010) show that the pattern “about as X as Y” retrieves an equally large collection of creative (if mostly ironic) comparisons. These authors demonstrate that a large vocabulary of stereotypical ideas (over 4000 nouns) and their salient properties (over 2000 adjectives) can be harvested from the web. We now build on these results to develop a set of new semantic operators, that use corpus-derived knowledge to support finely controlled non-literal matching and automatic query expansion. 3 Creative Text Retrieval In language, creativity is always a matter of construal. While conventional IR queries articulate a need for information, creative IR queries articulate a need for expressions to convey the same meaning in a fresh or unusual way. A query and a matching phrase can be figuratively construed to have the same meaning if there is a non-literal mapping between the elements of the query and the elements of the phrase. In creative IR, this non-literal mapping is facilitated by the query’s explicit use of semantic wildcards (e.g. see Mihalcea, 2002). The wildcard * is a boon for power-users of the Google search engine, precisely because it allows users to focus on the retrieval of matching phrases rather than relevant documents. For instance, * can be used to find alternate ways of instantiating a culturally-established linguistic pattern, or “snowclone”: thus, the Google queries “In * no one can hear you scream” (from Alien), “Reader, I * him” (from Jane Eyre) and “This is your brain on *” (from a famous TV advert) find new ways in which old patterns have been instantiated for humorous effect on the Web. On a larger scale, Veale and Hao (2007) used the * wildcard to harvest web similes, but reported that harvesting cultural data with wildcards is not a straightforward process. Google and other engines are designed to maximize document relevance and to rank results accordingly. 
They are not designed to maximize the diversity of results, or to find the largest set of wildcard bindings. Nor are they designed to find the most commonplace bindings for wildcards. Following Guilford’s (1950) pioneering work, diversity is widely considered a key component in the psychology of creativity. By focusing on the phrase level rather than the document level, and by returning phrase sets rather than document sets, creative IR maximizes diversity by finding as many bindings for its wildcards as a text collection will support. But we need more flexible and precise wildcards than *. We now consider three varieties of semantic wildcards that build on insights from corpus-linguistic approaches to FLP. 3.1 The Neighborhood Wildcard ?X Semantic query expansion replaces a query term X with a set {X, X1, X2, …, Xn} where each Xi is related to X by a prescribed lexico-semantic relationship, such as synonymy, hyponymy or meronymy. A generic, lightweight resource like WordNet can provide these relations, or a richer ontology can be used if one is available (e.g. see Navigli and Velardi, 2003). Intuitively, each query term suggests other terms from its semantic neighborhood, yet there are practical limits to this intuition. Xi may not be an obvious or natural substitute for X. A neighborhood can be drawn too small, impacting recall, or too large, impacting precision. Corpus analysis suggests an approach that is both semantic and pragmatic. As noted in Hanks (2005), languages provide constructions for building ad-hoc sets of items that can be considered comparable in a given context. For instance, a coordination of bare plurals suggests that two ideas are related at a generic level, as in “priests and imams” or “mosques and synagogues”. More generally, consider the pattern “X and Y”, where X and Y are proper-names (e.g., “Zeus and Hera”), or X and Y are inflected nouns or verbs with the same inflection (e.g., the plurals “cats and dogs” or the verb forms “kicking and screaming”). Millions of matches for this pattern can be found in the Google 3-grams (Brants and Franz, 2006), allowing us to build a map of comparable terms by linking the root-forms of X and Y with a similarity score obtained via a WordNet-based measure (e.g. see Budanitsky and Hirst (2006) for a good selection). The pragmatic neighborhood of a term X can be defined as {X, X1, X2, …, Xn}, so that for each Xi, the Google 3-grams contain “X+inf and Xi+inf” or “X+inf and Xi+inf”. The boundaries of neighborhoods are thus set by usage patterns: if ?X denotes the neighborhood of X, then ?artist 280 matches not just artist, composer and poet, but studio, portfolio and gallery, and many other terms that are semantically dissimilar but pragmatically linked to artist. Since each Xi ∈ ?X is ranked by similarity to X, query matches can also be ranked by similarity. When X is an adjective, then ?X matches any element of {X, Xi, X2, …, Xn}, where each Xi pragmatically reinforces X, and X pragmatically reinforces each Xi. To ensure X and Xi really are mutually reinforcing adjectives, we use the doubleground simile pattern “as X and Xi as” to harvest {X1, …, Xn} for each X. Moreover, to maximize recall, we use the Google API (rather than Google ngrams) to harvest suitable bindings for X and Xi from the web. For example, @witty = {charming, clever, intelligent, entertaining, …, edgy, fun}. 3.2 The Cultural Stereotype Wildcard @X Dickens claims in A Christmas Carol that “the wisdom of a people is in the simile”. 
Similes exploit familiar stereotypes to describe a less familiar concept, so one can learn a great deal about a culture and its language from the similes that have the most currency (Taylor, 1954). The wildcard @X builds on the results of Veale and Hao (2007) to allow creative IR queries to retrieve matches on the basis of cultural expectations. This foundation provides a large set of adjectival features (over 2000) for a larger set of nouns (over 4000) denoting stereotypes for which these features are salient. If N is a noun, then @N matches any element of the set {A1, A2, …, An}, where each Ai is an adjective denoting a stereotypical property of N. For example, @diamond matches any element of {transparent, immutable, beautiful, tough, expensive, valuable, shiny, bright, lasting, desirable, strong, …, hard} . If A is an adjective, then @A matches any element of the set {N1, N2, …, Nn}, where each Ni is a noun denoting a stereotype for which A is a culturally established property. For example, @tall matches any element of {giraffe, skyscraper, tree, redwood, tower, sunflower, lighthouse, beanstalk, rocket, …, supermodel}. Stereotypes crystallize in a language as clichés, so one can argue that stereotypes and clichés are little or no use to a creative IR system. Yet, as demonstrated in Fishlov (1992), creative language is replete with stereotypes, not in their clichéd guises, but in novel and often incongruous combinations. The creative value of a stereotype lies in how it is used, as we’ll show later in section 4. 3.3 The Ad-Hoc Category Wildcard ^X Barsalou (1983) introduced the notion of an adhoc category, a cross-cutting collection of often disparate elements that cohere in the context of a specific task or goal. The ad-hoc nature of these categories is reflected in the difficulty we have in naming them concisely: the cumbersome “things to take on a camping trip” is Barsalou’s most cited example. But ad-hoc categories do not replace natural kinds; rather, they supplement an existing system of more-or-less rigid categories, such as the categories found in WordNet. The semantic wildcard ^C matches C and any element of {C1, C2, …, Cn}, where each Ci is a member of the category named by C. ^C can denote a fixed category in a resource like WordNet or even Wikipedia; thus, ^fruit matches any member of {apple, orange, pear, …, lemon} and ^animal any member of {dog, cat, mouse, …, deer, fox}. Ad-hoc categories arise in creative IR when the results of a query – or more specifically, the bindings for a query wildcard – are funneled into a new user-defined category. For instance, the query “^fruit juice” matches any phrase in a text collection that denotes a named fruit juice, from “lemon juice” to “pawpaw juice”. A user can now funnel the bindings for ^fruit in this query into an ad-hoc category juicefruit, to gather together those fruits that are used for their juice. Elements of ^juicefruit are ranked by the corpus frequencies discovered by the original query; low-frequency juicefruit members in the Google ngrams include coffee, raisin, almond, carob and soybean. Ad-hoc categories allow users of IR to remake a category system in their own image, and create a new vocabulary of categories to serve their own goals and interests, as when “^food pizza” is used to suggest disparate members for the ad-hoc category pizzatopping. The more subtle a query, the more disparate the elements it can funnel into an ad-hoc category. 
We now consider how basic semantic wildcards can be combined to generate even more diverse results. 3.4 Compound Operators Each wildcard maps a query term onto a set of ex281 pansion terms. The compositional semantics of a wildcard combination can thus be understood in set-theoretic terms. The most obvious and useful combinations of ?, @ and ^ are described below: ?? Neighbor-of-a-neighbor: if ?X matches any element of {X, X1, X2, …, Xn} then ??X matches any of ?X ∪ ?X1 ∪ … ∪ ?Xn, where the ranking of Xij in ??X is a function of the ranking of Xi in ?X and the ranking of Xij in ?Xi. Thus, ??artist matches far more terms than ?artist, yielding more diversity, more noise, and more creative potential. @@ Stereotype-of-a-stereotype: if @X matches any element of {X1, X2, …, Xn} then @@X matches any of @X1 ∪ @X2 ∪ … ∪ @Xn. For instance, @@diamond matches any stereotype that shares a salient property with diamond, and @@sharp matches any salient property of any noun for which sharp is a stereotypical property. ?@ Neighborhood-of-a-stereotype: if @X matches any element of {X1, X2, …, Xn} then ? @ X matches any of ?X1 ∪ ?X2 ∪ … ∪ ?Xn. Thus, ?@cunning matches any term in the pragmatic neighborhood of a stereotype for cunning, while ?@knife matches any property that mutually reinforces any stereotypical property of knife @? Stereotypes-in-a-neighborhood: if ?X matches any of {X, X1, X2, …, Xn} then @?X matches any of @X ∪ @X1 ∪ … ∪ @Xn. Thus, @?corpse matches any salient property of any stereotype in the neighborhood of corpse, while @?fast matches any stereotype noun with a salient property that is similar to, and reinforced by, fast. ?^ Neighborhood-of-a-category: if ^C matches any of {C, C1, C2, …, Cn} then ?^C matches any of ?C ∪ ?C1 ∪ … ∪ ?Cn. ^? Categories-in-a-neighborhood: if ?X matches any of {X, X1, X2, …, Xn} then ^?X matches any of ^X ∪ ^X1 ∪ … ∪ ^Xn. @^ Stereotypes-in-a-category: if ^C matches any of {C, C1, C2, …, Cn} then @^C matches any of @C ∪ @C1 ∪ … ∪ @Cn. ^@ Members-of-a-stereotype-category: if @X matches any element of {X1, X2, …, Xn} then ^@X matches any of ^X1 ∪ ^X2 ∪ … ∪ ^Xn. So ^@strong matches any member of a category (such as warrior) that is stereotypically strong. 4 Applications of Creative Retrieval The Google ngrams comprise a vast array of extracts from English web texts, of 1 to 5 words in length (Brants and Franz, 2006). Many extracts are well-formed phrases that give lexical form to many different ideas. But an even greater number of ngrams are not linguistically well-formed. The Google ngrams can be seen as a lexicalized idea space, embedded within a larger sea of noise. Creative IR can be used to explore this idea space. Each creative query is a jumping off point in a space of lexicalized ideas that is implied by a large corpus, with each successive match leading the user deeper into the space. By turning matches into queries, a user can perform a creative exploration of the space of phrases and ideas (see Boden, 1994) while purposefully sidestepping the noise of the Google ngrams. Consider the pleonastic query “Catholic ?pope”. Retrieved phrases include, in descending order of lexical similarity, “Catholic president”, “Catholic politician”, “Catholic king”, “Catholic emperor” and “Catholic patriarch”. Suppose a user selects “Catholic king”: the new query “Catholic ?king” now retrieves “Catholic queen”, “Catholic court”, “Catholic knight”, “Catholic kingdom” and “Catholic throne”. 
The subsequent query “Catholic ?kingdom” in turn retrieves “Catholic dynasty” and “Catholic army”, among others. In this way, creative IR allows a user to explore the text-supported ramifications of a metaphor like Popes are Kings (e.g., if popes are kings, they too might have queens, command armies, found dynasties, or sit on thrones). Creative IR gives users the tools to conduct their own explorations of language. The more wildcards a query contains, the more degrees of freedom it offers to the explorer. Thus, the query “?scientist ‘s ?laboratory” uncovers a plethora of analogies for the relationship between scientists and their labs: matches in the Google 3-grams include “technician’s workshop”, “artist’s studio”, “chef’s kitchen” and “gardener’s greenhouse”. 282 4.1 Metaphors with Aristotle For a term X, the wildcard ?X suggests those other terms that writers have considered to be comparable to X, while ??X extrapolates beyond the corpus evidence to suggest an even larger space of potential comparisons. A meaningful metaphor can be constructed for X by framing X with any stereotype to which it is pragmatically comparable, that is, any stereotype in ?X. Collectively, these stereotypes can impart the properties @?X to X. Suppose one wants to metaphorically ascribe the property P to X. The set @P contains those stereotypes for which P is culturally salient. Thus, close metaphors for X (what MacCormac (1985) dubs epiphors) in the context of P are suggested by ?X ∩ @P. More distant metaphors (MacCormac dubs these diaphors) are suggested by ??X ∩ @P. For instance, to describe a scholar as wise, one can use poet, yogi, philosopher or rabbi as comparisons. Yet even a simple metaphor will impart other features to a topic. If ^PS denotes the ad-hoc set of additional properties that may be inferred for X when a stereotype S is used to convey property P, then ^PS = ?P ∩ @@P. The query “^PS X” now finds corpus-attested elements of ^PS that can meaningfully be used to modify X. These IR formulations are used by Aristotle, an online metaphor generator, to generate targeted metaphors that highlight a property P in a topic X. Aristotle uses the Google ngrams to supply values for ?X, ??X, ?P and ^PS. The system can be accessed at: www.educatedinsolence.com/aristotle 4.2 Expressing Attitude with Idiom Savant Our retrieval goals in IR are often affective in nature: we want to find a way of speaking about a topic that expresses a particular sentiment and carries a certain tone. However, affective categories are amongst the most cross-cutting structures in language. Words for disparate ideas are grouped according to the sentiments in which they are generally held. We respect judges but dislike critics; we respect heroes but dislike killers; we respect sharpshooters but dislike snipers; and respect rebels but dislike insurgents. It seems therefore that the particulars of sentiment are best captured by a set of culture-specific ad-hoc categories. We thus construct two ad-hoc categories, ^posword and ^negword, to hold the most obviously positive or negative words in Whissell’s (1989) Dictionary of Affect. We then grow these categories to include additional reinforcing elements from their pragmatic neighborhoods, ?^posword and ?^negword. As these categories grow, so too do their neighborhoods, allowing a simple semi-automated bootstrapping process to significantly grow the categories over several iterations. 
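One way to read this semi-automated bootstrapping loop is sketched below; the neighbourhood() function is assumed to return the corpus-derived ?-expansion of a word, and the min_support threshold is an invented stand-in for the manual filtering described above.

```python
def bootstrap_category(seed_words, neighbourhood, iterations=3, min_support=2):
    """Grow an ad-hoc category (e.g. ^posword) by repeatedly adding words that
    reinforce several of its current members.
    neighbourhood(word) is assumed to return the corpus-derived ?-expansion of word."""
    category = set(seed_words)
    for _ in range(iterations):
        support = {}
        for word in category:
            for neighbour in neighbourhood(word):
                if neighbour not in category:
                    support[neighbour] = support.get(neighbour, 0) + 1
        # candidates backed by several current members; in practice these are vetted by hand
        category |= {w for w, count in support.items() if count >= min_support}
    return category
```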
We construct two phrasal equivalents of these categories, ^posphrase and ^negphrase, using the queries “^posword - ^pastpart” (e.g., matching “high-minded” and “sharp-eyed”) and “^negword - ^pastpart” (e.g., matching “flatfooted” and “dead-eyed”) to mine affective phrases from the Google 3-grams. The resulting ad-hoc categories (of ~600 elements each) are manually edited to fix any obvious mis-categorizations. Idiom Savant is a web application that uses ^posphrase and ^negphrase to suggest flattering and insulting epithets for a given topic. The query “^posphrase ?X” retrieves phrases for a topic X that put a positive spin on a related topic to which X is sometimes compared, while “^negphrase ?X” conversely imparts a negative spin. Thus, for politician, the Google 4-grams provide the flattering epithets “much-needed leader”, “awe-inspiring leader”, “hands-on boss” and “far-sighted statesman”, as well as insults like “power-mad leader”, “back-stabbing boss”, “ice-cold technocrat” and “self-promoting hack”. Riskier diaphors can be retrieved via “^posphrase ??X” and “^negphrase ??X”. Idiom Savant is accessible online at: www.educatedinsolence.com/idiom-savant/ 4.3 Poetic Similes with The Jigsaw Bard The well-formed phrases of a large corpus can be viewed as the linguistic equivalent of objets trouvés in art: readymade or “found” objects that might take on fresh meanings in a creative context. The phrase “robot fish”, for instance, denotes a moreor-less literal object in the context of autonomous robotic submersibles, but can also be used to convey a figurative meaning as part of a creative comparison (e.g., “he was as cold as a robot fish”). Fishlov (1992) argues that poetic comparisons are most resonant when they combine mutuallyreinforcing (if distant) ideas, to create memorable images and evoke nuanced feelings. Building on Fishlov’s argument, creative IR can be used to turn 283 the readymade phrases of the Google ngrams into vehicles for creative comparison. For a topic X and a property P, simple similes of the form “X is as P as S” are easily generated, where S ∈ @P ∩ ??X. Fishlov would dub these non-poetic similes (NPS). However, the query “?P @P” will retrieve corpus-attested elaborations of stereotypes in @P to suggest similes of the form “X is as P as P1 S”, where P1 ∈ ?P. These similes exhibit elements of what Fishlov dubs poetic similes (PS). Why say “as cold as a fish” when you can say “as cold as a wet fish”, “a dead haddock”, “a wet January”, “a frozen corpse”, or “a heartless robot”? Complex queries can retrieve more creative combinations, so “@P @P” (e.g. “robot fish” or “snow storm” for cold), “?P @P @P” (e.g. “creamy chocolate mousse” for rich) and “@P - ^pastpart @P” (e.g. “snow-covered graveyard” and “bullet-riddled corpse” for cold) each retrieve ngrams that blend two different but overlapping stereotypes. Blended properties also make for nuanced similes of the form “as P and ?P as S”, where S ∈ @P ∩ @?P. While one can be “as rich as a fat king”, something can be “as rich and enticing as a chocolate truffle”, “a chocolate brownie”, “a chocolate fruitcake”, and even “a chocolate king”. The Jigsaw Bard is a web application that harnesses the readymades of the Google ngrams to formulate novel similes from existing phrases. By mapping blended properties to ngram phrases that combine multiple stereotypes, the Bard expands its generative scope considerably, allowing this application to generate hundreds of thousands of evocative comparisons. 
The Bard can be accessed online at: www.educatedinsolence.com/jigsaw/ 5 Empirical Evaluation Though ^ is the most overtly categorical of our wildcards, all three wildcards – ?, @ and ^ – are categorical in nature. Each has a semantic or pragmatic membership function that maps a term onto an expansion set of related members. The membership functions for specific uses of ^ are created in an ad-hoc fashion by the users that exploit it; in contrast, the membership functions for uses of @ and ? are derived automatically, via pattern-matching and corpus analysis. Nonetheless, ad-hoc categories in creative IR are often populated with the bindings produced by uses of @ and ? and combinations thereof. In a sense, ?X and @X and their variations are themselves ad-hoc categories. But how well do they serve as categories? Are they large, but noisy? Or too small, with limited coverage? We can evaluate the effectiveness of ? and @, and indirectly that of ^ too, by comparing the use of ? and @ as category builders to a hand-crafted gold standard like WordNet. Other researchers have likewise used WordNet as a gold standard for categorization experiments, and we replicate here the experimental set-up of Almuhareb and Poesio (2004, 2005), which is designed to measure the effectiveness of webacquired conceptual descriptions. Almuhareb and Poesio choose 214 English nouns from 13 of WordNet’s upper-level semantic categories, and proceed to harvest property values for these concepts from the web using the Hearst-like pattern “a|an|the * C is|was”. This pattern yields a combined total of 51,045 values for all 214 nouns; these values are primarily adjectives, such as hot and black for coffee, but noun-modifiers of C are also allowed, such as fruit for cake. They also harvest 8934 attribute nouns, such as temperature and color, using the query “the * of the C is|was”. These values and attributes are then used as the basis of a clustering algorithm to partition the 214 nouns back into their original 13 categories. Comparing these clusters with the original WordNetbased groupings, Almuhareb and Poesio report a cluster accuracy of 71.96% using just values like hot and black (51,045 values), an accuracy of 64.02% using just attributes like temperature and color (8,934 attributes), and an accuracy of 85.5% using both together (a combined 59,979 features). How concisely and accurately does @X describe a noun X for purposes of categorization? Let ^AP denote the set of 214 WordNet nouns used by Almuhareb and Poesio. Then @^AP denotes a set of 2,209 adjectival properties; this should be contrasted with the space of 51,045 adjectival values used by Almuhareb and Poesio. Using the same clustering algorithm over this feature set, @ X achieves a clustering accuracy (as measured via cluster purity) of 70.2%, compared to 71.96% for Almuhareb and Poesio. However, when @X is used to harvest a further set of attribute nouns for X, via web queries of the form “the P * of X” (where P ∈ @X), then @ X augmented with this additional set of attributes (like hands for surgeon) 284 produces a larger space of 7,183 features. This in turn yields a cluster accuracy of 90.2% which contrasts with Almuhareb and Poesio’s 85.5% for 59,979 features. In either case, @X produces comparable clustering quality to Almuhareb and Poesio, with just a small fraction of the features. So how concisely and accurately does ?X describe a noun X for purposes of categorization? While @X denotes a set of salient adjectives, ?X denotes a set of comparable nouns. 
So this time, ?^AP denotes a set of 8,300 nouns in total, to act as a feature space for the 214 nouns of Almuhareb and Poesio. Remember, the contents of each ?X, and of ?^AP overall, are determined entirely by the contents of the Google 3-grams; the elements of ?X are not ranked in any way, and all are treated as equals. When the 8,300 features in ?^AP are clustered into 13 categories, the resulting clusters have a purity of 93.4% relative to WordNet. The pragmatic neighborhood of X, ?X, appears to be an accurate and concise proxy for the meaning of X. What about adjectives? Almuhareb and Poesio’s set of 214 words does not contain adjectives, and besides, WordNet does not impose a category structure on its adjectives. In any case, the role of adjectives in the applications of section 4 is largely an affective one: if X is a noun, then one must have confidence that the adjectives in @X are consonant with our understanding of X, and if P is a property, that the adjectives in ?P evoke much the same mood and sentiment as P. Our evaluation of @X and ?P should thus be an affective one. So how well do the properties in @X capture our sentiments about a noun X? Well enough to estimate the pleasantness of X from the adjectives in @X, perhaps? Whissell’s (1989) dictionary of affect provides pleasantness ratings for a sizeable number of adjectives and nouns (over 8,000 words in total), allowing us to estimate the pleasantness of X as a weighted average of the pleasantness of each Xi in @X (the weights here are web frequencies for the similes that underpin @ in section 3.2). We thus estimate the affect of all stereotype nouns for which Whissell also records a score. A twotailed Pearson test (p < 0.05) shows a positive correlation of 0.5 between these estimates and the pleasantness scores assigned by Whissell. In contrast, estimates based on the pleasantness of adjectives found in corresponding WordNet glosses show a positive correlation of just 0.278. How well do the elements of ?P capture our sentiments toward an adjective P? After all, we hypothesize that the adjectives in ?P are highly suggestive of P, and vice versa. Aristotle and the Jigsaw Bard each rely on ?P to suggest adjectives that evoke an unstated property in a metaphor or simile, or to suggest coherent blends of properties. When we estimate the pleasantness of each adjective P in Whissell’s dictionary via the weighted mean of the pleasantness of adjectives in ?P (again using web frequencies as weights), a two-tailed Pearson test (p < 0.05) shows a correlation of 0.7 between estimates and actual scores. It seems ?P does a rather good job of capturing the feel of P. 6 Concluding Remarks Creative information retrieval is not a single application, but a paradigm that allows us to conceive of many different kinds of application for creatively manipulating text. It is also a tool-kit for implementing such an application, as shown here in the cases of Aristotle, Idiom Savant and Jigsaw Bard. The wildcards @, ? and ^ allow users to formulate their own task-specific ontologies of ad-hoc categories. In a fully automated application, they provide developers with a simple but powerful vocabulary for describing the range and relationships of the words, phrases and ideas to be manipulated. The @, ? and ^ wildcards are just a start. We expect other aspects of figurative language to be incorporated into the framework whenever they prove robust enough for use in an IR context. 
In this respect, we aim to position Creative IR as an open, modular platform in which diverse results in FLP, from diverse researchers, can be meaningfully integrated. One can imagine wildcards for matching potential puns, portmanteau words and other novel forms, as well as wildcards for figurative processes like metonymy, synecdoche, hyperbolae and even irony. Ultimately, it is hoped that creative IR can serve as a textual bridge between high-level creativity and the low-level creative potentials that are implicit in a large corpus. Acknowledgments This work was funded in part by Science Foundation Ireland (SFI), via the Centre for Next Generation Localization. (CNGL). 285 References Almuhareb, A. and Poesio, M. (2004). Attribute-Based and Value-Based Clustering: An Evaluation. In Proc. of EMNLP 2004. Barcelona. Almuhareb, A. and Poesio, M. (2005). Concept Learning and Categorization from the Web. In Proc. of the 27th Annual meeting of the Cognitive Science Society. Barnden, J. A. (2006). Artificial Intelligence, figurative language and cognitive linguistics. In: G. Kristiansen, M. Achard, R. Dirven, and F. J. Ruiz de Mendoza Ibanez (Eds.), Cognitive Linguistics: Current Application and Future Perspectives, 431-459. Berlin: Mouton de Gruyter. Barsalou, L. W. (1983). Ad hoc categories. Memory and Cognition, 11:211–227. Boden, M. (1994). Creativity: A Framework for Research, Behavioural & Brain Sciences 17(3):558568. Brants, T. and Franz, A. (2006). Web 1T 5-gram Ver. 1. Linguistic Data Consortium. Budanitsky, A. and Hirst, G. (2006). Evaluating WordNet-based Measures of Lexical Semantic Relatedness. Computational Linguistics, 32(1):13-47. Falkenhainer, B., Forbus, K. and Gentner, D. (1989). Structure-Mapping Engine: Algorithm and Examples. Artificial Intelligence, 41:1-63. Fass, D. (1991). Met*: a method for discriminating metonymy and metaphor by computer. Computational Linguistics, 17(1):49-90. Fass, D. (1997). Processing Metonymy and Metaphor. Contemporary Studies in Cognitive Science & Technology. New York: Ablex. Fellbaum, C. (1998). WordNet: An Electronic Lexical Database. MIT Press, Cambridge. Fishlov, D. (1992). Poetic and Non-Poetic Simile: Structure, Semantics, Rhetoric. Poetics Today, 14(1), 1-23. Gentner, D. (1983), Structure-mapping: A Theoretical Framework. Cognitive Science 7:155–170. Guilford, J.P. (1950) Creativity, American Psychologist 5(9):444–454. Hanks, P. (2005). Similes and Sets: The English Preposition ‘like’. In: Blatná, R. and Petkevic, V. (Eds.), Languages and Linguistics: Festschrift for Fr. Cermak. Charles University, Prague. Hanks, P. (2006). Metaphoricity is gradable. In: Anatol Stefanowitsch and Stefan Th. Gries (Eds.), CorpusBased Approaches to Metaphor and Metonymy,. 1735. Berlin: Mouton de Gruyter. Hearst, M. (1992). Automatic acquisition of hyponyms from large text corpora. In Proc. of the 14th Int. Conf. on Computational Linguistics, pp 539–545. Indurkhya, B. (1992). Metaphor and Cognition: Studies in Cognitive Systems. Kluwer Academic Publishers, Dordrecht: The Netherlands. Lin, D. (1998). Automatic retrieval and clustering of similar words. In Proc. of the 17th international conference on Computational linguistics, 768-774. MacCormac, E. R. (1985). A Cognitive Theory of Metaphor. MIT Press. Martin, J. H. (1990). A Computational Model of Metaphor Interpretation. New York: Academic Press. Mason, Z. J. (2004). CorMet: A Computational, Corpus-Based Conventional Metaphor Extraction System, Computational Linguistics, 30(1):23-44. Mihalcea, R. (2002). 
The Semantic Wildcard. In Proc. of the LREC Workshop on Creating and Using Semantics for Information Retrieval and Filtering. Canary Islands, Spain, May 2002. Navigli, R. and Velardi, P. (2003). An Analysis of Ontology-based Query Expansion Strategies. In Proc. of the workshop on Adaptive Text Extraction and Mining (ATEM 2003), at ECML 2003, the 14th European Conf. on Machine Learning, 42–49 Salton, G. (1968). Automatic Information Organization and Retrieval. New York: McGraw-Hill. Taylor, A. (1954). Proverbial Comparisons and Similes from California. Folklore Studies 3. Berkeley: University of California Press. Van Rijsbergen, C. J. (1979). Information Retrieval. Oxford: Butterworth-Heinemann. Veale, T. (2004). The Challenge of Creative Information Retrieval. Computational Linguistics and Intelligent Text Processing: Lecture Notes in Computer Science, Volume 2945/2004, 457-467. Veale, T. (2006). Re-Representation and Creative Analogy: A Lexico-Semantic Perspective. New Generation Computing 24, pp 223-240. Veale, T. and Hao, Y. (2007). Making Lexical Ontologies Functional and Context-Sensitive. In Proc. of the 46th Annual Meeting of the Assoc. of Computational Linguistics. Veale, T. and Hao, Y. (2010). Detecting Ironic Intent in Creative Comparisons. In Proc. of ECAI’2010, the 19th European Conference on Artificial Intelligence. 286 Veale, T. and Butnariu, C. (2010). Harvesting and Understanding On-line Neologisms. In: Onysko, A. and Michel, S. (Eds.), Cognitive Perspectives on Word Formation. 393-416. Mouton De Gruyter. Vernimb, C. (1977). Automatic Query Adjustment in Document Retrieval. Information Processing & Management. 13(6):339-353. Voorhees, E. M. (1994). Query Expansion Using Lexical-Semantic Relations. In the proc. of SIGIR 94, the 17th International Conference on Research and Development in Information Retrieval. Berlin: SpringerVerlag, 61-69. Voorhees, E. M. (1998). Using WordNet for text retrieval. WordNet, An Electronic Lexical Database, 285–303. The MIT Press. Way, E. C. (1991). Knowledge Representation and Metaphor. Studies in Cognitive systems. Holland: Kluwer. Weeds, J. and Weir, D. (2005). Co-occurrence retrieval: A flexible framework for lexical distributional similarity. Computational Linguistics, 31(4):433–475. Whissell, C. (1989). The dictionary of affect in language. R. Plutchnik & H. Kellerman (Eds.) Emotion: Theory and research. NY: Harcourt Brace, 113-131. Wilks, Y. (1978). Making Preferences More Active, Artificial Intelligence 11. Xu, J. and Croft, B. W. (1996). Query expansion using local and global document analysis. In Proc. of the 19th annual international ACM SIGIR conference on Research and development in information retrieval. 287
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 22–31, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Effective Use of Function Words for Rule Generalization in Forest-Based Translation Xianchao Wu† Takuya Matsuzaki† Jun’ichi Tsujii†‡∗ †Department of Computer Science, The University of Tokyo ‡School of Computer Science, University of Manchester ∗National Centre for Text Mining (NaCTeM) {wxc, matuzaki, tsujii}@is.s.u-tokyo.ac.jp Abstract In the present paper, we propose the effective usage of function words to generate generalized translation rules for forest-based translation. Given aligned forest-string pairs, we extract composed tree-to-string translation rules that account for multiple interpretations of both aligned and unaligned target function words. In order to constrain the exhaustive attachments of function words, we limit to bind them to the nearby syntactic chunks yielded by a target dependency parser. Therefore, the proposed approach can not only capture source-tree-to-target-chunk correspondences but can also use forest structures that compactly encode an exponential number of parse trees to properly generate target function words during decoding. Extensive experiments involving large-scale English-toJapanese translation revealed a significant improvement of 1.8 points in BLEU score, as compared with a strong forest-to-string baseline system. 1 Introduction Rule generalization remains a key challenge for current syntax-based statistical machine translation (SMT) systems. On the one hand, there is a tendency to integrate richer syntactic information into a translation rule in order to better express the translation phenomena. Thus, flat phrases (Koehn et al., 2003), hierarchical phrases (Chiang, 2005), and syntactic tree fragments (Galley et al., 2006; Mi and Huang, 2008; Wu et al., 2010) are gradually used in SMT. On the other hand, the use of syntactic phrases continues due to the requirement for phrase coverage in most syntax-based systems. For example, Mi et al. (2008) achieved a 3.1-point improvement in BLEU score (Papineni et al., 2002) by including bilingual syntactic phrases in their forest-based system. Compared with flat phrases, syntactic rules are good at capturing global reordering, which has been reported to be essential for translating between languages with substantial structural differences, such as English and Japanese, which is a subject-objectverb language (Xu et al., 2009). Forest-based translation frameworks, which make use of packed parse forests on the source and/or target language side(s), are an increasingly promising approach to syntax-based SMT, being both algorithmically appealing (Mi et al., 2008) and empirically successful (Mi and Huang, 2008; Liu et al., 2009). However, forest-based translation systems, and, in general, most linguistically syntax-based SMT systems (Galley et al., 2004; Galley et al., 2006; Liu et al., 2006; Zhang et al., 2007; Mi et al., 2008; Liu et al., 2009; Chiang, 2010), are built upon word aligned parallel sentences and thus share a critical dependence on word alignments. For example, even a single spurious word alignment can invalidate a large number of otherwise extractable rules, and unaligned words can result in an exponentially large set of extractable rules for the interpretation of these unaligned words (Galley et al., 2006). What makes word alignment so fragile? 
In order to investigate this problem, we manually analyzed the alignments of the first 100 parallel sentences in our English-Japanese training data (to be shown in Table 2). The alignments were generated by running GIZA++ (Och and Ney, 2003) and the grow-diag-final-and symmetrizing strategy (Koehn et al., 2007) on the training set. Of the 1,324 word alignment pairs, there were 309 error pairs, among 22 which there were 237 target function words, which account for 76.7% of the error pairs1. This indicates that the alignments of the function words are more easily to be mistaken than content words. Moreover, we found that most Japanese function words tend to align to a few English words such as ‘of’ and ‘the’, which may appear anywhere in an English sentence. Following these problematic alignments, we are forced to make use of relatively large English tree fragments to construct translation rules that tend to be ill-formed and less generalized. This is the motivation of the present approach of re-aligning the target function words to source tree fragments, so that the influence of incorrect alignments is reduced and the function words can be generated by tree fragments on the fly. However, the current dominant research only uses 1-best trees for syntactic realignment (Galley et al., 2006; May and Knight, 2007; Wang et al., 2010), which adversely affects the rule set quality due to parsing errors. Therefore, we realign target function words to a packed forest that compactly encodes exponentially many parses. Given aligned forest-string pairs, we extract composed tree-to-string translation rules that account for multiple interpretations of both aligned and unaligned target function words. In order to constrain the exhaustive attachments of function words, we further limit the function words to bind to their surrounding chunks yielded by a dependency parser. Using the composed rules of the present study in a baseline forest-to-string translation system results in a 1.8-point improvement in the BLEU score for large-scale English-to-Japanese translation. 2 Backgrounds 2.1 Japanese function words In the present paper, we limit our discussion on Japanese particles and auxiliary verbs (Martin, 1975). Particles are suffixes or tokens in Japanese grammar that immediately follow modified content words or sentences. There are eight types of Japanese function words, which are classified depending on what function they serve: case markers, parallel markers, sentence ending particles, interjec1These numbers are language/corpus-dependent and are not necessarily to be taken as a general reflection of the overall quality of the word alignments for arbitrary language pairs. tory particles, adverbial particles, binding particles, conjunctive particles, and phrasal particles. Japanese grammar also uses auxiliary verbs to give further semantic or syntactic information about the preceding main or full verb. Alike English, the extra meaning provided by a Japanese auxiliary verb alters the basic meaning of the main verb so that the main verb has one or more of the following functions: passive voice, progressive aspect, perfect aspect, modality, dummy, or emphasis. 2.2 HPSG forests Following our precious work (Wu et al., 2010), we use head-drive phrase structure grammar (HPSG) forests generated by Enju2 (Miyao and Tsujii, 2008), which is a state-of-the-art HPSG parser for English. HPSG (Pollard and Sag, 1994; Sag et al., 2003) is a lexicalist grammar framework. 
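As a rough computational picture (an illustrative sketch only, not Enju's actual output format or its feature structures), a packed parse forest can be held as a hypergraph in which a node packs alternative analyses through multiple incoming hyperedges:

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ForestNode:
    node_id: str                        # e.g. "c0" or "t3"
    sign: dict                          # phrasal category, schema name, POS, ...
    span: Tuple[int, int]               # source words covered, [start, end)
    incoming: List["Hyperedge"] = field(default_factory=list)

@dataclass
class Hyperedge:
    head: ForestNode                    # the packed node, e.g. c3
    tails: Tuple[ForestNode, ...]       # its children under one analysis

# A node with several incoming hyperedges packs several competing analyses,
# which is how a forest encodes exponentially many trees compactly.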
In HPSG, linguistic entities such as words and phrases are represented by a data structure called a sign. A sign gives a factored representation of the syntactic features of a word/phrase, as well as a representation of their semantic content. Phrases and words represented by signs are collected into larger phrases by the applications of schemata. The semantic representation of the new phrase is calculated at the same time. As such, an HPSG parse forest can be considered to be a forest of signs. Making use of these signs instead of part-of-speech (POS)/phrasal tags in PCFG results in a fine-grained rule set integrated with deep syntactic information. For example, an aligned HPSG forest3-string pair is shown in Figure 1. For simplicity, we only draw the identifiers for the signs of the nodes in the HPSG forest. Note that the identifiers that start with ‘c’ denote non-terminal nodes (e.g., c0, c1), and the identifiers that start with ‘t’ denote terminal nodes (e.g., t3, t1). In a complete HPSG forest given in (Wu et al., 2010), the terminal signs include features such as the POS tag, the tense, the auxiliary, the voice of a verb, etc.. The non-terminal signs include features such as the phrasal category, the name of the schema 2http://www-tsujii.is.s.u-tokyo.ac.jp/enju/index.html 3The forest includes three parse trees rooted at c0, c1, and c2. In the 1-best tree, ‘by’ modifies the passive verb ‘verified’. Yet in the 2- and 3-best tree, ‘by’ modifies ‘this result was verified’. Furthermore, ‘verified’ is an adjective in the 2-best tree and a passive verb in the 3-best tree. 23 jikken niyotte kono kekka ga sa re ta kensyou Realign target function words 実験0 によって1 この2 結果3 が4 さ6 れ7 た8 検証5 this result was verified by the experiments t3 t1 t4 t8 t10 t7 t0 t6 t5 t2 t9 c9 c10 c16 c22 c4 c21 c12 c18 c19 c14 c15 c23 c8 c13 c5 c17 c3 c6 c2 c7 c11 c0 c20 c1 1-best tree 2-best tree 3-best tree experiments by this result verified c1 実験0 によって1 この2 結果3 が4 さ6 れ7 た8 検証5 C1 C2 C3 C4 this result was verified by the experiments t3 t1 t4 t8 t10 t7 t0 t6 t5 t2 t9 c9 c16 c22 c45-7 | 5-8 c125-7 | 5-8 c18 c19 c14 c15 c2 c0 c215-7 | 5-8 c23 c8 c13 c5 c17 c3 c6 c7 c11 c20 c103 | 3-4 Figure 1: Illustration of an aligned HPSG forest-string pair for English-to-Japanese translation. The chunk-level dependency tree for the Japanese sentence is shown as well. applied in the node, etc.. 3 Composed Rule Extraction In this section, we first describe an algorithm that attaches function words to a packed forest guided by target chunk information. That is, given a triple ⟨FS, T, A⟩, namely an aligned (A) source forest (FS) to target sentence (T) pair, we 1) tailor the alignment A by removing the alignments for target function words, 2) seek attachable nodes in the source forest FS for each function word, and 3) construct a derivation forest by topologically traversing FS. Then, we identify minimal and composed rules from the derivation forest and estimate the probabilities of rules and scores of derivations using the expectation-maximization (EM) (Dempster et al., 1977) algorithm. 
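Step 1) above is the simplest to make concrete. The sketch below drops every alignment link whose target word is a function word; alignment links are (source index, target index) pairs, and is_function_word is a hypothetical predicate (for instance, a POS test for Japanese particles and auxiliary verbs).

def tailor_alignment(alignment, target_words, is_function_word):
    # alignment: set of (source index, target index) links
    return {(i, j) for (i, j) in alignment
            if not is_function_word(target_words[j])}

# In the example of Figure 1, the three links from "was" and "the" to the
# function words ga and ta are dropped, which enlarges the frontier set before
# the function words are re-attached.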
3.1 Definitions In the proposed algorithm, we make use of the following definitions, which are similar to those described in (Galley et al., 2004; Mi and Huang, 2008): • s(·): the span of a (source) node v or a (target) chunk C, which is an index set of the words that 24 v or C covers; • t(v): the corresponding span of v, which is an index set of aligned words on another side; • c(v): the complement span of v, which is the union of corresponding spans of nodes v′ that share an identical parse tree with v but are neither antecedents nor descendants of v; • PA: the frontier set of FS, which contains nodes that are consistent with an alignment A (gray nodes in Figure 1), i.e., t(v) ̸= ∅and closure(t(v)) ∩c(v) = ∅. The function closure covers the gap(s) that may appear in the interval parameter. For example, closure(t(c3)) = closure({0-1, 4-7}) = {0-7}. Examples of the applications of these functions can be found in Table 1. Following (Galley et al., 2006), we distinguish between minimal and composed rules. The composed rules are generated by combining a sequence of minimal rules. 3.2 Free attachment of target function words 3.2.1 Motivation We explain the motivation for the present research using an example that was extracted from our training data, as shown in Figure 1. In the alignment of this example, three lines (in dot lines) are used to align was and the with ga (subject particle), and was with ta (past tense auxiliary verb). Under this alignment, we are forced to extract rules with relatively large tree fragments. For example, by applying the GHKM algorithm (Galley et al., 2004), a rule rooted at c0 will take c7, t4, c4, c19, t2, and c15 as the leaves. The final tree fragment, with a height of 7, contains 13 nodes. In order to ensure that this rule is used during decoding, we must generate subtrees with a height of 7 for c0. Suppose that the input forest is binarized and that |E| is the average number of hyperedges of each node, then we must generate O(|E|26−1) subtrees4 for c0 in the worst case. Thus, 4For one (binarized) hyperedge e of a node, suppose there are x subtrees in the left tail node and y subtrees in the right tail node. Then the number of subtrees guided by e is (x + 1) × (y + 1). Thus, the recursive formula is Nh = |E|(Nh−1 + 1)2, where h is the height of the hypergraph and Nh is the number of subtrees. When h = 1, we let Nh = 0. the existence of these rules prevents the generalization ability of the final rule set that is extracted. In order to address this problem, we tailor the alignment by ignoring these three alignment pairs in dot lines. For example, by ignoring the ambiguous alignments on the Japanese function words, we enlarge the frontier set to include from 12 to 19 of the 24 non-terminal nodes. Consequently, the number of extractable minimal rules increases from 12 (with three reordering rules rooted at c0, c1, and c2) to 19 (with five reordering rules rooted at c0, c1, c2, c5, and c17). With more nodes included in the frontier set, we can extract more minimal and composed monotonic/reordering rules and avoid extracting the less generalized rules with extremely large tree fragments. 3.2.2 Why chunking? In the proposed algorithm, we use a target chunk set to constrain the attachment explosion problem because we use a packed parse forest instead of a 1best tree, as in the case of (Galley et al., 2006). Multiple interpretations of unaligned function words for an aligned tree-string pair result in a derivation forest. 
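The consistency test that defines the frontier set can be written down directly once t(v) and c(v) are available as index sets; their computation from the tailored alignment is assumed here.

def closure(indices):
    # fill any gaps: closure({0, 1, 4, 5, 6, 7}) == {0, 1, 2, 3, 4, 5, 6, 7}
    return set(range(min(indices), max(indices) + 1)) if indices else set()

def is_frontier_node(t_v, c_v):
    # consistent with the alignment iff t(v) is non-empty and the closure of
    # t(v) does not overlap the complement span c(v)
    return bool(t_v) and not (closure(t_v) & c_v)

# For c3 in Figure 1, t(c3) = {0, 1, 4, 5, 6, 7} and c(c3) = {2, 8}; the
# closure {0, ..., 7} overlaps c(c3) at 2, so c3 is not in the frontier set.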
Now, we have a packed parse forest in which each tree corresponds to a derivation forest. Thus, pruning free attachments of function words is practically important in order to extract composed rules from this “(derivation) forest of (parse) forest”. In the English-to-Japanese translation test case of the present study, the target chunk set is yielded by a state-of-the-art Japanese dependency parser, Cabocha v0.535 (Kudo and Matsumoto, 2002). The output of Cabocha is a list of chunks. A chunk contains roughly one content word (usually the head) and affixed function words, such as case markers (e.g., ga) and verbal morphemes (e.g., sa re ta, which indicate past tense and passive voice). For example, the Japanese sentence in Figure 1 is separated into four chunks, and the dependencies among these chunks are identified by arrows. These arrows point out the head chunk that the current chunk modifies. Moreover, we also hope to gain a fine-grained alignment among these syntactic chunks and source tree fragments. Thereby, during decoding, we are binding the generation of function words with the generation of target chunks. 5http://chasen.org/∼taku/software/cabocha/ 25 Algorithm 1 Aligning function words to the forest Input: HPSG forest FS, target sentence T, word alignment A = {(i, j)}, target function word set {fw} appeared in T, and target chunk set {C} Output: a derivation forest DF 1: A′ ←A \ {(i, s(fw))} ◃fw ∈{fw} 2: for each node v ∈PA′ in topological order do 3: Tv ←∅ ◃store the corresponding spans of v 4: for each function word fw ∈{fw} do 5: if fw ∈C and t(v)∩(C) ̸= ∅and fw are not attached to descendants of v then 6: append t(v) ∪{s(fw)} to Tv 7: end if 8: end for 9: for each corresponding span t(v) ∈Tv do 10: R ←IDENTIFYMINRULES(v, t(v), T) ◃range over the hyperedges of v, and discount the factional count of each rule r ∈R by 1/|Tv| 11: create a node n in DF for each rule r ∈R 12: create a shared parent node ⊕when |R| > 1 13: end for 14: end for 3.2.3 The algorithm Algorithm 1 outlines the proposed approach to constructing a derivation forest to include multiple interpretations of target function words. The derivation forest is a hypergraph as previously used in (Galley et al., 2006), to maintain the constraint that one unaligned target word be attached to some node v exactly once in one derivation tree. Starting from a triple ⟨FS, T, A⟩, we first tailor the alignment A to A′ by removing the alignments for target function words. Then, we traverse the nodes v ∈PA′ in topological order. During the traversal, a function word fw will be attached to v if 1) t(v) overlaps with the span of the chunk to which fw belongs, and 2) fw has not been attached to the descendants of v. We identify translation rules that take v as the root of their tree fragments. Each tree fragment is a frontier tree that takes a node in the frontier set PA′ of FS as the root node and non-lexicalized frontier nodes or lexicalized non-frontier nodes as the leaves. Also, a minimal frontier tree used in a minimal rule is limited to be a frontier tree such that all nodes other than the root and leaves are non-frontier nodes. We use Algorithm 1 described in (Mi and Huang, 2008) to collect minimal frontier trees rooted at v in FS. 
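A much-simplified sketch of the attachment loop of Algorithm 1 is given below. It only shows how the candidate corresponding spans Tv are collected for each frontier node; rule identification (IDENTIFYMINRULES), the 1/|Tv| discounting, and construction of the derivation-forest nodes are omitted. fw_positions, chunk_of, and attached_below are hypothetical inputs prepared from the chunker output and the traversal so far.

def collect_attachments(frontier_nodes, t_span, fw_positions, chunk_of,
                        attached_below):
    # frontier_nodes: nodes of P_A' in topological (bottom-up) order
    # t_span[v]: corresponding span of v as a set of target indices
    # fw_positions: target indices of the function words in T
    # chunk_of[j]: index set of the chunk containing target position j
    # attached_below[v]: function-word positions already attached under v
    candidates = {}
    for v in frontier_nodes:
        T_v = [t_span[v] | {j}
               for j in fw_positions
               if (t_span[v] & chunk_of[j]) and j not in attached_below[v]]
        candidates[v] = T_v     # rules built from each span get count 1 / len(T_v)
    return candidates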
That is, we range over each hyperedges headed at v and continue to expand downward until the curA →(A′) node s(·) t(·) c(·) consistent c0 0-6 0-8(0-3,5-7) ∅ 1 c1 0-6 0-8(0-3,5-7) ∅ 1 c2 0-6 0-8(0-3,5-7) ∅ 1 c3 3-6 0-1,4-7(0-1, 5-7) 2,8 0 c4 3 5-7 0,8(0-3) 1 c5* 4-6 0,4(0-1) 2-8(2-3,5-7) 0(1) c6* 0-3 2-8(2-3,5-7) 0,4(0-1) 0(1) c7 0-1 2-3 0-1,4-8(0-1,5-7) 1 c8* 2-3 4-8(5-7) 0-4(0-3) 0(1) c9 0 2 0-1,3-8(0-1,3,5-7) 1 c10 1 3 0-2,4-8(0-2,5-7) 1 c11 2-6 0-1,4-8(0-1,5-7) 2-3 0 c12 3 5-7 0,8(0-3) 1 c13* 5-6 0,4(0) 1-8(1-3,5-7) 0(1) c14 5 4(∅) 0-8(0-3,5-7) 0 c15 6 0 1-8(1-3,5-7) 1 c16 2 4,8(∅) 0-7(0-3,5-7) 0 c17* 4-6 0,4(0-1) 2-8(2-3,5-7) 0(1) c18 4 1 0,2-8(0,2-3,5-7) 1 c19 4 1 0,2-8(0,2-3,5-7) 1 c20* 0-3 2-8(2-3,5-7) 0,4(0-1) 0(1) c21 3 5-7 0,8(0-3) 1 c22 2 4,8(∅) 0-7(0-3,5-7) 0 c23* 2-3 4-8(5-7) 0-4(0-3) 0(1) Table 1: Change of node attributes after alignment modification from A to A′ of the example in Figure 1. Nodes with * superscripts are consistent with A′ but not consistent with A. rent set of hyperedges forms a minimal frontier tree. In the derivation forest, we use ⊕nodes to manage minimal/composed rules that share the same node and the same corresponding span. Figure 2 shows some minimal rule and ⊕nodes derived from the example in Figure 1. Even though we bind function words to their nearby chunks, these function words may still be attached to relative large tree fragments, so that richer syntactic information can be used to predict the function words. For example, in Figure 2, the tree fragments rooted at node c0−8 0 can predict ga and/or ta. The syntactic foundation behind is that, whether to use ga as a subject particle or to use wo as an object particle depends on both the left-hand-side noun phrase (kekka) and the right-hand-side verb (kensyou sa re ta). This type of node v′ (such as c0−8 0 ) should satisfy the following two heuristic conditions: • v′ is included in the frontier set PA′ of FS, and • t(v′) covers the function word, or v′ is the root node of FS if the function word is the beginning or ending word in the target sentence T. Starting from this derivation forest with minimal 26 c103-4 t13: result kekka ga * c103 t13: result kekka c92 t32: the kono c72-3 c103 c92 x0 x1 x0 x1 c72-4 c103-4 c92 x0 x1 x0 x1 * c62-7 c85-7 c72-3 x0 ga x1 x0 x1 * c62-7 c85-7 c72-4 x0 x1 x0 x1 * c00-8 c16 c45-7 c50-1 c3 c72-4 c11 x2 x0 x1 ta x0 x1 x2 + * c00-8 c16 c45-8 c50-1 c3 c72-4 c11 x2 x0 x1 x0 x1 x2 + * c00-8 c16 c45-7 c50-1 c3 c72-3 c11 x2 x0 ga x1 ta x0 x1 x2 * + c00-8 c16 c45-8 c50-1 c3 c72-3 c11 x2 x0 ga x1 x0 x1 x2 * + t4{}:was t4{}:was t4{}:was t4{}:was Figure 2: Illustration of a (partial) derivation forest. Gray nodes include some unaligned target function word(s). Nodes annotated by “*” include ga, and nodes annotated by “+” include ta. rules as nodes, we can further combine two or more minimal rules to form composed rules nodes and can append these nodes to the derivation forest. 3.3 Estimating rule probabilities We use the EM algorithm to jointly estimate 1) the translation probabilities and fractional counts of rules and 2) the scores of derivations in the derivation forests. As reported in (May and Knight, 2007), EM, as has been used in (Galley et al., 2006) to estimate rule probabilities in derivation forests, is an iterative procedure and prefers shorter derivations containing large rules over longer derivations containing small rules. 
In order to overcome this bias problem, we discount the fractional count of a rule by the product of the probabilities of parse hyperedges that are included in the tree fragment of the rule. 4 Experiments 4.1 Setup We implemented the forest-to-string decoder described in (Mi et al., 2008) that makes use of forestbased translation rules (Mi and Huang, 2008) as the baseline system for translating English HPSG forests into Japanese sentences. We analyzed the performance of the proposed translation rule sets by Train Dev. Test # sentence pairs 994K 2K 2K # En 1-best trees 987,401 1,982 1,984 # En forests 984,731 1,979 1,983 # En words 24.7M 50.3K 49.9K # Jp words 28.2M 57.4K 57.1K # Jp function words 8.0M 16.1K 16.1K Table 2: Statistics of the JST corpus. Here, En = English and Jp = Japanese. using the same decoder. The JST Japanese-English paper abstract corpus6 (Utiyama and Isahara, 2007), which consists of one million parallel sentences, was used for training, tuning, and testing. Table 2 shows the statistics of this corpus. Note that Japanese function words occupy more than a quarter of the Japanese words. Making use of Enju 2.3.1, we generated 987,401 1-best trees and 984,731 parse forests for the English sentences in the training set, with successful parse rates of 99.3% and 99.1%, respectively. Using the pruning criteria expressed in (Mi and Huang, 2008), we continue to prune a parse forest by setting pe to be 8, 5, and 2, until there are no more than e10 = 22, 026 trees in a forest. After pruning, there are an average of 82.3 trees in a parse forest. 6http://www.jst.go.jp 27 C3-T M&H-F Min-F C3-F free fw Y N Y Y alignment A′ A A′ A′ English side tree forest forest forest # rule 86.30 96.52 144.91 228.59 # reorder rule 58.50 91.36 92.98 162.71 # tree types 21.62 93.55 72.98 120.08 # nodes/tree 14.2 42.1 26.3 18.6 extract time 30.2 52.2 58.6 130.7 EM time 9.4 11.2 29.0 # rules in dev. 0.77 1.22 1.37 2.18 # rules in test 0.77 1.23 1.37 2.15 DT(sec./sent.) 2.8 15.7 22.4 35.4 BLEU (%) 26.15 27.07 27.93 28.89 Table 3: Statistics and translation results for four types of tree-to-string rules. With the exception of ‘# nodes/tree’, the numbers in the table are in millions and the time is in hours. Here, fw denotes function word, and DT denotes the decoding time, and the BLEU scores were computed on the test set. We performed GIZA++ (Och and Ney, 2003) and the grow-diag-final-and symmetrizing strategy (Koehn et al., 2007) on the training set to obtain alignments. The SRI Language Modeling Toolkit (Stolcke, 2002) was employed to train a five-gram Japanese LM on the training set. We evaluated the translation quality using the BLEU-4 metric (Papineni et al., 2002). Joshua v1.3 (Li et al., 2009), which is a freely available decoder for hierarchical phrasebased SMT (Chiang, 2005), is used as an external baseline system for comparison. We extracted 4.5M translation rules from the training set for the 4K English sentences in the development and test sets. We used the default configuration of Joshua, with the exception of the maximum number of items/rules, and the value of k (of the k-best outputs) is set to be 200. 4.2 Results Table 3 lists the statistics of the following translation rule sets: • C3-T: a composed rule set extracted from the derivation forests of 1-best HPSG trees that were constructed using the approach described in (Galley et al., 2006). The maximum number of internal nodes is set to be three when generating a composed rule. 
We free attach target function words to derivation forests; 0 5 10 15 20 25 2 12 22 32 42 52 62 72 82 92 # of rules (M) # of tree nodes in rule M&H-F Min-F C3-T C3-F Figure 3: Distributions of the number of tree nodes in the translation rule sets. Note that the curves of Min-F and C3-F are duplicated when the number of tree nodes being larger than 9. • M&H-F: a minimal rule set extracted from HPSG forests using the extracting algorithm of (Mi and Huang, 2008). Here, we make use of the original alignments. We use the two heuristic conditions described in Section 3.2.3 to attach unaligned words to some node(s) in the forest; • Min-F: a minimal rule set extracted from the derivation forests of HPSG forests that were constructed using Algorithm 1 (Section 3). • C3-F: a composed rule set extracted from the derivation forests of HPSG forests. Similar to C3-T, the maximum number of internal nodes during combination is three. We investigate the generalization ability of these rule sets through the following aspects: 1. the number of rules, the number of reordering rules, and the distributions of the number of tree nodes (Figure 3), i.e., more rules with relatively small tree fragments are preferred; 2. the number of rules that are applicable to the development and test sets (Table 3); and 3. the final translation accuracies. Table 3 and Figure 3 reflect that the generalization abilities of these four rule sets increase in the order of C3-T < M&H-F < Min-F < C3-F. The advantage of using a packed forest for re-alignment is verified by comparing the statistics of the rules and 28 0 10 20 30 40 50 0.0 0.5 1.0 1.5 2.0 2.5 C3-T M&H-F Min-F C3-F Decoding time (sec./sent.) # of rules (M) # rules (M) DT Figure 4: Comparison of decoding time and the number of rules used for translating the test set. the final BLEU scores of C3-T with Min-F and C3F. Using the composed rule set C3-F in our forestbased decoder, we achieved an optimal BLEU score of 28.89 (%). Taking M&H-F as the baseline translation rule set, we achieved a significant improvement (p < 0.01) of 1.81 points. In terms of decoding time, even though we used Algorithm 3 described in (Huang and Chiang, 2005), which lazily generated the N-best translation candidates, the decoding time tended to be increased because more rules were available during cubepruning. Figure 4 shows a comparison of decoding time (seconds per sentence) and the number of rules used for translating the test set. Easy to observe that, decoding time increases in a nearly linear way following the increase of the number of rules used during decoding. Finally, compared with Joshua, which achieved a BLEU score of 24.79 (%) on the test set with a decoding speed of 8.8 seconds per sentence, our forest-based decoder achieved a significantly better (p < 0.01) BLEU score by using either of the four types of translation rules. 5 Related Research Galley et al. (2006) first used derivation forests of aligned tree-string pairs to express multiple interpretations of unaligned target words. The EM algorithm was used to jointly estimate 1) the translation probabilities and fractional counts of rules and 2) the scores of derivations in the derivation forests. By dealing with the ambiguous word alignment instead of unaligned target words, syntaxbased re-alignment models were proposed by (May and Knight, 2007; Wang et al., 2010) for tree-based translations. 
Free attachment of the unaligned target word problem was ignored in (Mi and Huang, 2008), which was the first study on extracting tree-to-string rules from aligned forest-string pairs. This inspired the idea to re-align a packed forest and a target sentence. Specially, we observed that most incorrect or ambiguous word alignments are caused by function words rather than content words. Thus, we focus on the realignment of target function words to source tree fragments and use a dependency parser to limit the attachments of unaligned target words. 6 Conclusion We have proposed an effective use of target function words for extracting generalized transducer rules for forest-based translation. We extend the unaligned word approach described in (Galley et al., 2006) from the 1-best tree to the packed parse forest. A simple yet effective modification is that, during rule extraction, we account for multiple interpretations of both aligned and unaligned target function words. That is, we chose to loose the ambiguous alignments for all of the target function words. The consideration behind is in order to generate target function words in a robust manner. In order to avoid generating too large a derivation forest for a packed forest, we further used chunk-level information yielded by a target dependency parser. Extensive experiments on large-scale English-to-Japanese translation resulted in a significant improvement in BLEU score of 1.8 points (p < 0.01), as compared with our implementation of a strong forest-to-string baseline system (Mi et al., 2008; Mi and Huang, 2008). The present work only re-aligns target function words to source tree fragments. It will be valuable to investigate the feasibility to re-align all the target words to source tree fragments. Also, it is interesting to automatically learn a word set for realigning7. Given source parse forests and a target word set for re-aligning beforehand, we argue our approach is generic and applicable to any language pairs. Finally, we intend to extend the proposed approach to tree-to-tree translation frameworks by 7This idea comes from one reviewer, we express our thankfulness here. 29 re-aligning subtree pairs (Liu et al., 2009; Chiang, 2010) and consistency-to-dependency frameworks by re-aligning consistency-tree-to-dependency-tree pairs (Mi and Liu, 2010) in order to tackle the rulesparseness problem. Acknowledgments The present study was supported in part by a Grantin-Aid for Specially Promoted Research (MEXT, Japan), by the Japanese/Chinese Machine Translation Project through Special Coordination Funds for Promoting Science and Technology (MEXT, Japan), and by Microsoft Research Asia Machine Translation Theme. Wu ([email protected]) has moved to NTT Communication Science Laboratories and Tsujii ([email protected]) has moved to Microsoft Research Asia. References David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of ACL, pages 263–270, Ann Arbor, MI. David Chiang. 2010. Learning to translate with source and target syntax. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1443–1452, Uppsala, Sweden, July. Association for Computational Linguistics. A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the em algorithm. Journal of the Royal Statistical Society, 39:1–38. Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a translation rule? In Proceedings of HLT-NAACL. 
Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proceedings of COLING-ACL, pages 961–968, Sydney. Liang Huang and David Chiang. 2005. Better k-best parsing. In Proceedings of IWPT. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the Human Language Technology and North American Association for Computational Linguistics Conference (HLT/NAACL), Edomonton, Canada, May 27-June 1. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the ACL 2007 Demo and Poster Sessions, pages 177–180. Taku Kudo and Yuji Matsumoto. 2002. Japanese dependency analysis using cascaded chunking. In Proceedings of CoNLL-2002, pages 63–69. Taipei, Taiwan. Zhifei Li, Chris Callison-Burch, Chris Dyery, Juri Ganitkevitch, Sanjeev Khudanpur, Lane Schwartz, Wren N. G. Thornton, Jonathan Weese, and Omar F. Zaidan. 2009. Demonstration of joshua: An open source toolkit for parsing-based machine translation. In Proceedings of the ACL-IJCNLP 2009 Software Demonstrations, pages 25–28, August. Yang Liu, Qun Liu, and Shouxun Lin. 2006. Treeto-string alignment templates for statistical machine transaltion. In Proceedings of COLING-ACL, pages 609–616, Sydney, Australia. Yang Liu, Yajuan L¨u, and Qun Liu. 2009. Improving tree-to-tree translation with packed forests. In Proceedings of ACL-IJCNLP, pages 558–566, August. Samuel E. Martin. 1975. A Reference Grammar of Japanese. New Haven, Conn.: Yale University Press. Jonathan May and Kevin Knight. 2007. Syntactic realignment models for machine translation. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLPCoNLL), pages 360–368, Prague, Czech Republic, June. Association for Computational Linguistics. Haitao Mi and Liang Huang. 2008. Forest-based translation rule extraction. In Proceedings of EMNLP, pages 206–214, October. Haitao Mi and Qun Liu. 2010. Constituency to dependency translation with forests. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1433–1442, Uppsala, Sweden, July. Association for Computational Linguistics. Haitao Mi, Liang Huang, and Qun Liu. 2008. Forestbased translation. In Proceedings of ACL-08:HLT, pages 192–199, Columbus, Ohio. Yusuke Miyao and Jun’ichi Tsujii. 2008. Feature forest models for probabilistic hpsg parsing. Computational Lingustics, 34(1):35–80. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of ACL, pages 311–318. 30 Carl Pollard and Ivan A. Sag. 1994. Head-Driven Phrase Structure Grammar. University of Chicago Press. Ivan A. Sag, Thomas Wasow, and Emily M. Bender. 2003. Syntactic Theory: A Formal Introduction. Number 152 in CSLI Lecture Notes. CSLI Publications. Andreas Stolcke. 2002. Srilm-an extensible language modeling toolkit. 
In Proceedings of International Conference on Spoken Language Processing, pages 901–904. Masao Utiyama and Hitoshi Isahara. 2007. A japaneseenglish patent parallel corpus. In Proceedings of MT Summit XI, pages 475–482, Copenhagen. Wei Wang, Jonathan May, Kevin Knight, and Daniel Marcu. 2010. Re-structuring, re-labeling, and realigning for syntax-based machine translation. Computational Linguistics, 36(2):247–277. Xianchao Wu, Takuya Matsuzaki, and Jun’ichi Tsujii. 2010. Fine-grained tree-to-string translation rule extraction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 325–334, Uppsala, Sweden, July. Association for Computational Linguistics. Peng Xu, Jaeho Kang, Michael Ringgaard, and Franz Och. 2009. Using a dependency parser to improve smt for subject-object-verb languages. In Proceedings of HLT-NAACL, pages 245–253. Min Zhang, Hongfei Jiang, Ai Ti Aw, Jun Sun, Sheng Li, and Chew Lim Tan. 2007. A tree-to-tree alignmentbased model for statistical machine translation. In Proceedings of MT Summit XI, pages 535–542, Copenhagen, Denmark, September. 31
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 288–298, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Local Histograms of Character N-grams for Authorship Attribution Hugo Jair Escalante Graduate Program in Systems Eng. Universidad Aut´onoma de Nuevo Le´on, San Nicol´as de los Garza, NL, 66450, M´exico [email protected] Thamar Solorio Dept. of Computer and Information Sciences University of Alabama at Birmingham, Birmingham, AL, 35294, USA [email protected] Manuel Montes-y-G´omez Computer Science Department, INAOE, Tonantzintla, Puebla, 72840, M´exico Department of Computer and Information Sciences, University of Alabama at Birmingham, Birmingham, AL, 35294, USA [email protected] Abstract This paper proposes the use of local histograms (LH) over character n-grams for authorship attribution (AA). LHs are enriched histogram representations that preserve sequential information in documents; they have been successfully used for text categorization and document visualization using word histograms. In this work we explore the suitability of LHs over n-grams at the character-level for AA. We show that LHs are particularly helpful for AA, because they provide useful information for uncovering, to some extent, the writing style of authors. We report experimental results in AA data sets that confirm that LHs over character n-grams are more helpful for AA than the usual global histograms, yielding results far superior to state of the art approaches. We found that LHs are even more advantageous in challenging conditions, such as having imbalanced and small training sets. Our results motivate further research on the use of LHs for modeling the writing style of authors for related tasks, such as authorship verification and plagiarism detection. 1 Introduction Authorship attribution (AA) is the task of deciding whom, from a set of candidates, is the author of a given document (Houvardas and Stamatatos, 2006; Luyckx and Daelemans, 2010; Stamatatos, 2009b). There is a broad field of application for AA methods, including spam filtering (de Vel et al., 2001), fraud detection, computer forensics (Lambers and Veenman, 2009), cyber bullying (Pillay and Solorio, 2010) and plagiarism detection (Stamatatos, 2009a). Therefore, the development of automated AA techniques has received much attention recently (Stamatatos, 2009b). The AA problem can be naturally posed as one of single-label multiclass classification, with as many classes as candidate authors. However, unlike usual text categorization tasks, where the core problem is modeling the thematic content of documents (Sebastiani, 2002), the goal in AA is modeling authors’ writing style (Stamatatos, 2009b). Hence, document representations that reveal information about writing style are required to achieve good accuracy in AA. Word and character based representations have been used in AA with some success so far (Houvardas and Stamatatos, 2006; Luyckx and Daelemans, 2010; Plakias and Stamatatos, 2008b). Such representations can capture style information through word or character usage, but they lack sequential information, which can reveal further stylistic information. In this paper, we study the use of richer document representations for the AA task. In particular, we consider local histograms over n-grams at the character-level obtained via the locally-weighted bag of words (LOWBOW) framework (Lebanon et al., 2007). 
Under LOWBOW, a document is represented by a set of local histograms, computed across the whole document but smoothed by kernels centered on different document locations. In this way, document 288 representations preserve both word/character usage and sequential information (i.e., information about the positions in which words or characters occur), which can be more helpful for modeling the writing style of authors. We report experimental results in an AA data set used in previous studies under several conditions (Houvardas and Stamatatos, 2006; Plakias and Stamatatos, 2008b; Plakias and Stamatatos, 2008a). Results confirm that local histograms of character n-grams are more helpful for AA than the usual global histograms of words or character n-grams (Luyckx and Daelemans, 2010); our results are superior to those reported in related works. We also show that local histograms over character n-grams are more helpful than local histograms over words, as originally proposed by (Lebanon et al., 2007). Further, we performed experiments with imbalanced and small training sets (i.e., under a realistic AA setting) using the aforementioned representations. We found that the LOWBOW-based representation resulted even more advantageous in these challenging conditions. The contributions of this work are as follows: • We show that the LOWBOW framework can be helpful for AA, giving evidence that sequential information encoded in local histograms is useful for modeling the writing style of authors. • We propose the use of local histograms over character-level n-grams for AA. We show that character-level representations, which have proved to be very effective for AA (Luyckx and Daelemans, 2010), can be further improved by adopting a local histogram formulation. Also, we empirically show that local histograms at the character-level are more helpful than local histograms at the word-level for AA. • We study several kernels for a support vector machine AA classifier under the local histograms formulation. Our study confirms that the diffusion kernel (Lafferty and Lebanon, 2005) is the most effective among those we tried, although competitive performance can be obtained with simpler kernels. • We report experimental results that are superior to state of the art approaches (Plakias and Stamatatos, 2008b; Plakias and Stamatatos, 2008a), with improvements ranging from 2%−6% in balanced data sets and from 14% −30% in imbalanced data sets. 2 Related Work AA can be faced as a multiclass classification task with as many classes as candidate authors. Standard classification methods have been applied to this problem, including support vector machine (SVM) classifiers (Houvardas and Stamatatos, 2006) and variants thereon (Plakias and Stamatatos, 2008b; Plakias and Stamatatos, 2008a), neural networks (Tearle et al., 2008), Bayesian classifiers (Coyotl-Morales et al., 2006), decision tree methods (Koppel et al., 2009) and similarity based techniques (Keselj et al., 2003; Lambers and Veenman, 2009; Stamatatos, 2009b; Koppel et al., 2009). In this work, we chose an SVM classifier as it has reported acceptable performance in AA and because it will allow us to directly compare results with previous work that has used this same classifier. A broad diversity of features has been used to represent documents in AA (Stamatatos, 2009b). However, as in text categorization (Sebastiani, 2002), word-based and character-based features are among the most widely used features (Stamatatos, 2009b; Luyckx and Daelemans, 2010). 
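Both feature families ultimately reduce to counting term occurrences. As a point of reference for the character-level variant used in the experiments reported later (vocabulary size 2,500, n = 3), a minimal sketch of a global character 3-gram histogram is:

from collections import Counter

def char_ngrams(text, n=3):
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def build_vocabulary(train_docs, n=3, size=2500):
    counts = Counter()
    for doc in train_docs:
        counts.update(char_ngrams(doc, n))
    return [g for g, _ in counts.most_common(size)]

def global_histogram(doc, vocabulary, n=3):
    counts = Counter(char_ngrams(doc, n))
    return [counts[g] for g in vocabulary]   # one raw frequency per vocabulary term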
With respect to word-based features, word histograms (i.e., the bagof-words paradigm) are the most frequently used representations in AA (Zhao and Zobel, 2005; Argamon and Levitan, 2005; Stamatatos, 2009b). Some researchers have gone a step further and have attempted to capture sequential information by using n-grams at the word-level (Peng et al., 2004) or by discovering maximal frequent word sequences (Coyotl-Morales et al., 2006). Unfortunately, because of computational limitations, the latter methods cannot discover enough sequential information from documents (e.g., word n-grams are often restricted to n ∈{1, 2, 3}, while full sequential information would be obtained with n ∈ {1 . . . D} where D is the maximum number of words in a document). With respect to character-based features, n-grams at the character level have been widely used in AA as well (Plakias and Stamatatos, 2008b; Peng et al., 2003; Luyckx and Daelemans, 2010). Peng et al. (2003) propose the use of language models at the n-gram character-level for AA, whereas Keselj et al. (2003) build author profiles based on a selection of frequent n-grams for each author. Stamatatos and co-workers have studied the impact of feature selection, with character n-grams, in AA (Houvardas and Stamatatos, 2006; Stamatatos, 2006a), ensemble learning with character n-grams (Stamatatos, 2006b) and novel classification techniques based 289 on characters at the n-gram level (Plakias and Stamatatos, 2008a). Acceptable performance in AA has been reported with character n-gram representations. However, as with word-based features, character n-grams are unable to incorporate sequential information from documents in their original form (in terms of the positions in which the terms appear across a document). We believe that sequential clues can be helpful for AA because different authors are expected to use different character n-grams or words in different parts of the document. Accordingly, in this work we adopt the popular character-based and word-based representations, but we enrich them in a way that they incorporate sequential information via the LOWBOW framework. Hence, the proposed features preserve sequential information besides capturing character and word usage information. Our hypothesis is that the combination of sequential and frequency information can be particularly helpful for AA. The LOWBOW framework has been mainly used for document visualization (Lebanon et al., 2007; Mao et al., 2007), where researchers have used information derived from local histograms for displaying a 2D representation of document’s content. More recently, Chasanis et al. (2009) used the LOWBOW framework for segmenting movies into chapters and scenes. LOWBOW representations have also been applied to discourse segmentation (AMIDA, 2007) and have been suggested for text summarization (Das and Martins, 2007). However, to the best of our knowledge the use of the LOWBOW framework for AA has not been studied elsewhere. Actually, the only two references using this framework for text categorization are (Lebanon et al., 2007; AMIDA, 2007). The latter can be due to the fact that local histograms provide little gain over usual global histograms for thematic classification tasks. In this paper we show that LOWBOW representations provide important improvements over global histograms for AA; in particular, local histograms at the character-level achieve the highest performance in our experiments. 
3 Background This section describes preliminary information on document representations and pattern classification with SVMs. 3.1 Bag of words representations In the bag of words (BOW) representation, documents are represented by histograms over the vocabulary1 that was used to generate a collection of documents; that is, a document i is represented as: di = [xi,1, . . . , xi,|V |] (1) where V is the vocabulary and |V | is the number of elements in V , di,j = xi,j is a weight that denotes the contribution of term j to the representation of document i; usually xi,j is related to the occurrence (binary weighting) or the weighted frequency of occurrence (e.g., the tf-idf weighting scheme) of the term j in document i. 3.2 Locally-weighted bag-of-words representation Instead of using the BOW framework directly, we adopted the LOWBOW framework for document representation (Lebanon et al., 2007). The underlying idea in LOWBOW is to compute several local histograms per document, where these histograms are smoothed by a kernel function, see Figure 1. The parameters of the kernel specify the position of the kernel in the document (i.e., where the local histogram is centered) and its scale (i.e., to what extent it is smoothed). In this way the sequential information in the document is preserved together with term usage statistics. Let Wi = {wi,1, . . . , wi,Ni}, denote the terms (in order of appearance) in document i where Ni is the number of terms that appear in document i and wi,j ∈V is the term appearing at position j; let vi = {vi,1, . . . , vi,Ni} be the set of indexes in the vocabulary V of the terms appearing in Wi, such that vi,j is the index in V of the term wi,j; let t = [t1, . . . , tNi] be a set of (equally spaced) scalars that determine intervals, with 0 ≤tj ≤1 and PNi j=1 tj = 1, such that each tj can be associated to a position in Wi. Given a kernel smoothing function Ks µ,σ : [0, 1] →R with location parameter µ and scale parameter σ, where Pk j=1 Ks µ,σ(tj) = 1 and 1In the following we will refer to arbitrary vocabularies, which can be formed with terms from either words or character n-grams. 290 Figure 1: Diagram of the process for obtaining local histograms. Terms (wi) appearing in different positions (1, . . . , N) of the document are weighted according to the locations (µ1, . . . , µk) of the smoothing function Kµ,σ(x). Then, the term position weighting is combined with term frequency weighting for obtaining local histograms over the terms in the vocabulary (1, . . . , |V |). µ ∈[0, 1]. The LOWBOW framework computes a local histogram for each position µj ∈{µ1, . . . , µk} as follows: dlj i,{vi,1,...,vi,Ni} = di,{vi,1,...,vi,Ni} × Ks µj,σ(t) (2) where dli,vj:vj̸∈vi = const, a small constant value, and di,j is defined as above. Hence, a set dl{1,...,k} i of k local histograms are computed for each document i. Each histogram dlj i carries information about the distribution of terms at a certain position µj of the document, where σ determines how the nearby terms to µj influence the local histogram j. Thus, sequential information of the document is considered throughout these local histograms. Note that when σ is small, most of the sequential information is preserved, as local histograms are calculated at very local scales; whereas when σ ≥1, local histograms resemble the traditional BOW representation. 
Under LOWBOW documents can be represented in two forms (Lebanon et al., 2007): as a single histogram dL i = const × Pk j=1 dlj i (hereafter LOWBOW histograms) or by the set of local histograms itself dl{1,...,k} i . We performed experiments with both forms of representation and considered words and n-grams at the character-level as terms (c.f. Section 5). Regarding the smoothing function, we considered the re-normalized Gaussian pdf restricted to [0, 1]: Ks µ,σ(x) = N(x;µ,σ) φ( 1−µ σ )−φ( −µ σ ) if x ∈[0, 1] 0 otherwise (3) where φ(x) is the cumulative distribution function for a Gaussian with mean 0 and standard deviation 1, evaluated at x, see (Lebanon et al., 2007) for further details. 3.3 Support vector machines Support vector machines (SVMs) are pattern classification methods that aim to find an optimal separating hyperplane between examples from two different classes (Shawe-Taylor and Cristianini, 2004). Let {xi, yi}N be pairs of training patterns-outputs, where xi ∈Rd and y ∈{−1, 1}, with d the dimensionality of the problem. SVMs aim at learning a mapping from training instances to outputs. This is done by considering a linear function of the form: f(x) = Wx + b, where parameters W and b are learned from training data. The particular linear function considered by SVMs is as follows: f(x) = X i αiyiK(xi, x) −b (4) that is, a linear function over (a subset of) training examples, where αi is the weight associated with training example i (those for which αi > 0 are the so called support vectors) and yi is the label associated with training example i, K(xi, xj) is a kernel2 function that aims at mapping the input vectors, (xi, xj), into the so called feature space, and b is a bias term. Intuitively, K(xi, xj) evaluates how similar instances xi and xj are, thus the particular choice of kernel is problem dependent. The parameters in expression (4), namely α{1,...,N} and b, are learned by using exact optimization techniques (Shawe-Taylor and Cristianini, 2004). 2One should not confuse the kernel smoothing function, Ks µ,σ(x), defined in Equation (3) with the Mercer kernel in Equation (4), as the former acts as a smoothing function and the latter acts as a similarity function. 291 4 Authorship Attribution with LOWBOW Representations For AA we represent the training documents of each author using the framework described in Section 3.2, thus each document of each candidate author is either a LOWBOW histogram or a bag of local histograms (BOLH). Recall that LOWBOW histograms are an un-weighted sum of local histograms and hence can be considered a summary of term usage and sequential information; whereas the BOLH can be seen as term occurrence frequencies across different locations of the document. For both types of representations we consider an SVM classifier under the one-vs-all formulation for facing the AA problem. We consider SVM as base classifier because this method has proved to be very effective in a large number of applications, including AA (Houvardas and Stamatatos, 2006; Plakias and Stamatatos, 2008b; Plakias and Stamatatos, 2008a); further, since SVMs are kernel-based methods, they allow us to use local histograms for AA by considering kernels that work over sets of histograms. We build a multiclass SVM classifier by considering the pairs of patterns-outputs associated to documents-authors. 
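A sketch of the local-histogram computation of Equations (2) and (3) follows, assuming the terms of a document have already been mapped to vocabulary indices. The smoothing function is the Gaussian density renormalized to [0, 1]; the small constant for unseen terms and any term weighting of di are simplified away, so this is an illustration rather than a faithful reimplementation of the LOWBOW code used in the experiments.

import numpy as np
from scipy.stats import norm

def smoothing_weights(positions, mu, sigma):
    # Gaussian density renormalized to [0, 1], then rescaled to sum to one
    w = norm.pdf(positions, mu, sigma) / (norm.cdf((1.0 - mu) / sigma)
                                          - norm.cdf(-mu / sigma))
    return w / w.sum()

def local_histograms(term_indices, vocab_size, mus, sigma=0.2):
    n = len(term_indices)
    positions = np.linspace(0.0, 1.0, n)       # t_1, ..., t_N, equally spaced
    histograms = []
    for mu in mus:                              # one local histogram per location
        h = np.zeros(vocab_size)
        for idx, weight in zip(term_indices,
                               smoothing_weights(positions, mu, sigma)):
            h[idx] += weight
        histograms.append(h)
    return histograms   # a constant times their sum gives the LOWBOW histogram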
Where each pattern can be either a LOWBOW histogram or the set of local histograms associated with the corresponding document, and the output associated to each pattern is a categorical random variable (outputs) that associates the representation of each document to its corresponding author y1,...,N ∈{1, . . . , C}, with C the number of candidate authors. For building the multiclass classifier we adopted the one-vs-all formulation, where C binary classifiers are built and where each classifier fi discriminates among examples from class i (positive examples) and the rest j : j ∈{1, . . . , C}, j ̸= i; despite being one of the simplest formulations, this approach has shown to obtain comparable and even superior performance to that obtained by more complex formulations (Rifkin and Klautau, 2004). For AA using LOWBOW histograms, we consider a linear kernel since it has been successfully applied to a wide variety of problems (ShaweTaylor and Cristianini, 2004), including AA (Houvardas and Stamatatos, 2006; Plakias and Stamatatos, 2008b). However, standard kernels cannot work for input spaces where each instance is described by a set of vectors. Therefore, usual kernels are not applicable for AA using BOLH. Instead, we rely on particular kernels defined for sets of vectors rather than for a single vector. Specifically, we consider kernels of the form (Rubner et al., 2001; Grauman, 2006): K(P, Q) = exp ¡ −D(P, Q)2 γ ¢ (5) where D(P, Q) is the sum of the distances between the elements of the bag of local histograms associated to author P and the elements of the bag of histograms associated with author Q; γ is the scale parameter of K. Let P = {p1, . . . , pk} and Q = {q1, . . . , qk} be the elements of the bags of local histograms for instances P and Q, respectively, Table 1 presents the distance measures we consider for AA using local histograms. Kernel Distance Diffusion D(P, Q) = Pk l=1 arccos ¡ ⟨√pl · √ql⟩ ¢ EMD D(P, Q) = EMD(P, Q) Eucidean D(P, Q) = qPk l=1(pl −ql).2 χ2 D(P, Q) = qPk l=1 (pl−ql)2 (pl+ql) Table 1: Distance functions used to calculate the kernel defined in Equation (5). Diffusion, Euclidean, and χ2 kernels compare local histograms one to one, which means that the local histograms calculated at the same locations are compared to each other. We believe that for AA this is advantageous as it is expected that an author uses similar terms at similar locations of the document. The Earth mover’s distance (EMD), on the other hand, is an estimate of the optimal cost in taking local histograms from Q to local histograms in P (Rubner et al., 2001); that is, this measure computes the optimal matching distance between local histograms from different authors that are not necessarily computed at similar locations. 5 Experiments and Results For our experiments we considered the data set used in (Plakias and Stamatatos, 2008b; Plakias and Stamatatos, 2008a). This corpus is a subset of the RCV1 collection (Lewis et al., 2004) and comprises 292 documents authored by 10 authors. All of the documents belong to the same topic. Since this data set has predefined training and testing partitions, our results are comparable to those obtained by other researchers. There are 50 documents per author for training and 50 documents per author for testing. We performed experiments with LOWBOW3 representations at word and character-level. For the experiments with words, we took the top 2,500 most common words used across the training documents and obtained LOWBOW representations. 
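For concreteness, the distance functions of Table 1 and the kernel of Equation (5) above could be implemented along the following lines. This is an illustrative sketch rather than the authors' code: P and Q are (k × |V|) arrays of local histograms computed at the same k locations, the small epsilon in the χ² distance is our own numerical safeguard, and the earth mover's distance is omitted because it requires an optimal-transport solver. A Gram matrix of such kernel values can then be passed to any SVM implementation that accepts precomputed kernels.

```python
import numpy as np

def diffusion_distance(P, Q):
    """Sum over locations of the geodesic distance between multinomial histograms."""
    dots = np.clip(np.sum(np.sqrt(P) * np.sqrt(Q), axis=1), -1.0, 1.0)
    return np.sum(np.arccos(dots))

def euclidean_distance(P, Q):
    return np.sqrt(np.sum((P - Q) ** 2))

def chi2_distance(P, Q, eps=1e-12):
    return np.sqrt(np.sum((P - Q) ** 2 / (P + Q + eps)))

def bag_kernel(P, Q, distance=diffusion_distance, gamma=1.0):
    """RBF-style kernel over two bags of local histograms (cf. Equation 5)."""
    d = distance(P, Q)
    return np.exp(-(d ** 2) / gamma)
```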
We used this setting in agreement with previous work on AA (Houvardas and Stamatatos, 2006). For our character n-gram experiments, we obtained LOWBOW representations for character 3-grams (only n-grams of size n = 3 were used) considering the 2, 500 most common n-grams. Again, this setting was adopted in agreement with previous work on AA with character n-grams (Houvardas and Stamatatos, 2006; Plakias and Stamatatos, 2008b; Plakias and Stamatatos, 2008a; Luyckx and Daelemans, 2010). All our experiments use the SVM implementation provided by Canu et al. (2005). 5.1 Experimental settings In order to compare our methods to related works we adopted the following experimental setting. We perform experiments using all of the training documents per author, that is, a balanced corpus (we call this setting BC). Next we evaluate the performance of classifiers over reduced training sets. We tried balanced reduced data sets with: 1, 3, 5 and 10 documents per author (we call this configuration RBC). Also, we experimented with reducedimbalanced data sets using the same imbalance rates reported in (Plakias and Stamatatos, 2008b; Plakias and Stamatatos, 2008a): we tried settings 2 −10, 5 −10, and 10 −20, where, for example, setting 210 means that we use at least 2 and at most 10 documents per author (we call this setting IRBC). BC setting represents the AA problem under ideal conditions, whereas settings RBC and IRBC aim at emulating a more realistic scenario, where limited sample documents are available and the whole data set is highly imbalanced (Plakias and Stamatatos, 2008b). 3We used LOWBOW code of G. Lebanon and Y. Mao available from http://www.cc.gatech.edu/∼ymao8/lowbow.htm 5.2 Experimental results in balanced data We first compare the performance of the LOWBOW histogram representation to that of the traditional BOW representation. Table 2 shows the accuracy (i.e., percentage of documents in the test set that were associated to its correct author) for the BOW and LOWBOW histogram representations when using words and character n-grams information. For LOWBOW histograms, we report results with three different configurations for µ. As in (Lebanon et al., 2007), we consider uniformly distributed locations and we varied the number of locations that were included in each setting. We denote with k the number of local histograms. In preliminary experiments we tried several other values for k, although we found that representative results can be obtained with the values we considered here. Method Parameters Words Characters BOW 78.2% 75.0% LOWBOW k = 2; σ = 0.2 75.8% 72.0% LOWBOW k = 5; σ = 0.2 77.4% 75.2% LOWBOW k = 20; σ = 0.2 77.4% 75.0% Table 2: Authorship attribution accuracy for the BOW representation and LOWBOW histograms. Column 2 shows the parameters we used for the LOWBOW histograms; columns 3 and 4 show results using words and character n-grams, respectively. From Table 2 we can see that the BOW representation is very effective, outperforming most of the LOWBOW histogram configurations. Despite a small difference in performance, BOW is advantageous over LOWBOW histograms because it is simpler to compute and it does not rely on parameter selection. Recall that the LOWBOW histogram representations are obtained by the combination of several local histograms calculated at different locations of the document, hence, it seems that the raw sum of local histograms results in a loss of useful information for representing documents. 
The worse performance was obtained when k = 2 local histograms are considered (see row 3 in Table 2). This result is somewhat expected since the larger the number of local histograms, the more LOWBOW histograms approach the BOW formulation (Lebanon et al., 2007). We now describe the AA performance obtained when using the BOLH formulation; these results 293 are shown in Table 3. Most of the results from this table are superior to those reported in Table 2, showing that bags of local histograms are a better way to exploit the LOWBOW framework for AA. As expected, different kernels yield different results. However, the diffusion kernel outperformed most of the results obtained with other kernels; confirming the results obtained by other researchers (Lebanon et al., 2007; Lafferty and Lebanon, 2005). Kernel Euc. Diffusion EMD χ2 Words Setting-1 78.6% 81.0% 75.0% 75.4% Setting-2 77.6% 82.0% 76.8% 77.2% Setting-3 79.2% 80.8% 77.0% 79.0% Characters Setting-1 83.4% 82.8% 84.4% 83.8% Setting-2 83.4% 84.2% 82.2% 84.6% Setting-3 83.6% 86.4% 81.0% 85.2% Table 3: Authorship attribution accuracy when using bags of local histograms and different kernels for word-based and character-based representations. The BC data set is used. Settings 1, 2 and 3 correspond to k = 2, 5 and 20, respectively. On average, the worse kernel was that based on the earth mover’s distance (EMD), suggesting that the comparison of local histograms at different locations is not a fruitful approach (recall that this is the only kernel that compares local histograms at different locations). This result evidences that authors use similar word/character distributions at similar locations when writing different documents. The best performance across settings and kernels was obtained with the diffusion kernel (in bold, column 3, row 9) (86.4%); that result is 8% higher than that obtained with the BOW representation and 9% better than the best configuration of LOWBOW histograms, see Table 2. Furthermore, that result is more than 5% higher than the best reported result in related work (80.8% as reported in (Plakias and Stamatatos, 2008b)). Therefore, the considered local histogram representations over character n-grams have proved to be very effective for AA. One should note that, in general, better performance was obtained when using character-level rather than word-level information. This confirms the results already reported by other researchers that have used character-level and word-level information for AA (Houvardas and Stamatatos, 2006; Plakias and Stamatatos, 2008b; Plakias and Stamatatos, 2008a; Peng et al., 2003). We believe this can be attributed to the fact that character n-grams provide a representation for the document at a finer granularity, which can be better exploited with local histogram representations. Note that by considering 3-grams, words of length up to three are incorporated, and usually these words are function words (e.g., the, it, as, etc.), which are known to be indicative of writing style. Also, n-gram information is more dense in documents than word-level information. Hence, the local histograms are less sparse when using character-level information, which results in better AA performance. 
True author AC AS BL DL JM JG MM MD RS TN 88 2 0 0 0 0 0 0 0 0 10 98 0 0 0 0 0 0 0 0 0 0 68 0 40 0 0 0 0 0 0 0 0 80 0 0 0 0 0 4 0 0 12 2 42 0 0 2 0 0 0 0 0 0 0 100 0 0 0 2 2 0 2 0 0 0 100 0 0 0 0 0 18 0 18 0 0 98 0 0 0 0 0 2 0 0 0 0 100 4 0 0 0 16 0 0 0 0 0 90 Table 4: Confusion matrix (in terms of percentages) for the best result in the BC corpus (i.e., last row, column 3 in Table 3). Columns show the true author for test documents and rows show the authors predicted by the SVM. Table 4 shows the confusion matrix for the setting that reached the best results (i.e., column 3, last row in Table 3). From this table we can see that 8 out of the 10 authors were recognized with an accuracy higher or equal to 80%. For these authors sequential information seems to be particularly helpful. However, low recognition performance was obtained for authors BL (B. K. Lim) and JM (J. MacArtney). The SVM with BOW representation of character ngrams achieved recognition rates of 40% and 50% for BL and JM respectively. Thus, we can state that sequential information was indeed helpful for modeling BL writing style (improvement of 28%), although it is an author that resulted very difficult to model. On the other hand, local histograms were not very useful for identifying documents written by JM (made it worse by −8%). The largest improvement (38%) of local histograms over the BOW formulation was obtained for author TN (T. Nissen). This 294 result gives evidence that TN uses a similar distribution of words in similar locations across the documents he writes. These results are interesting, although we would like to perform a careful analysis of results in order to determine for what type of authors it would be beneficial to use local histograms, and what type of authors are better modeled with a standard BOW approach. 5.3 Experimental results in imbalanced data In this section we report results with RBC and IRBC data sets, which aim to evaluate the performance of our methods in a realistic setting. For these experiments we compare the performance of the BOW, LOWBOW histogram and BOLH representations; for the latter, we considered the best setting as reported in Table 3 (i.e., an SVM with diffusion kernel and k = 20). Tables 5 and 6 show the AA performances when using word and character information, respectively. We first analyze the results in the RBC data set (recall that for this data set we consider 1, 3, 5, 10, and 50, randomly selected documents per author). From Tables 5 and 6 we can see that BOW and LOWBOW histogram representations obtained similar performance to each other across the different training set sizes, which agree with results in Table 2 for the BC data sets. The best performance across the different configurations of the RBC data set was obtained with the BOLH formulation (row 6 in Tables 5 and 6). The improvements of local histograms over the BOW formulation vary across different settings and when using information at word-level and character-level. When using words (columns 2-6 in Table 5) the differences in performance are of 15.6%, 6.2%, 6.8%, 2.9%, 3.8% when using 1, 3, 5, 10 and 50 documents per author, respectively. Thus, it is evident that local histograms are more beneficial when less documents are considered. Here, the lack of information is compensated by the availability of several histograms per author. 
When using character n-grams (columns 2-6 in Table 6) the corresponding differences in performance are of 5.4%, 6.4%, 6.4%, 6% and 11.4%, when using 1, 3, 5, 10, and 50 documents per author, respectively. In this case, the larger improvement was obtained when 50 documents per author are available; nevertheless, one should note that results using character-level information are, in general, significantly better than those obtained with word-level information; hence, improvements are expected to be smaller. When we compare the results of the BOLH formulation with the best reported results elsewhere (c.f. last row 6 in Tables 5 and 6) (Plakias and Stamatatos, 2008b), we found that the improvements range from 14% to 30.2% when using character ngrams and from 1.2% to 26% when using words. The differences in performance are larger when less information is used (e.g., when 5 documents are used for training) and we believe the differences would be even larger if results for 1 and 3 documents were available. These are very positive results; for example, we can obtain almost 71% of accuracy, using local histograms of character n-grams when a single document is available per author (recall that we have used all of the test samples for evaluating the performance of our methods). We now analyze the performance of the different methods when using the IRBC data set (columns 79 in Tables 5 and 6). The same pattern as before can be observed in experimental results for these data sets as well: BOW and LOWBOW histograms obtained comparable performance to each other and the BOLH formulation performed the best. The BOLH formulation outperforms state of the art approaches by a considerable margin that ranges from 10% to 27%. Again, better results were obtained when using character n-grams for the local histograms. With respect to RBC data sets, the BOLH at the character-level resulted very robust to the reduction of training set size and the highly imbalanced data. Summarizing, the results obtained in RBC and IRBC data sets show that the use of local histograms is advantageous under challenging conditions. An SVM under the BOLH representation is less sensitive to the number of training examples available and to the imbalance of data than an SVM using the BOW representation. Our hypothesis for this behavior is that local histograms can be thought of as expanding training instances, because for each training instance in the BOW formulation we have k−training instances under BOLH. The benefits of such expansion become more notorious as the number of available documents per author decreases. 295 WORDS Data set Balanced Imbalanced Setting 1-doc 3-docs 5-docs 10-docs 50-docs 2-10 5-10 10-20 BOW 36.8% 57.1% 62.4% 69.9% 78.2% 62.3% 67.2% 71.2% LOWBOW 37.9% 55.6% 60.5% 69.3% 77.4% 61.1% 67.4% 71.5% Diffusion kernel 52.4% 63.3% 69.2% 72.8% 82.0% 66.6% 70.7% 74.1% Reference 53.4% 67.8% 80.8% 49.2% 59.8% 63.0% Table 5: AA accuracy in RBC (columns 2-6) and IRBC (columns 7-9) data sets when using words as terms. We report results for the BOW, LOWBOW histogram and BOLH representations. For reference (last row), we also include the best result reported in (Plakias and Stamatatos, 2008b), when available, for each configuration. 
CHARACTER N-GRAMS Data set Balanced Imbalanced Setting 1-doc 3-docs 5-docs 10-docs 50-docs 2-10 5-10 10-20 BOW 65.3% 71.9% 74.2% 76.2% 75.0% 70.1% 73.4% 73.1% LOWBOW 61.9% 71.6% 74.5% 73.8% 75.0% 70.8% 72.8% 72.1% Diffusion kernel 70.7% 78.3% 80.6% 82.2% 86.4% 77.8% 80.5% 82.2% Reference 50.4% 67.8% 76.6% 49.2% 59.8% 63.0% Table 6: AA accuracy in the RBC and IRBC data sets when using character n-grams as terms. 6 Conclusions We have described the use of local histograms (LH) over character n-grams for AA. LHs are enriched histogram representations that preserve sequential information in documents (in terms of the positions of terms in documents); we explored the suitability of LHs over n-grams at the character-level for AA. We showed evidence supporting our hypothesis that LHs are very helpful for AA; we believe that this is due to the fact that LOWBOW representations can uncover, to some extent, the writing preferences of authors. Our experimental results showed that LHs outperform traditional bag-of-words formulations and state of the art techniques in balanced, imbalanced, and reduced data sets. The improvements were larger in reduced and imbalanced data sets, which is a very positive result as in real AA applications one often faces highly imbalanced and small sample issues. Our results are promising and motivate further research on the use and extension of the LOWBOW framework for related tasks (e.g. authorship verification and plagiarism detection). As future work we would like to explore the use of LOWBOW representations for profile-based AA and related tasks. Also, we would like to develop model selection strategies for learning what combination of hyperparameters works better for modeling each author. Acknowledgments We thank E. Stamatatos for making his data set available. Also, we are grateful for the thoughtful comments of L. A. Barr´on and those of the anonymous reviewers. This work was partially supported by CONACYT under project grants 61335, and CB-2009-134186, and by UAB faculty development grant 3110841. References AMIDA. 2007. Augmented multi-party interaction with distance access. Available from http://www. amidaproject.org/, AMIDA Report. S. Argamon and S. Levitan. 2005. Measuring the usefulness of function words for authorship attribution. In Proceedings of the Joint Conference of the Association for Computers and the Humanities and the Association for Literary and Linguistic Computing, Victoria, BC, Canada. S. Canu, Y. Grandvalet, V. Guigue, and A. Rakotomamonjy. 2005. SVM and kernel methods Matlab toolbox. Perception Systmes et Information, INSA de Rouen, Rouen, France. V. Chasanis, A. Kalogeratos, and A. Likas. 2009. Movie segmentation into scenes and chapters using locally weighted bag of visual words. In Proceedings of the ACM International Conference on Image and Video Retrieval, pages 35:1–35:7, Santorini, Fira, Greece. ACM Press. R. M. Coyotl-Morales, L. Villase˜nor-Pineda, M. Montesy-G´omez, and P. Rosso. 2006. Authorship attribution using word sequences. In Proceedings of 11th 296 Iberoamerican Congress on Pattern Recognition, volume 4225 of LNCS, pages 844–852, Cancun, Mexico. Springer. D. Das and A. Martins. 2007. A survey on automatic text summarization. Available from: http://www.cs.cmu.edu/˜nasmith/LS2/ das-martins.07.pdf, Literature Survey for the Language and Statistics II course at Carnegie Mellon University. O. de Vel, A. Anderson, M. Corney, and G. Mohay. 2001. Multitopic email authorship attribution forensics. 
In Proceedings of the ACM Conference on Computer Security - Workshop on Data Mining for Security Applications, Philadelphia, PA, USA. K. Grauman. 2006. Matching Sets of Features for Efficient Retrieval and Recognition. Ph.D. thesis, Massachusetts Institute of Technology. J. Houvardas and E. Stamatatos. 2006. N-gram feature selection for author identification. In Proceedings of the 12th International Conference on Artificial Intelligence: Methodology, Systems, and Applications, volume 4183 of LNCS, pages 77–86, Varna, Bulgaria. Springer. V. Keselj, F. Peng, N. Cercone, and C. Thomas. 2003. Ngram-based author profiles for authorship attribution. In Proceedings of the Pacific Association for Computational Linguistics, pages 255–264, Halifax, Canada. M. Koppel, J. Schler, and S. Argamon. 2009. Computational methods in authorship attribution. Journal of the American Society for Information Science and Technology, 60:9–26. J. Lafferty and G. Lebanon. 2005. Diffusion kernels on statistical manifolds. Journal of Machine Learning Research, 6:129–163. M. Lambers and C. J. Veenman. 2009. Forensic authorship attribution using compression distances to prototypes. In Computational Forensics, Lecture Notes in Computer Science, Volume 5718. ISBN 978-3-64203520-3. Springer Berlin Heidelberg, 2009, p. 13, volume 5718 of LNCS, pages 13–24. Springer. G. Lebanon, Y. Mao, and J. Dillon. 2007. The locally weighted bag of words framework for document representation. Journal of Machine Learning Research, 8:2405–2441. D. Lewis, T. Yang, and F. Rose. 2004. RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5:361–397. K. Luyckx and W. Daelemans. 2010. The effect of author set size and data size in authorship attribution. Literary and Linguistic Computing, pages 1–21, August. Y. Mao, J. Dillon, and G. Lebanon. 2007. Sequential document visualization. IEEE Transactions on Visualization and Computer Graphics, 13(6):1208–1215. F. Peng, D. Shuurmans, V. Keselj, and S. Wang. 2003. Language independent authorship attribution using character level language models. In Proceedings of the 10th conference of the European chapter of the Association for Computational Linguistics, volume 1, pages 267–274, Budapest, Hungary. F. Peng, D. Shuurmans, and S. Wang. 2004. Augmenting naive Bayes classifiers with statistical language models. Information Retrieval Journal, 7(1):317–345. S. R. Pillay and T. Solorio. 2010. Authorship attribution of web forum posts. In Proceedings of the eCrime Researchers Summit (eCrime), 2010, pages 1–7, Dallas, TX, USA. IEEE. S. Plakias and E. Stamatatos. 2008a. Author identification using a tensor space representation. In Proceedings of the 18th European Conference on Artificial Intelligence, volume 178, pages 833–834, Patras, Greece. IOS Press. S. Plakias and E. Stamatatos. 2008b. Tensor space models for authorship attribution. In Proceedings of the 5th Hellenic Conference on Artificial Intelligence: Theories, Models and Applications, volume 5138 of LNCS, pages 239–249, Syros, Greece. Springer. R. Rifkin and A. Klautau. 2004. In defense of one-vs-all classification. Journal of Machine Learning Research, 5:101–141. Y. Rubner, C. Tomasi, J. Leonidas, and J. Guibas. 2001. The earth mover’s distance as a metric for image retrieval. International Journal of Computer Vision, 40(2):99–121. F. Sebastiani. 2002. Machine learning in automated text categorization. ACM Computing Surveys, 34(1):1–47. J. Shawe-Taylor and N. Cristianini. 2004. 
Kernel Methods for Pattern Analysis. Cambridge University Press. E. Stamatatos. 2006a. Authorship attribution based on feature set subspacing ensembles. International Journal on Artificial Intelligence Tools, 15(5):823–838. E. Stamatatos. 2006b. Ensemble-based author identification using character n-grams. In Proceedings of the 3rd International Workshop on Text-based Information Retrieval, pages 41–46, Riva del Garda, Italy. E. Stamatatos. 2009a. Intrinsic plagiarism detection using character n-gram profiles. In Proceedings of the 3rd International Workshop on Uncovering Plagiarism, Authorship, and Social Software Misuse, PAN’09, pages 38–46, Donostia-San Sebastian, Spain. E. Stamatatos. 2009b. A survey of modern authorship attribution methods. Journal of the American Society for Information Science and Technology, 60(3):538– 556. M. Tearle, K. Taylor, and H. Demuth. 2008. An algorithm for automated authorship attribution using neural networks. Literary and Linguist Computing, 23(4):425–442. 297 Y. Zhao and J. Zobel. 2005. Effective and scalable authorship attribution using function words. In Proceedings of 2nd Asian Information Retrieval Symposium, volume 3689 of LNCS, pages 174–189, Jeju Island, Korea. Springer. 298
|
2011
|
30
|
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 299–308, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Word Maturity: Computational Modeling of Word Knowledge Kirill Kireyev Thomas K Landauer Pearson Education, Knowledge Technologies Boulder, CO {kirill.kireyev, tom.landauer}@pearson.com Abstract While computational estimation of difficulty of words in the lexicon is useful in many educational and assessment applications, the concept of scalar word difficulty and current corpus-based methods for its estimation are inadequate. We propose a new paradigm called word meaning maturity which tracks the degree of knowledge of each word at different stages of language learning. We present a computational algorithm for estimating word maturity, based on modeling language acquisition with Latent Semantic Analysis. We demonstrate that the resulting metric not only correlates well with external indicators, but captures deeper semantic effects in language. 1 Motivation It is no surprise that through stages of language learning, different words are learned at different times and are known to different extents. For example, a common word like “dog” is familiar to even a first-grader, whereas a more advanced word like “focal” does not usually enter learners’ vocabulary until much later. Although individual rates of learning words may vary between high- and low-performing students, it has been observed that “children […] acquire word meanings in roughly the same sequence” (Biemiller, 2008). The aim of this work is to model the degree of knowledge of words at different learning stages. Such a metric would have extremely useful applications in personalized educational technologies, for the purposes of accurate assessment and personalized vocabulary instruction. 2 Rethinking Word Difficulty Previously, related work in education and psychometrics has been concerned with measuring word difficulty or classifying words into different difficulty categories. Examples of such approaches include creation of word lists for targeted vocabulary instruction at various grade levels that were compiled by educational experts, such as Nation (1993) or Biemiller (2008). Such word difficulty assignments are also implicitly present in some readability formulas that estimate difficulty of texts, such as Lexiles (Stenner, 1996), which include a lexical difficulty component based on the frequency of occurrence of words in a representative corpus, on the assumption that word difficulty is inversely correlated to corpus frequency. Additionally, research in psycholinguistics has attempted to outline and measure psycholinguistic dimensions of words such as age-of-acquisition and familiarity, which aim to track when certain words become known and how familiar they appear to an average person. Importantly, all such word difficulty measures can be thought of as functions that assign a single scalar value to each word w: !"##"$%&'( ∶! ! →ℝ (1) There are several important limitations to such metrics, regardless of whether they are derived from corpus frequency, expert judgments or other measures. First, learning each word is a continual process, one that is interdependent with the rest of the vocabulary. Wolter (2001) writes: 299 […] Knowing a word is quite often not an either-or situation; some words are known well, some not at all, and some are known to varying degrees. 
[…] How well a particular word is known may condition the connections made between that particular word and the other words in the mental lexicon. Thus, instead of modeling when a particular word will become fully known, it makes more sense to model the degree to which a word is known at different levels of language exposure. Second, word difficulty is inherently perspectival: the degree of word understanding depends not only on the word itself, but also on the sophistication of a given learner. Consider again the difference between “dog” and “focal”: a typical firstgrader will have much more difficulty understanding the latter word compared to the former, whereas a well-educated adult will be able to use these words with equal ease. Therefore, the degree, or maturity, of word knowledge is inherently a function of two parameters -- word w and learner level l: !"#$%&#' ∶! !, ! →ℝ (2) As the level l increases (i.e. for more advanced learners), we would expect the degree of understanding of word w to approach its full value corresponding to perfect knowledge; this will happen at different rates for different words. Ideally, we would obtain maturity values by testing word knowledge of learners across different levels (ages or school grades) for all the words in the lexicon. Such a procedure, however, is prohibitively expensive; so instead we would like to estimate word maturity by using computational models. To summarize: our aim is to model the development of meaning of words as a function of increasing exposure to language, and ultimately - the degree to which the meaning of words at each stage of exposure resemble their “adult” meaning. We therefore define word meaning maturity to be the degree to which the understanding of the word (expected for the average learner of a particular level) resembles that of an ideal mature learner. 3 Modeling Word Meaning Acquisition with Latent Semantic Analysis 3.1 Latent Semantic Analysis (LSA) An appealing choice for quantitatively modeling word meanings and their growth over time is Latent Semantic Analysis (LSA), an unsupervised method for representing word and document meaning in a multi-dimensional vector space. The LSA vector representation is derived in an unsupervised manner, based on occurrence patterns of words in a large corpus of natural language documents. A Singular Value Decomposition on the high-dimensional matrix of word/document occurrence counts (A) in the corpus, followed by zeroing all but the largest r elements1 of the diagonal matrix S, yields a lowerrank word vector matrix (U). The dimensionality reduction has the effect of smoothing out incidental co-occurrences and preserving significant semantic relationships between words. The resulting word vectors2 in U are positioned in such a way that semantically related words vectors point in similar directions or, equivalently, have higher cosine values between them. For more details, please refer to Landauer et al. (2007) and others. Figure 1. The SVD process in LSA illustrated. The original high-dimensional word-by-document matrix A is decomposed into word (U) and document (V) matrices of lower dimensionality. In addition to merely measuring semantic relatedness, LSA has been shown to emulate the learning of word meanings from natural language (as can be evidenced by a broad range of applications from synonym tests to automated essay grading), at rates that resemble those of human learners (Laundauer et al, 1997). 
Landauer and Dumais (1997) have demonstrated empirically that LSA can emulate not only the rate of human language acquisition, but also more subtle phenomena, such as the effects of learning certain words on meaning of other words.
1 Typically the first approx. 300 dimensions are retained. 2 UΣ is used to project word vectors into V-space.
[Figure 1 diagram labels omitted: the word-by-document matrix A is factored by the SVD into word vectors (U), singular values (S) and document vectors (V); see the caption above.]
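As a concrete sketch of the space construction just described (our own illustration, not the authors' pipeline): it assumes a dense word-by-document count matrix and NumPy's full SVD, whereas a corpus of realistic size would call for a sparse truncated SVD, and the usual LSA term weighting and normalization steps are omitted.

```python
import numpy as np

def build_lsa_space(A, r=300):
    """Rank-r LSA space from a |V| x |D| word-by-document count matrix A.

    Returns word vectors (rows of U * Sigma, cf. footnote 2) and document
    vectors (rows of V), both of dimensionality r.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    U_r, s_r, Vt_r = U[:, :r], s[:r], Vt[:r, :]
    word_vectors = U_r * s_r    # broadcasting scales column j of U by sigma_j
    doc_vectors = Vt_r.T
    return word_vectors, doc_vectors
```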
LSA can model meaning with high accuracy, as attested, for example, by 90% correlation with human judgments on assessing the quality of student essay content (Landauer, 2002).
3.2 Using LSA to Compute Word Maturity
In this work, the general procedure behind computationally estimating word maturity of a learner at a particular intermediate level (i.e. age or school grade level) is as follows:
1. Create an intermediate corpus for the given level. This corpus approximates the amount and sophistication of language encountered by a learner at the given level.
2. Build an LSA space on that corpus. The resulting LSA word vectors model the meaning of each word to the particular intermediate-level learner.
3. Compare the meaning representation of each word (its LSA vector) to the corresponding one in a reference model. The reference model is trained on a much larger corpus and approximates the word meanings of a mature adult learner.
We can repeat this process for each of a number of levels. These levels may directly correspond to school grades, learner ages or any other arbitrary gradations. In summary, we estimate the word maturity of a given word at a given learner level by comparing the word vector from an intermediate LSA model (trained on a corpus of size and sophistication comparable to that which a typical real student at the given level encounters) to the corresponding vector from a reference adult LSA model (trained on a larger corpus corresponding to a mature language learner). A high discrepancy between the vectors would suggest that an intermediate model's meaning of a particular word is quite different from the reference meaning, and thus that the word maturity at the corresponding level is relatively low.
3.3 Procrustes Alignment (PA)
Comparing vectors across different LSA spaces is less straightforward, since the individual dimensions in LSA do not have a meaningful interpretation and are an artifact of the content and ordering of the training corpus used. Therefore, direct comparisons across two different spaces, even of the same dimensionality, are meaningless, due to a mismatch in their coordinate systems. Fortunately, we can employ a multivariate algebra technique known as Procrustes Alignment (or Procrustes Analysis, PA), typically used to align two multivariate configurations of a corresponding set of points in two different geometric spaces. PA has been used in conjunction with LSA, for example, in cross-language information retrieval (Littman, 1998). The basic idea behind PA is to derive a rotation matrix that allows one space to be rotated into the other. The rotation matrix is computed in such a way as to minimize the differences (namely, the sum of squared distances) between corresponding points, which in the case of LSA can be common words or documents in the training set. For more details, the reader is advised to consult chapter 5 of (Krzanowski, 2000) or similar literature on multivariate analysis. In summary, given two matrices containing the coordinates of n corresponding points, X and Y (and assuming mean-centering and an equal number of dimensions, as is the case in this work), we would like to minimize the sum of squared distances between the points:
M^2 = \sum_{i=1}^{n} \sum_{j} (x_{ij} - y_{ij})^2
We try to find an orthogonal rotation matrix Q which minimizes M^2 by rotating Y relative to X. That matrix can be obtained by solving the equation:
M^2 = \mathrm{trace}(XX' + YY' - 2XQ'Y')
It turns out that the solution for Q is given by VU', where UΣV' is the singular value decomposition of the matrix X'Y.
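A sketch of this alignment step, using SciPy's orthogonal Procrustes solver (our tooling choice, not the authors'): X and Y hold the coordinates of the shared anchor points in the reference and intermediate spaces, respectively, and are assumed to be mean-centered and of equal dimensionality, as stated above.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

def alignment_rotation(X, Y):
    """Orthogonal Q minimizing ||X - Y Q||_F over corresponding rows of X and Y.

    Equivalent to the closed-form solution above (Q = VU' with UΣV' the SVD of
    X'Y); SciPy derives the same Q from the SVD of Y'X.
    """
    Q, _ = orthogonal_procrustes(Y, X)
    return Q

def rotate_into_reference(vectors, Q):
    """Map intermediate-space vectors (rows) into the reference coordinate system."""
    return vectors @ Q
```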
In our situation, where there are two spaces, adult and intermediate, the alignment points are the corresponding document vectors corresponding to the documents that the training corpora of the two models have in common (recall that the adult corpus is a superset of each of the intermediate corpora). The result of the Procrustes Alignment of the two spaces is effectively a joint LSA space containing two distinct word vectors for each word (e.g. “dog1”, “dog2”), corresponding to the vectors from each of the original spaces. After 301 merging using Procrustes Alignment, the comparison of word meanings becomes a simple problem of comparing word vectors in the joint space using the standard cosine metric. 4 Implementation Details In our experiments we used passages from the MetaMetrics Inc. 2002 corpus3, largely consisting of educational and literary content representative of the reading material used in American schools at different grade levels. The average length of each passage is approximately 135 words. The first-level intermediate corpus was composed of 6,000 text passages, intended for school grade 1 or below. The grade level is approximated using the Coleman-Liau readability formula (Coleman, 1975), which estimates the US grade level necessary to comprehend a given text, based on its average sentence and word length statistics: !"# = 0.0588! −0.296! −15.8 (4) where L is the average number of letters per 100 words and S is the average number of sentences per 100 words. Each subsequent intermediate corpus contains additional 6,000 new passages of the next grade level, in addition to the previous corpus. In this way, we create 14 levels. The adult corpus is twice as large, and of same grade level range (0-14) as the largest intermediate corpus. In summary, the following describes the size and makeup of the corpora used: Corpus Size (passages) Approx. Grade Level (Coleman-Liau Index) Intermediate 1 6,000 0.0 - 1.0 Intermediate 2 12,000 0.0 - 2.0 Intermediate 3 18,000 0.0 - 3.0 Intermediate 4 24,000 0.0 - 4.0 … Intermediate 14 84,000 0.0 - 14.0 Adult 168,000 0.0 - 14.0 Table 1. Size and makeup of corpora. used for LSA models. The particular choice of the Coleman-Liau readability formula (CLI) is not essential; our experiments show that other well-known readability formulas (such as Lexiles) work equally well. All that is needed is some approximate ordering of 3 We would like to acknowledge Jack Stenner and MetaMetrics for the use of their corpus. passages by difficulty, in order to mimic the way typical human learners encounter progressively more difficult materials at successive school grades. After creating the corpora, we: 1. Build LSA spaces on the adult and each of the intermediate corpora 2. Merge the intermediate space for level l with the adult space, using Procrustes Alignment. This results in a joint space with two sets of vectors: the versions from the intermediate space {vlw}, and adult space{vaw}. 3. Compute the cosine in the joint space between the two word vectors for the given word w !" !, ! = !"# (!"!, !"!) (5) In the cases where a word w has not been encountered in a given intermediate space, or in the rare cases where the cosine value falls below 0, the word maturity value is set to 0. Hence, the range for the word maturity function falls in the closed interval [0.0, 1.0]. A higher cosine value means greater similarity in meaning between the reference and intermediate spaces, which implies a more mature meaning of word w at the level l, i.e. higher word meaning maturity. 
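Two of the computations described above — the Coleman-Liau index of Equation (4) used to level passages, and the word-maturity cosine of Equation (5) — might look as follows. This is only a sketch: the paper gives the readability formula but not the tokenization, so the letter and sentence counting here is a crude approximation of ours, while the handling of unseen words and negative cosines follows the description above.

```python
import numpy as np

def coleman_liau_index(text):
    """CLI = 0.0588*L - 0.296*S - 15.8 (L: letters, S: sentences, per 100 words)."""
    n_words = max(1, len(text.split()))
    letters = sum(ch.isalpha() for ch in text)
    sentences = max(1, sum(text.count(p) for p in ".!?"))  # rough sentence count
    L = 100.0 * letters / n_words
    S = 100.0 * sentences / n_words
    return 0.0588 * L - 0.296 * S - 15.8

def word_maturity(v_int, v_adult):
    """Cosine of aligned intermediate/adult word vectors, clipped to [0, 1] (Eq. 5)."""
    if v_int is None or v_adult is None:   # word never seen at this level
        return 0.0
    denom = np.linalg.norm(v_int) * np.linalg.norm(v_adult)
    if denom == 0.0:
        return 0.0
    return max(0.0, float(np.dot(v_int, v_adult)) / denom)
```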
The scores between discrete levels are interpolated, resulting in a continuous word maturity curve for each word. Figure 1 below illustrates resulting word maturity curves for some of the words. !" !#$" !#%" !#&" !#'" (" !" (" $" )" %" *" &" +" '" ," (!" ((" ($" ()" (%"-." !"#$%&'()#*(+% ,-.-/% /01" 234567" 846/9204" :0;9<" Figure 2. Word maturity curves for selected words. Consistent with intuition, simple words like “dog” approach their adult meaning rather quickly, while “focal” takes much longer to become known to any degree. An interesting example is “turkey”, which has a noticeable plateau in the middle. This can be explained by the fact that this word has two distinct senses. Closer analysis of the corpus and the semantic near-neighbor word vectors at each in302 termediate space, shows that earlier meaning deal almost exclusively with the first sense (bird), while later readings with the other (country). Therefore, even though the word “turkey” is quite prevalent in earlier readings, its full meaning is not learned until later levels. This demonstrates that our method takes into account the meaning, and not merely the frequency of occurrence. 5 Evaluation 5.1 Time-to-maturity Evaluation of the word maturity metric against external data is not always straightforward because, to the best of our knowledge, data that contains word knowledge statistics at different learner levels does not exist. Instead, we often have to evaluate against external data consisting of scalar difficulty values (see Section 2 for discussion) for each word, such as age-of-acquisition norms described in the following subsection. There are two ways to make such comparisons possible. One is to compute the word maturity at a particular level, obtaining a single number for each word. Another is by computing time-tomaturity: the minimum level (the value on the xaxis of the word maturity graph) at which the word maturity reaches4 a particular threshold α: !!" ! = min ! !. !. !" !, ! > ! (6) Intuitively, this measure corresponds to the age in a learner’s development when a given word becomes sufficiently understood. The parameter α can be estimated empirically (in practice α=0.45 gives good correlations with external measures). Since the values of word maturity are interpolated, the ttm(w) can take on fractional values. It should be emphasized that such a collapsing of word maturity into a scalar value inherently results in loss of information; we only perform it in order to allow evaluation against external data sources. As a baseline for these experiments we include word frequency, namely the document frequency of words in the adult corpus. 4 Values between discrete levels are obtained using piecewise linear interpolation 5.2 Age-of-Acquisition Norms Age-of-Acquisition (AoA) is a psycholinguistic property of words originally reported by Carol & White (1973). Age of Acquisition approximates the age at which a word is first learned and has been proposed as a significant contributor to language and memory processes. With some exceptions, AoA norms are collected by subjective measures, typically by asking each of a large number of participants to estimate in years the age when they have learned the word. AoA estimates have been shown to be reliable and provide a valid estimate for the objective age at which a word is acquired; see (Davis, in press) for references and discussion. 
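The time-to-maturity of Equation (6), with the piecewise linear interpolation mentioned in footnote 4, can be computed as in the following sketch (a helper of our own, with the α = 0.45 default taken from the text):

```python
def time_to_maturity(levels, maturities, alpha=0.45):
    """Smallest (interpolated) level at which word maturity exceeds alpha.

    levels: increasing level values, e.g. [1, 2, ..., 14].
    maturities: word-maturity values at those levels.
    Returns a possibly fractional level, or None if alpha is never reached.
    """
    for i, m in enumerate(maturities):
        if m > alpha:
            if i == 0:
                return float(levels[0])
            m0, l0 = maturities[i - 1], levels[i - 1]
            # Linear interpolation between the two surrounding discrete levels.
            return l0 + (alpha - m0) * (levels[i] - l0) / (m - m0)
    return None
```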
In this experiment we compute Spearman correlations between time-to-maturity and two available collections of AoA norms: Gilhooly et al., (1980) norms5, and Bristol norms6 (StadthagenGonzalez et al., 2010). Measure Gilhooly (n=1643) Bristol (n=1402) (-) Frequency 0.59 0.59 Time-to-Maturity (α=0.45) 0.72 0.64 Table 2. Correlations with Age of Acquisition norms. 5.3 Instruction Word Lists In this experiment, we examine leveled lists of words, as created by Biemiller (2008) in the book entitled “Words Worth Teaching: Closing the Vocabulary Gap”. Based on results of multiplechoice word comprehension tests administered to students of different grades as well as expert judgments, the author derives several word difficulty lists for vocabulary instruction in schools, including: o Words known by most children in grade 2 o Words known by 40-80% of children in grade 2 o Words known by 40-80% of children in grade 6 o Words known by fewer than 40% of children in grade 6 One would expect the words in these four groups to increase in difficulty, in the order they are presented above. 5 http://www.psy.uwa.edu.au/mrcdatabase/uwa_mrc.htm 6 http://language.psy.bris.ac.uk/bristol_norms.html 303 To verify how these word groups correspond to the word maturity metric, we assign each of the words in the four groups a difficulty rating 1-4 respectively, and measure the correlation with time-to-maturity. Measure Correlation (-) Frequency 0.43 Time-to-maturity (α=0.45) 0.49 Table 3. Correlations with instruction word lists (n=4176). The word maturity metric shows higher correlation with instruction word list norms than word frequency. 5.4 Text Complexity Another way in which our metric can be evaluated is by examining the word maturity in texts that have been leveled, i.e. have been assigned ratings of difficulty. On average, we would expect more difficult texts to contain more difficult words. Thus, the correlation between text difficulty and our word maturity metric can serve as another validation of the metric. For this purpose, we obtained a collection of readings that are used as reading comprehension tests by different state websites in the US7. The collection consists of 1,220 readings, each annotated with a US school grade level (in the range between 3-12) for which the reading is intended. The average length each passage was approximately 489 words. In this experiment we computed the correlation of the grade level with time-to-maturity, and two other measures, namely: • Time-to-maturity: average time-tomaturity of unique words in text (excluding stopwords) with α=0.45. • Coleman-Liau. The Coleman-Liau readability index (Equation 4). • Frequency. Average of corpus logfrequency for unique words in the text, excluding stopwords. 7 The collection was created as part of the “Aspects of Text Complexity” project funded by the Bill and Melinda Gates Foundation, 2010. Measure Correlation Frequency (avg. of unique words) 0.60 Coleman-Liau 0.64 Time-to-maturity (α=0.45) (avg. of unique non-stopwords) 0.70 Table 4. Correlations of grade levels with different metrics. 6 Emphasis on Meaning In this section, we would like to highlight certain properties of the LSA-based word maturity metric, particularly aiming to illustrate the fact that the metric tracks acquisition of meaning from exposure to language and not merely more shallow effects, such as word frequency in the training corpus. 
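The rank correlations reported in Tables 2-4 reduce, with SciPy (our tooling choice, not the authors'), to a single call over aligned score lists for the same words; the values below are toy numbers for illustration only.

```python
from scipy.stats import spearmanr

ttm_values = [2.5, 4.0, 7.3, 10.1]   # time-to-maturity per word (toy values)
aoa_norms = [3.1, 4.4, 6.9, 9.8]     # human age-of-acquisition ratings (toy values)

rho, p_value = spearmanr(ttm_values, aoa_norms)
```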
6.1 Maturity based on Frequency For a baseline that does not take meaning into account, let us construct a set of maturity-like curves based on frequency statistics alone. More specifically, we define the frequency-maturity for a particular word at a given level as the ratio of the number of occurrences at the intermediate corpus for that level (l) to the number of occurrences in the reference corpus (a): !" !, ! = !"#_!""#$!(!) !"#_!""#$!(!) Similarly to the original LSA-based word maturity metric, this ratio increases from 0 to 1 for each word as the amount of cumulative language exposure increases. The corpora used at each intermediate level are identical to the original word maturity model, but instead of creating LSA spaces we simply use the corpora to compute word frequency. The following figure shows the Spearman correlations between the external measures used for experiments in Section 5, and time-to-maturity computed based on the two maturity metrics: the new frequency-based maturity and the original LSA-based word maturity. 304 Figure 3. Correlations of word maturity computed using frequency (as well as the original) against external metrics described in Section 5. The results indicate that the original LSA-based word maturity correlates better with real-world data than a maturity metric simply based on frequency. 6.2 Homographs Another insight into the fact that the LSA-based word maturity metric tracks word meaning rather than mere frequency may be gained from analysis of words that are homographs: words that contain two or more unrelated meanings in the same written form, such as the word “turkey” illustrated in Section 4. (This is related to but distinct from the merely polysemous words that have several related meanings), Because of the conflation of several unrelated meanings into the same orthographic form, homographs implicitly contain more semantic content in a single word. Therefore, one would expect the meaning of homographs to mature more slowly than would be predicted by frequency alone: all things being equal, a learner has to learn the meanings for all of the senses of a homograph word before the word can be considered fully known. More specifically, one would expect the timeto-maturity of homographs to have greater values than words of similar frequency. To test this hypothesis, we obtained8 a list 174 common English homographs. For each of them, we compared their time-to-maturity to the average time-to-maturity of words that have the same (+/- 1%) corpus frequency. 8 http://en.wikipedia.org/wiki/List_of_English_homographs The results of a paired t-test confirms the hypothesis that the time-to-maturity of homographs is greater than other words of the same frequency, with the p-value = 5.9e-6. This is consistent with the observation that homographs will take longer to learn and serves as evidence that LSA-based word maturity approximates effects related to meaning. 6.3 Size of the Reference Corpus Another area of investigation is the repercussions of the choice of the corpus for the reference (adult) model. The size (and content) of the corpus used to train the reference model is potentially important, since it affects the word maturity calculations, which are comparisons of the intermediate LSA spaces to the reference LSA space built on this corpus. It is interesting to investigate how the word maturity model would be affected if the adult corpus were made significantly more sophisticated. 
[Figure 3, referenced above: correlation bars — freq-WM (α=0.15): 0.66, 0.56, 0.42, 0.68; LSA-WM (α=0.45): 0.72, 0.64, 0.49, 0.71 — for AoA (Gilhooly), AoA (Bristol), Word Lists and Readings, respectively.]
If the word maturity metric were simply based on word frequency (including the frequency-based maturity baseline described in Section 6.1), one would expect the word maturity of the words at each level to decrease significantly if the reference model is made significantly larger, since each intermediate level will have encountered fewer words by comparison. Intuition about language learning, however, tells us that with enough language exposure a learner learns virtually all there is to know about any particular word; after the word reaches its adult maturity, subsequent encounters of natural readings do little to further change the knowledge of that word. Therefore, if word maturity were tracking something similar to real word knowledge, one would expect the word maturity for most words to plateau over time, and subsequently not change significantly, no matter how sophisticated the reference model becomes. To evaluate this inquiry we created a reference corpus that is twice as large as before (four times as large and of the same difficulty range as the corpus for the last intermediate level), containing roughly 329,000 passages. We computed the word maturity model using this larger reference corpus, while keeping all the original intermediate corpora of the same size and content. The results show that the average word maturity of words at the last intermediate level (14) de-
305 creases by less than 14% as a result of doubling the adult corpus. Furthermore, this number is as low as 6%, if one only considers more common words that occur 50 times or more in the corpus. This relatively small difference, in spite of a twofold increase of the adult corpus, is consistent with the idea that word knowledge should approach a plateau, after which further exposure to language does little to change most word meanings. 6.4 Integration into Lexicon Another important consideration with respect to word learning mentioned in Wotler (2001), is the “connections made between [a] particular word and the other words in the mental lexicon.” One implication of that is that measuring word maturity must take into account the way words in the language are integrated with other words. One way to test this effect is to introduce readings where a large part of the important vocabulary is not well known to learners at a given level. One would expect learning to be impeded when the learning materials are inappropriate for the learner level. This can be simulated in the word maturity model by rearranging the order of some of the training passages, by introducing certain advanced passages at a very early level. If the results of the word maturity metric were merely based on frequency, such a reordering would have no effect on the maturity of important words (measured after all the passages containing these words have been encountered), since the total number of relevant word encounters does not change as a result of this reshuffling. If, however, the metric reflected at least some degree of semantics, we would expect word maturities for important words in these readings to be lower as a result of such rearranging, due to the fact that they are being introduced in contexts consisting of words that are not well known at the early levels. To test this effect, we first collected all passages in the training corpus of intermediate models containing some advanced words from different topics, namely: “chromosome”, “neutron” and “filibuster” together with their plural variants. We changed the order of inclusion of these 89 passages into the intermediate models in each of the two following ways: 1. All the passages were introduced at the first level (l=1) intermediate corpus 2. All the passages were introduced at the last level (l=14) intermediate corpus. This resulted in two new variants of word maturity models, which were computed in all the same ways as before, except that all of these 89 advanced passages were introduced either at the very first level or at the very last level. We then computed the word maturity at the levels they were introduced. The hypothesis consistent with a meaning-based maturity method would be that less learning (i.e. lower word maturity) of the relevant words will occur when passages are introduced prematurely (at level 1). Table 5 shows the word maturities measured for each of those cases, at the level (1 or 14) when all of the passages have been introduced. Word Introduced at l=1 (WM at l=1) Introduced at l=14 (WM at l=14) chromosome 0.51 0.73 neutron 0.51 0.72 filibuster 0.58 0.85 Table 5. Word maturity of words resulting when all the relevant passages are introduced early vs late. Indeed, the results show lower word maturity values when advanced passages are introduced too early, and higher ones when the passages are introduced at a later stage, when the rest of the supporting vocabulary is known. 
7 Conclusion We have introduced a new metric for estimating the degree of knowledge of words by learners at different levels. We have also proposed and evaluated an implementation of this metric using Latent Semantic Analysis. The implementation is based on unsupervised word meaning acquisition from natural text, from corpora that resemble in volume and complexity the reading materials a typical human learner might encounter. The metric correlates better than word frequency to a range of external measures, including vocabulary word lists, psycholinguistic norms and leveled texts. Furthermore, we have shown that the metric is based on word meaning (to the extent that it can be approximated with LSA), and not merely on shallow measures like word frequency. 306 Many interesting research questions still remain pertaining to the best way to select and partition the training corpora, align adult and intermediate LSA models, correlate the results with real school grade levels, as well as other free parameters in the model. Nevertheless, we have shown that LSA can be employed to usefully mimic model word knowledge. The models are currently used (at Pearson Education) to create state-of-the-art personalized vocabulary instruction and assessment tools. 307 References Andrew Biemiller (2008). Words Worth Teaching. Columbus, OH: SRA/McGraw-Hill. John B. Carrol and M. N. White (1973). Age of acquisition norms for 220 picturable nouns. Journal of Verbal Learning & Verbal Behavior, 12, 563-576. Meri Coleman and T.L. Liau (1975). A computer readability formula designed for machine scoring, Journal of Applied Psychology, Vol. 60, pp. 283-284. Ken J. Gilhooly and R. H. Logie (1980). Age of acquisition, imagery, concreteness, familiarity and ambiguity measures for 1944 words. Behaviour Research Methods & Instrumentation, 12, 395-427. Wojtek J. Krzanowski (2000) Principles of Multivariate Analysis: A User’s Perspective (Oxford Statistical Science Series). Oxford University Press, USA. Thomas K Landauer and Susan Dumais (1997). A solution to Plato's problem: The Latent Semantic Analysis Theory of the Acquisition, Induction, and Representation of Knowledge. Psychological Review, 104, pp 211-240. Thomas K Landauer (2002). On the Computation Basis of Learning and Cognition: Arguments from LSA. In N. Ross (Ed.), The Psychology of Learning and Motivation, 41, 43-84. Thomas K Landauer, Danielle S. McNamara, Simon Dennis, and Walter Kintsch (2007). Handbook of Latent Semantic Analysis. Lawrence Erlbaum. Paul Nation (1993). Measuring readiness for simplified material: a test of the first 1,000 words of English. In Simplification: Theory and Application M. L. Tickoo (ed.), RELC Anthology Series 31: 193-203. Hans Stadthagen-Gonzalez and C. J. Davis (2006). The Bristol Norms for Age of Acquisition, Imageability and Familiarity. Behavior Research Methods, 38, 598-605. A. Jackson Stenner (1996). Measuring Reading Comprehension with the Lexile Framework. Forth North American Conference on Adolescent/Adult Literacy. Brent Wolter (2001). Comparing the L1 and L2 Mental Lexicon. Studies in Second Language Acquisition. Cambridge University Press. 308
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 309–319, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Finding Deceptive Opinion Spam by Any Stretch of the Imagination Myle Ott Yejin Choi Claire Cardie Department of Computer Science Cornell University Ithaca, NY 14853 {myleott,ychoi,cardie}@cs.cornell.edu Jeffrey T. Hancock Department of Communication Cornell University Ithaca, NY 14853 [email protected] Abstract Consumers increasingly rate, review and research products online (Jansen, 2010; Litvin et al., 2008). Consequently, websites containing consumer reviews are becoming targets of opinion spam. While recent work has focused primarily on manually identifiable instances of opinion spam, in this work we study deceptive opinion spam—fictitious opinions that have been deliberately written to sound authentic. Integrating work from psychology and computational linguistics, we develop and compare three approaches to detecting deceptive opinion spam, and ultimately develop a classifier that is nearly 90% accurate on our gold-standard opinion spam dataset. Based on feature analysis of our learned models, we additionally make several theoretical contributions, including revealing a relationship between deceptive opinions and imaginative writing. 1 Introduction With the ever-increasing popularity of review websites that feature user-generated opinions (e.g., TripAdvisor1 and Yelp2), there comes an increasing potential for monetary gain through opinion spam— inappropriate or fraudulent reviews. Opinion spam can range from annoying self-promotion of an unrelated website or blog to deliberate review fraud, as in the recent case3 of a Belkin employee who 1http://tripadvisor.com 2http://yelp.com 3http://news.cnet.com/8301-1001_ 3-10145399-92.html hired people to write positive reviews for an otherwise poorly reviewed product.4 While other kinds of spam have received considerable computational attention, regrettably there has been little work to date (see Section 2) on opinion spam detection. Furthermore, most previous work in the area has focused on the detection of DISRUPTIVE OPINION SPAM—uncontroversial instances of spam that are easily identified by a human reader, e.g., advertisements, questions, and other irrelevant or nonopinion text (Jindal and Liu, 2008). And while the presence of disruptive opinion spam is certainly a nuisance, the risk it poses to the user is minimal, since the user can always choose to ignore it. We focus here on a potentially more insidious type of opinion spam: DECEPTIVE OPINION SPAM—fictitious opinions that have been deliberately written to sound authentic, in order to deceive the reader. For example, one of the following two hotel reviews is truthful and the other is deceptive opinion spam: 1. I have stayed at many hotels traveling for both business and pleasure and I can honestly stay that The James is tops. The service at the hotel is first class. The rooms are modern and very comfortable. The location is perfect within walking distance to all of the great sights and restaurants. Highly recommend to both business travellers and couples. 2. My husband and I stayed at the James Chicago Hotel for our anniversary. This place is fantastic! We knew as soon as we arrived we made the right choice! The rooms are BEAUTIFUL and the staff very attentive and wonderful!! The area of the hotel is great, since I love to shop I couldn’t ask for more!! 
We will definatly be 4It is also possible for opinion spam to be negative, potentially in order to sully the reputation of a competitor. 309 back to Chicago and we will for sure be back to the James Chicago. Typically, these deceptive opinions are neither easily ignored nor even identifiable by a human reader;5 consequently, there are few good sources of labeled data for this research. Indeed, in the absence of gold-standard data, related studies (see Section 2) have been forced to utilize ad hoc procedures for evaluation. In contrast, one contribution of the work presented here is the creation of the first largescale, publicly available6 dataset for deceptive opinion spam research, containing 400 truthful and 400 gold-standard deceptive reviews. To obtain a deeper understanding of the nature of deceptive opinion spam, we explore the relative utility of three potentially complementary framings of our problem. Specifically, we view the task as: (a) a standard text categorization task, in which we use n-gram–based classifiers to label opinions as either deceptive or truthful (Joachims, 1998; Sebastiani, 2002); (b) an instance of psycholinguistic deception detection, in which we expect deceptive statements to exemplify the psychological effects of lying, such as increased negative emotion and psychological distancing (Hancock et al., 2008; Newman et al., 2003); and, (c) a problem of genre identification, in which we view deceptive and truthful writing as sub-genres of imaginative and informative writing, respectively (Biber et al., 1999; Rayson et al., 2001). We compare the performance of each approach on our novel dataset. Particularly, we find that machine learning classifiers trained on features traditionally employed in (a) psychological studies of deception and (b) genre identification are both outperformed at statistically significant levels by ngram–based text categorization techniques. Notably, a combined classifier with both n-gram and psychological deception features achieves nearly 90% cross-validated accuracy on this task. In contrast, we find deceptive opinion spam detection to be well beyond the capabilities of most human judges, who perform roughly at-chance—a finding that is consistent with decades of traditional deception detection research (Bond and DePaulo, 2006). 5The second example review is deceptive opinion spam. 6Available by request at: http://www.cs.cornell. edu/˜myleott/op_spam Additionally, we make several theoretical contributions based on an examination of the feature weights learned by our machine learning classifiers. Specifically, we shed light on an ongoing debate in the deception literature regarding the importance of considering the context and motivation of a deception, rather than simply identifying a universal set of deception cues. We also present findings that are consistent with recent work highlighting the difficulties that liars have encoding spatial information (Vrij et al., 2009). Lastly, our study of deceptive opinion spam detection as a genre identification problem reveals relationships between deceptive opinions and imaginative writing, and between truthful opinions and informative writing. 
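To make the text categorization framing in (a) concrete, the following sketch shows a minimal n-gram classifier of the kind evaluated in Section 5. It is only an illustration: scikit-learn is used here as a stand-in for the tools described in Section 4, and the reviews and labels arguments are hypothetical placeholders for the data of Section 3.

from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import Normalizer
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def ngram_svm_accuracy(reviews, labels):
    """reviews: list of review strings; labels: 1 = deceptive, 0 = truthful (hypothetical inputs)."""
    classifier = make_pipeline(
        CountVectorizer(ngram_range=(1, 2), lowercase=True),  # unigram + bigram counts
        Normalizer(norm="l2"),                                 # unit-length document vectors
        LinearSVC(),                                           # linear SVM
    )
    return cross_val_score(classifier, reviews, labels, cv=5).mean()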
The rest of this paper is organized as follows: in Section 2, we summarize related work; in Section 3, we explain our methodology for gathering data and evaluate human performance; in Section 4, we describe the features and classifiers employed by our three automated detection approaches; in Section 5, we present and discuss experimental results; finally, conclusions and directions for future work are given in Section 6. 2 Related Work Spam has historically been studied in the contexts of e-mail (Drucker et al., 2002), and the Web (Gy¨ongyi et al., 2004; Ntoulas et al., 2006). Recently, researchers have began to look at opinion spam as well (Jindal and Liu, 2008; Wu et al., 2010; Yoo and Gretzel, 2009). Jindal and Liu (2008) find that opinion spam is both widespread and different in nature from either e-mail or Web spam. Using product review data, and in the absence of gold-standard deceptive opinions, they train models using features based on the review text, reviewer, and product, to distinguish between duplicate opinions7 (considered deceptive spam) and non-duplicate opinions (considered truthful). Wu et al. (2010) propose an alternative strategy for detecting deceptive opinion spam in the absence 7Duplicate (or near-duplicate) opinions are opinions that appear more than once in the corpus with the same (or similar) text. While these opinions are likely to be deceptive, they are unlikely to be representative of deceptive opinion spam in general. Moreover, they are potentially detectable via off-the-shelf plagiarism detection software. 310 of gold-standard data, based on the distortion of popularity rankings. Both of these heuristic evaluation approaches are unnecessary in our work, since we compare gold-standard deceptive and truthful opinions. Yoo and Gretzel (2009) gather 40 truthful and 42 deceptive hotel reviews and, using a standard statistical test, manually compare the psychologically relevant linguistic differences between them. In contrast, we create a much larger dataset of 800 opinions that we use to develop and evaluate automated deception classifiers. Research has also been conducted on the related task of psycholinguistic deception detection. Newman et al. (2003), and later Mihalcea and Strapparava (2009), ask participants to give both their true and untrue views on personal issues (e.g., their stance on the death penalty). Zhou et al. (2004; 2008) consider computer-mediated deception in role-playing games designed to be played over instant messaging and e-mail. However, while these studies compare n-gram–based deception classifiers to a random guess baseline of 50%, we additionally evaluate and compare two other computational approaches (described in Section 4), as well as the performance of human judges (described in Section 3.3). Lastly, automatic approaches to determining review quality have been studied—directly (Weimer et al., 2007), and in the contexts of helpfulness (Danescu-Niculescu-Mizil et al., 2009; Kim et al., 2006; O’Mahony and Smyth, 2009) and credibility (Weerkamp and De Rijke, 2008). Unfortunately, most measures of quality employed in those works are based exclusively on human judgments, which we find in Section 3 to be poorly calibrated to detecting deceptive opinion spam. 3 Dataset Construction and Human Performance While truthful opinions are ubiquitous online, deceptive opinions are difficult to obtain without resorting to heuristic methods (Jindal and Liu, 2008; Wu et al., 2010). 
In this section, we report our efforts to gather (and validate with human judgments) the first publicly available opinion spam dataset with gold-standard deceptive opinions. Following the work of Yoo and Gretzel (2009), we compare truthful and deceptive positive reviews for hotels found on TripAdvisor. Specifically, we mine all 5-star truthful reviews from the 20 most popular hotels on TripAdvisor8 in the Chicago area.9 Deceptive opinions are gathered for those same 20 hotels using Amazon Mechanical Turk10 (AMT). Below, we provide details of the collection methodologies for deceptive (Section 3.1) and truthful opinions (Section 3.2). Ultimately, we collect 20 truthful and 20 deceptive opinions for each of the 20 chosen hotels (800 opinions total). 3.1 Deceptive opinions via Mechanical Turk Crowdsourcing services such as AMT have made large-scale data annotation and collection efforts financially affordable by granting anyone with basic programming skills access to a marketplace of anonymous online workers (known as Turkers) willing to complete small tasks. To solicit gold-standard deceptive opinion spam using AMT, we create a pool of 400 HumanIntelligence Tasks (HITs) and allocate them evenly across our 20 chosen hotels. To ensure that opinions are written by unique authors, we allow only a single submission per Turker. We also restrict our task to Turkers who are located in the United States, and who maintain an approval rating of at least 90%. Turkers are allowed a maximum of 30 minutes to work on the HIT, and are paid one US dollar for an accepted submission. Each HIT presents the Turker with the name and website of a hotel. The HIT instructions ask the Turker to assume that they work for the hotel’s marketing department, and to pretend that their boss wants them to write a fake review (as if they were a customer) to be posted on a travel review website; additionally, the review needs to sound realistic and portray the hotel in a positive light. A disclaimer 8TripAdvisor utilizes a proprietary ranking system to assess hotel popularity. We chose the 20 hotels with the greatest number of reviews, irrespective of the TripAdvisor ranking. 9It has been hypothesized that popular offerings are less likely to become targets of deceptive opinion spam, since the relative impact of the spam in such cases is small (Jindal and Liu, 2008; Lim et al., 2010). By considering only the most popular hotels, we hope to minimize the risk of mining opinion spam and labeling it as truthful. 10http://mturk.com 311 Time spent t (minutes) All submissions count: 400 tmin: 0.08, tmax: 29.78 ¯t: 8.06, s: 6.32 Length ℓ(words) All submissions ℓmin: 25, ℓmax: 425 ¯ℓ: 115.75, s: 61.30 Time spent t < 1 count: 47 ℓmin: 39, ℓmax: 407 ¯ℓ: 113.94, s: 66.24 Time spent t ≥1 count: 353 ℓmin: 25, ℓmax: 425 ¯ℓ: 115.99, s: 60.71 Table 1: Descriptive statistics for 400 deceptive opinion spam submissions gathered using AMT. s corresponds to the sample standard deviation. indicates that any submission found to be of insufficient quality (e.g., written for the wrong hotel, unintelligible, unreasonably short,11 plagiarized,12 etc.) will be rejected. It took approximately 14 days to collect 400 satisfactory deceptive opinions. Descriptive statistics appear in Table 1. Submissions vary quite dramatically both in length, and time spent on the task. Particularly, nearly 12% of the submissions were completed in under one minute. 
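The length comparison reported next can be sketched as follows. This is a rough illustration only: times and lengths are hypothetical arrays of per-submission completion times (in minutes) and word counts, and SciPy's independent two-sample t-test stands in for whatever statistics software was actually used.

import numpy as np
from scipy import stats

def compare_quick_submissions(times, lengths):
    """times: completion times in minutes; lengths: word counts
    (one entry per accepted AMT submission; hypothetical inputs)."""
    times, lengths = np.asarray(times), np.asarray(lengths)
    quick = lengths[times < 1.0]     # submissions finished in under one minute
    rest = lengths[times >= 1.0]
    t_stat, p_value = stats.ttest_ind(quick, rest)   # two-tailed by default
    return quick.mean(), rest.mean(), p_value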
Surprisingly, an independent two-tailed t-test between the mean length of these submissions (¯ℓt<1) and the other submissions (¯ℓt≥1) reveals no significant difference (p = 0.83). We suspect that these “quick” users may have started working prior to having formally accepted the HIT, presumably to circumvent the imposed time limit. Indeed, the quickest submission took just 5 seconds and contained 114 words. 3.2 Truthful opinions from TripAdvisor For truthful opinions, we mine all 6,977 reviews from the 20 most popular Chicago hotels on TripAdvisor. From these we eliminate: • 3,130 non-5-star reviews; • 41 non-English reviews;13 • 75 reviews with fewer than 150 characters since, by construction, deceptive opinions are 11A submission is considered unreasonably short if it contains fewer than 150 characters. 12Submissions are individually checked for plagiarism at http://plagiarisma.net. 13Language is determined using http://tagthe.net. at least 150 characters long (see footnote 11 in Section 3.1); • 1,607 reviews written by first-time authors— new users who have not previously posted an opinion on TripAdvisor—since these opinions are more likely to contain opinion spam, which would reduce the integrity of our truthful review data (Wu et al., 2010). Finally, we balance the number of truthful and deceptive opinions by selecting 400 of the remaining 2,124 truthful reviews, such that the document lengths of the selected truthful reviews are similarly distributed to those of the deceptive reviews. Work by Serrano et al. (2009) suggests that a log-normal distribution is appropriate for modeling document lengths. Thus, for each of the 20 chosen hotels, we select 20 truthful reviews from a log-normal (lefttruncated at 150 characters) distribution fit to the lengths of the deceptive reviews.14 Combined with the 400 deceptive reviews gathered in Section 3.1 this yields our final dataset of 800 reviews. 3.3 Human performance Assessing human deception detection performance is important for several reasons. First, there are few other baselines for our classification task; indeed, related studies (Jindal and Liu, 2008; Mihalcea and Strapparava, 2009) have only considered a random guess baseline. Second, assessing human performance is necessary to validate the deceptive opinions gathered in Section 3.1. If human performance is low, then our deceptive opinions are convincing, and therefore, deserving of further attention. Our initial approach to assessing human performance on this task was with Mechanical Turk. Unfortunately, we found that some Turkers selected among the choices seemingly at random, presumably to maximize their hourly earnings by obviating the need to read the review. While a similar effect has been observed previously (Akkaya et al., 2010), there remains no universal solution. Instead, we solicit the help of three volunteer undergraduate university students to make judgments on a subset of our data. This balanced subset, corresponding to the first fold of our cross-validation 14We use the R package GAMLSS (Rigby and Stasinopoulos, 2005) to fit the left-truncated log-normal distribution. 
312 TRUTHFUL DECEPTIVE Accuracy P R F P R F HUMAN JUDGE 1 61.9% 57.9 87.5 69.7 74.4 36.3 48.7 JUDGE 2 56.9% 53.9 95.0 68.8 78.9 18.8 30.3 JUDGE 3 53.1% 52.3 70.0 59.9 54.7 36.3 43.6 META MAJORITY 58.1% 54.8 92.5 68.8 76.0 23.8 36.2 SKEPTIC 60.6% 60.8 60.0 60.4 60.5 61.3 60.9 Table 2: Performance of three human judges and two meta-judges on a subset of 160 opinions, corresponding to the first fold of our cross-validation experiments in Section 5. Boldface indicates the largest value for each column. experiments described in Section 5, contains all 40 reviews from each of four randomly chosen hotels. Unlike the Turkers, our student volunteers are not offered a monetary reward. Consequently, we consider their judgements to be more honest than those obtained via AMT. Additionally, to test the extent to which the individual human judges are biased, we evaluate the performance of two virtual meta-judges. Specifically, the MAJORITY meta-judge predicts “deceptive” when at least two out of three human judges believe the review to be deceptive, and the SKEPTIC meta-judge predicts “deceptive” when any human judge believes the review to be deceptive. Human and meta-judge performance is given in Table 2. It is clear from the results that human judges are not particularly effective at this task. Indeed, a two-tailed binomial test fails to reject the null hypothesis that JUDGE 2 and JUDGE 3 perform at-chance (p = 0.003, 0.10, 0.48 for the three judges, respectively). Furthermore, all three judges suffer from truth-bias (Vrij, 2008), a common finding in deception detection research in which human judges are more likely to classify an opinion as truthful than deceptive. In fact, JUDGE 2 classified fewer than 12% of the opinions as deceptive! Interestingly, this bias is effectively smoothed by the SKEPTIC meta-judge, which produces nearly perfectly class-balanced predictions. A subsequent reevaluation of human performance on this task suggests that the truth-bias can be reduced if judges are given the class-proportions in advance, although such prior knowledge is unrealistic; and ultimately, performance remains similar to that of Table 2. Inter-annotator agreement among the three judges, computed using Fleiss’ kappa, is 0.11. While there is no precise rule for interpreting kappa scores, Landis and Koch (1977) suggest that scores in the range (0.00, 0.20] correspond to “slight agreement” between annotators. The largest pairwise Cohen’s kappa is 0.12, between JUDGE 2 and JUDGE 3—a value far below generally accepted pairwise agreement levels. We suspect that agreement among our human judges is so low precisely because humans are poor judges of deception (Vrij, 2008), and therefore they perform nearly at-chance respective to one another. 4 Automated Approaches to Deceptive Opinion Spam Detection We consider three automated approaches to detecting deceptive opinion spam, each of which utilizes classifiers (described in Section 4.4) trained on the dataset of Section 3. The features employed by each strategy are outlined here. 4.1 Genre identification Work in computational linguistics has shown that the frequency distribution of part-of-speech (POS) tags in a text is often dependent on the genre of the text (Biber et al., 1999; Rayson et al., 2001). 
In our genre identification approach to deceptive opinion spam detection, we test if such a relationship exists for truthful and deceptive reviews by constructing, for each review, features based on the frequencies of each POS tag.15 These features are also intended to provide a good baseline with which to compare our other automated approaches. 4.2 Psycholinguistic deception detection The Linguistic Inquiry and Word Count (LIWC) software (Pennebaker et al., 2007) is a popular automated text analysis tool used widely in the social sciences. It has been used to detect personality 15We use the Stanford Parser (Klein and Manning, 2003) to obtain the relative POS frequencies. 313 traits (Mairesse et al., 2007), to study tutoring dynamics (Cade et al., 2010), and, most relevantly, to analyze deception (Hancock et al., 2008; Mihalcea and Strapparava, 2009; Vrij et al., 2007). While LIWC does not include a text classifier, we can create one with features derived from the LIWC output. In particular, LIWC counts and groups the number of instances of nearly 4,500 keywords into 80 psychologically meaningful dimensions. We construct one feature for each of the 80 LIWC dimensions, which can be summarized broadly under the following four categories: 1. Linguistic processes: Functional aspects of text (e.g., the average number of words per sentence, the rate of misspelling, swearing, etc.) 2. Psychological processes: Includes all social, emotional, cognitive, perceptual and biological processes, as well as anything related to time or space. 3. Personal concerns: Any references to work, leisure, money, religion, etc. 4. Spoken categories: Primarily filler and agreement words. While other features have been considered in past deception detection work, notably those of Zhou et al. (2004), early experiments found LIWC features to perform best. Indeed, the LIWC2007 software used in our experiments subsumes most of the features introduced in other work. Thus, we focus our psycholinguistic approach to deception detection on LIWC-based features. 4.3 Text categorization In contrast to the other strategies just discussed, our text categorization approach to deception detection allows us to model both content and context with n-gram features. Specifically, we consider the following three n-gram feature sets, with the corresponding features lowercased and unstemmed: UNIGRAMS, BIGRAMS+, TRIGRAMS+, where the superscript + indicates that the feature set subsumes the preceding feature set. 4.4 Classifiers Features from the three approaches just introduced are used to train Na¨ıve Bayes and Support Vector Machine classifiers, both of which have performed well in related work (Jindal and Liu, 2008; Mihalcea and Strapparava, 2009; Zhou et al., 2008). For a document ⃗x, with label y, the Na¨ıve Bayes (NB) classifier gives us the following decision rule: ˆy = arg max c Pr(y = c) · Pr(⃗x | y = c) (1) When the class prior is uniform, for example when the classes are balanced (as in our case), (1) can be simplified to the maximum likelihood classifier (Peng and Schuurmans, 2003): ˆy = arg max c Pr(⃗x | y = c) (2) Under (2), both the NB classifier used by Mihalcea and Strapparava (2009) and the language model classifier used by Zhou et al. (2008) are equivalent. Thus, following Zhou et al. (2008), we use the SRI Language Modeling Toolkit (Stolcke, 2002) to estimate individual language models, Pr(⃗x | y = c), for truthful and deceptive opinions. 
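Under this formulation, classifying a review amounts to scoring it under each class-conditional language model and choosing the class with the higher likelihood, as in Equation (2). The sketch below illustrates that decision rule with a Laplace-smoothed unigram model; it is a deliberate simplification of the higher-order, Kneser-Ney-smoothed models used in the experiments, and truthful_docs and deceptive_docs are hypothetical lists of tokenized training reviews.

import math
from collections import Counter

def train_unigram_lm(docs):
    """Laplace-smoothed unigram language model from tokenized documents."""
    counts = Counter(tok for doc in docs for tok in doc)
    total, vocab = sum(counts.values()), len(counts) + 1   # +1 for unseen tokens
    return lambda tok: (counts[tok] + 1.0) / (total + vocab)

def log_prob(lm, doc):
    return sum(math.log(lm(tok)) for tok in doc)

def make_classifier(truthful_docs, deceptive_docs):
    """truthful_docs / deceptive_docs: lists of tokenized reviews (hypothetical inputs)."""
    lm_t = train_unigram_lm(truthful_docs)
    lm_d = train_unigram_lm(deceptive_docs)
    # Maximum likelihood decision rule: pick the class whose language model
    # assigns the review the higher probability.
    return lambda doc: "deceptive" if log_prob(lm_d, doc) > log_prob(lm_t, doc) else "truthful"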
We consider all three n-gram feature sets, namely UNIGRAMS, BIGRAMS+, and TRIGRAMS+, with corresponding language models smoothed using the interpolated Kneser-Ney method (Chen and Goodman, 1996). We also train Support Vector Machine (SVM) classifiers, which find a high-dimensional separating hyperplane between two groups of data. To simplify feature analysis in Section 5, we restrict our evaluation to linear SVMs, which learn a weight vector ⃗w and bias term b, such that a document ⃗x can be classified by: ˆy = sign(⃗w · ⃗x + b) (3) We use SVMlight (Joachims, 1999) to train our linear SVM models on all three approaches and feature sets described above, namely POS, LIWC, UNIGRAMS, BIGRAMS+, and TRIGRAMS+. We also evaluate every combination of these features, but for brevity include only LIWC+BIGRAMS+, which performs best. Following standard practice, document vectors are normalized to unit-length. For LIWC+BIGRAMS+, we unit-length normalize LIWC and BIGRAMS+ features individually before combining them. 314 TRUTHFUL DECEPTIVE Approach Features Accuracy P R F P R F GENRE IDENTIFICATION POSSVM 73.0% 75.3 68.5 71.7 71.1 77.5 74.2 PSYCHOLINGUISTIC LIWCSVM 76.8% 77.2 76.0 76.6 76.4 77.5 76.9 DECEPTION DETECTION TEXT CATEGORIZATION UNIGRAMSSVM 88.4% 89.9 86.5 88.2 87.0 90.3 88.6 BIGRAMS+ SVM 89.6% 90.1 89.0 89.6 89.1 90.3 89.7 LIWC+BIGRAMS+ SVM 89.8% 89.8 89.8 89.8 89.8 89.8 89.8 TRIGRAMS+ SVM 89.0% 89.0 89.0 89.0 89.0 89.0 89.0 UNIGRAMSNB 88.4% 92.5 83.5 87.8 85.0 93.3 88.9 BIGRAMS+ NB 88.9% 89.8 87.8 88.7 88.0 90.0 89.0 TRIGRAMS+ NB 87.6% 87.7 87.5 87.6 87.5 87.8 87.6 HUMAN / META JUDGE 1 61.9% 57.9 87.5 69.7 74.4 36.3 48.7 JUDGE 2 56.9% 53.9 95.0 68.8 78.9 18.8 30.3 SKEPTIC 60.6% 60.8 60.0 60.4 60.5 61.3 60.9 Table 3: Automated classifier performance for three approaches based on nested 5-fold cross-validation experiments. Reported precision, recall and F-score are computed using a micro-average, i.e., from the aggregate true positive, false positive and false negative rates, as suggested by Forman and Scholz (2009). Human performance is repeated here for JUDGE 1, JUDGE 2 and the SKEPTIC meta-judge, although they cannot be directly compared since the 160-opinion subset on which they are assessed only corresponds to the first cross-validation fold. 5 Results and Discussion The deception detection strategies described in Section 4 are evaluated using a 5-fold nested crossvalidation (CV) procedure (Quadrianto et al., 2009), where model parameters are selected for each test fold based on standard CV experiments on the training folds. Folds are selected so that each contains all reviews from four hotels; thus, learned models are always evaluated on reviews from unseen hotels. Results appear in Table 3. We observe that automated classifiers outperform human judges for every metric, except truthful recall where JUDGE 2 performs best.16 However, this is expected given that untrained humans often focus on unreliable cues to deception (Vrij, 2008). For example, one study examining deception in online dating found that humans perform at-chance detecting deceptive profiles because they rely on text-based cues that are unrelated to deception, such as second-person pronouns (Toma and Hancock, In Press). Among the automated classifiers, baseline performance is given by the simple genre identification approach (POSSVM) proposed in Section 4.1. Surprisingly, we find that even this simple auto16As mentioned in Section 3.3, JUDGE 2 classified fewer than 12% of opinions as deceptive. 
While achieving 95% truthful recall, this judge’s corresponding precision was not significantly better than chance (two-tailed binomial p = 0.4). mated classifier outperforms most human judges (one-tailed sign test p = 0.06, 0.01, 0.001 for the three judges, respectively, on the first fold). This result is best explained by theories of reality monitoring (Johnson and Raye, 1981), which suggest that truthful and deceptive opinions might be classified into informative and imaginative genres, respectively. Work by Rayson et al. (2001) has found strong distributional differences between informative and imaginative writing, namely that the former typically consists of more nouns, adjectives, prepositions, determiners, and coordinating conjunctions, while the latter consists of more verbs,17 adverbs,18 pronouns, and pre-determiners. Indeed, we find that the weights learned by POSSVM (found in Table 4) are largely in agreement with these findings, notably except for adjective and adverb superlatives, the latter of which was found to be an exception by Rayson et al. (2001). However, that deceptive opinions contain more superlatives is not unexpected, since deceptive writing (but not necessarily imaginative writing in general) often contains exaggerated language (Buller and Burgoon, 1996; Hancock et al., 2008). Both remaining automated approaches to detecting deceptive opinion spam outperform the simple 17Past participle verbs were an exception. 18Superlative adverbs were an exception. 315 TRUTHFUL/INFORMATIVE DECEPTIVE/IMAGINATIVE Category Variant Weight Category Variant Weight NOUNS Singular 0.008 VERBS Base -0.057 Plural 0.002 Past tense 0.041 Proper, singular -0.041 Present participle -0.089 Proper, plural 0.091 Singular, present -0.031 ADJECTIVES General 0.002 Third person 0.026 Comparative 0.058 singular, present Superlative -0.164 Modal -0.063 PREPOSITIONS General 0.064 ADVERBS General 0.001 DETERMINERS General 0.009 Comparative -0.035 COORD. CONJ. General 0.094 PRONOUNS Personal -0.098 VERBS Past participle 0.053 Possessive -0.303 ADVERBS Superlative -0.094 PRE-DETERMINERS General 0.017 Table 4: Average feature weights learned by POSSVM. Based on work by Rayson et al. (2001), we expect weights on the left to be positive (predictive of truthful opinions), and weights on the right to be negative (predictive of deceptive opinions). Boldface entries are at odds with these expectations. We report average feature weights of unit-normalized weight vectors, rather than raw weights vectors, to account for potential differences in magnitude between the folds. genre identification baseline just discussed. Specifically, the psycholinguistic approach (LIWCSVM) proposed in Section 4.2 performs 3.8% more accurately (one-tailed sign test p = 0.02), and the standard text categorization approach proposed in Section 4.3 performs between 14.6% and 16.6% more accurately. However, best performance overall is achieved by combining features from these two approaches. Particularly, the combined model LIWC+BIGRAMS+ SVM is 89.8% accurate at detecting deceptive opinion spam.19 Surprisingly, models trained only on UNIGRAMS—the simplest n-gram feature set— outperform all non–text-categorization approaches, and models trained on BIGRAMS+ perform even better (one-tailed sign test p = 0.07). 
This suggests that a universal set of keyword-based deception cues (e.g., LIWC) is not the best approach to detecting deception, and a context-sensitive approach (e.g., BIGRAMS+) might be necessary to achieve state-of-the-art deception detection performance. To better understand the models learned by these automated approaches, we report in Table 5 the top 15 highest weighted features for each class (truthful and deceptive) as learned by LIWC+BIGRAMS+ SVM and LIWCSVM. In agreement with theories of reality monitoring (Johnson and Raye, 1981), we observe that truthful opinions tend to include more sensorial and concrete language than deceptive opinions; in 19The result is not significantly better than BIGRAMS+ SVM. LIWC+BIGRAMS+ SVM LIWCSVM TRUTHFUL DECEPTIVE TRUTHFUL DECEPTIVE chicago hear i ... my number family on hotel allpunct perspron location , and negemo see ) luxury dash pronoun allpunctLIWC experience exclusive leisure floor hilton we exclampunct ( business sexual sixletters the hotel vacation period posemo bathroom i otherpunct comma small spa space cause helpful looking human auxverb $ while past future hotel . husband inhibition perceptual other my husband assent feel Table 5: Top 15 highest weighted truthful and deceptive features learned by LIWC+BIGRAMS+ SVM and LIWCSVM. Ambiguous features are subscripted to indicate the source of the feature. LIWC features correspond to groups of keywords as explained in Section 4.2; more details about LIWC and the LIWC categories are available at http://liwc.net. particular, truthful opinions are more specific about spatial configurations (e.g., small, bathroom, on, location). This finding is also supported by recent work by Vrij et al. (2009) suggesting that liars have considerable difficultly encoding spatial information into their lies. Accordingly, we observe an increased focus in deceptive opinions on aspects external to the hotel being reviewed (e.g., husband, business, 316 vacation). We also acknowledge several findings that, on the surface, are in contrast to previous psycholinguistic studies of deception (Hancock et al., 2008; Newman et al., 2003). For instance, while deception is often associated with negative emotion terms, our deceptive reviews have more positive and fewer negative emotion terms. This pattern makes sense when one considers the goal of our deceivers, namely to create a positive review (Buller and Burgoon, 1996). Deception has also previously been associated with decreased usage of first person singular, an effect attributed to psychological distancing (Newman et al., 2003). In contrast, we find increased first person singular to be among the largest indicators of deception, which we speculate is due to our deceivers attempting to enhance the credibility of their reviews by emphasizing their own presence in the review. Additional work is required, but these findings further suggest the importance of moving beyond a universal set of deceptive language features (e.g., LIWC) by considering both the contextual (e.g., BIGRAMS+) and motivational parameters underlying a deception as well. 6 Conclusion and Future Work In this work we have developed the first large-scale dataset containing gold-standard deceptive opinion spam. With it, we have shown that the detection of deceptive opinion spam is well beyond the capabilities of human judges, most of whom perform roughly at-chance. 
Accordingly, we have introduced three automated approaches to deceptive opinion spam detection, based on insights coming from research in computational linguistics and psychology. We find that while standard n-gram–based text categorization is the best individual detection approach, a combination approach using psycholinguisticallymotivated features and n-gram features can perform slightly better. Finally, we have made several theoretical contributions. Specifically, our findings suggest the importance of considering both the context (e.g., BIGRAMS+) and motivations underlying a deception, rather than strictly adhering to a universal set of deception cues (e.g., LIWC). We have also presented results based on the feature weights learned by our classifiers that illustrate the difficulties faced by liars in encoding spatial information. Lastly, we have discovered a plausible relationship between deceptive opinion spam and imaginative writing, based on POS distributional similarities. Possible directions for future work include an extended evaluation of the methods proposed in this work to both negative opinions, as well as opinions coming from other domains. Many additional approaches to detecting deceptive opinion spam are also possible, and a focus on approaches with high deceptive precision might be useful for production environments. Acknowledgments This work was supported in part by National Science Foundation Grants BCS-0624277, BCS0904822, HSD-0624267, IIS-0968450, and NSCC0904822, as well as a gift from Google, and the Jack Kent Cooke Foundation. We also thank, alphabetically, Rachel Boochever, Cristian DanescuNiculescu-Mizil, Alicia Granstein, Ulrike Gretzel, Danielle Kirshenblat, Lillian Lee, Bin Lu, Jack Newton, Melissa Sackler, Mark Thomas, and Angie Yoo, as well as members of the Cornell NLP seminar group and the ACL reviewers for their insightful comments, suggestions and advice on various aspects of this work. References C. Akkaya, A. Conrad, J. Wiebe, and R. Mihalcea. 2010. Amazon mechanical turk for subjectivity word sense disambiguation. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazons Mechanical Turk, Los Angeles, pages 195–203. D. Biber, S. Johansson, G. Leech, S. Conrad, E. Finegan, and R. Quirk. 1999. Longman grammar of spoken and written English, volume 2. MIT Press. C.F. Bond and B.M. DePaulo. 2006. Accuracy of deception judgments. Personality and Social Psychology Review, 10(3):214. D.B. Buller and J.K. Burgoon. 1996. Interpersonal deception theory. Communication Theory, 6(3):203– 242. W.L. Cade, B.A. Lehman, and A. Olney. 2010. An exploration of off topic conversation. In Human Language Technologies: The 2010 Annual Conference of 317 the North American Chapter of the Association for Computational Linguistics, pages 669–672. Association for Computational Linguistics. S.F. Chen and J. Goodman. 1996. An empirical study of smoothing techniques for language modeling. In Proceedings of the 34th annual meeting on Association for Computational Linguistics, pages 310–318. Association for Computational Linguistics. C. Danescu-Niculescu-Mizil, G. Kossinets, J. Kleinberg, and L. Lee. 2009. How opinions are received by online communities: a case study on amazon.com helpfulness votes. In Proceedings of the 18th international conference on World wide web, pages 141–150. ACM. H. Drucker, D. Wu, and V.N. Vapnik. 2002. Support vector machines for spam categorization. Neural Networks, IEEE Transactions on, 10(5):1048–1054. G. Forman and M. 
Scholz. 2009. Apples-to-Apples in Cross-Validation Studies: Pitfalls in Classifier Performance Measurement. ACM SIGKDD Explorations, 12(1):49–57. Z. Gy¨ongyi, H. Garcia-Molina, and J. Pedersen. 2004. Combating web spam with trustrank. In Proceedings of the Thirtieth international conference on Very large data bases-Volume 30, pages 576–587. VLDB Endowment. J.T. Hancock, L.E. Curry, S. Goorha, and M. Woodworth. 2008. On lying and being lied to: A linguistic analysis of deception in computer-mediated communication. Discourse Processes, 45(1):1–23. J. Jansen. 2010. Online product research. Pew Internet & American Life Project Report. N. Jindal and B. Liu. 2008. Opinion spam and analysis. In Proceedings of the international conference on Web search and web data mining, pages 219–230. ACM. T. Joachims. 1998. Text categorization with support vector machines: Learning with many relevant features. Machine Learning: ECML-98, pages 137–142. T. Joachims. 1999. Making large-scale support vector machine learning practical. In Advances in kernel methods, page 184. MIT Press. M.K. Johnson and C.L. Raye. 1981. Reality monitoring. Psychological Review, 88(1):67–85. S.M. Kim, P. Pantel, T. Chklovski, and M. Pennacchiotti. 2006. Automatically assessing review helpfulness. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 423– 430. Association for Computational Linguistics. D. Klein and C.D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting on Association for Computational LinguisticsVolume 1, pages 423–430. Association for Computational Linguistics. J.R. Landis and G.G. Koch. 1977. The measurement of observer agreement for categorical data. Biometrics, 33(1):159. E.P. Lim, V.A. Nguyen, N. Jindal, B. Liu, and H.W. Lauw. 2010. Detecting product review spammers using rating behaviors. In Proceedings of the 19th ACM international conference on Information and knowledge management, pages 939–948. ACM. S.W. Litvin, R.E. Goldsmith, and B. Pan. 2008. Electronic word-of-mouth in hospitality and tourism management. Tourism management, 29(3):458–468. F. Mairesse, M.A. Walker, M.R. Mehl, and R.K. Moore. 2007. Using linguistic cues for the automatic recognition of personality in conversation and text. Journal of Artificial Intelligence Research, 30(1):457–500. R. Mihalcea and C. Strapparava. 2009. The lie detector: Explorations in the automatic recognition of deceptive language. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pages 309–312. Association for Computational Linguistics. M.L. Newman, J.W. Pennebaker, D.S. Berry, and J.M. Richards. 2003. Lying words: Predicting deception from linguistic styles. Personality and Social Psychology Bulletin, 29(5):665. A. Ntoulas, M. Najork, M. Manasse, and D. Fetterly. 2006. Detecting spam web pages through content analysis. In Proceedings of the 15th international conference on World Wide Web, pages 83–92. ACM. M.P. O’Mahony and B. Smyth. 2009. Learning to recommend helpful hotel reviews. In Proceedings of the third ACM conference on Recommender systems, pages 305–308. ACM. F. Peng and D. Schuurmans. 2003. Combining naive Bayes and n-gram language models for text classification. Advances in Information Retrieval, pages 547– 547. J.W. Pennebaker, C.K. Chung, M. Ireland, A. Gonzales, and R.J. Booth. 2007. The development and psychometric properties of LIWC2007. Austin, TX, LIWC. Net. N. Quadrianto, A.J. Smola, T.S. Caetano, and Q.V. Le. 2009. 
Estimating labels from label proportions. The Journal of Machine Learning Research, 10:2349– 2374. P. Rayson, A. Wilson, and G. Leech. 2001. Grammatical word class variation within the British National Corpus sampler. Language and Computers, 36(1):295– 306. R.A. Rigby and D.M. Stasinopoulos. 2005. Generalized additive models for location, scale and shape. Journal of the Royal Statistical Society: Series C (Applied Statistics), 54(3):507–554. 318 F. Sebastiani. 2002. Machine learning in automated text categorization. ACM computing surveys (CSUR), 34(1):1–47. M. ´A. Serrano, A. Flammini, and F. Menczer. 2009. Modeling statistical properties of written text. PloS one, 4(4):5372. A. Stolcke. 2002. SRILM-an extensible language modeling toolkit. In Seventh International Conference on Spoken Language Processing, volume 3, pages 901– 904. Citeseer. C. Toma and J.T. Hancock. In Press. What Lies Beneath: The Linguistic Traces of Deception in Online Dating Profiles. Journal of Communication. A. Vrij, S. Mann, S. Kristen, and R.P. Fisher. 2007. Cues to deception and ability to detect lies as a function of police interview styles. Law and human behavior, 31(5):499–518. A. Vrij, S. Leal, P.A. Granhag, S. Mann, R.P. Fisher, J. Hillman, and K. Sperry. 2009. Outsmarting the liars: The benefit of asking unanticipated questions. Law and human behavior, 33(2):159–166. A. Vrij. 2008. Detecting lies and deceit: Pitfalls and opportunities. Wiley-Interscience. W. Weerkamp and M. De Rijke. 2008. Credibility improves topical blog post retrieval. ACL-08: HLT, pages 923–931. M. Weimer, I. Gurevych, and M. M¨uhlh¨auser. 2007. Automatically assessing the post quality in online discussions on software. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, pages 125–128. Association for Computational Linguistics. G. Wu, D. Greene, B. Smyth, and P. Cunningham. 2010. Distortion as a validation criterion in the identification of suspicious reviews. Technical report, UCD-CSI2010-04, University College Dublin. K.H. Yoo and U. Gretzel. 2009. Comparison of Deceptive and Truthful Travel Reviews. Information and Communication Technologies in Tourism 2009, pages 37–47. L. Zhou, J.K. Burgoon, D.P. Twitchell, T. Qin, and J.F. Nunamaker Jr. 2004. A comparison of classification methods for predicting deception in computermediated communication. Journal of Management Information Systems, 20(4):139–166. L. Zhou, Y. Shi, and D. Zhang. 2008. A Statistical Language Modeling Approach to Online Deception Detection. IEEE Transactions on Knowledge and Data Engineering, 20(8):1077–1081. 319
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 320–330, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Joint Bilingual Sentiment Classification with Unlabeled Parallel Corpora Bin Lu1,3*, Chenhao Tan2, Claire Cardie2 and Benjamin K. Tsou3,1 1 Department of Chinese, Translation and Linguistics, City University of Hong Kong, Hong Kong 2 Department of Computer Science, Cornell University, Ithaca, NY, USA 3 Research Centre on Linguistics and Language Information Sciences, Hong Kong Institute of Education, Hong Kong [email protected], {chenhao, cardie}@cs.cornell.edu, [email protected] Abstract Most previous work on multilingual sentiment analysis has focused on methods to adapt sentiment resources from resource-rich languages to resource-poor languages. We present a novel approach for joint bilingual sentiment classification at the sentence level that augments available labeled data in each language with unlabeled parallel data. We rely on the intuition that the sentiment labels for parallel sentences should be similar and present a model that jointly learns improved monolingual sentiment classifiers for each language. Experiments on multiple data sets show that the proposed approach (1) outperforms the monolingual baselines, significantly improving the accuracy for both languages by 3.44%-8.12%; (2) outperforms two standard approaches for leveraging unlabeled data; and (3) produces (albeit smaller) performance gains when employing pseudo-parallel data from machine translation engines. 1 Introduction The field of sentiment analysis has quickly attracted the attention of researchers and practitioners alike (e.g. Pang et al., 2002; Turney, 2002; Hu and Liu, 2004; Wiebe et al., 2005; Breck et al., 2007; Pang and Lee, 2008). 1 Indeed, sentiment analysis systems, which mine opinions from textual sources (e.g. news, blogs, and reviews), can be used in a wide variety of *The work was conducted when the first author was visiting Cornell University. applications, including interpreting product reviews, opinion retrieval and political polling. Not surprisingly, most methods for sentiment classification are supervised learning techniques, which require training data annotated with the appropriate sentiment labels (e.g. document-level or sentence-level positive vs. negative polarity). This data is difficult and costly to obtain, and must be acquired separately for each language under consideration. Previous work in multilingual sentiment analysis has therefore focused on methods to adapt sentiment resources (e.g. lexicons) from resourcerich languages (typically English) to other languages, with the goal of transferring sentiment or subjectivity analysis capabilities from English to other languages (e.g. Mihalcea et al. (2007); Banea et al. (2008; 2010); Wan (2008; 2009); Prettenhofer and Stein (2010)). In recent years, however, sentiment-labeled data is gradually becoming available for languages other than English (e.g. Seki et al. (2007; 2008); Nakagawa et al. (2010); Schulz et al. (2010)). In addition, there is still much room for improvement in existing monolingual (including English) sentiment classifiers, especially at the sentence level (Pang and Lee, 2008). This paper tackles the task of bilingual sentiment analysis. 
In contrast to previous work, we (1) assume that some amount of sentimentlabeled data is available for the language pair under study, and (2) investigate methods to simultaneously improve sentiment classification for both languages. Given the labeled data in each language, we propose an approach that exploits an unlabeled parallel corpus with the following 320 intuition: two sentences or documents that are parallel (i.e. translations of one another) should exhibit the same sentiment — their sentiment labels (e.g. polarity, subjectivity, intensity) should be similar. The proposed maximum entropy-based EM approach jointly learns two monolingual sentiment classifiers by treating the sentiment labels in the unlabeled parallel text as unobserved latent variables, and maximizes the regularized joint likelihood of the language-specific labeled data together with the inferred sentiment labels of the parallel text. Although our approach should be applicable at the document-level and for additional sentiment tasks, we focus on sentence-level polarity classification in this work. We evaluate our approach for English and Chinese on two dataset combinations (see Section 4) and find that the proposed approach outperforms the monolingual baselines (i.e. maximum entropy and SVM classifiers) as well as two alternative methods for leveraging unlabeled data (transductive SVMs (Joachims, 1999b) and cotraining (Blum and Mitchell, 1998)). Accuracy is significantly improved for both languages, by 3.44%-8.12%. We furthermore find that improvements, albeit smaller, are obtained when the parallel data is replaced with a pseudo-parallel (i.e. automatically translated) corpus. To our knowledge, this is the first multilingual sentiment analysis study to focus on methods for simultaneously improving sentiment classification for a pair of languages based on unlabeled data rather than resource adaptation from one language to another. The rest of the paper is organized as follows. Section 2 introduces related work. In Section 3, the proposed joint model is described. Sections 4 and 5, respectively, provide the experimental setup and results; the conclusion (Section 6) follows. 2 Related Work Multilingual Sentiment Analysis. There is a growing body of work on multilingual sentiment analysis. Most approaches focus on resource adaptation from one language (usually English) to other languages with few sentiment resources. Mihalcea et al. (2007), for example, generate subjectivity analysis resources in a new language from English sentiment resources by leveraging a bilingual dictionary or a parallel corpus. Banea et al. (2008; 2010) instead automatically translate the English resources using automatic machine translation engines for subjectivity classification. Prettenhofer and Stein (2010) investigate crosslingual sentiment classification from the perspective of domain adaptation based on structural correspondence learning (Blitzer et al., 2006). Approaches that do not explicitly involve resource adaptation include Wan (2009), which uses co-training (Blum and Mitchell, 1998) with English vs. Chinese features comprising the two independent ―views‖ to exploit unlabeled Chinese data and a labeled English corpus and thereby improves Chinese sentiment classification. 
Another notable approach is the work of BoydGraber and Resnik (2010), which presents a generative model --- supervised multilingual latent Dirichlet allocation --- that jointly models topics that are consistent across languages, and employs them to better predict sentiment ratings. Unlike the methods described above, we focus on simultaneously improving the performance of sentiment classification in a pair of languages by developing a model that relies on sentimentlabeled data in each language as well as unlabeled parallel text for the language pair. Semi-supervised Learning. Another line of related work is semi-supervised learning, which combines labeled and unlabeled data to improve the performance of the task of interest (Zhu and Goldberg, 2009). Among the popular semisupervised methods (e.g. EM on Naïve Bayes (Nigam et al., 2000), co-training (Blum and Mitchell, 1998), transductive SVMs (Joachims, 1999b), and co-regularization (Sindhwani et al., 2005; Amini et al., 2010)), our approach employs the EM algorithm, extending it to the bilingual case based on maximum entropy. We compare to co-training and transductive SVMs in Section 5. Multilingual NLP for Other Tasks. Finally, there exists related work using bilingual resources to help other NLP tasks, such as word sense disambiguation (e.g. Ido and Itai (1994)), parsing (e.g. Burkett and Klein (2008); Zhao et al. (2009); Burkett et al. (2010)), information retrieval (Gao et al., 2009), named entity detection (Burkett et al., 2010); topic extraction (e.g. Zhang et al., 2010), text classification (e.g. Amini et al., 2010), and hyponym-relation acquisition (e.g. Oh et al., 2009). 321 In these cases, multilingual models increase performance because different languages contain different ambiguities and therefore present complementary views on the shared underlying labels. Our work shares a similar motivation. 3 A Joint Model with Unlabeled Parallel Text We propose a maximum entropy-based statistical model. Maximum entropy (MaxEnt) models1 have been widely used in many NLP tasks (Berger et al., 1996; Ratnaparkhi, 1997; Smith, 2006). The models assign the conditional probability of the label given the observation as follows: (1) where is a real-valued vector of feature weights and is a feature function that maps pairs to a nonnegative real-valued feature vector. Each feature has an associated parameter, , which is called its weight; and is the corresponding normalization factor. Maximum likelihood parameter estimation (training) for such a model, with a set of labeled examples , amounts to solving the following optimization problem: (2) 3.1 Problem Definition Given two languages and , suppose we have two distinct (i.e. not parallel) sets of sentimentlabeled data, and written in and respectively. In addition, we have unlabeled (w.r.t. sentiment) bilingual (in and ) parallel data that are defined as follows. where denotes the polarity of the -th instance (positive or negative); and are respectively the numbers of labeled instances in and ; and are parallel instances in and , respectively (i.e. they are supposed to be 1They are sometimes referred to as log-linear models, but also known as exponential models, generalized linear models, or logistic regression. translations of one another), whose labels and are unobserved, but according to the intuition outlined in Section 1, should be similar. Given the input data and , our task is to jointly learn two monolingual sentiment classifiers — one for and one for . 
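Equations (1) and (2) above follow the standard maximum entropy formulation; in the usual notation, with feature-weight vector \lambda, feature function f, normalization factor Z_\lambda(x), and labeled examples \{(x_i, y_i)\}_{i=1}^{N}, that formulation reads:

p_\lambda(y \mid x) = \frac{\exp\bigl(\lambda \cdot f(x, y)\bigr)}{Z_\lambda(x)},
\qquad
Z_\lambda(x) = \sum_{y'} \exp\bigl(\lambda \cdot f(x, y')\bigr)

\hat{\lambda} = \arg\max_{\lambda} \sum_{i=1}^{N} \log p_\lambda(y_i \mid x_i)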
With MaxEnt, we learn from the input data: where and are the vectors of feature weights for and , respectively (for brevity we denote them as and in the remaining sections). In this study, we focus on sentence-level sentiment classification, i.e. each is a sentence, and and are parallel sentences. 3.2 The Joint Model Given the problem definition above, we now present a novel model to exploit the correspondence of parallel sentences in unlabeled bilingual text. The model maximizes the following joint likelihood with respect to and : (3) where denotes or ; the first term on the right-hand side is the likelihood of labeled data for both and ; and the second term is the likelihood of the unlabeled parallel data . If we assume that parallel sentences are perfect translations, the two sentences in each pair should have the same polarity label, which gives us: (4) where is the unobserved class label for the -th instance in the unlabeled data. This probability directly models the sentiment label agreement between and . However, there could be considerable noise in real-world parallel data, i.e. the sentence pairs may be noisily parallel (or even comparable) instead of fully parallel (Munteanu and Marcu, 2005). In such noisy cases, the labels (positive or negative) could be different for the two monolingual sentences in a sentence pair. Although we do not know the exact probability that a sentence pair exhibits the same label, we can approximate it using their translation 322 probabilities, which can be computed using word alignment toolkits such as Giza++ (Och and Ney, 2003) or the Berkeley word aligner (Liang et al., 2006). The intuition here is that if the translation probability of two sentences is high, the probability that they have the same sentiment label should be high as well. Therefore, by considering the noise in parallel data, we get: (5) where is the translation probability of the -th sentence pair in ;2 is the opposite of ; the first term models the probability that and have the same label; and the second term models the probability that they have different labels. By further considering the weight to ascribe to the unlabeled data vs. the labeled data (and the weight for the L2-norm regularization), we get the following regularized joint log likelihood to be maximized: (6) where the first term on the right-hand side is the log likelihood of the labeled data from both and the second is the log likelihood of the unlabeled parallel data , multiplied by , a constant that controls the contribution of the unlabeled data; and is a regularization constant that penalizes model complexity or large feature weights. When is 0, the algorithm ignores the unlabeled data and degenerates to two MaxEnt models trained on only the labeled data. 3.3 The EM Algorithm on MaxEnt To solve the optimization problem for the model, we need to jointly estimate the optimal parameters for the two monolingual classifiers by finding: (7) This can be done with an EM algorithm, whose steps are summarized in Algorithm 1. First, the MaxEnt parameters, and , are estimated from 2The probability should be rescaled within the range of [0, 1], where 0.5 means that we are completely unsure if the sentences are translations of each other or not, and only those translation pairs with a probability larger than 0.5 are meaningful for our purpose. just the labeled data. 
Then, in the E-step, the classifiers, based on current values of and , compute for each labeled example and assign probabilistically-weighted class labels to each unlabeled example. Next, in the M-step, the parameters, and , are updated using both the original labeled data ( and ) and the newly labeled data . These last two steps are iterated until convergence or a predefined iteration limit . Algorithm 1. The MaxEnt-based EM Algorithm for Multilingual Sentiment Classification Input: Labeled data and Unlabeled parallel data Output: Two monolingual MaxEnt classifiers with parameters and , respectively 1. Train two initial monolingual models Train and initialize and on the labeled data 2. Jointly optimize two monolingual models for to do // T: number of iterations E-Step: Compute for each example in , and based on and ; Compute the expectation of the log likelihood with respect to ; M-Step: Find and by maximizing the regularized joint log likelihood; Convergence: If the increase of the joint log likelihood is sufficiently small, break; end for 3. Output as s, and as In the M-step, we can optimize the regularized joint log likelihood using any gradient-based optimization technique (Malouf, 2002). The gradient for Equation 3 based on Equation 4 is shown in Appendix A; those for Equations 5 and 6 can be derived similarly. In our experiments, we use the L-BFGS algorithm (Liu et al., 1989) and run EM until the change in regularized joint log likelihood is less than 1e-5 or we reach 100 iterations.3 3Since the EM-based algorithm may find a local maximum of the objective function, the initialization of the parameters is important. Our experiments show that an effective maximum can usually be found by initializing the parameters with those learned from the labeled data; performance would be much worse if we initialize all the parameters to 0 or 1. 323 3.4 Pseudo-Parallel Labeled and Unlabeled Data We also consider the case where a parallel corpus is not available: to obtain a pseudo-parallel corpus (i.e. sentences in one language with their corresponding automatic translations), we use an automatic machine translation system (e.g. Google machine translation 4 ) to translate unlabeled indomain data from to or vice versa. Since previous work (Banea et al., 2008; 2010; Wan, 2009) has shown that it could be useful to automatically translate the labeled data from the source language into the target language, we can further incorporate such translated labeled data into the joint model by adding the following component into Equation 6: (8) where is the alternative class of , is the automatically translated example from ; and is a constant that controls the weight of the translated labeled data. 4 Experimental Setup 4.1 Data Sets and Preprocessing The following labeled datasets are used in our experiments. MPQA (Labeled English Data): The MultiPerspective Question Answering (MPQA) corpus (Wiebe et al., 2005) consists of newswire documents manually annotated with phrase-level subjectivity information. We extract all sentences containing strong (i.e. intensity is medium or higher), sentiment-bearing (i.e. polarity is positive or negative) expressions following Choi and Cardie (2008). Sentences with both positive and negative strong expressions are then discarded, and the polarity of each remaining sentence is set to that of its sentiment-bearing expression(s). 
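Returning to Algorithm 1 for a moment, the loop below sketches one E/M iteration in code: the E-step computes a soft label distribution for each unlabeled pair under the current parameters, and the M-step takes a gradient step on the labeled data plus the softly labeled pairs. It reuses probs() from the earlier sketch; the single gradient step, the learning rate, and the data layout are illustrative simplifications of the L-BFGS-based M-step described above, not the authors' implementation.

def em_step(lam1, lam2, L1, L2, U, alpha=1.0, reg=1e-3, lr=0.05, n_labels=2):
    # One E/M iteration of Algorithm 1.  L1, L2: lists of (vector, label)
    # pairs for each language; U: list of (vector_lang1, vector_lang2) pairs.
    d1, d2 = L1[0][0].shape[0], L2[0][0].shape[0]
    # E-step: soft labels q(c) proportional to p1(c | x1) * p2(c | x2).
    Q = []
    for x1, x2 in U:
        q = probs(lam1, x1, n_labels) * probs(lam2, x2, n_labels)
        Q.append(q / q.sum())
    # M-step: gradient of the labeled log likelihood plus the expected
    # log likelihood of the softly labeled pairs, with an L2 penalty.
    def grad(lam, labeled, side, d):
        g = -reg * lam
        for x, y in labeled:
            p = probs(lam, x, n_labels)
            for c in range(n_labels):
                g[c * d:(c + 1) * d] += ((c == y) - p[c]) * x
        for pair, q in zip(U, Q):
            x = pair[side]
            p = probs(lam, x, n_labels)
            for c in range(n_labels):
                g[c * d:(c + 1) * d] += alpha * (q[c] - p[c]) * x
        return g
    lam1 = lam1 + lr * grad(lam1, L1, 0, d1)
    lam2 = lam2 + lr * grad(lam2, L2, 1, d2)
    return lam1, lam2

Iterating em_step until the regularized joint log likelihood stops improving (or an iteration cap is reached) mirrors step 2 of Algorithm 1.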
NTCIR-EN (Labeled English Data) and NTCIR-CH (Labeled Chinese Data): The NTCIR Opinion Analysis task (Seki et al., 2007; 2008) provides sentiment-labeled news data in Chinese, Japanese and English. Only those sentences with a polarity label (positive or negative) agreed to by at least two annotators are extracted. We use the Chinese data from NTCIR-6 4http://translate.google.com/ as our Chinese labeled data. Since far fewer sentences in the English data pass the annotator agreement filter, we combine the English data from NTCIR-6 and NTCIR-7. The Chinese sentences are segmented using the Stanford Chinese word segmenter (Tseng et al., 2005). The number of sentences in each of these datasets is shown in Table 1. In our experiments, we evaluate two settings of the data: (1) MPQA+NTCIR-CH, and (2) NTCIR-EN+NTCIRCH. In each setting, the English labeled data constitutes and the Chinese labeled data, . MPQA NTCIR-EN NTCIR-CH Positive 1,471 (30%) 528 (30%) 2,378 (55%) Negative 3,487 (70%) 1,209 (70%) 1,916 (45%) Total 4,958 1,737 4,294 Table 1: Sentence Counts for the Labeled Data Unlabeled Parallel Text and its Preprocessing. For the unlabeled parallel text, we use the ISI Chinese-English parallel corpus (Munteanu and Marcu, 2005), which was extracted automatically from news articles published by Xinhua News Agency in the Chinese Gigaword (2nd Edition) and English Gigaword (2nd Edition) collections. Because sentence pairs in the ISI corpus are quite noisy, we rely on Giza++ (Och and Ney, 2003) to obtain a new translation probability for each sentence pair, and select the 100,000 pairs with the highest translation probabilities.5 We also try to remove neutral sentences from the parallel data since they can introduce noise into our model, which deals only with positive and negative examples. To do this, we train a single classifier from the combined Chinese and English labeled data for each data setting above by concatenating the original English and Chinese feature sets. We then classify each unlabeled sentence pair by combining the two sentences in each pair into one. We choose the most confidently predicted 10,000 positive and 10,000 negative pairs to constitute the unlabeled parallel corpus for each data setting. 5We removed sentence pairs with an original confidence score (given in the corpus) smaller than 0.98, and also removed the pairs that are too long (more than 60 characters in one sentence) to facilitate Giza++. We first obtain translation probabilities for both directions (i.e. Chinese to English and English to Chinese) with Giza++, take the log of the product of those two probabilities, and then divide it by the sum of lengths of the two sentences in each pair. 324 4.2 Baseline Methods In our experiments, the proposed joint model is compared with the following baseline methods. MaxEnt: This method learns a MaxEnt classifier for each language given the monolingual labeled data; the unlabeled data is not used. SVM: This method learns an SVM classifier for each language given the monolingual labeled data; the unlabeled data is not used. SVM-light (Joachims, 1999a) is used for all the SVM-related experiments. Monolingual TSVM (TSVM-M): This method learns two transductive SVM (TSVM) classifiers given the monolingual labeled data and the monolingual unlabeled data for each language. 
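Footnote 5 above describes how candidate pairs are scored before keeping the 100,000 best: the log of the product of the two directional translation probabilities, divided by the total length of the pair. A small sketch of that score is given below; the directional probabilities are assumed to come from a word-alignment toolkit, and the field names in the usage comment are illustrative.

import math

def pair_score(p_fwd, p_bwd, len_src, len_tgt):
    # Length-normalized log of the product of the two directional
    # translation probabilities, as described in footnote 5.
    return math.log(p_fwd * p_bwd) / (len_src + len_tgt)

# Illustrative usage, assuming each candidate pair is a dict with these fields:
# top = sorted(pairs, key=lambda c: pair_score(c["p_ce"], c["p_ec"],
#                                              c["len_c"], c["len_e"]),
#              reverse=True)[:100000]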
Bilingual TSVM (TSVM-B): This method learns one TSVM classifier given the labeled training data in two languages together with the unlabeled sentences by combining the two sentences in each unlabeled pair into one. We expect this method to perform better than TSVMM since the combined (bilingual) unlabeled sentences could be more helpful than the unlabeled monolingual sentences. Co-Training with SVMs (Co-SVM): This method applies SVM-based co-training given both the labeled training data and the unlabeled parallel data following Wan (2009). First, two monolingual SVM classifiers are built based on only the corresponding labeled data, and then they are bootstrapped by adding the most confident predicted examples from the unlabeled data into the training set. We run bootstrapping for 100 iterations. In each iteration, we select the most confidently predicted 50 positive and 50 negative sentences from each of the two classifiers, and take the union of the resulting 200 sentence pairs as the newly labeled training data. (Examples with conflicting labels within the pair are not included.) 5 Results and Analysis In our experiments, the methods are tested in the two data settings with the corresponding unlabeled parallel corpus as mentioned in Section 4.6 We use 6 The results reported in this section employ Equation 4. Preliminary experiments showed that Equation 5 does not significantly improve the performance in our case, which is reasonable since we choose only sentence pairs with the highest translation probabilities to be our unlabeled data (see Section 4.1). 5-fold cross-validation and report average accuracy (also MicroF1 in this case) and MacroF1 scores. Unigrams are used as binary features for all models, as Pang et al. (2002) showed that binary features perform better than frequency features for sentiment classification. The weights for unlabeled data and regularization, and , are set to 1 unless otherwise stated. Later, we will show that the proposed approach performs well with a wide range of parameter values.7 5.1 Method Comparison We first compare the proposed joint model (Joint) with the baselines in Table 2. As seen from the table, the proposed approach outperforms all five baseline methods in terms of both accuracy and MacroF1 for both English and Chinese and in both of the data settings. 8 By making use of the unlabeled parallel data, our proposed approach improves the accuracy, compared to MaxEnt, by 8.12% (or 33.27% error reduction) on English and 3.44% (or 16.92% error reduction) on Chinese in the first setting, and by 5.07% (or 19.67% error reduction) on English and 3.87% (or 19.4% error reduction) on Chinese in the second setting. Among the baselines, the best is Co-SVM; TSVMs do not always improve performance using the unlabeled data compared to the standalone SVM; and TSVM-B outperforms TSVM-M except for Chinese in the second setting. The MPQA data is more difficult in general compared to the NTCIR data. Without unlabeled parallel data, the performance on the Chinese data is better than on the English data, which is consistent with results reported in NTCIR-6 (Seki et al., 2007). Overall, the unlabeled parallel data improves classification accuracy for both languages when using our proposed joint model and Co-SVM. 
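As a rough illustration of the Co-SVM baseline's selection step described above (one bootstrapping round that keeps the 50 most confident positive and 50 most confident negative pairs from each classifier, takes the union, and drops conflicting labels), a sketch is given below. It assumes each classifier exposes a signed decision score per unlabeled pair id; it is a simplified reading of the procedure, not Wan (2009)'s implementation.

def cotrain_select(scores_en, scores_ch, k=50):
    # scores_en / scores_ch: dict mapping pair id -> signed decision score.
    # Returns {pair id: label}, where 1 = positive and 0 = negative.
    def top(scores, sign):
        ranked = sorted(scores.items(), key=lambda kv: sign * kv[1], reverse=True)
        return [(i, 1 if sign > 0 else 0) for i, _ in ranked[:k]]
    picked = {}
    for scores in (scores_en, scores_ch):
        for i, label in top(scores, +1) + top(scores, -1):
            if i in picked and picked[i] != label:
                picked[i] = None          # conflicting labels: discard below
            elif i not in picked:
                picked[i] = label
    return {i: lab for i, lab in picked.items() if lab is not None}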
The joint model makes better use of the unlabeled parallel data than Co-SVM or TSVMs presumably because of its attempt to jointly optimize the two monolingual models via soft (probabilistic) assignments of the unlabeled instances to classes in each iteration, instead of the hard assignments in Co-SVM and TSVMs. Although English sentiment 7The code is at http://sites.google.com/site/lubin2010. 8Significance is tested using paired t-tests with <0.05: € denotes statistical significance compared to the corresponding performance of MaxEnt; * denotes statistical significance compared to SVM; and Γ denotes statistical significance compared to Co-SVM. 325 classification alone is more difficult than Chinese for our datasets, we obtain greater performance gains for English by exploiting unlabeled parallel data as well as the Chinese labeled data. 5.2 Varying the Weight and Amount of Unlabeled Data Figure 1 shows the accuracy curve of the proposed approach for the two data settings when varying the weight for the unlabeled data, , from 0 to 1. When is set to 0, the joint model degenerates to two MaxEnt models trained with only the labeled data. We can see that the performance gains for the proposed approach are quite remarkable even when is set to 0.1; performance is largely stable after reaches 0.4. Although MPQA is more difficult in general compared to the NTCIR data, we still see steady improvements in performance with unlabeled parallel data. Overall, the proposed approach performs quite well for a wide range of parameter values of . Figure 2 shows the accuracy curve of the proposed approach for the two data settings when varying the amount of unlabeled data from 0 to 20,000 instances. We see that the performance of the proposed approach improves steadily by adding more and more unlabeled data. However, even with only 2,000 unlabeled sentence pairs, the proposed approach still produces large performance gains. 5.3 Results on Pseudo-Parallel Unlabeled Data As discussed in Section 3.4, we generate pseudoparallel data by translating the monolingual sentences in each setting using Google’s machine translation system. Figures 3 and 4 show the performance of our model using the pseudoparallel data versus the real parallel data, in the two settings, respectively. The EN->CH pseudoparallel data consists of the English unlabeled data and its automatic Chinese translation, and vice versa. Although not as significant as those with parallel data, we can still obtain improvements using the pseudo-parallel data, especially in the first setting. The difference between using parallel versus pseudo-parallel data is around 2-4% in Figures 3 and 4, which is reasonable since the quality of the pseudo-parallel data is not as good as that of the parallel data. Therefore, the performance using pseudo-parallel data is better with a small weight (e.g. = 0.1) in some cases. Setting 1: NTCIR-EN+NTCIR-CH Setting 2: MPQA+NTCIR-CH Accuracy MacroF1 Accuracy MacroF1 English Chinese English Chinese English Chinese English Chinese MaxEnt 75.59 79.67 66.61* 79.34 74.22 79.67 65.09* 79.34 SVM 76.34 81.02 61.12 80.75€ 76.74€ 81.02 61.35 80.75€ TSVM-M 73.46 80.21 55.33 79.99 72.89 81.14 52.82 79.99 TSVM-B 78.36 81.60€ 65.53 81.42 76.42€ 78.51 61.66 78.32 Co-SVM 82.44€* 82.79€ 72.61€* 82.67€* 78.18€* 82.63€* 68.03€* 82.51€* Joint 83.71€* 83.11€* 75.89€*Γ 82.97€* 79.29€*Γ 83.54€* 72.58€*Γ 83.37€* Table 2: Comparison of Results Figure 1. Accuracy vs. Weight of Unlabeled Data Figure 2. Accuracy vs. 
Amount of Unlabeled Data
(Figures 1 and 2 plot accuracy (%) for English and Chinese in the NTCIR-EN+NTCIR-CH and MPQA+NTCIR-CH settings, against the weight and the amount of the unlabeled data, respectively.)

5.4 Adding Pseudo-Parallel Labeled Data

In this section, we investigate how adding automatically translated labeled data might influence the performance, as mentioned in Section 3.4. We use only the translated labeled data to train classifiers, and then directly classify the test data. The average accuracies in setting 1 are 66.61% and 63.11% on English and Chinese, respectively; while the accuracies in setting 2 are 58.43% and 54.07% on English and Chinese, respectively. This result is reasonable because of the language gap between the original language and the translated language. In addition, the class distributions of the English labeled data and the Chinese labeled data are quite different (30% vs. 55% for positive, as shown in Table 1).

Figures 5 and 6 show the accuracies when varying the weight of the translated labeled data vs. the labeled data, with and without the unlabeled parallel data. From Figure 5 for setting 1, we can see that the translated data can be helpful given the labeled data and even the unlabeled data, as long as its weight is small; while in Figure 6, the translated data decreases the performance in most cases for setting 2. One possible reason is that in the first data setting, the NTCIR English data covers the same topics as the NTCIR Chinese data and thus direct translation is helpful, while the English and Chinese topics are quite different in the second data setting, and thus direct translation hurts the performance given the existing labeled data in each language.

5.5 Discussion

To further understand what contributions our proposed approach makes to the performance gain, we look inside the parameters in the MaxEnt models learned before and after adding the parallel unlabeled data. Table 3 shows the features in the model learned from the labeled data that have the largest weight change after adding the parallel data, and Table 4 shows the newly learned features from the unlabeled data with the largest weights.

Figure 3. Accuracy with Pseudo-Parallel Unlabeled Data in Setting 1. Figure 4. Accuracy with Pseudo-Parallel Unlabeled Data in Setting 2. Figure 5. Accuracy with Pseudo-Parallel Labeled Data in Setting 1. Figure 6. Accuracy with Pseudo-Parallel Labeled Data in Setting 2.
Positive                    Negative
Word          Weight        Word              Weight
friendly      0.701         german            0.783
principles    0.684         arduous           0.531
hopes         0.630         oppose            0.511
hoped         0.553         administrations   0.431
cooperative   0.552         oau9              0.408
Table 4. New Features Learned from Unlabeled Data

9 This is an abbreviation for the Organization of African Unity.

From Table 3,10 we can see that the weight changes of the original features are quite reasonable, e.g. the top words in the positive class are obviously positive and the proposed approach gives them higher weights. The new features also seem reasonable given the knowledge that the labeled and unlabeled data include negative news about specific topics (e.g. Germany, Taiwan).

10 The features and weights in Tables 3 and 4 are extracted from the English model in the first fold of setting 1.

We also examine the process of joint training by checking the performance on test data and the agreement of the two monolingual models on the unlabeled parallel data in both settings. The average agreement across 5 folds is 85.06% and 73.87% in settings 1 and 2, respectively, before the joint training, and increases to 100% and 99.89%, respectively, after 100 iterations of joint training. Although the average agreement has already increased to 99.50% and 99.02% in settings 1 and 2, respectively, after 30 iterations, the performance on the test set steadily improves in both settings until around 50-60 iterations, and then becomes relatively stable after that.

Examining those sentence pairs in setting 2 for which the two monolingual models still disagree after 100 iterations of joint training often reveals sentences that are not quite parallel, e.g.:

English: The two sides attach great importance to international cooperation on protection and promotion of human rights.
Chinese: 双方认为，在人权问题上不能采取"双重标准"，反对在国际关系中利用人权问题施压。(Both sides agree that double standards on the issue of human rights are to be avoided, and are opposed to using pressure on human rights issues in international relations.)

Since the two sentences discuss human rights from very different perspectives, it is reasonable that the two monolingual models will classify them with different polarities (i.e. positive for the English sentence and negative for the Chinese sentence) even after joint training.
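For reference, the agreement rate used in this discussion (the fraction of unlabeled parallel pairs on which the two monolingual models predict the same polarity) can be computed as in the sketch below, which reuses the probs() helper from the earlier sketches; it is illustrative only.

import numpy as np

def model_agreement(lam1, lam2, U, n_labels=2):
    # Fraction of pairs in U on which the two models agree on the label.
    same = 0
    for x1, x2 in U:
        y1 = int(np.argmax(probs(lam1, x1, n_labels)))
        y2 = int(np.argmax(probs(lam2, x2, n_labels)))
        same += int(y1 == y2)
    return same / len(U)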
6 Conclusion In this paper, we study bilingual sentiment classification and propose a joint model to simultaneously learn better monolingual sentiment classifiers for each language by exploiting an unlabeled parallel corpus together with the labeled data available for each language. Our experiments show that the proposed approach can significantly improve sentiment classification for both languages. Moreover, the proposed approach continues to produce (albeit smaller) performance gains when employing pseudo-parallel data from machine translation engines. In future work, we would like to apply the joint learning idea to other learning frameworks (e.g. SVMs), and to extend the proposed model to handle word-level parallel information, e.g. bilingual dictionaries or word alignment information. Another issue is to investigate how to improve multilingual sentiment analysis by exploiting comparable corpora. Acknowledgments We thank Shuo Chen, Long Jiang, Thorsten Joachims, Lillian Lee, Myle Ott, Yan Song, Xiaojun Wan, Ainur Yessenalina, Jingbo Zhu and the anonymous reviewers for many useful comments and discussion. This work was supported in part by National Science Foundation Grants BCS-0904822, BCS-0624277, IIS0968450; and by a gift from Google. Chenhao Tan is supported by NSF (DMS-0808864), ONR (YIPN000140910911), and a grant from Microsoft. Word Weight Before After Change Positive important 0.452 1.659 1.207 cooperation 0.325 1.492 1.167 support 0.533 1.483 0.950 importance 0.450 1.193 0.742 agreed 0.347 1.061 0.714 Negative difficulties 0.018 0.663 0.645 not 0.202 0.844 0.641 never 0.245 0.879 0.634 germany 0.035 0.664 0.629 taiwan 0.590 1.216 0.626 Table 3. Original Features with Largest Weight Change 328 References Massih-Reza Amini, Cyril Goutte, and Nicolas Usunier. 2010. Combining coregularization and consensusbased self-training for multilingual text categorization. In Proceeding of SIGIR’10. Carmen Banea, Rada Mihalcea, and Janyce Wiebe. 2010. Multilingual subjectivity: Are more languages better? In Proceedings of COLING’10. Carmen Banea, Rada Mihalcea, Janyce Wiebe, and Samer Hassan. 2008. Multilingual subjectivity analysis using machine translation. In Proceedings of EMNLP’08. Adam L. Berger, Stephen A. Della Pietra and Vincent J. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1). John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural corresponddence learning. In Proceedings of EMNLP’06. Avrim Blum and Tom Mitchell. 1998. Combining labeled and unlabeled data with co-training. In Proceedings of COLT’98. Jordan Boyd-Graber and Philip Resnik. 2010. Holistic sentiment analysis across languages: Multilingual supervised Latent Dirichlet Allocation. In Proceedings of EMNLP’10. Eric Breck, Yejin Choi, and Claire Cardie. 2007. Identifying expressions of opinion in context. In Proceedings of IJCAI’07. David Burkett, Slav Petrov, John Blitzer, and Dan Klein. 2010. Learning better monolingual models with unannotated bilingual text. In Proceedings of CoNLL’10. David Burkett and Dan Klein. 2008. Two languages are better than one (for syntactic parsing). In Proceedings of EMNLP’08. Yejin Choi and Claire Cardie. 2008. Learning with compositional semantics as structural inference for subsentential sentiment analysis. In Proceedings of EMNLP’08. Wei Gao, John Blitzer, Ming Zhou, and Kam-Fai Wong. 2009. Exploiting bilingual information to improve web search. In Proceedings of ACL/IJCNLP‘09. 
Minqing Hu and Bing Liu. 2004. Mining opinion features in customer reviews. In Proceedings of AAAI’04. Ido Dagan, and Alon Itai. 1994. Word sense disambiguation using a second language monolingual corpus, Computational Linguistics, 20(4): 563-596. Thorsten Joachims. 1999a. Making Large-Scale SVM Learning Practical. In: Advances in Kernel Methods - Support Vector Learning, B. Schölkopf, C. Burges, and A. Smola (ed.), MIT Press. Thorsten Joachims. 1999b. Transductive inference for text classification using support vector machines. In Proceedings of ICML’99. Percy Liang, Ben Taskar, and Dan Klein. 2006. Alignment by agreement. In Proceedings of NAACL’06. Dong C. Liu and Jorge Nocedal. 1989. On the limited memory BFGS method for large scale optimization. Mathematical Programming, (45): 503–528. Robert Malouf. 2002. A comparison of algorithms for maximum entropy parameter estimation. In Proceedings of CoNLL’02. Rada Mihalcea, Carmen Banea, and Janyce Wiebe. 2007. Learning multilingual subjective language via cross-lingual projections. In Proceedings of ACL’07. Dragos S. Munteanu and Daniel Marcu. 2005. Improving machine translation performance by exploiting non-parallel corpora. Computational Linguistics, 31(4): 477–504. Tetsuji Nakagawa, Kentaro Inui, and Sadao Kurohashi. 2010. Dependency tree-based sentiment classification using CRFs with hidden variables. In Proceedings of NAACL/HLT ‘10. Kamal Nigam, Andrew K. Mccallum, Sebastian Thrun, and Tom Mitchell. 2000. Text classification from labeled and unlabeled documents using EM. Machine Learning, 39(2): 103–134. Franz J. Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1): 19-51. Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis, Foundations and Trends in Information Retrieval, Now Publishers. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up? Sentiment classification using machine learning techniques. In Proceedings of EMNLP’02. Peter Prettenhofer and Benno Stein. 2010. Crosslanguage text classification using structural correspondence learning. In Proceedings of ACL’10. Adwait Ratnaparkhi. 1997. A simple introduction to maximum entropy models for natural language processing. Technical Report 97-08, University of Pennsylvania. 329 Julia M. Schulz, Christa Womser-Hacker, and Thomas Mandl. 2010. Multilingual corpus development for opinion mining. In Proceedings of LREC’10. Yohei Seki, David Kirk Evans, Lun-Wei Ku, Le Sun, Hsin-His Chen, and Noriko Kando. 2008. Overview of multilingual opinion analysis task at NTCIR-7. In Proceedings of the NTCIR-7 Workshop. Yohei Seki, David K. Evans, Lun-Wei Ku, Le Sun, Hsin-His Chen, Noriko Kando, and Chin-Yew Lin. 2007. Overview of opinion analysis pilot task at NTCIR-6. In Proceedings of the NTCIR-6 Workshop. Vikas Sindhwani, Partha Niyogi, and Mikhail Belkin. 2005. A co-regularization approach to semisupervised learning with multiple views. In Proceedings of ICML’05. Noah A. Smith. 2006. Novel estimation methods for unsupervised discovery of latent structure in natural language text. Ph.D. thesis, Department of Computer Science, Johns Hopkins University. Huihsin Tseng, Pichuan Chang, Galen Andrew, Daniel Jurafsky and Christopher Manning. 2005. A conditional random field word segmenter. In Proeedings of the 4th SIGHAN Workshop. Peter D. Turney. 2002. Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews, In Proceedings of ACL’02. Xiaojun Wan. 2008. 
Using Bilingual Knowledge and Ensemble Techniques for Unsupervised Chinese Sentiment Analysis. In Proceedings of EMNLP’08. Xiaojun Wan. 2009. Co-training for cross-lingual sentiment classification. In Proceedings of ACL/AFNLP’09. Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. Language Resources and Evaluation, 39(2- 3): 165-210. Duo Zhang, Qiaozhu Mei, and ChengXiang Zhai. 2010. Cross-lingual latent topic extraction, In Proceedings of ACL’10. Hai Zhao, Yan Song, Chunyu Kit, and Guodong Zhou. 2009. Cross language dependency parsing using a bilingual lexicon. In Proceedings of ACL/IJCNLP’09. Xiaojin Zhu and Andrew B. Goldberg. 2009. Introduction to Semi-Supervised Learning. Morgan & Claypool Publishers. Appendix A. Equation Deduction In this appendix, we derive the gradient for the objective function in Equation 3, which is used in parameter estimation. As mentioned in Section 3.3, the parameters can be learned by finding: Since the first term on the right-hand side is just the expression for the standard MaxEnt problem, we will focus on the gradient for the second term, and denote as ( ). Let denote or , and be the th weight in the vector . For brevity, we drop the in the above notation, and write to denote . Then the partial derivative of (*) based on Equation 4 with respect to is as follows: (1) Further, we obtain: (2) Merge (2) into (1), we get: 330
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 331–339, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics A Pilot Study of Opinion Summarization in Conversations Dong Wang Yang Liu The University of Texas at Dallas dongwang,[email protected] Abstract This paper presents a pilot study of opinion summarization on conversations. We create a corpus containing extractive and abstractive summaries of speaker’s opinion towards a given topic using 88 telephone conversations. We adopt two methods to perform extractive summarization. The first one is a sentence-ranking method that linearly combines scores measured from different aspects including topic relevance, subjectivity, and sentence importance. The second one is a graph-based method, which incorporates topic and sentiment information, as well as additional information about sentence-to-sentence relations extracted based on dialogue structure. Our evaluation results show that both methods significantly outperform the baseline approach that extracts the longest utterances. In particular, we find that incorporating dialogue structure in the graph-based method contributes to the improved system performance. 1 Introduction Both sentiment analysis (opinion recognition) and summarization have been well studied in recent years in the natural language processing (NLP) community. Most of the previous work on sentiment analysis has been conducted on reviews. Summarization has been applied to different genres, such as news articles, scientific articles, and speech domains including broadcast news, meetings, conversations and lectures. However, opinion summarization has not been explored much. This can be useful for many domains, especially for processing the increasing amount of conversation recordings (telephone conversations, customer service, round-table discussions or interviews in broadcast programs) where we often need to find a person’s opinion or attitude, for example, “how does the speaker think about capital punishment and why?”. This kind of questions can be treated as a topic-oriented opinion summarization task. Opinion summarization was run as a pilot task in Text Analysis Conference (TAC) in 2008. The task was to produce summaries of opinions on specified targets from a set of blog documents. In this study, we investigate this problem using spontaneous conversations. The problem is defined as, given a conversation and a topic, a summarization system needs to generate a summary of the speaker’s opinion towards the topic. This task is built upon opinion recognition and topic or query based summarization. However, this problem is challenging in that: (a) Summarization in spontaneous speech is more difficult than well structured text (Mckeown et al., 2005), because speech is always less organized and has recognition errors when using speech recognition output; (b) Sentiment analysis in dialogues is also much harder because of the genre difference compared to other domains like product reviews or news resources, as reported in (Raaijmakers et al., 2008); (c) In conversational speech, information density is low and there are often off topic discussions, therefore presenting a need to identify utterances that are relevant to the topic. In this paper we perform an exploratory study on opinion summarization in conversations. We compare two unsupervised methods that have been 331 widely used in extractive summarization: sentenceranking and graph-based methods. 
Our system attempts to incorporate more information about topic relevancy and sentiment scores. Furthermore, in the graph-based method, we propose to better incorporate the dialogue structure information in the graph in order to select salient summary utterances. We have created a corpus of reasonable size in this study. Our experimental results show that both methods achieve better results compared to the baseline. The rest of this paper is organized as follows. Section 2 briefly discusses related work. Section 3 describes the corpus and annotation scheme we used. We explain our opinion-oriented conversation summarization system in Section 4 and present experimental results and analysis in Section 5. Section 6 concludes the paper. 2 Related Work Research in document summarization has been well established over the past decades. Many tasks have been defined such as single-document summarization, multi-document summarization, and querybased summarization. Previous studies have used various domains, including news articles, scientific articles, web documents, reviews. Recently there is an increasing research interest in speech summarization, such as conversational telephone speech (Zhu and Penn, 2006; Zechner, 2002), broadcast news (Maskey and Hirschberg, 2005; Lin et al., 2009), lectures (Zhang et al., 2007; Furui et al., 2004), meetings (Murray et al., 2005; Xie and Liu, 2010), voice mails (Koumpis and Renals, 2005). In general speech domains seem to be more difficult than well written text for summarization. In previous work, unsupervised methods like Maximal Marginal Relevance (MMR), Latent Semantic Analysis (LSA), and supervised methods that cast the extraction problem as a binary classification task have been adopted. Prior research has also explored using speech specific information, including prosodic features, dialog structure, and speech recognition confidence. In order to provide a summary over opinions, we need to find out which utterances in the conversation contain opinion. Most previous work in sentiment analysis has focused on reviews (Pang and Lee, 2004; Popescu and Etzioni, 2005; Ng et al., 2006) and news resources (Wiebe and Riloff, 2005). Many kinds of features are explored, such as lexical features (unigram, bigram and trigram), part-of-speech tags, dependency relations. Most of prior work used classification methods such as naive Bayes or SVMs to perform the polarity classification or opinion detection. Only a handful studies have used conversational speech for opinion recognition (Murray and Carenini, 2009; Raaijmakers et al., 2008), in which some domain-specific features are utilized such as structural features and prosodic features. Our work is also related to question answering (QA), especially opinion question answering. (Stoyanov et al., 2005) applies a subjectivity filter based on traditional QA systems to generate opinionated answers. (Balahur et al., 2010) answers some specific opinion questions like “Why do people criticize Richard Branson?” by retrieving candidate sentences using traditional QA methods and selecting the ones with the same polarity as the question. Our work is different in that we are not going to answer specific opinion questions, instead, we provide a summary on the speaker’s opinion towards a given topic. There exists some work on opinion summarization. For example, (Hu and Liu, 2004; Nishikawa et al., 2010) have explored opinion summarization in review domain, and (Paul et al., 2010) summarizes contrastive viewpoints in opinionated text. 
However, opinion summarization in spontaneous conversation is seldom studied. 3 Corpus Creation Though there are many annotated data sets for the research of speech summarization and sentiment analysis, there is no corpus available for opinion summarization on spontaneous speech. Thus for this study, we create a new pilot data set using a subset of the Switchboard corpus (Godfrey and Holliman, 1997).1 These are conversational telephone speech between two strangers that were assigned a topic to talk about for around 5 minutes. They were told to find the opinions of the other person. There are 70 topics in total. From the Switchboard cor1Please contact the authors to obtain the data. 332 pus, we selected 88 conversations from 6 topics for this study. Table 1 lists the number of conversations in each topic, their average length (measured in the unit of dialogue acts (DA)) and standard deviation of length. topic #Conv. avg len stdev space flight and exploration 6 165.5 71.40 capital punishment 24 gun control 15 universal health insurance 9 drug testing 12 universal public service 22 Table 1: Corpus statistics: topic description, number of conversations in each topic, average length (number of dialog acts), and standard deviation. We recruited 3 annotators that are all undergraduate computer science students. From the 88 conversations, we selected 18 (3 from each topic) and let all three annotators label them in order to study inter-annotator agreement. The rest of the conversations has only one annotation. The annotators have access to both conversation transcripts and audio files. For each conversation, the annotator writes an abstractive summary of up to 100 words for each speaker about his/her opinion or attitude on the given topic. They were told to use the words in the original transcripts if possible. Then the annotator selects up to 15 DAs (no minimum limit) in the transcripts for each speaker, from which their abstractive summary is derived. The selected DAs are used as the human generated extractive summary. In addition, the annotator is asked to select an overall opinion towards the topic for each speaker among five categories: strongly support, somewhat support, neutral, somewhat against, strongly against. Therefore for each conversation, we have an abstractive summary, an extractive summary, and an overall opinion for each speaker. The following shows an example of such annotation for speaker B in a dialogue about “capital punishment”: [Extractive Summary] I think I’ve seen some statistics that say that, uh, it’s more expensive to kill somebody than to keep them in prison for life. committing them mostly is, you know, either crimes of passion or at the moment or they think they’re not going to get caught but you also have to think whether it’s worthwhile on the individual basis, for example, someone like, uh, jeffrey dahlmer, by putting him in prison for life, there is still a possibility that he will get out again. I don’t think he could ever redeem himself, but if you look at who gets accused and who are the ones who actually get executed, it’s very racially related – and ethnically related [Abstractive Summary] B is against capital punishment except under certain circumstances. B finds that crimes deserving of capital punishment are “crimes of the moment” and as a result feels that capital punishment is not an effective deterrent. however, B also recognizes that on an individual basis some criminals can never “redeem” themselves. 
[Overall Opinion] Somewhat against Table 2 shows the compression ratio of the extractive summaries and abstractive summaries as well as their standard deviation. Because in conversations, utterance length varies a lot, we use words as units when calculating the compression ratio. avg ratio stdev extractive summaries 0.26 0.13 abstractive summaries 0.13 0.06 Table 2: Compression ratio and standard deviation of extractive and abstractive summaries. We measured the inter-annotator agreement among the three annotators for the 18 conversations (each has two speakers, thus 36 “documents” in total). Results are shown in Table 3. For the extractive or abstractive summaries, we use ROUGE scores (Lin, 2004), a metric used to evaluate automatic summarization performance, to measure the pairwise agreement of summaries from different annotators. ROUGE F-scores are shown in the table for different matches, unigram (R-1), bigram (R-2), and longest subsequence (R-L). For the overall opinion category, since it is a multiclass label (not binary decision), we use Krippendorff’s α coefficient to measure human agreement, and the difference function for interval data: δ2 ck = (c −k)2 (where c, k are the interval values, on a scale of 1 to 5 corresponding to the five categories for the overall opinion). We notice that the inter-annotator agreement for extractive summaries is comparable to other speech 333 extractive summaries R-1 0.61 R-2 0.52 R-L 0.61 abstractive summaries R-1 0.32 R-2 0.13 R-L 0.25 overall opinion α = 0.79 Table 3: Inter-annotator agreement for extractive and abstractive summaries, and overall opinion. summary annotation (Liu and Liu, 2008). The agreement on abstractive summaries is much lower than extractive summaries, which is as expected. Even for the same opinion or sentence, annotators use different words in the abstractive summaries. The agreement for the overall opinion annotation is similar to other opinion/emotion studies (Wilson, 2008b), but slightly lower than the level recommended by Krippendorff for reliable data (α = 0.8) (Hayes and Krippendorff, 2007), which shows it is even difficult for humans to determine what opinion a person holds (support or against something). Often human annotators have different interpretations about the same sentence, and a speaker’s opinion/attitude is sometimes ambiguous. Therefore this also demonstrates that it is more appropriate to provide a summary rather than a simple opinion category to answer questions about a person’s opinion towards something. 4 Opinion Summarization Methods Automatic summarization can be divided into extractive summarization and abstractive summarization. Extractive summarization selects sentences from the original documents to form a summary; whereas abstractive summarization requires generation of new sentences that represent the most salient content in the original documents like humans do. Often extractive summarization is used as the first step to generate abstractive summary. As a pilot study for the problem of opinion summarization in conversations, we treat this problem as an extractive summarization task. This section describes two approaches we have explored in generating extractive summaries. The first one is a sentence-ranking method, in which we measure the salience of each sentence according to a linear combination of scores from several dimensions. The second one is a graph-based method, which incorporates the dialogue structure in ranking. 
We choose to investigate these two methods since they have been widely used in text and speech summarization, and perform competitively. In addition, they do not require a large labeled data set for modeling training, as needed in some classification or feature based summarization approaches. 4.1 Sentence Ranking In this method, we use Equation 1 to assign a score to each DA s, and select the most highly ranked ones until the length constriction is satisfied. score(s) = λsimsim(s, D) + λrelREL(s, topic) +λsentsentiment(s) + λlenlength(s) X i λi = 1 (1) • sim(s, D) is the cosine similarity between DA s and all the utterances in the dialogue from the same speaker, D. It measures the relevancy of s to the entire dialogue from the target speaker. This score is used to represent the salience of the DA. It has been shown to be an important indicator in summarization for various domains. For cosine similarity measure, we use TF*IDF (term frequency, inverse document frequency) term weighting. The IDF values are obtained using the entire Switchboard corpus, treating each conversation as a document. • REL(s, topic) measures the topic relevance of DA s. It is the sum of the topic relevance of all the words in the DA. We only consider the content words for this measure. They are identified using TreeTagger toolkit.2 To measure the relevance of a word to a topic, we use Pairwise Mutual Information (PMI): PMI(w, topic) = log2 p(w&topic) p(w)p(topic) (2) 2http://www.ims.uni-stuttgart.de/projekte/corplex/TreeTagger/De cisionTreeTagger.html 334 where all the statistics are collected from the Switchboard corpus: p(w&topic) denotes the probability that word w appears in a dialogue of topic t, and p(w) is the probability of w appearing in a dialogue of any topic. Since our goal is to rank DAs in the same dialog, and the topic is the same for all the DAs, we drop p(topic) when calculating PMI scores. Because the value of PMI(w, topic) is negative, we transform it into a positive one (denoted by PMI+(w, topic)) by adding the absolute value of the minimum value. The final relevance score of each sentence is normalized to [0, 1] using linear normalization: RELorig(s, topic) = X w∈s PMI+(w, topic) REL(s, topic) = RELorig(s, topic) −Min Max −Min • sentiment(s) indicates the probability that utterance s contains opinion. To obtain this, we trained a maximum entropy classifier with a bag-of-words model using a combination of data sets from several domains, including movie data (Pang and Lee, 2004), news articles from MPQA corpus (Wilson and Wiebe, 2003), and meeting transcripts from AMI corpus (Wilson, 2008a). Each sentence (or DA) in these corpora is annotated as “subjective” or “objective”. We use each utterance’s probability of being “subjective” predicted by the classifier as its sentiment score. • length(s) is the length of the utterance. This score can effectively penalize the short sentences which typically do not contain much important content, especially the backchannels that appear frequently in dialogues. We also perform linear normalization such that the final value lies in [0, 1]. 4.2 Graph-based Summarization Graph-based methods have been widely used in document summarization. In this approach, a document is modeled as an adjacency matrix, where each node represents a sentence, and the weight of the edge between each pair of sentences is their similarity (cosine similarity is typically used). An iterative process is used until the scores for the nodes converge. 
Previous studies (Erkan and Radev, 2004) showed that this method can effectively extract important sentences from documents. The basic framework we use in this study is similar to the query-based graph summarization system in (Zhao et al., 2009). We also consider sentiment and topic relevance information, and propose to incorporate information obtained from dialog structure in this framework. The score for a DA s is based on its content similarity with all other DAs in the dialogue, the connection with other DAs based on the dialogue structure, the topic relevance, and its subjectivity, that is: score(s) = λsim X v∈C sim(s, v) P z∈C sim(z, v)score(v) +λrel REL(s, topic) P z∈C REL(z, topic) +λsent sentiment(s) P z∈C sentiment(z) +λadj X v∈C ADJ(s, v) P z∈C ADJ(z, v)score(v) X i λi = 1 (3) where C is the set of all DAs in the dialogue; REL(s, topic) and sentiment(s) are the same as those in the above sentence ranking method; sim(s, v) is the cosine similarity between two DAs s and v. In addition to the standard connection between two DAs with an edge weight sim(s, v), we introduce new connections ADJ(s, v) to model dialog structure. It is a directed edge from s to v, defined as follows: • If s and v are from the same speaker and within the same turn, there is an edge from s to v and an edge from v to s with weight 1/dis(s, v) (ADJ(s, v) = ADJ(v, s) = 1/dis(s, v)), where dis(s, v) is the distance between s and v, measured based on their DA indices. This way the DAs in the same turn can reinforce each other. For example, if we consider that 335 one DA is important, then the other DAs in the same turn are also important. • If s and v are from the same speaker, and separated only by one DA from another speaker with length less than 3 words (usually backchannel), there is an edge from s to v as well as an edge from v to s with weight 1 (ADJ(s, v) = ADJ(v, s) = 1). • If s and v form a question-answer pair from two speakers, then there is an edge from question s to answer v with weight 1 (ADJ(s, v) = 1). We use a simple rule-based method to determine question-answer pairs — sentence s has question marks or contains “wh-word” (i.e., “what, how, why”), and sentence v is the immediately following one. The motivation for adding this connection is, if the score of a question sentence is high, then the answer’s score is also boosted. • If s and v form an agreement or disagreement pair, then there is an edge from v to s with weight 1 (ADJ(v, s) = 1). This is also determined by simple rules: sentence v contains the word “agree” or “disagree”, s is the previous sentence, and from a different speaker. The reason for adding this is similar to the above question-answer pairs. • If there are multiple edges generated from the above steps between two nodes, then we use the highest weight. Since we are using a directed graph for the sentence connections to model dialog structure, the resulting adjacency matrix is asymmetric. This is different from the widely used graph methods for summarization. Also note that in the first sentence ranking method or the basic graph methods, summarization is conducted for each speaker separately. Utterances from one speaker have no influence on the summary decision for the other speaker. Here in our proposed graph-based method, we introduce connections between the two speakers, so that the adjacency pairs between them can be utilized to extract salient utterances. 
5 Experiments 5.1 Experimental Setup The 18 conversations annotated by all 3 annotators are used as test set, and the rest of 70 conversations are used as development set to tune the parameters (determining the best combination weights). In preprocessing we applied word stemming. We perform extractive summarization using different word compression ratios (ranging from 10% to 25%). We use human annotated dialogue acts (DA) as the extraction units. The system-generated summaries are compared to human annotated extractive and abstractive summaries. We use ROUGE as the evaluation metrics for summarization performance. We compare our methods to two systems. The first one is a baseline system, where we select the longest utterances for each speaker. This has been shown to be a relatively strong baseline for speech summarization (Gillick et al., 2009). The second one is human performance. We treat each annotator’s extractive summary as a system summary, and compare to the other two annotators’ extractive and abstractive summaries. This can be considered as the upper bound of our system performance. 5.2 Results From the development set, we used the grid search method to obtain the best combination weights for the two summarization methods. In the sentenceranking method, the best parameters found on the development set are λsim = 0, λrel = 0.3, λsent = 0.3, λlen = 0.4. It is surprising to see that the similarity score is not useful for this task. The possible reason is, in Switchboard conversations, what people talk about is diverse and in many cases only topic words (except stopwords) appear more than once. In addition, REL score is already able to catch the topic relevancy of the sentence. Thus, the similarity score is redundant here. In the graph-based method, the best parameters are λsim = 0, λadj = 0.3, λrel = 0.4, λsent = 0.3. The similarity between each pair of utterances is also not useful, which can be explained with similar reasons as in the sentence-ranking method. This is different from graph-based summarization systems for text domains. A similar finding has also been shown in (Garg et al., 2009), where similarity be336 38 43 48 53 58 63 0.1 0.15 0.2 0.25 compression ratio ROUGE-1(%) max-length sentence-ranking graph human (a) compare to reference extractive summary 17 19 21 23 25 27 29 31 0.1 0.15 0.2 0.25 compression ratio ROUGE-1(%) max-length sentence-ranking graph human (b) compare to reference abstractive summary Figure 1: ROUGE-1 F-scores compared to extractive and abstractive reference summaries for different systems: max-length, sentence-ranking method, graphbased method, and human performance. tween utterances does not perform well in conversation summarization. Figure 1 shows the ROUGE-1 F-scores comparing to human extractive and abstractive summaries for different compression ratios. Similar patterns are observed for other ROUGE scores such as ROUGE2 or ROUGE-L, therefore they are not shown here. Both methods improve significantly over the baseline approach. There is relatively less improvement using a higher compression ratio, compared to a lower one. This is reasonable because when the compression ratio is low, the most salient utterances are not necessarily the longest ones, thus using more information sources helps better identify important sentences; but when the compression ratio is higher, longer utterances are more likely to be selected since they contain more content. There is no significant difference between the two methods. 
When compared to extractive reference summaries, sentence-ranking is slightly better except for the compression ratio of 0.1. When compared to abstractive reference summaries, the graphbased method is slightly better. The two systems share the same topic relevance score (REL) and sentiment score, but the sentence-ranking method prefers longer DAs and the graph-based method prefers DAs that are emphasized by the ADJ matrix, such as the DA in the middle of a cluster of utterances from the same speaker, the answer to a question, etc. 5.3 Analysis To analyze the effect of dialogue structure we introduce in the graph-based summarization method, we compare two configurations: λadj = 0 (only using REL score and sentiment score in ranking) and λadj = 0.3. We generate summaries using these two setups and compare with human selected sentences. Table 4 shows the number of false positive instances (selected by system but not by human) and false negative ones (selected by human but not by system). We use all three annotators’ annotation as reference, and consider an utterance as positive if one annotator selects it. This results in a large number of reference summary DAs (because of low human agreement), and thus the number of false negatives in the system output is very high. As expected, a smaller compression ratio (fewer selected DAs in the system output) yields a higher false negative rate and a lower false positive rate. From the results, we can see that generally adding adjacency matrix information is able to reduce both types of errors except when the compression ratio is 0.15. The following shows an example, where the third DA is selected by the system with λadj = 0.3, but not by λadj = 0. This is partly because the weight of the second DA is enhanced by the the question337 λadj = 0 λadj = 0.3 ratio FP FN FP FN 0.1 37 588 33 581 0.15 60 542 61 546 0.2 100 516 90 511 0.25 137 489 131 482 Table 4: The number of false positive (FP) and false negative (FN) instances using the graph-based method with λadj = 0 and λadj = 0.3 for different compression ratios. answer pair (the first and the second DA), and thus subsequently boosting the score of the third DA. A: Well what do you think? B: Well, I don’t know, I’m thinking about from one to ten what my no would be. B: It would probably be somewhere closer to, uh, less control because I don’t see, We also examined the system output and human annotation and found some reasons for the system errors: (a) Topic relevance measure. We use the statistics from the Switchboard corpus to measure the relevance of each word to a given topic (PMI score), therefore only when people use the same word in different conversations of the topic, the PMI score of this word and the topic is high. However, since the size of the corpus is small, some topics only contain a few conversations, and some words only appear in one conversation even though they are topicrelevant. Therefore the current PMI measure cannot properly measure a word’s and a sentence’s topic relevance. This problem leads to many false negative errors (relevant sentences are not captured by our system). (b) Extraction units. We used DA segments as units for extractive summarization, which can be problematic. In conversational speech, sometimes a DA segment is not a complete sentence because of overlaps and interruptions. 
We notice that annotators tend to select consecutive DAs that constitute a complete sentence, however, since each individual DA is not quite meaningful by itself, they are often not selected by the system. The following segment is extracted from a dialogue about “universal health insurance”. The two DAs from speaker B are not selected by our system but selected by human annotators, causing false negative errors. B: and it just can devastate – A: and your constantly, B: – your budget, you know. 6 Conclusion and Future Work This paper investigates two unsupervised methods in opinion summarization on spontaneous conversations by incorporating topic score and sentiment score in existing summarization techniques. In the sentence-ranking method, we linearly combine several scores in different aspects to select sentences with the highest scores. In the graph-based method, we use an adjacency matrix to model the dialogue structure and utilize it to find salient utterances in conversations. Our experiments show that both methods are able to improve the baseline approach, and we find that the cosine similarity between utterances or between an utterance and the whole document is not as useful as in other document summarization tasks. In future work, we will address some issues identified from our error analysis. First, we will investigate ways to represent a sentence’s topic relevance. Second, we will evaluate using other extraction units, such as applying preprocessing to remove disfluencies and concatenate incomplete sentence segments together. In addition, it would be interesting to test our system on speech recognition output and automatically generated DA boundaries to see how robust it is. 7 Acknowledgments The authors thank Julia Hirschberg and Ani Nenkova for useful discussions. This research is supported by NSF awards CNS-1059226 and IIS0939966. References Alexandra Balahur, Ester Boldrini, Andr´es Montoyo, and Patricio Mart´ınez-Barco. 2010. Going beyond traditional QA systems: challenges and keys in opinion question answering. In Proceedings of COLING. G¨unes Erkan and Dragomir R. Radev. 2004. LexRank: graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research. 338 Sadaoki Furui, Tomonori Kikuchi, Yousuke Shinnaka, and Chior i Hori. 2004. Speech-to-text and speech-tospeech summarization of spontaneous speech. IEEE Transactions on Audio, Speech & Language Processing, 12(4):401–408. Nikhil Garg, Benoit Favre, Korbinian Reidhammer, and Dilek Hakkani T¨ur. 2009. ClusterRank: a graph based method for meeting summarization. In Proceedings of Interspeech. Dan Gillick, Korbinian Riedhammer, Benoit Favre, and Dilek Hakkani-Tur. 2009. A global optimization framework for meeting summarization. In Proceedings of ICASSP. John J. Godfrey and Edward Holliman. 1997. Switchboard-1 Release 2. In Linguistic Data Consortium, Philadelphia. Andrew Hayes and Klaus Krippendorff. 2007. Answering the call for a standard reliability measure for coding data. Journal of Communication Methods and Measures, 1:77–89. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of ACM SIGKDD. Konstantinos Koumpis and Steve Renals. 2005. Automatic summarization of voicemail messages using lexical and prosodic features. ACM - Transactions on Speech and Language Processing. Shih Hsiang Lin, Berlin Chen, and Hsin min Wang. 2009. A comparative study of probabilistic ranking models for chinese spoken document summarization. 
ACM Transactions on Asian Language Information Processing, 8(1). Chin-Yew Lin. 2004. ROUGE: a package for automatic evaluation of summaries. In Proceedings of ACL workshop on Text Summarization Branches Out. Fei Liu and Yang Liu. 2008. What are meeting summaries? An analysis of human extractive summaries in meeting corpus. In Proceedings of SIGDial. Sameer Maskey and Julia Hirschberg. 2005. Comparing lexical, acoustic/prosodic, structural and discourse features for speech summarization. In Proceedings of Interspeech. Kathleen Mckeown, Julia Hirschberg, Michel Galley, and Sameer Maskey. 2005. From text to speech summarization. In Proceedings of ICASSP. Gabriel Murray and Giuseppe Carenini. 2009. Detecting subjectivity in multiparty speech. In Proceedings of Interspeech. Gabriel Murray, Steve Renals, and Jean Carletta. 2005. Extractive summarization of meeting recordings. In Proceedings of EUROSPEECH. Vincent Ng, Sajib Dasgupta, and S.M.Niaz Arifin. 2006. Examining the role of linguistic knowledge sources in the automatic identification and classification of reviews. In Proceedings of the COLING/ACL. Hitoshi Nishikawa, Takaaki Hasegawa, Yoshihiro Matsuo, and Genichiro Kikui. 2010. Opinion summarization with integer linear programming formulation for sentence extraction and ordering. In Proceedings of COLING. Bo Pang and Lilian Lee. 2004. A sentiment education: sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of ACL. Michael Paul, ChengXiang Zhai, and Roxana Girju. 2010. Summarizing contrastive viewpoints in opinionated text. In Proceedings of EMNLP. Ana-Maria Popescu and Oren Etzioni. 2005. Extracting product features and opinions from reviews. In Proceedings of HLT-EMNLP. Stephan Raaijmakers, Khiet Truong, and Theresa Wilson. 2008. Multimodal subjectivity analysis of multiparty conversation. In Proceedings of EMNLP. Veselin Stoyanov, Claire Cardie, and Janyce Wiebe. 2005. Multi-perspective question answering using the OpQA corpus. In Proceedings of EMNLP/HLT. Janyce Wiebe and Ellen Riloff. 2005. Creating subjective and objective sentence classifiers from unannotated texts. In Proceedings of CICLing. Theresa Wilson and Janyce Wiebe. 2003. Annotating opinions in the world press. In Proceedings of SIGDial. Theresa Wilson. 2008a. Annotating subjective content in meetings. In Proceedings of LREC. Theresa Wilson. 2008b. Fine-grained subjectivity and sentiment analysis: recognizing the intensity, polarity, and attitudes of private states. Ph.D. thesis, University of Pittsburgh. Shasha Xie and Yang Liu. 2010. Improving supervised learning for meeting summarization using sampling and regression. Computer Speech and Language, 24:495–514. Klaus Zechner. 2002. Automatic summarization of open-domain multiparty dialogues in dive rse genres. Computational Linguistics, 28:447–485. Justin Jian Zhang, Ho Yin Chan, and Pascale Fung. 2007. Improving lecture speech summarization using rhetorical information. In Proceedings of Biannual IEEE Workshop on ASRU. Lin Zhao, Lide Wu, and Xuanjing Huang. 2009. Using query expansion in graph-based approach for queryfocused multi-document summarization. Journal of Information Processing and Management. Xiaodan Zhu and Gerald Penn. 2006. Summarization of spontaneous conversations. In Proceedings of Interspeech. 339
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 340–349, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Contrasting Opposing Views of News Articles on Contentious Issues Souneil Park1, KyungSoon Lee2, Junehwa Song1 1Korea Advanced Institute of Science and Technology 2Chonbuk National University 291 Daehak-ro, Yuseong-gu, 664-14 1ga Deokjin-dong Jeonju, Daejeon, Republic of Korea Jeonbuk, Republic of Korea {spark,junesong}@nclab.kaist.ac.kr [email protected] Abstract We present disputant relation-based method for classifying news articles on contentious issues. We observe that the disputants of a contention are an important feature for understanding the discourse. It performs unsupervised classification on news articles based on disputant relations, and helps readers intuitively view the articles through the opponent-based frame. The readers can attain balanced understanding on the contention, free from a specific biased view. We applied a modified version of HITS algorithm and an SVM classifier trained with pseudo-relevant data for article analysis. 1 Introduction The coverage of contentious issues of a community is an essential function of journalism. Contentious issues continuously arise in various domains, such as politics, economy, environment; each issue involves diverse participants and their different complex arguments. However, news articles are frequently biased and fail to fairly deliver conflicting arguments of the issue. It is difficult for ordinary readers to analyze the conflicting arguments and understand the contention; they mostly perceive the issue passively, often through a single article. Advanced news delivery models are required to increase awareness on conflicting views. In this paper, we present disputant relationbased method for classifying news articles on contentious issues. We observe that the disputants of a contention, i.e., people who take a position and participate in the contention such as politicians, companies, stakeholders, civic groups, experts, commentators, etc., are an important feature for understanding the discourse. News producers primarily shape an article on a contention by selecting and covering specific disputants (Baker. 1994). Readers also intuitively understand the contention by identifying who the opposing disputants are. The method helps readers intuitively view the news articles through the opponent-based frame. It performs classification in an unsupervised manner: it dynamically identifies opposing disputant groups and classifies the articles according to their positions. As such, it effectively helps readers contrast articles of a contention and attain balanced understanding, free from specific biased viewpoints. The proposed method differs from those used in related tasks as it aims to perform classification under the opponent-based frame. Research on sentiment classification and debate stance recognition takes a topic-oriented view, and attempts to perform classification under the „positive vs. negative‟ or „for vs. against‟ frame for the given topic, e.g., positive vs. negative about iPhone. However, such frames are often not appropriate for classifying news articles of a contention. The coverage of a contention often spans over different topics (Miller. 2001). For the contention on the health care bill, an article may discuss the enlarged coverage whereas another may discuss the increase of insurance premiums. 
In addition, we observe that opposing arguments of a contention are often complex to classify under these frames. For exam340 ple, in a political contention on holding a referendum on the Sejong project1, the opposition parties strongly opposed and criticized the president office. Meanwhile, the president office argued that they were not considering holding the referendum and the contention arose from a misunderstanding. In such a case, it is difficult to classify any argument to the “positive” category of the frame. We demonstrate that the opponent-based frame is clear and effective for contrasting opposing views of contentious issues. For the contention on the referendum, „president office vs. opposition parties‟ provides an intuitive frame to understand the contention. The frame does not require the documents to discuss common topics nor the opposing arguments to be positive vs. negative. Under the proposed frame, it becomes important to analyze which side is more centrally covered in an article. Unlike debate posts or product reviews news articles, in general, do not take a position explicitly (except a few types such as editorials). They instead quote a specific side, elaborate them, and provide supportive facts. On the other hand, the opposing disputants compete for news coverage to influence more readers and gain support (Miller et al. 2001). Thus, the method focuses on identifying the disputants of each side and classifying the articles based on the side it covers. We applied a modified version of HITS algorithm to identify the key opponents of an issue, and used disputant extraction techniques combined with an SVM classifier for article analysis. We observe that the method achieves acceptable performance for practical use with basic language resources and tools, i.e., Named Entity Recognizer (Lee et al. 2006), POS tagger (Shim et al. 2002), and a translated positive/negative lexicon. As we deal with non-English (Korean) news articles, it is difficult to obtain rich resources and tools, e.g., WordNet, dependency parser, annotated corpus such as MPQA. When applied to English, we believe the method could be further improved by adopting them. 2 Background and Related Work Research has been made on sentiment classification in document-level (Turney et al., 2002, Pang et al., 2002, Seki et al. 2008, Ounis et al. 2006). It aims to automatically identify and classify the sen 1 http://www.koreatimes.co.kr/www/news/nation/2010/07/116_61649.html timent of documents into positive or negative. Opinion summarization aims a similar goal, to identify different opinions on a topic and generate summaries of them. Paul et al. (2010) developed an unsupervised method for generating summaries of contrastive opinions on a common topic. These works make a number of assumptions that are difficult to apply to the discourse of contentious news issues. They assume that the input documents have a common opinion target, e.g., a movie. Many of them primarily deal with documents which explicitly reveal opinions on the selected target, e.g., movie reviews. They usually apply one static classification frame, positive vs. negative, to the topic. The discourse of contentious issues in news articles show different characteristics from that studied in the sentiment classification tasks. First, the opponents of a contentious issue often discuss different topics, as discussed in the example above. 
Research in mass communication has showed that opposing disputants talk across each other, not by dialogue, i.e., they martial different facts and interpretations rather than to give different answers to the same topics (Schon et al., 1994). Second, the frame of argument is not fixed as „positive vs. negative‟. We frequently observed both sides of a contention articulating negative arguments attacking each other. The forms of arguments are also complex and diverse to classify them as positive or negative; for example, an argument may just neglect the opponent‟s argument without positive or negative expressions, or emphasize a different discussion point. In addition, a position of a contention can be communicated without explicit expression of opinion or sentiment. It is often conveyed through objective sentences that include carefully selected facts. For example, a news article can cast a negative light on a government program simply by covering the increase of deficit caused by it. A number of works deal with debate stance recognition, which is a closely related task. They attempt to identify a position of a debate, such as ideological (Somasundaran et al., 2010, Lin et al., 2006) or product comparison debate (Somasundaran et al., 2009). They assume a debate frame, which is similar to the frame of the sentiment classification task, i.e., for vs. against the debate topic. All articles of a debate in their corpus cover a coherent debate topic, e.g., iPhone vs. Blackberry, and explicitly express opinions for or 341 against to the topic, e.g., for or against iPhone or Blackberry. The proposed methods assume that the debate frame is known apriori. This debate frame is often not appropriate for contentious issues for similar reasons as the positive/negative frame. In contrast, our method does not assume a fixed debate frame, and rather develops one based on the opponents of the contention at hand. The news corpus is also different from the debate corpus. News articles of a contentious issue are more diverse than debate articles conveying explicit argument of a specific side. There are news articles which cover both sides, facts without explicit opinions, and different topics unrelated to the arguments of either side. Several works have used the relation between speakers or authors for classifying their debate stance (Thomas et al., 2006, Agrawal et al., 2003). However, these works also assume the same debate frame and use the debate corpus, e.g., floor debates in the House of Representatives, online debate forums. Their approaches are also supervised, and require training data for relation analysis, e.g., voting records of congresspeople. 3 Argument Frame Comparison Establishing an appropriate argument frame is important. It provides a framework which enable readers to intuitively understand the contention. It also determines how classification methods should classify articles of the issue. We conducted a user study to compare the opponent-based frame and the positive (for) vs. negative (against) frame. In the experiment, multiple human annotators classified the same set of news articles under each of the two frames. We compared which frame is clearer for the classification, and more effective for exposing opposing views. We selected 14 contentious issues from Naver News (a popular news portal in Korea) issue archive. We randomly sampled about 20 articles per each issue, for a total of 250 articles. 
The selected issues range over diverse domains such as politics, local, diplomacy, economy; to name a few for example, the contention on the 4 river project, of which the key opponents are the government vs. catholic church; the entrance of big retailers to the supermarket business, of which the key opponents are the small store owners vs. big retail companies; the refusal to approve an integrated civil servants‟ union, of which the key opponents are government vs. Korean government employees‟ union. We use an internationally known contention, i.e., the dispute about the Cheonan sinking incident, as an example to give more details on the disputants. Our data set includes 25 articles that were published after the South Korea‟s announcement of their investigation result. Many disputants appear in the articles, e.g., South Korean Government, South Korea defense secretary, North Korean Government, United States officials, Chinese experts, political parties of South Korea, etc. Three annotators performed the classification. All of them were students. For impartiality, two of them were recruited from outside the team, who were not aware of this research. The annotators performed two subtasks for classification. As for the positive vs. negative frame, first, we asked them to designate the main topic of the contention. Second, they classified the articles which mainly deliver arguments for the topic to the “positive” category and those delivering arguments against the topic to the “negative” category. The articles are classified to the “Other” category if they do not deal with the main topic nor cover positive or negative arguments. As for the opponent-based frame, first, we asked them to designate the competing opponents. Second, we asked to classify articles to a specific side if the articles cover only the positions, arguments, or information supportive of that side or if they cover information detrimental or criticism to its opposite side. Other articles were classified to the “Other” category. Examples of this category include articles covering both sides fairly, describing general background or implications of the issue. Issue # Free-marginal kappa Issue # Free-marginal kappa Pos.-Neg. Opponent Pos.-Neg. Opponent 1 0.83 0.67 8 0.26 0.58 2 0.57 0.48 9 0.07 1.00 3 0.44 0.95 10 0.48 0.84 4 0.75 0.87 11 0.71 0.86 5 0.36 0.64 12 0.71 0.71 6 0.30 0.70 13 0.63 0.79 7 0.18 0.96 14 0.48 0.87 Avg. 0.50 0.78 Table 1. Inter-rater agreement result. The agreement in classification was higher for the opponent-based frame in most issues. This indicates that the annotators could apply the frame more clearly, resulting in smaller difference between them. The kappa measure was 0.78 on aver342 age. The kappa measure near 0.8 indicates a substantial level of agreement, and the value can be achieved, for example, when 8 or 9 out of 10 items are annotated equally (Table 1). In addition, fewer articles were classified to the “Other” category under the opponent-based frame. The annotators classified about half of the articles to this category under the positive vs. negative frame whereas they classified about 35% to the category under the opponent-based frame. This is because the frame is more flexible to classify diverse articles of an issue, such as those covering arguments on different points, and those covering detrimental facts to a specific side without explicit positive or negative arguments. The kappa measure was less than 0.5 for near half of the issues under the positive-negative frame. 
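Table 1's values can be reproduced with the standard free-marginal multirater kappa (observed pairwise agreement measured against a chance level of 1/q for q categories). The sketch below is illustrative only: the category labels and the toy agreement pattern are assumptions, not the actual annotation data. With three annotators and three categories, full agreement on 8 of 10 items yields a value near 0.8, matching the rule of thumb above.

def free_marginal_kappa(labels, num_categories):
    """Free-marginal multirater kappa.

    labels: one list of category labels per item (one label per annotator).
    """
    num_raters = len(labels[0])
    observed = 0.0
    for item in labels:
        counts = {}
        for lab in item:
            counts[lab] = counts.get(lab, 0) + 1
        # Proportion of agreeing annotator pairs for this item.
        observed += sum(c * (c - 1) for c in counts.values()) / (num_raters * (num_raters - 1))
    p_o = observed / len(labels)
    p_e = 1.0 / num_categories  # chance agreement under the free-marginal assumption
    return (p_o - p_e) / (1.0 - p_e)

# Toy example: 3 annotators, 3 categories (two sides plus "Other").
items = [["G1"] * 3] * 8 + [["G1", "G1", "Other"], ["G2", "Other", "Other"]]
print(round(free_marginal_kappa(items, 3), 2))  # 0.8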
The agreement was low especially when the main topic of the contention was interpreted differently among the annotators; the main topic was interpreted differently for issue 3, 7, 8, and 9. Even when the topic was interpreted identically, the annotators were confused in judging complex arguments either as positive or negative. One annotator commented that “it was confusing as the arguments were not clearly for or against the topic often. Even when a disputant was assumed to have a positive attitude towards the topic, the disputant‟s main argument was not about the topic but about attacking the opponent” The annotators all agreed that the opponent-based frame is more effective to understand the contention. 4 Disputant relation-based method Disputant relation-based method adopts the opponent-based frame for classification. It attempts to identify the two opposing groups of the issue at hand, and analyzes whether an article more reflects the position of a specific side. The method is based on the observation that there exists two opposing groups of disputants, and the groups compete for news coverage. They strive to influence readers‟ interpretation, evaluation of the issue and gain support from them (Miller et al. 2001). In this competing process, news articles may give more chance of speaking to a specific side, explain or elaborate them, or provide supportive facts of that side (Baker 1994). The proposed method is performed in three stages: the first stage, disputant extraction, extracts the disputants appearing in an article set; the second stage, disputant partition, partitions the extracted disputants into two opposing groups; lastly, the news classification stage classifies the articles into three categories, i.e., two for the articles biased to each group, and one for the others. 4.1 Disputant Extraction In this stage, the disputants who participate in the contention have to be extracted. We utilize that many disputants appear as the subject of quotes in the news article set. The articles actively quote or cover their action in order to deliver the contention lively. We used straight forward methods for extraction of subjects. The methods were effective in practice as quotes of articles frequently had a regular pattern. The subjects of direct and indirect quotes are extracted. The sentences including an utterance inside double quotes are considered as direct quotes. The sentences which convey an utterance without double quotes, and those describing the action of a disputant are considered as indirect quotes (See the translated example 1 below). The indirect quotes are identified based on the morphology of the ending word. The ending word of the indirect quotes frequently has a verb as its root or includes a verbalization suffix. Other sentences, typically, those describing the reporter‟s interpretation or comments are not considered as quotes. (See example sentence 2. The ending word of the original sentence is written in boldface). (1) The government clarified that there won‟t be any talks unless North Korea apologizes for the attack. (2) The government‟s belief is that a stern response is the only solution for the current crisis A named entity combined with a topic particle or a subject particle is identified as the subject of these quotes. We detect the name of an organization, person, or country using the Korean Named Entity Recognizer (Lee et al. 2006). 
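A rough sketch of the subject-of-quote extraction just described, assuming a simplified interface: quotation marks signal a direct quote, a stand-in morphology test signals an indirect quote, and the subject is a named entity immediately followed by a topic or subject particle. The NER input is hypothetical (standing in for the recognizer of Lee et al. 2006), and the ending-word test is only a placeholder for the morphological analysis of Shim and Yang (2002).

import re

TOPIC_OR_SUBJECT_PARTICLES = ("은", "는", "이", "가")  # Korean topic/subject markers

def find_quote_subject(sentence, named_entities):
    """Return (quote_type, subject) for a quote sentence, or None otherwise.

    named_entities: entity strings found in the sentence by an external NER
    component (a hypothetical interface standing in for Lee et al. 2006).
    """
    if re.search(r'"[^"]+"', sentence) or re.search(r'“[^”]+”', sentence):
        quote_type = "direct"
    elif ends_in_reported_speech(sentence):
        quote_type = "indirect"
    else:
        return None  # e.g., the reporter's own interpretation or comments
    # The subject is a named entity combined with a topic or subject particle.
    for entity in named_entities:
        if any(entity + p in sentence for p in TOPIC_OR_SUBJECT_PARTICLES):
            return quote_type, entity
    return quote_type, None

def ends_in_reported_speech(sentence):
    # Placeholder for the morphology check on the ending word (verbal root or
    # verbalization suffix); a real system would consult a POS tagger.
    return sentence.rstrip(" .").endswith("다")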
A simple anaphora resolution is conducted to identify subjects also from abbreviated references or pronouns in subsequent quotes. 4.2 Disputant Partitioning We develop key opponent-based partitioning method for disputant partitioning. The method first identifies two key opponents, each representing 343 one side, and uses them as a pivot for partitioning other disputants. The other disputants are divided according to their relation with the key opponents, i.e., which key opponent they stand for or against. The intuition behind the method is that there usually exists key opponents who represent the contention, and many participants argue about the key opponents whereas they seldom recognize and talk about minor disputants. For instance, in the contention on “investigation result of the Cheonan sinking incident”, the government of North Korea and that of South Korea are the key opponents; other disputants, such as politicians, experts, civic group of South Korea, the government of U.S., and that of China, mostly speak about the key opponents. Thus, it is effective to analyze where the disputants stand regarding their attitude toward the key opponents. Selecting key opponents: In order to identify the key opponents of the issue, we search for the disputants who frequently criticize, and are also criticized by other disputants. As the key opponents get more news coverage, they have more chance to articulate their argument, and also have more chance to face counter-arguments by other disputants. This is done in two steps. First, for each disputant, we analyze whom he or she criticizes and by whom he or she is criticized. The method goes through each sentence of the article set and searches for both disputant‟s criticisms and the criticisms about the disputant. Based on the criticisms, it analyzes relationships among disputants. A sentence is considered to express the disputant‟s criticism to another disputant if the following holds: 1) the sentence is a quote, 2) the disputant is the subject of the quote, 3) another disputant appears in the quote, and 4) a negative lexicon appears in the sentence. On the other hand, if the disputant is not the subject but appears in the quote, the sentence is considered to express a criticism about the disputant made by another disputant (See example 3. The disputants are written in italic, and negative words are in boldface.). (3) the government defined that “the attack of North Korea is an act of invasion and also a violation of North-South Basic Agreement” The negative lexicon we use is carefully built from the Wilson lexicon (Wilson et al. 2005). We translated all the terms in it using the Google translation, and manually inspected the translated result to filter out inappropriate translations and the terms that are not negative in the Korean context. Second, we apply an adapted version of HITS graph algorithm to find major disputants. For this, the criticizing relationships obtained in the first step are represented in a graph. Each disputant is modeled as a node, and a link is made from a criticizing disputant to a criticized disputant. South Korea government North Korea government Ministry of Defense China Opposition party (A: 0.3, H: 0.2) (A: 0, H: 0.1) (A: 0.28, H: 0.15) (A: 0, H: 0.1) A: Authority score H: Hub score Figure 1. Example HITS graph illustration Originally, the HITS algorithm (Kleinberg, 1999) is designed to rate Web pages regarding the link structure. 
The feature of the algorithm is that it separately models the value of outlinks and inlinks. Each node, i.e., a web page, has two scores: the authority score, which reflects the value of inlinks toward itself, and the hub score, which reflects the value of its outlinks to others. The hub score of a node increases if it links to nodes with high authority score, and the authority score increases if it is pointed by many nodes with high hub score. We adopt the HITS algorithm due to above feature. It enables us to separately measure the significance of a disputant‟s criticism (using the hub score) and the criticism about the disputant (using the authority score). We aim to find the nodes which have both high hub score and high authority score; the key opponents will have many links to others and also be pointed by many nodes. The modified HITS algorithm is shown in Figure 2. We make some adaptation to make the algorithm reflect the disputants‟ characteristics. The initial hub score of a node is set to the number of quotes in which the corresponding disputant is the subject. The initial authority score is set to the number of quotes in which the disputant appears but not as the subject. In addition, the weight of each link (from a criticizing disputant to a criticized disputant) is set to the number of sentences that express such criticism. We select the nodes which show relatively high hub score and high authority score compared to other nodes. We rank the nodes according to the sum of hub and authority scores, and select from 344 the top ranking node. The node is not selected if its hub or authority score is zero. The selection is finished if more than two nodes are selected and the sum of hub and authority scores is less than half of the sum of the previously selected node. Modified HITS(G,W,k) G = <V, E> where V is a set of vertex, a vertex vi represents a disputant E is a set of edges, an edge eij represents a criticizing quote from disputant i to j W = {wij| weight of edge eij} For all vi V Auth1(vi) = # of quotes of which the subject is disputant i Hub1 (vi) = # of quotes of which disputant i appears, but not as the subject For t = 1 to k: Autht+1(vi) = Hubt+1 (vi) = Normalize Autht+1(vi) and Hubt+1 (vi) Figure 2. Algorithm of the Modified HITS More than two disputants can be selected if more than one disputant is active from a specific side. In such cases, we choose the two disputants whose criticizing relationship is the strongest among the selected ones, i.e., the two who show the highest ratio of criticism between them. Partitioning minor disputants: Given the two key opponents, we partition the rest of disputants based on their relations with the key opponents. For this, we identify whether each disputant has positive or negative relations with the key opponents. The disputant is classified to the side of the key opponent who shows more positive relations. If the disputant shows more negative relations, the disputant is classified to the opposite side. We analyze the relationship not only from the article set but also from the web news search results. The minor disputants may not be covered importantly in the article set; hence, it can be difficult to obtain sufficient data for analysis. The web news search results provide supplementary data for the analysis of relationships. We develop four features to capture the positive and negative relationships between the disputants. 
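Figure 2 gives the pseudocode of the modified HITS procedure; as a concrete reference point, the sketch below implements weighted HITS-style updates over the criticism graph described above. The update equations, iteration count, and function interface are assumptions for illustration; initialization follows the description above (hub scores from quotes where the disputant is the subject, authority scores from quotes where the disputant appears but is not the subject).

def modified_hits(nodes, edges, init_hub, init_auth, iters=20):
    """Weighted HITS over the disputant criticism graph.

    nodes: list of disputant ids.
    edges: dict mapping (critic, criticized) -> number of criticizing quotes.
    init_hub / init_auth: initial scores derived from quote counts.
    """
    hub, auth = dict(init_hub), dict(init_auth)
    for _ in range(iters):
        # Authority grows with weighted criticism received from strong hubs.
        new_auth = {v: sum(w * hub[c] for (c, t), w in edges.items() if t == v)
                    for v in nodes}
        # Hub grows with weighted criticism directed at strong authorities.
        new_hub = {v: sum(w * auth[t] for (c, t), w in edges.items() if c == v)
                   for v in nodes}
        za = sum(new_auth.values()) or 1.0
        zh = sum(new_hub.values()) or 1.0
        auth = {v: s / za for v, s in new_auth.items()}
        hub = {v: s / zh for v, s in new_hub.items()}
    # Key-opponent candidates: both scores non-zero, ranked by their sum.
    ranked = sorted((v for v in nodes if hub[v] > 0 and auth[v] > 0),
                    key=lambda v: hub[v] + auth[v], reverse=True)
    return ranked, hub, auth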
1) Positive Quote Rate (PQRab): Given two disputants (a key opponent a, and a minor disputant b), the feature measures the ratio of positive quotes between them. A sentence is considered as a positive quote if the following conditions hold: the sentence is a direct or indirect quote, the two disputants appear in the sentence, one is the subject of the quote, and a positive lexicon appears in the sentence. The number of such sentences is divided by the number of all quotes in which the two disputants appear and one appears as the subject. 2) Negative Quote Rate (NQRab): This feature is an opposite version of PQR. It measures the ratio of negative quotes between the two disputants. The same conditions are considered to detect negative quotes except that negative lexicon is used instead of positive lexicon. 3) Frequency of Standing Together (FSTab): This feature attempts to capture whether the two disputants share a position, e.g., “South Korea and U.S. both criticized North Korea for…” It counts how many times they are co-located or connected with the conjunction “and” in the sentences. 4) Frequency of Division (FDab): This feature is an opposite version of the FST. It counts how many times they are not co-located in the sentences. The same features are also calculated from the web news search results; we collect news articles of which the title includes the two disputants, i.e., a key opponent a and a minor disputant b. The calculation method of PQR and NQR is slightly adapted since the titles are mostly not complete sentences. For PQR (NQR), it counts the titles which the two disputants appear with a positive (negative) lexicon. The counted number is divided by the number of total search results. The calculation method of FST and FD is the same except that they are calculated from the titles. We combine the features obtained from web news search with the corresponding ones obtained from the article set by calculating a weighted sum. We currently give equal weights. The disputants are partitioned by the following rule: given a minor disputant a, and the two key opponents b and c, classify a to b‟s side if, (PQRab – NQRab) > (PQRac – NQRac) or ((FSTab > FDab) and (FSTac = 0)); classify a to c‟s side if, (PQRac – NQRac) > (PQRab – NQRab) or ((FSTac > FDac) and (FSTab = 0)); classify a to other, otherwise. 4.3 Article Classification Each news article of the set is classified by analyzing which side is importantly covered. The method classifies the articles into three categories, either to one of the two sides or the category “other”. 345 We observed that the major components which shape an article on a contention are quotes from disputants and journalists‟ commentary. Thus, our method considers two points for classification: first, from which side the article‟s quotes came; second, for the rest of the article‟s text, the similarity of the text to the arguments of each side. As for the quotes of an article, the method calculates the proportion of the quotes from each side based on the disputant partitioning result. As for the rest of the sentences, a similarity analysis is conducted with an SVM classifier. The classifier takes a sentence as input, determines its class to one of the three categories, i.e., one of the two sides, or other. It is trained with the quotes from each side (tf.idf of unigram and bigram is used as features). The same number of quotes from each side is used for training. 
The training data is pseudo-relevant: it is automatically obtained based on the partitioning result of the previous stage. An article is classified to a specific side if more of its quotes are from that side and more sentences are similar to that side: given an article a, and the two sides b and c, classify a to b if classify a to c if classify a to other, otherwise. where SU: number of all sentences of the article Qi: number of quotes from the side i. Qij: number of quotes from either side i or j. Si: number of sentences classified to i by SVM. Sij:: number of sentences classified to either i or j. We currently set the parameters heuristically. We set 0.7 and 0.6 for the two parameters α and β respectively. Thus, for an article written purely with quotes, the article is classified to a specific side if more than 70% of the quotes are from that side. On the other hand, for an article which does not include quotes from any side, more than 60% of the sentences have to be determined similar to a specific side‟s quotes. We set a lower value for β to classify articles with less number of biased sentences (Articles often include non-quote sentences unrelated to any side to give basic information). 5 Evaluation and Discussion Our evaluation of the method is twofold: first, we evaluate the disputant partitioning results, second, the accuracy of classification. The method was evaluated using the same data set used for the classification frame comparison experiment. A gold result was created through the three human annotators. To evaluate the disputant partitioning results, we had the annotators to extract the disputants of each issue, divide them into opposing two groups. We then created a gold partitioning result, by taking a union of the three annotators‟ results. A gold classification is also created from the classification of the annotators. We resolved the disagreements between the annotators‟ results by following the decision of the majority. 5.1 Evaluation of Disputant Partitioning We evaluated the partitioning result of the two opposing groups, denoted as G1 and G2. The performance is measured using precision and recall. Table 2 presents the results. The precision of the partitioning was about 70% on average. The false positives were mostly the disputants who appear only a few times both in the article set and the news search results. As they appeared rarely, there was not enough data to infer their position. The effect of these false positives in article classification was limited. The recall was slightly lower than precision. This was mainly because some disputants were omitted in the disputant extraction stage. The NER we used occasionally missed the names of unpopular organizations, e.g., civic groups, and the extraction rule failed to capture the subject in some complex sentences. However, most disputants who frequently appear in the article set were extracted and partitioned appropriately. Table 2. Disputant Partitioning Result 5.2 Evaluation of Article Classification We evaluate our method and compare it with two unsupervised methods below. Similarity-based clustering (Sim.): The method implements a typical method. 
It clusters articles of an issue into three groups based on text similari 346 Issue # Method wF Group 1 Group 2 Other Issue # Method wF Group 1 Group 2 Other F P R F P R F P R F P R F P R F P R 1 DrC 0.47 0.64 0.47 1.00 0.62 1.00 0.44 N/A 0.00 0.00 8 DrC 0.90 0.86 0.75 1.00 1.00 1.00 1.00 0.86 1.00 0.75 QbC 0.50 0.62 0.47 0.89 0.71 1.00 0.55 N/A 0.00 0.00 QbC 0.48 0.57 0.50 0.67 0.57 0.50 0.67 0.33 0.50 0.25 Sim. 0.27 0.20 1.00 0.11 0.20 1.00 0.11 0.47 0.30 1.00 Sim. 0.56 0.67 0.67 0.67 0.50 0.40 0.67 0.50 1.00 0.33 2 DrC 0.65 0.67 0.62 0.73 0.86 1.00 0.75 0.53 0.57 0.50 9 DrC 0.77 N/A 0.00 N/A 0.57 0.50 0.67 0.82 1.00 0.70 QbC 0.65 0.76 0.80 0.73 0.60 0.50 0.75 0.53 0.57 0.50 QbC 0.79 N/A 0.00 N/A 0.67 0.67 0.67 0.82 1.00 0.70 Sim. 0.37 0.63 0.48 0.91 N/A 0.00 0.00 0.22 1.00 0.13 Sim. 0.49 N/A 0.00 N/A 0.00 0.00 0.00 0.63 0.67 0.60 3 DrC 0.72 0.57 0.40 1.00 0.67 1.00 0.50 0.86 0.75 1.00 10 DrC 0.66 0.71 0.56 1.00 0.73 1.00 0.57 0.40 0.50 0.33 QbC 0.74 0.57 0.40 1.00 0.75 1.00 0.60 0.77 0.71 0.83 QbC 0.72 0.77 0.63 1.00 0.77 0.83 0.71 0.50 1.00 0.33 Sim. 0.59 N/A 0.00 0.00 0.70 0.62 0.80 0.60 0.75 0.50 Sim. 0.40 0.33 1.00 0.20 0.44 1.00 0.29 0.40 0.25 1.00 4 DrC 0.80 0.82 0.69 1.00 0.86 1.00 0.75 0.57 0.67 0.50 11 DrC 0.61 0.73 0.80 0.67 0.50 0.43 0.60 0.57 0.67 0.50 QbC 0.81 0.90 0.82 1.00 0.86 1.00 0.75 0.44 0.40 0.50 QbC 0.39 0.62 0.57 0.67 0.20 0.20 0.20 0.29 0.33 0.25 Sim. 0.67 0.80 1.00 0.67 0.80 0.67 1.00 N/A 0.00 0.00 Sim. 0.47 0.63 0.46 1.00 0.33 1.00 0.20 0.40 1.00 0.25 5 DrC 0.60 0.63 0.50 0.83 0.71 0.83 0.63 0.33 0.50 0.25 12 DrC 0.67 0.29 0.20 0.50 0.67 0.67 0.67 0.77 1.00 0.63 QbC 0.55 0.40 0.50 0.33 0.71 0.67 0.75 0.44 0.40 0.50 QbC 0.38 0.33 0.25 0.50 0.44 0.33 0.67 0.36 0.47 0.25 Sim. 0.51 0.63 0.46 1.00 0.67 1.00 0.50 N/A 0.00 0.00 Sim. 0.43 N/A 0.00 0.00 0.55 0.38 1.00 0.50 0.75 0.38 6 DrC 0.89 N/A 0.00 N/A 0.89 1.00 0.80 0.89 1.00 0.80 13 DrC 0.65 0.79 0.69 0.92 0.33 1.00 0.20 0.67 1.00 0.50 QbC 0.50 N/A 0.00 N/A 0.50 0.67 0.40 0.50 0.67 0.40 QbC 0.59 0.75 0.75 0.75 0.33 1.00 0.20 0.29 0.20 0.50 Sim. 0.55 N/A 0.00 N/A 0.77 0.63 1.00 0.33 1.00 0.20 Sim. 0.54 0.71 0.63 0.83 0.33 1.00 0.20 N/A 0.00 0.00 7 DrC 0.48 0.67 1.00 0.50 0.71 0.55 1.00 N/A N/A 0.00 14 DrC 0.61 0.77 0.77 0.77 0.50 0.57 0.44 0.25 0.20 0.33 QbC 0.48 0.67 1.00 0.50 0.62 0.53 0.73 0.17 0.20 0.14 QbC 0.66 0.83 0.75 0.92 0.53 0.67 0.44 0.33 0.33 0.33 Sim. 0.44 0.40 0.27 0.75 0.57 0.60 0.55 0.25 1.00 0.14 Sim. 0.37 0.29 1.00 0.17 0.60 0.43 1.00 N/A 0.00 0.00 Issue # Total G1 G2 Other 1 24 9 9 6 2 23 11 4 8 3 18 2 10 6 4 25 9 12 4 5 18 5 9 4 6 10 0 5 5 7 22 4 11 7 8 10 3 3 4 9 13 0 3 10 10 15 5 7 3 11 15 6 5 4 12 13 2 3 8 13 19 12 5 2 14 25 13 10 2 *N/A: The metric could not be calculated in some cases. This happened when no articles were classified to a category. Table 3. Number of articles of each issue and group (left), and classification performance (right) ty. It uses tf.idf of unigram and bigram as features, and cosine similarity as the similarity measure. We used the K-means clustering algorithm. Quote-based classification (QbC.): The method is a partial implementation of our method. The disputant extraction and disputant partitioning is performed identically; however, it classifies news articles merely based on quotes. An article is classified to one of the two opposing sides if more than 70% of the quotes are from that side, or to the “other” category otherwise. Results: We evaluated the classification result of the three categories, the two groups G1 and G2, and the category Other. 
The performance is measured using precision, recall, and f-measure. We additionally used the weighted f-measure (wF) to aggregate the f-measure of the three categories. It is the weighted average of the three f-measures. The weight is proportional to the number of articles in each category of the gold result. The disputant relation-based method (DrC) performed better than the two comparison methods. The overall average of the weighted f-measure among issues was 0.68, 0.59, and 0.48 for the DrC, QbC, and Sim. method, respectively (See Table 3). The performance of the similarity-based clustering was lower than that of the other two in most issues. A number of works have reported that text similarity is reliable in stance classification in political domains. These experiments were conducted in political debate corpus (Lin et al. 2006). However, news article set includes a number of articles covering different topics irrelevant to the arguments of the disputants. For example, there can be an article describing general background of the contention. Similarity-based clustering approach reacted sensitively to such articles and failed to capture the difference of the covered side. Quote-based classification performs better than similarity-based approach as it classifies articles primarily based on the quoted disputants. The performance is comparable to DrC in many issues. The method performs similarly to DrC if most articles of an issue include many qutes. DrC performs better for other issues which include a number of articles with only a few quotes. Error analysis: As for our method, we observed three main reasons of misclassification. 1) Articles with few quotes: Although the proposed method better classifies such articles than the quote-based classification, there were some misclassifications. There are sentences that are not directly related to the argument of any side, e.g., plain description of an event, summarizing the development of the issue, etc. The method made errors while trying to decide to which side these sentences are close to. Detecting such sentences and avoiding decisions for them would be one way of improvement. Research on classification 347 of subjective and objective sentences would be helpful (Wiebe et al. 99). 2) Article criticizing the quoted disputants: There were some articles criticizing the quoted disputants. For example, an article quoted the president frequently but occasionally criticized him between the quotes. The method misclassified such articles as it interpreted that the article is mainly delivering the president‟s argument. 3) Errors in disputant partitioning: Some misclassifications were made due to the errors in the disputant partitioning stage, specifically, those who were classified to a wrong side. Articles which refer to such disputants many times were misclassified. 6 Conclusion We study the problem of classifying news articles on contentious issues. It involves new challenges as the discourse of contentious issues is complex, and news articles show different characteristics from commonly studied corpus, such as product reviews. We propose opponent-based frame, and demonstrate that it is a clear and effective classification frame to contrast arguments of contentious issues. We develop disputant relation-based classification and show that the method outperforms a text similarity-based approach. Our method assumes polarization for contentious issues. This assumption was valid for most of the tested issues. 
For a few issues, there were some participants who do not belong to either side; however, they usually did not take a particular position nor make strong arguments. Thus, the effect on classification performance was limited. Discovering and developing methods for issues which involve more than two disputants groups is a future work. References Rakesh Agrawal, Sridhar Rajagopalan, Ramakrishnan Srikant, and Yirong Xu. 2003. Mining newsgroups using networks arising from social behavior. In Proceedings of WWW. Baker, B. 1994. How to Identify, Expose and Correct Liberal Media Bias. Media Research Center. Mohit Bansal, Claire Cardie, and Lillian Lee. 2008. The power of negative thinking: Exploiting label disagreement in the min-cut classification framework. In Proceedings of the 22nd International Conference on Computational Linguistics (COLING-2008). Jon M. Kleinberg. 1999. Authoritative sources in a hyperlinked environment. In Journal of ACM, 46(5): 604-632. Landis JR, Koch G. 1977. The measurement of observer agreement for categorical data. Biometrics 33:159-174. Changki Lee, Yi-Gyu Hwang, Hyo-Jung Oh, Soojong Lim, Jeong Heo, Chung-Hee Lee, HyeonJin Kim, Ji-Hyun Wang, Myung-Gil Jang. 2006. Fine-Grained Named Entity Recognition using Conditional Random Fields for Question Answering, In Proceedings of Human & Cognitive Language Technology (HCLT), pp. 268~272. (in Korean) Wei-Hao Lin, Theresa Wilson, Janyce Wiebe, and Alexander Hauptmann. 2006. Which side are you on? Identifying perspectives at the document and sentence levels. In Proceedings of the 10th Conference on Computational Natural Language Learning (CoNLL-2006), pages 109– 116, New York. Mark M. Miller and Bonnie P. Riechert. 2001. Spiral Opportunity and Frame Resonance: Mapping Issue Cycle in News and Public Discourse. In Framing Public Life: Perspectives on Media and our Understanding of the Social World, NJ: Lawrence Erlbaum Associates. I Ounis, M de Rijke, C Macdonald, G Mishne, and I Soboroff. 2006. Overview of the TREC-2006 Blog Track. In Proceedings of TREC. Pang, Bo, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up? Sentiment Classification using Machine Learning Techniques, Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP). Paul, M. J., Zhai, C., Girju, R. 2010. Summarizing Contrastive Viewpoints in Opinionated Text. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing (EMNLP). 348 Schon, D.A., and Rien, M. 1994. Frame reflection: Toward the resolution of intractable policy controversies. New York: Basic Books. Y. Seki, D. Evans, L. Ku, L. Sun, H. Chen, and N. Kando. 2008. Overview of Multilingual Opinion Analysis Task at NTCIR-7. In Proceedings of 7th NTCIR Evaluation Workshop, pages 185203 Kwangseob Shim and Jaehyung Yang. 2002. MACH : A Supersonic Korean Morphological Analyzer, Proceedings of the 19th International Conference on Computational Linguistics (COLING-2002), pp.939-945. Swapna Somasundaran and Janyce Wiebe. 2009. Recognizing stances in online debates. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 226– 234, Suntec, Singapore, August. Association for Computational Linguistics. Swapna Somasundaran and Janyce Wiebe. 2010. Recognizing stances in ideological online debates. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text (CAAGET ’10). 
Matt Thomas, Bo Pang, and Lillian Lee. 2006. Get outthe vote: Determining support or opposition from congressional floor-debate transcripts. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 327–335, Sydney, Australia, July. Association for Computational Linguistics. Turney, Peter D. 2002. Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews, Proceedings of ACL-02, Philadelphia, Pennsylvania, 417424 Wiebe, Janyce M., Bruce, Rebecca F., & O'Hara, Thomas P. 1999. Development and use of a gold standard data set for subjectivity classifications. In Proc. 37th Annual Meeting of the Assoc. for Computational Linguistics (ACL-99). June, pp. 246-253. T. Wilson, J. Wiebe, and P. Hoffmann. 2005. Recognizing contextual polarity in phrase-level sentiment analysis. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). 349
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 350–358, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Content Models with Attitude Christina Sauper, Aria Haghighi, Regina Barzilay Computer Science and Artificial Intelligence Laboratory Massachusetts Institute of Technology [email protected], [email protected], [email protected] Abstract We present a probabilistic topic model for jointly identifying properties and attributes of social media review snippets. Our model simultaneously learns a set of properties of a product and captures aggregate user sentiments towards these properties. This approach directly enables discovery of highly rated or inconsistent properties of a product. Our model admits an efficient variational meanfield inference algorithm which can be parallelized and run on large snippet collections. We evaluate our model on a large corpus of snippets from Yelp reviews to assess property and attribute prediction. We demonstrate that it outperforms applicable baselines by a considerable margin. 1 Introduction Online product reviews have become an increasingly valuable and influential source of information for consumers. Different reviewers may choose to comment on different properties or aspects of a product; therefore their reviews focus on different qualities of the product. Even when they discuss the same properties, their experiences and, subsequently, evaluations of the product can differ dramatically. Thus, information in any single review may not provide a complete and balanced view representative of the product as a whole. To address this need, online retailers often use simple aggregation mechanisms to represent the spectrum of user sentiment. For instance, product pages on Amazon prominently display the distribution of numerical scores across reCoherent property cluster + The martinis were very good. The drinks - both wine and martinis - were tasty. The wine list was pricey. Their wine selection is horrible. Incoherent property cluster + The sushi is the best I’ve ever had. Best paella I’d ever had. The fillet was the best steak we’d ever had. It’s the best soup I’ve ever had. Table 1: Example clusters of restaurant review snippets. The first cluster represents a coherent property of the underlying product, namely the cocktail property, and assesses distinctions in user sentiment. The latter cluster simply shares a common attribute expression and does not represent snippets discussing the same product property. In this work, we aim to produce the first type of property cluster with correct sentiment labeling. views, providing access to reviews at different levels of satisfaction. The goal of our work is to provide a mechanism for review content aggregation that goes beyond numerical scores. Specifically, we are interested in identifying fine-grained product properties across reviews (e.g., battery life for electronics or pizza for restaurants) as well as capturing attributes of these properties, namely aggregate user sentiment. For this task, we assume as input a set of product review snippets (i.e., standalone phrases such as “battery life is the best I’ve found”) rather than complete reviews. There are many techniques for extracting this type of snippet in existing work; we use the Sauper et al. (2010) system. 350 At first glance, this task can be solved using existing methods for review analysis. 
These methods can effectively extract product properties from individual snippets along with their corresponding sentiment. While the resulting property-attribute pairs form a useful abstraction for cross-review analysis, in practice direct comparison of these pairs is challenging. Consider, for instance, the two clusters of restaurant review snippets shown in Figure 1. While both clusters have many words in common among their members, only the first describes a coherent property cluster, namely the cocktail property. The snippets of the latter cluster do not discuss a single product property, but instead share similar expressions of sentiment. To solve this issue, we need a method which can correctly identify both property and sentiment words. In this work, we propose an approach that jointly analyzes the whole collection of product review snippets, induces a set of learned properties, and models the aggregate user sentiment towards these properties. We capture this idea using a Bayesian topic model where a set of properties and corresponding attribute tendencies are represented as hidden variables. The model takes product review snippets as input and explains how the observed text arises from the latent variables, thereby connecting text fragments with corresponding properties and attributes. The advantages of this formulation are twofold. First, this encoding provides a common ground for comparing and aggregating review content in the presence of varied lexical realizations. For instance, this representation allows us to directly compare how many reviewers liked a given property of a product. Second, our model yields an efficient mean-field variational inference procedure which can be parallelized and run on a large number of review snippets. We evaluate our approach in the domain of snippets taken from restaurant reviews on Yelp. In this collection, each restaurant has on average 29.8 snippets representing a wide spectrum of opinions about a restaurant. The evaluation we present demonstrates that the model can accurately retrieve clusters of review fragments that describe the same property, yielding 20% error reduction over a standalone clustering baseline. We also show that the model can effectively identify binary snippet attributes with 9.2% error reduction over applicable baselines, demonstrating that learning to identify attributes in the context of other product reviews yields significant gains. Finally, we evaluate our model on its ability to identify product properties for which there is significant sentiment disagreement amongst user snippets. This tests our model’s capacity to jointly identify properties and assess attributes. 2 Related Work Our work on review aggregation has connections to three lines of work in text analysis. First, our work relates to research on extraction of product properties with associated sentiment from review text (Hu and Liu, 2004; Liu et al., 2005a; Popescu et al., 2005). These methods identify relevant information in a document using a wide range of methods such as association mining (Hu and Liu, 2004), relaxation labeling (Popescu et al., 2005) and supervised learning (Kim and Hovy, 2006). While our method also extracts product properties and sentiment, our focus is on multi-review aggregation. This task introduces new challenges which were not addressed in prior research that focused on perdocument analysis. A second related line of research is multidocument review summarization. 
Some of these methods directly apply existing domainindependent summarization methods (Seki et al., 2006), while others propose new methods targeted for opinion text (Liu et al., 2005b; Carenini et al., 2006; Hu and Liu, 2006; Kim and Zhai, 2009). For instance, these summaries may present contrastive view points (Kim and Zhai, 2009) or relay average sentiment (Carenini et al., 2006). The focus of this line of work is on how to select suitable sentences, assuming that relevant review features (such as numerical scores) are given. Since our emphasis is on multi-review analysis, we believe that the information we extract can benefit existing summarization systems. Finally, a number of approaches analyze review documents using probabilistic topic models (Lu and Zhai, 2008; Titov and McDonald, 2008; Mei et al., 2007). While some of these methods focus primar351 ily on modeling ratable aspects (Titov and McDonald, 2008), others explicitly capture the mixture of topics and sentiments (Mei et al., 2007). These approaches are capable of identifying latent topics in the collection in opinion text (e.g., weblogs) as well as associated sentiment. While our model captures similar high-level intuition, it analyzes fine-grained properties expressed at the snippet level, rather than document-level sentiment. Delivering analysis at such a fine granularity requires a new technique. 3 Problem Formulation In this section, we discuss the core random variables and abstractions of our model. We describe the generative models over these elements in Section 4. Product: A product represents a reviewable object. For the experiments in this paper, we use restaurants as products. Snippets: A snippet is a user-generated short sequence of tokens describing a product. Input snippets are deterministically taken from the output of the Sauper et al. (2010) system. Property: A property corresponds to some finegrained aspect of a product. For instance, the snippet “the pad thai was great” describes the pad thai property. We assume that each snippet has a single property associated with it. We assume a fixed number of possible properties K for each product. For the corpus of restaurant reviews, we assume that the set of properties are specific to a given product, in order to capture fine-grained, relevant properties for each restaurant. For example, reviews from a sandwich shop may contrast the club sandwich with the turkey wrap, while for a more general restaurant, the snippets refer to sandwiches in general. For other domains where the properties are more consistent, it is straightforward to alter our model so that properties are shared across products. Attribute: An attribute is a description of a property. There are multiple attribute types, which may correspond to semantic differences. We assume a fixed, pre-specified number of attributes N. For example, in the case of product reviews, we select N = 2 attributes corresponding to positive and negative sentiment. In the case of information extraction, it may be beneficial to use numeric and alphabetic types. One of the goals of this work in the review domain is to improve sentiment prediction by exploiting correlations within a single property cluster. For example, if there are already many snippets with the attribute representing positive sentiment in a given property cluster, additional snippets are biased towards positive sentiment as well; however, data can always override this bias. 
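A minimal sketch of the abstractions defined in this section, fixing N = 2 sentiment attributes; the class and field names are illustrative and not tied to any particular implementation.

from dataclasses import dataclass, field
from typing import List, Optional

POSITIVE, NEGATIVE = 0, 1  # the N = 2 attribute types used for reviews

@dataclass
class Snippet:
    words: List[str]                   # observed tokens of the snippet
    property_id: Optional[int] = None  # latent: one of K product-specific properties
    attribute: Optional[int] = None    # latent: POSITIVE or NEGATIVE

@dataclass
class Product:
    name: str
    num_properties: int                # K, fixed per product
    snippets: List[Snippet] = field(default_factory=list)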
Snippets themselves are always observed; the goal of this work is to induce the latent property and attribute underlying each snippet. 4 Model Our model generates the words of all snippets for each product in a collection of products. We use si,j,w to represent the wth word of the jth snippet of the ith product. We use s to denote the collection of all snippet words. We also assume a fixed vocabulary of words V . We present an overview of our generative model in Figure 1 and describe each component in turn: Global Distributions: At the global level, we draw several unigram distributions: a global background distribution θB and attribute distributions θa A for each attribute. The background distribution is meant to encode stop-words and domain whitenoise, e.g., food in the restaurants domain. In this domain, the positive and negative attribute distributions encode words with positive and negative sentiments (e.g., delicious or terrible). Each of these distributions are drawn from Dirichlet priors. The background distribution is drawn from a symmetric Dirichlet with concentration λB = 0.2. The positive and negative attribute distributions are initialized using seed words (Vseeda in Figure 1). These seeds are incorporated into the attribute priors: a non-seed word gets ϵ hyperparameter and a seed word gets ϵ + λA, where ϵ = 0.25 and λA = 1.0. Product Level: For the ith product, we draw property unigram distributions θi,1 P , . . . , θi,K P for each of the possible K product properties. The property distribution represents product-specific content distributions over properties discussed in reviews of the product; for instance in the restaurant domains, properties may correspond to distinct menu items. Each θi,k P is drawn from a symmetric Dirichlet prior 352 Global Level: - Draw background distribution θB ∼DIRICHLET(λBV ) - For each attribute type a, - Draw attribute distribution θa A ∼DIRICHLET(ϵV + λAVseeda) Product Level: - For each product i, - Draw property distributions θk P ∼DIRICHLET(λP V ) for k = 1, . . . , K - Draw property attribute binomial φi,k ∼BETA(αA, βA) for k = 1, . . . , K - Draw property multinomial ψi ∼DIRICHLET(λMK) Snippet Level: - For each snippet j in ith product, - Draw snippet property Zi,j P ∼ψi - Draw snippet attribute Zi,j A ∼φZij P - Draw sequence of word topic indicators Zi,j,w W ∼Λ|Zi,j,w−1 W - Draw snippet word given property Zi,j P and attribute Zi,j A si,j,w ∼ θ i,Zi,j P P , when Zi,j,w W = P θ Zi,j A A , when Zi,j,w W = A θB, when Zi,j,w W = B θB θa A ψ φk Zi−1 W Zi W Zi+1 W wi−1 wi wi+1 HMM over snippet words Background word distribution Attribute word distributions Product Snippet ZP ZA Property multinomial Property attribute binomials θk P Property word distributions Property Snippet attribute Snippet property θa A ZP , θP ZA, θA θB Attribute Figure 1: A high-level verbal and graphical description for our model in Section 4. We use DIRICHLET(λV ) to denote a finite Dirichlet prior where the hyper-parameter counts are a scalar times the unit vector of vocabulary items. For the global attribute distribution, the prior hyper-parameter counts are ϵ for all vocabulary items and λA for Vseeda, the vector of vocabulary items in the set of seed words for attribute a. with hyper-parameter λP = 0.2. For each property k = 1, . . . , K. φi,k, we draw a binomial distribution φi,k. This represents the distribution over positive and negative attributes for that property; it is drawn from a beta prior using hyper-parameters αA = 2 and βA = 2. 
We also draw a multinomial ψi over K possible properties from a symmetric Dirichlet distribution with hyperparameter λM = 1, 000. This distribution is used to draw snippet properties. Snippet Level: For the jth snippet of the ith product, a property random variable Zi,j P is drawn according to the multinomial ψi. Conditioned on this choice, we draw an attribute Zi,j A (positive or negative) from the property attribute distribution φi,Zj,j P . Once the property Zi,j P and attribute Zi,j A have been selected, the tokens of the snippet are generated using a simple HMM. The latent state underlying a token, Zi,j,w W , indicates whether the wth word comes from the property distribution, attribute distribution, or background distribution; we use P, A, or B to denote these respective values of Zi,j,w W . The sequence Zi,j,1 W , . . . , Zi,j,m W is generated using a first-order Markov model. The full transition parameter matrix Λ parametrizes these decisions. Conditioned on the underlying Zi,j,w W , a word, si,j,w is drawn from θi,j P , θ i,Zi,j P A , or θB for the values P,A, or B respectively. 5 Inference The goal of inference is to predict the snippet property and attribute distributions over each snippet given all the observed snippets P(Zi,j P , Zi,j A |s) for all products i and snippets j. Ideally, we would like to marginalize out nuisance random variables and distributions. Specifically, we approximate the full 353 model posterior using variational inference:1 P(ψ, θP, θB, θA, φ, |s) ≈ Q(ψ, θP, θB, θA, φ) where ψ, θP, φ denote the collection of latent distributions in our model. Here, we assume a full meanfield factorization of the variational distribution; see Figure 2 for the decomposition. Each variational factor q(·) represents an approximation of that variable’s posterior given observed random variables. The variational distribution Q(·) makes the (incorrect) assumption that the posteriors amongst factors are independent. The goal of variational inference is to set factors q(·) so that it minimizes the KL divergence to the true model posterior: min Q(·) KL(P(ψ, θP, θB, θA, φ, |s)∥ Q(ψ, θP, θB, θA, φ) We optimize this objective using coordinate descent on the q(·) factors. Concretely, we update each factor by optimizing the above criterion with all other factors fixed to current values. For instance, the update for the factor q(Zi,j,w W ) takes the form: q(Zi,j,w W ) ← EQ/q(Zi,j,w W ) lg P(ψ, θP, θB, θA, φ, s) The full factorization of Q(·) and updates for all random variable factors are given in Figure 2. Updates of parameter factors are omitted; however these are derived through simple counts of the ZA, ZP , and ZW latent variables. For related discussion, see Blei et al. (2003). 6 Experiments In this section, we describe in detail our data set and present three experiments and their results. Data Set Our data set consists of snippets from Yelp reviews generated by the system described in Sauper et al. (2010). This system is trained to extract snippets containing short descriptions of user sentiment towards some aspect of a restaurant.2 We 1See Liang and Klein (2007) for an overview of variational techniques. 2For exact training procedures, please reference that paper. The [P noodles ] and the [P meat ] were actually [+ pretty good ]. I [+ recommend ] the [P chicken noodle pho ]. The [P noodles ] were [- soggy ]. The [P chicken pho ] was also [+ good ]. The [P spring rolls ] and [P coffee ] were [+ good ] though. The [P spring roll wrappers ] were a [- little dry tasting ]. 
My [+ favorites ] were the [P crispy spring rolls ]. The [P Crispy Tuna Spring Rolls ] are [+ fantastic ]! The [P lobster roll ] my mother ordered was [- dry ] and [- scant ]. The [P portabella mushroom ] is my [+ go-to ] [P sandwich ]. The [P bread ] on the [P sandwich ] was [- stale ]. The slice of [P tomato ] was [- rather measly ]. The [P shumai ] and [P California maki sushi ] were [+ decent ]. The [P spicy tuna roll ] and [P eel roll ] were [+ perfect ]. The [P rolls ] with [P spicy mayo ] were [- not so great ]. I [+ love ] [P Thai rolls ]. Figure 3: Example snippets from our data set, grouped according to property. Property words are labeled P and colored blue, NEGATIVE attribute words are labeled - and colored red, and POSITIVE attribute words are labeled + and colored green. The grouping and labeling are not given in the data set and must be learned by the model. select only the snippets labeled by that system as referencing food, and we ignore restaurants with fewer than 20 snippets. There are 13,879 snippets in total, taken from 328 restaurants in and around the Boston/Cambridge area. The average snippet length is 7.8 words, and there are an average of 42.1 snippets per restaurant, although there is high variance in number of snippets for each restaurant. Figure 3 shows some example snippets. For sentiment attribute seed words, we use 42 and 33 words for the positive and negative distributions respectively. These are hand-selected based on the restaurant review domain; therefore, they include domain-specific words such as delicious and gross. Tasks We perform three experiments to evaluate our model’s effectiveness. First, a cluster prediction task is designed to test the quality of the learned property clusters. Second, an attribute analysis task will evaluate the sentiment analysis portion of the model. Third, we present a task designed to test whether the system can correctly identify properties which have conflicting attributes, which tests both clustering and sentiment analysis. 354 Mean-field Factorization Q(ψ, θP, θB, θA, φ) = q(θB) N Y a=1 q(θa A) ! n Y i K Y k=1 q(θi,k P )q(φi,k) ! Y j q(Zi,j A )q(Zi,j P ) Y w q(Zi,j,w W ) Snippet Property Indicator lg q(Zi,j P = k) ∝Eq(ψi) lg ψi(p) + X w q(Zi,j,w W = P)Eq(θi,k P ) lg θi,k P (si,j,w) + N X a=1 q(Zi,j A = a)Eq(φi,k) lg φi,k(a) Snippet Attribute Indicator lg q(Zi,j A = a) = X k q(Zi,j P = k)Eq(φi,k) lg φi,k(a) + X w q(Zi,j,w W = A)Eq(θa A) lg θa A(si,j,w) Word Topic Indicator lg q(Zi,j,w W = P) ∝lg P(ZW = P) + X k q(Zi,j P = k)Eq(θi,k P ) lg θi,j P (si,j,w) lg q(Zi,j,w W = A) ∝lg P(ZW = A) + X a∈{+,−} q(Zi,j A = a)Eq(θa A) lg θa A(si,j,w) lg q(Zi,j,w W = B) ∝lg P(ZW = B) + Eq(θB) lg θB(si,j,w) Figure 2: The mean-field variational algorithm used during learning and inference to obtain posterior predictions over snippet properties and attributes, as described in Section 5. Mean-field inference consists of updating each of the latent variable factors as well as a straightforward update of latent parameters in round robin fashion. 6.1 Cluster prediction The goal of this task is to evaluate the quality of property clusters; specifically the Zi,j P variable in Section 4. In an ideal clustering, the predicted clusters will be cohesive (i.e., all snippets predicted for a given property are related to each other) and comprehensive (i.e., all snippets which are related to a property are predicted for it). For example, a snippet will be assigned the property pad thai if and only if that snippet mentions some aspect of the pad thai. 
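As a small sketch of how predicted clusters are read off for this evaluation (ours; it assumes the per-snippet posteriors q(Z_P) from Section 5 are available as a matrix with one row per snippet), each snippet is simply grouped under its most probable property.

from collections import defaultdict
import numpy as np

def predicted_clusters(q_ZP, snippets):
    # q_ZP: (num_snippets, K) array of posteriors q(Z_P = k) for one product.
    # snippets: list of token lists, parallel to the rows of q_ZP.
    clusters = defaultdict(list)
    for j, row in enumerate(np.asarray(q_ZP)):
        clusters[int(row.argmax())].append(snippets[j])
    return dict(clusters)

# Toy usage with two snippets and K = 3 properties.
q = [[0.7, 0.2, 0.1], [0.1, 0.1, 0.8]]
print(predicted_clusters(q, [["pad", "thai", "great"], ["spring", "rolls", "dry"]]))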
Annotation For this task, we use a set of gold clusters over 3,250 snippets across 75 restaurants collected through Mechanical Turk. In each task, a worker was given a set of 25 snippets from a single restaurant and asked to cluster them into as many clusters as they desired, with the option of leaving any number unclustered. This yields a set of gold clusters and a set of unclustered snippets. For verification purposes, each task was provided to two different workers. The intersection of both workers’ judgments was accepted as the gold standard, so the model is not evaluated on judgments which disagree. In total, there were 130 unique tasks, each of which were provided to two workers, for a total output of 210 generated clusters. Baseline The baseline for this task is a clustering algorithm weighted by TF*IDF over the data set as implemented by the publicly available CLUTO package.3 This baseline will put a strong connection between things which are lexically similar. Because our model only uses property words to tie together clusters, it may miss correlations between words which are not correctly identified as property words. The baseline is allowed 10 property clusters per restaurant. We use the MUC cluster evaluation metric for this task (Vilain et al., 1995). This metric measures the number of cluster merges and splits required to recreate the gold clusters given the model’s output. 3Available at http://glaros.dtc.umn.edu/gkhome/cluto/cluto/overview with agglomerative clustering, using the cosine similarity distance metric. 355 Precision Recall F1 Baseline 80.2 61.1 69.3 Our model 72.2 79.1 75.5 Table 2: Results using the MUC metric on the cluster prediction task. Note that while the precision of the baseline is higher, the recall and overall F1 of our model outweighs that. While MUC has a deficiency in that putting everything into a single cluster will artificially inflate the score, parameters on our model are set so that the model uses the same number of clusters as the baseline system. Therefore, it can concisely show how accurate our clusters are as a whole. While it would be possible to artificially inflate the score by putting everything into a single cluster, the parameters on our model and the likelihood objective are such that the model prefers to use all available clusters, the same number as the baseline system. Results Results for our cluster prediction task are in Table 2. While our system does suffer on precision in comparison to the baseline system, the recall gains far outweigh this loss, for a total error reduction of 20% on the MUC measure. The most common cause of poor cluster choices in the baseline system is its inability to distinguish property words from attribute words. For example, if many snippets in a given restaurant use the word delicious, there may end up being a cluster based on that alone. Because our system is capable of distinguishing which words are property words (i.e., words relevant to clustering), it can choose clusters which make more sense overall. We show an example of this in Table 3. 6.2 Attribute analysis We also evaluate the system’s predictions of snippet attribute using the predicted posterior over the attribute distribution for the snippet (i.e., Zi,j A ). For this task, we consider the binary judgment to be simply the one with higher value in q(Zi,j A ) (see Section 5). The goal of this task is to evaluate whether our model correctly distinguishes attribute words. 
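A minimal sketch of this decision rule (ours, not the authors' code): each snippet's posterior over the two attributes is collapsed to whichever value is larger, and accuracy is then measured against manually assigned labels.

def attribute_judgments(q_ZA):
    # q_ZA: iterable of (p_positive, p_negative) pairs, one per snippet.
    return ["POSITIVE" if p_pos >= p_neg else "NEGATIVE" for p_pos, p_neg in q_ZA]

def accuracy(predicted, gold):
    return sum(p == g for p, g in zip(predicted, gold)) / len(gold)

# Toy usage with two snippets.
preds = attribute_judgments([(0.9, 0.1), (0.3, 0.7)])
print(preds, accuracy(preds, ["POSITIVE", "POSITIVE"]))   # ['POSITIVE', 'NEGATIVE'] 0.5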
Annotation For this task, we use a set of 260 total snippets from the Yelp reviews for 30 restaurants, evenly split into a training and test sets of 130 snippets each. These snippets are manually labeled POSThe martini selection looked delicious The s’mores martini sounded excellent The martinis were good The martinis are very good The mozzarella was very fresh The fish and various meets were very well made The best carrot cake I’ve ever eaten Carrot cake was deliciously moist The carrot cake was delicious. It was rich, creamy and delicious. The pasta Bolognese was rich and robust. Table 3: Example phrases from clusters in both the baseline and our model. For each pair of clusters, the dashed line indicates separation by the baseline model, while the solid line indicates separation by our model. In the first example, the baseline mistakenly clusters some snippets about martinis with those containing the word very. In the second example, the same occurs with the word delicious. ITIVE or NEGATIVE. Neutral snippets are ignored for the purpose of this experiment. Baseline We use two baselines for this task, one based on a standard discriminative classifier and one based on the seed words from our model. The DISCRIMINATIVE baseline for this task is a standard maximum entropy discriminative binary classifier over unigrams. Given enough snippets from enough unrelated properties, the classifier should be able to identify that words like great indicate positive sentiment and those like bad indicate negative sentiment, while words like chicken are neutral and have no effect. The SEED baseline simply counts the number of words from the positive and negative seed lists used by the model, Vseed+ and Vseed−. If there are more words from Vseed+, the snippet is labeled positive, and if there are more words from Vseed−, the snippet is labeled negative. If there is a tie or there are no seed words, we split the prediction. Because the seed word lists are specifically slanted toward restaurant reviews (i.e., they contain words such as delicious), this baseline should perform well. Results For this experiment, we measure the overall classification accuracy of each system (see Table 356 Accuracy DISCRIMINATIVE baseline 75.9 SEED baseline 78.2 Our model 80.2 Table 4: Attribute prediction accuracy of the full system compared to the DISCRIMINATIVE and SEED baselines. The advantage of our system is its ability to distinguish property words from attribute words in order to restrict judgment to only the relevant terms. The naan was hot and fresh All the veggies were really fresh and crisp. Perfect mix of fresh flavors and comfort food The lo main smelled and tasted rancid My grilled cheese sandwich was a little gross Table 5: Examples of sentences correctly labeled by our system but incorrectly labeled by the DISCRIMINATIVE baseline; the key sentiment words are highlighted. Notice that these words are not the most common sentiment words; therefore, it is difficult for the classifier to make a correct generalization. Only two of these words are seed words for our model (fresh and gross). 4). Our system outperforms both supervised baselines. As in the cluster prediction case, the main flaw with the DISCRIMINATIVE baseline system is its inability to recognize which words are relevant for the task at hand, in this case the attribute words. By learning to separate attribute words from the other words in the snippets, our full system is able to more accurately judge their sentiment. 
Examples of these cases are found in Table 5. The obvious flaw in the SEED baseline is the inability to pre-specify every possible sentiment word; our model’s performance indicates that it is learning something beyond just these basic words. 6.3 Conflict identification Our final task requires both correct cluster prediction and correct sentiment judgments. In many domains, it is interesting to know not only whether a product is rated highly, but also whether there is conflicting sentiment or debate. In the case of restaurant reviews, it is relevant to know whether the dishes are consistently good or whether there is some variation in quality. Judgment P A Attribute / Snippet Yes Yes The salsa isn’t great + Chips and salsa are sublime The grits were good, but not great. + Grits were the perfect consistency The tom yum kha was bland + It’s the best Thai soup I ever had The naan is a bit doughy and undercooked + The naan was pretty tasty My reuben was a little dry. + The reuben was a good reuben. Yes No Belgian frites are crave-able + The frites are very, very good. No Yes The blackened chicken was meh + Chicken enchiladas are yummy! The taste overall was mediocre + The oysters are tremendous No No The cream cheese wasn’t bad + Ice cream was just delicious Table 6: Example property-attribute correctness for the conflict identification task, over both property and attribute. Property judgment (P) indicates whether the snippets are discussing the same item; attribute judgment (A) indicates whether there is a correct difference in attribute (sentiment), regardless of properties. To evaluate this, we examine the output clusters which contain predictions of both positive and negative snippets. The goal is to identify whether these are true conflicts of sentiment or there was a failure in either property clustering or attribute classification. For this task, the output clusters are manually annotated for correctness of both property and attribute judgments, as in Table 6. As there is no obvious baseline for this experiment, we treat it simply as an analysis of errors. Results For this task, we examine the accuracy of conflict prediction, both with and without the correctly identified properties. The results by propertyattribute correctness are shown in Table 7. From these numbers, we can see that 50% of the clusters are correct in both property (cohesiveness) and attribute (difference in sentiment) dimensions. Overall, the properties are correctly identified (subject of NEG matches the subject of POS) 68% of the time and a correct difference in attribute is identified 67% of the time. Of the clusters which are correct in property, 74% show a correctly labeled 357 Judgment P A # Clusters Yes Yes 52 Yes No 18 No Yes 17 No No 15 Table 7: Results of conflict analysis by correctness of property label (P) and attribute conflict (A). Examples of each type of correctness pair are show in in Table 6. 50% of the clusters are correct in both labels, and there are approximately the same number of errors toward both property and attribute. difference in attribute. 7 Conclusion We have presented a probabilistic topic model for identifying properties and attitudes of product review snippets. The model is relatively simple and admits an efficient variational mean-field inference procedure which is parallelized and can be run on a large number of snippets. We have demonstrated on multiple evaluation tasks that our model outperforms applicable baselines by a considerable margin. 
Acknowledgments The authors acknowledge the support of the NSF (CAREER grant IIS-0448168), NIH (grant 5R01-LM009723-02), Nokia, and the DARPA Machine Reading Program (AFRL prime contract no. FA8750-09-C-0172). Thanks to Peter Szolovits and the MIT NLP group for their helpful comments. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations. References David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022. Giuseppe Carenini, Raymond Ng, and Adam Pauls. 2006. Multi-document summarization of evaluative text. In Proceedings of EACL, pages 305–312. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of SIGKDD, pages 168–177. Minqing Hu and Bing Liu. 2006. Opinion extraction and summarization on the web. In Proceedings of AAAI. Soo-Min Kim and Eduard Hovy. 2006. Automatic identification of pro and con reasons in online reviews. In Proceedings of COLING/ACL, pages 483–490. Hyun Duk Kim and ChengXiang Zhai. 2009. Generating comparative summaries of contradictory opinions in text. In Proceedings of CIKM, pages 385–394. P. Liang and D. Klein. 2007. Structured Bayesian nonparametric models with variational inference (tutorial). In Proceedings of ACL. Bing Liu, Minqing Hu, and Junsheng Cheng. 2005a. Opinion observer: Analyzing and comparing opinions on the web. In Proceedings of WWW, pages 342–351. Bing Liu, Minqing Hu, and Junsheng Cheng. 2005b. Opinion observer: analyzing and comparing opinions on the web. In Proceedings of WWW, pages 342–351. Yue Lu and ChengXiang Zhai. 2008. Opinion integration through semi-supervised topic modeling. In Proceedings of WWW, pages 121–130. Qiaozhu Mei, Xu Ling, Matthew Wondra, Hang Su, and ChengXiang Zhai. 2007. Topic sentiment mixture: modeling facets and opinions in weblogs. In Proceedings of WWW, pages 171–180. Ana-Maria Popescu, Bao Nguyen, and Oren Etzioni. 2005. OPINE: Extracting product features and opinions from reviews. In Proceedings of HLT/EMNLP, pages 339–346. Christina Sauper, Aria Haghighi, and Regina Barzilay. 2010. Incorporating content structure into text analysis applications. In Proceedings of EMNLP, pages 377–387. Yohei Seki, Koji Eguchi, Noriko K, and Masaki Aono. 2006. Opinion-focused summarization and its analysis at DUC 2006. In Proceedings of DUC, pages 122– 130. Ivan Titov and Ryan McDonald. 2008. A joint model of text and aspect ratings for sentiment summarization. In Proceedings of ACL, pages 308–316. Marc Vilain, John Burger, John Aberdeen, Dennis Connolly, and Lynette Hirschman. 1995. A modeltheoretic coreference scoring scheme. In Proceedings of MUC, pages 45–52. 358
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 359–367, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Recognizing Named Entities in Tweets Xiaohua Liu ‡ †, Shaodian Zhang∗§, Furu Wei †, Ming Zhou † ‡School of Computer Science and Technology Harbin Institute of Technology, Harbin, 150001, China §Department of Computer Science and Engineering Shanghai Jiao Tong University, Shanghai, 200240, China †Microsoft Research Asia Beijing, 100190, China †{xiaoliu, fuwei, mingzhou}@microsoft.com § [email protected] Abstract The challenges of Named Entities Recognition (NER) for tweets lie in the insufficient information in a tweet and the unavailability of training data. We propose to combine a K-Nearest Neighbors (KNN) classifier with a linear Conditional Random Fields (CRF) model under a semi-supervised learning framework to tackle these challenges. The KNN based classifier conducts pre-labeling to collect global coarse evidence across tweets while the CRF model conducts sequential labeling to capture fine-grained information encoded in a tweet. The semi-supervised learning plus the gazetteers alleviate the lack of training data. Extensive experiments show the advantages of our method over the baselines as well as the effectiveness of KNN and semisupervised learning. 1 Introduction Named Entities Recognition (NER) is generally understood as the task of identifying mentions of rigid designators from text belonging to named-entity types such as persons, organizations and locations (Nadeau and Sekine, 2007). Proposed solutions to NER fall into three categories: 1) The rule-based (Krupka and Hausman, 1998); 2) the machine learning based (Finkel and Manning, 2009; Singh et al., 2010) ; and 3) hybrid methods (Jansche and Abney, 2002). With the availability of annotated corpora, such as ACE05, Enron (Minkov et al., 2005) and ∗This work has been done while the author was visiting Microsoft Research Asia. CoNLL03 (Tjong Kim Sang and De Meulder, 2003), the data driven methods now become the dominating methods. However, current NER mainly focuses on formal text such as news articles (Mccallum and Li, 2003; Etzioni et al., 2005). Exceptions include studies on informal text such as emails, blogs, clinical notes (Wang, 2009). Because of the domain mismatch, current systems trained on non-tweets perform poorly on tweets, a new genre of text, which are short, informal, ungrammatical and noise prone. For example, the average F1 of the Stanford NER (Finkel et al., 2005) , which is trained on the CoNLL03 shared task data set and achieves state-of-the-art performance on that task, drops from 90.8% (Ratinov and Roth, 2009) to 45.8% on tweets. Thus, building a domain specific NER for tweets is necessary, which requires a lot of annotated tweets or rules. However, manually creating them is tedious and prohibitively unaffordable. Proposed solutions to alleviate this issue include: 1) Domain adaption, which aims to reuse the knowledge of the source domain in a target domain. Two recent examples are Wu et al. (2009), which uses data that is informative about the target domain and also easy to be labeled to bridge the two domains, and Chiticariu et al. (2010), which introduces a high-level rule language, called NERL, to build the general and domain specific NER systems; and 2) semi-supervised learning, which aims to use the abundant unlabeled data to compensate for the lack of annotated data. Suzuki and Isozaki (2008) is one such example. 
Another challenge is the limited information in tweet. Two factors contribute to this difficulty. One 359 is the tweet’s informal nature, making conventional features such as part-of-speech (POS) and capitalization not reliable. The performance of current NLP tools drops sharply on tweets. For example, OpenNLP 1, the state-of-the-art POS tagger, gets only an accuracy of 74.0% on our test data set. The other is the tweet’s short nature, leading to the excessive abbreviations or shorthand in tweets, and the availability of very limited context information. Tackling this challenge, ideally, requires adapting related NLP tools to fit tweets, or normalizing tweets to accommodate existing tools, both of which are hard tasks. We propose a novel NER system to address these challenges. Firstly, a K-Nearest Neighbors (KNN) based classifier is adopted to conduct word level classification, leveraging the similar and recently labeled tweets. Following the two-stage prediction aggregation methods (Krishnan and Manning, 2006), such pre-labeled results, together with other conventional features used by the state-of-the-art NER systems, are fed into a linear Conditional Random Fields (CRF) (Lafferty et al., 2001) model, which conducts fine-grained tweet level NER. Furthermore, the KNN and CRF model are repeatedly retrained with an incrementally augmented training set, into which high confidently labeled tweets are added. Indeed, it is the combination of KNN and CRF under a semi-supervised learning framework that differentiates ours from the existing. Finally, following Lev Ratinov and Dan Roth (2009), 30 gazetteers are used, which cover common names, countries, locations, temporal expressions, etc. These gazetteers represent general knowledge across domains. The underlying idea of our method is to combine global evidence from KNN and the gazetteers with local contextual information, and to use common knowledge and unlabeled tweets to make up for the lack of training data. 12,245 tweets are manually annotated as the test data set. Experimental results show that our method outperforms the baselines. It is also demonstrated that integrating KNN classified results into the CRF model and semi-supervised learning considerably boost the performance. Our contributions are summarized as follows. 1http://sourceforge.net/projects/opennlp/ 1. We propose to a novel method that combines a KNN classifier with a conventional CRF based labeler under a semi-supervised learning framework to combat the lack of information in tweet and the unavailability of training data. 2. We evaluate our method on a human annotated data set, and show that our method outperforms the baselines and that both the combination with KNN and the semi-supervised learning strategy are effective. The rest of our paper is organized as follows. In the next section, we introduce related work. In Section 3, we formally define the task and present the challenges. In Section 4, we detail our method. In Section 5, we evaluate our method. Finally, Section 6 concludes our work. 2 Related Work Related work can be roughly divided into three categories: NER on tweets, NER on non-tweets (e.g., news, bio-logical medicine, and clinical notes), and semi-supervised learning for NER. 2.1 NER on Tweets Finin et al. (2010) use Amazons Mechanical Turk service 2 and CrowdFlower 3 to annotate named entities in tweets and train a CRF model to evaluate the effectiveness of human labeling. 
In contrast, our work aims to build a system that can automatically identify named entities in tweets. To achieve this, a KNN classifier with a CRF model is combined to leverage cross tweets information, and the semisupervised learning is adopted to leverage unlabeled tweets. 2.2 NER on Non-Tweets NER has been extensively studied on formal text, such as news, and various approaches have been proposed. For example, Krupka and Hausman (1998) use manual rules to extract entities of predefined types; Zhou and Ju (2002) adopt Hidden Markov Models (HMM) while Finkel et al. (2005) use CRF to train a sequential NE labeler, in which the BIO (meaning Beginning, the Inside and the Outside of 2https://www.mturk.com/mturk/ 3http://crowdflower.com/ 360 an entity, respectively) schema is applied. Other methods, such as classification based on Maximum Entropy models and sequential application of Perceptron or Winnow (Collins, 2002), are also practiced. The state-of-the-art system, e.g., the Stanford NER, can achieve an F1 score of over 92.0% on its test set. Biomedical NER represents another line of active research. Machine learning based systems are commonly used and outperform the rule based systems. A state-of-the-art biomedical NER system (Yoshida and Tsujii, 2007) uses lexical features, orthographic features, semantic features and syntactic features, such as part-of-speech (POS) and shallow parsing. A handful of work on other domains exists. For example, Wang (2009) introduces NER on clinical notes. A data set is manually annotated and a linear CRF model is trained, which achieves an F-score of 81.48% on their test data set; Downey et al. (2007) employ capitalization cues and n-gram statistics to locate names of a variety of classes in web text; most recently, Chiticariu et al. (2010) design and implement a high-level language NERL that is tuned to simplify the process of building, understanding, and customizing complex rule-based named-entity annotators for different domains. Ratinov and Roth (2009) systematically study the challenges in NER, compare several solutions and report some interesting findings. For example, they show that a conditional model that does not consider interactions at the output level performs comparably to beam search or Viterbi, and that the BILOU (Beginning, the Inside and the Last tokens of multi-token chunks as well as Unit-length chunks) encoding scheme significantly outperforms the BIO schema (Beginning, the Inside and Outside of a chunk). In contrast to the above work, our study focuses on NER for tweets, a new genre of texts, which are short, noise prone and ungrammatical. 2.3 Semi-supervised Learning for NER Semi-supervised learning exploits both labeled and un-labeled data. It proves useful when labeled data is scarce and hard to construct while unlabeled data is abundant and easy to access. Bootstrapping is a typical semi-supervised learning method. It iteratively adds data that has been confidently labeled but is also informative to its training set, which is used to re-train its model. Jiang and Zhai (2007) propose a balanced bootstrapping algorithm and successfully apply it to NER. Their method is based on instance re-weighting, which allows the small amount of the bootstrapped training sets to have an equal weight to the large source domain training set. Wu et al. (2009) propose another bootstrapping algorithm that selects bridging instances from an unlabeled target domain, which are informative about the target domain and are also easy to be correctly labeled. 
We adopt bootstrapping as well, but use human labeled tweets as seeds. Another representative of semi-supervised learning is learning a robust representation of the input from unlabeled data. Miller et al. (2004) use word clusters (Brown et al., 1992) learned from unlabeled text, resulting in a performance improvement of NER. Guo et al. (2009) introduce Latent Semantic Association (LSA) for NER. In our pilot study of NER for tweets, we adopt bag-of-words models to represent a word in tweet, to concentrate our efforts on combining global evidence with local information and semi-supervised learning. We leave it to our future work to explore which is the best input representation for our task. 3 Task Definition We first introduce some background about tweets, then give a formal definition of the task. 3.1 The Tweets A tweet is a short text message containing no more than 140 characters in Twitter, the biggest micro-blog service. Here is an example of tweets: “mycraftingworld: #Win Microsoft Office 2010 Home and Student *2Winners* #Contest from @office and @momtobedby8 #Giveaway http://bit.ly/bCsLOr ends 11/14”, where ”mycraftingworld” is the name of the user who published this tweet. Words beginning with the “#” character, like “”#Win”, “#Contest” and “#Giveaway”, are hash tags, usually indicating the topics of the tweet; words starting with “@”, like “@office” and “@momtobedby8”, represent user names, and “http://bit.ly/bCsLOr” is a shortened link. Twitter users are interested in named entities, such 361 Figure 1: Portion of different types of named entities in tweets. This is based on an investigation of 12,245 randomly sampled tweets, which are manually labeled. as person names, organization names and product names, as evidenced by the abundant named entities in tweets. According to our investigation on 12,245 randomly sampled tweets that are manually labeled, about 46.8% have at least one named entity. Figure 1 shows the portion of named entities of different types. 3.2 The Task Given a tweet as input, our task is to identify both the boundary and the class of each mention of entities of predefined types. We focus on four types of entities in our study, i.e., persons, organizations, products, and locations, which, according to our investigation as shown in Figure 1, account for 89.0% of all the named entities. Here is an example illustrating our task. The input is “...Me without you is like an iphone without apps, Justin Bieber without his hair, Lady gaga without her telephone, it just wouldn...” The expected output is as follows:“...Me without you is like an <PRODUCT >iphone</PRODUCT>without apps, <PERSON>Justin Bieber</PERSON>without his hair,<PERSON>Lady gaga</PERSON> without her telephone, it just wouldn...”, meaning that “iphone” is a product, while “Justin Bieber” and “Lady gaga” are persons. 4 Our Method Now we present our solution to the challenging task of NER for tweets. An overview of our method is first given, followed by detailed discussion of its core components. 4.1 Method Overview NER task can be naturally divided into two subtasks, i.e., boundary detection and type classification. Following the common practice , we adopt a sequential labeling approach to jointly resolve these sub-tasks, i.e., for each word in the input tweet, a label is assigned to it, indicating both the boundary and entity type. Inspired by Ratinov and Roth (2009), we use the BILOU schema. 
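To illustrate the BILOU schema concretely, here is a minimal sketch (ours, not the authors' implementation) that converts entity spans into per-token labels; spans are assumed to be (start, end, type) triples with the end index exclusive.

def bilou_encode(tokens, entities):
    # entities: list of (start, end, type) spans, end exclusive.
    labels = ["O"] * len(tokens)
    for start, end, etype in entities:
        if end - start == 1:
            labels[start] = f"U-{etype}"          # unit-length chunk
        else:
            labels[start] = f"B-{etype}"          # beginning of a multi-token chunk
            labels[end - 1] = f"L-{etype}"        # last token of the chunk
            for i in range(start + 1, end - 1):
                labels[i] = f"I-{etype}"          # inside tokens
    return labels

tokens = "i love my iphone and Justin Bieber".split()
print(bilou_encode(tokens, [(3, 4, "PRODUCT"), (5, 7, "PERSON")]))
# ['O', 'O', 'O', 'U-PRODUCT', 'O', 'B-PERSON', 'L-PERSON']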
Algorithm 1 outlines our method, where: trains and traink denote two machine learning processes to get the CRF labeler and the KNN classifier, respectively; reprw converts a word in a tweet into a bag-of-words vector; the reprt function transforms a tweet into a feature matrix that is later fed into the CRF model; the knn function predicts the class of a word; the update function applies the predicted class by KNN to the inputted tweet; the crf function conducts word level NE labeling;τ and γ represent the minimum labeling confidence of KNN and CRF, respectively, which are experimentally set to 0.1 and 0.001; N (1,000 in our work) denotes the maximum number of new accumulated training data. Our method, as illustrated in Algorithm 1, repeatedly adds the new confidently labeled tweets to the training set 4 and retrains itself once the number of new accumulated training data goes above the threshold N. Algorithm 1 also demonstrates one striking characteristic of our method: A KNN classifier is applied to determine the label of the current word before the CRF model. The labels of the words that confidently assigned by the KNN classifier are treated as visible variables for the CRF model. 4.2 Model Our model is hybrid in the sense that a KNN classifier and a CRF model are sequentially applied to the target tweet, with the goal that the KNN classifier captures global coarse evidence while the CRF model fine-grained information encoded in a single tweet and in the gazetteers. Algorithm 2 outlines the training process of KNN, which records the labeled word vector for every type of label. Algorithm 3 describes how the KNN classifier 4The training set ts has a maximum allowable number of items, which is 10,000 in our work. Adding an item into it will cause the oldest one being removed if it is full. 362 Algorithm 1 NER for Tweets. Require: Tweet stream i; output stream o. Require: Training tweets ts; gazetteers ga. 1: Initialize ls, the CRF labeler: ls = trains(ts). 2: Initialize lk, the KNN classifier: lk = traink(ts). 3: Initialize n, the # of new training tweets: n = 0. 4: while Pop a tweet t from i and t ̸= null do 5: for Each word w ∈t do 6: Get the feature vector ⃗w: ⃗w = reprw(w, t). 7: Classify ⃗w with knn: (c, cf) = knn(lk, ⃗w). 8: if cf > τ then 9: Pre-label: t = update(t, w, c). 10: end if 11: end for 12: Get the feature vector ⃗t: ⃗t = reprt(t, ga). 13: Label ⃗t with crf: (t, cf) = crf(ls,⃗t). 14: Put labeled result (t, cf) into o. 15: if cf > γ then 16: Add labeled result t to ts , n = n + 1. 17: end if 18: if n > N then 19: Retrain ls: ls = trains(ts). 20: Retrain lk: lk = traink(ts). 21: n = 0. 22: end if 23: end while 24: return o. Algorithm 2 KNN Training. Require: Training tweets ts. 1: Initialize the classifier lk:lk = ∅. 2: for Each tweet t ∈ts do 3: for Each word,label pair (w, c) ∈t do 4: Get the feature vector ⃗w: ⃗w = reprw(w, t). 5: Add the ⃗w and c pair to the classifier: lk = lk ∪{(⃗w, c)}. 6: end for 7: end for 8: return KNN classifier lk. predicts the label of the word. In our work, K is experimentally set to 20, which yields the best performance. Two desirable properties of KNN make it stand out from its alternatives: 1) It can straightforwardly incorporate evidence from new labeled tweets and retraining is fast; and 2) combining with a CRF Algorithm 3 KNN predication. Require: KNN classifier lk ;word vector ⃗w. 1: Initialize nb, the neighbors of ⃗w: nb = neigbors(lk, ⃗w). 2: Calculate the predicted class c∗: c∗ = argmaxc ∑ (⃗w′,c′)∈nb δ(c, c ′) · cos(⃗w, ⃗w ′). 
3: Calculate the labeling confidence cf: cf = ∑ ( ⃗ w′ ,c′ )∈nb δ(c,c ′)·cos(⃗w, ⃗w ′) ∑ ( ⃗ w′ ,c′ )∈nb cos(⃗w,⃗w′) . 4: return The predicted label c∗and its confidence cf. model, which is good at encoding the subtle interactions between words and their labels, compensates for KNN’s incapability to capture fine-grained evidence involving multiple decision points. The Linear CRF model is used as the fine model, with the following considerations: 1) It is wellstudied and has been successfully used in state-ofthe-art NER systems (Finkel et al., 2005; Wang, 2009); 2) it can output the probability of a label sequence, which can be used as the labeling confidence that is necessary for the semi-supervised learning framework. In our experiments, the CRF++ 5 toolkit is used to train a linear CRF model. We have written a Viterbi decoder that can incorporate partially observed labels to implement the crf function in Algorithm 1. 4.3 Features Given a word in a tweet, the KNN classifier considers a text window of size 5 with the word in the middle (Zhang and Johnson, 2003), and extracts bag-ofword features from the window as features. For each word, our CRF model extracts similar features as Wang (2009) and Ratinov and Roth (2009), namely, orthographic features, lexical features and gazetteers related features. In our work, we use the gazetteers provided by Ratinov and Roth (2009). Two points are worth noting here. One is that before feature extraction for either the KNN or the CRF, stop words are removed. The stop words used here are mainly from a set of frequently-used words 6. The other is that tweet meta data is normalized, that is, every link becomes *LINK* and every 5http://crfpp.sourceforge.net/ 6http://www.textfixer.com/resources/common-englishwords.txt 363 account name becomes *ACCOUNT*. Hash tags are treated as common words. 4.4 Discussion We now discuss several design considerations related to the performance of our method, i.e., additional features, gazetteers and alternative models. Additional Features. Features related to chunking and parsing are not adopted in our final system, because they give only a slight performance improvement while a lot of computing resources are required to extract such features. The ineffectiveness of these features is linked to the noisy and informal nature of tweets. Word class (Brown et al., 1992) features are not used either, which prove to be unhelpful for our system. We are interested in exploring other tweet representations, which may fit our NER task, for example the LSA models (Guo et al., 2009). Gazetteers. In our work, gazetteers prove to be substantially useful, which is consistent with the observation of Ratinov and Roth (2009). However, the gazetteers used in our work contain noise, which hurts the performance. Moreover, they are static, directly from Ratinov and Roth (2009), thus with a relatively lower coverage, especially for person names and product names in tweets. We are developing tools to clean the gazetteers. In future, we plan to feed the fresh entities correctly identified from tweets back into the gazetteers. The correctness of an entity can rely on its frequency or other evidence. Alternative Models. We have replaced KNN by other classifiers, such as those based on Maximum Entropy and Support Vector Machines, respectively. KNN consistently yields comparable performance, while enjoying a faster retraining speed. 
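To make the KNN component just discussed concrete, the following sketch (ours; a simplified restatement of Algorithms 2-3 above, with illustrative toy vectors) scores candidate labels by cosine-weighted votes from the k most similar labelled context vectors and reports the same confidence ratio used in Algorithm 3.

from collections import defaultdict
import numpy as np

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def knn_predict(train, w_vec, k=20):
    # train: list of (bag-of-words vector, label) pairs, as built in Algorithm 2.
    # w_vec: bag-of-words vector for the target word's context window.
    neighbours = sorted(train, key=lambda wc: cosine(wc[0], w_vec), reverse=True)[:k]
    votes, total = defaultdict(float), 0.0
    for vec, label in neighbours:
        sim = cosine(vec, w_vec)
        votes[label] += sim
        total += sim
    best = max(votes, key=votes.get)
    return best, (votes[best] / total if total else 0.0)

# Toy usage: three labelled context vectors over a four-word vocabulary.
train = [(np.array([1.0, 0, 1, 0]), "B-PERSON"),
         (np.array([1.0, 1, 0, 0]), "B-PERSON"),
         (np.array([0.0, 0, 1, 1]), "O")]
print(knn_predict(train, np.array([1.0, 0, 1, 1]), k=2))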
Similarly, to study the effectiveness of the CRF model, it is replaced by its alternations, such as the HMM labeler and a beam search plus a maximum entropy based classifier. In contrast to what is reported by Ratinov and Roth (2009), it turns out that the CRF model gives remarkably better results than its competitors. Note that all these evaluations are on the same training and testing data sets as described in Section 5.1. 5 Experiments In this section, we evaluate our method on a manually annotated data set and show that our system outperforms the baselines. The contributions of the combination of KNN and CRF as well as the semisupervised learning are studied, respectively. 5.1 Data Preparation We use the Twigg SDK 7 to crawl all tweets from April 20th 2010 to April 25th 2010, then drop non-English tweets and get about 11,371,389, from which 15,800 tweets are randomly sampled, and are then labeled by two independent annotators, so that the beginning and the end of each named entity are marked with <TYPE> and </TYPE>, respectively. Here TYPE is PERSON, PRODUCT, ORGANIZATION or LOCATION. 3555 tweets are dropped because of inconsistent annotation. Finally we get 12,245 tweets, forming the gold-standard data set. Figure 1 shows the portion of named entities of different types. On average, a named entity has 1.2 words. The gold-standard data set is evenly split into two parts: One for training and the other for testing. 5.2 Evaluation Metrics For every type of named entity, Precision (Pre.), recall (Rec.) and F1 are used as the evaluation metrics. Precision is a measure of what percentage the output labels are correct, and recall tells us to what percentage the labels in the gold-standard data set are correctly labeled, while F1 is the harmonic mean of precision and recall. For the overall performance, we use the average Precision, Recall and F1, where the weight of each name entity type is proportional to the number of entities of that type. These metrics are widely used by existing NER systems to evaluate their performance. 5.3 Baselines Two systems are used as baselines: One is the dictionary look-up system based on the gazetteers; the other is the modified version of our system without KNN and semi-supervised learning. Hereafter these two baselines are called NERDIC and NERBA, respectively. The OpenNLP and the Stanford parser (Klein and Manning, 2003) are used to extract linguistic features for the baselines and our method. 7It is developed by the Bing social search team, and currently is only internally available. 364 System Pre.(%) Rec.(%) F1(%) NERCB 81.6 78.8 80.2 NERBA 83.6 68.6 75.4 NERDIC 32.6 25.4 28.6 Table 1: Overall experimental results. System Pre.(%) Rec.(%) F1(%) NERCB 78.4 74.5 76.4 NERBA 83.6 68.4 75.2 NERDIC 37.1 29.7 33.0 Table 2: Experimental results on PERSON. 5.4 Basic Results Table 1 shows the overall results for the baselines and ours with the name NERCB. Here our system is trained as described in Algorithm 1, combining a KNN classifier and a CRF labeler, with semisupervised learning enabled. As can be seen from Table 1, on the whole, our method significantly outperforms (with p < 0.001) the baselines. Tables 2-5 report the results on each entity type, indicating that our method consistently yields better results on all entity types. 5.5 Effects of KNN Classifier Table 6 shows the performance of our method without combining the KNN classifier, denoted by NERCB−KNN. A drop in performance is observed then. 
We further check the confidently predicted labels of the KNN classifier, which account for about 22.2% of all predications, and find that its F1 is as high as 80.2% while the baseline system based on the CRF model achieves only an F1 of 75.4%. This largely explains why the KNN classifier helps the CRF labeler. The KNN classifier is replaced with its competitors, and only a slight difference in performance is observed. We do observe that retraining KNN is obviously faster. System Pre.(%) Rec.(%) F1(%) NERCB 81.3 65.4 72.5 NERBA 82.5 58.4 68.4 NERDIC 8.2 6.1 7.0 Table 3: Experimental results on PRODUCT. System Pre.(%) Rec.(%) F1(%) NERCB 80.3 77.5 78.9 NERBA 81.6 69.7 75.2 NERDIC 30.2 30.0 30.1 Table 4: Experimental results on LOCATION. System Pre.(%) Rec.(%) F1(%) NERCB 83.2 60.4 70.0 NERBA 87.6 52.5 65.7 NERDIC 54.5 11.8 19.4 Table 5: Experimental results on ORGANIZATION. 5.6 Effects of the CRF Labeler Similarly, the CRF model is replaced by its alternatives. As is opposite to the finding of Ratinov and Roth (2009), the CRF model gives remarkably better results, i.e., 2.1% higher in F1 than its best followers (with p < 0.001). Table 7 shows the overall performance of the CRF labeler with various feature set combinations, where Fo, Fl and Fg denote the orthographic features, the lexical features and the gazetteers related features, respectively. It can be seen from Table 7 that the lexical and gazetteer related features are helpful. Other advanced features such as chunking are also explored but with no significant improvement. 5.7 Effects of Semi-supervised Learning Table 8 compares our method with its modified version without semi-supervised learning, suggesting that semi-supervised learning considerably boosts the performance. To get more details about selftraining, we evenly divide the test data into 10 parts and feed them into our method sequentially; we record the average F1 score on each part, as shown in Figure 2. 5.8 Error Analysis Errors made by our system on the test set fall into three categories. The first kind of error, accounting for 35.5% of all errors, is largely related to slang expressions and informal abbreviations. For example, our method identifies “Cali”, which actually means “California”, as a PERSON in the tweet “i love Cali so much”. In future, we can design a normalization 365 System Pre.(%) Rec.(%) F1(%) NERCB 81.6 78.8 80.2 NERCB−KNN 82.6 74.8 78.5 Table 6: Overall performance of our system with and without the KNN classifier, respectively. Features Pre.(%) Rec.(%) F1(%) Fo 71.3 42.8 53.5 Fo + Fl 76.2 44.2 55.9 Fo + Fg 80.5 66.2 72.7 Fo + Fl + Fg 82.6 74.8 78.5 Table 7: Overview performance of the CRF labeler (combined with KNN) with different feature sets. component to handle such slang expressions and informal abbreviations. The second kind of error, accounting for 37.2% of all errors, is mainly attributed to the data sparseness. For example, for this tweet “come to see jaxon someday”, our method mistakenly labels “jaxon” as a LOCATION, which actually denotes a PERSON. This error is understandable somehow, since this tweet is one of the earliest tweets that mention “jaxon”, and at that time there was no strong evidence supporting that it represents a person. Possible solutions to these errors include continually enriching the gazetteers and aggregating additional external knowledge from other channels such as traditional news. The last kind of error, which represents 27.3% of all errors, somehow links to the noise prone nature of tweets. 
Consider this tweet “wesley snipes ws cought 4 nt payin tax coz ths celebz dnt take it cirus.”, in which “wesley snipes” is not identified as a PERSON but simply ignored by our method, because this tweet is too noisy to provide effective features. Tweet normalization technology seems a possible solution to alleviate this kind of error. Features Pre.(%) Rec.(%) F1(%) NERCB 81.6 78.8 80.2 NER ′ CB 82.1 71.9 76.7 Table 8: Performance of our system with and without semi-supervised learning, respectively. Figure 2: F1 score on 10 test data sets sequentially fed into the system, each with 600 instances. Horizontal and vertical axes represent the sequential number of the test data set and the averaged F1 score (%), respectively. 6 Conclusions and Future work We propose a novel NER system for tweets, which combines a KNN classifier with a CRF labeler under a semi-supervised learning framework. The KNN classifier collects global information across recently labeled tweets while the CRF labeler exploits information from a single tweet and from the gazetteers. A serials of experiments show the effectiveness of our method, and particularly, show the positive effects of KNN and semi-supervised learning. In future, we plan to further improve the performance of our method through two directions. Firstly, we hope to develop tweet normalization technology to make tweets friendlier to the NER task. Secondly, we are interested in integrating new entities from tweets or other channels into the gazetteers. Acknowledgments We thank Long Jiang, Changning Huang, Yunbo Cao, Dongdong Zhang, Zaiqing Nie for helpful discussions, and the anonymous reviewers for their valuable comments. We also thank Matt Callcut for his careful proofreading of an early draft of this paper. References Peter F. Brown, Peter V. deSouza, Robert L. Mercer, Vincent J. Della Pietra, and Jenifer C. Lai. 1992. Classbased n-gram models of natural language. Comput. Linguist., 18:467–479. 366 Laura Chiticariu, Rajasekar Krishnamurthy, Yunyao Li, Frederick Reiss, and Shivakumar Vaithyanathan. 2010. Domain adaptation of rule-based annotators for named-entity recognition tasks. In EMNLP, pages 1002–1012. Michael Collins. 2002. Discriminative training methods for hidden markov models: theory and experiments with perceptron algorithms. In EMNLP, pages 1–8. Doug Downey, Matthew Broadhead, and Oren Etzioni. 2007. Locating Complex Named Entities in Web Text. In IJCAI. Oren Etzioni, Michael Cafarella, Doug Downey, AnaMaria Popescu, Tal Shaked, Stephen Soderland, Daniel S. Weld, and Alexander Yates. 2005. Unsupervised named-entity extraction from the web: an experimental study. Artif. Intell., 165(1):91–134. Tim Finin, Will Murnane, Anand Karandikar, Nicholas Keller, Justin Martineau, and Mark Dredze. 2010. Annotating named entities in twitter data with crowdsourcing. In CSLDAMT, pages 80–88. Jenny Rose Finkel and Christopher D. Manning. 2009. Nested named entity recognition. In EMNLP, pages 141–150. Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by gibbs sampling. In ACL, pages 363–370. Honglei Guo, Huijia Zhu, Zhili Guo, Xiaoxun Zhang, Xian Wu, and Zhong Su. 2009. Domain adaptation with latent semantic association for named entity recognition. In NAACL, pages 281–289. Martin Jansche and Steven P. Abney. 2002. Information extraction from voicemail transcripts. In EMNLP, pages 320–327. Jing Jiang and ChengXiang Zhai. 2007. 
Instance weighting for domain adaptation in nlp. In ACL, pages 264– 271. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In ACL, pages 423–430. Vijay Krishnan and Christopher D. Manning. 2006. An effective two-stage model for exploiting non-local dependencies in named entity recognition. In ACL, pages 1121–1128. George R. Krupka and Kevin Hausman. 1998. Isoquest: Description of the netowlT M extractor system as used in muc-7. In MUC-7. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, pages 282–289. Andrew Mccallum and Wei Li. 2003. Early results for named entity recognition with conditional random fields, feature induction and web-enhanced lexicons. In HLT-NAACL, pages 188–191. Scott Miller, Jethran Guinness, and Alex Zamanian. 2004. Name tagging with word clusters and discriminative training. In HLT-NAACL, pages 337–342. Einat Minkov, Richard C. Wang, and William W. Cohen. 2005. Extracting personal names from email: applying named entity recognition to informal text. In HLT, pages 443–450. David Nadeau and Satoshi Sekine. 2007. A survey of named entity recognition and classification. Linguisticae Investigationes, 30:3–26. Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In CoNLL, pages 147–155. Sameer Singh, Dustin Hillard, and Chris Leggetter. 2010. Minimally-supervised extraction of entities from text advertisements. In HLT-NAACL, pages 73–81. Jun Suzuki and Hideki Isozaki. 2008. Semi-supervised sequential labeling and segmentation using giga-word scale unlabeled data. In ACL, pages 665–673. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: languageindependent named entity recognition. In HLTNAACL, pages 142–147. Yefeng Wang. 2009. Annotating and recognising named entities in clinical notes. In ACL-IJCNLP, pages 18– 26. Dan Wu, Wee Sun Lee, Nan Ye, and Hai Leong Chieu. 2009. Domain adaptive bootstrapping for named entity recognition. In EMNLP, pages 1523–1532. Kazuhiro Yoshida and Jun’ichi Tsujii. 2007. Reranking for biomedical named-entity recognition. In BioNLP, pages 209–216. Tong Zhang and David Johnson. 2003. A robust risk minimization based named entity recognition system. In HLT-NAACL, pages 204–207. GuoDong Zhou and Jian Su. 2002. Named entity recognition using an hmm-based chunk tagger. In ACL, pages 473–480. 367
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 368–378, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Lexical Normalisation of Short Text Messages: Makn Sens a #twitter Bo Han and Timothy Baldwin NICTA Victoria Research Laboratory Department of Computer Science and Software Engineering The University of Melbourne [email protected] [email protected] Abstract Twitter provides access to large volumes of data in real time, but is notoriously noisy, hampering its utility for NLP. In this paper, we target out-of-vocabulary words in short text messages and propose a method for identifying and normalising ill-formed words. Our method uses a classifier to detect ill-formed words, and generates correction candidates based on morphophonemic similarity. Both word similarity and context are then exploited to select the most probable correction candidate for the word. The proposed method doesn’t require any annotations, and achieves state-of-the-art performance over an SMS corpus and a novel dataset based on Twitter. 1 Introduction Twitter and other micro-blogging services are highly attractive for information extraction and text mining purposes, as they offer large volumes of real-time data, with around 65 millions tweets posted on Twitter per day in June 2010 (Twitter, 2010). The quality of messages varies significantly, however, ranging from high quality newswire-like text to meaningless strings. Typos, ad hoc abbreviations, phonetic substitutions, ungrammatical structures and emoticons abound in short text messages, causing grief for text processing tools (Sproat et al., 2001; Ritter et al., 2010). For instance, presented with the input u must be talkin bout the paper but I was thinkin movies (“You must be talking about the paper but I was thinking movies”),1 the Stanford parser (Klein and 1Throughout the paper, we will provide a normalised version of examples as a gloss in double quotes. Manning, 2003; de Marneffe et al., 2006) analyses bout the paper and thinkin movies as a clause and noun phrase, respectively, rather than a prepositional phrase and verb phrase. If there were some way of preprocessing the message to produce a more canonical lexical rendering, we would expect the quality of the parser to improve appreciably. Our aim in this paper is this task of lexical normalisation of noisy English text, with a particular focus on Twitter and SMS messages. In this paper, we will collectively refer to individual instances of typos, ad hoc abbreviations, unconventional spellings, phonetic substitutions and other causes of lexical deviation as “illformed words”. The message normalisation task is challenging. It has similarities with spell checking (Peterson, 1980), but differs in that ill-formedness in text messages is often intentional, whether due to the desire to save characters/keystrokes, for social identity, or due to convention in this text sub-genre. We propose to go beyond spell checkers, in performing deabbreviation when appropriate, and recovering the canonical word form of commonplace shorthands like b4 “before”, which tend to be considered beyond the remit of spell checking (Aw et al., 2006). The free writing style of text messages makes the task even more complex, e.g. with word lengthening such as goooood being commonplace for emphasis. In addition, the detection of ill-formed words is difficult due to noisy context. 
Our objective is to restore ill-formed words to their canonical lexical forms in standard English. Through a pilot study, we compared OOV words in Twitter and SMS data with other domain corpora, 368 revealing their characteristics in OOV word distribution. We found Twitter data to have an unsurprisingly long tail of OOV words, suggesting that conventional supervised learning will not perform well due to data sparsity. Additionally, many illformed words are ambiguous, and require context to disambiguate. For example, Gooood may refer to Good or God depending on context. This provides the motivation to develop a method which does not require annotated training data, but is able to leverage context for lexical normalisation. Our approach first generates a list of candidate canonical lexical forms, based on morphological and phonetic variation. Then, all candidates are ranked according to a list of features generated from noisy context and similarity between ill-formed words and candidates. Our proposed cascaded method is shown to achieve state-of-the-art results on both SMS and Twitter data. Our contributions in this paper are as follows: (1) we conduct a pilot study on the OOV word distribution of Twitter and other text genres, and analyse different sources of non-standard orthography in Twitter; (2) we generate a text normalisation dataset based on Twitter data; (3) we propose a novel normalisation approach that exploits dictionary lookup, word similarity and word context, without requiring annotated data; and (4) we demonstrate that our method achieves state-of-the-art accuracy over both SMS and Twitter data. 2 Related work The noisy channel model (Shannon, 1948) has traditionally been the primary approach to tackling text normalisation. Suppose the ill-formed text is T and its corresponding standard form is S, the approach aims to find arg max P(S|T) by computing arg max P(T|S)P(S), in which P(S) is usually a language model and P(T|S) is an error model. Brill and Moore (2000) characterise the error model by computing the product of operation probabilities on slice-by-slice string edits. Toutanova and Moore (2002) improve the model by incorporating pronunciation information. Choudhury et al. (2007) model the word-level text generation process for SMS messages, by considering graphemic/phonetic abbreviations and unintentional typos as hidden Markov model (HMM) state transitions and emissions, respectively (Rabiner, 1989). Cook and Stevenson (2009) expand the error model by introducing inference from different erroneous formation processes, according to the sampled error distribution. While the noisy channel model is appropriate for text normalisation, P(T|S), which encodes the underlying error production process, is hard to approximate accurately. Additionally, these methods make the strong assumption that a token ti ∈T only depends on si ∈S, ignoring the context around the token, which could be utilised to help in resolving ambiguity. Statistical machine translation (SMT) has been proposed as a means of context-sensitive text normalisation, by treating the ill-formed text as the source language, and the standard form as the target language. For example, Aw et al. (2006) propose a phrase-level SMT SMS normalisation method with bootstrapped phrase alignments. SMT approaches tend to suffer from a critical lack of training data, however. It is labor intensive to construct an annotated corpus to sufficiently cover ill-formed words and context-appropriate corrections. 
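To make the noisy channel formulation above concrete, the following is a minimal illustrative sketch (in Python), not the implementation of any of the systems cited: candidate standard forms S for an ill-formed token T are ranked by log P(S) + log P(T|S), where the unigram prior and the edit-distance-based error model are assumptions made purely for illustration.

```python
import math

def edit_distance(a, b):
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def noisy_channel_rank(token, candidates, unigram_probs):
    """Rank candidate standard forms S for an ill-formed token T by
    log P(S) + log P(T|S); here P(T|S) is crudely approximated as
    exp(-edit_distance(T, S)), purely for illustration."""
    scored = []
    for cand in candidates:
        log_prior = math.log(unigram_probs.get(cand, 1e-9))  # language model P(S)
        log_error = -edit_distance(token, cand)              # stand-in for log P(T|S)
        scored.append((log_prior + log_error, cand))
    return [c for _, c in sorted(scored, reverse=True)]

# Toy usage with a hypothetical unigram language model.
print(noisy_channel_rank("b4", ["before", "bore", "b4"],
                         {"before": 0.001, "bore": 0.0001, "b4": 1e-8}))
```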
Furthermore, it is hard to harness SMT for the lexical normalisation problem, as even if phrase-level re-ordering is suppressed by constraints on phrase segmentation, word-level re-orderings within a phrase are still prevalent. Some researchers have also formulated text normalisation as a speech recognition problem. For example, Kobus et al. (2008) firstly convert input text tokens into phonetic tokens and then restore them to words by phonetic dictionary lookup. Beaufort et al. (2010) use finite state methods to perform French SMS normalisation, combining the advantages of SMT and the noisy channel model. Kaufmann and Kalita (2010) exploit a machine translation approach with a preprocessor for syntactic (rather than lexical) normalisation. Predominantly, however, these methods require large-scale annotated training data, limiting their adaptability to new domains or languages. In contrast, our proposed method doesn’t require annotated data. It builds on the work on SMS text normalisation, and adapts it to Twitter data, exploiting multiple data sources for normalisation. 369 Figure 1: Out-of-vocabulary word distribution in English Gigaword (NYT), Twitter and SMS data 3 Scoping Text Normalisation 3.1 Task Definition of Lexical Normalisation We define the task of text normalisation to be a mapping from “ill-formed” OOV lexical items to their standard lexical forms, focusing exclusively on English for the purposes of this paper. We define the task as follows: • only OOV words are considered for normalisation; • normalisation must be to a single-token word, meaning that we would normalise smokin to smoking, but not imo to in my opinion; a sideeffect of this is to permit lower-register contractions such as gonna as the canonical form of gunna (given that going to is out of scope as a normalisation candidate, on the grounds of being multi-token). Given this definition, our first step is to identify candidate tokens for lexical normalisation, where we examine all tokens that consist of alphanumeric characters, and categorise them into in-vocabulary (IV) and out-of-vocabulary (OOV) words, relative to a dictionary. The OOV word definition is somewhat rough, because it includes neologisms and proper nouns like hopeable or WikiLeaks which have not made their way into the dictionary. However, it greatly simplifies the candidate identification task, at the cost of pushing complexity downstream to the word detection task, in that we need to explicitly distinguish between correct OOV words and illformed OOV words such as typos (e.g. earthquak “earthquake”), register-specific single-word abbreviations (e.g. lv “love”), and phonetic substitutions (e.g. 2morrow “tomorrow”). An immediate implication of our task definition is that ill-formed words which happen to coincide with an IV word (e.g. the misspelling of can’t as cant) are outside the scope of this research. We also consider that deabbreviation largely falls outside the scope of text normalisation, as abbreviations can be formed freely in standard English. Note that single-word abbreviations such as govt “government” are very much within the scope of lexical normalisation, as they are OOV and match to a single token in their standard lexical form. Throughout this paper, we use the GNU aspell dictionary (v0.60.6)2 to determine whether a token is OOV. In tokenising the text, hyphenanted tokens and tokens containing apostrophes (e.g. take-off and won’t, resp.) are treated as a single token. Twitter mentions (e.g. @twitter), hashtags (e.g. 
#twitter) and urls (e.g. twitter.com) are excluded from consideration for normalisation, but left in situ for context modelling purposes. Dictionary lookup of Internet slang is performed relative to a dictionary of 5021 items collected from the Internet.3 3.2 OOV Word Distribution and Types To get a sense of the relative need for lexical normalisation, we perform analysis of the distribution of OOV words in different text types. In particular, we calculate the proportion of OOV tokens per message (or sentence, in the case of edited text), bin the messages according to the OOV token proportion, and plot the probability mass contained in each bin for a given text type. The three corpora we compare 2We remove all one character tokens, except a and I, and treat RT as an IV word. 3http://www.noslang.com 370 are the New York Times (NYT),4 SMS,5 and Twitter.6 The results are presented in Figure 1. Both SMS and Twitter have a relatively flat distribution, with Twitter having a particularly large tail: around 15% of tweets have 50% or more OOV tokens. This has implications for any context modelling, as we cannot rely on having only isolated occurrences of OOV words. In contrast, NYT shows a more Zipfian distribution, despite the large number of proper names it contains. While this analysis confirms that Twitter and SMS are similar in being heavily laden with OOV tokens, it does not shed any light on the relative similarity in the makeup of OOV tokens in each case. To further analyse the two data sources, we extracted the set of OOV terms found exclusively in SMS and Twitter, and analysed each. Manual analysis of the two sets revealed that most OOV words found only in SMS were personal names. The Twitter-specific set, on the other hand, contained a heterogeneous collection of ill-formed words and proper nouns. This suggests that Twitter is a richer/noisier data source, and that text normalisation for Twitter needs to be more nuanced than for SMS. To further analyse the ill-formed words in Twitter, we randomly selected 449 tweets and manually analysed the sources of lexical variation, to determine the phenomena that lexical normalisation needs to deal with. We identified 254 token instances of lexical normalisation, and broke them down into categories, as listed in Table 1. “Letter” refers to instances where letters are missing or there are extraneous letters, but the lexical correspondence to the target word form is trivially accessible (e.g. shuld “should”). “Number Substitution” refers to instances of letter–number substitution, where numbers have been substituted for phonetically-similar sequences of letters (e.g. 4 “for”). “Letter&Number” refers to instances which have both extra/missing letters and number substitution (e.g. b4 “before”). “Slang” refers to instances 4Based on 44 million sentences from English Gigaword. 5Based on 12.6 thousand SMS messages from How and Kan (2005) and Choudhury et al. (2007). 6Based on 1.37 million tweets collected from the Twitter streaming API from Aug to Oct 2010, and filtered for monolingual English messages; see Section 5.1 for details of the language filtering methodology. Category Ratio Letter&Number 2.36% Letter 72.44% Number Substitution 2.76% Slang 12.20% Other 10.24% Table 1: Ill-formed word distribution of Internet slang (e.g. lol “laugh out loud”), as found in a slang dictionary (see Section 3.1). “Other” is the remainder of the instances, which is predominantly made up of occurrences of spaces having being deleted between words (e.g. sucha “such a”). 
If a given instance belongs to multiple error categories (e.g. “Letter&Number” and it is also found in a slang dictionary), we classify it into the higher-occurring category in Table 1. From Table 1, it is clear that “Letter” accounts for the majority of ill-formed words in Twitter, and that most ill-formed words are based on morphophonemic variations. This empirical finding assists in shaping our strategy for lexical normalisation. 4 Lexical normalisation Our proposed lexical normalisation strategy involves three general steps: (1) confusion set generation, where we identify normalisation candidates for a given word; (2) ill-formed word identification, where we classify a word as being ill-formed or not, relative to its confusion set; and (3) candidate selection, where we select the standard form for tokens which have been classified as being ill formed. In confusion set generation, we generate a set of IV normalisation candidates for each OOV word type based on morphophonemic variation. We call this set the confusion set of that OOV word, and aim to include all feasible normalisation candidates for the word type in the confusion set. The confusion candidates are then filtered for each token occurrence of a given OOV word, based on their local context fit with a language model. 4.1 Confusion Set Generation Revisiting our manual analysis from Section 3.2, most ill-formed tokens in Twitter are morphophonemically derived. First, inspired by Kaufmann and Kalita (2010), any repititions of more than 3 letters are reduced back to 3 letters (e.g. cooool is re371 Criterion Recall Average Candidates Tc ≤1 40.4% 24 Tc ≤2 76.6% 240 Tp = 0 55.4% 65 Tp ≤1 83.4% 1248 Tp ≤2 91.0% 9694 Tc ≤2 ∨Tp ≤1 88.8% 1269 Tc ≤2 ∨Tp ≤2 92.7% 9515 Table 2: Recall and average number of candidates for different confusion set generation strategies duced to coool). Second, IV words within a threshold Tc character edit distance of the given OOV word are calculated, as is widely used in spell checkers. Third, the double metaphone algorithm (Philips, 2000) is used to decode the pronunciation of all IV words, and IV words within a threshold Tp edit distance of the given OOV word under phonemic transcription, are included in the confusion set; this allows us to capture OOV words such as earthquick “earthquake”. In Table 2, we list the recall and average size of the confusion set generated by the final two strategies with different threshold settings, based on our evaluation dataset (see Section 5.1). The recall for lexical edit distance with Tc ≤2 is moderately high, but it is unable to detect the correct candidate for about one quarter of words. The combination of the lexical and phonemic strategies with Tc ≤2∨Tp ≤2 is more impressive, but the number of candidates has also soared. Note that increasing the edit distance further in both cases leads to an explosion in the average number of candidates, with serious computational implications for downstream processing. Thankfully, Tc ≤2 ∨Tp ≤1 leads to an extra increment in recall to 88.8%, with only a slight increase in the average number of candidates. Based on these results, we use Tc ≤2∨Tp ≤1 as the basis for confusion set generation. Examples of ill-formed words where we are unable to generate the standard lexical form are clippings such as fav “favourite” and convo “conversation”. In addition to generating the confusion set, we rank the candidates based on a trigram language model trained over 1.5GB of clean Twitter data, i.e. 
tweets which consist of all IV words: despite the prevalence of OOV words in Twitter, the sheer volume of the data means that it is relatively easy to collect large amounts of all-IV messages. To train the language model, we used SRILM (Stolcke, 2002) with the -<unk> option. If we truncate the ranking to the top 10% of candidates, the recall drops back to 84% with a 90% reduction in candidates. 4.2 Ill-formed Word Detection The next step is to detect whether a given OOV word in context is actually an ill-formed word or not, relative to its confusion set. To the best of our knowledge, we are the first to target the task of ill-formed word detection in the context of short text messages, although related work exists for text with lower relative occurrences of OOV words (Izumi et al., 2003; Sun et al., 2007). Due to the noisiness of the data, it is impractical to use full-blown syntactic or semantic features. The most direct source of evidence is IV words around an OOV word. Inspired by work on labelled sequential pattern extraction (Sun et al., 2007), we exploit large-scale edited corpus data to construct dependency-based features. First, we use the Stanford parser (Klein and Manning, 2003; de Marneffe et al., 2006) to extract dependencies from the NYT corpus (see Section 3.2). For example, from a sentence such as One obvious difference is the way they look, we would extract dependencies such as rcmod(way-6,look-8) and nsubj(look-8,they-7). We then transform the dependencies into relational features for each OOV word. Assuming that way were an OOV word, e.g., we would extract dependencies of the form (look,way,+2), indicating that look occurs 2 words after way. We choose dependencies to represent context because they are an effective way of capturing key relationships between words, and similar features can easily be extracted from tweets. Note that we don’t record the dependency type here, because we have no intention of dependency parsing text messages, due to their noisiness and the volume of the data. The counts of dependency forms are combined together to derive a confidence score, and the scored dependencies are stored in a dependency bank. Given the dependency-based features, a linear kernel SVM classifier (Fan et al., 2008) is trained on clean Twitter data, i.e. the subset of Twitter messages without OOV words. Each word is repre372 sented by its IV words within a context window of three words to either side of the target word, together with their relative positions in the form of (word1,word2,position) tuples, and their score in the dependency bank. These form the positive training exemplars. Negative exemplars are automatically constructed by replacing target words with highly-ranked candidates from their confusion set. Note that the classifier does not require any hand annotation, as all training exemplars are constructed automatically. To predict whether a given OOV word is ill-formed, we form an exemplar for each of its confusion candidates, and extract (word1,word2,position) features. If all its candidates are predicted to be negative by the model, we mark it as correct; otherwise, we treat it as ill-formed, and pass all candidates (not just positively-classified candidates) on to the candidate selection step. For example, given the message way yu lookin shuld be a sin and the OOV word lookin, we would generate context features for each candidate word such as (way,looking,-2), and classify each such candidate. 
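As an illustration of the confusion set generation strategy of Section 4.1, here is a small sketch combining repetition reduction, lexical edit distance and phonemic edit distance. The phonetic key below is a crude stand-in for the double metaphone algorithm (Philips, 2000), and the toy lexicon is hypothetical; an actual implementation would use the full IV dictionary and a proper double metaphone encoder.

```python
import re

def reduce_repetitions(word):
    """Collapse runs of more than 3 identical letters to 3 (cooool -> coool)."""
    return re.sub(r'(.)\1{3,}', r'\1\1\1', word)

def edit_distance(a, b):
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def toy_phonetic_key(word):
    """Crude stand-in for a phonemic transcription such as double metaphone:
    lower-case, collapse repetitions and drop vowels."""
    return re.sub(r'[aeiou]', '', reduce_repetitions(word.lower()))

def confusion_set(oov, lexicon, tc=2, tp=1, phonetic_key=toy_phonetic_key):
    """IV candidates within lexical edit distance tc OR phonemic edit distance tp
    of the OOV word (Tc <= 2 or Tp <= 1 in the paper's preferred setting)."""
    word = reduce_repetitions(oov.lower())
    code = phonetic_key(word)
    return {iv for iv in lexicon
            if edit_distance(word, iv) <= tc
            or edit_distance(code, phonetic_key(iv)) <= tp}

# Hypothetical toy lexicon; expect {'earthquake'} for the OOV word "earthquick".
lexicon = {"looking", "should", "cool", "earthquake", "good", "god"}
print(confusion_set("earthquick", lexicon))
```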
In training, it is possible for the exact same feature vector to occur as both positive and negative exemplars. To prevent positive exemplars being contaminated from the automatic generation, we remove all negative instances in such cases. The (word1,word2,position) features are sparse and sometimes lead to conservative results in illformed word detection. That is, without valid features, the SVM classifier tends to label uncertain cases as correct rather than ill-formed words. This is arguably the right approach to normalisation, in choosing to under- rather than over-normalise in cases of uncertainty. As the context for a target word often contains OOV words which don’t occur in the dependency bank, we expand the dependency features to include context tokens up to a phonemic edit distance of 1 from context tokens in the dependency bank. In this way, we generate dependency-based features for context words such as seee “see” in (seee, flm, +2) (based on the target word flm in the context of flm to seee). However, expanded dependency features may introduce noise, and we therefore introduce expanded dependency weights wd ∈ {0.0, 0.5, 1.0} to ameliorate the effects of noise: a weight of wd = 0.0 means no expansion, while 1.0 means expanded dependencies are indistinguishable from non-expanded (strict match) dependencies. We separately introduce a threshold td ∈ {1, 2, ..., 10} on the number of positive predictions returned by the detection classifier over the set of normalisation candidates for a given OOV token: the token is considered to be ill-formed iff td or more candidates are positively classified, i.e. predicted to be correct candidates. 4.3 Candidate Selection For OOV words which are predicted to be illformed, we select the most likely candidate from the confusion set as the basis of normalisation. The final selection is based on the following features, in line with previous work (Wong et al., 2006; Cook and Stevenson, 2009). Lexical edit distance, phonemic edit distance, prefix substring, suffix substring, and the longest common subsequence (LCS) are exploited to capture morphophonemic similarity. Both lexical and phonemic edit distance (ED) are normalised by the reciprocal of exp(ED). The prefix and suffix features are intended to capture the fact that leading and trailing characters are frequently dropped from words, e.g. in cases such as ish and talkin. We calculate the ratio of the LCS over the maximum string length between ill-formed word and the candidate, since the ill-formed word can be either longer or shorter than (or the same size as) the standard form. For example, mve can be restored to either me or move, depending on context. We normalise these ratios following Cook and Stevenson (2009). For context inference, we employ both language model- and dependency-based frequency features. Ranking by language model score is intuitively appealing for candidate selection, but our trigram model is trained only on clean Twitter data and illformed words often don’t have sufficient context for the language model to operate effectively, as in bt “but” in say 2 sum1 bt nt gonna say “say to someone but not going to say”. To consolidate the context modelling, we obtain dependencies from the dependency bank used in ill-formed word detection. Although text messages are of a different genre to edited newswire text, we assume they form similar 373 dependencies based on the common goal of getting across the message effectively. 
The dependency features can be used in noisy contexts and are robust to the effects of other ill-formed words, as they do not rely on contiguity. For example, uz “use” in i did #tt uz me and yu, dependencies can capture relationships like aux(use-4, do-2), which is beyond the capabilities of the language model due to the hashtag being treated as a correct OOV word. 5 Experiments 5.1 Dataset and baselines The aim of our experiments is to compare the effectiveness of different methodologies over text messages, based on two datasets: (1) an SMS corpus (Choudhury et al., 2007); and (2) a novel Twitter dataset developed as part of this research, based on a random sampling of 549 English tweets. The English tweets were annotated by three independent annotators. All OOV words were pre-identified, and the annotators were requested to determine: (a) whether each OOV word was ill-formed or not; and (b) what the standard form was for ill-formed words, subject to the task definition outlined in Section 3.1. The total number of ill-formed words contained in the SMS and Twitter datasets were 3849 and 1184, respectively.7 The language filtering of Twitter to automatically identify English tweets was based on the language identification method of Baldwin and Lui (2010), using the EuroGOV dataset as training data, a mixed unigram/bigram/trigram byte feature representation, and a skew divergence nearest prototype classifier. We reimplemented the state-of-art noisy channel model of Cook and Stevenson (2009) and SMT approach of Aw et al. (2006) as benchmark methods. We implement the SMT approach in Moses (Koehn et al., 2007), with synthetic training and tuning data of 90,000 and 1000 sentence pairs, respectively. This data is randomly sampled from the 1.5GB of clean Twitter data, and errors are generated according to distribution of SMS corpus. The 10-fold cross-validated BLEU score (Papineni et al., 2002) over this data is 0.81. 7The Twitter dataset is available at http://www. csse.unimelb.edu.au/research/lt/resources/ lexnorm/. In addition to comparing our method with competitor methods, we also study the contribution of different feature groups. We separately compare dictionary lookup over our Internet slang dictionary, the contextual feature model, and the word similarity feature model, as well as combinations of these three. 5.2 Evaluation metrics The evaluation of lexical normalisation consists of two stages (Hirst and Budanitsky, 2005): (1) illformed word detection, and (2) candidate selection. In terms of detection, we want to make sense of how well the system can identify ill-formed words and leave correct OOV words untouched. This step is crucial to further normalisation, because if correct OOV words are identified as ill-formed, the candidate selection step can never be correct. Conversely, if an ill-formed word is predicted to be correct, the candidate selection will have no chance to normalise it. We evaluate detection performance by token-level precision, recall and F-score (β = 1). Previous work over the SMS corpus has assumed perfect ill-formed word detection and focused only on the candidate selection step, so we evaluate ill-formed word detection for the Twitter data only. For candidate selection, we once again evaluate using token-level precision, recall and F-score. Additionally, we evaluate using the BLEU score over the normalised form of each message, as the SMT method can lead to perturbations of the token stream, vexing standard precision, recall and F-score evaluation. 
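Before turning to the results, the following sketch illustrates the kind of word-similarity features described in Section 4.3. The boolean prefix/suffix flags and the identity phonetic key are simplifications assumed here for brevity; the normalisation of the ratios following Cook and Stevenson (2009) is not reproduced.

```python
import math

def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def lcs_length(a, b):
    """Longest common subsequence length via dynamic programming."""
    prev = [0] * (len(b) + 1)
    for ca in a:
        cur = [0]
        for j, cb in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if ca == cb else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

def similarity_features(ill_formed, candidate, phonetic_key=lambda w: w):
    """Morphophonemic similarity features in the spirit of Section 4.3."""
    lex_ed = edit_distance(ill_formed, candidate)
    pho_ed = edit_distance(phonetic_key(ill_formed), phonetic_key(candidate))
    return {
        "lexical_sim": 1.0 / math.exp(lex_ed),    # edit distance normalised by 1/exp(ED)
        "phonemic_sim": 1.0 / math.exp(pho_ed),
        "shares_prefix": candidate.startswith(ill_formed[:2]),  # leading chars kept
        "shares_suffix": candidate.endswith(ill_formed[-2:]),   # trailing chars kept
        "lcs_ratio": lcs_length(ill_formed, candidate)
                     / max(len(ill_formed), len(candidate)),
    }

print(similarity_features("talkin", "talking"))
```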
5.3 Results and Analysis First, we test the impact of the wd and td values on ill-formed word detection effectiveness, based on dependencies from either the Spinn3r blog corpus (Blog: Burton et al. (2009)) or NYT. The results for precision, recall and F-score are presented in Figure 2. Some conclusions can be drawn from the graphs. First, higher detection threshold values (td) give better precision but lower recall. Generally, as td is raised from 1 to 10, the precision improves slightly but recall drops dramatically, with the net effect that the F-score decreases monotonically. Thus, we use a 374 Figure 2: Ill-formed word detection precision, recall and F-score smaller threshold, i.e. td = 1. Second, there are differences between the two corpora, with dependencies from the Blog corpus producing slightly lower precision but higher recall, compared with the NYT corpus. The lower precision for the Blog corpus appears to be due to the text not being as clean as NYT, introducing parser errors. Nevertheless, the difference in F-score between the two corpora is insignificant. Third, we obtain the best results, especially in terms of precision, for wd = 0.5, i.e. with expanded dependencies, but penalised relative to nonexpanded dependencies. Overall, the best F-score is 71.2%, with a precision of 61.1% and recall of 85.3%, obtained over the Blog corpus with td = 1 and wd = 0.5. Clearly there is significant room for immprovements in these results. We leave the improvement of ill-formed word detection for future work, and perform evaluation of candidate selection for Twitter assuming perfect ill-formed word detection, as for the SMS data. From Table 3, we see that the general performance of our proposed method on Twitter is better than that on SMS. To better understand this trend, we examined the annotations in the SMS corpus, and found them to be looser than ours, because they have different task specifications than our lexical normalisation. In our annotation, the annotators only normalised ill-formed word if they had high confidence of how to normalise, as with talkin “talking”. For ill-formed words where they couldn’t be certain of the standard form, the tokens were left untouched. However, in the SMS corpus, annotations such as sammis “same” are also included. This leads to a performance drop for our method over the SMS corpus. The noisy channel method of Cook and Stevenson (2009) shares similar features with word similarity (“WS”), However, when word similarity and context support are combined (“WS+CS”), our method outperforms the noisy channel method by about 7% and 12% in F-score over SMS and Twitter corpora, respectively. This can be explained as follows. First, the Cook and Stevenson (2009) method is typebased, so all token instances of a given ill-formed word will be normalised identically. In the Twitter data, however, the same word can be normalised differently depending on context, e.g. hw “how” in so hw many time remaining so I can calculate it? vs. hw “homework” in I need to finish my hw first. Second, the noisy channel method was developed specifically for SMS normalisation, in which clipping is the most prevalent form of lexical variation, while in the Twitter data, we commonly have instances of word lengthening for emphasis, such as moviiie “movie”. Having said this, our method is superior to the noisy channel method over both the SMS and Twitter data. The SMT approach is relatively stable on the two datasets, but well below the performance of our method. 
This is due to the limitations of the training data: we obtain the ill-formed words and their standard forms from the SMS corpus, but the ill-formed words in the SMS corpus are not sufficient to cover those in the Twitter data (and we don’t have sufficient Twitter data to train the SMT method directly). Thus, novel ill-formed words are missed in normalisation. This shows the shortcoming of supervised data-driven approaches that require annotated data to cover all possibilities of ill-formed words in Twitter. The dictionary lookup method (“DL”) unsurprisingly achieves the best precision, but the recall on Twitter is not competitive. Consequently, the Twitter normalisation cannot be tackled with dictionary lookup alone, although it is an effective preprocessing strategy when combined with more ro375 Dataset Evaluation NC MT DL WS CS WS+CS DL+WS+CS SMS Precision 0.465 — 0.927 0.521 0.116 0.532 0.756 Recall 0.464 — 0.597 0.520 0.116 0.531 0.754 F-score 0.464 — 0.726 0.520 0.116 0.531 0.755 BLEU 0.746 0.700 0.801 0.764 0.612 0.772 0.876 Twitter Precision 0.452 — 0.961 0.551 0.194 0.571 0.753 Recall 0.452 — 0.460 0.551 0.194 0.571 0.753 F-score 0.452 — 0.622 0.551 0.194 0.571 0.753 BLEU 0.857 0.728 0.861 0.878 0.797 0.884 0.934 Table 3: Candidate selection effectiveness on different datasets (NC = noisy channel model (Cook and Stevenson, 2009); MT = SMT (Aw et al., 2006); DL = dictionary lookup; WS = word similarity; CS = context support) bust techniques such as our proposed method, and effective at capturing common abbreviations such as gf “girlfriend”. Of the component methods proposed in this research, word similarity (“WS”) achieves higher precision and recall than context support (“CS”), signifying that many of the ill-formed words emanate from morphophonemic variations. However, when combined with word similarity features, context support improves over the basic method at a level of statistical significance (based on randomised estimation, p < 0.05: Yeh (2000)), indicating the complementarity of the two methods, especially on Twitter data. The best F-score is achieved when combining dictionary lookup, word similarity and context support (“DL+WS+CS”), in which ill-formed words are first looked up in the slang dictionary, and only if no match is found do we apply our normalisation method. We found several limitations in our proposed approach by analysing the output of our method. First, not all ill-formed words offer useful context. Some highly noisy tweets contain almost all misspellings and unique symbols, and thus no context features can be extracted. This also explains why “CS” features often fail. For such cases, the method falls back to context-independent normalisation. We found that only 32.6% ill-formed words have all IV words in their context windows. Moreover, the IV words may not occur in the dependency bank, further decreasing the effectiveness of context support features. Second, the different features are linearly combined, where a weighted combination is likely to give better results, although it also requires a certain amount of well-sampled annotations for tuning. 6 Conclusion and Future Work In this paper, we have proposed the task of lexical normalisation for short text messages, as found in Twitter and SMS data. We found that most illformed words are based on morphophonemic variation and proposed a cascaded method to detect and normalise ill-formed words. 
Our ill-formed word detector requires no explicit annotations, and the dependency-based features were shown to be somewhat effective, however, there was still a lot of room for improvement at ill-formed word detection. In normalisation, we compared our method with two benchmark methods from the literature, and achieved that highest F-score and BLEU score by integrating dictionary lookup, word similarity and context support modelling. In future work, we propose to pursue a number of directions. First, we plan to improve our ill-formed word detection classifier by introducing an OOV word whitelist. Furthermore, we intend to alleviate noisy contexts with a bootstrapping approach, in which ill-formed words with high confidence and no ambiguity will be replaced by their standard forms, and fed into the normalisation model as new training data. Acknowledgements NICTA is funded by the Australian government as represented by Department of Broadband, Communication and Digital Economy, and the Australian Research Council through the ICT centre of Excellence programme. References AiTi Aw, Min Zhang, Juan Xiao, and Jian Su. 2006. A phrase-based statistical model for SMS text normalization. In Proceedings of the 21st International Con376 ference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 33–40, Sydney, Australia. Timothy Baldwin and Marco Lui. 2010. Language identification: The long and the short of the matter. In HLT ’10: Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 229–237, Los Angeles, USA. Richard Beaufort, Sophie Roekhaut, Louise-Am´elie Cougnon, and C´edrick Fairon. 2010. A hybrid rule/model-based finite-state framework for normalizing SMS messages. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 770–779, Uppsala, Sweden. Eric Brill and Robert C. Moore. 2000. An improved error model for noisy channel spelling correction. In ACL ’00: Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, pages 286–293, Hong Kong. Kevin Burton, Akshay Java, and Ian Soboroff. 2009. The ICWSM 2009 Spinn3r Dataset. In Proceedings of the Third Annual Conference on Weblogs and Social Media, San Jose, USA. Monojit Choudhury, Rahul Saraf, Vijit Jain, Animesh Mukherjee, Sudeshna Sarkar, and Anupam Basu. 2007. Investigation and modeling of the structure of texting language. International Journal on Document Analysis and Recognition, 10:157–174. Paul Cook and Suzanne Stevenson. 2009. An unsupervised model for text message normalization. In CALC ’09: Proceedings of the Workshop on Computational Approaches to Linguistic Creativity, pages 71– 78, Boulder, USA. Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006), Genoa, Italy. Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871–1874. Graeme Hirst and Alexander Budanitsky. 2005. Correcting real-word spelling errors by restoring lexical cohesion. Natural Language Engineering, 11:87–111. Yijue How and Min-Yen Kan. 2005. Optimizing predictive text entry for short message service on mobile phones. 
In Human Computer Interfaces International (HCII 05), Las Vegas, USA. Emi Izumi, Kiyotaka Uchimoto, Toyomi Saiga, Thepchai Supnithi, and Hitoshi Isahara. 2003. Automatic error detection in the Japanese learners’ English spoken data. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 2, pages 145–148, Sapporo, Japan. Joseph Kaufmann and Jugal Kalita. 2010. Syntactic normalization of Twitter messages. In International Conference on Natural Language Processing, Kharagpur, India. Dan Klein and Christopher D. Manning. 2003. Fast exact inference with a factored model for natural language parsing. In Advances in Neural Information Processing Systems 15 (NIPS 2002), pages 3–10, Whistler, Canada. Catherine Kobus, Franois Yvon, and Graldine Damnati. 2008. Transcrire les SMS comme on reconnat la parole. In Actes de la Confrence sur le Traitement Automatique des Langues (TALN’08), pages 128–138. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, pages 177– 180, Prague, Czech Republic. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311–318, Philadelphia, USA. James L. Peterson. 1980. Computer programs for detecting and correcting spelling errors. Commun. ACM, 23:676–687, December. Lawrence Philips. 2000. The double metaphone search algorithm. C/C++ Users Journal, 18:38–43. Lawrence R. Rabiner. 1989. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286. Alan Ritter, Colin Cherry, and Bill Dolan. 2010. Unsupervised modeling of Twitter conversations. In HLT ’10: Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 172–180, Los Angeles, USA. Claude Elwood Shannon. 1948. A mathematical theory of communication. Bell System Technical Journal, 27:379–423, 623–656. Richard Sproat, Alan W. Black, Stanley Chen, Shankar Kumar, Mari Ostendorf, and Christopher Richards. 2001. Normalization of non-standard words. Computer Speech and Language, 15(3):287 – 333. Andreas Stolcke. 2002. Srilm - an extensible language modeling toolkit. In International Conference on Spo377 ken Language Processing, pages 901–904, Denver, USA. Guihua Sun, Gao Cong, Xiaohua Liu, Chin-Yew Lin, and Ming Zhou. 2007. Mining sequential patterns and tree patterns to detect erroneous sentences. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 81–88, Prague, Czech Republic. Kristina Toutanova and Robert C. Moore. 2002. Pronunciation modeling for improved spelling correction. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pages 144–151, Philadelphia, USA. Twitter. 2010. Big goals, big game, big records. http://blog.twitter.com/2010/06/ big-goals-big-game-big-records.html. Retrieved 4 August 2010. Wilson Wong, Wei Liu, and Mohammed Bennamoun. 2006. 
Integrated scoring for spelling error correction, abbreviation expansion and case restoration in dirty text. In Proceedings of the Fifth Australasian Conference on Data Mining and Analytics, pages 83–89, Sydney, Australia. Alexander Yeh. 2000. More accurate tests for the statistical significance of result differences. In Proceedings of the 18th Conference on Computational Linguistics Volume 2, COLING ’00, pages 947–953, Saarbr¨ucken, Germany. 378
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 379–388, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Topical Keyphrase Extraction from Twitter Wayne Xin Zhao† Jing Jiang‡ Jing He† Yang Song† Palakorn Achananuparp‡ Ee-Peng Lim‡ Xiaoming Li† †School of Electronics Engineering and Computer Science, Peking University ‡School of Information Systems, Singapore Management University {batmanfly,peaceful.he,songyangmagic}@gmail.com, {jingjiang,eplim,palakorna}@smu.edu.sg, [email protected] Abstract Summarizing and analyzing Twitter content is an important and challenging task. In this paper, we propose to extract topical keyphrases as one way to summarize Twitter. We propose a context-sensitive topical PageRank method for keyword ranking and a probabilistic scoring function that considers both relevance and interestingness of keyphrases for keyphrase ranking. We evaluate our proposed methods on a large Twitter data set. Experiments show that these methods are very effective for topical keyphrase extraction. 1 Introduction Twitter, a new microblogging website, has attracted hundreds of millions of users who publish short messages (a.k.a. tweets) on it. They either publish original tweets or retweet (i.e. forward) others’ tweets if they find them interesting. Twitter has been shown to be useful in a number of applications, including tweets as social sensors of realtime events (Sakaki et al., 2010), the sentiment prediction power of Twitter (Tumasjan et al., 2010), etc. However, current explorations are still in an early stage and our understanding of Twitter content still remains limited. How to automatically understand, extract and summarize useful Twitter content has therefore become an important and emergent research topic. In this paper, we propose to extract keyphrases as a way to summarize Twitter content. Traditionally, keyphrases are defined as a short list of terms to summarize the topics of a document (Turney, 2000). It can be used for various tasks such as document summarization (Litvak and Last, 2008) and indexing (Li et al., 2004). While it appears natural to use keyphrases to summarize Twitter content, compared with traditional text collections, keyphrase extraction from Twitter is more challenging in at least two aspects: 1) Tweets are much shorter than traditional articles and not all tweets contain useful information; 2) Topics tend to be more diverse in Twitter than in formal articles such as news reports. So far there is little work on keyword or keyphrase extraction from Twitter. Wu et al. (2010) proposed to automatically generate personalized tags for Twitter users. However, user-level tags may not be suitable to summarize the overall Twitter content within a certain period and/or from a certain group of people such as people in the same region. Existing work on keyphrase extraction identifies keyphrases from either individual documents or an entire text collection (Turney, 2000; Tomokiyo and Hurst, 2003). These approaches are not immediately applicable to Twitter because it does not make sense to extract keyphrases from a single tweet, and if we extract keyphrases from a whole tweet collection we will mix a diverse range of topics together, which makes it difficult for users to follow the extracted keyphrases. Therefore, in this paper, we propose to study the novel problem of extracting topical keyphrases for summarizing and analyzing Twitter content. 
In other words, we extract and organize keyphrases by topics learnt from Twitter. In our work, we follow the standard three steps of keyphrase extraction, namely, keyword ranking, candidate keyphrase generation 379 and keyphrase ranking. For keyword ranking, we modify the Topical PageRank method proposed by Liu et al. (2010) by introducing topic-sensitive score propagation. We find that topic-sensitive propagation can largely help boost the performance. For keyphrase ranking, we propose a principled probabilistic phrase ranking method, which can be flexibly combined with any keyword ranking method and candidate keyphrase generation method. Experiments on a large Twitter data set show that our proposed methods are very effective in topical keyphrase extraction from Twitter. Interestingly, our proposed keyphrase ranking method can incorporate users’ interests by modeling the retweet behavior. We further examine what topics are suitable for incorporating users’ interests for topical keyphrase extraction. To the best of our knowledge, our work is the first to study how to extract keyphrases from microblogs. We perform a thorough analysis of the proposed methods, which can be useful for future work in this direction. 2 Related Work Our work is related to unsupervised keyphrase extraction. Graph-based ranking methods are the state of the art in unsupervised keyphrase extraction. Mihalcea and Tarau (2004) proposed to use TextRank, a modified PageRank algorithm to extract keyphrases. Based on the study by Mihalcea and Tarau (2004), Liu et al. (2010) proposed to decompose a traditional random walk into multiple random walks specific to various topics. Language modeling methods (Tomokiyo and Hurst, 2003) and natural language processing techniques (Barker and Cornacchia, 2000) have also been used for unsupervised keyphrase extraction. Our keyword extraction method is mainly based on the study by Liu et al. (2010). The difference is that we model the score propagation with topic context, which can lower the effect of noise, especially in microblogs. Our work is also related to automatic topic labeling (Mei et al., 2007). We focus on extracting topical keyphrases in microblogs, which has its own challenges. Our method can also be used to label topics in other text collections. Another line of relevant research is Twitterrelated text mining. The most relevant work is by Wu et al. (2010), who directly applied TextRank (Mihalcea and Tarau, 2004) to extract keywords from tweets to tag users. Topic discovery from Twitter is also related to our work (Ramage et al., 2010), but we further extract keyphrases from each topic for summarizing and analyzing Twitter content. 3 Method 3.1 Preliminaries Let U be a set of Twitter users. Let C = {{du,m}Mu m=1}u∈U be a collection of tweets generated by U, where Mu is the total number of tweets generated by user u and du,m is the m-th tweet of u. Let V be the vocabulary. du,m consists of a sequence of words (wu,m,1, wu,m,2, . . . , wu,m,Nu,m) where Nu,m is the number of words in du,m and wu,m,n ∈V (1 ≤n ≤Nu,m). We also assume that there is a set of topics T over the collection C. Given T and C, topical keyphrase extraction is to discover a list of keyphrases for each topic t ∈T . Here each keyphrase is a sequence of words. To extract keyphrases, we first identify topics from the Twitter collection using topic models (Section 3.2). 
Next for each topic, we run a topical PageRank algorithm to rank keywords and then generate candidate keyphrases using the top ranked keywords (Section 3.3). Finally, we use a probabilistic model to rank the candidate keyphrases (Section 3.4). 3.2 Topic discovery We first describe how we discover the set of topics T . Author-topic models have been shown to be effective for topic modeling of microblogs (Weng et al., 2010; Hong and Davison, 2010). In Twitter, we observe an important characteristic of tweets: tweets are short and a single tweet tends to be about a single topic. So we apply a modified author-topic model called Twitter-LDA introduced by Zhao et al. (2011), which assumes a single topic assignment for an entire tweet. The model is based on the following assumptions. There is a set of topics T in Twitter, each represented by a word distribution. Each user has her topic interests modeled by a distribution over the topics. When a user wants to write a tweet, she first chooses a topic based on her topic distribution. Then she chooses a 380 1. Draw φB ∼Dir(β), π ∼Dir(γ) 2. For each topic t ∈T , (a) draw φt ∼Dir(β) 3. For each user u ∈U, (a) draw θu ∼Dir(α) (b) for each tweet du,m i. draw zu,m ∼Multi(θu) ii. for each word wu,m,n A. draw yu,m,n ∼Bernoulli(π) B. draw wu,m,n ∼Multi(φB) if yu,m,n = 0 and wu,m,n ∼ Multi(φzu,m) if yu,m,n = 1 Figure 1: The generation process of tweets. bag of words one by one based on the chosen topic. However, not all words in a tweet are closely related to the topic of that tweet; some are background words commonly used in tweets on different topics. Therefore, for each word in a tweet, the user first decides whether it is a background word or a topic word and then chooses the word from its respective word distribution. Formally, let φt denote the word distribution for topic t and φB the word distribution for background words. Let θu denote the topic distribution of user u. Let π denote a Bernoulli distribution that governs the choice between background words and topic words. The generation process of tweets is described in Figure 1. Each multinomial distribution is governed by some symmetric Dirichlet distribution parameterized by α, β or γ. 3.3 Topical PageRank for Keyword Ranking Topical PageRank was introduced by Liu et al. (2010) to identify keywords for future keyphrase extraction. It runs topic-biased PageRank for each topic separately and boosts those words with high relevance to the corresponding topic. Formally, the topic-specific PageRank scores can be defined as follows: Rt(wi) = λ X j:wj→wi e(wj, wi) O(wj) Rt(wj) + (1 −λ)Pt(wi), (1) where Rt(w) is the topic-specific PageRank score of word w in topic t, e(wj, wi) is the weight for the edge (wj →wi), O(wj) = P w′ e(wj, w′) and λ is a damping factor ranging from 0 to 1. The topicspecific preference value Pt(w) for each word w is its random jumping probability with the constraint that P w∈V Pt(w) = 1 given topic t. A large Rt(·) indicates a word is a good candidate keyword in topic t. We denote this original version of the Topical PageRank as TPR. However, the original TPR ignores the topic context when setting the edge weights; the edge weight is set by counting the number of co-occurrences of the two words within a certain window size. 
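The topic-biased PageRank of Equation (1) can be sketched compactly as follows; the word graph, the edge weights e(w_j, w_i) (however they are computed) and the preference vector P_t(w) are assumed to be given, and the toy inputs at the bottom are purely hypothetical.

```python
def topical_pagerank(edges, preference, damping=0.1, iters=100, tol=1e-8):
    """Topic-biased PageRank iteration, following Equation (1).

    edges:      dict mapping word w_j to a dict {w_i: e(w_j, w_i)} of weighted
                out-links (e.g. co-occurrence counts).
    preference: dict mapping each word to its topic-specific jump probability
                P_t(w); values are assumed to sum to 1.
    damping:    the lambda in Equation (1).
    """
    words = list(preference)
    rank = {w: 1.0 / len(words) for w in words}
    out_weight = {w: sum(edges.get(w, {}).values()) for w in words}  # O(w_j)
    for _ in range(iters):
        new_rank = {}
        for wi in words:
            prop = sum(edges.get(wj, {}).get(wi, 0.0) / out_weight[wj] * rank[wj]
                       for wj in words if out_weight[wj] > 0)
            new_rank[wi] = damping * prop + (1 - damping) * preference[wi]
        diff = sum(abs(new_rank[w] - rank[w]) for w in words)
        rank = new_rank
        if diff < tol:
            break
    return rank

# Purely hypothetical per-topic co-occurrence weights and word preferences.
edges = {"apple": {"ipad": 3, "iphone": 5}, "ipad": {"apple": 3},
         "iphone": {"apple": 5}, "juice": {"apple": 1}}
pref = {"apple": 0.4, "ipad": 0.3, "iphone": 0.2, "juice": 0.1}
print(sorted(topical_pagerank(edges, pref).items(), key=lambda kv: -kv[1]))
```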
Taking the topic of “electronic products” as an example, the word “juice” may co-occur frequently with a good keyword “apple” for this topic because of Apple electronic products, so “juice” may be ranked high by this context-free co-occurrence edge weight although it is not related to electronic products. In other words, context-free propagation may cause the scores to be off-topic. So in this paper, we propose to use a topic context sensitive PageRank method. Formally, we have Rt(wi) = λ X j:wj→wi et(wj, wi) Ot(wj) Rt(wj)+(1−λ)Pt(wi). (2) Here we compute the propagation from wj to wi in the context of topic t, namely, the edge weight from wj to wi is parameterized by t. In this paper, we compute edge weight et(wj, wi) between two words by counting the number of co-occurrences of these two words in tweets assigned to topic t. We denote this context-sensitive topical PageRank as cTPR. After keyword ranking using cTPR or any other method, we adopt a common candidate keyphrase generation method proposed by Mihalcea and Tarau (2004) as follows. We first select the top S keywords for each topic, and then look for combinations of these keywords that occur as frequent phrases in the text collection. More details are given in Section 4. 3.4 Probabilistic Models for Topical Keyphrase Ranking With the candidate keyphrases, our next step is to rank them. While a standard method is to simply aggregate the scores of keywords inside a candidate keyphrase as the score for the keyphrase, here we propose a different probabilistic scoring function. Our method is based on the following hypotheses about good keyphrases given a topic: 381 Figure 2: Assumptions of variable dependencies. Relevance: A good keyphrase should be closely related to the given topic and also discriminative. For example, for the topic “news,” “president obama” is a good keyphrase while “math class” is not. Interestingness: A good keyphrase should be interesting and can attract users’ attention. For example, for the topic “music,” “justin bieber” is more interesting than “song player.” Sometimes, there is a trade-off between these two properties and a good keyphrase has to balance both. Let R be a binary variable to denote relevance where 1 is relevant and 0 is irrelevant. Let I be another binary variable to denote interestingness where 1 is interesting and 0 is non-interesting. Let k denote a candidate keyphrase. Following the probabilistic relevance models in information retrieval (Lafferty and Zhai, 2003), we propose to use P(R = 1, I = 1|t, k) to rank candidate keyphrases for topic t. We have P(R = 1, I = 1|t, k) = P(R = 1|t, k)P(I = 1|t, k, R = 1) = P(I = 1|t, k, R = 1)P(R = 1|t, k) = P(I = 1|k)P(R = 1|t, k) = P(I = 1|k) × P(R = 1|t, k) P(R = 1|t, k) + P(R = 0|t, k) = P(I = 1|k) × 1 1 + P (R=0|t,k) P (R=1|t,k) = P(I = 1|k) × 1 1 + P (R=0,k|t) P (R=1,k|t) = P(I = 1|k) × 1 1 + P (R=0|t) P (R=1|t) × P (k|t,R=0) P (k|t,R=1) = P(I = 1|k) × 1 1 + P (R=0) P (R=1) × P (k|t,R=0) P (k|t,R=1) . Here we have assumed that I is independent of t and R given k, i.e. the interestingness of a keyphrase is independent of the topic or whether the keyphrase is relevant to the topic. We have also assumed that R is independent of t when k is unknown, i.e. without knowing the keyphrase, the relevance is independent of the topic. Our assumptions can be depicted by Figure 2. We further define δ = P(R=0) P(R=1). In general we can assume that P(R = 0) ≫P(R = 1) because there are much more non-relevant keyphrases than relevant ones, that is, δ ≫1. 
In this case, we have

log P(R = 1, I = 1|t, k)
= log [ P(I = 1|k) × 1 / (1 + δ × P(k|t, R = 0) / P(k|t, R = 1)) ]
≈ log [ P(I = 1|k) × P(k|t, R = 1) / P(k|t, R = 0) × 1/δ ]
= log P(I = 1|k) + log [ P(k|t, R = 1) / P(k|t, R = 0) ] − log δ.    (3)

We can see that the ranking score log P(R = 1, I = 1|t, k) can be decomposed into two components, a relevance score log [ P(k|t, R = 1) / P(k|t, R = 0) ] and an interestingness score log P(I = 1|k). The last term log δ is a constant and thus not relevant.

Estimating the relevance score Let a keyphrase candidate k be a sequence of words (w_1, w_2, . . . , w_N). Based on an independence assumption over words given R and t, we have

log [ P(k|t, R = 1) / P(k|t, R = 0) ] = log [ P(w_1 w_2 . . . w_N|t, R = 1) / P(w_1 w_2 . . . w_N|t, R = 0) ] = Σ_{n=1}^{N} log [ P(w_n|t, R = 1) / P(w_n|t, R = 0) ].    (4)

Given the topic model φt previously learned for topic t, we can set P(w|t, R = 1) to φt_w, i.e. the probability of w under φt. Following Griffiths and Steyvers (2004), we estimate φt_w as

φt_w = ( #(Ct, w) + β ) / ( #(Ct, ·) + β|V| ).    (5)

Here Ct denotes the collection of tweets assigned to topic t, #(Ct, w) is the number of times w appears in Ct, and #(Ct, ·) is the total number of words in Ct. P(w|t, R = 0) can be estimated using a smoothed background model:

P(w|t, R = 0) = ( #(C, w) + µ ) / ( #(C, ·) + µ|V| ).    (6)
Recall that Equation (9) is derived from P(R = 1, I = 1|t, k), but this probability does not allow us to directly incorporate any length preference. We further observe that Equation (9) tends to give longer keyphrases higher scores mainly due to the term |k|η. So here we heuristically incorporate our length preference by removing |k|η from Equation (9), resulting in the following final scoring function: Scoret(k) = log |ReTweetsk| + 1.0 |Tweetsk| + lavg (10) + X w∈k log #(Ct, w) + β #(C, w) + µ ! . 4 Experiments 4.1 Data Set and Preprocessing We use a Twitter data set collected from Singapore users for evaluation. We used Twitter REST API1 to facilitate the data collection. The majority of the tweets collected were published in a 20-week period from December 1, 2009 through April 18, 2010. We removed common stopwords and words which appeared in fewer than 10 tweets. We also removed all users who had fewer than 5 tweets. Some statistics of this data set after cleaning are shown in Table 1. We ran Twitter-LDA with 500 iterations of Gibbs sampling. After trying a few different numbers of 1http://apiwiki.twitter.com/w/page/22554663/REST-APIDocumentation 383 topics, we empirically set the number of topics to 30. We set α to 50.0/|T | as Griffiths and Steyvers (2004) suggested, but set β to a smaller value of 0.01 and γ to 20. We chose these parameter settings because they generally gave coherent and meaningful topics for our data set. We selected 10 topics that cover a diverse range of content in Twitter for evaluation of topical keyphrase extraction. The top 10 words of these topics are shown in Table 2. We also tried the standard LDA model and the author-topic model on our data set and found that our proposed topic model was better or at least comparable in terms of finding meaningful topics. In addition to generating meaningful topics, Twitter-LDA is much more convenient in supporting the computation of tweet-level statistics (e.g. the number of co-occurrences of two words in a specific topic) than the standard LDA or the author-topic model because Twitter-LDA assumes a single topic assignment for an entire tweet. 4.2 Methods for Comparison As we have described in Section 3.1, there are three steps to generate keyphrases, namely, keyword ranking, candidate keyphrase generation, and keyphrase ranking. We have proposed a context-sensitive topical PageRank method (cTPR) for the first step of keyword ranking, and a probabilistic scoring function for the third step of keyphrase ranking. We now describe the baseline methods we use to compare with our proposed methods. Keyword Ranking We compare our cTPR method with the original topical PageRank method (Equation (1)), which represents the state of the art. We refer to this baseline as TPR. For both TPR and cTPR, the damping factor is empirically set to 0.1, which always gives the best performance based on our preliminary experiments. We use normalized P(t|w) to set Pt(w) because our preliminary experiments showed that this was the best among the three choices discussed by Liu et al. (2010). This finding is also consistent with what Liu et al. (2010) found. In addition, we also use two other baselines for comparison: (1) kwBL1: ranking by P(w|t) = φt w. (2) kwBL2: ranking by P(t|w) = P(t)φt w P t′ P(t′)φt′ w . Keyphrase Ranking We use kpRelInt to denote our relevance and interestingness based keyphrase ranking function P(R = 1, I = 1|t, k), i.e. Equation (10). β and µ are empirically set to 0.01 and 500. 
Usually µ can be set to zero, but in our experiments we find that our ranking method needs a more uniform estimation of the background model. We use the following ranking functions for comparison: • kpBL1: Similar to what is used by Liu et al. (2010), we can rank candidate keyphrases by P w∈k f(w), where f(w) is the score assigned to word w by a keyword ranking method. • kpBL2: We consider another baseline ranking method by P w∈k log f(w). • kpRel: If we consider only relevance but not interestingness, we can rank candidate keyphrases by P w∈k log #(Ct,w)+β #(C,w)+µ . 4.3 Gold Standard Generation Since there is no existing test collection for topical keyphrase extraction from Twitter, we manually constructed our test collection. For each of the 10 selected topics, we ran all the methods to rank keywords. For each method we selected the top 3000 keywords and searched all the combinations of these words as phrases which have a frequency larger than 30. In order to achieve high phraseness, we first computed the minimum value of pointwise mutual information for all bigrams in one combination, and we removed combinations having a value below a threshold, which was empirically set to 2.135. Then we merged all these candidate phrases. We did not consider single-word phrases because we found that it would include too many frequent words that might not be useful for summaries. We asked two judges to judge the quality of the candidate keyphrases. The judges live in Singapore and had used Twitter before. For each topic, the judges were given the top topic words and a short topic description. Web search was also available. For each candidate keyphrase, we asked the judges to score it as follows: 2 (relevant, meaningful and informative), 1 (relevant but either too general or too specific, or informal) and 0 (irrelevant or meaningless). Here in addition to relevance, the other two criteria, namely, whether a phrase is meaningful and informative, were studied by Tomokiyo and Hurst 384 T2 T4 T5 T10 T12 T13 T18 T20 T23 T25 eat twitter love singapore singapore hot iphone song study win food tweet idol road #singapore rain google video school game dinner blog adam mrt #business weather social youtube time team lunch facebook watch sgreinfo #news cold media love homework match eating internet april east health morning ipad songs tomorrow play ice tweets hot park asia sun twitter bieber maths chelsea chicken follow lambert room market good free music class world cream msn awesome sqft world night app justin paper united tea followers girl price prices raining apple feature math liverpool hungry time american built bank air marketing twitter finish arsenal Table 2: Top 10 Words of Sample Topics on our Singapore Twitter Dateset. (2003). We then averaged the scores of the two judges as the final scores. The Cohen’s Kappa coefficients of the 10 topics range from 0.45 to 0.80, showing fair to good agreement2. We further discarded all candidates with an average score less than 1. The number of the remaining keyphrases for each topic ranges from 56 to 282. 4.4 Evaluation Metrics Traditionally keyphrase extraction is evaluated using precision and recall on all the extracted keyphrases. We choose not to use these measures for the following reasons: (1) Traditional keyphrase extraction works on single documents while we study topical keyphrase extraction. 
The gold standard keyphrase list for a single document is usually short and clean, while for each Twitter topic there can be many keyphrases, some are more relevant and interesting than others. (2) Our extracted topical keyphrases are meant for summarizing Twitter content, and they are likely to be directly shown to the users. It is therefore more meaningful to focus on the quality of the top-ranked keyphrases. Inspired by the popular nDCG metric in information retrieval (J¨arvelin and Kek¨al¨ainen, 2002), we define the following normalized keyphrase quality measure (nKQM) for a method M: nKQM@K = 1 |T | X t∈T PK j=1 1 log2(j+1)score(Mt,j) IdealScore(K,t) , where T is the set of topics, Mt,j is the jth keyphrase generated by method M for topic 2We find that judgments on topics related to social media (e.g. T4) and daily life (e.g. T13) tend to have a higher degree of disagreement. t, score(·) is the average score from the two human judges, and IdealScore(K,t) is the normalization factor—score of the top K keyphrases of topic t under the ideal ranking. Intuitively, if M returns more good keyphrases in top ranks, its nKQM value will be higher. We also use mean average precision (MAP) to measure the overall performance of keyphrase ranking: MAP = 1 |T | X t∈T 1 NM,t |Mt| X j=1 NM,t,j j 1(score(Mt,j) ≥1), where 1(S) is an indicator function which returns 1 when S is true and 0 otherwise, NM,t,j denotes the number of correct keyphrases among the top j keyphrases returned by M for topic t, and NM,t denotes the total number of correct keyphrases of topic t returned by M. 4.5 Experiment Results Evaluation of keyword ranking methods Since keyword ranking is the first step for keyphrase extraction, we first compare our keyword ranking method cTPR with other methods. For each topic, we pooled the top 20 keywords ranked by all four methods. We manually examined whether a word is a good keyword or a noisy word based on topic context. Then we computed the average number of noisy words in the 10 topics for each method. As shown in Table 5, we can observe that cTPR performed the best among the four methods. Since our final goal is to extract topical keyphrases, we further compare the performance of cTPR and TPR when they are combined with a keyphrase ranking algorithm. Here we use the two 385 Method nKQM@5 nKQM@10 nKQM@25 nKQM@50 MAP kpBL1 TPR 0.5015 0.54331 0.5611 0.5715 0.5984 kwBL1 0.6026 0.5683 0.5579 0.5254 0.5984 kwBL2 0.5418 0.5652 0.6038 0.5896 0.6279 cTPR 0.6109 0.6218 0.6139 0.6062 0.6608 kpBL2 TPR 0.7294 0.7172 0.6921 0.6433 0.6379 kwBL1 0.7111 0.6614 0.6306 0.5829 0.5416 kwBL2 0.5418 0.5652 0.6038 0.5896 0.6545 cTPR 0.7491 0.7429 0.6930 0.6519 0.6688 Table 3: Comparisons of keyphrase extraction for cTPR and baselines. Method nKQM@5 nKQM@10 nKQM@25 nKQM@50 MAP cTPR+kpBL1 0.61095 0.62182 0.61389 0.60618 0.6608 cTPR+kpBL2 0.74913 0.74294 0.69303 0.65194 0.6688 cTPR+kpRel 0.75361 0.74926 0.69645 0.65065 0.6696 cTPR+kpRelInt 0.81061 0.75184 0.71422 0.66319 0.6694 Table 4: Comparisons of keyphrase extraction for different keyphrase ranking methods. kwBL1 kwBL2 TPR cTPR 2 3 4.9 1.5 Table 5: Average number of noisy words among the top 20 keywords of the 10 topics. baseline keyphrase ranking algorithms kpBL1 and kpBL2. The comparison is shown in Table 3. We can see that cTPR is consistently better than the three other methods for both kpBL1 and kpBL2. Evaluation of keyphrase ranking methods In this section we compare keypharse ranking methods. 
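For reference, the nKQM@K values reported below can be computed as in the following sketch; the per-topic data layout and the function name are illustrative only, not a description of our evaluation scripts.

import math

def nkqm_at_k(method_scores, gold_scores, K):
    # Illustrative data layout (assumption, not from any released code):
    # method_scores[t]: averaged judge scores of the keyphrases returned by
    # a method for topic t, in the method's ranked order.
    # gold_scores[t]: averaged judge scores of all judged keyphrases for t,
    # used to compute the ideal (normalizing) ranking.
    total = 0.0
    for t, scores in method_scores.items():
        dcg = sum(s / math.log2(j + 2) for j, s in enumerate(scores[:K]))
        ideal = sum(s / math.log2(j + 2)
                    for j, s in enumerate(sorted(gold_scores[t],
                                                 reverse=True)[:K]))
        total += dcg / ideal
    return total / len(method_scores)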
Previously we have shown that cTPR is better than TPR, kwBL1 and kwBL2 for keyword ranking. Therefore we use cTPR as the keyword ranking method and examine the keyphrase ranking method kpRelInt with kpBL1, kpBL2 and kpRel when they are combined with cTPR. The results are shown in Table 4. From the results we can see the following: (1) Keyphrase ranking methods kpRelInt and kpRel are more effective than kpBL1 and kpBL2, especially when using the nKQM metric. (2) kpRelInt is better than kpRel, especially for the nKQM metric. Interestingly, we also see that for the nKQM metric, kpBL1, which is the most commonly used keyphrase ranking method, did not perform as well as kpBL2, a modified version of kpBL1. We also tested kpRelInt and kpRel on TPR, kwBL1 and kwBL2 and found that kpRelInt and kpRel are consistently better than kpBL2 and kpBL1. Due to space limit, we do not report all the results here. These findings support our assumption that our proposed keyphrase ranking method is effective. The comparison between kpBL2 with kpBL1 shows that taking the product of keyword scores is more effective than taking their sum. kpRel and kpRelInt also use the product of keyword scores. This may be because there is more noise in Twitter than traditional documents. Common words (e.g. “good”) and domain background words (e.g. “Singapore”) tend to gain higher weights during keyword ranking due to their high frequency, especially in graph-based method, but we do not want such words to contribute too much to keyphrase scores. Taking the product of keyword scores is therefore more suitable here than taking their sum. Further analysis of interestingness As shown in Table 4, kpRelInt performs better in terms of nKQM compared with kpRel. Here we study why it worked better for keyphrase ranking. The only difference between kpRel and kpRelInt is that kpRelInt includes the factor of user interests. By manually examining the top keyphrases, we find that the topics “Movie-TV” (T5), “News” (T12), “Music” (T20) and “Sports” (T25) particularly benefited from kpRelInt compared with other topics. We find that well-known named entities (e.g. celebrities, political leaders, football clubs and big companies) and significant events tend to be ranked higher by kpRelInt than kpRel. We then counted the numbers of entity and event keyphrases for these four topics retrieved by different methods, shown in Table 6 . We can see that in these four topics, kpRelInt is consistently better than kpRel in terms of the number of entity and event keyphrases retrieved. 386 T2 T5 T10 T12 T20 T25 chicken rice adam lambert north east president obama justin bieber manchester united ice cream jack neo rent blk magnitude earthquake music video champions league fried chicken american idol east coast volcanic ash lady gaga football match curry rice david archuleta east plaza prime minister taylor swift premier league chicken porridge robert pattinson west coast iceland volcano demi lovato f1 grand prix curry chicken alexander mcqueen bukit timah chile earthquake youtube channel tiger woods beef noodles april fools street view goldman sachs miley cyrus grand slam(tennis) chocolate cake harry potter orchard road coe prices telephone video liverpool fans cheese fries april fool toa payoh haiti earthquake song lyrics final score instant noodles andrew garcia marina bay #singapore #business joe jonas manchester derby Table 7: Top 10 keyphrases of 6 topics from cTPR+kpRelInt. 
Methods T5 T12 T20 T25 cTPR+kpRel 8 9 16 11 cTPR+kpRelInt 10 12 17 14 Table 6: Numbers of entity and event keyphrases retrieved by different methods within top 20. On the other hand, we also find that for some topics interestingness helped little or even hurt the performance a little, e.g. for the topics “Food” and “Traffic.” We find that the keyphrases in these topics are stable and change less over time. This may suggest that we can modify our formula to handle different topics different. We will explore this direction in our future work. Parameter settings We also examine how the parameters in our model affect the performance. λ: We performed a search from 0.1 to 0.9 with a step size of 0.1. We found λ = 0.1 was the optimal parameter for cTPR and TPR. However, TPR is more sensitive to λ. The performance went down quickly with λ increasing. µ: We checked the overall performance with µ ∈{400, 450, 500, 550, 600}. We found that µ = 500 ≈0.01|V| gave the best performance generally for cTPR. The performance difference is not very significant between these different values of µ, which indicates that the our method is robust. 4.6 Qualitative evaluation of cTPR+kpRelInt We show the top 10 keyphrases discovered by cTPR+kRelInt in Table 7. We can observe that these keyphrases are clear, interesting and informative for summarizing Twitter topics. We hypothesize that the following applications can benefit from the extracted keyphrases: Automatic generation of realtime trendy phrases: For exampoe, keyphrases in the topic “Food” (T2) can be used to help online restaurant reviews. Event detection and topic tracking: In the topic “News” top keyphrases can be used as candidate trendy topics for event detection and topic tracking. Automatic discovery of important named entities: As discussed previously, our methods tend to rank important named entities such as celebrities in high ranks. 5 Conclusion In this paper, we studied the novel problem of topical keyphrase extraction for summarizing and analyzing Twitter content. We proposed the context-sensitive topical PageRank (cTPR) method for keyword ranking. Experiments showed that cTPR is consistently better than the original TPR and other baseline methods in terms of top keyword and keyphrase extraction. For keyphrase ranking, we proposed a probabilistic ranking method, which models both relevance and interestingness of keyphrases. In our experiments, this method is shown to be very effective to boost the performance of keyphrase extraction for different kinds of keyword ranking methods. In the future, we may consider how to incorporate keyword scores into our keyphrase ranking method. Note that we propose to rank keyphrases by a general formula P(R = 1, I = 1|t, k) and we have made some approximations based on reasonable assumptions. There should be other potential ways to estimate P(R = 1, I = 1|t, k). Acknowledgements This work was done during Xin Zhao’s visit to the Singapore Management University. Xin Zhao and Xiaoming Li are partially supported by NSFC under 387 the grant No. 60933004, 61073082, 61050009 and HGJ Grant No. 2011ZX01042-001-001. References Ken Barker and Nadia Cornacchia. 2000. Using noun phrase heads to extract document keyphrases. In Proceedings of the 13th Biennial Conference of the Canadian Society on Computational Studies of Intelligence: Advances in Artificial Intelligence, pages 40–52. Thomas L. Griffiths and Mark Steyvers. 2004. Finding scientific topics. 
Proceedings of the National Academy of Sciences of the United States of America, 101(Suppl. 1):5228–5235. Liangjie Hong and Brian D. Davison. 2010. Empirical study of topic modeling in Twitter. In Proceedings of the First Workshop on Social Media Analytics. Kalervo J¨arvelin and Jaana Kek¨al¨ainen. 2002. Cumulated gain-based evaluation of ir techniques. ACM Transactions on Information Systems, 20(4):422–446. John Lafferty and Chengxiang Zhai. 2003. Probabilistic relevance models based on document and query generation. Language Modeling and Information Retrieval, 13. Quanzhi Li, Yi-Fang Wu, Razvan Bot, and Xin Chen. 2004. Incorporating document keyphrases in search results. In Proceedings of the 10th Americas Conference on Information Systems. Marina Litvak and Mark Last. 2008. Graph-based keyword extraction for single-document summarization. In Proceedings of the Workshop on Multi-source Multilingual Information Extraction and Summarization, pages 17–24. Zhiyuan Liu, Wenyi Huang, Yabin Zheng, and Maosong Sun. 2010. Automatic keyphrase extraction via topic decomposition. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 366–376. Qiaozhu Mei, Xuehua Shen, and ChengXiang Zhai. 2007. Automatic labeling of multinomial topic models. In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 490–499. R. Mihalcea and P. Tarau. 2004. TextRank: Bringing order into texts. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing. Daniel Ramage, Susan Dumais, and Dan Liebling. 2010. Characterizing micorblogs with topic models. In Proceedings of the 4th International Conference on Weblogs and Social Media. Takeshi Sakaki, Makoto Okazaki, and Yutaka Matsuo. 2010. Earthquake shakes Twitter users: real-time event detection by social sensors. In Proceedings of the 19th International World Wide Web Conference. Takashi Tomokiyo and Matthew Hurst. 2003. A language model approach to keyphrase extraction. In Proceedings of the ACL 2003 Workshop on Multiword Expressions: Analysis, Acquisition and Treatment, pages 33–40. Andranik Tumasjan, Timm O. Sprenger, Philipp G. Sandner, and Isabell M. Welpe. 2010. Predicting elections with Twitter: What 140 characters reveal about political sentiment. In Proceedings of the 4th International Conference on Weblogs and Social Media. Peter Turney. 2000. Learning algorithms for keyphrase extraction. Information Retrieval, (4):303–336. Jianshu Weng, Ee-Peng Lim, Jing Jiang, and Qi He. 2010. TwitterRank: finding topic-sensitive influential twitterers. In Proceedings of the third ACM International Conference on Web Search and Data Mining. Wei Wu, Bin Zhang, and Mari Ostendorf. 2010. Automatic generation of personalized annotation tags for twitter users. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 689–692. Xin Zhao, Jing Jiang, Jianshu Weng, Jing He, Lim EePeng, Hongfei Yan, and Xiaoming Li. 2011. Comparing Twitter and traditional media using topic models. In Proceedings of the 33rd European Conference on Information Retrieval. 388
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 32–42, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Combining Morpheme-based Machine Translation with Post-processing Morpheme Prediction Ann Clifton and Anoop Sarkar Simon Fraser University Burnaby, British Columbia, Canada {ann clifton,anoop}@sfu.ca Abstract This paper extends the training and tuning regime for phrase-based statistical machine translation to obtain fluent translations into morphologically complex languages (we build an English to Finnish translation system). Our methods use unsupervised morphology induction. Unlike previous work we focus on morphologically productive phrase pairs – our decoder can combine morphemes across phrase boundaries. Morphemes in the target language may not have a corresponding morpheme or word in the source language. Therefore, we propose a novel combination of post-processing morphology prediction with morpheme-based translation. We show, using both automatic evaluation scores and linguistically motivated analyses of the output, that our methods outperform previously proposed ones and provide the best known results on the EnglishFinnish Europarl translation task. Our methods are mostly language independent, so they should improve translation into other target languages with complex morphology. 1 Translation and Morphology Languages with rich morphological systems present significant hurdles for statistical machine translation (SMT), most notably data sparsity, source-target asymmetry, and problems with automatic evaluation. In this work, we propose to address the problem of morphological complexity in an Englishto-Finnish MT task within a phrase-based translation framework. We focus on unsupervised segmentation methods to derive the morphological information supplied to the MT model in order to provide coverage on very large datasets and for languages with few hand-annotated resources. In fact, in our experiments, unsupervised morphology always outperforms the use of a hand-built morphological analyzer. Rather than focusing on a few linguistically motivated aspects of Finnish morphological behaviour, we develop techniques for handling morphological complexity in general. We chose Finnish as our target language for this work, because it exemplifies many of the problems morphologically complex languages present for SMT. Among all the languages in the Europarl data-set, Finnish is the most difficult language to translate from and into, as was demonstrated in the MT Summit shared task (Koehn, 2005). Another reason is the current lack of knowledge about how to apply SMT successfully to agglutinative languages like Turkish or Finnish. Our main contributions are: 1) the introduction of the notion of segmented translation where we explicitly allow phrase pairs that can end with a dangling morpheme, which can connect with other morphemes as part of the translation process, and 2) the use of a fully segmented translation model in combination with a post-processing morpheme prediction system, using unsupervised morphology induction. Both of these approaches beat the state of the art on the English-Finnish translation task. Morphology can express both content and function categories, and our experiments show that it is important to use morphology both within the translation model (for morphology with content) and outside it (for morphology contributing to fluency). 
Automatic evaluation measures for MT, BLEU (Papineni et al., 2002), WER (Word Error Rate) and PER (Position Independent Word Error Rate) use the word as the basic unit rather than morphemes. In a word com32 prised of multiple morphemes, getting even a single morpheme wrong means the entire word is wrong. In addition to standard MT evaluation measures, we perform a detailed linguistic analysis of the output. Our proposed approaches are significantly better than the state of the art, achieving the highest reported BLEU scores on the English-Finnish Europarl version 3 data-set. Our linguistic analysis shows that our models have fewer morpho-syntactic errors compared to the word-based baseline. 2 Models 2.1 Baseline Models We set up three baseline models for comparison in this work. The first is a basic wordbased model (called Baseline in the results); we trained this on the original unsegmented version of the text. Our second baseline is a factored translation model (Koehn and Hoang, 2007) (called Factored), which used as factors the word, “stem”1 and suffix. These are derived from the same unsupervised segmentation model used in other experiments. The results (Table 3) show that a factored model was unable to match the scores of a simple wordbased baseline. We hypothesize that this may be an inherently difficult representational form for a language with the degree of morphological complexity found in Finnish. Because the morphology generation must be precomputed, for languages with a high degree of morphological complexity, the combinatorial explosion makes it unmanageable to capture the full range of morphological productivity. In addition, because the morphological variants are generated on a per-word basis within a given phrase, it excludes productive morphological combination across phrase boundaries and makes it impossible for the model to take into account any longdistance dependencies between morphemes. We conclude from this result that it may be more useful for an agglutinative language to use morphology beyond the confines of the phrasal unit, and condition its generation on more than just the local target stem. In order to compare the 1see Section 2.2. performance of unsupervised segmentation for translation, our third baseline is a segmented translation model based on a supervised segmentation model (called Sup), using the hand-built Omorfimorphological analyzer (Pirinen and Listenmaa, 2007), which provided slightly higher BLEU scores than the word-based baseline. 2.2 Segmented Translation For segmented translation models, it cannot be taken for granted that greater linguistic accuracy in segmentation yields improved translation (Chang et al., 2008). Rather, the goal in segmentation for translation is instead to maximize the amount of lexical content-carrying morphology, while generalizing over the information not helpful for improving the translation model. We therefore trained several different segmentation models, considering factors of granularity, coverage, and source-target symmetry. We performed unsupervised segmentation of the target data, using Morfessor (Creutz and Lagus, 2005) and Paramor (Monson, 2008), two top systems from the Morpho Challenge 2008 (their combined output was the Morpho Challenge winner). However, translation models based upon either Paramor alone or the combined systems output could not match the wordbased baseline, so we concentrated on Morfessor. Morfessor uses minimum description length criteria to train a HMM-based segmentation model. 
When tested against a human-annotated gold standard of linguistic morpheme segmentations for Finnish, this algorithm outperforms competing unsupervised methods, achieving an F-score of 67.0% on a 3 million sentence corpus (Creutz and Lagus, 2006). Varying the perplexity threshold in Morfessor does not segment more word types, but rather over-segments the same word types. In order to get robust, common segmentations, we trained the segmenter on the 5000 most frequent words2; we then used this to segment the entire data set. In order to improve coverage, we then further segmented 2For the factored model baseline we also used the same setting perplexity = 30, 5,000 most frequent words, but with all but the last suffix collapsed and called the “stem”. 33 Training Set Test Set Total 64,106,047 21,938 Morph 30,837,615 5,191 Hanging Morph 10,906,406 296 Table 1: Morpheme occurences in the phrase table and in translation. any word type that contained a match from the most frequent suffix set, looking for the longest matching suffix character string. We call this method Unsup L-match. After the segmentation, word-internal morpheme boundary markers were inserted into the segmented text to be used to reconstruct the surface forms in the MT output. We then trained the Moses phrase-based system (Koehn et al., 2007) on the segmented and marked text. After decoding, it was a simple matter to join together all adjacent morphemes with word-internal boundary markers to reconstruct the surface forms. Figure 1(a) gives the full model overview for all the variants of the segmented translation model (supervised/unsupervised; with and without the Unsup L-match procedure). Table 1 shows how morphemes are being used in the MT system. Of the phrases that included segmentations (‘Morph’ in Table 1), roughly a third were ‘productive’, i.e. had a hanging morpheme (with a form such as stem+) that could be joined to a suffix (‘Hanging Morph’ in Table 1). However, in phrases used while decoding the development and test data, roughly a quarter of the phrases that generated the translated output included segmentations, but of these, only a small fraction (6%) had a hanging morpheme; and while there are many possible reasons to account for this we were unable to find a single convincing cause. 2.3 Morphology Generation Morphology generation as a post-processing step allows major vocabulary reduction in the translation model, and allows the use of morphologically targeted features for modeling inflection. A possible disadvantage of this approach is that in this model there is no opportunity to consider the morphology in translation since it is removed prior to training the translation model. Morphology generation models can use a variety of bilingual and contextual information to capture dependencies between morphemes, often more long-distance than what is possible using n-gram language models over morphemes in the segmented model. Similar to previous work (Minkov et al., 2007; Toutanova et al., 2008), we model morphology generation as a sequence learning problem. Unlike previous work, we use unsupervised morphology induction and use automatically generated suffix classes as tags. The first phase of our morphology prediction model is to train a MT system that produces morphologically simplified word forms in the target language. The output word forms are complex stems (a stem and some suffixes) but still missing some important suffix morphemes. 
In the second phase, the output of the MT decoder is then tagged with a sequence of abstract suffix tags. In particular, the output of the MT decoder is a sequence of complex stems denoted by x and the output is a sequence of suffix class tags denoted by y. We use a list of parts from (x,y) and map to a d-dimensional feature vector Φ(x, y), with each dimension being a real number. We infer the best sequence of tags using: F(x) = argmax y p(y | x, w) where F(x) returns the highest scoring output y∗. A conditional random field (CRF) (Lafferty et al., 2001) defines the conditional probability as a linear score for each candidate y and a global normalization term: log p(y | x, w) = Φ(x, y) · w −log Z where Z = P y′∈GEN(x) exp(Φ(x, y′) · w). We use stochastic gradient descent (using crfsgd3) to train the weight vector w. So far, this is all off-the-shelf sequence learning. However, the output y∗from the CRF decoder is still only a sequence of abstract suffix tags. The third and final phase in our morphology prediction model 3http://leon.bottou.org/projects/sgd 34 Morphological Pre-Processing English Training Data Finnish Training Data words stem+ +morph words Post-Process: Morph Re-Stitching stem+ +morph Evaluation against original reference Fully inflected surface form MT System Alignment: word word word stem+ +morph stem (a) Segmented Translation Model MT System Alignment: word word word stem+ +morph1+ stem Morphological Pre-Processing 1 English Training Data Finnish Training Data words stem+ +morph1+ words Post-Process 1: Morph Re-Stitching stem+ +morph1+ Post-Process 2: CRF Morphology Generation complex stem: stem+morph1+ Language Model surface form mapping stem+morph1+ +morph2 Evaluation against original reference Fully inflected surface form Morphological Pre-Processing 2 stem+ +morph1+ +morph2 (b) Post-Processing Model Translation & Generation Figure 1: Training and testing pipelines for the SMT models. is to take the abstract suffix tag sequence y∗and then map it into fully inflected word forms, and rank those outputs using a morphemic language model. The abstract suffix tags are extracted from the unsupervised morpheme learning process, and are carefully designed to enable CRF training and decoding. We call this model CRFLM for short. Figure 1(b) shows the full pipeline and Figure 2 shows a worked example of all the steps involved. We use the morphologically segmented training data (obtained using the segmented corpus described in Section 2.24) and remove selected suffixes to create a morphologically simplified version of the training data. The MT model is trained on the morphologically simplified training data. The output from the MT system is then used as input to the CRF model. The CRF model was trained on a ∼210,000 Finnish sentences, consisting of ∼1.5 million tokens; the 2,000 sentence Europarl test set consisted of 41,434 stem tokens. The labels in the output sequence y were obtained by selecting the most productive 150 stems, and then collapsing certain vowels into equivalence classes corresponding to Finnish vowel harmony patterns. Thus 4Note that unlike Section 2.2 we do not use Unsup L-match because when evaluating the CRF model on the suffix prediction task it obtained 95.61% without using Unsup L-match and 82.99% when using Unsup L-match. variants -k¨o and -ko become vowel-generic enclitic particle -kO, and variants -ss¨a and -ssa become the vowel-generic inessive case marker -ssA, etc. This is the only language-specific component of our translation model. 
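As a rough illustration of this collapsing step, the mapping can be implemented as a simple character-level substitution. The sketch below covers the two pairs mentioned above together with the remaining standard Finnish harmony pair u/y, which we add here only by analogy as an assumption.

# Map front/back vowel-harmony variants in a suffix onto vowel-generic
# abstract symbols, so that morphophonemic variants of the same morpheme
# share a single CRF output label (e.g. -ko/-kö -> -kO, -ssa/-ssä -> -ssA).
# The 'u'/'y' pair is added by analogy; only a/ä and o/ö appear in the
# examples in the text above.
HARMONY = {'a': 'A', 'ä': 'A', 'o': 'O', 'ö': 'O', 'u': 'U', 'y': 'U'}

def abstract_suffix_tag(suffix):
    return ''.join(HARMONY.get(ch, ch) for ch in suffix)

assert abstract_suffix_tag('+ko') == abstract_suffix_tag('+kö') == '+kO'
assert abstract_suffix_tag('+ssa') == abstract_suffix_tag('+ssä') == '+ssA'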
However, we expect this approach to work for other agglutinative languages as well. For fusional languages like Spanish, another mapping from suffix to abstract tags might be needed. These suffix transformations to their equivalence classes prevent morphophonemic variants of the same morpheme from competing against each other in the prediction model. This resulted in 44 possible label outputs per stem which was a reasonable sized tag-set for CRF training. The CRF was trained on monolingual features of the segmented text for suffix prediction, where t is the current token: Word Stem st−n, .., st, .., st+n(n = 4) Morph Prediction yt−2, yt−1, yt With this simple feature set, we were able to use features over longer distances, resulting in a total of 1,110,075 model features. After CRF based recovery of the suffix tag sequence, we use a bigram language model trained on a full segmented version on the training data to recover the original vowels. We used bigrams only, because the suffix vowel harmony alternation depends only upon the preceding phonemes in the word from which it was segmented. 35 original training data: koskevaa mietint¨o¨a k¨asitell¨a¨an segmentation: koske+ +va+ +a mietint¨o+ +¨a k¨asi+ +te+ +ll¨a+ +¨a+ +n (train bigram language model with mapping A = { a, ¨a }) map final suffix to abstract tag-set: koske+ +va+ +A mietint¨o+ +A k¨asi+ +te+ +ll¨a+ +¨a+ +n (train CRF model to predict the final suffix) peeling of final suffix: koske+ +va+ mietint¨o+ k¨asi+ +te+ +ll¨a+ +¨a+ (train SMT model on this transformation of training data) (a) Training decoder output: koske+ +va+ mietint¨o+ k¨asi+ +te+ +ll¨a+ +¨a+ decoder output stitched up: koskeva+ mietint¨o+ k¨asitell¨a¨a+ CRF model prediction: x = ‘koskeva+ mietint¨o+ k¨asitell¨a¨a+’, y = ‘+A +A +n’ koskeva+ +A mietint¨o+ +A k¨asitell¨a¨a+ +n unstitch morphemes: koske+ +va+ +A mietint¨o+ +A k¨asi+ +te+ +ll¨a+ +¨a+ +n language model disambiguation: koske+ +va+ +a mietint¨o+ +¨a k¨asi+ +te+ +ll¨a+ +¨a+ +n final stitching: koskevaa mietint¨o¨a k¨asitell¨a¨an (the output is then compared to the reference translation) (b) Decoding Figure 2: Worked example of all steps in the post-processing morphology prediction model. 3 Experimental Results For all of the models built in this paper, we used the Europarl version 3 corpus (Koehn, 2005) English-Finnish training data set, as well as the standard development and test data sets. Our parallel training data consists of ∼1 million sentences of 40 words or less, while the development and test sets were each 2,000 sentences long. In all the experiments conducted in this paper, we used the Moses5 phrase-based translation system (Koehn et al., 2007), 2008 version. We trained all of the Moses systems herein using the standard features: language model, reordering model, translation model, and word penalty; in addition to these, the factored experiments called for additional translation and generation features for the added factors as noted above. We used in all experiments the following settings: a hypothesis stack size 100, distortion limit 6, phrase translations limit 20, and maximum phrase length 20. For the language models, we used SRILM 5-gram language models (Stolcke, 2002) for all factors. For our word-based Baseline system, we trained a word-based model using the same Moses system with identical settings. For evaluation against segmented translation systems in segmented forms before word reconstruction, we also segmented the baseline system’s word-based output. 
All the BLEU scores reported are for lowercase evaluation. We did an initial evaluation of the segmented output translation for each system using the no5http://www.statmt.org/moses/ Segmentation m-BLEU No Uni Baseline 14.84±0.69 9.89 Sup 18.41±0.69 13.49 Unsup L-match 20.74±0.68 15.89 Table 2: Segmented Model Scores. Sup refers to the supervised segmentation baseline model. m-BLEU indicates that the segmented output was evaluated against a segmented version of the reference (this measure does not have the same correlation with human judgement as BLEU). No Uni indicates the segmented BLEU score without unigrams. tion of m-BLEU score (Luong et al., 2010) where the BLEU score is computed by comparing the segmented output with a segmented reference translation. Table 2 shows the m-BLEU scores for various systems. We also show the m-BLEU score without unigrams, since over-segmentation could lead to artificially high m-BLEU scores. In fact, if we compare the relative improvement of our m-BLEU scores for the Unsup L-match system we see a relative improvement of 39.75% over the baseline. Luong et. al. (2010) report an m-BLEU score of 55.64% but obtain a relative improvement of 0.6% over their baseline m-BLEU score. We find that when using a good segmentation model, segmentation of the morphologically complex target language improves model performance over an unsegmented baseline (the confidence scores come from bootstrap resampling). Table 3 shows the evaluation scores for all the baselines and the methods introduced in this paper using standard wordbased lowercase BLEU, WER and PER. We do 36 Model BLEU WER TER Baseline 14.68 74.96 72.42 Factored 14.22 76.68 74.15 (Luong et.al, 2010) 14.82 Sup 14.90 74.56 71.84 Unsup L-match 15.09∗ 74.46 71.78 CRF-LM 14.87 73.71 71.15 Table 3: Test Scores: lowercase BLEU, WER and TER. The ∗indicates a statistically significant improvement of BLEU score over the Baseline model. The boldface scores are the best performing scores per evaluation measure. better than (Luong et al., 2010), the previous best score for this task. We also show a better relative improvement over our baseline when compared to (Luong et al., 2010): a relative improvement of 4.86% for Unsup L-match compared to our baseline word-based model, compared to their 1.65% improvement over their baseline word-based model. Our best performing method used unsupervised morphology with L-match (see Section 2.2) and the improvement is significant: bootstrap resampling provides a confidence margin of ±0.77 and a t-test (Collins et al., 2005) showed significance with p = 0.001. 3.1 Morphological Fluency Analysis To see how well the models were doing at getting morphology right, we examined several patterns of morphological behavior. While we wish to explore minimally supervised morphological MT models, and use as little language specific information as possible, we do want to use linguistic analysis on the output of our system to see how well the models capture essential morphological information in the target language. So, we ran the word-based baseline system, the segmented model (Unsup L-match), and the prediction model (CRF-LM) outputs, along with the reference translation through the supervised morphological analyzer Omorfi(Pirinen and Listenmaa, 2007). Using this analysis, we looked at a variety of linguistic constructions that might reveal patterns in morphological behavior. 
These were: (a) explicitly marked noun forms, (b) noun-adjective case agreement, (c) subject-verb person/number agreement, (d) transitive object case marking, (e) postpositions, and (f) possession. In each of these categories, we looked for construction matches on a per-sentence level between the models’ output and the reference translation. Table 4 shows the models’ performance on the constructions we examined. In all of the categories, the CRF-LM model achieves the best precision score, as we explain below, while the Unsup L-match model most frequently gets the highest recall score. A general pattern in the most prevalent of these constructions is that the baseline tends to prefer the least marked form for noun cases (corresponding to the nominative) more than the reference or the CRF-LM model. The baseline leaves nouns in the (unmarked) nominative far more than the reference, while the CRF-LM model comes much closer, so it seems to fare better at explicitly marking forms, rather than defaulting to the more frequent unmarked form. Finnish adjectives must be marked with the same case as their head noun, while verbs must agree in person and number with their subject. We saw that in both these categories, the CRFLM model outperforms for precision, while the segmented model gets the best recall. In addition, Finnish generally marks direct objects of verbs with the accusative or the partitive case; we observed more accusative/partitive-marked nouns following verbs in the CRF-LM output than in the baseline, as illustrated by example (1) in Fig. 3. While neither translation picks the same verb as in the reference for the input ‘clarify,’ the CRFLM-output paraphrases it by using a grammatical construction of the transitive verb followed by a noun phrase inflected with the accusative case, correctly capturing the transitive construction. The baseline translation instead follows ‘give’ with a direct object in the nominative case. To help clarify the constructions in question, we have used Google Translate6 to provide back6http://translate.google.com/ 37 Construction Freq. Baseline Unsup L-match CRF-LM P R F P R F P R F Noun Marking 5.5145 51.74 78.48 62.37 53.11 83.63 64.96 54.99 80.21 65.25 Trans Obj 1.0022 32.35 27.50 29.73 33.47 29.64 31.44 35.83 30.71 33.07 Noun-Adj Agr 0.6508 72.75 67.16 69.84 69.62 71.00 70.30 73.29 62.58 67.51 Subj-Verb Agr 0.4250 56.61 40.67 47.33 55.90 48.17 51.48 57.79 40.17 47.40 Postpositions 0.1138 43.31 29.89 35.37 39.31 36.96 38.10 47.16 31.52 37.79 Possession 0.0287 66.67 70.00 68.29 75.68 70.00 72.73 78.79 60.00 68.12 Table 4: Model Accuracy: Morphological Constructions. Freq. refers to the construction’s average number of occurrences per sentence, also averaged over the various translations. P, R and F stand for precision, recall and F-score. The constructions are listed in descending order of their frequency in the texts. The highlighted value in each column is the most accurate with respect to the reference value. translations of our MT output into English; to contextualize these back-translations, we have provided Google’s back-translation of the reference. The use of postpositions shows another difference between the models. Finnish postpositions require the preceding noun to be in the genitive or sometimes partitive case, which occurs correctly more frequently in the CRF-LM than the baseline. In example (2) in Fig. 
3, all three translations correspond to the English text, ‘with the basque nationalists.’ However, the CRF-LM output is more grammatical than the baseline, because not only do the adjective and noun agree for case, but the noun ‘baskien’ to which the postposition ‘kanssa’ belongs is marked with the correct genitive case. However, this well-formedness is not rewarded by BLEU, because ‘baskien’ does not match the reference. In addition, while Finnish may express possession using case marking alone, it has another construction for possession; this can disambiguate an otherwise ambiguous clause. This alternate construction uses a pronoun in the genitive case followed by a possessive-marked noun; we see that the CRF-LM model correctly marks this construction more frequently than the baseline. As example (3) in Fig. 3 shows, while neither model correctly translates ‘matkan’ (‘trip’), the baseline’s output attributes the inessive ‘yhteydess’ (‘connection’) as belonging to ‘tulokset’ (‘results’), and misses marking the possession linking it to ‘Commissioner Fischler’. Our manual evaluation shows that the CRFLM model is producing output translations that are more morphologically fluent than the wordbased baseline and the segmented translation Unsup L-match system, even though the word choices lead to a lower BLEU score overall when compared to Unsup L-match. 4 Related Work The work on morphology in MT can be grouped into three categories, factored models, segmented translation, and morphology generation. Factored models (Koehn and Hoang, 2007) factor the phrase translation probabilities over additional information annotated to each word, allowing for text to be represented on multiple levels of analysis. We discussed the drawbacks of factored models for our task in Section 2.1. While (Koehn and Hoang, 2007; Yang and Kirchhoff, 2006; Avramidis and Koehn, 2008) obtain improvements using factored models for translation into English, German, Spanish, and Czech, these models may be less useful for capturing long-distance dependencies in languages with much more complex morphological systems such as Finnish. In our experiments factored models did worse than the baseline. Segmented translation performs morphological analysis on the morphologically complex text for use in the translation model (Brown et al., 1993; Goldwater and McClosky, 2005; de Gispert and Mari˜no, 2008). This method unpacks complex forms into simpler, more frequently occurring components, and may also increase the symmetry of the lexically realized content be38 (1) Input: ‘the charter we are to approve today both strengthens and gives visible shape to the common fundamental rights and values our community is to be based upon.’ a. Reference: perusoikeuskirja , jonka t¨an¨a¨an aiomme hyv¨aksy¨a , sek¨a vahvistaa ett¨a selvent¨a¨a (selvent¨a¨a/VERB/ACT/INF/SG/LAT-clarify) niit¨a (ne/PRONOUN/PL/PAR-them) yhteisi¨a perusoikeuksia ja arvoja , joiden on oltava yhteis¨omme perusta. Back-translation: ‘Charter of Fundamental Rights, which today we are going to accept that clarify and strengthen the common fundamental rights and values, which must be community based.’ b. Baseline: perusoikeuskirja me hyv¨aksymme t¨an¨a¨an molemmat vahvistaa ja antaa (antaa/VERB/INF/SG/LATgive) n¨akyv¨a (n¨aky¨a/VERB/ACT/PCP/SG/NOM-visible) muokata yhteist¨a perusoikeuksia ja arvoja on perustuttava. Back-translation: ‘Charter today, we accept both confirm and modify to make a visible and common values, fundamental rights must be based.’ c. 
CRF-LM: perusoikeuskirja on hyv¨aksytty t¨an¨a¨an , sek¨a vahvistaa ja antaa (antaa/VERB/ACT/INF/SG/LAT-give) konkreettisen (konkreettinen/ADJECTIVE/SG/GEN,ACC-concrete) muodon (muoto/NOUN/SG/GEN,ACCshape) yhteisi¨a perusoikeuksia ja perusarvoja , yhteis¨on on perustuttava. Back-translation: ‘Charter has been approved today, and to strengthen and give concrete shape to the common basic rights and fundamental values, the Community must be based.’ (2) Input: ‘with the basque nationalists’ a. Reference: baskimaan kansallismielisten kanssa basque-SG/NOM+land-SG/GEN,ACC nationalists-PL/GEN with-POST b. Baseline: baskimaan kansallismieliset kanssa basque-SG/NOM-+land-SG/GEN,ACC kansallismielinen-PL/NOM,ACC-nationalists POST-with c. CRF-LM: kansallismielisten baskien kanssa nationalists-PL/GEN basques-PL/GEN with-POST (3) Input: ‘and in this respect we should value the latest measures from commissioner fischler , the results of his trip to morocco on the 26th of last month and the high level meetings that took place, including the one with the king himself’ a. Reference: ja t¨ass¨a mieless¨a osaamme my¨os arvostaa komission j¨asen fischlerin viimeisimpi¨a toimia , jotka ovat h¨anen (h¨anen/GEN-his) marokkoon 26 lokakuuta tekemns (tekem¨ans¨a/POSS-his) matkan (matkan/GENtour) ja korkean tason kokousten jopa itsens¨a kuninkaan kanssa tulosta Back-translation: ‘and in this sense we can also appreciate the Commissioner Fischler’s latest actions, which are his to Morocco 26 October trip to high-level meetings and even the king himself with the result b. Baseline: ja t¨ass¨a yhteydess¨a olisi arvoa viimeisin toimia komission j¨asen fischler , tulokset monitulkintaisia marokon yhteydess¨a (yhteydess/INE-connection) , ja viime kuussa pidettiin korkean tason kokouksissa , mukaan luettuna kuninkaan kanssa Back-translation: ‘and in this context would be the value of the last act, Commissioner Fischler, the results of the Moroccan context, ambiguous, and last month held high level meetings, including with the king’ c. CRF-LM: ja t¨ass¨a yhteydess¨a meid¨an olisi lis¨aarvoa viimeist¨a toimenpiteit¨a kuin komission j¨asen fischler , ett¨a h¨anen (h¨anen/GEN-his) kokemuksensa (kokemuksensa/POSS-experience) marokolle (marokolle-Moroccan) viime kuun 26 ja korkean tason tapaamiset j¨arjestettiin, kuninkaan kanssa Back-translation: ‘and in this context, we should value the last measures as the Commissioner Fischler, that his experience in Morocco has on the 26th and high-level meetings took place, including with the king.’ Figure 3: Morphological fluency analysis (see Section 3.1). tween source and target. In a somewhat orthogonal approach to ours, (Ma et al., 2007) use alignment of a parallel text to pack together adjacent segments in the alignment output, which are then fed back to the word aligner to bootstrap an improved alignment, which is then used in the translation model. We compared our results against (Luong et al., 2010) in Table 3 since their results are directly comparable to ours. They use a segmented phrase table and language model along with the word-based versions in the decoder and in tuning a Finnish target. Their approach requires segmented phrases to match word boundaries, eliminating morphologically productive phrases. In their work a segmented language model can score a translation, but cannot insert morphology that does not show source-side reflexes. 
In order to perform a similar experiment that still allowed for morphologically productive phrases, we tried training a segmented translation model, the output of which we stitched up in tuning so as to tune to a word-based reference. The goal of this experiment was to control the segmented model’s tendency to overfit by rewarding it for using correct whole-word forms. However, we found 39 that this approach was less successful than using the segmented reference in tuning, and could not meet the baseline (13.97% BLEU best tuning score, versus 14.93% BLEU for the baseline best tuning score). Previous work in segmented translation has often used linguistically motivated morphological analysis selectively applied based on a language-specific heuristic. A typical approach is to select a highly inflecting class of words and segment them for particular morphology (de Gispert and Mari˜no, 2008; Ramanathan et al., 2009). Popovi¸c and Ney (2004) perform segmentation to reduce morphological complexity of the source to translate into an isolating target, reducing the translation error rate for the English target. For Czech-to-English, Goldwater and McClosky (2005) lemmatized the source text and inserted a set of ‘pseudowords’ expected to have lexical reflexes in English. Minkov et. al. (2007) and Toutanova et. al. (2008) use a Maximum Entropy Markov Model for morphology generation. The main drawback to this approach is that it removes morphological information from the translation model (which only uses stems); this can be a problem for languages in which morphology expresses lexical content. de Gispert (2008) uses a language-specific targeted morphological classifier for Spanish verbs to avoid this issue. Talbot and Osborne (2006) use clustering to group morphological variants of words for word alignments and for smoothing phrase translation tables. Habash (2007) provides various methods to incorporate morphological variants of words in the phrase table in order to help recognize out of vocabulary words in the source language. 5 Conclusion and Future Work We found that using a segmented translation model based on unsupervised morphology induction and a model that combined morpheme segments in the translation model with a postprocessing morphology prediction model gave us better BLEU scores than a word-based baseline. Using our proposed approach we obtain better scores than the state of the art on the EnglishFinnish translation task (Luong et al., 2010): from 14.82% BLEU to 15.09%, while using a simpler model. We show that using morphological segmentation in the translation model can improve output translation scores. We also demonstrate that for Finnish (and possibly other agglutinative languages), phrase-based MT benefits from allowing the translation model access to morphological segmentation yielding productive morphological phrases. Taking advantage of linguistic analysis of the output we show that using a post-processing morphology generation model can improve translation fluency on a sub-word level, in a manner that is not captured by the BLEU word-based evaluation measure. In order to help with replication of the results in this paper, we have run the various morphological analysis steps and created the necessary training, tuning and test data files needed in order to train, tune and test any phrase-based machine translation system with our data. The files can be downloaded from natlang.cs.sfu.ca. 
In future work we hope to explore the utility of phrases with productive morpheme boundaries and explore why they are not used more pervasively in the decoder. Evaluation measures for morphologically complex languages and tuning to those measures are also important future work directions. Also, we would like to explore a non-pipelined approach to morphological preand post-processing so that a globally trained model could be used to remove the target side morphemes that would improve the translation model and then predict those morphemes in the target language. Acknowledgements This research was partially supported by NSERC, Canada (RGPIN: 264905) and a Google Faculty Award. We would like to thank Christian Monson, Franz Och, Fred Popowich, Howard Johnson, Majid Razmara, Baskaran Sankaran and the anonymous reviewers for their valuable comments on this work. We would particularly like to thank the developers of the open-source Moses machine translation toolkit and the Omorfimorphological analyzer for Finnish which we used for our experiments. 40 References Eleftherios Avramidis and Philipp Koehn. 2008. Enriching morphologically poor languages for statistical machine translation. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, page 763?770, Columbus, Ohio, USA. Association for Computational Linguistics. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and R. L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311. Pi-Chuan Chang, Michel Galley, and Christopher D. Manning. 2008. Optimizing Chinese word segmentation for machine translation performance. In Proceedings of the Third Workshop on Statistical Machine Translation, pages 224–232, Columbus, Ohio, June. Association for Computational Linguistics. Michael Collins, Philipp Koehn, and Ivona Kucerova. 2005. Clause restructuring for statistical machine translation. In Proceedings of 43rd Annual Meeting of the Association for Computational Linguistics (ACL05). Association for Computational Linguistics. Mathias Creutz and Krista Lagus. 2005. Inducing the morphological lexicon of a natural language from unannotated text. In Proceedings of the International and Interdisciplinary Conference on Adaptive Knowledge Representation and Reasoning (AKRR’05), pages 106–113, Espoo, Finland. Mathias Creutz and Krista Lagus. 2006. Morfessor in the morpho challenge. In Proceedings of the PASCAL Challenge Workshop on Unsupervised Segmentation of Words into Morphemes. Adri´a de Gispert and Jos´e Mari˜no. 2008. On the impact of morphology in English to Spanish statistical MT. Speech Communication, 50(11-12). Sharon Goldwater and David McClosky. 2005. Improving statistical MT through morphological analysis. In Proceedings of the Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 676–683, Vancouver, B.C., Canada. Association for Computational Linguistics. Philipp Koehn and Hieu Hoang. 2007. Factored translation models. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 868–876, Prague, Czech Republic. Association for Computational Linguistics. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. 
Moses: Open source toolkit for statistical machine translation. In ACL ‘07: Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, pages 177–108, Prague, Czech Republic. Association for Computational Linguistics. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proceedings of Machine Translation Summit X, pages 79–86, Phuket, Thailand. Association for Computational Linguistics. John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the 18th International Conference on Machine Learning, pages 282–289, San Francisco, California, USA. Association for Computing Machinery. Minh-Thang Luong, Preslav Nakov, and Min-Yen Kan. 2010. A hybrid morpheme-word representation for machine translation of morphologically rich languages. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 148–157, Cambridge, Massachusetts. Association for Computational Linguistics. Yanjun Ma, Nicolas Stroppa, and Andy Way. 2007. Bootstrapping word alignment via word packing. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 304–311, Prague, Czech Republic. Association for Computational Linguistics. Einat Minkov, Kristina Toutanova, and Hisami Suzuki. 2007. Generating complex morphology for machine translation. In In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL07), pages 128–135, Prague, Czech Republic. Association for Computational Linguistics. Christian Monson. 2008. Paramor and morpho challenge 2008. In Lecture Notes in Computer Science: Workshop of the Cross-Language Evaluation Forum (CLEF 2008), Revised Selected Papers. Habash Nizar. 2007. Four techniques for online handling of out-of-vocabulary words in arabic-english statistical machine translation. In Proceedings of the 46th Annual Meeting of the Association of Computational Linguistics, Columbus, Ohio. Association for Computational Linguistics. 41 Kishore Papineni, Salim Roukos, Todd Ward, and Wei jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics ACL, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Tommi Pirinen and Inari Listenmaa. 2007. Omorfi morphological analzer. http://gna.org/projects/omorfi. Maja Popovi¸c and Hermann Ney. 2004. Towards the use of word stems and suffixes for statistical machine translation. In Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC), pages 1585–1588, Lisbon, Portugal. European Language Resources Association (ELRA). Ananthakrishnan Ramanathan, Hansraj Choudhary, Avishek Ghosh, and Pushpak Bhattacharyya. 2009. Case markers and morphology: Addressing the crux of the fluency problem in EnglishHindi SMT. In Proceedings of the Joint Conference of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, pages 800–808, Suntec, Singapore. Association for Computational Linguistics. Andreas Stolcke. 2002. Srilm – an extensible language modeling toolkit. 7th International Conference on Spoken Language Processing, 3:901–904. David Talbot and Miles Osborne. 2006. 
Modelling lexical redundancy for machine translation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 969–976, Sydney, Australia, July. Association for Computational Linguistics. Kristina Toutanova, Hisami Suzuki, and Achim Ruopp. 2008. Applying morphology generation models to machine translation. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 514–522, Columbus, Ohio, USA. Association for Computational Linguistics. Mei Yang and Katrin Kirchhoff. 2006. Phrase-based backoffmodels for machine translation of highly inflected languages. In Proceedings of the European Chapter of the Association for Computational Linguistics, pages 41–48, Trento, Italy. Association for Computational Linguistics. 42
| 2011 | 4 |
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 389–398, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Event Discovery in Social Media Feeds Edward Benson, Aria Haghighi, and Regina Barzilay Computer Science and Artificial Intelligence Laboratory Massachusetts Institute of Technology {eob, aria42, regina}@csail.mit.edu Abstract We present a novel method for record extraction from social streams such as Twitter. Unlike typical extraction setups, these environments are characterized by short, one sentence messages with heavily colloquial speech. To further complicate matters, individual messages may not express the full relation to be uncovered, as is often assumed in extraction tasks. We develop a graphical model that addresses these problems by learning a latent set of records and a record-message alignment simultaneously; the output of our model is a set of canonical records, the values of which are consistent with aligned messages. We demonstrate that our approach is able to accurately induce event records from Twitter messages, evaluated against events from a local city guide. Our method achieves significant error reduction over baseline methods.1 1 Introduction We propose a method for discovering event records from social media feeds such as Twitter. The task of extracting event properties has been well studied in the context of formal media (e.g., newswire), but data sources such as Twitter pose new challenges. Social media messages are often short, make heavy use of colloquial language, and require situational context for interpretation (see examples in Figure 1). Not all properties of an event may be expressed in a single message, and the mapping between messages and canonical event records is not obvious. 1Data and code available at http://groups.csail. mit.edu/rbg/code/twitter Carnegie Hall Artist Venue Craig Ferguson DJ Pauly D Terminal 5 Seated at @carnegiehall waiting for @CraigyFerg’s show to begin RT @leerader : getting REALLY stoked for #CraigyAtCarnegie sat night. Craig, , want to join us for dinner at the pub across the street? 5pm, be there! @DJPaulyD absolutely killed it at Terminal 5 last night. @DJPaulyD : DJ Pauly D Terminal 5 NYC Insanity ! #ohyeah @keadour @kellaferr24 Craig, nice seeing you at #noelnight this weekend @becksdavis! Twitter Messages Records Figure 1: Examples of Twitter messages, along with automatically extracted records. These properties of social media streams make existing extraction techniques significantly less effective. Despite these challenges, this data exhibits an important property that makes learning amenable: the multitude of messages referencing the same event. Our goal is to induce a comprehensive set of event records given a seed set of example records, such as a city event calendar table. While such resources are widely available online, they are typically high precision, but low recall. Social media is a natural place to discover new events missed by curation, but mentioned online by someone planning to attend. We formulate our approach as a structured graphical model which simultaneously analyzes individual messages, clusters them according to event, and induces a canonical value for each event property. At the message level, the model relies on a conditional random field component to extract field values such 389 as location of the event and artist name. 
We bias local decisions made by the CRF to be consistent with canonical record values, thereby facilitating consistency within an event cluster. We employ a factorgraph model to capture the interaction between each of these decisions. Variational inference techniques allow us to effectively and efficiently make predictions on a large body of messages. A seed set of example records constitutes our only source of supervision; we do not observe alignment between these seed records and individual messages, nor any message-level field annotation. The output of our model consists of an event-based clustering of messages, where each cluster is represented by a single multi-field record with a canonical value chosen for each field. We apply our technique to construct entertainment event records for the city calendar section of NYC.com using a stream of Twitter messages. Our method yields up to a 63% recall against the city table and up to 85% precision evaluated manually, significantly outperforming several baselines. 2 Related Work A large number of information extraction approaches exploit redundancy in text collections to improve their accuracy and reduce the need for manually annotated data (Agichtein and Gravano, 2000; Yangarber et al., 2000; Zhu et al., 2009; Mintz et al., 2009a; Yao et al., 2010b; Hasegawa et al., 2004; Shinyama and Sekine, 2006). Our work most closely relates to methods for multi-document information extraction which utilize redundancy in input data to increase the accuracy of the extraction process. For instance, Mann and Yarowsky (2005) explore methods for fusing extracted information across multiple documents by performing extraction on each document independently and then merging extracted relations by majority vote. This idea of consensus-based extraction is also central to our method. However, we incorporate this idea into our model by simultaneously clustering output and labeling documents rather than performing the two tasks in serial fashion. Another important difference is inherent in the input data we are processing: it is not clear a priori which extraction decisions should agree with each other. Identifying messages that refer to the same event is a large part of our challenge. Our work also relates to recent approaches for relation extraction with distant supervision (Mintz et al., 2009b; Bunescu and Mooney, 2007; Yao et al., 2010a). These approaches assume a database and a collection of documents that verbalize some of the database relations. In contrast to traditional supervised IE approaches, these methods do not assume that relation instantiations are annotated in the input documents. For instance, the method of Mintz et al. (2009b) induces the mapping automatically by bootstrapping from sentences that directly match record entries. These mappings are used to learn a classifier for relation extraction. Yao et al. (2010a) further refine this approach by constraining predicted relations to be consistent with entity types assignment. To capture the complex dependencies among assignments, Yao et al. (2010a) use a factor graph representation. Despite the apparent similarity in model structure, the two approaches deal with various types of uncertainties. The key challenge for our method is modeling message to record alignment which is not an issue in the previous set up. Finally, our work fits into a broader area of text processing methods designed for social-media streams. 
Examples of such approaches include methods for conversation structure analysis (Ritter et al., 2010) and exploration of geographic language variation (Eisenstein et al., 2010) from Twitter messages. To our knowledge no work has yet addressed record extraction from this growing corpus. 3 Problem Formulation Here we describe the key latent and observed random variables of our problem. A depiction of all random variables is given in Figure 2. Message (x): Each message x is a single posting to Twitter. We use xj to represent the jth token of x, and we use x to denote the entire collection of messages. Messages are always observed during training and testing. Record (R): A record is a representation of the canonical properties of an event. We use Ri to denote the ith record and Rℓ i to denote the value of the ℓth property of that record. In our experiments, each record Ri is a tuple ⟨R1 i , R2 i ⟩which represents that 390 Mercury Lounge Yonder Mountain String Band Craig Ferguson Carnegie Hall Artist Venue 1 2 k R k R+ 1 k ... Really excited for #CraigyAtCarnegie Seeing Yonder Mountain at 8 @YonderMountain rocking Mercury Lounge None None None Artist None None Artist Artist None Venue Venue None Artist xi yi xi−1 yi−1 xi+ 1 yi+ 1 Ai−1 Ai+ 1 Ai Figure 2: The key variables of our model. A collection of K latent records Rk, each consisting of a set of L properties. In the figure above, R1 1 =“Craig Ferguson” and R2 1 =“Carnegie Hall.” Each tweet xi is associated with a labeling over tokens yi and is aligned to a record via the Ai variable. See Section 3 for further details. record’s values for the schema ⟨ARTIST, VENUE⟩. Throughout, we assume a known fixed number K of records R1, . . . , RK, and we use R to denote this collection of records. For tractability, we consider a finite number of possibilities for each Rℓ k which are computed from the input x (see Section 5.1 for details). Records are observed during training and latent during testing. Message Labels (y): We assume that each message has a sequence labeling, where the labels consist of the record fields (e.g., ARTIST and VENUE) as well as a NONE label denoting the token does not correspond to any domain field. Each token xj in a message has an associated label yj. Message labels are always latent during training and testing. Message to Record Alignment (A): We assume that each message is aligned to some record such that the event described in the message is the one represented by that record. Each message xi is associated with an alignment variable Ai that takes a value in {1, . . . , K}. We use A to denote the set of alignments across all xi. Multiple messages can and do align to the same record. As discussed in Section 4, our model will encourage tokens associated with message labels to be “similar” to corresponding aligned record values. Alignments are always latent during training and testing. 4 Model Our model can be represented as a factor graph which takes the form, P(R, A, y|x) ∝ Y i φSEQ(xi, yi) ! (Seq. Labeling) Y ℓ φUNQ(Rℓ) ! (Rec. Uniqueness) Y i,ℓ φPOP (xi, yi, Rℓ Ai) (Term Popularity) Y i φCON(xi, yi, RAi) ! (Rec. Consistency) where Rℓdenotes the sequence Rℓ 1, . . . , Rℓ K of record values for a particular domain field ℓ. Each of the potentials takes a standard log-linear form: φ(z) = θT f(z) where θ are potential-specific parameters and f(·) is a potential-specific feature function. We describe each potential separately below. 
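For readability, the factored model above can be restated in typeset form; this is only a cleaned-up rendering of the four potentials already introduced (with the exponentiated log-linear form made explicit), not an addition to the model:

```latex
P(R, A, y \mid x) \;\propto\;
  \Big(\prod_{i} \phi_{\mathrm{SEQ}}(x_i, y_i)\Big)                       % sequence labeling
  \Big(\prod_{\ell} \phi_{\mathrm{UNQ}}(R^{\ell})\Big)                    % record uniqueness
  \Big(\prod_{i,\ell} \phi_{\mathrm{POP}}(x_i, y_i, R^{\ell}_{A_i})\Big)  % term popularity
  \Big(\prod_{i} \phi_{\mathrm{CON}}(x_i, y_i, R_{A_i})\Big),
  \qquad \phi(z) = \exp\{\theta^{\top} f(z)\}.
```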
4.1 Sequence Labeling Factor The sequence labeling factor is similar to a standard sequence CRF (Lafferty et al., 2001), where the potential over a message label sequence decomposes 391 X i Y i φS E Q R k R k+ 1 R k−1 φU N Q th field (across record s) φP O P R k Ai Y i X i Ai Y i X i R k R+ 1 k φC O N k kth record Figure 3: Factor graph representation of our model. Circles represent variables and squares represent factors. For readability, we depict the graph broken out as a set of templates; the full graph is the combination of these factor templates applied to each variable. See Section 4 for further details. over pairwise cliques: φSEQ(x, y) = exp{θT SEQfSEQ(x, y)} = exp θT SEQ X j fSEQ(x, yj, yj+1) This factor is meant to encode the typical message contexts in which fields are evoked (e.g. going to see X tonight). Many of the features characterize how likely a given token label, such as ARTIST, is for a given position in the message sequence conditioning arbitrarily on message text context. The feature function fSEQ(x, y) for this component encodes each token’s identity; word shape2; whether that token matches a set of regular expressions encoding common emoticons, time references, and venue types; and whether the token matches a bag of words observed in artist names (scraped from Wikipedia; 21,475 distinct tokens from 22,833 distinct names) or a bag of words observed in New York City venue names (scraped from NYC.com; 304 distinct tokens from 169 distinct names).3 The only edge feature is label-to-label. 4.2 Record Uniqueness Factor One challenge with Twitter is the so-called echo chamber effect: when a topic becomes popular, or “trends,” it quickly dominates the conversation online. As a result some events may have only a few referent messages while other more popular events may have thousands or more. In such a circumstance, the messages for a popular event may collect to form multiple identical record clusters. Since we 2e.g.: xxx, XXX, Xxx, or other 3These are just features, not a filter; we are free to extract any artist or venue regardless of their inclusion in this list. fix the number of records learned, such behavior inhibits the discovery of less talked-about events. Instead, we would rather have just two records: one with two aligned messages and another with thousands. To encourage this outcome, we introduce a potential that rewards fields for being unique across records. The uniqueness potential φUNQ(Rℓ) encodes the preference that each of the values Rℓ, . . . , Rℓ K for each field ℓdo not overlap textually. This factor factorizes over pairs of records: φUNQ(Rℓ) = Y k̸=k′ φUNQ(Rℓ k, Rℓ k′) where Rℓ k and Rℓ k′ are the values of field ℓfor two records Rk and Rk′. The potential over this pair of values is given by: φUNQ(Rℓ k, Rℓ k′) = exp{−θT SIMfSIM(Rℓ k, Rℓ k′)} where fSIM is computes the likeness of the two values at the token level: fSIM(Rℓ k, Rℓ k′) = |Rℓ k ∩Rℓ k′| max(|Rℓ k|, |Rℓ k′|) This uniqueness potential does not encode any preference for record values; it simply encourages each field ℓto be distinct across records. 4.3 Term Popularity Factor The term popularity factor φPOP is the first of two factors that guide the clustering of messages. Because speech on Twitter is colloquial, we would like these clusters to be amenable to many variations of the canonical record properties that are ultimately learned. 
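(Before the popularity factor is spelled out, here is a minimal sketch, with hypothetical helper names, of the token-overlap similarity f_SIM and the pairwise potential it feeds in the uniqueness factor above; field values are treated as lists of lower-cased tokens, and the weight is illustrative rather than the hand-tuned value.)

```python
import math

def f_sim(value_a, value_b):
    """Token overlap |A ∩ B| / max(|A|, |B|), as used by the uniqueness factor."""
    overlap = len(set(value_a) & set(value_b))
    return overlap / max(len(value_a), len(value_b))

def phi_unq_pair(value_a, value_b, theta_sim=10.0):
    """Pairwise uniqueness potential exp(-theta_SIM * f_SIM); the weight here is illustrative."""
    return math.exp(-theta_sim * f_sim(value_a, value_b))

print(f_sim(["terminal", "5"], ["terminal", "five"]))             # 0.5
print(phi_unq_pair(["carnegie", "hall"], ["mercury", "lounge"]))  # 1.0 (no overlap)
```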
The φPOP factor accomplishes this by representing a lenient compatibility score between a 392 message x, its labels y, and some candidate value v for a record field (e.g., Dave Matthews Band). This factor decomposes over tokens, and we align each token xj with the best matching token vk in v (e.g., Dave). The token level sum is scaled by the length of the record value being matched to avoid a preference for long field values. φPOP (x, y, Rℓ A = v) = X j max k φPOP (xj, yj, Rℓ A = vk) |v| This token-level component may be thought of as a compatibility score between the labeled token xj and the record field assignment Rℓ A = v. Given that token xj aligns with the token vk, the token-level component returns the sum of three parts, subject to the constraint that yj = ℓ: • IDF(xj)I[xj = vk], an equality indicator between tokens xj and vk, scaled by the inverse document frequency of xj • αIDF(xj) I[xj−1 = vk−1] + I[xj+1 = vk+1] , a small bonus of α = 0.3 for matches on adjacent tokens, scaled by the IDF of xj • I[xj = vk and x contains v]/|v|, a bonus for a complete string match, scaled by the size of the value. This is equivalent to this token’s contribution to a complete-match bonus. 4.4 Record Consistency Factor While the uniqueness factor discourages a flood of messages for a single event from clustering into multiple event records, we also wish to discourage messages from multiple events from clustering into the same record. When such a situation occurs, the model may either resolve it by changing inconsistent token labelings to the NONE label or by reassigning some of the messages to a new cluster. We encourage the latter solution with a record consistency factor φCON. The record consistency factor is an indicator function on the field values of a record being present and labeled correctly in a message. While the popularity factor encourages agreement on a per-label basis, this factor influences the joint behavior of message labels to agree with the aligned record. For a given record, message, and labeling, φCON(x, y, RA) = 1 if φPOP (x, y, Rℓ A) > 0 for all ℓ, and 0 otherwise. 4.5 Parameter Learning The weights of the CRF component of our model, θSEQ, are the only weights learned at training time, using a distant supervision process described in Section 6. The weights of the remaining three factors were hand-tuned4 using our training data set. 5 Inference Our goal is to predict a set of records R. Ideally we would like to compute P(R|x), marginalizing out the nuisance variables A and y. We approximate this posterior using variational inference.5 Concretely, we approximate the full posterior over latent variables using a mean-field factorization: P(R, A, y|x) ≈Q(R, A, y) = K Y k=1 Y ℓ q(Rℓ k) ! n Y i=1 q(Ai)q(yi) ! where each variational factor q(·) represents an approximation of that variable’s posterior given observed random variables. The variational distribution Q(·) makes the (incorrect) assumption that the posteriors amongst factors are independent. The goal of variational inference is to set factors q(·) to optimize the variational objective: min Q(·) KL(Q(R, A, y)∥P(R, A, y|x)) We optimize this objective using coordinate descent on the q(·) factors. For instance, for the case of q(yi) the update takes the form: q(yi) ←EQ/q(yi) log P(R, A, y|x) where Q/q(yi) denotes the expectation under all variables except yi. When computing a mean field update, we only need to consider the potentials involving that variable. 
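As a concrete illustration of this coordinate-ascent scheme, consider the following self-contained toy example (a two-variable model, not the record-extraction model above): each factor is repeatedly set proportional to the exponentiated expected log-potential under the other factor.

```python
import numpy as np

# Toy mean-field illustration (not the paper's model): approximate a discrete joint
# P(a, b) ∝ psi(a, b) with a factorized Q(a)Q(b) via coordinate ascent, where
# q(a) ∝ exp(E_{q(b)}[log psi(a, b)]) and symmetrically for q(b).

rng = np.random.default_rng(0)
log_psi = rng.normal(size=(3, 4))      # log-potential table over (a, b)

q_a = np.full(3, 1 / 3)                # initial factor over a
q_b = np.full(4, 1 / 4)                # initial factor over b

def normalize(logits):
    p = np.exp(logits - logits.max())
    return p / p.sum()

for _ in range(50):                    # coordinate ascent on the variational objective
    q_a = normalize(log_psi @ q_b)     # expected log-potential for each value of a
    q_b = normalize(log_psi.T @ q_a)   # expected log-potential for each value of b

# Compare the mean-field factors with the exact marginals of this tiny joint.
p = np.exp(log_psi); p /= p.sum()
print(np.round(q_a, 3), np.round(p.sum(axis=1), 3))
print(np.round(q_b, 3), np.round(p.sum(axis=0), 3))
```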
The complete updates for each of the kinds of variables (y, A, and Rℓ) can be found in Figure 4. We briefly describe the computations involved with each update. q(y) update: The q(y) update for a single message yields an implicit expression in terms of pairwise cliques in y. We can compute arbitrary 4Their values are: θUNQ = −10, θPhrase P OP = 5, θToken P OP = 10, θCON = 2e8 5See Liang and Klein (2007) for an overview of variational techniques. 393 Message labeling update: ln q(y) ∝ n EQ/q(y) ln φSEQ(x, y) + ln h φPOP (x, y, Rℓ A)φCON(x, y, RA) io = ln φSEQ(x, y) + EQ/q(y) ln h φPOP (x, y, Rℓ A)φCON(x, y, RA) i = ln φSEQ(x, y) + X z,v,ℓ q(A = z)q(yj = ℓ)q(Rℓ z = v) ln h φPOP (x, y, Rℓ z = v)φCON(x, y, Rℓ z = v) i Mention record alignment update: ln q(A = z) ∝EQ/q(A) n ln φSEQ(x, y) + ln h φPOP (x, y, Rℓ A)φCON(x, y, RA) io ∝EQ/q(A) n ln h φPOP (x, y, Rℓ A)φCON(x, y, RA) io = X z,v,ℓ q(Rℓ z = v) n ln h φPOP (x, y, Rℓ z = v)φCON(x, y, Rℓ z = v) io = X z,v,ℓ q(Rℓ z = v)q(yj i = ℓ) ln h φPOP (x, y, Rℓ z = v)φCON(x, y, Rℓ z = v) i Record Field update: ln q(Rℓ k = v) ∝EQ/q(Rℓ k) (X k′ ln φUNQ(Rℓ k′, v) + X i ln [φPOP (xi, yi, v)φCON(xi, yi, v)] ) = X k′̸=k,v′ q(Rℓ k′ = v′) ln φUNQ(v, v′) + X i q(Ai = k) X j q(yj i = ℓ) ln h φPOP (x, y, Rℓ z = v, j)φCON(x, y, Rℓ z = v, j) i Figure 4: The variational mean-field updates used during inference (see Section 5). Inference consists of performing updates for each of the three kinds of latent variables: message labels (y), record alignments (A), and record field values (Rℓ). All are relatively cheap to compute except for the record field update q(Rℓ k) which requires looping potentially over all messages. Note that at inference time all parameters are fixed and so we only need to perform updates for latent variable factors. marginals for this distribution by using the forwardsbackwards algorithm on the potentials defined in the update. Therefore computing the q(y) update amounts to re-running forward backwards on the message where there is an expected potential term which involves the belief over other variables. Note that the popularity and consensus potentials (φPOP and φCON) decompose over individual message tokens so this can be tractably computed. q(A) update: The update for individual record alignment reduces to being log-proportional to the expected popularity and consensus potentials. q(Rℓ k) update: The update for the record field distribution is the most complex factor of the three. It requires computing expected similarity with other record field values (the φUNQ potential) and looping over all messages to accumulate a contribution from each, weighted by the probability that it is aligned to the target record. 5.1 Initializing Factors Since a uniform initialization of all factors is a saddle-point of the objective, we opt to initialize the q(y) factors with the marginals obtained using just the CRF parameters, accomplished by running forwards-backwards on all messages using only the 394 φSEQ potentials. The q(R) factors are initialized randomly and then biased with the output of our baseline model. The q(A) factor is initialized to uniform plus a small amount of noise. To simplify inference, we pre-compute a finite set of values that each Rℓ k is allowed to take, conditioned on the corpus. To do so, we run the CRF component of our model (φSEQ) over the corpus and extract, for each ℓ, all spans that have a token-level probability of being labeled ℓgreater than λ = 0.1. 
We further filter this set down to only values that occur at least twice in the corpus. This simplification introduces sparsity that we take advantage of during inference to speed performance. Because each term in φPOP and φCON includes an indicator function based on a token match between a field-value and a message, knowing the possible values v of each Rℓ k enables us to precompute the combinations of (x, ℓ, v) for which nonzero factor values are possible. For each such tuple, we can also precompute the best alignment position k for each token xj. 6 Evaluation Setup Data We apply our approach to construct a database of concerts in New York City. We used Twitter’s public API to collect roughly 4.7 Million tweets across three weekends that we subsequently filter down to 5,800 messages. The messages have an average length of 18 tokens, and the corpus vocabulary comprises 468,000 unique words6. We obtain labeled gold records using data scraped from the NYC.com music event guide; totaling 110 extracted records. Each gold record had two fields of interest: ARTIST and VENUE. The first weekend of data (messages and events) was used for training and the second two weekends were used for testing. Preprocessing Only a small fraction of Twitter messages are relevant to the target extraction task. Directly processing the raw unfiltered stream would prohibitively increase computational costs and make learning more difficult due to the noise inherent in the data. To focus our efforts on the promising portion of the stream, we perform two types of filter6Only considering English tweets and not counting user names (so-called -mentions.) ing. First, we only retain tweets whose authors list some variant of New York as their location in their profile. Second, we employ a MIRA-based binary classifier (Ritter et al., 2010) to predict whether a message mentions a concert event. After training on 2,000 hand-annotated tweets, this classifier achieves an F1 of 46.9 (precision of 35.0 and recall of 71.0) when tested on 300 messages. While the two-stage filtering does not fully eliminate noise in the input stream, it greatly reduces the presence of irrelevant messages to a manageable 5,800 messages without filtering too many ‘signal’ tweets. We also filter our gold record set to include only records in which each field value occurs at least once somewhere in the corpus, as these are the records which are possible to learn given the input. This yields 11 training and 31 testing records. Training The first weekend of data (2,184 messages and 11 records after preprocessing) is used for training. As mentioned in Section 4, the only learned parameters in our model are those associated with the sequence labeling factor φSEQ. While it is possible to train these parameters via direct annotation of messages with label sequences, we opted instead to use a simple approach where message tokens from the training weekend are labeled via their intersection with gold records, often called “distant supervision” (Mintz et al., 2009b). Concretely, we automatically label message tokens in the training corpus with either the ARTIST or VENUE label if they belonged to a sequence that matched a gold record field, and with NONE otherwise. This is the only use that is made of the gold records throughout training. θSEQ parameters are trained using this labeling with a standard conditional likelihood objective. Testing The two weekends of data used for testing totaled 3,662 tweets after preprocessing and 31 gold records for evaluation. 
The two weekends were tested separately and their results were aggregated across weekends. Our model assumes a fixed number of records K = 130.7 We rank these records according to a heuristic ranking function that favors the uniqueness of a record’s field values across the set and the number of messages in the testing corpus that have 7Chosen based on the training set 395 0.2
[Figure 5 plot: recall against gold event records (0.2–0.7) on the vertical axis vs. k as a multiple of the number of gold records (1.0–5.0) on the horizontal axis; series: Low Thresh, CRF, List, Our Work.]
Figure 5: Recall against the gold records. The horizontal axis is the number of records kept from the ranked model output, as a multiple of the number of golds. The CRF lines terminate because of low record yield. token overlap with these values. This ranking function is intended to push garbage collection records to the bottom of the list. Finally, we retain the top k records, throwing away the rest. Results in Section 7 are reported as a function of this k. Baseline We compare our system against three baselines that employ a voting methodology similar to Mann and Yarowsky (2005). The baselines label each message and then extract one record for each combination of labeled phrases. Each extraction is considered a vote for that record’s existence, and these votes are aggregated across all messages. Our List Baseline labels messages by finding string overlaps against a list of musical artists and venues scraped from web data (the same lists used as features in our CRF component). The CRF Baseline is most similar to Mann and Yarowsky (2005)’s CRF Voting method and uses the maximum likelihood CRF labeling of each message. The Low Threshold Baseline generates all possible records from labelings with a token-level likelihood greater than λ = 0.1. The output of these baselines is a set of records ranked by the number of votes cast for each, and we perform our evaluation against the top k of these records. 7 Evaluation The evaluation of record construction is challenging because many induced music events discussed in Twitter messages are not in our gold data set; our gold records are precise but incomplete. Because of this, we evaluate recall and precision separately. Both evaluations are performed using hard zero-one loss at record level. This is a harsh evaluation criterion, but it is realistic for real-world use. Recall We evaluate recall, shown in Figure 5, against the gold event records for each weekend. This shows how well our model could do at replacing the a city event guide, providing Twitter users chat about events taking place. We perform our evaluation by taking the top k records induced, performing a stable marriage matching against the gold records, and then evaluating the resulting matched pairs. Stable marriage matching is a widely used approach that finds a bipartite matching between two groups such that no pairing exists in which both participants would prefer some other pairing (Irving et al., 1987). With our hard loss function and no duplicate gold records, this amounts to the standard recall calculation. We choose this bipartite matching technique because it generalizes nicely to allow for other forms of loss calculation (such as token-level loss). Precision To evaluate precision we assembled a list of the distinct records produced by all models and then manually determined if each record was correct. This determination was made blind to which model produced the record. We then used this aggregate list of correct records to measure precision for each individual model, shown in Figure 6. By construction, our baselines incorporate a hard constraint that each relation learned must be expressed in entirety in at least one message. Our model only incorporates a soft version of this constraint via the φCON factor, but this constraint clearly has the ability to boost precision. To show it’s effect, we additionally evaluate our model, labeled Our Work + Con, with this constraint applied in hard form as an output filter. 
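One plausible reading of that hard-constraint output filter, sketched below with hypothetical record and message representations (the paper does not give implementation details), is to keep a record only if at least one message contains all of its field values:

```python
# A sketch (hypothetical, not the authors' code) of a hard consistency output filter:
# a record survives only if some single message expresses every one of its field values.

def fully_expressed(record, message):
    """record: dict mapping field name -> string value; message: raw tweet text."""
    text = message.lower()
    return all(value.lower() in text for value in record.values())

def consistency_filter(records, messages):
    return [r for r in records if any(fully_expressed(r, m) for m in messages)]

records = [{"artist": "Craig Ferguson", "venue": "Carnegie Hall"},
           {"artist": "Holiday Cheer", "venue": "Carnegie Hall"}]
messages = ["getting REALLY stoked for Craig Ferguson at Carnegie Hall sat night"]
print(consistency_filter(records, messages))   # only the first record survives
```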
The downward trend in precision that can be seen in Figure 6 is the effect of our ranking algorithm, which attempts to push garbage collection records towards the bottom of the record list. As we incorporate these records, precision drops. These lines trend up for two of the baselines because the rank396 0.2
[Figure 6 plot: precision by manual evaluation (0.2–0.9) on the vertical axis vs. number of records kept (10–50) on the horizontal axis; series: Low Thresh, CRF, List, Our Work, Our Work + Con.]
Figure 6: Precision, evaluated manually by crossreferencing model output with event mentions in the input data. The CRF and hard-constrained consensus lines terminate because of low record yield. ing heuristic is not as effective for them. These graphs confirm our hypothesis that we gain significant benefit by intertwining constraints on extraction consistency in the learning process, rather than only using this constraint to filter output. 7.1 Analysis One persistent problem is a popular phrase appearing in many records, such as the value “New York” filling many ARTIST slots. The uniqueness factor θUNQ helps control this behavior, but it is a relatively blunt instrument. Ideally, our model would learn, for each field ℓ, the degree to which duplicate values are permitted. It is also possible that by learning, rather than hand-tuning, the θCON, θPOP , and θUNQ parameters, our model could find a balance that permits the proper level of duplication for a particular domain. Other errors can be explained by the lack of constituent features in our model, such as the selection of VENUE values that do not correspond to noun phrases. Further, semantic features could help avoid learning syntactically plausible artists like “Screw the Rain” because of the message: Screw the rainArtist! Grab an umbrella and head down to Webster HallVenue for some American rock and roll. Our model’s soft string comparison-based clustering can be seen at work when our model uncovers records that would have been impossible without this approach. One such example is correcting the misspelling of venue names (e.g. Terminal Five → Terminal 5) even when no message about the event spells the venue correctly. Still, the clustering can introduce errors by combining messages that provide orthogonal field contributions yet have overlapping tokens (thus escaping the penalty of the consistency factor). An example of two messages participating in this scenario is shown below; the shared term “holiday” in the second message gets relabeled as ARTIST: Come check out the holiday cheerArtist parkside is bursting.. Pls tune in to TV Guide NetworkVenue TONIGHT at 8 pm for 25 Most Hilarious Holiday TV Moments... While our experiments utilized binary relations, we believe our general approach should be useful for n-ary relation recovery in the social media domain. Because short messages are unlikely to express high arity relations completely, tying extraction and clustering seems an intuitive solution. In such a scenario, the record consistency constraints imposed by our model would have to be relaxed, perhaps examining pairwise argument consistency instead. 8 Conclusion We presented a novel model for record extraction from social media streams such as Twitter. Our model operates on a noisy feed of data and extracts canonical records of events by aggregating information across multiple messages. Despite the noise of irrelevant messages and the relatively colloquial nature of message language, we are able to extract records with relatively high accuracy. There is still much room for improvement using a broader array of features on factors. 9 Acknowledgements The authors gratefully acknowledge the support of the DARPA Machine Reading Program under AFRL prime contract no. FA8750-09-C-0172. Any opinions, findings, and conclusions expressed in this material are those of the author(s) and do not necessarily reflect the views of DARPA, AFRL, or the US government. 
Thanks also to Tal Wagner for his development assistance and the MIT NLP group for their helpful comments. 397 References Eugene Agichtein and Luis Gravano. 2000. Snowball: Extracting relations from large plain-text collections. In Proceedings of DL. Razvan C. Bunescu and Raymond J. Mooney. 2007. Learning to extract relations from the web using minimal supervision. In Proceedings of the ACL. J Eisenstein, B O’Connor, and N Smith.... 2010. A latent variable model for geographic lexical variation. Proceedings of the 2010 ... , Jan. Takaaki Hasegawa, Satoshi Sekine, and Ralph Grishman. 2004. Discovering relations among named entities from large corpora. In Proceedings of ACL. Robert W. Irving, Paul Leather, and Dan Gusfield. 1987. An efficient algorithm for the optimal stable marriage. J. ACM, 34:532–543, July. John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of International Conference of Machine Learning (ICML), pages 282–289. P. Liang and D. Klein. 2007. Structured Bayesian nonparametric models with variational inference (tutorial). In Association for Computational Linguistics (ACL). Gideon S. Mann and David Yarowsky. 2005. Multi-field information extraction and cross-document fusion. In Proceeding of the ACL. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009a. Distant supervision for relation extraction without labeled data. In Proceedings of ACL/IJCNLP. Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009b. Distant supervision for relation extraction without labeled data. In Proceedings of the ACL, pages 1003–1011. A Ritter, C Cherry, and B Dolan. 2010. Unsupervised modeling of twitter conversations. Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 172–180. Yusuke Shinyama and Satoshi Sekine. 2006. Preemptive information extraction using unrestricted relation discovery. In Proceedings of HLT/NAACL. Roman Yangarber, Ralph Grishman, Pasi Tapanainen, and Silja Huttunen. 2000. Automatic acquisition of domain knowledge for information extraction. In Proceedings of COLING. Limin Yao, Sebastian Riedel, and Andrew McCallum. 2010a. Collective cross-document relation extraction without labelled data. In Proceedings of the EMNLP, pages 1013–1023. Limin Yao, Sebastian Riedel, and Andrew McCallum. 2010b. Cross-document relation extraction without labelled data. In Proceedings of EMNLP. Jun Zhu, Zaiqing Nie, Xiaojing Liu, Bo Zhang, and JiRong Wen. 2009. StatSnowball: a statistical approach to extracting entity relationships. In Proceedings of WWW. 398
| 2011 | 40 |
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 399–408, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics How do you pronounce your name? Improving G2P with transliterations Aditya Bhargava and Grzegorz Kondrak Department of Computing Science University of Alberta Edmonton, Alberta, Canada, T6G 2E8 {abhargava,kondrak}@cs.ualberta.ca Abstract Grapheme-to-phoneme conversion (G2P) of names is an important and challenging problem. The correct pronunciation of a name is often reflected in its transliterations, which are expressed within a different phonological inventory. We investigate the problem of using transliterations to correct errors produced by state-of-the-art G2P systems. We present a novel re-ranking approach that incorporates a variety of score and n-gram features, in order to leverage transliterations from multiple languages. Our experiments demonstrate significant accuracy improvements when re-ranking is applied to n-best lists generated by three different G2P programs. 1 Introduction Grapheme-to-phoneme conversion (G2P), in which the aim is to convert the orthography of a word to its pronunciation (phonetic transcription), plays an important role in speech synthesis and understanding. Names, which comprise over 75% of unseen words (Black et al., 1998), present a particular challenge to G2P systems because of their high pronunciation variability. Guessing the correct pronunciation of a name is often difficult, especially if they are of foreign origin; this is attested by the ad hoc transcriptions which sometimes accompany new names introduced in news articles, especially for international stories with many foreign names. Transliterations provide a way of disambiguating the pronunciation of names. They are more abundant than phonetic transcriptions, for example when news items of international or global significance are reported in multiple languages. In addition, writing scripts such as Arabic, Korean, or Hindi are more consistent and easier to identify than various phonetic transcription schemes. The process of transliteration, also called phonetic translation (Li et al., 2009b), involves “sounding out” a name and then finding the closest possible representation of the sounds in another writing script. Thus, the correct pronunciation of a name is partially encoded in the form of the transliteration. For example, given the ambiguous letter-to-phoneme mapping of the English letter g, the initial phoneme of the name Gershwin may be predicted by a G2P system to be either /g/ (as in Gertrude) or /Ã/ (as in Gerald). The transliterations of the name in other scripts provide support for the former (correct) alternative. Although it seems evident that transliterations should be helpful in determining the correct pronunciation of a name, designing a system that takes advantage of this insight is not trivial. The main source of the difficulty stems from the differences between the phonologies of distinct languages. The mappings between phonemic inventories are often complex and context-dependent. For example, because Hindi has no /w/ sound, the transliteration of Gershwin instead uses a symbol that represents the phoneme /V/, similar to the /v/ phoneme in English. In addition, converting transliterations into phonemes is often non-trivial; although few orthographies are as inconsistent as that of English, this is effectively the G2P task for the particular language in question. 
In this paper, we demonstrate that leveraging transliterations can, in fact, improve the graphemeto-phoneme conversion of names. We propose a novel system based on discriminative re-ranking that is capable of incorporating multiple transliterations. We show that simplistic approaches to the problem 399 fail to achieve the same goal, and that transliterations from multiple languages are more helpful than from a single language. Our approach can be combined with any G2P system that produces n-best lists instead of single outputs. The experiments that we perform demonstrate significant error reduction for three very different G2P base systems. 2 Improving G2P with transliterations 2.1 Problem definition In both G2P and machine transliteration, we are interested in learning a function that, given an input sequence x, produces an output sequence y. In the G2P task, x is composed of graphemes and y is composed of phonemes; in transliteration, both sequences consist of graphemes but they represent different writing scripts. Unlike in machine translation, the monotonicity constraint is enforced; i.e., we assume that x and y can be aligned without the alignment links crossing each other (Jiampojamarn and Kondrak, 2010). We assume that we have available a base G2P system that produces an n-best list of outputs with a corresponding list of confidence scores. The goal is to improve the base system’s performance by applying existing transliterations of the input x to re-rank the system’s n-best output list. 2.2 Similarity-based methods A simple and intuitive approach to improving G2P with transliterations is to select from the n-best list the output sequence that is most similar to the corresponding transliteration. For example, the Hindi transliteration in Figure 1 is arguably closest in perceptual terms to the phonetic transcription of the second output in the n-best list, as compared to the other outputs. One obvious problem with this method is that it ignores the relative ordering of the n-best lists and their corresponding scores produced by the base system. A better approach is to combine the similarity score with the output score from the base system, allowing it to contribute an estimate of confidence in its output. For this purpose, we apply a linear combination of the two scores, where a single parameter λ, ranging between zero and one, determines the relative weight of the scores. The exact value of λ can be optimized on a training set. This approach is similar to the method used by Finch and Sumita (2010) to combine the scores of two different machine transliteration systems. 2.3 Measuring similarity The approaches presented in the previous section crucially depend on a method for computing the similarity between various symbol sequences that represent the same word. If we have a method of converting transliterations to phonetic representations, the similarity between two sequences of phonemes can be computed with a simple method such as normalized edit distance or the longest common subsequence ratio, which take into account the number and position of identical phonemes. Alternatively, we could apply a more complex approach, such as ALINE (Kondrak, 2000), which computes the distance between pairs of phonemes. However, the implementation of a conversion program would require ample training data or language-specific expertise. A more general approach is to skip the transcription step and compute the similarity between phonemes and graphemes directly. 
For example, the edit distance function can be learned from a training set of transliterations and their phonetic transcriptions (Ristad and Yianilos, 1998). In this paper, we apply M2M-ALIGNER (Jiampojamarn et al., 2007), an unsupervised aligner, which is a many-to-many generalization of the learned edit distance algorithm. M2M-ALIGNER was originally designed to align graphemes and phonemes, but can be applied to discover the alignment between any sets of symbols (given training data). The logarithm of the probability assigned to the optimal alignment can then be interpreted as a similarity measure between the two sequences. 2.4 Discriminative re-ranking The methods described in Section 2.2, which are based on the similarity between outputs and transliterations, are difficult to generalize when multiple transliterations of a single name are available. A linear combination is still possible but in this case optimizing the parameters would no longer be straightforward. Also, we are interested in utilizing other features besides sequence similarity. The SVM re-ranking paradigm offers a solution 400 Gershwin input /ɡɜːʃwɪn/ /d͡ʒɜːʃwɪn/ /d͡ʒɛɹʃwɪn/ n-best outputs गशिवन ガーシュウィン Гершвин transliterations (/ɡʌrʃʋɪn/) (/ɡaːɕuwiɴ/) (/ɡerʂvin/) Figure 1: An example name showing the data used for feature construction. Each arrow links a pair used to generate features, including n-gram and score features. The score features use similarity scores for transliteration-transcription pairs and system output scores for input-output pairs. One feature vector is constructed for each system output. to the problem. Our re-ranking system is informed by a large number of features, which are based on scores and n-grams. The scores are of three types: 1. The scores produced by the base system for each output in the n-best list. 2. The similarity scores between the outputs and each available transliteration. 3. The differences between scores in the n-best lists for both (1) and (2). Our set of binary n-gram features includes those used for DIRECTL+ (Jiampojamarn et al., 2010). They can be divided into four types: 1. The context features combine output symbols (phonemes) with n-grams of varying sizes in a window of size c centred around a corresponding position on the input side. 2. The transition features are bigrams on the output (phoneme) side. 3. The linear chain features combine the context features with the bigram transition features. 4. The joint n-gram features are n-grams containing both input and output symbols. We apply the features in a new way: instead of being applied strictly to a given input-output set, we expand their use across many languages and use all of them simultaneously. We apply the n-gram features across all transliteration-transcription pairs in addition to the usual input-output pairs corresponding to the n-best lists. Figure 1 illustrates the set of pairs used for feature generation. In this paper, we augment the n-gram features by a set of reverse features. Unlike a traditional G2P generator, our re-ranker has access to the outputs produced by the base system. By swapping the input and the output side, we can add reverse context and linear-chain features. Since the n-gram features are also applied to transliteration-transcription pairs, the reverse features enable us to include features which bind a variety of n-grams in the transliteration string with a single corresponding phoneme. The construction of n-gram features presupposes a fixed alignment between the input and output sequences. 
If the base G2P system does not provide input-output alignments, we use M2M-ALIGNER for this purpose. The transliteration-transcription pairs are also aligned by M2M-ALIGNER, which at the same time produces the corresponding similarity scores. (We set a lower limit of -100 on the M2MALIGNER scores.) If M2M-ALIGNER is unable to produce an alignment, we indicate this with a binary feature that is included with the n-gram features. 3 Experiments We perform several experiments to evaluate our transliteration-informed approaches. We test simple 401 similarity-based approaches on single-transliteration data, and evaluate our SVM re-ranking approach against this as well. We then test our approach using all available transliterations. Relevant code and scripts required to reproduce our experimental results are available online1. 3.1 Data & setup For pronunciation data, we extracted all names from the Combilex corpus (Richmond et al., 2009). We discarded all diacritics, duplicates and multi-word names, which yielded 10,084 unique names. Both the similarity and SVM methods require transliterations for identifying the best candidates in the nbest lists. They are therefore trained and evaluated on the subset of the G2P corpus for which transliterations available. Naturally, allowing transliterations from all languages results in a larger corpus than the one obtained by the intersection with transliterations from a single language. For our experiments, we split the data into 10% for testing, 10% for development, and 80% for training. The development set was used for initial tests and experiments, and then for our final results the training and development sets were combined into one set for final system training. For SVM reranking, during both development and testing we split the training set into 10 folds; this is necessary when training the re-ranker as it must have system output scores that are representative of the scores on unseen data. We ensured that there was never any overlap between the training and testing data for all trained systems. Our transliteration data come from the shared tasks on transliteration at the 2009 and 2010 Named Entities Workshops (Li et al., 2009a; Li et al., 2010). We use all of the 2010 English-source data plus the English-to-Russian data from 2009, which makes nine languages in total. In cases where the data provide alternative transliterations for a given input, we keep only one; our preliminary experiments indicated that including alternative transliterations did not improve performance. It should be noted that these transliteration corpora are noisy: Jiampojamarn et al. (2009) note a significant increase in 1http://www.cs.ualberta.ca/˜ab31/ g2p-tl-rr Language Corpus size Overlap Bengali 12,785 1,840 Chinese 37,753 4,713 Hindi 12,383 2,179 Japanese 26,206 4,773 Kannada 10,543 1,918 Korean 6,761 3,015 Russian 6,447 487 Tamil 10,646 1,922 Thai 27,023 5,436 Table 1: The number of unique single-word entries in the transliteration corpora for each language and the amount of common data (overlap) with the pronunciation data. English-to-Hindi transliteration performance with a simple cleaning of the data. Our tests involving transliterations from multiple languages are performed on the set of names for which we have both the pronunciation and transliteration data. There are 7,423 names in the G2P corpus for which at least one transliteration is available. Table 1 lists the total size of the transliteration corpora as well as the amount of overlap with the G2P data. 
Note that the base G2P systems are trained using all 10,084 names in the corpus as opposed to only the 7,423 names for which there are transliterations available. This ensures that the G2P systems have more training data to provide the best possible base performance. For our single-language experiments, we normalize the various scores when tuning the linear combination parameter λ so that we can compare values across different experimental conditions. For SVM re-ranking, we directly implement the method of Joachims (2002) to convert the re-ranking problem into a classification problem, and then use the very fast LIBLINEAR (Fan et al., 2008) to build the SVM models. Optimal hyperparameter values were determined during development. We evaluate using word accuracy, the percentage of words for which the pronunciations are correctly predicted. This measure marks pronunciations that are even slightly different from the correct one as incorrect, so even a small change in pronunciation that might be acceptable or even unnoticeable to humans would count against the system’s performance. 402 3.2 Base systems It is important to test multiple base systems in order to ensure that any gain in performance applies to the task in general and not just to a particular system. We use three G2P systems in our tests: 1. FESTIVAL (FEST), a popular speech synthesis package, which implements G2P conversion with CARTs (decision trees) (Black et al., 1998). 2. SEQUITUR (SEQ), a generative system based on the joint n-gram approach (Bisani and Ney, 2008). 3. DIRECTL+ (DTL), the discriminative system on which our n-gram features are based (Jiampojamarn et al., 2010). All systems are capable of providing n-best output lists along with scores for each output, although for FESTIVAL they had to be constructed from the list of output probabilities for each input character. We run DIRECTL+ with all of the features described in (Jiampojamarn et al., 2010) (i.e., context features, transition features, linear chain features, and joint n-gram features). System parameters, such as maximum number of iterations, were determined during development. For SEQUITUR, we keep default options except for the enabling of the 10 best outputs and we convert the probabilities assigned to the outputs to log-probabilities. We set SEQUITUR’s joint n-gram order to 6 (this was also determined during development). Note that the three base systems differ slightly in terms of the alignment information that they provide in their outputs. FESTIVAL operates letter-byletter, so we use the single-letter inputs with the phoneme outputs as the aligned units. DIRECTL+ specifies many-to-many alignments in its output. For SEQUITUR, however, since it provides no information regarding the output structure, we use M2MALIGNER to induce alignments for n-gram feature generation. 3.3 Transliterations from a single language The goal of the first experiment is to compare several similarity-based methods, and to determine how they compare to our re-ranking approach. In order to find the similarity between phonetic transcriptions, we use the two different methods described in Section 2.2: ALINE and M2M-ALIGNER. We further test the use of a linear combination of the similarity scores with the base system’s score so that its confidence information can be taken into account; the linear combination weight is determined from the training set. These methods are referred to as ALINE+BASE and M2M+BASE. 
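A minimal sketch of this combined re-scoring is given below; the pronunciations, scores, and similarity values are illustrative, and both scores are assumed to have already been normalized to a comparable range as described above:

```python
# A sketch (not the authors' code) of linear-combination re-scoring of an n-best list:
# combined score = lambda * base_score + (1 - lambda) * similarity_to_transliteration.

def rerank_nbest(nbest, similarity, lam):
    """nbest: list of (pronunciation, normalized base score);
    similarity: pronunciation -> normalized similarity score;
    lam: weight on the base system, tuned on training data."""
    rescored = [(lam * base + (1.0 - lam) * similarity(pron), pron)
                for pron, base in nbest]
    return max(rescored)[1]

nbest = [("g er sh w ih n", 0.45), ("jh er sh w ih n", 0.55)]
sim = {"g er sh w ih n": 0.9, "jh er sh w ih n": 0.3}.get
print(rerank_nbest(nbest, sim, lam=0.6))   # -> "g er sh w ih n"
```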
For these experiments, our training and testing sets are obtained by intersecting our G2P training and testing sets respectively with the Hindi transliteration corpus, yielding 1,950 names for training and 229 names for testing. Since the similarity-based methods are designed to incorporate homogeneous same-script transliterations, we can only run this experiment on one language at a time. Furthermore, ALINE operates on phoneme sequences, so we first need to convert the transliterations to phonemes. An alternative would be to train a proper G2P system, but this would require a large set of word-pronunciation pairs. For this experiment, we choose Hindi, for which we constructed a rule-based G2P converter. Aside from simple one-to-one mapping (romanization) rules, the converter has about ten rules to adjust for context. For these experiments, we apply our SVM reranking method in two ways: 1. Using only Hindi transliterations (referred to as SVM-HINDI). 2. Using all available languages (referred to as SVM-ALL). In both cases, the test set is restricted to the same 229 names, in order to provide a valid comparison. Table 2 presents the results. Regardless of the choice of the similarity function, the simplest approaches fail in a spectacular manner, significantly reducing the accuracy with respect to the base system. The linear combination methods give mixed results, improving the accuracy for FESTIVAL but not for SEQUITUR or DIRECTL+ (although the differences are not statistically significant). However, they perform much better than the methods based on similarity scores alone as they are able to take advantage of the base system’s output scores. If we look at the values of λ that provide the best performance 403 Base system FEST SEQ DTL Base 58.1 67.3 71.6 ALINE 28.0 26.6 27.5 M2M 39.3 36.2 36.2 ALINE+BASE 58.5 65.9 71.2 M2M+BASE 58.5 66.4 70.3 SVM-HINDI 63.3 69.0 69.9 SVM-ALL 68.6 72.5 75.6 Table 2: Word accuracy (in percentages) of various methods when only Hindi transliterations are used. on the training set, we find that they are higher for the stronger base systems, indicating more reliance on the base system output scores. For example, for ALINE+BASE the FESTIVAL-based system has λ = 0.58 whereas the DIRECTL+-based system has λ = 0.81. Counter-intuitively, the ALINE+BASE and M2M+BASE methods are unable to improve upon SEQUITUR or DIRECTL+. We would expect to achieve at least the base system’s performance, but disparities between the training and testing sets prevent this. The two SVM-based methods achieve much better results. SVM-ALL produces impressive accuracy gains for all three base systems, while SVMHINDI yields smaller (but still statistically significant) improvements for FESTIVAL and SEQUITUR. These results suggest that our re-ranking method provides a bigger boost to systems built with different design principles than to DIRECTL+ which utilizes a similar set of features. On the other hand, the results also show that the information obtained by consulting a single transliteration may be insufficient to improve an already high-performing G2P converter. 3.4 Transliterations from multiple languages Our second experiment expands upon the first; we use all available transliterations instead of being restricted to one language. This rules out the simple similarity-based approaches, but allows us to test our re-ranking approach in a way that fully utilizes the available data. 
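Recall from Section 3.1 that the re-ranker itself is trained by reducing ranking to classification in the style of Joachims (2002): each n-best list contributes difference vectors between the correct output's features and each competitor's. The sketch below illustrates that reduction on synthetic data, using scikit-learn's LIBLINEAR-backed LinearSVC as a stand-in for the authors' setup; the lists, labels, and feature dimensions are made up.

```python
import numpy as np
from sklearn.svm import LinearSVC   # LIBLINEAR-backed, standing in for the authors' setup

# Sketch (not the authors' code) of the Joachims (2002)-style reduction of n-best
# re-ranking to binary classification over pairwise feature-difference vectors.

def pairwise_examples(nbest_lists):
    """nbest_lists: list of (feature_matrix, index_of_correct_output)."""
    X, y = [], []
    for feats, gold in nbest_lists:
        for j in range(len(feats)):
            if j == gold:
                continue
            X.append(feats[gold] - feats[j]); y.append(+1)
            X.append(feats[j] - feats[gold]); y.append(-1)
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)                 # two synthetic 3-best lists, 4 features each
lists = [(rng.normal(size=(3, 4)), 0), (rng.normal(size=(3, 4)), 2)]
X, y = pairwise_examples(lists)
w = LinearSVC(C=10.0).fit(X, y).coef_.ravel()
print(int(np.argmax(lists[0][0] @ w)))         # should recover the correct index (0) here
```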
We test three variants of our transliteration-informed SVM re-ranking approach, Base system FEST SEQ DTL Base 55.3 66.5 70.8 SVM-SCORE 62.1 68.4 71.0 SVM-N-GRAM 66.2 72.5 73.8 SVM-ALL 67.2 73.4 74.3 Table 3: Word accuracy of the base system versus the reranking variants with transliterations from multiple languages. which differ with respect to the set of included features: 1. SVM-SCORE includes only the three types of score features described in Section 2.4. 2. SVM-N-GRAM uses only the n-gram features. 3. SVM-ALL is the full system that combines the score and n-gram features. The objective is to determine the degree to which each of the feature classes contributes to the overall results. Because we are using all available transliterations, we achieve much greater coverage over our G2P data than in the previous experiment; in this case, our training set consists of 6,660 names while the test set has 763 names. Table 3 presents the results. Note that the baseline accuracies are somewhat lower than in Table 2 because of the different test set. We find that, when using all features, the SVM re-ranker can provide a very impressive error reduction over FESTIVAL (26.7%) and SEQUITUR (20.7%) and a smaller but still significant (p < 0.01 with the McNemar test) error reduction over DIRECTL+ (12.1%). When we consider our results using only the score and n-gram features, we can see that, interestingly, the n-gram features are most important. We draw a further conclusion from our results: consider the large disparity in improvements over the base systems. This indicates that FESTIVAL and SEQUITUR are benefiting from the DIRECTL+-style features used in the re-ranking. Without the n-gram features, however, there is still a significant improvement over FESTIVAL, demonstrating that the scores do provide useful information. In this case there is 404 no way for DIRECTL+-style information to make its way into the re-ranking; the process is based purely on the transliterations and their similarities with the transcriptions in the output lists, indicating that the system is capable of extracting useful information directly from transliterations. In the case of DIRECTL+, the transliterations help through the n-gram features rather than the score features; this is probably because the crucial feature that signals the inability of M2M-ALIGNER to align a given transliteration-transcription pair belongs to the set of the n-gram features. Both the n-gram features and score features are dependent on the alignments, but they differ in that the n-gram features allow weights to be learned for local n-gram pairs whereas the score features are based on global information, providing only a single feature for a given transliteration-transcription pair. The two therefore overlap to some degree, although the score features still provide useful information via probabilities learned during the alignment training process. A closer look at the results provides additional insight into the operation of our re-ranking system. For example, consider the name Bacchus, which DIRECTL+ incorrectly converts into /bækÙ@s/. The most likely reason why our re-ranker selects instead the correct pronunciation /bæk@s/ is that M2MALIGNER fails to align three of the five available transliterations with /bækÙ@s/. Such alignment failures are caused by a lack of evidence for the mapping of the grapheme representing the sound /k/ in the transliteration training data with the phoneme /Ù/. 
In addition, the lack of alignments prevents any n-gram features from being enabled. Considering the difficulty of the task, the top accuracy of almost 75% is quite impressive. In fact, many instances of human transliterations in our corpora are clearly incorrect. For example, the Hindi transliteration of Bacchus contains the /Ù/ consonant instead of the correct /k/. Moreover, our strict evaluation based on word accuracy counts all system outputs that fail to exactly match the dictionary data as errors. The differences are often very minor and may reflect an alternative pronunciation. The phoneme accuracy2 of our best result is 93.1%, 2The phoneme accuracy is calculated from the minimum edit distance between the predicted and correct pronunciations. # TL # Entries Improvement ≤1 111 0.9 ≤2 266 3.0 ≤3 398 3.8 ≤4 536 3.2 ≤5 619 2.8 ≤6 685 3.4 ≤7 732 3.7 ≤8 762 3.5 ≤9 763 3.5 Table 4: Absolute improvement in word accuracy (%) over the base system (DIRECTL+) of the SVM re-ranker for various numbers of available transliterations. which provides some idea of how similar the predicted pronunciation is to the correct one. 3.5 Effect of multiple transliterations One motivating factor for the use of SVM re-ranking was the ability to incorporate multiple transliteration languages. But how important is it to use more than one language? To examine this question, we look particularly at the sets of names having at most k transliterations available. Table 4 shows the results with DIRECTL+ as the base system. Note that the number of names with more than five transliterations was small. Importantly, we see that the increase in performance when only one transliteration is available is so small as to be insignificant. From this, we can conclude that obtaining improvement on the basis of a single transliteration is difficult in general. This corroborates the results of the experiment described in Section 3.3, where we used only Hindi transliterations. 4 Previous work There are three lines of research that are relevant to our work: (1) G2P in general; (2) G2P on names; and (3) combining diverse data sources and/or systems. The two leading approaches to G2P are represented by SEQUITUR (Bisani and Ney, 2008) and DIRECTL+ (Jiampojamarn et al., 2010). Recent comparisons suggests that the former obtains somewhat higher accuracy, especially when it includes joint n-gram features (Jiampojamarn et al., 2010). Systems based on decision trees are far behind. Our 405 results confirm this ranking. Names can present a particular challenge to G2P systems. Kienappel and Kneser (2001) reported a higher error rate for German names than for general words, while on the other hand Black et al. (1998) report similar accuracy on names as for other types of English words. Yang et al. (2006) and van den Heuvel et al. (2007) post-process the output of a general G2P system with name-specific phonemeto-phoneme (P2P) systems. They find significant improvement using this method on data sets consisting of Dutch first names, family names, and geographical names. However, it is unclear whether such an approach would be able to improve the performance of the current state-of-the-art G2P systems. In addition, the P2P approach works only on single outputs, whereas our re-ranking approach is designed to handle n-best output lists. Although our approach is (to the best of our knowledge) the first to use different tasks (G2P and transliteration) to inform each other, this is conceptually similar to model and system combination approaches. 
In statistical machine translation (SMT), methods that incorporate translations from other languages (Cohn and Lapata, 2007) have proven effective in low-resource situations: when phrase translations are unavailable for a certain language, one can look at other languages where the translation is available and then translate from that language. A similar pivoting approach has also been applied to machine transliteration (Zhang et al., 2010). Notably, the focus of these works have been on cases in which there are less data available; they also modify the generation process directly, rather than operating on existing outputs as we do. Ultimately, a combination of the two approaches is likely to give the best results. Finch and Sumita (2010) combine two very different approaches to transliteration using simple linear interpolation: they use SEQUITUR’s n-best outputs and re-rank them using a linear combination of the original SEQUITUR score and the score for that output of a phrased-based SMT system. The linear weights are hand-tuned. We similarly use linear combinations, but with many more scores and other features, necessitating the use of SVMs to determine the weights. Importantly, we combine different data types where they combine different systems. 5 Conclusions & future work In this paper, we explored the application of transliterations to G2P. We demonstrated that transliterations have the potential for helping choose between n-best output lists provided by standard G2P systems. Simple approaches based solely on similarity do not work when tested using a single transliteration language (Hindi), necessitating the use of smarter methods that can incorporate multiple transliteration languages. We apply SVM reranking to this task, enabling us to use a variety of features based not only on similarity scores but on n-grams as well. Our method shows impressive error reductions over the popular FESTIVAL system and the generative joint n-gram SEQUITUR system. We also find significant error reduction using the state-of-the-art DIRECTL+ system. Our analysis demonstrated that it is essential to provide the re-ranking system with transliterations from multiple languages in order to mitigate the differences between phonological inventories and smooth out noise in the transliterations. In the future, we plan to generalize our approach so that it can be applied to the task of generating transliterations, and to combine data from distinct G2P dictionaries. The latter task is related to the notion of domain adaptation. We would also like to apply our approach to web data; we have shown that it is possible to use noisy transliteration data, so it may be possible to leverage the noisy ad hoc pronunciation data as well. Finally, we plan to investigate earlier integration of such external information into the G2P process for single systems; while we noted that re-ranking provides a general approach applicable to any system that can generate n-best lists, there is a limit as to what re-ranking can do, as it relies on the correct output existing in the n-best list. Modifying existing systems would provide greater potential for improving results even though the changes would be necessarily system-specific. Acknowledgements We are grateful to Sittichai Jiampojamarn and Shane Bergsma for the very helpful discussions. This research was supported by the Natural Sciences and Engineering Research Council of Canada. 406 References Maximilian Bisani and Hermann Ney. 2008. 
Jointsequence models for grapheme-to-phoneme conversion. Speech Communication, 50(5):434–451, May. Alan W. Black, Kevin Lenzo, and Vincent Pagel. 1998. Issues in building general letter to sound rules. In The Third ESCA/COCOSDA Workshop (ETRW) on Speech Synthesis, Jenolan Caves House, Blue Mountains, New South Wales, Australia, November. Trevor Cohn and Mirella Lapata. 2007. Machine translation by triangulation: Making effective use of multiparallel corpora. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 728–735, Prague, Czech Republic, June. Association for Computational Linguistics. Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871–1874. Andrew Finch and Eiichiro Sumita. 2010. Transliteration using a phrase-based statistical machine translation system to re-score the output of a joint multigram model. In Proceedings of the 2010 Named Entities Workshop (NEWS 2010), pages 48–52, Uppsala, Sweden, July. Association for Computational Linguistics. Sittichai Jiampojamarn and Grzegorz Kondrak. 2010. Letter-phoneme alignment: An exploration. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 780–788, Uppsala, Sweden, July. Association for Computational Linguistics. Sittichai Jiampojamarn, Grzegorz Kondrak, and Tarek Sherif. 2007. Applying many-to-many alignments and hidden Markov models to letter-to-phoneme conversion. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 372–379, Rochester, New York, USA, April. Association for Computational Linguistics. Sittichai Jiampojamarn, Aditya Bhargava, Qing Dou, Kenneth Dwyer, and Grzegorz Kondrak. 2009. DirecTL: a language independent approach to transliteration. In Proceedings of the 2009 Named Entities Workshop: Shared Task on Transliteration (NEWS 2009), pages 28–31, Suntec, Singapore, August. Association for Computational Linguistics. Sittichai Jiampojamarn, Colin Cherry, and Grzegorz Kondrak. 2010. Integrating joint n-gram features into a discriminative training framework. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 697–700, Los Angeles, California, USA, June. Association for Computational Linguistics. Thorsten Joachims. 2002. Optimizing search engines using clickthrough data. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 133–142, Edmonton, Alberta, Canada. Association for Computing Machinery. Anne K. Kienappel and Reinhard Kneser. 2001. Designing very compact decision trees for graphemeto-phoneme transcription. In EUROSPEECH-2001, pages 1911–1914, Aalborg, Denmark, September. Grzegorz Kondrak. 2000. A new algorithm for the alignment of phonetic sequences. In Proceedings of the First Meeting of the North American Chapter of the Association for Computational Linguistics, pages 288–295, Seattle, Washington, USA, April. Haizhou Li, A Kumaran, Vladimir Pervouchine, and Min Zhang. 2009a. Report of NEWS 2009 machine transliteration shared task. In Proceedings of the 2009 Named Entities Workshop: Shared Task on Transliteration (NEWS 2009), pages 1–18, Suntec, Singapore, August. Association for Computational Linguistics. 
Haizhou Li, A Kumaran, Min Zhang, and Vladimir Pervouchine. 2009b. Whitepaper of NEWS 2009 machine transliteration shared task. In Proceedings of the 2009 Named Entities Workshop: Shared Task on Transliteration (NEWS 2009), pages 19–26, Suntec, Singapore, August. Association for Computational Linguistics. Haizhou Li, A Kumaran, Min Zhang, and Vladimir Pervouchine. 2010. Report of NEWS 2010 transliteration generation shared task. In Proceedings of the 2010 Named Entities Workshop (NEWS 2010), pages 1–11, Uppsala, Sweden, July. Association for Computational Linguistics. Korin Richmond, Robert Clark, and Sue Fitt. 2009. Robust LTS rules with the Combilex speech technology lexicon. In Proceedings of Interspeech, pages 1295– 1298, Brighton, UK, September. Eric Sven Ristad and Peter N. Yianilos. 1998. Learning string edit distance. IEEE Transactions on Pattern Recognition and Machine Intelligence, 20(5):522– 532, May. Henk van den Heuvel, Jean-Pierre Martens, and Nanneke Konings. 2007. G2P conversion of names. what can we do (better)? In Proceedings of Interspeech, pages 1773–1776, Antwerp, Belgium, August. Qian Yang, Jean-Pierre Martens, Nanneke Konings, and Henk van den Heuvel. 2006. Development of a phoneme-to-phoneme (p2p) converter to improve the grapheme-to-phoneme (g2p) conversion of names. In 407 Proceedings of the 2006 International Conference on Language Resources and Evaluation, pages 2570– 2573, Genoa, Italy, May. Min Zhang, Xiangyu Duan, Vladimir Pervouchine, and Haizhou Li. 2010. Machine transliteration: Leveraging on third languages. In Coling 2010: Posters, pages 1444–1452, Beijing, China, August. Coling 2010 Organizing Committee. 408
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 409–419, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Unsupervised Word Alignment with Arbitrary Features Chris Dyer Jonathan Clark Alon Lavie Noah A. Smith Language Technologies Institute Carnegie Mellon University Pittsburgh, PA 15213, USA {cdyer,jhclark,alavie,nasmith}@cs.cmu.edu Abstract We introduce a discriminatively trained, globally normalized, log-linear variant of the lexical translation models proposed by Brown et al. (1993). In our model, arbitrary, nonindependent features may be freely incorporated, thereby overcoming the inherent limitation of generative models, which require that features be sensitive to the conditional independencies of the generative process. However, unlike previous work on discriminative modeling of word alignment (which also permits the use of arbitrary features), the parameters in our models are learned from unannotated parallel sentences, rather than from supervised word alignments. Using a variety of intrinsic and extrinsic measures, including translation performance, we show our model yields better alignments than generative baselines in a number of language pairs. 1 Introduction Word alignment is an important subtask in statistical machine translation which is typically solved in one of two ways. The more common approach uses a generative translation model that relates bilingual string pairs using a latent alignment variable to designate which source words (or phrases) generate which target words. The parameters in these models can be learned straightforwardly from parallel sentences using EM, and standard inference techniques can recover most probable alignments (Brown et al., 1993). This approach is attractive because it only requires parallel training data. An alternative to the generative approach uses a discriminatively trained alignment model to predict word alignments in the parallel corpus. Discriminative models are attractive because they can incorporate arbitrary, overlapping features, meaning that errors observed in the predictions made by the model can be addressed by engineering new and better features. Unfortunately, both approaches are problematic, but in different ways. In the case of discriminative alignment models, manual alignment data is required for training, which is problematic for at least three reasons. Manual alignments are notoriously difficult to create and are available only for a handful of language pairs. Second, manual alignments impose a commitment to a particular preprocessing regime; this can be problematic since the optimal segmentation for translation often depends on characteristics of the test set or size of the available training data (Habash and Sadat, 2006) or may be constrained by requirements of other processing components, such parsers. Third, the “correct” alignment annotation for different tasks may vary: for example, relatively denser or sparser alignments may be optimal for different approaches to (downstream) translation model induction (Lopez, 2008; Fraser, 2007). Generative models have a different limitation: the joint probability of a particular setting of the random variables must factorize according to steps in a process that successively “generates” the values of the variables. 
At each step, the probability of some value being generated may depend only on the generation history (or a subset thereof), and the possible values a variable will take must form a locally normalized conditional probability distribution (CPD). While these locally normalized CPDs may be pa409 rameterized so as to make use of multiple, overlapping features (Berg-Kirkpatrick et al., 2010), the requirement that models factorize according to a particular generative process imposes a considerable restriction on the kinds of features that can be incorporated. When Brown et al. (1993) wanted to incorporate a fertility model to create their Models 3 through 5, the generative process used in Models 1 and 2 (where target words were generated one by one from source words independently of each other) had to be abandoned in favor of one in which each source word had to first decide how many targets it would generate.1 In this paper, we introduce a discriminatively trained, globally normalized log-linear model of lexical translation that can incorporate arbitrary, overlapping features, and use it to infer word alignments. Our model enjoys the usual benefits of discriminative modeling (e.g., parameter regularization, wellunderstood learning algorithms), but is trained entirely from parallel sentences without gold-standard word alignments. Thus, it addresses the two limitations of current word alignment approaches. This paper is structured as follows. We begin by introducing our model (§2), and follow this with a discussion of tractability, parameter estimation, and inference using finite-state techniques (§3). We then describe the specific features we used (§4) and provide experimental evaluation of the model, showing substantial improvements in three diverse language pairs (§5). We conclude with an analysis of related prior work (§6) and a general discussion (§8). 2 Model In this section, we develop a conditional model p(t | s) that, given a source language sentence s with length m = |s|, assigns probabilities to a target sentence t with length n, where each word tj is an element in the finite target vocabulary Ω. We begin by using the chain rule to factor this probability into two components, a translation model and a length model. p(t | s) = p(t, n | s) = p(t | s, n) | {z } translation model × p(n | s) | {z } length model 1Moore (2005) likewise uses this example to motivate the need for models that support arbitrary, overlapping features. In the translation model, we then assume that each word tj is a translation of one source word, or a special null token. We therefore introduce a latent alignment variable a = ⟨a1, a2, . . . , an⟩∈[0, m]n, where aj = 0 represents a special null token. p(t | s, n) = X a p(t, a | s, n) So far, our model is identical to that of (Brown et al., 1993); however, we part ways here. Rather than using the chain rule to further decompose this probability and motivate opportunities to make independence assumptions, we use a log-linear model with parameters θ ∈Rk and feature vector function H that maps each tuple ⟨a, s, t, n⟩into Rk to model p(t, a | s, n) directly: pθ(t, a | s, n) = exp θ⊤H(t, a, s, n) Zθ(s, n) , where Zθ(s, n) = X t′∈Ωn X a′ exp θ⊤H(t′, a′, s, n) Under some reasonable assumptions (a finite target vocabulary Ωand that all θk < ∞), the partition function Zθ(s, n) will always take on finite values, guaranteeing that p(t, a | s, n) is a proper probability distribution. So far, we have said little about the length model. 
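For reference, the decomposition and the log-linear model defined above can be written out as follows; this is only a restatement of the definitions already given, with no new modeling content.

```latex
% Decomposition into translation and length models
P(t \mid s) = P(t, n \mid s)
            = \underbrace{P(t \mid s, n)}_{\text{translation model}}
              \times \underbrace{P(n \mid s)}_{\text{length model}}

% Marginalizing over the latent alignment a \in [0, m]^n
P(t \mid s, n) = \sum_{a} P(t, a \mid s, n)

% Globally normalized log-linear translation model
p_{\theta}(t, a \mid s, n) = \frac{\exp\, \theta^{\top} H(t, a, s, n)}{Z_{\theta}(s, n)},
\qquad
Z_{\theta}(s, n) = \sum_{t' \in \Omega^{n}} \sum_{a'} \exp\, \theta^{\top} H(t', a', s, n)
```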
Since our intent here is to use the model for alignment, where both the target length and target string are observed, it will not be necessary to commit to any length model, even during training. 3 Tractability, Learning, and Inference The model introduced in the previous section is extremely general, and it can incorporate features sensitive to any imaginable aspects of a sentence pair and their alignment, from linguistically inspired (e.g., an indicator feature for whether both the source and target sentences contain a verb), to the mundane (e.g., the probability of the sentence pair and alignment under Model 1), to the absurd (e.g., an indicator if s and t are palindromes of each other). However, while our model can make use of arbitrary, overlapping features, when designing feature functions it is necessary to balance expressiveness and the computational complexity of the inference 410 algorithms used to reason under models that incorporate these features.2 To understand this tradeoff, we assume that the random variables being modeled (t, a) are arranged into an undirected graph G such that the vertices represent the variables and the edges are specified so that the feature function H decomposes linearly over all the cliques C in G, H(t, a, s, n) = X C h(tC, aC, s, n) , where tC and aC are the components associated with subgraph C and h(·) is a local feature vector function. In general, exact inference is exponential in the width of tree-decomposition of G, but, given a fixed width, they can be solved in polynomial time using dynamic programming. For example, when the graph has a sequential structure, exact inference can be carried out using the familiar forwardbackward algorithm (Lafferty et al., 2001). Although our features look at more structure than this, they are designed to keep treewidth low, meaning exact inference is still possible with dynamic programming. Figure 1 gives a graphical representation of our model as well as the more familiar generative (directed) variants. The edge set in the depicted graph is determined by the features that we use (§4). 3.1 Parameter Learning To learn the parameters of our model, we select the θ∗that minimizes the ℓ1 regularized conditional loglikelihood of a set of training data T : L(θ) = − X ⟨s,t⟩∈T log X a pθ(t, a | s, n) + β X k |θk| . Because of the ℓ1 penalty, this objective is not everywhere differentiable, but the gradient with respect to the parameters of the log-likelihood term is as follows. ∂L ∂θ = X ⟨s,t⟩∈T Epθ(a|s,t,n)[H(·)] −Epθ(t,a|s,n)[H(·)] (1) To optimize L, we employ an online method that approximates ℓ1 regularization and only depends on 2One way to understand expressiveness is in terms of independence assumptions, of course. Research in graphical models has done much to relate independence assumptions to the complexity of inference algorithms (Koller and Friedman, 2009). the gradient of the unregularized objective (Tsuruoka et al., 2009). This method is quite attractive since it is only necessary to represent the active features, meaning impractically large feature spaces can be searched provided the regularization strength is sufficiently high. Additionally, not only has this technique been shown to be very effective for optimizing convex objectives, but evidence suggests that the stochasticity of online algorithms often results in better solutions than batch optimizers for nonconvex objectives (Liang and Klein, 2009). 
On account of the latent alignment variable in our model, L is non-convex (as is the likelihood objective of the generative variant). To choose the regularization strength β and the initial learning rate η0,3 we trained several models on a 10,000-sentence-pair subset of the FrenchEnglish Hansards, and chose values that minimized the alignment error rate, as evaluated on a 447 sentence set of manually created alignments (Mihalcea and Pedersen, 2003). For the remainder of the experiments, we use the values we obtained, β = 0.4 and η0 = 0.3. 3.2 Inference with WFSAs We now describe how to use weighted finite-state automata (WFSAs) to compute the quantities necessary for training. We begin by describing the ideal WFSA representing the full translation search space, which we call the discriminative neighborhood, and then discuss strategies for reducing its size in the next section, since the full model is prohibitively large, even with small data sets. For each training instance ⟨s, t⟩, the contribution to the gradient (Equation 1) is the difference in two vectors of expectations. The first term is the expected value of H(·) when observing ⟨s, n, t⟩and letting a range over all possible alignments. The second is the expectation of the same function, but observing only ⟨s, n⟩and letting t′ and a take on any possible values (i.e., all possible translations of length n and all their possible alignments to s). To compute these expectations, we can construct a WFSA representing the discriminative neighborhood, the set Ωn×[0, m]n, such that every path from the start state to goal yields a pair ⟨t′, a⟩with weight 3For the other free parameters of the algorithm, we use the default values recommended by Tsuruoka et al. (2009). 411 a1 a2 a3 an t1 t2 t3 tn s n Fully directed model (Brown et al., 1993; Vogel et al., 1996; Berg-Kirkpatrick et al., 2010) Our model ... a1 a2 a3 an t1 t2 t3 tn s n ... ... ... s s s s s s s s Figure 1: A graphical representation of a conventional generative lexical translation model (left) and our model with an undirected translation model. For clarity, the observed node s (representing the full source sentence) is drawn in multiple locations. The dashed lines indicate a dependency on a deterministic mapping of tj (not its complete value). H(t′, a, s, n). With our feature set (§4), number of states in this WFSA is O(m×n) since at each target index j, there is a different state for each possible index of the source word translated at position j −1.4 Once the WFSA representing the discriminative neighborhood is built, we use the forward-backward algorithm to compute the second expectation term. We then intersect the WFSA with an unweighted FSA representing the target sentence t (because of the restricted structure of our WFSA, this amounts to removing edges), and finally run the forwardbackward algorithm on the resulting WFSA to compute the first expectation. 3.3 Shrinking the Discriminative Neighborhood The WFSA we constructed requires m × |Ω| transitions between all adjacent states, which is impractically large. We can reduce the number of edges by restricting the set of words that each source word can translate into. Thus, the model will not discriminate 4States contain a bit more information than the index of the previous source word, for example, there is some additional information about the previous translation decision that is passed forward. 
However, the concept of splitting states to guarantee distinct paths for different values of non-local features is well understood by NLP and machine translation researchers, and the necessary state structure should be obvious from the feature description. among all candidate target strings in Ωn, but rather in Ωn s , where Ωs = Sm i=1 Ωsi, and where Ωs is the set of target words that s may translate into.5 We consider four different definitions of Ωs: (1) the baseline of the full target vocabulary, (2) the set of all target words that co-occur in sentence pairs containing s, (3) the most probable words under IBM Model 1 that are above a threshold, and (4) the same Model 1, except we add a sparse symmetric Dirichlet prior (α = 0.01) on the translation distributions and use the empirical Bayes (EB) method to infer a point estimate, using variational inference. Table 1: Comparison of alternative definitions Ωs (arrows indicate whether higher or lower is better). Ωs time (s) ↓ P s |Ωs| ↓ AER ↓ = Ω 22.4 86.0M 0.0 co-occ. 8.9 0.68M 0.0 Model 1 0.2 0.38M 6.2 EB-Model 1 1.0 0.15M 2.9 Table 1 compares the average per-sentence time required to run the inference algorithm described 5Future work will explore alternative formulations of the discriminative neighborhood with the goal of further improving inference efficiency. Smith and Eisner (2005) show that good performance on unsupervised syntax learning is possible even when learning from very small discriminative neighborhoods, and we posit that the same holds here. 412 above under these four different definitions of Ωs on a 10,000 sentence subset of the Hansards FrenchEnglish corpus that includes manual word alignments. While our constructions guarantee that all references are reachable even in the reduced neighborhoods, not all alignments between source and target are possible. The last column is the oracle AER. Although EB variant of Model 1 neighborhood is slightly more expensive to do inference with than regular Model 1, we use it because it has a lower oracle AER.6 During alignment prediction (rather than during training) for a sentence pair ⟨s, t⟩, it is possible to further restrict Ωs to be just the set of words occurring in t, making extremely fast inference possible (comparable to that of the generative HMM alignment model). 4 Features Feature engineering lets us encode knowledge about what aspects of a translation derivation are useful in predicting whether it is good or not. In this section we discuss the features we used in our model. Many of these were taken from the discriminative alignment modeling literature, but we also note that our features can be much more fine-grained than those used in supervised alignment modeling, since we learn our models from a large amount of parallel data, rather than a small number of manual alignments. Word association features. Word association features are at the heart of all lexical translation models, whether generative or discriminative. 
In addition to fine-grained boolean indicator features ⟨saj, tj⟩for pair types, we have several orthographic features: identity, prefix identity, and an orthographic similarity measure designed to be informative for predicting the translation of named entities in languages that use similar alphabets.7 It has the property that source-target pairs of long words that are similar are given a higher score than word pairs that are short and similar (dissimilar pairs have a score near zero, 6We included all translations whose probability was within a factor of 10−4 of the highest probability translation. 7In experiments with Urdu, which uses an Arabic-derived script, the orthographic feature was computed after first applying a heuristic Romanization, which made the orthographic forms somewhat comparable. regardless of length). We also include “global” association scores that are precomputed by looking at the full training data: Dice’s coefficient (discretized), which we use to measure association strength between pairs of source and target word types across sentence pairs (Dice, 1945), IBM Model 1 forward and reverse probabilities, and the geometric mean of the Model 1 forward and reverse probabilities. Finally, we also cluster the source and target vocabularies (Och, 1999) and include class pair indicator features, which can learn generalizations that, e.g., “nouns tend to translate into nouns but not modal verbs.” Positional features. Following Blunsom and Cohn (2006), we include features indicating closeness to the alignment matrix diagonal, h(aj, j, m, n) = aj m −j n . We also conjoin this feature with the source word class type indicator to enable the model to learn that certain word types are more or less likely to favor a location on the diagonal (e.g. Urdu’s sentence-final verbs). Source features. Some words are functional elements that fulfill purely grammatical roles and should not be the “source” of a translation. For example, Romance languages require a preposition in the formation of what could be a noun-noun compound in English, thus, it may be useful to learn not to translate certain words (i.e. they should not participate in alignment links), or to have a bias to translate others. To capture this intuition we include an indicator feature that fires each time a source vocabulary item (and source word class) participates in an alignment link. Source path features. One class of particularly useful features assesses the goodness of the alignment ‘path’ through the source sentence (Vogel et al., 1996). Although assessing the predicted path requires using nonlocal features, since each aj ∈ [0, m] and m is relatively small, features can be sensitive to a wider context than is often practical. We use many overlapping source path features, some of which are sensitive to the distance and direction of the jump between aj−1 and aj, and others which are sensitive to the word pair these two points define, and others that combine all three elements. The features we use include a discretized 413 jump distance, the discretized jump conjoined with an indicator feature for the target length n, the discretized jump feature conjoined with the class of saj, and the discretized jump feature conjoined with the class of saj and saj−1. To discretize the features we take a log transform (base 1.3) of the jump width and let an indicator feature fire for the closest integer. 
In addition to these distance-dependent features, we also include indicator features that fire on bigrams ⟨saj−1, saj⟩and their word classes. Thus, this feature can capture our intuition that, e.g., adjectives are more likely to come before or after a noun in different languages. Target string features. Features sensitive to multiple values in the predicted target string or latent alignment variable must be handled carefully for the sake of computational tractability. While features that look at multiple source words can be computed linearly in the number of source words considered (since the source string is always observable), features that look at multiple target words require exponential time and space!8 However, by grouping the tj’s into coarse equivalence classes and looking at small numbers of variables, it is possible to incorporate such features. We include a feature that fires when a word translates as itself (for example, a name or a date, which occurs in languages that share the same alphabet) in position j, but then is translated again (as something else) in position j −1 or j + 1. 5 Experiments We now turn to an empirical assessment of our model. Using various datasets, we evaluate the performance of the models’ intrinsic quality and theirtheir alignments’ contribution to a standard machine translation system. We make use of parallel corpora from languages with very different typologies: a small (0.8M words) Chinese-English corpus from the tourism and travel domain (Takezawa et al., 2002), a corpus of Czech-English news commentary (3.1M words),9 and an Urdu-English corpus (2M words) provided by NIST for the 2009 Open MT Evaluation. These pairs were selected since each poses different alignment challenges (word or8This is of course what makes history-based language model integration an inference challenge in translation. 9http://statmt.org/wmt10 der in Chinese and Urdu, morphological complexity in Czech, and a non-alphabetic writing system in Chinese), and confining ourselves to these relatively small corpora reduced the engineering overhead of getting an implementation up and running. Future work will explore the scalability characteristics and limits of the model. 5.1 Methodology For each language pair, we train two log-linear translation models as described above (§3), once with English as the source and once with English as the target language. For a baseline, we use the Giza++ toolkit (Och and Ney, 2003) to learn Model 4, again in both directions. We symmetrize the alignments from both model types using the grow-diag-final-and heuristic (Koehn et al., 2003) producing, in total, six alignment sets. We evaluate them both intrinsically and in terms of their performance in a translation system. Since we only have gold alignments for CzechEnglish (Bojar and Prokopov´a, 2006), we can report alignment error rate (AER; Och and Ney, 2003) only for this pair. However, we offer two further measures that we believe are suggestive and that do not require gold alignments. One is the average alignment “fertility” of source words that occur only a single time in the training data (so-called hapax legomena). This assesses the impact of a typical alignment problem observed in generative models trained to maximize likelihood: infrequent source words act as “garbage collectors”, with many target words aligned to them (the word dislike in the Model 4 alignment in Figure 2 is an example). Thus, we expect lower values of this measure to correlate with better alignments. 
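Before turning to the second measure, here is a minimal sketch of how this singleton-fertility statistic could be computed from a set of symmetrized alignment links; the corpus and alignment representations are assumptions for illustration.

```python
from collections import Counter

def average_singleton_fertility(corpus, alignments):
    """corpus: list of (source_tokens, target_tokens) pairs.
    alignments: parallel list of link sets {(i, j)}, where i indexes the
    source sentence and j the target sentence.

    Returns the average number of target words linked to source word
    types that occur exactly once in the training data (hapax legomena).
    """
    type_counts = Counter(tok for src, _ in corpus for tok in src)
    singleton_types = {t for t, c in type_counts.items() if c == 1}

    fertilities = []
    for (src, _), links in zip(corpus, alignments):
        for i, tok in enumerate(src):
            if tok in singleton_types:
                fertilities.append(sum(1 for (li, _) in links if li == i))
    return sum(fertilities) / len(fertilities) if fertilities else 0.0
```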
The second measure is the number of rule types learned in the grammar induction process used for translation that match the translation test sets.10 While neither a decrease in the average singleton fertility nor an increase in the number of rules induced guarantees better alignment quality, we believe it is reasonable to assume that they are positively correlated. For the translation experiments in each language pair, we make use of the cdec decoder (Dyer et al., 10This measure does not assess whether the rule types are good or bad, but it does suggest that the system’s coverage is greater. 414 2010), inducing a hierarchical phrase based translation grammar from two sets of symmetrized alignments using the method described by Chiang (2007). Additionally, recent work that has demonstrated that extracting rules from n-best alignments has value (Liu et al., 2009; Venugopal et al., 2008). We therefore define a third condition where rules are extracted from the corpus under both the Model 4 and discriminative alignments and merged to form a single grammar. We incorporate a 3-gram language model learned from the target side of the training data as well as 50M supplemental words of monolingual training data consisting of sentences randomly sampled from the English Gigaword, version 4. In the small Chinese-English travel domain experiment, we just use the LM estimated from the bitext. The parameters of the translation model were tuned using “hypergraph” minimum error rate training (MERT) to maximize BLEU on a held-out development set (Kumar et al., 2009). Results are reported using case-insensitive BLEU (Papineni et al., 2002), METEOR11 (Lavie and Denkowski, 2009), and TER (Snover et al., 2006), with the number of references varying by task. Since MERT is a nondeterministic optimization algorithm and results can vary considerably between runs, we follow Clark et al. (2011) and report the average score and standard deviation of 5 independent runs, 30 in the case of Chinese-English, since observed variance was higher. 5.2 Experimental Results Czech-English. Czech-English poses problems for word alignment models since, unlike English, Czech words have a complex inflectional morphology, and the syntax permits relatively free word order. For this language pair, we evaluate alignment error rate using the manual alignment corpus described by Bojar and Prokopov´a (2006). Table 2 summarizes the results. Chinese-English. Chinese-English poses a different set of problems for alignment. While Chinese words have rather simple morphology, the Chinese writing system renders our orthographic features useless. Despite these challenges, the Chinese re11Meteor 1.0 with exact, stem, synonymy, and paraphrase modules and HTER parameters. Table 2: Czech-English experimental results. ˜φsing. is the average fertility of singleton source words. AER ↓ ˜φsing. ↓ # rules ↑ Model 4 e | f 24.8 4.1 f | e 33.6 6.6 sym. 23.4 2.7 993,953 Our model e | f 21.9 2.3 f | e 29.3 3.8 sym. 20.5 1.6 1,146,677 Alignment BLEU ↑ METEOR ↑ TER ↓ Model 4 16.3±0.2 46.1±0.1 67.4±0.3 Our model 16.5±0.1 46.8±0.1 67.0±0.2 Both 17.4±0.1 47.7±0.1 66.3±0.5 sults in Table 3 show the same pattern of results as seen in Czech-English. Table 3: Chinese-English experimental results. ˜φsing. ↓ # rules ↑ Model 4 e | f 4.4 f | e 3.9 sym. 3.6 52,323 Our model e | f 3.5 f | e 2.6 sym. 3.1 54,077 Alignment BLEU ↑ METEOR ↑ TER ↓ Model 4 56.5±0.3 73.0±0.4 29.1±0.3 Our model 57.2±0.8 73.8±0.4 29.3±1.1 Both 59.1±0.6 74.8±0.7 27.6±0.5 Urdu-English. 
Urdu-English is a more challenging language pair for word alignment than the previous two we have considered. The parallel data is drawn from numerous genres, and much of it was acquired automatically, making it quite noisy. So our models must not only predict good translations, they must cope with bad ones as well. Second, there has been no previous work on discriminative modeling of Urdu, since, to our knowledge, no manual alignments have been created. Finally, unlike English, Urdu is a head-final language: not only does it have SOV word order, but rather than prepositions, it has post-positions, which follow the nouns they modify, meaning its large scale word order is substantially 415 different from that of English. Table 4 demonstrates the same pattern of improving results with our alignment model. Table 4: Urdu-English experimental results. ˜φsing. ↓ # rules ↑ Model 4 e | f 6.5 f | e 8.0 sym. 3.2 244,570 Our model e | f 4.8 f | e 8.3 sym. 2.3 260,953 Alignment BLEU ↑ METEOR ↑ TER ↓ Model 4 23.3±0.2 49.3±0.2 68.8±0.8 Our model 23.4±0.2 49.7±0.1 67.7±0.2 Both 24.1±0.2 50.6±0.1 66.8±0.5 5.3 Analysis The quantitative results presented in this section strongly suggest that our modeling approach produces better alignments. In this section, we try to characterize how the model is doing what it does and what it has learned. Because of the ℓ1 regularization, the number of active (non-zero) features in the inferred models is small, relative to the number of features considered during training. The number of active features ranged from about 300k for the small Chinese-English corpus to 800k for UrduEnglish, which is less than one tenth of the available features in both cases. In all models, the coarse features (Model 1 probabilities, Dice coefficient, coarse positional features, etc.) typically received weights with large magnitudes, but finer features also played an important role. Language pair differences manifested themselves in many ways in the models that were learned. For example, orthographic features were (unsurprisingly) more valuable in Czech-English, with their largely overlapping alphabets, than in Chinese or Urdu. Examining the more fine-grained features is also illuminating. Table 5 shows the most highly weighted source path bigram features on the three models where English was the source language, and in each, we may observe some interesting characteristics of the target language. Left-most is EnglishCzech. At first it may be surprising that words like since and that have a highly weighted feature for transitioning to themselves. However, Czech punctuation rules require that relative clauses and subordinating conjunctions be preceded by a comma (which is only optional or outright forbidden in English), therefore our model translates these words twice, once to produce the comma, and a second time to produce the lexical item. The middle column is the English-Chinese model. In the training data, many of the sentences are questions directed to a second person, you. However, Chinese questions do not invert and the subject remains in the canonical first position, thus the transition from the start of sentence to you is highly weighted. Finally, Figure 2 illustrates how Model 4 (left) and our discriminative model (right) align an English-Urdu sentence pair (the English side is being conditioned on in both models). 
A reflex of Urdu’s head-final word order is seen in the list of most highly weighted bigrams, where a path through the English source where verbs that transition to end-of-sentence periods are predictive of good translations into Urdu. Table 5: The most highly weighted source path bigram features in the English-Czech, -Chinese, and -Urdu models. Bigram θk . ⟨/s⟩ 3.08 like like 1.19 one of 1.06 ” . 0.95 that that 0.92 is but 0.92 since since 0.84 ⟨s⟩when 0.83 , how 0.83 , not 0.83 Bigram θk . ⟨/s⟩ 2.67 ? ? 2.25 ⟨s⟩please 2.01 much ? 1.61 ⟨s⟩if 1.58 thank you 1.47 ⟨s⟩sorry 1.46 ⟨s⟩you 1.45 please like 1.24 ⟨s⟩this 1.19 Bigram θk . ⟨/s⟩ 1.87 ⟨s⟩this 1.24 will . 1.17 are . 1.16 is . 1.09 is that 1.00 have . 0.97 has . 0.96 was . 0.91 will ⟨/s⟩ 0.88 6 Related Work The literature contains numerous descriptions of discriminative approaches to word alignment motivated by the desire to be able to incorporate multiple, overlapping knowledge sources (Ayan et al., 2005; Moore, 2005; Taskar et al., 2005; Blunsom and Cohn, 2006; Haghighi et al., 2009; Liu et al., 2010; DeNero and Klein, 2010; Setiawan et al., 2010). This body of work has been an invaluable source of useful features. Several authors have dealt with the problem training log-linear models in an unsu416 IBM Model 4 alignment Our model's alignment Figure 2: Example English-Urdu alignment under IBM Model 4 (left) and our discriminative model (right). Model 4 displays two characteristic errors: garbage collection and an overly-strong monotonicity bias. Whereas our model does not exhibit these problems, and in fact, makes no mistakes in the alignment. pervised setting. The contrastive estimation technique proposed by Smith and Eisner (2005) is globally normalized (and thus capable of dealing with arbitrary features), and closely related to the model we developed; however, they do not discuss the problem of word alignment. Berg-Kirkpatrick et al. (2010) learn locally normalized log-linear models in a generative setting. Globally normalized discriminative models with latent variables (Quattoni et al., 2004) have been used for a number of language processing problems, including MT (Dyer and Resnik, 2010; Blunsom et al., 2008a). However, this previous work relied on translation grammars constructed using standard generative word alignment processes. 7 Future Work While we have demonstrated that this model can be substantially useful, it is limited in some important ways which are being addressed in ongoing work. First, training is expensive, and we are exploring alternatives to the conditional likelihood objective that is currently used, such as contrastive neighborhoods advocated by (Smith and Eisner, 2005). Additionally, there is much evidence that non-local features like the source word fertility are (cf. IBM Model 3) useful for translation and alignment modeling. To be truly general, it must be possible to utilize such features. Unfortunately, features like this that depend on global properties of the alignment vector, a, make the inference problem NP-hard, and approximations are necessary. Fortunately, there is much recent work on approximate inference techniques for incorporating nonlocal features (Blunsom et al., 2008b; Gimpel and Smith, 2009; Cromi`eres and Kurohashi, 2009; Weiss and Taskar, 2010), suggesting that this problem too can be solved using established techniques. 
8 Conclusion We have introduced a globally normalized, loglinear lexical translation model that can be trained discriminatively using only parallel sentences, which we apply to the problem of word alignment. Our approach addresses two important shortcomings of previous work: (1) that local normalization of generative models constrains the features that can be used, and (2) that previous discriminatively trained word alignment models required supervised alignments. According to a variety of measures in a variety of translation tasks, this model produces superior alignments to generative approaches. Furthermore, the features learned by our model reveal interesting characteristics of the language pairs being modeled. Acknowledgments This work was supported in part by the DARPA GALE program; the U. S. Army Research Laboratory and the U. S. Army Research Office under contract/grant num417 ber W911NF-10-1-0533; and the National Science Foundation through grants IIS-0844507, IIS-0915187, IIS0713402, and IIS-0915327 and through TeraGrid resources provided by the Pittsburgh Supercomputing Center under grant number TG-DBS110003. We thank Ondˇrej Bojar for providing the Czech-English alignment data, and three anonymous reviewers for their detailed suggestions and comments on an earlier draft of this paper. References N. F. Ayan, B. J. Dorr, and C. Monz. 2005. NeurAlign: combining word alignments using neural networks. In Proc. of HLT-EMNLP. T. Berg-Kirkpatrick, A. Bouchard-Cˆot´e, J. DeNero, and D. Klein. 2010. Painless unsupervised learning with features. In Proc. of NAACL. P. Blunsom and T. Cohn. 2006. Discriminative word alignment with conditional random fields. In Proc. of ACL. P. Blunsom, T. Cohn, and M. Osborne. 2008a. A discriminative latent variable model for statistical machine translation. In Proc. of ACL-HLT. P. Blunsom, T. Cohn, and M. Osborne. 2008b. Probabilistic inference for machine translation. In Proc. of EMNLP 2008. O. Bojar and M. Prokopov´a. 2006. Czech-English word alignment. In Proc. of LREC. P. F. Brown, V. J. Della Pietra, S. A. Della Pietra, and R. L. Mercer. 1993. The mathematics of statistical machine translation: parameter estimation. Computational Linguistics, 19(2):263–311. D. Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228. J. Clark, C. Dyer, A. Lavie, and N. A. Smith. 2011. Better hypothesis testing for statistical machine translation: Controlling for optimizer instability. In Proc. of ACL. F. Cromi`eres and S. Kurohashi. 2009. An alignment algorithm using belief propagation and a structure-based distortion model. In Proc. of EACL. J. DeNero and D. Klein. 2010. Discriminative modeling of extraction sets for machine translation. In Proc. of ACL. L. R. Dice. 1945. Measures of the amount of ecologic association between species. Journal of Ecology, 26:297–302. C. Dyer and P. Resnik. 2010. Context-free reordering, finite-state translation. In Proc. of NAACL. C. Dyer, A. Lopez, J. Ganitkevitch, J. Weese, F. Ture, P. Blunsom, H. Setiawan, V. Eidelman, and P. Resnik. 2010. cdec: A decoder, alignment, and learning framework for finite-state and context-free translation models. In Proc. of ACL (demonstration session). A. Fraser. 2007. Improved Word Alignments for Statistical Machine Translation. Ph.D. thesis, University of Southern California. K. Gimpel and N. A. Smith. 2009. Cube summing, approximate inference with non-local features, and dynamic programming without semirings. In Proc. of EACL. N. Habash and F. Sadat. 2006. 
Arabic preprocessing schemes for statistical machine translation. In Proc. of NAACL, New York. A. Haghighi, J. Blitzer, J. DeNero, and D. Klein. 2009. Better word alignments with supervised ITG models. In Proc. of ACL-IJCNLP. P. Koehn, F. J. Och, and D. Marcu. 2003. Statistical phrase-based translation. In Proc. of NAACL. D. Koller and N. Friedman. 2009. Probabilistic Graphical Models: Principles and Techniques. MIT Press. S. Kumar, W. Macherey, C. Dyer, and F. Och. 2009. Efficient minimum error rate training and minimum bayesrisk decoding for translation hypergraphs and lattices. In Proc. of ACL-IJCNLP. J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. of ICML. A. Lavie and M. Denkowski. 2009. The METEOR metric for automatic evaluation of machine translation. Machine Translation Journal, 23(2–3):105–115. P. Liang and D. Klein. 2009. Online EM for unsupervised models. In Proc. of NAACL. Y. Liu, T. Xia, X. Xiao, and Q. Liu. 2009. Weighted alignment matrices for statistical machine translation. In Proc. of EMNLP. Y. Liu, Q. Liu, and S. Lin. 2010. Discriminative word alignment by linear modeling. Computational Linguistics, 36(3):303–339. A. Lopez. 2008. Tera-scale translation models via pattern matching. In Proc. of COLING. R. Mihalcea and T. Pedersen. 2003. An evaluation exercise for word alignment. In Proc. of the Workshop on Building and Using Parallel Texts. R. C. Moore. 2005. A discriminative framework for bilingual word alignment. In Proc. of HLT-EMNLP. F. Och and H. Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. F. Och. 1999. An efficient method for determining bilingual word classes. In Proc. of EACL. K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proc. of ACL. 418 A. Quattoni, M. Collins, and T. Darrell. 2004. Conditional random fields for object recognition. In NIPS 17. H. Setiawan, C. Dyer, and P. Resnik. 2010. Discriminative word alignment with a function word reordering model. In Proc. of EMNLP. N. A. Smith and J. Eisner. 2005. Contrastive estimation: training log-linear models on unlabeled data. In Proc. of ACL. M. Snover, B. J. Dorr, R. Schwartz, L. Micciulla, and J. Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proc. of AMTA. T. Takezawa, E. Sumita, F. Sugaya, H. Yamamoto, and S. Yamamoto. 2002. Toward a broad-coverage bilingual corpus for speech translation of travel conversations in the real world. In Proc. of LREC. B. Taskar, S. Lacoste-Julien, and D. Klein. 2005. A discriminative matching approach to word alignment. In Proc. of HLT-EMNLP. Y. Tsuruoka, J. Tsujii, and S. Ananiadou. 2009. Stochastic gradient descent training for l1-regularized loglinear models with cumulative penalty. In Proc. of ACL-IJCNLP. A. Venugopal, A. Zollmann, N. A. Smith, and S. Vogel. 2008. Wider pipelines: n-best alignments and parses in MT training. In Proc. of AMTA. S. Vogel, H. Ney, and C. Tillmann. 1996. HMM-based word alignment in statistical translation. In Proc. of COLING. D. Weiss and B. Taskar. 2010. Structured prediction cascades. In Proc. of AISTATS. 419
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 420–429, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Model-Based Aligner Combination Using Dual Decomposition John DeNero Google Research [email protected] Klaus Macherey Google Research [email protected] Abstract Unsupervised word alignment is most often modeled as a Markov process that generates a sentence f conditioned on its translation e. A similar model generating e from f will make different alignment predictions. Statistical machine translation systems combine the predictions of two directional models, typically using heuristic combination procedures like grow-diag-final. This paper presents a graphical model that embeds two directional aligners into a single model. Inference can be performed via dual decomposition, which reuses the efficient inference algorithms of the directional models. Our bidirectional model enforces a one-to-one phrase constraint while accounting for the uncertainty in the underlying directional models. The resulting alignments improve upon baseline combination heuristics in word-level and phrase-level evaluations. 1 Introduction Word alignment is the task of identifying corresponding words in sentence pairs. The standard approach to word alignment employs directional Markov models that align the words of a sentence f to those of its translation e, such as IBM Model 4 (Brown et al., 1993) or the HMM-based alignment model (Vogel et al., 1996). Machine translation systems typically combine the predictions of two directional models, one which aligns f to e and the other e to f (Och et al., 1999). Combination can reduce errors and relax the one-to-many structural restriction of directional models. Common combination methods include the union or intersection of directional alignments, as well as heuristic interpolations between the union and intersection like grow-diag-final (Koehn et al., 2003). This paper presents a model-based alternative to aligner combination. Inference in a probabilistic model resolves the conflicting predictions of two directional models, while taking into account each model’s uncertainty over its output. This result is achieved by embedding two directional HMM-based alignment models into a larger bidirectional graphical model. The full model structure and potentials allow the two embedded directional models to disagree to some extent, but reward agreement. Moreover, the bidirectional model enforces a one-to-one phrase alignment structure, similar to the output of phrase alignment models (Marcu and Wong, 2002; DeNero et al., 2008), unsupervised inversion transduction grammar (ITG) models (Blunsom et al., 2009), and supervised ITG models (Haghighi et al., 2009; DeNero and Klein, 2010). Inference in our combined model is not tractable because of numerous edge cycles in the model graph. However, we can employ dual decomposition as an approximate inference technique (Rush et al., 2010). In this approach, we iteratively apply the same efficient sequence algorithms for the underlying directional models, and thereby optimize a dual bound on the model objective. In cases where our algorithm converges, we have a certificate of optimality under the full model. Early stopping before convergence still yields useful outputs. 
Our model-based approach to aligner combination yields improvements in alignment quality and phrase extraction quality in Chinese-English experiments, relative to typical heuristic combinations methods applied to the predictions of independent directional models. 420 2 Model Definition Our bidirectional model G = (V, D) is a globally normalized, undirected graphical model of the word alignment for a fixed sentence pair (e, f). Each vertex in the vertex set V corresponds to a model variable Vi, and each undirected edge in the edge set D corresponds to a pair of variables (Vi, Vj). Each vertex has an associated potential function ωi(vi) that assigns a real-valued potential to each possible value vi of Vi.1 Likewise, each edge has an associated potential function µij(vi, vj) that scores pairs of values. The probability under the model of any full assignment v to the model variables, indexed by V, factors over vertex and edge potentials. P(v) ∝ Y vi∈V ωi(vi) · Y (vi,vj)∈D µij(vi, vj) Our model contains two directional hidden Markov alignment models, which we review in Section 2.1, along with additional structure that that we introduce in Section 2.2. 2.1 HMM-Based Alignment Model This section describes the classic hidden Markov model (HMM) based alignment model (Vogel et al., 1996). The model generates a sequence of words f conditioned on a word sequence e. We conventionally index the words of e by i and f by j. P(f|e) is defined in terms of a latent alignment vector a, where aj = i indicates that word position i of e aligns to word position j of f. P(f|e) = X a P(f, a|e) P(f, a|e) = |f| Y j=1 D(aj|aj−1)M(fj|eaj) . (1) In Equation 1 above, the emission model M is a learned multinomial distribution over word types. The transition model D is a multinomial over transition distances, which treats null alignments as a special case. D(aj = 0|aj−1 = i) = po D(aj = i′ ̸= 0|aj−1 = i) = (1 −po) · c(i′ −i) , 1Potentials in an undirected model play the same role as conditional probabilities in a directed model, but do not need to be locally normalized. where c(i′ −i) is a learned distribution over signed distances, normalized over the possible transitions from i. The parameters of the conditional multinomial M and the transition model c can be learned from a sentence aligned corpus via the expectation maximization algorithm. The null parameter po is typically fixed.2 The highest probability word alignment vector under the model for a given sentence pair (e, f) can be computed exactly using the standard Viterbi algorithm for HMMs in O(|e|2 · |f|) time. An alignment vector a can be converted trivially into a set of word alignment links A: Aa = {(i, j) : aj = i, i ̸= 0} . Aa is constrained to be many-to-one from f to e; many positions j can align to the same i, but each j appears at most once. We have defined a directional model that generates f from e. An identically structured model can be defined that generates e from f. Let b be a vector of alignments where bi = j indicates that word position j of f aligns to word position i of e. Then, P(e, b|f) is defined similarly to Equation 1, but with e and f swapped. We can distinguish the transition and emission distributions of the two models by subscripting them with their generative direction. P(e, b|f) = |e| Y j=1 Df→e(bi|bi−1)Mf→e(ei|fbi) . The vector b can be interpreted as a set of alignment links that is one-to-many: each value i appears at most once in the set. Ab = {(i, j) : bi = j, j ̸= 0} . 
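Since the bidirectional model of the next section reuses exactly this directional Viterbi inference, a compact sketch may help fix the notation. The Python fragment below is an illustrative rendering only: the emission table, the handling of the initial and null transitions, and all parameter names are simplifying assumptions rather than the exact parameterization used in our experiments.

```python
import math

def viterbi_align(f_words, e_words, emission, trans_dist, p_null=1e-6):
    """Viterbi decoding for the directional HMM aligner (f generated from e).

    emission[(f, e)]  ~ M(f|e);  trans_dist[d] ~ c(d), a distribution over
    signed jump distances.  State 0 is the NULL word, states 1..|e| are the
    positions of e.  Initial and NULL transitions are simplified here.
    """
    I, J = len(e_words), len(f_words)
    NEG = float("-inf")

    def log_emit(j, i):
        e = e_words[i - 1] if i > 0 else "<NULL>"
        return math.log(emission.get((f_words[j], e), 1e-10))

    def log_trans(i_prev, i):
        if i == 0:                                   # jump to NULL
            return math.log(p_null)
        if i_prev == 0:                              # leaving NULL: uniform (simplification)
            return math.log((1.0 - p_null) / I)
        return math.log((1.0 - p_null) * trans_dist.get(i - i_prev, 1e-10))

    # delta[j][i]: best log score of f[0..j] with f_j aligned to state i
    delta = [[NEG] * (I + 1) for _ in range(J)]
    back = [[0] * (I + 1) for _ in range(J)]
    for i in range(I + 1):                           # uniform start (assumption)
        delta[0][i] = log_emit(0, i) - math.log(I + 1)
    for j in range(1, J):
        for i in range(I + 1):
            best, arg = max(
                (delta[j - 1][ip] + log_trans(ip, i), ip) for ip in range(I + 1)
            )
            delta[j][i] = best + log_emit(j, i)
            back[j][i] = arg

    a = [0] * J                                      # backtrace
    a[J - 1] = max(range(I + 1), key=lambda i: delta[J - 1][i])
    for j in range(J - 1, 0, -1):
        a[j - 1] = back[j][a[j]]
    return a

def link_set(a):
    """A_a = {(i, j) : a_j = i, i != 0}, with 1-based positions."""
    return {(i, j + 1) for j, i in enumerate(a) if i != 0}
```

The double loop over states gives the O(|e|^2 * |f|) cost stated above, and the reverse model is obtained simply by swapping the roles of e and f.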
2.2 A Bidirectional Alignment Model We can combine two HMM-based directional alignment models by embedding them in a larger model 2In experiments, we set po = 10−6. Transitions from a nullaligned state aj−1 = 0 are also drawn from a fixed distribution, where D(aj = 0|aj−1 = 0) = 10−4 and for i′ ≥1, D(aj = i′|aj−1 = 0) ∝0.8 − i′· |f| |e| −j . With small po, the shape of this distribution has little effect on the alignment outcome. 421 How are you 你 好 How are you How are you a1 c11 b2 b2 c22(a) c22(b) a2 a3 b1 c12 c13 c21 c22 c23 a1 a2 a3 b1 c21(a) c21(b) c23(a) c23(b) c13(a) c13(b) c12(a) c12(b) c11(a) c11(b) c22(a) a1 a2 a3 c21(a) c23(a) c13(a) c12(a) c11(a) Figure 1: The structure of our graphical model for a simple sentence pair. The variables a are blue, b are red, and c are green. that includes all of the random variables of two directional models, along with additional structure that promotes agreement and resolves discrepancies. The original directional models include observed word sequences e and f, along with the two latent alignment vectors a and b defined in Section 2.1. Because the word types and lengths of e and f are always fixed by the observed sentence pair, we can define our model only over a and b, where the edge potentials between any aj, fj, and e are compiled into a vertex potential function ω(a) j on aj, defined in terms of f and e, and likewise for any bi. ω(a) j (i) = Me→f(fj|ei) ω(b) i (j) = Mf→e(ei|fj) The edge potentials between a and b encode the transition model in Equation 1. µ(a) j−1,j(i, i′) = De→f(aj = i′|aj−1 = i) µ(b) i−1,i(j, j′) = Df→e(bi = j′|bi−1 = j) In addition, we include in our model a latent boolean matrix c that encodes the output of the combined aligners: c ∈{0, 1}|e|×|f| . This matrix encodes the alignment links proposed by the bidirectional model: Ac = {(i, j) : cij = 1} . Each model node for an element cij ∈{0, 1} is connected to aj and bi via coherence edges. These edges allow the model to ensure that the three sets of variables, a, b, and c, together encode a coherent alignment analysis of the sentence pair. Figure 1 depicts the graph structure of the model. 2.3 Coherence Potentials The potentials on coherence edges are not learned and do not express any patterns in the data. Instead, they are fixed functions that promote consistency between the integer-valued directional alignment vectors a and b and the boolean-valued matrix c. Consider the assignment aj = i, where i = 0 indicates that word fj is null-aligned, and i ≥1 indicates that fj aligns to ei. The coherence potential ensures the following relationship between the variable assignment aj = i and the variables ci′j, for any i′ ∈[1, |e|]. • If i = 0 (null-aligned), then all ci′j = 0. • If i > 0, then cij = 1. • ci′j = 1 only if i′ ∈{i −1, i, i + 1}. • Assigning ci′j = 1 for i′ ̸= i incurs a cost e−α. Collectively, the list of cases above enforce an intuitive correspondence: an alignment aj = i ensures that cij must be 1, adjacent neighbors may be 1 but incur a cost, and all other elements are 0. This pattern of effects can be encoded in a potential function µ(c) for each coherence edge. These edge potential functions takes an integer value i for some variable aj and a binary value k for some ci′j. 
µ(c) (aj,ci′j)(i, k) = 1 i = 0 ∧k = 0 0 i = 0 ∧k = 1 1 i = i′ ∧k = 1 0 i = i′ ∧k = 0 1 i ̸= i′ ∧k = 0 e−α |i −i′| = 1 ∧k = 1 0 |i −i′| > 1 ∧k = 1 (2) Above, potentials of 0 effectively disallow certain cases because a full assignment to (a, b, c) is scored by the product of all model potentials. The potential function µ(c) (bi,cij′)(j, k) for a coherence edge between b and c is defined similarly. 422 2.4 Model Properties We interpret c as the final alignment produced by the model, ignoring a and b. In this way, we relax the one-to-many constraints of the directional models. However, all of the information about how words align is expressed by the vertex and edge potentials on a and b. The coherence edges and the link matrix c only serve to resolve conflicts between the directional models and communicate information between them. Because directional alignments are preserved intact as components of our model, extensions or refinements to the underlying directional Markov alignment model could be integrated cleanly into our model as well, including lexicalized transition models (He, 2007), extended conditioning contexts (Brunning et al., 2009), and external information (Shindo et al., 2010). For any assignment to (a, b, c) with non-zero probability, c must encode a one-to-one phrase alignment with a maximum phrase length of 3. That is, any word in either sentence can align to at most three words in the opposite sentence, and those words must be contiguous. This restriction is directly enforced by the edge potential in Equation 2. 3 Model Inference In general, graphical models admit efficient, exact inference algorithms if they do not contain cycles. Unfortunately, our model contains numerous cycles. For every pair of indices (i, j) and (i′, j′), the following cycle exists in the graph: cij →bi →cij′ →aj′ → ci′j′ →bi′ →ci′j →aj →cij Additional cycles also exist in the graph through the edges between aj−1 and aj and between bi−1 and bi. The general phrase alignment problem under an arbitrary model is known to be NP-hard (DeNero and Klein, 2008). 3.1 Dual Decomposition While the entire graphical model has loops, there are two overlapping subgraphs that are cycle-free. One subgraph Ga includes all of the vertices corresponding to variables a and c. The other subgraph Gb includes vertices for variables b and c. Every edge in the graph belongs to exactly one of these two subgraphs. The dual decomposition inference approach allows us to exploit this sub-graph structure (Rush et al., 2010). In particular, we can iteratively apply exact inference to the subgraph problems, adjusting their potentials to reflect the constraints of the full problem. The technique of dual decomposition has recently been shown to yield state-of-the-art performance in dependency parsing (Koo et al., 2010). 3.2 Dual Problem Formulation To describe a dual decomposition inference procedure for our model, we first restate the inference problem under our graphical model in terms of the two overlapping subgraphs that admit tractable inference. Let c(a) be a copy of c associated with Ga, and c(b) with Gb. Also, let f(a, c(a)) be the unnormalized log-probability of an assignment to Ga and g(b, c(b)) be the unnormalized log-probability of an assignment to Gb. Finally, let I be the index set of all (i, j) for c. Then, the maximum likelihood assignment to our original model can be found by optimizing max a,b,c(a),c(b) f(a, c(a)) + g(b, c(b)) (3) such that: c(a) ij = c(b) ij ∀(i, j) ∈I . 
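Before turning to the relaxation, note that the coherence potential of Equation 2 is the only non-learned component of the model and is simple enough to state directly in code. The following function is a literal transcription of the case analysis above, with alpha left as a free parameter; it is meant as an illustration, not as the implementation used in our experiments.

```python
import math

def coherence_potential(i, i_prime, k, alpha):
    """Edge potential mu^(c) between a_j = i and c_{i'j} = k (Equation 2).

    i = 0 means f_j is null-aligned; k is the boolean link indicator.
    A potential of 0 forbids the configuration, and e^{-alpha} penalizes
    linking an adjacent neighbour of the position chosen by a_j.
    """
    if i == 0:
        return 1.0 if k == 0 else 0.0          # null alignment: no links in column j
    if i == i_prime:
        return 1.0 if k == 1 else 0.0          # the aligned position must carry a link
    if k == 0:
        return 1.0                             # other positions may stay unlinked
    if abs(i - i_prime) == 1:
        return math.exp(-alpha)                # adjacent link allowed, but penalized
    return 0.0                                 # non-adjacent extra links are forbidden
```

Multiplying these potentials into the score of a full assignment is what enforces the one-to-one phrase constraint with maximum phrase length 3 discussed in Section 2.4.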
The Lagrangian relaxation of this optimization problem is L(a, b, c(a), c(b), u) = f(a, c(a))+g(b, c(b))+ X (i,j)∈I u(i, j)(c(a) i,j −c(b) i,j ) . Hence, we can rewrite the original problem as max a,b,c(a),c(b) min u L(a, b, c(a), c(b), u) . We can form a dual problem that is an upper bound on the original optimization problem by swapping the order of min and max. In this case, the dual problem decomposes into two terms that are each local to an acyclic subgraph. min u max a,c(a) f(a, c(a)) + X i,j u(i, j)c(a) ij + max b,c(b) g(b, c(b)) − X i,j u(i, j)c(b) ij (4) 423 How are you 你 好 you b2 c22(a) c22(b) a1 a2 a3 b1 c21(a) c21(b) c23(a) c23(b) c13(a) c13(b) c12(a) c12(b) c11(a) c11(b) a3 c23(a) c13(a) Figure 2: Our combined model decomposes into two acyclic models that each contain a copy of c. The decomposed model is depicted in Figure 2. As in previous work, we solve for the dual variable u by repeatedly performing inference in the two decoupled maximization problems. 3.3 Sub-Graph Inference We now address the problem of evaluating Equation 4 for fixed u. Consider the first line of Equation 4, which includes variables a and c(a). max a,c(a) f(a, c(a)) + X i,j u(i, j)c(a) ij (5) Because the graph Ga is tree-structured, Equation 5 can be evaluated in polynomial time. In fact, we can make a stronger claim: we can reuse the Viterbi inference algorithm for linear chain graphical models that applies to the embedded directional HMM models. That is, we can cast the optimization of Equation 5 as max a |f| Y j=1 De→f(aj|aj−1) · M′ j(aj = i) . In the original HMM-based aligner, the vertex potentials correspond to bilexical probabilities. Those quantities appear in f(a, c(a)), and therefore will be a part of M′ j(·) above. The additional terms of Equation 5 can also be factored into the vertex potentials of this linear chain model, because the optimal How are you c22(a) a1 a2 a3 c21(a) c23(a) c13(a) c12(a) c11(a) Figure 3: The tree-structured subgraph Ga can be mapped to an equivalent chain-structured model by optimizing over ci′j for aj = i. choice of each cij can be determined from aj and the model parameters. If aj = i, then cij = 1 according to our edge potential defined in Equation 2. Hence, setting aj = i requires the inclusion of the corresponding vertex potential ω(a) j (i), as well as u(i, j). For i′ ̸= i, either ci′j = 0, which contributes nothing to Equation 5, or ci′j = 1, which contributes u(i′, j)−α, according to our edge potential between aj and ci′j. Thus, we can capture the net effect of assigning aj and then optimally assigning all ci′j in a single potential M′ j(aj = i) = ω(a) j (i) + exp u(i, j) + X j′:|j′−j|=1 max(0, u(i, j′) −α) Note that Equation 5 and f are sums of terms in log space, while Viterbi inference for linear chains assumes a product of terms in probability space, which introduces the exponentiation above. Defining this potential allows us to collapse the source-side sub-graph inference problem defined by Equation 5, into a simple linear chain model that only includes potential functions M′ j and µ(a). Hence, we can use a highly optimized linear chain inference implementation rather than a solver for general tree-structured graphical models. Figure 3 depicts this transformation. 
An equivalent approach allows us to evaluate the 424 Algorithm 1 Dual decomposition inference algorithm for the bidirectional model for t = 1 to max iterations do r ←1 t ▷Learning rate c(a) ←arg max f(a, c(a)) + P i,j u(i, j)c(a) ij c(b) ←arg max g(b, c(b)) −P i,j u(i, j)c(b) ij if c(a) = c(b) then return c(a) ▷Converged u ←u + r · (c(b) −c(a)) ▷Dual update return combine(c(a), c(b)) ▷Stop early second line of Equation 4 for fixed u: max b,c(b) g(b, c(b)) + X i,j u(i, j)c(b) ij . (6) 3.4 Dual Decomposition Algorithm Now that we have the means to efficiently evaluate Equation 4 for fixed u, we can define the full dual decomposition algorithm for our model, which searches for a u that optimizes Equation 4. We can iteratively search for such a u via sub-gradient descent. We use a learning rate 1 t that decays with the number of iterations t. The full dual decomposition optimization procedure appears in Algorithm 1. If Algorithm 1 converges, then we have found a u such that the value of c(a) that optimizes Equation 5 is identical to the value of c(b) that optimizes Equation 6. Hence, it is also a solution to our original optimization problem: Equation 3. Since the dual problem is an upper bound on the original problem, this solution must be optimal for Equation 3. 3.5 Convergence and Early Stopping Our dual decomposition algorithm provides an inference method that is exact upon convergence.3 When Algorithm 1 does not converge, the two alignments c(a) and c(b) can still be used. While these alignments may differ, they will likely be more similar than the alignments of independent aligners. These alignments will still need to be combined procedurally (e.g., taking their union), but because 3This certificate of optimality is not provided by other approximate inference algorithms, such as belief propagation, sampling, or simulated annealing. they are more similar, the importance of the combination procedure is reduced. We analyze the behavior of early stopping experimentally in Section 5. 3.6 Inference Properties Because we set a maximum number of iterations n in the dual decomposition algorithm, and each iteration only involves optimization in a sequence model, our entire inference procedure is only a constant multiple n more computationally expensive than evaluating the original directional aligners. Moreover, the value of u is specific to a sentence pair. Therefore, our approach does not require any additional communication overhead relative to the independent directional models in a distributed aligner implementation. Memory requirements are virtually identical to the baseline: only u must be stored for each sentence pair as it is being processed, but can then be immediately discarded once alignments are inferred. Other approaches to generating one-to-one phrase alignments are generally more expensive. In particular, an ITG model requires O(|e|3 · |f|3) time, whereas our algorithm requires only O(n · (|f||e|2 + |e||f|2)) . Moreover, our approach allows Markov distortion potentials, while standard ITG models are restricted to only hierarchical distortion. 4 Related Work Alignment combination normally involves selecting some A from the output of two directional models. Common approaches include forming the union or intersection of the directional sets. A∪= Aa ∪Ab A∩= Aa ∩Ab . More complex combiners, such as the grow-diagfinal heuristic (Koehn et al., 2003), produce alignment link sets that include all of A∩and some subset of A∪based on the relationship of multiple links (Och et al., 1999). 
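When Algorithm 1 stops before convergence, the two copies c(a) and c(b) are handed to exactly such a combination procedure. The sketch below ties the two together; the subgraph maximizers are assumed to wrap the collapsed linear-chain Viterbi inference of Section 3.3, and the union fallback is just one possible choice of combiner.

```python
from typing import Callable, Dict, Set, Tuple

Link = Tuple[int, int]

def dual_decomposition(solve_a: Callable[[Dict[Link, float]], Set[Link]],
                       solve_b: Callable[[Dict[Link, float]], Set[Link]],
                       indices: Set[Link],
                       max_iterations: int = 250) -> Tuple[Set[Link], bool]:
    """Subgradient loop of Algorithm 1.

    solve_a(u) returns the c^(a) maximizing f(a, c^(a)) + sum_ij u(i,j) c^(a)_ij;
    solve_b(u) returns the c^(b) maximizing g(b, c^(b)) - sum_ij u(i,j) c^(b)_ij.
    Both are assumed to be supplied by the caller.
    """
    u = {ij: 0.0 for ij in indices}
    c_a: Set[Link] = set()
    c_b: Set[Link] = set()
    for t in range(1, max_iterations + 1):
        rate = 1.0 / t                        # decaying learning rate
        c_a, c_b = solve_a(u), solve_b(u)
        if c_a == c_b:
            return c_a, True                  # converged: certificate of optimality
        for ij in indices:                    # dual update u <- u + r * (c^(b) - c^(a))
            u[ij] += rate * ((1 if ij in c_b else 0) - (1 if ij in c_a else 0))
    return c_a | c_b, False                   # early stopping: combine, here via the union
```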
In addition, supervised word alignment models often use the output of directional unsupervised aligners as features or pruning signals. In the case 425 that a supervised model is restricted to proposing alignment links that appear in the output of a directional aligner, these models can be interpreted as a combination technique (Deng and Zhou, 2009). Such a model-based approach differs from ours in that it requires a supervised dataset and treats the directional aligners’ output as fixed. Combination is also related to agreement-based learning (Liang et al., 2006). This approach to jointly learning two directional alignment models yields state-of-the-art unsupervised performance. Our method is complementary to agreement-based learning, as it applies to Viterbi inference under the model rather than computing expectations. In fact, we employ agreement-based training to estimate the parameters of the directional aligners in our experiments. A parallel idea that closely relates to our bidirectional model is posterior regularization, which has also been applied to the word alignment problem (Grac¸a et al., 2008). One form of posterior regularization stipulates that the posterior probability of alignments from two models must agree, and enforces this agreement through an iterative procedure similar to Algorithm 1. This approach also yields state-of-the-art unsupervised alignment performance on some datasets, along with improvements in end-to-end translation quality (Ganchev et al., 2008). Our method differs from this posterior regularization work in two ways. First, we iterate over Viterbi predictions rather than posteriors. More importantly, we have changed the output space of the model to be a one-to-one phrase alignment via the coherence edge potential functions. Another similar line of work applies belief propagation to factor graphs that enforce a one-to-one word alignment (Cromi`eres and Kurohashi, 2009). The details of our models differ: we employ distance-based distortion, while they add structural correspondence terms based on syntactic parse trees. Also, our model training is identical to the HMMbased baseline training, while they employ belief propagation for both training and Viterbi inference. Although differing in both model and inference, our work and theirs both find improvements from defining graphical models for alignment that do not admit exact polynomial-time inference algorithms. Aligner Intersection Union Agreement Model |A∩| |A∪| |A∩|/|A∪| Baseline 5,554 10,998 50.5% Bidirectional 7,620 10,262 74.3% Table 1: The bidirectional model’s dual decomposition algorithm substantially increases the overlap between the predictions of the directional models, measured by the number of links in their intersection. 5 Experimental Results We evaluated our bidirectional model by comparing its output to the annotations of a hand-aligned corpus. In this way, we can show that the bidirectional model improves alignment quality and enables the extraction of more correct phrase pairs. 5.1 Data Conditions We evaluated alignment quality on a hand-aligned portion of the NIST 2002 Chinese-English test set (Ayan and Dorr, 2006). We trained the model on a portion of FBIS data that has been used previously for alignment model evaluation (Ayan and Dorr, 2006; Haghighi et al., 2009; DeNero and Klein, 2010). We conducted our evaluation on the first 150 sentences of the dataset, following previous work. This portion of the dataset is commonly used to train supervised models. 
We trained the parameters of the directional models using the agreement training variant of the expectation maximization algorithm (Liang et al., 2006). Agreement-trained IBM Model 1 was used to initialize the parameters of the HMM-based alignment models (Brown et al., 1993). Both IBM Model 1 and the HMM alignment models were trained for 5 iterations on a 6.2 million word parallel corpus of FBIS newswire. This training regimen on this data set has provided state-of-the-art unsupervised results that outperform IBM Model 4 (Haghighi et al., 2009). 5.2 Convergence Analysis With n = 250 maximum iterations, our dual decomposition inference algorithm only converges 6.2% of the time, perhaps largely due to the fact that the two directional models have different one-to-many structural constraints. However, the dual decompo426 Model Combiner Prec Rec AER union 57.6 80.0 33.4 Baseline intersect 86.2 62.7 27.2 grow-diag 60.1 78.8 32.1 union 63.3 81.5 29.1 Bidirectional intersect 77.5 75.1 23.6 grow-diag 65.6 80.6 28.0 Table 2: Alignment error rate results for the bidirectional model versus the baseline directional models. “growdiag” denotes the grow-diag-final heuristic. Model Combiner Prec Rec F1 union 75.1 33.5 46.3 Baseline intersect 64.3 43.4 51.8 grow-diag 68.3 37.5 48.4 union 63.2 44.9 52.5 Bidirectional intersect 57.1 53.6 55.3 grow-diag 60.2 47.4 53.0 Table 3: Phrase pair extraction accuracy for phrase pairs up to length 5. “grow-diag” denotes the grow-diag-final heuristic. sition algorithm does promote agreement between the two models. We can measure the agreement between models as the fraction of alignment links in the union A∪that also appear in the intersection A∩of the two directional models. Table 1 shows a 47% relative increase in the fraction of links that both models agree on by running dual decomposition (bidirectional), relative to independent directional inference (baseline). Improving convergence rates represents an important area of future work. 5.3 Alignment Error Evaluation To evaluate alignment error of the baseline directional aligners, we must apply a combination procedure such as union or intersection to Aa and Ab. Likewise, in order to evaluate alignment error for our combined model in cases where the inference algorithm does not converge, we must apply combination to c(a) and c(b). In cases where the algorithm does converge, c(a) = c(b) and so no further combination is necessary. We evaluate alignments relative to hand-aligned data using two metrics. First, we measure alignment error rate (AER), which compares the proposed alignment set A to the sure set S and possible set P in the annotation, where S ⊆P. Prec(A, P) = |A ∩P| |A| Rec(A, S) = |A ∩S| |S| AER(A, S, P) = 1 −|A ∩S| + |A ∩P| |A| + |S| AER results for Chinese-English are reported in Table 2. The bidirectional model improves both precision and recall relative to all heuristic combination techniques, including grow-diag-final (Koehn et al., 2003). Intersected alignments, which are one-to-one phrase alignments, achieve the best AER. Second, we measure phrase extraction accuracy. Extraction-based evaluations of alignment better coincide with the role of word aligners in machine translation systems (Ayan and Dorr, 2006). Let R5(S, P) be the set of phrases up to length 5 extracted from the sure link set S and possible link set P. Possible links are both included and excluded from phrase pairs during extraction, as in DeNero and Klein (2010). Null aligned words are never included in phrase pairs for evaluation. 
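For concreteness, the two evaluation measures reduce to a few lines of set arithmetic. The functions below assume that alignments are represented as sets of (i, j) links, and extracted phrase pairs as sets of tuples; they are an illustrative restatement of the definitions above.

```python
def precision(A, P):
    return len(A & P) / len(A) if A else 0.0

def recall(A, S):
    return len(A & S) / len(S) if S else 0.0

def aer(A, S, P):
    """Alignment error rate against sure links S and possible links P, with S a subset of P."""
    return 1.0 - (len(A & S) + len(A & P)) / (len(A) + len(S))

def f1(predicted, gold):
    """Set F1, e.g. for extracted phrase pairs R5(A, A) against the gold phrase pairs."""
    p, r = precision(predicted, gold), recall(predicted, gold)
    return 2 * p * r / (p + r) if p + r > 0 else 0.0
```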
Phrase extraction precision, recall, and F1 for R5(A, A) are reported in Table 3. Correct phrase pair recall increases from 43.4% to 53.6% (a 23.5% relative increase) for the bidirectional model, relative to the best baseline. Finally, we evaluated our bidirectional model in a large-scale end-to-end phrase-based machine translation system from Chinese to English, based on the alignment template approach (Och and Ney, 2004). The translation model weights were tuned for both the baseline and bidirectional alignments using lattice-based minimum error rate training (Kumar et al., 2009). In both cases, union alignments outperformed other combination heuristics. Bidirectional alignments yielded a modest improvement of 0.2% BLEU4 on a single-reference evaluation set of sentences sampled from the web (Papineni et al., 2002). 4BLEU improved from 29.59% to 29.82% after training IBM Model 1 for 3 iterations and training the HMM-based alignment model for 3 iterations. During training, link posteriors were symmetrized by pointwise linear interpolation. 427 As our model only provides small improvements in alignment precision and recall for the union combiner, the magnitude of the BLEU improvement is not surprising. 6 Conclusion We have presented a graphical model that combines two classical HMM-based alignment models. Our bidirectional model, which requires no additional learning and no supervised data, can be applied using dual decomposition with only a constant factor additional computation relative to independent directional inference. The resulting predictions improve the precision and recall of both alignment links and extraced phrase pairs in Chinese-English experiments. The best results follow from combination via intersection. Because our technique is defined declaratively in terms of a graphical model, it can be extended in a straightforward manner, for instance with additional potentials on c or improvements to the component directional models. We also look forward to discovering the best way to take advantage of these new alignments in downstream applications like machine translation, supervised word alignment, bilingual parsing (Burkett et al., 2010), part-of-speech tag induction (Naseem et al., 2009), or cross-lingual model projection (Smith and Eisner, 2009; Das and Petrov, 2011). References Necip Fazil Ayan and Bonnie J. Dorr. 2006. Going beyond AER: An extensive analysis of word alignments and their impact on MT. In Proceedings of the Association for Computational Linguistics. Phil Blunsom, Trevor Cohn, Chris Dyer, and Miles Osborne. 2009. A Gibbs sampler for phrasal synchronous grammar induction. In Proceedings of the Association for Computational Linguistics. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics. Jamie Brunning, Adria de Gispert, and William Byrne. 2009. Context-dependent alignment models for statistical machine translation. In Proceedings of the North American Chapter of the Association for Computational Linguistics. David Burkett, John Blitzer, and Dan Klein. 2010. Joint parsing and alignment with weakly synchronized grammars. In Proceedings of the North American Association for Computational Linguistics and IJCNLP. Fabien Cromi`eres and Sadao Kurohashi. 2009. An alignment algorithm using belief propagation and a structure-based distortion model. 
In Proceedings of the European Chapter of the Association for Computational Linguistics and IJCNLP. Dipanjan Das and Slav Petrov. 2011. Unsupervised partof-speech tagging with bilingual graph-based projections. In Proceedings of the Association for Computational Linguistics. John DeNero and Dan Klein. 2008. The complexity of phrase alignment problems. In Proceedings of the Association for Computational Linguistics. John DeNero and Dan Klein. 2010. Discriminative modeling of extraction sets for machine translation. In Proceedings of the Association for Computational Linguistics. John DeNero, Alexandre Bouchard-Cˆot´e, and Dan Klein. 2008. Sampling alignment structure under a Bayesian translation model. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Yonggang Deng and Bowen Zhou. 2009. Optimizing word alignment combination for phrase table training. In Proceedings of the Association for Computational Linguistics. Kuzman Ganchev, Joao Grac¸a, and Ben Taskar. 2008. Better alignments = better translations? In Proceedings of the Association for Computational Linguistics. Joao Grac¸a, Kuzman Ganchev, and Ben Taskar. 2008. Expectation maximization and posterior constraints. In Proceedings of Neural Information Processing Systems. Aria Haghighi, John Blitzer, John DeNero, and Dan Klein. 2009. Better word alignments with supervised ITG models. In Proceedings of the Association for Computational Linguistics. Xiaodong He. 2007. Using word-dependent transition models in HMM-based word alignment for statistical machine. In ACL Workshop on Statistical Machine Translation. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the North American Chapter of the Association for Computational Linguistics. Terry Koo, Alexander M. Rush, Michael Collins, Tommi Jaakkola, and David Sontag. 2010. Dual decomposition for parsing with non-projective head automata. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. 428 Shankar Kumar, Wolfgang Macherey, Chris Dyer, and Franz Josef Och. 2009. Efficient minimum error rate training and minimum bayes-risk decoding for translation hypergraphs and lattices. In Proceedings of the Association for Computational Linguistics. Percy Liang, Ben Taskar, and Dan Klein. 2006. Alignment by agreement. In Proceedings of the North American Chapter of the Association for Computational Linguistics. Daniel Marcu and William Wong. 2002. A phrase-based, joint probability model for statistical machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Tahira Naseem, Benjamin Snyder, Jacob Eisenstein, and Regina Barzilay. 2009. Multilingual part-of-speech tagging: Two unsupervised approaches. Journal of Artificial Intelligence Research. Franz Josef Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics. Franz Josef Och, Christopher Tillman, and Hermann Ney. 1999. Improved alignment models for statistical machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the Association for Computational Linguistics. Alexander M. Rush, David Sontag, Michael Collins, and Tommi Jaakkola. 2010. 
On dual decomposition and linear programming relaxations for natural language processing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Hiroyuki Shindo, Akinori Fujino, and Masaaki Nagata. 2010. Word alignment with synonym regularization. In Proceedings of the Association for Computational Linguistics. David A. Smith and Jason Eisner. 2009. Parser adaptation and projection with quasi-synchronous grammar features. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1996. HMM-based word alignment in statistical translation. In Proceedings of the Conference on Computational linguistics. 429
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 430–439, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics An Algorithm for Unsupervised Transliteration Mining with an Application to Word Alignment Hassan Sajjad Alexander Fraser Helmut Schmid Institute for Natural Language Processing University of Stuttgart {sajjad,fraser,schmid}@ims.uni-stuttgart.de Abstract We propose a language-independent method for the automatic extraction of transliteration pairs from parallel corpora. In contrast to previous work, our method uses no form of supervision, and does not require linguistically informed preprocessing. We conduct experiments on data sets from the NEWS 2010 shared task on transliteration mining and achieve an F-measure of up to 92%, outperforming most of the semi-supervised systems that were submitted. We also apply our method to English/Hindi and English/Arabic parallel corpora and compare the results with manually built gold standards which mark transliterated word pairs. Finally, we integrate the transliteration module into the GIZA++ word aligner and evaluate it on two word alignment tasks achieving improvements in both precision and recall measured against gold standard word alignments. 1 Introduction Most previous methods for building transliteration systems were supervised, requiring either handcrafted rules or a clean list of transliteration pairs, both of which are expensive to create. Such resources are also not applicable to other language pairs. In this paper, we show that it is possible to extract transliteration pairs from a parallel corpus using an unsupervised method. We first align a bilingual corpus at the word level using GIZA++ and create a list of word pairs containing a mix of nontransliterations and transliterations. We train a statistical transliterator on the list of word pairs. We then filter out a few word pairs (those which have the lowest transliteration probabilities according to the trained transliteration system) which are likely to be non-transliterations. We retrain the transliterator on the filtered data set. This process is iterated, filtering out more and more non-transliteration pairs until a nearly clean list of transliteration word pairs is left. The optimal number of iterations is automatically determined by a novel stopping criterion. We compare our unsupervised transliteration mining method with the semi-supervised systems presented at the NEWS 2010 shared task on transliteration mining (Kumaran et al., 2010) using four language pairs. We refer to this task as NEWS10. These systems used a manually labelled set of data for initial supervised training, which means that they are semi-supervised systems. In contrast, our system is fully unsupervised. We achieve an Fmeasure of up to 92% outperforming most of the semi-supervised systems. The NEWS10 data sets are extracted Wikipedia InterLanguage Links (WIL) which consist of parallel phrases, whereas a parallel corpus consists of parallel sentences. Transliteration mining on the WIL data sets is easier due to a higher percentage of transliterations than in parallel corpora. We also do experiments on parallel corpora for two language pairs. To this end, we created gold standards in which sampled word pairs are annotated as either transliterations or non-transliterations. These gold standards have been submitted with the paper as supplementary material as they are available to the research community. 
430 Finally we integrate a transliteration module into the GIZA++ word aligner and show that it improves word alignment quality. The transliteration module is trained on the transliteration pairs which our mining method extracts from the parallel corpora. We evaluate our word alignment system on two language pairs using gold standard word alignments and achieve improvements of 10% and 13.5% in precision and 3.5% and 13.5% in recall. The rest of the paper is organized as follows. In section 2, we describe the filtering model and the transliteration model. In section 3, we present our iterative transliteration mining algorithm and an algorithm which computes a stopping criterion for the mining algorithm. Section 4 describes the evaluation of our mining method through both gold standard evaluation and through using it to improve word alignment quality. In section 5, we present previous work and we conclude in section 6. 2 Models Our algorithms use two different models. The first model is a joint character sequence model which we apply to transliteration mining. We use the grapheme-to-phoneme converter g2p to implement this model. The other model is a standard phrasebased MT model which we apply to transliteration (as opposed to transliteration mining). We build it using the Moses toolkit. 2.1 Joint Sequence Model Using g2p Here, we briefly describe g2p using notation from Bisani and Ney (2008). The details of the model, its parameters and the utilized smoothing techniques can be found in Bisani and Ney (2008). The training data is a list of word pairs (a source word and its presumed transliteration) extracted from a word-aligned parallel corpus. g2p builds a joint sequence model on the character sequences of the word pairs and infers m-to-n alignments between source and target characters with Expectation Maximization (EM) training. The m-to-n character alignment units are referred to as “multigrams”. The model built on multigrams consisting of source and target character sequences greater than one learns too much noise (non-transliteration information) from the training data and performs poorly. In our experiments, we use multigrams with a maximum of one character on the source and one character on the target side (i.e., 0,1-to-0,1 character alignment units). The N-gram approximation of the joint probability can be defined in terms of multigrams qi as: p(qk 1) ≈ k+1 Y j=1 p(qj|qj−1 j−N+1) (1) where q0, qk+1 are set to a special boundary symbol. N-gram models of order > 1 did not work well because these models tended to learn noise (information from non-transliteration pairs) in the training data. For our experiments, we only trained g2p with the unigram model. In test mode, we look for the best sequence of multigrams given a fixed source and target string and return the probability of this sequence. For the mining process, we trained g2p on lists containing both transliteration pairs and nontransliteration pairs. 2.2 Statistical Machine Transliteration System We build a phrase-based MT system for transliteration using the Moses toolkit (Koehn et al., 2003). We also tried using g2p for implementing the transliteration decoder but found Moses to perform better. Moses has the advantage of using Minimum Error Rate Training (MERT) which optimizes transliteration accuracy rather than the likelihood of the training data as g2p does. The training data contains more non-transliteration pairs than transliteration pairs. We don’t want to maximize the likelihood of the non-transliteration pairs. 
Instead we want to optimize the transliteration performance for test data. Secondly, it is easy to use a large language model (LM) with Moses. We build the LM on the target word types in the data to be filtered. For training Moses as a transliteration system, we treat each word pair as if it were a parallel sentence, by putting spaces between the characters of each word. The model is built with the default settings of the Moses toolkit. The distortion limit “d“ is set to zero (no reordering). The LM is implemented as a five-gram model using the SRILM-Toolkit (Stolcke, 2002), with Add-1 smoothing for unigrams and Kneser-Ney smoothing for higher n-grams. 431 3 Extraction of Transliteration Pairs Training of a supervised transliteration system requires a list of transliteration pairs which is expensive to create. Such lists are usually either built manually or extracted using a classifier trained on manually labelled data and using other language dependent information. In this section, we present an iterative method for the extraction of transliteration pairs from parallel corpora which is fully unsupervised and language pair independent. Initially, we extract a list of word pairs from a word-aligned parallel corpus using GIZA++. The extracted word pairs are either transliterations, other kinds of translations, or misalignments. In each iteration, we first train g2p on the list of word pairs. Then we delete those 5% of the (remaining) training data which are least likely to be transliterations according to g2p.1 We determine the best iteration according to our stopping criterion and return the filtered data set from this iteration. The stopping criterion uses unlabelled held-out data to predict the optimal stopping point. The following sections describe the transliteration mining method in detail. 3.1 Methodology We will first describe the iterative filtering algorithm (Algorithm 1) and then the algorithm for the stopping criterion (Algorithm 2). In practice, we first run Algorithm 2 for 100 iterations to determine the best number of iterations. Then, we run Algorithm 1 for that many iterations. Initially, the parallel corpus is word-aligned using GIZA++ (Och and Ney, 2003), and the alignments are refined using the grow-diag-final-and heuristic (Koehn et al., 2003). We extract all word pairs which occur as 1-to-1 alignments in the word-aligned corpus. We ignore non-1-to-1 alignments because they are less likely to be transliterations for most language pairs. The extracted set of word pairs will be called “list of word pairs” later on. We use the list of word pairs as the training data for Algorithm 1. Algorithm 1 builds a joint sequence model using g2p on the training data and computes the joint probability of all word pairs according to g2p. We normalize the probabilities by taking the nth square root 1Since we delete 5% from the filtered data, the number of deleted data items decreases in each iteration. Algorithm 1 Mining of transliteration pairs 1: training data ←list of word pairs 2: I ←0 3: repeat 4: Build a joint source channel model on the training data using g2p and compute the joint probability of every word pair. 5: Remove the 5% word pairs with the lowest lengthnormalized probability from the training data. {and repeat the process with the filtered training data} 6: I ←I+1 7: until I = Stopping iteration from Algorithm 2 where n is the average length of the source and the target string. The training data contains mostly nontransliteration pairs and a few transliteration pairs. 
Therefore the training data is initially very noisy and the joint sequence model is not very accurate. However it can successfully be used to eliminate a few word pairs which are very unlikely to be transliterations. On the filtered training data, we can train a model which is slightly better than the previous model. Using this improved model, we can eliminate further non-transliterations. Our results show that at the iteration determined by our stopping criterion, the filtered set mostly contains transliterations and only a small number of transliterations have been mistakenly eliminated (see section 4.2). Algorithm 2 automatically determines the best stopping point of the iterative transliteration mining process. It is an extension of Algorithm 1. It runs the iterative process of Algorithm 1 on half of the list of word pairs (training data) for 100 iterations. For every iteration, it builds a transliteration system on the filtered data. The transliteration system is tested on the source side of the other half of the list of word pairs (held-out). The output of the transliteration system is matched against the target side of the held-out data. (These target words are either transliterations, translations or misalignments.) We match the target side of the held-out data under the assumption that all matches are transliterations. The iteration where the output of the transliteration system best matches the held-out data is chosen as the stopping iteration of Algorithm 1. 432 Algorithm 2 Selection of the stopping iteration for the transliteration mining algorithm 1: Create clusters of word pairs from the list of word pairs which have a common prefix of length 2 both on the source and target language side. 2: Randomly add each cluster either to the training data or to the held-out data. 3: I ←0 4: while I < 100 do 5: Build a joint sequence model on the training data using g2p and compute the length-normalized joint probability of every word pair in the training data. 6: Remove the 5% word pairs with the lowest probability from the training data. {The training data will be reduced by 5% of the rest in each iteration} 7: Build a transliteration system on the filtered training data and test it using the source side of the held-out and match the output against the target side of the held-out. 8: I ←I+1 9: end while 10: Collect statistics of the matching results and take the median from 9 consecutive iterations (median9). 11: Choose the iteration with the best median9 score for the transliteration mining process. We will now describe Algorithm 2 in detail. Algorithm 2 initially splits the word pairs into training and held-out data. This could be done randomly, but it turns out that this does not work well for some tasks. The reason is that the parallel corpus contains inflectional variants of the same word. If two variants are distributed over training and held-out data, then the one in the training data may cause the transliteration system to produce a correct translation (but not transliteration) of its variant in the heldout data. This problem is further discussed in section 4.2.2. Instead of randomly splitting the data, we first create clusters of word pairs which have a common prefix of length 2 both on the source and target language side. We randomly add each cluster either to the training data or to the held-out data. We repeat the mining process (described in Algorithm 1) to eliminate non-transliteration pairs from the training data. 
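Both Algorithm 1 and the inner loop of Algorithm 2 share the same filtering step: score every remaining word pair with the current joint model, normalize for length, and drop the least likely 5%. A minimal sketch follows, in which train_joint_model is a hypothetical wrapper around g2p training that returns a scoring function; it is not the exact interface of our implementation.

```python
import math

def mine_transliterations(word_pairs, stopping_iteration,
                          train_joint_model, drop_fraction=0.05):
    """Iterative filtering as in Algorithm 1.

    word_pairs         - list of (source_word, target_word) from 1-to-1 links
    train_joint_model  - assumed wrapper around g2p: takes the current training
                         data and returns a function joint_prob(src, tgt)
    stopping_iteration - number of iterations chosen by Algorithm 2
    """
    data = list(word_pairs)
    for _ in range(stopping_iteration):
        joint_prob = train_joint_model(data)

        def score(pair):
            src, tgt = pair
            n = (len(src) + len(tgt)) / 2.0            # average string length
            return joint_prob(src, tgt) ** (1.0 / n)   # length-normalized probability

        data.sort(key=score)
        cutoff = int(math.ceil(drop_fraction * len(data)))
        data = data[cutoff:]                            # drop the least likely 5%
    return data
```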
For each iteration of Algorithm 2, i.e., steps 4 to 9, we build a transliteration system on the filtered training data and test it on the source side of the held-out. We collect statistics on how well the output of the system matches the target side of the held-out. The matching scores on the held-out data often make large jumps from iteration to iteration. We take the median of the results from 9 consecutive iterations (the 4 iterations before, the current and the 4 iterations after the current iteration) to smooth the scores. We call this median9. We choose the iteration with the best smoothed score as the stopping point for the filtering process. In our tests, the median9 heuristic indicated an iteration close to the optimal iteration. Sometimes several nearby iterations have the same maximal smoothed score. In that case, we choose the one with the highest unsmoothed score. Section 4.2 explains the median9 heuristic in more detail and presents experimental results showing that it works well. 4 Experiments We evaluate our transliteration mining algorithm on three tasks: transliteration mining from Wikipedia InterLanguage Links, transliteration mining from parallel corpora, and word alignment using a word aligner with a transliteration component. On the WIL data sets, we compare our fully unsupervised system with the semi-supervised systems presented at the NEWS10 (Kumaran et al., 2010). In the evaluation on parallel corpora, we compare our mining results with a manually built gold standard in which each word pair is either marked as a transliteration or as a non-transliteration. In the word alignment experiment, we integrate a transliteration module which is trained on the transliterations pairs extracted by our method into a word aligner and show a significant improvement. The following sections describe the experiments in detail. 4.1 Experiments Using Parallel Phrases of Wikipedia InterLanguage Links We conduct transliteration mining experiments on the English/Arabic, English/Hindi, English/Tamil and English/Russian Wikipedia InterLanguage Links (WIL) used in the NEWS10.2 All data sets 2We do not evaluate on the English/Chinese data because the Chinese data requires word segmentation which is beyond the scope of our work. Another problem is that our extraction method was developed for alphabetic languages and probably needs to be adapted before it is applicable to logographic languages such as Chinese. 433 Our S-Best S-Worst Systems Rank EA 87.4 91.5 70.2 16 3 ET 90.1 91.4 57.5 14 3 EH 92.2 94.4 71.4 14 3 Table 1: Summary of results on NEWS10 data sets where “EA” is English/Arabic, “ET” is English/Tamil and “EH” is English/Hindi. “Our” shows the F-measure of our filtered data against the gold standard using the supplied evaluation tool, “Systems” is the total number of participants in the subtask, and “Rank” is the rank we would have obtained if our system had participated. contain training data, seed data and reference data. We make no use of the seed data since our system is fully unsupervised. We calculate the F-measure of our filtered transliteration pairs against the supplied gold standard using the supplied evaluation tool. For English/Arabic, English/Hindi and English/Tamil, our system is better than most of the semi-supervised systems presented at the NEWS 2010 shared task for transliteration mining. Table 1 summarizes the F-scores on these data sets. 
On the English/Russian data set, our system achieves 76% F-measure which is not good compared with the systems that participated in the shared task. The English/Russian corpus contains many cognates which – according to the NEWS10 definition – are not transliterations of each other. Our system learns the cognates in the training data and extracts them as transliterations (see Table 2). The two best teams on the English/Russian task presented various extraction methods (Jiampojamarn et al., 2010; Darwish, 2010). Their systems behave differently on English/Russian than on other language pairs. Their best systems for English/Russian are only trained on the seed data and the use of unlabelled data does not help the performance. Since our system is fully unsupervised, and the unlabelled data is not useful, we perform badly. 4.2 Experiments Using Parallel Corpora The Wikipedia InterLanguage Links shared task data contains a much larger proportion of transliterations than a parallel corpus. In order to examine how well our method performs on parallel corpora, we apply it to parallel corpora of English/Hindi and English/Arabic, and compare the transliteration mining results with a gold standard. Table 2: Cognates from English/Russian corpus extracted by our system as transliteration pairs. None of them are correct transliteration pairs according to the gold standard. We use the English/Hindi corpus from the shared task on word alignment, organized as part of the ACL 2005 Workshop on Building and Using Parallel Texts (WA05) (Martin et al., 2005). For English/Arabic, we use a freely available parallel corpus from the United Nations (UN) (Eisele and Chen, 2010). We randomly take 200,000 parallel sentences from the UN corpus of the year 2000. We create gold standards for both language pairs by randomly selecting a few thousand word pairs from the lists of word pairs extracted from the two corpora. We manually tag them as either transliterations or non-transliterations. The English/Hindi gold standard contains 180 transliteration pairs and 2084 non-transliteration pairs and the English/Arabic gold standard contains 288 transliteration pairs and 6639 non-transliteration pairs. We have submitted these gold standards with the paper. They are available to the research community. In the following sections, we describe the median9 heuristic and the splitting method of Algorithm 2. The splitting method is used to avoid early peaks in the held-out statistics, and the median9 heuristic smooths the held-out statistics in order to obtain a single peak.3 4.2.1 Motivation for Median9 Heuristic Algorithm 2 collects statistics from the held-out data (step 10) and selects the stopping iteration. Due to the noise in the held-out data, the transliteration accuracy on the held-out data often jumps from iteration to iteration. The dotted line in figure 1 (right) shows the held-out prediction accuracy for the En3We do not use the seed data in our system. However, to check the correctness of the stopping point, we tested the transliteration system on the seed data (available with NEWS10) for every iteration of Algorithm 2. We verified that the median9 held-out statistics and accuracy on the seed data have their peaks at the same iteration. 434 glish/Hindi parallel corpus. The curve is very noisy and has two peaks. It is difficult to see the effect of the filtering. We take the median of the results from 9 consecutive iterations to smooth the scores. 
The solid line in figure 1 (right) shows a smoothed curve built using the median9 held-out scores. A comparison with the gold standard (section 4.2.3) shows that the stopping point (peak) reached using the median9 heuristic is better than the stopping point obtained with unsmoothed scores. 4.2.2 Motivation for Splitting Method Algorithm 2 initially splits the list of word pairs into training and held-out data. A random split worked well for the WIL data, but failed on the parallel corpora. The reason is that parallel corpora contain inflectional variants of the same word. If these variants are randomly distributed over training and heldout data, then a non-transliteration word pair such as the English-Hindi pair “change – badlao” may end up in the training data and the related pair “changes – badlao” in the held-out data. The Moses system used for transliteration will learn to “transliterate” (or actually translate) “change” to “badlao”. From other examples, it will learn that a final “s” can be dropped. As a consequence, the Moses transliterator may produce the non-transliteration “badlao” for the English word “changes” in the held-out data. Such matching predictions of the transliterator which are actually translations lead to an overestimate of the transliteration accuracy and may cause Algorithm 2 to predict a stopping iteration which is too early. By splitting the list of word pairs in such a way that inflectional variants of a word are placed either in the training data, or in the held-out, but not in both, this problem can be solved.4 The left graph in Figure 1 shows that the median9 held-out statistics obtained after a random data split of a Hindi/English corpus contains two peaks which occur too early. These peaks disappear in the right graph of Figure 1 which shows the results obtained after a split with the clustering method. The overall trend of the smoothed curve in figure 1 (right) is very clear. We start by filtering out non-transliteration pairs from the data, so the results 4This solution is appropriate for all of the language pairs used in our experiments, but should be revisited if there is inflection realized as prefixes, etc. 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0 10 20 30 40 50 60 70 80 90 accuracy iterations held out median9 0 0.1 0.2 0.3 0.4 0.5 0.6 0 10 20 30 40 50 60 70 80 90 accuracy iterations held out median 9 Figure 1: Statistics of held-out prediction of English/Hindi data using modified Algorithm 2 with random division of the list of word pairs (left) and using Algorithm 2 (right). The dotted line shows unsmoothed heldout scores and solid line shows median9 held-out scores of the transliteration system go up. When no more non-transliteration pairs are left, we start filtering out transliteration pairs and the results of the system go down. We use this stopping criterion for all language pairs and achieve consistently good results. 4.2.3 Results on Parallel Corpora According to the gold standard, the English/Hindi and English/Arabic data sets contain 8% and 4% transliteration pairs respectively. We repeat the same mining procedure – run Algorithm 2 up to 100 iterations and return the stopping iteration. Then, we run Algorithm 1 up to the stopping iteration returned by Algorithm 2 and obtain the filtered data. 
TP FN TN FP EH Filtered 170 10 2039 45 EA Filtered 197 91 6580 59 Table 3: Transliteration mining results using the parallel corpus of English/Hindi (EH) and English/Arabic (EA) against the gold standard Table 3 shows the mining results on the English/Hindi and English/Arabic corpora. The gold standard is a subset of the data sets. The English/Hindi gold standard contains 180 transliteration pairs and 2084 non-transliteration pairs. The English/Arabic gold standard contains 288 transliteration pairs and 6639 non-transliteration pairs. From the English/Hindi data, the mining system has mined 170 transliteration pairs out of 180 transliteration pairs. The English/Arabic mined data contains 197 transliteration pairs out of 288 transliteration pairs. The mining system has wrongly identified a few non-transliteration pairs as transliterations (see 435 table 3, last column). Most of these word pairs are close transliterations and differ by only one or two characters from perfect transliteration pairs. The close transliteration pairs provide many valid multigrams which may be helpful for the mining system. 4.3 Integration into Word Alignment Model In the previous section, we presented a method for the extraction of transliteration pairs from a parallel corpus. In this section, we will explain how to build a transliteration module on the extracted transliteration pairs and how to integrate it into MGIZA++ (Gao and Vogel, 2008) by interpolating it with the ttable probabilities of the IBM models and the HMM model. MGIZA++ is an extension of GIZA++. It has the ability to resume training from any model rather than starting with Model1. 4.3.1 Modified EM Training of the Word Alignment Models GIZA++ applies the IBM models (Brown et al., 1993) and the HMM model (Vogel et al., 1996) in both directions, i.e., source to target and target to source. The alignments are refined using the grow-diag-final-and heuristic (Koehn et al., 2003). GIZA++ generates a list of translation pairs with alignment probabilities, which is called the t-table. In this section, we propose a method to modify the translation probabilities of the t-table by interpolating the translation counts with transliteration counts. The interpolation is done in both directions. In the following, we will only consider the e-to-f direction. The transliteration module which is used to calculate the conditional transliteration probability is described in Algorithm 3. We build a transliteration system by training Moses on the filtered transliteration corpus (using Algorithm 1) and apply it to the e side of the list of word pairs. For every source word, we generate the list of 10-best transliterations nbestTI(e). Then, we extract every f that cooccurs with e in a parallel sentence and add it to nbestTI(e) which gives us the list of candidate transliteration pairs candidateTI(e). 
We use the sum of transliteration probabilities P f′∈CandidateTI(e) pmoses(f′, e) as an approximation for the prior probability pmoses(e) = P f′ pmoses(f′, e) which is needed to convert the joint transliteration probability into a conditional Algorithm 3 Estimation of transliteration probabilities, e-to-f direction 1: unfiltered data ←list of word pairs 2: filtered data ←transliteration pairs extracted using Algorithm 1 3: Train a transliteration system on the filtered data 4: for all e do 5: nbestTI(e) ←10 best transliterations for e according to the transliteration system 6: cooc(e) ←set of all f that cooccur with e in a parallel sentence 7: candidateTI(e) ←cooc(e) ∪nbestTI(e) 8: end for 9: for all f do 10: pmoses(f, e) ←joint transliteration probability of e and f according to the transliterator 11: pti(f|e) ← pmoses(f,e) P f′∈CandidateT I(e) pmoses(f ′,e) 12: end for probability. We use the constraint decoding option of Moses to compute the joint probability of e and f. It computes the probability by dividing the translation score of the best target sentence given a source sentence by the normalization factor. We combine the transliteration probabilities with the translation probabilities of the IBM models and the HMM model. The normal translation probability pta(f|e) of the word alignment models is computed with relative frequency estimates. We smooth the alignment frequencies by adding the transliteration probabilities weighted by the factor λ and get the following modified translation probabilities ˆp(f|e) = fta(f, e) + λpti(f|e) fta(e) + λ (2) where fta(f, e) = pta(f|e)f(e). pta(f|e) is obtained from the original t-table of the alignment model. f(e) is the total corpus frequency of e. λ is the transliteration weight which is optimized for every language pair (see section 4.3.2). Apart from the definition of the weight λ, our smoothing method is equivalent to Witten-Bell smoothing. We smooth after every iteration of the IBM models and the HMM model except the last iteration of each model. Algorithm 4 shows the smoothing for IBM Model4. IBM Model1 and the HMM model are smoothed in the same way. We also apply Algorithm 3 and Algorithm 4 in the alignment direction 436 Algorithm 4 Interpolation with the IBM Model4, eto-f direction 1: {We want to run four iterations of Model4} 2: f(e) ←total frequency of e in the corpus 3: Run MGIZA++ for one iteration of Model4 4: I ←1 5: while I < 4 do 6: Look up pta(f|e) in the t-table of Model4 7: fta(f, e) ←pta(f|e)f(e) for all (f, e) 8: ˆp(f|e) ←fta(f,e)+λpti(f|e) fta(e)+λ for all (f, e) 9: Resume MGIZA++ training for 1 iteration using the modified t-table probabilities ˆp(f|e) 10: I ←I + 1 11: end while f to e. The final alignments are generated using the grow-diag-final-and heuristic (Koehn et al., 2003). 4.3.2 Evaluation The English/Hindi corpus available from WA05 consists of training, development and test data. As development and test data for English/Arabic, we use manually created gold standard word alignments for 155 sentences extracted from the Hansards corpus released by LDC. We use 50 sentences for development and 105 sentences for test. Baseline: We align the data sets using GIZA++ (Och and Ney, 2003) and refine the alignments using the grow-diag-final-and heuristic (Koehn et al., 2003). We obtain the baseline F-measure by comparing the alignments of the test corpus with the gold standard alignments. Experiments We use GIZA++ with 5 iterations of Model1, 4 iterations of HMM and 4 iterations of Model4. 
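For concreteness, the interpolation of Equation (2) can be sketched as follows. This is an illustrative Python sketch, not the MGIZA++ code: the names smooth_ttable, p_ta, p_ti and corpus_freq are assumptions, the probabilities are taken to be plain dictionaries, and only entries already present in the t-table are updated.

```python
def smooth_ttable(p_ta, p_ti, corpus_freq, lam=80.0):
    """Interpolate t-table probabilities with transliteration probabilities
    as in Equation (2):
        p_hat(f|e) = (f_ta(f,e) + lam * p_ti(f|e)) / (f_ta(e) + lam)
    where f_ta(f,e) = p_ta(f|e) * f(e) and f_ta(e) = f(e).

    p_ta, p_ti  -- dicts mapping e -> {f: probability}
    corpus_freq -- dict mapping e -> total corpus frequency f(e)
    lam         -- transliteration weight (80 in the paper's experiments)
    """
    p_hat = {}
    for e, translations in p_ta.items():
        f_e = corpus_freq[e]
        p_hat[e] = {}
        for f, prob in translations.items():
            f_ta = prob * f_e                       # smoothed alignment count
            pti = p_ti.get(e, {}).get(f, 0.0)       # transliteration prob, 0 if unseen
            p_hat[e][f] = (f_ta + lam * pti) / (f_e + lam)
    return p_hat
```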
We interpolate translation and transliteration probabilities at different iterations (and different combinations of iterations) of the three models and always observe an improvement in alignment quality. For the final experiments, we interpolate at every iteration of the IBM models and the HMM model except the last iteration of every model where we could not interpolate for technical reasons.5 Algo5We had problems in resuming MGIZA++ training when training was supposed to continue from a different model, such as if we stopped after the 5th iteration of Model1 and then tried to resume MGIZA++ from the first iteration of the HMM model. In this case, we ran the 5th iteration of Model1, then the first iteration of the HMM and only then stopped for interpolarithm 4 shows the interpolation of the transliteration probabilities with IBM Model4. We used the same procedure with IBM Model1 and the HMM model. The parameter λ is optimized on development data for every language pair. The word alignment system is not very sensitive to λ. Any λ in the range between 50 and 100 works fine for all language pairs. The optimization helps to maximize the improvement in word alignment quality. For our experiments, we use λ = 80. On test data, we achieve an improvement of approximately 10% and 13.5% in precision and 3.5% and 13.5% in recall on English/Hindi and English/Arabic word alignment, respectively. Table 4 shows the scores of the baseline and our word alignment model. Lang Pb Rb Fb Pti Rti Fti EH 49.1 48.5 51.2 59.1 52.1 55.4 EA 50.8 49.9 50.4 64.4 63.6 64 Table 4: Word alignment results on the test data of English/Hindi (EH) and English/Arabic (EA) where Pb is the precision of baseline GIZA++ and Pti is the precision of our word alignment system We compared our word alignment results with the systems presented at WA05. Three systems, one limited and two un-limited, participated in the English/Hindi task. We outperform the limited system and one un-limited system. 5 Previous Research Previous work on transliteration mining uses a manually labelled set of training data to extract transliteration pairs from a parallel corpus or comparable corpora. The training data may contain a few hundred randomly selected transliteration pairs from a transliteration dictionary (Yoon et al., 2007; Sproat et al., 2006; Lee and Chang, 2003) or just a few carefully selected transliteration pairs (Sherif and Kondrak, 2007; Klementiev and Roth, 2006). Our work is more challenging as we extract transliteration pairs without using transliteration dictionaries or gold standard transliteration pairs. Klementiev and Roth (2006) initialize their transliteration model with a list of 20 transliteration tion; so we did not interpolate in just those iterations of training where we were transitioning from one model to the next. 437 pairs. Their model makes use of temporal scoring to rank the candidate transliterations. A lot of work has been done on discovering and learning transliterations from comparable corpora by using temporal and phonetic information (Tao et al., 2006; Klementiev and Roth, 2006; Sproat et al., 2006). We do not have access to this information. Sherif and Kondrak (2007) train a probabilistic transducer on 14 manually constructed transliteration pairs of English/Arabic. They iteratively extract transliteration pairs from the test data and add them to the training data. 
Our method is different from the method of Sherif and Kondrak (2007) as our method is fully unsupervised, and because in each iteration, they add the most probable transliteration pairs to the training data, while we filter out the least probable transliteration pairs from the training data. The transliteration mining systems of the four NEWS10 participants are either based on discriminative or on generative methods. All systems use manually labelled (seed) data for the initial training. The system based on the edit distance method submitted by Jiampojamarn et al. (2010) performs best for the English/Russian task. Jiampojamarn et al. (2010) submitted another system based on a standard n-gram kernel which ranked first for the English/Hindi and English/Tamil tasks.6 For the English/Arabic task, the transliteration mining system of Noeman and Madkour (2010) was best. They normalize the English and Arabic characters in the training data which increases the recall.7 Our transliteration extraction method differs in that we extract transliteration pairs from a parallel corpus without supervision. The results of the NEWS10 experiments (Kumaran et al., 2010) show that no single system performs well on all language pairs. Our unsupervised method seems robust as its performance is similar to or better than many of the semi-supervised systems on three language pairs. We are only aware of one previous work which uses transliteration information for word alignment. 6They use the seed data as positive examples. In order to obtain also negative examples, they generate all possible word pairs from the source and target words in the seed data and extract the ones which are not transliterations but have a common substring of some minimal length. 7They use the phrase table of Moses to build a mapping table between source and target characters. The mapping table is then used to construct a finite state transducer. Hermjakob (2009) proposed a linguistically focused word alignment system which uses many features including hand-crafted transliteration rules for Arabic/English alignment. His evaluation did not explicitly examine the effect of transliteration (alone) on word alignment. We show that the integration of a transliteration system based on unsupervised transliteration mining increases the word alignment quality for the two language pairs we tested. 6 Conclusion We proposed a method to automatically extract transliteration pairs from parallel corpora without supervision or linguistic knowledge. We evaluated it against the semi-supervised systems of NEWS10 and achieved high F-measure and performed better than most of the semi-supervised systems. We also evaluated our method on parallel corpora and achieved high F-measure. We integrated the transliteration extraction module into the GIZA++ word aligner and showed gains in alignment quality. We will release our transliteration mining system and word alignment system in the near future. Acknowledgments The authors wish to thank the anonymous reviewers for their comments. We would like to thank Christina Lioma for her valuable feedback on an earlier draft of this paper. Hassan Sajjad was funded by the Higher Education Commission (HEC) of Pakistan. Alexander Fraser was funded by Deutsche Forschungsgemeinschaft grant Models of Morphosyntax for Statistical Machine Translation. Helmut Schmid was supported by Deutsche Forschungsgemeinschaft grant SFB 732. References Maximilian Bisani and Hermann Ney. 2008. Jointsequence models for grapheme-to-phoneme conversion. 
Speech Communication, 50(5). Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and R. L. Mercer. 1993. The mathematics of statistical machine translation: parameter estimation. Computational Linguistics, 19(2):263–311. Kareem Darwish. 2010. Transliteration mining with phonetic conflation and iterative training. In Proceedings of the 2010 Named Entities Workshop, Uppsala, Sweden. Association for Computational Linguistics. 438 Andreas Eisele and Yu Chen. 2010. MultiUN: A multilingual corpus from United Nation documents. In Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC’10), Valletta, Malta. Qin Gao and Stephan Vogel. 2008. Parallel implementations of word alignment tool. In Software Engineering, Testing, and Quality Assurance for Natural Language Processing, Columbus, Ohio, June. Association for Computational Linguistics. Ulf Hermjakob. 2009. Improved word alignment with statistics and linguistic heuristics. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1 - Volume 1, EMNLP ’09, Morristown, NJ, USA. Association for Computational Linguistics. Sittichai Jiampojamarn, Kenneth Dwyer, Shane Bergsma, Aditya Bhargava, Qing Dou, Mi-Young Kim, and Grzegorz Kondrak. 2010. Transliteration generation and mining with limited training resources. In Proceedings of the 2010 Named Entities Workshop, Uppsala, Sweden. Association for Computational Linguistics. Alexandre Klementiev and Dan Roth. 2006. Weakly supervised named entity transliteration and discovery from multilingual comparable corpora. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the ACL, Morristown, NJ, USA. Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the Human Language Technology and North American Association for Computational Linguistics Conference, pages 127–133, Edmonton, Canada. A Kumaran, Mitesh M. Khapra, and Haizhou Li. 2010. Whitepaper of news 2010 shared task on transliteration mining. In Proceedings of the 2010 Named Entities Workshop the 48th Annual Meeting of the ACL, Uppsala, Sweden. Chun-Jen Lee and Jason S. Chang. 2003. Acquisition of English-Chinese transliterated word pairs from parallel-aligned texts using a statistical machine transliteration model. In Proceedings of the HLTNAACL 2003 Workshop on Building and using parallel texts, Morristown, NJ, USA. ACL. Joel Martin, Rada Mihalcea, and Ted Pedersen. 2005. Word alignment for languages with scarce resources. In ParaText ’05: Proceedings of the ACL Workshop on Building and Using Parallel Texts, Morristown, NJ, USA. Association for Computational Linguistics. Sara Noeman and Amgad Madkour. 2010. Language independent transliteration mining system using finite state automata framework. In Proceedings of the 2010 Named Entities Workshop, Uppsala, Sweden. Association for Computational Linguistics. Franz J. Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. Tarek Sherif and Grzegorz Kondrak. 2007. Bootstrapping a stochastic transducer for Arabic-English transliteration extraction. In ACL, Prague, Czech Republic. Richard Sproat, Tao Tao, and ChengXiang Zhai. 2006. Named entity transliteration with comparable corpora. In ACL. Andreas Stolcke. 2002. SRILM - an extensible language modeling toolkit. In Intl. Conf. Spoken Language Processing, Denver, Colorado. 
Tao Tao, Su-Youn Yoon, Andrew Fister, Richard Sproat, and ChengXiang Zhai. 2006. Unsupervised named entity transliteration using temporal and phonetic correlation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Sydney. Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1996. HMM-based word alignment in statistical translation. In 16th International Conference on Computational Linguistics, pages 836–841, Copenhagen, Denmark. Su-Youn Yoon, Kyoung-Young Kim, and Richard Sproat. 2007. Multilingual transliteration using feature based phonetic method. In Proceedings of the 45th Annual Meeting of the ACL, Prague, Czech Republic.
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 440–449, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Beam-Width Prediction for Efficient Context-Free Parsing Nathan Bodenstab† Aaron Dunlop† Keith Hall‡ and Brian Roark† † Center for Spoken Language Understanding, Oregon Health & Science University, Portland, OR ‡Google, Inc., Zurich, Switzerland {bodensta,dunlopa,roark}@cslu.ogi.edu [email protected] Abstract Efficient decoding for syntactic parsing has become a necessary research area as statistical grammars grow in accuracy and size and as more NLP applications leverage syntactic analyses. We review prior methods for pruning and then present a new framework that unifies their strengths into a single approach. Using a log linear model, we learn the optimal beam-search pruning parameters for each CYK chart cell, effectively predicting the most promising areas of the model space to explore. We demonstrate that our method is faster than coarse-to-fine pruning, exemplified in both the Charniak and Berkeley parsers, by empirically comparing our parser to the Berkeley parser using the same grammar and under identical operating conditions. 1 Introduction Statistical constituent parsers have gradually increased in accuracy over the past ten years. This accuracy increase has opened the door to automatically derived syntactic information within a number of NLP tasks. Prior work incorporating parse structure into machine translation (Chiang, 2010) and Semantic Role Labeling (Tsai et al., 2005; Punyakanok et al., 2008) indicate that such hierarchical structure can have great benefit over shallow labeling techniques like chunking and part-of-speech tagging. Although syntax is becoming increasingly important for large-scale NLP applications, constituent parsing is slow — too slow to scale to the size of many potential consumer applications. The exhaustive CYK algorithm has computational complexity O(n3|G|) where n is the length of the sentence and |G| is the number of grammar productions, a nonnegligible constant. Increases in accuracy have primarily been accomplished through an increase in the size of the grammar, allowing individual grammar rules to be more sensitive to their surrounding context, at a considerable cost in efficiency. Grammar transformation techniques such as linguistically inspired non-terminal annotations (Johnson, 1998; Klein and Manning, 2003b) and latent variable grammars (Matsuzaki et al., 2005; Petrov et al., 2006) have increased the grammar size |G| from a few thousand rules to several million in an explicitly enumerable grammar, or even more in an implicit grammar. Exhaustive search for the maximum likelihood parse tree with a state-of-the-art grammar can require over a minute of processing for a single sentence of 25 words, an unacceptable amount of time for real-time applications or when processing millions of sentences. Deterministic algorithms for dependency parsing exist that can extract syntactic dependency structure very quickly (Nivre, 2008), but this approach is often undesirable as constituent parsers are more accurate and more adaptable to new domains (Petrov et al., 2010). The most accurate constituent parsers, e.g., Charniak (2000), Petrov and Klein (2007a), make use of approximate inference, limiting their search to a fraction of the total search space and achieving speeds of between one and four newspaper sentences per second. 
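To make the complexity argument concrete, the following Python sketch shows the loop structure of exhaustive Viterbi CYK for a binarized PCFG: the three span loops contribute the n^3 factor and the inner loop over productions the |G| factor. The data structures are deliberately naive and are not those of any particular parser.

```python
from collections import defaultdict

def cyk(words, lexical, binary):
    """Exhaustive Viterbi CYK for a binarized PCFG.

    lexical -- dict word -> list of (parent, log_prob)
    binary  -- list of (parent, left_child, right_child, log_prob)
    """
    n = len(words)
    chart = defaultdict(dict)            # chart[(i, j)][nonterminal] = best log prob
    for i, w in enumerate(words):        # span-1 cells from lexical productions
        for parent, lp in lexical.get(w, []):
            if lp > chart[(i, i + 1)].get(parent, float("-inf")):
                chart[(i, i + 1)][parent] = lp
    for span in range(2, n + 1):                 # O(n)
        for i in range(0, n - span + 1):         # O(n)
            j = i + span
            for mid in range(i + 1, j):          # O(n)
                for parent, l, r, lp in binary:  # O(|G|)
                    if l in chart[(i, mid)] and r in chart[(mid, j)]:
                        score = lp + chart[(i, mid)][l] + chart[(mid, j)][r]
                        if score > chart[(i, j)].get(parent, float("-inf")):
                            chart[(i, j)][parent] = score
    return chart

# Toy usage with a hypothetical two-word grammar.
lexical = {"fish": [("N", -1.0), ("V", -2.0)], "people": [("N", -1.0)]}
binary = [("S", "N", "V", -0.5), ("NP", "N", "N", -1.5)]
print(cyk(["people", "fish"], lexical, binary)[(0, 2)])  # {'S': -3.5, 'NP': -3.5}
```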
The paradigm for building stateof-the-art parsing models is to first design a model structure that can achieve high accuracy and then, after the model has been built, design effective approximate inference methods around that particular model; e.g., coarse-to-fine non-terminal hierarchies for a given model, or agenda-based methods 440 that are empirically tuned to achieve acceptable efficiency/accuracy operating points. While both of the above mentioned papers use the CYK dynamic programming algorithm to search through possible solutions, their particular methods of approximate inference are quite distinct. In this paper, we examine a general approach to approximate inference in constituent parsing that learns cell-specific thresholds for arbitrary grammars. For each cell in the CYK chart, we sort all potential constituents in a local agenda, ordered by an estimate of their posterior probability. Given features extracted from the chart cell context – e.g., span width; POS-tags and words surrounding the boundary of the cell – we train a log linear model to predict how many constituents should be popped from the local agenda and added to the chart. As a special case of this approach, we simply predict whether the number to add should be zero or greater than zero, in which case the method can be seen as a cell-by-cell generalization of Roark and Hollingshead’s (2008; 2009) tagger-derived Chart Constraints. More generally, instead of a binary classification decision, we can also use this method to predict the desired cell population directly and get cell closure for free when the classifier predicts a beam-width of zero. In addition, we use a nonsymmetric loss function during optimization to account for the imbalance between over-predicting or under-predicting the beam-width. A key feature of our approach is that it does not rely upon reference syntactic annotations when learning to search. Rather, the beam-width prediction model is trained to learn the rank of constituents in the maximum likelihood trees.1 We will illustrate this by presenting results using a latent-variable grammar, for which there is no “true” reference latent variable parse. We simply parse sections 2-21 of the WSJ treebank and train our search models from the output of these trees, with no prior knowledge of the non-terminal set or other grammar characteristics to guide the process. Hence, this ap1Note that we do not call this method “unsupervised” because all grammars used in this paper are induced from supervised data, although our framework can also accommodate unsupervised grammars. We emphasize that we are learning to search using only maximum likelihood trees, not that we are doing unsupervised parsing. Figure 1: Inside (grey) and outside (white) representations of an example chart edge Ni,j. proach is broadly applicable to a wide range of scenarios, including tuning the search to new domains where domain mismatch may yield very different efficiency/accuracy operating points. In the next section, we present prior work on approximate inference in parsing, and discuss how our method to learn optimal beam-search parameters unite many of their strengths into a single framework. We then explore using our approach to open or close cells in the chart as an alternative to Roark and Hollingshead (2008; 2009). Finally, we present results which combine cell closure and adaptive beam-width prediction to achieve the most efficient parser. 2 Background 2.1 Preliminaries and notation Let S = w1 . . . 
w|S| represent an input string of |S| words. Let wi,j denote the substring from word wi+1 to wj; i.e., S = w0,|S|. We use the term chart edge to refer to a non-terminal spanning a specific substring of the input sentence. Let Ni,j denote the edge labeled with non-terminal N spanning wi,j, for example NP3,7. We define an edge’s figure-of-merit (FOM) as an estimate of the product of its inside (β) and outside (α) scores, conceptually the relative merit the edge has to participate in the final parse tree (see Figure 1). More formally: α(Ni,j) = P(w0,i, Ni,j, wj,n) β(Ni,j) = P(wi,j|N) FOM(Ni,j) = ˆα(Ni,j)ˆβ(Ni,j) 441 With bottom-up parsing, the true inside probability is accumulated and β(Ni,j) does not need to be estimated, improving the FOMs ability to represent the true inside/outside distribution. In this paper, we use a modified version of the Caraballo and Charniak Boundary FOM (1998) for local edge comparison, which computes ˆα(Ni,j) using POS forward-backward scores and POS-tononterminal constituent boundary transition probabilities. Details can be found in (?). We also note that in this paper we only use the FOM scoring function to rank constituents in a local agenda. Alternative approaches to ranking competitors are also possible, such as Learning as Search Optimization (Daum´e and Marcu, 2005). The method we present in this paper to learn the optimal beam-search parameters is applicable to any ranking function, and we demonstrate this by computing results with both the Boundary FOM and only the inside probability in Section 6. 2.2 Agenda-based parsing Agenda-based parsers maintain a global agenda of edges, ranked by FOM score. At each iteration, the highest-scoring edge is popped off of the agenda, added to the chart, and combined with other edges already in the chart. The agenda-based approach includes best-first parsing (Bobrow, 1990) and A* parsing (Klein and Manning, 2003a), which differ in whether an admissible FOM estimate ˆα(Ni,j) is required. A* uses an admissible FOM, and thus guarantees finding the maximum likelihood parse, whereas an inadmissible heuristic (best-first) may require less exploration of the search space. Much work has been pursued in both admissible and inadmissible heuristics for agenda parsing (Caraballo and Charniak, 1998; Klein and Manning, 2003a; Pauls et al., 2010). In this paper, we also make use of agendas, but at a local rather than a global level. We maintain an agenda for each cell, which has two significant benefits: 1) Competing edges can be compared directly, avoiding the difficulty inherent in agenda-based approaches of comparing edges of radically different span lengths and characteristics; and 2) Since the agendas are very small, the overhead of agenda maintenance — a large component of agenda-based parse time — is minimal. 2.3 Beam-search parsing CYK parsing with a beam-search is a local pruning strategy, comparing edges within the same chart cell. The beam-width can be defined in terms of a threshold in the number of edges allowed, or in terms of a threshold on the difference in probability relative to the highest scoring edge (Collins, 1999; Zhang et al., 2010). For the current paper, we use both kinds of thresholds, avoiding pathological cases that each individual criteria is prone to encounter. Further, unlike most beam-search approaches we will make use of a FOM estimate of the posterior probability of an edge, defined above, as our ranking function. 
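A minimal sketch of such a per-cell agenda is given below, assuming FOM scores are supplied by the caller. The class name and the two thresholds (an edge count and a relative score margin) are illustrative, not the parser's actual implementation.

```python
import heapq
from itertools import count

class CellAgenda:
    """Local agenda for one chart cell: keeps at most `beam_width` edges
    ranked by figure-of-merit (FOM), and drops edges whose FOM falls more
    than `delta` below the best edge seen so far."""

    def __init__(self, beam_width, delta=float("inf")):
        self.beam_width = beam_width
        self.delta = delta
        self._tie = count()       # tie-breaker so edges are never compared directly
        self.heap = []            # min-heap of (fom, tie, edge); worst edge on top
        self.best = float("-inf")

    def push(self, edge, fom):
        if self.beam_width == 0:
            return                                 # cell is closed
        self.best = max(self.best, fom)
        if fom < self.best - self.delta:
            return                                 # relative-score threshold
        item = (fom, next(self._tie), edge)
        if len(self.heap) < self.beam_width:
            heapq.heappush(self.heap, item)
        elif fom > self.heap[0][0]:
            heapq.heapreplace(self.heap, item)     # evict the worst surviving edge

    def edges(self):
        """Surviving edges, best first."""
        return [e for _, _, e in sorted(self.heap, key=lambda x: -x[0])]

# Toy usage with hypothetical edges and FOM scores.
agenda = CellAgenda(beam_width=3)
for edge, fom in [("NP", -2.1), ("VP", -7.8), ("S", -3.0), ("PP", -9.5)]:
    agenda.push(edge, fom)
print(agenda.edges())   # ['NP', 'S', 'VP']
```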
Finally, we will learn log linear models to assign cellspecific thresholds, rather than relying on a single search parameter. 2.4 Coarse-to-Fine Parsing Coarse-to-fine parsing, also known as multiple pass parsing (Goodman, 1997; Charniak, 2000; Charniak and Johnson, 2005), first parses the input sentence with a simplified (coarse) version of the target (fine) grammar in which multiple non-terminals are merged into a single state. Since the coarse grammar is quite small, parsing is much faster than with the fine grammar, and can quickly yield an estimate of the outside probability α(·) for use in subsequent agenda or beam-search parsing with the fine grammar. This approach can also be used iteratively with grammars of increasing complexity (Petrov and Klein, 2007a). Building a coarse grammar from a fine grammar is a non-trivial problem, and most often approached with detailed knowledge of the fine grammar being used. For example, Goodman (1997) suggests using a coarse grammar consisting of regular non-terminals, such as NP and VP, and then non-terminals augmented with head-word information for the more accurate second-pass grammar. Such an approach is followed by Charniak (2000) as well. Petrov and Klein (2007a) derive coarse grammars in a more statistically principled way, although the technique is closely tied to their latent variable grammar representation. To the extent that our cell-specific threshold classifier predicts that a chart cell should contain zero edges or more than zero edges, it is making coarse 442 predictions about the unlabeled constituent structure of the target parse tree. This aspect of our work is can be viewed as a coarse-to-fine process, though without considering specific grammatical categories or rule productions. 2.5 Chart Constraints Roark and Hollingshead (2008; 2009) introduced a pruning technique that ignores entire chart cells based on lexical and POS features of the input sentence. They train two finite-state binary taggers: one that allows multi-word constituents to start at a word, and one that allows constituents to end at a word. Given these tags, it is straightforward to completely skip many chart cells during processing. In this paper, instead of tagging word positions to infer valid constituent spans, we classify chart cells directly. We further generalize this cell classification to predict the beam-width of the chart cell, where a beam-width of zero indicates that the cell is completely closed. We discuss this in detail in the next section. 3 Open/Closed Cell Classification 3.1 Constituent Closure We first look at the binary classification of chart cells as either open or closed to full constituents, and predict this value from the input sentence alone. This is the same problem that Roark and Hollingshead (2008; 2009) solve with Chart Constraints; however, where they classify lexical items as either beginning or ending a constituent, we classify individual chart cells as open or closed, an approach we call Constituent Closure. Although the number of classifications scales quadratically with our approach, the total parse time is still dominated by the O(n3|G|) parsing complexity and we find that the added level of specificity reduces the search space significantly. To learn to classify a chart cell spanning words wi+1 . . . 
wj of a sentence S as open or closed to full constituents, we first map cells in the training corpus to tuples: Φ(S, i, j) = (x, y) (1) where x is a feature-vector representation of the chart cell and y is the target class 1 if the cell contains an edge from the maximum likelihood parse tree, 0 otherwise. The feature vector x is encoded with the chart cell’s absolute and relative span width, as well as unigram and bigram lexical and part-ofspeech tag items from wi−1 . . . wj+2. Given feature/target tuples (x, y) for every chart cell in every sentence of a training corpus τ, we train a weight vector θ using the averaged perceptron algorithm (Collins, 2002) to learn an open/closed binary decision boundary: ˆθ = argmin θ X (x,y)∈Φ(τ) Lλ(H(θ · x), y) (2) where H(·) is the unit step function: 1 if the inner product θ ·x > 0, and 0 otherwise; and Lλ(·, ·) is an asymmetric loss function, defined below. When predicting cell closure, all misclassifications are not equal. If we leave open a cell which contains no edges in the maximum likelihood (ML) parse, we incur the cost of additional processing, but are still able to recover the ML tree. However, if we close a chart cell which contains an ML edge, search errors occur. To deal with this imbalance, we introduce an asymmetric loss function Lλ(·, ·) to penalize false-negatives more severely during training. Lλ(h, y) = 0 if h = y 1 if h > y λ if h < y (3) We found the value λ = 102 to give the best performance on our development set, and we use this value in all of our experiments. Figures 2a and 2b compare the pruned charts of Chart Constraints and Constituent Closure for a single sentence in the development set. Note that both of these methods are predicting where a complete constituent may be located in the chart, not partial constituents headed by factored nonterminals within a binarized grammar. Depending on the grammar factorization (right or left) we can infer chart cells that are restricted to only edges with a factored lefthand-side non-terminal. In Figure 2 these chart cells are colored gray. Note that Constituent Closure reduces the number of completely open cells considerably vs. Chart Constraints, and the number of cells open to factored categories somewhat. 443 3.2 Complete Closure Alternatively, we can predict whether a chart cell contains any edge, either a partial or a full constituent, an approach we call Complete Closure. This is a more difficult classification problem as partial constituents occur in a variety of contexts. Nevertheless, learning this directly allows us to remove a large number of internal chart cells from consideration, since no additional cells need to be left open to partial constituents. The learning algorithm is identical to Equation 2, but training examples are now assigned a positive label if the chart cell contains any edge from the binarized maximum likelihood tree. Figure 2c gives a visual representation of Complete Closure for the same sentence; the number of completely open cells increases somewhat, but the total number of open cells (including those open to factored categories) is greatly reduced. We compare the effectiveness of Constituent Closure, Complete Closure, and Chart Constraints, by decreasing the percentage of chart cells closed until accuracy over all sentences in our development set start to decline. 
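The asymmetric loss of Equation (3), and one way to use it in training, are sketched below in Python. The loss itself follows the equation directly; scaling each perceptron update by the incurred loss is one plausible realization, since the paper does not spell out the exact update, so that choice is an assumption.

```python
import numpy as np

def asymmetric_loss(h, y, lam=100.0):
    """L_lambda from Equation (3): 0 if correct, 1 for a false positive
    (h > y), lam for a false negative (h < y); lam = 10^2 in the paper."""
    if h == y:
        return 0.0
    return 1.0 if h > y else lam

def train_cell_classifier(examples, dim, lam=100.0, epochs=5):
    """Averaged perceptron for open/closed cell classification.

    examples -- list of (x, y) with x a numpy feature vector and y in {0, 1}.
    False negatives (closing a cell that holds an ML edge) move the weights
    lam times further than false positives, reflecting the asymmetric loss.
    """
    theta = np.zeros(dim)
    total = np.zeros(dim)
    for _ in range(epochs):
        for x, y in examples:
            h = 1 if theta.dot(x) > 0 else 0
            loss = asymmetric_loss(h, y, lam)
            if loss > 0:
                theta += loss * (y - h) * x   # (y - h) is +1 for FN, -1 for FP
            total += theta
    return total / (epochs * len(examples))  # averaged weight vector
```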
For Constituent and Complete Closure, we also vary the loss function, adjusting the relative penalty between a false-negative (closing off a chart cell that contains a maximum likelihood edge) and a false-positive. Results show that using Chart Constrains as a baseline, we prune (skip) 33% of the total chart cells. Constituent Closure improves on this baseline only slightly (36%), but we see our biggest gains with Complete Closure, which prunes 56% of all chart cells in the development set. All of these open/closed cell classification methods can improve the efficiency of the exhaustive CYK algorithm, or any of the approximate inference methods mentioned in Section 2. We empirically evaluate them when applied to CYK parsing and beam-search parsing in Section 6. 4 Beam-Width Prediction The cell-closing approaches discussed in Section 3 make binary decisions to either allow or completely block all edges in each cell. This all-on/all-off tactic ignores the characteristics of the local cell population, which, given a large statistical grammar, may contain hundred of edges, even if very improbable. Retaining all of these partial derivations forces the (a) Chart Constraints (Roark and Hollingshead, 2009) (b) Constituent Closure (this paper) (c) Complete Closure (this paper) Figure 2: Comparison of Chart Constraints (Roark and Hollingshead, 2009) to Constituent and Complete Closure for a single example sentence. Black cells are open to all edges while grey cells only allow factored edges (incomplete constituents). search in larger spans to continue down improbable paths, adversely affecting efficiency. We can further improve parsing speed in these open cells by leveraging local pruning methods, such as beam-search. When parsing with a beam-search, finding the optimal beam-width threshold(s) to balance speed and accuracy is a necessary step. As mentioned in Sec444 tion 2.3, two variations of the beam-width are often considered: a fixed number of allowed edges, or a relative probability difference from the highest scoring local edge. For the remainder of this paper we fix the relative probability threshold for all experiments and focus on adapting the number of allowed edges per cell. We will refer to this numberof-allowed-edges value as the beam-width, notated by b, and leave adaptation of the relative probability difference to future work. The standard way to tune the beam-width is a simple sweep over possible values until accuracy on a heldout data set starts to decline. The optimal point will necessarily be very conservative, allowing outliers (sentences or sub-phrases with above average ambiguity) to stay within the beam and produce valid parse trees. The majority of chart cells will require much fewer than b entries to find the maximum likelihood (ML) edge, yet, constrained by a constant beam-width, the cell will continue to be filled with unfruitful edges, exponentially increasing downstream computation. For example, when parsing with the Berkeley latent-variable grammar and Boundary FOM, we find we can reduce the global beam-width b to 15 edges in each cell before accuracy starts to decline. However we find that 73% of the ML edges are ranked first in their cell and 96% are ranked in the top three. Thus, in 24 of every 25 cells, 80% of the edges are unnecessary (12 of the top 15). Clearly, it would be advantageous to adapt the beam-width such that it is restrictive when we are confident in the FOM ranking and more forgiving in ambiguous contexts. 
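Statistics such as "73% of ML edges are ranked first in their cell" can be tabulated from the FOM ranks of maximum-likelihood edges on a parsed development set. The sketch below is hypothetical: ml_edge_ranks is assumed to hold the per-cell ranks R_{i,j}, with 0 marking cells whose ML edge is absent.

```python
from collections import Counter

def rank_coverage(ml_edge_ranks, cutoffs=(1, 2, 3, 4, 15)):
    """Fraction of maximum-likelihood edges whose FOM rank within their
    chart cell is <= k, for each cutoff k.  Cells with no ML edge
    (rank 0 in the paper's notation) are excluded."""
    ranks = [r for r in ml_edge_ranks if r > 0]
    counts = Counter(ranks)
    coverage = {}
    running = 0
    for k in range(1, max(cutoffs) + 1):
        running += counts.get(k, 0)
        if k in cutoffs:
            coverage[k] = running / len(ranks)
    return coverage

# e.g. rank_coverage(ranks_from_dev_set) would show most ML edges sitting at
# rank 1, motivating per-cell beam-widths far below the global value b.
```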
To address this problem, we learn the optimal beam-width for each chart cell directly. We define Ri,j as the rank of the ML edge in the chart cell spanning wi+1 . . . wj. If no ML edge exists in the cell, then Ri,j = 0. Given a global maximum beamwidth b, we train b different binary classifiers, each using separate mapping functions Φk, where the target value y produced by Φk is 1 if Ri,j > k and 0 otherwise. The same asymmetry noted in Section 3 applies in this task as well. When in doubt, we prefer to over-predict the beam-width and risk an increase in processing time opposed to under-predicting at the expense of accuracy. Thus we use the same loss function Lλ, this time training several classifiers: ˆθk = argmin θ X (x,y)∈Φk(τ) Lλ(H(θ · x), y) (4) Note that in Equation 4 when k = 0, we recover the open/closed cell classification of Equation 2, since a beam width of 0 indicates that the chart cell is completely closed. During decoding, we assign the beam-width for chart cell spanning wi+1 . . . wj given models θ0, θ1, ...θb−1 by finding the lowest value k such that the binary classifier θk classifies Ri,j ≤k. If no such k exists, ˆRi,j is set to the maximum beam-width value b: ˆRi,j = argmin k θk · xi ≤0 (5) In Equation 5 we assume there are b unique classifiers, one for each possible beam-width value between 0 and b −1, but this level of granularity is not required. Choosing the number of classification bins to minimize total parsing time is dependent on the FOM function and how it ranks ML edges. With the Boundary FOM we use in this paper, 97.8% of ML edges have a local rank less than five and we find that the added cost of computing b decision boundaries for each cell is not worth the added specificity. We searched over possible classification bins and found that training four classifiers with beam-width decision boundaries at 0, 1, 2, and 4 is faster than 15 individual classifiers and more memory efficient, since each model θk has over 800,000 parameters. All beam-width prediction results reported in this paper use these settings. Figure 3 is a visual representation of beam-width prediction on a single sentence of the development set using the Berkeley latent-variable grammar and Boundary FOM. In this figure, the gray scale represents the relative size of the beam-width, black being the maximum beam-width value, b, and the lightest gray being a beam-width of size one. We can see from this figure that very few chart cells are classified as needing the full 15 edges, apart from span-1 cells which we do not classify. 445 Figure 3: Visualization of Beam-Width Prediction for a single example sentence. The grey scale represents the size of the predicted beam-width: white is 0 (cell is skipped) and black is the maximum value b (b=15 in this example). 5 Experimental Setup We run all experiments on the WSJ treebank (Marcus et al., 1999) using the standard splits: section 2-21 for training, section 22 for development, and section 23 for testing. We preprocess the treebank by removing empty nodes, temporal labels, and spurious unary productions (X→X), as is standard in published works on syntactic parsing. The pruning methods we present in this paper can be used to parse with any grammar. To achieve stateof-the-art accuracy levels, we parse with the Berkeley SM6 latent-variable grammar (Petrov and Klein, 2007b) where the original treebank non-terminals are automatically split into subclasses to optimize parsing accuracy. 
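The decoding rule of Equation (5) can be transcribed directly, assuming each classifier is a weight vector over the same chart-cell features; the function and argument names below are illustrative.

```python
import numpy as np

def predict_beam_width(x, classifiers, max_beam=15):
    """Equation (5): return the smallest k whose classifier theta_k scores
    the cell features x at or below zero (i.e. predicts R_{i,j} <= k);
    otherwise fall back to the global maximum beam-width.

    classifiers -- list of (k, theta_k) pairs sorted by increasing k,
                   e.g. with k in (0, 1, 2, 4) as used in the paper.
    """
    for k, theta in classifiers:
        if float(np.dot(theta, x)) <= 0:
            return k          # k == 0 means the cell is closed entirely
    return max_beam
```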
This is an explicit grammar consisting of 4.3 million productions, 2.4 million of which are lexical productions. Exhaustive CYK parsing with the grammar takes more than a minute per sentence. Accuracy is computed from the 1-best Viterbi (max) tree extracted from the chart. Alternative decoding methods, such as marginalizing over the latent variables in the grammar or MaxRule decoding (Petrov and Klein, 2007a) are certainly possible in our framework, but it is unknown how effective these methods will be given the heavily pruned nature of the chart. We leave investigation of this to future work. We compute the precision and recall of constituents from the 1-best Viterbi trees using the standard EVALB script (?), which ignores punctuation and the root symbol. Accuracy results are reported as F-measure (F1), the harmonic mean between precision and recall. We ran all timing tests on an Intel 3.00GHz processor with 6MB of cache and 16GB of memory. Our parser is written in Java and publicly available at http://nlp.csee.ogi.edu. 6 Results We empirically demonstrate the advantages of our pruning methods by comparing the total parse time of each system, including FOM initialization, chart cell classification, and beam-width prediction. The parse times reported for Chart Constraints do not include tagging times as we were provided with this pre-tagged data, but tagging all of Section 22 takes less than three seconds and we choose to ignore this contribution for simplicity. Figure 4 contains a timing comparison of the three components of our final parser: Boundary FOM initialization (which includes the forward-backward algorithm over ambiguous part-of-speech tags), beam446 Figure 4: Timing breakdown by sentence length for major components of our parser. width prediction, and the final beam-search, including 1-best extraction. We bin these relative times with respect to sentence length to see how each component scales with the number of input words. As expected, the O(n3|G|) beam-search begins to dominate as the sentence length grows, but Boundary FOM initialization is not cheap, and absorbs, on average, 20% of the total parse time. Beam-width prediction, on the other hand, is almost negligible in terms of processing time even though it scales quadratically with the length of the sentence. We compare the accuracy degradation of beamwidth prediction and Chart Constraints in Figure 5 as we incrementally tighten their respective pruning parameters. We also include the baseline beamsearch parser with Boundary FOM in this figure to demonstrate the accuracy/speed trade-off of adjusting a global beam-width alone. In this figure we see that the knee of the beam-width prediction curve (Beam-Predict) extends substantially further to the left before accuracy declines, indicating that our pruning method is intelligently removing a significant portion of the search space that remains unpruned with Chart Constraints. In Table 1 we present the accuracy and parse time for three baseline parsers on the development set: exhaustive CYK parsing, beam-search parsing using only the inside score β(·), and beam-search parsing using the Boundary FOM. We then apply our two cell-closing methods, Constituent Closure and Complete Closure, to all three baselines. As expected, the relative speedup of these methods across the various baselines is similar since the open/closed cell classification does not change across parsers. We Figure 5: Time vs. accuracy curves comparing beam-width prediction (Beam-Predict) and Chart Constraints. 
also see that Complete Closure is between 22% and 31% faster than Constituent Closure, indicating that the greater number of cells closed translates directly into a reduction in parse time. We can further apply beam-width prediction to the two beam-search baseline parsers in Table 1. Dynamically adjusting the beam-width for the remaining open cells decreases parse time by an additional 25% when using the Inside FOM, and 28% with the boundary FOM. We apply our best model to the test set and report results in Table 2. Beam-width prediction, again, outperforms the baseline of a constant beam-width by 65% and the open/closed classification of Chart Constraints by 49%. We also compare beam-width prediction to the Berkeley Coarse-to-Fine parser. Both our parser and the Berkeley parser are written in Java, both are run with Viterbi decoding, and both parse with the same grammar, so a direct comparison of speed and accuracy is fair.2 7 Conclusion and Future Work We have introduced three new pruning methods, the best of which unites figure-of-merit estimation from agenda-based parsing, local pruning from beamsearch parsing, and unlabeled constituent structure 2We run the Berkeley parser with the default search parameterization to achieve the fastest possible parsing time. We note that 3 of 2416 sentences fail to parse under these settings. Using the ‘-accurate’ option provides a valid parse for all sentences, but increases parsing time of section 23 to 0.293 seconds per sentence with no increase in F-score. We assume a back-off strategy for failed parses could be implemented to parse all sentences with a parsing time close to the default parameterization. 447 Parser Sec/Sent F1 CYK 70.383 89.4 CYK + Constituent Closure 47.870 89.3 CYK + Complete Closure 32.619 89.3 Beam + Inside FOM (BI) 3.977 89.2 BI + Constituent Closure 2.033 89.2 BI + Complete Closure 1.575 89.3 BI + Beam-Predict 1.180 89.3 Beam + Boundary FOM (BB) 0.326 89.2 BB + Constituent Closure 0.279 89.2 BB + Complete Closure 0.199 89.3 BB + Beam-Predict 0.143 89.3 Table 1: Section 22 development set results for CYK and Beam-Search (Beam) parsing using the Berkeley latent-variable grammar. prediction from coarse-to-fine parsing and Chart Constraints. Furthermore, our pruning method is trained using only maximum likelihood trees, allowing it to be tuned to specific domains without labeled data. Using this framework, we have shown that we can decrease parsing time by 65% over a standard beam-search without any loss in accuracy, and parse significantly faster than both the Berkeley parser and Chart Constraints. We plan to explore a number of remaining questions in future work. First, we will try combining our approach with constituent-level Coarse-toFine pruning. The two methods prune the search space in very different ways and may prove to be complementary. On the other hand, our parser currently spends 20% of the total parse time initializing the FOM, and adding additional preprocessing costs, such as parsing with a coarse grammar, may not outweigh the benefits gained in the final search. Second, as with Chart Constraints we do not prune lexical or unary edges in the span-1 chart cells (i.e., chart cells that span a single word). We expect pruning entries in these cells would notably reduce parse time since they cause exponentially many chart edges to be built in larger spans. 
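Returning briefly to the timing figures: the "additional 25%" and "28%" reductions quoted above follow from the seconds-per-sentence values in Table 1, as the small check below shows (the function name is illustrative).

```python
def relative_reduction(before, after):
    """Fractional reduction in seconds per sentence."""
    return (before - after) / before

# Seconds per sentence from Table 1 (Complete Closure vs. adding Beam-Predict).
print(f"Inside FOM:   {relative_reduction(1.575, 1.180):.1%}")   # ~25%
print(f"Boundary FOM: {relative_reduction(0.199, 0.143):.1%}")   # ~28%
```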
Initial work constraining span-1 chart cells has promising results (Bodenstab et al., 2011) and we hope to investigate its interaction with beam-width prediction even further. Parser Sec/Sent F1 CYK 64.610 88.7 Berkeley CTF MaxRule 0.213 90.2 Berkeley CTF Viterbi 0.208 88.8 Beam + Boundary FOM (BB) 0.334 88.6 BB + Chart Constraints 0.244 88.7 BB + Beam-Predict (this paper) 0.125 88.7 Table 2: Section 23 test set results for multiple parsers using the Berkeley latent-variable grammar. Finally, the size and structure of the grammar is the single largest contributor to parse efficiency. In contrast to the current paradigm, we plan to investigate new algorithms that jointly optimize accuracy and efficiency during grammar induction, leading to more efficient decoding. Acknowledgments We would like to thank Kristy Hollingshead for her valuable discussions, as well as the anonymous reviewers who gave very helpful feedback. This research was supported in part by NSF Grants #IIS-0447214, #IIS-0811745 and DARPA grant #HR0011-09-1-0041. Any opinions, findings, conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the NSF or DARPA. References Robert J. Bobrow. 1990. Statistical agenda parsing. In DARPA Speech and Language Workshop, pages 222– 224. Nathan Bodenstab, Kristy Hollingshead, and Brian Roark. 2011. Unary constraints for efficient contextfree parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, Portland, Oregon. Sharon A Caraballo and Eugene Charniak. 1998. New figures of merit for best-first probabilistic chart parsing. Computational Linguistics, 24:275–298. Eugene Charniak and Mark Johnson. 2005. Coarse-tofine n-best parsing and MaxEnt discriminative reranking. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 173– 180, Ann Arbor, Michigan. Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of the 1st North American 448 chapter of the Association for Computational Linguistics conference, pages 132–139, Seattle, Washington. David Chiang. 2010. Learning to translate with source and target syntax. In Proceedings of the 48rd Annual Meeting on Association for Computational Linguistics, pages 1443–1452. Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. PhD dissertation, University of Pennsylvania. Michael Collins. 2002. Discriminative training methods for hidden markov models: theory and experiments with perceptron algorithms. In Proceedings of the ACL-02 conference on Empirical Methods in Natural Language Processing, volume 10, pages 1–8, Philadelphia. Hal Daum´e, III and Daniel Marcu. 2005. Learning as search optimization: approximate large margin methods for structured prediction. In Proceedings of the 22nd international conference on Machine learning, ICML ’05, pages 169–176, New York, NY, USA. Joshua Goodman. 1997. Global thresholding and Multiple-Pass parsing. Proceedings of the Second Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 11–25. Mark Johnson. 1998. PCFG models of linguistic tree representations. Computational Linguistics, 24(4):613–632. Dan Klein and Christopher D. Manning. 2003a. A* parsing. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology (NAACL ’03), pages 40–47, Edmonton, Canada. Dan Klein and Christopher D. Manning. 2003b. 
Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1, pages 423–430, Sapporo, Japan. Mitchell P Marcus, Beatrice Santorini, Mary Ann Marcinkiewicz, and Ann Taylor. 1999. Treebank-3, Philadelphia. Takuya Matsuzaki, Yusuke Miyao, and Jun’ichi Tsujii. 2005. Probabilistic CFG with latent annotations. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics - ACL ’05, pages 75–82, Ann Arbor, Michigan. Joakim Nivre. 2008. Algorithms for deterministic incremental dependency parsing. Comput. Linguist., 34:513–553. Adam Pauls, Dan Klein, and Chris Quirk. 2010. Topdown k-best a* parsing. In In proceedings of the Annual Meeting on Association for Computational Linguistics Short Papers, ACLShort ’10, pages 200–204, Morristown, NJ, USA. Slav Petrov and Dan Klein. 2007a. Improved inference for unlexicalized parsing. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 404– 411, Rochester, New York. Slav Petrov and Dan Klein. 2007b. Learning and inference for hierarchically split PCFGs. In AAAI 2007 (Nectar Track). Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 433–440, Sydney, Australia. Slav Petrov, Pi-Chuan Chang, Michael Ringgaard, and Hiyan Alshawi. 2010. Uptraining for accurate deterministic question parsing. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 705–713, Cambridge, MA, October. Vasin Punyakanok, Dan Roth, and Wen tau Yih. 2008. The importance of syntactic parsing and inference in semantic role labeling. Computational Linguistics, 34(2):257–287. Brian Roark and Kristy Hollingshead. 2008. Classifying chart cells for quadratic complexity context-free inference. In Donia Scott and Hans Uszkoreit, editors, Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 745– 752, Manchester, UK. Brian Roark and Kristy Hollingshead. 2009. Linear complexity Context-Free parsing pipelines via chart constraints. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 647–655, Boulder, Colorado. Tzong-Han Tsai, Chia-Wei Wu, Yu-Chun Lin, and WenLian Hsu. 2005. Exploiting full parsing information to label semantic roles using an ensemble of ME and SVM via integer linear programming. In Proceedings of the Ninth Conference on Computational Natural Language Learning, CONLL ’05, pages 233–236, Morristown, NJ, USA. Yue Zhang, Byung gyu Ahn, Stephen Clark, Curt Van Wyk, James R. Curran, and Laura Rimell. 2010. Chart pruning for fast Lexicalised-Grammar parsing. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 1472–1479, Beijing, China. 449
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 450–459, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Optimal Head-Driven Parsing Complexity for Linear Context-Free Rewriting Systems Pierluigi Crescenzi Dip. di Sistemi e Informatica Universit`a di Firenze Daniel Gildea Computer Science Dept. University of Rochester Andrea Marino Dip. di Sistemi e Informatica Universit`a di Firenze Gianluca Rossi Dip. di Matematica Universit`a di Roma Tor Vergata Giorgio Satta Dip. di Ingegneria dell’Informazione Universit`a di Padova Abstract We study the problem of finding the best headdriven parsing strategy for Linear ContextFree Rewriting System productions. A headdriven strategy must begin with a specified righthand-side nonterminal (the head) and add the remaining nonterminals one at a time in any order. We show that it is NP-hard to find the best head-driven strategy in terms of either the time or space complexity of parsing. 1 Introduction Linear Context-Free Rewriting Systems (LCFRSs) (Vijay-Shankar et al., 1987) constitute a very general grammatical formalism which subsumes contextfree grammars (CFGs) and tree adjoining grammars (TAGs), as well as the synchronous context-free grammars (SCFGs) and synchronous tree adjoining grammars (STAGs) used as models in machine translation.1 LCFRSs retain the fundamental property of CFGs that grammar nonterminals rewrite independently, but allow nonterminals to generate discontinuous phrases, that is, to generate more than one span in the string being produced. This important feature has been recently exploited by Maier and Søgaard (2008) and Kallmeyer and Maier (2010) for modeling phrase structure treebanks with discontinuous constituents, and by Kuhlmann and Satta (2009) for modeling non-projective dependency treebanks. The rules of a LCFRS can be analyzed in terms of the properties of rank and fan-out. Rank is the 1To be more precise, SCFGs and STAGs generate languages composed by pair of strings, while LCFRSs generate string languages. We can abstract away from this difference by assuming concatenation of components in a string pair. number of nonterminals on the right-hand side (rhs) of a rule, while fan-out is the number of spans of the string generated by the nonterminal in the lefthand side (lhs) of the rule. CFGs are equivalent to LCFRSs with fan-out one, while TAGs are one type of LCFRSs with fan-out two. Rambow and Satta (1999) show that rank and fan-out induce an infinite, two-dimensional hierarchy in terms of generative power; while CFGs can always be reduced to rank two (Chomsky Normal Form), this is not the case for LCFRSs with any fan-out greater than one. General algorithms for parsing LCFRSs build a dynamic programming chart of recognized nonterminals bottom-up, in a manner analogous to the CKY algorithm for CFGs (Hopcroft and Ullman, 1979), but with time and space complexity that are dependent on the rank and fan-out of the grammar rules. Whenever it is possible, binarization of LCFRS rules, or reduction of rank to two, is therefore important for parsing, as it reduces the time complexity needed for dynamic programming. This has lead to a number of binarization algorithms for LCFRSs, as well as factorization algorithms that factor rules into new rules with smaller rank, without necessarily reducing rank all the way to two. 
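As a concrete encoding of rank and fan-out (defined formally in Section 2), a production can be stored as its output tuple over variables and terminal symbols. The Python sketch below is illustrative only and does not check linearity; production p2 from Example 2 below is used as the test case.

```python
from dataclasses import dataclass
from typing import List, Tuple, Union

# A variable x_{i,j}: component j of the i-th rhs nonterminal (1-based).
Var = Tuple[int, int]
Symbol = Union[str, Var]          # terminal symbols are plain strings

@dataclass
class Production:
    """An LCFRS production A -> g(A_1, ..., A_r), with g given by the
    tuple of output strings over variables and terminals."""
    lhs: str
    rhs: List[str]
    output: List[List[Symbol]]    # one list of symbols per output component

    @property
    def rank(self) -> int:
        return len(self.rhs)      # number of rhs nonterminals

    @property
    def fanout(self) -> int:
        return len(self.output)   # number of string components of the lhs

# p2 : A -> g2(A), with g2(<x_{1,1}, x_{1,2}>) = <a x_{1,1} b, c x_{1,2} d>
p2 = Production("A", ["A"], [["a", (1, 1), "b"], ["c", (1, 2), "d"]])
print(p2.rank, p2.fanout)         # 1 2
```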
Kuhlmann and Satta (2009) present an algorithm for binarizing certain LCFRS rules without increasing their fan-out, and Sagot and Satta (2010) show how to reduce rank to the lowest value possible for LCFRS rules of fan-out two, again without increasing fan-out. G´omez-Rodr´ıguez et al. (2010) show how to factorize well-nested LCFRS rules of arbitrary fan-out for efficient parsing. In general there may be a trade-off required between rank and fan-out, and a few recent papers have investigated this trade-off taking gen450 eral LCFRS rules as input. G´omez-Rodr´ıguez et al. (2009) present an algorithm for binarization of LCFRSs while keeping fan-out as small as possible. The algorithm is exponential in the resulting fan-out, and G´omez-Rodr´ıguez et al. (2009) mention as an important open question whether polynomialtime algorithms to minimize fan-out are possible. Gildea (2010) presents a related method for binarizing rules while keeping the time complexity of parsing as small as possible. Binarization turns out to be possible with no penalty in time complexity, but, again, the factorization algorithm is exponential in the resulting time complexity. Gildea (2011) shows that a polynomial time algorithm for factorizing LCFRSs in order to minimize time complexity would imply an improved approximation algorithm for the well-studied graph-theoretic property known as treewidth. However, whether the problem of factorizing LCFRSs in order to minimize time complexity is NP-hard is still an open question in the above works. Similar questions have arisen in the context of machine translation, as the SCFGs used to model translation are also instances of LCFRSs, as already mentioned. For SCFG, Satta and Peserico (2005) showed that the exponent in the time complexity of parsing algorithms must grow at least as fast as the square root of the rule rank, and Gildea and ˇStefankoviˇc (2007) tightened this bound to be linear in the rank. However, neither paper provides an algorithm for finding the best parsing strategy, and Huang et al. (2009) mention that whether finding the optimal parsing strategy for an SCFG rule is NPhard is an important problem for future work. In this paper, we investigate the problem of rule binarization for LCFRSs in the context of headdriven parsing strategies. Head-driven strategies begin with one rhs symbol, and add one nonterminal at a time. This rules out any factorization in which two subsets of nonterminals of size greater than one are combined in a single step. Head-driven strategies allow for the techniques of lexicalization and Markovization that are widely used in (projective) statistical parsing (Collins, 1997). The statistical LCFRS parser of Kallmeyer and Maier (2010) binarizes rules head-outward, and therefore adopts what we refer to as a head-driven strategy. However, the binarization used by Kallmeyer and Maier (2010) simply proceeds left to right through the rule, without considering the impact of the parsing strategy on either time or space complexity. We examine the question of whether we can efficiently find the strategy that minimizes either the time complexity or the space complexity of parsing. While a naive algorithm can evaluate all r! head-driven strategies in time O(n · r!), where r is the rule’s rank and n is the total length of the rule’s description, we wish to determine whether a polynomial-time algorithm is possible. 
Since parsing problems can be cast in terms of logic programming (Shieber et al., 1995), we note that our problem can be thought of as a type of query optimization for logic programming. Query optimization for logic programming is NP-complete since query optimization for even simple conjunctive database queries is NP-complete (Chandra and Merlin, 1977). However, the fact that variables in queries arising from LCFRS rules correspond to the endpoints of spans in the string to be parsed means that these queries have certain structural properties (Gildea, 2011). We wish to determine whether the structure of LCFRS rules makes efficient factorization algorithms possible. In the following, we show both the the time- and space-complexity problems to be NP-hard for headdriven strategies. We provide what is to our knowledge the first NP-hardness result for a grammar factorization problem, which we hope will aid in understanding parsing algorithms in general. 2 LCFRSs and parsing complexity In this section we briefly introduce LCFRSs and define the problem of optimizing head-driven parsing complexity for these formalisms. For a positive integer n, we write [n] to denote the set {1, . . . , n}. As already mentioned in the introduction, LCFRSs generate tuples of strings over some finite alphabet. This is done by associating each production p of a grammar with a function g that takes as input the tuples generated by the nonterminals in p’s rhs, and rearranges their string components into a new tuple, possibly adding some alphabet symbols. Let V be some finite alphabet. We write V ∗for the set of all (finite) strings over V . For natural numbers r ≥0 and f, f1, . . . , fr ≥1, consider a func451 tion g : (V ∗)f1 × · · · × (V ∗)fr →(V ∗)f defined by an equation of the form g(⟨x1,1, . . . , x1,f1⟩, . . . , ⟨xr,1, . . . , xr,fr⟩) = ⃗α . Here the xi,j’s denote variables over strings in V ∗, and ⃗α = ⟨α1, . . . , αf⟩is an f-tuple of strings over g’s argument variables and symbols in V . We say that g is linear, non-erasing if ⃗α contains exactly one occurrence of each argument variable. We call r and f the rank and the fan-out of g, respectively, and write r(g) and f(g) to denote these quantities. Example 1 g1(⟨x1,1, x1,2⟩) = ⟨x1,1x1,2⟩takes as input a tuple with two strings and returns a tuple with a single string, obtained by concatenating the components in the input tuple. g2(⟨x1,1, x1,2⟩) = ⟨ax1,1b, cx1,2d⟩takes as input a tuple with two strings and wraps around these strings with symbols a, b, c, d ∈V . Both functions are linear, nonerasing, and we have r(g1) = r(g2) = 1, f(g1) = 1 and f(g2) = 2. 2 A linear context-free rewriting system is a tuple G = (VN, VT , P, S), where VN and VT are finite, disjoint alphabets of nonterminal and terminal symbols, respectively. Each A ∈VN is associated with a value f(A), called its fan-out. The nonterminal S is the start symbol, with f(S) = 1. Finally, P is a set of productions of the form p : A → g(A1, A2, . . . , Ar(g)) , (1) where A, A1, . . . , Ar(g) ∈VN, and g : (V ∗ T )f(A1) × · · · × (V ∗ T )f(Ar(g)) →(V ∗ T )f(A) is a linear, nonerasing function. Production (1) can be used to transform the r(g) string tuples generated by the nonterminals A1, . . . , Ar(g) into a tuple of f(A) strings generated by A. The values r(g) and f(g) are called the rank and fan-out of p, respectively, written r(p) and f(p). Given that f(S) = 1, S generates a set of strings, defining the language L(G). Example 2 Let g1 and g2 be as in Example 1, and let g3() = ⟨ε, ε⟩. 
Consider the LCFRS G defined by the productions p1 : S →g1(A), p2 : A →g2(A) and p3 : A →g3(). We have f(S) = 1, f(A) = f(G) = 2, r(p3) = 0 and r(p1) = r(p2) = r(G) = 1. We have L(G) = {anbncndn | n ≥1}. For instance, the string a3b3c3d3 is generated by means fan-out strategy 4 ((A1 ⊕A4) ⊕A3)∗⊕A2 3 (A1 ⊕A4)∗⊕(A2 ⊕A3) 3 ((A1 ⊕A2)∗⊕A4) ⊕A3 2 ((A∗ 2 ⊕A3) ⊕A4) ⊕A1 Figure 1: Some parsing strategies for production p in Example 3, and the associated maximum value for fan-out. Symbol ⊕denotes the merging operation, and superscript ∗marks the first step in the strategy in which the highest fan-out is realized. of the following bottom-up process. First, the tuple ⟨ε, ε⟩is generated by A through p3. We then iterate three times the application of p2 to ⟨ε, ε⟩, resulting in the tuple ⟨a3b3, c3d3⟩. Finally, the tuple (string) ⟨a3b3c3d3⟩is generated by S through application of p1. 2 Existing parsing algorithms for LCFRSs exploit dynamic programming. These algorithms compute partial parses of the input string w, represented by means of specialized data structures called items. Each item indexes the boundaries of the segments of w that are spanned by the partial parse. In the special case of parsing based on CFGs, an item consists of two indices, while for TAGs four indices are required. In the general case of LCFRSs, parsing of a production p as in (1) can be carried out in r(g) −1 steps, collecting already available parses for nonterminals A1, . . . , Ar(g) one at a time, and ‘merging’ these into intermediate partial parses. We refer to the order in which nonterminals are merged as a parsing strategy, or, equivalently, a factorization of the original grammar rule. Any parsing strategy results in a complete parse of p, spanning f(p) = f(A) segments of w and represented by some item with 2f(A) indices. However, intermediate items obtained in the process might span more than f(A) segments. We illustrate this through an example. Example 3 Consider a linear non-erasing function g(⟨x1,1, x1,2⟩, ⟨x2,1, x2,2⟩, ⟨x3,1, x3,2⟩, ⟨x4,1, x4,2⟩) = ⟨x1,1x2,1x3,1x4,1, x3,2x2,2x4,2x1,2⟩, and a production p : A →g(A1, A2, A3, A4), where all the nonterminals involved have fan-out 2. We could parse p starting from A1, and then merging with A4, 452 v1 v2 v3 v4 e1 e3 e2 e4 Figure 2: Example input graph for our construction of an LCFRS production. A3, and A2. In this case, after we have collected the first three nonterminals, we have obtained a partial parse having fan-out 4, that is, an item spanning 4 segments of the input string. Alternatively, we could first merge A1 and A4, then merge A2 and A3, and finally merge the two obtained partial parses. This strategy is slightly better, resulting in a maximum fan-out of 3. Other possible strategies can be explored, displayed in Figure 1. It turns out that the best parsing strategy leads to fan-out 2. 2 The maximum fan-out f realized by a parsing strategy determines the space complexity of the parsing algorithm. For an input string w, items will require (in the worst-case) 2f indices, each taking O(|w|) possible values. This results in space complexity of O(|w|2f). In the special cases of parsing based on CFGs and TAGs, this provides the wellknown space complexity of O(|w|2) and O(|w|4), respectively. It can also be shown that, if a partial parse having fan-out f is obtained by means of the combination of two partial parses with fan-out f1 and f2, respectively, the resulting time complexity will be O(|w|f+f1+f2) (Seki et al., 1991; Gildea, 2010). 
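The fan-out bookkeeping of Example 3 can be reproduced mechanically. In the sketch below (same assumed encoding as before: each component lists, position by position, the index of the right-hand-side nonterminal whose variable occupies it; Example 3 contains no terminals), the fan-out of a partial parse is the number of maximal runs of variables belonging to the nonterminals merged so far, that is, the number of string segments spanned by the corresponding intermediate item.

```python
# Partial-parse fan-out under the assumed span encoding; the values below
# agree with the fan-outs reported in Figure 1 for Example 3.

P_EX3 = [[1, 2, 3, 4],    # x_{1,1} x_{2,1} x_{3,1} x_{4,1}
         [3, 2, 4, 1]]    # x_{3,2} x_{2,2} x_{4,2} x_{1,2}

def fanout(components, merged):
    """Number of maximal runs of variables of the merged nonterminals."""
    runs = 0
    for comp in components:
        inside = False
        for i in comp:
            if i in merged and not inside:
                runs, inside = runs + 1, True
            elif i not in merged:
                inside = False
    return runs

assert fanout(P_EX3, {1, 4, 3}) == 4     # ((A1 + A4) + A3), the worst step
assert fanout(P_EX3, {1, 4}) == 3        # (A1 + A4)
assert fanout(P_EX3, {1, 2}) == 3        # (A1 + A2)
assert fanout(P_EX3, {2, 3, 4}) == 2     # ((A2 + A3) + A4), the best strategy
```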
As an example, in the case of parsing based on CFGs, nonterminals as well as partial parses all have fanout one, resulting in the standard time complexity of O(|w|3) of dynamic programming methods. When parsing with TAGs, we have to manipulate objects with fan-out two (in the worst case), resulting in time complexity of O(|w|6). We investigate here the case of general LCFRS productions, whose internal structure is considerably more complex than the context-free or the tree adjoining case. Optimizing the parsing complexity for a production means finding a parsing strategy that results in minimum space or time complexity. We now turn the above optimization problems into decision problems. In the MIN SPACE STRATEGY problem one takes as input an LCFRS production p and an integer k, and must decide whether there exists a parsing strategy for p with maximum fan-out not larger than k. In the MIN TIME STRATEGY problem one is given p and k as above and must decide whether there exists a parsing strategy for p such that, in any of its steps merging two partial parses with fan-out f1 and f2 and resulting in a partial parse with fan-out f, the relation f +f1+f2 ≤k holds. In this paper we investigate the above problems in the context of a specific family of linguistically motivated parsing strategies for LCFRSs, called headdriven. In a head-driven strategy, one always starts parsing a production p from a fixed nonterminal in its rhs, called the head of p, and merges the remaining nonterminals one at a time with the partial parse containing the head. Thus, under these strategies, the construction of partial parses that do not include the head is forbidden, and each parsing step involves at most one partial parse. In Figure 1, all of the displayed strategies but the one in the second line are head-driven (for different choices of the head). 3 NP-completeness results For an LCFRS production p, let H be its head nonterminal, and let A1, . . . , An be all the non-head nonterminals in p’s rhs, with n + 1 = r(p). A headdriven parsing strategy can be represented as a permutation π over the set [n], prescribing that the nonhead nonterminals in p’s rhs should be merged with H in the order Aπ(1), Aπ(2), . . . , Aπ(n). Note that there are n! possible head-driven parsing strategies. To show that MIN SPACE STRATEGY is NPhard under head-driven parsing strategies, we reduce from the MIN CUT LINEAR ARRANGEMENT problem, which is a decision problem over (undirected) graphs. Given a graph M = (V, E) with set of vertices V and set of edges E, a linear arrangement of M is a bijective function h from V to [n], where |V | = n. The cutwidth of M at gap i ∈[n −1] and with respect to a linear arrangement h is the number of edges crossing the gap between the i-th vertex and its successor: cw(M, h, i) = |{(u, v) ∈E | h(u) ≤i < h(v)}| . 453 p : A →g(H, A1, A2, A3, A4) g(⟨xH,e1, xH,e2, xH,e3, xH,e4⟩, ⟨xA1,e1,l, xA1,e1,r, xA1,e3,l, xA1,e3,r⟩, ⟨xA2,e1,l, xA2,e1,r, xA2,e2,l, xA2,e2,r⟩, ⟨xA3,e2,l, xA3,e2,r, xA3,e3,l, xA3,e3,r, xA3,e4,l, xA3,e4,r⟩, ⟨xA4,e4,l, xA4,e4,r⟩) = ⟨ xA1,e1,lxA2,e1,lxH,e1xA1,e1,rxA2,e1,r, xA2,e2,lxA3,e2,lxH,e2xA2,e2,rxA3,e2,r, xA1,e3,lxA3,e3,lxH,e3xA1,e3,rxA3,e3,r, xA3,e4,lxA4,e4,lxH,e4xA3,e4,rxA4,e4,r ⟩ Figure 3: The construction used to prove Theorem 1 builds the LCFRS production p shown, when given as input the graph of Figure 2. The cutwidth of M is then defined as cw(M) = min h max i∈[n−1] cw(M, h, i) . 
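Both decision problems can be decided naively by enumerating the r! head-driven strategies, as anticipated in the introduction. The sketch below repeats the fan-out helper so that it is self-contained; index 0 stands for the head H, and the maximum is taken over all intermediate partial parses, including the item for the head alone.

```python
from itertools import permutations

P_EX3 = [[1, 2, 3, 4], [3, 2, 4, 1]]     # the production of Example 3

def fanout(components, merged):
    runs = 0
    for comp in components:
        inside = False
        for i in comp:
            if i in merged and not inside:
                runs, inside = runs + 1, True
            elif i not in merged:
                inside = False
    return runs

def best_head_driven(components, head, others):
    """Minimum over all head-driven strategies of (max fan-out, max f1+f2+f)."""
    best_space = best_time = float("inf")
    for pi in permutations(others):
        merged, space, time = {head}, fanout(components, {head}), 0
        for a in pi:
            f1 = fanout(components, merged)
            f2 = sum(comp.count(a) for comp in components)   # fan-out of the merged nonterminal
            merged = merged | {a}
            f = fanout(components, merged)
            space, time = max(space, f), max(time, f1 + f2 + f)
        best_space, best_time = min(best_space, space), min(best_time, time)
    return best_space, best_time

def min_space_strategy(components, head, others, k):
    return best_head_driven(components, head, others)[0] <= k

def min_time_strategy(components, head, others, k):
    return best_head_driven(components, head, others)[1] <= k

# With head A2 the optimal fan-out 2 of Figure 1 is reached; with head A1 it is not.
assert best_head_driven(P_EX3, 2, [1, 3, 4]) == (2, 6)
assert best_head_driven(P_EX3, 1, [2, 3, 4])[0] == 3
```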
In the MIN CUT LINEAR ARRANGEMENT problem, one is given as input a graph M and an integer k, and must decide whether cw(M) ≤k. This problem has been shown to be NP-complete (Gavril, 1977). Theorem 1 The MIN SPACE STRATEGY problem restricted to head-driven parsing strategies is NPcomplete. PROOF We start with the NP-hardness part. Let M = (V, E) and k be an input instance for MIN CUT LINEAR ARRANGEMENT, and let V = {v1, . . . , vn} and E = {e1, . . . , eq}. We assume there are no self loops in M, since these loops do not affect the value of the cutwidth and can therefore be removed. We construct an LCFRS production p and an integer k′ as follows. Production p has a head nonterminal H and a nonhead nonterminal Ai for each vertex vi ∈V . We let H generate tuples with a string component for each edge ei ∈E. Thus, we have f(H) = q. Accordingly, we use variables xH,ei, for each ei ∈E, to denote the string components in tuples generated by H. For each vi ∈V , let E(vi) ⊆E be the set of edges impinging on vi; thus |E(vi)| is the degree of vi. We let Ai generate a tuple with two string components for each ej ∈E(vi). Thus, we have f(Ai) = 2 · |E(vi)|. Accordingly, we use variables xAi,ej,l and xAi,ej,r, for each ej ∈E(vi), to denote the string components in tuples generated by Ai (here subscripts l and r indicate left and right positions, respectively; see below). We set r(p) = n + 1 and f(p) = q, and define p by A → g(H, A1, A2, . . . , An), with g(tH, tA1, . . . , tAn) = ⟨α1, . . . , αq⟩. Here tH is the tuple of variables for H and each tAi, i ∈[n], is the tuple of variables for Ai. Each string αi, i ∈[q], is specified as follows. Let vs and vt be the endpoints of ei, with vs, vt ∈V and s < t. We define αi = xAs,ei,lxAt,ei,lxH,eixAs,ei,rxAt,ei,r . Observe that whenever edge ei impinges on vertex vj, then the left and right strings generated by Aj and associated with ei wrap around the string generated by H and associated with the same edge. Finally, we set k′ = q + k. Example 4 Given the input graph of Figure 2, our reduction constructs the LCFRS production shown in Figure 3. Figure 4 gives a visualization of how the spans in this production fit together. For each edge in the graph of Figure 2, we have a group of five spans in the production: one for the head nonterminal, and two spans for each of the two nonterminals corresponding to the edge’s endpoints. 2 Assume now some head-driven parsing strategy π for p. For each i ∈[n], we define Dπ i to be the partial parse obtained after step i in π, consisting of the merge of nonterminals H, Aπ(1), . . . , Aπ(i). Consider some edge ej = (vs, vt). We observe that for any Dπ i that includes or excludes both nonterminals As and At, the αj component in the definition of p is associated with a single string, and therefore contributes with a single unit to the fan-out of the partial parse. On the other hand, if Dπ i includes only one nonterminal between As and At, the αj component is associated with two strings and contributes with two units to the fan-out of the partial parse. We can associate with π a linear arrangement hπ of M by letting hπ(vπ(i)) = i, for each vi ∈V . From the above observation on the fan-out of Dπ i , 454 xA1,e1,lxA2,e1,l xH,e1 xA1,e1,rxA2,e1,r xA2,e2,lxA3,e2,l xH,e2 xA2,e2,rxA3,e2,r xA1,e3,lxA3,e3,l xH,e3 xA1,e3,rxA3,e3,r xA3,e4,lxA4,e4,l xH,e4 xA3,e4,rxA4,e4,r H A1 A2 A3 A4 Figure 4: A visualization of how the spans for each nonterminal fit together in the left-to-right order defined by the production of Figure 3. 
we have the following relation, for every i ∈[n−1]: f(Dπ i ) = q + cw(M, hπ, i) . We can then conclude that M, k is a positive instance of MIN CUT LINEAR ARRANGEMENT if and only if p, k′ is a positive instance of MIN SPACE STRATEGY. This proves that MIN SPACE STRATEGY is NP-hard. To show that MIN SPACE STRATEGY is in NP, consider a nondeterministic algorithm that, given an LCFRS production p and an integer k, guesses a parsing strategy π for p, and tests whether f(Dπ i ) ≤ k for each i ∈[n]. The algorithm accepts or rejects accordingly. Such an algorithm can clearly be implemented to run in polynomial time. ■ We now turn to the MIN TIME STRATEGY problem, restricted to head-driven parsing strategies. Recall that we are now concerned with the quantity f1 + f2 + f, where f1 is the fan-out of some partial parse D, f2 is the fan-out of a nonterminal A, and f is the fan out of the partial parse resulting from the merge of the two previous analyses. We need to introduce the MODIFIED CUTWIDTH problem, which is a variant of the MIN CUT LINEAR ARRANGEMENT problem. Let M = (V, E) be some graph with |V | = n, and let h be a linear arrangement for M. The modified cutwidth of M at position i ∈[n] and with respect to h is the number of edges crossing over the i-th vertex: mcw(M, h, i) = |{(u, v) ∈E | h(u) < i < h(v)}| . The modified cutwidth of M is defined as mcw(M) = min h max i∈[n] mcw(M, h, i) . In the MODIFIED CUTWIDTH problem one is given as input a graph M and an integer k, and must decide whether mcw(M) ≤k. The MODIFIED CUTWIDTH problem has been shown to be NPcomplete by Lengauer (1981). We strengthen this result below; recall that a cubic graph is a graph without self loops where each vertex has degree three. Lemma 1 The MODIFIED CUTWIDTH problem restricted to cubic graphs is NP-complete. PROOF The MODIFIED CUTWIDTH problem has been shown to be NP-complete when restricted to graphs of maximum degree three by Makedon et al. (1985), reducing from a graph problem known as bisection width (see also Monien and Sudborough (1988)). Specifically, the authors construct a graph G′ of maximum degree three and an integer k′ from an input graph G = (V, E) with an even number n of vertices and an integer k, such that mcw(G′) ≤k′ if and only if the bisection width bw(G) of G is not greater than k, where bw(G) = min A,B⊆V |{(u, v) ∈E | u ∈A ∧v ∈B}| with A ∩B = ∅, A ∪B = V , and |A| = |B|. The graph G′ has vertices of degree two and three only, and it is based on a grid-like gadget R(r, c); see Figure 5. For each vertex of G, G′ includes a component R(2n4, 8n4+8). Moreover, G′ has a component called an H-shaped graph, containing left and right columns R(3n4, 12n4 + 12) connected by a middle bar R(2n4, 12n4 + 9); see Figure 6. From each of the n vertex components there is a sheaf of 2n2 edges connecting distinct degree 2 vertices in the component to 2n2 distinct degree 2 vertices in 455 x x x1 x2 x3 x4 x5 x x1 x2 x5 x3 x4 Figure 5: The R(5, 10) component (left), the modification of its degree 2 vertex x (middle), and the corresponding arrangement (right). the middle bar of the H-shaped graph. Finally, for each edge (vi, vj) of G there is an edge in G′ connecting a degree 2 vertex in the component corresponding to the vertex vi with a degree 2 vertex in the component corresponding to the vertex vj. The integer k′ is set to 3n4 + n3 + k −1. Makedon et al. (1985) show that the modified cutwidth of R(r, c) is r −1 whenever r ≥3 and c ≥4r + 8. 
They also show that an optimal linear arrangement for G′ has the form depicted in Figure 6, where half of the vertex components are to the left of the H-shaped graph and all the other vertex components are to the right. In this arrangement, the modified cutwidth is attested by the number of edges crossing over the vertices in the left and right columns of the H-shaped graph, which is equal to 3n4 −1 + n 2 2n2 + γ = 3n4 + n3 + γ −1 (2) where γ denotes the number of edges connecting vertices to the left with vertices to the right of the H-shaped graph. Thus, bw(G) ≤k if and only if mcw(G′) ≤k′. All we need to show now is how to modify the components of G′ in order to make it cubic. Modifying the vertex components All vertices x of degree 2 of the components corresponding to a vertex in G can be transformed into a vertex of degree 3 by adding five vertices x1, . . . , x5 connected as shown in the middle bar of Figure 5. Observe that these five vertices can be positioned in the arrangement immediately after x in the order x1, x2, x5, x3, x4 (see the right part of the figure). The resulting maximum modified cutwidth can increase by 2 in correspondence of vertex x5. Since the vertices of these components, in the optimal arrangement, have modified cutwidth smaller than 2n4 + n3 + n2, an increase by 2 is still smaller than the maximum modified cutwidth of the entire graph, which is 3n4 + O(n3). Modifying the middle bar of the H-shaped graph The vertices of degree 2 of this part of the graph can be modified as in the previous paragraph. Indeed, in the optimal arrangement, these vertices have modified cutwidth smaller than 2n4 + 2n3 + n2, and an increase by 2 is still smaller than the maximum cutwidth of the entire graph. Modifying the left/right columns of the H-shaped graph We replace the two copies of component R(3n4, 12n4 + 12) with two copies of the new component D(3n4, 24n4 + 16) shown in Figure 7, which is a cubic graph. In order to prove that relation (2) still holds, it suffices to show that the modified cutwidth of the component D(r, c) is still r −1 whenever r ≥3 and c = 8r + 16. We first observe that the linear arrangement obtained by visiting the vertices of D(r, c) from top to bottom and from left to right has modified cutwidth r −1. Let us now prove that, for any partition of the vertices into two subsets V1 and V2 with |V1|, |V2| ≥ 4r2, there exist at least r disjoint paths between vertices of V1 and vertices of V2. To this aim, we distinguish the following three cases. • Any row has (at least) one vertex in V1 and one vertex in V2: in this case, it is easy to see there exist at least r disjoint paths between vertices of V1 and vertices of V2. • There exist at least 3r ‘mixed’ columns, that is, columns with (at least) one vertex in V1 and one vertex in V2. Again, it is easy to see that there exist at least r disjoint paths between vertices 456 Figure 6: The optimal arrangement of G′. of V1 and vertices of V2 (at least one path every three columns). • The previous two cases do not apply. Hence, there exists a row entirely formed by vertices of V1 (or, equivalently, of V2). The worst case is when this row is the smallest one, that is, the one with (c−3−1) 2 + 1 = 4r + 7 vertices. Since at most 3r −1 columns are mixed, we have that at most (3r −1)(r −2) = 3r2 −7r + 2 vertices of V2 are on these mixed columns. Since |V2| ≥4r2, this implies that at least r columns are fully contained in V2. On the other hand, at least 4r+7−(3r−1) = r+8 columns are fully contained in V1. 
If the V1-columns interleave with the V2-columns, then there exist at least 2(r −1) disjoint paths between vertices of V1 and vertices of V2. Otherwise, all the V1columns precede or follow all the V2-columns (this corresponds to the optimal arrangement): in this case, there are r disjoint paths between vertices of V1 and vertices of V2. Observe now that any linear arrangement partitions the set of vertices in D(r, c) into the sets V1, consisting of the first 4r2 vertices in the arrangement, and V2, consisting of all the remaining vertices. Since there are r disjoint paths connecting V1 and V2, there must be at least r−1 edges passing over every vertex in the arrangement which is assigned to a position between the (4r2 + 1)-th and the position 4r2 + 1 from the right end of the arrangement: thus, the modified cutwidth of any linear arrangement of the vertices of D(r, c) is at least r −1. We can then conclude that the original proof of Makedon et al. (1985) still applies, according to relation (2). ■ Figure 7: The D(5, 10) component. We can now reduce from the MODIFIED CUTWIDTH problem for cubic graphs to the MIN TIME STRATEGY problem restricted to head-driven parsing strategies. Theorem 2 The MIN TIME STRATEGY problem restricted to head-driven parsing strategies is NPcomplete. PROOF We consider hardness first. Let M and k be an input instance of the MODIFIED CUTWIDTH problem restricted to cubic graphs, where M = (V, E) and V = {v1, . . . , vn}. We construct an LCFRS production p exactly as in the proof of Theorem 1, with rhs nonterminals H, A1, . . . , An. We also set k′ = 2 · k + 2 · |E| + 9. Assume now some head-driven parsing strategy π for p. After parsing step i ∈[n], we have a partial parse Dπ i consisting of the merge of nonterminals H, Aπ(1), . . . , Aπ(i). We write tc(p, π, i) to denote the exponent of the time complexity due to step i. As already mentioned, this quantity is defined as the sum of the fan-out of the two antecedents involved in the parsing step and the fan-out of its result: tc(p, π, i) = f(Dπ i−1) + f(Aπ(i)) + f(Dπ i ) . Again, we associate with π a linear arrangement hπ of M by letting hπ(vπ(i)) = i, for each vi ∈V . As in the proof of Theorem 1, the fan-out of Dπ i is then related to the cutwidth of the linear arrange457 ment hπ of M at position i by f(Dπ i ) = |E| + cw(M, hπ, i) . From the proof of Theorem 1, the fan-out of nonterminal Aπ(i) is twice the degree of vertex vπ(i), denoted by |E(vπ(i))|. We can then rewrite the above equation in terms of our graph M: tc(p, π, i) = 2 · |E| + cw(M, hπ, i −1) + + 2 · |E(vπ(i))| + cw(M, hπ, i) . The following general relation between cutwidth and modified cutwidth is rather intuitive: mcw(M, hπ, i) = 1 2 · [cw(M, hπ, i −1) + −|E(vπ(i))| + cw(M, hπ, i)] . Combining the two equations above we obtain: tc(p, π, i) = 2 · |E| + 3 · |E(vπ(i))| + + 2 · mcw(M, hπ, i) . Because we are restricting M to the class of cubic graphs, we can write: tc(p, π, i) = 2 · |E| + 9 + 2 · mcw(M, hπ, i) . We can thus conclude that there exists a head-driven parsing strategy for p with time complexity not greater than 2 · |E| + 9 + 2 · k = k′ if and only if mcw(M) ≤k. The membership of MODIFIED CUTWIDTH in NP follows from an argument similar to the one in the proof of Theorem 1. ■ We have established the NP-completeness of both the MIN SPACE STRATEGY and the MIN TIME STRATEGY decision problems. 
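Both reductions can be checked mechanically on a small cubic graph. The sketch below builds the production of Theorem 1 from K4 under the assumed span encoding (one component [s, t, H, s, t] per edge (s, t), with H encoded as 0) and verifies, for every head-driven strategy, the space relation f(Dπi) = |E| + cw(M, hπ, i) and the cubic-graph time relation tc(p, π, i) = 2·|E| + 9 + 2·mcw(M, hπ, i).

```python
from itertools import permutations

K4_EDGES = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]   # smallest cubic graph
COMPONENTS = [[s, t, 0, s, t] for (s, t) in K4_EDGES]          # Theorem 1 construction

def fanout(components, merged):
    runs = 0
    for comp in components:
        inside = False
        for i in comp:
            if i in merged and not inside:
                runs, inside = runs + 1, True
            elif i not in merged:
                inside = False
    return runs

def cw_at(edges, order, gap):   # edges crossing the gap after the first `gap` vertices
    left = set(order[:gap])
    return sum((u in left) != (v in left) for (u, v) in edges)

def mcw_at(edges, order, i):    # edges passing over the i-th vertex
    pos = {v: j for j, v in enumerate(order, start=1)}
    return sum(min(pos[u], pos[v]) < i < max(pos[u], pos[v]) for (u, v) in edges)

for order in permutations([1, 2, 3, 4]):        # the arrangement h_pi induced by pi
    merged = {0}                                 # start from the head H
    for i, v in enumerate(order, start=1):
        f_prev = fanout(COMPONENTS, merged)
        merged = merged | {v}
        f_now = fanout(COMPONENTS, merged)
        assert f_now == len(K4_EDGES) + cw_at(K4_EDGES, order, i)
        f_nt = sum(comp.count(v) for comp in COMPONENTS)       # = 2 * degree(v) = 6
        assert f_prev + f_nt + f_now == \
            2 * len(K4_EDGES) + 9 + 2 * mcw_at(K4_EDGES, order, i)
```

By the symmetry of K4 every arrangement behaves alike, so the check is only illustrative; the equalities hold because of the structure of the construction, not because of the particular graph.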
It is now easy to see that the problem of finding a space- or time-optimal parsing strategy for a LCFRS production is NP-hard as well, and thus cannot be solved in polynomial (deterministic) time unless P = NP. 4 Concluding remarks Head-driven strategies are important in parsing based on LCFRSs, both in order to allow statistical modeling of head-modifier dependencies and in order to generalize the Markovization of CFG parsers to parsers with discontinuous spans. However, there are n! possible head-driven strategies for an LCFRS production with a head and n modifiers. Choosing among these possible strategies affects both the time and the space complexity of parsing. In this paper we have shown that optimizing the choice according to either metric is NP-hard. To our knowledge, our results are the first NP-hardness results for a grammar factorization problem. SCFGs and STAGs are specific instances of LCFRSs. Grammar factorization for synchronous models is an important component of current machine translation systems (Zhang et al., 2006), and algorithms for factorization have been studied by Gildea et al. (2006) for SCFGs and by Nesson et al. (2008) for STAGs. These algorithms do not result in what we refer as head-driven strategies, although, as machine translation systems improve, lexicalized rules may become important in this setting as well. However, the results we have presented in this paper do not carry over to the above mentioned synchronous models, since the fan-out of these models is bounded by two, while in our reductions in Section 3 we freely use unbounded values for this parameter. Thus the computational complexity of optimizing the choice of the parsing strategy for SCFGs is still an open problem. Finally, our results for LCFRSs only apply when we restrict ourselves to head-driven strategies. This is in contrast to the findings of Gildea (2011), which show that, for unrestricted parsing strategies, a polynomial time algorithm for minimizing parsing complexity would imply an improved approximation algorithm for finding the treewidth of general graphs. Our result is stronger, in that it shows strict NPhardness, but also weaker, in that it applies only to head-driven strategies. Whether NP-hardness can be shown for unrestricted parsing strategies is an important question for future work. Acknowledgments The first and third authors are partially supported from the Italian PRIN project DISCO. The second author is partially supported by NSF grants IIS0546554 and IIS-0910611. 458 References Ashok K. Chandra and Philip M. Merlin. 1977. Optimal implementation of conjunctive queries in relational data bases. In Proc. ninth annual ACM symposium on Theory of computing, STOC ’77, pages 77–90. Michael Collins. 1997. Three generative, lexicalised models for statistical parsing. In Proc. 35th Annual Conference of the Association for Computational Linguistics (ACL-97), pages 16–23. F. Gavril. 1977. Some NP-complete problems on graphs. In Proc. 11th Conf. on Information Sciences and Systems, pages 91–95. Daniel Gildea and Daniel ˇStefankoviˇc. 2007. Worst-case synchronous grammar rules. In Proc. 2007 Meeting of the North American chapter of the Association for Computational Linguistics (NAACL-07), pages 147– 154, Rochester, NY. Daniel Gildea, Giorgio Satta, and Hao Zhang. 2006. Factoring synchronous grammars by sorting. In Proc. International Conference on Computational Linguistics/Association for Computational Linguistics (COLING/ACL-06) Poster Session, pages 279–286. Daniel Gildea. 2010. 
Optimal parsing strategies for Linear Context-Free Rewriting Systems. In Proc. 2010 Meeting of the North American chapter of the Association for Computational Linguistics (NAACL-10), pages 769–776. Daniel Gildea. 2011. Grammar factorization by tree decomposition. Computational Linguistics, 37(1):231– 248. Carlos G´omez-Rodr´ıguez, Marco Kuhlmann, Giorgio Satta, and David Weir. 2009. Optimal reduction of rule length in Linear Context-Free Rewriting Systems. In Proc. 2009 Meeting of the North American chapter of the Association for Computational Linguistics (NAACL-09), pages 539–547. Carlos G´omez-Rodr´ıguez, Marco Kuhlmann, and Giorgio Satta. 2010. Efficient parsing of well-nested linear context-free rewriting systems. In Proc. 2010 Meeting of the North American chapter of the Association for Computational Linguistics (NAACL-10), pages 276– 284, Los Angeles, California. John E. Hopcroft and Jeffrey D. Ullman. 1979. Introduction to Automata Theory, Languages, and Computation. Addison-Wesley, Reading, MA. Liang Huang, Hao Zhang, Daniel Gildea, and Kevin Knight. 2009. Binarization of synchronous context-free grammars. Computational Linguistics, 35(4):559–595. Laura Kallmeyer and Wolfgang Maier. 2010. Datadriven parsing with probabilistic linear context-free rewriting systems. In Proc. 23rd International Conference on Computational Linguistics (Coling 2010), pages 537–545. Marco Kuhlmann and Giorgio Satta. 2009. Treebank grammar techniques for non-projective dependency parsing. In Proc. 12th Conference of the European Chapter of the ACL (EACL-09), pages 478–486. Thomas Lengauer. 1981. Black-white pebbles and graph separation. Acta Informatica, 16:465–475. Wolfgang Maier and Anders Søgaard. 2008. Treebanks and mild context-sensitivity. In Philippe de Groote, editor, Proc. 13th Conference on Formal Grammar (FG-2008), pages 61–76, Hamburg, Germany. CSLI Publications. F. S. Makedon, C. H. Papadimitriou, and I. H. Sudborough. 1985. Topological bandwidth. SIAM J. Alg. Disc. Meth., 6(3):418–444. B. Monien and I.H. Sudborough. 1988. Min cut is NPcomplete for edge weighted trees. Theor. Comput. Sci., 58:209–229. Rebecca Nesson, Giorgio Satta, and Stuart M. Shieber. 2008. Optimal k-arization of synchronous tree adjoining grammar. In Proc. 46th Annual Meeting of the Association for Computational Linguistics (ACL-08), pages 604–612. Owen Rambow and Giorgio Satta. 1999. Independent parallelism in finite copying parallel rewriting systems. Theor. Comput. Sci., 223(1-2):87–120. Benoˆıt Sagot and Giorgio Satta. 2010. Optimal rank reduction for linear context-free rewriting systems with fan-out two. In Proc. 48th Annual Meeting of the Association for Computational Linguistics, pages 525–533, Uppsala, Sweden. Giorgio Satta and Enoch Peserico. 2005. Some computational complexity results for synchronous contextfree grammars. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing (HLT/EMNLP), pages 803–810, Vancouver, Canada. H. Seki, T. Matsumura, M. Fujii, and T. Kasami. 1991. On multiple context-free grammars. Theoretical Computer Science, 88:191–229. Stuart M. Shieber, Yves Schabes, and Fernando C. N. Pereira. 1995. Principles and implementation of deductive parsing. The Journal of Logic Programming, 24(1-2):3–36. K. Vijay-Shankar, D. L. Weir, and A. K. Joshi. 1987. Characterizing structural descriptions produced by various grammatical formalisms. In Proc. 
25th Annual Conference of the Association for Computational Linguistics (ACL-87), pages 104–111. Hao Zhang, Liang Huang, Daniel Gildea, and Kevin Knight. 2006. Synchronous binarization for machine translation. In Proc. 2006 Meeting of the North American chapter of the Association for Computational Linguistics (NAACL-06), pages 256–263.
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 460–469, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Prefix Probability for Probabilistic Synchronous Context-Free Grammars Mark-Jan Nederhof School of Computer Science University of St Andrews North Haugh, St Andrews, Fife KY16 9SX United Kingdom [email protected] Giorgio Satta Dept. of Information Engineering University of Padua via Gradenigo, 6/A I-35131 Padova Italy [email protected] Abstract We present a method for the computation of prefix probabilities for synchronous contextfree grammars. Our framework is fairly general and relies on the combination of a simple, novel grammar transformation and standard techniques to bring grammars into normal forms. 1 Introduction Within the area of statistical machine translation, there has been a growing interest in so-called syntaxbased translation models, that is, models that define mappings between languages through hierarchical sentence structures. Several such statistical models that have been investigated in the literature are based on synchronous rewriting or tree transduction. Probabilistic synchronous context-free grammars (PSCFGs) are one among the most popular examples of such models. PSCFGs subsume several syntax-based statistical translation models, as for instance the stochastic inversion transduction grammars of Wu (1997), the statistical model used by the Hiero system of Chiang (2007), and systems which extract rules from parsed text, as in Galley et al. (2004). Despite the widespread usage of models related to PSCFGs, our theoretical understanding of this class is quite limited. In contrast to the closely related class of probabilistic context-free grammars, a syntax model for which several interesting mathematical and statistical properties have been investigated, as for instance by Chi (1999), many theoretical problems are still unsolved for the class of PSCFGs. This paper considers a parsing problem that is well understood for probabilistic context-free grammars but that has never been investigated in the context of PSCFGs, viz. the computation of prefix probabilities. In the case of a probabilistic context-free grammar, this problem is defined as follows. We are asked to compute the probability that a sentence generated by our model starts with a prefix string v given as input. This quantity is defined as the (possibly infinite) sum of the probabilities of all strings of the form vw, for any string w over the alphabet of the model. This problem has been studied by Jelinek and Lafferty (1991) and by Stolcke (1995). Prefix probabilities can be used to compute probability distributions for the next word or part-of-speech. This has applications in incremental processing of text or speech from left to right; see again (Jelinek and Lafferty, 1991). Prefix probabilities can also be exploited in speech understanding systems to score partial hypotheses in beam search (Corazza et al., 1991). This paper investigates the problem of computing prefix probabilities for PSCFGs. In this context, a pair of strings v1 and v2 is given as input, and we are asked to compute the probability that any string in the source language starting with prefix v1 is translated into any string in the target language starting with prefix v2. This probability is more precisely defined as the sum of the probabilities of translation pairs of the form [v1w1, v2w2], for any strings w1 and w2. 
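As a toy illustration of this definition, the sketch below sums an explicitly given finite distribution over translation pairs; the point of the method developed in this paper is that the same quantity remains computable when the sum ranges over infinitely many pairs.

```python
# Joint prefix probability by direct summation over a toy finite distribution
# (an assumption made only for the sake of the illustration).

def joint_prefix_probability(dist, v1, v2):
    return sum(p for (w1, w2), p in dist.items()
               if w1.startswith(v1) and w2.startswith(v2))

dist = {("abc", "xy"): 0.5, ("abd", "xz"): 0.3, ("bcd", "xy"): 0.2}
assert abs(joint_prefix_probability(dist, "ab", "x") - 0.8) < 1e-12
```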
A special case of prefix probability for PSCFGs is the right prefix probability. This is defined as the probability that some (complete) input string w in the source language is translated into a string in the target language starting with an input prefix v. 460 Prefix probabilities and right prefix probabilities for PSCFGs can be exploited to compute probability distributions for the next word or part-of-speech in left-to-right incremental translation, essentially in the same way as described by Jelinek and Lafferty (1991) for probabilistic context-free grammars, as discussed later in this paper. Our solution to the problem of computing prefix probabilities is formulated in quite different terms from the solutions by Jelinek and Lafferty (1991) and by Stolcke (1995) for probabilistic context-free grammars. In this paper we reduce the computation of prefix probabilities for PSCFGs to the computation of inside probabilities under the same model. Computation of inside probabilities for PSCFGs is a well-known problem that can be solved using offthe-shelf algorithms that extend basic parsing algorithms. Our reduction is a novel grammar transformation, and the proof of correctness proceeds by fairly conventional techniques from formal language theory, relying on the correctness of standard methods for the computation of inside probabilities for PSCFG. This contrasts with the techniques proposed by Jelinek and Lafferty (1991) and by Stolcke (1995), which are extensions of parsing algorithms for probabilistic context-free grammars, and require considerably more involved proofs of correctness. Our method for computing the prefix probabilities for PSCFGs runs in exponential time, since that is the running time of existing methods for computing the inside probabilities for PSCFGs. It is unlikely this can be improved, because the recognition problem for PSCFG is NP-complete, as established by Satta and Peserico (2005), and there is a straightforward reduction from the recognition problem for PSCFGs to the problem of computing the prefix probabilities for PSCFGs. 2 Definitions In this section we introduce basic definitions related to synchronous context-free grammars and their probabilistic extension; our notation follows Satta and Peserico (2005). Let N and Σ be sets of nonterminal and terminal symbols, respectively. In what follows we need to represent bijections between the occurrences of nonterminals in two strings over N ∪Σ. This is realized by annotating nonterminals with indices from an infinite set. We define I(N) = {A t | A ∈N, t ∈ N} and VI = I(N) ∪Σ. For a string γ ∈V ∗ I , we write index(γ) to denote the set of all indices that appear in symbols in γ. Two strings γ1, γ2 ∈V ∗ I are synchronous if each index from N occurs at most once in γ1 and at most once in γ2, and index(γ1) = index(γ2). Therefore γ1, γ2 have the general form: γ1 = u10A t1 11 u11A t2 12 u12 · · · u1r−1A tr 1r u1r γ2 = u20A tπ(1) 21 u21A tπ(2) 22 u22 · · · u2r−1A tπ(r) 2r u2r where r ≥0, u1i, u2i ∈Σ∗, A ti 1i , A tπ(i) 2i ∈I(N), ti ̸= tj for i ̸= j, and π is a permutation of the set {1, . . . , r}. A synchronous context-free grammar (SCFG) is a tuple G = (N, Σ, P, S), where N and Σ are finite, disjoint sets of nonterminal and terminal symbols, respectively, S ∈N is the start symbol and P is a finite set of synchronous rules. Each synchronous rule has the form s : [A1 →α1, A2 → α2], where A1, A2 ∈N and where α1, α2 ∈V ∗ I are synchronous strings. 
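The synchronous-strings condition can be checked directly. In the sketch below, under an encoding of our own, an indexed nonterminal A^t is the pair ("A", t) and a terminal is a plain string; two strings are synchronous exactly when no index repeats within either string and both use the same index set.

```python
def indices(gamma):
    return [x[1] for x in gamma if isinstance(x, tuple)]

def synchronous(gamma1, gamma2):
    i1, i2 = indices(gamma1), indices(gamma2)
    return (len(i1) == len(set(i1)) and len(i2) == len(set(i2))
            and set(i1) == set(i2))

gamma1 = ["a", ("A", 1), ("B", 2), "b"]
gamma2 = [("B", 2), "c", ("A", 1)]
assert synchronous(gamma1, gamma2)
assert not synchronous(gamma1, [("A", 1)])   # index 2 is missing on the right
```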
The symbol s is the label of the rule, and each rule is uniquely identified by its label. For technical reasons, we allow the existence of multiple rules that are identical apart from their labels. We refer to A1 →α1 and A2 →α2, respectively, as the left and right components of rule s. Example 1 The following synchronous rules implicitly define a SCFG: s1 : [S →A 1 B 2 , S →B 2 A 1 ] s2 : [A →aA 1 b, A →bA 1 a] s3 : [A →ab, A →ba] s4 : [B →cB 1 d, B →dB 1 c] s5 : [B →cd, B →dc] 2 In each step of the derivation process of a SCFG G, two nonterminals with the same index in a pair of synchronous strings are rewritten by a synchronous rule. This is done in such a way that the result is once more a pair of synchronous strings. An auxiliary notion is that of reindexing, which is an injective function f from N to N. We extend f to VI by letting f(A t ) = A f(t) for A t ∈I(N) and f(a) = a for a ∈Σ. We also extend f to strings in V ∗ I by 461 letting f(ε) = ε and f(Xγ) = f(X)f(γ), for each X ∈VI and γ ∈V ∗ I . Let γ1, γ2 be synchronous strings in V ∗ I . The derive relation [γ1, γ2] ⇒G [δ1, δ2] holds whenever there exist an index t in index(γ1) = index(γ2), a synchronous rule s : [A1 →α1, A2 →α2] in P and some reindexing f such that: (i) index(f(α1)) ∩(index(γ1) \ {t}) = ∅; (ii) γ1 = γ′ 1A t 1 γ′′ 1, γ2 = γ′ 2A t 2 γ′′ 2; and (iii) δ1 = γ′ 1f(α1)γ′′ 1, δ2 = γ′ 2f(α2)γ′′ 2. We also write [γ1, γ2] ⇒s G [δ1, δ2] to explicitly indicate that the derive relation holds through rule s. Note that δ1, δ2 above are guaranteed to be synchronous strings, because α1 and α2 are synchronous strings and because of (i) above. Note also that, for a given pair [γ1, γ2] of synchronous strings, an index t and a rule s, there may be infinitely many choices of reindexing f such that the above constraints are satisfied. In this paper we will not further specify the choice of f. We say the pair [A1, A2] of nonterminals is linked (in G) if there is a rule of the form s : [A1 → α1, A2 →α2]. The set of linked nonterminal pairs is denoted by N[2]. A derivation is a sequence σ = s1s2 · · · sd of synchronous rules si ∈P with d ≥0 (σ = ε for d = 0) such that [γ1i−1, γ2i−1] ⇒si G [γ1i, γ2i] for every i with 1 ≤i ≤d and synchronous strings [γ1i, γ2i] with 0 ≤i ≤d. Throughout this paper, we always implicitly assume some canonical form for derivations in G, by demanding for instance that each step rewrites a pair of nonterminal occurrences of which the first is leftmost in the left component. When we want to focus on the specific synchronous strings being derived, we also write derivations in the form [γ10, γ20] ⇒σ G [γ1d, γ2d], and we write [γ10, γ20] ⇒∗ G [γ1d, γ2d] when σ is not further specified. The translation generated by a SCFG G is defined as: T(G) = {[w1, w2] | [S 1 , S 1 ] ⇒∗ G [w1, w2], w1, w2 ∈Σ∗} For w1, w2 ∈Σ∗, we write D(G, [w1, w2]) to denote the set of all (canonical) derivations σ such that [S 1 , S 1 ] ⇒σ G [w1, w2]. Analogously to standard terminology for contextfree grammars, we call a SCFG reduced if every rule occurs in at least one derivation σ ∈ D(G, [w1, w2]), for some w1, w2 ∈Σ∗. We assume without loss of generality that the start symbol S does not occur in the right-hand side of either component of any rule. Example 2 Consider the SCFG G from example 1. 
The following is a canonical derivation in G, since it is always the leftmost nonterminal occurrence in the left component that is involved in a derivation step: [S 1 , S 1 ] ⇒G [A 1 B 2 , B 2 A 1 ] ⇒G [aA 3 bB 2 , B 2 bA 3 a] ⇒G [aaA 4 bbB 2 , B 2 bbA 4 aa] ⇒G [aaabbbB 2 , B 2 bbbaaa] ⇒G [aaabbbcB 5 d, dB 5 cbbbaaa] ⇒G [aaabbbccdd, ddccbbbaaa] It is not difficult to see that the generated translation is T(G) = {[apbpcqdq, dqcqbpap] | p, q ≥1}. 2 The size of a synchronous rule s : [A1 →α1, A2 →α2], is defined as |s| = |A1α1A2α2|. The size of G is defined as |G| = P s∈P |s|. A probabilistic SCFG (PSCFG) is a pair G = (G, pG) where G = (N, Σ, P, S) is a SCFG and pG is a function from P to real numbers in [0, 1]. We say that G is proper if for each pair [A1, A2] ∈N[2] we have: X s:[A1→α1, A2→α2] pG(s) = 1 Intuitively, properness ensures that where a pair of nonterminals in two synchronous strings can be rewritten, there is a probability distribution over the applicable rules. For a (canonical) derivation σ = s1s2 · · · sd, we define pG(σ) = Qd i=1 pG(si). For w1, w2 ∈Σ∗, we also define: pG([w1, w2]) = X σ∈D(G,[w1,w2]) pG(σ) (1) We say a PSCFG is consistent if pG defines a probability distribution over the translation, or formally: X w1,w2 pG([w1, w2]) = 1 462 If the grammar is reduced, proper and consistent, then also: X w1,w2∈Σ∗, σ∈P ∗ s.t. [A 1 1 , A 1 2 ]⇒σ G[w1, w2] pG(σ) = 1 for every pair [A1, A2] ∈N[2]. The proof is identical to that of the corresponding fact for probabilistic context-free grammars. 3 Effective PSCFG parsing If w = a1 · · · an then the expression w[i, j], with 0 ≤i ≤j ≤n, denotes the substring ai+1 · · · aj (if i = j then w[i, j] = ε). In this section, we assume the input is the pair [w1, w2] of terminal strings. The task of a recognizer for SCFG G is to decide whether [w1, w2] ∈T(G). We present a general algorithm for solving the above problem in terms of the specification of a deduction system, following Shieber et al. (1995). The items that are constructed by the system have the form [m1, A1, m′ 1; m2, A2, m′ 2], where [A1, A2] ∈ N[2] and where m1, m′ 1, m2, m′ 2 are non-negative integers such that 0 ≤m1 ≤m′ 1 ≤|w1| and 0 ≤m2 ≤m′ 2 ≤|w2|. Such an item can be derived by the deduction system if and only if: [A 1 1 , A 1 2 ] ⇒∗ G [w1[m1, m′ 1], w2[m2, m′ 2]] The deduction system has one inference rule, shown in figure 1. One of its side conditions has a synchronous rule in P of the form: s : [A1 →u10A t1 11 u11 · · · u1r−1A tr 1r u1r, A2 →u20A tπ(1) 21 u21 · · · u2r−1A tπ(r) 2r u2r] (2) Observe that, in the right-hand side of the two rule components above, nonterminals A1i and A2π−1(i), 1 ≤i ≤r, have both the same index. More precisely, A1i has index ti and A2π−1(i) has index ti′ with i′ = π(π−1(i)) = i. Thus the nonterminals in each antecedent item in figure 1 form a linked pair. We now turn to a computational analysis of the above algorithm. In the inference rule in figure 1 there are 2(r + 1) variables that can be bound to positions in w1, and as many that can be bound to positions in w2. However, the side conditions imply m′ ij = mij + |uij|, for i ∈{1, 2} and 0 ≤j ≤r, and therefore the number of free variables is only r + 1 for each component. By standard complexity analysis of deduction systems, for example following McAllester (2002), the time complexity of a straightforward implementation of the recognition algorithm is O(|P| · |w1|rmax+1 · |w2|rmax+1), where rmax is the maximum number of right-hand side nonterminals in either component of a synchronous rule. 
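Looking back at the derive relation and at the derivation of Example 2, the sketch below applies a sequence of rules of the grammar of Example 1, always rewriting the pair of occurrences whose first member is leftmost in the left component (the canonical form assumed above); the reindexing f simply draws fresh indices. The encoding is ours, chosen only for the illustration.

```python
from itertools import count

RULES = {   # Example 1; an indexed nonterminal A^t is the pair ("A", t)
    "s1": (("S", "S"), ([("A", 1), ("B", 2)], [("B", 2), ("A", 1)])),
    "s2": (("A", "A"), (["a", ("A", 1), "b"], ["b", ("A", 1), "a"])),
    "s3": (("A", "A"), (["a", "b"], ["b", "a"])),
    "s4": (("B", "B"), (["c", ("B", 1), "d"], ["d", ("B", 1), "c"])),
    "s5": (("B", "B"), (["c", "d"], ["d", "c"])),
}

fresh = count(2)    # index 1 is taken by the start pair [S^1, S^1]

def apply_rule(pair, rule):
    (lhs1, lhs2), (a1, a2) = RULES[rule]
    g1, g2 = pair
    t = next(x[1] for x in g1 if isinstance(x, tuple))   # leftmost in left component
    assert (lhs1, t) in g1 and (lhs2, t) in g2
    ren = {}                                             # the reindexing f
    def inst(sym):
        if isinstance(sym, tuple):
            if sym[1] not in ren:
                ren[sym[1]] = next(fresh)
            return (sym[0], ren[sym[1]])
        return sym
    body1, body2 = [inst(x) for x in a1], [inst(x) for x in a2]
    rewrite = lambda g, lhs, body: [y for x in g
                                    for y in (body if x == (lhs, t) else [x])]
    return rewrite(g1, lhs1, body1), rewrite(g2, lhs2, body2)

pair = ([("S", 1)], [("S", 1)])
for r in ["s1", "s2", "s2", "s3", "s4", "s5"]:           # the derivation of Example 2
    pair = apply_rule(pair, r)
assert ["".join(c) for c in pair] == ["aaabbbccdd", "ddccbbbaaa"]
```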
The algorithm therefore runs in exponential time, when the grammar G is considered as part of the input. Such computational behavior seems unavoidable, since the recognition problem for SCFG is NP-complete, as reported by Satta and Peserico (2005). See also Gildea and Stefankovic (2007) and Hopkins and Langmead (2010) for further analysis of the upper bound above. The recognition algorithm above can easily be turned into a parsing algorithm by letting an implementation keep track of which items were derived from which other items, as instantiations of the consequent and the antecedents, respectively, of the inference rule in figure 1. A probabilistic parsing algorithm that computes pG([w1, w2]), defined in (1), can also be obtained from the recognition algorithm above, by associating each item with a probability. To explain the basic idea, let us first assume that each item can be inferred in finitely many ways by the inference rule in figure 1. Each instantiation of the inference rule should be associated with a term that is computed by multiplying the probability of the involved rule s and the product of all probabilities previously associated with the instantiations of the antecedents. The probability associated with an item is then computed as the sum of each term resulting from some instantiation of an inference rule deriving that item. This is a generalization to PSCFG of the inside algorithm defined for probabilistic context-free grammars (Manning and Sch¨utze, 1999), and we can show that the probability associated with item [0, S, |w1| ; 0, S, |w2|] provides the desired value pG([w1, w2]). We refer to the procedure sketched above as the inside algorithm for PSCFGs. However, this simple procedure fails if there are cyclic dependencies, whereby the derivation of an item involves a proper subderivation of the same item. Cyclic dependencies can be excluded if it can 463 [m′ 10, A11, m11; m′ 2π−1(1)−1, A2π−1(1), m2π−1(1)] ... [m′ 1r−1, A1r, m1r; m′ 2π−1(r)−1, A2π−1(r), m2π−1(r)] [m10, A1, m′ 1r; m20, A2, m′ 2r] s:[A1 →u10A t1 11 u11 · · · u1r−1A tr 1r u1r, A2 →u20A tπ(1) 21 u21 · · · u2r−1A tπ(r) 2r u2r] ∈P, w1[m10, m′ 10] = u10, ... w1[m1r, m′ 1r] = u1r, w2[m20, m′ 20] = u20, ... w2[m2r, m′ 2r] = u2r Figure 1: SCFG recognition, by a deduction system consisting of a single inference rule. be guaranteed that, in figure 1, m′ 1r −m10 is greater than m1j −m′ 1j−1 for each j (1 ≤j ≤r), or m′ 2r −m20 is greater than m2j −m′ 2j−1 for each j (1 ≤j ≤r). Consider again a synchronous rule s of the form in (2). We say s is an epsilon rule if r = 0 and u10 = u20 = ϵ. We say s is a unit rule if r = 1 and u10 = u11 = u20 = u21 = ϵ. Similarly to context-free grammars, absence of epsilon rules and unit rules guarantees that there are no cyclic dependencies between items and in this case the inside algorithm correctly computes pG([w1, w2]). Epsilon rules can be eliminated from PSCFGs by a grammar transformation that is very similar to the transformation eliminating epsilon rules from a probabilistic context-free grammar (Abney et al., 1999). This is sketched in what follows. We first compute the set of all nullable linked pairs of nonterminals of the underlying SCFG, that is, the set of all [A1, A2] ∈N[2] such that [A 1 1 , A 1 2 ] ⇒∗ G [ε, ε]. This can be done in linear time O(|G|) using essentially the same algorithm that identifies nullable nonterminals in a context-free grammar, as presented for instance by Sippu and Soisalon-Soininen (1988). 
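The inside computation sketched above can be written directly over the items [m1, A1, m′1; m2, A2, m′2], provided the grammar has no epsilon rules and no unit rules, so that the memoized recursion meets no cyclic dependencies. The rule encoding and the probability table below are ours (any proper assignment would do); the grammar is that of Example 1.

```python
from functools import lru_cache

RULES = [   # (lhs1, lhs2, component1, component2, probability); Example 1
    ("S", "S", [("A", 1), ("B", 2)], [("B", 2), ("A", 1)], 1.0),
    ("A", "A", ["a", ("A", 1), "b"], ["b", ("A", 1), "a"], 0.4),
    ("A", "A", ["ab"], ["ba"], 0.6),
    ("B", "B", ["c", ("B", 1), "d"], ["d", ("B", 1), "c"], 0.3),
    ("B", "B", ["cd"], ["dc"], 0.7),
]

def splits(component, w, i, j):
    """Yield maps {index: (nonterminal, i', j')} covering w[i:j] with the component."""
    if not component:
        if i == j:
            yield {}
        return
    head, rest = component[0], component[1:]
    if isinstance(head, str):                          # a terminal substring
        if i + len(head) <= j and w[i:i + len(head)] == head:
            yield from splits(rest, w, i + len(head), j)
    else:                                              # a nonterminal slot
        for m in range(i, j + 1):
            for tail in splits(rest, w, m, j):
                yield {head[1]: (head[0], i, m), **tail}

def inside_probability(w1, w2):
    @lru_cache(maxsize=None)
    def inside(A1, i, j, A2, k, l):                    # the item [i, A1, j; k, A2, l]
        total = 0.0
        for (B1, B2, a1, a2, p) in RULES:
            if (B1, B2) != (A1, A2):
                continue
            for m1 in splits(a1, w1, i, j):
                for m2 in splits(a2, w2, k, l):
                    prob = p
                    for t in m1:                       # linked pairs share the index t
                        C1, i1, j1 = m1[t]
                        C2, k2, l2 = m2[t]
                        prob *= inside(C1, i1, j1, C2, k2, l2)
                    total += prob
        return total
    return inside("S", 0, len(w1), "S", 0, len(w2))

# The unique derivation of [aabbcd, dcbbaa] uses s1, s2, s3 and s5:
assert abs(inside_probability("aabbcd", "dcbbaa") - 1.0 * 0.4 * 0.6 * 0.7) < 1e-12
```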
Next, we identify all occurrences of nullable pairs [A1, A2] in the right-hand side components of a rule s, such that A1 and A2 have the same index. For every possible choice of a subset U of these occurrences, we add to our grammar a new rule sU constructed by omitting all of the nullable occurrences in U. The probability of sU is computed as the probability of s multiplied by terms of the form: X σ s.t. [A 1 1 ,A 1 2 ]⇒σ G[ε, ε] pG(σ) (3) for every pair [A1, A2] in U. After adding these extra rules, which in effect circumvents the use of epsilongenerating subderivations, we can safely remove all epsilon rules, with the only exception of a possible rule of the form [S →ϵ, S →ϵ]. The translation and the associated probability distribution in the resulting grammar will be the same as those in the source grammar. One problem with the above construction is that we have to create new synchronous rules sU for each possible choice of subset U. In the worst case, this may result in an exponential blow-up of the source grammar. In the case of context-free grammars, this is usually circumvented by casting the rules in binary form prior to epsilon rule elimination. However, this is not possible in our case, since SCFGs do not allow normal forms with a constant bound on the length of the right-hand side of each component. This follows from a result due to Aho and Ullman (1969) for a formalism called syntax directed translation schemata, which is a syntactic variant of SCFGs. An additional complication with our construction is that finding any of the values in (3) may involve solving a system of non-linear equations, similarly to the case of probabilistic context-free grammars; see again Abney et al. (1999), and Stolcke (1995). Approximate solution of such systems might take exponential time, as pointed out by Kiefer et al. (2007). Notwithstanding the worst cases mentioned above, there is a special case that can be easily dealt with. Assume that, for each nullable pair [A1, A2] in G we have that [A 1 1 , A 1 2 ] ⇒∗ G [w1, w2] does not hold for any w1 and w2 with w1 ̸= ε or w2 ̸= ε. Then each of the values in (3) is guaranteed to be 1, and furthermore we can remove the instances of the nullable pairs in the source rule s all at the same time. This means that the overall construction of 464 elimination of nullable rules from G can be implemented in linear time |G|. It is this special case that we will encounter in section 4. After elimination of epsilon rules, one can eliminate unit rules. We define Cunit([A1, A2], [B1, B2]) as the sum of the probabilities of all derivations deriving [B1, B2] from [A1, A2] with arbitrary indices, or more precisely: X σ∈P ∗s.t. ∃t∈N, [A 1 1 , A 1 2 ]⇒σ G[B t 1 , B t 2 ] pG(σ) Note that [A1, A2] may be equal to [B1, B2] and σ may be ε, in which case Cunit([A1, A2], [B1, B2]) is at least 1, but it may be larger if there are unit rules. Therefore Cunit([A1, A2], [B1, B2]) should not be seen as a probability. Consider a pair [A1, A2] ∈N[2] and let all unit rules with left-hand sides A1 and A2 be: s1 : [A1, A2] →[A t1 11 , A t1 21 ] ... sm : [A1, A2] →[A tm 1m , A tm 2m ] The values of Cunit(·, ·) are related by the following: Cunit([A1, A2], [B1, B2]) = δ([A1, A2] = [B1, B2]) + X i pG(si) · Cunit([A1i, A2i], [B1, B2]) where δ([A1, A2] = [B1, B2]) is defined to be 1 if [A1, A2] = [B1, B2] and 0 otherwise. This forms a system of linear equations in the unknown variables Cunit(·, ·). 
Such a system can be solved in polynomial time in the number of variables, for example using Gaussian elimination. The elimination of unit rules starts with adding a rule s′ : [A1 →α1, A2 →α2] for each nonunit rule s : [B1 →α1, B2 →α2] and pair [A1, A2] such that Cunit([A1, A2], [B1, B2]) > 0. We assign to the new rule s′ the probability pG(s) · Cunit([A1, A2], [B1, B2]). The unit rules can now be removed from the grammar. Again, in the resulting grammar the translation and the associated probability distribution will be the same as those in the source grammar. The new grammar has size O(|G|2), where G is the input grammar. The time complexity is dominated by the computation of the solution of the linear system of equations. This computation takes cubic time in the number of variables. The number of variables in this case is O(|G|2), which makes the running time O(|G|6). 4 Prefix probabilities The joint prefix probability pprefix G ([v1, v2]) of a pair [v1, v2] of terminal strings is the sum of the probabilities of all pairs of strings that have v1 and v2, respectively, as their prefixes. Formally: pprefix G ([v1, v2]) = X w1,w2∈Σ∗ pG([v1w1, v2w2]) At first sight, it is not clear this quantity can be effectively computed, as it involves a sum over infinitely many choices of w1 and w2. However, analogously to the case of context-free prefix probabilities (Jelinek and Lafferty, 1991), we can isolate two parts in the computation. One part involves infinite sums, which are independent of the input strings v1 and v2, and can be precomputed by solving a system of linear equations. The second part does rely on v1 and v2, and involves the actual evaluation of pprefix G ([v1, v2]). This second part can be realized effectively, on the basis of the precomputed values from the first part. In order to keep the presentation simple, and to allow for simple proofs of correctness, we solve the problem in a modular fashion. First, we present a transformation from a PSCFG G = (G, pG), with G = (N, Σ, P, S), to a PSCFG Gprefix = (Gprefix, pGprefix), with Gprefix = (Nprefix, Σ, Pprefix, S↓). The latter grammar derives all possible pairs [v1, v2] such that [v1w1, v2w2] can be derived from G, for some w1 and w2. Moreover, pGprefix([v1, v2]) = pprefix G ([v1, v2]), as will be verified later. Computing pGprefix([v1, v2]) directly using a generic probabilistic parsing algorithm for PSCFGs is difficult, due to the presence of epsilon rules and unit rules. The next step will be to transform Gprefix into a third grammar G′ prefix by eliminating epsilon rules and unit rules from the underlying SCFG, and preserving the probability distribution over pairs of strings. Using G′ prefix one can then effectively 465 apply generic probabilistic parsing algorithms for PSCFGs, such as the inside algorithm discussed in section 3, in order to compute the desired prefix probabilities for the source PSCFG G. For each nonterminal A in the source SCFG G, the grammar Gprefix contains three nonterminals, namely A itself, A↓and Aε. The meaning of A remains unchanged, whereas A↓is intended to generate a string that is a suffix of a known prefix v1 or v2. Nonterminals Aε generate only the empty string, and are used to simulate the generation by G of infixes of the unknown suffix w1 or w2. The two lefthand sides of a synchronous rule in Gprefix can contain different combinations of nonterminals of the forms A, A↓, or Aε. The start symbol of Gprefix is S↓. 
The structure of the rules from the source grammar is largely retained, except that some terminal symbols are omitted in order to obtain the intended interpretation of A↓and Aε. In more detail, let us consider a synchronous rule s : [A1 →α1, A2 →α2] from the source grammar, where for i ∈{1, 2} we have: αi = ui0A ti1 i1 ui1 · · · uir−1A tir ir uir The transformed grammar then contains a large number of rules, each of which is of the form s′ : [B1 →β1, B2 →β2], where Bi →βi is of one of three forms, namely Ai →αi, A↓ i →α↓ i or Aε i →αε i, where α↓ i and αε i are explained below. The choices for i = 1 and for i = 2 are independent, so that we can have 3 ∗3 = 9 kinds of synchronous rules, to be further subdivided in what follows. A unique label s′ is produced for each new rule, and the probability of each new rule equals that of s. The right-hand side αε i is constructed by omitting all terminals and propagating downwards the ε superscript, resulting in: αε i = Aε ti1 i1 · · · Aε tir ir It is more difficult to define α↓ i . In fact, there can be a number of choices for α↓ i and, for each choice, the transformed grammar contains an instance of the synchronous rule s′ : [B1 →β1, B2 →β2] as defined above. The reason why different choices need to be considered is because the boundary between the known prefix vi and the unknown suffix wi can occur at different positions, either within a terminal string uij or else further down in a subderivation involving Aij. In the first case, we have for some j (0 ≤j ≤r): α↓ i = ui0A ti1 i1 ui1A ti2 i2 · · · uij−1A tij ij u′ ijA ε tij+1 ij+1 A ε tij+2 ij+2 · · · Aε tir ir where u′ ij is a choice of a prefix of uij. In words, the known prefix ends after u′ ij and, thereafter, no more terminals are generated. We demand that u′ ij must not be the empty string, unless Ai = S and j = 0. The reason for this restriction is that we want to avoid an overlap with the second case. In this second case, we have for some j (1 ≤j ≤r): α↓ i = ui0A ti1 i1 ui1A ti2 i2 · · · uij−1A ↓tij ij A ε tij+1 ij+1 A ε tij+2 ij+2 · · · Aε tir ir Here the known prefix of the input ends within a subderivation involving Aij, and further to the right no more terminals are generated. Example 3 Consider the synchronous rule s : [A →aB 1 bc C 2 d, D →ef E 2 F 1 ]. The first component of a synchronous rule derived from this can be one of the following eight: Aε →Bε 1 Cε 2 A↓→aBε 1 Cε 2 A↓→aB↓1 Cε 2 A↓→aB 1 b Cε 2 A↓→aB 1 bc Cε 2 A↓→aB 1 bc C↓2 A↓→aB 1 bc C 2 d A →aB 1 bc C 2 d The second component can be one of the following six: Dε →Eε 2 F ε 1 D↓→eEε 2 F ε 1 D↓→ef Eε 2 F ε 1 D↓→ef E↓2 F ε 1 D↓→ef E 2 F ↓1 D →ef E 2 F 1 466 In total, the transformed grammar will contain 8 ∗ 6 = 48 synchronous rules derived from s. 2 For each synchronous rule s, the above grammar transformation produces O(|s|) left rule components and as many right rule components. This means the number of new synchronous rules is O(|s|2), and the size of each such rule is O(|s|). If we sum O(|s|3) for every rule s we obtain a time and space complexity of O(|G|3). We now investigate formal properties of our grammar transformation, in order to relate it to prefix probabilities. We define the relation ⊢between P and Pprefix such that s ⊢s′ if and only if s′ was obtained from s by the transformation described above. This is extended in a natural way to derivations, such that s1 · · · sd ⊢s′ 1 · · · s′ d′ if and only if d = d′ and si ⊢s′ i for each i (1 ≤i ≤d). 
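The counts of Example 3 can be reproduced by generating the transformed components explicitly. In the sketch below (our encoding again; the special case that allows u′ij to be empty when Ai = S and j = 0 is left out), the left-hand side of each variant is marked with _down or _eps.

```python
def transformed_components(lhs, comp):
    nts = [k for k, x in enumerate(comp) if isinstance(x, tuple)]
    eps = lambda ks: [(comp[k][0] + "_eps", comp[k][1]) for k in ks]
    out = [(lhs, comp[:]),                    # A      -> alpha, unchanged
           (lhs + "_eps", eps(nts))]          # A_eps  -> epsilon nonterminals only
    for pos, x in enumerate(comp):
        later = [k for k in nts if k > pos]
        if isinstance(x, str):                # the known prefix ends inside u_ij
            for cut in range(1, len(x) + 1):
                out.append((lhs + "_down", comp[:pos] + [x[:cut]] + eps(later)))
        else:                                 # the known prefix ends inside A_ij
            out.append((lhs + "_down",
                        comp[:pos] + [(x[0] + "_down", x[1])] + eps(later)))
    return out

left = transformed_components("A", ["a", ("B", 1), "bc", ("C", 2), "d"])
right = transformed_components("D", ["ef", ("E", 2), ("F", 1)])
assert len(left) == 8 and len(right) == 6     # 8 * 6 = 48 rules, as in Example 3
```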
The formal relation between G and Gprefix is revealed by the following two lemmas. Lemma 1 For each v1, v2, w1, w2 ∈Σ∗and σ ∈P ∗such that [S, S] ⇒σ G [v1w1, v2w2], there is a unique σ′ ∈P ∗ prefix such that [S↓, S↓] ⇒σ′ Gprefix [v1, v2] and σ ⊢σ′. 2 Lemma 2 For each v1, v2 ∈Σ∗and derivation σ′ ∈P ∗ prefix such that [S↓, S↓] ⇒σ′ Gprefix [v1, v2], there is a unique σ ∈P ∗and unique w1, w2 ∈Σ∗ such that [S, S] ⇒σ G [v1w1, v2w2] and σ ⊢σ′. 2 The only non-trivial issue in the proof of Lemma 1 is the uniqueness of σ′. This follows from the observation that the length of v1 in v1w1 uniquely determines how occurrences of left components of rules in P found in σ are mapped to occurrences of left components of rules in Pprefix found in σ′. The same applies to the length of v2 in v2w2 and the right components. Lemma 2 is easy to prove as the structure of the transformation ensures that the terminals that are in rules from P but not in the corresponding rules from Pprefix occur at the end of a string v1 (and v2) to form the longer string v1w1 (and v2w2, respectively). The transformation also ensures that s ⊢s′ implies pG(s) = pGprefix(s′). Therefore σ ⊢σ′ implies pG(σ) = pGprefix(σ′). By this and Lemmas 1 and 2 we may conclude: Theorem 1 pGprefix([v1, v2]) = pprefix G ([v1, v2]). 2 Because of the introduction of rules with left-hand sides of the form Aε in both the left and right components of synchronous rules, it is not straightforward to do effective probabilistic parsing with the grammar Gprefix. We can however apply the transformations from section 3 to eliminate epsilon rules and thereafter eliminate unit rules, in a way that leaves the derived string pairs and their probabilities unchanged. The simplest case is when the source grammar G is reduced, proper and consistent, and has no epsilon rules. The only nullable pairs of nonterminals in Gprefix will then be of the form [Aε 1, Aε 2]. Consider such a pair [Aε 1, Aε 2]. Because of reduction, properness and consistency of G we have: X w1,w2∈Σ∗, σ∈P ∗s.t. [A 1 1 , A 1 2 ]⇒σ G[w1, w2] pG(σ) = 1 Because of the structure of the grammar transformation by which Gprefix was obtained from G, we also have: X σ∈P ∗s.t. [Aε 1 1 , Aε 1 2 ]⇒σ Gprefix[ε, ε] pGprefix(σ) = 1 Therefore pairs of occurrences of Aε 1 and Aε 2 with the same index in synchronous rules of Gprefix can be systematically removed without affecting the probability of the resulting rule, as outlined in section 3. Thereafter, unit rules can be removed to allow parsing by the inside algorithm for PSCFGs. Following the computational analyses for all of the constructions presented in section 3, and for the grammar transformation discussed in this section, we can conclude that the running time of the proposed algorithm for the computation of prefix probabilities is dominated by the running time of the inside algorithm, which in the worst case is exponential in |G|. This result is not unexpected, as already pointed out in the introduction, since the recognition problem for PSCFGs is NP-complete, as established by Satta and Peserico (2005), and there is a straightforward reduction from the recognition problem for PSCFGs to the problem of computing the prefix probabilities for PSCFGs. 467 One should add that, in real world machine translation applications, it has been observed that recognition (and computation of inside probabilities) for SCFGs can typically be carried out in low-degree polynomial time, and the worst cases mentioned above are not observed with real data. 
Further discussion on this issue is due to Zhang et al. (2006). 5 Discussion We have shown that the computation of joint prefix probabilities for PSCFGs can be reduced to the computation of inside probabilities for the same model. Our reduction relies on a novel grammar transformation, followed by elimination of epsilon rules and unit rules. Next to the joint prefix probability, we can also consider the right prefix probability, which is defined by: pr−prefix G ([v1, v2]) = X w pG([v1, v2w]) In words, the entire left string is given, along with a prefix of the right string, and the task is to sum the probabilities of all string pairs for different suffixes following the given right prefix. This can be computed as a special case of the joint prefix probability. Concretely, one can extend the input and the grammar by introducing an end-of-sentence marker $. Let G′ be the underlying SCFG grammar after the extension. Then: pr−prefix G ([v1, v2]) = pprefix G′ ([v1$, v2]) Prefix probabilities and right prefix probabilities for PSCFGs can be exploited to compute probability distributions for the next word or part-of-speech in left-to-right incremental translation of speech, or alternatively as a predictive tool in applications of interactive machine translation, of the kind described by Foster et al. (2002). We provide some technical details here, generalizing to PSCFGs the approach by Jelinek and Lafferty (1991). Let G = (G, pG) be a PSCFG, with Σ the alphabet of terminal symbols. We are interested in the probability that the next terminal in the target translation is a ∈Σ, after having processed a prefix v1 of the source sentence and having produced a prefix v2 of the target translation. This can be computed as: pr−word G (a | [v1, v2]) = pprefix G ([v1, v2a]) pprefix G ([v1, v2]) Two considerations are relevant when applying the above formula in practice. First, the computation of pprefix G ([v1, v2a]) need not be computed from scratch if pprefix G ([v1, v2]) has been computed already. Because of the tabular nature of the inside algorithm, one can extend the table for pprefix G ([v1, v2]) by adding new entries to obtain the table for pprefix G ([v1, v2a]). The same holds for the computation of pprefix G ([v1b, v2]). Secondly, the computation of pprefix G ([v1, v2a]) for all possible a ∈Σ may be impractical. However, one may also compute the probability that the next part-of-speech in the target translation is A. This can be realised by adding a rule s′ : [B →b, A →cA] for each rule s : [B →b, A →a] from the source grammar, where A is a nonterminal representing a part-of-speech and cA is a (pre-)terminal specific to A. The probability of s′ is the same as that of s. If G′ is the underlying SCFG after adding such rules, then the required value is pprefix G′ ([v1, v2 cA]). One variant of the definitions presented in this paper is the notion of infix probability, which is useful in island-driven speech translation. Here we are interested in the probability that any string in the source language with infix v1 is translated into any string in the target language with infix v2. However, just as infix probabilities are difficult to compute for probabilistic context-free grammars (Corazza et al., 1991; Nederhof and Satta, 2008) so (joint) infix probabilities are difficult to compute for PSCFGs. The problem lies in the possibility that a given infix may occur more than once in a string in the language. 
The computation of infix probabilities can be reduced to that of solving non-linear systems of equations, which can be approximated using for instance Newton’s algorithm. However, such a system of equations is built from the input strings, which entails that the computational effort of solving the system primarily affects parse time rather than parsergeneration time. 468 References S. Abney, D. McAllester, and F. Pereira. 1999. Relating probabilistic grammars and automata. In 37th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, pages 542–549, Maryland, USA, June. A.V. Aho and J.D. Ullman. 1969. Syntax directed translations and the pushdown assembler. Journal of Computer and System Sciences, 3:37–56. Z. Chi. 1999. Statistical properties of probabilistic context-free grammars. Computational Linguistics, 25(1):131–160. D. Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228. A. Corazza, R. De Mori, R. Gretter, and G. Satta. 1991. Computation of probabilities for an islanddriven parser. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(9):936–950. G. Foster, P. Langlais, and G. Lapalme. 2002. Userfriendly text prediction for translators. In Conference on Empirical Methods in Natural Language Processing, pages 148–155, University of Pennsylvania, Philadelphia, PA, USA, July. M. Galley, M. Hopkins, K. Knight, and D. Marcu. 2004. What’s in a translation rule? In HLT-NAACL 2004, Proceedings of the Main Conference, Boston, Massachusetts, USA, May. D. Gildea and D. Stefankovic. 2007. Worst-case synchronous grammar rules. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics, Proceedings of the Main Conference, pages 147– 154, Rochester, New York, USA, April. M. Hopkins and G. Langmead. 2010. SCFG decoding without binarization. In Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference, pages 646–655, October. F. Jelinek and J.D. Lafferty. 1991. Computation of the probability of initial substring generation by stochastic context-free grammars. Computational Linguistics, 17(3):315–323. S. Kiefer, M. Luttenberger, and J. Esparza. 2007. On the convergence of Newton’s method for monotone systems of polynomial equations. In Proceedings of the 39th ACM Symposium on Theory of Computing, pages 217–266. C.D. Manning and H. Sch¨utze. 1999. Foundations of Statistical Natural Language Processing. MIT Press. D. McAllester. 2002. On the complexity analysis of static analyses. Journal of the ACM, 49(4):512–537. M.-J. Nederhof and G. Satta. 2008. Computing partition functions of PCFGs. Research on Language and Computation, 6(2):139–162. G. Satta and E. Peserico. 2005. Some computational complexity results for synchronous context-free grammars. In Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 803–810. S.M. Shieber, Y. Schabes, and F.C.N. Pereira. 1995. Principles and implementation of deductive parsing. Journal of Logic Programming, 24:3–36. S. Sippu and E. Soisalon-Soininen. 1988. Parsing Theory, Vol. I: Languages and Parsing, volume 15 of EATCS Monographs on Theoretical Computer Science. Springer-Verlag. A. Stolcke. 1995. An efficient probabilistic context-free parsing algorithm that computes prefix probabilities. Computational Linguistics, 21(2):167–201. D. Wu. 1997. 
Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377–404. Hao Zhang, Liang Huang, Daniel Gildea, and Kevin Knight. 2006. Synchronous binarization for machine translation. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 256–263, New York, USA, June. 469
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 470–480, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics A Comparison of Loopy Belief Propagation and Dual Decomposition for Integrated CCG Supertagging and Parsing Michael Auli School of Informatics University of Edinburgh [email protected] Adam Lopez HLTCOE Johns Hopkins University [email protected] Abstract Via an oracle experiment, we show that the upper bound on accuracy of a CCG parser is significantly lowered when its search space is pruned using a supertagger, though the supertagger also prunes many bad parses. Inspired by this analysis, we design a single model with both supertagging and parsing features, rather than separating them into distinct models chained together in a pipeline. To overcome the resulting increase in complexity, we experiment with both belief propagation and dual decomposition approaches to inference, the first empirical comparison of these algorithms that we are aware of on a structured natural language processing problem. On CCGbank we achieve a labelled dependency F-measure of 88.8% on gold POS tags, and 86.7% on automatic part-of-speeoch tags, the best reported results for this task. 1 Introduction Accurate and efficient parsing of Combinatorial Categorial Grammar (CCG; Steedman, 2000) is a longstanding problem in computational linguistics, due to the complexities associated its mild context sensitivity. Even for practical CCG that are strongly context-free (Fowler and Penn, 2010), parsing is much harder than with Penn Treebank-style contextfree grammars, with vast numbers of nonterminal categories leading to increased grammar constants. Where a typical Penn Treebank grammar may have fewer than 100 nonterminals (Hockenmaier and Steedman, 2002), we found that a CCG grammar derived from CCGbank contained over 1500. The same grammar assigns an average of 22 lexical categories per word (Clark and Curran, 2004a), resulting in an enormous space of possible derivations. The most successful approach to CCG parsing is based on a pipeline strategy (§2). First, we tag (or multitag) each word of the sentence with a lexical category using a supertagger, a sequence model over these categories (Bangalore and Joshi, 1999; Clark, 2002). Second, we parse the sentence under the requirement that the lexical categories are fixed to those preferred by the supertagger. Variations on this approach drive the widely-used, broad coverage C&C parser (Clark and Curran, 2004a; Clark and Curran, 2007; Kummerfeld et al., 2010). However, it fails when the supertagger makes errors. We show experimentally that this pipeline significantly lowers the upper bound on parsing accuracy (§3). The same experiment shows that the supertagger prunes many bad parses. So, while we want to avoid the error propagation inherent to a pipeline, ideally we still want to benefit from the key insight of supertagging: that a sequence model over lexical categories can be quite accurate. Our solution is to combine the features of both the supertagger and the parser into a single, less aggressively pruned model. The challenge with this model is its prohibitive complexity, which we address with approximate methods: dual decomposition and belief propagation (§4). We present the first side-by-side comparison of these algorithms on an NLP task of this complexity, measuring accuracy, convergence behavior, and runtime. 
In both cases our model significantly outperforms the pipeline approach, leading to the best published results in CCG parsing (§5). 470 2 CCG and Supertagging CCG is a lexicalized grammar formalism encoding for each word lexical categories that are either basic (eg. NN, JJ) or complex. Complex lexical categories specify the number and directionality of arguments. For example, one lexical category for the verb like is (S\NP)/NP, specifying the first argument as an NP to the right and the second as an NP to the left; there are over 100 lexical categories for like in our lexicon. In parsing, adjacent spans are combined using a small number of binary combinatory rules like forward application or composition (Steedman, 2000; Fowler and Penn, 2010). In the first derivation below, (S\NP)/NP and NP combine to form the spanning category S\NP, which only requires an NP to its left to form a complete sentence-spanning S. The second derivation uses type-raising to change the category type of I. I like tea NP (S\NP)/NP NP > S\NP < S I like tea NP (S\NP)/NP NP >T S/(S\NP) >B S/NP > S As can be inferred from even this small example, a key difficulty in parsing CCG is that the number of categories quickly becomes extremely large, and there are typically many ways to analyze every span of a sentence. Supertagging (Bangalore and Joshi, 1999; Clark, 2002) treats the assignment of lexical categories (or supertags) as a sequence tagging problem. Because they do this with high accuracy, they are often exploited to prune the parser’s search space: the parser only considers lexical categories with high posterior probability (or other figure of merit) under the supertagging model (Clark and Curran, 2004a). The posterior probabilities are then discarded; it is the extensive pruning of lexical categories that leads to substantially faster parsing times. Pruning the categories in advance this way has a specific failure mode: sometimes it is not possible to produce a sentence-spanning derivation from the tag sequences preferred by the supertagger, since it does not enforce grammaticality. A workaround for this problem is the adaptive supertagging (AST) approach of Clark and Curran (2004a). It is based on a step function over supertagger beam widths, relaxing the pruning threshold for lexical categories only if the parser fails to find an analysis. The process either succeeds and returns a parse after some iteration or gives up after a predefined number of iterations. As Clark and Curran (2004a) show, most sentences can be parsed with a very small number of supertags per word. However, the technique is inherently approximate: it will return a lower probability parse under the parsing model if a higher probability parse can only be constructed from a supertag sequence returned by a subsequent iteration. In this way it prioritizes speed over exactness, although the tradeoff can be modified by adjusting the beam step function. Regardless, the effect of the approximation is unbounded. We will also explore reverse adaptive supertagging, a much less aggressive pruning method that seeks only to make sentences parseable when they otherwise would not be due to an impractically large search space. Reverse AST starts with a wide beam, narrowing it at each iteration only if a maximum chart size is exceeded. In this way it prioritizes exactness over speed. 3 Oracle Parsing What is the effect of these approximations? 
To answer this question we computed oracle best and worst values for labelled dependency F-score using the algorithm of Huang (2008) on the hybrid model of Clark and Curran (2007), the best model of their C&C parser. We computed the oracle on our development data, Section 00 of CCGbank (Hockenmaier and Steedman, 2007), using both AST and Reverse AST beams settings shown in Table 1. The results (Table 2) show that the oracle best accuracy for reverse AST is more than 3% higher than the aggressive AST pruning.1 In fact, it is almost as high as the upper bound oracle accuracy of 97.73% obtained using perfect supertags—in other words, the search space for reverse AST is theoretically near-optimal.2 We also observe that the oracle 1The numbers reported here and in later sections differ slightly from those in a previously circulated draft of this paper, for two reasons: we evaluate only on sentences for which a parse was returned instead of all parses, to enable direct comparison with Clark and Curran (2007); and we use their hybrid model instead of their normal-form model, except where noted. Despite these changes our main findings remained unchanged. 2This idealized oracle reproduces a result from Clark and Cur471 Condition Parameter Iteration 1 2 3 4 5 AST β (beam width) 0.075 0.03 0.01 0.005 0.001 k (dictionary cutoff) 20 20 20 20 150 Reverse β 0.001 0.005 0.01 0.03 0.075 k 150 20 20 20 20 Table 1: Beam step function used for standard (AST) and less aggressive (Reverse) AST throughout our experiments. Parameter β is a beam threshold while k bounds the use of a part-of-speech tag dictionary, which is used for words seen less than k times. Viterbi F-score Oracle Max F-score Oracle Min F-score LF LP LR LF LP LR LF LP LR cat/word AST 87.38 87.83 86.93 94.35 95.24 93.49 54.31 54.81 53.83 1.3-3.6 Reverse 87.36 87.55 87.17 97.65 98.21 97.09 18.09 17.75 18.43 3.6-1.3 Table 2: Comparison of adaptive supertagging (AST) and a less restrictive setting (Reverse) with Viterbi and oracle F-scores on CCGbank Section 00. The table shows the labelled F-score (LF), precision (LP) and recall (LR) and the the number of lexical categories per word used (from first to last parsing attempt). 88.2
Figure 1: Comparison between model score and Viterbi F-score (left); and between model score and oracle F-score (right) for different supertagger beams on a subset of CCGbank Section 00. worst accuracy is much lower in the reverse setting. It is clear that the supertagger pipeline has two effects: while it beneficially prunes many bad parses, it harmfully prunes some very good parses. We can also see from the scores of the Viterbi parses that while the reverse condition has access to much better parses, the model doesn’t actually find them. This mirrors the result of Clark and Curran (2007) that they use to justify AST. Digging deeper, we compared parser model score against Viterbi F-score and oracle F-score at a varan (2004b). The reason that using the gold-standard supertags doesn’t result in 100% oracle parsing accuracy is that some of the development set parses cannot be constructed by the learned grammar. riety of fixed beam settings (Figure 1), considering only the subset of our development set which could be parsed with all beam settings. The inverse relationship between model score and F-score shows that the supertagger restricts the parser to mostly good parses (under F-measure) that the model would otherwise disprefer. Exactly this effect is exploited in the pipeline model. However, when the supertagger makes a mistake, the parser cannot recover. 4 Integrated Supertagging and Parsing The supertagger obviously has good but not perfect predictive features. An obvious way to exploit this without being bound by its decisions is to incorporate these features directly into the parsing model. 472 In our case both the parser and the supertagger are feature-based models, so from the perspective of a single parse tree, the change is simple: the tree is simply scored by the weights corresponding to all of its active features. However, since the features of the supertagger are all Markov features on adjacent supertags, the change has serious implications for search. If we think of the supertagger as defining a weighted regular language consisting of all supertag sequences, and the parser as defining a weighted mildly context-sensitive language consisting of only a subset of these sequences, then the search problem is equivalent to finding the optimal derivation in the weighted intersection of a regular and mildly context-sensitive language. Even allowing for the observation of Fowler and Penn (2010) that our practical CCG is context-free, this problem still reduces to the construction of Bar-Hillel et al. (1964), making search very expensive. Therefore we need approximations. Fortunately, recent literature has introduced two relevant approximations to the NLP community: loopy belief propagation (Pearl, 1988), applied to dependency parsing by Smith and Eisner (2008); and dual decomposition (Dantzig and Wolfe, 1960; Komodakis et al., 2007; Sontag et al., 2010, inter alia), applied to dependency parsing by Koo et al. (2010) and lexicalized CFG parsing by Rush et al. (2010). We apply both techniques to our integrated supertagging and parsing model. 4.1 Loopy Belief Propagation Belief propagation (BP) is an algorithm for computing marginals (i.e. expectations) on structured models. These marginals can be used for decoding (parsing) in a minimum-risk framework (Smith and Eisner, 2008); or for training using a variety of algorithms (Sutton and McCallum, 2010). We experiment with both uses in §5. 
Many researchers in NLP are familiar with two special cases of belief propagation: the forward-backward and inside-outside algorithms, used for computing expectations in sequence models and context-free grammars, respectively.3 Our use of belief propagation builds directly on these two familiar algorithms. 3Forward-backward and inside-outside are formally shown to be special cases of belief propagation by Smyth et al. (1997) and Sato (2007), respectively. f(T1) f(T2) b(T0) b(T1) t1 T0 T1 t2 T2 e0 e1 e2 Figure 2: Supertagging factor graph with messages. Circles are variables and filled squares are factors. BP is usually understood as an algorithm on bipartite factor graphs, which structure a global function into local functions over subsets of variables (Kschischang et al., 1998). Variables maintain a belief (expectation) over a distribution of values and BP passes messages about these beliefs between variables and factors. The idea is to iteratively update each variable’s beliefs based on the beliefs of neighboring variables (through a shared factor), using the sum-product rule. This results in the following equation for a message mx→f(x) from a variable x to a factor f mx→f(x) = Y h∈n(x)\f mh→x(x) (1) where n(x) is the set of all neighbours of x. The message mf→x from a factor to a variable is mf→x(x) = X ∼{x} f(X) Y y∈n(f)\x my→f(y) (2) where ∼{x} represents all variables other than x, X = n(f) and f(X) is the set of arguments of the factor function f. Making this concrete, our supertagger defines a distribution over tags T0...TI, based on emission factors e0...eI and transition factors t1...tI (Figure 2). The message fi a variable Ti receives from its neighbor to the left corresponds to the forward probability, while messages from the right correspond to backward probability bi. fi(Ti) = X Ti−1 fi−1(Ti−1)ei−1(Ti−1)ti(Ti−1, Ti) (3) bi(Ti) = X Ti+1 bi+1(Ti+1)ei+1(Ti+1)ti+1(Ti, Ti+1) (4) 473 span (0,2) span (1,3) span (0,3) TREE n(T0) o(T2) o(T0) n(T2) T0 e0 e1 e2 T1 T2 f(T1) f(T2) b(T0) b(T1) t1 t2 Figure 3: Factor graph for the combined parsing and supertagging model. The current belief Bx(x) for variable x can be computed by taking the normalized product of all its incoming messages. Bx(x) = 1 Z Y h∈n(x) mh→x(x) (5) In the supertagger model, this is just: p(Ti) = 1 Z fi(Ti)bi(Ti)ei(Ti) (6) Our parsing model is also a distribution over variables Ti, along with an additional quadratic number of span(i, j) variables. Though difficult to represent pictorially, a distribution over parses is captured by an extension to graphical models called case-factor diagrams (McAllester et al., 2008). We add this complex distribution to our model as a single factor (Figure 3). This is a natural extension to the use of complex factors described by Smith and Eisner (2008) and Dreyer and Eisner (2009). When a factor graph is a tree as in Figure 2, BP converges in a single iteration to the exact marginals. However, when the model contains cycles, as in Figure 3, we can iterate message passing. Under certain assumptions this loopy BP it will converge to approximate marginals that are bounded under an interpretation from statistical physics (Yedidia et al., 2001; Sutton and McCallum, 2010). The TREE factor exchanges inside ni and outside oi messages with the tag and span variables, taking into account beliefs from the sequence model. 
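As a concrete illustration of the sequence-model side of these computations, the following minimal numpy sketch (our own rendering with an assumed array layout, not the parser's implementation) computes the forward and backward messages of Equations 3 and 4 and the supertag beliefs of Equation 6, optionally folding in outside messages o_i from the TREE factor as in the combined model described next.

```python
import numpy as np

def supertag_beliefs(e, t, o=None):
    """Forward/backward messages and supertag beliefs.

    e : (I, K) array of emission factors e_i(T_i) over K supertags
    t : (I, K, K) array of transition factors t_i(T_{i-1}, T_i); t[0] is unused
    o : optional (I, K) array of outside messages from the TREE factor
    """
    I, K = e.shape
    out = np.ones((I, K)) if o is None else o
    f = np.ones((I, K))                          # forward messages, Eq. 3
    b = np.ones((I, K))                          # backward messages, Eq. 4
    for i in range(1, I):
        f[i] = (f[i - 1] * e[i - 1] * out[i - 1]) @ t[i]
    for i in range(I - 2, -1, -1):
        b[i] = t[i + 1] @ (b[i + 1] * e[i + 1] * out[i + 1])
    belief = f * b * e * out                     # Eq. 6 (Eq. 8 once o is supplied)
    return belief / belief.sum(axis=1, keepdims=True)

# e.g.: beliefs = supertag_beliefs(np.random.rand(5, 4), np.random.rand(5, 4, 4))
```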
We will omit the unchanged outside recursion for brevity, but inside messages n(Ci,j) for category Ci,j in span(i, j) are computed using rule probabilities r as follows: n(Ci,j) = fi(Ci,j)bi(Ci,j)ei(Ci,j) if j=i+1 X k,X,Y n(Xi,k)n(Yk,j)r(Ci,j, Xi,k, Yk,j) (7) Note that the only difference from the classic inside algorithm is that the recursive base case of a category spanning a single word has been replaced by a message from the supertag that contains both forward and backward factors, along with a unary emission factor, which doubles as a unary rule factor and thus contains the only shared features of the original models. This difference is also mirrored in the forward and backward messages, which are identical to Equations 3 and 4, except that they also incorporate outside messages from the tree factor. Once all forward-backward and inside-outside probabilities have been calculated the belief of supertag Ti can be computed as the product of all incoming messages. The only difference from Equation 6 is the addition of the outside message. p(Ti) = 1 Z fi(Ti)bi(Ti)ei(Ti)oi(Ti) (8) The algorithm repeatedly runs forward-backward and inside-outside, passing their messages back and forth, until these quantities converge. 4.2 Dual Decomposition Dual decomposition (Rush et al., 2010; Koo et al., 2010) is a decoding (i.e. search) algorithm for problems that can be decomposed into exactly solvable subproblems: in our case, supertagging and parsing. Formally, given Y as the set of valid parses, Z as the set of valid supertag sequences, and T as the set of supertags, we want to solve the following optimization for parser f(y) and supertagger g(z). arg max y∈Y,z∈Z f(y) + g(z) (9) such that y(i, t) = z(i, t) for all (i, t) ∈I (10) Here y(i, t) is a binary function indicating whether word i is assigned supertag t by the parser, for the 474 set I = {(i, t) : i ∈1 . . . n, t ∈T} denoting the set of permitted supertags for each word; similarly z(i, t) for the supertagger. To enforce the constraint that the parser and supertagger agree on a tag sequence we introduce Lagrangian multipliers u = {u(i, t) : (i, t) ∈I} and construct a dual objective over variables u(i, t). L(u) = max y∈Y (f(y) − X i,t u(i, t)y(i, t)) (11) + max z∈Z (f(z) + X i,t u(i, t)z(i, t)) This objective is an upper bound that we want to make as tight as possible by solving for minu L(u). We optimize the values of the u(i, t) variables using the same algorithm as Rush et al. (2010) for their tagging and parsing problem (essentially a perceptron update).4 An advantages of DD is that, on convergence, it recovers exact solutions to the combined problem. However, if it does not converge or we stop early, an approximation must be returned: following Rush et al. (2010) we used the highest scoring output of the parsing submodel over all iterations. 5 Experiments Parser. We use the C&C parser (Clark and Curran, 2007) and its supertagger (Clark, 2002). Our baseline is the hybrid model of Clark and Curran (2007); our integrated model simply adds the supertagger features to this model. The parser relies solely on the supertagger for pruning, using CKY for search over the pruned space. Training requires repeated calculation of feature expectations over packed charts of derivations. For training, we limited the number of items in this chart to 0.3 million, and for testing, 1 million. We also used a more permissive training supertagger beam (Table 3) than in previous work (Clark and Curran, 2007). Models were trained with the parser’s L-BFGS trainer. 
Evaluation. We evaluated on CCGbank (Hockenmaier and Steedman, 2007), a right-most normalform CCG version of the Penn Treebank. We use sections 02-21 (39603 sentences) for training, 4The u terms can be interpreted as the messages from factors to variables (Sontag et al., 2010) and the resulting message passing algorithms are similar to the max-product algorithm, a sister algorithm to BP. section 00 (1913 sentences) for development and section 23 (2407 sentences) for testing. We supply gold-standard part-of-speech tags to the parsers. Evaluation is based on labelled and unlabelled predicate argument structure recovery and supertag accuracy. We only evaluate on sentences for which an analysis was returned; the coverage for all parsers is 99.22% on section 00, and 99.63% on section 23. Model combination. We combine the parser and the supertagger over the search space defined by the set of supertags within the supertagger beam (see Table 1); this avoids having to perform inference over the prohibitively large set of parses spanned by all supertags. Hence at each beam setting, the model operates over the same search space as the baseline; the difference is that we search with our integrated model. 5.1 Parsing Accuracy We first experiment with the separately trained supertagger and parser, which are then combined using belief propagation (BP) and dual decomposition (DD). We run the algorithms for many iterations, and irrespective of convergence, for BP we compute the minimum risk parse from the current marginals, and for DD we choose the highest-scoring parse seen over all iterations. We measured the evolving accuracy of the models on the development set (Figure 4). In line with our oracle experiment, these results demonstrate that we can coax more accurate parses from the larger search space provided by the reverse setting; the influence of the supertagger features allow us to exploit this advantage. One behavior we observe in the graph is that the DD results tend to incrementally improve in accuracy while the BP results quickly stabilize, mirroring the result of Smith and Eisner (2008). This occurs because DD continues to find higher scoring parses at each iteration, and hence the results change. However for BP, even if the marginals have not converged, the minimum risk solution turns out to be fairly stable across successive iterations. We next compare the algorithms against the baseline on our test set (Table 4). We find that the early stability of BP’s performance generalises to the test set as does DD’s improvement over several iterations. More importantly, we find that the applying 475 Parameter Iteration 1 2 3 4 5 6 7 Training β 0.001 0.001 0.0045 0.0055 0.01 0.05 0.1 k 150 20 20 20 20 20 20 Table 3: Beam step function used for training (cf. Table 1). section 00 (dev) section 23 (test) AST Reverse AST Reverse LF UF ST LF UF ST LF UF ST LF UF ST Baseline 87.38 93.08 94.21 87.36 93.13 93.99 87.73 93.09 94.33 87.65 93.06 94.01 C&C ’07 87.24 93.00 94.16 87.64 93.00 94.32 BPk=1 87.70 93.28 94.44 88.35 93.69 94.73 88.20 93.28 94.60 88.78 93.66 94.81 BPk=25 87.70 93.31 94.44 88.33 93.72 94.71 88.19 93.27 94.59 88.80 93.68 94.81 DDk=1 87.40 93.09 94.23 87.38 93.15 94.03 87.74 93.10 94.33 87.67 93.07 94.02 DDk=25 87.71 93.32 94.44 88.29 93.71 94.67 88.14 93.24 94.59 88.80 93.68 94.82 Table 4: Results for individually-trained submodels combined using dual decomposition (DD) or belief propagation (BP) for k iterations, evaluated by labelled and unlabelled F-score (LF/UF) and supertag accuracy (ST). 
We compare against the previous best result of Clark and Curran (2007); our baseline is their model with wider training beams (cf. Table 3). 87.2
[Figure 4 plot: labelled F-score over iterations 1 to 46, with six curves for BL, BP, and DD, each under the AST and Reverse settings.]
Figure 4: Labelled F-score of baseline (BL), belief propagation (BP), and dual decomposition (DD) on section 00. our combined model using either algorithm consistently outperforms the baseline after only a few iterations. Overall, we improve the labelled F-measure by almost 1.1% and unlabelled F-measure by 0.6% over the baseline. To the best of our knowledge, the results obtained with BP and DD are the best reported results on this task using gold POS tags. Next, we evaluate performance when using automatic part-of-speech tags as input to our parser and supertagger (Table 5). This enables us to compare against the results of Fowler and Penn (2010), who trained the Petrov parser (Petrov et al., 2006) on CCGbank. We outperform them on all criteria. Hence our combined model represents the best CCG parsing results under any setting. Finally, we revisit the oracle experiment of §3 using our combined models (Figure 5). Both show an improved relationship between model score and Fmeasure. 5.2 Algorithmic Convergence Figure 4 shows that parse accuracy converges after a few iterations. Do the algorithms converge? BP converges when the marginals do not change between iterations, and DD converges when both submodels agree on all supertags. We measured the convergence of each algorithm under these criteria over 1000 iterations (Figure 6). DD converges much faster, while BP in the reverse condition converges quite slowly. This is interesting when contrasted with its behavior on parse accuracy—its rate of convergence after one iteration is 1.5%, but its accuracy is already the highest at this point. Over the entire 1000 iterations, most sentences converge: all but 3 for BP (both in AST and reverse) and all but 476 section 00 (dev) section 23 (test) LF LP LR UF UP UR LF LP LR UF UP UR Baseline 85.53 85.73 85.33 91.99 92.20 91.77 85.74 85.90 85.58 91.92 92.09 91.75 Petrov I-5 85.79 86.09 85.50 92.44 92.76 92.13 86.01 86.29 85.74 92.34 92.64 92.04 BPk=1 86.44 86.74 86.14 92.54 92.86 92.23 86.73 86.95 86.50 92.45 92.69 92.21 DDk=25 86.35 86.65 86.05 92.52 92.85 92.20 86.68 86.90 86.46 92.44 92.67 92.21 Table 5: Results on automatically assigned POS tags. Petrov I-5 is based on the parser output of Fowler and Penn (2010); we evaluate on sentences for which all parsers returned an analysis (2323 sentences for section 23 and 1834 sentences for section 00). 89.4
Figure 5: Comparison between model score and Viterbi F-score for the integrated model using belief propagation (left) and dual decomposition (right); the results are based on the same data as Figure 1. . !" #!" $!" %!" &!" '!" (!" )!" *!" +!" #!!" #" #!" #!!" #!!!" !"#$%&'%#(%)&*+%),-.) /+%&*0"#1) ,-"./0" ,-"1232452" 66"./0" 66"1232452" Figure 6: Rate of convergence for belief propagation (BP) and dual decomposition (DD) with maximum k = 1000. 41 (2.6%) for DD in reverse (6 in AST). 5.3 Parsing Speed Because the C&C parser with AST is very fast, we wondered about the effect on speed for our model. We measured the runtime of the algorithms under the condition that we stopped at a particular iteration (Table 6). Although our models improve substantially over C&C, there is a significant cost in speed for the best result. 5.4 Training the Integrated Model In the experiments reported so far, the parsing and supertagging models were trained separately, and only combined at test time. Although the outcome of these experiments was successful, we wondered if we could obtain further improvements by training the model parameters together. Since the gradients produced by (loopy) BP are approximate, for these experiments we used a stochastic gradient descent (SGD) trainer (Bottou, 2003). We found that the SGD parameters described by Finkel et al. (2008) worked equally well for our models, and, on the baseline, produced similar results to L-BFGS. Curiously, however, we found that the combined model does not perform as well when 477 AST Reverse sent/sec LF sent/sec LF Baseline 65.8 87.38 5.9 87.36 BPk=1 60.8 87.70 5.8 88.35 BPk=5 46.7 87.70 4.7 88.34 BPk=25 35.3 87.70 3.5 88.33 DDk=1 64.6 87.40 5.9 87.38 DDk=5 41.9 87.65 3.1 88.09 DDk=25 32.5 87.71 1.9 88.29 Table 6: Parsing time in seconds per sentence (vs. Fmeasure) on section 00. AST Reverse LF UF ST LF UF ST Baseline 86.7 92.7 94.0 86.7 92.7 93.9 BP inf 86.8 92.8 94.1 87.2 93.1 94.2 BP train 86.3 92.5 93.8 85.6 92.1 93.2 Table 7: Results of training with SGD on approximate gradients from LPB on section 00. We test LBP in both inference and training (train) as well as in inference only (inf); a maximum number of 10 iterations is used. the parameters are trained together (Table 7). A possible reason for this is that we used a stricter supertagger beam setting during training (Clark and Curran, 2007) to make training on a single machine practical. This leads to lower performance, particularly in the Reverse condition. Training a model using DD would require a different optimization algorithm based on Viterbi results (e.g. the perceptron) which we will pursue in future work. 6 Conclusion and Future Work Our approach of combining models to avoid the pipeline problem (Felzenszwalb and McAllester, 2007) is very much in line with much recent work in NLP. Such diverse topics as machine translation (Dyer et al., 2008; Dyer and Resnik, 2010; Mi et al., 2008), part-of-speech tagging (Jiang et al., 2008), named entity recognition (Finkel and Manning, 2009) semantic role labelling (Sutton and McCallum, 2005; Finkel et al., 2006), and others have also been improved by combined models. Our empirical comparison of BP and DD also complements the theoretically-oriented comparison of marginal- and margin-based variational approximations for parsing described by Martins et al. (2010). We have shown that the aggressive pruning used in adaptive supertagging significantly harms the oracle performance of the parser, though it mostly prunes bad parses. 
Based on these findings, we combined parser and supertagger features into a single model. Using belief propagation and dual decomposition, we obtained more principled—and more accurate—approximations than a pipeline. Models combined using belief propagation achieve very good performance immediately, despite an initial convergence rate just over 1%, while dual decomposition produces comparable results after several iterations, and algorithmically converges more quickly. Our best result of 88.8% represents the state-of-the art in CCG parsing accuracy. In future work we plan to integrate the POS tagger, which is crucial to parsing accuracy (Clark and Curran, 2004b). We also plan to revisit the idea of combined training. Though we have focused on CCG in this work we expect these methods to be equally useful for other linguistically motivated but computationally complex formalisms such as lexicalized tree adjoining grammar. Acknowledgements We would like to thank Phil Blunsom, Prachya Boonkwan, Christos Christodoulopoulos, Stephen Clark, Michael Collins, Chris Dyer, Timothy Fowler, Mark Granroth-Wilding, Philipp Koehn, Terry Koo, Tom Kwiatkowski, Andr´e Martins, Matt Post, David Smith, David Sontag, Mark Steedman, and Charles Sutton for helpful discussion related to this work and comments on previous drafts, and the anonymous reviewers for helpful comments. We also acknowledge funding from EPSRC grant EP/P504171/1 (Auli); the EuroMatrixPlus project funded by the European Commission, 7th Framework Programme (Lopez); and the resources provided by the Edinburgh Compute and Data Facility (http://www.ecdf.ed.ac.uk). The ECDF is partially supported by the eDIKT initiative (http://www.edikt.org.uk). References S. Bangalore and A. K. Joshi. 1999. Supertagging: An Approach to Almost Parsing. Computational Linguis478 tics, 25(2):238–265, June. Y. Bar-Hillel, M. Perles, and E. Shamir. 1964. On formal properties of simple phrase structure grammars. In Language and Information: Selected Essays on their Theory and Application, pages 116–150. L. Bottou. 2003. Stochastic learning. In Advanced Lectures in Machine Learning, pages 146–168. S. Clark and J. R. Curran. 2004a. The importance of supertagging for wide-coverage CCG parsing. In COLING, Morristown, NJ, USA. S. Clark and J. R. Curran. 2004b. Parsing the WSJ using CCG and log-linear models. In Proc. of ACL, pages 104–111, Barcelona, Spain. S. Clark and J. R. Curran. 2007. Wide-Coverage Efficient Statistical Parsing with CCG and Log-Linear Models. Computational Linguistics, 33(4):493–552. S. Clark. 2002. Supertagging for Combinatory Categorial Grammar. In TAG+6. G. B. Dantzig and P. Wolfe. 1960. Decomposition principle for linear programs. Operations Research, 8(1):101–111. M. Dreyer and J. Eisner. 2009. Graphical models over multiple strings. In Proc. of EMNLP. C. Dyer and P. Resnik. 2010. Context-free reordering, finite-state translation. In Proc. of HLT-NAACL. C. J. Dyer, S. Muresan, and P. Resnik. 2008. Generalizing word lattice translation. In Proc. of ACL. P. F. Felzenszwalb and D. McAllester. 2007. The Generalized A* Architecture. In Journal of Artificial Intelligence Research, volume 29, pages 153–190. J. R. Finkel and C. D. Manning. 2009. Joint parsing and named entity recognition. In Proc. of NAACL. Association for Computational Linguistics. J. R. Finkel, C. D. Manning, and A. Y. Ng. 2006. Solving the problem of cascading errors: Approximate Bayesian inference for linguistic annotation pipelines. In Proc. of EMNLP. J. R. Finkel, A. Kleeman, and C. 
D. Manning. 2008. Feature-based, conditional random field parsing. In Proceedings of ACL-HLT. T. A. D. Fowler and G. Penn. 2010. Accurate contextfree parsing with combinatory categorial grammar. In Proc. of ACL. J. Hockenmaier and M. Steedman. 2002. Generative models for statistical parsing with Combinatory Categorial Grammar. In Proc. of ACL. J. Hockenmaier and M. Steedman. 2007. CCGbank: A corpus of CCG derivations and dependency structures extracted from the Penn Treebank. Computational Linguistics, 33(3):355–396. L. Huang. 2008. Forest Reranking: Discriminative parsing with Non-Local Features. In Proceedings of ACL08: HLT. W. Jiang, L. Huang, Q. Liu, and Y. L¨u. 2008. A cascaded linear model for joint Chinese word segmentation and part-of-speech tagging. In Proceedings of ACL-08: HLT. N. Komodakis, N. Paragios, and G. Tziritas. 2007. MRF optimization via dual decomposition: Messagepassing revisited. In Proc. of Int. Conf. on Computer Vision (ICCV). T. Koo, A. M. Rush, M. Collins, T. Jaakkola, and D. Sontag. 2010. Dual Decomposition for Parsing with NonProjective Head Automata. In In Proc. EMNLP. F. R. Kschischang, B. J. Frey, and H.-A. Loeliger. 1998. Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory, 47:498–519. J. K. Kummerfeld, J. Rosener, T. Dawborn, J. Haggerty, J. R. Curran, and S. Clark. 2010. Faster parsing by supertagger adaptation. In Proc. of ACL. A. F. T. Martins, N. A. Smith, E. P. Xing, P. M. Q. Aguiar, and M. A. T. Figueiredo. 2010. Turbo parsers: Dependency parsing by approximate variational inference. In Proc. of EMNLP. D. McAllester, M. Collins, and F. Pereira. 2008. Casefactor diagrams for structured probabilistic modeling. Journal of Computer and System Sciences, 74(1):84– 96. H. Mi, L. Huang, and Q. Liu. 2008. Forest-based translation. In Proc. of ACL-HLT. J. Pearl. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann. S. Petrov, L. Barrett, R. Thibaux, and D. Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proc. of ACL. A. M. Rush, D. Sontag, M. Collins, and T. Jaakkola. 2010. On dual decomposition and linear programming relaxations for natural language processing. In In Proc. EMNLP. T. Sato. 2007. Inside-outside probability computation for belief propagation. In Proc. of IJCAI. D. A. Smith and J. Eisner. 2008. Dependency parsing by belief propagation. In Proc. of EMNLP. P. Smyth, D. Heckerman, and M. Jordan. 1997. Probabilistic independence networks for hidden Markov probability models. Neural computation, 9(2):227– 269. D. Sontag, A. Globerson, and T. Jaakkola. 2010. Introduction to dual decomposition. In S. Sra, S. Nowozin, and S. J. Wright, editors, Optimization for Machine Learning. MIT Press. M. Steedman. 2000. The syntactic process. MIT Press, Cambridge, MA. C. Sutton and A. McCallum. 2005. Joint parsing and semantic role labelling. In Proc. of CoNLL. 479 C. Sutton and A. McCallum. 2010. An introduction to conditional random fields. arXiv:stat.ML/1011.4088. J. Yedidia, W. Freeman, and Y. Weiss. 2001. Generalized belief propagation. In Proc. of NIPS. 480
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 481–490, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Jointly Learning to Extract and Compress Taylor Berg-Kirkpatrick Dan Gillick Dan Klein Computer Science Division University of California at Berkeley {tberg, dgillick, klein}@cs.berkeley.edu Abstract We learn a joint model of sentence extraction and compression for multi-document summarization. Our model scores candidate summaries according to a combined linear model whose features factor over (1) the n-gram types in the summary and (2) the compressions used. We train the model using a marginbased objective whose loss captures end summary quality. Because of the exponentially large set of candidate summaries, we use a cutting-plane algorithm to incrementally detect and add active constraints efficiently. Inference in our model can be cast as an ILP and thereby solved in reasonable time; we also present a fast approximation scheme which achieves similar performance. Our jointly extracted and compressed summaries outperform both unlearned baselines and our learned extraction-only system on both ROUGE and Pyramid, without a drop in judged linguistic quality. We achieve the highest published ROUGE results to date on the TAC 2008 data set. 1 Introduction Applications of machine learning to automatic summarization have met with limited success, and, as a result, many top-performing systems remain largely ad-hoc. One reason learning may have provided limited gains is that typical models do not learn to optimize end summary quality directly, but rather learn intermediate quantities in isolation. For example, many models learn to score each input sentence independently (Teufel and Moens, 1997; Shen et al., 2007; Schilder and Kondadadi, 2008), and then assemble extractive summaries from the top-ranked sentences in a way not incorporated into the learning process. This extraction is often done in the presence of a heuristic that limits redundancy. As another example, Yih et al. (2007) learn predictors of individual words’ appearance in the references, but in isolation from the sentence selection procedure. Exceptions are Li et al. (2009) who take a max-margin approach to learning sentence values jointly, but still have ad hoc constraints to handle redundancy. One main contribution of the current paper is the direct optimization of summary quality in a single model; we find that our learned systems substantially outperform unlearned counterparts on both automatic and manual metrics. While pure extraction is certainly simple and does guarantee some minimal readability, Lin (2003) showed that sentence compression (Knight and Marcu, 2001; McDonald, 2006; Clarke and Lapata, 2008) has the potential to improve the resulting summaries. However, attempts to incorporate compression into a summarization system have largely failed to realize large gains. For example, Zajic et al (2006) use a pipeline approach, pre-processing to yield additional candidates for extraction by applying heuristic sentence compressions, but their system does not outperform state-of-the-art purely extractive systems. Similarly, Gillick and Favre (2009), though not learning weights, do a limited form of compression jointly with extraction. They report a marginal increase in the automatic wordoverlap metric ROUGE (Lin, 2004), but a decline in manual Pyramid (Nenkova and Passonneau, 2004). 
A second contribution of the current work is to show a system for jointly learning to jointly compress and extract that exhibits gains in both ROUGE and content metrics over purely extractive systems. Both Martins and Smith (2009) and Woodsend and Lapata (2010) build models that jointly extract and compress, but learn scores for sentences (or phrases) using independent classifiers. Daum´e III (2006) 481 learns parameters for compression and extraction jointly using an approximate training procedure, but his results are not competitive with state-of-the-art extractive systems, and he does not report improvements on manual content or quality metrics. In our approach, we define a linear model that scores candidate summaries according to features that factor over the n-gram types that appear in the summary and the structural compressions used to create the sentences in the summary. We train these parameters jointly using a margin-based objective whose loss captures end summary quality through the ROUGE metric. Because of the exponentially large set of candidate summaries, we use a cutting plane algorithm to incrementally detect and add active constraints efficiently. To make joint learning possible we introduce a new, manually-annotated data set of extracted, compressed sentences. Inference in our model can be cast as an integer linear program (ILP) and solved in reasonable time using a generic ILP solver; we also introduce a fast approximation scheme which achieves similar performance. Our jointly extracted and compressed summaries outperform both unlearned baselines and our learned extraction-only system on both ROUGE and Pyramid, without a drop in judged linguistic quality. We achieve the highest published comparable results (ROUGE) to date on our test set. 2 Joint Model We focus on the task of multi-document summarization. The input is a collection of documents, each consisting of multiple sentences. The output is a summary of length no greater than Lmax. Let x be the input document set, and let y be a representation of a summary as a vector. For an extractive summary, y is as a vector of indicators y = (ys : s ∈x), one indicator ys for each sentence s in x. A sentence s is present in the summary if and only if its indicator ys = 1 (see Figure 1a). Let Y (x) be the set of valid summaries of document set x with length no greater than Lmax. While past extractive methods have assigned value to individual sentences and then explicitly represented the notion of redundancy (Carbonell and Goldstein, 1998), recent methods show greater success by using a simpler notion of coverage: bigrams Figure 1: Diagram of (a) extractive and (b) joint extractive and compressive summarization models. Variables ys indicate the presence of sentences in the summary. Variables yn indicate the presence of parse tree nodes. Note that there is intentionally a bigram missing from (a). contribute content, and redundancy is implicitly encoded in the fact that redundant sentences cover fewer bigrams (Nenkova and Vanderwende, 2005; Gillick and Favre, 2009). This later approach is associated with the following objective function: max y∈Y (x) X b∈B(y) vb (1) Here, vb is the value of bigram b, and B(y) is the set of bigrams present in the summary encoded by y. Gillick and Favre (2009) produced a state-of-the-art system1 by directly optimizing this objective. They let the value vb of each bigram be given by the number of input documents the bigram appears in. 
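As a concrete sketch of this coverage objective (our own illustration assuming tokenised input, not the authors' code), document-frequency bigram values and the score of a candidate extraction can be computed as follows:

```python
def bigrams(sentence):
    """Set of bigram types in a tokenised sentence."""
    return set(zip(sentence, sentence[1:]))

def bigram_values(doc_set):
    """v_b = number of input documents that bigram b appears in."""
    values = {}
    for doc in doc_set:                               # doc: list of tokenised sentences
        for b in set().union(*(bigrams(s) for s in doc)):
            values[b] = values.get(b, 0) + 1
    return values

def coverage_score(summary_sentences, values):
    """Objective (1): total value of the bigram types covered by the summary.
    Redundancy is implicit: a sentence whose bigrams are already covered
    contributes nothing new."""
    covered = set().union(*(bigrams(s) for s in summary_sentences))
    return sum(values.get(b, 0) for b in covered)
```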
Our implementation of their system will serve as a baseline, referred to as EXTRACTIVE BASELINE. We extend objective 1 so that it assigns value not just to the bigrams that appear in the summary, but also to the choices made in the creation of the summary. In our complete model, which jointly extracts and compresses sentences, we choose whether or not to cut individual subtrees in the constituency parses 1See Text Analysis Conference results in 2008 and 2009. 482 of each sentence. This is in contrast to the extractive case where choices are made on full sentences. max y∈Y (x) X b∈B(y) vb + X c∈C(y) vc (2) C(y) is the set of cut choices made in y, and vc assigns value to each. Next, we present details of our representation of compressive summaries. Assume a constituency parse ts for every sentence s. We represent a compressive summary as a vector y = (yn : n ∈ts, s ∈ x) of indicators, one for each non-terminal node in each parse tree of the sentences in the document set x. A word is present in the output summary if and only if its parent parse tree node n has yn = 1 (see Figure 1b). In addition to the length constraint on the members of Y (x), we require that each node n may have yn = 1 only if its parent π(n) has yπ(n) = 1. This ensures that only subtrees may be deleted. While we use constituency parses rather than dependency parses, this model has similarities with the vine-growth model of Daum´e III (2006). For the compressive model we define the set of cut choices C(y) for a summary y to be the set of edges in each parse that are broken in order to delete a subtree (see Figure 1b). We require that each subtree has a non-terminal node for a root, and say that an edge (n, π(n)) between a node and its parent is broken if the parent has yπ(n) = 1 but the child has yn = 0. Notice that breaking a single edge deletes an entire subtree. 2.1 Parameterization Before learning weights in Section 3, we parameterize objectives 1 and 2 using features. This entails to parameterizing each bigram score vb and each subtree deletion score vc. For weights w ∈Rd and feature functions g(b, x) ∈Rd and h(c, x) ∈Rd we let: vb = wTg(b, x) vc = wTh(c, x) For example, g(b, x) might include a feature the counts the number of documents in x that b appears in, and h(c, x) might include a feature that indicates whether the deleted subtree is an SBAR modifying a noun. This parameterization allows us to cast summarization as structured prediction. We can define a feature function f(y, x) ∈Rd which factors over summaries y through B(y) and C(y): f(y, x) = X b∈B(y) g(b, x) + X c∈C(y) h(c, x) Using this characterization of summaries as feature vectors we can define a linear predictor for summarization: d(x; w) = arg max y∈Y (x) wTf(y, x) (3) = arg max y∈Y (x) X b∈B(y) vb + X c∈C(y) vc The arg max in Equation 3 optimizes Objective 2. Learning weights for Objective 1 where Y (x) is the set of extractive summaries gives our LEARNED EXTRACTIVE system. Learning weights for Objective 2 where Y (x) is the set of compressive summaries, and C(y) the set of broken edges that produce subtree deletions, gives our LEARNED COMPRESSIVE system, which is our joint model of extraction and compression. 3 Structured Learning Discriminative training attempts to minimize the loss incurred during prediction by optimizing an objective on the training set. We will perform discriminative training using a loss function that directly measures end-to-end summarization quality. 
In Section 4 we show that finding summaries that optimize Objective 2, Viterbi prediction, is efficient. Online learning algorithms like perceptron or the margin-infused relaxed algorithm (MIRA) (Crammer and Singer, 2003) are frequently used for structured problems where Viterbi inference is available. However, we find that such methods are unstable on our problem. We instead turn to an approach that optimizes a batch objective which is sensitive to all constraints on all instances, but is efficient by adding these constraints incrementally. 3.1 Max-margin objective For our problem the data set consists of pairs of document sets and label summaries, D = {(xi, y∗ i ) : i ∈1, . . . , N}. Note that the label summaries 483 can be expressed as vectors y∗because our training summaries are variously extractive or extractive and compressive (see Section 5). We use a soft-margin support vector machine (SVM) (Vapnik, 1998) objective over the full structured output space (Taskar et al., 2003; Tsochantaridis et al., 2004) of extractive and compressive summaries: min w 1 2∥w∥2 + C N N X i=1 ξi (4) s.t. ∀i, ∀y ∈Y (xi) (5) wT f(y∗ i , xi) −f(y, xi) ≥ℓ(y, y∗ i ) −ξi The constraints in Equation 5 require that the difference in model score between each possible summary y and the gold summary y∗ i be no smaller than the loss ℓ(y, y∗ i ), padded by a per-instance slack of ξi. We use bigram recall as our loss function (see Section 3.3). C is the regularization constant. When the output space Y (xi) is small these constraints can be explicitly enumerated. In this case it is standard to solve the dual, which is a quadratic program. Unfortunately, the size of the output space of extractive summaries is exponential in the number of sentences in the input document set. 3.2 Cutting-plane algorithm The cutting-plane algorithm deals with the exponential number of constraints in Equation 5 by performing constraint induction (Tsochantaridis et al., 2004). It alternates between solving Objective 4 with a reduced set of currently active constraints, and adding newly active constraints to the set. In our application, this approach efficiently solves the structured SVM training problem up to some specified tolerance ϵ. Suppose ˆw and ˆξ optimize Objective 4 under the currently active constraints on a given iteration. Notice that the ˆyi satisfying ˆyi = arg max y∈Y (xi) ˆwTf(y, xi) + ℓ(y, y∗ i ) (6) corresponds to the constraint in the fully constrained problem, for training instance (xi, y∗ i ), most violated by ˆw and ˆξ. On each round of constraint induction the cutting-plane algorithm computes the arg max in Equation 6 for a training instance, which is referred to as loss-augmented prediction, and adds the corresponding constraint to the active set. The constraints from Equation (5) are equivalent to: ∀i wTf(y∗ i , xi) ≥maxy∈Y (xi) wTf(y, xi) + ℓ(y, y∗ i ) −ξi. Thus, if loss-augmented prediction turns up no new constraints on a given iteration, the current solution to the reduced problem, ˆw and ˆξ, is the solution to the full SVM training problem. In practice, constraints are only added if the right hand side of Equation (5) exceeds the left hand side by at least ϵ. Tsochantaridis et al. (2004) prove that only O(N ϵ ) constraints are added before constraint induction finds a Cϵ-optimal solution. Loss-augmented prediction is not always tractable. Luckily, our choice of loss function, bigram recall, factors over bigrams. 
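The constraint-induction loop of Section 3.2 is itself quite small. The following sketch is schematic only: per-instance slack bookkeeping is simplified, and both the loss-augmented arg max and the reduced QP solve (handled by factored SMO in this work) are left as caller-supplied functions, here named loss_aug_argmax and solve_qp.

```python
import numpy as np

def cutting_plane_train(data, feats, loss_aug_argmax, solve_qp, epsilon=1e-4):
    """Schematic constraint induction (Tsochantaridis et al., 2004).
    data: list of (x, y_gold) pairs; feats(y, x) -> np.ndarray;
    loss_aug_argmax(w, x, y_gold) -> (y_hat, loss), the most violated constraint;
    solve_qp(constraints) -> (w, slacks), re-solving the reduced soft-margin problem."""
    x0, y0 = data[0]
    w = np.zeros_like(feats(y0, x0), dtype=float)
    slack = [0.0] * len(data)
    constraints = []                           # (instance index, delta features, loss)
    while True:
        added = 0
        for i, (x, y_gold) in enumerate(data):
            y_hat, loss = loss_aug_argmax(w, x, y_gold)
            delta = feats(y_gold, x) - feats(y_hat, x)
            if loss - float(w.dot(delta)) > slack[i] + epsilon:
                constraints.append((i, delta, loss))    # violated beyond tolerance
                added += 1
        if added == 0:
            return w                           # current solution is epsilon-optimal
        w, slack = solve_qp(constraints)
```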
Thus, we can easily perform loss-augmented prediction using the same procedure we use to perform Viterbi prediction (described in Section 4). We simply modify each bigram value vb to include bigram b’s contribution to the total loss. We solve the intermediate partially-constrained max-margin problems using the factored sequential minimal optimization (SMO) algorithm (Platt, 1999; Taskar et al., 2004). In practice, for ϵ = 10−4, the cutting-plane algorithm converges after only three passes through the training set when applied to our summarization task. 3.3 Loss function In the simplest case, 0-1 loss, the system only receives credit for exactly identifying the label summary. Since there are many reasonable summaries we are less interested in exactly matching any specific training instance, and more interested in the degree to which a predicted summary deviates from a label. The standard method for automatically evaluating a summary against a reference is ROUGE, which we simplify slightly to bigram recall. With an extractive reference denoted by y∗, our loss function is: ℓ(y, y∗) = |B(y) T B(y∗)| |B(y∗)| We verified that bigram recall correlates well with ROUGE and with manual metrics. 484 4 Efficient Prediction We show how to perform prediction with the extractive and compressive models by solving ILPs. For many instances, a generic ILP solver can find exact solutions to the prediction problems in a matter of seconds. For difficult instances, we present a fast approximate algorithm. 4.1 ILP for extraction Gillick and Favre (2009) express the optimization of Objective 1 for extractive summarization as an ILP. We begin here with their algorithm. Let each input sentence s have length ls. Let the presence of each bigram b in B(y) be indicated by the binary variable zb. Let Qsb be an indicator of the presence of bigram b in sentence s. They specify the following ILP over binary variables y and z: max y,z X b vbzb s.t. X s lsys ≤Lmax ∀b X s Qsb ≤zb (7) ∀s, b ysQsb ≥zb (8) Constraints 7 and 8 ensure consistency between sentences and bigrams. Notice that the Constraint 7 requires that selecting a sentence entails selecting all its bigrams, and Constraint 8 requires that selecting a bigram entails selecting at least one sentence that contains it. Solving the ILP is fast in practice. Using the GNU Linear Programming Kit (GLPK) on a 3.2GHz Intel machine, decoding took less than a second on most instances. 4.2 ILP for joint compression and extraction We can extend the ILP formulation of extraction to solve the compressive problem. Let ln be the number of words node n has as children. With this notation we can write the length restriction as P n lnyn ≤Lmax. Let the presence of each cut c in C(y) be indicated by the binary variable zc, which is active if and only if yn = 0 but yπ(n) = 1, where node π(n) is the parent of node n. The constraints on zc are diagrammed in Figure 2. While it is possible to let B(y) contain all bigrams present in the compressive summary, the reFigure 2: Diagram of ILP for joint extraction and compression. Variables zb indicate the presence of bigrams in the summary. Variables zc indicate edges in the parse tree that have been cut in order to remove subtrees. The figure suppresses bigram variables zstopped,in and zfrance,he to reduce clutter. Note that the edit shown is intentionally bad. It demonstrates a loss of bigram coverage. duction of B(y) makes the ILP formulation efficient. We omit from B(y) bigrams that are the result of deleted intermediate words. 
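Returning briefly to the loss itself: because bigram recall factors over bigrams, the loss-augmented decode only changes the bigram values handed to the solver. In the sketch below (illustrative; B(·) is represented as a Python set of bigram tuples) the loss is written as one minus bigram recall, so that ℓ(y*, y*) = 0 as the constraints of Section 3.1 require, and the constant term is dropped since it does not affect the arg max.

```python
def bigram_recall_loss(pred_bigrams, gold_bigrams):
    """One minus bigram recall of the predicted summary against the reference."""
    if not gold_bigrams:
        return 0.0
    return 1.0 - len(pred_bigrams & gold_bigrams) / len(gold_bigrams)

def loss_augmented_values(values, gold_bigrams):
    """Fold the loss into the bigram values: every gold bigram loses 1/|B(y*)|,
    so an ordinary decode on the new values performs loss-augmented prediction."""
    bonus = 1.0 / len(gold_bigrams)
    return {b: v - (bonus if b in gold_bigrams else 0.0) for b, v in values.items()}
```

With the loss in place, we return to the joint ILP; recall that B(y) above excludes bigrams created by deleting intermediate words.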
As a result the required number of variables zb is linear in the length of a sentence. The constraints on zb are given in Figure 2. They can be expressed in terms of the variables yn. By solving the following ILP we can compute the arg max required for prediction in the joint model: max y,z X b vbzb + X c vczc s.t. X n lnyn ≤Lmax ∀n yn ≤yπ(n) (9) ∀b zb = 1 b ∈B(y) (10) ∀c zc = 1 c ∈C(y) (11) 485 Constraint 9 encodes the requirement that only full subtrees may be deleted. For simplicity, we have written Constraints 10 and 11 in implicit form. These constraints can be encoded explicitly using O(N) linear constraints, where N is the number of words in the document set x. The reduction of B(y) to include only bigrams not resulting from deleted intermediate words avoids O(N2) required constraints. In practice, solving this ILP for joint extraction and compression is, on average, an order of magnitude slower than solving the ILP for pure extraction, and for certain instances finding the exact solution is prohibitively slow. 4.3 Fast approximate prediction One common way to quickly approximate an ILP is to solve its LP relaxation (and round the results). We found that, while very fast, the LP relaxation of the joint ILP gave poor results, finding unacceptably suboptimal solutions. This appears possibly to have been problematic for Martins and Smith (2009) as well. We developed an alternative fast approximate joint extractive and compressive solver that gives better results in terms of both objective value and bigram recall of resulting solutions. The approximate joint solver first extracts a subset of the sentences in the document set that total no more than M words. In a second step, we apply the exact joint extractive and compressive summarizer (see Section 4.2) to the resulting extraction. The objective we maximize in performing the initial extraction is different from the one used in extractive summarization. Specifically, we pick an extraction that maximizes P s∈y P b∈s vb. This objective rewards redundant bigrams, and thus is likely to give the joint solver multiple options for including the same piece of relevant content. M is a parameter that trades-off between approximation quality and problem difficulty. When M is the size of the document set x, the approximate solver solves the exact joint problem. In Figure 3 we plot the trade-off between approximation quality and computation time, comparing to the exact joint solver, an exact solver that is limited to extractive solutions, and the LP relaxation solver. The results show that the approximate joint solver yields substantial improvements over the LP relaxation, and Figure 3: Plot of objective value, bigram recall, and elapsed time for the approximate joint extractive and compressive solver against size of intermediate extraction set. Also shown are values for an LP relaxation approximate solver, a solver that is restricted to extractive solutions, and finally the exact compressive solver. These solvers do not use an intermediate extraction. Results are for 44 document sets, averaging about 5000 words per document set. can achieve results comparable to those produced by the exact solver with a 5-fold reduction in computation time. On particularly difficult instances the parameter M can be decreased, ensuring that all instances are solved in a reasonable time period. 5 Data We use the data from the Text Analysis Conference (TAC) evaluations from 2008 and 2009, a total of 92 multi-document summarization problems. 
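For reference, the extraction-stage ILP of Section 4.1, which is also the first step of the approximate joint solver, can be written down directly in an off-the-shelf modeling layer. The sketch below uses the PuLP library as a stand-in for the direct GLPK encoding used in the experiments; function and variable names are illustrative, and pruning of rare bigrams is assumed to have happened when bigram_values was built.

```python
import pulp

def extractive_ilp(sentences, bigram_values, max_length):
    """Extraction ILP of Section 4.1: maximize covered bigram value under a word budget.
    sentences: list of token lists; bigram_values: dict mapping bigram tuples to v_b."""
    n = len(sentences)
    sent_bigrams = [set(zip(s, s[1:])) & set(bigram_values) for s in sentences]
    B = sorted(set().union(*sent_bigrams))

    prob = pulp.LpProblem("extractive_summary", pulp.LpMaximize)
    y = [pulp.LpVariable("y_%d" % i, cat="Binary") for i in range(n)]
    z = {b: pulp.LpVariable("z_%d" % k, cat="Binary") for k, b in enumerate(B)}

    prob += pulp.lpSum(bigram_values[b] * z[b] for b in B)                 # objective
    prob += pulp.lpSum(len(s) * y[i] for i, s in enumerate(sentences)) <= max_length
    for i in range(n):
        for b in sent_bigrams[i]:
            prob += y[i] <= z[b]               # a selected sentence selects its bigrams
    for b in B:
        prob += z[b] <= pulp.lpSum(y[i] for i in range(n) if b in sent_bigrams[i])

    prob.solve()                               # defaults to PuLP's bundled CBC solver
    return [i for i in range(n) if y[i].value() == 1]
```

Returning to the TAC data: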
Each problem asks for a 100-word-limited summary of 10 related input documents and provides a set of four abstracts written by experts. These are the nonupdate portions of the TAC 2008 and 2009 tasks. To train the extractive system described in Section 2, we use as our labels y∗the extractions with the largest bigram recall values relative to the sets of references. While these extractions are inferior to the abstracts, they are attainable by our model, a quality found to be advantageous in discriminative training for machine translation (Liang et al., 2006; 486 COUNT: 1(docCount(b) = ·) where docCount(b) is the number of documents containing b. STOP: 1(isStop(b1) = ·, isStop(b2) = ·) where isStop(w) indicates a stop word. POSITION: 1(docPosition(b) = ·) where docPosition(b) is the earliest position in a document of any sentence containing b, buckets earliest positions ≥4. CONJ: All two- and three-way conjunctions of COUNT, STOP, and POSITION features. BIAS: Bias feature, active on all bigrams. Table 1: Bigram features: component feature functions in g(b, x) that we use to characterize the bigram b in both the extractive and compressive models. Chiang et al., 2008). Previous work has referred to the lack of extracted, compressed data sets as an obstacle to joint learning for summarizaiton (Daum´e III, 2006; Martins and Smith, 2009). We collected joint data via a Mechanical Turk task. To make the joint annotation task more feasible, we adopted an approximate approach that closely matches our fast approximate prediction procedure. Annotators were shown a 150-word maximum bigram recall extractions from the full document set and instructed to form a compressed summary by deleting words until 100 or fewer words remained. Each task was performed by two annotators. We chose the summary we judged to be of highest quality from each pair to add to our corpus. This gave one gold compressive summary y∗for each of the 44 problems in the TAC 2009 set. We used these labels to train our joint extractive and compressive system described in Section 2. Of the 288 total sentences presented to annotators, 38 were unedited, 45 were deleted, and 205 were compressed by an average of 7.5 words. 6 Features Here we describe the features used to parameterize our model. Relative to some NLP tasks, our feature sets are small: roughly two hundred features on bigrams and thirteen features on subtree deletions. This is because our data set is small; with only 48 training documents we do not have the statistical support to learn weights for more features. For larger training sets one could imagine lexicalized versions of the features we describe. COORD: Indicates phrase involved in coordination. Four versions of this feature: NP, VP, S, SBAR. S-ADJUNCT: Indicates a child of an S, adjunct to and left of the matrix verb. Four version of this feature: CC, PP, ADVP, SBAR. REL-C: Indicates a relative clause, SBAR modifying a noun. ATTR-C: Indicates a sentence-final attribution clause, e.g. ‘the senator announced Friday.’ ATTR-PP: Indicates a PP attribution, e.g. ‘according to the senator.’ TEMP-PP: Indicates a temporal PP, e.g. ‘on Friday.’ TEMP-NP: Indicates a temporal NP, e.g. ‘Friday.’ BIAS: Bias feature, active on all subtree deletions. Table 2: Subtree deletion features: component feature functions in h(c, x) that we use to characterize the subtree deleted by cutting edge c = (n, π(n)) in the joint extractive and compressive model. 
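As a concrete illustration of how the templates in Table 1 become a feature function, the sketch below gives one possible version of g(b, x). It is illustrative only: the exact count and position bucketing, the keys assumed on x, and the restriction to two-way conjunctions (the three-way case is built the same way) are assumptions, not the system's actual code.

```python
def bigram_features(b, x):
    """g(b, x) following Table 1, for bigram b = (w1, w2).
    Assumes x provides x['doc_count'][b], x['position'][b] (earliest sentence
    position of b in any document) and x['stopwords'] (a stopword set)."""
    w1, w2 = b
    count_bucket = x['doc_count'][b]
    stop_pattern = (w1 in x['stopwords'], w2 in x['stopwords'])
    pos_bucket = min(x['position'][b], 4)      # positions >= 4 share one bucket

    feats = {
        'BIAS': 1.0,
        'COUNT=%d' % count_bucket: 1.0,
        'STOP=%s' % (stop_pattern,): 1.0,
        'POSITION=%d' % pos_bucket: 1.0,
        # two-way CONJ features; three-way conjunctions are analogous
        'COUNT=%d&STOP=%s' % (count_bucket, stop_pattern): 1.0,
        'COUNT=%d&POSITION=%d' % (count_bucket, pos_bucket): 1.0,
        'STOP=%s&POSITION=%d' % (stop_pattern, pos_bucket): 1.0,
    }
    return feats
```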
6.1 Bigram features Our bigram features include document counts, the earliest position in a document of a sentence that contains the bigram, and membership of each word in a standard set of stopwords. We also include all possible two- and three-way conjuctions of these features. Table 1 describes the features in detail. We use stemmed bigrams and prune bigrams that appear in fewer than three input documents. 6.2 Subtree deletion features Table 2 gives a description of our subtree tree deletion features. Of course, by training to optimize a metric like ROUGE, the system benefits from restrictions on the syntactic variety of edits; the learning is therefore more about deciding when an edit is worth the coverage trade-offs rather than finegrained decisions about grammaticality. We constrain the model to only allow subtree deletions where one of the features in Table 2 (aside from BIAS) is active. The root, and thus the entire sentence, may always be cut. We choose this particular set of allowed deletions by looking at human annotated data and taking note of the most common types of edits. Edits which are made rarely by humans should be avoided in most scenarios, and we simply don’t have enough data to learn when to do them safely. 487 System BR R-2 R-SU4 Pyr LQ LAST DOCUMENT 4.00 5.85 9.39 23.5 7.2 EXT. BASELINE 6.85 10.05 13.00 35.0 6.2 LEARNED EXT. 7.43 11.05 13.86 38.4 6.6 LEARNED COMP. 7.75 11.70 14.38 41.3 6.5 Table 3: Bigram Recall (BR), ROUGE (R-2 and R-SU4) and Pyramid (Pyr) scores are multiplied by 100; Linguistic Quality (LQ) is scored on a 1 (very poor) to 10 (very good) scale. 7 Experiments 7.1 Experimental setup We set aside the TAC 2008 data set (48 problems) for testing and use the TAC 2009 data set (44 problems) for training, with hyper-parameters set to maximize six-fold cross-validation bigram recall on the training set. We run the factored SMO algorithm until convergence, and run the cutting-plane algorithm until convergence for ϵ = 10−4. We used GLPK to solve all ILPs. We solved extractive ILPs exactly, and joint extractive and compressive ILPs approximately using an intermediate extraction size of 1000. Constituency parses were produced using the Berkeley parser (Petrov and Klein, 2007). We show results for three systems, EXTRACTIVE BASELINE, LEARNED EXTRACTIVE, LEARNED COMPRESSIVE, and the standard baseline that extracts the first 100 words in the the most recent document, LAST DOCUMENT. 7.2 Results Our evaluation results are shown in Table 3. ROUGE-2 (based on bigrams) and ROUGE-SU4 (based on both unigrams and skip-bigrams, separated by up to four words) are given by the official ROUGE toolkit with the standard options (Lin, 2004). Pyramid (Nenkova and Passonneau, 2004) is a manually evaluated measure of recall on facts or Semantic Content Units appearing in the reference summaries. It is designed to help annotators distinguish information content from linguistic quality. Two annotators performed the entire evaluation without overlap by splitting the set of problems in half. To evaluate linguistic quality, we sent all the summaries to Mechanical Turk (with two times redunSystem Sents Words/Sent Word Types LAST DOCUMENT 4.0 25.0 36.5 EXT. BASELINE 5.0 20.8 36.3 LEARNED EXT. 4.8 21.8 37.1 LEARNED COMP. 4.5 22.9 38.8 Table 4: Summary statistics for the summaries generated by each system: Average number of sentences per summary, average number of words per summary sentence, and average number of non-stopword word types per summary. 
dancy), using the template and instructions designed by Gillick and Liu (2010). They report that Turkers can faithfully reproduce experts’ rankings of average system linguistic quality (though their judgements of content are poorer). The table shows average linguistic quality. All the content-based metrics show substantial improvement for learned systems over unlearned ones, and we see an extremely large improvement for the learned joint extractive and compressive system over the previous state-of-the-art EXTRACTIVE BASELINE. The ROUGE scores for the learned joint system, LEARNED COMPRESSIVE, are, to our knowledge, the highest reported on this task. We cannot compare Pyramid scores to other reported scores because of annotator difference. As expected, the LAST DOCUMENT baseline outperforms other systems in terms of linguistic quality. But, importantly, the gains achieved by the joint extractive and compressive system in content-based metrics do not come at the cost of linguistic quality when compared to purely extractive systems. Table 4 shows statistics on the outputs of the systems we evaluated. The joint extractive and compressive system fits more word types into a summary than the extractive systems, but also produces longer sentences on average. Reading the output summaries more carefully suggests that by learning to extract and compress jointly, our joint system has the flexibility to use or create reasonable, mediumlength sentences, whereas the extractive systems are stuck with a few valuable long sentences, but several less productive shorter sentences. Example summaries produced by the joint system are given in Figure 4 along with reference summaries produced by humans. 488 LEARNED COMPRESSIVE: The country’s work safety authority will release the list of the first batch of coal mines to be closed down said Wang Xianzheng, deputy director of the National Bureau of Production Safety Supervision and Administration. With its coal mining safety a hot issue, attracting wide attention from both home and overseas, China is seeking solutions from the world to improve its coal mining safety system. Despite government promises to stem the carnage the death toll in China’s disaster-plagued coal mine industry is rising according to the latest statistics released by the government Friday. Fatal coal mine accidents in China rose 8.5 percent in the first eight months of this year with thousands dying despite stepped-up efforts to make the industry safer state media said Wednesday. REFERENCE: China’s accident-plagued coal mines cause thousands of deaths and injuries annually. 2004 saw over 6,000 mine deaths. January through August 2005, deaths rose 8.5% over the same period in 2004. Most accidents are gas explosions, but fires, floods, and caveins also occur. Ignored safety procedures, outdated equipment, and corrupted officials exacerbate the problem. Official responses include shutting down thousands of ill-managed and illegally-run mines, punishing errant owners, issuing new safety regulations and measures, and outlawing local officials from investing in mines. China also sought solutions at the Conference on South African Coal Mining Safety Technology and Equipment held in Beijing. LEARNED COMPRESSIVE: Karl Rove the White House deputy chief of staff told President George W. 
Bush and others that he never engaged in an effort to disclose a CIA operative’s identity to discredit her husband’s criticism of the administration’s Iraq policy according to people with knowledge of Rove’s account in the investigation. In a potentially damaging sign for the Bush administration special counsel Patrick Fitzgerald said that although his investigation is nearly complete it’s not over. Lewis Scooter Libby Vice President Dick Cheney’s chief of staff and a key architect of the Iraq war was indicted Friday on felony charges of perjury making false statements to FBI agents and obstruction of justice for impeding the federal grand jury investigating the CIA leak case. REFERENCE: Special Prosecutor Patrick Fitzgerald is investigating who leaked to the press that Valerie Plame, wife of former Ambassador Joseph Wilson, was an undercover CIA agent. Wilson was a critic of the Bush administration. Administration staffers Karl Rove and I. Lewis Libby are the focus of the investigation. NY Times correspondent Judith Miller was jailed for 85 days for refusing to testify about Libby. Libby was eventually indicted on five counts: 2 false statements, 1 obstruction of justice, 2 perjury. Libby resigned immediately. He faces 30 years in prison and a fine of $1.25 million if convicted. Libby pleaded not guilty. Figure 4: Example summaries produced by our learned joint model of extraction and compression. These are each 100-word-limited summaries of a collection of ten documents from the TAC 2008 data set. Constituents that have been removed via subtree deletion are grayed out. References summaries produced by humans are provided for comparison. 8 Conclusion Jointly learning to extract and compress within a unified model outperforms learning pure extraction, which in turn outperforms a state-of-the-art extractive baseline. Our system gives substantial increases in both automatic and manual content metrics, while maintaining high linguistic quality scores. Acknowledgements We thank the anonymous reviewers for their comments. This project is supported by DARPA under grant N10AP20007. References J. Carbonell and J. Goldstein. 1998. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In Proc. of SIGIR. D. Chiang, Y. Marton, and P. Resnik. 2008. Online largemargin training of syntactic and structural translation features. In Proc. of EMNLP. J. Clarke and M. Lapata. 2008. Global Inference for Sentence Compression: An Integer Linear Programming Approach. Journal of Artificial Intelligence Research, 31:399–429. K. Crammer and Y. Singer. 2003. Ultraconservative online algorithms for multiclass problems. Journal of Machine Learning Research, 3:951–991. H.C. Daum´e III. 2006. Practical structured learning techniques for natural language processing. Ph.D. thesis, University of Southern California. D. Gillick and B. Favre. 2009. A scalable global model for summarization. In Proc. of ACL Workshop on Integer Linear Programming for Natural Language Processing. D. Gillick and Y. Liu. 2010. Non-Expert Evaluation of Summarization Systems is Risky. In Proc. of NAACL Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk. K. Knight and D. Marcu. 2001. Statistics-based summarization-step one: Sentence compression. In Proc. of AAAI. L. Li, K. Zhou, G.R. Xue, H. Zha, and Y. Yu. 2009. Enhancing diversity, coverage and balance for summarization through structure learning. In Proc. of the 18th International Conference on World Wide Web. P. Liang, A. 
Bouchard-Cˆot´e, D. Klein, and B. Taskar. 2006. An end-to-end discriminative approach to machine translation. In Proc. of the ACL. 489 C.Y. Lin. 2003. Improving summarization performance by sentence compression: a pilot study. In Proc. of ACL Workshop on Information Retrieval with Asian Languages. C.Y. Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Proc. of ACL Workshop on Text Summarization Branches Out. A.F.T. Martins and N.A. Smith. 2009. Summarization with a joint model for sentence extraction and compression. In Proc. of NAACL Workshop on Integer Linear Programming for Natural Language Processing. R. McDonald. 2006. Discriminative sentence compression with soft syntactic constraints. In Proc. of EACL. A. Nenkova and R. Passonneau. 2004. Evaluating content selection in summarization: The pyramid method. In Proc. of NAACL. A. Nenkova and L. Vanderwende. 2005. The impact of frequency on summarization. Technical report, MSRTR-2005-101. Redmond, Washington: Microsoft Research. S. Petrov and D. Klein. 2007. Learning and inference for hierarchically split PCFGs. In AAAI. J.C. Platt. 1999. Fast training of support vector machines using sequential minimal optimization. In Advances in Kernel Methods. MIT press. F. Schilder and R. Kondadadi. 2008. Fastsum: Fast and accurate query-based multi-document summarization. In Proc. of ACL. D. Shen, J.T. Sun, H. Li, Q. Yang, and Z. Chen. 2007. Document summarization using conditional random fields. In Proc. of IJCAI. B. Taskar, C. Guestrin, and D. Koller. 2003. Max-margin Markov networks. In Proc. of NIPS. B. Taskar, D. Klein, M. Collins, D. Koller, and C. Manning. 2004. Max-margin parsing. In Proc. of EMNLP. S. Teufel and M. Moens. 1997. Sentence extraction as a classification task. In Proc. of ACL Workshop on Intelligent and Scalable Text Summarization. I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. 2004. Support vector machine learning for interdependent and structured output spaces. In Proc. of ICML. V.N. Vapnik. 1998. Statistical learning theory. John Wiley and Sons, New York. K. Woodsend and M. Lapata. 2010. Automatic generation of story highlights. In Proc. of ACL. W. Yih, J. Goodman, L. Vanderwende, and H. Suzuki. 2007. Multi-document summarization by maximizing informative content-words. In Proc. of IJCAI. D.M. Zajic, B.J. Dorr, R. Schwartz, and J. Lin. 2006. Sentence compression as a component of a multidocument summarization system. In Proc. of the 2006 Document Understanding Workshop. 490
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 43–51, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Evaluating the Impact of Coder Errors on Active Learning Ines Rehbein Computational Linguistics Saarland University [email protected] Josef Ruppenhofer Computational Linguistics Saarland University [email protected] Abstract Active Learning (AL) has been proposed as a technique to reduce the amount of annotated data needed in the context of supervised classification. While various simulation studies for a number of NLP tasks have shown that AL works well on goldstandard data, there is some doubt whether the approach can be successful when applied to noisy, real-world data sets. This paper presents a thorough evaluation of the impact of annotation noise on AL and shows that systematic noise resulting from biased coder decisions can seriously harm the AL process. We present a method to filter out inconsistent annotations during AL and show that this makes AL far more robust when applied to noisy data. 1 Introduction Supervised machine learning techniques are still the mainstay for many NLP tasks. There is, however, a well-known bottleneck for these approaches: the amount of high-quality data needed for training, mostly obtained by human annotation. Active Learning (AL) has been proposed as a promising approach to reduce the amount of time and cost for human annotation. The idea behind active learning is quite intuitive: instead of annotating a large number of randomly picked instances we carefully select a small number of instances that are maximally informative for the machine learning classifier. Thus a smaller set of data points is able to boost classifier performance and to yield an accuracy comparable to the one obtained when training the same system on a larger set of randomly chosen data. Active learning has been applied to several NLP tasks like part-of-speech tagging (Ringger et al., 2007), chunking (Ngai and Yarowsky, 2000), syntactic parsing (Osborne and Baldridge, 2004; Hwa, 2004), Named Entity Recognition (Shen et al., 2004; Laws and Sch¨utze, 2008; Tomanek and Hahn, 2009), Word Sense Disambiguation (Chen et al., 2006; Zhu and Hovy, 2007; Chan and Ng, 2007), text classification (Tong and Koller, 1998) or statistical machine translation (Haffari and Sarkar, 2009), and has been shown to reduce the amount of annotated data needed to achieve a certain classifier performance, sometimes by as much as half. Most of these studies, however, have only simulated the active learning process using goldstandard data. This setting is crucially different from a real world scenario where we have to deal with erroneous data and inconsistent annotation decisions made by the human annotators. While simulations are an indispensable instrument to test different parameters and settings, it has been shown that when applying AL to highly ambiguous tasks like e.g. Word Sense Disambiguation (WSD) with fine-grained sense distinctions, AL can actually harm the learning process (Dang, 2004; Rehbein et al., 2010). Dang suggests that the lack of a positive effect of AL might be due to inconsistencies in the human annotations and that AL cannot efficiently be applied to tasks which need double blind annotation with adjudication to insure a sufficient data quality. 
Even if we take a more optimistic view and assume that AL might still be useful even for tasks featuring a high degree of ambiguity, it remains crucial to address the problem of annotation noise and its impact on AL. 43 In this paper we present a thorough evaluation of the impact of annotation noise on AL. We simulate different types of coder errors and assess the effect on the learning process. We propose a method to detect inconsistencies and remove them from the training data, and show that our method does alleviate the problem of annotation noise in our experiments. The paper is structured as follows. Section 2 reports on recent research on the impact of annotation noise in the context of supervised classification. Section 3 describes the experimental setup of our simulation study and presents results. In Section 4 we present our filtering approach and show its impact on AL performance. Section 5 concludes and outlines future work. 2 Related Work We are interested in the question whether or not AL can be successfully applied to a supervised classification task where we have to deal with a considerable amount of inconsistencies and noise in the data, which is the case for many NLP tasks (e.g. sentiment analysis, the detection of metaphors, WSD with fine-grained word senses, to name but a few). Therefore we do not consider part-of-speech tagging or syntactic parsing, where coders are expected to agree on most annotation decisions. Instead, we focus on work on AL for WSD, where intercoder agreement (at least for fine-grained annotation schemes) usually is much lower than for the former tasks. 2.1 Annotation Noise Studies on active learning for WSD have been limited to running simulations of AL using gold standard data and a coarse-grained annotation scheme (Chen et al., 2006; Chan and Ng, 2007; Zhu and Hovy, 2007). Two exceptions are Dang (2004) and Rehbein et al. (2010) who both were not able to replicate the positive findings obtained for AL for WSD on coarse-grained sense distinctions. A possible reason for this failure is the amount of annotation noise in the training data which might mislead the classifier during the AL process. Recent work on the impact of annotation noise on a machine learning task (Reidsma and Carletta, 2008) has shown that random noise can be tolerated in supervised learning, while systematic errors (as caused by biased annotators) can seriously impair the performance of a supervised classifier even if the observed accuracy of the classifier on a test set coming from the same population as the training data is as high as 0.8. Related work (Beigman Klebanov et al., 2008; Beigman Klebanov and Beigman, 2009) has been studying annotation noise in a multi-annotator setting, distinguishing between hard cases (unreliably annotated due to genuine ambiguity) and easy cases (reliably annotated data). The authors argue that even for those data points where the annotators agreed on one particular class, a proportion of the agreement might be merely due to chance. Following this assumption, the authors propose a measure to estimate the amount of annotation noise in the data after removing all hard cases. Klebanov et al. (2008; 2009) show that, according to their model, high inter-annotator agreement (κ) achieved in an annotation scenario with two annotators is no guarantee for a high-quality data set. 
Their model, however, assumes that a) all instances where annotators disagreed are in fact hard cases, and b) that for the hard cases the annotators decisions are obtained by coin-flips. In our experience, some amount of disagreement can also be observed for easy cases, caused by attention slips or by a deviant interpretation of some class(es) by one of the annotators, and the annotation decision of an individual annotator cannot so much be described as random choice (coin-flip) but as systematically biased selection, causing the types of errors which have been shown to be problematic for supervised classification (Reidsma and Carletta, 2008). Further problems arise in the AL scenario where the instances to be annotated are selected as a function of the sampling method and the annotation judgements made before. Therefore, Beigman and Klebanov Beigman (2009)’s approach of identifying unreliably annotated instances by disagreement is not applicable to AL, as most instances are annotated only once. 2.2 Annotation Noise and Active Learning For AL to be succesful, we need to remove systematic noise in the training data. The challenge we face is that we only have a small set of seed data and no information about the reliability of the annotations 44 assigned by the human coders. Zhu et al. (2008) present a method for detecting outliers in the pool of unannotated data to prevent these instances from becoming part of the training data. This approach is different from ours, where we focus on detecting annotation noise in the manually labelled training data produced by the human coders. Schein and Ungar (2007) provide a systematic investigation of 8 different sampling methods for AL and their ability to handle different types of noise in the data. The types of noise investigated are a) prediction residual error (the portion of squared error that is independent of training set size), and b) different levels of confusion among the categories. Type a) models the presence of unknown features that influence the true probabilities of an outcome: a form of noise that will increase residual error. Type b) models categories in the data set which are intrinsically hard to disambiguate, while others are not. Therefore, type b) errors are of greater interest to us, as it is safe to assume that intrinsically ambiguous categories will lead to biased coder decisions and result in the systematic annotation noise we are interested in. Schein and Ungar observe that none of the 8 sampling methods investigated in their experiment achieved a significant improvement over the random sampling baseline on type b) errors. In fact, entropy sampling and margin sampling even showed a decrease in performance compared to random sampling. For AL to work well on noisy data, we need to identify and remove this type of annotation noise during the AL process. To the best of our knowledge, there is no work on detecting and removing annotation noise by human coders during AL. 3 Experimental Setup To make sure that the data we use in our simulation is as close to real-world data as possible, we do not create an artificial data set as done in (Schein and Ungar, 2007; Reidsma and Carletta, 2008) but use real data from a WSD task for the German verb drohen (threaten).1 Drohen has three different word senses which can be disambiguated by humans with 1The data has been provided by the SALSA project: http://www.coli.uni-saarland.de/projects/salsa a high accuracy.2 This point is crucial to our setup. 
To control the amount of noise in the data, we need to be sure that the initial data set is noise-free. For classification we use a maximum entropy classifier.3 Our sampling method is uncertainty sampling (Lewis and Gale, 1994), a standard sampling heuristic for AL where new instances are selected based on the confidence of the classifier for predicting the appropriate label. As a measure of uncertainty we use Shannon entropy (1) (Zhang and Chen, 2002) and the margin metric (2) (Schein and Ungar, 2007). The first measure considers the model’s predictions q for each class c and selects those instances from the pool where the Shannon entropy is highest. − X c qc log qc (1) The second measure looks at the difference between the largest two values in the prediciton vector q, namely the two predicted classes c, c′ which are, according to our model, the most likely ones for instance xn, and selects those instances where the difference (margin) between the two predicted probabilities is the smallest. We discuss some details of this metric in Section 4. Mn = |P(c|xn) −P(c′|xn)| (2) The features we use for WSD are a combination of context features (word token with window size 11 and POS context with window size 7), syntactic features based on the output of a dependency parser4 and semantic features based on GermaNet hyperonyms. These settings were tuned to the target verb by (Rehbein et al., 2009). All results reported below are averages over a 5-fold cross validation. 3.1 Simulating Coder Errors in AL Before starting the AL trials we automatically separate the 2,500 sentences into test set (498 sentences) and pool (2,002 sentences),5 retaining the overall distribution of word senses in the data set. We insert a varying amount of noise into the pool data, 2In a pilot study where two human coders assigned labels to a set of 100 sentences, the coders agreed on 99% of the data. 3http://maxent.sourceforge.net 4The MaltParser: http://maltparser.org 5The split has been made automatically, the unusual numbers are caused by rounding errors. 45 test pool ALrand ALbias % errors 0% 0% 30% 30% drohen1-salsa 126 506 524 514 Comittment 129 520 522 327 Run risk 243 976 956 1161 Total 498 2002 2002 2002 Table 1: Distribution of word senses in pool and test sets starting from 0% up to 30% of noise, increasing by 2% in each trial. We assess the impact of annotation noise on active learning in three different settings. In the first setting, we randomly select new instances from the pool (random sampling; rand). In the second setting, we randomly replace n percent of all labels (from 0 to 30) in the pool by another label before starting the active learning trial, but retain the distribution of the different labels in the pool data (active learning with random errors); (Table 1, ALrand, 30%). In the third setting we simulate biased decisions by a human annotator. For a certain fraction (0 to 30%) of instances of a particular non-majority class, we substitute the majority class label for the gold label, thereby producing a more skewed distribution than in the original pool (active learning with biased errors); (Table 1, ALbias, 30%). 
For all three settings (rand, ALrand, ALbias) and each degree of noise (0-30%), we run active learning simulations on the already annotated data, simulating the annotation process by selecting one new, prelabelled instance per trial from the pool and, instead of handing them over to a human coder, assigning the known (possibly erroneous) label to the instance and adding it to the training set. We use the same split (test, pool) for all three settings and all degrees of noise, with identical test sets for all trials. 3.2 Results Figure 1 shows active learning curves for the different settings and varying degrees of noise. The horizontal black line slightly below 0.5 accuracy shows the majority baseline (the performance obtained when always assigning the majority class). For all degrees of randomly inserted noise, active learning (ALrand) outperforms random sampling (rand) at an early stage in the learning process. Looking at the biased errors (ALbias), we see a different picture. With a low degree of noise, the curves for ALrand and ALbias are very similar. When inserting more noise, performance for ALbias decreases, and with around 20% of biased errors in the pool AL performs worse than our random sampling baseline. In the random noise setting (ALrand), even after inserting 30% of errors AL clearly outperforms random sampling. Increasing the size of the seed data reduces the effect slightly, but does not prevent it (not shown here due to space limitations). This confirms the findings that under certain circumstances AL performs worse than random sampling (Dang, 2004; Schein and Ungar, 2007; Rehbein et al., 2010). We could also confirm Schein and Ungar (2007)’s observation that margin sampling is less sensitive to certain types of noise than entropy sampling (Table 2). Because of space limitations we only show curves for margin sampling. For entropy sampling, the general trend is the same, with results being slightly lower than for margin sampling. 4 Detecting Annotation Noise Uncertainty sampling using the margin metric selects instances for which the difference between classifier predictions for the two most probable classes c, c′ is very small (Section 3, Equation 2). When selecting unlabelled instances from the pool, this metric picks examples which represent regions of uncertainty between classes which have yet to be learned by the classifier and thus will advance the learning process. Our human coder, however, is not the perfect oracle assumed in most AL simulations, and might also assign incorrect labels. The filter approach has two objectives: a) to detect incorrect labels assigned by human coders, and b) to prevent the hard cases (following the terminology of Klebanov et al. (2008)) from becoming part of the training data. We proceed as follows. Our approach makes use of the limited set of seed data S and uses heuristics to detect unreliably annotated instances. We assume that the instances in S have been validated thoroughly. We train an ensemble of classifiers E on subsets of S, and use E to decide whether or not a newly annotated instance should be added to the seed. 
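For reference, the two uncertainty measures of Equations 1 and 2, which also underlie the margin-based filtering heuristic introduced below, are simple to compute from the classifier's posterior. The sketch is illustrative: the maximum-entropy classifier is abstracted to a list of per-class probability vectors over the pool.

```python
import numpy as np

def entropy_uncertainty(probs):
    """Shannon entropy of the predicted class distribution for one instance (Eq. 1)."""
    p = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

def margin_uncertainty(probs):
    """Difference between the two largest class probabilities (Eq. 2)."""
    top_two = np.sort(np.asarray(probs, dtype=float))[-2:]
    return float(top_two[1] - top_two[0])

def select_next(pool_probs, metric="margin"):
    """Index of the pool instance to annotate next under uncertainty sampling."""
    if metric == "entropy":
        return int(np.argmax([entropy_uncertainty(p) for p in pool_probs]))
    return int(np.argmin([margin_uncertainty(p) for p in pool_probs]))
```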
46 error=2% Training size Accuracy 0 250 600 950 0.4 0.5 0.6 0.7 0.8 rand al_rand al_bias error=6% Training size Accuracy 0 250 600 950 0.4 0.5 0.6 0.7 0.8 error=10% Training size Accuracy 0 250 600 950 0.4 0.5 0.6 0.7 0.8 error=14% Training size Accuracy 0 250 600 950 0.4 0.5 0.6 0.7 0.8 error=18% Training size Accuracy 0 250 600 950 0.4 0.5 0.6 0.7 0.8 rand al_rand al_bias error=22% Training size Accuracy 0 250 600 950 0.4 0.5 0.6 0.7 0.8 error=26% Training size Accuracy 0 250 600 950 0.4 0.5 0.6 0.7 0.8 error=30% Training size Accuracy 0 250 600 950 0.4 0.5 0.6 0.7 0.8 Figure 1: Active learning curves for varying degrees of noise, starting from 0% up to 30% for a training size up to 1200 instances (solid circle (black): random sampling; filled triangle point-up (red): AL with random errors; cross (green): AL with biased errors) 47 filter % error 0 4 8 12 16 20 24 28 30 rand 0.763 0.752 0.736 0.741 0.726 0.708 0.707 0.677 0.678 entropy ALrand 0.806 0.786 0.779 0.743 0.752 0.762 0.731 0.724 0.729 entropy y ALrand 0.792 0.786 0.777 0.760 0.771 0.748 0.730 0.729 0.727 margin ALrand 0.795 0.795 0.782 0.771 0.758 0.755 0.737 0.719 0.708 margin y ALrand 0.800 0.785 0.773 0.777 0.765 0.766 0.734 0.735 0.718 entropy ALbias 0.806 0.793 0.759 0.748 0.702 0.651 0.625 0.630 0.622 entropy y ALbias 0.802 0.781 0.777 0.735 0.702 0.678 0.687 0.624 0.616 margin ALbias 0.795 0.789 0.770 0.753 0.706 0.684 0.656 0.634 0.624 margin y ALbias 0.787 0.781 0.787 0.768 0.739 0.700 0.671 0.653 0.651 Table 2: Accuracy for the different sampling methods without and with filtering after adding 500 instances to the seed data There are a number of problems with this approach. First, there is the risk of overfitting S. Second, we know that classifier accuracy in the early phase of AL is low. Therefore, using classifier predictions at this stage to accept or reject new instances could result in poor choices that might harm the learning proceess. To avoid this and to generalise over S to prevent overfitting, we do not directly train our ensemble on instances from S. Instead, we create new feature vectors Fgen on the basis of the feature vectors Fseed in S. For each class in S, we extract all attribute-value pairs from the feature vectors for this particular class. For each class, we randomly select features (with replacement) from Fseed and combine them into a new feature vector Fgen, retaining the distribution of the different classes in the data. As a result, we obtain a more general set of feature vectors Fgen with characteristic features being distributed more evenly over the different feature vectors. In the next step we train n = 5 maximum entropy classifiers on subsets of Fgen, excluding the instances last annotated by the oracle. Each subset is half the size of the current S. We use the ensemble to predict the labels for the new instances and, based on the predictions, accept or reject these, following the two heuristics below (also see Figure 2). 1. If all n ensemble classifiers agree on one label but disagree with the oracle ⇒reject. 2. If the sum of the margins predicted by the ensemble classifiers is below a particular theshold tmargin ⇒reject. The threshold tmargin was set to 0.01, based on a qualitative data analysis. 
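A compact sketch of the two components just described, the resampled feature vectors Fgen and the accept/reject decision, is given below. It is illustrative only: feature vectors are assumed to be dictionaries of attribute-value pairs, ensemble predictions are per-class probability lists, and the number of features drawn per generated vector is an assumption since it is not fixed above.

```python
import random

def resample_feature_vectors(seed):
    """Build generalized vectors F_gen: pool the attribute-value pairs of each class,
    then draw features with replacement, one generated vector per seed instance so
    the class distribution is preserved.  seed: list of (feature_dict, label)."""
    by_class = {}
    for feats, label in seed:
        by_class.setdefault(label, []).extend(feats.items())
    generated = []
    for feats, label in seed:
        sampled = dict(random.choices(by_class[label], k=len(feats)))
        generated.append((sampled, label))
    return generated

def keep_instance(ensemble_probs, oracle_label, classes, t_margin=0.01):
    """Heuristics H1/H2: reject if all ensemble members agree on a label that differs
    from the oracle's, or if the summed prediction margins fall below t_margin.
    ensemble_probs: one per-class probability list per ensemble member."""
    votes = [classes[p.index(max(p))] for p in ensemble_probs]
    if len(set(votes)) == 1 and votes[0] != oracle_label:
        return False                           # H1: unanimous disagreement with oracle
    margins = [sorted(p)[-1] - sorted(p)[-2] for p in ensemble_probs]
    if sum(margins) < t_margin:
        return False                           # H2: ensemble too uncertain
    return True
```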
AL with Filtering: Input: annotated seed data S, unannotated pool P AL loop: • train classifier C on S • let C predict labels for data in P • select new instances from P according to sampling method, hand over to oracle for annotation Repeat: after every c new instances annotated by the oracle • for each class in S, extract sets of features Fseed • create new, more general feature vectors Fgen from this set (with replacement) • train an ensemble E of n classifiers on different subsets of Fgen Filtering Heuristics: • if all n classifier in E agree on label but disagree with oracle: ⇒remove instance from seed • if margin is less than threshold tmargin: ⇒remove instance from seed Until done Figure 2: Heuristics for filtering unreliable data points (parameters used: initial seed size: 9 sentences, c = 10, n = 5, tmargin = 0.01) 48 In each iteration of the AL process, one new instance is selected using margin sampling. The instance is presented to the oracle who assigns a label. Then the instance is added to the seed data, thus influencing the selection of the next data point to be annotated. After 10 new instances have been added, we apply the filter technique which finally decides whether the newly added instances will remain in the seed data or will be removed. Figure 3 shows learning curves for the filter approach. With increasing amount of errors in the pool, a clear pattern emerges. For both sampling methods (ALrand, ALbias), the filtering step clearly improves results. Even for the noisier data sets with up to 26% of errors, ALbias with filtering performs at least as well as random sampling. 4.1 Error Analysis Next we want to find out what kind of errors the system could detect. We want to know whether the approach is able to detect the errors previously inserted into the data, and whether it manages to identify hard cases representing true ambiguities. To answer these questions we look at one fold of the ALbias data with 10% of noise. In 1,200 AL iterations the system rejected 116 instances (Table 3). The major part of the rejections was due to the majority vote of the ensemble classifiers (first heuristic, H1) which rejects all instances where the ensemble classifiers agree with each other but disagree with the human judgement. Out of the 105 instances rejected by H1, 41 were labelled incorrectly. This means that we were able to detect around half of the incorrect labels inserted in the pool. 11 instances were filtered out by the margin threshold (H2). None of these contained an incorerrors inserted in pool 173 err. instances selected by AL 93 instances rejected by H1+H2 116 instances rejected by H1 105 true errors rejected by H1 41 instances rejected by H2 11 true errors rejected by H2 0 Table 3: Error analysis of the instances rejected by the filtering approach rect label. On first glance H2 seems to be more lenient than H1, considering the number of rejected sentences. This, however, could also be an effect of the order in which we apply the filters. The different word senses are evenly distributed over the rejected instances (H1: Commitment 30, drohen1-salsa 38, Run risk 36; H2: Commitment 3, drohen1-salsa 4, Run risk 4). This shows that there is less uncertainty about the majority word sense, Run risk. It is hard to decide whether the correctly labelled instances rejected by the filtering method would have helped or hurt the learning process. 
Simply adding them to the seed data after the conclusion of AL would not answer this question, as it would merely tell us whether they improve classification accuracy further, but we still would not know what impact these instances would have had on the selection of instances during the AL process. 5 Conclusions This paper shows that certain types of annotation noise cause serious problems for active learning approaches. We showed how biased coder decisions can result in an accuracy for AL approaches which is below the one for random sampling. In this case, it is necessary to apply an additional filtering step to remove the noisy data from the training set. We presented an approach based on a resampling of the features in the seed data and guided by an ensemble of classifiers trained on the resampled feature vectors. We showed that our approach is able to detect a certain amount of noise in the data. Future work should focus on finding optimal parameter settings to make the filtering method more robust even for noisier data sets. We also plan to improve the filtering heuristics and to explore further ways of detecting human coder errors. Finally, we plan to test our method in a real-world annotation scenario. 6 Acknowledgments This work was funded by the German Research Foundation DFG (grant PI 154/9-3). We would like to thank the anonymous reviewers for their helpful comments and suggestions. 49 error=2% Training size Accuracy 0 250 600 950 0.4 0.5 0.6 0.7 0.8 rand ALrand ALrand_f ALbias ALbias_f error=6% Training size Accuracy 0 250 600 950 0.4 0.5 0.6 0.7 0.8 error=10% Training size Accuracy 0 250 600 950 0.4 0.5 0.6 0.7 0.8 error=14% Training size Accuracy 0 250 600 950 0.4 0.5 0.6 0.7 0.8 error=18% Training size Accuracy 0 250 600 950 0.4 0.5 0.6 0.7 0.8 rand ALrand ALrand_f ALbias ALbias_f error=22% Training size Accuracy 0 250 600 950 0.4 0.5 0.6 0.7 0.8 error=26% Training size Accuracy 0 250 600 950 0.4 0.5 0.6 0.7 0.8 error=30% Training size Accuracy 0 250 600 950 0.4 0.5 0.6 0.7 0.8 Figure 3: Active learning curves for varying degrees of noise, starting from 0% up to 30% for a training size up to 1200 instances (solid circle (black): random sampling; open circle (red): ALrand; cross (green): ALrand with filtering; filled triangle point-up (black): ALbias; plus (blue): ALbias with filtering) 50 References Beata Beigman Klebanov and Eyal Beigman. 2009. From annotator agreement to noise models. Computational Linguistics, 35:495–503, December. Beata Beigman Klebanov, Eyal Beigman, and Daniel Diermeier. 2008. Analyzing disagreements. In Proceedings of the Workshop on Human Judgements in Computational Linguistics, HumanJudge ’08, pages 2–7, Morristown, NJ, USA. Association for Computational Linguistics. Yee Seng Chan and Hwee Tou Ng. 2007. Domain adaptation with active learning for word sense disambiguation. In Proceedings of ACL-2007. Jinying Chen, Andrew Schein, Lyle Ungar, and Martha Palmer. 2006. An empirical study of the behavior of active learning for word sense disambiguation. In Proceedings of NAACL-2006, New York, NY. Hoa Trang Dang. 2004. Investigations into the role of lexical semantics in word sense disambiguation. PhD dissertation, University of Pennsylvania, Pennsylvania, PA. Gholamreza Haffari and Anoop Sarkar. 2009. Active learning for multilingual statistical machine translation. 
In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1 - Volume 1, pages 181– 189. Association for Computational Linguistics. Rebecca Hwa. 2004. Sample selection for statistical parsing. Computational Linguistics, 30(3):253–276. Florian Laws and H. Sch¨utze. 2008. Stopping criteria for active learning of named entity recognition. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), Manchester, UK, August. David D. Lewis and William A. Gale. 1994. A sequential algorithm for training text classifiers. In Proceedings of ACM-SIGIR, Dublin, Ireland. Grace Ngai and David Yarowsky. 2000. Rule writing or annotation: cost-efficient resource usage for base noun phrase chunking. In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, pages 117–125, Stroudsburg, PA, USA. Association for Computational Linguistics. Miles Osborne and Jason Baldridge. 2004. Ensemblebased active learning for parse selection. In Proceedings of HLT-NAACL 2004. Ines Rehbein, Josef Ruppenhofer, and Jonas Sunde. 2009. Majo - a toolkit for supervised word sense disambiguation and active learning. In Proceedings of the 8th Workshop on Treebanks and Linguistic Theories (TLT-8), Milano, Italy. Ines Rehbein, Josef Ruppenhofer, and Alexis Palmer. 2010. Bringing active learning to life. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING 2010), Beijing, China. Dennis Reidsma and Jean Carletta. 2008. Reliability measurement without limits. Computational Linguistics, 34:319–326. Eric Ringger, Peter McClanahan, Robbie Haertel, George Busby, Marc Carmen, James Carroll, Kevin Seppi, and Deryle Lonsdale. 2007. Active learning for part-ofspeech tagging: Accelerating corpus annotation. In Proceedings of the Linguistic Annotation Workshop, Prague. Andrew I. Schein and Lyle H. Ungar. 2007. Active learning for logistic regression: an evaluation. Machine Learning, 68:235–265. Dan Shen, Jie Zhang, Jian Su, Guodong Zhou, and ChewLim Tan. 2004. Multi-criteria-based active learning for named entity recognition. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, Stroudsburg, PA, USA. Association for Computational Linguistics. Katrin Tomanek and Udo Hahn. 2009. Reducing class imbalance during active learning for named entity annotation. In Proceedings of the 5th International Conference on Knowledge Capture, Redondo Beach, CA. Simon Tong and Daphne Koller. 1998. Support vector machine active learning with applications to text classification. In Proceedings of the Seventeenth International Conference on Machine Learning (ICML-00), pages 287–295. Cha Zhang and Tsuhan Chen. 2002. An active learning framework for content-based information retrieval. IEEE Transactions on Multimedia, 4(2):260–268. Jingbo Zhu and Edward Hovy. 2007. Active learning for word sense disambiguation with methods for addressing the class imbalance problem. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, Prague, Czech Republic. Jingbo Zhu, Huizhen Wang, Tianshun Yao, and Benjamin K. Tsou. 2008. Active learning with sampling by uncertainty and density for word sense disambiguation and text classification. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), Manchester, UK. 51
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 491–499, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Discovery of Topically Coherent Sentences for Extractive Summarization Asli Celikyilmaz Microsoft Speech Labs Mountain View, CA, 94041 [email protected] Dilek Hakkani-T¨ur Microsoft Speech Labs | Microsoft Research Mountain View, CA, 94041 [email protected] Abstract Extractive methods for multi-document summarization are mainly governed by information overlap, coherence, and content constraints. We present an unsupervised probabilistic approach to model the hidden abstract concepts across documents as well as the correlation between these concepts, to generate topically coherent and non-redundant summaries. Based on human evaluations our models generate summaries with higher linguistic quality in terms of coherence, readability, and redundancy compared to benchmark systems. Although our system is unsupervised and optimized for topical coherence, we achieve a 44.1 ROUGE on the DUC-07 test set, roughly in the range of state-of-the-art supervised models. 1 Introduction A query-focused multi-document summarization model produces a short-summary text of a set of documents, which are retrieved based on a user’s query. An ideal generated summary text should contain the shared relevant content among set of documents only once, plus other unique information from individual documents that are directly related to the user’s query addressing different levels of detail. Recent approaches to the summarization task has somewhat focused on the redundancy and coherence issues. In this paper, we introduce a series of new generative models for multiple-documents, based on a discovery of hierarchical topics and their correlations to extract topically coherent sentences. Prior research has demonstrated the usefulness of sentence extraction for generating summary text taking advantage of surface level features such as word repetition, position in text, cue phrases, etc, (Radev, 2004; Nenkova and Vanderwende, 2005a; Wan and Yang, 2006; Nenkova et al., 2006). Because documents have pre-defined structures (e.g., sections, paragraphs, sentences) for different levels of concepts in a hierarchy, most recent summarization work has focused on structured probabilistic models to represent the corpus concepts (Barzilay et al., 1999; Daum´e-III and Marcu, 2006; Eisenstein and Barzilay, 2008; Tang et al., 2009; Chen et al., 2000; Wang et al., 2009). In particular (Haghighi and Vanderwende, 2009; Celikyilmaz and HakkaniTur, 2010) build hierarchical topic models to identify salient sentences that contain abstract concepts rather than specific concepts. Nonetheless, all these systems crucially rely on extracting various levels of generality from documents, focusing little on redundancy and coherence issues in model building. A model than can focus on both issues is deemed to be more beneficial for a summarization task. Topical coherence in text involves identifying key concepts, the relationships between these concepts, and linking these relationships into a hierarchy. In this paper, we present a novel, fully generative Bayesian model of document corpus, which can discover topically coherent sentences that contain key shared information with as little detail and redundancy as possible. 
Our model can discover hierarchical latent structure of multi-documents, in which some words are governed by low-level topics (T) and others by high-level topics (H). The main contributions of this work are: −construction of a novel bayesian framework to 491 capture higher level topics (concepts) related to summary text discussed in §3, −representation of a linguistic system as a sequence of increasingly enriched models, which use posterior topic correlation probabilities in sentences to design a novel sentence ranking method in §4 and 5, −application of the new hierarchical learning method for generation of less redundant summaries discussed in §6. Our models achieve comparable qualitative results on summarization of multiple newswire documents. Human evaluations of generated summaries confirm that our model can generate non-redundant and topically coherent summaries. 2 Multi-Document Summarization Models Prior research has demonstrated the usefulness of sentence extraction for summarization based on lexical, semantic, and discourse constraints. Such models often rely on different approaches including: identifying important keywords (Nenkova et al., 2006); topic signatures based on user queries (Lin and Hovy, 2002; Conroy et al., 2006; Harabagiu et al., 2007); high frequency content word feature based learning (Nenkova and Vanderwende, 2005a; Nenkova and Vanderwende, 2005b), to name a few. Recent research focusing on the extraction of latent concepts from document clusters are close in spirit to our work (Barzilay and Lee, 2004; Daum´eIII and Marcu, 2006; Eisenstein and Barzilay, 2008; Tang et al., 2009; Wang et al., 2009). Some of these work (Haghighi and Vanderwende, 2009; Celikyilmaz and Hakkani-Tur, 2010) focus on the discovery of hierarchical concepts from documents (from abstract to specific) using extensions of hierarchal topic models (Blei et al., 2004) and reflect this hierarchy on the sentences. Hierarchical concept learning models help to discover, for instance, that ”baseball” and ”football” are both contained in a general class ”sports”, so that the summaries reference terms related to more abstract concepts like ”sports”. Although successful, the issue with concept learning methods for summarization is that the extracted sentences usually contain correlated concepts. We need a model that can identify salient sentences referring to general concepts of documents and there should be minimum correlation between them. Our approach differs from the early work, in that, we utilize the advantages of previous topic models and build an unsupervised generative model that can associate each word in each document with three random variables: a sentence S, a higher-level topic H, and a lower-level topic T, in an analogical way to PAM models (Li and McCallum, 2006), i.e., a directed acyclic graph (DAG) representing mixtures of hierarchical structure, where super-topics are multinomials over sub-topics at lower levels in the DAG. We define a tiered-topic clustering in which the upper nodes in the DAG are higher-level topics H, representing common co-occurence patterns (correlations) between lower-level topics T in documents. This has not been the focus in prior work on generative approaches for summarization task. Mainly, our model can discover correlated topics to eliminate redundant sentences in summary text. Rather than representing sentences as a layer in hierarchical models, e.g., (Haghighi and Vanderwende, 2009; Celikyilmaz and Hakkani-Tur, 2010), we model sentences as meta-variables. 
This is similar to author-topic models (Rosen-Zvi et al., 2004), in which words are generated by first selecting an author uniformly from an observed author list and then selecting a topic from a distribution over topics that is specific to that author. In our model, words are generated from different topics of documents by first selecting a sentence containing the word and then topics that are specific to that sentence. This way we can directly extract from documents the summary related sentences that contain high-level topics. In addition in (Celikyilmaz and Hakkani-Tur, 2010), the sentences can only share topics if the sentences are represented on the same path of captured topic hierarchy, restricting topic sharing across sentences on different paths. Our DAG identifies tiered topics distributed over document clusters that can be shared by each sentence. 3 Topic Coherence for Summarization In this section we discuss the main contribution, our two hierarchical mixture models, which improve summary generation performance through the use of tiered topic models. Our models can identify lowerlevel topics T (concepts) defined as distributions over words or higher-level topics H, which represent correlations between these lower level topics given 492 sentences. We present our synthetic experiment for model development to evaluate extracted summaries on redundancy measure. In §6, we demonstrate the performance of our models on coherence and informativeness of generated summaries by qualitative and intrinsic evaluations. For model development we use the DUC 2005 dataset1, which consists of 45 document clusters, each of which include 1-4 set of human generated summaries (10-15 sentences each). Each document cluster consists ∼25 documents (25-30 sentences/document) retrieved based on a user query. We consider each document cluster as a corpus and build 45 separate models. For the synthetic experiments, we include the provided human generated summaries of each corpus as additional documents. The sentences in human summaries include general concepts mentioned in the corpus, the salient sentences of documents. Contrary to usual qualitative evaluations of summarization tasks, our aim during development is to measure the percentage of sentences in a human summary that our model can identify as salient among all other document cluster sentences. Because human produced summaries generally contain non-redundant sentences, we use total number of top-ranked human summary sentences as a qualitative redundancy measure in our synthetic experiments. In each model, a document d is a vector of Nd words wd, where each wid is chosen from a vocabulary of size V , and a vector of sentences S, representing all sentences in a corpus of size SD. We identify sentences as meta-variables of document clusters, which the generative process models both sentences and documents using tiered topics. A sentence’s relatedness to summary text is tied to the document cluster’s user query. The idea is that a lexical word present or related to a query should increase its sentence’s probability of relatedness. 4 Two-Tiered Topic Model - TTM Our base model, the two-tiered topic model (TTM), is inspired by the hierarchical topic model, PAM, proposed by Li and McCallum (2006). PAM structures documents to represent and learn arbitrary, nested, and possibly sparse topic correlations using 1www-nlpir.nist.gov/projects/duc/data.html (Background) Specific Content Parameters w S Sentences x T LowerLevel Topics ! 
Summary Related Word Indicator SD K2 "H Summary Content Indicator Parameters # "T Lower-Level Topic Parameters Higher-Level Topic Parameters K1!K2 K1 " Documents in a Document Cluster Nd Document Sentence selector y Higher-Level Topics H Figure 1: Graphical model depiction of two-tiered topic model (TTM) described in section §4. S are sentences si=1..SD in document clusters. The high-level topics (Hk1=1...K1), representing topic correlations, are modeled as distributions over lowlevel-topics (Tk2=1...K2). Shaded nodes indicate observed variables. Hyper-parameters for φ, θH, θT , θ are omitted. a directed acyclic graph. Our goals are not so different: we aim to discover concepts from documents that would attribute for the general topics related to a user query, however, we want to relate this information to sentences. We represent sentences S by discovery of general (more general) to specific topics (Fig.1). Similarly, we represent summary unrelated (document specific) sentences as corpus specific distributions θ over background words wB, (functional words like prepositions, etc.). Our two-tiered topic model for salient sentence discovery can be generated for each word in the document (Algorithm 1) as follows: For a word wid in document d, a random variable xid is drawn, which determines if wid is query related, i.e., wid either exists in the query or is related to the query2. Otherwise, wid is unrelated to the user query. Then sentence si is chosen uniformly at random (ysi∼ Uniform(si)) from sentences in the document containing wid (deterministic if there is only one sentence containing wid). We assume that if a word is related to a query, it is likely to be summary-related 2We measure relatedness to a query if a word exists in the query or it is synonymous based on information extracted from WordNet (Miller, 1995). 493 H1 H2 H3 T1 T2 T3 T T T T wB ... W W W ... H4 T4 T W Sentences Document Specific Words ! S H T T W K1 K2 T3 :”network” “retail” C4 H1 starbucks, coffee, schultz, tazo, pasqua, states, subsidiary acquire, bought, purchase, disclose, jointventure, johnson starbucks, coffee, retailer, frappaccino francisco, pepsi, area, profit, network, internet, Francisco-based H2 H3 T2 :”coffee” T4 :”retail” T1 :”acquisition” High-Level Topics LowLevel Topics Figure 2: Depiction of TTM given the query ”D0718D: Starbucks Coffee : How has Starbucks Coffee attempted to expand and diversify through joint ventures, acquisitions, or subsidiaries?”. If a word is query/summary related sentence S, first a sentence then a high-level (H) and a low-level (T) topic is sampled. ( C represents that a random variable is a parent of all C random variables.) The bolded links from H −T represent correlated low-level topics. (so as the sampled sentence si). We keep track of the frequency of si’s in a vector, DS ∈ZSD. Every time an si is sampled for a query related wid, we increment its count, a degree of sentence saliency. Given that wid is related to a query, it is associated with two-tiered multinomial distributions: high-level H topics and low-level T topics. A highlevel topic Hki is chosen first from a distribution over low-level topics T specific to that si and one low-level topic Tkj is chosen from a distribution over words, and wid is generated from the sampled low-level topic. If wid is not query-related, it is generated as a background word wB. The resulting tiered model is shown as a graph and plate diagrams in Fig.1 & 2. 
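As a sanity check on the generative story just described (Algorithm 1), the following sketch simulates it forward for one document cluster. It is not the authors' code: the vocabulary size, topic counts, hyperparameter values, and the stub that decides query-relatedness are illustrative assumptions.

```python
# Illustrative forward simulation of the TTM generative story (Algorithm 1).
import numpy as np

rng = np.random.default_rng(0)
V, K1, K2, S = 50, 3, 4, 6          # vocab size, high-/low-level topics, sentences
alpha_H, alpha_T, beta, eta = 0.5, 0.5, 0.1, 1.0

phi = rng.dirichlet(beta * np.ones(V), size=(K1, K2))    # low-level topic-word dists
theta_bg = rng.dirichlet(0.1 * np.ones(V))               # background (doc-specific) dist
psi = rng.beta(eta, eta, size=S)                         # per-sentence relatedness prior
theta_H = rng.dirichlet(alpha_H * np.ones(K1))           # high-level topic distribution
theta_T = rng.dirichlet(alpha_T * np.ones(K2), size=K1)  # per-H distribution over T

saliency = np.zeros(S, dtype=int)    # plays the role of the DS counts used for ranking
n_words = 100
sent_of_word = rng.integers(0, S, size=n_words)  # stand-in for "sentences containing w"

for i in range(n_words):
    s = sent_of_word[i]
    x = rng.random() < psi[s]        # query-related? (deterministic in the paper if w is in the query)
    if x:
        saliency[s] += 1             # every related draw credits the sampled sentence
        h = rng.choice(K1, p=theta_H)
        t = rng.choice(K2, p=theta_T[h])
        w = rng.choice(V, p=phi[h, t])
    else:
        w = rng.choice(V, p=theta_bg)  # background word
print("per-sentence saliency counts:", saliency)
```

The saliency counter here corresponds to the DS vector that the ranking step of Section 4.2 later reads off.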
A sentence sampled from a query related word is associated with a distribution over K1 number of high-level topics Hki, each of which are also associated with K2 number of low-level topics Tkj, a multinomial over lexical words of a corpus. In Fig.2 the most confident words of four low-level topics is shown. The bolded links between Hki and Tkj represent the strength of corAlgorithm 1 Two-Tiered Topic Model Generation 1: Sample: si = 1..SD: Ψ ∼Beta(η), 2: k1 = 1...K1: θH ∼Dirichlet(αH), 3: k2 = 1...K1 × K2: θT ∼Dirichlet(αT ), 4: and k = 1..K2: φ ∼Dirichlet(β). 5: for documents d ←1, ..., D do 6: for words wid, i ←1, ..., Nd do 7: - Draw a discrete x ∼Binomial(Ψwid)⋆ 8: - If x = 1, wid is summary related; 9: · conditioned on S draw a sentence 10: ysi ∼Uniform(si) containing wi, 11: · sample a high-level topic Hk1 ∼θH k1(αH), 12: and a low-level topic Tk2 ∼θT k2(αT ), 13: · sample a word wik1k2 ∼φHk1Tk2(α), 14: - If x = 0, the word is unrelated ⋆⋆ 15: sample a word wB ∼θ(α), 16: corpus specific distribution. 17: end for 18: end for ⋆if wid exists or related to the the query then x = 1 deterministic, otherwise it is stochastically assigned x ∼Bin(Ψ). ⋆⋆wid is a background word. relation between Tkj’s, e.g., the topic ”acquisition” is found to be more correlated with ”retail” than the ”network” topic given H1. This information is used to rank sentences based on the correlated topics. 4.1 Learning and Inference for TTM Our learning procedure involves finding parameters, which likely integrates out model’s posterior distribution P(H, T|Wd, S), d∈D. EM algorithms might face problems with local maxima in topic models (Blei et al., 2003) suggesting implementation of approximate methods in which some of the parameters, e.g., θH, θT , ψ, and θ, can be integrated out, resulting in standard Dirichlet-multinomial as well as binomial distributions. We use Gibbs sampling which allows a combination of estimates from several local maxima of the posterior distribution. For each word, xid is sampled from a sentence specific binomial ψ which in turn has a smoothing prior η to determine if the sampled word wid is (query) summary-related or document-specific. Depending on xid, we either sample a sentence along with a high/low-level topic pair or just sample background words wB. The probability distribution over sentence assignments, P(ysi = s|S) si ∈S, is assumed to be uniform over the elements of S, and deterministic if there is only one sentence in the docu494 ment containing the corresponding word. The optimum hyper-parameters are set based on the training dataset model performance via cross-validation 3. For each word we sample a high-level Hki and a low-level Tkj topic if the word is query related (xid = 1). The sampling distribution for TTM for a word given the remaining topics and hyperparameters αH, αT , α, β, η is: pTTM(Hk1, Tk2, x = 1|w, H−k1, T−k2) ∝ αH + nk1 d P H′ αH′ + nd ∗ αT + nk1k2 d P T′ αT′ + ndH ∗ η + nk1k2 x 2η + nk1k2 ∗ βw + nw k1k2x P w′ βw′ + nk1k2x and when x = 0 (a corpus specific word), pTTM(x = 0|w, zH−k, zt−k) ∝ η + nx k1k2 2η + nk1k2 ∗ αw + nw P w′ αw′ + n The nk1 d is the number of occurrences of high-level topic k1 in document d, and nk1k2 d is the number of times the low-level topic k2 is sampled together with high-level topic k1 in d, nw k1k2x is the number of occurrences of word w sampled from path H-T given that the word is query related. Note that the number of tiered topics in the model is fixed to K1 and K2, which is optimized with validation experiments. 
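A schematic of the corresponding collapsed Gibbs update for a single query-related word is given below. It simplifies the full conditional shown above (the sentence-level η factor and some normalizers are dropped), so it illustrates the count bookkeeping rather than reproducing the exact sampling distribution; the array layout is likewise an assumption.

```python
# Schematic of one collapsed Gibbs update for a query-related word in TTM
# (simplified; not the authors' implementation).
import numpy as np

def sample_ht(w, n_k1_d, n_k1k2_d, n_w_k1k2, n_k1k2,
              alpha_H=0.5, alpha_T=0.5, beta=0.1, rng=None):
    """Sample a (high-level, low-level) topic pair for word w.

    n_k1_d[k1]         : high-level topic counts in the current document
    n_k1k2_d[k1, k2]   : joint H-T counts in the current document
    n_w_k1k2[w, k1, k2]: counts of word w on path (k1, k2)
    n_k1k2[k1, k2]     : total words assigned to path (k1, k2)
    """
    rng = rng or np.random.default_rng()
    K1, K2 = n_k1k2_d.shape
    V = n_w_k1k2.shape[0]
    doc_H = (alpha_H + n_k1_d)[:, None]                   # document-level H term
    doc_HT = alpha_T + n_k1k2_d                           # H-specific T term
    word_HT = (beta + n_w_k1k2[w]) / (V * beta + n_k1k2)  # path-specific word term
    p = (doc_H * doc_HT * word_HT).ravel()
    k = rng.choice(K1 * K2, p=p / p.sum())
    return divmod(k, K2)                                  # (k1, k2)

# Toy call with random counts, just to show the interface.
rng = np.random.default_rng(0)
K1, K2, V = 3, 4, 50
k1, k2 = sample_ht(w=7,
                   n_k1_d=rng.integers(0, 5, K1),
                   n_k1k2_d=rng.integers(0, 5, (K1, K2)),
                   n_w_k1k2=rng.integers(0, 3, (V, K1, K2)),
                   n_k1k2=rng.integers(1, 20, (K1, K2)),
                   rng=rng)
print("sampled high/low-level topics:", k1, k2)
```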
It is also possible to construct extended models of TTM using non-parametric priors, e.g., hierarchal Dirichlet processes (Li et al., 2007) (left for future work). 4.2 Summary Generation with TTM We can observe the frequency of draws of every sentence in a document cluster S, given it’s words are related, through DS ∈ZSD. We obtain DS during Gibbs sampling (in §4.1), which indicates a saliency score of each sentence sj ∈S, j = 1..SD: scoreTTM(sj) ∝# [wid ∈sj, xid = 1] /nwj (1) where wid indicates a word in a document d that exists in sj and is sampled as summary related based on random indicator variable xid. nwj is the number of words in sj and normalizes the score favoring 3An alternative way would be to use Dirichlet priors (Blei et al., 2003) which we opted for due to computational reasons but will be investigated as future research. sentences with many related words. We rank sentences based on (1). We compare TTM results on synthetic experiments against PAM (Li and McCallum, 2006) a similar topic model that clusters topics in a hierarchical structure, where super-topics are distributions over sub-topics. We obtain sentence scores for PAM models by calculating the sub-topic significance (TS) based on super-topic correlations, and discover topic correlations over the entire document space (corpus wide). Hence; we calculate the TS of a given sub-topic, k = 1, .., K2 by: TS(zk) = 1 D X d∈D 1 K1 K1 X k1 p(zk sub|zk1 sup) (2) where zk sub is a sub-topic k = 1..K2 and zk1 sup is a super-topic k1. The conditional probability of a subtopic k given a super-topic k1, p(zk sub|zk1 sup), explains the variation of that sub-topic in relation to other sub-topics. The higher the variation over the entire corpus, the better it represents the general theme of the documents. So, sentences including such topics will have higher saliency scores, which we quantify by imposing topic’s significance on vocabulary: scorePAM(si) = 1 K2 K2 X k Y w∈si p(w|zk sub) ∗TS(zk) (3) Fig. 4 illustrates the average salience sentence selection performance of TTM and PAM models (for 45 models). The x-axis represents the percentage of sentences selected by the model among all sentences in the DUC2005 corpus. 100% means all sentences in the corpus included in the summary text. The y-axis is the % of selected human sentences over all sentences. The higher the human summary sentences are ranked, the better the model is in selecting the salient sentences. Hence, the system which peaks sooner indicates a better model. In Fig.4 TTM is significantly better in identifying human sentences as salient in comparison to PAM. The statistical significance is measured based on the area under the curve averaged over 45 models. 5 Enriched Two-Tiered Topic Model Our model can discover words that are related to summary text using posteriors ˆP(θH) and ˆP(θT ), 495 “acquisition” “coffee” “network” H2 T,WH “retail” seattle, acquire, sales, billion... coffee, starbucks... purchase, disclose, joint-venture, johnson schultz, tazo, pasqua, states, subsidiary pepsi, area, profit,network francisco frappaccino, retailer, mocca, organic T2 T,WH High-Level Topics H1 WL T1 T3 T4 Low-Level Topics Low-Level Topics L=2 L=2 L=2 L=2 L=1 L=1 ! 
L Indicator Word Level (Background) Specific Content Parameters w S Sentences x T H LowerLevel Topics Higher-Level Topics Summary Related Word Indicator SD "H Summary Content Indicator Parameters # "T Lower-Level Topic Parameters Higher-Level Topic Parameters Sentence selector K1!K2 K1 y " Documents in a Document Cluster Nd Document $ K1 +K2 WL WL WL Figure 3: Graphical model depiction of sentence level enriched two-tiered model (ETTM) described in section §5. Each path defined by H/T pair k1k2, has a multinomial ζ over which level of the path outputs a given word. L indicates which level, i.e, high or low, the word is sampled from. On the right is the high-level topic-word and low-level topic-word distributions characterized by ETTM. Each Hk1 also represented as distributions over general words WH as well as indicates the degree of correlation between low-level topics denoted by boldness of the arrows. as well as words wB specific to documents (via ˆP(θ)) (Fig.1). TTM can discover topic correlations, but cannot differentiate if a word in a sentence is more general or specific given a query. Sentences with general words would be more suitable to include in summary text compared to sentences containing specific words. For instance for a given sentence: ”Starbucks Coffee has attempted to expand and diversify through joint ventures, and acquisitions.”, ”starbucks” and ”coffee” are more general words given the document clusters compared to ”joint” and ”ventures” (see Fig.2), because they appear more frequently in document clusters. However, TTM has no way of knowing that ”starbucks” and ”coffee” are common terms given the context. We would like to associate general words with highlevel topics, and context specific words with lowlevel topics. Sentence containing words that are sampled from high-level topics would be a better candidate for summary text. Thus; we present enriched TTM (ETTM) generative process (Fig.3), which samples words not only from low-level topics but also from high-level topics as well. ETTM discovers three separate distributions over words: (i) high-level topics H as distributions over corpus general words WH, (ii) low-level topics T as distributions over corpus specific words WL, and Level Generation for Enriched TTM Fetch ζk ∼Beta(γ); k = 1...K1 × K2. For wid, i = 1, ..., Nd, d = 1, ..D: If x = 1, sentence si is summary related; - sample Hk1 and Tk2 - sample a level L from Bin(ζk1k2) - If L = 1 (general word); wid ∼φHki - else if L = 2 (context specific); wid ∼φHk1Tk2 else if x = 0, do Step 14-16 in Alg. 1. (iii) background word distributions, i.e,. document specific WB (less confidence for summary text). Similar to TTM’s generative process, if wid is related to a given query, then x = 1 is deterministic, otherwise x ∈{0, 1} is stochastically determined if wid should be sampled as a background word (wB) or through hierarchical path, i.e., H-T pairs. We first sample a sentence si for wid uniformly at random from the sentences containing the word ysi∼Uniform(si)). At this stage we sample a level Lwid ∈{1, 2} for wid to determine if it is a high-level word, e.g., more general to context like ”starbucks” or ”coffee” or more specific to related context such as ”subsidiary”, ”frappucino”. Each path through the DAG, defined by a H-T pair (total of K1K2 pairs), has a binomial ζK1K2 over which 496 1 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0 % of human generated sentences used in the generated summary 0 10 20 30 40 50 60 70 80 90 100 % of sentences added to the generated summary text. 
0.7 0.6 0.5 0.4 0.3 0.2 0.1 0 0 2 4 6 8 10 ETIM TIM PAM hPAM ETIM TIM hPAM PAM TIM ETIM PAM HPAM Figure 4: Average saliency performance of four systems over 45 different DUC models. The area under each curve is shown in legend. Inseam is the magnified view of top-ranked 10% of sentences in corpus. level of the path outputs sampled word. If the word is a specific type, x = 0, then it is sampled from the background word distribution θ, a document specific multinomial. Once the level and conditional path is drawn (see level generation for ETTM above) the rest of the generative model is same as TTM. 5.1 Learning and Inference for ETTM For each word, x is sampled from a sentence specific binomial ψ, just like TTM. If the word is related to the query x = 1, we sample a high and low-level topic pair H −T as well as an additional level L is sampled to determine which level of topics the word should be sampled from. L is a corpus specific binomial one for all H −T pairs. If L = 1, the word is one of corpus general words and sampled from the high-level topic, otherwise (L = 2) the word is corpus specific and sampled from a the low-level topic. The optimum hyper-parameters are set based on training performance via cross validation. The conditional probabilities are similar to TTM, but with additional random variables, which determine the level of generality of words as follows: pETTM(Tk1, Tk2, L|w, T−k1, T−k2, L) ∝ pTTM(Tk1, Tk2, x = 1|.) ∗ γ+NL k1k2 2γ+nk1k2 5.2 Summary Generation with ETTM For ETTM models, we extend the TTM sentence score to be able to include the effect of the general words in sentences (as word sequences in language models) using probabilities of K1 high-level topic distributions, φw Hk=1..K1, as: scoreETTM(si) ∝# [wid ∈sj, xid = 1] /nwj ∗ 1 K1 P k=1..K1 Q w∈si p(w|Tk) where p(w|Tk) is the probability of a word in si being generated from high-level topic Hk. Using this score, we re-rank the sentences in documents of the synthetic experiment. We compare the results of ETTM to a structurally similar probabilistic model, entitled hierarchical PAM (Mimno et al., 2007), which is designed to capture topics on a hierarchy of two layers, i.e., super topics and subtopics, where super-topics are distributions over abstract words. In Fig. 4 out of 45 models ETTM has the best performance in ranking the human generated sentences at the top, better than the TTM model. Thus; ETTM is capable of capturing focused sentences with general words related to the main concepts of the documents and much less redundant sentences containing concepts specific to user query. 6 Final Experiments In this section, we qualitatively compare our models against state-of-the art models and later apply an intrinsic evaluation of generated summaries on topical coherence and informativeness. For a qualitative comparison with the previous state-of-the models, we use the standard summarization datasets on this task. We train our models on the datasets provided by DUC2005 task and validate the results on DUC 2006 task, which consist of a total of 100 document clusters. We evaluate the performance of our models on DUC2007 datasets, which comprise of 45 document clusters, each containing 25 news articles. The task is to create max. 250 word long summary for each document cluster. 6.1. ROUGE Evaluations: We train each document cluster as a separate corpus to find the optimum parameters of each model and evaluate on test document clusters. 
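The comparisons in this section rely on ROUGE recall, described next. As a rough illustration of the underlying idea only (not the official ROUGE toolkit; whitespace tokenization and a simple aggregation over references are assumptions), unigram recall can be computed as follows.

```python
# Rough unigram-recall illustration in the spirit of ROUGE-1 (no stemming,
# stop-word handling, or resampling; not the official ROUGE implementation).
from collections import Counter

def unigram_recall(candidate, references):
    cand_counts = Counter(candidate.lower().split())
    total_overlap = total_ref = 0
    for ref in references:
        ref_counts = Counter(ref.lower().split())
        total_overlap += sum(min(c, cand_counts[w]) for w, c in ref_counts.items())
        total_ref += sum(ref_counts.values())
    return total_overlap / total_ref if total_ref else 0.0

refs = ["starbucks expanded through acquisitions and joint ventures",
        "the company diversified via subsidiaries and acquisitions"]
cand = "starbucks grew through acquisitions , joint ventures , and subsidiaries"
print(round(unigram_recall(cand, refs), 3))
```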
ROUGE is a commonly used measure, a standard DUC evaluation metric, which computes recall over various n-grams statistics from a model generated summary against a set of human generated summaries. We report results in R-1 (recall against unigrams), R-2 (recall against bigrams), and R-SU4 497 ROUGE w/o stop words w/ stop words R-1 R-2 R-4 R-1 R-2 R-4 PYTHY 35.7 8.9 12.1 42.6 11.9 16.8 HIERSUM 33.8 9.3 11.6 42.4 11.8 16.7 HybHSum 35.1 8.3 11.8 45.6 11.4 17.2 PAM 32.1 7.1 11.0 41.7 9.1 15.3 hPAM 31.9 7.0 11.1 41.2 8.9 15.2 TTM∗ 34.0 8.7 11.5 44.7 10.7 16.5 ETTM∗ 32.4 8.3 11.2 44.1 10.4 16.4 Table 1: ROUGE results of the best systems on DUC2007 dataset (best results are bolded.) ∗indicate our models. (recall against skip-4 bigrams) ROUGE scores w/ and w/o stop words included. For our models, we ran Gibbs samplers for 2000 iterations for each configuration throwing out first 500 samples as burn-in. We iterated different values for hyperparameters and measured the performance on validation dataset to capture the optimum values. The following models are used as benchmark: (i) PYTHY (Toutanova et al., 2007): Utilizes human generated summaries to train a sentence ranking system using a classifier model; (ii) HIERSUM (Haghighi and Vanderwende, 2009): Based on hierarchical topic models. Using an approximation for inference, sentences are greedily added to a summary so long as they decrease KL-divergence of the generated summary concept distributions from document word-frequency distributions. (iii) HybHSum (Celikyilmaz and Hakkani-Tur, 2010): A semisupervised model, which builds a hierarchial LDA to probabilistically score sentences in training dataset as summary or non-summary sentences. Using these probabilities as output variables, it learns a discriminative classifier model to infer the scores of new sentences in testing dataset. (iv) PAM (Li and McCallum, 2006) and hPAM (Mimno et al., 2007): Two hierarchical topic models to discover high and lowlevel concepts from documents, baselines for synthetic experiments in §4 & §5. Results of our experiments are illustrated in Table 6. Our unsupervised TTM and ETTM systems yield a 44.1 R-1 (w/ stop-words) outperforming the rest of the models, except HybHSum. Because HybHSum uses the human generated summaries as supervision during model development and our systems do not, our performance is quite promising considering the generation is completely unsupervised without seeing any human generated summaries during training. However, the R-2 evaluation (as well as R-4) w/ stop-words does not outperform other models. This is because R-2 is a measure of bi-gram recall and neither of our models represent bi-grams whereas, for instance, PHTHY includes several bi-gram and higher order n-gram statistics. For topic models bigrams tend to degenerate due to generating inconsistent bag of bi-grams (Wallach, 2006). 6.2. Manual Evaluations: A common DUC task is to manually evaluate models on the quality of generated summaries. We compare our best model ETTM to the results of PAM, our benchmark model in synthetic experiments, as well as hybrid hierarchical summarization model, hLDA (Celikyilmaz and Hakkani-Tur, 2010). Human annotators are given two sets of summary text for each document set, generated from either one of the two approaches: best ETTM and PAM or best ETTM and HybHSum models. 
The annotators are asked to mark the better summary according to five criteria: non-redundancy (which summary is less redundant), coherence (which summary is more coherent), focus and readability (content and no unnecessary details), responsiveness and overall performance. We asked 3 annotators to rate DUC2007 predicted summaries (45 summary pairs per annotator). A total of 42 pairs are judged for ETTM vs. PAM models and 49 pairs for ETTM vs. HybHSum models. The evaluation results in frequencies are shown in Table 6. The participants rated ETTM generated summaries more coherent and focused compared to PAM, where the results are statistically significant (based on t-test on 95% confidence level) indicating that ETTM summaries are rated significantly better. The results of ETTM are slightly better than HybHSum. We consider our results promising because, being unsupervised, ETTM does not utilize human summaries for model development. 7 Conclusion We introduce two new models for extracting topically coherent sentences from documents, an important property in extractive multi-document summarization systems. Our models combine approaches from the hierarchical topic models. We empha498 PAM ETTM Tie HybHSum ETTM Tie Non-Redundancy 13 26 3 12 18 19 Coherence 13 26 3 15 18 16 Focus 14 24 4 14 17 18 Responsiveness 15 24 3 19 12 18 Overall 15 25 2 17 22 10 Table 2: Frequency results of manual evaluations. Tie indicates evaluations where two summaries are rated equal. size capturing correlated semantic concepts in documents as well as characterizing general and specific words, in order to identify topically coherent sentences in documents. We showed empirically that a fully unsupervised model for extracting general sentences performs well at summarization task using datasets that were originally used in building automatic summarization system challenges. The success of our model can be traced to its capability of directly capturing coherent topics in documents, which makes it able to identify salient sentences. Acknowledgments The authors would like to thank Dr. Zhaleh Feizollahi for her useful comments and suggestions. References R. Barzilay and L. Lee. 2004. Catching the drift: Probabilistic content models with applications to generation and summarization. In Proc. HLT-NAACL’04. R. Barzilay, K.R. McKeown, and M. Elhadad. 1999. Information fusion in the context of multi-document summarization. Proc. 37th ACL, pages 550–557. D. Blei, A. Ng, and M. Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research. D. Blei, T. Griffiths, M. Jordan, and J. Tenenbaum. 2004. Hierarchical topic models and the nested chinese restaurant process. In Neural Information Processing Systems [NIPS]. A. Celikyilmaz and D. Hakkani-Tur. 2010. A hybrid hierarchical model for multi-document summarization. Proc. 48th ACL 2010. D. Chen, J. Tang, L. Yao, J. Li, and L. Zhou. 2000. Query-focused summarization by combining topic model and affinity propagation. LNCS– Advances in Data and Web Development. J. Conroy, H. Schlesinger, and D. OLeary. 2006. Topicfocused multi-document summarization using an approximate oracle score. Proc. ACL. H. Daum´e-III and D. Marcu. 2006. Bayesian query focused summarization. Proc. ACL-06. J. Eisenstein and R. Barzilay. 2008. Bayesian unsupervised topic segmentation. Proc. EMNLP-SIGDAT. A. Haghighi and L. Vanderwende. 2009. Exploring content models for multi-document summarization. NAACL HLT-09. S. Harabagiu, A. Hickl, and F. Lacatusu. 2007. 
Satisfying information needs with multi-document summaries. Information Processing and Management. W. Li and A. McCallum. 2006. Pachinko allocation: Dag-structure mixture models of topic correlations. Proc. ICML. W. Li, D. Blei, and A. McCallum. 2007. Nonparametric bayes pachinko allocation. The 23rd Conference on Uncertainty in Artificial Intelligence. C.Y. Lin and E. Hovy. 2002. The automated acquisition of topic signatures fro text summarization. Proc. CoLing. G. A. Miller. 1995. Wordnet: A lexical database for english. ACM, Vol. 38, No. 11: 39-41. D. Mimno, W. Li, and A. McCallum. 2007. Mixtures of hierarchical topics with pachinko allocation. Proc. ICML. A. Nenkova and L. Vanderwende. 2005a. Document summarization using conditional random fields. Technical report, Microsoft Research. A. Nenkova and L. Vanderwende. 2005b. The impact of frequency on summarization. Technical report, Microsoft Research. A. Nenkova, L. Vanderwende, and K. McKowen. 2006. A composition context sensitive multi-document summarizer. Prof. SIGIR. D. R. Radev. 2004. Lexrank: graph-based centrality as salience in text summarization. Jrnl. Artificial Intelligence Research. M. Rosen-Zvi, T. Griffiths, M. Steyvers, and P. Smyth. 2004. The author-topic model for authors and documents. UAI. J. Tang, L. Yao, and D. Chens. 2009. Multi-topic based query-oriented summarization. SIAM International Conference Data Mining. K. Toutanova, C. Brockett, M. Gamon, J. Jagarlamudi, H. Suzuki, and L. Vanderwende. 2007. The phthy summarization system: Microsoft research at duc 2007. In Proc. DUC. H. Wallach. 2006. Topic modeling: Beyond bag-ofwords. Proc. ICML 2006. X. Wan and J. Yang. 2006. Improved affinity graph based multi-document summarization. HLT-NAACL. D. Wang, S. Zhu, T. Li, and Y. Gong. 2009. Multidocument summarization using sentence-based topic models. Proc. ACL 2009. 499
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 500–509, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Coherent Citation-Based Summarization of Scientific Papers Amjad Abu-Jbara EECS Department University of Michigan Ann Arbor, MI, USA [email protected] Dragomir Radev EECS Department and School of Information University of Michigan Ann Arbor, MI, USA [email protected] Abstract In citation-based summarization, text written by several researchers is leveraged to identify the important aspects of a target paper. Previous work on this problem focused almost exclusively on its extraction aspect (i.e. selecting a representative set of citation sentences that highlight the contribution of the target paper). Meanwhile, the fluency of the produced summaries has been mostly ignored. For example, diversity, readability, cohesion, and ordering of the sentences included in the summary have not been thoroughly considered. This resulted in noisy and confusing summaries. In this work, we present an approach for producing readable and cohesive citation-based summaries. Our experiments show that the proposed approach outperforms several baselines in terms of both extraction quality and fluency. 1 Introduction Scientific research is a cumulative activity. The work of downstream researchers depends on access to upstream discoveries. The footnotes, end notes, or reference lists within research articles make this accumulation possible. When a reference appears in a scientific paper, it is often accompanied by a span of text describing the work being cited. We name the sentence that contains an explicit reference to another paper citation sentence. Citation sentences usually highlight the most important aspects of the cited paper such as the research problem it addresses, the method it proposes, the good results it reports, and even its drawbacks and limitations. By aggregating all the citation sentences that cite a paper, we have a rich source of information about it. This information is valuable because human experts have put their efforts to read the paper and summarize its important contributions. One way to make use of these sentences is creating a summary of the target paper. This summary is different from the abstract or a summary generated from the paper itself. While the abstract represents the author’s point of view, the citation summary is the summation of multiple scholars’ viewpoints. The task of summarizing a scientific paper using its set of citation sentences is called citationbased summarization. There has been previous work done on citationbased summarization (Nanba et al., 2000; Elkiss et al., 2008; Qazvinian and Radev, 2008; Mei and Zhai, 2008; Mohammad et al., 2009). Previous work focused on the extraction aspect; i.e. analyzing the collection of citation sentences and selecting a representative subset that covers the main aspects of the paper. The cohesion and the readability of the produced summaries have been mostly ignored. This resulted in noisy and confusing summaries. In this work, we focus on the coherence and readability aspects of the problem. Our approach produces citation-based summaries in three stages: preprocessing, extraction, and postprocessing. Our experiments show that our approach produces better summaries than several baseline summarization systems. The rest of this paper is organized as follows. 
After we examine previous work in Section 2, we outline the motivation of our approach in Section 3. Section 4 describes the three stages of our summarization system. The evaluation and the results are presented in Section 5. Section 6 concludes the paper. 500 2 Related Work The idea of analyzing and utilizing citation information is far from new. The motivation for using information latent in citations has been explored tens of years back (Garfield et al., 1984; Hodges, 1972). Since then, there has been a large body of research done on citations. Nanba and Okumura (2000) analyzed citation sentences and automatically categorized citations into three groups using 160 pre-defined phrasebased rules. They also used citation categorization to support a system for writing surveys (Nanba and Okumura, 1999). Newman (2001) analyzed the structure of the citation networks. Teufel et al. (2006) addressed the problem of classifying citations based on their function. Siddharthan and Teufel (2007) proposed a method for determining the scientific attribution of an article by analyzing citation sentences. Teufel (2007) described a rhetorical classification task, in which sentences are labeled as one of Own, Other, Background, Textual, Aim, Basis, or Contrast according to their role in the authors argument. In parts of our approach, we were inspired by this work. Elkiss et al. (2008) performed a study on citation summaries and their importance. They concluded that citation summaries are more focused and contain more information than abstracts. Mohammad et al. (2009) suggested using citation information to generate surveys of scientific paradigms. Qazvinian and Radev (2008) proposed a method for summarizing scientific articles by building a similarity network of the citation sentences that cite the target paper, and then applying network analysis techniques to find a set of sentences that covers as much of the summarized paper facts as possible. We use this method as one of the baselines when we evaluate our approach. Qazvinian et al. (2010) proposed a citation-based summarization method that first extracts a number of important keyphrases from the set of citation sentences, and then finds the best subset of sentences that covers as many keyphrases as possible. Qazvinian and Radev (2010) addressed the problem of identifying the non-explicit citing sentences to aid citation-based summarization. 3 Motivation The coherence and readability of citation-based summaries are impeded by several factors. First, many citation sentences cite multiple papers besides the target. For example, the following is a citation sentence that appeared in the NLP literature and talked about Resnik’s (1999) work. (1) Grefenstette and Nioche (2000) and Jones and Ghani (2000) use the web to generate corpora for languages where electronic resources are scarce, while Resnik (1999) describes a method for mining the web for bilingual texts. The first fragment of this sentence describes different work other than Resnik’s. The contribution of Resnik is mentioned in the underlined fragment. Including the irrelevant fragments in the summary causes several problems. First, the aim of the summarization task is to summarize the contribution of the target paper using minimal text. These fragments take space in the summary while being irrelevant and less important. Second, including these fragments in the summary breaks the context and, hence, degrades the readability and confuses the reader. 
Third, the existence of irrelevant fragments in a sentence makes the ranking algorithm assign a low weight to it although the relevant fragment may cover an aspect of the paper that no other sentence covers. A second factor has to do with the ordering of the sentences included in the summary. For example, the following are two other citation sentences for Resnik (1999). (2) Mining the Web for bilingual text (Resnik, 1999) is not likely to provide sufficient quantities of high quality data. (3) Resnik (1999) addressed the issue of language identification for finding Web pages in the languages of interest. If these two sentences are to be included in the summary, the reasonable ordering would be to put the second sentence first. Thirdly, in some instances of citation sentences, the reference is not a syntactic constituent in the sen501 tence. It is added just to indicate the existence of citation. For example, in sentence (2) above, the reference could be safely removed from the sentence without hurting its grammaticality. In other instances (e.g. sentence (3) above), the reference is a syntactic constituent of the sentence and removing it makes the sentence ungrammatical. However, in certain cases, the reference could be replaced with a suitable pronoun (i.e. he, she or they). This helps avoid the redundancy that results from repeating the author name(s) in every sentence. Finally, a significant number of citation sentences are not suitable for summarization (Teufel et al., 2006) and should be filtered out. The following sentences are two examples. (4) The two algorithms we employed in our dependency parsing model are the Eisner parsing (Eisner, 1996) and Chu-Lius algorithm (Chu and Liu, 1965). (5) This type of model has been used by, among others, Eisner (1996). Sentence (4) appeared in a paper by Nguyen et al (2007). It does not describe any aspect of Eisner’s work, rather it informs the reader that Nguyen et al. used Eisner’s algorithm in their model. There is no value in adding this sentence to the summary of Eisner’s paper. Teufel (2007) reported that a significant number of citation sentences (67% of the sentences in her dataset) were of this type. Likewise, the comprehension of sentence (5) depends on knowing its context (i.e. its surrounding sentences). This sentence alone does not provide any valuable information about Eisner’s paper and should not be added to the summary unless its context is extracted and included in the summary as well. In our approach, we address these issues to achieve the goal of improving the coherence and the readability of citation-based summaries. 4 Approach In this section we describe a system that takes a scientific paper and a set of citation sentences that cite it as input, and outputs a citation summary of the paper. Our system produces the summaries in three stages. In the first stage, the citation sentences are preprocessed to rule out the unsuitable sentences and the irrelevant fragments of sentences. In the second stage, a number of citation sentences that cover the various aspects of the paper are selected. In the last stage, the selected sentences are post-processed to enhance the readability of the summary. We describe the stages in the following three subsections. 4.1 Preprocessing The aim of this stage is to determine which pieces of text (sentences or fragments of sentences) should be considered for selection in the next stage and which ones should be excluded. 
This stage involves three tasks: reference tagging, reference scope identification, and sentence filtering. 4.1.1 Reference Tagging A citation sentence contains one or more references. At least one of these references corresponds to the target paper. When writing scientific articles, authors usually use standard patterns to include pointers to their references within the text. We use pattern matching to tag such references. The reference to the target is given a different tag than the references to other papers. The following example shows a citation sentence with all the references tagged and the target reference given a different tag. In <TREF>Resnik (1999)</TREF>, <REF>Nie, Simard, and Foster (2001)</REF>, <REF>Ma and Liberman (1999)</REF>, and <REF>Resnik and Smith (2002)</REF>, the Web is harvested in search of pages that are available in two languages. 4.1.2 Identifying the Reference Scope In the previous section, we showed the importance of identifying the scope of the target reference; i.e. the fragment of the citation sentence that corresponds to the target paper. We define the scope of a reference as the shortest fragment of the citation sentence that contains the reference and could form a grammatical sentence if the rest of the sentence was removed. To find such a fragment, we use a simple yet adequate heuristic. We start by parsing the sentence using the link grammar parser (Sleator and Temperley, 502 1991). Since the parser is not trained on citation sentences, we replace the references with placeholders before passing the sentence to the parser. Figure 1 shows a portion of the parse tree for Sentence (1) (from Section 1). Figure 1: An example showing the scope of a target reference We extract the scope of the reference from the parse tree as follows. We find the smallest subtree rooted at an S node (sentence clause node) and contains the target reference node. We extract the text that corresponds to this subtree if it is grammatical. Otherwise, we find the second smallest subtree rooted at an S node and so on. For example, the parse tree shown in Figure 1 suggests that the scope of the reference is: Resnik (1999) describes a method for mining the web for bilingual texts. 4.1.3 Sentence Filtering The task in this step is to detect and filter out unsuitable sentences; i.e., sentences that depend on their context (e.g. Sentence (5) above) or describe the own work of their authors, not the contribution of the target paper (e.g Sentence (4) above). Formally, we classify the citation sentences into two classes: suitable and unsuitable sentences. We use a machine learning technique for this purpose. We extract a number of features from each sentence and train a classification model using these features. The trained model is then used to classify the sentences. We use Support Vector Machines (SVM) with linear kernel as our classifier. The features that we use in this step and their descriptions are shown in Table 1. 4.2 Extraction In the first stage, the sentences and sentence fragments that are not useful for our summarization task are ruled out. The input to this stage is a set of citation sentences that are believed to be suitable for the summary. From these sentences, we need to select a representative subset. The sentences are selected based on these three main properties: First, they should cover diverse aspects of the paper. Second, the sentences that cover the same aspect should not contain redundant information. 
For example, if two sentences talk about the drawbacks of the target paper, one sentence can mention the computation inefficiency, while the other criticize the assumptions the paper makes. Third, the sentences should cover as many important facts about the target paper as possible using minimal text. In this stage, the summary sentences are selected in three steps. In the first step, the sentences are classified into five functional categories: Background, Problem Statement, Method, Results, and Limitations. In the second step, we cluster the sentences within each category into clusters of similar sentences. In the third step, we compute the LexRank (Erkan and Radev, 2004) values for the sentences within each cluster. The summary sentences are selected based on the classification, the clustering, and the LexRank values. 4.2.1 Functional Category Classification We classify the citation sentences into the five categories mentioned above using a machine learning technique. A classification model is trained on a number of features (Table 2) extracted from a labeled set of citation sentences. We use SVM with linear kernel as our classifier. 4.2.2 Sentence Clustering In the previous step we determined the category of each citation sentence. It is very likely that sentences from the same category contain similar or overlapping information. For example, Sentences (6), (7), and (8) below appear in the set of citation 503 Feature Description Similarity to the target paper The value of the cosine similarity (using TF-IDF vectors) between the citation sentence and the target paper. Headlines The section in which the citation sentence appeared in the citing paper. We recognize 10 section types such as Introduction, Related Work, Approach, etc. Relative position The relative position of the sentence in the section and the paragraph in which it appears First person pronouns This feature takes a value of 1 if the sentence contains a first person pronoun (I, we, our, us, etc.), and 0 otherwise. Tense of the first verb A sentence that contains a past tense verb near its beginning is more likely to be describing previous work. Determiners Demonstrative Determiners (this, that, these, those, and which) and Alternative Determiners (another, other). The value of this feature is the relative position of the first determiner (if one exists) in the sentence. Table 1: The features used for sentence filtering Feature Description Similarity to the sections of the target paper The sections of the target paper are categorized into five categories: 1) Introduction, Motivation, Problem Statement. 2) Background, Prior Work, Previous Work, and Related Work. 3) Experiments, Results, and Evaluation. 4) Discussion, Conclusion, and Future work. 5) All other headlines. The value of this feature is the cosine similarity (using TF-IDF vectors) between the sentence and the text of the sections of each of the five section categories. Headlines This is the same feature that we used for sentence filtering in Section 4.1.3. Number of references in the sentence Sentences that contain multiple references are more likely to be Background sentences. Verbs We use all the verbs that their lemmatized form appears in at least three sentences that belong to the same category in the training set. Auxiliary verbs are excluded. In our annotated dataset, for example, the verb propose appeared in 67 sentences from the Methodology category, while the verbs outperform and achieve appeared in 33 Result sentences. 
Table 2: The features used for sentence classification sentences that cite Goldwater and Griffiths’ (2007). These sentences belong to the same category (i.e Method). Both Sentences (6) and (7) convey the same information about Goldwater and Griffiths (2007) contribution. Sentence (8), however, describes a different aspect of the paper methodology. (6) Goldwater and Griffiths (2007) proposed an information-theoretic measure known as the Variation of Information (VI) (7) Goldwater and Griffiths (2007) propose using the Variation of Information (VI) metric (8) A fully-Bayesian approach to unsupervised POS tagging has been developed by Goldwater and Griffiths (2007) as a viable alternative to the traditional maximum likelihood-based HMM approach. Clustering divides the sentences of each category into groups of similar sentences. Following Qazvinian and Radev (2008), we build a cosine similarity graph out of the sentences of each category. This is an undirected graph in which nodes are sentences and edges represent similarity relations. Each edge is weighted by the value of the cosine similarity (using TF-IDF vectors) between the two sentences the edge connects. Once we have the similarity network constructed, we partition it into clusters using a community finding technique. We use the Clauset algorithm (Clauset et al., 2004), a hierarchical agglomerative community finding algorithm that runs in linear time. 4.2.3 Ranking Although the sentences that belong to the same cluster are similar, they are not necessarily equally important. We rank the sentences within each cluster by computing their LexRank (Erkan and Radev, 2004). Sentences with higher rank are more important. 4.2.4 Sentence Selection At this point we have determined (Figure 2), for each sentence, its category, its cluster, and its relative importance. Sentences are added to the summary in order based on their category, the size of their clusters, then their LexRank values. The categories are 504 Figure 2: Example illustrating sentence selection ordered as Background, Problem, Method, Results, then Limitations. Clusters within each category are ordered by the number of sentences in them whereas the sentences of each cluster are ordered by their LexRank values. In the example shown in Figure 2, we have three categories. Each category contains several clusters. Each cluster contains several sentences with different LexRank values (illustrated by the sizes of the dots in the figure.) If the desired length of the summary is 3 sentences, the selected sentences will be in order S1, S12, then S18. If the desired length is 5, the selected sentences will be S1, S5, S12, S15, then S18. 4.3 Postprocessing In this stage, we refine the sentences that we extracted in the previous stage. Each citation sentence will have the target reference (the author’s names and the publication year) mentioned at least once. The reference could be either syntactically and semantically part of the sentence (e.g. Sentence (3) above) or not (e.g. Sentence (2)). The aim of this refinement step is to avoid repeating the author’s names and the publication year in every sentence. We keep the author’s names and the publication year only in the first sentence of the summary. In the following sentences, we either replace the reference with a suitable personal pronoun or remove it. The reference is replaced with a pronoun if it is part of the sentence and this replacement does not make the sentence ungrammatical. The reference is removed if it is not part of the sentence. 
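The selection order of Section 4.2.4 can be sketched as follows (the remaining post-processing rules continue after the sketch). This is one plausible reading of the description, chosen so that it reproduces the worked example in Figure 2; the dictionary keys and the choice to take only the top-ranked sentence per cluster are assumptions, not the authors' code.

```python
# Illustrative sketch of sentence selection (Section 4.2.4). Each sentence is
# assumed to be a dict with keys 'text', 'category', 'cluster', and 'lexrank'.
from collections import defaultdict

CATEGORY_ORDER = ["Background", "Problem", "Method", "Results", "Limitations"]

def select_summary(sentences, max_len=5):
    # Group sentences by (category, cluster) and order each category's
    # clusters by size, largest first.
    groups = defaultdict(list)
    for s in sentences:
        groups[(s["category"], s["cluster"])].append(s)
    per_cat = defaultdict(list)
    for (cat, _), members in groups.items():
        per_cat[cat].append(members)
    for cat in per_cat:
        per_cat[cat].sort(key=len, reverse=True)

    # Visit clusters in rounds: the largest cluster of each category first,
    # then the second-largest clusters, and so on, until the budget is used.
    picked, round_idx = [], 0
    while len(picked) < max_len:
        progressed = False
        for cat in CATEGORY_ORDER:
            if round_idx < len(per_cat[cat]) and len(picked) < max_len:
                members = per_cat[cat][round_idx]
                picked.append(max(members, key=lambda s: s["lexrank"]))
                progressed = True
        if not progressed:
            break
        round_idx += 1

    # Present the selected sentences grouped by category order.
    picked.sort(key=lambda s: CATEGORY_ORDER.index(s["category"]))
    return [s["text"] for s in picked]
```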
If the sentence contains references for other papers, they are removed if this doesn’t hurt the grammaticality of the sentence. To determine whether a reference is part of the sentence or not, we again use a machine learning approach. We train a model on a set of labeled sentences. The features used in this step are listed in Table 3. The trained model is then used to classify the references that appear in a sentence into three classes: keep, remove, replace. If a reference is to be replaced, and the paper has one author, we use ”he/she” (we do not know if the author is male or female). If the paper has two or more authors, we use ”they”. 5 Evaluation We provide three levels of evaluation. First, we evaluate each of the components in our system separately. Then we evaluate the summaries that our system generate in terms of extraction quality. Finally, we evaluate the coherence and readability of the summaries. 5.1 Data We use the ACL Anthology Network (AAN) (Radev et al., 2009) in our evaluation. AAN is a collection of more than 16000 papers from the Computational Linguistics journal, and the proceedings of the ACL conferences and workshops. AAN provides all citation information from within the network including the citation network, the citation sentences, and the citation context for each paper. We used 55 papers from AAN as our data. The papers have a variable number of citation sentences, ranging from 15 to 348. The total number of citation sentences in the dataset is 4,335. We split the data randomly into two different sets; one for evaluating the components of the system, and the other for evaluating the extraction quality and the readability of the generated summaries. The first set (dataset1, henceforth) contained 2,284 sentences coming from 25 papers. We asked humans with good background in NLP (the area of the annotated papers) to provide two annotations for each sentence in this set: 1) label the sentence as Background, Problem, Method, Result, Limitation, or Unsuitable, 2) for each reference in the sentence, determine whether it could be replaced with a pronoun, removed, or should be kept. 505 Feature Description Part-of-speech (POS) tag We consider the POS tags of the reference, the word before, and the word after. Before passing the sentence to the POS tagger, all the references in the sentence are replaced by placeholders. Style of the reference It is common practice in writing scientific papers to put the whole citation between parenthesis when the authors are not a constitutive part of the enclosing sentence, and to enclose just the year between parenthesis when the author’s name is a syntactic constituent in the sentence. Relative position of the reference This feature takes one of three values: first, last, and inside. Grammaticality Grammaticality of the sentence if the reference is removed/replaced. Again, we use the Link Grammar parser (Sleator and Temperley, 1991) to check the grammaticality Table 3: The features used for author name replacement Each sentence was given to 3 different annotators. We used the majority vote labels. We use Kappa coefficient (Krippendorff, 2003) to measure the inter-annotator agreement. Kappa coefficient is defined as: Kappa = P(A) −P(E) 1 −P(E) (1) where P(A) is the relative observed agreement among raters and P(E) is the hypothetical probability of chance agreement. The agreement among the three annotators on distinguishing the unsuitable sentences from the other five categories is 0.85. 
On Landis and Kochs(1977) scale, this value indicates an almost perfect agreement. The agreement on classifying the sentences into the five functional categories is 0.68. On the same scale this value indicates substantial agreement. The second set (dataset2, henceforth) contained 30 papers (2051 sentences). We asked humans with a good background in NLP (the papers topic) to generate a readable, coherent summary for each paper in the set using its citation sentences as the source text. We asked them to fix the length of the summaries to 5 sentences. Each paper was assigned to two humans to summarize. 5.2 Component Evaluation Reference Tagging and Reference Scope Identification Evaluation: We ran our reference tagging and scope identification components on the 2,284 sentences in dataset1. Then, we went through the tagged sentences and the extracted scopes, and counted the number of correctly/incorrectly tagged (extracted)/missed references (scopes). Our tagging Bkgrnd Prob Method Results Limit. Precision 64.62% 60.01% 88.66% 76.05% 33.53% Recall 72.47% 59.30% 75.03% 82.29% 59.36% F1 68.32% 59.65% 81.27% 79.04% 42.85% Table 4: Precision and recall results achieved by our citation sentence classifier component achieved 98.2% precision and 94.4% recall. The reference to the target paper was tagged correctly in all the sentences. Our scope identification component extracted the scope of target references with good precision (86.4%) but low recall (35.2%). In fact, extracting a useful scope for a reference requires more than just finding a grammatical substring. In future work, we plan to employ text regeneration techniques to improve the recall by generating grammatical sentences from ungrammatical fragments. Sentence Filtering Evaluation: We used Support Vector Machines (SVM) with linear kernel as our classifier. We performed 10-fold cross validation on the labeled sentences (unsuitable vs all other categories) in dataset1. Our classifier achieved 80.3% accuracy. Sentence Classification Evaluation: We used SVM in this step as well. We also performed 10fold cross validation on the labeled sentences (the five functional categories). This classifier achieved 70.1% accuracy. The precision and recall for each category are given in Table 4 Author Name Replacement Evaluation: The classifier used in this task is also SVM. We performed 10-fold cross validation on the labeled sentences of dataset1. Our classifier achieved 77.41% accuracy. 506 Produced using our system There has been a large number of studies in tagging and morphological disambiguation using various techniques such as statistical techniques, e.g. constraint-based techniques and transformation-based techniques. A thorough removal of ambiguity requires a syntactic process. A rule-based tagger described in Voutilainen (1995) was equipped with a set of guessing rules that had been hand-crafted using knowledge of English morphology and intuitions. The precision of rule-based taggers may exceed that of the probabilistic ones. The construction of a linguistic rule-based tagger, however, has been considered a difficult and time-consuming task. Produced using Qazvinian and Radev (2008) system Another approach is the rule-based or constraint-based approach, recently most prominently exemplified by the Constraint Grammar work (Karlsson et al. , 1995; Voutilainen, 1995b; Voutilainen et al. 
, 1992; Voutilainen and Tapanainen, 1993), where a large number of hand-crafted linguistic constraints are used to eliminate impossible tags or morphological parses for a given word in a given context. Some systems even perform the POS tagging as part of a syntactic analysis process (Voutilainen, 1995). A rule-based tagger described in (Voutilainen, 1995) is equipped with a set of guessing rules which has been hand-crafted using knowledge of English morphology and intuition. Older versions of EngCG (using about 1,150 constraints) are reported ( butilainen et al. 1992; Voutilainen and HeikkiUi 1994; Tapanainen and Voutilainen 1994; Voutilainen 1995) to assign a correct analysis to about 99.7% of all words while each word in the output retains 1.04-1.09 alternative analyses on an average, i.e. some of the ambiguities remait unresolved. We evaluate the resulting disambiguated text by a number of metrics defined as follows (Voutilainen, 1995a). Table 5: Sample Output 5.3 Extraction Evaluation To evaluate the extraction quality, we use dataset2 (that has never been used for training or tuning any of the system components). We use our system to generate summaries for each of the 30 papers in dataset2. We also generate summaries for the papers using a number of baseline systems (described in Section 5.3.1). All the generated summaries were 5 sentences long. We use the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) based on the longest common substrings (ROUGE-L) as our evaluation metric. 5.3.1 Baselines We evaluate the extraction quality of our system (FL) against 7 different baselines. In the first baseline, the sentences are selected randomly from the set of citation sentences and added to the summary. The second baseline is the MEAD summarizer (Radev et al., 2004) with all its settings set to default. The third baseline is LexRank (Erkan and Radev, 2004) run on the entire set of citation sentences of the target paper. The forth baseline is Qazvinian and Radev (2008) citation-based summarizer (QR08) in which the citation sentences are first clustered then the sentences within each cluster are ranked using LexRank. The remaining baselines are variations of our system produced by removing one component from the pipeline at a time. In one variation (FL-1), we remove the sentence filtering component. In another variation (FL-2), we remove the sentence classification component; so, all the sentences are assumed to come from one category in the subsequent components. In a third variation (FL-3), the clustering component is removed. To make the comparison of the extraction quality to those baselines fair, we remove the author name replacement component from our system and all its variations. 5.3.2 Results Table 6 shows the average ROUGE-L scores (with 95% confidence interval) for the summaries of the 30 papers in dataset2 generated using our system and the different baselines. The two human summaries were used as models for comparison. The Human score reported in the table is the result of comparing the two human summaries to each others. Statistical significance was tested using a 2-tailed paired t-test. The results are statistically significant at the 0.05 level. The results show that our approach outperforms all the baseline techniques. It achieves higher ROUGE-L score for most of the papers in our testing set. Comparing the score of FL-1 to the score of FL shows that sentence filtering has a significant impact on the results. 
It also shows that the classification and clustering components both improve the extraction quality. 5.4 Coherence and Readability Evaluation We asked human judges (not including the authors) to rate the coherence and readability of a number of summaries for each of dataset2 papers. For each paper we evaluated 3 summaries. The sum507 Human Random MEAD LexRank QR08 ROUGE-L 0.733 0.398 0.410 0.408 0.435 FL-1 FL-2 FL-3 FL ROUGE-L 0.475 0.511 0.525 0.539 Table 6: Extraction Evaluation Average Coherence Rating Number of summaries Human FL QV08 1≤coherence <2 0 9 17 2≤coherence <3 3 11 12 3≤coherence <4 16 9 1 4≤coherence ≤5 11 1 0 Table 7: Coherence Evaluation mary that our system produced, the human summary, and a summary produced by Qazvinian and Radev (2008) summarizer (the best baseline - after our system and its variations - in terms of extraction quality as shown in the previous subsection.) The summaries were randomized and given to the judges without telling them how each summary was produced. The judges were not given access to the source text. They were asked to use a five pointscale to rate how coherent and readable the summaries are, where 1 means that the summary is totally incoherent and needs significant modifications to improve its readability, and 5 means that the summary is coherent and no modifications are needed to improve its readability. We gave each summary to 5 different judges and took the average of their ratings for each summary. We used Weighted Kappa with linear weights (Cohen, 1968) to measure the interrater agreement. The Weighted Kappa measure between the five groups of ratings was 0.72. Table 7 shows the number of summaries in each rating range. The results show that our approach significantly improves the coherence of citation-based summarization. Table 5 shows two sample summaries (each 5 sentences long) for the Voutilainen (1995) paper. One summary was produced using our system and the other was produced using Qazvinian and Radev (2008) system. 6 Conclusions In this paper, we presented a new approach for citation-based summarization of scientific papers that produces readable summaries. Our approach involves three stages. The first stage preprocesses the set of citation sentences to filter out the irrelevant sentences or fragments of sentences. In the second stage, a representative set of sentences are extracted and added to the summary in a reasonable order. In the last stage, the summary sentences are refined to improve their readability. The results of our experiments confirmed that our system outperforms several baseline systems. Acknowledgments This work is in part supported by the National Science Foundation grant “iOPENER: A Flexible Framework to Support Rapid Learning in Unfamiliar Research Domains”, jointly awarded to University of Michigan and University of Maryland as IIS 0705832, and in part by the NIH Grant U54 DA021519 to the National Center for Integrative Biomedical Informatics. Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the supporters. References Aaron Clauset, M. E. J. Newman, and Cristopher Moore. 2004. Finding community structure in very large networks. Phys. Rev. E, 70(6):066111, Dec. Jacob Cohen. 1968. Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit. Psychological Bulletin, 70(4):213 – 220. Aaron Elkiss, Siwei Shen, Anthony Fader, G¨unes¸ Erkan, David States, and Dragomir Radev. 2008. 
Blind men and elephants: What do citation summaries tell us about a research article? J. Am. Soc. Inf. Sci. Technol., 59(1):51–62. Gunes Erkan and Dragomir R. Radev. 2004. Lexrank: graph-based lexical centrality as salience in text summarization. J. Artif. Int. Res., 22(1):457–479. E. Garfield, Irving H. Sher, and R. J. Torpie. 1984. The Use of Citation Data in Writing the History of Science. Institute for Scientific Information Inc., Philadelphia, Pennsylvania, USA. T. L. Hodges. 1972. Citation indexing-its theory and application in science, technology, and humanities. Ph.D. thesis, University of California at Berkeley.Ph.D. thesis, University of California at Berkeley. 508 Klaus H. Krippendorff. 2003. Content Analysis: An Introduction to Its Methodology. Sage Publications, Inc, 2nd edition, December. J. Richard Landis and Gary G. Koch. 1977. The Measurement of Observer Agreement for Categorical Data. Biometrics, 33(1):159–174, March. Qiaozhu Mei and ChengXiang Zhai. 2008. Generating impact-based summaries for scientific literature. In Proceedings of ACL-08: HLT, pages 816–824, Columbus, Ohio, June. Association for Computational Linguistics. Saif Mohammad, Bonnie Dorr, Melissa Egan, Ahmed Hassan, Pradeep Muthukrishan, Vahed Qazvinian, Dragomir Radev, and David Zajic. 2009. Using citations to generate surveys of scientific paradigms. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 584–592, Boulder, Colorado, June. Association for Computational Linguistics. Hidetsugu Nanba and Manabu Okumura. 1999. Towards multi-paper summarization using reference information. In IJCAI ’99: Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence, pages 926–931, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Hidetsugu Nanba, Noriko Kando, Manabu Okumura, and Of Information Science. 2000. Classification of research papers using citation links and citation types: Towards automatic review article generation. M. E. J. Newman. 2001. The structure of scientific collaboration networks. Proceedings of the National Academy of Sciences of the United States of America, 98(2):404–409, January. Vahed Qazvinian and Dragomir R. Radev. 2008. Scientific paper summarization using citation summary networks. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 689–696, Manchester, UK, August. Vahed Qazvinian and Dragomir R. Radev. 2010. Identifying non-explicit citing sentences for citation-based summarization. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 555–564, Uppsala, Sweden, July. Association for Computational Linguistics. Vahed Qazvinian, Dragomir R. Radev, and Arzucan Ozgur. 2010. Citation summarization through keyphrase extraction. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 895–903, Beijing, China, August. Coling 2010 Organizing Committee. Dragomir Radev, Timothy Allison, Sasha BlairGoldensohn, John Blitzer, Arda C¸ elebi, Stanko Dimitrov, Elliott Drabek, Ali Hakim, Wai Lam, Danyu Liu, Jahna Otterbacher, Hong Qi, Horacio Saggion, Simone Teufel, Michael Topper, Adam Winkel, and Zhu Zhang. 2004. MEAD - a platform for multidocument multilingual text summarization. In LREC 2004, Lisbon, Portugal, May. Dragomir R. Radev, Pradeep Muthukrishnan, and Vahed Qazvinian. 2009. The acl anthology network corpus. 
In NLPIR4DL ’09: Proceedings of the 2009 Workshop on Text and Citation Analysis for Scholarly Digital Libraries, pages 54–61, Morristown, NJ, USA. Association for Computational Linguistics. Advaith Siddharthan and Simone Teufel. 2007. Whose idea was this, and why does it matter? attributing scientific work to citations. In In Proceedings of NAACL/HLT-07. Daniel D. K. Sleator and Davy Temperley. 1991. Parsing english with a link grammar. In In Third International Workshop on Parsing Technologies. Simone Teufel, Advaith Siddharthan, and Dan Tidhar. 2006. Automatic classification of citation function. In In Proc. of EMNLP-06. Simone Teufel. 2007. Argumentative zoning for improved citation indexing. computing attitude and affect in text. In Theory and Applications, pages 159170. 509
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 510–520, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics A Class of Submodular Functions for Document Summarization Hui Lin Dept. of Electrical Engineering University of Washington Seattle, WA 98195, USA [email protected] Jeff Bilmes Dept. of Electrical Engineering University of Washington Seattle, WA 98195, USA [email protected] Abstract We design a class of submodular functions meant for document summarization tasks. These functions each combine two terms, one which encourages the summary to be representative of the corpus, and the other which positively rewards diversity. Critically, our functions are monotone nondecreasing and submodular, which means that an efficient scalable greedy optimization scheme has a constant factor guarantee of optimality. When evaluated on DUC 2004-2007 corpora, we obtain better than existing state-of-art results in both generic and query-focused document summarization. Lastly, we show that several well-established methods for document summarization correspond, in fact, to submodular function optimization, adding further evidence that submodular functions are a natural fit for document summarization. 1 Introduction In this paper, we address the problem of generic and query-based extractive summarization from collections of related documents, a task commonly known as multi-document summarization. We treat this task as monotone submodular function maximization (to be defined in Section 2). This has a number of critical benefits. On the one hand, there exists a simple greedy algorithm for monotone submodular function maximization where the summary solution obtained (say ˆS) is guaranteed to be almost as good as the best possible solution (say Sopt) according to an objective F. More precisely, the greedy algorithm is a constant factor approximation to the cardinality constrained version of the problem, so that F( ˆS) ≥(1 −1/e)F(Sopt) ≈0.632F(Sopt). This is particularly attractive since the quality of the solution does not depend on the size of the problem, so even very large size problems do well. It is also important to note that this is a worst case bound, and in most cases the quality of the solution obtained will be much better than this bound suggests. Of course, none of this is useful if the objective function F is inappropriate for the summarization task. In this paper, we argue that monotone nondecreasing submodular functions F are an ideal class of functions to investigate for document summarization. We show, in fact, that many well-established methods for summarization (Carbonell and Goldstein, 1998; Filatova and Hatzivassiloglou, 2004; Takamura and Okumura, 2009; Riedhammer et al., 2010; Shen and Li, 2010) correspond to submodular function optimization, a property not explicitly mentioned in these publications. We take this fact, however, as testament to the value of submodular functions for summarization: if summarization algorithms are repeatedly developed that, by chance, happen to be an instance of a submodular function optimization, this suggests that submodular functions are a natural fit. On the other hand, other authors have started realizing explicitly the value of submodular functions for summarization (Lin and Bilmes, 2010; Qazvinian et al., 2010). 
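The near-optimality guarantee above is what makes the greedy procedure attractive in practice. As a minimal, self-contained sketch (not the authors' implementation), the cardinality-constrained greedy can be written as follows; the facility-location-style objective and the similarity values are placeholders for illustration only:

```python
def facility_location(S, sim):
    """F(S) = sum_i max_{j in S} sim[i][j]; a monotone submodular coverage measure."""
    if not S:
        return 0.0
    return sum(max(row[j] for j in S) for row in sim)

def greedy_max(ground_set, F, k):
    """Add, k times, the element with the largest marginal gain F(S + v) - F(S)."""
    S = []
    for _ in range(k):
        best, best_gain = None, 0.0
        for v in ground_set:
            if v in S:
                continue
            gain = F(S + [v]) - F(S)
            if gain > best_gain:
                best, best_gain = v, gain
        if best is None:        # no element adds positive value
            break
        S.append(best)
    return S

# toy pairwise sentence similarities (hypothetical numbers)
sim = [[1.0, 0.8, 0.1, 0.0],
       [0.8, 1.0, 0.2, 0.1],
       [0.1, 0.2, 1.0, 0.7],
       [0.0, 0.1, 0.7, 1.0]]
print(greedy_max(range(4), lambda S: facility_location(S, sim), k=2))  # [1, 2]: one sentence per similar pair
```

Because the placeholder objective is monotone submodular, the returned set is guaranteed to score within a factor 1 − 1/e of the best size-k summary.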
Submodular functions share many properties in common with convex functions, one of which is that they are closed under a number of common combination operations (summation, certain compositions, restrictions, and so on). These operations give us the tools necessary to design a powerful submodular objective for submodular document summarization that extends beyond any previous work. We demonstrate this by carefully crafting a class of submodular func510 tions we feel are ideal for extractive summarization tasks, both generic and query-focused. In doing so, we demonstrate better than existing state-of-the-art performance on a number of standard summarization evaluation tasks, namely DUC-04 through to DUC07. We believe our work, moreover, might act as a springboard for researchers in summarization to consider the problem of “how to design a submodular function” for the summarization task. In Section 2, we provide a brief background on submodular functions and their optimization. Section 3 describes how the task of extractive summarization can be viewed as a problem of submodular function maximization. We also in this section show that many standard methods for summarization are, in fact, already performing submodular function optimization. In Section 4, we present our own submodular functions. Section 5 presents results on both generic and query-focused summarization tasks, showing as far as we know the best known ROUGE results for DUC04 through DUC-06, and the best known precision results for DUC-07, and the best recall DUC-07 results among those that do not use a web search engine. Section 6 discusses implications for future work. 2 Background on Submodularity We are given a set of objects V = {v1, . . . , vn} and a function F : 2V →R that returns a real value for any subset S ⊆V . We are interested in finding the subset of bounded size |S| ≤k that maximizes the function, e.g., argmaxS⊆V F(S). In general, this operation is hopelessly intractable, an unfortunate fact since the optimization coincides with many important applications. For example, F might correspond to the value or coverage of a set of sensor locations in an environment, and the goal is to find the best locations for a fixed number of sensors (Krause et al., 2008). If the function F is monotone submodular then the maximization is still NP complete, but it was shown in (Nemhauser et al., 1978) that a greedy algorithm finds an approximate solution guaranteed to be within e−1 e ∼0.63 of the optimal solution, as mentioned in Section 1. A version of this algorithm (Minoux, 1978), moreover, scales to very large data sets. Submodular functions are those that satisfy the property of diminishing returns: for any A ⊆B ⊆V \v, a submodular function F must satisfy F(A+v)−F(A) ≥ F(B + v) −F(B). That is, the incremental “value” of v decreases as the context in which v is considered grows from A to B. An equivalent definition, useful mathematically, is that for any A, B ⊆V , we must have that F(A) + F(B) ≥F(A ∪B) + F(A ∩B). If this is satisfied everywhere with equality, then the function F is called modular, and in such case F(A) = c + P a∈A ⃗fa for a sized |V | vector ⃗f of real values and constant c. A set function F is monotone nondecreasing if ∀A ⊆B, F(A) ≤F(B). As shorthand, in this paper, monotone nondecreasing submodular functions will simply be referred to as monotone submodular. Historically, submodular functions have their roots in economics, game theory, combinatorial optimization, and operations research. 
More recently, submodular functions have started receiving attention in the machine learning and computer vision community (Kempe et al., 2003; Narasimhan and Bilmes, 2005; Krause and Guestrin, 2005; Narasimhan and Bilmes, 2007; Krause et al., 2008; Kolmogorov and Zabin, 2004) and have recently been introduced to natural language processing for the tasks of document summarization (Lin and Bilmes, 2010) and word alignment (Lin and Bilmes, 2011). Submodular functions share a number of properties in common with convex and concave functions (Lov´asz, 1983), including their wide applicability, their generality, their multiple options for their representation, and their closure under a number of common operators (including mixtures, truncation, complementation, and certain convolutions). For example, if a collection of functions {Fi}i is submodular, then so is their weighted sum F = P i αiFi where αi are nonnegative weights. It is not hard to show that submodular functions also have the following composition property with concave functions: Theorem 1. Given functions F : 2V →R and f : R →R, the composition F′ = f ◦F : 2V →R (i.e., F′(S) = f(F(S))) is nondecreasing submodular, if f is non-decreasing concave and F is nondecreasing submodular. This property will be quite useful when defining submodular functions for document summarization. 511 3 Submodularity in Summarization 3.1 Summarization with knapsack constraint Let the ground set V represents all the sentences (or other linguistic units) in a document (or document collection, in the multi-document summarization case). The task of extractive document summarization is to select a subset S ⊆V to represent the entirety (ground set V ). There are typically constraints on S, however. Obviously, we should have |S| < |V | = N as it is a summary and should be small. In standard summarization tasks (e.g., DUC evaluations), the summary is usually required to be length-limited. Therefore, constraints on S can naturally be modeled as knapsack constraints: P i∈S ci ≤b, where ci is the non-negative cost of selecting unit i (e.g., the number of words in the sentence) and b is our budget. If we use a set function F : 2V →R to measure the quality of the summary set S, the summarization problem can then be formalized as the following combinatorial optimization problem: Problem 1. Find S∗∈argmax S⊆V F(S) subject to: X i∈S ci ≤b. Since this is a generalization of the cardinality constraint (where ci = 1, ∀i), this also constitutes a (well-known) NP-hard problem. In this case as well, however, a modified greedy algorithm with partial enumeration can solve Problem 1 near-optimally with (1−1/e)-approximation factor if F is monotone submodular (Sviridenko, 2004). The partial enumeration, however, is too computationally expensive for real world applications. In (Lin and Bilmes, 2010), we generalize the work by Khuller et al. (1999) on the budgeted maximum cover problem to the general submodular framework, and show a practical greedy algorithm with a (1 −1/√e)-approximation factor, where each greedy step adds the unit with the largest ratio of objective function gain to scaled cost, while not violating the budget constraint (see (Lin and Bilmes, 2010) for details). Note that in all cases, submodularity and monotonicity are two necessary ingredients to guarantee that the greedy algorithm gives near-optimal solutions. In fact, greedy-like algorithms have been widely used in summarization. 
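A rough sketch of the budgeted greedy step just described — pick the unit with the best ratio of marginal gain to scaled cost that still fits the budget — is given below. The objective, costs, and scaling exponent r are placeholders, and details of Lin and Bilmes (2010), such as the comparison against the best single element, are omitted, so this is an illustration rather than their exact procedure:

```python
def budgeted_greedy(ground_set, F, cost, budget, r=1.0):
    """Repeatedly take the element with the largest (F(S+v) - F(S)) / cost[v]**r;
    add it only if it fits the budget and has positive gain, otherwise discard it."""
    S, spent = [], 0.0
    remaining = set(ground_set)
    while remaining:
        v = max(remaining, key=lambda u: (F(S + [u]) - F(S)) / (cost[u] ** r))
        remaining.discard(v)
        if spent + cost[v] <= budget and F(S + [v]) - F(S) > 0:
            S.append(v)
            spent += cost[v]
    return S

# toy usage: the "cost" of a sentence is its word count, the budget is 10 words
sentences = ["the cat sat", "the cat sat on the mat", "dogs bark loudly"]
cost = [len(s.split()) for s in sentences]
F = lambda S: len({w for i in S for w in sentences[i].split()})  # word-coverage objective
print(budgeted_greedy(range(len(sentences)), F, cost, budget=10))  # [0, 2] on this toy input
```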
One of the more popular approaches is maximum marginal relevance (MMR) (Carbonell and Goldstein, 1998), where a greedy algorithm selects the most relevant sentences, and at the same time avoids redundancy by removing sentences that are too similar to ones already selected. Interestingly, the gain function defined in the original MMR paper (Carbonell and Goldstein, 1998) satisfies diminishing returns, a fact apparently unnoticed until now. In particular, Carbonell and Goldstein (1998) define an objective function gain of adding element k to set S (k /∈S) as: λSim1(sk, q) −(1 −λ) max i∈S Sim2(si, sk), (1) where Sim1(sk, q) measures the similarity between unit sk to a query q, Sim2(si, sk) measures the similarity between unit si and unit sk, and 0 ≤λ ≤1 is a trade-off coefficient. We have: Theorem 2. Given an expression for FMMR such that FMMR(S ∪{k})−FMMR(S) is equal to Eq. 1, FMMR is non-monotone submodular. Obviously, diminishing-returns hold since max i∈S Sim2(si, sk) ≤max i∈R Sim2(si, sk) for all S ⊆R, and therefore FMMR is submodular. On the other hand, FMMR, would not be monotone, so the greedy algorithm’s constant-factor approximation guarantee does not apply in this case. When scoring a summary at the sub-sentence level, submodularity naturally arises. Concept-based summarization (Filatova and Hatzivassiloglou, 2004; Takamura and Okumura, 2009; Riedhammer et al., 2010; Qazvinian et al., 2010) usually maximizes the weighted credit of concepts covered by the summary. Although the authors may not have noticed, their objective functions are also submodular, adding more evidence suggesting that submodularity is natural for summarization tasks. Indeed, let S be a subset of sentences in the document and denote Γ(S) as the set of concepts contained in S. The total credit of the concepts covered by S is then Fconcept(S) ≜ X i∈Γ(S) ci, where ci is the credit of concept i. This function is known to be submodular (Narayanan, 1997). 512 Similar to the MMR approach, in (Lin and Bilmes, 2010), a submodular graph based objective function is proposed where a graph cut function, measuring the similarity of the summary to the rest of document, is combined with a subtracted redundancy penalty function. The objective function is submodular but again, non-monotone. We theoretically justify that the performance guarantee of the greedy algorithm holds for this objective function with high probability (Lin and Bilmes, 2010). Our justification, however, is shown to be applicable only to certain particular non-monotone submodular functions, under certain reasonable assumptions about the probability distribution over weights of the graph. 3.2 Summarization with covering constraint Another perspective is to treat the summarization problem as finding a low-cost subset of the document under the constraint that a summary should cover all (or a sufficient amount of) the information in the document. Formally, this can be expressed as Problem 2. Find S∗∈argmin S⊆V X i∈S ci subject to: F(S) ≥α, where ci are the element costs, and set function F(S) measure the information covered by S. When F is submodular, the constraint F(S) ≥α is called a submodular cover constraint. When F is monotone submodular, a greedy algorithm that iteratively selects k with minimum ck/(F(S ∪{k}) −F(S)) has approximation guarantees (Wolsey, 1982). Recent work (Shen and Li, 2010) proposes to model document summarization as finding a minimum dominating set and a greedy algorithm is used to solve the problem. 
The dominating set constraint is also a submodular cover constraint. Define δ(S) be the set of elements that is either in S or is adjacent to some element in S. Then S is a dominating set if |δ(S)| = |V |. Note that Fdom(S) ≜|δ(S)| is monotone submodular. The dominating set constraint is then also a submodular cover constraint, and therefore the approaches in (Shen and Li, 2010) are special cases of Problem 2. The solutions found in this framework, however, do not necessarily satisfy a summary’s budget constraint. Consequently, a subset of the solution found by solving Problem 2 has to be constructed as the final summary, and the near-optimality is no longer guaranteed. Therefore, solving Problem 1 for document summarization appears to be a better framework regarding global optimality. In the present paper, our framework is that of Problem 1. 3.3 Automatic summarization evaluation Automatic evaluation of summary quality is important for the research of document summarization as it avoids the labor-intensive and potentially inconsistent human evaluation. ROUGE (Lin, 2004) is widely used for summarization evaluation and it has been shown that ROUGE-N scores are highly correlated with human evaluation (Lin, 2004). Interestingly, ROUGE-N is monotone submodular, adding further evidence that monotone submodular functions are natural for document summarization. Theorem 3. ROUGE-N is monotone submodular. Proof. By definition (Lin, 2004), ROUGE-N is the n-gram recall between a candidate summary and a set of reference summaries. Precisely, let S be the candidate summary (a set of sentences extracted from the ground set V ), ce : 2V →Z+ be the number of times n-gram e occurs in summary S, and Ri be the set of n-grams contained in the reference summary i (suppose we have K reference summaries, i.e., i = 1, · · · , K). Then ROUGE-N can be written as the following set function: FROUGE-N(S) ≜ PK i=1 P e∈Ri min(ce(S), re,i) PK i=1 P e∈Ri re,i , where re,i is the number of times n-gram e occurs in reference summary i. Since ce(S) is monotone modular and min(x, a) is a concave non-decreasing function of x, min(ce(S), re,i) is monotone submodular by Theorem 1. Since summation preserves submodularity, and the denominator is constant, we see that FROUGE-N is monotone submodular. Since the reference summaries are unknown, it is of course impossible to optimize FROUGE-N directly. Therefore, some approaches (Filatova and Hatzivassiloglou, 2004; Takamura and Okumura, 2009; Riedhammer et al., 2010) instead define “concepts”. Alter513 natively, we herein propose a class of monotone submodular functions that naturally models the quality of a summary while not depending on an explicit notion of concepts, as we will see in the following section. 4 Monotone Submodular Objectives Two properties of a good summary are relevance and non-redundancy. Objective functions for extractive summarization usually measure these two separately and then mix them together trading off encouraging relevance and penalizing redundancy. The redundancy penalty usually violates the monotonicity of the objective functions (Carbonell and Goldstein, 1998; Lin and Bilmes, 2010). We therefore propose to positively reward diversity instead of negatively penalizing redundancy. In particular, we model the summary quality as F(S) = L(S) + λR(S), (2) where L(S) measures the coverage, or “fidelity”, of summary set S to the document, R(S) rewards diversity in S, and λ ≥0 is a trade-off coefficient. 
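To make the clipped n-gram recall behind Theorem 3 concrete, a small illustrative re-implementation (not the official ROUGE toolkit, and without its tokenization, stemming, and scoring options) could look like this:

```python
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate_sents, reference_summaries, n=2):
    """Clipped n-gram recall of a candidate summary against K reference summaries."""
    cand = Counter()                                  # c_e(S): n-gram counts in the summary S
    for s in candidate_sents:
        cand.update(ngrams(s.split(), n))
    hit = total = 0
    for ref in reference_summaries:                   # r_{e,i}: n-gram counts in reference i
        for e, r in ngrams(ref.split(), n).items():
            hit += min(cand[e], r)                    # min(c_e(S), r_{e,i})
            total += r
    return hit / total if total else 0.0

print(rouge_n(["the cat sat on the mat"], ["the cat sat", "a cat sat on a mat"]))  # 4/7 ≈ 0.57
```

Each summand min(c_e(S), r_{e,i}) is a truncated modular function of the candidate set S, so adding sentences can only raise the score, and with diminishing gains — the same behavior the coverage-plus-diversity objective of Eq. (2) is built around.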
Note that the above is analogous to the objectives widely used in machine learning, where a loss function that measures the training set error (we measure the coverage of summary to a document), is combined with a regularization term encouraging certain desirable (e.g., sparsity) properties (in our case, we “regularize” the solution to be more diverse). In the following, we discuss how both L(S) and R(S) are naturally monotone submodular. 4.1 Coverage function L(S) can be interpreted either as a set function that measures the similarity of summary set S to the document to be summarized, or as a function representing some form of “coverage” of V by S. Most naturally, L(S) should be monotone, as coverage improves with a larger summary. L(S) should also be submodular: consider adding a new sentence into two summary sets, one a subset of the other. Intuitively, the increment when adding a new sentence to the small summary set should be larger than the increment when adding it to the larger set, as the information carried by the new sentence might have already been covered by those sentences that are in the larger summary but not in the smaller summary. This is exactly the property of diminishing returns. Indeed, Shannon entropy, as the measurement of information, is another well-known monotone submodular function. There are several ways to define L(S) in our context. For instance, we could use L(S) = P i∈V,j∈S wi,j where wi,j represents the similarity between i and j. L(S) could also be facility location objective, i.e., L(S) = P i∈V maxj∈S wi,j, as used in (Lin et al., 2009). We could also use L(S) = P i∈Γ(S) ci as used in concept-based summarization, where the definition of “concept” and the mechanism to extract these concepts become important. All of these are monotone submodular. Alternatively, in this paper we propose the following objective that does not reply on concepts. Let L(S) = X i∈V min {Ci(S), α Ci(V )} , (3) where Ci : 2V →R is a monotone submodular function and 0 ≤α ≤1 is a threshold co-efficient. Firstly, L(S) as defined in Eqn. 3 is a monotone submodular function. The monotonicity is immediate. To see that L(S) is submodular, consider the fact that f(x) = min(x, a) where a ≥0 is a concave non-decreasing function, and by Theorem 1, each summand in Eqn. 3 is a submodular function, and as summation preserves submodularity, L(S) is submodular. Next, we explain the intuition behind Eqn. 3. Basically, Ci(S) measures how similar S is to element i, or how much of i is “covered” by S. Then Ci(V ) is just the largest value that Ci(S) can achieve. We call i “saturated” by S when min{Ci(S), αCi(V )} = αCi(V ). When i is already saturated in this way, any new sentence j can not further improve the coverage of i even if it is very similar to i (i.e., Ci(S ∪{j}) −Ci(S) is large). This will give other sentences that are not yet saturated a higher chance of being better covered, and therefore the resulting summary tends to better cover the entire document. One simple way to define Ci(S) is just to use Ci(S) = X j∈S wi,j (4) where wi,j ≥0 measures the similarity between i and j. In this case, when α = 1, Eqn. 3 reduces to the case where L(S) = P i∈V,j∈S wi,j. As we will see in Section 5, having an α that is less than 514 1 significantly improves the performance compared to the case when α = 1, which coincides with our intuition that using a truncation threshold improves the final summary’s coverage. 
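A minimal sketch of this truncated coverage objective (Eqn. 3 with the modular C_i of Eqn. 4) is given below; the similarity matrix w and the threshold alpha are placeholders rather than values from any experiment:

```python
def truncated_coverage(S, w, alpha=0.5):
    """L(S) = sum_i min( sum_{j in S} w[i][j], alpha * sum_{k in V} w[i][k] )."""
    V = range(len(w))
    total = 0.0
    for i in V:
        covered = sum(w[i][j] for j in S)          # C_i(S): how much of sentence i is covered
        cap = alpha * sum(w[i][k] for k in V)      # saturation threshold alpha * C_i(V)
        total += min(covered, cap)
    return total
```

Once a sentence i reaches its cap, further sentences similar to i contribute nothing on its account, so the greedy search is pushed toward parts of the document that are not yet saturated — the intuition behind choosing α < 1.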
4.2 Diversity reward function Instead of penalizing redundancy by subtracting from the objective, we propose to reward diversity by adding the following to the objective: R(S) = K X i=1 s X j∈Pi∩S rj. (5) where Pi, i = 1, · · · K is a partition of the ground set V (i.e., S i Pi = V and the Pis are disjoint) into separate clusters, and ri ≥0 indicates the singleton reward of i (i.e., the reward of adding i into the empty set). The value ri estimates the importance of i to the summary. The function R(S) rewards diversity in that there is usually more benefit to selecting a sentence from a cluster not yet having one of its elements already chosen. As soon as an element is selected from a cluster, other elements from the same cluster start having diminishing gain, thanks to the square root function. For instance, consider the case where k1, k2 ∈P1, k3 ∈P2, and rk1 = 4, rk2 = 9, and rk3 = 4. Assume k1 is already in the summary set S. Greedily selecting the next element will choose k3 rather than k2 since √ 13 < 2 + 2. In other words, adding k3 achieves a greater reward as it increases the diversity of the summary (by choosing from a different cluster). Note, R(S) is distinct from L(S) in that R(S) might wish to include certain outlier material that L(S) could ignore. It is easy to show that R(S) is submodular by using the composition rule from Theorem 1. The square root is non-decreasing concave function. Inside each square root lies a modular function with non-negative weights (and thus is monotone). Applying the square root to such a monotone submodular function yields a submodular function, and summing them all together retains submodularity, as mentioned in Section 2. The monotonicity of R(S) is straightforward. Note, the form of Eqn. 5 is similar to structured group norms (e.g., (Zhao et al., 2009)), recently shown to be related to submodularity (Bach, 2010; Jegelka and Bilmes, 2011). Several extensions to Eqn. 5 are discussed next: First, instead of using a ground set partition, intersecting clusters can be used. Second, the square root function in Eqn. 5 can be replaced with any other non-decreasing concave functions (e.g., f(x) = log(1 + x)) while preserving the desired property of R(S), and the curvature of the concave function then determines the rate that the reward diminishes. Last, multi-resolution clustering (or partitions) with different sizes (K) can be used, i.e., we can use a mixture of components, each of which has the structure of Eqn. 5. A mixture can better represent the core structure of the ground set (e.g., the hierarchical structure in the documents (Celikyilmaz and Hakkani-t¨ur, 2010)). All such extensions preserve both monotonicity and submodularity. 5 Experiments The document understanding conference (DUC) (http://duc.nist.org) was the main forum providing benchmarks for researchers working on document summarization. The tasks in DUC evolved from single-document summarization to multi-document summarization, and from generic summarization (2001–2004) to query-focused summarization (2005–2007). As ROUGE (Lin, 2004) has been officially adopted for DUC evaluations since 2004, we also take it as our main evaluation criterion. We evaluated our approaches on DUC data 2003-2007, and demonstrate results on both generic and query-focused summarization. In all experiments, the modified greedy algorithm (Lin and Bilmes, 2010) was used for summary generation. 
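As a concrete illustration of the cluster-based reward in Eqn. (5), the sketch below reproduces the k1/k2/k3 example above; the partition and singleton rewards are the illustrative numbers from that example, not real data:

```python
import math

def diversity_reward(S, clusters, r):
    """R(S) = sum_k sqrt( sum_{j in P_k ∩ S} r[j] )."""
    return sum(math.sqrt(sum(r[j] for j in P if j in S)) for P in clusters)

clusters = [[0, 1], [2]]        # P1 = {k1, k2}, P2 = {k3}
r = {0: 4.0, 1: 9.0, 2: 4.0}    # singleton rewards
S = {0}                         # k1 is already in the summary
print(diversity_reward(S | {1}, clusters, r))  # sqrt(4 + 9) ≈ 3.61
print(diversity_reward(S | {2}, clusters, r))  # sqrt(4) + sqrt(4) = 4.0, so k3 is preferred
```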
5.1 Generic summarization Summarization tasks in DUC-03 and DUC-04 are multi-document summarization on English news articles. In each task, 50 document clusters are given, each of which consists of 10 documents. For each document cluster, the system generated summary may not be longer than 665 bytes including spaces and punctuation. We used DUC-03 as our development set, and tested on DUC-04 data. We show ROUGE-1 scores1 as it was the main evaluation criterion for DUC-03, 04 evaluations. 1ROUGE version 1.5.5 with options: -a -c 95 -b 665 -m -n 4 -w 1.2 515 Documents were pre-processed by segmenting sentences and stemming words using the Porter Stemmer. Each sentence was represented using a bag-of-terms vector, where we used context terms up to bi-grams. Similarity between sentence i and sentence j, i.e., wi,j, was computed using cosine similarity: wi,j = P w∈si tfw,i × tfw,j × idf2 w qP w∈si tf2 w,siidf2 w qP w∈sj tf2 w,jidf2 w , where tfw,i and tfw,j are the numbers of times that w appears in si and sentence sj respectively, and idfw is the inverse document frequency (IDF) of term w (up to bigram), which was calculated as the logarithm of the ratio of the number of articles that w appears over the total number of all articles in the document cluster. Table 1: ROUGE-1 recall (R) and F-measure (F) results (%) on DUC-04. DUC-03 was used as development set. DUC-04 R F P i∈V P j∈S wi,j 33.59 32.44 L1(S) 39.03 38.65 R1(S) 38.23 37.81 L1(S) + λR1(S) 39.35 38.90 Takamura and Okumura (2009) 38.50 Wang et al. (2009) 39.07 Lin and Bilmes (2010) 38.39 Best system in DUC-04 (peer 65) 38.28 37.94 We first tested our coverage and diversity reward objectives separately. For coverage, we use a modular Ci(S) = P j∈S wi,j for each sentence i, i.e., L1(S) = X i∈V min X j∈S wi,j, α X k∈V wi,k . (6) When α = 1, L1(S) reduces to P i∈V,j∈S wi,j, which measures the overall similarity of summary set S to ground set V . As mentioned in Section 4.1, using such similarity measurement could possibly over-concentrate on a small portion of the document and result in a poor coverage of the whole document. As shown in Table 1, optimizing this objective function gives a ROUGE-1 F-measure score 32.44%. On the other hand, when using L1(S) with an α < 1 (the value of α was determined on DUC-03 using a grid search), a ROUGE-1 F-measure score 38.65% 36.2 36.4 36.6 36.8 37 37.2 37.4 37.6 0 5 10 15 20 ROUGE-1 F-measure (%) K=0.05N K=0.1N K=0.2N a Figure 1: ROUGE-1 F-measure scores on DUC-03 when α and K vary in objective function L1(S) + λR1(S), where λ = 6 and α = a N . is achieved, which is already better than the best performing system in DUC-04. As for the diversity reward objective, we define the singleton reward as ri = 1 N P j wi,j, which is the average similarity of sentence i to the rest of the document. It basically states that the more similar to the whole document a sentence is, the more reward there will be by adding this sentence to an empty summary set. By using this singleton reward, we have the following diversity reward function: R1(S) = K X k=1 s X j∈S∩Pk 1 N X i∈V wi,j. (7) In order to generate Pk, k = 1, · · · K, we used CLUTO2 to cluster the sentences, where the IDFweighted term vector was used as feature vector, and a direct K-mean clustering algorithm was used. In this experiment, we set K = 0.2N. In other words, there are 5 sentences in each cluster on average. And as we can see in Table 1, optimizing the diversity reward function alone achieves comparable performance to the DUC-04 best system. 
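The pairwise similarities w_ij that feed these objectives follow the cosine formula above; a rough sketch, restricted to unigrams (the experiments also use bigrams and Porter stemming) and assuming an externally supplied IDF table:

```python
import math
from collections import Counter

def cosine_tfidf(sent_i, sent_j, idf):
    """w_ij = sum_w tf_{w,i} * tf_{w,j} * idf_w^2 / (||tfidf_i|| * ||tfidf_j||)."""
    tf_i, tf_j = Counter(sent_i.split()), Counter(sent_j.split())
    num = sum(tf_i[w] * tf_j[w] * idf.get(w, 0.0) ** 2 for w in tf_i)
    norm_i = math.sqrt(sum((tf_i[w] * idf.get(w, 0.0)) ** 2 for w in tf_i))
    norm_j = math.sqrt(sum((tf_j[w] * idf.get(w, 0.0)) ** 2 for w in tf_j))
    return num / (norm_i * norm_j) if norm_i and norm_j else 0.0

idf = {"the": 0.1, "soldier": 2.3, "troops": 2.0, "left": 1.5}   # hypothetical IDF values
print(cosine_tfidf("the soldier left", "the troops left", idf))   # ≈ 0.33
```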
Combining L1(S) and R1(S), our system outperforms the best system in DUC-04 significantly, and it also outperforms several recent systems, including a concept-based summarization approach (Takamura and Okumura, 2009), a sentence topic model based system (Wang et al., 2009), and our MMR-styled submodular system (Lin and Bilmes, 2010). Figure 1 illustrates how ROUGE-1 scores change when α and K vary on the development set (DUC-03). 2http://glaros.dtc.umn.edu/gkhome/cluto/cluto/overview 516 Table 2: ROUGE-2 recall (R) and F-measure (F) results (%) on DUC-05, where DUC-05 was used as training set. DUC-05 R F L1(S) + λRQ(S) 8.38 8.31 Daum´e III and Marcu (2006) 7.62 Extr, Daum´e et al. (2009) 7.67 Vine, Daum´e et al. (2009) 8.24 Table 3: ROUGE-2 recall (R) and F-measure (F) results on DUC-05 (%). We used DUC-06 as training set. DUC-05 R F L1(S) + λRQ(S) 7.82 7.72 Daum´e III and Marcu (2006) 6.98 Best system in DUC-05 (peer 15) 7.44 7.43 5.2 Query-focused summarization We evaluated our approach on the task of queryfocused summarization using DUC 05-07 data. In DUC-05 and DUC-06, participants were given 50 document clusters, where each cluster contains 25 news articles related to the same topic. Participants were asked to generate summaries of at most 250 words for each cluster. For each cluster, a title and a narrative describing a user’s information need are provided. The narrative is usually composed of a set of questions or a multi-sentence task description. The main task in DUC-07 is the same as in DUC-06. In DUC 05-07, ROUGE-2 was the primary criterion for evaluation, and thus we also report ROUGE-23 (both recall R, and precision F). Documents were processed as in Section 5.1. We used both the title and the narrative as query, where stop words, including some function words (e.g., “describe”) that appear frequently in the query, were removed. All queries were then stemmed using the Porter Stemmer. Note that there are several ways to incorporate query-focused information into both the coverage and diversity reward objectives. For instance, Ci(S) could be query-dependent in how it measures how much query-dependent information in i is covered by S. Also, the coefficient α could be query and sentence dependent, where it takes larger value when a sentence is more relevant to query (i.e., a larger value of α means later truncation, and therefore more possible coverage). Similarly, sentence clustering and singleton rewards in the diversity function can also 3ROUGE version 1.5.5 was used with option -n 2 -x -m -2 4 -u -c 95 -r 1000 -f A -p 0.5 -t 0 -d -l 250 Table 4: ROUGE-2 recall (R) and F-measure (F) results (%) on DUC-06, where DUC-05 was used as training set. DUC-06 R F L1(S) + λRQ(S) 9.75 9.77 Celikyilmaz and Hakkani-t¨ur (2010) 9.10 Shen and Li (2010) 9.30 Best system in DUC-06 (peer 24) 9.51 9.51 Table 5: ROUGE-2 recall (R) and F-measure (F) results (%) on DUC-07. DUC-05 was used as training set for objective L1(S) + λRQ(S). DUC-05 and DUC06 were used as training sets for objective L1(S) + P κ λκRQ,κ(S). DUC-07 R F L1(S) + λRQ(S) 12.18 12.13 L1(S) + P3 κ=1 λκRQ,κ(S) 12.38 12.33 Toutanova et al. (2007) 11.89 11.89 Haghighi and Vanderwende (2009) 11.80 Celikyilmaz and Hakkani-t¨ur (2010) 11.40 Best system in DUC-07 (peer 15) 12.45 12.29 be query-dependent. 
In this experiment, we explore an objective with a query-independent coverage function (R1(S)), indicating prior importance, combined with a query-dependent diversity reward function, where the latter is defined as: RQ(S) = K X k=1 v u u t X j∈S∩Pk β N X i∈V wi,j + (1 −β)rj,Q ! , where 0 ≤β ≤1, and rj,Q represents the relevance between sentence j to query Q. This query-dependent reward function is derived by using a singleton reward that is expressed as a convex combination of the query-independent score ( 1 N P i∈V wi,j) and the query-dependent score (rj,Q) of a sentence. We simply used the number of terms (up to a bi-gram) that sentence j overlaps the query Q as rj,Q, where the IDF weighting is not used (i.e., every term in the query, after stop word removal, was treated as equally important). Both query-independent and query-dependent scores were then normalized by their largest value respectively such that they had roughly the same dynamic range. To better estimate of the relevance between query and sentences, we further expanded sentences with synonyms and hypernyms of its constituent words. In particular, part-of-speech tags were obtained for each sentence using the maximum entropy part-of-speech tagger (Ratnaparkhi, 1996), and all nouns were then 517 expanded with their synonyms and hypernyms using WordNet (Fellbaum, 1998). Note that these expanded documents were only used in the estimation rj,Q, and we plan to further explore whether there is benefit to use the expanded documents either in sentence similarity estimation or in sentence clustering in our future work. We also tried to expand the query with synonyms and observed a performance decrease, presumably due to noisy information in a query expression. While it is possible to use an approach that is similar to (Toutanova et al., 2007) to learn the coefficients in our objective function, we trained all coefficients to maximize ROUGE-2 F-measure score using the Nelder-Mead (derivative-free) method. Using L1(S)+λRQ(S) as the objective and with the same sentence clustering algorithm as in the generic summarization experiment (K = 0.2N), our system, when both trained and tested on DUC-05 (results in Table 2), outperforms the Bayesian query-focused summarization approach and the search-based structured prediction approach, which were also trained and tested on DUC-05 (Daum´e et al., 2009). Note that the system in (Daum´e et al., 2009) that achieves its best performance (8.24% in ROUGE-2 recall) is a so called “vine-growth” system, which can be seen as an abstractive approach, whereas our system is purely an extractive system. Comparing to the extractive system in (Daum´e et al., 2009), our system performs much better (8.38% v.s. 7.67%). More importantly, when trained only on DUC-06 and tested on DUC-05 (results in Table 3), our approach outperforms the best system in DUC-05 significantly. We further tested the system trained on DUC-05 on both DUC-06 and DUC-07. The results on DUC-06 are shown in Table. 4. Our system outperforms the best system in DUC-06, as well as two recent approaches (Shen and Li, 2010; Celikyilmaz and Hakkani-t¨ur, 2010). On DUC-07, in terms of ROUGE-2 score, our system outperforms PYTHY (Toutanova et al., 2007), a state-of-the-art supervised summarization system, as well as two recent systems including a generative summarization system based on topic models (Haghighi and Vanderwende, 2009), and a hybrid hierarchical summarization system (Celikyilmaz and Hakkani-t¨ur, 2010). 
It also achieves comparable performance to the best DUC-07 system. Note that in the best DUC-07 system (Pingali et al., 2007; Jagarlamudi et al., 2006), an external web search engine (Yahoo!) was used to estimate a language model for query relevance. In our system, no such web search expansion was used. To further improve the performance of our system, we used both DUC-05 and DUC-06 as a training set, and introduced three diversity reward terms into the objective where three different sentence clusterings with different resolutions were produced (with sizes 0.3N, 0.15N and 0.05N). Denoting a diversity reward corresponding to clustering κ as RQ,κ(S), we model the summary quality as L1(S) + P3 κ=1 λκRQ,κ(S). As shown in Table 5, using this objective function with multi-resolution diversity rewards improves our results further, and outperforms the best system in DUC-07 in terms of ROUGE-2 F-measure score. 6 Conclusion and discussion In this paper, we show that submodularity naturally arises in document summarization. Not only do many existing automatic summarization methods correspond to submodular function optimization, but also the widely used ROUGE evaluation is closely related to submodular functions. As the corresponding submodular optimization problem can be solved efficiently and effectively, the remaining question is then how to design a submodular objective that best models the task. To address this problem, we introduce a powerful class of monotone submodular functions that are well suited to document summarization by modeling two important properties of a summary, fidelity and diversity. While more advanced NLP techniques could be easily incorporated into our functions (e.g., language models could define a better Ci(S), more advanced relevance estimations for the singleton rewards ri, and better and/or overlapping clustering algorithms for our diversity reward), we already show top results on standard benchmark evaluations using fairly basic NLP methods (e.g., term weighting and WordNet expansion), all, we believe, thanks to the power and generality of submodular functions. As information retrieval and web search are closely related to query-focused summarization, our approach might be beneficial in those areas as well. 518 References F. Bach. 2010. Structured sparsity-inducing norms through submodular functions. Advances in Neural Information Processing Systems. J. Carbonell and J. Goldstein. 1998. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In Proc. of SIGIR. A. Celikyilmaz and D. Hakkani-t¨ur. 2010. A hybrid hierarchical model for multi-document summarization. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 815–824, Uppsala, Sweden, July. Association for Computational Linguistics. H. Daum´e, J. Langford, and D. Marcu. 2009. Searchbased structured prediction. Machine learning, 75(3):297–325. H. Daum´e III and D. Marcu. 2006. Bayesian queryfocused summarization. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, page 312. C. Fellbaum. 1998. WordNet: An electronic lexical database. The MIT press. E. Filatova and V. Hatzivassiloglou. 2004. Event-based extractive summarization. In Proceedings of ACL Workshop on Summarization, volume 111. A. Haghighi and L. Vanderwende. 2009. Exploring content models for multi-document summarization. 
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 521–529, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Semi-supervised Relation Extraction with Large-scale Word Clustering Ang Sun Ralph Grishman Satoshi Sekine Computer Science Department New York University {asun,grishman,sekine}@cs.nyu.edu Abstract We present a simple semi-supervised relation extraction system with large-scale word clustering. We focus on systematically exploring the effectiveness of different cluster-based features. We also propose several statistical methods for selecting clusters at an appropriate level of granularity. When training on different sizes of data, our semi-supervised approach consistently outperformed a state-of-the-art supervised baseline system. 1 Introduction Relation extraction is an important information extraction task in natural language processing (NLP), with many practical applications. The goal of relation extraction is to detect and characterize semantic relations between pairs of entities in text. For example, a relation extraction system needs to be able to extract an Employment relation between the entities US soldier and US in the phrase US soldier. Current supervised approaches for tackling this problem, in general, fall into two categories: feature based and kernel based. Given an entity pair and a sentence containing the pair, both approaches usually start with multiple level analyses of the sentence such as tokenization, partial or full syntactic parsing, and dependency parsing. Then the feature based method explicitly extracts a variety of lexical, syntactic and semantic features for statistical learning, either generative or discriminative (Miller et al., 2000; Kambhatla, 2004; Boschee et al., 2005; Grishman et al., 2005; Zhou et al., 2005; Jiang and Zhai, 2007). In contrast, the kernel based method does not explicitly extract features; it designs kernel functions over the structured sentence representations (sequence, dependency or parse tree) to capture the similarities between different relation instances (Zelenko et al., 2003; Bunescu and Mooney, 2005a; Bunescu and Mooney, 2005b; Zhao and Grishman, 2005; Zhang et al., 2006; Zhou et al., 2007; Qian et al., 2008). Both lines of work depend on effective features, either explicitly or implicitly. The performance of a supervised relation extraction system is usually degraded by the sparsity of lexical features. For example, unless the example US soldier has previously been seen in the training data, it would be difficult for both the feature based and the kernel based systems to detect whether there is an Employment relation or not. Because the syntactic feature of the phrase US soldier is simply a noun-noun compound which is quite general, the words in it are crucial for extracting the relation. This motivates our work to use word clusters as additional features for relation extraction. The assumption is that even if the word soldier may never have been seen in the annotated Employment relation instances, other words which share the same cluster membership with soldier such as president and ambassador may have been observed in the Employment instances. The absence of lexical features can be compensated by 521 the cluster features. Moreover, word clusters may implicitly correspond to different relation classes. 
For example, the cluster of president may be related to the Employment relation as in US president while the cluster of businessman may be related to the Affiliation relation as in US businessman. The main contributions of this paper are: we explore the cluster-based features in a systematic way and propose several statistical methods for selecting effective clusters. We study the impact of the size of training data on cluster features and analyze the performance improvements through an extensive experimental study. The rest of this paper is organized as follows: Section 2 presents related work and Section 3 provides the background of the relation extraction task and the word clustering algorithm. Section 4 describes in detail a state-of-the-art supervised baseline system. Section 5 describes the clusterbased features and the cluster selection methods. We present experimental results in Section 6 and conclude in Section 7. 2 Related Work The idea of using word clusters as features in discriminative learning was pioneered by Miller et al. (2004), who augmented name tagging training data with hierarchical word clusters generated by the Brown clustering algorithm (Brown et al., 1992) from a large unlabeled corpus. They used different thresholds to cut the word hierarchy to obtain clusters of various granularities for feature decoding. Ratinov and Roth (2009) and Turian et al. (2010) also explored this approach for name tagging. Though all of them used the same hierarchical word clustering algorithm for the task of name tagging and reported improvements, we noticed that the clusters used by Miller et al. (2004) were quite different from that of Ratinov and Roth (2009) and Turian et al. (2010). To our knowledge, there has not been work on selecting clusters in a principled way. We move a step further to explore several methods in choosing effective clusters. A second difference between this work and the above ones is that we utilize word clusters in the task of relation extraction which is very different from sequence labeling tasks such as name tagging and chunking. Though Boschee et al. (2005) and Chan and Roth (2010) used word clusters in relation extraction, they shared the same limitation as the above approaches in choosing clusters. For example, Boschee et al. (2005) chose clusters of different granularities and Chan and Roth (2010) simply used a single threshold for cutting the word hierarchy. Moreover, Boschee et al. (2005) only augmented the predicate (typically a verb or a noun of the most importance in a relation in their definition) with word clusters while Chan and Roth (2010) performed this for any lexical feature consisting of a single word. In this paper, we systematically explore the effectiveness of adding word clusters to different lexical features. 3 Background 3.1 Relation Extraction One of the well defined relation extraction tasks is the Automatic Content Extraction1 (ACE) program sponsored by the U.S. government. ACE 2004 defined 7 major entity types: PER (Person), ORG (Organization), FAC (Facility), GPE (Geo-Political Entity: countries, cities, etc.), LOC (Location), WEA (Weapon) and VEH (Vehicle). An entity has three types of mention: NAM (proper name), NOM (nominal) or PRO (pronoun). A relation was defined over a pair of entity mentions within a single sentence. The 7 major relation types with examples are shown in Table 1. ACE 2004 also defined 23 relation subtypes. Following most of the previous work, this paper only focuses on relation extraction of major types. 
Given a relation instance ( , , ) i j x s m m , where i m and j m are a pair of mentions and s is the sentence containing the pair, the goal is to learn a function which maps the instance x to a type c, where c is one of the 7 defined relation types or the type Nil (no relation exists). There are two commonly used learning paradigms for relation extraction: Flat: This strategy performs relation detection and classification at the same time. One multi-class classifier is trained to discriminate among the 7 relation types plus the Nil type. Hierarchical: This one separates relation detection from relation classification. One binary 1 Task definition: http://www.itl.nist.gov/iad/894.01/tests/ace/ ACE guidelines: http://projects.ldc.upenn.edu/ace/ 522 classifier is trained first to distinguish between relation instances and non-relation instances. This can be done by grouping all the instances of the 7 relation types into a positive class and the instances of Nil into a negative class. Then the thresholded output of this binary classifier is used as training data for learning a multi-class classifier for the 7 relation types (Bunescu and Mooney, 2005b). Type Example EMP-ORG US president PHYS a military base in Germany GPE-AFF U.S. businessman PER-SOC a spokesman for the senator DISC each of whom ART US helicopters OTHER-AFF Cuban-American people Table 1: ACE relation types and examples from the annotation guideline 2 . The heads of the two entity mentions are marked. Types are listed in decreasing order of frequency of occurrence in the ACE corpus. 3.2 Brown Word Clustering The Brown algorithm is a hierarchical clustering algorithm which initially assigns each word to its own cluster and then repeatedly merges the two clusters which cause the least loss in average mutual information between adjacent clusters based on bigram statistics. By tracing the pairwise merging steps, one can obtain a word hierarchy which can be represented as a binary tree. A word can be compactly represented as a bit string by following the path from the root to itself in the tree, assigning a 0 for each left branch, and a 1 for each right branch. A cluster is just a branch of that tree. A high branch may correspond to more general concepts while the lower branches it includes might correspond to more specific ones. Brown et al. (1992) described an efficient implementation based on a greedy algorithm which initially assigned only the most frequent words into distinct clusters. It is worth pointing out that in this implementation each word occupies a leaf in the hierarchy, but each leaf might contain more than one word as can be seen from Table 2. The lengths of the bit strings also vary among different words. 2 http://projects.ldc.upenn.edu/ace/docs/EnglishRDCV4-32.PDF Bit string Examples 111011011100 US … 1110110111011 U.S. … 1110110110000 American … 1110110111110110 Cuban, Pakistani, Russian … 11111110010111 Germany, Poland, Greece … 110111110100 businessman, journalist, reporter 1101111101111 president, governor, premier… 1101111101100 senator, soldier, ambassador … 11011101110 spokesman, spokeswoman, … 11001100 people, persons, miners, Haitians 110110111011111 base, compound, camps, camp … 110010111 helicopters, tanks, Marines … Table 2: An example of words and their bit string representations obtained in this paper. Words in bold are head words that appeared in Table 1. 
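The bit-string representation makes cluster features cheap to derive: cutting a word's path at a chosen prefix length yields its cluster identifier at that granularity. The following minimal Python sketch is illustrative rather than the authors' code; it assumes the common tab-separated `paths` output layout of Liang's Brown clustering implementation (bit string, word, count per line), and the file name is hypothetical.

```python
from pathlib import Path

def load_brown_paths(path_file):
    """Load word -> bit-string mappings from a Brown clustering run.

    Assumes the common `paths` output format (one line per word,
    tab-separated: <bit string> <word> <count>); adjust if your
    clustering tool writes a different layout.
    """
    word2bits = {}
    for line in Path(path_file).read_text(encoding="utf-8").splitlines():
        parts = line.split("\t")
        if len(parts) >= 2:
            word2bits[parts[1]] = parts[0]
    return word2bits

def cluster_prefix(word, word2bits, length):
    """Cluster id of `word` at a given prefix length, or None if uncovered.

    Bit strings shorter than `length` are left uncovered, mirroring the
    varying leaf depths of the Brown hierarchy.
    """
    bits = word2bits.get(word)
    if bits is None or len(bits) < length:
        return None
    return bits[:length]

# With the Table 2 bit strings, "soldier" and "president" fall into the
# same cluster once the paths are cut at 10 bits:
demo = {"soldier": "1101111101100", "president": "1101111101111"}
assert cluster_prefix("soldier", demo, 10) == cluster_prefix("president", demo, 10)
```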
4 Feature Based Relation Extraction Given a pair of entity mentions , i j m m and the sentence containing the pair, a feature based system extracts a feature vector v which contains diverse lexical, syntactic and semantic features. The goal is to learn a function which can estimate the conditional probability ( | ) p c v , the probability of a relation type c given the feature vector v . The type with the highest probability will be output as the class label for the mention pair. We now describe a supervised baseline system with a very large set of features and its learning strategy. 4.1 Baseline Feature Set We first adopted the full feature set from Zhou et al. (2005), a state-of-the-art feature based relation extraction system. For space reasons, we only show the lexical features as in Table 3 and refer the reader to the paper for the rest of the features. At the lexical level, a relation instance can be seen as a sequence of tokens which form a five tuple <Before, M1, Between, M2, After>. Tokens of the five members and the interaction between the heads of the two mentions can be extracted as features as shown in Table 3. In addition, we cherry-picked the following features which were not included in Zhou et al. (2005) but were shown to be quite effective for relation extraction. Bigram of the words between the two mentions: This was extracted by both Zhao and Grishman (2005) and Jiang and Zhai (2007), aiming to 523 provide more order information of the tokens between the two mentions. Patterns: There are three types of patterns: 1) the sequence of the tokens between the two mentions as used in Boschee et al. (2005); 2) the sequence of the heads of the constituents between the two mentions as used by Grishman et al. (2005); 3) the shortest dependency path between the two mentions in a dependency tree as adopted by Bunescu and Mooney (2005a). These patterns can provide more structured information of how the two mentions are connected. Title list: This is tailored for the EMP-ORG type of relations as the head of one of the mentions is usually a title. The features are decoded in a way similar to that of Sun (2009). Position Feature Description Before BM1F first word before M1 BM1L second word before M1 M1 WM1 bag-of-words in M1 HM1 head3 word of M1 Between WBNULL when no word in between WBFL the only word in between when only one word in between WBF first word in between when at least two words in between WBL last word in between when at least two words in between WBO other words in between except first and last words when at least three words in between M2 WM2 bag-of-words in M2 HM2 head word of M2 M12 HM12 combination of HM1 and HM2 After AM2F first word after M2 AM2L second word after M2 Table 3: Lexical features for relation extraction. 4.2 Baseline Learning Strategy We employ a simple learning framework that is similar to the hierarchical learning strategy as described in Section 3.1. Specifically, we first train a binary classifier to distinguish between relation instances and non-relation instances. Then rather than using the thresholded output of this binary classifier as training data, we use only the annotated relation instances to train a multi-class classifier for the 7 relation types. In the test phase, 3 The head word of a mention is normally set as the last word of the mention as in Zhou et al. (2005). 
given a test instance x , we first apply the binary classifier to it for relation detection; if it is detected as a relation instance we then apply the multi-class relation classifier to classify it4. 5 Cluster Feature Selection The selection of cluster features aims to answer the following two questions: which lexical features should be augmented with word clusters to improve generalization accuracy? How to select clusters at an appropriate level of granularity? We will describe our solutions in Section 5.1 and 5.2. 5.1 Cluster Feature Decoding While each one of the lexical features in Table 3 used by the baseline can potentially be augmented with word clusters, we believe the effectiveness of a lexical feature with augmentation of word clusters should be tested either individually or incrementally according to a rank of its importance as shown in Table 4. We will show the effectiveness of each cluster feature in the experiment section. Impor- tance Lexical Feature Description of lexical feature Cluster Feature 1 HM HM1, HM2 and HM12 HM1_WC, HM2_WC, HM12_WC 2 BagWM WM1 and WM2 BagWM_WC 3 HC a head5 of a chunk in context HC_WC 4 BagWC word of context BagWC_WC Table 4: Cluster features ordered by importance. The importance is based on linguistic intuitions and observations of the contributions of different lexical features from various feature based systems. Table 4 simplifies a relation instance as a three tuple <Context, M1, M2> where the Context includes the Before, Between and After from the 4 Both the binary and multi-class classifiers output normalized probabilities in the range [0,1]. When the binary classifier’s prediction probability is greater than 0.5, we take the prediction with the highest probability of the multi-class classifier as the final class label. When it is in the range [0.3,0.5], we only consider as the final class label the prediction of the multi-class classifier with a probability which is greater than 0.9. All other cases are taken as non-relation instances. 5 The head of a chunk is defined as the last word in the chunk. 524 five tuple representation. As a relation in ACE is usually short, the words of the two entity mentions can provide more critical indications for relation classification than the words from the context. Within the two entity mentions, the head word of each mention is usually more important than other words of the mention; the conjunction of the two heads can provide an additional clue. And in general words other than the chunk head in the context do not contribute to establishing a relationship between the two entity mentions. The cluster based semi-supervised system works by adding an additional layer of lexical features that incorporate word clusters as shown in column 4 of Table 4. Take the US soldier as an example, if we decide to use a length of 10 as a threshold to cut the Brown word hierarchy to generate word clusters, we will extract a cluster feature HM1_WC10=1101111101 in addition to the lexical feature HM1=soldier given that the full bit string of soldier is 1101111101100 in Table 2. (Note that the cluster feature is a nominal feature, not to be confused with an integer feature.) 5.2 Selection of Clusters Given the bit string representations of all the words in a vocabulary, researchers usually use prefixes of different lengths of the bit strings to produce word clusters of various granularities. However, how to choose the set of prefix lengths in a principled way? This has not been answered by prior work. 
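To make concrete what is being chosen, the decoding step of Section 5.1 can be sketched as follows. This is an illustrative Python fragment with hypothetical helper names, not the system's actual code: given a candidate set of prefix lengths, the head-word lexical features HM1, HM2 and HM12 are each paired with a nominal HM_WC feature per length.

```python
def add_hm_cluster_features(feats, hm1, hm2, word2bits, prefix_lengths):
    """Augment head-word lexical features with HM_WC cluster features.

    `feats` is a dict of nominal features, `word2bits` maps words to
    Brown bit strings, and `prefix_lengths` is the candidate set of
    prefix lengths whose choice is discussed next.
    """
    feats["HM1"], feats["HM2"], feats["HM12"] = hm1, hm2, (hm1, hm2)
    b1, b2 = word2bits.get(hm1, ""), word2bits.get(hm2, "")
    for L in prefix_lengths:
        if len(b1) >= L:
            feats[f"HM1_WC{L}"] = b1[:L]
        if len(b2) >= L:
            feats[f"HM2_WC{L}"] = b2[:L]
        if len(b1) >= L and len(b2) >= L:
            feats[f"HM12_WC{L}"] = (b1[:L], b2[:L])
    return feats

# "US soldier": the two mention heads are "soldier" and "US" (Table 2 bit strings).
feats = add_hm_cluster_features(
    {}, "soldier", "US",
    {"soldier": "1101111101100", "US": "111011011100"},
    prefix_lengths=[7, 10],
)
# feats["HM1_WC10"] == "1101111101", alongside the lexical feature HM1 == "soldier"
```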
Our main idea is to learn the best set of prefix lengths, perhaps through the validation of their effectiveness on a development set of data. To our knowledge, previous research simply uses ad-hoc prefix lengths and lacks this training procedure. The training procedure can be extremely slow for reasons to be explained below. Formally, let l be the set of available prefix lengths ranging from 1 bit to the length of the longest bit string in the Brown word hierarchy and let m be the set of prefix lengths we want to use in decoding cluster features, then the problem of selecting effective clusters transforms to finding a | | m -combination of the set l which maximizes system performance. The training procedure can be extremely time consuming if we enumerate every possible | | m -combination of l , given that | | m can range from 1 to the size of l and the size of l equals the length of the longest bit string which is usually 20 when inducing 1,000 clusters using the Brown algorithm. One way to achieve better efficiency is to consider only a subset of l instead of the full set. In addition, we limit ourselves to use sizes 3 and 4 for m for matching prior work. This keeps the cluster features to a manageable size considering that every word in your vocabulary could contribute to a lexical feature. For picking a subset of l , we propose below two statistical measures for computing the importance of a certain prefix length. Information Gain (IG): IG measures the quality or importance of a feature f by computing the difference between the prior entropy of classes C and the posterior entropy, given values V of the feature f (Hunt et al., 1966; Quinlan, 1986). For our purpose, C is the set of relation types, f is a cluster-based feature with a certain prefix length such as HM1_WC* where * means the prefix length and a value v is the prefix of the bit string representation of HM1. More formally, the IG of f is computed as follows: ( ) ( )log ( ) ( ( ) ( | )log ( | )) c C v V c C IG f p c p c p v p c v p c v (1) where the first and second terms refer to the prior and posterior entropies respectively. For each prefix length in the set l , we can compute its IG for a type of cluster feature and then rank the prefix lengths based on their IGs for that cluster feature. For simplicity, we rank the prefix lengths for a group of cluster features (a group is a row from column 4 in Table 4) by collapsing the individual cluster features into a single cluster feature. For example, we collapse the 3 types: HM1_WC, HM2_WC and HM12_WC into a single type HM_WC for computing the IG. Prefix Coverage (PC): If we use a short prefix then the clusters produced correspond to the high branches in the word hierarchy and would be very general. The cluster features may not provide more informative information than the words themselves. Similarly, if we use a long prefix such as the length of the longest bit string, then maybe only a few of the lexical features can be covered by clusters. To capture this intuition, we define the PC of a prefix length i as below: 525 ( ) ( ) ( ) ic l count f PC i count f (2) where lf stands for a lexical feature such as HM1 and icf a cluster feature with prefix length i such as HM1_WCi, (*) count is the number of occurrences of that feature in training data. Similar to IG, we compute PC for a group of cluster features, not for each individual feature. In our experiments, the top 10 ranked prefix lengths based on IG and prefix lengths with PC values in the range [0.4, 0.9] were used. 
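Both measures reduce to simple counting over the training data. The sketch below is an illustrative implementation, not the authors' code, that scores a candidate prefix length by the information gain of the collapsed head-word cluster feature and by its prefix coverage.

```python
import math
from collections import Counter, defaultdict

def _entropy(counts):
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values() if c)

def information_gain(instances, word2bits, length):
    """IG of the collapsed head-word cluster feature at one prefix length.

    `instances` is a list of (relation_type, head_word) pairs taken from
    the training data; the feature value is the bit-string prefix, or
    None when the word is uncovered at this length.
    """
    prior = Counter(c for c, _ in instances)
    by_value = defaultdict(Counter)
    for c, w in instances:
        bits = word2bits.get(w, "")
        by_value[bits[:length] if len(bits) >= length else None][c] += 1
    n = len(instances)
    posterior = sum((sum(cnt.values()) / n) * _entropy(cnt) for cnt in by_value.values())
    return _entropy(prior) - posterior

def prefix_coverage(instances, word2bits, length):
    """Fraction of head-word feature occurrences covered at this prefix length."""
    covered = sum(1 for _, w in instances if len(word2bits.get(w, "")) >= length)
    return covered / len(instances)

# Rank every available prefix length (1..20 when inducing 1,000 Brown clusters)
# by IG, or keep those whose coverage falls in a target band such as [0.4, 0.9]:
# ranked = sorted(range(1, 21), key=lambda L: -information_gain(train, word2bits, L))
# kept = [L for L in range(1, 21) if 0.4 <= prefix_coverage(train, word2bits, L) <= 0.9]
```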
In addition to the above two statistical measures, for comparison, we introduce another two simple but extreme measures for the selection of clusters. Use All Prefixes (UA): UA produces a cluster feature at every available bit length with the hope that the underlying supervised system can learn proper weights of different cluster features during training. For example, if the full bit representation of “Apple” is “000”, UA would produce three cluster features: prefix1=0, prefix2=00 and prefix3=000. Because this method does not need validation on the development set, it is the laziest but the fastest method for selecting clusters. Exhaustive Search (ES): ES works by trying every possible combination of the set l and picking the one that works the best for the development set. This is the most cautious and the slowest method for selecting clusters. 6 Experiments In this section, we first present details of our unsupervised word clusters, the relation extraction data set and its preprocessing. We then present a series of experiments coupled with result analyses. We used the English portion of the TDT5 corpora (LDC2006T18) as our unlabeled data for inducing word clusters. It contains roughly 83 million words in 3.4 million sentences with a vocabulary size of 450K. We left case intact in the corpora. Following previous work, we used Liang’s implementation of the Brown clustering algorithm (Liang, 2005). We induced 1,000 word clusters for words that appeared at least twice in the corpora. The reduced vocabulary contains 255K unique words. The clusters are available at http://www.cs.nyu.edu/~asun/data/TDT5_BrownW C.tar.gz. For relation extraction, we used the benchmark ACE 2004 training data. Following most of the previous research, we used in experiments the nwire (newswire) and bnews (broadcast news) genres of the data containing 348 documents and 4374 relation instances. We extracted an instance for every pair of mentions in the same sentence which were separated by no more than two other mentions. The non-relation instances generated were about 8 times more than the relation instances. Preprocessing of the ACE documents: We used the Stanford parser6 for syntactic and dependency parsing. We used chunklink7 to derive chunking information from the Stanford parsing. Because some bnews documents are in lower case, we recover the case for the head of a mention if its type is NAM by making the first character into its upper case. This is for better matching between the words in ACE and the words in the unsupervised word clusters. We used the OpenNLP 8 maximum entropy (maxent) package as our machine learning tool. We choose to work with maxent because the training is fast and it has a good support for multiclass classification. 6.1 Baseline Performance Following previous work, we did 5-fold crossvalidation on the 348 documents with handannotated entity mentions. Our results are shown in Table 5 which also lists the results of another three state-of-the-art feature based systems. For this and the following experiments, all the results were computed at the relation mention level. System P(%) R(%) F(%) Zhou et al. (2007)9 78.2 63.4 70.1 Zhao and Grishman (2005)10 69.2 71.5 70.4 Our Baseline 73.4 67.7 70.4 Jiang and Zhai (2007) 11 72.4 70.2 71.3 Table 5: Performance comparison on the ACE 2004 data over the 7 relation types. 6 http://nlp.stanford.edu/software/lex-parser.shtml 7 http://ilk.uvt.nl/team/sabine/chunklink/README.html 8 http://opennlp.sourceforge.net/ 9 Zhou et al. 
(2005) tested their system on the ACE 2003 data; Zhou et al. (2007) tested their system on the ACE 2004 data. 10 The paper gives a recall value of 70.5, which is not consistent with the given values of P and F. An examination of the correspondence in preparing this paper indicates that the correct recall value is 71.5. 11 The result is from using the All features in Jiang and Zhai (2007). It is not quite clear from the paper that whether they used the 348 documents or the whole 2004 training data. 526 Note that although all the 4 systems did 5-fold cross-validation on the ACE 2004 data, the detailed data partition might be different. Also, we were doing cross-validation at the document level which we believe was more natural than the instance level. Nonetheless, we believe our baseline system has achieved very competitive performance. 6.2 The Effectiveness of Cluster Selection Methods We investigated the tradeoff between performance and training time of each proposed method in selecting clusters. In this experiment, we randomly selected 70 documents from the 348 documents as test data which roughly equaled the size of 1 fold in the baseline in Section 6.1. For the baseline in this section, all the rest of the documents were used as training data. For the semi-supervised system, 70 percent of the rest of the documents were randomly selected as training data and 30 percent as development data. The set of prefix lengths that worked the best for the development set was chosen to select clusters. We only used the cluster feature HM_WC in this experiment. System F △ Training Time (in minute) Baseline 70.70 1 UA 71.19 +0.49 1.5 PC3 71.65 +0.95 30 PC4 71.72 +1.02 46 IG3 71.65 +0.95 45 IG4 71.68 +0.98 78 ES3 71.66 +0.96 465 ES4 71.60 +0.90 1678 Table 6: The tradeoff between performance and training time of each method in selecting clusters. PC3 means using 3 prefixes with the PC method. △ in this paper means the difference between a system and the baseline. Table 6 shows that all the 4 proposed methods improved baseline performance, with UA as the fastest and ES as the slowest. It was interesting that ES did not always outperform the two statistical methods which might be because of its overfitting to the development set. In general, both PC and IG had good balances between performance and training time. There was no dramatic difference in performance between using 3 and 4 prefix lengths. For the rest of this paper, we will only use PC4 as our method in selecting clusters. 6.3 The Effectiveness of Cluster Features The baseline here is the same one used in Section 6.1. For the semi-supervised system, each test fold was the same one used in the baseline and the other 4 folds were further split into a training set and a development set in a ratio of 7:3 for selecting clusters. We first added the cluster features individually into the baseline and then added them incrementally according to the order specified in Table 4. System F △ 1 Baseline 70.4 2 1 + HM_WC 71.5 + 1.1 3 1 + BagWM_WC 71.0 + 0.6 4 1 + HC_WC 69.6 - 0.8 5 1 + BagWC_WC 46.1 - 24.3 6 2 + BagWM_WC 71.0 + 0.6 7 6 + HC_WC 70.6 + 0.2 8 7+ BagWC_WC 50.3 - 20.1 Table 7: Performance 12 of the baseline and using different cluster features with PC4 over the 7 types. We found that adding clusters to the heads of the two mentions was the most effective way of introducing cluster features. Adding clusters to the words of the mentions can also help, though not as good as the heads. We were surprised that the heads of chunks in context did not help. 
This might be because ACE relations are usually short and the limited number of long relations is not sufficient in generalizing cluster features. Adding clusters to every word in context hurt the performance a lot. Because of the behavior of each individual feature, it was not surprising that adding them incrementally did not give more performance gain. For the rest of this paper, we will only use HM_WC as cluster features. 6.4 The Impact of Training Size We studied the impact of training data size on cluster features as shown in Table 8. The test data was always the same as the 5-fold used in the baseline in Section 6.1. no matter the size of the training data. The training documents for the 12 All the improvements of F in Table 7, 8 and 9 were significant at confidence levels >= 95%. 527 # docs F of Relation Classification F of Relation Detection Baseline PC4 (△) Prefix10(△) Baseline PC4(△) Prefix10(△) 50 62.9 63.8(+ 0.9) 63.7(+0.8) 71.4 71.9(+ 0.5) 71.6(+0.2) 75 62.8 64.6(+ 1.8) 63.9(+1.1) 71.5 72.3(+ 0.8) 72.5(+1.0) 125 66.1 68.1(+ 2.0) 67.5(+1.4) 74.5 74.8(+ 0.3) 74.3(-0.2) 175 67.8 69.7(+ 1.9) 69.5(+1.7) 75.2 75.5(+ 0.3) 75.2(0.0) 225 68.9 70.1(+ 1.2) 69.6(+0.7) 75.6 75.9(+ 0.3) 75.3(-0.3) ≈280 70.4 71.5(+ 1.1) 70.7(+0.3) 76.4 76.9(+ 0.5) 76.3(-0.1) Table 8: Performance over the 7 relation types with different sizes of training data. Prefix10 uses the single prefix length 10 to generate word clusters as used by Chan and Roth (2010). Type P R F Baseline PC4 (△) Baseline PC4 (△) Baseline PC4 (△) EMP-ORG 75.4 77.2(+1.8) 79.8 81.5(+1.7) 77.6 79.3(+1.7) PHYS 73.2 71.2(-2.0) 61.6 60.2(-1.4) 66.9 65.3(-1.7) GPE-AFF 67.1 69.0(+1.9) 60.0 63.2(+3.2) 63.3 65.9(+2.6) PER-SOC 88.2 83.9(-4.3) 58.4 61.0(+2.6) 70.3 70.7(+0.4) DISC 79.4 80.6(+1.2) 42.9 46.0(+3.2) 55.7 58.6(+2.9) ART 87.9 96.9(+9.0) 63.0 67.4(+4.4) 73.4 79.3(+5.9) OTHER-AFF 70.6 80.0(+9.4) 41.4 41.4(0.0) 52.2 54.6(+2.4) Table 9: Performance of each individual relation type based on 5-fold cross-validation. current size setup were randomly selected and added to the previous size setup (if applicable). For example, we randomly selected another 25 documents and added them to the previous 50 documents to get 75 documents. We made sure that every document participated in this experiment. The training documents for each size setup were split into a real training set and a development set in a ratio of 7:3 for selecting clusters. There are some clear trends in Table 8. Under each training size, PC4 consistently outperformed the baseline and the system Prefix10 for relation classification. For PC4, the gain for classification was more pronounced than detection. The mixed detection results of Prefix10 indicated that only using a single prefix may not be stable. We did not observe the same trend in the reduction of annotation need with cluster-based features as in Koo et al. (2008) for dependency parsing. PC4 with sizes 50, 125, 175 outperformed the baseline with sizes 75, 175, 225 respectively. But this was not the case when PC4 was tested with sizes 75 and 225. This might due to the complexity of the relation extraction task. 6.5 Analysis There were on average 69 cross-type errors in the baseline in Section 6.1 which were reduced to 56 by using PC4. Table 9 showed that most of the improvements involved EMP-ORG, GPE-AFF, DISC, ART and OTHER-AFF. The performance gain for PER-SOC was not as pronounced as the other five types. 
The five types of relations are ambiguous as they share the same entity type GPE while the PER-SOC relation only holds between PER and PER. This reflects that word clusters can help to distinguish between ambiguous relation types. As mentioned earlier the gain of relation detection was not as pronounced as classification as shown in Table 8. The unbalanced distribution of relation instances and non-relation instances remains as an obstacle for pushing the performance of relation extraction to the next level. 7 Conclusion and Future Work We have described a semi-supervised relation extraction system with large-scale word clustering. We have systematically explored the effectiveness of different cluster-based features. We have also demonstrated that the two proposed statistical methods are both effective and efficient in selecting clusters at an appropriate level of granularity through an extensive experimental study. 528 Based on the experimental results, we plan to investigate additional ways to improve the performance of relation detection. Moreover, extending word clustering to phrase clustering (Lin and Wu, 2009) and pattern clustering (Sun and Grishman, 2010) is worth future investigation for relation extraction. References Rie K. Ando and Tong Zhang. 2005 A Framework for Learning Predictive Structures from Multiple Tasks and Unlabeled Data. Journal of Machine Learning Research, Vol 6:1817-1853. Elizabeth Boschee, Ralph Weischedel, and Alex Zamanian. 2005. Automatic information extraction. In Proceedings of the International Conference on Intelligence Analysis. Peter F. Brown, Vincent J. Della Pietra, Peter V. deSouza, Jenifer C. Lai, and Robert L. Mercer. 1992. Class-based n-gram models of natural language. Computational Linguistics, 18(4):467–479. Razvan C. Bunescu and Raymond J. Mooney. 2005a. A shortest path dependency kenrel for relation extraction. In Proceedings of HLT/EMNLP. Razvan C. Bunescu and Raymond J. Mooney. 2005b. Subsequence kernels for relation extraction. In Proceedings of NIPS. Yee Seng Chan and Dan Roth. 2010. Exploiting background knowledge for relation extraction. In Proc. of COLING. Ralph Grishman, David Westbrook and Adam Meyers. 2005. NYU’s English ACE 2005 System Description. ACE 2005 Evaluation Workshop. Earl B. Hunt, Philip J. Stone and Janet Marin. 1966. Experiments in Induction. New York: Academic Press, 1966. Jing Jiang and ChengXiang Zhai. 2007. A systematic exploration of the feature space for relation extraction. In Proceedings of HLT-NAACL-07. Nanda Kambhatla. 2004. Combining lexical, syntactic, and semantic features with maximum entropy models for information extraction. In Proceedings of ACL-04. Terry Koo, Xavier Carreras, and Michael Collins. 2008. Simple Semi-supervised Dependency Parsing. In Proceedings of ACL-08: HLT. Percy Liang. 2005. Semi-Supervised Learning for Natural Language. Master’s thesis, Massachusetts Institute of Technology. Dekang Lin and Xiaoyun Wu. 2009. Phrase Clustering for Discriminative Learning. In Proc. of ACL-09. Scott Miller, Heidi Fox, Lance Ramshaw, and Ralph Weischedel. 2000. A novel use of statistical parsing to extract information from text. In Proc. of NAACL. Scott Miller, Jethran Guinness and Alex Zamanian. 2004. Name Tagging with Word Clusters and Discriminative Training. In Proc. of HLT-NAACL. Longhua Qian, Guodong Zhou, Qiaoming Zhu and Peide Qian. 2008. Exploiting constituent dependencies for tree kernel-based semantic relation extraction . In Proc. of COLING. John Ross Quinlan. 1986. 
Induction of decision trees. Machine Learning, 1(1), 81-106. Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of CoNLL-09. Ang Sun. 2009. A Two-stage Bootstrapping Algorithm for Relation Extraction. In RANLP-09. Ang Sun and Ralph Grishman. 2010. Semi-supervised Semantic Pattern Discovery with Guidance from Unsupervised Pattern Clusters. In Proc. of COLING. Joseph Turian, Lev-Arie Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In Proceedings of ACL. Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation extraction. Journal of Machine Learning Research, 3:1083–1106. Zhu Zhang. 2004. Weakly supervised relation classification for information extraction. In Proc. of CIKM’2004. Min Zhang, Jie Zhang, Jian Su, and GuoDong Zhou. 2006. A composite kernel to extract relations between entities with both flat and structured features. In Proceedings of COLING-ACL-06. Shubin Zhao and Ralph Grishman. 2005. Extracting relations with integrated information using kernel methods. In Proceedings of ACL. Guodong Zhou, Jian Su, Jie Zhang, and Min Zhang. 2005. Exploring various knowledge in relation extraction. In Proceedings of ACL-05. Guodong Zhou, Min Zhang, DongHong Ji, and QiaoMing Zhu. 2007. Tree kernel-based relation extraction with context-sensitive structured parse tree information. In Proceedings of EMNLPCoNLL-07. 529
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 530–540, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics In-domain Relation Discovery with Meta-constraints via Posterior Regularization Harr Chen, Edward Benson, Tahira Naseem, and Regina Barzilay Computer Science and Artificial Intelligence Laboratory Massachusetts Institute of Technology {harr, eob, tahira, regina} @csail.mit.edu Abstract We present a novel approach to discovering relations and their instantiations from a collection of documents in a single domain. Our approach learns relation types by exploiting meta-constraints that characterize the general qualities of a good relation in any domain. These constraints state that instances of a single relation should exhibit regularities at multiple levels of linguistic structure, including lexicography, syntax, and document-level context. We capture these regularities via the structure of our probabilistic model as well as a set of declaratively-specified constraints enforced during posterior inference. Across two domains our approach successfully recovers hidden relation structure, comparable to or outperforming previous state-of-the-art approaches. Furthermore, we find that a small set of constraints is applicable across the domains, and that using domain-specific constraints can further improve performance. 1 1 Introduction In this paper, we introduce a novel approach for the unsupervised learning of relations and their instantiations from a set of in-domain documents. Given a collection of news articles about earthquakes, for example, our method discovers relations such as the earthquake’s location and resulting damage, and extracts phrases representing the relations’ instantiations. Clusters of similar in-domain documents are 1The source code for this work is available at: http://groups.csail.mit.edu/rbg/code/relation extraction/ A strong earthquake rocked the Philippine island of Mindoro early Tuesday, [destroying]ind [some homes]arg ... A strong earthquake hit the China-Burma border early Wednesday ... The official Xinhua News Agency said [some houses]arg were [damaged]ind ... A strong earthquake with a preliminary magnitude of 6.6 shook northwestern Greece on Saturday, ... [destroying]ind [hundreds of old houses]arg ... Figure 1: Excerpts from newswire articles about earthquakes. The indicator and argument words for the damage relation are highlighted. increasingly available in forms such as Wikipedia article categories, financial reports, and biographies. In contrast to previous work, our approach learns from domain-independent meta-constraints on relation expression, rather than supervision specific to particular relations and their instances. In particular, we leverage the linguistic intuition that documents in a single domain exhibit regularities in how they express their relations. These regularities occur both in the relations’ lexical and syntactic realizations as well as at the level of document structure. For instance, consider the damage relation excerpted from earthquake articles in Figure 1. Lexically, we observe similar words in the instances and their contexts, such as “destroying” and “houses.” Syntactically, in two instances the relation instantiation is the dependency child of the word “destroying.” On the discourse level, these instances appear toward the beginning of their respective documents. 
In general, valid relations in many domains are characterized by these coherence properties. We capture these regularities using a Bayesian model where the underlying relations are repre530 sented as latent variables. The model takes as input a constituent-parsed corpus and explains how the constituents arise from the latent variables. Each relation instantiation is encoded by the variables as a relation-evoking indicator word (e.g., “destroying”) and corresponding argument constituent (e.g., “some homes”).2 Our approach capitalizes on relation regularity in two ways. First, the model’s generative process encourages coherence in the local features and placement of relation instances. Second, we apply posterior regularization (Grac¸a et al., 2007) during inference to enforce higher-level declarative constraints, such as requiring indicators and arguments to be syntactically linked. We evaluate our approach on two domains previously studied for high-level document structure analysis, news articles about earthquakes and financial markets. Our results demonstrate that we can successfully identify domain-relevant relations. We also study the importance and effectiveness of the declaratively-specified constraints. In particular, we find that a small set of declarative constraints are effective across domains, while additional domainspecific constraints yield further benefits. 2 Related Work Extraction with Reduced Supervision Recent research in information extraction has taken large steps toward reducing the need for labeled data. Examples include using bootstrapping to amplify small seed sets of example outputs (Agichtein and Gravano, 2000; Yangarber et al., 2000; Bunescu and Mooney, 2007; Zhu et al., 2009), leveraging existing databases that overlap with the text (Mintz et al., 2009; Yao et al., 2010), and learning general domain-independent knowledge bases by exploiting redundancies in large web and news corpora (Hasegawa et al., 2004; Shinyama and Sekine, 2006; Banko et al., 2007; Yates and Etzioni, 2009). Our approach is distinct in both the supervision and data we operate over. First, in contrast to bootstrapping and database matching approaches, we learn from meta-qualities, such as low variability in syntactic patterns, that characterize a good relation. 2We do not use the word “argument” in the syntactic sense— a relation’s argument may or may not be the syntactic dependency argument of its indicator. We hypothesize that these properties hold across relations in different domains. Second, in contrast to work that builds general relation databases from heterogeneous corpora, our focus is on learning the relations salient in a single domain. Our setup is more germane to specialized domains expressing information not broadly available on the web. Earlier work in unsupervised information extraction has also leveraged meta-knowledge independent of specific relation types, such as declarativelyspecified syntactic patterns (Riloff, 1996), frequent dependency subtree patterns (Sudo et al., 2003), and automatic clusterings of syntactic patterns (Lin and Pantel, 2001; Zhang et al., 2005) and contexts (Chen et al., 2005; Rosenfeld and Feldman, 2007). Our approach incorporates a broader range of constraints and balances constraints with underlying patterns learned from the data, thereby requiring more sophisticated machinery for modeling and inference. Extraction with Constraints Previous work has recognized the appeal of applying declarative constraints to extraction. 
In a supervised setting, Roth and Yih (2004) induce relations by using linear programming to impose global declarative constraints on the output from a set of classifiers trained on local features. Chang et al. (2007) propose an objective function for semi-supervised extraction that balances likelihood of labeled instances and constraint violation on unlabeled instances. Recent work has also explored how certain kinds of supervision can be formulated as constraints on model posteriors. Such constraints are not declarative, but instead based on annotations of words’ majority relation labels (Mann and McCallum, 2008) and pre-existing databases with the desired output schema (Bellare and McCallum, 2009). In contrast to previous work, our approach explores a different class of constraints that does not rely on supervision that is specific to particular relation types and their instances. 3 Model Our work performs in-domain relation discovery by leveraging regularities in relation expression at the lexical, syntactic, and discourse levels. These regularities are captured via two components: a probabilistic model that explains how documents are generated from latent relation variables and a technique 531 is_verb 0 1 0 earthquake 1 0 0 hit 0 1 0 has_proper 0 0 1 has_number 0 0 0 depth 1 3 2 Figure 2: Words w and constituents x of syntactic parses are represented with indicator features φi and argument features φa respectively. A single relation instantiation is a pair of indicator w and argument x; we filter w to be nouns and verbs and x to be noun phrases and adjectives. for biasing inference to adhere to declarativelyspecified constraints on relation expression. This section describes the generative process, while Sections 4 and 5 discuss declarative constraints. 3.1 Problem Formulation Our input is a corpus of constituent-parsed documents and a number K of relation types. The output is K clusters of semantically related relation instantiations. We represent these instantiations as a pair of indicator word and argument sequence from the same sentence. The indicator’s role is to anchor a relation and identify its type. We only allow nouns or verbs to be indicators. For instance, in the earthquake domain a likely indicator for damage would be “destroyed.” The argument is the actual relation value, e.g., “some homes,” and corresponds to a noun phrase or adjective.3 Along with the document parse trees, we utilize a set of features φi(w) and φa(x) describing each potential indicator word w and argument constituent x, respectively. An example feature representation is shown in Figure 2. These features can encode words, part-of-speech tags, context, and so on. Indicator and argument feature definitions need not be the same (e.g., has number is important for argu3In this paper we focus on unary relations; binary relations can be modeled with extensions of the hidden variables and constraints. ments but irrelevant for indicators).4 3.2 Generative Process Our model associates each relation type k with a set of feature distributions θk and a location distribution λk. Each instantiation’s indicator and argument, and its position within a document, are drawn from these distributions. By sharing distributions within each relation, the model places high probability mass on clusters of instantiations that are coherent in features and position. Furthermore, we allow at most one instantiation per document and relation, so as to target relations that are relevant to the entire document. 
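Before walking through the generative steps, it helps to fix what the observed inputs look like. The following Python sketch is purely illustrative; the data structures and feature choices are assumptions in the spirit of Figure 2, not the paper's implementation. Candidate indicators are nouns and verbs, candidate arguments are noun phrases and adjectives, and each carries a map of categorical features.

```python
def indicator_features(token):
    # Categorical features for a candidate indicator word (a noun or verb).
    return {
        "word": token["word"].lower(),
        "is_verb": token["pos"].startswith("VB"),
        "stem": token.get("stem", token["word"].lower()),
    }

def argument_features(constituent):
    # Categorical features for a candidate argument constituent (NP or ADJP).
    tokens = constituent["tokens"]
    return {
        "head": constituent["head"].lower(),
        "label": constituent["label"],
        "has_proper": any(t["pos"] == "NNP" for t in tokens),
        "has_number": any(t["pos"] == "CD" for t in tokens),
        "depth": constituent["depth"],  # depth of the constituent in the parse
    }

def candidates(sentence):
    # One feature map per candidate indicator w and argument x in a sentence.
    inds = [t for t in sentence["tokens"] if t["pos"][:2] in ("NN", "VB")]
    args = [c for c in sentence["constituents"] if c["label"] in ("NP", "ADJP")]
    return ([(t["word"], indicator_features(t)) for t in inds],
            [(c["head"], argument_features(c)) for c in args])
```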
There are three steps to the generative process. First, we draw feature and location distributions for each relation. Second, an instantiation is selected for every pair of document d and relation k. Third, the indicator features of each word and argument features of each constituent are generated based on the relation parameters and instantiations. Figure 3 presents a reference for the generative process. Generating Relation Parameters Each relation k is associated with four feature distribution parameter vectors: θi k for indicator words, θbi k for nonindicator words, θa k for argument constituents, and θba k for non-argument constituents. Each of these is a set of multinomial parameters per feature drawn from a symmetric Dirichlet prior. A likely indicator word should have features that are highly probable according to θi k, and likewise for arguments and θa k. Parameters θbi k and θba k represent background distributions for non-relation words and constituents, similar in spirit to other uses of background distributions that filter out irrelevant words (Che, 2006).5 By drawing each instance from these distributions, we encourage the relation to be coherent in local lexical and syntactic properties. Each relation type k is also associated with a parameter vector λk over document segments drawn from a symmetric Dirichlet prior. Documents are divided into L equal-length segments; λk states how likely relation k is for each segment, with one null outcome for the relation not occurring in the document. Because λk is shared within a relation, its 4We consider only categorical features here, though the extension to continuous or ordinal features is straightforward. 5We use separate background distributions for each relation to make inference more tractable. 532 For each relation type k: • For each indicator feature φi draw feature distributions θi k,φi, θbi k,φi ∼Dir(θ0) • For each argument feature φa draw feature distributions θa k,φa, θba k,φa ∼Dir(θ0) • Draw location distribution λk ∼Dir(λ0) For each relation type k and document d: • Select document segment sd,k ∼Mult(λk) • Select sentence zd,k uniformly from segment sd,k, and indicator id,k and argument ad,k uniformly from sentence zd,k For each word w in every document d: • Draw each indicator feature φi(w) ∼ Mult 1 Z QK k=1 θk,φi , where θk,φi is θi k,φi if id,k = w and θbi k,φi otherwise For each constituent x in every document d: • Draw each argument feature φa(x) ∼ Mult 1 Z QK k=1 θk,φa , where θk,φa is θa k,φa if ad,k = x and θba k,φa otherwise Figure 3: The generative process for model parameters and features. In the above Dir and Mult refer respectively to the Dirichlet distribution and multinomial distribution. Fixed hyperparameters are subscripted with zero. instances will tend to occur in the same relative positions across documents. The model can learn, for example, that a particular relation typically occurs in the first quarter of a document (if L = 4). Generating Relation Instantiations For every relation type k and document d, we first choose which portion of the document (if any) contains the instantiation by drawing a document segment sd,k from λk. Our model only draws one instantiation per pair of k and d, so each discovered instantiation within a document is a separate relation. We then choose the specific sentence zd,k uniformly from within the segment, and the indicator word id,k and argument constituent ad,k uniformly from within that sentence. Generating Text Finally, we draw the feature values. 
We make a Na¨ıve Bayes assumption between features, drawing each independently conditioned on relation structure. For a word w, we want all relations to be able to influence its generation. Toward this end, we compute the element-wise product of feature parameters across relations k = 1, . . . , K, using indicator parameters θi k if relation k selected w as an indicator word (if id,k = w) and background parameters θbi k otherwise. The result is then normalized to form a valid multinomial that produces word w’s features. Constituents are drawn similarly from every relations’ argument distributions. 4 Inference with Constraints The model presented above leverages relation regularities in local features and document placement. However, it is unable to specify global syntactic preferences about relation expression, such as indicators and arguments being in the same clause. Another issue with this model is that different relations could overlap in their indicators and arguments.6 To overcome these obstacles, we apply declarative constraints by imposing inequality constraints on expectations of the posterior during inference using posterior regularization (Grac¸a et al., 2007). In this section we present the technical details of the approach; Section 5 explains the specific linguistically-motivated constraints we consider. 4.1 Inference with Posterior Regularization We first review how posterior regularization impacts the variational inference procedure in general. Let θ, z, and x denote the parameters, hidden structure, and observations of an arbitrary model. We are interested in estimating the posterior distribution p(θ, z | x) by finding a distribution q(θ, z) ∈Q that is minimal in KL-divergence to the true posterior: KL(q(θ, z) ∥p(θ, z | x)) = Z q(θ, z) log q(θ, z) p(θ, z, x)dθdz + log p(x). (1) For tractability, variational inference typically makes a mean-field assumption that restricts the set Q to distributions where θ and z are independent, i.e., q(θ, z) = q(θ)q(z). We then optimize equation 1 by coordinate-wise descent on q(θ) and q(z). To incorporate constraints into inference, we further restrict Q to distributions that satisfy a given 6In fact, a true maximum a posteriori estimate of the model parameters would find the same most salient relation over and over again for every k, rather than finding K different relations. 533 set of inequality constraints, each of the form Eq[f(z)] ≤b. Here, f(z) is a deterministic function of z and b is a user-specified threshold. Inequalities in the opposite direction simply require negating f(z) and b. For example, we could apply a syntactic constraint of the form Eq[f(z)] ≥b, where f(z) counts the number of indicator/argument pairs that are syntactically connected in a pre-specified manner (e.g., the indicator and argument modify the same verb), and b is a fixed threshold. Given a set C of constraints with functions fc(z) and thresholds bc, the updates for q(θ) and q(z) from equation 1 are as follows: q(θ) = argmin q(θ) KL q(θ) ∥q′(θ) , (2) where q′(θ) ∝exp Eq(z)[log p(θ, z, x)], and q(z) = argmin q(z) KL q(z) ∥q′(z) s.t. Eq(z)[fc(z)] ≤bc, ∀c ∈C, (3) where q′(z) ∝exp Eq(θ)[log p(θ, z, x)]. Equation 2 is not affected by the posterior constraints and is updated by setting q(θ) to q′(θ). We solve equation 3 in its dual form (Grac¸a et al., 2007): argmin κ X c∈C κcbc + log X z q′(z)e−P c∈C κcfc(z) s.t. κc ≥0, ∀c ∈C. 
(4) With the box constraints of equation 4, a numerical optimization procedure such as L-BFGS-B (Byrd et al., 1995) can be used to find optimal dual parameters κ∗. The original q(z) is then updated to q′(z) exp −P c∈C κ∗ cfc(z) and renormalized. 4.2 Updates for our Model Our model uses this mean-field factorization: q(θ, λ, z, a, i) = K Y k=1 q(λk; ˆλk)q(θi k; ˆθi k)q(θbi k ; ˆθbi k )q(θa k; ˆθa k)q(θba k ; ˆθba k ) × Y d q(zd,k, ad,k, id,k; ˆcd,k) (5) In the above, ˆλ and ˆθ are Dirichlet distribution parameters, and ˆc are multinomial parameters. Note that we do not factorize the distribution of z, i, and a for a single document and relation, instead representing their joint distribution with a single set of variational parameters ˆc. This is tractable because a single relation occurs only once per document, reducing the joint search space of z, i, and a. The factors in equation 5 are updated one at a time while holding the other factors fixed. Updating ˆθ Due to the Na¨ıve Bayes assumption between features, each feature’s q(θ) distributions can be updated separately. However, the product between feature parameters of different relations introduces a nonconjugacy in the model, precluding a closed form update. Instead we numerically optimize equation 1 with respect to each ˆθ, similarly to previous work (Boyd-Graber and Blei, 2008). For instance, ˆθi k,φ of relation k and feature φ is updated by finding the gradient of equation 1 with respect to ˆθi k,φ and applying L-BFGS. Parameters ˆθbi, ˆθa, and ˆθba are updated analogously. Updating ˆλ This update follows the standard closed form for Dirichlet parameters: ˆλk,ℓ= λ0 + Eq(z,a,i)[Cℓ(z, a, i)], (6) where Cℓcounts the number of times z falls into segment ℓof a document. Updating ˆc Parameters ˆc are updated by first computing an unconstrained update q′(z, a, i; ˆc′): ˆc′ d,k,(z,a,i) ∝exp Eq(λk)[log p(z, a, i | λk)] + Eq(θi k)[log p(i | θi k)] + X w̸=i Eq(θbi k )[log p(w | θbi k )] + Eq(θa k)[log p(a | θa k)] + X x̸=a Eq(θba k )[log p(x | θba k )] We then perform the minimization on the dual in equation 4 under the provided constraints to derive a final update to the constrained ˆc. Simplifying Approximation The update for ˆθ requires numerical optimization due to the nonconjugacy introduced by the point-wise product in feature generation. If instead we have every relation type separately generate a copy of the corpus, the ˆθ 534 Quantity f(z, a, i) ≤or ≥ b Syntax ∀k Counts i, a of relation k that match a pattern (see text) ≥ 0.8D Prevalence ∀k Counts instantiations of relation k ≥ 0.8D Separation (ind) ∀w Counts times w selected as i ≤ 2 Separation (arg) ∀w Counts times w selected as part of a ≤ 1 Table 1: Each constraint takes the form Eq[f(z, a, i)] ≤b or Eq[f(z, a, i)] ≥b; D denotes the number of corpus documents, ∀k means one constraint per relation type, and ∀w means one constraint per token in the corpus. updates becomes closed-form expressions similar to equation 6. This approximation yields similar parameter estimates as the true updates while vastly improving speed, so we use it in our experiments. 5 Declarative Constraints We now have the machinery to incorporate a variety of declarative constraints during inference. The classes of domain-independent constraints we study are summarized in Table 1. For the proportion constraints we arbitrarily select a threshold of 80% without any tuning, in the spirit of building a domain-independent approach. 
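Mechanically, each such constraint enters inference through the dual of equation 4: the unconstrained posterior is reweighted by exp(−Σc κc fc(z)) with the dual variables fit under non-negativity bounds. The sketch below is a simplified illustration, not the authors' implementation; it assumes the latent configurations can be enumerated explicitly and uses SciPy's L-BFGS-B for the box-constrained dual.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def project_posterior(q_prime, F, b):
    """Project q'(z) onto {q : E_q[f_c(z)] <= b_c for all c}.

    q_prime : probabilities over an explicitly enumerated space of z.
    F       : matrix with F[c, z] = f_c(z) for each constraint c.
    b       : vector of thresholds b_c.
    Minimizes the dual of equation 4 over kappa >= 0 with L-BFGS-B, then
    reweights q'(z) by exp(-sum_c kappa_c f_c(z)) and renormalizes.
    """
    log_q = np.log(q_prime)

    def dual(kappa):
        return kappa @ b + logsumexp(log_q - kappa @ F)

    res = minimize(dual, np.zeros(len(b)), method="L-BFGS-B",
                   bounds=[(0.0, None)] * len(b))
    q = q_prime * np.exp(-res.x @ F)
    return q / q.sum()

# Toy usage: three outcomes and one constraint E_q[f] <= 0.5 with f = (1, 1, 0);
# mass is pushed toward the third outcome until the expectation meets the bound.
q = project_posterior(np.array([0.5, 0.3, 0.2]),
                      np.array([[1.0, 1.0, 0.0]]),
                      np.array([0.5]))
```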
Syntax As previous work has observed, most relations are expressed using a limited number of common syntactic patterns (Riloff, 1996; Banko and Etzioni, 2008). Our syntactic constraint captures this insight by requiring that a certain proportion of the induced instantiations for each relation match one of these syntactic patterns: • The indicator is a verb and the argument’s headword is either the child or grandchild of the indicator word in the dependency tree. • The indicator is a noun and the argument is a modifier or complement. • The indicator is a noun in a verb’s subject and the argument is in the corresponding object. Prevalence For a relation to be domain-relevant, it should occur in numerous documents across the corpus, so we institute a constraint on the number of times a relation is instantiated. Note that the effect of this constraint could also be achieved by tuning the prior probability of a relation not occurring in a document. However, this prior would need to be adjusted every time the number of documents or feature selection changes; using a constraint is an appealing alternative that is portable across domains. Separation The separation constraint encourages diversity in the discovered relation types by restricting the number of times a single word can serve as either an indicator or part of the argument of a relation instance. Specifically, we require that every token of the corpus occurs at most once as a word in a relation’s argument in expectation. On the other hand, a single word can sometimes be evocative of multiple relations (e.g., “occurred” signals both date and time in “occurred on Friday at 3pm”). Thus, we allow each word to serve as an indicator more than once, arbitrarily fixing the limit at two. 6 Experimental Setup Datasets and Metrics We evaluate on two datasets, financial market reports and newswire articles about earthquakes, previously used in work on high-level content analysis (Barzilay and Lee, 2004; Lapata, 2006). Finance articles chronicle daily market movements of currencies and stock indexes, and earthquake articles document specific earthquakes. Constituent parses are obtained automatically using the Stanford parser (Klein and Manning, 2003) and then converted to dependency parses using the PennConvertor tool (Johansson and Nugues, 2007). We manually annotated relations for both corpora, selecting relation types that occurred frequently in each domain. We found 15 types for finance and 9 for earthquake. Corpus statistics are summarized below, and example relation types are shown in Table 2. Docs Sent/Doc Tok/Doc Vocab Finance 100 12.1 262.9 2918 Earthquake 200 9.3 210.3 3155 In our task, annotation conventions for desired output relations can greatly impact token-level performance, and the model cannot learn to fit a particular convention by looking at example data. For example, earthquakes times are frequently reported in both local and GMT, and either may be arbitrarily chosen as correct. 
Moreover, the baseline we 535 Finance Bond 104.58 yen, 98.37 yen Dollar Change up 0.52 yen, down 0.01 yen Tokyo Index Change down 5.38 points or 0.41 percent, up 0.16 points, insignificant in percentage terms Earthquake Damage about 10000 homes, some buildings, no information Epicenter Patuca about 185 miles (300 kilometers) south of Quito, 110 kilometers (65 miles) from shore under the surface of the Flores sea in the Indonesian archipelago Magnitude 5.7, 6, magnitude-4 Table 2: Example relation types identified in the finance and earthquake datasets with example instance arguments. compare against produces lambda calculus formulas rather than spans of text as output, so a token-level comparison requires transforming its output. For these reasons, we evaluate on both sentencelevel and token-level precision, recall, and F-score. Precision is measured by mapping every induced relation cluster to its closest gold relation and computing the proportion of predicted sentences or words that are correct. Conversely, for recall we map every gold relation to its closest predicted relation and find the proportion of gold sentences or words that are predicted. This mapping technique is based on the many-to-one scheme used for evaluating unsupervised part-of-speech induction (Johnson, 2007). Note that sentence-level scores are always at least as high as token-level scores, since it is possible to select a sentence correctly but none of its true relation tokens while the opposite is not possible. Domain-specific Constraints On top of the crossdomain constraints from Section 5, we study whether imposing basic domain-specific constraints can be beneficial. The finance dataset is heavily quantitative, so we consider applying a single domain-specific constraint stating that most relation arguments should include a number. Likewise, earthquake articles are typically written with a majority of the relevant information toward the beginning of the document, so its domain-specific constraint is that most relations should occur in the first two sentences of a document. Note that these domain-specific constraints are not specific to individual relations or instances, but rather encode a preference across all relation types. In both cases, we again use an 80% threshold without tuning. Features For indicators, we use the word, part of speech, and word stem. For arguments, we use the word, syntactic constituent label, the head word of the parent constituent, and the dependency label of the argument to its parent. Baselines We compare against three alternative unsupervised approaches. Note that the first two only identify relation-bearing sentences, not the specific words that participate in the relation. Clustering (CLUTO): A straightforward way of identifying sentences bearing the same relation is to simply cluster them. We implement a clustering baseline using the CLUTO toolkit with word and part-of-speech features. As with our model, we set the number of clusters K to the true number of relation types. Mallows Topic Model (MTM): Another technique for grouping similar sentences is the Mallows-based topic model of Chen et al. (2009). The datasets we consider here exhibit high-level regularities in content organization, so we expect that a topic model with global constraints could identify plausible clusters of relation-bearing sentences. Again, K is set to the true number of relation types. 
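As a concrete illustration of the many-to-one mapping evaluation described above, the following sketch computes sentence- or token-level precision and recall from sets of predicted and gold items per relation. It is a hypothetical helper, not the evaluation code used in the paper; the remaining unsupervised baseline is described next.

```python
def many_to_one_scores(pred, gold):
    """pred, gold: dicts mapping a relation/cluster id to the set of sentence
    (or token) ids assigned to it.  Returns (precision, recall)."""
    # Precision: map each induced cluster to its closest gold relation.
    correct = sum(max(len(items & g) for g in gold.values()) if gold else 0
                  for items in pred.values())
    n_pred = sum(len(items) for items in pred.values())

    # Recall: map each gold relation to its closest induced cluster.
    found = sum(max(len(items & p) for p in pred.values()) if pred else 0
                for items in gold.values())
    n_gold = sum(len(items) for items in gold.values())

    return (correct / n_pred if n_pred else 0.0,
            found / n_gold if n_gold else 0.0)
```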
Unsupervised Semantic Parsing (USP): Our final unsupervised comparison is to USP, an unsupervised deep semantic parser introduced by Poon and Domingos (2009). USP induces a lambda calculus representation of an entire corpus and was shown to be competitive with open information extraction approaches (Lin and Pantel, 2001; Banko et al., 2007). We give USP the required Stanford dependency format as input (de Marneffe and Manning, 2008). We find that the results are sensitive to the cluster granularity prior, so we tune this parameter and report the best-performing runs. We recognize that USP targets a different output representation than ours: a hierarchical semantic structure over the entirety of a dependency-parsed text. In contrast, we focus on discovering a limited number K of domain-relevant relations expressed as constituent phrases. Despite these differences, both 536 methods ultimately aim to capture domain-specific relations expressed with varying verbalizations, and both operate over in-domain input corpora supplemented with syntactic information. For these reasons, USP provides a clear and valuable point of comparison. For this comparison, we transform USP’s lambda calculus formulas to relation spans as follows. First, we group lambda forms by a combination of core form, argument form, and the parent’s core form.7 We then filter to the K relations that appear in the most documents. For token-level evaluation we take the dependency tree fragment corresponding to the lambda form. For example, in the sentence “a strong earthquake rocked the Philippines island of Mindoro early Tuesday,” USP learns that the word “Tuesday” has a core form corresponding to words {Tuesday, Wednesday, Saturday}, a parent form corresponding to words {shook, rock, hit, jolt}, and an argument form of TMOD; all phrases with this same combination are grouped as a relation. Training Regimes and Hyperparameters For each run of our model we perform three random restarts to convergence and select the posterior with lowest final free energy. We fix K to the true number of annotated relation types for both our model and USP and L (the number of document segments) to five. Dirichlet hyperparameters are set to 0.1. 7 Results Table 3’s first two sections present the results of our main evaluation. For earthquake, the far more difficult domain, our base model with only the domainindependent constraints strongly outperforms all three baselines across both metrics. For finance, the CLUTO and USP baselines achieve performance comparable to or slightly better than our base model. Our approach, however, has the advantage of providing a formalism for seamlessly incorporating additional arbitrary domain-specific constraints. When we add such constraints (denoted as model+DSC), we achieve consistently higher performance than all baselines across both datasets and metrics, demonstrating that this approach provides a simple and effective framework for injecting domain knowledge into relation discovery. 7This grouping mechanism yields better results than only grouping by core form. The first two baselines correspond to a setup where the number of sentence clusters K is set to the true number of relation types. This has the effect of lowering precision because each sentence must be assigned a cluster. To mitigate this impact, we experimented with using K + N clusters, with N ranging from 1 to 30. In each case, we then keep only the K largest clusters. 
For the earthquake dataset, increasing N improves performance until some point, after which performance degrades. However, the best FScore corresponding to the optimal number of clusters is 42.2, still far below our model’s 66.0 F-score. For the finance domain, increasing the number of clusters hurts performance. Our results show a large gap in F-score between the sentence and token-level evaluations for both the USP baseline and our model. A qualitative analysis of the results indicates that our model often picks up on regularities that are difficult to distinguish without relation-specific supervision. For earthquake, a location may be annotated as “the Philippine island of Mindoro” while we predict just the word “Mindoro.” For finance, an index change can be annotated as “30 points, or 0.8 percent,” while our model identifies “30 points” and “0.8 percent” as separate relations. In practice, these outputs are all plausible discoveries, and a practitioner desiring specific outputs could impose additional constraints to guide relation discovery toward them. The Impact of Constraints To understand the impact of the declarative constraints, we perform an ablation analysis on the constraint sets. We consider removing the constraints on syntactic patterns (no-syn) and the constraints disallowing relations to overlap (no-sep) from the full domain-independent model.8 We also try a version with hard syntactic constraints (hard-syn), which requires that every extraction match one of the three syntactic patterns specified by the syntactic constraint. Table 3’s bottom section presents the results of this evaluation. The model’s performance degrades when either of the two constraint sets are removed, demonstrating that the constraints are in fact beneficial for relation discovery. Additionally, in the hardsyn case, performance drops dramatically for finance 8Prevalence constraints are always enforced, as otherwise the prior on not instantiating a relation would need to be tuned. 537 Finance Earthquake Sentence-level Token-level Sentence-level Token-level Prec Rec F1 Prec Rec F1 Prec Rec F1 Prec Rec F1 Model 82.1 59.7 69.2 42.2 23.9 30.5 54.2 68.1 60.4 20.2 16.8 18.3 Model+DSC 87.3 81.6 84.4 51.8 30.0 38.0 66.4 65.6 66.0 22.6 23.1 22.8 CLUTO 56.3 92.7 70.0 — — — 19.8 58.0 29.5 — — — MTM 40.4 99.3 57.5 — — — 18.6 74.6 29.7 — — — USP 91.3 66.1 76.7 28.5 32.6 30.4 61.2 43.5 50.8 9.9 32.3 15.1 No-sep 97.8 35.4 52.0 86.1 8.7 15.9 42.2 21.9 28.8 16.1 4.6 7.1 No-syn 83.3 46.1 59.3 20.8 9.9 13.4 53.8 60.9 57.1 14.0 13.8 13.9 Hard-syn 47.7 39.0 42.9 11.6 7.0 8.7 55.0 66.2 60.1 20.1 17.3 18.6 Table 3: Top section: our model, with and without domain-specific constraints (DSC). Middle section: The three baselines. Bottom section: ablation analysis of constraint sets for our model. For all scores, higher is better. while remaining almost unchanged for earthquake. This suggests that formulating constraints as soft inequalities on posterior expectations gives our model the flexibility to accommodate both the underlying signal in the data and the declarative constraints. Comparison against Supervised CRF Our final set of experiments compares a semi-supervised version of our model against a conditional random field (CRF) model. The CRF model was trained using the same features as our model’s argument features. To incorporate training examples in our model, we simply treat annotated relation instances as observed variables. For both the baselines and our model, we experiment with using up to 10 annotated documents. 
At each of those levels of supervision, we average results over 10 randomly drawn training sets. At the sentence level, our model compares very favorably to the supervised CRF. For finance, it takes at least 10 annotated documents (corresponding to roughly 130 annotated relation instances) for the CRF to match the semi-supervised model’s performance. For earthquake, using even 10 annotated documents (about 71 relation instances) is not sufficient to match our model’s performance. At the token level, the supervised CRF baseline is far more competitive. Using a single labeled document (13 relation instances) yields superior performance to either of our model variants for finance, while four labeled documents (29 relation instances) do the same for earthquake. This result is not surprising—our model makes strong domain-independent assumptions about how underlying patterns of regularities in the text connect to relation expression. Without domain-specific supervision such assumptions are necessary, but they can prevent the model from fully utilizing available labeled instances. Moreover, being able to annotate even a single document requires a broad understanding of every relation type germane to the domain, which can be infeasible when there are many unfamiliar, complex domains to process. In light of our strong sentence-level performance, this suggests a possible human-assisted application: use our model to identify promising relation-bearing sentences in a new domain, then have a human annotate those sentences for use by a supervised approach to achieve optimal token-level extraction. 8 Conclusions This paper has presented a constraint-based approach to in-domain relation discovery. We have shown that a generative model augmented with declarative constraints on the model posterior can successfully identify domain-relevant relations and their instantiations. Furthermore, we found that a single set of constraints can be used across divergent domains, and that tailoring constraints specific to a domain can yield further performance benefits. Acknowledgements The authors gratefully acknowledge the support of Defense Advanced Research Projects Agency (DARPA) Machine Reading Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-09-C-0172. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of the DARPA, AFRL, or the US government. Thanks also to Hoifung Poon and the members of the MIT NLP group for their suggestions and comments. 538 References Eugene Agichtein and Luis Gravano. 2000. Snowball: Extracting relations from large plain-text collections. In Proceedings of DL. Michele Banko and Oren Etzioni. 2008. The tradeoffs between open and traditional relation extraction. In Proceedings of ACL. Michele Banko, Michael J. Cafarella, Stephen Soderland, Matt Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In Proceedings of IJCAI. Regina Barzilay and Lillian Lee. 2004. Catching the drift: Probabilistic content models, with applications to generation and summarization. In Proceedings of HLT/NAACL. Kedar Bellare and Andrew McCallum. 2009. Generalized expectation criteria for bootstrapping extractors using record-text alignment. In Proceedings of EMNLP. Jordan Boyd-Graber and David M. Blei. 2008. Syntactic topic models. In Advances in NIPS. Razvan C. Bunescu and Raymond J. Mooney. 2007. Learning to extract relations from the web using minimal supervision. 
In Proceedings of ACL. Richard H. Byrd, Peihuang Lu, Jorge Nocedal, and Ciyou Zhu. 1995. A limited memory algorithm for bound constrained optimization. SIAM Journal on Scientific Computing, 16(5):1190–1208. Ming-Wei Chang, Lev Ratinov, and Dan Roth. 2007. Guiding semi-supervision with constraintdriven learning. In Proceedings of ACL. 2006. Modeling general and specific aspects of documents with a probabilistic topic model. In Advances in NIPS. Jinxiu Chen, Dong-Hong Ji, Chew Lim Tan, and ZhengYu Niu. 2005. Automatic relation extraction with model order selection and discriminative label identification. In Proceedings of IJCNLP. Harr Chen, S.R.K. Branavan, Regina Barzilay, and David R. Karger. 2009. Content modeling using latent permutations. Journal of Artificial Intelligence Research, 36:129–163. Marie-Catherine de Marneffe and Christopher D. Manning. 2008. The stanford typed dependencies representation. In Proceedings of the COLING Workshop on Cross-framework and Cross-domain Parser Evaluation. Jo˜ao Grac¸a, Kuzman Ganchev, and Ben Taskar. 2007. Expectation maximization and posterior constraints. In Advances in NIPS. Takaaki Hasegawa, Satoshi Sekine, and Ralph Grishman. 2004. Discovering relations among named entities from large corpora. In Proceedings of ACL. Richard Johansson and Pierre Nugues. 2007. Extended constituent-to-dependency conversion for english. In Proceedings of NODALIDA. Mark Johnson. 2007. Why doesn’t EM find good HMM POS-taggers? In Proceedings of EMNLP. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of ACL. Mirella Lapata. 2006. Automatic evaluation of information ordering: Kendall’s tau. Computational Linguistics, 32(4):471–484. Dekang Lin and Patrick Pantel. 2001. DIRT - discovery of inference rules from text. In Proceedings of SIGKDD. Gideon S. Mann and Andrew McCallum. 2008. Generalized expectation criteria for semi-supervised learning of conditional random fields. In Proceedings of ACL. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of ACL/IJCNLP. Hoifung Poon and Pedro Domingos. 2009. Unsupervised semantic parsing. In Proceedings of EMNLP. Ellen Riloff. 1996. Automatically generating extraction patterns from untagged texts. In Proceedings of AAAI. Benjamin Rosenfeld and Ronen Feldman. 2007. Clustering for unsupervised relation identification. In Proceedings of CIKM. Dan Roth and Wen-tau Yih. 2004. A linear programming formulation for global inference in natural language tasks. In Proceedings of CoNLL. Yusuke Shinyama and Satoshi Sekine. 2006. Preemptive information extraction using unrestricted relation discovery. In Proceedings of HLT/NAACL. Kiyoshi Sudo, Satoshi Sekine, and Ralph Grishman. 2003. An improved extraction pattern representation model for automatic IE pattern acquisition. In Proceedings of ACL. Roman Yangarber, Ralph Grishman, Pasi Tapanainen, and Silja Huttunen. 2000. Automatic acquisition of domain knowledge for information extraction. In Proceedings of COLING. Limin Yao, Sebastian Riedel, and Andrew McCallum. 2010. Cross-document relation extraction without labelled data. In Proceedings of EMNLP. Alexander Yates and Oren Etzioni. 2009. Unsupervised methods for determining object and relation synonyms on the web. Journal of Artificial Intelligence Research, 34:255–296. Min Zhang, Jian Su, Danmei Wang, Guodong Zhou, and Chew Lim Tan. 2005. 
Discovering relations between named entities from a large raw corpus using tree similarity-based clustering. In Proceedings of IJCNLP. Jun Zhu, Zaiqing Nie, Xiaojing Liu, Bo Zhang, and Ji-Rong Wen. 2009. StatSnowball: a statistical approach to extracting entity relationships. In Proceedings of WWW.
| 2011 | 54 |
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 541–550, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Knowledge-Based Weak Supervision for Information Extraction of Overlapping Relations Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, Daniel S. Weld Computer Science & Engineering University of Washington Seattle, WA 98195, USA {raphaelh,clzhang,xiaoling,lsz,weld}@cs.washington.edu Abstract Information extraction (IE) holds the promise of generating a large-scale knowledge base from the Web’s natural language text. Knowledge-based weak supervision, using structured data to heuristically label a training corpus, works towards this goal by enabling the automated learning of a potentially unbounded number of relation extractors. Recently, researchers have developed multiinstance learning algorithms to combat the noisy training data that can come from heuristic labeling, but their models assume relations are disjoint — for example they cannot extract the pair Founded(Jobs, Apple) and CEO-of(Jobs, Apple). This paper presents a novel approach for multi-instance learning with overlapping relations that combines a sentence-level extraction model with a simple, corpus-level component for aggregating the individual facts. We apply our model to learn extractors for NY Times text using weak supervision from Freebase. Experiments show that the approach runs quickly and yields surprising gains in accuracy, at both the aggregate and sentence level. 1 Introduction Information-extraction (IE), the process of generating relational data from natural-language text, continues to gain attention. Many researchers dream of creating a large repository of high-quality extracted tuples, arguing that such a knowledge base could benefit many important tasks such as question answering and summarization. Most approaches to IE use supervised learning of relation-specific examples, which can achieve high precision and recall. Unfortunately, however, fully supervised methods are limited by the availability of training data and are unlikely to scale to the thousands of relations found on the Web. A more promising approach, often called “weak” or “distant” supervision, creates its own training data by heuristically matching the contents of a database to corresponding text (Craven and Kumlien, 1999). For example, suppose that r(e1, e2) = Founded(Jobs, Apple) is a ground tuple in the database and s =“Steve Jobs founded Apple, Inc.” is a sentence containing synonyms for both e1 = Jobs and e2 = Apple, then s may be a natural language expression of the fact that r(e1, e2) holds and could be a useful training example. While weak supervision works well when the textual corpus is tightly aligned to the database contents (e.g., matching Wikipedia infoboxes to associated articles (Hoffmann et al., 2010)), Riedel et al. (2010) observe that the heuristic leads to noisy data and poor extraction performance when the method is applied more broadly (e.g., matching Freebase records to NY Times articles). To fix this problem they cast weak supervision as a form of multi-instance learning, assuming only that at least one of the sentences containing e1 and e2 are expressing r(e1, e2), and their method yields a substantial improvement in extraction performance. 
However, Riedel et al.’s model (like that of previous systems (Mintz et al., 2009)) assumes that relations do not overlap — there cannot exist two facts r(e1, e2) and q(e1, e2) that are both true for any pair of entities, e1 and e2. Unfortunately, this assumption is often violated; 541 for example both Founded(Jobs, Apple) and CEO-of(Jobs, Apple) are clearly true. Indeed, 18.3% of the weak supervision facts in Freebase that match sentences in the NY Times 2007 corpus have overlapping relations. This paper presents MULTIR, a novel model of weak supervision that makes the following contributions: • MULTIR introduces a probabilistic, graphical model of multi-instance learning which handles overlapping relations. • MULTIR also produces accurate sentence-level predictions, decoding individual sentences as well as making corpus-level extractions. • MULTIR is computationally tractable. Inference reduces to weighted set cover, for which it uses a greedy approximation with worst case running time O(|R| · |S|) where R is the set of possible relations and S is largest set of sentences for any entity pair. In practice, MULTIR runs very quickly. • We present experiments showing that MULTIR outperforms a reimplementation of Riedel et al. (2010)’s approach on both aggregate (corpus as a whole) and sentential extractions. Additional experiments characterize aspects of MULTIR’s performance. 2 Weak Supervision from a Database Given a corpus of text, we seek to extract facts about entities, such as the company Apple or the city Boston. A ground fact (or relation instance), is an expression r(e) where r is a relation name, for example Founded or CEO-of, and e = e1, . . . , en is a list of entities. An entity mention is a contiguous sequence of textual tokens denoting an entity. In this paper we assume that there is an oracle which can identify all entity mentions in a corpus, but the oracle doesn’t normalize or disambiguate these mentions. We use ei ∈E to denote both an entity and its name (i.e., the tokens in its mention). A relation mention is a sequence of text (including one or more entity mentions) which states that some ground fact r(e) is true. For example, “Steve Ballmer, CEO of Microsoft, spoke recently at CES.” contains three entity mentions as well as a relation mention for CEO-of(Steve Ballmer, Microsoft). In this paper we restrict our attention to binary relations. Furthermore, we assume that both entity mentions appear as noun phrases in a single sentence. The task of aggregate extraction takes two inputs, Σ, a set of sentences comprising the corpus, and an extraction model; as output it should produce a set of ground facts, I, such that each fact r(e) ∈I is expressed somewhere in the corpus. Sentential extraction takes the same input and likewise produces I, but in addition it also produces a function, Γ : I →P(Σ), which identifies, for each r(e) ∈I, the set of sentences in Σ that contain a mention describing r(e). In general, the corpuslevel extraction problem is easier, since it need only make aggregate predictions, perhaps using corpuswide statistics. In contrast, sentence-level extraction must justify each extraction with every sentence which expresses the fact. The knowledge-based weakly supervised learning problem takes as input (1) Σ, a training corpus, (2) E, a set of entities mentioned in that corpus, (3) R, a set of relation names, and (4), ∆, a set of ground facts of relations in R. As output the learner produces an extraction model. 
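Before turning to the model, here is a minimal sketch of how the weakly supervised training set can be assembled from a corpus and a database, assuming entity mentions have already been matched to database entities. The names are illustrative rather than the authors' implementation.

```python
from collections import defaultdict

def build_weak_supervision(sentences, facts, relations):
    """sentences: iterable of (sentence_id, set_of_matched_entity_ids).
    facts:     set of (r, e1, e2) ground facts from the database (Delta).
    relations: ordered list of relation names in R.
    Returns one training example (x_i, y_i) per entity pair, where x_i is the
    list of sentences mentioning both entities and y_i = relVector(e1, e2)."""
    by_pair = defaultdict(list)
    for sid, entities in sentences:
        for e1 in entities:
            for e2 in entities:
                if e1 != e2:
                    by_pair[(e1, e2)].append(sid)

    examples = []
    for (e1, e2), sids in by_pair.items():
        # relVector: the j-th bit is 1 iff r_j(e1, e2) is in the database;
        # pairs with no matching fact get an all-zero vector.
        y = [1 if (r, e1, e2) in facts else 0 for r in relations]
        examples.append((sids, y))
    return examples
```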
3 Modeling Overlapping Relations We define an undirected graphical model that allows joint reasoning about aggregate (corpus-level) and sentence-level extraction decisions. Figure 1(a) shows the model in plate form. 3.1 Random Variables There exists a connected component for each pair of entities e = (e1, e2) ∈E × E that models all of the extraction decisions for this pair. There is one Boolean output variable Y r for each relation name r ∈R, which represents whether the ground fact r(e) is true. Including this set of binary random variables enables our model to extract overlapping relations. Let S(e1,e2) ⊂Σ be the set of sentences which contain mentions of both of the entities. For each sentence xi ∈S(e1,e2) there exists a latent variable Zi which ranges over the relation names r ∈R and, 542 E × E 𝑌 R S 𝑍𝑖 (a) Steve Jobs was founder of Apple. Steve Jobs, Steve Wozniak and Ronald Wayne founded Apple. Steve Jobs is CEO of Apple. founder 𝑍1 founder none 0 𝑌bornIn 1 0 0 𝑍2 𝑍3 ... ... ... 𝑌founderOf 𝑌locatedIn 𝑌capitalOf (b) Figure 1: (a) Network structure depicted as plate model and (b) an example network instantiation for the pair of entities Steve Jobs, Apple. importantly, also the distinct value none. Zi should be assigned a value r ∈R only when xi expresses the ground fact r(e), thereby modeling sentencelevel extraction. Figure 1(b) shows an example instantiation of the model with four relation names and three sentences. 3.2 A Joint, Conditional Extraction Model We use a conditional probability model that defines a joint distribution over all of the extraction random variables defined above. The model is undirected and includes repeated factors for making sentence level predictions as well as globals factors for aggregating these choices. For each entity pair e = (e1, e2), define x to be a vector concatenating the individual sentences xi ∈S(e1,e2), Y to be vector of binary Y r random variables, one for each r ∈R, and Z to be the vector of Zi variables, one for each sentence xi. Our conditional extraction model is defined as follows: p(Y = y, Z = z|x; θ) def= 1 Zx Y r Φjoin(yr, z) Y i Φextract(zi, xi) where the parameter vector θ is used, below, to define the factor Φextract. The factors Φjoin are deterministic OR operators Φjoin(yr, z) def= ( 1 if yr = true ∧∃i : zi = r 0 otherwise which are included to ensure that the ground fact r(e) is predicted at the aggregate level for the assignment Y r = yr only if at least one of the sentence level assignments Zi = zi signals a mention of r(e). The extraction factors Φextract are given by Φextract(zi, xi) def= exp X j θjφj(zi, xi) where the features φj are sensitive to the relation name assigned to extraction variable zi, if any, and cues from the sentence xi. We will make use of the Mintz et al. (2009) sentence-level features in the expeiments, as described in Section 7. 3.3 Discussion This model was designed to provide a joint approach where extraction decisions are almost entirely driven by sentence-level reasoning. However, defining the Y r random variables and tying them to the sentencelevel variables, Zi, provides a direct method for modeling weak supervision. We can simply train the model so that the Y variables match the facts in the database, treating the Zi as hidden variables that can take any value, as long as they produce the correct aggregate predictions. This approach is related to the multi-instance learning approach of Riedel et al. (2010), in that both models include sentence-level and aggregate random variables. 
However, their sentence level variables are binary and they only have a single aggregate variable that takes values r ∈R ∪{none}, thereby ruling out overlapping relations. Additionally, their aggregate decisions make use of Mintzstyle aggregate features (Mintz et al., 2009), that collect evidence from multiple sentences, while we use 543 Inputs: (1) Σ, a set of sentences, (2) E, a set of entities mentioned in the sentences, (3) R, a set of relation names, and (4) ∆, a database of atomic facts of the form r(e1, e2) for r ∈R and ei ∈E. Definitions: We define the training set {(xi, yi)|i = 1 . . . n}, where i is an index corresponding to a particular entity pair (ej, ek) in ∆, xi contains all of the sentences in Σ with mentions of this pair, and yi = relVector(ej, ek). Computation: initialize parameter vector Θ ←0 for t = 1...T do for i = 1...n do (y′, z′) ←arg maxy,z p(y, z|xi; θ) if y′ ̸= yi then z∗←arg maxz p(z|xi, yi; θ) Θ ←Θ + φ(xi, z∗) −φ(xi, z′) end if end for end for Return Θ Figure 2: The MULTIR Learning Algorithm only the deterministic OR nodes. Perhaps surprising, we are still able to improve performance at both the sentential and aggregate extraction tasks. 4 Learning We now present a multi-instance learning algorithm for our weak-supervision model that treats the sentence-level extraction random variables Zi as latent, and uses facts from a database (e.g., Freebase) as supervision for the aggregate-level variables Y r. As input we have (1) Σ, a set of sentences, (2) E, a set of entities mentioned in the sentences, (3) R, a set of relation names, and (4) ∆, a database of atomic facts of the form r(e1, e2) for r ∈R and ei ∈E. Since we are using weak learning, the Y r variables in Y are not directly observed, but can be approximated from the database ∆. We use a procedure, relVector(e1, e2) to return a bit vector whose jth bit is one if rj(e1, e2) ∈∆. The vector does not have a bit for the special none relation; if there is no relation between the two entities, all bits are zero. Finally, we can now define the training set to be pairs {(xi, yi)|i = 1 . . . n}, where i is an index corresponding to a particular entity pair (ej, ek), xi contains all of the sentences with mentions of this pair, and yi = relVector(ej, ek). Given this form of supervision, we would like to find the setting for θ with the highest likelihood: O(θ) = Y i p(yi|xi; θ) = Y i X z p(yi, z|xi; θ) However, this objective would be difficult to optimize exactly, and algorithms for doing so would be unlikely to scale to data sets of the size we consider. Instead, we make two approximations, described below, leading to a Perceptron-style additive (Collins, 2002) parameter update scheme which has been modified to reason about hidden variables, similar in style to the approaches of (Liang et al., 2006; Zettlemoyer and Collins, 2007), but adapted for our specific model. This approximate algorithm is computationally efficient and, as we will see, works well in practice. Our first modification is to do online learning instead of optimizing the full objective. Define the feature sums φ(x, z) = P j φ(xj, zj) which range over the sentences, as indexed by j. 
Now, we can define an update based on the gradient of the local log likelihood for example i: ∂log Oi(θ) ∂θj = Ep(z|xi,yi;θ)[φj(xi, z)] −Ep(y,z|xi;θ)[φj(xi, z)] where the deterministic OR Φjoin factors ensure that the first expectation assigns positive probability only to assignments that produce the labeled facts yi but that the second considers all valid sets of extractions. Of course, these expectations themselves, especially the second one, would be difficult to compute exactly. Our second modification is to do a Viterbi approximation, by replacing the expectations with maximizations. Specifically, we compute the most likely sentence extractions for the label facts arg maxz p(z|xi, yi; θ) and the most likely extraction for the input, without regard to the labels, arg maxy,z p(y, z|xi; θ). We then compute the features for these assignments and do a simple additive update. The final algorithm is detailed in Figure 2. 544 5 Inference To support learning, as described above, we need to compute assignments arg maxz p(z|x, y; θ) and arg maxy,z p(y, z|x; θ). In this section, we describe algorithms for both cases that use the deterministic OR nodes to simplify the required computations. Predicting the most likely joint extraction arg maxy,z p(y, z|x; θ) can be done efficiently given the structure of our model. In particular, we note that the factors Φjoin represent deterministic dependencies between Z and Y, which when satisfied do not affect the probability of the solution. It is thus sufficient to independently compute an assignment for each sentence-level extraction variable Zi, ignoring the deterministic dependencies. The optimal setting for the aggregate variables Y is then simply the assignment that is consistent with these extractions. The time complexity is O(|R| · |S|). Predicting sentence level extractions given weak supervision facts, arg maxz p(z|x, y; θ), is more challenging. We start by computing extraction scores Φextract(xi, zi) for each possible extraction assignment Zi = zi at each sentence xi ∈S, and storing the values in a dynamic programming table. Next, we must find the most likely assignment z that respects our output variables y. It turns out that this problem is a variant of the weighted, edge-cover problem, for which there exist polynomial time optimal solutions. Let G = (E, V = VS ∪Vy) be a complete weighted bipartite graph with one node vS i ∈VS for each sentence xi ∈S and one node vy r ∈Vy for each relation r ∈R where yr = 1. The edge weights are given by c((vS i , vy r )) def= Φextract(xi, zi). Our goal is to select a subset of the edges which maximizes the sum of their weights, subject to each node vS i ∈VS being incident to exactly one edge, and each node vy r ∈Vy being incident to at least one edge. Exact Solution An exact solution can be obtained by first computing the maximum weighted bipartite matching, and adding edges to nodes which are not incident to an edge. This can be computed in time O(|V|(|E| + |V| log |V|)), which we can rewrite as O((|R| + |S|)(|R||S| + (|R| + |S|) log(|R| + |S|))). Approximate Solution An approximate solution can be obtained by iterating over the nodes in Vy, 𝑣bornIn y 𝑣locatedIn y 𝑣1S 𝑣2S 𝑣3S 𝑝(𝑍1 = bornIn|𝐱) 𝑝(𝑍3 = locatedIn|𝐱) 𝑝(𝑍1 … Figure 3: Inference of arg maxz p(Z = z|x, y) requires solving a weighted, edge-cover problem. and each time adding the highest weight incident edge whose addition doesn’t violate a constraint. The running time is O(|R||S|). 
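One possible reading of this greedy procedure is sketched below; the iteration order over relations and the handling of leftover sentences are assumptions made for illustration, not a specification of the authors' code.

```python
def greedy_edge_cover(scores, active_relations):
    """scores[i][r]: extraction factor score for labeling sentence i with r,
    where r ranges over relation names plus 'none'.
    active_relations: relations r with Y_r = 1 for this entity pair.
    Returns z, one label per sentence, covering every active relation."""
    if not scores:
        return []
    z = [None] * len(scores)
    # Cover each active relation with its best currently-unassigned sentence.
    for r in sorted(active_relations,
                    key=lambda r: -max(s[r] for s in scores)):
        free = [i for i, zi in enumerate(z) if zi is None]
        if free:
            z[max(free, key=lambda i: scores[i][r])] = r
    # Leftover sentences take whichever allowed label scores highest.
    allowed = list(active_relations) + ['none']
    for i, zi in enumerate(z):
        if zi is None:
            z[i] = max(allowed, key=lambda r: scores[i][r])
    return z
```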
This greedy search guarantees each fact is extracted at least once and allows any additional extractions that increase the overall probability of the assignment. Given the computational advantage, we use it in all of the experimental evaluations. 6 Experimental Setup We follow the approach of Riedel et al. (2010) for generating weak supervision data, computing features, and evaluating aggregate extraction. We also introduce new metrics for measuring sentential extraction performance, both relation-independent and relation-specific. 6.1 Data Generation We used the same data sets as Riedel et al. (2010) for weak supervision. The data was first tagged with the Stanford NER system (Finkel et al., 2005) and then entity mentions were found by collecting each continuous phrase where words were tagged identically (i.e., as a person, location, or organization). Finally, these phrases were matched to the names of Freebase entities. Given the set of matches, define Σ to be set of NY Times sentences with two matched phrases, E to be the set of Freebase entities which were mentioned in one or more sentences, ∆to be the set of Freebase facts whose arguments, e1 and e2 were mentioned in a sentence in Σ, and R to be set of relations names used in the facts of ∆. These sets define the weak supervision data. 6.2 Features and Initialization We use the set of sentence-level features described by Riedel et al. (2010), which were originally de545 veloped by Mintz et al. (2009). These include indicators for various lexical, part of speech, named entity, and dependency tree path properties of entity mentions in specific sentences, as computed with the Malt dependency parser (Nivre and Nilsson, 2004) and OpenNLP POS tagger1. However, unlike the previous work, we did not make use of any features that explicitly aggregate these properties across multiple mention instances. The MULTIR algorithm has a single parameter T, the number of training iterations, that must be specified manually. We used T = 50 iterations, which performed best in development experiments. 6.3 Evaluation Metrics Evaluation is challenging, since only a small percentage (approximately 3%) of sentences match facts in Freebase, and the number of matches is highly unbalanced across relations, as we will see in more detail later. We use the following metrics. Aggregate Extraction Let ∆e be the set of extracted relations for any of the systems; we compute aggregate precision and recall by comparing ∆e with ∆. This metric is easily computed but underestimates extraction accuracy because Freebase is incomplete and some true relations in ∆e will be marked wrong. Sentential Extraction Let Se be the sentences where some system extracted a relation and SF be the sentences that match the arguments of a fact in ∆. We manually compute sentential extraction accuracy by sampling a set of 1000 sentences from Se ∪SF and manually labeling the correct extraction decision, either a relation r ∈R or none. We then report precision and recall for each system on this set of sampled sentences. These results provide a good approximation to the true precision but can overestimate the actual recall, since we did not manually check the much larger set of sentences where no approach predicted extractions. 6.4 Precision / Recall Curves To compute precision / recall curves for the tasks, we ranked the MULTIR extractions as follows. 
For sentence-level evaluations, we ordered according to 1http://opennlp.sourceforge.net/ Recall Precision 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.0 0.2 0.4 0.6 0.8 1.0 SOLOR Riedel et al., 2010 MULTIR Figure 4: Aggregate extraction precision / recall curves for Riedel et al. (2010), a reimplementation of that approach (SOLOR), and our algorithm (MULTIR). the extraction factor score Φextract(zi, xi). For aggregate comparisons, we set the score for an extraction Y r = true to be the max of the extraction factor scores for the sentences where r was extracted. 7 Experiments To evaluate our algorithm, we first compare it to an existing approach for using multi-instance learning with weak supervision (Riedel et al., 2010), using the same data and features. We report both aggregate extraction and sentential extraction results. We then investigate relation-specific performance of our system. Finally, we report running time comparisons. 7.1 Aggregate Extraction Figure 4 shows approximate precision / recall curves for three systems computed with aggregate metrics (Section 6.3) that test how closely the extractions match the facts in Freebase. The systems include the original results reported by Riedel et al. (2010) as well as our new model (MULTIR). We also compare with SOLOR, a reimplementation of their algorithm, which we built in Factorie (McCallum et al., 2009), and will use later to evaluate sentential extraction. MULTIR achieves competitive or higher precision over all ranges of recall, with the exception of the very low recall range of approximately 01%. It also significantly extends the highest recall achieved, from 20% to 25%, with little loss in precision. To investigate the low precision in the 0-1% recall range, we manually checked the ten highest con546 Recall Precision 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.0 0.2 0.4 0.6 0.8 1.0 SOLOR MULTIR Figure 5: Sentential extraction precision / recall curves for MULTIR and SOLOR. fidence extractions produced by MULTIR that were marked wrong. We found that all ten were true facts that were simply missing from Freebase. A manual evaluation, as we perform next for sentential extraction, would remove this dip. 7.2 Sentential Extraction Although their model includes variables to model sentential extraction, Riedel et al. (2010) did not report sentence level performance. To generate the precision / recall curve we used the joint model assignment score for each of the sentences that contributed to the aggregate extraction decision. Figure 4 shows approximate precision / recall curves for MULTIR and SOLOR computed against manually generated sentence labels, as defined in Section 6.3. MULTIR achieves significantly higher recall with a consistently high level of precision. At the highest recall point, MULTIR reaches 72.4% precision and 51.9% recall, for an F1 score of 60.5%. 7.3 Relation-Specific Performance Since the data contains an unbalanced number of instances of each relation, we also report precision and recall for each of the ten most frequent relations. Let SM r be the sentences where MULTIR extracted an instance of relation r ∈R, and let SF r be the sentences that match the arguments of a fact about relation r in ∆. For each r, we sample 100 sentences from both SM r and SF r and manually check accuracy. To estimate precision ˜Pr we compute the ratio of true relation mentions in SM r , and to estimate recall ˜Rr we take the ratio of true relation mentions in SF r which are returned by our system. 
Table 1 presents this approximate precision and recall for MULTIR on each of the relations, along with statistics we computed to measure the quality of the weak supervision. Precision is high for the majority of relations but recall is consistently lower. We also see that the Freebase matches are highly skewed in quantity and can be low quality for some relations, with very few of them actually corresponding to true extractions. The approach generally performs best on the relations with a sufficiently large number of true matches, in many cases even achieving precision that outperforms the accuracy of the heuristic matches, at reasonable recall levels. 7.4 Overlapping Relations Table 1 also highlights some of the effects of learning with overlapping relations. For example, in the data, almost all of the matches for the administrative divisions relation overlap with the contains relation, because they both model relationships for a pair of locations. Since, in general, sentences are much more likely to describe a contains relation, this overlap leads to a situation were almost none of the administrate division matches are true ones, and we cannot accurately learn an extractor. However, we can still learn to accurately extract the contains relation, despite the distracting matches. Similarly, the place of birth and place of death relations tend to overlap, since it is often the case that people are born and die in the same city. In both cases, the precision outperforms the labeling accuracy and the recall is relatively high. To measure the impact of modeling overlapping relations, we also evaluated a simple, restricted baseline. Instead of labeling each entity pair with the set of all true Freebase facts, we created a dataset where each true relation was used to create a different training example. Training MULTIR on this data simulates effects of conflicting supervision that can come from not modeling overlaps. On average across relations, precision increases 12 points but recall drops 26 points, for an overall reduction in F1 score from 60.5% to 40.3%. 7.5 Running Time One final advantage of our model is the modest running time. Our implementation of the 547 Relation Freebase Matches MULTIR #sents % true ˜P ˜R /business/person/company 302 89.0 100.0 25.8 /people/person/place lived 450 60.0 80.0 6.7 /location/location/contains 2793 51.0 100.0 56.0 /business/company/founders 95 48.4 71.4 10.9 /people/person/nationality 723 41.0 85.7 15.0 /location/neighborhood/neighborhood of 68 39.7 100.0 11.1 /people/person/children 30 80.0 100.0 8.3 /people/deceased person/place of death 68 22.1 100.0 20.0 /people/person/place of birth 162 12.0 100.0 33.0 /location/country/administrative divisions 424 0.2 N/A 0.0 Table 1: Estimated precision and recall by relation, as well as the number of matched sentences (#sents) and accuracy (% true) of matches between sentences and facts in Freebase. Riedel et al. (2010) approach required approximately 6 hours to train on NY Times 05-06 and 4 hours to test on the NY Times 07, each without preprocessing. Although they do sampling for inference, the global aggregation variables require reasoning about an exponentially large (in the number of sentences) sample space. In contrast, our approach required approximately one minute to train and less than one second to test, on the same data. This advantage comes from the decomposition that is possible with the deterministic OR aggregation variables. 
For test, we simply consider each sentence in isolation and during training our approximation to the weighted assignment problem is linear in the number of sentences. 7.6 Discussion The sentential extraction results demonstrates the advantages of learning a model that is primarily driven by sentence-level features. Although previous approaches have used more sophisticated features for aggregating the evidence from individual sentences, we demonstrate that aggregating strong sentence-level evidence with a simple deterministic OR that models overlapping relations is more effective, and also enables training of a sentence extractor that runs with no aggregate information. While the Riedel et al. approach does include a model of which sentences express relations, it makes significant use of aggregate features that are primarily designed to do entity-level relation predictions and has a less detailed model of extractions at the individual sentence level. Perhaps surprisingly, our model is able to do better at both the sentential and aggregate levels. 8 Related Work Supervised-learning approaches to IE were introduced in (Soderland et al., 1995) and are too numerous to summarize here. While they offer high precision and recall, these methods are unlikely to scale to the thousands of relations found in text on the Web. Open IE systems, which perform selfsupervised learning of relation-independent extractors (e.g., Preemptive IE (Shinyama and Sekine, 2006), TEXTRUNNER (Banko et al., 2007; Banko and Etzioni, 2008) and WOE (Wu and Weld, 2010)) can scale to millions of documents, but don’t output canonicalized relations. 8.1 Weak Supervision Weak supervision (also known as distant- or self supervision) refers to a broad class of methods, but we focus on the increasingly-popular idea of using a store of structured data to heuristicaly label a textual corpus. Craven and Kumlien (1999) introduced the idea by matching the Yeast Protein Database (YPD) to the abstracts of papers in PubMed and training a naive-Bayes extractor. Bellare and McCallum (2007) used a database of BibTex records to train a CRF extractor on 12 bibliographic relations. The KYLIN system aplied weak supervision to learn relations from Wikipedia, treating infoboxes as the associated database (Wu and Weld, 2007); Wu et al. (2008) extended the system to use smoothing over an automatically generated infobox taxon548 omy. Mintz et al. (2009) used Freebase facts to train 100 relational extractors on Wikipedia. Hoffmann et al. (2010) describe a system similar to KYLIN, but which dynamically generates lexicons in order to handle sparse data, learning over 5000 Infobox relations with an average F1 score of 61%. Yao et al. (2010) perform weak supervision, while using selectional preference constraints to a jointly reason about entity types. The NELL system (Carlson et al., 2010) can also be viewed as performing weak supervision. Its initial knowledge consists of a selectional preference constraint and 20 ground fact seeds. NELL then matches entity pairs from the seeds to a Web corpus, but instead of learning a probabilistic model, it bootstraps a set of extraction patterns using semisupervised methods for multitask learning. 8.2 Multi-Instance Learning Multi-instance learning was introduced in order to combat the problem of ambiguously-labeled training data when predicting the activity of different drugs (Dietterich et al., 1997). 
Bunescu and Mooney (2007) connect weak supervision with multi-instance learning and extend their relational extraction kernel to this context. Riedel et al. (2010), combine weak supervision and multi-instance learning in a more sophisticated manner, training a graphical model, which assumes only that at least one of the matches between the arguments of a Freebase fact and sentences in the corpus is a true relational mention. Our model may be seen as an extension of theirs, since both models include sentence-level and aggregate random variables. However, Riedel et al. have only a single aggregate variable that takes values r ∈R ∪{none}, thereby ruling out overlapping relations. We have discussed the comparison in more detail throughout the paper, including in the model formulation section and experiments. 9 Conclusion We argue that weak supervision is promising method for scaling information extraction to the level where it can handle the myriad, different relations on the Web. By using the contents of a database to heuristically label a training corpus, we may be able to automatically learn a nearly unbounded number of relational extractors. Since the processs of matching database tuples to sentences is inherently heuristic, researchers have proposed multi-instance learning algorithms as a means for coping with the resulting noisy data. Unfortunately, previous approaches assume that all relations are disjoint — for example they cannot extract the pair Founded(Jobs, Apple) and CEO-of(Jobs, Apple), because two relations are not allowed to have the same arguments. This paper presents a novel approach for multiinstance learning with overlapping relations that combines a sentence-level extraction model with a simple, corpus-level component for aggregating the individual facts. We apply our model to learn extractors for NY Times text using weak supervision from Freebase. Experiments show improvements for both sentential and aggregate (corpus level) extraction, and demonstrate that the approach is computationally efficient. Our early progress suggests many interesting directions. By joining two or more Freebase tables, we can generate many more matches and learn more relations. We also wish to refine our model in order to improve precision. For example, we would like to add type reasoning about entities and selectional preference constraints for relations. Finally, we are also interested in applying the overall learning approaches to other tasks that could be modeled with weak supervision, such as coreference and named entity classification. The source code of our system, its output, and all data annotations are available at http://cs.uw.edu/homes/raphaelh/mr. Acknowledgments We thank Sebastian Riedel and Limin Yao for sharing their data and providing valuable advice. This material is based upon work supported by a WRF / TJ Cable Professorship, a gift from Google and by the Air Force Research Laboratory (AFRL) under prime contract no. FA8750-09-C-0181. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the view of the Air Force Research Laboratory (AFRL). 549 References Michele Banko and Oren Etzioni. 2008. The tradeoffs between open and traditional relation extraction. In Proceedings of 46th Annual Meeting of the Association for Computational Linguistics (ACL-08), pages 28–36. Michele Banko, Michael J. Cafarella, Stephen Soderland, Matthew Broadhead, and Oren Etzioni. 2007. 
Open information extraction from the web. In Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI-07), pages 2670–2676. Kedar Bellare and Andrew McCallum. 2007. Learning extractors from unlabeled text using relevant databases. In Sixth International Workshop on Information Integration on the Web. Razvan Bunescu and Raymond Mooney. 2007. Learning to extract relations from the web using minimal supervision. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL07). Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R. Hruschka Jr., and Tom M. Mitchell. 2010. Toward an architecture for neverending language learning. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-10). Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP-2002). Mark Craven and Johan Kumlien. 1999. Constructing biological knowledge bases by extracting information from text sources. In Proceedings of the Seventh International Conference on Intelligent Systems for Molecular Biology, pages 77–86. Thomas G. Dietterich, Richard H. Lathrop, and Tom´as Lozano-P´erez. 1997. Solving the multiple instance problem with axis-parallel rectangles. Artificial Intelligence, 89:31–71, January. Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by gibbs sampling. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL05), pages 363–370. Raphael Hoffmann, Congle Zhang, and Daniel S. Weld. 2010. Learning 5000 relational extractors. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL-10), pages 286–295. Percy Liang, A. Bouchard-Cˆot´e, Dan Klein, and Ben Taskar. 2006. An end-to-end discriminative approach to machine translation. In International Conference on Computational Linguistics and Association for Computational Linguistics (COLING/ACL). Andrew McCallum, Karl Schultz, and Sameer Singh. 2009. Factorie: Probabilistic programming via imperatively defined factor graphs. In Neural Information Processing Systems Conference (NIPS). Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics (ACL-2009), pages 1003–1011. Joakim Nivre and Jens Nilsson. 2004. Memory-based dependency parsing. In Proceedings of the Conference on Natural Language Learning (CoNLL-04), pages 49–56. Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Proceedings of the Sixteenth European Conference on Machine Learning (ECML-2010), pages 148–163. Yusuke Shinyama and Satoshi Sekine. 2006. Preemptive information extraction using unrestricted relation discovery. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computation Linguistics (HLTNAACL-06). Stephen Soderland, David Fisher, Jonathan Aseltine, and Wendy G. Lehnert. 1995. Crystal: Inducing a conceptual dictionary. In Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence (IJCAI-1995), pages 1314–1321. Fei Wu and Daniel S. Weld. 2007. 
Autonomously semantifying wikipedia. In Proceedings of the International Conference on Information and Knowledge Management (CIKM-2007), pages 41–50. Fei Wu and Daniel S. Weld. 2008. Automatically refining the wikipedia infobox ontology. In Proceedings of the 17th International Conference on World Wide Web (WWW-2008), pages 635–644. Fei Wu and Daniel S. Weld. 2010. Open information extraction using wikipedia. In The Annual Meeting of the Association for Computational Linguistics (ACL2010), pages 118–127. Limin Yao, Sebastian Riedel, and Andrew McCallum. 2010. Collective cross-document relation extraction without labelled data. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing (EMNLP-2010), pages 1013–1023. Luke Zettlemoyer and Michael Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL-2007). 550
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 551–560, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Exploiting Syntactico-Semantic Structures for Relation Extraction Yee Seng Chan and Dan Roth University of Illinois at Urbana-Champaign {chanys,danr}@illinois.edu Abstract In this paper, we observe that there exists a second dimension to the relation extraction (RE) problem that is orthogonal to the relation type dimension. We show that most of these second dimensional structures are relatively constrained and not difficult to identify. We propose a novel algorithmic approach to RE that starts by first identifying these structures and then, within these, identifying the semantic type of the relation. In the real RE problem where relation arguments need to be identified, exploiting these structures also allows reducing pipelined propagated errors. We show that this RE framework provides significant improvement in RE performance. 1 Introduction Relation extraction (RE) has been defined as the task of identifying a given set of semantic binary relations in text. For instance, given the span of text “... the Seattle zoo . . . ”, one would like to extract the relation that “the Seattle zoo” is located-at “Seattle”. RE has been frequently studied over the last few years as a supervised learning task, learning from spans of text that are annotated with a set of semantic relations of interest. However, most approaches to RE have assumed that the relations’ arguments are given as input (Chan and Roth, 2010; Jiang and Zhai, 2007; Jiang, 2009; Zhou et al., 2005), and therefore offer only a partial solution to the problem. Conceptually, this is a rather simple approach as all spans of texts are treated uniformly and are being mapped to one of several relation types of interest. However, these approaches to RE require a large amount of manually annotated training data to achieve good performance, making it difficult to expand the set of target relations. Moreover, as we show, these approaches become brittle when the relations’ arguments are not given but rather need to be identified in the data too. In this paper we build on the observation that there exists a second dimension to the relation extraction problem that is orthogonal to the relation type dimension: all relation types are expressed in one of several constrained syntactico-semantic structures. As we show, identifying where the text span is on the syntactico-semantic structure dimension first, can be leveraged in the RE process to yield improved performance. Moreover, working in the second dimension provides robustness to the real RE problem, that of identifying arguments along with the relations between them. For example, in “the Seattle zoo”, the entity mention “Seattle” modifies the noun “zoo”. Thus, the two mentions “Seattle” and “the Seattle zoo”, are involved in what we later call a premodifier relation, one of several syntactico-semantic structures we identify in Section 3. We highlight that all relation types can be expressed in one of several syntactico-semantic structures – Premodifiers, Possessive, Preposition, Formulaic and Verbal. As it turns out, most of these structures are relatively constrained and are not difficult to identify. This suggests a novel algorithmic approach to RE that starts by first identifying these structures and then, within these, identifying the semantic type of the relation. 
Not only does this approach provide significantly improved RE perfor551 mance, it carries with it two additional advantages. First, leveraging the syntactico-semantic structure is especially beneficial in the presence of small amounts of data. Second, and more important, is the fact that exploiting the syntactico-semantic dimension provides several new options for dealing with the full RE problem – incorporating the argument identification into the problem. We explore one of these possibilities, making use of the constrained structures as a way to aid in the identification of the relations’ arguments. We show that this already provides significant gain, and discuss other possibilities that can be explored. The contributions of this paper are summarized below: • We highlight that all relation types are expressed as one of several syntactico-semantic structures and show that most of these are relatively constrained and not difficult to identify. Consequently, working first in this structural dimension can be leveraged in the RE process to improve performance. • We show that when one does not have a large number of training examples, exploiting the syntactico-semantic structures is crucial for RE performance. • We show how to leverage these constrained structures to improve RE when the relations’ arguments are not given. The constrained structures allow us to jointly entertain argument candidates and relations built with them as arguments. Specifically, we show that considering argument candidates which otherwise would have been discarded (provided they exist in syntactico-semantic structures), we reduce error propagation along a standard pipeline RE architecture, and that this joint inference process leads to improved RE performance. In the next section, we describe our relation extraction framework that leverages the syntacticosemantic structures. We then present these structures in Section 3. We describe our mention entity typing system in Section 4 and features for the RE system in Section 5. We present our RE experiments in Section 6 and perform analysis in Section 7, before concluding in Section 8. S = {premodifier, possessive, preposition, formulaic} gold mentions in training data Mtrain Dg = {(mi, mj) ∈Mtrain × Mtrain | mi in same sentence as mj ∧i ̸= j ∧i < j} REbase = RE classifier trained on Dg Ds = ∅ for each (mi, mj) ∈Dg do p = structure inference on (mi, mj) using patterns if p ∈S ∨(mi, mj) was annotated with a S structure Ds = Ds ∪(mi, mj) done REs = RE classifier trained on Ds Output: REbase and REs Figure 1: Training a regular baseline RE classifier REbase and a RE classifier leveraging syntacticosemantic structures REs. 2 Relation Extraction Framework In Figure 1, we show the algorithm for training a typical baseline RE classifier (REbase), and for training a RE classifier that leverages the syntacticosemantic structures (REs). During evaluation and when the gold mentions are already annotated, we apply REs as follows. When given a test example mention pair (xi,xj), we perform structure inference on it using the patterns described in Section 3. If (xi,xj) is identified as having any of the four syntactico-semantic structures S, apply REs to predict the relation label, else apply REbase. Next, we show in Figure 2 our joint inference algorithmic framework that leverages the syntacticosemantic structures for RE, when mentions need to be predicted. 
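Before moving to the joint inference setting of Figure 2, the evaluation-time routing just described can be sketched in a few lines; this is only an illustrative fragment, and the helper infer_structure as well as the classifier objects re_s and re_base are hypothetical stand-ins rather than the authors' actual code.

```python
# Illustrative sketch of routing a gold-mention pair between RE_s and RE_base.
# infer_structure, re_s and re_base are assumed, hypothetical components.

STRUCTURES = {"premodifier", "possessive", "preposition", "formulaic"}

def predict_relation(m_i, m_j, infer_structure, re_s, re_base):
    """If the pattern-based inference detects one of the four constrained
    structures, use the structure-specific classifier RE_s; otherwise fall
    back to the baseline classifier RE_base."""
    structure = infer_structure(m_i, m_j)  # returns a structure label or None
    if structure in STRUCTURES:
        return re_s.predict(m_i, m_j)
    return re_base.predict(m_i, m_j)
```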
Since the structures are fairly constrained, we can use them to consider mention candidates that are originally predicted as non mentions. As shown in Figure 2, we conservatively include such mentions when forming mention pairs, provided their null labels are predicted with a low probability t1. 1In this work, we arbitrary set t=0.2. After the experiments, and in our own analysis, we observe that t=0.25 achieves better performance. Besides using the probability of the 1-best prediction, one could also for instance, use the probability difference between the first and second best predictions. However, selecting an optimal t value is not the main focus of this work. 552 S = {premodifier, possessive, preposition, formulaic} candidate mentions Mcand Let Lm = argmax y PMET (y|m, θ), m ∈Mcand selected mentions Msel = {m ∈Mcand | Lm ̸= null ∨PMET (null|m, θ) ≤t} QhasNull = {(mi, mj) ∈Msel × Msel | mi in same sentence as mj ∧i ̸= j ∧i < j ∧ (Lmi ̸= null ∨Lmj ̸= null)} Let pool of relation predictions R = ∅ for each (mi, mj) ∈QhasNull do p = structure inference on (mi, mj) using patterns if p ∈S r = relation prediction for (mi, mj) using REs R = R ∪r else if Lmi ̸= null ∧Lmj ̸= null r = relation prediction for (mi, mj) using REbase R = R ∪r done Output: R Figure 2: RE using predicted mentions and patterns. Abbreviations: Lm: predicted entity label for mention m using the mention entity typing (MET) classifier described in Section 4; PMET : prediction probability according to the MET classifier; t: used for thresholding. There is a large body of work in using patterns to extract relations (Fundel et al., 2007; Greenwood and Stevenson, 2006; Zhu et al., 2009). However, these works operate along the first dimension, that of using patterns to mine for relation type examples. In contrast, in our RE framework, we apply patterns to identify the syntactico-semantic structure dimension first, and leverage this in the RE process. In (Roth and Yih, 2007), the authors used entity types to constrain the (first dimensional) relation types allowed among them. In our work, although a few of our patterns involve semantic type comparison, most of the patterns are syntactic in nature. In this work, we performed RE evaluation on the NIST Automatic Content Extraction (ACE) corpus. Most prior RE evaluation on ACE data assumed that mentions are already pre-annotated and given as input (Chan and Roth, 2010; Jiang and Zhai, 2007; Zhou et al., 2005). An exception is the work of (Kambhatla, 2004), where the author evaluated on the ACE-2003 corpus. In that work, the author did not address the pipelined errors propagated from the mention identification process. 3 Syntactico-Semantic Structures In this paper, we performed RE on the ACE-2004 corpus. In ACE-2004 when the annotators tagged a pair of mentions with a relation, they also specified the type of syntactico-semantic structure2. ACE2004 identified five types of structures: premodifier, possessive, preposition, formulaic, and verbal. We are unaware of any previous computational approaches that recognize these structures automatically in text, as we do, and use it in the context of RE (or any other problem). In (Qian et al., 2008), the authors reported the recall scores of their RE system on the various syntactico-semantic structures. But they do not attempt to recognize nor leverage these structures. In this work, we focus on detecting the first four structures. 
These four structures cover 80% of the mention pairs having valid semantic relations (we give the detailed breakdown in Section 7) and we show that they are relatively easy to identify using simple rules or patterns. In this section, we indicate mentions using square bracket pairs, and use mi and mj to represent a mention pair. We now describe the four structures. Premodifier relations specify the proper adjective or proper noun premodifier and the following noun it modifies, e.g.: [the [Seattle] zoo] Possessive indicates that the first mention is in a possessive case, e.g.: [[California] ’s Governor] Preposition indicates that the two mentions are semantically related via the existence of a preposition, e.g.: [officials] in [California] Formulaic The ACE04 annotation guideline3 indicates the annotation of several formulaic relations, including for example address: [Medford] , [Massachusetts] 2ACE-2004 termed it as lexical condition. We use the term syntactico-semantic structure in this paper as the mention pair exists in specific syntactic structures, and we use rules or patterns that are syntactically and semantically motivated to detect these structures. 3http://projects.ldc.upenn.edu/ace/docs/EnglishRDCV4-32.PDF 553 Structure type Pattern Premodifier Basic pattern: [u* [v+] w+] , where u, v, w represent words Each w is a noun or adjective If u* is not empty, then u*: JJ+ ∨JJ “and” JJ? ∨CD JJ* ∨RB DT JJ? ∨RB CD JJ ∨ DT (RB|JJ|VBG|VBD|VBN|CD)? Let w1 = first word in w+. w1 ̸= “’s” and POS tag of w1 ̸= POS Let vl = last word in v+. POS tag of vl ̸= PRP$ nor WP$ Possessive Basic pattern: [u? [v+] w+] , where u, v, w represent words Let w1 = first word in w+. If w1 = “’s” ∨POS tag of w1 = POS, accept mention pair Let vl = last word in v+. If POS tag of vl = PRP$ or WP$, accept mention pair Preposition Basic pattern: [mi] v* [mj], where v represent words and number of prepositions in the text span v* between them = 0, 1, or 2 If satisfy pattern: IN [mi][mj], accept mention pair If satisfy pattern: [mi] (IN|TO) [mj], accept mention pair If all labels in Ld start with “prep”, accept mention pair Formulaic If satisfy pattern: [mi] / [mj] ∧Ec(mi) = PER ∧Ec(mj) = ORG, accept mention pair If satisfy pattern: [mi][mj] If Ec(mi) = PER ∧Ec(mj) = ORG ∨GPE, accept mention pair Table 1: Rules and patterns for the four syntactico-semantic structures. Regular expression notations: ‘*’ matches the preceding element zero or more times; ‘+’ matches the preceding element one or more times; ‘?’ indicates that the preceding element is optional; ‘|’ indicates or. Abbreviations: Ec(m): coarse-grained entity type of mention m; Ld: labels in dependency path between the headword of two mentions. We use square brackets ‘[’ and ‘]’ to denote mention boundaries. The ‘/’ in the Formulaic row denotes the occurrence of a lexical ‘/’ in text. In this rest of this section, we present the rules/patterns for detecting the above four syntactico-semantic structure, giving an overview of them in Table 1. We plan to release all of the rules/patterns along with associated code4. Notice that the patterns are intuitive and mostly syntactic in nature. 3.1 Premodifier Structures • We require that one of the mentions completely include the other mention. Thus, the basic pattern is [u* [v+] w+]. • If u* is not empty, we require that it satisfies any of the following POS tag sequences: JJ+ ∨ JJ and JJ? ∨CD JJ*, etc. These are (optional) POS tag sequences that normally start a valid noun phrase. 
• We use two patterns to differentiate between premodifier relations and possessive relations, by checking for the existence of POS tags PRP$, WP$, POS, and the word “’s”. 4http://cogcomp.cs.illinois.edu/page/publications 3.2 Possessive Structures • The basic pattern for possessive is similar to that for premodifier: [u? [v+] w+] • If the word immediately following v+ is “’s” or its POS tag is “POS”, we accept the mention pair. If the POS tag of the last word in v+ is either PRP$ or WP$, we accept the mention pair. 3.3 Preposition Structures • We first require the two mentions to be nonoverlapping, and check for the existence of patterns such as “IN [mi] [mj]” and “[mi] (IN|TO) [mj]”. • If the only dependency labels in the dependency path between the head words of mi and mj are “prep” (prepositional modifier), accept the mention pair. 3.4 Formulaic Structures • The ACE-2004 annotator guidelines specify that several relations such as reporter signing off, addresses, etc. are often specified in standard structures. We check for the existence of patterns such as “[mi] / [mj]”, “[mi] [mj]”, 554 Category Feature For every POS of wk and offset from lw word wk wk and offset from lw in POS of wk, wk, and offset from lw mention mi POS of wk, offset from lw, and lw Bc(wk) and offset from lw POS of wk, Bc(wk), and offset from lw POS of wk, offset from lw, and Bc(lw) Contextual C−1,−1 of mi C+1,+1 of mi P−1,−1 of mi P+1,+1 of mi NE tags tag of NE, if lw of NE coincides with lw of mi in the sentence Syntactic parse-label of parse tree constituent parse that exactly covers mi parse-labels of parse tree constituents covering mi Table 2: Features used in our mention entity typing (MET) system. The abbreviations are as follows. lw: last word in the mention; Bc(w): the brown cluster bit string representing w; NE: named entity and whether they satisfy certain semantic entity type constraints. 4 Mention Extraction System As part of our experiments, we perform RE using predicted mentions. We first describe the features (an overview is given in Table 2) and then describe how we extract candidate mentions from sentences during evaluation. 4.1 Mention Extraction Features Features for every word in the mention For every word wk in a mention mi, we extract seven features. These are a combination of wk itself, its POS tag, and its integer offset from the last word (lw) in the mention. For instance, given the mention “the operation room”, the offsets for the three words in the mention are -2, -1, and 0 respectively. These features are meant to capture the word and POS tag sequences in mentions. We also use word clusters which are automatically generated from unlabeled texts, using the Brown clustering (Bc) algorithm of (Brown et al., 1992). This algorithm outputs a binary tree where words are leaves in the tree. Each word (leaf) in the tree can be represented by its unique path from the Category Feature POS POS of single word between m1, m2 hw of mi, mj and P−1,−1 of mi, mj hw of mi, mj and P−1,−1 of mi, mj hw of mi, mj and P+1,+1 of mi, mj hw of mi, mj and P−2,−1 of mi, mj hw of mi, mj and P−1,+1 of mi, mj hw of mi, mj and P+1,+2 of mi, mj Base chunk any base phrase chunk between mi, mj Table 3: Additional RE features. root and this path can be represented as a simple bit string. As part of our features, we use the cluster bit string representation of wk and lw. Contextual We extract the word C−1,−1 immediately before mi, the word C+1,+1 immediately after mi, and their associated POS tags P. 
NE tags We automatically annotate the sentences with named entity (NE) tags using the named entity tagger of (Ratinov and Roth, 2009). This tagger annotates proper nouns with the tags PER (person), ORG (organization), LOC (location), or MISC (miscellaneous). If the lw of mi coincides (actual token offset) with the lw of any NE annotated by the NE tagger, we extract the NE tag as a feature. Syntactic parse We parse the sentences using the syntactic parser of (Klein and Manning, 2003). We extract the label of the parse tree constituent (if it exists) that exactly covers the mention, and also labels of all constituents that covers the mention. 4.2 Extracting Candidate Mentions From a sentence, we gather the following as candidate mentions: all nouns and possessive pronouns, all named entities annotated by the the NE tagger (Ratinov and Roth, 2009), all base noun phrase (NP) chunks, all chunks satisfying the pattern: NP (PP NP)+, all NP constituents in the syntactic parse tree, and from each of these constituents, all substrings consisting of two or more words, provided the substrings do not start nor end on punctuation marks. These mention candidates are then fed to our mention entity typing (MET) classifier for type prediction (more details in Section 6.3). 555 5 Relation Extraction System We build a supervised RE system using sentences annotated with entity mentions and predefined target relations. During evaluation, when given a pair of mentions mi, mj, the system predicts whether any of the predefined target relation holds between the mention pair. Most of our features are based on the work of (Zhou et al., 2005; Chan and Roth, 2010). Due to space limitations, we refer the reader to our prior work (Chan and Roth, 2010) for the lexical, structural, mention-level, entity type, and dependency features. Here, we only describe the features that were not used in that work. As part of our RE system, we need to extract the head word (hw) of a mention (m), which we heuristically determine as follows: if m contains a preposition and a noun preceding the preposition, we use the noun as the hw. If there is no preposition in m, we use the last noun in m as the hw. POS features If there is a single word between the two mentions, we extract its POS tag. Given the hw of m, Pi,j refers to the sequence of POS tags in the immediate context of hw (we exclude the POS tag of hw). The offsets i and j denote the position (relative to hw) of the first and last POS tag respectively. For instance, P−2,−1 denotes the sequence of two POS tags on the immediate left of hw, and P−1,+1 denotes the POS tag on the immediate left of hw and the POS tag on the immediate right of hw. Base phrase chunk We add a boolean feature to detect whether there is any base phrase chunk in the text span between the two mentions. 6 Experiments We use the ACE-2004 dataset (catalog LDC2005T09 from the Linguistic Data Consortium) to conduct our experiments. Following prior work, we use the news wire (nwire) and broadcast news (bnews) corpora of ACE-2004 for our experiments, which consists of 345 documents. To build our RE system, we use the LIBLINEAR (Fan et al., 2008) package, with its default settings of L2-loss SVM (dual) as the solver, and we use an epsilon of 0.1. To ensure that this baseline RE system based on the features in Section 5 is competitive, we compare against the state-of-the-art featurebased RE systems of (Jiang and Zhai, 2007) and (Chan and Roth, 2010). 
In these works, the authors reported performance on undirected coarsegrained RE. Performing 5-fold cross validation on the nwire and bnews corpora, (Jiang and Zhai, 2007) and (Chan and Roth, 2010) reported F-measures of 71.5 and 71.2, respectively. Using the same evaluation setting, our baseline RE system achieves a competitive 71.4 F-measure. We build three RE classifiers: binary, coarse, fine. Lumping all the predefined target relations into a single label, we build a binary classifier to predict whether any of the predefined relations exists between a given mention pair. In this work, we model the argument order of the mentions when performing RE, since relations are usually asymmetric in nature. For instance, we consider mi:EMP-ORG:mj and mj:EMP-ORG:mi to be distinct relation types. In our experiments, we extracted a total of 55,520 examples or mention pairs. Out of these, 4,011 are positive relation examples annotated with 6 coarse-grained relation types and 22 fine-grained relation types5. We build a coarse-grained classifier to disambiguate between 13 relation labels (two asymmetric labels for each of the 6 coarse-grained relation types and a null label). We similarly build a fine-grained classifier to disambiguate between 45 relation labels. 6.1 Evaluation Method For our experiments, we adopt the experimental setting in our prior work (Chan and Roth, 2010) of ensuring that all examples from a single document are either all used for training, or all used for evaluation. In that work, we also highlight that ACE annotators rarely duplicate a relation link for coreferent mentions. For instance, assume mentions mi, mj, and mk are in the same sentence, mentions mi and mj are coreferent, and the annotators tag the mention pair mj, mk with a particular relation r. The annotators will rarely duplicate the same (implicit) 5We omit a single relation: Discourse (DISC). The ACE2004 annotation guidelines states that the DISC relation is established only for the purposes of the discourse and does not reference an official entity relevant to world knowledge. In this work, we focus on semantically meaningful relations. Furthermore, the DISC relation is dropped in ACE-2005. 556 10 documents 5% of data 80% of data RE model Rec% Pre% F1% Rec% Pre% F1% Rec% Pre% F1% Binary 58.0 80.3 67.4 64.4 80.6 71.6 73.2 84.0 78.2 Binary+Patterns 73.1 78.5 75.7 (+8.3) 75.3 80.6 77.9 80.1 84.2 82.1 Coarse 33.5 62.5 43.6 42.4 66.2 51.7 62.1 75.5 68.1 Coarse+Patterns 44.2 59.6 50.8 (+7.2) 51.2 64.2 56.9 68.0 75.4 71.5 Fine 18.1 47.0 26.1 26.3 51.6 34.9 51.6 68.4 58.8 Fine+Patterns 24.8 43.5 31.6 (+5.5) 32.2 48.9 38.9 56.4 67.5 61.5 Table 4: Micro-averaged (across the 5 folds) RE results using gold mentions. 10 documents 5% of data 80% of data RE model Rec% Pre% F1% Rec% Pre% F1% Rec% Pre% F1% Binary 32.2 46.6 38.1 35.5 48.9 41.1 40.1 52.7 45.5 Binary+Patterns 46.7 45.9 46.3 (+8.2) 47.6 47.8 47.2 50.2 50.4 50.3 Coarse 18.6 41.1 25.6 22.4 40.9 28.9 32.3 47.5 38.5 Coarse+Patterns 26.8 34.7 30.2 (+4.6) 30.3 37.0 33.3 38.9 42.9 40.8 Fine 10.7 32.2 16.1 14.6 33.4 20.3 26.9 44.3 33.5 Fine+Patterns 15.7 26.3 19.7 (+3.6) 19.4 29.2 23.3 31.7 38.3 34.7 Table 5: Micro-averaged (across the 5 folds) RE results using predicted mentions. relation r between mi and mk, thus leaving the gold relation label as null. Whether this is correct or not is debatable. However, to avoid being penalized when our RE system actually correctly predicts the label of an implicit relation, we take the following approach. 
During evaluation, if our system correctly predicts an implicit label, we simply switch its prediction to the null label. Since the RE recall scores only take into account non-null relation labels, this scoring method does not change the recall, but could marginally increase the precision scores by decreasing the count of RE predictions. In our experiments, we observe that both the usual and our scoring method give very similar RE results and the experimental trends remain the same. Of course, using this scoring method requires coreference information, which is available in the ACE data. 6.2 RE Evaluation Using Gold Mentions To perform our experiments, we split the 345 documents into 5 equal sets. In each of the 5 folds, 4 sets (276 documents) are reserved for drawing training examples, while the remaining set (69 documents) is used as evaluation data. In the experiments described in this section, we use the gold mentions available in the data. When one only has a small amount of training data, it is crucial to take advantage of external knowledge such as the syntactico-semantic structures. To simulate this setting, in each fold, we randomly selected 10 documents from the fold’s available training documents (about 3% of the total 345 documents) as training data. We built one binary, one coarse-grained, and one fine-grained classifier for each fold. In Section 2, we described how we trained a baseline RE classifier (REbase) and a RE classifier using the syntactico-semantic patterns (REs). We first apply REbase on each test example mention pair (mi,mj) to obtain the RE baseline results, showing these in Table 4 under the column “10 documents”, and in the rows “Binary”, “Coarse”, and “Fine”. We then applied REs on the test examples as described in Section 2, showing the results in the rows “Binary+Patterns”, “Coarse+Patterns”, and “Fine+Patterns”. The results show that by using syntactico-semantic structures, we obtain significant F-measure improvements of 8.3, 7.2, and 5.5 for binary, coarse-grained, and fine-grained relation predictions respectively. 6.3 RE Evaluation Using Predicted Mentions Next, we perform our experiments using predicted mentions. ACE-2004 defines 7 coarse-grained entity types, each of which are then refined into 43 fine557 0 1 2 3 4 5 6 7 8 5 10 15 20 25 30 35 40 45 50 55 60 65 70 75 80 RE F1(%) Improvement Proportion (%) of data used for training Improvement in (gold mentions) RE by using patterns Binary+Pattern Coarse+Pattern Fine+Pattern Figure 3: Improvement in (gold mention) RE. grained entity types. Using the ACE data annotated with mentions and predefined entity types, we build a fine-grained mention entity typing (MET) classifier to disambiguate between 44 labels (43 finegrained and a null label to indicate not a mention). To obtain the coarse-grained entity type predictions from the classifier, we simply check which coarsegrained type the fine-grained prediction belongs to. We use the LIBLINEAR package with the same settings as earlier specified for the RE system. In each fold, we build a MET classifier using all the (276) training documents in that fold. We apply REbase on all mention pairs (mi,mj) where both mi and mj have non null entity type predictions. We show these baseline results in the Rows “Binary”, “Coarse”, and “Fine” of Table 5. In Section 2, we described our algorithmic approach (Figure 2) that takes advantage of the structures with predicted mentions. 
We show the results of this approach in the Rows “Binary+Patterns”, “Coarse+Patterns”, and “Fine+Patterns” of Table 5. The results show that by leveraging syntacticosemantic structures, we obtain significant F-measure improvements of 8.2, 4.6, and 3.6 for binary, coarsegrained, and fine-grained relation predictions respectively. 7 Analysis We first show statistics regarding the syntacticosemantic structures. In Section 3, we mentioned that ACE-2004 identified five types of structures: premodifier, possessive, preposition, formulaic, and 0 1 2 3 4 5 6 7 8 5 10 15 20 25 30 35 40 45 50 55 60 65 70 75 80 RE F1(%) Improvement Proportion (%) of data used for training Improvement in (predicted mentions) RE by using patterns Binary+Pattern Coarse+Pattern Fine+Pattern Figure 4: Improvement in (predicted mention) RE. Pattern type Rec% Pre% PreMod 86.8 79.7 Poss 94.3 88.3 Prep 94.6 20.0 Formula 85.5 62.2 Table 6: Recall and precision of the patterns. verbal. On the 4,011 examples that we experimented on, premodifiers are the most frequent, accounting for 30.5% of the examples (or about 1,224 examples). The occurrence distributions of the other structures are 18.9% (possessive), 23.9% (preposition), 7.2% (formulaic), and 19.5% (verbal). Hence, the four syntactico-semantic structures that we focused on in this paper account for a large majority (80%) of the relations. In Section 6, we note that out of 55,520 mention pairs, only 4,011 exhibit valid relations. Thus, the proportion of positive relation examples is very sparse at 7.2%. If we can effectively identify and discard most of the negative relation examples, it should improve RE performance, including yielding training data with a more balanced label distribution. We now analyze the utility of the patterns. As shown in Table 6, the patterns are effective in inferring the structure of mention pairs. For instance, applying the premodifier patterns on the 55,520 mention pairs, we correctly identified 86.8% of the 1,224 premodifier occurrences as premodifiers, while incurring a false-positive rate of only about 20%6. We 6Random selection will give a precision of about 2.2% (1,224 out of 55,520) and thus a false-positive rate of 97.8% 558 note that preposition structures are relatively harder to identify. Some of the reasons are due to possibly multiple prepositions in between a mention pair, preposition sense ambiguity, pp-attachment ambiguity, etc. However, in general, we observe that inferring the structures allows us to discard a large portion of the mention pairs which have no valid relation between them. The intuition behind this is the following: if we infer that there is a syntacticosemantic structure between a mention pair, then it is likely that the mention pair exhibits a valid relation. Conversely, if there is a valid relation between a mention pair, then it is likely that there exists a syntactico-semantic structure between the mentions. Next, we repeat the experiments in Section 6.2 and Section 6.3, while gradually increasing the amount of training data used for training the RE classifiers. The detailed results of using 5% and 80% of all available data are shown in Table 4 and Table 5. Note that these settings are with respect to all 345 documents and thus the 80% setting represents using all 276 training documents in each fold. We plot the intermediate results in Figure 3 and Figure 4. We note that leveraging the structures provides improvements on all experimental settings. 
Also, intuitively, the binary predictions benefit the most from leveraging the structures. How to further exploit this is a possible future work. 8 Conclusion In this paper, we propose a novel algorithmic approach to RE by exploiting syntactico-semantic structures. We show that this approach provides several advantages and improves RE performance. There are several interesting directions for future work. There are probably many near misses when we apply our structure patterns on predicted mentions. For instance, for both premodifier and possessive structures, we require that one mention completely includes the other. Relaxing this might potentially recover additional valid mention pairs and improve performance. We could also try to learn classifiers to automatically identify and disambiguate between the different syntactico-semantic structures. It will also be interesting to feedback the predictions of the structure patterns to the mention entity typing classifier and possibly retrain to obtain a better classifier. Acknowledgements This research is supported by the Defense Advanced Research Projects Agency (DARPA) Machine Reading Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-09-C-0181. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the view of the DARPA, AFRL, or the US government. We thank Ming-Wei Chang and Quang Do for building the mention extraction system. References Peter F. Brown, Vincent J. Della Pietra, Peter V. deSouza, Jenifer C. Lai, and Robert L. Mercer. 1992. Classbased n-gram models of natural language. Computational Linguistics, 18(4):467–479. Yee Seng Chan and Dan Roth. 2010. Exploiting background knowledge for relation extraction. In Proceedings of the International Conference on Computational Linguistics (COLING), pages 152–160. Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, XiangRui Wang, and Chih-Jen Lin. 2008. Liblinear: A library for large linear classification. Journal of Machine Learning Research, 9:1871–1874. Katrin Fundel, Robert K¨uffner, and Ralf Zimmer. 2007. Relex – Relation extraction using dependency parse trees. Bioinformatics, 23(3):365–371. Mark A. Greenwood and Mark Stevenson. 2006. Improving semi-supervised acquisition of relation extraction patterns. In Proceedings of the COLING-ACL Workshop on Information Extraction Beyond The Document, pages 29–35. Jing Jiang and ChengXiang Zhai. 2007. A systematic exploration of the feature space for relation extraction. In Proceedings of Human Language Technologies North American Chapter of the Association for Computational Linguistics (HLT-NAACL), pages 113–120. Jing Jiang. 2009. Multi-task transfer learning for weakly-supervised relation extraction. In Proceedings of the Annual Meeting of the Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACL-IJCNLP), pages 1012–1020. Nanda Kambhatla. 2004. Combining lexical, syntactic, and semantic features with maximum entropy models for information extraction. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), pages 178–181. 559 Dan Klein and Christoper D. Manning. 2003. Fast exact inference with a factored model for natural language parsing. In The Conference on Advances in Neural Information Processing Systems (NIPS), pages 3–10. Longhua Qian, Guodong Zhou, Qiaomin Zhu, and Peide Qian. 2008. 
Relation extraction using convolution tree kernel expanded with entity features. In Pacific Asia Conference on Language, Information and Computation, pages 415–421. Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the Annual Conference on Computational Natural Language Learning (CoNLL), pages 147–155. Dan Roth and Wen Tau Yih. 2007. Global inference for entity and relation identification via a linear programming formulation. In Lise Getoor and Ben Taskar, editors, Introduction to Statistical Relational Learning. MIT Press. Guodong Zhou, Jian Su, Jie Zhang, and Min Zhang. 2005. Exploring various knowledge in relation extraction. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), pages 427–434. Jun Zhu, Zaiqing Nie, Xiaojiang Liu, Bo Zhang, and JiRong Wen. 2009. Statsnowball: a statistical approach to extracting entity relationships. In The International World Wide Web Conference, pages 101–110. 560
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 561–569, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Together We Can: Bilingual Bootstrapping for WSD Mitesh M. Khapra Salil Joshi Arindam Chatterjee Pushpak Bhattacharyya Department Of Computer Science and Engineering, IIT Bombay, Powai, Mumbai, 400076. {miteshk,salilj,arindam,pb}@cse.iitb.ac.in Abstract Recent work on bilingual Word Sense Disambiguation (WSD) has shown that a resource deprived language (L1) can benefit from the annotation work done in a resource rich language (L2) via parameter projection. However, this method assumes the presence of sufficient annotated data in one resource rich language which may not always be possible. Instead, we focus on the situation where there are two resource deprived languages, both having a very small amount of seed annotated data and a large amount of untagged data. We then use bilingual bootstrapping, wherein, a model trained using the seed annotated data of L1 is used to annotate the untagged data of L2 and vice versa using parameter projection. The untagged instances of L1 and L2 which get annotated with high confidence are then added to the seed data of the respective languages and the above process is repeated. Our experiments show that such a bilingual bootstrapping algorithm when evaluated on two different domains with small seed sizes using Hindi (L1) and Marathi (L2) as the language pair performs better than monolingual bootstrapping and significantly reduces annotation cost. 1 Introduction The high cost of collecting sense annotated data for supervised approaches (Ng and Lee, 1996; Lee et al., 2004) has always remained a matter of concern for some of the resource deprived languages of the world. The problem is even more hard-hitting for multilingual regions (e.g., India which has more than 20 constitutionally recognized languages). To circumvent this problem, unsupervised and knowledge based approaches (Lesk, 1986; Walker and Amsler, 1986; Agirre and Rigau, 1996; McCarthy et al., 2004; Mihalcea, 2005) have been proposed as an alternative but they have failed to deliver good accuracies. Semi-supervised approaches (Yarowsky, 1995) which use a small amount of annotated data and a large amount of untagged data have shown promise albeit for a limited set of target words. The above situation highlights the need for high accuracy resource conscious approaches to all-words multilingual WSD. Recent work by Khapra et al. (2010) in this direction has shown that it is possible to perform cost effective WSD in a target language (L2) without compromising much on accuracy by leveraging on the annotation work done in another language (L1). This is achieved with the help of a novel synsetaligned multilingual dictionary which facilitates the projection of parameters learned from the Wordnet and annotated corpus of L1 to L2. This approach thus obviates the need for collecting large amounts of annotated corpora in multiple languages by relying on sufficient annotated corpus in one resource rich language. However, in many situations such a pivot resource rich language itself may not be available. Instead, we might have two or more languages having a small amount of annotated corpus and a large amount of untagged corpus. Addressing such situations is the main focus of this work. 
Specifically, we address the following question: In the absence of a pivot resource rich language is it possible for two resource deprived languages to mutually benefit from each other’s annotated data? While addressing the above question we assume that 561 even though it is hard to obtain large amounts of annotated data in multiple languages, it should be fairly easy to obtain a large amount of untagged data in these languages. We leverage on such untagged data by employing a bootstrapping strategy. The idea is to train an initial model using a small amount of annotated data in both the languages and iteratively expand this seed data by including untagged instances which get tagged with a high confidence in successive iterations. Instead of using monolingual bootstrapping, we use bilingual bootstrapping via parameter projection. In other words, the parameters learned from the annotated data of L1 (and L2 respectively) are projected to L2 (and L1 respectively) and the projected model is used to tag the untagged instances of L2 (and L1 respectively). Such a bilingual bootstrapping strategy when tested on two domains, viz., Tourism and Health using Hindi (L1) and Marathi (L2) as the language pair, consistently does better than a baseline strategy which uses only seed data for training without performing any bootstrapping. Further, it consistently performs better than monolingual bootstrapping. A simple and intuitive explanation for this is as follows. In monolingual bootstrapping a language can benefit only from its own seed data and hence can tag only those instances with high confidence which it has already seen. On the other hand, in bilingual bootstrapping a language can benefit from the seed data available in the other language which was not previously seen in its self corpus. This is very similar to the process of co-training (Blum and Mitchell, 1998) wherein the annotated data in the two languages can be seen as two different views of the same data. Hence, the classifier trained on one view can be improved by adding those untagged instances which are tagged with a high confidence by the classifier trained on the other view. The remainder of this paper is organized as follows. In section 2 we present related work. Section 3 describes the Synset aligned multilingual dictionary which facilitates parameter projection. Section 4 discusses the work of Khapra et al. (2009) on parameter projection. In section 5 we discuss bilingual bootstrapping which is the main focus of our work followed by a brief discussion on monolingual bootstrapping. Section 6 describes the experimental setup. In section 7 we present the results followed by discussion in section 8. Section 9 concludes the paper. 2 Related Work Bootstrapping for Word Sense Disambiguation was first discussed in (Yarowsky, 1995). Starting with a very small number of seed collocations an initial decision list is created. This decisions list is then applied to untagged data and the instances which get tagged with a high confidence are added to the seed data. This algorithm thus proceeds iteratively increasing the seed size in successive iterations. This monolingual bootstrapping method showed promise when tested on a limited set of target words but was not tried for all-words WSD. 
The failure of monolingual approaches (Ng and Lee, 1996; Lee et al., 2004; Lesk, 1986; Walker and Amsler, 1986; Agirre and Rigau, 1996; McCarthy et al., 2004; Mihalcea, 2005) to deliver high accuracies for all-words WSD at low costs created interest in bilingual approaches which aim at reducing the annotation effort. Recent work in this direction by Khapra et al. (2009) aims at reducing the annotation effort in multiple languages by leveraging on existing resources in a pivot language. They showed that it is possible to project the parameters learned from the annotation work of one language to another language provided aligned Wordnets for the two languages are available. However, they do not address situations where two resource deprived languages have aligned Wordnets but neither has sufficient annotated data. In such cases bilingual bootstrapping can be used so that the two languages can mutually benefit from each other’s small annotated data. Li and Li (2004) proposed a bilingual bootstrapping approach for the more specific task of Word Translation Disambiguation (WTD) as opposed to the more general task of WSD. This approach does not need parallel corpora (just like our approach) and relies only on in-domain corpora from two languages. However, their work was evaluated only on a handful of target words (9 nouns) for WTD as opposed to the broader task of WSD. Our work instead focuses on improving the performance of all words WSD for two resource deprived languages using bilingual bootstrapping. At the heart of our work lies parameter projection facilitated by a synset aligned 562 multilingual dictionary described in the next section. 3 Synset Aligned Multilingual Dictionary A novel and effective method of storage and use of dictionary in a multilingual setting was proposed by Mohanty et al. (2008). For the purpose of current discussion, we will refer to this multilingual dictionary framework as MultiDict. One important departure in this framework from the traditional dictionary is that synsets are linked, and after that the words inside the synsets are linked. The basic mapping is thus between synsets and thereafter between the words. Concepts L1 (English) L2 (Hindi) L3 (Marathi) 04321: a youthful male person {male child, boy} {lwкA (ladkaa), bAlк (baalak), bQcA (bachchaa)} {m lgA (mulgaa), porgA (porgaa), por (por)} Table 1: Multilingual Dictionary Framework Table 1 shows the structure of MultiDict, with one example row standing for the concept of boy. The first column is the pivot describing a concept with a unique ID. The subsequent columns show the words expressing the concept in respective languages (in the example table, English, Hindi and Marathi). After the synsets are linked, cross linkages are set up manually from the words of a synset to the words of a linked synset of the pivot language. For example, for the Marathi word m lgA (mulgaa), “a youthful male person”, the correct lexical substitute from the corresponding Hindi synset is lwкA (ladkaa). The average number of such links per synset per language pair is approximately 3. However, since our work takes place in a semi-supervised setting, we do not assume the presence of these manual cross linkages between synset members. Instead, in the above example, we assume that all the words in the Hindi synset are equally probable translations of every word in the corresponding Marathi synset. Such cross-linkages between synset members facilitate parameter projection as explained in the next section. 
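To make the MultiDict layout of Table 1 concrete, a small illustrative sketch is given below; the class, field, and function names are ours, the Hindi and Marathi entries are transliterations of the example row, and the snippet is not the actual resource format.

```python
# Illustrative sketch of a synset-aligned multilingual dictionary entry.
# Names and layout are hypothetical; only the example row of Table 1 is used.
from dataclasses import dataclass, field

@dataclass
class Concept:
    concept_id: str                              # pivot ID, e.g. "04321"
    gloss: str                                   # "a youthful male person"
    synsets: dict = field(default_factory=dict)  # language -> synset words

multidict = {
    "04321": Concept("04321", "a youthful male person", {
        "english": ["male child", "boy"],
        "hindi":   ["ladkaa", "baalak", "bachchaa"],
        "marathi": ["mulgaa", "porgaa", "por"],
    })
}

def candidate_cross_links(concept_id, target_language):
    """In the absence of manual cross-linkages, every word of the linked
    synset is treated as an equally probable translation (as assumed above)."""
    return multidict[concept_id].synsets[target_language]
```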
4 Parameter Projection Khapra et al. (2009) proposed that the various parameters essential for domain-specific Word Sense Disambiguation can be broadly classified into two categories: Wordnet-dependent parameters: • belongingness-to-dominant-concept • conceptual distance • semantic distance Corpus-dependent parameters: • sense distributions • corpus co-occurrence They proposed a scoring function (Equation (1)) which combines these parameters to identify the correct sense of a word in a context: S∗= arg max i (θiVi + X j∈J Wij ∗Vi ∗Vj) (1) where, i ∈Candidate Synsets J = Set of disambiguated words θi = BelongingnessToDominantConcept(Si) Vi = P(Si|word) Wij = CorpusCooccurrence(Si, Sj) ∗1/WNConceptualDistance(Si, Sj) ∗1/WNSemanticGraphDistance(Si, Sj) The first component θiVi of Equation (1) captures influence of the corpus specific sense of a word in a domain. The other component Wij ∗Vi ∗Vj captures the influence of interaction of the candidate sense with the senses of context words weighted by factors of co-occurrence, conceptual distance and semantic distance. Wordnet-dependent parameters depend on the structure of the Wordnet whereas the Corpusdependent parameters depend on various statistics learned from a sense marked corpora. Both the tasks of (a) constructing a Wordnet from scratch and (b) collecting sense marked corpora for multiple languages are tedious and expensive. Khapra et 563 al. (2009) observed that by projecting relations from the Wordnet of a language and by projecting corpus statistics from the sense marked corpora of the language to those of the target language, the effort required in constructing semantic graphs for multiple Wordnets and collecting sense marked corpora for multiple languages can be avoided or reduced. At the heart of their work lies the MultiDict described in previous section which facilitates parameter projection in the following manner: 1. By linking with the synsets of a pivot resource rich language (Hindi, in our case), the cost of building Wordnets of other languages is partly reduced (semantic relations are inherited). The Wordnet parameters of Hindi Wordnet now become projectable to other languages. 2. For calculating corpus specific sense distributions, P(Sense Si|Word W), we need the counts, #(Si, W). By using cross linked words in the synsets, these counts become projectable to the target language (Marathi, in our case) as they can be approximated by the counts of the cross linked Hindi words calculated from the Hindi sense marked corpus as follows: P(Si|W) = #(Si, marathi word) P j #(Sj, marathi word) P(Si|W) ≈ #(Si, cross linked hindi word) P j #(Sj, cross linked hindi word) The rationale behind the above approximation is the observation that within a domain the counts of crosslinked words will remain the same across languages. This parameter projection strategy as explained above lies at the heart of our work and allows us to perform bilingual bootstrapping by projecting the models learned from one language to another. 5 Bilingual Bootstrapping We now come to the main contribution of our work, i.e., bilingual bootstrapping. As shown in Algorithm 1, we start with a small amount of seed data (LD1 and LD2) in the two languages. Using this data we learn the parameters described in the previous section. 
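As a concrete illustration of the corpus-parameter projection described in the previous section, namely the approximation of P(Si|W) for a Marathi word via counts of its cross-linked Hindi word, a minimal sketch follows; the input mappings cross_link and hindi_sense_counts are hypothetical stand-ins for the MultiDict cross-links and the Hindi sense-marked corpus statistics.

```python
# Sketch of projecting sense distributions from Hindi counts to a Marathi word.
# cross_link and hindi_sense_counts are assumed inputs, not the authors' data.
from collections import Counter

def project_sense_distribution(marathi_word, cross_link, hindi_sense_counts):
    """Return {sense_id: P(sense | marathi_word)} approximated from counts
    #(S_i, cross-linked Hindi word) in the Hindi sense-marked corpus."""
    hindi_word = cross_link[marathi_word]
    counts = hindi_sense_counts.get(hindi_word, Counter())
    total = sum(counts.values())
    if total == 0:
        return {}
    return {sense: n / total for sense, n in counts.items()}
```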
We collectively refer to the parameters learned Algorithm 1 Bilingual Bootstrapping LD1 := Seed Labeled Data from L1 LD2 := Seed Labeled Data from L2 UD1 := Unlabeled Data from L1 UD2 := Unlabeled Data from L2 repeat θ1 := model trained using LD1 θ2 := model trained using LD2 {Project models from L1/L2 to L2/L1} ˆθ2 := project(θ1, L2) ˆθ1 := project(θ2, L1) for all u1 ∈UD1 do s := sense assigned by ˆθ1 to u1 if confidence(s) > ǫ then LD1 := LD1 + u1 UD1 := UD1 - u1 end if end for for all u2 ∈UD2 do s := sense assigned by ˆθ2 to u2 if confidence(s) > ǫ then LD2 := LD2 + u2 UD2 := UD2 - u2 end if end for until convergence from the seed data as models θ1 and θ2 for L1 and L2 respectively. The parameter projection strategy described in the previous section is then applied to θ1 and θ2 to obtain the projected models ˆθ2 and ˆθ1 respectively. These projected models are then applied to the untagged data of L1 and L2 and the instances which get labeled with a high confidence are added to the labeled data of the respective languages. This process is repeated till we reach convergence, i.e., till it is no longer possible to move any data from UD1 (and UD2) to LD1 (and LD2 respectively). We compare our algorithm with monolingual bootstrapping where the self models θ1 and θ2 are directly used to annotate the unlabeled instances in L1 and L2 respectively instead of using the projected models ˆθ1 and ˆθ2. The process of monolingual boot564 Algorithm 2 Monolingual Bootstrapping LD1 := Seed Labeled Data from L1 LD2 := Seed Labeled Data from L2 UD1 := Unlabeled Data from L1 UD2 := Unlabeled Data from L2 repeat θ1 := model trained using LD1 θ2 := model trained using LD2 for all u1 ∈UD1 do s := sense assigned by θ1 to u1 if confidence(s) > ǫ then LD1 := LD1 + u1 UD1 := UD1 - u1 end if end for for all u2 ∈UD2 do s := sense assigned by θ2 to u2 if confidence(s) > ǫ then LD2 := LD2 + u2 UD2 := UD2 - u2 end if end for until convergence strapping is shown in Algorithm 2. 6 Experimental Setup We used the publicly available dataset1 described in Khapra et al. (2010) for all our experiments. The data was collected from two domains, viz., Tourism and Health. The data for Tourism domain was collected by manually translating English documents downloaded from Indian Tourism websites into Hindi and Marathi. Similarly, English documents for Health domain were obtained from two doctors and were manually translated into Hindi and Marathi. The entire data was then manually annotated by three lexicographers adept in Hindi and Marathi. The various statistics pertaining to the total number of words, number of words per POS category and average degree of polysemy are described in Tables 2 to 5. Although Tables 2 and 3 also report the num1http://www.cfilt.iitb.ac.in/wsd/annotated corpus Polysemous words Monosemous words Category Tourism Health Tourism Health Noun 62336 24089 35811 18923 Verb 6386 1401 3667 5109 Adjective 18949 8773 28998 12138 Adverb 4860 2527 13699 7152 All 92531 36790 82175 43322 Table 2: Polysemous and Monosemous words per category in each domain for Hindi Polysemous words Monosemous words Category Tourism Health Tourism Health Noun 45589 17482 27386 11383 Verb 7879 3120 2672 1500 Adjective 13107 4788 16725 6032 Adverb 4036 1727 5023 1874 All 70611 27117 51806 20789 Table 3: Polysemous and Monosemous words per category in each domain for Marathi Avg. 
degree of Wordnet polysemy for polysemous words Category Tourism Health Noun 3.02 3.17 Verb 5.05 6.58 Adjective 2.66 2.75 Adverb 2.52 2.57 All 3.09 3.23 Table 4: Average degree of Wordnet polysemy per category in the 2 domains for Hindi Avg. degree of Wordnet polysemy for polysemous words Category Tourism Health Noun 3.06 3.18 Verb 4.96 5.18 Adjective 2.60 2.72 Adverb 2.44 2.45 All 3.14 3.29 Table 5: Average degree of Wordnet polysemy per category in the 2 domains for Marathi 565 0 10 20 30 40 50 60 70 80 0 1000 2000 3000 4000 5000 F-score (%) Seed Size (words) Seed Size v/s F-score OnlySeed WFS BiBoot MonoBoot 0 10 20 30 40 50 60 70 80 0 1000 2000 3000 4000 5000 F-score (%) Seed Size (words) Seed Size v/s F-score OnlySeed WFS BiBoot MonoBoot Figure 1: Comparison of BiBoot, MonoBoot, OnlySeed and WFS on Hindi Health data Figure 2: Comparison of BiBoot, MonoBoot, OnlySeed and WFS on Hindi Tourism data 0 10 20 30 40 50 60 70 80 0 1000 2000 3000 4000 5000 F-score (%) Seed Size (words) Seed Size v/s F-score OnlySeed WFS BiBoot MonoBoot 0 10 20 30 40 50 60 70 80 0 1000 2000 3000 4000 5000 F-score (%) Seed Size (words) Seed Size v/s F-score OnlySeed WFS BiBoot MonoBoot Figure 3: Comparison of BiBoot, MonoBoot, OnlySeed and WFS on Marathi Health data Figure 4: Comparison of BiBoot, MonoBoot, OnlySeed and WFS on Marathi Tourism data ber of monosemous words, we would like to clearly state that we do not consider monosemous words while evaluating the performance of our algorithms (as monosemous words do not need any disambiguation). We did a 4-fold cross validation of our algorithm using the above described corpora. Note that even though the corpora were parallel we did not use this property in any way in our experiments or algorithm. In fact, the documents in the two languages were randomly split into 4 folds without ensuring that the parallel documents remain in the same folds for the two languages. We experimented with different seed sizes varying from 0 to 5000 in steps of 250. The seed annotated data and untagged instances for bootstrapping are extracted from 3 folds of the data and the final evaluation is done on the held-out data in the 4th fold. We ran both the bootstrapping algorithms (i.e., monolingual bootstrapping and bilingual bootstrapping) for 10 iterations but, we observed that after 1-2 iterations the algorithms converge. In each iteration only those words for which P(assigned sense|word) > 0.6 get moved to the labeled data. Ideally, this threshold (0.6) should have been selected using a development set. However, since our work focuses on resource scarce languages we did not want to incur the additional cost of using a development set. Hence, we used a fixed threshold of 0.6 so that in each iteration only those words get moved to the labeled data for which the assigned sense is clearly a majority sense (P > 0.6). 566 LanguageDomain Algorithm F-score(%) No. of tagged words needed to achieve this F-score % Reduction in annotation cost Hindi-Health Biboot 57.70 1250 (2250+2250)−(1250+1750) (2250+2250) ∗100 = 33.33% OnlySeed 57.99 2250 Marathi-Health Biboot 64.97 1750 OnlySeed 64.51 2250 Hindi-Tourism Biboot 60.67 1000 (2000+2000)−(1000+1250) (2000+2000) ∗100 = 43.75% OnlySeed 59.83 2000 Marathi-Tourism Biboot 61.90 1250 OnlySeed 61.68 2000 Table 6: Reduction in annotation cost achieved using Bilingual Bootstrapping 7 Results The results of our experiments are summarized in Figures 1 to 4. 
The x-axis represents the amount of seed data used and the y-axis represents the F-scores obtained. The different curves in each graph are as follows: a. BiBoot: This curve represents the F-score obtained after 10 iterations by using bilingual bootstrapping with different amounts of seed data. b. MonoBoot: This curve represents the F-score obtained after 10 iterations by using monolingual bootstrapping with different amounts of seed data. c. OnlySeed: This curve represents the F-score obtained by training on the seed data alone without using any bootstrapping. d. WFS: This curve represents the F-score obtained by simply selecting the first sense from Wordnet, a typically reported baseline. 8 Discussions In this section we discuss the important observations made from Figures 1 to 4. 8.1 Performance of Bilingual bootstrapping For small seed sizes, the F-score of bilingual bootstrapping is consistently better than the F-score obtained by training only on the seed data without using any bootstrapping. This is true for both the languages in both the domains. Further, bilingual bootstrapping also does better than monolingual bootstrapping for small seed sizes. As explained earlier, this better performance can be attributed to the fact that in monolingual bootstrapping the algorithm can tag only those instances with high confidence which it has already seen in the training data. Hence, in successive iterations, very little new information becomes available to the algorithm. This is clearly evident from the fact that the curve of monolingual bootstrapping (MonoBoot) is always close to the curve of OnlySeed. 8.2 Effect of seed size The benefit of bilingual bootstrapping is clearly felt for small seed sizes. However, as the seed size increases the performance of the 3 algorithms, viz., MonoBoot, BiBoot and OnlySeed is more or less the same. This is intuitive, because, as the seed size increases the algorithm is able to see more and more tagged instances in its self corpora and hence does not need any assistance from the other language. In other words, the annotated data in L1 is not able to add any new information to the training process of L2 and vice versa. 8.3 Bilingual bootstrapping reduces annotation cost The performance boost obtained at small seed sizes suggests that bilingual bootstrapping helps to reduce the overall annotation costs for both the languages. To further illustrate this, we take some sample points from the graph and compare the number of tagged words needed by BiBoot and OnlySeed to reach the same (or nearly the same) F-score. We present this comparison in Table 6. 567 The rows for Hindi-Health and Marathi-Health in Table 6 show that when BiBoot is employed we need 1250 tagged words in Hindi and 1750 tagged words in Marathi to attain F-scores of 57.70% and 64.97% respectively. On the other hand, in the absence of bilingual bootstrapping, (i.e., using OnlySeed) we need 2250 tagged words each in Hindi and Marathi to achieve similar F-scores. BiBoot thus gives a reduction of 33.33% in the overall annotation cost ( {1250 + 1750} v/s {2250 + 2250}) while achieving similar F-scores. Similarly, the results for Hindi-Tourism and Marathi-Tourism show that BiBoot gives a reduction of 43.75% in the overall annotation cost while achieving similar F-scores. Further, since the results of MonoBoot are almost the same as OnlySeed, the above numbers indicate that BiBoot provides a reduction in cost when compared to MonoBoot also. 
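The cost-reduction percentages reported in Table 6 follow directly from this comparison. As a minimal sketch (Python; the tagged-word counts are taken from Table 6, and the function name is ours):

def annotation_cost_reduction(biboot_words, onlyseed_words):
    # Each argument is a (Hindi, Marathi) pair of tagged-word counts needed
    # to reach a comparable F-score.
    saved = sum(onlyseed_words) - sum(biboot_words)
    return 100.0 * saved / sum(onlyseed_words)

print(annotation_cost_reduction((1250, 1750), (2250, 2250)))  # Health domain: 33.33%
print(annotation_cost_reduction((1000, 1250), (2000, 2000)))  # Tourism domain: 43.75%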
8.4 Contribution of monosemous words in the performance of BiBoot As mentioned earlier, monosemous words in the test set are not considered while evaluating the performance of our algorithm but, we add monosemous words to the seed data. However, we do not count monosemous words while calculating the seed size as there is no manual annotation cost associated with monosemous words (they can be tagged automatically by fetching their singleton sense id from the wordnet). We observed that the monosemous words of L1 help in boosting the performance of L2 and vice versa. This is because for a given monosemous word in L2 (or L1 respectively) the corresponding cross-linked word in L1 (or L2 respectively) need not necessarily be monosemous. In such cases, the cross-linked polysemous word in L2 (or L1 respectively) benefits from the projected statistics of a monosemous word in L1 (or L2 respectively). This explains why BiBoot gives an F-score of 35-52% even at zero seed size even though the F-score of OnlySeed is only 2-5% (see Figures 1 to 4). 9 Conclusion We presented a bilingual bootstrapping algorithm for Word Sense Disambiguation which allows two resource deprived languages to mutually benefit from each other’s data via parameter projection. The algorithm consistently performs better than monolingual bootstrapping. It also performs better than using only monolingual seed data without using any bootstrapping. The benefit of bilingual bootstrapping is felt prominently when the seed size in the two languages is very small thus highlighting the usefulness of this algorithm in highly resource constrained scenarios. Acknowledgments We acknowledge the support of Microsoft Research India in the form of an International Travel Grant, which enabled one of the authors (Mitesh M. Khapra) to attend this conference. References Eneko Agirre and German Rigau. 1996. Word sense disambiguation using conceptual density. In In Proceedings of the 16th International Conference on Computational Linguistics (COLING). Avrim Blum and Tom Mitchell. 1998. Combining labeled and unlabeled data with co-training. pages 92– 100. Morgan Kaufmann Publishers. Mitesh M. Khapra, Sapan Shah, Piyush Kedia, and Pushpak Bhattacharyya. 2009. Projecting parameters for multilingual word sense disambiguation. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 459–467, Singapore, August. Association for Computational Linguistics. Mitesh Khapra, Saurabh Sohoney, Anup Kulkarni, and Pushpak Bhattacharyya. 2010. Value for money: Balancing annotation effort, lexicon building and accuracy for multilingual wsd. In Proceedings of the 23rd International Conference on Computational Linguistics. Yoong Keok Lee, Hwee Tou Ng, and Tee Kiah Chia. 2004. Supervised word sense disambiguation with support vector machines and multiple knowledge sources. In Proceedings of Senseval-3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text, pages 137–140. Michael Lesk. 1986. Automatic sense disambiguation using machine readable dictionaries: how to tell a pine cone from an ice cream cone. In In Proceedings of the 5th annual international conference on Systems documentation. Hang Li and Cong Li. 2004. Word translation disambiguation using bilingual bootstrapping. Comput. Linguist., 30:1–22, March. 568 Diana McCarthy, Rob Koeling, Julie Weeds, and John Carroll. 2004. Finding predominant word senses in untagged text. 
In ACL ’04: Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, page 279, Morristown, NJ, USA. Association for Computational Linguistics. Rada Mihalcea. 2005. Large vocabulary unsupervised word sense disambiguation with graph-based algorithms for sequence data labeling. In In Proceedings of the Joint Human Language Technology and Empirical Methods in Natural Language Processing Conference (HLT/EMNLP), pages 411–418. Rajat Mohanty, Pushpak Bhattacharyya, Prabhakar Pande, Shraddha Kalele, Mitesh Khapra, and Aditya Sharma. 2008. Synset based multilingual dictionary: Insights, applications and challenges. In Global Wordnet Conference. Hwee Tou Ng and Hian Beng Lee. 1996. Integrating multiple knowledge sources to disambiguate word senses: An exemplar-based approach. In In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics (ACL), pages 40–47. D. Walker and R. Amsler. 1986. The use of machine readable dictionaries in sublanguage analysis. In In Analyzing Language in Restricted Domains, Grishman and Kittredge (eds), LEA Press, pages 69–83. David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proceedings of the 33rd annual meeting on Association for Computational Linguistics, pages 189–196, Morristown, NJ, USA. Association for Computational Linguistics. 569
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 570–580, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Which Noun Phrases Denote Which Concepts? Jayant Krishnamurthy Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213 [email protected] Tom M. Mitchell Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213 [email protected] Abstract Resolving polysemy and synonymy is required for high-quality information extraction. We present ConceptResolver, a component for the Never-Ending Language Learner (NELL) (Carlson et al., 2010) that handles both phenomena by identifying the latent concepts that noun phrases refer to. ConceptResolver performs both word sense induction and synonym resolution on relations extracted from text using an ontology and a small amount of labeled data. Domain knowledge (the ontology) guides concept creation by defining a set of possible semantic types for concepts. Word sense induction is performed by inferring a set of semantic types for each noun phrase. Synonym detection exploits redundant information to train several domain-specific synonym classifiers in a semi-supervised fashion. When ConceptResolver is run on NELL’s knowledge base, 87% of the word senses it creates correspond to real-world concepts, and 85% of noun phrases that it suggests refer to the same concept are indeed synonyms. 1 Introduction Many information extraction systems construct knowledge bases by extracting structured assertions from free text (e.g., NELL (Carlson et al., 2010), TextRunner (Banko et al., 2007)). A major limitation of many of these systems is that they fail to distinguish between noun phrases and the underlying concepts they refer to. As a result, a polysemous phrase like “apple” will refer sometimes to the concept Apple Computer (the company), and other times to the concept apple (the fruit). Furthermore, two synonymous noun phrases like “apple” and “Apple “apple” “apple computer” apple (the fruit) Apple Computer Figure 1: An example mapping from noun phrases (left) to a set of underlying concepts (right). Arrows indicate which noun phrases can refer to which concepts. [eli lilly, lilly] [kaspersky labs, kaspersky lab, kaspersky] [careerbuilder, careerbuilder.com] [l 3 communications, level 3 communications] [cellular, u.s. cellular] [jc penney, jc penny] [nielsen media research, nielsen company] [universal studios, universal music group, universal] [amr corporation, amr] [intel corp, intel corp., intel corporation, intel] [emmitt smith, chris canty] [albert pujols, pujols] [carlos boozer, dennis martinez] [jason hirsh, taylor buchholz] [chris snyder, ryan roberts] [j.p. losman, losman, jp losman] [san francisco giants, francisco rodriguez] [andruw jones, andruw] [aaron heilman, bret boone] [roberto clemente, clemente] Figure 2: A random sample of concepts created by ConceptResolver. The first 10 concepts are from company, while the second 10 are from athlete. Computer” can refer to the same underlying concept. The result of ignoring this many-to-many mapping between noun phrases and underlying concepts (see Figure 1) is confusion about the meaning of extracted information. To minimize such confusion, a system must separately represent noun phrases, the underlying concepts to which they can refer, and the many-to-many “can refer to” relation between them. The relations extracted by systems like NELL actually apply to concepts, not to noun phrases. 
Say 570 the system extracts the relation ceoOf(x1, x2) between the noun phrases x1 and x2. The correct interpretation of this extracted relation is that there exist concepts c1 and c2 such that x1 can refer to c1, x2 can refer to c2, and ceoOf(c1, c2). If the original relation were ceoOf(“steve”, “apple”), then c1 would be Steve Jobs, and c2 would be Apple Computer. A similar interpretation holds for one-place category predicates like person(x1). We define concept discovery as the problem of (1) identifying concepts like c1 and c2 from extracted predicates like ceoOf(x1, x2) and (2) mapping noun phrases like x1, x2 to the concepts they can refer to. The main input to ConceptResolver is a set of extracted category and relation instances over noun phrases, like person(x1) and ceoOf(x1, x2), produced by running NELL. Here, any individual noun phrase xi can be labeled with multiple categories and relations. The output of ConceptResolver is a set of concepts, {c1, c2, ..., cn}, and a mapping from each noun phrase in the input to the set of concepts it can refer to. Like many other systems (Miller, 1995; Yates and Etzioni, 2007; Lin and Pantel, 2002), ConceptResolver represents each output concept ci as a set of synonymous noun phrases, i.e., ci = {xi1, xi2, ..., xim}. For example, Figure 2 shows several concepts output by ConceptResolver; each concept clearly reveals which noun phrases can refer to it. Each concept also has a semantic type that corresponds to a category in ConceptResolver’s ontology; for instance, the first 10 concepts in Figure 2 belong to the category company. Previous approaches to concept discovery use little prior knowledge, clustering noun phrases based on co-occurrence statistics (Pantel and Lin, 2002). In comparison, ConceptResolver uses a knowledgerich approach. In addition to the extracted relations, ConceptResolver takes as input two other sources of information: an ontology, and a small number of labeled synonyms. The ontology contains a schema for the relation and category predicates found in the input instances, including properties of predicates like type restrictions on its domain and range. The category predicates are used to assign semantic types to each concept, and the properties of relation predicates are used to create evidence for synonym resolution. The labeled synonyms are used as training data during synonym resolution, where they are 1. Induce Word Senses i. Use extracted category instances to create one or more senses per noun phrase. ii. Use argument type constraints to produce relation evidence for synonym resolution. 2. Cluster Synonymous Senses For each category C defined in the ontology: i. Train a semi-supervised classifier to predict synonymy. ii. Cluster word senses with semantic type C using classifier’s predictions. iii. Output sense clusters as concepts with semantic type C. Figure 3: High-level outline of ConceptResolver’s algorithm. used to train a semi-supervised classifier. ConceptResolver discovers concepts using the process outlined in Figure 3. It first performs word sense induction, using the extracted category instances to create one or more unambiguous word senses for each noun phrase in the knowledge base. Each word sense is a copy of the original noun phrase paired with a semantic type (a category) that restricts the concepts it can refer to. ConceptResolver then performs synonym resolution on these word senses. 
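To make the data flow of Figure 3 concrete, the two steps can be sketched as follows (an illustrative Python sketch; the function names and the placeholder arguments standing in for the classifier training and clustering of Section 4.2 are our own, not ConceptResolver's actual implementation):

from collections import defaultdict

def induce_senses(category_instances):
    # Step 1: one word sense per extracted (noun phrase, category) pair,
    # e.g. ("apple", "company") and ("apple", "fruit").
    return {(noun_phrase, category) for noun_phrase, category in category_instances}

def resolve_concepts(senses, train_synonym_classifier, cluster_senses):
    # Step 2: synonym resolution, run independently over the senses of each category.
    by_category = defaultdict(list)
    for noun_phrase, category in senses:
        by_category[category].append((noun_phrase, category))
    concepts = []
    for category, cat_senses in by_category.items():
        classifier = train_synonym_classifier(cat_senses)       # semi-supervised, per category
        for sense_cluster in cluster_senses(cat_senses, classifier):
            concepts.append((category, sense_cluster))          # concepts inherit the category type
    return concepts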
This step treats the senses of each semantic type independently, first training a synonym classifier then clustering the senses based on the classifier’s decisions. The result of this process is clusters of synonymous word senses, which are output as concepts. Concepts inherit the semantic type of the word senses they contain. We evaluate ConceptResolver using a subset of NELL’s knowledge base, presenting separate results for the concepts of each semantic type. The evaluation shows that, on average, 87% of the word senses created by ConceptResolver correspond to real-world concepts. We additionally find that, on average, 85% of the noun phrases in each concept refer to the same real-world entity. 2 Prior Work Previous work on concept discovery has focused on the subproblems of word sense induction and synonym resolution. Word sense induction is typically performed using unsupervised clustering. In the SemEval word sense induction and disambigua571 tion task (Agirre and Soroa, 2007; Manandhar et al., 2010), all of the submissions in 2007 created senses by clustering the contexts each word occurs in, and the 2010 event explicitly disallowed the use of external resources like ontologies. Other systems cluster words to find both word senses and concepts (Pantel and Lin, 2002; Lin and Pantel, 2002). ConceptResolver’s category-based approach is quite different from these clustering approaches. Snow et al. (2006) describe a system which adds new word senses to WordNet. However, Snow et al. assume the existence of an oracle which provides the senses of each word. In contrast, ConceptResolver automatically determines the number of senses for each word. Synonym resolution on relations extracted from web text has been previously studied by Resolver (Yates and Etzioni, 2007), which finds synonyms in relation triples extracted by TextRunner (Banko et al., 2007). In contrast to our system, Resolver is unsupervised and does not have a schema for the relations. Due to different inputs, ConceptResolver and Resolver are not precisely comparable. However, our evaluation shows that ConceptResolver has higher synonym resolution precision than Resolver, which we attribute to our semi-supervised approach and the known relation schema. Synonym resolution also arises in record linkage (Winkler, 1999; Ravikumar and Cohen, 2004) and citation matching (Bhattacharya and Getoor, 2007; Bhattacharya and Getoor, 2006; Poon and Domingos, 2007). As with word sense induction, many approaches to these problems are unsupervised. A problem with these algorithms is that they require the authors to define domain-specific similarity heuristics to achieve good performance. Other synonym resolution work is fully supervised (Singla and Domingos, 2006; McCallum and Wellner, 2004; Snow et al., 2007), training models using manually constructed sets of synonyms. These approaches use large amounts of labeled data, which can be difficult to create. ConceptResolver’s approach lies between these two extremes: we label a small number of synonyms (10 pairs), then use semi-supervised training to learn a similarity function. We think our technique is a good compromise, as it avoids much of the manual effort of the other approaches: tuning the similarity function in one case, and labeling a large amount of data in the other ConceptResolver uses a novel algorithm for semisupervised clustering which is conceptually similar to other work in the area. 
Like other approaches (Basu et al., 2004; Xing et al., 2003; Klein et al., 2002), we learn a similarity measure for clustering based on a set of must-link and cannot-link constraints. Unlike prior work, our algorithm exploits multiple views of the data to improve the similarity measure. As far as we know, ConceptResolver is the first application of semi-supervised clustering to relational data – where the items being clustered are connected by relations (Getoor and Diehl, 2005). Interestingly, the relational setting also provides us with the independent views that are beneficial to semi-supervised training. Concept discovery is also related to coreference resolution (Ng, 2008; Poon and Domingos, 2008). The difference between the two problems is that coreference resolution finds noun phrases that refer to the same concept within a specific document. We think the concepts produced by a system like ConceptResolver could be used to improve coreference resolution by providing prior knowledge about noun phrases that can refer to the same concept. This knowledge could be especially helpful for crossdocument coreference resolution systems (Haghighi and Klein, 2010), which actually represent concepts and track mentions of them across documents. 3 Background: Never-Ending Language Learner ConceptResolver is designed as a component for the Never-Ending Language Learner (NELL) (Carlson et al., 2010). In this section, we provide some pertinent background information about NELL that influenced the design of ConceptResolver 1. NELL is an information extraction system that has been running 24x7 for over a year, using coupled semi-supervised learning to populate an ontology from unstructured text found on the web. The ontology defines two types of predicates: categories (e.g., company and CEO) and relations (e.g., ceoOfCompany). Categories are single-argument predicates, and relations are two-argument predicates. 1More information about NELL, including browsable and downloadable versions of its knowledge base, is available from http://rtw.ml.cmu.edu. 572 NELL’s knowledge base contains both definitions for predicates and extracted instances of each predicate. At present, NELL’s knowledge base defines approximately 500 predicates and contains over half a million extracted instances of these predicates with an accuracy of approximately 0.85. Relations between predicates are an important component of NELL’s ontology. For ConceptResolver, the most important relations are domain and range, which define argument types for each relation predicate. For example, the first argument of ceoOfCompany must be a CEO and the second argument must be a company. Argument type restrictions inform ConceptResolver’s word sense induction process (Section 4.1). Multiple sources of information are used to populate each predicate with high precision. The system runs four independent extractors for each predicate: the first uses web co-occurrence statistics, the second uses HTML structures on webpages, the third uses the morphological structure of the noun phrase itself, and the fourth exploits empirical regularities within the knowledge base. These subcomponents are described in more detail by Carlson et al. (2010) and Wang and Cohen (2007). NELL learns using a bootstrapping process, iteratively re-training these extractors using instances in the knowledge base, then adding some predictions of the learners to the knowledge base. 
This iterative learning process can be viewed as a discrete approximation to EM which does not explicitly instantiate every latent variable. As in other information extraction systems, the category and relation instances extracted by NELL contain polysemous and synonymous noun phrases. ConceptResolver was developed to reduce the impact of these phenomena. 4 ConceptResolver This section describes ConceptResolver, our new component which creates concepts from NELL’s extractions. It uses a two-step procedure, first creating one or more senses for each noun phrase, then clustering synonymous senses to create concepts. 4.1 Word Sense Induction ConceptResolver induces word senses using a simple assumption about noun phrases and concepts. If a noun phrase has multiple senses, the senses should be distinguishable from context. People can determine the sense of an ambiguous word given just a few surrounding words (Kaplan, 1955). We hypothesize that local context enables sense disambiguation by defining the semantic type of the ambiguous word. ConceptResolver makes the simplifying assumption that all word senses can be distinguished on the basis of semantic type. As the category predicates in NELL’s ontology define a set of possible semantic types, this assumption is equivalent to the one-sense-per-category assumption: a noun phrase refers to at most one concept in each category of NELL’s ontology. For example, this means that a noun phrase can refer to a company and a fruit, but not multiple companies. ConceptResolver uses the extracted category assertions to define word senses. Each word sense is represented as a tuple containing a noun phrase and a category. In synonym resolution, the category acts like a type constraint, and only senses with the same category type can be synonymous. To create senses, the system interprets each extracted category predicate c(x) as evidence that category c contains a concept denoted by noun phrase x. Because it assumes that there is at most one such concept, ConceptResolver creates one sense of x for each extracted category predicate. As a concrete example, say the input assertions contain company(“apple”) and fruit(“apple”). Sense induction creates two senses for “apple”: (“apple”, company) and (“apple”, fruit). The second step of sense induction produces evidence for synonym resolution by creating relations between word senses. These relations are created from input relations and the ontology’s argument type constraints. Each extracted relation is mapped to all possible sense relations that satisfy the argument type constraints. For example, the noun phrase relation ceoOfCompany(“steve jobs”, “apple”) would map to ceoOfCompany((“steve jobs”, ceo), (“apple”, company)). It would not map to a similar relation with (“apple”, fruit), however, as (“apple”, fruit) is not in the range of ceoOfCompany. This process is effective because the relations in the ontology have restrictive domains and ranges, so only a small fraction of sense pairs satisfy the argument type restrictions. It is also not vital that this mapping be perfect, as the sense relations are only 573 used as evidence for synonym resolution. The final output of sense induction is a sense-disambiguated knowledge base, where each noun phrase has been converted into one or more word senses, and relations hold between pairs of senses. 
4.2 Synonym Resolution After mapping each noun phrase to one or more senses (each with a distinct category type), ConceptResolver performs semi-supervised clustering to find synonymous senses. As only senses with the same category type can be synonymous, our synonym resolution algorithm treats senses of each type independently. For each category, ConceptResolver trains a semi-supervised synonym classifier then uses its predictions to cluster word senses. Our key insight is that semantic relations and string attributes provide independent views of the data: we can predict that two noun phrases are synonymous either based on the similarity of their text strings, or based on similarity in the relations NELL has extracted about them. As a concrete example, we can decide that (“apple computer”, company) and (“apple”, company) are synonymous because the text string “apple” is similar to “apple computer,” or because we have learned that (“steve jobs”, ceo) is the CEO of both companies. ConceptResolver exploits these two independent views using co-training (Blum and Mitchell, 1998) to produce an accurate synonym classifier using only a handful of labels. 4.2.1 Co-Training the Synonym Classifier For each category, ConceptResolver co-trains a pair of synonym classifiers using a handful of labeled synonymous senses and a large number of automatically created unlabeled sense pairs. Co-training is a semi-supervised learning algorithm for data sets where each instance can be classified from two (or more) independent sets of features. That is, the features of each instance xi can be partitioned into two views, xi = (x1 i , x2 i ), and there exist functions in each view, f1, f2, such that f1(x1 i ) = f2(x2 i ) = yi. The co-training algorithm uses a bootstrapping procedure to train f1, f2 using a small set of labeled examples L and a large pool of unlabeled examples U. The training process repeatedly trains each classifier on the labeled examples, then allows each classifier to label some examples in the unlabeled data pool. Co-training also has PAC-style theoretical guarantees which show that it can learn classifiers with arbitrarily high accuracy under appropriate conditions (Blum and Mitchell, 1998). Figure 4 provides high-level pseudocode for cotraining in the context of ConceptResolver. In ConceptResolver, an instance xi is a pair of senses (e.g., <(“apple”, company), (“microsoft”, company)>), the two views x1 i and x2 i are derived from string attributes and semantic relations, and the output yi is whether the senses are synonyms. (The features of each view are described later in this section.) L is initialized with a small number of labeled sense pairs. Ideally, U would contain all pairs of senses in the category, but this set grows quadratically in category size. Therefore, ConceptResolver uses the canopies algorithm (McCallum et al., 2000) to initialize U with a subset of the sense pairs that are more likely to be synonymous. Both the string similarity classifier and the relation classifier are trained using L2-regularized logistic regression. The regularization parameter λ is automatically selected on each iteration by searching for a value which maximizes the loglikelihood of a validation set, which is constructed by randomly sampling 25% of L on each iteration. λ is re-selected on each iteration because the initial labeled data set is extremely small, so the initial validation set is not necessarily representative of the actual data. 
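A minimal sketch of this per-iteration selection of λ, assuming a scikit-learn-style implementation (the candidate grid and the function name are our own illustration, not the authors' code):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

def fit_with_lambda_selection(X, y, lambdas=(0.01, 0.1, 1.0, 10.0, 100.0)):
    # Hold out a random 25% of the current labeled pairs L as the validation set.
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25)
    best_model, best_ll = None, -np.inf
    for lam in lambdas:
        # scikit-learn's C is the inverse of the regularization strength lambda.
        model = LogisticRegression(penalty="l2", C=1.0 / lam).fit(X_tr, y_tr)
        ll = -log_loss(y_val, model.predict_proba(X_val), labels=[0, 1])  # avg. validation log-likelihood
        if ll > best_ll:
            best_model, best_ll = model, ll
    return best_model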
In our experiments, the initial validation set contains only 15 instances. The string similarity classifier bases its decision on the original noun phrase which mapped to each sense. We use several string similarity measures as features, including SoftTFIDF (Cohen et al., 2003), Level 2 JaroWinkler (Cohen et al., 2003), FellegiSunter (Fellegi and Sunter, 1969), and Monge-Elkan (Monge and Elkan, 1996). The first three algorithms produce similarity scores by matching words in the two phrases and the fourth is an edit distance. We also use a heuristic abbreviation detection algorithm (Schwartz and Hearst, 2003) and convert its output into a score by dividing the length of the detected abbreviation by the total length of the string. The relation classifier’s features capture several intuitive ways to determine that two items are synonyms from the items they are related to. The relation view contains three features for each relation 574 For each category C: 1. Initialize labeled data L with 10 positive and 50 negative examples (pairs of senses) 2. Initialize unlabeled data U by running canopies (McCallum et al., 2000) on all senses in C. 3. Repeat 50 times: i. Train the string similarity classifier on L ii. Train the relation classifier on L iii. Label U with each classifier iv. Add the most confident 5 positive and 25 negative predictions of both classifiers to L Figure 4: The co-training algorithm for learning synonym classifiers. r whose domain is compatible with the current category. Consider the sense pair (s, t), and let r(s) denote s’s values for relation r (i.e., r(s) = {v : r(s, v)}). For each relation r, we instantiate the following features: • (Senses which share values are synonyms) The percent of values of r shared by both s and t, that is |r(s)∩r(t)| |r(s)∪r(t)|. • (Senses with different values are not synonyms) The percent of values of r not shared by s and t, or 1 −|r(s)∩r(t)| |r(s)∪r(t)|. The feature is set to 0 if either r(s) or r(t) is empty. This feature is only instantiated if the ontology specifies that r has at most one value per sense. • (Some relations indicate synonymy) A boolean feature which is true if t ∈r(s) or s ∈r(t). The output of co-training is a pair of classifiers for each category. We combine their predictions using the assumption that the two views X1, X2 are conditionally independent given Y . As we trained both classifiers using logistic regression, we have models for the probabilities P(Y |X1) and P(Y |X2). The conditional independence assumption implies that we can combine their predictions using the formula: P(Y = 1|X1, X2) = P(Y = 1|X1)P(Y = 1|X2)P(Y = 0) P y=0,1 P(Y = y|X1)P(Y = y|X2)(1 −P(Y = y)) The above formula involves a prior term, P(Y ), because the underlying classifiers are discriminative. We set P(Y = 1) = .5 in our experiments as this setting reduces our dependence on the (typically poorly calibrated) probability estimates of logistic regression. We also limited the probability predictions of each classifier to lie in [.01, .99] to avoid divide-by-zero errors. The probability P(Y |X1, X2) is the final synonym classifier which is used for agglomerative clustering. 4.2.2 Agglomerative Clustering The second step of our algorithm runs agglomerative clustering to enforce transitivity constraints on the predictions of the co-trained synonym classifier. As noted in previous works (Snow et al., 2006), synonymy is a transitive relation. If a and b are synonyms, and b and c are synonyms, then a and c must also be synonyms. 
Unfortunately, co-training is not guaranteed to learn a function that satisfies these transitivity constraints. We enforce the constraints by running agglomerative clustering, as clusterings of instances trivially satisfy the transitivity property. ConceptResolver uses the clustering algorithm described by Snow et al. (2006), which defines a probabilistic model for clustering and a procedure to (locally) maximize the likelihood of the final clustering. The algorithm is essentially bottom-up agglomerative clustering of word senses using a similarity score derived from P(Y |X1, X2). The similarity score for two senses is defined as: log P(Y = 0)P(Y = 1|X1, X2) P(Y = 1)P(Y = 0|X1, X2) The similarity score for two clusters is the sum of the similarity scores for all pairs of senses. The agglomerative clustering algorithm iteratively merges the two most similar clusters, stopping when the score of the best possible pair is below 0. The clusters of word senses produced by this process are the concepts for each category. 5 Evaluation We perform several experiments to measure ConceptResolver’s performance at each of its respective tasks. The first experiment evaluates word sense induction using Freebase as a canonical set of concepts. The second experiment evaluates synonym resolution by comparing ConceptResolver’s sense clusters to a gold standard clustering. For both experiments, we used a knowledge base created by running 140 iterations of NELL. We preprocessed this knowledge base by removing all noun 575 phrases with zero extracted relations. As ConceptResolver treats the instances of each category predicate independently, we chose 7 categories from NELL’s ontology to use in the evaluation. The categories were selected on the basis of the number of extracted relations that ConceptResolver could use to detect synonyms. The number of noun phrases in each category is shown in Table 2. We manually labeled 10 pairs of synonymous senses for each of these categories. The system automatically synthesized 50 negative examples from the positive examples by assuming each pair represents a distinct concept, so senses in different pairs are not synonyms. 5.1 Word Sense Induction Evaluation Our first experiment evaluates the performance of ConceptResolver’s category-based word sense induction. We estimate two quantities: (1) sense precision, the fraction of senses created by our system that correspond to real-world entities, and (2) sense recall, the fraction of real-world entities that ConceptResolver creates senses for. Sense recall is only measured over entities which are represented by a noun phrase in ConceptResolver’s input assertions – it is a measure of ConceptResolver’s ability to create senses for the noun phrases it is given. Sense precision is directly determined by how frequently NELL’s extractors propose correct senses for noun phrases, while sense recall is related to the correctness of the one-sense-per-category assumption. Precision and recall were evaluated by comparing the senses created by ConceptResolver to concepts in Freebase (Bollacker et al., 2008). We sampled 100 noun phrases from each category and matched each noun phrase to a set of Freebase concepts. We interpret each matching Freebase concept as a sense of the noun phrase. We chose Freebase because it had good coverage for our evaluation categories. To align ConceptResolver’s senses with Freebase, we first matched each of our categories with a set of similar Freebase categories2. 
We then used a combination of Freebase’s search API and Mechanical Turk to align noun phrases with Freebase concepts: we searched for the noun phrase in Freebase, then had Mechanical Turk workers label which of the 2In Freebase, concepts are called Topics and categories are called Types. For clarity, we use our terminology throughout. Freebase Category Precision Recall concepts per Phrase athlete 0.95 0.56 1.76 city 0.97 0.25 3.86 coach 0.86 0.94 1.06 company 0.85 0.41 2.41 country 0.74 0.56 1.77 sportsteam 0.89 0.30 3.28 stadiumoreventvenue 0.83 0.61 1.63 Table 1: ConceptResolver’s word sense induction performance Figure 5: Empirical distribution of the number of Freebase concepts per noun phrase in each category top 10 resulting Freebase concepts the noun phrase could refer to. After obtaining the list of matching Freebase concepts for each noun phrase, we computed sense precision as the number of noun phrases matching ≥1 Freebase concept divided by 100, the total number of noun phrases. Sense recall is the reciprocal of the average number of Freebase concepts per noun phrase. Noun phrases matching 0 Freebase concepts were not included in this computation. The results of the evaluation in Table 1 show that ConceptResolver’s word sense induction works quite well for many categories. Most categories have high precision, while recall varies by category. Categories like coach are relatively unambiguous, with almost exactly 1 sense per noun phrase. Other categories have almost 4 senses per noun phrase. However, this average is somewhat misleading. Figure 5 shows the distribution of the number of concepts per noun phrase in each category. The distribution shows that most noun phrases are unambiguous, but a small number of noun phrases have a large number of senses. In many cases, these noun phrases 576 are generic terms for many items in the category; for example, “palace” in stadiumoreventvenue refers to 10 Freebase concepts. Freebase’s category definitions are also overly technical in some cases – for example, Freebase’s version of company has a concept for each registered corporation. This definition means that some companies like Volkswagen have more than one concept (in this case, 9 concepts). These results suggest that the one-sense-percategory assumption holds for most noun phrases. An important footnote to this evaluation is that the categories in NELL’s ontology are somewhat arbitrary, and that creating subcategories would improve sense recall. For example, we could define subcategories of sportsteam for various sports (e.g., football team); these new categories would allow ConceptResolver to distinguish between teams with the same name that play different sports. Creating subcategories could improve performance in categories with a high level of polysemy. 5.2 Synonym Resolution Evaluation Our second experiment evaluates synonym resolution by comparing the concepts created by ConceptResolver to a gold standard set of concepts. Although this experiment is mainly designed to evaluate ConceptResolver’s ability to detect synonyms, it is somewhat affected by the word sense induction process. Specifically, the gold standard clustering contains noun phrases that refer to multiple concepts within the same category. (It is unclear how to create a gold standard clustering without allowing such mappings.) The word sense induction process produces only one of these mappings, which limits maximum possible recall in this experiment. 
For this experiment, we report two different measures of clustering performance. The first measure is the precision and recall of pairwise synonym decisions, typically known as cluster precision and recall. We dub this the clustering metric. We also adopt the precision/recall measure from Resolver (Yates and Etzioni, 2007), which we dub the Resolver metric. The Resolver metric aligns each proposed cluster containing ≥2 senses with a gold standard cluster (i.e., a real-world concept) by selecting the cluster that a plurality of the senses in the proposed cluster refer to. Precision is then the fraction of senses in the proposed cluster which are also in the gold standard cluster; recall is computed analogously by swapping the roles of the proposed and gold standard clusters. Resolver precision can be interpreted as the probability that a randomly sampled sense (in a cluster with at least 2 senses) is in a cluster representing its true meaning. Incorrect senses were removed from the data set before evaluating precision; however, these senses may still affect performance by influencing the clustering process. Precision was evaluated by sampling 100 random concepts proposed by ConceptResolver, then manually scoring each concept using both of the metrics above. This process mimics aligning each sampled concept with its best possible match in a gold standard clustering, then measuring precision with respect to the gold standard. Recall was evaluated by comparing the system’s output to a manually constructed set of concepts for each category. To create this set, we randomly sampled noun phrases from each category and manually matched each noun phrase to one or more real-world entities. We then found other noun phrases which referred to each entity and created a concept for each entity with at least one unambiguous reference. This process can create multiple senses for a noun phrase, depending on the real-world entities represented in the input assertions. We only included concepts containing at least 2 senses in the test set, as singleton concepts do not contribute to either recall metric. The size of each recall test set is listed in Table 2; we created smaller test sets for categories where synonyms were harder to find. Incorrectly categorized noun phrases were not included in the gold standard as they do not correspond to any real-world entities. Table 2 shows the performance of ConceptResolver on each evaluation category. For each category, we also report the baseline recall achieved by placing each sense in its own cluster. ConceptResolver has high precision for several of the categories. Other categories like athlete and city have somewhat lower precision. To make this difference concrete, Figure 2 (first page) shows a random sample of 10 concepts from both company and athlete. Recall varies even more widely across categories, partly because the categories have varying levels of polysemy, and partly due to differences in average concept size. The differences in average concept size are reflected in the baseline recall numbers. 
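Concretely, Resolver precision for a single proposed cluster can be computed as below (a sketch with our own data-structure choices: clusters are sets of senses, and each correct sense is mapped to the gold concept it refers to):

from collections import Counter

def resolver_precision(proposed_cluster, gold_clusters, sense_to_gold):
    # proposed_cluster: set of senses (only clusters with >= 2 senses are scored).
    # sense_to_gold: maps each sense to its gold concept (incorrect senses are
    # assumed to have been removed beforehand, as in this evaluation).
    # gold_clusters: maps each gold concept to its set of senses.
    plurality_concept, _ = Counter(sense_to_gold[s] for s in proposed_cluster).most_common(1)[0]
    aligned = gold_clusters[plurality_concept]            # best-matching real-world concept
    return len(proposed_cluster & aligned) / len(proposed_cluster)

# Resolver recall is computed analogously, swapping the roles of proposed and gold clusters.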
577 Resolver Metric Clustering Metric Category # of Recall Precision Recall F1 Baseline Precision Recall F1 Baseline Phrases Set Size Recall Recall athlete 3886 80 0.69 0.69 0.69 0.46 0.41 0.45 0.43 0.00 city 5710 50 0.66 0.52 0.58 0.42 0.30 0.10 0.15 0.00 coach 889 60 0.90 0.93 0.91 0.43 0.83 0.88 0.85 0.00 company 3553 60 0.93 0.71 0.81 0.39 0.79 0.44 0.57 0.00 country 693 60 0.98 0.50 0.66 0.30 0.94 0.15 0.26 0.00 sportsteam 2085 100 0.95 0.48 0.64 0.29 0.87 0.15 0.26 0.00 stadiumoreventvenue 1662 100 0.84 0.73 0.78 0.39 0.65 0.49 0.56 0.00 Table 2: Synonym resolution performance of ConceptResolver We attribute the differences in precision across categories to the different relations available for each category. For example, none of the relations for athlete uniquely identify a single athlete, and therefore synonymy cannot be accurately represented in the relation view. Adding more relations to NELL’s ontology may improve performance in these cases. We note that the synonym resolution portion of ConceptResolver is tuned for precision, and that perfect recall is not necessarily attainable. Many word senses participate in only one relation, which may not provide enough evidence to detect synonymy. As NELL continually extracts more knowledge, it is reasonable for ConceptResolver to abstain from these decisions until more evidence is available. 6 Discussion In order for information extraction systems to accurately represent knowledge, they must represent noun phrases, concepts, and the many-to-many mapping from noun phrases to concepts they denote. We present ConceptResolver, a system which takes extracted relations between noun phrases and identifies latent concepts that the noun phrases refer to. Two lessons from ConceptResolver are that (1) ontologies aid word sense induction, as the senses of polysemous words tend to have distinct semantic types, and (2) redundant information, in the form of string similarity and extracted relations, helps train accurate synonym classifiers. An interesting aspect of ConceptResolver is that its performance should improve as NELL’s ontology and knowledge base grow in size. Defining finer-grained categories will improve performance at word sense induction, as more precise categories will contain fewer ambiguous noun phrases. Both extracting more relation instances and adding new relations to the ontology will improve synonym resolution. These scaling properties allow manual effort to be spent on high-level ontology operations, not on labeling individual instances. We are interested in observing ConceptResolver’s performance as NELL’s ontology and knowledge base grow. For simplicity of exposition, we have implicitly assumed thus far that the categories in NELL’s ontology are mutually exclusive. However, the ontology contains compatible categories like male and politician, where a single concept can belong to both categories. In these situations, the one-senseper-category assumption may create too many word senses. We currently address this problem with a heuristic post-processing step: we merge all pairs of concepts that belong to compatible categories and share at least one referring noun phrase. This heuristic typically works well, however there are problems. An example of a problematic case is “obama,” which NELL believes is a male, female, and politician. In this case, the heuristic cannot decide which “obama” (the male or female) is the politician. As such cases are fairly rare, we have not developed a more sophisticated solution to this problem. 
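As a rough illustration, this post-processing step can be sketched as a single merging pass (our own simplification: each concept is a (category, set of noun phrases) pair, compatible() is a lookup into the ontology, and the merged concept keeps the first category label; the actual heuristic may differ in these details):

def merge_compatible_concepts(concepts, compatible):
    # concepts: list of (category, noun_phrase_set) pairs.
    # compatible(c1, c2): True if the ontology allows a concept in both categories.
    merged, used = [], set()
    for i, (category_i, phrases_i) in enumerate(concepts):
        if i in used:
            continue
        for j in range(i + 1, len(concepts)):
            category_j, phrases_j = concepts[j]
            # Merge concepts of compatible categories that share a referring noun phrase.
            if j not in used and compatible(category_i, category_j) and phrases_i & phrases_j:
                phrases_i = phrases_i | phrases_j
                used.add(j)
        merged.append((category_i, phrases_i))
    return merged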
ConceptResolver has been integrated into NELL’s continual learning process. NELL’s current set of concepts can be viewed through the knowledge base browser on NELL’s website, http://rtw.ml. cmu.edu. Acknowledgments This work is supported in part by DARPA (under contract numbers FA8750-08-1-0009 and AF875009-C-0179) and by Google. We also gratefully acknowledge the contributions of our colleagues on the NELL project, Jamie Callan for the ClueWeb09 web crawl and Yahoo! for use of their M45 computing cluster. Finally, we thank the anonymous reviewers for their helpful comments. 578 References Eneko Agirre and Aitor Soroa. 2007. Semeval-2007 task 02: Evaluating word sense induction and discrimination systems. In Proceedings of the 4th International Workshop on Semantic Evaluations, pages 7–12. Michele Banko, Michael J. Cafarella, Stephen Soderland, Matt Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In Proceedings of the Twentieth International Joint Conference on Artificial Intelligence, pages 2670–2676. Sugato Basu, Mikhail Bilenko, and Raymond J. Mooney. 2004. A probabilistic framework for semi-supervised clustering. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 59–68. Indrajit Bhattacharya and Lise Getoor. 2006. A latent dirichlet model for unsupervised entity resolution. In Proceedings of the 2006 SIAM International Conference on Data Mining, pages 47–58. Indrajit Bhattacharya and Lise Getoor. 2007. Collective entity resolution in relational data. ACM Transactions on Knowledge Discovery from Data, 1(1). Avrim Blum and Tom Mitchell. 1998. Combining labeled and unlabeled data with co-training. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pages 92–100. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, pages 1247–1250. Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R. Hruschka Jr., and Tom M. Mitchell. 2010. Toward an architecture for neverending language learning. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence. William W. Cohen, Pradeep Ravikumar, and Stephen E. Fienberg. 2003. A Comparison of String Distance Metrics for Name-Matching Tasks. In Proceedings of the IJCAI-03 Workshop on Information Integration, pages 73–78, August. Ivan P. Fellegi and Alan B. Sunter. 1969. A theory for record linkage. Journal of the American Statistical Association, 64:1183–1210. Lise Getoor and Christopher P. Diehl. 2005. Link mining: a survey. SIGKDD Explorations Newsletter, 7:3– 12. Aria Haghighi and Dan Klein. 2010. Coreference resolution in a modular, entity-centered model. In Proceedings of the 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 385–393, June. Abraham Kaplan. 1955. An experimental study of ambiguity and context. Mechanical Translation, 2:39–46. Dan Klein, Sepandar D. Kamvar, and Christopher D. Manning. 2002. From instance-level constraints to space-level constraints: Making the most of prior knowledge in data clustering. In Proceedings of the Nineteenth International Conference on Machine Learning, pages 307–314. Dekang Lin and Patrick Pantel. 2002. Concept discovery from text. 
In Proceedings of the 19th International Conference on Computational linguistics - Volume 1, pages 1–7. Suresh Manandhar, Ioannis P. Klapaftis, Dmitriy Dligach, and Sameer S. Pradhan. 2010. Semeval-2010 task 14: Word sense induction & disambiguation. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 63–68. Andrew McCallum and Ben Wellner. 2004. Conditional models of identity uncertainty with application to noun coreference. In Advances in Neural Information Processing Systems 18. Andrew McCallum, Kamal Nigam, and Lyle H. Ungar. 2000. Efficient clustering of high-dimensional data sets with application to reference matching. In Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 169–178. George A. Miller. 1995. Wordnet: A lexical database for english. Communications of the ACM, 38:39–41. Alvaro Monge and Charles Elkan. 1996. The field matching problem: Algorithms and applications. In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, pages 267–270. Vincent Ng. 2008. Unsupervised models for coreference resolution. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 640–649. Patrick Pantel and Dekang Lin. 2002. Discovering word senses from text. In Proceedings of ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 613–619. Hoifung Poon and Pedro Domingos. 2007. Joint inference in information extraction. In Proceedings of the 22nd AAAI Conference on Artificial Intelligence - Volume 1, pages 913–918. Hoifung Poon and Pedro Domingos. 2008. Joint unsupervised coreference resolution with markov logic. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, EMNLP ’08, pages 650–659. 579 Pradeep Ravikumar and William W. Cohen. 2004. A hierarchical graphical model for record linkage. In Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, pages 454–461. Ariel S. Schwartz and Marti A. Hearst. 2003. A simple algorithm for identifying abbreviation definitions in biomedical text. In Proceedings of the Pacific Symposium on BIOCOMPUTING 2003, pages 451–462. Parag Singla and Pedro Domingos. 2006. Entity resolution with markov logic. In Proceedings of the Sixth International Conference on Data Mining, pages 572– 582. Rion Snow, Daniel Jurafsky, and Andrew Y. Ng. 2006. Semantic taxonomy induction from heterogenous evidence. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, pages 801–808, Morristown, NJ, USA. Rion Snow, Sushant Prakash, Daniel Jurafsky, and Andrew Y. Ng. 2007. Learning to merge word senses. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1005–1014, June. Richard C. Wang and William W. Cohen. 2007. Language-independent set expansion of named entities using the web. In Proceedings of the Seventh IEEE International Conference on Data Mining, pages 342– 350. William E. Winkler. 1999. The state of record linkage and current research problems. Technical report, Statistical Research Division, U.S. Census Bureau. Eric P. Xing, Andrew Y. Ng, Michael I. Jordan, and Stuart Russell. 2003. Distance metric learning, with application to clustering with side-information. In Advances in Neural Information Processing Systems 17, pages 505–512. 
Alexander Yates and Oren Etzioni. 2007. Unsupervised resolution of objects and relations on the web. In Proceedings of the 2007 Annual Conference of the North American Chapter of the Association for Computational Linguistics. 580
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 581–589, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Semantic Representation of Negation Using Focus Detection Eduardo Blanco and Dan Moldovan Human Language Technology Research Institute The University of Texas at Dallas Richardson, TX 75080 USA {eduardo,moldovan}@hlt.utdallas.edu Abstract Negation is present in all human languages and it is used to reverse the polarity of part of statements that are otherwise affirmative by default. A negated statement often carries positive implicit meaning, but to pinpoint the positive part from the negative part is rather difficult. This paper aims at thoroughly representing the semantics of negation by revealing implicit positive meaning. The proposed representation relies on focus of negation detection. For this, new annotation over PropBank and a learning algorithm are proposed. 1 Introduction Understanding the meaning of text is a long term goal in the natural language processing community. Whereas philosophers and linguists have proposed several theories, along with models to represent the meaning of text, the field of computational linguistics is still far from doing this automatically. The ambiguity of language, the need to detect implicit knowledge, and the demand for commonsense knowledge and reasoning are a few of the difficulties to overcome. Substantial progress has been made, though, especially on detection of semantic relations, ontologies and reasoning methods. Negation is present in all languages and it is always the case that statements are affirmative by default. Negation is marked and it typically signals something unusual or an exception. It may be present in all units of language, e.g., words (incredible), clauses (He doesn’t have friends). Negation and its correlates (truth values, lying, irony, false or contradictory statements) are exclusive characteristics of humans (Horn, 1989; Horn and Kato, 2000). Negation is fairly well-understood in grammars; the valid ways to express a negation are documented. However, there has not been extensive research on detecting it, and more importantly, on representing the semantics of negation. Negation has been largely ignored within the area of semantic relations. At first glance, one would think that interpreting negation could be reduced to finding negative keywords, detect their scope using syntactic analysis and reverse its polarity. Actually, it is more complex. Negation plays a remarkable role in text understanding and it poses considerable challenges. Detecting the scope of negation in itself is challenging: All vegetarians do not eat meat means that vegetarians do not eat meat and yet All that glitters is not gold means that it is not the case that all that glitters is gold (so out of all things that glitter, some are gold and some are not). In the former example, the universal quantifier all has scope over the negation; in the latter, the negation has scope over all. In logic, two negatives always cancel each other out. On the other hand, in language this is only theoretically the case: she is not unhappy does not mean that she is happy; it means that she is not fully unhappy, but she is not happy either. Some negated statements carry a positive implicit meaning. For example, cows do not eat meat implies that cows eat something other than meat. Otherwise, the speaker would have stated cows do not eat. 
A clearer example is the correct and yet puzzling statement tables do not eat meat. This sentence sounds 581 unnatural because of the underlying positive statement (i.e., tables eat something other than meat). Negation can express less than or in between when used in a scalar context. For example, John does not have three children probably means that he has either one or two children. Contrasts may use negation to disagree about a statement and not to negate it, e.g., That place is not big, it is massive defines the place as massive, and therefore, big. 2 Related Work Negation has been widely studied outside of computational linguistics. In logic, negation is usually the simplest unary operator and it reverses the truth value. The seminal work by Horn (1989) presents the main thoughts in philosophy and psychology. Linguists have found negation a complex phenomenon; Huddleston and Pullum (2002) dedicate over 60 pages to it. Negation interacts with quantifiers and anaphora (Hintikka, 2002), and influences reasoning (Dowty, 1994; S´anchez Valencia, 1991). Zeijlstra (2007) analyzes the position and form of negative elements and negative concords. Rooth (1985) presented a theory of focus in his dissertation and posterior publications (e.g., Rooth (1992)). In this paper, we follow the insights on scope and focus of negation by Huddleston and Pullum (2002) rather than Rooth’s (1985). Within natural language processing, negation has drawn attention mainly in sentiment analysis (Wilson et al., 2009; Wiegand et al., 2010) and the biomedical domain. Recently, the Negation and Speculation in NLP Workshop (Morante and Sporleder, 2010) and the CoNLL-2010 Shared Task (Farkas et al., 2010) targeted negation mostly on those subfields. Morante and Daelemans (2009) and ¨Ozg¨ur and Radev (2009) propose scope detectors using the BioScope corpus. Councill et al. (2010) present a supervised scope detector using their own annotation. Some NLP applications deal indirectly with negation, e.g., machine translation (van Munster, 1988), text classification (Rose et al., 2003) and recognizing entailments (Bos and Markert, 2005). Regarding corpora, the BioScope corpus annotates negation marks and linguistic scopes exclusively on biomedical texts. It does not annotate focus and it purposely ignores negations such as (talking about the reaction of certain elements) in NK3.3 cells is not always identical (Vincze et al., 2008), which carry the kind of positive meaning this work aims at extracting (in NK3.3 cells is often identical). PropBank (Palmer et al., 2005) only indicates the verb to which a negation mark attaches; it does not provide any information about the scope or focus. FrameNet (Baker et al., 1998) does not consider negation and FactBank (Saur´ı and Pustejovsky, 2009) only annotates degrees of factuality for events. None of the above references aim at detecting or annotating the focus of negation in natural language. Neither do they aim at carefully representing the meaning of negated statements nor extracting implicit positive meaning from them. 3 Negation in Natural Language Simply put, negation is a process that turns a statement into its opposite. Unlike affirmative statements, negation is marked by words (e.g., not, no, never) or affixes (e.g., -n’t, un-). Negation can interact with other words in special ways. For example, negated clauses use different connective adjuncts that positive clauses do: neither, nor instead of either, or. 
The so-called negatively-oriented polaritysensitive items (Huddleston and Pullum, 2002) include, among many others, words starting with any(anybody, anyone, anywhere, etc.), the modal auxiliaries dare and need and the grammatical units at all, much and till. Negation in verbs usually requires an auxiliary; if none is present, the auxiliary do is inserted (I read the paper vs. I didn’t read the paper). 3.1 Meaning of Negated Statements State-of-the-art semantic role labelers (e.g., the ones trained over PropBank) do not completely represent the meaning of negated statements. Given John didn’t build a house to impress Mary, they encode AGENT(John, build), THEME(a house, build), PURPOSE(to impress Mary, build), NEGATION(n’t, build). This representation corresponds to the interpretation it is not the case that John built a house to impress Mary, ignoring that it is implicitly stated that John did build a house. Several examples are shown Table 1. For all statements s, current role labelers would only encode it is not the case that s. However, examples (1–7) 582 Statement Interpretation 1 John didn’t build a house :to::::::: impress :::: Mary. John built a house for other purpose. 2 I don’t have a watch ::: with::: me. I have a watch, but it is not with me. 3 We don’t have an evacuation plan ::: for ::::::: flooding. We have an evacuation plan for something else (e.g., fire). 4 They didn’t release the UFO files :::: until :::: 2008. They released the UFO files in 2008. 5 John doesn’t know ::::: exactly how they met. John knows how they met, but not exactly. 6 His new job doesn’t require ::::: driving. His new job has requirements, but it does not require driving. 7 His new job doesn’t require driving :: yet. His new job requires driving in the future. 8 His new job doesn’t:::::: require anything. His new job has no requirements. 9 A panic on Wall Street doesn’t exactly ::::: inspire confidence. A panic on Wall Streen discourages confidence. Table 1: Examples of negated statements and their interpretations considering underlying positive meaning. A wavy underline indicates the focus of negation (Section 3.3); examples (8, 9) do not carry any positive meaning. carry positive meaning underneath the direct meaning. Regarding (4), encoding that the UFO files were released in 2008 is crucial to fully interpret the statement. (6–8) show that different verb arguments modify the interpretation and even signal the existence of positive meaning. Examples (5, 9) further illustrate the difficulty of the task; they are very similar (both have AGENT, THEME and MANNER) and their interpretation is altogether different. Note that (8, 9) do not carry any positive meaning; even though their interpretations do not contain a verbal negation, the meaning remains negative. Some examples could be interpreted differently depending on the context (Section 4.2.1). This paper aims at thoroughly representing the semantics of negation by revealing implicit positive meaning. The main contributions are: (1) interpretation of negation using focus detection; (2) focus of negation annotation over all PropBank negated sentences1; (3) feature set to detect the focus of negation; and (4) model to semantically represent negation and reveal its underlying positive meaning. 
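To make the limitation concrete, the sketch below (an illustration written for this discussion, not tooling from the present work) stores the role tuples a standard semantic role labeler would output for the example above. Every statement in Table 1 receives a structure of this shape, and nothing in it separates the negated part from the implicit positive part, since the negation is simply attached to the verb.

```python
from typing import List, NamedTuple

class Role(NamedTuple):
    label: str      # e.g., "AGENT", "THEME", "PURPOSE", "NEGATION"
    filler: str     # argument text
    predicate: str  # the verb the role attaches to

# Output a typical semantic role labeler would produce for
# "John didn't build a house to impress Mary."
srl_output: List[Role] = [
    Role("AGENT", "John", "build"),
    Role("THEME", "a house", "build"),
    Role("PURPOSE", "to impress Mary", "build"),
    Role("NEGATION", "n't", "build"),
]

def naive_reading(roles: List[Role]) -> str:
    """The only interpretation recoverable from this representation:
    'it is not the case that s'.  The implicit positive part
    (John did build a house) is lost."""
    return "it is not the case that: " + ", ".join(
        f"{r.label}({r.filler}, {r.predicate})"
        for r in roles if r.label != "NEGATION"
    )

print(naive_reading(srl_output))
```

Section 4 refines this representation by marking a single focused role instead of negating the whole proposition.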
3.2 Negation Types Huddleston and Pullum (2002) distinguish four contrasts for negation: • Verbal if the marker of negation is grammatically associated with the verb (I did not see anything at all); non-verbal if it is associated with a dependent of the verb (I saw nothing at all). • Analytic if the sole function of the negated mark is to mark negation (Bill did not go); synthetic if it has some other function as well ([Nobody]AGENT went to the meeting). 1Annotation will be available on the author’s website • Clausal if the negation yields a negative clause (She didn’t have a large income); subclausal otherwise (She had a not inconsiderable income). • Ordinary if it indicates that something is not the case, e.g., (1) She didn’t have lunch with my old man: he couldn’t make it; metalinguistic if it does not dispute the truth but rather reformulates a statement, e.g., (2) She didn’t have lunch with your ‘old man’: she had lunch with your father. Note that in (1) the lunch never took place, whereas in (2) a lunch did take place. In this paper, we focus on verbal, analytic, clausal, and both metalinguistic and ordinary negation. 3.3 Scope and Focus Negation has both scope and focus and they are extremely important to capture its semantics. Scope is the part of the meaning that is negated. Focus is that part of the scope that is most prominently or explicitly negated (Huddleston and Pullum, 2002). Both concepts are tightly connected. Scope corresponds to all elements any of whose individual falsity would make the negated statement true. Focus is the element of the scope that is intended to be interpreted as false to make the overall negative true. Consider (1) Cows don’t eat meat and its positive counterpart (2) Cows eat meat. The truth conditions of (2) are: (a) somebody eats something; (b) cows are the ones who eat; and (c) meat is what is eaten. In order for (2) to be true, (a–c) have to be true. And the falsity of any of them is sufficient to make (1) true. In other words, (1) would be true if nobody eats, cows don’t eat or meat is not eaten. Therefore, all three statements (a–c) are inside the scope of (1). The focus is more difficult to identify, especially 583 1 AGENT(the cow, didn’t eat) THEME(grass, didn’t eat) INSTRUMENT(with a fork, didn’t eat) 2 NOT[AGENT(the cow, ate) THEME(grass, ate) INSTRUMENT(with a fork, ate)] 3 NOT[AGENT(the cow, ate)] THEME(grass, ate) INSTRUMENT(with a fork, ate) 4 AGENT(the cow, ate) NOT[THEME(grass, ate)] INSTRUMENT(with a fork, ate) 5 AGENT(the cow, ate) THEME(grass, ate) NOT[INSTRUMENT(with a fork, ate)] Table 2: Possible semantic representations for The cow didn’t eat grass with a fork. without knowing stress or intonation. Text understanding is needed and context plays an important role. The most probable focus for (1) is meat, which corresponds to the interpretation cows eat something else than meat. Another possible focus is cows, which yields someone eats meat, but not cows. Both scope and focus are primarily semantic, highly ambiguous and context-dependent. More examples can be found in Tables 1 and 3 and (Huddleston and Pullum, 2002, Chap. 9). 4 Approach to Semantic Representation of Negation Negation does not stand on its own. To be useful, it should be added as part of another existing knowledge representation. In this Section, we outline how to incorporate negation into semantic relations. 4.1 Semantic Relations Semantic relations capture connections between concepts and label them according to their nature. 
It is out of the scope of this paper to define them in depth, establish a set to consider or discuss their detection. Instead, we use generic semantic roles. Given s: The cow didn’t eat grass with a fork, typical semantic roles encode AGENT(the cow, eat), THEME(grass, eat), INSTRUMENT(with a fork, eat) and NEGATION(n’t, eat). This representation only differs on the last relation from the positive counterpart. Its interpretation is it is not the case that s. Several options arise to thoroughly represent s. First, we find it useful to consider the semantic representation of the affirmative counterpart: AGENT(the cow, ate), THEME(grass, ate), and INSTRUMENT(with a fork, ate). Second, we believe detecting the focus of negation is useful. Even though it is open to discussion, the focus corresponds to INSTRUMENT(with a fork, ate) Thus, the negated statement should be interpreted as the cow ate grass, but it did not do so using a fork. Table 2 depicts five different possible semantic representations. Option (1) does not incorporate any explicit representation of negation. It attaches the negated mark and auxiliary to eat; the negation is part of the relation arguments. This option fails to detect any underlying positive meaning and corresponds to the interpretation the cow did not eat, grass was not eaten and a fork was not used to eat. Options (2–5) embody negation into the representation with the pseudo-relation NOT. NOT takes as its argument an instantiated relation or set of relations and indicates that they do not hold. Option (2) includes all the scope as the argument of NOT and corresponds to the interpretation it is not the case that the cow ate grass with a fork. Like typical semantic roles, option (2) does not reveal the implicit positive meaning carried by statement s. Options (3–5) encode different interpretations: • (3) negates the AGENT; it corresponds to the cow didn’t eat, but grass was eaten with a fork. • (4) applies NOT to the THEME; it corresponds to the cow ate something with a fork, but not grass. • (5) denies the INSTRUMENT, encoding the meaning the cow ate grass, but it did not use a fork. Option (5) is preferred since it captures the best implicit positive meaning. It corresponds to the semantic representation of the affirmative counterpart after applying the pseudo-relation NOT over the focus of the negation. This fact justifies and motivates the detection of the focus of negation. 4.2 Annotating the Focus of Negation Due to the lack of corpora containing annotation for focus of negation, new annotation is needed. An obvious option is to add it to any text collection. However, building on top of publicly available resources is a better approach: they are known by the community, they contain useful information for detecting the focus of negation and tools have already been developed to predict their annotation. 584 Statement V A0 A1 A2 A4 TMP MNR ADV LOC PNC EXT DIS MOD 1 Even if [that deal]A1 isn’t [:::::: revived]V, NBC hopes to find another. – Even if that deal is suppressed, NBC hopes to find another one. ⋆- + - - - - - - - - 2 [He]A0 [simply]MDIS [ca]MMODn’t [stomach]V [::: the:::: taste::: of ::::: Heinz]A1, she says. – He simply can stomach any ketchup but Heinz’s. + + ⋆- - - - - - - - + + 3 [A decision]A1 isn’t [expected]V [ :::: until ::::: some:::: time::::: next :::: year]MTMP. – A decision is expected at some time next year. + - + - - ⋆- - - - - 4 [...] 
it told the SEC [it]A0 [could]MMODn’t [provide]V [financial statements]A1 [by the end of its first extension]MTMP “[::::::: without:::::::::::: unreasonable::::::: burden :: or:::::::: expense]MMNR”. – It could provide them by that time with a huge overhead. + + + - - + ⋆- - - - - + 5 [For example]MDIS, [P&G]A0 [up until now]MTMP hasn’t [sold]V [coffee]A1 [:: to::::::: airlines]A2 and does only limited business with hotels and large restaurant chains. – Up until now, P&G has sold coffee, but not to airlines. + + + ⋆- + - - - - - + 6 [Decent life ...]A1 [wo]MMODn’t be [restored]V [ ::::: unless::: the::::::::::: government:::::::: reclaims::: the:::::: streets::::: from::: the:::::: gangs]MADV. – It will be restored if the government reclaims the streets from the gangs. + - + - - - - ⋆- - - - + 7 But [ :::: quite::a ::: few::::::: money::::::::: managers]A0 aren’t [buying]V [it]A1. – Very little managers are buying it. + ⋆+ - - - - - - - - 8 [When]MTMP [she]A0 isn’t [performing]V [ :: for::: an :::::::: audience]MPNC, she prepares for a song by removing the wad of gum from her mouth, and indicates that she’s finished by sticking the gum back in. – She prepares in that way when she is performing, but not for an audience. + + - - - + - - - ⋆- 9 [The company’s net worth]A1 [can]MMODnot [fall]V [:::::: below ::::: $185 :::::: million]A4 [after the dividends are issued]MTMP. – It can fall after the dividends are issued, but not below $185 million. + - + - ⋆+ - - - - - - + 10 Mario Gabelli, an expert at spotting takeover candidates, says that [takeovers]A1 aren’t [:::::: totally]MEXT [gone]V. – Mario Gabelli says that takeovers are partially gone. + - + - - - - - - - ⋆Table 3: Negated statements from PropBank and their interpretation considering underlying positive meaning. Focus is underlined; ‘+’ indicates that the role is present, ‘-’ that it is not and ‘⋆’ that it corresponds to the focus of negation. We decided to work over PropBank. Unlike other resources (e.g., FrameNet), gold syntactic trees are available. Compared to the BioScope corpus, PropBank provides semantic annotation and is not limited to the biomedical domain. On top of that, there has been active research on predicting PropBank roles for years. The additional annotation can be readily used by any system trained with PropBank, quickly incorporating interpretation of negation. 4.2.1 Annotation Guidelines The focus of a negation involving verb v is resolved as: • If it cannot be inferred that an action v occurred, focus is role MNEG. • Otherwise, focus is the role that is most prominently negated. All decisions are made considering as context the previous and next sentence. The mark -NOT is used to indicate the focus. Consider the following statement (file wsj 2282, sentence 16). [While profitable]MADV1,2, [it]A11,A02 “was[n’t]MNEG1 [growing]v1 and was[n’t]MNEG2 [providing]v2 [a satisfactory return on invested capital]A12,” he says. The previous sentence is Applied, then a closely held company, was stagnating under the management of its controlling family. Regarding the first verb (growing), one cannot infer that anything was growing, so focus is MNEG. For the second verb (providing), it is implicitly stated that the company was providing a not satisfactory return on investment, therefore, focus is A1. The guidelines assume that the focus corresponds to a single role or the verb. In cases where more than one role could be selected, the most likely focus is chosen; context and text understanding are key. 
We define the most likely focus as the one that yields the most meaningful implicit information. For example, in (Table 3, example 2) [He]A0 could be chosen as focus, yielding someone can stomach the taste of Heinz, but not him. However, given the previous sentence ([. . . ] her husband is 585 While profitable MADV 5 MADV * it A1 5 A0 * was n’t MNEG-NOT! growing and was n’t MNEG < providing a satisfacory return ... A1-NOT u Figure 1: Example of focus annotation (marked with -NOT). Its interpretation is explained in Section 4.2.2. adamant about eating only Hunt’s ketchup), it is clear that the best option is A1. Example (5) has a similar ambiguity between A0 and A2, example (9) between MTMP and A4, etc. The role that yields the most useful positive implicit information given the context is always chosen as focus. Table 3 provides several examples having as their focus different roles. Example (1) does not carry any positive meaning, the focus is V. In (2–10) the verb must be interpreted as affirmative, as well as all roles except the one marked with ‘⋆’ (i.e., the focus). For each example, we provide PropBank annotation (top), the new annotation (i.e., the focus, bottom right) and its interpretation (bottom left). 4.2.2 Interpretation of -NOT The mark -NOT is interpreted as follows: • If MNEG-NOT(x, y), then verb y must be negated; the statement does not carry positive meaning. • If any other role is marked with -NOT, ROLENOT(x, y) must be interpreted as it is not the case that x is ROLE of y. Unmarked roles are interpreted positive; they correspond to implicit positive meaning. Role labels (A0, MTMP, etc.) maintain the same meaning from PropBank (Palmer et al., 2005). MNEG can be ignored since it is overwritten by -NOT. The new annotation for the example (Figure 1) must be interpreted as: While profitable, it (the company) was not growing and was providing a not satisfactory return on investment. Paraphrasing, While profitable, it was shrinking or idle and was providing an unsatisfactory return on investment. We discover an entailment and an implicature respectively. 4.3 Annotation Process We annotated the 3,993 verbal negations signaled with MNEG in PropBank. Before annotation began, all semantic information was removed by mapping all role labels to ARG. This step is necessary to ensure that focus selection is not biased by the semanRole #Inst. Focus # – % A1 2,930 1,194 – 40.75 MNEG 3,196 1,109 – 34.70 MTMP 609 246 – 40.39 MMNR 250 190 – 76.00 A2 501 179 – 35.73 MADV 466 94 – 20.17 A0 2,163 73 – 3.37 MLOC 114 22 – 19.30 MEXT 25 22 – 88.00 A4 26 22 – 84.62 A3 48 18 – 37.50 MDIR 35 13 – 37.14 MPNC 87 9 – 10.34 MDIS 287 6 – 2.09 Table 4: Roles, total instantiations and counts corresponding to focus over training and held-out instances. tic labels provided by PropBank. As annotation tool, we use Jubilee (Choi et al., 2010). For each instance, annotators decide the focus given the full syntactic tree, as well as the previous and next sentence. A post-processing step incorporates focus annotation to the original PropBank by adding -NOT to the corresponding role. In a first round, 50% of instances were annotated twice. Inter-annotator agreement was 0.72. After careful examination of the disagreements, they were resolved and annotators were given clearer instructions. The main point of conflict was selecting a focus that yields valid implicit meaning, but not the most valuable (Section 4.2.1). Due to space constraints, we cannot elaborate more on this issue. 
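Referring back to the interpretation rule for -NOT in Section 4.2.2, the following sketch (a simplified illustration, not the annotation tooling used here; role labels are plain strings and the verb of the affirmative counterpart is passed in directly) spells the rule out: the focused role is wrapped in NOT and every other role is read as affirmative, while a focus on the negation mark itself means the whole event is denied and no positive meaning is extracted.

```python
from typing import List, NamedTuple

class Role(NamedTuple):
    label: str   # e.g., "AGENT", "THEME", "INSTRUMENT", "NEGATION"
    filler: str  # argument text

def interpret(verb: str, roles: List[Role], focus_label: str) -> dict:
    """Apply the -NOT interpretation rule: negate only the focused
    element, keep the remaining roles as implicit positive meaning."""
    if focus_label == "NEGATION":
        # Focus on the negation mark (MNEG in the PropBank annotation):
        # the event itself is denied, nothing positive is implied.
        return {"negated": f"NOT[{verb}]", "affirmative": []}
    focused = next(r for r in roles if r.label == focus_label)
    others = [r for r in roles if r.label not in (focus_label, "NEGATION")]
    return {
        "negated": f"NOT[{focused.label}({focused.filler}, {verb})]",
        "affirmative": [f"{r.label}({r.filler}, {verb})" for r in others],
    }

# "The cow didn't eat grass with a fork"; the verb of the affirmative
# counterpart is "ate" and the focus is the instrument.
roles = [Role("AGENT", "the cow"), Role("THEME", "grass"),
         Role("INSTRUMENT", "with a fork"), Role("NEGATION", "n't")]
print(interpret("ate", roles, focus_label="INSTRUMENT"))
# Matches option (5) in Table 2: the cow ate grass, but not with a fork.
```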
The remaining instances were annotated once. Table 4 depicts counts for each role. 5 Learning Algorithm We propose a supervised learning approach. Each sentence from PropBank containing a verbal negation becomes an instance. The decision to be made is to choose the role that corresponds to the focus. 586 No. Feature Values Explanation 1 role-present {y, n} is role present? 2 role-f-pos {DT, NNP, ...} First POS tag of role 3 role-f-word {This, to, overseas, ... } First word of role 4 role-length N number fo words in role 5 role-posit N position within the set of roles 6 A1-top {NP, SBAR, PP, ...} syntactic node of A1 7 A1-postag {y, n} does A1 contain the tag postag? 8 A1-keyword {y, n} does A1 cotain the word keyword? 9 first-role {A1, MLOC, ...} label of the first role 10 last-role {A1, MLOC, ...} label of the last role 11 verb-word {appear, describe, ... } main verb 12 verb-postag {VBN, VBZ, ...} POS tag main verb 13 VP-words {were-n’t, be-quickly, ... } sequence of words of VP until verb 14 VP-postags {VBP-RB-RB-VBG, VBN-VBG, ...} sequence of POS tags of VP until verb 15 VP-has-CC {y, n} does the VP contain a CC? 16 VP-has-RB {y, n} does the VP contain a RB? 17 predicate {rule-out, come-up, ... } predicate 18 them-role-A0 {preparer, assigner, ... } thematic role for A0 19 them-role-A1 {effort, container, ... } thematic role for A1 20 them-role-A2 {audience, loaner, ... } thematic role for A2 21 them-role-A3 {intensifier, collateral, ... } thematic role for A3 22 them-role-A4 {beneficiary, end point, ... } thematic role for A4 Table 5: Full set of features. Features (1–5) are extracted for all roles, (7, 8) for all POS tags and keywords detected. The 3,993 annotated instances are divided into training (70%), held-out (10%) and test (20%). The held-out portion is used to tune the feature set and results are reported for the test split only, i.e., using unseen instances. Because PropBank adds semantic role annotation on top of the Penn TreeBank, we have available syntactic annotation and semantic role labels for all instances. 5.1 Baselines We implemented four baselines to measure the difficulty of the task: • A1: select A1, if not present then MNEG. • FIRST: select first role. • LAST: select last role. • BASIC: same than FOC-DET but only using features last role and flags indicating the presence of roles. 5.2 Selecting Features The BASIC baseline obtains a respectable accuracy of 61.38 (Table 6). Most errors correspond to instances having as focus the two most likely foci: A1 and MNEG (Table 4). We improve BASIC with an extended feature set which targets especially A1 and the verb (Table 5). Features (1–5) are extracted for each role and capture their presence, first POS tag and word, length and position within the roles present for that instance. Features (6–8) further characterize A1. A1-postag is extracted for the following POS tags: DT, JJ, PRP, CD, RB, VB and WP; A1-keyword for the following words: any, anybody, anymore, anyone, anything, anytime, anywhere, certain, enough, full, many, much, other, some, specifics, too and until. These lists of POS tags and keywords were extracted after manual examination of training examples and aim at signaling whether this role correspond to the focus. 
Examples of A1 corresponding to the focus and including one of the POS tags or keywords are: • [Apparently]MADV, [the respondents]A0 do n’t think [:::: that::: an:::::::::: economic :::::::::: slowdown:::::: would:::::: harm ::: the:::::: major::::::::::: investment :::::::: markets ::::::: veryRB :::::: much]A1. (i.e., the responders think it would harm the investements little). 587 • [The oil company]A0 does n’t anticipate [:::::::::: anykeyword :::::::::::: additional :::::::: charges]A1 (i.e., the company anticipates no additional charges). • [Money managers and other bond buyers]A0 haven’t [shown]V [ :::::::::::: muchkeyword :::::::: interest ::in:::: the :::::::: Refcorp :::::: bonds]A1 (i.e., they have shown little interest in the bonds). • He concedes H&R Block is well-entrenched and a great company, but says “[it]A1 doesn’t [grow]V [:::: fast:::::::::::::: enoughkeyword :::: for::us]A1” (i.e., it is growing too slow for us). • [We]A0 don’t [see]V [:a:::::::::: domestic ::::::: source:::: for :::::::::::: somekeyword ::: of:::: our:::::::: HDTV::::::::::::: requirements ]A1, and that’s a source of concern [. . . ] (i.e., we see a domestic source for some other of our HDTV requirements) Features (11–16) correspond to the main verb. VP-words (VP-postag) captures the full sequence of words (POS tags) from the beginning of the VP until the main verb. Features (15–16) check for POS tags as the presence of certain tags usually signal that the verb is not the focus of negation (e.g., [Thus]MDIS, he asserts, [Lloyd’s]A0 [[ca]MMODn’t [react]v [:::::::::: quicklyRB]MMNR [to competition]A1]VP). Features (17–22) tackle the predicate, which includes the main verb and may include other words (typically prepositions). We consider the words in the predicate, as well as the specific thematic roles for each numbered argument. This is useful since PropBank uses different numbered arguments for the same thematic role depending on the frame (e.g., A3 is used as PURPOSE in authorize.01 and as INSTRUMENT in avert.01). 6 Experiments and Results As a learning algorithm, we use bagging with C4.5 decision trees. This combination is fast to train and test, and typically provides good performance. More features than the ones depicted were tried, but we only report the final set. For example, the parent node for all roles was considered and discarded. We name the model considering all features and trained using bagging with C4.5 trees FOC-DET. Results over the test split are depicted in Table 6. Simply choosing A1 as the focus yields an accuracy of 42.11. A better baseline is to always pick the last role (58.39 accuracy). Feeding the learning algoSystem Accuracy A1 42.11 FIRST 7.00 LAST 58.39 BASIC 61.38 FOC-DET 65.50 Table 6: Accuracies over test split. rithm exclusively the label corresponding to the last role and flags indicating the presence of roles yields 61.38 accuracy (BASIC baseline). Having an agreement of 0.72, there is still room for improvement. The full set of features yields 65.50 accuracy. The difference in accuracy between BASIC and FOC-DET (4.12) is statistically significant (Z-value = 1.71). We test the significance of the difference in performance between two systems i and j on a set of ins instances with the Z-score test, where z = abs(erri,errj) σd , errk is the error made using set k and σd = q erri(1−erri) ins + errj(1−errj) ins . 7 Conclusions In this paper, we present a novel way to semantically represent negation using focus detection. 
Implicit positive meaning is identified, giving a thorough interpretation of negated statements. Due to the lack of corpora annotating the focus of negation, we have added this information to all the negations marked with MNEG in PropBank. A set of features is depicted and a supervised model proposed. The task is highly ambiguous and semantic features have proven helpful. A verbal negation is interpreted by considering all roles positive except the one corresponding to the focus. This has proven useful as shown in several examples. In some cases, though, it is not easy to obtain the meaning of a negated role. Consider (Table 3, example 5) P&G hasn’t sold coffee ::to:::::::: airlines. The proposed representation encodes P&G has sold coffee, but not to airlines. However, it is not said that the buyers are likely to have been other kinds of companies. Even without fully identifying the buyer, we believe it is of utmost importance to detect that P&G has sold coffee. Empirical data (Table 4) shows that over 65% of negations in PropBank carry implicit positive meaning. 588 References Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet Project. In Proceedings of the 17th international conference on Computational Linguistics, Montreal, Canada. Johan Bos and Katja Markert. 2005. Recognising Textual Entailment with Logical Inference. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 628–635, Vancouver, British Columbia, Canada. Jinho D. Choi, Claire Bonial, and Martha Palmer. 2010. Propbank Instance Annotation Guidelines Using a Dedicated Editor, Jubilee. In Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC’10), Valletta, Malta. Isaac Councill, Ryan McDonald, and Leonid Velikovich. 2010. What’s great and what’s not: learning to classify the scope of negation for improved sentiment analysis. In Proceedings of the Workshop on Negation and Speculation in Natural Language Processing, pages 51–59, Uppsala, Sweden. David Dowty. 1994. The Role of Negative Polarity and Concord Marking in Natural Language Reasoning. In Proceedings of Semantics and Linguistics Theory (SALT) 4, pages 114–144. Rich´ard Farkas, Veronika Vincze, Gy¨orgy M´ora, J´anos Csirik, and Gy¨orgy Szarvas. 2010. The CoNLL-2010 Shared Task: Learning to Detect Hedges and their Scope in Natural Language Text. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning, pages 1–12, Uppsala, Sweden. Jaakko Hintikka. 2002. Negation in Logic and in Natural Language. Linguistics and Philosophy, 25(5/6). Laurence R. Horn and Yasuhiko Kato, editors. 2000. Negation and Polarity - Syntactic and Semantic Perspectives (Oxford Linguistics). Oxford University Press, USA. Laurence R. Horn. 1989. A Natural History of Negation. University Of Chicago Press. Rodney D. Huddleston and Geoffrey K. Pullum. 2002. The Cambridge Grammar of the English Language. Cambridge University Press. Roser Morante and Walter Daelemans. 2009. Learning the Scope of Hedge Cues in Biomedical Texts. In Proceedings of the BioNLP 2009 Workshop, pages 28–36, Boulder, Colorado. Roser Morante and Caroline Sporleder, editors. 2010. Proceedings of the Workshop on Negation and Speculation in Natural Language Processing. University of Antwerp, Uppsala, Sweden. Arzucan ¨Ozg¨ur and Dragomir R. Radev. 2009. Detecting Speculations and their Scopes in Scientific Text. 
In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 1398–1407, Singapore. Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An Annotated Corpus of Semantic Roles. Computational Linguistics, 31(1):71–106. Mats Rooth. 1985. Association with Focus. Ph.D. thesis, Univeristy of Massachusetts, Amherst. Mats Rooth. 1992. A Theory of Focus Interpretation. Natural Language Semantics, 1:75–116. Carolyn P. Rose, Antonio Roque, Dumisizwe Bhembe, and Kurt Vanlehn. 2003. A Hybrid Text Classification Approach for Analysis of Student Essays. In In Building Educational Applications Using Natural Language Processing, pages 68–75. Victor S´anchez Valencia. 1991. Studies on Natural Logic and Categorial Grammar. Ph.D. thesis, University of Amsterdam. Roser Saur´ı and James Pustejovsky. 2009. FactBank: a corpus annotated with event factuality. Language Resources and Evaluation, 43(3):227–268. Elly van Munster. 1988. The treatment of Scope and Negation in Rosetta. In Proceedings of the 12th International Conference on Computational Linguistics, Budapest, Hungary. Veronika Vincze, Gyorgy Szarvas, Richard Farkas, Gyorgy Mora, and Janos Csirik. 2008. The BioScope corpus: biomedical texts annotated for uncertainty, negation and their scopes. BMC Bioinformatics, 9(Suppl 11):S9+. Michael Wiegand, Alexandra Balahur, Benjamin Roth, Dietrich Klakow, and Andr´es Montoyo. 2010. A survey on the role of negation in sentiment analysis. In Proceedings of the Workshop on Negation and Speculation in Natural Language Processing, pages 60–68, Uppsala, Sweden, July. Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2009. Recognizing Contextual Polarity: An Exploration of Features for Phrase-Level Sentiment Analysis. Computational Linguistics, 35(3):399–433. H. Zeijlstra. 2007. Negation in Natural Language: On the Form and Meaning of Negative Elements. Language and Linguistics Compass, 1(5):498–518. 589
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 52–61, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics A Fast and Accurate Method for Approximate String Search Ziqi Wang∗ School of EECS Peking University Beijing 100871, China [email protected] Gu Xu Microsoft Research Asia Building 2, No.5 Danling Street, Beijing 100080, China [email protected] Hang Li Microsoft Research Asia Building 2, No.5 Danling Street, Beijing 100080, China [email protected] Ming Zhang School of EECS Peking University Beijing 100871, China [email protected] Abstract This paper proposes a new method for approximate string search, specifically candidate generation in spelling error correction, which is a task as follows. Given a misspelled word, the system finds words in a dictionary, which are most “similar” to the misspelled word. The paper proposes a probabilistic approach to the task, which is both accurate and efficient. The approach includes the use of a log linear model, a method for training the model, and an algorithm for finding the top k candidates. The log linear model is defined as a conditional probability distribution of a corrected word and a rule set for the correction conditioned on the misspelled word. The learning method employs the criterion in candidate generation as loss function. The retrieval algorithm is efficient and is guaranteed to find the optimal k candidates. Experimental results on large scale data show that the proposed approach improves upon existing methods in terms of accuracy in different settings. 1 Introduction This paper addresses the following problem, referred to as approximate string search. Given a query string, a dictionary of strings (vocabulary), and a set of operators, the system returns the top k strings in the dictionary that can be transformed from the query string by applying several operators in the operator set. Here each operator is a rule that can replace a substring in the query string with another substring. The top k results are defined in ∗Contribution during internship at Microsoft Research Asia. terms of an evaluation measure employed in a specific application. The requirement is that the task must be conducted very efficiently. Approximate string search is useful in many applications including spelling error correction, similar terminology retrieval, duplicate detection, etc. Although certain progress has been made for addressing the problem, further investigation on the task is still necessary, particularly from the viewpoint of enhancing both accuracy and efficiency. Without loss of generality, in this paper we address candidate generation in spelling error correction. Candidate generation is to find the most possible corrections of a misspelled word. In such a problem, strings are words, and the operators represent insertion, deletion, and substitution of characters with or without surrounding characters, for example, “a”→“e” and “lly”→“ly”. Note that candidate generation is concerned with a single word; after candidate generation, the words surrounding it in the text can be further leveraged to make the final candidate selection, e.g., Li et al. (2006), Golding and Roth (1999). In spelling error correction, Brill and Moore (2000) proposed employing a generative model for candidate generation and a hierarchy of trie structures for fast candidate retrieval. Our approach is a discriminative approach and is aimed at improving Brill and Moore’s method. Okazaki et al. 
(2008) proposed using a logistic regression model for approximate dictionary matching. Their method is also a discriminative approach, but it is largely different from our approach in the following points. It formalizes the problem as binary classification and 52 assumes that there is only one rule applicable each time in candidate generation. Efficiency is also not a major concern for them, because it is for offline text mining. There are two fundamental problems in research on approximate string search: (1) how to build a model that can archive both high accuracy and efficiency, and (2) how to develop a data structure and algorithm that can facilitate efficient retrieval of the top k candidates. In this paper, we propose a probabilistic approach to the task. Our approach is novel and unique in the following aspects. It employs (a) a log-linear (discriminative) model for candidate generation, (b) an effective algorithm for model learning, and (c) an efficient algorithm for candidate retrieval. The log linear model is defined as a conditional probability distribution of a corrected word and a rule set for the correction given the misspelled word. The learning method employs, in the training process, a criterion that represents the goal of making both accurate and efficient prediction (candidate generation). As a result, the model is optimally trained toward its objective. The retrieval algorithm uses special data structures and efficiently performs the top k candidates finding. It is guaranteed to find the best k candidates without enumerating all the possible ones. We empirically evaluated the proposed method in spelling error correction of web search queries. The experimental results have verified that the accuracy of the top candidates given by our method is significantly higher than those given by the baseline methods. Our method is more accurate than the baseline methods in different settings such as large rule sets and large vocabulary sizes. The efficiency of our method is also very high in different experimental settings. 2 Related Work Approximate string search has been studied by many researchers. Previous work mainly focused on efficiency rather than model. Usually, it is assumed that the model (similarity distance) is fixed and the goal is to efficiently find all the strings in the collection whose similarity distances are within a threshold. Most existing methods employ n-gram based algorithms (Behm et al., 2009; Li et al., 2007; Yang et al., 2008) or filtering algorithms (Mihov and Schulz, 2004; Li et al., 2008). Instead of finding all the candidates in a fixed range, methods for finding the top k candidates have also been developed. For example, the method by Vernica and Li (2009) utilized n-gram based inverted lists as index structure and a similarity function based on n-gram overlaps and word frequencies. Yang et al. (2010) presented a general framework for top k retrieval based on ngrams. In contrast, our work in this paper aims to learn a ranking function which can achieve both high accuracy and efficiency. Spelling error correction normally consists of candidate generation and candidate final selection. The former task is an example of approximate string search. Note that candidate generation is only concerned with a single word. For single-word candidate generation, rule-based approach is commonly used. The use of edit distance is a typical example, which exploits operations of character deletion, insertion and substitution. 
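As a brief illustration of these operations (a textbook dynamic program, included here only for concreteness and not part of any of the cited systems), edit distance can be computed as follows.

```python
def edit_distance(a: str, b: str) -> int:
    """Minimum number of character deletions, insertions and
    substitutions needed to turn string a into string b."""
    m, n = len(a), len(b)
    # dp[i][j] = distance between a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]

print(edit_distance("nicrosoft", "microsoft"))  # 1
```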
Some methods generate candidates within a fixed range of edit distance or different ranges for strings with different lengths (Li et al., 2006; Whitelaw et al., 2009). Other methods make use of weighted edit distance to enhance the representation power of edit distance (Ristad and Yianilos, 1998; Oncina and Sebban, 2005; McCallum et al., 2005; Ahmad and Kondrak, 2005). Conventional edit distance does not take in consideration context information. For example, people tend to misspell “c” to “s” or “k” depending on contexts, and a straightforward application of edit distance cannot deal with the problem. To address the challenge, some researchers proposed using a large number of substitution rules containing context information (at character level). For example, Brill and Moore (2000) developed a generative model including contextual substitution rules; and Toutanova and Moore (2002) further improved the model by adding pronunciation factors into the model. Schaback and Li (2007) proposed a multi-level feature-based framework for spelling error correction including a modification of Brill and Moore’s model (2000). Okazaki et al. (2008) utilized substring substitution rules and incorporated the rules into a L1-regularized logistic regression model. Okazaki et al.’s model is largely different 53 from the model proposed in this paper, although both of them are discriminative models. Their model is a binary classification model and it is assumed that only a single rule is applied in candidate generation. Since users’ behavior of misspelling and correction can be frequently observed in web search log data, it has been proposed to mine spelling-error and correction pairs by using search log data. The mined pairs can be directly used in spelling error correction. Methods of selecting spelling and correction pairs with maximum entropy model (Chen et al., 2007) or similarity functions (Islam and Inkpen, 2009; Jones et al., 2006) have been developed. The mined pairs can only be used in candidate generation of high frequency typos, however. In this paper, we work on candidate generation at the character level, which can be applied to spelling error correction for both high and low frequency words. 3 Model for Candidate Generation As an example of approximate string search, we consider candidate generation in spelling correction. Suppose that there is a vocabulary V and a misspelled word, the objective of candidate generation is to select the best corrections from the vocabulary V. We care about both accuracy and efficiency of the process. The problem is very challenging when the size of vocabulary is large, because there are a large number of potential candidates to be verified. In this paper, we propose a probabilistic approach to candidate generation, which can achieve both high accuracy and efficiency, and is particularly powerful when the scale is large. In our approach, it is assumed that a large number of misspelled words and their best corrections are given as training data. A probabilistic model is then trained by using the training data, which can assign ranking scores to candidates. The best candidates for correction of a misspelled word are thus defined as those candidates having the highest probabilistic scores with respect to the training data and the operators. Hereafter, we will describe the probabilistic model for candidate generation, as well as training and exploitation of the model. 
n i c o s o o f t ^ $ Derived rules Edit-distance based aligment Expended rules with context m i c r o s o f t ^ $ Figure 1: Example of rule extraction from word pair 3.1 Model The operators (rules) represent insertion, deletion, and substitution of characters in a word with or without surrounding context (characters), which are similar to those defined in (Brill and Moore, 2000; Okazaki et al., 2008). An operator is formally represented a rule α →β that replaces a substring α in a misspelled word with β, where α, β ∈{s|s = t, s = ˆt, or s = t$} and t ∈Σ∗is the set of all possible strings over the alphabet. Obviously, V ⊂Σ∗. We actually derive all the possible rules from the training data using a similar approach to (Brill and Moore, 2000) as shown in Fig. 1. First we conduct the letter alignment based on the minimum edit-distance, and then derive the rules from the alignment. Furthermore we expand the derived rules with surrounding words. Without loss of generality, we only consider using +2, +1, 0, −1, −2 characters as contexts in this paper. If we can apply a set of rules to transform the misspelled word wm to a correct word wc in the vocabulary, then we call the rule set a “transformation” for the word pair wm and wc. Note that for a given word pair, it is likely that there are multiple possible transformations for it. For example, both “n”→“m” and “ni”→“mi” can transform “nicrosoft” to “microsoft”. Without loss of generality, we set the maximum number of rules applicable to a word pair to be a fixed number. As a result, the number of possible transformations for a word pair is finite, and usually limited. This is equivalent to the assumption that the number of spelling errors in a word is small. Given word pair (wm, wc), let R(wm, wc) denote one transformation (a set of rules) that can rewrite 54 wm to wc. We consider that there is a probabilistic mapping between the misspelled word wm and correct word wc plus transformation R(wm, wc). We define the conditional probability distribution of wc and R(wm, wc) given wm as the following log linear model: P(wc, R(wm, wc)|wm) (1) = exp (∑ r∈R(wm,wc) λr ) ∑ (w′c,R(wm,w′c))∈Z(wm) exp (∑ o∈R(wm,w′c) λo ) where r or o denotes a rule in rule set R, λr or λo denotes a weight, and the normalization is carried over Z(wm), all pairs of word w′ c in V and transformation R(wm, w′ c), such that wm can be transformed to w′ c by R(wm, w′ c). The log linear model actually uses binary features indicating whether or not a rule is applied. In general, the weights in Equ. (1) can be any real numbers. To improve efficiency in retrieval, we further assume that all the weights are non-positive, i.e., ∀λr ≤0. It introduces monotonicity in rule application and implies that applying additional rules cannot lead to generation of better candidates. For example, both “office” and “officer” are correct candidates of “ofice”. We view “office” a better candidate (with higher probability) than “officer”, as it needs one less rule. The assumption is reasonable because the chance of making more errors should be lower than that of making less errors. Our experimental results have shown that the change in accuracy by making the assumption is negligible, but the gain in efficiency is very large. 3.2 Training of Model Training data is given as a set of pairs T = { (wi m, wi c) }N i=1, where wi m is a misspelled word and wi c ∈V is a correction of wi m. The objective of training would be to maximize the conditional probability P(wi c, R(wi m, wi c)|wi m) over the training data. 
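To see Equ. (1) at work, the toy sketch below (a simplification that assumes the candidate words and their transformations in Z(wm) have already been enumerated; the rule names and weights are invented for illustration) computes the conditional probabilities as a softmax over the summed rule weights of each transformation.

```python
import math
from typing import Dict, FrozenSet, List, Tuple

def candidate_probabilities(
    candidates: List[Tuple[str, FrozenSet[str]]],  # (corrected word, rule set)
    weights: Dict[str, float],                     # lambda_r for each rule
) -> Dict[Tuple[str, FrozenSet[str]], float]:
    """Equ. (1): P(w_c, R(w_m, w_c) | w_m) as a softmax over the
    sum of rule weights of each applicable transformation."""
    scores = [sum(weights[r] for r in rules) for _, rules in candidates]
    z = sum(math.exp(s) for s in scores)
    return {cand: math.exp(s) / z for cand, s in zip(candidates, scores)}

# Toy candidates for the misspelling "nicrosoft", with non-positive weights.
weights = {"n->m": -0.3, "ni->mi": -0.5, "t->ts": -1.2}
cands = [
    ("microsoft", frozenset({"n->m"})),
    ("microsoft", frozenset({"ni->mi"})),
    ("microsofts", frozenset({"n->m", "t->ts"})),
]
for (word, rules), p in candidate_probabilities(cands, weights).items():
    print(word, sorted(rules), round(p, 3))
```

Because the weights are non-positive, applying an extra rule can only lower a candidate's score, which is the monotonicity exploited later for pruning. Training then amounts to choosing the weights so that these probabilities are high for the observed corrections.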
This is not a trivial problem, however, because the “true” transformation R∗(wi m, wi c) for each word pair wi m and wi c is not given in the training data. It is often the case that there are multiple transformations applicable, and it is not realistic to assume that such information can be provided by humans or automatically derived. (It is relatively easy to automatically find the pairs wi m and wi c as explained in Section 5.1). In this paper, we assume that the transformation that actually generates the correction among all the possible transformations is the one that can give the maximum conditional probability; the exactly same criterion is also used for fast prediction. Therefore we have the following objective function λ∗= arg max λ L(λ) (2) = arg max λ ∑ i max R(wim,wic)log P(wi c, R(wi m, wi c)|wi m) where λ denotes the weight parameters and the max is taken over the set of transformations that can transform wi m to wi c. We employ gradient ascent in the optimization in Equ. (2). At each step, we first find the best transformation for each word pair based on the current parameters λ(t) R∗(wi m, wi c) (3) = arg max R(wim,wic) log Pλ(t)(wi c, R(wi m, wi c)|wi m) Next, we calculate the gradients, ∂L ∂λr = ∑ i log Pλ(t)(wi c, R∗(wi m, wi c)|wi m) ∂λr (4) In this paper, we employ the bounded L-BFGS (Behm et al., 2009) algorithm for the optimization task, which works well even when the number of weights λ is large. 3.3 Candidate Generation In candidate generation, given a misspelled word wm, we find the k candidates from the vocabulary, that can be transformed from wm and have the largest probabilities assigned by the learned model. We only need to utilize the following ranking function to rank a candidate wc given a misspelled word wm, by taking into account Equs. (1) and (2) rank(wc|wm) = max R(wm,wc) ∑ r∈R(wm,wc) λr (5) For each possible transformation, we simply take summation of the weights of the rules used in the transformation. We then choose the sum as a ranking score, which is equivalent to ranking candidates based on their largest conditional probabilities. 55 aa a NULL NULL ...... ...... ...... ...... ...... ...... ___ ___ e s a 0.0 -0.3 -0.1 failure link leaf node link Aho Corasick Tree a e a s aa a e ea Figure 2: Rule Index based on Aho Corasick Tree. 4 Efficient Retrieval Algorithm In this section, we introduce how to efficiently perform top k candidate generation. Our retrieval algorithm is guaranteed to find the optimal k candidates with some “pruning” techniques. We first introduce the data structures and then the retrieval algorithm. 4.1 Data Structures We exploit two data structures for candidate generation. One is a trie for storing and matching words in the vocabulary, referred to as vocabulary trie, and the other based on what we call an Aho-Corasick tree (AC tree) (Aho and Corasick, 1975), which is used for storing and applying correction rules, referred to as rule index. The vocabulary trie is the same as that used in existing work and it will be traversed when searching the top k candidates. Our rule index is unique because it indexes all the rules based on an AC tree. The AC tree is a trie with “failure links”, on which the Aho-Corasick string matching algorithm can be executed. Aho-Corasick algorithm is a well known dictionary-matching algorithm which can quickly locate all the words in a dictionary within an input string. Time complexity of the algorithm is of linear order in length of input string plus number of matched entries. 
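As a rough stand-in for the rule index just introduced (a sketch only: it scans substrings directly instead of running the Aho-Corasick automaton, so it returns the same matches without the linear-time guarantee; the rules and weights are invented for illustration), locating the applicable rules in a misspelled word can be pictured as follows.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

# Hypothetical toy rule set: alpha -> list of (beta, weight),
# kept in decreasing order of weight, as in the rule index.
RULES: Dict[str, List[Tuple[str, float]]] = {
    "n": [("m", -0.1)],
    "ni": [("mi", -0.3)],
    "o": [("oo", -0.4), ("0", -0.9)],
}

def applicable_rules(word: str) -> Dict[int, List[Tuple[str, str, float]]]:
    """Return, for each start position in `word`, the rules whose
    left-hand side alpha matches there.  The real system obtains the
    same information from one Aho-Corasick pass over the word."""
    matches = defaultdict(list)
    for start in range(len(word)):
        for alpha, betas in RULES.items():
            if word.startswith(alpha, start):
                for beta, weight in betas:
                    matches[start].append((alpha, beta, weight))
    return matches

print(dict(applicable_rules("nicrosoft")))
# Position 0 matches n->m and ni->mi; the positions of 'o' match o->oo, o->0.
```

The retrieval algorithm described in the next subsection consults exactly this kind of per-position match list when it decides whether to apply a rule during the traversal of the vocabulary trie.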
We index all the α’s in the rules on the AC tree. Each α corresponds to a leaf node, and the β’s of the α are stored in an associated list in decreasing order of rule weights λ, as illustrated in Fig. 2. 1 1One may further improve the index structure by using a trie rather than a ranking list to store βs associated with the same α. However the improvement would not be significant because the number of βs associated with each α is usually very small. 4.2 Algorithm One could employ a naive algorithm that applies all the possible combinations of rules (α’s) to the current word wm, verifies whether the resulting words (candidates) are in the vocabulary, uses the function in Equ. (5) to calculate the ranking scores of the candidates, and find the top k candidates. This algorithm is clearly inefficient. Our algorithm first employs the Aho-Corasick algorithm to locate all the applicable α’s within the input word wm, from the rule index. The corresponding β’s are retrieved as well. Then all the applicable rules are identified and indexed by the applied positions of word wm. Our algorithm next traverses the vocabulary trie and searches the top k candidates with some pruning techniques. The algorithm starts from the root node of the vocabulary trie. At each step, it has multiple search branches. It tries to match at the next position of wm, or apply a rule at the current position of wm. The following two pruning criteria are employed to significantly accelerate the search process. 1) If the current sum of weights of applied rules is smaller than the smallest weight in the top k list, the search branch is pruned. This criterion is derived from the non-negative constraint on rule weights λ. It is easy to verify that the sum of weights will not become larger if one continues to search the branch because all the weights are non-positive. 2) If two search branches merge at the same node in the vocabulary trie as well as the same position on wm, the search branches with smaller sum of weights will be pruned. It is based on the dynamic programming technique because we take max in the ranking function in Equ. 5. It is not difficult to prove that our algorithm is guaranteed to find the best k candidates in terms of the ranking scores, because we only prune those candidates that cannot give better scores than the ones in the current top k list. Due to the limitation of space, we omit the proof of the theorem that if the weights of rules λ are non-positive and the ranking function is defined as in Equ. 5, then the top k candidates obtained with the pruning criteria are the same as the top k candidates obtained without pruning. 56 5 Experimental Results We have experimentally evaluated our approach in spelling error correction of queries in web search. The problem is more challenging than usual due to the following reasons. (1) The vocabulary of queries in web search is extremely large due to the scale, diversity, and dynamics of the Internet. (2) Efficiency is critically important, because the response time of top k candidate retrieval for web search must be kept very low. Our approach for candidate generation is in fact motivated by the application. 5.1 Word Pair Mining In web search, a search session is comprised of a sequence of queries from the same user within a time period. It is easy to observe from search session data that there are many spelling errors and their corrections occurring in the same sessions. 
We employed heuristics to automatically mine training pairs from search session data at a commercial search engine. First, we segmented the query sequence from each user into sessions. If two queries were issued more than 5 minutes apart, then we put a session boundary between them. We used short sessions here because we found that search users usually correct their misspelled queries very quickly after they find the misspellings. Then the following heuristics were employed to identify pairs of misspelled words and their corrections from two consecutive queries within a session:
1) Two queries have the same number of words.
2) There is only one word difference between two queries.
3) For the two distinct words, the word in the first query is considered as misspelled and the second one as its correction.
Finally, we aggregated the identified training pairs across sessions and users and discarded the pairs with low frequencies. Table 1 shows some examples of the mined word pairs.

Table 1: Examples of Word Pairs
Misspelled    Correct       Misspelled    Correct
aacoustic     acoustic      chevorle      chevrolet
liyerature    literature    tournemen     tournament
shinngle      shingle       newpape       newspaper
finlad        finland       ccomponet     component
reteive       retrieve      olimpick      olympic

5.2 Experiments on Accuracy
Two representative methods were used as baselines: the generative model proposed by (Brill and Moore, 2000) referred to as generative and the logistic regression model proposed by (Okazaki et al., 2008)
However, the drop of accuracy by our method is much smaller than that by generative, which means our method is more powerful when the vocabulary is large, e.g., for web search. For the experiment in Fig. 5, we changed the maximum number of rules that can be applied to a transformation from 2 to 3. Because logistic can only use one rule at a time, it is not included in this experiment. When there are more applicable rules, more candidates can be generated and thus ranking of them becomes more challenging. The accuracies of both methods drop, but our method is constantly better than generative. Moreover, the decrease in accuracy by our method is clearly less than that by generative. For the experiment in Fig. 6, we enlarged the number of rules from 10,497 (smallRuleNum) to 24,054 (largeRuleNum). The performance of our method and those of the two baselines did not change so much, and our method still visibly outperform the baselines when more rules are exploited. 5.3 Experiments on Efficiency We have also experimentally evaluated the efficiency of our approach. Because most existing work uses a predefined ranking function, it is not fair to make a comparison with them. Moreover, Okazaki et al.’ method does not consider efficiency, and Brill and Moore’s method is based a complicated retrieve algorithm which is very hard to implement. Instead of making comparison with the existing methods in terms of efficiency, we evaluated the efficiency of our method by looking at how efficient it becomes with its data structure and pruning technique. 0 5 10 15 20 25 30 30% 40% 50% 60% 70% 80% 90% 100% top k Accuracy Generative (smallVocab) Generative (largeVocab) Logistic (smallVocab) Logistic (largeVocab) Our Method (smallVocab) Our Method (largeVocab) Figure 4: Accuracy Comparisons between Baselines and Our Method with Different Vocabulary Sizes 0 5 10 15 20 25 30 30% 40% 50% 60% 70% 80% 90% 100% top k Accuracy Generative (2 applicable rules) Generative (3 applicable rules) Our Method (2 applicable rules) Our Method (3 applicable rules) Figure 5: Accuracy Comparison between Generative and Our Method with Different Maximum Numbers of Applicable Rules 0 5 10 15 20 25 30 40% 50% 60% 70% 80% 90% 100% Accuracy top k Generative (largeRuleSet) Generative (smallRuleSet) Logistic (largeRuleSet) Logistic (smallRuleSet) Our Method (largeRuleSet) Our Method (smallRuleSet) Figure 6: Accuracy Comparison between Baselines and Our Method with Different Numbers of Rules First, we tested the efficiency of using AhoCorasick algorithm (the rule index). Because the 58 0 5000 10000 15000 20000 25000 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 Number of Rules W ord Length Number of Matching Rules 4 5 6 7 8 9 10 Figure 7: Number of Matching Rules v.s. Number of Rules time complexity of Aho-Corasick algorithm is determined by the lengths of query strings and the number of matches, we examined how the number of matches on query strings with different lengths changes when the number of rules increases. The experimental results are shown in Fig. 7. We can see that the number of matches is not largely affected by the number of rules in the rule index. It implies that the time for searching applicable rules is close to a constant and does not change much with different numbers of rules. Next, since the running time of our method is proportional to the number of visited nodes on the vocabulary trie, we evaluated the efficiency of our method in terms of number of visited nodes. The result reported here is that when k is 10. 
Specifically, we tested how the number of visited nodes changes according to three factors: maximum number of applicable rules in a transformation, vocabulary size and rule set size. The experimental results are shown in Fig. 8, Fig. 9 and Fig. 10 respectively. From Fig. 8, with increasing maximum number of applicable rules in a transformation, number of visited nodes increases first and then stabilizes, especially when the words are long. Note that pruning becomes even more effective because number of visited nodes without pruning grows much faster. It demonstrates that our method is very efficient when compared to the non-pruning method. Admittedly, the efficiency of our method also deteriorates somewhat. This would not cause a noticeable issue in real applications, however. In the previous section, 1 2 3 4 5 6 7 0 2000 4000 6000 8000 10000 12000 14000 16000 18000 Maximum Number of Applicable Rules Number of Visited Nodes W ord Length 4 5 6 7 8 9 10 Figure 8: Efficiency Evaluation with Different Maximum Numbers of Applicable Rules 0 100 200 300 400 500 0 1000 2000 3000 4000 5000 6000 7000 Vocabulary Size (million) Number of Visited Nodes W ord Length 4 5 6 7 8 9 10 Figure 9: Efficiency Evaluation with Different Sizes of Vocabulary we have seen that using up to two rules in a transformation can bring a very high accuracy. From Fig. 8 and Fig. 9, we can conclude that the numbers of visited nodes are stable and thus the efficiency of our method keeps high with larger vocabulary size and number of rules. It indicates that our pruning strategy is very effective. From all the figures, we can see that our method is always efficient especially when the words are relatively short. 5.4 Experiments on Model Constraints In Section 3.1, we introduce the non-positive constraints on the parameters, i.e., ∀λr ≤0, to enable the pruning technique for efficient top k retrieval. We experimentally verified the impact of the constraints to both the accuracy and efficiency. For ease of reference, we name the model with the non-positive constraints as bounded, and the origi59 5000 10000 15000 20000 25000 0 1000 2000 3000 4000 5000 6000 7000 Number of Rules Number of Visited Nodes W ord Length 4 5 6 7 8 9 10 Figure 10: Efficiency Evaluation with Different Number of Rules 0 5 10 15 20 25 30 40% 50% 60% 70% 80% 90% 100% top k Accuracy Bounded Unbounded Figure 11: Accuracy Comparison between Bounded and Unbounded Models nal model as unbounded. The experimental results are shown in Fig. 11 and Fig. 12. All the experiments were conducted based on the typical setting of our experiments: 973,902 words in the vocabulary, 10,597 rules, and up to two rules in one transformation. In Fig. 11, we can see that the difference between bounded and unbounded in terms of accuracy is negligible, and we can draw a conclusion that adding the constraints does not hurt the accuracy. From Fig. 12, it is easy to note that bounded is much faster than unbounded because our pruning strategy can be applied to bounded. 6 Conclusion In this paper, we have proposed a new method for approximate string search, including spelling error correction, which is both accurate and efficient. Our method is novel and unique in its model, learning 4 6 8 10 12 0 1000 2000 3000 4000 5000 6000 7000 8000 9000 10000 Number of Visited Nodes W ord Length Bounded Unbounded Figure 12: Efficiency Comparison between Bounded and Unbounded Models algorithm, and retrieval algorithm. 
Experimental results on a large data set show that our method improves upon existing methods in terms of accuracy, and particularly our method can perform better when the dictionary is large and when there are many rules. Experimental results have also verified the high efficiency of our method. As future work, we plan to add contextual features into the model and apply our method to other data sets in other tasks. References Farooq Ahmad and Grzegorz Kondrak. 2005. Learning a spelling error model from search query logs. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT ’05, pages 955–962, Morristown, NJ, USA. Association for Computational Linguistics. Alfred V. Aho and Margaret J. Corasick. 1975. Efficient string matching: an aid to bibliographic search. Commun. ACM, 18:333–340, June. Alexander Behm, Shengyue Ji, Chen Li, and Jiaheng Lu. 2009. Space-constrained gram-based indexing for efficient approximate string search. In Proceedings of the 2009 IEEE International Conference on Data Engineering, pages 604–615, Washington, DC, USA. IEEE Computer Society. Eric Brill and Robert C. Moore. 2000. An improved error model for noisy channel spelling correction. In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, ACL ’00, pages 286–293, Morristown, NJ, USA. Association for Computational Linguistics. Qing Chen, Mu Li, and Ming Zhou. 2007. Improving query spelling correction using web search re60 sults. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 181–189. Andrew R. Golding and Dan Roth. 1999. A winnowbased approach to context-sensitive spelling correction. Mach. Learn., 34:107–130, February. Aminul Islam and Diana Inkpen. 2009. Real-word spelling correction using google web it 3-grams. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 3 - Volume 3, EMNLP ’09, pages 1241–1249, Morristown, NJ, USA. Association for Computational Linguistics. Rosie Jones, Benjamin Rey, Omid Madani, and Wiley Greiner. 2006. Generating query substitutions. In Proceedings of the 15th international conference on World Wide Web, WWW ’06, pages 387–396, New York, NY, USA. ACM. Mu Li, Yang Zhang, Muhua Zhu, and Ming Zhou. 2006. Exploring distributional similarity based models for query spelling correction. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, ACL-44, pages 1025– 1032, Morristown, NJ, USA. Association for Computational Linguistics. Chen Li, Bin Wang, and Xiaochun Yang. 2007. Vgram: improving performance of approximate queries on string collections using variable-length grams. In Proceedings of the 33rd international conference on Very large data bases, VLDB ’07, pages 303–314. VLDB Endowment. Chen Li, Jiaheng Lu, and Yiming Lu. 2008. Efficient merging and filtering algorithms for approximate string searches. In Proceedings of the 2008 IEEE 24th International Conference on Data Engineering, pages 257–266, Washington, DC, USA. IEEE Computer Society. Andrew McCallum, Kedar Bellare, and Fernando Pereira. 2005. A conditional random field for discriminativelytrained finite-state string edit distance. In Conference on Uncertainty in AI (UAI). Stoyan Mihov and Klaus U. Schulz. 2004. Fast approximate search in large dictionaries. Comput. 
Linguist., 30:451–477, December. Naoaki Okazaki, Yoshimasa Tsuruoka, Sophia Ananiadou, and Jun’ichi Tsujii. 2008. A discriminative candidate generator for string transformations. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’08, pages 447–456, Morristown, NJ, USA. Association for Computational Linguistics. Jose Oncina and Marc Sebban. 2005. Learning unbiased stochastic edit distance in the form of a memoryless finite-state transducer. In International Joint Conference on Machine Learning (2005). Workshop: Grammatical Inference Applications: Successes and Future Challenges. Eric Sven Ristad and Peter N. Yianilos. 1998. Learning string-edit distance. IEEE Trans. Pattern Anal. Mach. Intell., 20:522–532, May. Johannes Schaback and Fang Li. 2007. Multi-level feature extraction for spelling correction. In IJCAI-2007 Workshop on Analytics for Noisy Unstructured Text Data, pages 79–86, Hyderabad, India. Kristina Toutanova and Robert C. Moore. 2002. Pronunciation modeling for improved spelling correction. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pages 144–151, Morristown, NJ, USA. Association for Computational Linguistics. Rares Vernica and Chen Li. 2009. Efficient top-k algorithms for fuzzy search in string collections. In Proceedings of the First International Workshop on Keyword Search on Structured Data, KEYS ’09, pages 9– 14, New York, NY, USA. ACM. Casey Whitelaw, Ben Hutchinson, Grace Y. Chung, and Gerard Ellis. 2009. Using the web for language independent spellchecking and autocorrection. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2 - Volume 2, EMNLP ’09, pages 890–899, Morristown, NJ, USA. Association for Computational Linguistics. Xiaochun Yang, Bin Wang, and Chen Li. 2008. Costbased variable-length-gram selection for string collections to support approximate queries efficiently. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, SIGMOD ’08, pages 353–364, New York, NY, USA. ACM. Zhenglu Yang, Jianjun Yu, and Masaru Kitsuregawa. 2010. Fast algorithms for top-k approximate string matching. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, AAAI ’10, pages 1467–1473. 61
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 590–599, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Learning Dependency-Based Compositional Semantics Percy Liang UC Berkeley [email protected] Michael I. Jordan UC Berkeley [email protected] Dan Klein UC Berkeley [email protected] Abstract Compositional question answering begins by mapping questions to logical forms, but training a semantic parser to perform this mapping typically requires the costly annotation of the target logical forms. In this paper, we learn to map questions to answers via latent logical forms, which are induced automatically from question-answer pairs. In tackling this challenging learning problem, we introduce a new semantic representation which highlights a parallel between dependency syntax and efficient evaluation of logical forms. On two standard semantic parsing benchmarks (GEO and JOBS), our system obtains the highest published accuracies, despite requiring no annotated logical forms. 1 Introduction What is the total population of the ten largest capitals in the US? Answering these types of complex questions compositionally involves first mapping the questions into logical forms (semantic parsing). Supervised semantic parsers (Zelle and Mooney, 1996; Tang and Mooney, 2001; Ge and Mooney, 2005; Zettlemoyer and Collins, 2005; Kate and Mooney, 2007; Zettlemoyer and Collins, 2007; Wong and Mooney, 2007; Kwiatkowski et al., 2010) rely on manual annotation of logical forms, which is expensive. On the other hand, existing unsupervised semantic parsers (Poon and Domingos, 2009) do not handle deeper linguistic phenomena such as quantification, negation, and superlatives. As in Clarke et al. (2010), we obviate the need for annotated logical forms by considering the endto-end problem of mapping questions to answers. However, we still model the logical form (now as a latent variable) to capture the complexities of language. Figure 1 shows our probabilistic model: (parameters) (world) θ w x z y (question) (logical form) (answer) state with the largest area x1 x1 1 1 c argmax area state ∗∗ Alaska z ∼pθ(z | x) y = JzKw Semantic Parsing Evaluation Figure 1: Our probabilistic model: a question x is mapped to a latent logical form z, which is then evaluated with respect to a world w (database of facts), producing an answer y. We represent logical forms z as labeled trees, induced automatically from (x, y) pairs. We want to induce latent logical forms z (and parameters θ) given only question-answer pairs (x, y), which is much cheaper to obtain than (x, z) pairs. The core problem that arises in this setting is program induction: finding a logical form z (over an exponentially large space of possibilities) that produces the target answer y. Unlike standard semantic parsing, our end goal is only to generate the correct y, so we are free to choose the representation for z. Which one should we use? The dominant paradigm in compositional semantics is Montague semantics, which constructs lambda calculus forms in a bottom-up manner. CCG is one instantiation (Steedman, 2000), which is used by many semantic parsers, e.g., Zettlemoyer and Collins (2005). However, the logical forms there can become quite complex, and in the context of program induction, this would lead to an unwieldy search space. At the same time, representations such as FunQL (Kate et al., 2005), which was used in 590 Clarke et al. 
(2010), are simpler but lack the full expressive power of lambda calculus. The main technical contribution of this work is a new semantic representation, dependency-based compositional semantics (DCS), which is both simple and expressive (Section 2). The logical forms in this framework are trees, which is desirable for two reasons: (i) they parallel syntactic dependency trees, which facilitates parsing and learning; and (ii) evaluating them to obtain the answer is computationally efficient. We trained our model using an EM-like algorithm (Section 3) on two benchmarks, GEO and JOBS (Section 4). Our system outperforms all existing systems despite using no annotated logical forms. 2 Semantic Representation We first present a basic version (Section 2.1) of dependency-based compositional semantics (DCS), which captures the core idea of using trees to represent formal semantics. We then introduce the full version (Section 2.2), which handles linguistic phenomena such as quantification, where syntactic and semantic scope diverge. We start with some definitions, using US geography as an example domain. Let V be the set of all values, which includes primitives (e.g., 3, CA ∈V) as well as sets and tuples formed from other values (e.g., 3, {3, 4, 7}, (CA, {5}) ∈V). Let P be a set of predicates (e.g., state, count ∈P), which are just symbols. A world w is mapping from each predicate p ∈ P to a set of tuples; for example, w(state) = {(CA), (OR), . . . }. Conceptually, a world is a relational database where each predicate is a relation (possibly infinite). Define a special predicate ø with w(ø) = V. We represent functions by a set of inputoutput pairs, e.g., w(count) = {(S, n) : n = |S|}. As another example, w(average) = {(S, ¯x) : ¯x = |S1|−1 P x∈S1 S(x)}, where a set of pairs S is treated as a set-valued function S(x) = {y : (x, y) ∈S} with domain S1 = {x : (x, y) ∈S}. The logical forms in DCS are called DCS trees, where nodes are labeled with predicates, and edges are labeled with relations. Formally: Definition 1 (DCS trees) Let Z be the set of DCS trees, where each z ∈Z consists of (i) a predicate Relations R j j′ (join) E (extract) Σ (aggregate) Q (quantify) Xi (execute) C (compare) Table 1: Possible relations appearing on the edges of a DCS tree. Here, j, j′ ∈{1, 2, . . . } and i ∈{1, 2, . . . }∗. z.p ∈P and (ii) a sequence of edges z.e1, . . . , z.em, each edge e consisting of a relation e.r ∈R (see Table 1) and a child tree e.c ∈Z. We write a DCS tree z as ⟨p; r1 : c1; . . . ; rm : cm⟩. Figure 2(a) shows an example of a DCS tree. Although a DCS tree is a logical form, note that it looks like a syntactic dependency tree with predicates in place of words. It is this transparency between syntax and semantics provided by DCS which leads to a simple and streamlined compositional semantics suitable for program induction. 2.1 Basic Version The basic version of DCS restricts R to join and aggregate relations (see Table 1). Let us start by considering a DCS tree z with only join relations. Such a z defines a constraint satisfaction problem (CSP) with nodes as variables. The CSP has two types of constraints: (i) x ∈w(p) for each node x labeled with predicate p ∈P; and (ii) xj = yj′ (the j-th component of x must equal the j′-th component of y) for each edge (x, y) labeled with j j′ ∈R. A solution to the CSP is an assignment of nodes to values that satisfies all the constraints. We say a value v is consistent for a node x if there exists a solution that assigns v to x. 
The denotation JzKw (z evaluated on w) is the set of consistent values of the root node (see Figure 2 for an example). Computation We can compute the denotation JzKw of a DCS tree z by exploiting dynamic programming on trees (Dechter, 2003). The recurrence is as follows: J D p; j1 j′ 1 :c1; · · · ; jm j′ m :cm E K w (1) = w(p) ∩ m \ i=1 {v : vji = tj′ i, t ∈JciKw}. At each node, we compute the set of tuples v consistent with the predicate at that node (v ∈w(p)), and 591 Example: major city in California z = ⟨city; 1 1:⟨major⟩; 1 1:⟨loc; 2 1:⟨CA⟩⟩⟩ 1 1 1 1 major 2 1 CA loc city λc ∃m ∃ℓ∃s . city(c) ∧major(m)∧ loc(ℓ) ∧CA(s)∧ c1 = m1 ∧c1 = ℓ1 ∧ℓ2 = s1 (a) DCS tree (b) Lambda calculus formula (c) Denotation: JzKw = {SF, LA, . . . } Figure 2: (a) An example of a DCS tree (written in both the mathematical and graphical notation). Each node is labeled with a predicate, and each edge is labeled with a relation. (b) A DCS tree z with only join relations encodes a constraint satisfaction problem. (c) The denotation of z is the set of consistent values for the root node. for each child i, the ji-th component of v must equal the j′ i-th component of some t in the child’s denotation (t ∈JciKw). This algorithm is linear in the number of nodes times the size of the denotations.1 Now the dual importance of trees in DCS is clear: We have seen that trees parallel syntactic dependency structure, which will facilitate parsing. In addition, trees enable efficient computation, thereby establishing a new connection between dependency syntax and efficient semantic evaluation. Aggregate relation DCS trees that only use join relations can represent arbitrarily complex compositional structures, but they cannot capture higherorder phenomena in language. For example, consider the phrase number of major cities, and suppose that number corresponds to the count predicate. It is impossible to represent the semantics of this phrase with just a CSP, so we introduce a new aggregate relation, notated Σ. Consider a tree ⟨Σ:c⟩, whose root is connected to a child c via Σ. If the denotation of c is a set of values s, the parent’s denotation is then a singleton set containing s. Formally: J⟨Σ:c⟩Kw = {JcKw}. (2) Figure 3(a) shows the DCS tree for our running example. The denotation of the middle node is {s}, 1Infinite denotations (such as J<Kw) are represented as implicit sets on which we can perform membership queries. The intersection of two sets can be performed as long as at least one of the sets is finite. number of major cities 1 2 1 1 Σ 1 1 major city ∗∗ count ∗∗ average population of major cities 1 2 1 1 Σ 1 1 1 1 major city population ∗∗ average ∗∗ (a) Counting (b) Averaging Figure 3: Examples of DCS trees that use the aggregate relation (Σ) to (a) compute the cardinality of a set and (b) take the average over a set. where s is all major cities. Having instantiated s as a value, everything above this node is an ordinary CSP: s constrains the count node, which in turns constrains the root node to |s|. A DCS tree that contains only join and aggregate relations can be viewed as a collection of treestructured CSPs connected via aggregate relations. The tree structure still enables us to compute denotations efficiently based on (1) and (2). 2.2 Full Version The basic version of DCS described thus far handles a core subset of language. But consider Figure 4: (a) is headed by borders, but states needs to be extracted; in (b), the quantifier no is syntactically dominated by the head verb borders but needs to take wider scope. 
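Before turning to how the full version handles such divergence, the basic recurrence in Equ. (1) can be made concrete with a short sketch. This is not the authors' system: the tree encoding, the toy world, and the restriction to finite denotations and join relations (no aggregate, mark, or execute machinery) are simplifying assumptions.

# A DCS tree is encoded here as (predicate, [((j, jp), child), ...]) with
# 1-based join components; a world maps predicate names to finite sets of tuples.

def denotation(tree, world):
    pred, edges = tree
    values = set(world[pred])                      # constraint (i): x in w(p)
    for (j, jp), child in edges:
        child_den = denotation(child, world)       # dynamic programming on the tree
        values = {v for v in values                # constraint (ii): v_j = t_jp
                  if any(v[j - 1] == t[jp - 1] for t in child_den)}
    return values

# "major city in California" (Figure 2) on a toy world:
world = {
    "city":  {("SF",), ("LA",), ("Seattle",)},
    "major": {("SF",), ("LA",), ("NYC",)},
    "loc":   {("SF", "CA"), ("LA", "CA"), ("Seattle", "WA")},
    "CA":    {("CA",)},
}
tree = ("city", [((1, 1), ("major", [])),
                 ((1, 1), ("loc", [((2, 1), ("CA", []))]))])
print(denotation(tree, world))                     # {('SF',), ('LA',)}, order may vary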
We now present the full version of DCS which handles this type of divergence between syntactic and semantic scope. The key idea that allows us to give semanticallyscoped denotations to syntactically-scoped trees is as follows: We mark a node low in the tree with a mark relation (one of E, Q, or C). Then higher up in the tree, we invoke it with an execute relation Xi to create the desired semantic scope.2 This mark-execute construct acts non-locally, so to maintain compositionality, we must augment the 2Our mark-execute construct is analogous to Montague’s quantifying in, Cooper storage, and Carpenter’s scoping constructor (Carpenter, 1998). 592 California borders which states? x1 x1 2 1 1 1 CA e ∗∗ state border ∗∗ Alaska borders no states. x1 x1 2 1 1 1 AK q no state border ∗∗ Some river traverses every city. x12 x12 2 1 1 1 q some river q every city traverse ∗∗ x21 x21 2 1 1 1 q some river q every city traverse ∗∗ (narrow) (wide) city traversed by no rivers x12 x12 1 2 e ∗∗ 1 1 q no river traverse city ∗∗ (a) Extraction (e) (b) Quantification (q) (c) Quantifier ambiguity (q, q) (d) Quantification (q, e) state bordering the most states x12 x12 1 1 e ∗∗ 2 1 c argmax state border state ∗∗ state bordering more states than Texas x12 x12 1 1 e ∗∗ 2 1 c 3 1 TX more state border state ∗∗ state bordering the largest state 1 1 2 1 x12 x12 1 1 e ∗∗ c argmax size state ∗∗ border state x12 x12 1 1 e ∗∗ 2 1 1 1 c argmax size state border state ∗∗ (absolute) (relative) Every state’s largest city is major. x1 x1 x2 x2 1 1 1 1 2 1 q every state loc c argmax size city major ∗∗ (e) Superlative (c) (f) Comparative (c) (g) Superlative ambiguity (c) (h) Quantification+Superlative (q, c) Figure 4: Example DCS trees for utterances in which syntactic and semantic scope diverge. These trees reflect the syntactic structure, which facilitates parsing, but importantly, these trees also precisely encode the correct semantic scope. The main mechanism is using a mark relation (E, Q, or C) low in the tree paired with an execute relation (Xi) higher up at the desired semantic point. denotation d = JzKw to include any information about the marked nodes in z that can be accessed by an execute relation later on. In the basic version, d was simply the consistent assignments to the root. Now d contains the consistent joint assignments to the active nodes (which include the root and all marked nodes), as well as information stored about each marked node. Think of d as consisting of n columns, one for each active node according to a pre-order traversal of z. Column 1 always corresponds to the root node. Formally, a denotation is defined as follows (see Figure 5 for an example): Definition 2 (Denotations) Let D be the set of denotations, where each d ∈D consists of • a set of arrays d.A, where each array a = [a1, . . . , an] ∈d.A is a sequence of n tuples (ai ∈V∗); and • a list of n stores d.α = (d.α1, . . . , d.αn), where each store α contains a mark relation α.r ∈{E, Q, C, ø}, a base denotation α.b ∈ D∪{ø}, and a child denotation α.c ∈D∪{ø}. We write d as ⟨⟨A; (r1, b1, c1); . . . ; (rn, bn, cn)⟩⟩. We use d{ri = x} to mean d with d.ri = d.αi.r = x (similar definitions apply for d{αi = x}, d{bi = x}, and d{ci = x}). 
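Definition 2 can also be read as a small record type. The following sketch uses hypothetical field names and only makes the array/store bookkeeping and the column projection d[i] concrete; it is not the authors' representation.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Store:                         # one store per active column (d.alpha_i)
    rel: Optional[str] = None        # mark relation: "E", "Q", "C", or None for the empty mark
    base: Optional["Denotation"] = None    # alpha.b
    child: Optional["Denotation"] = None   # alpha.c

@dataclass
class Denotation:                    # d = <<A; alpha_1, ..., alpha_n>>
    arrays: set                      # d.A: a set of n-tuples of value tuples
    stores: List[Store] = field(default_factory=list)

    def project(self, cols):         # the d[i] operation: keep only the listed columns
        return Denotation({tuple(a[c] for c in cols) for a in self.arrays},
                          [self.stores[c] for c in cols])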
The denotation of a DCS tree can now be defined recursively: J⟨p⟩Kw = ⟨⟨{[v] : v ∈w(p)}; ø⟩⟩, (3) J D p; e; j j′ :c E K w = Jp; eKw ▷◁j,j′ JcKw, (4) J⟨p; e; Σ:c⟩Kw = Jp; eKw ▷◁∗,∗Σ (JcKw) , (5) J⟨p; e; Xi :c⟩Kw = Jp; eKw ▷◁∗,∗Xi(JcKw), (6) J⟨p; e; E:c⟩Kw = M(Jp; eKw, E, c), (7) J⟨p; e; C:c⟩Kw = M(Jp; eKw, C, c), (8) J⟨p; Q:c; e⟩Kw = M(Jp; eKw, Q, c). (9) 593 1 1 2 1 1 1 c argmax size state border state J·Kw column 1 column 2 A: (OK) (NM) (NV) · · · (TX,2.7e5) (TX,2.7e5) (CA,1.6e5) · · · r: ø c b: ø J⟨size⟩Kw c: ø J⟨argmax⟩Kw DCS tree Denotation Figure 5: Example of the denotation for a DCS tree with a compare relation C. This denotation has two columns, one for each active node—the root node state and the marked node size. The base case is defined in (3): if z is a single node with predicate p, then the denotation of z has one column with the tuples w(p) and an empty store. The other six cases handle different edge relations. These definitions depend on several operations (▷◁j,j′, Σ, Xi, M) which we will define shortly, but let us first get some intuition. Let z be a DCS tree. If the last child c of z’s root is a join ( j j′), aggregate (Σ), or execute (Xi) relation ((4)–(6)), then we simply recurse on z with c removed and join it with some transformation (identity, Σ, or Xi) of c’s denotation. If the last (or first) child is connected via a mark relation E, C (or Q), then we strip off that child and put the appropriate information in the store by invoking M. We now define the operations ▷◁j,j′, Σ, Xi, M. Some helpful notation: For a sequence v = (v1, . . . , vn) and indices i = (i1, . . . , ik), let vi = (vi1, . . . , vik) be the projection of v onto i; we write v−i to mean v[1,...,n]\i. Extending this notation to denotations, let ⟨⟨A; α⟩⟩[i] = ⟨⟨{ai : a ∈A}; αi⟩⟩. Let d[−ø] = d[−i], where i are the columns with empty stores. For example, for d in Figure 5, d[1] keeps column 1, d[−ø] keeps column 2, and d[2, −2] swaps the two columns. Join The join of two denotations d and d′ with respect to components j and j′ (∗means all components) is formed by concatenating all arrays a of d with all compatible arrays a′ of d′, where compatibility means a1j = a′ 1j′. The stores are also concatenated (α+α′). Non-initial columns with empty stores are projected away by applying ·[1,−ø]. The full definition of join is as follows: ⟨⟨A; α⟩⟩▷◁j,j′ ⟨⟨A′; α′⟩⟩= ⟨⟨A′′; α + α′⟩⟩[1,−ø], A′′ = {a + a′ : a ∈A, a′ ∈A′, a1j = a′ 1j′}. (10) Aggregate The aggregate operation takes a denotation and forms a set out of the tuples in the first column for each setting of the rest of the columns: Σ (⟨⟨A; α⟩⟩) = ⟨⟨A′ ∪A′′; α⟩⟩ (11) A′ = {[S(a), a2, . . . , an] : a ∈A} S(a) = {a′ 1 : [a′ 1, a2, . . . , an] ∈A} A′′ = {[∅, a2, . . . , an] : ¬∃a1, a ∈A, ∀2 ≤i ≤n, [ai] ∈d.bi[1].A}. 2.2.1 Mark and Execute Now we turn to the mark (M) and execute (Xi) operations, which handles the divergence between syntactic and semantic scope. In some sense, this is the technical core of DCS. Marking is simple: When a node (e.g., size in Figure 5) is marked (e.g., with relation C), we simply put the relation r, current denotation d and child c’s denotation into the store of column 1: M(d, r, c) = d{r1 = r, b1 = d, c1 = JcKw}. (12) The execute operation Xi(d) processes columns i in reverse order. It suffices to define Xi(d) for a single column i. There are three cases: Extraction (d.ri = E) In the basic version, the denotation of a tree was always the set of consistent values of the root node. 
Extraction allows us to return the set of consistent values of a marked non-root node. Formally, extraction simply moves the i-th column to the front: Xi(d) = d[i, −(i, ø)]{α1 = ø}. For example, in Figure 4(a), before execution, the denotation of the DCS tree is ⟨⟨{[(CA, OR), (OR)], . . . }; ø; (E, J⟨state⟩Kw, ø)⟩⟩; after applying X1, we have ⟨⟨{[(OR)], . . . }; ø⟩⟩. Generalized Quantification (d.ri = Q) Generalized quantifiers are predicates on two sets, a restrictor A and a nuclear scope B. For example, w(no) = {(A, B) : A ∩B = ∅} and w(most) = {(A, B) : |A ∩B| > 1 2|A|}. In a DCS tree, the quantifier appears as the child of a Q relation, and the restrictor is the parent (see Figure 4(b) for an example). This information is retrieved from the store when the 594 quantifier in column i is executed. In particular, the restrictor is A = Σ (d.bi) and the nuclear scope is B = Σ (d[i, −(i, ø)]). We then apply d.ci to these two sets (technically, denotations) and project away the first column: Xi(d) = ((d.ci ▷◁1,1 A) ▷◁2,1 B) [−1]. For the example in Figure 4(b), the denotation of the DCS tree before execution is ⟨⟨∅; ø; (Q, J⟨state⟩Kw, J⟨no⟩Kw)⟩⟩. The restrictor set (A) is the set of all states, and the nuclear scope (B) is the empty set. Since (A, B) exists in no, the final denotation, which projects away the actual pair, is ⟨⟨{[ ]}⟩⟩(our representation of true). Figure 4(c) shows an example with two interacting quantifiers. The quantifier scope ambiguity is resolved by the choice of execute relation; X12 gives the narrow reading and X21 gives the wide reading. Figure 4(d) shows how extraction and quantification work together. Comparatives and Superlatives (d.ri = C) To compare entities, we use a set S of (x, y) pairs, where x is an entity and y is a number. For superlatives, the argmax predicate denotes pairs of sets and the set’s largest element(s): w(argmax) = {(S, x∗) : x∗∈argmaxx∈S1 max S(x)}. For comparatives, w(more) contains triples (S, x, y), where x is “more than” y as measured by S; formally: w(more) = {(S, x, y) : max S(x) > max S(y)}. In a superlative/comparative construction, the root x of the DCS tree is the entity to be compared, the child c of a C relation is the comparative or superlative, and its parent p contains the information used for comparison (see Figure 4(e) for an example). If d is the denotation of the root, its i-th column contains this information. There are two cases: (i) if the i-th column of d contains pairs (e.g., size in Figure 5), then let d′ = J⟨ø⟩Kw ▷◁1,2 d[i, −i], which reads out the second components of these pairs; (ii) otherwise (e.g., state in Figure 4(e)), let d′ = J⟨ø⟩Kw ▷◁1,2 J⟨count⟩Kw ▷◁1,1 Σ (d[i, −i]), which counts the number of things (e.g., states) that occur with each value of the root x. Given d′, we construct a denotation S by concatenating (+i) the second and first columns of d′ (S = Σ (+2,1 (d′{α2 = ø}))) and apply the superlative/comparative: Xi(d) = (J⟨ø⟩Kw ▷◁1,2 (d.ci ▷◁1,1 S)){α1 = d.α1}. Figure 4(f) shows that comparatives are handled using the exact same machinery as superlatives. Figure 4(g) shows that we can naturally account for superlative ambiguity based on where the scopedetermining execute relation is placed. 3 Semantic Parsing We now turn to the task of mapping natural language utterances to DCS trees. Our first question is: given an utterance x, what trees z ∈Z are permissible? To define the search space, we first assume a fixed set of lexical triggers L. 
Each trigger is a pair (x, p), where x is a sequence of words (usually one) and p is a predicate (e.g., x = California and p = CA). We use L(x) to denote the set of predicates p triggered by x ((x, p) ∈L). Let L(ϵ) be the set of trace predicates, which can be introduced without an overt lexical trigger. Given an utterance x = (x1, . . . , xn), we define ZL(x) ⊂Z, the set of permissible DCS trees for x. The basic approach is reminiscent of projective labeled dependency parsing: For each span i..j, we build a set of trees Ci,j and set ZL(x) = C0,n. Each set Ci,j is constructed recursively by combining the trees of its subspans Ci,k and Ck′,j for each pair of split points k, k′ (words between k and k′ are ignored). These combinations are then augmented via a function A and filtered via a function F, to be specified later. Formally, Ci,j is defined recursively as follows: Ci,j = F A L(xi+1..j) ∪ [ i≤k≤k′<j a∈Ci,k b∈Ck′,j T1(a, b)) . (13) In (13), L(xi+1..j) is the set of predicates triggered by the phrase under span i..j (the base case), and Td(a, b) = ⃗Td(a, b) ∪ ⃗ T d(b, a), which returns all ways of combining trees a and b where b is a descendant of a (⃗Td) or vice-versa ( ⃗ T d). The former is defined recursively as follows: ⃗T0(a, b) = ∅, and ⃗Td(a, b) = [ r∈R p∈L(ϵ) {⟨a; r:b⟩} ∪⃗Td−1(a, ⟨p; r:b⟩). The latter ( ⃗ T k) is defined similarly. Essentially, ⃗Td(a, b) allows us to insert up to d trace predicates between the roots of a and b. This is useful for modeling relations in noun compounds (e.g., 595 California cities), and it also allows us to underspecify L. In particular, our L will not include verbs or prepositions; rather, we rely on the predicates corresponding to those words to be triggered by traces. The augmentation function A takes a set of trees and optionally attaches E and Xi relations to the root (e.g., A(⟨city⟩) = {⟨city⟩, ⟨city; E:ø⟩}). The filtering function F rules out improperly-typed trees such as ⟨city; 0 0:⟨state⟩⟩. To further reduce the search space, F imposes a few additional constraints, e.g., limiting the number of marked nodes to 2 and only allowing trace predicates between arity 1 predicates. Model We now present our discriminative semantic parsing model, which places a log-linear distribution over z ∈ ZL(x) given an utterance x. Formally, pθ(z | x) ∝ eφ(x,z)⊤θ, where θ and φ(x, z) are parameter and feature vectors, respectively. As a running example, consider x = city that is in California and z = ⟨city; 1 1:⟨loc; 2 1:⟨CA⟩⟩⟩, where city triggers city and California triggers CA. To define the features, we technically need to augment each tree z ∈ ZL(x) with alignment information—namely, for each predicate in z, the span in x (if any) that triggered it. This extra information is already generated from the recursive definition in (13). The feature vector φ(x, z) is defined by sums of five simple indicator feature templates: (F1) a word triggers a predicate (e.g., [city, city]); (F2) a word is under a relation (e.g., [that, 1 1]); (F3) a word is under a trace predicate (e.g., [in, loc]); (F4) two predicates are linked via a relation in the left or right direction (e.g., [city, 1 1, loc, RIGHT]); and (F5) a predicate has a child relation (e.g., [city, 1 1]). Learning Given a training dataset D containing (x, y) pairs, we define the regularized marginal log-likelihood objective O(θ) = P (x,y)∈D log pθ(JzKw = y | x, z ∈ ZL(x)) −λ∥θ∥2 2, which sums over all DCS trees z that evaluate to the target answer y. 
Our model is arc-factored, so we can sum over all DCS trees in ZL(x) using dynamic programming. However, in order to learn, we need to sum over {z ∈ZL(x) : JzKw = y}, and unfortunately, the additional constraint JzKw = y does not factorize. We therefore resort to beam search. Specifically, we truncate each Ci,j to a maximum of K candidates sorted by decreasing score based on parameters θ. Let ˜ZL,θ(x) be this approximation of ZL(x). Our learning algorithm alternates between (i) using the current parameters θ to generate the K-best set ˜ZL,θ(x) for each training example x, and (ii) optimizing the parameters to put probability mass on the correct trees in these sets; sets containing no correct answers are skipped. Formally, let ˜O(θ, θ′) be the objective function O(θ) with ZL(x) replaced with ˜ZL,θ′(x). We optimize ˜O(θ, θ′) by setting θ(0) = ⃗0 and iteratively solving θ(t+1) = argmaxθ ˜O(θ, θ(t)) using L-BFGS until t = T. In all experiments, we set λ = 0.01, T = 5, and K = 100. After training, given a new utterance x, our system outputs the most likely y, summing out the latent logical form z: argmaxy pθ(T )(y | x, z ∈˜ZL,θ(T )). 4 Experiments We tested our system on two standard datasets, GEO and JOBS. In each dataset, each sentence x is annotated with a Prolog logical form, which we use only to evaluate and get an answer y. This evaluation is done with respect to a world w. Recall that a world w maps each predicate p ∈P to a set of tuples w(p). There are three types of predicates in P: generic (e.g., argmax), data (e.g., city), and value (e.g., CA). GEO has 48 non-value predicates and JOBS has 26. For GEO, w is the standard US geography database that comes with the dataset. For JOBS, if we use the standard Jobs database, close to half the y’s are empty, which makes it uninteresting. We therefore generated a random Jobs database instead as follows: we created 100 job IDs. For each data predicate p (e.g., language), we add each possible tuple (e.g., (job37, Java)) to w(p) independently with probability 0.8. We used the same training-test splits as Zettlemoyer and Collins (2005) (600+280 for GEO and 500+140 for JOBS). During development, we further held out a random 30% of the training sets for validation. Our lexical triggers L include the following: (i) predicates for a small set of ≈20 function words (e.g., (most, argmax)), (ii) (x, x) for each value 596 System Accuracy Clarke et al. (2010) w/answers 73.2 Clarke et al. (2010) w/logical forms 80.4 Our system (DCS with L) 78.9 Our system (DCS with L+) 87.2 Table 2: Results on GEO with 250 training and 250 test examples. Our results are averaged over 10 random 250+250 splits taken from our 600 training examples. Of the three systems that do not use logical forms, our two systems yield significant improvements. Our better system even outperforms the system that uses logical forms. predicate x in w (e.g., (Boston, Boston)), and (iii) predicates for each POS tag in {JJ, NN, NNS} (e.g., (JJ, size), (JJ, area), etc.).3 Predicates corresponding to verbs and prepositions (e.g., traverse) are not included as overt lexical triggers, but rather in the trace predicates L(ϵ). We also define an augmented lexicon L+ which includes a prototype word x for each predicate appearing in (iii) above (e.g., (large, size)), which cancels the predicates triggered by x’s POS tag. For GEO, there are 22 prototype words; for JOBS, there are 5. Specifying these triggers requires minimal domain-specific supervision. Results We first compare our system with Clarke et al. 
(2010) (henceforth, SEMRESP), which also learns a semantic parser from question-answer pairs. Table 2 shows that our system using lexical triggers L (henceforth, DCS) outperforms SEMRESP (78.9% over 73.2%). In fact, although neither DCS nor SEMRESP uses logical forms, DCS uses even less supervision than SEMRESP. SEMRESP requires a lexicon of 1.42 words per non-value predicate, WordNet features, and syntactic parse trees; DCS requires only words for the domain-independent predicates (overall, around 0.5 words per non-value predicate), POS tags, and very simple indicator features. In fact, DCS performs comparably to even the version of SEMRESP trained using logical forms. If we add prototype triggers (use L+), the resulting system (DCS+) outperforms both versions of SEMRESP by a significant margin (87.2% over 73.2% and 80.4%). 3We used the Berkeley Parser (Petrov et al., 2006) to perform POS tagging. The triggers L(x) for a word x thus include L(t) where t is the POS tag of x. System GEO JOBS Tang and Mooney (2001) 79.4 79.8 Wong and Mooney (2007) 86.6 – Zettlemoyer and Collins (2005) 79.3 79.3 Zettlemoyer and Collins (2007) 81.6 – Kwiatkowski et al. (2010) 88.2 – Kwiatkowski et al. (2010) 88.9 – Our system (DCS with L) 88.6 91.4 Our system (DCS with L+) 91.1 95.0 Table 3: Accuracy (recall) of systems on the two benchmarks. The systems are divided into three groups. Group 1 uses 10-fold cross-validation; groups 2 and 3 use the independent test set. Groups 1 and 2 measure accuracy of logical form; group 3 measures accuracy of the answer; but there is very small difference between the two as seen from the Kwiatkowski et al. (2010) numbers. Our best system improves substantially over past work, despite using no logical forms as training data. Next, we compared our systems (DCS and DCS+) with the state-of-the-art semantic parsers on the full dataset for both GEO and JOBS (see Table 3). All other systems require logical forms as training data, whereas ours does not. Table 3 shows that even DCS, which does not use prototypes, is comparable to the best previous system (Kwiatkowski et al., 2010), and by adding a few prototypes, DCS+ offers a decisive edge (91.1% over 88.9% on GEO). Rather than using lexical triggers, several of the other systems use IBM word alignment models to produce an initial word-predicate mapping. This option is not available to us since we do not have annotated logical forms, so we must instead rely on lexical triggers to define the search space. Note that having lexical triggers is a much weaker requirement than having a CCG lexicon, and far easier to obtain than logical forms. Intuitions How is our system learning? Initially, the weights are zero, so the beam search is essentially unguided. We find that only for a small fraction of training examples do the K-best sets contain any trees yielding the correct answer (29% for DCS on GEO). However, training on just these examples is enough to improve the parameters, and this 29% increases to 66% and then to 95% over the next few iterations. This bootstrapping behavior occurs naturally: The “easy” examples are processed first, where easy is defined by the ability of the current 597 model to generate the correct answer using any tree. Our system learns lexical associations between words and predicates. For example, area (by virtue of being a noun) triggers many predicates: city, state, area, etc. Inspecting the final parameters (DCS on GEO), we find that the feature [area, area] has a much higher weight than [area, city]. 
Trace predicates can be inserted anywhere, but the features favor some insertions depending on the words present (for example, [in, loc] has high weight). The errors that the system makes stem from multiple sources, including errors in the POS tags (e.g., states is sometimes tagged as a verb, which triggers no predicates), confusion of Washington state with Washington D.C., learning the wrong lexical associations due to data sparsity, and having an insufficiently large K. 5 Discussion A major focus of this work is on our semantic representation, DCS, which offers a new perspective on compositional semantics. To contrast, consider CCG (Steedman, 2000), in which semantic parsing is driven from the lexicon. The lexicon encodes information about how each word can used in context; for example, the lexical entry for borders is S\NP/NP : λy.λx.border(x, y), which means borders looks right for the first argument and left for the second. These rules are often too stringent, and for complex utterances, especially in free wordorder languages, either disharmonic combinators are employed (Zettlemoyer and Collins, 2007) or words are given multiple lexical entries (Kwiatkowski et al., 2010). In DCS, we start with lexical triggers, which are more basic than CCG lexical entries. A trigger for borders specifies only that border can be used, but not how. The combination rules are encoded in the features as soft preferences. This yields a more factorized and flexible representation that is easier to search through and parametrize using features. It also allows us to easily add new lexical triggers without becoming mired in the semantic formalism. Quantifiers and superlatives significantly complicate scoping in lambda calculus, and often type raising needs to be employed. In DCS, the mark-execute construct provides a flexible framework for dealing with scope variation. Think of DCS as a higher-level programming language tailored to natural language, which results in programs (DCS trees) which are much simpler than the logically-equivalent lambda calculus formulae. The idea of using CSPs to represent semantics is inspired by Discourse Representation Theory (DRT) (Kamp and Reyle, 1993; Kamp et al., 2005), where variables are discourse referents. The restriction to trees is similar to economical DRT (Bos, 2009). The other major focus of this work is program induction—inferring logical forms from their denotations. There has been a fair amount of past work on this topic: Liang et al. (2010) induces combinatory logic programs in a non-linguistic setting. Eisenstein et al. (2009) induces conjunctive formulae and uses them as features in another learning problem. Piantadosi et al. (2008) induces first-order formulae using CCG in a small domain assuming observed lexical semantics. The closest work to ours is Clarke et al. (2010), which we discussed earlier. The integration of natural language with denotations computed against a world (grounding) is becoming increasingly popular. Feedback from the world has been used to guide both syntactic parsing (Schuler, 2003) and semantic parsing (Popescu et al., 2003; Clarke et al., 2010). Past work has also focused on aligning text to a world (Liang et al., 2009), using text in reinforcement learning (Branavan et al., 2009; Branavan et al., 2010), and many others. Our work pushes the grounded language agenda towards deeper representations of language—think grounded compositional semantics. 
6 Conclusion We built a system that interprets natural language utterances much more accurately than existing systems, despite using no annotated logical forms. Our system is based on a new semantic representation, DCS, which offers a simple and expressive alternative to lambda calculus. Free from the burden of annotating logical forms, we hope to use our techniques in developing even more accurate and broader-coverage language understanding systems. Acknowledgments We thank Luke Zettlemoyer and Tom Kwiatkowski for providing us with data and answering questions. 598 References J. Bos. 2009. A controlled fragment of DRT. In Workshop on Controlled Natural Language, pages 1–5. S. Branavan, H. Chen, L. S. Zettlemoyer, and R. Barzilay. 2009. Reinforcement learning for mapping instructions to actions. In Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACL-IJCNLP), Singapore. Association for Computational Linguistics. S. Branavan, L. Zettlemoyer, and R. Barzilay. 2010. Reading between the lines: Learning to map high-level instructions to commands. In Association for Computational Linguistics (ACL). Association for Computational Linguistics. B. Carpenter. 1998. Type-Logical Semantics. MIT Press. J. Clarke, D. Goldwasser, M. Chang, and D. Roth. 2010. Driving semantic parsing from the world’s response. In Computational Natural Language Learning (CoNLL). R. Dechter. 2003. Constraint Processing. Morgan Kaufmann. J. Eisenstein, J. Clarke, D. Goldwasser, and D. Roth. 2009. Reading to learn: Constructing features from semantic abstracts. In Empirical Methods in Natural Language Processing (EMNLP), Singapore. R. Ge and R. J. Mooney. 2005. A statistical semantic parser that integrates syntax and semantics. In Computational Natural Language Learning (CoNLL), pages 9–16, Ann Arbor, Michigan. H. Kamp and U. Reyle. 1993. From Discourse to Logic: An Introduction to the Model-theoretic Semantics of Natural Language, Formal Logic and Discourse Representation Theory. Kluwer, Dordrecht. H. Kamp, J. v. Genabith, and U. Reyle. 2005. Discourse representation theory. In Handbook of Philosophical Logic. R. J. Kate and R. J. Mooney. 2007. Learning language semantics from ambiguous supervision. In Association for the Advancement of Artificial Intelligence (AAAI), pages 895–900, Cambridge, MA. MIT Press. R. J. Kate, Y. W. Wong, and R. J. Mooney. 2005. Learning to transform natural to formal languages. In Association for the Advancement of Artificial Intelligence (AAAI), pages 1062–1068, Cambridge, MA. MIT Press. T. Kwiatkowski, L. Zettlemoyer, S. Goldwater, and M. Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higher-order unification. In Empirical Methods in Natural Language Processing (EMNLP). P. Liang, M. I. Jordan, and D. Klein. 2009. Learning semantic correspondences with less supervision. In Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACL-IJCNLP), Singapore. Association for Computational Linguistics. P. Liang, M. I. Jordan, and D. Klein. 2010. Learning programs: A hierarchical Bayesian approach. In International Conference on Machine Learning (ICML). Omnipress. S. Petrov, L. Barrett, R. Thibaux, and D. Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In International Conference on Computational Linguistics and Association for Computational Linguistics (COLING/ACL), pages 433–440. Association for Computational Linguistics. S. T. 
Piantadosi, N. D. Goodman, B. A. Ellis, and J. B. Tenenbaum. 2008. A Bayesian model of the acquisition of compositional semantics. In Proceedings of the Thirtieth Annual Conference of the Cognitive Science Society. H. Poon and P. Domingos. 2009. Unsupervised semantic parsing. In Empirical Methods in Natural Language Processing (EMNLP), Singapore. A. Popescu, O. Etzioni, and H. Kautz. 2003. Towards a theory of natural language interfaces to databases. In International Conference on Intelligent User Interfaces (IUI). W. Schuler. 2003. Using model-theoretic semantic interpretation to guide statistical parsing and word recognition in a spoken language interface. In Association for Computational Linguistics (ACL). Association for Computational Linguistics. M. Steedman. 2000. The Syntactic Process. MIT Press. L. R. Tang and R. J. Mooney. 2001. Using multiple clause constructors in inductive logic programming for semantic parsing. In European Conference on Machine Learning, pages 466–477. Y. W. Wong and R. J. Mooney. 2007. Learning synchronous grammars for semantic parsing with lambda calculus. In Association for Computational Linguistics (ACL), pages 960–967, Prague, Czech Republic. Association for Computational Linguistics. M. Zelle and R. J. Mooney. 1996. Learning to parse database queries using inductive logic proramming. In Association for the Advancement of Artificial Intelligence (AAAI), Cambridge, MA. MIT Press. L. S. Zettlemoyer and M. Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Uncertainty in Artificial Intelligence (UAI), pages 658–666. L. S. Zettlemoyer and M. Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP/CoNLL), pages 678–687. 599
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 600–609, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Unsupervised Part-of-Speech Tagging with Bilingual Graph-Based Projections Dipanjan Das∗ Carnegie Mellon University Pittsburgh, PA 15213, USA [email protected] Slav Petrov Google Research New York, NY 10011, USA [email protected] Abstract We describe a novel approach for inducing unsupervised part-of-speech taggers for languages that have no labeled training data, but have translated text in a resource-rich language. Our method does not assume any knowledge about the target language (in particular no tagging dictionary is assumed), making it applicable to a wide array of resource-poor languages. We use graph-based label propagation for cross-lingual knowledge transfer and use the projected labels as features in an unsupervised model (BergKirkpatrick et al., 2010). Across eight European languages, our approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. 1 Introduction Supervised learning approaches have advanced the state-of-the-art on a variety of tasks in natural language processing, resulting in highly accurate systems. Supervised part-of-speech (POS) taggers, for example, approach the level of inter-annotator agreement (Shen et al., 2007, 97.3% accuracy for English). However, supervised methods rely on labeled training data, which is time-consuming and expensive to generate. Unsupervised learning approaches appear to be a natural solution to this problem, as they require only unannotated text for train∗This research was carried out during an internship at Google Research. ing models. Unfortunately, the best completely unsupervised English POS tagger (that does not make use of a tagging dictionary) reaches only 76.1% accuracy (Christodoulopoulos et al., 2010), making its practical usability questionable at best. To bridge this gap, we consider a practically motivated scenario, in which we want to leverage existing resources from a resource-rich language (like English) when building tools for resource-poor foreign languages.1 We assume that absolutely no labeled training data is available for the foreign language of interest, but that we have access to parallel data with a resource-rich language. This scenario is applicable to a large set of languages and has been considered by a number of authors in the past (Alshawi et al., 2000; Xi and Hwa, 2005; Ganchev et al., 2009). Naseem et al. (2009) and Snyder et al. (2009) study related but different multilingual grammar and tagger induction tasks, where it is assumed that no labeled data at all is available. Our work is closest to that of Yarowsky and Ngai (2001), but differs in two important ways. First, we use a novel graph-based framework for projecting syntactic information across language boundaries. To this end, we construct a bilingual graph over word types to establish a connection between the two languages (§3), and then use graph label propagation to project syntactic information from English to the foreign language (§4). Second, we treat the projected labels as features in an unsuper1For simplicity of exposition we refer to the resource-poor language as the “foreign language.” Similarly, we use English as the resource-rich language, but any other language with labeled resources could be used instead. 
600 vised model (§5), rather than using them directly for supervised training. To make the projection practical, we rely on the twelve universal part-of-speech tags of Petrov et al. (2011). Syntactic universals are a well studied concept in linguistics (Carnie, 2002; Newmeyer, 2005), and were recently used in similar form by Naseem et al. (2010) for multilingual grammar induction. Because there might be some controversy about the exact definitions of such universals, this set of coarse-grained POS categories is defined operationally, by collapsing language (or treebank) specific distinctions to a set of categories that exists across all languages. These universal POS categories not only facilitate the transfer of POS information from one language to another, but also relieve us from using controversial evaluation metrics,2 by establishing a direct correspondence between the induced hidden states in the foreign language and the observed English labels. We evaluate our approach on eight European languages (§6), and show that both our contributions provide consistent and statistically significant improvements. Our final average POS tagging accuracy of 83.4% compares very favorably to the average accuracy of Berg-Kirkpatrick et al.’s monolingual unsupervised state-of-the-art model (73.0%), and considerably bridges the gap to fully supervised POS tagging performance (96.6%). 2 Approach Overview The focus of this work is on building POS taggers for foreign languages, assuming that we have an English POS tagger and some parallel text between the two languages. Central to our approach (see Algorithm 1) is a bilingual similarity graph built from a sentence-aligned parallel corpus. As discussed in more detail in §3, we use two types of vertices in our graph: on the foreign language side vertices correspond to trigram types, while the vertices on the English side are individual word types. Graph construction does not require any labeled data, but makes use of two similarity functions. The edge weights between the foreign language trigrams are computed using a co-occurence based similarity function, designed to indicate how syntactically 2See Christodoulopoulos et al. (2010) for a discussion of metrics for evaluating unsupervised POS induction systems. Algorithm 1 Bilingual POS Induction Require: Parallel English and foreign language data De and Df, unlabeled foreign training data Γf; English tagger. Ensure: Θf, a set of parameters learned using a constrained unsupervised model (§5). 1: De↔f ←word-align-bitext(De, Df) 2: c De ←pos-tag-supervised(De) 3: A ←extract-alignments(De↔f, c De) 4: G ←construct-graph(Γf, Df, A) 5: ˜G ←graph-propagate(G) 6: ∆←extract-word-constraints(˜G) 7: Θf ←pos-induce-constrained(Γf, ∆) 8: Return Θf similar the middle words of the connected trigrams are (§3.2). To establish a soft correspondence between the two languages, we use a second similarity function, which leverages standard unsupervised word alignment statistics (§3.3).3 Since we have no labeled foreign data, our goal is to project syntactic information from the English side to the foreign side. To initialize the graph we tag the English side of the parallel text using a supervised model. By aggregating the POS labels of the English tokens to types, we can generate label distributions for the English vertices. Label propagation can then be used to transfer the labels to the peripheral foreign vertices (i.e. the ones adjacent to the English vertices) first, and then among all of the foreign vertices (§4). 
The POS distributions over the foreign trigram types are used as features to learn a better unsupervised POS tagger (§5). The following three sections elaborate these different stages is more detail. 3 Graph Construction In graph-based learning approaches one constructs a graph whose vertices are labeled and unlabeled examples, and whose weighted edges encode the degree to which the examples they link have the same label (Zhu et al., 2003). Graph construction for structured prediction problems such as POS tagging is non-trivial: on the one hand, using individual words as the vertices throws away the context 3The word alignment methods do not use POS information. 601 necessary for disambiguation; on the other hand, it is unclear how to define (sequence) similarity if the vertices correspond to entire sentences. Altun et al. (2005) proposed a technique that uses graph based similarity between labeled and unlabeled parts of structured data in a discriminative framework for semi-supervised learning. More recently, Subramanya et al. (2010) defined a graph over the cliques in an underlying structured prediction model. They considered a semi-supervised POS tagging scenario and showed that one can use a graph over trigram types, and edge weights based on distributional similarity, to improve a supervised conditional random field tagger. 3.1 Graph Vertices We extend Subramanya et al.’s intuitions to our bilingual setup. Because the information flow in our graph is asymmetric (from English to the foreign language), we use different types of vertices for each language. The foreign language vertices (denoted by Vf) correspond to foreign trigram types, exactly as in Subramanya et al. (2010). On the English side, however, the vertices (denoted by Ve) correspond to word types. Because all English vertices are going to be labeled, we do not need to disambiguate them by embedding them in trigrams. Furthermore, we do not connect the English vertices to each other, but only to foreign language vertices.4 The graph vertices are extracted from the different sides of a parallel corpus (De, Df) and an additional unlabeled monolingual foreign corpus Γf, which will be used later for training. We use two different similarity functions to define the edge weights among the foreign vertices and between vertices from different languages. 3.2 Monolingual Similarity Function Our monolingual similarity function (for connecting pairs of foreign trigram types) is the same as the one used by Subramanya et al. (2010). We briefly review it here for completeness. We define a symmetric similarity function K(ui, uj) over two for4This is because we are primarily interested in learning foreign language taggers, rather than improving supervised English taggers. Note, however, that it would be possible to use our graph-based framework also for completely unsupervised POS induction in both languages, similar to Snyder et al. (2009). Description Feature Trigram + Context x1 x2 x3 x4 x5 Trigram x2 x3 x4 Left Context x1 x2 Right Context x4 x5 Center Word x3 Trigram −Center Word x2 x4 Left Word + Right Context x2 x4 x5 Left Context + Right Word x1 x2 x4 Suffix HasSuffix(x3) Table 1: Various features used for computing edge weights between foreign trigram types. eign language vertices ui, uj ∈Vf based on the co-occurrence statistics of the nine feature concepts given in Table 1. Each feature concept is akin to a random variable and its occurrence in the text corresponds to a particular instantiation of that random variable. 
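To make the nine feature concepts of Table 1 concrete, the following sketch (a minimal illustration, not the authors' code) enumerates the instantiations observed for one occurrence of the trigram type x2 x3 x4 in its five-word context; the co-occurrence counts of trigram types with these instantiations are what the PMI computation described next operates on. The suffix lengths are an assumption, since the paper only lists HasSuffix(x3).

```python
def feature_instantiations(x1, x2, x3, x4, x5, max_suffix_len=3):
    """Instantiate the nine feature concepts of Table 1 for one occurrence of
    the trigram type (x2, x3, x4) in context x1 ... x5. Counting how often each
    trigram type co-occurs with each instantiation feeds the PMI scores."""
    feats = [
        ("trigram+context", (x1, x2, x3, x4, x5)),
        ("trigram", (x2, x3, x4)),
        ("left_context", (x1, x2)),
        ("right_context", (x4, x5)),
        ("center_word", (x3,)),
        ("trigram-center_word", (x2, x4)),
        ("left_word+right_context", (x2, x4, x5)),
        ("left_context+right_word", (x1, x2, x4)),
    ]
    # Suffix concept: the paper lists HasSuffix(x3); the suffix lengths used
    # here are an assumption of this sketch.
    feats += [("suffix", x3[-k:]) for k in range(1, max_suffix_len + 1) if len(x3) > k]
    return feats
```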
For each trigram type x2 x3 x4 in a sequence x1 x2 x3 x4 x5, we count how many times that trigram type co-occurs with the different instantiations of each concept, and compute the point-wise mutual information (PMI) between the two.5 The similarity between two trigram types is given by summing over the PMI values over feature instantiations that they have in common. This is similar to stacking the different feature instantiations into long (sparse) vectors and computing the cosine similarity between them. Finally, note that while most feature concepts are lexicalized, others, such as the suffix concept, are not. Given this similarity function, we define a nearest neighbor graph, where the edge weight for the n most similar vertices is set to the value of the similarity function and to 0 for all other vertices. We use N(u) to denote the neighborhood of vertex u, and fixed n = 5 in our experiments. 3.3 Bilingual Similarity Function To define a similarity function between the English and the foreign vertices, we rely on high-confidence word alignments. Since our graph is built from a parallel corpus, we can use standard word alignment techniques to align the English sentences De 5Note that many combinations are impossible giving a PMI value of 0; e.g., when the trigram type and the feature instantiation don’t have words in common. 602 and their foreign language translations Df.6 Label propagation in the graph will provide coverage and high recall, and we therefore extract only intersected high-confidence (> 0.9) alignments De↔f. Based on these high-confidence alignments we can extract tuples of the form [u ↔v], where u is a foreign trigram type, whose middle word aligns to an English word type v. Our bilingual similarity function then sets the edge weights in proportion to these tuple counts. 3.4 Graph Initialization So far the graph has been completely unlabeled. To initialize the graph for label propagation we use a supervised English tagger to label the English side of the bitext.7 We then simply count the individual labels of the English tokens and normalize the counts to produce tag distributions over English word types. These tag distributions are used to initialize the label distributions over the English vertices in the graph. Note that since all English vertices were extracted from the parallel text, we will have an initial label distribution for all vertices in Ve. 3.5 Graph Example A very small excerpt from an Italian-English graph is shown in Figure 1. As one can see, only the trigrams [suo incarceramento ,], [suo iter ,] and [suo carattere ,] are connected to English words. In this particular case, all English vertices are labeled as nouns by the supervised tagger. In general, the neighborhoods can be more diverse and we allow a soft label distribution over the vertices. It is worth noting that the middle words of the Italian trigrams are nouns too, which exhibits the fact that the similarity metric connects types having the same syntactic category. In the label propagation stage, we propagate the automatic English tags to the aligned Italian trigram types, followed by further propagation solely among the Italian vertices. 6We ran six iterations of IBM Model 1 (Brown et al., 1993), followed by six iterations of the HMM model (Vogel et al., 1996) in both directions. 
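Putting sections 3.2 to 3.4 together, graph construction can be sketched as follows. The cosine over positive-PMI vectors is one concrete reading of the similarity description in section 3.2, the nearest-neighbour cut-off follows n = 5, bilingual edge weights are set in proportion to high-confidence alignment counts, and English vertices are initialized with normalized tag counts from the supervised tagger. This is a simplified reconstruction under those assumptions, not the original implementation.

```python
import math
from collections import Counter, defaultdict

def pmi_vectors(cooc):
    """cooc[u][f] = #(foreign trigram type u, feature instantiation f).
    Builds one sparse PMI vector per trigram type; keeping only positive PMI
    values is an assumption of this sketch."""
    trig_tot = {u: sum(fs.values()) for u, fs in cooc.items()}
    feat_tot = Counter()
    for fs in cooc.values():
        feat_tot.update(fs)
    total = float(sum(trig_tot.values()))
    vecs = {}
    for u, fs in cooc.items():
        vec = {}
        for f, c in fs.items():
            pmi = math.log(c * total / (trig_tot[u] * feat_tot[f]))
            if pmi > 0:
                vec[f] = pmi
        vecs[u] = vec
    return vecs

def cosine(a, b):
    num = sum(a[f] * b[f] for f in set(a) & set(b))
    den = math.sqrt(sum(x * x for x in a.values())) * math.sqrt(sum(x * x for x in b.values()))
    return num / den if den else 0.0

def monolingual_edges(vecs, n=5):
    """Nearest-neighbour sparsification (n = 5 in the paper); brute force for clarity."""
    edges = defaultdict(dict)
    types = list(vecs)
    for u in types:
        sims = sorted(((cosine(vecs[u], vecs[v]), v) for v in types if v != u), reverse=True)
        for s, v in sims[:n]:
            if s > 0:
                edges[u][v] = s
    return edges

def bilingual_edges(aligned_pairs):
    """Weight of edge (u, v) is proportional to the number of high-confidence
    (> 0.9) alignments between the middle word of foreign trigram u and
    English word type v; aligned_pairs is an iterable of such (u, v) tuples."""
    return Counter(aligned_pairs)

def english_label_distributions(tagged_tokens):
    """Graph initialization (section 3.4): normalized tag counts per English word type."""
    counts = defaultdict(Counter)
    for word, tag in tagged_tokens:
        counts[word][tag] += 1
    return {w: {t: c / sum(tc.values()) for t, c in tc.items()} for w, tc in counts.items()}
```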
7We used a tagger based on a trigram Markov model (Brants, 2000) trained on the Wall Street Journal portion of the Penn Treebank (Marcus et al., 1993), for its fast speed and reasonable accuracy (96.7% on sections 22-24 of the treebank, but presumably much lower on the (out-of-domain) parallel corpus). [ suo iter , ] [ suo incarceramento , ] [ suo fidanzato , ] [ suo carattere , ] [ imprisonment ] [ enactment ] [ character ] [ del fidanzato , ] [ il fidanzato , ] NOUN NOUN NOUN [ al fidanzato e ] Figure 1: An excerpt from the graph for Italian. Three of the Italian vertices are connected to an automatically labeled English vertex. Label propagation is used to propagate these tags inwards and results in tag distributions for the middle word of each Italian trigram. 4 POS Projection Given the bilingual graph described in the previous section, we can use label propagation to project the English POS labels to the foreign language. We use label propagation in two stages to generate soft labels on all the vertices in the graph. In the first stage, we run a single step of label propagation, which transfers the label distributions from the English vertices to the connected foreign language vertices (say, V l f) at the periphery of the graph. Note that because we extracted only high-confidence alignments, many foreign vertices will not be connected to any English vertices. This stage of label propagation results in a tag distribution ri over labels y, which encodes the proportion of times the middle word of ui ∈Vf aligns to English words vy tagged with label y: ri(y) = X vy #[ui ↔vy] X y′ X vy′ #[ui ↔vy′] (1) The second stage consists of running traditional label propagation to propagate labels from these peripheral vertices V l f to all foreign language vertices 603 in the graph, optimizing the following objective: C(q) = X ui∈Vf\V l f,uj∈N(ui) wij∥qi −qj∥2 + ν X ui∈Vf\V l f ∥qi −U∥2 s.t. X y qi(y) = 1 ∀ui qi(y) ≥0 ∀ui, y qi = ri ∀ui ∈V l f (2) where the qi (i = 1, . . . , |Vf|) are the label distributions over the foreign language vertices and µ and ν are hyperparameters that we discuss in §6.4. We use a squared loss to penalize neighboring vertices that have different label distributions: ∥qi −qj∥2 = P y(qi(y) −qj(y))2, and additionally regularize the label distributions towards the uniform distribution U over all possible labels Y. It can be shown that this objective is convex in q. The first term in the objective function is the graph smoothness regularizer which encourages the distributions of similar vertices (large wij) to be similar. The second term is a regularizer and encourages all type marginals to be uniform to the extent that is allowed by the first two terms (cf. maximum entropy principle). If an unlabeled vertex does not have a path to any labeled vertex, this term ensures that the converged marginal for this vertex will be uniform over all tags, allowing the middle word of such an unlabeled vertex to take on any of the possible tags. While it is possible to derive a closed form solution for this convex objective function, it would require the inversion of a matrix of order |Vf|. Instead, we resort to an iterative update based method. We formulate the update as follows: q(m) i (y) = ri(y) if ui ∈V l f γi(y) κi otherwise (3) where ∀ui ∈Vf \ V l f, γi(y) and κi are defined as: γi(y) = X uj∈N(ui) wijq(m−1) j (y) + ν U(y) (4) κi = ν + X uj∈N(ui) wij (5) We ran this procedure for 10 iterations. 
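A minimal sketch of the two propagation stages, following Eq. 1 for the peripheral vertices and the iterative updates of Eq. 3-5 for the remaining ones (ν = 2 × 10−6 and 10 iterations, as reported in section 6.4). The uniform initialization of unlabeled vertices and the data structures are assumptions of the sketch.

```python
def stage_one(align_counts, tags):
    """Eq. 1: r_i(y) is the proportion of times the middle word of foreign
    trigram u_i aligns to English words tagged y. align_counts[u][y] = count."""
    r = {}
    for u, tag_counts in align_counts.items():
        z = float(sum(tag_counts.values()))
        r[u] = {y: tag_counts.get(y, 0) / z for y in tags}
    return r

def stage_two(W, r, tags, nu=2e-6, iters=10):
    """Eq. 3-5: iterative updates for the non-peripheral foreign vertices.
    W[u][v] is the monolingual edge weight w_uv for v in N(u); W is assumed to
    have an entry (possibly empty) for every foreign vertex."""
    U = 1.0 / len(tags)                                # uniform distribution over tags
    # Peripheral vertices start (and stay) at r_i; the uniform start for the
    # remaining vertices is an assumption of this sketch.
    q = {u: dict(r[u]) if u in r else {y: U for y in tags} for u in W}
    for _ in range(iters):
        new_q = {}
        for u in W:
            if u in r:                                 # clamped: q_i = r_i
                new_q[u] = dict(r[u])
                continue
            kappa = nu + sum(W[u].values())            # Eq. 5
            new_q[u] = {y: (sum(W[u][v] * q[v][y] for v in W[u]) + nu * U) / kappa
                        for y in tags}                 # Eq. 3-4
        q = new_q
    return q
```

If a vertex has no path to any labeled vertex, the ν-term keeps its converged distribution at U, matching the behaviour described above.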
5 POS Induction After running label propagation (LP), we compute tag probabilities for foreign word types x by marginalizing the POS tag distributions of foreign trigrams ui = x−x x+ over the left and right context words: p(y|x) = X x−,x+ qi(y) X x−,x+,y′ qi(y′) (6) We then extract a set of possible tags tx(y) by eliminating labels whose probability is below a threshold value τ: tx(y) = ( 1 if p(y|x) ≥τ 0 otherwise (7) We describe how we choose τ in §6.4. This vector tx is constructed for every word in the foreign vocabulary and will be used to provide features for the unsupervised foreign language POS tagger. We develop our POS induction model based on the feature-based HMM of Berg-Kirkpatrick et al. (2010). For a sentence x and a state sequence z, a first order Markov model defines a distribution: PΘ(X = x, Z = z) = PΘ(Z1 = z1)· Q|x| i=1 PΘ(Zi+1 = zi+1 | Zi = zi) | {z } transition · PΘ(Xi = xi | Zi = zi) | {z } emission (8) In a traditional Markov model, the emission distribution PΘ(Xi = xi | Zi = zi) is a set of multinomials. The feature-based model replaces the emission distribution with a log-linear model, such that: PΘ(X = x | Z = z) = exp Θ⊤f(x, z) X x′∈Val(X) exp Θ⊤f(x′, z) (9) where Val(X) corresponds to the entire vocabulary. This locally normalized log-linear model can look at various aspects of the observation x, incorporating overlapping features of the observation. In our experiments, we used the same set of features as BergKirkpatrick et al. (2010): an indicator feature based 604 on the word identity x, features checking whether x contains digits or hyphens, whether the first letter of x is upper case, and suffix features up to length 3. All features were conjoined with the state z. We trained this model by optimizing the following objective function: L(Θ) = N X i=1 log X z PΘ(X = x(i), Z = z(i)) −C∥Θ∥2 2 (10) Note that this involves marginalizing out all possible state configurations z for a sentence x, resulting in a non-convex objective. To optimize this function, we used L-BFGS, a quasi-Newton method (Liu and Nocedal, 1989). For English POS tagging, BergKirkpatrick et al. (2010) found that this direct gradient method performed better (>7% absolute accuracy) than using a feature-enhanced modification of the Expectation-Maximization (EM) algorithm (Dempster et al., 1977).8 Moreover, this route of optimization outperformed a vanilla HMM trained with EM by 12%. We adopted this state-of-the-art model because it makes it easy to experiment with various ways of incorporating our novel constraint feature into the log-linear emission model. This feature ft incorporates information from the smoothed graph and prunes hidden states that are inconsistent with the thresholded vector tx. The function λ : F →C maps from the language specific fine-grained tagset F to the coarser universal tagset C and is described in detail in §6.2: ft(x, z) = log(tx(y)), if λ(z) = y (11) Note that when tx(y) = 1 the feature value is 0 and has no effect on the model, while its value is −∞when tx(y) = 0 and constrains the HMM’s state space. This formulation of the constraint feature is equivalent to the use of a tagging dictionary extracted from the graph using a threshold τ on the posterior distribution of tags for a given word type (Eq. 7). It would have therefore also been possible to use the integer programming (IP) based approach of 8See §3.1 of Berg-Kirkpatrick et al. (2010) for more details about their modification of EM, and how gradients are computed for L-BFGS. 
Ravi and Knight (2009) instead of the feature-HMM for POS induction on the foreign side. However, we do not explore this possibility in the current work. 6 Experiments and Results Before presenting our results, we describe the datasets that we used, as well as two baselines. 6.1 Datasets We utilized two kinds of datasets in our experiments: (i) monolingual treebanks9 and (ii) large amounts of parallel text with English on one side. The availability of these resources guided our selection of foreign languages. For monolingual treebank data we relied on the CoNLL-X and CoNLL-2007 shared tasks on dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007). The parallel data came from the Europarl corpus (Koehn, 2005) and the ODS United Nations dataset (UN, 2006). Taking the intersection of languages in these resources, and selecting languages with large amounts of parallel data, yields the following set of eight Indo-European languages: Danish, Dutch, German, Greek, Italian, Portuguese, Spanish and Swedish. Of course, we are primarily interested in applying our techniques to languages for which no labeled resources are available. However, we needed to restrict ourselves to these languages in order to be able to evaluate the performance of our approach. We paid particular attention to minimize the number of free parameters, and used the same hyperparameters for all language pairs, rather than attempting language-specific tuning. We hope that this will allow practitioners to apply our approach directly to languages for which no resources are available. 6.2 Part-of-Speech Tagset and HMM States We use the universal POS tagset of Petrov et al. (2011) in our experiments.10 This set C consists of the following 12 coarse-grained tags: NOUN (nouns), VERB (verbs), ADJ (adjectives), ADV (adverbs), PRON (pronouns), DET (determiners), ADP (prepositions or postpositions), NUM (numerals), CONJ (conjunctions), PRT (particles), PUNC 9We extracted only the words and their POS tags from the treebanks. 10Available at http://code.google.com/p/universal-pos-tags/. 605 (punctuation marks) and X (a catch-all for other categories such as abbreviations or foreign words). While there might be some controversy about the exact definition of such a tagset, these 12 categories cover the most frequent part-of-speech and exist in one form or another in all of the languages that we studied. For each language under consideration, Petrov et al. (2011) provide a mapping λ from the fine-grained language specific POS tags in the foreign treebank to the universal POS tags. The supervised POS tagging accuracies (on this tagset) are shown in the last row of Table 2. The taggers were trained on datasets labeled with the universal tags. The number of latent HMM states for each language in our experiments was set to the number of fine tags in the language’s treebank. In other words, the set of hidden states F was chosen to be the fine set of treebank tags. Therefore, the number of fine tags varied across languages for our experiments; however, one could as well have fixed the set of HMM states to be a constant across languages, and created one mapping to the universal POS tagset. 6.3 Various Models To provide a thorough analysis, we evaluated three baselines and two oracles in addition to two variants of our graph-based approach. We were intentionally lenient with our baselines: • EM-HMM: A traditional HMM baseline, with multinomial emission and transition distributions estimated by the Expectation Maximization algorithm. 
We evaluated POS tagging accuracy using the lenient many-to-1 evaluation approach (Johnson, 2007). • Feature-HMM: The vanilla feature-HMM of Berg-Kirkpatrick et al. (2010) (i.e. no additional constraint feature) served as a second baseline. Model parameters were estimated with L-BFGS and evaluation again used a greedy many-to-1 mapping. • Projection: Our third baseline incorporates bilingual information by projecting POS tags directly across alignments in the parallel data. For unaligned words, we set the tag to the most frequent tag in the corresponding treebank. For each language, we took the same number of sentences from the bitext as there are in its treebank, and trained a supervised feature-HMM. This can be seen as a rough approximation of Yarowsky and Ngai (2001). We tried two versions of our graph-based approach: • No LP: Our first version takes advantage of our bilingual graph, but extracts the constraint feature after the first stage of label propagation (Eq. 1). Because many foreign word types are not aligned to an English word (see Table 3), and we do not run label propagation on the foreign side, we expect the projected information to have less coverage. Furthermore we expect the label distributions on the foreign to be fairly noisy, because the graph constraints have not been taken into account yet. • With LP: Our full model uses both stages of label propagation (Eq. 2) before extracting the constraint features. As a result, we are able to extract the constraint feature for all foreign word types and furthermore expect the projected tag distributions to be smoother and more stable. Our oracles took advantage of the labeled treebanks: • TB Dictionary: We extracted tagging dictionaries from the treebanks and and used them as constraint features in the feature-based HMM. Evaluation was done using the prespecified mappings. • Supervised: We trained the supervised model of Brants (2000) on the original treebanks and mapped the language-specific tags to the universal tags for evaluation. 6.4 Experimental Setup While we tried to minimize the number of free parameters in our model, there are a few hyperparameters that need to be set. Fortunately, performance was stable across various values, and we were able to use the same hyperparameters for all languages. We used C = 1.0 as the L2 regularization constant in (Eq. 10) and trained both EM and L-BFGS for 1000 iterations. When extracting the vector 606 Model Danish Dutch German Greek Italian Portuguese Spanish Swedish Avg baselines EM-HMM 68.7 57.0 75.9 65.8 63.7 62.9 71.5 68.4 66.7 Feature-HMM 69.1 65.1 81.3 71.8 68.1 78.4 80.2 70.1 73.0 Projection 73.6 77.0 83.2 79.3 79.7 82.6 80.1 74.7 78.8 our approach No LP 79.0 78.8 82.4 76.3 84.8 87.0 82.8 79.4 81.3 With LP 83.2 79.5 82.8 82.5 86.8 87.9 84.2 80.5 83.4 oracles TB Dictionary 93.1 94.7 93.5 96.6 96.4 94.0 95.8 85.5 93.7 Supervised 96.9 94.9 98.2 97.8 95.8 97.2 96.8 94.8 96.6 Table 2: Part-of-speech tagging accuracies for various baselines and oracles, as well as our approach. “Avg” denotes macro-average across the eight languages. tx used to compute the constraint feature from the graph, we tried three threshold values for τ (see Eq. 7). Because we don’t have a separate development set, we used the training set to select among them and found 0.2 to work slightly better than 0.1 and 0.3. For seven out of eight languages a threshold of 0.2 gave the best results for our final model, which indicates that for languages without any validation set, τ = 0.2 can be used. 
For graph propagation, the hyperparameter ν was set to 2 × 10−6 and was not tuned. The graph was constructed using 2 million trigrams; we chose these by truncating the parallel datasets up to the number of sentence pairs that contained 2 million trigrams. 6.5 Results Table 2 shows our complete set of results. As expected, the vanilla HMM trained with EM performs the worst. The feature-HMM model works better for all languages, generalizing the results achieved for English by Berg-Kirkpatrick et al. (2010). Our “Projection” baseline is able to benefit from the bilingual information and greatly improves upon the monolingual baselines, but falls short of the “No LP” model by 2.5% on an average. The “No LP” model does not outperform direct projection for German and Greek, but performs better for six out of eight languages. Overall, it gives improvements ranging from 1.1% for German to 14.7% for Italian, for an average improvement of 8.3% over the unsupervised feature-HMM model. For comparison, the completely unsupervised feature-HMM baseline accuracy on the universal POS tags for English is 79.4%, and goes up to 88.7% with a treebank dictionary. Our full model (“With LP”) outperforms the unsupervised baselines and the “No LP” setting for all languages. It falls short of the “Projection” baseline for German, but is statistically indistinguishable in terms of accuracy. As indicated by bolding, for seven out of eight languages the improvements of the “With LP” setting are statistically significant with respect to the other models, including the “No LP” setting.11 Overall, it performs 10.4% better than the hitherto state-of-the-art feature-HMM baseline, and 4.6% better than direct projection, when we macro-average the accuracy over all languages. 6.6 Discussion Our full model outperforms the “No LP” setting because it has better vocabulary coverage and allows the extraction of a larger set of constraint features. We tabulate this increase in Table 3. For all languages, the vocabulary sizes increase by several thousand words. Although the tag distributions of the foreign words (Eq. 6) are noisy, the results confirm that label propagation within the foreign language part of the graph adds significant quality for every language. Figure 2 shows an excerpt of a sentence from the Italian test set and the tags assigned by four different models, as well as the gold tags. While the first three models get three to four tags wrong, our best model gets only one word wrong and is the most accurate among the four models for this example. Examining the word fidanzato for the “No LP” and “With LP” models is particularly instructive. As Figure 1 shows, this word has no high-confidence alignment in the Italian-English bitext. As a result, its POS tag needs to be induced in the “No LP” case, while the 11A word level paired-t-test is significant at p < 0.01 for Danish, Greek, Italian, Portuguese, Spanish and Swedish, and p < 0.05 for Dutch. 607 Gold: si trovava in un parco con il fidanzato Paolo F. , 27 anni , rappresentante EM-HMM: Feature-HMM: No LP: With LP: CONJ NOUN DET DET NOUN ADP DET NOUN . NOUN . NUM NOUN . NOUN PRON VERB ADP DET NOUN CONJ DET NOUN NOUN NOUN . ADP NOUN . VERB PRON VERB ADP DET NOUN ADP DET NOUN NOUN NOUN . NUM NOUN . NOUN VERB VERB ADP DET NOUN ADP DET ADJ NOUN ADJ . NUM NOUN . NOUN VERB VERB ADP DET NOUN ADP DET NOUN NOUN NOUN . NUM NOUN . NOUN Figure 2: Tags produced by the different models along with the reference set of tags for a part of a sentence from the Italian test set. 
Italicized tags denote incorrect labels. Language # words with constraints “No LP” “With LP” Danish 88,240 128, 391 Dutch 51,169 74,892 German 59,534 107,249 Greek 90,231 114,002 Italian 48,904 62,461 Portuguese 46,787 65,737 Spanish 72,215 82,459 Swedish 70,181 88,454 Table 3: Size of the vocabularies for the “No LP” and “With LP” models for which we can impose constraints. correct tag is available as a constraint feature in the “With LP” case. 7 Conclusion We have shown the efficacy of graph-based label propagation for projecting part-of-speech information across languages. Because we are interested in applying our techniques to languages for which no labeled resources are available, we paid particular attention to minimize the number of free parameters and used the same hyperparameters for all language pairs. Our results suggest that it is possible to learn accurate POS taggers for languages which do not have any annotated data, but have translations into a resource-rich language. Our results outperform strong unsupervised baselines as well as approaches that rely on direct projections, and bridge the gap between purely supervised and unsupervised POS tagging models. Acknowledgements We would like to thank Ryan McDonald for numerous discussions on this topic. We would also like to thank Amarnag Subramanya for helping us with the implementation of label propagation and Shankar Kumar for access to the parallel data. Finally, we thank Kuzman Ganchev and the three anonymous reviewers for helpful suggestions and comments on earlier drafts of this paper. References Hiyan Alshawi, Srinivas Bangalore, and Shona Douglas. 2000. Head-transducer models for speech translation and their automatic acquisition from bilingual data. Machine Translation, 15. Yasemin Altun, David McAllester, and Mikhail Belkin. 2005. Maximum margin semi-supervised learning for structured variables. In Proc. of NIPS. Taylor Berg-Kirkpatrick, Alexandre B. Cˆot´e, John DeNero, and Dan Klein. 2010. Painless unsupervised learning with features. In Proc. of NAACL-HLT. Thorsten Brants. 2000. TnT - a statistical part-of-speech tagger. In Proc. of ANLP. Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: parameter estimation. Computational Linguistics, 19. Sabine Buchholz and Erwin Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In Proc. of CoNLL. Andrew Carnie. 2002. Syntax: A Generative Introduction (Introducing Linguistics). Blackwell Publishing. Christos Christodoulopoulos, Sharon Goldwater, and Mark Steedman. 2010. Two decades of unsupervised POS induction: How far have we come? In Proc. of EMNLP. Arthur P. Dempster, Nan M. Laird, and Donald B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39. Kuzman Ganchev, Jennifer Gillenwater, and Ben Taskar. 2009. Dependency grammar induction via bitext projection constraints. In Proc. of ACL-IJCNLP. 608 Mark Johnson. 2007. Why doesn’t EM find good HMM POS-taggers? In Proc. of EMNLP-CoNLL. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT Summit. Dong C. Liu and Jorge Nocedal. 1989. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45. Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: the Penn treebank. Computational Linguistics, 19. 
Tahira Naseem, Benjamin Snyder, Jacob Eisenstein, and Regina Barzilay. 2009. Multilingual part-of-speech tagging: Two unsupervised approaches. JAIR, 36. Tahira Naseem, Harr Chen, Regina Barzilay, and Mark Johnson. 2010. Using universal linguistic knowledge to guide grammar induction. In Proc. of EMNLP. Frederick J. Newmeyer. 2005. Possible and Probable Languages: A Generative Perspective on Linguistic Typology. Oxford University Press. Joakim Nivre, Johan Hall, Sandra K¨ubler, Ryan McDonald, Jens Nilsson, Sebastian Riedel, and Deniz Yuret. 2007. The CoNLL 2007 shared task on dependency parsing. In Proceedings of CoNLL. Slav Petrov, Dipanjan Das, and Ryan McDonald. 2011. A universal part-of-speech tagset. ArXiv:1104.2086. Sujith Ravi and Kevin Knight. 2009. Minimized models for unsupervised part-of-speech tagging. In Proc. of ACL-IJCNLP. Libin Shen, Giorgio Satta, and Aravind Joshi. 2007. Guided learning for bidirectional sequence classification. In Proc. of ACL. Benjamin Snyder, Tahira Naseem, and Regina Barzilay. 2009. Unsupervised multilingual grammar induction. In Proc. of ACL-IJCNLP. Amar Subramanya, Slav Petrov, and Fernando Pereira. 2010. Efficient graph-based semi-supervised learning of structured tagging models. In Proc. of EMNLP. UN. 2006. ODS UN parallel corpus. Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1996. HMM-based word alignment in statistical translation. In Proc. of COLING. Chenhai Xi and Rebecca Hwa. 2005. A backoff model for bootstrapping resources for non-English languages. In Proc. of HLT-EMNLP. David Yarowsky and Grace Ngai. 2001. Inducing multilingual POS taggers and NP bracketers via robust projection across aligned corpora. In Proc. of NAACL. Xiaojin Zhu, Zoubin Ghahramani, and John D. Lafferty. 2003. Semi-supervised learning using gaussian fields and harmonic functions. In Proc. of ICML. 609
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 610–619, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Global Learning of Typed Entailment Rules Jonathan Berant Tel Aviv University Tel Aviv, Israel [email protected] Ido Dagan Bar-Ilan University Ramat-Gan, Israel [email protected] Jacob Goldberger Bar-Ilan University Ramat-Gan, Israel [email protected] Abstract Extensive knowledge bases of entailment rules between predicates are crucial for applied semantic inference. In this paper we propose an algorithm that utilizes transitivity constraints to learn a globally-optimal set of entailment rules for typed predicates. We model the task as a graph learning problem and suggest methods that scale the algorithm to larger graphs. We apply the algorithm over a large data set of extracted predicate instances, from which a resource of typed entailment rules has been recently released (Schoenmackers et al., 2010). Our results show that using global transitivity information substantially improves performance over this resource and several baselines, and that our scaling methods allow us to increase the scope of global learning of entailment-rule graphs. 1 Introduction Generic approaches for applied semantic inference from text gained growing attention in recent years, particularly under the Textual Entailment (TE) framework (Dagan et al., 2009). TE is a generic paradigm for semantic inference, where the objective is to recognize whether a target meaning can be inferred from a given text. A crucial component of inference systems is extensive resources of entailment rules, also known as inference rules, i.e., rules that specify a directional inference relation between fragments of text. One important type of rule is rules that specify entailment relations between predicates and their arguments. For example, the rule ‘X annex Y →X control Y’ helps recognize that the text ‘Japan annexed Okinawa’ answers the question ‘Which country controls Okinawa?’. Thus, acquisition of such knowledge received considerable attention in the last decade (Lin and Pantel, 2001; Sekine, 2005; Szpektor and Dagan, 2009; Schoenmackers et al., 2010). Most past work took a “local learning” approach, learning each entailment rule independently of others. It is clear though, that there are global interactions between predicates. Notably, entailment is a transitive relation and so the rules A →B and B →C imply A →C. Recently, Berant et al. (2010) proposed a global graph optimization procedure that uses Integer Linear Programming (ILP) to find the best set of entailment rules under a transitivity constraint. Imposing this constraint raised two challenges. The first of ambiguity: transitivity does not always hold when predicates are ambiguous, e.g., X buy Y →X acquire Y and X acquire Y →X learn Y, but X buy Y ↛X learn Y since these two rules correspond to two different senses of acquire. The second challenge is scalability: ILP solvers do not scale well since ILP is an NP-complete problem. Berant et al. circumvented these issues by learning rules where one of the predicate’s arguments is instantiated (e.g., ‘X reduce nausea →X affect nausea’), which is useful for learning small graphs on-the-fly, given a target concept such as nausea. While rules may be effectively learned when needed, their scope is narrow and they are not useful as a generic knowledge resource. This paper aims to take global rule learning one step further. 
To this end, we adopt the representation suggested by Schoenmackers et al. (2010), who learned inference rules between typed predicates, i.e., predicates where the argument types (e.g., city or drug) are specified. Schoenmackers et al. uti610 lized typed predicates since they were dealing with noisy and ambiguous web text. Typing predicates helps disambiguation and filtering of noise, while still maintaining rules of wide-applicability. Their method employs a local learning approach, while the number of predicates in their data is too large to be handled directly by an ILP solver. In this paper we suggest applying global optimization learning to open domain typed entailment rules. To that end, we show how to construct a structure termed typed entailment graph, where the nodes are typed predicates and the edges represent entailment rules. We suggest scaling techniques that allow to optimally learn such graphs over a large set of typed predicates by first decomposing nodes into components and then applying incremental ILP (Riedel and Clarke, 2006). Using these techniques, the obtained algorithm is guaranteed to return an optimal solution. We ran our algorithm over the data set of Schoenmackers et al. and release a resource of 30,000 rules1 that achieves substantially higher recall without harming precision. To the best of our knowledge, this is the first resource of that scale to use global optimization for learning predicative entailment rules. Our evaluation shows that global transitivity improves the F1 score of rule learning by 27% over several baselines and that our scaling techniques allow dealing with larger graphs, resulting in improved coverage. 2 Background Most work on learning entailment rules between predicates considered each rule independently of others, using two sources of information: lexicographic resources and distributional similarity. Lexicographic resources are manually-prepared knowledge bases containing semantic information on predicates. A widely-used resource is WordNet (Fellbaum, 1998), where relations such as synonymy and hyponymy can be used to generate rules. Other resources include NomLex (Macleod et al., 1998; Szpektor and Dagan, 2009) and FrameNet (Baker and Lowe, 1998; Ben Aharon et al., 2010). Lexicographic resources are accurate but have 1The resource can be downloaded from http://www.cs.tau.ac.il/˜jonatha6/homepage files/resources /ACL2011Resource.zip low coverage. Distributional similarity algorithms use large corpora to learn broader resources by assuming that semantically similar predicates appear with similar arguments. These algorithms usually represent a predicate with one or more vectors and use some function to compute argument similarity. Distributional similarity algorithms differ in their feature representation: Some use a binary representation: each predicate is represented by one feature vector where each feature is a pair of arguments (Szpektor et al., 2004; Yates and Etzioni, 2009). This representation performs well, but suffers when data is sparse. The binary-DIRT representation deals with sparsity by representing a predicate with a pair of vectors, one for each argument (Lin and Pantel, 2001). Last, a richer form of representation, termed unary, has been suggested where a different predicate is defined for each argument (Szpektor and Dagan, 2008). Different algorithms also differ in their similarity function. 
Some employ symmetric functions, geared towards paraphrasing (bi-directional entailment), while others choose directional measures more suited for entailment (Bhagat et al., 2007). In this paper, We employ several such functions, such as Lin (Lin and Pantel, 2001), and BInc (Szpektor and Dagan, 2008). Schoenmackers et al. (2010) recently used distributional similarity to learn rules between typed predicates, where the left-hand-side of the rule may contain more than a single predicate (horn clauses). In their work, they used Hearst-patterns (Hearst, 1992) to extract a set of 29 million (argument, type) pairs from a large web crawl. Then, they employed several filtering methods to clean this set and automatically produced a mapping of 1.1 million arguments into 156 types. Examples for (argument, type) pairs are (EXODUS, book), (CHINA, country) and (ASTHMA, disease). Schoenmackers et al. then utilized the types, the mapped arguments and tuples from TextRunner (Banko et al., 2007) to generate 10,672 typed predicates (such as conquer(country,city) and common in(disease,place)), and learn 30,000 rules between these predicates2. In this paper we will learn entailment rules over the same data set, which was generously provided by 2The rules and the mapping of arguments into types can be downloaded from http://www.cs.washington.edu/research/ sherlock-hornclauses/ 611 Schoenmackers et al. As mentioned above, Berant et al. (2010) used global transitivity information to learn small entailment graphs. Transitivity was also used as an information source in other fields of NLP: Taxonomy Induction (Snow et al., 2006), Co-reference Resolution (Finkel and Manning, 2008), Temporal Information Extraction (Ling and Weld, 2010), and Unsupervised Ontology Induction (Poon and Domingos, 2010). Our proposed algorithm applies to any sparse transitive relation, and so might be applicable in these fields as well. Last, we formulate our optimization problem as an Integer Linear Program (ILP). ILP is an optimization problem where a linear objective function over a set of integer variables is maximized under a set of linear constraints. Scaling ILP is challenging since it is an NP-complete problem. ILP has been extensively used in NLP lately (Clarke and Lapata, 2008; Martins et al., 2009; Do and Roth, 2010). 3 Typed Entailment Graphs Given a set of typed predicates, entailment rules can only exist between predicates that share the same (unordered) pair of types (such as place and country)3. Hence, every pair of types defines a graph that describes the entailment relations between predicates sharing those types (Figure 1). Next, we show how to represent entailment rules between typed predicates in a structure termed typed entailment graph, which will be the learning goal of our algorithm. A typed entailment graph is a directed graph where the nodes are typed predicates. A typed predicate is a triple p(t1, t2) representing a predicate in natural language. p is the lexical realization of the predicate and the types t1, t2 are variables representing argument types. These are taken from a set of types T, where each type t ∈T is a bag of natural language words or phrases. Examples for typed predicates are: conquer(country,city) and contain(product,material). An instance of a typed predicate is a triple p(a1, a2), where a1 ∈t1 and a2 ∈t2 are termed arguments. For example, be common in(ASTHMA,AUSTRALIA) is an instance of be common in(disease,place). For brevity, we refer 3Otherwise, the rule would contain unbound variables. 
to typed entailment graphs and typed predicates as entailment graphs and predicates respectively. Edges in typed entailment graphs represent entailment rules: an edge (u, v) means that predicate u entails predicate v. If the type t1 is different from the type t2, mapping of arguments is straightforward, as in the rule ‘be find in(material,product) →contain(product,material)’. We term this a twotypes entailment graph. When t1 and t2 are equal, mapping of arguments is ambiguous: we distinguish direct-mapping edges where the first argument on the left-hand-side (LHS) is mapped to the first argument on the right-hand-side (RHS), as in ‘beat(team,team) d−→defeat(team,team)’, and reversed-mapping edges where the LHS first argument is mapped to the RHS second argument, as in ‘beat(team,team) r−→lose to(team,team)’. We term this a single-type entailment graph. Note that in single-type entailment graphs reversedmapping loops are possible as in ‘play(team,team) r−→play(team,team)’: if team A plays team B, then team B plays team A. Since entailment is a transitive relation, typedentailment graphs are transitive: if the edges (u, v) and (v, w) are in the graph so is the edge (u, w). Note that in single-type entailment graphs one needs to consider whether mapping of edges is direct or reversed: if mapping of both (u, v) and (v, w) is either direct or reversed, mapping of (u, w) is direct, otherwise it is reversed. Typing plays an important role in rule transitivity: if predicates are ambiguous, transitivity does not necessarily hold. However, typing predicates helps disambiguate them and so the problem of ambiguity is greatly reduced. 4 Learning Typed Entailment Graphs Our learning algorithm is composed of two steps: (1) Given a set of typed predicates and their instances extracted from a corpus, we train a (local) entailment classifier that estimates for every pair of predicates whether one entails the other. (2) Using the classifier scores we perform global optimization, i.e., learn the set of edges over the nodes that maximizes the global score of the graph under transitivity and background-knowledge constraints. Section 4.1 describes the local classifier training 612 province of (place,country) be part of (place,country) annex (country,place) invade (country,place) be relate to (drug,drug) be derive from (drug,drug) be process from (drug,drug) be convert into (drug,drug) Figure 1: Top: A fragment of a two-types entailment graph. bottom: A fragment of a single-type entailment graph. Mapping of solid edges is direct and of dashed edges is reversed. procedure. Section 4.2 gives an ILP formulation for the optimization problem. Sections 4.3 and 4.4 propose scaling techniques that exploit graph sparsity to optimally solve larger graphs. 4.1 Training an entailment classifier Similar to the work of Berant et al. (2010), we use “distant supervision”. Given a lexicographic resource (WordNet) and a set of predicates with their instances, we perform the following three steps (see Table 1): 1) Training set generation We use WordNet to generate positive and negative examples, where each example is a pair of predicates. Let P be the set of input typed predicates. For every predicate p(t1, t2) ∈P such that p is a single word, we extract from WordNet the set S of synonyms and direct hypernyms of p. For every p′ ∈S, if p′(t1, t2) ∈P then p(t1, t2) →p′(t1, t2) is taken as a positive example. 
Negative examples are generated in a similar manner, with direct co-hyponyms of p (sister nodes in WordNet) and hyponyms at distance 2 instead of synonyms and direct hypernyms. We also generate negative examples by randomly sampling pairs of typed predicates that share the same types. 2) Feature representation Each example pair of predicates (p1, p2) is represented by a feature vector, where each feature is a specific distributional Type example hyper. beat(team,team) →play(team,team) syno. reach(team,game) →arrive at(team,game) cohypo. invade(country,city) ↛bomb(country,city) hypo. defeat(city,city) ↛eliminate(city,city) random hold(place,event) ↛win(place,event) Table 1: Automatically generated training set examples. similarity score estimating whether p1 entails p2. We compute 11 distributional similarity scores for each pair of predicates based on the arguments appearing in the extracted arguments. The first 6 scores are computed by trying all combinations of the similarity functions Lin and BInc with the feature representations unary, binary-DIRT and binary (see Section 2). The other 5 scores were provided by Schoenmackers et al. (2010) and include SR (Schoenmackers et al., 2010), LIME (McCreath and Sharma, 1997), M-estimate (Dzeroski and Brakto, 1992), the standard G-test and a simple implementation of Cover (Weeds and Weir, 2003). Overall, the rationale behind this representation is that combining various scores will yield a better classifier than each single measure. 3) Training We train over an equal number of positive and negative examples, as classifiers tend to perform poorly on the minority class when trained on imbalanced data (Van Hulse et al., 2007; Nikulin, 2008). 4.2 ILP formulation Once the classifier is trained, we would like to learn all edges (entailment rules) of each typed entailment graph. Given a set of predicates V and an entailment score function f : V × V →R derived from the classifier, we want to find a graph G = (V, E) that respects transitivity and maximizes the sum of edge weights P (u,v)∈E f(u, v). This problem is NP-hard by a reduction from the NP-hard Transitive Subgraph problem (Yannakakis, 1978). Thus, employing ILP is an appealing approach for obtaining an optimal solution. For two-types entailment graphs the formulation is simple: The ILP variables are indicators Xuv denoting whether an edge (u, v) is in the graph, with the following ILP: 613 ˆG = arg max X u̸=v f(u, v) · Xuv (1) s.t. ∀u,v,w∈V Xuv + Xvw −Xuw ≤1 (2) ∀u,v∈Ayes Xuv = 1 (3) ∀u,v∈Ano Xuv = 0 (4) ∀u̸=v Xuv ∈{0, 1} (5) The objective in Eq. 1 is a sum over the weights of the eventual edges. The constraint in Eq. 2 states that edges must respect transitivity. The constraints in Eq. 3 and 4 state that for known node pairs, defined by Ayes and Ano, we have background knowledge indicating whether entailment holds or not. We elaborate on how Ayes and Ano were constructed in Section 5. For a graph with n nodes we get n(n−1) variables and n(n−1)(n−2) transitivity constraints. The simplest way to expand this formulation for single-type graphs is to duplicate each predicate node, with one node for each order of the types, and then the ILP is unchanged. However, this is inefficient as it results in an ILP with 2n(2n −1) variables and 2n(2n−1)(2n−2) transitivity constraints. Since our main goal is to scale the use of ILP, we modify it a little. We denote a direct-mapping edge (u, v) by the indicator Xuv and a reversed-mapping edge (u, v) by Yuv. 
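Before the single-type variant is spelled out below, note that the two-types program of Eq. 1-5 maps directly onto an off-the-shelf ILP solver. The sketch below uses the PuLP modelling library as one illustrative interface (the paper does not name the solver it used) and is a minimal reconstruction rather than the authors' implementation.

```python
import pulp

def solve_two_types_ilp(nodes, score, a_yes=(), a_no=()):
    """Eq. 1-5: maximize the sum of score(u, v) * X_uv over ordered pairs,
    subject to transitivity and background-knowledge constraints."""
    prob = pulp.LpProblem("typed_entailment_graph", pulp.LpMaximize)
    idx = {u: i for i, u in enumerate(nodes)}
    X = {(u, v): pulp.LpVariable("x_%d_%d" % (idx[u], idx[v]), cat="Binary")
         for u in nodes for v in nodes if u != v}
    prob += pulp.lpSum(score(u, v) * X[u, v] for (u, v) in X)        # Eq. 1
    for u in nodes:                                                  # Eq. 2: transitivity
        for v in nodes:
            for w in nodes:
                if len({u, v, w}) == 3:
                    prob += X[u, v] + X[v, w] - X[u, w] <= 1
    for (u, v) in a_yes:                                             # Eq. 3
        prob += X[u, v] == 1
    for (u, v) in a_no:                                              # Eq. 4
        prob += X[u, v] == 0
    prob.solve()                                                     # Eq. 5 via cat="Binary"
    return {(u, v) for (u, v) in X if pulp.value(X[u, v]) > 0.5}
```

For single-type graphs one would add the Yuv variables and the four mixed transitivity constraints in the same way.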
The functions fd and fr provide scores for direct and reversed mappings respectively. The objective in Eq. 1 and the constraint in Eq. 2 are replaced by (Eq. 3, 4 and 5 still exist and are carried over in a trivial manner): arg max X u̸=v fd(u, v)Xuv + X u,v fr(u, v)Yuv (6) s.t. ∀u,v,w∈V Xuv + Xvw −Xuw ≤1 ∀u,v,w∈V Xuv + Yvw −Yuw ≤1 ∀u,v,w∈V Yuv + Xvw −Yuw ≤1 ∀u,v,w∈V Yuv + Yvw −Xuw ≤1 The modified constraints capture the transitivity behavior of direct-mapping and reversed-mapping edges, as described in Section 3. This results in 2n2 −n variables and about 4n3 transitivity constraints, cutting the ILP size in half. Next, we specify how to derive the function f from the trained classifier using a probabilistic formulation4. Following Snow et al. (2006) and Berant et al. (2010), we utilize a probabilistic entailment classifier that computes the posterior Puv = P(Xuv = 1|Fuv). We want to use Puv to derive the posterior P(G|F), where F = ∪u̸=vFuv and Fuv is the feature vector for a node pair (u, v). Since the classifier was trained on a balanced training set, the prior over the two entailment classes is uniform and so by Bayes rule Puv ∝ P(Fuv|Xuv = 1). Using that and the exact same three independence assumptions described by Snow et al. (2006) and Berant et al. (2010) we can show that (for brevity, we omit the full derivation): ˆG = arg maxG log P(G|F) = (7) arg max X u̸=v (log Puv · P(Xuv = 1) (1 −Puv)P(Xuv = 0))Xuv = arg max X u̸=v (log Puv 1 −Puv )Xuv + log η · |E| where η = P(Xuv=1) P(Xuv=0) is the prior odds ratio for an edge in the graph. Comparing Eq. 1 and 7 we see that f(u, v) = log Puv·P(Xuv=1) (1−Puv)P(Xuv=0). Note that f is composed of a likelihood component and an edge prior expressed by P(Xuv = 1), which we assume to be some constant. This constant is a parameter that affects graph sparsity and controls the trade-off between recall and precision. Next, we show how sparsity is exploited to scale the use of ILP solvers. We discuss two-types entailment graphs, but generalization is simple. 4.3 Graph decomposition Though ILP solvers provide an optimal solution, they substantially restrict the size of graphs we can work with. The number of constraints is O(n3), and solving graphs of size > 50 is often not feasible. To overcome this, we take advantage of graph sparsity: most predicates in language do not entail one another. Thus, it might be possible to decompose graphs into small components and solve each 4We describe two-types graphs but extending to single-type graphs is straightforward. 614 Algorithm 1 Decomposed-ILP Input: A set V and a function f : V × V →R Output: An optimal set of directed edges E∗ 1: E′ = {(u, v) : f(u, v) > 0 ∨f(v, u) > 0} 2: V1, V2, ..., Vk ←connected components of G′ = (V, E′) 3: for i = 1 to k do 4: Ei ←ApplyILPSolve(Vi,f) 5: end for 6: E∗←Sk i=1 Ei component separately. This is formalized in the next proposition. Proposition 1. If we can partition a set of nodes V into disjoint sets U, W such that for any crossing edge (u, w) between them (in either direction), f(u, w) < 0, then the optimal set of edges Eopt does not contain any crossing edge. Proof Assume by contradiction that Eopt contains a set of crossing edges Ecross. We can construct Enew = Eopt \ Ecross. Clearly P (u,v)∈Enew f(u, v) > P (u,v)∈Eopt f(u, v), as f(u, v) < 0 for any crossing edge. Next, we show that Enew does not violate transitivity constraints. Assume it does, then the violation is caused by omitting the edges in Ecross. 
Thus, there must be a node u ∈U and w ∈W (w.l.o.g) such that for some node v, (u, v) and (v, w) are in Enew, but (u, w) is not. However, this means either (u, v) or (v, w) is a crossing edge, which is impossible since we omitted all crossing edges. Thus, Enew is a better solution than Eopt, contradiction. This proposition suggests a simple algorithm (see Algorithm 1): Add to the graph an undirected edge for any node pair with a positive score, then find the connected components, and apply an ILP solver over the nodes in each component. The edges returned by the solver provide an optimal (not approximate) solution to the optimization problem. The algorithm’s complexity is dominated by the ILP solver, as finding connected components takes O(V 2) time. Thus, efficiency depends on whether the graph is sparse enough to be decomposed into small components. Note that the edge prior plays an important role: low values make the graph sparser and easier to solve. In Section 5 we empirically test Algorithm 2 Incremental-ILP Input: A set V and a function f : V × V →R Output: An optimal set of directed edges E∗ 1: ACT,VIO ←φ 2: repeat 3: E∗←ApplyILPSolve(V,f,ACT) 4: VIO ←violated(V, E∗) 5: ACT ←ACT ∪VIO 6: until |VIO| = 0 how typed entailment graphs benefit from decomposition given different prior values. From a more general perspective, this algorithm can be applied to any problem of learning a sparse transitive binary relation. Such problems include Co-reference Resolution (Finkel and Manning, 2008) and Temporal Information Extraction (Ling and Weld, 2010). Last, the algorithm can be easily parallelized by solving each component on a different core. 4.4 Incremental ILP Another solution for scaling ILP is to employ incremental ILP, which has been used in dependency parsing (Riedel and Clarke, 2006). The idea is that even if we omit the transitivity constraints, we still expect most transitivity constraints to be satisfied, given a good local entailment classifier. Thus, it makes sense to avoid specifying the constraints ahead of time, but rather add them when they are violated. This is formalized in Algorithm 2. Line 1 initializes an active set of constraints and a violated set of constraints (ACT;VIO). Line 3 applies the ILP solver with the active constraints. Lines 4 and 5 find the violated constraints and add them to the active constraints. The algorithm halts when no constraints are violated. The solution is clearly optimal since we obtain a maximal solution for a lessconstrained problem. A pre-condition for using incremental ILP is that computing the violated constraints (Line 4) is efficient, as it occurs in every iteration. We do that in a straightforward manner: For every node v, and edges (u, v) and (v, w), if (u, w) /∈E∗we add (u, v, w) to the violated constraints. This is cubic in worst-case but assuming the degree of nodes is bounded by a constant it is linear, and performs very 615 fast in practice. Combining Incremental-ILP and DecomposedILP is easy: We decompose any large graph into its components and apply Incremental ILP on each component. We applied this algorithm on our evaluation data set (Section 5) and found that it converges in at most 6 iterations and that the maximal number of active constraints in large graphs drops from ∼106 to ∼103 −104. 5 Experimental Evaluation In this section we empirically answer the following questions: (1) Does transitivity improve rule learning over typed predicates? (Section 5.1) (2) Do Decomposed-ILP and Incremental-ILP improve scalability? 
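A sketch of how the two scaling techniques compose: decompose on positive-score pairs via connected components (networkx is an illustrative choice), then solve each component with a cutting-plane loop that only instantiates violated transitivity constraints. The solver wrapper `solve_with_constraints` is assumed to accept an explicit set of active transitivity triples (for example, the PuLP sketch above with Eq. 2 restricted to that set); it is not an API from the paper.

```python
import itertools
import networkx as nx

def decompose(nodes, score):
    """Algorithm 1 (Decomposed-ILP): connected components of the undirected
    graph with an edge wherever the score is positive in either direction."""
    g = nx.Graph()
    g.add_nodes_from(nodes)
    g.add_edges_from((u, v) for u, v in itertools.combinations(nodes, 2)
                     if score(u, v) > 0 or score(v, u) > 0)
    return [list(c) for c in nx.connected_components(g)]

def violated_transitivity(edges):
    """Triples (u, v, w) such that (u, v) and (v, w) are in the solution but (u, w) is not."""
    succ = {}
    for (u, v) in edges:
        succ.setdefault(u, set()).add(v)
    bad = set()
    for u, vs in succ.items():
        for v in vs:
            for w in succ.get(v, ()):
                if w != u and w not in vs:
                    bad.add((u, v, w))
    return bad

def incremental_ilp(nodes, score, solve_with_constraints):
    """Algorithm 2 (Incremental-ILP): cutting-plane loop over transitivity constraints."""
    active = set()
    while True:
        edges = solve_with_constraints(nodes, score, active)
        violated = violated_transitivity(edges)
        if not violated:
            return edges
        active |= violated

def decomposed_incremental(nodes, score, solve_with_constraints):
    """Decompose first, then run the incremental solver on each component."""
    result = set()
    for component in decompose(nodes, score):
        result |= incremental_ilp(component, score, solve_with_constraints)
    return result
```

The loop is optimal for the same reason given above: each iteration solves a relaxation of the full program, and it terminates only when no transitivity constraint is violated. The sketch covers two-types graphs; for single-type graphs the violation check would additionally have to track direct versus reversed mappings.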
(Section 5.2) 5.1 Experiment 1 A data set of 1 million TextRunner tuples (Banko et al., 2007), mapped to 10,672 distinct typed predicates over 156 types was provided by Schoenmackers et al. (2010). Readers are referred to their paper for details on mapping of tuples to typed predicates. Since entailment only occurs between predicates that share the same types, we decomposed predicates by their types (e.g., all predicates with the types place and disease) into 2,303 typed entailment graphs. The largest graph contains 118 nodes and the total number of potential rules is 263,756. We generated a training set by applying the procedure described in Section 4.1, yielding 2,644 examples. We used SVMperf (Joachims, 2005) to train a Gaussian kernel classifier and computed Puv by projecting the classifier output score, Suv, with the sigmoid function: Puv = 1 1+exp(−Suv). We tuned two SVM parameters using 5-fold cross validation and a development set of two typed entailment graphs. Next, we used our algorithm to learn rules. As mentioned in Section 4.2, we integrate background knowledge using the sets Ayes and Ano that contain predicate pairs for which we know whether entailment holds. Ayes was constructed with syntactic rules: We normalized each predicate by omitting the first word if it is a modal and turning passives to actives. If two normalized predicates are equal they are synonymous and inserted into Ayes. Ano was constructed from 3 sources (1) Predicates differing by a single pair of words that are WordNet antonyms (2) Predicates differing by a single word of negation (3) Predicates p(t1, t2) and p(t2, t1) where p is a transitive verb (e.g., beat) in VerbNet (Kipper-Schuler et al., 2000). We compared our algorithm (termed ILPscale) to the following baselines. First, to 10,000 rules released by Schoenmackers et al. (2010) (Sherlock), where the LHS contains a single predicate (Schoenmackers et al. released 30,000 rules but 20,000 of those have more than one predicate on the LHS, see Section 2), as we learn rules over the same data set. Second, to distributional similarity algorithms: (a) SR: the score used by Schoenmackers et al. as part of the Sherlock system. (b) DIRT: (Lin and Pantel, 2001) a widely-used rule learning algorithm. (c) BInc: (Szpektor and Dagan, 2008) a directional rule learning algorithm. Third, we compared to the entailment classifier with no transitivity constraints (clsf) to see if combining distributional similarity scores improves performance over single measures. Last, we added to all baselines background knowledge with Ayes and Ano (adding the subscript Xk to their name). To evaluate performance we manually annotated all edges in 10 typed entailment graphs - 7 twotypes entailment graphs containing 14, 22, 30, 53, 62, 86 and 118 nodes, and 3 single-type entailment graphs containing 7, 38 and 59 nodes. This annotation yielded 3,427 edges and 35,585 non-edges, resulting in an empirical edge density of 9%. We evaluate the algorithms by comparing the set of edges learned by the algorithms to the gold standard edges. Figure 2 presents the precision-recall curve of the algorithms. The curve is formed by varying a score threshold in the baselines and varying the edge prior in ILPscale5. For figure clarity, we omit DIRT and SR, since BInc outperforms them. 
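The mapping from classifier output to edge weights used above is compact enough to state explicitly: the SVM score is squashed with a sigmoid and combined with the edge prior from Eq. 7. A small sketch follows, with log η = −0.6 shown as the default because it is the best-performing configuration reported in Section 5.2; the function names are illustrative.

```python
import math

def edge_probability(svm_score):
    """P_uv: sigmoid projection of the raw classifier score S_uv."""
    return 1.0 / (1.0 + math.exp(-svm_score))

def edge_weight(svm_score, log_eta=-0.6):
    """Per-edge term of Eq. 7: log(P_uv / (1 - P_uv)) + log(eta).
    Lowering log_eta makes the learned graph sparser (Section 5.2 reports
    runs with -0.6, -1 and -1.75)."""
    p = edge_probability(svm_score)
    return math.log(p / (1.0 - p)) + log_eta
```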
Table 2 shows micro-recall, precision and F1 at the point of maximal F1, and the Area Under the Curve (AUC) for recall in the range of 0-0.45 for all algorithms, given background knowledge (knowledge consistently improves performance by a few points for all algorithms). The table also shows results for the rules from Sherlockk. 5we stop raising the prior when run time over the graphs exceeds 2 hours. Often when the solver does not terminate in 2 hours, it also does not terminate after 24 hours or more. 616 0 0.2 0.4 0.6 0.8 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 precision recall BInc clsf BInc_k clsf_k ILP_scale Figure 2: Precision-recall curve for the algorithms. micro-average R (%) P (%) F1 (%) AUC ILPscale 43.4 42.2 42.8 0.22 clsfk 30.8 37.5 33.8 0.17 Sherlockk 20.6 43.3 27.9 N/A BInck 31.8 34.1 32.9 0.17 SRk 38.4 23.2 28.9 0.14 DIRTk 25.7 31.0 28.1 0.13 Table 2: micro-average F1 and AUC for the algorithms. Results show that using global transitivity information substantially improves performance. ILPscale is better than all other algorithms by a large margin starting from recall .2, and improves AUC by 29% and the maximal F1 by 27%. Moreover, ILPscale doubles recall comparing to the rules from the Sherlock resource, while maintaining comparable precision. 5.2 Experiment 2 We want to test whether using our scaling techniques, Decomposed-ILP and Incremental-ILP, allows us to reach the optimal solution in graphs that otherwise we could not solve, and consequently increase the number of learned rules and the overall recall. To check that, we run ILPscale, with and without these scaling techniques (termed ILP−). We used the same data set as in Experiment 1 and learned edges for all 2,303 entailment graphs in the data set. If the ILP solver was unable to hold the ILP in memory or took more than 2 hours log η # unlearned # rules △ Red. -1.75 9/0 6,242 / 7,466 20% 75% -1 9/1 16,790 / 19,396 16% 29% -0.6 9/3 26,330 / 29,732 13% 14% Table 3: Impact of scaling techinques (ILP−/ILPscale). for some graph, we did not attempt to learn its edges. We ran ILPscale and ILP−in three density modes to examine the behavior of the algorithms for different graph densities: (a) log η = −0.6: the configuration that achieved the best recall/precision/F1 of 43.4/42.2/42.8. (b) log η = −1 with recall/precision/F1 of 31.8/55.3/40.4. (c) log η = −1.75: A high precision configuration with recall/precision/F1 of 0.15/0.75/0.23 6. In each run we counted the number of graphs that could not be learned and the number of rules learned by each algorithm. In addition, we looked at the 20 largest graphs in our data (49-118 nodes) and measured the ratio r between the size of the largest component after applying Decomposed-ILP and the original size of the graph. We then computed the average 1−r over the 20 graphs to examine how graph size drops due to decomposition. Table 3 shows the results. Column # unlearned and # rules describe the number of unlearned graphs and the number of learned rules. Column △shows relative increase in the number of rules learned and column Red. shows the average 1 −r. ILPscale increases the number of graphs that we are able to learn: in our best configuration (log η = −0.6) only 3 graphs could not be handled comparing to 9 graphs when omitting our scaling techniques. Since the unlearned graphs are among the largest in the data set, this adds 3,500 additional rules. 
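For reference, the two scaling techniques whose impact is measured in this experiment (Decomposed-ILP and Incremental-ILP, Sections 4.3 and 4.4) can be sketched as follows. The solve_ilp routine is a hypothetical stand-in for the external ILP solver, f is the prior-adjusted local score function, and edge sets are represented as Python sets of ordered node pairs; this is a sketch of the control flow, not the implementation evaluated above.

```python
def connected_components(nodes, positive_pairs):
    """Group nodes joined by any positive-score undirected edge (Decomposed-ILP)."""
    parent = {v: v for v in nodes}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for u, v in positive_pairs:
        parent[find(u)] = find(v)
    components = {}
    for v in nodes:
        components.setdefault(find(v), set()).add(v)
    return list(components.values())

def violated_transitivity(nodes, edges):
    """Triples (u, v, w) with (u, v) and (v, w) in E* but (u, w) missing."""
    out = {v: set() for v in nodes}
    for u, v in edges:
        out[u].add(v)
    return {(u, v, w) for u in nodes for v in out[u] for w in out[v]
            if w != u and w not in out[u]}  # self-loops are skipped

def incremental_ilp(nodes, f, solve_ilp):
    """Algorithm 2: add transitivity constraints only once they are violated."""
    active = set()
    while True:
        edges = solve_ilp(nodes, f, active)
        violated = violated_transitivity(nodes, edges)
        if not violated:
            return edges
        active |= violated

def learn_graph(nodes, f, solve_ilp):
    """Combined scheme: decompose the graph, then solve each component."""
    positive = [(u, v) for u in nodes for v in nodes if u != v and f(u, v) > 0]
    edges = set()
    for component in connected_components(nodes, positive):
        edges |= incremental_ilp(sorted(component), f, solve_ilp)
    return edges
```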
We compared the precision of rules learned only by ILPscale with that of the rules learned by both, by randomly sampling 100 rules from each and found precision to be comparable. Thus, the additional rules learned translate into a 13% increase in relative recall without harming precision. Also note that as density increases, the number of rules learned grows and the effectiveness of decomposition decreases. This shows how DecomposedILP is especially useful for sparse graphs. We re6Experiment was run on an Intel i5 CPU with 4GB RAM. 617 lease the 29,732 rules learned by the configuration log η = −0.6 as a resource. To sum up, our scaling techniques allow us to learn rules from graphs that standard ILP can not handle and thus considerably increase recall without harming precision. 6 Conclusions and Future Work This paper proposes two contributions over two recent works: In the first, Berant et al. (2010) presented a global optimization procedure to learn entailment rules between predicates using transitivity, and applied this algorithm over small graphs where all predicates have one argument instantiated by a target concept. Consequently, the rules they learn are of limited applicability. In the second, Schoenmackers et al. learned rules of wider applicability by using typed predicates, but utilized a local approach. In this paper we developed an algorithm that uses global optimization to learn widely-applicable entailment rules between typed predicates (where both arguments are variables). This was achieved by appropriately defining entailment graphs for typed predicates, formulating an ILP representation for them, and introducing scaling techniques that include graph decomposition and incremental ILP. Our algorithm is guaranteed to provide an optimal solution and we have shown empirically that it substantially improves performance over Schoenmackers et al.’s recent resource and over several baselines. In future work, we aim to scale the algorithm further and learn entailment rules between untyped predicates. This would require explicit modeling of predicate ambiguity and using approximation techniques when an optimal solution cannot be attained. Acknowledgments This work was performed with financial support from the Turing Center at The University of Washington during a visit of the first author (NSF grant IIS-0803481). We deeply thank Oren Etzioni and Stefan Schoenmackers for providing us with the data sets for this paper and for numerous helpful discussions. We would also like to thank the anonymous reviewers for their useful comments. This work was developed under the collaboration of FBKirst/University of Haifa and was partially supported by the Israel Science Foundation grant 1112/08. The first author is grateful to IBM for the award of an IBM Fellowship, and has carried out this research in partial fulllment of the requirements for the Ph.D. degree. References J. Fillmore Baker, C. F. and J. B. Lowe. 1998. The Berkeley framenet project. In Proc. of COLING-ACL. Michele Banko, Michael Cafarella, Stephen Soderland, Matt Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In Proceedings of IJCAI. Roni Ben Aharon, Idan Szpektor, and Ido Dagan. 2010. Generating entailment rules from framenet. In Proceedings of ACL. Jonathan Berant, Ido Dagan, and Jacob Goldberger. 2010. Global learning of focused entailment graphs. In Proceedings of ACL. Rahul Bhagat, Patrick Pantel, and Eduard Hovy. 2007. 
LEDIR: An unsupervised algorithm for learning directionality of inference rules. In Proceedings of EMNLP-CoNLL. James Clarke and Mirella Lapata. 2008. Global inference for sentence compression: An integer linear programming approach. Journal of Artificial Intelligence Research, 31:273–381. Ido Dagan, Bill Dolan, Bernardo Magnini, and Dan Roth. 2009. Recognizing textual entailment: Rational, evaluation and approaches. Natural Language Engineering, 15(4):1–17. Quang Do and Dan Roth. 2010. Constraints based taxonomic relation classification. In Proceedings of EMNLP. Saso Dzeroski and Ivan Brakto. 1992. Handling noise in inductive logic programming. In Proceedings of the International Workshop on Inductive Logic Programming. Christiane Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database (Language, Speech, and Communication). The MIT Press. Jenny Rose Finkel and Christopher D. Manning. 2008. Enforcing transitivity in coreference resolution. In Proceedings of ACL-08: HLT, Short Papers. Marti Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of COLING. Thorsten Joachims. 2005. A support vector method for multivariate performance measures. In Proceedings of ICML. Karin Kipper-Schuler, Hoa Trand Dang, and Martha Palmer. 2000. Class-based construction of verb lexicon. In Proceedings of AAAI/IAAI. 618 Dekang Lin and Patrick Pantel. 2001. Discovery of inference rules for question answering. Natural Language Engineering, 7(4):343–360. Xiao Ling and Daniel S. Weld. 2010. Temporal information extraction. In Proceedings of AAAI. Catherine Macleod, Ralph Grishman, Adam Meyers, Leslie Barrett, and Ruth Reeves. 1998. NOMLEX: A lexicon of nominalizations. In Proceedings of COLING. Andre Martins, Noah Smith, and Eric Xing. 2009. Concise integer linear programming formulations for dependency parsing. In Proceedings of ACL. Eric McCreath and Arun Sharma. 1997. ILP with noise and fixed example size: a bayesian approach. In Proceedings of the Fifteenth international joint conference on artificial intelligence - Volume 2. Vladimir Nikulin. 2008. Classification of imbalanced data with random sets and mean-variance filtering. IJDWM, 4(2):63–78. Hoifung Poon and Pedro Domingos. 2010. Unsupervised ontology induction from text. In Proceedings of ACL. Sebastian Riedel and James Clarke. 2006. Incremental integer linear programming for non-projective dependency parsing. In Proceedings of EMNLP. Stefan Schoenmackers, Oren Etzioni Jesse Davis, and Daniel S. Weld. 2010. Learning first-order horn clauses from web text. In Proceedings of EMNLP. Satoshi Sekine. 2005. Automatic paraphrase discovery based on context and keywords between ne pairs. In Proceedings of IWP. Rion Snow, Daniel Jurafsky, and Andrew Y. Ng. 2006. Semantic taxonomy induction from heterogenous evidence. In Proceedings of ACL. Idan Szpektor and Ido Dagan. 2008. Learning entailment rules for unary templates. In Proceedings of COLING. Idan Szpektor and Ido Dagan. 2009. Augmenting wordnet-based inference with argument mapping. In Proceedings of TextInfer. Idan Szpektor, Hristo Tanev, Ido Dagan, and Bonaventura Coppola. 2004. Scaling web-based acquisition of entailment relations. In Proceedings of EMNLP. Jason Van Hulse, Taghi Khoshgoftaar, and Amri Napolitano. 2007. Experimental perspectives on learning from imbalanced data. In Proceedings of ICML. Julie Weeds and David Weir. 2003. A general framework for distributional similarity. In Proceedings of EMNLP. Mihalis Yannakakis. 1978. 
Node- and edge-deletion NP-complete problems. In STOC ’78: Proceedings of the tenth annual ACM symposium on Theory of computing, pages 253–264, New York, NY, USA. ACM. Alexander Yates and Oren Etzioni. 2009. Unsupervised methods for determining object and relation synonyms on the web. Journal of Artificial Intelligence Research, 34:255–296.
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 620–631, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Incremental Syntactic Language Models for Phrase-based Translation Lane Schwartz Air Force Research Laboratory Wright-Patterson AFB, OH USA [email protected] Chris Callison-Burch Johns Hopkins University Baltimore, MD USA [email protected] William Schuler Ohio State University Columbus, OH USA [email protected] Stephen Wu Mayo Clinic Rochester, MN USA [email protected] Abstract This paper describes a novel technique for incorporating syntactic knowledge into phrasebased machine translation through incremental syntactic parsing. Bottom-up and topdown parsers typically require a completed string as input. This requirement makes it difficult to incorporate them into phrase-based translation, which generates partial hypothesized translations from left-to-right. Incremental syntactic language models score sentences in a similar left-to-right fashion, and are therefore a good mechanism for incorporating syntax into phrase-based translation. We give a formal definition of one such lineartime syntactic language model, detail its relation to phrase-based decoding, and integrate the model with the Moses phrase-based translation system. We present empirical results on a constrained Urdu-English translation task that demonstrate a significant BLEU score improvement and a large decrease in perplexity. 1 Introduction Early work in statistical machine translation viewed translation as a noisy channel process comprised of a translation model, which functioned to posit adequate translations of source language words, and a target language model, which guided the fluency of generated target language strings (Brown et al., This research was supported by NSF CAREER/PECASE award 0447685, NSF grant IIS-0713448, and the European Commission through the EuroMatrixPlus project. Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the sponsors or the United States Air Force. Cleared for public release (Case Number 88ABW-2010-6489) on 10 Dec 2010. 1990). Drawing on earlier successes in speech recognition, research in statistical machine translation has effectively used n-gram word sequence models as language models. Modern phrase-based translation using large scale n-gram language models generally performs well in terms of lexical choice, but still often produces ungrammatical output. Syntactic parsing may help produce more grammatical output by better modeling structural relationships and long-distance dependencies. Bottom-up and top-down parsers typically require a completed string as input; this requirement makes it difficult to incorporate these parsers into phrase-based translation, which generates hypothesized translations incrementally, from left-to-right.1 As a workaround, parsers can rerank the translated output of translation systems (Och et al., 2004). On the other hand, incremental parsers (Roark, 2001; Henderson, 2004; Schuler et al., 2010; Huang and Sagae, 2010) process input in a straightforward left-to-right manner. We observe that incremental parsers, used as structured language models, provide an appropriate algorithmic match to incremental phrase-based decoding. We directly integrate incremental syntactic parsing into phrase-based translation. 
This approach re-exerts the role of the language model as a mechanism for encouraging syntactically fluent translations. The contributions of this work are as follows: • A novel method for integrating syntactic LMs into phrase-based translation (§3) • A formal definition of an incremental parser for 1While not all languages are written left-to-right, we will refer to incremental processing which proceeds from the beginning of a sentence as left-to-right. 620 statistical MT that can run in linear-time (§4) • Integration with Moses (§5) along with empirical results for perplexity and significant translation score improvement on a constrained UrduEnglish task (§6) 2 Related Work Neither phrase-based (Koehn et al., 2003) nor hierarchical phrase-based translation (Chiang, 2005) take explicit advantage of the syntactic structure of either source or target language. The translation models in these techniques define phrases as contiguous word sequences (with gaps allowed in the case of hierarchical phrases) which may or may not correspond to any linguistic constituent. Early work in statistical phrase-based translation considered whether restricting translation models to use only syntactically well-formed constituents might improve translation quality (Koehn et al., 2003) but found such restrictions failed to improve translation quality. Significant research has examined the extent to which syntax can be usefully incorporated into statistical tree-based translation models: string-to-tree (Yamada and Knight, 2001; Gildea, 2003; Imamura et al., 2004; Galley et al., 2004; Graehl and Knight, 2004; Melamed, 2004; Galley et al., 2006; Huang et al., 2006; Shen et al., 2008), tree-to-string (Liu et al., 2006; Liu et al., 2007; Mi et al., 2008; Mi and Huang, 2008; Huang and Mi, 2010), tree-to-tree (Abeill´e et al., 1990; Shieber and Schabes, 1990; Poutsma, 1998; Eisner, 2003; Shieber, 2004; Cowan et al., 2006; Nesson et al., 2006; Zhang et al., 2007; DeNeefe et al., 2007; DeNeefe and Knight, 2009; Liu et al., 2009; Chiang, 2010), and treelet (Ding and Palmer, 2005; Quirk et al., 2005) techniques use syntactic information to inform the translation model. Recent work has shown that parsing-based machine translation using syntax-augmented (Zollmann and Venugopal, 2006) hierarchical translation grammars with rich nonterminal sets can demonstrate substantial gains over hierarchical grammars for certain language pairs (Baker et al., 2009). In contrast to the above tree-based translation models, our approach maintains a standard (non-syntactic) phrase-based translation model. Instead, we incorporate syntax into the language model. Traditional approaches to language models in speech recognition and statistical machine translation focus on the use of n-grams, which provide a simple finite-state model approximation of the target language. Chelba and Jelinek (1998) proposed that syntactic structure could be used as an alternative technique in language modeling. This insight has been explored in the context of speech recognition (Chelba and Jelinek, 2000; Collins et al., 2005). Hassan et al. (2007) and Birch et al. (2007) use supertag n-gram LMs. Syntactic language models have also been explored with tree-based translation models. Charniak et al. (2003) use syntactic language models to rescore the output of a tree-based translation system. 
Post and Gildea (2008) investigate the integration of parsers as syntactic language models during binary bracketing transduction translation (Wu, 1997); under these conditions, both syntactic phrase-structure and dependency parsing language models were found to improve oracle-best translations, but did not improve actual translation results. Post and Gildea (2009) use tree substitution grammar parsing for language modeling, but do not use this language model in a translation system. Our work, in contrast to the above approaches, explores the use of incremental syntactic language models in conjunction with phrase-based translation models. Our syntactic language model fits into the family of linear-time dynamic programming parsers described in (Huang and Sagae, 2010). Like (Galley and Manning, 2009) our work implements an incremental syntactic language model; our approach differs by calculating syntactic LM scores over all available phrase-structure parses at each hypothesis instead of the 1-best dependency parse. The syntax-driven reordering model of Ge (2010) uses syntax-driven features to influence word order within standard phrase-based translation. The syntactic cohesion features of Cherry (2008) encourages the use of syntactically well-formed translation phrases. These approaches are fully orthogonal to our proposed incremental syntactic language model, and could be applied in concert with our work. 3 Parser as Syntactic Language Model in Phrase-Based Translation Parsing is the task of selecting the representation ˆτ (typically a tree) that best models the structure of 621 ⟨s⟩ ˜τ0 ⟨s⟩the ˜τ11 ⟨s⟩that ˜τ12 ⟨s⟩president ˜τ13 . . . the president ˜τ21 that president ˜τ22 president Friday ˜τ23 ... president meets ˜τ31 Obama met ˜τ32 ... Figure 1: Partial decoding lattice for standard phrase-based decoding stack algorithm translating the German sentence Der Pr¨asident trifft am Freitag den Vorstand. Each node h in decoding stack t represents the application of a translation option, and includes the source sentence coverage vector, target language ngram state, and syntactic language model state ˜τth. Hypothesis combination is also shown, indicating where lattice paths with identical n-gram histories converge. We use the English translation The president meets the board on Friday as a running example throughout all Figures. sentence e, out of all such possible representations τ. This set of representations may be all phrase structure trees or all dependency trees allowed by the parsing model. Typically, tree ˆτ is taken to be: ˆτ = argmax τ P(τ | e) (1) We define a syntactic language model P(e) based on the total probability mass over all possible trees for string e. This is shown in Equation 2 and decomposed in Equation 3. P(e) = X τ∈τ P(τ, e) (2) P(e) = X τ∈τ P(e | τ)P(τ) (3) 3.1 Incremental syntactic language model An incremental parser processes each token of input sequentially from the beginning of a sentence to the end, rather than processing input in a top-down (Earley, 1968) or bottom-up (Cocke and Schwartz, 1970; Kasami, 1965; Younger, 1967) fashion. After processing the tth token in string e, an incremental parser has some internal representation of possible hypothesized (incomplete) trees, τt. 
The syntactic language model probability of a partial sentence e1...et is defined: P(e1...et) = X τ∈τt P(e1...et | τ)P(τ) (4) In practice, a parser may constrain the set of trees under consideration to ˜τt, that subset of analyses or partial analyses that remains after any pruning is performed. An incremental syntactic language model can then be defined by a probability mass function (Equation 5) and a transition function δ (Equation 6). The role of δ is explained in §3.3 below. Any parser which implements these two functions can serve as a syntactic language model. P(e1...et) ≈P(˜τ t) = X τ∈˜τ t P(e1...et | τ)P(τ) (5) δ(et, ˜τ t−1) →˜τ t (6) 622 3.2 Decoding in phrase-based translation Given a source language input sentence f, a trained source-to-target translation model, and a target language model, the task of translation is to find the maximally probable translation ˆe using a linear combination of j feature functions h weighted according to tuned parameters λ (Och and Ney, 2002). ˆe = argmax e exp( X j λjhj(e, f)) (7) Phrase-based translation constructs a set of translation options — hypothesized translations for contiguous portions of the source sentence — from a trained phrase table, then incrementally constructs a lattice of partial target translations (Koehn, 2010). To prune the search space, lattice nodes are organized into beam stacks (Jelinek, 1969) according to the number of source words translated. An n-gram language model history is also maintained at each node in the translation lattice. The search space is further trimmed with hypothesis recombination, which collapses lattice nodes that share a common coverage vector and n-gram state. 3.3 Incorporating a Syntactic Language Model Phrase-based translation produces target language words in an incremental left-to-right fashion, generating words at the beginning of a translation first and words at the end of a translation last. Similarly, incremental parsers process sentences in an incremental fashion, analyzing words at the beginning of a sentence first and words at the end of a sentence last. As such, an incremental parser with transition function δ can be incorporated into the phrase-based decoding process in a straightforward manner. Each node in the translation lattice is augmented with a syntactic language model state ˜τt. The hypothesis at the root of the translation lattice is initialized with ˜τ 0, representing the internal state of the incremental parser before any input words are processed. The phrase-based translation decoding process adds nodes to the lattice; each new node contains one or more target language words. Each node contains a backpointer to its parent node, in which ˜τ t−1 is stored. Given a new target language word et and ˜τ t−1, the incremental parser’s transition function δ calculates ˜τ t. Figure 1 illustrates S NP DT The NN president VP VP VB meets NP DT the NN board PP IN on NP Friday Figure 2: Sample binarized phrase structure tree. S S/NP S/PP S/VP NP NP/NN DT The NN president VP VP/NN VP/NP VB meets DT the NN board IN on NP Friday Figure 3: Sample binarized phrase structure tree after application of right-corner transform. a sample phrase-based decoding lattice where each translation lattice node is augmented with syntactic language model state ˜τt. In phrase-based translation, many translation lattice nodes represent multi-word target language phrases. For such translation lattice nodes, δ will be called once for each newly hypothesized target language word in the node. 
Only the final syntactic language model state in such sequences need be stored in the translation lattice node. 4 Incremental Bounded-Memory Parsing with a Time Series Model Having defined the framework by which any incremental parser may be incorporated into phrasebased translation, we now formally define a specific incremental parser for use in our experiments. The parser must process target language words incrementally as the phrase-based decoder adds hypotheses to the translation lattice. To facilitate this incremental processing, ordinary phrase-structure trees can be transformed into right-corner recur623 r1 t−1 r2 t−1 r3 t−1 s1 t−1 s2 t−1 s3 t−1 r1 t r2 t r3 t s1 t s2 t s3 t et−1 et . . . . . . . . . . . . Figure 4: Graphical representation of the dependency structure in a standard Hierarchic Hidden Markov Model with D = 3 hidden levels that can be used to parse syntax. Circles denote random variables, and edges denote conditional dependencies. Shaded circles denote variables with observed values. sive phrase structure trees using the tree transforms in Schuler et al. (2010). Constituent nonterminals in right-corner transformed trees take the form of incomplete constituents cη/cηι consisting of an ‘active’ constituent cη lacking an ‘awaited’ constituent cηι yet to come, similar to non-constituent categories in a Combinatory Categorial Grammar (Ades and Steedman, 1982; Steedman, 2000). As an example, the parser might consider VP/NN as a possible category for input “meets the”. A sample phrase structure tree is shown before and after the right-corner transform in Figures 2 and 3. Our parser operates over a right-corner transformed probabilistic context-free grammar (PCFG). Parsing runs in linear time on the length of the input. This model of incremental parsing is implemented as a Hierarchical Hidden Markov Model (HHMM) (Murphy and Paskin, 2001), and is equivalent to a probabilistic pushdown automaton with a bounded pushdown store. The parser runs in O(n) time, where n is the number of words in the input. This model is shown graphically in Figure 4 and formally defined in §4.1 below. The incremental parser assigns a probability (Eq. 5) for a partial target language hypothesis, using a bounded store of incomplete constituents cη/cηι. The phrase-based decoder uses this probability value as the syntactic language model feature score. 4.1 Formal Parsing Model: Scoring Partial Translation Hypotheses This model is essentially an extension of an HHMM, which obtains a most likely sequence of hidden store states, ˆs1..D 1..T , of some length T and some maximum depth D, given a sequence of observed tokens (e.g. generated target language words), e1..T , using HHMM state transition model θA and observation symbol model θB (Rabiner, 1990): ˆs1..D 1..T def = argmax s1..D 1..T T Y t=1 PθA(s1..D t | s1..D t−1 )·PθB(et | s1..D t ) (8) The HHMM parser is equivalent to a probabilistic pushdown automaton with a bounded pushdown store. 
The model generates each successive store (using store model θS) only after considering whether each nested sequence of incomplete constituents has completed and reduced (using reduction model θR): PθA(s1..D t | s1..D t−1 ) def = X r1 t ..rD t D Y d=1 PθR(rd t | rd+1 t sd t−1sd−1 t−1 ) · PθS(sd t | rd+1 t rd t sd t−1sd−1 t ) (9) Store elements are defined to contain only the active (cη) and awaited (cηι) constituent categories necessary to compute an incomplete constituent probability: sd t def = ⟨cη, cηι⟩ (10) Reduction states are defined to contain only the complete constituent category crd t necessary to compute an inside likelihood probability, as well as a flag frd t indicating whether a reduction has taken place (to end a sequence of incomplete constituents): rd t def = ⟨crd t , frd t ⟩ (11) The model probabilities for these store elements and reduction states can then be defined (from Murphy and Paskin 2001) to expand a new incomplete constituent after a reduction has taken place (frd t = 1; using depth-specific store state expansion model θS-E,d), transition along a sequence of store elements 624 s1 1 s2 1 s3 1 e1 t=1 r1 2 r2 2 r3 2 s1 2 s2 2 s3 2 e2 t=2 r1 3 r2 3 r3 3 s1 3 s2 3 s3 3 e3 t=3 r1 4 r2 4 r3 4 s1 4 s2 4 s3 4 e4 t=4 r1 5 r2 5 r3 5 s1 5 s2 5 s3 5 e5 t=5 r1 6 r2 6 r3 6 s1 6 s2 6 s3 6 e6 t=6 r1 7 r2 7 r3 7 s1 7 s2 7 s3 7 e7 t=7 r1 8 r2 8 r3 8 =DT =NP/NN =NP =NN =S/VP =VB =S/VP =VP/NP =DT =VP/NN =S/VP =NN =VP =S/PP =IN =S/NP =S =NP =The =president =meets =the =board =on =Friday Figure 5: Graphical representation of the Hierarchic Hidden Markov Model after parsing input sentence The president meets the board on Friday. The shaded path through the parse lattice illustrates the recognized right-corner tree structure of Figure 3. if no reduction has taken place (frd t =0; using depthspecific store state transition model θS-T,d): 2 PθS(sd t | rd+1 t rd t sd t−1sd−1 t ) def = if frd+1 t =1, frd t =1 : PθS-E,d(sd t | sd−1 t ) if frd+1 t =1, frd t =0 : PθS-T,d(sd t | rd+1 t rd t sd t−1sd−1 t ) if frd+1 t =0, frd t =0 : Jsd t = sd t−1K (12) and possibly reduce a store element (terminate a sequence) if the store state below it has reduced (frd+1 t = 1; using depth-specific reduction model θR,d): PθR(rd t | rd+1 t sd t−1sd−1 t−1 ) def = if frd+1 t =0 : Jrd t = r⊥K if frd+1 t =1 : PθR,d(rd t | rd+1 t sd t−1 sd−1 t−1 ) (13) where r⊥is a null state resulting from the failure of an incomplete constituent to complete, and constants are defined for the edge conditions of s0 t and rD+1 t . Figure 5 illustrates this model in action. These pushdown automaton operations are then refined for right-corner parsing (Schuler, 2009), distinguishing active transitions (model θS-T-A,d, in which an incomplete constituent is completed, but not reduced, and then immediately expanded to a 2An indicator function J·K is used to denote deterministic probabilities: JφK = 1 if φ is true, 0 otherwise. 
new incomplete constituent in the same store element) from awaited transitions (model θS-T-W,d, which involve no completion): PθS-T,d(sd t | rd+1 t rd t sd t−1sd−1 t ) def = if rd t ̸=r⊥: PθS-T-A,d(sd t | sd−1 t rd t ) if rd t =r⊥: PθS-T-W,d(sd t | sd t−1rd+1 t ) (14) PθR,d(rd t | rd+1 t sd t−1sd−1 t−1 ) def = if crd+1 t ̸=xt: Jrd t = r⊥K if crd+1 t =xt: PθR-R,d(rd t | sd t−1sd−1 t−1 ) (15) These HHMM right-corner parsing operations are then defined in terms of branch- and depth-specific PCFG probabilities θG-R,d and θG-L,d: 3 3Model probabilities are also defined in terms of leftprogeny probability distribution EθG-RL∗,d which is itself defined in terms of PCFG probabilities: EθG-RL∗,d(cη 0→cη0 ...) def = X cη1 PθG-R,d(cη →cη0 cη1) (16) EθG-RL∗,d(cη k→cη0k0 ...) def = X cη0k EθG-RL∗,d(cη k−1 →cη0k ...) · X cη0k1 PθG-L,d(cη0k →cη0k0 cη0k1) (17) EθG-RL∗,d(cη ∗→cηι ...) def = ∞ X k=0 EθG-RL∗,d(cη k→cηι ...) (18) EθG-RL∗,d(cη + →cηι ...) def = EθG-RL∗,d(cη ∗→cηι ...) −EθG-RL∗,d(cη 0→cηι ...) (19) 625 president meets ˜τ31 . . . the board ˜τ51 ... s1 3 s2 3 s3 3 e3 r1 4 r2 4 r3 4 s1 4 s2 4 s3 4 e4 r1 5 r2 5 r3 5 s1 5 s2 5 s3 5 e5 =meets =the =board Figure 6: A hypothesis in the phrase-based decoding lattice from Figure 1 is expanded using translation option the board of source phrase den Vorstand. Syntactic language model state ˜τ31 contains random variables s1..3 3 ; likewise ˜τ51 contains s1..3 5 . The intervening random variables r1..3 4 , s1..3 4 , and r1..3 5 are calculated by transition function δ (Eq. 6, as defined by §4.1), but are not stored. Observed random variables (e3..e5) are shown for clarity, but are not explicitly stored in any syntactic language model state. • for expansions: PθS-E,d(⟨cηι, c′ ηι⟩| ⟨−, cη⟩) def = EθG-RL∗,d(cη ∗→cηι ...) · Jxηι = c′ ηι = cηιK (20) • for awaited transitions: PθS-T-W,d(⟨cη, cηι1⟩| ⟨c′ η, cηι⟩cηι0) def = Jcη = c′ ηK · PθG-R,d(cηι →cηι0 cηι1) EθG-RL∗,d(cηι 0→cηι0 ...) (21) • for active transitions: PθS-T-A,d(⟨cηι, cηι1⟩| ⟨−, cη⟩cηι0) def = EθG-RL∗,d(cη ∗→cηι ...) · PθG-L,d(cηι →cηι0 cηι1) EθG-RL∗,d(cη + →cηι0 ...) (22) • for cross-element reductions: PθR-R,d(cηι, 1 | ⟨−, cη⟩⟨c′ ηι, −⟩) def = Jcηι = c′ ηιK · EθG-RL∗,d(cη 0→cηι ...) EθG-RL∗,d(cη ∗→cηι ...) (23) • for in-element reductions: PθR-R,d(cηι, 0 | ⟨−, cη⟩⟨c′ ηι, −⟩) def = Jcηι = c′ ηιK · EθG-RL∗,d(cη + →cηι ...) EθG-RL∗,d(cη ∗→cηι ...) (24) We use the parser implementation of (Schuler, 2009; Schuler et al., 2010). 5 Phrase Based Translation with an Incremental Syntactic Language Model The phrase-based decoder is augmented by adding additional state data to each hypothesis in the de626 coder’s hypothesis stacks. Figure 1 illustrates an excerpt from a standard phrase-based translation lattice. Within each decoder stack t, each hypothesis h is augmented with a syntactic language model state ˜τth. Each syntactic language model state is a random variable store, containing a slice of random variables from the HHMM. Specifically, ˜τth contains those random variables s1..D t that maintain distributions over syntactic elements. By maintaining these syntactic random variable stores, each hypothesis has access to the current language model probability for the partial translation ending at that hypothesis, as calculated by an incremental syntactic language model defined by the HHMM. 
Specifically, the random variable store at hypothesis h provides P(˜τth) = P(eh 1..t, s1..D 1..t ), where eh 1..t is the sequence of words in a partial hypothesis ending at h which contains t target words, and where there are D syntactic random variables in each random variable store (Eq. 5). During stack decoding, the phrase-based decoder progressively constructs new hypotheses by extending existing hypotheses. New hypotheses are placed in appropriate hypothesis stacks. In the simplest case, a new hypothesis extends an existing hypothesis by exactly one target word. As the new hypothesis is constructed by extending an existing stack element, the store and reduction state random variables are processed, along with the newly hypothesized word. This results in a new store of syntactic random variables (Eq. 6) that are associated with the new stack element. When a new hypothesis extends an existing hypothesis by more than one word, this process is first carried out for the first new word in the hypothesis. It is then repeated for the remaining words in the hypothesis extension. Once the final word in the hypothesis has been processed, the resulting random variable store is associated with that hypothesis. The random variable stores created for the non-final words in the extending hypothesis are discarded, and need not be explicitly retained. Figure 6 illustrates this process, showing how a syntactic language model state ˜τ51 in a phrase-based decoding lattice is obtained from a previous syntactic language model state ˜τ31 (from Figure 1) by parsing the target language words from a phrasebased translation option. In-domain Out-of-domain LM WSJ 23 ppl ur-en dev ppl WSJ 1-gram 1973.57 3581.72 WSJ 2-gram 349.18 1312.61 WSJ 3-gram 262.04 1264.47 WSJ 4-gram 244.12 1261.37 WSJ 5-gram 232.08 1261.90 WSJ HHMM 384.66 529.41 Interpolated WSJ 5-gram + HHMM 209.13 225.48 Giga 5-gram 258.35 312.28 Interp. Giga 5-gr + WSJ HHMM 222.39 123.10 Interp. Giga 5-gr + WSJ 5-gram 174.88 321.05 Figure 7: Average per-word perplexity values. HHMM was run with beam size of 2000. Bold indicates best single-model results for LMs trained on WSJ sections 2-21. Best overall in italics. Our syntactic language model is integrated into the current version of Moses (Koehn et al., 2007). 6 Results As an initial measure to compare language models, average per-word perplexity, ppl, reports how surprised a model is by test data. Equation 25 calculates ppl using log base b for a test set of T tokens. ppl = b −logbP(e1...eT ) T (25) We trained the syntactic language model from §4 (HHMM) and an interpolated n-gram language model with modified Kneser-Ney smoothing (Chen and Goodman, 1998); models were trained on sections 2-21 of the Wall Street Journal (WSJ) treebank (Marcus et al., 1993). The HHMM outperforms the n-gram model in terms of out-of-domain test set perplexity when trained on the same WSJ data; the best perplexity results for in-domain and out-of-domain test sets4 are found by interpolating 4In-domain is WSJ Section 23. Out-of-domain are the English reference translations of the dev section , set aside in (Baker et al., 2009) for parameter tuning, of the NIST Open MT 2008 Urdu-English task. 627 Sentence Moses +HHMM +HHMM length beam=50 beam=2000 10 0.21 533 1143 20 0.53 1193 2562 30 0.85 1746 3749 40 1.13 2095 4588 Figure 8: Mean per-sentence decoding time (in seconds) for dev set using Moses with and without syntactic language model. HHMM parser beam sizes are indicated for the syntactic LM. HHMM and n-gram LMs (Figure 7). 
To show the effects of training an LM on more data, we also report perplexity results on the 5-gram LM trained for the GALE Arabic-English task using the English Gigaword corpus. In all cases, including the HHMM significantly reduces perplexity. We trained a phrase-based translation model on the full NIST Open MT08 Urdu-English translation model using the full training data. We trained the HHMM and n-gram LMs on the WSJ data in order to make them as similar as possible. During tuning, Moses was first configured to use just the n-gram LM, then configured to use both the n-gram LM and the syntactic HHMM LM. MERT consistently assigned positive weight to the syntactic LM feature, typically slightly less than the n-gram LM weight. In our integration with Moses, incorporating a syntactic language model dramatically slows the decoding process. Figure 8 illustrates a slowdown around three orders of magnitude. Although speed remains roughly linear to the size of the source sentence (ruling out exponential behavior), it is with an extremely large constant time factor. Due to this slowdown, we tuned the parameters using a constrained dev set (only sentences with 1-20 words), and tested using a constrained devtest set (only sentences with 1-20 words). Figure 9 shows a statistically significant improvement to the BLEU score when using the HHMM and the n-gram LMs together on this reduced test set. 7 Discussion This paper argues that incremental syntactic languages models are a straightforward and approMoses LM(s) BLEU n-gram only 18.78 HHMM + n-gram 19.78 Figure 9: Results for Ur-En devtest (only sentences with 1-20 words) with HHMM beam size of 2000 and Moses settings of distortion limit 10, stack size 200, and ttable limit 20. priate algorithmic fit for incorporating syntax into phrase-based statistical machine translation, since both process sentences in an incremental left-toright fashion. This means incremental syntactic LM scores can be calculated during the decoding process, rather than waiting until a complete sentence is posited, which is typically necessary in top-down or bottom-up parsing. We provided a rigorous formal definition of incremental syntactic languages models, and detailed what steps are necessary to incorporate such LMs into phrase-based decoding. We integrated an incremental syntactic language model into Moses. The translation quality significantly improved on a constrained task, and the perplexity improvements suggest that interpolating between n-gram and syntactic LMs may hold promise on larger data sets. The use of very large n-gram language models is typically a key ingredient in the best-performing machine translation systems (Brants et al., 2007). Our n-gram model trained only on WSJ is admittedly small. Our future work seeks to incorporate largescale n-gram language models in conjunction with incremental syntactic language models. The added decoding time cost of our syntactic language model is very high. By increasing the beam size and distortion limit of the baseline system, future work may examine whether a baseline system with comparable runtimes can achieve comparable translation quality. A more efficient implementation of the HHMM parser would speed decoding and make more extensive and conclusive translation experiments possible. Various additional improvements could include caching the HHMM LM calculations, and exploiting properties of the right-corner transform that limit the number of decisions between successive time steps. 
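As a small worked example of Equation 25, per-word perplexity can be computed from per-sentence log probabilities as below. The helper names and the linear interpolation weight are assumptions for illustration only; the paper does not specify the mixture weight used for the interpolated models in Figure 7.

```python
def perplexity(sentence_logprobs, num_tokens, base=2.0):
    """Eq. 25: ppl = b ** ( -log_b P(e_1 ... e_T) / T ).

    sentence_logprobs: per-sentence log probabilities, already in the given base
    num_tokens:        total token count T of the test set
    """
    total_logprob = sum(sentence_logprobs)
    return base ** (-total_logprob / num_tokens)

def interpolate(p_ngram, p_hhmm, lam=0.5):
    """A simple linear mixture of n-gram and HHMM word probabilities (lam is assumed)."""
    return lam * p_ngram + (1.0 - lam) * p_hhmm
```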
628 References Anne Abeill´e, Yves Schabes, and Aravind K. Joshi. 1990. Using lexicalized tree adjoining grammars for machine translation. In Proceedings of the 13th International Conference on Computational Linguistics. Anthony E. Ades and Mark Steedman. 1982. On the order of words. Linguistics and Philosophy, 4:517– 558. Kathy Baker, Steven Bethard, Michael Bloodgood, Ralf Brown, Chris Callison-Burch, Glen Coppersmith, Bonnie Dorr, Wes Filardo, Kendall Giles, Anni Irvine, Mike Kayser, Lori Levin, Justin Martineau, Jim Mayfield, Scott Miller, Aaron Phillips, Andrew Philpot, Christine Piatko, Lane Schwartz, and David Zajic. 2009. Semantically informed machine translation (SIMT). SCALE summer workshop final report, Human Language Technology Center Of Excellence. Alexandra Birch, Miles Osborne, and Philipp Koehn. 2007. CCG supertags in factored statistical machine translation. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 9–16. Thorsten Brants, Ashok C. Popat, Peng Xu, Franz J. Och, and Jeffrey Dean. 2007. Large language models in machine translation. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL). Peter Brown, John Cocke, Stephen Della Pietra, Vincent Della Pietra, Frederick Jelinek, John Lafferty, Robert Mercer, and Paul Roossin. 1990. A statistical approach to machine translation. Computational Linguistics, 16(2):79–85. Eugene Charniak, Kevin Knight, and Kenji Yamada. 2003. Syntax-based language models for statistical machine translation. In Proceedings of the Ninth Machine Translation Summit of the International Association for Machine Translation. Ciprian Chelba and Frederick Jelinek. 1998. Exploiting syntactic structure for language modeling. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, pages 225– 231. Ciprian Chelba and Frederick Jelinek. 2000. Structured language modeling. Computer Speech and Language, 14(4):283–332. Stanley F. Chen and Joshua Goodman. 1998. An empirical study of smoothing techniques for language modeling. Technical report, Harvard University. Colin Cherry. 2008. Cohesive phrase-based decoding for statistical machine translation. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 72–80. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 263–270. David Chiang. 2010. Learning to translate with source and target syntax. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1443–1452. John Cocke and Jacob Schwartz. 1970. Programming languages and their compilers. Technical report, Courant Institute of Mathematical Sciences, New York University. Michael Collins, Brian Roark, and Murat Saraclar. 2005. Discriminative syntactic language modeling for speech recognition. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 507–514. Brooke Cowan, Ivona Ku˘cerov´a, and Michael Collins. 2006. A discriminative model for tree-to-tree translation. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 232–241. Steve DeNeefe and Kevin Knight. 2009. Synchronous tree adjoining machine translation. 
In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 727–736. Steve DeNeefe, Kevin Knight, Wei Wang, and Daniel Marcu. 2007. What can syntax-based MT learn from phrase-based MT? In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 755–763. Yuan Ding and Martha Palmer. 2005. Machine translation using probabilistic synchronous dependency insertion grammars. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 541–548. Jay Earley. 1968. An efficient context-free parsing algorithm. Ph.D. thesis, Department of Computer Science, Carnegie Mellon University. Jason Eisner. 2003. Learning non-isomorphic tree mappings for machine translation. In The Companion Volume to the Proceedings of 41st Annual Meeting of the Association for Computational Linguistics, pages 205–208. Michel Galley and Christopher D. Manning. 2009. Quadratic-time dependency parsing for machine translation. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 773–781. Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a translation rule? In 629 Daniel Marcu Susan Dumais and Salim Roukos, editors, Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 273– 280. Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 961– 968. Niyu Ge. 2010. A direct syntax-driven reordering model for phrase-based machine translation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 849–857. Daniel Gildea. 2003. Loosely tree-based alignment for machine translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 80–87. Jonathan Graehl and Kevin Knight. 2004. Training tree transducers. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 105–112. Hany Hassan, Khalil Sima’an, and Andy Way. 2007. Supertagged phrase-based statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 288–295. James Henderson. 2004. Lookahead in deterministic left-corner parsing. In Proceedings of the Workshop on Incremental Parsing: Bringing Engineering and Cognition Together, pages 26–33. Liang Huang and Haitao Mi. 2010. Efficient incremental decoding for tree-to-string translation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 273–283. Liang Huang and Kenji Sagae. 2010. Dynamic programming for linear-time incremental parsing. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1077–1086. Liang Huang, Kevin Knight, and Aravind Joshi. 2006. Statistical syntax-directed translation with extended domain of locality. 
In Proceedings of the 7th Biennial conference of the Association for Machine Translation in the Americas. Kenji Imamura, Hideo Okuma, Taro Watanabe, and Eiichiro Sumita. 2004. Example-based machine translation based on syntactic transfer with statistical models. In Proceedings of the 20th International Conference on Computational Linguistics, pages 99–105. Frederick Jelinek. 1969. Fast sequential decoding algorithm using a stack. IBM Journal of Research and Development, pages 675–685. T. Kasami. 1965. An efficient recognition and syntax analysis algorithm for context free languages. Technical Report AFCRL-65-758, Air Force Cambridge Research Laboratory. Philipp Koehn, Franz Joseph Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 127–133. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, pages 177–180. Philipp Koehn. 2010. Statistical Machine Translation. Cambridge University Press. Yang Liu, Qun Liu, and Shouxun Lin. 2006. Tree-tostring alignment template for statistical machine translation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 609–616. Yang Liu, Yun Huang, Qun Liu, and Shouxun Lin. 2007. Forest-to-string statistical translation rules. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 704–711. Yang Liu, Yajuan L¨u, and Qun Liu. 2009. Improving tree-to-tree translation with packed forests. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 558–566. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19(2):313–330. I. Dan Melamed. 2004. Statistical machine translation by parsing. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics, pages 653–660. Haitao Mi and Liang Huang. 2008. Forest-based translation rule extraction. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 206–214. Haitao Mi, Liang Huang, and Qun Liu. 2008. Forestbased translation. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 192–199. 630 Kevin P. Murphy and Mark A. Paskin. 2001. Linear time inference in hierarchical HMMs. In Proceedings of Neural Information Processing Systems, pages 833– 840. Rebecca Nesson, Stuart Shieber, and Alexander Rush. 2006. Induction of probabilistic synchronous treeinsertion grammars for machine translation. In Proceedings of the 7th Biennial conference of the Association for Machine Translation in the Americas, pages 128–137. Franz Josef Och and Hermann Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 295–302. 
Franz Josef Och, Daniel Gildea, Sanjeev Khudanpur, Anoop Sarkar, Kenji Yamada, Alex Fraser, Shankar Kumar, Libin Shen, David Smith, Katherine Eng, Viren Jain, Zhen Jin, and Dragomir Radev. 2004. A smorgasbord of features for statistical machine translation. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 161–168. Matt Post and Daniel Gildea. 2008. Parsers as language models for statistical machine translation. In Proceedings of the Eighth Conference of the Association for Machine Translation in the Americas, pages 172–181. Matt Post and Daniel Gildea. 2009. Language modeling with tree substitution grammars. In NIPS workshop on Grammar Induction, Representation of Language, and Language Learning. Arjen Poutsma. 1998. Data-oriented translation. In Ninth Conference of Computational Linguistics in the Netherlands. Chris Quirk, Arul Menezes, and Colin Cherry. 2005. Dependency treelet translation: Syntactically informed phrasal SMT. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 271–279. Lawrence R. Rabiner. 1990. A tutorial on hidden Markov models and selected applications in speech recognition. Readings in speech recognition, 53(3):267–296. Brian Roark. 2001. Probabilistic top-down parsing and language modeling. Computational Linguistics, 27(2):249–276. William Schuler, Samir AbdelRahman, Tim Miller, and Lane Schwartz. 2010. Broad-coverage incremental parsing using human-like memory constraints. Computational Linguistics, 36(1):1–30. William Schuler. 2009. Positive results for parsing with a bounded stack using a model-based right-corner transform. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 344–352. Libin Shen, Jinxi Xu, and Ralph Weischedel. 2008. A new string-to-dependency machine translation algorithm with a target dependency language model. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 577–585. Stuart M. Shieber and Yves Schabes. 1990. Synchronous tree adjoining grammars. In Proceedings of the 13th International Conference on Computational Linguistics. Stuart M. Shieber. 2004. Synchronous grammars as tree transducers. In Proceedings of the Seventh International Workshop on Tree Adjoining Grammar and Related Formalisms. Mark Steedman. 2000. The syntactic process. MIT Press/Bradford Books, Cambridge, MA. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377–403. Kenji Yamada and Kevin Knight. 2001. A syntax-based statistical translation model. In Proceedings of 39th Annual Meeting of the Association for Computational Linguistics, pages 523–530. D.H. Younger. 1967. Recognition and parsing of context-free languages in time n cubed. Information and Control, 10(2):189–208. Min Zhang, Hongfei Jiang, Ai Ti Aw, Jun Sun, Seng Li, and Chew Lim Tan. 2007. A tree-to-tree alignmentbased model for statistical machine translation. In Proceedings of the 11th Machine Translation Summit of the International Association for Machine Translation, pages 535–542. Andreas Zollmann and Ashish Venugopal. 2006. Syntax augmented machine translation via chart parsing. In Proceedings of the Workshop on Statistical Machine Translation, pages 138–141. 631
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 632–641, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics An Unsupervised Model for Joint Phrase Alignment and Extraction Graham Neubig1,2 Taro Watanabe2, Eiichiro Sumita2, Shinsuke Mori1, Tatsuya Kawahara1 1Graduate School of Informatics, Kyoto University Yoshida Honmachi, Sakyo-ku, Kyoto, Japan 2National Institute of Information and Communication Technology 3-5 Hikari-dai, Seika-cho, Soraku-gun, Kyoto, Japan Abstract We present an unsupervised model for joint phrase alignment and extraction using nonparametric Bayesian methods and inversion transduction grammars (ITGs). The key contribution is that phrases of many granularities are included directly in the model through the use of a novel formulation that memorizes phrases generated not only by terminal, but also non-terminal symbols. This allows for a completely probabilistic model that is able to create a phrase table that achieves competitive accuracy on phrase-based machine translation tasks directly from unaligned sentence pairs. Experiments on several language pairs demonstrate that the proposed model matches the accuracy of traditional two-step word alignment/phrase extraction approach while reducing the phrase table to a fraction of the original size. 1 Introduction The training of translation models for phrasebased statistical machine translation (SMT) systems (Koehn et al., 2003) takes unaligned bilingual training data as input, and outputs a scored table of phrase pairs. This phrase table is traditionally generated by going through a pipeline of two steps, first generating word (or minimal phrase) alignments, then extracting a phrase table that is consistent with these alignments. However, as DeNero and Klein (2010) note, this two step approach results in word alignments that are not optimal for the final task of generating phrase tables that are used in translation. As a solution to this, they proposed a supervised discriminative model that performs joint word alignment and phrase extraction, and found that joint estimation of word alignments and extraction sets improves both word alignment accuracy and translation results. In this paper, we propose the first unsupervised approach to joint alignment and extraction of phrases at multiple granularities. This is achieved by constructing a generative model that includes phrases at many levels of granularity, from minimal phrases all the way up to full sentences. The model is similar to previously proposed phrase alignment models based on inversion transduction grammars (ITGs) (Cherry and Lin, 2007; Zhang et al., 2008; Blunsom et al., 2009), with one important change: ITG symbols and phrase pairs are generated in the opposite order. In traditional ITG models, the branches of a biparse tree are generated from a nonterminal distribution, and each leaf is generated by a word or phrase pair distribution. As a result, only minimal phrases are directly included in the model, while larger phrases must be generated by heuristic extraction methods. In the proposed model, at each branch in the tree, we first attempt to generate a phrase pair from the phrase pair distribution, falling back to ITG-based divide and conquer strategy to generate phrase pairs that do not exist (or are given low probability) in the phrase distribution. 
We combine this model with the Bayesian nonparametric Pitman-Yor process (Pitman and Yor, 1997; Teh, 2006), realizing ITG-based divide and conquer through a novel formulation where the Pitman-Yor process uses two copies of itself as a 632 base measure. As a result of this modeling strategy, phrases of multiple granularities are generated, and thus memorized, by the Pitman-Yor process. This makes it possible to directly use probabilities of the phrase model as a replacement for the phrase table generated by heuristic extraction techniques. Using this model, we perform machine translation experiments over four language pairs. We observe that the proposed joint phrase alignment and extraction approach is able to meet or exceed results attained by a combination of GIZA++ and heuristic phrase extraction with significantly smaller phrase table size. We also find that it achieves superior BLEU scores over previously proposed ITG-based phrase alignment approaches. 2 A Probabilistic Model for Phrase Table Extraction The problem of SMT can be defined as finding the most probable target sentence e for the source sentence f given a parallel training corpus ⟨E, F⟩ ˆe = argmax e P(e|f, ⟨E, F⟩). We assume that there is a hidden set of parameters θ learned from the training data, and that e is conditionally independent from the training corpus given θ. We take a Bayesian approach, integrating over all possible values of the hidden parameters: P(e|f, ⟨E, F⟩) = ∫ θ P(e|f, θ)P(θ|⟨E, F⟩). (1) If θ takes the form of a scored phrase table, we can use traditional methods for phrase-based SMT to find P(e|f, θ) and concentrate on creating a model for P(θ|⟨E, F⟩). We decompose this posterior probability using Bayes law into the corpus likelihood and parameter prior probabilities P(θ|⟨E, F⟩) ∝P(⟨E, F⟩|θ)P(θ). In Section 3 we describe an existing method, and in Section 4 we describe our proposed method for modeling these two probabilities. 3 Flat ITG Model There has been a significant amount of work in many-to-many alignment techniques (Marcu and Wong (2002), DeNero et al. (2008), inter alia), and in particular a number of recent works (Cherry and Lin, 2007; Zhang et al., 2008; Blunsom et al., 2009) have used the formalism of inversion transduction grammars (ITGs) (Wu, 1997) to learn phrase alignments. By slightly limit reordering of words, ITGs make it possible to exactly calculate probabilities of phrasal alignments in polynomial time, which is a computationally hard problem when arbitrary reordering is allowed (DeNero and Klein, 2008). The traditional flat ITG generative probability for a particular phrase (or sentence) pair Pflat(⟨e, f⟩; θx, θt) is parameterized by a phrase table θt and a symbol distribution θx. We use the following generative story as a representative of the flat ITG model. 1. Generate symbol x from the multinomial distribution Px(x; θx). x can take the values TERM, REG, or INV. 2. According to the x take the following actions. (a) If x = TERM, generate a phrase pair from the phrase table Pt(⟨e, f⟩; θt). (b) If x = REG, a regular ITG rule, generate phrase pairs ⟨e1, f1⟩and ⟨e2, f2⟩from Pflat, and concatenate them into a single phrase pair ⟨e1e2, f1f2⟩. (c) If x = INV, an inverted ITG rule, follows the same process as (b), but concatenate f1 and f2 in reverse order ⟨e1e2, f2f1⟩. By taking the product of Pflat over every sentence in the corpus, we are able to calculate the likelihood P(⟨E, F⟩|θ) = ∏ ⟨e,f⟩∈⟨E,F⟩ Pflat(⟨e, f⟩; θ). We will refer to this model as FLAT. 
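As a rough illustration of this generative story (a sketch of ours, not the authors' code; the two sampling functions below are hypothetical placeholders for the multinomial Px(x; θx) and the phrase table Pt(⟨e, f⟩; θt)), the following Python fragment draws a sentence pair top-down:

import random

# Hypothetical placeholder distributions; real values come from the learned model.
def sample_symbol():
    return random.choices(["TERM", "REG", "INV"], weights=[0.5, 0.3, 0.2])[0]

def sample_phrase_pair():
    # Stand-in for drawing a minimal phrase pair <e, f> from the phrase table.
    return (["hello"], ["bonjour"])

def generate_flat():
    """Draw one <e, f> pair following the FLAT generative story."""
    x = sample_symbol()
    if x == "TERM":                      # step 2(a): emit a minimal phrase pair
        return sample_phrase_pair()
    e1, f1 = generate_flat()             # recurse for the left child
    e2, f2 = generate_flat()             # recurse for the right child
    if x == "REG":                       # step 2(b): monotone concatenation
        return (e1 + e2, f1 + f2)
    return (e1 + e2, f2 + f1)            # step 2(c): INV reverses the target side

e, f = generate_flat()
print(" ".join(e), "|||", " ".join(f))

Note that under this story only the pairs emitted at TERM leaves are drawn from the phrase distribution, which is exactly the property the hierarchical model of Section 4 changes.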
3.1 Bayesian Modeling While the previous formulation can be used as-is in maximum likelihood training, this leads to a degenerate solution where every sentence is memorized as a single phrase pair. Zhang et al. (2008) and others propose dealing with this problem by putting a prior probability P(θx, θt) on the parameters. 633 We assign θx a Dirichlet prior1, and assign the phrase table parameters θt a prior using the PitmanYor process (Pitman and Yor, 1997; Teh, 2006), which is a generalization of the Dirichlet process prior used in previous research. It is expressed as θt ∼PY (d, s, Pbase) (2) where d is the discount parameter, s is the strength parameter, and Pbase is the base measure. The discount d is subtracted from observed counts, and when it is given a large value (close to one), less frequent phrase pairs will be given lower relative probability than more common phrase pairs. The strength s controls the overall sparseness of the distribution, and when it is given a small value the distribution will be sparse. Pbase is the prior probability of generating a particular phrase pair, which we describe in more detail in the following section. Non-parametric priors are well suited for modeling the phrase distribution because every time a phrase is generated by the model, it is “memorized” and given higher probability. Because of this, common phrase pairs are more likely to be re-used (the rich-get-richer effect), which results in the induction of phrase tables with fewer, but more helpful phrases. It is important to note that only phrases generated by Pt are actually memorized and given higher probability by the model. In FLAT, only minimal phrases generated after Px outputs the terminal symbol TERM are generated from Pt, and thus only minimal phrases are memorized by the model. While the Dirichlet process is simply the PitmanYor process with d = 0, it has been shown that the discount parameter allows for more effective modeling of the long-tailed distributions that are often found in natural language (Teh, 2006). We confirmed in preliminary experiments (using the data described in Section 7) that the Pitman-Yor process with automatically adjusted parameters results in superior alignment results, outperforming the sparse Dirichlet process priors used in previous research2. The average gain across all data sets was approximately 0.8 BLEU points. 1The value of α had little effect on the results, so we arbitrarily set α = 1. 2We put weak priors on s (Gamma(α = 2, β = 1)) and d (Beta(α = 2, β = 2)) for the Pitman-Yor process, and set α = 1−10 for the Dirichlet process. 3.2 Base Measure Pbase in Equation (2) indicates the prior probability of phrase pairs according to the model. By choosing this probability appropriately, we can incorporate prior knowledge of what phrases tend to be aligned to each other. We calculate Pbase by first choosing whether to generate an unaligned phrase pair (where |e| = 0 or |f| = 0) according to a fixed probability pu3, then generating from Pba for aligned phrase pairs, or Pbu for unaligned phrase pairs. For Pba, we adopt a base measure similar to that used by DeNero et al. (2008): Pba(⟨e, f⟩) =M0(⟨e, f⟩)Ppois(|e|; λ)Ppois(|f|; λ) M0(⟨e, f⟩) =(Pm1(f|e)Puni(e)Pm1(e|f)Puni(f)) 1 2 . Ppois is the Poisson distribution with the average length parameter λ. As long phrases lead to sparsity, we set λ to a relatively small value to allow us to bias against overly long phrases4. 
Pm1 is the word-based Model 1 (Brown et al., 1993) probability of one phrase given the other, which incorporates word-based alignment information as prior knowledge in the phrase translation probability. We take the geometric mean5of the Model 1 probabilities in both directions to encourage alignments that are supported by both models (Liang et al., 2006). It should be noted that while Model 1 probabilities are used, they are only soft constraints, compared with the hard constraint of choosing a single word alignment used in most previous phrase extraction approaches. For Pbu, if g is the non-null phrase in e and f, we calculate the probability as follows: Pbu(⟨e, f⟩) = Puni(g)Ppois(|g|; λ)/2. Note that Pbu is divided by 2 as the probability is considering null alignments in both directions. 4 Hierarchical ITG Model While in FLAT only minimal phrases were memorized by the model, as DeNero et al. (2008) note 3We choose 10−2, 10−3, or 10−10 based on which value gave the best accuracy on the development set. 4We tune λ to 1, 0.1, or 0.01 based on which value gives the best performance on the development set. 5The probabilities of the geometric mean do not add to one, but we found empirically that even when left unnormalized, this provided much better results than the using the arithmetic mean, which is more theoretically correct. 634 and we confirm in the experiments in Section 7, using only minimal phrases leads to inferior translation results for phrase-based SMT. Because of this, previous research has combined FLAT with heuristic phrase extraction, which exhaustively combines all adjacent phrases permitted by the word alignments (Och et al., 1999). We propose an alternative, fully statistical approach that directly models phrases at multiple granularities, which we will refer to as HIER. By doing so, we are able to do away with heuristic phrase extraction, creating a fully probabilistic model for phrase probabilities that still yields competitive results. Similarly to FLAT, HIER assigns a probability Phier(⟨e, f⟩; θx, θt) to phrase pairs, and is parameterized by a phrase table θt and a symbol distribution θx. The main difference from the generative story of the traditional ITG model is that symbols and phrase pairs are generated in the opposite order. While FLAT first generates branches of the derivation tree using Px, then generates leaves using the phrase distribution Pt, HIER first attempts to generate the full sentence as a single phrase from Pt, then falls back to ITG-style derivations to cope with sparsity. We allow for this within the Bayesian ITG context by defining a new base measure Pdac (“divide-andconquer”) to replace Pbase in Equation (2), resulting in the following distribution for θt. θt ∼PY (d, s, Pdac) (3) Pdac essentially breaks the generation of a single longer phrase into two generations of shorter phrases, allowing even phrase pairs for which c(⟨e, f⟩) = 0 to be given some probability. The generative process of Pdac, similar to that of Pflat from the previous section, is as follows: 1. Generate symbol x from Px(x; θx). x can take the values BASE, REG, or INV. 2. According to x take the following actions. (a) If x = BASE, generate a new phrase pair directly from Pbase of Section 3.2. (b) If x = REG, generate ⟨e1, f1⟩and ⟨e2, f2⟩ from Phier, and concatenate them into a single phrase pair ⟨e1e2, f1f2⟩. Figure 1: A word alignment (a), and its derivations according to FLAT (b), and HIER (c). 
Solid and dotted lines indicate minimal and non-minimal pairs respectively, and phrases are written under their corresponding instance of Pt. The pair hate/coˆute is generated from Pbase. (c) If x = INV, follow the same process as (b), but concatenate f1 and f2 in reverse order ⟨e1e2, f2f1⟩. A comparison of derivation trees for FLAT and HIER is shown in Figure 1. As previously described, FLAT first generates from the symbol distribution Px, then from the phrase distribution Pt, while HIER generates directly from Pt, which falls back to divide-and-conquer based on Px when necessary. It can be seen that while Pt in FLAT only generates minimal phrases, Pt in HIER generates (and thus memorizes) phrases at all levels of granularity. 4.1 Length-based Parameter Tuning There are still two problems with HIER, one theoretical, and one practical. Theoretically, HIER contains itself as its base measure, and stochastic process models that include themselves as base measures are deficient, as noted in Cohen et al. (2010). Practically, while the Pitman-Yor process in HIER shares the parameters s and d over all phrase pairs in the model, long phrase pairs are much more sparse 635 Figure 2: Learned discount values by phrase pair length. than short phrase pairs, and thus it is desirable to appropriately adjust the parameters of Equation (2) according to phrase pair length. In order to solve these problems, we reformulate the model so that each phrase length l = |f|+|e| has its own phrase parameters θt,l and symbol parameters θx,l, which are given separate priors: θt,l ∼PY (s, d, Pdac,l) θx,l ∼Dirichlet(α) We will call this model HLEN. The generative story is largely similar to HIER with a few minor changes. When we generate a sentence, we first choose its length l according to a uniform distribution over all possible sentence lengths l ∼Uniform(1, L), where L is the size |e| + |f| of the longest sentence in the corpus. We then generate a phrase pair from the probability Pt,l(⟨e, f⟩) for length l. The base measure for HLEN is identical to that of HIER, with one minor change: when we fall back to two shorter phrases, we choose the length of the left phrase from ll ∼Uniform(1, l −1), set the length of the right phrase to lr = l−ll, and generate the smaller phrases from Pt,ll and Pt,lr respectively. It can be seen that phrases at each length are generated from different distributions, and thus the parameters for the Pitman-Yor process will be different for each distribution. Further, as ll and lr must be smaller than l, Pt,l no longer contains itself as a base measure, and is thus not deficient. An example of the actual discount values learned in one of the experiments described in Section 7 is shown in Figure 2. It can be seen that, as expected, the discounts for short phrases are lower than those of long phrases. In particular, phrase pairs of length up to six (for example, |e| = 3, |f| = 3) are given discounts of nearly zero while larger phrases are more heavily discounted. We conjecture that this is related to the observation by Koehn et al. (2003) that using phrases where max(|e|, |f|) ≤3 cause significant improvements in BLEU score, while using larger phrases results in diminishing returns. 4.2 Implementation Previous research has used a variety of sampling methods to learn Bayesian phrase based alignment models (DeNero et al., 2008; Blunsom et al., 2009; Blunsom and Cohn, 2010). 
All of these techniques are applicable to the proposed model, but we choose to apply the sentence-based blocked sampling of Blunsom and Cohn (2010), which has desirable convergence properties compared to sampling single alignments. As exhaustive sampling is too slow for practical purpose, we adopt the beam search algorithm of Saers et al. (2009), and use a probability beam, trimming spans where the probability is at least 1010 times smaller than that of the best hypothesis in the bucket. One important implementation detail that is different from previous models is the management of phrase counts. As a phrase pair ta may have been generated from two smaller component phrases tb and tc, when a sample containing ta is removed from the distribution, it may also be necessary to decrement the counts of tb and tc as well. The Chinese Restaurant Process representation of Pt (Teh, 2006) lends itself to a natural and easily implementable solution to this problem. For each table representing a phrase pair ta, we maintain not only the number of customers sitting at the table, but also the identities of phrases tb and tc that were originally used when generating the table. When the count of the table ta is reduced to zero and the table is removed, the counts of tb and tc are also decremented. 5 Phrase Extraction In this section, we describe both traditional heuristic phrase extraction, and the proposed model-based extraction method. 636 Figure 3: The phrase, block, and word alignments used in heuristic phrase extraction. 5.1 Heuristic Phrase Extraction The traditional method for heuristic phrase extraction from word alignments exhaustively enumerates all phrases up to a certain length consistent with the alignment (Och et al., 1999). Five features are used in the phrase table: the conditional phrase probabilities in both directions estimated using maximum likelihood Pml(f|e) and Pml(e|f), lexical weighting probabilities (Koehn et al., 2003), and a fixed penalty for each phrase. We will call this heuristic extraction from word alignments HEUR-W. These word alignments can be acquired through the standard GIZA++ training regimen. We use the combination of our ITG-based alignment with traditional heuristic phrase extraction as a second baseline. An example of these alignments is shown in Figure 3. In model HEUR-P, minimal phrases generated from Pt are treated as aligned, and we perform phrase extraction on these alignments. However, as the proposed models tend to align relatively large phrases, we also use two other techniques to create smaller alignment chunks that prevent sparsity. We perform regular sampling of the trees, but if we reach a minimal phrase generated from Pt, we continue traveling down the tree until we reach either a one-to-many alignment, which we will call HEUR-B as it creates alignments similar to the block ITG, or an at-most-one alignment, which we will call HEUR-W as it generates word alignments. It should be noted that forcing alignments smaller than the model suggests is only used for generating alignments for use in heuristic extraction, and does not affect the training process. 5.2 Model-Based Phrase Extraction We also propose a method for phrase table extraction that directly utilizes the phrase probabilities Pt(⟨e, f⟩). Similarly to the heuristic phrase tables, we use conditional probabilities Pt(f|e) and Pt(e|f), lexical weighting probabilities, and a phrase penalty. 
Here, instead of using maximum likelihood, we calculate conditional probabilities directly from Pt probabilities: Pt(f|e) = Pt(⟨e, f⟩)/ ∑ { ˜f:c(⟨e, ˜f⟩)≥1} Pt(⟨e, ˜f⟩) Pt(e|f) = Pt(⟨e, f⟩)/ ∑ {˜e:c(⟨˜e,f⟩)≥1} Pt(⟨˜e, f⟩). To limit phrase table size, we include only phrase pairs that are aligned at least once in the sample. We also include two more features: the phrase pair joint probability Pt(⟨e, f⟩), and the average posterior probability of each span that generated ⟨e, f⟩as computed by the inside-outside algorithm during training. We use the span probability as it gives a hint about the reliability of the phrase pair. It will be high for common phrase pairs that are generated directly from the model, and also for phrases that, while not directly included in the model, are composed of two high probability child phrases. It should be noted that while for FLAT and HIER Pt can be used directly, as HLEN learns separate models for each length, we must combine these probabilities into a single value. We do this by setting Pt(⟨e, f⟩) = Pt,l(⟨e, f⟩)c(l)/ L ∑ ˜l=1 c(˜l) for every phrase pair, where l = |e| + |f| and c(l) is the number of phrases of length l in the sample. We call this model-based extraction method MOD. 5.3 Sample Combination As has been noted in previous works, (Koehn et al., 2003; DeNero et al., 2006) exhaustive phrase extraction tends to out-perform approaches that use syntax or generative models to limit phrase boundaries. DeNero et al. (2006) state that this is because generative models choose only a single phrase segmentation, and thus throw away many good phrase pairs that are in conflict with this segmentation. Luckily, in the Bayesian framework it is simple to overcome this problem by combining phrase tables 637 from multiple samples. This is equivalent to approximating the integral over various parameter configurations in Equation (1). In MOD, we do this by taking the average of the joint probability and span probability features, and re-calculating the conditional probabilities from the averaged joint probabilities. 6 Related Work In addition to the previously mentioned phrase alignment techniques, there has also been a significant body of work on phrase extraction (Moore and Quirk (2007), Johnson et al. (2007a), inter alia). DeNero and Klein (2010) presented the first work on joint phrase alignment and extraction at multiple levels. While they take a supervised approach based on discriminative methods, we present a fully unsupervised generative model. A generative probabilistic model where longer units are built through the binary combination of shorter units was proposed by de Marcken (1996) for monolingual word segmentation using the minimum description length (MDL) framework. Our work differs in that it uses Bayesian techniques instead of MDL, and works on two languages, not one. Adaptor grammars, models in which nonterminals memorize subtrees that lie below them, have been used for word segmentation or other monolingual tasks (Johnson et al., 2007b). The proposed method could be thought of as synchronous adaptor grammars over two languages. However, adaptor grammars have generally been used to specify only two or a few levels as in the FLAT model in this paper, as opposed to recursive models such as HIER or many-leveled models such as HLEN. One exception is the variational inference method for adaptor grammars presented by Cohen et al. (2010) that is applicable to recursive grammars such as HIER. 
We plan to examine variational inference for the proposed models in future work. 7 Experimental Evaluation We evaluate the proposed method on translation tasks from four languages, French, German, Spanish, and Japanese, into English. de-en es-en fr-en ja-en TM (en) 1.80M 1.62M 1.35M 2.38M TM (other) 1.85M 1.82M 1.56M 2.78M LM (en) 52.7M 52.7M 52.7M 44.7M Tune (en ) 49.8k 49.8k 49.8k 68.9k Tune (other) 47.2k 52.6k 55.4k 80.4k Test (en) 65.6k 65.6k 65.6k 40.4k Test (other) 62.7k 68.1k 72.6k 48.7k Table 1: The number of words in each corpus for TM and LM training, tuning, and testing. 7.1 Experimental Setup The data for French, German, and Spanish are from the 2010 Workshop on Statistical Machine Translation (Callison-Burch et al., 2010). We use the news commentary corpus for training the TM, and the news commentary and Europarl corpora for training the LM. For Japanese, we use data from the NTCIR patent translation task (Fujii et al., 2008). We use the first 100k sentences of the parallel corpus for the TM, and the whole parallel corpus for the LM. Details of both corpora can be found in Table 1. Corpora are tokenized, lower-cased, and sentences of over 40 words on either side are removed for TM training. For both tasks, we perform weight tuning and testing on specified development and test sets. We compare the accuracy of our proposed method of joint phrase alignment and extraction using the FLAT, HIER and HLEN models, with a baseline of using word alignments from GIZA++ and heuristic phrase extraction. Decoding is performed using Moses (Koehn and others, 2007) using the phrase tables learned by each method under consideration, as well as standard bidirectional lexical reordering probabilities (Koehn et al., 2005). Maximum phrase length is limited to 7 in all models, and for the LM we use an interpolated Kneser-Ney 5-gram model. For GIZA++, we use the standard training regimen up to Model 4, and combine alignments with grow-diag-final-and. For the proposed models, we train for 100 iterations, and use the final sample acquired at the end of the training process for our experiments using a single sample6. In addition, 6For most models, while likelihood continued to increase gradually for all 100 iterations, BLEU score gains plateaued after 5-10 iterations, likely due to the strong prior information 638 de-en es-en fr-en ja-en Align Extract # Samp. BLEU Size BLEU Size BLEU Size BLEU Size GIZA++ HEUR-W 1 16.62 4.91M 22.00 4.30M 21.35 4.01M 23.20 4.22M FLAT MOD 1 13.48 136k 19.15 125k 17.97 117k 16.10 89.7k HIER MOD 1 16.58 1.02M 21.79 859k 21.50 751k 23.23 723k HLEN MOD 1 16.49 1.17M 21.57 930k 21.31 860k 23.19 820k HIER MOD 10 16.53 3.44M 21.84 2.56M 21.57 2.63M 23.12 2.21M HLEN MOD 10 16.51 3.74M 21.69 3.00M 21.53 3.09M 23.20 2.70M Table 2: BLEU score and phrase table size by alignment method, extraction method, and samples combined. Bold numbers are not significantly different from the best result according to the sign test (p < 0.05) (Collins et al., 2005). we also try averaging the phrase tables from the last ten samples as described in Section 5.3. 7.2 Experimental Results The results for these experiments can be found in Table 2. From these results we can see that when using a single sample, the combination of using HIER and model probabilities achieves results approximately equal to GIZA++ and heuristic phrase extraction. 
This is the first reported result in which an unsupervised phrase alignment model has built a phrase table directly from model probabilities and achieved results that compare to heuristic phrase extraction. It can also be seen that the phrase table created by the proposed method is approximately 5 times smaller than that obtained by the traditional pipeline. In addition, HIER significantly outperforms FLAT when using the model probabilities. This confirms that phrase tables containing only minimal phrases are not able to achieve results that compete with phrase tables that use multiple granularities. Somewhat surprisingly, HLEN consistently slightly underperforms HIER. This indicates potential gains to be provided by length-based parameter tuning were outweighed by losses due to the increased complexity of the model. In particular, we believe the necessity to combine probabilities from multiple Pt,l models into a single phrase table may have resulted in a distortion of the phrase probabilities. In addition, the assumption that phrase lengths are generated from a uniform distribution is likely too strong, and further gains provided by Pbase. As iterations took 1.3 hours on a single processor, good translation results can be achieved in approximately 13 hours, which could further reduced using distributed sampling (Newman et al., 2009; Blunsom et al., 2009). FLAT HIER MOD 17.97 117k 21.50 751k HEUR-W 21.52 5.65M 21.68 5.39M HEUR-B 21.45 4.93M 21.41 2.61M HEUR-P 21.56 4.88M 21.47 1.62M Table 3: Translation results and phrase table size for various phrase extraction techniques (French-English). could likely be achieved by more accurate modeling of phrase lengths. We leave further adjustments to the HLEN model to future work. It can also be seen that combining phrase tables from multiple samples improved the BLEU score for HLEN, but not for HIER. This suggests that for HIER, most of the useful phrase pairs discovered by the model are included in every iteration, and the increased recall obtained by combining multiple samples does not consistently outweigh the increased confusion caused by the larger phrase table. We also evaluated the effectiveness of modelbased phrase extraction compared to heuristic phrase extraction. Using the alignments from HIER, we created phrase tables using model probabilities (MOD), and heuristic extraction on words (HEUR-W), blocks (HEUR-B), and minimal phrases (HEUR-P) as described in Section 5. The results of these experiments are shown in Table 3. It can be seen that model-based phrase extraction using HIER outperforms or insignificantly underperforms heuristic phrase extraction over all experimental settings, while keeping the phrase table to a fraction of the size of most heuristic extraction methods. Finally, we varied the size of the parallel corpus for the Japanese-English task from 50k to 400k sen639 Figure 4: The effect of corpus size on the accuracy (a) and phrase table size (b) for each method (Japanese-English). tences and measured the effect of corpus size on translation accuracy. From the results in Figure 4 (a), it can be seen that at all corpus sizes, the results from all three methods are comparable, with insignificant differences between GIZA++ and HIER at all levels, and HLEN lagging slightly behind HIER. Figure 4 (b) shows the size of the phrase table induced by each method over the various corpus sizes. 
It can be seen that the tables created by GIZA++ are significantly larger at all corpus sizes, with the difference being particularly pronounced at larger corpus sizes. 8 Conclusion In this paper, we presented a novel approach to joint phrase alignment and extraction through a hierarchical model using non-parametric Bayesian methods and inversion transduction grammars. Machine translation systems using phrase tables learned directly by the proposed model were able to achieve accuracy competitive with the traditional pipeline of word alignment and heuristic phrase extraction, the first such result for an unsupervised model. For future work, we plan to refine HLEN to use a more appropriate model of phrase length than the uniform distribution, particularly by attempting to bias against phrase pairs where one of the two phrases is much longer than the other. In addition, we will test probabilities learned using the proposed model with an ITG-based decoder. We will also examine the applicability of the proposed model in the context of hierarchical phrases (Chiang, 2007), or in alignment using syntactic structure (Galley et al., 2006). It is also worth examining the plausibility of variational inference as proposed by Cohen et al. (2010) in the alignment context. Acknowledgments This work was performed while the first author was supported by the JSPS Research Fellowship for Young Scientists. References Phil Blunsom and Trevor Cohn. 2010. Inducing synchronous grammars with slice sampling. In Proceedings of the Human Language Technology: The 11th Annual Conference of the North American Chapter of the Association for Computational Linguistics. Phil Blunsom, Trevor Cohn, Chris Dyer, and Miles Osborne. 2009. A Gibbs sampler for phrasal synchronous grammar induction. In Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics, pages 782–790. Peter F. Brown, Vincent J.Della Pietra, Stephen A. Della Pietra, and Robert. L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19:263–311. Chris Callison-Burch, Philipp Koehn, Christof Monz, Kay Peterson, Mark Przybocki, and Omar F. Zaidan. 2010. Findings of the 2010 joint workshop on statistical machine translation and metrics for machine translation. In Proceedings of the Joint 5th Workshop on Statistical Machine Translation and MetricsMATR, pages 17–53. Colin Cherry and Dekang Lin. 2007. Inversion transduction grammar for joint phrasal translation modeling. In Proceedings of the NAACL Workshop on Syntax and Structure in Machine Translation. David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228. Shay B. Cohen, David M. Blei, and Noah A. Smith. 2010. Variational inference for adaptor grammars. In Proceedings of the Human Language Technology: The 640 11th Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 564–572. Michael Collins, Philipp Koehn, and Ivona Kuˇcerov´a. 2005. Clause restructuring for statistical machine translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 531–540. Carl de Marcken. 1996. Unsupervised Language Acquisition. Ph.D. thesis, Massachusetts Institute of Technology. John DeNero and Dan Klein. 2008. The complexity of phrase alignment problems. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, pages 25–28. John DeNero and Dan Klein. 2010. 
Discriminative modeling of extraction sets for machine translation. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1453–1463. John DeNero, Dan Gillick, James Zhang, and Dan Klein. 2006. Why generative phrase models underperform surface heuristics. In Proceedings of the 1st Workshop on Statistical Machine Translation, pages 31–38. John DeNero, Alex Bouchard-Cˆot´e, and Dan Klein. 2008. Sampling alignment structure under a Bayesian translation model. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 314–323. Atsushi Fujii, Masao Utiyama, Mikio Yamamoto, and Takehito Utsuro. 2008. Overview of the patent translation task at the NTCIR-7 workshop. In Proceedings of the 7th NTCIR Workshop Meeting on Evaluation of Information Access Technologies, pages 389–400. Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proceedings of the 44th Annual Meeting of the Association for Computational Linguistics, pages 961–968. J. Howard Johnson, Joel Martin, George Foster, and Roland Kuhn. 2007a. Improving translation quality by discarding most of the phrasetable. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Mark Johnson, Thomas L. Griffiths, and Sharon Goldwater. 2007b. Adaptor grammars: A framework for specifying compositional nonparametric Bayesian models. Advances in Neural Information Processing Systems, 19:641. Philipp Koehn et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics. Phillip Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the Human Language Technology Conference (HLTNAACL), pages 48–54. Philipp Koehn, Amittai Axelrod, Alexandra Birch Mayne, Chris Callison-Burch, Miles Osborne, and David Talbot. 2005. Edinburgh system description for the 2005 IWSLT speech translation evaluation. In Proceedings of the International Workshop on Spoken Language Translation. Percy Liang, Ben Taskar, and Dan Klein. 2006. Alignment by agreement. In Proceedings of the Human Language Technology Conference - North American Chapter of the Association for Computational Linguistics Annual Meeting (HLT-NAACL), pages 104–111. Daniel Marcu and William Wong. 2002. A phrase-based, joint probability model for statistical machine translation. pages 133–139. Robert C. Moore and Chris Quirk. 2007. An iterativelytrained segmentation-free phrase translation model for statistical machine translation. In Proceedings of the 2nd Workshop on Statistical Machine Translation, pages 112–119. David Newman, Arthur Asuncion, Padhraic Smyth, and Max Welling. 2009. Distributed algorithms for topic models. Journal of Machine Learning Research, 10:1801–1828. Franz Josef Och, Christoph Tillmann, and Hermann Ney. 1999. Improved alignment models for statistical machine translation. In Proceedings of the 4th Conference on Empirical Methods in Natural Language Processing, pages 20–28. Jim Pitman and Marc Yor. 1997. The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator. The Annals of Probability, 25(2):855– 900. Markus Saers, Joakim Nivre, and Dekai Wu. 2009. Learning stochastic bracketing inversion transduction grammars with a cubic time biparsing algorithm. 
In Proceedings of the The 11th International Workshop on Parsing Technologies. Yee Whye Teh. 2006. A hierarchical Bayesian language model based on Pitman-Yor processes. In Proceedings of the 44th Annual Meeting of the Association for Computational Linguistics. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377–403. Hao Zhang, Chris Quirk, Robert C. Moore, and Daniel Gildea. 2008. Bayesian learning of noncompositional phrases with synchronous parsing. Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, pages 97–105. 641
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 642–652, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Learning Hierarchical Translation Structure with Linguistic Annotations Markos Mylonakis ILLC University of Amsterdam [email protected] Khalil Sima’an ILLC University of Amsterdam [email protected] Abstract While it is generally accepted that many translation phenomena are correlated with linguistic structures, employing linguistic syntax for translation has proven a highly non-trivial task. The key assumption behind many approaches is that translation is guided by the source and/or target language parse, employing rules extracted from the parse tree or performing tree transformations. These approaches enforce strict constraints and might overlook important translation phenomena that cross linguistic constituents. We propose a novel flexible modelling approach to introduce linguistic information of varying granularity from the source side. Our method induces joint probability synchronous grammars and estimates their parameters, by selecting and weighing together linguistically motivated rules according to an objective function directly targeting generalisation over future data. We obtain statistically significant improvements across 4 different language pairs with English as source, mounting up to +1.92 BLEU for Chinese as target. 1 Introduction Recent advances in Statistical Machine Translation (SMT) are widely centred around two concepts: (a) hierarchical translation processes, frequently employing Synchronous Context Free Grammars (SCFGs) and (b) transduction or synchronous rewrite processes over a linguistic syntactic tree. SCFGs in the form of the Inversion-Transduction Grammar (ITG) were first introduced by (Wu, 1997) as a formalism to recursively describe the translation process. The Hiero system (Chiang, 2005) utilised an ITG-flavour which focused on hierarchical phrase-pairs to capture context-driven translation and reordering patterns with ‘gaps’, offering competitive performance particularly for language pairs with extensive reordering. As Hiero uses a single non-terminal and concentrates on overcoming translation lexicon sparsity, it barely explores the recursive nature of translation past the lexical level. Nevertheless, the successful employment of SCFGs for phrase-based SMT brought translation models assuming latent syntactic structure to the spotlight. Simultaneously, mounting efforts have been directed towards SMT models employing linguistic syntax on the source side (Yamada and Knight, 2001; Quirk et al., 2005; Liu et al., 2006), target side (Galley et al., 2004; Galley et al., 2006) or both (Zhang et al., 2008; Liu et al., 2009; Chiang, 2010). Hierarchical translation was combined with target side linguistic annotation in (Zollmann and Venugopal, 2006). Interestingly, early on (Koehn et al., 2003) exemplified the difficulties of integrating linguistic information in translation systems. Syntaxbased MT often suffers from inadequate constraints in the translation rules extracted, or from striving to combine these rules together towards a full derivation. 
Recent research tries to address these issues, by re-structuring training data parse trees to better suit syntax-based SMT training (Wang et al., 2010), or by moving from linguistically motivated synchronous grammars to systems where linguistic plausibility of the translation is assessed through additional features in a phrase-based system (Venugopal et al., 2009; Chiang et al., 2009), obscuring the impact of higher level syntactic processes. While it is assumed that linguistic structure does correlate with some translation phenomena, in this 642 work we do not employ it as the backbone of translation. In place of linguistically constrained translation imposing syntactic parse structure, we opt for linguistically motivated translation. We learn latent hierarchical structure, taking advantage of linguistic annotations but shaped and trained for translation. We start by labelling each phrase-pair span in the word-aligned training data with multiple linguistically motivated categories, offering multi-grained abstractions from its lexical content. These phrasepair label charts are the input of our learning algorithm, which extracts the linguistically motivated rules and estimates the probabilities for a stochastic SCFG, without arbitrary constraints such as phrase or span sizes. Estimating such grammars under a Maximum Likelihood criterion is known to be plagued by strong overfitting leading to degenerate estimates (DeNero et al., 2006). In contrast, our learning objective not only avoids overfitting the training data but, most importantly, learns joint stochastic synchronous grammars which directly aim at generalisation towards yet unseen instances. By advancing from structures which mimic linguistic syntax, to learning linguistically aware latent recursive structures targeting translation, we achieve significant improvements in translation quality for 4 different language pairs in comparison with a strong hierarchical translation baseline. Our key contributions are presented in the following sections. Section 2 discusses the weak independence assumptions of SCFGs and introduces a joint translation model which addresses these issues and separates hierarchical translation structure from phrase-pair emission. In section 3 we consider a chart over phrase-pair spans filled with sourcelanguage linguistically motivated labels. We show how we can employ this crucial input to extract and train a hierarchical translation structure model with millions of rules. Section 4 demonstrates decoding with the model by constraining derivations to linguistic hints of the source sentence and presents our empirical results. We close with a discussion of related work and our conclusions. 2 Joint Translation Model Our model is based on a probabilistic Synchronous CFG (Wu, 1997; Chiang, 2005). SCFGs define a SBAR →[WHNP SBAR\WHNP] (a) SBAR\WHNP →⟨VP/NPL NPR⟩ (b) NPR →[NP PP] (c) WHNP →WHNPP (d) WHNPP →which / der (e) VP/NPL →VP/NPL P (f) VP/NPL P →is / ist (g) NPR →NPR P (h) NPR P →the solution / die L¨osung (i) NP →NPP (j) NPP →the solution / die L¨osung (k) PP →PPP (l) PPP →to the problem / f¨ur das Problem (m) Figure 1: English-German SCFG rules for the relative clause(s) ‘which is the solution (to the problem) / der die L¨osung (f¨ur das Problem) ist’, [ ] signify monotone translation, ⟨⟩a swap reordering. language over string pairs, which are generated beginning from a start symbol S and recursively expanding pairs of linked non-terminals across the two strings using the grammar’s rule set. 
By crossing the links between the non-terminals of the two sides reordering phenomena are captured. We employ binary SCFGs, i.e. grammars with a maximum of two non-terminals on the right-hand side. Also, for this work we only used grammars with either purely lexical or purely abstract rules involving one or two nonterminal pairs. An example can be seen in Figure 1, using an ITG-style notation and assuming the same non-terminal labels for both sides. We utilise probabilistic SCFGs, where each rule is assigned a conditional probability of expanding the left-hand side symbol with the rule’s right-hand side. Phrase-pairs are emitted jointly and the overall probabilistic SCFG is a joint model over parallel strings. 2.1 SCFG Reordering Weaknesses An interesting feature of all probabilistic SCFGs (i.e. not only binary ones), which has received surprisingly little attention, is that the reordering pat643 tern between the non-terminal pairs (or in the case of ITGs the choice between monotone and swap expansion) are not conditioned on any other part of a derivation. The result is that, the reordering pattern with the highest probability will always be preferred (e.g. in the Viterbi derivation) over the rest, irrespective of lexical or abstract context. As an example, a probabilistic SCFG will always assign a higher probability to derivations swapping or monotonically translating nouns and adjectives between English and French, only depending on which of the two rules NP →[NN JJ], NP →⟨NN JJ⟩ has a higher probability. The rest of the (sometimes thousands of) rule-specific features usually added to SCFG translation models do not directly help either, leaving reordering decisions disconnected from the rest of the derivation. While in a decoder this is somehow mitigated by the use of a language model, we believe that the weakness of straightforward applications of SCFGs to model reordering structure at the sentence level misses a chance to learn this crucial part of the translation process during grammar induction. As (Mylonakis and Sima’an, 2010) note, ‘plain’ SCFGs seem to perform worse than the grammars described next, mainly due to wrong long-range reordering decisions for which the language model can hardly help. 2.2 Hierarchical Reordering SCFG We address the weaknesses mentioned above by relying on an SCFG grammar design that is similar to the ‘Lexicalised Reordering’ grammar of (Mylonakis and Sima’an, 2010). As in the rules of Figure 1, we separate non-terminals according to the reordering patterns in which they participate. Nonterminals such as BL, CR take part only in swapping right-hand sides ⟨BL CR⟩(with BL swapping from the source side’s left to the target side’s right, CR swapping in the opposite direction), while non-terminals such as B, C take part solely in monotone right-hand side expansions [B C]. These nonterminal categories can appear also on the left-hand side of a rule, as in rule (c) of Figure 1. In contrast with (Mylonakis and Sima’an, 2010), monotone and swapping non-terminals do not emit phrase-pairs themselves. Rather, each non-terminal NT is expanded to a dedicated phrase-pair emitA →[B C] A →⟨BL CR⟩ AL →[B C] AL →⟨BL CR⟩ AR →[B C] AR →⟨BL CR⟩ A →AP AP →α / β AL →AL P AL P →α / β AR →AR P AR P →α / β Figure 2: Recursive Reordering Grammar rule categories; A, B, C non-terminals; α, β source and target strings respectively. ting non-terminal NTP, which generates all phrasepairs for it and nothing more. 
In this way, the preference of non-terminals to either expand towards a (long) phrase-pair or be further analysed recursively is explicitly modelled. Furthermore, this set of pre-terminals allows us to separate the higher order translation structure from the process that emits phrase-pairs, a feature we employ next. In (Mylonakis and Sima’an, 2010) this grammar design mainly contributed to model lexical reordering preferences. While we retain this function, for the rich linguistically-motivated grammars used in this work this design effectively propagates reordering preferences above and below the current rule application (e.g. Figure 1, rules (a)-(c)), allowing to learn and apply complex reordering patterns. The different types of grammar rules are summarised in abstract form in Figure 2. We will subsequently refer to this grammar structure as Hierarchical Reordering SCFG (HR-SCFG). 2.3 Generative Model We arrive at a probabilistic SCFG model which jointly generates source e and target f strings, by augmenting each grammar rule with a probability, summing up to one for every left-hand side. The probability of a derivation D of tuple ⟨e, f⟩beginning from start symbol S is equal to the product of the probabilities of the rules used to recursively generate it. We separate the structural part of the derivation D, down to the pre-terminals NTP, from the phraseemission part. The grammar rules pertaining to the 644 X, SBAR, WHNP+VP, WHNP+VBZ+NP X, VBZ+NP, VP, SBAR\WHNP X, SBAR/NN, WHNP+VBZ+DT X, VBZ+DT, VP/NN X, WHNP+VBZ, X, NP, SBAR/NP VP\VBZ X, WHNP, X, VBZ, X, DT, X, NN, SBAR/VP VP/NP NP/NN NP\DT which is the problem Figure 3: The label chart for the source fragment ‘which is the problem’. Only a sample of the entries is listed. structural part and their associated probabilities define a model p(σ) over the latent variable σ determining the recursive, reordering and phrase-pair segmenting structure of translation, as in Figure 4. Given σ, the phrase-pair emission part merely generates the phrase-pairs utilising distributions from every NTP to the phrase-pairs that it covers, thereby defining a model over all sentence-pairs generated given each translation structure. The probabilities of a derivation and of a sentence-pair are then as follows: p(D) =p(σ)p(e, f|σ) (1) p(e, f) = X D:D ∗⇒⟨e,f⟩ p(D) (2) By splitting the joint model in a hierarchical structure model and a lexical emission one we facilitate estimating the two models separately. The following section discusses this. 3 Learning Translation Structure 3.1 Phrase-Pair Label Chart The input to our learning algorithm is a wordaligned parallel corpus. We consider as phrasepair spans those that obey the word-alignment constraints of (Koehn et al., 2003). For every training sentence-pair, we also input a chart containing one or more labels for every synchronous span, such as that of Figure 3. Each label describes different properties of the phrase pair (syntactic, semantic etc.), possibly in relation to its context, or supplying varying levels of abstraction (phrase-pair, determiner with noun, noun-phrase, sentence etc.). We aim to induce a recursive translation structure explaining the joint generation of the source and target sentence taking advantage of these phrase-pair span labels. For this work we employ the linguistically motivated labels of (Zollmann and Venugopal, 2006), albeit for the source language. 
Given a parse of the source sentence, each span is assigned the following kind of labels: Phrase-Pair All phrase-pairs are assigned the X label Constituent Source phrase is a constituent A Concatenation of Constituents Source phrase labelled A+B as a concatenation of constituents A and B, similarly for 3 constituents. Partial Constituents Categorial grammar (BarHillel, 1953) inspired labels A/B, A\B, indicating a partial constituent A missing constituent B right or left respectively. An important point is that we assign all applicable labels to every span. In this way, each label set captures the features of the source side’s parse-tree without being bounded by the actual parse structure, as well as provides a coarse to fine-grained view of the source phrase. 3.2 Grammar Extraction From every word-aligned sentence-pair and its label chart, we extract SCFG rules as those of Figure 2. Binary rules are extracted from adjoining synchronous spans up to the whole sentence-pair level, with the non-terminals of both left and right-hand side derived from the label names plus their reordering function (monotone, left/right swapping) in the span examined. A single unary rule per non-terminal NT generates the phrase-pair emitting NTP. Unary rules NTP →α / β generating the phrase-pair are created for all the labels covering it. While we label the phrase-pairs similarly to (Zollmann and Venugopal, 2006), the extracted grammar is rather different. We do not employ rules that are grounded to lexical context (‘gap’ rules), relying instead on the reordering-aware non-terminal set and related unary and binary rules. The result is a grammar which can both capture a rich array of translation phenomena based on linguistic and lexical grounds and explicitly model the balance between 645 SBAR WHNP WHNPP which der < SBAR\WHNP > VP/NPL VP/NPL P is ist NPR NP NPP the solution die L¨osung PP PPP to the problem f¨ur das Problem Figure 4: A derivation of a sentence fragment with the grammar of Figure 1. memorising long phrase-pairs and generalising over yet unseen ones, as shown in the next example. The derivation in Figure 4 illustrates some of the formalism’s features. A preference to reorder based on lexical content is applied for is / ist. Noun phrase NPR is recursively constructed with a preference to constitute the right branch of an order swapping nonterminal expansion. This is matched with VP/NPL which reorders in the opposite direction. The labels VP/NP and SBAR\WHNP allow linguistic syntax context to influence the lexical and reordering translation choices. Crucially, all these lexical, attachment and reordering preferences (as encoded in the model’s rules and probabilities) must be matched together to arrive at the analysis in Figure 4. 3.3 Parameter Estimation We estimate the parameters for the phrase-emission model p(e, f|σ) using Relative Frequency Estimation (RFE) on the label charts induced for the training sentence-pairs, after the labels have been augmented by the reordering indications. In the RFE estimate, every rule NTP →α / β receives a probability in proportion with the times that α / β was covered by the NT label. On the other hand, estimating the parameters under Maximum-Likelihood Estimation (MLE) for the latent translation structure model p(σ) is bound to overfit towards memorising whole sentence-pairs as discussed in (Mylonakis and Sima’an, 2010), with the resulting grammar estimate not being able to generalise past the training data. 
However, apart from overfitting towards long phrase-pairs, a grammar with millions of structural rules is also liable to overfit towards degenerate latent structures which, while fitting the training data well, have limited applicability to unseen sentences. We avoid both pitfalls by estimating the grammar probabilities with the Cross-Validating ExpectationMaximization algorithm (CV-EM) (Mylonakis and Sima’an, 2008; Mylonakis and Sima’an, 2010). CVEM is a cross-validating instance of the well known EM algorithm (Dempster et al., 1977). It works iteratively on a partition of the training data, climbing the likelihood of the training data while crossvalidating the latent variable values, considering for every training data point only those which can be produced by models built from the rest of the data excluding the current part. As a result, the estimation process simulates maximising future data likelihood, using the training data to directly aim towards strong generalisation of the estimate. For our probabilistic SCFG-based translation structure variable σ, implementing CV-EM boils down to a synchronous version of the Inside-Outside algorithm, modified to enforce the CV criterion. In this way we arrive at cross-validated ML estimate of the σ parameters while keeping the phrase-emission parameters of p(e, f|σ) fixed. The CV-criterion, apart from avoiding overfitting, results in discarding the structural rules which are only found in a single part of the training corpus, leading to a more compact grammar while still retaining millions of structural rules that are more hopeful to generalise. Unravelling the joint generative process, by modelling latent hierarchical structure separately from phrase-pair emission, allows us to concentrate our inference efforts towards the hidden, higher-level translation mechanism. 4 Experiments 4.1 Decoding Model The induced joint translation model can be used to recover arg maxe p(e|f), as it is equal to arg maxe p(e, f). We employ the induced probabilistic HR-SCFG G as the backbone of a log-linear, feature based translation model, with the derivation probability p(D) under the grammar estimate being 646 one of the features. This is augmented with a small number n of additional smoothing features φi for derivation rules r: (a) conditional phrase translation probabilities, (b) lexical phrase translation probabilities, (c) word generation penalty, and (d) a count of swapping reordering operations. Features (a), (b) and (c) are applicable to phrase-pair emission rules and features for both translation directions are used, while (d) is only triggered by structural rules. These extra features assess translation quality past the synchronous grammar derivation and learning general reordering or word emission preferences for the language pair. As an example, while our probabilistic HR-SCFG maintains a separate joint phrase-pair emission distribution per non-terminal, the smoothing features (a) above assess the conditional translation of surface phrases irrespective of any notion of recursive translation structure. The final feature is the language model score for the target sentence, mounting up to the following model used at decoding time, with the feature weights λ trained by Minimum Error Rate Training (MERT) (Och, 2003) on a development corpus. 
p(D ∗⇒⟨e, f⟩) ∝p(e)λlmpG(D)λG n Y i=1 Y r∈D φi(r)λi 4.2 Decoding Modifications We use a customised version of the Joshua SCFG decoder (Li et al., 2009) to translate, with the following modifications: Source Labels Constraints As for this work the phrase-pair labels used to extract the grammar are based on the linguistic analysis of the source side, we can construct the label chart for every input sentence from its parse. We subsequently use it to consider only derivations with synchronous spans which are covered by non-terminals matching one of the labels for those spans. This applies both for the nonterminals covering phrase-pairs as well as the higher level parts of the derivation. In this manner we not only constrain the translation hypotheses resulting in faster decoding time, but, more importantly, we may ground the hypotheses more closely to the available linguistic information of the source sentence. This is of particular interest as we move up the derivation tree, where an initial wrong choice below could propagate towards hypotheses wildly diverging from the input sentence’s linguistic annotation. Per Non-Terminal Pruning The decoder uses a combination of beam and cube-pruning (Huang and Chiang, 2007). As our grammar uses non-terminals in the hundreds of thousands, it is important not to prune away prematurely non-terminals covering smaller spans and to leave more options to be considered as we move up the derivation tree. For this, for every cell in the decoder’s chart, we keep a separate bin per non-terminal and prune together hypotheses leading to the same non-terminal covering a cell. This allows full derivations to be found for all input sentences, as well as avoids aggressive pruning at an early stage. Given the source label constraint discussed above, this does not increase running times or memory demands considerably as we allow only up to a few tens of nonterminals per span. Expected Counts Rule Pruning To compact the hierarchical structure part of the grammar prior to decoding, we prune rules that fail to accumulate 10−8 expected counts during the last CV-EM iteration. For English to German, this brings the structural rules from 15M down to 1.2M. Note that we do not prune the phrase-pair emitting rules. Overall, we consider this a much more informed pruning criterion than those based on probability values (that are not comparable across left-hand sides) or righthand side counts (frequent symbols need many more expansions than a highly specialised one). 4.3 Experimental Setting & Baseline We evaluate our method on four different language pairs with English as the source language and French, German, Dutch and Chinese as target. The data for the first three language pairs are derived from parliament proceedings sourced from the Europarl corpus (Koehn, 2005), with WMT07 development and test data for French and German. The data for the English to Chinese task is composed of parliament proceedings and news articles. For all language pairs we employ 200K and 400K sentence pairs for training, 2K for development and 2K for testing (single reference per source sentence). 
Both the baseline and our method decode 647 Training English to French German Dutch Chinese set size BLEU NIST BLEU NIST BLEU NIST BLEU NIST 200K josh-base 29.20 7.2123 18.65 5.8047 21.97 6.2469 22.34 6.5540 lts 29.43 7.2611** 19.10** 5.8714** 22.31* 6.2903* 23.67** 6.6595** 400K josh-base 29.58 7.3033 18.86 5.8818 22.25 6.2949 23.24 6.7402 lts 29.83 7.4000** 19.49** 5.9374** 22.92** 6.3727** 25.16** 6.9005** Table 1: Experimental results for training sets of 200K and 400K sentence pairs. Statistically significant score improvements from the baseline at the 95% confidence level are labelled with a single star, at the 99% level with two. with a 3-gram language model smoothed with modified Knesser-Ney discounting (Chen and Goodman, 1998), trained on around 1M sentences per target language. The parses of the source sentences employed by our system during training and decoding are created with the Charniak parser (Charniak, 2000). We compare against a state-of-the-art hierarchical translation (Chiang, 2005) baseline, based on the Joshua translation system under the default training and decoding settings (josh-base). Apart of evaluating against a state-of-the-art system, especially on the English-Chinese language pair, the comparison has an added interesting aspect. The heuristically trained baseline takes advantage of ‘gap rules’ to reorder based on lexical context cues, but makes very limited use of the hierarchical structure above the lexical surface. In contrast, our method induces a grammar with no such rules, relying on lexical content and the strength of a higher level translation structure instead. 4.4 Training & Decoding Details To train our Latent Translation Structure (LTS) system, we used the following settings. CV-EM crossvalidated on a 10-part partition of the training data and performed 10 iterations. The structural rule probabilities were initialised to uniform per lefthand side. The decoder does not employ any ‘glue grammar’ as is usual with hierarchical translation systems to limit reordering up to a certain cut-off length. Instead, we rely on our LTS grammar to reorder and construct the translation output up to the full sentence length. In summary, our system’s experimental pipeline is as follows. All input sentences are parsed and label charts are created from these parses. The Hierarchical Reordering SCFG is extracted and its parameters are estimated employing CV-EM. The structural rules of the estimate are pruned according to their expected counts and smoothing features are added to all rules. We train the feature weights under MERT and decode with the resulting log-linear model. The overall training and decoding setup is appealing also regarding computational demands. On an 8-core 2.3GHz system, training on 200K sentencepairs demands 4.5 hours while decoding runs on 25 sentences per minute. 4.5 Results Table 1 presents the results for the baseline and our method for the 4 language pairs, for training sets of both 200K and 400K sentence pairs. Our system (lts) outperforms the baseline for all 4 language pairs for both BLEU and NIST scores, by a margin which scales up to +1.92 BLEU points for English to Chinese translation when training on the 400K set. In addition, increasing the size of the training data from 200K to 400K sentence pairs widens the performance margin between the baseline and our system, in some cases considerably. 
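The significance stars in Table 1 are assigned with the test of Koehn (2004), i.e. paired bootstrap resampling over the test sentences. The paper does not reproduce the procedure, so the sketch below is only a typical implementation of that test; the `corpus_metric` callable (e.g. corpus-level BLEU computed on the resampled subset) is assumed to be supplied by the caller.

```python
import random
from typing import Callable, Sequence

def paired_bootstrap(sys_a: Sequence[str], sys_b: Sequence[str], refs: Sequence[str],
                     corpus_metric: Callable[[Sequence[str], Sequence[str]], float],
                     n_samples: int = 1000, seed: int = 0) -> float:
    """Estimate p(score(A) <= score(B)) by resampling test sentences with replacement;
    both systems are always evaluated on the same resampled subset (paired test)."""
    assert len(sys_a) == len(sys_b) == len(refs)
    rng = random.Random(seed)
    n, losses = len(refs), 0
    for _ in range(n_samples):
        idx = [rng.randrange(n) for _ in range(n)]
        a = corpus_metric([sys_a[i] for i in idx], [refs[i] for i in idx])
        b = corpus_metric([sys_b[i] for i in idx], [refs[i] for i in idx])
        if a <= b:
            losses += 1
    return losses / n_samples

# Intended usage (hypothetical names):
#   p = paired_bootstrap(lts_outputs, baseline_outputs, references, corpus_bleu)
#   improvement significant at the 95% level if p < 0.05, at the 99% level if p < 0.01
```

A low returned value means that, across resampled test sets, the second system rarely matches or beats the first, which is what the single- and double-starred entries in Table 1 report.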
All but one of the performance improvements are found to be statistically significant (Koehn, 2004) at the 95% confidence level, most of them also at the 99% level. We selected an array of target languages of increasing reordering complexity with English as source. Examining the results across the target languages, LTS performance gains increase the more challenging the sentence structure of the target language is in relation to the source’s, highlighted when translating to Chinese. Even for Dutch and German, which pose additional challenges such as compound words and morphology which we do not explicitly treat in the current system, LTS still delivers significant improvements in performance. Additionally, 648 System 200K 400K (a) lts-nolabels 22.50 24.24 lts 23.67** 25.16** (b) josh-base-lm4 23.81 24.77 lts-lm4 24.48** 26.35** Table 2: Additional experiments for English to Chinese translation examining (a) the impact of the linguistic annotations in the LTS system (lts), when compared with an instance not employing such annotations (lts-nolabels) and (b) decoding with a 4th-order language model (-lm4). BLEU scores for 200K and 400K training sentence pairs. the robustness of our system is exemplified by delivering significant performance increases for all language pairs. For the English to Chinese translation task, we performed further experiments along two axes. We first investigate the contribution of the linguistic annotations, by comparing our complete system (lts) with an otherwise identical implementation (lts-nolabels) which does not employ any linguistically motivated labels. The latter system then uses a labels chart as that of Figure 3, which however labels all phrase-pair spans solely with the generic X label. The results in Table 2(a) indicate that a large part of the performance improvement can be attributed to the use of the linguistic annotations extracted from the source parse trees, indicating the potential of the LTS system to take advantage of such additional annotations to deliver better translations. The second additional experiment relates to the impact of employing a stronger language model during decoding, which may increase performance but slows down decoding speed. Notably, as can be seen in Table 2(b), switching to a 4-gram LM results in performance gains for both the baseline and our system and while the margin between the two systems decreases, our system continues to deliver a considerable and significant improvement in translation BLEU scores. 5 Related Work In this work, we focus on the combination of learning latent structure with syntax and linguistic annotations, exploring the crossroads of machine learning, linguistic syntax and machine translation. Training a joint probability model was first discussed in (Marcu and Wong, 2002). We show that a translation system based on such a joint model can perform competitively in comparison with conditional probability models, when it is augmented with a rich latent hierarchical structure trained adequately to avoid overfitting. Earlier approaches for linguistic syntax-based translation such as (Yamada and Knight, 2001; Galley et al., 2006; Huang et al., 2006; Liu et al., 2006) focus on memorising and reusing parts of the structure of the source and/or target parse trees and constraining decoding by the input parse tree. 
In contrast to this approach, we choose to employ linguistic annotations in the form of unambiguous synchronous span labels, while discovering ambiguous translation structure taking advantage of them. Later work (Marton and Resnik, 2008; Venugopal et al., 2009; Chiang et al., 2009) takes a more flexible approach, influencing translation output using linguistically motivated features, or features based on source-side linguistically-guided latent syntactic categories (Huang et al., 2010). A feature-based approach and ours are not mutually exclusive, as we also employ a limited set of features next to our trained model during decoding. We find augmenting our system with a more extensive feature set an interesting research direction for the future. An array of recent work (Chiang, 2010; Zhang et al., 2008; Liu et al., 2009) sets off to utilise source and target syntax for translation. While for this work we constrain ourselves to source language syntax annotations, our method can be directly applied to employ labels taking advantage of linguistic annotations from both sides of translation. The decoding constraints of section 4.2 can then still be applied on the source part of hybrid source-target labels. For the experiments in this paper we employ a label set similar to the non-terminals set of (Zollmann and Venugopal, 2006). However, the synchronous grammars we learn share few similarities with those that they heuristically extract. The HR-SCFG we adopt allows capturing more complex reordering phenomena and, in contrast to both (Chiang, 2005; Zollmann and Venugopal, 2006), is not exposed to the issues highlighted in section 2.1. Nevertheless, our results underline the capacity of linguistic anno649 tations similar to those of (Zollmann and Venugopal, 2006) as part of latent translation variables. Most of the aforementioned work does concentrate on learning hierarchical, linguistically motivated translation models. Cohn and Blunsom (2009) sample rules of the form proposed in (Galley et al., 2004) from a Bayesian model, employing Dirichlet Process priors favouring smaller rules to avoid overfitting. Their grammar is however also based on the target parse-tree structure, with their system surpassing a weak baseline by a small margin. In contrast to the Bayesian approach which imposes external priors to lead estimation away from degenerate solutions, we take a data-driven approach to arrive to estimates which generalise well. The rich linguistically motivated latent variable learnt by our method delivers translation performance that compares favourably to a state-of-the-art system. Mylonakis and Sima’an (2010) also employ the CV-EM algorithm to estimate the parameters of an SCFG, albeit a much simpler one based on a handful of non-terminals. In this work we employ some of their grammar design principles for an immensely more complex grammar with millions of hierarchical latent structure rules and show how such grammar can be learnt and applied taking advantage of source language linguistic annotations. 6 Conclusions In this work we contribute a method to learn and apply latent hierarchical translation structure. To this end, we take advantage of source-language linguistic annotations to motivate instead of constrain the translation process. 
An input chart over phrasepair spans, with each cell filled with multiple linguistically motivated labels, is coupled with the HRSCFG design to arrive at a rich synchronous grammar with millions of structural rules and the capacity to capture complex linguistically conditioned translation phenomena. We address overfitting issues by cross-validating climbing the likelihood of the training data and propose solutions to increase the efficiency and accuracy of decoding. An interesting aspect of our work is delivering competitive performance for difficult language pairs such as English-Chinese with a joint probability generative model and an SCFG without ‘gap rules’. Instead of employing hierarchical phrase-pairs, we invest in learning the higher-order hierarchical synchronous structure behind translation, up to the full sentence length. While these choices and the related results challenge current MT research trends, they are not mutually exclusive with them. Future work directions include investigating the impact of hierarchical phrases for our models as well as any gains from additional features in the log-linear decoding model. Smoothing the HR-SCFG grammar estimates could prove a possible source of further performance improvements. Learning translation and reordering behaviour with respect to linguistic cues is facilitated in our approach by keeping separate phrase-pair emission distributions per emitting nonterminal and reordering pattern, while the employment of the generic X non-terminals already allows backing off to more coarse-grained rules. Nevertheless, we still believe that further smoothing of these sparse distributions, e.g. by interpolating them with less sparse ones, could in the future lead to an additional increase in translation quality. Finally, we discuss in this work how our method can already utilise hundreds of thousands of phrasepair labels and millions of structural rules. A further promising direction is broadening this set with labels taking advantage of both source and targetlanguage linguistic annotation or categories exploring additional phrase-pair properties past the parse trees such as semantic annotations. Acknowledgments Both authors are supported by a VIDI grant (nr. 639.022.604) from The Netherlands Organization for Scientific Research (NWO). The authors would like to thank Maxim Khalilov for helping with experimental data and Andreas Zollmann and the anonymous reviewers for their valuable comments. References Yehoshua Bar-Hillel. 1953. A quasi-arithmetical notation for syntactic description. Language, 29(1):47–58. Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of the North American Association for Computational Linguistics (HLT/NAACL), Seattle, Washington, USA, April. 650 Stanley Chen and Joshua Goodman. 1998. An empirical study of smoothing techniques for language modeling. Technical Report TR-10-98, Harvard University, August. David Chiang, Kevin Knight, and Wei Wang. 2009. 11,001 new features for statistical machine translation. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 218–226, Boulder, Colorado, June. Association for Computational Linguistics. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of ACL 2005, pages 263–270. David Chiang. 2010. Learning to translate with source and target syntax. 
In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1443–1452, Uppsala, Sweden, July. Association for Computational Linguistics. Trevor Cohn and Phil Blunsom. 2009. A Bayesian model of syntax-directed tree to string grammar induction. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 352–361, Singapore, August. Association for Computational Linguistics. A.P. Dempster, N.M. Laird, and D.B. Rubin. 1977. Maximum likelihood from incomplete data via the em algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38. John DeNero, Dan Gillick, James Zhang, and Dan Klein. 2006. Why generative phrase models underperform surface heuristics. In Proceedings on the Workshop on Statistical Machine Translation, pages 31–38, New York City. Association for Computational Linguistics. Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a translation rule? In Daniel Marcu Susan Dumais and Salim Roukos, editors, HLT-NAACL 2004: Main Proceedings, pages 273–280, Boston, Massachusetts, USA, May. Association for Computational Linguistics. Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 961– 968, Sydney, Australia, July. Association for Computational Linguistics. Liang Huang and David Chiang. 2007. Forest rescoring: Faster decoding with integrated language models. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 144–151, Prague, Czech Republic, June. Association for Computational Linguistics. Liang Huang, Kevin Knight, and Aravind Joshi. 2006. Statistical syntax-directed translation with extended domain of locality. In Proceedings of the 7th Biennial Conference of the Association for Machine Translation in the Americas (AMTA), Boston, MA, USA. Zhongqiang Huang, Martin Cmejrek, and Bowen Zhou. 2010. Soft syntactic constraints for hierarchical phrase-based translation using latent syntactic distributions. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 138–147, Cambridge, MA, October. Association for Computational Linguistics. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In HLTNAACL 2003. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Dekang Lin and Dekai Wu, editors, Proceedings of EMNLP 2004, pages 388–395, Barcelona, Spain, July. Association for Computational Linguistics. Philipp Koehn. 2005. Europarl: A Parallel Corpus for Statistical Machine Translation. In MT Summit 2005. Zhifei Li, Chris Callison-Burch, Chris Dyer, Sanjeev Khudanpur, Lane Schwartz, Wren Thornton, Jonathan Weese, and Omar Zaidan. 2009. Joshua: An open source toolkit for parsing-based machine translation. In Proceedings of the Fourth Workshop on Statistical Machine Translation, pages 135–139, Athens, Greece, March. Association for Computational Linguistics. Yang Liu, Qun Liu, and Shouxun Lin. 2006. Tree-tostring alignment template for statistical machine translation. 
In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 609–616, Sydney, Australia, July. Association for Computational Linguistics. Yang Liu, Yajuan L¨u, and Qun Liu. 2009. Improving tree-to-tree translation with packed forests. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 558–566, Suntec, Singapore, August. Association for Computational Linguistics. Daniel Marcu and William Wong. 2002. A phrase-based, joint probability model for statistical machine translation. In Proceedings of Empirical methods in natural language processing, pages 133–139. Association for Computational Linguistics. Yuval Marton and Philip Resnik. 2008. Soft syntactic constraints for hierarchical phrased-based translation. In Proceedings of ACL-08: HLT, pages 1003–1011, 651 Columbus, Ohio, June. Association for Computational Linguistics. Markos Mylonakis and Khalil Sima’an. 2008. Phrase translation probabilities with ITG priors and smoothing as learning objective. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 630–639, Honolulu, USA, October. Markos Mylonakis and Khalil Sima’an. 2010. Learning probabilistic synchronous CFGs for phrase-based translation. In Fourteenth Conference on Computational Natural Language Learning, Uppsala, Sweden, July. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 160–167, Sapporo, Japan, July. Association for Computational Linguistics. Chris Quirk, Arul Menezes, and Colin Cherry. 2005. Dependency treelet translation: Syntactically informed phrasal smt. In Proceedings of 43rd Annual Meeting of the Association for Computational Linguistics, Ann Arbor, Michigan, USA, June. Ashish Venugopal, Andreas Zollmann, Noah A. Smith, and Stephan Vogel. 2009. Preference grammars: Softening syntactic constraints to improve statistical machine translation. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 236–244, Boulder, Colorado, June. Association for Computational Linguistics. Wei Wang, Jonathan May, Kevin Knight, and Daniel Marcu. 2010. Re-structuring, re-labeling, and realigning for syntax-based machine translation. Computational Linguistics, 36(2):247–277. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377–403. Kenji Yamada and Kevin Knight. 2001. A syntax-based statistical translation model. In Proceedings of 39th Annual Meeting of the Association for Computational Linguistics, pages 523–530, Toulouse, France, July. Association for Computational Linguistics. Min Zhang, Hongfei Jiang, Aiti Aw, Haizhou Li, Chew Lim Tan, and Sheng Li. 2008. A tree sequence alignment-based tree-to-tree translation model. In Proceedings of ACL-08: HLT, pages 559–567, Columbus, Ohio, June. Association for Computational Linguistics. Andreas Zollmann and Ashish Venugopal. 2006. Syntax augmented machine translation via chart parsing. In Proceedings on the Workshop on Statistical Machine Translation, pages 138–141, New York City, June. Association for Computational Linguistics. 652
|
2011
|
65
|
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 653–662, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Phrase-Based Translation Model for Question Retrieval in Community Question Answer Archives Guangyou Zhou, Li Cai, Jun Zhao∗, and Kang Liu National Laboratory of Pattern Recognition Institute of Automation, Chinese Academy of Sciences 95 Zhongguancun East Road, Beijing 100190, China {gyzhou,lcai,jzhao,kliu}@nlpr.ia.ac.cn Abstract Community-based question answer (Q&A) has become an important issue due to the popularity of Q&A archives on the web. This paper is concerned with the problem of question retrieval. Question retrieval in Q&A archives aims to find historical questions that are semantically equivalent or relevant to the queried questions. In this paper, we propose a novel phrase-based translation model for question retrieval. Compared to the traditional word-based translation models, the phrasebased translation model is more effective because it captures contextual information in modeling the translation of phrases as a whole, rather than translating single words in isolation. Experiments conducted on real Q&A data demonstrate that our proposed phrasebased translation model significantly outperforms the state-of-the-art word-based translation model. 1 Introduction Over the past few years, large scale question and answer (Q&A) archives have become an important information resource on the Web. These include the traditional Frequently Asked Questions (FAQ) archives and the emerging community-based Q&A services, such as Yahoo! Answers1, Live QnA2, and Baidu Zhidao3. ∗Correspondence author: [email protected] 1http://answers.yahoo.com/ 2http://qna.live.com/ 3http://zhidao.baidu.com/ Community-based Q&A services can directly return answers to the queried questions instead of a list of relevant documents, thus provide an effective alternative to the traditional adhoc information retrieval. To make full use of the large scale archives of question-answer pairs, it is critical to have functionality helping users to retrieve historical answers (Duan et al., 2008). Therefore, it is a meaningful task to retrieve the questions that are semantically equivalent or relevant to the queried questions. For example in Table 1, given question Q1, Q2 can be returned and their answers will then be used to answer Q1 because the answer of Q2 is expected to partially satisfy the queried question Q1. This is what we called question retrieval in this paper. The major challenge for Q&A retrieval, as for Query: Q1: How to get rid of stuffy nose? Expected: Q2: What is the best way to prevent a cold? Not Expected: Q3: How do I air out my stuffy room? Q4: How do you make a nose bleed stop quicker? Table 1: An example on question retrieval most information retrieval models, such as vector space model (VSM) (Salton et al., 1975), Okapi model (Robertson et al., 1994), language model (LM) (Ponte and Croft, 1998), is the lexical gap (or lexical chasm) between the queried questions and the historical questions in the archives (Jeon et al., 2005; Xue et al., 2008). For example in Table 1, Q1 and Q2 are two semantically similar questions, but they have very few words in common. This prob653 lem is more serious for Q&A retrieval, since the question-answer pairs are usually short and there is little chance of finding the same content expressed using different wording (Xue et al., 2008). 
To solve the lexical gap problem, most researchers regarded the question retrieval task as a statistical machine translation problem by using IBM model 1 (Brown et al., 1993) to learn the word-to-word translation probabilities (Berger and Lafferty, 1999; Jeon et al., 2005; Xue et al., 2008; Lee et al., 2008; Bernhard and Gurevych, 2009). Experiments consistently reported that the word-based translation models could yield better performance than the traditional methods (e.g., VSM. Okapi and LM). However, all these existing approaches are considered to be context independent in that they do not take into account any contextual information in modeling word translation probabilities. For example in Table 1, although neither of the individual word pair (e.g., “stuffy”/“cold” and “nose”/“cold”) might have a high translation probability, the sequence of words “stuffy nose” can be easily translated from a single word “cold” in Q2 with a relative high translation probability. In this paper, we argue that it is beneficial to capture contextual information for question retrieval. To this end, inspired by the phrase-based statistical machine translation (SMT) systems (Koehn et al., 2003; Och and Ney, 2004), we propose a phrasebased translation model (P-Trans) for question retrieval, and we assume that question retrieval should be performed at the phrase level. This model learns the probability of translating one sequence of words (e.g., phrase) into another sequence of words, e.g., translating a phrase in a historical question into another phrase in a queried question. Compared to the traditional word-based translation models that account for translating single words in isolation, the phrase-based translation model is potentially more effective because it captures some contextual information in modeling the translation of phrases as a whole. More precise translation can be determined for phrases than for words. It is thus reasonable to expect that using such phrase translation probabilities as ranking features is likely to improve the question retrieval performance, as we will show in our experiments. Unlike the general natural language translation, the parallel sentences between questions and answers in community-based Q&A have very different lengths, leaving many words in answers unaligned to any word in queried questions. Following (Berger and Lafferty, 1999), we restrict our attention to those phrase translations consistent with a good wordlevel alignment. Specifically, we make the following contributions: • we formulate the question retrieval task as a phrase-based translation problem by modeling the contextual information (in Section 3.1). • we linearly combine the phrase-based translation model for the question part and answer part (in Section 3.2). • we propose a linear ranking model framework for question retrieval in which different models are incorporated as features because the phrasebased translation model cannot be interpolated with a unigram language model (in Section 3.3). • finally, we conduct the experiments on community-based Q&A data for question retrieval. The results show that our proposed approach significantly outperforms the baseline methods (in Section 4). The remainder of this paper is organized as follows. Section 2 introduces the existing state-of-theart methods. Section 3 describes our phrase-based translation model for question retrieval. Section 4 presents the experimental results. In Section 5, we conclude with ideas for future research. 
2 Preliminaries 2.1 Language Model The unigram language model has been widely used for question retrieval on community-based Q&A data (Jeon et al., 2005; Xue et al., 2008; Cao et al., 2010). To avoid zero probability, we use JelinekMercer smoothing (Zhai and Lafferty, 2001) due to its good performance and cheap computational cost. So the ranking function for the query likelihood language model with Jelinek-Mercer smoothing can be 654 written as: Score(q, D) = ∏ w∈q (1 −λ)Pml(w|D) + λPml(w|C) (1) Pml(w|D) = #(w, D) |D| , Pml(w|C) = #(w, C) |C| (2) where q is the queried question, D is a document, C is background collection, λ is smoothing parameter. #(t, D) is the frequency of term t in D, |D| and |C| denote the length of D and C respectively. 2.2 Word-Based Translation Model Previous work (Berger et al., 2000; Jeon et al., 2005; Xue et al., 2008) consistently reported that the wordbased translation models (Trans) yielded better performance than the traditional methods (VSM, Okapi and LM) for question retrieval. These models exploit the word translation probabilities in a language modeling framework. Following Jeon et al. (2005) and Xue et al. (2008), the ranking function can be written as: Score(q, D) = ∏ w∈q (1−λ)Ptr(w|D)+λPml(w|C) (3) Ptr(w|D) = ∑ t∈D P(w|t)Pml(t|D), Pml(t|D) = #(t, D) |D| (4) where P(w|t) denotes the translation probability from word t to word w. 2.3 Word-Based Translation Language Model Xue et al. (2008) proposed to linearly mix two different estimations by combining language model and word-based translation model into a unified framework, called TransLM. The experiments show that this model gains better performance than both the language model and the word-based translation model. Following Xue et al. (2008), this model can be written as: Score(q, D) = ∏ w∈q (1 −λ)Pmx(w|D) + λPml(w|C) (5) Pmx(w|D) = α ∑ t∈D P(w|t)Pml(t|D)+(1−α)Pml(w|D) (6) D: … for good cold home remedies … document E: [for, good, cold, home remedies] segmentation F: [for1, best2, stuffy nose3, home remedy4] translation M: (1Ƥ3⧎2Ƥ1⧎3Ƥ4⧎4Ƥ2) permutation q: best home remedy for stuffy nose queried question Figure 1: Example describing the generative procedure of the phrase-based translation model. 3 Our Approach: Phrase-Based Translation Model for Question Retrieval 3.1 Phrase-Based Translation Model Phrase-based machine translation models (Koehn et al., 2003; D. Chiang, 2005; Och and Ney, 2004) have shown superior performance compared to word-based translation models. In this paper, the goal of phrase-based translation model is to translate a document4 D into a queried question q. Rather than translating single words in isolation, the phrase-based model translates one sequence of words into another sequence of words, thus incorporating contextual information. For example, we might learn that the phrase “stuffy nose” can be translated from “cold” with relative high probability, even though neither of the individual word pairs (e.g., “stuffy”/“cold” and “nose”/“cold”) might have a high word translation probability. Inspired by the work of (Sun et al., 2010; Gao et al., 2010), we assume the following generative process: first the document D is broken into K non-empty word sequences t1, . . . , tK, then each t is translated into a new non-empty word sequence w1, . . . , wK, and finally these phrases are permutated and concatenated to form the queried questions q, where t and w denote the phrases or consecutive sequence of words. 
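The generative story of Figure 1 can be traced step by step in the toy sketch below. It is purely illustrative: the segmentation, the phrase translations and the permutation are hard-coded to reproduce the figure's example and carry no probabilities.

```python
# Document fragment D and queried question q from the Figure 1 example.
D = "for good cold home remedies"

# Step 1: segmentation E of D into K non-empty phrases.
E = [("for",), ("good",), ("cold",), ("home", "remedies")]

# Step 2: each document phrase t_k is translated into a query phrase w_k,
# giving the bi-phrases (t_k, w_k): for->for, good->best, cold->stuffy nose,
# home remedies->home remedy.
F = [("for",), ("best",), ("stuffy", "nose"), ("home", "remedy")]

# Step 3: permutation M reorders the translated phrases; M[k] is the index in F
# of the k-th phrase of the queried question, read left to right.
M = [1, 3, 0, 2]

q = " ".join(" ".join(F[k]) for k in M)
print(q)   # -> "best home remedy for stuffy nose"
```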
To formulate this generative process, let E denote the segmentation of D into K phrases t1, . . . , tK, and let F denote the K translation phrases w1, . . . , wK −we refer to these (ti, wi) pairs as bi-phrases. Finally, let M denote a permutation of K elements representing the final reordering step. Figure 1 describes an example of the generative procedure. Next let us place a probability distribution over rewrite pairs. Let B(D, q) denote the set of E, 4In this paper, a document has the same meaning as a historical question-answer pair in the Q&A archives. 655 F, M triples that translate D into q. Here we assume a uniform probability over segmentations, so the phrase-based translation model can be formulated as: P(q|D) ∝ ∑ (E,F,M)∈ B(D,q) P(F|D, E) · P(M|D, E, F) (7) As is common practice in SMT, we use the maximum approximation to the sum: P(q|D) ≈ max (E,F,M)∈ B(D,q) P(F|D, E) · P(M|D, E, F) (8) Although we have defined a generative model for translating D into q, our goal is to calculate the ranking score function over existing q and D, rather than generating new queried questions. Equation (8) cannot be used directly for document ranking because q and D are often of very different lengths, leaving many words in D unaligned to any word in q. This is the key difference between the communitybased question retrieval and the general natural language translation. As pointed out by Berger and Lafferty (1999) and Gao et al. (2010), document-query translation requires a distillation of the document, while translation of natural language tolerates little being thrown away. Thus we attempt to extract the key document words that form the distillation of the document, and assume that a queried question is translated only from the key document words. In this paper, the key document words are identified via word alignment. We introduce the “hidden alignments” A = a1 . . . aj . . . aJ, which describe the mapping from a word position j in queried question to a document word position i = aj. The different alignment models we present provide different decompositions of P(q, A|D). We assume that the position of the key document words are determined by the Viterbi alignment, which can be obtained using IBM model 1 as follows: ˆA = arg max A P(q, A|D) = arg max A { P(J|I) J ∏ j=1 P(wj|taj) } = [ arg max aj P(wj|taj) ]J j=1 (9) Given ˆA, when scoring a given Q&A pair, we restrict our attention to those E, F, M triples that are consistent with ˆA, which we denote as B(D, q, ˆA). Here, consistency requires that if two words are aligned in ˆA, then they must appear in the same biphrase (ti, wi). Once the word alignment is fixed, the final permutation is uniquely determined, so we can safely discard that factor. Thus equation (8) can be written as: P(q|D) ≈ max (E,F,M)∈B(D,q, ˆ A) P(F|D, E) (10) For the sole remaining factor P(F|D, E), we make the assumption that a segmented queried question F = w1, . . . , wK is generated from left to right by translating each phrase t1, . . . , tK independently: P(F|D, E) = K ∏ k=1 P(wk|tk) (11) where P(wk|tk) is a phrase translation probability, the estimation will be described in Section 3.3. To find the maximum probability assignment efficiently, we use a dynamic programming approach, somewhat similar to the monotone decoding algorithm described in (Och, 2002). 
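Because IBM Model 1 factorises over query positions, the Viterbi alignment of equation (9) reduces to an independent argmax per query word. A minimal sketch follows, assuming the word translation table P(w|t) has already been trained; all probabilities in the example are made up. The document positions that receive at least one link are the "key document words" forming the distillation.

```python
from typing import Dict, List, Tuple

def viterbi_alignment(query: List[str], doc: List[str],
                      p_w_given_t: Dict[Tuple[str, str], float]) -> List[int]:
    """Equation (9): align each query word w_j to the document position i that
    maximises P(w_j | t_i); under IBM Model 1 these choices are independent."""
    alignment = []
    for w in query:
        best_i = max(range(len(doc)), key=lambda i: p_w_given_t.get((w, doc[i]), 0.0))
        alignment.append(best_i)
    return alignment

# Toy example built from the Q1/Q2 pair of Table 1 (probabilities are invented).
doc = ["what", "is", "the", "best", "way", "to", "prevent", "a", "cold"]
query = ["how", "to", "get", "rid", "of", "stuffy", "nose"]
table = {("how", "what"): 0.2, ("to", "to"): 0.9, ("get", "prevent"): 0.1,
         ("rid", "prevent"): 0.1, ("of", "a"): 0.05,
         ("stuffy", "cold"): 0.3, ("nose", "cold"): 0.25}
links = viterbi_alignment(query, doc, table)
key_positions = sorted(set(links))            # positions of the key document words
print(links, [doc[i] for i in key_positions])
```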
We define αj to be the probability of the most likely sequence of phrases covering the first j words in a queried question, then the probability can be calculated using the following recursion: (1) Initialization: α0 = 1 (12) (2) Induction: αj = ∑ j′<j,w=wj′+1...wj { αj′P(w|tw) } (13) (3) Total: P(q|D) = αJ (14) 3.2 Phrase-Based Translation Model for Question Part and Answer Part In Q&A, a document D is decomposed into (¯q, ¯a), where ¯q denotes the question part of the historical question in the archives and ¯a denotes the answer part. Although it has been shown that doing Q&A retrieval based solely on the answer part does not perform well (Jeon et al., 2005; Xue et al., 2008), the answer part should provide additional evidence about relevance and, therefore, it should be combined with the estimation based on the question part. 656 In this combined model, P(q|¯q) and P(q|¯a) are calculated with equations (12) to (14). So P(q|D) will be written as: P(q|D) = µ1P(q|¯q) + µ2P(q|¯a) (15) where µ1 + µ2 = 1. In equation (15), the relative importance of question part and answer part is adjusted through µ1 and µ2. When µ1 = 1, the retrieval model is based on phrase-based translation model for the question part. When µ2 = 1, the retrieval model is based on phrase-based translation model for the answer part. 3.3 Parameter Estimation 3.3.1 Parallel Corpus Collection In Q&A archives, question-answer pairs can be considered as a type of parallel corpus, which is used for estimating the translation probabilities. Unlike the bilingual machine translation, the questions and answers in a Q&A archive are written in the same language, the translation probability can be calculated through setting either as the source and the other as the target. In this paper, P(¯a|¯q) is used to denote the translation probability with the question as the source and the answer as the target. P(¯q|¯a) is used to denote the opposite configuration. For a given word or phrase, the related words or phrases differ when it appears in the question or in the answer. Following Xue et al. (2008), a pooling strategy is adopted. First, we pool the question-answer pairs used to learn P(¯a|¯q) and the answer-question pairs used to learn P(¯q|¯a), and then use IBM model 1 (Brown et al., 1993) to learn the combined translation probabilities. Suppose we use the collection {(¯q, ¯a)1, . . . , (¯q, ¯a)m} to learn P(¯a|¯q) and use the collection {(¯a, ¯q)1, . . . , (¯a, ¯q)m} to learn P(¯q|¯a), then {(¯q, ¯a)1, . . . , (¯q, ¯a)m, (¯a, ¯q)1, . . . , (¯a, ¯q)m} is used here to learn the combination translation probability Ppool(wi|tj). 3.3.2 Parallel Corpus Preprocessing Unlike the bilingual parallel corpus used in SMT, our parallel corpus is collected from Q&A archives, which is more noisy. Directly using the IBM model 1 can be problematic, it is possible for translation model to contain “unnecessary” translations (Lee et al., 2008). In this paper, we adopt a variant of TextRank algorithm (Mihalcea and Tarau, 2004) to identify and eliminate unimportant words from parallel corpus, assuming that a word in a question or answer is unimportant if it holds a relatively low significance in the parallel corpus. Following (Lee et al., 2008), the ranking algorithm proceeds as follows. First, all the words in a given document are added as vertices in a graph G. Then edges are added between words if the words co-occur in a fixed-sized window. The number of co-occurrences becomes the weight of an edge. 
When the graph is constructed, the score of each vertex is initialized as 1, and the PageRankbased ranking algorithm is run on the graph iteratively until convergence. The TextRank score of a word w in document D at kth iteration is defined as follows: Rk w,D = (1 −d) + d · ∑ ∀j:(i,j)∈G ei,j ∑ ∀l:(j,l)∈G ej,l Rk−1 w,D (16) where d is a damping factor usually set to 0.85, and ei,j is an edge weight between i and j. We use average TextRank score as threshold: words are removed if their scores are lower than the average score of all words in a document. 3.3.3 Translation Probability Estimation After preprocessing the parallel corpus, we will calculate P(w|t), following the method commonly used in SMT (Koehn et al., 2003; Och, 2002) to extract bi-phrases and estimate their translation probabilities. First, we learn the word-to-word translation probability using IBM model 1 (Brown et al., 1993). Then, we perform Viterbi word alignment according to equation (9). Finally, the bi-phrases that are consistent with the word alignment are extracted using the heuristics proposed in (Och, 2002). We set the maximum phrase length to five in our experiments. After gathering all such bi-phrases from the training data, we can estimate conditional relative frequency estimates without smoothing: P(w|t) = N(t, w) N(t) (17) where N(t, w) is the number of times that t is aligned to w in training data. These estimates are 657 source stuffy nose internet explorer 1 stuffy nose internet explorer 2 cold ie 3 stuffy internet browser 4 sore throat explorer 5 sneeze browser Table 2: Phrase translation probability examples. Each column shows the top 5 target phrases learned from the word-aligned question-answer pairs. useful for contextual lexical selection with sufficient training data, but can be subject to data sparsity issues (Sun et al., 2010; Gao et al., 2010). An alternate translation probability estimate not subject to data sparsity is the so-called lexical weight estimate (Koehn et al., 2003). Let P(w|t) be the word-toword translation probability, and let A be the word alignment between w and t. Here, the word alignment contains (i, j) pairs, where i ∈1 . . . |w| and j ∈0 . . . |t|, with 0 indicating a null word. Then we use the following estimate: Pt(w|t, A) = |w| ∏ i=1 1 |{j|(j, i) ∈A}| ∑ ∀(i,j)∈A P(wi|tj) (18) We assume that for each position in w, there is either a single alignment to 0, or multiple alignments to non-zero positions in t. In fact, equation (18) computes a product of per-word translation scores; the per-word scores are the averages of all the translations for the alignment links of that word. The word translation probabilities are calculated using IBM 1, which has been widely used for question retrieval (Jeon et al., 2005; Xue et al., 2008; Lee et al., 2008; Bernhard and Gurevych, 2009). These wordbased scores of bi-phrases, though not as effective in contextual selection, are more robust to noise and sparsity. A sample of the resulting phrase translation examples is shown in Table 2, where the top 5 target phrases are translated from the source phrases according to the phrase-based translation model. For example, the term “explorer” used alone, most likely refers to a person who engages in scientific exploration, while the phrase “internet explorer” has a very different meaning. 3.4 Ranking Candidate Historical Questions Unlike the word-based translation models, the phrase-based translation model cannot be interpolated with a unigram language model. 
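Before moving to the ranking framework, the two phrase-translation estimates of Section 3.3.3 can be written down compactly. The sketch below assumes bi-phrase counts and an IBM Model 1 word table are already available; as stated in the text, every target position is assumed to be aligned either to the null word or to one or more source positions.

```python
from typing import Dict, List, Set, Tuple

NULL = "<null>"

def relative_freq(counts: Dict[Tuple[str, str], int]) -> Dict[Tuple[str, str], float]:
    """Equation (17): P(w|t) = N(t, w) / N(t), no smoothing.  Keys are (t, w) pairs."""
    totals: Dict[str, int] = {}
    for (t, _), n in counts.items():
        totals[t] = totals.get(t, 0) + n
    return {(t, w): n / totals[t] for (t, w), n in counts.items()}

def lexical_weight(w: List[str], t: List[str], align: Set[Tuple[int, int]],
                   p_word: Dict[Tuple[str, str], float]) -> float:
    """Equation (18): for each position i of w, average P(w_i | t_j) over its
    alignment links (j = 0 stands for the null word), then multiply over positions."""
    score = 1.0
    for i in range(1, len(w) + 1):
        links = [j for (ii, j) in align if ii == i]     # assumed non-empty (see text)
        avg = sum(p_word.get((t[j - 1] if j > 0 else NULL, w[i - 1]), 0.0)
                  for j in links) / len(links)
        score *= avg
    return score

# Toy bi-phrase counts for the source word "cold".
print(relative_freq({("cold", "stuffy nose"): 3, ("cold", "cold"): 7}))
# "cold" -> "stuffy nose": both target words aligned to the single source word.
p_word = {("cold", "stuffy"): 0.3, ("cold", "nose"): 0.25}
print(lexical_weight(["stuffy", "nose"], ["cold"], {(1, 1), (2, 1)}, p_word))  # 0.075
```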
Following (Sun et al., 2010; Gao et al., 2010), we resort to a linear ranking framework for question retrieval in which different models are incorporated as features. We consider learning a relevance function of the following general, linear form: Score(q, D) = θT · Φ(q, D) (19) where the feature vector Φ(q, D) is an arbitrary function that maps (q, D) to a real value, i.e., Φ(q, D) ∈R. θ is the corresponding weight vector, we optimize this parameter for our evaluation metrics directly using the Powell Search algorithm (Paul et al., 1992) via cross-validation. The features used in this paper are as follows: • Phrase translation features (PT): ΦPT (q, D, A) = logP(q|D), where P(q|D) is computed using equations (12) to (15), and the phrase translation probability P(w|t) is estimated using equation (17). • Inverted Phrase translation features (IPT): ΦIPT (D, q, A) = logP(D|q), where P(D|q) is computed using equations (12) to (15) except that we set µ2 = 0 in equation (15), and the phrase translation probability P(w|t) is estimated using equation (17). • Lexical weight feature (LW): ΦLW (q, D, A) = logP(q|D), here P(q|D) is computed by equations (12) to (15), and the phrase translation probability is computed as lexical weight according to equation (18). • Inverted Lexical weight feature (ILW): ΦILW (D, q, A) = logP(D|q), here P(D|q) is computed by equations (12) to (15) except that we set µ2 = 0 in equation (15), and the phrase translation probability is computed as lexical weight according to equation (18). • Phrase alignment features (PA): ΦPA(q, D, B) = ∑K 2 |ak −bk−1 −1|, where B is a set of K bi-phrases, ak is the start position of the phrase in D that was translated 658 into the kth phrase in queried question, and bk−1 is the end position of the phrase in D that was translated into the (k −1)th phrase in queried question. The feature, inspired by the distortion model in SMT (Koehn et al., 2003), models the degree to which the queried phrases are reordered. For all possible B, we only compute the feature value according to the Viterbi alignment, ˆB = arg maxB P(q, B|D). We find ˆB using the Viterbi algorithm, which is almost identical to the dynamic programming recursion of equations (12) to (14), except that the sum operator in equation (13) is replaced with the max operator. • Unaligned word penalty features (UWP): ΦUWP (q, D), which is defined as the ratio between the number of unaligned words and the total number of words in queried questions. • Language model features (LM): ΦLM(q, D, A) = logPLM(q|D), where PLM(q|D) is the unigram language model with Jelinek-Mercer smoothing defined by equations (1) and (2). • Word translation features (WT): ΦWT (q, D) = logP(q|D), where P(q|D) is the word-based translation model defined by equations (3) and (4). 4 Experiments 4.1 Data Set and Evaluation Metrics We collect the questions from Yahoo! Answers and use the getByCategory function provided in Yahoo! Answers API5 to obtain Q&A threads from the Yahoo! site. More specifically, we utilize the resolved questions under the top-level category at Yahoo! Answers, namely “Computers & Internet”. The resulting question repository that we use for question retrieval contains 518,492 questions. To learn the translation probabilities, we use about one million question-answer pairs from another data set.6 In order to create the test set, we randomly select 300 questions for this category, denoted as 5http://developer.yahoo.com/answers 6The Yahoo! 
Webscope dataset Yahoo answers comprehensive questions and answers version 1.0.2, available at http://reseach.yahoo.com/Academic Relations. “CI TST”. To obtain the ground-truth of question retrieval, we employ the Vector Space Model (VSM) (Salton et al., 1975) to retrieve the top 20 results and obtain manual judgements. The top 20 results don’t include the queried question itself. Given a returned result by VSM, an annotator is asked to label it with “relevant” or “irrelevant”. If a returned result is considered semantically equivalent to the queried question, the annotator will label it as “relevant”; otherwise, the annotator will label it as “irrelevant”. Two annotators are involved in the annotation process. If a conflict happens, a third person will make judgement for the final result. In the process of manually judging questions, the annotators are presented only the questions. Table 3 provides the statistics on the final test set. #queries #returned #relevant CI TST 300 6,000 798 Table 3: Statistics on the Test Data We evaluate the performance of our approach using Mean Average Precision (MAP). We perform a significant test, i.e., a t-test with a default significant level of 0.05. Following the literature, we set the parameters λ = 0.2 (Cao et al., 2010) in equations (1), (3) and (5), and α = 0.8 (Xue et al., 2008) in equation (6). 4.2 Question Retrieval Results We randomly divide the test questions into five subsets and conduct 5-fold cross-validation experiments. In each trial, we tune the parameters µ1 and µ2 with four of the five subsets and then apply it to one remaining subset. The experiments reported below are those averaged over the five trials. Table 4 presents the main retrieval performance. Row 1 to row 3 are baseline systems, all these methods use word-based translation models and obtain the state-of-the-art performance in previous work (Jeon et al., 2005; Xue et al., 2008). Row 3 is similar to row 2, the only difference is that TransLM only considers the question part, while Xue et al. (2008) incorporates the question part and answer part. Row 4 and row 5 are our proposed phrase-based translation model with maximum phrase length of five. Row 4 is phrase-based translation model purely based on question part, this model is equivalent to 659 # Methods Trans Prob MAP 1 Jeon et al. (2005) Ppool 0.289 2 TransLM Ppool 0.324 3 Xue et al. (2008) Ppool 0.352 4 P-Trans (µ1 = 1, l = 5) Ppool 0.366 5 P-Trans (l = 5) Ppool 0.391 Table 4: Comparison with different methods for question retrieval. setting µ1 = 1 in equation (15). Row 5 is the phrasebased combination model which linearly combines the question part and answer part. As expected, different parts can play different roles: a phrase to be translated in queried questions may be translated from the question part or answer part. All these methods use pooling strategy to estimate the translation probabilities. There are some clear trends in the result of Table 4: (1) Word-based translation language model (TransLM) significantly outperforms word-based translation model of Jeon et al. (2005) (row 1 vs. row 2). Similar observations have been made by Xue et al. (2008). (2) Incorporating the answer part into the models, either word-based or phrase-based, can significantly improve the performance of question retrieval (row 2 vs. row 3; row 4 vs. row 5). (3) Our proposed phrase-based translation model (P-Trans) significantly outperforms the state-of-theart word-based translation models (row 2 vs. row 4 and row 3 vs. 
row 5, all these comparisons are statistically significant at p < 0.05). 4.3 Impact of Phrase Length Our proposed phrase-based translation model, due to its capability of capturing contextual information, is more effective than the state-of-the-art word-based translation models. It is important to investigate the impact of the phrase length on the final retrieval performance. Table 5 shows the results, it is seen that using the longer phrases up to the maximum length of five can consistently improve the retrieval performance. However, using much longer phrases in the phrase-based translation model does not seem to produce significantly better performance (row 8 and row 9 vs. row 10 are not statistically significant). # Systems MAP 6 P-Trans (l = 1) 0.352 7 P-Trans (l = 2) 0.373 8 P-Trans (l = 3) 0.386 9 P-Trans (l = 4) 0.390 10 P-Trans (l = 5) 0.391 Table 5: The impact of the phrase length on retrieval performance. Model # Methods Average MAP P-Trans (l = 5) 11 Initial 69 0.380 12 TextRank 24 0.391 Table 6: Effectiveness of parallel corpus preprocessing. 4.4 Effectiveness of Parallel Corpus Preprocessing Question-answer pairs collected from Yahoo! answers are very noisy, it is possible for translation models to contain “unnecessary” translations. In this paper, we attempt to identify and decrease the proportion of unnecessary translations in a translation model by using TextRank algorithm. This kind of “unnecessary” translation between words will eventually affect the bi-phrase translation. Table 6 shows the effectiveness of parallel corpus preprocessing. Row 11 reports the average number of translations per word and the question retrieval performance when only stopwords 7 are removed. When using the TextRank algorithm for parallel corpus preprocessing, the average number of translations per word is reduced from 69 to 24, but the performance of question retrieval is significantly improved (row 11 vs. row 12). Similar results have been made by Lee et al. (2008). 4.5 Impact of Pooling Strategy The correspondence of words or phrases in the question-answer pair is not as strong as in the bilingual sentence pair, thus noise will be inevitably introduced for both P(¯a|¯q) and P(¯q|¯a). To see how much the pooling strategy benefit the question retrieval, we introduce two baseline methods for comparison. The first method (denoted as P(¯a|¯q)) is used to denote the translation probability with the question as the source and the answer as 7http://truereader.com/manuals/onix/stopwords1.html 660 Model # Trans Prob MAP P-Trans (l = 5) 13 P(¯a|¯q) 0.387 14 P(¯q|¯a) 0.381 15 Ppool 0.391 Table 7: The impact of pooling strategy for question retrieval. the target. The second (denoted as P(¯a|¯q)) is used to denote the translation probability with the answer as the source and the question as the target. Table 7 provides the comparison. From this Table, we see that the pooling strategy significantly outperforms the two baseline methods for question retrieval (row 13 and row 14 vs. row 15). 5 Conclusions and Future Work In this paper, we propose a novel phrase-based translation model for question retrieval. Compared to the traditional word-based translation models, the proposed approach is more effective in that it can capture contextual information instead of translating single words in isolation. Experiments conducted on real Q&A data demonstrate that the phrasebased translation model significantly outperforms the state-of-the-art word-based translation models. 
There are some ways in which this research could be continued. First, question structure should be considered, so it is necessary to combine the proposed approach with other question retrieval methods (e.g., (Duan et al., 2008; Wang et al., 2009; Bunescu and Huang, 2010)) to further improve the performance. Second, we will try to investigate the use of the proposed approach for other kinds of data set, such as categorized questions from forum sites and FAQ sites. Acknowledgments This work was supported by the National Natural Science Foundation of China (No. 60875041 and No. 61070106). We thank the anonymous reviewers for their insightful comments. We also thank Maoxi Li and Jiajun Zhang for suggestion to use the alignment toolkits. References A. Berger and R. Caruana and D. Cohn and D. Freitag and V. Mittal. 2000. Bridging the lexical chasm: statistical approach to answer-finding. In Proceedings of SIGIR, pages 192-199. A. Berger and J. Lafferty. 1999. Information retrieval as statistical translation. In Proceedings of SIGIR, pages 222-229. D. Bernhard and I. Gurevych. 2009. Combining lexical semantic resources with question & answer archives for translation-based answer finding. In Proceedings of ACL, pages 728-736. P. F. Brown and V. J. D. Pietra and S. A. D. Pietra and R. L. Mercer. 1993. The mathematics of statistical machine translation: parameter estimation. Computational Linguistics, 19(2):263-311. R. Bunescu and Y. Huang. 2010. Learning the relative usefulness of questions in community QA. In Proceedings of EMNLP, pages 97-107. X. Cao and G. Cong and B. Cui and C. S. Jensen. 2010. A generalized framework of exploring category information for question retrieval in community question answer archives. In Proceedings of WWW. D. Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of ACL. H. Duan and Y. Cao and C. Y. Lin and Y. Yu. 2008. Searching questions by identifying questions topics and question focus. In Proceedings of ACL, pages 156-164. J. Gao and X. He and J. Nie. 2010. Clickthrough-based translation models for web search: from word models to phrase models. In Proceedings of CIKM. J. Jeon and W. Bruce Croft and J. H. Lee. 2005. Finding similar questions in large question and answer archives. In Proceedings of CIKM, pages 84-90. R. Mihalcea and P. Tarau. 2004. TextRank: Bringing order into text. In Proceedings of EMNLP, pages 404411. P. Koehn and F. Och and D. Marcu. 2003. Statistical phrase-based translation. In Proceedings of NAACL, pages 48-54. J. -T. Lee and S. -B. Kim and Y. -I. Song and H. -C. Rim. 2008. Bridging lexical gaps between queries and questions on large online Q&A collections with compact translation models. In Proceedings of EMNLP, pages 410-418. F. Och. 2002. Statistical mahcine translation: from single word models to alignment templates. Ph.D thesis, RWTH Aachen. F. Och and H. Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30(4):417-449. 661 J. M. Ponte and W. B. Croft. 1998. A language modeling approach to information retrieval. In Proceedings of SIGIR. W. H. Press and S. A. Teukolsky and W. T. Vetterling and B. P. Flannery. 1992. Numerical Recipes In C. Cambridge Univ. Press. S. Robertson and S. Walker and S. Jones and M. Hancock-Beaulieu and M. Gatford. 1994. Okapi at trec-3. In Proceedings of TREC, pages 109-126. G. Salton and A. Wong and C. S. Yang. 1975. A vector space model for automatic indexing. Communications of the ACM, 18(11):613-620. X. 
Sun and J. Gao and D. Micol and C. Quirk. 2010. Learning phrase-based spelling error models from clickthrough data. In Proceedings of ACL. K. Wang and Z. Ming and T-S. Chua. 2009. A syntactic tree matching approach to finding similar questions in community-based qa services. In Proceedings of SIGIR, pages 187-194. X. Xue and J. Jeon and W. B. Croft. 2008. Retrieval models for question and answer archives. In Proceedings of SIGIR, pages 475-482. C. Zhai and J. Lafferty. 2001. A study of smooth methods for language models applied to ad hoc information retrieval. In Proceedings of SIGIR, pages 334-342. 662
|
2011
|
66
|
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 663–672, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Neutralizing Linguistically Problematic Annotations in Unsupervised Dependency Parsing Evaluation Roy Schwartz1 Omri Abend1∗Roi Reichart2 Ari Rappoport1 1Institute of Computer Science Hebrew University of Jerusalem {roys02|omria01|arir}@cs.huji.ac.il 2Computer Science and Artificial Intelligence Laboratory Massachusetts Institute of Technology [email protected] Abstract Dependency parsing is a central NLP task. In this paper we show that the common evaluation for unsupervised dependency parsing is highly sensitive to problematic annotations. We show that for three leading unsupervised parsers (Klein and Manning, 2004; Cohen and Smith, 2009; Spitkovsky et al., 2010a), a small set of parameters can be found whose modification yields a significant improvement in standard evaluation measures. These parameters correspond to local cases where no linguistic consensus exists as to the proper gold annotation. Therefore, the standard evaluation does not provide a true indication of algorithm quality. We present a new measure, Neutral Edge Direction (NED), and show that it greatly reduces this undesired phenomenon. 1 Introduction Unsupervised induction of dependency parsers is a major NLP task that attracts a substantial amount of research (Klein and Manning, 2004; Cohen et al., 2008; Headden et al., 2009; Spitkovsky et al., 2010a; Gillenwater et al., 2010; Berg-Kirkpatrick et al., 2010; Blunsom and Cohn, 2010, inter alia). Parser quality is usually evaluated by comparing its output to a gold standard whose annotations are linguistically motivated. However, there are cases in which there is no linguistic consensus as to what the correct annotation is (K¨ubler et al., 2009). Examples include which verb is the head in a verb group structure (e.g., “can” or “eat” in “can eat”), and which ∗Omri Abend is grateful to the Azrieli Foundation for the award of an Azrieli Fellowship. noun is the head in a sequence of proper nouns (e.g., “John” or “Doe” in “John Doe”). We refer to such annotations as (linguistically) problematic. For such cases, evaluation measures should not punish the algorithm for deviating from the gold standard. In this paper we show that the evaluation measures reported in current works are highly sensitive to the annotation in problematic cases, and propose a simple new measure that greatly neutralizes the problem. We start from the following observation: for three leading algorithms (Klein and Manning, 2004; Cohen and Smith, 2009; Spitkovsky et al., 2010a), a small set (at most 18 out of a few thousands) of parameters can be found whose modification dramatically improves the standard evaluation measures (the attachment score measure by 9.3-15.1%, and the undirected measure by a smaller but still significant 1.3-7.7%). The phenomenon is implementation independent, occurring with several algorithms based on a fundamental probabilistic dependency model1. We show that these parameter changes can be mapped to edge direction changes in local structures in the dependency graph, and that these correspond to problematic annotations. Thus, the standard evaluation measures do not reflect the true quality of the evaluated algorithm. 
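To see concretely why a single local direction choice moves both standard scores, consider the following toy sketch; it is our own illustration, not taken from the paper. In a three-word chain the parser flips one edge: the directed attachment score drops to zero, and even the undirected score is punished, because the word below the flipped edge now attaches to a different neighbour.

```python
from typing import Dict, Tuple

def attachment_scores(gold: Dict[str, str], pred: Dict[str, str]) -> Tuple[float, float]:
    """Directed score: fraction of words whose predicted head equals the gold head.
    Undirected score: fraction of words whose predicted (word, head) pair matches a
    gold edge in either direction."""
    directed = sum(pred[w] == gold[w] for w in gold) / len(gold)
    gold_edges = {frozenset((w, h)) for w, h in gold.items()}
    undirected = sum(frozenset((w, pred[w])) in gold_edges for w in gold) / len(gold)
    return directed, undirected

# Gold chain: w3 -> w2 -> w1 (each word mapped to its head; the root is not scored here).
gold = {"w2": "w3", "w1": "w2"}
# Prediction: the w2 -> w1 edge is flipped, so w1 heads w2 and itself attaches to w3.
pred = {"w1": "w3", "w2": "w1"}
print(attachment_scores(gold, pred))   # (0.0, 0.5): both measures punish the local flip
```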
We explain why the standard undirected evaluation measure is in fact sensitive to such edge direc1It is also language-independent; we have produced it in five different languages: English, Czech, Japanese, Portuguese, and Turkish. Due to space considerations, in this paper we focus on English, because it is the most studied language for this task and the most practically useful one at present. 663 tion changes, and present a new evaluation measure, Neutral Edge Direction (NED), which greatly alleviates the problem by ignoring the edge direction in local structures. Using NED, manual modifications of model parameters always yields small performance differences. Moreover, NED sometimes punishes such manual parameter tweaking by yielding worse results. We explain this behavior using an experiment revealing that NED always prefers the structures that are more consistent with the modeling assumptions lying in the basis of the algorithm. When manual parameter modification is done against this preference, the NED results decrease. The contributions of this paper are as follows. First, we show the impact of a small number of annotation decisions on the performance of unsupervised dependency parsers. Second, we observe that often these decisions are linguistically controversial and therefore this impact is misleading. This reveals a problem in the common evaluation of unsupervised dependency parsing. This is further demonstrated by noting that recent papers evaluate the task using three gold standards which differ in such decisions and which yield substantially different results. Third, we present the NED measure, which is agnostic to errors arising from choosing the non-gold direction in such cases. Section 2 reviews related work. Section 3 describes the performed parameter modifications. Section 4 discusses the linguistic controversies in annotating problematic dependency structures. Section 5 presents NED. Section 6 describes experiments with it. A discussion is given in Section 7. 2 Related Work Grammar induction received considerable attention over the years (see (Clark, 2001; Klein, 2005) for reviews). For unsupervised dependency parsing, the Dependency Model with Valence (DMV) (Klein and Manning, 2004) was the first to beat the simple right-branching baseline. A technical description of DMV is given at the end of this section. The great majority of recent works, including those experimented with in this paper, are elaborations of DMV. Smith and Eisner (2005) improved the DMV results by generalizing the function maximized by DMV’s EM training algorithm. Smith and Eisner (2006) used a structural locality bias, experimenting on five languages. Cohen et al. (2008) extended DMV by using a variational EM training algorithm and adding logistic normal priors. Cohen and Smith (2009, 2010) further extended it by using a shared logistic normal prior which provided a new way to encode the knowledge that some POS tags are more similar than others. A bilingual joint learning further improved their performance. Headden et al. (2009) obtained the best reported results on WSJ10 by using a lexical extension of DMV. Gillenwater et al. (2010) used posterior regularization to bias the training towards a small number of parent-child combinations. Berg-Kirkpatrick et al. (2010) added new features to the M step of the DMV EM procedure. Berg-Kirkpatrick and Klein (2010) used a phylogenetic tree to model parameter drift between different languages. Spitkovsky et al. 
(2010a) explored several training protocols for DMV. Spitkovsky et al. (2010c) showed the benefits of Viterbi (“hard”) EM to DMV training. Spitkovsky et al. (2010b) presented a novel lightlysupervised approach that used hyper-text mark-up annotation of web-pages to train DMV. A few non-DMV-based works were recently presented. Daum´e III (2009) used shift-reduce techniques. Blunsom and Cohn (2010) used tree substitution grammar to achieve best results on WSJ∞. Druck et al. (2009) took a semi-supervised approach, using a set of rules such as “A noun is usually the parent of a determiner which is to its left”, experimenting on several languages. Naseem et al. (2010) further extended this idea by using a single set of rules which globally applies to six different languages. The latter used a model similar to DMV. The controversial nature of some dependency structures was discussed in (Nivre, 2006; K¨ubler et al., 2009). Klein (2005) discussed controversial constituency structures and the evaluation problems stemming from them, stressing the importance of a consistent standard of evaluation. A few works explored the effects of annotation conventions on parsing performance. Nilsson et al. (2006) transformed the dependency annotations of coordinations and verb groups in the Prague TreeBank. They trained the supervised MaltParser (Nivre et al., 2006) on the transformed data, parsed the test data and re-transformed the resulting parse, 664 w3 w2 w1 (a) w3 w2 w1 (b) Figure 1: A dependency structure on the words w1, w2, w3 before (Figure 1(a)) and after (Figure 1(b)) an edge-flip of w2→w1. thus improving performance. Klein and Manning (2004) observed that a large portion of their errors is caused by predicting the wrong direction of the edge between a noun and its determiner. K¨ubler (2005) compared two different conversion schemes in German supervised constituency parsing and found one to have positive influence on parsing quality. Dependency Model with Valence (DMV). DMV (Klein and Manning, 2004) defines a probabilistic grammar for unlabeled dependency structures. It is defined as follows: the root of the sentence is first generated, and then each head recursively generates its right and left dependents. The parameters of the model are of two types: PSTOP and PATTACH. PSTOP(dir, h, adj) determines the probability to stop generating arguments, and is conditioned on 3 arguments: the head h, the direction dir ((L)eft or (R)ight) and adjacency adj (whether the head already has dependents ((Y )es) in direction dir or not ((N)o)). PATTACH(arg|h, dir) determines the probability to generate arg as head h’s dependent in direction dir. 3 Significant Effects of Edge Flipping In this section we present recurring error patterns in some of the leading unsupervised dependency parsers. These patterns are all local, confined to a sequence of up to three words (but mainly of just two consecutive words). They can often be mended by changing the directions of a few types of edges. The modified parameters described in this section were handpicked to improve performance: we examined the local parser errors occurring the largest number of times, and found the corresponding parameters. Note that this is a valid methodology, since our goal is not to design a new algorithm but to demonstrate that modifying a small set of parameters can yield a major performance boost and eventually discover problems with evaluation methods or algorithms. I PRP want VBP to TO eat VB . 
ROOT Figure 2: A parse of the sentence “I want to eat”, before (straight line) and after (dashed line) an edge-flip of the edge “to”←“eat”. We start with a few definitions. Consider Figure 1(a) that shows a dependency structure on the words w1, w2, w3. Edge flipping (henceforth, edgeflip) the edge w2→w1 is the following modification of a parse tree: (1) setting w2’s parent as w1 (instead of the other way around), and (2) setting w1’s parent as w3 (instead of the edge w3→w2). Figure 1(b) shows the dependency structure after the edge-flip. Note that (1) imposes setting a new parent to w2, as otherwise it would have had no parent. Setting this parent to be w3 is the minimal modification of the original parse, since it does not change the attachment of the structure [w2, w1] to the rest of the sentence, but only the direction of the internal edge. Figure 2 presents a parse of the sentence “I want to eat”, before and after an edge-flip of the edge “to”←“eat”. Since unsupervised dependency parsers are generally structure prediction models, the predictions of the parse edges are not independent. Therefore, there is no single parameter which completely controls the edge direction, and hence there is no direct way to perform an edge-flip by parameter modification. However, setting extreme values for the parameters controlling the direction of a certain edge type creates a strong preference towards one of the directions, and effectively determines the edge direction. This procedure is henceforth termed parameter-flip. We show that by performing a few parameterflips, a substantial improvement in the attachment score can be obtained. Results are reported for three algorithms. Parameter Changes. All the works experimented with in this paper are not lexical and use sequences of POS tags as their input. In addition, they all use the DMV parameter set (PSTOP and PATTACH) for parsing. We will henceforth refer to this set, conditioned on POS tags, as the model parameter set. We show how an edge in the dependency graph is encoded using the DMV parameters. Say the 665 model prefers setting “to” (POS tag: TO) as a dependent of the infinitive verb (POS tag: V B) to its right (e.g., “to eat”). This is reflected by a high value of PATTACH(TO|V B, L), a low value of PATTACH(V B|TO, R), since “to” tends to be a left dependent of the verb and not the other way around, and a low value of PSTOP(V B, L, N), as the verb usually has at least one left argument (i.e., “to”). A parameter-flip of w1→w2 is hence performed by setting PATTACH(w2|w1, R) to a very low value and PATTACH(w1|w2, L) to a very high value. When the modifications to PATTACH are insufficient to modify the edge direction, PSTOP(w2, L, N) is set to a very low value and PSTOP(w1, R, N) to a very high value2. Table 1 describes the changes made for the three algorithms. The ‘+’ signs in the table correspond to edges in which the algorithm disagreed with the gold standard, and were thus modified. Similarly, the ‘–’ signs in the table correspond to edges in which the algorithm agreed with the gold standard, and were thus not modified. The number of modified parameters does not exceed 18 (out of a few thousands). The Freq. column in the table shows the percentage of the tokens in sections 2-21 of PTB WSJ that participate in each structure. Equivalently, the percentage of edges in the corpus which are of either of the types appearing in the Orig. Edge column. As the table shows, the modified structures cover a significant portion of the tokens. 
Indeed, 42.9% of the tokens in the corpus participate in at least one of them3. Experimenting with Edge Flipping. We experimented with three DMV-based algorithms: a replication of (Klein and Manning, 2004), as appears in (Cohen et al., 2008) (henceforth, km04), Cohen and Smith (2009) (henceforth, cs09), and Spitkovsky et al. (2010a) (henceforth, saj10a). Decoding is done using the Viterbi algorithm4. For each of these algorithms we present the performance gain when compared to the original parameters. The training set is sections 2-21 of the Wall Street 2Note that this yields unnormalized models. Again, this is justified since the resulting model is only used as a basis for discussion and is not a fully fledged algorithm. 3Some tokens participate in more than one structure. 4http://www.cs.cmu.edu/∼scohen/parser.html. Structure Freq. Orig. Edge km04 cs09 saj10a Coordination (“John & Mary”) 2.9% CC→NNP – + – Prepositional Phrase (“in the house”) 32.7% DT→NN + + + DT→NNP – + + DT→NNS – – + IN→DT + + – IN←NN + + – IN←NNP + – – IN←NNS – + – P RP $→NN – – + Modal Verb (“can eat”) 2.4% MD←V B – + – Infinitive Verb (“to eat”) 4.5% TO→V B – + + Proper Name Sequence (“John Doe”) 18.5% NNP →NNP + – – Table 1: Parameter changes for the three algorithms. The Freq. column shows what percentage of the tokens in sections 2-21 of PTB WSJ participate in each structure. The Orig. column indicates the original edge. The modified edge is of the opposite direction. The other columns show the different algorithms: km04: basic DMV model (replication of (Klein and Manning, 2004)); cs09; (Cohen and Smith, 2009); saj10a: (Spitkovsky et al., 2010a). Journal Penn TreeBank (Marcus et al., 1993). Testing is done on section 23. The constituency annotation was converted to dependencies using the rules of (Yamada and Matsumoto, 2003)5. Following standard practice, we present the attachment score (i.e., percentage of words that have a correct head) of each algorithm, with both the original parameters and the modified ones. We present results both on all sentences and on sentences of length ≤10, excluding punctuation. Table 2 shows results for all algorithms6. The performance difference between the original and the modified parameter set is considerable for all data sets, where differences exceed 9.3%, and go up to 15.1%. These are enormous differences from the perspective of current algorithm evaluation results. 4 Linguistically Problematic Annotations In this section, we discuss the controversial nature of the annotation in the modified structures (K¨ubler 5http://www.jaist.ac.jp/∼h-yamada/ 6Results are slightly worse than the ones published in the original papers due to the different decoding algorithms (cs09 use MBR while we used Viterbi) and a different conversion procedure (saj10a used (Collins, 1999) and not (Yamada and Matsumoto, 2003)) ; see Section 5. 666 Algo. ≤10 ≤∞ Orig. Mod. ∆ Orig. Mod. ∆ km04 45.8 59.8 14 34.6 43.9 9.3 cs09 60.9 72.9 12 39.9 54.6 14.7 saj10a 54.7 69.8 15.1 41.6 54.3 12.7 Table 2: Results of the original (Orig. columns), the modified (Mod. columns) parameter sets and their difference (∆columns) for the three algorithms. et al., 2009). We remind the reader that structures for which no linguistic consensus exists as to their correct annotation are referred to as (linguistically) problematic. We begin by showing that all the structures modified are indeed linguistically problematic. 
We then note that these controversies are reflected in the evaluation of this task, resulting in three, significantly different, gold standards currently in use. Coordination Structures are composed of two proper nouns, separated by a conjunctor (e.g., “John and Mary”). It is not clear which token should be the head of this structure, if any (Nilsson et al., 2006). Prepositional Phrases (e.g., “in the house” or “in Rome”), where every word is a reasonable candidate to head this structure. For example, in the annotation scheme used by (Collins, 1999) the preposition is the head, in the scheme used by (Johansson and Nugues, 2007) the noun is the head, while TUT annotation, presented in (Bosco and Lombardo, 2004), takes the determiner to be the noun’s head. Verb Groups are composed of a verb and an auxiliary or a modal verb (e.g., “can eat”). Some schemes choose the modal as the head (Collins, 1999), others choose the verb (Rambow et al., 2002). Infinitive Verbs (e.g., “to eat”) are also in controversy, as in (Yamada and Matsumoto, 2003) the verb is the head while in (Collins, 1999; Bosco and Lombardo, 2004) the “to” token is the head. Sequences of Proper Nouns (e.g., “John Doe”) are also subject to debate, as PTB’s scheme takes the last proper noun as the head, and BIO’s scheme defines a more complex scheme (Dredze et al., 2007). Evaluation Inconsistency Across Papers. A fact that may not be recognized by some readers is that comparing the results of unsupervised dependency parsers across different papers is not directly possible, since different papers use different gold standard annotations even when they are all derived from the Penn Treebank constituency annotation. This happens because they use different rules for converting constituency annotation to dependency annotation. A probable explanation for this fact is that people have tried to correct linguistically problematic annotations in different ways, which is why we note this issue here7. There are three different annotation schemes in current use: (1) Collins head rules (Collins, 1999), used in e.g., (Berg-Kirkpatrick et al., 2010; Spitkovsky et al., 2010a); (2) Conversion rules of (Yamada and Matsumoto, 2003), used in e.g., (Cohen and Smith, 2009; Gillenwater et al., 2010); (3) Conversion rules of (Johansson and Nugues, 2007) used, e.g., in the CoNLL shared task 2007 (Nivre et al., 2007) and in (Blunsom and Cohn, 2010). The differences between the schemes are substantial. For instance, 14.4% of section 23 is tagged differently by (1) and (2)8. 5 The Neutral Edge Direction (NED) Measure As shown in the previous sections, the annotation of problematic edges can substantially affect performance. This was briefly discussed in (Klein and Manning, 2004), which used undirected evaluation as a measure which is less sensitive to alternative annotations. Undirected accuracy was commonly used since to assess the performance of unsupervised parsers (e.g., (Smith and Eisner, 2006; Headden et al., 2008; Spitkovsky et al., 2010a)) but also of supervised ones (Wang et al., 2005; Wang et al., 2006). In this section we discuss why this measure is in fact not indifferent to edge-flips and propose a new measure, Neutral Edge Direction (NED). 7Indeed, half a dozen flags in the LTH Constituent-toDependency Conversion Tool (Johansson and Nugues, 2007) are used to control the conversion in problematic cases. 8In our experiments we used the scheme of (Yamada and Matsumoto, 2003), see Section 3. 
The significant effects of edge flipping were observed with the other two schemes as well. 667 w1 w2 w3 (a) w1 w3 w2 (b) w4 w3 w2 (c) Figure 3: A dependency structure on the words w1, w2, w3 before (Figure 3(a)) and after (Figure 3(b)) an edge-flip of w2→w3, and when the direction of the edge between w2 and w3 is switched and the new parent of w3 is set to be some other word, w4 (Figure 3(c)). Undirected Evaluation. The measure is defined as follows: traverse over the tokens and mark a correct attachment if the token’s induced parent is either (1) its gold parent or (2) its gold child. The score is the ratio of correct attachments and the number of tokens. We show that this measure does not ignore edgeflips. Consider Figure 3 that shows a dependency structure on the words w1, w2, w3 before (Figure 3(a)) and after (Figure 3(b)) an edge-flip of w2→w3. Assume that 3(a) is the gold standard and that 3(b) is the induced parse. Consider w2. Its induced parent (w3) is its gold child, and thus undirected evaluation does not consider it an error. On the other hand, w3 is assigned w2’s gold parent, w1. This is considered an error, since w1 is neither w3’s gold parent (as it is w2), nor its gold child9. Therefore, one of the two tokens involved in the edge-flip is penalized by the measure. Recall the example “I want to eat” and the edgeflip of the edge “to”←“eat” (Figure 2). As “to”’s parent in the induced graph (“want”) is neither its gold parent nor its gold child, the undirected evaluation measure marks it as an error. This is an example where an edge-flip in a problematic edge, which should not be considered an error, was in fact considered an error by undirected evaluation. Neutral Edge Direction (NED). The NED measure is a simple extension of the undirected evaluation measure10. Unlike undirected evaluation, NED ignores all errors directly resulting from an edge-flip. 9Otherwise, the gold parse would have contained a w1→w2→w3→w1 cycle. 10An implementation of NED is available at http://www.cs.huji.ac.il/∼roys02/software/ned.html NED is defined as follows: traverse over the tokens and mark a correct attachment if the token’s induced parent is either (1) its gold parent (2) its gold child or (3) its gold grandparent. The score is the ratio of correct attachments and the number of tokens. NED, by its definition, ignores edge-flips. Consider again Figure 3, where we assume that 3(a) is the gold standard and that 3(b) is the induced parse. Much like undirected evaluation, NED will mark the attachment of w2 as correct, since its induced parent is its gold child. However, unlike undirected evaluation, w3’s induced attachment will also be marked as correct, as its induced parent is its gold grandparent. Now consider another induced parse in which the direction of the edge between w2 and w3 is switched and the w3’s parent is set to be some other word, w4 (Figure 3(c)). This should be marked as an error, even if the direction of the edge between w2 and w3 is controversial, since the structure [w2, w3] is no longer a dependent of w1. It is indeed a NED error. Note that undirected evaluation gives the parses in Figure 3(b) and Figure 3(c) the same score, while if the structure [w2, w3] is problematic, there is a major difference in their correctness. Discussion. Problematic structures are ubiquitous, with more than 40% of the tokens in PTB WSJ appearing in at least one of them (see Section 3). 
Therefore, even a substantial difference in the attachment between two parsers is not necessarily indicative of a true quality difference. However, an attachment score difference that persists under NED is an indication of a true quality difference, since generally problematic structures are local (i.e., obtained by an edge-flip) and NED ignores such errors. Reporting NED alone is insufficient, as obviously the edge direction does matter in some cases. For example, in adjective–noun structures (e.g., “big house”), the correct edge direction is widely agreed upon (“big”←“house”) (K¨ubler et al., 2009), and thus choosing the wrong direction should be considered an error. Therefore, we suggest evaluating using both NED and attachment score in order to get a full picture of the parser’s performance. A possible criticism on NED is that it is only indifferent to alternative annotations in structures of size 2 (e.g., “to eat”) and does not necessarily handle larger problematic structures, such as coordinations 668 ROOT John and Mary (a) ROOT John and Mary (b) ROOT in house the (c) ROOT in the house (d) ROOT house in the (e) Figure 4: Alternative parses of “John and Mary” and “in the house”. Figure 4(a) follows (Collins, 1999), Figure 4(b) follows (Johansson and Nugues, 2007). Figure 4(c) follows (Collins, 1999; Yamada and Matsumoto, 2003). Figure 4(d) and Figure 4(e) show induced parses made by (km04,saj10a) and cs09, respectively. (see Section 4). For example, Figure 4(a) and Figure 4(b) present two alternative annotations of the sentence “John and Mary”. Assume the parse in Figure 4(a) is the gold parse and that in Figure 4(b) is the induced parse. The word “Mary” is a NED error, since its induced parent (“and”) is neither its gold child nor its gold grandparent. Thus, NED does not accept all possible annotations of structures of size 3. On the other hand, using a method which accepts all possible annotations of structures of size 3 seems too permissive. A better solution may be to modify the gold standard annotation, so to explicitly annotate problematic structures as such. We defer this line of research to future work. NED is therefore an evaluation measure which is indifferent to edge-flips, and is consequently less sensitive to alternative annotations. We now show that NED is indifferent to the differences between the structures originally learned by the algorithms mentioned in Section 3 and the gold standard annotation in all the problematic cases we consider. Most of the modifications made are edge-flips, and are therefore ignored by NED. The exceptions are coordinations and prepositional phrases which are structures of size 3. In the former, the alternative annotations differ only in a single edge-flip (i.e., CC→NNP), and are thus not NED errors. Regarding prepositional phrases, Figure 4(c) presents the gold standard of “in the house”, Figure 4(d) the parse induced by km04 and saj10a and Figure 4(e) the parse induced by cs09. As the reader can verify, both induced parses receive a perfect NED score. In order to further demonstrate NED’s insensitivity to alternative annotations, we took two of the three common gold standard annotations (see Section 4) and evaluated them one against the other. We considered section 23 of WSJ following the scheme of (Yamada and Matsumoto, 2003) as the gold standard and of (Collins, 1999) as the evaluated set. Results show that the attachment score is only 85.6%, the undirected accuracy is improved to 90.3%, while the NED score is 95.3%. 
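For concreteness, the relation between the three measures just discussed can be seen in a small sketch that scores one sentence given gold and induced head positions. This is only an illustration of the definitions in the text, not the implementation released by the authors; the indexing convention (token 0 is the artificial root, punctuation already removed) is an assumption.

```python
def evaluate(gold_heads, induced_heads):
    """Attachment score, undirected accuracy and NED for one sentence.

    Both arguments map each token position to the position of its head;
    position 0 is assumed to be the artificial root and is not scored.
    """
    tokens = [t for t in gold_heads if t != 0]
    gold_children = {t: set() for t in list(gold_heads) + [0]}
    for dep in tokens:
        gold_children.setdefault(gold_heads[dep], set()).add(dep)
    attach = undirected = ned = 0
    for dep in tokens:
        g, p = gold_heads[dep], induced_heads[dep]
        if p == g:                                # (1) induced parent = gold parent
            attach += 1; undirected += 1; ned += 1
        elif p in gold_children.get(dep, ()):     # (2) induced parent = gold child
            undirected += 1; ned += 1
        elif g != 0 and p == gold_heads[g]:       # (3) gold grandparent: NED only
            ned += 1
    n = len(tokens)
    return attach / n, undirected / n, ned / n
```

The sketch makes explicit that NED differs from undirected accuracy only in the third clause, which is exactly the case created by an edge-flip.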
This shows that NED is significantly less sensitive to the differences between the different annotation schemes, compared to the other evaluation measures. 6 Experimenting with NED In this section we show that NED indeed reduces the performance difference between the original and the modified parameter sets, thus providing empirical evidence for its validity. For brevity, we present results only for the entire WSJ corpus. Results on WSJ10 are similar. The datasets and decoding algorithms are the same as those used in Section 3. Table 3 shows the score differences between the parameter sets using attachment score, undirected evaluation and NED. A substantial difference persists under undirected evaluation: a gap of 7.7% in cs09, of 3.5% in saj10a and of 1.3% in km04. The differences are further reduced using NED. This is consistent with our discussion in Section 5, and shows that undirected evaluation only ignores some of the errors inflicted by edge-flips. For cs09, the difference is substantially reduced, but a 4.2% performance gap remains. For km04 and saj10a, the original parameters outperform the new ones by 3.6% and 1% respectively. We can see that even when ignoring edge-flips, some difference remains, albeit not necessarily in the favor of the modified models. This is because we did not directly perform edge-flips, but rather parameter-flips. The difference is thus a result of second-order effects stemming from the parameterflips. In the next section, we explain why the remaining difference is positive for some algorithms (cs09) and negative for others (km04, saj10a). For completeness, Table 4 shows a comparison of some of the current state-of-the-art algorithms, using attachment score, undirected evaluation and NED. The training and test sets are those used in Section 3. The table shows that the relative orderings of the algorithms under NED is different than under the other 669 Algo. Mod. – Orig. Attach. Undir. NED km04 9.3 (43.9–34.6) 1.3 (54.2–52.9) –3.6 (63–66.6) cs09 14.7 (54.6–39.9) 7.7 (56.9–49.2) 4.2 (66.8–62.6) saj10a 12.7 (54.3–41.6) 3.5 (59.4–55.9) –1 (66.8–67.8) Table 3: Differences between the modified and original parameter sets when evaluated using attachment score (Attach.), undirected evaluation (Undir.), and NED. measures. This is an indication that NED provides a different perspective on algorithm quality11. Algo. Att10 Att∞ Un10 Un∞ NED10 NED∞ bbdk10 66.1 49.6 70.1 56.0 75.5 61.8 bc10 67.2 53.6 73 61.7 81.6 70.2 cs09 61.5 42 66.9 50.4 81.5 62.9 gggtp10 57.1 45 62.5 53.2 80.4 65.1 km04 45.8 34.6 60.3 52.9 78.4 66.6 saj10a 54.7 41.6 66.5 55.9 78.9 67.8 saj10c 63.8 46.1 72.6 58.8 84.2 70.8 saj10b∗ 67.9 48.2 74.0 57.7 86.0 70.7 Table 4: A comparison of recent works, using Att (attachment score) Un (undirected evaluation) and NED, on sentences of length ≤10 (excluding punctuation) and on all sentences. The gold standard is obtained using the rules of (Yamada and Matsumoto, 2003). bbdk10: (Berg-Kirkpatrick et al., 2010), bc10: (Blunsom and Cohn, 2010), cs09: (Cohen and Smith, 2009), gggtp10: (Gillenwater et al., 2010), km04: A replication of (Klein and Manning, 2004), saj10a: (Spitkovsky et al., 2010a), saj10c: (Spitkovsky et al., 2010c), saj10b∗: A lightlysupervised algorithm (Spitkovsky et al., 2010b). 7 Discussion In this paper we explored two ways of dealing with cases in which there is no clear theoretical justification to prefer one dependency structure over another. 
Our experiments suggest that it is crucial to deal with such structures if we would like to have a proper evaluation of unsupervised parsing algorithms against a gold standard. The first way was to modify the parameters of the parsing algorithms so that in cases where such problematic decisions are to be made they follow the gold standard annotation. Indeed, this modification leads to a substantial improvement in the attachment score of the algorithms. 11Results may be different than the ones published in the original papers due to the different conversion procedures used in each work. See Section 4 for discussion. The second way was to change the evaluation. The NED measure we proposed does not punish for differences between gold and induced structures in the problematic cases. Indeed, in Section 6 (Table 3) we show that the differences between the original and modified models are much smaller when evaluating with NED compared to when evaluating with the traditional attachment score. As Table 3 reveals, however, even when evaluating with NED, there is still some difference between the original and the modified model, for each of the algorithms we consider. Moreover, for two of the algorithms (km04 and saj10a) NED prefers the original model while for one (cs09) it prefers the modified version. In this section we explain these patterns and show that they are both consistent and predictable. Our hypothesis, for which we provide empirical justification, is that in cases where there is no theoretically preferred annotation, NED prefers the structures that are more learnable by DMV. That is, NED gives higher scores to the annotations that better fit the assumptions and modeling decisions of DMV, the model that lies in the basis of the parsing algorithms. To support our hypothesis we perform an experiment requiring two preparatory steps for each algorithm. First, we construct a supervised version of the algorithm. This supervised version consists of the same statistical model as the original unsupervised algorithm, but the parameters are estimated to maximize the likelihood of a syntactically annotated training corpus, rather than of a plain text corpus. Second, we construct two corpora for the algorithm, both consist of the same text and differ only in their syntactic annotation. The first is annotated with the gold standard annotation. The second is similarly annotated except in the linguistically problematic structures. We replace these structures with the ones that would have been created with the unsupervised version of the algorithm (see Table 1 for the relevant structures for each algorithm)12. Each 12In cases the structures are comprised of a single edge, the second corpus is obtained from the gold standard by an edgeflip. The only exceptions are the cases of the prepositional phrases. Their gold standard and the learned structures for each of the algorithms are shown in Figure 4. In this case, the second corpus is obtained from the gold standard by replacing each prepositional phrase in the gold standard with the corresponding 670 corpus is divided into a training and a test set. We then train the supervised version of the algorithms on each of the training sets. We parse the test data twice, once with each of the resulting models. We evaluate both parsed corpora against the corpus annotation from which they originated. The training set of each corpus consists of sections 2–21 of WSJ20 (i.e., WSJ sentences of length ≤20, excluding punctuation)13 and the test set is section 23 of WSJ∞. 
Evaluation is performed using both NED and attachment score. The patterns we observed are very similar for both. For brevity, we report only attachment score results. km04 cs09 saj10a Orig. Gold Orig. Gold Orig. Gold NED, Unsup. 66.6 63 62.6 66.8 67.8 66.8 Sup. 71.3 69.9 63.3 69.9 71.8 69.9 Table 5: The first line shows the NED results from Section 6, when using the original parameters (Orig. columns) and the modified parameters (Gold columns). The second line shows the results of the supervised versions of the algorithms using the corpus which agrees with the unsupervised model in the problematic cases (Orig.) and the gold standard (Gold). The results of our experiment are presented in Table 5 along with a comparison to the NED scores from Section 6. The table clearly demonstrates that a set of parameters (original or modified) is preferred by NED in the unsupervised experiments reported in Section 6 (top line) if and only if the structures produced by this set are better learned by the supervised version of the algorithm (bottom line). This observation supports our hypothesis that in cases where there is no theoretical preference for one structure over the other, NED (unlike the other measures) prefers the structures that are more consistent with the modeling assumptions lying in the basis of the algorithm. We consider this to be a desired property of a measure since a more consistent model should be preferred where no theoretical preference exists. learned structure. 13In using WSJ20, we follow (Spitkovsky et al., 2010a), which showed that training the DMV on sentences of bounded length yields a higher score than using the entire corpus. We use it as we aim to use an optimal setting. 8 Conclusion In this paper we showed that the standard evaluation of unsupervised dependency parsers is highly sensitive to problematic annotations. We modified a small set of parameters that controls the annotation in such problematic cases in three leading parsers. This resulted in a major performance boost, which is unindicative of a true difference in quality. We presented Neutral Edge Direction (NED), a measure that is less sensitive to the annotation of local structures. As the problematic structures are generally local, NED is less sensitive to their alternative annotations. In the future, we suggest reporting NED along with the current measures. Acknowledgements. We would like to thank Shay Cohen for his assistance with his implementation of the DMV parser and Taylor Berg-Kirkpatrick, Phil Blunsom and Jennifer Gillenwater for providing us with their data sets. We would also like to thank Valentin I. Spitkovsky for his comments and for providing us with his data sets. References Taylor Berg-Kirkpatrick, Alexandre Bouchard-Cˆot´e, John DeNero and Dan Klein, 2010. Painless unsupervised learning with features. In Proc. of NAACL. Taylor Berg-Kirkpatrick and Dan Klein, 2010. Phylogenetic Grammar Induction. In Proc. of ACL. Cristina Bosco and Vincenzo Lombardo, 2004. Dependency and relational structure in treebank annotation. In Proc. of the Workshop on Recent Advances in Dependency Grammar at COLING’04. Phil Blunsom and Trevor Cohn, 2010. Unsupervised Induction of Tree Substitution Grammars for Dependency Parsing. In Proc. of EMNLP. Shay B. Cohen, Kevin Gimpel and Noah A. Smith, 2008. Logistic Normal Priors for Unsupervised Probabilistic Grammar Induction. In Proc. of NIPS. Shay B. Cohen and Noah A. Smith, 2009. Shared Logistic Normal Distributions for Soft Parameter Tying. In Proc. of HLT-NAACL. 
Michael J. Collins, 1999. Head-driven statistical models for natural language parsing. Ph.D. thesis, University of Pennsylvania, Philadelphia. Alexander Clark, 2001. Unsupervised language acquisition: theory and practice. Ph.D. thesis, University of Sussex. Hal Daum´e III, 2009. Unsupervised search-based structured prediction. In Proc. of ICML. 671 Mark Dredze, John Blitzer, Partha Pratim Talukdar, Kuzman Ganchev, Jo˜ao V. Grac¸a and Fernando Pereira, 2007. Frustratingly Hard Domain Adaptation for Dependency Parsing. In Proc. of the CoNLL 2007 Shared Task. EMNLP-CoNLL. Gregory Druck, Gideon Mann and Andrew McCallum, 2009. Semi-supervised learning of dependency parsers using generalized expectation criteria. In Proc. of ACL. Jennifer Gillenwater, Kuzman Ganchev, Jo˜ao V. Grac¸a, Ben Taskar and Fernando Preira, 2010. Sparsity in dependency grammar induction. In Proc. of ACL. William P. Headden III, David McClosky, and Eugene Charniak, 2008. Evaluating Unsupervised Part-ofSpeech Tagging for Grammar Induction. In Proc. of COLING. William P. Headden III, Mark Johnson and David McClosky, 2009. Improving unsupervised dependency parsing with richer contexts and smoothing. In Proc. of HLT-NAACL. Richard Johansson and Pierre Nugues, 2007. Extended Constituent-to-DependencyConversion for English. In Proc. of NODALIDA. Dan Klein, 2005. The unsupervised learning of natural language structure. Ph.D. thesis, Stanford University. Dan Klein and Christopher Manning, 2004. Corpusbased induction of syntactic structure: Models of dependency and constituency. In Proc. of ACL. Sandra K¨ubler, 2005. How Do Treebank Annotation Schemes Influence Parsing Results? Or How Not to Compare Apples And Oranges. In Proc. of RANLP. Sandra K¨ubler, R. McDonald and Joakim Nivre, 2009. Dependency Parsing. Morgan And Claypool Publishers. Mitchell Marcus, Beatrice Santorini and Mary Ann Marcinkiewicz, 1993. Building a large annotated corpus of English: The Penn treebank. Computational Linguistics 19:313-330. Tahira Naseem, Harr Chen, Regina Barzilay and Mark Johnson, 2010. Using universal linguistic knowledge to guide grammar induction. In Proc. of EMNLP. Joakim Nivre, 2006. Inductive Dependency Parsing. Springer. Joakim Nivre, Johan Hall and Jens Nilsson, 2006. MaltParser: A data-driven parser-generator for dependency parsing. In Proc. of LREC-2006. Joakim Nivre, Johan Hall, Sandra K¨ubler, Ryan McDonald, Jens Nilsson, Sebastian Riedel and Deniz Yuret, 2007. The CoNLL 2007 shared task on dependency parsing. In Proc. of the CoNLL Shared Task, EMNLPCoNLL, 2007. Jens Nilsson, Joakim Nivre and Johan Hall, 2006. Graph transformations in data-driven dependency parsing. In Proc. of ACL. Owen Rambow, Cassandre Creswell, Rachel Szekely, Harriet Tauber and Marilyn Walker, 2002. A dependency treebank for English. In Proc. of LREC. Noah A. Smith and Jason Eisner, 2005. Guiding unsupervised grammar induction using contrastive estimation. In Proc. of IJCAI. Noah A. Smith and Jason Eisner, 2006. Annealing structural bias in multilingual weighted grammar induction. In Proc. of ACL. Valentin I. Spitkovsky, Hiyan Alshawi and Daniel Jurafsky, 2010a. From Baby Steps to Leapfrog: How “Less is More” in Unsupervised Dependency Parsing. In Proc. of NAACL-HLT. Valentin I. Spitkovsky, Hiyan Alshawi and Daniel Jurafsky, 2010b. Profiting from Mark-Up: Hyper-Text Annotations for Guided Parsing. In Proc. of ACL. Valentin I. Spitkovsky, Hiyan Alshawi, Daniel Jurafsky and Christopher D. Manning, 2010c. Viterbi training improves unsupervised dependency parsing. 
In Proc. of CoNLL. Qin Iris Wang, Dale Schuurmans and Dekang Lin, 2005. Strictly Lexical Dependency Parsing. In IWPT. Qin Iris Wang, Colin Cherry, Dan Lizotte and Dale Schuurmans, 2006. Improved Large Margin Dependency Parsing via Local Constraints and Laplacian Regularization. In Proc. of CoNLL. Hiroyasu Yamada and Yuji Matsumoto, 2003. Statistical dependency analysis with support vector machines. In Proc. of the International Workshop on Parsing Technologies. 672
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 673–682, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Dynamic Programming Algorithms for Transition-Based Dependency Parsers Marco Kuhlmann Dept. of Linguistics and Philology Uppsala University, Sweden [email protected] Carlos Gómez-Rodríguez Departamento de Computación Universidade da Coruña, Spain [email protected] Giorgio Satta Dept. of Information Engineering University of Padua, Italy [email protected] Abstract We develop a general dynamic programming technique for the tabulation of transition-based dependency parsers, and apply it to obtain novel, polynomial-time algorithms for parsing with the arc-standard and arc-eager models. We also show how to reverse our technique to obtain new transition-based dependency parsers from existing tabular methods. Additionally, we provide a detailed discussion of the conditions under which the feature models commonly used in transition-based parsing can be integrated into our algorithms. 1 Introduction Dynamic programming algorithms, also known as tabular or chart-based algorithms, are at the core of many applications in natural language processing. When applied to formalisms such as context-free grammar, they provide polynomial-time parsing algorithms and polynomial-space representations of the resulting parse forests, even in cases where the size of the search space is exponential in the length of the input string. In combination with appropriate semirings, these packed representations can be exploited to compute many values of interest for machine learning, such as best parses and feature expectations (Goodman, 1999; Li and Eisner, 2009). In this paper, we follow the line of investigation started by Huang and Sagae (2010) and apply dynamic programming to (projective) transition-based dependency parsing (Nivre, 2008). The basic idea, originally developed in the context of push-down automata (Lang, 1974; Tomita, 1986; Billot and Lang, 1989), is that while the number of computations of a transition-based parser may be exponential in the length of the input string, several portions of these computations, when appropriately represented, can be shared. This can be effectively implemented through dynamic programming, resulting in a packed representation of the set of all computations. The contributions of this paper can be summarized as follows. We provide (declarative specifications of) novel, polynomial-time algorithms for two widelyused transition-based parsing models: arc-standard (Nivre, 2004; Huang and Sagae, 2010) and arc-eager (Nivre, 2003; Zhang and Clark, 2008). Our algorithm for the arc-eager model is the first tabular algorithm for this model that runs in polynomial time. Both algorithms are derived using the same general technique; in fact, we show that this technique is applicable to all transition-parsing models whose transitions can be classified into “shift” and “reduce” transitions. We also show how to reverse the tabulation to derive a new transition system from an existing tabular algorithm for dependency parsing, originally developed by Gómez-Rodríguez et al. (2008). Finally, we discuss in detail the role of feature information in our algorithms, and in particular the conditions under which the feature models traditionally used in transition-based dependency parsing can be integrated into our framework. 
While our general approach is the same as the one of Huang and Sagae (2010), we depart from their framework by not representing the computations of a parser as a graph-structured stack in the sense of Tomita (1986). We instead simulate computations as in Lang (1974), which results in simpler algorithm specifications, and also reveals deep similarities between transition-based systems for dependency parsing and existing tabular methods for lexicalized context-free grammars. 673 2 Transition-Based Dependency Parsing We start by briefly introducing the framework of transition-based dependency parsing; for details, we refer to Nivre (2008). 2.1 Dependency Graphs Let w D w0 wn 1 be a string over some fixed alphabet, where n 1 and w0 is the special token root. A dependency graph for w is a directed graph G D .Vw; A/, where Vw D f0; : : : ; n 1g is the set of nodes, and A Vw Vw is the set of arcs. Each node in Vw encodes the position of a token in w, and each arc in A encodes a dependency relation between two tokens. To denote an arc .i; j / 2 A, we write i ! j ; here, the node i is the head, and the node j is the dependent. A sample dependency graph is given in the left part of Figure 2. 2.2 Transition Systems A transition system is a structure S D .C; T; I; Ct/, where C is a set of configurations, T is a finite set of transitions, which are partial functions tW C * C, I is a total initialization function mapping each input string to a unique initial configuration, and Ct C is a set of terminal configurations. The transition systems that we investigate in this paper differ from each other only with respect to their sets of transitions, and are identical in all other aspects. In each of them, a configuration is defined relative to a string w as above, and is a triple c D .; ˇ; A/, where and ˇ are disjoint lists of nodes from Vw, called stack and buffer, respectively, and A Vw Vw is a set of arcs. We denote the stack, buffer and arc set associated with c by .c/, ˇ.c/, and A.c/, respectively. We follow a standard convention and write the stack with its topmost element to the right, and the buffer with its first element to the left; furthermore, we indicate concatenation in the stack and in the buffer by a vertical bar. The initialization function maps each string w to the initial configuration .Œ; Œ0; : : : ; jwj 1; ;/. The set of terminal configurations contains all configurations of the form .Œ0; Œ; A/, where A is some set of arcs. Given an input string w, a parser based on S processes w from left to right, starting in the initial configuration I.w/. At each point, it applies one of the transitions, until at the end it reaches a terminal .; ijˇ; A/ ` .ji; ˇ; A/ .sh/ .jijj; ˇ; A/ ` .jj; ˇ; A [ fj ! ig/ .la/ .jijj; ˇ; A/ ` .ji; ˇ; A [ fi ! j g/ .ra/ Figure 1: Transitions in the arc-standard model. configuration; the dependency graph defined by the arc set associated with that configuration is then returned as the analysis for w. Formally, a computation of S on w is a sequence
γ = c0, …, cm, m ≥ 0, of configurations (defined relative to w) in which each configuration is obtained as the value of the preceding one under some transition. It is called complete whenever c0 = I(w) and cm ∈ Ct. We note that a computation can be uniquely specified by its initial configuration c0 and the sequence of its transitions, understood as a string over T. Complete computations, where c0 is fixed, can be specified by their transition sequences alone.

3 Arc-Standard Model

To introduce the core concepts of the paper, we first look at a particularly simple model for transition-based dependency parsing, known as the arc-standard model. This model has been used, in slightly different variants, by a number of parsers (Nivre, 2004; Attardi, 2006; Huang and Sagae, 2010).

3.1 Transition System

The arc-standard model uses three types of transitions: Shift (sh) removes the first node in the buffer and pushes it to the stack. Left-Arc (la) creates a new arc with the topmost node on the stack as the head and the second-topmost node as the dependent, and removes the second-topmost node from the stack. Right-Arc (ra) is symmetric to Left-Arc in that it creates an arc with the second-topmost node as the head and the topmost node as the dependent, and removes the topmost node. The three transitions can be formally specified as in Figure 1.
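To make the transition system concrete, here is a minimal sketch of the three arc-standard transitions operating on (stack, buffer, arcs) configurations. The representation of configurations as Python tuples is an assumption made for illustration, not part of the paper.

```python
def shift(config):
    """sh: move the first buffer node onto the stack."""
    stack, buffer, arcs = config
    return stack + [buffer[0]], buffer[1:], arcs

def left_arc(config):
    """la: add the arc top -> second-topmost and remove the second-topmost node."""
    stack, buffer, arcs = config
    i, j = stack[-2], stack[-1]
    return stack[:-2] + [j], buffer, arcs | {(j, i)}

def right_arc(config):
    """ra: add the arc second-topmost -> top and remove the topmost node."""
    stack, buffer, arcs = config
    i, j = stack[-2], stack[-1]
    return stack[:-2] + [i], buffer, arcs | {(i, j)}

def parse(n, transitions):
    """Run a transition sequence from the initial configuration for a
    sentence of n tokens (token 0 is the artificial root)."""
    config = ([], list(range(n)), set())          # I(w)
    ops = {"sh": shift, "la": left_arc, "ra": right_arc}
    for t in transitions:
        config = ops[t](config)
    return config
```

For the nine-token sentence of Figure 2, running the transition sequence sh sh sh la sh la sh sh la sh sh sh la ra ra ra ra ends in the terminal configuration ([0], [], A), where A contains the eight arcs of the tree shown there.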
The right half of Figure 2 shows a complete computation of the arc-standard transition system, specified by its transition sequence. The picture also shows the contents of the stack over the course of the computation; more specifically, column i shows the stack σ(ci) associated with the configuration ci.

Figure 2: A dependency tree (left) and a computation generating this tree in the arc-standard system (right).

3.2 Push Computations

The key to the tabulation of transition-based dependency parsers is to find a way to decompose computations into smaller, shareable parts. For the arc-standard model, as well as for the other transition systems that we consider in this paper, we base our decomposition on the concept of push computations.
By this, we mean computations γ = c0, …, cm, m ≥ 1, on some input string w with the following properties:

(P1) The initial stack σ(c0) is not modified during the computation, and is not even exposed after the first transition: for every 1 ≤ i ≤ m, there exists a non-empty stack σi such that σ(ci) = σ(c0)|σi.

(P2) The overall effect of the computation is to push a single node to the stack: the stack σ(cm) can be written as σ(cm) = σ(c0)|h, for some h ∈ Vw.

We can verify that the computation in Figure 2 is a push computation. We can also see that it contains shorter computations that are push computations; one example is the computation γ0 = c1, …, c16, whose overall effect is to push the node 3. In Figure 2, this computation is marked by the zig-zag path traced in bold. The dashed line delineates the stack σ(c1), which is not modified during γ0.

Every computation that consists of a single sh transition is a push computation. Starting from these atoms, we can build larger push computations by means of two (partial) binary operations fla and fra, defined as follows. Let γ1 = c1,0, …, c1,m1 and γ2 = c2,0, …, c2,m2 be push computations on the same input string w such that c1,m1 = c2,0. Then

fra(γ1, γ2) = c1,0, …, c1,m1, c2,1, …, c2,m2, c,

where c is obtained from c2,m2 by applying the ra transition. (The operation fla is defined analogously.) We can verify that fra(γ1, γ2) is another push computation. For instance, with respect to Figure 2, fra(γ1, γ2) = γ0. Conversely, we say that the push computation γ0 can be decomposed into the subcomputations γ1 and γ2, and the operation fra.
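As a small illustration, reusing the hypothetical transition functions sketched above, the composition fra simply concatenates two push computations that share a boundary configuration and appends one ra step; the property check is likewise direct.

```python
def f_ra(gamma1, gamma2):
    """Sketch of the operation fra on computations given as lists of
    (stack, buffer, arcs) configurations; fla would differ only in the
    final transition."""
    assert gamma1[-1] == gamma2[0], "computations must share a configuration"
    return gamma1 + gamma2[1:] + [right_arc(gamma2[-1])]

def is_push(gamma):
    """Check properties (P1) and (P2) for a computation."""
    stack0 = gamma[0][0]
    exposed = all(c[0][:len(stack0)] == stack0 and len(c[0]) > len(stack0)
                  for c in gamma[1:])
    return exposed and len(gamma[-1][0]) == len(stack0) + 1
```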
3.3 Deduction System

Building on the compositional structure of push computations, we now construct a deduction system (in the sense of Shieber et al. (1995)) that tabulates the computations of the arc-standard model for a given input string w = w0 ⋯ wn−1. For 0 ≤ i ≤ n, we shall write βi to denote the buffer [i, …, n − 1]. Thus, β0 denotes the full buffer, associated with the initial configuration I(w), and βn denotes the empty buffer, associated with a terminal configuration c ∈ Ct.

Item form. The items of our deduction system take the form [i, h, j], where 0 ≤ i ≤ h < j ≤ n. The intended interpretation of an item [i, h, j] is: For every configuration c0 with β(c0) = βi, there exists a push computation γ = c0, …, cm such that β(cm) = βj and σ(cm) = σ(c0)|h.

Goal. The only goal item is [0, 0, n], asserting that there exists a complete computation for w.

Axioms. For every stack σ, position i < n and arc set A, by a single sh transition we obtain the push computation (σ, βi, A), (σ|i, βi+1, A). Therefore we can take the set of all items of the form [i, i, i + 1] as the axioms of our system.

Inference rules. The inference rules parallel the composition operations fla and fra. Suppose that we have deduced the items [i, h1, k] and [k, h2, j], where 0 ≤ i ≤ h1 < k ≤ h2 < j ≤ n. The item [i, h1, k] asserts that for every configuration c1,0 with β(c1,0) = βi, there exists a push computation γ1 = c1,0, …, c1,m1 such that β(c1,m1) = βk and σ(c1,m1) = σ(c1,0)|h1. Using the item [k, h2, j], we deduce the existence of a second push computation γ2 = c2,0, …, c2,m2 such that c2,0 = c1,m1, β(c2,m2) = βj, and σ(c2,m2) = σ(c1,0)|h1|h2. By means of fra, we can then compose γ1 and γ2 into a new push computation fra(γ1, γ2) = c1,0, …, c1,m1, c2,1, …, c2,m2, c. Here, β(c) = βj and σ(c) = σ(c1,0)|h1. Therefore, we may generate the item [i, h1, j]. The inference rule for la can be derived analogously.

Item form:   [i, h, j],  0 ≤ i ≤ h < j ≤ |w|
Goal:        [0, 0, |w|]
Axioms:      [i, i, i + 1]
Inference rules:
  [i, h1, k]   [k, h2, j]   ⇒   [i, h2, j]   (la; h2 → h1)
  [i, h1, k]   [k, h2, j]   ⇒   [i, h1, j]   (ra; h1 → h2)

Figure 3: Deduction system for the arc-standard model.

Figure 3 shows the complete deduction system.
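The deduction system of Figure 3 can be run bottom-up as a plain chart algorithm. The following recognizer-style sketch is one possible generic implementation (the item indexing and loop order are my own assumptions); its loops make the cubic space and quintic time of a generic implementation, discussed in Section 3.5, easy to see.

```python
def arc_standard_recognizer(n):
    """Tabulate the deduction system of Figure 3 for a sentence of n tokens
    (token 0 is the artificial root).  Items are triples (i, h, j); the
    chart is the set of derivable items."""
    chart = set()
    for i in range(n):                       # axioms [i, i, i+1]
        chart.add((i, i, i + 1))
    for width in range(2, n + 1):            # combine items by increasing j - i
        for i in range(0, n - width + 1):
            j = i + width
            for k in range(i + 1, j):
                for h1 in range(i, k):
                    if (i, h1, k) not in chart:
                        continue
                    for h2 in range(k, j):
                        if (k, h2, j) in chart:
                            chart.add((i, h2, j))   # la: arc h2 -> h1
                            chart.add((i, h1, j))   # ra: arc h1 -> h2
    return (0, 0, n) in chart
```

In practice one would store back-pointers or scores with each item rather than a bare set; the Viterbi extension in Section 4.3 does exactly that.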
3.4 Completeness and Non-Ambiguity

We have informally argued that our deduction system is sound. To show completeness, we prove the following lemma: For all 0 ≤ i ≤ h < j ≤ |w| and every push computation γ = c0, …, cm on w with β(c0) = βi, β(cm) = βj and σ(cm) = σ(c0)|h, the item [i, h, j] is generated. The proof is by induction on m, and there are two cases:

m = 1. In this case, γ consists of a single sh transition, h = i, j = i + 1, and we need to show that the item [i, i, i + 1] is generated. This holds because this item is an axiom.

m ≥ 2. In this case, γ ends with either a la or a ra transition. Let c be the rightmost configuration in γ that is different from cm and whose stack size is one larger than the size of σ(c0). The computations γ1 = c0, …, c and γ2 = c, …, cm−1 are both push computations with strictly fewer transitions than γ. Suppose that the last transition in γ is ra. In this case, β(c) = βk for some i < k < j, σ(c) = σ(c0)|h with h < k, β(cm−1) = βj, and σ(cm−1) = σ(c0)|h|h′ for some k ≤ h′ < j. By induction, we may assume that we have generated items [i, h, k] and [k, h′, j]. Applying the inference rule for ra, we deduce the item [i, h, j]. An analogous argument can be made for fla.

Apart from being sound and complete, our deduction system also has the property that it assigns at most one derivation to a given item. To see this, note that in the proof of the lemma, the choice of c is uniquely determined: If we take any other configuration c′ that meets the selection criteria, then the computation γ′2 = c′, …, cm−1 is not a push computation, as it contains c as an intermediate configuration, and thereby violates property P1.

3.5 Discussion

Let us briefly take stock of what we have achieved so far. We have provided a deduction system capable of tabulating the set of all computations of an arc-standard parser on a given input string, and proved the correctness of this system relative to an interpretation based on push computations. Inspecting the system, we can see that its generic implementation takes space in O(|w|^3) and time in O(|w|^5). Our deduction system is essentially the same as the one for the CKY algorithm for bilexicalized context-free grammar (Collins, 1996; Gómez-Rodríguez et al., 2008). This equivalence reveals a deep correspondence between the arc-standard model and bilexicalized context-free grammar and, via results by Eisner and Satta (1999), to head automata. In particular, Eisner's and Satta's "hook trick" can be applied to our tabulation to reduce its runtime to O(|w|^4).

4 Adding Features

The main goal with the tabulation of transition-based dependency parsers is to obtain a representation based on which semiring values such as the highest-scoring computation for a given input (and with it, a dependency tree) can be calculated. Such computations involve the use of feature information. In this section, we discuss how our tabulation of the arc-standard system can be extended for this purpose.
  [i, h1, k; ⟨x2, x1⟩, ⟨x1, x3⟩] : v1    [k, h2, j; ⟨x1, x3⟩, ⟨x3, x4⟩] : v2
  ⇒  [i, h1, j; ⟨x2, x1⟩, ⟨x1, x3⟩] : v1 + v2 + ⟨x3, x4⟩ · α_ra          (ra)

  [i, h, j; ⟨x2, x1⟩, ⟨x1, x3⟩] : v
  ⇒  [j, j, j + 1; ⟨x1, x3⟩, ⟨x3, wj⟩] : ⟨x1, x3⟩ · α_sh                  (sh)

Figure 4: Extended inference rules under the feature model φ = ⟨s1.w, s0.w⟩. The annotations indicate how to calculate a candidate for an update of the Viterbi score of the conclusion using the Viterbi scores of the premises.

4.1 Scoring Computations

For the sake of concreteness, suppose that we want to score computations based on the following model, taken from Zhang and Clark (2008). The score of a computation γ is broken down into a sum of scores score(t, ct) for combinations of a transition t in the transition sequence associated with γ and the configuration ct in which t was taken:

score(γ) = Σ_{t ∈ γ} score(t, ct)    (1)

The score score(t, ct) is defined as the dot product of the feature representation of ct relative to a feature model φ and a transition-specific weight vector α_t:

score(t, ct) = φ(ct) · α_t

The feature model φ is a vector ⟨φ1, …, φn⟩ of elementary feature functions, and the feature representation φ(c) of a configuration c is a vector x = ⟨φ1(c), …, φn(c)⟩ of atomic values. Two examples of feature functions are the word form associated with the topmost and second-topmost node on the stack; adopting the notation of Huang and Sagae (2010), we will write these functions as s0.w and s1.w, respectively. Feature functions like these have been used in several parsers (Nivre, 2006; Zhang and Clark, 2008; Huang et al., 2009).
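Under this model, scoring a computation is a single pass over its transitions. The sketch below spells out equation (1) for the two-feature model ⟨s1.w, s0.w⟩ of the running example, reusing the hypothetical transition functions from the earlier sketch; representing each weight vector as a dictionary over feature tuples (so that the dot product becomes a lookup over indicator features) is an assumption for illustration.

```python
def phi(config, words):
    """Feature representation <s1.w, s0.w> of a configuration."""
    stack = config[0]
    s0 = words[stack[-1]] if len(stack) >= 1 else None
    s1 = words[stack[-2]] if len(stack) >= 2 else None
    return (s1, s0)

def score_computation(words, transitions, weights):
    """Equation (1): sum the transition-specific scores over the computation.
    `weights[t]` maps feature tuples to reals and stands in for phi(c) . alpha_t."""
    config = ([], list(range(len(words))), set())   # I(w)
    ops = {"sh": shift, "la": left_arc, "ra": right_arc}
    total = 0.0
    for t in transitions:
        total += weights[t].get(phi(config, words), 0.0)  # t is scored in c_t
        config = ops[t](config)                           # and then applied
    return total
```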
D c0; : : : ; cm now is required to have the feature representation ExL, and the final configuration is required to have the representation ExR: ˚.c0/ D ExL and ˚.cm/ D ExR We shall refer to the vectors ExL and ExR as the leftcontext vector and the right-context vector of the computation
, respectively. We now need to change the deduction rules so that they become faithful to the extended interpretation. Intuitively speaking, we must ensure that the feature values can be computed along the inference rules. As a concrete example, consider the feature model ˚ D hs1:w; s0:wi. In order to integrate this model into our tabulation, we change the rule for ra as in Figure 4, where x1; : : : ; x4 range over possible word forms. The shared variable occurrences in this rule capture the constraints that hold between the feature values of the subcomputations
1 and
2 asserted by the premises, and the computations fra.
1;
2/ asserted by the conclusion. To illustrate this, suppose that
1 and
γ_2 are as in Figure 2. Then the three occurrences of x_3 for instance encode that

  [s0.w](c_6) = [s1.w](c_15) = [s0.w](c_16) = w_3 .

We also need to extend the axioms, which correspond to computations consisting of a single sh transition. The most conservative way to do this is to use a generate-and-test technique: Extend the existing axioms by all valid choices of left-context and right-context vectors, that is, by all pairs x_L, x_R such that there exists a configuration c with Φ(c) = x_L and Φ(sh(c)) = x_R. The task of filtering out useless guesses can then be delegated to the deduction system. A more efficient way is to only have one axiom, for the case where c = I(w), and to add to the deduction system a new, unary inference rule for sh as in Figure 4. This rule only creates items whose left-context vector is the right-context vector of some other item, which prevents the generation of useless items. In the following, we take this second approach, which is also the approach of Huang and Sagae (2010).

  [i, h, j; ⟨x_2, x_1⟩, ⟨x_1, x_3⟩] : (p, v)
    ⟹  [j, j, j+1; ⟨x_1, x_3⟩, ⟨x_3, w_j⟩] : (p + δ, δ)      (sh), where δ = ⟨x_1, x_3⟩ · α_sh

  [i, h_1, k; ⟨x_2, x_1⟩, ⟨x_1, x_3⟩] : (p_1, v_1)     [k, h_2, j; ⟨x_1, x_3⟩, ⟨x_3, x_4⟩] : (p_2, v_2)
    ⟹  [i, h_1, j; ⟨x_2, x_1⟩, ⟨x_1, x_3⟩] : (p_1 + v_2 + δ, v_1 + v_2 + δ)      (ra), where δ = ⟨x_3, x_4⟩ · α_ra

Figure 5: Extended inference rules under the feature model Φ = ⟨s0.w, s1.w⟩. The annotations indicate how to calculate a candidate for an update of the prefix score and Viterbi score of the conclusion.

4.3 Computing Viterbi Scores

Once we have extended our deduction system with feature information, many values of interest can be computed. One simple example is the Viterbi score for an input w, defined as

  arg max_{γ ∈ Γ(w)} score(γ) ,   (2)

where Γ(w) denotes the set of all complete computations for w. The score of a complex computation f_t(γ_1, γ_2) is the sum of the scores of its subcomputations γ_1, γ_2, plus the transition-specific dot product. Since this dot product only depends on the feature representation of the final configuration of γ_2, the Viterbi score can be computed on top of the inference rules using standard techniques. The crucial calculation is indicated in Figure 4.

4.4 Computing Prefix Scores

Another interesting value is the prefix score of an item, which, apart from the Viterbi score, also includes the cost of the best search path leading to the item. Huang and Sagae (2010) use this quantity to order the items in a beam search on top of their dynamic programming method. In our framework, prefix scores can be computed as indicated in Figure 5. Alternatively, we can also use the more involved calculation employed by Huang and Sagae (2010), which allows them to get rid of the left-context vector from their items.¹

¹ The essential idea in the calculation by Huang and Sagae (2010) is to delegate (in the computation of the Viterbi score) the scoring of sh transitions to the inference rules for la/ra.

4.5 Compatibility

So far we have restricted our attention to a concrete and extremely simplistic feature model. The feature models that are used in practical systems are considerably more complex, and not all of them are compatible with our framework in the sense that they can be integrated into our deduction system in the way described in Section 4.2.

For a simple example of a feature model that is incompatible with our tabulation, consider the model Φ′ = ⟨s0.rc.w⟩, whose single feature function extracts the word form of the right child (rc) of the topmost node on the stack. Even if we know the values of this feature for two computations γ_1, γ_2, we have no way to compute its value for the composed computation f_ra(γ_1, γ_2): this value coincides with the word form of the topmost node on the stack associated with γ_2, but in order to have access to it in the context of the ra rule, our feature model would need to also include the feature function s0.w.

The example just given raises the question whether there is a general criterion based on which we can decide if a given feature model is compatible with our tabulation. An attempt to provide such a criterion has been made by Huang and Sagae (2010), who define a constraint on feature models called "monotonicity" and claim that this constraint guarantees that feature values can be computed using their dynamic programming approach. Unfortunately, this claim is wrong. In particular, the feature model Φ′ given above is "monotonic", but cannot be tabulated, neither in our nor in their framework. In general, it seems clear that the question of compatibility is a question about the relation between the tabulation and the feature model, and not about the feature model alone. To find practically useful characterizations of compatibility is an interesting avenue for future research.

5 Arc-Eager Model

Up to now, we have only discussed the arc-standard model. In this section, we show that the framework of push computations also provides a tabulation of another widely-used model for dependency parsing, the arc-eager model (Nivre, 2003).

  (σ, i|β, A) ⊢ (σ|i, β, A)                     (sh)
  (σ|i, j|β, A) ⊢ (σ, j|β, A ∪ {j → i})         (lae)   only if i does not have an incoming arc
  (σ|i, j|β, A) ⊢ (σ|i|j, β, A ∪ {i → j})       (rae)
  (σ|i, β, A) ⊢ (σ, β, A)                       (re)    only if i has an incoming arc

Figure 6: Transitions in the arc-eager model.

5.1 Transition System

The arc-eager model has four types of transitions, shown in Figure 6: Shift (sh) works just like in arc-standard, moving the first node in the buffer to the stack. Left-Arc (lae) creates a new arc with the first node in the buffer as the head and the topmost node on the stack as the dependent, and pops the stack. It can only be applied if the topmost node on the stack has not already been assigned a head, so as to preserve the single-head constraint. Right-Arc (rae) creates an arc in the opposite direction as Left-Arc, and moves the first node in the buffer to the stack. Finally, Reduce (re) simply pops the stack; it can only be applied if the topmost node on the stack has already been assigned a head. Note that, unlike in the case of arc-standard, the parsing process in the arc-eager model is not bottom-up: the right dependents of a node are attached before they have been assigned their own right dependents.

5.2 Shift-Reduce Parsing

If we look at the specification of the transitions of the arc-standard and the arc-eager model and restrict our attention to the effect that they have on the stack and the buffer, then we can see that all seven transitions fall into one of three types:

  (σ, i|β) ⊢ (σ|i, β)       sh, rae        (T1)
  (σ|i|j, β) ⊢ (σ|j, β)     la             (T2)
  (σ|i, β) ⊢ (σ, β)         ra, lae, re    (T3)

We refer to transitions of type T1 as shift and to transitions of type T2 and T3 as reduce transitions. The crucial observation now is that the concept of push computations and the approach to their tabulation that we have taken for the arc-standard system can easily be generalized to other transition systems whose transitions are of the type shift or reduce. In particular, the proof of the correctness of our deduction system that we gave in Section 3 still goes through if instead of sh we write "shift" and instead of la and ra we write "reduce".

5.3 Deduction System

Generalizing our construction for the arc-standard model along these lines, we obtain a tabulation of the arc-eager model.
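As a concrete illustration of the arc-eager transitions in Figure 6, the following sketch implements them over configurations (stack, buffer, arcs). The tuple-based encoding and the helper names are our own illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the arc-eager transitions (Figure 6).
# A configuration is a triple (stack, buffer, arcs); arcs is a set of
# (head, dependent) pairs.  Encoding and names are illustrative only.

def has_head(node, arcs):
    return any(dep == node for (_, dep) in arcs)

def sh(config):
    stack, buffer, arcs = config
    return (stack + [buffer[0]], buffer[1:], arcs)

def lae(config):
    stack, buffer, arcs = config
    i, j = stack[-1], buffer[0]
    assert not has_head(i, arcs)              # precondition of lae
    return (stack[:-1], buffer, arcs | {(j, i)})   # add arc j -> i, pop i

def rae(config):
    stack, buffer, arcs = config
    i, j = stack[-1], buffer[0]
    return (stack + [j], buffer[1:], arcs | {(i, j)})  # add arc i -> j, push j

def re(config):
    stack, buffer, arcs = config
    assert has_head(stack[-1], arcs)          # precondition of re
    return (stack[:-1], buffer, arcs)
```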
Just like in the case of arc-standard, each single shift transition in that model (be it sh or rae) constitutes a push computation, while the reduce transitions induce operations f_lae and f_re. The only difference is that the preconditions of lae and re must be met. Therefore, f_lae(γ_1, γ_2) is only defined if the topmost node on the stack in the final configuration of γ_2 has not yet been assigned a head, and f_re(γ_1, γ_2) is only defined in the opposite case.

Item form. In our deduction system for the arc-eager model we use items of the form [i, h^b, j], where 0 ≤ i ≤ h < j ≤ |w|, and b ∈ {0, 1}. An item [i, h^b, j] has the same meaning as the corresponding item in our deduction system for arc-standard, but also keeps record of whether the node h has been assigned a head (b = 1) or not (b = 0).

Goal. The only goal item is [0, 0^0, |w|]. (The item [0, 0^1, |w|] asserts that the node 0 has a head, which never happens in a complete computation.)

Axioms. Reasoning as in arc-standard, the axioms of the deduction system for the arc-eager model are the items of the form [i, i^0, i+1] and [j, j^1, j+1], where j > 0: the former correspond to the push computations obtained from a single sh, the latter to those obtained from a single rae, which apart from shifting a node also assigns it a head.

Inference rules. Also analogously to arc-standard, if we know that there exists a push computation γ_1 of the form asserted by the item [i, h^b, k], and a push computation γ_2 of the form asserted by [k, g^0, j], where j < |w|, then we can build the push computation f_lae(γ_1, γ_2) of the form asserted by the item [i, h^b, j]. Similarly, if γ_2 is of the form asserted by [k, g^1, j], then we can build f_re(γ_1, γ_2), which again is of the form asserted by [i, h^b, j]. Thus:

  [i, i^b, k]   [k, k^0, j]  ⟹  [i, i^b, j]   (lae) ,        [i, i^b, k]   [k, k^1, j]  ⟹  [i, i^b, j]   (re) .

  Item form:  [i^b, j],  0 ≤ i < j ≤ |w|,  b ∈ {0, 1}
  Goal:  [0^0, |w|]
  Axioms:  [0^0, 1]
  Inference rules:
    [i^b, j]  ⟹  [j^0, j+1]                 (sh)
    [i^b, k]   [k^0, j]  ⟹  [i^b, j]        (lae; j → k),  j < |w|
    [i^b, j]  ⟹  [j^1, j+1]                 (rae; i → j)
    [i^b, k]   [k^1, j]  ⟹  [i^b, j]        (re)

Figure 7: Deduction system for the arc-eager model.

As mentioned above, the correctness and non-ambiguity of the system can be proved as in Section 3. Features can be added in the same way as discussed in Section 4.

5.4 Computational Complexity

Looking at the inference rules, it is clear that an implementation of the deduction system for arc-eager takes space in O(|w|^3) and time in O(|w|^5), just like in the case of arc-standard. However, a closer inspection reveals that we can give even tighter bounds. In all derivable items [i, h^b, j], it holds that i = h. This can easily be shown by induction: The property holds for the axioms, and the first two indexes of a consequent of a deduction rule coincide with the first two indexes of the left antecedent. Thus, if we use the notation [i^b, k] as a shorthand for [i, i^b, k], then we can rewrite the inference rules for the arc-eager system as in Figure 7, where, additionally, we have added unary rules for sh and rae and restricted the set of axioms along the lines set out in Section 4.2. With this formulation, it is apparent that the space complexity of the generic implementation of the deduction system is in fact even in O(|w|^2), and its time complexity is in O(|w|^3).

6 Hybrid Model

We now reverse the approach that we have taken in the previous sections: Instead of tabulating a transition system in order to get a dynamic-programming parser that simulates its computations, we start with a tabular parser and derive a transition system from it. In the new model, dependency trees are built bottom-up as in the arc-standard model, but the set of all computations in the system can be tabulated in space O(|w|^2) and time O(|w|^3), as in arc-eager.

6.1 Deduction System

Gómez-Rodríguez et al. (2008) present a deductive version of the dependency parser of Yamada and Matsumoto (2003); their deduction system is given in Figure 8. The generic implementation of the deduction system takes space O(|w|^2) and time O(|w|^3). In the original interpretation of the deduction system, an item [i, j] asserts the existence of a pair of (projective) dependency trees: the first tree rooted at token w_i, having all nodes in the substring w_i … w_{k-1} as descendants, where i < k ≤ j; and the second tree rooted at token w_j, having all nodes in the substring w_k … w_j as descendants. (Note that we use fencepost indexes, while Gómez-Rodríguez et al. (2008) indexes positions.)

  Item form:  [i, j],  0 ≤ i < j ≤ |w|
  Goal:  [0, |w|]
  Axioms:  [0, 1]
  Inference rules:
    [i, j]  ⟹  [j, j+1]                     (sh)
    [i, k]   [k, j]  ⟹  [i, j]              (lah; j → k),  j < |w|
    [i, k]   [k, j]  ⟹  [i, j]              (ra; i → k)

Figure 8: Deduction system for the hybrid model.

6.2 Transition System

In the context of our tabulation framework, we adopt a new interpretation of items: An item [i, j] has the same meaning as an item [i, i, j] in the tabulation of the arc-standard model; for every configuration c with β(c) = β_i, it asserts the existence of a push computation that starts with c and ends with a configuration c′ for which β(c′) = β_j and σ(c′) = σ(c)|i. If we interpret the inference rules of the system in terms of composition operations on push computations as usual, and also take the intended direction of the dependency arcs into account, then this induces a transition system with three transitions:

  (σ, i|β, A) ⊢ (σ|i, β, A)                  (sh)
  (σ|i, j|β, A) ⊢ (σ, j|β, A ∪ {j → i})      (lah)
  (σ|i|j, β, A) ⊢ (σ|i, β, A ∪ {i → j})      (ra)

We call this transition system the hybrid model, as sh and ra are just like in arc-standard, while lah is like the Left-Arc transition in the arc-eager model (lae), except that it does not have the precondition. Like the arc-standard but unlike the arc-eager model, the hybrid model builds dependencies bottom-up.

7 Conclusion

In this paper, we have provided a general technique for the tabulation of transition-based dependency parsers, and applied it to obtain dynamic programming algorithms for two widely-used parsing models, arc-standard and (for the first time) arc-eager. The basic idea behind our technique is the same as the one implemented by Huang and Sagae (2010) for the special case of the arc-standard model, but instead of their graph-structured stack representation we use a tabulation akin to Lang's approach to the simulation of pushdown automata (Lang, 1974). This considerably simplifies both the presentation and the implementation of parsing algorithms. It has also enabled us to give simple proofs of correctness and establish relations between transition-based parsers and existing parsers based on dynamic programming.

While this paper has focused on the theoretical aspects and the analysis of dynamic programming versions of transition-based parsers, an obvious avenue for future work is the evaluation of the empirical performance and efficiency of these algorithms in connection with specific feature models. The feature models used in transition-based dependency parsing are typically very expressive, and exhaustive search with them quickly becomes impractical even for our cubic-time algorithms of the arc-eager and hybrid model. However, Huang and Sagae (2010) have provided evidence that the use of dynamic programming on top of a transition-based dependency parser can improve accuracy even without exhaustive search. The tradeoff between expressivity of the feature models on the one hand and the efficiency of the search on the other is a topic that we find worth investigating.

Another interesting observation is that dynamic programming makes it possible to use predictive features, which cannot easily be integrated into a non-tabular transition-based parser. This could lead to the development of parsing models that cross the border between transition-based and tabular parsing.

Acknowledgments

All authors contributed equally to the work presented in this paper. M. K. wrote most of the manuscript. C. G.-R. has been partially supported by Ministerio de Educación y Ciencia and FEDER (HUM2007-66607-C04) and Xunta de Galicia (PGIDIT07SIN005206PR, Rede Galega de Procesamento da Linguaxe e Recuperación de Información, Rede Galega de Lingüística de Corpus, Bolsas Estadías INCITE/FSE cofinanced).

References

Giuseppe Attardi. 2006. Experiments with a multilanguage non-projective dependency parser. In Proceedings of the Tenth Conference on Computational Natural Language Learning (CoNLL), pages 166–170, New York, USA. Sylvie Billot and Bernard Lang. 1989. The structure of shared forests in ambiguous parsing. In Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics (ACL), pages 143–151, Vancouver, Canada. Michael Collins. 1996. A new statistical parser based on bigram lexical dependencies.
In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics (ACL), pages 184–191, Santa Cruz, CA, USA. Jason Eisner and Giorgio Satta. 1999. Efficient parsing for bilexical context-free grammars and Head Automaton Grammars. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics (ACL), pages 457–464, College Park, MD, USA. Carlos Gómez-Rodríguez, John Carroll, and David J. Weir. 2008. A deductive approach to dependency parsing. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics (ACL): Human Language Technologies, pages 968–976, Columbus, OH, USA. Joshua Goodman. 1999. Semiring parsing. Computational Linguistics, 25(4):573–605. Liang Huang and Kenji Sagae. 2010. Dynamic programming for linear-time incremental parsing. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1077–1086, Uppsala, Sweden. Liang Huang, Wenbin Jiang, and Qun Liu. 2009. Bilingually-constrained (monolingual) shift-reduce parsing. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1222–1231, Singapore. Bernard Lang. 1974. Deterministic techniques for efficient non-deterministic parsers. In Jacques Loecx, 681 editor, Automata, Languages and Programming, 2nd Colloquium, University of Saarbrücken, July 29–August 2, 1974, number 14 in Lecture Notes in Computer Science, pages 255–269. Springer. Zhifei Li and Jason Eisner. 2009. First- and second-order expectation semirings with applications to minimumrisk training on translation forests. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 40–51, Singapore. Joakim Nivre. 2003. An efficient algorithm for projective dependency parsing. In Proceedings of the Eighth International Workshop on Parsing Technologies (IWPT), pages 149–160, Nancy, France. Joakim Nivre. 2004. Incrementality in deterministic dependency parsing. In Workshop on Incremental Parsing: Bringing Engineering and Cognition Together, pages 50–57, Barcelona, Spain. Joakim Nivre. 2006. Inductive Dependency Parsing, volume 34 of Text, Speech and Language Technology. Springer. Joakim Nivre. 2008. Algorithms for deterministic incremental dependency parsing. Computational Linguistics, 34(4):513–553. Stuart M. Shieber, Yves Schabes, and Fernando Pereira. 1995. Principles and implementation of deductive parsing. Journal of Logic Programming, 24(1–2):3–36. Masaru Tomita. 1986. Efficient Parsing for Natural Language: A Fast Algorithm for Practical Systems. Springer. Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical dependency analysis with support vector machines. In Proceedings of the Eighth International Workshop on Parsing Technologies (IWPT), pages 195–206, Nancy, France. Yue Zhang and Stephen Clark. 2008. A tale of two parsers: Investigating and combining graph-based and transition-based dependency parsing. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 562—571, Honolulu, HI, USA. 682
|
2011
|
68
|
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 683–692, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Shift-Reduce CCG Parsing Yue Zhang University of Cambridge Computer Laboratory [email protected] Stephen Clark University of Cambridge Computer Laboratory [email protected] Abstract CCGs are directly compatible with binarybranching bottom-up parsing algorithms, in particular CKY and shift-reduce algorithms. While the chart-based approach has been the dominant approach for CCG, the shift-reduce method has been little explored. In this paper, we develop a shift-reduce CCG parser using a discriminative model and beam search, and compare its strengths and weaknesses with the chart-based C&C parser. We study different errors made by the two parsers, and show that the shift-reduce parser gives competitive accuracies compared to C&C. Considering our use of a small beam, and given the high ambiguity levels in an automatically-extracted grammar and the amount of information in the CCG lexical categories which form the shift actions, this is a surprising result. 1 Introduction Combinatory Categorial Grammar (CCG; Steedman (2000)) is a lexicalised theory of grammar which has been successfully applied to a range of problems in NLP, including treebank creation (Hockenmaier and Steedman, 2007), syntactic parsing (Hockenmaier, 2003; Clark and Curran, 2007), logical form construction (Bos et al., 2004) and surface realization (White and Rajkumar, 2009). From a parsing perspective, the C&C parser (Clark and Curran, 2007) has been shown to be competitive with state-of-theart statistical parsers on a variety of test suites, including those consisting of grammatical relations (Clark and Curran, 2007), Penn Treebank phrasestructure trees (Clark and Curran, 2009), and unbounded dependencies (Rimell et al., 2009). The binary branching nature of CCG means that it is naturally compatible with bottom-up parsing algorithms such as shift-reduce and CKY (Ades and Steedman, 1982; Steedman, 2000). However, the parsing work by Clark and Curran (2007), and also Hockenmaier (2003) and Fowler and Penn (2010), has only considered chart-parsing. In this paper we fill a gap in the CCG literature by developing a shiftreduce parser for CCG. Shift-reduce parsers have become popular for dependency parsing, building on the initial work of Yamada and Matsumoto (2003) and Nivre and Scholz (2004). One advantage of shift-reduce parsers is that the scoring model can be defined over actions, allowing highly efficient parsing by using a greedy algorithm in which the highest scoring action (or a small number of possible actions) is taken at each step. In addition, high accuracy can be maintained by using a model which utilises a rich set of features for making each local decision (Nivre et al., 2006). Following recent work applying global discriminative models to large-scale structured prediction problems (Collins and Roark, 2004; Miyao and Tsujii, 2005; Clark and Curran, 2007; Finkel et al., 2008), we build our shift-reduce parser using a global linear model, and compare it with the chartbased C&C parser. Using standard development and test sets from CCGbank, our shift-reduce parser gives a labeled F-measure of 85.53%, which is competitive with the 85.45% F-measure of the C&C parser on recovery of predicate-argument dependencies from CCGbank. 
Hence our work shows that 683 transition-based parsing can be successfully applied to CCG, improving on earlier attempts such as Hassan et al. (2008). Detailed analysis shows that our shift-reduce parser yields a higher precision, lower recall and higher F-score on most of the common CCG dependency types compared to C&C. One advantage of the shift-reduce parser is that it easily handles sentences for which it is difficult to find a spanning analysis, which can happen with CCG because the lexical categories at the leaves of a derivation place strong contraints on the set of possible derivations, and the supertagger which provides the lexical categories sometimes makes mistakes. Unlike the C&C parser, the shift-reduce parser naturally produces fragmentary analyses when appropriate (Nivre et al., 2006), and can produce sensible local structures even when a full spanning analysis cannot be found.1 Finally, considering this work in the wider parsing context, it provides an interesting comparison between heuristic beam search using a rich set of features, and optimal dynamic programming search where the feature range is restricted. We are able to perform this comparison because the use of the CCG supertagger means that the C&C parser is able to build the complete chart, from which it can find the optimal derivation, with no pruning whatsoever at the parsing stage. In contrast, the shift-reduce parser uses a simple beam search with a relatively small beam. Perhaps surprisingly, given the ambiguity levels in an automatically-extracted grammar, and the amount of information in the CCG lexical categories which form the shift actions, the shift-reduce parser using heuristic beam search is able to outperform the chart-based parser. 2 CCG Parsing CCG, and the application of CCG to wide-coverage parsing, is described in detail elsewhere (Steedman, 2000; Hockenmaier, 2003; Clark and Curran, 2007). Here we provide only a short description. During CCG parsing, adjacent categories are combined using CCG’s combinatory rules. For example, a verb phrase in English (S\NP) can combine with 1See e.g. Riezler et al. (2002) and Zhang et al. (2007) for chartbased parsers which can produce fragmentary analyses. an NP to its left using function application: NP S\NP ⇒S Categories can also combine using function composition, allowing the combination of “may” ((S\NP)/(S\NP)) and “like” ((S\NP)/NP) in coordination examples such as “John may like but may detest Mary”: (S\NP)/(S\NP) (S\NP)/NP ⇒(S\NP)/NP In addition to binary rules, such as function application and composition, there are also unary rules which operate on a single category in order to change its type. For example, forward type-raising can change a subject NP into a complex category looking to the right for a verb phrase: NP ⇒S/(S\NP) An example CCG derivation is given in Section 3. The resource used for building wide-coverage CCG parsers of English is CCGbank (Hockenmaier and Steedman, 2007), a version of the Penn Treebank in which each phrase-structure tree has been transformed into a normal-form CCG derivation. There are two ways to extract a grammar from this resource. One approach is to extract a lexicon, i.e. a mapping from words to sets of lexical categories, and then manually define the combinatory rule schemas, such as functional application and composition, which combine the categories together. The derivations in the treebank are then used to provide training data for the statistical disambiguation model. 
This is the method used in the C&C parser.2 The second approach is to read the complete grammar from the derivations, by extracting combinatory rule instances from the local trees consisting of a parent category and one or two child categories, and applying only those instances during parsing. (These rule instances also include rules to deal with punctuation and unary type-changing rules, in addition to instances of the combinatory rule schemas.) This is the method used by Hockenmaier (2003) and is the method we adopt in this paper. Fowler and Penn (2010) demonstrate that the second extraction method results in a context-free approximation to the grammar resulting from the first 2Although the C&C default mode applies a restriction for efficiency reasons in which only rule instances seen in CCGbank can be applied, making the grammar of the second type. 684 method, which has the potential to produce a mildlycontext sensitive grammar (given the existence of certain combinatory rules) (Weir, 1988). However, it is important to note that the advantages of CCG, in particular the tight relationship between syntax and semantic interpretation, are still maintained with the second approach, as Fowler and Penn (2010) argue. 3 The Shift-reduce CCG Parser Given an input sentence, our parser uses a stack of partial derivations, a queue of incoming words, and a series of actions—derived from the rule instances in CCGbank—to build a derivation tree. Following Clark and Curran (2007), we assume that each input word has been assigned a POS-tag (from the Penn Treebank tagset) and a set of CCG lexical categories. We use the same maximum entropy POS-tagger and supertagger as the C&C parser. The derivation tree can be transformed into CCG dependencies or grammatical relations by a post-processing step, which essentially runs the C&C parser deterministically over the derivation, interpreting the derivation and generating the required output. The configuration of the parser, at each step of the parsing process, is shown in part (a) of Figure 1, where the stack holds the partial derivation trees that have been built, and the queue contains the incoming words that have not been processed. In the figure, S(H) represents a category S on the stack with head word H, while Qi represents a word in the incoming queue. The set of action types used by the parser is as follows: {SHIFT, COMBINE, UNARY, FINISH}. Each action type represents a set of possible actions available to the parser at each step in the process. The SHIFT-X action pushes the next incoming word onto the stack, and assigns the lexical category X to the word (Figure 1(b)). The label X can be any lexical category from the set assigned to the word being shifted by the supertagger. Hence the shift action performs lexical category disambiguation. This is in contrast to a shift-reduce dependency parser in which a shift action typically just pushes a word onto the stack. The COMBINE-X action pops the top two nodes off the stack, and combines them into a new node, which is pushed back on the stack. The category of Figure 1: The parser configuration and set of actions. the new node is X. A COMBINE action corresponds to a combinatory rule in the CCG grammar (or one of the additional punctuation or type-changing rules), which is applied to the categories of the top two nodes on the stack. The UNARY-X action pops the top of the stack, transforms it into a new node with category X, and pushes the new node onto the stack. 
A UNARY action corresponds to a unary type-changing or typeraising rule in the CCG grammar, which is applied to the category on top of the stack. The FINISH action terminates the parsing process; it can be applied when all input words have been shifted onto the stack. Note that the FINISH action can be applied when the stack contains more than one node, in which case the parser produces a set of partial derivation trees, each corresponding to a node on the stack. This sometimes happens when a full derivation tree cannot be built due to supertagging errors, and provides a graceful solution to the problem of producing high-quality fragmentary parses when necessary. 685 Figure 2: An example parsing process. Figure 2 shows the shift-reduce parsing process for the example sentence “IBM bought Lotus”. First the word “IBM” is shifted onto the stack as an NP; then “bought” is shifted as a transitive verb looking for its object NP on the right and subject NP on the left ((S[dcl]\NP)/NP); and then “Lotus” is shifted as an NP. Then “bought” is combined with its object “Lotus” resulting in a verb phrase looking for its subject on the left (S[dcl]\NP). Finally, the resulting verb phrase is combined with its subject, resulting in a declarative sentence (S[dcl]). A key difference with previous work on shiftreduce dependency (Nivre et al., 2006) and CFG (Sagae and Lavie, 2006b) parsing is that, for CCG, there are many more shift actions – a shift action for each word-lexical category pair. Given the amount of syntactic information in the lexical categories, the choice of correct category, from those supplied by the supertagger, is often a difficult one, and often a choice best left to the parsing model. The C&C parser solves this problem by building the complete packed chart consistent with the lexical categories supplied by the supertagger, leaving the selection of the lexical categories to the Viterbi algorithm. For the shift-reduce parser the choice is also left to the parsing model, but in contrast to C&C the correct lexical category could be lost at any point in the heuristic search process. Hence it is perhaps surprising that we are able to achieve a high parsing accuracy of 85.5%, given a relatively small beam size. 4 Decoding Greedy local search (Yamada and Matsumoto, 2003; Sagae and Lavie, 2005; Nivre and Scholz, 2004) has typically been used for decoding in shift-reduce parsers, while beam-search has recently been applied as an alternative to reduce error-propagation (Johansson and Nugues, 2007; Zhang and Clark, 2008; Zhang and Clark, 2009; Huang et al., 2009). Both greedy local search and beam-search have linear time complexity. We use beam-search in our CCG parser. To formulate the decoding algorithm, we define a candidate item as a tuple ⟨S, Q, F⟩, where S represents the stack with partial derivations that have been built, Q represents the queue of incoming words that have not been processed, and F is a boolean value that represents whether the candidate item has been finished. A candidate item is finished if and only if the FINISH action has been applied to it, and no more actions can be applied to a candidate item after it reaches the finished status. Given an input sentence, we define the start item as the unfinished item with an empty stack and the whole input sentence as the incoming words. A derivation is built from the start item by repeated applications of actions until the item is finished. 
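As a concrete illustration of the candidate item ⟨S, Q, F⟩ and the four action types described above, here is a minimal sketch of one possible encoding. The class names, the tuple-based representation, and the simplified head assignment in COMBINE are our own assumptions, not details of the actual implementation.

```python
# Sketch of the parser's candidate item <S, Q, F> and the action types.
# Names and the head-assignment rule in combine() are illustrative only.
from dataclasses import dataclass, replace
from typing import Tuple

@dataclass(frozen=True)
class Node:
    category: str            # CCG category, e.g. (S[dcl]\NP)/NP
    head: str                # head word of the partial derivation
    children: Tuple = ()     # subtrees; empty for a shifted word

@dataclass(frozen=True)
class Item:
    stack: Tuple[Node, ...]  # S: partial derivation trees built so far
    queue: Tuple[str, ...]   # Q: incoming words not yet processed
    finished: bool = False   # F

def shift(item, category):
    # SHIFT-X: push the next word with lexical category X
    word, rest = item.queue[0], item.queue[1:]
    return Item(item.stack + (Node(category, word),), rest)

def combine(item, category):
    # COMBINE-X: pop the top two nodes, push a new node with category X
    left, right = item.stack[-2], item.stack[-1]
    parent = Node(category, left.head, (left, right))  # head choice simplified;
    return Item(item.stack[:-2] + (parent,), item.queue)  # the real parser picks it per rule

def unary(item, category):
    # UNARY-X: retype the top node to category X
    top = item.stack[-1]
    return Item(item.stack[:-1] + (Node(category, top.head, (top,)),), item.queue)

def finish(item):
    # FINISH: applicable once all input words have been shifted (queue empty)
    return replace(item, finished=True)
```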
To apply beam-search, an agenda is used to hold the N-best partial (unfinished) candidate items at each parsing step. A separate candidate output is 686 function DECODE(input, agenda, list, N, grammar, candidate output): agenda.clear() agenda.insert(GETSTARTITEM(input)) candidate output = NONE while not agenda.empty(): list.clear() for item in agenda: for action in grammar.getActions(item): item′ = item.apply(action) if item′.F == TRUE: if candidate output == NONE or item′.score > candidate output.score: candidate output = item′ else: list.append(item′) agenda.clear() agenda.insert(list.best(N)) Figure 3: The decoding algorithm; N is the agenda size used to record the current best finished item that has been found, since candidate items can be finished at different steps. Initially the agenda contains only the start item, and the candidate output is set to none. At each step during parsing, each candidate item from the agenda is extended in all possible ways by applying one action according to the grammar, and a number of new candidate items are generated. If a newly generated candidate is finished, it is compared with the current candidate output. If the candidate output is none or the score of the newly generated candidate is higher than the score of the candidate output, the candidate output is replaced with the newly generated item; otherwise the newly generated item is discarded. If the newly generated candidate is unfinished, it is appended to a list of newly generated partial candidates. After all candidate items from the agenda have been processed, the agenda is cleared and the N-best items from the list are put on the agenda. Then the list is cleared and the parser moves on to the next step. This process repeats until the agenda is empty (which means that no new items have been generated in the previous step), and the candidate output is the final derivation. Pseudocode for the algorithm is shown in Figure 3. feature templates 1 S0wp, S0c, S0pc, S0wc, S1wp, S1c, S1pc, S1wc, S2pc, S2wc, S3pc, S3wc, 2 Q0wp, Q1wp, Q2wp, Q3wp, 3 S0Lpc, S0Lwc, S0Rpc, S0Rwc, S0Upc, S0Uwc, S1Lpc, S1Lwc, S1Rpc, S1Rwc, S1Upc, S1Uwc, 4 S0wcS1wc, S0cS1w, S0wS1c, S0cS1c, S0wcQ0wp, S0cQ0wp, S0wcQ0p, S0cQ0p, S1wcQ0wp, S1cQ0wp, S1wcQ0p, S1cQ0p, 5 S0wcS1cQ0p, S0cS1wcQ0p, S0cS1cQ0wp, S0cS1cQ0p, S0pS1pQ0p, S0wcQ0pQ1p, S0cQ0wpQ1p, S0cQ0pQ1wp, S0cQ0pQ1p, S0pQ0pQ1p, S0wcS1cS2c, S0cS1wcS2c, S0cS1cS2wc, S0cS1cS2c, S0pS1pS2p, 6 S0cS0HcS0Lc, S0cS0HcS0Rc, S1cS1HcS1Rc, S0cS0RcQ0p, S0cS0RcQ0w, S0cS0LcS1c, S0cS0LcS1w, S0cS1cS1Rc, S0wS1cS1Rc. Table 1: Feature templates. 5 Model and Training We use a global linear model to score candidate items, trained discriminatively with the averaged perceptron (Collins, 2002). Features for a (finished or partial) candidate are extracted from each action that have been applied to build the candidate. Following Collins and Roark (2004), we apply the “early update” strategy to perceptron training: at any step during decoding, if neither the candidate output nor any item in the agenda is correct, decoding is stopped and the parameters are updated using the current highest scored item in the agenda or the candidate output, whichever has the higher score. Table 1 shows the feature templates used by the parser. The symbols S0, S1, S2 and S3 in the table represent the top four nodes on the stack (if existent), and Q0, Q1, Q2 and Q3 represent the front four words in the incoming queue (if existent). 
S0H and S1H represent the subnodes of S0 and S1 that have the lexical head of S0 and S1, respectively. S0L represents the left subnode of S0, when the lexical head is from the right subnode. S0R and S1R represent the right subnode of S0 and S1, respectively, 687 when the lexical head is from the left subnode. If S0 is built by a UNARY action, S0U represents the only subnode of S0. The symbols w, p and c represent the word, the POS, and the CCG category, respectively. These rich feature templates produce a large number of features: 36 million after the first training iteration, compared to around 0.5 million in the C&C parser. 6 Experiments Our experiments were performed using CCGBank (Hockenmaier and Steedman, 2007), which was split into three subsets for training (Sections 02–21), development testing (Section 00) and the final test (Section 23). Extracted from the training data, the CCG grammar used by our parser consists of 3070 binary rule instances and 191 unary rule instances. We compute F-scores over labeled CCG dependencies and also lexical category accuracy. CCG dependencies are defined in terms of lexical categories, by numbering each argument slot in a complex category. For example, the first NP in a transitive verb category is a CCG dependency relation, corresponding to the subject of the verb. Clark and Curran (2007) gives a more precise definition. We use the generate script from the C&C tools3 to transform derivations into CCG dependencies. There is a mismatch between the grammar that generate uses, which is the same grammar as the C&C parser, and the grammar we extract from CCGbank, which contains more rule instances. Hence generate is unable to produce dependencies for some of the derivations our shift-reduce parser produces. In order to allow generate to process all derivations from the shift-reduce parser, we repeatedly removed rules that the generate script cannot handle from our grammar, until all derivations in the development data could be dealt with. In fact, this procedure potentially reduces the accuracy of the shift-reduce parser, but the effect is comparatively small because only about 4% of the development and test sentences contain rules that are not handled by the generate script. All experiments were performed using automati3Available at http://svn.ask.it.usyd.edu.au/trac/candc/wiki; we used the generate and evaluate scripts, as well as the C&C parser, for evaluation and comparison. cally assigned POS-tags, with 10-fold cross validation used to assign POS-tags and lexical categories to the training data. At the supertagging stage, multiple lexical categories are assigned to each word in the input. For each word, the supertagger assigns all lexical categories whose forward-backward probability is above β · max, where max is the highest lexical category probability for the word, and β is a threshold parameter. To give the parser a reasonable freedom in lexical category disambiguation, we used a small β value of 0.0001, which results in 3.6 lexical categories being assigned to each word on average in the training data. For training, but not testing, we also added the correct lexical category to the list of lexical categories for a word in cases when it was not provided by the supertagger. Increasing the size of the beam in the parser beam search leads to higher accuracies but slower running time. 
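Before turning to the beam size, note that the supertagger's category selection rule described above (keep every lexical category whose probability is above β times the highest category probability for the word) can be sketched as follows; the function name and data layout are illustrative assumptions.

```python
# Sketch of the beta-threshold multi-tagging rule described above
# (beta = 0.0001 in the experiments).  Names and data layout are assumed.

def select_categories(category_probs, beta=0.0001):
    """category_probs: dict mapping lexical category -> probability for one word."""
    best = max(category_probs.values())
    return {cat for cat, p in category_probs.items() if p > beta * best}

# Example with three candidate categories for a word:
probs = {"NP": 0.91, "N": 0.08, "(S[dcl]\\NP)/NP": 0.00001}
print(select_categories(probs))   # keeps NP and N; the third falls below the threshold
```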
In our development experiments, the accuracy improvement became small when the beam size reached 16, and so we set the size of the beam to 16 for the remainder of the experiments.

6.1 Development test accuracies

Table 2 shows the labeled precision (lp), recall (lr), F-score (lf), sentence-level accuracy (lsent) and lexical category accuracy (cats) of our parser and the C&C parser on the development data. We ran the C&C parser using the normal-form model (we reproduced the numbers reported in Clark and Curran (2007)), and copied the results of the hybrid model from Clark and Curran (2007), since the hybrid model is not part of the public release. The accuracy of our parser is much better when evaluated on all sentences, partly because C&C failed on 0.94% of the data due to the failure to produce a spanning analysis. Our shift-reduce parser does not suffer from this problem because it produces fragmentary analyses for those cases. When evaluated on only those sentences that C&C could analyze, our parser gave 0.29% higher F-score. Our shift-reduce parser also gave higher accuracies on lexical category assignment. The sentence accuracy of our shift-reduce parser is also higher than C&C, which confirms that our shift-reduce parser produces reasonable sentence-level analyses, despite the possibility for fragmentary analysis.

                      lp.      lr.      lf.      lsent.   cats.    evaluated on
  shift-reduce        87.15%   82.95%   85.00%   33.82%   92.77%   all sentences
  C&C (normal-form)   85.22%   82.52%   83.85%   31.63%   92.40%   all sentences
  shift-reduce        87.55%   83.63%   85.54%   34.14%   93.11%   99.06% (C&C coverage)
  C&C (hybrid)        –        –        85.25%   –        –        99.06% (C&C coverage)
  C&C (normal-form)   85.22%   84.29%   84.76%   31.93%   92.83%   99.06% (C&C coverage)

Table 2: Accuracies on the development test data.

[Figure 4: P & R scores relative to dependency length. Two panels, "Precision comparison by dependency length" and "Recall comparison by dependency length", plotting precision and recall (%) against dependency length in bins of 5, for this paper and C&C.]

6.2 Error comparison with C&C parser

Our shift-reduce parser and the chart-based C&C parser offer two different solutions to the CCG parsing problem. The comparison reported in this section is similar to the comparison between the chart-based MSTParser (McDonald et al., 2005) and shift-reduce MaltParser (Nivre et al., 2006) for dependency parsing. We follow McDonald and Nivre (2007) and characterize the errors of the two parsers by sentence and dependency length and dependency type.

We measured precision, recall and F-score relative to different sentence lengths. Both parsers performed better on shorter sentences, as expected. Our shift-reduce parser performed consistently better than C&C on all sentence lengths, and there was no significant difference in the rate of performance degradation between the parsers as the sentence length increased.

Figure 4 shows the comparison of labeled precision and recall relative to the dependency length (i.e. the number of words between the head and dependent), in bins of size 5 (e.g. the point at x=5 shows the precision or recall for dependency lengths 1 – 5). This experiment was performed using the normal-form version of the C&C parser, and the evaluation was on the sentences for which C&C gave an analysis.
The number of dependencies drops when the dependency length increases; there are 141, 180 and 124 dependencies from the gold-standard, C&C output and our shift-reduce parser output, respectively, when the dependency length is between 21 and 25, inclusive. The numbers drop to 47, 56 and 36 when the dependency length is between 26 and 30. The recall of our parser drops more quickly as the dependency length grows beyond 15. A likely reason is that the recovery of longer-range dependencies requires more processing steps, increasing the chance of the correct structure being thrown off the beam. In contrast, the precision did not drop more quickly than C&C, and in fact is consistently higher than C&C across all dependency lengths, which reflects the fact that the long range dependencies our parser managed to recover are comparatively reliable.

Table 3 shows the comparison of labeled precision (lp), recall (lr) and F-score (lf) for the most common CCG dependency types. The numbers for C&C are for the hybrid model, copied from Clark and Curran (2007). While our shift-reduce parser gave higher precision for almost all categories, it gave higher recall on only half of them, but higher F-scores for all but one dependency type.

  category              arg   lp. (o)   lp. (C)   lr. (o)   lr. (C)   lf. (o)   lf. (C)   freq.
  N/N                   1     95.77%    95.28%    95.79%    95.62%    95.78%    95.45%    7288
  NP/N                  1     96.70%    96.57%    96.59%    96.03%    96.65%    96.30%    4101
  (NP\NP)/NP            2     83.19%    82.17%    89.24%    88.90%    86.11%    85.40%    2379
  (NP\NP)/NP            1     82.53%    81.58%    87.99%    85.74%    85.17%    83.61%    2174
  ((S\NP)\(S\NP))/NP    3     77.60%    71.94%    71.58%    73.32%    74.47%    72.63%    1147
  ((S\NP)\(S\NP))/NP    2     76.30%    70.92%    70.60%    71.93%    73.34%    71.42%    1058
  (S[dcl]\NP)/NP        2     85.60%    81.57%    84.30%    86.37%    84.95%    83.90%    917
  PP/NP                 1     73.76%    75.06%    72.83%    70.09%    73.29%    72.49%    876
  (S[dcl]\NP)/NP        1     85.32%    81.62%    82.00%    85.55%    83.63%    83.54%    872
  (S\NP)\(S\NP)         2     84.44%    86.85%    86.60%    86.73%    85.51%    86.79%    746

Table 3: Accuracy comparison on the most common CCG dependency types. (o) – our parser; (C) – C&C (hybrid)

6.3 Final results

Table 4 shows the accuracies on the test data. The numbers for the normal-form model are evaluated by running the publicly available parser, while those for the hybrid dependency model are from Clark and Curran (2007). Evaluated on all sentences, the accuracies of our parser are much higher than the C&C parser, since the C&C parser failed to produce any output for 10 sentences. When evaluating both parsers on the sentences for which C&C produces an analysis, our parser still gave the highest accuracies.

                       lp.       lr.       lf.       lsent.    cats.     evaluated
  shift-reduce         87.43%    83.61%    85.48%    35.19%    93.12%    all sentences
  C&C (normal-form)    85.58%    82.85%    84.20%    32.90%    92.84%    all sentences
  shift-reduce         87.43%    83.71%    85.53%    35.34%    93.15%    99.58% (C&C coverage)
  C&C (hybrid)         86.17%    84.74%    85.45%    32.92%    92.98%    99.58% (C&C coverage)
  C&C (normal-form)    85.48%    84.60%    85.04%    33.08%    92.86%    99.58% (C&C coverage)
  F&P (Petrov I-5)*    86.29%    85.73%    86.01%    –         –         (F&P ∩ C&C coverage; 96.65% on dev. test)
  C&C (hybrid)*        86.46%    85.11%    85.78%    –         –         (F&P ∩ C&C coverage; 96.65% on dev. test)

Table 4: Comparison with C&C; final test. * – not directly comparable.

The shift-reduce parser gave higher precision, and lower recall, than C&C; it also gave higher sentence-level and lexical category accuracy. The last two rows in the table show the accuracies of Fowler and Penn (2010) (F&P), who applied the CFG parser of Petrov and Klein (2007) to CCG, and the corresponding accuracies for the C&C parser on the same test sentences.
F&P can be treated as another chart-based parser; their evaluation is based on the sentences for which both their parser and C&C produced dependencies (or more specifically those sentences for which generate could produce dependencies), and is not directly comparable with ours, especially considering that their test set is smaller and potentially slightly easier. The final comparison is parser speed. The shiftreduce parser is linear-time (in both sentence length and beam size), and can analyse over 10 sentences per second on a 2GHz CPU, with a beam of 16, which compares very well with other constituency parsers. However, this is no faster than the chartbased C&C parser, although speed comparisons are difficult because of implementation differences (C&C uses heavily engineered C++ with a focus on efficiency). 7 Related Work Sagae and Lavie (2006a) describes a shift-reduce parser for the Penn Treebank parsing task which uses best-first search to allow some ambiguity into the parsing process. Differences with our approach are that we use a beam, rather than best-first, search; we use a global model rather than local models chained together; and finally, our results surpass the best published results on the CCG parsing task, whereas Sagae and Lavie (2006a) matched the best PTB results only by using a parser combination. Matsuzaki et al. (2007) describes similar work to ours but using an automatically-extracted HPSG, rather than CCG, grammar. They also use the generalised perceptron to train a disambiguation model. One difference is that Matsuzaki et al. (2007) use an approximating CFG, in addition to the supertagger, to improve the efficiency of the parser. 690 Ninomiya et al. (2009) (and Ninomiya et al. (2010)) describe a greedy shift-reduce parser for HPSG, in which a single action is chosen at each parsing step, allowing the possibility of highly efficient parsing. Since the HPSG grammar has relatively tight constraints, similar to CCG, the possibility arises that a spanning analysis cannot be found for some sentences. Our approach to this problem was to allow the parser to return a fragmentary analysis; Ninomiya et al. (2009) adopt a different approach based on default unification. Finally, our work is similar to the comparison of the chart-based MSTParser (McDonald et al., 2005) and shift-reduce MaltParser (Nivre et al., 2006) for dependency parsing. MSTParser can perform exhaustive search, given certain feature restrictions, because the complexity of the parsing task is lower than for constituent parsing. C&C can perform exhaustive search because the supertagger has already reduced the search space. We also found that approximate heuristic search for shift-reduce parsing, utilising a rich feature space, can match the performance of the optimal chart-based parser, as well as similar error profiles for the two CCG parsers compared to the two dependency parsers. 8 Conclusion This is the first work to present competitive results for CCG using a transition-based parser, filling a gap in the CCG parsing literature. Considered in terms of the wider parsing problem, we have shown that state-of-the-art parsing results can be obtained using a global discriminative model, one of the few papers to do so without using a generative baseline as a feature. 
The comparison with C&C also allowed us to compare a shift-reduce parser based on heuristic beam search utilising a rich feature set with an optimal chart-based parser whose features are restricted by dynamic programming, with favourable results for the shift-reduce parser. The complementary errors made by the chartbased and shift-reduce parsers opens the possibility of effective parser combination, following similar work for dependency parsing. The parser code can be downloaded at http://www.sourceforge.net/projects/zpar, version 0.5. Acknowledgements We thank the anonymous reviewers for their suggestions. Yue Zhang and Stephen Clark are supported by the European Union Seventh Framework Programme (FP7-ICT-2009-4) under grant agreement no. 247762. References A. E. Ades and M. Steedman. 1982. On the order of words. Linguistics and Philosophy, pages 517 – 558. Johan Bos, Stephen Clark, Mark Steedman, James R. Curran, and Julia Hockenmaier. 2004. Wide-coverage semantic representations from a CCG parser. In Proceedings of COLING-04, pages 1240–1246, Geneva, Switzerland. Stephen Clark and James R. Curran. 2007. Widecoverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics, 33(4):493–552. Stephen Clark and James R. Curran. 2009. Comparing the accuracy of CCG and Penn Treebank parsers. In Proceedings of ACL-2009 (short papers), pages 53– 56, Singapore. Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proceedings of ACL, pages 111–118, Barcelona, Spain. Michael Collins. 2002. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proceedings of EMNLP, pages 1–8, Philadelphia, USA. Jenny Rose Finkel, Alex Kleeman, and Christopher D. Manning. 2008. Feature-based, conditional random field parsing. In Proceedings of the 46th Meeting of the ACL, pages 959–967, Columbus, Ohio. Timothy A. D. Fowler and Gerald Penn. 2010. Accurate context-free parsing with Combinatory Categorial Grammar. In Proceedings of ACL-2010, Uppsala, Sweden. H. Hassan, K. Sima’an, and A. Way. 2008. A syntactic language model based on incremental CCG parsing. In Proceedings of the Second IEEE Spoken Language Technology Workshop, Goa, India. Julia Hockenmaier and Mark Steedman. 2007. CCGbank: A corpus of CCG derivations and dependency structures extracted from the Penn Treebank. Computational Linguistics, 33(3):355–396. Julia Hockenmaier. 2003. Data and Models for Statistical Parsing with Combinatory Categorial Grammar. Ph.D. thesis, University of Edinburgh. Liang Huang, Wenbin Jiang, and Qun Liu. 2009. Bilingually-constrained (monolingual) shift-reduce 691 parsing. In Proceedings of the 2009 EMNLP Conference, pages 1222–1231, Singapore. Richard Johansson and Pierre Nugues. 2007. Incremental dependency parsing using online learning. In Proceedings of the CoNLL/EMNLP Conference, pages 1134–1138, Prague, Czech Republic. Takuya Matsuzaki, Yusuke Miyao, and Jun ichi Tsujii. 2007. Efficient HPSG parsing with supertagging and CFG-filtering. In Proceedings of IJCAI-07, pages 1671–1676, Hyderabad, India. Ryan McDonald and Joakim Nivre. 2007. Characterizing the errors of data-driven dependency parsing models. In Proceedings of EMNLP/CoNLL, pages 122– 131, Prague, Czech Republic. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of the 43rd Meeting of the ACL, pages 91–98, Michigan, Ann Arbor. Yusuke Miyao and Jun’ichi Tsujii. 2005. 
Probabilistic disambiguation models for wide-coverage HPSG parsing. In Proceedings of the 43rd meeting of the ACL, pages 83–90, University of Michigan, Ann Arbor. Takashi Ninomiya, Takuya Matsuzaki, Nobuyuki Shimizu, and Hiroshi Nakagawa. 2009. Deterministic shift-reduce parsing for unification-based grammars by using default unification. In Proceedings of EACL-09, pages 603–611, Athens, Greece. Takashi Ninomiya, Takuya Matsuzaki, Nobuyuki Shimizu, and Hiroshi Nakagawa. 2010. Deterministic shift-reduce parsing for unification-based grammars. Journal of Natural Language Engineering, DOI:10.1017/S1351324910000240. J. Nivre and M. Scholz. 2004. Deterministic dependency parsing of English text. In Proceedings of COLING04, pages 64–70, Geneva, Switzerland. Joakim Nivre, Johan Hall, Jens Nilsson, G¨uls¸en Eryiˇgit, and Svetoslav Marinov. 2006. Labeled pseudoprojective dependency parsing with support vector machines. In Proceedings of CoNLL, pages 221–225, New York, USA. Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Proceedings of HLT/NAACL, pages 404–411, Rochester, New York, April. Stefan Riezler, Tracy H. King, Ronald M. Kaplan, Richard Crouch, John T. Maxwell III, and Mark Johnson. 2002. Parsing the Wall Street Journal using a Lexical-Functional Grammar and discriminative estimation techniques. In Proceedings of the 40th Meeting of the ACL, pages 271–278, Philadelphia, PA. Laura Rimell, Stephen Clark, and Mark Steedman. 2009. Unbounded dependency recovery for parser evaluation. In Proceedings of EMNLP-09, pages 813–821, Singapore. Kenji Sagae and Alon Lavie. 2005. A classifier-based parser with linear run-time complexity. In Proceedings of IWPT, pages 125–132, Vancouver, Canada. Kenji Sagae and Alon Lavie. 2006a. A best-first probabilistic shift-reduce parser. In Proceedings of COLING/ACL poster session, pages 691–698, Sydney, Australia, July. Kenji Sagae and Alon Lavie. 2006b. Parser combination by reparsing. In Proceedings of HLT/NAACL, Companion Volume: Short Papers, pages 129–132, New York, USA. Mark Steedman. 2000. The Syntactic Process. The MIT Press, Cambridge, Mass. David Weir. 1988. Characterizing Mildly ContextSensitive Grammar Formalisms. Ph.D. thesis, University of Pennsylviania. Michael White and Rajakrishnan Rajkumar. 2009. Perceptron reranking for CCG realization. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 410–419, Singapore. H Yamada and Y Matsumoto. 2003. Statistical dependency analysis using support vector machines. In Proceedings of IWPT, Nancy, France. Yue Zhang and Stephen Clark. 2008. A tale of two parsers: investigating and combining graph-based and transition-based dependency parsing using beamsearch. In Proceedings of EMNLP-08, Hawaii, USA. Yue Zhang and Stephen Clark. 2009. Transition-based parsing of the Chinese Treebank using a global discriminative model. In Proceedings of IWPT, Paris, France, October. Yi Zhang, Valia Kordoni, and Erin Fitzgerald. 2007. Partial parse selection for robust deep processing. In Proceedings of the ACL 2007 Workshop on Deep Linguistic Processing, Prague, Czech Republic. 692
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 62–71, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Domain Adaptation by Constraining Inter-Domain Variability of Latent Feature Representation Ivan Titov Saarland University Saarbruecken, Germany [email protected] Abstract We consider a semi-supervised setting for domain adaptation where only unlabeled data is available for the target domain. One way to tackle this problem is to train a generative model with latent variables on the mixture of data from the source and target domains. Such a model would cluster features in both domains and ensure that at least some of the latent variables are predictive of the label on the source domain. The danger is that these predictive clusters will consist of features specific to the source domain only and, consequently, a classifier relying on such clusters would perform badly on the target domain. We introduce a constraint enforcing that marginal distributions of each cluster (i.e., each latent variable) do not vary significantly across domains. We show that this constraint is effective on the sentiment classification task (Pang et al., 2002), resulting in scores similar to the ones obtained by the structural correspondence methods (Blitzer et al., 2007) without the need to engineer auxiliary tasks. 1 Introduction Supervised learning methods have become a standard tool in natural language processing, and large training sets have been annotated for a wide variety of tasks. However, most learning algorithms operate under assumption that the learning data originates from the same distribution as the test data, though in practice this assumption is often violated. This difference in the data distributions normally results in a significant drop in accuracy. To address this problem a number of domain-adaptation methods has recently been proposed (see e.g., (Daum´e and Marcu, 2006; Blitzer et al., 2006; Bickel et al., 2007)). In addition to the labeled data from the source domain, they also exploit small amounts of labeled data and/or unlabeled data from the target domain to estimate a more predictive model for the target domain. In this paper we focus on a more challenging and arguably more realistic version of the domainadaptation problem where only unlabeled data is available for the target domain. One of the most promising research directions on domain adaptation for this setting is based on the idea of inducing a shared feature representation (Blitzer et al., 2006), that is mapping from the initial feature representation to a new representation such that (1) examples from both domains ‘look similar’ and (2) an accurate classifier can be trained in this new representation. Blitzer et al. (2006) use auxiliary tasks based on unlabeled data for both domains (called pivot features) and a dimensionality reduction technique to induce such shared representation. The success of their domain-adaptation method (Structural Correspondence Learning, SCL) crucially depends on the choice of the auxiliary tasks, and defining them can be a non-trivial engineering problem for many NLP tasks (Plank, 2009). In this paper, we investigate methods which do not use auxiliary tasks to induce a shared feature representation. We use generative latent variable models (LVMs) learned on all the available data: unlabeled data for both domains and on the labeled data for the source domain. 
Our LVMs use vectors of latent features 62 to represent examples. The latent variables encode regularities observed on unlabeled data from both domains, and they are learned to be predictive of the labels on the source domain. Such LVMs can be regarded as composed of two parts: a mapping from initial (normally, word-based) representation to a new shared distributed representation, and also a classifier in this representation. The danger of this semi-supervised approach in the domain-adaptation setting is that some of the latent variables will correspond to clusters of features specific only to the source domain, and consequently, the classifier relying on this latent variable will be badly affected when tested on the target domain. Intuitively, one would want the model to induce only those features which generalize between domains. We encode this intuition by introducing a term in the learning objective which regularizes inter-domain difference in marginal distributions of each latent variable. Another, though conceptually similar, argument for our method is coming from theoretical results which postulate that the drop in accuracy of an adapted classifier is dependent on the discrepancy distance between the source and target domains (Blitzer et al., 2008; Mansour et al., 2009; Ben-David et al., 2010). Roughly, the discrepancy distance is small when linear classifiers cannot distinguish between examples from different domains. A necessary condition for this is that the feature expectations do not vary significantly across domains. Therefore, our approach can be regarded as minimizing a coarse approximation of the discrepancy distance. The introduced term regularizes model expectations and it can be viewed as a form of a generalized expectation (GE) criterion (Mann and McCallum, 2010). Unlike the standard GE criterion, where a model designer defines the prior for a model expectation, our criterion postulates that the model expectations should be similar across domains. In our experiments, we use a form of Harmonium Model (Smolensky, 1986) with a single layer of binary latent variables. Though exact inference with this class of models is infeasible we use an efficient approximation (Bengio and Delalleau, 2007), which can be regarded either as a mean-field approximation to the reconstruction error or a deterministic version of the Contrastive Divergence sampling method (Hinton, 2002). Though such an estimator is biased, in practice, it yields accurate models. We explain how the introduced regularizer can be integrated into the stochastic gradient descent learning algorithm for our model. We evaluate our approach on adapting sentiment classifiers on 4 domains: books, DVDs, electronics and kitchen appliances (Blitzer et al., 2007). The loss due to transfer to a new domain is very significant for this task: in our experiments it was approaching 9%, in average, for the non-adapted model. Our regularized model achieves 35% average relative error reduction with respect to the nonadapted classifier, whereas the non-regularized version demonstrates a considerably smaller reduction of 26%. Both the achieved error reduction and the absolute score match the results reported in (Blitzer et al., 2007) for the best version1 of the SCL method (SCL-MI, 36%), suggesting that our approach is a viable alternative to SCL. The rest of the paper is structured as follows. In Section 2 we introduce a model which uses vectors of latent variables to model statistical dependencies between the elementary features. 
In Section 3 we discuss its applicability in the domain-adaptation setting, and introduce constraints on inter-domain variability as a way to address the discovered limitations. Section 4 describes approximate learning and inference algorithms used in our experiments. In Section 5 we provide an empirical evaluation of the proposed method. We conclude in Section 6 with further examination of the related work. 2 The Latent Variable Model The adaptation method advocated in this paper is applicable to any joint probabilistic model which uses distributed representations, i.e. vectors of latent variables, to abstract away from hand-crafted features. These models, for example, include Restricted Boltzmann Machines (Smolensky, 1986; Hinton, 2002) and Sigmoid Belief Networks (SBNs) (Saul et al., 1996) for classification and regression tasks, Factorial HMMs (Ghahramani and Jordan, 1997) for sequence labeling problems, Incremental SBNs for parsing problems (Titov and Henderson, 2007a), 1Among the versions which do not exploit labeled data from the target domain. 63 as well as different types of Deep Belief Networks (Hinton and Salakhutdinov, 2006). The power of these methods is in their ability to automatically construct new features from elementary ones provided by the model designer. This feature induction capability is especially desirable for problems where engineering features is a labor-intensive process (e.g., multilingual syntactic parsing (Titov and Henderson, 2007b)), or for multitask learning problems where the nature of interactions between the tasks is not fully understood (Collobert and Weston, 2008; Gesmundo et al., 2009). In this paper we consider classification tasks, namely prediction of sentiment polarity of a user review (Pang et al., 2002), and model the joint distribution of the binary sentiment label y ∈{0, 1} and the multiset of text features x, xi ∈X. The hidden variable vector z (zi ∈{0, 1}, i = 1, . . . , m) encodes statistical dependencies between components of x and also dependencies between the label y and the features x. Intuitively, the model can be regarded as a logistic regression classifier with latent features. The model assumes that the features and the latent variable vector are generated jointly from a globallynormalized model and then the label y is generated from a conditional distribution dependent on z. Both of these distributions, P(x, z) and P(y|z), are parameterized as log-linear models and, consequently, our model can be seen as a combination of an undirected Harmonium model (Smolensky, 1986) and a directed SBN model (Saul et al., 1996). The formal definition is as follows: (1) Draw (x, z) ∼P(x, z|v), (2) Draw label y ∼σ(w0 + Pm i=1 wizi), where v and w are parameters, σ is the logistic sigmoid function, σ(t) = 1/(1 + e−t), and the joint distribution of (x, z) is given by the Gibbs distribution: P(x, z|v) ∝exp( |x| X j=1 vxj0+ n X i=1 v0izi+ |x|,n X j,i=1 vxjizi). Figure 1 presents the corresponding graphical model. Note that the arcs between x and z are undirected, whereas arcs between y and z are directed. The parameters of this model θ = (v, w) can be estimated by maximizing joint likelihood L(θ) of labeled data for the source domain {x(l), y(l)}l∈SL ... ... x z y v w Figure 1: The latent variable model: x, z, y are random variables, dependencies between x and z are parameterized by matrix v, and dependencies between z and y - by vector w. 
and unlabeled data for the source and target domain {x(l)}l∈SU∪TU , where SU and TU stand for the unlabeled datasets for the source and target domains, respectively. However, given that, first, amount of unlabeled data |SU ∪TU| normally vastly exceeds the amount of labeled data |SL| and, second, the number of features for each example |x(l)| is usually large, the label y will have only a minor effect on the mapping from the initial features x to the latent representation z (i.e. on the parameters v). Consequently, the latent representation induced in this way is likely to be inappropriate for the classification task in question. Therefore, we follow (McCallum et al., 2006) and use a multi-conditional objective, a specific form of hybrid learning, to emphasize the importance of labels y: L(θ, α)=α X l∈SL log P(y(l)|x(l), θ)+ X l∈SU∪TU∪SL log P(x(l)|θ), where α is a weight, α > 1. Direct maximization of the objective is problematic, as it would require summation over all the 2m latent vectors z. Instead we use a meanfield approximation. Similarly, an efficient approximate inference algorithm is used to compute arg maxy P(y|x, θ) at testing time. The approximations are described in Section 4. 3 Constraints on Inter-Domain Variability As we discussed in the introduction, our goal is to provide a method for domain adaptation based on semi-supervised learning of models with distributed representations. In this section, we first discuss the shortcomings of domain adaptation with the above-described semi-supervised approach and motivate constraints on inter-domain variability of 64 the induced shared representation. Then we propose a specific form of this constraint based on the Kullback-Leibler (KL) divergence. 3.1 Motivation for the Constraints Each latent variable zi encodes a cluster or a combination of elementary features xj. At least some of these clusters, when induced by maximizing the likelihood L(θ, α) with sufficiently large α, will be useful for the classification task on the source domain. However, when the domains are substantially different, these predictive clusters are likely to be specific only to the source domain. For example, consider moving from reviews of electronics to book reviews: the cluster of features related to equipment reliability and warranty service will not generalize to books. The corresponding latent variable will always be inactive on the books domain (or always active, if negative correlation is induced during learning). Equivalently, the marginal distribution of this variable will be very different for both domains. Note that the classifier, defined by the vector w, is only trained on the labeled source examples {x(l), y(l)}l∈SL and therefore it will rely on such latent variables, even though they do not generalize to the target domain. Clearly, the accuracy of such classifier will drop when it is applied to target domain examples. To tackle this issue, we introduce a regularizing term which penalizes differences in the marginal distributions between the domains. In fact, we do not need to consider the behavior of the classifier to understand the rationale behind the introduction of the regularizer. Intuitively, when adapting between domains, we are interested in representations z which explain domain-independent regularities rather than in modeling inter-domain differences. The regularizer favors models which focus on the former type of phenomena rather than the latter. 
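To make this intuition concrete, the following minimal sketch (ours, not part of the system described here; it assumes a bag-of-words count matrix, numpy, and an arbitrary divergence threshold) flags latent variables whose empirical marginal activation differs sharply between the source and target unlabeled data:

    import numpy as np

    def mean_activations(X, v0, V):
        # mu[l, i] = P(z_i = 1 | x^(l), v) under a logistic / mean-field view:
        # sigma(v_{0i} + sum_j v_{x_j, i}); X is a (num_examples x vocab) count matrix.
        return 1.0 / (1.0 + np.exp(-(v0 + X.dot(V))))

    def domain_specific_latents(X_src, X_tgt, v0, V, gap=0.4):
        # Empirical per-domain marginals, averaged over unlabeled examples.
        p_src = mean_activations(X_src, v0, V).mean(axis=0)
        p_tgt = mean_activations(X_tgt, v0, V).mean(axis=0)
        # Latent variables whose marginals diverge strongly are the kind of
        # source-only clusters the proposed regularizer is meant to discourage.
        return np.where(np.abs(p_src - p_tgt) > gap)[0]

    # Toy usage with random parameters (vocab size 6, m = 3 latent variables).
    rng = np.random.RandomState(0)
    V, v0 = rng.randn(6, 3), rng.randn(3)
    X_src = rng.poisson(1.0, size=(20, 6)).astype(float)
    X_tgt = rng.poisson(1.0, size=(20, 6)).astype(float)
    print(domain_specific_latents(X_src, X_tgt, v0, V))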
Another motivation for the form of regularization we propose originates from theoretical analysis of the domain adaptation problems (Ben-David et al., 2010; Mansour et al., 2009; Blitzer et al., 2007). Under the assumption that there exists a domainindependent scoring function, these analyses show that the drop in accuracy is upper-bounded by the quantity called discrepancy distance. The discrepancy distance is dependent on the feature representation z, and the input distributions for both domains PS(z) and PT (z), and is defined as dz(S,T)=max f,f′ |EPS[f(z)̸=f′(z)]−EPT [f(z)̸=f′(z)]|, where f and f′ are arbitrary linear classifiers in the feature representation z. The quantity EP [f(z)̸=f′(z)] measures the probability mass assigned to examples where f and f′ disagree. Then the discrepancy distance is the maximal change in the size of this disagreement set due to transfer between the domains. For a more restricted class of classifiers which rely only on any single feature2 zi, the distance is equal to the maximum over the change in the distributions P(zi). Consequently, for arbitrary linear classifiers we have: dz(S,T) ≥ max i=1,...,m |EPS[zi = 1] −EPT [zi = 1]|. It follows that low inter-domain variability of the marginal distributions of latent variables is a necessary condition for low discrepancy distance. Minimizing the difference in the marginal distributions can be regarded as a coarse approximation to the minimization of the distance. However, we have to concede that the above argument is fairly informal, as the generalization bounds do not directly apply to our case: (1) our feature representation is learned from the same data as the classifier, (2) we cannot guarantee that the existence of a domainindependent scoring function is preserved under the learned transformation x→z and (3) in our setting we have access not only to samples from P(z|x, θ) but also to the distribution itself. 3.2 The Expectation Criterion Though the above argument suggests a specific form of the regularizing term, we believe that the penalizer should not be very sensitive to small differences in the marginal distributions, as useful variables (clusters) are likely to have somewhat different marginal distributions in different domains, but it should severely penalize extreme differences. To achieve this goal we instead propose to use the symmetrized Kullback-Leibler (KL) divergence between the marginal distributions as the penalty. The 2We consider only binary features here. 65 derivative of the symmetrized KL divergence is large when one of the marginal distributions is concentrated at 0 or 1 with another distribution still having high entropy, and therefore such configurations are severely penalized.3 Formally, the regularizer G(θ) is defined as G(θ) = m X i=1 D(PS(zi|θ)||PT (zi|θ)) +D(PT (zi|θ)||PS(zi|θ)), (1) where PS(zi) and PT (zi) stand for the training sample estimates of the marginal distributions of latent features, for instance: PT (zi = 1|θ) = 1 |TU| X l∈TU P(zi = 1|x(l), θ). We augment the multi-conditional log-likelihood L(θ, α) with the weighted regularization term G(θ) to get the composite objective function: LR(θ, α, β) = L(θ, α) −βG(θ), β > 0. 
Note that this regularization term can be regarded as a form of the generalized expectation (GE) criteria (Mann and McCallum, 2010), where GE criteria are normally defined as KL divergences between a prior expectation of some feature and the expectation of this feature given by the model, where the prior expectation is provided by the model designer as a form of weak supervision. In our case, both expectations are provided by the model but on different domains. Note that the proposed regularizer can be trivially extended to support the multi-domain case (Mansour et al., 2008) by considering symmetrized KL divergences for every pair of domains or regularizing the distributions for every domain towards their average. More powerful regularization terms can also be motivated by minimization of the discrepancy distance but their optimization is likely to be expensive, whereas LR(θ, α, β) can be optimized efficiently. 3An alternative is to use the Jensen-Shannon (JS) divergence, however, our preliminary experiments seem to suggest that the symmetrized KL divergence is preferable. Though the two divergences are virtually equivalent when the distributions are very similar (their ratio tends to a constant as the distributions go closer), the symmetrized KL divergence stronger penalizes extreme differences and this is important for our purposes. 4 Learning and Inference In this section we describe an approximate learning algorithm based on the mean-field approximation. Though we believe that our approach is independent of the specific learning algorithm, we provide the description for completeness. We also describe a simple approximate algorithm for computing P(y|x, θ) at test time. The stochastic gradient descent algorithm iterates over examples and updates the weight vector based on the contribution of every considered example to the objective function LR(θ, α, β). To compute these updates we need to approximate gradients of ∇θ log P(y(l)|x(l), θ) (l ∈SL), ∇θ log P(x(l)|θ) (l ∈SL ∪SU ∪TU) as well as to estimate the contribution of a given example to the gradient of the regularizer ∇θG(θ). In the next sections we will describe how each of these terms can be estimated. 4.1 Conditional Likelihood Term We start by explaining the mean-field approximation of log P(y|x, θ). First, we compute the means µ = (µ1, . . . , µm): µi = P(zi = 1|x, v) = σ(v0i + P|x| j=1 vxji). Now we can substitute them instead of z to approximate the conditional probability of the label: P(y = 1|x, θ) =P z P(y|z, w)P(z|x, v) ∝σ(w0 + Pm i=1 wiµi). We use this estimate both at testing time and also to compute gradients ∇θ log P(y(l)|x(l), θ) during learning. The gradients can be computed efficiently using a form of back-propagation. Note that with this approximation, we do not need to normalize over the feature space, which makes the model very efficient at classification time. This approximation is equivalent to the computation of the two-layer perceptron with the soft-max activation function (Bishop, 1995). However, the above derivation provides a probabilistic interpretation of the hidden layer. 4.2 Unlabeled Likelihood Term In this section, we describe how the unlabeled likelihood term is optimized in our stochastic learning 66 algorithm. First, we note that, given the directed nature of the arcs between z and y, the weights w do not affect the probability of input x, that is P(x|θ) = P(x|v). 
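Before turning to the unlabeled likelihood term in more detail, the quantities introduced so far can be pictured with a short sketch. It is illustrative only: the mean-field label probability follows Section 4.1 and the penalty follows equation (1), but the dense numpy parameterization, the clipping constant, and all variable names are choices of the sketch rather than of the system.

    import numpy as np

    def sigmoid(t):
        return 1.0 / (1.0 + np.exp(-t))

    def label_probability(x_counts, v0, V, w0, w):
        # Mean-field means mu_i = sigma(v_{0i} + sum_j v_{x_j, i}), then the label
        # probability is approximated by sigma(w_0 + sum_i w_i mu_i) (Section 4.1).
        mu = sigmoid(v0 + x_counts.dot(V))
        return sigmoid(w0 + mu.dot(w)), mu

    def sym_kl_penalty(p_src, p_tgt, eps=1e-12):
        # G(theta) of equation (1): symmetrized KL divergence between the Bernoulli
        # marginals of each latent variable, summed over the m variables.
        p = np.clip(p_src, eps, 1.0 - eps)
        q = np.clip(p_tgt, eps, 1.0 - eps)
        kl_pq = p * np.log(p / q) + (1.0 - p) * np.log((1.0 - p) / (1.0 - q))
        kl_qp = q * np.log(q / p) + (1.0 - q) * np.log((1.0 - q) / (1.0 - p))
        return float(np.sum(kl_pq + kl_qp))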
Instead of directly approximating the gradient ∇v log P(x(l)|v), we use a deterministic version of the Contrastive Divergence (CD) algorithm, equivalent to the mean-field approximation of the reconstruction error used in training autoassociaters (Bengio and Delalleau, 2007). The CD-based estimators are biased estimators but are guaranteed to converge. Intuitively, maximizing the likelihood of unlabeled data is closely related to minimizing the reconstruction error, that is training a model to discover such mapping parameters u that z encodes all the necessary information to accurately reproduce x(l) from z for every training example x(l). Formally, the meanfield approximation to the negated reconstruction error is defined as ˆL(x(l), v) = log P(x(l)|µ, v), where the means, µi = P(zi = 1|x(l), v), are computed as in the preceding section. Note that when computing the gradient of ∇v ˆL, we need to take into account both the forward and backward mappings: the computation of the means µ from x(l) and the computation of the log-probability of x(l) given the means µ: dˆL dvki = ∂ˆL ∂vki + ∂ˆL ∂µi dµi dvki . 4.3 Regularization Term The criterion G(θ) is also independent of the classifier parameters w, i.e. G(θ) = G(v), and our goal is to compute the contribution of a considered example l to the gradient ∇vG(v). The regularizer G(v) is defined as in equation (1) and it is a function of the sample-based domainspecific marginal distributions of latent variables PS and PT : PT (zi = 1|θ) = 1 |TU| X l∈TU µ(l) i , where the means µ(l) i = P(zi = 1|x(l), v); PS can be re-written analogously. G(v) is dependent on the parameters v only via the mean activations of the latent variables µ(l), and contribution of each example l can be computed by straightforward differentiation: dG(l)(v) dvki =(log p p′ −log 1 −p 1 −p′ −p′ p + 1 −p′ 1 −p )dµ(l) i dvki , where p = PS(zi = 1|θ) and p′ = PT (zi = 1|θ) if l is from the source domain, and, inversely, p = PT (zi = 1|θ) and p′ = PS(zi = 1|θ), otherwise. One problem with the above expression is that the exact computation of PS and PT requires recomputation of the means µ(l) for all the examples after each update of the parameters, resulting in O(|SL ∪SU ∪TU|2) complexity of each iteration of stochastic gradient descent. Instead, we shuffle examples and use amortization; we approximate PS at update t by: ˆP (t) S (zi = 1)= ( (1−γ) ˆP (t−1) S (zi =1)+γµ(l) i, l∈SL∪SU ˆP (t−1) S (zi = 1), otherwise, where l is an example considered at update t. The approximation ˆPT is computed analogously. 5 Empirical Evaluation In this section we empirically evaluate our approach on the sentiment classification task. We start with the description of the experimental set-up and the baselines, then we present the results and discuss the utility of the constraint on inter-domain variability. 5.1 Experimental setting To evaluate our approach, we consider the same dataset as the one used to evaluate the SCL method (Blitzer et al., 2007). The dataset is composed of labeled and unlabeled reviews of four different product types: books, DVDs, electronics and kitchen appliances. For each domain, the dataset contains 1,000 labeled positive reviews and 1,000 labeled negative reviews, as well as several thousands of unlabeled examples (4,919 reviews per domain in average: ranging from 3,685 for DVDs to 5,945 for kitchen appliances). As in Blitzer et al. (2007), we randomly split each labelled portion into 1,600 examples for training and 400 examples for testing. 
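The amortized estimate at the end of Section 4.3 amounts to an exponential running average over the shuffled example stream. The sketch below is again illustrative only: gamma plays the role of the momentum in the displayed update rule, and the initial value of 0.5 for the estimates is an arbitrary choice of the sketch.

    import numpy as np

    def amortized_marginals(mus, domains, m, gamma=0.99):
        # mus: per-example mean activations mu^(l), in the shuffled visiting order;
        # domains: 'S' or 'T' for each example.  Each step follows
        # P_hat^(t) = (1 - gamma) * P_hat^(t-1) + gamma * mu^(l) for the example's
        # own domain, while the other domain's estimate is left unchanged.
        p_S = np.full(m, 0.5)
        p_T = np.full(m, 0.5)
        for mu_l, d in zip(mus, domains):
            if d == 'S':
                p_S = (1.0 - gamma) * p_S + gamma * mu_l
            else:
                p_T = (1.0 - gamma) * p_T + gamma * mu_l
        return p_S, p_T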
67 70 75 80 85 Books 70.8 72.7 74.7 76.5 75.6 83.3 DVD Electronics Kitchen Average Base NoReg Reg Reg+ In-domain 73.3 74.6 74.8 76.2 75.4 82.8 77.6 75.6 73.9 76.6 77.9 78.8 84.6 NoReg+ 74.6 76.0 78.9 80.2 85.8 79.0 77.7 83.2 82.1 80.0 86.5 Figure 2: Averages accuracies when transferring to books, DVD, electronics and kitchen appliances domains, and average accuracy over all 12 domain pairs. We evaluate the performance of our domainadaptation approach on every ordered pair of domains. For every pair, the semi-supervised methods use labeled data from the source domain and unlabeled data from both domains. We compare them with two supervised methods: a supervised model (Base) which is trained on the source domain data only, and another supervised model (Indomain) which is learned on the labeled data from the target domain. The Base model can be regarded as a natural baseline model, whereas the In-domain model is essentially an upper-bound for any domainadaptation method. All the methods, supervised and semi-supervised, are based on the model described in Section 2. Instead of using the full set of bigram and unigram counts as features (Blitzer et al., 2007), we use a frequency cut-off of 30 to remove infrequent ngrams. This does not seem to have an adverse effect on the accuracy but makes learning very efficient: the average training time for the semi-supervised methods was about 20 minutes on a standard PC. We coarsely tuned the parameters of the learning methods using a form of cross-validation. Both the parameter of the multi-conditional objective α (see Section 2) and the weighting for the constraint β (see Section 3.2) were set to 5. We used 25 iterations of stochastic gradient descent. The initial learning rate and the weight decay (the inverse squared variance of the Gaussian prior) were set to 0.01, and both parameters were reduced by the factor of 2 every iteration the objective function estimate went down. The size of the latent representation was equal to 10. The stochastic weight updates were amortized with the momentum (γ) of 0.99. We trained the model both without regularization of the domain variability (NoReg, β = 0), and with the regularizing term (Reg). For the SCL method to produce an accurate classifier for the target domain it is necessary to train a classifier using both the induced shared representation and the initial nontransformed representation. In our case, due to joint learning and non-convexity of the learning problem, this approach would be problematic.4 Instead, we combine predictions of the semi-supervised models Reg and NoReg with the baseline out-of-domain model (Base) using the product-of-experts combination (Hinton, 2002), the corresponding methods are called Reg+ and NoReg+, respectively. In all our models, we augmented the vector z with an additional component set to 0 for examples in the source domain and to 1 for the target domain examples. In this way, we essentially subtracted a unigram domain-specific model from our latent variable model in the hope that this will further reduce the domain dependence of the rest of the model parameters. In preliminary experiments, this modification was beneficial for all the models including the non-constrained one (NoReg). 5.2 Results and Discussion The results of all the methods are presented in Figure 2. The 4 leftmost groups of results correspond to a single target domain, and therefore each of 4The latent variables are not likely to learn any useful mapping in the presence of observable features. 
Special training regimes may be used to attempt to circumvent this problem. 68 them is an average over experiments on 3 domainpairs, for instance, the group Books represents an average over adaptation experiments DVDs→books, electronics→books, kitchen→books. The rightmost group of the results corresponds to the average over all 12 experiments. First, observe that the total drop in the accuracy when moving to the target domain is 8.9%: from 84.6% demonstrated by the In-domain classifier to 75.6% shown by the non-adapted Base classifier. For convenience, we also present the errors due to transfer in a separate Table 1: our best method (Reg+) achieves 35% relative reduction of this loss, decreasing the gap to 5.7%. Now, let us turn to the question of the utility of the constraints. First, observe that the non-regularized version of the model (NoReg) often fails to outperform the baseline and achieves the scores considerably worse than the results of the regularized version (2.6% absolute difference). We believe that this happens because the clusters induced when optimizing the non-regularized learning objective are often domain-specific. The regularized model demonstrates substantially better results slightly beating the baseline in most cases. Still, to achieve a larger decrease of the domain-adaptation error, it was necessary to use the combined models, Reg+ and NoReg+. Here, again, the regularized model substantially outperforms the non-regularized one (35% against 26% relative error reduction for Reg+ and NoReg+, respectively). In Table 1, we also compare the results of our method with the results of the best version of the SCL method (SCL-MI) reported in Blitzer et al. (2007). The average error reductions for our method Reg+ and for the SCL method are virtually equal. However, formally, these two numbers are not directly comparable. First, the random splits are different, though this is unlikely to result in any significant difference, as the split proportions are the same and the test sets are sufficiently large. Second, the absolute scores achieved in Blitzer et al. (2007) are slightly worse than those demonstrated in our experiments both for supervised and semi-supervised methods. In absolute terms, our Reg+ method outperforms the SCL method by more than 1%: 75.6% against 74.5%, in average. This is probably due to the difference in the used learning methods: optimization of the Huber loss vs. D Base NoReg Reg NoReg+ Reg+ SCL-MI B 10.6 12.4 7.7 8.6 6.7 5.8 D 9.5 8.2 8.0 6.6 7.3 6.1 E 8.2 13.0 9.7 6.8 5.5 5.5 K 7.5 8.8 6.5 4.4 3.3 5.6 Av 8.9 10.6 8.0 6.6 5.7 5.8 Table 1: Drop in the accuracy score due to the transfer for the 4 domains: (B)ooks, (D)VD, (E)electronics and (K)itchen appliances, and in average over the domains. our latent variable model.5 This comparison suggests that our domain-adaptation method is a viable alternative to SCL. Also, it is important to point out that the SCL method uses auxiliary tasks to induce the shared feature representation, these tasks are constructed on the basis of unlabeled data. The auxiliary tasks and the original problem should be closely related, namely they should have the same (or similar) set of predictive features. Defining such tasks can be a challenging engineering problem. On the sentiment classification task in order to construct them two steps need to be performed: (1) a set of words correlated with the sentiment label is selected, and, then (2) prediction of each such word is regarded a distinct auxiliary problem. 
For many other domains (e.g., parsing (Plank, 2009)) the construction of an effective set of auxiliary tasks is still an open problem. 6 Related Work There is a growing body of work on domain adaptation. In this paper, we focus on the class of methods which induce a shared feature representation. Another popular class of domain-adaptation techniques assume that the input distributions P(x) for the source and the target domain share support, that is every example x which has a non-zero probability on the target domain must have also a non-zero probability on the source domain, and vice-versa. Such methods tackle domain adaptation by instance re-weighting (Bickel et al., 2007; Jiang and Zhai, 2007), or, similarly, by feature re-weighting (Satpal and Sarawagi, 2007). In NLP, most features 5The drop in accuracy for the SCL method in Table 1 is is computed with respect to the less accurate supervised in-domain classifier considered in Blitzer et al. (2007), otherwise, the computed drop would be larger. 69 are word-based and lexicons are very different for different domains, therefore such assumptions are likely to be overly restrictive. Various semi-supervised techniques for domainadaptation have also been considered, one example being self-training (McClosky et al., 2006). However, their behavior in the domain-adaptation setting is not well-understood. Semi-supervised learning with distributed representations and its application to domain adaptation has previously been considered in (Huang and Yates, 2009), but no attempt has been made to address problems specific to the domain-adaptation setting. Similar approaches has also been considered in the context of topic models (Xue et al., 2008), however the preference towards induction of domain-independent topics was not explicitly encoded in the learning objective or model priors. A closely related method to ours is that of (Druck and McCallum, 2010) which performs semi-supervised learning with posterior regularization (Ganchev et al., 2010). Our approach differs from theirs in many respects. First, they do not focus on the domain-adaptation setting and do not attempt to define constraints to prevent the model from learning domain-specific information. Second, their expectation constraints are estimated from labeled data, whereas we are trying to match expectations computed on unlabeled data for two domains. This approach bears some similarity to the adaptation methods standard for the setting where labelled data is available for both domains (Chelba and Acero, 2004; Daum´e and Marcu, 2006). However, instead of ensuring that the classifier parameters are similar across domains, we favor models resulting in similar marginal distributions of latent variables. 7 Discussion and Conclusions In this paper we presented a domain-adaptation method based on semi-supervised learning with distributed representations coupled with constraints favoring domain-independence of modeled phenomena. Our approach results in competitive domainadaptation performance on the sentiment classification task, rivalling that of the state-of-the-art SCL method (Blitzer et al., 2007). Both of these methods induce a shared feature representation but unlike SCL our method does not require construction of any auxiliary tasks in order to induce this representation. The primary area of the future work is to apply our method to structured prediction problems in NLP, such as syntactic parsing or semantic role labeling, where construction of auxiliary tasks proved problematic. 
Another direction is to favor domaininvariability not only of the expectations of individual variables but rather those of constraint functions involving latent variables, features and labels. Acknowledgements The author acknowledges the support of the Cluster of Excellence on Multimodal Computing and Interaction at Saarland University and thanks the anonymous reviewers for their helpful comments and suggestions. References Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. 2010. A theory of learning from different domains. Machine Learning, 79:151–175. Yoshua Bengio and Olivier Delalleau. 2007. Justifying and generalizing contrastive divergence. Technical Report TR 1311, Department IRO, University of Montreal, November. S. Bickel, M. Br¨ueckner, and T. Scheffer. 2007. Discriminative learning for differing training and test distributions. In Proc. of the International Conference on Machine Learning (ICML), pages 81–88. Christopher M. Bishop. 1995. Neural Networks for Pattern Recognition. Oxford University Press, Oxford, UK. John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In Proc. of EMNLP. John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proc. 45th Meeting of Association for Computational Linguistics (ACL), Prague, Czech Republic. John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman. 2008. Learning bounds for domain adaptation. In Proc. Advances In Neural Information Processing Systems (NIPS ’07). Ciprian Chelba and Alex Acero. 2004. Adaptation of maximum entropy capitalizer: Little data can help a lot. In Proc. of the Conference on Empirical Methods for Natural Language Processing (EMNLP), pages 285–292. 70 R. Collobert and J. Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In International Conference on Machine Learning, ICML. Hal Daum´e and Daniel Marcu. 2006. Domain adaptation for statistical classifiers. Journal of Artificial Intelligence, 26:101–126. Gregory Druck and Andrew McCallum. 2010. Highperformance semi-supervised learning using discriminatively constrained generative models. In Proc. of the International Conference on Machine Learning (ICML), Haifa, Israel. Kuzman Ganchev, Joao Graca, Jennifer Gillenwater, and Ben Taskar. 2010. Posterior regularization for structured latent variable models. Journal of Machine Learning Research (JMLR), pages 2001–2049. Andrea Gesmundo, James Henderson, Paola Merlo, and Ivan Titov. 2009. Latent variable model of synchronous syntactic-semantic parsing for multiple languages. In CoNLL 2009 Shared Task. Zoubin Ghahramani and Michael I. Jordan. 1997. Factorial hidden Markov models. Machine Learning, 29:245–273. G. E. Hinton and R. R. Salakhutdinov. 2006. Reducing the dimensionality of data with neural networks. Science, 313:504–507. Geoffrey E. Hinton. 2002. Training Products of Experts by Minimizing Contrastive Divergence. Neural Computation, 14:1771–1800. Fei Huang and Alexander Yates. 2009. Distributional representations for handling sparsity in supervised sequence labeling. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Jing Jiang and ChengXiang Zhai. 2007. Instance weighting for domain adaptation in nlp. In Proc. 
of the Annual Meeting of the ACL, pages 264–271, Prague, Czech Republic, June. Association for Computational Linguistics. Gideon S. Mann and Andrew McCallum. 2010. Generalized expectation criteria for semi-supervised learning with weakly labeled data. Journal of Machine Learning Research, 11:955–984. Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. 2008. Domain adaptation with multiple sources. In Advances in Neural Information Processing Systems. Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. 2009. Domain adaptation: Learning bounds and algorithms. In Proceedings of The 22nd Annual Conference on Learning Theory (COLT 2009), Montreal, Canada. Andrew McCallum, Chris Pal, Greg Druck, and Xuerui Wang. 2006. Multi-conditional learning: Generative/discriminative training for clustering and classification. In AAAI. David McClosky, Eugene Charniak, and Mark Johnson. 2006. Reranking and self-training for parser adaptation. In Proc. of the Annual Meeting of the ACL and the International Conference on Computational Linguistics, Sydney, Australia. B. Pang, L. Lee, and S. Vaithyanathan. 2002. Thumbs up? Sentiment classification using machine learning techniques. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Barbara Plank. 2009. Structural correspondence learning for parse disambiguation. In Proceedings of the Student Research Workshop at EACL 2009, pages 37–45, Athens, Greece, April. Association for Computational Linguistics. Sandeepkumar Satpal and Sunita Sarawagi. 2007. Domain adaptation of conditional probability models via feature subsetting. In Proceedings of 11th European Conference on Principles and Practice of Knowledge Discovery in Databases (PKDD), Warzaw, Poland. Lawrence K. Saul, Tommi Jaakkola, and Michael I. Jordan. 1996. Mean field theory for sigmoid belief networks. Journal of Artificial Intelligence Research, 4:61–76. Paul Smolensky. 1986. Information processing in dynamical systems: foundations of harmony theory. In D. Rumehart and J McCelland, editors, Parallel distributed processing: explorations in the microstructures of cognition, volume 1 : Foundations, pages 194– 281. MIT Press. Ivan Titov and James Henderson. 2007a. Constituent parsing with Incremental Sigmoid Belief Networks. In Proc. 45th Meeting of Association for Computational Linguistics (ACL), pages 632–639, Prague, Czech Republic. Ivan Titov and James Henderson. 2007b. Fast and robust multilingual dependency parsing with a generative latent variable model. In Proc. of the CoNLL shared task, Prague, Czech Republic. G.-R. Xue, W. Dai, Q. Yang, and Y. Yu. 2008. Topicbridged PLSA for cross-domain text classification. In Proceedings of the SIGIR Conference. 71
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 693–702, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Web-Scale Features for Full-Scale Parsing Mohit Bansal and Dan Klein Computer Science Division University of California, Berkeley {mbansal, klein}@cs.berkeley.edu Abstract Counts from large corpora (like the web) can be powerful syntactic cues. Past work has used web counts to help resolve isolated ambiguities, such as binary noun-verb PP attachments and noun compound bracketings. In this work, we first present a method for generating web count features that address the full range of syntactic attachments. These features encode both surface evidence of lexical affinities as well as paraphrase-based cues to syntactic structure. We then integrate our features into full-scale dependency and constituent parsers. We show relative error reductions of 7.0% over the second-order dependency parser of McDonald and Pereira (2006), 9.2% over the constituent parser of Petrov et al. (2006), and 3.4% over a non-local constituent reranker. 1 Introduction Current state-of-the art syntactic parsers have achieved accuracies in the range of 90% F1 on the Penn Treebank, but a range of errors remain. From a dependency viewpoint, structural errors can be cast as incorrect attachments, even for constituent (phrase-structure) parsers. For example, in the Berkeley parser (Petrov et al., 2006), about 20% of the errors are prepositional phrase attachment errors as in Figure 1, where a preposition-headed (IN) phrase was assigned an incorrect parent in the implied dependency tree. Here, the Berkeley parser (solid blue edges) incorrectly attaches from debt to the noun phrase $ 30 billion whereas the correct attachment (dashed gold edges) is to the verb raising. However, there are a range of error types, as shown in Figure 2. Here, (a) is a non-canonical PP VBG VP NP NP … raising $ 30 billion PP from debt … Figure 1: A PP attachment error in the parse output of the Berkeley parser (on Penn Treebank). Guess edges are in solid blue, gold edges are in dashed gold and edges common in guess and gold parses are in black. attachment ambiguity where by yesterday afternoon should attach to had already, (b) is an NP-internal ambiguity where half a should attach to dozen and not to newspapers, and (c) is an adverb attachment ambiguity, where just should modify fine and not the verb ’s. Resolving many of these errors requires information that is simply not present in the approximately 1M words on which the parser was trained. One way to access more information is to exploit surface counts from large corpora like the web (Volk, 2001; Lapata and Keller, 2004). For example, the phrase raising from is much more frequent on the Web than $ x billion from. While this ‘affinity’ is only a surface correlation, Volk (2001) showed that comparing such counts can often correctly resolve tricky PP attachments. This basic idea has led to a good deal of successful work on disambiguating isolated, binary PP attachments. For example, Nakov and Hearst (2005b) showed that looking for paraphrase counts can further improve PP resolution. In this case, the existence of reworded phrases like raising it from on the Web also imply a verbal at693 S NP NP PP …Lehman Hutton Inc. 
by yesterday afternoon VP had already … PDT NP … half DT a PDT dozen PDT newspapers QP VBZ VP … ´s ADVP RB just ADJP JJ fine ADJP (a) (b) (c) Figure 2: Different kinds of attachment errors in the parse output of the Berkeley parser (on Penn Treebank). Guess edges are in solid blue, gold edges are in dashed gold and edges common in guess and gold parses are in black. tachment. Still other work has exploited Web counts for other isolated ambiguities, such as NP coordination (Nakov and Hearst, 2005b) and noun-sequence bracketing (Nakov and Hearst, 2005a; Pitler et al., 2010). For example, in (b), half dozen is more frequent than half newspapers. In this paper, we show how to apply these ideas to all attachments in full-scale parsing. Doing so requires three main issues to be addressed. First, we show how features can be generated for arbitrary head-argument configurations. Affinity features are relatively straightforward, but paraphrase features, which have been hand-developed in the past, are more complex. Second, we integrate our features into full-scale parsing systems. For dependency parsing, we augment the features in the second-order parser of McDonald and Pereira (2006). For constituent parsing, we rerank the output of the Berkeley parser (Petrov et al., 2006). Third, past systems have usually gotten their counts from web search APIs, which does not scale to quadratically-many attachments in each sentence. Instead, we consider how to efficiently mine the Google n-grams corpus. Given the success of Web counts for isolated ambiguities, there is relatively little previous research in this direction. The most similar work is Pitler et al. (2010), which use Web-scale n-gram counts for multi-way noun bracketing decisions, though that work considers only sequences of nouns and uses only affinity-based web features. Yates et al. (2006) use Web counts to filter out certain ‘semantically bad’ parses from extraction candidate sets but are not concerned with distinguishing amongst top parses. In an important contrast, Koo et al. (2008) smooth the sparseness of lexical features in a discriminative dependency parser by using clusterbased word-senses as intermediate abstractions in addition to POS tags (also see Finkel et al. (2008)). Their work also gives a way to tap into corpora beyond the training data, through cluster membership rather than explicit corpus counts and paraphrases. This work uses a large web-scale corpus (Google n-grams) to compute features for the full parsing task. To show end-to-end effectiveness, we incorporate our features into state-of-the-art dependency and constituent parsers. For the dependency case, we can integrate them into the dynamic programming of a base parser; we use the discriminativelytrained MST dependency parser (McDonald et al., 2005; McDonald and Pereira, 2006). Our first-order web-features give 7.0% relative error reduction over the second-order dependency baseline of McDonald and Pereira (2006). For constituent parsing, we use a reranking framework (Charniak and Johnson, 2005; Collins and Koo, 2005; Collins, 2000) and show 9.2% relative error reduction over the Berkeley parser baseline. In the same framework, we also achieve 3.4% error reduction over the non-local syntactic features used in Huang (2008). Our webscale features reduce errors for a range of attachment types. Finally, we present an analysis of influential features. We not only reproduce features suggested in previous work but also discover a range of new ones. 
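To picture the kind of isolated-ambiguity heuristic that this work generalizes, the sketch below resolves a single PP attachment by comparing two hypothetical web counts; both the counts and the simple comparison rule are illustrative stand-ins, not the features developed in the next section.

    # Hypothetical count table standing in for a web n-gram lookup.
    counts = {("raising", "from"): 120000, ("billion", "from"): 41000}

    def prefer_attachment(verb, noun, prep):
        # Classic isolated-ambiguity heuristic: attach the PP to whichever
        # candidate head co-occurs more often with the preposition on the web.
        if counts.get((verb, prep), 0) >= counts.get((noun, prep), 0):
            return "verbal"
        return "nominal"

    print(prefer_attachment("raising", "billion", "from"))   # -> "verbal"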
2 Web-count Features Structural errors in the output of state-of-the-art parsers, constituent or dependency, can be viewed as attachment errors, examples of which are Figure 1 and Figure 2.1 One way to address attachment errors is through features which factor over head-argument 1For constituent parsers, there can be minor tree variations which can result in the same set of induced dependencies, but these are rare in comparison. 694 raising $ from debt 𝝓(raising from) 𝝓($ from) 𝜙(head arg) Figure 3: Features factored over head-argument pairs. pairs, as is standard in the dependency parsing literature (see Figure 3). Here, we discuss which webcount based features φ(h, a) should fire over a given head-argument pair (we consider the words h and a to be indexed, and so features can be sensitive to their order and distance, as is also standard). 2.1 Affinity Features Affinity statistics, such as lexical co-occurrence counts from large corpora, have been used previously for resolving individual attachments at least as far back as Lauer (1995) for noun-compound bracketing, and later for PP attachment (Volk, 2001; Lapata and Keller, 2004) and coordination ambiguity (Nakov and Hearst, 2005b). The approach of Lauer (1995), for example, would be to take an ambiguous noun sequence like hydrogen ion exchange and compare the various counts (or associated conditional probabilities) of n-grams like hydrogen ion and hydrogen exchange. The attachment with the greater score is chosen. More recently, Pitler et al. (2010) use web-scale n-grams to compute similar association statistics for longer sequences of nouns. Our affinity features closely follow this basic idea of association statistics. However, because a real parser will not have access to gold-standard knowledge of the competing attachment sites (see Atterer and Schutze (2007)’s criticism of previous work), we must instead compute features for all possible head-argument pairs from our web corpus. Moreover, when there are only two competing attachment options, one can do things like directly compare two count-based heuristics and choose the larger. Integration into a parser requires features to be functions of single attachments, not pairwise comparisons between alternatives. A learning algorithm can then weight features so that they compare appropriately across parses. We employ a collection of affinity features of varying specificity. The basic feature is the core adjacency count feature ADJ, which fires for all (h, a) pairs. What is specific to a particular (h, a) is the value of the feature, not its identity. For example, in a naive approach, the value of the ADJ feature might be the count of the query issued to the web-corpus – the 2-gram q = ha or q = ah depending on the order of h and a in the sentence. However, it turns out that there are several problems with this approach. First, rather than a single all-purpose feature like ADJ, the utility of such query counts will vary according to aspects like the parts-of-speech of h and a (because a high adjacency count is not equally informative for all kinds of attachments). Hence, we add more refined affinity features that are specific to each pair of POS tags, i.e. ADJ ∧POS(h) ∧ POS(a). The values of these POS-specific features, however, are still derived from the same queries as before. 
Second, using real-valued features did not work as well as binning the query-counts (we used b = floor(logr(count)/5) ∗5) and then firing indicator features ADJ ∧POS(h) ∧POS(a) ∧b for values of b defined by the query count. Adding still more complex features, we conjoin to the preceding features the order of the words h and a as they occur in the sentence, and the (binned) distance between them. For features which mark distances, wildcards (⋆) are used in the query q = h ⋆a, where the number of wildcards allowed in the query is proportional to the binned distance between h and a in the sentence. Finally, we also include unigram variants of the above features, which are sensitive to only one of the head or argument. For all features used, we add cumulative variants where indicators are fired for all count bins b′ up to query count bin b. 2.2 Paraphrase Features In addition to measuring counts of the words present in the sentence, there exist clever ways in which paraphrases and other accidental indicators can help resolve specific ambiguities, some of which are discussed in Nakov and Hearst (2005a), Nakov and Hearst (2005b). For example, finding attestations of eat : spaghetti with sauce suggests a nominal attachment in Jean ate spaghetti with sauce. As another example, one clue that the example in Figure 1 is 695 a verbal attachment is that the proform paraphrase raising it from is commonly attested. Similarly, the attestation of be noun prep suggests nominal attachment. These paraphrase features hint at the correct attachment decision by looking for web n-grams with special contexts that reveal syntax superficially. Again, while effective in their isolated disambiguation tasks, past work has been limited by both the range of attachments considered and the need to intuit these special contexts. For instance, frequency of the pattern The noun prep suggests noun attachment and of the pattern verb adverb prep suggests verb attachment for the preposition in the phrase verb noun prep, but these features were not in the manually brainstormed list. In this work, we automatically generate a large number of paraphrase-style features for arbitrary attachment ambiguities. To induce our list of features, we first mine useful context words. We take each (correct) training dependency relation (h, a) and consider web n-grams of the form cha, hca, and hac. Aggregating over all h and a (of a given POS pair), we determine which context words c are most frequent in each position. For example, for h = raising and a = from (see Figure 1), we look at web n-grams of the form raising c from and see that one of the most frequent values of c on the web turns out to be the word it. Once we have collected context words (for each position p in {BEFORE, MIDDLE, AFTER}), we turn each context word c into a collection of features of the form PARA ∧POS(h) ∧POS(a) ∧c ∧p ∧ dir, where dir is the linear order of the attachment in the sentence. Note that h and a are head and argument words and so actually occur in the sentence, but c is a context word that generally does not. For such features, the queries that determine their values are then of the form cha, hca, and so on. Continuing the previous example, if the test set has a possible attachment of two words like h = lowering and a = with, we will fire a feature PARA ∧ VBG ∧IN ∧it ∧MIDDLE ∧→with value (indicator bins) set according to the results of the query lowering it with. 
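The two ingredients described so far -- binned affinity indicators over (h, a) pairs and the mining of frequent middle context words c -- can be sketched as follows. The log base, the treatment of zero counts, the feature-name strings, and the dictionary-based count stores are assumptions of the sketch; the actual counts come from the Google n-grams as described in Section 3.

    import math
    from collections import Counter

    def count_bin(count, r=2):
        # b = floor(log_r(count) / 5) * 5, as described above; zero counts get
        # their own bin here (an arbitrary choice of the sketch).
        return -5 if count <= 0 else (int(math.log(count, r)) // 5) * 5

    def affinity_features(h, a, pos_h, pos_a, pair_counts):
        # Indicators conjoin the POS pair with the binned adjacency count; the
        # cumulative variants fire for every bin up to b.
        b = count_bin(pair_counts.get((h, a), 0))
        feats = ["ADJ_%s_%s_bin=%d" % (pos_h, pos_a, b)]
        feats += ["ADJ_%s_%s_bin<=%d" % (pos_h, pos_a, b2)
                  for b2 in range(0, b + 1, 5)]
        return feats

    def mine_middle_words(training_pairs, trigram_counts, top_k=5):
        # For the (h, a) pairs of one POS pair, find the most frequent middle
        # context words c in "h c a" web trigrams.
        totals = Counter()
        for (h, c, a), n in trigram_counts.items():
            if (h, a) in training_pairs:
                totals[c] += n
        return [c for c, _ in totals.most_common(top_k)]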
The idea is that if frequent occurrences of raising it from indicated a correct attachment between raising and from, frequent occurrences of lowering it with will indicate the correctness of an attachment between lowering and with. Finally, to handle the cases where no induced context word is helpful, we also construct abstracted versions of these paraphrase features where the context words c are collapsed to their parts-of-speech POS(c), obtained using a unigram-tagger trained on the parser training set. As discussed in Section 5, the top features learned by our learning algorithm duplicate the hand-crafted configurations used in previous work (Nakov and Hearst, 2005b) but also add numerous others, and, of course, apply to many more attachment types. 3 Working with Web n-Grams Previous approaches have generally used search engines to collect count statistics (Lapata and Keller, 2004; Nakov and Hearst, 2005b; Nakov and Hearst, 2008). Lapata and Keller (2004) uses the number of page hits as the web-count of the queried ngram (which is problematic according to Kilgarriff (2007)). Nakov and Hearst (2008) post-processes the first 1000 result snippets. One challenge with this approach is that an external search API is now embedded into the parser, raising issues of both speed and daily query limits, especially if all possible attachments trigger queries. Such methods also create a dependence on the quality and postprocessing of the search results, limitations of the query process (for instance, search engines can ignore punctuation (Nakov and Hearst, 2005b)). Rather than working through a search API (or scraper), we use an offline web corpus – the Google n-gram corpus (Brants and Franz, 2006) – which contains English n-grams (n = 1 to 5) and their observed frequency counts, generated from nearly 1 trillion word tokens and 95 billion sentences. This corpus allows us to efficiently access huge amounts of web-derived information in a compressed way, though in the process it limits us to local queries. In particular, we only use counts of n-grams of the form x ⋆y where the gap length is ≤3. Our system requires the counts from a large collection of these n-gram queries (around 4.5 million). The most basic queries are counts of head-argument pairs in contiguous h a and gapped h ⋆a configurations.2 Here, we describe how we process queries 2Paraphrase features give situations where we query ⋆h a 696 of the form (q1, q2) with some number of wildcards in between. We first collect all such queries over all trees in preprocessing (so a new test set requires a new query-extraction phase). Next, we exploit a simple but efficient trie-based hashing algorithm to efficiently answer all of them in one pass over the n-grams corpus. Consider Figure 4, which illustrates the data structure which holds our queries. We first create a trie of the queries in the form of a nested hashmap. The key of the outer hashmap is the first word q1 of the query. The entry for q1 points to an inner hashmap whose key is the final word q2 of the query bigram. The values of the inner map is an array of 4 counts, to accumulate each of (q1q2), (q1 ⋆q2), (q1 ⋆⋆q2), and (q1 ⋆⋆⋆q2), respectively. We use kgrams to collect counts of (q1...q2) with gap length = k −2, i.e. 2-grams to get count(q1q2), 3-grams to get count(q1 ⋆q2) and so on. With this representation of our collection of queries, we go through the web n-grams (n = 2 to 5) one by one. For an n-gram w1...wn, if the first ngram word w1 doesn’t occur in the outer hashmap, we move on. 
If it does match (say ¯q1 = w1), then we look into the inner map for ¯q1 and check for the final word wn. If we have a match, we increment the appropriate query’s result value. In similar ways, we also mine the most frequent words that occur before, in between and after the head and argument query pairs. For example, to collect mid words, we go through the 3-grams w1w2w3; if w1 matches ¯q1 in the outer hashmap and w3 occurs in the inner hashmap for ¯q1, then we store w2 and the count of the 3-gram. After the sweep, we sort the context words in decreasing order of count. We also collect unigram counts of the head and argument words by sweeping over the unigrams once. In this way, our work is linear in the size of the n-gram corpus, but essentially constant in the number of queries. Of course, if the number of queries is expected to be small, such as for a one-off parse of a single sentence, other solutions might be more appropriate; in our case, a large-batch setting, the number of queries was such that this formulation was chosen. Our main experiments (with no parallelization) took 115 minutes to sweep over the 3.8 billion and h a ⋆; these are handled similarly. 𝒒𝟏= 𝒘𝟏 𝒒𝟐= 𝒘𝒏 Web N-grams Query Count-Trie counts 𝒒𝟏 𝒒𝟐 𝒒𝟏∗𝒒𝟐 𝒒𝟏∗∗𝒒𝟐 𝒒𝟏∗∗∗𝒒𝟐 𝑤1 . . . 𝑤𝑛 SCAN {𝑞2} hash {𝑞1} hash Figure 4: Trie-based nested hashmap for collecting ngram webcounts of queries. n-grams (n = 1 to 5) to compute the answers to 4.5 million queries, much less than the time required to train the baseline parsers. 4 Parsing Experiments Our features are designed to be used in full-sentence parsing rather than for limited decisions about isolated ambiguities. We first integrate our features into a dependency parser, where the integration is more natural and pushes all the way into the underlying dynamic program. We then add them to a constituent parser in a reranking approach. We also verify that our features contribute on top of standard reranking features.3 4.1 Dependency Parsing For dependency parsing, we use the discriminatively-trained MSTParser4, an implementation of first and second order MST parsing models of McDonald et al. (2005) and McDonald and Pereira (2006). We use the standard splits of Penn Treebank into training (sections 2-21), development (section 22) and test (section 23). We used the ‘pennconverter’5 tool to convert Penn trees from constituent format to dependency format. Following Koo et al. (2008), we used the MXPOST tagger (Ratnaparkhi, 1996) trained on the full training data to provide part-of-speech tags for the development 3All reported experiments are run on all sentences, i.e. without any length limit. 4http://sourceforge.net/projects/mstparser 5This supersedes ‘Penn2Malt’ and is available at http://nlp.cs.lth.se/software/treebank converter. We follow its recommendation to patch WSJ data with NP bracketing by Vadas and Curran (2007). 697 Order 2 + Web features % Error Redn. Dev (sec 22) 92.1 92.7 7.6% Test (sec 23) 91.4 92.0 7.0% Table 1: UAS results for English WSJ dependency parsing. Dev is WSJ section 22 (all sentences) and Test is WSJ section 23 (all sentences). The order 2 baseline represents McDonald and Pereira (2006). and the test set, and we used 10-way jackknifing to generate tags for the training set. 
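The jackknifing step is worth making concrete, since the same idea recurs in the reranking experiments below. A minimal sketch, in which train_tagger and tag stand in for whatever tagger is used (MXPOST in the paper) and are assumptions of this illustration:

```python
def jackknife_tags(sentences, train_tagger, tag, k=10):
    """Split the training sentences into k folds; tag each fold with a
    tagger trained on the other k-1 folds, so no sentence is ever tagged
    by a model that saw it during training.  Output is ordered fold by
    fold rather than in the original corpus order."""
    folds = [sentences[i::k] for i in range(k)]
    tagged = []
    for i, held_out in enumerate(folds):
        train = [s for j, f in enumerate(folds) if j != i for s in f]
        model = train_tagger(train)
        tagged.extend(tag(model, s) for s in held_out)
    return tagged
```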
We added our first-order Web-scale features to the MSTParser system to evaluate improvement over the results of McDonald and Pereira (2006).6 Table 1 shows unlabeled attachments scores (UAS) for their second-order projective parser and the improved numbers resulting from the addition of our Web-scale features. Our first-order web-scale features show significant improvement even over their non-local second-order features.7 Additionally, our web-scale features are at least an order of magnitude fewer in number than even their first-order base features. 4.2 Constituent Parsing We also evaluate the utility of web-scale features on top of a state-of-the-art constituent parser – the Berkeley parser (Petrov et al., 2006), an unlexicalized phrase-structure parser. Because the underlying parser does not factor along lexical attachments, we instead adopt the discriminative reranking framework, where we generate the top-k candidates from the baseline system and then rerank this k-best list using (generally non-local) features. Our baseline system is the Berkeley parser, from which we obtain k-best lists for the development set (WSJ section 22) and test set (WSJ section 23) using a grammar trained on all the training data (WSJ sections 2-21).8 To get k-best lists for the training set, we use 3-fold jackknifing where we train a grammar 6Their README specifies ‘training-k:5 iters:10 losstype:nopunc decode-type:proj’, which we used for all final experiments; we used the faster ‘training-k:1 iters:5’ setting for most development experiments. 7Work such as Smith and Eisner (2008), Martins et al. (2009), Koo and Collins (2010) has been exploring more nonlocal features for dependency parsing. It will be interesting to see how these features interact with our web features. 8Settings: 6 iterations of split and merge with smoothing. k = 1 k = 2 k = 10 k = 25 k = 50 k = 100 Dev 90.6 92.3 95.1 95.8 96.2 96.5 Test 90.2 91.8 94.7 95.6 96.1 96.4 Table 2: Oracle F1-scores for k-best lists output by Berkeley parser for English WSJ parsing (Dev is section 22 and Test is section 23, all lengths). on 2 folds to get parses for the third fold.9 The oracle scores of the k-best lists (for different values of k) for the development and test sets are shown in Table 2. Based on these results, we used 50-best lists in our experiments. For discriminative learning, we used the averaged perceptron (Collins, 2002; Huang, 2008). Our core feature is the log conditional likelihood of the underlying parser.10 All other features are indicator features. First, we add all the Web-scale features as defined above. These features alone achieve a 9.2% relative error reduction. The affinity and paraphrase features contribute about two-fifths and three-fifths of this improvement, respectively. Next, we rerank with only the features (both local and non-local) from Huang (2008), a simplified merge of Charniak and Johnson (2005) and Collins (2000) (here configurational). These features alone achieve around the same improvements over the baseline as our web-scale features, even though they are highly non-local and extensive. Finally, we rerank with both our Web-scale features and the configurational features. When combined, our web-scale features give a further error reduction of 3.4% over the configurational reranker (and a combined error reduction of 12.2%). All results are shown in Table 3.11 5 Analysis Table 4 shows error counts and relative reductions that our web features provide over the 2nd-order dependency baseline. 
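For reference, the relative error reduction reported in these tables is the fraction of baseline errors that the new system removes; for the attachment counts it can be computed as in this small helper (not from the paper):

```python
def relative_error_reduction(total, baseline_correct, system_correct):
    """Fraction (in %) of the baseline's errors that the new system removes."""
    baseline_errors = total - baseline_correct
    system_errors = total - system_correct
    return 100.0 * (baseline_errors - system_errors) / baseline_errors

# Row "NN" of the attachment table: 5725 attachments,
# 5387 correct for the baseline, 5429 correct with web features.
print(round(relative_error_reduction(5725, 5387, 5429), 1))  # 12.4
```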
While we do see substantial gains for classic PP (IN) attachment cases, we see equal or greater error reductions for a range of attachment types. Further, Table 5 shows how the to9Default: we ran the Berkeley parser in its default ‘fast’ mode; the output k-best lists are ordered by max-rule-score. 10This is output by the flag -confidence. Note that baseline results with just this feature are slightly worse than 1-best results because the k-best lists are generated by max-rule-score. We report both numbers in Table 3. 11We follow Collins (1999) for head rules. 698 Dev (sec 22) Test (sec 23) Parsing Model F1 EX F1 EX Baseline (1-best) 90.6 39.4 90.2 37.3 log p(t|w) 90.4 38.9 89.9 37.3 + Web features 91.6 42.5 91.1 40.6 + Configurational features 91.8 43.8 91.1 40.6 + Web + Configurational 92.1 44.0 91.4 41.4 Table 3: Parsing results for reranking 50-best lists of Berkeley parser (Dev is WSJ section 22 and Test is WSJ section 23, all lengths). Arg Tag # Attach Baseline This Work % ER NN 5725 5387 5429 12.4 NNP 4043 3780 3804 9.1 IN 4026 3416 3490 12.1 DT 3511 3424 3429 5.8 NNS 2504 2319 2348 15.7 JJ 2472 2310 2329 11.7 CD 1845 1739 1738 -0.9 VBD 1705 1571 1580 6.7 RB 1308 1097 1100 1.4 CC 1000 855 854 -0.7 VB 983 940 945 11.6 TO 868 761 776 14.0 VBN 850 776 786 13.5 VBZ 705 633 629 -5.6 PRP 612 603 606 33.3 Table 4: Error reduction for attachments of various child (argument) categories. The columns depict the tag, its total attachments as argument, number of correct ones in baseline (McDonald and Pereira, 2006) and this work, and the relative error reduction. Results are for dependency parsing on the dev set for iters:5,training-k:1. tal errors break down by gold head. For example, the 12.1% total error reduction for attachments of an IN argument (which includes PPs as well as complementized SBARs) includes many errors where the gold attachments are to both noun and verb heads. Similarly, for an NN-headed argument, the major corrections are for attachments to noun and verb heads, which includes both object-attachment ambiguities and coordination ambiguities. We next investigate the features that were given high weight by our learning algorithm (in the constituent parsing case). We first threshold features by a minimum training count of 400 to focus on frequently-firing ones (recall that our features are not bilexical indicators and so are quite a bit more Arg Tag % Error Redn for Various Parent Tags NN IN: 18, NN: 23, VB: 30, NNP:20, VBN: 33 IN NN: 11, VBD: 11, NNS: 20, VB:18, VBG: 23 NNS IN: 9, VBD: 29, VBP: 21, VB:15, CC: 33 Table 5: Error reduction for each type of parent attachment for a given child in Table 4. POShead POSarg Example (head, arg) RB IN back →into NN IN review →of NN DT The ←rate NNP IN Regulation →of VB NN limit →access VBD NN government ←cleared NNP NNP Dean ←Inc NN TO ability →to JJ IN active →for NNS TO reasons →to IN NN under →pressure NNS IN reports →on NN NNP Warner ←studio NNS JJ few ←plants Table 6: The highest-weight features (thresholded at a count of 400) of the affinity schema. We list only the head and argument POS and the direction (arrow from head to arg). We omit features involving punctuation. frequent). We then sort them by descending (signed) weight. Table 6 shows which affinity features received the highest weights, as well as examples of training set attachments for which the feature fired (for concreteness), suppressing both features involving punctuation and the features’ count and distance bins. 
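The inspection procedure just described amounts to a count filter followed by a sort on signed weight; a minimal sketch, assuming a hypothetical mapping from feature name to its training count and learned weight:

```python
def top_features(features, min_count=400, k=15):
    """Keep features seen at least min_count times in training,
    then rank them by descending (signed) learned weight."""
    frequent = {name: (count, weight)
                for name, (count, weight) in features.items()
                if count >= min_count}
    return sorted(frequent.items(), key=lambda kv: -kv[1][1])[:k]
```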
With the standard caveats that interpreting feature weights in isolation is always to be taken for what it is, the first feature (RB→IN) indicates that high counts for an adverb occurring adjacent to a preposition (like back into the spotlight) is a useful indicator that the adverb actually modifies that preposition. The second row (NN→IN) indicates that whether a preposition is appropriate to attach to a noun is well captured by how often that preposition follows that noun. The fifth row (VB→NN) indicates that when considering an NP as the object of a verb, it is a good sign if that NP’s head frequently occurs immediately following that verb. All of these features essentially state cases where local surface counts are good indi699 POShead mid-word POSarg Example (head, arg) VBN this IN leaned, from VB this IN publish, in VBG him IN using, as VBG them IN joining, in VBD directly IN converted, into VBD held IN was, in VBN jointly IN offered, by VBZ it IN passes, in VBG only IN consisting, of VBN primarily IN developed, for VB us IN exempt, from VBG this IN using, as VBD more IN looked, like VB here IN stay, for VBN themselves IN launched, into VBG down IN lying, on Table 7: The highest-weight features (thresholded at a count of 400) of the mid-word schema for a verb head and preposition argument (with head on left of argument). cators of (possibly non-adjacent) attachments. A subset of paraphrase features, which in the automatically-extracted case don’t really correspond to paraphrases at all, are shown in Table 7. Here we show features for verbal heads and IN arguments. The mid-words m which rank highly are those where the occurrence of hma as an n-gram is a good indicator that a attaches to h (m of course does not have to actually occur in the sentence). Interestingly, the top such features capture exactly the intuition from Nakov and Hearst (2005b), namely that if the verb h and the preposition a occur with a pronoun in between, we have evidence that a attaches to h (it certainly can’t attach to the pronoun). However, we also see other indicators that the preposition is selected for by the verb, such as adverbs like directly. As another example of known useful features being learned automatically, Table 8 shows the previous-context-word paraphrase features for a noun head and preposition argument (N →IN). Nakov and Hearst (2005b) suggested that the attestation of be N IN is a good indicator of attachment to the noun (the IN cannot generally attach to forms of auxiliaries). One such feature occurs on this top list – for the context word have – and others occur farther down. We also find their surface marker / puncbfr-word POShead POSarg Example (head, arg) second NN IN season, in The NN IN role, of strong NN IN background, in our NNS IN representatives, in any NNS IN rights, against A NN IN review, of : NNS IN Results, in three NNS IN years, in In NN IN return, for no NN IN argument, about current NN IN head, of no NNS IN plans, for public NN IN appearance, at from NNS IN sales, of net NN IN revenue, of , NNS IN names, of you NN IN leave, in have NN IN time, for some NN IN money, for annual NNS IN reports, on Table 8: The highest-weight features (thresholded at a count of 400) of the before-word schema for a noun head and preposition argument (with head on left of argument). tuation cues of : and , preceding the noun. 
However, we additionally find other cues, most notably that if the N IN sequence occurs following a capitalized determiner, it tends to indicate a nominal attachment (in the n-gram, the preposition cannot attach leftward to anything else because of the beginning of the sentence). In Table 9, we see the top-weight paraphrase features that had a conjunction as a middle-word cue. These features essentially say that if two heads w1 and w2 occur in the direct coordination n-gram w1 and w2, then they are good heads to coordinate (coordination unfortunately looks the same as complementation or modification to a basic dependency model). These features are relevant to a range of coordination ambiguities. Finally, Table 10 depicts the high-weight, highcount general paraphrase-cue features for arbitrary head and argument categories, with those shown in previous tables suppressed. Again, many interpretable features appear. For example, the top entry (the JJ NNS) shows that when considering attaching an adjective a to a noun h, it is a good sign if the 700 POShead mid-CC POSarg Example (head, arg) NNS and NNS purchases, sales VB and VB buy, sell NN and NN president, officer NN and NNS public, media VBD and VBD said, added VBZ and VBZ makes, distributes JJ and JJ deep, lasting IN and IN before, during VBD and RB named, now VBP and VBP offer, need Table 9: The highest-weight features (thresholded at a count of 400) of the mid-word schema where the mid-word was a conjunction. For variety, for a given head-argument POS pair, we only list features corresponding to the and conjunction and h →a direction. trigram the a h is frequent – in that trigram, the adjective attaches to the noun. The second entry (NN - NN) shows that one noun is a good modifier of another if they frequently appear together hyphenated (another punctuation-based cue mentioned in previous work on noun bracketing, see Nakov and Hearst (2005a)). While they were motivated on separate grounds, these features can also compensate for inapplicability of the affinity features. For example, the third entry (VBD this NN) is a case where even if the head (a VBD like adopted) actually selects strongly for the argument (a NN like plan), the bigram adopted plan may not be as frequent as expected, because it requires a determiner in its minimal analogous form adopted the plan. 6 Conclusion Web features are a way to bring evidence from a large unlabeled corpus to bear on hard disambiguation decisions that are not easily resolvable based on limited parser training data. Our approach allows revealing features to be mined for the entire range of attachment types and then aggregated and balanced in a full parsing setting. Our results show that these web features resolve ambiguities not correctly handled by current state-of-the-art systems. Acknowledgments We would like to thank the anonymous reviewers for their helpful suggestions. This research is supPOSh POSa mid/bfr-word Example (h, a) NNS JJ b = the other ←things NN NN m = auto ←maker VBD NN m = this adopted →plan NNS NN b = of computer ←products NN DT m = current the ←proposal VBG IN b = of going →into NNS IN m = ” clusters →of IN NN m = your In →review TO VB b = used to →ease VBZ NN m = that issue ←has IN NNS m = two than →minutes IN NN b = used as →tool IN VBD m = they since →were VB TO b = will fail →to Table 10: The high-weight high-count (thresholded at a count of 2000) general features of the mid and before paraphrase schema (examples show head and arg in linear order with arrow from head to arg). 
ported by BBN under DARPA contract HR0011-06C-0022. References M. Atterer and H. Schutze. 2007. Prepositional phrase attachment without oracles. Computational Linguistics, 33(4):469476. Thorsten Brants and Alex Franz. 2006. The Google Web 1T 5-gram corpus version 1.1. LDC2006T13. Eugene Charniak and Mark Johnson. 2005. Coarse-tofine n-best parsing and MaxEnt discriminative reranking. In Proceedings of ACL. Michael Collins and Terry Koo. 2005. Discriminative reranking for natural language parsing. Computational Linguistics, 31(1):25–70. Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania, Philadelphia. Michael Collins. 2000. Discriminative reranking for natural language parsing. In Proceedings of ICML. Michael Collins. 2002. Discriminative training methods for Hidden Markov Models: Theory and experiments with perceptron algorithms. In Proceedings of EMNLP. Jenny Rose Finkel, Alex Kleeman, and Christopher D. Manning. 2008. Efficient, feature-based, conditional random field parsing. In Proceedings of ACL. Liang Huang. 2008. Forest reranking: Discriminative parsing with non-local features. In Proceedings of ACL. 701 Adam Kilgarriff. 2007. Googleology is bad science. Computational Linguistics, 33(1). Terry Koo and Michael Collins. 2010. Efficient thirdorder dependency parsers. In Proceedings of ACL. Terry Koo, Xavier Carreras, and Michael Collins. 2008. Simple semi-supervised dependency parsing. In Proceedings of ACL. Mirella Lapata and Frank Keller. 2004. The Web as a baseline: Evaluating the performance of unsupervised Web-based models for a range of NLP tasks. In Proceedings of HLT-NAACL. M. Lauer. 1995. Corpus statistics meet the noun compound: some empirical results. In Proceedings of ACL. Andr´e F. T. Martins, Noah A. Smith, and Eric P. Xing. 2009. Concise integer linear programming formulations for dependency parsing. In Proceedings of ACLIJCNLP. Ryan McDonald and Fernando Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proceedings of EACL. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of ACL. Preslav Nakov and Marti Hearst. 2005a. Search engine statistics beyond the n-gram: Application to noun compound bracketing. In Proceedings of CoNLL. Preslav Nakov and Marti Hearst. 2005b. Using the web as an implicit training set: Application to structural ambiguity resolution. In Proceedings of EMNLP. Preslav Nakov and Marti Hearst. 2008. Solving relational similarity problems using the web as a corpus. In Proceedings of ACL. Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning Accurate, Compact, and Interpretable Tree Annotation. In Proceedings of COLING-ACL. Emily Pitler, Shane Bergsma, Dekang Lin, , and Kenneth Church. 2010. Using web-scale n-grams to improve base NP parsing performance. In Proceedings of COLING. Adwait Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In Proceedings of EMNLP. David A. Smith and Jason Eisner. 2008. Dependency parsing by belief propagation. In Proceedings of EMNLP. David Vadas and James R. Curran. 2007. Adding noun phrase structure to the Penn Treebank. In Proceedings of ACL. Martin Volk. 2001. Exploiting the WWW as a corpus to resolve PP attachment ambiguities. In Proceedings of Corpus Linguistics. Alexander Yates, Stefan Schoenmackers, and Oren Etzioni. 2006. Detecting parser errors using web-based semantic filters. In Proceedings of EMNLP. 
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 703–711, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics The impact of language models and loss functions on repair disfluency detection Simon Zwarts and Mark Johnson Centre for Language Technology Macquarie University {simon.zwarts|mark.johnson|}@mq.edu.au Abstract Unrehearsed spoken language often contains disfluencies. In order to correctly interpret a spoken utterance, any such disfluencies must be identified and removed or otherwise dealt with. Operating on transcripts of speech which contain disfluencies, we study the effect of language model and loss function on the performance of a linear reranker that rescores the 25-best output of a noisychannel model. We show that language models trained on large amounts of non-speech data improve performance more than a language model trained on a more modest amount of speech data, and that optimising f-score rather than log loss improves disfluency detection performance. Our approach uses a log-linear reranker, operating on the top n analyses of a noisy channel model. We use large language models, introduce new features into this reranker and examine different optimisation strategies. We obtain a disfluency detection f-scores of 0.838 which improves upon the current state-of-theart. 1 Introduction Most spontaneous speech contains disfluencies such as partial words, filled pauses (e.g., “uh”, “um”, “huh”), explicit editing terms (e.g., “I mean”), parenthetical asides and repairs. Of these, repairs pose particularly difficult problems for parsing and related Natural Language Processing (NLP) tasks. This paper presents a model of disfluency detection based on the noisy channel framework, which specifically targets the repair disfluencies. By combining language models and using an appropriate loss function in a log-linear reranker we are able to achieve f-scores which are higher than previously reported. Often in natural language processing algorithms, more data is more important than better algorithms (Brill and Banko, 2001). It is this insight that drives the first part of the work described in this paper. This paper investigates how we can use language models trained on large corpora to increase repair detection accuracy performance. There are three main innovations in this paper. First, we investigate the use of a variety of language models trained from text or speech corpora of various genres and sizes. The largest available language models are based on written text: we investigate the effect of written text language models as opposed to language models based on speech transcripts. Second, we develop a new set of reranker features explicitly designed to capture important properties of speech repairs. Many of these features are lexically grounded and provide a large performance increase. Third, we utilise a loss function, approximate expected f-score, that explicitly targets the asymmetric evaluation metrics used in the disfluency detection task. We explain how to optimise this loss function, and show that this leads to a marked improvement in disfluency detection. This is consistent with Jansche (2005) and Smith and Eisner (2006), who observed similar improvements when using approximate f-score loss for other problems. Similarly we introduce a loss function based on the edit-f-score in our domain. 
703 Together, these three improvements are enough to boost detection performance to a higher f-score than previously reported in literature. Zhang et al. (2006) investigate the use of ‘ultra large feature spaces’ as an aid for disfluency detection. Using over 19 million features, they report a final f-score in this task of 0.820. Operating on the same body of text (Switchboard), our work leads to an f-score of 0.838, this is a 9% relative improvement in residual f-score. The remainder of this paper is structured as follows. First in Section 2 we describe related work. Then in Section 3 we present some background on disfluencies and their structure. Section 4 describes appropriate evaluation techniques. In Section 5 we describe the noisy channel model we are using. The next three sections describe the new additions: Section 6 describe the corpora used for language models, Section 7 describes features used in the loglinear model employed by the reranker and Section 8 describes appropriate loss functions which are critical for our approach. We evaluate the new model in Section 9. Section 10 draws up a conclusion. 2 Related work A number of different techniques have been proposed for automatic disfluency detection. Schuler et al. (2010) propose a Hierarchical Hidden Markov Model approach; this is a statistical approach which builds up a syntactic analysis of the sentence and marks those subtrees which it considers to be made up of disfluent material. Although they are interested not only in disfluency but also a syntactic analysis of the utterance, including the disfluencies being analysed, their model’s final f-score for disfluency detection is lower than that of other models. Snover et al. (2004) investigate the use of purely lexical features combined with part-of-speech tags to detect disfluencies. This approach is compared to approaches which use primarily prosodic cues, and appears to perform equally well. However, the authors note that this model finds it difficult to identify disfluencies which by themselves are very fluent. As we will see later, the individual components of a disfluency do not have to be disfluent by themselves. This can occur when a speaker edits her speech for meaning-related reasons, rather than errors that arise from performance. The edit repairs which are the focus of our work typically have this characteristic. Noisy channel models have done well on the disfluency detection task in the past; the work of Johnson and Charniak (2004) first explores such an approach. Johnson et al. (2004) adds some handwritten rules to the noisy channel model and use a maximum entropy approach, providing results comparable to Zhang et al. (2006), which are state-of-the art results. Kahn et al. (2005) investigated the role of prosodic cues in disfluency detection, although the main focus of their work was accurately recovering and parsing a fluent version of the sentence. They report a 0.782 f-score for disfluency detection. 3 Speech Disfluencies We follow the definitions of Shriberg (1994) regarding speech disfluencies. She identifies and defines three distinct parts of a speech disfluency, referred to as the reparandum, the interregnum and the repair. 
Consider the following utterance: I want a flight reparandum z }| { to Boston, uh, I mean | {z } interregnum to Denver | {z } repair on Friday (1) The reparandum to Boston is the part of the utterance that is ‘edited out’; the interregnum uh, I mean is a filled pause, which need not always be present; and the repair to Denver replaces the reparandum. Shriberg and Stolcke (1998) studied the location and distribution of repairs in the Switchboard corpus (Godfrey and Holliman, 1997), the primary corpus for speech disfluency research, but did not propose an actual model of repairs. They found that the overall distribution of speech disfluencies in a large corpus can be fit well by a model that uses only information on a very local level. Our model, as explained in section 5, follows from this observation. As our domain of interest we use the Switchboard corpus. This is a large corpus consisting of transcribed telephone conversations between two partners. In the Treebank III (Marcus et al., 1999) corpus there is annotation available for the Switchboard corpus, which annotates which parts of utterances are in a reparandum, interregnum or repair. 704 4 Evaluation metrics for disfluency detection systems Disfluency detection systems like the one described here identify a subset of the word tokens in each transcribed utterance as “edited” or disfluent. Perhaps the simplest way to evaluate such systems is to calculate the accuracy of labelling they produce, i.e., the fraction of words that are correctly labelled (i.e., either “edited” or “not edited”). However, as Charniak and Johnson (2001) observe, because only 5.9% of words in the Switchboard corpus are “edited”, the trivial baseline classifier which assigns all words the “not edited” label achieves a labelling accuracy of 94.1%. Because the labelling accuracy of the trivial baseline classifier is so high, it is standard to use a different evaluation metric that focuses more on the detection of “edited” words. We follow Charniak and Johnson (2001) and report the f-score of our disfluency detection system. The f-score f is: f = 2c g + e (2) where g is the number of “edited” words in the gold test corpus, e is the number of “edited” words proposed by the system on that corpus, and c is the number of the “edited” words proposed by the system that are in fact correct. A perfect classifier which correctly labels every word achieves an f-score of 1, while the trivial baseline classifiers which label every word as “edited” or “not edited” respectively achieve a very low f-score. Informally, the f-score metric focuses more on the “edited” words than it does on the “not edited” words. As we will see in section 8, this has implications for the choice of loss function used to train the classifier. 5 Noisy Channel Model Following Johnson and Charniak (2004), we use a noisy channel model to propose a 25-best list of possible speech disfluency analyses. The choice of this model is driven by the observation that the repairs frequently seem to be a “rough copy” of the reparandum, often incorporating the same or very similar words in roughly the same word order. That is, they seem to involve “crossed” dependencies between the reparandum and the repair. Example (3) shows the crossing dependencies. As this example also shows, the repair often contains many of the same words that appear in the reparandum. 
In fact, in our Switchboard training corpus we found that 62reparandum also appeared in the associated repair, to Boston uh, I mean, to Denver | {z } reparandum | {z } interregnum | {z } repair (3) 5.1 Informal Description Given an observed sentence Y we wish to find the most likely source sentence ˆX, where ˆX = argmax X P(Y |X)P(X) (4) In our model the unobserved X is a substring of the complete utterance Y . Noisy-channel models are used in a similar way in statistical speech recognition and machine translation. The language model assigns a probability P(X) to the string X, which is a substring of the observed utterance Y . The channel model P(Y |X) generates the utterance Y , which is a potentially disfluent version of the source sentence X. A repair can potentially begin before any word of X. When a repair has begun, the channel model incrementally processes the succeeding words from the start of the repair. Before each succeeding word either the repair can end or else a sequence of words can be inserted in the reparandum. At the end of each repair, a (possibly null) interregnum is appended to the reparandum. We will look at these two components in the next two Sections in more detail. 5.2 Language Model Informally, the task of language model component of the noisy channel model is to assess fluency of the sentence with disfluency removed. Ideally we would like to have a model which assigns a very high probability to disfluency-free utterances and a lower probability to utterances still containing disfluencies. For computational complexity reasons, as described in the next section, inside the noisy channel model we use a bigram language model. This 705 bigram language model is trained on the fluent version of the Switchboard corpus (training section). We realise that a bigram model might not be able to capture more complex language behaviour. This motivates our investigation of a range of additional language models, which are used to define features used in the log-linear reranker as described below. 5.3 Channel Model The intuition motivating the channel model design is that the words inserted into the reparandum are very closely related to those in the repair. Indeed, in our training data we find that 62% of the words in the reparandum are exact copies of words in the repair; this identity is strong evidence of a repair. The channel model is designed so that exact copy reparandum words will have high probability. Because these repair structures can involve an unbounded number of crossed dependencies, they cannot be described by a context-free or finite-state grammar. This motivates the use of a more expressive formalism to describe these repair structures. We assume that X is a substring of Y , i.e., that the source sentence can be obtained by deleting words from Y , so for a fixed observed utterance Y there are only a finite number of possible source sentences. However, the number of possible source sentences, X, grows exponentially with the length of Y , so exhaustive search is infeasible. Tree Adjoining Grammars (TAG) provide a systematic way of formalising the channel model, and their polynomialtime dynamic programming parsing algorithms can be used to search for likely repairs, at least when used with simple language models like a bigram language model. In this paper we first identify the 25 most likely analyses of each sentence using the TAG channel model together with a bigram language model. 
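To make the size of this search space concrete, equation (4) can be written as a brute-force search over subsequences of Y. The sketch below is purely illustrative — channel_model and language_model are stand-ins, and, as noted above, exhaustive enumeration is infeasible in general, which is why the actual system replaces it with the TAG-based dynamic program.

```python
from itertools import combinations

def best_source(y_words, channel_model, language_model):
    """Brute-force version of X* = argmax_X P(Y|X) P(X), where X ranges
    over the word sequences obtained by deleting words from Y.  The space
    is exponential in |Y|, so this is usable only on very short utterances."""
    best, best_score = None, float("-inf")
    n = len(y_words)
    for r in range(1, n + 1):
        for kept in combinations(range(n), r):
            x = [y_words[i] for i in kept]
            score = channel_model(y_words, x) * language_model(x)
            if score > best_score:
                best, best_score = x, score
    return best
```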
Further details of the noisy channel model can be found in Johnson and Charniak (2004). 5.4 Reranker To improve performance over the standard noisy channel model we use a reranker, as previously suggest by Johnson and Charniak (2004). We rerank a 25-best list of analyses. This choice is motivated by an oracle experiment we performed, probing for the location of the best analysis in a 100-best list. This experiment shows that in 99.5% of the cases the best analysis is located within the first 25, and indicates that an f-score of 0.958 should be achievable as the upper bound on a model using the first 25 best analyses. We therefore use the top 25 analyses from the noisy channel model in the remainder of this paper and use a reranker to choose the most suitable candidate among these. 6 Corpora for language modelling We would like to use additional data to model the fluent part of spoken language. However, the Switchboard corpus is one of the largest widelyavailable disfluency-annotated speech corpora. It is reasonable to believe that for effective disfluency detection Switchboard is not large enough and more text can provide better analyses. Schwartz et al. (1994), although not focusing on disfluency detection, show that using written language data for modelling spoken language can improve performance. We turn to three other bodies of text and investigate the use of these corpora for our task, disfluency detection. We will describe these corpora in detail here. The predictions made by several language models are likely to be strongly correlated, even if the language models are trained on different corpora. This motivates the choice for log-linear learners, which are built to handle features which are not necessarily independent. We incorporate information from the external language models by defining a reranker feature for each external language model. The value of this feature is the log probability assigned by the language model to the candidate underlying fluent substring X For each of our corpora (including Switchboard) we built a 4-gram language model with Kneser-Ney smoothing (Kneser and Ney, 1995). For each analysis we calculate the probability under that language model for the candidate underlying fluent substring X. We use this log probability as a feature in the reranker. We use the SRILM toolkit (Stolcke, 2002) both for estimating the model from the training corpus as well as for computing the probabilities of the underlying fluent sentences X of the different analysis. As previously described, Switchboard is our pri706 mary corpus for our model. The language model part of the noisy channel model already uses a bigram language model based on Switchboard, but in the reranker we would like to also use 4-grams for reranking. Directly using Switchboard to build a 4gram language model is slightly problematic. When we use the training data of Switchboard both for language fluency prediction and the same training data also for the loss function, the reranker will overestimate the weight associated with the feature derived from the Switchboard language model, since the fluent sentence itself is part of the language model training data. We solve this by dividing the Switchboard training data into 20 folds. For each fold we use the 19 other folds to construct a language model and then score the utterance in this fold with that language model. The largest widely-available corpus for language modelling is the Web 1T 5-gram corpus (Brants and Franz, 2006). 
This data set, collected by Google Inc., contains English word n-grams and their observed frequency counts. Frequency counts are produced from this billion-token corpus of web text. Because of the noise1 present in this corpus there is an ongoing debate in the scientific community of the use of this corpus for serious language modelling. The Gigaword Corpus (Graff and Cieri, 2003) is a large body of newswire text. The corpus contains 1.6 · 109 tokens, however fluent newswire text is not necessarily of the same domain as disfluency removed speech. The Fisher corpora Part I (David et al., 2004) and Part II (David et al., 2005) are large bodies of transcribed text. Unlike Switchboard there is no disfluency annotation available for Fisher. Together the two Fisher corpora consist of 2.2 · 107 tokens. 7 Features The log-linear reranker, which rescores the 25-best lists produced by the noisy-channel model, can also include additional features besides the noisychannel log probabilities. As we show below, these additional features can make a substantial improvement to disfluency detection performance. Our reranker incorporates two kinds of features. The first 1We do not mean speech disfluencies here, but noise in webtext; web-text is often poorly written and unedited text. are log-probabilities of various scores computed by the noisy-channel model and the external language models. We only include features which occur at least 5 times in our training data. The noisy channel and language model features consist of: 1. LMP: 4 features indicating the probabilities of the underlying fluent sentences under the language models, as discussed in the previous section. 2. NCLogP: The Log Probability of the entire noisy channel model. Since by itself the noisy channel model is already doing a very good job, we do not want this information to be lost. 3. LogFom: This feature is the log of the “figure of merit” used to guide search in the noisy channel model when it is producing the 25-best list for the reranker. The log figure of merit is the sum of the log language model probability and the log channel model probability plus 1.5 times the number of edits in the sentence. This feature is redundant, i.e., it is a linear combination of other features available to the reranker model: we include it here so the reranker has direct access to all of the features used by the noisy channel model. 4. NCTransOdd: We include as a feature parts of the noisy channel model itself, i.e. the channel model probability. We do this so that the task to choosing appropriate weights of the channel model and language model can be moved from the noisy channel model to the log-linear optimisation algorithm. The boolean indicator features consist of the following 3 groups of features operating on words and their edit status; the latter indicated by one of three possible flags: when the word is not part of a disfluency or E when it is part of the reparandum or I when it is part of the interregnum. 1. CopyFlags X Y: When there is an exact copy in the input text of length X (1 ≤X ≤3) and the gap between the copies is Y (0 ≤Y ≤3) this feature is the sequence of flags covering the two copies. Example: CopyFlags 1 0 (E 707 ) records a feature when two identical words are present, directly consecutive and the first one is part of a disfluency (Edited) while the second one is not. There are 745 different instances of these features. 2. WordsFlags L n R: This feature records the immediate area around an n-gram (n ≤3). 
L denotes how many flags to the left and R (0 ≤R ≤1) how many to the right are includes in this feature (Both L and R range over 0 and 1). Example: WordsFlags 1 1 0 (need ) is a feature that fires when a fluent word is followed by the word ‘need’ (one flag to the left, none to the right). There are 256808 of these features present. 3. SentenceEdgeFlags B L: This feature indicates the location of a disfluency in an utterance. The Boolean B indicates whether this features records sentence initial or sentence final behaviour, L (1 ≤ L ≤ 3) records the length of the flags. Example SentenceEdgeFlags 1 1 (I) is a feature recording whether a sentence ends on an interregnum. There are 22 of these features present. We give the following analysis as an example: but E but that does n’t work The language model features are the probability calculated over the fluent part. NCLogP, LogFom and NCTransOdd are present with their associated value. The following binary flags are present: CopyFlags 1 0 (E ) WordsFlags:0:1:0 (but E) WordsFlags:0:1:0 (but ) WordsFlags:1:1:0 (E but ) WordsFlags:1:1:0 ( that ) WordsFlags:0:2:0 (but E but ) etc.2 SentenceEdgeFlags:0:1 (E) SentenceEdgeFlags:0:2 (E ) SentenceEdgeFlags:0:3 (E ) These three kinds of boolean indicator features together constitute the extended feature set. 2An exhaustive list here would be too verbose. 8 Loss functions for reranker training We formalise the reranker training procedure as follows. We are given a training corpus T containing information about n possibly disfluent sentences. For the ith sentence T specifies the sequence of words xi, a set Yi of 25-best candidate “edited” labellings produced by the noisy channel model, as well as the correct “edited” labelling y⋆ i ∈Yi.3 We are also given a vector f = (f1, . . . , fm) of feature functions, where each fj maps a word sequence x and an “edit” labelling y for x to a real value fj(x, y). Abusing notation somewhat, we write f(x, y) = (f1(x, y), . . . , fm(x, y)). We interpret a vector w = (w1, . . . , wm) of feature weights as defining a conditional probability distribution over a candidate set Y of “edited” labellings for a string x as follows: Pw(y | x, Y) = exp(w · f(x, y)) P y′∈Y exp(w · f(x, y′)) We estimate the feature weights w from the training data T by finding a feature weight vector bw that optimises a regularised objective function: bw = argmin w LT (w) + α m X j=1 w2 j Here α is the regulariser weight and LT is a loss function. We investigate two different loss functions in this paper. LogLoss is the negative log conditional likelihood of the training data: LogLossT (w) = m X i=1 −log P(y⋆ i | xi, Yi) Optimising LogLoss finds the bw that define (regularised) conditional Maximum Entropy models. It turns out that optimising LogLoss yields suboptimal weight vectors bw here. LogLoss is a symmetric loss function (i.e., each mistake is equally weighted), while our f-score evaluation metric weights “edited” labels more highly, as explained in section 4. Because our data is so skewed (i.e., “edited” words are comparatively infrequent), we 3In the situation where the true “edited” labelling does not appear in the 25-best list Yi produced by the noisy-channel model, we choose y⋆ i to be a labelling in Yi closest to the true labelling. 708 can improve performance by using an asymmetric loss function. Inspired by our evaluation metric, we devised an approximate expected f-score loss function FLoss. 
FLossT (w) = 1 − 2Ew[c] g + Ew[e] This approximation assumes that the expectations approximately distribute over the division: see Jansche (2005) and Smith and Eisner (2006) for other approximations to expected f-score and methods for optimising them. We experimented with other asymmetric loss functions (e.g., the expected error rate) and found that they gave very similar results. An advantage of FLoss is that it and its derivatives with respect to w (which are required for numerical optimisation) are easy to calculate exactly. For example, the expected number of correct “edited” words is: Ew[c] = n X i=1 Ew[cy⋆ i | Yi], where: Ew[cy⋆ i | Yi] = X y∈Yi cy⋆ i (y) Pw(y | xi, Yi) and cy⋆(y) is the number of correct “edited” labels in y given the gold labelling y⋆. The derivatives of FLoss are: ∂FLossT ∂wj (w) = 1 g + Ew[e] FLossT (w)∂Ew[e] ∂wj −2∂Ew[c] ∂wj ! where: ∂Ew[c] ∂wj = n X i=1 ∂Ew[cy⋆ i | xi, Yi] ∂wj ∂Ew[cy⋆| x, Y] ∂wj = Ew[fjcy⋆| x, Y] −Ew[fj | x, Y] Ew[cy⋆| x, Y]. ∂E[e]/∂wj is given by a similar formula. 9 Results We follow Charniak and Johnson (2001) and split the corpus into main training data, held-out training data and test data as follows: main training consisted of all sw[23]∗.dps files, held-out training consisted of all sw4[5-9]∗.dps files and test consisted of all sw4[0-1]∗.dps files. However, we follow (Johnson and Charniak, 2004) in deleting all partial words and punctuation from the training and test data (they argued that this is more realistic in a speech processing application). Table 1 shows the results for the different models on held-out data. To avoid over-fitting on the test data, we present the f-scores over held-out training data instead of test data. We used the held-out data to select the best-performing set of reranker features, which consisted of features for all of the language models plus the extended (i.e., indicator) features, and used this model to analyse the test data. The fscore of this model on test data was 0.838. In this table, the set of Extended Features is defined as all the boolean features as described in Section 7. We first observe that adding different external language models does increase the final score. The difference between the external language models is relatively small, although the differences in choice are several orders of magnitude. Despite the putative noise in the corpus, a language model built on Google’s Web1T data seems to perform very well. Only the model where Switchboard 4-grams are used scores slightly lower, we explain this because the internal bigram model of the noisy channel model is already trained on Switchboard and so this model adds less new information to the reranker than the other models do. Including additional features to describe the problem space is very productive. Indeed the best performing model is the model which has all extended features and all language model features. The differences among the different language models when extended features are present are relatively small. We assume that much of the information expressed in the language models overlaps with the lexical features. We find that using a loss function related to our evaluation metric, rather than optimising LogLoss, consistently improves edit-word f-score. The standard LogLoss function, which estimates the “maximum entropy” model, consistently performs worse than the loss function minimising expected errors. The best performing model (Base + Ext. Feat. + All LM, using expected f-score loss) scores an fscore of 0.838 on test data. 
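For concreteness, the expected-f-score loss optimised in these experiments can be sketched as follows, assuming the candidate scores w · f(x, y) and the per-candidate edited-word counts have already been computed; the derivatives needed for gradient ascent follow from the same expectations. This is an illustrative simplification, not the authors' implementation.

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def expected_fscore_loss(candidates, g):
    """candidates: one list per training sentence, each entry a tuple
    (score, n_correct_edited, n_proposed_edited), where score = w . f(x, y)
    for that candidate labelling.  g is the number of gold 'edited' words
    in the training corpus.  Returns 1 - 2 E[c] / (g + E[e])."""
    exp_c = exp_e = 0.0
    for cands in candidates:
        probs = softmax([score for score, _, _ in cands])
        exp_c += sum(p * c for p, (_, c, _) in zip(probs, cands))
        exp_e += sum(p * e for p, (_, _, e) in zip(probs, cands))
    return 1.0 - 2.0 * exp_c / (g + exp_e)
```

Because g + E_w[e] sits in the denominator, shifting probability onto candidates that propose many edits only lowers the loss when those edits are also correct — exactly the asymmetry that the symmetric LogLoss lacks.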
The results as indicated by the f-score outperform state-of-the-art models re709 Model F-score Base (noisy channel, no reranking) 0.756 Model log loss expected f-score loss Base + Switchboard 0.776 0.791 Base + Fisher 0.771 0.797 Base + Gigaword 0.777 0.797 Base + Web1T 0.781 0.798 Base + Ext. Feat. 0.824 0.827 Base + Ext. Feat. + Switchboard 0.827 0.828 Base + Ext. Feat. + Fisher 0.841 0.856 Base + Ext. Feat. + Gigaword 0.843 0.852 Base + Ext. Feat. + Web1T 0.843 0.850 Base + Ext. Feat. + All LM 0.841 0.857 Table 1: Edited word detection f-score on held-out data for a variety of language models and loss functions ported in literature operating on identical data, even though we use vastly less features than other do. 10 Conclusion and Future work We have described a disfluency detection algorithm which we believe improves upon current state-ofthe-art competitors. This model is based on a noisy channel model which scores putative analyses with a language model; its channel model is inspired by the observation that reparandum and repair are often very similar. As Johnson and Charniak (2004) noted, although this model performs well, a loglinear reranker can be used to increase performance. We built language models from a variety of speech and non-speech corpora, and examine the effect they have on disfluency detection. We use language models derived from different larger corpora effectively in a maximum reranker setting. We show that the actual choice for a language model seems to be less relevant and newswire text can be used equally well for modelling fluent speech. We describe different features to improve disfluency detection even further. Especially these features seem to boost performance significantly. Finally we investigate the effect of different loss functions. We observe that using a loss function directly optimising our interest yields a performance increase which is at least at large as the effect of using very large language models. We obtained an f-score which outperforms other models reported in literature operating on identical data, even though we use vastly fewer features than others do. Acknowledgements This work was supported was supported under Australian Research Council’s Discovery Projects funding scheme (project number DP110102593) and by the Australian Research Council as part of the Thinking Head Project the Thinking Head Project, ARC/NHMRC Special Research Initiative Grant # TS0669874. We thank the anonymous reviewers for their helpful comments. References Thorsten Brants and Alex Franz. 2006. Web 1T 5-gram Version 1. Published by Linguistic Data Consortium, Philadelphia. Erik Brill and Michele Banko. 2001. Mitigating the Paucity-of-Data Problem: Exploring the Effect of Training Corpus Size on Classifier Performance for Natural Language Processing. In Proceedings of the First International Conference on Human Language Technology Research. Eugene Charniak and Mark Johnson. 2001. Edit detection and parsing for transcribed speech. In Proceedings of the 2nd Meeting of the North American Chapter of the Association for Computational Linguistics, pages 118–126. Christopher Cieri David, David Miller, and Kevin Walker. 2004. Fisher English Training Speech Part 1 Transcripts. Published by Linguistic Data Consortium, Philadelphia. 710 Christopher Cieri David, David Miller, and Kevin Walker. 2005. Fisher English Training Speech Part 2 Transcripts. Published by Linguistic Data Consortium, Philadelphia. John J. Godfrey and Edward Holliman. 1997. Switchboard-1 Release 2. 
Published by Linguistic Data Consortium, Philadelphia. David Graff and Christopher Cieri. 2003. English gigaword. Published by Linguistic Data Consortium, Philadelphia. Martin Jansche. 2005. Maximum Expected F-Measure Training of Logistic Regression Models. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 692–699, Vancouver, British Columbia, Canada, October. Association for Computational Linguistics. Mark Johnson and Eugene Charniak. 2004. A TAGbased noisy channel model of speech repairs. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, pages 33–39. Mark Johnson, Eugene Charniak, and Matthew Lease. 2004. An Improved Model for Recognizing Disfluencies in Conversational Speech. In Proceedings of the Rich Transcription Fall Workshop. Jeremy G. Kahn, Matthew Lease, Eugene Charniak, Mark Johnson, and Mari Ostendorf. 2005. Effective Use of Prosody in Parsing Conversational Speech. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 233–240, Vancouver, British Columbia, Canada. Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, pages 181– 184. Mitchell P. Marcus, Beatrice Santorini, Mary Ann Marcinkiewicz, and Ann Taylor. 1999. Treebank-3. Published by Linguistic Data Consortium, Philadelphia. William Schuler, Samir AbdelRahman, Tim Miller, and Lane Schwartz. 2010. Broad-Coverage Parsing using Human-Like Memory Constraints. Computational Linguistics, 36(1):1–30. Richard Schwartz, Long Nguyen, Francis Kubala, George Chou, George Zavaliagkos, and John Makhoul. 1994. On Using Written Language Training Data for Spoken Language Modeling. In Proceedings of the Human Language Technology Workshop, pages 94–98. Elizabeth Shriberg and Andreas Stolcke. 1998. How far do speakers back up in repairs? A quantitative model. In Proceedings of the International Conference on Spoken Language Processing, pages 2183– 2186. Elizabeth Shriberg. 1994. Preliminaries to a Theory of Speech Disuencies. Ph.D. thesis, University of California, Berkeley. David A. Smith and Jason Eisner. 2006. Minimum Risk Annealing for Training Log-Linear Models. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 787–794. Matthew Snover, Bonnie Dorr, and Richard Schwartz. 2004. A Lexically-Driven Algorithm for Disfluency Detection. In Proceedings of Human Language Technologies and North American Association for Computational Linguistics, pages 157–160. Andreas Stolcke. 2002. SRILM - An Extensible Language Modeling Toolkit. In Proceedings of the International Conference on Spoken Language Processing, pages 901–904. Qi Zhang, Fuliang Weng, and Zhe Feng. 2006. A progressive feature selection algorithm for ultra large feature spaces. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 561–568. 711
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 712–721, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Learning Sub-Word Units for Open Vocabulary Speech Recognition Carolina Parada1, Mark Dredze1, Abhinav Sethy2, and Ariya Rastrow1 1Human Language Technology Center of Excellence, Johns Hopkins University 3400 N Charles Street, Baltimore, MD, USA [email protected], [email protected], [email protected] 2IBM T.J. Watson Research Center, Yorktown Heights, NY, USA [email protected] Abstract Large vocabulary speech recognition systems fail to recognize words beyond their vocabulary, many of which are information rich terms, like named entities or foreign words. Hybrid word/sub-word systems solve this problem by adding sub-word units to large vocabulary word based systems; new words can then be represented by combinations of subword units. Previous work heuristically created the sub-word lexicon from phonetic representations of text using simple statistics to select common phone sequences. We propose a probabilistic model to learn the subword lexicon optimized for a given task. We consider the task of out of vocabulary (OOV) word detection, which relies on output from a hybrid model. A hybrid model with our learned sub-word lexicon reduces error by 6.3% and 7.6% (absolute) at a 5% false alarm rate on an English Broadcast News and MIT Lectures task respectively. 1 Introduction Most automatic speech recognition systems operate with a large but limited vocabulary, finding the most likely words in the vocabulary for the given acoustic signal. While large vocabulary continuous speech recognition (LVCSR) systems produce high quality transcripts, they fail to recognize out of vocabulary (OOV) words. Unfortunately, OOVs are often information rich nouns, such as named entities and foreign words, and mis-recognizing them can have a disproportionate impact on transcript coherence. Hybrid word/sub-word recognizers can produce a sequence of sub-word units in place of OOV words. Ideally, the recognizer outputs a complete word for in-vocabulary (IV) utterances, and sub-word units for OOVs. Consider the word “Slobodan”, the given name of the former president of Serbia. As an uncommon English word, it is unlikely to be in the vocabulary of an English recognizer. While a LVCSR system would output the closest known words (e.x. “slow it dawn”), a hybrid system could output a sequence of multi-phoneme units: s l ow, b ax, d ae n. The latter is more useful for automatically recovering the word’s orthographic form, identifying that an OOV was spoken, or improving performance of a spoken term detection system with OOV queries. In fact, hybrid systems have improved OOV spoken term detection (Mamou et al., 2007; Parada et al., 2009), achieved better phone error rates, especially in OOV regions (Rastrow et al., 2009b), and obtained state-of-the-art performance for OOV detection (Parada et al., 2010). Hybrid recognizers vary in a number of ways: sub-word unit type: variable-length phoneme units (Rastrow et al., 2009a; Bazzi and Glass, 2001) or joint letter sound sub-words (Bisani and Ney, 2005); unit creation: data-driven or linguistically motivated (Choueiter, 2009); and how they are incorporated in LVCSR systems: hierarchical (Bazzi, 2002) or flat models (Bisani and Ney, 2005). In this work, we consider how to optimally create sub-word units for a hybrid system. 
These units are variable-length phoneme sequences, although in principle our work can be use for other unit types. Previous methods for creating the sub-word lexi712 con have relied on simple statistics computed from the phonetic representation of text (Rastrow et al., 2009a). These units typically represent the most frequent phoneme sequences in English words. However, it isn’t clear why these units would produce the best hybrid output. Instead, we introduce a probabilistic model for learning the optimal units for a given task. Our model learns a segmentation of a text corpus given some side information: a mapping between the vocabulary and a label set; learned units are predictive of class labels. In this paper, we learn sub-word units optimized for OOV detection. OOV detection aims to identify regions in the LVCSR output where OOVs were uttered. Towards this goal, we are interested in selecting units such that the recognizer outputs them only for OOV regions while prefering to output a complete word for in-vocabulary regions. Our approach yields improvements over state-of-the-art results. We begin by presenting our log-linear model for learning sub-word units with a simple but effective inference procedure. After reviewing existing OOV detection approaches, we detail how the learned units are integrated into a hybrid speech recognition system. We show improvements in OOV detection, and evaluate impact on phone error rates. 2 Learning Sub-Word Units Given raw text, our objective is to produce a lexicon of sub-word units that can be used by a hybrid system for open vocabulary speech recognition. Rather than relying on the text alone, we also utilize side information: a mapping of words to classes so we can optimize learning for a specific task. The provided mapping assigns labels Y to the corpus. We maximize the probability of the observed labeling sequence Y given the text W: P(Y |W). We assume there is a latent segmentation S of this corpus which impacts Y . The complete data likelihood becomes: P(Y |W) = P S P(Y, S|W) during training. Since we are maximizing the observed Y , segmentation S must discriminate between different possible labels. We learn variable-length multi-phone units by segmenting the phonetic representation of each word in the corpus. Resulting segments form the subword lexicon.1 Learning input includes a list of words to segment taken from raw text, a mapping between words and classes (side information indicating whether token is IV or OOV), a pronunciation dictionary D, and a letter to sound model (L2S), such as the one described in Chen (2003). The corpus W is the list of types (unique words) in the raw text input. This forces each word to have a unique segmentation, shared by all common tokens. Words are converted into phonetic representations according to their most likely dictionary pronunciation; non-dictionary words use the L2S model.2 2.1 Model Inspired by the morphological segmentation model of Poon et al. (2009), we assume P(Y, S|W) is a log-linear model parameterized by Λ: PΛ(Y, S|W) = 1 Z(W)uΛ(Y, S, W) (1) where uΛ(Y, S, W) defines the score of the proposed segmentation S for words W and labels Y according to model parameters Λ. Sub-word units σ compose S, where each σ is a phone sequence, including the full pronunciation for vocabulary words; the collection of σs form the lexicon. Each unit σ is present in a segmentation with some context c = (φl, φr) of the form φlσφr. Features based on the context and the unit itself parameterize uΛ. 
In addition to scoring a segmentation based on features, we include two priors inspired by the Minimum Description Length (MDL) principle suggested by Poon et al. (2009). The lexicon prior favors smaller lexicons by placing an exponential prior with negative weight on the length of the lexicon P σ |σ|, where |σ| is the length of the unit σ in number of phones. Minimizing the lexicon prior favors a trivial lexicon of only the phones. The corpus prior counters this effect, an exponential prior with negative weight on the number of units in each word’s segmentation, where |si| is the segmentation length and |wi| is the length of the word in phones. Learning strikes a balance between the two priors. Using these definitions, the segmentation score uΛ(Y, S, W) is given as: 1Since sub-word units can expand full-words, we refer to both words and sub-words simply as units. 2The model can also take multiple pronunciations (§3.1). 713 s l ow b ax d ae n s l ow (#,#, , b, ax) b ax (l,ow, , d, ae) d ae n (b,ax, , #, #) Figure 1: Units and bigram phone context (in parenthesis) for an example segmentation of the word “slobodan”. uΛ(Y, S, W) = exp X σ,y λσ,yfσ,y(S, Y ) + X c,y λc,yfc,y(S, Y ) + α · X σ∈S |σ| + β · X i∈W |si|/|wi| ! (2) fσ,y(S, Y ) are the co-occurrence counts of the pair (σ, y) where σ is a unit under segmentation S and y is the label. fc,y(S, Y ) are the co-occurrence counts for the context c and label y under S. The model parameters are Λ = {λσ,y, λc,y : ∀σ, c, y}. The negative weights for the lexicon (α) and corpus priors (β) are tuned on development data. The normalizer Z sums over all possible segmentations and labels: Z(W) = X S′ X Y ′ uΛ(Y ′, S′, W) (3) Consider the example segmentation for the word “slobodan” with pronunciation s,l,ow,b,ax,d,ae,n (Figure 1). The bigram phone context as a four-tuple appears below each unit; the first two entries correspond to the left context, and last two the right context. The example corpus (Figure 2) demonstrates how unit features fσ,y and context features fc,y are computed. 3 Model Training Learning maximizes the log likelihood of the observed labels Y ∗given the words W: ℓ(Y ∗|W) = log X S 1 Z(W)uΛ(Y ∗, S, W) (4) We use the Expectation-Maximization algorithm, where the expectation step predicts segmentations S Labeled corpus: president/y = 0 milosevic/y = 1 Segmented corpus: p r eh z ih d ih n t/0 m ih/1 l aa/1 s ax/1 v ih ch/1 Unit-feature:Value p r eh z ih d ih n t/0:1 m ih/1:1 l aa/1:1 s ax/1:1 v ih ch/1:1 Context-feature:Value (#/0,#/0, ,l/1,aa/1):1, (m/1,ih/1, ,s/1,ax/1):1, (l/1,aa/1, ,v/1,ih/1):1, (s/1,ax/1, ,#/0,#/0):1, (#/0,#/0, ,#/0,#/0):1 Figure 2: A small example corpus with segmentations and corresponding features. The notation m ih/1:1 represents unit/label:feature-value. Overlapping context features capture rich segmentation regularities associated with each class. given the model’s current parameters Λ (§3.1), and the maximization step updates these parameters using gradient ascent. The partial derivatives of the objective (4) with respect to each parameter λi are: ∂ℓ(Y ∗|W) ∂λi = ES|Y ∗,W [fi] −ES,Y |W [fi] (5) The gradient takes the usual form, where we encourage the expected segmentation from the current model given the correct labels to equal the expected segmentation and expected labels. The next section discusses computing these expectations. 3.1 Inference Inference is challenging since the lexicon prior renders all word segmentations interdependent. 
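To make Eqs. (1)-(2) concrete, the following is a minimal sketch (not the authors' implementation; the toy corpus, the feature weights, and the simplified two-phone context extraction are invented for illustration) of the unnormalized score uΛ(Y, S, W). Because the lexicon prior is computed over the set of distinct units used anywhere in the corpus segmentation, changing one word's segmentation changes the score of the entire corpus, which is exactly the interdependence noted above.

import math

def u_lambda(segmentation, labels, unit_w, ctx_w, alpha, beta):
    """Unnormalized score u_Lambda(Y, S, W) of Eq. (2) for a toy corpus.

    segmentation: one entry per word, each a list of units (tuples of phones)
    labels:       one label y per word (0 = in-vocabulary, 1 = OOV)
    unit_w/ctx_w: feature weights lambda_{sigma,y} / lambda_{c,y} (default 0)
    alpha, beta:  negative weights of the lexicon and corpus priors
    """
    score = 0.0
    lexicon = set()                                  # distinct units used anywhere in S
    for units, y in zip(segmentation, labels):
        phones = [p for u in units for p in u]
        for i, u in enumerate(units):
            left = units[i - 1][-2:] if i > 0 else ('#', '#')          # bigram left context
            right = units[i + 1][:2] if i + 1 < len(units) else ('#', '#')  # bigram right context
            score += unit_w.get((u, y), 0.0)                 # f_{sigma,y} feature
            score += ctx_w.get((left, right, y), 0.0)        # f_{c,y} feature
            lexicon.add(u)
        score += beta * len(units) / len(phones)     # corpus prior term |s_i| / |w_i|
    score += alpha * sum(len(u) for u in lexicon)    # lexicon prior over the shared lexicon
    return math.exp(score)

# toy corpus in the spirit of Figure 2; the weights are made up
seg = [[('p', 'r', 'eh', 'z', 'ih', 'd', 'ih', 'n', 't')],               # "president", kept whole
       [('m', 'ih'), ('l', 'aa'), ('s', 'ax'), ('v', 'ih', 'ch')]]       # "milosevic", split
print(u_lambda(seg, [0, 1], unit_w={(('m', 'ih'), 1): 1.5}, ctx_w={},
               alpha=-0.1, beta=-1.0))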
Consider a simple two word corpus: cesar (s,iy,z,er), and cesium (s,iy,z,iy,ax,m). Numerous segmentations are possible; each word has 2N−1 possible segmentations, where N is the number of phones in its pronunciation (i.e., 23 × 25 = 256). However, if we decide to segment the first word as: {s iy, z er}, then the segmentation for “cesium”:{s iy, z iy ax m} will incur a lexicon prior penalty for including the new segment z iy ax m. If instead we segment “cesar” as {s iy z, er}, the segmentation {s iy, z iy ax m} incurs double penalty for the lexicon prior (since we are including two new units in the lexicon: s iy and z iy ax m). This dependency requires joint segmentation of the entire corpus, which is intractable. Hence, we resort to approximations of the expectations in Eq. (5). One approach is to use Gibbs Sampling: iterating through each word, sampling a new seg714 mentation conditioned on the segmentation of all other words. The sampling distribution requires enumerating all possible segmentations for each word (2N−1) and computing the conditional probabilities for each segmentation: P(S|Y ∗, W) = P(Y ∗, S|W)/P(Y ∗|W) (the features are extracted from the remaining words in the corpus). Using M sampled segmentations S1, S2, . . . Sm we compute ES|Y ∗,W [fi] as follows: ES|Y ∗,W [fi] ≈1 M X j fi[Sj] Similarly, to compute ES,Y |W we sample a segmentation and a label for each word. We compute the joint probability of P(Y, S|W) for each segmentation-label pair using Eq. (1). A sampled segmentation can introduce new units, which may have higher probability than existing ones. Using these approximations in Eq. (5), we update the parameters using gradient ascent: ¯λnew = ¯λold + γ∇ℓ¯λ(Y ∗|W) where γ > 0 is the learning rate. To obtain the best segmentation, we use deterministic annealing. Sampling operates as usual, except that the parameters are divided by a value, which starts large and gradually drops to zero. To make burn in faster for sampling, the sampler is initialized with the most likely segmentation from the previous iteration. To initialize the sampler the first time, we set all the parameters to zero (only the priors have non-zero values) and run deterministic annealing to obtain the first segmentation of the corpus. 3.2 Efficient Sampling Sampling a segmentation for the corpus requires computing the normalization constant (3), which contains a summation over all possible corpus segmentations. Instead, we approximate this constant by sampling words independently, keeping fixed all other segmentations. Still, even sampling a single word’s segmentation requires enumerating probabilities for all possible segmentations. We sample a segmentation efficiently using dynamic programming. We can represent all possible segmentations for a word as a finite state machine (FSM) (Figure 3), where arcs weights arise from scoring the segmentation’s features. This weight is the negative log probability of the resulting model after adding the corresponding features and priors. However, the lexicon prior poses a problem for this construction since the penalty incurred by a new unit in the segmentation depends on whether that unit is present elsewhere in that segmentation. For example, consider the segmentation for the word ANJANI: AA N, JH, AA N, IY. If none of these units are in the lexicon, this segmentation yields the lowest prior penalty since it repeats the unit AA N. 
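For illustration, the brute-force enumeration that the FSM construction avoids materializing can be sketched as follows; each of the 2^(N-1) paths through the FSM of Figure 3 corresponds to one of these unit lists (for the two-word cesar/cesium corpus above, 2^3 x 2^5 = 256 joint segmentations). This is a toy sketch, not part of the sampler itself.

def segmentations(phones):
    """Yield every way to split a phone sequence into contiguous units.

    There are 2**(len(phones) - 1) of them: one binary choice
    (boundary or no boundary) between each pair of adjacent phones.
    """
    if len(phones) == 1:
        yield [tuple(phones)]
        return
    first, rest = phones[0], phones[1:]
    for seg in segmentations(rest):
        # no boundary after the first phone: merge it into the following unit
        yield [(first,) + seg[0]] + seg[1:]
        # boundary after the first phone: it becomes its own unit
        yield [(first,)] + seg

cesar = ['s', 'iy', 'z', 'er']
segs = list(segmentations(cesar))
print(len(segs))          # 8 == 2**(4 - 1)
print(segs[0], segs[-1])  # one whole-word unit ... four single-phone units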
3 This global dependency means paths must encode the full unit history, making computing forward-backward probabilities inefficient. Our solution is to use the Metropolis-Hastings algorithm, which samples from the true distribution P(Y, S|W) by first sampling a new label and segmentation (y′, s′) from a simpler proposal distribution Q(Y, S|W). The new assignment (y′, s′) is accepted with probability: α(Y ′, S′|Y, S, W)=min „ 1, P(Y ′, S′|W)Q(Y, S|Y ′, S′, W) P(Y, S|W)Q(Y ′, S′|Y, S, W) « We choose the proposal distribution Q(Y, S|W) as Eq. (1) omitting the lexicon prior, removing the challenge for efficient computation. The probability of accepting a sample becomes: α(Y ′, S′|Y, S, W)=min „ 1, P σ∈S′ |σ| P σ∈S |σ| « (6) We sample a path from the FSM by running the forward-backward algorithm, where the backward computations are carried out explicitly, and the forward pass is done through sampling, i.e. we traverse the machine only computing forward probabilities for arcs leaving the sampled state.4 Once we sample a segmentation (and label) we accept it according to Eq. (6) or keep the previous segmentation if rejected. Alg. 1 shows our full sub-word learning procedure, where sampleSL (Alg. 2) samples a segmentation and label sequence for the entire corpus from P(Y, S|W), and sampleS samples a segmentation from P(S|Y ∗, W). 3Splitting at phone boundaries yields the same lexicon prior but a higher corpus prior. 4We use OpenFst’s RandGen operation with a costumed arcselector (http://www.openfst.org/). 715 0 1 AA 5 AA_N_JH_AA_N 4 AA_N_JH_AA 3 AA_N_JH 2 AA_N N_JH_AA_N N_JH_AA N_JH N 6 N_JH_AA_N_IY IY N AA_N AA AA_N_IY JH_AA_N JH_AA JH JH_AA_N_IY Figure 3: FSM representing all segmentations for the word ANJANI with pronunciation: AA,N,JH,AA,N,IY Algorithm 1 Training Input: Lexicon L from training text W, Dictionary D, Mapping M, L2S pronunciations, Annealing temp T. Initialization: Assign label y∗ m = M[wm]. ¯λ0 = ¯0 S0 = random segmentation for each word in L. for i = 1 to K do /* E-Step */ Si = bestSegmentation(T, λi−1, Si−1). for k = 1 to NumSamples do (S′ k, Y ′ k) = sampleSL(P(Y, Si|W),Q(Y, Si|W)) ˜Sk = sampleS(P(Si|Y ∗, W),Q(Si|Y ∗, W)) end for /* M-Step */ ES,Y |W [fi] = 1 NumSamples P k fσ,l[S′ k, Y ′ k] ES|Y ∗,W [fσ,l] = 1 NumSamples P k fσ,l[ ˜Sk, Y ∗] ¯λi = ¯λi−1 + γ∇L¯λ(Y ∗|W) end for S = bestSegmentation(T, λK, S0) Output: Lexicon Lo from S 4 OOV Detection Using Hybrid Models To evaluate our model for learning sub-word units, we consider the task of out-of-vocabulary (OOV) word detection. OOV detection for ASR output can be categorized into two broad groups: 1) hybrid (filler) models: which explicitly model OOVs using either filler, sub-words, or generic word models (Bazzi, 2002; Schaaf, 2001; Bisani and Ney, 2005; Klakow et al., 1999; Wang, 2009); and 2) confidence-based approaches: which label unreliable regions as OOVs based on different confidence scores, such as acoustic scores, language models, and lattice scores (Lin et al., 2007; Burget et al., 2008; Sun et al., 2001; Wessel et al., 2001). In the next section we detail the OOV detection approach we employ, which combines hybrid and Algorithm 2 sampleSL(P(S, Y |W), Q(S, Y |W)) for m = 1 to M (NumWords) do (s′ m, y′ m) = Sample segmentation/label pair for word wm according to Q(S, Y |W) Y ′ = {y1 . . . ym−1y′ mym+1 . . . yM} S′ = {s1 . . . sm−1s′ msm+1 . . . sM} α=min 1, P σ∈S′ |σ| P σ∈S |σ| with prob α : ym,k = y′ m, sm,k = s′ m with prob (1 −α) : ym,k = ym, sm,k = sm end for return (S′ k, Y ′ k) = [(s1,k, y1,k) . . . 
(sM,k, yM,k)] confidence-based models, achieving state-of-the art performance for this task. 4.1 OOV Detection Approach We use the state-of-the-art OOV detection model of Parada et al. (2010), a second order CRF with features based on the output of a hybrid recognizer. This detector processes hybrid recognizer output, so we can evaluate different sub-word unit lexicons for the hybrid recognizer and measure the change in OOV detection accuracy. Our model (§2.1) can be applied to this task by using a dictionary D to label words as IV (yi = 0 if wi ∈D) and OOV (yi = 1 if wi /∈D). This results in a labeled corpus, where the labeling sequence Y indicates the presence of out-of-vocabulary words (OOVs). For comparison we evaluate a baseline method (Rastrow et al., 2009b) for selecting units. Given a sub-word lexicon, the word and subwords are combined to form a hybrid language model (LM) to be used by the LVCSR system. This hybrid LM captures dependencies between word and sub-words. In the LM training data, all OOVs are represented by the smallest number of sub-words which corresponds to their pronunciation. Pronunciations for all OOVs are obtained using grapheme 716 to phone models (Chen, 2003). Since sub-words represent OOVs while building the hybrid LM, the existence of sub-words in ASR output indicate an OOV region. A simple solution to the OOV detection problem would then be reduced to a search for the sub-words in the output of the ASR system. The search can be on the one-best transcripts, lattices or confusion networks. While lattices contain more information, they are harder to process; confusion networks offer a trade-off between richness (posterior probabilities are already computed) and compactness (Mangu et al., 1999). Two effective indications of OOVs are the existence of sub-words (Eq. 7) and high entropy in a network region (Eq. 8), both of which are used as features in the model of Parada et al. (2010). Sub-word Posterior= X σ∈tj p(σ|tj) (7) Word-Entropy=− X w∈tj p(w|tj) log p(w|tj) (8) tj is the current bin in the confusion network and σ is a sub-word in the hybrid dictionary. Improving the sub-word unit lexicon, improves the quality of the confusion networks for OOV detection. 5 Experimental Setup We used the data set constructed by Can et al. (2009) (OOVCORP) for the evaluation of Spoken Term Detection of OOVs since it focuses on the OOV problem. The corpus contains 100 hours of transcribed Broadcast News English speech. There are 1290 unique OOVs in the corpus, which were selected with a minimum of 5 acoustic instances per word and short OOVs inappropriate for STD (less than 4 phones) were explicitly excluded. Example OOVs include: NATALIE, PUTIN, QAEDA, HOLLOWAY, COROLLARIES, HYPERLINKED, etc. This resulted in roughly 24K (2%) OOV tokens. For LVCSR, we used the IBM Speech Recognition Toolkit (Soltau et al., 2005)5 to obtain a transcript of the audio. Acoustic models were trained on 300 hours of HUB4 data (Fiscus et al., 1998) and utterances containing OOV words as marked in OOVCORP were excluded. The language model was trained on 400M words from various text sources 5The IBM system used speaker adaptive training based on maximum likelihood with no discriminative training. with a 83K word vocabulary. The LVCSR system’s WER on the standard RT04 BN test set was 19.4%. Excluded utterances amount to 100hrs. These were divided into 5 hours of training for the OOV detector and 95 hours of test. Note that the OOV detector training set is different from the LVCSR training set. 
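As a concrete illustration of Eqs. (7) and (8), a minimal sketch that computes both features for a single confusion-network bin; the toy bin below and the convention of marking sub-word units with a leading '+' are assumptions made for illustration, while in the real system the posteriors come from the hybrid recognizer's confusion networks.

import math

def bin_features(bin_posteriors, is_subword):
    """OOV-indicator features for one confusion-network bin t_j.

    bin_posteriors: dict token -> posterior p(token | t_j), summing to ~1
    is_subword:     predicate marking hybrid-lexicon sub-word units
    """
    # Eq. (7): total posterior mass assigned to sub-word units in the bin
    subword_posterior = sum(p for tok, p in bin_posteriors.items() if is_subword(tok))
    # Eq. (8): entropy (natural log) of the bin's posterior distribution
    word_entropy = -sum(p * math.log(p) for p in bin_posteriors.values() if p > 0)
    return subword_posterior, word_entropy

# toy bin: sub-word units are written with a leading '+' (an assumed convention)
bin_j = {'+s_l_ow': 0.35, '+b_ax': 0.15, 'slow': 0.30, 'it': 0.20}
print(bin_features(bin_j, lambda tok: tok.startswith('+')))
# -> (0.5, ~1.34)

A bin with a large share of sub-word mass and a flat, high-entropy posterior is exactly the kind of region the OOV detector should flag.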
We also use a hybrid LVCSR system, combining word and sub-word units obtained from either our approach or a state-of-the-art baseline approach (Rastrow et al., 2009a) (§5.2). Our hybrid system’s lexicon has 83K words and 5K or 10K sub-words. Note that the word vocabulary is common to both systems and only the sub-words are selected using either approach. The word vocabulary used is close to most modern LVCSR system vocabularies for English Broadcast News; the resulting OOVs are more challenging but more realistic (i.e. mostly named entities and technical terms). The 1290 words are OOVs to both the word and hybrid systems. In addition we report OOV detection results on a MIT lectures data set (Glass et al., 2010) consisting of 3 Hrs from two speakers with a 1.5% OOV rate. These were divided into 1 Hr for training the OOV detector and 2 Hrs for testing. Note that the LVCSR system is trained on Broadcast News data. This outof-domain test-set help us evaluate the cross-domain performance of the proposed and baseline hybrid systems. OOVs in this data set correspond mainly to technical terms in computer science and math. e.g. ALGORITHM, DEBUG, COMPILER, LISP. 5.1 Learning parameters For learning the sub-words we randomly selected from training 5,000 words which belong to the 83K vocabulary and 5,000 OOVs6. For development we selected an additional 1,000 IV and 1,000 OOVs. This was used to tune our model hyper parameters (set to α = −1, β = −20). There is no overlap of OOVs in training, development and test sets. All feature weights were initialized to zero and had a Gaussian prior with variance σ = 100. Each of the words in training and development was converted to their most-likely pronunciation using the dictionary 6This was used to obtain the 5K hybrid system. To learn subwords for the 10K hybrid system we used 10K in-vocabulary words and 10K OOVs. All words were randomly selected from the LM training text. 717 for IV words or the L2S model for OOVs.7 The learning rate was γk = γ (k+1+A)τ , where k is the iteration, A is the stability constant (set to 0.1K), γ = 0.4, and τ = 0.6. We used K = 40 iterations for learning and 200 samples to compute the expectations in Eq. 5. The sampler was initialized by sampling for 500 iterations with deterministic annealing for a temperature varying from 10 to 0 at 0.1 intervals. Final segmentations were obtained using 10, 000 samples and the same temperature schedule. We limit segmentations to those including units of at most 5 phones to speed sampling with no significant degradation in performance. We observed improved performance by dis-allowing whole word units. 5.2 Baseline Unit Selection We used Rastrow et al. (2009a) as our baseline unit selection method, a data driven approach where the language model training text is converted into phones using the dictionary (or a letter-to-sound model for OOVs), and a N-gram phone LM is estimated on this data and pruned using a relative entropy based method. The hybrid lexicon includes resulting sub-words – ranging from unigrams to 5gram phones, and the 83K word lexicon. 5.3 Evaluation We obtain confusion networks from both the word and hybrid LVCSR systems. We align the LVCSR transcripts with the reference transcripts and tag each confusion region as either IV or OOV. The OOV detector classifies each region in the confusion network as IV/OOV. We report OOV detection accuracy using standard detection error tradeoff (DET) curves (Martin et al., 1997). 
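For completeness, a minimal sketch of the learning-rate schedule and the annealing temperature ladder quoted in Section 5.1; the constants are the ones given above, and everything else about the training loop is omitted.

def learning_rate(k, gamma=0.4, A=0.1 * 40, tau=0.6):
    """gamma_k = gamma / (k + 1 + A)^tau, with A = 0.1 * K and K = 40 iterations."""
    return gamma / (k + 1 + A) ** tau

def temperature_schedule(start=10.0, step=0.1):
    """Deterministic-annealing temperatures from 10 down to 0 at 0.1 intervals."""
    n = int(round(start / step))
    return [round(start - i * step, 1) for i in range(n)]   # 10.0, 9.9, ..., 0.1

print([round(learning_rate(k), 4) for k in (0, 10, 39)])    # decaying step sizes
print(len(temperature_schedule()))                          # 100 temperature steps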
DET curves measure tradeoffs between false alarms (x-axis) and misses (y-axis), and are useful for determining the optimal operating point for an application; lower curves are better. Following Parada et al. (2010) we separately evaluate unobserved OOVs.8 7In this work we ignore pronunciation variability and simply consider the most likely pronunciation for each word. It is straightforward to extend to multiple pronunciations by first sampling a pronunciation for each word and then sampling a segmentation for that pronunciation. 8Once an OOV word has been observed in the OOV detector training data, even if it was not in the LVCSR training data, it is no longer truly OOV. 6 Results We compare the performance of a hybrid system with baseline units9 (§5.2) and one with units learned by our model on OOV detection and phone error rate. We present results using a hybrid system with 5k and 10k sub-words. We evaluate the CRF OOV detector with two different feature sets. The first uses only Word Entropy and Sub-word Posterior (Eqs. 7 and 8) (Figure 4)10. The second (context) uses the extended context features of Parada et al. (2010) (Figure 5). Specifically, we include all trigrams obtained from the best hypothesis of the recognizer (a window of 5 words around current confusion bin). Predictions at different FA rates are obtained by varying a probability threshold. At a 5% FA rate, our system (This Paper 5k) reduces the miss OOV rate by 6.3% absolute over the baseline (Baseline 5k) when evaluating all OOVs. For unobserved OOVs, it achieves 3.6% absolute improvement. A larger lexicon (Baseline 10k and This Paper 10k ) shows similar relative improvements. Note that the features used so far do not necessarily provide an advantage for unobserved versus observed OOVs, since they ignore the decoded word/sub-word sequence. In fact, the performance on un-observed OOVs is better. OOV detection improvements can be attributed to increased coverage of OOV regions by the learned sub-words compared to the baseline. Table 1 shows the percent of Hits: sub-word units predicted in OOV regions, and False Alarms: sub-word units predicted for in-vocabulary words. We can see that the proposed system increases the Hits by over 8% absolute, while increasing the False Alarms by 0.3%. Interestingly, the average sub-word length for the proposed units exceeded that of the baseline units by 0.3 phones (Baseline 5K average length was 2.92, while that of This Paper 5K was 3.2). 9Our baseline results differ from Parada et al. (2010). When implementing the lexicon baseline, we discovered that their hybrid units were mistakenly derived from text containing test OOVs. Once excluded, the relative improvements of previous work remain, but the absolute error rates are higher. 10All real-valued features were normalized and quantized using the uniform-occupancy partitioning described in White et al. (2007). We used 50 partitions with a minimum of 100 training values per partition. 718 0 5 10 15 20 %FA 30 35 40 45 50 55 60 65 70 %Misses Baseline (5k) This Paper (5k) Baseline (10k) This Paper (10k) (a) 0 5 10 15 20 %FA 30 35 40 45 50 55 60 65 70 %Misses Baseline (5k) This Paper (5k) Baseline (10k) This Paper (10k) (b) Figure 4: DET curves for OOV detection using baseline hybrid systems for different lexicon size and proposed discriminative hybrid system on OOVCORP data set. Evaluation on un-observed OOVs (a) and all OOVs (b). 
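A DET curve is obtained by sweeping a decision threshold over the detector's output scores and recording the resulting (false alarm, miss) pairs; the sketch below, with toy scores and labels, shows how an operating point such as the 5% false-alarm rate used in this section can be read off. This is an illustration of the evaluation, not the plotting code behind the figures.

def det_points(scores, labels):
    """(FA rate, miss rate) pairs for every threshold; labels: 1 = OOV, 0 = IV."""
    n_oov = sum(labels)
    n_iv = len(labels) - n_oov
    points = []
    for thr in sorted(set(scores)):
        predicted_oov = [s >= thr for s in scores]
        fa = sum(p and not y for p, y in zip(predicted_oov, labels)) / n_iv
        miss = sum((not p) and y for p, y in zip(predicted_oov, labels)) / n_oov
        points.append((fa, miss))
    return points

def miss_at_fa(points, fa_target):
    """Miss rate at the operating point whose FA rate is closest to the target."""
    return min(points, key=lambda p: abs(p[0] - fa_target))[1]

# toy detector output: higher score = more OOV-like
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,   0,   0,   1,   0]
print(miss_at_fa(det_points(scores, labels), fa_target=0.25))   # miss rate at ~25% FA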
0 5 10 15 20 %FA 30 35 40 45 50 55 60 65 70 %Misses Baseline (10k) Baseline (10k) + context-features This Paper (10k) This Paper (10k) + context-features (a) 0 5 10 15 20 %FA 10 20 30 40 50 60 70 80 %Misses Baseline (10k) Baseline (10k) + context-features This Paper (10k) This Paper (10k) + context-features (b) Figure 5: Effect of adding context features to baseline and discriminative hybrid systems on OOVCORP data set. Evaluation on un-observed OOVs (a) and all OOVs (b). Consistent with previously published results, including context achieves large improvement in performance. The proposed hybrid system (This Paper 10k + context-features) still improves over the baseline (Baseline 10k + context-features), however the relative gain is reduced. In this case, we obtain larger gains for un-observed OOVs which benefit less from the context clues learned in training. Lastly, we report OOV detection performance on MIT Lectures. Both the sub-word lexicon and the LVCSR models were trained on Broadcast News data, helping us evaluate the robustness of learned sub-words across domains. Note that the OOVs in these domains are quite different: MIT Lectures’ OOVs correspond to technical computer sciHybrid System Hits FAs Baseline (5k) 18.25 1.49 This Paper (5k) 26.78 1.78 Baseline (10k) 24.26 1.82 This Paper (10k) 28.96 1.92 Table 1: Coverage of OOV regions by baseline and proposed sub-words in OOVCORP. ence and math terms, while in Broadcast News they are mainly named-entities. Figure 6 and 7 show the OOV detection results in the MIT Lectures data set. For un-observed OOVs, the proposed system (This Paper 10k) reduces the miss OOV rate by 7.6% with respect to the baseline (Baseline 10k) at a 5% FA rate. Similar to Broadcast News results, we found that the learned sub-words provide larger coverage of OOV regions in MIT Lectures domain. These results suggest that the proposed sub-words are not simply modeling the training OOVs (named-entities) better than the baseline sub-words, but also describe better novel unexpected words. Furthermore, including context features does not seem as helpful. We conjecture that this is due to the higher WER11 and the less structured nature of the domain: i.e. ungrammatical sentences, disfluencies, incomplete sentences, making it more difficult to predict OOVs based on context. 11WER = 32.7% since the LVCSR system was trained on Broadcast News data as described in Section 5. 719 0 5 10 15 20 %FA 30 40 50 60 70 80 90 %Misses Baseline (5k) This Paper (5k) Baseline (10k) This Paper (10k) (a) 0 5 10 15 20 %FA 30 40 50 60 70 80 90 %Misses Baseline (5k) This Paper (5k) Baseline (10k) This Paper (10k) (b) Figure 6: DET curves for OOV detection using baseline hybrid systems for different lexicon size and proposed discriminative hybrid system on MIT Lectures data set. Evaluation on un-observed OOVs (a) and all OOVs (b). 0 5 10 15 20 %FA 30 40 50 60 70 80 90 %Misses Baseline (10k) Baseline (10k) + context-features This Paper (10k) This Paper (10k) + context-features (a) 0 5 10 15 20 %FA 30 40 50 60 70 80 90 %Misses Baseline (10k) Baseline (10k) + context-features This Paper (10k) This Paper (10k) + context-features (b) Figure 7: Effect of adding context features to baseline and discriminative hybrid systems on MIT Lectures data set. Evaluation on un-observed OOVs (a) and all OOVs (b). 6.1 Improved Phonetic Transcription We consider the hybrid lexicon’s impact on Phone Error Rate (PER) with respect to the reference transcription. 
The reference phone sequence is obtained by doing forced alignment of the audio stream to the reference transcripts using acoustic models. This provides an alignment of the pronunciation variant of each word in the reference and the recognizer’s one-best output. The aligned words are converted to the phonetic representation using the dictionary. Table 2 presents PERs for the word and different hybrid systems. As previously reported (Rastrow et al., 2009b), the hybrid systems achieve better PER, specially in OOV regions since they predict sub-word units for OOVs. Our method achieves modest improvements in PER compared to the hybrid baseline. No statistically significant improvements in PER were observed on MIT Lectures. 7 Conclusions Our probabilistic model learns sub-word units for hybrid speech recognizers by segmenting a text corpus while exploiting side information. Applying our System OOV IV All Word 1.62 6.42 8.04 Hybrid: Baseline (5k) 1.56 6.44 8.01 Hybrid: Baseline (10k) 1.51 6.41 7.92 Hybrid: This Paper (5k) 1.52 6.42 7.94 Hybrid: This Paper (10k) 1.45 6.39 7.85 Table 2: Phone Error Rate for OOVCORP. method to the task of OOV detection, we obtain an absolute error reduction of 6.3% and 7.6% at a 5% false alarm rate on an English Broadcast News and MIT Lectures task respectively, when compared to a baseline system. Furthermore, we have confirmed previous work that hybrid systems achieve better phone accuracy, and our model makes modest improvements over a baseline with a similarly sized sub-word lexicon. We plan to further explore our new lexicon’s performance for other languages and tasks, such as OOV spoken term detection. Acknowledgments We gratefully acknowledge Bhuvaha Ramabhadran for many insightful discussions and the anonymous reviewers for their helpful comments. This work was funded by a Google PhD Fellowship. 720 References Issam Bazzi and James Glass. 2001. Learning units for domain-independent out-of-vocabulary word modeling. In EuroSpeech. Issam Bazzi. 2002. Modelling out-of-vocabulary words for robust speech recognition. Ph.D. thesis, Massachusetts Institute of Technology. M. Bisani and H. Ney. 2005. Open vocabulary speech recognition with flat hybrid models. In INTERSPEECH. L. Burget, P. Schwarz, P. Matejka, M. Hannemann, A. Rastrow, C. White, S. Khudanpur, H. Hermansky, and J. Cernocky. 2008. Combination of strongly and weakly constrained recognizers for reliable detection of OOVS. In ICASSP. D. Can, E. Cooper, A. Sethy, M. Saraclar, and C. White. 2009. Effect of pronounciations on OOV queries in spoken term detection. Proceedings of ICASSP. Stanley F. Chen. 2003. Conditional and joint models for grapheme-to-phoneme conversion. In Eurospeech, pages 2033–2036. G. Choueiter. 2009. Linguistically-motivated subword modeling with applications to speech recognition. Ph.D. thesis, Massachusetts Institute of Technology. Jonathan Fiscus, John Garofolo, Mark Przybocki, William Fisher, and David Pallett, 1998. 1997 English Broadcast News Speech (HUB4). Linguistic Data Consortium, Philadelphia. James Glass, Timothy Hazen, Lee Hetherington, and Chao Wang. 2010. Analysis and processing of lecture audio data: Preliminary investigations. In North American Chapter of the Association for Computational Linguistics (NAACL). Dietrich Klakow, Georg Rose, and Xavier Aubert. 1999. OOV-detection in large vocabulary system using automatically defined word-fragments as fillers. In Eurospeech. Hui Lin, J. Bilmes, D. Vergyri, and K. Kirchhoff. 2007. 
OOV detection by joint word/phone lattice alignment. In ASRU, pages 478–483, Dec. Jonathan Mamou, Bhuvana Ramabhadran, and Olivier Siohan. 2007. Vocabulary independent spoken term detection. In Proceedings of SIGIR. L. Mangu, E. Brill, and A. Stolcke. 1999. Finding consensus among words. In Eurospeech. A. Martin, G. Doddington, T. Kamm, M. Ordowski, and M. Przybocky. 1997. The det curve in assessment of detection task performance. In Eurospeech. Carolina Parada, Abhinav Sethy, and Bhuvana Ramabhadran. 2009. Query-by-example spoken term detection for oov terms. In ASRU. Carolina Parada, Mark Dredze, Denis Filimonov, and Fred Jelinek. 2010. Contextual information improves oov detection in speech. In North American Chapter of the Association for Computational Linguistics (NAACL). H. Poon, C. Cherry, and K. Toutanova. 2009. Unsupervised morphological segmentation with log-linear models. In ACL. Ariya Rastrow, Abhinav Sethy, and Bhuvana Ramabhadran. 2009a. A new method for OOV detection using hybrid word/fragment system. Proceedings of ICASSP. Ariya Rastrow, Abhinav Sethy, Bhuvana Ramabhadran, and Fred Jelinek. 2009b. Towards using hybrid, word, and fragment units for vocabulary independent LVCSR systems. INTERSPEECH. T. Schaaf. 2001. Detection of OOV words using generalized word models and a semantic class language model. In Eurospeech. H. Soltau, B. Kingsbury, L. Mangu, D. Povey, G. Saon, and G. Zweig. 2005. The ibm 2004 conversational telephony system for rich transcription. In ICASSP. H. Sun, G. Zhang, f. Zheng, and M. Xu. 2001. Using word confidence measure for OOV words detection in a spontaneous spoken dialog system. In Eurospeech. Stanley Wang. 2009. Using graphone models in automatic speech recognition. Master’s thesis, Massachusetts Institute of Technology. F. Wessel, R. Schluter, K. Macherey, and H. Ney. 2001. Confidence measures for large vocabulary continuous speech recognition. IEEE Transactions on Speech and Audio Processing, 9(3). Christopher White, Jasha Droppo, Alex Acero, and Julian Odell. 2007. Maximum entropy confidence estimation for speech recognition. In ICASSP. 721
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 722–731, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Computing and Evaluating Syntactic Complexity Features for Automated Scoring of Spontaneous Non-Native Speech Miao Chen Klaus Zechner School of Information Studies NLP & Speech Group Syracuse University Educational Testing Service Syracuse, NY, USA Princeton, NJ, USA [email protected] [email protected] Abstract This paper focuses on identifying, extracting and evaluating features related to syntactic complexity of spontaneous spoken responses as part of an effort to expand the current feature set of an automated speech scoring system in order to cover additional aspects considered important in the construct of communicative competence. Our goal is to find effective features, selected from a large set of features proposed previously and some new features designed in analogous ways from a syntactic complexity perspective that correlate well with human ratings of the same spoken responses, and to build automatic scoring models based on the most promising features by using machine learning methods. On human transcriptions with manually annotated clause and sentence boundaries, our best scoring model achieves an overall Pearson correlation with human rater scores of r=0.49 on an unseen test set, whereas correlations of models using sentence or clause boundaries from automated classifiers are around r=0.2. 1 Introduction Past efforts directed at automated scoring of speech have used mainly features related to fluen cy (e.g., speaking rate, length and distribution of pauses), pronunciation (e.g., using log-likelihood scores from the acoustic model of an Automatic Speech Recognition (ASR) system), or prosody (e.g., information related to pitch contours or syllable stress) (e.g., Bernstein, 1999; Bernstein et al., 2000; Bernstein et al., 2010; Cucchiarini et al., 1997; Cucchiarini et al., 2000; Franco et al., 2000a; Franco et al., 2000b; Zechner et al., 2007, Zechner et al., 2009). While this approach is a good match to most of the important properties related to low entropy speech (i.e., speech which is highly predictable), such as reading a passage aloud, it lacks many important aspects of spontaneous speech which are relevant to be evaluated both by a human rater and an automated scoring system. Examples of such aspects of speech, which are considered part of the construct1 of “communicative competence (Bachman, 1990), include grammatical accuracy, syntactic complexity, vocabulary diversity, and aspects of spoken discourse structure, e.g., coherence and cohesion. These different aspects of speaking proficiency are often highly correlated in a non-native speaker (Xi and Mollaun, 2006; Bernstein et al., 2010), and so scoring models built solely on features of fluency and pronunciation may achieve reasonably high correlations with holistic human rater scores. However, it is important to point out that such systems would still be unable to assess many important aspects of the speaking construct and therefore cannot be seen as ideal from a validity point of view.2 The purpose of this paper is to address one of these important aspects of spoken language in more detail, namely syntactic complexity. This paper can be seen as a first step toward including 1 A construct is a set of knowledge, skills, and abilities measured by a test. 
2 “Construct validity” refers to the extent that a test measures what it is designed to measure, in this case, communicative competence via speaking. 722 features related to this part of the speaking construct into an already existing automated speech scoring system for spontaneous speech which so far mostly uses features related to fluency and pronunciation (Zechner et al., 2009). We use data from the speaking section of the TOEFL® Practice Online (TPO) test, which is a low stakes practice test for non-native speakers where they are asked to provide six spontaneous speech samples of about one minute in length each in response to a variety of prompts. Some prompts may be simple questions, and others may involve reading or listening to passages first and then answering related questions. All responses were scored holistically by human raters according to pre-defined scoring rubrics (i.e., specific scoring guidelines) on a scale of 1 to 4, 4 being the highest proficiency level. In our automated scoring system, the first component is an ASR system that decodes the digitized speech sample, generating a time-annotated hypothesis for every response. Next, fluency and pronunciation features are computed based on the ASR output hypotheses, and finally a multiple regression scoring model, trained on human rater scores, computes the score for a given spoken response (see Zechner et al. (2009) for more details). We conducted the study in three steps: (1) finding important measures of syntactic complexity from second language acquisition (SLA) and English language learning (ELL) literature, and extending this feature set based on our observations of the TPO data in analogous ways; (2) computing features based on transcribed speech responses and selecting features with highest correlations to human rater scores, also considering their comparative values for native speakers taking the same test; and (3) building scoring models for the selected sub-set of the features to generate a proficiency score for each speaker, using all six responses of that speaker. In the remainder of the paper, we will address related work in syntactic complexity (Section 2), introduce the speech data sets of our study (Section 3), describe the methods we used for feature extraction (Section 4), provide the experiment design and results (Section 5), analyze and discuss the results in Section 6, before concluding the paper (Section 7). 2 Related Work 2.1 Literature on Syntactic Complexity Syntactic complexity is defined as “the range of forms that surface in language production and the degree of sophistication of such forms” (Ortega, 2003). It is an important factor in the second language assessment construct as described in Bachman’s (1990) conceptual model of language ability, and therefore is often used as an index of language proficiency and development status of L2 learners. Various studies have proposed and investigated measures of syntactic complexity as well as examined its predictiveness for language proficiency, in both L2 writing and speaking settings, which will be reviewed respectively. Writing Wolfe-Quintero et al. (1998) reviewed a number of grammatical complexity measures in L2 writing from thirty-nine studies, and their usage for predicting language proficiency was discussed. Some examples of syntactic complexity measures are: mean number of clauses per T-unit3 3 T-units are defined as “shortest grammatically allowable sentences into which (writing can be split) or minimally terminable units” (Hunt, 1965:20). 
, mean length of clauses, mean number of verbs per sentence, etc. The various measures can be grouped into two categories: (1) clauses, sentences, and T-units in terms of each other; and (2) specific grammatical structures (e.g., passives, nominals) in relation to clauses, sentences, or T-units (Wolfe-Quintero et al., 1998). Three primary methods of calculating syntactic complexity measures are frequency, ratio, and index, where frequency is the count of occurrences of a specific grammatical structure, ratio is the number of one type of unit divided by the total number of another unit, and index is computing numeric scores by specific formulae (WolfeQuintero et al., 1998). For example, the measure “mean number of clauses per T-unit” is obtained by using the ratio calculation method and the clause and T-unit grammatical structures. Some structures such as clauses and T-units only need shallow linguistic processing to acquire, while some require parsing. There are numerous combinations for measures and we need empirical evi723 dence to select measures with the highest performance. There have been a series of empirical studies examining the relationship of syntactic complexity measures to L2 proficiency using real-world data (Cooper, 1976; Larsen-Freeman, 1978; Perkins, 1980; Ho-Peng, 1983; Henry, 1996; Ortega, 2003; Lu, 2010). The studies investigate measures that highly correlate with proficiency levels or distinguish between different proficiency levels. Many T-unit related measures were identified as statistically significant indicators to L2 proficiency, such as mean length of T-unit (Henry, 1996; Lu, 2010), mean number of clauses per T-unit (Cooper, 1976; Lu, 2010), mean number of complex nominals per T-unit (Lu, 2010), or the mean number of errorfree T-units per sentence (Ho-Peng, 1983). Other significant measures are mean length of clause (Lu, 2010), or frequency of passives in composition (Kameen, 1979). Speaking Syntactic complexity analysis in speech mainly inherits measures from the writing domain, and the abovementioned measures can be employed in the same way on speech transcripts for complexity computation. A series of studies have examined relations between the syntactic complexity of speech and the speakers’ holistic speaking proficiency levels (Halleck, 1995; Bernstein et al., 2010; Iwashita, 2006). Three objective measures of syntactic complexity, including mean T-unit length, mean error-free T-unit length, and percent of error-free T-units were found to correlate with holistic evaluations of speakers in Halleck (1995). Iwashita’s (2006) study on Japanese L2 speakers found that length-based complexity features (i.e., number of T-units and number of clauses per Tunit) are good predictors for oral proficiency. In studies directly employing syntactic complexity measures in other contexts, ratio-based measures are frequently used. Examples are mean length of utterance (Condouris et al., 2003), word count or tree depth (Roll et al., 2007), or mean length of Tunits and mean number of clauses per T-unit (Bernstein et al., 2010). Frequency-based measures were used less, such as number of full phrases in Roll et al. (2007). The speaking output is usually less clean than writing data (e.g., considering disfluencies such as false starts, repetitions, filled pauses etc.). Therefore we may need to remove these disfluencies first before computing syntactic complexity features. 
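To make the ratio-based calculation concrete, the following is a minimal sketch that computes mean length of T-unit, mean number of clauses per T-unit, and mean clause length from a response whose clause and T-unit boundaries are already marked; the nested-list input format is an assumption made for illustration and simply stands in for the boundary annotations discussed later.

def complexity_ratios(tunits):
    """Ratio-based measures from a response split into T-units,
    where each T-unit is a list of clauses and each clause a list of words."""
    n_tunits = len(tunits)
    n_clauses = sum(len(t) for t in tunits)
    n_words = sum(len(c) for t in tunits for c in t)
    return {
        'MLT (mean length of T-unit, in words)': n_words / n_tunits,
        'C/T (mean clauses per T-unit)': n_clauses / n_tunits,
        'MLC (mean length of clause, in words)': n_words / n_clauses,
    }

# toy response, disfluencies already removed; two T-units, three clauses
response = [
    [['i', 'moved', 'to', 'the', 'city'],
     ['because', 'i', 'wanted', 'a', 'better', 'job']],   # T-unit containing a dependent clause
    [['my', 'family', 'stayed', 'behind']],               # simple sentence = one T-unit
]
print(complexity_ratios(response))   # -> MLT 7.5, C/T 1.5, MLC 5.0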
Also, importantly, ASR output does not contain interpunctuation but both for sentential-based features as well as for parser-based features, the boundaries of clauses and sentences need to be known. For this purpose, we will use automated classifiers that are trained to predict clause and sentence boundaries, as described in Chen et al. (2010). With previous studies providing us a rich pool of complexity features, additionally we also develop features analogous to the ones from the literature, mostly by using different calculation methods. For instance, the frequency of Prepositional Phrases (PPs) is a feature from the literature, and we add some variants such as number of PPs per clause as a new feature to our extended feature set. 2.2 Devising the Initial Feature Set Through this literature review, we identified some important features that were frequently used in previous studies in both L2 speaking and writing, such as length of sentences and number of clauses per sentence. In addition, we also collected candidate features that were less frequently mentioned in the literature, in order to start with a larger field of potential candidate features. We further extended the feature set by inspecting our data, described in the following section, and created suitable additional features by means of analogy. This process resulted in a set of 91 features, 11 of which are related to clausal and sentential unit measurements (frequency-based) and 80 to measurements within such units (ratio-based). From the perspective of extracting measures, in our study, some measures can be computed using only clause and sentence boundary information, and some can be derived only if the spoken responses are syntactically parsed. In our feature set, there are two types of features: clause and sentence boundary based (26 in total) and parsing based (65). The features will be described in detail in Section 4. 3 Data Our data set contains (1) 1,060 non-native speech responses of 189 speakers from the TPO test (NN set), and (2) 100 responses from 48 native speakers that took the same test (Nat set). All responses were verbatim transcribed manually and scored 724 holistically by human raters. (We only made use of the scores for the non-native data set in this study, since we purposefully selected speakers with perfect or near perfect scores for the Nat set from a larger native speech data set.) As mentioned above, there are four proficiency levels for human scoring, levels 1 to 4, with higher levels indicating better speaking proficiency. The NN set was randomly partitioned into a training (NN-train) and a test set with 760 and 300 responses, respectively, and no speaker overlap. Data Set Responses Speakers Responses per Speaker (average) NNtrain 760 137 5.55 Description: used to train sentence and clause boundary detectors, evaluate features and train scoring models 1: NNtest-1Hum 300 52 5.77 Description: human transcriptions and annotations of sentence and clause boundaries 2: NNtest-2CB 300 52 5.77 Description: human transcriptions, automatically predicted clause boundaries 3: NNtest-3SB 300 52 5.77 Description: human transcriptions, automatically predicted sentence boundaries 4: NNtest-4ASRCB 300 52 5.77 Description: ASR hypotheses, automatically predicted clause boundaries 5: NNtest-5ASRSB 300 52 5.77 Description: ASR hypotheses, automatically predicted sentence boundaries Table 1. Overview of non-native data sets. A second version of the test set contains ASR hypotheses instead of human transcriptions. 
The word error rate (WER4 4 Word error rate (WER) is the ratio of errors from a string between the ASR hypothesis and the reference transcript, where the sum of substitutions, insertions, and deletions is ) on this data set is 50.5%. We used a total of five variants of the test sets, as described in Table 1. Sets 1-3 are based on human transcriptions, whereas sets 4 and 5 are based on ASR output. Further, set 1 contains human annotated clause and sentence boundaries, whereas the other 4 sets have clause or sentence boundaries predicted by a classifier. All human transcribed files from the NN data set were annotated for clause boundaries, clause types, and disfluencies by human annotators (see Chen et al. (2010)). For the Nat data set, all of the 100 transcribed responses were annotated in the same manner by a human annotator. They are not used for any training purposes but serve as a comparative reference for syntactic complexity features derived from the non-native corpus. The NN-train set was used both for training clause and sentence boundary classifiers, as well as for feature selection and training of the scoring models. The two boundary detectors were machine learning based Hidden Markov Models, trained by using a language model derived from the 760 training files which had sentence and clause boundary labels (NN-train; see also Chen et al. (2010)). Since a speaker’s response to a single test item can be quite short (fewer than 100 words in many cases), it may contain only very few syntactic complexity features we are looking for. (Note that much of the previous work focused on written language with much longer texts to be considered.) However, if we aggregate responses of a single speaker, we have a better chance of finding a larger number of syntactic complexity features in the aggregated file. Therefore we joined files from the same speaker to one file for the training set and the five test sets, resulting in 52 aggregated files in each test set. Accordingly, we averaged the response scores of a single speaker to obtain the total speaker score to be used later in scoring model training and evaluation (Section 5).5 While disfluencies were used for the training of the boundary detectors, they were removed afterwards from the annotated data sets to obtain a tran divided by the length of the reference. To obtain WER in percent, this ratio is multiplied by 100.0. 5 Although in most operational settings, features are derived from single responses, this may not be true in all cases. Furthermore, scores of multiple responses are often combined for score reporting, which would make such an approach easier to implement and argue for operationally. 725 scription which is “cleaner” and lends itself better to most of the feature extraction methods we use. 4 Feature Extraction 4.1 Feature Set As mentioned in Section 2, we gathered 91 candidate syntactic complexity features based on our literature review as initial feature set, which is grouped into two categories: (1) Clause and sentence Boundary based features (CB features); and (2) Parse Tree based features (PT features). Clause based features are based on both clause boundaries and clause types and can be generated from human clause annotations, e.g., “frequency of adjective clauses6 We first selected features showing high correlation to human assigned scores. 
In this process the CB features were computed from human labeled clause boundaries in transcripts for best accuracy, and PT features were calculated from using parsing and other tools because we did not have human parse tree annotations for our data. per one thousand words”, “mean number of dependent clauses per clause”, etc. Parse tree based features refer to features that are generated from parse trees and cannot be extracted from human annotated clauses directly. We used the Stanford Parser (Klein and Manning, 2003) in conjunction with the Stanford Tregex package (Levy and Andrew, 2006) which supports using rules to extract specific configurations from parse trees, in a package put together by Lu (Lu, 2011). When given a sentence, the Stanford Parser outputs its grammatical structure by grouping words (and phrases) in a tree structure and identifies grammatical roles of words and phrases. Tregex is a tree query tool that takes Stanford parser trees as input and queries the trees to find subtrees that meet specific rules written in Tregex syntax (Levy and Andrew, 2006). It uses relational operators regulated by Tregex, for example, “A << B” stands for “subtree A dominates subtree B”. The operators primarily function in subtree precedence, dominance, negation, regular expression, tree node identity, headship, or variable groups, among others (Levy and Andrew, 2006). 6 An adjective clause is a clause that functions as an adjective in modifying a noun. E.g., “This cat is a cat that is difficult to deal with.” Lu’s tool (Lu, 2011), built upon the Stanford Parser and Tregex, does syntactic complexity analysis given textual data. Lu’s tool contributed 8 of the initial CB features and 6 of the initial PT features, and we computed the remaining CB and PT features using Perl scripts, the Stanford Parser, and Tregex. Table 2 lists the sub-set of 17 features (out of 91 features total) that were used for building the scoring models described later (Section 5). 4.2 Feature Selection We determined the importance of the features by computing each feature’s correlation with human raters’ proficiency scores based on the training set NN-train. We also used criteria related to the speaking construct, comparisons with native speaker data, and feature inter-correlations. While approaches coming from a pure machine learning perspective would likely use the entire feature pool as input for a classifier, our goal here is to obtain an initial feature set by judicious and careful feature selection that can withstand the scrutiny of construct validity in assessment development. As noted earlier, the disfluencies in the training set had been removed to obtain a “cleaner” text that looks somewhat more akin to a written passage and is easier to process by NLP modules such as parsers and part-of-speech (POS) taggers. 7 7 We are aware that disfluencies can provide valuable clues about spoken proficiency in and of themselves; however, this study is focused exclusively on syntactic complexity analysis, and in this context, disfluencies would distort the picture considerably due to the introduction of parsing errors, e.g. The extracted features partly were taken directly from proposals in the literature and partly were slightly modified to fit our clause annotation scheme. 
In order to have a unified framework for computing syntactic complexity features, we used a combination of the Stanford Parser and Tregex for computing both clause- and sentence-based features as well as parse-tree-based features, i.e., we did not make use of the human clause boundary label annotations here. The only exception to this 726 is that we are using human clause and sentence labels to create a candidate set for the clause boundary features evaluated by the Stanford Parser and Tregex, as explained in the following subsection. 8 Feature type: CB=Clause boundary based feature type, PT=Parse tree based feature type 9A “linguistically meaningful PP” (PP_ling) is defined as a PP immediately dominated by another PP in cases where a preposition contains a noun such as “in spite of” or “in front of”. An example would be “she stood in front of a house” where “in front of a house” would be parsed as two embedded PPs but only the top PP would be counted in this case. 10 A “linguistically meaningful VP” (VP_ling) is defined as a verb phrase immediately dominated by a clausal phrase, in order to avoid VPs embedded in another VP, e.g., "should go to work" is identified as one VP instead of two embedded VPs. 11 The “P-based Sampson” is a raw production-based measure (Sampson, 1997), defined as "proportion of the daughters of a nonterminal node which are themselves nonterminal and nonrightmost, averaged over the nonterminals of a sentence". Clause and Sentence based Features (CB features) Firstly, we extracted all 26 initial CB features directly from human annotated data of NN-train, using information from the clause and sentence type labels. The reasoning behind this was to create an initial pool of clause-based features that reflects the distribution of clauses and sentences as accurately as possible, even though we did not plan to use this extraction method operationally, where the parser decides on clause and sentence types. After computing the values of each CB feature, we calculated correlations between each feature and human-rated scores. Then we created an initial CB feature pool by selecting features that met two criteria: (1) the absolute Pearson correlation coefficient with human scores was larger than 0.2; and (2) the mean value of the feature on non-native speakers was at least 20% lower than that for naName Type8 Meaning Correlation Regression MLS CB Mean length of sentences 0.329 0.101 MLT CB Mean length of T-units 0.300 -0.059 DC/C CB Mean number of dependent clauses per clause 0.291 2.873 SSfreq CB Frequency of simple sentences per 1000 words -.0242 0.001 MLSS CB Mean length of simple sentences 0.255 0.040 ADJCfreq CB Frequency of adjective clauses per 1000 words 0.253 0.004 Ffreq CB Frequency of fragments per 1000 words -0.386 -0.057 MLCC CB Mean length of coordinate clauses 0.224 0.017 CT/T PT Mean number of complex T-units per T-unit 0.248 0.908 PP_ling/S PT Mean number of linguistically meaningful prepositional phrases (PP) per sentence9 0.310 0.423 NP/S PT Mean number of noun phrases (NP) per sentence 0.244 -0.411 CN/S PT Mean number of complex nominal per sentence 0.325 0.653 VB _ling/T PT Mean number of linguistically meaningful10 0.273 verb phrases per T-unit -0.780 PAS/S PT Mean number of passives per sentence 0.260 1.520 DI/T PT Mean number of dependent infinitives per T-unit 0.325 1.550 MLev PT Mean number of parsing tree levels per sentence 0.306 -0.134 MPSam PT Mean P-based Sampson11 0.254 per sentence 0.234 Table 2. 
List of syntactic complexity features selected to be included in building the scoring models. 727 tive speakers in case of positive correlation and at least by 20% higher than for native speakers in case of negative correlation, using the Nat data set for the latter criterion. Note that all of these features were computed without using a parser. This resulted in 13 important features. Secondly, Tregex rules were developed based on Lu’s tool to extract these 13 CB features from parsing results where the parser is provided with one sentence at a time. By applying the same selection criteria as before, except for allowing for correlations above 0.1 and giving preference to linguistically more meaningful features, we found 8 features that matched our criteria: MLS, MLT, DC/C, SSfreq, MLSS, ADJCfreq, Ffreq, MLCC All 28 pairwise inter-correlations between these 8 features were computed and inspected to avoid including features with high inter-correlations in the scoring model. Since we did not find any intercorrelations larger than 0.9, the features were considered moderately independent and none of them were removed from this set so it also maintains linguistic richness for the feature set. Due to the importance of T-units in complexity analysis, we briefly introduce how we obtain them from annotations. Three types of clauses labeled in our transcript can serve as T-units, including simple sentences, independent clauses, and conjunct (coordination) clauses. These clauses were identified in the human-annotated text and extracted as T-units in this phase. T-units in parse trees are identified using rules in Lu’s tool. Parse Tree based Features (PT features) We evaluated 65 features in total and selected features with highest importance using the following two criteria (which are very similar as before): (1) the absolute Pearson correlation coefficient with human scores is larger than 0.2; and (2) the feature mean value on native speakers (Nat) is higher than on score 4 for non-native speakers in case of positive correlation, or lower for negative correlation. 20 of 65 features were found to meet the requirements. Next, we examined inter-correlations between these features and found some correlations larger than 0.85.12 CT/T, PP_ling/S, NP/S, CN/S, VP_ling/T, PAS/S, DI/T, MLev, MPSam For each feature pair exhibiting high inter-correlation, we removed one feature according to the criterion that the removed feature should be linguistically less meaningful than the remaining one. After this filtering, the 9 remaining PT features are: In summary, as a result of the feature selection process, a total of 17 features were identified as important features to be used in scoring models for predicting speakers’ proficiency scores. Among them 8 are clause boundary based and the other 9 are parse tree based. 5 Experiments and Results In the previous section, we identified 17 syntactic features that show promising correlations with human rater speaking proficiency scores. These features as well as the human-rated scores will be used to build scoring models by using machine learning methods. As introduced in Section 3, we have one training set (N=137 speakers with all of their responses combined) for model building and five testing sets (N=52 for each of them) for evaluation. The publicly available machine learning package Weka was used in our experiments (Hall et al. 2009). 
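Before the Weka-specific details, the overall selection-then-scoring workflow can be sketched in a few lines. The fragment below is an illustrative approximation using scikit-learn and scipy in place of Weka; the feature matrix, score vector, and correlation thresholds are synthetic placeholders, and the native-speaker comparison and linguistic-meaningfulness judgments described above are omitted.

# Sketch of the two-stage workflow: correlation-based feature filtering, then a
# multiple-regression scoring model evaluated by its correlation with human
# scores.  All data here is synthetic; Weka's LinearRegression is approximated
# by scikit-learn's LinearRegression.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_feat = 91
X_train = rng.normal(size=(137, n_feat))   # 137 speakers x 91 candidate features
X_test = rng.normal(size=(52, n_feat))     # a held-out set of 52 speakers
true_w = np.zeros(n_feat)
true_w[:5] = [0.8, 0.6, 0.5, -0.4, 0.4]    # pretend a handful of features matter
y_train = X_train @ true_w + rng.normal(scale=1.0, size=137)
y_test = X_test @ true_w + rng.normal(scale=1.0, size=52)

# Criterion (1): keep features whose |Pearson r| with human scores exceeds 0.2.
keep = [j for j in range(n_feat)
        if abs(pearsonr(X_train[:, j], y_train)[0]) > 0.2]

# Inter-correlation pruning: of any pair correlated above 0.9, keep only one.
pruned = []
for j in keep:
    if all(abs(pearsonr(X_train[:, j], X_train[:, k])[0]) <= 0.9 for k in pruned):
        pruned.append(j)

# Fit the multiple-regression scoring model on the surviving features and
# report its correlation with human scores on the held-out set.
model = LinearRegression().fit(X_train[:, pruned], y_train)
r, p = pearsonr(model.predict(X_test[:, pruned]), y_test)
print(f"{len(pruned)} features kept; held-out correlation r={r:.3f} (p={p:.3g})")

The real selection additionally compares feature means against native-speaker data and prefers linguistically meaningful features, considerations that no purely numeric filter of this kind captures.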
We experimented with two algorithms in Weka: multiple regression (called “LinearRegression” in Weka) and decision tree (called “M5P”in Weka). The score values to be predicted are real numbers (i.e., non-integer), because we have to compute the average score of one speaker’s responses. Our initial runs showed that decision tree models were consistently outperformed by multiple regression (MR) models and thus decided to only focus on MR models henceforth. We set the “AttributeSelectionMethod” parameter in Weka’s LinearRegression algorithm to all 3 of its possible values in turn: (Model-1) M5 method; (Model-2) no attribute selection; and (Model3) greedy method. The resulting three multiple regression models were then tested against the five testing sets. Overall, correlations for all models for the NN-test-1-Hum set were between 0.45 and 0.49, correlations for sets NN-test-2-CB and NN- 12 The reason for using a lower threshold than above was to obtain a roughly equal number of CB and PT features in the end. 728 test-3-SB (human transcript based, and using automated boundaries) around 0.2, and for sets NNtest-4-ASR-CB and NN-test-5-ASR-SB (ASR hypotheses, and using automated boundaries), the correlations were not significant. Model-2 (using all 17 features) had the highest correlation on NNtest-1-Hum and we provide correlation results of this model in Table 3. Test set Correlation coefficient Correlation significance (p < 0.05) NN-test-1-Hum 0.488 Significant NN-test-2-CB 0.220 Significant NN-test-3-SB 0.170 Significant NN-test-4-ASR-CB -0.025 Not significant NN-test-5-ASR-SB -0.013 Not significant Table 3. Multiple regression model testing results for Model-2. 6 Discussion As we can see from the result table (Table 3) in the previous section, using only syntactic complexity features, based on clausal or parse tree information derived from human transcriptions of spoken test responses, can predict holistic human rater scores for combined speaker responses over a whole test with an overall correlation of r=0.49. While this is a promising result for this study with a focus on a broad spectrum of syntactic complexity features, the results also show significant limitations for an immediate operational use of such features. First, the imperfect prediction of clause and sentence boundaries by the two automatic classifiers causes a substantial degradation of scoring model performance to about r=0.2, and secondly, the rather high error rate of the ASR system (50.5%) does not allow for the computation of features that would result in any significant correlation with human scores. We want to note here that while ASR systems can be found that exhibit WERs below 10% for certain tasks, such as restricted dictation in low-noise environments by native speakers, our ASR task is significantly harder in several ways: (1) we have to recognize non-native speakers’rresponses where speakers have a number of different native language backgrounds; (2) the proficiency level of the test takers varies widely; and (3) the responses are spontaneous and unconstrained in terms of vocabulary. As for the automatic clause and sentence boundary classifiers, we can observe (in Table 4) that although the sentence boundary classifier has a slightly higher F-score than the clause boundary classifier, errors in sentence boundary detection have more negative effects on the accuracy of score prediction than those made by the clause boundary classifier. 
In fact, the lower F-score of the latter is mainly due to its lower precision which indicates that there are more spurious clause boundaries in its output which apparently cause little harm to the feature extraction processes. Among the 17 final features, 3 of them are frequency-based and the remaining 14 are ratiobased, which mirrors our findings from previous work that frequency features have been used less successfully than ratio features. As for ratio features, 5 of them are grammatical structure counts against sentence units, 4 are counts against T-units, and only 1 is based on counts against clause units. The feature set covers a wide range of grammatical structures, such as T-units, verb phrases, noun phrases, complex nominals, adjective clauses, coordinate clauses, prepositional phrases, etc. While this wide coverage provides for richness of the construct of syntactic complexity, some of the features exhibit relatively high correlation with each other which reduces their overall contributions to the scoring model’s performance. Going through the workflow of our system, we find at least five major stages that can generate errors which in turn can adversely affect feature computation and scoring model building. Errors may appear in each stage of our workflow, passing or even enlarging their effects from previous stages to later stages: 1) grammatical errors by the speakers (test takers); 2) errors by the ASR system; 3) sentence/clause boundary detection errors; 4) parser errors; and 5) rule extraction errors. In future work we will need to address each error source to obtain a higher overall system performance. 729 Table 4. Performance of clause and sentence boundary detectors. 7 Conclusion and Future Work In this paper, we investigated associations between speakers’ syntactic complexity features and their speaking proficiency scores provided by human raters. By exploring empirical evidence from nonnative and native speakers’ data sets of spontaneous speech test responses, we identified 17 features related to clause types and parse trees as effective predictors of human speaking scores. The features were implemented based on Lu’s L2 Syntactic Complexity Analyzer toolkit (Lu, 2011) to be automatically extracted from human or ASR transcripts. Three multiple regression models were built from non-native speech training data with different parameter setup and were tested against five testing sets with different preprocessing steps. The best model used the complete set of 17 features and exhibited a correlation with human scores of r=0.49 on human transcripts with boundary annotations. When using automated classifiers to predict clause or sentence boundaries, correlations with human scores are around r=0.2. Our experiments indicate that by enhancing the accuracy of the two main automated preprocessing components, namely ASR and automatic sentence and clause boundary detectors, scoring model performance will increase substantially, as well. Furthermore, this result demonstrates clearly that syntactic complexity features can be devised that are able to predict human speaking proficiency scores. Since this is a preliminary study, there is ample space to improve all major stages in the feature extraction process. The errors listed in the previous section are potential working directions for preprocessing enhancements prior to machine learning. 
Among the five types of errors, we can work on improving the accuracy of the speech recognizer, sentence and clause boundary detectors, parser, and feature extraction rules; as for the grammatical errors produced by test takers, we are envisioning to automatically identify and correct such errors. We will further experiment with syntactic complexity measures to balance construct richness and model simplicity. Furthermore, we can also experiment with additional types of machine learning models and tune parameters to derive scoring models with better performance. Acknowledgements The authors wish to thank Lei Chen and Su-Youn Yoon for their help with the sentence and clause boundary classifiers. We also would like to thank our colleagues Jill Burstein, Keelan Evanini, Yoko Futagi, Derrick Higgins, Nitin Madnani, and Joel Tetreault, as well as the four anonymous ACL reviewers for their valuable and helpful feedback and comments on our paper. References Bachman, L.F. (1990). Fundamental considerations in language testing. Oxford: Oxford University Press. Bernstein, J. (1999). PhonePass testing: Structure and construct. Menlo Park, CA: Ordinate Corporation. Bernstein, J., DeJong, J., Pisoni, D. & Townshend, B. (2000). Two experiments in automatic scoring of spoken language proficiency. Proceedings of InSTILL 2000, Dundee, Scotland. Bernstein, J., Cheng, J., & Suzuki, M. (2010). Fluency and structural complexity as predictors of L2 oral proficiency. Proceedings of Interspeech 2010, Tokyo, Japan, September. Chen, L., Tetreault, J. & Xi, X. (2010). Towards using structural events to assess non-native speech. NAACL-HLT 2010. 5th Workshop on Innovative Use of NLP for Building Educational Applications, Los Angeles, CA, June. Condouris, K., Meyer, E. & Tagger-Flusberg, H. (2003). The relationship between standardized measures of language and measures of spontaneous speech in children with autism. American Journal of SpeechLanguage Pathology, 12(3), 349-358. Cooper, T.C. (1976). Measuring written syntactic patterns of second language learners of German. The Journal of Educational Research, 69(5), 176-183. Cucchiarini, C., Strik, H. & Boves, L. (1997). Automatic evaluation of Dutch pronunciation by using speech recognition technology. IEEE Automatic Speech Recognition and Understanding Workshop, Santa Barbara, CA. Classifier Accuracy Precision Recall F score Clause boundary 0.954 0.721 0.748 0.734 Sentence boundary 0.975 0.811 0.755 0.782 730 Cucchiarini, C., Strik, H. & Boves, L. (2000). Quantitative assessment of second language learners' fluency by means of automatic speech recognition technology. Journal of the Acoustical Society of America, 107, 989-999. Franco, H., Abrash, V., Precoda, K., Bratt, H., Rao, R. & Butzberger, J. (2000a). The SRI EduSpeak system: Recognition and pronunciation scoring for language learning. Proceedings of InSTiLL-2000 (Intelligent Speech Technology in Language Learning), Dundee, Scotland. Franco, H., Neumeyer, L., Digalakis, V. & Ronen, O. (2000b). Combination of machine scores for automatic grading of pronunciation quality. Speech Communication, 30, 121-130. Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P. & Witten, I.H. (2009). The WEKA Data Mining Software: An Update. SIGKDD Explorations, 11(1). Halleck, G.B. (1995). Assessing oral proficiency: A comparison of holistic and objective measures. The Modern Language Journal, 79(2), 223-234. Henry, K. (1996). 
Early L2 writing development: A study of autobiographical essays by university-level students on Russian. The Modern Language Journal, 80(3), 309-326. Ho-Peng, L. (1983). Using T-unit measures to assess writing proficiency of university ESL students. RELC Journal, 14(2), 35-43. Hunt, K. (1965). Grammatical structures written at three grade levels. NCTE Research report No.3. Champaign, IL: NCTE. Iwashita, N. (2006). Syntactic complexity measures and their relations to oral proficiency in Japanese as a foreign language. Language Assessment Quarterly, 3(20), 151-169. Kameen, P.T. (1979). Syntactic skill and ESL writing quality. In C. Yorio, K. Perkins, & J. Schachter (Eds.), On TESOL ’79: The learner in focus (pp.343364). Washington, D.C.: TESOL. Klein, D. & Manning, C.D. (2003). Fast exact inference with a factored model for a natural language parsing. In S.Becker, S. Thrun & K. Obermayer (Eds.), Advances in Neural Information Processing Systems 15 (pp.3-10). Cambridge, MA: MIT Press. Larsen-Freeman, D. (1978). An ESL index of development. Teachers of English to Speakers of Other Languages Quarterly, 12(4), 439-448. Levy, R. & Andrew, G. (2006). Tregex and Tsurgeon: Tools for querying and manipulating tree data structures. Proceedings of the Fifth International Conference on Language Resources and Evaluation. Lu, X. (2010). Automatic analysis of syntactic complexity in second language writing. International Journal of Corpus Linguistics, 15(4), 474-496. Lu, X. (2011). L2 Syntactic Complexity Analyzer. Retrieved from http://www.personal.psu.edu/xxl13/downloads/l2sca. html Ortega, L. (2003). Syntactic complexity measures and their relationship to L2 proficiency: A research synthesis of college-level L2 writing. Applied Linguistics, 24(4), 492-518. Perkins, K. (1980). Using objective methods of attained writing proficiency to discriminate among holistic evaluations. Teachers of English to Speakers of Other Languages Quarterly, 14(1), 61-69. Roll, M., Frid, J. & Horne, M. (2007). Measuring syntactic complexity in spontaneous spoken Swedish. Language and Speech, 50(2), 227-245. Sampson, G. (1997). Depth in English grammar. Journal of Linguistics, 33, 131-151. Wolfe-Quintero, K., Inagaki, S. & Kim, H. Y. (1998). Second language development in writing: Measures of fluency, accuracy, & complexity. Honolulu, HI: University of Hawaii Press. Xi, X., & Mollaun, P. (2006). Investigating the utility of analytic scoring for the TOEFL® Academic Speaking Test (TAST). TOEFL iBT Research Report No. TOEFLiBT-01. Zechner, K., Higgins, D. & Xi, X. (2007). SpeechRater(SM): A construct-driven approach to score spontaneous non-native speech. Proceedings of the 2007 Workshop of the International Speech Communication Association (ISCA) Special Interest Group on Speech and Language Technology in Education (SLaTE), Farmington, PA, October. Zechner, K., Higgins, D., Xi, X, & Williamson, D.M. (2009). Automatic scoring of non-native spontaneous speech in tests of spoken English. Speech Communication, 51 (10), October. 731
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 732–741, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics N-Best Rescoring Based on Pitch-accent Patterns Je Hun Jeon1 Wen Wang2 Yang Liu1 1Department of Computer Science, The University of Texas at Dallas, USA 2Speech Technology and Research Laboratory, SRI International, USA {jhjeon,yangl}@hlt.utdallas.edu, [email protected] Abstract In this paper, we adopt an n-best rescoring scheme using pitch-accent patterns to improve automatic speech recognition (ASR) performance. The pitch-accent model is decoupled from the main ASR system, thus allowing us to develop it independently. N-best hypotheses from recognizers are rescored by additional scores that measure the correlation of the pitch-accent patterns between the acoustic signal and lexical cues. To test the robustness of our algorithm, we use two different data sets and recognition setups: the first one is English radio news data that has pitch accent labels, but the recognizer is trained from a small amount of data and has high error rate; the second one is English broadcast news data using a state-of-the-art SRI recognizer. Our experimental results demonstrate that our approach is able to reduce word error rate relatively by about 3%. This gain is consistent across the two different tests, showing promising future directions of incorporating prosodic information to improve speech recognition. 1 Introduction Prosody refers to the suprasegmental features of natural speech, such as rhythm and intonation, since it normally extends over more than one phoneme segment. Speakers use prosody to convey paralinguistic information such as emphasis, intention, attitude, and emotion. Humans listening to speech with natural prosody are able to understand the content with low cognitive load and high accuracy. However, most modern ASR systems only use an acoustic model and a language model. Acoustic information in ASR is represented by spectral features that are usually extracted over a window length of a few tens of milliseconds. They miss useful information contained in the prosody of the speech that may help recognition. Recently a lot of research has been done in automatic annotation of prosodic events (Wightman and Ostendorf, 1994; Sridhar et al., 2008; Ananthakrishnan and Narayanan, 2008; Jeon and Liu, 2009). They used acoustic and lexical-syntactic cues to annotate prosodic events with a variety of machine learning approaches and achieved good performance. There are also many studies using prosodic information for various spoken language understanding tasks. However, research using prosodic knowledge for speech recognition is still quite limited. In this study, we investigate leveraging prosodic information for recognition in an n-best rescoring framework. Previous studies showed that prosodic events, such as pitch-accent, are closely related with acoustic prosodic cues and lexical structure of utterance. The pitch-accent pattern given acoustic signal is strongly correlated with lexical items, such as syllable identity and canonical stress pattern. Therefore as a first study, we focus on pitch-accent in this paper. We develop two separate pitch-accent detection models, using acoustic (observation model) and lexical information (expectation model) respectively, and propose a scoring method for the correlation of pitch-accent patterns between the two models for recognition hypotheses. 
The n-best list is rescored using the pitch-accent matching scores 732 combined with the other scores from the ASR system (acoustic and language model scores). We show that our method yields a word error rate (WER) reduction of about 3.64% and 2.07% relatively on two baseline ASR systems, one being a state-of-the-art recognizer for the broadcast news domain. The fact that it holds across different baseline systems suggests the possibility that prosody can be used to help improve speech recognition performance. The remainder of this paper is organized as follows. In the next section, we review previous work briefly. Section 3 explains the models and features for pitch-accent detection. We provide details of our n-best rescoring approach in Section 4. Section 5 describes our corpus and baseline ASR setup. Section 6 presents our experiments and results. The last section gives a brief summary along with future directions. 2 Previous Work Prosody is of interest to speech researchers because it plays an important role in comprehension of spoken language by human listeners. The use of prosody in speech understanding applications has been quite extensive. A variety of applications have been explored, such as sentence and topic segmentation (Shriberg et al., 2000; Rosenberg and Hirschberg, 2006), word error detection (Litman et al., 2000), dialog act detection (Sridhar et al., 2009), speaker recognition (Shriberg et al., 2005), and emotion recognition (Benus et al., 2007), just to name a few. Incorporating prosodic knowledge is expected to improve the performance of speech recognition. However, how to effectively integrate prosody within the traditional ASR framework is a difficult problem, since prosodic features are not well defined and they come from a longer region, which is different from spectral features used in current ASR systems. Various research has been conducted trying to incorporate prosodic information in ASR. One way is to directly integrate prosodic features into the ASR framework (Vergyri et al., 2003; Ostendorf et al., 2003; Chen and Hasegawa-Johnson, 2006). Such efforts include prosody dependent acoustic and pronunciation model (allophones were distinguished according to different prosodic phenomenon), language model (words were augmented by prosody events), and duration modeling (different prosodic events were modeled separately and combined with conventional HMM). This kind of integration has advantages in that spectral and prosodic features are more tightly coupled and jointly modeled. Alternatively, prosody was modeled independently from the acoustic and language models of ASR and used to rescore recognition hypotheses in the second pass. This approach makes it possible to independently model and optimize the prosodic knowledge and to combine with ASR hypotheses without any modification of the conventional ASR modules. In order to improve the rescoring performance, various prosodic knowledge was studied. (Ananthakrishnan and Narayanan, 2007) used acoustic pitch-accent pattern and its sequential information given lexical cues to rescore n-best hypotheses. (Kalinli and Narayanan, 2009) used acoustic prosodic cues such as pitch and duration along with other knowledge to choose a proper word among several candidates in confusion networks. Prosodic boundaries based on acoustic cues were used in (Szaszak and Vicsi, 2007). We take a similar approach in this study as the second approach above in that we develop prosodic models separately and use them in a rescoring framework. 
Our proposed method differs from previous work in the way that the prosody model is used to help ASR. In our approach, we explicitly model the symbolic prosodic events based on acoustic and lexical information. We then capture the correlation of pitch-accent patterns between the two different cues, and use that to improve recognition performance in an n-best rescoring paradigm. 3 Prosodic Model Among all the prosodic events, we use only pitchaccent pattern in this study, because previous studies have shown that acoustic pitch-accent is strongly correlated with lexical items, such as canonical stress pattern and syllable identity that can be easily acquired from the output of conventional ASR and pronunciation dictionary. We treat pitch-accent detection as a binary classification task, that is, a classifier is used to determine whether the base unit is prominent or not. Since pitch-accent is usually 733 carried by syllables, we use syllables as our units, and the syllable definition of each word is based on CMU pronunciation dictionary which has lexical stress and syllable boundary marks (Bartlett et al., 2009). We separately develop acoustic-prosodic and lexical-prosodic models and use the correlation between the two models for each syllable to rescore the n-best hypotheses of baseline ASR systems. 3.1 Acoustic-prosodic Features Similar to most previous work, the prosodic features we use include pitch, energy, and duration. We also add delta features of pitch and energy. Duration information for syllables is derived from the speech waveform and phone-level forced alignment of the transcriptions. In order to reduce the effect by both inter-speaker and intra-speaker variation, both pitch and energy values are normalized (z-value) with utterance specific means and variances. For pitch, energy, and their delta values, we apply several categories of 12 functions to generate derived features. • Statistics (7): minimum, maximum, range, mean, standard deviation, skewness and kurtosis value. These are used widely in prosodic event detection and emotion detection. • Contour (5): This is approximated by taking 5 leading terms in the Legendre polynomial expansion. The approximation of the contour using the Legendre polynomial expansion has been successfully applied in quantitative phonetics (Grabe et al., 2003) and in engineering applications (Dehak et al., 2007). Each term models a particular aspect of the contour, such as the slope, and information about the curvature. We use 6 duration features, that is, raw, normalized, and relative durations (ms) of the syllable and vowel. Normalization (z-value) is performed based on statistics for each syllable and vowel. The relative value is the difference between the normalized current duration and the following one. In the above description, we assumed that the event of a syllable is only dependent on its observations, and did not consider contextual effect. To alleviate this restriction, we expand the features by incorporating information about the neighboring syllables. Based on the study in (Jeon and Liu, 2010) that evaluated using left and right contexts, we choose to use one previous and one following context in the features. The total number of features used in this study is 162. 3.2 Lexical-prosodic Features There is a very strong correlation between pitchaccent in an utterance and its lexical information. Previous studies have shown that the lexical features perform well for pitch-accent prediction. 
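Before the lexical feature list below, the acoustic functionals of Section 3.1 can be sketched briefly; the fragment is an illustrative approximation only — the frame values are invented, and the normalization, delta features, and context expansion described above are not reproduced.

# Sketch of per-syllable acoustic functionals: the seven distributional
# statistics plus a 5-term Legendre contour approximation, applied to one
# z-normalized pitch (or energy) track.  The frame values below are invented.
import numpy as np
from numpy.polynomial.legendre import legfit
from scipy.stats import skew, kurtosis

def syllable_functionals(track):
    """track: 1-D array of utterance-normalized frames for one syllable."""
    stats = {
        "min": float(np.min(track)),
        "max": float(np.max(track)),
        "range": float(np.ptp(track)),
        "mean": float(np.mean(track)),
        "std": float(np.std(track)),
        "skewness": float(skew(track)),
        "kurtosis": float(kurtosis(track)),
    }
    # Contour: leading 5 terms of a Legendre expansion over the syllable,
    # fitted on a normalized time axis in [-1, 1].
    t = np.linspace(-1.0, 1.0, num=len(track))
    contour = legfit(t, track, deg=4)
    return stats, contour

pitch = np.array([-0.2, 0.1, 0.6, 1.1, 1.3, 1.0, 0.4])  # invented frames
stats, contour = syllable_functionals(pitch)
print(stats)
print("Legendre contour terms:", np.round(contour, 3))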
The detailed features for training the lexical-prosodic model are as follows. • Syllable identity: We kept syllables that appear more than 5 times in the training corpus. The other syllables that occur less are collapsed into one syllable representation. • Vowel phone identity: We used vowel phone identity as a feature. • Lexical stress: This is a binary feature to represent if the syllable corresponds to a lexical stress based on the pronunciation dictionary. • Boundary information: This is a binary feature to indicate if there is a word boundary before the syllable. For lexical features, based on the study in (Jeon and Liu, 2010), we added two previous and two following contexts in the final features. 3.3 Prosodic Model Training We choose to use a support vector machine (SVM) classifier1 for the prosodic model based on previous work on prosody labeling study in (Jeon and Liu, 2010). We use RBF kernel for the acoustic model, and 3-order polynomial kernel for the lexical model. In our experiments, we investigate two kinds of training methods for prosodic modeling. The first one is a supervised method where models are trained using all the labeled data. The second is a semi-supervised method using co-training algorithm (Blum and Mitchell, 1998), described in Algorithm 1. Given a set L of labeled data and a set U of unlabeled data with two views, it then iterates in the 1LIBSVM – A Library for Support Vector Machines, location: http://www.csie.ntu.edu.tw/˜cjlin/libsvm/ 734 Algorithm 1 Co-training algorithm. Given: - L: labeled examples; U: unlabeled examples - there are two views V1 and V2 on an example x Initialize: - L1=L, samples used to train classifiers h1 - L2=L, samples used to train classifiers h2 Loop for k iterations - create a small pool U´ choosing from U - use V1(L1) to train classifier h1 and V2(L2) to train classifier h2 - let h1 label/select examples Dh1 from U´ - let h2 label/select examples Dh2 from U´ - add self-labeled examples Dh1 to L2 and Dh2 to L1 - remove Dh1 and Dh2 from U following procedure. The algorithm first creates a smaller pool U′ containing unlabeled data from U. It uses Li (i = 1, 2) to train two distinct classifiers: the acoustic classifier h1, and the lexical classifier h2. We use function Vi (i = 1, 2) to represent that only a single view is used for training h1 or h2. These two classifiers are used to make predictions for the unlabeled set U′, and only when they agree on the prediction for a sample, their predicted class is used as the label for this sample. Then among these self-labeled samples, the most confident ones by one classifier are added to the data set Li for training the other classifier. This iteration continues until reaching the defined number of iterations. In our experiment, the size of the pool U´ is 5 times of the size of training data Li, and the size of the added self-labeled example set, Dhi, is 5% of Li. For the newly selected Dhi, the distribution of the positive and negative examples is the same as that of the training data Li. This co-training method is expected to cope with two problems in prosodic model training. The first problem is the different decision patterns between the two classifiers: the acoustic model has relatively higher precision, while the lexical model has relatively higher recall. The goal of the co-training algorithm is to learn from the difference of each classifier, thus it can improve the performance as well as reduce the mismatch of two classifiers. 
The second problem is the mismatch of data used for model training and testing, which often results in system performance degradation. Using co-training, we can use the unlabeled data from the domain that matches the test data, adapting the model towards test domain. 4 N-Best Rescoring Scheme In order to leverage prosodic information for better speech recognition performance, we augment the standard ASR equation to include prosodic information as following: ˆW = arg max W p(W|As, Ap) = arg max W p(As, Ap|W)p(W) (1) where As and Ap represent acoustic-spectral features and acoustic-prosodic features. We can further assume that spectral and prosodic features are conditionally independent given a word sequence W, therefore, Equation 1 can be rewritten as following: ˆW ≈ arg max W p(As|W)p(W)p(Ap|W) (2) The first two terms stand for the acoustic and language models in the original ASR system, and the last term means the prosody model we introduce. Instead of using the prosodic model in the first pass decoding, we use it to rescore n-best candidates from a speech recognizer. This allows us to train the prosody models independently and better optimize the models. For p(Ap|W), the prosody score for a word sequence W, in this work we propose a method to estimate it, also represented as scoreW−prosody(W). The idea of scoring the prosody patterns is that there is some expectation of pitch-accent patterns given the lexical sequence (W), and the acoustic pitchaccent should match with this expectation. For instance, in the case of a prominent syllable, both acoustic and lexical evidence show pitch-accent, and vice versa. In order to maximize the agreement between the two sources, we measure how good the acoustic pitch-accent in speech signal matches the given lexical cues. For each syllable Si in the n-best list, we use acoustic-prosodic cues (ai) to estimate the posterior probability that the syllable is prominent (P), p(P|ai). Similarly, we use lexical cues (li) 735 to determine the syllable’s pitch-accent probability p(P|li). Then the prosody score for a syllable Si is estimated by the match of the pitch-accent patterns between acoustic and lexical information using the difference of the posteriors from the two models: scoreS−prosody(Si) ≈1−| p(P|ai) −p(P|li) | (3) Furthermore, we take into account the effect due to varying durations for different syllables. We notice that syllables without pitch-accent have much shorter duration than the prominent ones, and the prosody scores for the short syllables tend to be high. This means that if a syllable is split into two consecutive non-prominent syllables, the agreement score may be higher than a long prominent syllable. Therefore, we introduce a weighting factor based on syllable duration (dur(i)). For a candidate word sequence (W) consisting of n syllables, its prosodic score is the sum of the prosodic scores for all the syllables in it weighted by their duration (measured using milliseconds), that is: scoreW−prosody(W) ≈ n ∑ i=1 log(scoreS−prosody(Si)) · dur(i) (4) We then combine this prosody score with the original acoustic and language model likelihood (P(As|W) and P(W) in Equation 2). In practice, we need to weight them differently, therefore, the combined score for a hypothesis W is: Score(W) = λ · scoreW−prosody(W) + scoreASR(W) (5) where scoreASR(W) is generated by ASR systems (composed of acoustic and language model scores) and λ is optimized using held out data. 
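The scoring scheme in Equations 3–5 amounts to a few lines of arithmetic, sketched below on an invented two-hypothesis example; the per-syllable posteriors, durations, ASR scores, and the value of λ are all made up for illustration and are not taken from the systems described in Section 5.

# Sketch of the prosody-based rescoring score (Equations 3-5).
# All numbers below are invented.
import math

def syllable_score(p_acoustic, p_lexical):
    # Eq. 3: agreement between the acoustic and lexical pitch-accent posteriors.
    return 1.0 - abs(p_acoustic - p_lexical)

def prosody_score(syllables):
    # Eq. 4: duration-weighted sum of log agreement scores over all syllables.
    return sum(math.log(syllable_score(pa, pl)) * dur for pa, pl, dur in syllables)

def combined_score(asr_score, syllables, lam):
    # Eq. 5: interpolate the prosody score with the recognizer's own score.
    return lam * prosody_score(syllables) + asr_score

# Two invented hypotheses:
#   (ASR log score, [(p(accent|acoustic), p(accent|lexical), duration_ms), ...])
hypotheses = {
    "hyp_A": (-1210.0, [(0.9, 0.8, 210), (0.2, 0.7, 90), (0.1, 0.2, 80), (0.8, 0.9, 350)]),
    "hyp_B": (-1211.5, [(0.9, 0.8, 210), (0.8, 0.9, 95), (0.1, 0.2, 85), (0.8, 0.9, 350)]),
}
lam = 0.05  # made-up weight; in practice tuned on held-out data
best = max(hypotheses,
           key=lambda h: combined_score(hypotheses[h][0], hypotheses[h][1], lam))
print("rescored 1-best:", best)

With these made-up numbers the second hypothesis wins: its better acoustic/lexical accent agreement outweighs its slightly worse ASR score, which is exactly the effect the rescoring term is intended to have.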
5 Data and Baseline Systems Our experiments are carried out using two different data sets and two different recognition systems as well in order to test the robustness of our proposed method. The first data set is the Boston University Radio News Corpus (BU) (Ostendorf et al., 1995), which consists of broadcast news style read speech. The BU corpus has about 3 hours of read speech from 7 speakers (3 female, 4 male). Part of the data has been labeled with ToBI-style prosodic annotations. In fact, the reason that we use this corpus, instead of other corpora typically used for ASR experiments, is because of its prosodic labels. We divided the entire data corpus into a training set and a test set. There was no speaker overlap between training and test sets. The training set has 2 female speakers (f2 and f3) and 3 male ones (m2, m3, m4). The test set is from the other two speakers (f1 and m1). We use 200 utterances for the recognition experiments. Each utterance in BU corpus consists of more than one sentences, so we segmented each utterance based on pause, resulting in a total number of 713 segments for testing. We divided the test set roughly equally into two sets, and used one for parameter tuning and the other for rescoring test. The recognizer used for this data set was based on Sphinx-32. The contextdependent triphone acoustic models with 32 Gaussian mixtures were trained using the training partition of the BU corpus described above, together with the broadcast new data. A standard back-off trigram language model with Kneser-Ney smoothing was trained using the combined text from the training partition of the BU, Wall Street Journal data, and part of Gigaword corpus. The vocabulary size was about 10K words and the out-of-vocabulary (OOV) rate on the test set was 2.1%. The second data set is from broadcast news (BN) speech used in the GALE program. The recognition test set contains 1,001 utterances. The n-best hypotheses for this data set are generated by a state-ofthe-art SRI speech recognizer, developed for broadcast news speech (Stolcke et al., 2006; Zheng et al., 2007). This system yields much better performance than the first one. We also divided the test set roughly equally into two sets for parameter tuning and testing. From the data used for training the speech recognizer, we randomly selected 5.7 hours of speech (4,234 utterances) for the co-training algorithm for the prosodic models. For prosodic models, we used a simple binary representation of pitch-accent in the form of presence versus absence. The reference labels are de2CMU Sphinx - Speech Recognition Toolkit, location: http://www.speech.cs.cmu.edu/sphinx/tutorial.html 736 rived from the ToBI annotation in the BU corpus, and the ratio of pitch-accented syllables is about 34%. Acoustic-prosodic and lexical-prosodic models were separately developed using the features described in Section 3. Feature extraction was performed at the syllable level from force-aligned data. For the supervised approach, we used those utterances in the training data partition with ToBI labels in the BU corpus (245 utterances, 14,767 syllables). For co-training, the labeled data from BU corpus is used as initial training, and the other unlabeled data from BU and BN are used as unlabeled data. 6 Experimental Results 6.1 Pitch-accent Detection First we evaluate the performance of our acousticprosodic and lexical-prosodic models for pitchaccent detection. 
For rescoring, not only the accuracies of the two individual prosodic models are important, but also the pitch-accent agreement score between the two models (as shown in Equation 3) is critical, therefore, we present results using these two metrics. Table 1 shows the accuracy of each model for pitch-accent detection, and also the average prosody score of the two models (i.e., Equation 3) for positive and negative classes (using reference labels). These results are based on the BU labeled data in the test set. To compare our pitch accent detection performance with previous work, we include the result of (Jeon and Liu, 2009) as a reference. Compared to previous work, the acoustic model achieved similar performance, while the performance of lexical model is a bit lower. The lower performance of lexical model is mainly because we do not use part-of-speech (POS) information in the features, since we want to only use the word output from the ASR system (without additional POS tagging). As shown in Table 1, when using the co-training algorithm, as described in Section 3.3, the overall accuracies improve slightly and therefore the prosody score is also increased. We expect this improved model will be more beneficial for rescoring. 6.2 N-Best Rescoring For the rescoring experiment, we use 100-best hypotheses from the two different ASR systems, as deAccuracy(%) Prosody score Acoustic Lexical Pos Neg Supervised 83.97 84.48 0.747 0.852 Co-training 84.54 84.99 0.771 0.867 Reference 83.53 87.92 Table 1: Pitch accent detection results: performance of individual acoustic and lexical models, and the agreement between the two models (i.e., prosody score for a syllable, Equation 3) for positive and negative classes. Also shown is the reference result for pitch accent detection from Jeon and Liu (2009). scribed in Section 5. We apply the acoustic and lexical prosodic models to each hypothesis to obtain its prosody score, and combine it with ASR scores to find the top hypothesis. The weights were optimized using one test set and applied to the other. We report the average result of the two testings. Table 2 shows the rescoring results using the first recognition system on BU data, which was trained with a relatively small amount of data. The 1best baseline uses the first hypothesis that has the best ASR score. The oracle result is from the best hypothesis that gives the lowest WER by comparing all the candidates to the reference transcript. We used two prosodic models as described in Section 3.3. The first one is the base prosodic model using supervised training (S-model). The second is the prosodic model with the co-training algorithm (Cmodel). For these rescoring experiments, we tuned λ (in Equation 5) when combining the ASR acoustic and language model scores with the additional prosody score. The value in parenthesis in Table 2 means the relative WER reduction when compared to the baseline result. We show the WER results for both the development and the test set. As shown in Table 2, we observe performance improvement using our rescoring method. Using the base S-model yields reasonable improvement, and C-model further reduces WER. Even though the prosodic event detection performance of these two prosodic models is similar, the improved prosody score between the acoustic and lexical prosodic models using co-training helps rescoring. After rescoring using prosodic knowledge, the WER is reduced by 0.82% (3.64% relative). 
Furthermore, we notice that the difference between development and 737 WER (%) 1-best baseline 22.64 S-model Dev 21.93 (3.11%) Test 22.10 (2.39%) C-model Dev 21.76 (3.88%) Test 21.81 (3.64%) Oracle 15.58 Table 2: WER of the baseline system and after rescoring using prosodic models. Results are based on the first ASR system. test data is smaller when using the C-model than Smodel, which means that the prosodic model with co-training is more stable. In fact, we found that the optimal value of λ is 94 and 57 for the two folds using S-model, and is 99 and 110 for the Cmodel. These verify again that the prosodic scores contribute more in the combination with ASR likelihood scores when using the C-model, and are more robust across different tuning sets. Ananthakrishnan and Narayanan (2007) also used acoustic/lexical prosodic models to estimate a prosody score and reported 0.3% recognition error reduction on BU data when rescoring 100-best list (their baseline WER is 22.8%). Although there is some difference in experimental setup (data, classifier, features) between ours and theirs, our S-model showed comparable performance gain and the result of C-model is significantly better than theirs. Next we test our n-best rescoring approach using a state-of-the-art SRI speech recognizer on BN data to verify if our approach can generalize to better ASR n-best lists. This is often the concern that improvements observed on a poor ASR system do not hold for better ASR systems. The rescoring results are shown in Table 3. We can see that the baseline performance of this recognizer is much better than that of the first ASR system (even though the recognition task is also harder). Our rescoring approach still yields performance gain even using this stateof-the-art system. The WER is reduced by 0.29% (2.07% relative). This error reduction is lower than that in the first ASR system. There are several possible reasons. First, the baseline ASR performance is higher, making further improvement hard; second, and more importantly, the prosody models do not match well to the test domain. We trained the prosody model using the BU data. Even though cotraining is used to leverage unlabeled BN data to reduce data mismatch, it is still not as good as using labeled in-domain data for model training. WER (%) 1-best baseline 13.77 S-model Dev 13.53 (1.78%) Test 13.55 (1.63%) C-model Dev 13.48 (2.16%) Test 13.49 (2.07%) Oracle 9.23 Table 3: WER of the baseline system and after rescoring using prosodic models. Results are based on the second ASR system. 6.3 Analysis and Discussion We also analyze what kinds of errors are reduced using our rescoring approach. Most of the error reduction came from substitution and insertion errors. Deletion error rate did not change much or sometimes even increased. For a better understanding of the improvement using the prosody model, we analyzed the pattern of corrections (the new hypothesis after rescoring is correct while the original 1-best is wrong) and errors. Table 4 shows some positive and negative examples from rescoring results using the first ASR system. In this table, each word is associated with some binary expressions inside a parenthesis, which stand for pitch-accent markers. Two bits are used for each syllable: the first one is for the acoustic-prosodic model and the second one is for the lexical-prosodic model. For both bits, 1 represents pitch-accent, and 0 indicates none. 
These hard decisions are obtained by setting a threshold of 0.5 for the posterior probabilities from the acoustic or lexical models. For example, when the acoustic classifier predicts a syllable as pitch-accented and the lexical one as not accented, ‘10’ marker is assigned to the syllable. The number of such pairs of pitch-accent markers is the same as the number of syllables in a word. The bold words indicate correct words and italic means errors. As shown in the positive example of Table 4, we find that our prosodic model is effective at identifying an erroneous word when it is split into two words, resulting in different pitch-accent patterns. Language models are 738 Positive example 1-best : most of the massachusetts (11 ) (10) (00) (11 00 01 00) rescored : most other massachusetts (11 ) (11 00) (11 00 01 00) Negative example 1-best : robbery and on a theft (11 00 00) (00) (10) (00) (11) rescored : robbery and lot of theft (11 00 00) (00) (11) (00) (11) Table 4: Examples of rescoring results. Binary expressions inside the parenthesis below a word represent pitch-accent markers for the syllables in the word. not good at correcting this kind of errors since both word sequences are plausible. Our model also introduces some errors, as shown in the negative example, which is mainly due to the inaccurate prosody model. We conducted more prosody rescoring experiments in order to understand the model behavior. These analyses are based on the n-best list from the first ASR system for the entire test set. In the first experiment, among the 100 hypotheses in n-best list, we gave a prosody score of 0 to the 100th hypothesis, and used automatically obtained prosodic scores for the other hypotheses. A zero prosody score means the perfect agreement given acoustic and lexical cues. The original scores from the recognizer were combined with the prosodic scores for rescoring. This was to verify that the range of the weighting factor λ estimated on the development data (using the original, not the modified prosody scores for all candidates) was reasonable to choose proper hypothesis among all the candidates. We noticed that 27% of the times the last hypothesis on the list was selected as the best hypothesis. This hypothesis has the highest prosodic scores, but lowest ASR score. This result showed that if the prosodic models were accurate enough, the correct candidate could be chosen using our rescoring framework. In the second experiment, we put the reference text together with the other candidates. We use the same ASR scores for all candidates, and generated prosodic scores using our prosody model. This was to test that our model could pick up correct candidate using only the prosodic score. We found that for 26% of the utterances, the reference transcript was chosen as the best one. This was significantly better than random selection (i.e., 1/100), suggesting the benefit of the prosody model; however, this percentage is not very high, implying the limitation of prosodic information for ASR or the current imperfect prosodic models. In the third experiment, we replaced the 100th candidate with the reference transcript and kept its ASR score. When using our prosody rescoring approach, we obtained a relative error rate reduction of 6.27%. This demonstrates again that our rescoring method works well – if the correct hypothesis is on the list, even though with a low ASR score, using prosodic information can help identify the correct candidate. 
Overall the performance improvement we obtained from rescoring by incorporating prosodic information is very promising. Our evaluation using two different ASR systems shows that the improvement holds even when we use a state-of-the-art recognizer and the training data for the prosody model does not come from the same corpus. We believe the consistent improvements we observed for different conditions show that this is a direction worthy of further investigation. 7 Conclusion In this paper, we attempt to integrate prosodic information for ASR using an n-best rescoring scheme. This approach decouples the prosodic model from the main ASR system, thus the prosodic model can be built independently. The prosodic scores that we use for n-best rescoring are based on the matching of pitch-accent patterns by acoustic and lexical features. Our rescoring method achieved a WER reduction of 3.64% and 2.07% relatively using two different ASR systems. The fact that the gain holds across different baseline systems (including a state-of-the739 art speech recognizer) suggests the possibility that prosody can be used to improve speech recognition performance. As suggested by our experiments, better prosodic models can result in more WER reduction. The performance of our prosodic model was improved with co-training, but there are still problems, such as the imbalance of the two classifiers’ prediction, as well as for the two events. In order to address these problems, we plan to improve the labeling and selection method in the co-training algorithm, and also explore other training algorithms to reduce domain mismatch. Furthermore, we are also interested in evaluating our approach on the spontaneous speech domain, which is quite different from the data we used in this study. In this study, we used n-best rather than lattice rescoring. Since the prosodic features we use include cross-word contextual information, it is not straightforward to apply it directly to lattices. In our future work, we will develop models with only within-word context, and thus allowing us to explore lattice rescoring, which we expect will yield more performance gain. References Sankaranarayanan Ananthakrishnan and Shrikanth Narayanan. 2007. Improved speech recognition using acoustic and lexical correlated of pitch accent in a n-best rescoring framework. Proc. of ICASSP, pages 65–68. Sankaranarayanan Ananthakrishnan and Shrikanth Narayanan. 2008. Automatic prosodic event detection using acoustic, lexical and syntactic evidence. IEEE Transactions on Audio, Speech, and Language Processing, 16(1):216–228. Susan Bartlett, Grzegorz Kondrak, and Colin Cherry. 2009. On the syllabification of phonemes. Proc. of NAACL-HLT, pages 308–316. Stefan Benus, Agust´ın Gravano, and Julia Hirschberg. 2007. Prosody, emotions, and whatever. Proc. of Interspeech, pages 2629–2632. Avrim Blum and Tom Mitchell. 1998. Combining labeled and unlabeled data with co-training. Proc. of the Workshop on Computational Learning Theory, pages 92–100. Ken Chen and Mark Hasegawa-Johnson. 2006. Prosody dependent speech recognition on radio news corpus of American English. IEEE Transactions on Audio, Speech, and Language Processing, 14(1):232– 245. Najim Dehak, Pierre Dumouchel, and Patrick Kenny. 2007. Modeling prosodic features with joint factor analysis for speaker verification. IEEE Transactions on Audio, Speech, and Language Processing, 15(7):2095–2103. Esther Grabe, Greg Kochanski, and John Coleman. 2003. Quantitative modelling of intonational variation. Proc. 
of SASRTLM, pages 45–57. Je Hun Jeon and Yang Liu. 2009. Automatic prosodic events detection suing syllable-based acoustic and syntactic features. Proc. of ICASSP, pages 4565–4568. Je Hun Jeon and Yang Liu. 2010. Syllable-level prominence detection with acoustic evidence. Proc. of Interspeech, pages 1772–1775. Ozlem Kalinli and Shrikanth Narayanan. 2009. Continuous speech recognition using attention shift decoding with soft decision. Proc. of Interspeech, pages 1927– 1930. Diane J. Litman, Julia B. Hirschberg, and Marc Swerts. 2000. Predicting automatic speech recognition performance using prosodic cues. Proc. of NAACL, pages 218–225. Mari Ostendorf, Patti Price, and Stefanie ShattuckHufnagel. 1995. The Boston University radio news corpus. Linguistic Data Consortium. Mari Ostendorf, Izhak Shafran, and Rebecca Bates. 2003. Prosody models for conversational speech recognition. Proc. of the 2nd Plenary Meeting and Symposium on Prosody and Speech Processing, pages 147–154. Andrew Rosenberg and Julia Hirschberg. 2006. Story segmentation of broadcast news in English, Mandarin and Arabic. Proc. of HLT-NAACL, pages 125–128. Elizabeth Shriberg, Andreas Stolcke, Dilek Hakkani-T¨ur, and G¨okhan T¨ur. 2000. Prosody-based automatic segmentation of speech into sentences and topics. Speech Communication, 32(1-2):127–154. Elizabeth Shriberg, Luciana Ferrer, Sachin S. Kajarekar, Anand Venkataraman, and Andreas Stolcke. 2005. Modeling prosodic feature sequences for speaker recognition. Speech Communication, 46(3-4):455– 472. Vivek Kumar Rangarajan Sridhar, Srinivas Bangalore, and Shrikanth S. Narayanan. 2008. Exploiting acoustic and syntactic features for automatic prosody labeling in a maximum entropy framework. IEEE Transactions on Audio, Speech, and Language Processing, 16(4):797–811. Vivek Kumar Rangarajan Sridhar, Srinivas Bangalore, and Shrikanth Narayanan. 2009. Combining lexical, syntactic and prosodic cues for improved online 740 dialog act tagging. Computer Speech and Language, 23(4):407–422. Andreas Stolcke, Barry Chen, Horacio Franco, Venkata Ramana Rao Gadde, Martin Graciarena, Mei-Yuh Hwang, Katrin Kirchhoff, Arindam Mandal, Nelson Morgan, Xin Lin, Tim Ng, Mari Ostendorf, Kemal S¨onmez, Anand Venkataraman, Dimitra Vergyri, Wen Wang, Jing Zheng, and Qifeng Zhu. 2006. Recent innovations in speech-to-text transcription at SRI-ICSIUW. IEEE Transactions on Audio, Speech and Language Processing, 14(5):1729–1744. Special Issue on Progress in Rich Transcription. Gyorgy Szaszak and Klara Vicsi. 2007. Speech recognition supported by prosodic information for fixed stress languages. Proc. of TSD Conference, pages 262–269. Dimitra Vergyri, Andreas Stolcke, Venkata R. R. Gadde, Luciana Ferrer, and Elizabeth Shriberg. 2003. Prosodic knowledge sources for automatic speech recognition. Proc. of ICASSP, pages 208–211. Colin W. Wightman and Mari Ostendorf. 1994. Automatic labeling of prosodic patterns. IEEE Transaction on Speech and Auido Processing, 2(4):469–481. Jing Zheng, Ozgur Cetin, Mei-Yuh Hwang, Xin Lei, Andreas Stolcke, and Nelson Morgan. 2007. Combining discriminative feature, transform, and model training for large vocabulary speech recognition. Proc. of ICASSP, pages 633–636. 741
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 742–751, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Lexically-Triggered Hidden Markov Models for Clinical Document Coding Svetlana Kiritchenko Colin Cherry Institute for Information Technology National Research Council Canada {Svetlana.Kiritchenko,Colin.Cherry}@nrc-cnrc.gc.ca Abstract The automatic coding of clinical documents is an important task for today’s healthcare providers. Though it can be viewed as multi-label document classification, the coding problem has the interesting property that most code assignments can be supported by a single phrase found in the input document. We propose a Lexically-Triggered Hidden Markov Model (LT-HMM) that leverages these phrases to improve coding accuracy. The LT-HMM works in two stages: first, a lexical match is performed against a term dictionary to collect a set of candidate codes for a document. Next, a discriminative HMM selects the best subset of codes to assign to the document by tagging candidates as present or absent. By confirming codes proposed by a dictionary, the LT-HMM can share features across codes, enabling strong performance even on rare codes. In fact, we are able to recover codes that do not occur in the training set at all. Our approach achieves the best ever performance on the 2007 Medical NLP Challenge test set, with an F-measure of 89.84. 1 Introduction The clinical domain presents a number of interesting challenges for natural language processing. Conventionally, most clinical documentation, such as doctor’s notes, discharge summaries and referrals, are written in a free-text form. This narrative form is flexible, allowing healthcare professionals to express any kind of concept or event, but it is not particularly suited for large-scale analysis, search, or decision support. Converting clinical narratives into a structured form would support essential activities such as administrative reporting, quality control, biosurveillance and biomedical research (Meystre et al., 2008). One way of representing a document is to code the patient’s conditions and the performed procedures into a nomenclature of clinical codes. The International Classification of Diseases, 9th and 10th revisions, Clinical Modification (ICD9-CM, ICD-10-CM) are the official administrative coding schemes for healthcare organizations in several countries, including the US and Canada. Typically, coding is performed by trained coding professionals, but this process can be both costly and errorprone. Automated methods can speed-up the coding process, improve the accuracy and consistency of internal documentation, and even result in higher reimbursement for the healthcare organization (Benson, 2006). Traditionally, statistical document coding is viewed as multi-class multi-label document classification, where each clinical free-text document is labelled with one or several codes from a pre-defined, possibly very large set of codes (Patrick et al., 2007; Suominen et al., 2008). One classification model is learned for each code, and then all models are applied in turn to a new document to determine which codes should be assigned to the document. The drawback of this approach is poor predictive performance on low-frequency codes, which are ubiquitous in the clinical domain. This paper presents a novel approach to document coding that simultaneously models code-specific as well as general patterns in the data. 
This allows 742 us to predict any code label, even codes for which no training data is available. Our approach, the lexically-triggered HMM (LT-HMM), is based on the fact that a code assignment is often indicated by short lexical triggers in the text. Consequently, a two-stage coding method is proposed. First, the LT-HMM identifies candidate codes by matching terms from a medical terminology dictionary. Then, it confirms or rejects each of the candidates by applying a discriminative sequence model. In this architecture, low-frequency codes can still be matched and confirmed using general characteristics of their trigger’s local context, leading to better prediction performance on these codes. 2 Document Coding and Lexical Triggers Document coding is a special case of multi-class multi-label text classification. Given a fixed set of possible codes, the ultimate goal is to assign a set of codes to documents, based on their content. Furthermore, we observe that for each code assigned to a document, there is generally at least one corresponding trigger term in the text that accounts for the code’s assignment. For example, if an ICD-9-CM coding professional were to see “allergic bronchitis” somewhere in a clinical narrative, he or she would immediately consider adding code 493.9 (Asthma, unspecified) to the document’s code set. The presence of these trigger terms separates document coding from text classification tasks, such as topic or genre classification, where evidence for a particular label is built up throughout a document. However, this does not make document coding a term recognition task, concerned only with the detection of triggers. Codes are assigned to a document as a whole, and code assignment decisions within a document may interact. It is an interesting combination of sentence and document-level processing. Formally, we define the document coding task as follows: given a set of documents X and a set of available codes C, assign to each document xi a subset of codes Ci ⊂C. We also assume access to a (noisy) mechanism to detect candidate triggers in a document. In particular, we will assume that an (incomplete) dictionary D(c) exists for each code c ∈C, which lists specific code terms associated with c.1 To continue our running example: D(493.9) would include the term “allergic bronchitis”. Each code can have several corresponding terms while each term indicates the presence of exactly one code. A candidate code c is proposed each time a term from D(c) is found in a document. 2.1 From triggers to codes The presence of a term from D(c) does not automatically imply the assignment of code c to a document. Even with extremely precise dictionaries, there are three main reasons why a candidate code may not appear in a document’s code subset. 1. The context of the trigger term might indicate the irrelevancy of the code. In the clinical domain, such irrelevancy can be specified by a negative or speculative statement (e.g., “evaluate for pneumonia”) or a family-related context (e.g., “family history of diabetes”). Only definite diagnosis of the patient should be coded. 2. There can be several closely related candidate codes; yet only one, the best fitted code should be assigned to the document. For example, the triggers “left-sided flank pain” (code 789.09) and “abdominal pain” (code 789.00) may both appear in the same clinical report, but only the most specific code, 789.09, should end up in the document code set. 3. The domain can have code dependency rules. 
For example, the ICD-9-CM coding rules state that no symptom codes should be given to a document if a definite diagnosis is present. That is, if a document is coded with pneumonia, it should not be coded with a fever or cough. On the other hand, if the diagnosis is uncertain, then codes for the symptoms should be assigned. This suggests a paradigm where a candidate code, suggested by a detected trigger term, is assessed in terms of both its local context (item 1) and the presence of other candidate codes for the document (items 2 and 3). 1Note that dictionary-based trigger detection could be replaced by tagging approaches similar to those used in namedentity-recognition or information extraction. 743 2.2 ICD-9-CM Coding As a specific application we have chosen the task of assigning ICD-9-CM codes to free-form clinical narratives. We use the dataset collected for the 2007 Medical NLP Challenge organized by the Computational Medicine Center in Cincinnati, Ohio, hereafter refereed to as “CMC Challenge” (Pestian et al., 2007). For this challenge, 1954 radiology reports on outpatient chest x-ray and renal procedures were collected, disambiguated, and anonymized. The reports were annotated with ICD-9-CM codes by three coding companies, and the majority codes were selected as a gold standard. In total, 45 distinct codes were used. For this task, our use of a dictionary to detect lexical triggers is quite reasonable. The medical domain is rich with manually-created and carefullymaintained knowledge resources. In particular, the ICD-9-CM coding guidelines come with an index file that contains hundreds of thousands of terms mapped to corresponding codes. Another valuable resource is Metathesaurus from the Unified Medical Language System (UMLS) (Lindberg et al., 1993). It has millions of terms related to medical problems, procedures, treatments, organizations, etc. Often, hospitals, clinics, and other healthcare organizations maintain their own vocabularies to introduce consistency in their internal and external documentation and to support reporting, reviewing, and metaanalysis. This task has some very challenging properties. As mentioned above, the ICD-9-CM coding rules create strong code dependencies: codes are assigned to a document as a set and not individually. Furthermore, the code distribution throughout the CMC training documents has a very heavy tail; that is, there are a few heavily-used codes and a large number of codes that are used only occasionally. An ideal approach will work well with both highfrequency and low-frequency codes. 3 Related work Automated clinical coding has received much attention in the medical informatics literature. Stanfill et al. reviewed 113 studies on automated coding published in the last 40 years (Stanfill et al., 2010). The authors conclude that there exists a variety of tools covering different purposes, healthcare specialties, and clinical document types; however, these tools are not generalizable and neither are their evaluation results. One major obstacle that hinders the progress in this domain is data privacy issues. To overcome this obstacle, the CMC Challenge was organized in 2007. The purpose of the challenge was to provide a common realistic dataset to stimulate the research in the area and to assess the current level of performance on the task. Forty-four teams participated in the challenge. The top-performing system achieved micro-averaged F1-score of 0.8908, and the mean score was 0.7670. 
Several teams, including the winner, built pure symbolic (i.e., hand-crafted rule-based) systems (e.g., (Goldstein et al., 2007)). This approach is feasible for the small code set used in the challenge, but it is questionable in real-life settings where thousands of codes need to be considered. Later, the winning team showed how their hand-crafted rules can be built in a semi-automatic way: the initial set of rules adopted from the official coding guidelines were automatically extended with additional synonyms and code dependency rules generated from the training data (Farkas and Szarvas, 2008). Statistical systems trained on only text-derived features (such as n-grams) did not show good performance due to a wide variety of medical language and a relatively small training set (Goldstein et al., 2007). This led to the creation of hybrid systems: symbolic and statistical classifiers used together in an ensemble or cascade (Aronson et al., 2007; Crammer et al., 2007) or a symbolic component providing features for a statistical component (Patrick et al., 2007; Suominen et al., 2008). Strong competition systems had good answers for dealing with negative and speculative contexts, taking advantage of the competition’s limited set of possible code combinations, and handling of low-frequency codes. Our proposed approach is a combination system as well. We combine a symbolic component that matches lexical strings of a document against a medical dictionary to determine possible codes (Lussier et al., 2000; Kevers and Medori, 2010) and a statistical component that finalizes the assignment of codes to the document. Our statistical component is similar to that of Crammer et al. (2007), in that we train a single model for all codes with code744 specific and generic features. However, Crammer et al. (2007) did not employ our lexical trigger step or our sequence-modeling formulation. In fact, they considered all possible code subsets, which can be infeasible in real-life settings. 4 Method To address the task of document coding, our lexically-triggered HMM operates using a two-stage procedure: 1. Lexically match text to the dictionary to get a set of candidate codes; 2. Using features derived from the candidates and the document, select the best code subset. In the first stage, dictionary terms are detected in the document using exact string matching. All codes corresponding to matches become candidate codes, and no other codes can be proposed for this document. In the second stage, a single classifier is trained to select the best code subset from the matched candidates. By training a single classifier, we use all of the training data to assign binary labels (present or absent) to candidates. This is the key distinction of our method from the traditional statistical approach where a separate classifier is trained for each code. The LT-HMM allows features learned from a document coded with ci to transfer at test time to predict code cj, provided their respective triggers appear in similar contexts. Training one common classifier improves our chances to reliably predict codes that have few training instances, and even codes that do not appear at all in the training data. 4.1 Trigger Detection We have manually assembled a dictionary of terms for each of the 45 codes used in the CMC challenge.2 The dictionaries were built by collecting relevant medical terminology from UMLS, the ICD-9CM coding guidelines, and the CMC training data. The test data was not consulted during dictionary construction. 
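To make the first stage concrete, the exact-match trigger detection can be sketched in a few lines of Python. This is a minimal illustration rather than the authors' implementation: the dictionary entries, the lower-casing, and the regex-based scan are assumptions made for the example.

```python
# Minimal sketch of dictionary-based trigger detection (Stage 1 of the LT-HMM).
# The dictionary maps each ICD-9-CM code to a list of trigger terms; the
# entries below are illustrative only.
import re

DICTIONARY = {
    "486": ["pneumonia"],
    "786.2": ["cough"],
    "780.6": ["fever"],
    "511.9": ["pleural effusion"],
}

def detect_triggers(document):
    """Return (code, term, character offset) for every exact dictionary match."""
    text = document.lower()
    matches = []
    for code, terms in DICTIONARY.items():
        for term in terms:
            for m in re.finditer(re.escape(term.lower()), text):
                matches.append((code, term, m.start()))
    return sorted(matches, key=lambda match: match[2])

report = ("Cough, fever in 9-year-old male. IMPRESSION: 1. Right middle lobe "
          "pneumonia. 2. Minimal pleural thickening on the right may represent "
          "small pleural effusion.")
print(detect_triggers(report))
```

The distinct codes among the returned matches form the candidate code set for the document; everything else is decided in the second stage.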
The dictionaries contain 440 terms, with 9.78 terms per code on average; they are available online at https://sites.google.com/site/colinacherry/ICD9CM_ACL11.txt. Given these dictionaries, the exact matching of terms to input documents is straightforward. In our experiments, this process finds on average 1.83 distinct candidate codes per document.
The quality of the dictionary significantly affects the prediction performance of the proposed two-stage approach. Especially important is the coverage of the dictionary. If a trigger term is missing from the dictionary and, as a result, the code is not selected as a candidate code, it will not be recovered in the following stage, resulting in a false negative. Preliminary experiments show that our dictionary recovers 94.42% of the codes in the training set and 93.20% in the test set. These numbers provide an upper bound on recall for the overall approach.
4.2 Sequence Construction
After trigger detection, we view the input document as a sequence of candidate codes, each corresponding to a detected trigger (see Figure 1). By tagging these candidates in sequence, we can label each candidate code as present or absent and use previous tagging decisions to model code interactions. The final code subset is constructed by collecting all candidate codes tagged as present.
Our training data consists of [document, code set] pairs, augmented with the trigger terms detected through dictionary matching. We transform this into a sequence to be tagged using the following steps:
Ordering: The candidate code sequence is presented in reverse chronological order, according to when their corresponding trigger terms appear in the document. That is, the last candidate to be detected by the dictionary will be the first code to appear in our candidate sequence. Reverse order was chosen because clinical documents often close with a final (and informative) diagnosis.
Merging: Each detected trigger corresponds to exactly one code; however, several triggers may be detected for the same code throughout a document. If a code has several triggers, we keep only the last occurrence. When possible, we collect relevant features (such as negation information) of all occurrences and associate them with this last occurrence.
Labelling: Each candidate code is assigned a binary label (present or absent) based on whether it appears in the gold-standard code set. Note that this
(Figure 1 content: the example report reads "Cough, fever in 9-year-old male. IMPRESSION: 1. Right middle lobe pneumonia. 2. Minimal pleural thickening on the right may represent small pleural effusion." The detected candidate codes are 486 (pneumonia, context=pos, sem=disease), 511.9 (pleural effusion, context=neg, sem=disease), 780.6 (fever, context=pos, sem=symptom), and 786.2 (cough, context=pos, sem=symptom); only 486 is tagged as present, matching the gold code set {486}.)
Figure 1: An example document and its corresponding gold-standard tag sequence. The top binary layer is the correct output tag sequence, which confirms or rejects the presence of candidate codes. The bottom layer shows the candidate code sequence derived from the text, with corresponding trigger phrases and some prominent features. process can not introduce gold-standard codes that were not proposed by the dictionary. The final output of these steps is depicted in Figure 1. To the left, we have an input text with underlined trigger phrases, as detected by our dictionary. This implies an input sequence (bottom right), which consists of detected codes and their corresponding trigger phrases. The gold-standard code set for the document is used to infer a gold-standard label sequence for these codes (top right). At test time, the goal of the classifier is to correctly predict the correct binary label sequence for new inputs. We discuss the construction of the features used to make this prediction in section 4.3. 4.3 Model We model this sequence data using a discriminative SVM-HMM (Taskar et al., 2003; Altun et al., 2003). This allows us to use rich, over-lapping features of the input while also modeling interactions between labels. A discriminative HMM has two major categories of features: emission features, which characterize a candidate’s tag in terms of the input document x, and transition features, which characterize a tag in terms of the tags that have come before it. We describe these two feature categories and then our training mechanism. All feature engineering discussed below was carried out using 10-fold crossvalidation on the training set. Transition Features The transition features are modeled as simple indicators over n-grams of present codes, for values of n up to 10, the largest number of codes proposed by our dictionary in the training set.3 This allows the system to learn sequences of codes that are (and are not) likely to occur in the gold-standard data. We found it useful to pad our n-grams with “beginning of document” tokens for sequences when fewer than n codes have been labelled as present, but found it harmful to include an end-of-document tag once labelling is complete. We suspect that the small training set for the challenge makes the system prone to over-fit when modeling code-set length. Emission Features The vast majority of our training signal comes from emission features, which carefully model both the trigger term’s local context and the document as a whole. For each candidate code, three types of features are generated: document features, ConText features, and code-semantics features (Table 1). Document: Document features include indicators on all individual words, 2-grams, 3-grams, and 4grams found in the document. These n-gram features have the candidate code appended to them, making them similar to features traditionally used in multiclass document categorization. ConText: We take advantage of the ConText algorithm’s output. ConText is publicly available software that determines the presence of negated, hypothetical, historical, and family-related context for a given phrase in a clinical text (Harkema et al., 2009). 3We can easily afford such a long history because input sequences are generally short and the tagging is binary, resulting in only a small number of possible histories for a document. 746 Features gen. spec. 
Document n-gram x ConText current match context x x only in context x x more than once in context x x other matches present x x present in context = pos x x code present in context x x Code Semantics current match sem type x other matches sem type, context = pos x x Table 1: The emission features used in LT-HMM. Typeset words represent variables replaced with specific values, i.e. context ∈{pos,neg}, sem type ∈ {symptom,disease}, code is one of 45 challenge codes, n-gram is a document n-gram. Features can come in generic and/or code-specific version. The algorithm is based on regular expression matching of the context to a precompiled list of context indicators. Regardless of its simplicity, the algorithm has shown very good performance on a variety of clinical document types. We run ConText for each trigger term located in the text and produce two types of features: features related to the candidate code in question and features related to other candidate codes of the document. Negated, hypothetical, and family-related contexts are clustered into a single negative context for the term. Absence of the negative context implies the positive context. We used the following ConText derived indicator features: for the current candidate code, if there is at least one trigger term found in a positive (negative) context, if all trigger terms for this code are found in a positive (negative) context, if there are more than one trigger terms for the code found in a positive (negative) context; for other candidate codes of the document, if there is at least one other candidate code, if there is another candidate code with at least one trigger term found in a positive context, if there is a trigger term for candidate code ci found in a positive (negative) context. Code Semantics: We include features that indicate if the code itself corresponds to a disease or a symptom. This assignment was determined based on the UMLS semantic type of the code. Like the ConText features, code features come in two types: those regarding the candidate code in question and those regarding other candidate codes from the same document. Generic versus Specific: Most of our features come in two versions: generic and code-specific. Generic features are concerned with classifying any candidate as present or absent based on characteristics of its trigger or semantics. Code-specific features append the candidate code to the feature. For example, the feature context=pos represents that the current candidate has a trigger term in a positive context, while context=pos:486 adds the information that the code in question is 486. Note that n-grams features are only code-specific, as they are not connected to any specific trigger term. To an extent, code-specific features allow us to replicate the traditional classification approach, which focuses on one code at a time. Using these features, the classifier is free to build complex submodels for a particular code, provided that this code has enough training examples. Generic versions of the features, on the other hand, make it possible to learn common rules applicable to all codes, including low-frequency ones. In this way, even in the extreme case of having zero training examples for a particular code, the model can still potentially assign the code to new documents, provided it is detected by our dictionary. This is impossible in a traditional document-classification setting. 
Training We train our SVM-HMM with the objective of separating the correct tag sequence from all others by a fixed margin of 1, using a primal stochastic gradient optimization algorithm that follows ShalevShwartz et al. (2007). Let S be a set of training points (x, y), where x is the input and y is the corresponding gold-standard tag sequence. Let φ(x, y) be a function that transforms complete input-output pairs into feature vectors. We also use φ(x, y′, y) as shorthand for the difference in features between 747 begin Input: S, λ, n Initialize: Set w0 to the 0 vector for t = 1, 2 . . . , n|S| Choose (x, y) ∈S at random Set the learning rate: ηt = 1 λt Search: y′ = arg maxy′′ [δ(y, y′′) + wt · φ(x, y′′)] Update: wt+1 = wt + ηt φ(x, y, y′) −λwt Adjust: wt+1 = wt+1 · min h 1, 1/ √ λ ∥wt+1∥ i end Output: wn|S|+1 end Figure 2: Training an SVM-HMM two outputs: φ(x, y′, y) = φ(x, y′) −φ(x, y). With this notation in place, the SVM-HMM minimizes the regularized hinge-loss: min w λ 2 w2 + 1 |S| X (x,y)∈S ℓ(w; (x, y)) (1) where ℓ(w; (x, y)) = max y′ δ(y, y′) + w · φ(x, y′, y) (2) and where δ(y, y′) = 0 when y = y′ and 1 otherwise.4 Intuitively, the objective attempts to find a small weight vector w that separates all incorrect tag sequences y′ from the correct tag sequence y by a margin of 1. λ controls the trade-off between regularization and training hinge-loss. The stochastic gradient descent algorithm used to optimize this objective is shown in Figure 2. It bears many similarities to perceptron HMM training (Collins, 2002), with theoretically-motivated alterations, such as selecting training points at random5 and the explicit inclusion of a learning rate η 4We did not experiment with structured versions of δ that account for the number of incorrect tags in the label sequence y′, as a fixed margin was already working very well. We intend to explore structured costs in future work. 5Like many implementations, we make n passes through S, shuffling S before each pass, rather than sampling from S with replacement n|S| times. training test # of documents 978 976 # of distinct codes 45 45 # of distinct code subsets 94 94 # of codes with < 10 ex. 24 24 avg # of codes per document 1.25 1.23 Table 2: The training and test set characteristics. and a regularization term λ. The search step can be carried out with a two-best version of the Viterbi algorithm; if the one-best answer y′ 1 matches the goldstandard y, that is δ(y, y′ 1) = 0, then y′ 2 is checked to see if its loss is higher. We tune two hyper-parameters using 10-fold cross-validation: the regularization parameter λ and a number of passes n through the training data. Using F1 as measured by 10-fold cross-validation on the training set, we found values of λ = 0.1 with n = 5 to prove optimal. Training time is less than one minute on modern hardware. 5 Experiments 5.1 Data For testing purposes, we use the CMC Challenge dataset. The data consists of 978 training and 976 test medical records labelled with one or more ICD9-CM codes from a set of 45 codes. The data statistics are presented in Table 2. The training and test sets have similar, very imbalanced distributions of codes. In particular, all codes in the test set have at least one training example. Moreover, for any code subset assigned to a test document there is at least one training document labelled with the same code subset. Notably, more than half of the codes have less than 10 instances in both training and test sets. 
Following the challenge’s protocol, we use microaveraged F1-measure for evaluation. 5.2 Baseline As the first baseline for comparison, we built a one-classifier-per-code statistical system. A document’s code subset is implied by the set of classifiers that assign it a positive label. The classifiers use a feature set designed to mimic our LT-HMM as closely as possible, including n-grams, dictionary matches, ConText output, and symptom/disease se748 mantic types. Each classifier is trained as an SVM with a linear kernel. Unlike our approach, this baseline cannot share features across codes, and it does not allow coding decisions for a document to inform one another. It also cannot propose codes that have not been seen in the training data as it has no model for these codes. However, one should note that it is a very strong baseline. Like our proposed system, it is built with many features derived from dictionary matches and their contexts, and thus it shares many of our system’s strengths. In fact, this baseline system outperforms all published statistical approaches tested on the CMC data. Our second baseline is a symbolic system, designed to evaluate the quality of our rule-based components when used alone. It is based on the same hand-crafted dictionary, filtered according to the ConText algorithm and four code dependency rules from (Farkas and Szarvas, 2008). These rules address the problem of overcoding: some symptom codes should be omitted when a specific disease code is present.6 This symbolic system has access to the same hand-crafted resources as our LT-HMM and, therefore, has a good chance of predicting low-frequency and unseen codes. However, it lacks the flexibility of our statistical solution to accept or reject code candidates based on the whole document text, which prevents it from compensating for dictionary or ConText errors. Similarly, the structure of the code dependency rules may not provide the same flexibility as our features that look at other detected triggers and previous code assignments. 5.3 Coding Accuracy We evaluate the proposed approach on both the training set (using 10-fold cross-validation) and the test set (Table 3). The experiments demonstrate the superiority of the proposed LT-HMM approach over the one-per-code statistical scheme as well as our symbolic baseline. Furthermore, the new approach shows the best results ever achieved on the dataset, beating the top-performing system in the challenge, a symbolic method. 6Note that we do not match the performance of the Farkas and Szarvas system, likely due to our use of a different (and simpler) dictionary. Cross-fold Test Symbolic baseline N/A 85.96 Statistical baseline 87.39 88.26 LT-HMM 89.39 89.84 CMC Best N/A 89.08 Table 3: Micro-averaged F1-scores for statistical and symbolic baselines, the proposed LT-HMM approach, and the best CMC hand-crafted rule-based system. System Prec. Rec. F1 Full 90.91 88.80 89.84 -ConText 88.54 85.89 87.19 -Document 89.89 88.55 89.21 -Code Semantics 90.10 88.38 89.23 -Append code-specific 88.96 88.30 88.63 -Transition 90.79 88.38 89.57 -ConText & Transition 86.91 85.39 86.14 Table 4: Results on the CMC test data with each major component removed. 5.4 Ablation Our system employs a number of emission feature templates. We measure the impact of each by removing the template, re-training, and testing on the challenge test data, as shown in Table 4. By far the most important component of our system is the output of the ConText algorithm. 
We also tested a version of the system that does not create a parallel code-specific feature set by appending the candidate code to emission features. This system tags code-candidates without any codespecific components, but it still does very well, outperforming the baselines. Removing the sequence-based transition features from our system has only a small impact on accuracy. This is because several of our emission features look at features of other candidate codes. This provides a strong approximation to the actual tagging decisions for these candidates. If we remove the ConText features, the HMM’s transition features become more important (compare line 2 of Table 4 to line 7). 5.5 Low-frequency codes As one can see from Table 2, more than half of the available codes appear fewer than 10 times in the 749 System Prec. Rec. F1 Symbolic baseline 42.53 56.06 48.37 Statistical baseline 73.33 33.33 45.83 LT-HMM 70.00 53.03 60.34 Table 5: Results on the CMC test set, looking only at the codes with fewer than 10 examples in the training set. System Prec. Rec. F1 Symbolic baseline 60.00 80.00 68.57 All training data 72.92 74.47 73.68 One code held out 79.31 48.94 60.53 Table 6: Results on the CMC test set when all instances of a low-frequency code are held-out during training. training documents. This does not provide much training data for a one-classifier-per-code approach, which has been a major motivating factor in the design of our LT-HMM. In Table 5, we compare our system to the baselines on the CMC test set, considering only these low-frequency codes. We show a 15-point gain in F1 over the statistical baseline on these hard cases, brought on by an substantial increase in recall. Similarly, we improve over the symbolic baseline, due to a much higher precision. In this way, the LT-HMM captures the strengths of both approaches. Our system also has the ability to predict codes that have not been seen during training, by labelling a dictionary match for a code as present according to its local context. We simulate this setting by dropping training data. For each low-frequency code c, we hold out all training documents that include c in their gold-standard code set. We then train our system on the reduced training set and measure its ability to detect c on the unseen test data. 11 of the 24 low-frequency codes have no dictionary matches in our test data; we omit them from our analysis as we are unable to predict them. The micro-averaged results for the remaining 13 low-frequency codes are shown in Table 6, with the results from the symbolic baseline and from our system trained on the complete training data provided for comparison. We were able to recover 49% of the test-time occurrences of codes withheld from training, while maintaining our full system’s precision. Considering that traditional statistical strategies would lead to recall dropping uniformly to 0, this is a vast improvement. However, the symbolic baseline recalls 80% of occurrences in aggregate, indicating that we are not yet making optimal use of the dictionary for cases when a code is missing from the training data. By holding out only correct occurrences of a code c, our system becomes biased against it: all trigger terms for c that are found in the training data must be labelled absent. Nonetheless, out of the 13 codes with dictionary matches, there were 9 codes that we were able to recall at a rate of 50% or more, and 5 codes that achieved 100% recall. 
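For reference, the micro-averaged precision, recall, and F1 used throughout Tables 3-6 can be computed from per-document gold and predicted code sets as sketched below; the example code sets are invented.

```python
# Micro-averaged precision/recall/F1 over per-document ICD-9-CM code sets.
def micro_prf(gold_sets, pred_sets):
    tp = sum(len(g & p) for g, p in zip(gold_sets, pred_sets))
    fp = sum(len(p - g) for g, p in zip(gold_sets, pred_sets))
    fn = sum(len(g - p) for g, p in zip(gold_sets, pred_sets))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = [{"486"}, {"780.6", "786.2"}]
pred = [{"486"}, {"786.2"}]
print(micro_prf(gold, pred))  # (1.0, 0.666..., 0.8)
```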
6 Conclusion We have presented the lexically-triggered HMM, a novel and effective approach for clinical document coding. The LT-HMM takes advantage of lexical triggers for clinical codes by operating in two stages: first, a lexical match is performed against a trigger term dictionary to collect a set of candidates codes for a document; next, a discriminative HMM selects the best subset of codes to assign to the document. Using both generic and code-specific features, the LT-HMM outperforms a traditional one-percode statistical classification method, with substantial improvements on low-frequency codes. Also, it achieves the best ever performance on a common testbed, beating the top-performer of the 2007 CMC Challenge, a hand-crafted rule-based system. Finally, we have demonstrated that the LT-HMM can correctly predict codes never seen in the training set, a vital characteristic missing from previous statistical methods. In the future, we would like to augment our dictionary-based matching component with entityrecognition technology. It would be interesting to model triggers as latent variables in the document coding process, in a manner similar to how latent subjective sentences have been used in documentlevel sentiment analysis (Yessenalina et al., 2010). This would allow us to employ a learned matching component that is trained to compliment our classification component. Acknowledgements Many thanks to Berry de Bruijn, Joel Martin, and the ACL-HLT reviewers for their helpful comments. 750 References Y. Altun, I. Tsochantaridis, and T. Hofmann. 2003. Hidden Markov support vector machines. In ICML. A. R. Aronson, O. Bodenreider, D. Demner-Fushman, K. W. Fung, V. K. Lee, J. G. Mork, A. Nvol, L. Peters, and W. J. Rogers. 2007. From indexing the biomedical literature to coding clinical text: Experience with MTI and machine learning approaches. In BioNLP, pages 105–112. S. Benson. 2006. Computer assisted coding software improves documentation, coding, compliance and revenue. Perspectives in Health Information Management, CAC Proceedings, Fall. M. Collins. 2002. Discriminative training methods for Hidden Markov Models: Theory and experiments with perceptron algorithms. In EMNLP. K. Crammer, M. Dredze, K. Ganchev, P. P. Talukdar, and S. Carroll. 2007. Automatic code assignment to medical text. In BioNLP, pages 129–136. R. Farkas and G. Szarvas. 2008. Automatic construction of rule-based ICD-9-CM coding systems. BMC Bioinformatics, 9(Suppl 3):S10. I. Goldstein, A. Arzumtsyan, and Uzuner. 2007. Three approaches to automatic assignment of ICD-9-CM codes to radiology reports. In AMIA, pages 279–283. H. Harkema, J. N. Dowling, T. Thornblade, and W. W. Chapman. 2009. Context: An algorithm for determining negation, experiencer, and temporal status from clinical reports. Journal of Biomedical Informatics, 42(5):839–851, October. L. Kevers and J. Medori. 2010. Symbolic classification methods for patient discharge summaries encoding into ICD. In Proceedings of the 7th International Conference on NLP (IceTAL), pages 197–208, Reykjavik, Iceland, August. D. A. Lindberg, B. L. Humphreys, and A. T. McCray. 1993. The Unified Medical Language System. Methods of Information in Medicine, 32(4):281–291. Y. A. Lussier, L. Shagina, and C. Friedman. 2000. Automating ICD-9-CM encoding using medical language processing: A feasibility study. In AMIA, page 1072. S. M. Meystre, G. K. Savova, K. C. Kipper-Schuler, and J. F. Hurdle. 2008. 
Extracting information from textual documents in the electronic health record: a review of recent research. Methods of Information in Medicine, 47(Suppl 1):128–144. J. Patrick, Y. Zhang, and Y. Wang. 2007. Developing feature types for classifying clinical notes. In BioNLP, pages 191–192. J. P. Pestian, C. Brew, P. Matykiewicz, D. J. Hovermale, N. Johnson, K. B. Cohen, and W. Duch. 2007. A shared task involving multi-label classification of clinical free text. In BioNLP, pages 97–104. S. Shalev-Shwartz, Y. Singer, and N. Srebro. 2007. Pegasos: Primal Estimated sub-GrAdient SOlver for SVM. In ICML, Corvallis, OR. M. H. Stanfill, M. Williams, S. H. Fenton, R. A. Jenders, and W. R. Hersh. 2010. A systematic literature review of automated clinical coding and classification systems. JAMIA, 17:646–651. H. Suominen, F. Ginter, S. Pyysalo, A. Airola, T. Pahikkala, S. Salanter, and T. Salakoski. 2008. Machine learning to automate the assignment of diagnosis codes to free-text radiology reports: a method description. In Proceedings of the ICML Workshop on Machine Learning for Health-Care Applications, Helsinki, Finland. B. Taskar, C. Guestrin, and D. Koller. 2003. Max-margin markov networks. In Neural Information Processing Systems Conference (NIPS03), Vancouver, Canada, December. A. Yessenalina, Y. Yue, and C. Cardie. 2010. Multilevel structured models for document-level sentiment classification. In EMNLP. 751
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 752–762, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Learning to Grade Short Answer Questions using Semantic Similarity Measures and Dependency Graph Alignments Michael Mohler Dept. of Computer Science University of North Texas Denton, TX [email protected] Razvan Bunescu School of EECS Ohio University Athens, Ohio [email protected] Rada Mihalcea Dept. of Computer Science University of North Texas Denton, TX [email protected] Abstract In this work we address the task of computerassisted assessment of short student answers. We combine several graph alignment features with lexical semantic similarity measures using machine learning techniques and show that the student answers can be more accurately graded than if the semantic measures were used in isolation. We also present a first attempt to align the dependency graphs of the student and the instructor answers in order to make use of a structural component in the automatic grading of student answers. 1 Introduction One of the most important aspects of the learning process is the assessment of the knowledge acquired by the learner. In a typical classroom assessment (e.g., an exam, assignment or quiz), an instructor or a grader provides students with feedback on their answers to questions related to the subject matter. However, in certain scenarios, such as a number of sites worldwide with limited teacher availability, online learning environments, and individual or group study sessions done outside of class, an instructor may not be readily available. In these instances, students still need some assessment of their knowledge of the subject, and so, we must turn to computerassisted assessment (CAA). While some forms of CAA do not require sophisticated text understanding (e.g., multiple choice or true/false questions can be easily graded by a system if the correct solution is available), there are also student answers made up of free text that may require textual analysis. Research to date has concentrated on two subtasks of CAA: grading essay responses, which includes checking the style, grammaticality, and coherence of the essay (Higgins et al., 2004), and the assessment of short student answers (Leacock and Chodorow, 2003; Pulman and Sukkarieh, 2005; Mohler and Mihalcea, 2009), which is the focus of this work. An automatic short answer grading system is one that automatically assigns a grade to an answer provided by a student, usually by comparing it to one or more correct answers. Note that this is different from the related tasks of paraphrase detection and textual entailment, since a common requirement in student answer grading is to provide a grade on a certain scale rather than make a simple yes/no decision. In this paper, we explore the possibility of improving upon existing bag-of-words (BOW) approaches to short answer grading by utilizing machine learning techniques. Furthermore, in an attempt to mirror the ability of humans to understand structural (e.g. syntactic) differences between sentences, we employ a rudimentary dependency-graph alignment module, similar to those more commonly used in the textual entailment community. Specifically, we seek answers to the following questions. First, to what extent can machine learning be leveraged to improve upon existing approaches to short answer grading. 
Second, does the dependency parse structure of a text provide clues that can be exploited to improve upon existing BOW methodologies? 752 2 Related Work Several state-of-the-art short answer grading systems (Sukkarieh et al., 2004; Mitchell et al., 2002) require manually crafted patterns which, if matched, indicate that a question has been answered correctly. If an annotated corpus is available, these patterns can be supplemented by learning additional patterns semi-automatically. The Oxford-UCLES system (Sukkarieh et al., 2004) bootstraps patterns by starting with a set of keywords and synonyms and searching through windows of a text for new patterns. A later implementation of the Oxford-UCLES system (Pulman and Sukkarieh, 2005) compares several machine learning techniques, including inductive logic programming, decision tree learning, and Bayesian learning, to the earlier pattern matching approach, with encouraging results. C-Rater (Leacock and Chodorow, 2003) matches the syntactical features of a student response (i.e., subject, object, and verb) to that of a set of correct responses. This method specifically disregards the BOW approach to take into account the difference between “dog bites man” and “man bites dog” while still trying to detect changes in voice (i.e., “the man was bitten by the dog”). Another short answer grading system, AutoTutor (Wiemer-Hastings et al., 1999), has been designed as an immersive tutoring environment with a graphical “talking head” and speech recognition to improve the overall experience for students. AutoTutor eschews the pattern-based approach entirely in favor of a BOW LSA approach (Landauer and Dumais, 1997). Later work on AutoTutor(Wiemer-Hastings et al., 2005; Malatesta et al., 2002) seeks to expand upon their BOW approach which becomes less useful as causality (and thus word order) becomes more important. A text similarity approach was taken in (Mohler and Mihalcea, 2009), where a grade is assigned based on a measure of relatedness between the student and the instructor answer. Several measures are compared, including knowledge-based and corpusbased measures, with the best results being obtained with a corpus-based measure using Wikipedia combined with a “relevance feedback” approach that iteratively augments the instructor answer by integrating the student answers that receive the highest grades. In the dependency-based classification component of the Intelligent Tutoring System (Nielsen et al., 2009), instructor answers are parsed, enhanced, and manually converted into a set of content-bearing dependency triples or facets. For each facet of the instructor answer each student’s answer is labelled to indicate whether it has addressed that facet and whether or not the answer was contradictory. The system uses a decision tree trained on part-of-speech tags, dependency types, word count, and other features to attempt to learn how best to classify an answer/facet pair. Closely related to the task of short answer grading is the task of textual entailment (Dagan et al., 2005), which targets the identification of a directional inferential relation between texts. Given a pair of two texts as input, typically referred to as text and hypothesis, a textual entailment system automatically finds if the hypothesis is entailed by the text. In particular, the entailment-related works that are most similar to our own are the graph matching techniques proposed by Haghighi et al. (2005) and Rus et al. (2007). 
Both input texts are converted into a graph by using the dependency relations obtained from a parser. Next, a matching score is calculated, by combining separate vertex- and edge-matching scores. The vertex matching functions use wordlevel lexical and semantic features to determine the quality of the match while the the edge matching functions take into account the types of relations and the difference in lengths between the aligned paths. Following the same line of work in the textual entailment world are (Raina et al., 2005), (MacCartney et al., 2006), (de Marneffe et al., 2007), and (Chambers et al., 2007), which experiment variously with using diverse knowledge sources, using a perceptron to learn alignment decisions, and exploiting natural logic. 3 Answer Grading System We use a set of syntax-aware graph alignment features in a three-stage pipelined approach to short answer grading, as outlined in Figure 1. In the first stage (Section 3.1), the system is provided with the dependency graphs for each pair of instructor (Ai) and student (As) answers. For each 753 Figure 1: Pipeline model for scoring short-answer pairs. node in the instructor’s dependency graph, we compute a similarity score for each node in the student’s dependency graph based upon a set of lexical, semantic, and syntactic features applied to both the pair of nodes and their corresponding subgraphs. The scoring function is trained on a small set of manually aligned graphs using the averaged perceptron algorithm. In the second stage (Section 3.2), the node similarity scores calculated in the previous stage are used to weight the edges in a bipartite graph representing the nodes in Ai on one side and the nodes in As on the other. We then apply the Hungarian algorithm to find both an optimal matching and the score associated with such a matching. In this stage, we also introduce question demoting in an attempt to reduce the advantage of parroting back words provided in the question. In the final stage (Section 3.4), we produce an overall grade based upon the alignment scores found in the previous stage as well as the results of several semantic BOW similarity measures (Section 3.3). Using each of these as features, we use Support Vector Machines (SVM) to produce a combined realnumber grade. Finally, we build an Isotonic Regression (IR) model to transform our output scores onto the original [0,5] scale for ease of comparison. 3.1 Node to Node Matching Dependency graphs for both the student and instructor answers are generated using the Stanford Dependency Parser (de Marneffe et al., 2006) in collapse/propagate mode. The graphs are further post-processed to propagate dependencies across the “APPOS” (apposition) relation, to explicitly encode negation, part-of-speech, and sentence ID within each node, and to add an overarching ROOT node governing the main verb or predicate of each sentence of an answer. The final representation is a list of (relation,governor,dependent) triples, where governor and dependent are both tokens described by the tuple (sentenceID:token:POS:wordPosition). For example: (nsubj, 1:provide:VBZ:4, 1:program:NN:3) indicates that the noun “program” is a subject in sentence 1 whose associated verb is “provide.” If we consider the dependency graphs output by the Stanford parser as directed (minimally cyclic) graphs,1 we can define for each node x a set of nodes Nx that are reachable from x using a subset of the relations (i.e., edge types)2. 
We variously define “reachable” in four ways to create four subgraphs defined for each node. These are as follows: • N 0 x : All edge types may be followed • N 1 x : All edge types except for subject types, ADVCL, PURPCL, APPOS, PARATAXIS, ABBREV, TMOD, and CONJ • N 2 x : All edge types except for those in N 1 x plus object/complement types, PREP, and RCMOD • N 3 x : No edge types may be followed (This set is the single starting node x) Subgraph similarity (as opposed to simple node similarity) is a means to escape the rigidity involved in aligning parse trees while making use of as much of the sentence structure as possible. Humans intuitively make use of modifiers, predicates, and subordinate clauses in determining that two sentence entities are similar. For instance, the entity-describing phrase “men who put out fires” matches well with “firemen,” but the words “men” and “firemen” have 1The standard output of the Stanford Parser produces rooted trees. However, the process of collapsing and propagating dependences violates the tree structure which results in a tree with a few cross-links between distinct branches. 2For more information on the relations used in this experiment, consult the Stanford Typed Dependencies Manual at http://nlp.stanford.edu/software/dependencies manual.pdf 754 less inherent similarity. It remains to be determined how much of a node’s subgraph will positively enrich its semantics. In addition to the complete N 0 x subgraph, we chose to include N 1 x and N 2 x as tightening the scope of the subtree by first removing more abstract relations, then sightly more concrete relations. We define a total of 68 features to be used to train our machine learning system to compute node-node (more specifically, subgraph-subgraph) matches. Of these, 36 are based upon the semantic similarity of four subgraphs defined by N [0..3] x . All eight WordNet-based similarity measures listed in Section 3.3 plus the LSA model are used to produce these features. The remaining 32 features are lexicosyntactic features3 defined only for N 3 x and are described in more detail in Table 2. We use φ(xi, xs) to denote the feature vector associated with a pair of nodes ⟨xi, xs⟩, where xi is a node from the instructor answer Ai and xs is a node from the student answer As. A matching score can then be computed for any pair ⟨xi, xs⟩∈Ai × As through a linear scoring function f(xi, xs) = wT φ(xi, xs). In order to learn the parameter vector w, we use the averaged version of the perceptron algorithm (Freund and Schapire, 1999; Collins, 2002). As training data, we randomly select a subset of the student answers in such a way that our set was roughly balanced between good scores, mediocre scores, and poor scores. We then manually annotate each node pair ⟨xi, xs⟩as matching, i.e. A(xi, xs) = +1, or not matching, i.e. A(xi, xs) = −1. Overall, 32 student answers in response to 21 questions with a total of 7303 node pairs (656 matches, 6647 nonmatches) are manually annotated. The pseudocode for the learning algorithm is shown in Table 1. After training the perceptron, these 32 student answers are removed from the dataset, not used as training further along in the pipeline, and are not included in the final results. After training for 50 epochs,4 the matching score f(xi, xs) is calculated (and cached) for each node-node pair across all student answers for all assignments. 3Note that synonyms include negated antonyms (and vice versa). Hypernymy and hyponymy are restricted to at most two steps). 
4This value was chosen arbitrarily and was not tuned in anyway 0. set w ←0, w ←0, n ←0 1. repeat for T epochs: 2. foreach ⟨Ai; As⟩: 3. foreach ⟨xi, xs⟩∈Ai × As: 4. if sgn(wT φ(xi, xs)) ̸= sgn(A(xi, xs)): 5. set w ←w + A(xi, xs)φ(xi, xs) 6. set w ←w + w, n ←n + 1 7. return w/n. Table 1: Perceptron Training for Node Matching. 3.2 Graph to Graph Alignment Once a score has been computed for each node-node pair across all student/instructor answer pairs, we attempt to find an optimal alignment for the answer pair. We begin with a bipartite graph where each node in the student answer is represented by a node on the left side of the bipartite graph and each node in the instructor answer is represented by a node on the right side. The score associated with each edge is the score computed for each node-node pair in the previous stage. The bipartite graph is then augmented by adding dummy nodes to both sides which are allowed to match any node with a score of zero. An optimal alignment between the two graphs is then computed efficiently using the Hungarian algorithm. Note that this results in an optimal matching, not a mapping, so that an individual node is associated with at most one node in the other answer. At this stage we also compute several alignmentbased scores by applying various transformations to the input graphs, the node matching function, and the alignment score itself. The first and simplest transformation involves the normalization of the alignment score. While there are several possible ways to normalize a matching such that longer answers do not unjustly receive higher scores, we opted to simply divide the total alignment score by the number of nodes in the instructor answer. The second transformation scales the node matching score by multiplying it with the idf 5 of the instructor answer node, i.e., replace f(xi, xs) with idf(xi) ∗f(xi, xs). The third transformation relies upon a certain real-world intuition associated with grading student 5Inverse document frequency, as computed from the British National Corpus (BNC) 755 Name Type # features Description RootMatch binary 5 Is a ROOT node matched to: ROOT, N, V, JJ, or Other Lexical binary 3 Exact match, Stemmed match, close Levenshtein match POSMatch binary 2 Exact POS match, Coarse POS match POSPairs binary 8 Specific X-Y POS matches found Ontological binary 4 WordNet relationships: synonymy, antonymy, hypernymy, hyponymy RoleBased binary 3 Has as a child - subject, object, verb VerbsSubject binary 3 Both are verbs and neither, one, or both have a subject child VerbsObject binary 3 Both are verbs and neither, one, or both have an object child Semantic real 36 Nine semantic measures across four subgraphs each Bias constant 1 A value of 1 for all vectors Total 68 Table 2: Subtree matching features used to train the perceptron answers – repeating words in the question is easy and is not necessarily an indication of student understanding. With this in mind, we remove any words in the question from both the instructor answer and the student answer. In all, the application of the three transformations leads to eight different transform combinations, and therefore eight different alignment scores. For a given answer pair (Ai, As), we assemble the eight graph alignment scores into a feature vector ψG(Ai, As). 3.3 Lexical Semantic Similarity Haghighi et al. (2005), working on the entailment detection problem, point out that finding a good alignment is not sufficient to determine that the aligned texts are in fact entailing. 
For instance, two identical sentences in which an adjective from one is replaced by its antonym will have very similar structures (which indicates a good alignment). However, the sentences will have opposite meanings. Further information is necessary to arrive at an appropriate score. In order to address this, we combine the graph alignment scores, which encode syntactic knowledge, with the scores obtained from semantic similarity measures. Following Mihalcea et al. (2006) and Mohler and Mihalcea (2009), we use eight knowledgebased measures of semantic similarity: shortest path [PATH], Leacock & Chodorow (1998) [LCH], Lesk (1986), Wu & Palmer(1994) [WUP], Resnik (1995) [RES], Lin (1998), Jiang & Conrath (1997) [JCN], Hirst & St. Onge (1998) [HSO], and two corpusbased measures: Latent Semantic Analysis [LSA] (Landauer and Dumais, 1997) and Explicit Semantic Analysis [ESA] (Gabrilovich and Markovitch, 2007). Briefly, for the knowledge-based measures, we use the maximum semantic similarity – for each open-class word – that can be obtained by pairing it up with individual open-class words in the second input text. We base our implementation on the WordNet::Similarity package provided by Pedersen et al. (2004). For the corpus-based measures, we create a vector for each answer by summing the vectors associated with each word in the answer – ignoring stopwords. We produce a score in the range [0..1] based upon the cosine similarity between the student and instructor answer vectors. The LSA model used in these experiments was built by training Infomap6 on a subset of Wikipedia articles that contain one or more common computer science terms. Since ESA uses Wikipedia article associations as vector features, it was trained using a full Wikipedia dump. 3.4 Answer Ranking and Grading We combine the alignment scores ψG(Ai, As) with the scores ψB(Ai, As) from the lexical semantic similarity measures into a single feature vector ψ(Ai, As) = [ψG(Ai, As)|ψB(Ai, As)]. The feature vector ψG(Ai, As) contains the eight alignment scores found by applying the three transformations in the graph alignment stage. The feature vector ψB(Ai, As) consists of eleven semantic features – the eight knowledge-based features plus LSA, ESA and a vector consisting only of tf*idf weights – both with and without question demoting. Thus, the entire feature vector ψ(Ai, As) contains a total of 30 features. 6http://Infomap-nlp.sourceforge.net/ 756 An input pair (Ai, As) is then associated with a grade g(Ai, As) = uT ψ(Ai, As) computed as a linear combination of features. The weight vector u is trained to optimize performance in two scenarios: Regression: An SVM model for regression (SVR) is trained using as target function the grades assigned by the instructors. We use the libSVM 7 implementation of SVR, with tuned parameters. Ranking: An SVM model for ranking (SVMRank) is trained using as ranking pairs all pairs of student answers (As, At) such that grade(Ai, As) > grade(Ai, At), where Ai is the corresponding instructor answer. We use the SVMLight 8 implementation of SVMRank with tuned parameters. In both cases, the parameters are tuned using a grid-search. At each grid point, the training data is partitioned into 5 folds which are used to train a temporary SVM model with the given parameters. The regression passage selects the grid point with the minimal mean square error (MSE), and the SVMRank package tries to minimize the number of discordant pairs. 
The parameters found are then used to score the test set – a set not used in the grid training. 3.5 Isotonic Regression Since the end result of any grading system is to give a student feedback on their answers, we need to ensure that the system’s final score has some meaning. With this in mind, we use isotonic regression (Zadrozny and Elkan, 2002) to convert the system scores onto the same [0..5] scale used by the annotators. This has the added benefit of making the system output more directly related to the annotated grade, which makes it possible to report root mean square error in addition to correlation. We train the isotonic regression model on each type of system output (i.e., alignment scores, SVM output, BOW scores). 4 Data Set To evaluate our method for short answer grading, we created a data set of questions from introductory computer science assignments with answers provided by a class of undergraduate students. The assignments were administered as part of a Data Struc7http://www.csie.ntu.edu.tw/˜cjlin/libsvm/ 8http://svmlight.joachims.org/ tures course at the University of North Texas. For each assignment, the student answers were collected via an online learning environment. The students submitted answers to 80 questions spread across ten assignments and two examinations.9 Table 3 shows two question-answer pairs with three sample student answers each. Thirty-one students were enrolled in the class and submitted answers to these assignments. The data set we work with consists of a total of 2273 student answers. This is less than the expected 31 × 80 = 2480 as some students did not submit answers for a few assignments. In addition, the student answers used to train the perceptron are removed from the pipeline after the perceptron training stage. The answers were independently graded by two human judges, using an integer scale from 0 (completely incorrect) to 5 (perfect answer). Both human judges were graduate students in the computer science department; one (grader1) was the teaching assistant assigned to the Data Structures class, while the other (grader2) is one of the authors of this paper. We treat the average grade of the two annotators as the gold standard against which we compare our system output. Difference Examples % of examples 0 1294 57.7% 1 514 22.9% 2 231 10.3% 3 123 5.5% 4 70 3.1% 5 9 0.4% Table 4: Annotator Analysis The annotators were given no explicit instructions on how to assign grades other than the [0..5] scale. Both annotators gave the same grade 57.7% of the time and gave a grade only 1 point apart 22.9% of the time. The full breakdown can be seen in Table 4. In addition, an analysis of the grading patterns indicate that the two graders operated off of different grading policies where one grader (grader1) was more generous than the other. In fact, when the two differed, grader1 gave the higher grade 76.6% of the time. The average grade given by grader1 is 4.43, 9Note that this is an expanded version of the dataset used by Mohler and Mihalcea (2009) 757 Sample questions, correct answers, and student answers Grades Question: What is the role of a prototype program in problem solving? Correct answer: To simulate the behavior of portions of the desired software product. Student answer 1: A prototype program is used in problem solving to collect data for the problem. 1, 2 Student answer 2: It simulates the behavior of portions of the desired software product. 5, 5 Student answer 3: To find problem and errors in a program before it is finalized. 
2, 2
Question: What are the main advantages associated with object-oriented programming?
Correct answer: Abstraction and reusability.
Student answer 1: They make it easier to reuse and adapt previously written code and they separate complex programs into smaller, easier to understand classes. 5, 4
Student answer 2: Object oriented programming allows programmers to use an object with classes that can be changed and manipulated while not affecting the entire object at once. 1, 1
Student answer 3: Reusable components, Extensibility, Maintainability, it reduces large problems into smaller more manageable problems. 4, 4
Table 3: A sample question with short answers provided by students and the grades assigned by the two human judges
while the average grade given by grader2 is 3.94. The dataset is biased towards correct answers. We believe all of these issues correctly mirror real-world issues associated with the task of grading.
5 Results
We independently test two components of our overall grading system: the node alignment detection scores found by training the perceptron, and the overall grades produced in the final stage. For the alignment detection, we report the precision, recall, and F-measure associated with correctly detecting matches. For the grading stage, we report a single Pearson’s correlation coefficient tracking the annotator grades (average of the two annotators) and the output score of each system. In addition, we report the Root Mean Square Error (RMSE) for the full dataset as well as the median RMSE across each individual question. This is to give an indication of the performance of the system for grading a single question in isolation.10
10 We initially intended to report an aggregate of question-level Pearson correlation results, but discovered that the dataset contained one question for which each student received full points – leaving the correlation undefined. We believe that this casts some doubt on the applicability of Pearson’s (or Spearman’s) correlation coefficient for the short answer grading task. We have retained its use here alongside RMSE for ease of comparison.
5.1 Perceptron Alignment
For the purpose of this experiment, the scores associated with a given node-node matching are converted into a simple yes/no matching decision where positive scores are considered a match and negative scores a non-match. The threshold weight learned from the bias feature strongly influences the point at which real scores change from non-matches to matches, and given the threshold weight learned by the algorithm, we find an F-measure of 0.72, with precision (P) = 0.85 and recall (R) = 0.62. However, as the perceptron is designed to minimize error rate, this may not reflect an optimal objective when seeking to detect matches. By manually varying the threshold, we find a maximum F-measure of 0.76, with P = 0.79 and R = 0.74. Figure 2 shows the full precision-recall curve with the F-measure overlaid.
Figure 2: Precision, recall, and F-measure on node-level match detection
5.2 Question Demoting
One surprise while building this system was the consistency with which the novel technique of question demoting improved scores for the BOW similarity measures. With this relatively minor change the average correlation between the BOW methods’ similarity scores and the student grades improved by up to 0.046 with an average improvement of 0.019 across all eleven semantic features.
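As a concrete illustration of question demoting, the sketch below removes every word of the question from both answers before computing a bag-of-words similarity. It is a simplification: the similarity here is a plain cosine over term counts rather than the WordNet-based or corpus-based measures used above, and the tokenizer is deliberately crude. The example texts are taken from Table 3.

```python
# Minimal sketch of question demoting: drop any word that appears in the
# question from both answers before computing a bag-of-words cosine
# similarity. Tokenization and weighting are deliberately simplified.
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def demote(tokens, question_tokens):
    qset = set(question_tokens)
    return [t for t in tokens if t not in qset]

def cosine(a, b):
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

question = "What is the role of a prototype program in problem solving?"
instructor = "To simulate the behavior of portions of the desired software product."
student = "A prototype program is used in problem solving to collect data for the problem."

q = tokenize(question)
plain = cosine(tokenize(instructor), tokenize(student))
demoted = cosine(demote(tokenize(instructor), q), demote(tokenize(student), q))
print(round(plain, 3), round(demoted, 3))
```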
Table 5 shows the results of applying question demoting to our semantic features. When comparing scores using RMSE, the difference is less consistent, yielding an average improvement of 0.002. However, for one measure (tf*idf), the improvement is 0.063 which brings its RMSE score close to the lowest of all BOW metrics. The reasons for this are not entirely clear. As a baseline, we include here the results of assigning the average grade (as determined on the training data) for each question. The average grade was chosen as it minimizes the RMSE on the training data. ρ w/ QD RMSE w/ QD Med. RMSE w/ QD Lesk 0.450 0.462 1.034 1.050 0.930 0.919 JCN 0.443 0.461 1.022 1.026 0.954 0.923 HSO 0.441 0.456 1.036 1.034 0.966 0.935 PATH 0.436 0.457 1.029 1.030 0.940 0.918 RES 0.409 0.431 1.045 1.035 0.996 0.941 Lin 0.382 0.407 1.069 1.056 0.981 0.949 LCH 0.367 0.387 1.068 1.069 0.986 0.958 WUP 0.325 0.343 1.090 1.086 1.027 0.977 ESA 0.395 0.401 1.031 1.086 0.990 0.955 LSA 0.328 0.335 1.065 1.061 0.951 1.000 tf*idf 0.281 0.327 1.085 1.022 0.991 0.918 Avg.grade 1.097 1.097 0.973 0.973 Table 5: BOW Features with Question Demoting (QD). Pearson’s correlation, root mean square error (RMSE), and median RMSE for all individual questions. 5.3 Alignment Score Grading Before applying any machine learning techniques, we first test the quality of the eight graph alignment features ψG(Ai, As) independently. Results indicate that the basic alignment score performs comparably to most BOW approaches. The introduction of idf weighting seems to degrade performance somewhat, while introducing question demoting causes the correlation with the grader to increase while increasing RMSE somewhat. The four normalized components of ψG(Ai, As) are reported in Table 6. 5.4 SVM Score Grading The SVM components of the system are run on the full dataset using a 12-fold cross validation. Each of the 10 assignments and 2 examinations (for a total of 12 folds) is scored independently with ten of the remaining eleven used to train the machine learnStandard w/ IDF w/ QD w/ QD+IDF Pearson’s ρ 0.411 0.277 0.428 0.291 RMSE 1.018 1.078 1.046 1.076 Median RMSE 0.910 0.970 0.919 0.992 Table 6: Alignment Feature/Grade Correlations using Pearson’s ρ. Results are also reported when inverse document frequency weighting (IDF) and question demoting (QD) are used. ing system. For each fold, one additional fold is held out for later use in the development of an isotonic regression model (see Figure 3). The parameters (for cost C and tube width ǫ) were found using a grid search. At each point on the grid, the data from the ten training folds was partitioned into 5 sets which were scored according to the current parameters. SVMRank and SVR sought to minimize the number of discordant pairs and the mean absolute error, respectively. Both SVM models are trained using a linear kernel.11 Results from both the SVR and the SVMRank implementations are reported in Table 7 along with a selection of other measures. Note that the RMSE score is computed after performing isotonic regression on the SVMRank results, but that it was unnecessary to perform an isotonic regression on the SVR results as the system was trained to produce a score on the correct scale. We report the results of running the systems on three subsets of features ψ(Ai, As): BOW features ψB(Ai, As) only, alignment features ψG(Ai, As) only, or the full feature vector (labeled “Hybrid”). 
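The isotonic-regression calibration applied to the SVMRank outputs above can be sketched as follows, assuming scikit-learn's IsotonicRegression; the development-fold scores and grades below are synthetic placeholders, and the fitted monotone map simply projects raw system scores onto the annotators' [0..5] scale.

```python
# Sketch of the isotonic-regression calibration step: fit a monotone map
# from raw system scores (placeholder SVMRank outputs) to the annotators'
# [0..5] scale on a held-out fold, then apply it to test scores so that
# RMSE can be reported on the grade scale. Assumes scikit-learn.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.RandomState(1)
dev_scores = rng.normal(size=100)   # raw scores on the held-out fold
dev_grades = np.clip(2.5 + 1.2 * dev_scores + rng.normal(scale=0.5, size=100), 0, 5)

calibrator = IsotonicRegression(y_min=0.0, y_max=5.0, out_of_bounds="clip")
calibrator.fit(dev_scores, dev_grades)

test_scores = rng.normal(size=5)
print(calibrator.predict(test_scores))   # calibrated grades on the 0..5 scale
```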
Finally, three subsets of the alignment features are used: only unnormalized features, only normalized features, or the full alignment feature set. B C A − Ten Folds B C A − Ten Folds B C A − Ten Folds IR Model SVM Model Features Figure 3: Dependencies of the SVM/IR training stages. 11We also ran the SVR system using quadratic and radial-basis function (RBF) kernels, but the results did not show significant improvement over the simpler linear kernel. 759 Unnormalized Normalized Both IAA Avg. grade tf*idf Lesk BOW Align Hybrid Align Hybrid Align Hybrid SVMRank Pearson’s ρ 0.586 0.327 0.450 0.480 0.266 0.451 0.447 0.518 0.424 0.493 RMSE 0.659 1.097 1.022 1.050 1.042 1.093 1.038 1.015 0.998 1.029 1.021 Median RMSE 0.605 0.973 0.918 0.919 0.943 0.974 0.903 0.865 0.873 0.904 0.901 SVR Pearson’s ρ 0.586 0.327 0.450 0.431 0.167 0.437 0.433 0.459 0.434 0.464 RMSE 0.659 1.097 1.022 1.050 0.999 1.133 0.995 1.001 0.982 1.003 0.978 Median RMSE 0.605 0.973 0.918 0.919 0.910 0.987 0.893 0.894 0.877 0.886 0.862 Table 7: The results of the SVM models trained on the full suite of BOW measures, the alignment scores, and the hybrid model. The terms “normalized”, “unnormalized”, and “both” indicate which subset of the 8 alignment features were used to train the SVM model. For ease of comparison, we include in both sections the scores for the IAA, the “Average grade” baseline, and two of the top performing BOW metrics – both with question demoting. 6 Discussion and Conclusions There are three things that we can learn from these experiments. First, we can see from the results that several systems appear better when evaluating on a correlation measure like Pearson’s ρ, while others appear better when analyzing error rate. The SVMRank system seemed to outperform the SVR system when measuring correlation, however the SVR system clearly had a minimal RMSE. This is likely due to the different objective function in the corresponding optimization formulations: while the ranking model attempts to ensure a correct ordering between the grades, the regression model seeks to minimize an error objective that is closer to the RMSE. It is difficult to claim that either system is superior. Likewise, perhaps the most unexpected result of this work is the differing analyses of the simple tf*idf measure – originally included only as a baseline. Evaluating with a correlative measure yields predictably poor results, but evaluating the error rate indicates that it is comparable to (or better than) the more intelligent BOW metrics. One explanation for this result is that the skewed nature of this ”natural” dataset favors systems that tend towards scores in the 4 to 4.5 range. In fact, 46% of the scores output by the tf*idf measure (after IR) were within the 4 to 4.5 range and only 6% were below 3.5. Testing on a more balanced dataset, this tendency to fit to the average would be less advantageous. Second, the supervised learning techniques are clearly able to leverage multiple BOW measures to yield improvements over individual BOW metrics. The correlation for the BOW-only SVM model for SVMRank improved upon the best BOW feature from .462 to .480. Likewise, using the BOW-only SVM model for SVR reduces the RMSE by .022 overall compared to the best BOW feature. Third, the rudimentary alignment features we have introduced here are not sufficient to act as a standalone grading system. 
However, even with a very primitive attempt at alignment detection, we show that it is possible to improve upon grade learning systems that only consider BOW features. The correlations associated with the hybrid systems (esp. those using normalized alignment data) frequently show an improvement over the BOW-only SVM systems. This is true for both SVM systems when considering either evaluation metric. Future work will concentrate on improving the quality of the answer alignments by training a model to directly output graph-to-graph alignments. This learning approach will allow the use of more complex alignment features, for example features that are defined on pairs of aligned edges or on larger subtrees in the two input graphs. Furthermore, given an alignment, we can define several phrase-level grammatical features such as negation, modality, tense, person, number, or gender, which make better use of the alignment itself. Acknowledgments This work was partially supported by a National Science Foundation CAREER award #0747340. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. 760 References N. Chambers, D. Cer, T. Grenager, D. Hall, C. Kiddon, B. MacCartney, M.C. de Marneffe, D. Ramage, E. Yeh, and C.D. Manning. 2007. Learning alignments and leveraging natural logic. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pages 165–170. Association for Computational Linguistics. M. Collins. 2002. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP-02), Philadelphia, PA, July. I. Dagan, O. Glickman, and B. Magnini. 2005. The PASCAL recognising textual entailment challenge. In Proceedings of the PASCAL Workshop. M.C. de Marneffe, B. MacCartney, and C.D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In LREC 2006. M.C. de Marneffe, T. Grenager, B. MacCartney, D. Cer, D. Ramage, C. Kiddon, and C.D. Manning. 2007. Aligning semantic graphs for textual inference and machine reading. In Proceedings of the AAAI Spring Symposium. Citeseer. Y. Freund and R. Schapire. 1999. Large margin classification using the perceptron algorithm. Machine Learning, 37:277–296. E. Gabrilovich and S. Markovitch. 2007. Computing Semantic Relatedness using Wikipedia-based Explicit Semantic Analysis. Proceedings of the 20th International Joint Conference on Artificial Intelligence, pages 6–12. A.D. Haghighi, A.Y. Ng, and C.D. Manning. 2005. Robust textual inference via graph matching. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 387–394. Association for Computational Linguistics. D. Higgins, J. Burstein, D. Marcu, and C. Gentile. 2004. Evaluating multiple aspects of coherence in student essays. In Proceedings of the annual meeting of the North American Chapter of the Association for Computational Linguistics, Boston, MA. G. Hirst and D. St-Onge, 1998. Lexical chains as representations of contexts for the detection and correction of malaproprisms. The MIT Press. J. Jiang and D. Conrath. 1997. Semantic similarity based on corpus statistics and lexical taxonomy. In Proceedings of the International Conference on Research in Computational Linguistics, Taiwan. T.K. Landauer and S.T. Dumais. 1997. 
A solution to plato’s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104. C. Leacock and M. Chodorow. 1998. Combining local context and WordNet sense similarity for word sense identification. In WordNet, An Electronic Lexical Database. The MIT Press. C. Leacock and M. Chodorow. 2003. C-rater: Automated Scoring of Short-Answer Questions. Computers and the Humanities, 37(4):389–405. M.E. Lesk. 1986. Automatic sense disambiguation using machine readable dictionaries: How to tell a pine cone from an ice cream cone. In Proceedings of the SIGDOC Conference 1986, Toronto, June. D. Lin. 1998. An information-theoretic definition of similarity. In Proceedings of the 15th International Conference on Machine Learning, Madison, WI. B. MacCartney, T. Grenager, M.C. de Marneffe, D. Cer, and C.D. Manning. 2006. Learning to recognize features of valid textual entailments. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, page 48. Association for Computational Linguistics. K.I. Malatesta, P. Wiemer-Hastings, and J. Robertson. 2002. Beyond the Short Answer Question with Research Methods Tutor. In Proceedings of the Intelligent Tutoring Systems Conference. R. Mihalcea, C. Corley, and C. Strapparava. 2006. Corpus-based and knowledge-based approaches to text semantic similarity. In Proceedings of the American Association for Artificial Intelligence (AAAI 2006), Boston. T. Mitchell, T. Russell, P. Broomhead, and N. Aldridge. 2002. Towards robust computerised marking of freetext responses. Proceedings of the 6th International Computer Assisted Assessment (CAA) Conference. M. Mohler and R. Mihalcea. 2009. Text-to-text semantic similarity for automatic short answer grading. In Proceedings of the European Association for Computational Linguistics (EACL 2009), Athens, Greece. R.D. Nielsen, W. Ward, and J.H. Martin. 2009. Recognizing entailment in intelligent tutoring systems. Natural Language Engineering, 15(04):479–501. T. Pedersen, S. Patwardhan, and J. Michelizzi. 2004. WordNet:: Similarity-Measuring the Relatedness of Concepts. Proceedings of the National Conference on Artificial Intelligence, pages 1024–1025. S.G. Pulman and J.Z. Sukkarieh. 2005. Automatic Short Answer Marking. ACL WS Bldg Ed Apps using NLP. R. Raina, A. Haghighi, C. Cox, J. Finkel, J. Michels, K. Toutanova, B. MacCartney, M.C. de Marneffe, C.D. Manning, and A.Y. Ng. 2005. Robust textual inference using diverse knowledge sources. Recognizing Textual Entailment, page 57. 761 P. Resnik. 1995. Using information content to evaluate semantic similarity. In Proceedings of the 14th International Joint Conference on Artificial Intelligence, Montreal, Canada. V. Rus, A. Graesser, and K. Desai. 2007. Lexicosyntactic subsumption for textual entailment. Recent Advances in Natural Language Processing IV: Selected Papers from RANLP 2005, page 187. J.Z. Sukkarieh, S.G. Pulman, and N. Raikes. 2004. AutoMarking 2: An Update on the UCLES-Oxford University research into using Computational Linguistics to Score Short, Free Text Responses. International Association of Educational Assessment, Philadephia. P. Wiemer-Hastings, K. Wiemer-Hastings, and A. Graesser. 1999. Improving an intelligent tutor’s comprehension of students with Latent Semantic Analysis. Artificial Intelligence in Education, pages 535–542. P. Wiemer-Hastings, E. Arnott, and D. Allbritton. 2005. 
Initial results and mixed directions for research methods tutor. In AIED2005 - Supplementary Proceedings of the 12th International Conference on Artificial Intelligence in Education, Amsterdam. Z. Wu and M. Palmer. 1994. Verb semantics and lexical selection. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, Las Cruces, New Mexico. B. Zadrozny and C. Elkan. 2002. Transforming classifier scores into accurate multiclass probability estimates. Edmonton, Alberta. 762
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 763–772, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Age Prediction in Blogs: A Study of Style, Content, and Online Behavior in Pre- and Post-Social Media Generations Sara Rosenthal Department of Computer Science Columbia University New York, NY 10027, USA [email protected] Kathleen McKeown Department of Computer Science Columbia University New York, NY 10027, USA [email protected] Abstract We investigate whether wording, stylistic choices, and online behavior can be used to predict the age category of blog authors. Our hypothesis is that significant changes in writing style distinguish pre-social media bloggers from post-social media bloggers. Through experimentation with a range of years, we found that the birth dates of students in college at the time when social media such as AIM, SMS text messaging, MySpace and Facebook first became popular, enable accurate age prediction. We also show that internet writing characteristics are important features for age prediction, but that lexical content is also needed to produce significantly more accurate results. Our best results allow for 81.57% accuracy. 1 Introduction The evolution of the internet has changed the way that people communicate. The introduction of instant messaging, forums, social networking and blogs has made it possible for people of every age to become authors. The users of these social media platforms have created their own form of unstructured writing that is best characterized as informal. Even how people communicate has dramatically changed, with multitasking increasing and responses generated immediately. We should be able to exploit those differences to automatically determine from blog posts whether an author is part of a pre- or postsocial media generation. This problem is called age prediction and raises two main questions: • Is there a point in time that proves to be a significantly better dividing line between pre and post-social media generations? • What features of communication most directly reveal the generation in which a blogger was born? We hypothesize that the dividing line(s) occur when people in generation Y1, or the millennial generation, (born anywhere from the mid1970s to the early 2000s) were typical collegeaged students (18-22). We focus on this generation due to the rise of popular social media technologies such as messaging and online social networks sites that occurred during that time. Therefore, we experimented with binary classification into age groups using all birth dates from 1975 through 1988, thus including students from generation Y who were in college during the emergence of social media technologies. We find five years where binary classification is significantly more accurate than other years: 1977, 1979, and 1982-1984. The appearance of social media technologies such as AOL Instant Messenger (AIM), weblogs, SMS text messaging, Facebook and MySpace occurred when people with these birth dates were in college. We explore two of these years in more detail, 1979 and 1984, and examine a wide variety of 1http://en.wikipedia.org/wiki/Generation Y 763 features that differ between the pre-social media and post-social media bloggers. 
We examine lexical-content features such as collocations and part-of-speech collocations, lexical-stylistic features such as internet slang and capitalization, and features representing online behavior such as time of post and number of friends. We find that both stylistic and content features have a significant impact on age prediction and show that, for unseen blogs, we are able to classify authors as born before or after 1979 with 80% accuracy and born before or after 1984 with 82% accuracy. In the remainder of this paper, we first discuss work to date on age prediction for blogs and then present the features that we extracted, which is a larger set than previously explored. We then turn separately to three experiments. In the first, we implement a prior approach to show that we can produce a similar outcome. In the second, we show how the accuracy of age prediction changes over time and pinpoint when major changes occur. In the last experiment, we describe our age prediction experiments in more detail for the most significant years. 2 Related Work In previous work, Mackinnon (2006) , used LiveJournal data to identify a blogger’s age by examining the mean age of his peer group using his social network and not just his immediate friends. They were able to predict the correct age within +/-5 years at 98% accuracy. This approach, however, is very different from ours as it requires access to the age of each of the blogger’s friends. Our approach uses only a body of text written by a person along with his blogging behavior to determine which age group he is more closely identified with. Initial research on predicting age without using the ages of friends focuses on identifying important candidate features, including blogging characteristics (e.g., time of post), text features (e.g., length of post), and profile information (e.g., interests) (Burger and Henderson, 2006). They aimed at binary prediction of age, classifying LiveJournal bloggers as either over or under 18, but were unable to automatically predict age with more accuracy than a baseline model that always chose the majority class. In our study on determining the ideal age split we did not find 18 (bloggers born in 1986 in their dataset) to be significant. Prior work by Schler et al. (2006) has examined metadata such as gender and age in blogger.com bloggers. In contrast to our work, they examine bloggers based on their age at the time of the experiment, whether in the 10’s, 20’s or 30’s age bracket. They identify interesting changes in content and style features across categories, in which they include blogging words (e.g., “LOL”), all defined by the Linguistic Inquiry and Word Count (LIWC) (Pennebaker et al., 2007). They did not use characteristics of online behavior (e.g., friends). They can distinguish between bloggers in the 10’s and in the 30’s with relatively high accuracy (above 96%) but many 30s are misclassified as 20s, which results in a overall accuracy of 76.2%. We re-implement Schler et al.’s work in section 5.1 with similar findings. Their work shows that ease of classification is dependent in part on what division is made between age groups and in turn motivates our decision to study whether the creation of social media technologies can be used to find the dividing line(s). Neither Schler et al., nor we, attempt to determine how a person’s writing changes over his lifespan (Pennebaker and Stone, 2003; Robins et al., 2002). Goswami et al. 
(2009) add to Schler et al.’s approach using the same data and have a 4% increase in accuracy. However, the paper is lacking details and it is entirely unclear how they were able to do this with fewer features than Schler et al. In other work, Tam and Martell (2009) attempt to detect age in the NPS chat corpus between teens and other ages. They use an SVM classifier with only n-grams as features. They achieve > 90% accuracy when classifying teens vs 30s, 40s, 50s, and all adults and achieve at best 76% when using 3 character gram features in classifying teens vs 20s. This work shows that n-grams are useful features for detecting age and it is difficult to detect differences between consecutive groups such as teens and 20s, and this 764 Figure 1: Number of bloggers in 2010 by year of birth from 1950-1996. A minimal amount of data occurred in years not shown. provides evidence for the need to find a good classification split. Other researchers have investigated weblogs for differences in writing style depending on gender identification (Herring and Paolillo, 2006; Yan and Yan, 2006; Nowson and Oberlander, 2006). Herring et al (2006) found that the typical gender related features were based on genre and independent of author gender. Yan et al (2006) used text categorization and stylistic web features, such as emoticons, to identify gender and achieved 60% F-measure. Nowson et al (2006) employed dictionary and n-gram based content analysis and achieved 91.5% accuracy using an SVM classifier. We also use a supervised machine learning approach, but classification by gender is naturally a binary classification task, while our work requires determining a natural dividing point. 3 Data Collection Our corpus consists of blogs downloaded from the virtual community LiveJournal. We chose to use LiveJournal blogs for our corpus because the website provides an easy-to-use format in XML for downloading and crawling their site. In addition, LiveJournal gives bloggers the opportunity to post their age on their profile. We take advantage of this feature by downloading blogs where the user chooses to publicly provide this metadata. We downloaded approximately 24,500 LiveJournal blogs containing age. We represent age as the year a person was born and not his age at the time of the experiment. Since technology has different effects in different countries, we only analyze the blogs of people who have listed US as their country. It is possible that text written in a language other than English is included in our corpus. However, in a manual check of a small portion of text from 500 blogs, we only found English words. Each blog was written by a unique individual and includes a user profile and up to 25 recent posts written between 2000-2010 with the most recent post being written in 2009-2010. The birth dates of the bloggers range in years from 1940 to 2000 and thus, their age ranges from 10 to 70 in 2010. Figure 1 shows the number of bloggers per age in our group with birth dates from 1950 to 1996. The majority of bloggers on LiveJournal were born between 1978-1989. 4 Methods We pre-processed the data to add Part-ofSpeech tags (POS) and dependencies (de Marneffe et al., 2006) between words using the Stanford Parser (Klein and Manning, 2003a; Klein and Manning, 2003b). The POS and syntactic dependencies were only found for approximately the first 90 words in each sentence. 
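The preprocessing step can be illustrated with the short sketch below. The paper used the Stanford Parser; the sketch substitutes spaCy purely for illustration (an assumption, not the original toolchain), assumes its small English model is installed, and, as described above, keeps analyses only for roughly the first 90 tokens of each sentence.

```python
# Illustrative preprocessing step: add part-of-speech tags and syntactic
# dependencies to each blog post. spaCy is used here as a stand-in for the
# Stanford Parser, and only the first 90 tokens of each sentence are kept.
import spacy

nlp = spacy.load("en_core_web_sm")   # assumes the small English model is installed

def preprocess(post_text, max_tokens_per_sentence=90):
    doc = nlp(post_text)
    analyzed = []
    for sent in doc.sents:
        tokens = list(sent)[:max_tokens_per_sentence]
        analyzed.append([(t.text, t.tag_, t.dep_, t.head.text) for t in tokens])
    return analyzed

sample = "OMG I can't wait for the weekend!!! We are driving to the lake house."
for sentence in preprocess(sample):
    print(sentence)
```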
Our classification method investigates 17 different features that fall into three categories: online behavior, lexical-stylistic and lexical-content. All of the features we used are explained in Table 1 along with their trend as age decreases where applicable. Any feature that increased, decreased, or fluctuated should have some positive impact on the accuracy of predicting age. 4.1 Online Behavior and Interests Online behavior features are blog specific, such as number of comments and friends as described in Table 1.1. The first feature, interests, is our only feature that is specific to LiveJournal. Interests appear in the LiveJournal user profile, but are not found on all blog sites. All other online behavior features are typically available in any blog. 765 Feature Explanation Example Trend as Age Decreases 1 Interests Top3 interests provided on the profile page2 disney N/A 2 # of Friends Number of friends the blogger has 45 fluctuates # of Posts Number of downloadable posts (0-25) 23 decrease # of Lifetime Posts Number of posts written in total 821 decrease Time Mode hour (00-23) and day the blogger posts 11/Monday no change Comments Average number of comments per post 2.64 increase 3 Emoticons number of emoticons1 :) increase Acronyms number of internet acronyms1 lol increase Slang number of words that are not found in the dictionary1 wazzup increase Punctuation number of stand-alone punctuation1 ... increase Capitalization number of words (with length > 1) that are all CAPS1 YOU increase Sentence Length average sentence length 40 decrease Links/Images number of url and image links1 www.site.com fluctuates 4 Collocations Top3 Collocations in the age group. to [] the N/A Syntax Collocations Top3 Syntax Collocations in the age group. best friends N/A POS Collocations Top3 Part-of-Speech Collocations in the age group. this [] [] VB N/A Words Top3 words in the age group his N/A Table 1: List of all features used during classification divided into three categories (1,2) online behavior and interests, (3) lexical - content, and (4) lexical - stylistic 1 normalized per sentence per entry, 2 available in LiveJournal only, 3 pruned from top 200 features to include those that do not occur within +/- 10 position in any other age group We extracted the top 200 interests based on occurrence in the profile page from 1500 random blogs in three age groups. These age groups are used solely to illustrate the differences that occur at different ages and are not used in our classification experiments. We then pruned the list of interests by excluding any interest that occurred within a +/-10 window (based on its position in the list) in multiple age groups. We show the top interests in each age group in Table 2. For example, “disney” is the most popular unique interest in the 18-22 age group with only 39 other non-unique interests in that age group occurring more frequently. “Fanfiction” is a popular interest in all age groups, but it is significantly more popular in the 18-22 age group than in other age groups. Amongst the other online behavior features, the number of friends tends to fluctuate but seems to be higher for older bloggers. The number of lifetime posts (Figure 2(d)), and posts decreases as bloggers get younger which is as one would expect unless younger people were orders of magnitude more prolific than older people. 
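The pruning rule applied to the interest lists above can be sketched as follows; the ranked lists in the example are tiny placeholders rather than the real top-200 lists, and the window is shrunk so the toy example produces visible pruning.

```python
# Sketch of the interest-pruning rule: an interest from one age group's
# ranked list is kept only if it does not occur within +/- window positions
# of its rank in any other group's list. The lists below are placeholders.
def prune(group_lists, window=10):
    pruned = {}
    for group, ranked in group_lists.items():
        keep = []
        for pos, interest in enumerate(ranked):
            clash = False
            for other, other_ranked in group_lists.items():
                if other == group or interest not in other_ranked:
                    continue
                if abs(other_ranked.index(interest) - pos) <= window:
                    clash = True
                    break
            if not clash:
                keep.append(interest)
        pruned[group] = keep
    return pruned

groups = {
    "18-22": ["music", "disney", "fanfiction", "yaoi"],
    "28-32": ["music", "hiking", "gaming", "fanfiction"],
    "38-42": ["music", "polyamory", "sca", "fanfiction"],
}
print(prune(groups, window=1))
```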
The mode time (Figure 2(b)), refers to the most 18-22 28-32 38-42 disney 39 tori amos 49 polyamory 40 yaoi 40 hiking 55 sca 67 johnny depp 42 women 61 babylon 5 84 rent 44 gaming 62 leather 94 house 45 comic books 67 farscape 103 fanfiction 11 fanfiction 58 fanfiction 138 drawing 10 drawing 25 drawing 65 sci-fi 199 sci-fi 37 sci-fi 21 Table 2: Top interests for three different age groups. The top half refers to the top 5 interests that are unique to each age group. The value refers to the position of the interest in its list common hour of posting from 00-24 based on GMT time. We didn’t compute time based on the time zone because city/state is often not included. We found time to not be a useful feature in this manner and it is difficult to come to any conclusions from its change as year of birth decreases. 4.2 Lexical - Stylistic The Lexical-Stylistic features in Table 1.2, such as slang and sentence length, are computed us766 Figure 2: Examples of change to features over time (a) Average number of emoticons in a sentence increases as age decreases (b) The most common time fluctuates until 1982, where it is consistent (c) The number of links/images in a sentence fluctuates (d) The average number of lifetime posts per year decreases as age decreases ing the text from all of the posts written by the blogger. Other than sentence length, they were normalized by sentence and post to keep the numbers consistent between bloggers regardless of whether the user wrote one or many posts in his/her blog. The number of emoticons (Figure 2(a)), acronyms, and capital words increased as bloggers got younger. Slang and punctuation, which excludes the emoticons and acronyms counted in the other features, increased as well, but not as significantly. The length of sentences decreased as bloggers got younger and the number of links/images varied across all years as shown in Figure 2(c). 4.3 Lexical - Content The last category of features described in Table 1.3 consists of collocations and words, which are content based lexical terms. The top words are produced using a typical “bag-of-words” approach. The top collocations are computed using a system called Xtract (Smadja, 1993). We use Xtract to obtain important lexical collocations, syntactic collocations, and POS collocations as features from our text. Syntactic collocations refer to significant word pairs that have specific syntactic dependencies such as subject/verb and verb/object. Due to the length of time it takes to run this program, we ran Xtract on 1500 random blogs from each age group and examined the first 1000 words per blog. We looked at 1.5 million words in total and found approximately 2500-2700 words that were repeated more than 50 times. We extracted the top 200 words and collocations sorted by post frequency (pf), which is the number of posts the term occurred in. Then, similarly to interests, we pruned each list to include the features that did not occur within +/-10 window (based on its position in the list) within each age group. Prior to settling on these metrics, we also experimented with other metrics such as the number of times the collocation 767 18-22 28-32 38-42 ldquot (’) 101 great 166 may 164 t 152 find 167 old 183 school 172 many 177 house 191 x 173 years 179 world 192 anything 175 week 181 please 198 maybe 179 post 190 because 68 because 80 because 93 him 59 him 85 him 73 Table 3: Top words for three age groups. The top half refers to the top 5 words that are unique to each age group. 
The value refers to the position of the interest in its list occurred in total, defined as collocation or term frequency (tf), the number of blogs the collocation occurred in, defined as blog frequency (bf), and variations of TF*IDF (Salton and Buckley, 1988) where we tried using inverse blog frequency and inverse post frequency as the value for IDF. In addition, we also experimented with looking at a different number of important words and collocations ranging from the top 100-300 terms and experimented without pruning. None of these variations improved accuracy in our experiments, however, and thus, were dropped from further experimentation. Table 3 shows the top words for each age group; older people tend to use words such as “house” and “old” frequently and younger people talk about “school”. In our analysis of the top collocations, we found that younger people tend to use first person singular (I,me) in subject position while older people tend to use first person plural (we) in subject position, both with a variety of verbs. 5 Experiments and Results We ran three separate experiments to determine how well we can predict age: 1. classifying into three distinct age groups (Schler et al. (2006) experiment), 2. binary classification with the split at each birth year from 1975-1988 and 3. Detailed classification on two significant splits from the second experiment. We ran all of our experiments in Weka (Hall et al., 2009) using logistic regression over 10 runs of 10-fold cross-validation. All values shown are blogger.com livejournal.com download year 2004 2010 # of Blogs 19320 11521 # of Posts1 1.4 million 256,000 # of words1 295 million 50 million age 13·17 23·27 33·37 18·22 28·32 38·42 size 8240 8086 2994 3518 5549 2454 majority baseline 43.8% (13-17) 48.2% (22-32) Table 4: Statistics for Schler et al.’s data (blogger.com) vs our data (livejournal.com) 1 is approximate amount. the averages of the accuracies from the 10 crossvalidation runs and all results were compared for statistical significance using the t-test where applicable. We use logistic regression as our classifier because it has been shown that logistic regression typically has lower asymptotic error than naive Bayes for multiple classification tasks as well as for text classification (Ng and Jordan, 2002). We experimented with an SVM classifier and found logistic regression to do slightly better. 5.1 Age Groups The first experiment implements a variation of the experiment done by Schler et al. (2006). The differences between the two datasets are shown in Tables 4. The experiment looks at three age groups containing a 5-year gap between each group. Intermediate years were not included to provide clear differentiation between the groups because many of the blogs have been active for several years and this will make it less common for a blogger to have posts that fall into two age groups (Schler et al., 2006). We did not use the same age groups as Schler et al. because very few blogs on LiveJournal, in 2010, are in the 13-17 age group. Many early demographic studies (Perseus Development, 2004; Herring et al., 2004) show teens as the dominant age group in all blogs. However, more recent studies (Nowson and Oberlander, 2006; Lenhart et al., 2010) show that less teens blog. 
Furthermore, an early study on the LiveJournal 768 Figure 3: Style vs Content: Accuracy from 19751988 for Style (Online-Behavior+Lexical-Stylistic) vs Content (BOW) demographic (Kumar et al., 2004) reported that 28.6% of blogs are written by bloggers between the ages 13-18 whereas based on the current demographic statistics, in 20102, only 6.96% of blogs are written by that age group and the number of bloggers in the 31-36 age group increased from 3.9% to 12.08%. We chose the later age groups because this study is based on blogs updated in 2009-10 which is 5-6 years later and thus, the 13-17 age group is now 18-22 and so on. We use style-based (lexical-stylistic) and content-based features (BOW, interests) to mimic Schler et al.’s experiment as closely as possible and also experimented with adding online-behavior features. Our experiment with style-based and content-based features had an accuracy of 57%. However, when we added online-behavior, we increased our accuracy to 67%. A more detailed look at the better results show that our accuracies are consistently 7% lower than the original work but we have similar findings; 18-22s are distinguishable from 38-42s with accuracy of 94.5%, and 18-22s are distinguishable from 28-32s with accuracy of 80.5%. However, many 38-42s are misclassified as 2832s with an accuracy of 72.1%, yielding overall accuracy of 67%. Due to our findings, we believe that adding online-behavior features to Schler et al.’s dataset would improve their results as well. 2http://www.livejournal.com/stats.bml 5.2 Social Media and Generation Y In the first experiment we used the current age of a blogger based on when he wrote his last post. However, the age of a person changes; someone who was in one age group now will be in a different age group in 5 years. Furthermore, a blogger’s posts can fall into two categories depending on his age at the time. Therefore, our second experiment looks at year of birth instead of age, as that never changes. In contrast to Schler et al.’s experiment, our division does not introduce a gap between age groups, we do binary classification, and we use significantly less data. We approach age prediction as attempting to identify a shift in writing style over a 14 year time span from birth years 1975-1988: For each year X = 1975-1988: • get 1500 blogs (∼33,000 posts) balanced across years BEFORE X • get 1500 blogs (∼33,000 posts) balanced across years IN/AFTER X • Perform binary classification between blogs BEFORE X and IN/AFTER X The experiment focuses on the range of birth years of bloggers from 1975-1888 to identify at what point in time, if any, shift(s) in writing style occurred amongst college-aged students in generation Y. We were motivated to examine these years due to the emergence of social media technologies during that time. Furthermore, research by Pew Internet (Zickuhr, 2010) has found that this generation (defined as 19771992 in their research) uses social networking, blogs, and instant messaging more than their elders. The experiment is balanced to ensure that each birth year is evenly represented. We balance the data by choosing a blogger consecutively from each birth year in the category, repeating these sweeps through the category until we have obtained 1500 blogs. We chose to use 1500 blogs from each group because of processing power, time constraints, and the amount of blogs needed to reasonably sample the age group at each split. 
Due to the extensive running time, we only examined variations of a combination of 769 Figure 4: Style and Content: Accuracy from 19751988 using BOW, Online Behavior, and LexicalStylistic features online-behavior, lexical-stylistic, and BOW features. We found accuracy to increase as year of birth increases in various feature experiments which is consistent with the trends we found while examining the distribution of features such as emoticons and lifetime posts in Figure 2. We experimented with style and content features and found that both help improve accuracy. Figure 3 shows that content helps more than style, but style helps more as age decreases. However, as shown in Figure 4, style and content combined provided the best results. We found 5 years to have significant improvement over all prior years for p ≤.0005: 1977, 1979, and 1982-1984. Generation Y is considered the social media generation, so we decided to examine how the creation and/or popularity of social media technologies compared to the years that had a change in writing style. We looked at many popular social media technologies such as weblogs, messaging, and social networking sites. Figure 5 compares the significant years 1977,1979, and 1982-1984 against when each technology was created or became popular amongst college aged students. We find that all the technologies had an effect on one or more of those years. AIM and weblogs coincide with the earlier shifts at 1977 and 1979, SMS messaging coincide with both the earlier and later shifts at 1979 and 1982, and the social networking sites, MySpace and Facebook coincide with the later shifts of 1982Figure 5: The impact of social media technologies: The arrows correspond to the years that generation Yers were college aged students. The highlighted years represent the significant years. 1Year it became popular (Urmann, 2009) 1984. On the other hand, web forums and Twitter each coincide with only one outlying year which suggests that either they had less of an impact on writing style or, in the case of Twitter, the change has not yet been transferred to other writing forms. 5.3 A Closer Look: 1979 and 1984 Our final experiment provides a more detailed explanation of the results using various feature combinations when splitting pre- and post- social media bloggers by year of birth at two of the significant years found in the previous section; 1979 and 1984. The results for all of the experiments described are shown in Table 5. We experimented against two baselines, online behavior and interests. We chose these two features as baselines because they are both easy to generate and not lexical in nature. We found that we were able to exceed the baselines significantly using a simple bag-of-words (BOW) approach. This means the BOW does a better job of picking topics than interests. We found that including all 17 features did not do well, but we were able to get good results using a subset of the lexical features. We found the best results to have an accuracy of 79.96% and 81.57% for 1979 and 1984 respectively using BOW, interests, online behavior, and all lexical-stylistic features. In addition, we show accuracy without interests since they are not always available. 
770 Experiment 1979 1984 Online-Behavior 59.66 61.61 Interests 70.22 74.61 Lexical-Stylistic 65.382 67.282 Slang+Emoticons+Acronyms 60.572 62.102 Online-Behavior + LexicalStylistic 67.162 71.312 Collocations + Syntax Collocations 53.471 73.452 POS-Collocations + POSSyntax Collocations 55.541 74.002 BOW 75.26 77.76 BOW+Online-Behavior 76.39 79.22 BOW + Online-Behavior + Lexical-Stylistic 77.45 80.88 BOW + Online-Behavior + Lexical-Stylistic + Syntax Collocations 74.8 80.36 BOW + Online-Behavior + Lexical-Stylistic + POSCollocations + POS Syntax Collocations 74.73 80.54 Online-Behavior + Interests + Lexical-Stylistic 74.39 77.20 BOW + Online-Behavior + Interests + Lexical-Stylistic 79.96 81.57 All Features 71.26 74.072 Table 5: Feature Accuracy. The top portion refers to the baselines. The best accuracies are shown in bold. Unless otherwise marked, all accuracies are statistically significant at p<=.0005 for both baselines. 1 not statistically significant over Online-Behavior and Interests. 2 not statistically significant over Interests. BOW, online-behavior, and lexical-stylistic features combined did best achieving accuracy of 77.45% and 80.88% in 1979 and 1984 respectively. This indicates that our classification method could work well on blogs from any website. It is interesting to note that collocations and POS-collocations were useful, but only when we use 1984 as the split which implies that bloggers born in 1984 and later are more homogeneous. 6 Conclusion and Future Work We have shown that it is possible to predict the age group of a person based on style, content, and online behavior features with good accuracy; these are all features that are available in any blog. While features representing writing practices that emerged with social media (e.g., capitalized words, abbreviations, slang) do not significantly impact age prediction on their own, these features have a clear change of value across time, with post-social media bloggers using them more often. We found that the birth years that had a significant change in writing style corresponded to the birth dates of college-aged students at the time of the creation/popularity of social media technologies, AIM, SMS text messaging, weblogs, Facebook and MySpace. In the future we plan on using age and other metadata to improve results in larger tasks such as identifying opinion, persuasion and power by targeting our approach in those tasks to the identified age of the person. Another approach that we will experiment with is the use of ranking, regression, and/or clustering to create meaningful age groups. 7 Acknowledgements This research was funded by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the U.S. Army Research Lab. All statements of fact, opinion or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies of IARPA, the ODNI or the U.S. Government. References Shlomo Argamon, Moshe Koppel, Jonathan Fine, and Anat Rachel Shimoni. 2003. Gender, genre, and writing style in formal written texts. TEXT, 23:321–346. John D. Burger and John C. Henderson. 2006. An exploration of observable features related to blogger age. In AAAI Spring Symposia. Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In In LREC 2006. Sumit Goswami, Sudeshna Sarkar, and Mayur Rustagi. 2009. 
Stylometric analysis of bloggers’ 771 age and gender. In International AAAI Conference on Weblogs and Social Media. Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The weka data mining software: An update. Susan C. Herring and John C. Paolillo. 2006. Gender and genre variation in weblogs. Journal of Sociolinguistics, 10(4):439–459. Susan C. Herring, L.A. Scheidt, S. Bonus, and E. Wright. 2004. Bridging the gap: A genre analysis of weblogs. In Proceedings of the 37th Hawaii International Conference on System Sciences. Dan Klein and Christopher D. Manning. 2003a. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 423–430. Dan Klein and Christopher D. Manning. 2003b. Fast exact inference with a factored model for natural language parsing. In Advances in Neural Information Processing Systems, volume 15. MIT Press. Ravi Kumar, Jasmine Novak, Prabhakar Raghavan, and Andrew Tomkins. 2004. Structure and evolution of blogspace. Commun. ACM, 47:35–39, December. Amanda Lenhart, Kristen Purcell, Aaron Smith, and Kathryn Zickuhr. 2010. Social media and young adults. Ian Mackinnon. 2006. Age and geographic inferences of the livejournal social network. In In Statistical Network Analysis Workshop. Andrew Y Ng and Michael I Jordan. 2002. On discriminative vs. generative classifiers: A comparison of logistic regression and naive bayes. Neural Information Processing Systems, 2:841–848. Scott Nowson and Jon Oberlander. 2006. The identity of bloggers: Openness and gender in personal weblogs. James W Pennebaker and Lori D Stone. 2003. Words of wisdom: language use over the life span. J Pers Soc Psychol, 85(2):291–301. J.W. Pennebaker, R.E. Booth, and M.E. Francis. 2007. Linguistic inquiry and word count: Liwc2007 Ð operatorÕs manual. Technical report, LIWC, Austin, TX. Perseus Development. 2004. The blogging iceberg: Of 4.12 million hosted weblogs, most little seen and quickly abandoned. Technical report, Perseus Development. R.W. Robins, K. H. Trzesniewski, J.L. Tracy, S.D Gosling, and J Potter. 2002. Global self-esteem across the lifespan. Psychology and Aging, 17:423– 434. Gerard Salton and Christopher Buckley. 1988. Term-weighting approaches in automatic text retrieval. In Information Processing and Management, pages 513–523. J. Schler, M. Koppel, S. Argamon, and J. Pennebaker. 2006. Effects of age and gender on blogging. In AAAI Spring Symposium on Computational Approaches for Analyzing Weblogs. Frank Smadja. 1993. Retrieving collocations from text: Xtract. Computational Linguistics, 19:143– 177. Jenny Tam and Craig H. Martell. 2009. Age detection in chat. In Proceedings of the 2009 IEEE International Conference on Semantic Computing, ICSC ’09, pages 33–39, Washington, DC, USA. IEEE Computer Society. David H. Urmann. 2009. The history of text messaging. Xiang Yan and Ling Yan. 2006. Gender classification of weblog authors. In AAAI Spring Symposium Series on Computation Approaches to Analyzing Weblogs, pages 228–230. Kathryn Zickuhr. 2010. Generations 2010. 772
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 773–782, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Extracting Social Power Relationships from Natural Language Philip Bramsen Louisville, KY [email protected]* Ami Patel Massachusetts Institute of Technology Cambridge, MA [email protected]* Martha Escobar-Molano San Diego, CA [email protected]* Rafael Alonso SET Corporation Arlington, VA [email protected] Abstract Sociolinguists have long argued that social context influences language use in all manner of ways, resulting in lects 1. This paper explores a text classification problem we will call lect modeling, an example of what has been termed computational sociolinguistics. In particular, we use machine learning techniques to identify social power relationships between members of a social network, based purely on the content of their interpersonal communication. We rely on statistical methods, as opposed to language-specific engineering, to extract features which represent vocabulary and grammar usage indicative of social power lect. We then apply support vector machines to model the social power lects representing superior-subordinate communication in the Enron email corpus. Our results validate the treatment of lect modeling as a text classification problem – albeit a hard one – and constitute a case for future research in computational sociolinguistics. 1 Introduction Linguists in sociolinguistics, pragmatics and related fields have analyzed the influence of social context on language and have catalogued countless phenomena that are influenced by it, confirming many with qualitative and quantitative studies. In * This work was done while these authors were at SET Corporation, an SAIC Company. 1 Fields that deal with society and language have inconsistent terminology; “lect” is chosen here because “lect” has no other English definitions and the etymology of the word gives it the sense we consider most relevant. deed, social context and function influence language at every level – morphologically, lexically, syntactically, and semantically, through discourse structure, and through higher-level abstractions such as pragmatics. Considered together, the extent to which speakers modify their language for a social context amounts to an identifiable variation on language, which we call a lect. Lect is a backformation from words such as dialect (geographically defined language) and ethnolect (language defined by ethnic context). In this paper, we describe lect classifiers for social power relationships. We refer to these lects as: • UpSpeak: Communication directed to someone with greater social authority. • DownSpeak: Communication directed to someone with less social authority. • PeerSpeak: Communication to someone of equal social authority. We call the problem of modeling these lects Social Power Modeling (SPM). The experiments reported in this paper focused primarily on modeling UpSpeak and DownSpeak. Manually constructing tools that effectively model specific linguistic phenomena suggested by sociolinguistics would be a Herculean effort. Moreover, it would be necessary to repeat the effort in every language! Our approach first identifies statistically salient phrases of words and parts of speech – known as n-grams – in training texts generated in conditions where the social power 773 relationship is known. Then, we apply machine learning to train classifiers with groups of these ngrams as features. 
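A minimal sketch of this feature-extraction idea is given below: word and part-of-speech n-grams are collected from messages whose power direction is known, and each n-gram is ranked by how skewed its relative frequency is toward one class. The smoothed log-ratio used as the salience score and the NLTK tagger are illustrative stand-ins rather than the statistics or tools the system itself uses, and the four example messages are invented; the sketch also assumes the standard NLTK tokenizer and tagger models are installed.

```python
# Illustrative sketch of the feature-extraction idea: collect word and POS
# n-grams from messages with known social-power direction and rank them by
# how skewed their relative frequency is toward UpSpeak or DownSpeak.
# The smoothed log-ratio salience score is a stand-in for the system's
# actual statistic; POS tags come from NLTK purely for illustration.
import math
from collections import Counter
import nltk

def ngrams(seq, n):
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

def extract(messages):
    counts = Counter()
    for text in messages:
        tokens = nltk.word_tokenize(text.lower())
        tags = [tag for _, tag in nltk.pos_tag(tokens)]
        for n in (1, 2, 3):
            counts.update(("w",) + g for g in ngrams(tokens, n))
            counts.update(("p",) + g for g in ngrams(tags, n))
    return counts

up = extract(["Could you please review this when you get a chance?",
              "I was wondering if you might have time to meet."])
down = extract(["Send me the report by Friday.",
                "Make sure this is done before the meeting."])

total_up, total_down = sum(up.values()), sum(down.values())

def salience(gram):
    p_up = (up[gram] + 0.5) / total_up
    p_down = (down[gram] + 0.5) / total_down
    return abs(math.log(p_up / p_down))

candidates = set(up) | set(down)
for gram in sorted(candidates, key=salience, reverse=True)[:10]:
    print(gram, round(salience(gram), 2))
```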
The classifiers assign the UpSpeak and DownSpeak labels to unseen text. This methodology is a cost-effective approach to modeling social information and requires no language- or culture-specific feature engineering, although we believe sociolinguistics-inspired features hold promise. When applied to the corpus of emails sent and received by Enron employees (CALO Project 2009), this approach produced solid results, despite a limited number of training and test instances. This has many implications. Since manually determining the power structure of social networks is a time-consuming process, even for an expert, effective SPM could support data driven sociocultural research and greatly aid analysts doing national intelligence work. Social network analysis (SNA) presupposes a collection of individuals, whereas a social power lect classifier, once trained, would provide useful information about individual author-recipient links. On networks where SNA already has traction, SPM could provide complementary information based on the content of communications. If SPM were yoked with sentiment analysis, we might identify which opinions belong to respected members of online communities or lay the groundwork for understanding how respect is earned in social networks. More broadly, computational sociolinguistics is a nascent field with significant potential to aid in modeling and understanding human relationships. The results in this paper suggest that successes to date modeling authorship, sentiment, emotion, and personality extend to social power modeling, and our approach may well be applicable to other dimensions of social meaning. In the coming sections, we first establish the Related Work, primarily from Statistical NLP. We then cover our Approach, the Evaluation, and, finally, the Conclusions and Future Research. 2 Related Work The feasibility of Social Power Modeling is supported by sociolinguistic research identifying specific ways in which a person’s language reflects his relative power over others. Fairclough's classic work Language and Power explores how "sociolinguistic conventions . . . arise out of -- and give rise to – particular relations of power" (Fairclough, 1989). Brown and Levinson created a theory of politeness, articulating a set of strategies which people employ to demonstrate different levels of politeness (Brown & Levinson, 1987). Morand drew upon this theory in his analysis of emails sent within a corporate hierarchy; in it, he quantitatively showed that emails from subordinates to superiors are, in fact, perceived as more polite, and that this perceived politeness is correlated with specific linguistic tactics, including ones set out by Brown and Levinson (Morand, 2000). Similarly, Erikson et al identified measurable characteristics of the speech of witnesses in a courtroom setting which were directly associated with the witness’s level of social power (Erikson, 1978). Given, then, that there are distinct differences among what we term UpSpeak and DownSpeak, we treat Social Power Modeling as an instance of text classification (or categorization): we seek to assign a class (UpSpeak or DownSpeak) to a text sample. Closely related natural language processing problems are authorship attribution, sentiment analysis, emotion detection, and personality classification: all aim to extract higher-level information from language. Authorship attribution in computational linguistics is the task of identifying the author of a text. 
The earliest modern authorship attribution work was (Mosteller & Wallace, 1964), although forensic authorship analysis has been around much longer. Mosteller and Wallace used statistical language-modeling techniques to measure the similarity of disputed Federalist Papers to samples of known authorship. Since then, authorship identification has become a mature area productively exploring a broad spectrum of features (stylistic, lexical, syntactic, and semantic) and many generative and discriminative modeling approaches (Stamatatos, 2009). The generative models of authorship identification motivated our statistically extracted lexical and grammatical features, and future work should consider these language modeling (a.k.a. compression) approaches. Sentiment analysis, which strives to determine the attitude of an author from text, has recently garnered much attention (e.g. Pang, Lee, & Vaithyanathan, 2002; Kim & Hovy, 2004; Breck, Choi 774 & Cardie, 2007). For example, one problem is classifying user reviews as positive, negative or neutral. Typically, polarity lexicons (each term is labeled as positive, negative or neutral) help determine attitudes in text (Hiroya & Takamura, 2005, Ravichandran 2009, Choi & Cardie 2009). The polarity of an expression can be determined based on the polarity of its component lexical items (Choi & Cardie 2008). For example, the polarity of the expression is determined by the majority polarity of its lexical items or by rules applied to syntactic patterns of expressions on how to determine the polarity from its lexical components. McDonald et al studied models that classify sentiment on multiple levels of granularity: sentence and document-level (McDonald, 2007). Their work jointly classifies sentiment at both levels instead of using independent classifiers for each level or cascaded classifiers. Similar to our techniques, these studies determine the polarity of text based on its component lexical and grammatical sequences. Unlike their works, our text classification techniques take into account the frequency of occurrence of word n-grams and part-of-speech (POS) tag sequences, and other measures of statistical salience in training data. Text-based emotion prediction is another instance of text classification, where the goal is to detect the emotion appropriate to a text (Alm, Roth & Sproat, 2005) or provoked by an author, for example (Strapparava & Mihalcea, 2008). Alm, Roth, and Sproat explored a broad array of lexical and syntactic features, reminiscent of those of authorship attribution, as well as features related to story structure. A Winnow-based learning algorithm trained on these features convincingly predicted an appropriate emotion for individual sentences of narrative text. Strapparava and Mihalcea try to predict the emotion the author of a headline intends to provoke by leveraging words with known affective sense and by expanding those words’ synonyms. They used a Naïve Bayes classifier trained on short blogposts of known emotive sense. The knowledge engineering approaches were generally superior to the Naïve Bayes approach. Our approach is corpus-driven like the Naïve Bayes approach, but we interject statistically driven feature selection between the corpus and the machine learning classifiers. In personality classification, a person’s language is used to classify him on different personality dimensions, such as extraversion or neuroticism (Oberlander & Nowson, 2006; Mairesse & Walker; 2006). 
The goal is to recover the more permanent traits of a person, rather than fleeting characteristics such as sentiment or emotion. Oberlander and Nowson explore using a Naïve Bayes and an SVM classifier to perform binary classification of text on each personality dimension. For example, one classifier might determine if a person displays a high or low level of extraversion. Their attempt to classify each personality trait as either “high” or “low” echoes early sentiment analysis work that reduced sentiments to either positive or negative (Pang, Lee, & Vaithyanathan, 2002), and supports initially treating Social Power Modeling as a binary classification task. Personality classification seems to be the application of text classification which is the most relevant to Social Power Modeling. As Mairesse and Walker note, certain personality traits are indicative of leaders. Thus, the ability to model personality suggests an ability to model social power lects as well. Apart from text classification, work from the topic modeling community is also closely related to Social Power Modeling. Andrew McCallum extended Latent Dirichlet Allocation to model the author and recipient dependencies of per-message topic distributions with an Author-Recipient-Topic (ART) model (McCallum, Wang, & CorradaEmmanuel, 2007). This was the first significant work to model the content and relationships of communication in a social network. McCallum et al applied ART to the Enron email corpus to show that the resulting topics are strongly tied to role. They suggest that clustering these topic distributions would yield roles and argue that the personto-person similarity matrix yielded by this approach has advantages over those of canonical social network analysis. The same authors proposed several Role-Author-Recipient-Topic (RART) models to model authors, roles and words simultaneously. With a RART modeling roles-per-word, they produced per-author distributions of generated roles that appeared reasonable (e.g. they labeled Role 10 as ‘grant issues’ and Role 2 as ‘natural language researcher’). We have a similar emphasis on statistically modeling language and interpersonal communica775 tion. However, we model social power relationships, not roles or topics, and our approach produces discriminative classifiers, not generative models, which enables more concrete evaluation. Namata, Getoor, and Diehl effectively applied role modeling to the Enron email corpus, allowing them to infer the social hierarchy structure of Enron (Namata et al., 2006). They applied machine learning classifiers to map individuals to their roles in the hierarchy based on features related to email traffic patterns. They also attempt to identify cases of manager-subordinate relationships within the email domain by ranking emails using traffic-based and content-based features (Diehl et al., 2007). While their task is similar to ours, our goal is to classify any case in which one person has more social power than the other, not just identify instances of direct reporting. 3 Approach 3.1 Feature Set-Up Previous work in traditional text classification and its variants – such as sentiment analysis – has achieved successful results by using the bag-ofwords representation; that is, by treating text as a collection of words with no interdependencies, training a classifier on a large feature set of word unigrams which appear in the corpus. However, our hypothesis was that this approach would not be the best for SPM. 
Morand’s study, for instance, identified specific features that correlate with the direction of communication within a social hierarchy (Morand, 2000). Few of these tactics would be effectively encapsulated by word unigrams. Many would be better modeled by POS tag unigrams (with no word information) or by longer n-grams consisting of either words, POS tags, or a combination of the two. “Uses subjunctive” and “Uses past tense” are examples. Because considering such features would increase the size of the feature space, we suspected that including these features would also benefit from algorithmic means of selecting n-grams that are indicative of particular lects, and even from binning these relevant n-grams into sets to be used as features.

Therefore, we focused on an approach where each feature is associated with a set of one or more n-grams. Each n-gram is a sequence of words, POS tags or a combination of words and POS tags (“mixed” n-grams). Let S represent a set {n_1, …, n_k} of n-grams. The feature associated with S on text T would be:

f(S, T) = Σ_{i=1}^{k} freq(n_i, T)

where freq(n_i, T) is the relative frequency (defined later) of n_i in text T. Let n_i represent the sequence s_1 … s_m, where each s_j specifies either a word or a POS tag, and let T represent the text consisting of the sequence of tagged-word tokens t_1 … t_l. freq(n_i, T) is then defined as follows:

freq(n_i, T) = freq(s_1 … s_m, T) = |{ b : 1 ≤ b ≤ l − m + 1 and t_b … t_{b+m−1} matches s_1 … s_m }| / (l − m + 1)

where a token t_i matches an element s_j iff word(t_i) = s_j in case s_j is a word, and tag(t_i) = s_j in case s_j is a tag.

To illustrate, consider the following feature set, a bigram and a trigram (each term in the n-gram either has the form word or ^tag): {please ^VB, please ^‘comma’ ^VB}. (To distinguish a comma separating elements of a set with a comma as part of an n-gram, we use ‘comma’ to denote the punctuation mark ‘,’ as part of the n-gram.) The tag “VB” denotes a verb. Suppose T consists of the following tokenized and tagged text (sentence initial and final tokens are not shown):

please^RB bring^VB the^DET report^NN to^TO our^PRP$ next^JJ weekly^JJ meeting^NN .^.

The first n-gram of the set, please ^VB, would match please^RB bring^VB from the text. The frequency of this n-gram in T would then be 1/9, where 1 is the number of substrings in T that match please ^VB and 9 is the number of bigrams in T, excluding sentence initial and final markers. The other n-gram, the trigram please ^‘comma’ ^VB, does not have any match, so the final value of the feature is 1/9. Defining features in this manner allows us to both explore the bag-of-words representation as well as use groups of n-grams as features, which we believed would be a better fit for this problem.

3.2 N-Gram Selection

To identify n-grams which would be useful features, frequencies of n-grams in only the training set are considered. Different types of frequency measures were explored to capture different types of information about an n-gram’s usage. These are:
• Absolute frequency: The total number of times a particular n-gram occurs in the text of a given class (social power lect).
• Relative frequency: The total number of times a particular n-gram occurs in a given class, divided by the total number of n-grams in that class. Normalization by the size of the class makes relative frequency a better metric for comparing n-gram usage across classes.
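The feature definition above is concrete enough to prototype directly. The following is a minimal sketch in Python (ours, not the authors' implementation); the `^tag` convention, the polite-imperative n-gram set and the example sentence are taken from the illustration above, while the function names are our own and the comma tag is written here as `^,` rather than `^‘comma’` since no set notation is involved.

```python
# Minimal sketch (not the authors' code) of the n-gram set feature f(S, T)
# of Section 3.1: the sum of relative frequencies of the n-grams in S
# over a POS-tagged text T.

def matches(token, element):
    """A (word, tag) token matches an element that is either a plain word
    or a '^TAG' pattern."""
    word, tag = token
    if element.startswith("^"):
        return tag == element[1:]
    return word == element

def rel_freq(ngram, tagged_text):
    """Relative frequency of one n-gram: matching windows / total windows."""
    m, l = len(ngram), len(tagged_text)
    windows = l - m + 1
    if windows <= 0:
        return 0.0
    hits = sum(
        all(matches(tagged_text[b + p], ngram[p]) for p in range(m))
        for b in range(windows)
    )
    return hits / windows

def feature_value(ngram_set, tagged_text):
    """f(S, T): sum of relative frequencies of the n-grams in the set S."""
    return sum(rel_freq(ngram, tagged_text) for ngram in ngram_set)

# The polite-imperative feature set from the text: a bigram and a trigram.
polite_imperative = [["please", "^VB"], ["please", "^,", "^VB"]]

# The example email text, as (word, POS) pairs.
T = [("please", "RB"), ("bring", "VB"), ("the", "DET"), ("report", "NN"),
     ("to", "TO"), ("our", "PRP$"), ("next", "JJ"), ("weekly", "JJ"),
     ("meeting", "NN"), (".", ".")]

print(feature_value(polite_imperative, T))  # 0.111... = 1/9, the worked example above
```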
We then used the following frequency-based metrics to select n-grams: • We set a minimum threshold for the absolute frequency of the n-gram in a class. This helps weed out extremely infrequent words and spelling errors. • We require that the ratio of the relative frequency of the n-gram in one class to its relative frequency in the other class is also greater than a threshold. This is a simple means of selecting n-grams indicative of lect. In experiments based on the bag-of-words model, we only consider an absolute frequency threshold, whereas in later experiments, we also take into account the relative frequency ratio threshold. 3.3 N-gram Binning In experiments in which we bin n-grams, selected n-grams are assigned to the class in which their relative frequency is highest. For example, an ngram whose relative frequency in UpSpeak text is twice that in DownSpeak text would be assigned to the class UpSpeak. N-grams assigned to a class are then partitioned into sets of n-grams. Each of these sets of n-grams is associated with a feature. This partition is based on the n-gram type, the length of n-grams and the relative frequency ratio of the n-grams. While the n-grams composing a set may themselves be indicative of social power lects, this method of grouping them makes no guarantees as to how indicative the overall set is. Therefore, we experimented with filtering out sets which had a negligible information gain. Information gain is an information theoretic concept measuring how much the probability distributions for a feature differ among the different classes. A small information gain suggests that a feature may not be effective at discriminating between classes. Although this approach to partitioning is simple and worthy of improvement, it effectively reduced the dimensionality of the feature space. 3.4 Classification Once features are selected, a classifier is trained on these features. Many features are weak on their own; they either occur rarely or occur frequently but only hint weakly at social information. Therefore, we experimented with classifiers friendly to weak features, such as Adaboost and Logistic Regression (MaxEnt). However, we generally achieved the best results using support vector machines, a machine learning classifier which has been successfully applied to many previous text classification problems. We used Weka’s optimized SVMs (SMO) (Witten 2005, Platt 1998) and default parameters, except where noted. 4 Evaluation 4.1 Data To validate our supervised learning approach, we sought an adequately large English corpus of person-to-person communication labeled with the ground truth. For this, we used the publicly avail777 able Enron corpus. After filtering for duplicates and removing empty or otherwise unusable emails, the total number of emails is 245K, containing roughly 90 million words. However, this total includes emails to non-Enron employees, such as family members and employees of other corporations, emails to multiple people, and emails received from Enron employees without a known corporate role. Because the author-recipient relationships of these emails could not be established, they were not included in our experiments. Building upon previous annotation done on the corpus, we were able to ascertain the corporate role (CEO, Manager, Employee, etc.) of many email authors and recipients. 
From this information, we determined the author-recipient relationship by applying general rules about the structure of a corporate hierarchy (an email from an Employee to a CEO, for instance, is UpSpeak). This annotation method does not take into account promotions over time, secretaries speaking on behalf of their supervisors, or other causes of relationship irregularities. However, this misinformation would, if anything, generally hurt our classifiers. The emails were pre-processed to eliminate text not written by the author, such as forwarded text and email headers. As our approach requires text to be POS-tagged, we employed Stanford’s POS tagger (http://nlp.stanford.edu/software/tagger.shtml). In addition, text was regularized by conversion to lower case and tokenized to improve counts. To create training and test sets, we partitioned the authors of text from the corpus into two sets: A and B. Then, we used text authored by individuals in A as a training set and text authored by individuals in B as a test set. The training set is used to determine discriminating features upon which classifiers are built and applied to the test set. We Table 1. Author-based Training and Test partitions. The number of author-recipient pairs (links) and the number of words in text labeled as UpSpeak and DownSpeak are shown. found that partitioning by authors was necessary to avoid artificially inflated scores, because the classifiers pick up aspects of particular authors’ language (idiolect) in addition to social power lect information. It was not necessary to account for recipients because the emails did not contain text from the recipients. Table 1 summarizes the text partitions. Because preliminary experiments suggested that smaller text samples were harder to classify, the classifiers we describe in this paper were both trained and tested on a subset of the Enron corpus where at least 500 words of text was communicated from a specific author to a specific recipient. This subset contained 142 links, 40% of which were used as the test set. Weighting for Cost-Sensitive Learning: The original corpus was not balanced: the number of UpSpeak links was greater than the number of DownSpeak links. Varying the weight given to training instances is a technique for creating a classifier that is cost-sensitive, since a classifier built on an unbalanced training set can be biased towards avoiding errors on the overrepresented class (Witten, 2005). We wanted misclassifying UpSpeak as DownSpeak to have the same cost as misclassifying DownSpeak as UpSpeak. To do this, we assigned weights to each instance in the training set. UpSpeak instances were weighted less than DownSpeak instances, creating a training set that was balanced between UpSpeak and DownSpeak. Balancing the training set generally improved results. Weighting the test set in the same manner allowed us to evaluate the performance of the classifier in a situation in which the numbers of UpSpeak and DownSpeak instances were equal. A baseline classifier that always predicted the majority class would, on its own, achieve an accuracy of 74% on UpSpeak/DownSpeak classification of unweighted test set instances with a minimum length of 500 words. However, results on the weighted test set are properly compared to a baseline of 50%. We include both approaches to scoring in this paper. 4.2 UpSpeak/DownSpeak Classifiers In this section, we describe experiments on classification of interpersonal email communication into UpSpeak and DownSpeak. 
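Before the individual experiments, the instance weighting of Section 4.1 can be made concrete. The sketch below is ours, not the authors' Weka setup: the paper states only that UpSpeak instances were weighted less so that the two classes balance, so the inverse-class-frequency formula here is an assumption; the link counts are the training figures from Table 1.

```python
# Sketch (ours) of cost-sensitive instance weighting: each training
# instance gets a weight inversely proportional to its class frequency,
# so that UpSpeak and DownSpeak contribute equal total weight.
# The exact weighting scheme in the paper is not specified; this formula
# is an assumption. Link counts are the Table 1 training figures.

from collections import Counter

train_labels = ["UpSpeak"] * 431 + ["DownSpeak"] * 328   # 759 author-recipient links

def balanced_weights(labels):
    counts = Counter(labels)
    n_classes = len(counts)
    total = len(labels)
    # weight = total / (n_classes * class_count): each class then sums to total / n_classes
    per_class = {c: total / (n_classes * k) for c, k in counts.items()}
    return [per_class[y] for y in labels]

weights = balanced_weights(train_labels)
# UpSpeak weight ≈ 0.88, DownSpeak weight ≈ 1.16: UpSpeak instances are
# weighted less, as described above, and each class sums to 379.5.
```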
For these experiments, only emails exchanged between two people related by a superior/subordinate power relationship were UpSpeak DownSpeak Links Words Links Words Training 431 136K 328 63K Test 232 74K 148 27K 778 Table 2. Experiment Results. Accuracies/F-Scores with an SVM classifier for 10-fold cross validation on the weighted training set and evaluation against the weighted and unweighted test sets. Note that the baseline accuracy against the unweighted test set is 74%, but 50% for the weighted test set and cross-validation. Human-Engineered Features: Before examining the data itself, we identified some features which we thought would be predictive of UpSpeak or DownSpeak, and which could be fairly accurately modeled by mixed n-grams. These features included the use of different types of imperatives. We also thought that the type of greeting or signature used in the email might be reflective of formality, and therefore of UpSpeak and DownSpeak. For example, subordinates might be more likely to use an honorific when addressing a superior, or to sign an email with “Thanks.” We preformed some preliminary experiments using these features. While the feature set was too small to produce notable results, we identified which features actually were indicative of lect. One such feature was polite imperatives (imperatives preceded by the word “please”). The polite imperative feature was represented by the n-gram set: {please ^VB, please ^‘comma’ ^VB}. Unigrams and Bigrams: As a different sort of baseline, we considered the results of a bag-ofwords based classifier. Features used in these experiments consist of single words which occurred a minimum of four times in the relevant lects (UpSpeak and DownSpeak) of the training set. The results of the SVM classifier, shown in line (1) of Table 2, were fairly poor. We then performed experiments with word bigrams, selecting as features those which occurred at least seven times in the relevant lects of the training set. This threshold for bigram frequency minimized the difference in the number of features between the unigram and bigram experiments. While the bigrams on their own were less successful than the unigrams, as seen in line (2), adding them to the unigram features improved accuracy against the test set, shown in line (3). As we had speculated that including surfacelevel grammar information in the form of tag ngrams would be beneficial to our problem, we performed experiments using all tag unigrams and all tag bigrams occurring in the training set as features. The results are shown in line (4) of Table 2. The results of these experiments were not particularly strong, likely owing to the increased sparsity of the feature vectors. Binning: Next, we wished to explore longer ngrams of words or POS tags and to reduce the sparsity of the feature vectors. We therefore experimented with our method of binning the individual n-grams to be used as features. We binned features by their relative frequency ratios. In addition to binning, we also reduced the total number of n-grams by setting higher frequency thresholds and relative frequency ratio thresholds. When selecting n-grams for this experiment, we considered only word n-grams and tag n-grams – not mixed n-grams, which are a combination of words and tags. These mixed n-grams, while useful for specifying human-defined features, largely increased the dimensionality of the feature search space and did not provide significant benefit in preliminary experiments. 
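For the bag-of-words baselines in lines (1)–(3), the vocabulary construction can be sketched as follows (our code, not the authors'). The thresholds of 4 for word unigrams and 7 for word bigrams come from the text; the reading that an n-gram only needs to clear the threshold in one of the two lects is our assumption.

```python
# Sketch (ours) of assembling the baseline feature vocabularies: keep a
# word unigram seen at least 4 times and a word bigram seen at least 7
# times in the relevant lects of the training set.

from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def baseline_vocab(texts_by_class, n, min_count):
    """texts_by_class: {'UpSpeak': [token list, ...], 'DownSpeak': [...]}.
    An n-gram is kept if it clears min_count in either class (assumption)."""
    vocab = set()
    for texts in texts_by_class.values():
        counts = Counter(g for tokens in texts for g in ngrams(tokens, n))
        vocab |= {g for g, c in counts.items() if c >= min_count}
    return vocab

# unigram_features = baseline_vocab(train_texts, n=1, min_count=4)
# bigram_features  = baseline_vocab(train_texts, n=2, min_count=7)
```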
For the word sequences, Cross-Validation Test Set (weighted) Test Set (unweighted) Features # of features # of n-grams Acc (%) F-score Acc (%) F-score Acc (%) F-score (1) Word unigrams 3899 3899 55.4 .481 62.1 .567 78.9 .748 (2) Word bigrams 3740 3740 54.5 .457 56.4 .498 73.7 .693 (3) Word unigrams + word bigrams 7639 7639 51.8 .398 63.3 .576 80.7 .762 (4) (3) + tag unigrams + tag bigrams 9014 9014 51.8 .398 58.8 .515 77.2 .719 (5) Binned n-grams 8 106 83.0 .830 78.1 .781 77.2 .783 (6) N-grams from (5), separated 106 106 83.0 .828 60.5 .587 70.2 .698 (7) (5) + polite imperatives 9 108 83.9 .839 77.1 .771 78.9 .797 779 we set an absolute frequency threshold that depended on class. The frequency of a word n-gram in a particular class was required to be 0.18 * nrlinks / n, where nrlinks is the number of links in each class (431 for UpSpeak and 328 for DownSpeak), and n is the number of words in the class. The relative frequency ratio was required to be at least 1.5. The tag sequences were required to meet an absolute frequency threshold of 20, but the same relative frequency ratio of 1.5. Binning the n-grams into features was done based on both the length of the n-gram and the relative frequency ratio. For example, one feature might represent the set of all word unigrams which have a relative frequency ratio between 1.5 and 1.6. We explored possible feature sets with cross validation. Before filtering for low information gain, we used six word n-gram bins per class (relative frequency ratios of 1.5, 1.6 ..., 1.9 and 2.0+), one tag n-gram bin for UpSpeak (2.0+), and three tag n-gram bins for DownSpeak (2.0+, 5.0+, 10.0+). Even with the weighted training set, DownSpeak instances were generally harder to identify and likely benefited from additional representation. Grouping features by length was a simple but arbitrary method for reducing dimensionality, yet sometimes produced small bins of otherwise good features. Therefore, as we explored the feature space, small bins of different n-gram lengths were merged. We then employed Weka’s InfoGain feature selection tool to remove those features with a low information gain3, which removed all but eight features. The results of this experiment are shown in line (5) of Table 2. It far outperforms the bag-ofwords baselines, despite significantly fewer features. To ascertain which feature reduction method had the greatest effect on performance – binning or setting a relative frequency ratio threshold – we performed an experiment in which all the n-grams that we used in the previous experiment were their own features. Line (6) of Table 2 shows that while this approach is an improvement over the basic bag-of-words method, grouping features still improves results. 3 In Weka, features (‘attributes’) with a sufficiently low information gain have this value rounded down to “0”; these are the features we removed. Our goal was to have successful results using only statistically extracted features; however, we examined the effect of augmenting this feature set with the most indicative of the human-identified feature – polite imperatives. The results, in line (7), show a slight improvement in both the cross validation accuracy, and the accuracy against the unweighted test set increases to 78.9%4. However, among the weighted test sets, the highest accuracy was 78.1%, with the features in line (5). 
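A simplified sketch (ours, not the authors' code) of the selection-and-binning step used for line (5): an n-gram must clear an absolute frequency threshold and a relative frequency ratio threshold of 1.5, is assigned to the class it favors, and is grouped into a bin by ratio range, each bin becoming one feature. The full procedure additionally splits bins by n-gram type and length and uses different edges for tag n-grams (2.0+, 5.0+, 10.0+ for DownSpeak); those details and the class-dependent word threshold are omitted here.

```python
# Sketch (ours) of n-gram selection and binning by relative-frequency ratio.

def relative_freqs(counts):
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def select_and_bin(up_counts, down_counts, abs_threshold, ratio_threshold=1.5,
                   ratio_edges=(1.5, 1.6, 1.7, 1.8, 1.9, 2.0)):
    """up_counts/down_counts: dict or Counter of n-gram -> absolute count.
    Returns {(class, ratio bin edge): set of n-grams}; each key is one feature."""
    up_rel, down_rel = relative_freqs(up_counts), relative_freqs(down_counts)
    bins = {}
    for g in set(up_counts) | set(down_counts):
        for cls, counts, rel, other_rel in (
            ("UpSpeak", up_counts, up_rel, down_rel),
            ("DownSpeak", down_counts, down_rel, up_rel),
        ):
            if counts.get(g, 0) < abs_threshold:
                continue                                  # too infrequent in this class
            ratio = rel.get(g, 0.0) / max(other_rel.get(g, 0.0), 1e-12)
            if ratio < ratio_threshold:
                continue                                  # not indicative of this lect
            # assign g to the bin of the largest edge it exceeds (2.0 acts as "2.0+")
            edge = max(e for e in ratio_edges if ratio >= e)
            bins.setdefault((cls, edge), set()).add(g)
    return bins
```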
We report the scores for cross-validation on the training set for these features; however, because the features were selected with knowledge of their per-class distribution in the training set, these cross-validation scores should not be seen as the classifier’s true accuracy. Self-Training: Besides sparse feature vectors, another factor likely to be hurting our classifier was the limited amount of training data. We attempted to increase the training set size by performing exploratory experiments with selftraining, an iterative semi-supervised learning method (Zhu, 2005) with the feature set from (7). On the first iteration, we trained the classifier on the labeled training set, classified the instances of the unlabeled test set, and then added the instances of the test set along with their predicted class to the training set to be used for the next iteration. After three iterations, the accuracy of the classifier when evaluated on the weighted test set improved to 82%, suggesting that our classifiers would benefit from more data. Impact of Cost-Sensitive Learning: Without cost-sensitive learning, the classifiers were heavily biased towards UpSpeak, tending to classify both DownSpeak and UpSpeak test instances as UpSpeak. With cost-sensitive training, overall performance improved and classifier performance on DownSpeak instances improved dramatically. In (5) of Table 2, DownSpeak classifier accuracy even edged out the accuracy for UpSpeak. We expect that on a larger dataset behavior with unweighted training and test data would improve. 5 Conclusions and Future Research We presented a corpus-based statistical learning approach to modeling social power relationships and experimental results for our methods. To our 4 The associated p-value is 6.56E-6. 780 knowledge, this is the first corpus-based approach to learning social power lects beyond those in direct reporting relationships. Our work strongly suggests that statistically extracted features are an efficient and effective approach to modeling social information. Our methods exploit many aspects of language use and effectively model social power information while using statistical methods at every stage to tease out the information we seek, significantly reducing language-, culture-, and lect-specific engineering needs. Our feature selection method picks up on indicators suggested by sociolinguistics, and it also allows for the identification of features that are not obviously characteristic of UpSpeak or DownSpeak. Some easily recognizable features include: Lect Ngram Example UpSpeak if you “Let me know if you need anything.” “Please call me if you have any questions.” DownSpeak give me “Read this over and give me a call.” “Please give me your comments next week.” On the other hand, other features are less intuitive: Lect Ngram Example UpSpeak I’ll, we’ll “I’ll let you know the final results soon” “Everyone is very excited […] and we’re confident we’ll be successful” DownSpeak that is, this is “Neither does any other group but that is not my problem” “I think this is an excellent letter” We hope to improve our methods for selecting and binning features with information theoretic selection metrics and clustering algorithms. We also have begun work on 3-way, UpSpeak/ DownSpeak/PeerSpeak classification. Training a multiclass SVM on the binned n-gram features from (5) produces 51.6% cross-validation accuracy on training data and 44.4% accuracy on the weighted test set (both numbers should be compared to a 33% baseline). 
That classifier contained no n-gram features selected from the PeerSpeak class. Preliminary experiments incorporating PeerSpeak n-grams yield slightly better numbers. However, early results also suggest that the threeway classification problem is made more tractable with cascaded two-way classifiers; feature selection was more manageable with binary problems. For example, one classifier determines whether an instance is UpSpeak; if it is not, a second classifier distinguishes between DownSpeak and PeerSpeak. Our text classification problem is similar to sentiment analysis in that there are class dependencies; for example, DownSpeak is more closely related to PeerSpeak than to UpSpeak. We might attempt to exploit these dependencies in a manner similar to Pang and Lee (2005) to improve three-way classification. In addition, we had promising early results for classification of author-recipient links with 200 to 500 words, so we plan to explore performance improvements for links of few words. In early, unpublished work, we had promising results with generative model-based approach to SPM, and we plan to revisit it; language models are a natural fit for lect modeling. Finally, we hope to investigate how SPM and SNA can enhance one another, and explore other lect classification problems for which the ground truth can be found. Acknowledgments Dr. Richard Sproat contributed time, valuable insights, and wise counsel on several occasions during the course of the research. Dr. Lillian Lee and her students in Natural Language Processing and Social Interaction reviewed the paper, offering valuable feedback and helpful leads. Our colleague, Diane Bramsen, created an excellent graphical interface for probing and understanding the results. Jeff Lau guided and advised throughout the project. We thank our anonymous reviewers for prudent advice. This work was funded by the Army Studies Board and sponsored by Col. Timothy Hill of the United Stated Army Intelligence and Security Command (INSCOM) Futures Directorate under contract W911W4-08-D-0011. References Cecilia Ovesdotter Alm, Dan Roth and Richard Sproat. 2005. Emotions from text: machine learning for textbased emotion prediction. HLT/EMNLP 2005. October 6-8, 2005, Vancouver. 781 Penelope Brown and Stephen C. Levinson. 1987. Politeness: Some universals in language usage. Cambridge: Cambridge University Press. Eric Breck, Yejin Choi and Claire Cardie. 2007. Identifying expressions of opinion in context. In Proceedings of the Twentieth International Joint Conference on Artificial Intelligence (IJCAI-2007) CALO Project. 2009. Enron E-Mail Dataset. http://www.cs.cmu.edu/~enron/. Yejin Choi and Claire Cardie. 2008. Learning with compositional semantics as structural inference for subsentential sentiment analysis. Proceedings of the Conference on Empirical Methods in Natural Language Processing. Honolulu, Hawaii: ACM. 793-801. Yejin Choi and Claire Cardie. 2009. Adapting a polarity lexicon using integer linear programming for domainspecific sentiment classification. Empirical Methods in Natural Language Processing (EMNLP). Christopher P. Diehl, Galileo Namata, and Lise Getoor. 2007. Relationship identification for social network discovery. AAAI '07: Proceedings of the 22nd National Conference on Artificial Intelligence. Bonnie Erickson, et al. 1978. Speech style and impression formation in a court setting: The effects of 'powerful’ and 'powerless' speech. Journal of Experimental Social Psychology 14: 266-79. Norman Fairclough. 1989. Language and power. 
London: Longman. Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The WEKA data mining software: An update. SIGKDD Exploration (1): Issue 1. JHU Center for Imaging Science. 2005. Scan Statistics on Enron Graphs. http://cis.jhu.edu/~parky/Enron/ Soo-min Kim and Eduard Hovy. 2004. Determining the Sentiment of Opinions. Proceedings of the COLING Conference. Geneva, Switzerland. Francois Mairesse and Marilyn Walker. 2006. Automatic recognition of personality in conversation. Proceedings of HLT-NAACL. New York City, New York. Galileo Mark S. Namata Jr., Lise Getoor, and Christopher P. Diehl. 2006. Inferring organizational titles in online communication. ICML 2006, 179-181. Andrew McCallum, Xuerui Wang, and Andres CorradaEmmanuel. 2007. Topic and role discovery in social networks with experiments on Enron and academic eMail. Journal of Artificial Intelligence Research 29. Ryan McDonald, Kerry Hannan, Tyler Neylon, Mike Wells, and Jeff Reynar. 2007. Structured models for fine-to-coarse sentiment analysis. Proceedings of the ACL. David Morand. 2000. Language and power: An empirical analysis of linguistic strategies used in superior/subordinate communication. Journal of Organizational Behavior, 21:235-248. Frederick Mosteller and David L. Wallace. 1964. Inference and disputed authorship: The Federalist. Addison-Wesley, Reading, Mass. Jon Oberlander and Scott Nowson. 2006. Whose thumb is it anyway? Classifying author personality from weblog text. Proceedings of CoLing/ACL. Sydney, Australia. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up? Sentiment classification using machine learning techniques. Proceedings of EMNLP, 79–86. Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. Proceedings of the ACL. John Platt. 1998. Sequential minimal optimization: A fast algorithm for training support vector machines. In Technical Report MST-TR-98-14. Microsoft Research. Delip Rao and Deepak Ravichandran. 2009. Semisupervised polarity lexicon induction. European Chapter of the Association for Computational Linguistics. Efstathios Stamatatos. 2009. A survey of modern authorship attribution methods. JASIST 60(3): 538-556. Carol Strapparava and Rada Mihalcea. 2008. Learning to identify emotions in text. SAC 2008: 1556-1560 Hiroya Takamura, Takashi Inui, and Manabu Okumura. 2005. Semantic Orientations of Words using Spin Model. Annual Meeting of the Association for Computational Linguistics. Ian H. Witten and Eibe Frank. 2005. Data Mining: Practical Machine Learning Tools and Techniques. Morgan Kauffman. Xiaojin Zhu. 2005. Semi-supervised learning literature survey. Technical Report 1530, Department of Computer Sciences, University of Wisconsin, Madison. 782
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 783–792, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Bootstrapping Coreference Resolution Using Word Associations Hamidreza Kobdani, Hinrich Sch¨utze, Michael Schiehlen and Hans Kamp Institute for Natural Language Processing University of Stuttgart [email protected] Abstract In this paper, we present an unsupervised framework that bootstraps a complete coreference resolution (CoRe) system from word associations mined from a large unlabeled corpus. We show that word associations are useful for CoRe – e.g., the strong association between Obama and President is an indicator of likely coreference. Association information has so far not been used in CoRe because it is sparse and difficult to learn from small labeled corpora. Since unlabeled text is readily available, our unsupervised approach addresses the sparseness problem. In a self-training framework, we train a decision tree on a corpus that is automatically labeled using word associations. We show that this unsupervised system has better CoRe performance than other learning approaches that do not use manually labeled data. 1 Introduction Coreference resolution (CoRe) is the process of finding markables (noun phrases) referring to the same real world entity or concept. Until recently, most approaches tried to solve the problem by binary classification, where the probability of a pair of markables being coreferent is estimated from labeled data. Alternatively, a model that determines whether a markable is coreferent with a preceding cluster can be used. For both pair-based and cluster-based models, a well established feature model plays an important role. Typical systems use a rich feature space based on lexical, syntactic and semantic knowledge. Most commonly used features are described by Soon et al. (2001). Most existing systems are supervised systems, trained on human-labeled benchmark data sets for English. These systems use linguistic features based on number, gender, person etc. It is a challenge to adapt these systems to new domains, genres and languages because a significant human labeling effort is usually necessary to get good performance. To address this challenge, we pursue an unsupervised self-training approach. We train a classifier on a corpus that is automatically labeled using association information. Self-training approaches usually include the use of some manually labeled data. In contrast, our self-trained system is not trained on any manually labeled data and is therefore a completely unsupervised system. Although training on automatically labeled data can be viewed as a form of supervision, we reserve the term supervised system for systems that are trained on manually labeled data. The key novelty of our approach is that we bootstrap a competitive CoRe system from association information that is mined from an unlabeled corpus in a completely unsupervised fashion. While this method is shallow, it provides valuable information for CoRe because it considers the actual identity of the words in question. Consider the pair of markables (Obama, President). It is a likely coreference pair, but this information is not accessible to standard CoRe systems because they only use string-based features (often called lexical features), named entity features and semantic word class features (e.g., from WordNet) that do not distinguish, 783 say, Obama from Hawking. 
In our approach, word association information is used for clustering markables in unsupervised learning. Association information is calculated as association scores between heads of markables as described below. We view association information as an example of a shallow feature space which contrasts with the rich feature space that is generally used in CoRe. Our experiments are conducted using the MCORE system (“Modular COreference REsolution”).1 MCORE can operate in three different settings: unsupervised (subsystem A-INF), supervised (subsystem SUCRE (Kobdani and Sch¨utze, 2010)), and self-trained (subsystem UNSEL). The unsupervised subsystem A-INF (“Association INFormation”) uses the association scores between heads as the distance measure when clustering markables. SUCRE (“SUpervised Coreference REsolution”) is trained on a labeled corpus (manually or automatically labeled) similar to standard CoRe systems. Finally, the unsupervised self-trained subsystem UNSEL (“UNsupervised SELf-trained”) uses the unsupervised subsystem A-INF to automatically label an unlabeled corpus that is then used as a training set for SUCRE. Our main contributions in this paper are as follows: 1. We demonstrate that word association information can be used to develop an unsupervised model for shallow coreference resolution (subsystem A-INF). 2. We introduce an unsupervised self-trained method (UNSEL) that takes a two-learner twofeature-space approach. The two learners are A-INF and SUCRE. The feature spaces are the shallow and rich feature spaces. 3. We show that the performance of UNSEL is better than the performance of other unsupervised systems when it is self-trained on the automatically labeled corpus and uses the leveraging effect of a rich feature space. 4. MCORE is a flexible and modular framework that is able to learn from data with different 1MCORE can be downloaded from ifnlp.org/ ˜schuetze/mcore. quality and domain. Not only is it able to deal with shallow information spaces (A-INF), but it can also deliver competitive results for rich feature spaces (SUCRE and UNSEL). This paper is organized as follows. Related work is discussed in Section 2. In Section 3, we present our system architecture. Section 4 describes the experiments and Section 5 presents and discusses our results. The final section presents our conclusions. 2 Related Work There are three main approaches to CoRe: supervised, semi-supervised (or weakly supervised) and unsupervised. We use the term semi-supervised for approaches that use some amount of human-labeled coreference pairs. M¨uller et al. (2002) used co-training for coreference resolution, a semi-supervised method. Cotraining puts features into disjoint subsets when learning from labeled and unlabeled data and tries to leverage this split for better performance. Ng and Cardie (2003) use self-training in a multiple-learner framework and report performance superior to cotraining. They argue that the multiple learner approach is a better choice for CoRe than the multiple view approach of co-training. Our self-trained model combines multiple learners (A-INF and SUCRE) and multiple views (shallow/rich information). A key difference to the work by M¨uller et al. (2002) and Ng and Cardie (2003) is that we do not use any human-labeled coreference pairs. Our basic idea of self-training without human labels is similar to (Kehler et al., 2004), but we address the general CoRe problem, not just pronoun interpretation. 
Turning to unsupervised CoRe, Haghighi and Klein (2007) proposed a generative Bayesian model with good performance. Poon and Domingos (2008) introduced an unsupervised system in the framework of Markov logic. Ng (2008) presented a generative model that views coreference as an EM clustering process. We will show that our system, which is simpler than prior work, outperforms these systems. Haghighi and Klein (2010) present an “almost unsupervised” CoRe system. In this paper, we only compare with completely unsupervised approaches, 784 not with approaches that make some limited use of labeled data. Recent work by Haghighi and Klein (2009), Klenner and Ailloud (2009) and Raghunathan et al. (2010) challenges the appropriateness of machine learning methods for CoRe. These researchers show that a “deterministic” system (essentially a rulebased system) that uses a rich feature space including lexical, syntactic and semantic features can improve CoRe performance. Almost all CoRe systems, including ours, use a limited number of rules or filters, e.g., to implement binding condition A that reflexives must have a close antecedent in some sense of “close”. In our view, systems that use a few basic filters are fundamentally different from carefully tuned systems with a large number of complex rules, some of which use specific lexical information. A limitation of complex rule-based systems is that they require substantial effort to encode the large number of deterministic constraints that guarantee good performance. Moreover, these systems are not adaptable (since they are not machine-learned) and may have to be rewritten for each new domain, genre and language. Consequently, we do not compare our performance with deterministic systems. Ponzetto (2010) extracts metadata from Wikipedia for supervised CoRe. Using such additional resources in our unsupervised system should further improve CoRe performance. Elsner et al. (2009) present an unsupervised algorithm for identifying clusters of entities that belong to the same named entity (NE) class. Determining common membership in an NE class like person is an easier task than determining coreference of two NEs. 3 System Architecture Figure 1 illustrates the system architecture of our unsupervised self-trained CoRe system (UNSEL). Oval nodes are data, box nodes are processes. We take a self-training approach to coreference resolution: We first label the corpus using the unsupervised model A-INF and then train the supervised model SUCRE on this automatically labeled training corpus. Even though we train on a labeled corpus, the labeling of the corpus is produced in a completely automatic fashion, without recourse to huUnlabeled Data Unsupervised Model (A-INF) Automatically Labeled Data Supervised Model (SUCRE) Figure 1: System Architecture of UNSEL (Unsupervised Self-Trained Model). man labeling. Thus, it is an unsupervised approach. The MCORE architecture is very flexible; in particular, as will be explained presently, it can be easily adapted for supervised as well as unsupervised settings. The unsupervised and supervised models have an identical top level architecture; we illustrate this in Figure 2. In preprocessing, tokens (words), markables and their attributes are extracted from the input text. The key difference between the unsupervised and supervised approaches is in how pair estimation is accomplished — see Sections 3.1 & 3.2 for details. The main task in chain estimation is clustering. 
Figure 3 presents our clustering method, which is used for both supervised and unsupervised CoRe. We search for the best predicted antecedent (with coreference probability p ≥ 0.5) from right to left starting from the end of the document. McEnery et al. (1997) showed that in 98.68% of cases the antecedent is within a 10-sentence window; hence we use a window of 10 sentences for search. We have found that limiting the search to a window increases both efficiency and effectiveness.

Filtering. We use a feature definition language to define the templates according to which the filters and features are calculated. These templates are hard constraints that filter out all cases that are clearly disreferent, e.g., (he, she) or (he, they). We use the following filters: (i) the antecedent of a reflexive pronoun must be in the same sentence; (ii) the antecedent of a pronoun must occur at a distance of at most 3 sentences; (iii) a coreferent pair of a noun and a pronoun or of two pronouns must not disagree in number; (iv) a coreferent pair of two pronouns must not disagree in gender. These four filters are used in supervised and unsupervised modes of MCORE.

Figure 2: Common architecture of unsupervised (A-INF) and supervised (SUCRE) models. [Diagram: Input Text → Preprocessing → Markables → Pair Estimation → Markable Pairs → Chain Estimation → Markable Chains]

Figure 3: MCORE chain estimation (clustering) algorithm (test). t is the clustering threshold. Ci refers to the cluster that Mi is a member of.
Chain Estimation (M1, M2, . . . , Mn):
1. t ← 1
2. For each markable Mi: Ci ← {Mi}
3. Proceed through the markables from the end of the document. For each Mj, consider each preceding Mi within 10 sentences: If Pair Estimation(Mi, Mj) >= t: Ci ← Ci ∪ Cj
4. t ← t − 0.01
5. If t >= 0.5: go to step 3
Pair Estimation (Mi, Mj): If Filtering(Mi, Mj) == FALSE then return 0; else return the probability p (or association score N) of markable pair (Mi, Mj) being coreferent.
Filtering (Mi, Mj): return TRUE if all filters for (Mi, Mj) are TRUE else FALSE

3.1 Unsupervised Model (A-INF)

Figure 4 (top) shows how A-INF performs pair estimation. First, in the pair generation step, all possible pairs inside 10 sentences are generated. Other steps are separately explained for train and test as follows.

Train. In addition to the filters (i)–(iv) described above, we use the following filter: (v) If the head of markable M2 matches the head of the preceding markable M1, then we ignore all other pairs for M2 in the calculation of association scores. This additional filter is necessary because an approach without some kind of string matching constraint yields poor results, given the importance of string matching for CoRe. As we will show below, even the simple filters (i)–(v) are sufficient to learn high-quality association scores; this means that we do not need the complex features of “deterministic” systems. However, if such complex features are available, then we can use them to improve performance in our self-trained setting. To learn word association information from an unlabeled corpus (see Section 4), we compute mutual information (MI) scores between heads of markables. We define MI as follows (Cover and Thomas, 1991):

MI(a, b) = Σ_{i ∈ {ā, a}} Σ_{j ∈ {b̄, b}} P(i, j) log2 [ P(i, j) / (P(i) P(j)) ]

E.g., P(a, b̄) is the probability of a pair whose two elements are a and a word not equal to b.
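Since both the supervised and the unsupervised mode feed their pair scores into the same clustering routine, the pseudocode in Figure 3 is worth making concrete. The sketch below is ours, not the MCORE implementation: it assumes markables are numbered in document order together with their sentence indices, and takes the pair scorer as a parameter so that the same routine serves both the probability p of SUCRE and the association score N of A-INF.

```python
# Sketch (ours) of the chain estimation algorithm in Figure 3.

def chain_estimation(n_markables, score, sentence, window=10):
    """score(i, j) plays the role of Pair Estimation: 0 for filtered pairs,
    otherwise p or N. sentence[i] is the sentence index of markable i;
    markables are assumed to appear in document order."""
    cluster = list(range(n_markables))            # cluster id per markable

    def merge(i, j):                              # Ci <- Ci ∪ Cj
        ci, cj = cluster[i], cluster[j]
        if ci != cj:
            for k in range(n_markables):
                if cluster[k] == cj:
                    cluster[k] = ci

    for step in range(51):                        # t = 1.00, 0.99, ..., 0.50
        t = 1.0 - 0.01 * step
        for j in range(n_markables - 1, -1, -1):  # from the end of the document
            for i in range(j - 1, -1, -1):        # preceding markables
                if sentence[j] - sentence[i] > window:
                    break                         # outside the 10-sentence window
                if score(i, j) >= t:
                    merge(i, j)
    return cluster                                # equal ids form one markable chain
```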
Test. A key virtue of our approach is that in the classification of pairs as coreferent/disreferent, the coreference probability p estimated in supervised learning plays exactly the same role as the association information score N (defined below). For p, it is important that we only consider pairs with p ≥ 0.5 as potentially coreferent (see Figure 3). To be able to impose the same constraint on N, we normalize the MI scores by the maximum values of the two words and take the average:

N(a, b) = (1/2) [ MI(a, b) / max_x MI(a, x) + MI(a, b) / max_x MI(x, b) ]

In the above equation, the value of N indicates how strongly two words are associated. N is normalized to ensure 0 ≤ N ≤ 1. If a or b did not occur, then we set N = 0. In filtering for test, we use filters (i)–(iv). We then fetch the MI values and calculate N values. The clustering algorithm described in Figure 3 uses these N values in exactly the same way as p: we search for the antecedent with the maximum association score N greater than 0.5 from right to left starting from the end of the document.

As we will see below, using N scores acquired from an unlabeled corpus as the only source of information for CoRe performs surprisingly well. However, the weaknesses of this approach are (i) the failure to cover pairs that do not occur in the unlabeled corpus (negatively affecting recall) and (ii) the generation of pairs that are not plausible candidates for coreference (negatively affecting precision). To address these problems, we train a model on a corpus labeled by A-INF in a self-training approach.

3.2 Supervised Model (SUCRE)

Figure 4 (bottom) presents the architecture of pair estimation for the supervised approach (SUCRE). In the pair generation step for train, we take each coreferent markable pair (Mi, Mj) without intervening coreferent markables and use (Mi, Mj) as a positive training instance and (Mi, Mk), i < k < j, as negative training instances. For test, we generate all possible pairs within 10 sentences. After filtering, we then calculate a feature vector for each generated pair that survived filters (i)–(iv). Our basic features are similar to those described by Soon et al. (2001): string-based features, distance features, span features, part-of-speech features, grammatical features, semantic features, and agreement features. These basic features are engineered with the goal of creating a feature set that will result in good performance. For this purpose we used the relational feature engineering framework which has been presented in (Kobdani et al., 2010). It includes powerful and flexible methods for implementing and extracting new features. It allows systematic and fast search of the space of features and thereby reduces the time and effort needed for defining optimal features. We believe that the good performance of our supervised system SUCRE (tables 1 and 2) is the result of our feature engineering approach.2 As our classification method, we use a decision
Note that, as is standard in CoRe, filtering and feature calculation are exactly the same for training and test, but that pair generation is different as described above. 4 Experimental Setup 4.1 Data Sets For computing word association, we used a corpus of about 63,000 documents from the 2009 English Wikipedia (the articles that were larger than 200 bytes). This corpus consists of more than 33.8 million tokens; the average document length is 500 tokens. The corpus was parsed using the Berkeley parser (Petrov and Klein, 2007). We ignored all sentences that had no parse output. The number of detected markables (all noun phrases extracted from parse trees) is about 9 million. We evaluate unsupervised, supervised and selftrained models on ACE (Phase 2) (Mitchell et al., 2003).4 This data set is one of the most widely used CoRe benchmarks and was used by the systems that are most comparable to our approach; in particular, it was used in most prior work on unsupervised CoRe. The corpus is composed of three data sets from three different news sources. We give the number of test documents for each: (i) Broadcast News (BNEWS): 51. (ii) Newspaper (NPAPER): 17. (iii) Newswire (NWIRE): 29. We report results for true markables (markables extracted from the answer keys) to be able to compare with other systems that use true markables. In addition, we use the recently published OntoNotes benchmark (Recasens et al., 2010). OntoNotes is an excerpt of news from the OntoNotes Corpus Release 2.0 (Pradhan et al., 2007). The advantage of OntoNotes is that it contains two parallel annotations: (i) a gold setting, gold standard manual annotations of the preprocessing information and (ii) an automatic setting, automatically predicted annotations of the preprocessing information. The automatic setting reflects the situation a CoRe system 3We also tried support vector machines and maximum entropy models, but they did not perform better. 4We used two variants of ACE (Phase 2): ACE-2 and ACE2003 787 Markable Pairs Filtering Association Calculation Pair Generation Filter Templates Association Information Train/Test Markable Pairs Filtering Feature Calculation Feature Vectors Pair Generation Filter Templates Feature Templates Train/Test Figure 4: Pair estimation in the unsupervised model A-INF (top) and in the supervised model SUCRE (bottom). faces in reality; in contrast, the gold setting should be considered less realistic. The issue of gold vs. automatic setting is directly related to a second important evaluation issue: the influence of markable detection on CoRe evaluation measures. In a real application, we do not have access to true markables, so an evaluation on system markables (markables automatically detected by the system) reflects actual expected performance better. However, reporting only CoRe numbers (even for system markables) is not sufficient either since accuracy of markable detection is necessary to interpret CoRe scores. Thus, we need (i) measures of the quality of system markables (i.e., an evaluation of the markable detection subtask) and CoRe performance on system markables as well as (ii) a measure of CoRe performance on true markables. We use OntoNotes in this paper to perform such a, in our view, complete and realistic evaluation of CoRe. The two evaluations correspond to the two evaluations performed at SemEval-2010 (Recasens et al., 2010): the automatic setting with system markables and the gold setting with true markables. Test set size is 85 documents. 
In the experiments with A-INF we use Wikipedia to compute association information and then evaluate the model on the test sets of ACE and OntoNotes. For the experiments with UNSEL, we use its unsupervised subsystem A-INF (which uses Wikipedia association scores) to automatically label the training sets of ACE and OntoNotes. Then for each data set, the supervised subsystem of UNSEL (i.e., SUCRE) is trained on its automatically labeled training set and then evaluated on its test set. Finally, for the supervised experiments, we use the manually labeled training sets and evaluate on the corresponding test sets. 4.2 Evaluation Metrics We report recall, precision, and F1 for MUC (Vilain et al., 1995), B3 (Bagga and Baldwin, 1998), and CEAF (Luo, 2005). We selected these three metrics because a single metric is often misleading and because we need to use metrics that were used in previous unsupervised work. It is well known that MUC by itself is insufficient because it gives misleadingly high scores to the “single-chain” system that puts all markables into one chain (Luo et al., 2004; Finkel and Manning, 2008). However, B3 and CEAF have a different bias: they give high scores to the “all-singletons” system that puts each markable in a separate chain. On OntoNotes test, we get B3 = 83.2 and CEAF = 71.2 for all-singletons, which incorrectly suggests that performance is good; but MUC F1 is 0 in this case, demonstrating that all-singletons performs poorly. With the goal of performing a complete evaluation, one that punishes all-singletons as well as single-chain, we use one of the following two combinations: (i) MUC and B3 or (ii) MUC and CEAF. Recasens et al. (2010) showed that B3 and CEAF are highly correlated (Pearson’s r = 0.91). Therefore, either combination (i) or combination (ii) fairly characterizes CoRe performance. 5 Results and Discussion Table 1 compares our unsupervised self-trained model UNSEL and unsupervised model A-INF to 788 MUC B3 CEAF BNEWS-ACE-2 Prec. Rec. F1 Prec. Rec. F1 Prec. Rec. F1 1 P&D 68.3 66.6 67.4 70.3 65.3 67.7 – – – 2 A-INF 60.8 61.4 61.1 55.5 69.0 61.5 52.6 52.0 52.3 3 UNSEL 72.5 65.6 68.9 72.5 66.4 69.3 56.7 64.8 60.5 4 SUCRE 86.6 60.3 71.0 87.6 64.6 74.4 56.1 81.6 66.5 NWIRE-ACE-2 Prec. Rec. F1 Prec. Rec. F1 Prec. Rec. F1 5 P&D 67.7 67.3 67.4 74.7 68.8 71.6 – – – 6 A-INF 62.4 57.4 59.8 59.2 62.4 60.7 46.8 52.5 49.5 7 UNSEL 76.2 61.5 68.1 81.5 67.6 73.9 61.5 77.1 68.4 8 SUCRE 82.5 65.7 73.1 85.4 72.3 78.3 63.5 80.6 71.0 NPAPER-ACE-2 Prec. Rec. F1 Prec. Rec. F1 Prec. Rec. F1 9 P&D 69.2 71.7 70.4 70.0 66.5 68.2 – – – 10 A-INF 60.6 56.0 58.2 52.4 60.3 56.0 38.9 44.0 41.3 11 UNSEL 78.6 65.7 71.6 74.0 68.0 70.9 57.6 73.2 64.5 12 SUCRE 82.5 67.0 73.9 80.7 69.5 74.6 58.8 77.1 66.7 BNEWS-ACE2003 Prec. Rec. F1 Prec. Rec. F1 Prec. Rec. F1 13 H&K 68.3 56.8 62.0 – – – 59.9 53.9 56.7 14 Ng 71.4 56.1 62.8 – – – 60.5 53.3 56.7 15 A-INF 60.9 64.9 62.8 50.9 72.5 59.8 53.8 49.4 51.5 16 UNSEL 69.5 65.0 67.1 70.2 65.9 68.0 58.5 64.2 61.2 17 SUCRE 73.9 68.5 71.1 75.4 69.6 72.4 60.1 66.6 63.2 NWIRE-ACE2003 Prec. Rec. F1 Prec. Rec. F1 Prec. Rec. F1 18 H&K 66.2 46.8 54.8 – – – 62.8 49.6 55.4 19 Ng 68.3 47.0 55.7 – – – 60.7 49.2 54.4 20 A-INF 62.7 60.5 61.6 54.8 66.1 59.9 47.7 50.2 49.0 21 UNSEL 64.8 68.6 66.6 61.5 73.6 67.0 59.8 55.1 57.3 22 SUCRE 77.6 69.3 73.2 78.8 75.2 76.9 65.1 74.4 69.5 Table 1: Scores for MCORE (A-INF, SUCRE and UNSEL) and three comparable systems on ACE-2 and ACE2003. 
P&D (Poon and Domingos, 2008) on ACE-2; and to Ng (Ng, 2008) and H&K5 (Haghighi and Klein, 2007) on ACE2003. To our knowledge, these three papers are the best and most recent evaluation results for unsupervised learning and they all report results on ACE-2 and ACE-2003. Results on SUCRE will be discussed later in this section. A-INF scores are below some of the earlier unsupervised work reported in the literature (lines 2, 6, 10) although they are close to competitive on two of the datasets (lines 15 and 20: MUC scores are equal or better, CEAF scores are worse). Given the simplicity of A-INF, which uses nothing but asso5We report numbers for the better performing Pronoun-only Salience variant of H&K proposed by Ng (2008). ciations mined from a large unannotated corpus, its performance is surprisingly good. Turning to UNSEL, we see that F1 is always better for UNSEL than for A-INF, for all three measures (lines 3 vs 2, 7 vs 6, 11 vs 10, 16 vs 15, 21 vs 20). This demonstrates that the self-training step of UNSEL is able to correct many of the errors that A-INF commits. Both precision and recall are improved with two exceptions: recall of B3 decreases from line 2 to 3 and from 15 to 16. When comparing the unsupervised system UNSEL to previous unsupervised results, we find that UNSEL’s F1 is higher in all runs (lines 3 vs 1, 7 vs 5, 11 vs 9, 16 vs 13&14, 21 vs 18&19). The differences are large (up to 11%) compared to H&K and 789 Ng. The difference to P&D is smaller, ranging from 2.7% (B3, lines 11 vs 9) to 0.7% (MUC, lines 7 vs 5). Given that MCORE is a simpler and more efficient system than this prior work on unsupervised CoRe, these results are promising. In contrast to F1, there is no consistent trend for precision and recall. For example, P&D is better than UNSEL on MUC recall for BNEWS-ACE-2 (lines 1 vs 3) and H&K is better than UNSEL on CEAF precision for NWIRE-ACE2003 (lines 18 vs 21). But this higher variability for precision and recall is to be expected since every system trades the two measures off differently. These results show that the application of selftraining significantly improves performance. As discussed in Section 3.1, self-training has positive effects on both recall and precision. We now present two simplified examples that illustrate this point. Example for recall. Consider the markable pair (Novoselov6,he) in the test set. Its N score is 0 because our subset of 2009 Wikipedia sentences has no occurrence of Novoselov. However, A-INF finds many similar pairs like (Einstein,he) and (Hawking,he), pairs that have high N scores. Suppose we represent pairs using the following five features: <sentence distance, string match, type of first markable, type of second markable, number agreement>. Then (Einstein,he), (Hawking,he) and (Novoselov,he) will all be assigned the feature vector <1, No, Proper Noun, Personal Pronoun, Yes>. We can now automatically label Wikipedia using A-INF – this will label (Einstein,he) and (Hawking,he) as coreferent – and train SUCRE on the resulting training set. SUCRE can then resolve the coreference (Novoselov,he) correctly. We call this the better recall effect. Example for precision. Using the same representation of pairs, suppose that for the sequence of markables Biden, Obama, President the markable pairs (Biden,President) and (Obama,President) are assigned the feature vectors <8, No, Proper Noun, Proper Noun, Yes> and <1, No, Proper Noun, Proper Noun, Yes>, respectively. 
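The simplified five-feature pair representation used in these examples can be made concrete with the following sketch; it is a hypothetical illustration, not the actual SUCRE feature templates, and the sentence positions are invented for the example.

```python
# Hypothetical illustration of the simplified five-feature pair representation
# used in the recall/precision examples; not the actual SUCRE feature templates.
from collections import namedtuple

Markable = namedtuple("Markable", "text sent mtype number")

def pair_features(antecedent, anaphor):
    return (
        anaphor.sent - antecedent.sent,                      # sentence distance
        "Yes" if antecedent.text == anaphor.text else "No",  # string match
        antecedent.mtype,                                    # type of first markable
        anaphor.mtype,                                       # type of second markable
        "Yes" if antecedent.number == anaphor.number else "No",  # number agreement
    )

biden = Markable("Biden", sent=1, mtype="Proper Noun", number="Sg")
obama = Markable("Obama", sent=8, mtype="Proper Noun", number="Sg")
president = Markable("President", sent=9, mtype="Proper Noun", number="Sg")
print(pair_features(biden, president))  # (8, 'No', 'Proper Noun', 'Proper Noun', 'Yes')
print(pair_features(obama, president))  # (1, 'No', 'Proper Noun', 'Proper Noun', 'Yes')
```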
Since both pairs have N scores > 0.5, A-INF incorrectly puts the three markables into one cluster. But as we would expect, A-INF labels many more markable pairs 6The 2010 physics Nobel laureate. 10 20 30 40 50 60 70 80 100 20000 40000 60000 Prec., Rec. and F1 Number of input Wikipedia articles MUC-Prec. MUC-Rec. MUC-F1 Figure 5: MUC learning curve for A-INF. with the second feature vector (distance=1) as coreferent than with the first one (distance=8) in the entire automatically labeled training set. If we now train SUCRE on this training set, it can resolve such cases in the test set correctly even though they are so similar: (Biden,President) is classified as disreferent and (Obama,President) as coreferent. We call this the better precision effect. Recall that UNSEL has better recall and precision than A-INF in almost all cases (discussion of Table 1). This result shows that better precision and better recall effects do indeed benefit UNSEL. To summarize, the advantages of our self-training approach are: (i) We cover cases that do not occur in the unlabeled corpus (better recall effect); and (ii) we use the leveraging effect of a rich feature space including distance, person, number, gender etc. to improve precision (better precision effect). Learning curve. Figure 5 presents MUC scores of A-INF as a function of the number of Wikipedia articles used in unsupervised learning. We can see that a small number of input articles (e.g., 100) results in low recall and high precision. When we increase the number of input articles, recall rapidly increases and precision rapidly decreases up to about 10,000 articles. Increase and decrease continue more slowly after that. F1 increases throughout because lower precision is compensated by higher recall. This learning curve demonstrates the importance of the size of the corpus for A-INF. Comparison of UNSEL with SUCRE Table 2 compares our unsupervised self-trained (UNSEL) and supervised (SUCRE) models with the recently published SemEval-2010 OntoNotes re790 Gold setting + True markables System MD MUC B3 CEAF Relax 100 33.7 84.5 75.6 SUCRE2010 100 60.8 82.4 74.3 SUCRE 100 64.3 87.0 80.1 UNSEL 100 63.0 86.9 79.7 Automatic setting + System markables System MD MUC B3 CEAF SUCRE2010 80.7 52.5 67.1 62.7 Tanl-1 73.9 24.6 61.3 57.3 SUCRE 80.9 55.7 69.7 66.6 UNSEL 80.9 55.0 69.8 66.3 Table 2: F1 scores for MCORE (SUCRE and UNSEL) and the best comparable systems in SemEval-2010. MD: Markable Detection F1 (Recasens et al., 2010). sults (gold and automatic settings). We compare with the scores of the two best systems, Relax and SUCRE20107 (for the gold setting with true markables) and SUCRE2010 and Tanl-1 (for the automatic setting with system markables, 89.9% markable detection (MD) F1). It is apparent from this table that our supervised and unsupervised self-trained models outperform Relax, SUCRE2010 and Tanl-1. We should make clear that we did not use the test set for development to ensure a fair comparison with the participant systems at SemEval-2010. Table 1 shows that the unsupervised self-trained system (UNSEL) does a lot worse than the supervised system (SUCRE) on ACE.8 In contrast, UNSEL performs almost as well as SUCRE on OntoNotes (Table 2), for both gold and automatic settings: F1 differences range from +.1 (Automatic, B3) to −1.3 (Gold, MUC). We suspect that this is partly due to the much higher proportion of singletons in OntoNotes than in ACE-2: 85.2% (OntoNotes) vs. 60.2% (ACE-2). 
The low recall of the automatic labeling by A-INF introduces a bias for singletons when UNSEL is self-trained. Another reason is that the OntoNotes training set is about 4 times larger than each of BNEWS, NWIRE and 7It is the first version of our supervised system that took part in SemEval-2010. We call it SUCRE2010. 8A reviewer observes that SUCRE’s performance is better than the supervised system of Ng (2008). This may indicate that part of our improved unsupervised performance in Table 1 is due to better feature engineering implemented in SUCRE. NPAPER training sets. With more training data, UNSEL can correct more of its precision and recall errors. For an unsupervised approach, which only needs unlabeled data, there is little cost to creating large training sets. Thus, this comparison of ACE-2/Ontonotes results is evidence that in a realistic scenario using association information in an unsupervised self-trained system is almost as good as a system trained on manually labeled data. It is important to note that the comparison of SUCRE to UNSEL is the most direct comparison of supervised and unsupervised CoRe learning we are aware of. The two systems are identical with the single exception that they are trained on manual vs. automatic coreference labels. 6 Conclusion In this paper, we have demonstrated the utility of association information for coreference resolution. We first developed a simple unsupervised model for shallow CoRe that only uses association information for finding coreference chains. We then introduced an unsupervised self-trained approach where a supervised model is trained on a corpus that was automatically labeled by the unsupervised model based on the association information. The results of the experiments indicate that the performance of the unsupervised self-trained approach is better than the performance of other unsupervised learning systems. In addition, we showed that our system is a flexible and modular framework that is able to learn from data with different quality (perfect vs noisy markable detection) and domain; and is able to deliver good results for shallow information spaces and competitive results for rich feature spaces. Finally, our framework is the first CoRe system that is designed to support three major modes of machine learning equally well: supervised, self-trained and unsupervised. Acknowledgments This research was funded by DFG (grant SCHU 2246/4). We thank Aoife Cahill, Alexander Fraser, Thomas M¨uller and the anonymous reviewers for their helpful comments. 791 References Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In LREC Workshop on Linguistics Coreference ’98, pages 563–566. Thomas M. Cover and Joy A. Thomas. 1991. Elements of Information Theory. Wiley. Micha Elsner, Eugene Charniak, and Mark Johnson. 2009. Structured generative models for unsupervised named-entity clustering. In HLT-NAACL ’09, pages 164–172. Jenny Rose Finkel and Christopher D. Manning. 2008. Enforcing transitivity in coreference resolution. In HLT ’08, pages 45–48. Aria Haghighi and Dan Klein. 2007. Unsupervised coreference resolution in a nonparametric bayesian model. In ACL ’07, pages 848–855. Aria Haghighi and Dan Klein. 2009. Simple coreference resolution with rich syntactic and semantic features. In EMNLP ’09, pages 1152–1161. Aria Haghighi and Dan Klein. 2010. Coreference resolution in a modular, entity-centered model. In NAACLHLT ’10, pages 385–393. Andrew Kehler, Douglas E. Appelt, Lara Taylor, and Aleksandr Simma. 2004. 
Competitive Self-Trained Pronoun Interpretation. In HLT-NAACL ’04, pages 33–36. Manfred Klenner and ´Etienne Ailloud. 2009. Optimization in coreference resolution is not needed: A nearly-optimal algorithm with intensional constraints. In EACL, pages 442–450. Hamidreza Kobdani and Hinrich Sch¨utze. 2010. Sucre: A modular system for coreference resolution. In SemEval ’10, pages 92–95. Hamidreza Kobdani, Hinrich Sch¨utze, Andre Burkovski, Wiltrud Kessler, and Gunther Heidemann. 2010. Relational feature engineering of natural language processing. In CIKM ’10. ACM. Xiaoqiang Luo, Abe Ittycheriah, Hongyan Jing, Nanda Kambhatla, and Salim Roukos. 2004. A Mention-Synchronous Coreference Resolution Algorithm Based on the Bell Tree. In ACL ’04, pages 135– 142. Xiaoqiang Luo. 2005. On coreference resolution performance metrics. In HLT ’05, pages 25–32. A. McEnery, I. Tanaka, and S. Botley. 1997. Corpus annotation and reference resolution. In ANARESOLUTION ’97, pages 67–74. Alexis Mitchell, Stephanie Strassel, Mark Przybocki, JK Davis, George Doddington, Ralph Grishman, Adam Meyers, Ada Brunstein, Lisa Ferro, and Beth Sundheim. 2003. ACE-2 version 1.0. Linguistic Data Consortium, Philadelphia. Christoph M¨uller, Stefan Rapp, and Michael Strube. 2002. Applying co-training to reference resolution. In ACL ’02, pages 352–359. Vincent Ng and Claire Cardie. 2003. Bootstrapping coreference classifiers with multiple machine learning algorithms. In EMNLP ’03, pages 113–120. Vincent Ng. 2008. Unsupervised models for coreference resolution. In EMNLP ’08, pages 640–649. Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In HLT-NAACL ’07, pages 404–411. Simone Paolo Ponzetto. 2010. Knowledge Acquisition from a Collaboratively Generated Encyclopedia, volume 327 of Dissertations in Artificial Intelligence. Amsterdam, The Netherlands: IOS Press & Heidelberg, Germany: AKA Verlag. Hoifung Poon and Pedro Domingos. 2008. Joint unsupervised coreference resolution with markov logic. In EMNLP ’08, pages 650–659. Sameer S. Pradhan, Eduard Hovy, Mitch Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2007. Ontonotes: A unified relational semantic representation. In ICSC ’07, pages 517–526. J. Ross Quinlan. 1993. C4.5: Programs for machine learning. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA. Karthik Raghunathan, Heeyoung Lee, Sudarshan Rangarajan, Nathanael Chambers, Mihai Surdeanu, Dan Jurafsky, and Christopher Manning. 2010. A multipass sieve for coreference resolution. In EMNLP ’10, pages 492–501. Marta Recasens, Llu´ıs M`arquez, Emili Sapena, M.Ant`onia Mart´ı, Mariona Taul´e, V´eronique Hoste, Massimo Poesio, and Yannick Versley. 2010. SemEval-2010 Task 1: Coreference resolution in multiple languages. In SemEval ’10, pages 70–75. Wee Meng Soon, Hwee Tou Ng, and Daniel Chung Yong Lim. 2001. A machine learning approach to coreference resolution of noun phrases. In CL ’01, pages 521–544. Veselin Stoyanov, Claire Cardie, Nathan Gilbert, Ellen Riloff, David Buttler, and David Hysom. 2010. Coreference resolution with reconcile. In ACL ’10, pages 156–161. Marc Vilain, John Burger, John Aberdeen, Dennis Connolly, and Lynette Hirschman. 1995. A modeltheoretic coreference scoring scheme. In MUC6 ’95, pages 45–52. 792
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 72–82, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Exact Decoding of Syntactic Translation Models through Lagrangian Relaxation Alexander M. Rush MIT CSAIL, Cambridge, MA 02139, USA [email protected] Michael Collins Department of Computer Science, Columbia University, New York, NY 10027, USA [email protected] Abstract We describe an exact decoding algorithm for syntax-based statistical translation. The approach uses Lagrangian relaxation to decompose the decoding problem into tractable subproblems, thereby avoiding exhaustive dynamic programming. The method recovers exact solutions, with certificates of optimality, on over 97% of test examples; it has comparable speed to state-of-the-art decoders. 1 Introduction Recent work has seen widespread use of synchronous probabilistic grammars in statistical machine translation (SMT). The decoding problem for a broad range of these systems (e.g., (Chiang, 2005; Marcu et al., 2006; Shen et al., 2008)) corresponds to the intersection of a (weighted) hypergraph with an n-gram language model.1 The hypergraph represents a large set of possible translations, and is created by applying a synchronous grammar to the source language string. The language model is then used to rescore the translations in the hypergraph. Decoding with these models is challenging, largely because of the cost of integrating an n-gram language model into the search process. Exact dynamic programming algorithms for the problem are well known (Bar-Hillel et al., 1964), but are too expensive to be used in practice.2 Previous work on decoding for syntax-based SMT has therefore been focused primarily on approximate search methods. This paper describes an efficient algorithm for exact decoding of synchronous grammar models for translation. We avoid the construction of (Bar-Hillel 1This problem is also relevant to other areas of statistical NLP, for example NL generation (Langkilde, 2000). 2E.g., with a trigram language model they run in O(|E|w6) time, where |E| is the number of edges in the hypergraph, and w is the number of distinct lexical items in the hypergraph. et al., 1964) by using Lagrangian relaxation to decompose the decoding problem into the following sub-problems: 1. Dynamic programming over the weighted hypergraph. This step does not require language model integration, and hence is highly efficient. 2. Application of an all-pairs shortest path algorithm to a directed graph derived from the weighted hypergraph. The size of the derived directed graph is linear in the size of the hypergraph, hence this step is again efficient. Informally, the first decoding algorithm incorporates the weights and hard constraints on translations from the synchronous grammar, while the second decoding algorithm is used to integrate language model scores. Lagrange multipliers are used to enforce agreement between the structures produced by the two decoding algorithms. In this paper we first give background on hypergraphs and the decoding problem. We then describe our decoding algorithm. The algorithm uses a subgradient method to minimize a dual function. The dual corresponds to a particular linear programming (LP) relaxation of the original decoding problem. The method will recover an exact solution, with a certificate of optimality, if the underlying LP relaxation has an integral solution. 
In some cases, however, the underlying LP will have a fractional solution, in which case the method will not be exact. The second technical contribution of this paper is to describe a method that iteratively tightens the underlying LP relaxation until an exact solution is produced. We do this by gradually introducing constraints to step 1 (dynamic programming over the hypergraph), while still maintaining efficiency. 72 We report experiments using the tree-to-string model of (Huang and Mi, 2010). Our method gives exact solutions on over 97% of test examples. The method is comparable in speed to state-of-the-art decoding algorithms; for example, over 70% of the test examples are decoded in 2 seconds or less. We compare our method to cube pruning (Chiang, 2007), and find that our method gives improved model scores on a significant number of examples. One consequence of our work is that we give accurate estimates of the number of search errors for cube pruning. 2 Related Work A variety of approximate decoding algorithms have been explored for syntax-based translation systems, including cube-pruning (Chiang, 2007; Huang and Chiang, 2007), left-to-right decoding with beam search (Watanabe et al., 2006; Huang and Mi, 2010), and coarse-to-fine methods (Petrov et al., 2008). Recent work has developed decoding algorithms based on finite state transducers (FSTs). Iglesias et al. (2009) show that exact FST decoding is feasible for a phrase-based system with limited reordering (the MJ1 model (Kumar and Byrne, 2005)), and de Gispert et al. (2010) show that exact FST decoding is feasible for a specific class of hierarchical grammars (shallow-1 grammars). Approximate search methods are used for more complex reordering models or grammars. The FST algorithms are shown to produce higher scoring solutions than cube-pruning on a large proportion of examples. Lagrangian relaxation is a classical technique in combinatorial optimization (Korte and Vygen, 2008). Lagrange multipliers are used to add linear constraints to an existing problem that can be solved using a combinatorial algorithm; the resulting dual function is then minimized, for example using subgradient methods. In recent work, dual decomposition—a special case of Lagrangian relaxation, where the linear constraints enforce agreement between two or more models—has been applied to inference in Markov random fields (Wainwright et al., 2005; Komodakis et al., 2007; Sontag et al., 2008), and also to inference problems in NLP (Rush et al., 2010; Koo et al., 2010). There are close connections between dual decomposition and work on belief propagation (Smith and Eisner, 2008). 3 Background: Hypergraphs Translation with many syntax-based systems (e.g., (Chiang, 2005; Marcu et al., 2006; Shen et al., 2008; Huang and Mi, 2010)) can be implemented as a two-step process. The first step is to take an input sentence in the source language, and from this to create a hypergraph (sometimes called a translation forest) that represents the set of possible translations (strings in the target language) and derivations under the grammar. The second step is to integrate an n-gram language model with this hypergraph. For example, in the system of (Chiang, 2005), the hypergraph is created as follows: first, the source side of the synchronous grammar is used to create a parse forest over the source language string. Second, transduction operations derived from synchronous rules in the grammar are used to create the target-language hypergraph. 
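Before the formal definitions that follow, the language-model-free search over such a structure can be sketched as follows; this is our own illustrative data structure and Viterbi routine, not the decoder used in the experiments.

```python
# Illustrative sketch (not the authors' decoder): Viterbi search over an
# acyclic weighted hypergraph with no language-model integration.
from collections import namedtuple

Hyperedge = namedtuple("Hyperedge", "head tail weight")

def best_derivation(vertices, edges, vertex_weight, root=1):
    """vertices must be topologically ordered, leaves first and root last."""
    incoming = {v: [] for v in vertices}
    for e in edges:
        incoming[e.head].append(e)
    best, back = {}, {}
    for v in vertices:
        if not incoming[v]:                      # leaf vertex
            best[v] = vertex_weight.get(v, 0.0)
            continue
        scored = [(e.weight + vertex_weight.get(v, 0.0)
                   + sum(best[c] for c in e.tail), e) for e in incoming[v]]
        best[v], back[v] = max(scored, key=lambda pair: pair[0])
    derivation, stack = [], [root]               # read the best edges top-down
    while stack:
        v = stack.pop()
        if v in back:
            derivation.append(back[v])
            stack.extend(back[v].tail)
    return best[root], derivation

edges = [Hyperedge(head=2, tail=(4, 5), weight=1.0),
         Hyperedge(head=1, tail=(2, 3), weight=0.5)]
print(best_derivation([5, 4, 3, 2, 1], edges, {4: 0.2, 5: 0.3, 3: 0.1}))
```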
Chiang’s method uses a synchronous context-free grammar, but the hypergraph formalism is applicable to a broad range of other grammatical formalisms, for example dependency grammars (e.g., (Shen et al., 2008)). A hypergraph is a pair (V, E) where V = {1, 2, . . . , |V |} is a set of vertices, and E is a set of hyperedges. A single distinguished vertex is taken as the root of the hypergraph; without loss of generality we take this vertex to be v = 1. Each hyperedge e ∈E is a tuple ⟨⟨v1, v2, . . . , vk⟩, v0⟩where v0 ∈V , and vi ∈{2 . . . |V |} for i = 1 . . . k. The vertex v0 is referred to as the head of the edge. The ordered sequence ⟨v1, v2, . . . , vk⟩is referred to as the tail of the edge; in addition, we sometimes refer to v1, v2, . . . vk as the children in the edge. The number of children k may vary across different edges, but k ≥1 for all edges (i.e., each edge has at least one child). We will use h(e) to refer to the head of an edge e, and t(e) to refer to the tail. We will assume that the hypergraph is acyclic: intuitively this will mean that no derivation (as defined below) contains the same vertex more than once (see (Martin et al., 1990) for a formal definition). Each vertex v ∈V is either a non-terminal in the hypergraph, or a leaf. The set of non-terminals is VN = {v ∈V : ∃e ∈E such that h(e) = v} Conversely, the set of leaves is defined as VL = {v ∈V :̸ ∃e ∈E such that h(e) = v} 73 Finally, we assume that each v ∈V has a label l(v). The labels for leaves will be words, and will be important in defining strings and language model scores for those strings. The labels for non-terminal nodes will not be important for results in this paper.3 We now turn to derivations. Define an index set I = V ∪E. A derivation is represented by a vector y = {yr : r ∈I} where yv = 1 if vertex v is used in the derivation, yv = 0 otherwise (similarly ye = 1 if edge e is used in the derivation, ye = 0 otherwise). Thus y is a vector in {0, 1}|I|. A valid derivation satisfies the following constraints: • y1 = 1 (the root must be in the derivation). • For all v ∈VN, yv = P e:h(e)=v ye. • For all v ∈2 . . . |V |, yv = P e:v∈t(e) ye. We use Y to refer to the set of valid derivations. The set Y is a subset of {0, 1}|I| (not all members of {0, 1}|I| will correspond to valid derivations). Each derivation y in the hypergraph will imply an ordered sequence of leaves v1 . . . vn. We use s(y) to refer to this sequence. The sentence associated with the derivation is then l(v1) . . . l(vn). In a weighted hypergraph problem, we assume a parameter vector θ = {θr : r ∈I}. The score for any derivation is f(y) = θ · y = P r∈I θryr. Simple bottom-up dynamic programming—essentially the CKY algorithm—can be used to find y∗ = arg maxy∈Y f(y) under these definitions. The focus of this paper will be to solve problems involving the integration of a k’th order language model with a hypergraph. In these problems, the score for a derivation is modified to be f(y) = X r∈I θryr + n X i=k θ(vi−k+1, vi−k+2, . . . , vi) (1) where v1 . . . vn = s(y). The θ(vi−k+1, . . . , vi) parameters score n-grams of length k. These parameters are typically defined by a language model, for example with k = 3 we would have θ(vi−2, vi−1, vi) = log p(l(vi)|l(vi−2), l(vi−1)). The problem is then to find y∗= arg maxy∈Y f(y) under this definition. Throughout this paper we make the following assumption when using a bigram language model: 3They might for example be non-terminal symbols from the grammar used to generate the hypergraph. 
Assumption 3.1 (Bigram start/end assumption.) For any derivation y, with leaves s(y) = v1, v2, . . . , vn, it is the case that: (1) v1 = 2 and vn = 3; (2) the leaves 2 and 3 cannot appear at any other position in the strings s(y) for y ∈Y; (3) l(2) = <s> where <s> is the start symbol in the language model; (4) l(3) = </s> where </s> is the end symbol. This assumption allows us to incorporate language model terms that depend on the start and end symbols. It also allows a clean solution for boundary conditions (the start/end of strings).4 4 A Simple Lagrangian Relaxation Algorithm We now give a Lagrangian relaxation algorithm for integration of a hypergraph with a bigram language model, in cases where the hypergraph satisfies the following simplifying assumption: Assumption 4.1 (The strict ordering assumption.) For any two leaves v and w, it is either the case that: 1) for all derivations y such that v and w are both in the sequence l(y), v precedes w; or 2) for all derivations y such that v and w are both in l(y), w precedes v. Thus under this assumption, the relative ordering of any two leaves is fixed. This assumption is overly restrictive:5 the next section describes an algorithm that does not require this assumption. However deriving the simple algorithm will be useful in developing intuition, and will lead directly to the algorithm for the unrestricted case. 4.1 A Sketch of the Algorithm At a high level, the algorithm is as follows. We introduce Lagrange multipliers u(v) for all v ∈VL, with initial values set to zero. The algorithm then involves the following steps: (1) For each leaf v, find the previous leaf w that maximizes the score θ(w, v) −u(w) (call this leaf α∗(v), and define αv = θ(α∗(v), v) −u(α∗(v))). (2) find the highest scoring derivation using dynamic programming 4The assumption generalizes in the obvious way to k’th order language models: e.g., for trigram models we assume that v1 = 2, v2 = 3, vn = 4, l(2) = l(3) = <s>, l(4) = </s>. 5It is easy to come up with examples that violate this assumption: for example a hypergraph with edges ⟨⟨4, 5⟩, 1⟩and ⟨⟨5, 4⟩, 1⟩violates the assumption. The hypergraphs found in translation frequently contain alternative orderings such as this. 74 over the original (non-intersected) hypergraph, with leaf nodes having weights θv + αv + u(v). (3) If the output derivation from step 2 has the same set of bigrams as those from step 1, then we have an exact solution to the problem. Otherwise, the Lagrange multipliers u(v) are modified in a way that encourages agreement of the two steps, and we return to step 1. Steps 1 and 2 can be performed efficiently; in particular, we avoid the classical dynamic programming intersection, instead relying on dynamic programming over the original, simple hypergraph. 4.2 A Formal Description We now give a formal description of the algorithm. Define B ⊆VL×VL to be the set of all ordered pairs ⟨v, w⟩such that there is at least one derivation y with v directly preceding w in s(y). Extend the bit-vector y to include variables y(v, w) for ⟨v, w⟩∈B where y(v, w) = 1 if leaf v is followed by w in s(y), 0 otherwise. We redefine the index set to be I = V ∪ E ∪B, and define Y ⊆{0, 1}|I| to be the set of all possible derivations. Under assumptions 3.1 and 4.1 above, Y = {y : y satisfies constraints C0, C1, C2} where the constraint definitions are: • (C0) The yv and ye variables form a derivation in the hypergraph, as defined in section 3. • (C1) For all v ∈VL such that v ̸= 2, yv = P w:⟨w,v⟩∈B y(w, v). 
• (C2) For all v ∈VL such that v ̸= 3, yv = P w:⟨v,w⟩∈B y(v, w). C1 states that each leaf in a derivation has exactly one in-coming bigram, and that each leaf not in the derivation has 0 incoming bigrams; C2 states that each leaf in a derivation has exactly one out-going bigram, and that each leaf not in the derivation has 0 outgoing bigrams.6 The score of a derivation is now f(y) = θ · y, i.e., f(y) = X v θvyv+ X e θeye+ X ⟨v,w⟩∈B θ(v, w)y(v, w) where θ(v, w) are scores from the language model. Our goal is to compute y∗= arg maxy∈Y f(y). 6Recall that according to the bigram start/end assumption the leaves 2/3 are reserved for the start/end of the sequence s(y), and hence do not have an incoming/outgoing bigram. Initialization: Set u0(v) = 0 for all v ∈VL Algorithm: For t = 1 . . . T: • yt = arg maxy∈Y′ L(ut−1, y) • If yt satisfies constraints C2, return yt, Else ∀v ∈VL, ut(v) = ut−1(v) −δt yt(v) −P w:⟨v,w⟩∈B yt(v, w) . Figure 1: A simple Lagrangian relaxation algorithm. δt > 0 is the step size at iteration t. Next, define Y′ as Y′ = {y : y satisfies constraints C0 and C1} In this definition we have dropped the C2 constraints. To incorporate these constraints, we use Lagrangian relaxation, with one Lagrange multiplier u(v) for each constraint in C2. The Lagrangian is L(u, y) = f(y) + X v u(v)(y(v) − X w:⟨v,w⟩∈B y(v, w)) = β · y where βv = θv + u(v), βe = θe, and β(v, w) = θ(v, w) −u(v). The dual problem is to find minu L(u) where L(u) = max y∈Y′ L(u, y) Figure 1 shows a subgradient method for solving this problem. At each point the algorithm finds yt = arg maxy∈Y′ L(ut−1, y), where ut−1 are the Lagrange multipliers from the previous iteration. If yt satisfies the C2 constraints in addition to C0 and C1, then it is returned as the output from the algorithm. Otherwise, the multipliers u(v) are updated. Intuitively, these updates encourage the values of yv and P w:⟨v,w⟩∈B y(v, w) to be equal; formally, these updates correspond to subgradient steps. The main computational step at each iteration is to compute arg maxy∈Y′ L(ut−1, y) This step is easily solved, as follows (we again use βv, βe and β(v1, v2) to refer to the parameter values that incorporate Lagrange multipliers): • For all v ∈ VL, define α∗(v) = arg maxw:⟨w,v⟩∈B β(w, v) and αv = β(α∗(v), v). For all v ∈VN define αv = 0. 75 • Using dynamic programming, find values for the yv and ye variables that form a valid derivation, and that maximize f′(y) = P v(βv + αv)yv + P e βeye. • Set y(v, w) = 1 iff y(w) = 1 and α∗(w) = v. The critical point here is that through our definition of Y′, which ignores the C2 constraints, we are able to do efficient search as just described. In the first step we compute the highest scoring incoming bigram for each leaf v. In the second step we use conventional dynamic programming over the hypergraph to find an optimal derivation that incorporates weights from the first step. Finally, we fill in the y(v, w) values. Each iteration of the algorithm runs in O(|E| + |B|) time. There are close connections between Lagrangian relaxation and linear programming relaxations. 
The most important formal results are: 1) for any value of u, L(u) ≥f(y∗) (hence the dual value provides an upper bound on the optimal primal value); 2) under an appropriate choice of the step sizes δt, the subgradient algorithm is guaranteed to converge to the minimum of L(u) (i.e., we will minimize the upper bound, making it as tight as possible); 3) if at any point the algorithm in figure 1 finds a yt that satisfies the C2 constraints, then this is guaranteed to be the optimal primal solution. Unfortunately, this algorithm may fail to produce a good solution for hypergraphs where the strict ordering constraint does not hold. In this case it is possible to find derivations y that satisfy constraints C0, C1, C2, but which are invalid. As one example, consider a derivation with s(y) = 2, 4, 5, 3 and y(2, 3) = y(4, 5) = y(5, 4) = 1. The constraints are all satisfied in this case, but the bigram variables are invalid (e.g., they contain a cycle). 5 The Full Algorithm We now describe our full algorithm, which does not require the strict ordering constraint. In addition, the full algorithm allows a trigram language model. We first give a sketch, and then give a formal definition. 5.1 A Sketch of the Algorithm A crucial idea in the new algorithm is that of paths between leaves in hypergraph derivations. Previously, for each derivation y, we had defined s(y) = v1, v2, . . . , vn to be the sequence of leaves in y. In addition, we will define g(y) = p0, v1, p1, v2, p2, v3, p3, . . . , pn−1, vn, pn where each pi is a path in the derivation between leaves vi and vi+1. The path traces through the nonterminals that are between the two leaves in the tree. As an example, consider the following derivation (with hyperedges ⟨⟨2, 5⟩, 1⟩and ⟨⟨3, 4⟩, 2⟩): 1 2 3 4 5 For this example g(y) is ⟨1 ↓, 2 ↓⟩⟨2 ↓, 3 ↓⟩ ⟨3 ↓⟩, 3, ⟨3 ↑⟩⟨3 ↑, 4 ↓⟩⟨4 ↓⟩, 4, ⟨4 ↑⟩⟨4 ↑, 2 ↑⟩ ⟨2 ↑, 5 ↓⟩⟨5 ↓⟩, 5, ⟨5 ↑⟩⟨5 ↑, 1 ↑⟩. States of the form ⟨a ↓⟩and ⟨a ↑⟩where a is a leaf appear in the paths respectively before/after the leaf a. States of the form ⟨a, b⟩correspond to the steps taken in a top-down, left-to-right, traversal of the tree, where down and up arrows indicate whether a node is being visited for the first or second time (the traversal in this case would be 1, 2, 3, 4, 2, 5, 1). The mapping from a derivation y to a path g(y) can be performed using the algorithm in figure 2. For a given derivation y, define E(y) = {y : ye = 1}, and use E(y) as the set of input edges to this algorithm. The output from the algorithm will be a set of states S, and a set of directed edges T, which together fully define the path g(y). In the simple algorithm, the first step was to predict the previous leaf for each leaf v, under a score that combined a language model score with a Lagrange multiplier score (i.e., compute arg maxw β(w, v) where β(w, v) = θ(w, v) + u(w)). In this section we describe an algorithm that for each leaf v again predicts the previous leaf, but in addition predicts the full path back to that leaf. For example, rather than making a prediction for leaf 5 that it should be preceded by leaf 4, we would also predict the path ⟨4 ↑⟩⟨4 ↑, 2 ↑⟩⟨2 ↑, 5 ↓⟩⟨5 ↓⟩between these two leaves. Lagrange multipliers will be used to enforce consistency between these predictions (both paths and previous words) and a valid derivation. 76 Input: A set E of hyperedges. Output: A directed graph S, T where S is a set of vertices, and T is a set of edges. Step 1: Creating S: Define S = ∪e∈ES(e) where S(e) is defined as follows. Assume e = ⟨⟨v1, v2, . . . 
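The subgradient optimization used in Figure 1, and again in the full algorithm below, follows a generic pattern that can be outlined as follows. This is a schematic sketch with hypothetical decode, vertex, and bigram interfaces, not the authors' implementation; step-size and stopping details are simplified.

```python
# Schematic outline of the subgradient loop behind Figure 1 (hypothetical
# decode/vertex/bigram interfaces; step sizes and stopping are simplified).
def simple_lagrangian_relaxation(leaves, successors, decode, T=200, delta=1.0):
    u = {v: 0.0 for v in leaves}          # one multiplier per dropped C2 constraint
    for t in range(1, T + 1):
        y = decode(u)                     # argmax over Y' (steps 1-3 in the text)
        # subgradient: violation of each C2 constraint y_v = sum_w y(v, w)
        g = {v: y.vertex(v) - sum(y.bigram(v, w) for w in successors.get(v, ()))
             for v in leaves}
        if all(abs(gv) < 1e-9 for gv in g.values()):
            return y                      # C2 satisfied: certified exact solution
        for v in leaves:
            u[v] -= (delta / t) * g[v]    # diminishing step size delta_t = delta / t
    return None                           # no certificate within T iterations
```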
, vk⟩, v0⟩. Include the following states in S(e): (1) ⟨v0 ↓, v1 ↓⟩and ⟨vk↑, v0↑⟩. (2) ⟨vj ↑, vj+1↓⟩for j = 1 . . . k −1 (if k = 1 then there are no such states). (3) In addition, for any vj for j = 1 . . . k such that vj ∈VL, add the states ⟨vj ↓⟩ and ⟨vj ↑⟩. Step 2: Creating T: T is formed by including the following directed arcs: (1) Add an arc from ⟨a, b⟩∈S to ⟨c, d⟩∈S whenever b = c. (2) Add an arc from ⟨a, b ↓⟩∈S to ⟨c ↓⟩∈S whenever b = c. (3) Add an arc from ⟨a ↑⟩∈S to ⟨b ↑, c⟩∈S whenever a = b. Figure 2: Algorithm for constructing a directed graph (S, T) from a set of hyperedges E. 5.2 A Formal Description We first use the algorithm in figure 2 with the entire set of hyperedges, E, as its input. The result is a directed graph (S, T) that contains all possible paths for valid derivations in V, E (it also contains additional, ill-formed paths). We then introduce the following definition: Definition 5.1 A trigram path p is p = ⟨v1, p1, v2, p2, v3⟩where: a) v1, v2, v3 ∈ VL; b) p1 is a path (sequence of states) between nodes ⟨v1 ↑⟩and ⟨v2 ↓⟩in the graph (S, T); c) p2 is a path between nodes ⟨v2 ↑⟩and ⟨v3 ↓⟩in the graph (S, T). We define P to be the set of all trigram paths in (S, T). The set P of trigram paths plays an analogous role to the set B of bigrams in our previous algorithm. We use v1(p), p1(p), v2(p), p2(p), v3(p) to refer to the individual components of a path p. In addition, define SN to be the set of states in S of the form ⟨a, b⟩(as opposed to the form ⟨c ↓⟩or ⟨c ↑⟩ where c ∈VL). We now define a new index set, I = V ∪E ∪ SN ∪P, adding variables ys for s ∈SN, and yp for p ∈P. If we take Y ⊂{0, 1}|I| to be the set of valid derivations, the optimization problem is to find y∗= arg maxy∈Y f(y), where f(y) = θ · y, that is, f(y) = X v θvyv + X e θeye + X s θsys + X p θpyp In particular, we might define θs = 0 for all s, and θp = log p(l(v3(p))|l(v1(p)), l(v2(p))) where • D0. The yv and ye variables form a valid derivation in the original hypergraph. • D1. For all s ∈SN, ys = P e:s∈S(e) ye (see figure 2 for the definition of S(e)). • D2. For all v ∈VL, yv = P p:v3(p)=v yp • D3. For all v ∈VL, yv = P p:v2(p)=v yp • D4. For all v ∈VL, yv = P p:v1(p)=v yp • D5. For all s ∈SN, ys = P p:s∈p1(p) yp • D6. For all s ∈SN, ys = P p:s∈p2(p) yp • Lagrangian with Lagrange multipliers for D3–D6: L(y, λ, γ, u, v) = θ · y + P v λv yv −P p:v2(p)=v yp + P v γv yv −P p:v1(p)=v yp + P s us ys −P p:s∈p1(p) yp + P s vs ys −P p:s∈p2(p) yp . Figure 3: Constraints D0–D6, and the Lagrangian. p(w3|w1, w2) is a trigram probability. The set P is large (typically exponential in size): however, we will see that we do not need to represent the yp variables explicitly. Instead we will be able to leverage the underlying structure of a path as a sequence of states. The set of valid derivations is Y = {y : y satisfies constraints D0–D6} where the constraints are shown in figure 3. D1 simply states that ys = 1 iff there is exactly one edge e in the derivation such that s ∈S(e). Constraints D2–D4 enforce consistency between leaves in the trigram paths, and the yv values. Constraints D5 and D6 enforce consistency between states seen in the paths, and the ys values. The Lagrangian relaxation algorithm is then derived in a similar way to before. Define Y′ = {y : y satisfies constraints D0–D2} We have dropped the D3–D6 constraints, but these will be introduced using Lagrange multipliers. 
The resulting Lagrangian is shown in figure 3, and can be written as L(y, λ, γ, u, v) = β · y where βv = θv+λv+γv, βs = θs+us+vs, βp = θp−λ(v2(p))− γ(v1(p)) −P s∈p1(p) u(s) −P s∈p2(p) v(s). The dual is L(λ, γ, u, v) = maxy∈Y′ L(y, λ, γ, u, v); figure 4 shows a subgradient method that minimizes this dual. The key step in the algorithm at each iteration is to compute 77 Initialization: Set λ0 = 0, γ0 = 0, u0 = 0, v0 = 0 Algorithm: For t = 1 . . . T: • yt = arg maxy∈Y′ L(y, λt−1, γt−1, ut−1, vt−1) • If yt satisfies the constraints D3–D6, return yt, else: - ∀v ∈VL, λt v = λt−1 v −δt(yt v −P p:v2(p)=v yt p) - ∀v ∈VL, γt v = γt−1 v −δt(yt v −P p:v1(p)=v yt p) - ∀s ∈SN, ut s = ut−1 s −δt(yt s −P p:s∈p1(p) yt p) - ∀s ∈SN, vt s = vt−1 s −δt(yt s −P p:s∈p2(p) yt p) Figure 4: The full Lagrangian relaxation algortihm. δt > 0 is the step size at iteration t. arg maxy∈Y′ L(y, λ, γ, u, v) = arg maxy∈Y′ β · y where β is defined above. Again, our definition of Y′ allows this maximization to be performed efficiently, as follows: 1. For each v ∈ VL, define α∗ v = arg maxp:v3(p)=v β(p), and αv = β(α∗ v). (i.e., for each v, compute the highest scoring trigram path ending in v.) 2. Find values for the yv, ye and ys variables that form a valid derivation, and that maximize f′(y) = P v(βv +αv)yv +P e βeye +P s βsys 3. Set yp = 1 iff yv3(p) = 1 and p = α∗ v3(p). The first step involves finding the highest scoring incoming trigram path for each leaf v. This step can be performed efficiently using the Floyd-Warshall allpairs shortest path algorithm (Floyd, 1962) over the graph (S, T); the details are given in the appendix. The second step involves simple dynamic programming over the hypergraph (V, E) (it is simple to integrate the βs terms into this algorithm). In the third step, the path variables yp are filled in. 5.3 Properties We now describe some important properties of the algorithm: Efficiency. The main steps of the algorithm are: 1) construction of the graph (S, T); 2) at each iteration, dynamic programming over the hypergraph (V, E); 3) at each iteration, all-pairs shortest path algorithms over the graph (S, T). Each of these steps is vastly more efficient than computing an exact intersection of the hypergraph with a language model. Exact solutions. By usual guarantees for Lagrangian relaxation, if at any point the algorithm returns a solution yt that satisfies constraints D3–D6, then yt exactly solves the problem in Eq. 1. Upper bounds. At each point in the algorithm, L(λt, γt, ut, vt) is an upper bound on the score of the optimal primal solution, f(y∗). Upper bounds can be useful in evaluating the quality of primal solutions from either our algorithm or other methods such as cube pruning. Simplicity of implementation. Construction of the (S, T) graph is straightforward. The other steps—hypergraph dynamic programming, and allpairs shortest path—are widely known algorithms that are simple to implement. 6 Tightening the Relaxation The algorithm that we have described minimizes the dual function L(λ, γ, u, v). By usual results for Lagrangian relaxation (e.g., see (Korte and Vygen, 2008)), L is the dual function for a particular LP relaxation arising from the definition of Y′ and the additional constaints D3–D6. 
In some cases the LP relaxation has an integral solution, in which case the algorithm will return an optimal solution yt.7 In other cases, when the LP relaxation has a fractional solution, the subgradient algorithm will still converge to the minimum of L, but the primal solutions yt will move between a number of solutions. We now describe a method that incrementally adds hard constraints to the set Y′, until the method returns an exact solution. For a given y ∈Y′, for any v with yv = 1, we can recover the previous two leaves (the trigram ending in v) from either the path variables yp, or the hypergraph variables ye. Specifically, define v−1(v, y) to be the leaf preceding v in the trigram path p with yp = 1 and v3(p) = v, and v−2(v, y) to be the leaf two positions before v in the trigram path p with yp = 1 and v3(p) = v. Similarly, define v′ −1(v, y) and v′ −2(v, y) to be the preceding two leaves under the ye variables. If the method has not converged, these two trigram definitions may not be consistent. For a con7Provided that the algorithm is run for enough iterations for convergence. 78 sistent solution, we require v−1(v, y) = v′ −1(v, y) and v−2(v, y) = v′ −2(v, y) for all v with yv = 1. Unfortunately, explicitly enforcing all of these constraints would require exhaustive dynamic programming over the hypergraph using the (Bar-Hillel et al., 1964) method, something we wish to avoid. Instead, we enforce a weaker set of constraints, which require far less computation. Assume some function π : VL →{1, 2, . . . q} that partitions the set of leaves into q different partitions. Then we will add the following constraints to Y′: π(v−1(v, y)) = π(v′ −1(v, y)) π(v−2(v, y)) = π(v′ −2(v, y)) for all v such that yv = 1. Finding arg maxy∈Y′ θ · y under this new definition of Y′ can be performed using the construction of (Bar-Hillel et al., 1964), with q different lexical items (for brevity we omit the details). This is efficient if q is small.8 The remaining question concerns how to choose a partition π that is effective in tightening the relaxation. To do this we implement the following steps: 1) run the subgradient algorithm until L is close to convergence; 2) then run the subgradient algorithm for m further iterations, keeping track of all pairs of leaf nodes that violate the constraints (i.e., pairs a = v−1(v, y)/b = v′ −1(v, y) or a = v−2(v, y)/b = v′ −2(v, y) such that a ̸= b); 3) use a graph coloring algorithm to find a small partition that places all pairs ⟨a, b⟩into separate partitions; 4) continue running Lagrangian relaxation, with the new constraints added. We expand π at each iteration to take into account new pairs ⟨a, b⟩that violate the constraints. In related work, Sontag et al. (2008) describe a method for inference in Markov random fields where additional constraints are chosen to tighten an underlying relaxation. Other relevant work in NLP includes (Tromble and Eisner, 2006; Riedel and Clarke, 2006). Our use of partitions π is related to previous work on coarse-to-fine inference for machine translation (Petrov et al., 2008). 7 Experiments We report experiments on translation from Chinese to English, using the tree-to-string model described 8In fact in our experiments we use the original hypergraph to compute admissible outside scores for an exact A* search algorithm for this problem. We have found the resulting search algorithm to be very efficient. 
Time %age %age %age %age (LR) (DP) (ILP) (LP) 0.5s 37.5 10.2 8.8 21.0 1.0s 57.0 11.6 13.9 31.1 2.0s 72.2 15.1 21.1 45.9 4.0s 82.5 20.7 30.7 63.7 8.0s 88.9 25.2 41.8 78.3 16.0s 94.4 33.3 54.6 88.9 32.0s 97.8 42.8 68.5 95.2 Median time 0.79s 77.5s 12.1s 2.4s Figure 5: Results showing percentage of examples that are decoded in less than t seconds, for t = 0.5, 1.0, 2.0, . . . , 32.0. LR = Lagrangian relaxation; DP = exhaustive dynamic programming; ILP = integer linear programming; LP = linear programming (LP does not recover an exact solution). The (I)LP experiments were carried out using Gurobi, a high-performance commercial-grade solver. in (Huang and Mi, 2010). We use an identical model, and identical development and test data, to that used by Huang and Mi.9 The translation model is trained on 1.5M sentence pairs of Chinese-English data; a trigram language model is used. The development data is the newswire portion of the 2006 NIST MT evaluation test set (616 sentences). The test set is the newswire portion of the 2008 NIST MT evaluation test set (691 sentences). We ran the full algorithm with the tightening method described in section 6. We ran the method for a limit of 200 iterations, hence some examples may not terminate with an exact solution. Our method gives exact solutions on 598/616 development set sentences (97.1%), and 675/691 test set sentences (97.7%). In cases where the method does not converge within 200 iterations, we can return the best primal solution yt found by the algorithm during those iterations. We can also get an upper bound on the difference f(y∗)−f(yt) using mint L(ut) as an upper bound on f(y∗). Of the examples that did not converge, the worst example had a bound that was 1.4% of f(yt) (more specifically, f(yt) was -24.74, and the upper bound on f(y∗) −f(yt) was 0.34). Figure 5 gives information on decoding time for our method and two other exact decoding methods: integer linear programming (using constraints D0– D6), and exhaustive dynamic programming using the construction of (Bar-Hillel et al., 1964). Our 9We thank Liang Huang and Haitao Mi for providing us with their model and data. 79 method is clearly the most efficient, and is comparable in speed to state-of-the-art decoding algorithms. We also compare our method to cube pruning (Chiang, 2007; Huang and Chiang, 2007). We reimplemented cube pruning in C++, to give a fair comparison to our method. Cube pruning has a parameter, b, dictating the maximum number of items stored at each chart entry. With b = 50, our decoder finds higher scoring solutions on 50.5% of all examples (349 examples), the cube-pruning method gets a strictly higher score on only 1 example (this was one of the examples that did not converge within 200 iterations). With b = 500, our decoder finds better solutions on 18.5% of the examples (128 cases), cubepruning finds a better solution on 3 examples. The median decoding time for our method is 0.79 seconds; the median times for cube pruning with b = 50 and b = 500 are 0.06 and 1.2 seconds respectively. Our results give a very good estimate of the percentage of search errors for cube pruning. A natural question is how large b must be before exact solutions are returned on almost all examples. Even at b = 1000, we find that our method gives a better solution on 95 test examples (13.7%). Figure 5 also gives a speed comparison of our method to a linear programming (LP) solver that solves the LP relaxation defined by constraints D0– D6. 
We still see speed-ups, in spite of the fact that our method is solving a harder problem (it provides integral solutions). The Lagrangian relaxation method, when run without the tightening method of section 6, is solving a dual of the problem being solved by the LP solver. Hence we can measure how often the tightening procedure is absolutely necessary, by seeing how often the LP solver provides a fractional solution. We find that this is the case on 54.0% of the test examples: the tightening procedure is clearly important. Inspection of the tightening procedure shows that the number of partitions required (the parameter q) is generally quite small: 59% of examples that require tightening require q ≤6; 97.2% require q ≤10. 8 Conclusion We have described a Lagrangian relaxation algorithm for exact decoding of syntactic translation models, and shown that it is significantly more efficient than other exact algorithms for decoding treeto-string models. There are a number of possible ways to extend this work. Our experiments have focused on tree-to-string models, but the method should also apply to Hiero-style syntactic translation models (Chiang, 2007). Additionally, our experiments used a trigram language model, however the constraints in figure 3 generalize to higher-order language models. Finally, our algorithm recovers the 1-best translation for a given input sentence; it should be possible to extend the method to find kbest solutions. A Computing the Optimal Trigram Paths For each v ∈VL, define αv = maxp:v3(p)=v β(p), where β(p) = h(v1(p), v2(p), v3(p))−λ1(v1(p))−λ2(v2(p))− P s∈p1(p) u(s)−P s∈p2(p) v(s). Here h is a function that computes language model scores, and the other terms involve Lagrange mulipliers. Our task is to compute α∗ v for all v ∈VL. It is straightforward to show that the S, T graph is acyclic. This will allow us to apply shortest path algorithms to the graph, even though the weights u(s) and v(s) can be positive or negative. For any pair v1, v2 ∈VL, define P(v1, v2) to be the set of paths between ⟨v1 ↑⟩and ⟨v2 ↓⟩in the graph S, T. Each path p gets a score scoreu(p) = −P s∈p u(s). Next, define p∗ u(v1, v2) = arg maxp∈P(v1,v2) scoreu(p), and score∗ u(v1, v2) = scoreu(p∗). We assume similar definitions for p∗ v(v1, v2) and score∗ v(v1, v2). The p∗ u and score∗ u values can be calculated using an all-pairs shortest path algorithm, with weights u(s) on nodes in the graph. Similarly, p∗ v and score∗ v can be computed using all-pairs shortest path with weights v(s) on the nodes. Having calculated these values, define T (v) for any leaf v to be the set of trigrams (x, y, v) such that: 1) x, y ∈VL; 2) there is a path from ⟨x ↑⟩to ⟨y ↓⟩and from ⟨y ↑⟩to ⟨v ↓⟩in the graph S, T. Then we can calculate αv = max (x,y,v)∈T (v) (h(x, y, v) −λ1(x) −λ2(y) +p∗ u(x, y) + p∗ v(y, v)) in O(|T (v)|) time, by brute force search through the set T (v). Acknowledgments Alexander Rush and Michael Collins were supported under the GALE program of the Defense Advanced Research Projects Agency, Contract No. HR0011-06-C-0022. Michael Collins was also supported by NSF grant IIS-0915176. We also thank the anonymous reviewers for very helpful comments; we hope to fully address these in an extended version of the paper. 80 References Y. Bar-Hillel, M. Perles, and E. Shamir. 1964. On formal properties of simple phrase structure grammars. In Language and Information: Selected Essays on their Theory and Application, pages 116–150. D. Chiang. 2005. 
A hierarchical phrase-based model for statistical machine translation. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 263–270. Association for Computational Linguistics. D. Chiang. 2007. Hierarchical phrase-based translation. computational linguistics, 33(2):201–228. Adria de Gispert, Gonzalo Iglesias, Graeme Blackwood, Eduardo R. Banga, and William Byrne. 2010. Hierarchical Phrase-Based Translation with Weighted FiniteState Transducers and Shallow-n Grammars. In Computational linguistics, volume 36, pages 505–533. Robert W. Floyd. 1962. Algorithm 97: Shortest path. Commun. ACM, 5:345. Liang Huang and David Chiang. 2007. Forest rescoring: Faster decoding with integrated language models. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 144–151, Prague, Czech Republic, June. Association for Computational Linguistics. Liang Huang and Haitao Mi. 2010. Efficient incremental decoding for tree-to-string translation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 273–283, Cambridge, MA, October. Association for Computational Linguistics. Gonzalo Iglesias, Adri`a de Gispert, Eduardo R. Banga, and William Byrne. 2009. Rule filtering by pattern for efficient hierarchical translation. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 380–388, Athens, Greece, March. Association for Computational Linguistics. N. Komodakis, N. Paragios, and G. Tziritas. 2007. MRF optimization via dual decomposition: Messagepassing revisited. In International Conference on Computer Vision. Terry Koo, Alexander M. Rush, Michael Collins, Tommi Jaakkola, and David Sontag. 2010. Dual decomposition for parsing with non-projective head automata. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1288–1298, Cambridge, MA, October. Association for Computational Linguistics. B.H. Korte and J. Vygen. 2008. Combinatorial optimization: theory and algorithms. Springer Verlag. Shankar Kumar and William Byrne. 2005. Local phrase reordering models for statistical machine translation. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 161–168, Vancouver, British Columbia, Canada, October. Association for Computational Linguistics. I. Langkilde. 2000. Forest-based statistical sentence generation. In Proceedings of the 1st North American chapter of the Association for Computational Linguistics conference, pages 170–177. Morgan Kaufmann Publishers Inc. Daniel Marcu, Wei Wang, Abdessamad Echihabi, and Kevin Knight. 2006. Spmt: Statistical machine translation with syntactified target language phrases. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 44– 52, Sydney, Australia, July. Association for Computational Linguistics. R.K. Martin, R.L. Rardin, and B.A. Campbell. 1990. Polyhedral characterization of discrete dynamic programming. Operations research, 38(1):127–138. Slav Petrov, Aria Haghighi, and Dan Klein. 2008. Coarse-to-fine syntactic machine translation using language projections. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 108–116, Honolulu, Hawaii, October. Association for Computational Linguistics. Sebastian Riedel and James Clarke. 2006. Incremental integer linear programming for non-projective dependency parsing. 
In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, EMNLP ’06, pages 129–137, Stroudsburg, PA, USA. Association for Computational Linguistics. Alexander M Rush, David Sontag, Michael Collins, and Tommi Jaakkola. 2010. On dual decomposition and linear programming relaxations for natural language processing. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1–11, Cambridge, MA, October. Association for Computational Linguistics. Libin Shen, Jinxi Xu, and Ralph Weischedel. 2008. A new string-to-dependency machine translation algorithm with a target dependency language model. In Proceedings of ACL-08: HLT, pages 577–585, Columbus, Ohio, June. Association for Computational Linguistics. D.A. Smith and J. Eisner. 2008. Dependency parsing by belief propagation. In Proc. EMNLP, pages 145–156. D. Sontag, T. Meltzer, A. Globerson, T. Jaakkola, and Y. Weiss. 2008. Tightening LP relaxations for MAP using message passing. In Proc. UAI. Roy W. Tromble and Jason Eisner. 2006. A fast finite-state relaxation method for enforcing global constraints on sequence decoding. In Proceedings of 81 the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, HLT-NAACL ’06, pages 423–430, Stroudsburg, PA, USA. Association for Computational Linguistics. M. Wainwright, T. Jaakkola, and A. Willsky. 2005. MAP estimation via agreement on trees: message-passing and linear programming. In IEEE Transactions on Information Theory, volume 51, pages 3697–3717. Taro Watanabe, Hajime Tsukada, and Hideki Isozaki. 2006. Left-to-right target generation for hierarchical phrase-based translation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, ACL-44, pages 777–784, Morristown, NJ, USA. Association for Computational Linguistics. 82
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 793–803, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Large-Scale Cross-Document Coreference Using Distributed Inference and Hierarchical Models Sameer Singh§ Amarnag Subramanya† Fernando Pereira† Andrew McCallum§ § Department of Computer Science, University of Massachusetts, Amherst MA 01002 † Google Research, Mountain View CA 94043 [email protected], [email protected], [email protected], [email protected] Abstract Cross-document coreference, the task of grouping all the mentions of each entity in a document collection, arises in information extraction and automated knowledge base construction. For large collections, it is clearly impractical to consider all possible groupings of mentions into distinct entities. To solve the problem we propose two ideas: (a) a distributed inference technique that uses parallelism to enable large scale processing, and (b) a hierarchical model of coreference that represents uncertainty over multiple granularities of entities to facilitate more effective approximate inference. To evaluate these ideas, we constructed a labeled corpus of 1.5 million disambiguated mentions in Web pages by selecting link anchors referring to Wikipedia entities. We show that the combination of the hierarchical model with distributed inference quickly obtains high accuracy (with error reduction of 38%) on this large dataset, demonstrating the scalability of our approach. 1 Introduction Given a collection of mentions of entities extracted from a body of text, coreference or entity resolution consists of clustering the mentions such that two mentions belong to the same cluster if and only if they refer to the same entity. Solutions to this problem are important in semantic analysis and knowledge discovery tasks (Blume, 2005; Mayfield et al., 2009). While significant progress has been made in within-document coreference (Ng, 2005; Culotta et al., 2007; Haghighi and Klein, 2007; Bengston and Roth, 2008; Haghighi and Klein, 2009; Haghighi and Klein, 2010), the larger problem of cross-document coreference has not received as much attention. Unlike inference in other language processing tasks that scales linearly in the size of the corpus, the hypothesis space for coreference grows superexponentially with the number of mentions. Consequently, most of the current approaches are developed on small datasets containing a few thousand mentions. We believe that cross-document coreference resolution is most useful when applied to a very large set of documents, such as all the news articles published during the last 20 years. Such a corpus would have billions of mentions. In this paper we propose a model and inference algorithms that can scale the cross-document coreference problem to corpora of that size. Much of the previous work in cross-document coreference (Bagga and Baldwin, 1998; Ravin and Kazi, 1999; Gooi and Allan, 2004; Pedersen et al., 2006; Rao et al., 2010) groups mentions into entities with some form of greedy clustering using a pairwise mention similarity or distance function based on mention text, context, and document-level statistics. Such methods have not been shown to scale up, and they cannot exploit cluster features that cannot be expressed in terms of mention pairs. We provide a detailed survey of related work in Section 6. 
Other previous work attempts to address some of the above concerns by mapping coreference to inference on an undirected graphical model (Culotta et al., 2007; Poon et al., 2008; Wellner et al., 2004; Wick et al., 2009a). These models contain pairwise factors between all pairs of mentions capturing similarity between them. Many of these models also enforce transitivity and enable features over 793 Filmmaker Rapper BEIJING, Feb. 21— Kevin Smith, who played the god of war in the "Xena"... ... The Physiological Basis of Politics,” by Kevin B. Smith, Douglas Oxley, Matthew Hibbing... The filmmaker Kevin Smith returns to the role of Silent Bob... Like Back in 2008, the Lions drafted Kevin Smith, even though Smith was badly... Firefighter Kevin Smith spent almost 20 years preparing for Sept. 11. When he... ...shorthanded backfield in the wake of Kevin Smith's knee injury, and the addition of Haynesworth... ...were coming,'' said Dallas cornerback Kevin Smith. ''We just didn't know when... ...during the late 60's and early 70's, Kevin Smith worked with several local... ...the term hip-hop is attributed to Lovebug Starski. What does it actually mean... Nothing could be more irrelevant to Kevin Smith's audacious ''Dogma'' than ticking off... Cornerback Firefighter Actor Running back Author Figure 1: Cross-Document Coreference Problem: Example mentions of “Kevin Smith” from New York Times articles, with the true entities shown on the right. entities by including set-valued variables. Exact inference in these models is intractable and a number of approximate inference schemes (McCallum et al., 2009; Rush et al., 2010; Martins et al., 2010) may be used. In particular, Markov chain Monte Carlo (MCMC) based inference has been found to work well in practice. However as the number of mentions grows to Web scale, as in our problem of crossdocument coreference, even these inference techniques become infeasible, motivating the need for a scalable, parallelizable solution. In this work we first distribute MCMC-based inference for the graphical model representation of coreference. Entities are distributed across the machines such that the parallel MCMC chains on the different machines use only local proposal distributions. After a fixed number of samples on each machine, we redistribute the entities among machines to enable proposals across entities that were previously on different machines. In comparison to the greedy approaches used in related work, our MCMC-based inference provides better robustness properties. As the number of mentions becomes large, highquality samples for MCMC become scarce. To facilitate better proposals, we present a hierarchical model. We add sub-entity variables that represent clusters of similar mentions that are likely to be coreferent; these are used to propose composite jumps that move multiple mentions together. We also introduce super-entity variables that represent clusters of similar entities; these are used to distribute entities among the machines such that similar entities are assigned to the same machine. These additional levels of hierarchy dramatically increase the probability of beneficial proposals even with a large number of entities and mentions. To create a large corpus for evaluation, we identify pages that have hyperlinks to Wikipedia, and extract the anchor text and the context around the link. We treat the anchor text as the mention, the context as the document, and the title of the Wikipedia page as the entity label. 
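As an illustration of this construction, the following Python sketch maps crawled (anchor text, context, hyperlink) triples to labeled mentions; the input format and helper names are hypothetical, and the filtering steps described later in Section 5.3 are omitted here.

```python
from urllib.parse import urlparse, unquote

def links_to_mentions(links):
    """Turn (anchor_text, context_text, href) triples into labeled mentions.

    The anchor text is treated as the mention, the surrounding context as its
    document, and the linked Wikipedia page title as the entity label.
    `links` is a hypothetical iterable produced by a crawler.
    """
    mentions = []
    for anchor, context, href in links:
        parsed = urlparse(href)
        if "wikipedia.org" not in parsed.netloc:
            continue  # keep only links into Wikipedia
        # e.g. /wiki/Hillary_Clinton -> "Hillary_Clinton"
        title = unquote(parsed.path.rsplit("/", 1)[-1])
        mentions.append({"mention": anchor, "context": context, "entity": title})
    return mentions

# Example usage with a single toy link:
toy = [("Hillary Clinton", "... the senator said ...",
        "http://en.wikipedia.org/wiki/Hillary_Clinton")]
print(links_to_mentions(toy)[0]["entity"])  # Hillary_Clinton
```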
Using this approach, 1.5 million mentions were annotated with 43k entity labels. On this dataset, our proposed model yields a B3 (Bagga and Baldwin, 1998) F1 score of 73.7%, improving over the baseline by 16% absolute (corresponding to 38% error reduction). Our experimental results also show that our proposed hierarchical model converges much faster even though it contains many more variables. 2 Cross-document Coreference The problem of coreference is to identify the sets of mention strings that refer to the same underlying entity. The identities and the number of the underlying entities is not known. In within-document coreference, the mentions occur in a single document. The number of mentions (and entities) in each document is usually in the hundreds. The difficulty of the task arises from a large hypothesis space (exponential in the number of mentions) and challenge in resolving nominal and pronominal mentions to the correct named mentions. In most cases, named mentions 794 are not ambiguous within a document. In crossdocument coreference, the number of mentions and entities is in the millions, making the combinatorics even more daunting. Furthermore, naming ambiguity is much more common as the same string can refer to multiple entities in different documents, and distinct strings may refer to the same entity in different documents. We show examples of ambiguities in Figure 1. Resolving the identity of individuals with the same name is a common problem in cross-document coreference. This problem is further complicated by the fact that in some situations, these individuals may belong to the same field. Another common ambiguity is that of alternate names, in which the same entity is referred to by different names or aliases (e.g. “Bill” is often used as a substitute for “William”). The figure also shows an example of the renaming ambiguity – “Lovebug Starski” refers to “Kevin Smith”, and this is an extreme form of alternate names. Rare singleton entities (like the firefighter) that may appear only once in the whole corpus are also often difficult to isolate. 2.1 Pairwise Factor Model Factor graphs are a convenient representation for a probability distribution over a vector of output variables given observed variables. The model that we use for coreference represents mentions (M) and entities (E) as random variables. Each mention can take an entity as its value, and each entity takes a set of mentions as its value. Each mention also has a feature vector extracted from the observed text mention and its context. More precisely, the probability of a configuration E = e is defined by p(e) ∝exp P e∈e nP m,n∈e,n̸=m ψa(m, n) + P m∈e,n/∈e ψr(m, n) o where factor ψa represents affinity between mentions that are coreferent according to e, and factor ψr represents repulsion between mentions that are not coreferent. Different factors are instantiated for different predicted configurations. Figure 2 shows the model instantiated with five mentions over a twoentity hypothesis. For the factor potentials, we use cosine similarity of mention context pairs (φmn) such that m1 m2 m3 m4 m5 e1 e2 Figure 2: Pairwise Coreference Model: Factor graph for a 2-entity configuration of 5 mentions. Affinity factors are shown with solid lines, and repulsion factors with dashed lines. ψa(m, n) = φmn −b and ψr(m, n) = −(φmn −b), where b is the bias. While one can certainly make use of a more sophisticated feature set, we leave this for future work as our focus is to scale up inference. 
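For concreteness, a small Python sketch of the resulting pairwise score follows, assuming bag-of-words context vectors; it scores each unordered mention pair once, which differs from the ordered-pair sums above only by a constant factor.

```python
import math
from collections import Counter

def cosine(c1, c2):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(c1[w] * c2[w] for w in c1)
    n1 = math.sqrt(sum(v * v for v in c1.values()))
    n2 = math.sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def config_log_score(entities, contexts, b=1e-4):
    """Unnormalized log p(e) of a coreference configuration.

    `entities` is a list of lists of mention ids; `contexts` maps a mention id
    to a Counter over its context words.  Affinity factors (phi - b) fire
    between coreferent mention pairs, repulsion factors -(phi - b) between
    mentions assigned to different entities.
    """
    mention_entity = {m: k for k, ent in enumerate(entities) for m in ent}
    mentions = list(mention_entity)
    score = 0.0
    for i, m in enumerate(mentions):
        for n in mentions[i + 1:]:
            phi = cosine(contexts[m], contexts[n])
            same = mention_entity[m] == mention_entity[n]
            score += (phi - b) if same else -(phi - b)
    return score
```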
However, it should be noted that this approach is agnostic to the particular set of features used. As we will note in the next section, we do not need to calculate features between all pairs of mentions (as would be prohibitively expensive for large datasets); instead we only compute the features as and when required. 2.2 MCMC-based Inference Given the above model of coreference, we seek the maximum a posteriori (MAP) configuration: ˆe = arg maxe p(e) = arg maxe P e∈e nP m,n∈e,n̸=m ψa(m, n) + P m∈e,n/∈e ψr(m, n) o Computing ˆe exactly is intractable due to the large space of possible configurations.1 Instead, we employ MCMC-based optimization to discover the MAP configuration. A proposal function q is used to propose a change e′ to the current configuration e. This jump is accepted with the following Metropolis-Hastings acceptance probability: α(e, e′) = min 1, p(e′) p(e) 1/t q(e) q(e′) ! (1) 1Number of possible entities is Bell(n) in the number of mentions, i.e. number of partitions of n items 795 where t is the annealing temperature parameter. MCMC chains efficiently explore the highdensity regions of the probability distribution. By slowly reducing the temperature, we can decrease the entropy of the distribution to encourage convergence to the MAP configuration. MCMC has been used for optimization in a number of related work (McCallum et al., 2009; Goldwater and Griffiths, 2007; Changhe et al., 2004). The proposal function moves a randomly chosen mention l from its current entity es to a randomly chosen entity et. For such a proposal, the log-model ratio is: log p(e′) p(e) = X m∈et ψa(l, m) + X n∈es ψr(l, n) − X n∈es ψa(l, n) − X m∈et ψr(l, m) (2) Note that since only the factors between mention l and mentions in es and et are involved in this computation, the acceptance probability of each proposal is calculated efficiently. In general, the model may contain arbitrarily complex set of features over pairs of mentions, with parameters associated with them. Given labeled data, these parameters can be learned by Perceptron (Collins, 2002), which uses the MAP configuration according to the model (ˆe). There also exist more efficient training algorithms such as SampleRank (McCallum et al., 2009; Wick et al., 2009b) that update parameters during inference. However, we only focus on inference in this work, and the only parameter that we set manually is the bias b, which indirectly influences the number of entities in ˆe. Unless specified otherwise, in this work the initial configuration for MCMC is the singleton configuration, i.e. all entities have a size of 1. This MCMC inference technique, which has been used in McCallum and Wellner (2004), offers several advantages over other inference techniques: (a) unlike message-passing-methods, it does not require the full ground graph, (b) we only have to examine the factors that lie within the changed entities to evaluate a proposal, and (c) inference may be stopped at any point to obtain the current best configuration. However, the super exponential nature of the hypothesis space in cross-doc coreference renders this algorithm computationally unsuitable for large scale coreference tasks. In particular, fruitful proposals (that increase the model score) are extremely rare, resulting in a large number of proposals that are not accepted. 
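A sketch of this proposal/acceptance loop is given below, assuming the affinity and repulsion potentials are supplied as functions (for instance the ±(φmn − b) potentials above) and a uniform, symmetric proposer so that the q terms in Eq. (1) cancel.

```python
import math
import random

def log_model_ratio(l, source, target, psi_a, psi_r):
    """Log p(e')/p(e) for moving mention l from entity `source` to `target`.

    Only factors touching l and the two affected entities are needed, so the
    ratio is cheap to evaluate (cf. Eq. (2)).
    """
    gain = sum(psi_a(l, m) for m in target) + sum(psi_r(l, n) for n in source if n != l)
    loss = sum(psi_a(l, n) for n in source if n != l) + sum(psi_r(l, m) for m in target)
    return gain - loss

def mh_step(entities, psi_a, psi_r, temperature=1.0, rng=random):
    """One Metropolis-Hastings step with a uniform, symmetric proposer."""
    src = rng.randrange(len(entities))
    if not entities[src]:
        return
    l = rng.choice(entities[src])
    tgt = rng.randrange(len(entities))
    if tgt == src:
        return
    ratio = log_model_ratio(l, entities[src], entities[tgt], psi_a, psi_r)
    # Accept with probability min(1, (p(e')/p(e))^(1/t)); q terms cancel.
    if math.log(rng.random() + 1e-300) < ratio / temperature:
        entities[src].remove(l)
        entities[tgt].append(l)
```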
We describe methods to speed up inference by 1) evaluating multiple proposal simultaneously (Section 3), and 2) by augmenting our model with hierarchical variables that enable better proposal distributions (Section 4). 3 Distributed MAP Inference The key observation that enables distribution is that the acceptance probability computation of a proposal only examines a few factors that are not common to the previous and next configurations (Eq. 2). Consider a pair of proposals, one that moves mention l from entity es to entity et, and the other that moves mention l′ from entity e′ s to entity e′ t. The set of factors to compute acceptance of the first proposal are factors between l and mentions in es and et, while the set of factors required to compute acceptance of the second proposal lie between l′ and mentions in e′ s and e′ t. Since these set of factors are completely disjoint from each other, and the resulting configurations do not depend on each other, these two proposals are mutually-exclusive. Different orders of evaluating such proposals are equivalent, and in fact, these proposals can be proposed and evaluated concurrently. This mutual-exclusivity is not restricted only to pairs of proposals; a set of proposals are mutually-exclusive if no two proposals require the same factor for evaluation. Using this insight, we introduce the following approach to distributed cross-document coreference. We divide the mentions and entities among multiple machines, and propose moves of mentions between entities assigned to the same machine. These jumps are evaluated exactly and accepted without communication between machines. Since acceptance of a mention’s move requires examining factors that lie between other mentions in its entity, we ensure that all mentions of an entity are assigned the same machine. Unless specified otherwise, the distribution is performed randomly. To enable exploration of the complete configuration space, rounds of sampling are interleaved by redistribution stages, in which the entities are redistributed among the machines (see Figure 3). We use MapReduce (Dean and Ghe796 Distributor Inference Inference Figure 3: Distributed MCMC-based Inference: Distributor divides the entities among the machines, and the machines run inference. The process is repeated by the redistributing the entities. mawat, 2004) to manage the distributed computation. This approach to distribution is equivalent to inference with all mentions and entities on a single machine with a restricted proposer, but is faster since it exploits independencies to propose multiple jumps simultaneously. By restricting the jumps as described above, the acceptance probability calculation is exact. Partitioning the entities and proposing local jumps are restrictions to the single-machine proposal distribution; redistribution stages ensure the equivalent Markov chains are still irreducible. See Singh et al. (2010) for more details. 4 Hierarchical Coreference Model The proposal function for MCMC-based MAP inference presents changes to the current entities. Since we use MCMC to reach high-scoring regions of the hypothesis space, we are interested in the changes that improve the current configuration. But as the number of mentions and entities increases, these fruitful samples become extremely rare due to the blowup in the possible space of configurations, resulting in rejection of a large number of proposals. 
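Before turning to the hierarchy, the distributed scheme of Section 3 can be summarized in a few lines. In the sketch below the sequential loop over partitions stands in for the parallel map tasks, and the partitioning is random, as in the basic setup; `mh_step` is the sampler from the sketch above.

```python
import random

def distributed_round(entities, psi_a, psi_r, n_machines, samples_per_machine,
                      temperature=1.0, seed=0):
    """One round: partition entities, sample locally, merge for redistribution.

    Workers are simulated sequentially here; in the actual setup each partition
    would be processed in parallel and redistribution would be a shuffle step.
    """
    rng = random.Random(seed)
    partitions = [[] for _ in range(n_machines)]
    for ent in entities:
        # Whole entities are assigned to a machine, never split, so proposals
        # on different machines touch disjoint sets of factors.
        partitions[rng.randrange(n_machines)].append(ent)
    for local in partitions:
        if len(local) < 2:
            continue
        for _ in range(samples_per_machine):
            mh_step(local, psi_a, psi_r, temperature, rng)
    return [ent for local in partitions for ent in local]
```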
By distributing as described in the previous section, we propose samples in parallel, improving chances of finding changes that result in better configurations. However, due to random redistribution and a naive proposal function within each machine, a large fraction of proposals are still wasted. We address these concerns by adding hierarchy to the model. 4.1 Sub-Entities Consider the task of proposing moves of mentions (within a machine). Given the large number of mentions and entities, the probability that a randomly picked mention that is moved to a random entity results in a better configuration is extremely small. If such a move is accepted, this gives us evidence that the mention did not belong to the previous entity, and we should also move similar mentions from the previous entity simultaneously to the same entity. Since the proposer moves only a single mention at a time, a large number of samples may be required to discover these fruitful moves. To enable block proposals that move similar mentions simultaneously, we introduce latent sub-entity variables that represent groups of similar mentions within an entity, where the similarity is defined by the model. For inference, we have stages of sampling sub-entities (moving individual mentions) interleaved with stages of entity sampling (moving all mentions within a sub-entity). Even though our configuration space has become larger due to these extra variables, the proposal distribution has also improved since it proposes composite moves. 4.2 Super-Entities Another issue faced during distributed inference is that random redistribution is often wasteful. For example, if dissimilar entities are assigned to a machine, none of the proposals may be accepted. For a large number of entities and machines, the probability that similar entities will be assigned to the same machine is extremely small, leading to a larger number of wasted proposals. To alleviate this problem, we introduce super-entities that represent groups of similar entities. During redistribution, we ensure all entities in the same super-entity are assigned to the same machine. As for sub-entities above, inference switches between regular sampling of entities and sampling of super-entities (by moving entities). Although these extra variables have made the configuration space larger, they also allow more efficient distribution of entities, leading to useful proposals. 4.3 Combined Hierarchical Model Each of the described levels of the hierarchy are similar to the initial model (Section 2.1): mentions/subentities have the same structure as the entities/superentities, and are modeled using similar factors. To represent the “context” of a sub-entity we take the union of the bags-of-words of the constituent mention contexts. Similarly, we take the union of sub797 Super-Entities Entities Mentions Sub-Entities Figure 4: Combined Hierarchical Model with factors instantiated for a hypothesis containing 2 superentities, 4 entities, and 8 sub-entities, shown as colored circles, over 16 mentions. Dotted lines represent repulsion factors and solid lines represent affinity factors (the color denotes the type of variable that the factor touches). The boxes on factors were excluded for clarity. entity contexts to represent the context of an entity. The factors are instantiated in the same manner as Section 2.1 except that we change the bias factor b for each level (increasing it for sub-entities, and decreasing it for super-entities). 
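A sketch of these per-level contexts and potentials follows, reusing the cosine() helper from the pairwise sketch; the bias values shown are placeholders rather than the settings used in the experiments reported below.

```python
from collections import Counter

# Per-level biases: higher for sub-entities (favouring many small, tight
# clusters) and lower for super-entities (favouring few, broad ones).
BIAS = {"sub": 5e-4, "entity": 1e-4, "super": 5e-5}

def aggregate_context(members, contexts):
    """Context of a sub-entity / entity / super-entity as the union (sum) of
    the bags-of-words of its constituents."""
    bag = Counter()
    for m in members:
        bag.update(contexts[m])
    return bag

def level_potential(bag1, bag2, level, coreferent):
    """Affinity/repulsion potential at a given hierarchy level."""
    score = cosine(bag1, bag2) - BIAS[level]
    return score if coreferent else -score
```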
The exact values of these biases indirectly determines the number of predicted sub-entities and super-entities. Since these two levels of hierarchy operate at separate granularities from each other, we combine them into a single hierarchical model that contains both sub- and super-entities. We illustrate this hierarchical structure in Figure 4. Inference for this model takes a round-robin approach by fixing two of the levels of the hierarchy and sampling the third, cycling through these three levels. Unless specified otherwise, the initial configuration is the singleton configuration, in which all sub-entities, entities, and super-entities are of size 1. 5 Experiments We evaluate our models and algorithms on a number of datasets. First, we compare performance on the small, publicly-available “John Smith” dataset. Second, we run the automated Person-X evaluation to obtain thousands of mentions that we use to demonstrate accuracy and scalability improvements. Most importantly, we create a large labeled corpus using links to Wikipedia to explore the performance in the large-scale setting. 5.1 John Smith Corpus To compare with related work, we run an evaluation on the “John Smith” corpus (Bagga and Baldwin, 1998), containing 197 mentions of the name “John Smith” from New York Times articles (labeled to obtain 35 true entities). The bias b for our approach is set to result in the correct number of entities. Our model achieves B3 F1 accuracy of 66.4% on this dataset. In comparison, Rao et al. (2010) obtains 61.8% using the model most similar to ours, while their best model (which uses sophisticated topic-model features that do not scale easily) achieves 69.7%. It is encouraging to note that our approach, using only a subset of the features, performs competitively with related work. However, due to the small size of the dataset, we require further evaluation before reaching any conclusions. 5.2 Person-X Evaluation There is a severe lack of labeled corpora for crossdocument coreference due to the effort required to evaluate the coreference decisions. Related approaches have used automated Person-X evaluation (Gooi and Allan, 2004), in which unique person-name strings are treated as the true entity labels for the mentions. Every mention string is replaced with an “X” for the coreference system. We use this evaluation methodology on 25k personname mentions from the New York Times corpus (Sandhaus, 2008) each with one of 50 unique strings. As before, we set the bias b to achieve the same number of entities. We use 1 million samples in each round of inference, followed by random redistribution in the flat model, and super-entities in the hierarchical model. Results are averaged over five runs. 798 Figure 5: Person-X Evaluation of Pairwise model: Performance as number of machines is varied, averaged over 5 runs. Number of Entities 43,928 Number of Mentions 1,567,028 Size of Largest Entity 6,096 Average Mentions per Entity 35.7 Variance of Mentions per Entity 5191.7 Table 1: Wikipedia Link Corpus Statistics. Size of an entity is the number of mentions of that entity. Figure 5 shows accuracy compared to relative wallclock running time for distributed inference on the flat, pairwise model. Speed and accuracy improve as additional machines are added, but larger number of machines lead to diminishing returns for this small dataset. Distributed inference on our hierarchical model is evaluated in Figure 6 against inference on the pairwise model from Figure 5. 
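Since all accuracy figures in this section are B3 scores, a compact reference implementation is given below; it assumes gold and predicted clusterings over the same set of mention ids.

```python
def b_cubed(predicted, gold):
    """B^3 precision/recall/F1 (Bagga and Baldwin, 1998).

    `predicted` and `gold` are lists of clusters (lists of mention ids) over
    the same mentions.
    """
    pred_of = {m: frozenset(c) for c in predicted for m in c}
    gold_of = {m: frozenset(c) for c in gold for m in c}
    mentions = list(gold_of)
    p = sum(len(pred_of[m] & gold_of[m]) / len(pred_of[m]) for m in mentions)
    r = sum(len(pred_of[m] & gold_of[m]) / len(gold_of[m]) for m in mentions)
    p, r = p / len(mentions), r / len(mentions)
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# e.g. b_cubed([[1, 2], [3]], [[1], [2, 3]]) returns approximately (0.67, 0.67, 0.67)
```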
We see that the individual hierarchical models perform much better than the pairwise model; they achieve the same accuracy as the pairwise model in approximately 10% of the time. Moreover, distributed inference on the combined hierarchical model is both faster and more accurate than the individual hierarchical models. 5.3 Wikipedia Link Corpus To explore the application of the proposed approach to a larger, realistic dataset, we construct a corpus based on the insight that links to Wikipedia that appear on webpages can be treated as mentions, and since the links were added manually by the page author, we use the destination Wikipedia page as the Figure 6: Person-X Evaluation of Hierarchical Models: Performance of inference on hierarchical models compared to the pairwise model. Experiments were run using 50 machines. entity the link refers to. The dataset is created as follows: First, we crawl the web and select hyperlinks on webpages that link to an English Wikipedia page.2 The anchors of these links form our set of mentions, with the surrounding block of clean text (obtained after removing markup, etc.) around each link being its context. We assign the title of the linked Wikipedia page as the entity label of that link. Since this set of mentions and labels can be noisy, we use the following filtering steps. All links that have less than 36 words in their block, or whose anchor text has a large string edit distance from the title of the Wikipedia page, are discarded. While this results in cases in which “President” is discarded when linked to the “Barack Obama” Wikipedia page, it was necessary to reduce noise. Further, we also discard links to Wikipedia pages that are concepts (such as “public_domain”) rather than entities. All entities with less than 6 links to them are also discarded. Table 1 shows some statistics about our automatically generated data set. We randomly sampled 5% of the entities to create a development set, treating the remaining entities as the test set. Unlike the John Smith and Person-X evaluation, this data set also contains non-person entities such as organizations and locations. For our models, we augment the factor potentials with mention-string similarity: 2e.g. http://en.wikipedia.org/Hillary_Clinton 799 ψa/r(m, n) = ± (φmn −b + wSTREQ(m, n)) where STREQ is 1 if mentions m and n are string identical (0 otherwise), and w is the weight to this feature.3 In our experiments we found that setting w = 0.8 and b = 1e −4 gave the best results on the development set. Due to the large size of the corpus, existing crossdocument coreference approaches could not be applied to this dataset. However, since a majority of related work consists of using clustering after defining a similarity function (Section 6), we provide a baseline evaluation of clustering with SubSquare (Bshouty and Long, 2010), a scalable, distributed clustering method. Subsquare takes as input a weighted graph with mentions as nodes and similarity between mentions used as edge weights. Subsquare works by stochastically assigning a vertex to the cluster of one its neighbors if they have significant neighborhood overlap. This algorithm is an efficient form of approximate spectral clustering (Bshouty and Long, 2010), and since it is given the same distances between mentions as our models, we expect it to get similar accuracy. We also generate another baseline clustering by assigning mentions with identical strings to the same entity. 
This mention-string clustering is also used as the initial configuration of our inference. Figure 7: Wikipedia Link Evaluation: Performance of inference for different number of machines (N = 100, 500). Mention-string match clustering is used as the initial configuration. 3Note that we do not use mention-string similarity for John Smith or Person-X as the mention strings are all identical. Method Pairwise B3 Score P/ R F1 P/ R F1 String-Match 30.0 / 66.7 41.5 82.7 / 43.8 57.3 Subsquare 38.2 / 49.1 43.0 87.6 / 51.4 64.8 Our Model 44.2 / 61.4 51.4 89.4 / 62.5 73.7 Table 2: F1 Scores on the Wikipedia Link Data. The results are significant at the 0.0001 level over Subsquare according to the difference of proportions significance test. Inference is run for 20 rounds of 10 million samples each, distributed over N machines. We use N = 100, 500 and the B3 F1 score results obtained set for each case are shown in Figure 7. It can be seen that N = 500 converges to a better solution faster, showing effective use of parallelism. Table 2 compares the results of our approach (at convergence for N = 500), the baseline mention-string match and the Subsquare algorithm. Our approach significantly outperforms the competitors. 6 Related Work Although the cross-document coreference problem is challenging and lacks large labeled datasets, its ubiquitous role as a key component of many knowledge discovery tasks has inspired several efforts. A number of previous techniques use scoring functions between pairs of contexts, which are then used for clustering. One of the first approaches to cross-document coreference (Bagga and Baldwin, 1998) uses an idf-based cosine-distance scoring function for pairs of contexts, similar to the one we use. Ravin and Kazi (1999) extend this work to be somewhat scalable by comparing pairs of contexts only if the mentions are deemed “ambiguous” using a heuristic. Others have explored multiple methods of context similarity, and concluded that agglomerative clustering provides effective means of inference (Gooi and Allan, 2004). Pedersen et al. (2006) and Purandare and Pedersen (2004) integrate second-order co-occurrence of words into the similarity function. Mann and Yarowsky (2003) use biographical facts from the Web as features for clustering. Niu et al. (2004) incorporate information extraction into the context similarity model, and annotate a small dataset to learn the parameters. A number of other approaches include various forms of 800 hand-tuned weights, dictionaries, and heuristics to define similarity for name disambiguation (Blume, 2005; Baron and Freedman, 2008; Popescu et al., 2008). These approaches are greedy and differ in the choice of the distance function and the clustering algorithm used. Daum´e III and Marcu (2005) propose a generative approach to supervised clustering, and Haghighi and Klein (2010) use entity profiles to assist within-document coreference. Since many related methods use clustering, there are a number of distributed clustering algorithms that may help scale these approaches. Datta et al. (2006) propose an algorithm for distributed kmeans. Chen et al. (2010) describe a parallel spectral clustering algorithm. We use the Subsquare algorithm (Bshouty and Long, 2010) as baseline because it works well in practice. Mocian (2009) presents a survey of distributed clustering algorithms. Rao et al. (2010) have proposed an online deterministic method that uses a stream of input mentions and assigns them greedily to entities. 
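The sketch below spells out the string-match-augmented potential and the mention-string-match clustering used as a baseline and as the initial configuration; it reuses the cosine() helper from the pairwise sketch, and the constants are the development-set values quoted above.

```python
from collections import defaultdict

W_STREQ, BIAS_B = 0.8, 1e-4  # values reported as best on the development set

def psi(m, n, contexts, strings, coreferent):
    """String-match-augmented affinity/repulsion potential."""
    streq = 1.0 if strings[m] == strings[n] else 0.0
    score = cosine(contexts[m], contexts[n]) - BIAS_B + W_STREQ * streq
    return score if coreferent else -score

def string_match_clustering(strings):
    """Baseline / initial configuration: one entity per distinct mention string."""
    clusters = defaultdict(list)
    for m, s in strings.items():
        clusters[s].append(m)
    return list(clusters.values())
```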
Although it can resolve mentions from non-trivial sized datasets, the method is restricted to a single machine, which is not scalable to the very large number of mentions that are encountered in practice. Our representation of the problem as an undirected graphical model, and performing distributed inference on it, provides a combination of advantages not available in any of these approaches. First, most of the methods will not scale to the hundreds of millions of mentions that are present in real-world applications. By utilizing parallelism across machines, our method can run on very large datasets simply by increasing the number of machines used. Second, approaches that use clustering are limited to using pairwise distance functions for which additional supervision and features are difficult to incorporate. In addition to representing features from all of the related work, graphical models can also use more complex entity-wide features (Culotta et al., 2007; Wick et al., 2009a), and parameters can be learned using supervised (Collins, 2002) or semisupervised techniques (Mann and McCallum, 2008). Finally, the inference for most of the related approaches is greedy, and earlier decisions are not revisited. Our technique is based on MCMC inference and simulated annealing, which are able to escape local maxima. 7 Conclusions Motivated by the problem of solving the coreference problem on billions of mentions from all of the newswire documents from the past few decades, we make the following contributions. First, we introduce distributed version of MCMC-based inference technique that can utilize parallelism to enable scalability. Second, we augment the model with hierarchical variables that facilitate fruitful proposal distributions. As an additional contribution, we use links to Wikipedia pages to obtain a high-quality crossdocument corpus. Scalability and accuracy gains of our method are evaluated on multiple datasets. There are a number of avenues for future work. Although we demonstrate scalability to more than a million mentions, we plan to explore performance on datasets in the billions. We also plan to examine inference on complex coreference models (such as with entity-wide factors). Another possible avenue for future work is that of learning the factors. Since our approach supports parameter estimation, we expect significant accuracy gains with additional features and supervised data. Our work enables crossdocument coreference on very large corpora, and we would like to explore the downstream applications that can benefit from it. Acknowledgments This work was done when the first author was an intern at Google Research. The authors would like to thank Mark Dredze, Sebastian Riedel, and anonymous reviewers for their valuable feedback. This work was supported in part by the Center for Intelligent Information Retrieval, the University of Massachusetts gratefully acknowledges the support of Defense Advanced Research Projects Agency (DARPA) Machine Reading Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-09-C-0181., in part by an award from Google, in part by The Central Intelligence Agency, the National Security Agency and National Science Foundation under NSF grant #IIS-0326249, in part by NSF grant #CNS-0958392, and in part by UPenn NSF medium IIS-0803847. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor. 801 References Amit Bagga and Breck Baldwin. 
1998. Entity-based cross-document coreferencing using the vector space model. In International Conference on Computational Linguistics, pages 79–85. A. Baron and M. Freedman. 2008. Who is who and what is what: experiments in cross-document co-reference. In Empirical Methods in Natural Language Processing (EMNLP), pages 274–283. Eric Bengston and Dan Roth. 2008. Understanding the value of features for coreference resolution. In Empirical Methods in Natural Language Processing (EMNLP). Matthias Blume. 2005. Automatic entity disambiguation: Benefits to NER, relation extraction, link analysis, and inference. In International Conference on Intelligence Analysis (ICIA). Nader H. Bshouty and Philip M. Long. 2010. Finding planted partitions in nearly linear time using arrested spectral clustering. In Johannes F¨urnkranz and Thorsten Joachims, editors, Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 135–142, Haifa, Israel, June. Omnipress. Yuan Changhe, Lu Tsai-Ching, and Druzdzel Marek. 2004. Annealed MAP. In Uncertainty in Artificial Intelligence (UAI), pages 628–635, Arlington , Virginia. AUAI Press. Wen-Yen Chen, Yangqiu Song, Hongjie Bai, Chih-Jen Lin, and Edward Y. Chang. 2010. Parallel spectral clustering in distributed systems. IEEE Transactions on Pattern Analysis and Machine Intelligence. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithm. In Annual Meeting of the Association for Computational Linguistics (ACL). Aron Culotta, Michael Wick, and Andrew McCallum. 2007. First-order probabilistic models for coreference resolution. In North American Chapter of the Association for Computational Linguistics - Human Language Technologies (NAACL HLT). S. Datta, C. Giannella, and H. Kargupta. 2006. K-Means Clustering over a Large, Dynamic Network. In SIAM Data Mining Conference (SDM). Hal Daum´e III and Daniel Marcu. 2005. A Bayesian model for supervised clustering with the Dirichlet process prior. Journal of Machine Learning Research (JMLR), 6:1551–1577. Jeffrey Dean and Sanjay Ghemawat. 2004. Mapreduce: Simplified data processing on large clusters. Symposium on Operating Systems Design & Implementation (OSDI). Sharon Goldwater and Tom Griffiths. 2007. A fully bayesian approach to unsupervised part-of-speech tagging. In Annual Meeting of the Association for Computational Linguistics (ACL), pages 744–751. Chung Heong Gooi and James Allan. 2004. Crossdocument coreference on a large scale corpus. In North American Chapter of the Association for Computational Linguistics - Human Language Technologies (NAACL HLT), pages 9–16. Aria Haghighi and Dan Klein. 2007. Unsupervised coreference resolution in a nonparametric bayesian model. In Annual Meeting of the Association for Computational Linguistics (ACL), pages 848–855. Aria Haghighi and Dan Klein. 2009. Simple coreference resolution with rich syntactic and semantic features. In Empirical Methods in Natural Language Processing (EMNLP), pages 1152–1161. Aria Haghighi and Dan Klein. 2010. Coreference resolution in a modular, entity-centered model. In North American Chapter of the Association for Computational Linguistics - Human Language Technologies (NAACL HLT), pages 385–393. Gideon S. Mann and Andrew McCallum. 2008. Generalized expectation criteria for semi-supervised learning of conditional random fields. In Annual Meeting of the Association for Computational Linguistics (ACL), pages 870–878. Gideon S. Mann and David Yarowsky. 2003. 
Unsupervised personal name disambiguation. In North American Chapter of the Association for Computational Linguistics - Human Language Technologies (NAACL HLT), pages 33–40. Andre Martins, Noah Smith, Eric Xing, Pedro Aguiar, and Mario Figueiredo. 2010. Turbo parsers: Dependency parsing by approximate variational inference. In Empirical Methods in Natural Language Processing (EMNLP), pages 34–44, Cambridge, MA, October. Association for Computational Linguistics. J. Mayfield, D. Alexander, B. Dorr, J. Eisner, T. Elsayed, T. Finin, C. Fink, M. Freedman, N. Garera, P. McNamee, et al. 2009. Cross-document coreference resolution: A key technology for learning by reading. In AAAI Spring Symposium on Learning by Reading and Learning to Read. Andrew McCallum and Ben Wellner. 2004. Conditional models of identity uncertainty with application to noun coreference. In Neural Information Processing Systems (NIPS). Andrew McCallum, Karl Schultz, and Sameer Singh. 2009. FACTORIE: Probabilistic programming via imperatively defined factor graphs. In Neural Information Processing Systems (NIPS). Horatiu Mocian. 2009. Survey of Distributed Clustering Techniques. Ph.D. thesis, Imperial College of London. 802 Vincent Ng. 2005. Machine learning for coreference resolution: From local classification to global ranking. In Annual Meeting of the Association for Computational Linguistics (ACL). Cheng Niu, Wei Li, and Rohini K. Srihari. 2004. Weakly supervised learning for cross-document person name disambiguation supported by information extraction. In Annual Meeting of the Association for Computational Linguistics (ACL), page 597. Ted Pedersen, Anagha Kulkarni, Roxana Angheluta, Zornitsa Kozareva, and Thamar Solorio. 2006. An unsupervised language independent method of name discrimination using second order co-occurrence features. In International Conference on Intelligent Text Processing and Computational Linguistics (CICLing), pages 208–222. Hoifung Poon, Pedro Domingos, and Marc Sumner. 2008. A general method for reducing the complexity of relational inference and its application to MCMC. In AAAI Conference on Artificial Intelligence. Octavian Popescu, Christian Girardi, Emanuele Pianta, and Bernardo Magnini. 2008. Improving crossdocument coreference. Journ´ees Internationales d’Analyse statistique des Donn´ees Textuelles, 9:961– 969. A. Purandare and T. Pedersen. 2004. Word sense discrimination by clustering contexts in vector and similarity spaces. In Conference on Computational Natural Language Learning (CoNLL), pages 41–48. Delip Rao, Paul McNamee, and Mark Dredze. 2010. Streaming cross document entity coreference resolution. In International Conference on Computational Linguistics (COLING), pages 1050–1058, Beijing, China, August. Coling 2010 Organizing Committee. Yael Ravin and Zunaid Kazi. 1999. Is Hillary Rodham Clinton the president? disambiguating names across documents. In Annual Meeting of the Association for Computational Linguistics (ACL), pages 9–16. Alexander M Rush, David Sontag, Michael Collins, and Tommi Jaakkola. 2010. On dual decomposition and linear programming relaxations for natural language processing. In Empirical Methods in Natural Language Processing (EMNLP), pages 1–11, Cambridge, MA, October. Association for Computational Linguistics. Evan Sandhaus. 2008. The New York Times annotated corpus. Linguistic Data Consortium. Sameer Singh, Amarnag Subramanya, Fernando Pereira, and Andrew McCallum. 2010. Distributed map inference for undirected graphical models. 
In Neural Information Processing Systems (NIPS), Workshop on Learning on Cores, Clusters and Clouds. Ben Wellner, Andrew McCallum, Fuchun Peng, and Michael Hay. 2004. An integrated, conditional model of information extraction and coreference with application to citation matching. In Uncertainty in Artificial Intelligence (UAI), pages 593–601. Michael Wick, Aron Culotta, Khashayar Rohanimanesh, and Andrew McCallum. 2009a. An entity-based model for coreference resolution. In SIAM International Conference on Data Mining (SDM). Michael Wick, Khashayar Rohanimanesh, Aron Culotta, and Andrew McCallum. 2009b. Samplerank: Learning preferences from atomic gradients. In Neural Information Processing Systems (NIPS), Workshop on Advances in Ranking.
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 804–813, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics A Cross-Lingual ILP Solution to Zero Anaphora Resolution Ryu Iida Tokyo Institute of Technology 2-12-1, ˆOokayama, Meguro, Tokyo 152-8552, Japan [email protected] Massimo Poesio Universit`a di Trento, Center for Mind / Brain Sciences University of Essex, Language and Computation Group [email protected] Abstract We present an ILP-based model of zero anaphora detection and resolution that builds on the joint determination of anaphoricity and coreference model proposed by Denis and Baldridge (2007), but revises it and extends it into a three-way ILP problem also incorporating subject detection. We show that this new model outperforms several baselines and competing models, as well as a direct translation of the Denis / Baldridge model, for both Italian and Japanese zero anaphora. We incorporate our model in complete anaphoric resolvers for both Italian and Japanese, showing that our approach leads to improved performance also when not used in isolation, provided that separate classifiers are used for zeros and for explicitly realized anaphors. 1 Introduction In so-called ‘pro-drop’ languages such as Japanese and many romance languages including Italian, phonetic realization is not required for anaphoric references in contexts in which in English noncontrastive pronouns are used: e.g., the subjects of Italian and Japanese translations of buy in (1b) and (1c) are not explicitly realized. We call these nonrealized mandatory arguments zero anaphors. (1) a. [EN] [John]i went to visit some friends. On the way, [he]i bought some wine. b. [IT] [Giovanni]i and`o a far visita a degli amici. Per via, φi compr`o del vino. c. [JA] [John]i-wa yujin-o houmon-sita. Tochu-de φi wain-o ka-tta. The felicitousness of zero anaphoric reference depends on the referred entity being sufficiently salient, hence this type of data–particularly in Japanese and Italian–played a key role in early work in coreference resolution, e.g., in the development of Centering (Kameyama, 1985; Walker et al., 1994; Di Eugenio, 1998). This research highlighted both commonalities and differences between the phenomenon in such languages. Zero anaphora resolution has remained a very active area of study for researchers working on Japanese, because of the prevalence of zeros in such languages1 (Seki et al., 2002; Isozaki and Hirao, 2003; Iida et al., 2007a; Taira et al., 2008; Imamura et al., 2009; Sasano et al., 2009; Taira et al., 2010). But now the availability of corpora annotated to study anaphora, including zero anaphora, in languages such as Italian (e.g., Rodriguez et al. (2010)), and their use in competitions such as SEMEVAL 2010 Task 1 on Multilingual Coreference (Recasens et al., 2010), is leading to a renewed interest in zero anaphora resolution, particularly at the light of the mediocre results obtained on zero anaphors by most systems participating in SEMEVAL. Resolving zero anaphora requires the simultaneous decision that one of the arguments of a verb is phonetically unrealized (and which argument exactly–in this paper, we will only be concerned with subject zeros as these are the only type to occur in Italian) and that a particular entity is its antecedent. It is therefore natural to view zero anaphora resolution as a joint inference 1As shown in Table 1, 64.3% of anaphors in the NAIST Text Corpus of Anaphora are zeros. 
804 task, for which Integer Linear Programming (ILP)– introduced to NLP by Roth and Yih (2004) and successfully applied by Denis and Baldridge (2007) to the task of jointly inferring anaphoricity and determining the antecedent–would be appropriate. In this work we developed, starting from the ILP system proposed by Denis and Baldridge, an ILP approach to zero anaphora detection and resolution that integrates (revised) versions of Denis and Baldridge’s constraints with additional constraints between the values of three distinct classifiers, one of which is a novel one for subject prediction. We demonstrate that treating zero anaphora resolution as a three-way inference problem is successful for both Italian and Japanese. We integrate the zero anaphora resolver with a coreference resolver and demonstrate that the approach leads to improved results for both Italian and Japanese. The rest of the paper is organized as follows. Section 2 briefly summarizes the approach proposed by Denis and Baldridge (2007). We next present our new ILP formulation in Section 3. In Section 4 we show the experimental results with zero anaphora only. In Section 5 we discuss experiments testing that adding our zero anaphora detector and resolver to a full coreference resolver would result in overall increase in performance. We conclude and discuss future work in Section 7. 2 Using ILP for joint anaphoricity and coreference determination Integer Linear Programming (ILP) is a method for constraint-based inference aimed at finding the values for a set of variables that maximize a (linear) objective function while satisfying a number of constraints. Roth and Yih (2004) advocated ILP as a general solution for a number of NLP tasks that require combining multiple classifiers and which the traditional pipeline architecture is not appropriate, such as entity disambiguation and relation extraction. Denis and Baldridge (2007) defined the following object function for the joint anaphoricity and coreference determination problem. min ⟨i,j⟩∈P cC ⟨i,j⟩· x⟨i,j⟩+ c−C ⟨i,j⟩· (1 −x⟨i,j⟩) + j∈M cA j · yj + c−A j · (1 −yj) (2) subject to x⟨i,j⟩∈{0, 1} ∀⟨i, j⟩∈P yj ∈{0, 1} ∀j ∈M M stands for the set of mentions in the document, and P the set of possible coreference links over these mentions. x⟨i,j⟩is an indicator variable that is set to 1 if mentions i and j are coreferent, and 0 otherwise. yj is an indicator variable that is set to 1 if mention j is anaphoric, and 0 otherwise. The costs cC ⟨i,j⟩= −log(P(COREF|i, j)) are (logs of) probabilities produced by an antecedent identification classifier with −log, whereas cA j = −log(P(ANAPH|j)), are the probabilities produced by an anaphoricity determination classifier with −log. In the Denis & Baldridge model, the search for a solution to antecedent identification and anaphoricity determination is guided by the following three constraints. Resolve only anaphors: if a pair of mentions ⟨i, j⟩ is coreferent (x⟨i,j⟩= 1), then mention j must be anaphoric (yj = 1). x⟨i,j⟩≤yj ∀⟨i, j⟩∈P (3) Resolve anaphors: if a mention is anaphoric (yj = 1), it must be coreferent with at least one antecedent. yj ≤ i∈Mj x⟨i,j⟩ ∀j ∈M (4) Do not resolve non-anaphors: if a mention is nonanaphoric (yj = 0), it should have no antecedents. 
yj ≥ 1 |Mj| i∈Mj x⟨i,j⟩ ∀j ∈M (5) 3 An ILP-based account of zero anaphora detection and resolution In the corpora used in our experiments, zero anaphora is annotated using as markable the first verbal form (not necessarily the head) following the position where the argument would have been realized, as in the following example. 805 (6) [Pahor]i `e nato a Trieste, allora porto principale dell’Impero Austro-Ungarico. A sette anni [vide]i l’incendio del Narodni dom, The proposal of Denis and Baldridge (2007) can be easily turned into a proposal for the task of detecting and resolving zero anaphora in this type of data by reinterpreting the indicator variables as follows: • yj is 1 if markable j (a verbal form) initiates a verbal complex whose subject is unrealized, 0 otherwise; • x⟨i,j⟩is 1 if the empty mention realizing the subject argument of markable j and markable i are mentions of the same entity, 0 otherwise. There are however a number of ways in which this direct adaptation can be modified and extended. We discuss them in turn. 3.1 Best First In the context of zero anaphora resolution, the ‘Do not resolve non-anaphors’ constraint (5) is too weak, as it allows the redundant choice of more than one candidate antecedent. We developed therefore the following alternative, that blocks selection of more than one antecedent. Best First (BF): yj ≥ i∈Mj x⟨i,j⟩ ∀j ∈M (7) 3.2 A subject detection model The greatest difficulty in zero anaphora resolution in comparison to, say, pronoun resolution, is zero anaphora detection. Simply relying for this on the parser is not enough: most dependency parsers are not very accurate at identifying cases in which the verb does not have a subject on syntactic grounds only. Again, it seems reasonable to suppose this is because zero anaphora detection requires a combination of syntactic information and information about the current context. Within the ILP framework, this hypothesis can be implemented by turning the zero anaphora resolution optimization problem into one with three indicator variables, with the objective function in (8). The third variable, zj, encodes the information provided by the parser: it is 1 with cost cS j = −log(P(SUBJ|j)) if the parser thinks that verb j has an explicit subject with probability P(SUBJ|j), otherwise it is 0. min ⟨i,j⟩∈P cC ⟨i,j⟩· x⟨i,j⟩+ c−C ⟨i,j⟩· (1 −x⟨i,j⟩) + j∈M cA j · yj + c−A j · (1 −yj) + j∈M cS j · zj + c−S j · (1 −zj) (8) subject to x⟨i,j⟩∈{0, 1} ∀⟨i, j⟩∈P yj ∈{0, 1} ∀j ∈M zj ∈{0, 1} ∀j ∈M The crucial fact about the relation between zj and yj is that a verb has either a syntactically realized NP or a zero pronoun as a subject, but not both. This is encoded by the following constraint. Resolve only non-subjects: if a predicate j syntactically depends on a subject (zj = 1), then the predicate j should have no antecedents of its subject zero pronoun. yj + zj ≤1 ∀j ∈M (9) 4 Experiment 1: zero anaphora resolution In a first round of experiments, we evaluated the performance of the model proposed in Section 3 on zero anaphora only (i.e., not attempting to resolve other types of anaphoric expressions). 4.1 Data sets We use the two data sets summarized in Table 1. The table shows that NP anaphora occurs more frequently than zero-anaphora in Italian, whereas in Japanese the frequency of anaphoric zero-anaphors2 is almost double the frequency of the remaining anaphoric expressions. Italian For Italian coreference, we used the annotated data set presented in Rodriguez et al. 
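To make the formulation concrete, the following sketch solves the ILP+BF+SUBJ configuration with the off-the-shelf PuLP/CBC solver; the solver choice and the input dictionaries of classifier probabilities are illustrative assumptions, not the setup used in our experiments, and every predicate j in p_coref is assumed to appear in p_anaph and p_subj.

```python
from math import log
import pulp

def solve_zero_anaphora(p_coref, p_anaph, p_subj, eps=1e-9):
    """ILP for zero-anaphora detection/resolution with Best-First and subject
    detection (a sketch of the ILP+BF+SUBJ configuration).

    p_coref[(i, j)]: P(COREF | i, j) for candidate antecedent i of predicate j,
    p_anaph[j]: P(ANAPH | j), p_subj[j]: P(SUBJ | j), all from separately
    trained classifiers.  Returns the selected antecedent links.
    """
    cost = lambda p: -log(max(p, eps))
    prob = pulp.LpProblem("zero_anaphora", pulp.LpMinimize)
    x = {ij: pulp.LpVariable(f"x_{ij[0]}_{ij[1]}", cat="Binary") for ij in p_coref}
    y = {j: pulp.LpVariable(f"y_{j}", cat="Binary") for j in p_anaph}
    z = {j: pulp.LpVariable(f"z_{j}", cat="Binary") for j in p_subj}
    prob += (
        pulp.lpSum(cost(p) * x[ij] + cost(1 - p) * (1 - x[ij]) for ij, p in p_coref.items())
        + pulp.lpSum(cost(p) * y[j] + cost(1 - p) * (1 - y[j]) for j, p in p_anaph.items())
        + pulp.lpSum(cost(p) * z[j] + cost(1 - p) * (1 - z[j]) for j, p in p_subj.items())
    )
    for (i, j), xij in x.items():
        prob += xij <= y[j]                      # resolve only anaphors (3)
    for j in y:
        cands = [x[ij] for ij in x if ij[1] == j]
        if cands:
            prob += y[j] <= pulp.lpSum(cands)    # resolve anaphors (4)
            prob += y[j] >= pulp.lpSum(cands)    # best first (7)
        prob += y[j] + z[j] <= 1                 # resolve only non-subjects (9)
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [ij for ij, var in x.items() if var.value() == 1]
```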
(2010) and developed for the Semeval 2010 task ‘Coreference Resolution in Multiple Languages’ (Recasens et al., 2010), where both zero-anaphora and NP 2In Japanese, like in Italian, zero anaphors are often used non-anaphorically, to refer to situationally introduced entities, as in I went to John’s office, but they told me that he had left. 806 #instances (anaphoric/total) language type #docs #sentences #words zero-anaphors others all Italian train 97 3,294 98,304 1,093 / 1,160 6,747 / 27,187 7,840 / 28,347 test 46 1,478 41,587 792 / 837 3,058 / 11,880 3,850 / 12,717 Japanese train 1,753 24,263 651,986 18,526 / 29,544 10,206 / 161,124 28,732 / 190,668 test 696 9,287 250,901 7,877 / 11,205 4,396 / 61,652 12,273 / 72,857 In the 6th column we use the term ‘anaphoric’ to indicate the number of zero anaphors that have an antecedent in the text, whereas the total figure is the sum of anaphoric and exophoric zero-anaphors - zeros with a vague / generic reference. Table 1: Italian and Japanese Data Sets coreference are annotated. This dataset consists of articles from Italian Wikipedia, tokenized, POStagged and morphologically analyzed using TextPro, a freely available Italian pipeline (Pianta et al., 2008). We parsed the corpus using the Italian version of the DESR dependency parser (Attardi et al., 2007). In Italian, zero pronouns may only occur as omitted subjects of verbs. Therefore, in the task of zero-anaphora resolution all verbs appearing in a text are considered candidates for zero pronouns, and all gold mentions or system mentions preceding a candidate zero pronoun are considered as candidate antecedents. (In contrast, in the experiments on coreference resolution discussed in the following section, all mentions are considered as both candidate anaphors and candidate antecedents. To compare the results with gold mentions and with system detected mentions, we carried out an evaluation using the mentions automatically detected by the Italian version of the BART system (I-BART) (Poesio et al., 2010), which is freely downloadable.3 Japanese For Japanese coreference we used the NAIST Text Corpus (Iida et al., 2007b) version 1.4β, which contains the annotated data about NP coreference and zero-anaphoric relations. We also used the Kyoto University Text Corpus4 that provides dependency relations information for the same articles as the NAIST Text Corpus. In addition, we also used a Japanese named entity tagger, CaboCha5 for automatically tagging named entity labels. In the NAIST Text Corpus mention boundaries are not annotated, only the heads. Thus, we considered 3http://www.bart-coref.org/ 4http://www-lab25.kuee.kyoto-u.ac.jp/nlresource/corpus.html 5http://chasen.org˜taku/software/cabocha/ as pseudo-mentions all bunsetsu chunks (i.e. base phrases in Japanese) whose head part-of-speech was automatically tagged by the Japanese morphological analyser Chasen6 as either ‘noun’ or ‘unknown word’ according to the NAIST-jdic dictionary.7 For evaluation, articles published from January 1st to January 11th and the editorials from January to August were used for training and articles dated January 14th to 17th and editorials dated October to December are used for testing as done by Taira et al. (2008) and Imamura et al. (2009). Furthermore, in the experiments we only considered subject zero pronouns for a fair comparison to Italian zeroanaphora. 
4.2 Models In these first experiments we compared the three ILP-based models discussed in Section 3: the direct reimplementation of the Denis and Baldridge proposal (i.e., using the same constrains), a version replacing Do-Not-Resolve-Not-Anaphors with BestFirst, and a version with Subject Detection as well. As discussed by Iida et al. (2007a) and Imamura et al. (2009), useful features in intra-sentential zeroanaphora are different from ones in inter-sentential zero-anaphora because in the former problem syntactic information between a zero pronoun and its candidate antecedent is essential, while the latter needs to capture the significance of saliency based on Centering Theory (Grosz et al., 1995). To directly reflect this difference, we created two antecedent identification models; one for intrasentential zero-anaphora, induced using the training instances which a zero pronoun and its candidate antecedent appear in the same sentences, the other for 6http://chasen-legacy.sourceforge.jp/ 7http://sourceforge.jp/projects/naist-jdic/ 807 inter-sentential cases, induced from the remaining training instances. To estimate the feature weights of each classifier, we used MEGAM8, an implementation of the Maximum Entropy model, with default parameter settings. The ILP-based models were compared with the following baselines. PAIRWISE: as in the work by Soon et al. (2001), antecedent identification and anaphoricity determination are simultaneously executed by a single classifier. DS-CASCADE: the model first filters out nonanaphoric candidate anaphors using an anaphoricity determination model, then selects an antecedent from a set of candidate antecedents of anaphoric candidate anaphors using an antecedent identification model. 4.3 Features The feature sets for antecedent identification and anaphoricity determination are briefly summarized in Table 2 and Table 3, respectively. The agreement features such as NUM AGREE and GEN AGREE are automatically derived using TextPro. Such agreement features are not available in Japanese because Japanese words do not contain such information. 4.4 Creating subject detection models To create a subject detection model for Italian, we used the TUT corpus9 (Bosco et al., 2010), which contains manually annotated dependency relations and their labels, consisting of 80,878 tokens in CoNLL format. We induced an maximum entropy classifier by using as items all arcs of dependency relations, each of which is used as a positive instance if its label is subject; otherwise it is used as a negative instance. To train the Japanese subject detection model we used 1,753 articles contained both in the NAIST Text Corpus and the Kyoto University Text Corpus. By merging these two corpora, we can obtain the annotated data including which dependency arc is subject10. To create the training instances, any pair of a predicate and its dependent are extracted, each of 8http://www.cs.utah.edu/˜hal/megam/ 9http://www.di.unito.it/˜tutreeb/ 10Note that Iida et al. (2007b) referred to this relation as ‘nominative’. feature description SUBJ PRE 1 if subject is included in the preceding words of ZERO in a sentence; otherwise 0. TOPIC PRE* 1 if topic case marker appears in the preceding words of ZERO in a sentence; otherwise 0. NUM PRE (GEN PRE) 1 if a candidate which agrees with ZERO with regards to number (gender) is included in the set of NP; otherwise 0. FIRST SENT 1 if ZERO appears in the first sentence of a text; otherwise 0. 
FIRST WORD 1 if the predicate which has ZERO is the first word in a sentence; otherwise 0. POS / LEMMA / DEP LABEL part-of-speech / dependency label / lemma of the predicate which has ZERO. D POS / D LEMMA / D DEP LABEL part-of-speech / dependency label / lemma of the dependents of the predicate which has ZERO. PATH* dependency labels (functional words) of words intervening between a ZERO and the sentence head The features marked with ‘*’ used only in Japanese. Table 3: Features for anaphoricity determination which is judged as positive if its relation is subject; as negative otherwise. As features for Italian, we used lemmas, PoS tag of a predicate and its dependents as well as their morphological information (i.e. gender and number) automatically computed by TextPro (Pianta et al., 2008). For Japanese, the head lemmas of predicate and dependent chunks as well as the functional words involved with these two chunks were used as features. One case specially treated is when a dependent is placed as an adnominal constituent of a predicate, as in this case relation estimation of dependency arcs is difficult. In such case we instead use the features shown in Table 2 for accurate estimation. 4.5 Results with zero anaphora only In zero anaphora resolution, we need to find all predicates that have anaphoric unrealized subjects (i.e. zero pronouns which have an antecedent in a text), and then identify an antecedent for each such argument. The Italian and Japanese test data sets contain 4,065 and 25,467 verbal predicates respectively. The performance of each model at zero-anaphora detection and resolution is shown in Table 4, using recall 808 feature description HEAD LEMMA characters of the head lemma in NP. POS part-of-speech of NP. DEFINITE 1 if NP contains the article corresponding to DEFINITE ‘the’; otherwise 0. DEMONSTRATIVE 1 if NP contains the article corresponding to DEMONSTRATIVE such as ‘that’ and ‘this’; otherwise 0. POSSESSIVE 1 if NP contains the article corresponding to POSSESSIVE such as ‘his’ and ‘their’; otherwise 0. CASE MARKER** case marker followed by NP, such as ‘wa (topic)’, ‘ga (subject)’, ‘o (object)’. DEP LABEL* dependency label of NP. COOC MI** the score of well-formedness model estimated from a large number of triplets ⟨NP, Case, Predicate⟩. FIRST SENT 1 if NP appears in the first sentence of a text; otherwise 0. FIRST MENTION 1 if NP first appears in the set of candidate antecedents; otherwise 0. CL RANK** a rank of NP in forward looking-center list based on Centering Theory (Grosz et al., 1995) CL ORDER** a order of NP in forward looking-center list based on Centering Theory (Grosz et al., 1995) PATH dependency labels (functional words) of words intervening between a ZERO and NP NUM (DIS)AGREE 1 if NP (dis)agrees with ZERO with regards to number; otherwise 0. GEN (DIS)AGREE 1 if NP (dis)agrees with ZERO with regards to gender; otherwise 0. HEAD MATCH 1 if ANA and NP have the same head lemma; otherwise 0. REGEX MATCH 1 if the string of NP subsumes the string of ANA; otherwise 0. COMP MATCH 1 if ANA and NP have the same string; otherwise 0. NP, ANA and ZERO stand for a candidate antecedent, a candidate anaphor and a candidate zero pronoun respectively. The features marked with ‘*’ are only used in Italian, while the features marked with ‘**’ are only used in Japanese. 
Table 2: Features used for antecedent identification Italian Japanese system mentions gold mentions model R P F R P F R P F PAIRWISE 0.864 0.172 0.287 0.864 0.172 0.287 0.286 0.308 0.296 DS-CASCADE 0.396 0.684 0.502 0.404 0.697 0.511 0.345 0.194 0.248 ILP 0.905 0.034 0.065 0.929 0.028 0.055 0.379 0.238 0.293 ILP +BF 0.803 0.375 0.511 0.834 0.369 0.511 0.353 0.256 0.297 ILP +SUBJ 0.900 0.034 0.066 0.927 0.028 0.055 0.371 0.315 0.341 ILP +BF +SUBJ 0.777 0.398 0.526 0.815 0.398 0.534 0.345 0.348 0.346 Table 4: Results on zero pronouns / precision / F over link detection as a metric (model theoretic metrics do not apply for this task as only subsets of coreference chains are considered). As can be seen from Table 4, the ILP version with DoNot-Resolve-Non-Anaphors performs no better than the baselines for either languages, but in both languages replacing that constraint with Best-First results in a performance above the baselines; adding Subject Detection results in further improvement for both languages. Notice also that the performance of the models on Italian is quite a bit higher than for Japanese although the dataset is much smaller, possibly meaning that the task is easier in Italian. 5 Experiment 2: coreference resolution for all anaphors In a second series of experiments we evaluated the performance of our models together with a full coreference system resolving all anaphors, not just zeros. 5.1 Separating vs combining classifiers Different types of nominal expressions display very different anaphoric behavior: e.g., pronoun resolution involves very different types of information from nominal expression resolution, depending more on syntactic information and on the local context and less on commonsense knowledge. But the most common approach to coreference resolu809 tion (Soon et al., 2001; Ng and Cardie, 2002, etc.) is to use a single classifier to identify antecedents of all anaphoric expressions, relying on the ability of the machine learning algorithm to learn these differences. These models, however, often fail to capture the differences in anaphoric behavior between different types of expressions–one of the reasons being that the amount of training instances is often too small to learn such differences.11 Using different models would appear to be key in the case of zeroanaphora resolution, which differs even more from the rest of anaphora resolution, e.g., in being particularly sensitive to local salience, as amply discussed in the literature on Centering discussed earlier. To test the hypothesis that using what we will call separated models for zero anaphora and everything else would work better than combined models induced from all the learning instances, we manually split the training instances in terms of these two anaphora types and then created two classifiers for antecedent identification: one for zero-anaphora, the other for NP-anaphora, separately induced from the corresponding training instances. Likewise, anaphoricity determination models were separately induced with regards to these two anaphora types. 5.2 Results with all anaphors In Table 5 and Table 6 we show the (MUC scorer) results obtained by adding the zero anaphoric resolution models proposed in this paper to both a combined and a separated classifier. For the separated classifier, we use the ILP+BF model for explicitly realized NPs, and different ILP models for zeros. The results show that the separated classifier works systematically better than a combined classifier. 
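To make the separated-model setup of Section 5.1 concrete, the sketch below shows one way training instances could be routed to type-specific classifiers. It is a simplified illustration under the assumption that each instance carries an anaphor-type tag ('zero' vs. 'np') and that the train callback stands in for whatever learner is used (e.g., a maximum-entropy trainer); it is not the authors' code.

```python
from collections import defaultdict

def split_by_anaphor_type(instances):
    """Partition (features, label, anaphor_type) triples so that zero-anaphora and
    NP-anaphora each get their own antecedent-identification (and anaphoricity)
    classifier, instead of a single combined model."""
    buckets = defaultdict(list)
    for features, label, anaphor_type in instances:   # anaphor_type: 'zero' or 'np'
        buckets[anaphor_type].append((features, label))
    return buckets

def train_separated(instances, train):
    """Train one model per anaphor type; `train` is any supervised learner."""
    return {atype: train(data) for atype, data in split_by_anaphor_type(instances).items()}

# toy usage with a trivial 'learner' that just memorises the majority label
def majority_learner(data):
    labels = [y for _, y in data]
    majority = max(set(labels), key=labels.count)
    return lambda feats: majority

models = train_separated(
    [({"same_sent": 1}, 1, "zero"),
     ({"same_sent": 0}, 0, "zero"),
     ({"str_match": 1}, 1, "np")],
    majority_learner)
print(models["np"]({"str_match": 1}))   # the NP-anaphora model answers, not the zero model
```

At test time each candidate anaphor is simply routed to the model matching its type, which is what allows the two phenomena to rely on different evidence (syntactic paths for intra-sentential zeros, salience for the rest).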
For both Italian and Japanese the ILP+BF+SUBJ model works clearly better than the baselines, whereas simply applying the original Denis and Baldridge model unchanged to this case we obtain worse results than the baselines. For Italian we could also compare our results with those obtained on the same dataset by one of the two systems that participated to the Italian section of SEMEVAL, I-BART. I-BART’s results are clearly better than those with both baselines, but also clearly in11E.g., the entire MUC-6 corpus contains a grand total of 3 reflexive pronouns. Japanese combined separated model R P F R P F PAIRWISE 0.345 0.236 0.280 0.427 0.240 0.308 DS-CASCADE 0.207 0.592 0.307 0.291 0.488 0.365 ILP 0.381 0.330 0.353 0.490 0.304 0.375 ILP +BF 0.349 0.390 0.368 0.446 0.340 0.386 ILP +SUBJ 0.376 0.366 0.371 0.484 0.353 0.408 ILP +BF +SUBJ 0.344 0.450 0.390 0.441 0.415 0.427 Table 6: Results for overall coreference: Japanese (MUC score) ferior to the results obtained with our models. In particular, the effect of introducing the separated model with ILP+BF+SUBJ is more significant when using the system detected mentions; it obtained performance more than 13 points better than I-BART when the model referred to the system detected mentions. 6 Related work We are not aware of any previous machine learning model for zero anaphora in Italian, but there has been quite a lot of work on Japanese zeroanaphora (Iida et al., 2007a; Taira et al., 2008; Imamura et al., 2009; Taira et al., 2010; Sasano et al., 2009). In work such as Taira et al. (2008) and Imamura et al. (2009), zero-anaphora resolution is considered as a sub-task of predicate argument structure analysis, taking the NAIST text corpus as a target data set. Taira et al. (2008) and Taira et al. (2010) applied decision lists and transformation-based learning respectively in order to manually analyze which clues are important for each argument assignment. Imamura et al. (2009) also tackled to the same problem setting by applying a pairwise classifier for each argument. In their approach, a ‘null’ argument is explicitly added into the set of candidate argument to learn the situation where an argument of a predicate is ‘exophoric’. They reported their model achieved better performance than the work by Taira et al. (2008). Iida et al. (2007a) also used the NAIST text corpus. They adopted the BACT learning algorithm (Kudo and Matsumoto, 2004) to effectively learn subtrees useful for both antecedent identification and zero pronoun detection. Their model drastically outperformed a simple pairwise model, but it is still performed as a cascaded process. Incorporating 810 Italian system mentions gold mentions combined separated combined separated model R P F R P F R P F R P F PAIRWISE 0.508 0.208 0.295 0.472 0.241 0.319 0.582 0.261 0.361 0.566 0.314 0.404 DS-CASCADE 0.225 0.553 0.320 0.217 0.574 0.315 0.245 0.609 0.349 0.246 0.686 0.362 I-BART 0.324 0.294 0.308 – – – 0.532 0.441 0.482 – – – ILP 0.539 0.321 0.403 0.535 0.316 0.397 0.614 0.369 0.461 0.607 0.384 0.470 ILP +BF 0.471 0.404 0.435 0.483 0.409 0.443 0.545 0.517 0.530 0.563 0.519 0.540 ILP +SUBJ 0.537 0.325 0.405 0.534 0.318 0.399 0.611 0.372 0.463 0.606 0.387 0.473 ILP +BF +SUBJ 0.464 0.410 0.435 0.478 0.418 0.446 0.538 0.527 0.533 0.559 0.536 0.547 R: Recall, P: Precision, F: f-score, BF: best first constraint, SUBJ: subject detection model. Table 5: Results for overall coreference: Italian (MUC score) their model into the ILP formulation proposed here looks like a promising further extension. 
Sasano et al. (2009) obtained interesting experimental results about the relationship between zero-anaphora resolution and the scale of automatically acquired case frames. In their work, case frames were acquired from a very large corpus consisting of 100 billion words. They also proposed a probabilistic model of Japanese zero-anaphora in which an argument assignment score is estimated on the basis of the automatically acquired case frames. They concluded that case frames acquired from larger corpora lead to a better f-score on zero-anaphora resolution. In contrast to these approaches for Japanese, the participants in Semeval 2010 task 1 (in particular the Italian coreference task) solved the problem with a single coreference classifier, not distinguishing zero-anaphora from the other types of anaphora (Kobdani and Schütze, 2010; Poesio et al., 2010). Our approach, on the other hand, shows that separating the two problems contributes to improved performance on Italian zero-anaphora. Although we used gold mentions in our evaluations, mention detection is also essential. As a next step, we therefore need to consider ways of incorporating a mention detection model into the ILP formulation.

7 Conclusion

In this paper, we developed a new ILP-based model of zero anaphora detection and resolution that extends the coreference resolution model proposed by Denis and Baldridge (2007) by introducing modified constraints and a subject detection model. We evaluated this model both individually and as part of the overall coreference task for both Italian and Japanese zero anaphora, obtaining clear improvements in performance. One avenue for future research is motivated by the observation that, whereas introducing the subject detection model and the best-first constraint results in higher precision while maintaining recall compared to the baselines, that precision is still low. One of the major sources of error is that zero pronouns are frequently used in Italian and Japanese in contexts in which English would use so-called generic they: “I walked into the hotel and (they) said ...”. In such cases, the zero pronoun detection model is often incorrect. We are considering adding a generic-they detection component. We also intend to experiment with more sophisticated antecedent identification models in the ILP framework. In this paper, we used a very basic pairwise classifier; however, Yang et al. (2008) and Iida et al. (2003) showed that the relative comparison of two candidate antecedents leads to better accuracy than the pairwise model. These approaches do not output absolute probabilities, but only the relative significance of two candidates, and therefore cannot be directly integrated into the ILP framework. We plan to examine ways of appropriately estimating an absolute score from a set of relative scores for further refinement. Finally, we would like to test our model on English constructions which closely resemble zero anaphora. One example was studied in the Semeval 2010 ‘Linking Events and their Participants in Discourse’ task, which provides data about null instantiation, i.e. omitted arguments of predicates as in “We arrived φgoal at 8pm.” (Unfortunately the dataset available for SEMEVAL was very small.) Another interesting area of application of these techniques would be VP ellipsis.

Acknowledgments

Ryu Iida’s stay in Trento was supported by the Excellent Young Researcher Overseas Visit Program of the Japan Society for the Promotion of Science (JSPS).
Massimo Poesio was supported in part by the Provincia di Trento Grande Progetto LiveMemories, which also funded the creation of the Italian corpus used in this study. We also wish to thank Francesca Delogu, Kepa Rodriguez, Olga Uryupina and Yannick Versley for much help with the corpus and BART. References G. Attardi, F. Dell’Orletta, M. Simi, A. Chanev, and M. Ciaramita. 2007. Multilingual dependency parsing and domain adaptation using desr. In Proc. of the CoNLL Shared Task Session of EMNLP-CoNLL 2007, Prague. C. Bosco, S. Montemagni, A. Mazzei, V. Lombardo, F. Dell’Orletta, A. Lenci, L. Lesmo, G. Attardi, M. Simi, A. Lavelli, J. Hall, J. Nilsson, and J. Nivre. 2010. Comparing the influence of different treebank annotations on dependency parsing. In Proceedings of LREC, pages 1794–1801. P. Denis and J. Baldridge. 2007. Joint determination of anaphoricity and coreference resolution using integer programming. In Proc. of HLT/NAACL, pages 236– 243. B. Di Eugenio. 1998. Centering in Italian. In M. A. Walker, A. K. Joshi, and E. F. Prince, editors, Centering Theory in Discourse, chapter 7, pages 115–138. Oxford. B. J. Grosz, A. K. Joshi, and S. Weinstein. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, 21(2):203–226. R. Iida, K. Inui, H. Takamura, and Y. Matsumoto. 2003. Incorporating contextual cues in trainable models for coreference resolution. In Proceedings of the 10th EACL Workshop on The Computational Treatment of Anaphora, pages 23–30. R. Iida, K. Inui, and Y. Matsumoto. 2007a. Zeroanaphora resolution by learning rich syntactic pattern features. ACM Transactions on Asian Language Information Processing (TALIP), 6(4). R. Iida, M. Komachi, K. Inui, and Y. Matsumoto. 2007b. Annotating a Japanese text corpus with predicateargument and coreference relations. In Proceeding of the ACL Workshop ‘Linguistic Annotation Workshop’, pages 132–139. K. Imamura, K. Saito, and T. Izumi. 2009. Discriminative approach to predicate-argument structure analysis with zero-anaphora resolution. In Proceedings of ACL-IJCNLP, Short Papers, pages 85–88. H. Isozaki and T. Hirao. 2003. Japanese zero pronoun resolution based on ranking rules and machine learning. In Proceedings of EMNLP, pages 184–191. M. Kameyama. 1985. Zero Anaphora: The case of Japanese. Ph.D. thesis, Stanford University. H. Kobdani and H. Sch¨utze. 2010. Sucre: A modular system for coreference resolution. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 92–95. T. Kudo and Y. Matsumoto. 2004. A boosting algorithm for classification of semi-structured text. In Proceedings of EMNLP, pages 301–308. V. Ng and C. Cardie. 2002. Improving machine learning approaches to coreference resolution. In Proceedings of the 40th ACL, pages 104–111. E. Pianta, C. Girardi, and R. Zanoli. 2008. The TextPro tool suite. In In Proceedings of LREC, pages 28–30. M. Poesio, O. Uryupina, and Y. Versley. 2010. Creating a coreference resolution system for Italian. In Proceedings of LREC. M. Recasens, L. M`arquez, E. Sapena, M. A. Mart´ı, M. Taul´e, V. Hoste, M. Poesio, and Y. Versley. 2010. Semeval-2010 task 1: Coreference resolution in multiple languages. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 1–8. K-J. Rodriguez, F. Delogu, Y. Versley, E. Stemle, and M. Poesio. 2010. Anaphoric annotation of wikipedia and blogs in the live memories corpus. In Proc. LREC. D. Roth and W.-T. Yih. 2004. 
A linear programming formulation for global inference in natural language tasks. In Proc. of CONLL. R. Sasano, D. Kawahara, and S. Kurohashi. 2009. The effect of corpus size on case frame acquisition for discourse analysis. In Proceedings of HLT/NAACL, pages 521–529. K. Seki, A. Fujii, and T. Ishikawa. 2002. A probabilistic method for analyzing Japanese anaphora integrating zero pronoun detection and resolution. In Proceedings of the 19th COLING, pages 911–917. W. M. Soon, H. T. Ng, and D. C. Y. Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521– 544. 812 H. Taira, S. Fujita, and M. Nagata. 2008. A Japanese predicate argument structure analysis using decision lists. In Proceedings of EMNLP, pages 523–532. H. Taira, S. Fujita, and M. Nagata. 2010. Predicate argument structure analysis using transformation based learning. In Proceedings of the ACL 2010 Conference Short Papers, pages 162–167. M. A. Walker, M. Iida, and S. Cote. 1994. Japanese discourse and the process of centering. Computational Linguistics, 20(2):193–232. X. Yang, J. Su, and C. L. Tan. 2008. Twin-candidate model for learning-based anaphora resolution. Computational Linguistics, 34(3):327–356. 813
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 814–824, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Coreference Resolution with World Knowledge Altaf Rahman and Vincent Ng Human Language Technology Research Institute University of Texas at Dallas Richardson, TX 75083-0688 {altaf,vince}@hlt.utdallas.edu Abstract While world knowledge has been shown to improve learning-based coreference resolvers, the improvements were typically obtained by incorporating world knowledge into a fairly weak baseline resolver. Hence, it is not clear whether these benefits can carry over to a stronger baseline. Moreover, since there has been no attempt to apply different sources of world knowledge in combination to coreference resolution, it is not clear whether they offer complementary benefits to a resolver. We systematically compare commonly-used and under-investigated sources of world knowledge for coreference resolution by applying them to two learning-based coreference models and evaluating them on documents annotated with two different annotation schemes. 1 Introduction Noun phrase (NP) coreference resolution is the task of determining which NPs in a text or dialogue refer to the same real-world entity. The difficulty of the task stems in part from its reliance on world knowledge (Charniak, 1972). To exemplify, consider the following text fragment. Martha Stewart is hoping people don’t run out on her. The celebrity indicted on charges stemming from . . . Having the (world) knowledge that Martha Stewart is a celebrity would be helpful for establishing the coreference relation between the two NPs. One may argue that employing heuristics such as subject preference or syntactic parallelism (which prefers resolving an NP to a candidate antecedent that has the same grammatical role) in this example would also allow us to correctly resolve the celebrity (Mitkov, 2002), thereby obviating the need for world knowledge. However, since these heuristics are not perfect, complementing them with world knowledge would be an important step towards bringing coreference systems to the next level of performance. Despite the usefulness of world knowledge for coreference resolution, early learning-based coreference resolvers have relied mostly on morphosyntactic features (e.g., Soon et al. (2001), Ng and Cardie (2002), Yang et al. (2003)). With recent advances in lexical semantics research and the development of large-scale knowledge bases, researchers have begun to employ world knowledge for coreference resolution. World knowledge is extracted primarily from three data sources, web-based encyclopedia (e.g., Ponzetto and Strube (2006), Uryupina et al. (2011)), unannotated data (e.g., Daum´e III and Marcu (2005), Ng (2007)), and coreferenceannotated data (e.g., Bengtson and Roth (2008)). While each of these three sources of world knowledge has been shown to improve coreference resolution, the improvements were typically obtained by incorporating world knowledge (as features) into a baseline resolver composed of a rather weak coreference model (i.e., the mention-pair model) and a small set of features (i.e., the 12 features adopted by Soon et al.’s (2001) knowledge-lean approach). As a result, some questions naturally arise. First, can world knowledge still offer benefits when used in combination with a richer set of features? 
Second, since automatically extracted world knowledge is typically noisy (Ponzetto and Poesio, 2009), are recently-developed coreference models more noisetolerant than the mention-pair model, and if so, can they profit more from the noisily extracted world knowledge? Finally, while different world knowl814 edge sources have been shown to be useful when applied in isolation to a coreference system, do they offer complementary benefits and therefore can further improve a resolver when applied in combination? We seek answers to these questions by conducting a systematic evaluation of different world knowledge sources for learning-based coreference resolution. Specifically, we (1) derive world knowledge from encyclopedic sources that are underinvestigated for coreference resolution, including FrameNet (Baker et al., 1998) and YAGO (Suchanek et al., 2007), in addition to coreference-annotated data and unannotated data; (2) incorporate such knowledge as features into a richer baseline feature set that we previously employed (Rahman and Ng, 2009); and (3) evaluate their utility using two coreference models, the traditional mention-pair model (Soon et al., 2001) and the recently developed cluster-ranking model (Rahman and Ng, 2009). Our evaluation corpus contains 410 documents, which are coreference-annotated using the ACE annotation scheme as well as the OntoNotes annotation scheme (Hovy et al., 2006). By evaluating on two sets of coreference annotations for the same set of documents, we can determine whether the usefulness of world knowledge sources for coreference resolution is dependent on the underlying annotation scheme used to annotate the documents. 2 Preliminaries In this section, we describe the corpus, the NP extraction methods, the coreference models, and the evaluation measures we will use in our evaluation. 2.1 Data Set We evaluate on documents that are coreferenceannotated using both the ACE annotation scheme and the OntoNotes annotation scheme, so that we can examine whether the usefulness of our world knowledge sources is dependent on the underlying coreference annotation scheme. Specifically, our data set is composed of the 410 English newswire articles that appear in both OntoNotes-2 and ACE 2004/2005. We partition the documents into a training set and a test set following a 80/20 ratio. ACE and OntoNotes employ different guidelines to annotate coreference chains. A major difference between the two annotation schemes is that ACE only concerns establishing coreference chains among NPs that belong to the ACE entity types, whereas OntoNotes does not have this restriction. Hence, the OntoNotes annotation scheme should produce more coreference chains (i.e., nonsingleton coreference clusters) than the ACE annotation scheme for a given set of documents. For our data set, the OntoNotes scheme yielded 4500 chains, whereas the ACE scheme yielded only 3637 chains. Another difference between the two annotation schemes is that singleton clusters are annotated in ACE but not OntoNotes. As discussed below, the presence of singleton clusters may have an impact on NP extraction and coreference evaluation. 2.2 NP Extraction Following common practice, we employ different methods to extract NPs from the documents annotated with the two annotation schemes. To extract NPs from the ACE-annotated documents, we train a mention extractor on the training texts (see Section 5.1 of Rahman and Ng (2009) for details), which recalls 83.6% of the NPs in the test set. 
On the other hand, to extract NPs from the OntoNotes-annotated documents, the same method should not be applied. To see the reason, recall that only the NPs in non-singleton clusters are annotated in these documents. Training a mention extractor on these NPs implies that we are learning to extract non-singleton NPs, which are typically much smaller in number than the entire set of NPs. In other words, doing so could substantially simplify the coreference task. Consequently, we follow the approach adopted by traditional learning-based resolvers and employ an NP chunker to extract NPs. Specifically, we use the markable identification system in the Reconcile resolver (Stoyanov et al., 2010) to extract NPs from the training and test texts. This identifier recalls 77.4% of the NPs in the test set. 2.3 Coreference Models We evaluate the utility of world knowledge using the mention-pair model and the cluster-ranking model. 2.3.1 Mention-Pair Model The mention-pair (MP) model is a classifier that determines whether two NPs are coreferent or not. 815 Each instance i(NPj, NPk) corresponds to NPj and NPk, and is represented by a Baseline feature set consisting of 39 features. Linguistically, these features can be divided into four categories: string-matching, grammatical, semantic, and positional. These features can also be categorized based on whether they are relational or not. Relational features capture the relationship between NPj and NPk, whereas nonrelational features capture the linguistic property of one of these two NPs. Since space limitations preclude a description of these features, we refer the reader to Rahman and Ng (2009) for details. We follow Soon et al.’s (2001) method for creating training instances: we create (1) a positive instance for each anaphoric NP, NPk, and its closest antecedent, NPj; and (2) a negative instance for NPk paired with each of the intervening NPs, NPj+1, NPj+2, . . ., NPk−1. The classification of a training instance is either positive or negative, depending on whether the two NPs are coreferent in the associated text. To train the MP model, we use the SVM learning algorithm from SVMlight (Joachims, 2002).1 After training, the classifier is used to identify an antecedent for an NP in a test text. Specifically, each NP, NPk, is compared in turn to each preceding NP, NPj, from right to left, and NPj is selected as its antecedent if the pair is classified as coreferent. The process terminates as soon as an antecedent is found for NPk or the beginning of the text is reached. Despite its popularity, the MP model has two major weaknesses. First, since each candidate antecedent for an NP to be resolved (henceforth an active NP) is considered independently of the others, this model only determines how good a candidate antecedent is relative to the active NP, but not how good a candidate antecedent is relative to other candidates. So, it fails to answer the critical question of which candidate antecedent is most probable. Second, it has limitations in its expressiveness: the information extracted from the two NPs alone may not be sufficient for making a coreference decision. 2.3.2 Cluster-Ranking Model The cluster-ranking (CR) model addresses the two weaknesses of the MP model by combining the strengths of the entity-mention model (e.g., Luo et 1For this and subsequent uses of the SVM learner in our experiments, we set all parameters to their default values. al. (2004), Yang et al. (2008)) and the mentionranking model (e.g., Denis and Baldridge (2008)). 
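Before turning to the details of the CR model, the Soon et al.-style instance creation and the closest-first decoding procedure described above for the MP model can be summarised in a short sketch. This is a schematic reconstruction, not the authors' code: closest_antecedent stands in for a lookup into the gold coreference annotation, and coreferent for the decision of the trained SVM classifier.

```python
def soon_style_instances(num_nps, closest_antecedent):
    """Training instances a la Soon et al. (2001): for an anaphoric NP k whose
    closest antecedent is NP j, the pair (j, k) is positive and every pair
    (m, k) with j < m < k is negative."""
    instances = []
    for k in range(num_nps):
        j = closest_antecedent(k)          # index of the closest antecedent, or None
        if j is None:
            continue                       # non-anaphoric NPs yield no training instances
        instances.append(((j, k), 1))
        instances.extend(((m, k), 0) for m in range(j + 1, k))
    return instances

def closest_first_decode(nps, coreferent):
    """Test-time resolution: compare NP k with each preceding NP from right to
    left and link it to the first one the classifier labels coreferent."""
    antecedent = {}
    for k in range(len(nps)):
        for j in range(k - 1, -1, -1):
            if coreferent(nps[j], nps[k]):
                antecedent[k] = j
                break                      # stop as soon as an antecedent is found
    return antecedent

# toy run: NPs are strings and the 'classifier' is exact string match
print(closest_first_decode(["Obama", "the president", "Obama"], lambda a, b: a == b))
# -> {2: 0}
```

The toy decoder links the second "Obama" to the first, skipping the non-matching intervening NP, which mirrors the right-to-left search terminating at the first positive classification.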
Specifically, the CR model ranks the preceding clusters for an active NP so that the highest-ranked cluster is the one to which the active NP should be linked. Employing a ranker addresses the first weakness, as a ranker allows all candidates to be compared simultaneously. Considering preceding clusters rather than antecedents as candidates addresses the second weakness, as cluster-level features (i.e., features that are defined over any subset of NPs in a preceding cluster) can be employed. Details of the CR model can be found in Rahman and Ng (2009). Since the CR model ranks preceding clusters, a training instance i(cj, NPk) represents a preceding cluster, cj, and an anaphoric NP, NPk. Each instance consists of features that are computed based solely on NPk as well as cluster-level features, which describe the relationship between cj and NPk. Motivated in part by Culotta et al. (2007), we create cluster-level features from the relational features in our feature set using four predicates: NONE, MOSTFALSE, MOST-TRUE, and ALL. Specifically, for each relational feature X, we first convert X into an equivalent set of binary-valued features if it is multivalued. Then, for each resulting binary-valued feature Xb, we create four binary-valued cluster-level features: (1) NONE-Xb is true when Xb is false between NPk and each NP in cj; (2) MOST-FALSE-Xb is true when Xb is true between NPk and less than half (but at least one) of the NPs in cj; (3) MOST-TRUEXb is true when Xb is true between NPk and at least half (but not all) of the NPs in cj; and (4) ALL-Xb is true when Xb is true between NPk and each NP in cj. We train a cluster ranker to jointly learn anaphoricity determination and coreference resolution using SVMlight’s ranker-learning algorithm. Specifically, for each NP, NPk, we create a training instance between NPk and each preceding cluster cj using the features described above. Since we are learning a joint model, we need to provide the ranker with the option to start a new cluster by creating an additional training instance that contains the nonrelational features describing NPk. The rank value of a training instance i(cj, NPk) created for NPk is the rank of cj among the competing clusters. If NPk is anaphoric, its rank is HIGH if NPk belongs to cj, and LOW otherwise. If NPk is non-anaphoric, its rank is 816 LOW unless it is the additional training instance described above, which has rank HIGH. After training, the cluster ranker processes the NPs in a test text in a left-to-right manner. For each active NP, NPk, we create test instances for it by pairing it with each of its preceding clusters. To allow for the possibility that NPk is non-anaphoric, we create an additional test instance as during training. All these test instances are then presented to the ranker. If the additional test instance is assigned the highest rank value, then we create a new cluster containing NPk. Otherwise, NPk is linked to the cluster that has the highest rank. Note that the partial clusters preceding NPk are formed incrementally based on the predictions of the ranker for the first k −1 NPs. 2.4 Evaluation Measures We employ two commonly-used scoring programs, B3 (Bagga and Baldwin, 1998) and CEAF (Luo, 2005), both of which report results in terms of recall (R), precision (P), and F-measure (F) by comparing the gold-standard (i.e., key) partition, KP, against the system-generated (i.e., response) partition, RP. Briefly, B3 computes the R and P values of each NP and averages these values at the end. 
Specifically, for each NP, NPj, B3 first computes the number of common NPs in KPj and RPj, the clusters containing NPj in KP and RP, respectively, and then divides this number by |KPj| and |RPj| to obtain the R and P values of NPj, respectively. On the other hand, CEAF finds the best one-to-one alignment between the key clusters and the response clusters. A complication arises when B3 is used to score a response partition containing automatically extracted NPs. Recall that B3 constructs a mapping between the NPs in the response and those in the key. Hence, if the response is generated using goldstandard NPs, then every NP in the response is mapped to some NP in the key and vice versa. In other words, there are no twinless (i.e., unmapped) NPs (Stoyanov et al., 2009). This is not the case when automatically extracted NPs are used, but the original description of B3 does not specify how twinless NPs should be scored (Bagga and Baldwin, 1998). To address this problem, we set the recall and precision of a twinless NP to zero, regardless of whether the NP appears in the key or the response. Note that CEAF can compare partitions with twinless NPs without any modification, since it operates by finding the best alignment between the clusters in the two partitions. Additionally, in order not to over-penalize a response partition, we remove all the twinless NPs in the response that are singletons. The rationale is simple: since the resolver has successfully identified these NPs as singletons, it should not be penalized, and removing them avoids such penalty. Since B3 and CEAF align NPs/clusters, the lack of singleton clusters in the OntoNotes annotations implies that the resulting scores reflect solely how well a resolver identifies coreference links and do not take into account how well it identifies singleton clusters. 3 Extracting World Knowledge In this section, we describe how we extract world knowledge for coreference resolution from three different sources: large-scale knowledge bases, coreference-annotated data and unannotated data. 3.1 World Knowledge from Knowledge Bases We extract world knowledge from two large-scale knowledge bases, YAGO and FrameNet. 3.1.1 Extracting Knowledge from YAGO We choose to employ YAGO rather than the more popularly-used Wikipedia due to its potentially richer knowledge, which comprises 5 million facts extracted from Wikipedia and WordNet. Each fact is represented as a triple (NPj, rel, NPk), where rel is one of the 90 YAGO relation types defined on two NPs, NPj and NPk. Motivated in part by previous work (Bryl et al., 2010; Uryupina et al., 2011), we employ the two relation types that we believe are most useful for coreference resolution, TYPE and MEANS. TYPE is essentially an IS-A relation. For instance, the triple (AlbertEinstein, TYPE, physicist) denotes the fact that Albert Einstein is a physicist. MEANS provides different ways of expressing an entity, and therefore allows us to deal with synonymy and ambiguity. For instance, the two triples (Einstein, MEANS, AlbertEinstein) and (Einstein, MEANS, AlfredEinstein) denote the facts that Einstein may refer to the physicist Albert Einstein and the musicologist Alfred Einstein, respectively. Hence, the presence of one or 817 both of these relations between two NPs provides strong evidence that the two NPs are coreferent. YAGO’s unification of the information in Wikipedia and WordNet enables it to extract facts that cannot be extracted with Wikipedia or WordNet alone, such as (MarthaStewart, TYPE, celebrity). 
To better appreciate YAGO’s strengths, let us see how this fact was extracted. YAGO first heuristically maps each of the Wiki categories in the Wiki page for Martha Stewart to its semantically closest WordNet synset. For instance, the Wiki category AMERICAN TELEVISION PERSONALITIES is mapped to the synset corresponding to sense #2 of the word personality. Then, given that personality is a direct hyponym of celebrity in WordNet, YAGO extracts the desired fact. This enables YAGO to extract facts that cannot be extracted with Wikipedia or WordNet alone. We incorporate the world knowledge from YAGO into our coreference models as a binary-valued feature. If the MP model is used, the YAGO feature for an instance will have the value 1 if and only if the two NPs involved are in a TYPE or MEANS relation. On the other hand, if the CR model is used, the YAGO feature for an instance involving NPk and preceding cluster c will have the value 1 if and only if NPk has a TYPE or MEANS relation with any of the NPs in c. Since knowledge extraction from webbased encyclopedia is typically noisy (Ponzetto and Poesio, 2009), we use YAGO to determine whether two NPs have a relation only if one NP is a named entity (NE) of type person, organization, or location according to the Stanford NE recognizer (Finkel et al., 2005) and the other NP is a common noun. 3.1.2 Extracting Knowledge from FrameNet FrameNet is a lexico-semantic resource focused on semantic frames (Baker et al., 1998). As a schematic representation of a situation, a frame contains the lexical predicates that can invoke it as well as the frame elements (i.e., semantic roles). For example, the JUDGMENT COMMUNICATION frame describes situations in which a COMMUNICATOR communicates a judgment of an EVALUEE to an ADDRESSEE. This frame has COMMUNICATOR and EVALUEE as its core frame elements and ADDRESSEE as its noncore frame elements, and can be invoked by more than 40 predicates, such as acclaim, accuse, commend, decry, denounce, praise, and slam. To better understand why FrameNet contains potentially useful knowledge for coreference resolution, consider the following text segment: Peter Anthony decries program trading as “limiting the game to a few,” but he is not sure whether he wants to denounce it because ... To establish the coreference relation between it and program trading, it may be helpful to know that decry and denounce appear in the same frame and the two NPs have the same semantic role. This example suggests that features encoding both the semantic roles of the two NPs under consideration and whether the associated predicates are “related” to each other in FrameNet (i.e., whether they appear in the same frame) could be useful for identifying coreference relations. Two points regarding our implementation of these features deserve mention. First, since we do not employ verb sense disambiguation, we consider two predicates related as long as there is at least one semantic frame in which they both appear. Second, since FrameNet-style semantic role labelers are not publicly available, we use ASSERT (Pradhan et al., 2004), a semantic role labeler that provides PropBank-style semantic roles such as ARG0 (the PROTOAGENT, which is typically the subject of a transitive verb) and ARG1 (the PROTOPATIENT, which is typically its direct object). Now, assuming that NPj and NPk are the arguments of two stemmed predicates, predj and predk, we create 15 features using the knowledge extracted from FrameNet and ASSERT as follows. 
First, we encode the knowledge extracted from FrameNet as one of three possible values: (1) predj and predk are in the same frame; (2) they are both predicates in FrameNet but never appear in the same frame; and (3) one or both predicates do not appear in FrameNet. Second, we encode the semantic roles of NPj and NPk as one of five possible values: ARG0ARG0, ARG1-ARG1, ARG0-ARG1, ARG1-ARG0, and OTHERS (the default case).2 Finally, we create 15 binary-valued features by pairing the 3 possible values extracted from FrameNet and the 5 possible values provided by ASSERT. Since these features 2We focus primarily on ARG0 and ARG1 because they are the most important core arguments of a predicate and may provide more useful information than other semantic roles. 818 are computed over two NPs, we can employ them directly for the MP model. Note that by construction, exactly one of these features will have a non-zero value. For the CR model, we extend their definitions so that they can be computed between an NP, NPk, and a preceding cluster, c. Specifically, the value of a feature is 1 if and only if its value between NPk and one of the NPs in c is 1 under its original definition. The above discussion assumes that the two NPs under consideration serve as predicate arguments. If this assumption fails, we will not create any features based on FrameNet for these two NPs. To our knowledge, FrameNet has not been exploited for coreference resolution. However, the use of related verbs is similar in spirit to Bean and Riloff’s (2004) use of patterns for inducing contextual role knowledge, and the use of semantic roles is also discussed in Ponzetto and Strube (2006). 3.2 World Knowledge from Annotated Data Since world knowledge is needed for coreference resolution, a human annotator must have employed world knowledge when coreference-annotating a document. We aim to design features that can “recover” such world knowledge from annotated data. 3.2.1 Features Based on Noun Pairs A natural question is: what kind of world knowledge can we extract from annotated data? We may gather the knowledge that Barack Obama is a U.S. president if we see these two NPs appearing in the same coreference chain. Equally importantly, we may gather the commonsense knowledge needed for determining non-coreference. For instance, we may discover that a lion and a tiger are unlikely to refer to the same real-world entity after realizing that they never appear in the same chain in a large number of annotated documents. Note that any features computed based on WordNet distance or distributional similarity are likely to incorrectly suggest that lion and tiger are coreferent, since the two nouns are similar distributionally and according to WordNet. Given these observations, one may collect the noun pairs from the (coreference-annotated) training data and use them as features to train a resolver. However, for these features to be effective, we need to address data sparseness, as many noun pairs in the training data may not appear in the test data. To improve generalization, we instead create different kinds of noun-pair-based features given an annotated text. To begin with, we preprocess each document. A training text is preprocessed by randomly replacing 10% of its common nouns with the label UNSEEN. If an NP, NPk, is replaced with UNSEEN, all NPs that have the same string as NPk will also be replaced with UNSEEN. A test text is preprocessed differently: we simply replace all NPs whose strings are not seen in the training data with UNSEEN. 
Hence, artificially creating UNSEEN labels from a training text will allow a learner to learn how to handle unseen words in a test text. Next, we create noun-pair-based features for the MP model, which will be used to augment the Baseline feature set. Here, each instance corresponds to two NPs, NPj and NPk, and is represented by three groups of binary-valued features. Unseen features are applicable when both NPj and NPk are UNSEEN. Either an UNSEEN-SAME feature or an UNSEEN-DIFF feature is created, depending on whether the two NPs are the same string before being replaced with the UNSEEN token. Lexical features are applicable when neither NPj nor NPk is UNSEEN. A lexical feature is an ordered pair consisting of the heads of the NPs. For a pronoun or a common noun, the head is the last word of the NP; for a proper name, the head is the entire NP. Semi-lexical features aim to improve generalization, and are applicable when neither NPj nor NPk is UNSEEN. If exactly one of NPj and NPk is tagged as a NE by the Stanford NE recognizer, we create a semi-lexical feature that is identical to the lexical feature described above, except that the NE is replaced with its NE label. On the other hand, if both NPs are NEs, we check whether they are the same string. If so, we create a *NE*-SAME feature, where *NE* is replaced with the corresponding NE label. Otherwise, we check whether they have the same NE tag and a word-subset match (i.e., whether the word tokens in one NP appears in the other’s list of word tokens). If so, we create a *NE*-SUBSAME feature, where *NE* is replaced with their NE label. Otherwise, we create a feature that is the concatenation of the NE labels of the two NPs. The noun-pair-based features for the CR model can be generated using essentially the same method. Specifically, since each instance now corresponds to 819 an NP, NPk, and a preceding cluster, c, we can generate a noun-pair-based feature by applying the above method to NPk and each of the NPs in c, and its value is the number of times it is applicable to NPk and c. 3.2.2 Features Based on Verb Pairs As discussed above, features encoding the semantic roles of two NPs and the relatedness of the associated verbs could be useful for coreference resolution. Rather than encoding verb relatedness, we may replace verb relatedness with the verbs themselves in these features, and have the learner learn directly from coreference-annotated data whether two NPs serving as the objects of decry and denounce are likely to be coreferent or not, for instance. Specifically, assuming that NPj and NPk are the arguments of two stemmed predicates, predj and predk, in the training data, we create five features as follows. First, we encode the semantic roles of NPj and NPk as one of five possible values: ARG0ARG0, ARG1-ARG1, ARG0-ARG1, ARG1-ARG0, and OTHERS (the default case). Second, we create five binary-valued features by pairing each of these five values with the two stemmed predicates. Since these features are computed over two NPs, we can employ them directly for the MP model. Note that by construction, exactly one of these features will have a non-zero value. For the CR model, we extend their definitions so that they can be computed between an NP, NPk, and a preceding cluster, c. Specifically, the value of a feature is 1 if and only if its value between NPk and one of the NPs in c is 1 under its original definition. The above discussion assumes that the two NPs under consideration serve as predicate arguments. 
If this assumption fails, we will not create any features based on verb pairs for these two NPs. 3.3 World Knowledge from Unannotated Data Previous work has shown that syntactic appositions, which can be extracted using heuristics from unannotated documents or parse trees, are a useful source of world knowledge for coreference resolution (e.g., Daum´e III and Marcu (2005), Ng (2007), Haghighi and Klein (2009)). Each extraction is an NP pair such as <Barack Obama, the president> and <Eastern Airlines, the carrier>, where the first NP in the pair is a proper name and the second NP is a common NP. Low-frequency extractions are typically assumed to be noisy and discarded. We combine the extractions produced by Fleischman et al. (2003) and Ng (2007) to form a database consisting of 1.057 million NP pairs, and create a binary-valued feature for our coreference models using this database. If the MP model is used, this feature will have the value 1 if and only if the two NPs appear as a pair in the database. On the other hand, if the CR model is used, the feature for an instance involving NPk and preceding cluster c will have the value 1 if and only if NPk and at least one of the NPs in c appears as a pair in the database. 4 Evaluation 4.1 Experimental Setup As described in Section 2, we use as our evaluation corpus the 411 documents that are coreferenceannotated using the ACE and OntoNotes annotation schemes. Specifically, we divide these documents into five (disjoint) folds of roughly the same size, training the MP model and the CR model using SVMlight on four folds and evaluate their performance on the remaining fold. The linguistic features, as well as the NPs used to create the training and test instances, are computed automatically. We employ B3 and CEAF as described in Section 2.3 to score the output of a coreference system. 4.2 Results and Discussion 4.2.1 Baseline Models Since our goal is to evaluate the effectiveness of the features encoding world knowledge for learningbased coreference resolution, we employ as our baselines the MR model and the CR model trained on the Baseline feature set, which does not contain any features encoding world knowledge. For the MP model, the Baseline feature set consists of the 39 features described in Section 2.3.1; for the CR model, the Baseline feature set consists of the cluster-level features derived from the 39 features used in the Baseline MP model (see Section 2.3.2). Results of the MP model and the CR model employing the Baseline feature set are shown in rows 1 and 8 of Table 1, respectively. 
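As a reminder of how the knowledge sources of Section 3 enter the two models before we walk through the result tables: each source ultimately reduces to a lookup feature of the kind sketched below. This is an illustrative reconstruction (shown for the appositive-pair database of Section 3.3; the YAGO TYPE/MEANS lookups have the same shape), not the authors' implementation.

```python
def knowledge_feature_mp(np_j, np_k, pair_db):
    """MP model: the feature is 1 iff the two NPs occur as a pair in the
    knowledge source (e.g. the 1.057M appositive-pair database)."""
    return int((np_j, np_k) in pair_db or (np_k, np_j) in pair_db)

def knowledge_feature_cr(np_k, preceding_cluster, pair_db):
    """CR model: the feature is 1 iff np_k is paired in the knowledge source
    with at least one NP of the preceding cluster."""
    return int(any(knowledge_feature_mp(np_j, np_k, pair_db)
                   for np_j in preceding_cluster))

pairs = {("barack obama", "the president"), ("eastern airlines", "the carrier")}
print(knowledge_feature_mp("barack obama", "the president", pairs))           # 1
print(knowledge_feature_cr("the president", ["he", "barack obama"], pairs))   # 1
print(knowledge_feature_cr("the carrier", ["barack obama"], pairs))           # 0
```

The same pattern applies to the cluster-level variants of the other feature types: a pairwise test between the active NP and each member of a preceding cluster, aggregated into a single binary or count-valued feature.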
Each row contains the B3 and CEAF results of the corresponding coreference model when it is evaluated using the ACE and 820 ACE OntoNotes B3 CEAF B3 CEAF Feature Set R P F R P F R P F R P F Results for the Mention-Pair Model 1 Base 56.5 69.7 62.4 54.9 66.3 60.0 50.4 56.7 53.3 48.9 54.5 51.5 2 Base+YAGO Types (YT) 57.3 70.3 63.1 58.7 67.5 62.8 51.7 57.9 54.6 50.3 55.6 52.8 3 Base+YAGO Means (YM) 56.7 70.0 62.7 55.3 66.5 60.4 50.6 57.0 53.6 49.3 54.9 51.9 4 Base+Noun Pairs (WP) 57.5 70.6 63.4 55.8 67.4 61.1 51.6 57.6 54.4 49.7 55.4 52.4 5 Base+FrameNet (FN) 56.4 70.9 62.8 54.9 67.5 60.5 50.5 57.5 53.8 48.8 55.1 51.8 6 Base+Verb Pairs (VP) 56.9 71.3 63.3 55.2 67.6 60.8 50.7 57.9 54.0 49.0 55.4 52.0 7 Base+Appositives (AP) 56.9 70.0 62.7 55.6 66.9 60.7 50.3 57.1 53.5 49.1 55.1 51.9 Results for the Cluster-Ranking Model 8 Base 61.7 71.2 66.1 59.6 68.8 63.8 53.4 59.2 56.2 51.1 57.3 54.0 9 Base+YAGO Types (YT) 63.5 72.4 67.6 61.7 70.0 65.5 54.8 60.6 57.6 52.4 58.9 55.4 10 Base+YAGO Means (YM) 62.0 71.4 66.4 59.9 69.1 64.1 53.9 59.5 56.6 51.4 57.5 54.3 11 Base+Noun Pairs (WP) 64.1 73.4 68.4 61.3 70.1 65.4 55.9 62.1 58.8 53.5 59.1 56.2 12 Base+FrameNet (FN) 61.8 71.9 66.5 59.8 69.3 64.2 53.5 60.0 56.6 51.1 57.9 54.3 13 Base+Verb Pairs (VP) 62.1 72.2 66.8 60.1 69.3 64.4 54.4 60.1 57.1 51.9 58.2 54.9 14 Base+Appositives (AP) 63.1 71.7 67.1 60.5 69.4 64.6 54.1 60.1 56.9 51.9 57.8 54.7 Table 1: Results obtained by applying different types of features in isolation to the Baseline system. ACE OntoNotes B3 CEAF B3 CEAF Feature Set R P F R P F R P F R P F Results for the Mention-Pair Model 1 Base 56.5 69.7 62.4 54.9 66.3 60.0 50.4 56.7 53.3 48.9 54.5 51.5 2 Base+YT 57.3 70.3 63.1 58.7 67.5 62.8 51.7 57.9 54.6 50.3 55.6 52.8 3 Base+YT+YM 57.8 70.9 63.6 59.1 67.9 63.2 52.1 58.3 55.0 50.8 56.0 53.3 4 Base+YT+YM+WP 59.5 71.9 65.1 57.5 69.4 62.9 53.1 59.2 56.0 51.5 57.1 54.1 5 Base+YT+YM+WP+FN 59.6 72.1 65.3 57.2 69.7 62.8 53.1 59.5 56.2 51.3 57.4 54.2 6 Base+YT+YM+WP+FN+VP 59.9 72.5 65.6 57.8 70.0 63.3 53.4 59.8 56.4 51.8 57.7 54.6 7 Base+YT+YM+WP+FN+VP+AP 59.7 72.4 65.4 57.6 69.8 63.1 53.2 59.8 56.3 51.5 57.6 54.4 Results for the Cluster-Ranking Model 8 Base 61.7 71.2 66.1 59.6 68.8 63.8 53.4 59.2 56.2 51.1 57.3 54.0 9 Base+YT 63.5 72.4 67.6 61.7 70.0 65.5 54.8 60.6 57.6 52.4 58.9 55.4 10 Base+YT+YM 63.9 72.6 68.0 62.1 70.4 66.0 55.2 61.0 57.9 52.8 59.1 55.8 11 Base+YT+YM+WP 66.1 75.4 70.4 62.9 72.4 67.3 57.7 64.4 60.8 55.1 61.6 58.2 12 Base+YT+YM+WP+FN 66.3 75.1 70.4 63.1 72.3 67.4 57.3 64.1 60.5 54.7 61.2 57.8 13 Base+YT+YM+WP+FN+VP 66.6 75.9 70.9 63.5 72.9 67.9 57.7 64.4 60.8 55.1 61.6 58.2 14 Base+YT+YM+WP+FN+VP+AP 66.4 75.7 70.7 63.3 72.9 67.8 57.6 64.3 60.8 55.0 61.5 58.1 Table 2: Results obtained by adding different types of features incrementally to the Baseline system. OntoNotes annotations as the gold standard. As we can see, the MP model achieves F-measure scores of 62.4 (B3) and 60.0 (CEAF) on ACE and 53.3 (B3) and 51.5 (CEAF) on OntoNotes, and the CR model achieves F-measure scores of 66.1 (B3) and 63.8 (CEAF) on ACE and 56.2 (B3) and 54.0 (CEAF) on OntoNotes. Also, the results show that the CR model is stronger than the MP model, corroborating previous empirical findings (Rahman and Ng, 2009). 4.2.2 Incorporating World Knowledge Next, we examine the usefulness of world knowledge for coreference resolution. The remaining rows in Table 1 show the results obtained when different types of features encoding world knowledge are applied to the Baseline system in isolation. 
The best result for each combination of data set, evaluation measure, and coreference model is boldfaced. Two points deserve mention. First, each type of features improves the Baseline, regardless of the coreference model, the evaluation measure, and the annotation scheme used. This suggests that all these feature types are indeed useful for coreference resolution. It is worth noting that in all but a few cases involving the FrameNet-based and appositive-based features, the rise in F-measure is accompanied by a 821 1. The Bush White House is breeding non-duck ducks the same way the Nixon White House did: It hops on an issue that is unopposable – cleaner air, better treatment of the disabled, better child care. The President came up with a good bill, but now may end up signing the awful bureaucratic creature hatched on Capitol Hill. 2. The tumor, he suggested, developed when the second, normal copy also was damaged. He believed colon cancer might also arise from multiple “hits” on cancer suppressor genes, as it often seems to develop in stages. Table 3: Examples errors introduced by YAGO and FrameNet. simultaneous rise in recall and precision. This is perhaps not surprising: as the use of world knowledge helps discover coreference links, recall increases; and as more (relevant) knowledge is available to make coreference decisions, precision increases. Second, the feature types that yield the best improvement over the Baseline are YAGO TYPE and Noun Pairs. When the MP model is used, the best coreference system improves the Baseline by 1– 1.3% (B3) and 1.3–2.8% (CEAF) in F-measure. On the other hand, when the CR model is used, the best system improves the Baseline by 2.3–2.6% (B3) and 1.7–2.2% (CEAF) in F-measure. Table 2 shows the results obtained when the different types of features are added to the Baseline one after the other. Specifically, we add the feature types in this order: YAGO TYPE, YAGO MEANS, Noun Pairs, FrameNet, Verb Pairs, and Appositives. In comparison to the results in Table 1, we can see that better results are obtained when the different types of features are applied to the Baseline in combination than in isolation, regardless of the coreference model, the evaluation measure, and the annotation scheme used. The best-performing system, which employs all but the Appositive features, outperforms the Baseline by 3.1–3.3% in F-measure when the MR model is used and by 4.1–4.8% in F-measure when the CR model is used. In both cases, the gains in F-measure are accompanied by a simultaneous rise in recall and precision. Overall, these results seem to suggest that the CR model is making more effective use of the available knowledge than the MR model, and that the different feature types are providing complementary information for the two coreference models. 4.3 Example Errors While the different types of features we considered improve the performance of the Baseline primarily via the establishment of coreference links, some of these links are spurious. Sentences 1 and 2 of Table 3 show the spurious coreference links introduced by the CR model when YAGO and FrameNet are used, respectively. In sentence 1, while The President and Bush are coreferent, YAGO caused the CR model to establish the spurious link between The President and Nixon owing to the proximity of the two NPs and the presence of this NP pair in the YAGO TYPE relation. 
In sentence 2, FrameNet caused the CR model to establish the spurious link between The tumor and colon cancer because these two NPs are the ARG0 arguments of develop and arise, which appear in the same semantic frame in FrameNet. 5 Conclusions We have examined the utility of three major sources of world knowledge for coreference resolution, namely, large-scale knowledge bases (YAGO, FrameNet), coreference-annotated data (Noun Pairs, Verb Pairs), and unannotated data (Appositives), by applying them to two learning-based coreference models, the mention-pair model and the clusterranking model, and evaluating them on documents annotated with the ACE and OntoNotes annotation schemes. When applying the different types of features in isolation to a Baseline system that does not employ world knowledge, we found that all of them improved the Baseline regardless of the underlying coreference model, the evaluation measure, and the annotation scheme, with YAGO TYPE and Noun Pairs yielding the largest performance gains. Nevertheless, the best results were obtained when they were applied in combination to the Baseline system. We conclude from these results that the different feature types we considered are providing complementary world knowledge to the coreference resolvers, and while each of them provides fairly small gains, their cumulative benefits can be substantial. 822 Acknowledgments We thank the three reviewers for their invaluable comments on an earlier draft of the paper. This work was supported in part by NSF Grant IIS-0812261. References Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In Proceedings of the Linguistic Coreference Workshop at The First International Conference on Language Resources and Evaluation, pages 563–566. Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and the 17th International Conference on Computational Linguistics, Volume 1, pages 86–90. David Bean and Ellen Riloff. 2004. Unsupervised learning of contextual role knowledge for coreference resolution. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 297–304. Eric Bengtson and Dan Roth. 2008. Understanding the values of features for coreference resolution. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 294–303. Volha Bryl, Claudio Giuliano, Luciano Serafini, and Kateryna Tymoshenko. 2010. Using background knowledge to support coreference resolution. In Proceedings of the 19th European Conference on Artificial Intelligence, pages 759–764. Eugene Charniak. 1972. Towards a Model of Children’s Story Comphrension. AI-TR 266, Artificial Intelligence Laboratory, Massachusetts Institute of Technology. Aron Culotta, Michael Wick, and Andrew McCallum. 2007. First-order probabilistic models for coreference resolution. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 81–88. Hal Daum´e III and Daniel Marcu. 2005. A large-scale exploration of effective global features for a joint entity detection and tracking model. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 97–104. Pascal Denis and Jason Baldridge. 2008. 
Specialized models and ranking for coreference resolution. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 660–669. Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by Gibbs sampling. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 363–370. Michael Fleischman, Eduard Hovy, and Abdessamad Echihabi. 2003. Offline strategies for online question answering: Answering questions before they are asked. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 1–7. Aria Haghighi and Dan Klein. 2009. Simple coreference resolution with rich syntactic and semantic features. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 1152–1161. Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. Ontonotes: The 90% solution. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, pages 57–60. Thorsten Joachims. 2002. Optimizing search engines using clickthrough data. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 133–142. Xiaoqiang Luo, Abe Ittycheriah, Hongyan Jing, Nanda Kambhatla, and Salim Roukos. 2004. A mentionsynchronous coreference resolution algorithm based on the Bell tree. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, pages 135–142. Xiaoqiang Luo. 2005. On coreference resolution performance metrics. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 25– 32. Ruslan Mitkov. 2002. Anaphora Resolution. Longman. Vincent Ng and Claire Cardie. 2002. Improving machine learning approaches to coreference resolution. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 104–111. Vincent Ng. 2007. Shallow semantics for coreference resolution. In Proceedings of the Twentieth International Joint Conference on Artificial Intelligence, pages 1689–1694. Simone Paolo Ponzetto and Massimo Poesio. 2009. State-of-the-art NLP approaches to coreference resolution: Theory and practical recipes. In Tutorial Abstracts of ACL-IJCNLP 2009, page 6. Simone Paolo Ponzetto and Michael Strube. 2006. Exploiting semantic role labeling, WordNet and Wikipedia for coreference resolution. In Proceedings 823 of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 192–199. Sameer S. Pradhan, Wayne H. Ward, Kadri Hacioglu, James H. Martin, and Dan Jurafsky. 2004. Shallow semantic parsing using support vector machines. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 233–240. Altaf Rahman and Vincent Ng. 2009. Supervised models for coreference resolution. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 968–977. Wee Meng Soon, Hwee Tou Ng, and Daniel Chung Yong Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521–544. Veselin Stoyanov, Nathan Gilbert, Claire Cardie, and Ellen Riloff. 2009. Conundrums in noun phrase coreference resolution: Making sense of the state-of-theart. 
In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 656–664. Veselin Stoyanov, Claire Cardie, Nathan Gilbert, Ellen Riloff, David Buttler, and David Hysom. 2010. Coreference resolution with Reconcile. In Proceedings of the ACL 2010 Conference Short Papers, pages 156– 161. Fabian Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. YAGO: A core of semantic knowledge unifying wordnet and wikipedia. In Proceedings of the World Wide Web Conference, pages 697–706. Olga Uryupina, Massimo Poesio, Claudio Giuliano, and Kateryna Tymoshenko. 2011. Disambiguation and filtering methods in using web knowledge for coreference resolution. In Proceedings of the 24th International Florida Artificial Intelligence Research Society Conference. Xiaofeng Yang, Guodong Zhou, Jian Su, and Chew Lim Tan. 2003. Coreference resolution using competitive learning approach. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 176–183. Xiaofeng Yang, Jian Su, Jun Lang, Chew Lim Tan, and Sheng Li. 2008. An entity-mention model for coreference resolution with inductive logic programming. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 843–851. 824
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 825–834, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics How to train your multi bottom-up tree transducer Andreas Maletti Universit¨at Stuttgart, Institute for Natural Language Processing Azenbergstraße 12, 70174 Stuttgart, Germany [email protected] Abstract The local multi bottom-up tree transducer is introduced and related to the (non-contiguous) synchronous tree sequence substitution grammar. It is then shown how to obtain a weighted local multi bottom-up tree transducer from a bilingual and biparsed corpus. Finally, the problem of non-preservation of regularity is addressed. Three properties that ensure preservation are introduced, and it is discussed how to adjust the rule extraction process such that they are automatically fulfilled. 1 Introduction A (formal) translation model is at the core of every machine translation system. Predominantly, statistical processes are used to instantiate the formal model and derive a specific translation device. Brown et al. (1990) discuss automatically trainable translation models in their seminal paper. However, the IBM models of Brown et al. (1993) are stringbased in the sense that they base the translation decision on the words and their surrounding context. Contrary, in the field of syntax-based machine translation, the translation models have full access to the syntax of the sentences and can base their decision on it. A good exposition to both fields is presented in (Knight, 2007). In this paper, we deal exclusively with syntaxbased translation models such as synchronous tree substitution grammars (STSG), multi bottom-up tree transducers (MBOT), and synchronous tree-sequence substitution grammars (STSSG). Chiang (2006) gives a good introduction to STSG, which originate from the syntax-directed translation schemes of Aho and Ullman (1972). Roughly speaking, an STSG has rules in which two linked nonterminals are replaced (at the same time) by two corresponding trees containing terminal and nonterminal symbols. In addition, the nonterminals in the two replacement trees are linked, which creates new linked nonterminals to which further rules can be applied. Henceforth, we refer to these two trees as input and output tree. MBOT have been introduced in (Arnold and Dauchet, 1982; Lilin, 1981) and are slightly more expressive than STSG. Roughly speaking, they allow one replacement input tree and several output trees in a single rule. This change and the presence of states yields many algorithmically advantageous properties such as closure under composition, efficient binarization, and efficient input and output restriction [see (Maletti, 2010)]. Finally, STSSG, which have been derived from rational tree relations (Raoult, 1997), have been discussed by Zhang et al. (2008a), Zhang et al. (2008b), and Sun et al. (2009). They are even more expressive than the local variant of the multi bottom-up tree transducer (LMBOT) that we introduce here and can have several input and output trees in a single rule. In this contribution, we restrict MBOT to a form that is particularly relevant in machine translation. We drop the general state behavior of MBOT and replace it by the common locality tests that are also present in STSG, STSSG, and STAG (Shieber and Schabes, 1990; Shieber, 2007). The obtained device is the local MBOT (LMBOT). 
Maletti (2010) argued the algorithmical advantages of MBOT over STSG and proposed MBOT as an implementation alternative for STSG. In particular, the training procedure would train STSG; i.e., it would not utilize the additional expressive power 825 of MBOT. However, Zhang et al. (2008b) and Sun et al. (2009) demonstrate that the additional expressivity gained from non-contiguous rules greatly improves the translation quality. In this contribution we address this separation and investigate a training procedure for LMBOT that allows non-contiguous fragments while preserving the algorithmic advantages of MBOT. To this end, we introduce a rule extraction and weight training method for LMBOT that is based on the corresponding procedures for STSG and STSSG. However, general LMBOT can be too expressive in the sense that they allow translations that do not preserve regularity. Preservation of regularity is an important property for efficient representations and efficient algorithms [see (May et al., 2010)]. Consequently, we present 3 properties that ensure that an LMBOT preserves regularity. In addition, we shortly discuss how these properties could be enforced in the rule extraction procedure. 2 Notation The set of nonnegative integers is N. We write [k] for the set {i | 1 ≤i ≤k}. We treat functions as special relations. For every relation R ⊆A × B and S ⊆A, we write R(S) = {b ∈B | ∃a ∈S : (a, b) ∈R} R−1 = {(b, a) | (a, b) ∈R} , where R−1 is called the inverse of R. Given an alphabet Σ, the set of all words (or sequences) over Σ is Σ∗, of which the empty word is ε. The concatenation of two words u and w is simply denoted by the juxtaposition uw. The length of a word w = σ1 · · · σk with σi ∈Σ for all i ∈[k] is |w| = k. Given 1 ≤i ≤j ≤k, the (i, j)span w[i, j] of w is σiσi+1 · · · σj. The set TΣ of all Σ-trees is the smallest set T such that σ(t) ∈T for all σ ∈Σ and t ∈T ∗. We generally use bold-face characters (like t) for sequences, and we refer to their elements using subscripts (like ti). Consequently, a tree t consists of a labeled root node σ followed by a sequence t of its children. To improve readability we sometimes write a sequence t1 · · · tk as t1, . . . , tk. The positions pos(t) ⊆N∗of a tree t = σ(t) are inductively defined by pos(t) = {ε}∪pos(t), where pos(t) = [ 1≤i≤|t| {ip | p ∈pos(ti)} . Note that this yields an undesirable difference between pos(t) and pos(t), but it will always be clear from the context whether we refer to a single tree or a sequence. Note that positions are ordered via the (standard) lexicographic ordering. Let t ∈TΣ and p ∈pos(t). The label of t at position p is t(p), and the subtree rooted at position p is t|p. Formally, they are defined by t(p) = ( σ if p = ε t(p) otherwise t(ip) = ti(p) t|p = ( t if p = ε t|p otherwise t|ip = ti|p for all t = σ(t) and 1 ≤i ≤|t|. As demonstrated, these notions are also used for sequences. A position p ∈pos(t) is a leaf (in t) if p1 /∈pos(t). Given a subset NT ⊆Σ, we let ↓NT(t) = {p ∈pos(t) | t(p) ∈NT, p leaf in t} . Later NT will be the set of nonterminals, so that the elements of ↓NT(t) will be the leaf nonterminals of t. We extend the notion to sequences t by ↓NT(t) = [ 1≤i≤|t| {ip | p ∈↓NT(ti)} . We also need a substitution that replaces subtrees. Let p1, . . . , pn ∈pos(t) be pairwise incomparable positions and t1, . . . , tn ∈TΣ. Then t[pi ←ti | 1 ≤i ≤n] denotes the tree that is obtained from t by replacing (in parallel) the subtrees at pi by ti for every i ∈[k]. Finally, let us recall regular tree languages. 
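Before turning to regular tree languages, a small illustrative sketch of the position, labeling, and substitution notation may help; it is not part of the formal development, and it simply encodes a Σ-tree as a (label, children) pair with 1-indexed child positions represented as tuples.

# Illustrative only: a tree is (label, children); a position is a tuple of
# 1-indexed child indices, with () playing the role of epsilon.

def pos(t):
    _, children = t
    yield ()
    for i, c in enumerate(children, start=1):
        for p in pos(c):
            yield (i,) + p

def subtree(t, p):                      # t|p
    return t if not p else subtree(t[1][p[0] - 1], p[1:])

def label(t, p):                        # t(p)
    return subtree(t, p)[0]

def substitute(t, repl):                # t[p_i <- t_i], p_i pairwise incomparable
    if () in repl:
        return repl[()]
    lab, children = t
    return (lab, [substitute(c, {p[1:]: s for p, s in repl.items() if p and p[0] == i})
                  for i, c in enumerate(children, start=1)])

t = ("VP", [("VBD", [("was", [])]), ("VP-C", [])])
print(sorted(pos(t)))                   # [(), (1,), (1, 1), (2,)] -- lexicographic order
print(label(t, (1,)))                   # VBD
print(substitute(t, {(2,): ("VP-C", [("VBN", [])])}))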
A finite tree automaton M is a tuple (Q, Σ, δ, F) such that Q is a finite set, δ ⊆Q∗× Σ × Q is a finite relation, and F ⊆Q. We extend δ to a mapping δ: TΣ →2Q by δ(σ(t)) = {q | (q, σ, q) ∈δ, ∀i ∈[ |t| ]: qi ∈δ(ti)} for every σ ∈Σ and t ∈T ∗ Σ. The finite tree automaton M recognizes the tree language L(M) = {t ∈TΣ | δ(t) ∩F ̸= ∅} . A tree language L ⊆TΣ is regular if there exists a finite tree automaton M such that L = L(M). 826 VP VBD signed PP → PV twlY , NP-OBJ NP DET-NN AltwqyE PP 1 S NP-SBJ VP → VP PV NP-OBJ NP-SBJ 1 2 1 Figure 1: Sample LMBOT rules. 3 The model In this section, we recall particular multi bottomup tree transducers, which have been introduced by Arnold and Dauchet (1982) and Lilin (1981). A detailed (and English) presentation of the general model can be found in Engelfriet et al. (2009) and Maletti (2010). Using the nomenclature of Engelfriet et al. (2009), we recall a variant of linear and nondeleting extended multi bottom-up tree transducers (MBOT) here. Occasionally, we will refer to general MBOT, which differ from the local variant discussed here because they have explicit states. Throughout the article, we assume sets Σ and ∆ of input and output symbols, respectively. Moreover, let NT ⊆Σ ∪∆be the set of designated nonterminal symbols. Finally, we avoid weights in the formal development to keep it simple. It is straightforward to add weights to our model. Essentially, the model works on pairs ⟨t, u⟩ consisting of an input tree t ∈ TΣ and a sequence u ∈T ∗ ∆of output trees. Each such pair is called a pre-translation and the rank rk(⟨t, u⟩) the pre-translation ⟨t, u⟩is |u|. In other words, the rank of a pre-translation equals the number of output trees stored in it. Given a pre-translation ⟨t, u⟩∈TΣ×T k ∆ and i ∈[k], we call ui the ith translation of t. An alignment for the pre-translation ⟨t, u⟩is an injective mapping ψ: ↓NT(u) →↓NT(t) × N such that (p, j) ∈ψ(↓NT(u)) for every (p, i) ∈ψ(↓NT(u)) and j ∈[i]. In other words, an alignment should request each translation of a particular subtree at most once and if it requests the ith translation, then it should also request all previous translations. Definition 1 A local multi bottom-up tree transducer (LMBOT) is a finite set R of rules such that every rule, written l →ψ r, contains a pre-translation ⟨l, r⟩and an alignment ψ for it. The component l is the left-hand side, r is the right-hand side, and ψ is the alignment of a rule l →ψ r ∈R. The rules of an LMBOT are similar to the rules of an STSG (synchronous tree substitution grammar) of Eisner (2003) and Shieber (2004), but right-hand sides of LMBOT contain a sequence of trees instead of just a single tree as in an STSG. In addition, the alignments in an STSG rule are bijective between leaf nonterminals, whereas our model permits multiple alignments to a single leaf nonterminal in the left-hand side. A model that is even more powerful than LMBOT is the non-contiguous version of STSSG (synchronous tree-sequence substitution grammar) of Zhang et al. (2008a), Zhang et al. (2008b), and Sun et al. (2009), which allows sequences of trees on both sides of rules [see also (Raoult, 1997)]. Figure 1 displays sample rules of an LMBOT using a graphical representation of the trees and the alignment. Next, we define the semantics of an LMBOT R. To avoid difficulties1, we explicitly exclude rules like l →ψ r where l ∈NT or r ∈NT∗; i.e., rules where the left- or right-hand side are only leaf nonterminals. We first define the traditional bottom-up semantics. 
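As a small sanity check on the alignment condition of Definition 1 (again only a sketch, with illustrative positions rather than a literal transcription of Figure 1), an alignment can be validated as follows: psi maps each right-hand-side leaf-nonterminal position to a pair (p, i), read as "use the i-th translation of the left-hand-side leaf at p".

def is_valid_alignment(psi):
    image = list(psi.values())
    if len(image) != len(set(image)):                # injectivity
        return False
    image = set(image)
    # if the i-th translation of the subtree at p is requested,
    # translations 1, ..., i-1 must be requested as well
    return all((p, j) in image for (p, i) in image for j in range(1, i))

# Two leaves of a single output tree draw on the first and second translation
# of the LHS leaf at position (2,); a third leaf uses the first translation of
# the leaf at (1,). This is a well-formed alignment.
psi = {(1, 1): ((2,), 1), (1, 2): ((2,), 2), (1, 3): ((1,), 1)}
print(is_valid_alignment(psi))                       # True
# Requesting the second translation without the first is not allowed.
print(is_valid_alignment({(1, 1): ((2,), 2)}))       # False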
Let ρ = l →ψ r ∈R be a rule and p ∈↓NT(l). The p-rank rk(ρ, p) of ρ is rk(ρ, p) = |{i ∈N | (p, i) ∈ψ(↓NT(r))}|. Definition 2 The set τ(R) of pre-translations of an LMBOT R is inductively defined to be the smallest set such that: If ρ = l →ψ r ∈R is a rule, ⟨tp, up⟩∈τ(R) is a pre-translation of R for every p ∈↓NT(l), and • rk(ρ, p) = rk(⟨tp, up⟩), • l(p) = tp(ε), and 1Actually, difficulties arise only in the weighted setting. 827 PP IN for NP NNP Serbia D PP PREP En NP NN-PROP SrbyA E VP VBD signed PP IN for NP NNP Serbia D PV twlY , NP-OBJ NP DET-NN AltwqyE PP PREP En NP NN-PROP SrbyA E S .. . VP VBD signed PP IN for NP NNP Serbia D VP PV twlY NP-OBJ NP DET-NN AltwqyE PP PREP En NP NN-PROP SrbyA ... E Figure 2: Top left: (a) Initial pre-translation; Top right: (b) Pre-translation obtained from the left rule of Fig. 1 and (a); Bottom: (c) Pre-translation obtained from the right rule of Fig. 1 and (b). • r(p′) = up′′(i) with ψ(p′) = (p′′, i) for every p′ ∈↓NT(r), then ⟨t, u⟩∈τ(R) where • t = l[p ←tp | p ∈↓NT(l)] and • u = r[p′ ←(up′′)i | p′ ∈ψ−1(p′′, i)]. In plain words, each nonterminal leaf p in the left-hand side of a rule ρ can be replaced by the input tree t of a pre-translation ⟨t, u⟩whose root is labeled by the same nonterminal. In addition, the rank rk(ρ, p) of the replaced nonterminal should match the rank rk(⟨t, u⟩) of the pre-translation and the nonterminals in the right-hand side that are aligned to p should be replaced by the translation that the alignment requests, provided that the nonterminal matches with the root symbol of the requested translation. The main benefit of the bottomup semantics is that it works exclusively on pretranslations. The process is illustrated in Figure 2. Using the classical bottom-up semantics, we simply obtain the following theorem by Maletti (2010) because the MBOT constructed there is in fact an LMBOT. Theorem 3 For every STSG, an equivalent LMBOT can be constructed in linear time, which in turn yields a particular MBOT in linear time. Finally, we want to relate LMBOT to the STSSG of Sun et al. (2009). To this end, we also introduce the top-down semantics for LMBOT. As expected, both semantics coincide. The top-down semantics is introduced using rule compositions, which will play an important rule later on. Definition 4 The set Rk of k-fold composed rules is inductively defined as follows: • R1 = R and • ℓ→ϕ s ∈Rk+1 for all ρ = l →ψ r ∈R and ρp = lp →ψp rp ∈Rk such that – rk(ρ, p) = rk(⟨lp, rp⟩), – l(p) = lp(ε), and – r(p′) = rp′′(i) with ψ(p′) = (p′′, i) for every p ∈↓NT(l) and p′ ∈↓NT(r) where – ℓ= l[p ←lp | p ∈↓NT(l)], – s = r[p′ ←(rp′′)i | p′ ∈ψ−1(p′′, i)], and – ϕ(p′p) = p′′ψp′′(ip) for all positions p′ ∈ψ−1(p′′, i) and ip ∈↓NT(rp′′). The rule closure R≤∞of R is R≤∞= S i≥1 Ri. The top-down pre-translation of R is τt(R) = {⟨l, r ⟩| l →ψ r ∈R≤∞, ↓NT(l) = ∅} . 828 X X → X a X , X a X 1 2 X X → X b X , X b X 1 2 X X X → X a X b X , X a X b X 1 2 Figure 3: Composed rule. The composition of the rules, which is illustrated in Figure 3, in the second item of Definition 4 could also be represented as ρ(ρ1, . . . , ρk) where ρ1, . . . , ρk is an enumeration of the rules {ρp | p ∈↓NT(l)} used in the item. The following theorem is easy to prove. Theorem 5 The bottom-up and top-down semantics coincide; i.e., τ(R) = τt(R). Chiang (2005) and Graehl et al. (2008) argue that STSG have sufficient expressive power for syntaxbased machine translation, but Zhang et al. 
(2008a) show that the additional expressive power of treesequences helps the translation process. This is mostly due to the fact that smaller (and less specific) rules can be extracted from bi-parsed word-aligned training data. A detailed overview that focusses on STSG is presented by Knight (2007). Theorem 6 For every LMBOT, an equivalent STSSG can be constructed in linear time. 4 Rule extraction and training In this section, we will show how to automatically obtain an LMBOT from a bi-parsed, word-aligned parallel corpus. Essentially, the process has two steps: rule extraction and training. In the rule extraction step, an (unweighted) LMBOT is extracted from the corpus. The rule weights are then set in the training procedure. The two main inspirations for our rule extraction are the corresponding procedures for STSG (Galley et al., 2004; Graehl et al., 2008) and for STSSG (Sun et al., 2009). STSG are always contiguous in both the left- and right-hand side, which means that they (completely) cover a single span of input or output words. On the contrary, STSSG rules can be noncontiguous on both sides, but the extraction procedure of Sun et al. (2009) only extracts rules that are contiguous on the left- or right-hand side. We can adjust its 1st phase that extracts rules with (potentially) non-contiguous right-hand sides. The adjustment is necessary because LMBOT rules cannot have (contiguous) tree sequences in their left-hand sides. Overall, the rule extraction process is sketched in Algorithm 1. Algorithm 1 Rule extraction for LMBOT Require: word-aligned tree pair (t, u) Return: LMBOT rules R such that (t, u) ∈τ(R) while there exists a maximal non-leaf node p ∈pos(t) and minimal p1, . . . , pk ∈pos(u) such that t|p and (u|p1, . . . , u|pk) have a consistent alignment (i.e., no alignments from within t|p to a leaf outside (u|p1, . . . , u|pk) and vice versa) do 2: add rule ρ = t|p →ψ (up1, . . . , upk) to R with the nonterminal alignments ψ // excise rule ρ from (t, u) 4: t ←t[p ←t(p)] u ←u[pi ←u(pi) | i ∈{1, . . . , k}] 6: establish alignments according to position end while The requirement that we can only have one input tree in LMBOT rules indeed might cause the extraction of bigger and less useful rules (when compared to the corresponding STSSG rules) as demonstrated in (Sun et al., 2009). However, the stricter rule shape preserves the good algorithmic properties of LMBOT. The more powerful STSSG rules can cause nonclosure under composition (Raoult, 1997; Radmacher, 2008) and parsing to be less efficient. Figure 4 shows an example of biparsed aligned parallel text. According to the method of Galley et al. (2004) we can extract the (minimal) STSG rule displayed in Figure 5. Using the more liberal format of LMBOT rules, we can decompose the STSG rule of Figure 5 further into the rules displayed in Figure 1. The method of Sun et al. (2009) would also extract the rule displayed in Figure 6. Let us reconsider Figures 1 and 2. Let ρ1 be the top left rule of Figure 2 and ρ2 and ρ3 be the 829 S NP-SBJ NML JJ Yugoslav NNP President NNP Voislav VP VBD signed PP IN for NP NNP Serbia VP PV twlY NP-OBJ NP DET-NN AltwqyE PP PREP En NP NN-PROP SrbyA NP-SBJ NP DET-NN Alr}ys DET-ADJ AlywgwslAfy NP NN-PROP fwyslAf Figure 4: Biparsed aligned parallel text. S NP-SBJ VP VBD signed PP → VP PV twlY NP-OBJ NP DET-NN AltwqyE PP NP-SBJ 1 1 Figure 5: Minimal STSG rule. left and right rule of Figure 1, respectively. 
We can represent the lower pre-translation of Figure 2 by ρ3(· · · , ρ2(ρ1)), where ρ2(ρ1) represents the upper right pre-translation of Figure 2. If we name all rules of R, then we can represent each pretranslation of τ(R) symbolically by a tree containing rule names. Such trees containing rule names are often called derivation trees. Overall, we obtain the following result, for which details can be found in (Arnold and Dauchet, 1982). Theorem 7 The set D(R) is a regular tree language for every LMBOT R, and the set of derivations is also regular for every MBOT. VBD signed , IN for → PV twlY , NP DET-NN AltwqyE , PREP En Figure 6: Sample STSSG rule. Moreover, using the input and output product constructions of Maletti (2010) we obtain that even the set Dt,u(R) of derivations for a specific input tree t and output tree u is regular. Since Dt,u(R) is regular, we can compute the inside and outside weight of each (weighted) rule of R following the method of Graehl et al. (2008). Similarly, we can adjust the training procedure of Graehl et al. (2008), which yields that we can automatically obtain a weighted LMBOT from a bi-parsed parallel corpus. Details on the run-time can be found in (Graehl et al., 2008). 5 Preservation of regularity Clearly, LMBOT are not symmetric. Although, the backwards application of an LMBOT preserves regularity, this property does not hold for forward application. We will focus on forward application here. Given a set T of pre-translations and a tree language 830 L ⊆TΣ, we let Tc(L) = {ui | (u1, . . . , uk) ∈T (L), i ∈[k]} , which collects all translations of input trees in L. We say that T preserves regularity if Tc(L) is regular for every regular tree language L ⊆TΣ. Correspondingly, an LMBOT R preserves regularity if its set τ(R) of pre-translations preserves regularity. As mentioned, an LMBOT does not necessarily preserve regularity. The rules of an LMBOT have only alignments between the left-hand side (input tree) and the right-hand side (output tree), which are also called inter-tree alignments. However, several alignments to a single nonterminal in the left-hand side can transitively relate two different nonterminals in the output side and thus simulate an intratree alignment. For example, the right rule of Figure 1 relates a ‘PV’ and an ‘NP-OBJ’ node to a single ‘VP’ node in the left-hand side. This could lead to an intra-tree alignment (synchronization) between the ‘PV’ and ‘NP-OBJ’ nodes in the right-hand side. Figure 7 displays the rules R of an LMBOT that does not preserve regularity. This can easily be seen on the leaf (word) languages because the LMBOT can translate the word x to any element of L = {wcwc | w ∈{a, b}∗}. Clearly, this word language L is not context-free. Since the leaf language of every regular tree language is context-free and regular tree languages are closed under intersection (needed to single out the translations that have the symbol Y at the root), this also proves that τ(R)c(TΣ) is not regular. Since TΣ is regular, this proves that the LMBOT does not preserve regularity. Preservation of regularity is an important property for a number of translation model manipulations. For example, the bucket-brigade and the on-the-fly method for the efficient inference described in (May et al., 2010) essentially build on it. Moreover, a regular tree grammar (i.e., a representation of a regular tree language) is an efficient representation. 
More complex representations such as context-free tree grammars [see, e.g., (Fujiyoshi, 2004)] have worse algorithmic properties (e.g., more complex parsing and problematic intersection). In this section, we investigate three syntactic restrictions on the set R of rules that guarantees that the obtained LMBOT preserves regularity. Then we shortly discuss how to adjust the rule extraction algorithm, so that the extracted rules automatically have these property. First, we quickly recall the notion of composed rules from Definition 4 because it will play an essential role in all three properties. Figure 3 shows a composition of two rules from Figure 7. Mind that R2 might not contain all rules of R, but it contains all those without leaf nonterminals. Definition 8 An LMBOT R is finitely collapsing if there is n ∈N such that ψ: ↓NT(r) →↓NT(l)×{1} for every rule l →ψ r ∈Rn. The following statement follows from a more general result of Raoult (1997), which we will introduce with our second property. Theorem 9 Every finitely collapsing LMBOT preserves regularity. Often the simple condition ‘finitely collapsing’ is fulfilled after rule extraction. In addition, it is automatically fulfilled in an LMBOT that was obtained from an STSG using Theorem 3. It can also be ensured in the rule extraction process by introducing collapsing points for output symbols that can appear recursively in the corpus. For example, we could enforce that all extracted rules for clause-level output symbols (assuming that there is no recursion not involving a clause-level output symbols) should have only 1 output tree in the right-hand side. However, ‘finitely collapsing’ is a rather strict property. Finitely collapsing LMBOT have only slightly more expressive power than STSG. In fact, they could be called STSG with input desynchronization. This is due to the fact that the alignment in composed rules establishes an injective relation between leaf nonterminals (as in an STSG), but it need not be bijective. Consequently, there can be leaf nonterminals in the left-hand side that have no aligned leaf nonterminal in the right-hand side. In this sense, those leaf nonterminals are desynchronized. This feature is illustrated in Figure 8 and such an LMBOT can compute the transformation {(t, a) | t ∈TΣ}, which cannot be computed by an STSG (assuming that TΣ is suitably rich). Thus STSG with input desynchronization are more expressive than STSG, but they still compute a class of transformations that is not closed under composition. 831 X x →X c , X c X X → X a X , X a X 1 2 X X → X b X , X b X 1 2 Y X → Y X X 1 2 Figure 7: Output subtree synchronization (intra-tree). X X X → a X a → DE Figure 8: Finitely collapsing LMBOT. Theorem 10 For every STSG, we can construct an equivalent finitely collapsing LMBOT in linear time. Moreover, finitely collapsing LMBOT are strictly more expressive than STSG. Next, we investigate a weaker property by Raoult (1997) that still ensures preservation of regularity. Definition 11 An LMBOT R has finite synchronization if there is n ∈N such that for every rule l →ψ r ∈Rn and p ∈↓NT(l) there exists i ∈N with ψ−1({p} × N) ⊆{iw | w ∈N∗}. In plain terms, multiple alignments to a single leaf nonterminal at p in the left-hand side are allowed, but all leaf nonterminals of the right-hand side that are aligned to p must be in the same tree. Clearly, an LMBOT with finite synchronization is finitely collapsing. 
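Keeping the position conventions from the sketches above, the finite synchronization condition can be checked with a few lines; Definition 11 quantifies over the composed rules in Rn, and the sketch below (not the paper's implementation) only tests one rule's alignment psi: every right-hand-side leaf nonterminal aligned to the same left-hand-side leaf must lie in the same output tree, i.e., share the first component of its position.

def has_finite_synchronization(psi):
    trees_per_lhs_leaf = {}
    for rhs_pos, (p, _i) in psi.items():
        trees_per_lhs_leaf.setdefault(p, set()).add(rhs_pos[0])
    return all(len(trees) == 1 for trees in trees_per_lhs_leaf.values())

# Two aligned leaves inside the same output tree: allowed.
print(has_finite_synchronization({(1, 1): ((2,), 1), (1, 2): ((2,), 2)}))  # True
# The same two leaves spread over two output trees: the condition fails.
print(has_finite_synchronization({(1,): ((2,), 1), (2,): ((2,), 2)}))      # False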
Raoult (1997) investigated this restriction in the context of rational tree relations, which are a generalization of our LMBOT. Raoult (1997) shows that finite synchronization can be decided. The next theorem follows from the results of Raoult (1997). Theorem 12 Every LMBOT with finite synchronization preserves regularity. MBOT can compute arbitrary compositions of STSG (Maletti, 2010). However, this no longer remains true for MBOT (or LMBOT) with finite synchronization.2 In Figure 9 we illustrate a translation that can be computed by a composition of two STSG, but that cannot be computed by an MBOT (or LMBOT) with finite synchronization. Intuitively, when processing the chain of ‘X’s of the transformation depicted in Figure 9, the first and second suc2This assumes a straightforward generalization of the ‘finite synchronization’ property for MBOT. Y X ... X Y t1 t2 t3 → Z t1 t2 t3 Figure 9: Transformation that cannot be computed by an MBOT with finite synchronization. cessor of the ‘Z’-node at the root on the output side must be aligned to the ‘X’-chain. This is necessary because those two mentioned subtrees must reproduce t1 and t2 from the end of the ‘X’-chain. We omit the formal proof here, but obtain the following statement. Theorem 13 For every STSG, we can construct an equivalent LMBOT with finite synchronization in linear time. LMBOT and MBOT with finite synchronization are strictly more expressive than STSG and compute classes that are not closed under composition. Again, it is straightforward to adjust the rule extraction algorithm by the introduction of synchronization points (for example, for clause level output symbols). We can simply require that rules extracted for those selected output symbols fulfill the condition mentioned in Definition 11. Finally, we introduce an even weaker version. Definition 14 An LMBOT R is copy-free if there is n ∈N such that for every rule l →ψ r ∈Rn and p ∈↓NT(l) we have (i) ψ−1({p} × N) ⊆N, or (ii) ψ−1({p} × N) ⊆{iw | w ∈N∗} for an i ∈N. Intuitively, a copy-free LMBOT has rules whose right hand sides may use all leaf nonterminals that are aligned to a given leaf nonterminal in the lefthand side directly at the root (of one of the trees 832 X X ... X X → X a X a ... X a X , X a X a ... X a X 1 2 Figure 10: Composed rule that is not copy-free. in the right-hand side forest) or group all those leaf nonterminals in a single tree in the forest. Clearly, the LMBOT of Figure 7 is not copy-free because the second rule composes with itself (see Figure 10) to a rule that does not fulfill the copy-free condition. Theorem 15 Every copy-free LMBOT preserves regularity. Proof sketch: Let n be the integer of Definition 14. We replace the LMBOT with rules R by the equivalent LMBOT M with rules Rn. Then all rules have the form required in Definition 14. Moreover, let L ⊆TΣ be a regular tree language. Then we can construct the input product of τ(M) with L. In this way, we obtain an MBOT M′, whose rules still fulfill the requirements (adapted for MBOT) of Definition 14 because the input product does not change the structure of the rules (it only modifies the state behavior). Consequently, we only need to show that the range of the MBOT M′ is regular. This can be achieved using a decomposition into a relabeling, which clearly preserves regularity, and a deterministic finite-copying top-down tree transducer (Engelfriet et al., 1980; Engelfriet, 1982). 2 Figure 11 shows some relevant rules of a copyfree LMBOT that computes the transformation of Figure 9. 
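In the same spirit, the copy-free condition of Definition 14 relaxes the previous check: the leaves aligned to a given left-hand-side leaf may either all be roots of output trees or all sit inside one output tree. The following sketch, under the same position conventions as before, tests the condition for a single rule's alignment.

def is_copy_free(psi):
    by_lhs_leaf = {}
    for rhs_pos, (p, _i) in psi.items():
        by_lhs_leaf.setdefault(p, []).append(rhs_pos)
    for positions in by_lhs_leaf.values():
        all_roots = all(len(q) == 1 for q in positions)       # condition (i)
        one_tree = len({q[0] for q in positions}) == 1        # condition (ii)
        if not (all_roots or one_tree):
            return False
    return True

# Aligned leaves that are the roots of two different output trees:
# copy-free, although finite synchronization fails.
print(is_copy_free({(1,): ((2,), 1), (2,): ((2,), 2)}))        # True
# Aligned leaves buried inside two different output trees, as in the
# composed rule of Figure 10: not copy-free.
print(is_copy_free({(1, 1): ((1,), 1), (2, 1): ((1,), 2)}))    # False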
Clearly, copy-free LMBOT are more general than LMBOT with finite synchronization, so we again can obtain copy-free LMBOT from STSG. In addition, we can adjust the rule extraction process using synchronization points as for LMBOT with finite synchronization using the restrictions of Definition 14. Theorem 16 For every STSG, we can construct an equivalent copy-free LMBOT in linear time. Y X S → Z S S S 1 2 X X → D S , S E 1 2 X Y S S → D S , S E 1 2 Figure 11: Copy-free LMBOT for the transformation of Figure 9. Copy-free LMBOT are strictly more expressive than LMBOT with finite synchronization. 6 Conclusion We have introduced a simple restriction of multi bottom-up tree transducers. It abstracts from the general state behavior of the general model and only uses the locality tests that are also present in STSG, STSSG, and STAG. Next, we introduced a rule extraction procedure and a corresponding rule weight training procedure for our LMBOT. However, LMBOT allow translations that do not preserve regularity, which is an important property for efficient algorithms. We presented 3 properties that ensure that regularity is preserved. In addition, we shortly discussed how these properties could be enforced in the presented rule extraction procedure. Acknowledgements The author gratefully acknowledges the support by KEVIN KNIGHT, who provided the inspiration and the data. JONATHAN MAY helped in many fruitful discussions. The author was financially supported by the German Research Foundation (DFG) grant MA / 4959 / 1-1. 833 References Alfred V. Aho and Jeffrey D. Ullman. 1972. The Theory of Parsing, Translation, and Compiling. Prentice Hall. Andr´e Arnold and Max Dauchet. 1982. Morphismes et bimorphismes d’arbres. Theoret. Comput. Sci., 20(1):33–93. Peter F. Brown, John Cocke, Stephen A. Della Pietra, Vincent J. Della Pietra, Fredrick Jelinek, John D. Lafferty, Robert L. Mercer, and Paul S. Roossin. 1990. A statistical approach to machine translation. Computational Linguistics, 16(2):79–85. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. Mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proc. ACL, pages 263–270. Association for Computational Linguistics. David Chiang. 2006. An introduction to synchronous grammars. In Proc. ACL. Association for Computational Linguistics. Part of a tutorial given with Kevin Knight. Jason Eisner. 2003. Simpler and more general minimization for weighted finite-state automata. In Proc. NAACL, pages 64–71. Association for Computational Linguistics. Joost Engelfriet, Grzegorz Rozenberg, and Giora Slutzki. 1980. Tree transducers, L systems, and two-way machines. J. Comput. System Sci., 20(2):150–202. Joost Engelfriet, Eric Lilin, and Andreas Maletti. 2009. Composition and decomposition of extended multi bottom-up tree transducers. Acta Inform., 46(8):561– 590. Joost Engelfriet. 1982. The copying power of one-state tree transducers. J. Comput. System Sci., 25(3):418– 435. Akio Fujiyoshi. 2004. Restrictions on monadic contextfree tree grammars. In Proc. CoLing, pages 78–84. Association for Computational Linguistics. Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a translation rule? In Proc. HLT-NAACL, pages 273–280. Association for Computational Linguistics. Jonathan Graehl, Kevin Knight, and Jonathan May. 2008. Training tree transducers. 
Computational Linguistics, 34(3):391–427. Kevin Knight. 2007. Capturing practical natural language transformations. Machine Translation, 21(2):121–133. Eric Lilin. 1981. Propri´et´es de clˆoture d’une extension de transducteurs d’arbres d´eterministes. In Proc. CAAP, volume 112 of LNCS, pages 280–289. Springer. Andreas Maletti. 2010. Why synchronous tree substitution grammars? In Proc. NAACL, pages 876–884. Association for Computational Linguistics. Jonathan May, Kevin Knight, and Heiko Vogler. 2010. Efficient inference through cascades of weighted tree transducers. In Proc. ACL, pages 1058–1066. Association for Computational Linguistics. Frank G. Radmacher. 2008. An automata theoretic approach to rational tree relations. In Proc. SOFSEM, volume 4910 of LNCS, pages 424–435. Springer. Jean-Claude Raoult. 1997. Rational tree relations. Bull. Belg. Math. Soc. Simon Stevin, 4(1):149–176. Stuart M. Shieber and Yves Schabes. 1990. Synchronous tree-adjoining grammars. In Proc. CoLing, volume 3, pages 253–258. Association for Computational Linguistics. Stuart M. Shieber. 2004. Synchronous grammars as tree transducers. In Proc. TAG+7, pages 88–95, Vancouver, BC, Canada. Simon Fraser University. Stuart M. Shieber. 2007. Probabilistic synchronous treeadjoining grammars for machine translation: The argument from bilingual dictionaries. In Proc. SSST, pages 88–95. Association for Computational Linguistics. Jun Sun, Min Zhang, and Chew Lim Tan. 2009. A noncontiguous tree sequence alignment-based model for statistical machine translation. In Proc. ACL, pages 914–922. Association for Computational Linguistics. Min Zhang, Hongfei Jiang, Aiti Aw, Haizhou Li, Chew Lim Tan, and Sheng Li. 2008a. A tree sequence alignment-based tree-to-tree translation model. In Proc. ACL, pages 559–567. Association for Computational Linguistics. Min Zhang, Hongfei Jiang, Haizhou Li, Aiti Aw, and Sheng Li. 2008b. Grammar comparison study for translational equivalence modeling and statistical machine translation. In Proc. CoLing, pages 1097–1104. Association for Computational Linguistics. 834
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 835–845, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Binarized Forest to String Translation Hao Zhang Google Research [email protected] Licheng Fang Computer Science Department University of Rochester [email protected] Peng Xu Google Research [email protected] Xiaoyun Wu Google Research [email protected] Abstract Tree-to-string translation is syntax-aware and efficient but sensitive to parsing errors. Forestto-string translation approaches mitigate the risk of propagating parser errors into translation errors by considering a forest of alternative trees, as generated by a source language parser. We propose an alternative approach to generating forests that is based on combining sub-trees within the first best parse through binarization. Provably, our binarization forest can cover any non-consitituent phrases in a sentence but maintains the desirable property that for each span there is at most one nonterminal so that the grammar constant for decoding is relatively small. For the purpose of reducing search errors, we apply the synchronous binarization technique to forest-tostring decoding. Combining the two techniques, we show that using a fast shift-reduce parser we can achieve significant quality gains in NIST 2008 English-to-Chinese track (1.3 BLEU points over a phrase-based system, 0.8 BLEU points over a hierarchical phrase-based system). Consistent and significant gains are also shown in WMT 2010 in the English to German, French, Spanish and Czech tracks. 1 Introduction In recent years, researchers have explored a wide spectrum of approaches to incorporate syntax and structure into machine translation models. The unifying framework for these models is synchronous grammars (Chiang, 2005) or tree transducers (Graehl and Knight, 2004). Depending on whether or not monolingual parsing is carried out on the source side or the target side for inference, there are four general categories within the framework: • string-to-string (Chiang, 2005; Zollmann and Venugopal, 2006) • string-to-tree (Galley et al., 2006; Shen et al., 2008) • tree-to-string (Lin, 2004; Quirk et al., 2005; Liu et al., 2006; Huang et al., 2006; Mi et al., 2008) • tree-to-tree (Eisner, 2003; Zhang et al., 2008) In terms of search, the string-to-x models explore all possible source parses and map them to the target side, while the tree-to-x models search over the subspace of structures of the source side constrained by an input tree or trees. Hence, tree-to-x models are more constrained but more efficient. Models such as Huang et al. (2006) can match multilevel tree fragments on the source side which means larger contexts are taken into account for translation (Poutsma, 2000), which is a modeling advantage. To balance efficiency and accuracy, forest-tostring models (Mi et al., 2008; Mi and Huang, 2008) use a compact representation of exponentially many trees to improve tree-to-string models. Traditionally, such forests are obtained through hyper-edge pruning in the k-best search space of a monolingual parser (Huang, 2008). The pruning parameters that control the size of forests are normally handtuned. Such forests encode both syntactic variants and structural variants. By syntactic variants, we refer to the fact that a parser can parse a substring into either a noun phrase or verb phrase in certain cases. 
835 We believe that structural variants which allow more source spans to be explored during translation are more important (DeNeefe et al., 2007), while syntactic variants might improve word sense disambiguation but also introduce more spurious ambiguities (Chiang, 2005) during decoding. To focus on structural variants, we propose a family of binarization algorithms to expand one single constituent tree into a packed forest of binary trees containing combinations of adjacent tree nodes. We control the freedom of tree node binary combination by restricting the distance to the lowest common ancestor of two tree nodes. We show that the best results are achieved when the distance is two, i.e., when combining tree nodes sharing a common grand-parent. In contrast to conventional parser-produced-forestto-string models, in our model: • Forests are not generated by a parser but by combining sub-structures using a tree binarizer. • Instead of using arbitary pruning parameters, we control forest size by an integer number that defines the degree of tree structure violation. • There is at most one nonterminal per span so that the grammar constant is small. Since GHKM rules (Galley et al., 2004) can cover multi-level tree fragments, a synchronous grammar extracted using the GHKM algorithm can have synchronous translation rules with more than two nonterminals regardless of the branching factor of the source trees. For the first time, we show that similar to string-to-tree decoding, synchronous binarization significantly reduces search errors and improves translation quality for forest-to-string decoding. To summarize, the whole pipeline is as follows. First, a parser produces the highest-scored tree for an input sentence. Second, the parse tree is restructured using our binarization algorithm, resulting in a binary packed forest. Third, we apply the forest-based variant of the GHKM algorithm (Mi and Huang, 2008) on the new forest for rule extraction. Fourth, on the translation forest generated by all applicable translation rules, which is not necessarily binary, we apply the synchronous binarization algorithm (Zhang et al., 2006) to generate a binary translation forest. Finally, we use a bottom-up decoding algorithm with intergrated LM intersection using the cube pruning technique (Chiang, 2005). The rest of the paper is organized as follows. In Section 2, we give an overview of the forest-tostring models. In Section 2.1, we introduce a more efficient and flexible algorithm for extracting composed GHKM rules based on the same principle as cube pruning (Chiang, 2007). In Section 3, we introduce our source tree binarization algorithm for producing binarized forests. In Section 4, we explain how to do synchronous rule factorization in a forest-to-string decoder. Experimental results are in Section 5. 2 Forest-to-string Translation Forest-to-string models can be described as e = Y( arg max d∈D(T), T∈F(f) P(d|T) ) (1) where f stands for a source string, e stands for a target string, F stands for a forest, D stands for a set of synchronous derivations on a given tree T, and Y stands for the target side yield of a derivation. The search problem is finding the derivation with the highest probability in the space of all derivations for all parse trees for an input sentence. The log probability of a derivation is normally a linear combination of local features which enables dynamic programming to find the optimal combination efficiently. 
In this paper, we focus on the models based on the Synchronous Tree Substitution Grammars (STSG) defined by Galley et al. (2004). In contrast to a tree-to-string model, the introduction of F augments the search space systematically. When the first-best parse is wrong or no good translation rules are applicable to the first-best parse, the model can recover good translations from alternative parses. In STSG, local features are defined on tree-tostring rules, which are synchronous grammar rules defining how a sequence of terminals and nonterminals on the source side translates to a sequence of target terminals and nonterminals. One-to-one mapping of nonterminals is assumed. But terminals do not necessarily need to be aligned. Figure 1 shows a typical English-Chinese tree-to-string rule with a reordering pattern consisting of two nonterminals and different numbers of terminals on the two sides. 836 VP VBD was VP-C .x1:VBN PP P by .x2:NP-C →bei 被x2 x1 Figure 1: An example tree-to-string rule. Forest-to-string translation has two stages. The first stage is rule extraction on word-aligned parallel texts with source forests. The second stage is rule enumeration and DP decoding on forests of input strings. In both stages, at each tree node, the task on the source side is to generate a list of tree fragments by composing the tree fragments of its children. We propose a cube-pruning style algorithm that is suitable for both rule extraction during training and rule enumeration during decoding. At the highest level, our algorithm involves three steps. In the first step, we label each node in the input forest by a boolean variable indicating whether it is a site of interest for tree fragment generation. If it is marked true, it is an admissible node. In the case of rule extraction, a node is admissible if and only if it corresponds to a phrase pair according to the underlying word alignment. In the case of decoding, every node is admissible for the sake of completeness of search. An initial one-node tree fragment is placed at each admissible node for seeding the tree fragment generation process. In the second step, we do cube-pruning style bottom-up combinations to enumerate a pruned list of tree fragments at each tree node. In the third step, we extract or enumerateand-match tree-to-string rules for the tree fragments at the admissible nodes. 2.1 A Cube-pruning-inspired Algorithm for Tree Fragment Composition Galley et al. (2004) defined minimal tree-to-string rules. Galley et al. (2006) showed that tree-to-string rules made by composing smaller ones are important to translation. It can be understood by the analogy of going from word-based models to phrasebased models. We relate composed rule extraction to cube-pruning (Chiang, 2007). In cube-pruning, the process is to keep track of the k-best sorted language model states at each node and combine them bottom-up with the help of a priority queue. We can imagine substituting k-best LM states with k composed rules at each node and composing them bottom-up. We can also borrow the cube pruning trick to compose multiple lists of rules using a priority queue to lazily explore the space of combinations starting from the top-most element in the cube formed by the lists. We need to define a ranking function for composed rules. To simulate the breadth-first expansion heuristics of Galley et al. 
(2006), we define the figure of merit of a tree-to-string rule as a tuple m = (h, s, t), where h is the height of a tree fragment, s is the number of frontier nodes, i.e., bottom-level nodes including both terminals and non-terminals, and t is the number of terminals in the set of frontier nodes. We define an additive operator +: m1 + m2 = ( max{h1, h2} + 1, s1 + s2, t1 + t2 ) and a min operator based on the order <: m1 < m2 ⇐⇒ h1 < h2 ∨ h1 = h2 ∧s1 < s2 ∨ h1 = h2 ∧s1 = s2 ∧t1 < t2 The + operator corresponds to rule compositions. The < operator corresponds to ranking rules by their sizes. A concrete example is shown in Figure 2, in which case the monotonicity property of (+, <) holds: if ma < mb, ma +mc < mb +mc. However, this is not true in general for the operators in our definition, which implies that our algorithm is indeed like cube-pruning: an approximate k-shortest-path algorithm. 3 Source Tree Binarization The motivation of tree binarization is to factorize large and rare structures into smaller but frequent ones to improve generalization. For example, Penn Treebank annotations are often flat at the phrase level. Translation rules involving flat phrases are unlikely to generalize. If long sequences are binarized, 837 VBD (1, 1, 0) VBD was (2, 1, 1) × VP-C (1, 1, 0) VP-C VPB PP (2, 2, 0) VP-C VPB PP P NP-C (3, 3, 1) = (1, 1, 0) (2, 2, 0) (3, 3, 1) (1, 1, 0) VP VBD VP-C (2, 2, 0) VP VBD VP-C VPB PP (3, 3, 0) VP VBD VP-C VPB PP P NP-C (4, 4, 1) (2, 1, 1) VP VBD was VP-C (3, 2, 1) VP VBD was VP-C VPB PP (3, 3, 1) VP VBD was VP-C VPB PP P NP-C (4, 4, 2) Figure 2: Tree-to-string rule composition as cube-pruning. The left shows two lists of composed rules sorted by their geometric measures (height, # frontiers, # frontier terminals), under the gluing rule of VP →VBD VP−C. The right part shows a cube view of the combination space. We explore the space from the top-left corner to the neighbors. the commonality of subsequences can be discovered. For example, the simplest binarization methods left-to-right, right-to-left, and head-out explore sharing of prefixes or suffixes. Among exponentially many binarization choices, these algorithms pick a single bracketing structure for a sequence of sibling nodes. To explore all possible binarizations, we use a CYK algorithm to produce a packed forest of binary trees for a given sibling sequence. With CYK binarization, we can explore any span that is nested within the original tree structure, but still miss all cross-bracket spans. For example, translating from English to Chinese, The phrase “There is” should often be translated into one verb in Chinese. In a correct English parse tree, however, the subject-verb boundary is between “There” and “is”. As a result, tree-to-string translation based on constituent phrases misses the good translation rule. The CYK-n binarization algorithm shown in Algorithm 1 is a parameterization of the basic CYK binarization algorithm we just outlined. The idea is that binarization can go beyond the scope of parent nodes to more distant ancestors. The CYK-n algorithm first annotates each node with its n nearest ancestors in the source tree, then generates a binarization forest that allows combining any two nodes with common ancestors. The ancestor chain labeled at each node licenses the node to only combine with nodes having common ancestors in the past n generations. The algorithm creates new tree nodes on the fly. 
New tree nodes need to have their own states indicated by a node label representing what is covered internally by the node and an ancestor chain representing which nodes the node attaches to externally. Line 22 and Line 23 of Algorithm 1 update the label and ancestor annotations of new tree nodes. Using the parsing semiring notations (Goodman, 1999), the ancestor computation can be summarized by the (∩, ∪) pair. ∩produces the ancestor chain of a hyper-edge. ∪produces the ancestor chain of a hyper-node. The node label computation can be summarized by the (concatenate, min) pair. concatenate produces a concatenation of node labels. min yields the label with the shortest length. A tree-sequence (Liu et al., 2007) is a sequence of sub-trees covering adjacent spans. It can be proved that the final label of each new node in the forest corresponds to the tree sequence which has the minimum length among all sequences covered by the node span. The ancestor chain of a new node is the common ancestors of the nodes in its minimum tree sequence. For clarity, we do full CYK loops over all O(|w|2) spans and O(|w|3) potential hyper-edges, where |w| is the length of a source string. In reality, only descendants under a shared ancestor can combine. If we assume trees have a bounded branching factor b, the number of descendants after n generations is still bounded by a constant c = bn. The algorithm is O(c3 · |w|), which is still linear to the size of input sentence when the parameter n is a constant. 838 VP VBD+VBN VBD was VBN PP P by NP-C VP VBD was VP-C VBN+P VBN P by NP-C (a) (b) VP VBD+VBN+P VBD+VBN VBD was VBN P by NP-C VP VBD+VBN+P VBD was VBN+P VBN P by NP-C (c) (d) 1 2 3 4 0 VBD VBD+VBN VBD+VBN+P VP 1 VBN VBN+P VP-C 2 P PP 3 NP-C Figure 3: Alternative binary parses created for the original tree fragment in Figure 1 through CYK-2 binarization (a and b) and CYK-3 binarization (c and d). In the chart representation at the bottom, cells with labels containing the concatenation symbol + hold nodes created through binarization. Figure 3 shows some examples of alternative trees generated by the CYK-n algorithm. In this example, standard CYK binarization will not create any new trees since the input is already binary. The CYK-2 and CYK-3 algorithms discover new trees with an increasing degree of freedom. 4 Synchronous Binarization for Forest-to-string Decoding In this section, we deal with binarization of translation forests, also known as translation hypergraphs (Mi et al., 2008). A translation forest is a packed forest representation of all synchronous derivations composed of tree-to-string rules that match the source forest. Tree-to-string decoding algorithms work on a translation forest, rather than a source forest. A binary source forest does not necessarily always result in a binary translation forest. In the treeto-string rule in Figure 4, the source tree is already ADJP RB+JJ x0:RB JJ responsible PP IN for NP-C NPB DT the x1:NN x2:PP → x0 fuze 负责x2 de 的x1 ADJP RB+JJ x0:RB JJ responsible x1:PP → x0 fuze 负责x1 PP IN for NP-C NPB DT the x0:NN x1:PP → x1 de 的x0 Figure 4: Synchronous binarization for a tree-to-string rule. The top rule can be binarized into two smaller rules. binary with the help of source tree binarization, but the translation rule involves three variables in the set of frontier nodes. If we apply synchronous binarization (Zhang et al., 2006), we can factorize it into two smaller translation rules each having two variables. 
Obviously, the second rule, which is a common pattern, is likely to be shared by many translation rules in the derivation forest. When beams are fixed, search goes deeper in a factorized translation forest. The challenge of synchronous binarization for a forest-to-string system is that we need to first match large tree fragments in the input forest as the first step of decoding. Our solution is to do the matching using the original rules and then run synchronous binarization to break matching rules down to factor rules which can be shared in the derivation forest. This is different from the offline binarization scheme described in (Zhang et al., 2006), although the core algorithm stays the same. 5 Experiments We ran experiments on public data sets for English to Chinese, Czech, French, German, and Spanish 839 Algorithm 1 The CYK-n Binarization Algorithm 1: function CYKBINARIZER(T,n) 2: for each tree node ∈T in bottom-up topological order do 3: Make a copy of node in the forest output F 4: Ancestors[node] = the nearest n ancestors of node 5: Label[node] = the label of node in T 6: L ←the length of the yield of T 7: for k = 2...L do 8: for i = 0, ..., L −k do 9: for j = i + 1, ..., i + k −1 do 10: lnode ←Node[i, j]; rnode ←Node[j, i + k] 11: if Ancestors[lnode] ∩Ancestors[rnode] ̸= ∅then 12: pnode ←GETNODE(i, i + k) 13: ADDEDGE(pnode, lnode, rnode) return F 14: function GETNODE(begin,end) 15: if Node[begin, end] /∈F then 16: Create a new node for the span (begin, end) 17: Ancestors[node] = ∅ 18: Label[node] = the sequence of terminals in the span (begin, end) in T 19: return Node[begin, end] 20: function ADDEDGE(pnode, lnode, rnode) 21: Add a hyper-edge from lnode and rnode to pnode 22: Ancestors[pnode] = Ancestors[pnode] ∪(Ancestors[lnode] ∩Ancestors[rnode]) 23: Label[pnode] = min{Label[pnode], CONCATENATE(Label[lnode], Label[rnode])} translation to evaluate our methods. 5.1 Setup For English-to-Chinese translation, we used all the allowed training sets in the NIST 2008 constrained track. For English to the European languages, we used the training data sets for WMT 2010 (CallisonBurch et al., 2010). For NIST, we filtered out sentences exceeding 80 words in the parallel texts. For WMT, the filtering limit is 60. There is no filtering on the test data set. Table 1 shows the corpus statistics of our bilingual training data sets. Source Words Target Words English-Chinese 287M 254M English-Czech 66M 57M English-French 857M 996M English-German 45M 43M English-Spanish 216M 238M Table 1: The Sizes of Parallel Texts. At the word alignment step, we did 6 iterations of IBM Model-1 and 6 iterations of HMM. For English-Chinese, we ran 2 iterations of IBM Model4 in addition to Model-1 and HMM. The word alignments are symmetrized using the “union” heuristics. Then, the standard phrase extraction heuristics (Koehn et al., 2003) were applied to extract phrase pairs with a length limit of 6. We ran the hierarchical phrase extraction algorithm with the standard heuristics of Chiang (2005). The phrase-length limit is interpreted as the maximum number of symbols on either the source side or the target side of a given rule. On the same aligned data sets, we also ran the tree-to-string rule extraction algorithm described in Section 2.1 with a limit of 16 rules per tree node. The default parser in the experiments is a shiftreduce dependency parser (Nivre and Scholz, 2004). It achieves 87.8% labelled attachment score and 88.8% unlabeled attachment score on the standard Penn Treebank test set. 
We convert dependency parses to constituent trees by propagating the part-of-speech tags of the head words to the corresponding phrase structures. We compare three systems: a phrase-based system (Och and Ney, 2004), a hierarchical phrase-based system (Chiang, 2005), and our forest-to-string system with different binarization schemes. In the phrase-based decoder, jump width is set to 8. In the hierarchical decoder, only the glue rule is applied to spans longer than 10. For the forest-to-string system, we do not have such length-based reordering constraints. We trained two 5-gram language models with Kneser-Ney smoothing for each of the target languages. One is trained on the target side of the parallel text, the other on a corpus provided by the evaluation: the Gigaword corpus for Chinese and news corpora for the others. Besides standard features (Och and Ney, 2004), the phrase-based decoder also uses a Maximum Entropy phrasal reordering model (Zens and Ney, 2006). Both the hierarchical decoder and the forest-to-string decoder only use the standard features. For feature weight tuning, we do Minimum Error Rate Training (Och, 2003). To explore a larger n-best list more efficiently in training, we adopt the hypergraph-based MERT (Kumar et al., 2009). To evaluate the translation results, we use BLEU (Papineni et al., 2002).

5.2 Translation Results

Table 2 shows the scores of our system with the best binarization scheme compared to the phrase-based system and the hierarchical phrase-based system. Our system is consistently better than the other two systems in all data sets. On the English-Chinese data set, the improvement over the phrase-based system is 1.3 BLEU points, and 0.8 over the hierarchical phrase-based system. In the tasks of translating to European languages, the improvements over the phrase-based baseline are in the range of 0.5 to 1.0 BLEU points, and 0.3 to 0.5 over the hierarchical phrase-based system. All improvements except the bf2s and hier difference in English-Czech are significant with confidence level above 99% using the bootstrap method (Koehn, 2004). To demonstrate the strength of our systems including the two baseline systems, we also show the reported best results on these data sets from the 2010 WMT workshop. Our forest-to-string system (bf2s) outperforms or ties with the best ones in three out of four language pairs.

                               BLEU dev   BLEU test
  English-Chinese  pb             29.7       39.4
                   hier           31.7       38.9
                   bf2s           31.9       40.7**
  English-Czech    wmt best         -        15.4
                   pb             14.3       15.5
                   hier           14.7       16.0
                   bf2s           14.8       16.3*
  English-French   wmt best         -        27.6
                   pb             24.1       26.1
                   hier           23.9       26.1
                   bf2s           24.5       26.6**
  English-German   wmt best         -        16.3
                   pb             14.5       15.5
                   hier           14.9       15.9
                   bf2s           15.2       16.3**
  English-Spanish  wmt best         -        28.4
                   pb             24.1       27.9
                   hier           24.2       28.4
                   bf2s           24.9       28.9**
Table 2: Translation results comparing bf2s, the binarized-forest-to-string system, pb, the phrase-based system, and hier, the hierarchical phrase-based system. For comparison, the best scores from WMT 2010 are also shown. ** indicates the result is significantly better than both pb and hier; * indicates the result is significantly better than pb only.

5.3 Different Binarization Methods

The translation results for the bf2s system in Table 2 are based on the cyk binarization algorithm with bracket violation degree 2. In this section, we vary the degree to generate forests that are incrementally augmented from a single tree. Table 3 shows the scores of different tree binarization methods for the English-Chinese task.
It is clear from reading the table that cyk-2 is the optimal binarization parameter. We have verified this is true for other language pairs on non-standard data sets. We can explain it from two angles. At degree 2, we allow phrases crossing at most one bracket in the original tree. If the parser is reasonably good, crossing just one bracket is likely to cover most interesting phrases that can be translation units. From another point of view, enlarging the forests entails more parameters in the resulting translation model, making over-fitting likely to happen.

                     rules    BLEU dev   BLEU test
  no binarization     378M      28.0       36.3
  head-out            408M      30.0       38.2
  cyk-1               527M      31.6       40.5
  cyk-2               803M      31.9       40.7
  cyk-3              1053M      32.0       40.6
  cyk-∞              1441M      32.0       40.3
Table 3: Comparing different source tree binarization schemes for English-Chinese translation, showing both BLEU scores and model sizes. The rule counts include normal phrases which are used at the leaf level during decoding.

5.4 Binarizer or Parser?

A natural question is how the binarizer-generated forests compare with parser-generated forests in translation. To answer this question, we need a parser that can generate a packed forest. Our fast deterministic dependency parser does not generate a packed forest. Instead, we use a CRF constituent parser (Finkel et al., 2008) with state-of-the-art accuracy. On the standard Penn Treebank test set, it achieves an F-score of 89.5%. It uses a CYK algorithm to do full dynamic programming inference, so it is much slower. We modified the parser to do hyper-edge pruning based on posterior probabilities. The parser preprocesses the Penn Treebank training data through binarization, so the packed forest it produces is also a binarized forest. We compare two systems: one uses the cyk-2 binarizer to generate forests; the other uses the CRF parser with pruning threshold e^-p, where p = 2, to generate forests.[1] Although the parser outputs binary trees, we found cross-bracket cyk-2 binarization is still helpful.

            BLEU dev   BLEU test
  cyk-2       14.9       16.0
  parser      14.7       15.7
Table 4: Binarized forests versus parser-generated forests for forest-to-string English-German translation.

Table 4 shows the comparison of the binarization forest and the parser forest on English-German translation. The results show that the cyk-2 forest performs slightly better than the parser forest. We have not done a full exploration of forest pruning parameters to fine-tune the parser forest. The speed of the constituent parser is the efficiency bottleneck. This actually demonstrates the advantage of the binarizer plus forest-to-string scheme. It is flexible, and works with any parser that generates projective parses. It does not require hand-tuning of forest pruning parameters for training.

[1] All hyper-edges with negative log posterior probability larger than p are pruned. In Mi and Huang (2008), the threshold is p = 10. The difference is that they do the forest pruning on a forest generated by a k-best algorithm, while we do the forest pruning on the full CYK chart. As a result, we need more aggressive pruning to control forest size.

5.5 Synchronous Binarization

In this section, we demonstrate the effect of synchronous binarization for both tree-to-string and forest-to-string translation. The experiments are on the English-Chinese data set. The baseline systems use k-way cube pruning, where k is the branching factor, i.e., the maximum number of nonterminals on the right-hand side of any synchronous translation rule in an input grammar.
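As an aside, the posterior-based hyper-edge pruning used for the parser forest above (footnote 1) amounts to a one-line threshold test. The sketch below is our illustration with made-up posteriors, not the parser's code.

```python
import math

# Illustrative sketch (not the authors' parser code): prune forest
# hyper-edges whose posterior probability falls below e^{-p}, i.e.
# whose negative log posterior exceeds p (p = 2 in Section 5.4).

def prune_hyperedges(edges, p=2.0):
    """edges: list of (edge_id, posterior) pairs with 0 < posterior <= 1."""
    kept = []
    for edge_id, posterior in edges:
        neg_log_post = -math.log(posterior)
        if neg_log_post <= p:          # equivalently: posterior >= exp(-p)
            kept.append(edge_id)
    return kept

edges = [("e1", 0.50), ("e2", 0.20), ("e3", 0.05)]
print(prune_hyperedges(edges, p=2.0))   # ['e1', 'e2']  (-ln 0.05 ~ 3.0 is pruned)
```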
The competing system does online synchronous binarization as described in Section 4 to transform the grammar intersected with the input sentence to the minimum branching factor k′ (k′ < k), and then applies k′-way cube pruning. Typically, k′ is 2.

                              BLEU dev   BLEU test
  head-out cube pruning          29.2       37.0
    + synch. binarization        30.0       38.2
  cyk-2 cube pruning             31.7       40.5
    + synch. binarization        31.9       40.7
Table 5: The effect of synchronous binarization for tree-to-string and forest-to-string systems, on the English-Chinese task.

Table 5 shows that synchronous binarization does help reduce search errors and find better translations consistently in all settings.

6 Related Work

The idea of concatenating adjacent syntactic categories has been explored in various syntax-based models. Zollmann and Venugopal (2006) augmented hierarchical phrase-based systems with joint syntactic categories. Liu et al. (2007) proposed tree-sequence-to-string translation rules but did not provide a good solution to place joint subtrees into connection with the rest of the tree structure. Zhang et al. (2009) is the closest to our work, but their goal was to augment a k-best forest. They did not binarize the tree sequences. They also did not put constraints on the tree-sequence nodes according to how many brackets are crossed. Wang et al. (2007) used target tree binarization to improve rule extraction for their string-to-tree system. Their binarization forest is equivalent to our cyk-1 forest. In contrast to theirs, our binarization scheme affects decoding directly because we match tree-to-string rules on a binarized forest. Different methods of translation rule binarization have been discussed in Huang (2007). Their argument is that for tree-to-string decoding, target-side binarization is simpler than synchronous binarization and works well because creating discontinuous source spans does not explode the state space. The forest-to-string scenario is more similar to string-to-tree decoding, in which state-sharing is important. Our experiments show that synchronous binarization helps significantly in the forest-to-string case.

7 Conclusion

We have presented a new approach to tree-to-string translation. It involves a source tree binarization step and a standard forest-to-string translation step. The method renders it unnecessary to have a k-best parser to generate a packed forest. We have demonstrated state-of-the-art results using a fast parser and a simple tree binarizer that allows crossing at most one bracket in each binarized node. We have also shown that reducing search errors is important for forest-to-string translation. We adapted the synchronous binarization technique to improve search and have shown significant gains. In addition, we also presented a new cube-pruning-style algorithm for rule extraction. In the new algorithm, it is easy to adjust the figure-of-merit of rules for extraction. In the future, we plan to improve the learning of translation rules with binarized forests.

Acknowledgments

We would like to thank the members of the MT team at Google, especially Ashish Venugopal, Zhifei Li, John DeNero, and Franz Och, for their help and discussions. We would also like to thank Daniel Gildea for his suggestions on improving the paper.

References

Chris Callison-Burch, Philipp Koehn, Christof Monz, Kay Peterson, Mark Przybocki, and Omar Zaidan. 2010. Findings of the 2010 joint workshop on statistical machine translation and metrics for machine translation.
In Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and Metrics(MATR), pages 17–53, Uppsala, Sweden, July. Association for Computational Linguistics. Revised August 2010. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of the 43rd Annual Conference of the Association for Computational Linguistics (ACL-05), pages 263–270, Ann Arbor, MI. David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228. Steve DeNeefe, Kevin Knight, Wei Wang, and Daniel Marcu. 2007. What can syntax-based MT learn from phrase-based MT? In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 755–763, Prague, Czech Republic, June. Association for Computational Linguistics. Jason Eisner. 2003. Learning non-isomorphic tree mappings for machine translation. In Proceedings of the 41st Meeting of the Association for Computational Linguistics, companion volume, pages 205–208, Sapporo, Japan. Jenny Rose Finkel, Alex Kleeman, and Christopher D. Manning. 2008. Efficient, feature-based, conditional random field parsing. In Proceedings of ACL-08: HLT, pages 959–967, Columbus, Ohio, June. Association for Computational Linguistics. Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a translation rule? In Proceedings of the 2004 Meeting of the North American chapter of the Association for Computational Linguistics (NAACL-04), pages 273–280. Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proceedings of the International Conference on Computational Linguistics/Association for Computational Linguistics (COLING/ACL-06), pages 961–968, July. Joshua Goodman. 1999. Semiring parsing. Computational Linguistics, 25(4):573–605. Jonathan Graehl and Kevin Knight. 2004. Training tree transducers. In Proceedings of the 2004 Meeting of the North American chapter of the Association for Computational Linguistics (NAACL-04). 843 Liang Huang, Kevin Knight, and Aravind Joshi. 2006. Statistical syntax-directed translation with extended domain of locality. In Proceedings of the 7th Biennial Conference of the Association for Machine Translation in the Americas (AMTA), Boston, MA. Liang Huang. 2007. Binarization, synchronous binarization, and target-side binarization. In Proceedings of the NAACL/AMTA Workshop on Syntax and Structure in Statistical Translation (SSST), pages 33–40, Rochester, NY. Liang Huang. 2008. Forest reranking: Discriminative parsing with non-local features. In Proceedings of the 46th Annual Conference of the Association for Computational Linguistics: Human Language Technologies (ACL-08:HLT), Columbus, OH. ACL. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Meeting of the North American chapter of the Association for Computational Linguistics (NAACL-03), Edmonton, Alberta. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 388–395, Barcelona, Spain, July. Shankar Kumar, Wolfgang Macherey, Chris Dyer, and Franz Och. 2009. Efficient minimum error rate training and minimum bayes-risk decoding for translation hypergraphs and lattices. 
In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 163–171, Suntec, Singapore, August. Association for Computational Linguistics. Dekang Lin. 2004. A path-based transfer model for machine translation. In Proceedings of the 20th International Conference on Computational Linguistics (COLING-04), pages 625–630, Geneva, Switzerland. Yang Liu, Qun Liu, and Shouxun Lin. 2006. Tree-tostring alignment template for statistical machine translation. In Proceedings of the International Conference on Computational Linguistics/Association for Computational Linguistics (COLING/ACL-06), Sydney, Australia, July. Yang Liu, Yun Huang, Qun Liu, and Shouxun Lin. 2007. Forest-to-string statistical translation rules. In Proceedings of the 45th Annual Conference of the Association for Computational Linguistics (ACL-07), Prague. Haitao Mi and Liang Huang. 2008. Forest-based translation rule extraction. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 206–214, Honolulu, Hawaii, October. Association for Computational Linguistics. Haitao Mi, Liang Huang, and Qun Liu. 2008. Forestbased translation. In Proceedings of the 46th Annual Conference of the Association for Computational Linguistics: Human Language Technologies (ACL08:HLT), pages 192–199. Joakim Nivre and Mario Scholz. 2004. Deterministic dependency parsing of English text. In Proceedings of Coling 2004, pages 64–70, Geneva, Switzerland, Aug 23–Aug 27. COLING. Franz Josef Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30(4):417–449. Franz Josef Och. 2003. Minimum error rate training for statistical machine translation. In Proceedings of the 41th Annual Conference of the Association for Computational Linguistics (ACL-03). Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Conference of the Association for Computational Linguistics (ACL-02). Arjen Poutsma. 2000. Data-oriented translation. In Proceedings of the 18th International Conference on Computational Linguistics (COLING-00). Chris Quirk, Arul Menezes, and Colin Cherry. 2005. Dependency treelet translation: Syntactically informed phrasal SMT. In Proceedings of the 43rd Annual Conference of the Association for Computational Linguistics (ACL-05), pages 271–279, Ann Arbor, Michigan. Libin Shen, Jinxi Xu, and Ralph Weischedel. 2008. A new string-to-dependency machine translation algorithm with a target dependency language model. In Proceedings of the 46th Annual Conference of the Association for Computational Linguistics: Human Language Technologies (ACL-08:HLT), Columbus, OH. ACL. Wei Wang, Kevin Knight, and Daniel Marcu. 2007. Binarizing syntax trees to improve syntax-based machine translation accuracy. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 746– 754, Prague, Czech Republic, June. Association for Computational Linguistics. Richard Zens and Hermann Ney. 2006. Discriminative reordering models for statistical machine translation. In Proceedings on the Workshop on Statistical Machine Translation, pages 55–63, New York City, June. Association for Computational Linguistics. Hao Zhang, Liang Huang, Daniel Gildea, and Kevin Knight. 2006. 
Synchronous binarization for machine translation. In Proceedings of the 2006 Meeting of the 844 North American chapter of the Association for Computational Linguistics (NAACL-06), pages 256–263, New York, NY. Min Zhang, Hongfei Jiang, Aiti Aw, Haizhou Li, Chew Lim Tan, and Sheng Li. 2008. A tree sequence alignment-based tree-to-tree translation model. In Proceedings of ACL-08: HLT, pages 559–567, Columbus, Ohio, June. Association for Computational Linguistics. Hui Zhang, Min Zhang, Haizhou Li, Aiti Aw, and Chew Lim Tan. 2009. Forest-based tree sequence to string translation model. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 172–180, Suntec, Singapore, August. Association for Computational Linguistics. Andreas Zollmann and Ashish Venugopal. 2006. Syntax augmented machine translation via chart parsing. In Proceedings on the Workshop on Statistical Machine Translation, pages 138–141, New York City, June. Association for Computational Linguistics. 845
Learning to Transform and Select Elementary Trees for Improved Syntax-based Machine Translations

Bing Zhao†, Young-Suk Lee†, Xiaoqiang Luo†, and Liu Li‡
IBM T.J. Watson Research† and Carnegie Mellon University‡
{zhaob, ysuklee, xiaoluo}@us.ibm.com and [email protected]

Abstract

We propose a novel technique of learning how to transform the source parse trees to improve the translation qualities of syntax-based translation models using synchronous context-free grammars. We transform the source tree phrasal structure into a set of simpler structures, expose such decisions to the decoding process, and find the least expensive transformation operation to better model word reordering. In particular, we integrate synchronous binarizations, verb regrouping, and removal of redundant parse nodes, and incorporate a few important features such as translation boundaries. We learn the structural preferences from the data in a generative framework. The syntax-based translation system integrating the proposed techniques outperforms the best Arabic-English unconstrained system in NIST08 evaluations by 1.3 absolute BLEU, which is statistically significant.

1 Introduction

Most syntax-based machine translation models with synchronous context free grammar (SCFG) have been relying on off-the-shelf monolingual parse structures to learn the translation equivalences for string-to-tree, tree-to-string or tree-to-tree grammars. However, state-of-the-art monolingual parsers are not necessarily well suited for machine translation in terms of both labels and chunks/brackets. For instance, in Arabic-to-English translation, we find only 45.5% of Arabic NP-SBJ structures are mapped to the English NP-SBJ with machine alignment and parse trees, and only 60.1% of NP-SBJs are mapped with human alignment and parse trees as in § 2. The chunking is of more concern; at best only 57.4% of source chunking decisions are translated contiguously on the target side. To translate the rest of the chunks one has to frequently break the original structures. The main issue lies in the strong assumption behind SCFG-style nonterminals – each nonterminal (or variable) assumes a source chunk should be rewritten into a contiguous chunk in the target. Without integrating techniques to modify the parse structures, the SCFGs cannot be effective even for translating NP-SBJ in linguistically distant language-pairs such as Arabic-English. Such problems have been noted in previous literature. Zollmann and Venugopal (2006) and Marcu et al. (2006) used broken syntactic fragments to augment their grammars to increase the rule coverage; while we learn optimal tree fragments transformed from the original ones via a generative framework, they enumerate the fragments available from the original trees without a learning process. Mi and Huang (2008) introduced parse forests to blur the chunking decisions to a certain degree, to expand the search space and reduce parsing errors from 1-best trees (Mi et al., 2008); others tried to use the parse trees as soft constraints on top of unlabeled grammars such as Hiero (Marton and Resnik, 2008; Chiang, 2010; Huang et al., 2010; Shen et al., 2010) without sufficiently leveraging rich tree context.
Recent works tried more complex approaches to integrate both parsing and decoding in one single search space as in (Liu and Liu, 2010), at the cost of huge search space. In (Zhang et al., 2009), combinations of tree forest and tree-sequence (Zhang et al., 2008) based approaches were carried out by adding pseudo nodes and hyper edges into the forest. Overall, the forest-based translation can reduce the risks from upstream parsing errors and expand the search space, but it cannot sufficiently address the syntactic divergences between various language-pairs. The tree sequence approach adds pseudo nodes and hyper edges to the forest, which makes the forest even denser and harder for navigation and search. As trees thrive in the search space, especially with the pseudo nodes and edges being added to the already dense forest, it is becoming harder to wade through the deep forest for the best derivation path out. We propose to simplify suitable subtrees to a reasonable level, at which the correct reordering can be easily identified. The transformed structure should be frequent enough to have rich statistics for learning a model. Instead of creating pseudo nodes and edges and make the forest dense, we transform a tree with a few simple operators; only meaningful frontier nodes, context nodes and edges are kept to induce the correct reordering; such operations also enable the model to share the statistics among all similar subtrees. On the basis of our study on investigating the language divergence between Arabic-English with human aligned and parsed data, we integrate several simple statistical operations, to transform parse trees adaptively to serve the 846 translation purpose better. For each source span in the given sentence, a subgraph, corresponding to an elementary tree (in Eqn. 1), is proposed for PSCFG translation; we apply a few operators to transform the subgraph into some frequent subgraphs seen in the whole training data, and thus introduce alternative similar translational equivalences to explain the same source span with enriched statistics and features. For instance, if we regroup two adjacent nodes IV and NP-SBJ in the tree, we can obtain the correct reordering pattern for verb-subject order, which is not easily available otherwise. By finding a set of similar elementary trees derived from the original elementary trees, statistics can be shared for robust learning. We also investigate the features using the context beyond the phrasal subtree. This is to further disambiguate the transformed subgraphs so that informative neighboring nodes and edges can influence the reordering preferences for each of the transformed trees. For instance, at the beginning and end of a sentence, we do not expect dramatic long distance reordering to happen; or under SBAR context, the clause may prefer monotonic reordering for verb and subject. Such boundary features were treated as hard constraints in previous literature in terms of re-labeling (Huang and Knight, 2006) or re-structuring (Wang et al., 2010). The boundary cases were not addressed in the previous literature for trees, and here we include them in our feature sets for learning a MaxEnt model to predict the transformations. We integrate the neighboring context of the subgraph in our transformation preference predictions, and this improve translation qualities further. 
The rest of the paper is organized as follows: in section 2, we analyze the projectable structures using human aligned and parsed data, to identify the problems for SCFG in general; in section 3, our proposed approach is explained in detail, including the statistical operators using a MaxEnt model; in section 4, we illustrate the integration of the proposed approach in our decoder; in section 5, we present experimental results; in section 6, we conclude with discussions and future work.

2 The Projectable Structures

A context-free style nonterminal in PSCFG rules means the source span governed by the nonterminal should be translated into a contiguous target chunk. A "projectable" phrase-structure means that it is translated into a contiguous span on the target side, and thus can be generalized into a nonterminal in our PSCFG rule. We carried out a controlled study on the projectable structures using human annotated parse trees and word alignment for 5k Arabic-English sentence-pairs. In Table 1, the unlabeled F-measures with machine alignment and parse trees show that, for only 48.71% of the time, the boundaries introduced by the source parses are real translation boundaries that can be explained by a nonterminal in a PSCFG rule.

  Alignment   Parse   Labels    Accuracy
      H         H     NP-SBJ     0.6011
                      PP         0.3436
                      NP         0.4832
                      unlabel    0.5739
      M         H     NP-SBJ     0.5356
                      PP         0.2765
                      NP         0.3959
                      unlabel    0.5305
      M         M     NP-SBJ     0.4555
                      PP         0.1935
                      NP         0.3556
                      unlabel    0.4871
Table 1: The labeled and unlabeled F-measures for projecting the source nodes onto the target side via alignments and parse trees; unlabeled F-measures show the bracketing accuracies for translating a source span contiguously. H: human, M: machine.

Even for human parse and alignment, the unlabeled F-measures are still as low as 57.39%. Such statistics show that we should not blindly learn tree-to-string grammar; additional transformations to manipulate the bracketing boundaries and labels accordingly have to be implemented to guarantee the reliability of source-tree based syntax translation grammars. The transformations could be as simple as merging two adjacent nonterminals into one bracket to accommodate non-contiguity on the target side, or lexicalizing those words which have fork-style, many-to-many alignment, or unaligned content words to enable the rest of the span to be generalized into nonterminals. We illustrate several cases using the tree in Figure 1.

[Figure 1: Non-projectable structures in an SBAR tree with human parses and alignment; there are non-projectable structures: the deleted nonterminal PRON (+hA), the many-to-many alignment for IV(ttAlf) PREP(mn), and the fork-style alignment for NOUN (Azmp).]

In Figure 1, several non-projectable nodes are illustrated: the deleted nonterminal PRON (+hA), the many-to-many alignment for IV(ttAlf) PREP(mn), and the fork-style alignment for NOUN (Azmp). Intuitively, it would be good to glue the nodes NOUN(Al$rq) ADJ(AlAwsT) under the node of NP, because it is more frequent for moving ADJ before NOUN in our training data. It should be easier to model the swapping of (NOUN ADJ) using the tree (NP NOUN, ADJ) instead of the original bigger tree of (NP-SBJ Azmp, NOUN, ADJ) with one lexicalized node. Approaches in tree-sequence based grammar (Zhang et al., 2009) tried to address the bracketing problem by using arbitrary pseudo nodes to weave a new "tree" back into the forest for further grammar extractions.
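The projectability test behind Table 1 can be sketched as the usual phrase-pair consistency check. The code below is our illustration with a toy alignment, not the annotation tooling used for the study.

```python
# Sketch of the projectability test behind Table 1 (our illustration):
# a source span [i, j) is projectable iff the target positions it aligns
# to form a contiguous block and no source word outside the span aligns
# into that block -- the usual phrase-pair consistency condition.

def is_projectable(span, alignment):
    """span: (i, j) half-open source range; alignment: set of (src, tgt) links."""
    i, j = span
    inside = {t for s, t in alignment if i <= s < j}
    if not inside:
        return False                       # unaligned span: nothing to project
    lo, hi = min(inside), max(inside)
    outside = {t for s, t in alignment if not (i <= s < j)}
    # no outside source word may align into the projected target block
    return all(t < lo or t > hi for t in outside)

# toy example: source words 0..3 with an "inside-out" style alignment
alignment = {(0, 2), (1, 0), (2, 3), (3, 1)}
print(is_projectable((0, 2), alignment))   # False: targets {0, 2} are interleaved
print(is_projectable((0, 4), alignment))   # True: the whole sentence projects
```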
Such an approach may improve grammar coverage, but the pseudo node labels would arguably be a worse choice, splitting the already sparse data. Some of the interior nodes connecting the frontier nodes might be very informative for modeling reordering. Also, due to the introduced pseudo nodes, it would need exponentially many nonterminals to keep track of the matching tree-structures for translations. The created pseudo node could easily block the informative neighbor nodes associated with the subgraph, which could change the reordering nature. For instance, IV and NP-SBJ tend to swap at the beginning of a sentence, but they may prefer a monotone order if they share a common parent SBAR for a subclause. In this case, it is unnecessary to create a pseudo node "IV+SBJ" that blocks useful factors. We propose to navigate through the forest by simplifying trees: grouping nodes, cutting branches, and attaching connected, informative neighboring nodes to further disambiguate the derivation path. We apply explicit, translation-motivated operators to a given monolingual elementary tree to transform it into similar but simpler trees, and expose such statistical preferences to the decoding process to select the best rewriting rule from the enriched grammar rule sets for generating target strings.

3 Elementary Trees to String Grammar

We propose to use variations of an elementary tree, which is a connected subgraph fitted in the original monolingual parse tree. The subgraph is connected so that the frontiers (two or more) are connected by their immediate common parent. Let γ be a source elementary tree:

  γ = ⟨ℓ; v_f, v_i, E⟩,   (1)

where v_f is a set of frontier nodes which contain nonterminals or words; v_i are the interior nodes with source labels/symbols; E is the set of edges connecting the nodes v = v_f + v_i into a connected subgraph fitted in the source parse tree; ℓ is the immediate common parent of the frontier nodes v_f. Our proposed grammar rule is formulated as follows:

  ⟨γ; α; ∼; m̄; t̄⟩,   (2)

where α is the target string, containing the terminals and/or nonterminals in a target language; ∼ is the one-to-one alignment of the nonterminals between γ and α; t̄ contains a possible sequence of transform operations (to be explained later in this section) associated with each rule; m̄ is a function enumerating the neighborhood of the source elementary tree γ, so that certain tree context (nodes and edges) can be used to further disambiguate the reordering or the given lexical choices. The interior nodes γ.v_i, however, are not necessarily informative for the reordering decisions, like the unary nodes WHNP, VP, and PP-CLR in Figure 1; whereas the frontier nodes γ.v_f are the ones directly executing the reordering decisions. We can selectively cut off the interior nodes which have no or only weak causal relations to the reordering decisions. This will enable the frequencies or derived probabilities for executing the reordering to be more focused. We call such transformation operators t̄. We specified a few operators for transforming an elementary tree γ, including flattening operators such as removing interior nodes in v_i, or grouping the children via binarizations. Let us use the trigram "Alty ttAlf mn" in Figure 1 as an example: the immediate common parent for the span is SBAR, so γ.ℓ = SBAR; the interior nodes are γ.v_i = {WHNP, VP, S, PP-CLR}; the frontier nodes are γ.v_f = (x:PRON, x:IV, x:PREP). The edges γ.E (as highlighted in Figure 1) connect γ.v_i and γ.v_f into a subgraph for the given source ngram.
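For concreteness, the elementary tree of Eqn. 1 can be carried around as a small record; the sketch below is our illustration (with the edge set left empty for brevity), instantiated with the "Alty ttAlf mn" example just described.

```python
from dataclasses import dataclass

# Illustrative container (ours, not the paper's code) for an elementary
# tree gamma = <l; v_f, v_i, E> from Eqn. 1, instantiated with the
# "Alty ttAlf mn" example over the SBAR tree of Figure 1.

@dataclass(frozen=True)
class ElementaryTree:
    root: str              # l: immediate common parent of the frontier nodes
    frontier: tuple        # v_f: frontier nonterminals or words
    interior: frozenset    # v_i: interior node labels
    edges: frozenset       # E: (parent, child) pairs connecting v_i and v_f

gamma = ElementaryTree(
    root="SBAR",
    frontier=("x:PRON", "x:IV", "x:PREP"),
    interior=frozenset({"WHNP", "VP", "S", "PP-CLR"}),
    # edge list omitted for brevity; it would record how SBAR connects
    # the interior chain down to the three frontier nodes
    edges=frozenset(),
)
print(gamma)
```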
For any source span, we look up one elementary tree γ covering the span, then we select an operator t̄ ∈ T to explore a set of similar elementary trees t̄(γ, m̄) = {γ′} as simplified alternatives for translating that source tree (span) γ into an optimal target string α∗ accordingly. Our generative model is summarized in Eqn. 3:

  α∗ = argmax over t̄ ∈ T, γ′ ∈ t̄(γ, m̄) of  p_a(α′ | γ′) × p_b(γ′ | t̄, γ, m̄) × p_c(t̄ | γ, m̄).   (3)

In our generative scheme, for a given elementary tree γ, we sample an operator (or a combination of operations) t̄ with probability p_c(t̄ | γ); with operation t̄, we transform γ into a set of simplified versions γ′ ∈ t̄(γ, m̄) with probability p_b(γ′ | t̄, γ); finally we select the transformed version γ′ to generate the target string α′ with probability p_a(α′ | γ′). Note here, γ′ and γ share the same immediate common parent ℓ, but not necessarily the frontier, interior, or even neighbors. The frontier nodes can be merged, lexicalized, or even deleted in the tree-to-string rule associated with γ′, as long as the alignment for the nonterminals is book-kept in the derivations. To simplify the model, one can choose the operator t̄ to be only one level, and the model using a single operator t̄ is then deterministic. Thus, the final set of models to learn are p_a(α′ | γ′) for rule alignment, the preference model p_b(γ′ | t̄, γ, m̄), and the operator proposal model p_c(t̄ | γ, m̄), which in our case is a maximum entropy model – the key model in our proposed approach for transforming the original elementary tree into similar trees for evaluating the reordering probabilities. Eqn. 3 significantly enriches the reordering power of syntax-based machine translation, because it uses the whole set of similar elementary trees to generate the best target strings. In the next section, we first define the operators conceptually, and then explain how we learn each of the models.

3.1 Model p_a(α′ | γ′)

A log-linear model is applied here to approximate p_a(α′ | γ′) ∝ exp(λ̄ · ff(α′, γ′)), a weighted combination (λ̄) of feature functions ff(α′, γ′), including relative frequencies in both directions, and IBM Model-1 scores in both directions since γ′ and α′ have lexical items within them. We also employed a few binary features, listed in the following table.

  γ′ is observed less than 2 times
  (α′, γ′) deletes a src content word
  (α′, γ′) deletes a src function word
  (α′, γ′) over-generates a tgt content word
  (α′, γ′) over-generates a tgt function word
Table 2: Additional 5 binary features for p_a(α′ | γ′).

3.2 Model p_b(γ′ | t̄, γ, m̄)

p_b(γ′ | t̄, γ, m̄) is our preference model. For instance, using the operator t̄ of cutting a unary interior node in γ.v_i, if γ.v_i has more than one unary interior node – like the SBAR tree in Figure 1, which has three unary interior nodes WHNP, VP and PP-CLR – p_b(γ′ | t̄, γ, m̄) specifies which one should be more likely to be cut. In our case, to keep the model simple, we simply use histogram/frequency estimates for modeling the choices.

3.3 Model p_c(t̄ | γ, m̄)

p_c(t̄ | γ, m̄) is our operator proposal model. It ranks the operators which are valid to be applied for the given source tree γ together with its neighborhood m̄. Here, in our approach, we applied a Maximum Entropy model, which is also employed to train our Arabic parser: p_c(t̄ | γ, m̄) ∝ exp(λ̄ · ff(t̄, γ, m̄)). The feature sets we use here are almost the same as those used to train our Arabic parser; the only difference is that the future space here is the set of operator categories, and we check bag-of-nodes features for interior nodes and frontier nodes.
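The selection in Eqn. 3 can be pictured as a loop over operators, transformed trees, and candidate target strings, combining the three model scores in log space. The sketch below is ours, with toy probability tables standing in for the learned p_a, p_b, and p_c.

```python
import math

# Schematic sketch (ours) of the selection in Eqn. 3: choose the operator
# t, transformed tree gamma', and target string alpha' maximizing
# p_a(alpha' | gamma') * p_b(gamma' | t, gamma, m) * p_c(t | gamma, m).
# The dictionaries below are toy stand-ins for the three learned models.

p_c = {"cut_unary": 0.6, "binarize_left": 0.4}       # operator proposal model
p_b = {("cut_unary", "g1"): 1.0,                     # preference over outputs
       ("binarize_left", "g2"): 1.0}
p_a = {"g1": {"alpha_1": 0.7, "alpha_2": 0.3},       # rule/target model
       "g2": {"alpha_3": 0.9}}
transforms = {"cut_unary": ["g1"], "binarize_left": ["g2"]}

best, best_score = None, -math.inf
for t, trees in transforms.items():
    for g2 in trees:
        for alpha, pa in p_a[g2].items():
            score = math.log(pa) + math.log(p_b[(t, g2)]) + math.log(p_c[t])
            if score > best_score:
                best, best_score = (t, g2, alpha), score

print(best)   # ('cut_unary', 'g1', 'alpha_1') under these toy numbers
```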
The key feature categories we used are listed in Table 3. The head table used in our training is manually built for Arabic.

  bag-of-nodes of γ.v_i
  bag-of-nodes and ngram of γ.v_f
  chunk-level features: left-child, right-child, etc.
  lexical features: unigram and bigram
  pos features: unigram and bigram
  contextual features: surrounding words
Table 3: Features for learning p_c(t̄ | γ, m̄).

3.4 t̄: Tree Transformation Function

Obvious systematic linguistic divergences between language-pairs could be handled by some simple operators, such as using binarization to re-group contiguously aligned children. Here, we start from the human aligned and parsed data used in section 2 to explore potentially useful operators.

3.4.1 Binarizations

One of the simplest ways of transforming a tree is binarization. Monolingual binarization re-groups children into a smaller subtree with a suitable label for the newly created root. We choose a function mapping to select the most frequent label as the root for the grouped children; if no such label is found, we simply use the label of the immediate common parent of γ. At decoding time, we need to select trees from all possible binarizations, while at training time we restrict the choices allowed with the alignment constraint that the grouped children should be aligned contiguously on the target side. Our goal is to simulate synchronous binarization as much as we can. In this paper, we applied the four basic operators for binarizing a tree: left-most, right-most, and additionally head-out left and head-out right for nodes with more than three children. Two examples are given in Table 4, in which we use the LDC-style representation for the trees.

  right-most:  (NP Xnoun Xadj1 Xadj2) → (NP Xnoun (ADJP Xadj1 Xadj2))
  left-most:   (VP Xpv XNP-SBJ XSBAR) → (VP (VP Xpv XNP-SBJ) XSBAR)
Table 4: Operators for binarizing the trees.

With the proper binarization, the structure becomes rich in sub-structures which allow certain reorderings to happen more likely than others. For instance, for the subtree (VP PV NP-SBJ), one can apply stronger statistics from the training data to support the swap of NP-SBJ and PV for translation.

3.4.2 Regrouping verbs

Verbs are key for reordering, especially for Arabic-English with VSO translated into SVO. However, if the verb and its relevant arguments for reordering are at different levels in the tree, the reordering is difficult to model, as more combinations of interior nodes will scatter the distributions and make the model less focused. We provide the following two operations specific to verbs in VP trees, as in Table 5.

  regroup verb:                              (VP1 Xv (VP2 Y)) → (VP1 (VP2 Xv Y))
  regroup verb and remove the top-level VP:  (R (VP1 Xv (R2 Y))) → (R (R2 Xv Y))
Table 5: Operators for manipulating the trees.

3.4.3 Removing interior nodes and edges

For reordering patterns, keeping the deep tree structure might not be the best choice. Sometimes it is not even possible, due to many-to-many alignments and insertions and deletions of terminals. So, we introduce operators to remove the interior nodes γ.v_i selectively; this way, we can flatten the tree, remove irrelevant nodes and edges, and use more frequent observations of simplified structures to capture the reordering patterns. We use two operators as shown in Table 6.
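The two removal operators just referenced (Table 6) are easy to state on trees encoded as nested tuples; the sketch below is our illustration, not the extraction code.

```python
# Sketch (ours) of the two removal operators of Table 6, on trees
# encoded as nested tuples (label, child1, child2, ...); leaves are
# plain strings such as frontier variables.

def remove_unary(tree):
    """Remove unary interior nodes: (R Xt1 (R1 (R2 Xt2))) -> (R Xt1 (R2 Xt2))."""
    if isinstance(tree, str):
        return tree
    label, *children = tree
    children = [remove_unary(c) for c in children]
    if len(children) == 1 and not isinstance(children[0], str):
        return children[0]          # the unary node is dropped, its child survives
    return (label, *children)

def strip_labels(tree):
    """Drop all interior structure, keeping only the frontier sequence."""
    if isinstance(tree, str):
        return [tree]
    return [leaf for child in tree[1:] for leaf in strip_labels(child)]

t = ("R", "Xt1", ("R1", ("R2", "Xt2")))
print(remove_unary(t))    # ('R', 'Xt1', ('R2', 'Xt2'))
print(strip_labels(t))    # ['Xt1', 'Xt2']
```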
The second operator deletes all the interior nodes, labels and edges; thus reordering backs off to a Hiero-alike (Chiang, 2007) unlabeled rule, plus a special glue rule X1 X2 → X1 X2. This operator is necessary: we need a scheme to automatically back off to the meaningful glue or Hiero-alike rules, which may lead to a cheaper derivation path for constructing a partial hypothesis at decoding time.

[Figure 2: An NP tree with an "inside-out" alignment. The nodes NP* and PP* are not suitable for generalizing into NTs used in PSCFG rules.]

As shown in Table 1, NP brackets are translated contiguously as an NP only 35.56% of the time with machine-aligned and parsed data. The NP tree in Figure 2 happens to have an "inside-out" style alignment, and a context-free grammar such as ITG (Wu, 1997) cannot explain this structure well without the necessary lexicalization. Actually, the Arabic tokens "dfE Aly AlAnfjAr" form a combination that is turned into the English word "ignite" in an idiomatic way. With lexicalization, a Hiero-style rule "dfE X Aly AlAnfjAr → to ignite X" is potentially a better alternative for translating the NP tree. Our operators allow us to back off to such Hiero-style rules to construct derivations, which share the immediate common parent NP, as defined for the elementary tree, for the given source span.

3.5 m̄: Neighboring Function

For a given elementary tree, we use the function m̄ to check the context beyond the subgraph. This includes looking at the nodes and edges connected to the subgraph. Similar to the features used in (Dyer et al., 2009), we check the following three cases.

3.5.1 Sentence boundaries

When the frontier set of the tree γ contains the left-most token, the right-most token, or both, we add to the neighboring nodes the corresponding decoration tags L (left), R (right), and B (both), respectively. These decorations are important especially when the reordering patterns for the same trees depend on the context. For instance, at the beginning or end of a sentence, we do not expect dramatic reordering – moving a token too far away into the middle of the sentence.

3.5.2 SBAR/IP/PP/FRAG boundaries

We check the siblings of the root of γ for a few special labels, including SBAR, IP, PP, and FRAG. These labels indicate a partial sentence or clause, and the reordering patterns may have different distributions depending on the position relative to these nodes. For instance, the PV and SBJ nodes under SBAR tend to have a stronger preference for monotone word order (Carpuat et al., 2010). We mark the boundaries with position markers such as L-PP, to indicate having a left sibling PP, R-IP for having a right sibling IP, and C-SBAR to indicate the elementary tree is a child of SBAR. These labels are selected mainly based on our linguistic intuitions and errors observed in our translation system. A data-driven approach might be more promising for identifying useful markups w.r.t. specific reordering patterns.

3.5.3 Translation boundaries

In Figure 2, there are two special nodes under NP: NP* and PP*. These two nodes are aligned in an "inside-out" fashion, and neither of them can be generalized into a nonterminal to be rewritten in a PSCFG rule.
In other words, the phrasal brackets induced from NP* and PP* are not translation boundaries, and to avoid translation errors we should identify them by applying a PSCFG rule on top of them. During training, we label nodes with translation boundaries as one additional function tag; during decoding, we employ the MaxEnt model to predict the translation-boundary label probability for each span associated with a subgraph γ, and discourage derivations accordingly for using nonterminals over non-translation-boundary spans. Translation boundaries over elementary trees have much richer representation power. Previous work, as in Xiong et al. (2010), defined translation boundaries on phrase-decoder-style derivation trees due to the nature of their shift-reduce algorithm, which is a special case in our model.

  remove unary nodes:  (R Xt1 (R1 (R2 Xt2))) → (R Xt1 (R2 Xt2))
  remove all labels:   (R (R1 Xt1 (R2 Xt2))) → (R Xt2 Xt1)
Table 6: Operators for simplifying the trees.

4 Decoding

Decoding using the proposed elementary-tree-to-string grammar naturally resembles bottom-up chart parsing algorithms. The key difference is at the grammar querying step. Given a grammar G and the input source parse tree π from a monolingual parser, we first construct the elementary tree for a source span, and then retrieve all the relevant subgraphs seen in the given grammar through the proposed operators. This step is called populating: using the proposed operators to find all relevant elementary trees γ which may have contributed to explaining the source span, and putting them in the corresponding cells in the chart. There would be an exponential number of relevant elementary trees to search if we did not have any restrictions in the populating step; we restrict the maximum number of interior nodes |γ.v_i| to be 3 and the size of the frontier |γ.v_f| to be less than 6; additional pruning of less frequent elementary trees is carried out. After populating the elementary trees, we construct the partial hypotheses bottom-up, by rewriting the frontier nodes of each elementary tree with the probabilities (costs) for γ → α∗ as in Eqn. 3. Our decoder (Zhao and Al-Onaizan, 2008) is a template-based chart decoder in C++. It generalizes over the dotted-product operator in an Earley-style parser, to allow us to leverage many operators t̄ ∈ T as mentioned above, such as binarizations, at different levels for constructing partial hypotheses.

5 Experiments

In our experiments, we built our system using most of the parallel training data available to us: 250M Arabic running tokens, corresponding to the "unconstrained" condition in NIST-MT08. We chose the test sets of newswire and weblog genres from MT08 and DEV10.[1] In particular, we choose MT08 to enable the comparison of our results to the reported results in NIST evaluations. Our training and test data are summarized in Table 7. For testing, we have 129,908 tokens in our test sets. For language models (LM), we used a 6-gram LM trained with 10.3 billion English tokens, and also a shrinkage-based LM (Chen, 2009) – "ModelM" (Chen and Chu, 2010; Emami et al., 2010) with 150 word-clusters learnt from 2.1 million tokens. From the parallel data, we extract phrase pairs (blocks) and elementary-tree-to-string grammars in various configurations: basic tree-to-string rules (Tr2str), elementary tree-to-string rules with the transformations t̄ (elm2str+t̄) or the boundary features m̄ (elm2str+m̄), and with both t̄ and m̄ (elm2str+t̄(γ, m̄)).
This is to evaluate the operators' effects at different levels for decoding. To learn our MaxEnt models defined in § 3.3, we collect the events while extracting the elm2str grammar at training time, and learn the model using improved iterative scaling. We use the same training data as that used in training our Arabic parser. There are 16 thousand human parse trees with human alignment; an additional 1 thousand human parsed and aligned sentence-pairs are used as an unseen test set to verify our MaxEnt models and parsers. Our Arabic parser has a labeled F-measure of 78.4% and a POS tag accuracy of 94.9%. In particular, we will evaluate the model p_c(t̄ | γ, m̄) in Eqn. 3 for predicting the translation boundaries in § 3.5.3 for projectable spans, as detailed in § 5.1.

Our decoder (Zhao and Al-Onaizan, 2008) supports grammars including monotone, ITG, Hiero, tree-to-string, string-to-tree, and several mixtures of them (Lee et al., 2010). We used 19 feature functions, mainly those used in phrase-based decoders like Moses (Koehn et al., 2007): two language models (one 6-gram LM and one ModelM), a brevity penalty, IBM Model-1 (Brown et al., 1993) style alignment probabilities in both directions, relative frequencies in both directions, word/rule counts, content/function word mismatch, together with features on tr2str rule probabilities. We use BLEU (Papineni et al., 2002) and TER (Snover et al., 2006) to evaluate translation qualities. Our baseline used the basic elementary-tree-to-string grammar without any manipulations and boundary markers in the model, and we achieved a BLEUr4n4 of 55.01 for MT08-NW, or a cased BLEU of 53.31, which is close to the best officially reported result of 53.85 for unconstrained systems.[2] We expose the statistical decisions in Eqn. 3 as the rule probability, one of the 19 dimensions, and use the Simplex Downhill algorithm with Armijo line search (Zhao and Chen, 2009) to optimize the weight vector for decoding. The algorithm moves all dimensions at the same time, and empirically achieved more stable results than MER (Och, 2003) in many of our experiments.

              Train                MT08-NW   MT08-WB   Dev10-NW   Dev10-WB
  # Sents     8,032,837               813       547      1089       1059
  # Tokens    349M(ar)/230M(en)     25,926    19,654    41,240     43,088
Table 7: Training and test data; all parallel training data is used for the 4 test sets.

[1] DEV10 are unseen test sets used in our GALE project, selected from the recently released LDC data LDC2010E43.v3.

5.1 Predicting Projectable Structures

The projectable structure is important for our proposed elementary-tree-to-string grammar (elm2str). When a span is predicted not to be a translation boundary, we want the decoder to prefer alternative derivations outside of the immediate elementary tree, or more aggressive manipulations of the trees, such as deleting interior nodes to explore unlabeled grammars such as Hiero-style rules, with proper costs. We test separately on predicting the projectable structures, like predicting function tags in § 3.5.3, for each node in the syntactic parse tree. We use one thousand test sentences with two conditions: human parses and machine parses. There are 40,674 nodes in total, excluding the sentence-level node. The results are shown in Table 8. Our MaxEnt model is very accurate on human trees, with 94.5% accuracy, and reaches about 84.7% accuracy on machine-parsed trees. Our accuracies are higher than the 71+% accuracies reported in (Xiong et al., 2010) for their phrasal decoder.
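During decoding, the predicted boundary probability is turned into an additional cost that discourages nonterminals over spans predicted to be non-projectable (§ 3.5.3). The sketch below is our illustration with made-up MaxEnt outputs, not the decoder's feature code.

```python
import math

# Sketch (ours) of how a predicted translation-boundary probability can
# discourage derivations: spans judged unlikely to be projectable add a
# penalty to any rule that generalizes them into a nonterminal.

def boundary_penalty(p_boundary, floor=1e-4):
    """Negative log probability of the span being a translation boundary."""
    return -math.log(max(p_boundary, floor))

def derivation_cost(rule_cost, nonterminal_spans, p_boundary_of):
    cost = rule_cost
    for span in nonterminal_spans:
        cost += boundary_penalty(p_boundary_of(span))
    return cost

# toy MaxEnt outputs for two spans used as nonterminals by a candidate rule
p = {(3, 9): 0.92, (9, 14): 0.18}          # (begin, end): P(projectable)
print(round(derivation_cost(2.5, [(3, 9), (9, 14)], p.get), 2))
# 2.5 + 0.08 + 1.71 ~ 4.3: the second span makes this derivation expensive
```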
  Setups            Accuracy
  Human Parses        94.5%
  Machine Parses      84.7%
Table 8: Accuracies of predicting projectable structures.

We zoom in on the translation boundaries for MT08-NW, studying a few important frequent labels including VP and NP-SBJ, as in Table 9. According to our MaxEnt model, 20% of the time we should discourage a VP tree from being translated contiguously; such VP trees have an average span length of 16.9 tokens in MT08-NW. The corresponding statistics for S trees are 15.9% with an average span of 13.8 tokens.

  Labels     total   NonProj   Percent   Avg.len
  VP*         4479      920     20.5%      16.9
  NP*        14164      825      5.8%      8.12
  S*          3123      495     15.9%      13.8
  NP-SBJ*     1284       53     4.12%      11.9
Table 9: The predicted projectable structures in MT08-NW.

[2] See http://www.itl.nist.gov/iad/mig/tests/mt/2008/doc/mt08_official_results_v0.html

Using the predicted projectable structures for the elm2str grammar, together with the probability defined in Eqn. 3 as an additional cost, the translation results in Table 11 show a gain of 0.29 BLEU points (56.13 vs. 55.84). The boundary decisions penalize derivation paths that use nonterminals over non-projectable spans during partial hypothesis construction.

  Setups                              TER     BLEUr4n4
  Baseline                           39.87      55.01
  right-binz (rbz)                   39.10      55.19
  left-binz (lbz)                    39.67      55.31
  Head-out-left (hlbz)               39.56      55.50
  Head-out-right (hrbz)              39.52      55.53
  + all binarizations (abz)          39.42      55.60
  + regroup-verb                     39.29      55.72
  + deleting interior nodes γ.v_i    38.98      55.84
Table 10: TER and BLEU for MT08-NW, using only t̄(γ).

5.2 Integrating t̄ and m̄

We carried out a series of experiments to explore the impact of using t̄ and m̄ for the elm2str grammar. We start from transforming the trees via the simple operator t̄(γ), and then expand the function with more tree context to include the neighboring functions: t̄(γ, m̄).

  Setups              TER     BLEUr4n4
  Baseline w/ t̄      38.98      55.84
  + TM Boundaries     38.89      56.13
  + SENT Bound        38.63      56.46
  all t̄(γ, m̄)        38.61      56.87
Table 11: TER and BLEU for MT08-NW, using t̄(γ, m̄).

Experiments in Table 10 focus on testing operators, especially binarizations for transforming the trees. In Table 10, the four possible binarization methods all improve over the baseline, from +0.18 (via right-most binarization) to +0.52 (via head-out-right) BLEU points. When we combine all binarizations (abz), we did not see additive gains over the best individual case – hrbz. This is because at decoding time we do not frequently see a large number of children (the maximum is 6), and for smaller trees (with three or four children) these operators largely generate the same transformed trees, which explains why the differences among the individual binarizations are small. For other languages, these binarization choices might give larger differences. Additionally, regrouping the verbs is marginally helpful for BLEU and TER. Upon close examination, we found it is usually beneficial to group a verb (PV or IV) with its neighboring nodes for expressing phrases like "have to do" and "will not only".

                      MT08-NW   MT08-WB   Dev10-NW   Dev10-WB
  Tr2Str                55.01     39.19     37.33      41.77
  elm2str+t̄             55.84     39.43     38.02      42.70
  elm2str+m̄             55.57     39.60     37.67      42.54
  elm2str+t̄(γ, m̄)       56.87     39.82     38.62      42.75
Table 12: BLEU scores on various test sets, comparing elementary tree-to-string grammar (Tr2Str), transformation of the trees (elm2str+t̄), using the neighboring function for boundaries (elm2str+m̄), and the combination of all together (elm2str+t̄(γ, m̄)). MT08-NW and MT08-WB have four references; Dev10-WB has three references, and Dev10-NW has one reference. BLEUn4 scores are reported.
Deleting interior nodes helps shrink the trees, so that we can translate them with richer statistics and higher confidence. It helps more on TER than on BLEU for MT08-NW. Table 11 extends Table 10 with the neighboring function to further disambiguate the reordering rules using tree context. Beyond the translation boundaries, the reordering decisions should differ with regard to the position of the elementary tree relative to the sentence. At the sentence beginning one might expect more monotone decoding, while in the middle of the sentence one might expect more reordering. Table 11 shows that when we add such boundary markups to our rules, an improvement of 0.33 BLEU points is obtained (56.46 vs. 56.13) on top of the already improved setups. A closer check showed that the sentence-begin/end markups significantly reduced the leading "and" (from the Arabic word w#) in the decoding output. Also, the verb-subject order under SBAR tends to be monotone with a leading pronoun, rather than following the generally strong reordering of moving the verb after the subject. Overall, our results showed that such boundary conditions are helpful for executing the correct reorderings. We conclude the investigation with the full function t̄(γ, m̄), which leads to a BLEUr4n4 of 56.87 (cased BLEUr4n4c 55.16), a significant improvement of 1.77 BLEU points over an already strong baseline.

We apply the setups to several other NW and WEB data sets to further verify the improvement. As shown in Table 12, we apply the operators for t̄ and m̄ separately first, and then combine them for the final results. Varied improvements were observed for different genres. On Dev10-NW, we observed a 1.29 BLEU point improvement, and about 0.63 and 0.98 improved BLEU points for MT08-WB and Dev10-WB, respectively. The improvements for newswire are statistically significant. The improvements for weblog are, however, only marginally better. One possible reason is that the parser quality for the web genre is less reliable, as our training data is all newswire. Regarding the individual operators proposed in this paper, we observed consistent improvements from applying them across all the data sets. The generative model in Eqn. 3 leverages the operators further by selecting the best transformed tree form for executing the reorderings.

5.3 A Translation Example

To illustrate the advantages of the proposed grammar, we use a test case with long-distance word reordering and its source-side parse tree. We compare against the translation from a strong phrasal decoder (DTM2) (Ittycheriah and Roukos, 2007), which is one of the top systems in the NIST08 evaluation for Arabic-English. The translations from both decoders with the same training data (LM+TM) are in Table 13. The highlighted parts in Figure 3 show that the rules on partial trees are effectively selected and applied for capturing long-distance word reordering, which is otherwise rather difficult to get correct in a phrasal system even with a MaxEnt reordering model.

6 Discussions and Conclusions

We proposed a framework to learn models that predict how to transform an elementary tree into simplified forms for better executing word reorderings. Two types of operators were explored: (a) transforming the trees via binarizations, grouping, or deleting interior nodes to change the structures; and (b) using neighboring boundary context to further disambiguate the reordering decisions.
Significant improvements were observed on top of a strong baseline system, and consistent improvements were observed across genres; we achieved a cased BLEU of 55.16 for MT08-NW, which is significantly better than the officially reported results in the NIST MT08 Arabic-English evaluations.

  Src Sent:         qAl AlAmyr EbdAlrHmn bn EbdAlEzyz nA}b wzyr AldfAE AlsEwdy AlsAbq fy tSryH SHAfy An +h mtfA}l b# qdrp Almmlkp Ely AyjAd Hl l# Alm$klp .
  Phrasal Decoder:  prince abdul rahman bin abdul aziz , deputy minister of defense former saudi said in a press statement that he was optimistic about the kingdom 's ability to find a solution to the problem .
  Elm2Str+t̄(γ, m̄):  former saudi deputy defense minister prince abdul rahman bin abdul aziz said in a press statement that he was optimistic of the kingdom 's ability to find a solution to the problem .
Table 13: A translation example, compared with the phrasal decoder.

[Figure 3: A test case illustrating the derivations from the chart decoder. The left panel is the source parse tree for the Arabic sentence – the input to our decoder; the right panel is the English translation together with the simplified derivation tree and alignment from our decoder output. Each "X" is a nonterminal in the grammar rule; a "Block" means a phrase pair is applied to rewrite a nonterminal; "Glue" and "Hiero" mean the unlabeled rules were chosen to explain the span, as explained in § 3.4.3; "Tree" means a labeled rule is applied for the span. For instance, for the source span [1,10], a rule is applied on a partial tree with PV and NP-SBJ; for the span [18,23], a rule is backed off to an unlabeled rule (Hiero-alike); for the span [21,22], it is another partial tree of NPs.]

Within the proposed framework, we also presented several special cases, including the translation boundaries for nonterminals in the SCFG. We achieved a high accuracy of 84.7% for predicting such boundaries using a MaxEnt model on machine parse trees. Future work aims at transforming such non-projectable trees into projectable form (Eisner, 2003), driven by translation rules from aligned data (Burkett et al., 2010), and at using informative features from both the source[3] and the target sides (Shen et al., 2008) to enable the system to leverage more isomorphic trees and avoid potential detour errors. We are also exploring the incremental decoding framework, as in (Huang and Mi, 2010), to improve pruning and speed.

[3] The BLEU score on MT08-NW has been improved to 57.55 since the acceptance of this paper, using the proposed technique but with our GALE P5 data pipeline and setups.

Acknowledgments

This work was partially supported by the Defense Advanced Research Projects Agency under contract No. HR0011-08-C-0110. The views and findings contained in this material are those of the authors and do not necessarily reflect the position or policy of the U.S. government and no official endorsement should be inferred. We are also very grateful to the three anonymous reviewers for their suggestions and comments.

References

Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. In Computational Linguistics, volume 19(2), pages 263–331. David Burkett, John Blitzer, and Dan Klein. 2010. Joint parsing and alignment with weakly synchronized grammars. In Proceedings of HLT-NAACL, pages 127–135, Los Angeles, California, June. Association for Computational Linguistics. Marine Carpuat, Yuval Marton, and Nizar Habash. 2010.
Reordering matrix post-verbal subjects for arabic-to-english smt. In 17th Confrence sur le Traitement Automatique des Langues Naturelles, Montral, Canada, July. Stanley F. Chen and Stephen M. Chu. 2010. Enhanced word classing for model m. In Proceedings of Interspeech. Stanley F. Chen. 2009. Shrinking exponential language models. In Proceedings of NAACL HLT,, pages 468–476. David Chiang. 2007. Hierarchical phrase-based translation. In Computational Linguistics, volume 33(2), pages 201–228. David Chiang. 2010. Learning to translate with source and target syntax. In Proc. ACL, pages 1443–1452. Chris Dyer, Hendra Setiawan, Yuval Marton, and Philip Resnik. 2009. The University of Maryland statistical machine translation system for the Fourth Workshop on Machine Translation. In Proceedings of the Fourth Workshop on Statistical Machine Translation, pages 145–149, Athens, Greece, March. Jason Eisner. 2003. Learning Non-Isomorphic tree mappings for Machine Translation. In Proc. ACL-2003, pages 205– 208. Ahmad Emami, Stanley F. Chen, Abe Ittycheriah, Hagen Soltau, and Bing Zhao. 2010. Decoding with shrinkagebased language models. In Proceedings of Interspeech. Bryant Huang and Kevin Knight. 2006. Relabeling syntax trees to improve syntax-based machine translation quality. In Proc. NAACL-HLT, pages 240–247. Liang Huang and Haitao Mi. 2010. Efficient incremental decoding for tree-to-string translation. In Proceedings of EMNLP, pages 273–283, Cambridge, MA, October. Association for Computational Linguistics. Zhongqiang Huang, Martin Cmejrek, and Bowen Zhou. 2010. Soft syntactic constraints for hierarchical phrase-based translation using latent syntactic distributions. In Proceedings of the 2010 EMNLP, pages 138–147. Abraham Ittycheriah and Salim Roukos. 2007. Direct translation model 2. In Proc of HLT-07, pages 57–64. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris CallisonBurch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In ACL, pages 177–180. Young-Suk Lee, Bing Zhao, and Xiaoqian Luo. 2010. Constituent reordering and syntax models for english-to-japanese statistical machine translation. In Proceedings of Coling2010, pages 626–634, Beijing, China, August. Yang Liu and Qun Liu. 2010. Joint parsing and translation. In Proceedings of COLING 2010,, pages 707–715, Beijing, China, August. Daniel Marcu, Wei Wang, Abdessamad Echihabi, and Kevin Knight. 2006. Spmt: Statistical machine translation with syntactified target language phraases. In Proceedings of EMNLP-2006, pages 44–52. Yuval Marton and Philip Resnik. 2008. Soft syntactic constraints for hierarchical phrased-based translation. In Proceedings of ACL-08: HLT, pages 1003–1011. Haitao Mi and Liang Huang. 2008. Forest-based translation rule extraction. In Proceedings of EMNLP 2008, pages 206– 214. Haitao Mi, Liang Huang, and Qun Liu. 2008. Forest-based translation. In In Proceedings of ACL-HLT, pages 192–199. Franz Josef Och. 2003. Minimum error rate training in Statistical Machine Translation. In ACL-2003, pages 160–167. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proc. of the ACL-02), pages 311–318, Philadelphia, PA, July. Libin Shen, Jinxi Xu, and Ralph Weischedel. 2008. A new string-to-dependency machine translation algorithm with a target dependency language model. 
In Proceedings of ACL08: HLT, pages 577–585, Columbus, Ohio, June. Association for Computational Linguistics. Libin Shen, Bing Zhang, Spyros Matsoukas, Jinxi Xu, and Ralph Weischedel. 2010. Statistical machine translation with a factorized grammar. In Proceedings of the 2010 EMNLP, pages 616–625, Cambridge, MA, October. Association for Computational Linguistics. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In AMTA. W. Wang, J. May, K. Knight, and D. Marcu. 2010. Restructuring, re-labeling, and re-aligning for syntax-based statistical machine translation. In Computational Linguistics, volume 36(2), pages 247–277. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. In Computational Linguistics, volume 23(3), pages 377–403. Deyi Xiong, Min Zhang, and Haizhou Li. 2010. Learning translation boundaries for phrase-based decoding. In NAACL-HLT 2010, pages 136–144. Min Zhang, Hongfei Jiang, Aiti Aw, Haizhou Li, Chew Lim Tan, and Sheng Li. 2008. A tree sequence alignment-based tree-to-tree translation model. In ACL-HLT, pages 559–567. Hui Zhang, Min Zhang, Haizhou Li, Aiti Aw, and Chew Lim Tan. 2009. Forest-based tree sequence to string translation model. In Proc. of ACL 2009, pages 172–180. Bing Zhao and Yaser Al-Onaizan. 2008. Generalizing local and non-local word-reordering patterns for syntax-based machine translation. In Proceedings of EMNLP, pages 572– 581, Honolulu, Hawaii, October. Bing Zhao and Shengyuan Chen. 2009. A simplex armijo downhill algorithm for optimizing statistical machine translation decoding parameters. In Proceedings of HLT-NAACL, pages 21–24, Boulder, Colorado, June. Association for Computational Linguistics. Andreas Zollmann and Ashish Venugopal. 2006. Syntax augmented machine translation via chart parsing. In Proc. of NAACL 2006 - Workshop on SMT, pages 138–141. 855
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 856–864, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Rule Markov Models for Fast Tree-to-String Translation Ashish Vaswani Information Sciences Institute University of Southern California [email protected] Haitao Mi Institute of Computing Technology Chinese Academy of Sciences [email protected] Liang Huang and David Chiang Information Sciences Institute University of Southern California {lhuang,chiang}@isi.edu Abstract Most statistical machine translation systems rely on composed rules (rules that can be formed out of smaller rules in the grammar). Though this practice improves translation by weakening independence assumptions in the translation model, it nevertheless results in huge, redundant grammars, making both training and decoding inefficient. Here, we take the opposite approach, where we only use minimal rules (those that cannot be formed out of other rules), and instead rely on a rule Markov model of the derivation history to capture dependencies between minimal rules. Large-scale experiments on a state-of-the-art tree-to-string translation system show that our approach leads to a slimmer model, a faster decoder, yet the same translation quality (measured using Bleu) as composed rules. 1 Introduction Statistical machine translation systems typically model the translation process as a sequence of translation steps, each of which uses a translation rule, for example, a phrase pair in phrase-based translation or a tree-to-string rule in tree-to-string translation. These rules are usually applied independently of each other, which violates the conventional wisdom that translation should be done in context. To alleviate this problem, most state-of-the-art systems rely on composed rules, which are larger rules that can be formed out of smaller rules (including larger phrase pairs that can be formerd out of smaller phrase pairs), as opposed to minimal rules, which are rules that cannot be formed out of other rules. Although this approach does improve translation quality dramatically by weakening the independence assumptions in the translation model, they suffer from two main problems. First, composition can cause a combinatorial explosion in the number of rules. To avoid this, ad-hoc limits are placed during composition, like upper bounds on the number of nodes in the composed rule, or the height of the rule. Under such limits, the grammar size is manageable, but still much larger than the minimal-rule grammar. Second, due to large grammars, the decoder has to consider many more hypothesis translations, which slows it down. Nevertheless, the advantages outweigh the disadvantages, and to our knowledge, all top-performing systems, both phrase-based and syntax-based, use composed rules. For example, Galley et al. (2004) initially built a syntax-based system using only minimal rules, and subsequently reported (Galley et al., 2006) that composing rules improves Bleuby 3.6 points, while increasing grammar size 60-fold and decoding time 15-fold. The alternative we propose is to replace composed rules with a rule Markov model that generates rules conditioned on their context. In this work, we restrict a rule’s context to the vertical chain of ancestors of the rule. This ancestral context would play the same role as the context formerly provided by rule composition. 
The dependency treelet model developed by Quirk and Menezes (2006) takes such an approach within the framework of dependency translation. However, their study leaves unanswered whether a rule Markov model can take the place of composed rules. In this work, we investigate the use of rule Markov models in the context of tree856 to-string translation (Liu et al., 2006; Huang et al., 2006). We make three new contributions. First, we carry out a detailed comparison of rule Markov models with composed rules. Our experiments show that, using trigram rule Markov models, we achieve an improvement of 2.2 Bleuover a baseline of minimal rules. When we compare against vertically composed rules, we find that our rule Markov model has the same accuracy, but our model is much smaller and decoding with our model is 30% faster. When we compare against full composed rules, we find that our rule Markov model still often reaches the same level of accuracy, again with savings in space and time. Second, we investigate methods for pruning rule Markov models, finding that even very simple pruning criteria actually improve the accuracy of the model, while of course decreasing its size. Third, we present a very fast decoder for tree-tostring grammars with rule Markov models. Huang and Mi (2010) have recently introduced an efficient incremental decoding algorithm for tree-to-string translation, which operates top-down and maintains a derivation history of translation rules encountered. This history is exactly the vertical chain of ancestors corresponding to the contexts in our rule Markov model, which makes it an ideal decoder for our model. We start by describing our rule Markov model (Section 2) and then how to decode using the rule Markov model (Section 3). 2 Rule Markov models Our model which conditions the generation of a rule on the vertical chain of its ancestors, which allows it to capture interactions between rules. Consider the example Chinese-English tree-tostring grammar in Figure 1 and the example derivation in Figure 2. Each row is a derivation step; the tree on the left is the derivation tree (in which each node is a rule and its children are the rules that substitute into it) and the tree pair on the right is the source and target derived tree. For any derivation node r, let anc1(r) be the parent of r (or ǫ if it has no parent), anc2(r) be the grandparent of node r (or ǫ if it has no grandparent), and so on. Let ancn 1(r) be the chain of ancestors anc1(r) · · · ancn(r). The derivation tree is generated as follows. With probability P(r1 | ǫ), we generate the rule at the root node, r1. We then generate rule r2 with probability P(r2 | r1), and so on, always taking the leftmost open substitution site on the English derived tree, and generating a rule ri conditioned on its chain of ancestors with probability P(ri | ancn 1(ri)). We carry on until no more children can be generated. Thus the probability of a derivation tree T is P(T) = Y r∈T P(r | ancn 1(r)) (1) For the minimal rule derivation tree in Figure 2, the probability is: P(T) = P(r1 | ǫ) · P(r2 | r1) · P(r3 | r1) · P(r4 | r1, r3) · P(r6 | r1, r3, r4) · P(r7 | r1, r3, r4) · P(r5 | r1, r3) (2) Training We run the algorithm of Galley et al. (2004) on word-aligned parallel text to obtain a single derivation of minimal rules for each sentence pair. (Unaligned words are handled by attaching them to the highest node possible in the parse tree.) The rule Markov model can then be trained on the path set of these derivation trees. 
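To make this concrete, here is a minimal sketch (not the authors' code; DerivationNode, rule_prob and the helper names are hypothetical) of how the path set of vertical rule n-grams can be collected from minimal-rule derivation trees for training, and how a derivation's probability factors over ancestor chains as in Eq. 1. Here n is the n-gram order, so a trigram model keeps the two nearest ancestors as context.

```python
import math
from collections import Counter

class DerivationNode:
    """A node in a derivation tree: a rule id plus the rules substituted into it."""
    def __init__(self, rule, children=()):
        self.rule = rule
        self.children = list(children)

def extract_vertical_ngrams(node, n, ancestors=(), counts=None):
    """Collect counts of (ancestor context, rule) pairs along root-to-node paths."""
    if counts is None:
        counts = Counter()
    context = ancestors[-(n - 1):] if n > 1 else ()
    counts[(context, node.rule)] += 1
    for child in node.children:
        extract_vertical_ngrams(child, n, ancestors + (node.rule,), counts)
    return counts

def derivation_log_prob(node, rule_prob, n, ancestors=()):
    """log P(T): every node contributes log P(r | its chain of ancestors), as in Eq. 1."""
    context = ancestors[-(n - 1):] if n > 1 else ()
    lp = math.log(rule_prob(node.rule, context))
    for child in node.children:
        lp += derivation_log_prob(child, rule_prob, n, ancestors + (node.rule,))
    return lp
```

For the derivation in Figure 2, for instance, a trigram (n = 3) model would score r4 against the context (r1, r3) and r6 against (r3, r4), matching the conditioning used later in Figure 4.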
Smoothing We use interpolation with absolute discounting (Ney et al., 1994): Pabs(r | ancn 1(r)) = max n c(r | ancn 1(r)) −Dn, 0 o P r′ c(r′ | ancn 1(r′)) + (1 −λn)Pabs(r | ancn−1 1 (r)), (3) where c(r | ancn 1(r)) is the number of times we have seen rule r after the vertical context ancn 1(r), Dn is the discount for a context of length n, and (1 −λn) is set to the value that makes the smoothed probability distribution sum to one. We experiment with bigram and trigram rule Markov models. For each, we try different values of D1 and D2, the discount for bigrams and trigrams, respectively. Ney et al. (1994) suggest using the following value for the discount Dn: Dn = n1 n1 + n2 (4) 857 rule id translation rule r1 IP(x1:NP x2:VP) →x1 x2 r2 NP(B`ush´ı) →Bush r3 VP(x1:PP x2:VP) →x2 x1 r4 PP(x1:P x2:NP) →x1 x2 r5 VP(VV(jˇux´ıng) AS(le) NPB(hu`ıt´an)) →held talks r6 P(yˇu) →with r′ 6 P(yˇu) →and r7 NP(Sh¯al´ong) →Sharon Figure 1: Example tree-to-string grammar. derivation tree derived tree pair ǫ IP@ǫ : IP@ǫ r1 IP@ǫ NP@1 VP@2 IP@ǫ NP@1 VP@2 : NP@1 VP@2 r1 r2 r3 IP@ǫ NP@1 B`ush´ı VP@2 [email protected] [email protected] IP@ǫ NP@1 B`ush´ı VP@2 [email protected] [email protected] : Bush [email protected] [email protected] r1 r2 r3 r4 r5 IP@ǫ NP@1 B`ush´ı VP@2 [email protected] [email protected] [email protected] [email protected] VV jˇux´ıng AS le NP hu`ıt´an IP@ǫ NP@1 B`ush´ı VP@2 [email protected] [email protected] [email protected] [email protected] VV jˇux´ıng AS le NP hu`ıt´an : Bush held talks [email protected] [email protected] r1 r2 r3 r4 r6 r7 r5 IP@ǫ NP@1 B`ush´ı VP@2 [email protected] [email protected] yˇu [email protected] Sh¯al´ong [email protected] VV jˇux´ıng AS le NP hu`ıt´an IP@ǫ NP@1 B`ush´ı VP@2 [email protected] [email protected] yˇu [email protected] Sh¯al´ong [email protected] VV jˇux´ıng AS le NP hu`ıt´an : Bush held talks with Sharon Figure 2: Example tree-to-string derivation. Each row shows a rewriting step; at each step, the leftmost nonterminal symbol is rewritten using one of the rules in Figure 1. 858 Here, n1 and n2 are the total number of n-grams with exactly one and two counts, respectively. For our corpus, D1 = 0.871 and D2 = 0.902. Additionally, we experiment with 0.4 and 0.5 for Dn. Pruning In addition to full n-gram Markov models, we experiment with three approaches to build smaller models to investigate if pruning helps. Our results will show that smaller models indeed give a higher Bleuscore than the full bigram and trigram models. The approaches we use are: • RM-A: We keep only those contexts in which more than P unique rules were observed. By optimizing on the development set, we set P = 12. • RM-B: We keep only those contexts that were observed more than P times. Note that this is a superset of RM-A. Again, by optimizing on the development set, we set P = 12. • RM-C: We try a more principled approach for learning variable-length Markov models inspired by that of Bejerano and Yona (1999), who learn a Prediction Suffix Tree (PST). They grow the PST in an iterative manner by starting from the root node (no context), and then add contexts to the tree. A context is added if the KL divergence between its predictive distribution and that of its parent is above a certain threshold and the probability of observing the context is above another threshold. 3 Tree-to-string decoding with rule Markov models In this paper, we use our rule Markov model framework in the context of tree-to-string translation. 
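As a concrete reading of the smoothing scheme in Eq. 3 of Section 2 above, here is a minimal sketch, not the authors' implementation: it recursively backs off from a vertical context to the next shorter one by dropping the most distant ancestor, reserving for the backoff exactly the mass freed by discounting. The data structures, the discount for the empty context, and terminating the recursion at a uniform base distribution are assumptions of this sketch.

```python
def p_abs(rule, context, counts, discounts, p_base):
    """Interpolated absolute discounting over vertical rule contexts (cf. Eq. 3).

    counts[context] maps each rule to c(r | context); discounts[k] is the
    discount D_k for a context of length k; p_base is a uniform base probability.
    """
    # Back off by dropping the most distant ancestor; the empty context falls
    # back to the uniform base distribution (an assumption of this sketch).
    backoff = (p_abs(rule, context[1:], counts, discounts, p_base)
               if context else p_base)
    ctx_counts = counts.get(context, {})
    total = sum(ctx_counts.values())
    if total == 0:
        return backoff
    D = discounts[len(context)]
    discounted = max(ctx_counts.get(rule, 0) - D, 0.0) / total
    reserved = D * len(ctx_counts) / total   # (1 - lambda_n): the mass freed by discounting
    return discounted + reserved * backoff
```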
Tree-to-string translation systems (Liu et al., 2006; Huang et al., 2006) have gained popularity in recent years due to their speed and simplicity. The input to the translation system is a source parse tree and the output is the target string. Huang and Mi (2010) have recently introduced an efficient incremental decoding algorithm for tree-to-string translation. The decoder operates top-down and maintains a derivation history of translation rules encountered. The history is exactly the vertical chain of ancestors corresponding to the contexts in our rule Markov model. This IP@ǫ NP@1 B`ush´ı VP@2 [email protected] [email protected] yˇu [email protected] Sh¯al´ong [email protected] [email protected] jˇux´ıng [email protected] le [email protected] hu`ıt´an Figure 3: Example input parse tree with tree addresses. makes incremental decoding a natural fit with our generative story. In this section, we describe how to integrate our rule Markov model into this incremental decoding algorithm. Note that it is also possible to integrate our rule Markov model with other decoding algorithms, for example, the more common non-incremental top-down/bottom-up approach (Huang et al., 2006), but it would involve a non-trivial change to the decoding algorithms to keep track of the vertical derivation history, which would result in significant overhead. Algorithm Given the input parse tree in Figure 3, Figure 4 illustrates the search process of the incremental decoder with the grammar of Figure 1. We write X@η for a tree node with label X at tree address η (Shieber et al., 1995). The root node has address ǫ, and the ith child of node η has address η.i. At each step, the decoder maintains a stack of active rules, which are rules that have not been completed yet, and the rightmost (n −1) English words translated thus far (the hypothesis), where n is the order of the word language model (in Figure 4, n = 2). The stack together with the translated English words comprise a state of the decoder. The last column in the figure shows the rule Markov model probabilities with the conditioning context. In this example, we use a trigram rule Markov model. After initialization, the process starts at step 1, where we predict rule r1 (the shaded rule) with probability P(r1 | ǫ) and push its English side onto the stack, with variables replaced by the corresponding tree nodes: x1 becomes NP@1 and x2 becomes VP@2. This gives us the following stack: s = [ NP@1 VP@2] The dot () indicates the next symbol to process in 859 stack hyp. MR prob. 0 [<s> IP@ǫ </s>] <s> 1 [<s> IP@ǫ </s>] [ NP@1 VP@2] <s> P(r1 | ǫ) 2 [<s> IP@ǫ </s>] [ NP@1 VP@2] [ Bush] <s> P(r2 | r1) 3 [<s> IP@ǫ </s>] [ NP@1 VP@2] [Bush ] . . . Bush 4 [<s> IP@ǫ </s>] [NP@1 VP@2] . . . Bush 5 [<s> IP@ǫ </s>] [NP@1 VP@2] [ [email protected] [email protected]] . . . Bush P(r3 | r1) 6 [<s> IP@ǫ </s>] [NP@1 VP@2] [ [email protected] [email protected]] [ held talks] . . . Bush P(r5 | r1, r3) 7 [<s> IP@ǫ </s>] [NP@1 VP@2] [ [email protected] [email protected]] [ held talks] . . . held 8 [<s> IP@ǫ </s>] [NP@1 VP@2] [ [email protected] [email protected]] [ held talks ] . . . talks 9 [<s> IP@ǫ </s>] [NP@1 VP@2] [[email protected] [email protected]] . . . talks 10 [<s> IP@ǫ </s>] [NP@1 VP@2] [[email protected] [email protected]] [ [email protected] [email protected]] . . . talks P(r4 | r1, r3) 11 [<s> IP@ǫ </s>] [NP@1 VP@2] [[email protected] [email protected]] [ [email protected] [email protected]] [ with] . . . 
with P(r6 | r3, r4) 12 [<s> IP@ǫ </s>] [NP@1 VP@2] [[email protected] [email protected]] [ [email protected] [email protected]] [with ] . . . with 13 [<s> IP@ǫ </s>] [NP@1 VP@2] [[email protected] [email protected]] [[email protected] [email protected]] . . . with 14 [<s> IP@ǫ </s>] [NP@1 VP@2] [[email protected] [email protected]] [[email protected] [email protected]] [ Sharon] . . . with P(r7 | r3, r4) 11′ [<s> IP@ǫ </s>] [NP@1 VP@2] [[email protected] [email protected]] [ [email protected] [email protected]] [ and] . . . and P(r′ 6 | r3, r4) 12′ [<s> IP@ǫ </s>] [NP@1 VP@2] [[email protected] [email protected]] [ [email protected] [email protected]] [and ] . . . and 13′ [<s> IP@ǫ </s>] [NP@1 VP@2] [[email protected] [email protected]] [[email protected] [email protected]] . . . and 14′ [<s> IP@ǫ </s>] [NP@1 VP@2] [[email protected] [email protected]] [[email protected] [email protected]] [ Sharon] . . . and P(r7 | r3, r4) 15 [<s> IP@ǫ </s>] [NP@1 VP@2] [[email protected] [email protected]] [[email protected] [email protected]] [Sharon ] . . . Sharon 16 [<s> IP@ǫ </s>] [NP@1 VP@2] [[email protected] [email protected]] [[email protected] [email protected] ] . . . Sharon 17 [<s> IP@ǫ </s>] [NP@1 VP@2] [[email protected] [email protected] ] . . . Sharon 18 [<s> IP@ǫ </s>] [NP@1 VP@2 ] . . . Sharon 19 [<s> IP@ǫ </s>] . . . Sharon 20 [<s> IP@ǫ </s> ] . . . </s> Figure 4: Simulation of incremental decoding with rule Markov model. The solid arrows indicate one path and the dashed arrows indicate an alternate path. 860 VP@2 [email protected] [email protected] [email protected] yˇu [email protected] VP@2 [email protected] [email protected] [email protected] yˇu [email protected] Figure 5: Vertical context r3 r4 which allows the model to correctly translate yˇu as with. the English word order. We expand node NP@1 first with English word order. We then predict lexical rule r2 with probability P(r2 | r1) and push rule r2 onto the stack: [ NP@1 VP@2 ] [ Bush] In step 3, we perform a scan operation, in which we append the English word just after the dot to the current hypothesis and move the dot after the word. Since the dot is at the end of the top rule in the stack, we perform a complete operation in step 4 where we pop the finished rule at the top of the stack. In the scan and complete steps, we don’t need to compute rule probabilities. An interesting branch occurs after step 10 with two competing lexical rules, r6 and r′ 6. The Chinese word yˇu can be translated as either a preposition with (leading to step 11) or a conjunction and (leading to step 11′). The word n-gram model does not have enough information to make the correct choice, with. As a result, good translations might be pruned because of the beam. However, our rule Markov model has the correct preference because of the conditioning ancestral sequence (r3, r4), shown in Figure 5. Since [email protected] has a preference for yˇu translating to with, our corpus statistics will give a higher probability to P(r6 | r3, r4) than P(r′ 6 | r3, r4). This helps the decoder to score the correct translation higher. Complexity analysis With the incremental decoding algorithm, adding rule Markov models does not change the time complexity, which is O(nc|V|g−1), where n is the sentence length, c is the maximum number of incoming hyperedges for each node in the translation forest, V is the target-language vocabulary, and g is the order of the n-gram language model (Huang and Mi, 2010). 
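To make the interaction between the incremental decoder and the rule Markov model concrete, here is a heavily simplified sketch of the predict, scan and complete operations described above. The class and attribute names (DecoderState, rule.rule_id, rule.target_side, rule_markov_prob) are hypothetical, and beam search, the word language model and the control loop that decides which operation applies are all omitted. The key point is that the derivation history is pushed and popped in lockstep with the stack, so its last n-1 entries are exactly the vertical ancestor context needed to score a newly predicted rule.

```python
class DecoderState:
    """A partial hypothesis: a stack of dotted rules plus the derivation history."""
    def __init__(self):
        self.stack = []     # entries are [symbols, dot]; symbols mix words and tree nodes
        self.history = []   # rules of the currently active (ancestor) stack entries
        self.output = []    # target words emitted so far

def predict(state, rule, rule_markov_prob, n):
    """Expand the tree node after the dot with `rule`, scored against its ancestors."""
    context = tuple(state.history[-(n - 1):]) if n > 1 else ()
    score = rule_markov_prob(rule.rule_id, context)
    state.stack.append([list(rule.target_side), 0])
    state.history.append(rule.rule_id)
    return score

def scan(state):
    """Append the terminal just after the dot to the hypothesis and advance the dot."""
    symbols, dot = state.stack[-1]
    state.output.append(symbols[dot])
    state.stack[-1][1] = dot + 1

def complete(state):
    """Pop a finished rule and advance the dot of its parent past the expanded node."""
    state.stack.pop()
    state.history.pop()
    if state.stack:
        state.stack[-1][1] += 1
```

In the example of Figure 4, this bookkeeping yields the context (r3, r4) when r6 or r′6 is predicted, which is what allows the model to prefer the translation with.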
However, if one were to use rule Markov models with a conventional CKY-style bottom-up decoder (Liu et al., 2006), the complexity would increase to O(nCm−1|V|4(g−1)), where C is the maximum number of outgoing hyperedges for each node in the translation forest, and m is the order of the rule Markov model. 4 Experiments and results 4.1 Setup The training corpus consists of 1.5M sentence pairs with 38M/32M words of Chinese/English, respectively. Our development set is the newswire portion of the 2006 NIST MT Evaluation test set (616 sentences), and our test set is the newswire portion of the 2008 NIST MT Evaluation test set (691 sentences). We word-aligned the training data using GIZA++ followed by link deletion (Fossum et al., 2008), and then parsed the Chinese sentences using the Berkeley parser (Petrov and Klein, 2007). To extract tree-to-string translation rules, we applied the algorithm of Galley et al. (2004). We trained our rule Markov model on derivations of minimal rules as described above. Our trigram word language model was trained on the target side of the training corpus using the SRILM toolkit (Stolcke, 2002) with modified Kneser-Ney smoothing. The base feature set for all systems is similar to the set used in Mi et al. (2008). The features are combined into a standard log-linear model, which we trained using minimum error-rate training (Och, 2003) to maximize the Bleu score on the development set. At decoding time, we again parse the input sentences using the Berkeley parser, and convert them into translation forests using rule patternmatching (Mi et al., 2008). We evaluate translation quality using case-insensitive IBM Bleu-4, calculated by the script mteval-v13a.pl. 4.2 Results Table 1 presents the main results of our paper. We used grammars of minimal rules and composed rules of maximum height 3 as our baselines. For decoding, we used a beam size of 50. Using the best bigram rule Markov models and the minimal rule grammar gives us an improvement of 1.5 Bleuover the minimal rule baseline. Using the best trigram rule Markov model brings our gain up to 2.3 Bleu. 861 grammar rule Markov max parameters (×106) Bleu time model rule height full dev+test test (sec/sent) minimal None 3 4.9 0.3 24.2 1.2 RM-B bigram 3 4.9+4.7 0.3+0.5 25.7 1.8 RM-A trigram 3 4.9+7.6 0.3+0.6 26.5 2.0 vertical composed None 7 176.8 1.3 26.5 2.9 composed None 3 17.5 1.6 26.4 2.2 None 7 448.7 3.3 27.5 6.8 RM-A trigram 7 448.7+7.6 3.3+1.0 28.0 9.2 Table 1: Main results. Our trigram rule Markov model strongly outperforms minimal rules, and performs at the same level as composed and vertically composed rules, but is smaller and faster. The number of parameters is shown for both the full model and the model filtered for the concatenation of the development and test sets (dev+test). These gains are statistically significant with p < 0.01, using bootstrap resampling with 1000 samples (Koehn, 2004). We find that by just using bigram context, we are able to get at least 1 Bleupoint higher than the minimal rule grammar. It is interesting to see that using just bigram rule interactions can give us a reasonable boost. We get our highest gains from using trigram context where our best performing rule Markov model gives us 2.3 Bleupoints over minimal rules. This suggests that using longer contexts helps the decoder to find better translations. We also compared rule Markov models against composed rules. 
Since our models are currently limited to conditioning on vertical context, the closest comparison is against vertically composed rules. We find that our approach performs equally well using much less time and space. Comparing against full composed rules, we find that our system matches the score of the baseline composed rule grammar of maximum height 3, while using many fewer parameters. (It should be noted that a parameter in the rule Markov model is just a floating-point number, whereas a parameter in the composed-rule system is an entire rule; therefore the difference in memory usage would be even greater.) Decoding with our model is 0.2 seconds faster per sentence than with composed rules. These experiments clearly show that rule Markov models with minimal rules increase translation quality significantly and with lower memory requirements than composed rules. One might wonder if the best performance can be obtained by combining composed rules with a rule Markov model. This rule Markov D1 Bleu time model dev (sec/sent) RM-A 0.871 29.2 1.8 RM-B 0.4 29.9 1.8 RM-C 0.871 29.8 1.8 RM-Full 0.4 29.7 1.9 Table 2: For rule bigrams, RM-B with D1 = 0.4 gives the best results on the development set. rule Markov D1 D2 Bleu time model dev (sec/sent) RM-A 0.5 0.5 30.3 2.0 RM-B 0.5 0.5 29.9 2.0 RM-C 0.5 0.5 30.1 2.0 RM-Full 0.4 0.5 30.1 2.2 Table 3: For rule bigrams, RM-A with D1, D2 = 0.5 gives the best results on the development set. is straightforward to implement: the rule Markov model is still defined over derivations of minimal rules, but in the decoder’s prediction step, the rule Markov model’s value on a composed rule is calculated by decomposing it into minimal rules and computing the product of their probabilities. We find that using our best trigram rule Markov model with composed rules gives us a 0.5 Bleugain on top of the composed rule grammar, statistically significant with p < 0.05, achieving our highest score of 28.0.1 4.3 Analysis Tables 2 and 3 show how the various types of rule Markov models compare, for bigrams and trigrams, 1For this experiment, a beam size of 100 was used. 862 parameters (×106) Bleudev/test time (sec/sent) dev/test without RMM with RMM without/with RMM 2.6 31.0/27.0 31.1/27.4 4.5/7.0 2.9 31.5/27.7 31.4/27.3 5.6/8.1 3.3 31.4/27.5 31.4/28.0 6.8/9.2 Table 6: Adding rule Markov models to composed-rule grammars improves their translation performance. D2 D1 0.4 0.5 0.871 0.4 30.0 30.0 0.5 29.3 30.3 0.902 30.0 Table 4: RM-A is robust to different settings of Dn on the development set. parameters (×106) Bleu time dev+test dev test (sec/sent) 1.2 30.2 26.1 2.8 1.3 30.1 26.5 2.9 1.3 30.1 26.2 3.2 Table 5: Comparison of vertically composed rules using various settings (maximum rule height 7). respectively. It is interesting that the full bigram and trigram rule Markov models do not give our highest Bleuscores; pruning the models not only saves space but improves their performance. We think that this is probably due to overfitting. Table 4 shows that the RM-A trigram model does fairly well under all the settings of Dn we tried. Table 5 shows the performance of vertically composed rules at various settings. Here we have chosen the setting that gives the best performance on the test set for inclusion in Table 1. Table 6 shows the performance of fully composed rules and fully composed rules with a rule Markov Model at various settings.2 In the second line (2.9 million rules), the drop in Bleuscore resulting from adding the rule Markov model is not statistically significant. 
5 Related Work Besides the Quirk and Menezes (2006) work discussed in Section 1, there are two other previous 2For these experiments, a beam size of 100 was used. efforts both using a rule bigram model in machine translation, that is, the probability of the current rule only depends on the immediate previous rule in the vertical context, whereas our rule Markov model can condition on longer and sparser derivation histories. Among them, Ding and Palmer (2005) also use a dependency treelet model similar to Quirk and Menezes (2006), and Liu and Gildea (2008) use a tree-to-string model more like ours. Neither compared to the scenario with composed rules. Outside of machine translation, the idea of weakening independence assumptions by modeling the derivation history is also found in parsing (Johnson, 1998), where rule probabilities are conditioned on parent and grand-parent nonterminals. However, besides the difference between parsing and translation, there are still two major differences. First, our work conditions rule probabilities on parent and grandparent rules, not just nonterminals. Second, we compare against a composed-rule system, which is analogous to the Data Oriented Parsing (DOP) approach in parsing (Bod, 2003). To our knowledge, there has been no direct comparison between a history-based PCFG approach and DOP approach in the parsing literature. 6 Conclusion In this paper, we have investigated whether we can eliminate composed rules without any loss in translation quality. We have developed a rule Markov model that captures vertical bigrams and trigrams of minimal rules, and tested it in the framework of treeto-string translation. We draw three main conclusions from our experiments. First, our rule Markov models dramatically improve a grammar of minimal rules, giving an improvement of 2.3 Bleu. Second, when we compare against vertically composed rules we are able to get about the same Bleuscore, but our model is much smaller and decoding with our 863 model is faster. Finally, when we compare against full composed rules, we find that we can reach the same level of performance under some conditions, but in order to do so consistently, we believe we need to extend our model to condition on horizontal context in addition to vertical context. We hope that by modeling context in both axes, we will be able to completely replace composed-rule grammars with smaller minimal-rule grammars. Acknowledgments We would like to thank Fernando Pereira, Yoav Goldberg, Michael Pust, Steve DeNeefe, Daniel Marcu and Kevin Knight for their comments. Mi’s contribution was made while he was visiting USC/ISI. This work was supported in part by DARPA under contracts HR0011-06-C-0022 (subcontract to BBN Technologies), HR0011-09-10028, and DOI-NBC N10AP20031, by a Google Faculty Research Award to Huang, and by the National Natural Science Foundation of China under contracts 60736014 and 90920004. References Gill Bejerano and Golan Yona. 1999. Modeling protein families using probabilistic suffix trees. In Proc. RECOMB, pages 15–24. ACM Press. Rens Bod. 2003. An efficient implementation of a new DOP model. In Proceedings of EACL, pages 19–26. Yuan Ding and Martha Palmer. 2005. Machine translation using probablisitic synchronous dependency insertion grammars. In Proceedings of ACL, pages 541– 548. Victoria Fossum, Kevin Knight, and Steve Abney. 2008. Using syntax to improve word alignment precision for syntax-based machine translation. In Proceedings of the Workshop on Statistical Machine Translation. 
Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a translation rule? In Proceedings of HLT-NAACL, pages 273–280. Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proceedings of COLING-ACL, pages 961–968. Liang Huang and Haitao Mi. 2010. Efficient incremental decoding for tree-to-string translation. In Proceedings of EMNLP, pages 273–283. Liang Huang, Kevin Knight, and Aravind Joshi. 2006. Statistical syntax-directed translation with extended domain of locality. In Proceedings of AMTA, pages 66–73. Mark Johnson. 1998. PCFG models of linguistic tree representations. Computational Linguistics, 24:613– 632. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of EMNLP, pages 388–395. Ding Liu and Daniel Gildea. 2008. Improved tree-tostring transducer for machine translation. In Proceedings of the Workshop on Statistical Machine Translation, pages 62–69. Yang Liu, Qun Liu, and Shouxun Lin. 2006. Tree-tostring alignment template for statistical machine translation. In Proceedings of COLING-ACL, pages 609– 616. Haitao Mi, Liang Huang, and Qun Liu. 2008. Forestbased translation. In Proceedings of ACL: HLT, pages 192–199. H. Ney, U. Essen, and R. Kneser. 1994. On structuring probabilistic dependencies in stochastic language modelling. Computer Speech and Language, 8:1–38. Franz Joseph Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of ACL, pages 160–167. Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Proceedings of HLTNAACL, pages 404–411. Chris Quirk and Arul Menezes. 2006. Do we need phrases? Challenging the conventional wisdom in statistical machine translation. In Proceedings of NAACL HLT, pages 9–16. Stuart Shieber, Yves Schabes, and Fernando Pereira. 1995. Principles and implementation of deductive parsing. Journal of Logic Programming, 24:3–36. Andreas Stolcke. 2002. SRILM – an extensible language modeling toolkit. In Proceedings of ICSLP, volume 30, pages 901–904. 864
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 865–874, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics A Hierarchical Pitman-Yor Process HMM for Unsupervised Part of Speech Induction Phil Blunsom Department of Computer Science University of Oxford [email protected] Trevor Cohn Department of Computer Science University of Sheffield [email protected] Abstract In this work we address the problem of unsupervised part-of-speech induction by bringing together several strands of research into a single model. We develop a novel hidden Markov model incorporating sophisticated smoothing using a hierarchical Pitman-Yor processes prior, providing an elegant and principled means of incorporating lexical characteristics. Central to our approach is a new type-based sampling algorithm for hierarchical Pitman-Yor models in which we track fractional table counts. In an empirical evaluation we show that our model consistently out-performs the current state-of-the-art across 10 languages. 1 Introduction Unsupervised part-of-speech (PoS) induction has long been a central challenge in computational linguistics, with applications in human language learning and for developing portable language processing systems. Despite considerable research effort, progress in fully unsupervised PoS induction has been slow and modern systems barely improve over the early Brown et al. (1992) approach (Christodoulopoulos et al., 2010). One popular means of improving tagging performance is to include supervision in the form of a tag dictionary or similar, however this limits portability and also comprimises any cognitive conclusions. In this paper we present a novel approach to fully unsupervised PoS induction which uniformly outperforms the existing state-of-the-art across all our corpora in 10 different languages. Moreover, the performance of our unsupervised model approaches that of many existing semi-supervised systems, despite our method not receiving any human input. In this paper we present a Bayesian hidden Markov model (HMM) which uses a non-parametric prior to infer a latent tagging for a sequence of words. HMMs have been popular for unsupervised PoS induction from its very beginnings (Brown et al., 1992), and justifiably so, as the most discriminating feature for deciding a word’s PoS is its local syntactic context. Our work brings together several strands of research including Bayesian non-parametric HMMs (Goldwater and Griffiths, 2007), Pitman-Yor language models (Teh, 2006b; Goldwater et al., 2006b), tagging constraints over word types (Brown et al., 1992) and the incorporation of morphological features (Clark, 2003). The result is a non-parametric Bayesian HMM which avoids overfitting, contains no free parameters, and exhibits good scaling properties. Our model uses a hierarchical Pitman-Yor process (PYP) prior to affect sophisicated smoothing over the transition and emission distributions. This allows the modelling of sub-word structure, thereby capturing tag-specific morphological variation. Unlike many existing approaches, our model is a principled generative model and does not include any hand tuned language specific features. Inspired by previous successful approaches (Brown et al., 1992), we develop a new typelevel inference procedure in the form of an MCMC sampler with an approximate method for incorporating the complex dependencies that arise between jointly sampled events. 
Our experimental evaluation demonstrates that our model, particularly when restricted to a single tag per type, produces 865 state-of-the-art results across a range of corpora and languages. 2 Background Past research in unsupervised PoS induction has largely been driven by two different motivations: a task based perspective which has focussed on inducing word classes to improve various applications, and a linguistic perspective where the aim is to induce classes which correspond closely to annotated part-of-speech corpora. Early work was firmly situtated in the task-based setting of improving generalisation in language models. Brown et al. (1992) presented a simple first-order HMM which restricted word types to always be generated from the same class. Though PoS induction was not their aim, this restriction is largely validated by empirical analysis of treebanked data, and moreover conveys the significant advantage that all the tags for a given word type can be updated at the same time, allowing very efficient inference using the exchange algorithm. This model has been popular for language modelling and bilingual word alignment, and an implementation with improved inference called mkcls (Och, 1999)1 has become a standard part of statistical machine translation systems. The HMM ignores orthographic information, which is often highly indicative of a word’s partof-speech, particularly so in morphologically rich languages. For this reason Clark (2003) extended Brown et al. (1992)’s HMM by incorporating a character language model, allowing the modelling of limited morphology. Our work draws from these models, in that we develop a HMM with a one class per tag restriction and include a character level language model. In contrast to these previous works which use the maximum likelihood estimate, we develop a Bayesian model with a rich prior for smoothing the parameter estimates, allowing us to move to a trigram model. A number of researchers have investigated a semisupervised PoS induction task in which a tag dictionary or similar data is supplied a priori (Smith and Eisner, 2005; Haghighi and Klein, 2006; Goldwater and Griffiths, 2007; Toutanova and Johnson, 2008; Ravi and Knight, 2009). These systems achieve 1Available from http://fjoch.com/mkcls.html. much higher accuracy than fully unsupervised systems, though it is unclear whether the tag dictionary assumption has real world application. We focus solely on the fully unsupervised scenario, which we believe is more practical for text processing in new languages and domains. Recent work on unsupervised PoS induction has focussed on encouraging sparsity in the emission distributions in order to match empirical distributions derived from treebank data (Goldwater and Griffiths, 2007; Johnson, 2007; Gao and Johnson, 2008). These authors took a Bayesian approach using a Dirichlet prior to encourage sparse distributions over the word types emitted from each tag. Conversely, Ganchev et al. (2010) developed a technique to optimize the more desirable reverse property of the word types having a sparse posterior distribution over tags. Recently Lee et al. (2010) combined the one class per word type constraint (Brown et al., 1992) in a HMM with a Dirichlet prior to achieve both forms of sparsity. However this work approximated the derivation of the Gibbs sampler (omitting the interdependence between events when sampling from a collapsed model), resulting in a model which underperformed Brown et al. (1992)’s one-class HMM. 
Our work also seeks to enforce both forms of sparsity, by developing an algorithm for type-level inference under the one class constraint. This work differs from previous Bayesian models in that we explicitly model a complex backoff path using a hierachical prior, such that our model jointly infers distributions over tag trigrams, bigrams and unigrams and whole words and their character level representation. This smoothing is critical to ensure adequate generalisation from small data samples. Research in language modelling (Teh, 2006b; Goldwater et al., 2006a) and parsing (Cohn et al., 2010) has shown that models employing Pitman-Yor priors can significantly outperform the more frequently used Dirichlet priors, especially where complex hierarchical relationships exist between latent variables. In this work we apply these advances to unsupervised PoS tagging, developing a HMM smoothed using a Pitman-Yor process prior. 866 3 The PYP-HMM We develop a trigram hidden Markov model which models the joint probability of a sequence of latent tags, t, and words, w, as Pθ(t, w) = L+1 Y l=1 Pθ(tl|tl−1, tl−2)Pθ(wl|tl) , where L = |w| = |t| and t0 = t−1 = tL+1 = $ are assigned a sentinel value to denote the start or end of the sentence. A key decision in formulating such a model is the smoothing of the tag trigram and emission distributions, which would otherwise be too difficult to estimate from small datasets. Prior work in unsupervised PoS induction has employed simple smoothing techniques, such as additive smoothing or Dirichlet priors (Goldwater and Griffiths, 2007; Johnson, 2007), however this body of work has overlooked recent advances in smoothing methods used for language modelling (Teh, 2006b; Goldwater et al., 2006b). Here we build upon previous work by developing a PoS induction model smoothed with a sophisticated non-parametric prior. Our model uses a hierarchical Pitman-Yor process prior for both the transition and emission distributions, encoding a backoff path from complex distributions to successsively simpler ones. The use of complex distributions (e.g., over tag trigrams) allows for rich expressivity when sufficient evidence is available, while the hierarchy affords a means of backing off to simpler and more easily estimated distributions otherwise. The PYP has been shown to generate distributions particularly well suited to modelling language (Teh, 2006a; Goldwater et al., 2006b), and has been shown to be a generalisation of Kneser-Ney smoothing, widely recognised as the best smoothing method for language modelling (Chen and Goodman, 1996). The model is depicted in the plate diagram in Figure 1. At its centre is a standard trigram HMM, which generates a sequence of tags and words, tl|tl−1, tl−2, T ∼Ttl−1,tl−2 wl|tl, E ∼Etl . U Bj Tij Ej Cjk w1 t1 w2 t2 w3 t3 ... Dj Figure 1: Plate diagram representation of the trigram HMM. The indexes i and j range over the set of tags and k ranges over the set of characters. Hyper-parameters have been omitted from the figure for clarity. The trigram transition distribution, Tij, is drawn from a hierarchical PYP prior which backs off to a bigram Bj and then a unigram U distribution, Tij|aT , bT , Bj ∼PYP(aT , bT , Bj) Bj|aB, bB, U ∼PYP(aB, bB, U) U|aU, bU ∼PYP(aU, bU, Uniform) , where the prior over U has as its base distribition a uniform distribution over the set of tags, while the priors for Bj and Tij back off by discarding an item of context. 
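As a small concrete rendering of the factorisation at the top of this section (the probability lookups are placeholders for the smoothed estimates supplied by the hierarchical prior; this is a sketch, not the authors' implementation), the joint probability of a tagged sentence decomposes as follows, with sentinel tags padding the start of the sequence and a final transition into the end sentinel (the emission at position L+1 is omitted here).

```python
import math

def log_joint(tags, words, trans_prob, emit_prob, sentinel="$"):
    """log P(t, w) for the trigram HMM: each position contributes
    log P(t_l | t_{l-1}, t_{l-2}) + log P(w_l | t_l), plus a final
    transition into the end-of-sentence sentinel."""
    assert len(tags) == len(words)
    padded = [sentinel, sentinel] + list(tags) + [sentinel]
    lp = 0.0
    for l, word in enumerate(words):
        t_prev2, t_prev1, t = padded[l], padded[l + 1], padded[l + 2]
        lp += math.log(trans_prob(t, (t_prev2, t_prev1)))
        lp += math.log(emit_prob(word, t))
    L = len(words)
    lp += math.log(trans_prob(sentinel, (padded[L], padded[L + 1])))
    return lp
```

The hierarchical PYP prior described here is what supplies the trans_prob and emit_prob estimates in such a computation.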
This allows the modelling of trigram tag sequences, while smoothing these estimates with their corresponding bigram and unigram distributions. The degree of smoothing is regulated by the hyper-parameters a and b which are tied across each length of n-gram; these hyper-parameters are inferred during training, as described in 3.1. The tag-specific emission distributions, Ej, are also drawn from a PYP prior, Ej|aE, bE, C ∼PYP(aE, bE, Cj) . We consider two different settings for the base distribution Cj: 1) a simple uniform distribution over the vocabulary (denoted HMM for the experiments in section 4); and 2) a character-level language model (denoted HMM+LM). In many languages morphological regularities correlate strongly with a word’s part-of-speech (e.g., suffixes in English), which we hope to capture using a basic character language model. This model was inspired by Clark (2003) 867 The big dog 5 23 23 7 b r o w n Figure 2: The conditioning structure of the hierarchical PYP with an embedded character language models. who applied a character level distribution to the single class HMM (Brown et al., 1992). We formulate the character-level language model as a bigram model over the character sequence comprising word wl, wlk|wlk−1, tl, C ∼Ctlwlk−1 Cjk|aC, bC, Dj ∼PYP(aC, bC, Dj) Dj|aD, bD ∼PYP(aD, bD, Uniform) , where k indexes the characters in the word and, in a slight abuse of notation, the character itself, w0 and is set to a special sentinel value denoting the start of the sentence (ditto for a final end of sentence marker) and the uniform base distribution ranges over the set of characters. We expect that the HMM+LM model will outperform the uniform HMM as it can capture many consistent morphological affixes and thereby better distinguish between different parts-of-speech. The HMM+LM is shown in Figure 2, illustrating the decomposition of the tag sequence into n-grams and a word into its component character bigrams. 3.1 Training In order to induce a tagging under this model we use Gibbs sampling, a Markov chain Monte Carlo (MCMC) technique for drawing samples from the posterior distribution over the tag sequences given observed word sequences. We present two different sampling strategies: First, a simple Gibbs sampler which randomly samples an update to a single tag given all other tags; and second, a type-level sampler which updates all tags for a given word under a one-tag-per-word-type constraint. In order to extract a single tag sequence to test our model against the gold standard we find the tag at each site with maximum marginal probability in the sample set. Following standard practice, we perform inference using a collapsed sampler whereby the model parameters U, B, T, E and C are marginalised out. After marginalisation the posterior distribution under a PYP prior is described by a variant of the Chinese Restaurant Process (CRP). The CRP is based around the analogy of a restaurant with an infinite number of tables, with customers entering one at a time and seating themselves at a table. The choice of table is governed by P(zl = k|z−l) = n− k −a l−1+b 1 ≤k ≤K− K−a+b l−1+b k = K−+ 1 (1) where zl is the table chosen by the lth customer, z−l is the seating arrangement of the l −1 previous customers, n− k is the number of customers in z−l who are seated at table k, K−= K(z−l) is the total number of tables in z−l, and z1 = 1 by definition. The arrangement of customers at tables defines a clustering which exhibits a power-law behavior controlled by the hyperparameters a and b. 
To complete the restaurant analogy, a dish is then served to each table which is shared by all the customers seated there. This corresponds to a draw from the base distribution, which in our case ranges over tags for the transition distribution, and words for the observation distribution. Overall the PYP leads to a distribution of the form P T (tl = i|z−l, t−l) = 1 n− h + bT × (2) n− hi −K− hiaT + K− h aT + bT P B(i|z−l, t−l) , illustrating the trigram transition distribution, where t−l are all previous tags, h = (tl−2, tl−1) is the conditioning bigram, n− hi is the count of the trigram hi in t−l, n− h the total count over all trigrams beginning with h, K− hi the number of tables served dish i and P B(·) is the base distribution, in this case the bigram distribution. A hierarchy of PYPs can be formed by making the base distribution of a PYP another PYP, following a 868 semantics whereby whenever a customer sits at an empty table in a restaurant, a new customer is also said to enter the restaurant for its base distribution. That is, each table at one level is equivalent to a customer at the next deeper level, creating the invariants: K− hi = n− ui and K− ui = n− i , where u = tl−1 indicates the unigram backoff context of h. The recursion terminates at the lowest level where the base distribution is static. The hierarchical setting allows for the modelling of elaborate backoff paths from rich and complex structure to successively simpler structures. Gibbs samplers Both our Gibbs samplers perform the same calculation of conditional tag distributions, and involve first decrementing all trigrams and emissions affected by a sampling action, and then reintroducing the trigrams one at a time, conditioning their probabilities on the updated counts and table configurations as we progress. The first local Gibbs sampler (PYP-HMM) updates a single tag assignment at a time, in a similar fashion to Goldwater and Griffiths (2007). Changing one tag affects three trigrams, with posterior P(tl|z−l, t−l, w) ∝P(tl±2, wl|z−l±2, t−l±2) , where l±2 denotes the range l−2, l−1, l, l+1, l+2. The joint distribution over the three trigrams contained in tl±2 can be calculated using the PYP formulation. This calculation is complicated by the fact that these events are not independent; the counts of one trigram can affect the probability of later ones, and moreover, the table assignment for the trigram may also affect the bigram and unigram counts, of particular import when the same tag occurs twice in a row such as in Figure 2. Many HMMs used for inducing word classes for language modelling include the restriction that all occurrences of a word type always appear with the same class throughout the corpus (Brown et al., 1992; Och, 1999; Clark, 2003). Our second sampler (PYP-1HMM) restricts inference to taggings which adhere to this one tag per type restriction. This restriction permits efficient inference techniques in which all tags of all occurrences of a word type are updated in parallel. Similar techniques have been used for models with Dirichlet priors (Liang et al., 2010), though one must be careful to manage the dependencies between multiple draws from the posterior. The dependency on table counts in the conditional distributions complicates the process of drawing samples for both our models. In the non-hierarchical model (Goldwater and Griffiths, 2007) these dependencies can easily be accounted for by incrementing customer counts when such a dependence occurs. 
In our model we would need to sum over all possible table assignments that result in the same tagging, at all levels in the hierarchy: tag trigrams, bigrams and unigrams; and also words, character bigrams and character unigrams. To avoid this rather onerous marginalisation2 we instead use expected table counts to calculate the conditional distributions for sampling. Unfortunately we know of no efficient algorithm for calculating the expected table counts, so instead develop a novel approximation En+1 [Ki] ≈En [Ki] + (aUEn [K] + bU)P0(i) (n −En [Ki] bU) + (aUEn [K] + bU)P0(i) , (3) where Ki is the number of tables for the tag unigram i of which there are n + 1 occurrences, En [·] denotes an expectation after observing n items and En [K] = P j En [Kj]. This formulation defines a simple recurrence starting with the first customer seated at a table, E1 [Ki] = 1, and as each subsequent customer arrives we fractionally assign them to a new table based on their conditional probability of sitting alone. These fractional counts are then carried forward for subsequent customers. This approximation is tight for small n, and therefore it should be effective in the case of the local Gibbs sampler where only three trigrams are being resampled. For the type based resampling where large numbers of n are involved (consider resampling the), this approximation can deviate from the actual value due to errors accumulated in the recursion. Figure 3 illustrates a simulation demonstrating that the approximation is a close match for small a and n but underestimates the true value for high a 2Marginalisation is intractable in general, i.e. for the 1HMM where many sites are sampled jointly. 869 0 20 40 60 80 100 2 4 6 8 10 12 number of customers expected tables a=0.9 a=0.8 a=0.5 a=0.1 Figure 3: Simulation comparing the expected table count (solid lines) versus the approximation under Eq. 3 (dashed lines) for various values of a. This data was generated from a single PYP with b = 1, P0(i) = 1 4 and n = 100 customers which all share the same tag. and n. The approximation was much less sensitive to the choice of b (not shown). To resample a sequence of trigrams we start by removing their counts from the current restaurant configuration (resulting in z−). For each tag we simulate adding back the trigrams one at a time, calculating their probability under the given z−plus the fractional table counts accumulated by Equation 3. We then calculate the expected table count contribution from this trigram and add it to the accumulated counts. The fractional table count from the trigram then results in a fractional customer entering the bigram restaurant, and so on down to unigrams. At each level we must update the expected counts before moving on to the next trigram. After performing this process for all trigrams under consideration and for all tags, we then normalise the resulting tag probabilities and sample an outcome. Once a tag has been sampled, we then add all the trigrams to the restaurants sampling their tables assignments explicitly (which are no longer fractional), recorded in z. Because we do not marginalise out the table counts and our expectations are only approximate, this sampler will be biased. We leave to future work properly accounting for this bias, e.g., by devising a Metropolis Hastings acceptance test. Sampling hyperparameters We treat the hyper-parameters {(ax, bx) , x ∈(U, B, T, E, C)} as random variables in our model and infer their values. 
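The recurrence for the expected table counts is easy to state in code. The sketch below gives one reading of Eq. 3 above; the discounted weight for joining an existing table is written in the standard PYP form n_i - a*E[K_i], which is our assumption about the typeset equation, and p0 is the base probability of a dish. Each arriving customer is fractionally assigned to a new table with the probability that it would sit alone given the expectations accumulated so far.

```python
def expected_table_counts(customers, a, b, p0):
    """Approximate E[K_i] for a single PYP restaurant (cf. Eq. 3).

    customers: dish labels (e.g. tags) in arrival order;
    a, b: PYP discount and concentration; p0(dish): base probability.
    """
    exp_k = {}   # dish -> expected number of tables
    n = {}       # dish -> number of customers of this dish seen so far
    for dish in customers:
        k_i = exp_k.get(dish, 0.0)
        n_i = n.get(dish, 0)
        total_k = sum(exp_k.values())
        new_table = (a * total_k + b) * p0(dish)   # mass for opening a new table
        old_table = max(n_i - a * k_i, 0.0)        # mass for joining an existing table
        exp_k[dish] = k_i + new_table / (old_table + new_table)
        n[dish] = n_i + 1
    return exp_k
```

In the hierarchical setting, the fractional new-table mass computed here would then enter the backoff restaurant as a fractional customer, mirroring the seating scheme described above.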
We place prior distributions on the PYP discount ax and concentration bx hyperparamters and sample their values using a slice sampler. For the discount parameters we employ a uniform Beta distribution (ax ∼ Beta(1, 1)), and for the concentration parameters we use a vague gamma prior (bx ∼Gamma(10, 0.1)). All the hyper-parameters are resampled after every 5th sample of the corpus. The result of this hyperparameter inference is that there are no user tunable parameters in the model, an important feature that we believe helps explain its consistently high performance across test settings. 4 Experiments We perform experiments with a range of corpora to both investigate the properties of our proposed models and inference algorithms, as well as to establish their robustness across languages and domains. For our core English experiments we report results on the entire Penn. Treebank (Marcus et al., 1993), while for other languages we use the corpora made available for the CoNLL-X Shared Task (Buchholz and Marsi, 2006). We report results using the manyto-one (M-1) and v-measure (VM) metrics considered best by the evaluation of Christodoulopoulos et al. (2010). M-1 measures the accuracy of the model after mapping each predicted class to its most frequent corresponding tag, while VM is a variant of the F-measure which uses conditional entropy analogies of precision and recall. The log-posterior for the HMM sampler levels off after a few hundred samples, so we report results after five hundred. The 1HMM sampler converges more quickly so we use two hundred samples for these models. All reported results are the mean of three sampling runs. An important detail for any unsupervised learning algorithm is its initialisation. We used slightly different initialisation for each of our inference strategies. For the unrestricted HMM we randomly assigned each word token to a class. For the restricted 1HMM we use a similar initialiser to 870 Model M-1 VM Prototype meta-model (CGS10) 76.1 68.8 MEMM (BBDK10) 75.5 mkcls (Och, 1999) 73.7 65.6 MLE 1HMM-LM (Clark, 2003)∗ 71.2 65.5 BHMM (GG07) 63.2 56.2 PR (Ganchev et al., 2010)∗ 62.5 54.8 Trigram PYP-HMM 69.8 62.6 Trigram PYP-1HMM 76.0 68.0 Trigram PYP-1HMM-LM 77.5 69.7 Bigram PYP-HMM 66.9 59.2 Bigram PYP-1HMM 72.9 65.9 Trigram DP-HMM 68.1 60.0 Trigram DP-1HMM 76.0 68.0 Trigram DP-1HMM-LM 76.8 69.8 Table 1: WSJ performance comparing previous work to our own model. The columns display the many-to-1 accuracy and the V measure, both averaged over 5 independent runs. Our model was run with the local sampler (HMM), the type-level sampler (1HMM) and also with the character LM (1HMM-LM). Also shown are results using Dirichlet Process (DP) priors by fixing a = 0. The system abbreviations are CGS10 (Christodoulopoulos et al., 2010), BBDK10 (Berg-Kirkpatrick et al., 2010) and GG07 (Goldwater and Griffiths, 2007). Starred entries denote results reported in CGS10. Clark (2003), assigning each of the k most frequent word types to its own class, and then randomly dividing the rest of the types between the classes. As a baseline we report the performance of mkcls (Och, 1999) on all test corpora. This model seems not to have been evaluated in prior work on unsupervised PoS tagging, which is surprising given its consistently good performance. First we present our results on the most frequently reported evaluation, the WSJ sections of the Penn. Treebank, along with a number of state-of-the-art results previously reported (Table 1). 
All of these models are allowed 45 tags, the same number of tags as in the gold-standard. The performance of our models is strong, particularly the 1HMM. We also see that incorporating a character language model (1HMM-LM) leads to further gains in performance, improving over the best reported scores under both M-1 and VM. We have omitted the results for the HMM-LM as experimentation showed that the local Gibbs sampler became hopelessly stuck, failing to mix due to the model's deep structure (its peak performance was ≈55%). To evaluate the effectiveness of the PYP prior we include results using a Dirichlet Process prior (DP). We see that for all models the use of the PYP provides some gain for the HMM, but diminishes for the 1HMM. This is perhaps a consequence of the expected table count approximation for the type-sampled PYP-1HMM: the DP relies less on the table counts than the PYP. If we restrict the model to bigrams we see a considerable drop in performance. Note that the bigram PYP-HMM outperforms the closely related BHMM (the main difference being that we smooth tag bigrams with unigrams). It is also interesting to compare the bigram PYP-1HMM to the closely related model of Lee et al. (2010). That model incorrectly assumed independence of the conditional sampling distributions, resulting in an accuracy of 66.4%, well below that of our model.

Figure 4: Sorted frequency of tags for WSJ. The gold standard distribution follows a steep exponential curve while the induced model distributions are more uniform.

Figures 4 and 5 provide insight into the behavior of the sampling algorithms. The former shows that both our models and mkcls induce a more uniform distribution over tags than specified by the treebank. It is unclear whether it is desirable for models to exhibit behavior closer to the treebank, which dedicates separate tags to very infrequent phenomena while lumping the large range of noun types into a single category.

Figure 5: M-1 accuracy vs. number of samples.

The graph in Figure 5 shows that the type-based 1HMM sampler finds a good tagging extremely quickly and then sticks with it, save for the occasional step change demonstrated by the 1HMM-LM line. The locally sampled model is far slower to converge, rising slowly and plateauing well below the other models.
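The DP rows of Table 1 follow from the remark above that fixing the discount a = 0 turns the PYP into a Dirichlet Process. The following short sketch of the textbook PYP predictive rule makes that reduction concrete; it is generic illustration, not code from the paper.

```python
# Predictive probability of generating symbol i from a single Pitman-Yor
# restaurant, given the current seating counts. Setting the discount a = 0
# recovers the Dirichlet Process prior used for the DP rows of Table 1.
def pyp_predictive(n_i, k_i, n, k, a, b, p0_i):
    """n_i: customers eating dish i, k_i: tables serving i,
    n: all customers, k: all tables, p0_i: base-distribution probability."""
    return (n_i - a * k_i + (a * k + b) * p0_i) / (n + b)

if __name__ == "__main__":
    # With a = 0 this reduces to the DP/CRP rule (n_i + b * p0) / (n + b).
    print(pyp_predictive(n_i=5, k_i=2, n=20, k=8, a=0.0, b=1.0, p0_i=0.1))
    print(pyp_predictive(n_i=5, k_i=2, n=20, k=8, a=0.8, b=1.0, p0_i=0.1))
```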
Figure 6: Cooccurrence between frequent gold (y-axis) and predicted (x-axis) tags, comparing mkcls (top) and PYP-1HMM-LM (bottom). Both axes are sorted in terms of frequency. Darker shades indicate more frequent cooccurrence and columns represent the induced tags.

In Figure 6 we compare the distributions over WSJ tags for mkcls and the PYP-1HMM-LM. On the macro scale we can see that our model induces a sparser distribution. With closer inspection we can identify particular improvements our model makes. In the first column for mkcls and the third column for our model we can see similar classes with significant counts for DTs and PRPs, indicating a class that the models may be using to represent the start of sentences (informed by start transitions or capitalisation). This column exemplifies the sparsity of the PYP model's posterior. We continue our evaluation on the CoNLL multilingual corpora (Table 2). These results show a highly consistent story of performance for our models across diverse corpora. In all cases the PYP-1HMM outperforms the PYP-HMM, which are both outperformed by the PYP-1HMM-LM. The character language model provides large gains in performance on a number of corpora, in particular those with rich morphology (Arabic +5%, Portuguese +5%, Spanish +4%). We again note the strong performance of the mkcls model, significantly beating recently published state-of-the-art results for both Dutch and Swedish. Overall our best model (PYP-1HMM-LM) outperforms both the state-of-the-art, where previous work exists, as well as mkcls consistently across all languages.

Language     mkcls  HMM   1HMM  1HMM-LM  Best pub.  Tokens     Tag types
Arabic       58.5   57.1  62.7  67.5     -          54,379     20
Bulgarian    66.8   67.8  69.7  73.2     -          190,217    54
Czech        59.6   62.0  66.3  70.1     -          1,249,408  12c
Danish       62.7   69.9  73.9  76.2     66.7⋆      94,386     25
Dutch        64.3   66.6  68.7  70.4     67.3†      195,069    13c
Hungarian    54.3   65.9  69.0  73.0     -          131,799    43
Portuguese   68.5   72.1  73.5  78.5     75.3⋆      206,678    22
Spanish      63.8   71.6  74.7  78.8     73.2⋆      89,334     47
Swedish      64.3   66.6  67.0  68.6     60.6†      191,467    41

Table 2: Many-to-1 accuracy across a range of languages, comparing our model with mkcls and the best published result (⋆Berg-Kirkpatrick et al. (2010) and †Lee et al. (2010)). This data was taken from the CoNLL-X shared task training sets, resulting in the listed corpus sizes. Fine PoS tags were used for evaluation except for items marked with c, which used the coarse tags. For each language the systems were trained to produce the same number of tags as the gold standard.

5 Discussion The hidden Markov model, originally developed by Brown et al. (1992), continues to be an effective modelling structure for PoS induction. We have combined hierarchical Bayesian priors with a trigram HMM and character language model to produce a model with consistently state-of-the-art performance across corpora in ten languages. However our analysis indicates that there is still room for improvement, particularly in model formulation and in developing effective inference algorithms. Induced tags have already proven their usefulness in applications such as Machine Translation, so it will be interesting to see whether the improvements from our models can lead to gains in downstream tasks. The continued success of models combining hierarchical Pitman-Yor priors with expressive graphical models attests to this framework's enduring attraction; we foresee continued interest in applying this technique to other NLP tasks. References Taylor Berg-Kirkpatrick, Alexandre Bouchard-Côté, John DeNero, and Dan Klein. 2010. Painless unsupervised learning with features. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 582–590, Los Angeles, California, June. Association for Computational Linguistics. Peter F. Brown, Peter V. deSouza, Robert L. Mercer, Vincent J. Della Pietra, and Jenifer C. Lai. 1992. Class-based n-gram models of natural language. Comput. Linguist., 18:467–479, December. Sabine Buchholz and Erwin Marsi. 2006.
Conll-x shared task on multilingual dependency parsing. In Proceedings of the Tenth Conference on Computational Natural Language Learning, CoNLL-X ’06, pages 149– 164, Morristown, NJ, USA. Association for Computational Linguistics. Stanley F. Chen and Joshua Goodman. 1996. An empirical study of smoothing techniques for language modeling. In Proceedings of the 34th annual meeting on Association for Computational Linguistics, pages 310–318, Morristown, NJ, USA. Association for Computational Linguistics. Christos Christodoulopoulos, Sharon Goldwater, and Mark Steedman. 2010. Two decades of unsupervised POS induction: How far have we come? In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 575–584, Cambridge, MA, October. Association for Computational Linguistics. Alexander Clark. 2003. Combining distributional and morphological information for part of speech induction. In Proceedings of the tenth Annual Meeting of the European Association for Computational Linguistics (EACL), pages 59–66. Trevor Cohn, Phil Blunsom, and Sharon Goldwater. 2010. Inducing tree-substitution grammars. Journal of Machine Learning Research, pages 3053–3096. Kuzman Ganchev, Jo˜ao Grac¸a, Jennifer Gillenwater, and Ben Taskar. 2010. Posterior regularization for structured latent variable models. Journal of Machine Learning Research, 99:2001–2049, August. Jianfeng Gao and Mark Johnson. 2008. A comparison of bayesian estimators for unsupervised hidden markov model pos taggers. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’08, pages 344–352, Morristown, NJ, USA. Association for Computational Linguistics. Sharon Goldwater and Tom Griffiths. 2007. A fully bayesian approach to unsupervised part-of-speech tagging. In Proc. of the 45th Annual Meeting of the ACL (ACL-2007), pages 744–751, Prague, Czech Republic, June. Sharon Goldwater, Tom Griffiths, and Mark Johnson. 2006a. Contextual dependencies in unsupervised word segmentation. In Proc. of the 44th Annual Meeting of the ACL and 21st International Conference on Computational Linguistics (COLING/ACL-2006), Sydney. Sharon Goldwater, Tom Griffiths, and Mark Johnson. 2006b. Interpolating between types and tokens by estimating power-law generators. In Y. Weiss, B. Sch¨olkopf, and J. Platt, editors, Advances in Neural 873 Information Processing Systems 18, pages 459–466. MIT Press, Cambridge, MA. Aria Haghighi and Dan Klein. 2006. Prototype-driven learning for sequence models. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 320– 327, Morristown, NJ, USA. Association for Computational Linguistics. Mark Johnson. 2007. Why doesnt EM find good HMM POS-taggers? In Proc. of the 2007 Conference on Empirical Methods in Natural Language Processing (EMNLP-2007), pages 296–305, Prague, Czech Republic. Yoong Keok Lee, Aria Haghighi, and Regina Barzilay. 2010. Simple type-level unsupervised pos tagging. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP ’10, pages 853–861, Morristown, NJ, USA. Association for Computational Linguistics. P. Liang, M. I. Jordan, and D. Klein. 2010. Type-based MCMC. In North American Association for Computational Linguistics (NAACL). Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: the Penn treebank. Computational Linguistics, 19(2):313–330. 
Franz Josef Och. 1999. An efficient method for determining bilingual word classes. In Proceedings of the ninth conference on European chapter of the Association for Computational Linguistics, pages 71–76, Morristown, NJ, USA. Association for Computational Linguistics. Sujith Ravi and Kevin Knight. 2009. Minimized models for unsupervised part-of-speech tagging. In Proceedings of the Joint Conferenceof the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing (ACL-IJCNLP), pages 504–512. Noah A. Smith and Jason Eisner. 2005. Contrastive estimation: Training log-linear models on unlabeled data. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL), pages 354–362, Ann Arbor, Michigan, June. Y. W. Teh. 2006a. A hierarchical Bayesian language model based on Pitman-Yor processes. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 985–992. Yee Whye Teh. 2006b. A hierarchical bayesian language model based on pitman-yor processes. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, ACL-44, pages 985–992, Morristown, NJ, USA. Association for Computational Linguistics. Kristina Toutanova and Mark Johnson. 2008. A bayesian lda-based model for semi-supervised part-of-speech tagging. In J.C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 1521–1528. MIT Press, Cambridge, MA. 874
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 875–884, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Using Deep Morphology to Improve Automatic Error Detection in Arabic Handwriting Recognition Nizar Habash and Ryan M. Roth Center for Computational Learning Systems Columbia University {habash,ryanr}@ccls.columbia.edu Abstract Arabic handwriting recognition (HR) is a challenging problem due to Arabic’s connected letter forms, consonantal diacritics and rich morphology. In this paper we isolate the task of identification of erroneous words in HR from the task of producing corrections for these words. We consider a variety of linguistic (morphological and syntactic) and non-linguistic features to automatically identify these errors. Our best approach achieves a roughly ∼15% absolute increase in F-score over a simple but reasonable baseline. A detailed error analysis shows that linguistic features, such as lemma (i.e., citation form) models, help improve HR-error detection precisely where we expect them to: semantically incoherent error words. 1 Introduction After years of development, optical character recognition (OCR) for Latin-character languages, such as English, has been refined greatly. Arabic, however, possesses a complex orthography and morphology that makes OCR more difficult (Märgner and Abed, 2009; Halima and Alimi, 2009; Magdy and Darwish, 2006). Because of this, only a few systems for Arabic OCR of printed text have been developed, and these have not been thoroughly evaluated (Märgner and Abed, 2009). OCR of Arabic handwritten text (handwriting recognition, or HR), whether online or offline, is even more challenging compared to printed Arabic OCR, where the uniformity of letter shapes and other factors allow for easier recognition (Biadsy et al., 2006; Natarajan et al., 2008; Saleem et al., 2009). OCR and HR systems are often improved by performing post-processing; these are attempts to evaluate whether each word, phrase or sentence in the OCR/HR output is legal and/or probable. When an illegal word or phrase is discovered (error detection), these systems usually attempt to generate a legal alternative (error correction). In this paper, we present a HR error detection system that uses deep lexical and morphological feature models to locate possible "problem zones" – words or phrases that are likely incorrect – in Arabic HR output. We use an off-the-shelf HR system (Natarajan et al., 2008; Saleem et al., 2009) to generate an N-best list of hypotheses for each of several scanned segments of Arabic handwriting. Our problem zone detection (PZD) system then tags the potentially erroneous (problem) words. A subsequent HR post-processing system can then focus its effort on these words when generating additional alternative hypotheses. We only discuss the PZD system and not the task of new hypothesis generation; the evaluation is on error/problem identification. PZD can also be useful in highlighting erroneous text for human post-editors. This paper is structured as follows: Section 2 provides background on the difficulties of the Arabic HR task. Section 3 presents an analysis of HR errors and defines what is considered a problem zone to be tagged. The experimental features, data and other variables are outlined in Section 4. The experiments are presented and discussed in Section 5. We discuss and compare to some related work in detail in Section 6. 
Conclusions and suggested avenues of for future progress are presented in Section 7. 2 Arabic Handwriting Recognition Challenges Arabic has several orthographic and morphological properties that make HR challenging (Darwish and Oard, 2002; Magdy and Darwish, 2006; Märgner and Abed, 2009). 875 2.1 Arabic Orthography Challenges The use of cursive, connected script creates problems in that it becomes more difficult for a machine to distinguish between individual characters. This is certainly not a property unique to Arabic; methods developed for other cursive script languages (such as Hidden Markov Models) can be applied successfully to Arabic (Natarajan et al., 2008; Saleem et al., 2009; Märgner and Abed, 2009; Lu et al., 1999). Arabic writers often make use of elongation (tatweel/kashida) to beautify the script. Arabic also contains certain ligature constructions that require consideration during OCR/HR (Darwish and Oard, 2002). Sets of dots and optional diacritic markers are used to create character distinctions in Arabic. However, trace amounts of dust or dirt on the original document scan can be easily mistaken for these markers (Darwish and Oard, 2002). Alternatively, these markers in handwritten text may be too small, light or closely-spaced to readily distinguish, causing the system to drop them entirely. While Arabic disconnective letters may make it hard to determine word boundaries, they could plausibly contribute to reduced ambiguity of otherwise similar shapes. 2.2 Arabic Morphology Challenges Arabic words can be described in terms of their morphemes. In addition to concatenative prefixes and suffixes, Arabic has templatic morphemes called roots and patterns. For example, the word ÑîD.KA¾Ò»ð wkmkAtbhm1 (w+k+mkAtb+hm) ‘and like their offices’ has two prefixes and one suffix, in addition to a stem composed of the root H. H¼ k-t-b ‘writing related’ and the pattern m1A23.2 Arabic words can also be described in terms of lexemes and inflectional features. The set of word forms that only vary inflectionally among each other is called the lexeme. A lemma is a particular word form used to represent the lexeme word set – a citation form that stands in for the class (Habash, 2010). For instance, the lemma I. JºÓ mktb ‘office’ represents the class of all forms sharing the core meaning ‘office’: I. KA¾Ó mkAtb ‘offices’ (irregular plural), I. JºÖÏ@ Almktb ‘the 1All Arabic transliterations are presented in the HSB transliteration scheme (Habash et al., 2007). 2The digits in the pattern correspond to positions where root radicals are inserted. office’, AîD.JºÖÏ lmktbhA ‘for her office’, and so on. Just as the lemma abstracts over inflectional morphology, the root abstracts over both inflectional and derivational morphology and thus provides a very high level of lexical abstraction, indicating the “core” meaning of the word. The Arabic root H. H¼ k-t-b ‘writing related’, e.g., relates words like I. JºÓ mktb ‘office’, H. ñJºÓ mktwb ‘letter’, and éJ. J» ktyb¯h ‘military unit (of conscripts)’. Arabic morphology allows for tens of billions of potential, legal words (Magdy and Darwish, 2006; Moftah et al., 2009). The large potential vocabulary size by itself complicates HR methods that rely on conventional, word-based dictionary lookup strategies. In this paper we consider the value of morpholexical and morpho-syntactic features such as lemmas and part-of-speech tags, respectively, that may allow machine learning algorithms to learn generalizations. 
We do not consider the root since it has been shown to be too general for NLP purposes (Larkey et al., 2007). Other researchers have used stems for OCR correction (Magdy and Darwish, 2006); we discuss their work and compare to it in Section 6, but we do not present a direct experimental comparison. 3 Problem Zones in Handwriting Recognition 3.1 HR Error Classifications We can classify three types of HR errors: substitutions, insertions and deletions. Substitutions involve replacing the correct word by another incorrect form. Insertions are words that are incorrectly added into the HR hypothesis. An insertion error is typically paired with a substitution error, where the two errors reflect a mis-identification of a single word as two words. Deletions are simply missing words. Examples of these different types of errors appear in Table 1.

REF (transliterated): ! Aljwd¯h ςAly¯h zbd¯h ςlb¯h tSnςwA Ân wAlÂndls wfArs AlqsTnTyny¯h yAfAtHy Almslmwn ÂyhA Âςjztm
HYP (transliterated): Aljwd¯h ςAly¯h ywh n ςlyh tSrfwA An lmn wAlAxð wfArs AlqsTnTyny¯h bAtAný Almslmwn AyhA θm Âςyr
PZD: PROB OK PROB PROB PROB PROB PROB PROB PROB OK OK PROB OK PROB PROB PROB
Problem categories: DELX INS SUB DOTS SUB ORTH INS SUB SUB ORTH INS SUB

Table 1: An example highlighting the different types of Arabic HR errors. The first row shows the reference sentence (right-to-left). The second row shows an automatically generated hypothesis of the same sentence. The last row shows which words in the hypothesis are marked as problematic (PROB) by the system and the specific category of the problem (illustrative, not used by system): SUB (substituted), ORTH (substituted by an orthographic variant), DOTS (substituted by a word with different dotting), INS (inserted), and DELX (adjacent to a deleted word). The remaining words are tagged as OK. The reference translates as ‘Are you unable O’Moslems, you who conquered Constantinople and Persia and Andalusia, to manufacture a tub of high quality butter!’. The hypothesis roughly translates as ‘I loan then O’Moslems Pattani Constantinople, and Persia and taking from whom that you spend on him N Yeoh high quality’.

In the dev set that we study here (see Section 4.1), 25.8% of the words are marked as problematic. Of these, 87.2% are letter-based words (henceforth words), as opposed to 9.3% punctuation and 3.5% digits. Orthogonally, 81.4% of all problem words are substitution errors, 10.6% are insertion errors and 7.9% are deletion errors. Whereas punctuation symbols are 9.3% of all errors, they represent over 38%
of all deletion errors, almost 22% of all insertion errors and less than 5% of substitution errors. Similarly digits, which are 3.5% of all errors, are almost 14% of deletions, 7% of insertions and just over 2% of all substitutions. Punctuation and digits bring different challenges: whereas punctuation marks are a small class, their shape is often confusable with Arabic letters or letter components, e.g., ˇA and ! or r and ,. Digits on the other hand are a hard class to language model since the vocabulary (of multidigit numbers) is infinite. Potentially this can be addressed using a pattern-based model that captures forms of digit sequences (such as date and currency formats); we leave this as future work. Words (non-digit, non-punctuation) still constitute the majority in every category of error: 47.7% of deletions, 71.3% of insertions and over 93% of substitutions. Among substitutions, 26.5% are simple orthographic variants that are often normalized in Arabic NLP because they result from frequent inconsistencies in spelling: Alef Hamza forms (A/Â/ˇA/¯A) and Ya/Alef-Maqsura (y/ý). If we consider whether the lemma of the correct word and its incorrect form are matchable, an additional 6.9% can be added to the orthographic variant sum (since all of these cases can share the same lemmas). The rest of the cases, or 59.7% of the words, involve complex orthographic errors. Simple dot misplacement can only account for 2.4% of all substitution errors. The HR system output does not contain any illegal non-words since its vocabulary is restricted by its training data and language models. The large proportion of errors involving lemma differences is consistent with the perception that most OCR/HR errors create semantically incoherent sentences. This suggests that lemma models can be helpful in identifying such errors. 3.2 Problem Zone Definition Prior to developing a model for PZD, it is necessary to define what is considered a ‘problem’. Once a definition is chosen, gold problem tags can be generated for the training and test data by comparing the hypotheses to their references (for clarity, we refer to these tags as ‘gold’, whereas the correct segment for a given hypothesis set is called the ‘reference’). We decided in this paper to use a simple binary problem tag: a hypothesis word is tagged as "PROB" if it is the result of an insertion or substitution of a word. Deleted words in a hypothesis, which cannot be tagged themselves, cause their adjacent words to be marked as PROB instead. In this way, a subsequent HR post-processing system can be alerted to the possibility of a missing word via its surroundings (hence the idea of a problem ‘zone’). Any words not marked as PROB are given an "OK" tag (see the PZD row of Table 1). We describe in Section 5.6 some preliminary experiments we conducted using more fine-grained tags. 4 Experimental Settings 4.1 Training and Evaluation Data The data used in this paper is derived from image scans provided by the Linguistic Data Consortium (LDC) (Strassel, 2009). This data consists of high-resolution (600 dpi) handwriting scans of Arabic text taken from newswire articles, web logs and newsgroups, along with ground truth annotations and word bounding box information. The scans include variations in scribe demographic background, writing instrument, paper and writing speed. The BBN Byblos HR system (Natarajan et al., 2008; Saleem et al., 2009) is then used to process these scanned images into sequences of segments (sentence fragments). The system generates a ranked N-best list of hypotheses for each segment, where N could be as high as 300. On average, a segment has 6.87 words (including punctuation). We divide the N-best list data into training, development (dev) and test sets (naturally, we do not use data that the BBN Byblos HR system was trained on). For training, we consider two sets of size 2000 and 4000 segments (S) with the 10 top-ranked hypotheses (H) for each segment to provide additional variations. (We conducted additional experiments where we varied the number of segments and hypotheses and found that the system benefited from added variety of segments more than hypotheses. We also modified training composition in terms of the ratio of problem/non-problem words; this did not help performance.) The references are also included in the training sets to provide examples of perfect text. The dev and test sets use 500 segments with one top-ranked hypothesis each {H=1}. We can construct a trivial PZD baseline by assuming all the input words are PROBs; this results in baseline % Precision/Recall/F-scores of 25.8/100/41.1 and 26.0/100/41.2 for the dev and test sets, respectively. Note that in this paper we eschew these baselines in favor of comparison to a non-trivial baseline generated by a simple PZD model.
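As an illustration of the gold-tag definition of Section 3.2 and of the trivial all-PROB baseline above, the following Python sketch aligns a hypothesis to its reference and derives PROB/OK tags; difflib stands in for whatever aligner the actual annotation pipeline used, and all names are ours.

```python
# Sketch: derive gold PROB/OK tags by aligning a hypothesis to its reference
# (Section 3.2), and compute Precision/Recall/F for a prediction.
from difflib import SequenceMatcher

def gold_pzd_tags(ref, hyp):
    tags = ["OK"] * len(hyp)
    for op, r1, r2, h1, h2 in SequenceMatcher(a=ref, b=hyp).get_opcodes():
        if op in ("replace", "insert"):          # substituted / inserted words
            for j in range(h1, h2):
                tags[j] = "PROB"
        elif op == "delete":                     # reference word with no match:
            for j in (h1 - 1, h1):               # flag the neighbouring words
                if 0 <= j < len(hyp):
                    tags[j] = "PROB"
    return tags

def prf(pred, gold):
    tp = sum(p == g == "PROB" for p, g in zip(pred, gold))
    fp = sum(p == "PROB" and g == "OK" for p, g in zip(pred, gold))
    fn = sum(p == "OK" and g == "PROB" for p, g in zip(pred, gold))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f

if __name__ == "__main__":
    ref, hyp = "w1 w2 w3 w4".split(), "w1 wX w3".split()
    gold = gold_pzd_tags(ref, hyp)          # ['OK', 'PROB', 'PROB']
    all_prob = ["PROB"] * len(hyp)          # the trivial baseline
    print(gold, prf(all_prob, gold))        # recall 1.0, precision 2/3
```

Tagging every word PROB always gives 100% recall, which is why the trivial baseline's F-score is determined entirely by the proportion of problem words.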
4.2 PZD Models and Features The PZD system relies on a set of SVM classifiers trained using morphological and lexical features. The SVM classifiers are built using Yamcha (Kudo and Matsumoto, 2003). The SVMs use a quadratic polynomial kernel. For the models presented in this paper, the static feature window context size is set to +/- 2 words; the previous two (dynamic) classifications (i.e. targets) are also used as features. Experiments with smaller window sizes result in poorer performance, while a larger window size (+/- 6 words) yields roughly the same performance at the expense of an order-of-magnitude increase in required training time. Over 30 different combinations of features were considered. Table 2 shows the individual feature definitions.

Table 2: PZD model features. Simple features are used directly by the PZD SVM models, whereas Binned features' (numerical) values are reduced to a small, labeled category set whose labels are used as model features.
Simple features:
word: The surface word form
nw: Normalized word: the word after Alef, Ya and digit normalization
pos: The part-of-speech (POS) of the word
lem: The lemma of the word
na: No-analysis: a binary feature indicating whether the morphological analyzer produced any analyses for the word
Binned features:
nw N-grams: Normword 1/2/3-gram probabilities
lem N-grams: Lemma 1/2/3-gram probabilities
pos N-grams: POS 1/2/3-gram probabilities
conf: Word confidence: the ratio of the number of hypotheses in the N-best list that contain the word over the total number of hypotheses

In order to obtain the morphological features, all of the training and test data is passed through MADA 3.0, a software tool for Arabic morphological analysis and disambiguation (Habash and Rambow, 2005; Roth et al., 2008; Habash et al., 2010). For these experiments, MADA provides the pos (using MADA's native 34-tag set) and the lemma for each word. Occasionally MADA will not be able to produce any interpretations (analyses) for a word; since this is often a sign that the word is misspelled or uncommon, we define a binary na feature to indicate when MADA fails to generate analyses. In addition to using the MADA features directly, we also develop a set of nine N-gram models (where N=1, 2, and 3) for the nw, pos, and lem features defined in Table 2. We train these models using 220M words from the Arabic Gigaword 3 corpus (Graff, 2007) which had also been run through MADA 3.0 to extract the pos and lem information. The models are built using the SRI Language Modeling Toolkit (Stolcke, 2002). Each word in a hypothesis can then be assigned a probability by each of these nine models. We reduce these probabilities into one of nine bins, with each successive bin representing an order of magnitude drop in probability (the final bin is reserved for word N-grams which did not appear in the models). The bin labels are used as the SVM features. Finally, we also use a word confidence (conf) feature, which is aimed at measuring the frequency with which a given word is chosen by the HR system for a given segment scan. The conf is defined here as the ratio of the number of hypotheses in the N-best list that the word appears in to the total number of hypotheses.
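One plausible reading of this binning scheme is sketched below: N-gram probabilities fall into nine order-of-magnitude bins (the last reserved for unseen N-grams), and the confidence ratio into eleven bins (ten even bins over [0,1) plus one for exactly 1, as detailed in the following paragraph). The exact boundaries used in the paper may differ from this sketch.

```python
# Sketch of the categorical binning applied to the Binned features of
# Table 2. Bin boundaries here are our assumption, not the paper's code.
import math

def ngram_prob_bin(p, n_bins=9):
    if p is None or p <= 0.0:                    # N-gram unseen in the LM
        return f"BIN_{n_bins - 1}"
    mag = int(min(-math.log10(p), n_bins - 2))   # 0 .. n_bins-2
    return f"BIN_{mag}"

def conf_bin(ratio):
    return "CONF_10" if ratio == 1.0 else f"CONF_{int(ratio * 10)}"

if __name__ == "__main__":
    print(ngram_prob_bin(0.2), ngram_prob_bin(3e-5), ngram_prob_bin(None))
    print(conf_bin(0.73), conf_bin(1.0))
```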
These numbers are calculated using the original N-best hypothesis list, before the data is trimmed to H={1, 10}. Like the N-grams, this number is binned; in this case there are 11 bins, with 10 spread evenly over the [0,1) range, and an extra bin for values of exactly 1 (i.e., when the word appears in every hypothesis in the set). 5 Results We describe next different experiments conducted by varying the features used in the PZD model. We present the results in terms of F-score only for simplicity; we then conduct an error analysis that examines precision and recall. 5.1 Effect of Feature Set Choice Selecting an appropriate set of features for PZD requires extensive testing. Even when only considering the few features described in Table 2, the parameter space is quite large. Rather then exhaustively test every possible feature combination, we selectively choose feature subsets that can be compared to gain a sense of the incremental benefit provided by individual features. 5.1.1 Simple Features Table 3 illustrates the result of taking a baseline feature set (containing word as the only feature) and adding a single feature from the Simple set to it. The result of combining all the Simple features is also indicated. From this, we see that Simple features, even collectively, provide only minor improvements. 5.1.2 Binned Features Table 4 shows models which include both Simple and Binned features. First, Table 4 shows the effect of adding nw N-grams of successively higher orders to the word baseline. Here we see that even a simple unigram provides a significant benefit (compared Feature Set F-score %Imp word 43.85 – word+nw 43.86 ∼0 word+na 44.78 2.1 word+lem 45.85 4.6 word+pos 45.91 4.7 word+nw+pos+lem+na 46.34 5.7 Table 3: PZD F-scores for simple feature combinations. The training set used was {S=2000, H=10} and the models were evaluated on the dev set. The improvement over the word baseline case is also indicated. %Imp is the relative improvement over the first row. Feature Set F-score %Imp word 43.85 – word+nw 1-gram 49.51 12.9 word+nw 1-gram+nw 2-gram 59.26 35.2 word+nw N-grams 59.33 35.3 +pos 58.50 33.4 +pos N-grams 57.35 30.8 +lem+lem N-grams 59.63 36.0 +lem+lem N-grams+na 59.93 36.7 +lem+lem N-grams+na+nw 59.77 36.3 +lem 60.92 38.9 +lem+na 60.47 37.9 +lem+lem N-grams 60.44 37.9 Table 4: PZD F-scores for models that include Binned features. The training set used was {S=2000, H=10} and the models were evaluated on the dev set. The improvement over the word baseline case is also indicated. The label "N-grams" following a Binned feature refers to using 1, 2 and 3-grams of that feature. Indentation marks accumulative features in model. The best performing row (with bolded score) is word+nw N-grams+lem. to the improvements gained in Table 3). The largest improvement comes with the addition of the bigram (thus introducing context into the model), but the trigram provides only a slight improvement above that. This implies that pursuing higher order N-grams will result in negligible returns. In the next part of Table 4, we see that the single feature (pos) which provided the highest singlefeature benefit in Table 3 does not provide similar improvements under these combinations, and in fact seems detrimental. We also note that using all the features in one model is outperformed by more selective choices. 
Here, the best performer is the model which utilizes the word, nw N-grams, 879 Base Feature Set F-score %Imp +conf word 43.85 55.83 27.3 +nw N-grams 59.33 61.71 4.0 +lem 60.92 62.60 2.8 +lem+na 60.47 63.14 4.4 +lem+lem N-grams 60.44 62.88 4.0 +pos+pos N-grams +na+nw (all system) 59.77 62.44 4.5 Table 5: PZD F-scores for models when word confidence is added to the feature set. The training set used was {S=2000, H=10} and the models were evaluated on the dev set. The improvement generated by including word confidence is indicated. The label "N-grams" following a Binned feature refers to using 1, 2 and 3-grams of that feature. Indentation marks accumulative features in model. %Imp is the relative improvement gained by adding the conf feature. and lem as the only features. However, the differences among this model and the other models using lem Table 4 are not statistically significant. The differences between this model and the other lower performing models are statistically significant (p<0.05). 5.1.3 Word Confidence The conf feature deserves special consideration because it is the only feature which draws on information from across the entire hypothesis set. In Table 5, we show the effect of adding conf as a feature to several base feature sets taken from Table 4. Except for the baseline case, conf provides a relatively consistent benefit. The large (27.3%) improvement gained by adding conf to the word baseline shows that conf is a valuable feature, but the smaller improvements in the other models indicate that the information it provides largely overlaps with the information already present in those models. The differences among the last four models (all including lem) in Table 5 are not statistically significant. The differences between these four models and the first two are statistically significant (p<0.05). 5.2 Effect of Training Data Size In order to allow for rapid examination of multiple feature combinations, we restricted the size of the training set (S) to maintain manageable training times. With this decision comes the implicit asS = 2000 S = 4000 Feature Set F-score F-score %Imp word 43.85 52.08 18.8 word+conf 55.83 57.50 3.0 word+nw N-grams+lem +conf (best system) 62.60 66.34 6.0 +na 63.14 66.21 4.9 +lem N-grams 62.88 64.43 2.5 all 62.44 65.62 5.1 Table 6: PZD F-scores for selected models when the number of training segments (S) is doubled. The training set used was {S=2000, H=10} and {S=4000, H=10}, and the models were evaluated on the dev set. The label "N-grams" following a Binned feature refers to using 1, 2 and 3-grams of that feature. Indentation marks accumulative features in model. sumption that the results obtained will scale with additional training data. We test this assumption by taking the best-performing feature sets from Table 5 and training new models using twice the training data {S=4000}. The results are shown in Table 6. In each case, the improvements are relatively consistent (and on the order of the gains provided by the inclusion of conf as seen in Table 5), indicating that the model performance does scale with data size. However, these improvements come with a cost of a roughly 4-7x increase in training time. We note that the value of doubling S is roughly 3-6x times greater for the word baseline than the others; however, simply adding conf to the baseline provides an even greater improvement than doubling S. The differences between the final four models in Table 6 are not statistically significant. 
The differences between these models and the first two models in the table are statistically significant (p<0.05). For convenience, in the next section we refer to the third model listed in Table 6 as the best system (because it has the highest absolute F-score on the large data set), but readers should recall that these four models are roughly equivalent in performance. 5.3 Error Analysis In this section, we look closely at the performance of a subset of systems on different types of problem words. We compare the following model settings: for {S=4000} training, we use word, word + conf, the best system from Table 6 and the model 880 (a) S=4000 S=2000 word wconf best all all Precision 54.7 59.5 67.1 67.4 62.4 Recall 49.7 55.7 65.6 64.0 62.5 F-score 52.1 57.5 66.3 65.6 62.4 Accuracy 76.4 78.7 82.8 82.7 80.6 (b) %Prob word wconf best all all Words 87.2 51.8 57.3 68.5 67.1 64.9 Punc. 9.3 39.5 44.7 50.0 46.1 40.8 Digits 3.5 24.1 44.8 34.5 34.5 62.1 INS 10.6 46.0 49.4 62.1 62.1 55.2 DEL 7.9 29.2 20.0 24.6 21.5 27.7 SUB 81.4 52.2 60.0 70.0 68.4 66.9 Ortho 21.6 63.3 51.4 52.5 53.7 48.6 Lemma 5.6 45.7 52.2 63.0 52.2 54.4 Semantic 54.2 48.4 64.2 77.7 75.9 75.5 Table 7: Error analysis results comparing the performance of multiple systems over different metrics (a) and word/error types (b). %Prob shows the distribution of problem words into different word types (word, punctuation and digit) and error types. INS, DEL and SUB stand for insertion, deletion and substitution error types, respectively. Ortho stands for orthographic variant. Lemma stands for ‘shared lemma’. The columns to the right of the %Prob column show recall percentage for each word/error type. using all possible features (word, wconf, best and all, respectively); and we also use all trained with {S=2000}. We consider the performance in terms of precision and recall in addition to F-score – see Table 7 (a). We also consider the percentage of recall per error type, such as word/punctuation/digit or deletion/insertion/substitution and different types of substitution errors – see Table 7 (b). The second column in this table (%Prob) shows the distribution of gold-tagged problem words into word and error type categories. Overall, there is no major tradeoff between precision and recall across the different settings; although we can observe the following: (i) adding more training data helps precision more than recall (over three times more) – compare the last two columns in Table 7 (a); and (ii) the best setting has a slightly lower precision than all features, although a much better recall – compare columns 4 and 5 in Table 7 (a). The performance of different settings on words is generally better than punctuation and that is better than digits. The only exceptions are in the digit category, which may be explained by that category’s small count which makes it prone to large percentage fluctuations. In terms of error type, the performance on substitutions is better than insertions, which is in turn better than deletions, for all systems compared. This makes sense since deletions are rather hard to detect and they are marked on possibly correct adjacent words, which may confuse the classifiers. One insight for future work is to develop systems for different types of errors. Considering substitutions in more detail, we see that surprisingly, the simple approach of using the word feature only (without conf) correctly recalls a bigger proportion of problems involving orthographic variants than other settings. 
It seems that the more complex the model, the harder it is to model these cases correctly. Error types that include semantic variations (different lemmas) or shared lemmas (but not explained by orthographic variation), are by contrast much harder for the simple models. The more complex models do quite well recalling errors involving semantically incoherent substitutions (around 77.7% of those cases) and words that share the same lemma but vary in inflectional features (63% of those cases). These two results are quite a jump from the basic word baseline (around 29% and 18% respectively). The simple addition of data seems to contribute more towards the orthographic variation errors and less towards semantic errors. The different settings we use (training size and features) show some degree of complementarity in how they identify errors. We try to exploit this fact in Section 5.5 exploring some simple system combination ideas. 5.4 Blind Test Set Table 8 shows the results of applying the same models described in Table 7 to a blind test set of yet unseen data. As mentioned in Section 4.1, the trivial baseline of the test set is comparable to the dev set. However, the test set is harder to tag than the dev set; this can be seen in the overall lower F-scores. That said, the relative order of performing features is the same as with the dev set, confirming that our best model is optimal for test too. On further study, we noticed that the reason for the test set difference is that the overlap in word forms between test and 881 word wconf best all Precision 37.55 51.48 57.01 55.46 Recall 51.73 53.39 61.97 60.44 F-score 43.51 52.42 59.39 57.84 Accuracy 65.13 74.83 77.99 77.13 Table 8: Results on test set of 500 segments with one hypothesis each. The models were trained on the {S=4000, H=10} training set. train is less than dev and train: 63% versus 81%, respectively on {S=4000}. 5.5 Preliminary Combination Analysis In a preliminarily investigation of the value of complementarity across these different systems, we tried two simple model combination techniques. We restricted the search to the systems in the error analysis (Table 7). First, we considered a sliding voting scheme where a word is marked as problematic if at least n systems agreed to that. Naturally, as n increases, precision increases and recall decreases, providing multiple tradeoff options. The range spans 49.1/83.2/61.8 (% Precision/Recall/F-score) at one end (n = 1) to 80.4/27.5/41.0 on the other (n = all). The best F-score combination was with n = 2 (any two agree) producing 62.8/72.4/67.3, an almost 1% higher than our best system. In a different combination exploration, we exhaustively sought the best three systems from which any agreement (2 or 3) can produce an even better system. The best combination included the word model, the best model (both in {S=4000} training) and the all model (in {S=2000}). This combination yields 70.2/64.0/66.9, a lower F-score than the best general voting approach discussed above, but with a different bias towards better precision. These basic exploratory experiments show that there is a lot of value in pursuing combinations of systems, if not for overall improvement, then at least to benefit from tradeoffs in precision and recall that may be appropriate for different applications. 5.6 Preliminary Tag Set Exploration In all of the experiments described so far, the PZD models tag words using a binary tag set of PROB/OK. 
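The sliding voting combination of Section 5.5 above can be sketched as follows; this is an illustrative re-implementation with our own names, not the authors' code.

```python
# Sketch of the sliding voting combination (Section 5.5): a word is tagged
# PROB when at least n of the component systems tag it PROB.
def vote(system_tags, n):
    """system_tags: per-system PROB/OK sequences over the same words."""
    combined = []
    for word_tags in zip(*system_tags):
        votes = sum(t == "PROB" for t in word_tags)
        combined.append("PROB" if votes >= n else "OK")
    return combined

if __name__ == "__main__":
    systems = [["PROB", "OK", "PROB"],
               ["PROB", "PROB", "OK"],
               ["OK", "OK", "PROB"]]
    print(vote(systems, 1))   # favours recall:      ['PROB', 'PROB', 'PROB']
    print(vote(systems, 2))   # best F in Sec 5.5:   ['PROB', 'OK', 'PROB']
    print(vote(systems, 3))   # favours precision:   ['OK', 'OK', 'OK']
```

Raising n trades recall for precision, which matches the sweep from 49.1/83.2/61.8 to 80.4/27.5/41.0 reported above.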
We may also consider more complex tag sets based on problem subtypes, such as SUB/INS/DEL/OK (where all the problem subtypes are differentiated), SUB/INS/OK (ignores deletions), and SUB/OK (ignores deletions and insertions). Care must be taken when comparing these systems, because the differences in tag set definition results in different baselines. Therefore we compare the % error reduction over the trivial baseline achieved in each case. For an all model trained on the {S=2000, H=10} set, using the PROB/OK tag set results in a 36.3% error reduction over its trivial baseline (using the dev set). The corresponding SUB/INS/DEL/OK tag set only achieves a 34.8% error reduction. The SUB/INS/OK tag set manages a 40.1% error reduction, however. The SUB/OK tag set achieves a 38.9% error reduction. We suspect that the very low relative number of deletions (7.9% in the dev data) and the awkwardness of a DEL tag indicating a neighboring deletion (rather than the current word) may be confusing the models, and so ignoring them seems to result in a clearer picture. 6 Related Work Common OCR/HR post-processing strategies are similar to spelling correction solutions involving dictionary lookup (Kukich, 1992; Jurafsky and Martin, 2000) and morphological restrictions (Domeij et al., 1994; Oflazer, 1996). Error detection systems using dictionary lookup can sometimes be improved by adding entries representing morphological variations of root words, particularly if the language involved has a complex morphology (Pal et al., 2000). Alternatively, morphological information can be used to construct supplemental lexicons or language models (Sari and Sellami, 2002; Magdy and Darwish, 2006). In comparison to (Magdy and Darwish, 2006), our paper is about error detection only (done in using discriminative machine learning); whereas their work is on error correction (done in a standard generative manner (Kolak and Resnik, 2002)) with no assumptions of some cases being correct or incorrect. In essence, their method of detection is the same as our trivial baseline. The morphological features they use are shallow and restricted to breaking up a word into prefix+stem+suffix; whereas we analyze words into their lemmas, abstracting away over a large number of variations. We also made use of 882 part-of-speech tags, which they do not use, but suggest may help. In their work, the morphological features did not help (and even hurt a little), whereas for us, the lemma feature actually helped. Their hypothesis that their large language model (16M words) may be responsible for why the word-based models outperformed stem-based (morphological) models is challenged by the fact that our language model data (220M words) is an order of magnitude larger, but we are still able to show benefit for using morphology. We cannot directly compare to their results because of the different training/test sets and target (correction vs detection); however, we should note that their starting error rate was quite high (39% on Alef/Ya normalized words), whereas our starting error rate is almost half of that (∼26% with unnormalized Alef/Yas, which account for almost 5% absolute of the errors). Perhaps a combination of the two kinds of efforts can push the perfomance on correction even further by biasing towards problematic words and avoiding incorrectly changing correct words. Magdy and Darwish (2006) do not report on percentages of words that they incorrectly modify. 
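Because the tag sets compared in Section 5.6 above have different trivial baselines, systems there are ranked by relative error reduction rather than raw accuracy; the small helper below spells out that arithmetic. The example numbers are hypothetical.

```python
# Relative error reduction over a (trivial) baseline, as used to compare
# the tag sets of Section 5.6. Example figures are illustrative only.
def error_reduction(baseline_error, model_error):
    """Percentage of the baseline error removed by the model."""
    return 100.0 * (baseline_error - model_error) / baseline_error

if __name__ == "__main__":
    print(round(error_reduction(baseline_error=40.0, model_error=25.5), 1))  # 36.2
```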
7 Conclusions and Future Work We presented a study with various settings (linguistic and non-linguistic features and learning curve) for automatically detecting problem words in Arabic handwriting recognition. Our best approach achieves a roughly ∼15% absolute increase in Fscore over a simple baseline. A detailed error analysis shows that linguistic features, such as lemma models, help improve HR-error detection specifically where we expect them to: identifying semantically inconsistent error words. In the future, we plan to continue improving our system by considering smarter trainable combination techniques and by separating the training for different types of errors, particularly deletions from insertions and substitutions. We would also like to conduct an extended evaluation comparing other types of morphological features, such as roots and stems, directly. One additional idea is to implement a lemma-confidence feature that examines lemma use in hypotheses across the document. This could potentially provide valuable semantic information at the document level. We also plan to integrate our system with a system for producing correction hypotheses. We also will consider different uses for the basic system setup we developed to identify other types of text errors, such as spelling errors or code-switching between languages and dialects. Acknowledgments We would like to thank Premkumar Natarajan, Rohit Prasad, Matin Kamali, Shirin Saleem, Katrin Kirchhoff, and Andreas Stolcke. This work was funded under DARPA project number HR0011-08-C-0004. References Fadi Biadsy, Jihad El-Sana, and Nizar Habash. 2006. Online Arabic handwriting recognition using Hidden Markov Models. In The 10th International Workshop on Frontiers in Handwriting Recognition (IWFHR’10), La Baule, France. Kareem Darwish and Douglas W. Oard. 2002. Term Selection for Searching Printed Arabic. In SIGIR ’02: Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval, pages 261–268, New York, NY, USA. ACM. Rickard Domeij, Joachim Hollman, and Viggo Kann. 1994. Detection of spelling errors in Swedish not using a word list en clair. J. Quantitative Linguistics, 1:1–195. David Graff. 2007. Arabic Gigaword 3, LDC Catalog No.: LDC2003T40. Linguistic Data Consortium, University of Pennsylvania. Nizar Habash and Owen Rambow. 2005. Arabic Tokenization, Part-of-Speech Tagging and Morphological Disambiguation in One Fell Swoop. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 573–580, Ann Arbor, Michigan, June. Association for Computational Linguistics. Nizar Habash, Abdelhadi Soudi, and Tim Buckwalter. 2007. On Arabic Transliteration. In A. van den Bosch and A. Soudi, editors, Arabic Computational Morphology: Knowledge-based and Empirical Methods. Springer. Nizar Habash, Owen Rambow, and Ryan Roth. 2010. MADA+TOKAN Manual. Technical Report CCLS10-01, Center for Computational Learning Systems (CCLS), Columbia University. Nizar Habash. 2010. Introduction to Arabic Natural Language Processing. Morgan & Claypool Publishers. 883 Mohamed Ben Halima and Adel M. Alimi. 2009. A multi-agent system for recognizing printed Arabic words. In Khalid Choukri and Bente Maegaard, editors, Proceedings of the Second International Conference on Arabic Language Resources and Tools, Cairo, Egypt, April. The MEDAR Consortium. Daniel Jurafsky and James H. Martin. 2000. Speech and Language Processing. Prentice Hall, New Jersey, USA. 
Okan Kolak and Philip Resnik. 2002. OCR error correction using a noisy channel model. In Proceedings of the second international conference on Human Language Technology Research. Taku Kudo and Yuji Matsumoto. 2003. Fast Methods for Kernel-Based Text Analysis. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL’03), pages 24–31, Sapporo, Japan, July. Association for Computational Linguistics. Karen Kukich. 1992. Techniques for Automatically Correcting Words in Text. ACM Computing Surveys, 24(4). Leah S. Larkey, Lisa Ballesteros, and Margaret E. Connell, 2007. Arabic Computational Morphology: Knowledge-based and Empirical Methods, chapter Light Stemming for Arabic Information Retrieval. Springer Netherlands, Kluwer/Springer edition. Zhidong Lu, Issam Bazzi, Andras Kornai, John Makhoul, Premkumar Natarajan, and Richard Schwartz. 1999. A Robust, Language-Independent OCR System. In the 27th AIPR Workshop: Advances in Computer Assisted Recognition, SPIE. Walid Magdy and Kareem Darwish. 2006. Arabic OCR Error Correction Using Character Segment Correction, Language Modeling, and Shallow Morphology. In Proceedings of 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP 2006), pages 408–414, Sydney, Austrailia. Volker Märgner and Haikal El Abed. 2009. Arabic Word and Text Recognition - Current Developments. In Khalid Choukri and Bente Maegaard, editors, Proceedings of the Second International Conference on Arabic Language Resources and Tools, Cairo, Egypt, April. The MEDAR Consortium. Mohsen Moftah, Waleed Fakhr, Sherif Abdou, and Mohsen Rashwan. 2009. Stem-based Arabic language models experiments. In Khalid Choukri and Bente Maegaard, editors, Proceedings of the Second International Conference on Arabic Language Resources and Tools, Cairo, Egypt, April. The MEDAR Consortium. Prem Natarajan, Shirin Saleem, Rohit Prasad, Ehry MacRostie, and Krishna Subramanian, 2008. Arabic and Chinese Handwriting Recognition, volume 4768 of Lecture Notes in Computer Science, pages 231–250. Springer, Berlin, Germany. Kemal Oflazer. 1996. Error-tolerant finite-state recognition with applications to morphological analysis and spelling correction. Computational Linguistics, 22:73–90. U. Pal, P. K. Kundu, and B. B. Chaudhuri. 2000. OCR error correction of an inflectional Indian language using morphological parsing. J. Information Sci. and Eng., 16:903–922. Ryan Roth, Owen Rambow, Nizar Habash, Mona Diab, and Cynthia Rudin. 2008. Arabic Morphological Tagging, Diacritization, and Lemmatization Using Lexeme Models and Feature Ranking. In Proceedings of ACL-08: HLT, Short Papers, pages 117–120, Columbus, Ohio, June. Association for Computational Linguistics. Shirin Saleem, Huaigu Cao, Krishna Subramanian, Marin Kamali, Rohit Prasad, and Prem Natarajan. 2009. Improvements in BBN’s HMM-based Offline Handwriting Recognition System. In Khalid Choukri and Bente Maegaard, editors, 10th International Conference on Document Analysis and Recognition (ICDAR), Barcelona, Spain, July. Toufik Sari and Mokhtar Sellami. 2002. MOrphoLEXical analysis for correcting OCR-generated Arabic words (MOLEX). In The 8th International Workshop on Frontiers in Handwriting Recognition (IWFHR’02), Niagara-on-the-Lake, Canada. Andreas Stolcke. 2002. SRILM - an Extensible Language Modeling Toolkit. In Proceedings of the International Conference on Spoken Language Processing (ICSLP), volume 2, pages 901–904, Denver, CO. Stephanie Strassel. 2009. 
Linguistic Resources for Arabic Handwriting Recognition. In Khalid Choukri and Bente Maegaard, editors, Proceedings of the Second International Conference on Arabic Language Resources and Tools, Cairo, Egypt, April. The MEDAR Consortium. 884
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 885–894, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics A Discriminative Model for Joint Morphological Disambiguation and Dependency Parsing John Lee Department of Chinese, Translation and Linguistics City University of Hong Kong [email protected] Jason Naradowsky, David A. Smith Department of Computer Science University of Massachusetts, Amherst {narad,dasmith}@cs.umass.edu Abstract Most previous studies of morphological disambiguation and dependency parsing have been pursued independently. Morphological taggers operate on n-grams and do not take into account syntactic relations; parsers use the “pipeline” approach, assuming that morphological information has been separately obtained. However, in morphologically-rich languages, there is often considerable interaction between morphology and syntax, such that neither can be disambiguated without the other. In this paper, we propose a discriminative model that jointly infers morphological properties and syntactic structures. In evaluations on various highly-inflected languages, this joint model outperforms both a baseline tagger in morphological disambiguation, and a pipeline parser in head selection. 1 Introduction To date, studies of morphological analysis and dependency parsing have been pursued more or less independently. Morphological taggers disambiguate morphological attributes such as partof-speech (POS) or case, without taking syntax into account (Hakkani-T¨ur et al., 2000; Hajiˇc et al., 2001); dependency parsers commonly assume the “pipeline” approach, relying on morphological information as part of the input (Buchholz and Marsi, 2006; Nivre et al., 2007). This approach serves many languages well, especially those with less morphological ambiguity. In English, for example, accuracy of POS tagging has risen above 97% (Toutanova et al., 2003), and that of dependency parsing has reached the low nineties (Nivre et al., 2007). For these languages, there may be little to be gained to justify the computational cost of incorporating syntactic inference during the morphological tagging task; conversely, it is doubtful that errorful morphological information is a main cause of errors in English dependency parsing. However, the pipeline approach seems more problematic for morphologically-rich languages with substantial interactions between morphology and syntax (Tsarfaty, 2006). Consider the Latin sentence, Una dies omnis potuit praecurrere amantis, ‘One day was able to make up for all the lovers’1. As shown in Table 1, the adjective omnis (‘all’) is ambiguous in number, gender, and case; there are seven valid analyses. From the perspective of a finitestate morphological tagger, the most attractive analysis is arguably the singular nominative, since omnis is immediately followed by the singular verb potuit (‘could’). Indeed, the baseline tagger used in this study did make this decision. Given its nominative case, the pipeline parser assigned the verb potuit to be its head; the two words form the typical subjectverb relation, agreeing in number. Unfortunately, as shown in Figure 1, the word omnis in fact modifies the noun amantis, at the end of the sentence. As a result, despite the distance between them, they must agree in number, gender and case, i.e., both must be plural masculine (or feminine) accusative. 
The pipeline parser, acting on the input that omnis is nominative, naturally did not see 1Taken from poem 1.13 by Sextus Propertius, English translation by Katz (2004). 885 Latin Una dies omnis potuit praecurrere amantis English one day all could to surpass lovers Number sg pl sg pl sg sg pl sg sg pl Gender f n m/f m/f m/f m/f/n m/f m/f/n m/f Case nom/ab nom/acc nom nom/acc nom gen acc gen acc Table 1: The Latin sentence “Una dies omnis potuit praecurrere amantis”, meaning ‘One day was able to make up for all the lovers’, shown with glosses and possible morphological analyses. The correct analyses are shown in bold. The word omnis has 7 possible combinations of number, gender and case, while amantis has 5. Disambiguation partly depends on establishing amantis as the head of omnis, and so the two must agree in all three attributes. this agreement, and therefore did not consider this syntactic relation likely. Such a dilemma is not uncommon in languages with relatively free word order. On the one hand, it appears difficult to improve morphological tagging accuracy on words like omnis without syntactic knowledge; on the other hand, a parser cannot reliably disambiguate syntax unless it has accurate morphological information, in this example the agreement in number, gender, and case. In this paper we propose to attack this chickenand-egg problem with a discriminative model that jointly infers morphological and syntactic properties of a sentence, given its words as input. In evaluations on various highly-inflected languages, the model outperforms both a baseline tagger in morphological disambiguation, and a pipeline parser in head selection. After a description of previous work (§2), the joint model (§3) will be contrasted with the baseline pipeline model (§4). Experimental results (§56) will then be presented, followed by conclusions and future directions. 2 Previous Work Since space does not allow a full review of the vast literature on morphological analysis and parsing, we focus only on past research involving joint morphological and syntactic inference (§2.1); we then discuss Latin (§2.2), a language representative of the challenges that motivated our approach. 2.1 Joint Morphological and Syntactic Inference Most previous work in morphological disambiguation, even when applied on morphologically complex languages with relatively free word order, potuit could dies day una one praecurrere to surpass amantis lovers omnis all Figure 1: Dependency tree for the sentence “Una dies omnis potuit praecurrere amantis”. The word omnis is an adjective modifying the noun amantis. This information is key to the morphological disambiguation of both words, as shown in Table 1. such as Turkish (Hakkani-T¨ur et al., 2000) and Czech (Hajiˇc et al., 2001), did not consider syntactic relationships between words. In the literature on data-driven parsing, two recent studies attempted joint inference on morphology and syntax, and both considered phrase-structure trees for Modern Hebrew (Cohen and Smith, 2007; Goldberg and Tsarfaty, 2008). The primary focus of morphological processing in Modern Hebrew is splitting orthographic words into morphemes: clitics such as prepositions, pronouns, and the definite article must be separated from the core word. Each of the resulting morphemes is then tagged with an atomic “part-of-speech” to indicate word class and some morphological features. 
Similarly, the English POS tags in the Penn Treebank combine word class information with morphologi886 cal attributes such as “plural” or “past tense”. Cohen and Smith (2007) separately train a discriminative conditional random field (CRF) for segmentation and tagging, and a generative probabilistic context-free grammar (PCFG) for parsing. At decoding time, the two models are combined as a product of experts. Goldberg and Tsarfaty (2008) propose a generative joint model. This paper is the first to use a fully discriminative model for joint morphological and syntactic inference on dependency trees. 2.2 Latin Unlike Modern Hebrew, Latin does not require extensive morpheme segmentation2. However, it does have a relatively free word order, and is also highly inflected, with each word having up to nine morphological attributes, listed in Table 2. In addition to its absolute numbers of cases, moods, and tenses, Latin morphology is fusional. For instance, the suffix −is in omnis cannot be segmented into morphemes that separately indicate gender, number, and case. According to the Latin morphological database encoded in MORPHEUS (Crane, 1991), 30% of Latin nouns can be parsed as another part-of-speech, and on average each has 3.8 possible morphological interpretations. We know of only one previous attempt in datadriven dependency parsing for Latin (Bamman and Crane, 2008), with the goal of constructing a dynamic lexicon for a digital library. Parsing is performed using the usual pipeline approach, first with the TreeTagger analyzer (Schmid, 1994) and then with a state-of-the-art dependency parser (McDonald et al., 2005). Head selection accuracy was 61.49%, and rose to 64.99% with oracle morphological tags. Of the nine morphological attributes, gender and especially case had the lowest accuracy. This observation echoes the findings for Czech (Smith et al., 2005), where case was also the most difficult to disambiguate. 3 Joint Model This section describes a model that jointly infers morphological and syntactic properties of a sentence. It will be presented as a graphical model, 2Except for enclitics such as -que, -ve, and -ne, but their segmentation is rather straightforward compared to Modern Hebrew or other Semitic languages. Attribute Values Part-ofnoun, verb, participle, adjective, speech adverb, conjunction, preposition, (POS) pronoun, numeral, interjection, exclamation, punctuation Person first, second, third Number singular, plural Tense present, imperfect, perfect, pluperfect, future perfect, future Mood indicative, subjunctive, infinitive, imperative, participle, gerund, gerundive, supine Voice active, passive Gender masculine, feminine, neuter Case nominative, genitive, dative, accusative, ablative, vocative, locative Degree comparative, superlative Table 2: Morphological attributes and values for Latin. Ancient Greek has the same attributes; Czech and Hungarian lack some of them. In all categories except POS, a value of null (‘-’) may also be assigned. For example, a noun has ‘-’ for the tense attribute. starting with the variables and then the factors, which represents constraints on the variables. Let n be the number of words and m be the number of possible values for a morphological attribute. The variables are: • WORD: the n words w1,...,wn of the input sentence, all observed. • TAG: O(nm) boolean variables3 Ta,i,v, corresponding to each value of the morphological attributes listed in Table 2. Ta,i,v = true when the word wi has value v as its morphological attribute a. 
In Figure 2, CASE3,acc is the shorthand representing the variable Tcase,3,acc. It is set to true since the word w3 has the accusative case. • LINK: O(n2) boolean variables Li,j corresponding to a possible link between each pair 3The TAG variables were actually implemented as multinomials, but are presented here as booleans for ease of understanding. 887 UNIGRAM CASE− UNIGRAM CASE− CASE− LINK CASE− LINK CASE− LINK CASE− LINK CASE 6,gen CASE 3,gen CASE 3,nom 3,acc CASE UNIGRAM CASE− UNIGRAM CASE− UNIGRAM CASE− CASE 2,... CASE LINK CASE 6,acc CASE− BIGRAM CASE− BIGRAM TREE WORD− LINK WORD LINK CASE 5,... L L 3,6 4,6 Figure 2: The joint model (§3) depicted as a graphical model. The variables, all boolean, are represented by circles and are bolded if their correct values are true. Factors are represented by rectangles and are bolded if they fire. For clarity, this graph shows only those variables and factors associated with one pair of words (i.e., w3=omnis and w6=amantis) and with one morphological attribute (i.e., case). The variables L3,6, CASE3,acc and CASE6,acc are bolded, indicating that w3 and w6 are linked and both have the accusative case. The ternary factor CASE-LINK, that connects to these three variable, therefore fires. of words4. Li,j = true when there is a dependency link from the word wi to the word wj. In Figure 2, the variable L3,6 is set to true since there is a dependency link between the words w3 and w6. We define a probability distribution over all joint assignments A to the above variables, p(A) = 1 Z Y k Fk(A) (1) where Z is a normalizing constant. The assignment A is subject to a hard constraint, represented in Figure 2 as TREE, requiring that the values of the LINK variables must yield a tree, which may be non-projective. The factors Fk(A) represent soft constraints evaluating various aspects of the “goodness” of the tree structure implied by A. We say a factor “fires” when all its neighboring variables are 4Variables for link labels can be integrated in a straightforward manner, if desired. true and it evaluates to a non-negative real number; otherwise, it evaluates to 1 and has no effect on the product in equation (1). Soft constraints in the model are divided into local and link factors, to which we now turn. 3.1 Local Factors The local factors consult either one word or two neighboring words, and their morphological attributes. These factors express the desirability of the assignments of morphological attributes based on local context. There are three types: • TAG-UNIGRAM: There are O(nm) such unary factors, each instance of which is connected to a TAG variable. The factor fires when Ta,i,v is true. The features consist of the value v of the morphological attribute concerned, combined with the word identity of wi, with backoff using all suffixes of the word. The CASEUNIGRAM factors shown in Figure 2 are examples of this family of factors. 888 • TAG-BIGRAM: There are O(nm2) of such binary factors, each connected to the TAG variables of a pair of neighboring words. The factor fires when Ta,i,v1 and Ta,i+1,v2 are both true. The CASE-BIGRAM factors shown in Figure 2 are examples of this family of factors. • TAG-CONSISTENCY: For each word, the TAG variables representing the possible POS values are connected to those representing the values of other morphological attributes, yielding O(nm2) binary factors. They fire when Tpos,i,v1 and Ta,i,v2 are both true. These factors are intended to discourage inconsistent assignments, such as a non-null tense for a noun. 
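For concreteness, the three local factor families can be rendered as simple feature extractors over a candidate assignment of morphological attributes. The Python sketch below is our own illustration rather than the authors' implementation; the dict-based tagging representation and all function names are assumptions made only to make the factor definitions concrete.

```python
# Illustrative sketch (not the authors' code): feature extraction for the three
# local factor families of Section 3.1. A "tagging" maps each word position to
# a dict of attribute -> value, e.g. {"pos": "adj", "case": "acc"}.

def suffixes(word):
    """All suffixes of a word, used as backoff in TAG-UNIGRAM features."""
    return [word[i:] for i in range(len(word))]

def tag_unigram_features(words, tagging, attr):
    feats = []
    for i, w in enumerate(words):
        v = tagging[i].get(attr)
        if v is None:
            continue
        feats.append(("UNI", attr, v, w))                        # value + word identity
        feats += [("UNI-SUF", attr, v, s) for s in suffixes(w)]  # suffix backoff
    return feats

def tag_bigram_features(words, tagging, attr):
    feats = []
    for i in range(len(words) - 1):
        v1, v2 = tagging[i].get(attr), tagging[i + 1].get(attr)
        if v1 is not None and v2 is not None:
            feats.append(("BI", attr, v1, v2))                   # neighbouring value pair
    return feats

def tag_consistency_features(words, tagging, attr):
    feats = []
    for i in range(len(words)):
        pos, v = tagging[i].get("pos"), tagging[i].get(attr)
        if pos is not None and v is not None:
            feats.append(("CONS", pos, attr, v))                 # e.g. penalise (noun, tense=present)
    return feats

# Toy usage on the running Latin example (analyses abbreviated).
words = ["una", "dies", "omnis", "potuit", "praecurrere", "amantis"]
tagging = [{"pos": "adj", "case": "nom"}, {"pos": "noun", "case": "nom"},
           {"pos": "adj", "case": "acc"}, {"pos": "verb", "case": None},
           {"pos": "verb", "case": None}, {"pos": "noun", "case": "acc"}]
print(tag_bigram_features(words, tagging, "case"))
```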
It is clear that so far, none of these factors are aware of the morphological agreement between omnis and amantis, crucial for inferring their syntactic relation. We now turn our attention to link factors, which serve this purpose. 3.2 Link Factors The link factors consult all pairs of words, possibly separated by a long distance, that may have a dependency link. These factors model the likelihood of such a link based on the word identities and their morphological attributes: • WORD-LINK: There are O(n2) such unary factors, each connected to a LINK variable, as shown in Figure 2. The factor fires when Li,j is true. Features include various combinations of the word identities of the parent wi and child wj, and 5-letter prefixes of these words, replicating the so-called “basic features” used by McDonald et al. (2005). • POS-LINK: There are O(n2m2) such ternary factors, each connected to the variables Li,j, Ti,pos,vi and Tj,pos,vj. It fires when all three are true or, in other words, when the parent word wi has POS vi, and the child wj has POS vj. Features replicate all the so-called “basic features” used by McDonald et al. (2005) that involve POS. These factors are not shown in Figure 2, but would have exactly the same structure as the CASE-LINK factors. Beyond these basic features, McDonald et al. (2005) also utilize POS trigrams and POS 4grams. Both include the POS of two linked words, wi and wj. The third component in the trigrams is the POS of each word wk located between wi and wj, i < k < j. The two additional components that make up the 4-grams are subsets of the POS of words located to the immediate left and right of wi and wj. If fully implemented in our joint model, these features would necessitate two separate families of link factors: O(n3m3) factors for the POS trigrams, and O(n2m4) factors for the POS 4-grams. To avoid this substantial increase in model complexity, these features are instead approximated: the POS of all words involved in the trigrams and 4-grams, except those of wi and wj, are regarded as fixed, their values being taken from the output of a morphological tagger (§4.1), rather than connected to the appropriate TAG variables. This approximation allows these features to be incorporated in the POS-LINK factors. • MORPH-LINK: There are O(n2m2) such ternary factors, each connected to the variables Li,j, Ti,a,vi and Tj,a,vj, for every attribute a other than POS. The factor fires when all three variables are true, and both vi and vj are nonnull; i.e., it fires when the parent word wi has vi as its morphological attribute a, and the child wj has vj. Features include the combination of vi and vj themselves, and agreement between them. The CASE-LINK factors in Figure 2 are an example of this family of factors. 4 Baselines To ensure a meaningful comparison with the joint model, our two baselines are both implemented in the same graphical model framework, and trained with the same machine-learning algorithm. Roughly speaking, they divide up the variables and factors of the joint model and train them separately. For morphological disambiguation, we use the baseline tagger described in §4.1. For dependency parsing, our baseline is a “pipeline” parser (§4.2) that infers syntax upon the output of the baseline tagger. 889 4.1 Baseline Morphological Tagger The tagger is a graphical model with the WORD and TAG variables, connected by the local factors TAG-UNIGRAM, TAG-BIGRAM, and TAGCONSISTENCY, all used in the joint model (§3). 
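Because the pipeline parser of the next subsection reuses the link-factor features of §3.2, it is worth making the central MORPH-LINK agreement features concrete. The sketch below is our own illustration, not the authors' code, and it reuses the dict-based tagging representation assumed in the earlier sketch.

```python
# Illustrative sketch (not the authors' code): features contributed by a
# MORPH-LINK factor for a candidate dependency link (parent -> child).

ATTRS = ["person", "number", "tense", "mood", "voice", "gender", "case", "degree"]

def morph_link_features(parent_tags, child_tags):
    """The factor fires only when both words have non-null values for an
    attribute other than POS; features encode the value pair and agreement."""
    feats = []
    for attr in ATTRS:
        vp, vc = parent_tags.get(attr), child_tags.get(attr)
        if vp is None or vc is None:
            continue
        feats.append(("MORPH-LINK", attr, vp, vc))      # the value pair itself
        feats.append(("MORPH-AGREE", attr, vp == vc))   # agreement indicator
    return feats

# amantis (noun) as head of omnis (adj): agreement in number, gender and case.
amantis = {"pos": "noun", "number": "pl", "gender": "m", "case": "acc"}
omnis   = {"pos": "adj",  "number": "pl", "gender": "m", "case": "acc"}
print(morph_link_features(amantis, omnis))
```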
4.2 Baseline Dependency Parser The parser has no local factors, but has the same variables as the joint model and the same features from all three families of link factors (§3). However, since it takes as input the morphological attributes predicted by the tagger, the TAG variables are now observed. This leads to a change in the structure of the link factors — all features from the POSLINK factors now belong to the WORD-LINK factors, since the POS of all words are observed. In short, the features of the parser are a replication of (McDonald et al., 2005), but also extended beyond POS to the other morphological attributes, with the features in the MORPH-LINK factors incorporated into WORD-LINK for similar reasons. 5 Experimental Set-up 5.1 Data Our evaluation focused on the Latin Dependency Treebank (Bamman and Crane, 2006), created at the Perseus Digital Library by tailoring the Prague Dependency Treebank guidelines for the Latin language. It consists of excerpts from works by eight Latin authors. We randomly divided the 53K-word treebank into 10 folds of roughly equal sizes, with an average of 5314 words (347 sentences) per fold. We used one fold as the development set and performed cross-validation on the other nine. To measure how well our model generalizes to other highly-inflected, relatively free-word-order languages, we considered Ancient Greek, Hungarian, and Czech. Their respective datasets consist of 8000 sentences from the Ancient Greek Dependency Treebank (Bamman et al., 2009), 5800 from the Hungarian Szeged Dependency Treebank (Vincze et al., 2010), and a subset of 3100 from the Prague Dependency Treebank (B¨ohmov´a et al., 2003). 5.2 Training We define each factor in (1) as a log-linear function: Fk(A) = exp X h θhfh(A, W, k) (2) Given an assignment A and words W, fh is an indicator function describing the presence or absence of the feature, and θh is the corresponding set of weights learned using stochastic gradient ascent, with the gradients inferred by loopy belief propagation (Smith and Eisner, 2008). The variance of the Gaussian prior is set to 1. The other two parameters in the training process, the number of belief propagation iterations and the number of training rounds, were tuned on the development set. 5.3 Decoding The output of the joint model is the assignment to the TAG and LINK variables. Loopy belief propagation (BP) was used to calculate the posterior probabilities of these variables. For TAG, we emit the tag with the highest posterior probability as computed by sum-product BP. We produced head attachments by first calculating the posteriors of the LINK variables with BP and then passing them to an edgefactored tree decoder. This is equivalent to minimum Bayes risk decoding (Goodman, 1996), which is used by Cohen and Smith (2007) and Smith and Eisner (2008). This MBR decoding procedure enforces the hard constraint that the output be a tree but sums over possible morphological assignments.5 5.4 Reducing Model Complexity In principle, the joint model should consider every possible combination of morphological attributes for every word. In practice, to reduce the complexity of the model, we used a pre-existing morphological database, MORPHEUS (Crane, 1991), to constrain the range of possible values of the attributes listed in Table 2; more precisely, we add a hard constraint, requiring that assignments to the TAG variables be compatible with MORPHEUS. 
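A minimal sketch of how such a dictionary constraint might be applied when instantiating the TAG variables is given below; the toy dictionary format, the fallback set, and the function name are our own assumptions and do not reflect the actual MORPHEUS interface.

```python
# Illustrative sketch (our own, not the MORPHEUS interface): restrict the
# candidate TAG assignments of each word to the analyses licensed by a
# morphological dictionary, leaving unseen words unconstrained.

CASES = ["nom", "gen", "dat", "acc", "ab", "voc", "loc"]

def candidate_analyses(word, dictionary, fallback):
    """Hard constraint of Section 5.4: only dictionary-compatible assignments
    are instantiated; words missing from the dictionary keep the full set."""
    return dictionary.get(word, fallback)

# Toy entry for 'omnis' (the real database licenses 7 analyses).
toy_dict = {"omnis": [{"case": "nom", "number": "sg"},
                      {"case": "nom", "number": "pl"},
                      {"case": "acc", "number": "pl"}]}
fallback = [{"case": c, "number": n} for c in CASES for n in ["sg", "pl"]]

print(len(candidate_analyses("omnis", toy_dict, fallback)))    # constrained: 3
print(len(candidate_analyses("ignotum", toy_dict, fallback)))  # unconstrained: 14
```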
This constraint significantly reduces the value of m in the big-O notation 5This approach to nuisance variables has also been used effectively for parsing with tree-substitution grammars, where several derived trees may correspond to each derivation tree, and parsing with PCFGs with latent annotations. 890 Model Tagger Joint Tagger Joint Attr. ↓ all all non-null non-null POS 94.4 94.5 94.4 94.5 Person 99.4 99.5 97.1 97.6 Number 95.3 95.9 93.7 94.5 Tense 98.0 98.2 93.2 93.9 Mood 98.1 98.3 93.8 94.4 Voice 98.5 98.6 95.3 95.7 Gender 93.1 93.9 87.7 89.1 Case 89.3 90.0 79.9 81.2 Degree 99.9 99.9 86.4 90.8 UAS 61.0 61.9 — — Table 3: Latin morphological disambiguation and parsing. For some attributes, such as degree, a substantial portion of words have the null value. The non-null columns provides a sharper picture by excluding these “easy” cases. Note that POS is never null. for the number of variables and factors described in §3. To illustrate the effect, the graphical model of the sentence in Table 1, whose six words are all covered by the database, has 1,866 factors; without the benefit of the database, the full model would have 31,901 factors. The MORPHEUS database was automatically generated from a list of stems, inflections, irregular forms and morphological rules. It covers about 99% of the distinct words in the Latin Dependency Treebank. At decoding time, for each fold, the database is further augmented with tags seen in training data. After this augmentation, an average of 44 words are “unseen” in each fold. Similarly, we constructed morphological dictionaries for Czech, Ancient Greek, and Hungarian from words that occurred at least five times in the training data; words that occurred fewer times were unrestricted in the morphological attributes they could take on. 6 Experimental Results We compare the performance of the pipeline model (§4) and the joint model (§3) on morphological disambiguation and unlabeled dependency parsing. Model Tagger Joint Tagger Joint Attr. ↓ all all non-null non-null POS 95.5 95.7 95.5 95.7 Person 98.4 98.8 93.5 95.6 Number 91.2 92.3 87.0 88.4 Tense 98.4 98.8 92.7 96.1 Voice 98.5 98.7 93.2 95.8 Gender 86.6 87.9 75.6 78.0 Case 84.1 85.6 74.3 76.5 Degree 97.9 98.0 90.1 90.1 UAS 67.4 68.7 — — Table 4: Czech morphological disambiguation and parsing. As with Latin, the model is least accurate with noun/adjective categories of gender number, and case, particularly when considering only words whose true value is non-null for those attributes. Joint inference with syntactic features improves accuracy across the board. Model Tagger Joint Tagger Joint Attr. ↓ all all non-null non-null POS 94.9 95.7 94.9 95.7 Person 98.7 99.0 92.2 94.6 Number 97.4 97.9 96.5 97.1 Tense 96.8 97.2 84.1 86.8 Mood 97.9 98.3 91.4 93.2 Voice 97.8 98.0 91.3 92.4 Gender 95.4 96.1 90.7 91.9 Case 95.9 96.3 92.0 92.6 Degree 99.8 99.9 33.3 55.6 UAS 68.0 70.5 — — Table 5: Ancient Greek morphological disambiguation and parsing. Noun/adjective morphology is more accurate, but verbal morphology is more problematic. Model Tagger Joint Tagger Joint Attr. ↓ all all non-null non-null POS 95.8 95.8 95.8 95.8 Person 98.5 98.6 94.9 94.1 Number 97.4 97.5 96.8 96.6 Tense 98.9 99.3 97.2 97.3 Mood 98.7 99.2 95.8 97.3 Case 96.7 97.0 94.5 94.9 Degree 97.9 98.1 87.5 88.6 UAS 78.2 78.8 — — Table 6: Hungarian morphological disambiguation and parsing. The agglutinative morphological system makes local cues more effective, but syntactic information helps in almost all categories. 
891 6.1 Morphological Disambiguation As seen in Table 3, the joint model outperforms6 the baseline tagger in all attributes in Latin morphological disambiguation. Among words not covered by the morphological database, accuracy in POS is slightly better, but lower for case, gender and number. The joint model made the most gains on adjectives and participles. Both parts-of-speech are particularly ambiguous: according to MORPHEUS, 43% of the adjectives can be interpreted as another POS, most frequently nouns; while participles have an average of 5.5 morphological interpretations. Both also often have identical forms for different genders, numbers and cases. In these situations, syntactic considerations help nudge the joint model to the correct interpretations. Experiments on the other three languages bear out similar results: the joint model improves morphological disambiguation. The performance of Czech (Table 4) exhibits the closest analogue to Latin: gender, number, and case are much less accurately predicted than are the other morphological attributes. Like Latin, Czech lacks definite and indefinite articles to provide high-confidence cues for noun phrase boundaries. The Ancient Greek treebank comprises both archaic texts, before the development of a definite article, and later classic Greek, which has a definite article; Hungarian has both a definite and an indefinite article. In both languages (Tables 5 and 6), noun and adjective gender, number, and case are more accurately predicted than in Czech and Latin. The verbal system of ancient Greek, in contrast, is more complex than that of the other languages, so mood, voice, and tense accuracy are lower. 6.2 Dependency Parsing In addition to morphological disambiguation, we also measured the performance of the joint model on dependency parsing of Latin and the other languages. The baseline pipeline parser (§4.2) yielded 61.00% head selection accuracy (i.e., unlabeled attachment score, UAS), outperformed7 by the joint 6The differences are statistically significant in all (p < 0.01 by McNemar’s Test) but POS (p = 0.5). 7Significant at p < e−11 by McNemar’s Test. model at 61.88%. The joint model showed similar improvements in Ancient Greek, Hungarian, and Czech. Wrong decisions made by the baseline tagger often misled the pipeline parser. For adjectives, the example shown in Table 1 and Figure 1 is a typical scenario, where an accusative adjective was tagged as nominative, and was then misanalyzed by the parser as modifying a verb (as a subject) rather than modifying an accusative noun. For participles modifying a noun, the wrong noun was often chosen based on inaccurate morphological information. In these cases, the joint model, entertaining all morphological possibilities, was able to find the combination of links and morphological analyses that are collectively more likely. The accuracy figures of our baselines are comparable, but not identical, to their counterparts reported in (Bamman and Crane, 2008). The differences may partially be attributed to the different morphological tagger used, and the different learning algorithm, namely Margin Infused Relaxed Algorithm (MIRA) in (McDonald et al., 2005) rather than maximum likelihood. More importantly, the Latin Dependency Treebank has grown from about 30K at the time of the previous work to 53K at present, resulting in significantly different training and testing material. 
Gold Pipeline Parser When given perfect morphological information, the Latin parser performs at 65.28% accuracy in head selection. Despite the oracle morphology, the head selection accuracy is still below other languages. This is hardly surprising, given the relatively small training set, and that the “the most difficult languages are those that combine a relatively free word order with a high degree of inflection”, as observed at the recent dependency parsing shared task (Nivre et al., 2007); both of these are characteristics of Latin. A particularly troublesome structure is coordination; the most frequent link errors all involve either a parent or a child as a conjunction. In a list of words, all words and coordinators depend on the final coordinator. Since the factors in our model consult only one link at a time, they do not sufficiently capture this kind of structures. Higher-order features, particularly those concerned with links with grandparents and siblings, have been shown to benefit dependency 892 parsing (Smith and Eisner, 2008) and may be able to address this issue. 7 Conclusions and Future Work We have proposed a discriminative model that jointly infers morphological properties and syntactic structures. In evaluations on various highly-inflected languages, this joint model outperforms both a baseline tagger in morphological disambiguation, and a pipeline parser in head selection. This model may be refined by incorporating richer features and improved decoding. In particular, we would like to experiment with higher-order features (§6), and with maximum a posteriori decoding, via max-product BP or (relaxed) integer linear programming. Further evaluation on other morphological systems would also be desirable. Acknowledgments We thank David Bamman and Gregory Crane for their feedback and support. Part of this research was performed by the first author while visiting Perseus Digital Library at Tufts University, under the grants A Reading Environment for Arabic and Islamic Culture, Department of Education (P017A060068-08) and The Dynamic Lexicon: Cyberinfrastructure and the Automatic Analysis of Historical Languages, National Endowment for the Humanities (PR-50013-08). The latter two authors were supported by Army prime contract #W911NF07-1-0216 and University of Pennsylvania subaward #103-548106; by SRI International subcontract #27001338 and ARFL prime contract #FA8750-09-C0181; and by the Center for Intelligent Information Retrieval. Any opinions, findings, and conclusions or recommendations expressed in this material are the authors’ and do not necessarily reflect those of the sponsors. References David Bamman and Gregory Crane. 2006. The Design and Use of a Latin Dependency Treebank. Proc. Workshop on Treebanks and Linguistic Theories (TLT). Prague, Czech Republic. David Bamman and Gregory Crane. 2008. Building a Dynamic Lexicon from a Digital Library. Proc. 8th ACM/IEEE-CS Joint Conference on Digital Libraries (JCDL 2008). Pittsburgh, PA. David Bamman, Francesco Mambrini, and Gregory Crane. 2009. An Ownership Model of Annotation: The Ancient Greek Dependency Treebank. Proc. Workshop on Treebanks and Linguistic Theories (TLT). A. B¨ohmov´a, J. Hajiˇc, E. Hajiˇcov´a, and B. Hladk´a. 2003. The PDT: a 3-level Annotation Scenario. In Treebanks: Building and Using Parsed Corpora, A. Abeill´e (ed). Kluwer. Sabine Buchholz and Erwin Marsi. 2006. CoNLLX Shared Task on Multilingual Dependency Parsing. Proc. CoNLL. New York, NY. Shay B. Cohen and Noah A. Smith. 2007. 
Joint Morphological and Syntactic Disambiguation. Proc. EMNLPCoNLL. Prague, Czech Republic. Gregory Crane. 1991. Generating and Parsing Classical Greek. Literary and Linguistic Computing 6(4):243– 245. Yoav Goldberg and Reut Tsarfaty. 2008. A Single Generative Model for Joint Morphological Segmentation and Syntactic Parsing. Proc. ACL. Columbus, OH. Joshua Goodman. 1996. Parsing Algorithms and Metrics. Proc. ACL. J. Hajiˇc, P. Krbec, P. Kvˇetoˇn, K. Oliva, and V. Petkeviˇc. 2001. Serial Combination of Rules and Statistics: A Case Study in Czech Tagging. Proc. ACL. D. Z. Hakkani-T¨ur, K. Oflazer, and G. T¨ur. 2000. Statistical Morphological Disambiguation for Agglutinative Languages. Proc. COLING. Vincent Katz. 2004. The Complete Elegies of Sextus Propertius. Princeton University Press, Princeton, NJ. Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jana Hajiˇc. 2005. Non-projective Dependency Parsing using Spanning Tree Algorithms. Proc. HLT/EMNLP. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online Large-Margin Training of Dependency Parsers. Proc. ACL. Joakim Nivre, Johan Hall, Sandra K¨ubler, Ryan McDonald, Jens Nilsson, Sebastian Riedel, and Deniz Yuret. 2007. The CoNLL 2007 Shared Task on Dependency Parsing. Proc. CoNLL Shared Task Session of EMNLP-CoNLL. Prague, Czech Republic. Helmut Schmid. 1994. Probabilistic Part-of-Speech Tagging using Decision Trees. Proc. International Conference on New Methods in Language Processing. Manchester, UK. Noah A. Smith, David A. Smith and Roy W. Tromble. 2005. Context-Based Morphological Disambiguation with Random Fields. Proc. HLT/EMNLP. Vancouver, Canada. 893 David Smith and Jason Eisner. 2008. Dependency Parsing by Belief Propagation. Proc. EMNLP. Honolulu, Hawaii. Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-Rich Part-ofSpeech Tagging with a Cyclic Dependency Network. Proc. HLT-NAACL. Edmonton, Canada. Reut Tsarfaty. 2006. Integrated Morphological and Syntactic Disambiguation for Modern Hebrew. Proc. COLING-ACL Student Research Workshop. Veronika Vincze, D´ora Szauter, Attila Alm´asi, Gy¨orgy M´ora, Zolt´an Alexin, and J´anos Csirik. 2010. Hungarian Dependency Treebank. Proc. LREC. 894
|
2011
|
89
|
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 83–92, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Jigs and Lures: Associating Web Queries with Structured Entities Patrick Pantel Microsoft Research Redmond, WA, USA [email protected] Ariel Fuxman Microsoft Research Mountain View, CA, USA [email protected] Abstract We propose methods for estimating the probability that an entity from an entity database is associated with a web search query. Association is modeled using a query entity click graph, blending general query click logs with vertical query click logs. Smoothing techniques are proposed to address the inherent data sparsity in such graphs, including interpolation using a query synonymy model. A large-scale empirical analysis of the smoothing techniques, over a 2-year click graph collected from a commercial search engine, shows significant reductions in modeling error. The association models are then applied to the task of recommending products to web queries, by annotating queries with products from a large catalog and then mining queryproduct associations through web search session analysis. Experimental analysis shows that our smoothing techniques improve coverage while keeping precision stable, and overall, that our top-performing model affects 9% of general web queries with 94% precision. 1 Introduction Commercial search engines use query associations in a variety of ways, including the recommendation of related queries in Bing, ‘something different’ in Google, and ‘also try’ and related concepts in Yahoo. Mining techniques to extract such query associations generally fall into four categories: (a) clustering queries by their co-clicked url patterns (Wen et al., 2001; Baeza-Yates et al., 2004); (b) leveraging co-occurrences of sequential queries in web search query sessions (Zhang and Nasraoui, 2006; Boldi et al., 2009); (c) pattern-based extraction over lexicosyntactic structures of individual queries (Pas¸ca and Durme, 2008; Jain and Pantel, 2009); and (d) distributional similarity techniques over news or web corpora (Agirre et al., 2009; Pantel et al., 2009). These techniques operate at the surface level, associating one surface context (e.g., queries) to another. In this paper, we focus instead on associating surface contexts with entities that refer to a particular entry in a knowledge base such as Freebase, IMDB, Amazon’s product catalog, or The Library of Congress. Whereas the former models might associate the string “Ronaldinho” with the strings “AC Milan” or “Lionel Messi”, our goal is to associate “Ronaldinho” with, for example, the Wikipedia entity page “wiki/AC Milan” or the Freebase entity “en/lionel mess”. Or for the query string “ice fishing”, we aim to recommend products in a commercial catalog, such as jigs or lures. The benefits and potential applications are large. By knowing the entity identifiers associated with a query (instead of strings), one can greatly improve both the presentation of search results as well as the click-through experience. For example, consider when the associated entity is a product. Not only can we present the product name to the web user, but we can also display the image, price, and reviews associated with the entity identifier. 
Once the entity is clicked, instead of issuing a simple web search query, we can now directly show a product page for the exact product; or we can even perform actions directly on the entity, such as buying the entity on Amazon.com, retrieving the product’s oper83 ating manual, or even polling your social network for friends that own the product. This is a big step towards a richer semantic search experience. In this paper, we define the association between a query string q and an entity id e as the probability that e is relevant given the query q, P(e|q). Following Baeza-Yates et al. (2004), we model relevance as the likelihood that a user would click on e given q, events which can be observed in large query-click graphs. Due to the extreme sparsity of query click graphs (Baeza-Yates, 2004), we propose several smoothing models that extend the click graph with query synonyms and then use the synonym click probabilities as a background model. We demonstrate the effectiveness of our smoothing models, via a large-scale empirical study over realworld data, which significantly reduce model errors. We further apply our models to the task of queryproduct recommendation. Queries in session logs are annotated using our association probabilities and recommendations are obtained by modeling sessionlevel query-product co-occurrences in the annotated sessions. Finally, we demonstrate that our models affect 9% of general web queries with 94% recommendation precision. 2 Related Work We introduce a novel application of significant commercial value: entity recommendations for general Web queries. This is different from the vast body of work on query suggestions (Baeza-Yates et al., 2004; Fuxman et al., 2008; Mei et al., 2008b; Zhang and Nasraoui, 2006; Craswell and Szummer, 2007; Jagabathula et al., 2011), because our suggestions are actual entities (as opposed to queries or documents). There is also a rich literature on recommendation systems (Sarwar et al., 2001), including successful commercial systems such as the Amazon product recommendation system (Linden et al., 2003) and the Netflix movie recommendation system (Bell et al., 2007). However, these are entityto-entity recommendations systems. For example, Netflix recommends movies based on previously seen movies (i.e., entities). Furthermore, these systems have access to previous transactions (i.e., actual movie rentals or product purchases), whereas our recommendation system leverages a different resource, namely query sessions. In principle, one could consider vertical search engines (Nie et al., 2007) as a mechanism for associating queries to entities. For example, if we type the query “canon eos digital camera” on a commerce search engine such as Bing Shopping or Google Products, we get a listing of digital camera entities that satisfy our query. However, vertical search engines are essentially rankers that given a query, return a sorted list of (pointers to) entities that are related to the query. That is, they do not expose actual association scores, which is a key contribution of our work, nor do they operate on general search queries. Our smoothing methods for estimating association probabilities are related to techniques developed by the NLP and speech communities to smooth n-gram probabilities in language modeling. The simplest are discounting methods, such as additive smoothing (Lidstone, 1920) and GoodTuring (Good, 1953). 
Other methods leverage lower-order background models for low-frequency events, such as Katz’ backoff smoothing (Katz, 1987), Witten-Bell discounting (Witten and Bell, 1991), Jelinek-Mercer interpolation (Jelinek and Mercer, 1980), and Kneser-Ney (Kneser and Ney, 1995). In the information retrieval community, Ponte and Croft (1998) are credited for accelerating the use of language models. Initial proposals were based on learning global smoothing models, where the smoothing of a word would be independent of the document that the word belongs to (Zhai and Lafferty, 2001). More recently, a number of local smoothing models have been proposed (Liu and Croft, 2004; Kurland and Lee, 2004; Tao et al., 2006). Unlike global models, local models leverage relationships between documents in a corpus. In particular, they rely on a graph structure that represents document similarity. Intuitively, the smoothing of a word in a document is influenced by the smoothing of the word in similar documents. For a complete survey of these methods and a general optimization framework that encompasses all previous proposals, please see the work of Mei, Zhang et al. (2008a). All the work on local smoothing models has been applied to the prediction of priors for words in documents. To the best of our knowledge, we are the first to establish that query-click graphs can be used to 84 create accurate models of query-entity associations. 3 Association Model Task Definition: Consider a collection of entities E. Given a search query q, our task is to compute P(e|q), the probability that an entity e is relevant to q, for all e ∈E. We limit our model to sets of entities that can be accessed through urls on the web, such as Amazon.com products, IMDB movies, Wikipedia entities, and Yelp points of interest. Following Baeza-Yates et al. (2004), we model relevance as the click probability of an entity given a query, which we can observe from click logs of vertical search engines, i.e., domain-specific search engines such as the product search engine at Amazon, the local search engine at Yelp, or the travel search engine at Bing Travel. Clicked results in a vertical search engine are edges between queries and entities e in the vertical’s knowledge base. General search query click logs, which capture direct user intent signals, have shown significant improvements when used for web search ranking (Agichtein et al., 2006). Unlike for general search engines, vertical search engines have typically much less traffic resulting in extremely sparse click logs. In this section, we define a graph structure for recording click information and we propose several models for estimating P(e|q) using the graph. 3.1 Query Entity Click Graph We define a query entity click graph, QEC(Q∪U ∪ E, Cu ∪Ce), as a tripartite graph consisting of a set of query nodes Q, url nodes U, entity nodes E, and weighted edges Cu exclusively between nodes of Q and nodes of U, as well as weighted edges Ce exclusively between nodes of Q and nodes of E. Each edge in Cu and Ce represents the number of clicks observed between query-url pairs and query-entity pairs, respectively. Let wu(q, u) be the click weight of the edges in Cu, and we(q, e) be the click weight of the edges in Ce. 
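One way to hold such a graph in memory is sketched below; the nested-dictionary layout and class name are our own and serve only to make the notation concrete, including the maximum-likelihood estimate that Equation 3.1, given next, defines.

```python
# Illustrative sketch (our own data layout): a query entity click graph
# QEC(Q ∪ U ∪ E, Cu ∪ Ce) stored as two nested click-count dictionaries.
from collections import defaultdict

class QECGraph:
    def __init__(self):
        self.w_u = defaultdict(lambda: defaultdict(float))  # w_u(q, u): query -> url clicks
        self.w_e = defaultdict(lambda: defaultdict(float))  # w_e(q, e): query -> entity clicks

    def add_url_click(self, q, u, count=1.0):
        self.w_u[q][u] += count

    def add_entity_click(self, q, e, count=1.0):
        self.w_e[q][e] += count

    def p_mle(self, e, q):
        """Maximum-likelihood estimate of P(e|q) from observed entity clicks
        (Equation 3.1 in the text)."""
        total = sum(self.w_e[q].values())
        return self.w_e[q][e] / total if total > 0 else 0.0

# Toy counts for the 'ice auger' example of Figure 1.
g = QECGraph()
g.add_entity_click("ice auger", "e3", 2)
g.add_entity_click("ice auger", "e4", 6)
print(g.p_mle("e4", "ice auger"))   # 0.75
```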
If Ce is very large, then we can model the association probability, P(e|q), as the maximum likelihood estimation (MLE) of observing clicks on e given the query q: ˆPmle(e|q) = we(q,e) P e′∈E we(q,e′) (3.1) Figure 1 illustrates an example query entity graph linking general web queries to entities in a large commercial product catalog. Figure 1a illustrates eight queries in Q with their observed clicks (solid lines) with products in E1. Some probability estimates, assigned by Equation 3.1, include: ˆPmle(panfish jigs, e1) = 0, ˆPmle(ice jigs, e1) = 1, and ˆPmle(ice auger, e4) = ce(ice auger,e4) ce(ice auger,e3)+ce(ice auger,e4). Even for the largest search engines, query click logs are extremely sparse, and smoothing techniques are necessary (Craswell and Szummer, 2007; Gao et al., 2009). By considering only Ce, those clicked urls that map to our entity collection E, the sparsity situation is even more dire. The sparsity of the graph comes in two forms: a) there are many queries for which an entity is relevant that will never be seen in the click logs (e.g., “panfish jig” in Figure 1a); and b) the query-click distribution is Zipfian and most observed edges will have very low click counts yielding unreliable statistics. In the following subsections, we present a method to expand QEC with unseen queries that are associated with entities in E. Then we propose smoothing methods for leveraging a background model over the expanded click graph. Throughout our models, we make the simplifying assumption that the knowledge base E is complete. 3.2 Graph Expansion Following Gao et al. (2009), we address the sparsity of edges in Ce by inferring new edges through traversing the query-url click subgraph, UC(Q ∪ U, Cu), which contains many more edges than Ce. If two queries qi and qj are synonyms or near synonyms2, then we expect their click patterns to be similar. We define the synonymy similarity, s(qi, qj) as the cosine of the angle between qi and qj, the click pattern vectors of qi and qj, respectively: cosine(qi, qj) = qi·qj √qi·qi·√qj·qj where q is an nu dimensional vector consisting of the pointwise mutual information between q and each url u in U, pmi(q, u): 1Clicks are collected from a commerce vertical search engine described in Section 5.1. 2A query qi is a near synonym of a query qj if most relevant results of qi are also relevant to qj. Section 5.2.1 describes our adopted metric for near synonymy. 
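A small sketch of this synonymy computation follows. It is our own illustration: it uses the standard PMI form of Equation 3.2 but omits, for brevity, the discounting factor the paper applies, and the click counts in the toy example (and the last url) are invented.

```python
# Illustrative sketch (our own code): query synonymy as the cosine between
# PMI-weighted url-click vectors (Section 3.2; no discounting applied here).
import math
from collections import defaultdict

def pmi_vectors(w_u):
    """PMI vectors for every query in w_u, a dict query -> {url: clicks}."""
    totals_q = {q: sum(urls.values()) for q, urls in w_u.items()}
    totals_u = defaultdict(float)
    for urls in w_u.values():
        for u, w in urls.items():
            totals_u[u] += w
    grand = sum(totals_q.values())
    return {q: {u: math.log(w * grand / (totals_q[q] * totals_u[u]))
                for u, w in urls.items()}
            for q, urls in w_u.items()}

def cosine(v1, v2):
    dot = sum(x * v2.get(u, 0.0) for u, x in v1.items())
    n1 = math.sqrt(sum(x * x for x in v1.values()))
    n2 = math.sqrt(sum(x * x for x in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

# Toy click counts for queries from Figure 1 plus a distractor query.
w_u = {"ice auger":   {"cabelas.com": 5, "strikemaster.com": 3},
       "power auger": {"cabelas.com": 4, "strikemaster.com": 2},
       "d rock":      {"drockoutdoors.com": 6}}
vecs = pmi_vectors(w_u)
# High cosine: a candidate synonymy edge if it exceeds the threshold rho.
print(round(cosine(vecs["ice auger"], vecs["power auger"]), 3))
```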
85 ice fishing ice auger Eskimo Mako Auger Luretech Hot Hooks Hi-Tech Fish ‘N’ Bucket icefishingworld.com iceteam.com cabelas.com strikemaster.com ice fishing tackle fishusa.com power auger ice jigs fishing bucket customjigs.com keeperlures.com panfish jigs d rock StrikeLite II Auger Luretech Hot Hooks ice fishing tackle ice jigs panfish jigs e q we , e q we , ˆ E Q U j i q q s , ice auger cabelas.com strikemaster.com power auger d rock u q wu , a) b) c) d) fishing ice fishing ice fishing minnesota d rock ice fishing tackle ice fishing t0 t1 t3 t4 t2 (e1) (e1) (e2) (e3) (e4) Figure 1: Example QEC graph: (a) Sample queries in Q, clicks connecting queries with urls in U, and clicks to entities in E; (b) Zoom on edges in Cu illustrating clicks observed on urls with weight wu(q, u) as well as synonymy edges between queries with similarity score s(qi, qj) (Section 3.2); (c) Zoom on edges in Ce where solid lines indicate observed clicks with weight we(q, e) and dotted lines indicate inferred clicks with smoothed weight ˆwe(q, e) (Section 3.3); and (d) A temporal sequence of queries in a search session illustrating entity associations propagating from the QEC graph to the queries in the session (Section 4). pmi(q, u) = log wu(q,u)×P q′∈Q,u′∈U wu(q′,u′) P u′∈U wu(q,u′) P q′∈Q wu(q′,u) (3.2) PMI is known to be biased towards infrequent events. We apply the discounting factor, δ(q, u), proposed in (Pantel and Lin, 2002): δ(q,u)= wu(q,u) wu(q,u)+1· min( P q′∈Q wu(q′,u),P u′∈U wu(q,u′)) min( P q′∈Q wu(q′,u),P u′∈U wu(q,u′))+1 Enrichment: We enrich the original QEC graph by creating a new edge {q′,e}, where q′ ∈Q and e ∈ E, if there exists a query q where s(q, q′) > ρ and we(q, e) > 0. ρ is set experimentally, as described in Section 5.2. Figure 1b illustrates similarity edges created between query “ice auger” and both “power auger” and “d rock”. Since “ice auger” was connected to entities e3 and e4 in the original QEC, our expansion model creates new edges in Ce between {power auger, e3}, {power auger, e4}, and {d rock, e3}. For each newly added edge {q,e}, ˆPmle = 0 according to our model from Equation 3.1 since we have never observed any clicks between q and e. Instead, we define a new model that uses ˆPmle when clicks are observed and otherwise assigns uniform probability mass, as: ˆPhybr(e|q) = ˆPmle(e|q) if ∃e′|we(q,e′)>0 1 P e′∈E φ(q,e′) otherwise (3.3) where φ(q, e) is an indicator variable which is 1 if there is an edge between {q, e} in Ce. This model does not leverage the local synonymy graph in order to transfer edge weight to unseen edges. In the next section, we investigate smoothing techniques for achieving this. 3.3 Smoothing Smoothing techniques can be useful to alleviate data sparsity problems common in statistical models. In practice, methods that leverage a background model (e.g., a lower-order n-gram model) have shown most promise (Katz, 1987; Witten and Bell, 1991; Jelinek and Mercer, 1980; Kneser and Ney, 1995). In this section, we present two smoothing methods, derived from Jelinek-Mercer interpolation (Jelinek and Mercer, 1980), for estimating the target association probability P(e|q). Figure 1c highlights two edges, illustrated with dashed lines, inserted into Ce during the graph expansion phase of Section 3.2. ˆwe(q, e) represents the weight of our background model, which can be viewed as smoothed click counts, and are obtained 86 Label Model Reference UNIF ˆPunif(e|q) Eq. 3.8 MLE ˆPmle(e|q) Eq. 3.1 HYBR ˆPhybr(e|q) Eq. 3.3 INTU ˆPintu(e|q) Eq. 3.6 INTP ˆPintp(e|q) Eq. 
3.7 Table 1: Models for estimating the association probability P(e|q). by propagating clicks to unseen edges using the synonymy model as follows: ˆwe(q, e) = P q′∈Q s(q,q′) Nsq × ˆPmle(e|q′) (3.4) where Nsq = P q′∈Q s(q, q′). By normalizing the smoothed weights, we obtain our background model, ˆPbsim: ˆPbsim(e|q) = ˆwe(q,e) P e′∈E ˆwe(q,e′) (3.5) Below we propose two models for interpolating our foreground model from Equation 3.1 with the background model from Equation 3.5. Basic Interpolation: This smoothing model, ˆPintu(e|q), linearly combines our foreground and background models using a model parameter α: ˆPintu(e|q)=α ˆPmle(e|q)+(1−α) ˆPbsim(e|q) (3.6) Bucket Interpolation: Intuitively, edges {q, e} ∈ Ce with higher observed clicks, we(q, e), should be trusted more than those with low or no clicks. A limitation of ˆPintu(e|q) is that it weighs the foreground and background models in the same way irrespective of the observed foreground clicks. Our final model, ˆPintp(e|q) parameterizes the interpolation by the number of observed clicks: ˆPintp(e|q)=α[we(q, e)] ˆPmle(e|q) + (1 −α[we(q, e)]) ˆPbsim(e|q) (3.7) In practice, we bucket the observed click parameter, we(q, e), into eleven buckets: {1-click, 2-clicks, ..., 10-clicks, more than 10 clicks}. Section 5.2 outlines our procedure for learning the model parameters for both ˆPintu(e|q) and ˆPintp(e|q). 3.4 Summary Table 1 summarizes the association models presented in this section as well as a strawman that assigns uniform probability to all edges in QEC: ˆPunif(e|q) = 1 P e′∈E φ(q, e′) (3.8) In the following section, we apply these models to the task of extracting product recommendations for general web search queries. A large-scale experimental study is presented in Section 5 supporting the effectiveness of our models. 4 Entity Recommendation Query recommendations are pervasive in commercial search engines. Many systems extract recommendations by mining temporal query chains from search sessions and clickthrough patterns (Zhang and Nasraoui, 2006). We adopt a similar strategy, except instead of mining query-query associations, we propose to mine query-entity associations, where entities come from an entity database as described in Section 1. Our technical challenge lies in annotating sessions with entities that are relevant to the session. 4.1 Product Entity Domain Although our model generalizes to any entity domain, we focus now on a product domain. Specifically, our universe of entities, E, consists of the entities in a large commercial product catalog, for which we observe query-click-product clicks, Ce, from the vertical search logs. Our QEC graph is completed by extracting query-click-urls from a search engine’s general search logs, Cu. These datasets are described in Section 5.1. 4.2 Recommendation Algorithm We hypothesize that if an entity is relevant to a query, then it is relevant to all other queries cooccurring in the same session. Key to our method are the models from Section 3. Step 1 – Query Annotation: For each query q in a session s, we annotate it with a set Eq, consisting of every pair {e, ˆP(e|q)}, where e ∈E such that there exists an edge {q, e} ∈Ce with probability ˆP(e|q). Note that Eq will be empty for many queries. Step 2 – Session Analysis: We build a queryentity frequency co-occurrence matrix, A, consisting of n|Q| rows and n|E| columns, where each row corresponds to a query and each column to an entity. 
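A compact sketch of Steps 1 and 2 is given below; it presupposes the cell definition spelled out next (per session, the maximum edge weight to an entity, summed over the sessions in which a query occurs), and every function and variable name in it is our own rather than the authors'.

```python
# Illustrative sketch (our own code) of the session analysis: queries are
# annotated with entities scored by an association model P(e|q), and the
# query-entity co-occurrence matrix A is accumulated session by session.
from collections import defaultdict

def analyze_sessions(sessions, p_e_given_q):
    """sessions: list of query lists; p_e_given_q: query -> {entity: prob}."""
    A = defaultdict(lambda: defaultdict(float))
    for session in sessions:
        # psi(s, e): strongest association between any query in the session and e.
        psi = defaultdict(float)
        for q in session:
            for e, p in p_e_given_q.get(q, {}).items():
                psi[e] = max(psi[e], p)
        # Every query in the session then co-occurs with every annotated entity.
        for q in session:
            for e, w in psi.items():
                A[q][e] += w
    return A

# Toy session mirroring Figure 1d; the association score is invented.
p = {"ice fishing tackle": {"Luretech Hot Hooks": 0.8}}
sessions = [["fishing", "ice fishing minnesota", "ice fishing tackle", "d rock"]]
A = analyze_sessions(sessions, p)
print(A["ice fishing minnesota"]["Luretech Hot Hooks"])   # 0.8
```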
87 The value of the cell Aqe is the sum over each session s, of the maximum edge weight between any query q′ ∈s and e3: Aqe = P s∈S ψ(s, e) where S consists of all observed search sessions and: ψ(s, e) = arg max ˆP(e|q′) ({e, ˆP(e|q′)} ∈Eq′), ∀q′ ∈s Step 3 – Ranking: We compute ranking scores between each query q and entity e using pointwise mutual information over the frequencies in A, similarly to Eq. 3.2. The final recommendations for a query q are obtained by returning the top-k entities e according to Step 3. Filters may be applied on: f the frequency Aqe; and p the pointwise mutual information ranking score between q and e. 5 Experimental Results 5.1 Datasets We instantiate our models from Sections 3 and 4 using search query logs and a large catalog of products from a commercial search engine. We form our QEC graphs by first collecting in Ce aggregate query-click-entity counts observed over two years in a commerce vertical search engine. Similarly, Cu is formed by collecting aggregate query-click-url counts observed over six months in a web search engine, where each query must have frequency at least 10. Three final QEC graphs are sampled by taking various snapshots of the above graph as follows: a) TRAIN consists of 50% of the graph; b) TEST consists of 25% of the graph; c) DEV consists of 25% of the graph. 5.2 Association Models 5.2.1 Model Parameters We tune the α parameters for ˆPintu and ˆPintp against the DEV QEC graph. There are twelve parameters to be tuned: α for ˆPintu and α(1), α(2), ..., α(10), α(> 10) for ˆPintp, where α(x) is the observed click bucket as described in Section 3.3. For each, we choose the parameter value that minimizes the mean-squared error (MSE) of the DEV set, where 3Note that this co-occurrence occurs because q′ was annotated with entity e in the same session as q occurred. 0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 MSE Alpha Alpha vs. MSE: Heldout Training For Alpha Parameters Basic Bucket_1 Bucket_2 Bucket_3 Bucket_4 Bucket_5 Bucket_6 Bucket_7 Bucket_8 Bucket_9 Bucket_10 Bucket_11 Figure 2: Alpha tuning on held out data. Model MSE Var Err/MLE MSEW Var Err/MLE ˆ Punif 0.0328† 0.0112 -25.7% 0.0663† 0.0211 -71.8% ˆ Pmle 0.0261 0.0111 – 0.0386 0.0141 – ˆ Phybr 0.0232† 0.0071 11.1% 0.0385 0.0132 0.03% ˆ Pintu 0.0226† 0.0075 13.4% 0.0369† 0.0133 4.4% ˆ Pintp 0.0213† 0.0068 18.4% 0.0375† 0.0131 2.8% Table 2: Model analysis: MSE and MSEW with variance and error reduction relative to ˆPmle. † indicates statistical significance over ˆPmle with 95% confidence. model probabilities are computed using the TRAIN QEC graph. Figure 2 illustrates the MSE ranging over [0, 0.05, 0.1, ..., 1]. We trained the query synonym model of Section 3.2 on the DEV set and hand-annotated 100 random synonymy pairs according to whether or not the pairs were synonyms 2. Setting ρ = 0.4 results in a precision > 0.9. 5.2.2 Analysis We evaluate the quality of our models in Table 1 by evaluating their mean-squared error (MSE) against the target P(e|q) computed on the TEST set: MSE( ˆP)=P {q,e}∈CT e (P T (e|q)−ˆP(e|q))2 MSEW ( ˆP)=P {q,e}∈CT e wT e (q,e)·(P T (e|q)−ˆP(e|q))2 where CT e are the edges in the TEST QEC graph with weight wT e (q, e), P T (e|q) is the target probability computed over the TEST QEC graph, and ˆP is one of our models trained on the TRAIN QEC graph. MSE measures against each edge type, which makes it sensitive to the long tail of the click graph. 
Conversely, MSEW measures against each edge instance, which makes it a good measure against the head of the click graph. We expect our smoothing models to have much more impact on MSE (i.e., the tail) than on MSEW since head queries do not suffer from data sparsity. Table 2 lists the MSE and MSEW results for each model. We consider ˆPunif as a strawman and ˆPmle as a strong baseline (i.e., without any graph expansion nor any smoothing against a background 88 0 0.01 0.02 0.03 0.04 0.05 0.06 MSE Click Bucket (scaled by query-instance coverage) Mean Squared Error vs. Click Bucket UNIF MLE HYBR INTU INTP 1 2 4 3 5 6 7 8 9 10 Figure 3: MSE of each model against the number of clicks in the TEST corpus. Buckets scaled by query instance coverage of all queries with 10 or fewer clicks. model). ˆPunif performs generally very poorly, however ˆPmle is much better, with an expected estimation error of 0.16 accounting for an MSE of 0.0261. As expected, our smoothing models have little improvement on the head-sensitive metric (MSEW ) relative to ˆPmle. In particular, ˆPhybr performs nearly identically to ˆPmle on the head. On the tail, all three smoothing models significantly outperform ˆPmle with ˆPintp reducing the error by 18.4%. Table 3 lists query-product associations for five randomly sampled products along with their model scores from ˆPmle with ˆPintp. Figure 3 provides an intrinsic view into MSE as a function of the number of observed clicks in the TEST set. As expected, for larger observed click counts (>4), all models perform roughly the same, indicating that smoothing is not necessary. However, for low click counts, which in our dataset accounts for over 20% of the overall click instances, we see a large reduction in MSE with ˆPintp outperforming ˆPintu, which in turn outperforms ˆPhybr. ˆPunif performs very poorly. The reason it does worse as the observed click count rises is that head queries tend to result in more distinct urls with high-variance clicks, which in turn makes a uniform model susceptible to more error. Figure 3 illustrates that the benefit of the smoothing models is in the tail of the click graph, which supports the larger error reductions seen in MSE in Table 2. For associations only observed once, ˆPintp reduces the error by 29% relative to ˆPmle. We also performed an editorial evaluation of the query-entity associations obtained with bucket interpolation. We created two samples from the TEST dataset: one randomly sampled by taking click weights into account, and the other sampled uniformly at random. Each set contains results for Query ˆ Pmle ˆ Pintp Query ˆ Pmle ˆ Pintp Garmin GTM 20 GPS Canon PowerShot SX110 IS garmin gtm 20 0.44 0.45 canon sx110 0.57 0.57 garmin traffic receiver 0.30 0.27 powershot sx110 0.48 0.48 garmin nuvi 885t 0.02 0.02 powershot sx110 is 0.38 0.36 gtm 20 0 0.33 powershot sx130 is 0 0.33 garmin gtm20 0 0.33 canon power shot sx110 0 0.20 nuvi 885t 0 0.01 canon dig camera review 0 0.10 Samsung PN50A450 50” TV Devil May Cry: 5th Anniversary Col. samsung 50 plasma hdtv 0.75 0.83 devil may cry 0.76 0.78 samsung 50 0.33 0.32 devilmaycry 0 1.00 50” hdtv 0.17 0.12 High Island Hammock/Stand Combo samsung plasma tv review 0 0.42 high island hammocks 1.00 1.00 50” samsung plasma hdtv 0 0.35 hammocks and stands 0 0.10 Table 3: Example query-product association scores for a random sample of five products. Bold queries resulted from the expansion algorithm in Section 3.2. 100 queries. 
The former consists of 203 queryproduct associations, and the latter of 159 associations. The evaluation was done using Amazon Mechanical Turk4. We created a Mechanical Turk HIT5 where we show to the Mechanical Turk workers the query and the actual Web page in a Product search engine. For each query-entity association, we gathered seven labels and considered an association to be correct if five Mechanical Turk workers gave a positive label. An association was considered to be incorrect if at least five workers gave a negative label. Borderline cases where no label got five votes were discarded (14% of items were borderline for the uniform sample; 11% for the weighted sample). To ensure the quality of the results, we introduced 30% of incorrect associations as honeypots. We blocked workers who responded incorrectly on the honeypots so that the precision on honeypots is 1. The result of the evaluation is that the precision of the associations is 0.88 on the weighted sample and 0.90 on the uniform sample. 5.3 Related Product Recommendation We now present an experimental evaluation of our product recommendation system using the baseline model ˆPmle and our best-performing model ˆPintp. The goals of this evaluation are to (1) determine the quality of our product recommendations; and (2) assess the impact of our association models on the product recommendations. 5.3.1 Experimental Setup We instantiate our recommendation algorithm from Section 4.2 using session co-occurrence frequencies 4https://www.mturk.com 5HIT stands for Human Intelligence Task 89 Query Set Sample Query Bag Sample f 10 25 50 100 10 25 50 100 p 10 10 10 10 10 10 10 10 ˆ Pmle precision 0.89 0.93 0.96 0.96 0.94 0.94 0.93 0.92 ˆ Pintp precision 0.86 0.92 0.96 0.96 0.94 0.94 0.93 0.94 ˆ Pmle coverage 0.007 0.004 0.002 0.001 0.085 0.067 0.052 0.039 ˆ Pintp coverage 0.008 0.005 0.003 0.002 0.094 0.076 0.059 0.045 Rintp,mle 1.16 1.14 1.13 1.14 1.11 1.13 1.15 1.19 Table 4: Experimental results for product recommendations. All configurations are for k = 10. from a one-month snapshot of user query sessions at a Web search engine, where session boundaries occur when 60 seconds elapse in between user queries. We experiment with the recommendation parameters defined at the end of Section 4.2 as follows: k = 10, f ranging from 10 to 100, and p ranging from 3 to 10. For each configuration, we report coverage as the total number of queries in the output (i.e., the queries for which there is some recommendation) divided by the total number of queries in the log. For our performance metrics, we sampled two sets of queries: (a) Query Set Sample: uniform random sample of 100 queries from the unique queries in the one-month log; and (b) Query Bag Sample: weighted random sample, by query frequency, of 100 queries from the query instances in the onemonth log. For each sample query, we pooled together and randomly shuffled all recommendations by our algorithm using both ˆPmle and ˆPintp on each parameter configuration. We then manually annotated each {query, product} pair as relevant, mildly relevant or non-relevant. In total, 1127 pairs were annotated. Interannotator agreement between two judges on this task yielded a Cohen’s Kappa (Cohen, 1960) of 0.56. We therefore collapsed the mildly relevant and non-relevant classes yielding two final classes: relevant and non-relevant. Cohen’s Kappa on this binary classification is 0.71. 
Let CM be the number of relevant (i.e., correct) suggestions recommended by a configuration M and let |M| be the number of recommendations returned by M. Then we define the (micro-) precision of M as PM = CM / |M|. We define relative recall (Pantel et al., 2004) between two configurations M1 and M2 as RM1,M2 = (PM1 × |M1|) / (PM2 × |M2|).

5.3.2 Results

Table 4 summarizes our results for some configurations (others omitted for lack of space). Most remarkable is the {f = 10, p = 10} configuration, where the ˆPintp model affected 9.4% of all query instances posed by the millions of users of a major search engine, with a precision of 94%. Although this model covers 0.8% of the unique queries, the fact that it covers many head queries such as walmart and iphone accounts for the large query instance coverage. Also, since there may be many general web queries for which there is no appropriate product in the database, a coverage of 100% is not attainable (nor desirable); in fact, the upper bound for the coverage is likely to be much lower.

Turning to the impact of the association models on product recommendations, we note that precision is stable in our ˆPintp model relative to our baseline ˆPmle model. However, a large lift in relative recall is observed, up to a 19% increase for the {f = 100, p = 10} configuration. These results are consistent with those of Section 5.2, which compared the association models independently of the application and showed that ˆPintp outperforms ˆPmle.

Table 5 shows sample product recommendations discovered by our ˆPintp model.

Table 5: Sample product recommendations.

  Query                       Product Recommendation
  wedding gowns               27 Dresses (Movie Soundtrack)
  wedding gowns               Bridal Gowns: The Basics of Designing, [...] (Book)
  wedding gowns               Wedding Dress Hankie
  wedding gowns               The Perfect Wedding Dress (Magazine)
  wedding gowns               Imagine Wedding Designer (Video Game)
  low blood pressure          Omron Blood Pressure Monitor
  low blood pressure          Healthcare Automatic Blood Pressure Monitor
  low blood pressure          Ridgecrest Blood Pressure Formula - 60 Capsules
  low blood pressure          Omron Portable Wrist Blood Pressure Monitor
  ’hello cupcake’ cookbook    Giant Cupcake Cast Pan
  ’hello cupcake’ cookbook    Ultimate 3-In-1 Storage Caddy
  ’hello cupcake’ cookbook    13 Cup Cupcakes and More Dessert Stand
  ’hello cupcake’ cookbook    Cupcake Stand Set (Toys)
  1 800 flowers               Todd Oldham Party Perfect Bouquet
  1 800 flowers               Hugs and Kisses Flower Bouquet with Vase

Manual inspection revealed two main sources of errors. First, ambiguity is introduced both by the click model and by the graph expansion algorithm of Section 3.2. In many cases, the ambiguity is resolved by user click patterns (i.e., users disambiguate queries through their browsing behavior), but one such error was seen for the query “shark attack videos”, where several Shark-branded vacuum cleaners are recommended. This is because of the ambiguous query “shark”, which is found in the click logs and in query sessions co-occurring with the query “shark attack videos”. The second source of errors is caused by systematic user errors commonly found in session logs, such as a user accidentally submitting a query while typing. An example session is {“speedo”, “speedometer”}, where the intended session was just the second query and the unintended first query is associated with products such as Speedo swimsuits. This ultimately causes our system to recommend various swimsuits for the query “speedometer”.
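For concreteness, the two metrics defined above are straightforward to compute; the following sketch uses our own helper names (it is not code from the paper):

    def micro_precision(relevance_flags):
        # P_M = C_M / |M|: fraction of the returned recommendations judged relevant.
        return sum(relevance_flags) / len(relevance_flags) if relevance_flags else 0.0

    def relative_recall(precision_1, size_1, precision_2, size_2):
        # R_{M1,M2} = (P_M1 * |M1|) / (P_M2 * |M2|) (Pantel et al., 2004):
        # the ratio of the estimated numbers of correct suggestions returned
        # by configurations M1 and M2.
        return (precision_1 * size_1) / (precision_2 * size_2)

Relative recall is useful here because absolute recall would require knowing every correct recommendation for every query, which is not available.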
6 Conclusion Learning associations between web queries and entities has many possible applications, including query-entity recommendation, personalization by associating entity vectors to users, and direct advertising. Although many techniques have been developed for associating queries to queries or queries to documents, to the best of our knowledge this is the first that aims to associate queries to entities by leveraging click graphs from both general search logs and vertical search logs. We developed several models for estimating the probability that an entity is relevant given a user query. The sparsity of query entity graphs is addressed by first expanding the graph with query synonyms, and then smoothing query-entity click counts over these unseen queries. Our best performing model, which interpolates between a foreground click model and a smoothed background model, significantly reduces testing error when compared against a strong baseline, by 18%. On associations observed only once in our test collection, the modeling error is reduced by 29% over the baseline. We applied our best performing model to the task of query-entity recommendation, by analyzing session co-occurrences between queries and annotated entities. Experimental analysis shows that our smoothing techniques improve coverage while keeping precision stable, and overall, that our topperforming model affects 9% of general web queries with 94% precision. References [Agichtein et al.2006] Eugene Agichtein, Eric Brill, and Susan T. Dumais. 2006. Improving web search ranking by incorporating user behavior information. In SIGIR, pages 19–26. [Agirre et al.2009] Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Pas¸ca, and Aitor Soroa. 2009. A study on similarity and relatedness using distributional and wordnet-based approaches. In NAACL, pages 19–27. [Baeza-Yates et al.2004] Ricardo Baeza-Yates, Carlos Hurtado, and Marcelo Mendoza. 2004. Query recommendation using query logs in search engines. In Wolfgang Lindner, Marco Mesiti, Can T¨urker, Yannis Tzitzikas, and Athena Vakali, editors, EDBT Workshops, volume 3268 of Lecture Notes in Computer Science, pages 588–596. Springer. [Baeza-Yates2004] Ricardo Baeza-Yates. 2004. Web usage mining in search engines. In In Web Mining: Applications and Techniques, Anthony Scime, editor. Idea Group, pages 307–321. [Bell et al.2007] R. Bell, Y. Koren, and C. Volinsky. 2007. Modeling relationships at multiple scales to improve accuracy of large recommender systems. In KDD, pages 95–104. [Boldi et al.2009] Paolo Boldi, Francesco Bonchi, Carlos Castillo, Debora Donato, and Sebastiano Vigna. 2009. Query suggestions using query-flow graphs. In WSCD ’09: Proceedings of the 2009 workshop on Web Search Click Data, pages 56–63. ACM. [Cohen1960] Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1):37–46, April. [Craswell and Szummer2007] Nick Craswell and Martin Szummer. 2007. Random walks on the click graph. In SIGIR, pages 239–246. [Fuxman et al.2008] A. Fuxman, P. Tsaparas, K. Achan, and R. Agrawal. 2008. Using the wisdom of the crowds for keyword generation. In WWW, pages 61– 70. [Gao et al.2009] Jianfeng Gao, Wei Yuan, Xiao Li, Kefeng Deng, and Jian-Yun Nie. 2009. Smoothing clickthrough data for web search ranking. In SIGIR, pages 355–362. [Good1953] Irving John Good. 1953. The population frequencies of species and the estimation of population parameters. Biometrika, 40(3 and 4):237–264. [Jagabathula et al.2011] S. 
Jagabathula, N. Mishra, and S. Gollapudi. 2011. Shopping for products you don’t know you need. In To appear at WSDM. [Jain and Pantel2009] Alpa Jain and Patrick Pantel. 2009. Identifying comparable entities on the web. In CIKM, pages 1661–1664. [Jelinek and Mercer1980] Frederick Jelinek and Robert L. Mercer. 1980. Interpolated estimation of markov source parameters from sparse data. In In Proceedings of the Workshop on Pattern Recognition in Practice, pages 381–397. [Katz1987] Slava M. Katz. 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer. In IEEE Transactions on 91 Acoustics, Speech and Signal Processing, pages 400– 401. [Kneser and Ney1995] Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pages 181–184. [Kurland and Lee2004] O. Kurland and L. Lee. 2004. Corpus structure, language models, and ad-hoc information retrieval. In SIGIR, pages 194–201. [Lidstone1920] George James Lidstone. 1920. Note on the general case of the bayes-laplace formula for inductive or a posteriori probabilities. Transactions of the Faculty of Actuaries, 8:182–192. [Linden et al.2003] G. Linden, B. Smith, and J. York. 2003. Amazon.com recommendations: Item-to-item collaborative filtering. IEEE Internet Computing, 7(1):76–80. [Liu and Croft2004] X. Liu and W. Croft. 2004. Clusterbased retrieval using language models. In SIGIR, pages 186–193. [Mei et al.2008a] Q. Mei, D. Zhang, and C. Zhai. 2008a. A general optimization framework for smoothing language models on graph structures. In SIGIR, pages 611–618. [Mei et al.2008b] Q. Mei, D. Zhou, and Church K. 2008b. Query suggestion using hitting time. In CIKM, pages 469–478. [Nie et al.2007] Z. Nie, J. Wen, and W. Ma. 2007. Object-level vertical search. In Conference on Innovative Data Systems Research (CIDR), pages 235–246. [Pantel and Lin2002] Patrick Pantel and Dekang Lin. 2002. Discovering word senses from text. In SIGKDD, pages 613–619, Edmonton, Canada. [Pantel et al.2004] Patrick Pantel, Deepak Ravichandran, and Eduard Hovy. 2004. Towards terascale knowledge acquisition. In COLING, pages 771–777. [Pantel et al.2009] Patrick Pantel, Eric Crestan, Arkady Borkovsky, Ana-Maria Popescu, and Vishnu Vyas. 2009. Web-scale distributional similarity and entity set expansion. In EMNLP, pages 938–947. [Pas¸ca and Durme2008] Marius Pas¸ca and Benjamin Van Durme. 2008. Weakly-supervised acquisition of open-domain classes and class attributes from web documents and query logs. In ACL, pages 19–27. [Ponte and Croft1998] J. Ponte and B. Croft. 1998. A language modeling approach to information retrieval. In SIGIR, pages 275–281. [Sarwar et al.2001] B. Sarwar, G. Karypis, J. Konstan, and J. Reidl. 2001. Item-based collaborative filtering recommendation system. In WWW, pages 285–295. [Tao et al.2006] T. Tao, X. Wang, Q. Mei, and C. Zhai. 2006. Language model information retrieval with document expansion. In HLT/NAACL, pages 407–414. [Wen et al.2001] Ji-Rong Wen, Jian-Yun Nie, and HongJiang Zhang. 2001. Clustering user queries of a search engine. In WWW, pages 162–168. [Witten and Bell1991] I.H. Witten and T.C. Bell. 1991. The zero-frequency problem: Estimating the probabilities of novel events in adaptive text compression. IEEE Transactions on Information Theory, 37(4). [Zhai and Lafferty2001] C. Zhai and J. Lafferty. 2001. 
A study of smoothing methods for language models applied to ad hoc information retrieval. In SIGIR, pages 334–342. [Zhang and Nasraoui2006] Z. Zhang and O. Nasraoui. 2006. Mining search engine query logs for query recommendation. In WWW, pages 1039–1040.
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 895–904, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Unsupervised Bilingual Morpheme Segmentation and Alignment with Context-rich Hidden Semi-Markov Models Jason Naradowsky∗ Department of Computer Science University of Massachusetts Amherst Amherst, MA 01003 [email protected] Kristina Toutanova Microsoft Research Redmond, WA 98502 [email protected] Abstract This paper describes an unsupervised dynamic graphical model for morphological segmentation and bilingual morpheme alignment for statistical machine translation. The model extends Hidden Semi-Markov chain models by using factored output nodes and special structures for its conditional probability distributions. It relies on morpho-syntactic and lexical source-side information (part-of-speech, morphological segmentation) while learning a morpheme segmentation over the target language. Our model outperforms a competitive word alignment system in alignment quality. Used in a monolingual morphological segmentation setting it substantially improves accuracy over previous state-of-the-art models on three Arabic and Hebrew datasets. 1 Introduction An enduring problem in statistical machine translation is sparsity. The word alignment models of modern MT systems attempt to capture p(ei|fj), the probability that token ei is a translation of fj. Underlying these models is the assumption that the word-based tokenization of each sentence is, if not optimal, at least appropriate for specifying a conceptual mapping between the two languages. However, when translating between unrelated languages – a common task – disparate morphological systems can place an asymmetric conceptual burden on words, making the lexicon of one language much more coarse. This intensifies the problem of sparsity as the large number of word forms created ∗This research was conducted during the author’s internship at Microsoft Research through morphologically productive processes hinders attempts to find concise mappings between concepts. For instance, Bulgarian adjectives may contain markings for gender, number, and definiteness. The following tree illustrates nine realized forms of the Bulgarian word for red, with each leaf listing the definite and indefinite markings. Feminine Neuter Singular Plural Root Masculine cherven(iq)(iqt) cherveni(te) chervena(ta) cherveno(to) Table 1: Bulgarian forms of red Contrast this with English, in which this information is marked either on the modified word or by separate function words. In comparison to a language which isn’t morphologically productive on adjectives, the alignment model must observe nine times as much data (assuming uniform distribution of the inflected forms) to yield a comparable statistic. In an area of research where the amount of data available plays a large role in a system’s overall performance, this sparsity can be extremely problematic. Further complications are created when lexical sparsity is compounded with the desire to build up alignments over increasingly larger contiguous phrases. To address this issue we propose an alternative to word alignment: morpheme alignment, an alignment that operates over the smallest meaningful subsequences of words. 
By striving to keep a direct 1to-1 mapping between corresponding semantic units across languages, we hope to find better estimates 895 ون the red flower cherven tsvet DET ADJ NN ia i s they want to h nA^ PRN VB INF dyr sr~d y teach him VB PRN nw y червен цвет я и теه أن ريد درّس ي ي te Figure 1: A depiction of morpheme-level alignment. Here dark lines indicate the more stem-focused alignment strategy of a traditional word or phrasal alignment model, while thin lines indicate a more fine-grained alignment across morphemes. In the alignment between English and Bulgarian (a) the morpheme-specific alignment reduces sparsity in the adjective and noun (red flowers) by isolating the stems from their inflected forms. Despite Arabic exhibiting templatic morphology, there are still phenomena which can be accounted for with a simpler segmentational approach. The Arabic alignment (b) demonstrates how the plural marker on English they would normally create sparsity by being marked in three additional places, two of them inflections in larger wordforms. for the alignment statistics. Our results show that this improves alignment quality. In the following sections we describe an unsupervised dynamic graphical model approach to monolingual morphological segmentation and bilingual morpheme alignment using a linguistically motivated statistical model. In a bilingual setting, the model relies on morpho-syntactic and lexical source-side information (part-of-speech, morphological segmentation, dependency analysis) while learning a morpheme segmentation over the target language. In a monolingual setting we introduce effective use of context by feature-rich modeling of the probabilities of morphemes, morphemetransitions, and word boundaries. These additional sources of information provide powerful bias for unsupervised learning, without increasing the asymptotic running time of the inference algorithm. Used as a monolingual model, our system significantly improves the state-of-the-art segmentation performance on three Arabic and Hebrew datasets. Used as a bilingual model, our system outperforms the state-of-the-art WDHMM (He, 2007) word alignment model as measured by alignment error rate (AER). In agreement with some previous work on tokenization/morpheme segmentation for alignment (Chung and Gildea, 2009; Habash and Sadat, 2006), we find that the best segmentation for alignment does not coincide with the gold-standard segmentation and our bilingual model does not outperform our monolingual model in segmentation F-Measure. 2 Model Our model defines the probability of a target language sequence of words (each consisting of a sequence of morphemes), and alignment from target to source morphemes, given a source language sequence of words (each consisting of a sequence of morphemes). An example morpheme segmentation and alignment of phrases in English-Arabic and EnglishBulgarian is shown in Figure 1. In our task setting, the words of the source and target language as well as the morpheme segmentation of the source (English) language are given. The morpheme segmentation of the target language and the alignments between source and target morphemes are hidden. The source-side input, which we assume to be English, is processed with a gold morphological segmentation, part-of-speech, and dependency tree analysis. While these tools are unavailable in resource-poor languages, they are often available for at least one of the modeled languages in common translation tasks. 
This additional information then provides a source of features and conditioning information for the translation model. Our model is derived from the hidden-markov model for word alignment (Vogel et al., 1996; Och and Ney, 2000). Based on it, we define a dynamic 896 cherven.i.te flower the red = 'cherven' μ1 = 2 a1 = OFF b1 = OFF b2 = ON b3 = 'i' μ2 = 'te' μ3 = 4 = 1 a2 a3 s = stem t1 = suffix = suffix t2 t3 Figure 2: A graphical depiction of the model generating the transliteration of the first Bulgarian word from Figure 1. Trigram dependencies and some incoming/outgoing arcs have been omitted for clarity. graphical model which lets us encode more linguistic intuition about morpheme segmentation and alignment: (i) we extend it to a hidden semi-markov model to account for hidden target morpheme segmentation; (ii) we introduce an additional observation layer to model observed word boundaries and thus truly represent target sentences as words composed of morphemes, instead of just a sequence of tokens; (iii) we employ hierarchically smoothed models and log-linear models to capture broader context and to better represent the morpho-syntactic mapping between source and target languages. (iv) we enrich the hidden state space of the model to encode morpheme types {prefix,suffix,stem}, in addition to morpheme alignment and segmentation information. Before defining our model formally, we introduce some notation. Each possible morphological segmentation and alignment for a given sentence pair can be described by the following random variables: Let µ1µ2 . . . µI denote I morphemes in the segmentation of the target sentence. For the Example in Figure 1 (a) I=5 and µ1=cherven, µ2=i . . . , and µ5=ia. Let b1, b2, . . . bI denote Bernoulli variables indicating whether there is a word boundary after morpheme µi. For our example, b3 = 1, b5 = 1, and the other bi are 0. Let c1, c2, . . . , cT denote the non-space characters in the target string, and wb1, . . . , wbT denote Bernoulli variables indicating whether there is a word boundary after the corresponding target character. For our example, T = 14 (for the Cyrillic version) and the only wb variables that are on are wb9 and wb14. The c and wb variables are observed. Let s1s2 . . . sT denote Bernoulli segmentation variables indicating whether there is a morpheme boundary after the corresponding character. The values of the hidden segmentation variables s together with the values of the observed c and wb variables uniquely define the values of the morpheme variables µi and the word boundary variables bi. Naturally we enforce the constraint that a given word boundary wbt = 1 entails a segmentation boundary st = 1. If we use bold letters to indicate a vector of corresponding variables, we have that c, wb, s=µ, b. We will define the assumed parametric form of the learned distribution using the µ, b but the inference algorithms are implemented in terms of the s and wb variables. We denote the observed source language morphemes by e1 . . . eJ. Our model makes use of additional information from the source which we will mention when necessary. The last part of the hidden model state represents the alignment between target and source morphemes and the type of target morphemes. Let tai = [ai, ti], i = 1 . . . I indicate a factored state where ai represents one of the J source words (or NULL) and ti represents one of the three morpheme types {prefix,suffix,stem}. ai is the source morpheme aligned to µi and ti is the type of µi. 
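To make the correspondence c, wb, s = µ, b concrete, the following sketch recovers the morphemes and morpheme-level word boundaries from the character-level variables (the encoding and function name are ours, used only for illustration):

    def to_morphemes(chars, seg, word_bounds):
        # Map (c, s, wb) to (mu, b): cut the character string after every position t
        # with s_t = 1, and record whether each resulting morpheme ends a word.
        # chars: the non-space characters c_1 .. c_T as a string
        # seg: 0/1 segmentation variables s_1 .. s_T (s_t = 1: a morpheme ends at t)
        # word_bounds: 0/1 word-boundary variables wb_1 .. wb_T
        morphemes, boundaries, start = [], [], 0
        for t, (s_t, wb_t) in enumerate(zip(seg, word_bounds), start=1):
            assert s_t == 1 or wb_t == 0, "a word boundary entails a morpheme boundary"
            if s_t == 1:
                morphemes.append(chars[start:t])
                boundaries.append(wb_t)
                start = t
        return morphemes, boundaries

    # The first Bulgarian word of Figure 1a (transliterated "chervenite"), segmented
    # as in Figure 2: cuts after characters 7, 8, and 10, with a word boundary after 10.
    seg = [1 if t in (7, 8, 10) else 0 for t in range(1, 11)]
    wb = [1 if t == 10 else 0 for t in range(1, 11)]
    print(to_morphemes("chervenite", seg, wb))   # (['cherven', 'i', 'te'], [0, 0, 1])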
We are finally ready to define the desired probability of target morphemes, morpheme types, alignments, and word boundaries given the source:

  P(µ, ta, b | e) = ∏_{i=1}^{I} PT(µi | tai, bi−1, bi−2, µi−1, e) · PB(bi | µi, µi−1, tai, bi−1, bi−2, e) · PD(tai | tai−1, bi−1, e) · LP(|µi|)

We now describe each of the factors used by our model in more detail. The formulation makes explicit the full extent of dependencies we have explored in this work. By simplifying the factors we can recover several previously used models for monolingual segmentation and bilingual joint segmentation and alignment. We discuss the relationship of this model to prior work and study the impact of the novel components in our experiments. When the source sentence is assumed to be empty (and thus contains no morphemes to align to), our model turns into a monolingual morpheme segmentation model, which we show exceeds the performance of previous state-of-the-art models. When we remove the word boundary component, reduce the order of the alignment transition, omit the morphological type component of the state space, and retain only minimal dependencies in the morpheme translation model, we recover the joint tokenization and alignment model based on IBM Model-1 proposed by Chung and Gildea (2009).

2.1 Morpheme Translation Model

In the model equation, PT denotes the morpheme translation probability. The standard dependence on the aligned source morpheme is represented as a dependence on the state tai and the whole annotated source sentence e. We experimented with multiple options for the amount of conditioning context to be included. When most context is used, there is a bigram dependency on target language morphemes as well as dependence on the two previous boundary variables and on the aligned source morpheme eai as well as its POS tag. When multiple conditioning variables are used, we assume a special linearly interpolated backoff form of the model, similar to models routinely used in language modeling. As an example, suppose we estimate the morpheme translation probability as PT(µi | eai, ti). We estimate this in the M-step, given expected joint counts c(µi, eai, ti) and marginal counts derived from these, as follows:

  PT(µi | eai, ti) = ( c(µi, eai, ti) + α2 · P2(µi | ti) ) / ( c(eai, ti) + α2 )

The lower-order distributions are estimated recursively in a similar way:

  P2(µi | ti) = ( c(µi, ti) + α1 · P1(µi) ) / ( c(ti) + α1 )
  P1(µi) = ( c(µi) + α0 · P0(µi) ) / ( c(·) + α0 )

For P0 we used a unigram character language model. This hierarchical smoothing can be seen as an approximation to hierarchical Dirichlet priors with maximum a posteriori estimation. Note how our explicit treatment of the word boundary variables bi allows us to use a higher-order dependence on these variables. If word boundaries were treated as morphemes of their own, we would need a four-gram model on target morphemes to represent this dependency, which we are now representing using only a bigram model on hidden morphemes.

2.2 Word Boundary Generation Model

The PB distribution denotes the probability of generating word boundaries. As a sequence model of sentences, the basic hidden semi-Markov model completely ignores word boundaries. However, they can be powerful predictors of morpheme segments (for example, by indicating that common prefixes follow word boundaries, or that common suffixes precede them).
The log-linear model of (Poon et al., 2009) uses word boundaries as observed left and right context features, and Morfessor (Creutz and Lagus, 2007) includes boundaries as special boundary symbols which can inform about the morpheme state of a morpheme (but not its identity). Our model includes a special generative process for boundaries which is conditioned not only on the previous morpheme state but also the previous two morphemes and other boundaries. Due to the fact that boundaries are observed their inclusion in the model does not increase the complexity of inference. The inclusion of this distribution lets us estimate the likelihood of a word consisting of one,two,three, or more morphemes. It also allows the estimation of likelihood that particular morphemes are in the beginning/middle/end of words. Through the included factored state variable tai word boundaries can also inform about the likelihood of a morpheme aligned to a source word of a particular pos tag to end a word. We discuss the particular conditioning context for this distribution we found most helpful in our experiments. Similarly to the PT distribution, we make use of multiple context vectors by hierarchical smoothing of distributions of different granularities. 898 2.3 Distortion Model PD indicates the distortion modeling distribution we use. 1 Traditional distortion models represent P(aj|aj−1, e), the probability of an alignment given the previous alignment, to bias the model away from placing large distances between the aligned tokens of consecutively sequenced tokens. In addition to modeling a larger state space to also predict morpheme types, we extend this model by using a special log-linear model form which allows the integration of rich morpho-syntactic context. Log-linear models have been previously used in unsupervised learning for local multinomial distributions like this one in e.g. (Berg-Kirkpatrick et al., 2010), and for global distributions in (Poon et al., 2009). The special log-linear form allows the inclusion of features targeted at learning the transitions among morpheme types and the transitions between corresponding source morphemes. The set of features with example values for this model is depicted in Table 3. The example is focussed on the features firing for the transition from the Bulgarian suffix te aligned to the first English morpheme µi−1 = te, ti−1=suffix, ai−1=1, to the Bulgarian root tsvet aligned to the third English morpheme µi = tsvet, ti=root, ai=3. The first feature is the absolute difference between ai and ai−1 + 1 and is similar to information used in other HMM word alignment models (Och and Ney, 2000) as well as phrasetranslation models (Koehn, 2004). The alignment positions ai are defined as indices of the aligned source morphemes. We additionally compute distortion in terms of distance in number of source words that are skipped. This distance corresponds to the feature name WORD DISTANCE. Looking at both kinds of distance is useful to capture the intuition that consecutive morphemes in the same target word should prefer to have a higher proximity of their aligned source words, as compared to consecutive morphemes which are not part of the same target word. The binned distances look at the sign of the distortion and bin the jumps into 5 bins, pooling the distances greater than 2 together. 
The feature SAME TARGET WORD indicates whether the two consecutive morphemes are part of the same word. In this case, they are not. This feature is not useful on its own, because it does not distinguish between different alignment possibilities for tai, but it is useful in conjunction with other features to differentiate the transition behaviors within and across target words. The DEP RELATION feature indicates the direct dependency relation between the source words containing the aligned source morphemes, if such a relationship exists. We also represent alignments to null, with one null for each source word, similarly to Och and Ney (2000), and have a feature to indicate null. Additionally, we make use of several feature conjunctions involving the null, same target word, and distance features.

[Footnote 1: To reduce complexity of exposition, we have omitted the final transition to a special state beyond the source sentence end after the last target morpheme.]

Figure 3: Features in the log-linear distortion model firing for the transition from te:suffix:1 to tsvet:root:3 in the example sentence pair in Figure 1a.

  Feature                   Value
  MORPH DISTANCE            1
  WORD DISTANCE             1
  BINNED MORPH DISTANCE     fore1
  BINNED WORD DISTANCE      fore1
  MORPH STATE TRANSITION    suffix-root
  SAME TARGET WORD          False
  POS TAG TRANSITION        DET-NN
  DEP RELATION              DET←NN
  NULL ALIGNMENT            False
  conjunctions              ...

2.4 Length Penalty

Following Chung and Gildea (2009) and Liang and Klein (2009), we use an exponential length penalty on morpheme lengths to bias the model away from the maximum-likelihood under-segmentation solution. The form of the penalty is

  LP(|µi|) = 1 / e^(|µi|^lp) ,

where lp is a hyper-parameter indicating the power that the morpheme length is raised to. We fit this parameter using an annotated development set, to optimize morpheme-segmentation F1. The model is extremely sensitive to this value and performs quite poorly if such a penalty is not used.

2.5 Inference

We perform inference by EM training on the aligned sentence pairs. In the E-step we compute expected
To reduce the running time of the model we limit the space of considered morpheme boundaries as follows: Given the target side of the corpus, we derive a list of K most frequent prefixes and suffixes using a simple trie-based method proposed by (Schone and Jurafsky, 2000).2 After we determine a list of allowed prefixes and suffixes we restrict our model to allow only segmentations of the form : ((p*)r(s*))+ where p and s belong to the allowed prefixes and suffixes and r can match any substring. We determine the number of prefixes and suffixes to consider using the maximum recall achievable by limiting the segmentation points in this way. Restricting the allowable segmentations in this way not only improves the speed of inference but also leads to improvements in segmentation accuracy. 2Words are inserted into a trie with each complete branch naturally identifying a potential suffix, inclusive of its subbranches. The list comprises of the K most frequent of these complete branches. Inserting the reversed words will then yield potential prefixes. 3 Evaluation For a majority of our testing we borrow the parallel phrases corpus used in previous work (Snyder and Barzilay, 2008), which we refer to as S&B. The corpus consists of 6,139 short phrases drawn from English, Hebrew, and Arabic translations of the Bible. We use an unmodified version of this corpus for the purpose of comparing morphological segmentation accuracy. For evaluating morpheme alignment accuracy, we have also augmented the English/Arabic subset of the corpus with a gold standard alignment between morphemes. Here morphological segmentations were obtained using the previously-annotated gold standard Arabic morphological segmentation, while the English was preprocessed with a morphological analyzer and then further hand annotated with corrections by two native speakers. Morphological alignments were manually annotated. Additionally, we evaluate monolingual segmentation models on the full Arabic Treebank (ATB), also used for unsupervised morpheme segmentation in (Poon et al., 2009). 4 Results 4.1 Morpheme Segmentation We begin by evaluating a series of models which are simplifications of our complete model, to assess the impact of individual modeling decisions. We focus first on a monolingual setting, where the source sentence aligned to each target sentence is empty. Unigram Model with Length Penalty The first model we study is the unigram monolingual segmentation model using an exponential length penalty as proposed by (Liang and Klein, 2009; Chung and Gildea, 2009), which has been shown to be quite accurate. We refer to this model as Model-UP (for unigram with penalty). It defines the probability of a target morpheme sequence as follows: (µ1 . . . µI) = (1 −θ) QI i=1 θPT (µi)LP(|µi|) This model can be (almost) recovered as a special case of our full model, if we drop the transition and word boundary probabilities, do not model morpheme types, and use no conditioning for the morpheme translation model. The only parameter not present in our model is the probability θ of generating a morpheme as opposed to stopping to gener900 ate morphemes (with probability 1 −θ). We experimented with this additional parameter, but found it had no significant impact on performance, and so we do not report results including it. 
We select the value of the length penalty power by a gird search in the range 1.1 to 2.0, using .1 increments and choosing the values resulting in best performance on a development set containing 500 phrase pairs for each language. We also select the optimal number of prefixes/suffixes to consider by measuring performance on the development set. 3 Morpheme Type Models The next model we consider is similar to the unigram model with penalty, but introduces the use of the hidden ta states which indicate only morpheme types in the monolingual setting. We use the ta states and test different configurations to derive the best set of features that can be used in the distortion model utilizing these states, and the morpheme translation model. We consider two variants: (1) Model-HMMP-basic (for HMM model with length penalty), which includes the hidden states but uses them with a simple uniform transition matrix P(tai|tai−1, bi−1) (uniform over allowable transitions but forbidding the prefixes from transitioning directly to suffixes, and preventing suffixes from immediately following a word boundary), and (2) a richer model Model-HMMP which is allowed to learn a log-linear distortion model and a feature rich translation model as detailed in the model definition. This model is allowed to use word boundary information for conditioning (because word boundaries are observed), but does not include the PB predictive word boundary distribution. Full Model with Word Boundaries Finally we consider our full monolingual model which also includes the distribution predicting word boundary variables bi. We term this model ModelFullMono. We detail the best context features for the conditional PD distribution for each language. We initialize this model with the morpheme trans3For the S&B Arabic dataset, we selected to use seven prefixes and seven suffixes, which correspond to maximum achievable recall of 95.3. For the S&B Hebrew dataset, we used six prefixes and six suffixes, for a maximum recall of 94.3. The Arabic treebank data required a larger number of affixes: we used seven prefixes and 20 suffixes, for a maximum recall of 98.3. lation unigram distribution of ModelHMMP-basic, trained for 5 iterations. Table 4 details the test set results of the different model configurations, as well as previously reported results on these datasets. For our main results we use the automatically derived list of prefixes and suffixes to limit segmentation points. The names of models that use such limited lists are prefixed by Dict in the Table. For comparison, we also report the results achieved by models that do not limit the segmentation points in this way. As we can see the unigram model with penalty, Dict-Model-UP, is already very strong, especially on the S&B Arabic dataset. When the segmentation points are not limited, its performance is much worse. The introduction of hidden morpheme states in Dict-HMMP-basic gives substantial improvement on Arabic and does not change results much on the other datasets. A small improvement is observed for the unconstrained models.4 When our model includes all components except word boundary prediction, Dict-Model-HMMP, the results are substantially improved on all languages. Model-HMMP is also the first unconstrained model in our sequence to approach or surpass previous state-of-the-art segmentation performance. 
Finally, when the full model Dict-MonoFull is used, we achieve a substantial improvement over the previous state-of-the-art results on all three corpora, a 6.5 point improvement on Arabic, 6.2 point improvement on Hebrew, and a 9.3 point improvement on ATB. The best configuration of this model uses the same distortion model for all languages: using the morph state transition and boundary features. The translation models used only ti for Hebrew and ATB and ti and µi−1 for Arabic. Word boundary was predicted using ti in Arabic and Hebrew, and additionally using bi−1 and bi−2 for ATB. The unconstrained models without affix dictionaries are also very strong, outperforming previous state-ofthe-art models. For ATB, the unconstrained model slightly outperforms the constrained one. The segmentation errors made by this system shed light on how it might be improved. We find the dis4Note that the inclusion of states in HMMP-basic only serves to provide a different distribution over the number of morphemes in a word, so it is interesting it can have a positive impact. 901 Arabic Hebrew ATB P R F1 P R F1 P R F1 UP 88.1 55.1 67.8 43.2 87.6 57.9 79.0 54.6 64.6 Dict-UP 85.8 73.1 78.9 57.0 79.4 66.3 61.6 91.0 73.5 HMMP-basic 83.3 58.0 68.4 43.5 87.8 58.2 79.0 54.9 64.8 Dict-HMMP-basic 84.8 76.3 80.3 56.9 78.8 66.1 69.3 76.2 72.6 HMMP 73.6 76.9 75.2 70.2 73.0 71.6 94.0 76.1 84.1 Dict-HMMP 82.4 81.3 81.8 62.7 77.6 69.4 85.2 85.8 85.5 MonoFull 80.5 87.3 83.8 72.2 71.7 72.0 86.2 88.5 87.4 Dict-MonoFull 86.1 83.2 84.6 73.7 72.5 73.1 92.9 81.8 87.0 Poon et. al 76.0 80.2 78.1 67.6 66.1 66.9 88.5 69.2 77.7 S&B-Best 67.8 77.3 72.2 64.9 62.9 63.9 – – – Morfessor 71.1 60.5 65.4 65.4 57.7 61.3 77.4 72.6 74.9 Figure 4: Results on morphological segmentation achieved by monolingual variants of our model (top) with results from prior work are included for comparison (bottom). Results from models with a small, automatically-derived list of possible prefixes and suffixes are labeled as ”Dict-” followed by the model name. tributions over the frequencies of particular errors follow a Zipfian skew across both S&B datasets, with the Arabic being more pronounced (the most frequent error being made 27 times, with 627 errors being made just once) in comparison with the Hebrew (with the most frequent error being made 19 times, and with 856 isolated errors). However, in both the Arabic and Hebrew S&B tasks we find that a tendency to over-segment certain characters off of their correct morphemes and on to other frequently occurring, yet incorrect, particles is actually the cause of many of these isolated errors. In Arabic the system tends to over segment the character aleph (totally about 300 errors combined). In Hebrew the source of error is not as overwhelmingly directed at a single character, but yod and he, the latter functioning quite similarly to the problematic Arabic character and frequently turn up in the corresponding places of cognate words in Biblical texts. We should note that our models select a large number of hyper-parameters on an annotated development set, including length penalty, hierarchical smoothing parameters α, and the subset of variables to use in each of three component sub-models. This might in part explain their advantage over previousstate-of-the-art models, which might use fewer (e.g. (Poon et al., 2009) and (Snyder and Barzilay, 2008)) or no specifically tuned for these datasets hyperparameters (Morfessor (Creutz and Lagus, 2007)). 
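The precision, recall, and F1 numbers in Figure 4 are computed over morpheme segmentation decisions. As a rough sketch of such a scorer (the exact convention used by the cited systems may differ, e.g., scoring recovered morpheme spans rather than cut points):

    def segmentation_prf(gold_cuts, predicted_cuts):
        # Boundary-level scoring: each cut is a within-word position after which a
        # morpheme ends, e.g. the gold segmentation attach+ment+s of "attachments"
        # gives the cut set {6, 10}.
        true_positives = len(set(gold_cuts) & set(predicted_cuts))
        precision = true_positives / len(predicted_cuts) if predicted_cuts else 0.0
        recall = true_positives / len(gold_cuts) if gold_cuts else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        return precision, recall, f1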
4.2 Alignment Next we evaluate our full bilingual model and a simpler variant on the task of word alignment. We use the morpheme-level annotation of the S&B EnglishArabic dataset and project the morpheme alignments to word alignments. We can thus compare alignment performance of the results of different segmentations. Additionally, we evaluate against a stateof-the-art word alignment system WDHMM (He, 2007), which performs comparably or better than IBM-Model4. The table in Figure 5 presents the results. In addition to reporting alignment error rate for different segmentation models, we report their morphological segmentation F1. The word-alignment WDHMM model performs best when aligning English words to Arabic words (using Arabic as source). In this direction it is able to capture the many-to-one correspondence between English words and arabic morphemes. When we combine alignments in both directions using the standard grow-diag-final method, the error goes up. We compare the (Chung and Gildea, 2009) model (termed Model-1) to our full bilingual model. We can recover Model-1 similarly to Model-UP, except now every morpheme is conditioned on an aligned source morpheme. Our full bilingual model outperforms Model-1 in both AER and segmentation F1. The specific form of the full model was selected as in the previous experiments, by choosing the model with best segmentations of the development set. For Arabic, the best model conditions target mor902 Arabic Hebrew Align P Align R AER P R F1 P R F1 Model-1 (C&G 09) 91.6 81.2 13.9 72.4 76.2 74.3 61.0 71.8 65.9 Bilingual full 91.0 88.3 10.3 90.0 72.0 80.0 63.3 71.2 67.0 WDHMM E-to-A 82.4 96.7 11.1 WDHMM GDF 82.1 94.6 12.1 Figure 5: Alignment Error Rate (AER) and morphological segmentation F1 achieved by bilingual variants of our model. AER performance of WDHMM is also reported. Gold standard alignments are not available for the Hebrew data set. phemes on source morphemes only, uses the boundary model with conditioning on number of morphemes in the word, aligned source part-of-speech, and type of target morpheme. The distortion model uses both morpheme and word-based absolute distortion, binned distortion, morpheme types of states, and aligned source-part-of-speech tags. Our best model for Arabic outperforms WDHMM in word alignment error rate. For Hebrew, the best model uses a similar boundary model configuration but a simpler uniform transition distortion distribution. Note that the bilingual models perform worse than the monolingual ones in segmentation F1. This finding is in line with previous work showing that the best segmentation for MT does not necessarily agree with a particular linguistic convention about what morphemes should contain (Chung and Gildea, 2009; Habash and Sadat, 2006), but contradicts other results (Snyder and Barzilay, 2008). Further experimentation is required to make a general claim. We should note that the Arabic dataset used for word-alignment evaluation is unconventionally small and noisy (the sentences are very short phrases, automatically extracted using GIZA++). Thus the phrases might not be really translations, and the sentence length is much smaller than in standard parallel corpora. This warrants further model evaluation in a large-scale alignment setting. 5 Related Work This work is most closely related to the unsupervised tokenization and alignment models of Chung and Gildea (2009), Xu et al. (2008), Snyder and Barzilay (2008), and Nguyen et al. (2010). 
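For reference, alignment error rate can be computed from sure and possible gold links in the usual way; the following is a minimal sketch of the standard Och-and-Ney definition (our own helper, not the evaluation code used here):

    def aer(sure, possible, predicted):
        # AER = 1 - (|A ∩ S| + |A ∩ P|) / (|A| + |S|), where S are sure gold links,
        # P ⊇ S are possible gold links, and A are predicted links, each a set of
        # (target_index, source_index) pairs.
        a, s = set(predicted), set(sure)
        p = set(possible) | s
        if not a or not s:
            return 1.0
        return 1.0 - (len(a & s) + len(a & p)) / (len(a) + len(s))

When the gold standard contains only sure links (P = S), AER reduces to one minus the F1 of alignment precision and recall, which is consistent with the Arabic numbers in Figure 5 (e.g., P = 91.0 and R = 88.3 give AER ≈ 10.3).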
Chung & Gildea (2009) introduce a unigram model of tokenization based on IBM Model-1,which is a special case of our model. Snyder and Barzilay (2008) proposes a hierarchical Bayesian model that combines the learning of monolingual segmentations and a cross-lingual alignment; their model is very different from ours. Incorporating morphological information into MT has received reasonable attention. For example, Goldwater & McClosky (2005) show improvements when preprocessing Czech input to reflect a morphological decomposition using combinations of lemmatization, pseudowords, and morphemes. Yeniterzi and Oflazer (2010) bridge the morphological disparity between languages in a unique way by effectively aligning English syntactic elements (function words connected by dependency relations) to Turkish morphemes, using rule-based postprocessing of standard word alignment. Our work is partly inspired by that work and attempts to automate both the morpho-syntactic alignment and morphological analysis tasks. 6 Conclusion We have described an unsupervised model for morpheme segmentation and alignment based on Hidden Semi-Markov Models. Our model makes use of linguistic information to improve alignment quality. On the task of monolingual morphological segmentation it produces a new state-of-the-art level on three datasets. The model shows quantitative improvements in both word segmentation and word alignment, but its true potential lies in its finergrained interpretation of word alignment, which will hopefully yield improvements in translation quality. Acknowledgements We thank the ACL reviewers for their valuable comments on earlier versions of this paper, and Michael J. Burling for his contributions as a corpus annotator and to the Arabic aspects of this paper. 903 References Taylor Berg-Kirkpatrick, Alexandre Bouchard-Cote, John DeNero, and Dan Klein. 2010. Unsupervised learning with features. In Proceedings of the North American chapter of the Association for Computational Linguistics (NAACL). Tagyoung Chung and Daniel Gildea. 2009. Unsupervised tokenization for machine translation. In Conference on Empirical Methods in Natural Language Processing (EMNLP). Mathias Creutz and Krista Lagus. 2007. Unsupervised models for morpheme segmentation and morphology learning. ACM Trans. Speech Lang. Process. Nizar Habash and Fatiha Sadat. 2006. Arabic preprocessing schemes for statistical machine translation. In North American Chapter of the Association for Computational Linguistics. Xiaodong He. 2007. Using word-dependent transition models in HMM based word alignment for statistical machine translation. In ACL 2nd Statistical MT workshop, pages 80–87. Philip Koehn. 2004. Pharaoh: A beam search decoder for phrase-based statistical machine translation models. In AMTA. P. Liang and D. Klein. 2009. Online EM for unsupervised models. In North American Association for Computational Linguistics (NAACL). ThuyLinh Nguyen, Stephan Vogel, and Noah A. Smith. 2010. Nonparametric word segmentation for machine translation. In Proceedings of the International Conference on Computational Linguistics. Franz Josef Och and Hermann Ney. 2000. Improved statistical alignment models. In In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics. Hoifung Poon, Colin Cherry, and Kristina Toutanova. 2009. Unsupervised morphological segmentation with log-linear models. In North American Chapter of the Association for Computation Linguistics - Human Language Technologies 2009 conference (NAACL/HLT-09). 
Patrick Schone and Daniel Jurafsky. 2000. Knowlegefree induction of morphology using latent semantic analysis. In Proceedings of the Conference on Computational Natural Language Learning (CoNLL-2000). Benjamin Snyder and Regina Barzilay. 2008. Unsupervised multilingual learning for morphological segmentation. In ACL. Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1996. HMM-based word alignment in statistical translation. In In COLING 96: The 16th Int. Conf. on Computational Linguistics. Jia Xu, Jianfeng Gao, Kristina Toutanova, and Hermann Ney. 2008. Bayesian semi-supervised chinese word segmentation for statistical machine translation. In COLING. Reyyan Yeniterzi and Kemal Oflazer. 2010. Syntax-tomorphology mapping in factored phrase-based statistical machine translation from english to turkish. In Proceedings of Association of Computational Linguistics. 904
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 905–914, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics A Graph Approach to Spelling Correction in Domain-Centric Search Zhuowei Bao University of Pennsylvania Philadelphia, PA 19104, USA [email protected] Benny Kimelfeld IBM Research–Almaden San Jose, CA 95120, USA [email protected] Yunyao Li IBM Research–Almaden San Jose, CA 95120, USA [email protected] Abstract Spelling correction for keyword-search queries is challenging in restricted domains such as personal email (or desktop) search, due to the scarcity of query logs, and due to the specialized nature of the domain. For that task, this paper presents an algorithm that is based on statistics from the corpus data (rather than the query log). This algorithm, which employs a simple graph-based approach, can incorporate different types of data sources with different levels of reliability (e.g., email subject vs. email body), and can handle complex spelling errors like splitting and merging of words. An experimental study shows the superiority of the algorithm over existing alternatives in the email domain. 1 Introduction An abundance of applications require spelling correction, which (at the high level) is the following task. The user intends to type a chunk q of text, but types instead the chunk s that contains spelling errors (which we discuss in detail later), due to uncareful typing or lack of knowledge of the exact spelling of q. The goal is to restore q, when given s. Spelling correction has been extensively studied in the literature, and we refer the reader to comprehensive summaries of prior work (Peterson, 1980; Kukich, 1992; Jurafsky and Martin, 2000; Mitton, 2010). The focus of this paper is on the special case where q is a search query, and where s instead of q is submitted to a search engine (with the goal of retrieving documents that match the search query q). Spelling correction for search queries is important, because a significant portion of posed queries may be misspelled (Cucerzan and Brill, 2004). Effective spelling correction has a major effect on the experience and effort of the user, who is otherwise required to ensure the exact spellings of her queries. Furthermore, it is critical when the exact spelling is unknown (e.g., person names like Schwarzenegger). 1.1 Spelling Errors The more common and studied type of spelling error is word-to-word error: a single word w is misspelled into another single word w′. The specific spelling errors involved include omission of a character (e.g., atachment), inclusion of a redundant character (e.g., attachement), and replacement of characters (e.g., attachemnt). The fact that w′ is a misspelling of (and should be corrected to) w is denoted by w′ →w (e.g., atachment →attachment). Additional common spelling errors are splitting of a word, and merging two (or more) words: • attach ment →attachment • emailattachment →email attachment Part of our experiments, as well as most of our examples, are from the domain of (personal) email search. An email from the Enron email collection (Klimt and Yang, 2004) is shown in Figure 1. Our running example is the following misspelling of a search query, involving multiple types of errors. sadeep kohli excellatach ment → sandeep kohli excel attachment (1) In this example, correction entails fixing sadeep, splitting excellatach, fixing excell, merging atach ment, and fixing atachment. 
Beyond the complexity of errors, this example also illustrates other challenges in spelling correction for search. We need to identify not only that sadeep is misspelled, but also that kohli is correctly spelled. Just having kohli in a dictionary is not enough. 905 Subject: Follow-Up on Captive Generation From: [email protected] X-From: Sandeep Kohli X-To: Stinson Gibner@ECT, Vince J Kaminski@ECT Vince/Stinson, Please find below two attachemnts. The Excell spreadsheet shows some calculations. . .The seond attachement (Word) has the wordings that I think we can send in to the press.. . I am availabel on mobile if you have questions o clarifications... Regards, Sandeep. Figure 1: Enron email (misspelled words are underlined) For example, in kohli coupons the user may very well mean kohls coupons if Sandeep Kohli has nothing to do with coupons (in contrast to the store chain Kohl’s). A similar example is the word nail, which is a legitimate English word, but in the context of email the query nail box is likely to be a misspelling of mail box (unless nail boxes are indeed relevant to the user’s email collection). Finally, while the word kohli is relevant to some email users (e.g., Kohli’s colleagues), it may have no meaning at all to other users. 1.2 Domain Knowledge The common approach to spelling correction utilizes statistical information (Kernighan et al., 1990; Schierle et al., 2007; Mitton, 2010). As a simple example, if we want to avoid maintaining a manually-crafted dictionary to accommodate the wealth of new terms introduced every day (e.g., ipod and ipad), we may decide that atachment is a misspelling of attachment due to both the (relative) proximity between the words, and the fact that attachment is significantly more popular than atachment. As another example, the fact that the expression sandeep kohli is frequent in the domain increases our confidence in sadeep kohli →sandeep kohli (rather than, e.g., sadeep kohli → sudeep kohli). One can further note that, in email search, the fact that Sandeep Kohli sent multiple excel attachments increases our confidence in excell →excel. A source of statistics widely used in prior work is the query log (Cucerzan and Brill, 2004; Ahmad and Kondrak, 2005; Li et al., 2006a; Chen et al., 2007; Sun et al., 2010). However, while query logs are abundant in the context of Web search, in many other search applications (e.g. email search, desktop search, and even small-enterprise search) query logs are too scarce to provide statistical information that is sufficient for effective spelling correction. Even an email provider of a massive scale (such as GMail) may need to rely on the (possibly tiny) query log of the single user at hand, due to privacy or security concerns; moreover, as noted earlier about kohli, the statistics of one user may be relevant to one user, while irrelevant to another. The focus of this paper is on spelling correction for search applications like the above, where querylog analysis is impossible or undesirable (with email search being a prominent example). Our approach relies mainly on the corpus data (e.g., the collection of emails of the user at hand) and external, generic dictionaries (e.g., English). As shown in Figure 1, the corpus data may very well contain misspelled words (like query logs do), and such noise is a part of the challenge. Relying on the corpus has been shown to be successful in spelling correction for text cleaning (Schierle et al., 2007). 
Nevertheless, as we later explain, our approach can still incorporate query-log data as features involved in the correction, as well as means to refine the parameters. 1.3 Contribution and Outline As said above, our goal is to devise spelling correction that relies on the corpus. The corpus often contains various types of information, with different levels of reliability (e.g., n-grams from email subjects and sender information, vs. those from email bodies). The major question is how to effectively exploit that information while addressing the various types of spelling errors such as those discussed in Section 1.1. The key contribution of this work is a novel graph-based algorithm, MaxPaths, that handles the different types of errors and incorporates the corpus data in a uniform (and simple) fashion. We describe MaxPaths in Section 2. We evaluate the effectiveness of our algorithm via an experimental study in Section 3. Finally, we make concluding remarks and discuss future directions in Section 4. 2 Spelling-Correction Algorithm In this section, we describe our algorithm for spelling correction. Recall that given a search query 906 s of a user who intends to phrase q, the goal is to find q. Our corpus is essentially a collection D of unstructured or semistructured documents. For example, in email search such a document is an email with a title, a body, one or more recipients, and so on. As conventional in spelling correction, we devise a scoring function scoreD(r | s) that estimates our confidence in r being the correction of s (i.e., that r is equal to q). Eventually, we suggest a sequence r from a set CD(s) of candidates, such that scoreD(r | s) is maximal among all the candidates in CD(s). In this section, we describe our graphbased approach to finding CD(s) and to determining scoreD(r | s). We first give some basic notation. We fix an alphabet Σ of characters that does not include any of the conventional whitespace characters. By Σ∗ we denote the set of all the words, namely, finite sequences over Σ. A search query s is a sequence w1, . . . , wn, where each wi is a word. For convenience, in our examples we use whitespace instead of comma (e.g., sandeep kohli instead of sandeep, kohli). We use the DamerauLevenshtein edit distance (as implemented by the Jazzy tool) as our primary edit distance between two words r1, r2 ∈Σ∗, and we denote this distance by ed(r1, r2). 2.1 Word-Level Correction We first handle a restriction of our problem, where the search query is a single word w (rather than a general sequence s of words). Moreover, we consider only candidate suggestions that are words (rather than sequences of words that account for the case where w is obtained by merging keywords). Later, we will use the solution for this restricted problem as a basic component in our algorithm for the general problem. Let UD ⊆Σ∗be a finite universal lexicon, which (conceptually) consists of all the words in the corpus D. (In practice, one may want add to D words of auxiliary sources, like English dictionary, and to filter out noisy words; we did so in the site-search domain that is discussed in Section 3.) The set CD(w) of candidates is defined by CD(w) def= {w} ∪{w′ ∈UD | ed(w, w′) ≤δ} . for some fixed number δ. 
Note that CD(w) contains Table 1: Feature set WFD in email search Basic Features ed(w, w′): weighted Damerau-Levenshtein edit distance ph(w, w′): 1 if w and w′ are phonetically equal, 0 otherwise english(w′): 1 is w′ is in English, 0 otherwise Corpus-Based Features logfreq(w′)): logarithm of #occurrences of w′ in the corpus Domain-Specific Features subject(w′): 1 if w′ is in some “Subject” field, 0 otherwise from(w′): 1 if w′ is in some “From” field, 0 otherwise xfrom(w′): 1 if w′ is in some “X-From” field, 0 otherwise w even if w is misspelled; furthermore, CD(w) may contain other misspelled words (with a small edit distance to w) that appear in D. We now define scoreD(w′ | w). Here, our corpus D is translated into a set WFD of word features, where each feature f ∈WFD gives a scoring function scoref(w′ | w). The function scoreD(w′ | w) is simply a linear combination of the scoref(w′ | w): scoreD(w′ | w) def= X f∈WFD af · scoref(w′ | w) As a concrete example, the features of WFD we used in the email domain are listed in Table 1; the resulting scoref(w′|w) is in the spirit of the noisy channel model (Kernighan et al., 1990). Note that additional features could be used, like ones involving the stems of w and w′, and even query-log statistics (when available). Rather than manually tuning the parameters af, we learned them using the well known Support Vector Machine, abbreviated SVM (Cortes and Vapnik, 1995), as also done by Schaback and Li (2007) for spelling correction. We further discuss this learning step in Section 3. We fix a natural number k, and in the sequel we denote by topD(w) a set of k words w′ ∈CD(w) with the highest scoreD(w′ | w). If |CD(w)| < k, then topD(w) is simply CD(w). 2.2 Query-Level Correction: MaxPaths We now describe our algorithm, MaxPaths, for spelling correction. The input is a (possibly misspelled) search query s = s1, . . . , sn. As done in the word-level correction, the algorithm produces a set CD(s) of suggestions and determines the values 907 Algorithm 1 MaxPaths Input: a search query s Output: a set CD(s) of candidate suggestions r, ranked by scoreD(r | s) 1: Find the strongly plausible tokens 2: Construct the correction graph 3: Find top-k full paths (with the largest weights) 4: Re-rank the paths by word correlation scoreD(r | s), for all r ∈CD(s), in order to rank CD(s). A high-level overview of MaxPaths is given in the pseudo-code of Algorithm 1. In the rest of this section, we will detail each of the four steps in Algorithm 1. The name MaxPaths will become clear towards the end of this section. We use the following notation. For a word w = c1 · · · cm of m characters ci and integers i < j in {1, . . . , m + 1}, we denote by w[i,j) the word ci · · · cj−1. For two words w1, w2 ∈Σ∗, the word w1w2 ∈Σ∗is obtained by concatenating w1 and w2. Note that for the search query s = s1, . . . , sn it holds that s1 · · · sn is a single word (in Σ∗). We denote the word s1 · · · sn by ⌊s⌋. For example, if s1 = sadeep and s2 = kohli, then s corresponds to the query sadeep kohli while ⌊s⌋is the word sadeepkohli; furthermore, ⌊s⌋[1,7) = sadeep. 2.2.1 Plausible Tokens To support merging and splitting, we first identify the possible tokens of the given query s. For example, in excellatach ment we would like to identify excell and atach ment as tokens, since those are indeed the tokens that the user has in mind. Formally, suppose that ⌊s⌋= c1 · · · cm. A token is a word ⌊s⌋[i,j) where 1 ≤i < j ≤m + 1. 
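A small illustration of the notation just introduced: ⌊s⌋ concatenates the query words, and a token ⌊s⌋[i,j) is a substring of ⌊s⌋ together with its 1-based span. The helper names below are mine, not the paper's.

```python
def flatten(s_words):
    """⌊s⌋: the query words s1, ..., sn concatenated into a single word."""
    return "".join(s_words)

def all_tokens(s_words):
    """All tokens ⌊s⌋[i, j) with 1 <= i < j <= m + 1, kept with their spans."""
    w = flatten(s_words)
    m = len(w)
    return [(w[i - 1:j - 1], i, j)
            for i in range(1, m + 1)
            for j in range(i + 1, m + 2)]

query = ["sadeep", "kohli"]
print(flatten(query))                         # 'sadeepkohli'
print(("sadeep", 1, 7) in all_tokens(query))  # True: ⌊s⌋[1,7) = 'sadeep'
```

Only a small fraction of these substrings survive the plausibility filtering described in what follows.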
To simplify the presentation, we make the (often false) assumption that a token ⌊s⌋[i,j) uniquely identifies i and j (that is, ⌊s⌋[i,j) ̸= ⌊s⌋[i′,j′) if i ̸= i′ or j ̸= j′); in reality, we should define a token as a triple (⌊s⌋[i,j), i, j). In principle, every token ⌊s⌋[i,j) could be viewed as a possible word that user meant to phrase. However, such liberty would require our algorithm to process a search space that is too large to manage in reasonable time. Instead, we restrict to strongly plausible tokens, which we define next. A token w = ⌊s⌋[i,j) is plausible if w is a word of s, or there is a word w′ ∈CD(w) (as defined in Section 2.1) such that scoreD(w′ | w) > ϵ for some fixed number ϵ. Intuitively, w is plausible if it is an original token of s, or we have a high confidence in our word-level suggestion to correct w (note that the suggested correction for w can be w itself). Recall that ⌊s⌋= c1 · · · cm. A tokenization of s is a sequence j1, . . . , jl, such that j1 = 1, jl = m + 1, and ji < ji+1 for 1 ≤i < l. The tokenization j1, . . . , jl induces the tokens ⌊s⌋[j1,j2),.. . ,⌊s⌋[jl−1,jl). A tokenization is plausible if each of its induced tokens is plausible. Observe that a plausible token is not necessarily induced by any plausible tokenization; in that case, the plausible token is useless to us. Thus, we define a strongly plausible token, abbreviated sp-token, which is a token that is induced by some plausible tokenization. As a concrete example, for the query excellatach ment, the sp-tokens in our implementation include excellatach, ment, excell, and atachment. As the first step (line 1 in Algorithm 1), we find the sp-tokens by employing an efficient (and fairly straightforward) dynamic-programming algorithm. 2.2.2 Correction Graph In the next step (line 2 in Algorithm 1), we construct the correction graph, which we denote by GD(s). The construction is as follows. We first find the set topD(w) (defined in Section 2.1) for each sp-token w. Table 2 shows the sptokens and suggestions thereon in our running example. This example shows the actual execution of our implementation within email search, where s is the query sadeep kohli excellatach ment; for clarity of presentation, we omitted a few sp-tokens and suggested corrections. Observe that some of the corrections in the table are actually misspelled words (as those naturally occur in the corpus). A node of the graph GD(s) is a pair ⟨w, w′⟩, where w is an sp-token and w′ ∈topD(w). Recall our simplifying assumption that a token ⌊s⌋[i,j) uniquely identifies the indices i and j. The graph GD(s) contains a (directed) edge from a node ⟨w1, w′ 1⟩to a node ⟨w2, w′ 2⟩if w2 immediately follows w1 in ⌊q⌋; in other words, GD(s) has an edge from ⟨w1, w′ 1⟩ to ⟨w2, w′ 2⟩whenever there exist indices i, j and k, such that w1 = ⌊s⌋[i,j) and w2 = ⌊s⌋[j,k). Observe that GD(s) is a directed acyclic graph (DAG). 908 except excell excel excellence excellent sandeep jaideep kohli attachement attachment attached sandeep kohli sent meet ment Figure 2: The graph GD(s) For example, Figure 2 shows GD(s) for the query sadeep kohli excellatach ment, with the sp-tokens w and the sets topD(w) being those of Table 2. For now, the reader should ignore the node in the grey box (containing sandeep kohli) and its incident edges. For simplicity, in this figure we depict each node ⟨w, w′⟩by just mentioning w′; the word w is in the first row of Table 2, above w′. 2.2.3 Top-k Paths Let P = ⟨w1, w′ 1⟩→· · · →⟨wk, w′ k⟩be a path in GD(s). 
We say that P is full if ⟨w1, w′ 1⟩has no incoming edges in GD(s), and ⟨wk, w′ k⟩has no outgoing edges in GD(s). An easy observation is that, since we consider only strongly plausible tokens, if P is full then w1 · · · wk = ⌊s⌋; in that case, the sequence w′ 1, . . . , w′ k is a suggestion for spelling correction, and we denote it by crc(P). As an example, Figure 3 shows two full paths P1 and P2 in the graph GD(s) of Figure 2. The corrections crc(Pi), for i = 1, 2, are jaideep kohli excellent ment and sandeep kohli excel attachement, respectively. To obtain corrections crc(P) with high quality, we produce a set of k full paths with the largest weights, for some fixed k; we denote this set by topPathsD(s). The weight of a path P, denoted weight(P), is the sum of the weights of all the nodes and edges in P, and we define the weights of nodes and edges next. To find these paths, we use a well known efficient algorithm (Eppstein, 1994). kohli kohli excellent ment excel attachment jaideep sandeep P1 P2 Figure 3: Full paths in the graph GD(s) of Figure 2 Consider a node u = ⟨w, w′⟩of GD(s). In the construction of GD(s), zero or more merges of (part of) original tokens have been applied to obtain the token w; let #merges(w) be that number. Consider an edge e of GD(s) from a node u1 = ⟨w1, w′ 1⟩to u2 = ⟨w2, w′ 2⟩. In s, either w1 and w2 belong to different words (i.e., there is a whitespace between them) or not; in the former case define #splits(e) = 0, and in the latter #splits(e) = 1. We define: weight(u) def= scoreD(w′ | w) + am · #merges(w) weight(e) def= as · #splits(e) Note that am and as are negative, as they penalize for merges and splits, respectively. Again, in our implementations, we learned am and as by means of SVM. Recall that topPathsD(s) is the set of k full paths (in the graph GD(s)) with the largest weights. From topPathsD(s) we get the set CD(s) of candidate suggestions: CD(s) def= {crc(P) | P ∈topPathsD(s)} . 2.2.4 Word Correlation To compute scoreD(r|s) for r ∈CD(s), we incorporate correlation among the words of r. Intuitively, we would like to reward a candidate with pairs of words that are likely to co-exist in a query. For that, we assume a (symmetric) numerical function crl(w′ 1, w′ 2) that estimates the extent to which the words w′ 1 and w′ 2 are correlated. As an example, in the email domain we would like crl(kohli, excel) to be high if Kohli sent many emails with excel attachments. Our implementation of crl(w′ 1, w′ 2) essentially employs pointwise mutual information that has also been used in (Schierle et al., 2007), and that 909 Table 2: topD(w) for sp-tokens w sadeep kohli excellatach ment excell atachment sandeep kohli excellent ment excel attachment jaideep excellence sent excell attached meet except attachement compares the number of documents (emails) containing w′ 1 and w′ 2 separately and jointly. Let P ∈topPathsD(s) be a path. We denote by crl(P) a function that aggregates the numbers crl(w′ 1, w′ 2) for nodes ⟨w1, w′ 1⟩and ⟨w2, w′ 2⟩ of P (where ⟨w1, w′ 1⟩and ⟨w2, w′ 2⟩are not necessarily neighbors in P). Over the email domain, our crl(P) is the minimum of the crl(w′ 1, w′ 2). We define scoreD(P) = weight(P) + crl(P). To improve the performance, in our implementation we learned again (re-trained) all the parameters involved in scoreD(P). Finally, as the top suggestions we take crc(P) for full paths P with highest scoreD(P). Note that crc(P) is not necessarily injective; that is, there can be two full paths P1 ̸= P2 satisfying crc(P1) = crc(P2). 
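The sketch below pulls the pieces of Section 2.2 together under strong simplifications: node weights are assumed to be given, edge split penalties and the correlation term are omitted, and full paths are enumerated exhaustively instead of with Eppstein's k-shortest-paths algorithm, which is acceptable only because the toy graph is tiny. Its last step keeps, for every distinct correction, the best-scoring path that spells it, which is exactly the "best evidence" rule formalized next.

```python
def full_paths(nodes, total_len):
    """Enumerate full paths in a simplified correction DAG.

    `nodes` maps a span (i, j) over ⌊s⌋ to a list of (suggestion, weight) pairs,
    e.g. {(1, 7): [("sandeep", 2.1)], (7, 12): [("kohli", 2.5)]}.
    A full path covers ⌊s⌋, i.e. it runs from position 1 to total_len + 1.
    """
    paths = []

    def extend(pos, path, weight):
        if pos == total_len + 1:
            paths.append((weight, tuple(path)))
            return
        for (i, j), suggestions in nodes.items():
            if i == pos:
                for word, w in suggestions:
                    extend(j, path + [word], weight + w)

    extend(1, [], 0.0)
    return paths

def best_corrections(nodes, total_len, k=3):
    """Group paths by the correction they spell and keep the best evidence."""
    best = {}
    for weight, correction in full_paths(nodes, total_len):
        if weight > best.get(correction, float("-inf")):
            best[correction] = weight
    ranked = sorted(best.items(), key=lambda kv: kv[1], reverse=True)
    return [(" ".join(c), w) for c, w in ranked[:k]]

# ⌊s⌋ = 'sadeepkohli' (length 11); the node weights are made up for illustration.
nodes = {
    (1, 7): [("sandeep", 2.1), ("jaideep", 1.4)],
    (7, 12): [("kohli", 2.5)],
    (1, 12): [("sandeepkohli", 0.3)],
}
print(best_corrections(nodes, 11))  # 'sandeep kohli' scores highest
```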
Thus, in effect, scoreD(r|s) is determined by the best evidence of r; that is, scoreD(r | s) def= max{scoreD(P) | crc(P) = r∧ P ∈topPathsD(s)} . Note that our final scoring function essentially views P as a clique rather than a path. In principle, we could define GD(s) in a way that we would extract the maximal cliques directly without finding topPathsD(s) first. However, we chose our method (finding top paths first, and then re-ranking) to avoid the inherent computational hardness involved in finding maximal cliques. 2.3 Handling Expressions We now briefly discuss our handling of frequent ngrams (expressions). We handle n-grams by introducing new nodes to the graph GD(s); such a new node u is a pair ⟨t, t′⟩, where t is a sequence of n consecutive sp-tokens and t′ is a n-gram. The weight of such a node u is rewarded for constituting a frequent or important n-gram. An example of such a node is in the grey box of Figure 2, where sandeep kohli is a bigram. Observe that sandeep kohli may be deemed an important bigram because it occurs as a sender of an email, and not necessarily because it is frequent. An advantage of our approach is avoidance of over-scoring due to conflicting n-grams. For example, consider the query textile import expert, and assume that both textile import and import export (with an “o” rather than an “e”) are frequent bigrams. If the user referred to the bigram textile import, then expert is likely to be correct. But if she meant for import export, then expert is misspelled. However, only one of these two options can hold true, and we would like textile import export to be rewarded only once—for the bigram import export. This is achieved in our approach, since a full path in GD(s) may contain either a node for textile import or a node for import export, but it cannot contain nodes for both of these bigrams. Finally, we note that our algorithm is in the spirit of that of Cucerzan and Brill (2004), with a few inherent differences. In essence, a node in the graph they construct corresponds to what we denote here as ⟨w, w′⟩in the special case where w is an actual word of the query; that is, no re-tokenization is applied. They can split a word by comparing it to a bigram. However, it is not clear how they can split into non-bigrams (without a huge index) and to handle simultaneous merging and splitting as in our running example (1). Furthermore, they translate bigram information into edge weights, which implies that the above problem of over-rewarding due to conflicting bigrams occurs. 3 Experimental Study Our experimental study aims to investigate the effectiveness of our approach in various settings, as we explain next. 3.1 Experimental Setup We first describe our experimental setup, and specifically the datasets and general methodology. Datasets. The focus of our experimental study is on personal email search; later on (Section 3.6), we will consider (and give experimental results for) a totally different setting—site search over www. ibm.com, which is a massive and open domain. Our dataset (for the email domain) is obtained from 910 the Enron email collection (Bekkerman et al., 2004; Klimt and Yang, 2004). Specifically, we chose the three users with the largest number of emails. We refer to the three email collections by the last names of their owners: Farmer, Kaminski and Kitchen. Each user mailbox is a separate domain, with a separate corpus D, that one can search upon. 
Due to the absence of real user queries, we constructed our dataset by conducting a user study, as described next. For each user, we randomly sampled 50 emails and divided them into 5 disjoint sets of 10 emails each. We gave each 10-email set to a unique human subject that was asked to phrase two search queries for each email: one for the entire email content (general query), and the other for the From and X-From fields (sender query). (Figure 1 shows examples of the From and X-From fields.) The latter represents queries posed against a specific field (e.g., using “advanced search”). The participants were not told about the goal of this study (i.e., spelling correction), and the collected queries have no spelling errors. For generating spelling errors, we implemented a typo generator.1 This generator extends an online typo generator (Seobook, 2010) that produces a variety of spelling errors, including skipped letter, doubled letter, reversed letter, skipped space (merge), missed key and inserted key; in addition, our generator produces inserted space (split). When applied to a search query, our generator adds random typos to each word, independently, with a specified probability p that is 50% by default. For each collected query (and for each considered value of p) we generated 5 misspelled queries, and thereby obtained 250 instances of misspelled general queries and 250 instances of misspelled sender queries. Methodology. We compared the accuracy of MaxPaths (Section 2) with three alternatives. The first alternative is the open-source Jazzy, which is a widely used spelling-correction tool based on (weighted) edit distance. The second alternative is the spelling correction provided by Google. We provided Jazzy with our unigram index (as a dictionary). However, we were not able to do so with Google, as we used remote access via its Java API (Google, 2010); hence, the Google tool is un1The queries and our typo generator are publicly available at https://dbappserv.cis.upenn.edu/spell/. aware of our domain, but is rather based on its own statistics (from the World Wide Web). The third alternative is what we call WordWise, which applies word-level correction (Section 2.1) to each input query term, independently. More precisely, WordWise is a simplified version of MaxPaths, where we forbid splitting and merging of words (i.e., only the original tokens are considered), and where we do not take correlation into account. Our emphasis is on correcting misspelled queries, rather than recognizing correctly spelled queries, due to the role of spelling in a search engine: we wish to provide the user with the correct query upon misspelling, but there is no harm in making a suggestion for correctly spelled queries, except for visual discomfort. Hence, by default accuracy means the number of properly corrected queries (within the top-k suggestions) divided by the number of the misspelled queries. An exception is in Section 3.5, where we study the accuracy on correct queries. Since MaxPaths and WordWise involve parameter learning (SVM), the results for them are consistently obtained by performing 5-folder cross validation over each collection of misspelled queries. 3.2 Fixed Error Probability Here, we compare MaxPaths to the alternatives when the error probability p is fixed (0.5). We consider only the Kaminski dataset; the results for the other two datasets are similar. Figure 4(a) shows the accuracy, for general queries, of top-k suggestions for k = 1, k = 3 and k = 10. 
Note that we can get only one (top-1) suggestion from Google. As can be seen, MaxPaths has the highest accuracy in all cases. Moreover, the advantage of MaxPaths over the alternatives increases as k increases, which indicates potential for further improving MaxPaths. Figure 4(b) shows the accuracy of top-k suggestions for sender queries. Overall, the results are similar to those of Figure 4(a), except that top-1 of both WordWise and MaxPaths has a higher accuracy in sender queries than in general queries. This is due to the fact that the dictionaries of person names and email addresses extracted from the X-From and From fields, respectively, provide strong features for the scoring function, since a sender query refers to these two fields. In addition, the accuracy of MaxPaths is further enhanced by exploiting the cor911 0% 20% 40% 60% 80% 100% Top 1 Top 3 Top 10 Google Jazzy WordWise MaxPaths (a) General queries (Kaminski) 0% 20% 40% 60% 80% 100% Top 1 Top 3 Top 10 Google Jazzy WordWise MaxPaths (b) Sender queries (Kaminski) 0% 25% 50% 75% 100% 0% 20% 40% 60% 80% 100% Google Jazzy WordWise MaxPaths Spelling Error Probability (c) Varying error probability (Kaminski) Figure 4: Accuracy for Kaminski (misspelled queries) relation between the first and last name of a person. 3.3 Impact of Error Probability We now study the impact of the complexity of spelling errors on our algorithm. For that, we measure the accuracy while the error probability p varies from 10% to 90% (with gaps of 20%). The results are in Figure 4(c). Again, we show the results only for Kaminski, since we get similar results for the other two datasets. As expected, in all examined methods the accuracy decreases as p increases. Now, not only does MaxPaths outperform the alternatives, its decrease (as well as that of WordWise) is the mildest—13% as p increases from 10% to 90% (while Google and Jazzy decrease by 23% or more). We got similar results for the sender queries (and for each of the three users). 3.4 Adaptiveness of Parameters Obtaining the labeled data needed for parameter learning entails a nontrivial manual effort. Ideally, we would like to learn the parameters of MaxPaths in one domain, and use them in similar domains. 0% 25% 50% 75% 100% 0% 20% 40% 60% 80% 100% Google Jazzy MaxPaths* MaxPaths Spelling Error Probability (a) General queries (Farmer) 0% 25% 50% 75% 100% 0% 20% 40% 60% 80% 100% Google Jazzy MaxPaths* MaxPaths Spelling Error Probability (b) Sender queries (Farmer) Figure 5: Accuracy for Farmer (misspelled queries) More specifically, our desire is to use the parameters learned over one corpus (e.g., the email collection of one user) on a second corpus (e.g., the email collection of another user), rather than learning the parameters again over the second corpus. In this set of experiments, we examine the feasibility of that approach. Specifically, we consider the user Farmer and observe the accuracy of our algorithm with two sets of parameters: the first, denoted by MaxPaths in Figures 5(a) and 5(b), is learned within the Farmer dataset, and the second, denoted by MaxPaths⋆, is learned within the Kaminski dataset. Figures 5(a) and 5(b) show the accuracy of the top-1 suggestion for general queries and sender queries, respectively, with varying error probabilities. As can be seen, these results mean good news—the accuracies of MaxPaths⋆and MaxPaths are extremely close (their curves are barely distinguishable, as in most cases the difference is smaller than 1%). 
We repeated this experiment for Kitchen and Kaminski, and got similar results. 3.5 Accuracy for Correct Queries Next, we study the accuracy on correct queries, where the task is to recognize the given query as correct by returning it as the top suggestion. For each of the three users, we considered the 50 + 50 (general + sender) collected queries (having no spelling errors), and measured the accuracy, which is the percentage of queries that are equal to the top sug912 Table 3: Accuracy for Correct Queries Dataset Google Jazzy MaxPaths Kaminski (general) 90% 98% 94% Kaminski (sender) 94% 98% 94% Farmer (general) 96% 98% 96% Farmer (sender) 96% 96% 92% Kitchen (general) 86% 100% 92% Kitchen (sender) 94% 100% 98% gestion. Table 3 shows the results. Since Jazzy is based on edit distance, it almost always gives the input query as the top suggestion; the misses of Jazzy are for queries that contain a word that is not the corpus. MaxPaths is fairly close to the upper bound set by Jazzy. Google (having no access to the domain) also performs well, partly because it returns the input query if no reasonable suggestion is found. 3.6 Applicability to Large-Scale Site Search Up to now, our focus has been on email search, which represents a restricted (closed) domain with specialized knowledge (e.g., sender names). In this part, we examine the effectiveness of our algorithm in a totally different setting—large-scale site search within www.ibm.com, a domain that is popular on a world scale. There, the accuracy of Google is very high, due to this domain’s popularity, scale, and full accessibility on the Web. We crawled 10 million documents in that domain to obtain the corpus. We manually collected 1348 misspelled queries from the log of search issued against developerWorks (www.ibm.com/developerworks/) during a week. To facilitate the manual collection of these queries, we inspected each query with two or fewer search results, after applying a random permutation to those queries. Figure 6 shows the accuracy of top-k suggestions. Note that the performance of MaxPaths is very close to that of Google—only 2% lower for top-1. For k = 3 and k = 10, MaxPaths outperforms Jazzy and the top-1 of Google (from which we cannot obtain top-k for k > 1). 3.7 Summary To conclude, our experiments demonstrate various important qualities of MaxPaths. First, it outperforms its alternatives, in both accuracy (Section 3.2) and robustness to varying error complexities (Section 3.3). Second, the parameters learned in one domain (e.g., an email user) can be applied to sim0% 20% 40% 60% 80% 100% Top 1 Top 3 Top 10 Google Jazzy WordWise MaxPaths Figure 6: Accuracy for site search ilar domains (e.g., other email users) with essentially no loss in performance (Section 3.4). Third, it is highly accurate in recognition of correct queries (Section 3.5). Fourth, even when applied to large (open) domains, it achieves a comparable performance to the state-of-the-art Google spelling correction (Section 3.6). Finally, the higher performance of MaxPaths on top-3 and top-10 corrections suggests a potential for further improvement of top-1 (which is important since search engines often restrict their interfaces to only one suggestion). 4 Conclusions We presented the algorithm MaxPaths for spelling correction in domain-centric search. This algorithm relies primarily on corpus statistics and domain knowledge (rather than on query logs). 
It can handle a variety of spelling errors, and can incorporate different levels of spelling reliability among different parts of the corpus. Our experimental study demonstrates the superiority of MaxPaths over existing alternatives in the domain of email search, and indicates its effectiveness beyond that domain. In future work, we plan to explore how to utilize additional domain knowledge to better estimate the correlation between words. Particularly, from available auxiliary data (Fagin et al., 2010) and tools like information extraction (Chiticariu et al., 2010), we can infer and utilize type information from the corpus (Li et al., 2006b; Zhu et al., 2007). For instance, if kohli is of type person, and phone is highly correlated with person instances, then phone is highly correlated with kohli even if the two words do not frequently co-occur. We also plan to explore aspects of corpus maintenance in dynamic (constantly changing) domains. 913 References F. Ahmad and G. Kondrak. 2005. Learning a spelling error model from search query logs. In HLT/EMNLP. R. Bekkerman, A. Mccallum, and G. Huang. 2004. Automatic categorization of email into folders: Benchmark experiments on Enron and Sri Corpora. Technical report, University of Massachusetts - Amherst. Q. Chen, M. Li, and M. Zhou. 2007. Improving query spelling correction using Web search results. In EMNLP-CoNLL, pages 181–189. L. Chiticariu, R. Krishnamurthy, Y. Li, S. Raghavan, F. Reiss, and S. Vaithyanathan. 2010. SystemT: An algebraic approach to declarative information extraction. In ACL, pages 128–137. C. Cortes and V. Vapnik. 1995. Support-vector networks. Machine Learning, 20(3):273–297. S. Cucerzan and E. Brill. 2004. Spelling correction as an iterative process that exploits the collective knowledge of Web users. In EMNLP, pages 293–300. D. Eppstein. 1994. Finding the k shortest paths. In FOCS, pages 154–165. R. Fagin, B. Kimelfeld, Y. Li, S. Raghavan, and S. Vaithyanathan. 2010. Understanding queries in a search database system. In PODS, pages 273–284. Google. 2010. A Java API for Google spelling check service. http://code.google.com/p/google-api-spellingjava/. D. Jurafsky and J. H. Martin. 2000. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. Prentice Hall PTR. M. D. Kernighan, K. W. Church, and W. A. Gale. 1990. A spelling correction program based on a noisy channel model. In COLING, pages 205–210. B. Klimt and Y. Yang. 2004. Introducing the Enron corpus. In CEAS. K. Kukich. 1992. Techniques for automatically correcting words in text. ACM Comput. Surv., 24(4):377– 439. M. Li, M. Zhu, Y. Zhang, and M. Zhou. 2006a. Exploring distributional similarity based models for query spelling correction. In ACL. Y. Li, R. Krishnamurthy, S. Vaithyanathan, and H. V. Jagadish. 2006b. Getting work done on the web: supporting transactional queries. In SIGIR, pages 557– 564. R. Mitton. 2010. Fifty years of spellchecking. Wring Systems Research, 2:1–7. J. L. Peterson. 1980. Computer Programs for Spelling Correction: An Experiment in Program Design, volume 96 of Lecture Notes in Computer Science. Springer. J. Schaback and F. Li. 2007. Multi-level feature extraction for spelling correction. In AND, pages 79–86. M. Schierle, S. Schulz, and M. Ackermann. 2007. From spelling correction to text cleaning - using context information. In GfKl, Studies in Classification, Data Analysis, and Knowledge Organization, pages 397– 404. Seobook. 2010. Keyword typo generator. 
http://tools.seobook.com/spelling/keywordstypos.cgi. X. Sun, J. Gao, D. Micol, and C. Quirk. 2010. Learning phrase-based spelling error models from clickthrough data. In ACL, pages 266–274. H. Zhu, S. Raghavan, S. Vaithyanathan, and A. Löser. 2007. Navigating the intranet with high precision. In WWW, pages 491–500.
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 915–923, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Grammatical Error Correction with Alternating Structure Optimization Daniel Dahlmeier1 and Hwee Tou Ng1,2 1NUS Graduate School for Integrative Sciences and Engineering 2Department of Computer Science, National University of Singapore {danielhe,nght}@comp.nus.edu.sg Abstract We present a novel approach to grammatical error correction based on Alternating Structure Optimization. As part of our work, we introduce the NUS Corpus of Learner English (NUCLE), a fully annotated one million words corpus of learner English available for research purposes. We conduct an extensive evaluation for article and preposition errors using various feature sets. Our experiments show that our approach outperforms two baselines trained on non-learner text and learner text, respectively. Our approach also outperforms two commercial grammar checking software packages. 1 Introduction Grammatical error correction (GEC) has been recognized as an interesting as well as commercially attractive problem in natural language processing (NLP), in particular for learners of English as a foreign or second language (EFL/ESL). Despite the growing interest, research has been hindered by the lack of a large annotated corpus of learner text that is available for research purposes. As a result, the standard approach to GEC has been to train an off-the-shelf classifier to re-predict words in non-learner text. Learning GEC models directly from annotated learner corpora is not well explored, as are methods that combine learner and non-learner text. Furthermore, the evaluation of GEC has been problematic. Previous work has either evaluated on artificial test instances as a substitute for real learner errors or on proprietary data that is not available to other researchers. As a consequence, existing methods have not been compared on the same test set, leaving it unclear where the current state of the art really is. In this work, we aim to overcome both problems. First, we present a novel approach to GEC based on Alternating Structure Optimization (ASO) (Ando and Zhang, 2005). Our approach is able to train models on annotated learner corpora while still taking advantage of large non-learner corpora. Second, we introduce the NUS Corpus of Learner English (NUCLE), a fully annotated one million words corpus of learner English available for research purposes. We conduct an extensive evaluation for article and preposition errors using six different feature sets proposed in previous work. We compare our proposed ASO method with two baselines trained on non-learner text and learner text, respectively. To the best of our knowledge, this is the first extensive comparison of different feature sets on real learner text which is another contribution of our work. Our experiments show that our proposed ASO algorithm significantly improves over both baselines. It also outperforms two commercial grammar checking software packages in a manual evaluation. The remainder of this paper is organized as follows. The next section reviews related work. Section 3 describes the tasks. Section 4 formulates GEC as a classification problem. Section 5 extends this to the ASO algorithm. The experiments are presented in Section 6 and the results in Section 7. Section 8 contains a more detailed analysis of the results. Section 9 concludes the paper. 
915 2 Related Work In this section, we give a brief overview on related work on article and preposition errors. For a more comprehensive survey, see (Leacock et al., 2010). The seminal work on grammatical error correction was done by Knight and Chander (1994) on article errors. Subsequent work has focused on designing better features and testing different classifiers, including memory-based learning (Minnen et al., 2000), decision tree learning (Nagata et al., 2006; Gamon et al., 2008), and logistic regression (Lee, 2004; Han et al., 2006; De Felice, 2008). Work on preposition errors has used a similar classification approach and mainly differs in terms of the features employed (Chodorow et al., 2007; Gamon et al., 2008; Lee and Knutsson, 2008; Tetreault and Chodorow, 2008; Tetreault et al., 2010; De Felice, 2008). All of the above works only use non-learner text for training. Recent work has shown that training on annotated learner text can give better performance (Han et al., 2010) and that the observed word used by the writer is an important feature (Rozovskaya and Roth, 2010b). However, training data has either been small (Izumi et al., 2003), only partly annotated (Han et al., 2010), or artificially created (Rozovskaya and Roth, 2010b; Rozovskaya and Roth, 2010a). Almost no work has investigated ways to combine learner and non-learner text for training. The only exception is Gamon (2010), who combined features from the output of logistic-regression classifiers and language models trained on non-learner text in a meta-classifier trained on learner text. In this work, we show a more direct way to combine learner and non-learner text in a single model. Finally, researchers have investigated GEC in connection with web-based models in NLP (Lapata and Keller, 2005; Bergsma et al., 2009; Yi et al., 2008). These methods do not use classifiers, but rely on simple n-gram counts or page hits from the Web. 3 Task Description In this work, we focus on article and preposition errors, as they are among the most frequent types of errors made by EFL learners. 3.1 Selection vs. Correction Task There is an important difference between training on annotated learner text and training on non-learner text, namely whether the observed word can be used as a feature or not. When training on non-learner text, the observed word cannot be used as a feature. The word choice of the writer is “blanked out” from the text and serves as the correct class. A classifier is trained to re-predict the word given the surrounding context. The confusion set of possible classes is usually pre-defined. This selection task formulation is convenient as training examples can be created “for free” from any text that is assumed to be free of grammatical errors. We define the more realistic correction task as follows: given a particular word and its context, propose an appropriate correction. The proposed correction can be identical to the observed word, i.e., no correction is necessary. The main difference is that the word choice of the writer can be encoded as part of the features. 3.2 Article Errors For article errors, the classes are the three articles a, the, and the zero-article. This covers article insertion, deletion, and substitution errors. During training, each noun phrase (NP) in the training data is one training example. When training on learner text, the correct class is the article provided by the human annotator. When training on non-learner text, the correct class is the observed article. 
The context is encoded via a set of feature functions. During testing, each NP in the test set is one test example. The correct class is the article provided by the human annotator when testing on learner text or the observed article when testing on non-learner text. 3.3 Preposition Errors The approach to preposition errors is similar to articles but typically focuses on preposition substitution errors. In our work, the classes are 36 frequent English prepositions (about, along, among, around, as, at, beside, besides, between, by, down, during, except, for, from, in, inside, into, of, off, on, onto, outside, over, through, to, toward, towards, under, underneath, until, up, upon, with, within, without), which we adopt from previous work. Every prepositional phrase (PP) that is governed by one of the 916 36 prepositions is one training or test example. We ignore PPs governed by other prepositions. 4 Linear Classifiers for Grammatical Error Correction In this section, we formulate GEC as a classification problem and describe the feature sets for each task. 4.1 Linear Classifiers We use classifiers to approximate the unknown relation between articles or prepositions and their contexts in learner text, and their valid corrections. The articles or prepositions and their contexts are represented as feature vectors X ∈X. The corrections are the classes Y ∈Y. In this work, we employ binary linear classifiers of the form uT X where u is a weight vector. The outcome is considered +1 if the score is positive and −1 otherwise. A popular method for finding u is empirical risk minimization with least square regularization. Given a training set {Xi, Yi}i=1,...,n, we aim to find the weight vector that minimizes the empirical loss on the training data ˆu = arg min u 1 n n X i=1 L(uT Xi, Yi) + λ ||u||2 ! , (1) where L is a loss function. We use a modification of Huber’s robust loss function. We fix the regularization parameter λ to 10−4. A multi-class classification problem with m classes can be cast as m binary classification problems in a one-vs-rest arrangement. The prediction of the classifier is the class with the highest score ˆY = arg maxY ∈Y (uT Y X). In earlier experiments, this linear classifier gave comparable or superior performance compared to a logistic regression classifier. 4.2 Features We re-implement six feature extraction methods from previous work, three for articles and three for prepositions. The methods require different linguistic pre-processing: chunking, CCG parsing, and constituency parsing. 4.2.1 Article Errors • DeFelice The system in (De Felice, 2008) for article errors uses a CCG parser to extract a rich set of syntactic and semantic features, including part of speech (POS) tags, hypernyms from WordNet (Fellbaum, 1998), and named entities. • Han The system in (Han et al., 2006) relies on shallow syntactic and lexical features derived from a chunker, including the words before, in, and after the NP, the head word, and POS tags. • Lee The system in (Lee, 2004) uses a constituency parser. The features include POS tags, surrounding words, the head word, and hypernyms from WordNet. 4.2.2 Preposition Errors • DeFelice The system in (De Felice, 2008) for preposition errors uses a similar rich set of syntactic and semantic features as the system for article errors. In our re-implementation, we do not use a subcategorization dictionary, as this resource was not available to us. 
• TetreaultChunk The system in (Tetreault and Chodorow, 2008) uses a chunker to extract features from a two-word window around the preposition, including lexical and POS ngrams, and the head words from neighboring constituents. • TetreaultParse The system in (Tetreault et al., 2010) extends (Tetreault and Chodorow, 2008) by adding additional features derived from a constituency and a dependency parse tree. For each of the above feature sets, we add the observed article or preposition as an additional feature when training on learner text. 5 Alternating Structure Optimization This section describes the ASO algorithm and shows how it can be used for grammatical error correction. 5.1 The ASO algorithm Alternating Structure Optimization (Ando and Zhang, 2005) is a multi-task learning algorithm that takes advantage of the common structure of multiple related problems. Let us assume that we have m binary classification problems. Each classifier ui is a 917 weight vector of dimension p. Let Θ be an orthonormal h × p matrix that captures the common structure of the m weight vectors. We assume that each weight vector can be decomposed into two parts: one part that models the particular i-th classification problem and one part that models the common structure ui = wi + ΘT vi. (2) The parameters [{wi, vi}, Θ] can be learned by joint empirical risk minimization, i.e., by minimizing the joint empirical loss of the m problems on the training data m X l=1 1 n n X i=1 L wl + ΘT vl T Xl i, Y l i + λ ||wl||2 ! . (3) The key observation in ASO is that the problems used to find Θ do not have to be same as the target problems that we ultimately want to solve. Instead, we can automatically create auxiliary problems for the sole purpose of learning a better Θ. Let us assume that we have k target problems and m auxiliary problems. We can obtain an approximate solution to Equation 3 by performing the following algorithm (Ando and Zhang, 2005): 1. Learn m linear classifiers ui independently. 2. Let U = [u1, u2, . . . , um] be the p × m matrix formed from the m weight vectors. 3. Perform Singular Value Decomposition (SVD) on U: U = V1DV T 2 . The first h column vectors of V1 are stored as rows of Θ. 4. Learn wj and vj for each of the target problems by minimizing the empirical risk: 1 n n X i=1 L wj + ΘT vj T Xi, Yi + λ ||wj||2 . 5. The weight vector for the j-th target problem is: uj = wj + ΘT vj. 5.2 ASO for Grammatical Error Correction The key observation in our work is that the selection task on non-learner text is a highly informative auxiliary problem for the correction task on learner text. For example, a classifier that can predict the presence or absence of the preposition on can be helpful for correcting wrong uses of on in learner text, e.g., if the classifier’s confidence for on is low but the writer used the preposition on, the writer might have made a mistake. As the auxiliary problems can be created automatically, we can leverage the power of very large corpora of non-learner text. Let us assume a grammatical error correction task with m classes. For each class, we define a binary auxiliary problem. The feature space of the auxiliary problems is a restriction of the original feature space X to all features except the observed word: X\{Xobs}. The weight vectors of the auxiliary problems form the matrix U in Step 2 of the ASO algorithm from which we obtain Θ through SVD. Given Θ, we learn the vectors wj and vj, j = 1, . . . , k from the annotated learner text using the complete feature space X. 
This can be seen as an instance of transfer learning (Pan and Yang, 2010), as the auxiliary problems are trained on data from a different domain (nonlearner text) and have a slightly different feature space (X\{Xobs}). We note that our method is general and can be applied to any classification problem in GEC. 6 Experiments 6.1 Data Sets The main corpus in our experiments is the NUS Corpus of Learner English (NUCLE). The corpus consists of about 1,400 essays written by EFL/ESL university students on a wide range of topics, like environmental pollution or healthcare. It contains over one million words which are completely annotated with error tags and corrections. All annotations have been performed by professional English instructors. We use about 80% of the essays for training, 10% for development, and 10% for testing. We ensure that no sentences from the same essay appear in both the training and the test or development data. NUCLE is available to the community for research purposes. On average, only 1.8% of the articles and 1.3% of the prepositions in NUCLE contain an error. This figure is considerably lower compared to other learner corpora (Leacock et al., 2010, Ch. 3) and shows that our writers have a relatively high proficiency of English. We argue that this makes the task considerably more difficult. Furthermore, to keep the task as realistic as possible, we do not filter the 918 test data in any way. In addition to NUCLE, we use a subset of the New York Times section of the Gigaword corpus1 and the Wall Street Journal section of the Penn Treebank (Marcus et al., 1993) for some experiments. We pre-process all corpora using the following tools: We use NLTK2 for sentence splitting, OpenNLP3 for POS tagging, YamCha (Kudo and Matsumoto, 2003) for chunking, the C&C tools (Clark and Curran, 2007) for CCG parsing and named entity recognition, and the Stanford parser (Klein and Manning, 2003a; Klein and Manning, 2003b) for constituency and dependency parsing. 6.2 Evaluation Metrics For experiments on non-learner text, we report accuracy, which is defined as the number of correct predictions divided by the total number of test instances. For experiments on learner text, we report F1-measure F1 = 2 × Precision × Recall Precision + Recall where precision is the number of suggested corrections that agree with the human annotator divided by the total number of proposed corrections by the system, and recall is the number of suggested corrections that agree with the human annotator divided by the total number of errors annotated by the human annotator. 6.3 Selection Task Experiments on WSJ Test Data The first set of experiments investigates predicting articles and prepositions in non-learner text. This primarily serves as a reference point for the correction task described in the next section. We train classifiers as described in Section 4 on the Gigaword corpus. We train with up to 10 million training instances, which corresponds to about 37 million words of text for articles and 112 million words of text for prepositions. The test instances are extracted from section 23 of the WSJ and no text from the WSJ is included in the training data. The observed article or preposition choice of the writer is the class 1LDC2009T13 2www.nltk.org 3opennlp.sourceforge.net we want to predict. Therefore, the article or preposition cannot be part of the input features. 
Our proposed ASO method is not included in these experiments, as it uses the observed article or preposition as a feature which is only applicable when testing on learner text. 6.4 Correction Task Experiments on NUCLE Test Data The second set of experiments investigates the primary goal of this work: to automatically correct grammatical errors in learner text. The test instances are extracted from NUCLE. In contrast to the previous selection task, the observed word choice of the writer can be different from the correct class and the observed word is available during testing. We investigate two different baselines and our ASO method. The first baseline is a classifier trained on the Gigaword corpus in the same way as described in the selection task experiment. We use a simple thresholding strategy to make use of the observed word during testing. The system only flags an error if the difference between the classifier’s confidence for its first choice and the confidence for the observed word is higher than a threshold t. The threshold parameter t is tuned on the NUCLE development data for each feature set. In our experiments, the value for t is between 0.7 and 1.2. The second baseline is a classifier trained on NUCLE. The classifier is trained in the same way as the Gigaword model, except that the observed word choice of the writer is included as a feature. The correct class during training is the correction provided by the human annotator. As the observed word is part of the features, this model does not need an extra thresholding step. Indeed, we found that thresholding is harmful in this case. During training, the instances that do not contain an error greatly outnumber the instances that do contain an error. To reduce this imbalance, we keep all instances that contain an error and retain a random sample of q percent of the instances that do not contain an error. The undersample parameter q is tuned on the NUCLE development data for each data set. In our experiments, the value for q is between 20% and 40%. Our ASO method is trained in the following way. We create binary auxiliary problems for articles or prepositions, i.e., there are 3 auxiliary problems for 919 articles and 36 auxiliary problems for prepositions. We train the classifiers for the auxiliary problems on the complete 10 million instances from Gigaword in the same ways as in the selection task experiment. The weight vectors of the auxiliary problems form the matrix U. We perform SVD to get U = V1DV T 2 . We keep all columns of V1 to form Θ. The target problems are again binary classification problems for each article or preposition, but this time trained on NUCLE. The observed word choice of the writer is included as a feature for the target problems. We again undersample the instances that do not contain an error and tune the parameter q on the NUCLE development data. The value for q is between 20% and 40%. No thresholding is applied. We also experimented with a classifier that is trained on the concatenated data from NUCLE and Gigaword. This model always performed worse than the better of the individual baselines. The reason is that the two data sets have different feature spaces which prevents simple concatenation of the training data. We therefore omit these results from the paper. 7 Results The learning curves of the selection task experiments on WSJ test data are shown in Figure 1. The three curves in each plot correspond to different feature sets. 
Accuracy improves quickly in the beginning but improvements get smaller as the size of the training data increases. The best results are 87.56% for articles (Han) and 68.25% for prepositions (TetreaultParse). The best accuracy for articles is comparable to the best reported results of 87.70% (Lee, 2004) on this data set. The learning curves of the correction task experiments on NUCLE test data are shown in Figure 2 and 3. Each sub-plot shows the curves of three models as described in the last section: ASO trained on NUCLE and Gigaword, the baseline classifier trained on NUCLE, and the baseline classifier trained on Gigaword. For ASO, the x-axis shows the number of target problem training instances. The first observation is that high accuracy for the selection task on non-learner text does not automatically entail high F1-measure on learner text. We also note that feature sets with similar performance on nonlearner text can show very different performance on 0.68 0.70 0.72 0.74 0.76 0.78 0.80 0.82 0.84 0.86 0.88 1000 10000 100000 1e+06 1e+07 ACCURACY Number of training examples GIGAWORD DEFELICE GIGAWORD HAN GIGAWORD LEE (a) Articles 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 1000 10000 100000 1e+06 1e+07 ACCURACY Number of training examples GIGAWORD DEFELICE GIGAWORD TETRAULTCHUNK GIGAWORD TETRAULTPARSE (b) Prepositions Figure 1: Accuracy for the selection task on WSJ test data. learner text. The second observation is that training on annotated learner text can significantly improve performance. In three experiments (articles DeFelice, Han, prepositions DeFelice), the NUCLE model outperforms the Gigaword model trained on 10 million instances. Finally, the ASO models show the best results. In the experiments where the NUCLE models already perform better than the Gigaword baseline, ASO gives comparable or slightly better results (articles DeFelice, Han, Lee, prepositions DeFelice). In those experiments where neither baseline shows good performance (TetreaultChunk, TetreaultParse), ASO results in a large improvement over either baseline. The best results are 19.29% F1measure for articles (Han) and 11.15% F1-measure for prepositions (TetreaultParse) achieved by the ASO model. 920 0.02 0.04 0.06 0.08 0.10 0.12 0.14 0.16 0.18 1000 10000 100000 1e+06 1e+07 F1 Number of training examples ASO NUCLE GIGAWORD (a) DeFelice 0.02 0.04 0.06 0.08 0.10 0.12 0.14 0.16 0.18 0.20 1000 10000 100000 1e+06 1e+07 F1 Number of training examples ASO NUCLE GIGAWORD (b) Han 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09 1000 10000 100000 1e+06 1e+07 F1 Number of training examples ASO NUCLE GIGAWORD (c) Lee Figure 2: F1-measure for the article correction task on NUCLE test data. Each plot shows ASO and two baselines for a particular feature set. 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09 0.10 1000 10000 100000 1e+06 1e+07 F1 Number of training examples ASO NUCLE GIGAWORD (a) DeFelice 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09 0.10 1000 10000 100000 1e+06 1e+07 F1 Number of training examples ASO NUCLE GIGAWORD (b) TetreaultChunk 0.00 0.02 0.04 0.06 0.08 0.10 0.12 1000 10000 100000 1e+06 1e+07 F1 Number of training examples ASO NUCLE GIGAWORD (c) TetreaultParse Figure 3: F1-measure for the preposition correction task on NUCLE test data. Each plot shows ASO and two baselines for a particular feature set. 8 Analysis In this section, we analyze the results in more detail and show examples from our test set for illustration. 
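Before walking through these results in detail, note that the core ASO computation behind the ASO models (steps 2 and 3 of the algorithm in Section 5) fits in a few lines of linear algebra. The sketch assumes NumPy and that the m auxiliary weight vectors have already been trained; the dimensions are toy values.

```python
import numpy as np

def learn_theta(aux_weights, h):
    """ASO steps 2-3: stack auxiliary weight vectors and extract the shared
    structure Theta from the SVD of U (Ando and Zhang, 2005).

    aux_weights: list of m weight vectors, each of dimension p,
                 trained independently on the auxiliary problems.
    Returns Theta, an h x p matrix whose rows are the first h left
    singular vectors of U.
    """
    U = np.column_stack(aux_weights)              # p x m, as in the paper
    V1, D, V2t = np.linalg.svd(U, full_matrices=False)
    return V1[:, :h].T                            # h x p

# Toy example: 4 auxiliary classifiers over a 6-dimensional feature space.
rng = np.random.default_rng(0)
aux = [rng.normal(size=6) for _ in range(4)]
theta = learn_theta(aux, h=2)
print(theta.shape)                                # (2, 6)

# Each target classifier then scores an example x with (w + Theta^T v) . x,
# where w and v are learned on the annotated learner text.
```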
Table 1 shows precision, recall, and F1-measure for the best models in our experiments. ASO achieves a higher F1-measure than either baseline. We use the sign-test with bootstrap re-sampling for statistical significance testing. The sign-test is a nonparametric test that makes fewer assumptions than parametric tests like the t-test. The improvements in F1-measure of ASO over either baseline are statistically significant (p < 0.001) for both articles and prepositions. The difficulty in GEC is that in many cases, more than one word choice can be correct. Even with a threshold, the Gigaword baseline model suggests too many corrections, because the model cannot make use of the observed word as a feature. This results in low precision. For example, the model replaces as Articles Model Prec Rec F1 Gigaword (Han) 10.33 21.81 14.02 NUCLE (Han) 29.48 12.91 17.96 ASO (Han) 26.44 15.18 19.29 Prepositions Model Prec Rec F1 Gigaword (TetreaultParse ) 4.77 14.81 7.21 NUCLE (DeFelice) 13.84 5.55 7.92 ASO (TetreaultParse) 18.30 8.02 11.15 Table 1: Best results for the correction task on NUCLE test data. Improvements for ASO over either baseline are statistically significant (p < 0.001) for both tasks. with by in the sentence “This group should be categorized as the vulnerable group”, which is wrong. In contrast, the NUCLE model learns a bias towards the observed word and therefore achieves higher precision. However, the training data is 921 smaller and therefore recall is low as the model has not seen enough examples during training. This is especially true for prepositions which can occur in a large variety of contexts. For example, the preposition in should be on in the sentence “... psychology had an impact in the way we process and manage technology”. The phrase “impact on the way” does not appear in the NUCLE training data and the NUCLE baseline fails to detect the error. The ASO model is able to take advantage of both the annotated learner text and the large non-learner text, thus achieving overall high F1-measure. The phrase “impact on the way”, for example, appears many times in the Gigaword training data. With the common structure learned from the auxiliary problems, the ASO model successfully finds and corrects this mistake. 8.1 Manual Evaluation We carried out a manual evaluation of the best ASO models and compared their output with two commercial grammar checking software packages which we call System A and System B. We randomly sampled 1000 test instances for articles and 2000 test instances for prepositions and manually categorized each test instance into one of the following categories: (1) Correct means that both human and system flag an error and suggest the same correction. If the system’s correction differs from the human but is equally acceptable, it is considered (2) Both Ok. If the system identifies an error but fails to correct it, we consider it (3) Both Wrong, as both the writer and the system are wrong. (4) Other Error means that the system’s correction does not result in a grammatical sentence because of another grammatical error that is outside the scope of article or preposition errors, e.g., a noun number error as in “all the dog”. If the system corrupts a previously correct sentence it is a (5) False Flag. If the human flags an error but the system does not, it is a (6) Miss. (7) No Flag means that neither the human annotator nor the system flags an error. 
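The significance testing mentioned above can be approximated with a paired resampling test over per-instance outcomes. The sketch below implements a generic paired bootstrap, which is one common way to realize such a test; it is not necessarily the authors' exact procedure, and the metric and data are toy stand-ins for F1 over corrections.

```python
import random

def paired_bootstrap(metric, out_a, out_b, gold, trials=10_000, seed=0):
    """Paired bootstrap re-sampling: fraction of resampled test sets on which
    system A does NOT outperform system B (an approximate p-value).

    out_a, out_b, gold are parallel lists of per-instance outputs/labels;
    `metric` maps (outputs, gold) to a score such as F1.
    """
    rng = random.Random(seed)
    n = len(gold)
    losses = 0
    for _ in range(trials):
        idx = [rng.randrange(n) for _ in range(n)]
        a = metric([out_a[i] for i in idx], [gold[i] for i in idx])
        b = metric([out_b[i] for i in idx], [gold[i] for i in idx])
        if a <= b:
            losses += 1
    return losses / trials

# Toy accuracy metric and data; replace with F1 over corrections in practice.
acc = lambda out, gold: sum(o == g for o, g in zip(out, gold)) / len(gold)
gold  = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
sys_a = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]   # 9/10 correct
sys_b = [1, 0, 0, 1, 0, 0, 0, 0, 1, 1]   # 7/10 correct
print(paired_bootstrap(acc, sys_a, sys_b, gold))
```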
We calculate precision by dividing the count of category (1) by the sum of counts of categories (1), (3), and (5), and recall by dividing the count of category (1) by the sum of counts of categories (1), (3), and (6). The results are shown in Table 2. Our ASO method outperforms both commercial software packages. Our evaluaArticles ASO System A System B (1) Correct 4 1 1 (2) Both Ok 16 12 18 (3) Both Wrong 0 1 0 (4) Other Error 1 0 0 (5) False Flag 1 0 4 (6) Miss 3 5 6 (7) No Flag 975 981 971 Precision 80.00 50.00 20.00 Recall 57.14 14.28 14.28 F1 66.67 22.21 16.67 Prepositions ASO System A System B (1) Correct 3 3 0 (2) Both Ok 35 39 24 (3) Both Wrong 0 2 0 (4) Other Error 0 0 0 (5) False Flag 5 11 1 (6) Miss 12 11 15 (7) No Flag 1945 1934 1960 Precision 37.50 18.75 0.00 Recall 20.00 18.75 0.00 F1 26.09 18.75 0.00 Table 2: Manual evaluation and comparison with commercial grammar checking software. tion shows that even commercial software packages achieve low F1-measure for article and preposition errors, which confirms the difficulty of these tasks. 9 Conclusion We have presented a novel approach to grammatical error correction based on Alternating Structure Optimization. We have introduced the NUS Corpus of Learner English (NUCLE), a fully annotated corpus of learner text. Our experiments for article and preposition errors show the advantage of our ASO approach over two baseline methods. Our ASO approach also outperforms two commercial grammar checking software packages in a manual evaluation. Acknowledgments This research was done for CSIDM Project No. CSIDM-200804 partially funded by a grant from the National Research Foundation (NRF) administered by the Media Development Authority (MDA) of Singapore. 922 References R.K. Ando and T. Zhang. 2005. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research, 6. S. Bergsma, D. Lin, and R. Goebel. 2009. Web-scale ngram models for lexical disambiguation. In Proceedings of IJCAI. M. Chodorow, J. Tetreault, and N.R. Han. 2007. Detection of grammatical errors involving prepositions. In Proceedings of the 4th ACL-SIGSEM Workshop on Prepositions. S. Clark and J.R. Curran. 2007. Wide-coverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics, 33(4). R. De Felice. 2008. Automatic Error Detection in Nonnative English. Ph.D. thesis, University of Oxford. C. Fellbaum, editor. 1998. WordNet: An electronic lexical database. MIT Press, Cambridge,MA. M. Gamon, J. Gao, C. Brockett, A. Klementiev, W.B. Dolan, D. Belenko, and L. Vanderwende. 2008. Using contextual speller techniques and language modeling for ESL error correction. In Proceedings of IJCNLP. M. Gamon. 2010. Using mostly native data to correct errors in learners’ writing: A meta-classifier approach. In Proceedings of HLT-NAACL. N.R. Han, M. Chodorow, and C. Leacock. 2006. Detecting errors in English article usage by non-native speakers. Natural Language Engineering, 12(02). N.R. Han, J. Tetreault, S.H. Lee, and J.Y. Ha. 2010. Using an error-annotated learner corpus to develop an ESL/EFL error correction system. In Proceedings of LREC. E. Izumi, K. Uchimoto, T. Saiga, T. Supnithi, and H. Isahara. 2003. Automatic error detection in the Japanese learners’ English spoken data. In Companion Volume to the Proceedings of ACL. D. Klein and C.D. Manning. 2003a. Accurate unlexicalized parsing. In Proceedings of ACL. D. Klein and C.D. Manning. 2003b. 
Fast exact inference with a factored model for natural language processing. Advances in Neural Information Processing Systems (NIPS 2002), 15. K. Knight and I. Chander. 1994. Automated postediting of documents. In Proceedings of AAAI. T Kudo and Y. Matsumoto. 2003. Fast methods for kernel-based text analysis. In Proceedings of ACL. M. Lapata and F. Keller. 2005. Web-based models for natural language processing. ACM Transactions on Speech and Language Processing, 2(1). C. Leacock, M. Chodorow, M. Gamon, and J. Tetreault. 2010. Automated Grammatical Error Detection for Language Learners. Morgan & Claypool Publishers, San Rafael,CA. J. Lee and O. Knutsson. 2008. The role of PP attachment in preposition generation. In Proceedings of CICLing. J. Lee. 2004. Automatic article restoration. In Proceedings of HLT-NAACL. M.P. Marcus, M.A. Marcinkiewicz, and B. Santorini. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19. G. Minnen, F. Bond, and A. Copestake. 2000. Memorybased learning for article generation. In Proceedings of CoNLL. R. Nagata, A. Kawai, K. Morihiro, and N. Isu. 2006. A feedback-augmented method for detecting errors in the writing of learners of English. In Proceedings of COLING-ACL. S.J. Pan and Q. Yang. 2010. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10). A. Rozovskaya and D. Roth. 2010a. Generating confusion sets for context-sensitive error correction. In Proceedings of EMNLP. A. Rozovskaya and D. Roth. 2010b. Training paradigms for correcting errors in grammar and usage. In Proceedings of HLT-NAACL. J. Tetreault and M. Chodorow. 2008. The ups and downs of preposition error detection in ESL writing. In Proceedings of COLING. J. Tetreault, J. Foster, and M. Chodorow. 2010. Using parse features for preposition selection and error detection. In Proceedings of ACL. X. Yi, J. Gao, and W.B. Dolan. 2008. A web-based English proofing system for English as a second language users. In Proceedings of IJCNLP. 923
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 924–933, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Algorithm Selection and Model Adaptation for ESL Correction Tasks Alla Rozovskaya and Dan Roth University of Illinois at Urbana-Champaign Urbana, IL 61801 {rozovska,danr}@illinois.edu Abstract We consider the problem of correcting errors made by English as a Second Language (ESL) writers and address two issues that are essential to making progress in ESL error correction - algorithm selection and model adaptation to the first language of the ESL learner. A variety of learning algorithms have been applied to correct ESL mistakes, but often comparisons were made between incomparable data sets. We conduct an extensive, fair comparison of four popular learning methods for the task, reversing conclusions from earlier evaluations. Our results hold for different training sets, genres, and feature sets. A second key issue in ESL error correction is the adaptation of a model to the first language of the writer. Errors made by non-native speakers exhibit certain regularities and, as we show, models perform much better when they use knowledge about error patterns of the nonnative writers. We propose a novel way to adapt a learned algorithm to the first language of the writer that is both cheaper to implement and performs better than other adaptation methods. 1 Introduction There has been a lot of recent work on correcting writing mistakes made by English as a Second Language (ESL) learners (Izumi et al., 2003; EegOlofsson and Knuttson, 2003; Han et al., 2006; Felice and Pulman, 2008; Gamon et al., 2008; Tetreault and Chodorow, 2008; Elghaari et al., 2010; Tetreault et al., 2010; Gamon, 2010; Rozovskaya and Roth, 2010c). Most of this work has focused on correcting mistakes in article and preposition usage, which are some of the most common error types among nonnative writers of English (Dalgish, 1985; Bitchener et al., 2005; Leacock et al., 2010). Examples below illustrate some of these errors: 1. “They listen to None*/the lecture carefully.” 2. “He is an engineer with a passion to*/for what he does.” In (1) the definite article is incorrectly omitted. In (2), the writer uses an incorrect preposition. Approaches to correcting preposition and article mistakes have adopted the methods of the contextsensitive spelling correction task, which addresses the problem of correcting spelling mistakes that result in legitimate words, such as confusing their and there (Carlson et al., 2001; Golding and Roth, 1999). A candidate set or a confusion set is defined that specifies a list of confusable words, e.g., {their, there}. Each occurrence of a confusable word in text is represented as a vector of features derived from a context window around the target, e.g., words and part-of-speech tags. A classifier is trained on text assumed to be error-free. At decision time, for each word in text, e.g. there, the classifier predicts the most likely candidate from the corresponding confusion set {their, there}. Models for correcting article and preposition errors are similarly trained on error-free native English text, where the confusion set includes all articles or prepositions (Izumi et al., 2003; Eeg-Olofsson and Knuttson, 2003; Han et al., 2006; Felice and Pulman, 2008; Gamon et al., 2008; Tetreault and Chodorow, 2008; Tetreault et al., 2010). 
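A minimal sketch may help make this confusion-set setup concrete. The window size, training sentences, and the simple count-based scorer below are illustrative assumptions standing in for the feature-rich classifiers used in the cited work; the structure, training on text assumed to be error-free and choosing the most likely member of the confusion set at decision time, is the same.

```python
from collections import defaultdict

CONFUSION_SET = {"their", "there"}   # illustrative confusion set
WINDOW = 2                           # illustrative window size

def context_features(tokens, i):
    """Word n-gram features from a +/-WINDOW window around position i."""
    left, right = tokens[max(0, i - WINDOW):i], tokens[i + 1:i + 1 + WINDOW]
    feats = []
    for k in range(1, WINDOW + 1):
        feats.append(("L", tuple(left[-k:])))
        feats.append(("R", tuple(right[:k])))
    return feats

def train(sentences):
    """Count feature/candidate co-occurrences on text assumed to be error-free."""
    counts = defaultdict(lambda: defaultdict(int))
    for tokens in sentences:
        for i, w in enumerate(tokens):
            if w in CONFUSION_SET:
                for f in context_features(tokens, i):
                    counts[f][w] += 1
    return counts

def predict(counts, tokens, i):
    """Score each candidate by how often its features were seen in training."""
    scores = {c: 0 for c in CONFUSION_SET}
    for f in context_features(tokens, i):
        for c, n in counts[f].items():
            scores[c] += n
    return max(scores, key=scores.get)

counts = train([["they", "lost", "their", "way", "home"],
                ["there", "is", "a", "lecture", "today"]])
print(predict(counts, ["they", "lost", "there", "way", "home"], 2))  # -> 'their'
```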
924 Although the choice of a particular learning algorithm differs, with the exception of decision trees (Gamon et al., 2008), all algorithms used are linear learning algorithms, some discriminative (Han et al., 2006; Felice and Pulman, 2008; Tetreault and Chodorow, 2008; Rozovskaya and Roth, 2010c; Rozovskaya and Roth, 2010b), some probabilistic (Gamon et al., 2008; Gamon, 2010), or “counting” (Bergsma et al., 2009; Elghaari et al., 2010). While model comparison has not been the goal of the earlier studies, it is quite common to compare systems, even when they are trained on different data sets and use different features. Furthermore, since there is no shared ESL data set, systems are also evaluated on data from different ESL sources or even on native data. Several conclusions have been made when comparing systems developed for ESL correction tasks. A language model was found to outperform a maximum entropy classifier (Gamon, 2010). However, the language model was trained on the Gigaword corpus, 17 · 109 words (Linguistic Data Consortium, 2003), a corpus several orders of magnitude larger than the corpus used to train the classifier. Similarly, web-based models built on Google Web1T 5-gram Corpus (Bergsma et al., 2009) achieve better results when compared to a maximum entropy model that uses a corpus 10, 000 times smaller (Chodorow et al., 2007)1. In this work, we compare four popular learning methods applied to the problem of correcting preposition and article errors and evaluate on a common ESL data set. We compare two probabilistic approaches – Na¨ıve Bayes and language modeling; a discriminative algorithm Averaged Perceptron; and a count-based method SumLM (Bergsma et al., 2009), which, as we show, is very similar to Na¨ıve Bayes, but with a different free coefficient. We train our models on data from several sources, varying training sizes and feature sets, and show that there are significant differences in the performance of these algorithms. Contrary to previous results (Bergsma et al., 2009; Gamon, 2010), we find that when trained on the same data with the same features, Averaged Perceptron achieves the best performance, followed by Na¨ıve Bayes, then the language model, and finally the count-based approach. Our results hold for 1These two models also use different features. training sets of different sizes, genres, and feature sets. We also explain the performance differences from the perspective of each algorithm. The second important question that we address is that of adapting the decision to the source language of the writer. Errors made by non-native speakers exhibit certain regularities. Adapting a model so that it takes into consideration the specific error patterns of the non-native writers was shown to be extremely helpful in the context of discriminative classifiers (Rozovskaya and Roth, 2010c; Rozovskaya and Roth, 2010b). However, this method requires generating new training data and training a separate classifier for each source language. Our key contribution here is a novel, simple, and elegant adaptation method within the framework of the Na¨ıve Bayes algorithm, which yields even greater performance gains. Specifically, we show how the error patterns of the non-native writers can be viewed as a different distribution on candidate priors in the confusion set. 
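To preview what such adapted priors look like before the full development in Section 4, the sketch below estimates prior(p | s, L1) from a small, invented sample of (observed preposition, annotated correction) pairs for one source language; note how most of the probability mass stays on the author's own choice, in the spirit of Table 8.

```python
from collections import defaultdict

def adapted_priors(annotated_pairs):
    """annotated_pairs: list of (author's preposition s, correct preposition p)."""
    counts = defaultdict(lambda: defaultdict(int))
    for s, p in annotated_pairs:
        counts[s][p] += 1
    priors = {}
    for s, corrections in counts.items():
        total = sum(corrections.values())
        priors[s] = {p: c / total for p, c in corrections.items()}
    return priors

# Invented annotated sample: "on" is usually correct, sometimes should be "at"/"in".
sample = [("on", "on")] * 7 + [("on", "at")] * 2 + [("on", "in")] * 1
print(adapted_priors(sample)["on"])  # -> {'on': 0.7, 'at': 0.2, 'in': 0.1}
```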
Following this observation, we train Na¨ıve Bayes in a traditional way, regardless of the source language of the writer, and then, only at decision time, change the prior probabilities of the model from the ones observed in the native training data to the ones corresponding to error patterns in the non-native writer’s source language (Section 4). A related idea has been applied in Word Sense Disambiguation to adjust the model priors to a new domain with different sense distributions (Chan and Ng, 2005). The paper has two main contributions. First, we conduct a fair comparison of four learning algorithms and show that the discriminative approach Averaged Perceptron is the best performing model (Sec. 3). Our results do not support earlier conclusions with respect to the performance of count-based models (Bergsma et al., 2009) and language models (Gamon, 2010). In fact, we show that SumLM is comparable to Averaged Perceptron trained with a 10 times smaller corpus, and language model is comparable to Averaged Perceptron trained with a 2 times smaller corpus. The second, and most significant, of our contributions is a novel way to adapt a model to the source language of the writer, without re-training the model (Sec. 4). As we show, adapting to the source language of the writer provides significant performance improvement, and our new method also performs 925 better than previous, more complicated methods. Section 2 presents the theoretical component of the linear learning framework. In Section 3, we describe the experiments, which compare the four learning models. Section 4 presents the key result of this work, a novel method of adapting the model to the source language of the learner. 2 The Models The standard approach to preposition correction is to cast the problem as a multi-class classification task and train a classifier on features defined on the surrounding context2. The model selects the most likely candidate from the confusion set, where the set of candidates includes the top n most frequent English prepositions. Our confusion set includes the top ten prepositions3: ConfSet = {on, from, for, of, about, to, at, in, with, by}. We use p to refer to a candidate preposition from ConfSet. Let preposition context denote the preposition and the window around it. For instance, “a passion to what he” is a context for window size 2. We use three feature sets, varying window size from 2 to 4 words on each side (see Table 1). All feature sets consist of word n-grams of various lengths spanning p and all the features are of the form s−kps+m, where s−k and s+m denote k words before and m words after p; we show two 3-gram features for illustration: 1. a passion p 2. passion p what We implement four linear learning models: the discriminative method Averaged Perceptron (AP); two probabilistic methods – a language model (LM) and Na¨ıve Bayes (NB); and a “counting” method SumLM (Bergsma et al., 2009). Each model produces a score for a candidate in the confusion set. Since all of the models are linear, the hypotheses generated by the algorithms differ only in the weights they assign to the features 2We also report one experiment on the article correction task. We take the preposition correction task as an example; the article case is treated in the same way. 3This set of prepositions is also considered in other works, e.g. (Rozovskaya and Roth, 2010b). The usage of the ten most frequent prepositions accounts for 82% of all preposition errors (Leacock et al., 2010). 
Feature Preposition context N-gram set lengths Win2 a passion [to] what he 2,3,4 Win3 with a passion [to] what he does 2,3,4 Win4 engineer with a passion [to] what he does . 2,3,4,5 Table 1: Description of the three feature sets used in the experiments. All feature sets consist of word n-grams of various lengths spanning the preposition and vary by n-gram length and window size. Method Free Coefficient Feature weights AP bias parameter mistake-driven LM λ · prior(p) P vl◦vr λvr · log(P(u|vr)) NB log(prior(p)) log(P(f|p)) SumLM |F(S, p)| · log(C(p)) log(P(f|p)) Table 2: Summary of the learning methods. C(p) denotes the number of times preposition p occurred in training. λ is a smoothing parameter, u is the rightmost word in f, vl ◦vr denotes all concatenations of substrings vl and vr of feature f without u. (Roth, 1998; Roth, 1999). Thus a score computed by each of the models for a preposition p in the context S can be expressed as follows: g(S, p) = C(p) + X f∈F(S,p) wa(f), (1) where F(S, p) is the set of features active in context S relative to preposition p, wa(f) is the weight algorithm a assigns to feature f ∈F, and C(p) is a free coefficient. Predictions are made using the winner-take-all approach: argmaxpg(S, p). The algorithms make use of the same feature set F and differ only by how the weights wa(f) and C(p) are computed. Below we explain how the weights are determined in each method. Table 2 summarizes the four approaches. 2.1 Averaged Perceptron Discriminative classifiers represent the most common learning paradigm in error correction. AP (Freund and Schapire, 1999) is a discriminative mistakedriven online learning algorithm. It maintains a vector of feature weights w and processes one training example at a time, updating w if the current weight assignment makes a mistake on the training example. In the case of AP, the C(p) coefficient refers to the bias parameter (see Table 2). 926 We use the regularized version of AP in Learning Based Java4 (LBJ, (Rizzolo and Roth, 2007)). While classical Perceptron comes with a generalization bound related to the margin of the data, Averaged Perceptron also comes with a PAC-like generalization bound (Freund and Schapire, 1999). This linear learning algorithm is known, both theoretically and experimentally, to be among the best linear learning approaches and is competitive with SVM and Logistic Regression, while being more efficient in training. It also has been shown to produce stateof-the-art results on many natural language applications (Punyakanok et al., 2008). 2.2 Language Modeling Given a feature f = s−kps+m, let u denote the rightmost word in f and vl ◦vr denote all concatenations of substrings vl and vr of feature f without u. The language model computes several probabilities of the form P(u|vr). If f =“with a passion p what”, then u =“what”, and vr ∈{“with a passion p”, “a passion p”, “passion p”, “p” }. In practice, these probabilities are smoothed and replaced with their corresponding log values, and the total weight contribution of f to the scoring function of p is P vl◦vr λvr · log(P(u|vr)). In addition, this scoring function has a coefficient that only depends on p: C(p) = λ · prior(p) (see Table 2). The prior probability of a candidate p is: prior(p) = C(p) P q∈ConfSet C(q), (2) where C(p) and C(q) denote the number of times preposition p and q, respectively, occurred in the training data. 
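Since every model shares the linear form of Eq. (1), the whole family can be written as a single scoring routine that is specialized only through the free coefficient C(p) and the feature weights w(f). The sketch below spells this out; the features and probabilities are invented, with Naive-Bayes-style weights standing in for one of the four instantiations summarized in Table 2.

```python
import math

def score(p, active_features, C, w):
    """g(S, p) = C(p) + sum of w(f) over features active in context S (Eq. 1)."""
    return C(p) + sum(w(f, p) for f in active_features)

def winner_take_all(conf_set, active_features, C, w):
    return max(conf_set, key=lambda p: score(p, active_features, C, w))

# Toy instantiation with Naive-Bayes-style weights on two invented features.
priors = {"to": 0.4, "for": 0.6}
cond = {("passion _", "for"): 0.7, ("_ what", "for"): 0.5,
        ("passion _", "to"): 0.2, ("_ what", "to"): 0.4}
C = lambda p: math.log(priors[p])
w = lambda f, p: math.log(cond.get((f, p), 1e-6))
print(winner_take_all(["to", "for"], ["passion _", "_ what"], C, w))  # -> 'for'
```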
We implement a count-based LM with Jelinek-Mercer linear interpolation as a smoothing method5 (Chen and Goodman, 1996), where each n-gram length, from 1 to n, is associated with an interpolation smoothing weight λ. Weights are optimized on a held-out set of ESL sentences. Win2 and Win3 features correspond to 4-gram LMs and Win4 to 5-gram LMs. Language models are trained with SRILM (Stolcke, 2002). 4LBJ can be downloaded from http://cogcomp.cs. illinois.edu. 5Unlike other LM methods, this approach allows us to train LMs on very large data sets. Although we found that backoff LMs may perform slightly better, they still maintain the same hierarchy in the order of algorithm performance. 2.3 Na¨ıve Bayes NB is another linear model, which is often hard to beat using more sophisticated approaches. NB architecture is also particularly well-suited for adapting the model to the first language of the writer (Section 4). Weights in NB are determined, similarly to LM, by the feature counts and the prior probability of each candidate p (Eq. (2)). For each candidate p, NB computes the joint probability of p and the feature space F, assuming that the features are conditionally independent given p: g(S, p) = log{prior(p) · Y f∈F(S,p) P(f|p)} = log(prior(p)) + + X f∈F(S,p) log(P(f|p)) (3) NB weights and its free coefficient are also summarized in Table 2. 2.4 SumLM For candidate p, SumLM (Bergsma et al., 2009)6 produces a score by summing over the logs of all feature counts: g(S, p) = X f∈F(S,p) log(C(f)) = X f∈F(S,p) log(P(f|p)C(p)) = |F(S, p)|C(p) + X f∈F(S,p) log(P(f|p)) where C(f) denotes the number of times n-gram feature f was observed with p in training. It should be clear from equation 3 that SumLM is very similar to NB, with a different free coefficient (Table 2). 3 Comparison of Algorithms 3.1 Evaluation Data We evaluate the models using a corpus of ESL essays, annotated7 by native English speakers (Rozovskaya and Roth, 2010a). For each preposition 6SumLM is one of several related methods proposed in this work; its accuracy on the preposition selection task on native English data nearly matches the best model, SuperLM (73.7% vs. 75.4%), while being much simpler to implement. 7The annotation of the ESL corpus can be downloaded from http://cogcomp.cs.illinois.edu. 927 Source Prepositions Articles language Total Incorrect Total Incorrect Chinese 953 144 1864 150 Czech 627 28 575 55 Italian 687 43 Russian 1210 85 2292 213 Spanish 708 52 All 4185 352 4731 418 Table 3: Statistics on prepositions and articles in the ESL data. Column Incorrect denotes the number of cases judged to be incorrect by the annotator. (article) used incorrectly, the annotator indicated the correct choice. The data include sentences by speakers of five first languages. Table 3 shows statistics by the source language of the writer. 3.2 Training Corpora We use two training corpora. The first corpus, WikiNYT, is a selection of texts from English Wikipedia and the New York Times section of the Gigaword corpus and contains 107 preposition contexts. We build models of 3 sizes8: 106, 5 · 106, and 107. To experiment with larger data sets, we use the Google Web1T 5-gram Corpus, which is a collection of n-gram counts of length one to five over a corpus of 1012 words. The corpus contains 2.6·1010 prepositions. We refer to this corpus as GoogleWeb. We stress that GoogleWeb does not contain complete sentences, but only n-gram counts. 
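Both count-based scorers of Sections 2.3 and 2.4 need nothing beyond such per-feature n-gram counts, which is what makes a counts-only corpus usable for them. The sketch below, with invented counts standing in for corpus statistics, computes the two scores side by side and makes the shared structure of Table 2 concrete.

```python
import math

def nb_score(p, feats, count_p, total_preps, feat_counts, alpha=1e-6):
    """Naive Bayes: log prior(p) + sum_f log P(f|p), computed from raw counts."""
    s = math.log(count_p[p] / total_preps)
    for f in feats:
        s += math.log(feat_counts.get((f, p), alpha) / count_p[p])
    return s

def sumlm_score(p, feats, feat_counts, alpha=1e-6):
    """SumLM: sum_f log C(f, p), identical to NB up to the free coefficient."""
    return sum(math.log(feat_counts.get((f, p), alpha)) for f in feats)

# Invented counts standing in for n-gram statistics.
count_p = {"on": 800, "in": 1200}
total = sum(count_p.values())
feat_counts = {("impact _ the", "on"): 90, ("impact _ the", "in"): 5,
               ("_ the way", "on"): 60, ("_ the way", "in"): 40}
feats = ["impact _ the", "_ the way"]
for p in count_p:
    print(p, round(nb_score(p, feats, count_p, total, feat_counts), 2),
          round(sumlm_score(p, feats, feat_counts), 2))
```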
Thus, we cannot generate training data for AP for feature sets Win3 and Win4: Since the algorithm does not assume feature independence, we need to have 7 and 9-word sequences, respectively, with a preposition in the middle (as shown in Table 1) and their corpus frequencies. The other three models can be evaluated with the n-gram counts available. For example, we compute NB scores by obtaining the count of each feature independently, e.g. the count for left context 5-gram “engineer with a passion p” and right context 5-gram “p what he does .”, due to the conditional independence assumption that NB makes. On GoogleWeb, we train NB, SumLM, and LM with three feature sets: Win2, Win3, and Win4. From GoogleWeb, we also generate a smaller training set of size 108: We use 5-grams with a preposition in the middle and generate a new 8Training size refers to the number of preposition contexts. count, proportional to the size of the smaller corpus9. For instance, a preposition 5-gram with a count of 2600 in GoogleWeb, will have a count of 10 in GoogleWeb-108. 3.3 Results Our key results of the fair comparison of the four algorithms are shown in Fig. 1 and summarized in Table 4. The table shows that AP trained on 5 · 106 preposition contexts performs as well as NB trained on 107 (i.e., with twice as much data; the performance of LM trained on 107 contexts is better than that of AP trained with 10 times less data (106), but not as good as that of AP trained with half as much data (5·106); AP outperforms SumLM, when the latter uses 10 times more data. Fig. 1 demonstrates the performance results reported in Table 4; it shows the behavior of different systems with respect to precision and recall on the error correction task. We generate the curves by varying the decision threshold on the confidence of the classifier (Carlson et al., 2001) and propose a correction only when the confidence of the classifier is above the threshold. A higher precision and a lower recall are obtained when the decision threshold is high, and vice versa. Key results AP > NB > LM > SumLM AP ∼2 · NB 5 · AP > 10 · LM > AP AP > 10 · SumLM Table 4: Key results on the comparison of algorithms. 2 · NB refers to NB trained with twice as much data as AP; 10 · LM refers to LM trained with 10 times more data as AP; 10·SumLM refers to SumLM trained with 10 times more data as AP. These results are also shown in Fig. 1. We now show a fair comparison of the four algorithms for different window sizes, training data and training sizes. Figure 2 compares the models trained on WikiNY T-107 corpus for Win4. AP is the superior model, followed by NB, then LM, and finally SumLM. Results for other training sizes and feature10 set 9Scaling down GoogleWeb introduces some bias but we believe that it should not have an effect on our experiments. 10We have also experimented with additional POS-based features that are commonly used in these tasks and observed similar behavior. 928 0 10 20 30 40 50 60 0 10 20 30 40 50 60 PRECISION RECALL SumLM-107 LM-107 NB-107 AP-106 AP-5*106 AP-107 Figure 1: Algorithm comparison across different training sizes. (WikiNYT, Win3). AP (106 preposition contexts) performs as well as SumLM with 10 times more data, and LM requires at least twice as much data to achieve the performance of AP. configurations show similar behavior and are reported in Table 5, which provides model comparison in terms of Average Area Under Curve (AAUC, (Hanley and McNeil, 1983)). 
AAUC is a measure commonly used to generate a summary statistic and is computed here as an average precision value over 12 recall points (from 5 to 60): AAUC = 1 12 · 12 X i=1 Precision(i · 5) The Table also shows results on the article correction task11. Training data Feature Performance (AAUC) set AP NB LM SumLM WikiNY T-5 · 106 Win3 26 22 20 13 WikiNY T-107 Win4 33 28 24 16 GoogleWeb-108 Win2 30 29 28 15 GoogleWeb Win4 44 41 32 Article WikiNY T-5 · 106 Win3 40 39 30 Table 5: Performance Comparison of the four algorithms for different training data, training sizes, and window sizes. Each row shows results for training data of the same size. The last row shows performance on the article correction task. All other results are for prepositions. 11We do not evaluate the LM approach on the article correction task, since with LM it is difficult to handle missing article errors, one of the most common error types for articles, but the expectation is that it will behave as it does for prepositions. 0 10 20 30 40 50 60 0 10 20 30 40 50 60 PRECISION RECALL SumLM LM NB AP Figure 2: Model Comparison for training data of the same size: Performance of models for feature set Win4 trained on WikiNY T-107. 3.3.1 Effects of Window Size We found that expanding window size from 2 to 3 is helpful for all of the models, but expanding window to 4 is only helpful for the models trained on GoogleWeb (Table 6). Compared to Win3, Win4 has five additional 5-gram features. We look at the proportion of features in the ESL data that occurred in two corpora: WikiNY T-107 and GoogleWeb (Table 7). We observe that only 4% of test 5-grams occur in WikiNY T-107. This number goes up 7 times to 28% for GoogleWeb, which explains why increasing the window size is helpful for this model. By comparison, a set of native English sentences (different from the training data) has 50% more 4-grams and about 3 times more 5-grams, because ESL sentences often contain expressions not common for native speakers. Training data Performance (AAUC) Win2 Win3 Win4 GoogleWeb 35 39 44 Table 6: Effect of Window Size in terms of AAUC. Performance improves, as the window increases. 4 Adapting to Writer’s Source Language In this section, we discuss adapting error correction systems to the first language of the writer. Nonnative speakers make mistakes in a systematic manner, and errors often depend on the first language of the writer (Lee and Seneff, 2008; Rozovskaya and 929 Test Train N-gram length 2 3 4 5 ESL WikiNY T-107 98% 66% 22% 4% Native WikiNY T-107 98% 67% 32% 13% ESL GoogleWeb 99% 92% 64% 28% Native-B09 GoogleWeb 99% 93% 70% Table 7: Feature coverage for ESL and native data. Percentage of test n-gram features that occurred in training. Native refers to data from Wikipedia and NYT. B09 refers to statistics from Bergsma et al. (2009). Roth, 2010a). For instance, a Chinese learner of English might say “congratulations to this achievement” instead of “congratulations on this achievement”, while a Russian speaker might say “congratulations with this achievement”. A system performs much better when it makes use of knowledge about typical errors. When trained on annotated ESL data instead of native data, systems improve both precision and recall (Han et al., 2010; Gamon, 2010). Annotated data include both the writer’s preposition and the intended (correct) one, and thus the knowledge about typical errors is made available to the system. 
Another way to adapt a model to the first language is to generate in native training data artificial errors mimicking the typical errors of the non-native writers (Rozovskaya and Roth, 2010c; Rozovskaya and Roth, 2010b). Henceforth, we refer to this method, proposed within the discriminative framework AP, as AP-adapted. To determine typical mistakes, error statistics are collected on a small set of annotated ESL sentences. However, for the model to use these language-specific error statistics, a separate classifier for each source language needs to be trained. We propose a novel adaptation method, which shows performance improvement over AP-adapted. Moreover, this method is much simpler to implement, since there is no need to train per source language; only one classifier is trained. The method relies on the observation that error regularities can be viewed as a distribution on priors over the correction candidates. Given a preposition s in text, the prior for candidate p is the probability that p is the correct preposition for s. If a model is trained on native data without adaptation to the source language, candidate priors correspond to the relative frequencies of the candidates in the native training data. More importantly, these priors remain the same regardless of the source language of the writer or of the preposition used in text. From the model’s perspective, it means that a correction candidate, for example to, is equally likely given that the author’s preposition is for or from, which is clearly incorrect and disagrees with the notion that errors are regular and language-dependent. We use the annotated ESL data and define adapted candidate priors that are dependent on the author’s preposition and the author’s source language. Let s be a preposition appearing in text by a writer of source language L1, and p a correction candidate. Then the adapted prior of p given s is: prior(p, s, L1) = CL1(s, p) CL1(s) , where CL1(s) denotes the number of times s appeared in the ESL data by L1 writers, and CL1(s, p) denotes the number of times p was the correct preposition when s was used by an L1 writer. Table 8 shows adapted candidate priors for two author’s choices – when an ESL writer used on and at – based on the data from Chinese learners. One key distinction of the adapted priors is the high probability assigned to the author’s preposition: the new prior for on given that it is also the preposition found in text is 0.70, vs. the 0.07 prior based on the native data. The adapted prior of preposition p, when p is used, is always high, because the majority of prepositions are used correctly. Higher probabilities are also assigned to those candidates that are most often observed as corrections for the author’s preposition. For example, the adapted prior for at when the writer chose on is 0.10, since on is frequently incorrectly chosen instead of at. To determine a mechanism to inject the adapted priors into a model, we note that while all of our models use priors in some way, NB architecture directly specifies the prior probability as one of its parameters (Sec. 2.3). We thus train NB in a traditional way, on native data, and then replace the prior component in Eq. 
(3) with the adapted prior, language and preposition dependent, to get the score for p of the NB-adapted model: g(S, p) = log{prior(p, s, L1) · Y f∈F(S,p) P(f|p)} 930 Candidate Global Adapted prior prior author’s prior author’s prior choice choice of 0.25 on 0.03 at 0.02 to 0.22 on 0.06 at 0.00 in 0.15 on 0.04 at 0.16 for 0.10 on 0.00 at 0.03 on 0.07 on 0.70 at 0.09 by 0.06 on 0.00 at 0.02 with 0.06 on 0.04 at 0.00 at 0.04 on 0.10 at 0.75 from 0.04 on 0.00 at 0.02 about 0.01 on 0.03 at 0.00 Table 8: Examples of adapted candidate priors for two author’s choices – on and at – based on the errors made by Chinese learners. Global prior denotes the probability of the candidate in the standard model and is based on the relative frequency of the candidate in native training data. Adapted priors are dependent on the author’s preposition and the author’s first language. Adapted priors for the author’s choice are very high. Other candidates are given higher priors if they often appear as corrections for the author’s choice. We stress that in the new method there is no need to train per source language, as with previous adaption methods. Only one model is trained, and only at decision time, we change the prior probabilities of the model. Also, while we need a lot of data to train the model, only one parameter depends on annotated data. Therefore, with rather small amounts of data, it is possible to get reasonably good estimates of these prior parameters. In the experiments below, we compare four models: AP, NB AP-adapted and NB-adapted. APadapted is the adaptation through artificial errors and NB-adapted is the method proposed here. Both of the adapted models use the same error statistics in k-fold cross-validation (CV): We randomly partition the ESL data into k parts, with each part tested on the model that uses error statistics estimated on the remaining k −1 parts. We also remove all preposition errors that occurred only once (23% of all errors) to allow for a better evaluation of the adapted models. Although we observe similar behavior on all the data, the models especially benefit from the adapted priors when a particular error occurred more than once. Since the majority of errors are not due to chance, we focus on those errors that the writers will make repeatedly. Fig. 3 shows the four models trained on WikiNY T-107. First, we note that the adapted 0 10 20 30 40 50 60 70 80 0 10 20 30 40 50 PRECISION RECALL NB-adapted AP-adapted AP NB Figure 3: Adapting to Writer’s Source Language. NBadapted is the method proposed here. AP-adapted and NB-adapted results are obtained using 2-fold CV, with 50% of the ESL data used for estimating the new priors. All models are trained on WikiNY T-107. models outperform their non-adapted counterparts with respect to precision. Second, for the recall points less than 20%, the adapted models obtain very similar precision values. This is interesting, especially because NB does not perform as well as AP, as we also showed in Sec. 3.3. Thus, NB-adapted not only improves over NB, but its gap compared to the latter is much wider than the gap between the APbased systems. Finally, an important performance distinction between the two adapted models is the loss in recall exhibited by AP-adapted – its curve is shorter because AP-adapted is very conservative and does not propose many corrections. In contrast, NBadapted succeeds in improving its precision over NB with almost no recall loss. 
To evaluate the effect of the size of the data used to estimate the new priors, we compare the performance of NB-adapted models in three settings: 2fold CV, 10-fold CV, and Leave-One-Out (Figure 4). In 2-fold CV, priors are estimated on 50% of the ESL data, in 10-fold on 90%, and in Leave-One-Out on all data but the testing example. Figure 4 shows the averaged results over 5 runs of CV for each setting. The model converges very quickly: there is almost no difference between 10-fold CV and Leave-OneOut, which suggests that we can get a good estimate of the priors using just a little annotated data. Table 9 compares NB and NB-adapted for two corpora: WikiNY T-107 and GoogleWeb. Since 931 0 10 20 30 40 50 60 70 80 90 0 10 20 30 40 50 60 PRECISION RECALL NB-adapted-LeaveOneOut NB-adapted-10-fold NB-adapted-2-fold NB Figure 4: How much data are needed to estimate adapted priors. Comparison of NB-adapted models trained on GoogleWeb that use different amounts of data to estimate the new priors. In 2-fold CV, priors are estimated on 50% of the data; in 10-fold on 90% of the data; in Leave-One-Out, the new priors are based on all the data but the testing example. GoogleWeb is several orders of magnitude larger, the adapted model behaves better for this corpus. So far, we have discussed performance in terms of precision and recall, but we can also discuss it in terms of accuracy, to see how well the algorithm is performing compared to the baseline on the task. Following Rozovskaya and Roth (2010c), we consider as the baseline the accuracy of the ESL data before applying the model12, or the percentage of prepositions used correctly in the test data. From Table 3, the baseline is 93.44%13. Compared to this high baseline, NB trained on WikiNY T-107 achieves an accuracy of 93.54, and NB-adapted achieves an accuracy of 93.9314. Training data Algorithms NB NB-adapted WikiNY T-107 29 53 GoogleWeb 38 62 Table 9: Adapting to writer’s source language. Results are reported in terms of AAUC. NB-adapted is the model with adapted priors. Results for NB-adapted are based on 10-fold CV. 12Note that this baseline is different from the majority baseline used in the preposition selection task, since here we have the author’s preposition in text. 13This is the baseline after removing the singleton errors. 14We select the best accuracy among different values that can be achieved by varying the decision threshold. 5 Conclusion We have addressed two important issues in ESL error correction, which are essential to making progress in this task. First, we presented an extensive, fair comparison of four popular linear learning models for the task and demonstrated that there are significant performance differences between the approaches. Since all of the algorithms presented here are linear, the only difference is in how they learn the weights. Our experiments demonstrated that the discriminative approach (AP) is able to generalize better than any of the other models. These results correct earlier conclusions, made with incomparable data sets. The model comparison was performed using two popular tasks – correcting errors in article and preposition usage – and we expect that our results will generalize to other ESL correction tasks. The second, and most important, contribution of the paper is a novel method that allows one to adapt the learned model to the source language of the writer. 
We showed that error patterns can be viewed as a distribution on priors over the correction candidates and proposed a method of injecting the adapted priors into the learned model. In addition to performing much better than the previous approaches, this method is also very cheap to implement, since it does not require training a separate model for each source language, but adapts the system to the writer’s language at decision time. Acknowledgments The authors thank Nick Rizzolo for many helpful discussions. The authors also thank Josh Gioja, Nick Rizzolo, Mark Sammons, Joel Tetreault, Yuancheng Tu, and the anonymous reviewers for their insightful comments. This research is partly supported by a grant from the U.S. Department of Education. References S. Bergsma, D. Lin, and R. Goebel. 2009. Web-scale n-gram models for lexical disambiguation. In 21st International Joint Conference on Artificial Intelligence, pages 1507–1512. J. Bitchener, S. Young, and D. Cameron. 2005. The effect of different types of corrective feedback on ESL student writing. Journal of Second Language Writing. A. Carlson, J. Rosen, and D. Roth. 2001. Scaling up context sensitive text correction. In Proceedings of the 932 National Conference on Innovative Applications of Artificial Intelligence (IAAI), pages 45–50. Y. S. Chan and H. T. Ng. 2005. Word sense disambiguation with distribution estimation. In Proceedings of IJCAI 2005. S. Chen and J. Goodman. 1996. An empirical study of smoothing techniques for language modeling. In Proceedings of ACL 1996. M. Chodorow, J. Tetreault, and N.-R. Han. 2007. Detection of grammatical errors involving prepositions. In Proceedings of the Fourth ACL-SIGSEM Workshop on Prepositions, pages 25–30, Prague, Czech Republic, June. Association for Computational Linguistics. G. Dalgish. 1985. Computer-assisted ESL research. CALICO Journal, 2(2). J. Eeg-Olofsson and O. Knuttson. 2003. Automatic grammar checking for second language learners - the use of prepositions. Nodalida. A. Elghaari, D. Meurers, and H. Wunsch. 2010. Exploring the data-driven prediction of prepositions in english. In Proceedings of COLING 2010, Beijing, China. R. De Felice and S. Pulman. 2008. A classifier-based approach to preposition and determiner error correction in L2 English. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 169–176, Manchester, UK, August. Y. Freund and R. E. Schapire. 1999. Large margin classification using the perceptron algorithm. Machine Learning, 37(3):277–296. M. Gamon, J. Gao, C. Brockett, A. Klementiev, W. Dolan, D. Belenko, and L. Vanderwende. 2008. Using contextual speller techniques and language modeling for ESL error correction. In Proceedings of IJCNLP. M. Gamon. 2010. Using mostly native data to correct errors in learners’ writing. In NAACL, pages 163–171, Los Angeles, California, June. A. R. Golding and D. Roth. 1999. A Winnow based approach to context-sensitive spelling correction. Machine Learning, 34(1-3):107–130. N. Han, M. Chodorow, and C. Leacock. 2006. Detecting errors in English article usage by non-native speakers. Journal of Natural Language Engineering, 12(2):115– 129. N. Han, J. Tetreault, S. Lee, and J. Ha. 2010. Using an error-annotated learner corpus to develop and ESL/EFL error correction system. In LREC, Malta, May. J. Hanley and B. McNeil. 1983. A method of comparing the areas under receiver operating characteristic curves derived from the same cases. Radiology, 148(3):839– 843. E. Izumi, K. Uchimoto, T. Saiga, T. 
Supnithi, and H. Isahara. 2003. Automatic error detection in the Japanese learners’ English spoken data. In The Companion Volume to the Proceedings of 41st Annual Meeting of the Association for Computational Linguistics, pages 145–148, Sapporo, Japan, July. C. Leacock, M. Chodorow, M. Gamon, and J. Tetreault. 2010. Morgan and Claypool Publishers. J. Lee and S. Seneff. 2008. An analysis of grammatical errors in non-native speech in English. In Proceedings of the 2008 Spoken Language Technology Workshop. V. Punyakanok, D. Roth, and W. Yih. 2008. The importance of syntactic parsing and inference in semantic role labeling. Computational Linguistics, 34(2). N. Rizzolo and D. Roth. 2007. Modeling Discriminative Global Inference. In Proceedings of the First International Conference on Semantic Computing (ICSC), pages 597–604, Irvine, California, September. IEEE. D. Roth. 1998. Learning to resolve natural language ambiguities: A unified approach. In Proceedings of the National Conference on Artificial Intelligence (AAAI), pages 806–813. D. Roth. 1999. Learning in natural language. In Proc. of the International Joint Conference on Artificial Intelligence (IJCAI), pages 898–904. A. Rozovskaya and D. Roth. 2010a. Annotating ESL errors: Challenges and rewards. In Proceedings of the NAACL Workshop on Innovative Use of NLP for Building Educational Applications. A. Rozovskaya and D. Roth. 2010b. Generating confusion sets for context-sensitive error correction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). A. Rozovskaya and D. Roth. 2010c. Training paradigms for correcting errors in grammar and usage. In Proceedings of the NAACL-HLT. A. Stolcke. 2002. Srilm-an extensible language modeling toolkit. In Proceedings International Conference on Spoken Language Processing, pages 257–286, November. J. Tetreault and M. Chodorow. 2008. The ups and downs of preposition error detection in ESL writing. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 865–872, Manchester, UK, August. J. Tetreault, J. Foster, and M. Chodorow. 2010. Using parse features for preposition selection and error detection. In ACL. 933
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 934–944, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Automated Whole Sentence Grammar Correction Using a Noisy Channel Model Y. Albert Park Department of Computer Science and Engineering 9500 Gilman Drive La Jolla, CA 92037-404, USA [email protected] Roger Levy Department of Linguistics 9500 Gilman Drive La Jolla, CA 92037-108, USA [email protected] Abstract Automated grammar correction techniques have seen improvement over the years, but there is still much room for increased performance. Current correction techniques mainly focus on identifying and correcting a specific type of error, such as verb form misuse or preposition misuse, which restricts the corrections to a limited scope. We introduce a novel technique, based on a noisy channel model, which can utilize the whole sentence context to determine proper corrections. We show how to use the EM algorithm to learn the parameters of the noise model, using only a data set of erroneous sentences, given the proper language model. This frees us from the burden of acquiring a large corpora of corrected sentences. We also present a cheap and efficient way to provide automated evaluation results for grammar corrections by using BLEU and METEOR, in contrast to the commonly used manual evaluations. 1 Introduction The process of editing written text is performed by humans on a daily basis. Humans work by first identifying the writer’s intent, and then transforming the text so that it is coherent and error free. They can read text with several spelling errors and grammatical errors and still easily identify what the author originally meant to write. Unfortunately, current computer systems are still far from such capabilities when it comes to the task of recognizing incorrect text input. Various approaches have been taken, but to date it seems that even many spell checkers such as Aspell do not take context into consideration, which prevents them from finding misspellings which have the same form as valid words. Also, current grammar correction systems are mostly rule-based, searching the text for defined types of rule violations in the English grammar. While this approach has had some success in finding various grammatical errors, it is confined to specifically defined errors. In this paper, we approach this problem by modeling various types of human errors using a noisy channel model (Shannon, 1948). Correct sentences are produced by a predefined generative probabilistic model, and lesioned by the noise model. We learn the noise model parameters using an expectation-maximization (EM) approach (Dempster et al., 1977; Wu, 1983). Our model allows us to deduce the original intended sentence by looking for the the highest probability parses over the entire sentence, which leads to automated whole sentence spelling and grammar correction based on contextual information. In Section 2, we discuss previous work, followed by an explanation of our model and its implementation in Sections 3 and 4. In Section 5 we present a novel technique for evaluating the task of automated grammar and spelling correction, along with the data set we collected for our experiments. Our experiment results and discussion are in Section 6. Section 7 concludes this paper. 2 Background Much of the previous work in the domain of automated grammar correction has focused on identi934 fying grammatical errors. 
Chodorow and Leacock (2000) used an unsupervised approach to identifying grammatical errors by looking for contextual cues in a ±2 word window around a target word. To identify errors, they searched for cues which did not appear in the correct usage of words. Eegolofsson and Knutsson (2003) used rule-based methods to approach the problem of discovering preposition and determiner errors of L2 writers, and various classifier-based methods using Maximum Entropy models have also been proposed (Izumi et al., 2003; Tetreault and Chodorow, 2008; De Felice and Pulman, 2008). Some classifier-based methods can be used not only to identify errors, but also to determine suggestions for corrections by using the scores or probabilities from the classifiers for other possible words. While this is a plausible approach for grammar correction, there is one fundamental difference between this approach and the way humans edit. The output scores of classifiers do not take into account the observed erroneous word, changing the task of editing into a fill-in-the-blank selection task. In contrast, editing makes use of the writer’s erroneous word which often encompasses information neccessary to correctly deduce the writer’s intent. Generation-based approaches to grammar correction have also been taken, such as Lee and Seneff (2006), where sentences are paraphrased into an over-generated word lattice, and then parsed to select the best rephrasing. As with the previously mentioned approaches, these approaches often have the disadvantage of ignoring the writer’s selected word when used for error correction instead of just error detection. Other work which relates to automated grammar correction has been done in the field of machine translation. Machine translation systems often generate output which is grammatically incorrect, and automated post-editing systems have been created to address this problem. For instance, when translating Japanese to English, the output sentence needs to be edited to include the correct articles, since the Japanese language does not contain articles. Knight and Chander (1994) address the problem of selecting the correct article for MT systems. These types of systems could also be used to facilitate grammar correction. While grammar correction can be used on the output of MT systems, note that the task of grammar correction itself can also be thought of as a machine translation task, where we are trying to ‘translate’ a sentence from an ‘incorrect grammar’ language to a ‘correct grammar’ language. Under this idea, the use of statistical machine translation techniques to correct grammatical errors has also been explored. Brockett et al. (2006) uses phrasal SMT techniques to identify and correct mass noun errors of ESL students. D´esilets and Hermet (2009) use a round-trip translation from L2 to L1 and back to L2 to correct errors using an SMT system, focusing on errors which link back to the writer’s native language. Despite the underlying commonality between the tasks of machine translation and grammar correction, there is a practical difference in that the field of grammar correction suffers from a lack of good quality parallel corpora. While machine translation has taken advantage of the plethora of translated documents and books, from which various corpora have been built, the field of grammar correction does not have this luxury. 
Annotated corpora of grammatical errors do exist, such as the NICT Japanese Learner of English corpus and the Chinese Learner English Corpus (Shichun and Huizhong, 2003), but the lack of definitive corpora often makes obtaining data for use in training models a task within itself, and often limits the approaches which can be taken. Using classification or rule-based systems for grammatical error detection has proven to be successful to some extent, but many approaches are not sufficient for real-world automated grammar correction for various of reasons. First, as we have already mentioned, classification systems and generationbased systems do not make full use of the given data when trying to make a selection. This limits the system’s ability to make well-informed edits which match the writer’s original intent. Second, many of the systems start with the assumption that there is only one type of error. However, ESL students often make several combined mistakes in one sentence. These combined mistakes can throw off error detection/correction schemes which assume that the rest of the sentence is correct. For example, if a student erroneously writes ‘much poeple’ instead of ‘many people’, a system trying to correct ‘many/much’ errors may skip correction of much to many because it does not have any reference to the misspelled word 935 ‘poeple’. Thus there are advantages in looking at the sentence as a whole, and creating models which allow several types of errors to occur within the same sentence. We now present our model, which supports the addition of various types of errors into one combined model, and derives its response by using the whole of the observed sentence. 3 Base Model Our noisy channel model consists of two main components, a base language model and a noise model. The base language model is a probabilistic language model which generates an ‘error-free’ sentence1 with a given probability. The probabilistic noise model then takes this sentence and decides whether or not to make it erroneous by inserting various types of errors, such as spelling mistakes, article choice errors, wordform choice errors, etc., based on its parameters (see Figure 1 for example). Using this model, we can find the posterior probability p(Sorig|Sobs) using Bayes rule where Sorig is the original sentence created by our base language model, and Sobs is the observed erroneous sentence. p(Sorig|Sobs) = p(Sobs|Sorig)p(Sorig) p(Sobs) For the language model, we can use various known probabilistic models which already have defined methods for learning the parameters, such as n-gram models or PCFGs. For the noise model, we need some way to learn the parameters for the mistakes that a group of specified writers (such as Korean ESL students) make. We address this issue in Section 4. Using this model, we can find the highest likelihood error-free sentence for an observed output sentence by tracing all possible paths from the language model through the noise model and ending in the observed sentence as output. 4 Implementation To actually implement our model, we use a bigram model for the base language model, and various noise models which introduce spelling errors, article choice errors, preposition choice errors, etc. 1In reality, the language model will most likely produce sentences with errors as seen by humans, but from the modeling perspective, we assume that the language model is a perfect representation of the language for our task. 
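As a concrete illustration of the noisy channel formulation, the sketch below scores a handful of candidate original sentences for one observed sentence. The bigram and per-word noise probabilities are invented, and candidates are enumerated directly rather than via the transducer machinery described next; the winning candidate is the one maximizing p(Sobs | Sorig) p(Sorig).

```python
import math

def lm_logprob(sentence, bigram_lp):
    """log p(S_orig) under a toy bigram language model."""
    toks = ["<s>"] + sentence.split() + ["</s>"]
    return sum(bigram_lp.get((a, b), math.log(1e-8))
               for a, b in zip(toks, toks[1:]))

def noise_logprob(orig, obs, edit_lp, no_edit_lp=math.log(0.98)):
    """log p(S_obs | S_orig) under independent per-word edits."""
    s = 0.0
    for o, w in zip(orig.split(), obs.split()):
        s += no_edit_lp if o == w else edit_lp.get((o, w), math.log(1e-8))
    return s

def decode(observed, candidates, bigram_lp, edit_lp):
    """argmax over S_orig of p(S_obs | S_orig) * p(S_orig)."""
    return max(candidates, key=lambda c: lm_logprob(c, bigram_lp) +
               noise_logprob(c, observed, edit_lp))

bigram_lp = {("<s>", "many"): math.log(0.04), ("many", "people"): math.log(0.2),
             ("<s>", "much"): math.log(0.04), ("much", "people"): math.log(0.001),
             ("people", "</s>"): math.log(0.3)}
edit_lp = {("many", "much"): math.log(0.01), ("people", "poeple"): math.log(0.005)}
print(decode("much poeple",
             ["much poeple", "many people", "much people", "many poeple"],
             bigram_lp, edit_lp))  # -> 'many people'
```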
Figure 1: Example of noisy channel model All models are implemented using weighted finitestate tranducers (wFST). For operations on the wFSTs, we use OpenFST (Allauzen et al., 2007), along with expectation semiring code supplied by Markus Dryer for Dreyer et al. (2008). 4.1 Base language model The base language model is a bigram model implemented by using a weighted finite-state transducer (wFST). The model parameters are learned from the British National Corpus modified to use American English spellings with Kneser-Ney smoothing. To lower our memory usage, only bigrams whose words are found in the observed sentences, or are determined to be possible candidates for the correct words of the original sentence (due to the noise models) are used. While we use a bigram model here for simplicity, any probabilistic language model having a tractable intersection with wFSTs could be used. For the bigram model, each state in the wFST represents a bigram context, except the end state. The arcs of the wFST are set so that the weight is the bigram probability of the output word given the context specified by the from state, and the output word is a word of the vocabulary. Thus, given a set of n words in the vocabulary, the language model wFST had one start state, from which n arcs extended to each of their own context states. From each of these nodes, n + 1 arcs extend to each of the n context states and the end state. Thus the number of states in the language model is n + 2 and the number of arcs is O(n2). 4.2 Noise models For our noise model, we created a weighted finitestate transducer (wFST) which accepts error-free input, and outputs erroneous sentences with a specified probability. To model various types of human errors, we created several different noise models and 936 Figure 2: Example of noise model composed them together, creating a layered noise model. The noise models we implement are spelling errors, article choice errors, preposition choice errors, and insertion errors, which we will explain in more detail later in this section. The basic design of each noise wFST starts with an initial state, which is also the final state of the wFST. For each word found in the language model, an arc going from the initial state to itself is created, with the input and output values set as the word. These arcs model the case of no error being made. In addition to these arcs, arcs representing prediction errors are also inserted. For example, in the article choice error model, an arc is added for each possible (input, output) article pair, such as a:an for making the mistake of writing an instead of a. The weights of the arcs are the probabilities of introducing errors, given the input word from the language model. For example, the noise model shown in Figure 2 shows a noise model in which a will be written correctly with a probability of 0.9, and will be changed to an or the with probabilities 0.03 and 0.07, respectively. For this model to work correctly, the setting of the probabilities for each error is required. How this is done is explained in Section 4.3. 4.2.1 Spelling errors The spelling error noise model accounts for spelling errors made by writers. For spelling errors, we allowed all spelling errors which were a Damerau-Levenshtein distance of 1 (Damerau, 1964; Levenshtein, 1966). While allowing a DL distance of 2 or higher may likely have better performance, the model was constrained to a distance of 1 due to memory constraints. We specified one parameter λn for each possible word length n. 
This parameter is the total probability of making a spelling error for a given word length. For each word length we distributed the probability of each possible spelling error equally. Thus for word length n, we have n deletion errors, 25n substitution errors, n −1 transposition errors, and 26(n + 1) insertion errors, and the probability for each possible error is λn n+25n+n−1+26(n+1). We set the maximum word length for spelling errors to 22, giving us 22 parameters. 4.2.2 Article choice errors The article choice error noise model simulates incorrect selection of articles. In this model we learn n(n−1) parameters, one for each article pair. Since there are only 3 articles (a, an, the), we only have 6 parameters for this model. 4.2.3 Preposition choice errors The preposition choice error noise model simulates incorrect selection of prepositions. We take the 12 most commonly misused prepositions by ESL writers (Gamon et al., 2009) and specify one parameter for each preposition pair, as we do in the article choice error noise model, giving us a total of 132 parameters. 4.2.4 Wordform choice errors The wordform choice error noise model simulates choosing the incorrect wordform of a word. For example, choosing the incorrect tense of a verb (e.g. went→go), or the incorrect number marking on a noun or verb (e.g. are→is) would be a part of this model. This error model has one parameter for every number of possible inflections, up to a maximum of 12 inflections, giving us 12 parameters. The parameter is the total probability of choosing the wrong inflection of a word, and the probability is spread evenly between each possible inflection. We used CELEX (Baayen et al., 1995) to find all the possible wordforms of each observed word. 4.2.5 Word insertion errors The word insertion error model simulates the addition of extraneous words to the original sentence. We create a list of words by combining the prepositions and articles found in the article choice and preposition choice errors. We assume that the words on the list have a probability of being inserted erroneously. There is a parameter for each word, which 937 is the probability of that word being inserted. Thus we have 15 parameters for this noise model. 4.3 Learning noise model parameters To achieve maximum performance, we wish to learn the parameters of the noise models. If we had a large set of erroneous sentences, along with a handannotated list of the specific errors and their corrections, it would be possible to do some form of supervised learning to find the parameters. We looked at the NICT Japanese Learner of English (JLE) corpus, which is a corpus of transcripts of 1,300 Japanese learners’ English oral proficiency interview. This corpus has been annotated using an error tagset (Izumi et al., 2004). However, because the JLE corpus is a set of transcribed sentences, it is in a different domain from our task. The Chinese Learner English Corpus (CLEC) contains erroneous sentences which have been annotated, but the CLEC corpus had too many manual errors, such as typos, as well as many incorrect annotations, making it very difficult to automate the processing. Many of the corrections themselves were also incorrect. We were not able to find of a set of annotated errors which fit our task, nor are we aware that such a set exists. Instead, we collected a large data set of possibly erroneous sentences from Korean ESL students (Section 5.1). 
Since these sentences are not annotated, we need to use an unsupervised learning method to learn our parameters. To learn the parameters of the noise models, we assume that the collected sentences are random outputs of our model, and train the model using the EM algorithm. This is done by making use of the V-expectation semiring (Eisner, 2002). The V-expectation semiring is a semiring in which the weight is defined as R≥0 × V, where the real component keeps track of the probability, and V is a vector which can be used to record arc traversal counts or feature counts. The weight of each arc in the noise models is set so that the real value is the probability and the vector V denotes which choice (having a specified error or not) is made by selecting the arc.

We create a generative language-noise model by composing the language model wFST with the noise model wFSTs, as shown in Figure 3.

Figure 3: Example of language model (top) and noise model (middle) wFST composition. The vector of the V-expectation semiring weight is in brackets. The first value of the vector denotes no error being made on writing 'a' and the second value denotes the error of writing 'an' instead of 'a'.

By using the expectation semiring, we can keep track of the probability of each path going over an erroneous arc or a non-erroneous arc. Once our model is set up for the E step using the initial parameters, we must compute the expected number of noise model arc traversals for use in calculating our new parameters. To do this, we need to find, for each observed sentence, all possible paths resulting in the observed sentence as output. Then, for each possible path, we calculate the probability of the path given the output sentence, and collect the expected counts of going over each erroneous and error-free arc to learn the parameters of the noise model.

To find a wFST with just the possible paths for each observed sentence, we compose the language-noise wFST with the observed sentence wFST. The observed sentence wFST is created in the following manner. Given an observed sentence, an initial state is created. For each word in the sentence, in the order appearing in the sentence, a new state is added, and an arc is created going from the previously added state to the newly added state. The new arc takes the observed word as input and also uses it as output. The weight/probability for each arc is set to 1. Composing the sentence wFST with the language-noise wFST has the effect of restricting the new wFST to only those sentences of the language-noise wFST which output the observed sentence. We now have a new wFST where all valid paths are the paths which can produce the observed sentence. To find the total weight of all paths, we first change all input and output symbols into the empty string. Since all arcs in this wFST are now epsilon arcs, we can use the epsilon-removal operation (Mohri, 2002), which reduces the wFST to one state with no arcs. This operation combines the total weight of all paths into the final weight of the sole state, giving us the total expectation value for that sentence.
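The expectation bookkeeping just described can be pictured at a toy scale with the Python sketch below. It is not the wFST implementation used in the paper (which relies on OpenFST and the expectation semiring code); the path and arc-label data structures are hypothetical, but the sketch shows how posterior path probabilities turn into expected arc-traversal counts for the M step.

    from collections import defaultdict

    def expected_arc_counts(paths):
        """paths: list of (path_prob, choices), where path_prob is the joint
        probability of one path through the composed wFST and choices maps an
        arc label such as ("article", "a->an") or ("article", "no_error") to
        how often the path traverses it."""
        total = sum(p for p, _ in paths)              # total weight = P(observed sentence)
        expected = defaultdict(float)
        for prob, choices in paths:
            posterior = prob / total                  # P(path | observed sentence)
            for arc, count in choices.items():
                expected[arc] += posterior * count
        return expected

    # toy example: two competing explanations of the same observed sentence
    paths = [(0.03, {("article", "a->an"): 1, ("article", "no_error"): 2}),
             (0.01, {("article", "no_error"): 3})]
    print(dict(expected_arc_counts(paths)))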
By doing this for each sentence, and adding the expectation values for each sentence, we can easily compute the expectation step, from which we can find the maximizing parameters and update our parameters accordingly. 4.4 Finding the maximum likelihood correction Once the parameters are learned, we can use our model to find the maximum likelihood error-free sentence. This is done by again creating the language model and noise model with the learned parameters, but this time we set the weights of the noise model to just the probabilities, using the log semiring, since we do not need to keep track of expected values. We also set the language model input for each arc to be the same word as the output, instead of using an empty string. Once again, we compose the language model with the noise models. We create a sentence wFST using the observed sentence we wish to correct, the same way the observed sentence wFST for training was created. This is now composed with the language-noise wFST. Now all we need to do is find the shortest path (when using minus-log probabilities) of the new wFST, and the input to that path will be our corrected sentence. 5 Experiment We now present the data set and evaluation technique used for our experiments. 5.1 Data Set To train our noise models, we collected around 25,000 essays comprised of 478,350 sentences written by Korean ESL students preparing for the TOEFL writing exam. These were collected from open web postings by Korean ESL students asking for advice on their writing samples. In order to automate the process, a program was written to download the posts, and discard the posts that were deemed too short to be TOEFL writing samples. Also discarded were the posts that had a “[re” or “re..” in the title. Next, all sentences containing Korean were removed, after which some characters were changed so that they were in ASCII form. The remaining text was separated into sentences solely by punctuation marks ., !, and ?. This resulted in the 478,350 sentences stated above. Due to the process, some of the sentences collected are actually sentence fragments, where punctuation had been misused. For training and evaluation purposes, the data set was split into a test set with 504 randomly selected sentences, an evaluation set of 1017 randomly selected sentences, and a training set composed of the remaining sentences. 5.2 Evaluation technique In the current literature, grammar correction tasks are often manually evaluated for each output correction, or evaluated by taking a set of proper sentences, artificially introducing some error, and seeing how well the algorithm fixes the error. Manual evaluation of automatic corrections may be the best method for getting a more detailed evaluation, but to do manual evaluation for every test output requires a large amount of human resources, in terms of both time and effort. In the case where artificial lesioning is introduced, the lesions may not always reflect the actual errors found in human data, and it is difficult to replicate the actual tendency of humans to make a variety of different mistakes in a single sentence. Thus, this method of evaluation, which may be suitable for evaluating the correction performance of specific grammatical errors, would not be fit for evaluating our model’s overall performance. For evaluation of the given task, we have incorporated evaluation techniques based on current evaluation techniques used in machine translation, BLEU (Papineni et al., 2002) and METEOR (Lavie and Agarwal, 2007). 
Machine translation addresses the problem of changing a sentence in one language to a sentence of another. The task of correcting erroneous sentences can also be thought of as translating a sentence from a given language A, to another language B, where A is a broken language, and B is the correct language. Under this context, we can apply machine translation evaluation techniques to evaluate the performance of our system. Our model’s sentence correc939 tions can be thought of as the output translation to be evaluated. In order to use BLEU and METEOR, we need to have reference translations on which to score our output. As we have already explained in section 5.1, we have a collection of erroneous sentences, but no corrections. To obtain manually corrected sentences for evaluation, the test and evaluation set sentences and were put on Amazon Mechanical Turk as a correction task. Workers residing in the US were asked to manually correct the sentences in the two sets. Workers had a choice of selecting ‘Impossible to understand’, ‘Correct sentence’, or ‘Incorrect sentence’, and were asked to correct the sentences so no spelling errors, grammatical errors, or punctuation errors were present. Each sentence was given to 8 workers, giving us a set of 8 or fewer corrected sentences for each erroneous sentence. We asked workers not to completely rewrite the sentences, but to maintain the original structure as much as possible. Each hit was comprised of 6 sentences, and the reward for each hit was 10 cents. To ensure the quality of our manually corrected sentences, a native English speaker research assistant went over each of the ‘corrected’ sentences and marked them as correct or incorrect. We then removed all the incorrect ‘corrections’. Using our manually corrected reference sentences, we evaluate our model’s correction performance using METEOR and BLEU. Since METEOR and BLEU are fully automated after we have our reference translations (manual corrections), we can run evaluation on our tests without any need for further manual input. While these two evaluation methods were created for machine translation, they also have the potential of being used in the field of grammar correction evaluation. One difference between machine translation and our task is that finding the right lemma is in itself something to be rewarded in MT, but is not sufficient for our task. In this respect, evaluation of grammar correction should be more strict. Thus, for METEOR, we used the ‘exact’ module for evaluation. To validate our evaluation method, we ran a simple test by calculating the METEOR and BLEU scores for the observed sentences, and compared them with the scores for the manually corrected sentences, to test for an expected increase. The scores for each correction were evaluated using the set of METEOR BLEU Original ESL sentences 0.8327 0.7540 Manual corrections 0.9179 0.8786 Table 1: BLEU and METEOR scores for ESL sentences vs manual corrections on 100 randomly chosen sentences METEOR BLEU Aspell 0.824144 0.719713 Spelling noise model 0.825001 0.722383 Table 2: Aspell vs Spelling noise model corrected sentences minus the correction sentence being evaluated. For example, let us say we have the observed sentence o, and correction sentences c1, c2, c3 and c4 from Mechanical Turk. We run METEOR and BLEU on both o and c1 using c2, c3 and c4 as the reference set. We repeat the process for o and c2, using c1, c3 and c4 as the reference, and so on, until we have run METEOR and BLEU on all 4 correction sentences. 
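A small sketch of this leave-one-out scoring scheme is given below. The score_fn argument stands in for BLEU or METEOR (e.g. a wrapper around an existing implementation); the simple token-overlap scorer used in the demo is only a placeholder, not the metric used in the paper, and the example sentences are invented.

    def leave_one_out_scores(observed, corrections, score_fn):
        """Score the observed sentence and each correction against the
        remaining corrections, as described above."""
        results = {"observed": [], "corrections": []}
        for i, held_out in enumerate(corrections):
            refs = corrections[:i] + corrections[i + 1:]
            results["observed"].append(score_fn(observed, refs))
            results["corrections"].append(score_fn(held_out, refs))
        return results

    def token_overlap(hyp, refs):
        # placeholder scorer: fraction of hypothesis tokens found in the best
        # matching reference; BLEU or METEOR would be plugged in here instead
        tokens = hyp.split()
        return max(sum(t in r.split() for t in tokens) / len(tokens) for r in refs)

    obs = "So i can reach the theater in ten minuets by foot"
    corrs = ["So I can reach the theater in ten minutes by foot",
             "So I can reach the theater in ten minutes on foot",
             "I can reach the theater in ten minutes on foot"]
    print(leave_one_out_scores(obs, corrs, token_overlap))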
With a set of 100 manually labeled sentences, the average METEOR score for the ESL sentences was 0.8327, whereas the corrected sentences had an average score of 0.9179. For BLEU, the average scores were 0.7540 and 0.8786, respectively, as shown in Table 1. Thus, we have confirmed that the corrected sentences score higher than the ESL sentence. It is also notable that finding corrections for the sentences is a much easier task than finding various correct translations, since the task of editing is much easier and can be done by a much larger set of qualified people. 6 Results For our experiments, we used 2000 randomly selected sentences for training, and a set of 1017 annotated sentences for evaluation. We also set aside a set of 504 annotated sentences as a development set. With the 2000 sentence training, the performance generally converged after around 10 iterations of EM. 6.1 Comparison with Aspell To check how well our spelling error noise model is doing, we compared the results of using the spelling error noise model with the output results of using the GNU Aspell 0.60.6 spelling checker. Since we 940 METEOR ↑ ↓ BLEU ↑ ↓ ESL Baseline 0.821000 0.715634 Spelling only 0.825001 49 5 0.722383 53 8 Spelling, Article 0.825437 55 6 0.723022 59 9 Spelling, Preposition 0.824157 52 17 0.720702 55 19 Spelling, Wordform 0.825654 81 25 0.723599 85 27 Spelling, Insertion 0.825041 52 5 0.722564 56 8 Table 3: Average evaluation scores for various noise models run on 1017 sentences, along with counts of sentences with increased (↑) and decreased (↓) scores. All improvements are significant by the binomial test at p < 0.001 are using METEOR and BLEU for our evaluation metric, we needed to get a set of corrected sentences for using Aspell. Aspell lists the suggested spelling corrections of misspelled words in a ranked order, so we replaced each misspelled word found by Aspell with the word with the highest rank (lowest score) for the Aspell corrections. One difference between Aspell and our model is that Aspell only corrects words which do not appear in the dictionary, while our method looks at all words, even those found in the dictionary. Thus our model can correct words which look correct by themselves, but seem to be incorrect due to the bigram context. Another difference is that Aspell has the capability to split words, whereas our model does not allow the insertion of spaces. A comparison of the scores is shown in Table 2. We can see that our model has better performance, due to better word selection, despite the advantage that Aspell has by using phonological information to find the correct word, and the disadvantage that our model is restricted to spellings which are within a Damerau-Levenstein distance of 1. This is due to the fact that our model is context-sensitive, and can use other information in addition to the misspelled word. For example, the sentence ‘In contast, high prices of products would be the main reason for dislike.’ was edited in Aspell by changing ‘contast’ to ‘contest’, while our model correctly selected ‘contrast’. The sentence ‘So i can reach the theater in ten minuets by foot’ was not edited by Aspell, but our model changed ‘minuets’ to ‘minutes’. Another difference that can be seen by looking through the results is that Aspell changes every word not found in the dictionary, while our algorithm allows words it has not seen by treating them as unknown tokens. 
Since we are using smoothing, these tokens are left in place if there is no other high probability bigram to take its place. This helps leave intact the proper nouns and words not in the vocabulary. 6.2 Noise model performance and output Our next experiment was to test the performance of our model on various types of errors. Table 3 shows the BLEU and METEOR scores of our various error models, along with the number of sentences achieving improved and reduced scores. As we have already seen in section 6.1, the spelling error model increases the evaluation scores from the ESL baseline. Adding in the article choice error model and the word insertion error models in addition to the spelling error noise model increases the BLEU score performance of finding corrections. Upon observing the outputs of the corrections on the development set, we found that the corrections changing a to an were all correct. Changes between a and the were sometimes correct, and sometimes incorrect. For example, ‘which makes me know a existence about’ was changed to ‘which makes me know the existence about’, ‘when I am in a trouble.’ was changed to ‘when I am in the trouble.’, and ‘many people could read a nonfiction books’ was changed to ‘many people could read the nonfiction books’. For the last correction, the manual corrections all changed the sentence to contain ‘many people could read a nonfiction book’, bringing down the evaluation score. Overall, the article corrections which were being made seemed to change the sentence for the better, or left it at the same quality. The preposition choice error model decreased the performance of the system overall. Looking through the development set corrections, we found that many correct prepositions were being changed to incorrect prepositions. For example, in the sentence ‘Distrust about desire between two have been growing in their 941 relationship.’, about was changed to of, and in ‘As time goes by, ...’, by was changed to on. Since these changes were not found in the manual corrections, the scores were decreased. For wordform errors, the BLEU and METEOR scores both increased. While the wordform choice noise model had the most sentences with increased scores, it also had the most sentences with decreased scores. Overall, it seems that to correct wordform errors, more context than just the preceding and following word are needed. For example, in the sentence ‘There are a lot of a hundred dollar phones in the market.’, phones was changed to phone. To infer which is correct, you would have to have access to the previous context ‘a lot of’. Another example is ‘..., I prefer being indoors to going outside ...’, where going was changed to go. These types of cases illustrate the restrictions of using a bigram model as the base language model. The word insertion error model was restricted to articles and 12 prepositions, and thus did not make many changes, but was correct when it did. One thing to note is that since we are using a bigram model for the language model, the model itself is biased towards shorter sentences. Since we only included words which were needed when they were used, we did not run into problems with this bias. When we tried including a large set of commonly used words, we found that many of the words were being erased because of the bigrams models probabilistic preference for shorter sentences. 
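This preference for shorter sentences is easy to see numerically: a bigram model multiplies one probability per adjacent word pair, so removing a word removes a factor from the product. The probabilities in the short sketch below are invented purely for illustration and are not estimated from the BNC.

    import math

    # made-up bigram probabilities, only to illustrate the length bias
    bigram = {("go", "to"): 0.05, ("to", "the"): 0.20, ("the", "park"): 0.01,
              ("go", "the"): 0.02}

    def log_prob(words):
        return sum(math.log(bigram[pair]) for pair in zip(words, words[1:]))

    keep = ["go", "to", "the", "park"]    # three bigram factors
    drop = ["go", "the", "park"]          # deleting "to" leaves only two factors
    print(log_prob(keep), log_prob(drop)) # the shorter sentence gets the higher score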
6.3 Limitations of the bigram language model Browsing through the development set data, we found that many of our model’s incorrect ‘corrections’ were the result of using a bigram model as our language model. For example, ‘.., I prefer being indoors to going outside in that...’ was changed to ‘.., I prefer being indoors to go outside in that...’. From the bigram model, the probabilities p(go to) and p(outside go) are both higher than p(going to) and p(outside going), respectively. To infer that going is actually correct, we would need to know the previous context, that we are comparing ‘being indoors’ to ‘going outside’. Unfortunately, since we are using a bigram model, this is not possible. These kind of errors are found throughout the corrections. It seems likely that making use of a language model which can keep track of this kind of information would increase the performance of the correction model by preventing these kinds of errors. 7 Conclusion and future work We have introduced a novel way of finding grammar and spelling corrections, which uses the EM algorithm to train the parameters of our noisy channel approach. One of the benefits of this approach is that it does not require a parallel set of erroneous sentences and their corrections. Also, our model is not confined to a specific error, and various error models may be added on. For training our noise model, all that is required is finding erroneous data sets. Depending on which domain you are training on, this can also be quite feasible as we have shown by our collection of Korean ESL students’ erroneous writing samples. Our data set could have been for ESL students of any native language, or could also be a data set of other groups such as young native English speakers, or the whole set of English speakers for grammar correction. Using only these data sets, we can train our noisy channel model, as we have shown using a bigram language model, and a wFST for our noise model. We have also shown how to use weighted finite-state transducers and the expectation semiring, as well as wFST algorithms implemented in OpenFST to train the model using EM. For evaluation, we have introduced a novel way of evaluating grammar corrections, using MT evaluation methods, which we have not seen in other grammar correction literature. The produced corrections show the restrictions of using a bigram language model. For future work, we plan to use a more accurate language model, and add more types of complex error models, such as word deletion and word ordering error models to improve performance and address other types of errors. Acknowledgments We are grateful to Randy West for his input and assistance, and to Markus Dreyer who provided us with his expectation semiring code. We would also like to thank the San Diego Supercomputer Center for use of their DASH high-performance computing system. 942 References Allauzen, C., Riley, M., Schalkwyk, J., Skut, W., and Mohri, M. (2007). OpenFst: A general and efficient weighted finite-state transducer library. In Proceedings of the Ninth International Conference on Implementation and Application of Automata, (CIAA 2007), volume 4783 of Lecture Notes in Computer Science, pages 11–23. Springer. http://www.openfst.org. Baayen, H. R., Piepenbrock, R., and Gulikers, L. (1995). The CELEX Lexical Database. Release 2 (CD-ROM). Linguistic Data Consortium, University of Pennsylvania, Philadelphia, Pennsylvania. Brockett, C., Dolan, W. B., and Gamon, M. (2006). Correcting ESL errors using phrasal SMT techniques. 
In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, ACL-44, pages 249–256, Morristown, NJ, USA. Association for Computational Linguistics. Chodorow, M. and Leacock, C. (2000). An unsupervised method for detecting grammatical errors. In Proceedings of the 1st North American chapter of the Association for Computational Linguistics conference, pages 140–147, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Damerau, F. J. (1964). A technique for computer detection and correction of spelling errors. Commun. ACM, 7:171–176. De Felice, R. and Pulman, S. G. (2008). A classifierbased approach to preposition and determiner error correction in L2 English. In Proceedings of the 22nd International Conference on Computational Linguistics - Volume 1, COLING ’08, pages 169–176, Morristown, NJ, USA. Association for Computational Linguistics. Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society. Series B (Methodological), 39(1):pp. 1–38. D´esilets, A. and Hermet, M. (2009). Using automatic roundtrip translation to repair general errors in second language writing. In Proceedings of the twelfth Machine Translation Summit, MT Summit XII, pages 198–206. Dreyer, M., Smith, J., and Eisner, J. (2008). Latentvariable modeling of string transductions with finite-state methods. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 1080–1089, Honolulu, Hawaii. Association for Computational Linguistics. Eeg-olofsson, J. and Knutsson, O. (2003). Automatic grammar checking for second language learners - the use of prepositions. In In Nodalida. Eisner, J. (2002). Parameter estimation for probabilistic finite-state transducers. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1–8, Philadelphia. Gamon, M., Leacock, C., Brockett, C., Dolan, W. B., Gao, J., Belenko, D., and Klementiev, A. (2009). Using statistical techniques and web search to correct ESL errors. In Calico Journal, Vol 26, No. 3, pages 491–511, Menlo Park, CA, USA. CALICO Journal. Izumi, E., Uchimoto, K., and Isahara, H. (2004). The NICT JLE corpus exploiting the language learnersspeech database for research and education. In International Journal of the Computer, the Internet and Management, volume 12(2), pages 119–125. Izumi, E., Uchimoto, K., Saiga, T., Supnithi, T., and Isahara, H. (2003). Automatic error detection in the Japanese learners’ English spoken data. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 2, ACL ’03, pages 145–148, Morristown, NJ, USA. Association for Computational Linguistics. Knight, K. and Chander, I. (1994). Automated postediting of documents. In Proceedings of the twelfth national conference on Artificial intelligence (vol. 1), AAAI ’94, pages 779–784, Menlo Park, CA, USA. American Association for Artificial Intelligence. Lavie, A. and Agarwal, A. (2007). Meteor: an automatic metric for MT evaluation with high levels of correlation with human judgments. In StatMT ’07: Proceedings of the Second Workshop on 943 Statistical Machine Translation, pages 228–231, Morristown, NJ, USA. Association for Computational Linguistics. Lee, J. and Seneff, S. (2006). Automatic grammar correction for second-language learners. In Proceedings of Interspeech. Levenshtein, V. I. (1966). 
Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics Doklady, 10:707–710. Mohri, M. (2002). Generic epsilon-removal and input epsilon-normalization algorithms for weighted transducers. In International Journal of Foundations of Computer Science 13, pages 129– 143. Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J. (2002). Bleu: a method for automatic evaluation of machine translation. In ACL, pages 311–318. Shannon, C. (1948). A mathematical theory of communications. Bell Systems Technical Journal, 27(4):623–656. Shichun, G. and Huizhong, Y. (2003). Chinese Learner English Corpus. Shanghai Foreign Language Education Press. Tetreault, J. R. and Chodorow, M. (2008). The ups and downs of preposition error detection in ESL writing. In Proceedings of the 22nd International Conference on Computational Linguistics - Volume 1, COLING ’08, pages 865–872, Morristown, NJ, USA. Association for Computational Linguistics. Wu, C.-F. J. (1983). On the convergence properties of the EM algorithm. Ann. Statist., 11(1):95–103. 944
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 945–954, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics A Generative Entity-Mention Model for Linking Entities with Knowledge Base Xianpei Han Le Sun Institute of Software, Chinese Academy of Sciences HaiDian District, Beijing, China. {xianpei, sunle}@nfs.iscas.ac.cn Abstract Linking entities with knowledge base (entity linking) is a key issue in bridging the textual data with the structural knowledge base. Due to the name variation problem and the name ambiguity problem, the entity linking decisions are critically depending on the heterogenous knowledge of entities. In this paper, we propose a generative probabilistic model, called entitymention model, which can leverage heterogenous entity knowledge (including popularity knowledge, name knowledge and context knowledge) for the entity linking task. In our model, each name mention to be linked is modeled as a sample generated through a three-step generative story, and the entity knowledge is encoded in the distribution of entities in document P(e), the distribution of possible names of a specific entity P(s|e), and the distribution of possible contexts of a specific entity P(c|e). To find the referent entity of a name mention, our method combines the evidences from all the three distributions P(e), P(s|e) and P(c|e). Experimental results show that our method can significantly outperform the traditional methods. 1 Introduction In recent years, due to the proliferation of knowledge-sharing communities like Wikipedia 1 and the many research efforts for the automated knowledge base population from Web like the Read the Web2 project, more and more large-scale knowledge bases are available. These knowledge bases contain rich knowledge about the world’s entities, their semantic properties, and the semantic relations between each other. One of the most notorious examples is Wikipedia: its 2010 English 1 http://www.wikipedia.org/ 2 http://rtw.ml.cmu.edu/ version contains more than 3 million entities and 20 million semantic relations. Bridging these knowledge bases with the textual data can facilitate many different tasks such as entity search, information extraction and text classification. For example, as shown in Figure 1, knowing the word Jordan in the document refers to a basketball player and the word Bulls refers to a NBA team would be helpful in classifying this document into the Sport/Basketball class. After a standout career at the University, joined the in 1984. Michael Jeffrey Jordan NBA Player Basketball Player Chicago Bulls NBA Sport Organization NBA Team Knowledge Base Employer-of IS-A IS-A IS-A IS-A IS-A Part-of Jordan Bulls Figure 1. A Demo of Entity Linking A key issue in bridging the knowledge base with the textual data is linking the entities in a document with their referents in a knowledge base, which is usually referred to as the Entity Linking task. Given a set of name mentions M = {m1, m2, …, mk} contained in documents and a knowledge base KB containing a set of entities E = {e1, e2, …, en}, an entity linking system is a function : M E which links these name mentions to their referent entities in KB. For example, in Figure 1 an entity linking system should link the name mention Jordan to the entity Michael Jeffrey Jordan and the name mention Bulls to the entity Chicago Bulls. The entity linking task, however, is not trivial due to the name variation problem and the name ambiguity problem. 
Name variation means that an entity can be mentioned in different ways such as full name, aliases, acronyms and misspellings. For 945 example, the entity Michael Jeffrey Jordan can be mentioned using more than 10 names, such as Michael Jordan, MJ and Jordan. The name ambiguity problem is related to the fact that a name may refer to different entities in different contexts. For example, the name Bulls can refer to more than 20 entities in Wikipedia, such as the NBA team Chicago Bulls, the football team Belfast Bulls and the cricket team Queensland Bulls. Complicated by the name variation problem and the name ambiguity problem, the entity linking decisions are critically depending on the knowledge of entities (Li et al., 2004; Bunescu & Pasca, 2006; Cucerzan, 2007; Milne & Witten, 2008 and Fader et al., 2009). Based on the previous work, we found that the following three types of entity knowledge can provide critical evidence for the entity linking decisions: Popularity Knowledge. The popularity knowledge of entities tells us the likelihood of an entity appearing in a document. In entity linking, the entity popularity knowledge can provide a priori information to the possible referent entities of a name mention. For example, without any other information, the popularity knowledge can tell that in a Web page the name “Michael Jordan” will more likely refer to the notorious basketball player Michael Jeffrey Jordan, rather than the less popular Berkeley professor Michael I. Jordan. Name Knowledge. The name knowledge tells us the possible names of an entity and the likelihood of a name referring to a specific entity. For example, we would expect the name knowledge tells that both the “MJ” and “Michael Jordan” are possible names of the basketball player Michael Jeffrey Jordan, but the “Michael Jordan” has a larger likelihood. The name knowledge plays the central role in resolving the name variation problem, and is also helpful in resolving the name ambiguity problem. Context Knowledge. The context knowledge tells us the likelihood of an entity appearing in a specific context. For example, given the context “__wins NBA MVP”, the name “Michael Jordan” should more likely refer to the basketball player Michael Jeffrey Jordan than the Berkeley professor Michael I. Jordan. Context knowledge is crucial in solving the name ambiguities. Unfortunately, in entity linking system, the modeling and exploitation of these types of entity knowledge is not straightforward. As shown above, these types of knowledge are heterogenous, making it difficult to be incorporated in the same model. Furthermore, in most cases the knowledge of entities is not explicitly given, making it challenging to extract the entity knowledge from data. To resolve the above problems, this paper proposes a generative probabilistic model, called entity-mention model, which can leverage the heterogeneous entity knowledge (including popularity knowledge, name knowledge and context knowledge) for the entity linking task. In our model, each name mention is modeled as a sample generated through a three-step generative story, where the entity knowledge is encoded in three distributions: the entity popularity knowledge is encoded in the distribution of entities in document P(e), the entity name knowledge is encoded in the distribution of possible names of a specific entity P(s|e), and the entity context knowledge is encoded in the distribution of possible contexts of a specific entity P(c|e). 
The P(e), P(s|e) and P(c|e) are respectively called the entity popularity model, the entity name model and the entity context model. To find the referent entity of a name mention, our method combines the evidence from all three distributions P(e), P(s|e) and P(c|e). We evaluate our method on both Wikipedia articles and general newswire documents. Experimental results show that our method can significantly improve the entity linking accuracy.

Our Contributions. Specifically, the main contributions of this paper are as follows:

1) We propose a new generative model, the entity-mention model, which can leverage heterogeneous entity knowledge (including popularity knowledge, name knowledge and context knowledge) for the entity linking task;

2) By modeling the entity knowledge as probabilistic distributions, our model has a statistical foundation, making it different from most previous ad hoc approaches.

This paper is organized as follows. The entity-mention model is described in Section 2. The model estimation is described in Section 3. The experimental results are presented and discussed in Section 4. The related work is reviewed in Section 5. Finally we conclude this paper in Section 6.

2 The Generative Entity-Mention Model for Entity Linking

In this section we describe the generative entity-mention model. We first describe the generative story of our model, then formulate the model and show how to apply it to the entity linking task.

2.1 The Generative Story

In the entity-mention model, each name mention is modeled as a generated sample. For demonstration, Figure 2 shows two examples of name mention generation. As shown in Figure 2, the generative story of a name mention is composed of three steps, which are detailed as follows:

(i) First, the model chooses the referent entity e of the name mention from the given knowledge base, according to the distribution of entities in document P(e). In Figure 2, the model chooses the entity "Michael Jeffrey Jordan" for the first name mention, and the entity "Michael I. Jordan" for the second name mention;

(ii) Second, the model outputs the name s of the name mention according to the distribution of possible names of the referent entity P(s|e). In Figure 2, the model outputs "Jordan" as the name of the entity "Michael Jeffrey Jordan", and "Michael Jordan" as the name of the entity "Michael I. Jordan";

(iii) Finally, the model outputs the context c of the name mention according to the distribution of possible contexts of the referent entity P(c|e). In Figure 2, the model outputs the context "joins Bulls in 1984" for the first name mention, and the context "is a professor in UC Berkeley" for the second name mention.

2.2 Model

Based on the above generative story, the probability of a name mention m (whose context is c and whose name is s) referring to a specific entity e can be expressed as the following formula (here we assume that s and c are independent given e):

    P(m, e) = P(s, c, e) = P(e) P(s|e) P(c|e)

This model incorporates the three types of entity knowledge we explained earlier: P(e) corresponds to the popularity knowledge, P(s|e) corresponds to the name knowledge and P(c|e) corresponds to the context knowledge.

Figure 2. Two examples of name mention generation
Given a name mention m, to perform entity linking we need to find the entity e which maximizes the probability P(e|m). Then we can resolve the entity linking task as follows:

    e* = argmax_e P(e|m) = argmax_e P(m, e) / P(m) = argmax_e P(e) P(s|e) P(c|e)

Therefore, the main problem of entity linking is to estimate the three distributions P(e), P(s|e) and P(c|e), i.e., to extract the entity knowledge from data. In Section 3, we will show how to estimate these three distributions.

Candidate Selection. Because a knowledge base usually contains millions of entities, it is time-consuming to compute all P(m, e) scores between a name mention and all the entities contained in a knowledge base. To reduce the time required, the entity linking system employs a candidate selection process to filter out the impossible referent candidates of a name mention. In this paper, we adopt the candidate selection method of the NLPR_KBP system (Han and Zhao, 2009), whose main idea is to first build a name-to-entity dictionary using the redirect links, disambiguation pages and anchor texts of Wikipedia; the candidate entities of a name mention are then selected by looking up its name's corresponding entry in the dictionary.

3 Model Estimation

Section 2 shows that the entity-mention model decomposes the entity linking task into the estimation of three distributions P(e), P(s|e) and P(c|e). In this section, we describe the details of the estimation of these three distributions. We first introduce the training data, then describe the estimation methods.
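Before turning to estimation, the way the three distributions are combined at linking time can be summarized in a few lines of Python. This is only a sketch: the candidate list and the three probability functions are hypothetical stand-ins for the candidate selection step and the models estimated in the rest of this section.

    def link(name, context, candidates, p_e, p_s_given_e, p_c_given_e):
        """Return the candidate entity e maximizing P(e) * P(s|e) * P(c|e)."""
        def score(e):
            return p_e(e) * p_s_given_e(name, e) * p_c_given_e(context, e)
        return max(candidates, key=score)

    # usage: candidates would come from the name-to-entity dictionary, and the
    # three functions from the popularity, name and context models below, e.g.
    # best = link("Jordan", "joins Bulls in 1984", candidates, p_e, p_s, p_c)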
To get a more precise estimation, we observed that a more popular entity usually appears more times than a less popular entity in a large text corpus, i.e., more name mentions refer to this entity. For example, in Wikipedia the NBA player Michael Jeffrey Jordan appears more than 10 times than the Berkeley professor Michael I. Jordan. Based on the above observation, our entity popularity model uses the entity frequencies in the name mention data set M to estimate the distribution P(e) as follows: ( ) 1 ( ) Count e P e M N where Count(e) is the count of the name mentions whose referent entity is e, and the |M| is the total name mention size. The estimation is further smoothed using the simple add-one smoothing method for the zero probability problem. For illustration, Table 1 shows three selected entities’ popularity. Entity Popularity National Basketball Association 1.73*10-5 Michael Jeffrey Jordan(NBA player) 8.21*10-6 Michael I. Jordan(Berkeley Professor) 7.50*10-8 Table 1. Three examples of entity popularity 3.3 Entity Name Model The distribution P(s|e) encodes the name knowledge of entities, i.e., for a specific entity e, its more frequently used name should be assigned a higher P(s|e) value than the less frequently used name, and a zero P(s|e) value should be assigned to those never used names. For instance, we would expect the P(Michael Jordan|Michael Jeffrey Jordan) to be high, P(MJ|Michael Jeffrey Jordan) to be relative high and P(Michael I. Jordan|Michael Jeffrey Jordan) to be zero. Intuitively, the name model can be estimated by first collecting all (entity, name) pairs from the name mention data set, then using the maximum likelihood estimation: ( , ) ( | ) ( , ) s Count e s P s e Count e s where the Count(e,s) is the count of the name mentions whose referent entity is e and name is s. However, this method does not work well because it cannot correctly deal with an unseen entity or an unseen name. For example, because the name “MJ” doesn’t refer to the Michael Jeffrey Jordan in Wikipedia, the name model will not be able to identify “MJ” as a name of him, even “MJ” is a popular name of Michael Jeffrey Jordan on Web. 948 To better estimate the distribution P(s|e), this paper proposes a much more generic model, called entity name model, which can capture the variations (including full name, aliases, acronyms and misspellings) of an entity's name using a statistical translation model. Given an entity’s name s, our model assumes that it is a translation of this entity’s full name f using the IBM model 1 (Brown, et al., 1993). Let ∑ be the vocabulary containing all words may be used in the name of entities, the entity name model assumes that a word in ∑ can be translated through the following four ways: 1) It is retained (translated into itself); 2) It is translated into its acronym; 3) It is omitted(translated into the word NULL); 4) It is translated into another word (misspelling or alias). In this way, all name variations of an entity are captured as the possible translations of its full name. To illustrate, Figure 3 shows how the full name “Michael Jeffrey Jordan” can be transalted into its misspelling name “Micheal Jordan”. Figure 3. 
The translation from Michael Jefferey Jordan to Micheal Jordan Based on the translation model, P(s|e) can be written as: 0 1 ( | ) ( 1) f s s l l i j l i j f P(s|e) t s f l where is a normalization factor, f is the full name of entity e, lf is the length of f, ls is the length of the name s, si the ith word of s, fj is the jth word of f and t(si|fj) is the lexical translation probability which indicates the probability of a word fj in the full name will be written as si in the output name. Now the main problem is to estimate the lexical translation probability t(si|fj). In this paper, we first collect the (name, entity full name) pairs from all annotated name mentions, then get the lexical translation probability by feeding this data set into an IBM model 1 training system (we use the GIZA++ Toolkit3). Table 2 shows several resulting lexical translation probabilities through the above process. 3 http://fjoch.com/GIZA++.html We can see that the entity name model can capture the different name variations, such as the acronym (MichaelM), the misspelling (MichaelMicheal) and the omission (St. NULL). Full name word Name word Probability Michael Michael 0.77 Michael M 0.008 Michael Micheal 2.64*10-4 Jordan Jordan 0.96 Jordan J 6.13*10-4 St. NULL 0.14 Sir NULL 0.02 Table 2. Several lexical translation probabilities 3.4 Entity Context Model The distribution P(c|e) encodes the context knowledge of entities, i.e., it will assign a high P(c|e) value if the entity e frequently appears in the context c, and will assign a low P(c|e) value if the entity e rarely appears in the context c. For example, given the following two contexts: C1: __wins NBA MVP. C2: __is a researcher in machine learning. Then P(C1|Michael Jeffrey Jordan) should be high because the NBA player Michael Jeffrey Jordan often appears in C1 and the P(C2|Michael Jeffrey Jordan) should be extremely low because he rarely appears in C2. __ wins NBA MVP. __is a professor in UC Berkeley. Michael Jeffrey Jordan (NBA Player) NBA=0.03 MVP=0.008 Basketball=0.02 player=0.005 win=0.00008 professor=0 ... Michael I. Jordan (Berkeley Professor) professor=0.003 Berkeley=0.002 machine learning=0.1 researcher = 0.006 NBA = 0 MVP=0 ... Figure 4. Two entity context models To estimate the distribution P(c|e), we propose a method based on language modeling, called entity context model. In our model, the context of each name mention m is the word window surrounding m, and the window size is set to 50 according to the experiments in (Pedersen et al., 2005). Specifically, the context knowledge of an entity e is encoded in an unigram language model: { ( )} e e M P t where Pe(t) is the probability of the term t appearing in the context of e. In our model, the term may indicate a word, a named entity (extracted using the Stanford Named Entity Michael Jeffrey Jordan Micheal Jordan NULL Full Name Name 949 Recognizer 4) or a Wikipedia concept (extracted using the method described in (Han and Zhao, 2010)). Figure 4 shows two entity context models and the contexts generated using them. Now, given a context c containing n terms t1t2…tn, the entity context model estimates the probability P(c|e) as: 1 2 1 2 ( | ) ( ... | ) ( ) ( ).... ( ) n e e e e n P c e P t t t M P t P t P t So the main problem is to estimate Pe(t), the probability of a term t appearing in the context of the entity e. 
Using the annotated name mention data set M, we can get the maximum likelihood estimation of Pe(t) as follows: _ ( ) ( ) ( ) e e ML e t Count t P t Count t where Counte(t) is the frequency of occurrences of a term t in the contexts of the name mentions whose referent entity is e. Because an entity e’s name mentions are usually not enough to support a robust estimation of Pe(t) due to the sparse data problem (Chen and Goodman, 1999), we further smooth Pe(t) using the Jelinek-Mercer smoothing method (Jelinek and Mercer, 1980): _ ( ) ( ) (1 ) ( ) e e ML g P t P t P t where Pg(t) is a general language model which is estimated using the whole Wikipedia data, and the optimal value of λ is set to 0.2 through a learning process shown in Section 4. 3.5 The NIL Entity Problem By estimating P(e), P(s|e) and P(c|e), our method can effectively link a name mention to its referent entity contained in a knowledge base. Unfortunately, there is still the NIL entity problem (McNamee and Dang, 2009), i.e., the referent entity may not be contained in the given knowledge base. In this situation, the name mention should be linked to the NIL entity. Traditional methods usually resolve this problem with an additional classification step (Zheng et al. 2010): a classifier is trained to identify whether a name mention should be linked to the NIL entity. Rather than employing an additional step, our entity mention model seamlessly takes into account the NIL entity problem. The start assumption of 4 http://nlp.stanford.edu/software/CRF-NER.shtml our solution is that “If a name mention refers to a specific entity, then the probability of this name mention is generated by the specific entity’s model should be significantly higher than the probability it is generated by a general language model”. Based on the above assumption, we first add a pseudo entity, the NIL entity, into the knowledge base and assume that the NIL entity generates a name mention according to the general language model Pg, without using any entity knowledge; then we treat the NIL entity in the same way as other entities: if the probability of a name mention is generated by the NIL entity is higher than all other entities in Knowledge base, we link the name mention to the NIL entity. Based on the above discussion, we compute the three probabilities of the NIL entity: P(e), P(s|e) and P(c|e) as follows: 1 P(NIL) M N ( ) g t s P(s| NIL) P t ( ) g t c P(c| NIL) P t 4 Experiments In this section, we assess the performance of our method and compare it with the traditional methods. In following, we first explain the experimental settings in Section 4.1, 4.2 and 4.3, then evaluate and discuss the results in Section 4.4. 4.1 Knowledge Base In our experiments, we use the Jan. 30, 2010 English version of Wikipedia as the knowledge base, which contains over 3 million distinct entities. 4.2 Data Sets To evaluate the entity linking performance, we adopted two data sets: the first is WikiAmbi, which is used to evaluate the performance on Wikipedia articles; the second is TAC_KBP, which is used to evaluate the performance on general newswire documents. In following, we describe these two data sets in detail. WikiAmbi: The WikiAmbi data set contains 1000 annotated name mentions which are randomly selected from Wikipedia hyperlinks data set (as shown in Section 3.1, the hyperlinks between Wikipedia articles are manually annotated name mentions). 
In WikiAmbi, there were 207 distinct 950 names and each name contains at least two possible referent entities (on average 6.7 candidate referent entities for each name) 5 . In our experiments, the name mentions contained in the WikiAmbi are removed from the training data. TAC_KBP: The TAC_KBP is the standard data set used in the Entity Linking task of the TAC 2009 (McNamee and Dang, 2009). The TAC_KBP contains 3904 name mentions which are selected from English newswire articles. For each name mention, its referent entity in Wikipedia is manually annotated. Overall, 57% (2229 of 3904) name mentions’s referent entities are missing in Wikipedia, so TAC_KBP is also suitable to evaluate the NIL entity detection performance. The above two data sets can provide a standard testbed for the entity linking task. However, there were still some limitations of these data sets: First, these data sets only annotate the salient name mentions in a document, meanwhile many NLP applications need all name mentions are linked. Second, these data sets only contain well-formed documents, but in many real-world applications the entity linking often be applied to noisy documents such as product reviews and microblog messages. In future, we want to develop a data set which can reflect these real-world settings. 4.3 Evaluation Criteria We adopted the standard performance metrics used in the Entity Linking task of the TAC 2009 (McNamee and Dang, 2009). These metrics are: Micro-Averaged Accuracy (MicroAccuracy): measures entity linking accuracy averaged over all the name mentions; Macro-Averaged Accuracy (MacroAccuracy): measures entity linking accuracy averaged over all the target entities. As in TAC 2009, we used Micro-Accuracy as the primary performance metric. 4.4 Experimental Results We compared our method with three baselines: (1) The first is the traditional Bag of Words based method (Cucerzan, 2007): a name mention’s referent entity is the entity which has the highest cosine similarity with its context – we denoted it as BoW; (2) The second is the method described in 5 This is because we want to create a highly ambiguous test data set (Medelyan et al., 2008), where a name mention’s referent entity is the entity which has the largest average semantic relatedness with the name mention’s unambiguous context entities – we denoted it as TopicIndex. (3) The third one is the same as the method described in (Milne & Witten, 2008), which uses learning techniques to balance the semantic relatedness, commoness and context quality – we denoted it as Learning2Link. 4.4.1 Overall Performance We conduct experiments on both WikiAmbi and TAC_KBP datasets with several methods: the baseline BoW; the baseline TopicIndex; the baseline Learning2Link; the proposed method using only popularity knowledge (Popu), i.e., the P(m,e)=P(e); the proposed method with one component of the model is ablated(this is used to evaluate the independent contributions of the three components), correspondingly Popu+Name(i.e., the P(m,e)=P(e)P(s|e)), Name+Context(i.e., the P(m,e)=P(c|e)P(s|e)) and Popu+Context (i.e., the P(m,e)=P(e)P(c|e)); and the full entity mention model (Full Model). For all methods, the parameters were configured through 10-fold cross validation. The overall performance results are shown in Table 3 and 4. Micro-Accuracy Macro-Accuracy BoW 0.60 0.61 TopicIndex 0.66 0.49 Learning2Link 0.70 0.54 Popu 0.39 0.24 Popu + Name 0.50 0.31 Name+Context 0.70 0.68 Popu+Context 0.72 0.73 Full Model 0.80 0.77 Table 3. 
The overall results on WikiAmbi dataset Micro-Accuracy Macro-Accuracy BoW 0.72 0.75 TopicIndex 0.80 0.76 Learning2Link 0.83 0.79 Popu 0.60 0.53 Popu + Name 0.63 0.59 Name+Context 0.81 0.78 Popu+Context 0.84 0.83 Full Model 0.86 0.88 Table 4. The overall results on TAC-KBP dataset From the results in Table 3 and 4, we can make the following observations: 1) Compared with the traditional methods, our entity mention model can achieve a significant 951 performance improvement: In WikiAmbi and TAC_KBP datasets, compared with the BoW baseline, our method respectively gets 20% and 14% micro-accuracy improvement; compared with the TopicIndex baseline, our method respectively gets 14% and 6% micro-accuracy improvement; compared with the Learning2Link baseline, our method respectively gets 10% and 3% microaccuracy improvement. 2) By incorporating more entity knowledge, our method can significantly improve the entity linking performance: When only using the popularity knowledge, our method can only achieve 49.5% micro-accuracy. By adding the name knowledge, our method can achieve 56.5% micro-accuracy, a 7% improvement over the Popu. By further adding the context knowledge, our method can achieve 83% micro-accuracy, a 33.5% improvement over Popu and a 26.5% improvement over Popu+Name. 3) All three types of entity knowledge contribute to the final performance improvement, and the context knowledge contributes the most: By respectively ablating the popularity knowledge, the name knowledge and the context knowledge, the performance of our model correspondingly reduces 7.5%, 5% and 26.5%. NIL Entity Detection Performance. To compare the performances of resolving the NIL entity problem, Table 5 shows the microaccuracies of different systems on the TAC_KBP data set (where All is the whole data set, NIL only contains the name mentions whose referent entity is NIL, InKB only contains the name mentions whose referent entity is contained in the knowledge base). From Table 5 we can see that our method can effectively detect the NIL entity meanwhile retaining the high InKB accuracy. All NIL InKB BoW 0.72 0.77 0.65 TopicIndex 0.80 0.91 0.65 Learning2Link 0.83 0.90 0.73 Full Model 0.86 0.90 0.79 Table 5. The NIL entity detection performance on the TAC_KBP data set 4.4.2 Optimizing Parameters Our model needs to tune one parameter: the Jelinek-Mercer smoothing parameter λ used in the entity context model. Intuitively, a smaller λ means that the general language model plays a more important role. Figure 5 plots the tradeoff. In both WikiAmbi and TAC_KBP data sets, Figure 5 shows that a λ value 0.2 will result in the best performance. Figure 5. The micro-accuracy vs. λ 4.4.3 Detailed Analysis To better understand the reasons why and how the proposed method works well, in this Section we analyze our method in detail. The Effect of Incorporating Heterogenous Entity Knowledge. The first advantage of our method is the entity mention model can incorporate heterogeneous entity knowledge. The Table 3 and 4 have shown that, by incorporating heterogenous entity knowledge (including the name knowledge, the popularity knowledge and the context knowledge), the entity linking performance can obtain a significant improvement. Figure 6. The performance vs. training mention size on WikiAmbi data set The Effect of Better Entity Knowledge Extraction. 
The second advantage of our method is that, by representing the entity knowledge as probabilistic distributions, our model has a statistical foundation and can better extract the entity knowledge using more training data through the entity popularity model, the entity name model and the entity context model. For instance, we can train a better entity context model P(c|e) using more name mentions. To find whether a better 952 entity knowledge extraction will result in a better performance, Figure 6 plots the micro-accuray along with the size of the training data on name mentions for P(c|e) of each entity e. From Figure 6, we can see that when more training data is used, the performance increases. 4.4.4 Comparision with State-of-the-Art Performance We also compared our method with the state-ofthe-art entity linking systems in the TAC 2009 KBP track (McNamee and Dang, 2009). Figure 7 plots the comparison with the top five performances in TAC 2009 KBP track. From Figure 7, we can see that our method can outperform the state-of-the-art approaches: compared with the best ranking system, our method can achieve a 4% performance improvement. Figure 7. A comparison with top 5 TAC 2009 KBP systems 5 Related Work In this section, we briefly review the related work. To the date, most entity linking systems employed the context similarity based methods. The essential idea was to extract the discriminative features of an entity from its description, then link a name mention to the entity which has the largest context similarity with it. Cucerzan (2007) proposed a Bag of Words based method, which represents each target entity as a vector of terms, then the similarity between a name mention and an entity was computed using the cosine similarity measure. Mihalcea & Csomai (2007), Bunescu & Pasca (2006), Fader et al. (2009) extended the BoW model by incorporating more entity knowledge such as popularity knowledge, entity category knowledge, etc. Zheng et al. (2010), Dredze et al. (2010), Zhang et al. (2010) and Zhou et al. (2010) employed the learning to rank techniques which can further take the relations between candidate entities into account. Because the context similarity based methods can only represent the entity knowledge as features, the main drawback of it was the difficulty to incorporate heterogenous entity knowledge. Recently there were also some entity linking methods based on inter-dependency. These methods assumed that the entities in the same document are related to each other, thus the referent entity of a name mention is the entity which is most related to its contextual entities. Medelyan et al. (2008) found the referent entity of a name mention by computing the weighted average of semantic relatedness between the candidate entity and its unambiguous contextual entities. Milne and Witten (2008) extended Medelyan et al. (2008) by adopting learning-based techniques to balance the semantic relatedness, commoness and context quality. Kulkarni et al. (2009) proposed a method which collectively resolves the entity linking tasks in a document as an optimization problem. The drawback of the inter-dependency based methods is that they are usually specially designed to the leverage of semantic relations, doesn’t take the other types of entity knowledge into consideration. 6 Conclusions and Future Work This paper proposes a generative probabilistic model, the entity-mention model, for the entity linking task. 
The main advantage of our model is it can incorporate multiple types of heterogenous entity knowledge. Furthermore, our model has a statistical foundation, making the entity knowledge extraction approach different from most previous ad hoc approaches. Experimental results show that our method can achieve competitive performance. In our method, we did not take into account the dependence between entities in the same document. This aspect could be complementary to those we considered in this paper. For our future work, we can integrate such dependencies in our model. Acknowledgments The work is supported by the National Natural Science Foundation of China under Grants no. 60773027, 60736044, 90920010, 61070106 and 61003117, and the National High Technology Development 863 Program of China under Grants no. 2008AA01Z145. Moreover, we sincerely thank the reviewers for their valuable comments. 953 References Adafre, S. F. & de Rijke, M. 2005. Discovering missing links in Wikipedia. In: Proceedings of the 3rd international workshop on Link discovery. Bunescu, R. & Pasca, M. 2006. Using encyclopedic knowledge for named entity disambiguation. In: Proceedings of EACL, vol. 6. Brown, P., Pietra, S. D., Pietra, V. D., and Mercer, R. 1993. The mathematics of statistical machine translation: parameter estimation. Computational Linguistics, 19(2), 263-31. Chen, S. F. & Goodman, J. 1999. An empirical study of smoothing techniques for language modeling. In Computer Speech and Language, London; Orlando: Academic Press, c1986-, pp. 359-394. Cucerzan, S. 2007. Large-scale named entity disambiguation based on Wikipedia data. In: Proceedings of EMNLP-CoNLL, pp. 708-716. Dredze, M., McNamee, P., Rao, D., Gerber, A. & Finin, T. 2010. Entity Disambiguation for Knowledge Base Population. In: Proceedings of the 23rd International Conference on Computational Linguistics. Fader, A., Soderland, S., Etzioni, O. & Center, T. 2009. Scaling Wikipedia-based named entity disambiguation to arbitrary web text. In: Proceedings of Wiki-AI Workshop at IJCAI, vol. 9. Han, X. & Zhao, J. 2009. NLPR_KBP in TAC 2009 KBP Track: A Two-Stage Method to Entity Linking. In: Proceeding of Text Analysis Conference. Han, X. & Zhao, J. 2010. Structural semantic relatedness: a knowledge-based method to named entity disambiguation. In: Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Jelinek, Frederick and Robert L. Mercer. 1980. Interpolated estimation of Markov source parameters from sparse data. In: Proceedings of the Workshop on Pattern Recognition in Practice. Kulkarni, S., Singh, A., Ramakrishnan, G. & Chakrabarti, S. 2009. Collective annotation of Wikipedia entities in web text. In: Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 457-466. Li, X., Morie, P. & Roth, D. 2004. Identification and tracing of ambiguous names: Discriminative and generative approaches. In: Proceedings of the National Conference on Artificial Intelligence, pp. 419-424. McNamee, P. & Dang, H. T. 2009. Overview of the TAC 2009 Knowledge Base Population Track. In: Proceeding of Text Analysis Conference. Milne, D. & Witten, I. H. 2008. Learning to link with Wikipedia. In: Proceedings of the 17th ACM conference on Conference on information and knowledge management. Milne, D., et al. 2006. Mining Domain-Specific Thesauri from Wikipedia: A case study. In Proc. of IEEE/WIC/ACM WI. Medelyan, O., Witten, I. H. & Milne, D. 2008. Topic indexing with Wikipedia. 
In: Proceedings of the AAAI WikiAI workshop. Mihalcea, R. & Csomai, A. 2007. Wikify!: linking documents to encyclopedic knowledge. In: Proceedings of the sixteenth ACM conference on Conference on information and knowledge management, pp. 233-242. Pedersen, T., Purandare, A. & Kulkarni, A. 2005. Name discrimination by clustering similar contexts. Computational Linguistics and Intelligent Text Processing, pp. 226-237. Zhang, W., Su, J., Tan, Chew Lim & Wang, W. T. 2010. Entity Linking Leveraging Automatically Generated Annotation. In: Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010). Zheng, Z., Li, F., Huang, M. & Zhu, X. 2010. Learning to Link Entities with Knowledge Base. In: The Proceedings of the Annual Conference of the North American Chapter of the ACL. Zhou, Y., Nie, L., Rouhani-Kalleh, O., Vasile, F. & Gaffney, S. 2010. Resolving Surface Forms to Wikipedia Topics. In: Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pp. 1335-1343. 954
| 2011 | 95 |
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 955–964, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Simple Supervised Document Geolocation with Geodesic Grids Benjamin P. Wing Department of Linguistics University of Texas at Austin Austin, TX 78712 USA [email protected] Jason Baldridge Department of Linguistics University of Texas at Austin Austin, TX 78712 USA [email protected] Abstract We investigate automatic geolocation (i.e. identification of the location, expressed as latitude/longitude coordinates) of documents. Geolocation can be an effective means of summarizing large document collections and it is an important component of geographic information retrieval. We describe several simple supervised methods for document geolocation using only the document’s raw text as evidence. All of our methods predict locations in the context of geodesic grids of varying degrees of resolution. We evaluate the methods on geotagged Wikipedia articles and Twitter feeds. For Wikipedia, our best method obtains a median prediction error of just 11.8 kilometers. Twitter geolocation is more challenging: we obtain a median error of 479 km, an improvement on previous results for the dataset. 1 Introduction There are a variety of applications that arise from connecting linguistic content—be it a word, phrase, document, or entire corpus—to geography. Leidner (2008) provides a systematic overview of geography-based language applications over the previous decade, with a special focus on the problem of toponym resolution—identifying and disambiguating the references to locations in texts. Perhaps the most obvious and far-reaching application is geographic information retrieval (Ding et al., 2000; Martins, 2009; Andogah, 2010), with applications like MetaCarta’s geographic text search (Rauch et al., 2003) and NewsStand (Teitler et al., 2008); these allow users to browse and search for content through a geo-centric interface. The Perseus project performs automatic toponym resolution on historical texts in order to display a map with each text showing the locations that are mentioned (Smith and Crane, 2001); Google Books also does this for some books, though the toponyms are identified and resolved quite crudely. Hao et al (2010) use a location-based topic model to summarize travelogues, enrich them with automatically chosen images, and provide travel recommendations. Eisenstein et al (2010) investigate questions of dialectal differences and variation in regional interests in Twitter users using a collection of geotagged tweets. An intuitive and effective strategy for summarizing geographically-based data is identification of the location—a specific latitude and longitude—that forms the primary focus of each document. Determining a single location of a document is only a well-posed problem for certain documents, generally of fairly small size, but there are a number of natural situations in which such collections arise. For example, a great number of articles in Wikipedia have been manually geotagged; this allows those articles to appear in their geographic locations while geobrowsing in an application like Google Earth. Overell (2009) investigates the use of Wikipedia as a source of data for article geolocation, in addition to article classification by category (location, person, etc.) and toponym resolution. Overell’s main goal is toponym resolution, for which geolocation serves as an input feature. 
For document geolocation, Overell uses a simple model that makes use only of the metadata available (article title, incoming and outgoing links, etc.)—the actual article text 955 is not used at all. However, for many document collections, such metadata is unavailable, especially in the case of recently digitized historical documents. Eisenstein et al. (2010) evaluate their geographic topic model by geolocating USA-based Twitter users based on their tweet content. This is essentially a document geolocation task, where each document is a concatenation of all the tweets for a single user. Their geographic topic model receives supervision from many documents/users and predicts locations for unseen documents/users. In this paper, we tackle document geolocation using several simple supervised methods on the textual content of documents and a geodesic grid as a discrete representation of the earth’s surface. Our approach is similar to that of Serdyukov et al. (2009), who geolocate Flickr images using their associated textual tags.1 Essentially, the task is cast similarly to language modeling approaches in information retrieval (Ponte and Croft, 1998). Discrete cells representing areas on the earth’s surface correspond to documents (with each cell-document being a concatenation of all actual documents that are located in that cell); new documents are then geolocated to the most similar cell according to standard measures such as Kullback-Leibler divergence (Zhai and Lafferty, 2001). Performance is measured both on geotagged Wikipedia articles (Overell, 2009) and tweets (Eisenstein et al., 2010). We obtain high accuracy on Wikipedia using KL divergence, with a median error of just 11.8 kilometers. For the Twitter data set, we obtain a median error of 479 km, which improves on the 494 km error of Eisenstein et al. An advantage of our approach is that it is far simpler, is easy to implement, and scales straightforwardly to large datasets like Wikipedia. 2 Data Wikipedia As of April 15, 2011, Wikipedia has some 18.4 million content-bearing articles in 281 language-specific encyclopedias. Among these, 39 have over 100,000 articles, including 3.61 million articles in the English-language edition alone. Wikipedia articles generally cover a single subject; in addition, most articles that refer to geographically 1We became aware of Serdyukov et al. (2009) during the writing of the camera-ready version of this paper. fixed subjects are geotagged with their coordinates. Such articles are well-suited as a source of supervised content for document geolocation purposes. Furthermore, the existence of versions in multiple languages means that the techniques in this paper can easily be extended to cover documents written in many of the world’s most common languages. Wikipedia’s geotagged articles encompass more than just cities, geographic formations and landmarks. For example, articles for events (like the shooting of JFK) and vehicles (such as the frigate USS Constitution) are geotagged. The latter type of article is actually quite challenging to geolocate based on the text content: though the ship is moored in Boston, most of the page discusses its role in various battles along the eastern seaboard of the USA. However, such articles make up only a small fraction of the geotagged articles. For the experiments in this paper, we used a full dump of Wikipedia from September 4, 2010.2 Included in this dump is a total of 10,355,226 articles, of which 1,019,490 have been geotagged. 
Excluding various types of special-purpose articles used primarily for maintaining the site (specifically, redirect articles and articles outside the main namespace), the dump includes 3,431,722 content-bearing articles, of which 488,269 are geotagged. It is necessary to process the raw dump to obtain the plain text, as well as metadata such as geotagged coordinates. Extracting the coordinates, for example, is not a trivial task, as coordinates can be specified using multiple templates and in multiple formats. Automatically-processed versions of the English-language Wikipedia site are provided by Metaweb,3 which at first glance promised to significantly simplify the preprocessing. Unfortunately, these versions still need significant processing and they incorrectly eliminate some of the important metadata. In the end, we wrote our own code to process the raw dump. It should be possible to extend this code to handle other languages with little difficulty. See Lieberman and Lin (2009) for more discussion of a related effort to extract and use the geotagged articles in Wikipedia. The entire set of articles was split 80/10/10 in 2http://download.wikimedia.org/enwiki/ 20100904/pages-articles.xml.bz2 3http://download.freebase.com/wex/ 956 round-robin fashion into training, development, and testing sets after randomizing the order of the articles, which preserved the proportion of geotagged articles. Running on the full data set is timeconsuming, so development was done on a subset of about 80,000 articles (19.9 million tokens) as a training set and 500 articles as a development set. Final evaluation was done on the full dataset, which includes 390,574 training articles (97.2 million tokens) and 48,589 test articles. A full run with all the six strategies described below (three baseline, three non-baseline) required about 4 months of computing time and about 10-16 GB of RAM when run on a 64bit Intel Xeon E5540 CPU; we completed such jobs in under two days (wall clock) using the Longhorn cluster at the Texas Advanced Computing Center. Geo-tagged Microblog Corpus As a second evaluation corpus on a different domain, we use the corpus of geotagged tweets collected and used by Eisenstein et al. (2010).4 It contains 380,000 messages from 9,500 users tweeting within the 48 states of the continental USA. We use the train/dev/test splits provided with the data; for these, the tweets of each user (a feed) have been concatenated to form a single document, and the location label associated with each document is the location of the first tweet by that user. This is generally a fair assumption as Twitter users typically tweet within a relatively small region. Given this setup, we will refer to Twitter users as documents in what follows; this keeps the terminology consistent with Wikipedia as well. The training split has 5,685 documents (1.58 million tokens). Replication Our code (part of the TextGrounder system), our processed version of Wikipedia, and instructions for replicating our experiments are available on the TextGrounder website.5 3 Grid representation for connecting texts to locations Geolocation involves identifying some spatial region with a unit of text—be it a word, phrase, or document. The earth’s surface is continuous, so a 4http://www.ark.cs.cmu.edu/GeoText/ 5http://code.google.com/p/textgrounder/ wiki/WingBaldridge2011 natural approach is to predict locations using a continuous distribution. For example, Eisenstein et al. 
(2010) use Gaussian distributions to model the locations of Twitter users in the United States of America. This appears to work reasonably well for that restricted region, but is likely to run into problems when predicting locations for anywhere on earth— instead, spherical distributions like the von MisesFisher distribution would need to be employed. We take here the simpler alternative of discretizing the earth’s surface with a geodesic grid; this allows us to predict locations with a variety of standard approaches over discrete outcomes. There are many ways of constructing geodesic grids. Like Serdyukov et al. (2009), we use the simplest strategy: a grid of square cells of equal degree, such as 1◦by 1◦. This produces variable-size regions that shrink latitudinally, becoming progressively smaller and more elongated the closer they get towards the poles. Other strategies, such as the quaternary triangular mesh (Dutton, 1996), preserve equal area, but are considerably more complex to implement. Given that most of the populated regions of interest for us are closer to the equator than not and that we use cells of quite fine granularity (down to 0.05◦), the simple grid system was preferable. With such a discrete representation of the earth’s surface, there are four distributions that form the core of all our geolocation methods. The first is a standard multinomial distribution over the vocabulary for every cell in the grid. Given a grid G with cells ci and a vocabulary V with words wj, we have θcij = P(wj|ci). The second distribution is the equivalent distribution for a single test document dk, i.e. θdkj = P(wj|dk). The third distribution is the reverse of the first: for a given word, its distribution over the earth’s cells, κji = P(ci|wj). The final distribution is over the cells, γi = P(ci). This grid representation ignores all higher level regions, such as states, countries, rivers, and mountain ranges, but it is consistent with the geocoding in both the Wikipedia and Twitter datasets. Nonetheless, note that the κji for words referring to such regions is likely to be much flatter (spread out) but with most of the mass concentrated in a set of connected cells. Those for highly focused point-locations will jam up in a few disconnected cells—in the extreme case, toponyms like Spring957 field which are connected to many specific point locations around the earth. We use grids with cell sizes of varying granularity d×d for d = 0.1◦, 0.5◦, 1◦, 5◦, 10◦. For example, with d=0.5◦, a cell at the equator is roughly 56x55 km and at 45◦latitude it is 39x55 km. At this resolution, there are a total of 259,200 cells, of which 35,750 are non-empty when using our Wikipedia training set. For comparison, at the equator a cell at d=5◦is about 557x553 km (2,592 cells; 1,747 non-empty) and at d=0.1◦a cell is about 11.3x10.6 km (6,480,000 cells; 170,005 non-empty). The geolocation methods predict a cell ˆc for a document, and the latitude and longitude of the degree-midpoint of the cell is used as the predicted location. Prediction error is the great-circle distance from these predicted locations to the locations given by the gold standard. The use of cell midpoints provides a fair comparison for predictions with different cell sizes. This differs from the evaluation metrics used by Serdyukov et al. (2009), which are all computed relative to a given grid size. With their metrics, results for different granularities cannot be directly compared because using larger cells means less ambiguity when choosing ˆc. 
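To make the cell-midpoint evaluation concrete, the following is a minimal sketch (ours, not from the paper) of mapping a coordinate to a d-by-d-degree cell, recovering the cell's degree-midpoint, and computing the great-circle prediction error. The integer indexing scheme and the example coordinates are our own assumptions.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def cell_of(lat, lon, d):
    """Map a (lat, lon) point to the d-by-d-degree cell containing it.
    Cells are identified by integer (row, col) indices; the exact indexing
    scheme is not specified in the paper, so this is one possible choice."""
    return (int((lat + 90.0) // d), int((lon + 180.0) // d))

def cell_midpoint(cell, d):
    """Return the degree-midpoint of a cell, used as the predicted location."""
    row, col = cell
    return (row * d + d / 2.0 - 90.0, col * d + d / 2.0 - 180.0)

def great_circle_km(p, q):
    """Haversine great-circle distance (km) between two (lat, lon) points;
    this is the prediction-error metric used in the evaluation."""
    lat1, lon1, lat2, lon2 = map(radians, (p[0], p[1], q[0], q[1]))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

# Example: error incurred by predicting the midpoint of the gold cell at d = 0.5
gold = (48.8566, 2.3522)   # hypothetical gold document location
pred = cell_midpoint(cell_of(gold[0], gold[1], 0.5), 0.5)
print(round(great_circle_km(pred, gold), 1), "km")   # about 14 km at this granularity
```

Re-running the same example with d = 5 yields a midpoint roughly 150 km away from the gold location, illustrating the penalty for coarse cells discussed next.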
With our distance-based evaluation, large cells are penalized by the distance from the midpoint to the actual location even when that location is in the same cell. Smaller cells reduce this penalty and permit the word distributions $\theta_{c_i j}$ to be much more specific for each cell, but they are harder to predict exactly and suffer more from sparse word counts compared to coarser granularity. For large datasets like Wikipedia, fine-grained grids work very well, but the trade-off between resolution and sufficient training material shows up more clearly for the smaller Twitter dataset.

4 Supervised models for document geolocation

Our methods use only the text in the documents; predictions are made based on the distributions θ, κ, and γ introduced in the previous section. No use is made of metadata, such as links/followers and infoboxes.

4.1 Supervision

We acquire θ and κ straightforwardly from the training material. The unsmoothed estimate of word $w_j$'s probability in a test document $d_k$ is:

$\tilde{\theta}_{d_k j} = \frac{\#(w_j, d_k)}{\sum_{w_l \in V} \#(w_l, d_k)}$ (1)

[Footnote 6: We use #() to indicate the count of an event.]

Similarly for a cell $c_i$, we compute the unsmoothed word distribution by aggregating all of the documents located within $c_i$:

$\tilde{\theta}_{c_i j} = \frac{\sum_{d_k \in c_i} \#(w_j, d_k)}{\sum_{d_k \in c_i} \sum_{w_l \in V} \#(w_l, d_k)}$ (2)

We compute the global distribution $\theta_{D j}$ over the set of all documents D in the same fashion. The word distribution of document $d_k$ backs off to the global distribution $\theta_{D j}$. The probability mass $\alpha_{d_k}$ reserved for unseen words is determined by the empirical probability of having seen a word once in the document, motivated by Good-Turing smoothing. (The cell distributions are treated analogously.) That is:

$\alpha_{d_k} = \frac{|\{w_j \in V \;\text{s.t.}\; \#(w_j, d_k) = 1\}|}{\sum_{w_j \in V} \#(w_j, d_k)}$ (3)

$\theta^{(-d_k)}_{D j} = \frac{\theta_{D j}}{1 - \sum_{w_l \in d_k} \theta_{D l}}$ (4)

$\theta_{d_k j} = \begin{cases} \alpha_{d_k}\, \theta^{(-d_k)}_{D j} & \text{if } \tilde{\theta}_{d_k j} = 0 \\ (1 - \alpha_{d_k})\, \tilde{\theta}_{d_k j} & \text{otherwise} \end{cases}$ (5)

[Footnote 7: $\theta^{(-d_k)}_{D j}$ is an adjusted version of $\theta_{D j}$ that is normalized over the subset of words not found in document $d_k$. This adjustment ensures that the entire distribution is properly normalized.]

The distributions over cells for each word simply renormalize the $\theta_{c_i j}$ values to achieve a proper distribution:

$\kappa_{j i} = \frac{\theta_{c_i j}}{\sum_{c_i \in G} \theta_{c_i j}}$ (6)

A useful aspect of the κ distributions is that they can be plotted in a geobrowser using thematic mapping techniques (Sandvik, 2008) to inspect the spread of a word over the earth. We used this as a simple way to verify the basic hypothesis that words that do not name locations are still useful for geolocation. Indeed, the Wikipedia distribution for mountain shows high density over the Rocky Mountains, Smokey Mountains, the Alps, and other ranges, while beach has high density in coastal areas. Words without inherent locational properties also have intuitively correct distributions: e.g., barbecue has high density over the south-eastern United States, Texas, Jamaica, and Australia, while wine is concentrated in France, Spain, Italy, Chile, Argentina, California, South Africa, and Australia.[8] Finally, the cell distributions are simply the relative frequency of the number of documents in each cell: $\gamma_i = \frac{|c_i|}{|D|}$. A standard set of stop words is ignored. Also, all words are lowercased except in the case of the most-common-toponym baselines, where uppercase words serve as a fallback in case a toponym cannot be located in the article.
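The back-off scheme in equations (1)–(5) can be sketched in a few lines. This is our own illustration, not the authors' code, and the toy counts and the global distribution below are invented.

```python
from collections import Counter

def word_dist(counts):
    """Unsmoothed multinomial over the words in `counts` (equations 1-2)."""
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def backoff_mass(counts):
    """Mass reserved for unseen words: the empirical probability of having
    seen a word exactly once, as in equation (3)."""
    return sum(1 for c in counts.values() if c == 1) / sum(counts.values())

def smoothed_prob(word, counts, global_dist):
    """P(word | document or cell) with back-off to the global distribution
    `global_dist` (theta_D), following equations (4)-(5)."""
    dist = word_dist(counts)
    alpha = backoff_mass(counts)
    if word in dist:
        return (1.0 - alpha) * dist[word]
    seen_mass = sum(global_dist.get(w, 0.0) for w in dist)  # mass of words seen here
    return alpha * global_dist.get(word, 0.0) / (1.0 - seen_mass)

# Toy usage with invented counts (not from the paper's data):
doc = Counter({"mountain": 2, "rocky": 1})
global_dist = {"mountain": 0.01, "rocky": 0.005, "beach": 0.02, "the": 0.965}
print(smoothed_prob("mountain", doc, global_dist))  # seen word: (1 - 1/3) * 2/3 ~ 0.44
print(smoothed_prob("beach", doc, global_dist))     # unseen word: backs off to theta_D ~ 0.0068
```

The same function yields the smoothed cell distributions $\theta_{c_i}$ when passed a cell's aggregated counts, and κ (equation 6) is then obtained by renormalizing those values across cells.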
4.2 Kullback-Leibler divergence

Given the distributions for each cell, $\theta_{c_i}$, in the grid, we use an information retrieval approach to choose a location for a test document $d_k$: compute the similarity between its word distribution $\theta_{d_k}$ and that of each cell, and then choose the closest one. Kullback-Leibler (KL) divergence is a natural choice for this (Zhai and Lafferty, 2001). For distributions P and Q, KL divergence is defined as:

$KL(P \,||\, Q) = \sum_i P(i) \log \frac{P(i)}{Q(i)}$ (7)

This quantity measures how good Q is as an encoding for P – the smaller it is the better. The best cell $\hat{c}_{KL}$ is the one which provides the best encoding for the test document:

$\hat{c}_{KL} = \arg\min_{c_i \in G} KL(\theta_{d_k} \,||\, \theta_{c_i})$ (8)

The fact that KL is not symmetric is desired here: the other direction, $KL(\theta_{c_i} \,||\, \theta_{d_k})$, asks which cell the test document is a good encoding for. [Footnote 8: This also acts as an exploratory tool. For example, due to a big spike on Cebu Province in the Philippines we learned that Cebuanos take barbecue very, very seriously.] With $KL(\theta_{d_k} \,||\, \theta_{c_i})$, the log ratio of probabilities for each word is weighted by the probability of the word in the test document, $\theta_{d_k j} \log \frac{\theta_{d_k j}}{\theta_{c_i j}}$, which means that the divergence is more sensitive to the document rather than the overall cell. As an example for why non-symmetric KL in this order is appropriate, consider geolocating a page in a densely geotagged cell, such as the page for the Washington Monument. The distribution of the cell containing the monument will represent the words from many other pages having to do with museums, US government, corporate buildings, and other nearby memorials and will have relatively small values for many of the words that are highly indicative of the monument's location. Many of those words appear only once in the monument's page, but this will still be a higher value than for the cell and will weight the contribution accordingly. Rather than computing $KL(\theta_{d_k} \,||\, \theta_{c_i})$ over the entire vocabulary, we restrict it to only the words in the document to compute KL more efficiently:

$KL(\theta_{d_k} \,||\, \theta_{c_i}) = \sum_{w_j \in V_{d_k}} \theta_{d_k j} \log \frac{\theta_{d_k j}}{\theta_{c_i j}}$ (9)

Early experiments showed that it makes no difference in the outcome to include the rest of the vocabulary. Note that because $\theta_{c_i}$ is smoothed, there are no zeros, so this value is always defined.

4.3 Naive Bayes

Naive Bayes is a natural generative model for the task of choosing a cell, given the distributions $\theta_{c_i}$ and γ: to generate a document, choose a cell $c_i$ according to γ and then choose the words in the document according to $\theta_{c_i}$:

$\hat{c}_{NB} = \arg\max_{c_i \in G} P_{NB}(c_i \mid d_k) = \arg\max_{c_i \in G} \frac{P(c_i)\, P(d_k \mid c_i)}{P(d_k)} = \arg\max_{c_i \in G} \gamma_i \prod_{w_j \in V_{d_k}} \theta_{c_i j}^{\#(w_j, d_k)}$ (10)

This method maximizes the combination of the likelihood of the document $P(d_k \mid c_i)$ and the cell prior probability $\gamma_i$.

4.4 Average cell probability

For each word, $\kappa_{j i}$ gives the probability of each cell in the grid. A simple way to compute a distribution for a document $d_k$ is to take a weighted average of the distributions for all words to compute the average cell probability (ACP):

$\hat{c}_{ACP} = \arg\max_{c_i \in G} P_{ACP}(c_i \mid d_k) = \arg\max_{c_i \in G} \frac{\sum_{w_j \in V_{d_k}} \#(w_j, d_k)\, \kappa_{j i}}{\sum_{c_l \in G} \sum_{w_j \in V_{d_k}} \#(w_j, d_k)\, \kappa_{j l}} = \arg\max_{c_i \in G} \sum_{w_j \in V_{d_k}} \#(w_j, d_k)\, \kappa_{j i}$ (11)

This method, despite its conceptual simplicity, works well in practice. It could also be easily modified to use different weights for words, such as TF/IDF or relative frequency ratios between geolocated documents and non-geolocated documents, which we intend to try in future work.

4.5 Baselines

There are several natural baselines to use for comparison against the methods described above.
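Before the baseline methods listed next, here is a minimal sketch (ours, not the authors' code) of the three supervised scorers in equations (9)–(11). The cell names and probabilities below are invented, and the cell distributions are assumed to already be smoothed so that no document word has zero probability.

```python
from math import log

def kl_score(doc_counts, cell_dist):
    """Equation (9): KL(theta_d || theta_c) restricted to the document's words.
    Smaller is better; cell_dist must assign a nonzero (smoothed) probability
    to every word of the document."""
    total = sum(doc_counts.values())
    return sum((n / total) * log((n / total) / cell_dist[w])
               for w, n in doc_counts.items())

def naive_bayes_logscore(doc_counts, cell_dist, cell_prior):
    """Log of equation (10): log gamma_i + sum_j #(w_j, d_k) * log theta_{c_i j}."""
    return log(cell_prior) + sum(n * log(cell_dist[w]) for w, n in doc_counts.items())

def acp_score(doc_counts, kappa_for_cell):
    """Equation (11) before the argmax: sum_j #(w_j, d_k) * kappa_{j i},
    where kappa_for_cell[w] = P(this cell | w)."""
    return sum(n * kappa_for_cell.get(w, 0.0) for w, n in doc_counts.items())

# Invented example: pick the cell whose distribution best encodes the document.
doc = {"mountain": 2, "rocky": 1}
cell_dists = {
    "rockies": {"mountain": 0.30, "rocky": 0.20, "beach": 0.50},
    "coast":   {"mountain": 0.05, "rocky": 0.05, "beach": 0.90},
}
best_kl = min(cell_dists, key=lambda c: kl_score(doc, cell_dists[c]))
print(best_kl)  # -> rockies
```

The KL scorer is minimized, whereas the Naive Bayes and ACP scorers are maximized over cells (e.g. with max(...) instead of min(...)).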
Random Choose ˆcrand randomly from a uniform distribution over the entire grid G. Cell prior maximum Choose the cell with the highest prior probability according to γ: ˆccpm = arg maxci∈G γi. Most frequent toponym Identify the most frequent toponym in the article and the geotagged Wikipedia articles that match it. Then identify which of those articles has the most incoming links (a measure of its prominence), and then choose ˆcmft to be the cell that contains the geotagged location for that article. This is a strong baseline method, but can only be used with Wikipedia. Note that a toponym matches an article (or equivalently, the article is a candidate for the toponym) either if the toponym is the same as the article’s title, 0 200 400 600 800 1000 1200 1400 grid size (degrees) mean error (km) 0.1 0.5 1 5 10 Most frequent toponym Avg. cell probability Naive Bayes Kullback−Leibler Figure 1: Plot of grid resolution in degrees versus mean error for each method on the Wikipedia dev set. or the same as the title after a parenthetical tag or comma-separated higher-level division is removed. For example, the toponym Tucson would match articles named Tucson, Tucson (city) or Tucson, Arizona. In this fashion, the set of toponyms, and the list of candidates for each toponym, is generated from the set of all geotagged Wikipedia articles. 5 Experiments The approaches described in the previous section are evaluated on both the geotagged Wikipedia and Twitter datasets. Given a predicted cell ˆc for a document, the prediction error is the great-circle distance between the true location and the center of ˆc, as described in section 3. Grid resolution and thresholding The major parameter of all our methods is the grid resolution. For both Wikipedia and Twitter, preliminary experiments on the development set were run to plot the prediction error for each method for each level of resolution, and the optimal resolution for each method was chosen for obtaining test results. For the Twitter dataset, an additional parameter is a threshold on the number of feeds each word occurs in: in the preprocessed splits of Eisenstein et al. (2010), all vocabulary items that appear in fewer than 40 feeds are ignored. This thresholding takes away a lot of very useful material; e.g. in the first feed, it removes 960 Figure 2: Histograms of distribution of error distances (in km) for grid size 0.5◦for each method on the Wikipedia dev set. both “kirkland” and “redmond” (towns in the Eastside of Lake Washington near Seattle), very useful information for geolocating that user. This suggests that a lower threshold would be better, and this is borne out by our experiments. Figure 1 graphs the mean error of each method for different resolutions on the Wikipedia dev set, and Figure 2 graphs the distribution of error distances for grid size 0.5◦for each method on the Wikipedia dev set. These results indicate that a grid size even smaller than 0.1◦might be beneficial. To test this, we ran experiments using a grid size of 0.05◦and 0.01◦using KL divergence. The mean errors on the dev set increased slightly, from 323 km to 348 and 329 km, respectively, indicating that 0.1◦is indeed the minimum. For the Twitter dataset, we considered both grid size and vocabulary threshold. We recomputed the distributions using several values for both parameters and evaluated on the development set. Table 1 shows mean prediction error using KL divergence, for various combinations of threshold and grid size. 
Similar tables were constructed for the other strategies. Clearly, the larger grid size of 5◦is more optimal than the 0.1◦best for Wikipedia. This is unsurprising, given the small size of the corpus. Overall, there is a less clear trend for the other methods Grid size (degrees) Thr. 0.1 0.5 1 5 10 0 1113.1 996.8 1005.1 969.3 1052.5 2 1018.5 959.5 944.6 911.2 1021.6 3 1027.6 940.8 954.0 913.6 1026.2 5 1011.7 951.0 954.2 892.0 1013.0 10 1011.3 968.8 938.5 929.8 1048.0 20 1032.5 987.3 966.0 940.0 1070.1 40 1080.8 1031.5 998.6 981.8 1127.8 Table 1: Mean prediction error (km) on the Twitter dev set for various combinations of vocabulary threshold (in feeds) and grid size, using the KL divergence strategy. in terms of optimal resolution. Our interpretation of this is that there is greater sparsity for the Twitter dataset, and thus it is more sensitive to arbitrary aspects of how different user feeds are captured in different cells at different granularities. For the non-baseline strategies, a threshold between about 2 and 5 was best, although no one value in this range was clearly better than another. Results Based on the optimal resolutions for each method, Table 2 provides the median and mean errors of the methods for both datasets, when run on the test sets. The results clearly show that KL divergence does the best of all the methods considered, with Naive Bayes a close second. Prediction on Wikipedia is very good, with a median value of 11.8 km. Error on Twitter is much higher at 479 km. Nonetheless, this beats Eisenstein et al.’s (2010) median results, though our mean is worse at 967. Using the same threshold of 40 as Eisenstein et al., our results using KL divergence are slightly worse than theirs: median error of 516 km and mean of 986 km. The difference between Wikipedia and Twitter is unsurprising for several reasons. Wikipedia articles tend to use a lot of toponyms and words that correlate strongly with particular places while many, perhaps most, tweets discuss quotidian details such as what the user ate for lunch. Second, Wikipedia articles are generally longer and thus provide more text to base predictions on. Finally, there are orders of magnitude more training examples for Wikipedia, which allows for greater grid resolution and thus more precise location predictions. 961 Wikipedia Twitter Strategy Degree Median Mean Threshold Degree Median Mean Kullback-Leibler 0.1 11.8 221 5 5 479 967 Naive Bayes 0.1 15.5 314 5 5 528 989 Avg. cell probability 0.1 24.1 1421 2 10 659 1184 Most frequent toponym 0.5 136 1927 Cell prior maximum 5 2333 4309 N/A 0.1 726 1141 Random 0.1 7259 7192 20 0.1 1217 1588 Eisenstein et al. 40 N/A 494 900 Table 2: Prediction error (km) on the Wikipedia and Twitter test sets for each of the strategies using the optimal grid resolution and (for Twitter) the optimal threshold, as determined by performance on the corresponding development sets. Eisenstein et al. (2010) used a fixed Twitter threshold of 40. Threshold makes no difference for cell prior maximum. Ships One of the most difficult types of Wikipedia pages to disambiguate are those of ships that either are stored or had sunk at a particular location. These articles tend to discuss the exploits of these ships, not their final resting places. Location error on these is usually quite large. However, prediction is quite good for ships that were sunk in particular battles which are described in detail on the page; examples are the USS Gambier Bay, USS Hammann (DD412), and the HMS Majestic (1895). 
Another situation that gives good results is when a ship is retired in a location where it is a prominent feature and is thus mentioned in the training set at that location. An example is the USS Turner Joy, which is in Bremerton, Washington and figures prominently in the page for Bremerton (which is in the training set). Another interesting aspect of geolocating ship articles is that ships tend to end up sunk in remote battle locations, such that their article is the only one located in the cell covering the location in the training set. Ship terminology thus dominates such cells, with the effect that our models often (incorrectly) geolocate test articles about other ships to such locations (and often about ships with similar properties). This also leads to generally more accurate geolocation of HMS ships over USS ships; the former seem to have been sunk in more concentrated regions that are themselves less spread out globally. 6 Related work Lieberman and Lin (2009) also work with geotagged Wikipedia articles, but they do in order so to analyze the likely locations of users who edit such articles. Other researchers have investigated the use of Wikipedia as a source of data for other supervised NLP tasks. Mihalcea and colleagues have investigated the use of Wikipedia in conjunction with word sense disambiguation (Mihalcea, 2007), keyword extraction and linking (Mihalcea and Csomai, 2007) and topic identification (Coursey et al., 2009; Coursey and Mihalcea, 2009). Cucerzan (2007) used Wikipedia to do named entity disambiguation, i.e. identification and coreferencing of named entities by linking them to the Wikipedia article describing the entity. Some approaches to document geolocation rely largely or entirely on non-textual metadata, which is often unavailable for many corpora of interest, Nonetheless, our methods could be combined with such methods when such metadata is available. For example, given that both Wikipedia and Twitter have a linked structure between documents, it would be possible to use the link-based method given in Backstrom et al. (2010) for predicting the location of Facebook users based on their friends’ locations. It is possible that combining their approach with our text-based approach would provide improvements for Facebook, Twitter and Wikipedia datasets. For example, their method performs poorly for users with few geolocated friends, but results improved by combining link-based predictions with IP address predictions. The text written users’ updates could be an additional aid for locating such users. 962 7 Conclusion We have shown that automatic identification of the location of a document based only on its text can be performed with high accuracy using simple supervised methods and a discrete grid representation of the earth’s surface. All of our methods are simple to implement, and both training and testing can be easily parallelized. Our most effective geolocation strategy finds the grid cell whose word distribution has the smallest KL divergence from that of the test document, and easily beats several effective baselines. We predict the location of Wikipedia pages to a median error of 11.8 km and mean error of 221 km. For Twitter, we obtain a median error of 479 km and mean error of 967 km. 
Using naive Bayes and a simple averaging of word-level cell distributions also both worked well; however, KL was more effective, we believe, because it weights the words in the document most heavily, and thus puts less importance on the less specific word distributions of each cell. Though we only use text, link-based predictions using the follower graph, as Backstrom et al. (2010) do for Facebook, could improve results on the Twitter task considered here. It could also help with Wikipedia, especially for buildings: for example, the page for Independence Hall in Philadelphia links to geotagged “friend” pages for Philadelphia, the Liberty Bell, and many other nearby locations and buildings. However, we note that we are still primarily interested in geolocation with only text because there are a great many situations in which such linked structure is unavailable. This is especially true for historical corpora like those made available by the Perseus project.9 The task of identifying a single location for an entire document provides a convenient way of evaluating approaches for connecting texts with locations, but it is not fully coherent in the context of documents that cover multiple locations. Nonetheless, both the average cell probability and naive Bayes models output a distribution over all cells, which could be used to assign multiple locations. Furthermore, these cell distributions could additionally be used to define a document level prior for resolution of individual toponyms. 9www.perseus.tufts.edu/ Though we treated the grid resolution as a parameter, the grids themselves form a hierarchy of cells containing finer-grained cells. Given this, there are a number of obvious ways to combine predictions from different resolutions. For example, given a cell of the finest grain, the average cell probability and naive Bayes models could successively back off to the values produced by their coarser-grained containing cells, and KL divergence could be summed from finest-to-coarsest grain. Another strategy for making models less sensitive to grid resolution is to smooth the per-cell word distributions over neighboring cells; this strategy improved results on Flickr photo geolocation for Serdyukov et al. (2009). An additional area to explore is to remove the bag-of-words assumption and take into account the ordering between words. This should have a number of obvious benefits, among which are sensitivity to multi-word toponyms such as New York, collocations such as London, Ontario or London in Ontario, and highly indicative terms such as egg cream that are made up of generic constituents. Acknowledgments This research was supported by a grant from the Morris Memorial Trust Fund of the New York Community Trust and from the Longhorn Innovation Fund for Technology. This paper benefited from reviewer comments and from discussion in the Natural Language Learning reading group at UT Austin, with particular thanks to Matt Lease. References Geoffrey Andogah. 2010. Geographically Constrained Information Retrieval. Ph.D. thesis, University of Groningen, Groningen, Netherlands, May. Lars Backstrom, Eric Sun, and Cameron Marlow. 2010. Find me if you can: improving geographical prediction with social and spatial proximity. In Proceedings of the 19th international conference on World wide web, WWW ’10, pages 61–70, New York, NY, USA. ACM. Kino Coursey and Rada Mihalcea. 2009. Topic identification using wikipedia graph centrality. 
In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers, NAACL ’09, pages 117– 963 120, Morristown, NJ, USA. Association for Computational Linguistics. Kino Coursey, Rada Mihalcea, and William Moen. 2009. Using encyclopedic knowledge for automatic topic identification. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning, CoNLL ’09, pages 210–218, Morristown, NJ, USA. Association for Computational Linguistics. Silviu Cucerzan. 2007. Large-scale named entity disambiguation based on Wikipedia data. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 708–716, Prague, Czech Republic, June. Association for Computational Linguistics. Junyan Ding, Luis Gravano, and Narayanan Shivakumar. 2000. Computing geographical scopes of web resources. In Proceedings of the 26th International Conference on Very Large Data Bases, VLDB ’00, pages 545–556, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. G. Dutton. 1996. Encoding and handling geospatial data with hierarchical triangular meshes. In M.J. Kraak and M. Molenaar, editors, Advances in GIS Research II, pages 505–518, London. Taylor and Francis. Jacob Eisenstein, Brendan O’Connor, Noah A. Smith, and Eric P. Xing. 2010. A latent variable model for geographic lexical variation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1277–1287, Cambridge, MA, October. Association for Computational Linguistics. Qiang Hao, Rui Cai, Changhu Wang, Rong Xiao, JiangMing Yang, Yanwei Pang, and Lei Zhang. 2010. Equip tourists with knowledge mined from travelogues. In Proceedings of the 19th international conference on World wide web, WWW ’10, pages 401– 410, New York, NY, USA. ACM. Jochen L. Leidner. 2008. Toponym Resolution in Text: Annotation, Evaluation and Applications of Spatial Grounding of Place Names. Dissertation.Com, January. M. D. Lieberman and J. Lin. 2009. You are where you edit: Locating Wikipedia users through edit histories. In ICWSM’09: Proceedings of the 3rd International AAAI Conference on Weblogs and Social Media, pages 106–113, San Jose, CA, May. Bruno Martins. 2009. Geographically Aware Web Text Mining. Ph.D. thesis, University of Lisbon. Rada Mihalcea and Andras Csomai. 2007. Wikify!: linking documents to encyclopedic knowledge. In Proceedings of the sixteenth ACM conference on Conference on information and knowledge management, CIKM ’07, pages 233–242, New York, NY, USA. ACM. Rada Mihalcea. 2007. Using Wikipedia for Automatic Word Sense Disambiguation. In North American Chapter of the Association for Computational Linguistics (NAACL 2007). Simon Overell. 2009. Geographic Information Retrieval: Classification, Disambiguation and Modelling. Ph.D. thesis, Imperial College London. Jay M. Ponte and W. Bruce Croft. 1998. A language modeling approach to information retrieval. In Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval, SIGIR ’98, pages 275–281, New York, NY, USA. ACM. Erik Rauch, Michael Bukatin, and Kenneth Baker. 2003. A confidence-based framework for disambiguating geographic terms. In Proceedings of the HLT-NAACL 2003 workshop on Analysis of geographic references - Volume 1, HLT-NAACL-GEOREF ’03, pages 50–54, Stroudsburg, PA, USA. 
Association for Computational Linguistics. Bjorn Sandvik. 2008. Using KML for thematic mapping. Master’s thesis, The University of Edinburgh. Pavel Serdyukov, Vanessa Murdock, and Roelof van Zwol. 2009. Placing flickr photos on a map. In Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval, SIGIR ’09, pages 484–491, New York, NY, USA. ACM. David A. Smith and Gregory Crane. 2001. Disambiguating geographic names in a historical digital library. In Proceedings of the 5th European Conference on Research and Advanced Technology for Digital Libraries, ECDL ’01, pages 127–136, London, UK. Springer-Verlag. B. E. Teitler, M. D. Lieberman, D. Panozzo, J. Sankaranarayanan, H. Samet, and J. Sperling. 2008. NewsStand: A new view on news. In GIS’08: Proceedings of the 16th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, pages 144–153, Irvine, CA, November. Chengxiang Zhai and John Lafferty. 2001. Model-based feedback in the language modeling approach to information retrieval. In Proceedings of the tenth international conference on Information and knowledge management, CIKM ’01, pages 403–410, New York, NY, USA. ACM. 964
| 2011 | 96 |
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 965–975, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Piggyback: Using Search Engines for Robust Cross-Domain Named Entity Recognition Stefan R¨ud Institute for NLP University of Stuttgart Germany Massimiliano Ciaramita Google Research Z¨urich Switzerland Jens M¨uller and Hinrich Sch¨utze Institute for NLP University of Stuttgart Germany Abstract We use search engine results to address a particularly difficult cross-domain language processing task, the adaptation of named entity recognition (NER) from news text to web queries. The key novelty of the method is that we submit a token with context to a search engine and use similar contexts in the search results as additional information for correctly classifying the token. We achieve strong gains in NER performance on news, in-domain and out-of-domain, and on web queries. 1 Introduction As statistical Natural Language Processing (NLP) matures, NLP components are increasingly used in real-world applications. In many cases, this means that some form of cross-domain adaptation is necessary because there are distributional differences between the labeled training set that is available and the real-world data in the application. To address this problem, we propose a new type of features for NLP data, features extracted from search engine results. Our motivation is that search engine results can be viewed as a substitute for the world knowledge that is required in NLP tasks, but that can only be extracted from a standard training set or precompiled resources to a limited extent. For example, a named entity (NE) recognizer trained on news text may tag the NE London in an out-of-domain web query like London Klondike gold rush as a location. But if we train the recognizer on features derived from search results for the sentence to be tagged, correct classification as person is possible. This is because the search results for London Klondike gold rush contain snippets in which the first name Jack precedes London; this is a sure indicator of a last name and hence an NE of type person. We call our approach piggyback and search resultderived features piggyback features because we piggyback on a search engine like Google for solving a difficult NLP task. In this paper, we use piggyback features to address a particularly hard cross-domain problem, the application of an NER system trained on news to web queries. This problem is hard for two reasons. First, the most reliable cue for NEs in English, as in many languages, is capitalization. But queries are generally lowercase and even if uppercase characters are used, they are not consistent enough to be reliable features. Thus, applying NER systems trained on news to web queries requires a robust cross-domain approach. News to queries adaptation is also hard because queries provide limited context for NEs. In news text, the first mention of a word like Ford is often a fully qualified, unambiguous name like Ford Motor Corporation or Gerald Ford. In a short query like buy ford or ford pardon, there is much less context than in news. The lack of context and capitalization, and the noisiness of real-world web queries (tokenization irregularities and misspellings) all make NER hard. The low annotator agreement we found for queries (Section 5) also confirms this point. The correct identification of NEs in web queries can be crucial for providing relevant pages and ads to users. 
Other domains have characteristics similar to web queries, e.g., automatically transcribed speech, social communities like Twitter, and SMS. Thus, NER for short, noisy text fragments, in the absence of capitalization, is of general importance. 965 NER performance is to a large extent determined by the quality of the feature representation. Lexical, part-of-speech (PoS), shape and gazetteer features are standard. While the impact of different types of features is well understood for standard NER, fundamentally different types of features can be used when leveraging search engine results. Returning to the NE London in the query London Klondike gold rush, the feature “proportion of search engine results in which a first name precedes the token of interest” is likely to be useful in NER. Since using search engine results for cross-domain robustness is a new approach in NLP, the design of appropriate features is crucial to its success. A significant part of this paper is devoted to feature design and evaluation. This paper is organized as follows. Section 2 discusses related work. We describe standard NER features in Section 3. One main contribution of this paper is the large array of piggyback features that we propose in Section 4. We describe the data sets we use and our experimental setup in Sections 5–6. The results in Section 7 show that piggyback features significantly increase NER performance. This is the second main contribution of the paper. We discuss challenges of using piggyback features – due to the cost of querying search engines – and present our conclusions and future work in Section 8. 2 Related work Barr et al. (2008) found that capitalization of NEs in web queries is inconsistent and not a reliable cue for NER. Guo et al. (2009) exploit query logs for NER in queries. This is also promising, but the context in search results is richer and potentially more informative than that of other queries in logs. The insight that search results provide useful additional context for natural language expressions is not new. Perhaps the oldest and best known application is pseudo-relevance feedback which uses words and phrases from search results for query expansion (Rocchio, 1971; Xu and Croft, 1996). Search counts or search results have also been used for sentiment analysis (Turney, 2002), for transliteration (Grefenstette et al., 2004), candidate selection in machine translation (Lapata and Keller, 2005), text similarity measurements (Sahami and Heilman, 2006), incorrect parse tree filtering (Yates et al., 2006), and paraphrase evaluation (Fujita and Sato, 2008). The specific NER application we address is most similar to the work of Farkas et al. (2007), but they mainly used frequency statistics as opposed to what we view as the main strength of search results: the ability to get additional contextually similar uses of the token that is to be classified. Lawson et al. (2010), Finin et al. (2010), and Yetisgen-Yildiz et al. (2010) investigate how to best use Amazon Mechanical Turk (AMT) for NER. We use AMT as a tool, but it is not our focus. NLP settings where training and test sets are from different domains have received considerable attention in recent years. These settings are difficult because many machine learning approaches assume that source and target are drawn from the same distribution; this is not the case if they are from different domains. 
Systems applied out of domain typically incur severe losses in accuracy; e.g., Poibeau and Kosseim (2000) showed that newswire-trained NER systems perform poorly when applied to email data (a drop of F1 from .9 to .5). Recent work in machine learning has made substantial progress in understanding how cross-domain features can be used in effective ways (Ben-David et al., 2010). The development of such features however is to a large extent an empirical problem. From this perspective, one of the most successful approaches to adaptation for NER is based on generating shared feature representations between source and target domains, via unsupervised methods (Ando, 2004; Turian et al., 2010). Turian et al. (2010) show that adapting from CoNLL to MUC-7 (Chinchor, 1998) data (thus between different newswire sources), the best unsupervised feature (Brown clusters) improves F1 from .68 to .79. Our approach fits within this line of work in that it empirically investigates features with good cross-domain generalization properties. The main contribution of this paper is the design and evaluation of a novel family of features extracted from the largest and most up-to-date repository of world knowledge, the web. Another source of world knowledge for NER is Wikipedia: Kazama and Torisawa (2007) show that pseudocategories extracted from Wikipedia help for in-domain NER. Cucerzan (2007) uses Wikipedia and web search frequencies to improve NE disambiguation, including simple web search frequencies 966 BASE: lexical and input-text part-of-speech features 1 WORD(k,i) binary: wk = wi 2 POS(k,t) binary: wk has part-of-speech t 3 SHAPE(k,i) binary: wk has (regular expression) shape regexpi 4 PREFIX(j) binary: w0 has prefix j (analogously for suffixes) GAZ: gazetteer features 5 GAZ-Bl(k,i) binary: wk is the initial word of a phrase, consisting of l words, whose gaz. category is i 6 GAZ-Il(k,i) binary: wk is a non-initial word in a phrase, consisting of l words, whose gaz. category is i URL: URL features 7 URL-SUBPART N(w0 is substring of a URL)/N(URL) 8 URL-MI(PER) 1/N(URL-parts)P[[p∈URL-parts]] 3MIu(p, PER)−MIu(p, O)−MIu(p, ORG)−MIu(p, LOC) LEX: local lexical features 9 NEIGHBOR(k) 1/N(k-neighbors) P[[v∈k-neighbors]] log[NE-BNC(v, k)/OTHER-BNC(v, k)] 10 LEX-MI(PER,d) 1/N(d-words) P[[v∈d-words]] 3MId(v, PER)−MId(v, O)−MId(v, ORG)−MId(v, LOC) BOW: bag-of-word features 11 BOW-MI(PER) 1/N(bow-words)P[[v∈bow-words]] 3MIb(v, PER)−MIb(v, O)−MIb(v, ORG)−MIb(v, LOC) MISC: shape, search part-of-speech, and title features 12 UPPERCASE N(s0 is uppercase)/N(s0) 13 ALLCAPS N(s0 is all-caps)/N(s0) 14 SPECIAL binary: w0 contains special character 15 SPECIAL-TITLE N(s−1 or s1 in title contains special character)/(N(s−1)+N(s1)) 16 TITLE-WORD N(s0 occurs in title)/N(title) 17 NOMINAL-POS N(s0 is tagged with nominal PoS)/N(s0) 18 CONTEXT(k) N(sk is typical neighbor at position k of named entity)/N(s0) 19 PHRASE-HIT(k) N(wk = sk, i.e., word at position k occurs in snippet)/N(s0) 20 ACRONYM N(w−1w0 or w0w1 or w−1w0w1 occur as acronym)/N(s0) 21 EMPTY binary: search result is empty Table 1: NER features used in this paper. BASE and GAZ are standard features. URL, LEX, BOW and MISC are piggyback (search engine-based) features. See text for explanation of notation. The definitions of URL-MI, LEX-MI, and BOW-MI for LOC, ORG and O are analogous to those for PER. For better readability, we write P[[x]] for P x. for compound entities. 
3 Standard NER features As is standard in supervised NER, we train an NE tagger on a dataset where each token is represented as a feature vector. In this and the following section we present the features used in our study divided in groups. We will refer to the target token – the token we define the feature vector for – as w0. Its left neighbor is w−1 and its right neighbor w1. Table 1 provides a summary of all features. Feature group BASE. The first class of features, BASE, is standard in NER. The binary feature WORD(k,i) (line 1) is 1 iff wi, the ith word in the dictionary, occurs at position k with respect to w0. The dictionary consists of all words in the training set. The analogous feature for part of speech, POS(k,t) (line 2), is 1 iff wk has been tagged with PoS t, as determined by TnT tagger (Brants, 2000). We also encode surface properties of the word with simple regular expressions, e.g., x-ray is encoded as x-x and 9/11 as d/dd (SHAPE, line 3). For these features, k ∈{−1, 0, 1}. Finally, we encode prefixes and suffixes, up to three characters long, for w0 (line 4). Feature group GAZ. Gazetteer features (lines 5 & 6) are an efficient and effective way of building world knowledge into an NER model. A gazetteer is simply a list of phrases that belong to a particular semantic category. We use gazetteers from (i) GATE (Cunningham et al., 2002): countries, first/last names, trigger words; (ii) WordNet: the 46 lexicographical labels (food, location, person etc.); and (iii) Fortune 500: company names. The two gazetteer features are the binary features GAZBl(k,i) and GAZ-Il(k,i). GAZ-Bl (resp. GAZ-Il) is 1 967 iff wk occurs as the first (resp. non-initial or internal) word in a phrase of length l that the gazetteer lists as belonging to category i where k ∈{−1, 0, 1}. 4 Piggyback features Feature groups URL, LEX, BOW, and MISC are piggyback features. We produce these by segmenting the input text into overlapping trigrams w1w2w3, w2w3w4, w3w4w5 etc. Each trigram wi−1wiwi+1 is submitted as a query to the search engine. For all experiments we used the publicly accessible Google Web Search API.1 The search engine returns a search result for the query consisting of, in most cases, 10 snippets,2 each of which contains 0, 1 or more hits of the search term wi. We then compute features for the vector representation of wi based on the snippets. We again refer to the target token and its neighbors (i.e., the search string) as w−1w0w1. w0 is the token that is to be classified (PER, LOC, ORG, or O) and the previous word and the next word serve as context that the search engine can exploit to provide snippets in which w0 is used in the same NE category as in the input text. O is the tag of a token that is neither LOC, ORG nor PER. In the definition of the features, we refer to the word in the snippet that matches w0 as s0, where the match is determined based on edit distance. The word immediately to the left (resp. right) of s0 in a snippet is called s−1 (resp. s1). For non-binary features, we first calculate real values and then binarize them into 10 quantile bins. Feature group URL. This group exploits NE information in URLs. The feature URL-SUBPART (line 7) is the fraction of URLs in the search result containing w0 as a substring. To avoid spurious matches, we set the feature to 0 if length(w0) ≤2. For URL-MI (line 8), each URL in the search result is split on special characters into parts (e.g., domain and subdomains). We refer to the set of all parts in the search result as URL-parts. 
The value of MIu(p, PER) is computed on the search results of the training set as the mutual information (MI) between (i) w0 being PER and (ii) p occurring as part of a URL in the search result. MI is defined as follows:

MI(p, PER) = \sum_{i \in \{\bar{p},\, p\}} \sum_{j \in \{\overline{PER},\, PER\}} P(i, j) \log \frac{P(i, j)}{P(i)\, P(j)}

For example, for the URL-part p = "staff" (e.g., in bigcorp.com/staff.htm), P(staff) is the proportion of search results that contain a URL with the part "staff", P(PER) is the proportion of search results where the search token w0 is PER and P(staff,PER) is the proportion of search results where w0 is PER and one of the URLs returned by the search engine has part "staff". The value of the feature URL-MI is the average difference between the MI of PER and the other named entities. The feature is calculated in the same way for LOC, ORG, and O. Our initial experiments that used binary features for URL parts were not successful. We then designed URL-MI to integrate all URL information specific to an NE class into one measurement in a way that gives higher weight to strong features and lower weight to weak features. The inner sum on line 8 is the sum of the three differences MI(PER) − MI(O), MI(PER) − MI(ORG), and MI(PER) − MI(LOC). Each of the three summands indicates the relative advantage a URL part p gives to PER vs O (or ORG and LOC). By averaging over all URL parts, one then obtains an assessment of the overall strength of evidence (in terms of MI) for the NE class in question.

1 Now deprecated in favor of the new Custom Search API.
2 Less than 0.5% of the queries return fewer than 10 snippets.

Feature group LEX. These features assess how appropriate the words occurring in w0's local contexts in the search result are for an NE class. For NEIGHBOR (line 9), we calculate for each word v in the British National Corpus (BNC) the count NE-BNC(v, k), the number of times it occurs at position k with respect to an NE; and OTHER-BNC(v, k), the number of times it occurs at position k with respect to a non-NE. We instantiate the feature for k = −1 (left neighbor) and k = 1 (right neighbor). The value of NEIGHBOR(k) is defined as the average log ratio of NE-BNC(v, k) and OTHER-BNC(v, k), averaged over the set k-neighbors, the set of words that occur at position k with respect to s0 in the search result. In the experiments reported in this paper, we use a PoS-tagged version of the BNC, a balanced corpus of 100M words of British English, as a model of word distribution in general contexts and in NE contexts that is not specific to either target or source domain. In the BNC, NEs are tagged with just one PoS-tag, but there is no differentiation into subcategories. Note that the search engine could be used again for this purpose; for practical reasons we preferred a static resource for this first study where many design variants were explored. The feature LEX-MI interprets words occurring before or after s0 as indicators of named entitihood. The parameter d indicates the "direction" of the feature: before or after. MId(v, PER) is computed on the search results of the training set as the MI between (i) w0 being PER and (ii) v occurring close to s0 in the search result either to the left (d = −1) or to the right (d = 1) of s0. Close refers to a window of 2 words. The value of LEX-MI(PER,d) is then the average difference between the MI of PER and the other NEs. The definition for LEX-MI(PER,d) is given on line 10. The feature is calculated in the same way for LOC, ORG, and O.
Feature group BOW.
The features LEX-MI consider a small window for cooccurrence information and distinguish left and right context. For BOW features, we use a larger window and ignore direction. Our aim is to build a bag-of-words representation of the contexts of w0 in the result snippets. MIb(v, PER) is computed on the search results of the training set as the MI between (i) w0 being PER and (ii) v occurring anywhere in the search result. The value of BOW-MI(PER) is the average difference between the MI of PER and the other NEs (line 11). The average is computed over all words v ∈bow-words that occur in a particular search result. The feature is calculated in the same way for LOC, ORG, and O. Feature group MISC. We collect the remaining piggyback features in the group MISC. The UPPERCASE and ALLCAPS features (lines 12&13) compute the fraction of occurrences of w0 in the search result with capitalization of only the first letter and all letters, respectively. We exclude titles: capitalization in titles is not a consistent clue for NE status. The SPECIAL feature (line 14) returns 1 iff any character of w0 is a number or a special character. NEs are often surrounded by special characters in web pages, e.g., Janis Joplin - Summertime. The SPECIAL-TITLE feature (line 15) captures this by counting the occurrences of numbers and special characters in s−1 and s1 in titles of the search result. The TITLE-WORD feature (line 16) computes the fraction of occurrences of w0 in the titles of the search result. The NOMINAL-POS feature (line 17) calculates the proportion of nominal PoS tags (NN, NNS, NP, NPS) of s0 in the search result, as determined by a PoS tagging of the snippets using TreeTagger (Schmid, 1994). The basic idea behind the CONTEXT(k) feature (line 18) is that the occurrence of words of certain shapes and with certain parts of speech makes it either more or less likely that w0 is an NE. For k = −1 (the word preceding s0 in the search result), we test for words that are adjectives, indefinites, possessive pronouns or numerals (partly based on tagging, partly based on a manually compiled list of words). For k = 1 (the word following s0), we test for words that contain numbers and special characters. This feature is complementary to the feature group LEX in that it is based on shape and PoS and does not estimate different parameters for each word. The feature PHRASE-HIT(−1) (line 19) calculates the proportion of occurrences of w0 in the search result where the left neighbor in the snippet is equal to the word preceding w0 in the search string, i.e., k = −1: s−1 = w−1. PHRASE-HIT(1) is the equivalent for the right neighbor. This feature helps identify phrases – search strings containing NEs are more likely to occur as a phrase in search results. The ACRONYM feature (line 20) computes the proportion of the initials of w−1w0 or w0w1 or w−1w0w1 occurring in the search result. For example, the abbreviation GM is likely to occur when searching for general motors dealers. The binary feature EMPTY (line 21) returns 1 iff the search result is empty. This feature enables the classifier to distinguish true zero values (e.g., for the feature ALLCAPS) from values that are zero because the search engine found no hits. 5 Experimental data In our experiments, we train an NER classifier on an in-domain data set and test it on two different outof-domain data sets. 
We describe these data sets in 969 CoNLL trn CoNLL tst IEER KDD-D KDD-T LOC 4.1 4.1 1.9 11.9 10.6 ORG 4.9 3.7 3.2 8.2 8.3 PER 5.4 6.4 3.8 5.3 5.4 O 85.6 85.8 91.1 74.6 75.7 Table 2: Percentages of NEs in CoNLL, IEER, and KDD. this section and the NER classifier and the details of the training regime in the next section, Section 6. As training data for all models evaluated we used the CoNLL 2003 English NER dataset, a corpus of approximately 300,000 tokens of Reuters news from 1992 annotated with person, location, organization and miscellaneous NE labels (Sang and Meulder, 2003). As out-of-domain newswire evaluation data3 we use the development test data from the NIST 1999 IEER named entity corpus, a dataset of 50,000 tokens of New York Times (NYT) and Associated Press Weekly news.4 This corpus is annotated with person, location, organization, cardinal, duration, measure, and date labels. CoNLL and IEER are professionally edited and, in particular, properly capitalized news corpora. As capitalization is absent from queries we lowercased both CoNLL and IEER. We also reannotated the lowercased datasets with PoS categories using the retrained TnT PoS tagger (Brants, 2000) to avoid using non-plausible PoS information. Notice that this step is necessary as otherwise virtually no NNP/NNPS categories would be predicted on the query data because the lowercase NEs of web queries never occur in properly capitalized news; this causes an NER tagger trained on standard PoS to underpredict NEs (1–3% positive rate). The 2005 KDD Cup is a query topic categorization task based on 800,000 queries (Li et al., 2005).5 We use a random subset of 2000 queries as a source of web queries. By means of simple regular expressions we excluded from sampling queries that looked like urls or emails (≈15%) as they are easy to identify and do not provide a significant chal3A reviewer points out that we use the terms in-domain and out-of-domain somewhat liberally. We simply use “different domain” as a short-hand for “different distribution” without making any claim about the exact nature of the difference. 4nltk.googlecode.com/svn/trunk/nltk data 5www.sigkdd.org/kdd2005/kddcup.html lenge. We also excluded queries shorter than 10 characters (4%) and longer than 50 characters (2%) to provide annotators with enough context, but not an overly complex task. The annotation procedure was carried out using Amazon Mechanical Turk. We instructed workers to follow the CoNLL 2003 NER guidelines (augmented with several examples from queries that we annotated) and identify up to three NEs in a short text and copy and paste them into a box with associated multiple choice menu with the 4 CoNLL NE labels: LOC, MISC, ORG, and PER. Five workers annotated each query. In a first round we produced 1000 queries later used for development. We call this set KDD-D. We then expanded the guidelines with a few uncertain cases. In a second round, we generated another 1000 queries. This set will be referred to as KDD-T. Because annotator agreement is low on a per-token basis (κ = .30 for KDD-D, κ = .34 for KDD-T (Cohen, 1960)), we remove queries with less than 50% agreement, averaged over the tokens in the query. After this filtering, KDD-D and KDD-T contain 777 and 819 queries, respectively. Most of the rater disagreement involves the MISC NE class. This is not surprising as MISC is a sort of place-holder category that is difficult to define and identify in queries, especially by untrained AMT workers. We thus replaced MISC with the null label O. 
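The agreement-based filtering just described can be sketched as follows. We assume per-token agreement is the fraction of the five workers voting for the majority label, which is one reasonable reading of the text; the data layout and helper names are illustrative only.

```python
from collections import Counter

def token_agreement(labels):
    # labels: the five worker labels for one token, e.g. ["PER","PER","O","PER","PER"].
    # Agreement = share of workers voting for the majority label (assumed definition).
    return Counter(labels).most_common(1)[0][1] / len(labels)

def keep_query(token_labels):
    # token_labels: one list of worker labels per token in the query.
    # Keep the query if agreement, averaged over its tokens, is at least 50%.
    avg = sum(token_agreement(t) for t in token_labels) / len(token_labels)
    return avg >= 0.5

query = [["PER", "PER", "PER", "O", "PER"],   # e.g. token "obama"
         ["O", "O", "O", "O", "MISC"]]        # e.g. token "approval"
print(keep_query(query))  # True (average agreement 0.8)
```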
With these two changes, κ was .54 on KDD-D and .64 on KDD-T. This is sufficient for repeatable experiments.6 Table 2 shows the distribution of NE types in the 5 datasets. IEER has fewer NEs than CoNLL, KDD has more. PER is about as prevalent in KDD as in CoNLL, but LOC and ORG have higher percentages, reflecting the fact that people search frequently for locations and commercial organizations. These differences between source domain (CoNLL) and target domains (IEER, KDD) add to the difficulty of cross-domain generalization in this case. 6 Experimental setup Recall that the input features for a token w0 consist of standard NER features (BASE and GAZ) and features derived from the search result we obtain by 6The two KDD sets, along with additional statistics on annotator agreement requested by a reviewer, are available at ifnlp.org/∼schuetze/piggyback11. 970 running a search for w−1w0w1 (URL, LEX, BOW, and MISC). Since the MISC NE class is not annotated in IEER and has low agreement on KDD in the experimental evaluation we focus on the fourclass (PER, LOC, ORG, O) NER problem on all datasets. We use BIO encoding as in the original CoNLL task (Sang and Meulder, 2003). ALL LOC ORG PER CoNLL c1 l BASE GAZ 88.8∗91.9 77.9 93.0 c2 l GAZ URL BOW MISC 86.4∗90.7 74.0 90.9 c3 l BASE URL BOW MISC 92.3∗93.7 84.8 96.0 c4 l BASE GAZ BOW MISC 91.1∗93.3 82.2 94.9 c5 l BASE GAZ URL MISC 92.7∗94.9 84.5 95.9 c6 l BASE GAZ URL BOW 92.3∗94.2 84.4 95.8 c7 l BASE GAZ URL BOW MISC 93.0 94.9 85.1 96.4 c8 l BASE GAZ URL LEX BOW MISC 92.9 94.7 84.9 96.5 c9 c BASE GAZ 92.9 95.3 87.7 94.6 IEER i1 l BASE GAZ 57.9∗71.0 37.7 59.9 i2 l GAZ URL LEX BOW MISC 63.8∗76.2 26.0 75.9 i3 l BASE URL LEX BOW MISC 64.9∗71.8 38.3 73.8 i4 l BASE GAZ LEX BOW MISC 67.3 76.7 41.2 74.6 i5 l BASE GAZ URL BOW MISC 67.8 76.7 40.4 75.8 i6 l BASE GAZ URL LEX MISC 68.1 77.2 36.9 77.8 i7 l BASE GAZ URL LEX BOW 66.6∗77.4 38.3 73.9 i8 l BASE GAZ URL LEX BOW MISC 68.1 77.4 36.2 78.0 i9 c BASE GAZ 68.6∗77.3 52.3 73.1 KDD-T k1 l BASE GAZ 34.6∗48.9 19.2 34.7 k2 l GAZ URL LEX MISC 40.4∗52.1 15.4 50.4 k3 l BASE URL LEX MISC 40.9∗50.0 20.1 48.0 k4 l BASE GAZ LEX MISC 41.6∗55.0 25.2 45.2 k5 l BASE GAZ URL MISC 43.0 57.0 15.8 50.9 k6 l BASE GAZ URL LEX 40.7∗55.5 15.8 42.9 k7 l BASE GAZ URL LEX MISC 43.8 56.4 17.0 52.0 k8 l BASE GAZ URL LEX BOW MISC 43.8 56.5 17.4 52.3 Table 3: Evaluation results. l = text lowercased, c = original capitalization preserved. ALL scores significantly different from the best results for the three datasets (lines c7, i8, k7) are marked ∗(see text). We use SuperSenseTagger (Ciaramita and Altun, 2006)7 as our NER tagger. It is a first-order conditional HMM trained with the perceptron algo7sourceforge.net/projects/supersensetag rithm (Collins, 2002), a discriminative model with excellent efficiency-performance trade-off (Sha and Pereira, 2003). The model is regularized by averaging (Freund and Schapire, 1999). For all models we used an appropriate development set for choosing the only hyperparameter, T, the number of training iterations on the source data. T must be tuned separately for each evaluation because different target domains have different overfitting patterns. We train our NER system on an 80% sample of the CoNLL data. For our in-domain evaluation, we tune T on a 10% development sample of the CoNLL data and test on the remaining 10%. For our out-ofdomain evaluation, we use the IEER and KDD test sets. Here T is tuned on the corresponding development sets. 
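For readers less familiar with the BIO encoding mentioned above, the sketch below converts labeled spans in a (lowercased) query into per-token tags; the (start, end, label) span representation is an assumption for illustration, not the authors' data format.

```python
def bio_encode(tokens, spans):
    # spans: (start, end, label) with end exclusive, e.g. (0, 2, "PER")
    # for a two-token person name at the beginning of the query.
    tags = ["O"] * len(tokens)
    for start, end, label in spans:
        tags[start] = "B-" + label
        for i in range(start + 1, end):
            tags[i] = "I-" + label
    return tags

tokens = ["barack", "obama", "approval", "rating"]
print(bio_encode(tokens, [(0, 2, "PER")]))
# ['B-PER', 'I-PER', 'O', 'O']
```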
Since we do not train on IEER and KDD, these two data sets do not have training set portions. For each data set, we perform 63 runs, corresponding to the 26−1 = 63 different non-empty combinations of the 6 feature groups. We report average F1, generated by five-trial training and evaluation, with random permutations of the training data. We compute the scores using the original CoNLL phrasebased metric (Sang and Meulder, 2003). As a benchmark we use the baseline model with gazetteer features (BASE and GAZ). The robustness of this simple approach is well documented; e.g., Turian et al. (2010) show that the baseline model (gazetteer features without unsupervised features) produces an F1 of .778 against .788 of the best unsupervised word representation feature. 7 Results and discussion Table 3 summarizes the experimental results. In each column, the best numbers within a dataset for the “lowercased” runs are bolded (see below for discussion of the “capitalization” runs on lines c9 and i9). For all experiments, we selected a subset of the combinations of the feature groups. This subset always includes the best results and a number of other combinations where feature groups are added to or removed from the optimal combination. Results for the CoNLL test set show that the 5 feature groups without LEX achieve optimal performance (line c7). Adding LEX improves performance on PER, but decreases overall performance (line c8). Removing GAZ, URL, BOW and MISC 971 from line c7, causes small comparable decreases in performance (lines c3–c6). These feature groups seem to have about the same importance in this experimental setting, but leaving out BASE decreases F1 by a larger 6.6% (lines c7 vs c2). The main result for CoNLL is that using piggyback features (line c7) improves F1 of a standard NER system that uses only BASE and GAZ (line c1) by 4.2%. Even though the emphasis of this paper is on cross-domain robustness, we can see that our approach also has clear in-domain benefits. The baseline in line c1 is the “lowercase” baseline as indicated by “l”. We also ran a “capitalized” baseline (“c”) on text with the original capitalization preserved and PoS-tagged in this unchanged form. Comparing lines c7 and c9, we see that piggyback features are able to recover all the performance that is lost when proper capitalization is unavailable. Lin and Wu (2009) report an F1 score of 90.90 on the original split of the CoNLL data. Our F1 scores > 92% can be explained by a combination of randomly partitioning the data and the fact that the fourclass problem is easier than the five-class problem LOC-ORG-PER-MISC-O. We use the t-test to compute significance on the two sets of five F1 scores from the two experiments that are being compared (two-tailed, p < .01 for t > 3.36).8 CoNLL scores that are significantly different from line c7 are marked with ∗. For IEER, the system performs best for all six feature groups (line i8). Runs significantly different from i8 are marked ∗. When URL, LEX and BOW are removed from the set, performance does not decrease, or only slightly (lines i4, i5, i6), indicating that these three feature groups are least important. In contrast, there is significant evidence for the importance of BASE, GAZ, and MISC: removing them decreases performance by at least 1% (lines i2, i3, i7). The large increase of ORG F1 when URL is not used is surprising (41.2% on line i4, best performance). The reason seems to be that URL features (and LEX to a lesser extent) do not generalize for ORG. 
Locations like Madrid in CoNLL are frequently tagged ORG when they refer to sports clubs like Real Madrid. This is rare in IEER and KDD. 8We make the assumption that the distribution of F1 scores is approximately normal. See Cohen (1995), Noreen (1989) for a discussion of how this affects the validity of the t-test. Compared to standard NER (using feature groups BASE and GAZ), our combined feature set achieves a performance that is by more than 10% higher (lines i8 vs i1). This demonstrates that piggyback features have robust cross-domain generalization properties. The comparison of lines i8 and i9 confirms that the features effectively compensate for the lack of capitalization, and perform almost as well as (although statistically worse than) a model trained on capitalized data. The best run on KDD-D was the run with feature groups BASE, GAZ, URL, LEX and MISC. On line k7, we show results for this run for KDD-T and for runs that differ by one feature group (lines k2–k6, k8).9 The overall best result (43.8%) is achieved when using all feature groups (line k8). Omitting BOW results in the same score for ALL (line k7). Apparently, the local LEX features already capture most useful cooccurrence information and looking at a wider window (as implemented by BOW) is of limited utility. On lines k2–k6, performance generally decreases on ALL and the three NE classes when dropping one of the five feature groups on line k7. One notable exception is an increase for ORG when feature group URL is dropped (line k4, 25.2%, the best performance for ORG of all runs). This is in line with our discussion of the same effect on IEER. The key take-away from our results on KDD-T is that piggyback features are again (as for IEER) significantly better than standard feature groups BASE and GAZ. Search engine based adaptation has an advantage of 9.2% compared to standard NER (lines k7 vs k1). An F1 below 45% may not yet be good enough for practical purposes. But even if additional work is necessary to boost the scores further, our model is an important step in this direction. The low scores for KDD-T are also partially due to our processing of the AMT data. Our selection procedure is biased towards short entities whereas CoNLL guidelines favor long NEs. We can address this by forcing AMT raters to be more consistent with the CoNLL guidelines in the future. We summarize the experimental results as follows. Piggyback features consistently improve NER for non-well-edited text when used together with standard NER features. While relative improve9KDD-D F1 values were about 1% higher than for KDD-T. 972 ment due to piggyback features increases as outof-domain data become more different from the indomain training set, performance declines in absolute terms from .930 (CoNLL) to .681 (IEER) and .438 (KDD-T). 8 Conclusion Robust cross-domain generalization is key in many NLP applications. In addition to surface and linguistic differences, differences in world knowledge pose a key challenge, e.g., the fact that Java refers to a location in one domain and to coffee in another. We have proposed a new way of addressing this challenge. Because search engines attempt to make optimal use of the context a word occurs in, hits shown to the user usually include other uses of the word in semantically similar snippets. These snippets can be used as a more robust and domain-independent representation of the context of the word/phrase than what is available in the input text. 
Our first contribution is that we have shown that this basic idea of using search engines for robust domain-independent feature representations yields solid results for one specific NLP problem, NER. Piggyback features achieved an improvement of F1 of about 10% compared to a baseline that uses BASE and GAZ features. Even in-domain, we were able to get a smaller, but still noticeable improvement of 4.2% due to piggyback features. These results are also important because there are many application domains with noisy text without reliable capitalization, e.g., automatically transcribed speech, tweets, SMS, social communities and blogs. Our second contribution is that we address a type of NER that is of particular importance: NER for web queries. The query is the main source of information about the user’s information need. Query analysis is important on the web because understanding the query, including the subtask of NER, is key for identifying the most relevant documents and the most relevant ads. NER for domains like Twitter and SMS has properties similar to web queries. A third contribution of this paper is the release of an annotated dataset for web query NER. We hope that this dataset will foster more research on crossdomain generalization and domain adaptation – in particular for NER – and the difficult problem of web query understanding. This paper is about cross-domain generalization. However, the general idea of using search to provide rich context information to NLP systems is applicable to a broad array of tasks. One of the main hurdles that NLP faces is that the single context a token occurs in is often not sufficient for reliable decisions, be they about attachment, disambiguation or higherorder semantic interpretation. Search makes dozens of additional relevant contexts available and can thus overcome this bottleneck. In the future, we hope to be able to show that other NLP tasks can also benefit from such an enriched context representation. Future work. We used a web search engine in the experiments presented in this paper. Latencies when using one of the three main commercial search engines Bing, Google and Yahoo! in our scenario range from 0.2 to 0.5 seconds per token. These execution times are prohibitive for many applications. Search engines also tend to limit the number of queries per user and IP address. To gain widespread acceptance of the piggyback idea of using search results for robust NLP, we therefore must explore alternatives to search engines. In future work, we plan to develop more efficient methods of using search results for cross-domain generalization to avoid the cost of issuing a large number of queries to search engines. Caching will be of obvious importance in this regard. Another avenue we are pursuing is to build a specialized search system for our application in a way similar to Cafarella and Etzioni (2005). While we need good coverage of a large variety of domains for our approach to work, it is not clear how big the index of the search engine must be for good performance. Conceivably, collections much smaller than those indexed by major search engines (e.g., the Google 1T 5-gram corpus or ClueWeb09) might give rise to features with similar robustness properties. It is important to keep in mind, however, that one of the key factors a search engine allows us to leverage is the notion of relevance which might not be always possible to model as accurately with other data. Acknowledgments. This research was funded by a Google Research Award. 
We would like to thank Amir Najmi, John Blitzer, Rich´ard Farkas, Florian Laws, Slav Petrov and the anonymous reviewers for their comments. 973 References Rie Kubota Ando. 2004. Exploiting unannotated corpora for tagging and chunking. In ACL, Companion Volume, pages 142–145. Cory Barr, Rosie Jones, and Moira Regelson. 2008. The linguistic structure of English web-search queries. In EMNLP, pages 1021–1030. Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. 2010. A theory of learning from different domains. Machine Learning, 79:151–175. Thorsten Brants. 2000. TnT – A statistical part-ofspeech tagger. In ANLP, pages 224–231. Michael J. Cafarella and Oren Etzioni. 2005. A search engine for natural language applications. In WWW, pages 442–452. Nancy A. Chinchor, editor. 1998. Proceedings of the Seventh Message Understanding Conference. NIST. Massimiliano Ciaramita and Yasemin Altun. 2006. Broad-coverage sense disambiguation and information extraction with a supersense sequence tagger. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 594–602. Jacob Cohen. 1960. A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement, 20(1):37–46. Paul R. Cohen. 1995. Empirical methods for artificial intelligence. MIT Press, Cambridge, MA, USA. Michael Collins. 2002. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In EMNLP, pages 1–8. Silviu Cucerzan. 2007. Large-scale named entity disambiguation based on Wikipedia data. In EMNLPCoNLL, pages 708–716. Hamish Cunningham, Diana Maynard, Kalina Bontcheva, and Valentin Tablan. 2002. GATE: A framework and graphical development environment for robust NLP tools and applications. In ACL, pages 168–175. Rich´ard Farkas, Gy¨orgy Szarvas, and R´obert Orm´andi. 2007. Improving a state-of-the-art named entity recognition system using the world wide web. In Industrial Conference on Data Mining, pages 163–172. Tim Finin, Will Murnane, Anand Karandikar, Nicholas Keller, Justin Martineau, and Mark Dredze. 2010. Annotating named entities in twitter data with crowdsourcing. In NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk, pages 80–88. Yoav Freund and Robert E. Schapire. 1999. Large margin classification using the perceptron algorithm. Machine Learning, 37:277–296. Atsushi Fujita and Satoshi Sato. 2008. A probabilistic model for measuring grammaticality and similarity of automatically generated paraphrases of predicate phrases. In COLING, pages 225–232. Gregory Grefenstette, Yan Qu, and David A. Evans. 2004. Mining the web to create a language model for mapping between English names and phrases and Japanese. In Web Intelligence, pages 110–116. Jiafeng Guo, Gu Xu, Xueqi Cheng, and Hang Li. 2009. Named entity recognition in query. In SIGIR, pages 267–274. Jun’ichi Kazama and Kentaro Torisawa. 2007. Exploiting Wikipedia as external knowledge for named entity recognition. In EMNLP-CoNLL, pages 698–707. Mirella Lapata and Frank Keller. 2005. Web-based models for natural language processing. ACM Transactions on Speech and Language Processing, 2(1):1–31. Nolan Lawson, Kevin Eustice, Mike Perkowitz, and Meliha Yetisgen-Yildiz. 2010. Annotating large email datasets for named entity recognition with mechanical turk. In NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk, pages 71–79. 
Ying Li, Zijian Zheng, and Honghua (Kathy) Dai. 2005. KDD CUP 2005 report: Facing a great challenge. SIGKDD Explorations Newsletter, 7:91–99. Dekang Lin and Xiaoyun Wu. 2009. Phrase clustering for discriminative learning. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 1030–1038. Eric W. Noreen. 1989. Computer-Intensive Methods for Testing Hypotheses : An Introduction. WileyInterscience. Thierry Poibeau and Leila Kosseim. 2000. Proper name extraction from non-journalistic texts. In CLIN, pages 144–157. J. J. Rocchio. 1971. Relevance feedback in information retrieval. In Gerard Salton, editor, The Smart Retrieval System – Experiments in Automatic Document Processing, pages 313–323. Prentice-Hall. Mehran Sahami and Timothy D. Heilman. 2006. A webbased kernel function for measuring the similarity of short text snippets. In WWW, pages 377–386. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Languageindependent named entity recognition. In Proceedings of CoNLL 2003 Shared Task, pages 142–147. Helmut Schmid. 1994. Probabilistic part-of-speech tagging using decision trees. In Proceedings of the International Conference on New Methods in Language Processing, pages 44–49. 974 Fei Sha and Fernando Pereira. 2003. Shallow parsing with conditional random fields. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1, pages 134–141. Joseph Turian, Lev-Arie Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In ACL, pages 384–394. Peter D. Turney. 2002. Thumbs up or thumbs down? semantic orientation applied to unsupervised classification of reviews. In ACL, pages 417–424. Jinxi Xu and W. Bruce Croft. 1996. Query expansion using local and global document analysis. In SIGIR, pages 4–11. Alexander Yates, Stefan Schoenmackers, and Oren Etzioni. 2006. Detecting parser errors using web-based semantic filters. In EMNLP, pages 27–34. Meliha Yetisgen-Yildiz, Imre Solti, Fei Xia, and Scott Russell Halgrim. 2010. Preliminary experience with Amazon’s Mechanical Turk for annotating medical named entities. In NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk, pages 180–183. 975
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 976–986, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Template-Based Information Extraction without the Templates Nathanael Chambers and Dan Jurafsky Department of Computer Science Stanford University {natec,jurafsky}@stanford.edu Abstract Standard algorithms for template-based information extraction (IE) require predefined template schemas, and often labeled data, to learn to extract their slot fillers (e.g., an embassy is the Target of a Bombing template). This paper describes an approach to template-based IE that removes this requirement and performs extraction without knowing the template structure in advance. Our algorithm instead learns the template structure automatically from raw text, inducing template schemas as sets of linked events (e.g., bombings include detonate, set off, and destroy events) associated with semantic roles. We also solve the standard IE task, using the induced syntactic patterns to extract role fillers from specific documents. We evaluate on the MUC-4 terrorism dataset and show that we induce template structure very similar to handcreated gold structure, and we extract role fillers with an F1 score of .40, approaching the performance of algorithms that require full knowledge of the templates. 1 Introduction A template defines a specific type of event (e.g., a bombing) with a set of semantic roles (or slots) for the typical entities involved in such an event (e.g., perpetrator, target, instrument). In contrast to work in relation discovery that focuses on learning atomic facts (Banko et al., 2007a; Carlson et al., 2010), templates can extract a richer representation of a particular domain. However, unlike relation discovery, most template-based IE approaches assume foreknowledge of the domain’s templates. Very little work addresses how to learn the template structure itself. Our goal in this paper is to perform the standard template filling task, but to first automatically induce the templates from an unlabeled corpus. There are many ways to represent events, ranging from role-based representations such as frames (Baker et al., 1998) to sequential events in scripts (Schank and Abelson, 1977) and narrative schemas (Chambers and Jurafsky, 2009; Kasch and Oates, 2010). Our approach learns narrative-like knowledge in the form of IE templates; we learn sets of related events and semantic roles, as shown in this sample output from our system: Bombing Template {detonate, blow up, plant, explode, defuse, destroy} Perpetrator: Person who detonates, plants, blows up Instrument: Object that is planted, detonated, defused Target: Object that is destroyed, is blown up A semantic role, such as target, is a cluster of syntactic functions of the template’s event words (e.g., the objects of detonate and explode). Our goal is to characterize a domain by learning this template structure completely automatically. We learn templates by first clustering event words based on their proximity in a training corpus. We then use a novel approach to role induction that clusters the syntactic functions of these events based on selectional preferences and coreferring arguments. The induced roles are template-specific (e.g., perpetrator), not universal (e.g., agent or patient) or verb-specific. 
After learning a domain’s template schemas, we perform the standard IE task of role filling from individual documents, for example: Perpetrator: guerrillas Instrument: dynamite Target: embassy 976 This extraction stage identifies entities using the learned syntactic functions of our roles. We evaluate on the MUC-4 terrorism corpus with results approaching those of supervised systems. The core of this paper focuses on how to characterize a domain-specific corpus by learning rich template structure. We describe how to first expand the small corpus’ size, how to cluster its events, and finally how to induce semantic roles. Section 5 then describes the extraction algorithm, followed by evaluations against previous work in section 6 and 7. 2 Previous Work Many template extraction algorithms require full knowledge of the templates and labeled corpora, such as in rule-based systems (Chinchor et al., 1993; Rau et al., 1992) and modern supervised classifiers (Freitag, 1998; Chieu et al., 2003; Bunescu and Mooney, 2004; Patwardhan and Riloff, 2009). Classifiers rely on the labeled examples’ surrounding context for features such as nearby tokens, document position, syntax, named entities, semantic classes, and discourse relations (Maslennikov and Chua, 2007). Ji and Grishman (2008) also supplemented labeled with unlabeled data. Weakly supervised approaches remove some of the need for fully labeled data. Most still require the templates and their slots. One common approach is to begin with unlabeled, but clustered event-specific documents, and extract common word patterns as extractors (Riloff and Schmelzenbach, 1998; Sudo et al., 2003; Riloff et al., 2005; Patwardhan and Riloff, 2007). Filatova et al. (2006) integrate named entities into pattern learning (PERSON won) to approximate unknown semantic roles. Bootstrapping with seed examples of known slot fillers has been shown to be effective (Surdeanu et al., 2006; Yangarber et al., 2000). In contrast, this paper removes these data assumptions, learning instead from a corpus of unknown events and unclustered documents, without seed examples. Shinyama and Sekine (2006) describe an approach to template learning without labeled data. They present unrestricted relation discovery as a means of discovering relations in unlabeled documents, and extract their fillers. Central to the algorithm is collecting multiple documents describing the same exact event (e.g. Hurricane Ivan), and observing repeated word patterns across documents connecting the same proper nouns. Learned patterns represent binary relations, and they show how to construct tables of extracted entities for these relations. Our approach draws on this idea of using unlabeled documents to discover relations in text, and of defining semantic roles by sets of entities. However, the limitations to their approach are that (1) redundant documents about specific events are required, (2) relations are binary, and (3) only slots with named entities are learned. We will extend their work by showing how to learn without these assumptions, obviating the need for redundant documents, and learning templates with any type and any number of slots. Large-scale learning of scripts and narrative schemas also captures template-like knowledge from unlabeled text (Chambers and Jurafsky, 2008; Kasch and Oates, 2010). Scripts are sets of related event words and semantic roles learned by linking syntactic functions with coreferring arguments. 
While they learn interesting event structure, the structures are limited to frequent topics in a large corpus. We borrow ideas from this work as well, but our goal is to instead characterize a specific domain with limited data. Further, we are the first to apply this knowledge to the IE task of filling in template mentions in documents. In summary, our work extends previous work on unsupervised IE in a number of ways. We are the first to learn MUC-4 templates, and we are the first to extract entities without knowing how many templates exist, without examples of slot fillers, and without event-clustered documents. 3 The Domain and its Templates Our goal is to learn the general event structure of a domain, and then extract the instances of each learned event. In order to measure performance in both tasks (learning structure and extracting instances), we use the terrorism corpus of MUC-4 (Sundheim, 1991) as our target domain. This corpus was chosen because it is annotated with templates that describe all of the entities involved in each event. An example snippet from a bombing document is given here: 977 The terrorists used explosives against the town hall. El Comercio reported that alleged Shining Path members also attacked public facilities in huarpacha, Ambo, tomayquichua, and kichki. Municipal official Sergio Horna was seriously wounded in an explosion in Ambo. The entities from this document fill the following slots in a MUC-4 bombing template. Perp: Shining Path members Victim: Sergio Horna Target: public facilities Instrument: explosives We focus on these four string-based slots1 from the MUC-4 corpus, as is standard in this task. The corpus consists of 1300 documents, 733 of which are labeled with at least one template. There are six types of templates, but only four are modestly frequent: bombing (208 docs), kidnap (83 docs), attack (479 docs), and arson (40 docs). 567 documents do not have any templates. Our learning algorithm does not know which documents contain (or do not contain) which templates. After learning event words that represent templates, we induce their slots, not knowing a priori how many there are, and then fill them in by extracting entities as in the standard task. In our example above, the three bold verbs (use, attack, wound) indicate the Bombing template, and their syntactic arguments fill its slots. 4 Learning Templates from Raw Text Our goal is to learn templates that characterize a domain as described in unclustered, unlabeled documents. This presents a two-fold problem to the learner: it does not know how many events exist, and it does not know which documents describe which event (some may describe multiple events). We approach this problem with a three step process: (1) cluster the domain’s event patterns to approximate the template topics, (2) build a new corpus specific to each cluster by retrieving documents from a larger unrelated corpus, (3) induce each template’s slots using its new (larger) corpus of documents. 4.1 Clustering Events to Learn Templates We cluster event patterns to create templates. An event pattern is either (1) a verb, (2) a noun in Word1There are two Perpetrator slots in MUC-4: Organization and Individual. We consider their union as a single slot. Net under the Event synset, or (3) a verb and the head word of its syntactic object. Examples of each include (1) ‘explode’, (2) ‘explosion’, and (3) ‘explode:bomb’. We also tag the corpus with an NER system and allow patterns to include named entity types, e.g., ‘kidnap:PERSON’. 
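A hedged sketch of the event-pattern extraction just described: for one predicate we emit the bare verb, the verb:object-head pattern, and the verb:NE-type pattern (event nouns from WordNet are omitted for brevity). The input representation, taken from a parser and NER tagger, is an assumption for illustration.

```python
def event_patterns(verb, obj_head=None, obj_ne_type=None):
    # Emit the pattern forms used in this paper for one predicate:
    # the verb itself, verb:object-head, and verb:NE-type of the object.
    patterns = [verb]
    if obj_head:
        patterns.append(f"{verb}:{obj_head}")
    if obj_ne_type:
        patterns.append(f"{verb}:{obj_ne_type}")
    return patterns

print(event_patterns("explode", obj_head="bomb"))
# ['explode', 'explode:bomb']
print(event_patterns("kidnap", obj_head="mayor", obj_ne_type="PERSON"))
# ['kidnap', 'kidnap:mayor', 'kidnap:PERSON']
```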
These patterns are crucially needed later to learn a template's slots. However, we first need an algorithm to cluster these patterns to learn the domain's core events. We consider two unsupervised algorithms: Latent Dirichlet Allocation (LDA) (Blei et al., 2003), and agglomerative clustering based on word distance.
4.1.1 LDA for Unknown Data
LDA is a probabilistic model that treats documents as mixtures of topics. It learns topics as discrete distributions (multinomials) over the event patterns, and thus meets our needs as it clusters patterns based on co-occurrence in documents. The algorithm requires the number of topics to be known ahead of time, but in practice this number is set relatively high and the resulting topics are still useful. Our best performing LDA model used 200 topics. We had mixed success with LDA though, and ultimately found our next approach performed slightly better on the document classification evaluation.
4.1.2 Clustering on Event Distance
Agglomerative clustering does not require foreknowledge of the templates, but its success relies on how event pattern similarity is determined. Ideally, we want to learn that detonate and destroy belong in the same cluster representing a bombing. Vector-based approaches are often adopted to represent words as feature vectors and compute their distance with cosine similarity. Unfortunately, these approaches typically learn clusters of synonymous words that can miss detonate and destroy. Our goal is to instead capture world knowledge of co-occurring events. We thus adopt an assumption that closeness in the world is reflected by closeness in a text's discourse. We hypothesize that two patterns are related if they occur near each other in a document more often than chance. Let g(wi, wj) be the distance between two events (1 if in the same sentence, 2 in neighboring, etc.). Let Cdist(wi, wj) be the distance-weighted frequency of two events occurring together:

C_{dist}(w_i, w_j) = \sum_{d \in D} \sum_{w_i, w_j \in d} \left( 1 - \log_4 g(w_i, w_j) \right)    (1)

where d is a document in the set of all documents D. The base 4 logarithm discounts neighboring sentences by 0.5 and within the same sentence scores 1. Using this definition of distance, pointwise mutual information measures our similarity of two events:

pmi(w_i, w_j) = \frac{P_{dist}(w_i, w_j)}{P(w_i) P(w_j)}    (2)

P(w_i) = \frac{C(w_i)}{\sum_j C(w_j)}    (3)

P_{dist}(w_i, w_j) = \frac{C_{dist}(w_i, w_j)}{\sum_k \sum_l C_{dist}(w_k, w_l)}    (4)

We run agglomerative clustering with pmi over all event patterns. Merging decisions use the average link score between all new links across two clusters. As with all clustering algorithms, a stopping criterion is needed. We continue merging clusters until any single cluster grows beyond m patterns. We briefly inspected the clustering process and chose m = 40 to prevent learned scenarios from intuitively growing too large and ambiguous. Post-evaluation analysis shows that this value has wide flexibility. For example, the Kidnap and Arson clusters are unchanged in 30 < m < 80, and Bombing unchanged in 30 < m < 50. Figure 1 shows 3 clusters (of 77 learned) that characterize the main template types.

kidnap: kidnap, kidnap:PER, abduct, release, kidnapping, ransom, robbery, registration
bombing: explode, blow up, locate, place:bomb, detonate, damage, explosion, cause, damage, ...
attack: kill, shoot down, down, kill:civilian, kill:PER, kill:soldier, kill:member, killing, shoot:PER, wave, ...
arson: burn, search, burning, clip, collaborate, ...
Figure 1: The 4 clusters mapped to MUC-4 templates.
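Equations (1)–(4) translate directly into code. The sketch below assumes each document is given as a list of (event pattern, sentence index) pairs and shows only the distance-weighted PMI computation, not the agglomerative merging itself; dropping pairs whose distance weight is non-positive is our assumption for far-apart patterns.

```python
import math
from collections import Counter
from itertools import combinations

def pmi_scores(docs):
    # docs: list of documents, each a list of (pattern, sentence_index) pairs.
    c = Counter()       # C(w): raw pattern counts, used in eq. (3)
    cdist = Counter()   # Cdist(wi, wj): distance-weighted co-occurrence, eq. (1)
    for doc in docs:
        for w, _ in doc:
            c[w] += 1
        for (wi, si), (wj, sj) in combinations(doc, 2):
            g = abs(si - sj) + 1           # 1 = same sentence, 2 = neighboring, ...
            weight = 1 - math.log(g, 4)    # same sentence scores 1, neighbors 0.5
            if weight > 0:                 # far-apart pairs contribute nothing (assumption)
                cdist[tuple(sorted((wi, wj)))] += weight
    total_c, total_cd = sum(c.values()), sum(cdist.values())
    pmi = {}
    for (wi, wj), cd in cdist.items():
        p_dist = cd / total_cd                                            # eq. (4)
        pmi[(wi, wj)] = p_dist / ((c[wi] / total_c) * (c[wj] / total_c))  # eq. (2)
    return pmi

doc = [("explode", 0), ("detonate", 0), ("damage", 1), ("arrest", 5)]
print(pmi_scores([doc]))
```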
4.2 Information Retrieval for Templates Learning a domain often suffers from a lack of training data. The previous section clustered events from the MUC-4 corpus, but its 1300 documents do not provide enough examples of verbs and argument counts to further learn the semantic roles in each cluster. Our solution is to assemble a larger IRcorpus of documents for each cluster. For example, MUC-4 labels 83 documents with Kidnap, but our learned cluster (kidnap, abduct, release, ...) retrieved 3954 documents from a general corpus. We use the Associated Press and New York Times sections of the Gigaword Corpus (Graff, 2002) as our general corpus. These sections include approximately 3.5 million news articles spanning 12 years. Our retrieval algorithm retrieves documents that score highly with a cluster’s tokens. The document score is defined by two common metrics: word match, and word coverage. A document’s match score is defined as the average number of times the words in cluster c appear in document d: avgm(d, c) = P w∈c P t∈d 1{w = t} |c| (5) We define word coverage as the number of seen cluster words. Coverage penalizes documents that score highly by repeating a single cluster word a lot. We only score a document if its coverage, cvg(d, c), is at least 3 words (or less for tiny clusters): ir(d, c) = avgm(d, c) if cvg(d, c) > min(3, |c|/4) 0 otherwise A document d is retrieved for a cluster c if ir(d, c) > 0.4. Finally, we emphasize precision by pruning away 50% of a cluster’s retrieved documents that are farthest in distance from the mean document of the retrieved set. Distance is the cosine similarity between bag-of-words vector representations. The confidence value of 0.4 was chosen from a manual inspection among a single cluster’s retrieved documents. Pruning 50% was arbitrarily chosen to improve precision, and we did not experiment with other quantities. A search for optimum parameter values may lead to better results. 4.3 Inducing Semantic Roles (Slots) Having successfully clustered event words and retrieved an IR-corpus for each cluster, we now address the problem of inducing semantic roles. Our learned roles will then extract entities in the next section and we will evaluate their per-role accuracy. Most work on unsupervised role induction focuses on learning verb-specific roles, starting with seed examples (Swier and Stevenson, 2004; He and 979 Gildea, 2006) and/or knowing the number of roles (Grenager and Manning, 2006; Lang and Lapata, 2010). Our previous work (Chambers and Jurafsky, 2009) learned situation-specific roles over narrative schemas, similar to frame roles in FrameNet (Baker et al., 1998). Schemas link the syntactic relations of verbs by clustering them based on observing coreferring arguments in those positions. This paper extends this intuition by introducing a new vectorbased approach to coreference similarity. 4.3.1 Syntactic Relations as Roles We learn the roles of cluster C by clustering the syntactic relations RC of its words. Consider the following example: C = {go off, explode, set off, damage, destroy} RC = {go off:s, go off:p in, explode:s, set off:s} where verb:s is the verb’s subject, :o the object, and p in a preposition. We ideally want to cluster RC as: bomb = {go off:s, explode:s, set off:o, destroy:s} suspect = {set off:s} target = {go off:p in, destroy:o} We want to cluster all subjects, objects, and prepositions. Passive voice is normalized to active2. We adopt two views of relation similarity: coreferring arguments and selectional preferences. 
Chambers and Jurafsky (2008) observed that coreferring arguments suggest a semantic relation between two predicates. In the sentence, he ran and then he fell, the subjects of run and fall corefer, and so they likely belong to the same scenario-specific semantic role. We applied this idea to a new vector similarity framework. We represent a relation as a vector of all relations with which their arguments coreferred. For instance, arguments of the relation go off:s were seen coreferring with mentions in plant:o, set off:o and injure:s. We represent go off:s as a vector of these relation counts, calling this its coref vector representation. Selectional preferences (SPs) are also useful in measuring similarity (Erk and Pado, 2008). A relation can be represented as a vector of its observed arguments during training. The SPs for go off:s in our data include {bomb, device, charge, explosion}. We measure similarity using cosine similarity between the vectors in both approaches. However, 2We use the Stanford Parser at nlp.stanford.edu/software coreference and SPs measure different types of similarity. Coreference is a looser narrative similarity (bombings cause injuries), while SPs capture synonymy (plant and place have similar arguments). We observed that many narrative relations are not synonymous, and vice versa. We thus take the maximum of either cosine score as our final similarity metric between two relations. We then back off to the average of the two cosine scores if the max is not confident (less than 0.7); the average penalizes the pair. We chose the value of 0.7 from a grid search to optimize extraction results on the training set. 4.3.2 Clustering Syntactic Functions We use agglomerative clustering with the above pairwise similarity metric. Cluster similarity is the average link score over all new links crossing two clusters. We include the following sparsity penalty r(ca, cb) if there are too few links between clusters ca and cb. score(ca, cb) = X wi∈ca X wj∈cb sim(wi, wj)∗r(ca, cb) (6) r(ca, cb) = P wi∈ca P wj∈cb 1{sim(wi, wj) > 0} P wi∈ca P wj∈cb 1 (7) This penalizes clusters from merging when they share only a few high scoring edges. Clustering stops when the merged cluster scores drop below a threshold optimized to extraction performance on the training data. We also begin with two assumptions about syntactic functions and semantic roles. The first assumes that the subject and object of a verb carry different semantic roles. For instance, the subject of sell fills a different role (Seller) than the object (Good). The second assumption is that each semantic role has a high-level entity type. For instance, the subject of sell is a Person or Organization, and the object is a Physical Object. We implement the first assumption as a constraint in the clustering algorithm, preventing two clusters from merging if their union contains the same verb’s subject and object. We implement the second assumption by automatically labeling each syntactic function with a role type based on its observed arguments. The role types are broad general classes: Person/Org, Physical Object, or Other. 
A syntactic function is labeled as a 980 Bombing Template (MUC-4) Perpetrator Person/Org who detonates, blows up, plants, hurls, stages, is detained, is suspected, is blamed on, launches Instrument A physical object that is exploded, explodes, is hurled, causes, goes off, is planted, damages, is set off, is defused Target A physical object that is damaged, is destroyed, is exploded at, is damaged, is thrown at, is hit, is struck Police Person/Org who raids, questions, discovers, investigates, defuses, arrests N/A A physical object that is blown up, destroys Attack/Shooting Template (MUC-4) Perpetrator Person/Org who assassinates, patrols, ambushes, raids, shoots, is linked to Victim Person/Org who is assassinated, is toppled, is gunned down, is executed, is evacuated Target Person/Org who is hit, is struck, is downed, is set fire to, is blown up, surrounded Instrument A physical object that is fired, injures, downs, is set off, is exploded Kidnap Template (MUC-4) Perpetrator Person/Org who releases, abducts, kidnaps, ambushes, holds, forces, captures, is imprisoned, frees Target Person/Org who is kidnapped, is released, is freed, escapes, disappears, travels, is harmed, is threatened Police Person/Org who rules out, negotiates, condemns, is pressured, finds, arrests, combs Weapons Smuggling Template (NEW) Perpetrator Person/Org who smuggles, is seized from, is captured, is detained Police Person/Org who raids, seizes, captures, confiscates, detains, investigates Instrument A physical object that is smuggled, is seized, is confiscated, is transported Election Template (NEW) Voter Person/Org who chooses, is intimidated, favors, is appealed to, turns out Government Person/Org who authorizes, is chosen, blames, authorizes, denies Candidate Person/Org who resigns, unites, advocates, manipulates, pledges, is blamed Figure 2: Five learned example templates. All knowledge except the template/role names (e.g., ‘Victim’) is learned. class if 20% of its arguments appear under the corresponding WordNet synset3, or if the NER system labels them as such. Once labeled by type, we separately cluster the syntactic functions for each role type. For instance, Person functions are clustered separate from Physical Object functions. Figure 2 shows some of the resulting roles. Finally, since agglomerative clustering makes hard decisions, related events to a template may have been excluded in the initial event clustering stage. To address this problem, we identify the 200 nearby events to each event cluster. These are simply the top scoring event patterns with the cluster’s original events. We add their syntactic functions to their best matching roles. This expands the coverage of each learned role. Varying the 200 amount does not lead to wide variation in extraction performance. Once induced, the roles are evaluated by their entity extraction performance in Section 5. 4.4 Template Evaluation We now compare our learned templates to those hand-created by human annotators for the MUC-4 terrorism corpus. The corpus contains 6 template 3Physical objects are defined as non-person physical objects Bombing Kidnap Attack Arson Perpetrator x x x x Victim x x x x Target x x x Instrument x x Figure 3: Slots in the hand-crafted MUC-4 templates. types, but two of them occur in only 4 and 14 of the 1300 training documents. We thus only evaluate the 4 main templates (bombing, kidnapping, attack, and arson). The gold slots are shown in figure 3. 
We evaluate the four learned templates that score highest in the document classification evaluation (to be described in section 5.1), aligned with their MUC-4 types. Figure 2 shows three of our four templates, and two brand new ones that our algorithm learned. Of the four templates, we learned 12 of the 13 semantic roles as created for MUC. In addition, we learned a new role not in MUC for bombings, kidnappings, and arson: the Police or Authorities role. The annotators chose not to include this in their labeling, but this knowledge is clearly relevant when understanding such events, so we consider it correct. There is one additional Bombing and one Arson role that does not align with MUC-4, marked incorrect. 981 We thus report 92% slot recall, and precision as 14 of 16 (88%) learned slots. We only measure agreement with the MUC template schemas, but our system learns other events as well. We show two such examples in figure 2: the Weapons Smuggling and Election Templates. 5 Information Extraction: Slot Filling We now present how to apply our learned templates to information extraction. This section will describe how to extract slot fillers using our templates, but without knowing which templates are correct. We could simply use a standard IE approach, for example, creating seed words for our new learned templates. But instead, we propose a new method that obviates the need for even a limited human labeling of seed sets. We consider each learned semantic role as a potential slot, and we extract slot fillers using the syntactic functions that were previously learned. Thus, the learned syntactic patterns (e.g., the subject of release) serve the dual purpose of both inducing the template slots, and extracting appropriate slot fillers from text. 5.1 Document Classification A document is labeled for a template if two different conditions are met: (1) it contains at least one trigger phrase, and (2) its average per-token conditional probability meets a strict threshold. Both conditions require a definition of the conditional probability of a template given a token. The conditional is defined as the token’s importance relative to its uniqueness across all templates. This is not the usual conditional probability definition as IR-corpora are different sizes. P(t|w) = PIRt(w) P s∈T PIRs(w) (8) where PIRt(w) is the probability of pattern w in the IR-corpus of template t. PIRt(w) = Ct(w) P v Ct(v) (9) where Ct(w) is the number of times word w appears in the IR-corpus of template t. A template’s trigger words are defined as words satisfying P(t|w) > 0.2. Kidnap Bomb Attack Arson Precision .64 .83 .66 .30 Recall .54 .63 .35 1.0 F1 .58 .72 .46 .46 Figure 4: Document classification results on test. Trigger phrases are thus template-specific patterns that are highly indicative of that template. After identifying triggers, we use the above definition to score a document with a template. A document is labeled with a template if it contains at least one trigger, and its average word probability is greater than a parameter optimized on the training set. A document can be (and often is) labeled with multiple templates. Finally, we label the sentences that contain triggers and use them for extraction in section 5.2. 5.1.1 Experiment: Document Classification The MUC-4 corpus links templates to documents, allowing us to evaluate our document labels. We treat each link as a gold label (kidnap, bomb, or attack) for that document, and documents can have multiple labels. 
Our learned clusters naturally do not have MUC labels, so we report results on the four clusters that score highest with each label. Figure 4 shows the document classification scores. The bombing template performs best with an F1 score of .72. Arson occurs very few times, and Attack is lower because it is essentially an agglomeration of diverse events (discussed later). 5.2 Entity Extraction Once documents are labeled with templates, we next extract entities into the template slots. Extraction occurs in the trigger sentences from the previous section. The extraction process is two-fold: 1. Extract all NPs that are arguments of patterns in the template’s induced roles. 2. Extract NPs whose heads are observed frequently with one of the roles (e.g., ‘bomb’ is seen with Instrument relations in figure 2). Take the following MUC-4 sentence as an example: The two bombs were planted with the exclusive purpose of intimidating the owners of... 982 The verb plant is in our learned bombing cluster, so step (1) will extract its passive subject bombs and map it to the correct instrument role (see figure 2). The human target, owners, is missed because intimidate was not learned. However, if owner is in the selectional preferences of the learned ‘human target’ role, step (2) correctly extracts it into that role. These are two different, but complementary, views of semantic roles. The first is that a role is defined by the set of syntactic relations that describe it. Thus, we find all role relations and save their arguments (pattern extraction). The second view is that a role is defined by the arguments that fill it. Thus, we extract all arguments that filled a role in training, regardless of their current syntactic environment. Finally, we filter extractions whose WordNet or named entity label does not match the learned slot’s type (e.g., a Location does not match a Person). 6 Standard Evaluation We trained on the 1300 documents in the MUC-4 corpus and tested on the 200 document TST3 and TST4 test set. We evaluate the four string-based slots: perpetrator, physical target, human target, and instrument. We merge MUC’s two perpetrator slots (individuals and orgs) into one gold Perpetrator slot. As in Patwardhan and Riloff (2007; 2009), we ignore missed optional slots in computing recall. We induced clusters in training, performed IR, and induced the slots. We then extracted entities from the test documents as described in section 5.2. The standard evaluation for this corpus is to report the F1 score for slot type accuracy, ignoring the template type. For instance, a perpetrator of a bombing and a perpetrator of an attack are treated the same. This allows supervised classifiers to train on all perpetrators at once, rather than template-specific learners. Although not ideal for our learning goals, we report it for comparison against previous work. Several supervised approaches have presented results on MUC-4, but unfortunately we cannot compare against them. Maslennikov and Chua (2006; 2007) evaluated a random subset of test (they report .60 and .63 F1), and Xiao et al. (2004) did not evaluate all slot types (they report .57 F1). Figure 5 thus shows our results with previous work that is comparable: the fully supervised and P R F1 Patwardhan & Riloff-09 : Supervised 48 59 53 Patwardhan & Riloff-07 : Weak-Sup 42 48 44 Our Results (1 attack) 48 25 33 Our Results (5 attack) 44 36 40 Figure 5: MUC-4 extraction, ignoring template type. 
weakly supervised approaches of Patwardhan and Riloff (2009; 2007). We give two numbers for our system: mapping one learned template to Attack, and mapping five. Our learned templates for Attack have a different granularity than MUC-4. Rather than one broad Attack type, we learn several: Shooting, Murder, Coup, General Injury, and Pipeline Attack. We see these subtypes as strengths of our algorithm, but it misses the MUC-4 granularity of Attack. We thus show results when we apply the best five learned templates to Attack, rather than just one. The final F1 with these Attack subtypes is .40. Our precision is as good as (and our F1 score near) two algorithms that require knowledge of the templates and/or labeled data. Our algorithm instead learned this knowledge without such supervision.

7 Specific Evaluation In order to more precisely evaluate each learned template, we also evaluated per-template performance. Instead of merging all slots across all template types, we score the slots within each template type. This is a stricter evaluation than Section 6; for example, bombing victims assigned to attacks were previously deemed correct4. Figure 6 gives our results.

F1 Score: Kidnap .53, Bomb .43, Arson .42, Attack .16 / .25
Figure 6: Performance of individual templates. Attack compares our 1 vs 5 best templates.

Three of the four templates score at or above .42 F1, showing that our lower score from the previous section is mainly due to the Attack template. Arson also unexpectedly scored well. It only occurs in 40 documents overall, suggesting our algorithm works with little evidence. Per-template performance is good, and our .40 overall score from the previous section illustrates that we perform quite well in comparison to the .44–.53 range of weakly and fully supervised results. These evaluations use the standard TST3 and TST4 test sets, including the documents that are not labeled with any templates. 74 of the 200 test documents are unlabeled. In order to determine where the system's false positives originate, we also measure performance only on the 126 test documents that have at least one template. Figure 7 presents the results on this subset.

Precision Recall F1: Kidnap .82 .47 .60 (+.07); Bomb .60 .36 .45 (+.02); Arson 1.0 .29 .44 (+.02); Attack .36 .09 .15 (0.0)
Figure 7: Performance of each template type, but only evaluated on documents labeled with each type. All others are removed from test. The parentheses indicate F1 gain over evaluating on all test documents (figure 6).

Kidnap improves most significantly in F1 score (7 F1 points absolute), but the others only change slightly. Most of the false positives in the system thus do not originate from the unlabeled documents (the 74 unlabeled), but rather from extracting incorrect entities from correctly identified documents (the 126 labeled).

4We do not address the task of template instance identification (e.g., splitting two bombings into separate instances). This requires deeper discourse analysis not addressed by this paper.

8 Discussion Template-based IE systems typically assume knowledge of the domain and its templates. We began by showing that domain knowledge isn't necessarily required; we learned the MUC-4 template structure with surprising accuracy, learning new semantic roles and several new template structures. We are the first to our knowledge to automatically induce MUC-4 templates.
It is possible to take these learned slots and use a previous approach to IE (such as seed-based bootstrapping), but we presented an algorithm that instead uses our learned syntactic patterns. We achieved results with comparable precision, and an F1 score of .40 that approaches prior algorithms that rely on hand-crafted knowledge. The extraction results are encouraging, but the template induction itself is a central contribution of this work. Knowledge induction plays an important role in moving to new domains and assisting users who may not know what a corpus contains. Recent work in Open IE learns atomic relations (Banko et al., 2007b), but little work focuses on structured scenarios. We learned more templates than just the main MUC-4 templates. A user who seeks to know what information is in a body of text would instantly recognize these as key templates, and could then extract the central entities. We hope to address in the future how the algorithm’s unsupervised nature hurts recall. Without labeled or seed examples, it does not learn as many patterns or robust classifiers as supervised approaches. We will investigate new text sources and algorithms to try and capture more knowledge. The final experiment in figure 7 shows that perhaps new work should first focus on pattern learning and entity extraction, rather than document identification. Finally, while our pipelined approach (template induction with an IR stage followed by entity extraction) has the advantages of flexibility in development and efficiency, it does involve a number of parameters. We believe the IR parameters are quite robust, and did not heavily focus on improving this stage, but the two clustering steps during template induction require parameters to control stopping conditions and word filtering. While all learning algorithms require parameters, we think it is important for future work to focus on removing some of these to help the algorithm be even more robust to new domains and genres. Acknowledgments This work was supported by the National Science Foundation IIS-0811974, and this material is also based upon work supported by the Air Force Research Laboratory (AFRL) under prime contract no. FA8750-09-C-0181. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of the Air Force Research Laboratory (AFRL). Thanks to the Stanford NLP Group and reviewers for helpful suggestions. 984 References Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In Christian Boitet and Pete Whitelock, editors, ACL-98, pages 86– 90, San Francisco, California. Morgan Kaufmann Publishers. Michele Banko, Michael J Cafarella, Stephen Soderland, Matt Broadhead, and Oren Etzioni. 2007a. Learning relations from the web. In Proceedings of the International Joint Conferences on Artificial Intelligence (IJCAI). Michele Banko, Michael J Cafarella, Stephen Soderland, Matt Broadhead, and Oren Etzioni. 2007b. Open information extraction from the web. In Proceedings of the International Joint Conferences on Artificial Intelligence (IJCAI). David Blei, Andrew Ng, and Michael Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research. Razvan Bunescu and Raymond Mooney. 2004. Collective information extraction with relational markov networks. In Proceedings of the Association of Computational Linguistics (ACL), pages 438–445. Andrew Carlson, J. Betteridge, R.C. Wang, E.R. Hruschka Jr., and T.M. 
Mitchell. 2010. Coupled semisupervised learning for information extraction. In Proceedings of the ACM International Conference on Web Search and Data Mining (WSDM). Nathanael Chambers and Dan Jurafsky. 2008. Unsupervised learning of narrative event chains. In Proceedings of the Association of Computational Linguistics (ACL), Hawaii, USA. Nathanael Chambers and Dan Jurafsky. 2009. Unsupervised learning of narrative schemas and their participants. In Proceedings of the Association of Computational Linguistics (ACL), Columbus, Ohio. Hai Leong Chieu, Hwee Tou Ng, and Yoong Keok Lee. 2003. Closing the gap: Learning-based information extraction rivaling knowledge-engineering methods. In Proceedings of the Association of Computational Linguistics (ACL). Nancy Chinchor, David Lewis, and Lynette Hirschman. 1993. Evaluating message understanding systems: an analysis of the third message understanding conference. Computational Linguistics, 19:3:409–449. Katrin Erk and Sebastian Pado. 2008. A structured vector space model for word meaning in context. In Proceedings of the 2008 Conference on Empirical Methods on Natural Language Processing (EMNLP). Elena Filatova, Vasileios Hatzivassiloglou, and Kathleen McKeown. 2006. Automatic creation of domain templates. In Proceedings of the Association of Computational Linguistics (ACL). Dayne Freitag. 1998. Toward general-purpose learning for information extraction. In Proceedings of the Association of Computational Linguistics (ACL), pages 404–408. David Graff. 2002. English gigaword. Linguistic Data Consortium. Trond Grenager and Christopher D. Manning. 2006. Unsupervised discovery of a statistical verb lexicon. In Proceedings of the the 2006 Conference on Empirical Methods on Natural Language Processing (EMNLP). Shan He and Daniel Gildea. 2006. Self-training and co-training for semantic role labeling: Primary report. Technical Report 891, University of Rochester. Heng Ji and Ralph Grishman. 2008. Refining event extraction through unsupervised cross-document inference. In Proceedings of the Association of Computational Linguistics (ACL). Niels Kasch and Tim Oates. 2010. Mining script-like structures from the web. In Proceedings of NAACL HLT, pages 34–42. Joel Lang and Mirella Lapata. 2010. Unsupervised induction of semantic roles. In Proceedings of the North American Association of Computational Linguistics. Mstislav Maslennikov and Tat-Seng Chua. 2007. Automatic acquisition of domain knowledge for information extraction. In Proceedings of the Association of Computational Linguistics (ACL). Siddharth Patwardhan and Ellen Riloff. 2007. Effective ie with semantic affinity patterns and relevant regions. In Proceedings of the 2007 Conference on Empirical Methods on Natural Language Processing (EMNLP). Siddharth Patwardhan and Ellen Riloff. 2009. A unified model of phrasal and sentential evidence for information extraction. In Proceedings of the 2009 Conference on Empirical Methods on Natural Language Processing (EMNLP). Lisa Rau, George Krupka, Paul Jacobs, Ira Sider, and Lois Childs. 1992. Ge nltoolset: Muc-4 test results and analysis. In Proceedings of the Message Understanding Conference (MUC-4), pages 94–99. Ellen Riloff and Mark Schmelzenbach. 1998. An empirical approach to conceptual case frame acquisition. In Proceedings of the Sixth Workshop on Very Large Corpora. Ellen Riloff, Janyce Wiebe, and William Phillips. 2005. Exploiting subjectivity classification to improve information extraction. In Proceedings of AAAI-05. Roger C. Schank and Robert P. Abelson. 
1977. Scripts, plans, goals and understanding. Lawrence Erlbaum. Yusuke Shinyama and Satoshi Sekine. 2006. Preemptive ie using unrestricted relation discovery. In Proceedings of NAACL. 985 Kiyoshi Sudo, Satoshi Sekine, and Ralph Grishman. 2003. An improved extraction pattern representation model for automatic ie pattern acquisition. In Proceedings of the Association of Computational Linguistics (ACL), pages 224–231. Beth M. Sundheim. 1991. Third message understanding evaluation and conference (muc-3): Phase 1 status report. In Proceedings of the Message Understanding Conference. Mihai Surdeanu, Jordi Turmo, and Alicia Ageno. 2006. A hybrid approach for the acquisition of information extraction patterns. In Proceedings of the EACL Workshop on Adaptive Text Extraction and Mining. Robert S. Swier and Suzanne Stevenson. 2004. Unsupervised semantic role labelling. In Proceedings of the 2004 Conference on Empirical Methods on Natural Language Processing (EMNLP). Jing Xiao, Tat-Seng Chua, and Hang Cui. 2004. Cascading use of soft and hard matching pattern rules for weakly supervised information extraction. In Proceedings of the 20th International Conference on Computational Linguistics (COLING). Roman Yangarber, Ralph Grishman, Pasi Tapanainen, and Silja Huttunen. 2000. Automatic acquisition of domain knowledge for information extraction. In COLING, pages 940–946. 986
| 2011 | 98 |
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 987–996, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Classifying Arguments by Scheme Vanessa Wei Feng Department of Computer Science University of Toronto Toronto, ON, M5S 3G4, Canada [email protected] Graeme Hirst Department of Computer Science University of Toronto Toronto, ON, M5S 3G4, Canada [email protected] Abstract Argumentation schemes are structures or templates for various kinds of arguments. Given the text of an argument with premises and conclusion identified, we classify it as an instance of one of five common schemes, using features specific to each scheme. We achieve accuracies of 63–91% in one-against-others classification and 80–94% in pairwise classification (baseline = 50% in both cases). 1 Introduction We investigate a new task in the computational analysis of arguments: the classification of arguments by the argumentation schemes that they use. An argumentation scheme, informally, is a framework or structure for a (possibly defeasible) argument; we will give a more-formal definition and examples in Section 3. Our work is motivated by the need to determine the unstated (or implicitly stated) premises that arguments written in natural language normally draw on. Such premises are called enthymemes. For instance, the argument in Example 1 consists of one explicit premise (the first sentence) and a conclusion (the second sentence): Example 1 [Premise:] The survival of the entire world is at stake. [Conclusion:] The treaties and covenants aiming for a world free of nuclear arsenals and other conventional and biological weapons of mass destruction should be adhered to scrupulously by all nations. Another premise is left implicit — “Adhering to those treaties and covenants is a means of realizing survival of the entire world”. This proposition is an enthymeme of this argument. Our ultimate goal is to reconstruct the enthymemes in an argument, because determining these unstated assumptions is an integral part of understanding, supporting, or attacking an entire argument. Hence reconstructing enthymemes is an important problem in argument understanding. We believe that first identifying the particular argumentation scheme that an argument is using will help to bridge the gap between stated and unstated propositions in the argument, because each argumentation scheme is a relatively fixed “template” for arguing. That is, given an argument, we first classify its argumentation scheme; then we fit the stated propositions into the corresponding template; and from this we infer the enthymemes. In this paper, we present an argument scheme classification system as a stage following argument detection and proposition classification. First in Section 2 and Section 3, we introduce the background to our work, including related work in this field, the two core concepts of argumentation schemes and scheme-sets, and the Araucaria dataset. In Section 4 and Section 5 we present our classification system, including the overall framework, data preprocessing, feature selection, and the experimental setups. In the remaining section, we present the essential approaches to solve the leftover problems of this paper which we will study in our future work, and discuss the experimental results, and potential directions for future work. 
987 2 Related work Argumentation has not received a great deal of attention in computational linguistics, although it has been a topic of interest for many years. Cohen (1987) presented a computational model of argumentative discourse. Dick (1987; 1991a; 1991b) developed a representation for retrieval of judicial decisions by the structure of their legal argument — a necessity for finding legal precedents independent of their domain. However, at that time no corpus of arguments was available, so Dick’s system was purely theoretical. Recently, the Araucaria project at University of Dundee has developed a software tool for manual argument analysis, with a point-and-click interface for users to reconstruct and diagram an argument (Reed and Rowe, 2004; Rowe and Reed, 2008). The project also maintains an online repository, called AraucariaDB, of marked-up naturally occurring arguments collected by annotators worldwide, which can be used as an experimental corpus for automatic argumentation analysis (for details see Section 3.2). Recent work on argument interpretation includes that of George, Zukerman, and Nieman (2007), who interpret constructed-example arguments (not naturally occurring text) as Bayesian networks. Other contemporary research has looked at the automatic detection of arguments in text and the classification of premises and conclusions. The work closest to ours is perhaps that of Mochales and Moens (2007; 2008; 2009a; 2009b). In their early work, they focused on automatic detection of arguments in legal texts. With each sentence represented as a vector of shallow features, they trained a multinomial na¨ıve Bayes classifier and a maximum entropy model on the Araucaria corpus, and obtained a best average accuracy of 73.75%. In their follow-up work, they trained a support vector machine to further classify each argumentative clause into a premise or a conclusion, with an F1 measure of 68.12% and 74.07% respectively. In addition, their context-free grammar for argumentation structure parsing obtained around 60% accuracy. Our work is “downstream” from that of Mochales and Moens. Assuming the eventual success of their, or others’, research program on detecting and classifying the components of an argument, we seek to determine how the pieces fit together as an instance of an argumentation scheme. 3 Argumentation schemes, scheme-sets, and annotation 3.1 Definition and examples Argumentation schemes are structures or templates for forms of arguments. The arguments need not be deductive or inductive; on the contrary, most argumentation schemes are for presumptive or defeasible arguments (Walton and Reed, 2002). For example, argument from cause to effect is a commonly used scheme in everyday arguments. A list of such argumentation schemes is called a scheme-set. It has been shown that argumentation schemes are useful in evaluating common arguments as fallacious or not (van Eemeren and Grootendorst, 1992). In order to judge the weakness of an argument, a set of critical questions are asked according to the particular scheme that the argument is using, and the argument is regarded as valid if it matches all the requirements imposed by the scheme. Walton’s set of 65 argumentation schemes (Walton et al., 2008) is one of the best-developed schemesets in argumentation theory. The five schemes defined in Table 1 are the most commonly used ones, and they are the focus of the scheme classification system that we will describe in this paper. 
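To make the notion of a scheme "template" concrete, each scheme can be pictured as a small set of named slots. The sketch below is purely illustrative and is not part of the system described in this paper; the slot wording follows Walton's definitions as summarized in Table 1, and only two of the five schemes are written out.

```python
# Illustrative slot structure for two of the five schemes (wording from Table 1).
SCHEME_TEMPLATES = {
    "practical reasoning": {
        "major premise": "I have a goal G.",
        "minor premise": "Carrying out action A is a means to realize G.",
        "conclusion": "Therefore, I ought (practically speaking) to carry out "
                      "this action A.",
    },
    "argument from consequences": {
        "premise": "If A is (is not) brought about, good (bad) consequences "
                   "will (will not) plausibly occur.",
        "conclusion": "Therefore, A should (should not) be brought about.",
    },
}

def unstated_slots(scheme, filled_slots):
    """Slots of `scheme` that no stated proposition has been fitted to:
    these are the candidates for enthymeme reconstruction."""
    return [s for s in SCHEME_TEMPLATES[scheme] if s not in filled_slots]
```

On this view, classifying the scheme selects the template, and whatever slots remain unfilled after the stated propositions are mapped in are exactly where enthymeme reconstruction has to happen.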
3.2 Araucaria dataset One of the challenges for automatic argumentation analysis is that suitable annotated corpora are still very rare, in spite of work by many researchers. In the work described here, we use the Araucaria database1, an online repository of arguments, as our experimental dataset. Araucaria includes approximately 660 manually annotated arguments from various sources, such as newspapers and court cases, and keeps growing. Although Araucaria has several limitations, such as rather small size and low agreement among annotators2, it is nonetheless one of the best argumentative corpora available to date. 1http://araucaria.computing.dundee.ac.uk/doku.php# araucaria argumentation corpus 2The developers of Araucaria did not report on interannotator agreement, probably because some arguments are annotated by only one commentator. 988 Argument from example Premise: In this particular case, the individual a has property F and also property G. Conclusion: Therefore, generally, if x has property F, then it also has property G. Argument from cause to effect Major premise: Generally, if A occurs, then B will (might) occur. Minor premise: In this case, A occurs (might occur). Conclusion: Therefore, in this case, B will (might) occur. Practical reasoning Major premise: I have a goal G. Minor premise: Carrying out action A is a means to realize G. Conclusion: Therefore, I ought (practically speaking) to carry out this action A. Argument from consequences Premise: If A is (is not) brought about, good (bad) consequences will (will not) plausibly occur. Conclusion: Therefore, A should (should not) be brought about. Argument from verbal classification Individual premise: a has a particular property F. Classification premise: For all x, if x has property F, then x can be classified as having property G. Conclusion: Therefore, a has property G. Table 1: The five most frequent schemes and their definitions in Walton’s scheme-set. Arguments in Araucaria are annotated in a XMLbased format called “AML” (Argument Markup Language). A typical argument (see Example 2) consists of several AU nodes. Each AU node is a complete argument unit, composed of a conclusion proposition followed by optional premise proposition(s) in a linked or convergent structure. Each of these propositions can be further defined as a hierarchical collection of smaller AUs. INSCHEME is the particular scheme (e.g., “Argument from Consequences”) of which the current proposition is a member; enthymemes that have been made explicit are annotated as “missing = yes”. Example 2 Example of argument markup from Araucaria <TEXT>If we stop the free creation of art, we will stop the free viewing of art.</TEXT> <AU> <PROP identifier="C" missing="yes"> <PROPTEXT offset="-1"> The prohibition of the free creation of art should not be brought about.</PROPTEXT> <INSCHEME scheme="Argument from Consequences" schid="0" /> </PROP> <LA> <AU> <PROP identifier="A" missing="no"> <PROPTEXT offset="0"> If we stop the free creation of art, we will stop the free viewing of art.</PROPTEXT> <INSCHEME scheme="Argument from Consequences" schid="0" /> </PROP> </AU> <AU> <PROP identifier="B" missing="yes"> <PROPTEXT offset="-1"> The prohibition of free viewing of art is not acceptable.</PROPTEXT> <INSCHEME scheme="Argument from Consequences" schid="0" /> </PROP> </AU> </LA> </AU> There are three scheme-sets used in the annotations in Araucaria: Walton’s scheme-set, Katzav and Reed’s (2004) scheme-set, and Pollock’s (1995) scheme-set. 
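Since the AML markup in Example 2 is ordinary XML, it can be read with standard tooling. The following is a minimal sketch, not the preprocessing code used in this paper; it assumes Python's ElementTree and that the fragment passed in has a single root element, such as the outer AU of Example 2.

```python
import xml.etree.ElementTree as ET

def propositions(aml_fragment):
    """Yield (identifier, text, scheme, missing) for each PROP node in an AML
    fragment, e.g. the outer AU element of Example 2."""
    root = ET.fromstring(aml_fragment)
    for prop in root.iter("PROP"):
        proptext = prop.find("PROPTEXT")
        inscheme = prop.find("INSCHEME")
        yield (
            prop.get("identifier"),
            proptext.text if proptext is not None else "",
            inscheme.get("scheme") if inscheme is not None else None,
            prop.get("missing") == "yes",  # annotator-supplied enthymemes
        )
```

The missing flag is what later makes it possible to discard annotator-reconstructed enthymemes, since a classifier applied to raw text would never see them. Returning to the three scheme-sets listed above: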
Each of these has a different set of schemes; and most arguments in Araucaria are marked up according to only one of them. Our experimental dataset is composed of only those arguments annotated in accordance with Walton’s scheme-set, within which the five schemes shown in Table 1 constitute 61% of the total occurrences. 4 Methods 4.1 Overall framework As we noted above, our ultimate goal is to reconstruct enthymemes, the unstated premises, in an argument by taking advantage of the stated propositions; and in order to achieve this goal we need to first determine the particular argumentation scheme that the argument is using. This problem is depicted in Figure 1. Our scheme classifier is the dashed round-cornered rectangle portion of this 989 Detecting argumentative text ARGUMENTATIVE SEGMENT Premise / conclusion classifier CONCLUSION PREMISE #1 PREMISE #2 Scheme classifier TEXT ARGUMENTATION SCHEME Argument template fitter CONSTRUCTED ENTHYMEME Figure 1: Overall framework of this research. overall framework: its input is the extracted conclusion and premise(s) determined by an argument detector, followed by a premise / conclusion classifier, given an unknown text as the input to the entire system. And the portion below the dashed roundrectangle represents our long-term goal — to reconstruct the implicit premise(s) in an argument, given its argumentation scheme and its explicit conclusion and premise(s) as input. Since argument detection and classification are not the topic of this paper, we assume here that the input conclusion and premise(s) have already been retrieved, segmented, and classified, as for example by the methods of Mochales and Moens (see Section 2 above). And the scheme template fitter is the topic of our on-going work. 4.2 Data preprocessing From all arguments in Araucaria, we first extract those annotated in accordance with Walton’s scheme-set. Then we break each complex AU node into several simple AUs where no conclusion or premise proposition nodes have embedded AU nodes. From these generated simple arguments, we extract those whose scheme falls into one of the five most frequent schemes as described in Table 1. Furthermore, we remove all enthymemes that have been inserted by the annotator and ignore any argument with a missing conclusion, since the input to our proposed classifier, as depicted in Figure 1, cannot have any access to unstated argumentative propositions. The resulting preprocessed dataset is composed of 393 arguments, of which 149, 106, 53, 44, and 41 respectively belong to the five schemes in the order shown in Table 1. 4.3 Feature selection The features used in our work fall into two categories: general features and scheme-specific features. 4.3.1 General features General features are applicable to arguments belonging to any of the five schemes (shown in Table 2). For the features conLoc, premLoc, gap, and lenRat, we have two versions, differing in terms of their basic measurement unit: sentence-based and token-based. The final feature, type, indicates whether the premises contribute to the conclusion in a linked or convergent order. A linked argument (LA) is one that has two or more inter-dependent premise propositions, all of which are necessary to make the conclusion valid, whereas in a convergent argument (CA) exactly one premise proposition is sufficient to do so. Since it is observed that there exists a strong correlation between type and the particular scheme employed while arguing, we believe type can be a good indicator of argumentation scheme. 
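As an illustration of how the general features could be computed from an already-segmented argument, here is a minimal sketch. It is an assumption-laden rendering rather than the authors' feature extractor: it works at the token level only, takes the token offsets of the conclusion and first premise as given, and simply copies type from the corpus annotation.

```python
def general_features(conclusion_tokens, premise_token_lists,
                     conclusion_offset, first_premise_offset, annotated_type):
    """Token-based versions of the general features listed in Table 2.
    Offsets are token positions in the source text; annotated_type is the
    linked/convergent label taken from the Araucaria annotation."""
    premise_len = sum(len(p) for p in premise_token_lists)
    return {
        "conLoc": conclusion_offset,
        "premLoc": first_premise_offset,
        "conFirst": conclusion_offset < first_premise_offset,
        "gap": abs(first_premise_offset - conclusion_offset),
        "lenRat": premise_len / max(len(conclusion_tokens), 1),
        "numPrem": len(premise_token_lists),
        "type": annotated_type,  # read directly from the annotation
    }
```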
However, although this feature is available to us because it is included in the Araucaria annotations, its value cannot be obtained from raw text as easily as other features mentioned above; but it is possible that we will in the future be able to determine it automatically by taking advantage of some scheme-independent cues such as the discourse relation between the conclusion and the premises. 4.3.2 Scheme-specific features Scheme-specific features are different for each scheme, since each scheme has its own cue phrases or patterns. The features for each scheme are shown in Table 3 (for complete lists of features see Feng (2010)). In our experiments in Section 5 below, all these features are computed for all arguments; but 990 conLoc: the location (in token or sentence) of the conclusion in the text. premLoc: the location (in token or sentence) of the first premise proposition. conFirst: whether the conclusion appears before the first premise proposition. gap: the interval (in token or sentence) between the conclusion and the first premise proposition. lenRat: the ratio of the length (in token or sentence) of the premise(s) to that of the conclusion. numPrem: the number of explicit premise propositions (PROP nodes) in the argument. type: type of argumentation structure, i.e., linked or convergent. Table 2: List of general features. the features for any particular scheme are used only when it is the subject of a particular task. For example, when we classify argument from example in a one-against-others setup, we use the schemespecific features of that scheme for all arguments; when we classify argument from example against argument from cause to effect, we use the schemespecific features of those two schemes. For the first three schemes (argument from example, argument from cause to effect, and practical reasoning), the scheme-specific features are selected cue phrases or patterns that are believed to be indicative of each scheme. Since these cue phrases and patterns have differing qualities in terms of their precision and recall, we do not treat them all equally. For each cue phrase or pattern, we compute “confidence”, the degree of belief that the argument of interest belongs to a particular scheme, using the distribution characteristics of the cue phrase or pattern in the corpus, as described below. For each argument A, a vector CV = {c1, c2, c3} is added to its feature set, where each ci indicates the “confidence” of the existence of the specific features associated with each of the first three schemes, schemei. This is defined in Equation 1: ci = 1 N mi X k=1 (P (schemei|cpk) · dik) (1) Argument from example 8 keywords and phrases including for example, such as, for instance, etc.; 3 punctuation cues: “:”, “;”, and “—”. Argument from cause to effect 22 keywords and simple cue phrases including result, related to, lead to, etc.; 10 causal and noncausal relation patterns extracted from WordNet (Girju, 2003). Practical reasoning 28 keywords and phrases including want, aim, objective, etc.; 4 modal verbs: should, could, must, and need; 4 patterns including imperatives and infinitives indicating the goal of the speaker. Argument from consequences The counts of positive and negative propositions in the conclusion and premises, calculated from the General Inquirer2. 
Argument from verbal classification The maximal similarity between the central word pairs extracted from the conclusion and the premise; the counts of copula, expletive, and negative modifier dependency relations returned by the Stanford parser3 in the conclusion and the premise. 2 http://www.wjh.harvard.edu/∼inquirer/ 3 http://nlp.stanford.edu/software/lex-parser.shtml Table 3: List of scheme-specific features. Here mi is the number of scheme-specific cue phrases designed for schemei; P (schemei|cpk) is the prior probability that the argument A actually belongs to schemei, given that some particular cue phrase cpk is found in A; dik is a value indicating whether cpk is found in A; and the normalization factor N is the number of scheme-specific cue phrase patterns designed for schemei with at least one support (at least one of the arguments belonging to schemei contains that cue phrase). There are two ways to calculate dik, Boolean and count: in Boolean mode, dik is treated as 1 if A matches cpk; in count mode, dik equals to the number of times A matches cpk; and in both modes, dik is treated as 0 if cpk is not found in A. 991 For argument from consequences, since the arguer has an obvious preference for some particular consequence, sentiment orientation can be a good indicator for this scheme, which is quantified by the counts of positive and negative propositions in the conclusion and premise. For argument from verbal classification, there exists a hypernymy-like relation between some pair of propositions (entities, concepts, or actions) located in the conclusion and the premise respectively. The existence of such a relation is quantified by the maximal Jiang-Conrath Similarity (Jiang and Conrath, 1997) between the “central word” pairs extracted from the conclusion and the premise. We parse each sentence of the argument with the Stanford dependency parser, and a word or phrase is considered to be a central word if it is the dependent or governor of several particular dependency relations, which basically represents the attribute or the action of an entity in a sentence, or the entity itself. For example, if a word or phrase is the dependent of the dependency relation agent, it is therefore considered as a “central word”. In addition, an arguer tends to use several particular syntactic structures (copula, expletive, and negative modifier) when using this scheme, which can be quantified by the counts of those special relations in the conclusion and the premise(s). 5 Experiments 5.1 Training We experiment with two kinds of classification: oneagainst-others and pairwise. We build a pruned C4.5 decision tree (Quinlan, 1993) for each different classification setup, implemented by Weka Toolkit 3.65 (Hall et al., 2009). One-against-others classification A one-againstothers classifier is constructed for each of the five most frequent schemes, using the general features and the scheme-specific features for the scheme of interest. For each classifier, there are two possible outcomes: target scheme and other; 50% of the training dataset is arguments associated with target scheme, while the rest is arguments of all the other schemes, which are treated as other. Oneagainst-other classification thus tests the effective5http://cs.waikato.ac.nz/ml/weka ness of each scheme’s specific features. Pairwise classification A pairwise classifier is constructed for each of the ten possible pairings of the five schemes, using the general features and the scheme-specific features of the two schemes in the pair. 
For each of the ten classifiers, the training dataset is divided equally into arguments belonging to scheme1 and arguments belonging to scheme2, where scheme1 and scheme2 are two different schemes among the five. Only features associated with scheme1 and scheme2 are used. 5.2 Evaluation We experiment with different combinations of general features and scheme-specific features (discussed in Section 4.3). To evaluate each experiment, we use the average accuracy over 10 pools of randomly sampled data (each with baseline at 50%6) with 10fold cross-validation. 6 Results We first present the best average accuracy (BAA) of each classification setup. Then we demonstrate the impact of the feature type (convergent or linked argument) on BAAs for different classification setups, since we believe type is strongly correlated with the particular argumentation scheme and its value is the only one directly retrieved from the annotations of the training corpus. For more details, see Feng (2010). 6.1 BAAs of each classification setup target scheme BAA dik base type example 90.6 count token yes cause 70.4 Boolean / count token no reasoning 90.8 count sentence yes consequences 62.9 – sentence yes classification 63.2 – token yes Table 4: Best average accuracies (BAAs) (%) of oneagainst-others classification. 6We also experiment with using general features only, but the results are consistently below or around the sampling baseline of 50%; therefore, we do not use them as a baseline here. 992 example cause reasoning consequences cause 80.6 reasoning 93.1 94.2 consequences 86.9 86.7 97.9 classification 86.0 85.6 98.3 64.2 Table 5: Best average accuracies (BAAs) (%) of pairwise classification. Table 4 presents the best average accuracies of one-against-others classification for each of the five schemes. The subsequent three columns list the particular strategies of features incorporation under which those BAAs are achieved (the complete set of possible choices is given in Section 4.3.): • dik: Boolean or count — the strategy of combining scheme-specific cue phrases or patterns using either Boolean or count for dik. • base: sentence or token — the basic unit of applying location- or length-related general features. • type: yes or no — whether type (convergent or linked argument) is incorporated into the feature set. As Table 4 shows, one-against-others classification achieves high accuracy for argument from example and practical reasoning: 90.6% and 90.8%. The BAA of argument from cause to effect is only just over 70%. However, with the last two schemes (argument from consequences and argument from verbal classification), accuracy is only in the low 60s; there is little improvement of our system over the majority baseline of 50%. This is probably due at least partly to the fact that these schemes do not have such obvious cue phrases or patterns as the other three schemes which therefore may require more world knowledge encoded, and also because the available training data for each is relatively small (44 and 41 instances, respectively). The BAA for each scheme is achieved with inconsistent choices of base and dik, but the accuracies that resulted from different choices vary only by very little. Table 5 shows that our system is able to correctly differentiate between most of the different scheme pairs, with accuracies as high as 98%. 
It has poor performance (64.0%) only for the pair argument from consequences and argument from verbal classification; perhaps not coincidentally, these are the two schemes for which performance was poorest in the one-against-others task. 6.2 Impact of type on classification accuracy As we can see from Table 6, for one-against-others classifications, incorporating type into the feature vectors improves classification accuracy in most cases: the only exception is that the best average accuracy of one-against-others classification between argument from cause to effect and others is obtained without involving type into the feature vector — but the difference is negligible, i.e., 0.5 percentage points with respect to the average difference. Type also has a relatively small impact on argument from verbal classification (2.6 points), compared to its impact on argument from example (22.3 points), practical reasoning (8.1 points), and argument from consequences (7.5 points), in terms of the maximal differences. Similarly, for pairwise classifications, as shown in Table 7, type has significant impact on BAAs, especially on the pairs of practical reasoning versus argument from cause to effect (17.4 points), practical reasoning versus argument from example (22.6 points), and argument from verbal classification versus argument from example (20.2 points), in terms of the maximal differences; but it has a relatively small impact on argument from consequences versus argument from cause to effect (0.8 point), and argument from verbal classification versus argument from consequences (1.1 points), in terms of average differences. 7 Future Work In future work, we will look at automatically classifying type (i.e., whether an argument is linked or convergent), as type is the only feature directly retrieved from annotations in the training corpus that has a strong impact on improving classification accuracies. Automatically classifying type will not be easy, because sometimes it is subjective to say whether a premise is sufficient by itself to support the conclusion or not, especially when the argument is about 993 target scheme BAA-t BAA-no t max diff min diff avg diff example 90.6 71.6 22.3 10.6 14.7 cause 70.4 70.9 −0.5 −0.6 −0.5 reasoning 90.8 83.2 8.1 7.5 7.7 consequences 62.9 61.9 7.5 −0.6 4.2 classification 63.2 60.7 2.6 0.4 2.0 Table 6: Accuracy (%) with and without type in one-against-others classification. BAA-t is best average accuracy with type, and BAA-no t is best average accuracy without type. max diff, min diff, and avg diffare maximal, minimal, and average differences between each experimental setup with type and without type while the remaining conditions are the same. scheme1 scheme2 BAA-t BAA-no t max diff min diff avg diff cause example 80.6 69.7 10.9 7.1 8.7 reasoning example 93.1 73.1 22.8 19.1 20.1 reasoning cause 94.2 80.5 17.4 8.7 13.9 consequences example 86.9 76.0 13.8 6.9 10.1 consequences cause 87.7 86.7 3.8 −1.5 −0.1 consequences reasoning 97.9 97.9 10.6 0.0 0.8 classification example 86.0 74.6 20.2 3.7 7.1 classification cause 85.6 76.8 9.0 3.7 7.1 classification reasoning 98.3 89.3 8.9 4.2 8.3 classification consequences 64.0 60.0 6.5 −1.3 1.1 Table 7: Accuracy (%) with and without type in pairwise classification. Column headings have the same meanings as in Table 6. personal opinions or judgments. So for this task, we will initially focus on arguments that are (or at least seem to be) empirical or objective rather than value-based. 
It will also be non-trivial to determine whether an argument is convergent or linked — whether the premises are independent of one another or not. Cue words and discourse relations between the premises and the conclusion will be one helpful factor; for example, besides generally flags an independent premise. And one premise may be regarded as linked to another if either would become an enthymeme if deleted; but determining this in the general case, without circularity, will be difficult. We will also work on the argument template fitter, which is the final component in our overall framework. The task of the argument template fitter is to map each explicitly stated conclusion and premise into the corresponding position in its scheme template and to extract the information necessary for enthymeme reconstruction. Here we propose a syntaxbased approach for this stage, which is similar to tasks in information retrieval. This can be best explained by the argument in Example 1, which uses the particular argumentation scheme practical reasoning. We want to fit the Premise and the Conclusion of this argument into the Major premise and the Conclusion slots of the definition of practical reasoning (see Table 1), and construct the following conceptual mapping relations: 1. Survival of the entire world −→a goal G 2. Adhering to the treaties and covenants aiming for a world free of nuclear arsenals and other conventional and biological weapons of mass destruction −→action A Thereby we will be able to reconstruct the missing Minor premise — the enthymeme in this argument: Carrying out adhering to the treaties and covenants aiming for a world free of nuclear arsenals and other conventional and biological 994 weapons of mass destruction is a means of realizing survival of the entire world. 8 Conclusion The argumentation scheme classification system that we have presented in this paper introduces a new task in research on argumentation. To the best of our knowledge, this is the first attempt to classify argumentation schemes. In our experiments, we have focused on the five most frequently used schemes in Walton’s schemeset, and conducted two kinds of classification: in one-against-others classification, we achieved over 90% best average accuracies for two schemes, with other three schemes in the 60s to 70s; and in pairwise classification, we obtained 80% to 90% best average accuracies for most scheme pairs. The poor performance of our classification system on other experimental setups is partly due to the lack of training examples or to insufficient world knowledge. Completion of our scheme classification system will be a step towards our ultimate goal of reconstructing the enthymemes in an argument by the procedure depicted in Figure 1. Because of the significance of enthymemes in reasoning and arguing, this is crucial to the goal of understanding arguments. But given the still-premature state of research of argumentation in computational linguistics, there are many practical issues to deal with first, such as the construction of richer training corpora and improvement of the performance of each step in the procedure. Acknowledgments This work was financially supported by the Natural Sciences and Engineering Research Council of Canada and by the University of Toronto. We are grateful to Suzanne Stevenson for helpful comments and suggestions. References Robin Cohen. 1987. Analyzing the structure of argumentative discourse. Computational Linguistics, 13(1–2):11–24. Judith Dick. 1987. 
Conceptual retrieval and case law. In Proceedings, First International Conference on Artificial Intelligence and Law, pages 106–115, Boston, May. Judith Dick. 1991a. A Conceptual, Case-relation Representation of Text for Intelligent Retrieval. Ph.D. thesis, Faculty of Library and Information Science, University of Toronto, April. Judith Dick. 1991b. Representation of legal text for conceptual retrieval. In Proceedings, Third International Conference on Artificial Intelligence and Law, pages 244–252, Oxford, June. Vanessa Wei Feng. 2010. Classifying arguments by scheme. Technical report, Department of Computer Science, University of Toronto, November. http://ftp.cs.toronto.edu/pub/ gh/Feng-MSc-2010.pdf. Sarah George, Ingrid Zukerman, and Michael Niemann. 2007. Inferences, suppositions and explanatory extensions in argument interpretation. User Modeling and User-Adapted Interaction, 17(5):439–474. Roxana Girju. 2003. Automatic detection of causal relations for question answering. In Proceedings of the ACL 2003 Workshop on Multilingual Summarization and Question Answering, pages 76–83, Morristown, NJ, USA. Association for Computational Linguistics. Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The WEKA data mining software: an update. SIGKDD Explorations Newsletter, 11(1):10–18. Jay J. Jiang and David W. Conrath. 1997. Semantic similarity based on corpus statistics and lexical taxonomy. In International Conference Research on Computational Linguistics (ROCLING X), pages 19–33. Joel Katzav and Chris Reed. 2004. On argumentation schemes and the natural classification of arguments. Argumentation, 18(2):239–259. Raquel Mochales and Marie-Francine Moens. 2008. Study on the structure of argumentation in case law. In Proceedings of the 2008 Conference on Legal Knowledge and Information Systems, pages 11–20, Amsterdam, The Netherlands. IOS Press. Raquel Mochales and Marie-Francine Moens. 2009a. Argumentation mining: the detection, classification and structure of arguments in text. In ICAIL ’09: Proceedings of the 12th International Conference on Artificial Intelligence and Law, pages 98–107, New York, NY, USA. ACM. Raquel Mochales and Marie-Francine Moens. 2009b. Automatic argumentation detection and its role in law and the semantic web. In Proceedings of the 2009 Conference on Law, Ontologies and the Semantic Web, pages 115–129, Amsterdam, The Netherlands. IOS Press. Marie-Francine Moens, Erik Boiy, Raquel Mochales Palau, and Chris Reed. 2007. Automatic detection 995 of arguments in legal texts. In ICAIL ’07: Proceedings of the 11th International Conference on Artificial Intelligence and Law, pages 225–230, New York, NY, USA. ACM. John L. Pollock. 1995. Cognitive Carpentry: A Blueprint for How to Build a Person. Bradford Books. The MIT Press, May. J. Ross Quinlan. 1993. C4.5: Programs for machine learning. Machine Learning, 16(3):235–240. Chris Reed and Glenn Rowe. 2004. Araucaria: Software for argument analysis, diagramming and representation. International Journal of Artificial Intelligence Tools, 14:961–980. Glenn Rowe and Chris Reed. 2008. Argument diagramming: The Araucaria project. In Knowledge Cartography, pages 163–181. Springer London. Frans H. van Eemeren and Rob Grootendorst. 1992. Argumentation, Communication, and Fallacies: A Pragma-Dialectical Perspective. Routledge. Douglas Walton and Chris Reed. 2002. Argumentation schemes and defeasible inferences. 
In Workshop on Computational Models of Natural Argument, 15th European Conference on Artificial Intelligence, pages 11–20, Amsterdam, The Netherlands. IOS Press. Douglas Walton, Chris Reed, and Fabrizio Macagno. 2008. Argumentation Schemes. Cambridge University Press. 996
| 2011 | 99 |