Columns: id (string, length 7-12); sentence1 (string, length 6-1.27k); sentence2 (string, length 6-926); label (string, 4 classes).
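The listing below repeats four fields per example (id, sentence1, sentence2, label). As a rough illustration only, and not an official loader for this dataset, the following Python sketch groups such a flat listing into dictionaries; the file name "records.txt" and the fixed four-line grouping are assumptions.

```python
# Minimal sketch: parse a flat record listing where each example occupies
# four consecutive lines in the order id, sentence1, sentence2, label.
# "records.txt" is a hypothetical placeholder file name.
from typing import Dict, Iterator, List


def parse_records(lines) -> Iterator[Dict[str, str]]:
    """Group consecutive non-empty lines into four-field records."""
    fields = ("id", "sentence1", "sentence2", "label")
    buf: List[str] = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        buf.append(line)
        if len(buf) == len(fields):
            yield dict(zip(fields, buf))
            buf = []


if __name__ == "__main__":
    with open("records.txt", encoding="utf-8") as f:
        for record in parse_records(f):
            print(record["id"], record["label"])
```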
train_12800
Many of the features of the AEP prediction model are specifically tuned to the choice of German and English as the source and target languages.
it should be easy to develop new feature sets to deal with other languages or treebanking styles.
contrasting
train_12801
Because Kanji comprises ideograms, an individual pronunciation can potentially be represented by more than one character.
if several Kanji strings are related to the same pronunciation of the source word, their meanings will be different and will therefore convey different impressions.
contrasting
train_12802
Existing transliteration methods for Chinese (Haizhou et al., 2004;Wan and Verspoor, 1998) aim to spell out foreign names of people and places, and do not model impression.
as exemplified by "Coca-Cola" in Section 1, the impression of words needs to be modeled in the transliteration of proper names, such as companies and products.
contrasting
train_12803
A simple implementation is to segment each impression keyword into characters.
because it is difficult for a user to provide an exhaustive list of appropriate keywords and characters, our impression model derives characters that are not included in the impression keywords.
contrasting
train_12804
These co-occurrences can potentially be collected from existing language resources, such as corpora in Chinese.
it is desirable to collect an association between a word and a character, not simply their co-occurrence in corpora.
contrasting
train_12805
We believe our hand-assigned costs are a reasonable starting point if one knows nothing about the particular pair of languages in question.
one could also train such costs, either from an existing list of known transliterations, or as part of an iterative bootstrapping method as, for example, in Yarowsky and Wicentowski's (2000) work on morphological induction.
contrasting
train_12806
Expanding a foreign word to its possible variants in a query has been shown to increase the precision of search results (Abduljaleel and Larkey, 2003).
OOV words in the query are easily recognised based on English rules and an English-Arabic dictionary: capitalised words are marked as nouns, and the remaining words are translated using the dictionary.
contrasting
train_12807
We built the Arabic language model using 100 000 words extracted from the TREC collection using the same spell-checker.
we excluded any word that could be a proper noun, to avoid involving foreign words.
contrasting
train_12808
This could be due to the limited size of the training corpus.
we expect that improvements to this approach will remain limited due to the fact that many Arabic and foreign words share the same trigrams.
contrasting
train_12809
They investigate the impact of different features and data size, and report results significantly better than a simple baseline.
their results vary considerably between the languages and the domains.
contrasting
train_12810
Techniques based on machine learning (Zhou et al., 2005;Hao et al., 2005;Bunescu and Mooney, 2006) are expected to alleviate this problem in manually crafted IE.
in most cases, the cost of manually crafting patterns is simply transferred to that of constructing a large amount of training data, which requires a tedious amount of manual labor to annotate text.
contrasting
train_12811
Their approaches were similar to our approach using PASs derived from full parsing.
one problem with their systems is that they could not treat non-local dependencies such as semantic subjects of gerund constructions (discussed in Section 2), and thus rules acquired from the constructions were partial.
contrasting
train_12812
([c] pseudo False Negatives (FNs)).
these results included pairs that Reactome missed or those that only cooccurred and were not interacting pairs in the text.
contrasting
train_12813
Parsing errors are intrinsic problems to IE methods using parsing.
from Table 3, we can conclude that the key to gaining better accuracy is refining the method by which the PAS patterns are constructed (there were 46 related FNs) rather than improving parsing (there were 35 FNs).
contrasting
train_12814
as enzymes that catalyze certain reactions) depends on their three-dimensional structure.
genes only specify the linear sequence of the amino acids, and the ribosome (the cell's "protein factory") uses this information to assemble the polypeptide chain.
contrasting
train_12815
Ultimately, models which explicitly capture all atoms and their physical interactions are required to study the folding of real proteins.
since such models often require huge computational resources such as supercomputers or distributed systems, novel search strategies and other general properties of the folding problem are usually first studied with coarse-grained, simplified representations, such as the HP model (Lau and Dill, 1989;Dill et al., 1995) used here.
contrasting
train_12816
What CKY by itself does not give us is an accurate prediction of the rates that govern the folding process, including misfolding and unfolding events.
we believe that it is possible to obtain this information from the chart by extracting all tree cuts (which correspond to the states of the chain at different stages during the folding process) and calculating folding rates between them.
contrasting
train_12817
They guide the search using the performance on parsing (and several other tasks) of the grammar at each stage in the search.
our approach explores the space of grammars by starting with few nonterminals and splitting them.
contrasting
train_12818
If the reevaluated candidate is no longer better than the second candidate on the queue, we reinsert it and continue.
if it is still the best on the queue, and it improves the model, we enact the split; otherwise it is discarded.
contrasting
train_12819
One is what Klein & Manning and we do.
they have a better performing approximation which is used in their reported score.
contrasting
train_12820
The INHERIT model may be regarded as containing all the same rules (see (1)) as the PCFG-LA model.
these rules' probabilities are now collectively determined by a smaller set of shared parameters.
contrasting
train_12821
The choice of a particular passpattern, for example, depends on all and only the three nonterminals X, Y, Z.
given sparse training data, sometimes it is advantageous to back off to smaller amounts of contextual information; the nonterminal X or Y might alone be sufficient to predict the passpattern.
contrasting
train_12822
Indeed, Table 3 shows that we can improve agreement precision by setting θ_agr to the (positive) mean agreement score µ assigned by the SVM agreement-classifier over all references in the given debate.
this comes at the cost of greatly reducing agreement accuracy (development: 64.38%; test: 66.18%) due to lowered recall levels.
contrasting
train_12823
We compute transitive closure by using a Union-Find structure, which runs in time O(log* n), which for practical purposes can be considered linear (O(n)).
when computing the best information gain for a nominal feature, StRip has to make a pass over the data for each value that the feature takes, while RIPPER can split the data into bags and perform the computation in one pass.
contrasting
train_12824
StRip's performance is all the more impressive considering the strength of the SVM and RIPPER baselines, which represent the best runs across the 336 different parameter settings tested for SVMlight and 144 different settings tested for RIPPER.
all four of the StRip runs using the full MPQA corpus (we vary the loss ratio for false positive/false negative cost) outperform those baselines.
contrasting
train_12825
By carefully selecting the smoothing parameters, the model can preserve dependencies between topic and sentiment words, and is quite capable of distinguishing the positive sentiment of 'unpredictable plot' from the negative sentiment of 'unpredictable steering'.
the model does ignore the ordering of the words, so it will not be able to differentiate the negative phrase 'gone from good to bad' from its exact opposite.
contrasting
train_12826
One possible approach is to set threshold values for the frequency in a polar context, max(p(a), n(a)), and for the ratio of appearances in polar contexts among the total appearances, max(p(a), n(a)) / f(a).
the optimum threshold values should depend on the corpus and the initial lexicon.
contrasting
train_12827
Summarization of meetings faces many challenges not found in texts, i.e., high word error rates, absence of punctuation, and sometimes lack of grammaticality and coherent ordering.
meetings present a rich source of structural and pragmatic information that makes summarization of multi-party speech quite unique.
contrasting
train_12828
those identified as 1-13 and 31-41.
the two sentences have contradictory meanings, and it would be unfortunate to increase the score of a peer summary containing the former sentence because the latter is included in some model summaries.
contrasting
train_12829
While the contingency counts in Table 2 only hinted at a limited benefit of linear-chain features, empirical results show the contrary, especially for order k = 2.
the further increase of k causes overfitting, and skip-chain features seem a better way to capture non-local dependencies while keeping the number of model parameters relatively small.
contrasting
train_12830
Incorporating syntactic features into the context has been at the forefront of recent research (Collins et al., 2005;Rosenfeld et al., 2001;Chelba and Jelinek, 2000;Hall and Johnson, 2004).
much of the previous work has focused on English language syntax.
contrasting
train_12831
Factoring a word into the semantics-bearing lemma and syntaxbearing morphological tag alleviates the data sparsity problem to some extent.
the number of possible factorizations of n-grams is large.
contrasting
train_12832
The space of features spanned by the cross-product space of words, lemmas, tags, factored tags and their n-grams can potentially be overwhelming.
not all of these features are equally important and many of the features may not have a significant impact on the word error rate.
contrasting
train_12833
Transductive learning, first described by Vapnik (Vapnik, 1998) also describes a setting where both labeled and unlabeled data are used jointly to decide on a label assignment to the unlabeled data points.
the goal here is not to learn a general classification function that can be applied to new test sets multiple times but to achieve a high-quality one-time labeling of a particular data set.
contrasting
train_12834
As seen in Table 4, the POS-set sizes of the machine-learned lexicon are a factor of 2 or 3 smaller than those of the baseline lexicons.
recall is better for the baseline lexicons.
contrasting
train_12835
Because reviews can be numerous and varying in quality, it is important to rank them to enhance customer experience.
with ranking search results, assessing relevance when ranking reviews is of little importance because reviews are directly associated with the relevant product or service.
contrasting
train_12836
Pang and Lee (2005) have studied prediction of product ratings, which may be particularly relevant due to the correlation we find between product rating and the helpfulness of the review (discussed in Section 5).
a user's overall rating for the product is often already available.
contrasting
train_12837
The exact scoring approaches developed in commercial systems are often not disclosed.
more recent work on one of the major systems, e-rater 2.0, has focused on systematizing and simplifying the set of features used (Attali and Burstein 2006).
contrasting
train_12838
later formulated global inference using integer linear programming, which is the approach that we apply here.
to our work, operated in the domain of factual information extraction rather than opinion extraction, and assumed that the exact boundaries of entities from the gold standard are known a priori, which may not be available in practice.
contrasting
train_12839
Since both features represent a positive sentiment and the bigram matches fewer contexts than the unigram, it is probably sufficient just to have the unigram.
there are many cases where a feature captures a subtlety or non-compositional meaning that a simpler feature does not.
contrasting
train_12840
9) as an extension to the Dirichlet prior method ).
we have introduced a multinomial approximation of the Polya distribution.
contrasting
train_12841
Table 3 also shows that Polya-CF outperformed CProb when the dataset was ML and CProb was better than Polya-CF in the other cases.
the differences in precision were small.
contrasting
train_12842
We actually tried to learn α_ω and α_µ from the training data by using an EM method (Minka, 2003; Yamamoto and Sadamitsu, 2005).
the estimated parameters were about 0.05, too small for better recommendations.
contrasting
train_12843
We see that the very large corpus has reduced the accuracy of frequency weighted Random Indexing.
our two top performers have both substantially increased in accuracy, presenting a 75-100% improvement in performance over FREQ.
contrasting
train_12844
We have shown that order-1 semi-Markov conditional random fields are strictly more expressive than order-1 Markov CRFs, and that the added expressivity enables the use of features that lead to improvements on a segmentation task.
Markov CRFs can more naturally incorporate certain features that may be useful for modeling sub-chunk phenomena and generalization to unseen chunks.
contrasting
train_12845
The results show that 2nd order chains of characters generally obtain the best performance.
the difference in performance between 1st order and 2nd order chains could be considered as statistically insignificant due to the large overlap of the error bars.
contrasting
train_12846
Some machine learning algorithms have also been investigated in Chinese NER, including HMM (Yu et al., 1998), class-based language model (Gao et al., 2005), RRM (Guo et al., 2005), etc.
when a machine learning-based NER system is directly employed in a new domain, its performance usually degrades.
contrasting
train_12847
date, time, numeral expression) are encoded into pattern-specific class labels aligned with the tokens.
the performance usually becomes unstable when NER models are applied in different domains.
contrasting
train_12848
Experimental results show that the performance of the general NER model is significantly enhanced in the first several retraining cycles since more training data are used.
when the general training data set size is more than 2.4M, the performance enhancement is very slight.
contrasting
train_12849
The more training data are used, the higher the NER performance that can be achieved.
it is difficult to significantly enhance the performance when the training data size is above a certain threshold.
contrasting
train_12850
Domain-specific models can usually achieve higher performance in their corresponding domains after being trained with a smaller amount of domain-specific annotated data (see Table 2 in Section 3.2).
the performance stability of domain-specific NER model is poor across different domains.
contrasting
train_12851
One is chronological ordering (Barzilay et al., 2002;Bollegala et al., 2005), which is based on time-related features of the documents.
such temporal features may not be available in all cases.
contrasting
train_12852
Unlike in MO, selection of the next sentence here is based on the most recent one.
this may lead to topic bias: i.e.
contrasting
train_12853
In fact, it can be seen as a merge of the original c_0 and c_k, and in this sense the updated c_0 represents the history of selections.
to the MO algorithm, the ordering algorithm here (HO) uses immediate back-front co-occurrence, while the MO algorithm uses relative back-front locations.
contrasting
train_12854
The synset having higher overlapping word counts (or weights) is selected for a particular test example.
for TSWEB and TSBNC the better results have been obtained using occurrences (the weights are only used to order the words of the vector).
contrasting
train_12855
Therefore, it is desirable to crawl the web and to develop specific search engines for NLP applications (Cafarella and Etzioni, 2005).
considering that great efforts are taken in commercial search engines to maintain quality of crawling and indexing, especially against spammers, it is still important to pursue the possibility of using the current search engines for NLP applications.
contrasting
train_12856
Detailed analyses of the results revealed that word groups such as bacteria and diseases are clustered correctly.
word groups such as computers (in which homepage, server and client are included) are not well clustered: these words tend to be polysemic, which causes difficulty.
contrasting
train_12857
This phenomenon depends on how a search engine processes the AND operator, and results in unstable values for the PMI.
our chi-square-based method uses a co-occurrence matrix as a contingency table.
contrasting
train_12858
Co-occurrence analysis has been used to determine related words or terms in many NLP-related applications such as query expansion in Information Retrieval (IR).
related words are usually determined with respect to a single word, without relevant information for its application context.
contrasting
train_12859
Query expansion thus aims to improve query expression by adding related terms to the query.
the effect of query expansion is strongly determined by the term relations used (Peat and Willett, 1991).
contrasting
train_12860
Another often used resource is associative relations extracted from co-occurrences: two terms that co-occur frequently are thought to be associated to each other (Jing and Croft, 1994).
co-occurrence relations are noisy: Frequently co-occurring terms are not necessarily related.
contrasting
train_12861
One naturally would suggest that compound terms can be used for this purpose.
for many queries, it is difficult to form a legitimate compound term.
contrasting
train_12862
To some extent, the proposed approach is also related to (Schütze and Pedersen, 1997), which calculate term similarity according to the words appearing in the same context, or to second-order co-occurrences.
a key difference is that (Schütze and Pedersen, 1997) consider only separate context words, while we consider multiple context words together.
contrasting
train_12863
could be built as a traditional bigram model.
this is not a good approach for IR because two related terms do not necessarily co-occur side by side.
contrasting
train_12864
This shows that the CDQE method does not increase recall to the detriment of precision, but increases both of them.
CIQE increases precision at all but the 0.0 recall point: the precision at the 0.0 recall point is 0.6565 for CIQE and 0.6699 for UM.
contrasting
train_12865
Among these methods, supervised learning is usually preferred when a large amount of labeled training data is available.
it is time-consuming and labor-intensive to manually tag a large amount of training data.
contrasting
train_12866
Most of semi-supervised methods employ the bootstrapping framework, which only need to pre-define some initial seeds for any particular relation, and then bootstrap from the seeds to acquire the relation.
it is often quite difficult to enumerate all class labels in the initial seeds and decide an "optimal" number of them.
contrasting
train_12867
(2004)'s method is to use a hierarchical clustering method to cluster pairs of named entities according to the similarity of context words intervening between the named entities.
the drawback of hierarchical clustering is that it requires users to provide the number of clusters.
contrasting
train_12868
Some of these synsets are expressed by very general nouns such as "biont", "benthos", "whole", and "nothing".
others undoubtedly refer to other supersenses, for which they provide the label, such as "food", "person", "plant" or "animal".
contrasting
train_12869
After merging together the 333 and 444 numbers, B_ij will recompute the new inter-cluster compatibility as 0.51, the average of the inter-cluster edges.
the cluster compatibility function C_ij can represent the fact that three numbers with different area codes are to be merged, and can penalize their compatibility accordingly.
contrasting
train_12870
It can also associate a weight with the fact that several fields overlap (e.g., the chances that a cluster has two first names, two last names and two cities).
the binary classifier only examines pairs of fields in isolation and averages these probabilities with other edges.
contrasting
train_12871
We present two methods for creating and applying transliteration models.
to most previous transliteration approaches, our models are discriminative.
contrasting
train_12872
At first glance, k-best lists may seem like they should outperform sampling, because in effect they are the k best samples.
there are several important reasons why one might prefer sampling.
contrasting
train_12873
Thus, the aims of his work are similar to ours.
he is not concerned with the more fine-grained elements, and also uses a different machinery.
contrasting
train_12874
Whereas HMMs are generative models, CRFs are discriminative models that can incorporate rich features.
other approaches to text segmentation have also been pursued.
contrasting
train_12875
There exist clear recommendations on structuring medical reports, such as E2184-02 (ASTM International, 2002).
actual medical reports still vary greatly with regard to their structure.
contrasting
train_12876
Again, naive computation of (9) is intractable.
the max-product variant of loopy belief propagation can be applied to approximately find the MAP assignment of y (max-product can be seen as a generalization of the well-known Viterbi algorithm to graphical models).
contrasting
train_12877
This information can be described in a DTD as book ← title author+ price.
as shown in Example 1, regexes for information extraction rely on more complex constructs.
contrasting
train_12878
Typically, the restrictions involve either disallowing or limiting the use of Kleene closure and disjunction operations.
our work imposes no such restrictions.
contrasting
train_12879
A simple pattern for this task is: "one or more capitalized words followed by a version number", represented as the regex R5. When applied to a collection of University web pages, we discovered that R5 identified correct instances such as Netscape 2.0, Windows 2000 and Installation Designer v1.1.
R5 also extracted incorrect instances such as course numbers (e.g.
contrasting
train_12880
Such features would undoubtedly do a better job predicting the rationales and hence increasing equation 1.
crucially, our true goal is not to predict the rationales but to recover the classifier parameters θ.
contrasting
train_12881
We show that the proposed measure is optimal for constructing the core cluster among documents of equal length.
our method is not useful in a setup where some long documents have a topical portion: such documents should be considered on-topic, but their heavy tail of background words overcomes the topical words' influence.
contrasting
train_12882
If |G| ≫ |R| > 0, and if π > 0, then topical words would tend to appear more often than non-topical words.
we cannot simply base our conclusions on word counts, as some words are naturally more frequent than others (in general English).
contrasting
train_12883
Intuitively, this means that the new feature could no longer be used to discriminate between sentences sampled from P_{i+1} and real sentences.
setting p_rej^{i+1} in this manner may violate the constraints associated with the features already existing in the model, thus hampering the model's performance.
contrasting
train_12884
Given that in each iteration we generate 12,045 sentences, and that in the n'th iteration each sentence has to be classified by n features, this gives a total of roughly 12,045 × (1 + 2 + ⋯ + 21) ≈ 2.8 ⋅ 10^6 classifications after 21 iterations.
using rejection sampling, we needed over three orders of magnitude fewer classifications in total.
contrasting
train_12885
(1999) define the similarity-weighted probability, Pr_SIM, as a sum over similar verbs, where Sim(v′, v) returns a real-valued similarity between two verbs v′ and v (normalized over all pair similarities in the sum).
Erk (2007) generalizes by substituting similar arguments, while Wang et al.
contrasting
train_12886
In a similar vein, Swanson and Gordon (2006) reported that parse tree path features extracted from a rule-based dependency parser are much less reliable than those from a modern constituent parser.
we recently carried out a detailed comparison (Johansson and Nugues, 2008b) between constituent-based and dependency-based SRL systems for FrameNet, in which the results of the two types of systems were almost equivalent when using modern statistical dependency parsers.
contrasting
train_12887
For parsers that consider features of single links only, the Chu-Liu/Edmonds algorithm can be used instead.
this algorithm cannot be generalized to the second-order setting - McDonald and Pereira (2006) proved that this problem is NP-hard, and described an approximate greedy search algorithm.
contrasting
train_12888
To compare our results with previously published results in SRL, we carried out an experiment comparing our system to the top system (Punyakanok et al., 2008) in the CoNLL-2005 Shared Task.
comparison is nontrivial since the output of the CoNLL-2005 systems was a set of labeled segments, while the CoNLL-2008 systems (including ours) produced labeled semantic dependency links.
contrasting
train_12889
To handle this situation fairly to both types of systems, we carried out a two-way evaluation: conversion of dependencies to segments for the dependency-based system, and head-finding heuristics for segmentbased systems.
the latter is difficult since no structure is available inside segments, and we had to resort to computing upper-bound results using gold-standard input; despite this, the dependencybased system clearly outperformed the upper bound of the performance of the segment-based system.
contrasting
train_12890
We note that inference is particularly helpful with rarely mentioned instances.
inference can lead to errors when the proof tree contains joins on generic terms (e.g., "company") or common extraction errors (e.g., "LLC" as a company name).
contrasting
train_12891
To see why, we re-examine the proof: the large factor comes from assuming that all of R's first arguments which meet the PF definition are associated with exactly K_min distinct second arguments.
in our corpus 83% of first arguments are associated with only one second argument.
contrasting
train_12892
HOLMES is also related to open-domain questionanswering systems such as Mulder (Kwok et al., 2001), AskMSR (Brill et al., 2002), and others (Harabagiu et al., 2000;Brill et al., 2001).
these Q/A systems attempt to find individual documents or sentences containing the answer.
contrasting
train_12893
They often perform deep analysis on promising texts, and back off to shallower, less reliable methods if those fail.
HOLMES utilizes TI and attempts to combine information from multiple different sentences in a scalable way.
contrasting
train_12894
Typically, a translation rule consists of a source-side and a target-side.
the source-side of a rule usually corresponds to multiple target-sides in multiple rules.
contrasting
train_12895
The advantage of the MERS model is that it uses rich contextual information to compute posterior probability for e given T .
the translation probabilities and lexical weights in Lynx ignore this information.
contrasting
train_12896
Consider the following examples: The syntactic tree of the Chinese phrase " " is shown in Figure 6.
there are two TATs which can be applied to the source tree, as shown in Figure 7.
contrasting
train_12897
2, the value of distortion score peaks at d=1, i.e., the monotonic alignment, and decays for non-monotonic alignments depending on how far it diverges from the monotonic alignment.
following Och and Ney (2003), we use a fixed value p_0 for the probability of jumping to a null state, which can be optimized on held-out data, and the overall distortion model assigns p_0 to a jump to a null state and (1 - p_0) times the distortion probability otherwise. Given an HMM, the Viterbi alignment algorithm can be applied to find the best alignment between the backbone and the hypothesis, but the alignment produced by the algorithm cannot be used directly to build a confusion network.
contrasting
train_12898
It uses a similarity model for synonym matching and a distortion model for word ordering.
to previous methods, the similarity model explicitly incorporates both semantic and surface word similarity, which is critical to monolingual word alignment, and a smoothed distance-based distortion model is used to model the first-order dependency of word ordering, which is shown to be better than simpler approaches.
contrasting
train_12899
As shown in Figure 2, they simply strip off the appropriate words from each state, collapsing dynamic programming items which are identical from the standpoint of their left-to-right combination in the lower order language model.
having only order-based projections is very limiting.
contrasting