Dataset columns:

Column      Type            Length range / values
id          stringlengths   7–12
sentence1   stringlengths   6–1.27k
sentence2   stringlengths   6–926
label       stringclasses   4 values
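Each record below follows this schema: an id line, the two sentences, and the label. As a minimal sketch of how such a dump might be loaded and filtered, assuming the records have been exported to a JSON Lines file (the file name contrasting_pairs.jsonl and the helper function are hypothetical; the original source of the dump is not named here):

```python
import json

def load_contrasting_pairs(path="contrasting_pairs.jsonl"):
    """Read (id, sentence1, sentence2) triples whose label is "contrasting".

    Assumes one JSON object per line with the four fields listed in the
    schema above; both the file name and this helper are illustrative.
    """
    pairs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record["label"] == "contrasting":
                pairs.append((record["id"], record["sentence1"], record["sentence2"]))
    return pairs

if __name__ == "__main__":
    # Print the first few pairs in the same id / sentence1 / sentence2 / label
    # layout used by the dump below.
    for rid, s1, s2 in load_contrasting_pairs()[:3]:
        print(rid)
        print(s1)
        print(s2)
        print("contrasting")
```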
train_10300
For example, the error rate of Noun in Turkish is 39% which is the highest error rate.
the head error rates fall in the middle rank for the other two languages.
contrasting
train_10301
In Japanese, although our parser achieves more than 97% left/right arc rates.
for the root word precision rate is quite lower (85.97).
contrasting
train_10302
Barzilay and Elhadad's scoring function aims to identify sentences (for inclusion in a summary) that have a high concentration of chain members.
we are interested in chains that span several sentences.
contrasting
train_10303
Our approach to document compression differs from most summarisation work in that our summaries are fairly long.
we believe this is the first step into understanding how compression can help summarisation.
contrasting
train_10304
In the example given above, the predicate beat evokes a single frame (i.e., Cause harm).
predicates often have multiple meanings thus evoking more than one frame.
contrasting
train_10305
We can also observe a trend in recent work in textual entailment that more emphasis is put on explicit learning of the syntactic graph mapping between the entailed and entailed-by sentences (MacCartney et al., 2006).
relatively fewer attempts have been made in the QA community.
contrasting
train_10306
First, the U-SVM classifies questions into a question-dependent set of clusters, and the answer is the name of a question cluster.
most previous models have classified candidates into positive and negative.
contrasting
train_10307
However, most of previous research has focused on using multilingual resources typically used in SMT systems to improve WSD accuracy, e.g., Dagan and Itai (1994), Li and Li (2002), Diab (2004).
this paper focuses on the converse goal of using WSD models to improve actual translation quality.
contrasting
train_10308
The purpose of the study was to lower the annotation cost for supervised WSD, as suggested earlier by Resnik and Yarowsky (1999).
this result is also encouraging for the integration of WSD in SMT, since it suggests that accurate WSD can be achieved using training data of the kind needed for SMT.
contrasting
train_10309
These properties may be maintained by examining sentences adjacent to each potential insertion point.
a local sentence comparison method such as this may fail to account for global document coherence (e.g.
contrasting
train_10310
Currently, our system is trained on insertions in which the sentences of the original text are not modified.
in some cases additional text revisions are required to guarantee coherence of the generated text.
contrasting
train_10311
A χ² test on overall frequencies of aggregated versus non-aggregated disjunctives showed that the non-aggregated descriptions ('true' partitions) were a significant majority (χ² = 83.63, p < .001).
the greater frequency of aggregation in VS compared to VDS turned out to be significant (χ² = 15.498, p < .001).
contrasting
train_10312
IA part initially performs partitioning based on the basic-level TYPE of objects, in line with the evidence.
later partitions can be induced by other properties, possibly yielding partitions even with same-TYPE referents (e.g.
contrasting
train_10313
Figure 2 shows that the performance is almost constant on ungrammatical data in the important sentence length range from 5 to 40.
there is a negative correlation of accuracy and sentence length for grammatical sentences.
contrasting
train_10314
Adpositions do tend to have a high number of siblings on average, which could explain MSTParser's performance on that category.
adjectives on average occur the furthest away from the root, have the shortest dependency length and the fewest siblings.
contrasting
train_10315
It was already known that the two systems make different errors through the work of Sagae and Lavie (2006).
in that work an arc-based voting scheme was used that took only limited account of the properties of the words connected by a dependency arc (more precisely, the overall accuracy of each parser for the part of speech of the dependent).
contrasting
train_10316
A naïve method for computing the partition function is therefore to evaluate The above would require calculating n determinants, resulting in O(n⁴) complexity.
as we show below Z(θ) may be obtained in O(n³) time using a single determinant evaluation.
contrasting
train_10317
In response, several researchers have developed algorithms for automatically learning inference rules from textual corpora.
these rules are often either imprecise or underspecified in directionality.
contrasting
train_10318
A proper treatment of polysemy is essential in the area of lexical acquisition, since polysemy represents a pervasive phenomenon in natural language.
previous approaches to the automatic acquisition of semantic classes have mostly disregarded the problem (cf.
contrasting
train_10319
These results suggest that morphology is the best single source of evidence for our task.
recall from Section 3 that the sampling procedure for the Gold Standard explicitly balanced for morphological factors.
contrasting
train_10320
For instance, according to t-tests performed on the Gold Standard (α = 0.05), only three of the 18 semantic features exhibit significant mean differences for classes basic and event.
aNOVa across the 6 classes (α = 0.05) yields significant differences for 16 out of the 18 features, which indicates that most features serve to distinguish object adjectives from basic and event adjectives.
contrasting
train_10321
This distributional approach works under the assumption that the context vector of each word encodes sufficient information for enabling accurate word clustering.
many words are distributionally unreliable: due to data sparseness, they occur infrequently and hence their context vectors do not capture reliable statistical information.
contrasting
train_10322
add-on's in existing POS induction algorithms, which remain primarily distributional in nature.
our approach more tightly integrates morphology into the distributional framework.
contrasting
train_10323
We have also experimented with a very simple method for handling ambiguity in our bootstrapping algorithm: when augmenting the seed set, instead of labeling a word with a tag that receives 9 votes from the 45 pairwise classifiers, we label a word with any tag that receives at least 8 votes, effectively allowing the assignment of more than one label to a word.
our experimental results (not shown due to space limitations) indicate that the incorporation of this method does not yield better overall performance, since many of the additional labels are erroneous and hence their presence deteriorates the quality of the bootstrapped data.
contrasting
train_10324
We have proposed a new bootstrapping algorithm for unsupervised POS induction.
to existing algorithms developed for this problem, our algorithm is designed to (1) operate under a resource-scarce setting in which no languagespecific tools or resources are available and (2) more tightly integrate morphological information with the distributional POS induction framework.
contrasting
train_10325
A number of (mostly manually curated) databases such as MINT (Zanzoni et al., 2002), BIND , and SwissProt (Bairoch and Apweiler, 2000) have been created to store protein interaction information in structured and standard formats.
the amount of biomedical literature regarding protein interactions is increasing rapidly and it is difficult for interaction database curators to detect and curate protein interaction information manually.
contrasting
train_10326
One of them is based on matching pre-specified patterns and rules (Blaschke et al., 1999;Ono et al., 2001).
complex cases that are not covered by the pre-defined patterns and rules cannot be extracted by these methods.
contrasting
train_10327
For instance, our example sentence is a positive interaction sentence for the KaiC and SasA protein pair.
it is a negative interaction sentence for the KaiA and SasA protein pair, i.e., it does not describe an interaction between this pair of proteins.
contrasting
train_10328
Intuitively, the unlabeled data pushes the decision boundary away from the dense regions.
unlike SVM, the optimization problem now is NP-hard (Zhu, 2005).
contrasting
train_10329
(2004) proved that edit kernel is not always positive definite.
it is possible to make the kernel matrix positive definite by adjusting the γ parameter, which is a positive real number.
contrasting
train_10330
Supervised machine learning approaches have been applied to this domain.
they rely only on labeled training data, which is difficult to gather.
contrasting
train_10331
Al-Onaizan and Knight (2002) report a 1-best accuracy of 0.199 on a corpus of Arabic person names (but an accuracy of 0.634 on English names), using a "spelling-based" model, i.e., a model which has no access to phonetic information.
the details of their experiment and model differ from ours in a number of respects.
contrasting
train_10332
Similarly, "h" is more common following "ya" in the target string (often as part of the larger suffix "iyah").
the preceding context "rya" is usually observed in the word "qaryat", meaning "village" as in "the village of ..." In this grammatical usage, the tah marbouta is spoken and therefore rendered with a "t".
contrasting
train_10333
Thus, Klementiev and Roth, in common with the two MT approaches described above, carefully control the features used by the perceptron.
to these approaches, our algorithm discovers latent alignments, essentially selecting those features necessary for good performance on the task at hand.
contrasting
train_10334
State-of-the-art NER systems are characterized by high accuracy, but they require a large amount of training data.
domain specific ontologies generally contains many "fine-grained" categories (e.g., particular categories of people, such as writers, scientists, and so on) and, as a consequence, supervised methods cannot be used because the annotation costs would become prohibitive.
contrasting
train_10335
In the first example (see row 1), the word job is strongly associated to the word position, because the contexts of the latter in the examples 1 and 2 are similar to the context of the former, and not to the word task, whose contexts (4, 5 and 6) are radically different.
the second example (see row 2) of the word job is similar to the occurrences 4 and 5 of the word task, allowing its correct substitution.
contrasting
train_10336
Capturing non-local dependencies is crucial to the accurate and complete determination of semantic interpretation in the form of predicate-argument-modifier structures or deep dependencies.
with few exceptions (Model 3 of Collins, 1999;Schmid, 2006), output trees produced by state-of-the-art broad coverage statistical parsers (Charniak, 2000;Bikel, 2004) are only surface context-free phrase structure trees (CFG-trees) without empty categories and coindexation to represent displaced constituents.
contrasting
train_10337
The automatic generation grammar transform presented in (Cahill and van Genabith, 2006) provides a solution to coarse-grained and (in fact) inappropriate independence assumptions in the basic generation model.
there is a sense in which the proposed cure improves on the symptoms, but not the cause of the problem: it weakens independence assumptions by multiplying and hence increasing the specificity of conditioning CFG category labels.
contrasting
train_10338
SMT has evolved from the original word-based approach (Brown et al., 1993) into phrase-based approaches (Koehn et al., 2003;Och and Ney, 2004) and syntax-based approaches (Wu, 1997;Alshawi et al., 2000;Yamada and Knight, 2001;Chiang, 2005).
much important work continues to be carried out in Example-Based Machine Translation (EBMT) (Carl et al., 2005;Way and Gough, 2005), and many existing commercial systems are rule-based.
contrasting
train_10339
Since the posterior samples are produced as a byproduct of Gibbs sampling while maximum posterior decoding requires an additional time consuming step that does not have much impact on scores, we used the posterior samples to produce the results in Table 1.
to MCMC, Variational Bayesian inference attempts to find the function Q(y, θ, φ) that minimizes an upper bound of the negative log likelihood (Jordan et al., 1999): The upper bound in (3) is called the Variational Free Energy.
contrasting
train_10340
Thus VB can be viewed as a more principled version of the wellknown ad hoc technique for approximating Bayesian estimation with EM that involves adding α−1 to the expected counts.
in the ad hoc approach the expected count plus α − 1 may be less than zero, resulting in a value of zero for the corresponding parameter (Johnson et al., 2007).
contrasting
train_10341
To measure the similarity between potential pre- and post-conjuncts, a lot of work on the coordination disambiguation used the similarity between conjoined heads.
not only the conjoined heads but also other components in conjuncts have some similarity and furthermore structural parallelism.
contrasting
train_10342
(2005) enabled the use of non-local features by using Gibbs sampling.
it is unclear how to apply their method of determining the parameters of a non-local model to other types of non-local features, which they did not use.
contrasting
train_10343
Krishnan and Manning (2006) divided the model into two CRFs, where the second model uses the output of the first as a kind of non-local information.
it is not possible to use non-local features that depend on the labels of the very candidate to be scored.
contrasting
train_10344
Nakagawa and Matsumoto (2006) used a Boltzmann distribution to model the correlation of the POS of words having the same lexical form in a document.
their method can only be applied when there are convenient links such as the same lexical form.
contrasting
train_10345
Such methods include ALMA (Gentile, 2001) used in (Daumé III and Marcu, 2005), MIRA (Crammer et al., 2006) used in (McDonald et al., 2005), and Max-Margin Markov Networks (Taskar et al., 2003).
to the best of our knowledge, there has been no prior work that has applied a perceptron with a margin (Krauth and Mézard, 1987) to structured output.
contrasting
train_10346
We use the superscript, l, for local features as Φ^l_i(x, y) and g for non-local features as Φ^g_i(x, y).
then, feature mapping is written as Here, we define: Ideally, we want to determine the labels using the whole feature set as: Algorithm 4.1: Candidate algorithm (parameters: if there are non-local features, it is impossible to find the highest scoring candidate efficiently, since we cannot use the Viterbi algorithm.
contrasting
train_10347
BPMs try to cancel out overfitting caused by the order of examples, by training several models by shuffling the training examples.
4 it is very time consuming to run the complete training process several times.
contrasting
train_10348
We used n = 20 (n of the n-best) for training since we could not use too large an n because it would have slowed down training.
we could examine a larger n during testing, since the testing time did not dominate the time for the experiment.
contrasting
train_10349
Although the averaged perceptron outperformed the perceptron, the improvement was slight.
the margin perceptron greatly outperformed compared to the averaged perceptron.
contrasting
train_10350
This comparison, of course, is not fair because the setting was different.
we think the results demonstrate a potential of our new algorithm.
contrasting
train_10351
Our algorithm could at least improve the accuracy of NER with non-local features and it was indicated that our algorithm was superior to the re-ranking approach in terms of accuracy and training cost.
the achieved accuracy was not better than that of related work (Finkel et al., 2005;Krishnan and Manning, 2006) based on CRFs.
contrasting
train_10352
Product features (e.g., "image quality" for digital camera) in a review are good indicators of review quality.
different product features may refer to the same meaning (e.g., "battery life" and "power"), which will bring redundancy in the study.
contrasting
train_10353
Typically, the more data is used to estimate the parameters of the translation model, the better it can approximate the true translation probabilities, which will obviously lead to a higher translation performance.
large corpora are not easily available.
contrasting
train_10354
In addition, the numerical information, abbreviations and the documents' style may be very good indicators of their topic.
this information is no longer available after the dimensionality reduction.
contrasting
train_10355
Table 2 shows that "General Motors" or "GM" are not repeated in every sentence of the second story.
"GM", "carmaker" and "company" are semantically related.
contrasting
train_10356
In contrast, the C&C tagger, which is based on that of Ratnaparkhi (1996), utilizes a wide range of features and a larger contextual window including the previous two tags and the two previous and two following words.
the C&C tagger cannot train on texts which are not fully tagged for POS, so we use the bigram tagger to produce a completely labeled version of the Wycliffe text and train the C&C tagger on this material.
contrasting
train_10357
Compare this with the result for the role patient: "hunter" is further away from "lion" and "deer", and will therefore be found to be a rather bad patient of "shoot".
"hunter" is still more plausible as a patient of "shoot" than e.g., "director".
contrasting
train_10358
In one, assigning every datapoint into a single cluster, guarantees perfect completeness -all of the data points that are members of the same class are trivially elements of the same cluster.
this cluster is as unhomogeneous as possible, since all classes are included in this single cluster.
contrasting
train_10359
In the perfectly homogeneous case, this value, H(C|K), is 0.
in an imperfect situation, the size of this value, in bits, is dependent on the size of the dataset and the distribution of class sizes.
contrasting
train_10360
Therefore, in order for the goodness of two clustering solutions to be compared using one of these measures, an identical post-processing algorithm must be used.
this problem can be trivially addressed by fixing the class-cluster matching function and including it in the definition of the measure as in H. A second and more critical problem is the "problem of matching" (Meila, 2007).
contrasting
train_10361
Transformations like those applied by the adjusted Rand Index and a minor adjustment to the Mirkin measure (see Section 4) can address this problem.
pair matching measures also suffer from distributional problems.
contrasting
train_10362
VI depends on the relative sizes of the partitions of C and K, not on the number of points in these partitions.
vI is bounded by the maximum number of clusters in C or K, k*.
contrasting
train_10363
Normalizing VI by log n or 1/2 log k* guarantees this range.
meila (2007) raises two potential problems with this modification.
contrasting
train_10364
All evaluated measures satisfy P4 and P7.
rand, Mirkin, Fowlkes-Mallows, Gamma, Jaccard and F-Measure all fail to satisfy P3 and P6 in at least one experimental configuration.
contrasting
train_10365
Therefore, model parameter θ learned by PMM can be reasonable over the whole of documents.
if we compute the probability of generating an individual document, a document-specific topic weight bias on mixture ratio is to be considered.
contrasting
train_10366
Language models and phrase tables have in common that the probabilities of rare events may be overestimated.
in language modeling probability mass must be redistributed in order to account for the unseen n-grams.
contrasting
train_10367
The continuous space model has a much higher complexity than a backoff n-gram.
this can be heavily optimized when rescoring n-best lists, i.e.
contrasting
train_10368
The performance of NetSum with third-party features over NetSum without third-party features is statistically significant at 95% confidence.
netSum still outperforms the baseline without thirdparty features, leading us to conclude that Ranknet and simple position and term frequency features contribute the maximum performance gains, but increased ROUGE-1 and ROUGE-2 scores are a clear benefit of third-party features.
contrasting
train_10369
Therefore acc(categorizer(D_k)) will increase if features that contribute to the categorization appear in D_k.
acc(categorizer(D_k)) will decrease if no features that contribute to the categorization are in D_k.
contrasting
train_10370
Moreover, strategies incorporating more selection criteria often require more parameters to be set.
proper parametrization is hard to achieve in real-world applications.
contrasting
train_10371
We intentionally avoided using features such as semantic triggers or external dictionary look-ups because they depend a lot on the specific subdomain and entity types being used.
one might add them to fine-tune the final classifier, if needed.
contrasting
train_10372
Thus, in our committee we employ ME classifiers to meet requirement 1 (fast selection time cycles).
in the end we want to use the annotated corpora to train a CRF and will thus examine the reusability of such an ME-annotated AL corpus for CRFs (cf.
contrasting
train_10373
But even with a selector which has only orthographical features and a tester with many more features -which is actually quite an extreme example and a rather unrealistic scenario for a real-world application -AL is more efficient than random selection.
the limits of reusability are taking shape: on PBvar, the AL selection with sub3 converges with the random selection curve after about 100,000 tokens.
contrasting
train_10374
the design of features in both systems using a modular approach, such as (Poesio et al., 2005), where the decision on discourse-newness is taken beforehand, and those that integrate discourse-new classification with the actual resolution of coreferent bridging cases.
to earlier investigations (Markert and Nissim, 2005;Garera and Yarowsky, 2006), we provide a more extensive overview on features and also discuss properties that influence their combinability.
contrasting
train_10375
N-gram based language models (LM) estimate the probability of occurrence for a word, given a string of n-1 preceding words.
computers have only recently become powerful enough to estimate probabilities on a reasonable amount of training data.
contrasting
train_10376
We show that domain specific language and translation models also benefit statistical machine translation.
there are two problems with using domain specific models.
contrasting
train_10377
When the domain D is known, domain specific models can be created and used in the translation decoding process.
in many cases, domain D is unknown or changes dynamically.
contrasting
train_10378
When the domain is known in advance, it is usually expressible, for example it could be a topic that matches a human-defined category like "sport".
when the domain is delimited in an unsupervised manner, it is used only as a probabilistic variable and does not need to be expressed.
contrasting
train_10379
The points of difference are: • In the cluster language model, clusters are defined by human-defined regular expressions.
with the proposed method, clusters are automatically (without human knowledge) defined and created by the entropy reduction based method.
contrasting
train_10380
Furthermore, when P(D|f) is approximated as P(D) = D_λ, and the general translation model P(f|e) is used instead of the domain specific translation model P(f|e, D), this equation represents the process of translation using sentence mixture language models (Iyer et al., 1993) as follows: The points that differ from the proposed method are as follows: • In the sentence mixture model, the mixture weight parameters D_λ are constant.
in the proposed method, weight parameters P(D|f) are estimated separately for each sentence.
contrasting
train_10381
phrase-based versus rule-based systems, as shown in (Callison-Burch et al., 2006).
callison-Burch concluded that the Bleu score is reliable for comparing variants of the same machine translation system.
contrasting
train_10382
Here, we will use the translation candidate with the maximum Bleu score as pseudo reference to bias the system towards the Bleu score.
as pointed out in (Och, 2003), there is no reason to believe that the resulting parameters are optimal with respect to translation quality measured with the Bleu score.
contrasting
train_10383
This seems intuitive as the Bleu score uses n-grams up to length four.
one should be careful here: the differences are rather small, so it might be just statistical noise.
contrasting
train_10384
Table 2 demonstrates an example how these rules are applied to translate the foreign sentence "欧元/的/大幅/升值" into the English sentence "the significant appreciation of the Euro".
there are always other kinds of bilingual phrases extracted directly from training corpus, such as 欧元, the Euro and 的 大幅 升 值, 's significant appreciation, which can produce different candidate sentence translations.
contrasting
train_10385
There are only a few overt pronouns in English, Chinese, and many other languages, and state-of-the-art part-of-speech taggers can successfully recognize most of these overt pronouns.
zero pronouns in Chinese, which are not explicitly marked in a text, are hard to be identified.
contrasting
train_10386
Noun phrases provide much fruitful information for anaphoricity identification.
useful information such as gender, number, lexical string, etc, is not available in the case of zero pronouns.
contrasting
train_10387
For instance, the overrepresentation of "but" as the sentence-initial CC in the second and third subtree of that figure, is dealt with in (Schmid, 2006) by splitting the CC-category into CC/BUT and CC/AND.
also when a range of such transformations is applied, some subtrees are still greatly overrepresented.
contrasting
train_10388
This is the approach we take in this paper, which involves switching from PCFGs to PTSGs.
we cannot simply add overrepresented trees to the treebank PCFG; as is clear from figure 2, many of the overrepresented subtrees are in fact spurious variations of the same constructions (e.g.
contrasting
train_10389
Turney's (2006) extensive evaluation of LRA on SAT verbal analogy questions, for example, involves roughly ten thousand relational similarity computations 2 .
our system typically requires millions of relational similarity computations because every pair of extracted word-pairs needs to be compared.
contrasting
train_10390
Turney's (2006) Latent Relational Analysis is a corpus-based algorithm that computes the relational similarity between word-pairs with remarkably high accuracy.
lRA is focused solely on the relation-matching problem, and by itself is insufficient for lexical analogy generation.
contrasting
train_10391
Latent semantic analysis (LSA) (Landauer et al., 1998) has also been used to measure distributional distance with encouraging results (Rapp, 2003).
it too measures the distance between words and not senses.
contrasting
train_10392
Although German is not resourcepoor per se, Gurevych (2005) has observed that the German wordnet GermaNet (Kunze, 2004) (about 60,000 synsets) is less developed than the English WordNet (Fellbaum, 1998) (about 117,000 synsets) with respect to the coverage of lexical items and lexical semantic relations represented therein.
substantial raw corpora are available for the German language.
contrasting
train_10393
7 As in the monolingual distributional measures, the distance between two concepts is calculated by first determining their DPs.
in the crosslingual approach, a concept is now glossed by nearsynonymous words in an English thesaurus, whereas its profile is made up of the strengths of association with co-occurring German words.
contrasting
train_10394
Recent work has applied random walks to NLP tasks such as PP attachment (Toutanova et al., 2004), word sense disambiguation (Mihalcea, 2005;Tarau et al., 2005), and query expansion (Collins-Thompson and Callan, 2005).
to our knowledge, the literature in NLP has only considered using one stationary distribution per specially-constructed graph as a probability estimator.
contrasting
train_10395
The highest probability nodes in the table are common because both model variants share the same initial links.
the orders of the remaining nodes in the stationary distributions are different.
contrasting
train_10396
SRL feature extraction has relied on various syntactic representations of input sentences, such as syntactic chunks and full syntactic parses (Gildea and Jurafsky, 2002).
with features from shallow parsing, previous work (Gildea and Palmer, 2002;Punyakanok et al., 2005b) has shown the necessity of full syntactic parsing for SRL.
contrasting
train_10397
Our accuracy is most closely comparable to the 78.63% accuracy achieved on the full task by (Pradhan et al., 2005a).
(Pradhan et al., 2005a) uses some additional information since it deals with incorrect parser output by using multiple parsers.
contrasting
train_10398
As stated in their paper, "as a consequence, adjunct semantic roles (ARGM's) are basically absent from our test corpus"; and around 13% complement semantic roles cannot be found in etrees in the gold parses.
we recover all SRLs by exploiting more general paths in the LTAG derivation tree.
contrasting
train_10399
When using the parent-child relation, the modifiee can be determined based only on the relation between ID 1 and 5.
when using the ancestor-descendant relation, the modifiee cannot be determined based only on the relation between ID 1 and 5.
contrasting