id         string, length 7-12
sentence1  string, length 6-1.27k
sentence2  string, length 6-926
label      string, 4 classes
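Read each record below as one example: an id, a sentence pair, and a discourse label (every row excerpted here carries the label "contrasting"). As a minimal sketch of how such records might be represented and filtered once read into memory (the class and function names are illustrative, not tied to any particular loading library):

from dataclasses import dataclass
from typing import List

@dataclass
class Example:
    # One record of the listing below.
    id: str          # e.g. "train_10400" (7-12 characters)
    sentence1: str   # 6 to ~1.27k characters
    sentence2: str   # 6 to 926 characters
    label: str       # one of 4 classes; every row excerpted here is "contrasting"

def contrasting_only(rows: List[Example]) -> List[Example]:
    # Keep only the records whose discourse label is "contrasting".
    return [r for r in rows if r.label == "contrasting"]

# Usage sketch with a shortened version of the first record below.
rows = [Example("train_10400",
                "These conventional methods select the node given by argmax_j P(node_j | node_i) ...",
                "their processes are essentially the same as the process in PARENT METHOD.",
                "contrasting")]
print(len(contrasting_only(rows)))  # -> 1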
train_10400
These conventional methods select the node given by argmax_j P(node_j | node_i) as the parent node of node_i, setting the beam width to 1.
their processes are essentially the same as the process in PARENT METHOD.
contrasting
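The first sentence of this record describes greedy parent selection: with a beam width of 1, each node simply takes its single highest-scoring candidate parent. A minimal sketch of that argmax step, assuming the conditional probabilities are available as a dictionary (the names are illustrative, not taken from the cited method):

def select_parent(node_i, candidates, prob):
    # Beam width 1: return the candidate j that maximizes P(node_j | node_i).
    return max(candidates, key=lambda node_j: prob[(node_j, node_i)])

# Usage sketch: node "c" picks whichever of "a" / "b" is the more probable parent.
prob = {("a", "c"): 0.3, ("b", "c"): 0.7}
print(select_parent("c", ["a", "b"], prob))  # -> b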
train_10401
For the discussion above, we assume that every bit added as redundancy is correctly transmitted.
some of these added bits may be transmitted wrongly in the proposed method.
contrasting
train_10402
Figure 6 gives pseudocode for FindPCs in the case where the conjunction powerset serves as the non-terminals (adding a few more details like the start symbol), with production rules such as {P2} → P2 {P3, P5}.
for our example, the chain of patterns applied, P1, P2, P3, P4, $, could generate a pattern tree that is incompatible with the original tree.
contrasting
train_10403
Figure 3 illustrates the different behaviour of the two metrics: Table 6 shows that all error types inserted into Sentence 9 in our test set result in the same evaluation score for the PARSEVAL metric, while the LA metric provides a more discriminative treatment of PP attachment errors, label errors and span errors for the same sentence (Table 6).
the differences in the LA results are only indirectly caused by the different error types.
contrasting
train_10404
In traditional approaches for document classification, in many cases, documents are classified independently.
the Wikipedia articles are hypertexts and they have a rich structure that is useful for categorization.
contrasting
train_10405
This parameter tying was introduced by Ghamrawi and McCallum (2005).
we did not get any improved accuracy.
contrasting
train_10406
The results vary depending on the ordering strategy of local classification.
to iterative classification methods, collective classification methods directly estimate the most likely assignments.
contrasting
train_10407
Most previous inference algorithms for DP-based models involve sampling (Escobar and West, 1995; Teh et al., 2006).
we chose to use variational inference (Blei and Jordan, 2005), which provides a fast deterministic alternative to sampling, hence avoiding issues of diagnosing convergence and aggregating samples.
contrasting
train_10408
Many methods have been proposed for solving this problem by automatically extracting gazetteers from large amounts of texts (Riloff and Jones, 1999; Thelen and Riloff, 2002; Etzioni et al., 2005; Shinzato et al., 2006; Talukdar et al., 2006; Nadeau et al., 2006).
these methods require complicated induction of patterns or statistical methods to extract high-quality gazetteers.
contrasting
train_10409
For example, Beckham is redirected to Beckham (disambiguation) in the above example.
it is also possible that Beckham redirects to one of the articles (e.g., David Beckham).
contrasting
train_10410
The results indicate that structures in Wikipedia are suited for knowledge extraction.
the results also indicate that there is room for improvement, considering that the effects of gaz_c and wp_c were similar, while the matching rate was greater for wp_c. An issue that we should address is the disambiguation of ambiguous entities.
contrasting
train_10411
For example, the paragraph titled "Music of Scotland" (shown below in Wikitext) in the Wikipedia article on Scotland contains an enumeration of entities, which can be labeled. Lexicosyntactic patterns have been employed successfully in the past (e.g., Hearst, 1992; Roark and Charniak, 1998; Cederberg and Widdows, 2003), and this type of tag extraction is still a promising direction for the future.
the brute force approach we tried -of indiscriminately tagging the entities of enumerations of four or more entities -was found to introduce a large amount of noise into the system in our development experiments.
contrasting
train_10412
First, even seemingly good patterns can produce false hits due to metaphor and idiomatic expressions.
by restricting their use to relevant regions of text, we could avoid such false positives.
contrasting
train_10413
Existing systems generally require rules/patterns to recognize a context in which a weapon is explicitly linked to an event or its consequences (e.g., "attack with <np>", or "<np> caused damage").
weapons are not always directly linked to an event in text, but they may be inferred through context.
contrasting
train_10414
In contrast, if a document is relevant to the IE task, then there must be at least one sentence that contains relevant information.
most documents contain a mix of both relevant and irrelevant sentences.
contrasting
train_10415
We will refer to such reliable patterns as Primary Patterns.
patterns that are not necessarily reliable and need to be restricted to relevant regions will be called Secondary Patterns.
contrasting
train_10416
This demonstrates that our sentence classifier is having the desired effect.
observe that the precision gain comes with some loss in recall points.
contrasting
train_10417
These results are particularly noteworthy because AutoSlog-TS requires a human to manually review the patterns and assign event roles to them.
our approach is fully automated.
contrasting
train_10418
For example, "John's wife" is enough to determine the relationship b etween "John" and "John's wife" in the sentence "John's wife got a good job… " as shown in Figure 1(a) .
sPT is not enough in the coordinated cases, e.g.
contrasting
train_10419
Finally, note that a number of statistical MT systems make use of source language syntax in transducer-style approaches; see (Lin, 2004; Ding and Palmer, 2005; Quirk et al., 2005; Liu et al., 2006; Huang et al., 2006).
to the preprocessing approach, they attempt to incorporate syntax directly into the decoding stage.
contrasting
train_10420
Such distortions in the word reordering will be quite difficult for the word or phrase-based alignment model to capture.
with the application of a reordering rule to reposition the child CP after its sibling NP under a parent NP, and the PP VP reordering rule for VP introduced previously, the sentence can be easily transformed into "French delegation participate 8th handicap people Winter Olympics hold at US Salt Lake City," a sentence whose word order is much closer to that of English.
contrasting
train_10421
The idea is to compare the two systems given the same type of input: if the reordered system learned a better phrase table, then it might outperform the baseline system on un-reordered inputs despite the mismatch; on the other hand, if the baseline system learned a better phrase table, then it might outperform the reordered system on reordered inputs despite the mismatch.
the results in Table 6 did not settle our question: the reordered system performed worse than the baseline on unreordered data, while the baseline system performed worse than the reordered system on reordered data, both of which can be explained by the mismatched conditions between training and testing.
contrasting
train_10422
In our experiments, the phrase-based MT system uses an un-lexicalized reordering model, which might make the effects of the syntactic reordering method more pronounced.
in an early experiment submitted to the official NIST 2006 MT evaluation, the reordered system also improved the BLEU score substantially (by 1.34 on NIST 2006 data) over a phrase-based MT system with lexicalized reordering models.
contrasting
train_10423
We could transform tree (1) directly into tree (4) without bothering to generate tree (3).
skipping tree (3) will make it difficult to apply the EM algorithm to choose a better binarization for each tree node, since tree (4) can neither be classified as left binarization nor as right binarization of the original tree (1) - it is the result of the composition of two left-binarizations.
contrasting
train_10424
They relied on a variant of a voted perceptron, and achieved significant improvements.
their work was limited to reranking, thus the improvement was relative to the performance of the baseline system, whether or not there was a good translation in a list.
contrasting
train_10425
2005; Dang and Palmer, 2005), when enough labeled training data is available.
creating a large sense-tagged corpus is very expensive and time-consuming, because these data have to be annotated by human experts.
contrasting
train_10426
Because it is difficult to know when the classifier reaches maximum effectiveness, previous work used a simple stopping condition when the training set reaches a desirable size.
in fact it is almost impossible to predefine an appropriate size of desirable training data for inducing the most effective classifier.
contrasting
train_10427
In recent years, there have been attempts to apply active learning for word sense disambiguation .
to our best knowledge, there has been no such attempt to consider the class imbalance problem in the process of active learning for WSD tasks.
contrasting
train_10428
Previous work (Estabrooks et al., 2004) reported that under-sampling of the majority class (predominant sense) has been proposed as a good means of increasing the sensitivity of a classifier to the minority class (infrequent sense).
in our active learning experiments, under-sampling is apparently worse than ordinary, over-sampling and our BootOS.
contrasting
train_10429
An approach based on pairwise similarities, which encourages nearby data points to have the same class label, has been proposed as a way of incorporating unlabeled data discriminatively (Zhu et al., 2003; Altun et al., 2005; Brefeld and Scheffer, 2006).
this approach generally requires joint inference over the whole data set for prediction, which is not practical as regards the large data sets used for standard sequence labeling tasks in NLP.
contrasting
train_10430
The gradient of Equation (1) can be written in terms of the model expectation E_{p(Y|x,λ)}[·]; calculating this expectation, as well as the partition function Z(x), is not always tractable.
for linear-chain CRFs, a dynamic programming algorithm similar in nature to the forward-backward algorithm in HMMs has already been developed for an efficient calculation (Lafferty et al., 2001).
contrasting
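The display that should follow "can be written in terms of the model expectation" is not reproduced in this record. As a hedged reconstruction (the standard linear-chain CRF gradient, not necessarily the paper's exact Equation (1)), the log-likelihood gradient for a weight λ_k is the difference between empirical and expected feature counts:

\frac{\partial L}{\partial \lambda_k} = \sum_i \Big( F_k(\mathbf{y}^{(i)}, \mathbf{x}^{(i)}) - E_{p(\mathbf{Y} \mid \mathbf{x}^{(i)}, \lambda)}\big[ F_k(\mathbf{Y}, \mathbf{x}^{(i)}) \big] \Big)

where F_k sums feature k over the positions of a sequence; both this expectation and Z(x) are what the forward-backward recursions mentioned in the second sentence compute efficiently.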
train_10431
In Sequential Viterbi Models, such as HMMs, MEMMs, and Linear Chain CRFs, the type of patterns over output sequences that can be learned by the model depend directly on the model's structure: any pattern that spans more output tags than are covered by the models' order will be very difficult to learn.
increasing a model's order can lead to an increase in the number of model parameters, making the model more susceptible to sparse data problems.
contrasting
train_10432
As a simple illustration of this, when encountering the symbol RMNH in a field book entry, this most likely indicates the start of a new (registration number) segment.
in the database, on which all language models are trained, RMNH never occurs as a symbol in the registration number column; it does occur a few times in the column for special remarks but never at the start of the text.
contrasting
train_10433
For example, the comma between Las Claritas and 9-VI-1978 only serves to separate the Place segment from the Collection date segment; the comma is not copied to the database.
commas do occur field-internally in the database, especially in longer fields such as SPECIAL REMARKS.
contrasting
train_10434
Supervised machine learning approaches have been successfully applied for creating systems capable of performing this task.
the supervised nature of these approaches requires large amounts of annotated training data, the acquisition of which is often a laborious and time-consuming process.
contrasting
train_10435
Experiments with this approach pointed out that truly random concatenation of database fields results in weak performance; a rather simple baseline approach, which only matches substrings of a field book entry with the contents of the database, leads to better results.
if a small amount of annotated field book entries is available (in this study, 10 entries turned out to be sufficient), one can estimate field ordering probabilities that can be used to generate more realistic training data from the database.
contrasting
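The second sentence of this record describes estimating field-ordering statistics from a handful of annotated entries and then generating more realistic synthetic training data from the database. A small sketch of the estimation step, assuming annotated entries are given as lists of field names (all names are illustrative):

from collections import Counter, defaultdict

def field_order_bigrams(annotated_entries):
    # Estimate P(next_field | field) from a few annotated field-book entries.
    counts = defaultdict(Counter)
    for fields in annotated_entries:
        for a, b in zip(fields, fields[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()} for a, c in counts.items()}

# Usage sketch with two toy entries; the estimated probabilities could then drive
# the order in which database fields are concatenated into synthetic entries.
annotated = [["TAXON", "PLACE", "DATE"], ["TAXON", "DATE", "PLACE"]]
print(field_order_bigrams(annotated)["TAXON"])  # -> {'PLACE': 0.5, 'DATE': 0.5}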
train_10436
Most of those approaches rely on the analysis of document structure (reflected in, for example, html tags), from which record templates are derived.
this approach does not apply to unstructured text.
contrasting
train_10437
This matches our intuition: without generalizing to semantic language models, higher order language models will be relatively sparse and contain much noise.
when taking into account the semantic features, we found that bigram and trigram semantic language model features outperformed unigrams.
contrasting
train_10438
Out of the models they describe, the HMM models are the most expressive models that can compute posterior probabilities using the forward-backward algorithm.
unlike sequence alignments, there are no ordering constraints in word alignments, and the alignments are many-to-many as opposed to one-to-one.
contrasting
train_10439
Blunsom & Cohn (2006) use Viterbi decoding to find an alignment of two sentences given a trained CRF model, a * argmax a P Λ (a|C i , C j ).
the posterior probabilities of the labels at each position can be calculated as well using the forward-backward algorithm, where α_l and β_l are the forward and backward vectors that are computed with the forward-backward algorithm (Lafferty et al., 2001).
contrasting
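The display that should follow "using the forward-backward algorithm" is missing from this record. A standard reconstruction of the per-position posterior in terms of the forward and backward vectors (hedged; the authors' exact notation may differ) is:

P(y_l = y \mid \mathbf{x}) = \frac{\alpha_l(y)\, \beta_l(y)}{Z(\mathbf{x})}, \qquad Z(\mathbf{x}) = \sum_{y} \alpha_L(y)

where L is the sequence length and Z(x) is the partition function.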
train_10440
Other utility functions, such as Dice, Jaccard and Hamming, can be used as U_{set agreement}.
only metric-based utility functions will result in a metric-based U_{AMA} utility function.
contrasting
train_10441
We tried numerous features that compare MeSH terms based on their distance in the ontology, and other features that indicate whether a word is part of a longer term.
none of these features were selected for the final system.
contrasting
train_10442
For the Viterbi alignments, only three results could be generated (one for each symmetrization method).
since the refined method produced a very similar result to the union, only the union is displayed in the figure.
contrasting
train_10443
Our experiments were limited by the size of the labeled data.
the results support the theoretical predictions, and demonstrate the advantage of posterior-decoding over Viterbi decoding.
contrasting
train_10444
In principle, the predictive accuracy of the language model can be improved by increasing the order of the n-gram.
doing so further exacerbates the sparse data problem.
contrasting
train_10445
We focused on machine translation when describing the queued language model access.
it is general enough that it may also be applicable to speech decoders and optical character recognition systems.
contrasting
train_10446
For this data set, we also see an improvement when using a part-of-speech language model -the BLEU score increases from 18.19% to 19.05% -consistent with the results reported in the previous section.
moving from a surface word translation mapping to a lemma/morphology mapping leads to a deterioration of performance to a BLEU score of 14.46%.
contrasting
train_10447
In particular, since no model of the training material is being learned, the training corpus needs to be stored in order to be queried.
to k-NN, however, the search for closest neighbors does not require any distance, but instead relies on relational similarities.
contrasting
train_10448
The questions involved range from reconstruction of ancient word forms, to the elucidation of phonological drift processes, to the determination of phylogenetic relationships between languages.
this problem has received relatively little attention from the computational community.
contrasting
train_10449
Like our approach, they use a probabilistic edit model as a formalization of the phonological process.
they do not consider the question of reconstruction or inference in multi-node phylogenies, nor do they present a learning algorithm for such models.
contrasting
train_10450
Incorrect prior splits can needlessly fragment training data and incorrect prior tying can limit the model's expressivity.
correct assumptions can increase the efficiency of the learner.
contrasting
train_10451
Despite our structural simplicity, we outperform state-tied triphone systems like Young and Woodland (1994), a standard baseline for this task, by nearly 2% absolute.
we fall short of the best current systems.
contrasting
train_10452
If a key function is to serve as a filter, matching names must be members of the same equivalence class.
no single partition can produce equivalence classes that both include all matching pairs and exclude all non-matching pairs.
contrasting
train_10453
The methods presented above are mostly efficient and always exact.
for models that take global properties of the tree into account, they cannot be applied.
contrasting
train_10454
For all four languages, the same treebanks were used, which allows a comparison of the results.
in some cases the size of the training set changed, and at least one treebank, Turkish, underwent a thorough correction phase.
contrasting
train_10455
Last year's test set also had an average sentence length of 5.9.
this year, the average sentence length is 7.5 tokens, which is a significant increase.
contrasting
train_10456
In this domain, it seems more feasible to use general language resources than for the chemical domain.
the results prove that the extra effort may be unnecessary.
contrasting
train_10457
The choice of the best translation is made based on the combination of the probabilities and feature weights, and much discussion has been made of how to make the estimates of probabilities, how to smooth these estimates, and what features are most useful for discriminating among the translations.
a cursory glance at phrasetables produced often suggests that many of the translations are wrong or will never be used in any translation.
contrasting
train_10458
However, a cursory glance at phrasetables produced often suggests that many of the translations are wrong or will never be used in any translation.
most obvious ways of reducing the bulk usually lead to a reduction in translation quality as measured by BLEU score.
contrasting
train_10459
Tight restrictions on phrase length curtail the power of phrase-based models.
some promising engineering solutions are emerging.
contrasting
train_10460
It is perhaps surprising that such a small sample size works as well as the full data.
recent work by Och (2005) and Federico and Bertoldi (2006) has shown that the statistics used by phrase-based systems are not very precise.
contrasting
train_10461
Any linear algorithm will suffice.
for reasons described in §5.3, our other collocation algorithms depend on sorted sets, so we use a merge algorithm.
contrasting
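The record above motivates a merge-style pass over sorted occurrence lists for collocation. A minimal sketch of such a pass, checking whether occurrences of two subphrases fall within an elastic window (variable names are illustrative, not from the cited work):

def collocated(pos_a, pos_b, window):
    # Merge-style scan over two sorted position lists: report pairs where an
    # occurrence from pos_b follows an occurrence from pos_a within `window` tokens.
    pairs, j = [], 0
    for i in pos_a:
        while j < len(pos_b) and pos_b[j] <= i:
            j += 1
        k = j
        while k < len(pos_b) and pos_b[k] - i <= window:
            pairs.append((i, pos_b[k]))
            k += 1
    return pairs

print(collocated([3, 10], [5, 30], window=4))  # -> [(3, 5)]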
train_10462
Double binary search requires that its input sets be in sorted order.
the suffix array returns matchings in lexicographical order, not numeric order.
contrasting
train_10463
They show that this approach leads to some time savings for phrase search, although the gains are relatively modest since the search for contiguous phrases is not very expensive to begin with.
the potential savings in the discontiguous case are much greater.
contrasting
train_10464
In our baseline algorithm, we would search for ab and cd, and then perform a computation to see whether these subphrases were collocated within an elastic window.
if we instead use abXc and bXcd as the basis of the computation, we gain two advantages.
contrasting
train_10465
In our Python implementation this takes several minutes, which in principle should be amortized over the cost for each sentence.
just as Zens and Ney (2007) do for phrase tables, we could compile our data structures into binary memory-mapped files, which can be read into memory in a matter of seconds.
contrasting
train_10466
Our work enables this in hierarchical phrase-based models.
we are interested in additional applications.
contrasting
train_10467
Babelfish typically uses a bilingual dictionary that is either manually compiled or learned from a parallel corpus.
such dictionaries often have insufficient coverage of proper names and technical terms, leading to poor translation performance due to the out-of-vocabulary (OOV) problem.
contrasting
train_10468
Integrating out the θ_d's and z_dn's yields the probability p(D|α, β) of the corpus. Unfortunately, it is intractable to directly solve the posterior distribution of the hidden variables given a document, namely p(θ, z|w, α, β).
(Blei et al., 2003) have shown that by introducing a set of variational parameters, γ and φ, a tight lower bound on the log likelihood can be found by an optimization procedure in which γ (a Dirichlet parameter) and φ (multinomial parameters) are the free variational parameters.
contrasting
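The display dropped after "the probability p(D|α, β) of the corpus" can be reconstructed from the standard LDA formulation of Blei et al. (2003) (a hedged reconstruction, not necessarily the row's original rendering):

p(D \mid \alpha, \beta) = \prod_{d=1}^{M} \int p(\theta_d \mid \alpha) \left( \prod_{n=1}^{N_d} \sum_{z_{dn}} p(z_{dn} \mid \theta_d)\, p(w_{dn} \mid z_{dn}, \beta) \right) d\theta_d

where M is the number of documents and N_d the length of document d.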
train_10469
This probably contributed to its inferior result.
we found that the best result comes from combining all the corpora together with K = 60 and L = 40.
contrasting
train_10470
Nicholson and Baldwin (2006) investigated the prediction of the inherent semantic relation of a given compound nominalization, using the confidence interval as a statistical measure.
looked at MWEs in general, investigating the semi-automated detection of MWE candidates in texts using error mining techniques and validating them using a combination of the World Wide Web as a corpus and some statistical measures.
contrasting
train_10471
With good statistical measures, we are able to distinguish genuine MWE from non-MWEs among the n-gram candidates.
from the perspective of grammar engineering, even with a good candidate list of MWEs, great effort is still required in order to incorporate such word units into a given grammar automatically and in a precise way.
contrasting
train_10472
By acquiring new lexical entries for the MWEs candidates validated by the statistical measures, the grammar coverage was shown to improve significantly.
no further investigation on the parser accuracy was reported there.
contrasting
train_10473
The annotation indicates a flat structure, where every token is headed by "Department".
a similar BIO phrase has a very different structure, pursuant to the BIO guidelines.
contrasting
train_10474
Example domains of predictive data mining include earthquake prediction, air temperature prediction, foreign exchange prediction, and energy price prediction.
predictive data mining is only feasible when a large amount of structured numerical data (e.g., in a database) is available.
contrasting
train_10475
One simple approach could be a system (see NGR system in Section 5) trained by a machine learning technique using n-gram features and classifying a message into multiple classes (e.g., NDP, Liberal, or Progressive).
we develop a more sophisticated algorithm and compare its result with several baselines, including the simple n-gram method.
contrasting
train_10476
However, these approaches do not address the task of extracting aspect-of relations and make use of syntactic features only for labeling opinion holders and topics.
as we describe below, we find a significant overlap between aspect-evaluation relation extraction and aspect-of relation extraction and apply the same approach to both tasks, gaining the generality of the model.
contrasting
train_10477
In the aspect-evaluation relation extraction, we evaluated the results against the human annotated gold-standard in a strict manner.
according to our error analysis, some of the errors can be regarded as correct for some real applications.
contrasting
train_10478
In the case of inter-sentential relations, our model tends to rely heavily on the statistical clues, because syntactic pattern features cannot be used.
our current method for estimating co-occurrence distributions is not sophisticated, as we discussed above.
contrasting
train_10479
To be exact, the doubly underlined part is a polar clause.
it is called a polar sentence for consistency with the polar sentences extracted by using layout structures.
contrasting
train_10480
Our results show that we can determine case with an error rate of 4.2%.
our results would have been impossible without a deeper understanding of the linguistic phenomenon of case and a transformation of the representation oriented towards this phenomenon.
contrasting
train_10481
Finally, the dual and masculine sound plural do not express nunation.
the feminine sound plural marks nunation explicitly, and all of its case morphemes are written only as diacritics. Traditional Arabic grammar makes a distinction between verbal clauses
contrasting
train_10482
We observed that no matter which value the smoothing parameter takes, there are only about 10,000 non-zero features finally selected by Collins' original method.
the two new methods select substantially more features, as shown in Table 5.
contrasting
train_10483
Degradation of accuracy in a new domain can be overcome by developing an annotated corpus for that specific domain, e.g., as in the Biology domain.
this solution is feasible only if there is sufficient interest in the use of NLP technology in that domain, and there are sufficient funding and resources.
contrasting
train_10484
However, this solution is feasible only if there is sufficient interest in the use of NLP technology in that domain, and there are sufficient funding and resources.
our approach is to use existing resources, and rapidly develop taggers for new domains without spending the time and effort to develop annotated data.
contrasting
train_10485
The above method forms the basis for our determination of the set of tags that are to be associated with the domain words.
the actual tag to be assigned for an occurrence in text depends on the context of use.
contrasting
train_10486
In the Bin vector associated with "creation", all four of these features will get the value one, assuming that the four corresponding words are found in Text-Lex.
assuming "creatory" is not found in Text-Lex, the feature "-ion+ory" would get a zero value.
contrasting
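The feature described in this record is a binary test: strip one suffix, attach another, and check whether the resulting word occurs in the text lexicon. A minimal sketch under that reading (function and lexicon names are illustrative):

def suffix_swap_feature(word, old_suffix, new_suffix, text_lex):
    # 1 if replacing old_suffix by new_suffix yields a word attested in text_lex, else 0.
    if not word.endswith(old_suffix):
        return 0
    candidate = word[: -len(old_suffix)] + new_suffix
    return int(candidate in text_lex)

# Usage sketch mirroring the record: "-ion+or" fires for "creation" (-> "creator"),
# while "-ion+ory" does not, because "creatory" is not in the lexicon.
text_lex = {"create", "creative", "creator"}
print(suffix_swap_feature("creation", "ion", "or", text_lex))   # -> 1
print(suffix_swap_feature("creation", "ion", "ory", text_lex))  # -> 0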
train_10487
This method does not use POS tagged corpora (although in the reported experiment the initial "perfect" clusters were obtained from the Brown corpus using the POS tag information).
we use the POS tagged WSJ corpus to assist in the induction of tag information for our lexicon.
contrasting
train_10488
Our use of the kNN method to identify tags and their probabilities for words was inspired by this work.
their use of kNN method was in the context of supervised learning.
contrasting
train_10489
Since our motivation, on the other hand, is to move to a new domain, we didn't consider detection of similarity on the basis of word contexts.
we have shown that the approach of identifying words on the basis of suffixation patterns and using them as exemplars can be applied effectively even when the domain of application is substantially different from the text (the WSJ corpus) providing the exemplars.
contrasting
train_10490
As shown in (Cucerzan and Yarowsky, 2000), such information can provide considerable information to build a lexicon that associates possible tags with words.
we use this information only to provide the initial values.
contrasting
train_10491
Time did not allow a full-scale experiment, but for all languages except Catalan and Hungarian, the bidirectional parsing method outperformed the unidirectional methods when trained on a 20,000-word subset.
the gain of using bidirectional parsing may be more obvious when the treebank is small.
contrasting
train_10492
We performed a beam search by carrying out a K-best search through the set of possible sequences of actions as proposed by Johansson and Nugues (2006).
this did not increase the accuracy.
contrasting
train_10493
The ESDU algorithm also fared better with the SVO languages, except for Italian.
Greenberg's basic word order typology cannot shed enough light on the performance of the three parsing algorithms.
contrasting
train_10494
Due to limited participation time, we only applied the first-order decoding parsing algorithm in CoNLL-2007.
our algorithm can be used for second-order parsing.
contrasting
train_10495
Our system makes no provisions for non-projective edges.
to previous work, we aim to learn labelled dependency trees in one fell swoop.
contrasting
train_10496
It is no longer rare to see dependency relations used as features, in tasks such as machine translation (Ding and Palmer, 2005) and relation extraction (Bunescu and Mooney, 2005).
there is one factor that prevents the use of dependency parsing: sparseness of annotated corpora outside Wall Street Journal.
contrasting
train_10497
We therefore defined the positive lower bound (10^-10) and the negative upper bound (-10^-10) to eliminate values that tend to be zero.
the SVM is a binary classifier which only recognizes true or false.
contrasting
train_10498
Therefore the parser develops a bias against assigning these roles in general, and recall suffers.
precision is very good, thanks to the rich context in which the roles are assigned.
contrasting
train_10499
The parsing formalism we use is related to the tree adjoining grammar (TAG) formalisms described in (Chiang, 2003;Shen and Joshi, 2005).
an important difference of our work from this previous work is that our formalism is defined to be "splittable", allowing use of the efficient parsing algorithms of Eisner (2000).
contrasting