id: string (length 7-12)
sentence1: string (length 6-1.27k)
sentence2: string (length 6-926)
label: string (4 classes)
train_12600
To ease the training of the maximum entropy model, bootstrapping is used to help supervised learning.
to avoid error propagation from word segmentation and NER, we directly extract Chinese NEs and make the alignment from plain text without word segmentation.
contrasting
train_12601
So far a lot of research has been conducted in the field of machine translation and knowledge acquisition, including both statistical approaches (Cherry and Lin, 2003;Probst and Brown, 2002;Wang et al., 2002;Och and Ney, 2000;Melamed, 2000;Vogel et al., 1996) and symbolic approaches (Huang and Choi, 2000;Ker and Chang, 1997).
these approaches do not work well on the task of NE alignment.
contrasting
train_12602
They only carry out the alignment between words and do not consider the case of complex phrases like some multi-word NEs.
iBM Models allow at most one word in the source language to correspond to a word in the target language (Koehn et al., 2003;Marcu, 2001).
contrasting
train_12603
YASMET requires supervised learning for the training of the maximum entropy model.
it is not easy to acquire a large annotated training set.
contrasting
train_12604
Surely, a good translation has to adequately capture the meaning of the foreign original.
pinning down all the nuances is hard, and often differences in emphasis are introduced based on the interpretation of the translator.
contrasting
train_12605
The obvious next step would be to learn from the entire neighborhood -similar to KNN classification.
due to the sparsity of the data and because different groups of neighbors capture different aspects of the test question, we choose to cluster the neighborhood instead.
contrasting
train_12606
These particular content features are intuitive and highly indicative of a correct answer.
in sparse clusters, the content features have less information content and are more vague.
contrasting
train_12607
In the context of Web surfing, it is unusual for a page to include multiple or partial links to another page, and hence the original PageRank definition for graph-based ranking assumes unweighted graphs.
in our model the graphs are built from natural language texts, and may include multiple or partial links between the units (vertices) that are extracted from text.
contrasting
train_12608
The summary evaluation performed in SUMMAC (Mani et al., 1999) followed that strategy.
extrinsic evaluations are time-consuming to set up and can thus not be used for the day-to-day evaluation needed during system development.
contrasting
train_12609
One of us then used the Kuwait consensus agreement to annotate the 16 machine summaries for that text which were created by different participants in DUC-2002, an annotation which could be done rather quickly.
a small number of missing factoids were detected, for instance the (incorrect) factoid that Saudi Arabia was invaded, that the invasion happened on a Monday night, and that Kuwait City is Kuwait's only sizable town.
contrasting
train_12610
Of course, the suggestion might be made that the system ranking will most likely also be stabilised by scoring summaries for more texts, even with such a low (or even lower) N per text.
in that case, the measure only yields information at the macro level: it merely gives an ordering between systems.
contrasting
train_12611
In the preliminary experiment, a weight-pushing operation (Mohri and Riley, 2001) was also effective for deleting negative transitions of our full-expansion models.
pushing causes an imbalance of weights among paths if the WFST is not deterministic.
contrasting
train_12612
In this paper, we applied the state merging method to a fully-expanded WFST and showed the effectiveness of this approach.
the state merging method itself is general and independent of the fully-expanded WFST.
contrasting
train_12613
In other words, the kth sentence chosen is the one with the largest index value in the kth right singular vector of matrix V^T. The summarization method proposed by Gong and Liu has some disadvantages as well, the main one being that it is necessary to use the same number of dimensions as the number of sentences we want to choose for a summary.
the higher the number of dimensions of the reduced space, the less significant the topics we take into the summary.
contrasting
train_12614
Figure 2 illustrates the tectogrammatical tree structure of the example sentence. In the PDT, the intonation center is not annotated.
the annotators were instructed to use their judgement as to where the IC would be if they uttered the sentence.
contrasting
train_12615
They are shown in Figure 3, which shows that after using only 1% of the training data (4,947 instances), the classifiers already perform very well, and adding more training data improves the results only slightly.
for RIPPER, adding more data causes a decrease in performance and, as we mentioned earlier, even makes it impossible to build a classifier.
contrasting
train_12616
It is worth noting that all configurations of this algorithm are computationally intensive, mainly because of Step 2.
since our aim is to provide transcripts for browsing audio recordings, we do not have to correct errors in real time.
contrasting
train_12617
In the other methods, it is used inside the co-reference module of the IE pipeline, to find the (single) locally-best state.
other textual features of the state candidate should contribute to establishing the relations to a location mention, besides the raw distance.
contrasting
train_12618
The simplest approach is to treat each alignmentsystem output as a separate feature upon which we build a classifier.
when only a few alignment systems are combined, this feature space is not sufficient to distinguish between instances.
contrasting
train_12619
In particular, it is a slightly better pair according to the Dice value than the correct the-les.
the latter alignment has the advantage that major-grands follows it.
contrasting
train_12620
Of course, symmetrizing Model 4 by intersecting alignments from both directions does yield an improved AER of 6.9, so, while our model does do surprisingly well with cheaply obtained count-based features, Model 4 does still outperform it so far.
our model can [...] It is important to note that while our matching algorithm has no first-order effects, the features can encode such effects in this way, or in better ways, e.g.
contrasting
train_12621
In this method, we merely run our matching algorithm and update weights based on the difference between the predicted and target matchings.
the performance of the average perceptron learner on the same feature set is much lower, only 8.1, not even breaking the AER of its best single feature (the intersected Model 4 predictions).
contrasting
train_12622
For example, we estimate the conditional probability of linking not to ne...pas by considering the number of sentence pairs in which not occurs in the English sentence and both ne and pas occur in the French sentence, compared to the number of times not is linked to both ne and pas in pairs of corresponding sentences.
when we make this estimate in the CLP-based model, we do not count a link between not and ne...pas if the same instance of not, ne, or pas is linked to any other words.
contrasting
train_12623
As we will discuss in Section 4, the majority of our coreference-specific features are over pairs of chunks: the proposed new mention and an antecedent.
since in general a proposed mention can have well more than one antecedent, we are left with a decision about how to combine this information.
contrasting
train_12624
Then sentence set recall is R = M/A and precision is P = M/S.
set-based recall and precision do not average well, especially when the assessor set sizes A vary widely across topics.
contrasting
train_12625
There are similar patterns to those seen in the relevant sentence data, with the 2003 assessors clearly being more liberal in judging.
the pattern is reversed for topic types, with more sentences being considered relevant and novel for the opinion topics than for the event topics.
contrasting
train_12626
The majority of the documents in the data set are relevant, and so many of the topic terms are present throughout the documents.
the assessor was often looking for a finer-grained level of information than what exists at the document level.
contrasting
train_12627
We observe that the scores for a small fraction of new stories that were initially missed (between scores 0.8 and 1) are decreased by the model-based NED system while a larger fraction (between scores 0.1 and 0.4) is also decreased by a small amount.
the major impact of using SVM modelbased NED systems appears to be in detecting old stories.
contrasting
train_12628
We convert a tree to a feature vector. Then we use a procedure that chooses the DP procedure or the inner product procedure depending on maliciousness; this procedure returns the same value as the original calculation.
naively, if |V(T_i)| (the number of feature subtrees such that #F_i(T_i) ≠ 0) is small enough, we can expect a speed-up because the cost of calculating the inner product is O(|V(T_i)|). Since |V(T_i)| might increase as the training set becomes larger, we need a way to scale the speed-up to large data.
contrasting
train_12629
However, the speed-up might be smaller since without node marks the number of subtrees increases while the DP procedure becomes simpler.
the FREQTM conversion for marked labeled ordered trees might be made faster by exploiting the mark information for pruning.
contrasting
train_12630
The results showed that the LCS-based measure is comparable to Ngram-based automatic evaluation methods.
these methods tend to be strongly influenced by word order.
contrasting
train_12631
ROUGE-L is better than both ROUGE-3 and ROUGE-4 but worse than ROUGE-1 or ROUGE-2.
the correlation coefficients do not change very much with respect to l.
contrasting
train_12632
We limited our first study reported here to linear classifiers, in which conversion can be performed by simple ordering according to the score of each document.
approaching the problem as "learning how to order things" allowed us to design our sampling and training mechanisms in a novel and, we believe, more powerful way.
contrasting
train_12633
Similarly to the global weighting, we assigned each occurrence of a term to its local weighting bin l, but this time by simply capping tf at the total number of local weighting bins |L|: l = min(tf, |L|). Note that this particular representation does not really need rounding, since tf is already a positive integer.
in a more general case, tf can be normalized by document length (as is done in BM25 and language models) and thus local weighting would become a continuous function.
contrasting
train_12634
The WFST composition gives the word-to-word alignments between the sentences.
to obtain the phrase alignments, we need to construct additional FSTs not described here.
contrasting
train_12635
Our goal is to estimate the reordering model parameters P (b|x, u) for each phrase-pair (x, u) in this inventory.
when translating a given test set, only a subset of the phrase-pairs is needed.
contrasting
train_12636
The WtoP alignment model includes the first two of these.
distortion, which allows hypothesized words to be distributed throughout the target sentence, is difficult to incorporate into a model that supports efficient DP-based search.
contrasting
train_12637
Notably, simply increasing the amount of bitext used in training need not improve AER.
larger aligned bitexts can give improved phrase pair coverage of the test set.
contrasting
train_12638
A block δ [] invokes both the inner and outer generations simultaneously in Bracket Model A (BM-A).
the generative process is usually more effective in the inner part as δ [] is generally small and accurate.
contrasting
train_12639
On the one hand, the ability to provide fully correct phrasal alignments is impaired by the occurrence of high-frequency function words and/or words that are not exact translations of the words in the other language.
we have observed that most alignment systems are capable of providing partially correct phrasal alignments.
contrasting
train_12640
So, they potentially lead to better solutions.
the error rate of a finite set of training samples is usually a step function of model parameters, and cannot be easily minimized.
contrasting
train_12641
Second, the number of candidate features is not too large, and these features are not highly correlated.
neither of the assumptions holds in our case.
contrasting
train_12642
When the two domains are very similar to the background domain (such as Yomiuri), discriminative methods outperform MAP by a large margin.
the margin is smaller when the two domains are substantially different (such as Encarta and Shincho).
contrasting
train_12643
In the absence of robust mechanisms for assessing the reliability of the decoded inputs, the system will take the misunderstanding as fact and will act based on invalid information.
in a nonunderstanding the system fails to obtain an interpretation of the input altogether.
contrasting
train_12644
The task therefore has to be fixed, known in advance: for instance the slots that the system queries the user about (in a slot-filling system) are fixed.
in the RavenClaw architecture, the dialog task tree (e.g.
contrasting
train_12645
In the past, the idea of using perceptual categories has been dismissed as impractical due to the high cost of hand annotation.
with advances in weakly supervised learning, it is possible to train prosodic event classifiers with only a small amount of hand-labeled data by leveraging information in syntactic parses of unlabeled data.
contrasting
train_12646
Our strategy is similar to that proposed in (Nöth et al., 2000), which uses categorical labels defined in terms of syntactic structure and pause duration.
their system's category definitions are without reference to human perception, while we leverage learned relations between perceptual events and syntax with other acoustic cues, without predetermining the relation or requiring a direct coupling to syntax.
contrasting
train_12647
It is an open question as to whether the conclusions will hold for errorful ASR transcripts and automatically detected SU boundaries.
there is reason to believe that relative gains from using prosody may be larger than those observed here for reference transcripts (though overall performance will degrade), based on prior work combining prosody and lexical cues to detect other language structures (Shriberg and Stolcke, 2004).
contrasting
train_12648
In principle it could be any probability distribution.
largely for the sake of technical convenience, we assume it is one component of a multinomial distribution known as the Dirichlet distribution.
contrasting
train_12649
A two-step approach also avoids the creation of illegal chunk sequences, such as "B-SAT I-NUC".
a potential drawback is that the number of training examples for the labeller is reduced as the instances to be classified are chunks rather than tokens.
contrasting
train_12650
The two BoosTexter models also perform significantly worse than Spade on segmentation.
the higher WDiff for Spade on the segmentation task suggests that the boundaries predicted by our models contain more "near misses" than those predicted by Spade.
contrasting
train_12651
As with the one-step method, the stacked model performs (insignificantly) better than its unstacked counterpart on the segmentation task.
on the labelling task, the stacked variant performs significantly worse.
contrasting
train_12652
Discriminative training methods strive to optimize the parameters of a model by minimizing SR, as in Equation 4.
(4) cannot be optimized directly by regular gradient-based procedures as it is a piecewise constant function of λ and its gradient is undefined.
contrasting
train_12653
First, all discriminative methods significantly outperform the linear interpolation (statistically significant according to the t-test at p < 0.01).
the differences among three discriminative methods are very subtle and most of them are not statistically significant.
contrasting
train_12654
The algorithm cannot be guaranteed to terminate (since it is possible to write arbitrary Turing machines in Dyna).
if it does terminate, it should return values from a valid model of the program, i.e., values that simultaneously satisfy all the equations expressed by the program.
contrasting
train_12655
This strategy for computing the gradient ∂goal/∂a via the chain rule is an example of automatic differentiation in the reverse mode (Griewank and Corliss, 1991), known in the neural network community as back-propagation.
what if goal might be computed only approximately, by early stopping before convergence ( §4.5)?
contrasting
train_12656
MDPs provide us with a principled way to deal with these elements and their relationships.
dealing with the most general case results in models that are very cumbersome and which hide the conceptual simplicity of our approach.
contrasting
train_12657
Without noise, the performance (dotted triangles) is only slightly worse than that of the original policy.
when noise objects are added (solid triangles) the training is no longer slowed down.
contrasting
train_12658
Question series were also the fundamental structure used in the QACIAD challenge (Question Answering Challenge for Information Access Dialogue) of NTCIR4.
there are some important differences between the QACIAD and TREC series.
contrasting
train_12659
Linguistic analy-sis is useful because full parsing captures long distance dependencies between the answers and the query terms, and provides more information for inference.
linguistic analysis alone may not be enough.
contrasting
train_12660
The TREC question answering evaluation is based on human judgments (Voorhees, 2004).
such a manual procedure is costly and time consuming.
contrasting
train_12661
The advantage is that it does not require human annotation.
it only works for certain types of questions that have fixed anchors, such as "where was X born".
contrasting
train_12662
(2004) uses a similar approach for unsupervised pattern learning and generalization to soft pattern matching.
the method is actually used for sentence selection rather than answer snippet selection.
contrasting
train_12663
Perfect knowledge of informer spans can enhance accuracy from 79.4% to 88% using linear SVMs on standard benchmarks.
standard heuristics based on shallow pattern-matching give only a 3% improvement, showing that the notion of an informer is non-trivial.
contrasting
train_12664
We also note that the post-2003 TREC task has encountered evaluation problems, because it is difficult to agree on which nuggets should be included in the multi-snippet definitions (Hildebrandt et al., 2004).
our experimental results of Section 4 indicate strong inter-assessor agreement for single-snippet answers, suggesting that it is easier to agree upon what constitutes an acceptable single-snippet definition.
contrasting
train_12665
's WordNet-based method (2002).
they found the additional attribute to lead to no significant improvements, and, hence, we do not use it.
contrasting
train_12666
Considering additional features tailored to the NFL domain could further enhance performance.
feature selection is not one of the main objectives of this work.
contrasting
train_12667
(2004), who use machine learning techniques to identify propositional opinions and their holders (sources).
their work is more limited in scope than ours in several ways.
contrasting
train_12668
Due to its complex nature, it is not uncommon that the mention detection task itself is also divided into a number of smaller sub-tasks.
in this paper, we adopt an integrated classification approach to this problem that yields a monolithic structure.
contrasting
train_12669
One can argue that the large number of classes and data sparsity are important issues here and might have a significant effect on performance.
several attempts to divide the task into simpler subtasks have failed to yield a system with a better performance than that of the integrated system.
contrasting
train_12670
Strapping may be harder in cases like gender induction: it is hard to stumble into the kind of detailed seed used by Cucerzan and Yarowsky (2003).
we suspect that fertile seeds exist that are much smaller than their lists of 50-60 words.
contrasting
train_12671
Our basic idea was to conflate x and y into a pseudoword x-y.
to get a pseudoword with only two senses, we tried to focus on the particular senses of x and y that were selected by t. We constructed about 500 pseudoword tokens by using only x and y tokens that appeared in sentences that contained t, or in sentences resembling those under a TF-IDF measure.
contrasting
train_12672
There are several metrics that can be used for this purpose, see for instance (Budanitsky and Hirst, 2001) for an overview.
most of them rely on measures of semantic distance computed on semantic networks, and thus they are limited by the availability of explicitly encoded semantic relations (e.g.
contrasting
train_12673
The algorithm was later improved through a method for simulated annealing (Cowie et al., 1992), which solved the combinatorial explosion of word senses, while still finding an optimal solution.
recent comparative evaluations of different variants of the Lesk algorithm have shown that the performance of the original algorithm is significantly exceeded by an algorithm variation that relies on the overlap between word senses and current context (Vasilescu et al., 2004).
contrasting
train_12674
This reduced the set of words to 38.
some of these words were fairly obscure, did not occur frequently enough in one of the domain corpora or were simply too polysemous.
contrasting
train_12675
A naive choice of source transliteration unit is a single character.
single characters lack contextual information, and their combinations may generate too many unlikely candidates.
contrasting
train_12676
The CRF is discriminative and avoids label/observation bias by using a model that is constrained only in that the conditional distribution factorizes over an undirected Markov chain.
most popular training procedures for a CRF are time-consuming and complex processes.
contrasting
train_12677
A naive implementation of this algorithm requires O(n 2 ) invocations of local classifiers, where n is the number of the words in the sentence, because we need to update the probabilities over the words at each iteration.
a k-th order Markov assumption obviously allows us to skip most of the probability updates, resulting in O(kn) invocations of local classifiers.
contrasting
train_12678
The first search strategy resulted in a relatively low inclusion rate; the second achieved a much higher inclusion rate.
because such English pages were limited, on average only 45 unique snippets could be found for each f, which resulted in a maximum inclusion rate of 85.8%.
contrasting
train_12679
Therefore, even if some of these words are missing, numerical Information Extraction methods can use the remaining salient words and discard the noise generated by ASR errors.
this phenomenon is not true for tasks related to the extraction of fine grained entities, like Named Entities.
contrasting
train_12680
These improvements are not significant enough to justify the use of this kind of metadata information for improving the general performance of both ASR and NER processes.
if we focus now on the entities occurring in the newsletters corresponding to the exact days of the unmatched corpus, the improvement is much more significant, as presented in the next section.
contrasting
train_12681
We could restrict the features to local scope on the candidate parses, allowing dynamic-programming to be used to train the model with a packed representation.
even with these restrictions, finding arg max_t Σ_h p(t, h | s, Θ) is NP-hard, and the Viterbi approximation arg max_{t,h} p(t, h | s, Θ), or other approximations, would have to be used (see Matsuzaki et al.
contrasting
train_12682
The maximal accuracy was 58.59% which was obtained after 7 iterations.
to the experiments on data2, the accuracy decreased by more than 1.5% when the training was continued.
contrasting
train_12683
Andreas Eisele (unpublished work) implemented a statistical disambiguator for German based on weighted finite-state transducers as described in the introduction.
his system fails to represent and disambiguate the ambiguities observed in compounds with three or more elements and similar constructions with structural ambiguities.
contrasting
train_12684
These systems have shown that accurate projective dependency parsers can be automatically learned from parsed data.
non-projective analyses have recently attracted some interest, not only for languages with freer word order but also for English.
contrasting
train_12685
The present work is related to that of Hirakawa (2001) who, like us, reduces the problem of dependency parsing to spanning tree search.
his parsing method uses a branch and bound algorithm that is exponential in the worst case, even though it appears to perform reasonably in limited experiments.
contrasting
train_12686
We illustrate here the application of the Chu-Liu-Edmonds algorithm to dependency parsing on the simple example x = John saw Mary, with directed graph representation G_x. The first step of the algorithm is to find, for each word, the highest scoring incoming edge. If the result were a tree, it would have to be the maximum spanning tree.
in this case we have a cycle, so we will contract it into a single node and recalculate edge weights according to Figure 3.
contrasting
train_12687
In that algorithm, the single highest scoring tree (or structure) is used to update the weight vector.
mIRA aggressively updates w to maximize the margin between the correct tree and the highest scoring tree, which has been shown to lead to increased accuracy.
contrasting
train_12688
The results also show that in terms of Accuracy, factored MIRA performs better than single-best MIRA.
for the factored model, we do have O(n 2 ) margin constraints, which results in a significant increase in training time over single-best MIRA.
contrasting
train_12689
Unlike the Reuters titles or the proverbs, the BNC sentences have typically no added creativity.
we decided to add this set of negative examples to our experimental setting, in order to observe the level of difficulty of a humor-recognition task when performed with respect to simple text.
contrasting
train_12690
The results of the recent Senseval-3 competition (Mihalcea et al., 2004) have shown that supervised WSD methods can yield up to 72.9% accuracy on words for which manually sense-tagged data are available.
supervised methods suffer from the so-called knowledge acquisition bottleneck: they need large quantities of high quality annotated data to produce reliable results.
contrasting
train_12691
However, the first translation means "an exit" or "to export".
the second translation ( ) is monosemous and should be used.
contrasting
train_12692
In our experiments, we used all sense examples that we built for a sense (with an upper bound of 6000).
the distribution of senses in English text often does not match the distribution of their corresponding Chinese translations in Chinese text.
contrasting
train_12693
Second, since the engine is local, network latency is minimal.
to support IE, we must also execute the second stage of the algorithm (see the beginning of this section).
contrasting
train_12694
Thus, we see that a small number of repetitions can yield high confidence in an extraction.
when the sample size increases so that n = 20, 000, and the other parameters are unchanged, then P (x ∈ C) drops to 19.6%.
contrasting
train_12695
For the relations with a small number of correct instances, Country and CapitalOf, KNOWITNOW is able to identify 70-80% as many instances as KNOW-ITALL at precision 0.9.
corp and ceoOf have a huge number of correct instances and a long tail of low frequency extractions that KNOWITNOW has difficulty distinguishing from noise.
contrasting
train_12696
Traditionally, these sub-word units are determined by the phones or phonemes of a language (depending on desired detail of representation).
phonetic (or phonemic) representation has its pitfalls (cf.
contrasting
train_12697
Briscoe experimented with several triggers for starting a new word -at every phone, at the beginnings of syllables, at the beginnings of syllables with unreduced vowels, and at the beginnings of word boundaries.
the latter three require oracle information as to where word or syllable boundaries can occur.
contrasting
train_12698
It is possible that by focusing more on the training of annotator pairs, particularly on joint training, agreement might improve.
that would also result in a bias, which is probably not preferable to actual perception.
contrasting
train_12699
Moreover, sequencing shows potential for contributing in one case.
observations also point to three issues: first, the current data set appears to be too small.
contrasting