id: string, length 7 to 12
sentence1: string, length 6 to 1.27k
sentence2: string, length 6 to 926
label: string, 4 classes
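The rows below repeat a fixed four-line record layout: id, sentence1, sentence2, label. As a minimal illustration of that layout, here is a small Python sketch that groups such lines into records; the function name and the sample values are ours, not part of the dataset.

```python
# Minimal sketch: parse the flattened four-line record layout
# (id, sentence1, sentence2, label) used by the rows below.
def parse_records(lines):
    records = []
    # Walk the lines in blocks of four; any trailing partial block is ignored.
    for i in range(0, len(lines) - len(lines) % 4, 4):
        rec_id, s1, s2, label = (l.strip() for l in lines[i:i + 4])
        records.append({"id": rec_id, "sentence1": s1,
                        "sentence2": s2, "label": label})
    return records

sample = [
    "train_500",
    "It requires the ability to recognize translational equivalence ...",
    "a good solution to this problem would have a strong impact ...",
    "contrasting",
]
print(parse_records(sample)[0]["label"])  # contrasting
```

Each record pairs a sentence1/sentence2 couple with a discourse label; in this slice every label is "contrasting", which is consistent with sentence2 fields that begin lowercase where a contrastive connective was stripped.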
train_500
It requires the ability to recognize translational equivalence in very noisy environments, namely sentence pairs that express different (although overlapping) content.
a good solution to this problem would have a strong impact on parallel data acquisition efforts.
contrasting
train_501
With this approach, one would need to obtain a corpus in which each ambiguous word has been manually annotated with the correct sense, to serve as training data.
supervised WSD systems faced an important issue of domain dependence when using such a corpus-based approach.
contrasting
train_502
In our earlier work (Chan and Ng, 2005b), the posterior probabilities assigned by a naive Bayes classifier are used by the EM procedure described in the previous section to estimate the sense priors in a new dataset.
it is known that the posterior probabilities assigned by naive Bayes are not well calibrated (Domingos and Pazzani, 1996).
contrasting
train_503
BC was built as a balanced corpus and contains texts in various categories such as religion, fiction, etc.
the focus of the WSJ corpus is on financial and business news.
contrasting
train_504
We did not include the results from GI03 in the tables since the system is only applicable to part-of relations and we did not reproduce it.
the authors evaluated their system on a sample of the TREC-9 dataset and reported 83% precision and 72% recall (this algorithm is heavily supervised).
contrasting
train_505
Experimental results, for all relations and the two different corpus sizes, show that ESP-greatly outperforms the other methods on precision.
without the use of generic patterns, the ESP-system shows lower recall in all but the production relation.
contrasting
train_506
When the percentage of labeled data increases from 50% to 100%, LP Cosine is still comparable to SVM in F-measure while LP JS achieves slightly better F-measure than SVM.
LP JS consistently outperforms LP Cosine.
contrasting
train_507
This allows the grammar writer to specify any partial information about the signature, and provides the needed mathematical and computational capabilities to integrate the information with the rest of the signature.
this work does not define modules or module interaction.
contrasting
train_508
Consider now the appropriateness relation.
to type signatures, Ap is not required to be a function.
contrasting
train_509
Since Charniak's parser does its own tagging, this experiment did not examine the utility of prosodic disjuncture marks.
the combination of daughter annotation and -UNF propagation does lead to a better grammar-based reparandum-finder than parsers trained on flattened EDITED regions.
contrasting
train_510
This paper uses data-driven n-gram user simulations (Georgila et al., 2005a) and a richer dialogue context.
increasing the size of the state space for RL has the danger of making the learning problem intractable, and at the very least means that data is more sparse and state approximation methods may need to be used (Henderson et al., 2005).
contrasting
train_511
In our case we see that certain turns are rejected less than expected (row 3), while uncertain turns are rejected more than expected (row 4).
there is no interaction between neutral turns and rejections or between mixed turns and rejections.
contrasting
train_512
We find that 'Certain' turns have less SRP than expected (in terms of AsrMis and Rej).
'Uncertain' turns have more SRP both in terms of AsrMis and Rej.
contrasting
train_513
This constraint is expressed in a grammar G encoded as a regular expression.
in order to cope with the prediction errors of the classifier, we approximate it with an n-gram language model on sequences of the refined tag labels, in order to estimate the conditional distribution.
contrasting
train_514
Dialog structure information is necessary for language generation (predicting the agents' response) and dialog state specific text-to-speech synthesis.
there are several challenging problems that remain to be addressed.
contrasting
train_515
These models are trained to encourage nearby data points to have the same class label, and they can obtain impressive accuracy using a very small amount of labeled data.
since they model pairwise similarities among data points, most of these approaches require joint inference over the whole data set at test time, which is not practical for large data sets.
contrasting
train_516
To see why, note that the entropy regularizer can be seen as a composition (Boyd and Vandenberghe, 2004). As (2) is not concave, many of the standard global maximization techniques do not apply.
one can still use unlabeled data to improve a supervised CRF via iterative ascent.
contrasting
train_517
These training criteria have yielded excellent results for various tasks.
real-world tasks are evaluated by task-specific evaluation measures, including non-linear measures such as F-score, while all of the above criteria achieve optimization based on the linear combination of average accuracies, or error rates, rather than a given task-specific evaluation measure.
contrasting
train_518
The most probable output ŷ is given by ŷ = arg max_{y∈Y} p(y|x; λ).
z_λ(x) never affects the decision of ŷ since z_λ(x) does not depend on y.
contrasting
train_519
It may be feared that since the objective function is not differentiable everywhere for ψ = ∞, problems for optimization would occur.
it has been shown (Le Roux and McDermott, 2005) that even simple gradient-based (first-order) optimization methods such as GPD and (approximated) second-order methods such as Quickprop (Fahlman, 1988) and BFGS-based methods have yielded good experimental optimization results.
contrasting
train_520
We expanded the basic features by using bigram combinations of the same types of features, such as words and part-of-speech tags, within window size 5.
to the above, we used the original feature set for NER.
contrasting
train_521
Note that all above-mentioned systems are based on the assumption that the true quality of essays must be defined by human judges.
Bennet and Bejar (1998) have criticized the overreliance on human ratings as the sole criterion for evaluating computer performance, because ratings are typically based on a constructed rubric that may ultimately achieve acceptable reliability at the cost of validity.
contrasting
train_522
On one hand, we associate the words in the left half with food or cooking.
we associate those in the right half with animals or birds.
contrasting
train_523
In essence, decision lists could be learned from a corpus consisting of a general corpus and the feedback corpus.
since the size of the feedback corpus is normally far smaller than that of general corpora, so is the effect of the feedback corpus on .
contrasting
train_524
Thus, only a little improvement is expected in recall however much feedback corpus data become available.
most of the 13 Indeed, words around the target noun were effective.
contrasting
train_525
Incorrect substitutions and newly injected erroneous material anywhere in the sentence counted as New Errors, even if the proposed replacement were otherwise correct.
changes in upper and lower case and punctuation were ignored.
contrasting
train_526
The difference between τ c+ and τ c is expected.
in the next section this will be contrasted with the increased burden on the parser for τ c+ , since it is also responsible for selecting the correct dependency type for each arc among as many as 2 • |R| types instead of |R|.
contrasting
train_527
the subject is RESTAURANT and verb is has, whereas the learned DSyntSs often place the attribute in subject position as a definite noun phrase.
the learned DSyntS can be incorporated into SPaRKy using the semantic representations to substitute learned DSyntSs into nodes in the sentence plan tree.
contrasting
train_528
Synonymy, hypernymy or meronymy fall clearly in this latter category, and well known resources like WordNet (Miller, 1995), EuroWordNet (Vossen, 1998) or MindNet (Richardson et al., 1998) contain them.
as various researchers have pointed out (Harabagiu et al., 1999), these networks lack information, in particular with regard to syntagmatic associations, which are generally unsystematic.
contrasting
train_529
The network of topical co-occurrences built from Topical Units is a subset of the initial network.
it also contains co-occurrences that are not part of it, i.e.
contrasting
train_530
For us, supertagging decreases the speed slightly, because additional constraints means more work for the parser, and because our supertagger-parser integration is not yet optimal.
it gives us better parsing accuracy.
contrasting
train_531
The mean number of relevant documents per query is 137 and the median is 81 (the most prolific query has 968 relevant documents).
each document is relevant to, on average, 1.11 queries (the median is 5.5 and the most generally relevant document is relevant to 20 different queries).
contrasting
train_532
Our model is similar in spirit to the randomwalk summarization model (Otterbacher et al., 2005).
our model has several advantages over this technique.
contrasting
train_533
Lapata (2002) examines the task of expressing the implicit relations in nominalizations, which are noun compounds whose head noun is derived from a verb and whose modifier can be interpreted as an argument of the verb.
with this work, our algorithm is not restricted to nominalizations.
contrasting
train_534
This great expressiveness has the disadvantage that the parsing problem becomes N P-complete and cannot be solved efficiently.
good success has been achieved with transformation-based solution methods that start out with an educated guess about the optimal tree and use constraint failures as cues where to change labels, subordinations, or lexical readings.
contrasting
train_535
As with many parsers, the attachment of prepositions poses a particular problem for the base WCDG of German, because it depends largely upon lexicalized information that is not widely used in its constraints.
such information can be automatically extracted from large corpora of trees or even raw text: prepositions that tend to occur in the vicinity of specific nouns or verbs more often than chance would suggest can be assumed to modify those words preferentially (Volk, 2002).
contrasting
train_536
Therefore, external evidence is either used to restrict the space of possibilities for a subsequent component (Clark and Curran, 2004) or to choose among the alternative results which a traditional rule-based parser usually delivers (Malouf and van Noord, 2004).
to these approaches, our system directly integrates the available evidence into the decision procedure of the rule-based parser by modifying the objective function in a way that helps guide the parsing process towards the desired interpretation.
contrasting
train_537
Natural language parsing is a hard task, partly because of the complexity and the volume of information that have to be taken into account about words and syntactic constructions.
it is necessary to have access to such information, stored in resources such as lexica and grammars, and to try and minimize the amount of missing and erroneous information in these resources.
contrasting
train_538
Lease and Charniak (2005) use the Charniak parser for biomedical data and find that the use of out-of-domain trees and in-domain vocabulary information can considerably improve performance.
the work which is most directly comparable to ours is that of (Ratnaparkhi, 1999;Hwa, 1999;Gildea, 2001;Bacchiani et al., 2006).
contrasting
train_539
Lexical verb classes have been used to support various (multilingual) tasks, such as computational lexicography, language generation, machine translation, word sense disambiguation, semantic role labeling, and subcategorization acquisition (Dorr, 1997;Prescher et al., 2000;Korhonen, 2002).
large-scale exploitation of the classes in real-world or domain-sensitive tasks has not been possible because the existing classifications, e.g.
contrasting
train_540
annotated data (or a comprehensive list of verb senses) exists for the domain.
examination of a number of corpus instances suggests that the use of verbs is fairly conventionalized in our data.
contrasting
train_541
Most of the acquisition methods are based on the distributional hypothesis (Harris, 1985), which states that semantically similar words share similar contexts, and which has been shown experimentally to be quite plausible.
whereas many methods adopting the hypothesis are based on contextual clues concerning words, and much consideration has been given to language models such as Latent Semantic Indexing (Deerwester et al., 1990) and Probabilistic LSI (Hofmann, 1999) as well as to synonym acquisition methods, almost no attention has been paid to which categories of contextual information, or which combinations of them, are useful for word featuring in terms of synonym acquisition.
contrasting
train_542
Accurately representing synonymy using distributional similarity requires large volumes of data to reliably represent infrequent words.
the naïve nearest-neighbour approach to comparing context vectors extracted from large corpora scales poorly (O(n²) in the vocabulary size).
contrasting
train_543
Their paper concluded that recognizing the sub-events that comprise a single news event is essential for producing better summaries.
it is difficult to automatically break a news topic into sub-events.
contrasting
train_544
Apparently, named entities are key elements in their model.
the constraints defining events seemed quite stringent.
contrasting
train_545
We are aware of the important roles of information fusion and sentence compression in summary generation.
the focus of this paper is to evaluate event-based approaches in extracting the most important sentences.
contrasting
train_546
Collecting human judgements is the method of choice for evaluating sentence compression models.
human evaluations tend to be expensive and cannot be repeated frequently; furthermore, comparisons across different studies can be difficult, particularly if subjects employ different scales, or are given different instructions.
contrasting
train_547
The SSA does not correlate with human judgements on either corpus; it thus seems to be an unreliable measure of compression performance.
the F-score correlates significantly with human ratings, yielding a correlation coefficient of r = 0.575 on the Ziff-Davis corpus and r = 0.532 on the Broadcast News corpus.
contrasting
train_548
When no parallel corpora are available the parameters can be manually tuned to produce compressions.
the supervised decision-tree model is not particularly robust on spoken text; it is sensitive to the nature of the training data, and did not produce adequate compressions when trained on the human-authored Broadcast News corpus.
contrasting
train_549
Barzilay (2002) has provided empirical evidence that proper order of extracted sentences improves their readability significantly.
ordering a set of sentences into a coherent text is a nontrivial task.
contrasting
train_550
Several strategies to determine sentence ordering have been proposed as described in section 2.
the appropriate way to combine these strategies to achieve more coherent summaries remains unsolved.
contrasting
train_551
This fact shows that integration of CHR experts with other experts worked well by pushing poor ordering to an acceptable level.
a huge gap between AGL and HUM orderings was also found.
contrasting
train_552
Inter-annotator agreement is too low on fine-grained judgments.
for the coarse-grained judgments of more than or less than a day, and of approximate agreement on temporal unit, human agreement is acceptably high.
contrasting
train_553
The main reason is that the understanding of the basic entailment processes will allow us to model more accurate semantic theories of natural languages (Chierchia and McConnell-Ginet, 2001) and design important applications (Dagan and Glickman, 2004), e.g., Question Answering and Information Extraction.
previous work (e.g., (Zaenen et al., 2005)) suggests that determining whether or not a text T entails a hypothesis H is quite complex even when all the needed information is explicitly asserted.
contrasting
train_554
These readings are all semantically equivalent to each other.
the USR for (2) has 480 readings, which fall into two classes of mutually equivalent readings, characterised by the relative scope of "the lee of" and "a small hillside."
contrasting
train_555
However, even first-order equivalence is an undecidable problem, and broad-coverage semantic representations such as those computed by the ERG usually have no well-defined model-theoretic semantics and therefore no concept of semantic equivalence.
we do not need to solve the full semantic equivalence problem, as we only want to compare formulas that are readings of the same sentence, i.e.
contrasting
train_556
One way to formalise this is to enumerate exactly one representative of each equivalence class.
after such a step we would be left with a collection of semantic representations rather than an USR, and could not use the USR for ruling out further readings.
contrasting
train_557
This is an improvement over our earlier algorithm (Koller and Thater, 2006), which computed a chart with four configurations for the graph in which 1 and 2 are existential and 3 is universal, as opposed to the three equivalence classes of this graph's configurations.
the present algorithm still doesn't achieve complete reduction for all USRs.
contrasting
train_558
Indeed, this is the approach we use for our main results in Section 3.
because of the second problem noted above, in Section 4, we simulated the context by filling the cache with rules from the correct tree.
contrasting
train_559
Significant results were obtained both for the Within model (N = 24, Z = 1.67, p < .05, one-tailed) and for the Copy model (N = 24, Z = 4.27, p < .001, onetailed).
the effect was much larger for the Copy model, a conclusion which is confirmed by comparing the differences of differences between the two models (N = 24, Z = 4.27, p < .001, one-tailed).
contrasting
train_560
The increase in recall is statistically significant, and it shows classifier stacking can improve performance.
we did not find metaclassification and simple voting very effective.
contrasting
train_561
Our grammars automatically learn the kinds of linguistic distinctions exhibited in previous work on manual tree annotation.
our grammars are much more compact and substantially more accurate than previous work on automatic annotation.
contrasting
train_562
We would like to see only one sort of this tag because, despite its frequency, it always produces the terminal comma (barring a few annotation errors in the treebank).
we would expect to find an advantage in distinguishing between various verbal categories and NP types.
contrasting
train_563
To prevent oversplitting, we could measure the utility of splitting each latent annotation individually and then split the best ones first.
not only is this impractical, requiring an entire training phase for each new split, but it assumes the contributions of multiple splits are independent.
contrasting
train_564
This approach gives parsing accuracies of up to 90.7% on the development set, substantially higher than previous symbol-splitting approaches, while starting from an extremely simple base grammar.
in general, any automatic induction system is in danger of being entirely uninterpretable.
contrasting
train_565
In the limit, each word may well have its own unique syntactic behavior, especially when, as in modern parsers, semantic selectional preferences are lumped in with traditional syntactic trends.
in practice, and given limited data, the relationship between specific words and their syntactic contexts may be best modeled at a level more fine than POS tag but less fine than lexical identity.
contrasting
train_566
Our findings suggest that in the supervised setting the results of the direct and indirect approaches are comparable.
addressing directly the binary classification task has practical advantages and can yield high precision values, as desired in precision-oriented applications such as IR and QA.
contrasting
train_567
In general, the classification performance of one-class approaches is usually quite poor, if compared to supervised approaches for the same tasks.
in many practical settings one-class learning is the only available solution.
contrasting
train_568
Although they are all based on the Bayesian model, Qin and Wang (2005) used an ensemble classifier.
the difference in the average values is not remarkable.
contrasting
train_569
When we simply apply first-order semi-CRFs, we must distinguish states that have different previous states.
when we want to distinguish only the preceding named entity tags rather than the immediate previous states, feature forests can represent these events more compactly (Figure 4).
contrasting
train_570
The use of the preceding entity information improves the performance.
the system with preceding information is not significantly better than the system without it 5 .
contrasting
train_571
The contribution of the non-local information introduced by our method was not significant in the experiments.
other types of nonlocal information have also been shown to be effective (Finkel et al., 2005) and we will examine the effectiveness of other non-local information which can be embedded into label information.
contrasting
train_572
Figure 2 contains an example of such a case: the cascade model will have to predict the type of the entire phrase Donna Karan International, in the context 'Since <chunk> went public in ..', which will give it a better opportunity to classify it as an organization.
because the joint model and AIO have a word view of the sentence, they will lack the benefit of examining the larger region, and will not have access to features that involve partial future classifications (such as the fact that another mention of a particular type follows).
contrasting
train_573
Among all the issued extractions, the larger the number of incorrect extractions is, the closer the extraction redundancy for that document is to 1.
the extraction redundancy can never be 1 according to our definition, since this measure is only defined over documents that contain at least one correct extraction.
contrasting
train_574
To support machine translation, parallel sentences should be extracted from the mined parallel documents.
current sentence alignment models (Brown et al., 1991; Gale & Church, 1991; Wu, 1994; Chen, 1993; Zhao and Vogel, 2002; etc.)
contrasting
train_575
Compared to Table 1, the results show a significant improvement of over 10% on the baseline F-score for questions.
the tests on the non-question Section 23 data show not only a significant drop in accuracy but also a drop in coverage.
contrasting
train_576
For example, when trained on only 10% of the 3600 questions used in this experiment, the parser successfully parses all of the 400 question test set and achieves an F-score of 85.59.
the results for the tests on WSJ Section 23 are considerably worse.
contrasting
train_577
Free-word order languages typically pose greater challenges for syntactic theories (Rambow, 1994), and the richer inflectional morphology of these languages creates additional problems both for the coverage of lexicalized formalisms such as CCG or TAG, and for the usefulness of dependency counts extracted from the training data.
formalisms such as CCG and TAG are particularly suited to capture the crossing dependencies that arise in languages such as Dutch or German, and by choosing an appropriate linguistic representation, some of these problems may be mitigated.
contrasting
train_578
Compared with word-based SMT systems, phrase-based systems can easily address reorderings of words within phrases.
at the phrase level, reordering is still a computationally expensive problem just like reordering at the word level (Knight, 1999).
contrasting
train_579
Performance gains have been reported for systems with lexicalized reordering model.
since reorderings are related to concrete phrases, researchers have to design their systems carefully in order not to cause other problems, e.g.
contrasting
train_580
A simple way to compute this probability is to take counts from the training data and then use the maximum likelihood estimate (MLE). A similar method is used by the lexicalized reordering model.
in our model this approach cannot work, because blocks become larger and larger due to the merging rules, and are finally unseen in the training data.
contrasting
train_581
When decoding a speech signal, words are generated in the same order in which their corresponding acoustic signal is consumed.
that is not necessarily the case in MT due to the fact that different languages have different word order requirements.
contrasting
train_582
The paper suggests that reordering the input reduces the translation error rate.
it does not provide a methodology on how to perform this reordering.
contrasting
train_583
In the absence of richer models such as the proposed distortion model, our results suggest that it is best to decode monotonically and only allow local reorderings that are captured in our phrase dictionary.
when the distortion model is used, we see statistically significant increases in the BLEU score as we consider more word reorderings.
contrasting
train_584
In particular, it is possible to deal with it even when no bilingual resources are available.
when it is possible to exploit bilingual repositories, such as a synset-aligned WordNet or a bilingual dictionary, the obtained performance is close to that achieved for the monolingual task.
contrasting
train_585
et al., 2005) which are the two general approaches to personalized search.
to the best of our knowledge, how to exploit these two types of implicit feedback (J. Pitkow et al., 2002) in a unified way, which not only brings collaboration between query expansion and result re-ranking but also makes the whole system more concise, has so far not been well studied in previous work.
contrasting
train_586
In Figure 5, we can see that the precision of "PAIR No QE" is better than that of UCAIR among top 5 and top 10 documents, and is almost the same as that of UCAIR among top 20 and top 30 documents.
PAIR is much better than UCAIR in all measurements.
contrasting
train_587
Compared with dictionary or corpus based methods, the advantage of MT-based query translation lies in that technologies integrated in MT systems, such as syntactic and semantic analysis, could help to improve the translation accuracy (Jones et al., 1999).
for a very long time, fewer experiments with MT-based methods have been reported than with dictionary-based or corpus-based methods.
contrasting
train_588
Thus, in practice the rate of growth on the x-axis of Figure 1 will slow as the corpus size increases.
the number of documents (shown on the y-axis in Figure 1) remains unbounded.
contrasting
train_589
For example, one study found that full-text articles require weighting schemes that consider document length (Kamps, et al, 2005).
controlling the weights for document lengths may hide a systematic difference between the language used in abstracts and the language used in the body of a document.
contrasting
train_590
substrings that are common enough to be observed on training data.
a key limitation of phrase-based models is that they fail to model reordering at the phrase level robustly.
contrasting
train_591
Most previous studies of the NER of speech data used generative models such as hidden Markov models (HMMs) (Miller et al., 1999;Palmer and Ostendorf, 2001;Horlock and King, 2003b;Béchet et al., 2004;Favre et al., 2005).
in text-based NER, better results are obtained using discriminative schemes such as maximum entropy (ME) models (Borthwick, 1999;Chieu and Ng, 2003), support vector machines (SVMs) (Isozaki and Kazawa, 2002), and conditional random fields (CRFs) (McCallum and Li, 2003).
contrasting
train_592
If all the candidates are classified negative, TA is judged nonanaphoric.
the preference-based approach (Yang et al., 2003;Iida et al., 2003) decomposes the task into comparisons of the preference between candidates and selects the most preferred one as the antecedent.
contrasting
train_593
The larger n is, the larger a corpus is required to avoid data sparseness.
though low-order n-grams do not suffer severely from data sparseness, they do not reflect the language characteristics well either.
contrasting
train_594
All the experiments above considered left context only.
Kang reported that the probabilistic model using both left and right context outperforms the one that uses left context only (Kang, 2004).
contrasting
train_595
Even though word spacing is one of the important tasks in Korean information processing, it is just a simple task in many other languages such as English, German, and French.
due to its generality, the importance of the proposed method still holds in such languages.
contrasting
train_596
The solution for a maximization problem can be found using an exhaustive search method.
the complexity is very high in practice for a large number of pairs to be processed.
contrasting
train_597
Interestingly, Buckley et al. (2000) point out that "English query words are treated as potentially misspelled French words" and attempt to treat English words as variations of French words according to lexicographical rules.
when two languages are very distinct, e.g., English-Korean, English-Chinese, transliteration from English words is utilized for cognate matching.
contrasting
train_598
Figure 1 shows the results of the experiments done for different alignment score cut-off without considering the Frequency Range constraint on the three corpora.
it was observed that the performance of the algorithm reduced slightly on introducing this BASS filter.
contrasting
train_599
The starting point constraint expresses range in terms of number of words.
it has been observed (see section 2.2) that the optimum value of the range varies with the nature of text.
contrasting