id (string, 7-12 chars) | sentence1 (string, 6-1.27k chars) | sentence2 (string, 6-926 chars) | label (4 classes) |
---|---|---|---|
train_13400 | adopt a stratified sampling strategy to select the seed set. | their method needs a stratification variable such as the known distribution of the relation types, while our method uses clustering to divide relation instances into different strata. | contrasting |
train_13401 | Normally, this number is proportional to the size of that stratum in the whole data set. | in case this number is 0 due to the rounding of real numbers, it is set to 1 to ensure the existence of at least one seed from that stratum. | contrasting |
train_13402 | The method in (Haveliwala, 2002), is similar to TPR which also decompose PageRank into various topics. | the method in (Haveliwala, 2002) only considered to set the preference values using pr(w|z) (In the context of (Haveliwala, 2002), w indicates Web pages). | contrasting |
train_13403 | In general, the minimization of the normalized cut criterion is NP-complete. | the linearity constraint on text segmentation for monologue allows them to find an exact solution in polynomial time. | contrasting |
train_13404 | Email programs (e.g., Gmail, Yahoomail) group emails into threads using headers. | our annotations show that topics change at a finer level of granularity than emails. | contrasting |
train_13405 | In the second phase the annotators identify the most appropriate topic for each sentence. | if a sentence covers more than one topic, they were asked to label it with all the relevant topics according to their order of relevance. | contrasting |
train_13406 | Automatic detection of discourse relations in natural language text is important for numerous tasks in NLP, such as sentiment analysis (Somasundaran et al., 2009), text summarization (Marcu, 2000) and dialogue generation (Piwek et al., 2007). | most of the recent work employing discourse relation classifiers are based on fully-supervised machine learning approaches (duVerle and Prendinger, 2009;Pitler et al., 2009;Lin et al., 2009). | contrasting |
train_13407 | Some of the relations corresponding to these classes are relatively more frequent in the corpus, such as the ELAB- ORATION[N][S] relation (4441 instances), or the ATTRIBUTION[S][N] relation (1612 instances). | 1 other relation types occur very rarely, such as TOPIC-COMMENT [S] [N] (2 instances), or EVALUATION [N][N] (3 instances). | contrasting |
train_13408 | For each measure, the authors note a performance increase when little training data is available, or when the feature representations are very sparse. | for our task, classification of discourse relations, we employ not only words but also other types of features such as parse tree production rules, and thus cannot compute semantic kernels using WordNet. | contrasting |
train_13409 | (2008b)-for each instance with n senses in the corpus, we create n identical feature vectors, each being labeled by one of the instance's senses. | in the RST framework, only one relation is allowed to hold between two EDUs. | contrasting |
train_13410 | As α increases, the utterances increase in complexity, as does the success rate. | when α approaches 1, the utterances are too complex and the success rate decreases. | contrasting |
train_13411 | In recent years, statistical MT systems have been easy to develop due to the rapid explosion in data availability, especially parallel data. | in reality there are still many language pairs which lack parallel data, such as Urdu-English, Chinese-Italian, where large amounts of speakers exist for both languages; of course, the problem is far worse for pairs such as Catalan-Irish. | contrasting |
train_13412 | In this case, the role of paraphrases in decoding becomes a little weaker. | it might become a kind of noise to interfere with the exact translation of the original source-side phrases when decoding. | contrasting |
train_13413 | These results show that 1) most of the paraphrases that are added in are lower-order n-grams; 2) the paraphrases can increase the coverage of the input by handling the unknown words to some extent. | we observed that most untranslated words in the "Para-Sub" and "Lattice" systems are still NEs, which shows that in our paraphrase table, there are few paraphrases for the NEs. | contrasting |
train_13414 | One common limitation of the above works is they only allow the one-to-one mapping between each non-terminal frontier node, and thus they suffer from the issue of rule coverage. | due to the data sparseness issue and model coverage issue, previous tree-to-tree (Zhang et al., 2008;Liu et al., 2009) decoder has to rely solely on the span information or source side information to combine the target syntactic structures, without checking the compatibility of the merging nodes, in order not to fail many translation paths. | contrasting |
train_13415 | At first glance, this seems only peripherally related to our work, since the specific/general distinction is made for features rather than instances. | for multinomial models like our LMs and TMs, there is a one to one correspondence between instances and features, eg the correspondence between a phrase pair (s, t) and its conditional multinomial probability p(s|t). | contrasting |
train_13416 | (2007) and the references therein.) | unlike text, which requires little or no preprocessing, audio files are typically first transcribed into text before applying standard NLP tools. | contrasting |
train_13417 | Note that for some pseudo-terms, the words match exactly, while for others, the phrases are distinct but phonetically similar. | even in this case, there is often substantial overlap in the spoken terms. | contrasting |
train_13418 | Unsupervised clustering methods are attractive since they require no human annotations. | obtaining a few labeled examples for a simple labeling task can be done quickly, especially with crowd sourcing systems such as CrowdFlower and Amazon's Mechanical Turk (Snow et al., 2008;Callison-Burch and Dredze, 2010). | contrasting |
train_13419 | Our results demonstrate that such use of eye gaze can potentially compensate for a conversational systems limited language processing and domain modeling capability (Prasov and Chai, 2008). | this work is conducted in a static visual environment and evaluated only on transcribed spoken utterances. | contrasting |
train_13420 | In this case, the system must first identify one long sword as a referring expression and then resolve it to the correct set of entities in the virtual world. | not until the twenty fifth ranked recognition hypothesis H 25 , do we see a referring expression closest to the actual uttered referring expression. | contrasting |
train_13421 | It should be noted that these weights are learned based on single sentences. | to have a fair comparison between all our summary types we use these weights to generate summaries using the A* search with the word limit as constraint. | contrasting |
train_13422 | This confirms that MERT is maximizing the metric for which it was trained. | this is not the case for regression results. | contrasting |
train_13423 | These results concur with the automatic evaluation results as described in section 6.1. | this is not the case for the grammaticality and redundancy criteria. | contrasting |
train_13424 | With single sentences we have only a guarantee for high content overlap between single training and model sentences. | when these sentences are combined into summaries it is not guaranteed that these summaries will also have high content overlap with the entire model ones. | contrasting |
train_13425 | Most state-of-the-art models will incorrectly link we to the israelis because of their proximity and compatibility of attributes (both we and the israelis are plural). | a more cautious approach is to first cluster the israelis with israel because the demonymy relation is highly precise. | contrasting |
train_13426 | To facilitate comparison with most of the recent previous work, we report results using gold mention boundaries. | our approach does not make any assumptions about the underlying mentions, so it is trivial to adapt it to predicted mention boundaries (e.g., see Haghighi and Klein (2010) for a simple mention detection model). | contrasting |
train_13427 | SUMTIME forecasts are intended to be read by trained meteorologists, and thus the text is quite abbreviated. | wEATHERGOV texts are intended to be read by the general public and thus is more English-like. | contrasting |
train_13428 | Here, we focus on a single sentence as this is most appropriate for title generation. | multi-sentence output can be easily generated by setting a summary length constraint. | contrasting |
train_13429 | If the same word appears twice in a sequence, then it may receive two different pronunciations, since the mapping is probabilistic. | a token's syllable/stress pattern is chosen independently of other tokens in the sequence; we look at relaxing this assumption later. | contrasting |
train_13430 | Indeed we find that this distribution is too sharp and overemphasises short phrases, so we use f C = 1. | it does allow us to rank target phrases as possible translations. | contrasting |
train_13431 | For example, the translations of a source sentence in MT08 are as follows 2 : • Src: • HPB: South Korean government late last month to start with 400,000 tons of rice aid to the DPRK • HPB+MER: Start at the end of last month, South Korean government plans to provide 400,000 tons of rice in aid to the DPRK The most obvious error that the baseline system makes is the order of the time expression " , the end of last month", which should be either at the beginning or the end on target side. | the baseline produced a monotone translation by using the rule " X 1 , South Korean government X 1 ". | contrasting |
train_13432 | We evaluated the realizations using seven automatic metrics, and analyzed correlations obtained between the human judgments and the automatic scores. | to previous NLG meta-evaluations, we find that several of the metrics correlate moderately well with human judgments of both adequacy and fluency, with the TER family performing best overall. | contrasting |
train_13433 | In contrast to previous NLG meta-evaluations, we found that several of the metrics correlate moderately well with human judgments of both adequacy and fluency, with the TER family performing best. | when looking at statistically significant system-level differences in human judgments, we found that some of the metrics get some of the rankings correct, but none get them all correct, with different metrics making different ranking errors. | contrasting |
train_13434 | Chance agreement for this data is calculated by the method discussed in Carletta's squib. | in previous work in MT meta-evaluation, Callison-Burch et al. | contrasting |
train_13435 | Human fluency judgments of outputs with only punctuation problems were generally high, and many realizations with commas inserted or removed were rated fully fluent by the annotators. | tERP penalizes such insertions or deletions. | contrasting |
train_13436 | The correlations of each of the metrics with the human judgments of fluency for the realizer systems indicate at least a moderate relationship, in contrast with the results reported in (Stent et al., 2005) for paraphrase data, which found an inverse correlation for fluency, and (Cahill, 2009) for the output of a surface realizer for German, which found only a weak correlation. | the former study employed a corpus-based paraphrase generation system rather than grammar-driven surface realizers, and the resulting paraphrases exhibited much broader variation. | contrasting |
train_13437 | For the targeted metrics, HNIST is correct for all five comparisons, while neither HBLEU nor HME-TEOR correctly rank all the OpenCCG models. | hTER and hGTM incorrectly rank the XLE-best system versus OpenCCG-based models. | contrasting |
train_13438 | These results suggest that the MT-evaluation metrics are useful for developing surface realizers. | the correlations are lower than those reported for MT data, suggesting that they should be used with caution, especially for cross-system evaluation, where consulting multiple metrics may yield more reliable comparisons. | contrasting |
train_13439 | This is the most commonly used metric across the literature as it is intuitive and creates a meaningful POS sequence out of the cluster identifiers. | it tends to yield higher scores as |C| increases, making comparisons difficult when |C| can vary. | contrasting |
train_13440 | Recall that crossval was proposed as a possible solution to this problem, and it does solve the extreme case of single, yielding 0% accuracy rather than 100%. | it patterns just like many-to-1 for up to 200 clusters, suggesting that there is very little difference 4 We used the Stanford Tagger trained on the WSJ corpus: http://nlp.stanford.edu/software/tagger.shtml. | contrasting |
train_13441 | We first present results for the same WSJ corpus used above. | because most of the systems were initially developed on this corpus, and often evaluated only on it, there is a question of whether their methods and/or hyperparameters are overly specific to the domain or to the English language. | contrasting |
train_13442 | One might expect that the two systems with morphological features (clark and feat) would show less difference between English and some of the other languages (all of which have complex morphology) than the other systems. | although clark and feat (along with Brown) are the best performing systems overall, they don't show any particular benefit for the morphologically complex languages. | contrasting |
train_13443 | Versions of this problem include adapting using only unlabeled target domain data Blitzer et al., 2007;Jiang and Zhai, 2007), adapting using a limited amount of target domain labeled data (Daumé, 2007;Finkel and Manning, 2009), and learning across multiple domains simultaneously in an online setting (Dredze and Crammer, 2008b). | in practical settings, we do not know if the data distribution will change, and certainly not when. | contrasting |
train_13444 | Since the A-distance assumes a stream of single values, we can apply an A-distance detector to each feature (e.g., unigram and bigram count) individually. | our extensive experiments with this approach (omitted here) show that it suffers from a number of flaws, such as a high false positive rate if all features are tracked, the difficult problem of identifying an informative subset of features for tracking, and deciding how many such features need to change before a shift has occurred, which turns out to be highly variable between shifts. | contrasting |
train_13445 | We have shown detection of sudden shifts between the source and target domains. | some shifts may happen gradually over time. | contrasting |
train_13446 | When CWPM is slow to detect a change (over 400 examples), the domain classifier is the clear winner. | in the majority of experiments, especially for ACE and spam data, both detectors register a change quickly. | contrasting |
train_13447 | It is always difficult to determine how many samples are enough for sampling algorithms. | both fertility models achieve better results than their baseline models using a small amount of samples. | contrasting |
train_13448 | An exact implementation of a new decoder for factorized grammars can make better use of all the templates. | the experiments will show that even an approximation like this can already provide significant improvement on small training data sets, i.e. | contrasting |
train_13449 | It helps to alleviate the pronoun dropping problem in Arabic. | we notice that most of the templates in the 200 lists are rather simple. | contrasting |
train_13450 | They also described a technique based on TF-IDF to de-emphasize sentences similar to those that have already been selected, thereby encouraging diversity. | this strategy is bootstrapped by random initial choices that do not necessarily favor sentences that are difficult to translate. | contrasting |
train_13451 | The top-ranked sentences are chosen for manual translation. | this approach requires that the pool have the same distributional characteristics as the development sets used to train the ranking model. | contrasting |
train_13452 | The translation results show that the system trained with this null element (*PRO*) translates verbs that follow the null element largely in such a manner. | it may not be always closest to the reference. | contrasting |
train_13453 | Indexing by suffix arrays is used to allow fast access to phrase instances in the corpus, and random sampling to avoid collecting the full set of examples has been shown to perform well. | these approaches consider all instances of a phrase as equivalent for the estimation of its translations. | contrasting |
train_13454 | The larger grammar results in more combinations of partial theories in decoding. | for computing reasons, we kept the beam size of the decoder constant despite the increase in grammar size, potentially pruning out good theories. | contrasting |
train_13455 | Thus they only find a complete fully connected parse at the very end. | both subjective and experimental evidence show that people understand a sentence word-to-word as they go along, or close to it. | contrasting |
train_13456 | Thus they only find a complete fully-connected parse at the very end. | human syntactic parsing must be fully connected (or close to it) as people are able to apply vast amounts of real-world knowledge to the process as it proceeds from word-toword (van Gompel and Pickering, 2007). | contrasting |
train_13457 | As is invariably the case, when combined the two models perform much better than either by itself (C/R Combined -88.8%). | we still achieve a 0.6% improvement over that result. | contrasting |
train_13458 | If the test data is also similarly composed, performance on any particular test instance might improve if training is done on a training subset coming from the same source. | even when the training and test data are from the same source, a ZL algorithm may capture fine differences between subsets. | contrasting |
train_13459 | One possibility for an initial model was to extract the word-to-lextype mappings from the grammar lexicon as Baldridge does, and make all starting probabilities uniform. | our lexicon maps between lextypes and lemmas, rather than inflected word forms, which is what we'd be tagging. | contrasting |
train_13460 | For example, it would be easy to enforce such constraints in the Eisner (1996) algorithm or using Integer Linear Programming approaches (Riedel and Clarke, 2006;Martins et al., 2009). | such richer modeling capacity comes with a much higher computational cost. | contrasting |
train_13461 | should have "stand" as the main head of the sentence, and "does" as its aux. | the WSJ model labeled "does" as the main head. | contrasting |
train_13462 | Most of the initial research in this literature focused on either recognizing negated terms or identifying speculative sentences, using some heuristic rules (Chapman et al., 2001;Light et al., 2004), and machine learning methods (Goldin and Chapman, 2003;Medlock and Briscoe, 2007). | scope learning has been largely ignored until the recent release of the BioScope corpus (Szarvas et al., 2008), where negation/speculation cues and their scopes are annotated explicitly. | contrasting |
train_13463 | Table 6 shows that: 1) For both negation and speculaiton scope identification, automatic syntactic parsing lowers the performance on the abstracts subcorpus (e.g., from 83.10% to 81.84% in accuracy for negation scope identification and from 86.41% to 83.74% in accuracy for speculaiton scope identification). | the performance drop shows that both negation and speculation scope identification are not as senstive to automatic syntactic parsing as common shallow semantic parsing, whose performance might decrease by about ~10 in F1measure (Toutanova et al., 2005). | contrasting |
train_13464 | It also shows that our final system significantly outperforms the state-of-the-art ones using a chunking approach, especially on the abstracts and full papers subcorpora. | the improvement on the clinical reports subcorpora for negation scope identification is much less apparent, partly due to the fact that the sentences in this subcorpus are much simpler (with average length of 6.6 words per sentence) and thus a chunking approach can achieve high performance. | contrasting |
train_13465 | In addition, we have also experimented on only these words, which happen to be a cue or inside a cue in the training data as cue candidates. | this experimental setting achieves a lower performance than that when all words are considered. | contrasting |
train_13466 | The SAMA analyzer has good coverage; for typical texts, the correct analysis of an orthographic word can be found somewhere in SAMA's list of alternatives about 95% of the time. | this broad coverage comes at a cost; the list of analytic alternatives must include a long Zipfian tail of rare or contextually-implausible analyses, which collectively are correct often enough to make a large contribution to the coverage statistics. | contrasting |
train_13467 | It is straightforward to read off the highest scoring parse from a packed chart, and similarly routine to generate an n-best list containing a highly-ranked subset of the parses. | a packed chart built on an atomic CFG does not make available all of the features that are important to many CFG-based SRL systems. | contrasting |
train_13468 | Finding the values of θ k using the MLE method is straightforward. | this is not the case for maximising the likelihood function over the space of all possible dendrograms. | contrasting |
train_13469 | The largest performance difference is 6.0% at p 1 = 45, p 2 = 10 and p 3 = 0.05. | this picture is not the same when considering a higher context similarity threshold (p 3 = 0.13) as Figure 7 shows. | contrasting |
train_13470 | The advantage of implicit modeling is that it is easy to implement based on the dependency structure. | its limitation is that the distance measure does not capture sufficient information of semantic relations between language constituents. | contrasting |
train_13471 | We predict that h j (x, y) is entailed if the distance between x * and y * is smaller than λ L . | as can be seen, this distance does not reflect whether the type of relationship between x * and y * is similar to the relationship that holds between x and y. | contrasting |
train_13472 | For fact, the most benefit of incorporating explicit modeling of long distance relationship appears at the alignment stage, but not much at the inference stage. | this situation is different for intent, where the benefit of explicitly modeling long distance relationship mostly happened at the inference stage. | contrasting |
train_13473 | Therefore, it is important to develop algorithms that find features which work across domains. | labeled adaptation frameworks are also required because we would like to take advantages of target labeled data. | contrasting |
train_13474 | As our example adaptation algorithms we selected: Labeled adaptation: FE framework One of the most popular adaptation frameworks that requires the use of labeled target data is the "Frustratingly Easy" (FE) adaptation framework (Daumé III, 2007). | why and when this framework works remains unclear in the NLP community. | contrasting |
train_13475 | Before showing the bound analysis, note that the framework proposed by (Evgeniou and Pontil, 2004;Finkel and Manning, 2009) is a generalization over these three frameworks (FE, S+T, and the baseline) 6 . | our goal in this paper is different: we try to provide a deep discussion on when and why one should use a particular framework. | contrasting |
train_13476 | When β is close to zero, z 1 and z 2 will be very similar. | when β is large, z 1 and z 2 can be very different. | contrasting |
train_13477 | In the previous experiment, we used only cosine as our task similarity measurement to decide what is the best framework. | task similarity should consider the difference in both P (X) and P (Y |X), and the cosine measurement is not sufficient for this. | contrasting |
train_13478 | features improves the Token-F1 by around 10%. | adding target labeled data also helps the results significantly. | contrasting |
train_13479 | In this case, two vocabularies need to be considered, corresponding respectively to the context vocabulary V c used to define the history; and the prediction vocabulary V p . | this method fails to deliver any probability estimate for words outside of the prediction vocabulary, meaning that a fall-back strategy needs to be defined for those words. | contrasting |
train_13480 | We can observe that the LBL model converges faster than the standard model: the latter needs 13 epochs to reach the stopping criteria, while the former only needs 6 epochs. | upon convergence, the standard model reaches a lower perplexity than the LBL model. | contrasting |
train_13481 | The unsupervised nature of the approach means good ability to deal with domain variation. | the approach did not show a segmentation performance as good as that of the supervised approach. | contrasting |
train_13482 | 5 To make the results comparable, we use the same feature templates, that is feature template (a)-(c). | sVM-HMM takes interactions between nearby labels into the model, which means there is a label bigram feature template implicitly used in the sVM-HMM. | contrasting |
train_13483 | One concern to the bootstrapping approach in this paper is that it takes time to work with, which will make it difficult to be incorporated into language applications that need to responses in real time. | we believe that such an approach can be used in offline contexts. | contrasting |
train_13484 | Note that the LDC algorithm is deterministic. | the randomness in the sparse-matrix implementation of reduced-rank SVD used in the initialization step causes a small variability in performance (the standard deviation of the MTO score is 0.0004 for PTB17 and 0.003 for PTB45). | contrasting |
train_13485 | Both models use an iterative approach to minimize an objective function, and both initialize with frequent words. | the model of Ney et al. | contrasting |
train_13486 | Figure 4 clearly shows that hybrid-morfette linkers outperform hybrid-maxent linkers on unknown words. | figures 2-4 show that hybridmorfette's advantage on unknown words is counteracted by its lower performance on known words; therefore, it has slightly lower overall accuracy than hybrid-maxent. | contrasting |
train_13487 | The proposed criterion with an automatically determined threshold value produced slightly worse results than that of Jin and Tanaka-Ishii at the CHILDES corpus. | we found out that our approach achieves approximately 1% higher score when the best performing threshold value is selected from the candidate list. | contrasting |
train_13488 | Both our decoding algorithm and the decoding algorithm of Z&C08 run in linear time. | in order to generate possible candidates for each character, Z&C08 uses an extra loop to search for possible words that end with the current character. | contrasting |
train_13489 | A restriction to the maximum word length is applied to limit the number of iterations in this loop, without which the algorithm would have quadratic time complexity. | our decoder does not search backword for the possible starting character of any word. | contrasting |
train_13490 | (2009) is based on Nakagawa and Uchimoto (2007), which separates the processing of known words and unknown words, and uses a set of segmentation tags to represent the segmentation of characters. | our model is conceptually simpler, and does not differentiate known words and unknown words. | contrasting |
train_13491 | Dynamic programming is exact inference, for which the time complexity is decided by the locality of feature templates. | beam-search is approximate and can run in linear time. | contrasting |
train_13492 | By design, they readily capture regularities at the token-level. | these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | contrasting |
train_13493 | In total there are O(K 2 ) parameters associated with the transition parameters. | to the Bayesian HMM, θ t is not drawn from a distribution which has support for each of the n word types. | contrasting |
train_13494 | BoW+P+P t ) helped to achieve higher accuracies, and the Author feature was also beneficial. | list decreased the performance, as the flow of dialogues can change, and when a larger history of dialogue acts is included, it tends to introduce noise. | contrasting |
train_13495 | Instead, Zhao and Ng (2007) proposed featurebased methods to zero anaphora resolution on the same corpus from Convese (2006). | they only considered zero anaphors with explicit noun phrase referents and discarded those with split an-tecedents or referring to events. | contrasting |
train_13496 | Isozaki and Hirao (2003) explored some ranking rules and a machine learning method on zero anaphora resolution. | they assumed that zero anaphors were already detected and each zero anaphor's grammatical case was already determined by a zero anaphor detector. | contrasting |
train_13497 | In fact, the ratio of positive and negative instances reaches about 1:12. | this ratio is much better than that (1:30) using the heuristic rule as described in Zhao and Ng (2007). | contrasting |
train_13498 | Actually, among the remaining 7% zero anaphors, about 5% are driven by a preposition phrase (PP) node, and 2% are driven by a noun phrase (NP) node. | our preliminary experiments show that simple inclusion of those PP-driven and NPdriven zero anaphors will largely increase the imbalance between positive and negative instances, which significantly decrease the performance. | contrasting |
train_13499 | This rate is almost the same as that of bunsetsu boundaries. | the commas inserted at the clause boundaries "topicalized element-wa" accounted for 8.84% (1,446/16,357) of all the inserted commas. | contrasting |
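
Every row shown in this preview carries the `contrasting` label (one of the four label classes). For readers who want to work with rows in this schema programmatically, the sketch below is a minimal, self-contained example using the Hugging Face `datasets` library: it rebuilds two of the preview rows as an in-memory `Dataset` and filters them by label. The dataset's published identifier is not given on this page, so loading the full corpus is intentionally left out; the snippet only illustrates the column layout.

```python
from datasets import Dataset

# Minimal in-memory sample mirroring the four-column schema above
# (id, sentence1, sentence2, label); both rows are copied from the preview.
sample = Dataset.from_dict({
    "id": ["train_13400", "train_13403"],
    "sentence1": [
        "adopt a stratified sampling strategy to select the seed set.",
        "In general, the minimization of the normalized cut criterion is NP-complete.",
    ],
    "sentence2": [
        "their method needs a stratification variable such as the known distribution "
        "of the relation types, while our method uses clustering to divide relation "
        "instances into different strata.",
        "the linearity constraint on text segmentation for monologue allows them to "
        "find an exact solution in polynomial time.",
    ],
    "label": ["contrasting", "contrasting"],
})

# Keep only the pairs annotated with the "contrasting" relation and inspect one.
contrasting_pairs = sample.filter(lambda row: row["label"] == "contrasting")
print(len(contrasting_pairs), contrasting_pairs[0]["id"])
```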