id (stringlengths 7-12) | sentence1 (stringlengths 6-1.27k) | sentence2 (stringlengths 6-926) | label (stringclasses, 4 values) |
---|---|---|---|
train_12500 | (what is a crossing when moving from English to French is not guaranteed to be a crossing when moving from French to English). | we only pursue one translation direction since that is the one for which we have parsed data. | contrasting |
train_12501 | The major reason may be that the resolution algorithm relies on surface features and does not have access to world or domain knowledge, which we did not want to depend upon since we were mainly interested in cheap features. | the string ident and substring match features did not perform very well either. | contrasting |
train_12502 | Other researchers like Vieira and Poesio (2000) used information about the syntactic structure and compared only the syntactic heads of the phrases. | the feature used by Soon et al. | contrasting |
train_12503 | In our two-phase approach it is straightforward for the second-phase classifier to take the length of a candidate phone number into account. | standard named entity taggers that use trigram features do not exploit this information, and doing so would entail significant changes to the underlying models and parameter estimation procedures. | contrasting |
train_12504 | Mandarin Chinese is, of course, not resource-deficient for language modeling - 100s of millions of words are available on-line. | we choose it for our experiments partly because it is sufficiently different from English to pose a real challenge, and because the availability of large text corpora in fact permits us to simulate controlled resource deficiency. | contrasting |
train_12505 | Looking at the broader context they appear in can provide additional insight: if the types of information expressed in the contexts are similar, then the specific information expressed in the sentences is more likely to be the same. | if the types of information in the two contexts are unrelated, chances are that the sentences should not be aligned. | contrasting |
train_12506 | We compiled two collections from the Encyclopedia Britannica and Britannica Elementary. | to the long (up to 15-page) detailed articles of the Encyclopedia Britannica, Britannica Elementary contains one-to two-page entries targeted towards children. | contrasting |
train_12507 | The two models may contain overlapping information: in many cases, the lexical cue corresponds to the immediate head-word the EE depends on. | other surrounding words (which frequently correspond to the headword of the grandparent of the empty node) often carry important information, especially for distinguishing NP-NP and PRO-NP nodes. | contrasting |
train_12508 | Therefore our new tree kernel is more lexicalized. | it immediately follows that the lexicalized tree kernel is well-defined. | contrasting |
train_12509 | Then we combine the results of individual SVMs with simple combination. | the overall performance does not improve. | contrasting |
train_12510 | Naive Bayes classifiers are fairly resistant to class skewness, which can only exert its influence on classifier prediction via the class priors. | decision lists suffer from skewed class distributions. | contrasting |
train_12511 | Frequency counts of feature-value pairs in these classifiers are updated independently, and thus a single instance can possibly contribute to the discovery of more than one useful feature-value pair. | some classifiers such as decision trees are not able to take advantage of this redundancy because of their intrinsic nature of recursive data partitioning. | contrasting |
train_12512 | (2003a) also investigate instance selection methods for co-training, but their goal is primarily to use selection methods as a means to explore the trade-off between maximizing coverage and maximizing accuracy. | our focus here is on examining whether a more conservative ranking method can alleviate the problem of performance deterioration. | contrasting |
train_12513 | If the underlying learners have indeed induced two different hypotheses from the data, then each classifier can potentially acquire informative instances from the other and yield performance improvements very rapidly. | our ranking method is more conservative in that it places more emphasis on maintaining labeled data accuracy than the B&M method. | contrasting |
train_12514 | This inequality constraint falls into a type of fat constraints (Khudanpur, 1995). | as noted in (Chen and Rosenfeld, 2000), this type of constraint has not yet been applied nor evaluated for NLPs. | contrasting |
train_12515 | For this reason, HMMs have been standardly used with current word-current label, and previous label(s)-current label features. | if we incorporate information about the neighboring words and/or information about more detailed characteristics of the current word directly to our model, rather than propagating it through the previous labels, we may hope to learn a better classifier. | contrasting |
train_12516 | The best performing models of label sequence learning are MEMMs or PMMs (also known as Maximum Entropy models) whose features are carefully designed for the specific tasks (Ratnaparkhi, 1999;Toutanova and Manning, 2000). | maximum entropy models suffer from the so called label bias problem, the problem of making local decisions (Lafferty et al., 2001). | contrasting |
train_12517 | To overcome the numerical problems of working with a product of a large number of small probabilities, usually the logarithm of the likelihood of the data is optimized. | most of the time, these systems, sequence labelling systems in particular, are tested with respect to their error rate on test data, i.e. | contrasting |
train_12518 | Sequential Exp-loss function: This loss function was first introduced in (Collins, 2000) for NLP tasks with a structured output domain. | there, the sum is not over the whole possible label sequence set, but over the best label sequences generated by an external mechanism. | contrasting |
train_12519 | Thus, one only needs to modify those sum[i, y] that satisfy f*(x_i, y) = 1, and to make changes to their corresponding normalizing factors z[i]. | to what is shown in Berger et al. (1996)'s paper, here is how the different values in this variant of the IFS algorithm are computed. | contrasting |
train_12520 | Furthermore, as the length of context increases, the ratio for the Kneser-Ney smoothed model becomes greater - a clear sign of over-parameterization. | the ratio for the neural network model changes very little even when the length of the context increases from 4 (2H) to 8 (3H-1OP). | contrasting |
train_12521 | Figure 4: Learning curves As we expected, the learning curve of the training data in EM iteration 1 is not as smooth as that in EM iteration 0, and even more so for the heldout data. | the general trend is still decreasing. | contrasting |
train_12522 | In (3), "other risk factors for Mr. Cray's company" refers to a set of risk factors excluding the designer's age. | in list-contexts such as (4), the antecedent is available both anaphorically and structurally, as the left conjunct of the anaphor. | contrasting |
train_12523 | The algorithm's performance with this feature set is encouraging. | the semantic knowledge the algorithm relies on is not sufficient for many cases of other-anaphors (Section 4.2). | contrasting |
train_12524 | In this way, our zero resolver creates a 'general purpose' candidate list. | some of the candidates are inappropriate for certain zeros. | contrasting |
train_12525 | Siblings When CP is wa or mo, it is not clear whether φ is a subject. | a verb rarely has the same entity in two or more cases. | contrasting |
train_12526 | Many of these algorithms are, in principle, language-independent. | when applying these algorithms to languages such as Chinese and Japanese, we must deal with certain language-specific issues: for example, should we build a character-based model or a word-based model? | contrasting |
train_12527 | On one hand, the word-based model is attractive since it allows the system to inspect a larger window of text, which may lead to more informative decisions. | a word segmenter is error-prone and these errors may propagate and result in errors in NE recognition. | contrasting |
train_12528 | Our HMM classifier for English uses a set of word-features to indicate whether a word contains all capitalized letters, only digits, or capitalized letters and period, as described in (Bikel et al., 1999). | chinese does not have capitalization. | contrasting |
train_12529 | In the previously described methods, also known as voting, each classifier gave its entire vote to one classification - its own output. | equation (2) allows for classifiers to give partial credit to alternative classifications, through the probability it assigns to each of them. | contrasting |
train_12530 | For example, suppose that we have a set of documents as in Table 1. | some possible virtual examples generated from Document 1 by the GenerateByDeletion algorithm are […]. The GenerateByAddition algorithm is: 1. | contrasting |
train_12531 | We have reduced the problem to a polynomial size QP, which, in principle, can be solved using standard QP toolkits. | although the number of variables and constraints in the factored dual is polynomial in the size of the data, the number of coefficients in the quadratic term in the objective is very large: quadratic in the number of sentences and dependent on the sixth power of sentence length. | contrasting |
train_12532 | Nonetheless, generative approaches are vastly cheaper to train, since they must only collect counts from the training set. | the max-margin approach does have the potential to incorporate many new kinds of features over the input, and the current feature set allows limited lexicalization in cubic time, unlike other lexicalized models (including the Collins model which it outperforms in the present limited experiments). | contrasting |
train_12533 | Unlike the other two strategies, predictions must sometimes be made when there is no immediately adjacent complete node in the tree. | these comparisons are not conclusive, because the choice of features for the state representation may also have an important role in the differences. | contrasting |
train_12534 | They report 95% coverage and 75% average recall and precision on sentences of length ≤ 40 with 490 popped edges; this is ten times the minimum number of steps. | to get complete coverage, they required 1760 popped edges, which is a factor of 37 greater than the minimum. | contrasting |
train_12535 | For example, using a 1.5GB newspaper corpus, here are the 20 most associated paths to "X solves Y" generated by DIRT: Y is solved by X, X resolves Y, X finds a solution to Y, X tries to solve Y, X deals with Y, Y is resolved by X, X addresses Y, X seeks a solution to Y, X does something about Y, X solution to Y, Y is resolved in X, Y is solved through X, X rectifies Y, X copes with Y, X overcomes Y, X eases Y, X tackles Y, X alleviates Y, X corrects Y, X is a solution to Y, X makes Y worse, X irons out Y This list of associated paths looks tantalizingly close to the kind of axioms that would prove useful in an inference system. | dIRT only outputs pairs of paths that have some semantic relation. | contrasting |
train_12536 | To conclude, several approaches exhaustively process different types of corpora, obtaining varying scales of output. | the Web is a huge promising resource, but current Web-based methods suffer serious scalability constraints. | contrasting |
train_12537 | The last approach is the most general with respect to the template form. | its processing time increases exponentially with the size of the templates. | contrasting |
train_12538 | On one hand, an anchor set should correspond to a sufficiently specific setting, so that entailment would hold between its different occurrences. | it should be sufficiently frequent to appear with different entailing templates. | contrasting |
train_12539 | The challenge of these tasks varies by the degree of parallel-ness of the input multilingual documents. | the non-parallel corpora used so far in the previous work tend to be quite comparable. | contrasting |
train_12540 | Since previous works were carried out on different corpora, in different language pairs, we cannot directly compare our method against them. | we implement a baseline method that follows the same "find-topic-extract-sentence" principle as in earlier work. | contrasting |
train_12541 | In the HMM model we incorporated auxiliary LMs by interpolation, which is not possible here since there is no LM per se, but rather N-gram features. | we can use the same trick as we used for prosodic features. | contrasting |
train_12542 | Note that the distance in words from the previous paragraph boundary (Dist_w) is a good indicator for a paragraph break in the English news domain. | this feature is less useful for the other two languages. | contrasting |
train_12543 | They removed an additional 2% of the training data due to issues involving the named entity tagger splitting corpus tokens into multiple words. | where these issues occurred in tagging the section 23 test sentences, they were manually corrected. | contrasting |
train_12544 | This random selection method should act to further reduce the correlation between trees and Breiman notes that it gets around the problem caused by categorical inputs with large numbers of values. | he leaves the number of values chosen unspecified. | contrasting |
train_12545 | The Random Forest approach outperforms the Bayesian method and the Decision Tree method. | it does not perform as well as the SVM classifier. | contrasting |
train_12546 | We mention the performance of their method where appropriate below. | our results are compared to human annotation of chunked data, while theirs (and other supervised results) are compared to manually annotated full sentences. | contrasting |
train_12547 | In terms of overall results, the MBL model outperforms the Maxent model by 3 to 4 points F-score. | all our results lie broadly in the range of existing systems with a similar architecture (i.e. | contrasting |
train_12548 | Some of the plausible variables which might explain the variance are the number of semantic roles per frame, the amount of training data, and the number of verbs per frame. | we suggest that a fourth variable might have a more decisive influence. | contrasting |
train_12549 | Kaufman and Rousseeuw (1990), which estimates the "cost" of a cluster as the sum of squared distances between each vector and the cluster centroid. Under this view, a good cluster is one with a low cost, and the goal of the clustering algorithm is to minimise the average distance to the centroid. | for our purposes it is more convenient for a good cluster to have a high rating. | contrasting |
train_12550 | As for the two statistical frameworks, uniformity is better correlated with the Maxent model than with the MBL model, even though MBL performs better on the evaluation. | this does not mean that the correlation will become weaker for semantic role labelling systems performing at higher levels of accuracy. | contrasting |
train_12551 | Since uniformity is defined in terms of a quality function, clustering would be the natural method to employ for this task. | this method is only viable for frames with a large amount of annotation. | contrasting |
train_12552 | We find that the parser performs well on the object extraction cases found in the Penn Treebank, given the difficulty of the task. | the parser performs poorly on questions from TREC, due to the small number of questions in the Penn Treebank. | contrasting |
train_12553 | This small study only provides anecdotal evidence for the reasons the parser is unable to recover some long-range object dependencies. | the analysis suggests that the parser fails largely for the same reasons it fails on other WSJ sentences: wrong attachment decisions are being made; the lexical coverage of the supertagger is lacking for some verbs; the model is sometimes biased towards incorrect lexical categories; and the supertagger is occasionally led astray by incorrect POS tags. | contrasting |
train_12554 | The model is very much like a Hidden Markov Model in which the summary is the observed sequence. | using a standard HMM would not allow us to account for phrases in the summary. | contrasting |
train_12555 | That is, when given an abstract containing "the man" and a document also containing "the man," a human may prefer to align "the" to "the" and "man" to "man." | a phrase-based model will almost always prefer to align the entire phrase "the man" to "the man." | contrasting |
train_12556 | StoryStation was designed by researchers in conjunction with two teachers and a group of students. | both students and teachers indicated StoryStation would be significantly improved if it were enhanced with an agent that could give feedback about the plot of a story. | contrasting |
train_12557 | Good: A good story shows that the pupil was listening to the story, and can recall the main events and links in the plot. | the pupil shows no deeper understanding of the plot, which can often be detected by the pupil leaving out an important link or emphasizing the wrong details. | contrasting |
train_12558 | A general plot analysis agent would be more useful than our current system, which is successful by virtue of the story rewriting task being less complex than full story understanding. | our system fulfills an immediate need in the StoryStation application, in contrast to more traditional story-understanding and story-generation systems, which are usually used as testing grounds for theoretical ideas in artificial intelligence. | contrasting |
train_12559 | Indeed the total perplexity on the test corpus is 21.3 and the one obtained with the specific LMs is 17.8, so a gain of 16% compared to the 26.8% obtained previously. | this result is not surprising as the previous method was designed for specifically decreasing the perplexity measure. | contrasting |
train_12560 | As we have seen in the previous section, specific dialog situations (like those obtained with the hierarchical clustering) proved to be more efficient than the semantic channel (represented by the calltype labels) for clustering utterances in relation with the language used, at least from the perplexity point of view. | for clustering dialogs rather than utterances, the semantic channel is the main channel that we are going to use because in this case we want to characterize the whole interaction between a system and a user rather than just the language used. | contrasting |
train_12561 | Their methodology provides striking results within a limited domain characterized by a high frequency of stereotypical sentence types. | as we show below, the approach may be of limited generality, even within the training domain. | contrasting |
train_12562 | where the two alignment directions differ by less than 5%, we have presented the average of the directional AERs. | following SMT practice of augmenting data with a bilingual lexicon, we did append an identity lexicon to the training data. | contrasting |
train_12563 | The model is thus deficient in that it assigns a large portion of its probability mass to impossible cases: those instances which have words in the context which do not match those in the sentence. | because the sentences are always observed, we only consider instances in the set of consistent cases, so the deficiency should be irrelevant for the purpose of reasoning about sense and SCF. | contrasting |
train_12564 | the sense model again believes that the sense 2:42:04 is most likely. | the SCF model correctly gives high weight to the NP frame, which when combined with the joint distribution, gives much more probability to the sense 2:30:01. | contrasting |
train_12565 | The frequencies for attributes and values were again collected as in the first experiment. | these data were used in a different way. | contrasting |
train_12566 | The maximum-entropy (ME) principle, which prescribes choosing the model that maximizes the entropy out of all models that satisfy given feature constraints, can be seen as a built-in regularization mechanism that avoids overfitting the training data. | it is only a weak regularizer that cannot avoid overfitting in situations where the number of training examples is significantly smaller than the number of features. | contrasting |
train_12567 | (2003) assess the computational complexity for standard gradient-based optimization with the full feature set by ≈ cmp^2·τ, for a multiple c of p line minimizations for p derivatives over m data points, each of which has cost τ. | for grafting, the cost is assessed by adding up the costs for feature testing and optimization for s grafting steps as ≈ (msp + (1/3)cms^3)·τ. | contrasting |
train_12568 | In any case, since this is engineering, the rationalization for a feature is far less important than the model's overall performance increase. | science would demand that, at some point, we analyze the multitude of features in a state-of-the-art lexicalized statistical parsing model. | contrasting |
train_12569 | From a history-based grammar perspective, there are 727,930 types of history contexts from which futures are generated. | 401,447 of these are singletons. | contrasting |
train_12570 | This is of particular concern when a head word, which the top-down model generates at its highest point in the tree, influences an attachment decision. | inspecting the low-entropy word-generation histories of P_Mw revealed that almost all such cases are when the model is generating a preterminal, and are thus of little to no consequence vis-a-vis syntactic disambiguation. | contrasting |
train_12571 | In machine translation, intuitively, the informative content words should be emphasized more for better adequacy of the translation quality. | the standard statistical translation approach does not take into account how informative, and thereby how important, a word is in its translation model. | contrasting |
train_12572 | Any statistical translation lexicon can be used in (1) to calculate the phrase translation probability. | in our experiment we typically see no significant difference in translation results when using lexicons trained from different alignment models. | contrasting |
train_12573 | Allowing m-to-n matching of up to two nodes on either side of the parallel treebank allows for limited nonisomorphism between the trees. | even given this flexibility, requiring alignments to match two input trees rather than one often makes tree-to-tree alignment more constrained than tree-to-string alignment. | contrasting |
train_12574 | In some cases, differences in the number of levels may be handled by the tree-to-tree model, for example by grouping the subject NP and its base NP child together as a single elementary tree. | this introduces unnecessary variability into the alignment process. | contrasting |
train_12575 | Previous work in CRFs assumed that observation sequence (word) boundaries were fixed. | word boundaries are not clear in Japanese, and hence a straightforward application of CRFs is not possible. | contrasting |
train_12576 | Statistical parsers tend to scale exponentially in sentence length, unless a narrow beam is employed, which leads to globally poorer parses. | the bracketer described in this paper scales linearly in the length of the sentence to find the globally optimal solution. | contrasting |
train_12577 | We use a Viterbi-like dynamic programming decoding algorithm, where transition probabilities are governed by the discriminative tagging model. | the tags generated by our decoder are not the same as those predicted by the maximum entropy model. | contrasting |
train_12578 | To solve these problems in the tagging model would be nearly impossible, without giving up on efficiency. | our decoder is able to produce n-best lists using exact A * search that very frequently contain globally superior taggings, even though the simple tagging model cannot recognize them as such. | contrasting |
train_12579 | The first thing worth noticing in this table is that in general, when one system achieves higher precision, the other system achieves higher recall, which is not surprising. | in the last row, corresponding to proper nouns, the RR2 system outperforms the COL03 (this is the "Full" implementation) in both precision and recall, suggesting that our system is better able to capture the phrasing of proper nouns. | contrasting |
train_12580 | That is, this is the performance attainable given a chunker that identifies base NPs perfectly (at 100% precision). | since this hypothetical system only chunks base NPs, it misses all non-base NPs and thus achieves a recall of only 73.0, yielding an overall F-score below our system's performance. | contrasting |
train_12581 | In the original formulation of BWI, boundaries are identified without reference to the location of the opposing boundary. | we might expect that the end of a name, say, would be easier to identify if we know where it begins. | contrasting |
train_12582 | As a specific NLP task, we will consider partof-speech (POS) tagging. | the problem addressed comes up in any NLP task which is tackled by the statistical approach and which makes use of a Bayes decision rule. | contrasting |
train_12583 | Chinese part-of-speech (POS) tagging assigns one POS tag to each word in a Chinese sentence. | since words are not demarcated in a Chinese sentence, Chinese POS tagging requires word segmentation as a prerequisite. | contrasting |
train_12584 | When a paired t-test was carried out at the level of significance 0.01, the all-at-once approach was found to be significantly better than the one-at-a-time approach for POS tagging accuracy, although the difference was insignificant for word segmentation. | the time required for training and testing is increased significantly for the all-at-once approach. | contrasting |
train_12585 | It can be shown that, if we smooth the A model with a Gaussian prior on the feature weights that is centered at 0, following the approach in (Chen and Rosenfeld, 2000) for smoothing maximum entropy models, then the MinDiv update equations for estimating A on the adaptation data are identical to the MAP adaptation procedure we proposed. | we wish to point out that the equivalence holds only if the feature set for the new model A is F_background ∪ F_adapt. | contrasting |
train_12586 | Thus, a spell checker built according to this formulation could suggest the correction detroittigers because this alternative occurs frequently enough in the employed query log. | detroittigers itself could be corrected to detroit tigers if presented as a stand-alone query to this spell checker, based on similar query-log frequency facts, which naturally leads to the idea of an iterative correction approach. | contrasting |
train_12587 | Most features, including " ", are assigned a negative weight (negative opinion). | only one feature " (hard to cut off)" has a positive weight. | contrasting |
train_12588 | There has been some previous use of machine learning to classify email messages (Cohen 1996;Sahami et al., 1998;Rennie, 2000;Segal & Kephart, 2000). | to our knowledge, none of these systems has investigated learning methods for assigning email speech acts. | contrasting |
train_12589 | Like a proposal, the message involves both a commitment and a request. | while a proposal is associated with a new task, an amendment is a suggested modification of an already-proposed task. | contrasting |
train_12590 | In Experiment 1, we replicated the entropy rate effect reported by Charniak (2002, 2003) and showed that it generalizes to a larger range of sentence positions and also holds for individual sentences, not just averaged over all sentences with the same position. | we also found that a simple baseline model based on sentence length achieves a correlation with sentence position. | contrasting |
train_12591 | Significant improvements in both perplexity (PPL) and word error rate (WER) over backoff smoothing were reported after interpolating the neural network models with the baseline backoff models. | the neural network models rely on interpolation with n-gram models, and use n-gram models exclusively for low frequency words. | contrasting |
train_12592 | Most of the studies focused on improving n-gram language models by adopting various smoothing techniques in growing and using DTs (Bahl et al., 1989;Potamianos and Jelinek, 1998). | the results were not satisfactory. | contrasting |
train_12593 | A standard t-test shows that the improvements are significant at the p < 0.001 and p < 0.05 levels respectively. | we notice that the improvement in WER using the trigram with 40M words is not as much as the trigram with 20M words. | contrasting |
train_12594 | Segmentation algorithms, such as TEXTTILING (Hearst, 1994) and its recent successors using the inter-paragraph similarity matrix (Choi, 2000), all use nonstructural cosine similarity as a measure of semantic proximity between paragraphs. | the distance function so far has been largely defined and used ad hoc, usually by a tf.idf weighting scheme (Salton and Yang, 1973) and a simple cosine similarity, equivalently, an Euclidean dot product. | contrasting |
train_12595 | As a method of tackling clusters of texts, the text classification task has recently made great advances with a Naïve Bayes or SVM classifiers (for example, (Joachims, 1998)). | they all aim at classifying texts into a few predefined clusters, and cannot deal with a document that fits neither of the clusters. | contrasting |
train_12596 | We can see from both results that metric distance produces a better retrieval over the tf.idf and dot product. | refinements in precision are certain (average p = 0.0243) but subtle. | contrasting |
train_12597 | They report that the effectiveness of A increases as the number of the training pairs S increases; this requires O(n^2) sample points from n training data, and must be optimized by a computationally expensive Newton-Raphson iteration. | our method uses only linear algebra, and can induce an ideal metric using all the training data at the same time. | contrasting |
train_12598 | Regular roots such as p.s.q yield forms such as hpsqh. | the irregular roots n.p.l, i.c.g, q.w.m and g.n.n in this pattern yield the seemingly similar forms hplh, hcgh, hqmh and hgnh, respectively. | contrasting |
train_12599 | There are 8 different human judges for DUC 2004 Task 2, and 4 for DUC 2004 Task 4. | a subset of exactly 4 different human judges produced model summaries for any given cluster. | contrasting |
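Rows like the ones above follow the standard Hugging Face `datasets` column layout (id, sentence1, sentence2, label). Below is a minimal sketch of loading and filtering such rows programmatically, assuming the `datasets` library is installed; the repository id `user/contrasting-pairs` is a placeholder, not this dataset's actual name.

```python
# A minimal sketch, not this dataset's documented loader: the repository id
# "user/contrasting-pairs" is hypothetical and must be replaced with the
# real Hugging Face repo id for this corpus.
from datasets import load_dataset

ds = load_dataset("user/contrasting-pairs", split="train")  # hypothetical id

# Keep only the pairs annotated with the "contrasting" relation
# (one of the 4 label classes noted in the header above).
contrasting = ds.filter(lambda row: row["label"] == "contrasting")

# Print a few examples in the same column order as the table above.
for row in contrasting.select(range(3)):
    print(row["id"], "|", row["sentence1"][:60], "|", row["sentence2"][:60])
```

Filtering by the label column rather than by row id mirrors how the viewer groups examples; the same pattern extends to the other three label classes.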