id (string, 7–12 chars) | sentence1 (string, 6–1.27k chars) | sentence2 (string, 6–926 chars) | label (4 classes) |
---|---|---|---|
train_10600 | Hence, evaluating against the full tag set that includes punctuation artificially increases the quality of the reported results, which is why we report results for the non-punctuation tag set. | to be able to directly compare with previous work, we also report results for the full WSJ POS tag set. | contrasting |
train_10601 | The task of marking up these expressions has usually been approached using straightforward sequence labeling techniques using simple features in a small contextual window (Choi et al., 2006; Breck et al., 2007). | due to the simplicity of the feature sets, this approach fails to take into account the fact that the semantic and pragmatic interpretation of sentences is not only determined by words but also by syntactic and shallow-semantic relations. | contrasting |
train_10602 | Conventionally, when measuring the quality of a system for an information extraction task, a predicted entity is counted as correct if it exactly matches the boundaries of a corresponding entity in the gold standard; there is thus no reward for close matches. | since the boundaries of the spans annotated in the MPQA corpus are not strictly defined in the annotation guidelines, measuring precision and recall using exact boundary scoring will result in figures that are too low to be indicative of the usefulness of the system. | contrasting |
train_10603 | The most visible effect of the reranker is that the recall is greatly improved. | this does not seem to have an adverse effect on the precision until the candidate set size goes above 8; in fact, the precision actually improves over the baseline for small candidate set sizes. | contrasting |
train_10604 | Clustering evaluation has been extensively investigated (Section 3). | the discussion centers around the monosemous case, where each item belongs to exactly one cluster, although polysemy is the common case in NLP. | contrasting |
train_10605 | Hence, H(K|C) = H(K) and H(C|K) = H(C), and both V and NVI obtain their worst possible values. | the score should surely depend on r (the size of each word's gold entry). | contrasting |
train_10606 | Their approach also makes it possible to take the alignments of neighboring words into account. | to our work, they only have a very crude fertility model and they are considering a substantially different model. | contrasting |
train_10607 | In all cases the run-times are much higher than in the standard GIZA++ training. | we are now getting optimality guarantees where previously one could not even tell how far away one is from the optimum. | contrasting |
train_10608 | Example (3) is a factual statement without explicit opinion. | having a fast connection is a positive thing. | contrasting |
train_10609 | MLE estimators for all-fragment models are zero-biased, with zero divergence between the average estimate and the true data distribution. | their variance is unboundedly large, leading to unbounded generalisation error on unseen cases. | contrasting |
train_10610 | Furthermore, other frequently employed priors such as the Dirichlet distribution and the Dirichlet Process promote better generalising rule probability distributions based on externally set hyperparameter values, whose selection is frequently sensitive in terms of language pairs, or even the training corpus itself. | the CV-MLE prior aims for a data-driven Bayesian model, focusing on getting information from the data, instead of imposing external human knowledge on them (see also (MacKay and Peto, 1995)). | contrasting |
train_10611 | They also propose a technique, based on TF-IDF, to de-emphasize sentences similar to those that have already been selected. | this strategy is bootstrapped by random initial choices that do not necessarily favor sentences that are difficult to translate. | contrasting |
train_10612 | Active learning has been studied extensively in the context of multi-class labeling problems, and theoretically optimal selection strategies have been identified for simple classification tasks with metric features (Freund et al., 1997). | natural language applications such as SMT present a significantly higher level of complexity. | contrasting |
train_10613 | Similar to Co-training, the basic idea of Tri-training (Zhou and Li, 2005) is to iteratively expand the labeled training set for the next-round training based on the decisions of the current classifiers. | tri-training employs three classifiers instead of two. | contrasting |
train_10614 | It demonstrates that semi-supervised algorithms are able to learn more bi-lexical features automatically from the unlabeled data, which may help recognize more translation equivalences. | we also notice that the accuracy drops a little after Tri-training. | contrasting |
train_10615 | The topic-sentiment mixture (TSM) model (Mei et al., 2007) can jointly model sentiment and topics by constructing an extra background component and two additional sentiment subtopics on top of the probabilistic latent semantic indexing (pLSI) (Hofmann, 1999). | TSM may suffer from the problem of overfitting the data, which is known as a deficiency of pLSI, and postprocessing is also required in order to calculate the sentiment prediction for a document. | contrasting |
train_10616 | (2009) employed lexical prior knowledge for semi-supervised sentiment classification based on non-negative matrix tri-factorization, where the domain-independent prior knowledge was incorporated in conjunction with domain-dependent unlabelled data and a few labelled documents. | this approach performed worse than the JST model on the movie review data even with 40% labelled documents as will be shown in Section 5. | contrasting |
train_10617 | Gibbs sampling is used to estimate the posterior distribution of LSM, as well as the JST and Reverse-JST models that will be discussed in the following two sections. | to LSM that only models document sentiment, the JST model (Lin and He, 2009) can detect sentiment and topic simultaneously, by modelling each document with S (number of sentiment labels) topic-document distributions. | contrasting |
train_10618 | Another recently proposed non-negative matrix tri-factorization approach (Li et al., 2009) also employed lexical prior knowledge for semi-supervised sentiment classification. | when incorporating 10% of labelled documents for training, the non-negative matrix tri-factorization approach performed much worse than LSM, with only around 60% accuracy achieved for all the datasets. | contrasting |
train_10619 | For this work, a subset of 16 emotional categories from this level has been selected, since the hierarchy proposed in WordNet Affect is considerably broader than those commonly used in sentiment analysis. | the first level of emotional categories may be useful to predict the polarity, but it is clearly not enough to predict the intensity of this polarity. | contrasting |
train_10620 | In this sentence, the concepts risk, death and disease are labeled with an emotional category: in particular, the categories assigned to them are fear, fear and dislike respectively. | while the first two are retrieved from the affective lexicon by their own synsets, the last one is labeled through its hypernym: since no match is found for disease in the lexicon, the analysis over its hypernyms detects the category dislike assigned to the synset of its first hypernym, which contains words such as illness and sickness, and the same emotion (dislike) is assigned to disease. | contrasting |
train_10621 | Even though we expected that using quantifiers would lead to better results, the performance in both datasets decreases in 2 out of the 3 ML algorithms. | combining both features improves the results in both datasets. | contrasting |
train_10622 | Besides, the Most Frequent Sense (MFS) heuristic in WSD is usually regarded as a difficult competitor. | the improvement with respect to the random sense heuristic is quite remarkable. | contrasting |
train_10623 | A test on the news corpus removing those sentences not labeled with any emotional meaning has been performed for the 2-class classification problem, allowing the method to obtain an accuracy of 81.7%. | to correctly classify these sentences, it would be necessary to have additional information about their contexts (i.e. | contrasting |
train_10624 | Some research has used the text that surrounds an image in a news article as a proxy (Feng and Lapata, 2008; Deschacht and Moens, 2007). | in many cases, the surrounding text or a user-provided caption does not simply describe what is depicted in the image (since this is usually obvious to the human reader for which this text is intended), but provides additional information. | contrasting |
train_10625 | For example, we would like to be able to identify that "woman" and "person" may refer to the same entity, whereas "man" and "woman" typically would not. | we also do not want a type inventory that is too large or too fine-grained. | contrasting |
train_10626 | K-SVM performs best with all the data; it uses the most expressive representation, but needs 100K examples to make use of it. | feature augmentation and variance regularization provide diminishing returns as the amount of training data increases. | contrasting |
train_10627 | Therefore, we do not measure the performance of these tasks on the Words category set. | we do use the content feature in labeling the test examples in grammaticality judgment. | contrasting |
train_10628 | (2008) result in a better performance than word types, but they are still too sparse for this task. | the average score gained by part-of-speech tags is also lower than that gained by our categories. | contrasting |
train_10629 | Most work in IE has concentrated on entity extraction alone (Tjong Kim Sang, 2002; Sang and Meulder, 2003) or on relation extraction assuming entities are either given or previously extracted (Bunescu et al., 2005; Zhang et al., 2006; Giuliano et al., 2007; Qian et al., 2008). | these tasks are very closely inter-related. | contrasting |
train_10630 | They thus do joint syntactic parsing and information extraction using a fixed template. | as designed, such a CFG approach cannot handle the cases when an entity is involved in multiple relations and when the relations crisscross each other in the sentence, as in Figure 1. | contrasting |
train_10631 | (2009) present a method to simultaneously do semi-supervised training of entity and relation classifiers. | their coupling method is meant to take advantage of the available unsupervised data and does not do joint inference. | contrasting |
train_10632 | For example, one would not be able to use kernel-based SVM for relation extraction, which has been very successful at this task, because Markov Logic does not support kernel-based machine learning. | our joint approach is independent of the individual machine learning methods for entity and relation extraction, and hence allows use of the best machine learning methods available for each of them. | contrasting |
train_10633 | MD and that usually have their parents to the left. | in some constructions this is not the case, and the parser has a hard time learning these constructions. | contrasting |
train_10634 | improve the classification accuracy, while using bigrams brings a statistically significant improvement over a simple bag-of-words representation. | Medlock (2008) illustrates that "whether a particular term acts as a hedge cue is quite often a rather subtle function of its sense usage, in which case the distinctions may well not be captured by part-of-speech tagging". | contrasting |
train_10635 | The ideal method should recognize only one consecutive block for each hedge cue. | our classifier cannot always achieve this. | contrasting |
train_10636 | In the corpora provided for this task, scopes are annotated as continuous sequences of tokens that include the cue. | the classifiers only predict the first and last element of the scope. | contrasting |
train_10637 | Additionally, the system has been trained on a corpus that contains abstracts and full text articles, instead of only abstracts. | it is possible to confirm that, even with information on dependency syntax, resolving the scopes of hedge cues in biological texts is not a trivial task. | contrasting |
train_10638 | The GENIA tagger (Tsuruoka et al., 2005) plays an important role in our pre-processing set-up. | maybe somewhat surprisingly, we found that its tokenization rules are not always optimally adapted for the BioScope corpus. | contrasting |
train_10639 | For better normalization, we downcase base forms for all parts of speech except proper nouns. | GENIA does not make a PoS distinction between proper and common nouns, as in the Penn Treebank, and hence we give precedence to TnT outputs for tokens tagged as nominal by both taggers. | contrasting |
train_10640 | This provided us with an additional 855 hedged sentences. | the classifiers did not seem able to benefit from the additional training examples, and across several feature configurations performance was found to be consistently lower (though not significantly so). | contrasting |
train_10641 | Rules for scope detection, based on the grammatical relations of the sentence and the part-of-speech tag of the cue, were manually developed. | another supervised CRF classifier was used to refine these predictions. | contrasting |
train_10642 | In (10) below it is uncertain whether fat body disintegration is independent of the AdoR. | it is stated with certainty that fat body disintegration is promoted by action of the hemocytes, yet the latter assertion is included in the scope to keep it continuous. | contrasting |
train_10643 | The corresponding classifier must discriminate positive from negative candidates. | identifying one candidate as positive implies that some other candidates must be negatives. | contrasting |
train_10644 | The epistemic verb indicate has as its scope head the token approximation, due to the existence of a clausal complement dependency (ccomp) between them. | the rightmost token of the sentence, significance, has a prepositional modifier dependency (prep to) with approximation. | contrasting |
train_10645 | In fact, setting the threshold to 3 after the shared task, we were able to obtain overall better results (Precision=83.43, Recall=84.81, F-score=84.12, Rank=8/24). | we explicitly targeted precision, and in that respect, our submission results were not surprising. | contrasting |
train_10646 | (b) CUE: either-or SCOPE: either a sequencing error or a pseudogene. By handling this class to some extent, we could have increased our recall, and therefore, F-score (65 out of 1,044 cues in the evaluation data for biological text involved this class). | we decided against treating this class, as we believe it requires a slightly different treatment due to its special semantics. | contrasting |
train_10647 | (b) FP: possibly FN: possibly splitting them into different roles. Left/right expansion strategies were based on the analysis of the training data. | we encountered errors caused by these strategies where we found the annotations contradictory. | contrasting |
train_10648 | The classifier with all syntactic features achieves the best F1-score, 2.19% higher than the baseline classifier. | in a later experiment on the evaluation dataset after the shared task, we observed that dependency features actually harmed the performance on the full-articles dataset. | contrasting |
train_10649 | It seems that the best labeling result of task 1 can be used directly as the proper intermediate representation of task 2. | the complexity of scope finding for multi-hedge sentences forces us to modify the intermediate result of task 2 for the sake of handling the sentences with more than one hedge cue correctly. | contrasting |
train_10650 | In the near future, we will improve the hedge cue detection performance by investigating more implicit information of potential keywords. | we will study how to improve scope finding performance by integrating CRF-based and syntactic pattern-based scope finding systems. | contrasting |
train_10651 | We firstly tried to use the token as the basic unit for hedge cues. | several pieces of evidence suggest it is not appropriate. | contrasting |
train_10652 | Even though in the Wikipedia domain the TK+BF score is less than the baseline score, still the performance of the classifiers does not fall much in any of the in-domain and cross-domain experiments. | BF does not perform well in 5 of the 6 experiments. | contrasting |
train_10653 | Also, the highest cue level precision of 54.89% was obtained for the L class, whereas it was lowered to 51.13% by the addition of S and W features. | the performance improvement is due to the improved recall, which is in line with the expectation that syntactic features would help identify new patterns, which lexical features alone cannot. | contrasting |
train_10654 | Our experiments show that the addition of syntactic features helps in improving recall. | the advantage given by syntactic features was surprisingly marginal. | contrasting |
train_10655 | Therefore, in an unsupervised linguistic setting which is rife with ambiguity, modeling this connection can be particularly beneficial. | existing unsupervised morphological analyzers take little advantage of this linguistic property. | contrasting |
train_10656 | These approaches fit within standard statistical approaches to natural language processing, defining statistical objectives and inference strategies, with the learners trying to optimize some combination of the quality of their lexicon and representations of the corpus. | bootstrapping approaches (Gambell and Yang, 2004; Lignos and Yang, 2010) to word segmentation have focused on simple heuristics for populating a lexicon and strategies for using the contents of the lexicon to segment utterances. | contrasting |
train_10657 | Because of the dramatic performance gains shown by the addition of USC in testing, as well as the poor performance of TP, Gambell and Yang conclude that the USC is required for word segmentation and thus is a likely candidate for inclusion in Universal Grammar (Gambell and Yang, 2006). | as the results in Table 2 show, VE is capable of slightly superior performance on syllable input, without assuming any prior constraints on syllable stress distribution. | contrasting |
train_10658 | In tasks where the learned hypothesis is accurate enough, this incurs no performance loss and is computationally efficient, as the optimal policy is deterministic. | in event extraction the learned hypothesis is likely to make mistakes, thus the optimal policy does not provide a good approximation for it. | contrasting |
train_10659 | During the first stages of language acquisition children make a lot of errors, and parents are not constantly telling them that their sentences are wrong; rather the important thing is that they can communicate with each other. | it is worth studying whether other sources of negative evidence are provided to children. | contrasting |
train_10660 | Combination of instance and feature feedback has been shown to reduce the total annotation cost for supervised learning. | learning problems may not benefit equally from feature feedback. | contrasting |
train_10661 | Direct feedback on a list of features (Raghavan et al., 2006; Druck et al., 2008) is limited to simple features like unigrams. | unigrams are limited in the linguistic phenomena they can capture. | contrasting |
train_10662 | If a concept can be expressed using a few well-selected features from a large feature space, we stand to benefit from feature feedback as few labeled features can provide this information. | if learning a concept requires all or most of the features in the feature space, there is little knowledge that feature feedback can provide. | contrasting |
train_10663 | (2007) analyze benefit from feature feedback at a fixed training data size of 42 labeled units. | the difference between learning problems may vary with the amount of labeled data. | contrasting |
train_10664 | Some problems may benefit significantly from feature feedback even at relatively larger amounts of labeled data. | with a very large training set, the benefit from feature feedback can be expected to be small and not significant for all problems, and all problems will look similar. | contrasting |
train_10665 | (2007) evaluate benefit from feature feedback in terms of the gain in learning speed. | the learning rate does not tell us how much improvement we get in performance at a given stage in learning. | contrasting |
train_10666 | We found a significant negative correlation (−0.574) between annotation budget (number of AUs) and improvement in performance with feature feedback. | note that this correlation is not very strong, which supports our belief that factors other than the amount of labeled data affect benefit from feature feedback. | contrasting |
train_10667 | So far only a linear relationship of various measures with benefit from feature feedback has been considered. | some of these relationships may not be linear or a combination of several measures together may be stronger indicators of the benefit from feature feedback. | contrasting |
train_10668 | Both select high quality parses by computing the level of agreement among different parser outputs: whereas the former uses several versions of a constituency parser, each trained on a different sample from the training data, the latter uses the parses produced by different dependency parsing algorithms trained on the same data. | a widely acknowledged problem of both supervised-based and ensemble-based methods is that they are dramatically influenced by a) the selection of the training data and b) the accuracy and the typology of errors of the used parser. | contrasting |
train_10669 | For example, a research paper talking about "machine transliteration" may rarely or even never mention the phrase "machine translation". | since "machine transliteration" is a sub-field of "machine translation", it is also reasonable to suggest the phrase "machine translation" as a keyphrase to indicate the topics of this paper. | contrasting |
train_10670 | Let us take another example: in a news article talking about "iPad" and "iPhone", the word "Apple" may hardly ever come up. | it is known that both "iPad" and "iPhone" are the products of "Apple", and the word "Apple" may thus be a proper keyphrase of this article. | contrasting |
train_10671 | Compared to TextRank, ExpandRank performs better when facing the vocabulary gap by borrowing information at the document level. | the finding of neighbor documents is usually arbitrary. | contrasting |
train_10672 | WAM assumes each translation pair should be of comparable length. | a document is usually much longer than its title. | contrasting |
train_10673 | The simplest way to compute W′ from W is to let W′ = W, which boils down to using a dense, complete graph G with the unmodified all-pairs similarity as its edge weights. | it has been observed that a sparse W′ not only saves time needed for classification, but also results in better classification accuracy than the full similarity matrix W (Zhu, 2008). | contrasting |
train_10674 | (2009) reported that b-matching graphs achieve semi-supervised classification accuracy higher than k-NN graphs. | without approximation, building a b-matching graph is prohibitive in terms of computational complexity. | contrasting |
train_10675 | From Table 4, we see that mutual k-NN graphs perform significantly better than k-NN graphs. | there is no significant difference in the accuracy of the mutual k-NN graphs and b-matching graphs. | contrasting |
train_10676 | Their (non-probabilistic) approach finds dictionaries with a minimal number of entries. | the approach does not include a position model. | contrasting |
train_10677 | For the IBM-1 model, the first term alone results in a convex, but not strictly convex, minimization problem. | EM-like iterative methods generally do not reach the minimum: they are doing block coordinate descent (Bertsekas, 1999, chap. | contrasting |
train_10678 | The above algorithm is fast and can handle large corpora. | it still gets stuck in local minima, and there is no way of telling how close to the optimum one got. | contrasting |
train_10679 | For deriving cuts we tried all the methods implemented in the COIN Cut Generation Library CGL, based on the solver Clp from the same project line. | either the methods were very slow in producing cuts, or they produced only very few cuts. | contrasting |
train_10680 | Clearly, the gaps are reduced significantly compared to the LP-relaxation. | except for the IBM-1 (which is convex for λ = 0) the lower bounds are still quite loose. | contrasting |
train_10681 | For example, if stopwords are removed from the corpus, the resulting factors often, but not necessarily, correspond to topics. | if only stopwords are retained, as is commonly done in authorship attribution studies, the resulting factors lose their interpretability as topics; rather, they can be seen as stylistic markers. | contrasting |
train_10682 | The highest LDAH-M accuracy was obtained with 300 topics (Figure 3). | LDAH-S yielded a much higher accuracy than LDAH-M. | contrasting |
train_10683 | For 5000 prolific users (figure omitted due to space limitations), the methods perform comparably, and KOP outperforms LDAH-S by a small margin. | with all the authors (Figure 4(c)), KOP yields a higher accuracy than both LDA+Hellinger variants. | contrasting |
train_10684 | We released an earlier version of such a network that was limited by the fact that only relationships involving at least one monosemous noun had been included, and it was not evaluated on a WSD task (Szumlanski and Gomez, 2010). | the network we present here has relatedness data for over 4,500 polysemous noun targets and 3,000 monosemous noun targets, each of which is related to an average of 27.5 distinct noun senses. | contrasting |
train_10685 | These unintended cluster memberships are bound to cause minor errors in our disambiguation efforts. | our analysis reveals that we do not find such high entropy among the relatives of a polysemous noun that the semantic clustering effect (which is necessary for the success of the disambiguation algorithms described above in Section 3.3) is diminished. | contrasting |
train_10686 | In order to capture subtle and creative opinions, opinion detection systems generally assume that a large body of opinion-labeled data are available. | collections of opinion-labeled data are often limited, especially at the granularity level of sentences; and manual annotation is tedious, expensive and error-prone. | contrasting |
train_10687 | Overall, SSL for opinion detection on movie reviews shows similar trends to SSL for traditional topical classification (Nigam and Ghani, 2000). | the advantages of SSL were not as significant in other data domains. | contrasting |
train_10688 | enforce long distance regularities for more grammatically correct generation. | optimizing both language-model-based probabilities and parser-based probabilities is intractable. | contrasting |
train_10689 | In the 3rd image, the word "way" is chosen to represent "path" or "street" by the image recognizer. | a different sense of way ("very") is being used in the final output. | contrasting |
train_10690 | Probabilistic history-based models such as MEMMs should be able to avoid (at least some of) such mistakes by performing a Viterbi search to find the highest probability path of the actions. | as pointed out by Lafferty et al. | contrasting |
train_10691 | In this work, we did not conduct extensive feature engineering for improving the accuracy of individual tasks because our primary goal with this paper is to present the learning framework itself. | one of the major merits of using history-based models is that we are allowed to define arbitrary features on the partially completed structure. | contrasting |
train_10692 | Most work on metric learning learns a Mahalanobis distance, which generalizes the standard squared Euclidean distance by modeling the similarity of elements in different dimensions using a positive semi-definite matrix A. | given two vectors x and y, their squared Mahalanobis distance is d_A(x, y) = (x − y)^T A (x − y). The computational complexity of learning a general Mahalanobis matrix is at least O(n^2), where n is the dimensionality of the input vectors. | contrasting |
train_10693 | (2005) used a convolutional network and designed a contrastive loss function for optimizing a Euclidean distance metric. | the network of S2Net is equivalent to a linear projection matrix and has a pairwise loss function. | contrasting |
train_10694 | It comes with various ways of cleanly querying and manipulating the data and allows convenient access to the sense inventory and propbank frame files instead of having to interpret the raw .xml versions. | maintaining format consistency with earlier CoNLL tasks was deemed convenient for sites that already had tools configured to deal with that format. | contrasting |
train_10695 | In the coreference resolution space, several works have shown that applying a list of rules from highest to lowest precision is beneficial for coreference resolution (Baldwin, 1997; Raghunathan et al., 2010). | we believe we are the first to show that this high-recall/high-precision strategy yields competitive results for the complete task of coreference resolution, i.e., including mention detection and both nominal and pronominal coreference. | contrasting |
train_10696 | It is thus ideally suited for experimenting with feature selection and other aspects of optimization. | considering all the parameters, it was unfeasible to run an optimization on the amount of data available for CoNLL; we therefore focused on feature selection and the choice between single and split classifiers. | contrasting |
train_10697 | The official data provide gold and automatic parse trees for each sentence in the training and development sets. | according to statistics, almost 3% of mentions have no corresponding constituents in automatic parse trees. | contrasting |
train_10698 | Under the normal strategy, such a mention will not be recognized and will be absent in the clustering stage. | we find that the mention has its constituent NP0 in the packed forest. | contrasting |
train_10699 | Since the requirement of this year's task is to model unrestricted coreference, intuitively, we should not be constrained to recognizing only noun phrases, but should also consider adjective phrases, verbs, and so on. | we find that most mentions appearing in the corpus are noun phrases, and our experimental results indicate that considering constituents annotated with the above proposed POS tags achieves the best performance. | contrasting |
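Rows shaped like the ones above can be loaded and queried programmatically, e.g. with the Hugging Face `datasets` library. The sketch below is illustrative only: it assumes the split has been exported to a local JSONL file with the four fields from the header, and the file name `train.jsonl` is a placeholder, not part of the original dataset.

```python
from datasets import load_dataset

# Minimal sketch: load rows shaped like the table above from a local JSONL
# export (one JSON object per line with the fields "id", "sentence1",
# "sentence2", "label"). The file name "train.jsonl" is a placeholder.
ds = load_dataset("json", data_files={"train": "train.jsonl"})["train"]

# Keep only pairs whose label is "contrasting" (one of the 4 label classes).
contrasting = ds.filter(lambda ex: ex["label"] == "contrasting")

# Inspect a few examples.
for ex in contrasting.select(range(3)):
    print(ex["id"], "|", ex["sentence1"][:60], "...")
```

Loading from a local export avoids hard-coding a hub identifier, which is not recoverable from this dump.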