id (stringlengths 7–12) | sentence1 (stringlengths 6–1.27k) | sentence2 (stringlengths 6–926) | label (stringclasses 4 values) |
---|---|---|---|
train_13300 | Previous works thus proposed features that express the corpus statistics of syntactic frames. | class boundaries are subtle in some cases; several classes share syntactic frames with each other to a large extent. | contrasting |
train_13301 | One possible direction for this research topic would be to use our model for the semi-automatic construction of verb lexicons, with the help of human curation. | there is also a demand for exploring other types of features that can discriminate among confusing classes. | contrasting |
train_13302 | Enabled by the development of Web 2.0 technology and created by communities of volunteers, CKBs have emerged as a new source of lexical semantic knowledge in recent years. | to LKBs, they are created by persons with diverse personal backgrounds and fields of expertise. | contrasting |
train_13303 | The SR values range between 0 and 1. | the majority of SR values lie between 0 and 0.1. | contrasting |
train_13304 | The large difference of average document length and query term instances suggests a larger difference of the average relevance scores than 20%. | in the BM25 model the relevance score is decreased with increasing document length and additional occurrences of a query term have little impact after three or four occurrences. | contrasting |
train_13305 | Also SR-WordNet outperforms Wikipedia for hypernymy and hyponymy. | to Wiktionary, no direct information about related words is used to construct the textual representation of concepts. | contrasting |
train_13306 | In contrast to Wiktionary, no direct information about related words is used to construct the textual representation of concepts. | the very short and specific representations are built from glosses and examples which often contain hypernym-hyponym pairs. | contrasting |
train_13307 | On the negative side, our approach is limited in that it requires that X 2 be related to X 1 , while the pivoting language Z does not need to be related to X 1 nor to Y . | we only need one additional parallel corpus (for X 2 -Y ), while pivoting needs two: one for X 1 -Z and one for Z-Y . | contrasting |
train_13308 | Finally, we also report in the last row (in italic) the superior RE result by (Zhou et al., 2007). | to achieve this outcome the authors used the composite kernel CK 1 with several heuristics to define an effective portion of constituent trees. | contrasting |
train_13309 | Using a manually annotated corpus for relation extraction has one particular advantage compared to extraction from plain text: the type of an argument-predicate relation is already annotated; there is no need to determine it by automatic means which are usually error-prone. | the relatively small size of SALSA does not allow to make relevant predictions about the degree of semantic relatedness in the extracted argument-predicate pairs, see section 4. | contrasting |
train_13310 | Most of the literature discusses predicates inferable from nouns. | other parts of speech can support similar inferences. | contrasting |
train_13311 | The calculated F max values are reported in table 2 which shows a correlation between F max values calculated for the "semantic" and "syntactic" procedures. | there is no correlation with human agreement. | contrasting |
train_13312 | An obvious limitation of the presented approach is that it is bounded to manual annotations which are hard to obtain. | since semantic annotations are useful for many different goals in linguistics and NLP, the number of reliable annotated corpora constantly grows. | contrasting |
train_13313 | is an example of a complex keyword, since the words forming the phrase can only express speculation together. | to the minimalist strategy followed when annotating the keywords, the annotation of scopes of the keywords was performed by assigning the scope to the largest syntactic unit possible by including all the elements between the keyword and the target word to the scope (in order to avoid scopes without a keyword) and by including the modifiers of the target word to the scope (Vincze et al., 2008). | contrasting |
train_13314 | For example, the keyword "suggest" has been used in a speculative context in all its occurrences in the abstracts and in the full papers. | "appear" is a real speculation keyword in 86% of its occurrences in the abstracts and in 83% of its occurrences in the full papers, whereas "can" is a real speculation keyword in 12% of its occurrences in the abstracts and in 16% of its occurrences in the full papers. | contrasting |
train_13315 | Our continuous objective can be optimized using simple gradient ascent. | computing critical quantities in the gradient necessitates a novel dynamic program, which we also present here. | contrasting |
train_13316 | For the remainder of this paper, we focus on consensus BLEU. | the techniques herein, including the optimization approach of Section 3, are applicable to many differentiable functions of expected n-gram counts. | contrasting |
train_13317 | This plot exhibits a known issue with MERT training: because new k-best lists are generated at each iteration, the objective function can change drastically between iterations. | coBLEU converges Table 1: Performance measured by BLEU using a consensus decoding method over translation forests shows an improvement over MERT when using coBLEU training. | contrasting |
train_13318 | The query, ebay official, is assumed to be commercial intent, because a large fraction of the clicks are on ads. | typos tend to have relatively more clicks on "did-you-mean" spelling suggestions. | contrasting |
train_13319 | DIPRE (Dual Iterative Pattern Relation Expansion) (Brin, 1998) is a system based on bootstrapping that exploits the duality between patterns and relations to augment the target relation starting from a small sample. | it only extracts simple relations such as (author, title) pairs from the WWW. | contrasting |
train_13320 | We also found some basic emotions tend to combine together, such as {expect, joy, love}, {anxiety, sorrow}, {angry, hate}. | some emotions have small or scarce possibility appear together, such as joy and hate, surprise and angry. | contrasting |
train_13321 | In English, overt pronouns such as "she" and definite noun phrases such as "the company" are anaphors that refer to preceding entities (antecedents). | in Japanese, anaphors are often omitted, which are called zero pronouns, and zero anaphora resolution is one of the most important techniques for semantic analysis in Japanese. | contrasting |
train_13322 | When we used the decay rates smaller than 0.5, the recall score worsened clearly. | although we expected to obtain higher precision with small decay rate, the highest precision was achieved by the decay rate 0.5. | contrasting |
train_13323 | It is therefore crucial that all annotations at a high level of interpretation are backed up by human annotation with more than one annotator. | annotations of citation function classification typically use only the untested annotation of a single human annotator as gold standard, who is typically the designer of a scheme (Spiegel-Rüsing, 1977;Weinstock, 1971;Nanba and Okumura, 1999;Garzone and Mercer, 2000). | contrasting |
train_13324 | In these cases the guidelines seem to fully suffice for their description, but then again good performance of AIM, FUT and USE is not that surprising, as they are signalled clearly by linguistic and non-linguistic cues. | there are three categories with particularly low distinguishability in both disciplines: ANTISUPP, OWN FAIL and PREV OWN. | contrasting |
train_13325 | The definition of the categories SUPPORT and NOV ADV also seem to be substantially more confusing for CL than for chemistry. | cODI is a category which shows average distinctiveness for cL, but much worse distinctiveness for chemistry. | contrasting |
train_13326 | Overall, only 25 (0.02%) official symbols were ambiguous within the organisms. | when official symbols from 21 organisms were combined, the ambiguity increased substantially to 21, 279 (14.2%) symbols. | contrasting |
train_13327 | It is expected that the first rule would produce good precision. | it can only disambiguate the fraction of entities that happen to have a species word to their immediate left. | contrasting |
train_13328 | High performance can often be achieved if the system is trained and tested on data from the same domain. | the performance of NLP systems often degrades badly when the test data is drawn from a source that is different from the labeled data used to train the system. | contrasting |
train_13329 | It is often used when labeled training data is scarce but unlabeled data is abundant. | for domain adaptation problems, we may have a lot of training data but the target application domain has a different data distribution. | contrasting |
train_13330 | It has also been applied to the problem of domain adaptation for word sense disambiguation in (Chan and Ng, 2007). | active learning requires human intervention. | contrasting |
train_13331 | Regarding the selection-criterion, standard and balanced bootstrapping both select instances which are confidently labeled by SC t to be used for training SC t+1 , in the hope of avoiding using wrongly labeled data in bootstrapping. | instances that are already confidently labeled by SC t may not contain sufficient information which is not in D S , and using them to train SC t+1 may result in SC t+1 performing similarly to SC t . | contrasting |
train_13332 | Balanced bootstrapping has been shown to be more effective for domain adaptation than standard bootstrapping (Jiang and Zhai, 2007) for named entity classification on a subset of the dataset used here. | we found that both methods perform poorly on domain adaptation for NER. | contrasting |
train_13333 | In named entity classification, the names have already been segmented out and only need to be classified with the appropriate class. | for NER, the names also Table 6: F-measure of the cross-topic transfer. | contrasting |
train_13334 | This makes NER much more difficult. | when BN is the source domain, the capitalization information can be discovered by DAB. | contrasting |
train_13335 | Their kernel is defined on lexical dependency tree by the convolution of similarities between all possible subtrees. | if the convolution containing too many irrelevant subtrees, over-fitting may occur and decreases the performance of the classifier. | contrasting |
train_13336 | We observe that the three learning based methods(SVM-1, SVM-WTree, SVM-PTree) perform better than the Adjacent baseline in the first three domains. | in other domains, directly adjacent method is better than the learning based methods. | contrasting |
train_13337 | This latter work was then further improved by Huang (2008) to 91.7%, by utilizing the benefit of forest structure. | one of the limitations of these techniques is the huge number of features which makes the training very expensive and inefficient in space and memory usage. | contrasting |
train_13338 | For each tree, its constituent count is the sum of all the counts of its constituent. | as suggested in (Sagae and Lavie 2006), this feature favours precision over recall. | contrasting |
train_13339 | Also many works have discussed the issues, such as word segmentation, POS tagging etc, between English and Chinese (Wang et al., 2006;Wu et al., 2003). | to the best of our knowledge, no studies have been reported on discussing preprocessing techniques on Chinese document and sentence-level novelty mining, which is the focus of our paper. | contrasting |
train_13340 | The problem of re-ranking initial retrieval results exploring the intrinsic structure of documents is widely researched in information retrieval (IR) and has attracted a considerable amount of time and study. | one of the drawbacks is that those algorithms treat queries and documents separately. | contrasting |
train_13341 | In their work, LDA-based document model and language model-based document model were linearly combined to rank the entire corpus. | unlike this approach we only apply LDA to a small set of documents. | contrasting |
train_13342 | Given a query , a set of initial results ∈ of top documents are returned by a standard information retrieval model (initial ranker). | the initial ranker tends to be imperfect. | contrasting |
train_13343 | Prior to It is worth noting that the CLEF-2008 TEL data is actually multilingual: all collections to a greater or lesser extent contain records pointing to documents in other languages. | this is not a major problem because the majority of documents in the test collection are written in main languages of those test collections (BL-English, BNF-French). | contrasting |
train_13344 | The search ranges for these two parameters were: : 0.1, 0.2, …, 0.9 k : 5, 10, 15, …, 45 As it turned out, for many instances, the optimal value of with respect to MAP was either 0.1 or 0.2, suggesting the initial retrieval scores have valuable information inside them. | the optimal value of was between 20 and 40. | contrasting |
train_13345 | As expected, the RWILM method bought improvements in many cases in CLEF-2008 test collections. | the performance over CLEF-2007 collection was somewhat disappointing. | contrasting |
train_13346 | The increased expressiveness of the model, combined with the more robust parameter estimates provided by the smoothing, results in a nice increase in parsing accuracy on a held-out set. | as reported by Petrov (2009) and Huang and Harper (2009), an additional 7th SM round actually hurts performance. | contrasting |
train_13347 | This specialization leads to greater diversity in their prediction preferences, especially in the presence of a small training set. | the self-labeled training set size is much larger, and so the specialization process is therefore slowed down. | contrasting |
train_13348 | They have provided evidence that syntactic consistency exists not only within coordinate structures, but also in a variety of other contexts, such as within sentences, between sentences, within documents, and between speaker turns in the Switchboard corpus. | their analysis rests on a selected number of constructions concerning the internal structure of noun phrases. | contrasting |
train_13349 | Sentiment analysis (Pang and Lee, 2008) offers the promise of automatically discerning how people feel about a product, person, organization, or issue based on what they write online, which is potentially of great value to businesses and other organizations. | the vast majority of sentiment resources and algorithms are limited to a single language, usually English (Wilson, 2008;Baccianella and Sebastiani, 2010). | contrasting |
train_13350 | The conditional topic distribution for SLDA (Blei and McAuliffe, 2007) replaces this term with the standard Multinomial-Dirichlet. | we believe this is the first published SLDA-style model using MCMC inference, as prior work has used variational inference (Blei and McAuliffe, 2007;Chang and Blei, 2009;Wang et al., 2009). | contrasting |
train_13351 | Supervised learning also suffers from its heavy dependence on training data. | unsupervised, knowledge-lean topic modeling approach has been shown to be effective in automatically identifying aspects and their representative words (Titov and McDonald, 2008;Brody and Elhadad, 2010). | contrasting |
train_13352 | (2007) propose to separate topic and sentiment words using a positive sentiment model and a negative sentiment model, but both models capture general opinion words only. | we model aspect-specific opinion words as well as general opinion words. | contrasting |
train_13353 | As for lexical features, words from a sentiment lexicon can also be helpful in discovering opinion words. | lexical features are more diverse so presumably we need more training data in order to detect useful lexical features. | contrasting |
train_13354 | Recognizing plans and goals depends on world knowledge and inference, and is beyond the scope of this paper. | we identified two cases where affect states often can be inferred based on syntactic properties. | contrasting |
train_13355 | Our simple heuristics for creating links work surprisingly well for xchar and a links when given perfect affect states. | these heuristics produce relatively low precision for m links, albeit with 100% recall. | contrasting |
train_13356 | There exists some work to remove noise from SMS (Choudhury et al., 2007) (Byun et al., 2008) (Aw et al., 2006) (Neef et al., 2007 (Kobus et al., 2008). | all of these techniques require an aligned corpus of SMS and conventional language for training. | contrasting |
train_13357 | We compute the score of each question in C using Score(Q) and the question with highest score is treated asQ h . | the naive approach suffers from high runtime cost. | contrasting |
train_13358 | If we consider the question Q 24 below as reference, question Q 26 will be deemed more useful than Q 25 when using cos or mcs because of the higher relative lexical and conceptual overlap with Q 24 . | this is contrary to the actual ordering Q 25 ≻ Q 26 |Q 24 , which reflects the fact that Q 25 , which expects the same answer type as Q 24 , should be deemed more useful than Q 26 , which has a different answer type. | contrasting |
train_13359 | retrieving with high precision, low recall). | current search engines are still facing the lexical ambiguity issue (Furnas et al., 1987) -i.e. | contrasting |
train_13360 | Users can then select the cluster(s) and the pages therein that best answer their information needs. | many Web clustering engines group search results on the basis of their lexical similarity. | contrasting |
train_13361 | Over the years, different methods for SIR have been proposed (Krovetz and Croft, 1992;Voorhees, 1993;Mandala et al., 1998;Gonzalo et al., 1999;Kim et al., 2004;Liu et al., 2005a, inter alia). | contrasting results have been reported on the benefits of these techniques: it has been shown that WSD has to be very accurate to benefit Information Retrieval (Sanderson, 1994) -a result that was later debated (Gonzalo et al., 1999;Stokoe et al., 2003). | contrasting |
train_13362 | Our triangular measure is the edge counterpart of the clustering coefficient (or curvature) for vertices, previously used to perform WSI (Widdows and Dorow, 2002). | it is our hunch that measuring the ratio of squares an edge participates in provides a stronger clue of how important that edge is within a meaning component. | contrasting |
train_13363 | We note that the ranking and optimality of clusters can be improved with more sophisticated techniques (Crabtree et al., 2005;Kurland, 2008;Kurland and Domshlak, 2008;Lee et al., 2008, inter alia). | this is outside the scope of this paper. | contrasting |
train_13364 | The above considerations might not seem intuitive at first glance, as the average polysemy of longer queries is lower (17.9 on AMBIENT vs. 6.7 on MORESQUE according to our gold standard). | we note that while the kind of ambiguity of 1-word queries is generally coarser (e.g., beagle as dog vs. lander vs. search tool), with longer queries we often encounter much finer sense distinctions (e.g., Across the Universe as song by The Beatles vs. a 2007 film based on the song vs. a Star Trek novel vs. a rock album by Trip Shakespeare, etc.). | contrasting |
train_13365 | This would clearly exclude tasks like translation of medical reports, business contracts, or literary works, where the validation of a qualified bilingual translator is absolutely necessary. | it does include a great many real-world scenarios, such as following news reports in another country, reading international comments about a product, or generating a decent first draft translation of a Wikipedia page for Wikipedia editors to improve. | contrasting |
train_13366 | As shown in , hierarchical phrase-based models significantly outperform tree-to-string models (Liu et al., 2006;Huang et al., 2006), even when attempts are made to alleviate parsing errors using either forest-based decoding or forest-based rule extraction . | when properly used, syntactic constraints can provide invaluable benefits to improve translation quality. | contrasting |
train_13367 | In our example above, rule (2) can be extracted from rule (1) with the following sub phrase pair: The use of a unified X nonterminal makes hierarchical phrase-based models flexible at capturing non-local reordering of phrases. | such flexibility also comes at the cost that it is not able to differentiate between different syntactic usages of phrases. | contrasting |
train_13368 | The binarized decomposition forest compactly encodes the hierarchical structure among phrases and non-phrases. | the coarse abstraction of phrases with X and non-phrases with B provides little information on the constraints of the hierarchy. | contrasting |
train_13369 | Instead, we use the raw counts for the two models # m (f ,ē) and # w→m (f ,ē) directly as follows: For lexicalized translation probabilities, we would like to use simple interpolation. | we notice that when a phrase pair belongs to only one of the phrase tables, the corresponding lexicalized score for the other table would be zero. | contrasting |
train_13370 | Self-training improves over the baseline by about 0.6% on the de-velopment set. | the gains from self-training are more modest (0.2%) on the evaluation (test) set. | contrasting |
train_13371 | Most previous punctuation prediction techniques, developed mostly by the speech processing community, exploit both lexical and prosodic cues. | in order to fully exploit prosodic features such as pitch and pause duration, it is necessary to have access to the original raw speech waveforms. | contrasting |
train_13372 | The skip-chain CRF (Sutton and McCallum, 2004), another variant of linear-chain CRF, attaches additional edges on top of a linear-chain CRF for better modeling of long range dependencies between states with similar observations. | such a model usually requires known long range dependencies in advance and may not be readily applicable to our task where such clues are not explicit. | contrasting |
train_13373 | The majority of the sentences are declarative sentences. | question sentences are more frequent in the BTEC dataset compared to the CT dataset. | contrasting |
train_13374 | Thus, duplicating the ending punctuation symbol to the start of a sentence so that it is near these indicative words helps to improve the prediction accuracy. | chinese presents quite different syntactic structures for question sentences. | contrasting |
train_13375 | This simple modification of the HMM takes advantage of the dichotomy in natural language between content and function words. | a standard HMM draws all prior distributions once over all states and it is known to perform poorly in unsupervised and semisupervised POS tagging. | contrasting |
train_13376 | Similarly peaked distributions are observed for other function categories such as MD and CC. | the joint probability of any word occurring with NN is much less likely to be zero and the distribution is much less likely to be peaked. | contrasting |
train_13377 | In the past few years, several authors have studied the problem of gender classification in the natural language processing and linguistic communities. | most existing works deal with formal writings, e.g., essays of people, the Reuters news corpus and the British National Corpus (BNC). | contrasting |
train_13378 | (Houvardas and Stamatatos, 2006) even applied character (rather than word or tag) n-grams to capture stylistic features for authorship classification of news articles in Reuters. | these works use only one or a subset of the classes of features. | contrasting |
train_13379 | Moreover, having many feature classes is very useful as they provide features with varied granularities and diversities. | this also results in a huge number of features and many of them are redundant and may obscure classification. | contrasting |
train_13380 | POS n-grams can also be used as features. | since we mine all POS sequence patterns and use them as features, most discriminative POS ngrams are already covered. | contrasting |
train_13381 | A set of top ranked features are selected. | the wrapper model chooses features and adds to the current feature pool based on whether the new features improve the classification accuracy. | contrasting |
train_13382 | A fundamental assumption is that the training and test data are identically distributed. | this assumption may not hold in practice. | contrasting |
train_13383 | Since the hidden positives in U should have the same behaviors as the positives in P in terms of their similarities to pr, we set their minimum similarity as the threshold value ω which is the minimum similarity before a document is considered as a potential negative document: In a noiseless scenario, using the minimum similarity is acceptable. | most real-life applications contain outliers and noisy artifacts. | contrasting |
train_13384 | Hence, to obtain a paragraph's function label, we need to first label its sentences. | we are faced with the same problem: how can we obtain the sentence function labels? | contrasting |
train_13385 | If a paragraph contains Thesis, Prompt, or Background sentences, the paragraph is likely to be an Introduction. | if a paragraph contains Main Idea, Support, or Conclusion sentences, it is likely to be a Body paragraph. | contrasting |
train_13386 | Figures 3(a) counts, respectively, for the 20 Newsgroups dataset. | without pre-processing the number of types scales from 107,211 to 892,983 and the number of tokens from 2,261,805 to 3,073,208. | contrasting |
train_13387 | The advantage of pre-translation is that MT systems tend to preserve the meaning of documents. | mT can be very slow (more than 1 second per document), preventing its use on large training sets. | contrasting |
train_13388 | Translating word-by-word operates at 274K words per second. | machine translation processes 50 words per second, approximately 3 orders of magnitude slower. | contrasting |
train_13389 | The tests show that OPCA is better than CCA, CL-LSI, plain word-by-word translation, and even full translation for Spanish documents. | if we post-process full translation by an LSI model trained on the English training set, full translation is the most accurate. | contrasting |
train_13390 | In practice, the decoder has to employ beam search to make it tractable (Koehn, 2004). | even beam search runs in quadratic-time in general (see Sec. | contrasting |
train_13391 | Strong indications of perspective can often come from collocations of arbitrary length; for example, someone writing get the government out of my X is typically expressing a conservative rather than progressive viewpoint. | going beyond unigram or bigram features in perspective classification gives rise to problems of data sparsity. | contrasting |
train_13392 | Indeed, it is not uncommon to see the feature space for sentiment analysis limited to unigrams. | important indicators of perspective can also be longer (get the government out of my). | contrasting |
train_13393 | Briefly, adaptor grammars allow nonterminals to be rewritten to entire subtrees. | a nonterminal in a PCFG rewrites only to a collection of grammar symbols; their subsequent productions are independent of each other. | contrasting |
train_13394 | It is easy to find clearer ambiguities online, such as compositional examples of typically noncompositional verbs (how to recover a couch, when to redress a wound, etc.). | in our data verbs like recover and redress always occur in their more dominant non-compositional sense. | contrasting |
train_13395 | Noncompositionality is the majority class on the examples that are in the dictionary. | one would expect verbs that are not in a comprehensive dictionary to be largely compositional, and indeed most of the / ∈ CELEX verbs are compositional. | contrasting |
train_13396 | This relation is obvious when reading the sentence, so it is omitted by the writer. | any semantic representation of text needs as much semantics as possible explicitly stated. | contrasting |
train_13397 | Information-extraction (IE) research typically focuses on clean-text inputs. | an IE engine serving real applications yields many false alarms due to less-well-formed input. | contrasting |
train_13398 | The mixed and gazetteer systems, having a variety of noisy data in their training set, perform much better on the noisy conditions, particularly on Latin-alphabet-non-English data because that is one of the conditions included in its training, while Transactions remains a condition not covered in the training set and so shows less improvement. | because the mixed classifier, and moreso the gazetteer classifier, are oriented to noisy data, on clean data they suffer in performance by 2.5 and 5 F -measure points, respectively. | contrasting |
train_13399 | Alternatively, bootstrap a set of weighted support vectors from both labeled and unlabeled data using SVM and feed these instances into semi-supervised relation extraction. | their seed set is sequentially generated only to ensure that there are at least 5 instances for each relation class. | contrasting |