id (stringlengths 7-12) | sentence1 (stringlengths 6-1.27k) | sentence2 (stringlengths 6-926) | label (stringclasses 4 values)
---|---|---|---|
train_2400 | On the one hand, this has led to a range of manually or semi-automatically developed lexical resources focusing on verb information, such as the Levin classes (Levin, 1993), VerbNet (Kipper Schuler, 2006), FrameNet (Fillmore et al., 2003), and PropBank (Palmer et al., 2005). | we find automatic approaches to the induction of verb subcategorization information at the syntax-semantics interface for a large number of languages, e.g. | contrasting |
train_2401 | The probabilities are discretized into 5 buckets (B_{p=0}, B_{0<p≤0.25}, B_{0.25<p≤0.5}, B_{0.5<p≤0.75}, B_{0.75<p≤1}). | noun modification in noun-noun Gen construction is represented by cooccurrence frequencies. | contrasting |
train_2402 | For example, incorrect translations of "the" and "Bush" will receive the same penalty. | for crosslingual information processing applications, we should acknowledge that certain informationally critical words are more important than other common words. | contrasting |
train_2403 | (2012) proposed a semantic role driven MT metric. | none of these works declaratively exploited results from information extraction for MT. | contrasting |
train_2404 | (2004) that assigns higher likelihood to objects with larger similarity to share the same label. | q_b(F) corresponds to their fitting constraint, which requires the final alignment to maintain the maximum consistency with the initial alignment. | contrasting |
train_2405 | It is founded on the observation that true monotonic alignment paths usually lie close to the diagonal of a relation matrix. | it is not applicable to our task due to the nonmonotonicity involved. | contrasting |
train_2406 | T_HB achieves a better MRR than T_PH+P due to the semantic translation of organization names. | despite the increased recall of T_HB over that of T_Dict, the precision of T_HB is unsatisfactory because T_HB maps abbreviated names such as 'WTO' with other NEs. | contrasting |
train_2407 | However, despite the increased recall of T_HB over that of T_Dict, the precision of T_HB is unsatisfactory because T_HB maps abbreviated names such as 'WTO' with other NEs. | our method achieves the highest MRR and precision in both the person and organization categories. | contrasting |
train_2408 | Wikipedia infoboxes are a valuable source of structured knowledge for global knowledge sharing. | infobox information is very incomplete and imbalanced among the Wikipedias in different languages. | contrasting |
train_2409 | Hence, the challenge remains: how could we supplement the missing infoboxes for the remaining 79.57% of articles? | the numbers of existing infobox attributes in different languages are highly imbalanced. | contrasting |
train_2410 | Using variants of Wu and Palmer (1994) on the 374 SAT analogies of Turney (2005), Veale (2004) reports a success rate of 38-44% using only WordNet-based similarity. | turney (2005) reports up to 55% success on the same analogies, partly because his approach aims to match implicit relations rather than explicit concepts, and in part because it uses a divergent process to gather from the web as rich a perspective as it can on these latent relationships. | contrasting |
train_2411 | This poses a problem for the generation of sensible comparisons. | rex's lookup table captures the implicit pragmatics of comparability, making rex usable in generative tasks where a metric must both suggest and evaluate comparisons. | contrasting |
train_2412 | Yet other approaches extend topic models to produce author specific topics (Rosen-Zvi et al., 2004), author persona (Mimno and McCallum, 2007), social roles (McCallum et al., 2007), etc. | these models do not model debates and hence are unable to discover AD-expressions and interaction natures of author pairs. | contrasting |
train_2413 | Burfoot et al. (2011) builds on the work of (Thomas et al., 2006) and proposes collective classification using speaker contextual features (e.g., speaker intentions based on vote labels). | the above works do not discover pair interactions (arguing nature) in debate authors. | contrasting |
train_2414 | For quantitative evaluation, topic models are often compared using perplexity. | perplexity does not reflect our purpose since we are not trying to evaluate how well the AD-expressions in unseen discussion data fit our learned models. | contrasting |
train_2415 | There has been a substantial body of work on metaphor identification and interpretation (Wilks, 2007). | in this paper we focus on an equally interesting, challenging and important problem, which concerns the automatic identification of affect carried by metaphors. | contrasting |
train_2416 | The previously proposed methods still suffer from the same issue of data sparseness when applied to MSD tagging. | in our approach, we overcome the problem through a different encoding of the input data (see section 2.1). | contrasting |
train_2417 | Probably the most reliable way to obtain such data would be to let annotators manually encode a training corpus with such transformations. | the task would be extremely tedious and the annotators would probably have to undergo special training (to be able to think in terms of transformations). | contrasting |
train_2418 | Since our approach requires manual involvement in the filtering of the attribute list, one might argue that one should simply manually enumerate the most relevant attributes directly. | the manual generation of conceptual features by a single researcher results in substantial variability both across and within participants (McRae et al., 2005). | contrasting |
train_2419 | With the increasing amount of user-generated reference texts on the web, automatic quality assessment has become a key challenge. | only a small amount of annotated data is available for training quality assessment systems. | contrasting |
train_2420 | Automatic quality assessment has therefore become a key application in today's information society. | there is a lack of training data annotated with fine-grained quality information. | contrasting |
train_2421 | The average performance based on NOWIKI is slightly lower, while using ALL features results in slightly higher average F1-scores. | the differences are not statistically significant and thus omitted. | contrasting |
train_2422 | If " •" ("don't") and " †" (past tense marker) are correctly recognized as two words, we may predict the previously unseen characters "g " ("tell spoilers") as an informal word, based on the learned Chinese language patterns. | state-of-the-art Chinese segmenters 1 incorrectly yield " • g †", preferring to chunk " †" ("thoroughly") as a word, as they do not consider the possibility that "g " ("spoiler") could be an informal word. | contrasting |
train_2423 | Other alternative frameworks such as Markov Logic Networks (MLNs) and Integer Linear Programming (ILP) could also be considered. | we feel that for this task, formulating efficient global formulas (constraints) for MLN (ILP) is comparatively less straightforward than in other tasks (e.g., compared to Semantic Role Labeling, where the rules may come directly from grammatical constraints). | contrasting |
train_2424 | This makes it a useful signal for IWR, as it is sensitive to informal words which often have low frequency. | the word frequency alone is not reliable enough to distinguish informal words from uncommon but formal words. | contrasting |
train_2425 | For RQ3, interestingly, the difference in performance between the LCRF and FCRF upper-bound systems is not significant. | these are upper bounds, and we expect on real-world data that CWS performance will not be perfect. | contrasting |
train_2426 | Answers and Baidu Zhidao and sites that are specialized in polls, such as Toluna. | it is highly unlikely that such sources will contain enough relevant questions for any news article due to typical sparseness issues as well as differences in interests between askers in CQA sites and news reporters. | contrasting |
train_2427 | Further delving into question correctness, the above example shows the need to assess each entity by itself. | even if both entities are independently valid for the template, their comparison may not make sense. | contrasting |
train_2428 | Finally, several works present unsupervised methods for ranking proper template instantiations, mainly as selectional preferences (Light and Greiff, 2002; Erk, 2007; Ritter et al., 2010). | we eventually choose instantiation candidates, and thus preferred supervised methods that enable filtering and not just ranking. | contrasting |
train_2429 | The baseline gets a little higher recall than our method, which shows the baseline method tends to make aggressive segmentation decisions. | both precision and F1 score of our method are much higher than the baseline. | contrasting |
train_2430 | The neural network model is a reasonable method to overcome these pitfalls. | neural network-based machine translation is far from easy. | contrasting |
train_2431 | Those systems use synchronous context-free grammars (Chiang, 2007), synchronous tree substitution grammars (Eisner, 2003) or even more powerful formalisms like synchronous tree-sequence substitution grammars (Sun et al., 2009). | those systems use linguistic syntactic annotation at different levels. | contrasting |
train_2432 | If we meet a lexical element (terminal), then we add it to the end of w_i. | if we meet a nonterminal, then we have to consult the best pre-translation τ = ⟨t, (u_1, …)⟩. | contrasting |
train_2433 | So far, the LM scorer could only score their associated unigrams. | we also have their associated strings w_1 and w_2, which can now be used. | contrasting |
train_2434 | One of them is the work reported in (Chung and Gildea, 2010), where three approaches are compared, based on either pattern matching, CRF, or parsing. | there is no comparison between using gold trees and automatic trees. | contrasting |
train_2435 | It seems performing parsing and tagging with the bilingually-induced POS tagset is too difficult when only monolingual information is available to the parser. | our bilingually-induced POSs, except for Joint[P], with the lower accuracies are more effective for SMT than the monolingually-induced POSs and the original POSs, as indicated in Table 1. | contrasting |
train_2436 | Methods that have proven successful for paraphrase detection (Deerwester et al., 1990;Dolan et al., 2004), as in the main clauses of b and c, include latent variable models that simultaneously capture the semantics of words and sentences, such as latent semantic analysis (LSA) or latent Dirichlet allocation (LDA). | our task goes beyond paraphrase detection. | contrasting |
train_2437 | The kernel computational complexity is O(|T_1| × |T_2|), where all pairwise comparisons are carried out between T_1 and T_2. | there are fast algorithms for kernel computation that run in linear time on average, either by dynamic programming (Collins and Duffy, 2002), or pre-sorting production rules before training (Moschitti, 2006). | contrasting |
train_2438 | Our evaluation relies on the Matthews correlation coefficient (MCC, also known as the φ-coefficient) (Matthews, 1975) to avoid the bias of accuracy due to data skew, and to produce a robust summary score independent of whether the positive class is skewed to the majority or minority. | to f-measure, which is a class-specific weighted average of precision and recall, and whose weighted version depends on a choice of whether the class-specific weights should come from the training or testing data, MCC is a single summary value that incorporates all 4 cells of a 2 × 2 confusion matrix (TP, FP, TN and FN for True or False Positive or Negative). | contrasting |
train_2439 | As for classes or senses, it may not be a common assumption. | when the classes for all-words WSD are enormous, fine-grained, and can be associated with distance, we can rather naturally assume the continuity also for senses. | contrasting |
train_2440 | The cluster centers are located at the means of hypotheses including miscellaneous alternatives not intended, thus the estimated probability distribution is, roughly speaking, offset toward the center of WordNet, which is not what we want. | the proposed method proceeds to Figure 2(c) and finds clusters in the data after conflicting data is erased. | contrasting |
train_2441 | some structures like sequences or parse trees, specialized and tractable dynamic programming algorithms have proven to be very effective. | as the structures under consideration become increasingly complex, the computational problem of predicting structures can become very expensive, and in the worst case, intractable. | contrasting |
train_2442 | We choose this baseline because it is shown to give the highest improvement in wall-clock time and also in terms of the number of cache hits. | we note that the results presented in our work outperform all the previous amortization algorithms, including the approximate inference methods. | contrasting |
train_2443 | In the literature, in applications of the Lagrangian relaxation technique (such as (Rush and Collins, 2011; Chang and Collins, 2011; Reichart and Barzilay, 2012) and others), the relaxed problems are solved using specialized algorithms. | in both the relaxations considered in this paper, even the relaxed problems cannot be solved without an ILP solver, and yet we can see improvements from decomposition in Table 1. | contrasting |
train_2444 | In general, a tree has width one, and it can be shown that a graph has treewidth at most two iff it does not have the following graph as a minor (Bodlaender, 1997) [figure omitted]. Finding a tree decomposition with minimal width is in general NP-hard (Arnborg et al., 1987). | we find that for the graphs we are interested in in NLP applications, even a naïve algorithm gives tree decompositions of low width in practice: simply perform a depth-first traversal of the edges of the graph, forming a tree T. | contrasting |
train_2445 | In general, however, such supervision is not always available or easy to obtain. | databases are often abundantly available, especially for important domains. | contrasting |
train_2446 | Results suggested that 3-year-old children are capable of deception, and that non-verbal behaviors during deception include increases in 'positive' behaviors (e.g., smiling). | verbal cues of deception were not analyzed. | contrasting |
train_2447 | Ideally, we would like to have at our disposal a large annotated training set for our new concept of sentiment relevance. | such a resource does not yet exist. | contrasting |
train_2448 | Our work is most closely related to (Taboada et al., 2009) who define a fine-grained classification that is similar to sentiment relevance on the highest level. | unlike our study, they fail to experimentally compare their classification scheme to prior work in their experiments and to show that this scheme is different. | contrasting |
train_2449 | In addition, they work on the paragraph level. | paragraphs often contain a mix of S-relevant and S-nonrelevant sentences. | contrasting |
train_2450 | The best model (line 7) performs better than MinCut (3) by 3.1% and better than training on purely rule-generated DS labels (line 2) by 5.8%. | we did not find a cumulative effect (line 8) of the two feature sets. | contrasting |
train_2451 | The optimization criterion does not always correlate perfectly with F1. | we find no statistically significant difference between the selected result and the highest F1 value. | contrasting |
train_2452 | To date, the modeling of emotion in a dialogue has extensively been studied in NLP as well as related areas (Forbes-Riley and Litman, 2004; Ayadi et al., 2011). | the past attempts are virtually restricted to estimating the emotion of an addresser from her/his utterance. | contrasting |
train_2453 | In this case, the first two utterances are used to learn the translation model, while only the second utterance is used to learn the language model. | this simple approach is prone to suffer from the data sparseness problem. | contrasting |
train_2454 | For all the four features (i.e., two phrase translation probabilities and two lexical weights) derived from translation model, the weights of the adapted model are equally set as α (0 ≤ α ≤ 1.0). | we use SRILM for the interpolation of language models. | contrasting |
train_2455 | We can clearly observe the improvement in the BLEU from 0.64 to 1.05. | there still remains a gap between the last two rows (i.e., proposed and optimal). | contrasting |
train_2456 | As the Figures show, in both tasks, SHEM achieved high performances with 11 emotions. | bHEM achieved high performances with six emotions. | contrasting |
train_2457 | For example, a threshold like 0.3 or 0.4 improves the performance. | if a large value (e.g., 0.6) is selected as threshold, the performance decreases. | contrasting |
train_2458 | They also utilized Latent Semantic Analysis (LSA) (Landauer et al., 1998) as another semantic similarity measure. | both PMI and LSA are semantic similarity measures. | contrasting |
train_2459 | We assumed that the type and number of emotions are pre-defined and our approach was based on this assumption. | in previous research, there is little agreement about the number and types of basic emotions. | contrasting |
train_2460 | Still, we assume that the different users will produce information of varying quality, and some should be eliminated entirely. | we emphasise that there may be smaller [...] [footnote: Data collection was performed using Twitter API, http://dev.twitter.com/, to extract all posts for our target users.] | contrasting |
train_2461 | As mentioned before, one could also try the opposite (i.e., start by expanding the user space); both those models can also be optimised in an iterative process. | our experiments revealed that those approaches did not improve on the performance of BEN. | contrasting |
train_2462 | For the UK case study, both BEN and BGL are able to beat all baselines in average performance across all parties. | in the Austrian case study, LEN performs better than BEN, something that could be justified by the fact that the users in C_au were selected by domain experts, and consequently there was not much gain to be had by filtering them further. | contrasting |
train_2463 | First, using the fact that [...], the problem is equivalent to [...], which is equivalent to the ILP [...]. In the previous section, we assume that n_{b,ref} is at hand (reference abstractive summary is given) and propose a bigram-based optimization framework for extractive summarization. | for the summarization task, the bigram frequency is unknown, and thus our first goal is to estimate such frequency. | contrasting |
train_2464 | On one hand, this is to save computational cost for the ILP module. | we see from the table that only 127 of these more than 2K bigrams are in the reference summary and are thus expected to help the summary responsiveness. | contrasting |
train_2465 | We can see that the ICSI ILP system performs better when the input bigrams have less noise (those bigrams that are not in summary). | our proposed method is slightly more robust to this kind of noise, possibly because of the weights we use in our system -the noisy bigrams have lower weights and thus less impact on the final system performance. | contrasting |
train_2466 | This is because their system was tuned for the particular summarization task using the DUC 2003 corpus. | even without any parameter tuning our method yields good performance, as evidenced by results on the two different summarization tasks. | contrasting |
train_2467 | Joint models of sentence extraction and compression have a great benefit in that they have a large degree of freedom as far as controlling redundancy goes. | conventional two-stage approaches (Zajic et al., 2006), which first generate candidate compressed sentences and then use them to generate a summary, have less computational complexity than joint models. | contrasting |
train_2468 | That is, we extract subsentences for making the summary directly from all available subsentences in the documents and not in a stepwise fashion. | there is a difficulty with such a formalization. | contrasting |
train_2469 | V is partitioned into exclusive subsets B of valid subtrees, and each subset corresponds to the original sentence from which the valid subtrees derived. | the cost of a union of subtrees from different sentences is simply the sum of the costs of subtrees, while the cost of a union of subtrees from the same sentence is smaller than the sum of the costs. | contrasting |
train_2470 | In this joint model, we generate a compressed sentence by extracting an arbitrary subtree from a dependency tree of a sentence. | not all subtrees are always valid. | contrasting |
train_2471 | Comparing the results of SbE and RC, we can see that the sentence compression caused the recall of SbE to be 7% lower than that of RC. | the drop is relatively small in light of the fact that the sentence compression can discard 19% of the original character length with SbE. | contrasting |
train_2472 | We show that for the special case of linear PCFGs (which include HMMs) with non-degenerate priors the posterior puts zero mass on non-tight PCFGs, so tightness is not an issue with Bayesian estimation of such grammars. | because all of the commonly used priors (such as the Dirichlet or the logistic normal) assign non-zero probability across the whole probability simplex, in general the posterior may assign non-zero probability to nontight PCFGs. | contrasting |
train_2473 | This dominance is mainly because chunk-based dependency analysis looks most appropriate for Japanese syntax due to its morphosyntactic typology, which includes agglutination and scrambling (Bekki, 2010). | it is also true that this type of analysis has prevented us from deeper syntactic analysis such as deep parsing (Clark and Curran, 2007) and logical inference (Bos et al., 2004; Bos, 2007), both of which have been surpassing shallow parsing-based approaches in languages like English. | contrasting |
train_2474 | 13, RelIn is assigned because the right NP "book" is annotated as an accusative argument of the predicate "buy". | relExt is assigned in the lower side in the figure because the right NP "store" is not annotated as an argument. | contrasting |
train_2475 | ), an unsupervised model usually suffers from two drawbacks, i.e., lower performance and higher computational cost. | bilingual projection (Hwa et al., 2005;Smith and Eisner, 2009;Jiang and Liu, 2010) seems a promising substitute for languages with a large amount of bilingual sentences and an existing parser of the counterpart language. | contrasting |
train_2476 | Most commonly used alignment models, such as the IBM models and HMM-based aligner, are unsupervised learners, and can only capture simple distortion features and lexical translational features due to the high complexity of the structure prediction space. | the CRF-based NER models are trained on manually annotated data, and admit richer sequence and lexical features. | contrasting |
train_2477 | Morph can be considered a special case of alias used for hiding true entities in malicious environments (Hsiung et al., 2005; Pantel, 2006). | social network plays an important role in generating morphs. | contrasting |
train_2478 | Furthermore, our approaches can also be applied for satire or other implicit meaning recognition, as well as information extraction (Bollegala et al., 2011). | morph resolution in social media is challenging due to the following reasons. | contrasting |
train_2479 | Given the constructed network, a straightforward solution for finding the target for a morph is to use link-based similarity search. | now objects are linked to different types of neighbors; if all neighbors are treated the same, it may cause information loss problems. | contrasting |
train_2480 | For example, using only surface features, the real target "乔布斯 (Steve Jobs)" of the morph "乔帮主 (Qiao Boss)" is not top ranked since some other candidates such as "乔治 (George)" are more orthographically similar. | "Steve Jobs" is ranked top when combined with semantic features. | contrasting |
train_2481 | Translations get propagated to oov nodes using a label propagation technique. | besides the difference in the oov label assignment, there is a major difference between our bipartite graph and the baseline (Marton et al., 2009): we do not use a heuristic to reduce the number of neighbor candidates and we consider all possible candidates that share at least one context word. | contrasting |
train_2482 | A common practice to control the number of edges is to connect each node to at most k other nodes (k-nearest neighbor). | finding the top-k nearest nodes to each node requires considering its similarity to all the other nodes, which requires O(n^2) computations, and since n is usually very large, doing so is practically intractable. | contrasting |
train_2483 | The correctness of this gold standard is limited to the size of the parallel data used as well as the quality of the word alignment software toolkit, and is not 100% precise. | it gives a good estimate of how each oov should be translated without the need for human judgments. | contrasting |
train_2484 | Most have focused on extracting a translation lexicon by mining monolingual resources of data to find clues, using probabilistic methods to map words, or by exploiting the cross-language evidence of closely related languages. | most of them evaluated only high-frequency words of specific types (nouns or content words) (Rapp, 1995; Koehn and Knight, 2002; Haghighi et al., 2008; Garera et al., 2009; Laws et al., 2010). We do not consider any constraint on our test data, and our data includes many low-frequency words. | contrasting |
train_2485 | Predicate-argument structure (PAS) has been demonstrated to be very effective in improving SMT performance. | since a sourceside PAS might correspond to multiple different target-side PASs, there usually exist many PAS ambiguities during translation. | contrasting |
train_2486 | Considering that current syntax-based translation models are always impaired by cross-lingual structure divergence (Eisner, 2003), PAS is really a better representation of a sentence pair to model the bilingual structure mapping. | since a source-side PAS might correspond to multiple different target-side PASs, there usually exist many PAS ambiguities during translation. | contrasting |
train_2487 | From Figure 1, we can see that (a) and (c) get the same source-side PAS and target-side-like PAS. | they are different because in Figure 1(c), there is a gap string "对 运动员" between [A0] and [Pred]. | contrasting |
train_2488 | Researchers such as Swan and Smith (2001), Aarts and Granger (1998), Davidsen-Nielsen and Harder (2001), and Altenberg and Tapper (1998) work on mother tongue interference to reveal overused/underused words, part of speech (POS), or grammatical items. | very little is known about how strongly mother tongue interference is transferred to another language and about what relation there is across mother tongues. | contrasting |
train_2489 | For example, Slavic languages have a rich inflectional case system (e.g., Czech has seven inflectional cases) whereas French does not. | the difference in the richness cannot be transferred into English because English has almost no inflectional case system. | contrasting |
train_2490 | In contrast, the Slavic Englishes are scattered. | the ratios give a clue to how to distinguish Slavic Englishes from the others when combined with other | contrasting |
train_2491 | Thus, co-occurrence of words in n-word windows, syntactic structures, sentences, paragraphs, and even whole documents is captured in vector-space models built from text corpora (Turney and Pantel, 2010;Basili and Pennacchiotti, 2010;Erk and Padó, 2008;Mitchell and Lapata, 2008;Bullinaria and Levy, 2007;Jones and Mewhort, 2007;Pado and Lapata, 2007;Lin, 1998;Landauer and Dumais, 1997;Lund and Burgess, 1996;Salton et al., 1975). | little is known about typical profiles of texts in terms of co-occurrence behavior of their words. | contrasting |
train_2492 | Thus, inasmuch as highest PMI values tend to capture multi-word expressions (South and Africa; Merrill and Lynch), morphological variants (bids and bidding), or synonyms (mergers and takeovers), their proportion in word type pairs does not seem to give a clear signal regarding the quality of writing. | the area of moderately high PMI values (from PMI=2.5 to PMI=3.67 in Figure 2) produces a very consistent picture, with only two points out of 48 in that interval lacking significant positive correlation with essay score (p2 at PMI=3.17 and p5 at PMI=3). | contrasting |
train_2493 | Because the style of informal writing may be different in different data sources, tailoring an approach towards a particular data source can improve performance in the desired domain. | this is often done at the cost of adaptability. | contrasting |
train_2494 | Unlike on previous datasets, the use of generic mappings only provided a small improvement over the baseline. | the use of domain-specific generators once again led to significantly increased performance on subjects and objects. | contrasting |
train_2495 | These systems limited their search space to the elements that share a syntactical relation with the predicate. | when the participants of a predicate are implicit, this approach obtains incomplete predicative structures with null arguments. | contrasting |
train_2496 | For the predicate plan in the previous sentence, a traditional SRL process only returns the filler for the argument arg1, the theme of the plan. | in both examples, a reader could easily infer the missing arguments from the surrounding context of the predicate, and determine that in (1) both instances of the predicate share the same arguments and in (2) the missing argument corresponds to the subject of the verb that dominates the predicate, Quest Medical Inc. Obviously, these additional annotations could contribute positively to its semantic analysis. | contrasting |
train_2497 | We found many derivational patterns in German to be conceptually simple (e.g., verb-noun zero derivation) so that substantial coverage can already be achieved with very simple transformation functions. | there are many more complex patterns (e.g., suffixation combined with optional stem changes) that in sum also affect a considerable number of lemmas, which required us to either implement low-coverage rules or generalize existing rules. | contrasting |
train_2498 | Both the developers of these systems as well as researchers working on the subject matter frequently claim their approaches to be searching the entire web or, at least, to be scalable to web size. | there is hardly any evidence to substantiate this claim-rather the opposite can be observed: commercial plagiarism detectors have not been found to reliably identify plagiarism from the web (Köhler and Weber-Wulff, 2010), and the evaluation of research prototypes even under laboratory conditions shows that there is still a long way to go (Potthast et al., 2010b). | contrasting |
train_2499 | These relations are extracted from unstructured or semi-structured text using ontology learning from scratch (Velardi et al., 2013) and Open Information Extraction techniques (Etzioni et al., 2005;Yates et al., 2007;Wu and Weld, 2010;Fader et al., 2011;Moro and Navigli, 2013) which mainly stem from seminal work on is-a relation acquisition (Hearst, 1992) and subsequent developments (Girju et al., 2003;Pasca, 2004;Snow et al., 2004, among others). | these knowledge resources still lack semantic information about language units such as phrases and collocations. | contrasting |
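The rows above are a preview of the `train` split; each record pairs two sentences with one of four class labels, of which only `contrasting` appears in this excerpt. Below is a minimal sketch of how such a split could be loaded and filtered with the Hugging Face `datasets` library. The repository ID `user/contrasting-pairs` is a placeholder for illustration, not this dataset's actual name.

```python
# Minimal loading sketch, assuming the data is hosted as a standard
# Hugging Face dataset with the columns shown above (id, sentence1,
# sentence2, label).
from datasets import load_dataset

# Hypothetical repository ID -- substitute the real one for this dataset.
dataset = load_dataset("user/contrasting-pairs", split="train")

# Keep only the rows labeled "contrasting", as in the preview above.
contrasting = dataset.filter(lambda row: row["label"] == "contrasting")

# Inspect a few (id, sentence1) pairs.
for row in contrasting.select(range(3)):
    print(row["id"], "|", row["sentence1"][:60], "...")
```

If the label column is stored as a `ClassLabel` feature rather than a plain string, the comparison would instead use the integer id returned by `dataset.features["label"].str2int("contrasting")`.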