id (stringlengths 7-12) | sentence1 (stringlengths 6-1.27k) | sentence2 (stringlengths 6-926) | label (stringclasses, 4 values)
---|---|---|---|
train_14500 | These tasks use automatic metrics to determine the quality of the participating systems. | these efforts pale in comparison to competitions organized in other fields, e.g. | contrasting |
train_14501 | Our experiments show that this method does improve click prediction performance. | this method has several potential weaknesses. | contrasting |
train_14502 | In the latest SemEval campaign (UzZaman et al., 2013), the rule-based HeidelTime (Strötgen and Gertz, 2010) outperformed machine-learning and hybrid counterparts by a large margin. | statistical systems obtained promising results with respect to temporal entity extraction. | contrasting |
train_14503 | We studied the effect of corpus size by training a model using a portion of the clinical training data equivalent in size to that of FTB (marked clin-in Table 3). | the ATC corpus was too small to train any usable models (results not shown). | contrasting |
train_14504 | State-of-the-art approaches rely on word vector representations with TF-IDF weights (Salton and Buckley, 1988). | expanding the seed set by relying on TF-IDF representations to find similar instances has limitations, since the similarity between any two relationship instance vectors of TF-IDF weights is only positive when the instances share at least one term. | contrasting |
train_14505 | In Table 1, for example, both the annual, Milankovitch and continuum temperature variability and annual temperature between 1958 and 2010 are generalised to annual temperature. | many generalised variables are unique and thus serve no purpose in relating variables. | contrasting |
train_14506 | Since many sentences are long, complex and domain-specific, it comes as no surprise that the parser often fails to correctly resolve well-known ambiguities in coordination and PP-attachment. | with pattern matching on strings and/or POS tags instead of syntax trees, determining boundaries of variables would be problematic. | contrasting |
train_14507 | Generalisation by tree pruning appears to work quite well as long as the parse is correct. | pruning by itself is insufficient and should be supplemented with other methods. | contrasting |
train_14508 | The experiments illustrate that if we are lucky enough to have KB tags, they improve NER. | the models use all possible KB tags and should be considered an upper bound. | contrasting |
train_14509 | While such systems aim for extraction coverage, and because they operate in an ontology-free setting, they don't directly address the problem of improving knowledge density in ontological KGs such as NELL. | oIE extractions provide a suitable starting point which is exploited by ENTICE. | contrasting |
train_14510 | We can see that our model still benefits from the randomly sampled negative examples, which may help our model learn to refine the margin between the positive and negative examples. | with a similar amount of negative examples, treating the reversed dependency paths from objects to subjects as negative examples can achieve a better performance (85.4% F1), improving over random samples by 1.9%. | contrasting |
train_14511 | Of course, exceptions exist, e.g., TipSem for English and Spanish (Llorens et al., 2010), and HeidelTime even covers 13 languages. | it required a lot of manual effort to extend HeidelTime, and researchers of different institutes and countries have been involved for German (Strötgen and Gertz, 2011), Dutch (van de Camp and Christiansen, 2012), Spanish , French (Moriceau and Tannier, 2014), Croatian (Skukan et al., 2014), Vietnamese and Arabic , Italian (Manfredi et al., 2014), Chinese (Li et al., 2014), Russian, Estonian, and Portuguese. | contrasting |
train_14512 | While there have been earlier approaches for automatic extensions of temporal taggers to further languages (Saquete et al., 2004;Negri et al., 2006;Spreyer and Frank, 2008), these were limited to a few languages and the results were considered less successful, in particular for the normalization subtask. | angeli and Uszkoreit (2013) presented an approach to language-independent parsing of temporal expressions, however, addressing only the normalization and not the extraction subtask. | contrasting |
train_14513 | Note that in contrast to the original rules, these rules contain neither POS constraints nor any English terms, to guarantee language independence. | fuzzy pattern matching is allowed at the end of some patterns to try to take care of morphology-rich languages. | contrasting |
train_14514 | This is the case for new languages and domains, the task we face in this paper. | training embeddings for Chinese is not straightforward: Chinese is not word segmented, so embeddings for each word cannot be trained on a raw corpus. | contrasting |
train_14515 | Since one entity belongs to multiple types, relation schemas with general types will be ranked higher. | two different schemas may share the same support. | contrasting |
train_14516 | For example, we can learn that the type person subsumes types such as actor, politician and deceased person. | strict set inclusion doesn't always hold in the knowledge base. | contrasting |
train_14517 | In GM, a fixed set of combination weights (i.e., ω) are learned to optimize the overall performance for all entity-document pairs. | the best combination strategy for a given pair is not always the best for the others since both the documents and entities are heterogeneous. | contrasting |
train_14518 | Moreover, topic LDTM generally performs better than src LDTM in both scenarios, which meets our expectation because topic-based features have far more dimensions than source-based features. | even if the source-based feature vector holds only a few dimensions (10 in our experiments), src LDTM improves the precision on the basis of GM. | contrasting |
train_14519 | Transfer learning is utilized to transfer the keyword importance learned from training pairs to query pairs (Zhou and Chang, 2013). | some highly supervised methods require training instances for each entity to build a relevance model, limiting their scalabilities. | contrasting |
train_14520 | Allowing patients access to their own electronic health records (EHRs) can enhance medical understanding and provide clinically relevant benefits (Wiljer et al., 2006), including increased medication adherence (Delbanco et al., 2012). | eHR notes present unique challenges to the average patient. | contrasting |
train_14521 | Other related work that reduces long queries includes ranking all subsets of the original query (Kumaran and Carvalho, 2009). | typical EHR notes are longer than the passages and verbose queries in these systems, which makes the graphical model and other learning based models less efficient. | contrasting |
train_14522 | Wikipedia, especially the Medicine category, is an appealing resource for such information, as the human-curated links in it are naturally concepts that are important. | the number of articles in the Medicine category outnumbers our EHR notes substantially; we therefore restricted the Wikipedia articles to the Diabetes category. | contrasting |
train_14523 | This can be attributed to the fact that the Wikipedia articles outnumbered the EHR data by 7 times. | this data helped improve the coverage of the key concepts. | contrasting |
train_14524 | A straightforward approach is to crawl bilingual documents from the Web for use as training data. | because most documents on the Web are written in one language, it is not always easy to collect a sufficient number of multilingual documents, especially those involving minor languages. | contrasting |
train_14525 | Although both GCCA and CCA show improved performance as the sample size of [train-E/J] increases, not surprisingly, GCCA is gradually overtaken by CCA when we have enough samples to learn the relevance between English and Japanese texts directly. | accuracies of image-mediated learning in the cases when [train-E/J] is scarce are higher than CCA baseline. | contrasting |
train_14526 | These classifiers can be seen as a baseline for comparisons. | the remaining SVM clearly yields the best results among the employed algorithms. | contrasting |
train_14527 | Nevertheless, they are not good enough for reliable predictions. | the aggregated classification data accurately predict whether the majority of banks will increase or decrease their Tier 1 capital ratio in the following year: for 12 out of 13 years, the algorithm correctly predicts the direction of the T1 evolution. | contrasting |
train_14528 | Hence, the sentiment scores could be used in regression models for predicting the T1 evolution. | the results are only meaningful if the figures are aggregated by year. | contrasting |
train_14529 | E.g., while the product review and the hotel review in Figure 1 cover opinions on several aspects, no deliberate structure is found in their argumentation. | the excerpt of the more professional movie review shows that this is not always the case. | contrasting |
train_14530 | Our hypothesis is that similar sentiment flows are used across domains of web reviews to express the same global sentiment. | because of the domain differences described in Section 3, we do not expect that the original sentiment flows of web reviews generalize well. | contrasting |
train_14531 | Intuitively, the interaction between entity boundaries and sentiment classes might not be as strong as that between more closely-coupled sources of information, such as word boundaries and POS (Zhang and Clark, 2008), or named entities and constituents (Finkel and Manning, 2009), for which joint models significantly outperform pipeline models. | there do exist cases where entity boundaries and sentiment classes reinforce each other. | contrasting |
train_14532 | As a result, the neural model can better capture patterns that do not occur in the training data. | the discrete model is based on manually defined binary features, which do not fire if not contained in the training data. | contrasting |
train_14533 | tity recognition, the pipeline can be a favorable choice. | although useful for some joint sequence labeling task (Ng and Low, 2004), the collapsed task does not seem to address the joint sentiment task as effectively. | contrasting |
train_14534 | A fundamental issue in opinion mining is to search a corpus for opinion units, each of which typically comprises the evaluation by an author for a target object from an aspect, such as "This hotel is in a good location". | few attempts have been made to address cases where the validity of an evaluation is restricted on a condition in the source text, such as "for traveling with small kids". | contrasting |
train_14535 | For example, those who intend to improve the quality of hotel A may investigate representative values for "Aspect" in the units satisfying "Target=hotel A & Polarity=negative", while those who look for accommodation may collect the opinion units for one or more candidate hotels and investigate the distribution of values for "Polarity" on an aspect-by-aspect basis. | in the above example (1), the evaluation for hotel A ("a reasonable price") is valid for "if you take a family trip with small kids", and thus it is not clear whether this evaluation is valid irrespective of the condition. | contrasting |
train_14536 | For example, the distribution of positive and negative opinions can be available on a category-by-category basis. | in this paper we focus only on the identification of CFOs and leave the semantic classification as future work. | contrasting |
train_14537 | Candidate categories include demographic and psychographic attributes for target users (e.g., age and hobby) and situations of target users (e.g., purpose, time, and place). | we leave the classification for U-CFOs as future work. | contrasting |
train_14538 | y_n, where y_i ∈ {BU, IU, BC, IC, Other, Target, Aspect, OpinionWord}. | because an opinion unit in an input sentence has been identified in advance, the task is a quinary classification with respect to y_i ∈ {BU, IU, BC, IC, Other}. | contrasting |
train_14539 | These methods compared the context of the entities and their reference pages in the KBs through a similarity measurement. | we tested 32 different names of General Motors (GM) car brands, and only four of the brands exist in Wikipedia. | contrasting |
train_14540 | MentionRank, the only state-of-the-art method for T-ED, is a graph-based model that focuses on a small set of entities (e.g., 50 or 100 entities) and conducts experiments on thousands of documents . | the graph has a quadratic growth as the number of documents increases. | contrasting |
train_14541 | In the graph, the context similarity between documents is computed and used as the edges normally, where the width of the context window that surround the target entity in the document is typically chosen to be 10, 20 or 50 words. | the documents considered here are limited in length, and a user seldom changes the topic in so few words; thus, we regard the entire document as the context of its target entities. | contrasting |
train_14542 | On one hand, the number of layers l declines sharply when η is less than 0.1; this indicates little difference in the trust level of documents in the propagation. | our method has no effect on the vertices outside of the graph, so the performance is directly proportional to the graph coverage rate. | contrasting |
train_14543 | EM yields precise archiving results because the constraint conditions are helpful to reduce uncertainty in string matching. | fM-based archiving is able to match entity name mentions with various forms, and thus it achieves higher recall. | contrasting |
train_14544 | In total there are 1,851 gold standard relevant documents available for the evaluation of entity archiving. | the data is far from enough because it only covers a small portion of all relevant documents in the pool. | contrasting |
train_14545 | The reasons are as follows: 1) RM gives greater weights to the high-frequency words, and 2) popular slices are of much greater public interest and hence frequently mentioned in relevant documents. | some entities not only share similar names but also similar popular slices, such as the religious vocation of different church scientologists. | contrasting |
train_14546 | Another observation is that in all the four figures, the performance of the stochastic ListNet methods increases with more samples of the object lists. | if there are too many samples, the performance starts to decrease. | contrasting |
train_14547 | For stochastic ListNet, the performance improves as k increases. | to the conventional ListNet, this improvement propagates to the results on the test data. | contrasting |
train_14548 | These mention at most 3 variables each and thus have relatively manageable groundings. | as discussed in the next section, ER-MLN can fail on questions that have distinct entities with similar string representations (e.g. | contrasting |
train_14549 | Therefore, our approach is corpus-level: We infer the types of an entity by considering the set of all of its mentions in the corpus. | named entity recognition (NER) is context-level or sentence-level: NER infers the type of an entity in a particular context. | contrasting |
train_14550 | • We address the problem of corpus-level entity typing in a knowledge base completion setting. | to other work that has focused on learning relations between entities, we learn types of entities. | contrasting |
train_14551 | That is clearest for tail entities -where one bad context can highly influence the final decision -and for tail types, which CM was not able to distinguish from other similar types. | the good results of the simple JM confirm that the score distributions in CM do help. | contrasting |
train_14552 | This negatively affects evaluation measures for many entities. | the resulting types do not have a balanced number of instances. | contrasting |
train_14553 | FREEBASE) improves Web extractions by predicting their reliability. | in both cases the main objective is distantly supervised extraction from unstructured text, rather than KB unification. | contrasting |
train_14554 | Given KB_i ∈ K_U, our disambiguation module (Figure 2) takes as input its set of unlinked triples T_i and outputs a set T_i^S ⊆ T_i of disambiguated triples with subject-object pairs linked to S. The triples in T_i^S, together with their corresponding entity sets and relation sets, constitute the redefined KB_i^S which is then added to K^S. | applying a straightforward approach that disambiguates all triples in isolation might lead to very imprecise results, due to the lack of available context for each individual triple. | contrasting |
train_14555 | After disambiguation (Section 5) each KB in K is linked to the unified sense inventory S and added to K^S. | each KB_i^S ∈ K^S still provides its own relation set R_i^S ⊆ R_i. | contrasting |
train_14556 | The distribution of entities over relations is however very skewed, with 80.33% of the triples being instances of the generalizations relationship. | rEVErB contains a highly sparse relation set (1,299,844 distinct relations) and more than 3 million distinct entities. | contrasting |
train_14557 | For the MT RNN and BOW + ME classifiers, the additional training data primarily benefits recall, particularly for the General domain. | the ST RNN sees an improvement in precision for all domains. | contrasting |
train_14558 | Name errors have a similar character to OOV errors in that they often have anomalous word sequences in the region of the OOV word (examples 1 and 2), which is why the OOV posterior is so useful. | too much reliance on the OOV posterior leads to wrongly detecting general OOV errors as name errors ('lizards' and 'zombies' in example 2) and missed detection of name errors where the confusion network cues indicate a plausible hypothesis ('omaha' in example 3). | contrasting |
train_14559 | To motivate this method, recall that one important property of a sieve-based approach is that later sieves can exploit earlier sieves' decisions when making their own decisions. | our first method of using sieves makes limited use of the decisions made by earlier sieves. | contrasting |
train_14560 | These current approaches have largely assumed that characters can be reliably identified in text using standard techniques such as Named Entity Recognition (NER) and that the variations in how a character is named can be found through coreference resolution. | such treatment of character identity often overlooks minor characters that serve to enrich the social structure and serve as foils for the identities of major characters (Eder et al., 2010). | contrasting |
train_14561 | (2013) propose an alternate approach for identifying speaker references in novels, using a probabilistic model to identify which character is speaking. | to account for the multiple aliases used to refer to a character, the authors first manually constructed a list of characters and their aliases, which is the task proposed in this work and underscores the need for automated methods. | contrasting |
train_14562 | While novels written before 1850 had slightly more characters on average, this effect may be due to the smaller number of works available from this period. | our finding raises many questions about whether the social networks for these characters obey similar trends in their size and density. | contrasting |
train_14563 | While DPMM achieved some promising results, it can still sometimes produce unsatisfactory topic clusters due to its unsupervised nature. | people often have prior knowledge about what potential topics should exist in a given text corpus. | contrasting |
train_14564 | In our previous works, we have investigated several different topological structures (tree and directed acyclic graph) to recursively model the semantic composition from the bottom layer to the top layer, and applied them on Chinese word segmentation (Chen et al., 2015a) and dependency parsing (Chen et al., 2015b) tasks. | these structures are not suitable for modeling sentences. | contrasting |
train_14565 | One common problem among most of them is that different results are obtained depending on the cluster initialization, suggesting that some clusters are unstable or weak. | there is no obvious way to effectively and efficiently evaluate the quality of clusters. | contrasting |
train_14566 | Other features we implemented include the last word of the section, and a binary feature indicating whether the section title contains a year which is included between the date of birth and date of death of the person of interest. | both pieces of information led to a performance drop on the development set, so we did not include them in the final feature set. | contrasting |
train_14567 | We can observe that the number of possible alignments per word problem of our algorithm is much smaller than . | the number of all the false alignments is still 80K. | contrasting |
train_14568 | 2 http://en.wikipedia.org/wiki/Metaphone which in turn serves as grouping words of similar sounds (lexical variations) to one code. | most of the schemes are designed for English and European languages and are limited when applied to other families of languages like Urdu. | contrasting |
train_14569 | Traditionally, CCR and NEL have been addressed separately. | such approaches miss out on the mutual synergies if CCR and NEL were performed jointly. | contrasting |
train_14570 | NEL methods often harness the semantic similarity between mentions and entities and also among candidate entities for different mentions (in Wikipedia or other KBs) for contextualization and coherence disambiguation (Hoffart et al., 2011;Milne & Witten, 2008;Kulkarni et al., 2009;Ratinov et al., 2011). | in the absence of CR mention groups, NEL has limited context and is bound to miss out on certain kinds of difficult cases. | contrasting |
train_14571 | Recently, several joint models have been proposed for CR-NER (Haghighi & Klein, 2010;Singh et al., 2013), CR-NEL (Hajishirzi et al., 2013), and NER-CR-NEL (Durrett & Klein, 2014). | to the best of our knowledge, no method exists for jointly handling CCR and NEL on large text corpora. | contrasting |
train_14572 | CCR alone would likely miss the coreference relation between Logan (Doc 1) and its alias Wolverine (Doc 2), leaving NEL with the difficult task of disambiguating "Logan" in a document with sparse and highly ambiguous context (Doc 1). | nEL alone would likely map Australia (Doc 3) to the country (not the movie) and could easily choose the wrong link for mention "Hugh". | contrasting |
train_14573 | Similar to the behavior induced by δ s , we observe that a high τ limits entity linking and possible KB feature inclusion, while an extremely low value (near to zero) allows for noisy feature incorporation -both situations leading to lowered CCR efficiency. | since τ prevents gross mis-alignment of mentions to KB entities, a wide range of small value (0.1 − 0.35) is seen to provide comparable performance. | contrasting |
train_14574 | Hence, CCR cannot be efficiently tackled by simply employing CR methods on a "super-document". | harnessing of non-local mention features (via CCR) and efficient detection of new mentions using link validation enables C3EL to achieve a gain of around 5% in NEL compared to others (see Table 8(b)). | contrasting |
train_14575 | While approaches addressing these issues exist, current algorithms typically suffer from high time complexity (Finkel and Manning, 2009) and are therefore difficult to scale to large datasets. | the problem of designing efficient and scalable models for mention extraction and classification from natural language texts becomes increasingly important in this era where a large volume of textual data is becoming available on the Web every day -users need systems which are able to scale to extremely large datasets to support efficient semantic analysis for timely decision-making. | contrasting |
train_14576 | • We might assist an online aligner by permuting our n segment pairs to place shorter, less ambiguous ones at the top. | we would have to communicate the permutation to the decompressor, at a prohibitive cost of log_2(n!) | contrasting |
train_14577 | Li and Hovy (2014) propose a neural network coherence model which employs distributed sentence representations and then predicts the probability of whether a sequence of sentences is coherent or not. | to the methods mentioned above which learn the word relationship in or between the sentences separately, we propose a hierarchical recurrent neural network language model (HRNNLM) to capture the word sequence across the sentence boundaries at the document level. | contrasting |
train_14578 | 2014 propose an RNN Encoder-Decoder which is a joint recurrent neural network model at the sentence level, as a conventional SMT decoder does. | at the discourse level, there is little report on applying DNN to boost the translation result of a document. | contrasting |
train_14579 | Neural networks have been shown to improve performance across a range of natural-language tasks. | designing and training them can be complicated. | contrasting |
train_14580 | For example, neural language models and joint language/translation models improve machine translation quality significantly (Vaswani et al., 2013;Devlin et al., 2014). | neural networks can be complicated to design and train well. | contrasting |
train_14581 | Both have a sharp tip at the origin that encourages all the parameters in a row to become exactly zero. | this also means that sparsity-inducing regularizers are not differentiable at zero, making gradient-based optimization methods trickier to apply. | contrasting |
train_14582 | Ideally, what one then wants is to learn a function h : X_{E_n} → Y_G, where X_{E_n} is the domain of instances representing a collection of EDUs for each dialogue and Y_G is the set of all possible SDRT graphs. | given the complexity of this task and the fact that it would require an amount of training data that we currently lack in the community, we aim at the more modest goal of learning a function where the domain of instances X_{E_2} represents features for a pair of EDUs and Y_R represents the set of SDRT relations. | contrasting |
train_14583 | Note, that the EG model with equal weighting scores slightly better than the one with optimized weighting for German but not for English. | this difference is not significant (p>0.5) for both languages, which indicates that the search for an optimal weighting is not necessary for the attachment task. | contrasting |
train_14584 | Studies have shown that such tasks can benefit from an explicit alignment component (Hickl and Bensley, 2007;Sultan et al., 2014b;Sultan et al., 2015). | alignment is still an open research problem. | contrasting |
train_14585 | The graph obtained by HARPY consists of around 600,000 hypernymy links for around 20,000 relational phrases. | the final graph was not evaluated for precision; rather, the evaluation was instead concentrated on the alignment between verb senses and relations. | contrasting |
train_14586 | "married to" and "relative of" for P_1 and P_2. | unlike previous approaches such as Markov logic networks, the atoms in each logical rule take values in the [0,1] continuous domain. | contrasting |
train_14587 | Relations with semantic types were also used in typed entailment graphs (Berant et al., 2011). | the type hierarchy was not considered there, which prevented from creating links between two relations with different semantic types. | contrasting |
train_14588 | 1, when the implicit object of "bake" is connected to the output of the "lay" action, it is inferred to be of type food since that is what is created by the "lay" action. | when the implicit PP argument of "bake" is connected to the output of the "preheat" action, it is inferred to be a location since "preheat" does not generate a food. | contrasting |
train_14589 | 1) should have a high probability. | spans that denote the output of actions (e.g., "batter", "banana mixture") should have low probability. | contrasting |
train_14590 | Data-driven extraction of cooking knowledge has been explored in the context of building a cooking ontology (Gaillard et al., 2012;Nanba et al., 2014). | our work induces probabilistic cooking knowledge as part of unsupervised learning process for understanding recipes. | contrasting |
train_14591 | 4 Given a window size of n words around a word w, the skip-gram model predicts the neighboring words given the current word. | the CBOW model predicts the current word w, given the neighboring words in the window. | contrasting |
train_14592 | Two lines of research are directly relevant to our work: sarcasm detection in Twitter and application of distributional semantics, such as word embedding techniques to various NLP tasks. | to current research on sarcasm and irony detection (Davidov et al., 2010;Riloff et al., 2013;Liebrecht et al., 2013;Maynard and Greenwood, 2014), we have introduced a reframing of this task as a type of word sense disambiguation problem, where the sense of a word is sarcastic or literal. | contrasting |
train_14593 | Score_Contrast(fox, flying fox) is 1.10, which is higher than the average collective contrastive score, 0.32. | the true taxonomic relation between 'bat' and 'flying fox' is not identified by the baseline, mainly due to the rare mention of this relation in the Web. | contrasting |
train_14594 | On the other hand, the true taxonomic relation between 'bat' and 'flying fox' is not identified by the baseline, mainly due to the rare mention of this relation in the Web. | our proposed method can recognize this relation for two reasons: (1) 'flying fox' has many synonyms such as 'fruit bat', 'pteropus', 'kalong', and 'megabat', and there is much evidence that these synonyms are kinds of 'bat' (i.e. | contrasting |
train_14595 | For example, in Animal domain, our method identifies 'wild sheep' as a hyponym of 'sheep', but in WordNet, they are siblings. | many references 10 11 consider 'wild sheep' as a species of 'sheep'. | contrasting |
train_14596 | In its simplest form, the relations will suggest whether the entities are linked together or not. | the underlying semantics of such relations can be deep, such as the one in clue 5 of Problem 1. | contrasting |
train_14597 | The goal is to identify the relation possDiff(E1, E2, E3), where E1, E2, E3 are constituents having a non-null class value. | instead of identifying posDiff(E1, E2, E3) directly, first the relation 3 ) } are identified, the extraction module will infer that posDiff(E1, E2, E3) holds. | contrasting |
train_14598 | It has been reported that the fast align approach is more than 10 times faster than baseline GIZA++, with comparable results in end-to-end French-, Chinese-, and Arabic-to-English translation experiments. | the simplicity of the IBM model 2 also leads to a limitation. | contrasting |
train_14599 | The translations from English improved (equal to GIZA++) at the second iteration. | translations to English improved more slowly. | contrasting |
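Rows of this dump are pipe-delimited records of the form `id | sentence1 | sentence2 | label |`. A minimal sketch for parsing such a row into a typed record, assuming no field contains a literal pipe (the `ContrastPair` class and field names are illustrative, not part of the original dataset):

```python
from dataclasses import dataclass

@dataclass
class ContrastPair:
    pair_id: str
    sentence1: str
    sentence2: str
    label: str

def parse_row(row: str) -> ContrastPair:
    """Split one 'id | sentence1 | sentence2 | label |' row into four fields."""
    # Drop surrounding whitespace and the leading/trailing pipes, then split.
    fields = [f.strip() for f in row.strip().strip("|").split("|")]
    if len(fields) != 4:
        raise ValueError(f"expected 4 fields, got {len(fields)}: {row!r}")
    return ContrastPair(*fields)

# Example using the first row of this dump.
row = ("train_14500 | These tasks use automatic metrics to determine the "
       "quality of the participating systems. | these efforts pale in "
       "comparison to competitions organized in other fields, e.g. | "
       "contrasting |")
pair = parse_row(row)
```

Mapping over a file of such rows (one per line) then yields the full set of `(sentence1, sentence2)` pairs with their `contrasting` labels.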