id (string, length 7-12) | sentence1 (string, length 6-1.27k) | sentence2 (string, length 6-926) | label (4 classes) |
---|---|---|---|
train_6800 | YASMET is designed to handle a large set of features efficiently. | there are not many details available about the features of this toolkit. | contrasting |
train_6801 | This classifier uses 19 distinct and weighted morphological features to provide complete diacritic, lexemic, glossary, and morphological information (Habash 2010). | because MADA is built on top of BAMA, it inherits all of BAMA's limitations. | contrasting |
train_6802 | As we showed earlier, WordNet (Miller 1995) has been used for this task. | even though WordNets have been built for other languages, their coverage is relatively limited when compared to the English WordNet. | contrasting |
train_6803 | For large-scale data sets, on-line training methods can be much faster than batch training methods. | we find that the existing on-line training methods are still not good enough for training large-scale NLP systems, probably because those methods are not well-tailored for NLP systems that have massive features. | contrasting |
train_6804 | (2011) modify the Web query approach to better capture statistical association by using pointwise mutual information (PMI) rather than raw co-occurrence frequency to quantify selectional preference: The role of the PMI transformation is to correct for the effect of unigram frequency: A common word may co-occur often with another word just because it is a common word rather than because there is a semantic association between them. | it does not provide a way to overcome the problem of inaccurate counts for low-probability co-occurrences. | contrasting |
train_6805 | We have attempted to identify general factors that predict the difficulty of an item by measuring rank correlation between the per-item pseudo-coefficients and various corpus statistics. | it has proven difficult to isolate reliable patterns. | contrasting |
train_6806 | Our role labeler is fully unsupervised with respect to both tasks: it does not rely on any role annotated data or semantic resources. | our system does not learn from raw text. | contrasting |
train_6807 | 2009), which was also derived from the SALSA corpus. | we did not convert the original constituent-based SALSA representation into dependencies, as we wanted to assess whether our methods are also compatible with phrase structure trees. | contrasting |
train_6808 | This also explains why there is little variation in the collocation and purity results across methods. | qualitatively the tradeoff between purity and collocation is the same as for English (i.e., purity is increased at the cost of collocation). | contrasting |
train_6809 | Though we cannot compute the true bias for any real system, the computation is trivial for this baseline. | the true V-measure is equal to 0, as the baseline can be regarded as a limiting case of a stochastic system that picks up one of the m clusters under the uniform distribution with m → ∞; the mutual information between any class labels and clustering produced by such model equals 0 for every m. the ML estimate for the V-measure is V̂(k, c) = 2Ĥ(c)/(log N + Ĥ(c)). | contrasting |
train_6810 | For this reason, we do not use PDAs to implement Step 1 in decoding: throughout this article a CYK-like parsing algorithm is always used for Step 1. | we do use PDAs to represent the regular languages produced in Step 1 and in the intersection and shortest distance operations needed for Steps 2 and 3. | contrasting |
train_6811 | In this section, in order to simplify the presentation we will only consider machines over the tropical semiring (R_+ ∪ {∞}, min, +, ∞, 0). | for each operation, we will specify in which semirings it can be applied. | contrasting |
train_6812 | We first describe the unpruned expansion. | in practice a pruning strategy of some sort is required to avoid state explosion. | contrasting |
train_6813 | The complexity of the algorithm is linear in the size of T′. | the size of T′ can be exponential in the size of T, which motivates the development of pruned expansion, as discussed next. | contrasting |
train_6814 | This leaves case (4) to consider. | in M, because there is a transition leaving state s_i labeled with w, the backoff arc, which is a failure transition, cannot be traversed, hence the destination of the n-gram arc s_j will be the next state in p. in M′, both the n-gram transition labeled with w and the backoff transition, now labeled with ε, can be traversed. | contrasting |
train_6815 | From Table 1(a) we see that Povey is faster and demands less memory compared with TC. | results using Povey with general determinization show that the memory demands between the two approaches are similar in the absence of the specialized determinization. | contrasting |
train_6816 | HALOGEN is thus domain-independent, and it was successfully ported to a specific dialogue system domain (Chambers and Allen 2004). | its performance depends largely on the granularity of the underlying meaning representation, which typically includes syntactic and lexical information. | contrasting |
train_6817 | More recent work has investigated other types of reranking models, such as hierarchical syntactic language models (Bangalore and Rambow 2000), discriminative models trained to replicate user ratings of utterance quality (Walker, Rambow, and Rogati 2002), or language models trained on speaker-specific corpora to model linguistic alignment (Isard, Brockmann, and Oberlander 2006). | a major drawback of the utterance-level overgenerate and rank approach is its inherent computational cost. | contrasting |
train_6818 | This article is based on the assumption that learning to produce paraphrases can be facilitated by collecting data from a large sample of annotators. | this requires that the meaning representation should (a) be simple enough to be understood by untrained annotators, and (b) provide useful generalization properties for generating unseen inputs. | contrasting |
train_6819 | Such instances are therefore likely to occur simultaneously in the training and test partitions. | in our evaluation such templates are mapped to the same meaning representation, and we enforce the condition that the generated meaning representation was not seen during training. | contrasting |
train_6820 | 2006), and suggested that users prefer dialogue systems in which repetitions are signaled (e.g., as I said before), even though that preference was not significant (Foster and White 2005). | we do not know of any research applying statistical paraphrasing techniques to dialogue. | contrasting |
train_6821 | BAGEL's granularity is defined by the semantic annotation in the training data, rather than external linguistic knowledge about what constitutes a unit of meaning; namely, contiguous words belonging to the same semantic stack are modeled as an atomic observation unit or phrase. | with word-level language models, a major advantage of phrase-based generation models is that they can model long-range dependencies and domain-specific idiomatic phrases with fewer parameters. | contrasting |
train_6822 | In terms of computational complexity, the number of stack sequences Order(S_m) to search over during the first decoding step increases exponentially with the number of input mandatory stacks. | the proposed three-stage architecture allows for tractable decoding by (a) pruning low probability paths during each Viterbi search, and (b) pruning low probability sequences from the output n-best list of each component. | contrasting |
train_6823 | Because a dialogue act can typically be conveyed in a large number of ways, it seems natural to model the NLG task as a one-to-many mapping. | previous work on statistical NLG has typically focused on evaluating the top ranked utterance, without evaluating whether the generator can produce paraphrases matching a reference paraphrase set (Langkilde-Geary 2002;Reiter and Belz 2009). | contrasting |
train_6824 | We first propose taking a sample from the top of the n-best list produced by BAGEL's realization reranking FLM shown in Table 2. | to avoid sampling from the long tail of low-probability utterances, we only consider utterances whose probability lies within a selection beam relative to the probability of the first-best utterance p_1; that is, only the utterances generated with a probability above the resulting threshold are kept. | contrasting |
train_6825 | FLMs can be trained easily by estimating conditional probabilities from feature counts over a corpus, and they offer efficient decoding techniques for real-time generation. | fLMs do not scale well to large feature sets (i.e., contexts), as each additional feature increases the amount of data required to accurately estimate the FLM's conditional probability distribution. | contrasting |
train_6826 | The trigram model performs best without including any slot in the utterance class, with a mean BLEU score of .28. | bAGEL produces a score of .37 on the same data (using the most likely utterance only). | contrasting |
train_6827 | This limitation could be alleviated by including the n slots for which we want to control the lexical realization as part of the utterance class. | this is not tractable as it would require fragmenting the data further to produce all 2^n slot combinations as distinct utterance classes. | contrasting |
train_6828 | the handcrafted gold over the FLM reranker with n-best outputs, as no significance was reached over 600 comparisons (p < 0.05). | the judges preferred the handcrafted generator over the perceptron reranker, possibly because it was also perceived as significantly more natural (p < 0.05). | contrasting |
train_6829 | It is important to note that crowdsourced evaluations can lead to additional noise compared with standard lab-based evaluation, mostly due to the possibility of uncooperative evaluators. | the randomization of the order of the evaluated utterances ensures that such noise does not bias the results towards one system. | contrasting |
train_6830 | The model proposed by Yu and Joachims (2009) is highly related to ours because it is also based on latent trees. | they use undirected trees to represent clusters, whereas we use directed trees. | contrasting |
train_6831 | The CoNLL-2012 Shared Task data sets also include coreferring mentions of events. | the current version of our system does not consider verbs when creating candidate mentions and therefore does not resolve coreferences involving events. | contrasting |
train_6832 | In such a case, the correct prediction of y is impossible. | such cases are rare, given that the recall of the used sieves is approximately 90%. | contrasting |
train_6833 | At the same time, most structure learning algorithms are based on linear models, as such algorithms have strong theoretical guarantees regarding their prediction performance and, moreover, are computationally efficient. | linear models using basic features alone do not capture enough information to effectively represent coreference dependencies. | contrasting |
train_6834 | In this situation, the prediction of a completely correct clustering is impossible; that is, F_y(F_h(x, y)) ≠ y for any model parameters. | such cases are rare, given that the recall of the used sieves is approximately 90%. | contrasting |
train_6835 | 's system, sieves are ordered from higher to lower precision. | in our filtering strategy, precision is not a concern, and the application order is not important. | contrasting |
train_6836 | From Table 3, it can be further observed that the best scores on Chinese and English are similar. | the performances on the Arabic language are much lower. | contrasting |
train_6837 | In the CoNLL-2012 data sets, for both English and Arabic, only the outer noun phrases are considered as mentions. | in the Chinese newswire documents, nested mentions are annotated as coreferring (Chen and Ng 2012). | contrasting |
train_6838 | The most frequent errors are pronouns, followed by proper nouns. | in the English language, the proportion of pronouns is much higher than the proportion of proper nouns, most likely due to [...] (interleaved caption of Table 17: Most frequent errors whenever an incorrect arc (i, j) is predicted instead of the correct arc (i*, j)). | contrasting |
train_6839 | Cluster-sensitive features can further extend our modeling by considering features that are more strongly adherent to the coreference task. | as far as we know, there is as yet no structure learning system that considers such features. | contrasting |
train_6840 | Recently, Zhong and Ng (2009) tackled this problem by using a bilingual dictionary. | the dictionary has to be aligned to the sense inventory of interest (e.g., WordNet) and a large parallel corpus must be available that covers the full range of meanings in a lexicon. | contrasting |
train_6841 | The approach, implemented in a system based on Support Vector Machines and called It Makes Sense (Zhong and Ng 2010, IMS), attains state-of-the-art performance on lexical sample and all-words WSD tasks. | according to our calculation on the available models, this approach can only provide training examples for about one third of ambiguous nouns in WordNet, more than half of which have only one of their senses covered. | contrasting |
train_6842 | Experimental results show that the joint use of multilingual knowledge enables further improvements over monolingual WSD. | the power of this disambiguation system lies mainly in its usage of the BabelNet multilingual semantic network. | contrasting |
train_6843 | We propose a new approach to the generation of pseudowords that enables the creation of semantically aware pseudowords while tackling the coverage and flexibility issues of the vicinity-based approach. | to the vicinity-based method, which takes as its search space the surroundings of a sense, our technique considers the WordNet semantic network in its entirety, hence enabling us to determine a graded degree of similarity between a given sense and all other synsets in WordNet. | contrasting |
train_6844 | the list similarSynsets to select a pseudosense. | the mode statistics in the table suggest that even when minFreq is set to a large value, most of the pseudosenses are picked out from the highest-ranking positions in the similarSynsets list. | contrasting |
train_6845 | In order to maximize the possibility of preserving the meaning of the original synset, a pseudosense should be selected from the set of words in the same synset, or in the directly related synsets (e.g., hypernym synsets). | many of the WordNet synsets do not contain monosemous terms and the similarity-based approach often needs to look further into the other indirectly related synsets so as to find a suitable pseudosense. | contrasting |
train_6846 | Ideally, the corpus used for calculating these statistics should be fully sense-tagged, namely, each usage of an ambiguous co-occurring word tagged with the intended sense. | because our training data (as is customary for WSD lexical sample data sets) do not provide sense annotations for context words, these edges are semi-noisy in that we connect an unambiguous endpoint w_i to all senses of w′. | contrasting |
train_6847 | across sense ranking, irrespective of the distribution of the training data. | iMS is not equally robust across configurations: Although its recall is relatively stable in the Uni-Nat configuration, it is not when the training set is naturally distributed (Nat-Nat). | contrasting |
train_6848 | We observed that IMS attains an optimal recall value of 100.0 for all data set sizes and for both sense distributions (i.e., uniform and natural distributions), showing that its models perfectly fit the training data. | as mentioned earlier in Section 6.5, in our setting the automatic enrichment of the LKB is less immediate and natural than the training of the supervised system. | contrasting |
train_6849 | In fact, as the number of training sentences increases, more reliable sets of related words get selected that are likely to provide semantic edges that are more beneficial. | the value of K, and hence the number of additional edges, remains almost constant across different sizes of the training data. | contrasting |
train_6850 | So, if the user model believes that the user cannot associate an attribute-value pair (e.g., < category, recliner >) to the target entity x, then it would return false. | if he can instead associate the pair (e.g., < category, chair >) to x, the user model would return true. | contrasting |
train_6851 | Therefore, using an accurate user model, an appropriate choice can be made to suit the user. | these models are static and are predefined before run-time. | contrasting |
train_6852 | In large domains, a large number of explicit sensing questions would need to be asked, which could be unwieldy. | we aim to sense each user's domain knowledge implicitly by using expert technical (or "jargon") expressions within the interaction. | contrasting |
train_6853 | Usually, in fully automated dialogue systems, automatic speech recognition (ASR) and natural language understanding (NLU) modules are used. | we use a human wizard to play the roles of ASR and NLU modules, so that we can focus on only the user modeling and NLG problem. | contrasting |
train_6854 | Recently, Lemon (2008), Rieser and Lemon (2009), and Dethlefs and Cuayahuitl (2010) have extended this approach to NLG to learn NLG policies to choose the appropriate attributes and strategies in information presentation tasks. | to our knowledge, the application of RL for dynamically modeling users' domain knowledge and generation of referring expressions based on user's domain knowledge is novel. | contrasting |
train_6855 | Another possible metric for optimization would be to weigh each reference instance equally, wherein there is no need to calculate Independent Accuracy for each entity and then average them into Adaptation Accuracy, as shown earlier. | such an approach will lead the learning agent to ignore the entities that are least referred to, and focus on getting the reference to the most frequently referred-to entities right. | contrasting |
train_6856 | Note that as the value of n increases from 1, accuracy increases as it provides more evidence for classification. | after a certain point the adaptation accuracy started to stabilize, because too much sensing is not more informative. | contrasting |
train_6857 | Therefore, when there is no verbal feedback (i.e., no clarification request) from the user, the system has no information on which a user profile can be picked. | the learned policy represents this uncertainty in its state transitions and is able to select an appropriate adaptive action. | contrasting |
train_6858 | The difference between the Learned-DS and the Jargon-adapt policy is statistically significant (p < 0.05). | the difference between the Learned-DS and the Stereotype policy is not significant. | contrasting |
train_6859 | That would require Arg1 to be "there isn't likely to be any silver lining." | the annotators did not take such an argument to be minimal. | contrasting |
train_6860 | Multiplicity: The PDTB allows more than one sense label to be associated with a single discourse connective to indicate that multiple sense relations hold concurrently (e.g., a token of since may be labeled with both a temporal and causal sense). | propBank only permits a constituent to fill a single functional role. | contrasting |
train_6861 | They propose a user model for jointly generating keywords and questions. | their approach is based on generating question templates from existing questions, which requires a large set of English questions as training data. | contrasting |
train_6862 | its relevance to the topic in consideration. | q3 of the state-of-the-art was assigned a lower score due to its lack of clarity with respect to the topic. | contrasting |
train_6863 | We can regard the task of sentiment lexicon learning as word-level sentiment classification. | for word-level sentiment classification, it is not straightforward to extract features for a single word. | contrasting |
train_6864 | It appears that the antonym relations depict word relations in a more accurate way and can refine the word sentiment scores more precisely. | the synonym relation and word alignment relation dominate, whereas the antonym relation accounts for only a small percentage of the graph. | contrasting |
train_6865 | This parsing algorithm is defined for featureless variants of TAG. | in implemented TAGs (e.g., XTAG [The XTAG Research Group 2001], SemXTAG [Gardent 2008], or XXTAG [Alahverdzhieva 2008]) feature structures and feature unification are central. | contrasting |
train_6866 | (4) The tall black meerkat slept. | (Independent derivation) because they may involve strong scopal and morpho-syntactic constraints, stacked predicative verbs (i.e., verbs taking a sentential complement, Example (5a)) and non-intersective modifiers (Example (5c)) require dependent derivations. | contrasting |
train_6867 | To ensure the appropriate linearization, Schabes and Shieber's approach introduces the outermost-predication rule, which stipulates that predicative trees adjoin above modifier auxiliary trees. | the FB-TAG approach allows both orders and lets feature constraints rule out ungrammatical sentences such as Example (14b). | contrasting |
train_6868 | The success of DSMs in essentially word-based tasks such as thesaurus extraction and construction (Grefenstette 1994; Curran 2004) invites an investigation into how DSMs can be applied to NLP and information retrieval (IR) tasks revolving around larger units of text, using semantic representations for phrases, sentences, or documents, constructed from lemma vectors. | the problem of compositionality in DSMs (of how to go from word to sentence and beyond) has proved to be non-trivial. | contrasting |
train_6869 | Its properties are well known, and it becomes simple to evaluate the meaning of a sentence if given a logical model and domain, as well as verify whether or not one sentence entails another according to the rules of logical consequence and deduction. | such logical analysis says nothing about the closeness in meaning or topic of expressions beyond their truth-conditions and which models satisfy these truth conditions. | contrasting |
train_6870 | In this model, nouns are lexical vectors, as with other models. | embracing a view of adjectives that is more in line with formal semantics than with distributional semantics, they model adjectives as linear maps taking lexical vectors as input and producing lexical vectors as output. | contrasting |
train_6871 | Whereas the approaches to compositional DSMs presented in Section 2 either failed to take syntax into account during composition, or did so at the cost of not being able to compare sentences of different structure in a common space, this categorical approach projects all sentences into a common sentence space where they can be directly compared. | this alone does not give us a compositional DSM. | contrasting |
train_6872 | The general learning algorithm presented in Section 4.3 technically can be applied to learn and model relations of any semantic type. | many open questions remain, such as how to deal with logical words, determiners, and quantification, and how to reconcile the different semantic types used for sentences with transitive and intransitive sentences. | contrasting |
train_6873 | This information is reported in their paper as additional means for model comparison. | for the same reason we considered Spearman's ρ to be a fair means of model comparison (namely, in that it required no model score normalization procedure and thus was less likely to introduce error by adding such a degree of freedom), we consider the HIGH/LOW means to be inadequate grounds for comparison, precisely because it requires normalized model scores for comparison to be meaningful. | contrasting |
train_6874 | They also more generally demonstrated that concrete models could be built from the general categorical framework and perform adequately in simple paraphrase detection tasks. | various aspects of compositionality were not evaluated here. | contrasting |
train_6875 | The creation of treebanks is a prime example (Marcus, Santorini, and Marcinkiewicz 1993). | the linguistic theories motivating these annotation efforts are often heavily debated, and as a result there often exist multiple corpora for the same task with vastly different and incompatible annotation philosophies. | contrasting |
train_6876 | For dependency parsing, an effective guiding feature is the dependency path between the hypothetic head and modifier, as shown in Figure 3. | our effort is not limited to this, and more special features are introduced: A classification label or dependency path is attached to each feature of the baseline classifier to generate combined guiding features. | contrasting |
train_6877 | It would also be valuable to evaluate the improved word segmenter and dependency parser on the out-of-domain data sets. | currently most corpora for word segmentation and dependency parsing do not explicitly distinguish the domains of their data sections, making such evaluations difficult to conduct. | contrasting |
train_6878 | Eschewing theorizing to stay close to data permits a remarkably wide range of linguistic phenomena to be covered, and it is this that is the book's greatest strength. | in a few places, a seemingly arbitrary theoretical perspective is assumed rather more tacitly than one might hope, with few hints as to alternative analyses (e.g., see the following remarks about parts of speech in Chapter 6). | contrasting |
train_6879 | Reordering of the German word stimmen is internal to the phrase-pair gegen ihre Kampagne stimmen -'vote against your campaign' and therefore represented by the translation model. | the model fails to correctly translate the test sentence shown in Figure 1(b), which is translated as 'they would for the legalization of abortion in Canada vote', failing to displace the verb. | contrasting |
train_6880 | The model makes no phrasal independence assumption and generates a tuple monotonically by looking at a context of n previous tuples, thus capturing context across phrasal boundaries. | n-gram-based systems have the following drawbacks. | contrasting |
train_6881 | The POS-based rewrite rules serve to precompute the orderings that will be hypothesized during decoding. | notice that this rule cannot generalize to the test sentence in Figure 1(b), even though the tuple translation model learned the trigram < sie -'they' würden -'would' stimmen -'vote' > and it is likely that the monolingual language model has seen the trigram they would vote. | contrasting |
train_6882 | Note that the OSM, like the discontinuous phrase-based model (Galley and Manning 2010), allows all possible geometries as shown in Figure 7. | because our decoder only uses continuous phrases, we cannot hypothesize (ii) and (iii) unless they appear inside of a phrase. | contrasting |
train_6883 | Our model, like the reordering models (Tillmann and Zhang 2005;Galley and Manning 2008) used in phrase-based decoders, is lexicalized. | our model has richer conditioning as it considers both translation and reordering context across phrasal boundaries. | contrasting |
train_6884 | As already mentioned in Example 6, the grammar G constructed there is not prefix-closed. | we can make it prefix-closed by explicitly allowing the "missing" rule instances: We shall now argue that this modification does not actually change the language generated by G′. | contrasting |
train_6885 | Construction of H. The chain of inclusions Y ⊆ L(H′) ⊆ L(G) is sufficient to prove Lemma 6: Because Y and L(G) are Parikh-equivalent (which we observed at the beginning of Section 3.4.2), so are L(H′) and L(G), which means that L(H′) satisfies all of the properties claimed in Lemma 6, even though this does not suffice to prove our current lemma. | once H′ is given, it is not hard to also obtain a grammar H that generates exactly Y. | contrasting |
train_6886 | In the grammar formalisms folklore, the generative capacity of CCG is often attributed to generalized composition, and indeed we have seen (in Lemma 4) that even grammars without target restrictions can generate non-context-free languages such as L(G_2). | our results show that composition by itself is not enough to achieve weak equivalence with TAG: The yields of the transformed derivations from Section 3.4 form a context-free language despite the fact that these derivations may still contain compositions, including compositions of degree n > 2. | contrasting |
train_6887 | Perhaps surprisingly, it is not the availability of generalized composition rules by itself that explains the generative power of CCG, but the ability to constrain the interaction between generalized composition and function application by means of target restrictions. | one may be interested in CCG primarily as a formalism for developing grammars for natural languages (Steedman 2000;Baldridge 2002;Steedman 2012). | contrasting |
train_6888 | From this point of view, the suitability of CCG for the development of lexicalized grammars has been amply demonstrated. | our technical results still serve as important reminders that extra care must be taken to avoid overgeneration when designing a grammar. | contrasting |
train_6889 | A simple answer is to add some lexicalized method for enforcing target restrictions to CCG, specifically on the application rules. | we are not aware that this idea has seen widespread use in the CCG literature, so it may not be called for empirically. | contrasting |
train_6890 | We note that this is not always the case: For example, the entailment graph in Figure 3 is not an FRG, because X annex Y entails both Y be part of X and X invade Y, while the latter two do not entail one another. | we hypothesize that this scenario is rather uncommon. | contrasting |
train_6891 | Assuming that entailment graphs are FRGs allows us to use the node re-attachment operation in linear time. | this assumption also enables performing other graph operations efficiently. | contrasting |
train_6892 | Comparing runtimes for TNF and GNF, we see that the gap between the algorithms decreases as −λ increases. | for reasonable values of λ, TNF is about four to seven times faster than GNF, and we were unable to run GNF on large graphs, as we report in Section 5. | contrasting |
train_6893 | As expected, HTL-FRG is much faster than both TNF and TNCF, and TNCF is somewhat slower than TNF. | as mentioned earlier, TNCF is able to improve both precision and recall compared with TNF and HTL-FRG. | contrasting |
train_6894 | Ideally, we need examples of sentences annotated with polarity for the whole sentence as well as sentiment tags for constituents within a sentence, as with the Penn TreeBank for training traditional linguistic parsers. | this is not practical as the annotations will be inevitably time-consuming and require laborious human efforts. | contrasting |
train_6895 | Specifically, Figure 10a indicates that a minimum fragment frequency that is too small will introduce noise, and it is difficult to estimate reliable polarity probabilities for infrequent fragments. | a minimum fragment frequency that is too large will discard too much useful information. | contrasting |
train_6896 | The results become better as the beam size K increases. | the computation costs increase. | contrasting |
train_6897 | As shown in the results, the polarity probabilities learned by our method are more reasonable and meet people's intuitions. | there are also some negative examples caused by "false subjective." | contrasting |
train_6898 | They define an aggregate evaluation score for comparing systems, estimating expected value and standard error for hypothesis testing. | in aggregating this way information about ties is lost. | contrasting |
train_6899 | Similarly, in the natural language processing field (NLP), the adaptation of a generic model to a specific domain often requires new annotated data that illustrate its specificities (as in Candito, Anguiano, and Seddah 2011). | the creation cost of such data highly depends on the kind of labels used to adapt the model. | contrasting |
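
The rows above follow a dataset-viewer table dump with columns id, sentence1, sentence2, and label (4 classes). Below is a minimal sketch of how such a split could be loaded and filtered with the Hugging Face `datasets` library; it assumes the table corresponds to a dataset hosted on the Hub, and the identifier "user/scientific-sentence-pairs" is a hypothetical placeholder, since the dump does not name the dataset.

```python
# Minimal sketch, assuming the table above corresponds to a Hugging Face
# dataset with columns id, sentence1, sentence2, and label; the identifier
# "user/scientific-sentence-pairs" is a hypothetical placeholder.
from datasets import load_dataset

dataset = load_dataset("user/scientific-sentence-pairs", split="train")

# Keep only the rows labeled "contrasting", as in the sample shown above.
contrasting = dataset.filter(lambda row: row["label"] == "contrasting")

# Print the first few pairs in the same id | sentence1 | sentence2 layout.
for row in contrasting.select(range(3)):
    print(f'{row["id"]} | {row["sentence1"]} | {row["sentence2"]}')
```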