id (stringlengths 7-12) | sentence1 (stringlengths 6-1.27k) | sentence2 (stringlengths 6-926) | label (stringclasses, 4 values) |
---|---|---|---|
train_12000 | In (8), both 'create' and 'buy' are done due to the 'country's lack of natural resources'. | in (9), the analysts 'forecasting' and the company 'saying' do not have as their cause 'planned price cuts'. | contrasting |
train_12001 | The main challenge for such systems is translating out-of-vocabulary words (Carpuat et al., 2012). | words in biographies are closer to a training corpus of news commentaries and parliamentary proceedings and allow us to examine how well domain adaptation techniques can disambiguate lexical choices. | contrasting |
train_12002 | (2011) introduced additional caches to store (i) words and phrase pairs from training documents most similar to a current source article, and (ii) words from topical clusters created on the training set. | a central issue in these systems is that caches become noisy over time, since they ignore topic shifts in the documents. | contrasting |
train_12003 | The consistency cache's contents are computed similarly to the general domain case. | the cache gets cleared at segment boundaries. | contrasting |
train_12004 | The struct-topic cache performs much better on longer documents of over 30 sentences giving 0.3 to 0.4 BLEU points increase compared to the general domain model. | the performance worsens when the structured cache is applied on documents with less than 20 sentences. | contrasting |
train_12005 | changes the weights to learn different representations for the input layer. | the dropout for the input layer improves the performance. | contrasting |
train_12006 | The system is incremental in that each word class to be verbalised can yield a new set of utterance candidates. | it supports only addition, not revisions. | contrasting |
train_12007 | While their approach is fast to execute, it is limited to a restricted set of domain specific attributes; requires a training corpus of example sentences to define the space of possible surface realisations; and is based on a large set (800 rules) of domain specific rules extracted semi-automatically from the training corpus. | we use a general, small size grammar (around 50 rules) and a lexicon which is automatically derived from the input ontologies. | contrasting |
train_12008 | The PPDB data outperforms both French literature and MSR models if we look at all possible sentence pairs from the test data (the column labeled "all" in the table). | when we consider whether any pair from a set of 4 translations can be translated, the PPDB models do not do as well. | contrasting |
train_12009 | The retrieval algorithm used by commercial TM systems is typically not disclosed (Koehn and Senellart, 2010;Simard and Fujita, 2012;Whyman and Somers, 1999). | the best-performing method used in current systems is widely believed to be based on edit distance (Baldwin and Tanaka, 2000;Simard and Fujita, 2012;Whyman and Somers, 1999;Koehn and Senellart, 2010;Christensen and Schjoldager, 2010;Mandreoli et al., 2006;He et al., 2010). | contrasting |
train_12010 | As can be seen from the numerator of Equation 5, NGP is weighting the match of all n-grams as uniformly important. | it is not the case that each n-gram is of equal value to the translator. | contrasting |
train_12011 | For example, Rule (4) correctly detects the social event in sentence (5), since Semafor correctly parses the input. | semafor does not correctly parse the input sentence (1): it correctly identifies the statement frame and its Message frame element, but it fails to find the speaker. | contrasting |
train_12012 | There have been recent efforts to extract networks from text (Elson et al., 2010;He et al., 2013). | these efforts extract a different type of network: a network of only bi-directional links, where the links are triggered by quotation marks. | contrasting |
train_12013 | (2013) will extract an interaction link between Emma and Harriet in the following sentence. | their system will not detect any interaction links in the other examples mentioned in this paper. | contrasting |
train_12014 | There is a small body of recent research on automatically learning probabilistic models of scripts from large corpora of raw text (Manshadi et al., 2008;Chambers and Jurafsky, 2008;Chambers and Jurafsky, 2009;Jans et al., 2012). | this work uses a very impoverished representation of events that only includes a verb and a single dependent entity. | contrasting |
train_12015 | We claim that this is the most natural adaptation of the cloze evaluation to the multi-argument event setting. | other types of inferences would be useful as well for question-answering. | contrasting |
train_12016 | Normalisation helps with Sum throughout, with little difference in performance between Norm and Norm10, but with a slight decrease when CNorm is used. | only CNorm improves the ranking of Prod-based vectors. | contrasting |
train_12017 | The two generation algorithms presented above rely on a completed initial parsing step. | given that the complexity of the parsing stage is O(2^N · K), this may not be achievable in practice. | contrasting |
train_12018 | They can be different usages, both having a subjective meaning. | if two instances are labeled having opposing labels, we do not want them to be in the same cluster. | contrasting |
train_12019 | Customer reviews of books, hotels and other products are widely perceived as an important reason for the success of e-commerce sites such as amazon.com or tripadvisor.com. | customer confidence in such reviews is often misplaced, due to the growth of the so-called sock puppetry phenomenon: authors / hoteliers writing glowing reviews of their own works / hotels (and occasionally also negative reviews of the competitors). | contrasting |
train_12020 | The results suggest that both methods achieve accuracy well above the baseline. | the models trained using Learning from Crowd classes not only achieved the highest accuracy, but also outperformed the thresholds for precision and recall in detecting deceptive reviews (Table 4), while the models trained with the majority voting classes showed a very high precision, but at the expense of the recall, which was lower than the baseline (Table 3). | contrasting |
train_12021 | Our system follows much previous work by counting PPs that accompany the verb among its complements, even though they are not obligatory (so-called 'adjuncts'), because PP adjuncts are excellent clues to a verb's semantics (Sun et al., 2008). | nominal and clausal adjuncts do not count as verbal complements. | contrasting |
train_12022 | Approaching temporal link labelling as a classification task has already been explored in several works. | choosing the right feature vectors to build the classification model is still an open issue, especially for event-event classification, whose accuracy is still under 50%. | contrasting |
train_12023 | We train a classification model for each category of entity pair, as suggested in several previous works (Mani et al., 2006;Chambers, 2013). | because there are very few examples of timex-timex pairs in the training corpus, it is not possible to train the classification model for these particular pairs. | contrasting |
train_12024 | (1999) indeed shows that the accuracy of a context-sensitive word prediction system is related to how much training material is provided. | once most of the frequent combinations are covered, it takes more and more training material to improve the results a little bit. | contrasting |
train_12025 | Please note that CKS assumes that the user always accepts a prediction immediately when it is available, which might not always be the case in reality. | current popular smartphone applications suggest this approach might be too strict. | contrasting |
train_12026 | Nondeterministic variants of LR(k) parsing, for use in natural language processing, have been proposed as well, some using tabulation to ensure polynomial running time in the length of the input string (Tomita, 1988;Billot and Lang, 1989). | nondeterministic LR(k) parsing is potentially as expensive as, and possibly more expensive than, traditional tabular parsing algorithms such as CKY parsing (Younger, 1967;Aho and Ullman, 1972), as shown by for example (Shann, 1991); greater values of k make matters worse (Lankhorst, 1991). | contrasting |
train_12027 | These benchmark datasets have supported a diverse and influential line of research into semantic parsing learning algorithms for sophisticated semantic constructions, with continuing advances in accuracy. | the focus on these datasets leads to a natural question: do other natural datasets have similar syntax and semantics, and if not, can existing algorithms handle the variability in syntax and semantics? | contrasting |
train_12028 | Furthermore, they require the grammar to be binarized and linear, which means that they only support linear context-free rewriting systems (LCFRS). | our algorithm naturally supports the full power of PMCFG. | contrasting |
train_12029 | the weight of the production is the probability to choose this production when the result category is fixed. | in this case the probabilities for all productions with the same result category sum to one: the parsing algorithm does not depend on the probabilistic interpretation of the weights, so the same algorithm can be used with any other kind of weights. | contrasting |
train_12030 | The outside weight w_o for the new active item remains the same. | we must update the inside weight since we have replaced the d-th argument in B with the newly generated category B_d. | contrasting |
train_12031 | Since different sentiments may be expressed toward different entities in a document, fine-grained analysis may be more informative for applications. | fine-grained sentiment analysis remains a challenging task for NLP systems. | contrasting |
train_12032 | The goal of such work is to determine one overall polarity of an expression or sentence. | our framework commits to a holder having sentiments toward various events and entities in the sentence, possibly of different polarities. | contrasting |
train_12033 | This is reasonable in scenarios where available training data is fixed over long periods of time. | this approach (Figure 1: Context when translating an input sentence (bold) with simulated post-editing) | contrasting |
train_12034 | Unfortunately, in general, Bayesian techniques are computationally difficult to work with. | hierarchical Pitman-Yor process language models (HPYPLMs) are convenient in this regard since (1) inference can be carried out efficiently in a convenient collapsed representation (the "Chinese restaurant franchise") and (2) the posterior predictive distribution from a single sample provides a high quality language model. | contrasting |
train_12035 | It is expected that users with high numbers of followers are also popular in the real world, being well-known artists, politicians, brands and so on. | non-popular entities, the majority in the social network, can also gain a great number of followers, by exploiting, for example, a follow-back strategy. | contrasting |
train_12036 | Avoiding mentioning (a_7) or replying (a_8) to others may not affect (on average) an impact score positively or negatively; however, accounts that do many unique @-mentions are distributed around a clearly higher impact score. | users that overdo @-replies are distributed below the mean impact score. | contrasting |
train_12037 | Vector-based models are typically used in the literature for representing documents both in monolingual and cross-lingual settings (Manning et al., 2008). | because of the large size of the vocabulary, having each term as a component of the vector makes the document representation very sparse. | contrasting |
train_12038 | This is not the case in the CosSim_BN model which achieves higher results using BabelNet as a statistical dictionary, especially on the Spanish news corpus. | however, the linear projection methods as well as Full MT obtained the highest results on the English corpus. | contrasting |
train_12039 | Phrase-based models (Koehn et al., 2003;Och and Ney, 2004;Xiong et al., 2006) have been strong in local translation and reordering. | phrase-based models cannot effectively conduct long-distance reordering because they are based purely on statistics of syntax-independent phrases. | contrasting |
train_12040 | We use word alignment results for tree structure projection. | accurate word alignment is challenging when handling language pairs in which long-distance reordering is needed, and the alignment noise propagates to the tree projection. | contrasting |
train_12041 | A reasonable way out of this problem would be to save the mean and standard deviation parameters used for data standardization and use them to project the composed phrase vector outputs back to the original vector space. | enetLex obtained a stable good performance in SVD space, with the best results achieved with dimensions between 200 and 300. | contrasting |
train_12042 | At one end of the spectrum, the WMT human evaluation collects large numbers of quick judgments (approximately 3.5 minutes per screen, or 20 seconds per label) (Bojar et al., 2013). | HMEANT (Lo and Wu, 2011) uses a more time-consuming fine-grained semantic-role labeling analysis at a rate of approximately 10 sentences per hour (Birch et al., 2013). | contrasting |
train_12043 | Several works (e.g., Ratinov and Roth, 2009;Cohen and Sarawagi, 2004) have shown that injecting dictionary matches as features in a sequence tagger results in significant gains in NER performance. | building these dictionaries requires a huge amount of human effort and it is often difficult to get good coverage for many named entity types. | contrasting |
train_12044 | Further analysis reveals that 32 (63%) target languages for ENC, 25 (49%) target languages for EVPC, and only 5 (10%) target languages for GNC have a correlation of r ≥ 0.1 with goldstandard compositionality judgements. | 8 (16%) target languages for ENC, 2 (4%) target languages for EVPC, and no target languages for GNC have a correlation of r ≤ −0.1. | contrasting |
train_12045 | The results for string similarity (CS_string: r = 0.385) are similar to those for CS_L2N. | as with the ENC dataset, when we combine string similarity and distributional similarity (CS_all), the results improve, and we achieve the state-of-the-art for the dataset. | contrasting |
train_12046 | Word embeddings resulting from neural language models have been shown to be a great asset for a large variety of NLP tasks. | such architecture might be difficult and time-consuming to train. | contrasting |
train_12047 | It can be computed in linear time with the Forward algorithm, which derives a recursion similar to the Viterbi algorithm (see Rabiner (1989)). | we can thus maximize the log-likelihood over all the training pairs and find the best tag path which minimizes the sentence score (6): to classical CRF, all parameters θ are trained in an end-to-end manner, by backpropagation through the Forward recursion, following Collobert et al. | contrasting |
train_12048 | The weights for the scope assignment and disambiguation task are learned in a cascaded way. | to the joint approach, the hasScope(m, s) predicate is observed during disambiguation. | contrasting |
train_12049 | Note that our definition of a topic, namely the top N features from a distributional vector, does not correspond to a topic generated by a latent variable model, because it does not have a probability distribution over words. | the TC measures we adopt do not make use of such a probability distribution except for choosing the top N words from a topic, which are then treated as an unordered set for the pairwise operations. | contrasting |
train_12050 | These idiosyncratic contexts are not mutually informative and cause a sizeable decrease in TC. | removing owl from creature does not decrease the coherence nearly as much. | contrasting |
train_12051 | In the context of hypernym detection, they could test a system's ability to find one or two good-quality hypernyms quickly from a set of candidates. | these measures are less appropriate for testing whether a system can, in general, rank hypernyms over other relations. | contrasting |
train_12052 | (2013) similarly note that generated REs should avoid information that is perceptually expensive to obtain. | these results focus on content selection rather than surface realization. | contrasting |
train_12053 | We set up the prediction task as in the previous section: Given an anchor/landmark pair, our system must decide what direction and ESTABLISH status to assign it. | here we evaluate the system as a classifier. | contrasting |
train_12054 | On the one hand, emotions are coded by affective modalities (Scherer, 2005), among which sadness, disgust, enjoyment, fear, surprise and anger are the most usual (Ekman, 1999;Cowie and Cornelius, 2003). | an ordinal classification in a multidimensional space is considered. | contrasting |
train_12055 | Analogies are considered to be one of the core concepts of human cognition and communication, and are very efficient at encoding complex information in a natural fashion. | computational approaches towards large-scale analysis of the semantics of analogies are hampered by the lack of suitable corpora with real-life examples of analogies. | contrasting |
train_12056 | For example, an analogy repository containing such domain knowledge has to provide information on which attributes of source and target are generally considered comparable. | to Linked Open Data or typical ontologies, such analogical knowledge is consensual, i.e. | contrasting |
train_12057 | For example, (Bollegala, Matsuo, & Ishizuka, 2009), (Nakov & Hearst, 2008), or (Turney, 2008) approach this challenge by using pattern-based Web search and subsequent analysis of the resulting snippets. | to these approaches, we do not focus on word pair similarity, but given one entity, we aim at finding other entities which are seen as analogous in a specific domain (in our case analogies between locations and places). | contrasting |
train_12058 | These patterns were selected manually based on analysis of sample Web data by three experts. | to other approaches relying on extraction patterns, e.g. | contrasting |
train_12059 | Under certain circumstances, crowd-sourcing can be very effective for handling large tasks requiring human intelligence without relying on expensive experts. | to using expert annotators, crowd-workers are readily and cheaply available even for ad-hoc tasks. | contrasting |
train_12060 | This is due to some of its necessary assumptions not holding true (see section 6.4). | our subsequence-based model achieves a higher informedness score of 0.85 and 0.87 in the best cases. | contrasting |
train_12061 | First, we compared our baseline, lexical unigrams with SVM to using lexical n-grams to test whether using n-grams actually contributed to the quality, and found the difference to be significant (sign-test p<0.024). | for SVM-based classification, the higher reported performance for also including POS features in addition to lexical n-grams could not be shown to be significant (p>0.4). | contrasting |
train_12062 | In this paper we address the problem of lexicon construction by constructing a semi-supervised system that accepts concrete inflection tables as input, generalizes inflection paradigms from the tables provided, and subsequently allows the use of unannotated corpora to expand the inflection tables and the automatically generated paradigms. | to many machine learning approaches that address the problem of paradigm extraction, the current method is intended to produce human-readable output of its generalizations. | contrasting |
train_12063 | In the general case, all patterns are tried for a given candidate word. | we usually have access to additional information about the candidate words-e.g., that they are in the base form of a certain part of speech-which we use to improve the results by only matching the relevant patterns. | contrasting |
train_12064 | The main source of error (334 out of 1000) is confusion with p akribi (accuracy), which has no plural. | it is on semantic grounds that the paradigm has no plural; a native Swedish speaker would pluralize akribi like akademi (disregarding the fact that akribi is defective). | contrasting |
train_12065 | Hausboot, "house boat") or only rarely have been seen in the training data. | most compounds consist of two (or more) simple words that occur more frequently in the data than the compound as a whole (e.g. | contrasting |
train_12066 | This approach can even produce new compounds unseen in the training data, provided that the modifiers occurred in modifier position of a compound and heads occurred as heads or even as simple words with the same inflectional endings. | as former compound modifiers were left with their filler letters (cf. | contrasting |
train_12067 | (2012) we re-implemented the approach of Stymne and Cancedda (2011), combined it with inflection prediction and applied it to a translation task. | compound merging was restricted to a list of compounds and parts. | contrasting |
train_12068 | It can be seen that using more features (SC→T→ST) is favourable in terms of precision and overall accuracy and the positive impact of using source language features is clearer when only reduced feature sets are used (TR vs. STR). | these accuracies only somewhat correlate with SMT performance: while being trained and tested on clean, fluent German language, the models will later be applied to disfluent SMT output and might thus lead to different results there. | contrasting |
train_12069 | With accuracies of over 97%, POS-tagging of WSJ can be treated as a solved problem (Manning, 2011). | performance is still well below satisfactory for many other languages and domains (Petrov et al., 2012;Christodoulopoulos et al., 2010). | contrasting |
train_12070 | The idea is compelling: on the one hand, a list of lexicons is often available for special domains, such as bio-informatics; on the other hand, compiling a lexicon of word-tag pairs appears to be less time-consuming than annotating full sentences. | success in type-supervised POS-tagging turns out to depend on several subtle factors. | contrasting |
train_12071 | Results show that BLEU score improves on a test sub-set containing only negative sentences when extra negative data is appended to the original training data and the language model is enriched as well. | system performance deteriorates on both the original test set and on positive sentences. | contrasting |
train_12072 | Also in the case of a sentence containing a subordinate clause, dependency parsing is able to correctly capture the latter as part of the scope given that the relative pronoun depends directly on the event of the main clause. | recursion from the negated event excludes coordinate clauses that are not considered part of the scope, given that the event is a dependant of the connective. | contrasting |
train_12073 | Only the French setup yielded statistically significant improvement (p < .01). | if we concatenate the outputs of all languages, the improvement in translation of references with BLEU score averaged over all systems becomes statistically significant (p = .03), improving from 16.8 for the baseline system to 17.3 for the adapted MT outputs. | contrasting |
train_12074 | Early user simulation techniques are based on N-grams (Eckert et al., 1997;Levin and Pieraccini, 2000;Georgila et al., 2005;Georgila et al., 2006), ensuring that simulator responses to a machine utterance are sensible locally. | they do not enforce user consistency throughout the dialog. | contrasting |
train_12075 | the probability of a set of waypoints given a semantic unit. | there are two problems with deriving a generative model directly over W. | contrasting |
train_12076 | The overall accuracy is slightly better when snippets from the END of the summary are chosen compared to those from the START. | with START snippets, better prediction of different length summaries was obtained, whereas the accuracy in the END case comes mainly from correct prediction of 50 and 400 word summaries. | contrasting |
train_12077 | When pragmatic information is used, 5.6 relevance variables are used on average (per dialogue snippet). | when pragmatic information is not used, this number rises to 6.3. | contrasting |
train_12078 | For example, in Figure 1 banana and redberry are not directly connected but they can be reached via pear or raspberry. | by considering mediate relationships it becomes more difficult to determine the most appropriate category for each food item since most food items are connected to food items of different categories (in Figure 1, there are not only edges between banana and other types of fruits but there is also some edge to some sweet, i.e. | contrasting |
train_12079 | Sentence compression and sentence simplification also consider deleting words from input sentences. | these tasks have different goals. | contrasting |
train_12080 | This is because discarding "not" would flip the sentence's meaning; discarding "the" would lose a necessary determiner before a noun. | discarding "just" would hurt neither fluency nor meaning. | contrasting |
train_12081 | A linear combination of a language model score and our proposed measure based on analysis of alignments best captures redundancy. | as our experimental results suggest, it is necessary both to use alignments in translation outputs, and to use them in a good way. | contrasting |
train_12082 | Such systems are relatively easy to set up and experienced many successes: the TEES system (Björne et al., 2009; Björne et al., 2012; Björne and Salakoski, 2013) won the BioNLP GE task in 2009 and ranked 2nd in 2013, whereas the EVEX system won in 2013 (Van Landeghem et al., 2011; Hakala et al., 2013). | all these methods suffer from error cascading. | contrasting |
train_12083 | Words that tend to be used in the summaries, characterized by high KL(A ∥ G) scores, include locations (York, NJ, Iraq), people's names and titles (Bush, Sen, John), some abbreviations (pres, corp, dept) and verbs of conflict (contends, dies). | from KL(G ∥ A), we can see that it is unlikely for writers to include courtesy titles (Mr, Ms, Jr.) and relative time reference in summaries. | contrasting |
train_12084 | Systems such as Siddharthan (2011) use transformation rules that encode morphological changes as well as deletions, re-orderings, substitutions and sentence splitting, and are well suited to handle the voice conversion example above. | hand-crafted systems are limited in scope to syntactic simplification. | contrasting |
train_12085 | While much work in summarisation has concentrated on multi-document summarisation, where the main challenge is the detection of redundant information, the summariser presented here is a single-document summariser. | researchers have been attracted by deeper, more symbolic and thus more explanatory summarisation models that use semantic representations of some form (Radev and McKeown, 1998) and often rely on explicit discourse modelling (Lehnert, 1981;Kintsch and van Dijk, 1978;Cohen, 1984). | contrasting |
train_12086 | In natural language processing (NLP) annotation projects, we use inter-annotator agreement measures and annotation guidelines to ensure consistent annotations. | annotation guidelines often make linguistically debatable and even somewhat arbitrary decisions, and inter-annotator agreement is often less than perfect. | contrasting |
train_12087 | for English to (Marcus et al., 1993) or words ending in -ing (Manning, 2011). | standardized label sets have practical advantages in NLP (Zeman and Resnik, 2008;Zeman, 2010;Das and Petrov, 2011;Petrov et al., 2012;McDonald et al., 2013). | contrasting |
train_12088 | If an error occurred and the predicted tag is in the same class as the gold tag, a loss σ occurred, otherwise it counts as full cost. | to our approach, they let the learner focus on the more difficult cases by incurring a bigger loss when the predicted POS tag is in a different category. | contrasting |
train_12089 | In fact, Residual Networks can be viewed as a special case of Highway Networks where both the transform and carry gates are substituted by the identity mapping function, y = H(x) + x, thereby forming a hard-wired shortcut connection x. Arguably, Equation (3) can be considered as a form of residuality with o_k working as the residual function and u_k the shortcut connection. | as discussed in (Srivastava et al., 2015b), in contrast to the hard-wired skip connection in Residual Networks, one of the advantages of Highway Networks is the adaptive gating mechanism, capable of learning to dynamically control the information flow based on the current input. | contrasting |
train_12090 | Notice that the highest average accuracy of the original MemN2N model on the 10k dataset is 95.8. | it was attained by a model with layer-wise weight tying, not adjacent weight tying as adopted in this work, and, more importantly, a much larger embedding size d = 100 (therefore not shown in Table 1). | contrasting |
train_12091 | Recurrent neural networks (RNNs) process input text sequentially and model the conditional transition between word tokens. | the advantages of recursive networks include that they explicitly model the compositionality and the recursive structure of natural language. | contrasting |
train_12092 | Unlike sequential models, recursive neural networks compose word phrases over syntactic tree structure and have shown improved performance in sentiment analysis (Socher et al., 2013). | its dependence on a syntactic tree architecture limits practical NLP applications. | contrasting |
train_12093 | This attention mechanism is robust as it globally normalizes the attention score m with softmax to obtain the weights α. | it does not consider the tree structure when producing the final representation h_tree. | contrasting |
train_12094 | and this equation is similar to the global attention. | now each non-leaf node attentively collects its own and its children's representations and passes them towards the root, which finally constructs the attentively blended tree representation. | contrasting |
train_12095 | Pooling is unweighted selection: it outputs the selected values as is. | attention can be thought of as weighted selection: some input elements are highly weighted, others receive weights close to zero and are thereby effectively not selected. | contrasting |
train_12096 | Previous work on attention and pooling has only considered a small number of the possible configurations along those dimensions of attention. | the internal/external and un/weighted distinctions can potentially impact performance because external resources add information that can be critical for good performance and because weighting increases the flexibility and expressivity of neural network models. | contrasting |
train_12097 | Perhaps the combination of the external resource and the more indirect representation of the entire sentence produced by the RNN is difficult. | hedge cue patterns identified by convolutional filters of the CNN can be evaluated well based on external attention; e.g., if there is strong external-attention evidence for uncertainty, then the effect of a hedge cue pattern (hypothesized by a convolutional filter) on the final decision can be boosted. | contrasting |
train_12098 | We use the term sequence-agnostic for this. | we propose to investigate sequence-preserving attention as presented in Section 3.3. | contrasting |
train_12099 | Our motivation for introducing sequence-preserving attention was that the semantic meaning of a sentence can vary depending on where an uncertainty cue occurs. (Table 5, comparison of our best model with the state of the art, wiki / bio: SVM (Georgescul, 2010) 62.01 / 78.64; HMM (Li et al., 2014) 63.97 / 80.15; CRF + ling (Tang et al., 2010) 55.05 / 86.79; our CNN with external attention 67.52 / 85.57.) | the core of uncertainty detection is keyword and keyphrase detection; so, the overall sentence structure might be less important for this task. | contrasting |
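
The table above previews a sentence-pair dataset in which each row pairs two spans from a scientific paper and labels the discourse relation holding between them (one of four classes; every example shown here is labeled `contrasting`, and the connective that originally signaled the relation appears to be stripped from the start of `sentence2`). Below is a minimal sketch of loading and inspecting data with this schema using the Hugging Face `datasets` library; the dataset ID is a hypothetical placeholder, so point it at wherever the data is actually hosted.

```python
# Minimal sketch: load and inspect the sentence-pair data previewed above.
# The dataset ID below is hypothetical, not the real location of this data.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("your-namespace/contrasting-pairs", split="train")  # hypothetical ID

# Each row mirrors the table above: an id, two sentence spans, and a label
# drawn from four discourse-relation classes.
example = ds[0]
print(example["id"], "->", example["label"])
print("sentence1:", example["sentence1"][:80])
print("sentence2:", example["sentence2"][:80])

# Check the class balance across the four label values.
print(Counter(ds["label"]))
```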