id (string, 7–12 chars) | sentence1 (string, 6–1.27k chars) | sentence2 (string, 6–926 chars) | label (string, 4 classes) |
---|---|---|---|
train_4300 | The distance of the Sami languages from the Finnic (Estonian, Finnish) and Ugric (Hungarian) languages is much larger than the distances within Romance and within Slavic. | even for Northern Sami, the worst performing language, adding an additional language is still always beneficial compared to the monolingual baseline. | contrasting |
train_4301 | Our intuition is that ciphering will disrupt transfer of morphology. | the regularization effect we observed with Arabic should still be effective. | contrasting |
train_4302 | For 200 samples and ciphering, there is no clear difference in performance between Portuguese and Arabic. | for 50 samples and ciphering, Portuguese (0.09) seems to perform better than Arabic (0.02) in accuracy. | contrasting |
train_4303 | This may be explained by the soft-attention mechanism encouraging the encoder to encode positional information in the input representations, which may help it to predict better attention scores, and to avoid collisions when computing the weighted sum of representations for the context vector. | our hard-attention model has other means of obtaining the position information in the decoder using the step actions, and for that reason it does not encode it as strongly in the representations of the inputs. | contrasting |
train_4304 | If the model were sensitive to reduplication, we would expect to see morphological variants of the query word among its nearest neighbors. | from Table 12, this is not so. | contrasting |
train_4305 | This fact additionally confirms the idea described in Section 2.2.2 that the independent optimization over parameters W and C may decrease the performance. | the target performance measure of embedding models is the correlation between semantic similarity and human assessment (Section 4.2). | contrasting |
train_4306 | The neighbourhood of "he" contains less semantically similar words in case of our model. | it filters out irrelevant words, such as "promptly" and "dumbledore". | contrasting |
train_4307 | Even syntactic formalisms are moving toward graphs (de Marneffe et al., 2014). | full semantic graphs can be expensive to annotate, and efforts are fragmented across competing semantic theories, leading to a limited number of annotations in any one formalism. | contrasting |
train_4308 | (Pennington et al., 2014) further utilizes matrix factorization on word affinity matrix to learn word representations. | these models merely arrange only one vector for each word, regardless of the fact that many words have multiple senses. | contrasting |
train_4309 | The SSA Model replaces the target word embedding with the aggregated sememe embeddings to encode sememe information into word representation learning. | each word in SSA model still has only one single representation in different contexts, which cannot deal with polysemy of most words. | contrasting |
train_4310 | Whereas for the mean rank, CBOW gets the worst results, which indicates the performance of CBOW is unstable. | although the accuracy of SAT is a bit lower than that of CBOW, SAT seldom gives an outrageous prediction. | contrasting |
train_4311 | Previous work has modeled the compositionality of words by creating characterlevel models of meaning, reducing problems of sparsity for rare words. | in many writing systems compositionality has an effect even on the character-level: the meaning of a character is derived by the sum of its parts. | contrasting |
train_4312 | The LOOKUP model learns embedding that captures the semantics of each character symbol without sharing information with each other. | the proposed VISUAL model directly learns embedding from visual information, which naturally shares information between visually similar characters. | contrasting |
train_4313 | Several techniques for reducing the rare words effects have been introduced in the literature, including spelling expansion (Habash, 2008), dictionary term expansion (Habash, 2008), proper name transliteration (Daumé and Jagarlamudi, 2011), treating words as a sequence of characters (Luong and Manning, 2016), subword units (Sennrich et al., 2015), and reading text as bytes (Gillick et al., 2015). | most of these techniques still have no mechanism for handling low frequency characters, which are the target of this work. | contrasting |
train_4314 | Semantic role labeling (SRL) is one of the fundamental tasks in natural language processing because of its important role in information extraction (Bastianelli et al., 2013), statistical machine translation (Aziz et al., 2016;Xiong et al., 2012), and so on. | state-of-the-art performance of Chinese SRL is still far from satisfactory. | contrasting |
train_4315 | Most word embedding models define a single vector for each word type. | a fundamental flaw in this design is their inability to distinguish between different meanings and abstractions of the same word. | contrasting |
train_4316 | The context used for the attention component is simply the hidden state from the previous timestep. | since we use a bi-LSTM, the model essentially has two RNNs, and accordingly we have two context vectors, and associated attentions. | contrasting |
train_4317 | The input representations are enriched using syntactic context information, POS, WordNet and VerbNet (Kipper et al., 2008) information and the distance of the head word from the PP is explicitly encoded in composition architecture. | we do not use syntactic context, VerbNet and distance information, and do not explicitly encode POS information. | contrasting |
train_4318 | Related to the idea of concept embeddings is Rothe and Schütze (2015) who estimated WordNet synset representations, given pre-trained type-level word embeddings. | our work focuses on estimating token-level word embeddings as context sensitive distributions of concept embeddings. | contrasting |
train_4319 | The proposed architecture introduces 4 additional parameter matrices that are optimised during training: →Wm, →Wq, ←Wm, and ←Wq. | the computational complexity and resource requirements of this model during sequence labeling are equal to the baseline from Section 2, since the language modeling components are ignored during testing and these additional weight matrices are not used. | contrasting |
train_4320 | (2015) took SAT geometry questions as their benchmark. | the nature of SAT geometry questions restricts the resulting formula's complexity. | contrasting |
train_4321 | Intuitively, the dependency path based idea can be introduced into the temporal relation classification task. | around 64% E-E, E-T links in TimeBank-Dense are with the ends in two neighboring sentences, called cross-sentence links. | contrasting |
train_4322 | Over the rules used on the 1-best result, more than 30% are non-terminal rules, showing that the induced rules play an important role. | 30% are glue rules. | contrasting |
train_4323 | As shown by Durrett and Klein (2013), lexical features implicitly model some linguistic phenomena, which were previously modeled by heuristic features, but at a finer level of granularity. | we question whether the knowledge that is mainly captured by lexical features can be generalized to other domains. | contrasting |
train_4324 | Overfitting to the training dataset is a problem that cannot be completely avoided. | there is a notable overlap between the CoNLL training, development and test sets that encourages overfitting. | contrasting |
train_4325 | Durrett and Klein (2013) use exact surface forms as lexical features. | when word embeddings are used instead of surface forms, the use of lexical features is even more beneficial. | contrasting |
train_4326 | All corresponding ratios are lower than those of deep-coref in Table 5. | the ratios are surprisingly high for a system that does not use the training data. | contrasting |
train_4327 | Table 1 shows the mean value for each group and the t-statistic for each of the features. | to some previous work (Turner, 1999;Geurts et al., 2004;Spek et al., 2009), we find no between-group differences in raw item count. | contrasting |
train_4328 | rejecting 10% of candidates; and Area under a model's rejection curve (AUC) (Fig 3). | AUC is influenced by the base PCC of a model, making it difficult to compare the rejection performance. | contrasting |
train_4329 | Figure 2 reveals increasing disparity in LID accuracy for developing countries by the two baseline models. | EQUILID outperforms both systems at all levels of HDI and provides 30% more observations for countries with the lowest development levels. | contrasting |
train_4330 | Our proposed extrinsic evaluation approach for compound splitting is language-independent as we do not use any language-specific parameters. | in the present work we test it on the most prominent closed-compounding language, German (Ziering and van der Plas, 2014). | contrasting |
train_4331 | To address these issues and also to understand the range of possible interactions between humans and objects, the human-object interaction (HOI) detection task has been proposed, in which all possible interactions between a human and a given object have to be identified (Le et al., 2014;Chao et al., 2015;Lu et al., 2016). | both action classification and HOI detection do not consider the ambiguity that arises when verbs are used as labels, e.g., the verb play has multiple meanings in different contexts. | contrasting |
train_4332 | The HICO dataset also has multiple annotations per object and it incorporates the information that certain interactions such as riding a bike and holding a bike often co-occur. | it fails to include annotations to distinguish between multiple senses of a verb. | contrasting |
train_4333 | This is the first dataset that aims to annotate all visual senses of a verb. | the total number of images annotated and number of images for some senses is relatively small, which makes it difficult to use this dataset to train models. | contrasting |
train_4334 | Linguistic resources therefore have to play a key role if we are to make rapid progress in these language and vision tasks. | as we have shown in this paper, only a few of the existing datasets for action recognition and related tasks are based on linguistic resources (Chao et al., 2015;Gella et al., 2016;Yatskar et al., 2016). | contrasting |
train_4335 | Not surprisingly, many industry players are investing heavily in machine learning and AI to create new products and services (MIT Technology Review, 2016). | translating research into a successful product has its own challenges. | contrasting |
train_4336 | We consider two sentences paraphrases if they would have equivalent interpretations when represented as a structured query, i.e., "a pair of units of text deemed to be interchangeable" (Dras, 1999). We considered the above two questions as paraphrases since they are both requests for a list of classes, explicit and implicit, respectively, although the second one is a polar question and the first one is not. | Prompt: Which is easier out of EECS 378 and EECS 280? | contrasting |
train_4337 | We further confirmed this by calculating PINC between the two paraphrases provided by each user, which produced scores similar to comparing with the prompt. | the One Paraphrase condition did have lower grammaticality, emphasizing the value of evaluating and filtering out workers who write ungrammatical paraphrases. | contrasting |
train_4338 | This fits our intuition that the prompt is a form of priming. | correctness decreases along the chain, suggesting the need to check paraphrases against the original sentence during the overall process, possibly using other workers as described in § 2.1. | contrasting |
train_4339 | The state shown is generated by the first six transitions of both systems. | the transition systems employed in state-of-the-art dependency parsers usually define very local transitions. | contrasting |
train_4340 | Parsers employing traditional transition systems would usually incorporate more features about the context in the transition decision, or employ beam search during parsing (Chen and Manning, 2014;Andor et al., 2016). | inspired by graph-based parsers, we propose arc-swift, which defines non-local transitions as shown in Figure 2. | contrasting |
train_4341 | A caveat is that the worst-case time complexity of arc-swift is O(n²) instead of O(n), which existing transition-based parsers enjoy. | in practice the runtime is nearly linear. (This is easy to show because in arc-eager, all Reduce transitions can be viewed as preparing for a later LArc or RArc transition.) | contrasting |
train_4342 | This transition system also introduces spurious ambiguity where multiple transition sequences could lead to the same correct parse, which necessitates easy-first training to achieve a more noticeable improvement over arcstandard. | arc-swift can be easily implemented given the parser state alone, and does not give rise to spurious ambiguity. | contrasting |
train_4343 | For a comprehensive study of transition systems for dependency parsing, we refer the reader to (Bohnet et al., 2016), which proposed a generalized framework that could derive all of the traditional transition systems we described by configuring the size of the active token set and the maximum arc length, among other control parameters. | this framework does not cover arc-swift in its original form, as the authors limit each of their transitions to reduce at most one token from the active token set (the buffer). | contrasting |
train_4344 | Generative models defining joint distributions over parse trees and sentences are good theoretical models for interpreting natural language data, and appealing tools for tasks such as parsing, grammar induction and language modeling (Collins, 1999;Henderson, 2003;Titov and Henderson, 2007;Petrov and Klein, 2007;Dyer et al., 2016). | they often impose strong independence assumptions which restrict the use of arbitrary features for effective disambiguation. | contrasting |
train_4345 | Parsing In parsing, we are interested in the parse tree that maximizes the posterior p(a|x) (or the joint p(a, x)). | the decoder alone does not have a bottom-up recognition mechanism for computing the posterior. | contrasting |
train_4346 | The idea of using neural networks is the basis of the state-of-the-art attention-based approach to machine translation (Bahdanau et al., 2015;Luong et al., 2015). | that approach is not based on the principle of an explicit and separate lexicon model. | contrasting |
train_4347 | NMT models usually do not make explicit use of syntactic information about the languages at hand. | a large body of work was dedicated to syntax-based SMT (Williams et al., 2016). | contrasting |
train_4348 | It can accurately predict some compositional semantic effects and handle negation. | since it was trained on movie reviews, it is likely to be missing labelled data for some common phrases in our blogs. | contrasting |
train_4349 | We find it a welcome result that our semi-supervised methods yield patterns that correspond to the A&R classes, thus validating our suspicion that first-person sentences furnish a simplifying test ground for discovering functional patterns in the wild. | many patterns are not covered by A&R's general classes, see Table 6. | contrasting |
train_4350 | They then apply label propagation to spread polarity from sentences to events. | the triples they learn do not focus on first-person experiencers. | contrasting |
train_4351 | A fundamental advantage of neural models for NLP is their ability to learn representations from scratch. | in practice this often means ignoring existing external linguistic resources, e.g., Word-Net or domain specific ontologies such as the Unified Medical Language System (UMLS). | contrasting |
train_4352 | Searching directly in the generative models yields results that are partly surprising, as it reveals the presence of parses which the generative models prefer, but which lead to lower performance than the candidates proposed by the base model. | the results are also unsurprising in the sense that explicitly combining scores allows the reranking setup to achieve better performance than implicit combination, which uses only the scores of a single model. | contrasting |
train_4353 | This is a very loose coupling, however. | to these methods, our work goes a step further, fully coupling the entire sequences of hidden states of an RNN. | contrasting |
train_4354 | Our work is similar to (Finkel et al., 2005), which augments a CRF with long-distance constraints. | our work differs in that we extend an RNN and use Newton-Krylov (Knoll and Keyes, 2004) instead of Gibbs Sampling. | contrasting |
train_4355 | For instance, as shown in Figure 1, the editor on idebate.org -a Wikipedia-style website for gathering pro and con arguments on controversial issues, utilizes arguments based on study, factual evidence, and expert opinion to support the anti-gun claim "legally owned guns are frequently stolen and used by criminals". | it would require substantial human effort to collect information from diverse resources to support argument construction. | contrasting |
train_4356 | (2015) investigates the detection of relevant factual evidence from Wikipedia articles. | it is unclear whether their method can perform well on documents of different genres (e.g. | contrasting |
train_4357 | Our task is related to caption generation, which has been studied extensively (e.g., Pedersoli et al., 2016;Carrara et al., 2016;Chen et al., 2016) with MSCOCO (Chen et al., 2015b) and Flickr30K (Young et al., 2014;Plummer et al., 2015). | to caption generation, our task does not require approximate metrics like BLEU. | contrasting |
train_4358 | The inter-sentence classifier exhibits the same trend: GS features do improve the performance. | adding cTAKES features degrades it slightly (-0.013). | contrasting |
train_4359 | (2017) tested both CNN and LSTM models and found CNN superior to LSTM. | this work addressed intra-sentence relations only. | contrasting |
train_4360 | From a global perspective, the work we have presented in this article shows that in accordance with a more general trend, our neural model for extracting containment relations clearly outperforms classical approaches based on feature engineering. | it also shows that incorporating classical features in such a model is a way to improve it, even if all kinds of features do not contribute equally to such improvement. | contrasting |
train_4361 | Discourse segmentation is a crucial step in building end-to-end discourse parsers. | discourse segmenters only exist for a few languages and domains. | contrasting |
train_4362 | Most recent works on RST discourse parsing focuses on the task of tree building, relying on a gold discourse segmentation (Ji and Eisenstein, 2014;Feng and Hirst, 2014;Li et al., 2014;Joty et al., 2013). | discourse parsers' performance drops by 12-14% when relying on predicted segmentation (Joty et al., 2015), underscoring the importance of discourse segmentation. | contrasting |
train_4363 | POS tagger, chunker, list of connectives, gold sentences). | we present what is to the best of our knowledge the first work on discourse segmentation that is directly applicable to low-resource languages, presenting results for scenarios where no labeled data is available for the target language. | contrasting |
train_4364 | Our scores are not directly comparable with sentence-level state-of-the-art systems (see Section 2). | for En-DT, our best system correctly identifies 950 sentence boundaries out of 991, but gets only 84.5% in F1 for intra-sentential boundaries, thus lower than the state-of-the-art (91.0%). | contrasting |
train_4365 | Also, global sufficiency has the lowest agreement in both cases. | the experts hardly said "cannot judge" at all, whereas the crowd chose it for about 4% of all ratings (most often for global sufficiency), possibly due to a lack of training. | contrasting |
train_4366 | Considering that some common reasons are quite vague, the diverse and comprehensive theoretical view of argumentation quality may guide a more insightful assessment. | some quality dimensions remain hard to assess and/or to separate in practice, resulting in limited agreement. | contrasting |
train_4367 | Similarly, when we pack w3,9 into an oracle summary, we have to pack both chunks c3,3 and c3,5 and drop chunk c3,4. | this compression is not allowed since there is no dependency relationship between c3,3 and c3,5. | contrasting |
train_4368 | Firstly, to restrict the training data to grammatical and informative sentences, only news articles satisfying certain conditions are used. | then, nouns, verbs, adjectives, and adverbs (i.e., content words) shared by S and H are identified by matching word lemmas, and a rooted dependency subtree that contains all the shared content words is regarded as C. their method is designed for English, and cannot be applied to Japanese as it is. | contrasting |
train_4369 | Thus, in C, original forms are replaced by their abbreviated forms obtained as explained in Section 2.1 (e.g., the pair with "-" in Figures 1 and 2). | we do not allow the head of a chunk to be deleted to keep the grammaticality. | contrasting |
train_4370 | rooted includes only deletion of the leaves in a dependency tree. | multi-root+ includes deleting the global root and reflecting abbreviated forms besides it. | contrasting |
train_4371 | They make it easy to include synonyms and word-class information through hypernym relations. | they require substantial human effort to build and can have low coverage. | contrasting |
train_4372 | In both SF and pocket KBP, a query is an entity of interest and a document mentioning that entity. | in PKB the primary goal is to populate the KB with nodes for all entities related to the query, irrespective of any prior beliefs about relations. | contrasting |
train_4373 | Structured curated KBs have been used successfully for this task (Berant et al., 2013;Berant and Liang, 2014). | these KBs are expensive to build and typically domain-specific. | contrasting |
train_4374 | It is important to note that existing Open IE systems, like Open IE 4.2, may also extract numerical facts. | they are oblivious to the presence of numbers in arguments. | contrasting |
train_4375 | An extension to the Universal Schema approach was proposed by (Toutanova et al., 2015), where representations of text relations are formed compositionally by Convolutional Neural Networks (CNNs) and then composed with entity vectors by a bilinear model to score a fact. | these models show only moderate improvement when incorporating textual relations. A limitation of the Universal Schema approach for joint embedding of KBs and text is that information about the correspondence between KB and text relations is only implicitly available through their co-occurrence with entities. | contrasting |
train_4376 | MNIST dataset) for pre-training. | in tasks which prefer dense semantic representations (e.g. | contrasting |
train_4377 | In these experiments, there are a few sub-category labels that are not included in the training data. | we still hope that our model could still return the correct parent category for these unseen subcategories at test time. | contrasting |
train_4378 | Information extraction (IE) from text has largely focused on relations between individual entities, such as who has won which award. | some facts are never fully mentioned, and no IE method has perfect recall. | contrasting |
train_4379 | Ideally, we would like to make sense of all cardinality statements found in text. | this would require us to resolve the meaning of a large set of vague predicates, which is in general a difficult task. | contrasting |
train_4380 | This problem has matured into learning semantic parsers from parallel question and logical form pairs (Zelle and Mooney, 1996;Zettlemoyer and Collins, 2005), to recent scaling of methods to work on very large KBs like Freebase using question and answer pairs (Berant et al., 2013). | a major drawback of this paradigm is that KBs are highly incomplete (Dong et al., 2014). | contrasting |
train_4381 | Sequence-to-Sequence (seq2seq) models have demonstrated excellent performance in several tasks including machine translation (Sutskever et al., 2014), summarization (Rush et al., 2015), dialogue generation (Serban et al., 2015), and image captioning (Xu et al., 2015). | the standard cross-entropy training procedure for these models suffers from the well-known problem of exposure bias: because cross-entropy training always uses gold contexts, the states and contexts encountered during training do not match those encountered at test time. | contrasting |
train_4382 | Note that our original motivation based on removing discontinuity does not strictly apply to this sampling procedure, which still yields a stochastic gradient due to sampling from the Gumbel distribution. | this approach is conceptually related to greedy relaxations since, here, the soft argmax reparametrization reduces gradient variance which may yield a more informative training signal. | contrasting |
train_4383 | Among them, word-level combination approaches that adopt confusion network for decoding have been quite successful (Rosti et al., 2007;Ayan et al., 2008;Freitag et al., 2014). | these approaches are mainly designed for SMT without considering the features of NMT results. | contrasting |
train_4384 | As for negative examples, Munteanu and Marcu (2005) randomly paired sentences from their parallel data using two constraints: a length ratio not greater than two, and a coverage constraint that considers a negative example only if more than half of the words of the source sentence has a translation in the given target sentence according to some bilingual lexicon. | from a large parallel corpus, one can easily retrieve another target sentence, almost identical, containing most of the words that the true target sentence also contains. | contrasting |
train_4385 | Standard tasks, such as topic classification, are usually performed within a single language, and the maximum feature space size is a function of the single language's vocabulary. | LID must deal with vocabulary from many languages and the feature space grows prodigiously. | contrasting |
train_4386 | Otherwise, it is usually impossible to combine these pieces of information effectively. | the standard syntactic corpus of English, Penn Treebank, is not concerned with consistency between syntactic trees and spans of multiword expressions (MWEs). | contrasting |
train_4387 | Simple, yet competitive methods are based on pointwise vector addition or multiplication (Mitchell and Lapata, 2008, 2010). | these approaches neglect the structure of the text defining composition as a commutative operation. | contrasting |
train_4388 | Most of the existing models generate one representation per word and do not distinguish between different meanings of a word. | many tasks can benefit from using multiple representations per word to capture polysemy (Reisinger and Mooney, 2010). | contrasting |
train_4389 | • Fixed granularity: in some cases, annotators might feel too restricted with a given rating scale and may want to place an item in between the two points on the scale. | a fine-grained scale may overwhelm the respondents and lead to even more inconsistencies in annotation. | contrasting |
train_4390 | BWS is claimed to produce high-quality annotations while still keeping the number of annotations small (1.5N–2N tuples need to be annotated) (Louviere et al., 2015;Kiritchenko and Mohammad, 2016a). | the veracity of this claim has never been systematically established. | contrasting |
train_4391 | The sparse prior work in natural language annotations that uses BWS involves the creation of datasets for relational similarity (Jurgens et al., 2012), word-sense disambiguation (Jurgens, 2013), and word-sentiment intensity (Kiritchenko and Mohammad, 2016a). | none of these works has systematically compared BWS with the rating scale method. | contrasting |
train_4392 | (Estimating a value often stabilizes as the sample size is increased.) | in rating scale annotation, each item is annotated individually whereas in BWS, groups of four items (4-tuples) are annotated together (and each item is present in multiple different 4-tuples). | contrasting |
train_4393 | On the one hand, for BWS, the respondent has to consider four items at a time simultaneously. | even though a rating scale question explicitly involves only one item, the respondent must choose a score that places it appropriately with respect to other items. | contrasting |
train_4394 | So we weigh such words higher than the other words in the description during gender prediction task. | these weights might not be applicable for a location prediction task. | contrasting |
train_4395 | Hierarchical-Attention model performs well ahead of the other two models for almost all the tasks. | the performance of all the models fall flat for location prediction task. | contrasting |
train_4396 | It is quite valuable to analyze and predict user opinions from these materials (Wang and Pal, 2015), in which supervised learning is one of the effective paradigms (Xu et al., 2015). | the performance of a supervised learning algorithm relies heavily on the quality of training labels (Song et al., 2015). | contrasting |
train_4397 | At this optimal setting, the baseline is outperformed by 2.2%. | if more than only the first candidate is used, it is not beneficial to normalize all words anymore. | contrasting |
train_4398 | Deep latent variable models have been shown to facilitate the response generation for open-domain dialog systems. | these latent variables are highly randomized, leading to uncontrollable generated responses. | contrasting |
train_4399 | This approach, however, requires a set of high-quality labeled data (i.e., the Gold Standard) for providing the instruction and feedback to the crowd workers. | acquiring such data requires a considerable amount of human effort. | contrasting |