id (string, 7–12 chars) | sentence1 (string, 6–1.27k chars) | sentence2 (string, 6–926 chars) | label (4 classes) |
---|---|---|---|
train_17300 | Previous studies on answer selection have focused mostly on small-scale datasets. | many community question answering (CQA) platforms such as Yahoo Answers and Stack Exchange have become an essential source of information for many people. | contrasting |
train_17301 | This is expected as BERT has been pretrained on a massive amount of unlabeled data. | our proposed techniques do add a significant amount of performance. | contrasting |
train_17302 | In other words, it cannot copy words with morphological changes. | morphological changes frequently happen when we transform a passage into a question as grammatical functions of some words (e.g., verbs, nouns, adjectives, etc) change. | contrasting |
train_17303 | The trained seq2seq model is prone to generating these "safe" questions, similar to the undiversified response generation in seq2seq-based dialogue model . | our model is able to generate a more relevant question including a rare word "amalgamated" as the word has a high overlap rate. | contrasting |
train_17304 | Stories generated with neural language models have shown promise in grammatical and stylistic consistency. | the generated stories are still lacking in common sense reasoning, e.g., they often contain sentences deprived of world knowledge. | contrasting |
train_17305 | 7 The BERT multi-task versions perform better with highly correlated qualities like Q4 and Q5 (as illustrated in Figures 2 to 4 in the supplementary material). | there is not a clear winner among them. | contrasting |
train_17306 | This observation is consistent with many previous works of using RL/IL in NLP. | we also notice that the improvement of using the REINFORCE algorithm (line 6) is very small, only 0.18 on the average score. | contrasting |
train_17307 | In Table 4, we can see that the auxiliary loss model's improvements are even more amplified on D med and D late . | our pretraining results are mixed. | contrasting |
train_17308 | Further, at generation time, they heavily rely on rejection sampling to produce quatrains which satisfy any valid rhyming pattern. | we find that generators trained using our structured adversary produce samples that satisfy rhyming constraints with much higher frequency. | contrasting |
train_17309 | devised a type decoder which combines three type-specific generation distribution (including question type) with weighted sum. | the results displayed in their paper show that questions in dialogue are far different from questions for reading comprehension, which indicates a gap In this paper, we propose a unified model to predict the question type and to generate questions simultaneously. | contrasting |
train_17310 | The first two examples represent DSR's generated tokens are more diverse. | it may suffer from problems as shown in example 3 and 4. | contrasting |
train_17311 | Recently, various deep learning approaches have been proposed based on the WikiSQL dataset (Zhong et al., 2017). | because WikiSQL contains only very simple queries over just a single table, these approaches (Xu et al., 2017;Huang et al., 2018;Yu et al., 2018a;Dong and Lapata, 2018) cannot be applied directly to generate complex queries containing elements such as JOIN, GROUP BY, and nested queries. | contrasting |
train_17312 | For text-to-SQL generation, several SQLspecific approaches have been proposed (Zhong et al., 2017;Xu et al., 2017;Huang et al., 2018;Yu et al., 2018a;Dong and Lapata, 2018;Yavuz et al., 2018) based on WikiSQL dataset (Zhong et al., 2017). | all of them are limited to the specific WikiSQL SQL sketch, which only supports very simple queries. | contrasting |
train_17313 | After their release, very large language models (LMs) were able to reach or surpass human-level performance on SNLI (Peters et al., 2018) and SWAG (Devlin et al., 2018). | researchers have found inadequacies in these datasets and the models trained on them. | contrasting |
train_17314 | Contextualized word embeddings have boosted many NLP tasks compared with traditional static word embeddings. | the word with a specific sense may have different contextualized embeddings due to its various contexts. | contrasting |
train_17315 | The LM updated with textbook data (BERT+Textbook), improves performance on the domains included in additional pre-training (Phy and Gov). | we suspect that the updated model becomes more specialized towards seen domains, which leads to performance degradation on the unseen domain of Psychology. | contrasting |
train_17316 | We limited the scope of this paper to the task of automatic short answer grading only. | our findings of the sensitivity of domain-specific BERT models appear generic. | contrasting |
train_17317 | Similarly, counterfactual reasoning has been studied in the logic community (Lewis, 2013), but again using formal frameworks. | wIQA treats the task as a mixture of reading comprehension and commonsense reasoning, creating a new NLP challenge. | contrasting |
train_17318 | It contains several phenomena in the "stress tests" (Naik et al., 2018) including word overlap, negation, and length mismatch. | these datasets are artificially constructed while CB data are naturally occurring. | contrasting |
train_17319 | In the past, there has been work on extracting high quality translations from crowd-sourced workers using automatic methods (Zaidan and Callison-Burch, 2011;Post et al., 2012). | crowd-sourced translations have generally lower quality than professional translations. | contrasting |
train_17320 | Overall, the domains and quantity of the existing parallel data are very limited. | both languages have a rather large amount of monolingual data publicly available (Buck et al., 2014), making them perfect candidates to track performance on unsupervised and semi-supervised tasks for Machine Translation. | contrasting |
train_17321 | Right: average sentence level BLEU against Wikipedia document id from which the source sentence was extracted; sentences have roughly the same degree of difficulty across documents since there is no extreme difference between shortest and tallest bar. | source sentences originating from Nepali Wikipedia (blue) are translated more poorly than those originating from English Wikipedia (red). | contrasting |
train_17322 | Nepali and Sinhala are languages with very different syntax and morphology than English; also, very little parallel data in these language pairs is publicly available. | a good amount of monolingual data, parallel data in related languages, and Paracrawl data exist in both languages, making these two language pairs a perfect candidate for research on low-resource MT. | contrasting |
train_17323 | On the one hand, if src is ignored, it is difficult to identify translation errors related to adequacy in mt, especially for fluent but inadequate translations (e.g., mt in Figure 1). | mt serves as a major source for generating pe since many words (e.g., "Ich" and "einen" in Figure 1) are copied from mt to pe. | contrasting |
train_17324 | occupations), we can rely on human annotations or external data. | for most words, evaluating the correctness of them is still an open problem. | contrasting |
train_17325 | (2017) use word embeddings as a quantitative lens to investigate historical trends of gender stereotypes. | due to the limitations in word embedding training, existing methods have constrained effects in this field. | contrasting |
train_17326 | One possible idea to tackle such problems is to employ a sarcasm scorer that can determine the sarcasm content in the generated output, and use the scores given by the sarcasm scorer for better training of the generator. | the sarcasm scorer may be external to the sequence-tosequence pipeline, and the scoring function may not be differentiable with respect to the model M θ . | contrasting |
train_17327 | In typical RL settings, the learner is typically initialized to a random policy distribution. | in our case, since some supervision is already available in the form of target sarcastic sentences, we pretrain the model with the loss minimization objective given in Eq. | contrasting |
train_17328 | Inspired by unsupervised machine translation (UMT) (Lample et al., 2018b), we treated our task as a translation problem, namely translating vernacular paragraphs to classical poems. | our work is not just a straight-forward application of UMT. | contrasting |
train_17329 | The UMT framework makes it possible to apply neural models to tasks where limited human labeled data is available. | in previous tasks that adopt the UMT framework, the abstraction levels of source and target language are the same. | contrasting |
train_17330 | For example, in selected from famous modern prose. | compared with poem 4, poem 5 seems semantically more confusing. | contrasting |
train_17331 | We did not set any stopping criteria for the number of instructions to be generated for simplicity sake. | we generated as many number of instructions in the corresponding test record. | contrasting |
train_17332 | As mentioned above, the main difference is that SYPHON is two orders of magnitude faster than their system thanks to a novel decomposition and efficient SMT encoding. | we impose extra restrictions on the hypothesis space (i.e. | contrasting |
train_17333 | The resulting knowledge base, TEMPROB, contains observed frequencies of tuples (v1, v2, r) representing the probability of verb 1 and verb 2 having relation r and it was shown a useful resource for TempRel extraction. | tEMPROB is a simple counting model and fails (or is unreliable) for unseen (or rare) tuples. | contrasting |
train_17334 | The underlying assumptions behind most NER systems are that an entity should contain a contiguous sequence of words and should not overlap with each other. | such assumptions do Figure 1: Entities are highlighted with colored underlines. | contrasting |
train_17335 | Muis and Lu (2016a) proposed a hypergraphbased representation to compactly encode discontiguous entities. | this representation suffers from the ambiguity issue during decodingone particular hypergraph corresponds to multiple interpretations of entity combinations. | contrasting |
train_17336 | (2014); extended the BIO tagging scheme to encode such complex structures so that traditional linear-chain CRF (Lafferty et al., 2001) can be employed. | the model suffers greatly from ambiguity during decoding due to the use of the extended tagset. | contrasting |
train_17337 | For r (k) or r (k,l) derived from a KB relation, we represent it by a trainable parameter vector. | for the one derived from a textual relation, we use the following encoders to compute its representations. | contrasting |
train_17338 | Since their method directly predicts a relation label for each surface pattern, it is more robust to the sparsity of surface patterns among a specific higher arity entity tuple. | due to their purely supervised training objective, its performance may degrade if the number of available training labels is small. | contrasting |
train_17339 | In the result, U+B performs significantly better (p < 0.005) than U and B, and this shows effectiveness of combining scores of both binary facts and unary facts. | there was no significant difference between U+B+N and N (p > 0.9). | contrasting |
train_17340 | From a humanities perspective, a VA constitutes an interesting phenomenon of enculturation (Holmqvist and Płuciennik, 2010) that deserves to be studied more in-depth, based on larger corpora. | we expect that the large-scale identification of VAs and other such stylistic devices is not only important in the humanities and for cultural reasons, but also in natural language processing to avoid mistakes in tasks such as machine translation and fact extraction. | contrasting |
train_17341 | Thus, we used a random sample of 105 articles (for each year 5). | we found only one VA: "If Nike is the Chicago Bulls of the athletic shoe market, retailers are holding their breath for a strong underdog player to emerge." | contrasting |
train_17342 | In other words, the more distinct these representations are, the more difficult identifying chemical compound entities becomes. | existing chemical NER methods do not deal with notation variants of chemical compounds, derived from the partial structures or the notation fluctuation peculiar to these chemical compounds. | contrasting |
train_17343 | These results contain two types of errors: idiosyncratic casing in the gold data and failures of the truecaser. | from the high scores in the Wikipedia experiment, we suppose that much of the score drop comes from idiosyncratic casing. | contrasting |
train_17344 | Naturally, in any crossdomain experiment, one will obtain higher scores by training on in-domain data. | our goal is to show that our methods produce a more robust model on out-of-domain data, not to maximize performance on this test set. | contrasting |
train_17345 | Its scorer makes some modifications to OIE2016. | it does not reward partial coverage of gold tuples, and forces one system prediction to match just one gold. | contrasting |
train_17346 | One-to-one match (OIE2016) is indifferent between the two which means that for OIE2016, adding more information in the same extraction has no value at all. | multi match (CaRB) assigns higher recall to system 1, since it contains strictly more information, and higher precision to system 2, since its prediction exactly matched a gold extraction. | contrasting |
train_17347 | Based on our empirical observation, the sentence-level sentiment classifiers without considering aspects can still achieve competitive results with many recent ABSA methods (see TextCNN and LSTM in Table 3). | even advanced ABSA methods trained on these datasets can hardly distinguish the sentiment polarities towards different aspects in the sentences that contain multiple aspects and multiple sentiments. | contrasting |
train_17348 | Researchers and practitioners typically have to resort to crowdsourcing. | as mentioned above, the crowdsourced annotations can be quite noisy. | contrasting |
train_17349 | This task is similar to the three-way aspect-level sentiment analysis that determines sentiment polarity towards aspect terms. | different from aspect-level sentiment analysis, the target in stance detection might not be explicitly mentioned in a given sentence. | contrasting |
train_17350 | Then the F1 average is calculated as: Note that the label "None" is not discarded during training. | the label "None" is not considered in the evaluation because we are only interested in labels "Favor" and "Against" in this task. | contrasting |
train_17351 | Based on massive amounts of data, recent pretrained contextual representation models have made significant strides in advancing a number of different English NLP tasks. | for other languages, relevant training data may be lacking, while state-of-the-art deep learning methods are known to be data-hungry. | contrasting |
train_17352 | Without L alignment , we observe a reduction in both accuracy and BLEU on Yelp. | this tendency is inconsistent on Amazon (i.e., -2.2 accuracy and +0.56 BLEU). | contrasting |
train_17353 | Hierarchical attention without normalization can make distinctions for the output vector of each class in the norm, so it works well. | using normalization is better in label attention. | contrasting |
train_17354 | The lower class label text for the upper class OTHER includes less verbose phrases such as "The user requesting," so label attention can give higher weight for important parts of the label text more precisely. | the F1 score of the proposed method for the upper class REPORT is worse than that of Non-hier. | contrasting |
train_17355 | The proposed method has a better F1 score than Non-hier for nine lower classes with a mean number of tokens in the lower class labels of 11.2. | the proposed method has a lower F1 score than Non-hier for seven lower classes with a mean number of tokens of 8.0. | contrasting |
train_17356 | Word embeddings have demonstrated strong performance on NLP tasks. | lack of interpretability and the unsupervised nature of word embeddings have limited their use within computational social science and digital humanities. | contrasting |
train_17357 | Relaxing the location assumptions with the Truncated Prior (γ = 1000) and placing informative priors on neutral words (ψ = 0.01) show varying degrees of improvements. | due to the limited number of hold-out test words, large improvements are required for significant differences -which only is observed for the sentiment dimension in the Senate corpus. | contrasting |
train_17358 | Such forms of bias typically do not depend on context outside of the sentence and can be alleviated while maintaining its semantics: polarized words can be removed or replaced, and clauses written in active voice can be rewritten in passive voice. | political science researchers find that news bias can also be characterized by decisions made regarding content selection and organization within articles (Gentzkow et al., 2015;Prat and Strömberg, 2013). | contrasting |
train_17359 | An alternative encoding method, Label Powerset (LP), captures dependencies explicitly: each label combination appearing in the training data is encoded as a new, unique label, reducing the task once again to a multiclass classification. | lP can introduce an exponentially large number of new labels, potentially with few training instances, thus exacerbating class imbalance. | contrasting |
train_17360 | Ancient History relies on disciplines such as Epigraphy, the study of ancient inscribed texts, for evidence of the recorded past. | these texts, "inscriptions", are often damaged over the centuries, and illegible parts of the text must be restored by specialists, known as epigraphists. | contrasting |
train_17361 | These repositories primarily consist in a researcher's mnemonic repertoire of such parallels, and in digital corpora for performing "string matching" searches (The Packard Humanities Institute, 2005;Clauss, 2012). | minor differences in the search query can exclude or obfuscate relevant results, making it hard to estimate the true probability distribution of possible restorations. | contrasting |
train_17362 | In addition to pairwise ranking, SemEval 2017 Task 6 also includes a global ranking subtask. | the majority of submissions build global rankings using a series of pairwise comparisons (Potash et al., 2017). | contrasting |
train_17363 | State-of-the-art humor recognition algorithms usually require a considerable amount of training data with labels to learn effective features (Yang et al., 2015). | such data are difficult to obtain -especially fine-grained humor annotations. | contrasting |
train_17364 | Like ours, their results are obtained without using the training set. | their system uses an n-gram language model trained on a 6.2GB subset of the News Commentary Corpus and the News Crawl Corpus. | contrasting |
train_17365 | Automatic data augmentation is commonly used in computer vision (Simard et al., 1998;Szegedy et al., 2014;Krizhevsky et al., 2017) and speech (Cui et al., 2015;Ko et al., 2015) and can help train more robust models, particularly when using smaller datasets. | because it is challenging to come up with generalized rules for language transformation, universal data augmentation techniques in NLP have not been thoroughly explored. | contrasting |
train_17366 | Kobayashi (2018) showed that replacing words with other words that were predicted from the sentence context using a bi-directional language model yielded a 0.5% gain on five datasets. | training a variational auto-encoder or bidirectional LSTM language model is a lot of work. | contrasting |
train_17367 | Therefore, we translate the source language, English, to the target language for the training corpus to reconstruct source language sentences and learn source and target languages jointly. | one English word might have multiple corresponding translations. | contrasting |
train_17368 | The previous cross-lingual settings were only for European languages, which share similar alphabets. | many languages use non-Latin orthography. | contrasting |
train_17369 | Specially, the authors used 5 different emojis to represent the 5 degrees of humor instead of using the 5-point annotation. | while previous work focuses on textual humor annotation of humorous/non-humorous and degree of funniness, such annotations do not provide adequate knowledge and scenarios to explain how humor arises, so they may not provide a deep analysis of the underlying mechanism of humor. | contrasting |
train_17370 | Existing automatic anagram generation methods can find possible combinations of words form an anagram. | they do not pay much attention to the naturalness of the generated anagrams. | contrasting |
train_17371 | NLP tasks (Józefowicz et al., 2016;Gehrmann et al., 2018;Skadina and Pinnis, 2017). | beam search, the de-facto decoding algorithm used in many language generation methods, cannot be used as-is to generate anagrams since it cannot ensure that the generated sentences are anagrams. | contrasting |
train_17372 | Dancing links is a link-based data structure that supports efficient DFS for some combinatorial search problems (Details are shown in (Knuth, 2019)). | we found that using a simple array-based representation is sufficient since the conditional probability evaluation consumes most of the time. | contrasting |
train_17373 | While the baseline method seems to employ some heuristics for deciding the word order, the combinations of selected words tend to form meaningless sentences. | the proposed method tends to generate readable sentences. | contrasting |
train_17374 | This is due to the fact that the famous anagrams use relatively rare words. | the generated anagrams contain natural sentences. | contrasting |
train_17375 | Here, the word 'elsewhere' is the key cue which determines the stance. | the presence of the word 'elsewhere' does not necessarily imply that the perspective is opposing the claim. | contrasting |
train_17376 | (2016) proposed the Deep-Regex model based on Seq2Seq for generating regular expressions from natural language descriptions together with a dataset of 10,000 NL-RX pairs. | due to the limitations of the standard Seq2Seq model, the Deep-Regex model can only generate regular expressions similar in shape to the training data. | contrasting |
train_17377 | The KB13 (Kushman and Barzilay, 2013) dataset was constructed by regex experts and is relatively small. | nL-RX-Synth is data generated automatically and nL-RX-Turk is made from ordinary people by paraphrasing nL descriptions in nL-RX-Synth using Mechanical Turk (Locascio et al., 2016). | contrasting |
train_17378 | For in-hospital mortality task, only those patients are considered who were admitted in the ICU for at least 48 hours. | we dropped all clinical notes which doesn't have any chart time associated and also dropped all the patients without any notes. | contrasting |
train_17379 | CGAEW improves upon CGAE as the existence of world-state in the score of the attention layer allows the model better learn the grounding of entities in the instruction to the map. | our best model still fails on features not captured by our world-state: abstract unmarked entities such as blocks, intersections, etc, and generic entities such as traffic-lights (Tab. | contrasting |
train_17380 | That is to say, if a 2-tuple often cooccurs with CC i in training corpus with positive view, it contributes more to positive orientation than negative one. | if the 2-tuple often co-occurs with CC i in training corpus with negative view, it contributes more to negative orientation. | contrasting |
train_17381 | Sentiment classifying knowledge is defined as the importance of all 2-tuples <word, epos> that compose the context of CC i (given as an example) to sentiment classification and every Concerned Concept like CC i has its own positive and negative sentiment classifying knowledge that can be formalized as a 3-tuple K: To CC i , its S i pos has concrete form that is described as a set of 5-tuples: Where S i pos represents the positive sentiment classifying knowledge of CC i , and it is a data set about all 2-tuples <word, epos> appearing in the sentences containing CC i in training texts with positive view. | s i neg is acquired from the training texts with negative view. | contrasting |
train_17382 | Seen from the table, when evaluating texts that have more than 15 sentences, for enough features, SVM has better result, while ours is averagely close to it. | when evaluating the texts containing less than 15 sentences, our method is obviously superior to SVM in either positive or negative view. | contrasting |
train_17383 | We can see from both Tables 3 and 4 that when the number of words in a sequence is small, the result has no effect with the number of positive training data, since the range of F-score in Table 3 is 0.415 ∼ 0.478, and that in Table 4 is 0.453 ∼ 0.516. | as we can see from Table 2, when the number of title words is large, the smaller the number of positive training data is, the worse the result is. | contrasting |
train_17384 | Overall, the result of 'with hierarchy' was better than that of 'without hierarchy' in all N t values. | there are four topics/N t patterns whose results with hierarchical classification were worse than those of without a hierarchy. | contrasting |
train_17385 | Last decade has witnessed an explosive growth of multimedia information such as images and videos. | we can't access to or make use of the relevant information more leisurely unless it is organized so as to provide efficient browsing and querying. | contrasting |
train_17386 | In our annotation procedure, each annotated word is predicted independently by the Maximum Entropy Model, word correlations are not taken into consideration. | correlations between annotated words are essentially important in predicting relevant text descriptions. | contrasting |
train_17387 | Japanese relative clause constructions should be classified into at least two major semantic types: case-slot gapping and head restrictive. | these types for relative clause constructions cannot apparently be distinguished. | contrasting |
train_17388 | Cooccurrence information between nouns and verbs can be calculated from the syntactically parsed corpus, and this information can be used preferentially instead of handcrafted case frames to determine whether a noun can be the filler of a case-slot of a verb [7,11]. | merely using the cooccurrence information between nouns and verbs instead of case frames cannot provide a good solution to the analysis of Japanese relative clauses. | contrasting |
train_17389 | For nouns that do not tend to be modified by 'outer' clauses, such as " "(people), " " (city), and " "(television), the ratio between the frequency and the number of verbs is almost the same between the relative clause and case-slot cases. | for nouns that tend to be modified by 'outer' clauses, such as " "(intent), " " (fact), and " "(preparation), the number of verbs is much bigger in relation to clause cases, although the frequency is smaller. | contrasting |
train_17390 | As previously explained, if the head noun can fill the case-slot of the main verb of the relative clause, the RCC instance can be judged as an 'inner' clause. | if the case-slot that the head noun can fill is already filled by the noun in the relative clause, and hence unavailable for case-slot gapping, the rule cannot be applied. | contrasting |
train_17391 | The second interpretation is that " " can be the direct object(" " case) of the main verb " " and can be considered to be modified by the 'inner' relative clause. | (b) has only the interpretation of 'inner'. | contrasting |
train_17392 | If modifiers other than the relative clause exist, RCC type is 'inner'. | the accuracy of this feature is not so good compared with other features. | contrasting |
train_17393 | We found use of lexical collocations did yield a small (0.3%) but statistically significant improvement in performance over the unmodified parser (Table 5). | when combined with either POS or entity adaptations, the lexicon's impact on parsing accuracy was statistically insignificant. | contrasting |
train_17394 | Domain knowledge appears to be necessary here for correct resolution. | to this, POS tags appear to be a distributional rather than a semantic concern. | contrasting |
train_17395 | For a better recognition, one can define accurate regular expressions. | we just collect suffixes and feature characters to match strings. | contrasting |
train_17396 | For example, the verb eat impose a selection restriction on its object modifier 5 : it has to be solid food. | the verb drink specifies its object modifier to be liquid beverage. | contrasting |
train_17397 | In this case, the condition B max will not suffice, and we need a second boundary condition: B increase Boundaries are locations where the entropy is increased. | when h decreases at k = 5, then even B increase cannot be applied to detect k = 5 as a boundary. | contrasting |
train_17398 | 8 The problem here might not occur if we used many more question types in the first QA test. | we did not do this to keep the first QA test simple. | contrasting |
train_17399 | [16] proposed a method of identifying attribute-value pairs in Web documents. | since this method only identified the attributes obtained with the method in [1], the coverage might be bounded by the coverage of tables for attributes. | contrasting |