id (stringlengths 7–12) | sentence1 (stringlengths 6–1.27k) | sentence2 (stringlengths 6–926) | label (stringclasses, 4 values)
---|---|---|---|
train_100600 | Intuitively, finegrained classifications are often considered more error-prone than those with a small number of class values. | providing highquality sources might lower the attention and interest of the annotators. | neutral |
train_100601 | The results show that the model GRU+SVM with ELMo yields the best performance of 73.03% on the development data, while the model GRU+SVM with ELMo+POS outperforms all the other models on the test dataset with a micro-average F1 score of 69.93%, by being marginally better than GRU+SVM with ELMo 7 . | equations 4 and 5 show the mechanisms used to calculate the weights, where w corresponds to the weights in the single-layer neural network, and ν t is the single value, which is the result of feeding y t to the fully-connected layer. | neutral |
train_100602 | Let n +− t denote for the time step t, the union of positive and negative examples. | the positive impact of the backward external initialisation increases with the volume of data. | neutral |
train_100603 | The variance parameter is fixed at 0.1. | we added a regularisation term to their loss using the Hardshrink activation function, successfully getting longer distribution tails for the drifts. | neutral |
train_100604 | In (B), the string " " has only one grammatical analysis result " " (Nanjin). | in STEP (1) at the line 14 of Algorithm 1, when an identical surface was associated with two or Figure 1: Example of a lattice that is built by our baseline method and our proposed model using a CiS dictionary. | neutral |
train_100605 | Finally, in Section 5, conclusions are drawn and future developments outlined. | furthermore, we thank the reviewers for their valuable comments. | neutral |
train_100606 | This means that the slightly higher entropy is found in the language pair where there is slightly lower intelligibility. | the character adaptation surprisal values between language A and language B are not necessarily the same as between language B and language A. | neutral |
train_100607 | According to Gatt and Krahmer (2017), there has been a plenty of works which investigated the generation of NL texts from Semantic Web Technologies (SWT) as an input data (Cimiano et al., 2013;Duma and Klein, 2013;Ell and Harth, 2014;Biran and McKeown, 2015). | as expected, the verbalization of short expressions leads to sentences which read as if they have been generated by a human. | neutral |
train_100608 | we get everything that is a city and located in France. | we will consider the used of attention-based encoder-decoder networks to improve the fluency of complex sentences. | neutral |
train_100609 | We see a similar pattern for both extractive and abstractive summarization, with an increase of deletion for longer summaries produced by the extractive system. | our denoising methods are currently better suited for extractive than for absctractive summarization. | neutral |
train_100610 | democratic to republican) or gender (e.g. | importantly, these methods assume that some parallel training data is already available, which impedes their application in settings where there is no parallel data whatsoever, which is the case for many text rewriting tasks such as style transfer. | neutral |
train_100611 | A quick look at an image is sufficient for a human to say a few words related to that image. | we investigate data from Twitter. | neutral |
train_100612 | With increasing k, more concept clusters of unused single terms are added. | the feature combinations are formed by concatenating the feature spaces of each method. | neutral |
train_100613 | The feature combinations are formed by concatenating the feature spaces of each method. | largely balanced validation sets are formed with data from various sources. | neutral |
train_100614 | Among the false negatives we analyzed, we found that the model is most likely to miss "tricky" quotations that are unusual in their grammatical structure. | the results in table 1 show that NQD cannot beat the performance of Scheible et al. | neutral |
train_100615 | Next, we carry out more methods to clean the texts. | the evaluation was carried out in accordance with parameters set out for SemEval-2019 task 12, featuring strict and overlap categories on macro and micro levels. | neutral |
train_100616 | On the one hand large-scale geographical databases, such as GeoNames 1 , make information about many different locations easily and freely available. | while a lot more research has been focused on these kind of architectures, we hope to explore tasks other than only sequence labelling. | neutral |
train_100617 | automatic post-editing (APE) (Junczys-Dowmunt and Grundkiewicz, 2016;. | it could then be questioned whether the extra training effort in itself does not partly explain the positive effect of back-translation. | neutral |
train_100618 | We demonstrate the results of our model in section five. | we accomplish this by incorporating information about part-of-speech (PoS) tags into the model. | neutral |
train_100619 | The graph-based embeddings are somewhat better than those based solely on the FrameNet annotated corpus; the difference is especially pronounced in the evaluations without the lexicon. | the Wikipedia data is approximately the same size as the pseudo-corpus. | neutral |
train_100620 | As described above in Section 1, FrameNet has a structure which connects frames via different semantic relations. | their frame elements are more likely to appear across different domains. | neutral |
train_100621 | We have constructed such a model following the methodology described in Goikoetxea et al. | the paper is structured as follows: the next section presents related work; section 3 deals with some preliminary experiments aimed at replicating previous work described in the literature, but in a slightly modified setting; section 4 outlines several strategies for improving the structure of the knowledge base used for KBWSD; the penultimate section reports on our core experimental work, and the final section concludes the paper. | neutral |
train_100622 | The dependency analyses of the sentences are not part of the original annotation of SemCor. | in the case of the preliminary experiments, we have used a synset embedding model constructed via random walks along the WordNet KB 7 . | neutral |
train_100623 | (2017) and Rios Gonzales et al. | the conjunction ambiguity is related more to fluency than it is to adequacy. | neutral |
train_100624 | The opposite applies to the Czech parties SPD, Rozumní, Polish party Porozumienie and Slovak party L'S-HZDS. | we set a baseline which uses standard machine learning approach, and set an upper bound which uses manually created external knowledge. | neutral |
train_100625 | As an example the word 'bank' Sentence Pair Similarity 1. | contextualised word embeddings perform better than standard word embeddings in many natural language processing tasks like question answering, textual entailment etc. | neutral |
train_100626 | Each pair is annotated with a relatedness score between [1,5] corresponding to the average relatedness judged by 10 different individuals. | siamese networks are popular among tasks that involve finding similarity or a relationship between two comparable things. | neutral |
train_100627 | In Figure 1, only terms with a probability higher than 50% (up until rank 1352) were labelled as terms by HAMLET. | those that are validated with EWN as medical terms. | neutral |
train_100628 | RSR -RuSentRel based dataset with sentence-level attitude labeling (Section 3.1); 3. | this process involves the tokenization to demarcate text string into words and punctuation signs. | neutral |
train_100629 | The list of authorized objects is necessary to avoid accidental misses from the NER model. | all the entities appeared between pair endings should be authorized objects; in a specific sentence, the supposed relation between countries can be false. | neutral |
train_100630 | Improving over the matrix factorisation and the edge reconstruction approaches, the random walk technique is effective and accommodates global information on the nodes. | the cell A Gij contains the associative strength of word i with word j, obtained from the frequency with which word j is responded when word i is cued. | neutral |
train_100631 | One notes that all measures are higher than those obtained when using the GOLDPERS (see rows PERS_1 in Table 4), which can partly be due to the inclusion of first names only into the present version of SRPNER. | to the best of our knowledge, StANFORD NER and SPACY NER were used for the first time for the recognition of personal names in Serbian texts. | neutral |
train_100632 | Initial results suggest that both tasks may be performed with relatively high accuracy by making use of simple models based on char n-grams and feature selection. | subsequent to semEval-2016, a number of improved systems have been proposed. | neutral |
train_100633 | For the stance recognition task, a range of n-gram models -from 1 to 5 words and from 3 to 16 characters -was considered, and we found that character-based models always outperform wordbased models. | the work in (Zarrella and Marsh, 2016) presents the best overall performance in the SemEval-2016 shared task (Mohammad et al., 2016b) on supervised stance recognition. | neutral |
train_100634 | Recall the definition of pVCC: stays increase completeness, and jumps increase cleanness. | 2), but stating the principle of "jump and stay" is much more clearer. | neutral |
train_100635 | If the jump would omit the last filler from a VCC, we do not take this step. | the basic processing unit is the clause, the unit which contains a verb together with its complements and adjuncts, and consequently, a pVCC. | neutral |
train_100636 | In fact, we do not do a jump in every case. | this paper is 1. a proof of concept concerning the corpus lattice model, opening the way to investigate this structure further through our implementation; and 2. a proof of concept of the "jump and stay" idea and the algorithm itself, opening the way to apply it further, e.g. | neutral |
train_100637 | In Table 2 we provide an individual example of a linear conversation from our dataset including offensive probabilities. | do you find your government to be trustworthy? | neutral |
train_100638 | Leaf nodes are posts which have no further direct replies. | finally, in Section 3.3, we show how we intend to further analyse these linear dialogues by applying decoupling functions to model the change of offensive probability. | neutral |
train_100639 | It appears with a broad range of other emojis with a relatively high frequency. | it is often remarked that they lack interpretability, in the sense that individual values in such vectors do not carry any easily interpretable inherent significance. | neutral |
train_100640 | In this way, the system learns to detect and use more useful features. | we then concatenated the vector to the rest of the word embeddings. | neutral |
train_100641 | While we can still build a stable system based on this data, the class imbalance makes our model more vulnerable to overfitting. | in the last years, the interest in NER for Slavic languages grew. | neutral |
train_100642 | Another place for improvements would be to distinguish better between B-LOC and B-ORG, as many places and organization have identical names, or at least the identical first words. | we show that named entity recognition needs only coarse-grained POS tags, but at the same time it can benefit from simultaneously using some POS information of different granularity. | neutral |
train_100643 | Traditionally, NER has focused on recognizing entities such as person (PER), organization (ORG), location (LOC), and miscellaneous (MISC). | we created the input vectors for the tokens in the sentences as a concatenation of three vectors: a word embedding vector, a character embedding vector, and a vector containing some grammatical features, called a grammatical vector. | neutral |
train_100644 | At our disposal there are 12 lists (see Table 2 for more detailed information). | a system extracting relevant cybersecurity information from unstructured publications could be of great use for a cybersecurity expert. | neutral |
train_100645 | In addition, to fine-tune and evaluate models in experiment 2 (see section 5.4), we sample smallscale sets with a higher proportion (1%) of cognates, presented in Table 3, SAMI-FT and SAMI-FT-TEST. | we implemented the model using the Keras library with Tensorflow backend 4 . | neutral |
train_100646 | A straightforward example is the Italian-Spanish pair (notte, noche), with a similar form and common meaning. | our approach consists of learning a similarity metric from example cognates in Indo-European languages and applying it to low-resource Sami languages of the Uralic family. | neutral |
train_100647 | In order to determine how common each word is, we used pre-trained frequency lists in all languages (Michel et al., 2010). | figure 3: POS-MTUs, English-German thereby determining the source and the target sentences. | neutral |
train_100648 | As one realization of this hypothesis, we assume that translations would use more common, frequent words than originals. | the classification unit used in all the above-mentioned research was larger chunks of text, typically 2,000 tokens. | neutral |
train_100649 | This is not unexpected, since the English data set is larger than the German one. | abusive language detection has received much attention in the last years, and recent approaches perform the task in a number of different languages. | neutral |
train_100650 | This paper presents a comparison of different techniques for solving the task of company industry classification based on textual descriptions of companies in DBpedia 1 . | other methods with comparable results are Universal Language Model Fine-tuning (ULMFiT), (Howard and Ruder, 2018) with error 0.8 for DBpedia dataset. | neutral |
train_100651 | NLP pipeline for stopword removal and stemming, resulting text is processed into unigrams) but instead of one-hot vectors, GloVe vector embeddings are used. | the main advantage of transformer-XL is that it allows the capture of longer-term dependencies and resolves context fragmentation problem. | neutral |
train_100652 | The Industries classification is based on the nature of organization's activity and is generated from the industry 8 property of DBPedia. | the 300dimensional GloVe vectors trained on the large Common Crawl corpus of 840 billion tokens with a vocabulary of 2.2 million words. | neutral |
train_100653 | Even if some studies try to find connections by QPT and brain functionality at neural level (Khrennikov et al., 2018), the use of QPT in this field is simply as an explanation theory useful to model real phenomena in the right way, but none of them is really claiming that our brain is working by applying QPT axioms. | the large set of works introducing word and sentence embeddings (Mikolov et al., 2013;Pennington et al., 2014;Bojanowski et al., 2016;Le and Mikolov, 2014;Sutskever et al., 2014;Kiros et al., 2015;Cer et al., 2018) produce representations in the real domain while we need similar vectors but in the complex domain. | neutral |
train_100654 | We can see that the qualities of neighbors in two embedding matrices are close. | we conduct both qualitative and quantitative evaluations of the embeddings from Transformers and Trans-noEnc models. | neutral |
train_100655 | An examples, consider attributes such as 'disputed territories' from the country domain, or 'supplier' from the organization domain; arguably the values of these attributes is so specific that numeric information cannot help. | in this paper, we propose a simple feedforward neural architecture to jointly predict numeric and categorical attributes based on embeddings learned from textual occurrences of the entities in question. | neutral |
train_100656 | Conversely, results for numeric prediction improve when the model pays more attention to these attributes, for low values of α (recall that lower NRS values are better). | each unit in the output layer corresponds to one numeric attribute, and the model predicts all numeric attributes simultaneously. | neutral |
train_100657 | Wrong negation: Classifying negated sentiment words accurately requires more effort than negating sentiment words that are preceded by a negator. | we applied Fleiss' Kappa (Fleiss, 1971) to measure the agreement among the annotators scoring a substantial agreement of 0.74 (Landis and Koch, 1977). | neutral |
train_100658 | We tried our best to keep three annotators per task, but in a few cases we had one annotator at hand. | the annotations carried out to create SenZi and the datasets took place at different times between 2016 and 2018. | neutral |
train_100659 | This contrasts with our intuition of aligning the word vectors based on a few static (from a lexical semantic point of view) words. | this is because the alignments of the former are based on the representations of words that are indeed stable over time. | neutral |
train_100660 | Figure 2 shows the encoder model. | monolingual Impact Similar to the RCSLS, our model changes the cosine distance between word vectors in the same language, that is, it also has an impact on the monolingual embedding space. | neutral |
train_100661 | Limitations Similar to the baseline models, the main limitation of our model is that it can not generate multi-word expressions such as phrases on the target side, although our model is able to represent a sequence of strings in the source encoder. | although the existing models achieve high performance on pairs of morphologically simple languages, they perform very poorly on morphologically rich languages such as Turkish and Finnish. | neutral |
train_100662 | (2010b) looked for new claims on the Web that entail the ones that have already been collected. | other claim monitoring tools include FactWatcher (Hassan et al., 2014) and Dis-puteFinder (Ennals et al., 2010b). | neutral |
train_100663 | We approach the task of check-worthiness prediction as a multi-source learning problem, using different sources of annotation over the same training dataset. | this kind of neural network architecture for multi-task learning is known in the literature as hard parameter sharing (Caruana, 1993), and it can greatly reduce the risk of overfitting. | neutral |
train_100664 | Therefor we run 10 times 10-fold cross validation on random selected folds. | more generally, the result of the experiment provides some evidence that current NLP methods are quite cable to "understand" the meaning of text at an almost human level. | neutral |
train_100665 | In addition, we need to capture information that is relevant to the match, but is expressed in a semi-explicit or implicit way, such as health conditions, confidence, psyche, etc. | it then remains unchanged throughout the remaining experiments. | neutral |
train_100666 | • Order of Relative Clause and Noun. | the annotation process was a collective effort and a number of data annotators were involved in this step. | neutral |
train_100667 | Results in Table 6 show a slight improvement over the InIt-based scoring, but the difference is not as high as we expected. | this study has a threefold objective. | neutral |
train_100668 | Coreference resolution and sentence fusion may help to lower the degree of redundancy introduced through the syntactic sentence simplification. | aMR graphs also rely on PropBank framework whose limitations pose additional constraints on aMR graphs. | neutral |
train_100669 | (2013) used accuracy metric to measure the quality of word embeddings on the task in which only when the expected word is on top of the prediction list, then the model gets +1 for true positive count. | unlike the side-by-side visualization, this interactive visualization can only visualize up to a certain amount of embedding vectors as long as the tensor graph is less than 2GB. | neutral |
train_100670 | • Evaluator evaluates the pre-trained embeddings for a downstream task. | among many embeddings at different learning steps of dpUGC, how to choose a suitable embedding to achieve a good trade-off between data privacy and data utility is a key challenge. | neutral |
train_100671 | An important challenge is handling of the segmentation in a correct way or applying a more advanced normalisation process before tagging. | we proposed significant expansions to the state-of-the-art tagger for Polish, namely Toygger, that resulted in large gain in per-Form Tag W prep:loc:nwok kredytowaniu ger:sg:loc:n:imperf:aff zakupu subst:sg:gen:m3 auta subst:sg:gen:n poszło praet:sg:n:perf bardzo adv:pos sprawnie adv:pos i conj całkiem adv przyzwoite adj:pl:nom:f:pos spłaty subst:pl:nom:f . | neutral |
train_100672 | And the second, based on clustering of word embeddings into a predefined number of groups and using centroids as elements of final document vectors. | the embeddings are sentence aware and could solve a problem of polysemous words (words with multiple meanings). | neutral |
train_100673 | If there are no terms with the same document frequency in an ordered term list, the index of each term can be reasonably considered as its rank in this corpus. | for different corpora, the order of terms will be different, as well as the index of each term. | neutral |
train_100674 | The current state of the art for First Story Detection (FSD) are nearest neighbourbased models with traditional term vector representations; however, one challenge faced by FSD models is that the document representation is usually defined by the vocabulary and term frequency from a background corpus. | we first look at the comparisons between corpora before looking at FSD performance for different background corpora. | neutral |
train_100675 | The state of the art document representation model for P2P FSD models remains the traditional term vector models, due, in part, to their specificity of terms (Wang et al., 2018). | given these two factors cannot always be mutually satisfied, in this paper we examine whether the distributional similarity of common terms is more important than the scale of common terms for FSD. | neutral |
train_100676 | Our results show that term distributional similarity is more predictive of good FSD performance than the scale of common terms; and, thus we demonstrate that a smaller recent domain-related corpus will be more suitable than a very largescale general corpus for FSD. | the ideal background corpus for FSD should be both large-scale and similar in frequency distribution to the assumed target corpus. | neutral |
train_100677 | The motivation for a universal classifier might not be obvious at first, as clearly the best performance is achieved by in-domain classification (Twitter may need tweet classifier, Facebook needs posts classifier, IMDB needs review classifier, and so on). | in this experiment we merged all sub-parts of the sentiment treebank (jun18, neg, polevaltest, rev, sklad) into one data set presented as TW. | neutral |
train_100678 | The analysis only considers NIL entities that were marked as such (e.g., NIL or similar designation). | for example, if a KB has more name variants (e.g., Bobby Kennedy and RfK for Robert f. Kennedy) than the corpus annotators have considered, NEL systems able to correctly detect these name variants will be penalized since they do not occur in the corpus and are, therefore, considered errors. | neutral |
train_100679 | Counts from rows (i) and (ii) were taken directly from the corpora; row (iii) count was estimated based on SPARQL queries that aim at linking NIL entities to the KB; and counts for columns (iv) and (v) were estimated based on annotating samples from each data set. | reuse of old gold standards can lead to problematic results (e.g., entities declared NIL in the gold can currently exist in the current KB version and can be retrieved by annotator tools) or even unfair evaluations (e.g., tools that use an old KB should not be compared with those who use the latest updates). | neutral |
train_100680 | Since edges do not have a pre-specified order, we propose a set-based learning method. | this model assumes conditional independence of the edges. | neutral |
train_100681 | gazetteer or word list. | for instance, recent technological advances make the provision of various eHealth services feasible. | neutral |
train_100682 | For a QA application, it means that the classes of domain-specific semantic concepts can be used to generate signature vectors and the semantic similarity with the signature vectors of the answer candidates can be computed for retrieval and ranking. | we are interested in training new neural networks in multi-and cross-lingual term extraction and definition retrieval settings. | neutral |
train_100683 | The aim of the survey was to identify whether the group with autism: i) experienced any barriers when reading online reviews, ii) what these potential barriers were, and iii) what automatic methods would be best suited to improve the accessibility of online reviews for people with autism. | nine control participants said they read reviews Sometimes (27.27%, n = Figure 1: In general, do you find understanding product reviews: 9) and one participant selected the option Rarely (3.03%, n = 1). | neutral |
train_100684 | For example, one of the most authoritative sources of such guidelines for people with cognitive disabilities, the European Guidelines for the Production of Easy-to-Read Information (Freyhoff et al., 1998), lists requirements that fit the profile of people with moderate to severe comprehension deficits, but not those of more highly able individuals. | very little is known about the perceptions of adults with high-functioning autism on the usefulness of specific simplification strategies. | neutral |
train_100685 | There are, in general, two major kinds of allomorphy with respect to their source: (i) phonologically conditioned, and (ii) morphologically or lexically conditioned. | words, or rather morphemes, that are affected by this process have a voiced obstruent in the final position in their UR (as in Table 6(a)), which gets devoiced unless a vowel-initial suffix follows. | neutral |
train_100686 | We also experimented with using the most frequent 10000, 20000, and 25000 words, however, the results in the crossvalidation experiments were lower. | (2018) offered a new word embedding representation model named as ELMo. | neutral |
train_100687 | (2010) extracted different features from short-texts and used these with Naive Bayes to classify them. | pointing to the weaknesses of the BoW approach, different kernels have been developed for SVM such as semantic kernels that use TF-IDF (Salton & Buckley, 1988) and its variants that apply different term weighting functions on the term incidence matrix. | neutral |
train_100688 | Naive Bayes, Random Forest, and SVM obtain better scores when morphological analysis is performed. | the grammatical and syntactic features of the turkish language pose additional challenges for short-text classification. | neutral |
train_100689 | They are obtained by using the bidirectional approach with masked language model (Taylor, 1953) in training. | in spite of these improvements, the proposed Transformer Encoder model achieves the highest scores in all metrics for tweet classification even though it does not use the morphological analysis. | neutral |
train_100690 | .⊗V j ⊗V k and U be a tensor in V k ⊗V m ⊗. | distributional semantic models, best summarised by the dictum of Firth (1957) that "You shall know a word by the company it keeps," provide an elegant and tractable way of learning semantic representations of words from text. | neutral |
train_100691 | The average combiner with variable binding is the most memory intensive since the number of arguments of the result() predicate can become large (there is an argument for each individual and event in the sentence). | table 2 summarizes the results of our experiments. | neutral |
train_100692 | We present an approach to training coarse to fine grained sense disambiguation systems in the presence of such annotation inconsistencies. | the proposed framework of learning with positive and unlabeled examples for sense disambiguation could be applied on the entire Wikipedia without any manual annotations. | neutral |
train_100693 | Table 2 shows the results of the CORE task, with runs listed in alphabetical order. | hence the task is about comparing pairs of items. | neutral |
train_100694 | Our first run uses an align-and-penalize algorithm, which extends the second approach by giving penalties to the words that are poorly aligned. | related words can have similarity scores as high as what similar words get, as illustrated by "doctor" and "hospital" in Table 1. | neutral |
train_100695 | We also added to it more than 2,000 verb phrases extracted from WordNet. | a context window of ±4 words allows us to compute semantic similarity between words with different POS. | neutral |
train_100696 | Our approach treats text pairs as structural objects which provides much richer representation for the learning algorithm to extract useful patterns. | the total number of takelab's features is 21. | neutral |
train_100697 | The same setting of (Croce et al., 2012a) has been adopted for the space acquisition. | specific phrases are filtered according to linguistic policies, e.g. | neutral |
train_100698 | • Preceding or following lemma (or word form) content word appearing in the same sentence as the target word. | the recall is low, in almost all cases less than a third of nouns in one list appear in the other. | neutral |
train_100699 | OnWN was trained on MSRpar train with LK and DK. | this makes it sensitive to the choice of training data, which ideally would have similar characteristics to the individual kernels, as well as a similar gold standard distribution to the test data. | neutral |
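The rows above are rendered as pipe-delimited table lines. As a minimal sketch (assuming, as in every row shown here, that the sentence fields themselves contain no `|` characters), a single row can be split back into its four named columns like this; `parse_row` and the example row are illustrative, not part of the dataset's own tooling:

```python
def parse_row(row: str) -> dict:
    """Split an 'id | sentence1 | sentence2 | label' table row into named fields.

    Leading/trailing pipes (as in rows ending with ' |') are tolerated.
    Assumes none of the four fields contains a literal '|'.
    """
    parts = [p.strip() for p in row.strip().strip("|").split("|")]
    if len(parts) != 4:
        raise ValueError(f"expected 4 columns, got {len(parts)}")
    return dict(zip(["id", "sentence1", "sentence2", "label"], parts))


# Example: a shortened version of one row from the table above.
record = parse_row(
    "train_100600 | Intuitively, finegrained classifications are often "
    "considered more error-prone. | providing highquality sources might "
    "lower the attention of the annotators. | neutral |"
)
print(record["id"], record["label"])
```

For bulk processing, the same split could be applied line by line, skipping the header and `---|---` separator lines before parsing.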