id (stringlengths 7-12) | sentence1 (stringlengths 6-1.27k) | sentence2 (stringlengths 6-926) | label (stringclasses 4 values) |
---|---|---|---|
train_11300 | For these experiments, we use the dataset published as part of word2vec, which consists of 8,869 semantic and 10,675 syntactic questions of this type (Mikolov et al., 2013). | word similarity measures the correlation between the similarity scores produced by a model and a gold standard created by human annotators for a given set of word pairs. | contrasting |
train_11301 | More concretely, negative values of α are beneficial for the centroid method up to a certain point, bringing an improvement of nearly 4.5 points for GloVe, and the results clearly start degrading after that ceiling. | DAM is almost unaffected by negative values of α. | contrasting |
train_11302 | for filtering unacceptable generations at application time. | fluency evaluation of NLG systems constitutes a hard challenge: systems are often not limited to reusing words from the input, but can generate in an abstractive way. | contrasting |
train_11303 | They constitute a compromise between characters and words: On the one hand, they yield a smaller vocabulary, which reduces model size and training time, and improve handling of rare words, since those are partitioned into more frequent segments. | they contain more information than characters. | contrasting |
train_11304 | As can be seen, ILP produces the best output. | NAMAS is the worst system for fluency. | contrasting |
train_11305 | (2014) predicted the fluency (which they called grammaticality) of sentences written by English language learners. | in contrast to ours, their approach is supervised. | contrasting |
train_11306 | Inducing sparseness while training neural networks has been shown to yield models with a lower memory footprint but similar effectiveness to dense models. | sparseness is typically induced starting from a dense model, and thus this advantage does not hold during training. | contrasting |
train_11307 | Also, stop words are the most frequent ones but are said to carry little information content. | table 3 confirms our initial hypothesis. | contrasting |
train_11308 | Traditional active learning (AL) methods for machine translation (MT) rely on heuristics. | these heuristics are limited when the characteristics of the MT problem change due to e.g. | contrasting |
train_11309 | At a given AL state, the algorithmic expert selects a reasonable batch from the pool D_unl via an optimisation over candidate batches, where m_b^φ denotes the underlying NMT model φ further retrained by incorporating the batch b, and B denotes the possible batches from D_unl. | the number of possible batches is exponential in the size of D_unl, hence the above optimisation procedure would be very slow even for a moderately-sized pool. | contrasting |
train_11310 | see prior work, where several heuristics for query sentence selection have been proposed, including the entropy over the potential translations (uncertainty sampling), query by committee, and a similarity-based sentence selection method. | active learning is largely under-explored for NMT. | contrasting |
train_11311 | For the Gītā dataset, the models CopyNet and PCRF-Seq2Seq report similar performances. | Sahaśranāma is a noisier dataset, and we find that CopyNet outperforms all other models by a huge margin. | contrasting |
train_11312 | These probabilities are used as features in the log-linear ranker, and therefore the inverse parser affects the ranking results, albeit implicitly. | we should point out that the unsupervised training objective is relatively difficult to optimize, since there are no constraints to regularize the latent logical forms. | contrasting |
train_11313 | (2017), and this is likely due to the different visual features and max-of-hinges loss. | our Bilingual model with the additional c2c objective performs the best for German, whereas Gella et al. | contrasting |
train_11314 | They further use a pre-trained language model as a prior for their compression model to induce their compressed output to be grammatical. | their reported results are still based on models trained on at least 500k instances of paired data. | contrasting |
train_11315 | Wang and Lee (2018) train a generative adversarial network to encode references into a latent space and decode them in summaries using only unmatched document-summary pairs. | in contrast with machine translation where monolingual data is plentiful and paired data scarce, summaries are paired with their respective documents when they exist, thus limiting the usefulness of such approaches. | contrasting |
train_11316 | The model specified above is supplied only with an unordered set of words with which to construct a shorter sentence. | there are typically many ways of ordering a given set of words into a grammatical sentence. | contrasting |
train_11317 | We found that ROUGE scores can be fairly uncorrelated with human evaluation, and in general can be distorted by quirks of the data set or model outputs, particularly pertaining to length, formatting, and handling of special tokens. | human evaluation can be more sensitive to comprehensibility and relevancy while being more robust to rewording and reasonable ambiguity. | contrasting |
train_11318 | Word embeddings are powerful tools that facilitate better analysis of natural language. | their quality highly depends on the resource used for training. | contrasting |
train_11319 | The intuition behind including only higher match-count n-grams in the training data is that they may be more valid segments of the raw text, as they appear several times in the same order. | we naturally lose information by pruning low match-count n-grams. | contrasting |
train_11320 | Having answered the first question, we will be able to quantify the effect of the fragmentation. | it is necessary to study the effect of the second parameter as well, in order to quantify the applicability of n-grams for embedding comprehensively. | contrasting |
train_11321 | In Factoid QA like air traffic information systems (ATIS) or dialogue systems, we answer the questions by extracting an entity from a structured database like relational databases or knowledge graphs. | in non-factoid systems, answers are extracted mostly from unstructured data like Wikipedia. | contrasting |
train_11322 | In contrast, (Xie and Eric, 2017) explicitly modeled candidate answers as sequences of constituents by encoding individual constituents using a chain-of-trees LSTM (CT-LSTM) and tree-guided attention mechanism. | their formulation of constituents is more complicated than ours and as we will see, a direct use of constituents as answer chunks is much less complicated and yields better results. | contrasting |
train_11323 | One recent work (Yao, 2015) manually assigns a single numerical value to encode and derive phonetic similarity. | this single-encoding approach is inaccurate since the phonetic distances between Pinyins are not captured well in a one-dimensional space. | contrasting |
train_11324 | Assuming twenty word pairs are provided as context per pair, the task quickly blows up to eighteen thousand annotations. | we observe that the phonetic similarity of Pinyin is greatly impacted by the pronunciation methods and the place of articulation - this allows us to improve the accuracy and simplify the annotation task. | contrasting |
train_11325 | In turn, MED struggles with representing accurate phonetic distances between initials, since most initials are of length 1, and the edit distance between any two characters of length 1 is identical. | DIMSIM encodes initials and finals separately, and thus even a 1-dimensional encoding (DIMSIM1) outperforms the other baselines. | contrasting |
train_11326 | However, as features of articulatory phonetics are manually assigned, these algorithms fall short in capturing the perceptual essence of phonetic similarity through empirical data (Kessler, 2005). | DIMSIM achieves high accuracy by learning the encodings both from high quality training data sets and linguistic Pinyin features. | contrasting |
train_11327 | Training frameworks are then used to learn the similarity. | the phonetic similarity used in these systems cannot be applied to Chinese words since Pinyin has its own specific characteristics, which do not easily map to English, for determining phonetic similarity. | contrasting |
train_11328 | News editorials are said to shape public opinion, which makes them a powerful tool and an important source of political argumentation. | rarely do editorials change anyone's stance on an issue completely, nor do they tend to argue explicitly (but rather follow a subtle rhetorical strategy). | contrasting |
train_11329 | As such, news editorials represent an important resource for research on argument mining (Mochales and Moens, 2011) and debating technologies (Rinott et al., 2015). | a single news editorial rarely changes the stance of a reader completely. | contrasting |
train_11330 | We captured our annotators' personality traits, too. | we primarily focus on nine political profiles from left to right (Doherty et al., 2017) in order to represent prior stance. | contrasting |
train_11331 | Accordingly, it meets our conditions of being of high argumentation quality. | figure 4 shows an excerpt of an editorial on global warming from the corpus. | contrasting |
train_11332 | Entity linking in long text has been well studied in previous work. | short text entity linking is more challenging since the texts are noisy and less coherent. | contrasting |
train_11333 | Several studies (Huang et al., 2014) investigate collective tweet entity linking by pre-collecting and considering multiple tweets simultaneously. | multiple texts are not always available for collection and the process is time-consuming. | contrasting |
train_11334 | One straightforward way to combine multiple semantic matching signals is to apply a linear regression layer to learn a static weight for each matching signal (Francis-Landau et al., 2016). | we observe that the importance of different signals can be different case by case. | contrasting |
train_11335 | In the first example, the correct answer is 'Justin Trudeau' which contains the words of 'Canada' and 'trump' in its entity description. | m-CNN fails to capture this concrete matching information, since the concrete information of text might be lost after the convolution layer and maxpooling layer. | contrasting |
train_11336 | However, the soft-TF information in the descriptions of the two entities is similar. | m-CNN captures the whole meaning of the text and links the mention to the correct entity. | contrasting |
train_11337 | Note that the human studies and automatic evaluation are complementary to each other: while MTurk annotators are good at judging how natural and coherent a response is, they are usually not experts in the Ubuntu operating system's technical details. | automatic evaluation focuses more on the technical side (i.e., whether key activities or entities are present in the response). | contrasting |
train_11338 | Later approaches (Shang et al., 2015;Luong et al., 2015) applied Recurrent Neural Network (RNN)-based encoderdecoder architectures. | dialogue generation is considerably more difficult than language translation because of the wide possibility of responses in interactions. | contrasting |
train_11339 | A conversation between a pair of users often stops when the problem has been solved. | they might continue having a discussion which is not related to the topic. | contrasting |
train_11340 | If it attaches to the verb phrase saw a man, the first interpretation arises. | if it attaches to the noun man, the second interpretation is given. | contrasting |
train_11341 | On the other hand, if it attaches to the noun man, the second interpretation is given. | if the structural information is lost, we would have no way to disambiguate the two readings. | contrasting |
train_11342 | TreeLSTM achieves the state-of-the-art performance among the tree-structured models in various tasks, including natural language inference and sentiment classification. | there are non-tree-structured models on the market that outperform TreeLSTM. | contrasting |
train_11343 | To recapitulate, TreeRNN and TreeLSTM reflect the principle of compositionality but cannot capture the multiplicative interaction between two expressions. | CMS incorporates multiplicative interaction but violates the principle of compositionality. | contrasting |
train_11344 | As in CMS, the primary mode of semantic composition is matrix multiplication. | LMS improves on CMS in that it avoids associativity. | contrasting |
train_11345 | Topics tend to correspond to salient features, and are typically labelled with the most probable words according to the corresponding distribution. | while LDA only uses bag-of-words (BoW) representations, our focus is specifically on identifying and improving features that are modelled as directions in semantic spaces. | contrasting |
train_11346 | For the feature represented by the cluster {steep, climb, slope}, the top ranked object mountain is clearly relevant. | the next two objects - landscape and national park - are not directly related to this feature. | contrasting |
train_11347 | with any facts that relate to food but definitely not to geographical locations, sports, etc. | this assumption is not as damaging as long as P(t | s, z = f) is almost zero for the z for which P(z = f | s) should be ignored - the multiplication would result in zero anyway. | contrasting |
train_11348 | For PROFILEMEMORY+ trained only on PersonaChat data, all types of sentences have similar effectiveness in predicting personality. | after joint learning with OpenSubtitles, only sentences from PersonaChat (which are most relevant to personalities) are able to predict noticeably more accurately than other sentences. | contrasting |
train_11349 | Interestingly enough, there is no obvious correlation between how engaging the dialogue has been perceived and simple metrics like the length of the response or the number of asked questions. | it is strongly correlated with DISCOVERYSCORE, indicating that it indeed can be used as one of the automatic metrics for dialogue quality. | contrasting |
train_11350 | Some features are so distinctive that the model can learn them easily. | a sentence may have more than one feature that can contribute to class prediction. | contrasting |
train_11351 | Given a source sentence, human translators are able to produce a set of diverse and reasonable translations. | although beam search for SEQ2SEQ models is able to generate various candidates, the final candidates often share the majority of their tokens, except a few trailing ones. | contrasting |
train_11352 | Consider the trees tCO and t13 in Table 1. tCO is a supertag that does not use adjunction (this type of supertag is called an initial tree). | t13 modifies an internal VP node in another supertag (this type of supertag is called an auxiliary tree). | contrasting |
train_11353 | Deep neural networks have improved the accuracy of various natural language processing (NLP) tasks by performing representation learning with massive annotated datasets. | the annotations in NLP depend on the target language as well as the task, and it is unrealistic to prepare such extensive annotated datasets for every pair of language and task. | contrasting |
train_11354 | While the nearest neighbors of "excellent" and "terrible" are not semantically close, they all indicate positive and negative polarities in the respective domains. | the nearest neighbors of "economic" are noisy as they do not contribute to the task. | contrasting |
train_11355 | Another related task is Cross-lingual Lexical Substitution (Sinha et al., 2009): the model must provide plausible target language translations for the source language lexical item in the source language context. | our BTSR task: (1) directly evaluates token-level word representations without the need to predict sense labels from a sense inventory and (2) it contextualizes both the source query and the target candidates ensuring full sense disambiguation. | contrasting |
train_11356 | We follow these prior works in working with the CzEng, a Czech-English dataset (Bojar et al., 2016b), due to its size, diverse domain coverage, and rich syntactic variations, and to allow for a direct comparison in methodologies. | we propose a new approach to paraphrase generation designed to increase paraphrastic diversity, using a multi-step process: the first part of the pipeline generates a large number of candidate paraphrases through a random process, and the second part whittles them down to a much shorter list. | contrasting |
train_11357 | Some semantic changes during paraphrasing, especially omission, are not well-reflected by the (forward) probability p_generate from the generating model. | a model running in the other direction can penalize this omission, as found by Goto and Tanaka (2017). | contrasting |
train_11358 | (2017), text-based retrieval models often handle misspellings poorly. | speech-based models are unlikely to suffer from similar problems because they inherently must deal with variation in the expression of words and utterances. | contrasting |
train_11359 | If we assume that S_X and S_Y are similar to each other in the LM's representation space, then A(Y | X) > 0 - i.e., encountering sentences with S_X causes the LM to assign a higher probability to sentences with S_Y. | if we assume that S_X and S_Y are unrelated to each other, then A(Y | X) = 0 - i.e., encountering sentences with S_X does not cause the LM to change its probability for sentences with S_Y (A is shorthand for adaptation). | contrasting |
train_11360 | Unlike sentences with coordination, sentences with different types of RCs differ from each other at a surface level (see Table 1). | at a more abstract level they all share a common property: a gap. | contrasting |
train_11361 | Our analyses so far have demonstrated that sentences that belong to linguistically interpretable classes (e.g., sentences that match in reduction) are more similar to each other in the LMs' representation space than they are to sentences that do not belong to those classes (e.g., sentences that do not match in reduction). | it is unclear what properties of the sentences are driving this similarity between members of the class. | contrasting |
train_11362 | There was an increase in accuracy as the number of hidden units increased (see Figure 5b). | the similarity between object RCs and other types of RCs did not significantly correlate with agreement prediction; we therefore did not find any evidence for the hypothesis mentioned above. | contrasting |
train_11363 | We hypothesized that models' accuracy on subject verb agreement when preceded by object RCs would increase as the similarity between object RCs and the other types of RCs increased. | we did not find evidence for this. | contrasting |
train_11364 | COCO, and in some cases reportedly surpass human-level performance as measured by n-gram based evaluation metrics (Bernardi et al., 2016). | recent work has revealed several caveats. | contrasting |
train_11365 | Therefore, one should not expect a model to generate a single caption with the concepts in a pair. | a model can generate a larger set of K captions using beam search or diverse decoding strategies. | contrasting |
train_11366 | (2019)) contain distributional information obtained from large-scale textual resources, which may improve generalization performance. | we do not use them for this task because the resulting model may not have the expected paradigmatic gaps. | contrasting |
train_11367 | (2017), which is trained with a generative adversarial objective in order to generate more diverse captions. | these types are more equally distributed in the captions generated by BUTR+RR, as shown by the higher mean segmented type-token ratio (TTR_1) and bigram type-token ratio (TTR_2). | contrasting |
train_11368 | A major strength of these learned word embeddings is that they are able to capture useful semantic information that can be easily used in other tasks of interest such as semantic similarity and relatedness between pairs of words (Mikolov et al., 2013a; Pennington et al., 2014; Wilson and Mihalcea, 2017) and dependency parsing (Chen and Manning, 2014; Dyer et al., 2015). | these models treat names and entities as no more than the tokens used to mention them. | contrasting |
train_11369 | While the principles of federated learning are fairly generic, its methodology assumes that the underlying models are neural networks. | virtual keyboards are typically powered by n-gram language models for latency reasons. | contrasting |
train_11370 | They share many similarities with the vector space embeddings that are commonly used in natural language processing. | rather than representing entities in a single vector space, conceptual spaces are usually decomposed into several facets, each of which is then modelled as a relatively low-dimensional vector space. | contrasting |
train_11371 | Their approach relies on feature selection methods to find subsets of features that are predictive of particular class labels, based on a set of labelled training examples. | our focus in this paper is on unsupervised methods, as suitable training data is often not available. | contrasting |
train_11372 | horror and zombie) then the corresponding feature directions d a and d b will also be similar. | for paradigmatically similar words, such as horror and comedy, this should not be the case. | contrasting |
train_11373 | Looking more closely at the results of our main method IncAgg, it is interesting to note that large improvements are obtained for depth-1 decision trees, which shows that our facet subspaces make it easier to identify features that correspond to the categories from the corresponding classification problems. | large improvements can also be seen for SVMs, which shows that the actual decomposition of the space is also helpful. | contrasting |
train_11374 | The choice of linking vowel is partly determined by vowel harmony: back vowel stems select a or o whereas front vowel stems select e or ö. | for back vowel stems, it is largely unpredictable whether a or o is used (Siptár and Törkenczy 2000:224f., Vago 1980:110f.) | contrasting |
train_11375 | Case syncretisms in inanimate (i.e., non-personal) nouns are found in many Slavic languages. | animacy is an inherent feature of nouns and cannot be predicted from the form of the lemma alone. | contrasting |
train_11376 | As a result, our final sample of twelve languages only includes two major language families, Indo-European and Uralic, the latter represented by Finnish and Hungarian. | this sample has some degree of grammatical diversity. | contrasting |
train_11377 | Bilingual word embeddings have been widely used to capture the correspondence of lexical semantics in different human languages. | the cross-lingual correspondence between sentences and words is less studied, even though this correspondence can significantly benefit many applications such as cross-lingual semantic search and textual inference. | contrasting |
train_11378 | Recently, language models (LMs) or language representation models are widely used in natural language understanding (NLU) tasks. | these LMs are usually trained on large unlabeled text corpora, while the finetuning process simply takes words or wordpieces as model input. | contrasting |
train_11379 | The researchers at Microsoft (Song et al., 2011) applied a big and rich probabilistic knowledge base to machine learning algorithms, and got significant improvement in terms of tweet clustering accuracy. | such a method needs huge human and material resources to build up a high-quality and extremely wide-coverage knowledge base. | contrasting |
train_11380 | FastText employed n-grams, and thus it could predict zero-shot word embeddings. | n-gram is an arbitrary method of word segmentation. | contrasting |
train_11381 | Little performance boost is shown on SciTail, probably because sentences in SciTail are more focused on the expressions of concepts and knowledge, and thus key words are more about verbs and nouns rather than named entities. | unexpectedly, applying NPB-BPE* to SST-2 gains a 0.6% absolute increase. | contrasting |
train_11382 | The parameters in GPT are all the same for all wordpiece embeddings in the vocabulary. | linguistic features contain different levels of information compared with words and wordpieces. | contrasting |
train_11383 | Despite parallel syntax and overlapping vocabulary, the sentences above vary in numerous aspects of meaning: • The NPs her picture and the wall denote entities that stand in a certain locative relation to each other, as signaled by the preposition on. | • the relation between her speech (which is an event, not an entity) and security is a different one, TOPIC, despite being signaled by the same preposition. | contrasting |
train_11384 | Thus, the word may evoke 'nuclear disaster' in a reader in 2011 (e.g., driven by the Fukushima incident). | in 2012 the 'storm' sense may be more salient (e.g., driven by Superstorm Sandy). | contrasting |
train_11385 | They enable us to investigate how word meaning and relatedness between words change over time. | as it may take some time until an event's name is determined and referred to in newspapers, the paper's text may not have meaningful embeddings for those events. | contrasting |
train_11386 | Training an NLI model in this end-to-end manner assumes that any inference type involved in the sentence-level decision may be learned from the training data. | recent work created challenge datasets which show that these models - when trained on the original NLI datasets - fail when they need to make inferences pertaining to certain linguistic phenomena, often ones which are not sufficiently represented in the training data. | contrasting |
train_11387 | The TEED problem is essentially the same as that of Parallel Corpus Filtering (PCF), discussed in the previous section. | the usage scenario is quite different: in PCF, one is typically dealing with a very large collection of segment pairs, only a fraction of which are true translations; the PCF task is then to filter out pairs which are not proper translations, possibly with some tolerance for pairs of segments that do share partial meaning. | contrasting |
train_11388 | Globally, YiSi-2 clearly performs best at this task when using BWE's trained on domain-specific parallel data (PSC.bivec), even when there are very limited quantities of such data, as is the case here. | BERT models perform comparably to vector-mapped BWE's trained with in-domain data (PSC.vecmap), and substantially better than BWE's trained on large quantities of generic, out-of-domain parallel data (WMT). | contrasting |
train_11389 | Recurrent neural network grammars (RNNGs) generate sentences using phrase-structure syntax and perform very well in terms of both language modeling and parsing performance. | since dependency annotations are much more readily available than phrase structure annotations, we propose two new generative models of projective dependency syntax, so as to explore whether generative dependency models are similarly effective. | contrasting |
train_11390 | This has already been explored in (Gu et al., 2018) in the context of Machine Translation. | our problem is simpler because the size of the output is always twice the size of the input, in other words we do not have to estimate the size of the output. | contrasting |
train_11391 | Compared to joint parsing systems working on both constituents and dependencies, our approach doesn't require external linguistic knowledge such as head percolation rules. | since derivations don't add new information, but merely offer a new vision of the problem, the potential accuracy gain is lower. | contrasting |
train_11392 | (2008) report the results of experiments which show that agreement between annotators is difficult to achieve, casting doubts on the reliability of the Project's codes. | in similar experiments, Lacewell and Werner (2013) report greater inter-annotator agreement, and claim that with ongoing training, annotators can produce reliable labels. | contrasting |
train_11393 | Some existing work alleviates this problem by directly incorporating coverage or fertility mechanisms into an NMT model (Tu et al., 2016; Feng et al., 2016; Kong et al., 2019). | the problem is that attention-weight-based coverage calculation for NMT is insensitive to translation errors and sometimes even inaccurate. | contrasting |
train_11394 | A policy gradient method is leveraged to co-train the NMT model and the discriminator. | those approaches rarely take account of translation adequacy. | contrasting |
train_11395 | Many linguists have formulated various constraints to define a general rule for code-switching (Poplack, 1978, 1980; Belazi et al., 1994). | these constraints cannot be postulated as a universal rule for all code-switching scenarios, especially for languages that are syntactically divergent (Berk-Seligson, 1986), such as English and Mandarin since they have word alignments with an inverted order. | contrasting |
train_11396 | Difficulties in handling LDD have motivated the development of syntax-based MT (Yamada and Knight, 2001), which can effectively represent reordering at the phrase level, such as when translating between VSO and SOV languages. | syntax-based MT models remain limited in their ability to map between arbitrarily different word orders (Sun et al., 2009; Xiong et al., 2012). | contrasting |
train_11397 | String-similarity metrics against a reference are known to capture only partial and coarse-grained aspects of the task (Callison-Burch et al., 2006), but are still the common practice in various text generation tasks. | their opaqueness and difficulty to interpret have led to efforts to improve evaluation measures so that they will better reflect the requirements of the task (Anderson et al., 2016; Sulem et al., 2018; Choshen and Abend, 2018b), and to increased interest in defining more interpretable and telling measures (Lo and Wu, 2011; Hodosh et al., 2013; Choshen and Abend, 2018a). | contrasting |
train_11398 | The size of the sets allows using any MT evaluation measures to measure performance, and is thus a much more scalable solution than manual inspection, as is commonly done in challenge set approaches. | an automatic methodology has the side-effect of being noisier, and not necessarily selecting the most representative sentences for each phenomenon. | contrasting |
train_11399 | Preposition stranding is common in English and other languages such as Scandinavian languages or Dutch (Hornstein and Weinberg, 1981). | in German, it is not a part of standard written language (Beermann and Ik-Han, 2005), although it does (rarely) appear (Fanselow, 1983). | contrasting |
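A minimal sketch of how rows shaped like the table above could be loaded and inspected with the Hugging Face `datasets` library. The dataset identifier below is a placeholder assumption, since the actual Hub ID is not shown on this page.

```python
# Minimal sketch: load and inspect rows with the schema shown above.
# The dataset ID is a hypothetical placeholder -- substitute the real Hub ID.
from datasets import load_dataset

dataset = load_dataset("your-org/contrasting-pairs", split="train")

# Each row carries: id (string, 7-12 chars), sentence1 (string, up to ~1.27k
# chars), sentence2 (string, up to ~926 chars), and label (one of 4 classes).
for row in dataset.select(range(3)):
    print(row["id"], "->", row["label"])
    print("  sentence1:", row["sentence1"][:80])
    print("  sentence2:", row["sentence2"][:80])

# Keep only rows labeled "contrasting" (the label shown in this excerpt).
contrasting = dataset.filter(lambda row: row["label"] == "contrasting")
print(len(contrasting), "contrasting pairs")
```

Since `label` is one of four classes, the same `filter` call works for selecting any of the other relation types in the full dataset.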