id (string, length 7–12) | sentence1 (string, length 6–1.27k) | sentence2 (string, length 6–926) | label (string, 4 classes) |
---|---|---|---|
train_18300 | and (Kim et al., 2014) proposed cross-lingual annotation projection approach for relation detection with parallel corpora. | our work don't require parallel corpora nor Machine Translation. | contrasting |
train_18301 | English, Chinese) corpora of relation annotation are readily available. | for most languages few (if any) such resources exist. | contrasting |
train_18302 | The first piece of work comparing methods for identifying translation pairs in comparable corpora was presented in Jakubina and Langlais (2016). | the evaluation was conducted on Wikipedia, which is a general domain corpus. | contrasting |
train_18303 | From this point CBOW is more appropriate for our task. | combining both CBOW and Skip-gram as shown by the Concat model, always improves the MAP scores. | contrasting |
train_18304 | This drop may suggest that Skip-gram concatenation is not appropriate for this configuration. | the concatenation of CBOW and Skip-gram models (the Concat approach) still improves the results as we can see for BC (BC ∪ JRC) where we move from 53% to 56.1% of MAP score and for BC (BC ∪ CC) where we increase the MAP score from 68.4% to 70.9%. | contrasting |
train_18305 | In this perspective, all the utterances in the same session is homogenous and could be composed within RNNs. | the influences of user's queries and the agent's responses are different in predicting user's emotion. | contrasting |
train_18306 | Recent works encode the sequence of utterances with recurrent neural network based encoder-decoder to generate responses (Serban et al., 2016;Shang et al., 2015). | in this work, the prediction is made with all existing utterances being known. | contrasting |
train_18307 | There are 2 million sessions in the dataset, most of which contain task-oriented dialogues. | we focus on those only including chat, and the amount of such pure chat sessions is 260,867. | contrasting |
train_18308 | The pre-training data encoding such strong correlation will makes the models ignore the context utterance and convergence to the local optimum only related to R₀. | the balanced pre-training dataset is effective in initializing the networks. | contrasting |
train_18309 | This way of specifying a dialog model using intents and corresponding system responses manually is more popular in industry than a data driven approach as it makes dialog model easy to interpret and debug as well as provides a better control to a dialog designer. | this is very time consuming and laborious and thus involves huge costs. | contrasting |
train_18310 | Similarly the agents utterances can be clustered to identify system responses. | we argue that rather than treating user utterances and agents responses in an isolated manner, there is merit in jointly clustering them. | contrasting |
train_18311 | and "What OS is installed in your machine" have no syntactic similarity and therefore may not be grouped together. | the fact that these utterances are adjacent to the similar user utterances "I am unable to start notes email client" and "Unable to start my email client" provides some evidence that the agent utterances might be similar. | contrasting |
train_18312 | Also the F1-scores on Seq2Seq based representation improve from 0.83 and 0.9 to 0.86 and 0.916 using SimCluster. | the gains are much more in case of Doc2Vec representations than Seq2Seq representations since Doc2Vec did not have any information from the other domain where as some amount of this information is already captured by Seq2Seq representation. | contrasting |
train_18313 | (2017) introduced a network-based end-to-end trainable task-oriented dialogue system, which treated dialogue system learning as the problem of learning a mapping from dialogue histories to system responses, and applied an encoder-decoder model to train the whole system. | the system is trained in a supervised fashion: not only does it require a lot of training data, but it may also fail to find a good policy robustly due to lack of exploration of dialogue control in the training data. | contrasting |
train_18314 | In the conversation, the agent asks the user a series of Yes/No questions to find the correct answer. | this simplified task may not generalize to practical problems due to the following: 1. | contrasting |
train_18315 | We know, for example, that personality scores are, by construction, Gaussian at the individual level, and that averaging these Gaussians should give a distribution of mean county-level personalities that are Gaussian. | as shown below, we find our county-level predictions to be far from a normal distribution, with some predictions lying over 10 standard deviations from the mean. | contrasting |
train_18316 | (2012) is related to our research because they also used a bootstrapping method to discover offensive language from a large-scale Twitter corpus. | their bootstrapping model is driven by mining hateful Twitter users, instead of content analysis of tweets as in our approach. | contrasting |
train_18317 | While this classifier aggressively labeled a large number of tweets as hateful, only 121,512 tweets are estimated to be truly hateful. | the supervised LSTM classifier has a high precision of around 79%, however, this classifier is too conservative and only labeled a small set of tweets as hateful. | contrasting |
train_18318 | In this example, the gist of the question is whether the chlorine will stripe the questioner's hair. | the question also contains the additional information that the questioner swims five days a week and has black hair. | contrasting |
train_18319 | Abstractive approaches, rather than selecting units, generate a summary using words not found in the input text. | existing summarization approaches, whether extractive or abstractive, do not assume a question as an input. | contrasting |
train_18320 | (2005) worked on classification of multiple-sentence questions into classes such as yes/no questions and definition questions, and attempted to extract the question sentence that was the most important in finding the correct class. | the extracted question sentence is not always a summary of the question. | contrasting |
train_18321 | In Example 1 of Table 4, the questioner accidentally spilled buttered popcorn and needs to know how to remove it. | the title "Please help!" | contrasting |
train_18322 | In this example, a summary can be generated by extracting the last sentence in the original content. | if one takes the actual title into consideration, the idiom "get rid of" in the original question can be replaced by the word "remove". | contrasting |
train_18323 | The original data contains 4,485,032 question-title pairs. | not all of them are question-summary pairs. | contrasting |
train_18324 | There was no length constraint in terms of the number of characters or words. | we assumed that a better summary would contains more focused content in a shorter output. | contrasting |
train_18325 | These examples suggest that extractive approaches are intrinsically not suitable for cases where informa-tion needs to be picked up from multiple sentences in the input. | to extractive approaches, the output of CopyNet properly resolves this; "it" is resolved by "The Simpsons" even if the model needed to use information across sentences. | contrasting |
train_18326 | The selection of a summary-worthy subset of all extracted concepts and relations was largely ignored in previous work, as many studies did not have a focus on summarization. | when dealing with larger document clusters, this step becomes inevitable. | contrasting |
train_18327 | After predicting scores for every concept and selecting the highest scoring subgraph with the ILP, we use this subgraph as the summary concept map. | this graph might contain multiple edges between certain concepts. | contrasting |
train_18328 | Both of the above approaches rely on sequential arrangement of phrasal or subsentential structures existing in the input corpus to generate summary. | an ideal abstractive multi-document summarizer should enjoy the freedom to exhibit its own writing style and to generate the sentences from scratch. | contrasting |
train_18329 | The three layers correspond to the most valuable information sources for the baseline. | the baseline draws only simple categorical features from them. | contrasting |
train_18330 | The middle-left part of Figure 3 depicts the biLSTM and its final output, the average vectors a. | to LSTMs, which are designed to capture the meaning of sequences, Convolutional Neural Networks (CNNs) are often used to produce bag-of-words-like representations. | contrasting |
train_18331 | We pick the class with the highest probability as our final result. | choosing between all classes is unnecessary because not all combinations of event type, entity type and argument type are possible. | contrasting |
train_18332 | Input to their network is the word embedding matrix of a sentence. | to , they predict triggers and arguments jointly. | contrasting |
train_18333 | As shown in Table 2 under the SRL column, these features only give a minor improvement in micro-averaged F1 for the EED sieve, but hurt performance of the other sieves and the architecture on the TimeBank-Dense test set. | we believe this may be due to overfitting, as we observe 3% and 4% gains for the ETWS and EED sieves with these features on the development set. | contrasting |
train_18334 | CAEVO uses Algorithm 1 to draw inferences about a data set D. In the original implementation, this set contained only the gold-standard labeled evaluation pairs within two sentence windows. | if D were expanded with other unlabeled data points outside of two sentence windows (for which it is easy to predict labels with high precision), the transitivity constraints in C might generate further high precision predictions on the labeled data. | contrasting |
train_18335 | Previous open Relation Extraction (open RE) approaches mainly rely on linguistic patterns and constraints to extract important relational triples from large-scale corpora. | they lack of abilities to cover diverse relation expressions or measure the relative importance of candidate triples within a sentence. | contrasting |
train_18336 | Most successful open RE approaches (Fader et al., 2011;Xu et al., 2013;Bovi et al., 2015;Bhutani et al., 2016) extract salient relational triples based on lexical or syntactic patterns. | such handcrafted or automatically learned patterns are incapable of covering diverse relation expressions (Soderland et al., 2013). | contrasting |
train_18337 | The way we rank nodes is most similar to the work of White and Smyth (2003) and Yu and Ji (2016) which generate the relative importance score of each node toward a set of preferred nodes. | they only deal with unweighted undirected graphs. | contrasting |
train_18338 | Although the system explicitly combines biological events extracted from sentences to construct a long biological process, possibly leading to the propagation of errors, they reported fairly good performance and meaningful results, considering that it is the first attempt for document-level biological information extraction. | there are quite a few systems and corpora for document-level comprehension from a similar perspective in other domains, such as news articles (Hermann et al., 2015) and children's books (Hill et al., 2016). | contrasting |
train_18339 | In the attentive reader and the static RNN decoder, we used LSTM (Hochreiter and Schmidhuber, 1997), a variant of the RNN model, and set the hidden size and dropout rate of RNN to 64 and 0.5, respectively. | the sequenceto-sequence model used GRU (Cho et al., 2014), another variant of the RNN model, and we set the hidden size of 64 and dropout rate of 0.5. | contrasting |
train_18340 | If the reader model and the sequence-tosequence model use attention as demonstrated in other studies, it shows higher performance than the model without attention. | the static RNN decoders show a different case, in that our proposed attention seems to hamper the model in exactly extracting the environment. | contrasting |
train_18341 | Overall, precision of the static RNN decoder is found better than that of the other models. | the attentive reader model shows outstanding performance in extracting all relevant environment tokens. | contrasting |
train_18342 | Although there are many answer tokens, the static RNN decoder seems to extract minimal tokens. | the majority of tokens extracted by the … | contrasting |
train_18343 | Experimental results show that our method significantly outperforms all the alternative methods (+0.013–0.318 in terms of R-precision; p ≪ 0.01, t-test). | with traditional courses that have limited numbers of students, each online course in a MOOC platform may draw more than 100,000 registrants (Seaton et al., 2014). | contrasting |
train_18344 | Because a course concept is likely to connect with other course concepts with high semantic relatedness, it tends to receive high voting scores from its course concept neighbors, compared with other vertexes, during propagation. | in practice, the propagation process is usually hampered by the overlapping problem. | contrasting |
train_18345 | TPR alleviates this problem by incorporating topic information from online encyclopedias. | tPR also favors frequent course concepts, because frequent words tend to have high connectivity in the cooccurrence graph, and thus receive high rankings using PageRank. | contrasting |
train_18346 | In the age deceiver case, the younger individuals who portray themselves as older use more 'you', 'family', 'death' and 'filler' words. | older individuals who portray themselves as younger use more internet slang 'netspeak', 'fillers', and 'swear' words. | contrasting |
train_18347 | The Real Male vs Female as Male correlation suggests that females are good at emulating males' writing. | the analysis shows that males are not as good emulating female language (see correlation of Real Female vs Male). | contrasting |
train_18348 | The main rationale for these two fields is for users to provide an accurate textual description for each query they make. | there is no method to ensure that the title or the description entered for a query are actually relevant in describing it. | contrasting |
train_18349 | First, most of the SQL snippets are relatively simple, containing at most 10 distinct tokens, as can be easily seen in Table 2. | textual descriptions are more evenly distributed, based on the number of tokens, with 2,003 of the entries in the dataset having more than 100 tokens. | contrasting |
train_18350 | On the other hand, the generated SQL queries are more often than not inaccurate and thus we have not compared the accuracy of the NNLIDB with existing solutions. | we have focused on verifying how similar the generated SQL queries are to the annotated ones using measures from machine translation (BLEU) and also precision and recall for simpler tasks, such as generating the correct table and column names in a SQL statement. | contrasting |
train_18351 | Approaches to the automatic generation of system actions, such as (Kadlec et al., 2015), have been presented to facilitate that process. | those approaches often consider only system actions that are necessary from a functional point of view. | contrasting |
train_18352 | Overall, CG statements perform well compared to HG ones. | there are cases when the rating of CG generated statements decreases for one or both of the considered cultures when compared to the HG statement. | contrasting |
train_18353 | In addition to the message conveyed in a given tweet, some information is stored for all tweets, such as location, time and user's profile. | this information is not accessible for most tweets. | contrasting |
train_18354 | In this particular case, each word will be represented by a vector of 4k continuous values. | we need to represent tweets with variable lengths based only upon the concatenated embeddings of the words they contain. | contrasting |
train_18355 | (Aw et al., 2006;Pennell and Liu, 2011). | social network service (SNS) text has a dynamic nature, and large SNS text is costly to annotate. | contrasting |
train_18356 | The basic framework of Japanese normalization is quite similar to that of English normalization. | the problem is more complicated in Japanese normalization because Japanese words are not segmented using explicit delimiters, so we have to estimate word segmentation simultaneously in the decoding step. | contrasting |
train_18357 | (2013) proposed a character-level sequential labeling method for normalization. | it handles only one-to-one character transformations and does not take the word-level context into account. | contrasting |
train_18358 | In these studies, clear word segmentations were assumed to exist. | since Japanese is unsegmented, the normalization problem needs to be treated as a joint normalization, word segmentation, and POS tagging problem. | contrasting |
train_18359 | In a previous study (Han and Baldwin, 2011), the nodes were assumed to be single words. | our method assigns a node to not only single words but also short phrases (See …). | contrasting |
train_18360 | Real-world applications include GitHub, version control in Microsoft Word and Wikipedia version trees (Sabel, 2007). | our system solves an n-to-n problem on a large scale. | contrasting |
train_18361 | Fixed-length fingerprints are created using hash functions to represent document features and are then used to measure document similarities. | the main purpose of fingerprinting is to reduce computation instead of improving accuracy, and it cannot capture word semantics. | contrasting |
train_18362 | According to the description of DTW in Section 3.1.3, the distance between two documents can be calculated using DTW by replacing each element in the feature vectors A and B with a word vector. | computing the DTW distance between two documents at the word level is basically as expensive as calculating the Levenshtein distance. | contrasting |
train_18363 | wDTW and wTED perform better than WMD especially when the corpus is large, because they use dynamic programming to find the global optimal alignment for documents. | wMD relies on a greedy algorithm that sums up the minimal cost for every word. | contrasting |
train_18364 | Fact-checking can be framed as a questionanswering (QA) task in a broad sense. | it has not been studied as intensively as other QA tasks such as factoid-style question-answering (Ravichandran and Hovy, 2002;Bian et al., 2008;Ferrucci, 2012). | contrasting |
train_18365 | The document score ds is defined by Term Frequency–Inverse Document Frequency (TF-IDF), where tf(w′, d) is the frequency of the word w′ in the document d, idf(w′) is the inverse document frequency of the word w′, and ℓ(d) is the length of d. The VQA feature uses document-local statistics (except for IDF) and counts only in the top-k search results. | in this feature, all the query words jointly contribute through the document score ds, in contrast to the case of PMI where only the pairwise relations are considered. | contrasting |
train_18366 | The difference between them is that they use the longest consecutive common subsequence between question and KB entity names to get candidates in the passive linker. | the active linker first gets the span from the question that is most likely to be a subject entity by sequential labeling, and the linker uses the labeled span to search the candidate entity from KB. | contrasting |
train_18367 | is "Excuse me, but can you tell me the way... Just go straight... You can't miss it", whose dialog act flow {3, 3} is consistent with the context test. | the response found by feature-based approach has the context "Can you direct me to Holiday inn?" | contrasting |
train_18368 | These findings are consistent with previous work (Sordoni et al., 2015;Serban et al., 2016b). | the first three columns in Table 4 show that models pretrained by OpenSubtitle converge faster, achieving lower Perplexity (PPL) but poorer BLEU scores. | contrasting |
train_18369 | To address this issue, we take inspiration from the FraCaS dataset (Cooper et al., 1996) and construct a suite of targeted datasets that separately test a system's ability to perform individual bits of interpretation such as paraphrasing, semantic role labeling, and coreference. | to the original FraCaS data set, which is relatively small and which could not support the training of purely lexical neural RTE classifiers, we pursue the strategy of automatically converting semantic classifications (i.e., human judgments about semantic properties) into labeled examples for textual entailment. | contrasting |
train_18370 | Traditionally, they rely on domain-specific lexicons (de Does and Depuydt, 2013) and characterbased errors statistics obtained from a corrected training set (Kumar and Lehal, 2016). | they have some drawbacks that limit their usefulness for specific, low-resource domains. | contrasting |
train_18371 | Previous studies in document classification attempted to address these issues by employing multilingual word embeddings, which allow direct comparisons and groupings across languages (Klementiev et al., 2012;Hermann and Blunsom, 2014;Ferreira et al., 2016). | they are only applicable when common label sets are available across languages which is often not the case (e.g. | contrasting |
train_18372 | The latter is defined as a concatenation of the hidden states for each input vector obtained from the forward GRU and the backward GRU. The same concatenation is applied for the hidden-state representation of a sentence, h. | a typical way to obtain a representation for a given word sequence at each level is by taking the last hidden-state vector that is output by the encoder. | contrasting |
train_18373 | For minibatch SGD, the number of samples per language is equal to the batch size divided by M. Multilingual document classification datasets are usually limited in size, have target categories aligned across languages, and assign documents to only one category. | classification is often necessary in cases where the categories are not strictly aligned, and multiple categories may apply to each document. | contrasting |
train_18374 | In their method, based on the alignment between a nonterminal and input words, the attention mechanism has also an important role. | since the attention is learned in an unsupervised manner, the alignment quality might not be optimal. | contrasting |
train_18375 | Since the process of creating annotated resources needs significant manual effort, SRL resources are available for a relative small number of languages such as English (Palmer et al., 2005), German (Erk et al., 2003), Arabic (Zaghouani et al., 2010) and Hindi (Vaidya et al., 2011). | most languages still lack SRL systems. | contrasting |
train_18376 | We would like to score every path in the lattice with the NMT system and then search. | this is generally prohibitively expensive because the RNN architectures in NMT do not permit recombination of hypotheses on the lattice, since NMT states encode the entire sentence history. | contrasting |
train_18377 | Word embeddings are a relatively new addition to the modern NLP researcher's toolkit. | unlike other tools, word embeddings are used in a black box manner. | contrasting |
train_18378 | If a word embedding learning algorithm wishes to model this information correctly, it has to strive to uphold this equality constraint. | its success will depend on the degree of freedom which it receives in terms of the number of dimensions. | contrasting |
train_18379 | We also use SEQ2SEQ for prediction of what comes next in a text. | there are several key differences. | contrasting |
train_18380 | Traditional language models are based on contextual word frequency in a static corpus of text. | certain types of phrases, when offered to writers as suggestions, may be systematically chosen more often than their frequency would predict. | contrasting |
train_18381 | Language models have a long history and play an important role in many NLP applications (Sordoni et al., 2015;Rambow et al., 2001;Mani, 2001;Johnson et al., 2016). | these models do not model human preferences from interactions. | contrasting |
train_18382 | This model can be used for generation: sampling from the model yields words or phrases that match the frequency statistics of the corpus. | rather than offering representative samples from h_0, most deployed systems instead sample from p(w_i) ∝ h_0(w_i)^(1/τ), where τ is a "temperature" parameter; τ = 1 corresponds to sampling based on p_0 (soft-max), while τ → 0 corresponds to greedy (arg-max) selection. | contrasting |
train_18383 | The complexity for the gradient calculations is also O(n³). | even though our vectorised version has higher complexity, in practice we see large gains in runtime. | contrasting |
train_18384 | Other studies use transliterations to segment katakana words using explicit word boundaries from the original English words (Kaji and Kitsuregawa, 2011;Hagiwara and Sekine, 2013). | as not all katakana words are transliterations, it is advantageous to use a monolingual corpus. | contrasting |
train_18385 | This is adequate for two-character words in Chinese, which comprise the majority of Chinese words (Suen, 1986), but not for potentially very long katakana words in Japanese. | to their approach, we regard each katakana term as one document and compute the inverse document frequency. | contrasting |
train_18386 | Recent researches (Sun, 2011;Qian and Liu, 2012;Zheng et al., 2013;Zeng et al., 2013;Qian et al., 2015) also focus on the development of a joint model to perform Chinese word segmentation, POS tagging, and/or informal word detection. | to the best of our knowledge, no existing system can perform word segmentation, POS tagging, and NER simultaneously. | contrasting |
train_18387 | It is also reported by that segmentation criteria in AS and CU datasets are not very consistent. | by fusing two corpora, the RNN CU06+YA can even surpass the performances of CKIP. | contrasting |
train_18388 | Their corresponding recalls are therefore equal as TP is normalised by RP, which is hard-constrained by the references. | adopting precision as the metric, S1 yields substantially higher scores as it returns much fewer PP. | contrasting |
train_18389 | For WS, it is not straightforward to compute TNR by directly normalising the true negatives (TN) by the real negatives (RN). | it can be indirectly computed via TP, PP, RP and the total number of possible output TW given a sentence. | contrasting |
train_18390 | AAS and MAS deal with many-to-many and oneto-many word alignments, respectively. | hAS (Song and Roth, 2015) is based on one-to-one word alignments. | contrasting |
train_18391 | These systems use general language models not restricted to individual application domains. | for an ASR in a larger pipeline, the expected words and phrases from users will be biased by the application domain. | contrasting |
train_18392 | Case markers of the other dependents Our model independently estimates label scores for each argument candidate. | as argued by Toutanova et al. | contrasting |
train_18393 | (2015) using a feedforward network for calculating the score of the PAS graph. | the model is evaluated on a dataset annotated with a different semantics; therefore, it is difficult to directly compare the results with ours. | contrasting |
train_18394 | Empirical studies such as (Kita and Özyürek, 2003; Kita et al., 2007) analysed speech and gesture semantics with statistical methods and show that the semantics of speech and gestures coordinate with each other. | it remains unclear how to computationally derive the semantics of iconic gestures and build corresponding multimodal semantics together with the accompanying verbal content. | contrasting |
train_18395 | This problem has theoretic roots in psycholinguistics studies such as and , which aim to understand connections between emotions and words. | emotion Analysis also has motivations from an applied perspective, being closely related to Opinion Mining (Pang and Lee, 2008). | contrasting |
train_18396 | a simple way to have a flexible non-linear model over the data. | from a GP perspective it assumes the process is infinitely mean-square differentiable. | contrasting |
train_18397 | Compared to our model, they show much lower correlation scores (their best model achieves 0.399 Pearson's r on the SemEval2007 dataset), although these are not strictly comparable since they use different data splits and do not perform cross-validation. | their approach is orthogonal to ours: combining the Matérn kernels within a multi-task GP framework can be a promising avenue for future work. | contrasting |
train_18398 | MTNA-s. To evaluate to what extent that ATE loss function can improve the performance of the ACC task, we compare MTNA with its variance MTNA-s, the loss function of which does not include that of ATE task. | this model keeps LSTM layer as a feature extractor before the convolution layers as MTNA does. | contrasting |
train_18399 | The reason is that some sentences have restaurant names as target terms. | there are around 40.1% sentences with Restaurant label that do not have annotated words in the training dataset, 41.2% in test dataset. | contrasting |
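
Every row above follows the same four-column schema: an example `id`, a pair of sentences drawn from scientific papers (`sentence1`, `sentence2`), and a discourse-relation `label` (all rows in this preview are labeled `contrasting`, though the schema allows 4 classes). Below is a minimal sketch of loading and inspecting data with this schema using the Hugging Face `datasets` library; the dataset identifier `user/contrasting-pairs` is a placeholder, since the actual Hub ID is not shown on this page.

```python
from collections import Counter

from datasets import load_dataset

# Placeholder identifier: replace with the dataset's actual Hub ID.
ds = load_dataset("user/contrasting-pairs", split="train")

# Each example is a dict with the four columns shown in the table above.
example = ds[0]
print(example["id"], "->", example["label"])
print("sentence1:", example["sentence1"])
print("sentence2:", example["sentence2"])

# The label column is a string with 4 possible classes; tally them.
print(Counter(ds["label"]))

# Keep only the 'contrasting' pairs, as in this preview.
contrasting = ds.filter(lambda ex: ex["label"] == "contrasting")
print(len(contrasting), "contrasting pairs")
```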