id (string, 7-12 chars) | sentence1 (string, 6-1.27k chars) | sentence2 (string, 6-926 chars) | label (4 classes) |
---|---|---|---|
train_9600 | All the above-mentioned approaches of learning task-specific affective embeddings (Tang et al., 2014;Tang et al., 2016;Felbo et al., 2017) rely on tweets data obtained from Twitter, automatically labeled using emoticons. | tweets data do not generalize well to texts from other domains such as blogs, narratives, etc. | contrasting |
train_9601 | A corpus analysis by Gkatzia and Mahamood (2015) shows that intrinsic human-based measures are used to a much greater degree compared to extrinsic human-based evaluations, although increasingly more effort regarding the latter evaluation can be observed in the past few years (Gkatzia et al., 2017;Goldstein et al., 2017;Ramos-Soto et al., 2017, for instance) These extrinsic task-focused evaluations have traditionally been regarded as the type of evaluation that provides the most meaningful results. | it can be an expensive and timely undertaking to execute such an evaluation (Reiter and Belz, 2009). | contrasting |
train_9602 | They did not find significant behavioral differences between people that received a tailored vs. non-tailored letter. | it might be argued that such behavioral changes are difficult to achieve with only a letter. | contrasting |
train_9603 | The results showed that the language use of the human-written texts was found to be more fluent and easy to read, as well as more clear and understandable. | the computer-generated texts gave a better overview of the match-data it was based on. | contrasting |
train_9604 | Another reason to think that the PASS-generated texts are effective in their goal is the fact that the intended tailoring was correctly identified by participants during evaluations, to a similar degree as the tailoring in human-texts. | the second study did not show any effects of tailoring on perceived text quality. | contrasting |
train_9605 | This model facilitates the development of machine learning approaches and the automatic evaluation of performances. | as criticized by (Jia and Liang, 2017), a machine can succeed in a task of this kind by remembering and recalling linguistic patterns that are prominent in a target dataset. | contrasting |
train_9606 | Their approach can be contrasted with ours, in that they modify text passages, whereas we propose to incorporate NAQs. | both share the same objective of extending the conventional MRC framework. | contrasting |
train_9607 | Machine understanding of language is difficult to define and accomplish. | we must approach this issue by developing a computational mechanism, along with the relevant resources. | contrasting |
train_9608 | During the forward pass the GRL computes the identity function and feeds its input to a shallow Multi-Layer Perceptron (MLP) style classifier. | during back-propagation the gradient of the style classifier is flipped in sign. | contrasting |
train_9609 | For the sequence-to-sequence models, → is the primary indicator of successful obfuscation-by-transfer. | ← gives some indication how much the output is still related to the original. | contrasting |
train_9610 | Adversary Impact (∆ acc) We compare the accuracy of the adversary on the generated sentence to that of the original to assess obfuscation strength. | as our goal is to keep the adversary's performance level close to chance, we define ∆ accuracy = accuracy − p where p is majority baseline. | contrasting |
train_9611 | Looking at the target BLEU and METEOR, the sequence-to-sequence model without the target token generates sentences that are closer to source than they are to the target; and achieves low scores overall, with the sentences being quite far off based on WMD. | note that this is many-to-many translation without any signal regarding the target, given languages with largely the same vocabulary. | contrasting |
train_9612 | Recent work also explores how to perform style transfer without parallel data. | this line of work considers transformations that alter the original meaning (e.g., changes in sentiment or topic), while we view style transfer as meaning-preserving. | contrasting |
train_9613 | Other work has focused on specific realizations of stylistic variations, such as T-V pronoun selection for translation into German (Sennrich et al., 2016a) or controlling voice (Yamagishi et al., 2016). | we adopt the broader range of style variations considered in our prior work, which introduced the FSMT task : in FSMT, the MT system takes a desired formality level as an additional input, to represent the target audience of a translation, which human translators implicitly take into account. | contrasting |
train_9614 | It is a compromise to an alternative behavior of restarting over again from the root, which causes many non-termination problems and unpredictable behavior. | two alternatives still being considered as options at the command line are: to resume from the leftmost child of the node matching the primary node specifier (which usually is the primary placeholder, though not always); and also resuming from the node which would be the next, in a preorder left-to-right traversal, after the whole subtree substituted at primary placeholder was visited. | contrasting |
train_9615 | Of course there are other ways of doing this. | this greatly simplifies the transductions that have to eliminate intermediate nodes. | contrasting |
train_9616 | We are not going to exhaust these aspects here. | it is important to notice that the definition and use of placeholders interact very dangerously with the logical operators. | contrasting |
train_9617 | Pichotta and Mooney (2016a) showed that the LSTM-based event sequence model outperformed previous co-occurrence-based methods for event prediction. | this line of work build their models based on discrete verbs and tokens, which is far from being a complete sentence or a story. | contrasting |
train_9618 | (2017) also developed an end-to-end model for future subevent prediction. | they use large-scale news corpus as training data, which is quite noisy and far from being reasonable stories. | contrasting |
train_9619 | Because this task essentially has no gold-standard answers, and any reasonable story ending can be the right one. | purely MLE trained Seq2Seq model tends to generate frequent words or phrases in the test stage, which is a well known intractable obstacle. | contrasting |
train_9620 | (2017) applied adversarial networks on this task, which is most similar to our work. | all these studies put their focuses on choosing the correct story ending through discriminative approaches. | contrasting |
train_9621 | Generating texts conditioned on the records is a non-trivial problem because a sentence could likely contain several records. | there have been few works explicitly addressing this issue due to the lack of challenging datasets. | contrasting |
train_9622 | The special placeholder could be "<data>". | in this specific task, we make a distinction between entities and numeric values, and define two placeholders, "<entity>" and "<number>". | contrasting |
train_9623 | There are more than 7 repetitions in this short text. | in our model, the delayed copy network avoided such problem since it is actually not based on language models. | contrasting |
train_9624 | These constraints make delete-based sentence compression a relatively easier task. | in spite of the strong ability of deleting undesired words, delete-based models are not able to rephrase the words, which is far from human sentence compression. | contrasting |
train_9625 | Due to the difficulty of abstractive sentence compression, there was only a limited number of work on the task (Cohn and Lapata, 2008;Cohn and Lapata, 2013;Galanis and Androutsopoulos, 2011;Coster and Kauchak, 2011a). | with the recent success of the sequence-to-sequence (Seq2Seq) model, the task of abstractive sentence compression has become viable. | contrasting |
train_9626 | These abstractive models (which will be termed generate-based models hereafter) have the ability to reorder words or rephrase. | none of these models consider explicit word deletion. | contrasting |
train_9627 | (2002), and is usually used for automatic evaluation of statistical machine translation systems. | it can also be used for evaluating sentence compression task (Napoles et al., 2011). | contrasting |
train_9628 | (2016) further improved the model with Recurrent Neural Networks. | both works used vocabularies of fixed size for target sentence generation. | contrasting |
train_9629 | It is beneficial for models to focus on nearby context words. | as high level semantic concepts of terms, aspects usually have more generalizable representations. | contrasting |
train_9630 | For example, the term "sandwich" is surely about the aspect "food". | existing neural network based methods fail to utilize the relevance between the aspects and the terms as very few datasets are annotated with both aspects and terms. | contrasting |
train_9631 | 10 Table 3: Examples of improved classification upon the addition of author profiling features (AUTH). | a number of abusive tweets still remain misclassified despite the addition of author profiling features. | contrasting |
train_9632 | Review text has been widely studied in traditional tasks such as sentiment analysis and aspect extraction. | to date, no work is towards the end-to-end abstractive review summarization that is essential for business organizations and individual consumers to make informed decisions. | contrasting |
train_9633 | (ii) According to what we observe, summary styles and words in different categories can significantly vary. | existing methods apply a uniform model to generate text summaries for the source documents in different categories, which easily miss or under represent salient aspects of the documents. | contrasting |
train_9634 | Here, we set γ 1 = 0.2, γ 2 = 0.8. | the maximum likelihood estimation (MLE) method suffers from two main issues. | contrasting |
train_9635 | In that case, the computed attention weights rely entirely on the semantic associations between context words and the target. | this may not be sufficient for differentiating opinions words for different targets. | contrasting |
train_9636 | As shown in example 1), the sentence holds a positive sentiment on atmosphere, but expresses no specific opinion on drinks. | affected by the word perfect, the predicted sentiment towards drinks is positive. | contrasting |
train_9637 | The main finding we see is that SSWE by themselves are not as informative as W2V vectors which is different to the findings of . | we agree that combining the two vectors is beneficial and that the rank of methods is the same in our observations. | contrasting |
train_9638 | A collapsed Generator is not capable of providing meaningful rationales since it extracts the same rationales repeatedly regardless of the context. | as shown in Figure 1, a collapsed Generator does not always lead to poor classification performance when compared to a finely converged Generator. | contrasting |
train_9639 | The objective of GAN is to jointly train a Generator and Discriminator, where the loss for Discriminator and the reward for Generator comes from failed attempts to determine ground-truths from the emulated results created by Generator. | the adversary process in GAN is the major source of mode collapse, hence further extensions of GAN aimed towards addressing this issue. | contrasting |
train_9640 | (2016) first attempt to adopt multi-lingual transfer learning for RE. | both of these works learn predictive models on a new language for existing KBs, without fully leveraging semantic information in text. | contrasting |
train_9641 | Second, we want to explore the effectiveness of text descriptions in different relation classification frameworks. | it should be noted that the BRCNN-based encoder cannot be used in the description representation learning. | contrasting |
train_9642 | "Pointer-Generator-all" has 99.6% of unigrams, 95.2% bigrams, and 87.2% trigrams contained in the source documents (DUC-04). | the ratios for human summaries are 85.2%, 41.6% and 17.1%, and for "AMRSumm-Clst" the ratios are 84.6%, 31.3% and 8.4% respectively. | contrasting |
train_9643 | The information overlap between the documents from the same topic makes the multi-document summarization more challenging than the task of summarizing single documents. | in case of multi-document summarization where source documents usually contain similar information, the extractive methods would produce redundant summary or biased towards specific source document (Nayeem and Chali, 2017a). | contrasting |
train_9644 | MSC is a text-to-text generation process in which a novel sentence is produced as a result of summarizing a set of similar sentences originally called sentence fusion (Barzilay and McKeown, 2005). | lexical paraphrasing aims at replacing some selected words with other similar words while preserving the meaning of the original text. | contrasting |
train_9645 | (Boudin and Morin, 2013) improved Filippova's approach by re-ranking the fusion candidate paths according to keyphrases to generate more informative sentences. | grammaticality is sacrificed to improve informativity in these works (Nayeem and Chali, 2017b). | contrasting |
train_9646 | (Yasunaga et al., 2017) is limited to extractive summmarization. | (Li et al., 2017a) is limited to compressive summary generation using an ILP based model, and there is no explicit redundancy control in the summary side. | contrasting |
train_9647 | Dataset Models R-1 R-2 R-WE-1 R-WE-2 LexRank (Erkan and Radev, 2004) 35.95 7.47 36.91 7.91 Submodular (Lin and Bilmes, 2011) 39.18 9.35 40.03 9.92 RegSum 38.57 9.75 39.12 10.33 ILPSumm (Banerjee et al., 2015) 39.24 11.99 40.21 12.08 PDG* (Yasunaga et al., 2017) 38 TextRank (Mihalcea and Tarau, 2004) 27.56 6.12 28.20 6.45 Opinosis (Ganesan et al., 2010) 32.35 9.13 33.54 9.41 Biclique (Muhammad et al., 2016) 33.03 8.96 33.91 9.25 ParaFuse doc (ours) 33.86 9.74 34.46 10.09 Table 3: Results on DUC 2004 (Task-2) and Opinosis 1.0 Evaluation Metric: We evaluate our summarization system using ROUGE 11 (Lin, 2004) on DUC 2004 (Task-2, Length limit (L) = 100 Words) and Opinosis 1.0 (L = 15 Words). | rOUGE scores are unfairly biased towards lexical overlap at surface level. | contrasting |
train_9648 | Both decision makers have access to the image and implicitly to the linguistic dialogue history: DM1 exploits the dialogue encoding learned by QGen's LSTM, which is trained to record information relevant for generating a followup question. | dM2 leverages the dialogue encoding learned by the Guesser's LSTM, which is trained to capture the properties of the linguistic input that are relevant to make a guess. | contrasting |
train_9649 | While DM1 tends to make a decision to stop asking questions and guess in easier games, surprisingly DM2 is more likely to make a guessing decision when the image complexity is higher (note the contrasting tendency of the coefficients in the last column of Table 2 for DM2). | similarly to DM1, once DM2 decides to guess (decided games in Table 2), the simpler the image the more likely the model is to succeed in picking up the right target object. | contrasting |
train_9650 | In this paper, we study the problem of data augmentation for language understanding in taskoriented dialogue system. | to previous work which augments an utterance without considering its relation with other utterances, we propose a sequence-to-sequence generation based data augmentation framework that leverages one utterance's same semantic alternatives in the training data. | contrasting |
train_9651 | Success has been achieved with data augmentation on a wide range of problems including computer vision (Krizhevsky et al., 2012), speech recognition (Hannun et al., 2014), text classification (Zhang et al., 2015), and question answering (Fader et al., 2013). | its application in the task-oriented dialogue system is less studied. | contrasting |
train_9652 | To learn the seq2seq model, it's straight-forward to use each pair of utterances in C s as training data for the model. | the goal of our paper is to generate diverse augmented data and the usefulness of less diverse pair (like give me the <distance> route to <poi type> and find me the <distance> route to <poi type> in Figure 1) is arguable. | contrasting |
train_9653 | If we don't filter the alike instances when training the seq2seq model, the drop of performance is a 0.65 F-score. | larger number of new utterances with smaller edit distances are yielded which indicates that more noise is introduced when the training data of the seq2seq model is not properly filtered. | contrasting |
train_9654 | Beyond these classic approaches, adding noise to the image, randomly interpolating a pair of images (Zhang et al., 2018) are also proposed in previous works. | these signal transformation approaches are not directly applicable to language because order of words in language may form rigorous syntactic and semantic meaning (Zhang et al., 2015). | contrasting |
train_9655 | Its ability of generating adversarial examples is attractive for data augmentation. | it hasn't been tried in data augmentation beyond computer vision (Antoniou et al., 2018). | contrasting |
train_9656 | In this paper, we hypothesize that dialogue acts improve conversation modeling. | it is not always possible that such dialogue acts are available in practice, and it would be ideal to predict dialogue acts first (Kumar et al., 2017), and then use them for next utterance generation/retrieval; having a model where both tasks, i.e. | contrasting |
train_9657 | Recently, deep reinforcement learning (DRL) has been used for dialogue policy optimization. | many DRL-based policies are not sample-efficient. | contrasting |
train_9658 | DRL-based models are often more expressive and computational effective. | these deep models are not sample-efficient and not robust to errors from input modules of SDS. | contrasting |
train_9659 | REINFORCE (Williams et al., 2017), advantage actor-critic (A2C) (Fatemi et al., 2016). | compared with GPRL, most of these models are not sample-efficient. | contrasting |
train_9660 | Neural networks for the approximation of value functions have long been investigated (Lin, 1993). | these methods were previously quite unstable (Mnih et al., 2013). | contrasting |
train_9661 | Note that, in input module and communication module, the same types of nodes share parameters, which may speed up the learning process. | in output module, in oder to capture the specific characteristics of each node, they don't share parameters. | contrasting |
train_9662 | the adjacency matrix Z, is known. | usually the graph is not known in practice, and the hypothetical structure is not guaranteed to be optimal. | contrasting |
train_9663 | The reason is that each representation of the individual modality encodes specific knowledge and is complementary, an aspect that can be explored to facilitate understanding on the entire meaning of the content. | this task could be extremely challenging because we need to explore single-modal information deliberately and jointly learn the intrinsic correlation among various modalities. | contrasting |
train_9664 | (16) 6: until convergence Our proposed framework MEMD is seemingly similar to the many-to-many setting in multi-task sequence-sequence learning (Luong et al., 2015). | there are obvious distinctions between MEMD and multi-task sequence-sequence learning. | contrasting |
train_9665 | In addition, another phenomenon observed is that the longer the source sentence is, it is easier to ignore important information for RNNsearch . | as can be seen from the boldfaced sections marked in results generated with RNMT, proposed model with CNN could captures more source information successfully. | contrasting |
train_9666 | NMT yields the state-of-the-art translation performance in resource rich scenarios (Bojar et al., 2017;Nakazawa et al., 2017). | currently, high quality parallel corpora of sufficient size are only available for a few language pairs such as languages paired with English and several European language pairs. | contrasting |
train_9667 | For instancelevel interpolation, the most related method is to assign a weight in NMT objective function (Chen et al., 2017a;Wang et al., 2017b). | the model structures of SMT and NMT are quite different. | contrasting |
train_9668 | Domhan and Hieber (2017) propose a method similar to the deep fusion method (Gülçehre et al., 2015). | unlike training the RNNLM and NMT model separately (Gülçehre et al., 2015), Domhan and Hieber (2017) train RNNLM and NMT models jointly. | contrasting |
train_9669 | It has been shown that CNN based NMT and the Transformer significantly outperform the state-of-the-art RNN based NMT model of in both the translation quality and speed perspectives. | currently, most of the domain adaptation studies for NMT are based on the RNN based model (Bahdanau et al., 2015). | contrasting |
train_9670 | With increasing access to digital historical text, the processing of these historical texts is attracting more and more interest. | in contrast to modern text, historical text processing faces more challenges. | contrasting |
train_9671 | There are 84.4% and 75.8% historical spellings that are identical to their modern spellings in German and English, respectively. | the unchanged rate is only 17.1% in Hungarian. | contrasting |
train_9672 | That is to say, the average edit distance of incorrectly normalized spellings will be larger compared to the average edit distance before normalization. | hungarian is the exception in Table 8, which indicates that spellings with longer edit distance are more likely to be normalized close to modern spellings in hungarian. | contrasting |
train_9673 | Since SMT models are more focused on a local context, the SMT models choose 'tok' rather than 'tuk'. | in terms of accuracy, it is still hard for NMT models to exceed SMT models in Swedish. | contrasting |
train_9674 | This can be attributed to the small size of CORPUS-26 TRAIN data used to train the 5-gram LM and the large number of classes. | adding 5-gram character LM scores as features (row n.) beats the baseline scores (row a.). | contrasting |
train_9675 | This is therefore a coarse abstraction of the structured prediction tasks presented in this paper. | this constitutes the most straight-forward task in emotion analysis. | contrasting |
train_9676 | The same applies to experiencer: if the head of the governing phrase is an emotion, then the head of the current phrase is a potential experiencer. | due to variability of emotion expressions, this cannot always be the case. | contrasting |
train_9677 | At the same time, the resource we present provides interesting and valuable insights in the language of emotion expression and, therefore, is useful to the community of linguists who are interested in the study of linguistic properties of emotions. | we also note that developing such a resource has its limitations: Due to the subjective nature of emotions, it is challenging, if not impossible, to come up with an annotation methodology that would lead to less disparate annotations, especially if in addition to emotion, other categories should be annotated together with roles. | contrasting |
train_9678 | Fictional texts are highly metaphoric and full of allusions and metonymies, which requires thoughtful reading (often reading between the lines) and a broader context. | this is something that our annotators do not have: all the context they have at their disposal is a triple of sentences, each of which can rely on information that is available in other parts of the book, but not in the annotation unit. | contrasting |
train_9679 | Clearly, the problem of string transduction subsumes the problem of sequence labeling, as one can always try to learn a mapping f from a given training data of strings of identical length. | there is a significant distinction that is usually made between the two, as string transduction can re-write a string into a completely different string, while sequence labeling has a stronger notion of locality. | contrasting |
train_9680 | We found out that while it perhaps prevents insertions in legitimate positions, if the frequency threshold is low enough, it does not have an adverse effect. | it prevents adding spurious insertions that cannot be correctly recovered as deletions. | contrasting |
train_9681 | The intermediate string representation is intended to correct mistakes while actually relying on the type of substitutions, deletions and insertions done by users of social media. | it is still prone to character-based mistakes, since it is a learned component. | contrasting |
train_9682 | A simple way of retrieving parallel sentences from comparable articles is to align the sentences in source and target pages together using a sentence alignment algorithm (Gale and Church, 1993;Fung and Church, 1994;Wu, 1994;Moore, 2002). | these aligners are designed to align parallel corpora in which the source and target sentences are in the same order (i.e., no cross-alignment) or in proximity to each other and in which each sentence has only one matching sentence (i.e., no many-tomany alignment). | contrasting |
train_9683 | As we discussed earlier, we are interested in perfect parallel sentences which are clustered in the highest σ regions. | partial parallel sentences in lower regions can be used for certain purposes too. | contrasting |
train_9684 | Recent years have witnessed a surge of publications aimed at tracing temporal changes in lexical semantics using distributional methods, particularly prediction-based word embedding models. | this vein of research lacks the cohesion, common terminology and shared practices of more established areas of natural language processing. | contrasting |
train_9685 | Unfortunately, Google Ngrams is inherently limited in that it does not contain full texts. | for many cases, this corpus was enough, and its usage as the source of diachronic data continued in Mitra et al. | contrasting |
train_9686 | Ideally, diachronic approaches should be evaluated on human-annotated lists of semantically shifted words (ranked by the degree of the shift). | such gold standard data is difficult to obtain, even for English, let alone for other languages. | contrasting |
train_9687 | (3) (Li et al., 2016a;Chen and Ren, 2017) take into account the content and static characteristics of network structures to deduce topics. | they ignore dynamic user behaviours. | contrasting |
train_9688 | (2017) model the entailment task as the seq2seq generation problem and enforce sharing of the same decoder between summarization and entailment. | the entailment task is more reasonable to be considered as a multi-label classification problem rather than a generation problem. | contrasting |
train_9689 | Our system is based on bidirectional recurrent neural networks that can learn sentence representations in a shared vector space by explicitly maximizing the similarity between parallel sentences. | to previous approaches, by leveraging these continuous vector representation of sentences we remove the need to rely on multiple models and specific feature engineering. | contrasting |
train_9690 | Hence, our training set contains is a target sentence of M tokens, and y i is the label representing the translation relationship between s S i and s T i , so that The advantage of negative sampling is its simplicity. | relying only on randomness to select negative sentence pairs makes most of the examples very non-parallel and easy to classify. | contrasting |
train_9691 | The most reliable way to create test sets to compare different approaches would be to have professional translators manually annotate parallel sentences from comparable corpora. | this option is expensive and impractical. | contrasting |
train_9692 | We see that having a balanced training set with m = 1 is not the optimal solution for our approach. | having an unbalanced training set improves its performance. | contrasting |
train_9693 | parallel sentences as the number of non-parallel sentences increases in the test set. | we see that our neural network based approach obtains better performances by a significant margin. | contrasting |
train_9694 | We notice that a dictionary misleads the decoder when it generates a correct sequence but there is only another form (e.g., plural) of this word, in the dictionary. | the dictionary sometimes helps to get closer to the original word even if this word does not exist in the dictionary and if the training and the test data are from different periods or different languages. | contrasting |
train_9695 | For the four point graded scale, the agreement dropped to 63%. | when binarizing these annotations by combining the first two and the second two classes, an agreement of 84% is obtained. | contrasting |
train_9696 | With a value of 0.21 in terms of Fleiss' κ, the annotator agreement is between slight and fair. | when binarizing the classes as described before, κ becomes 0.36, which corresponds to the respective value of 0.35 reported for our previous clickbait corpus (Potthast et al., 2016). | contrasting |
train_9697 | Two of the 27 publishers, breitbartnews and buzzfeed, do obviously not follow the overall distribution; both send significantly more clickbait than the others. | their contribution to the total amount of clickbaiting tweets (and hence to the corpus' publisher bias) is moderate only. | contrasting |
train_9698 | Considerable effort has been devoted to building commonsense knowledge bases. | they are not available in many languages because the construction of KBs is expensive. | contrasting |
train_9699 | There is a rich body of work on sense embedding, which allows one surface form of a word to have sense-specific vectors (Neelakantan et al., 2014;Iacobacci et al., 2015). | to the best of our knowledge, previous studies in this field do not target sense vectors of concepts for cross-lingual knowledge projection. | contrasting |
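The preview rows above are pipe-delimited with four fields per row. A minimal sketch of how such a row could be parsed into a structured record (the `parse_row` helper below is hypothetical, for illustration only, and not part of any dataset tooling; it assumes the sentence fields themselves contain no `|` characters, as in the rows shown):

```python
def parse_row(line):
    """Split one 'id | sentence1 | sentence2 | label |' preview row into a dict."""
    # Drop surrounding whitespace and the trailing pipe, then split on the delimiter.
    parts = [p.strip() for p in line.strip().strip("|").split("|")]
    keys = ["id", "sentence1", "sentence2", "label"]
    return dict(zip(keys, parts))

# Example usage on an abbreviated row from the table above.
row = parse_row(
    "train_9617 | Pichotta and Mooney (2016a) showed that the LSTM-based "
    "event sequence model outperformed previous co-occurrence-based methods "
    "for event prediction. | this line of work build their models based on "
    "discrete verbs and tokens, which is far from being a complete sentence "
    "or a story. | contrasting |"
)
```

Rows whose fields contain literal pipes (none appear in this preview) would need a proper CSV/TSV export from the dataset rather than this delimiter split.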