id (string, 7–12 chars) | sentence1 (string, 6–1.27k chars) | sentence2 (string, 6–926 chars) | label (string, 4 classes) |
---|---|---|---|
train_16700 | Existing works introduce dialog acts to label a cluster of responses and a latent variable is learned to select a dialog act for response generation (Zhao et al., 2017; Serban et al., 2017a). | it is not effective to capture the output diversity since the natural correlation between the expression patterns and dialog acts is not learnt. | contrasting |
train_16701 | A simple implementation is to directly embed the act information into the Seq2Seq model. | as shown in our experiments, it still suffers from safe response problem. | contrasting |
train_16702 | Formally, the first way updates its state only according to [w, 0 • e], while the second according to [w, 1 • e]. | these methods may cause the content (expression) to become so dominant that the responses lack any effective expression (meaningful content). | contrasting |
train_16703 | But only a few works (Zhao et al., 2017; Serban et al., 2017a) on open-domain end-to-end modeling take dialog acts into account. | many attempts have also been made to improve the architecture of Seq2Seq models by changing the training methods. | contrasting |
train_16704 | Because of its great potential in understanding and modeling conversations, SEQ2SEQ has been widely applied in different kinds of conversation scenarios, including technical support, movie discussion, and social entertainment. | when confronting conversations with diverse topics or themes, SEQ2SEQ is usually prone to making generic, meaningless responses due to its oversimplified parameterization. | contrasting |
train_16705 | This architecture shows great success in neural dialogue generation (Shang et al., 2015; Serban et al., 2017; Shen et al., 2017; Clark and Cao, 2017; Xing et al., 2017; Choudhary et al., 2017). | with a single set of model parameters and the oversimplified model architecture, the flexibility of the model is rather limited, especially when confronting conversations with diverse topics or themes. | contrasting |
train_16706 | We observe that the decoding speeds of CVAE and LAED are relatively comparable with our model. | when comparing with TA-SEQ2SEQ and DOM-SEQ2SEQ that also elaborately and explicitly model conversations with diverse topics or themes, ADAND shows a clear superiority in decoding speed. | contrasting |
train_16707 | These actions can be regarded as users' feedbacks that reflect users' interest. | due to its implicitness, such feedback can only reflect a part of users' interest, causing inaccuracy in recommendation. | contrasting |
train_16708 | From the figure, it can be found that while there is no mentioned item in the dialog, the baseline and the one only with knowledge perform the worst. | the two models with dialog incorporation perform significantly better. | contrasting |
train_16709 | 2018, we align latent spaces for different styles. | we also align latent spaces encoded by different models (S2S and AE). | contrasting |
train_16710 | (2019b) maximized the average of capped distance between points from the same latent space. | we found that the results are sensitive to the cap value. | contrasting |
train_16711 | As in the automatic evaluation results, STYLEFUSION and MTask show the highest appropriateness (not statistically different) apart from the Human system. | STYLEFUSION outputs are much more stylized. | contrasting |
train_16712 | Dialogue systems have attracted increasing attention due to their promising potential in applications like virtual assistants or customer support systems (Hauswald et al., 2015; Poulami Debnath, 2018). | studies (Carbonell, 1983) show that users of dialogue systems tend to use succinct language which often omits entities or concepts made in previous utterances. | contrasting |
train_16713 | First, the context is treated as a whole for calculating its attention towards profile sentences. | each context is composed of multiple utterances and these utterances may play different roles when matching different profile sentences. | contrasting |
train_16714 | In general, all these methods adopted a contextlevel persona fusion strategy, which first obtained the embedding vector of a context and then computed the similarities between the whole context and each profile sentence to acquire the persona representation. | such persona fusion is relatively too coarse. | contrasting |
train_16715 | It is reasonable that the context-persona matching is more important because contexts provide the fundamental semantic descriptions for response selection. | the single persona-response matching can also achieve a hits@1 of 48.8% and an MRR of 60.9%, which shows the usefulness of utilizing persona information to select the best-matched response. | contrasting |
train_16716 | The lengths of shortest paths from sources to targets are shown in Figure 3. | most probabilities lie on two and three hops rather than zero and one hop, so key-value extraction based text generative models (Eric et al., 2017; Levy et al., 2017; Qian et al., 2018) are not suitable for this task. Multi-hop reasoning might be useful for better retrieving correct knowledge graph entities. | contrasting |
train_16717 | Among the baselines, KAware has better performance on Last1. | Qadpt outperforms all baselines when the knowledge graphs slightly change (Last1 and Last2) in terms of accurate change rate. | contrasting |
train_16718 | For the scenario of zero-shot adaptation, Mem-Net and TAware show their ability to update responses when the knowledge graphs are largely changed. | Qadpt is better at capturing minor dynamic changes (Last1 and Last2) and updates the responses according to the new knowledge graphs. | contrasting |
train_16719 | The goal of the interpretable matching model is to reveal token-level matching information between a query-response pair thus a matching skeleton can be derived from the response. | the training of the matching model does not rely on such fine-grained annotations. | contrasting |
train_16720 | In addition, it might be a little bit surprising to see that PMI and keywords give almost the same performance on all three metrics, suggesting that a given query is not that necessary. | we found lots of skeletons proposed by PMI are identical to those of keywords. | contrasting |
train_16721 | In the first case, the retrieved utterance is very specific with elaborated details. | it is not a reasonable response due to the sudden topic drift. | contrasting |
train_16722 | use dialogue acts to capture the discourse variations in multi-round dialogues as guided knowledge. | such discourse information can hardly be extracted for short-text conversation. | contrasting |
train_16723 | During testing, a decoder sequentially generates a response using search strategies such as beam search. | these models frequently generate bland and generic responses. | contrasting |
train_16724 | Recently, Seq2Seq neural networks (Sutskever et al., 2014) have demonstrated excellent results on open-domain conversation (Shang et al., 2015; Sordoni et al., 2015; Vinyals and V. Le, 2015; Yao et al., 2015). | due to the lack of global and relevant information guidance, they inherently tend to generate trivial and uninformative responses (e.g., "I don't know") rather than meaningful and replier-specific ones (Li et al., 2016). | contrasting |
train_16725 | It is observed that when k ≤ 3, model performance improves with the increase of k, which suggests that more distributions in GMD are helpful for modeling user personalization. | for k > 3, model performance slightly drops with the increase of k. The potential reason is that GMD with three distributions is effective enough for modeling personalization, and a more sophisticated GMD might suffer from scarce datasets for training. | contrasting |
train_16726 | (2018) explores this approach in detail. | the dataset introduced in that work does not capture higher-level strategic behaviors that can impact the quality of the recommendation made (for example, it may be better to elicit user preferences first, before making a recommendation). | contrasting |
train_16727 | (2013) automatically produces template-based questions from user reviews. | no conversational recommender systems have been built based on these works due to the lack of a large publicly available corpus of human recommendation behaviors. | contrasting |
train_16728 | Second, as the model becomes better at the game, we observe an increase in the length of dialogue. | it remains shorter than the average length of human dialogues, possibly because our reward function is designed to minimize it, which worked better in experiments. | contrasting |
train_16729 | The goal of these systems is to help users accomplish a specific task, such as flight or hotel booking or transportation planning. | to achieve these goals, task-oriented dialogue systems rely on pre-defined slots and values for request processing (which can be represented using simple SQL queries consisting of SELECT and WHERE clauses). | contrasting |
train_16730 | Compared to existing context-dependent text-to-SQL datasets, CoSQL contains significantly more turns, out of which 11,039 user utterances are convertible to SQL. | all NL utterances in ATIS and SParC can be mapped to SQL. | contrasting |
train_16731 | For example, SQL AGG components occur most frequently in the beginning of dialogues, as a result of users familiarizing themselves with the amount of data in the DB or other statistical measures. | the frequencies of almost all SQL components in SParC increase as the question turn increases. | contrasting |
train_16732 | Reinforcement Learning has gained more and more attention in dialog system training because it treats the dialog planning as a sequential decision problem and focuses on long-term rewards. | RL requires interaction with the environment, and obtaining real human users to interact with the system is both time-consuming and labor-intensive. | contrasting |
train_16733 | For instance, RL systems built with more complicated user simulators will have lower scores on the automatic metrics, compared to those built using simpler user simulators. | the good performance may not necessarily transfer when the system is tested by real users. | contrasting |
train_16734 | In this work, we build a similar agenda-based user simulator in the restaurant domain, and focus more on analyzing the effects of using different user simulators. | it's not feasible to build agenda-based user simulators for more complex tasks without an explicit agenda. | contrasting |
train_16735 | (2018) introduced the Neural User Simulator (NUS) which learned user behaviour from a corpus and generates natural language directly instead of semantic output such as dialog acts. | unlike in ABUS, how to infuse the agenda into the dialog planning and assure consistency in data-driven user simulators has been an enduring challenge. | contrasting |
train_16736 | Dialog action space can also be on the word-level. | previous study shows degenerate behavior when using word-level action space , as it is difficult to design a reward. | contrasting |
train_16737 | "Auto-Success" has been used to reflect the solved ratio previously. | it's not necessarily correlated with the user-rated solved ratio. | contrasting |
train_16738 | pare the success rate of S 1 , ..., S n on U . | by looking at the fifth row for the SL-Retrieval simulator, it will prefer Sys-SLT (0.975) over Sys-AgenG (0.965), but actually the average performance of Sys-AgenG (0.882) is better than Sys-SLT (0.616) from the last row. | contrasting |
train_16739 | Bahdanau Attention (Bahdanau et al., 2014) is widely used to calculate the attention weights according to the features of individual modalities, since Bahdanau Attention was originally proposed for machine translation, which can be considered a unimodal task. | video captioning is a multimodal task and different modalities are able to provide complementary cues to each other when calculating the attention weights. | contrasting |
train_16740 | Given an image x ∈ D_u^x, we retrieve a caption in the unpaired dataset, i.e., ỹ ∈ D_u^y, that has the highest score obtained by the discriminator, i.e., the most likely caption to be paired with the given image, and vice versa for unpaired captions. By this retrieval process over all the unpaired data, we have image-caption pairs {(x_i, y_i)} from the paired data and the pairs with pseudo-labels {(x_j, ỹ_j)} and {(x_k, y_k)} from the unpaired data. | these pseudo-labels are not noise-free, thus treating them equally with the paired ones is detrimental. | contrasting |
train_16741 | Over the last few years, vision-language tasks such as image captioning (Xu et al., 2015) and visual question answering (VQA) (Antol et al., 2015;Anderson et al., 2018) have provided a testbed for developing a cognitive agent. | the agent performing these tasks still has a long way to go to be used in real-world applications (e.g., aiding visually impaired users, interacting with humanoid robots) in that it does not consider the continuous interaction over time. | contrasting |
train_16742 | In general, the algorithms we introduce again outperform the NoStruct baseline. | to the crowd-labeled experiments, AP (slightly) outperformed the other algorithms. | contrasting |
train_16743 | We also constructed a dataset from English sentence-tokenized Wikipedia articles (not including captions) and their associated images from ImageCLEF2010 (Popescu et al., 2010). | to RQA and DIY, there are no explicit connections between individual images and individual sentences, so we cannot compute AUC or precision, but this corpus represents an important organically-multimodal setting. | contrasting |
train_16744 | Ultimately, the choice to split the annotations into three streams and use more than one attention head, each focusing on different types of mentions, leads to better performance. | even the multi-head supervised attentive model does not score as well as the multiview (non-attentive) model. | contrasting |
train_16745 | (2018), where an attention head is replaced by a model trained to predict syntactic dependencies (Dozat and Manning, 2017). | our model uses explicit supervision for all self-attention heads and is trained to predict the correct attention scores in a multi-task fashion. | contrasting |
train_16746 | (Lu et al., 2016) proposed a "visual sentinel", which applies the hidden state in LSTM as supervision to adaptively decide whether it is necessary to input the visual feature to the language model when generating the next word. | the supervision information from the hidden state isn't credible, as it cannot be guaranteed to contain the corresponding visual decision signals. | contrasting |
train_16747 | The features of the POS tag in the linguistic computer vision. | they ignore the important property of POS tag, which relates to different visual semantics with different types of POS tag. | contrasting |
train_16748 | Considering this strong correlation between the two tasks, some joint models are proposed based on the multi-task learning framework (Zhang and Wang, 2016;Liu and Lane, 2016) and all these models outperform the pipeline models via mutual enhancement between two tasks. | their work just modeled the relationship between intent and slots by sharing parameters. | contrasting |
train_16749 | All these models outperform the pipeline models via mutual enhancement between two tasks (footnote: we only consider the first subword label if a word is broken into multiple subwords). | these joint models did not model the intent information for slots explicitly and just considered the correlation between the two tasks by sharing parameters. | contrasting |
train_16750 | This is due to the fact that these RPN-systems have to align every proposed region with the command. | the non-RPN systems only have to encode the full image once and then reason over this embedding. | contrasting |
train_16751 | In future versions, Talk2Car will be expanded to include the above annotations and dialogues. | this first version already offers a challenging dataset to improve current methods for the joint processing of language and visual data and for the development of suitable machine learning architectures. | contrasting |
train_16752 | In most cases, this involves some kind of digital manipulation, e.g., cropping, splicing, etc. | there are cases when an image is completely legitimate, but it is published alongside some text that does not reflect its content accurately. | contrasting |
train_16753 | The tool extracts metadata in the form of about 100 features such as size, resolution, GPS location. | most of this metadata turns out to be missing from our images: only five features could be extracted for more than half of the images from the Snopes dataset. | contrasting |
train_16754 | If there is virtually no change, then the cell has reached its local minima for error at that quality level. | if there is a large change, then the pixels are not at their local minima and are effectively original. | contrasting |
train_16755 | This usually requires exploration. | conventional frameworks like reinforcement learning (RL) or imitation learning (IL) are poorly suited. | contrasting |
train_16756 | Unlike instruction-level evaluation, cascaded evaluation executes the instructions in sequence. | instead of starting only from the start state of the first instruction, we create separate examples for starting from the start state of each instruction in the interaction and continuing until the end of the interaction. | contrasting |
train_16757 | We see a similar trend for the implicit discriminator when looking at full game points, an interaction-level metric that does not account for performance on over 80% of the data because of error propagation. | the proportion of points scored computed using cascaded evaluation shows the benefit of both mechanisms. | contrasting |
train_16758 | Recent neural architectures including Transformer (Vaswani et al., 2017) and BERT (Devlin et al., 2018) have dramatically increased our ability to include a broad window of potential lexical hints. | the same capacity allows for multimodal context, which may help model the meaning of words in general, and also sharpen its understanding of instances of words in context (e.g., Bruni et al. | contrasting |
train_16759 | Similarly, SPICE also has a lower capability to identify action inconsistencies, which shows the limitation of scene graphs to capture this feature. | the n-gram-based metrics prefer to identify movement changes. | contrasting |
train_16760 | This has been the basis of overcoming domain shift using many self-training techniques which select the most confident examples as new training examples, together with the predicted class as pseudo-label (McClosky et al., 2006). | modern neural networks are known to give wrongly calibrated confidence scores (Guo et al., 2017), meaning that the probability score associated with the predicted class label does not reflect its correctness likelihood. | contrasting |
train_16761 | We find that none of the papers reported all of the items in our checklist. | every paper reported at least one item in the checklist, and each item is reported by at least one paper. | contrasting |
train_16762 | ( , 2020 -as the goal of our analysis is to learn a smooth manifold of font styles that allows for stylistic inference given a small sample of glyphs. | many other style transfer tasks in the language domain (Shen et al., 2017) suffer from ambiguity surrounding the underlying division between style and semantic content. | contrasting |
train_16763 | Broadly speaking, our architecture represents the underlying shape that defines the specific character type (but not the font) as coarse-grained information that therefore enters the transpose convolutional process early on. | the stylistic content that specifies attributes of the specific font (such as serifs, drop shadow, texture) is represented as finer-grained information that enters into the decoder at a later stage, by parameterizing filters, as shown in the right half of Figure 2. | contrasting |
train_16764 | The conventional approach for computing loss on image observations is to use an independent output distribution, typically a Gaussian, on each pixel's intensity. | deliberate analysis of the statistics of natural images has shown that images are not well-described in terms of statistically independent pixels, but are instead better modeled in terms of edges (Field, 1987;Huang and Mumford, 1999). | contrasting |
train_16765 | Nearest neighbors provides a strong baseline on the full test set, even outperforming GlyphNet. | it performs much worse on the hard subset. | contrasting |
train_16766 | As shown in previous work Ponti et al., 2018), retrofitting (CLSRI-AR) and the cross-lingual post-specialization transfer (X-PS) are substantially better in the LS task than the original distributional space. | our full CLSRI-PS model results in substantial boosts in the LS task (13-17%) over the previous best reported scores of X-PS as well as over CLSRI-AR. | contrasting |
train_16767 | Interestingly, for STS both X-PS and CLSRI-AR damage the performance of the distributional baseline. | the full CLSRI-PS model still shows a substantial improvement over all baselines. | contrasting |
train_16768 | Briefly, the sentence meaning approach to NLI takes the position that NLP systems should strive to model the aspects of a sentence's semantics which are closely derivable from the lexicon and which hold independently of context (Zaenen et al., 2005). | the speaker meaning approach to NLI takes the position that NLP systems should prioritize representation of the goaldirected meaning of a sentence within the context in which it was generated (Manning, 2006). | contrasting |
train_16769 | That is, both "He knows that the answer is 5" and "He does not know that the answer is 5" imply that "The answer is 5". | a verb like "manage to" has the signature +/− since, in a positive environment, the complement projects ("I managed to pass"→"I passed") but, in a negative environment, the negation of the complement projects ("I did not manage to pass"→ ¬"I passed"). | contrasting |
train_16770 | As ordering the sequence is more nuanced than sifting basic from non-basic, one might expect consistently lower correlations. | the sequence gamma can have larger magnitude if, despite poor sifting, the order of the basic terms is correct. | contrasting |
train_16771 | During training, we maximize the log-probability of the correct tag sequence: log p(y|X) = s(X, y) − log Σ_{y′} exp(s(X, y′)). While during decoding, we predict the output sequence that obtains the maximum score given by y* = argmax_{y′} s(X, y′). As discussed in Section 1, contextual information is vital for negative focus detection. | the BiLSTM-CRF model might fail to identify the important information related to a negative focus in contexts. | contrasting |
train_16772 | Consider the previous sentence S2: the author emphasizes that the Nucor plant will shut down, thus the most likely negative focus should be anything in S3, whereas the prepositional phrase on some days only provides a temporal statement in our view. | different annotators might have different understandings and interpretations for this case. | contrasting |
train_16773 | Then, the pointer network based decoders predict the correct sentence sequence. | the paragraph vector depends on the permutation of input sentences. | contrasting |
train_16774 | To address the issue, ATTOrderNet (Cui et al., 2018) employs self-attention at the encoder to capture global dependencies regardless of an input sentence order. | similar to traditional models, they also compress sentence vectors into a single fixed-length vector via average pooling. | contrasting |
train_16775 | Since they are dependent on the permutation of the input sentences, their paragraph-level representations are not reliable. | self-attention-based ATTOrderNet (encoder side) and our attention-based TGCM/TGCM-S (decoder side) use a set of sentence representations in a permutation-invariant manner rather than a single vector to represent a paragraph. | contrasting |
train_16776 | For dialogues with multiple interlocutors, extraction of their discourse structures provides useful semantic information to the "downstream" models used, for example, in the production of intelligent meeting managers or the analysis of user interactions in online fora. | despite considerable efforts to retrieve discourse structures automatically (Fisher and Roark, 2007; Duverle and Prendinger, 2009; Li et al., 2014; Joty et al., 2013; Ji and Eisenstein, 2014; Yoshida et al., 2014; Li et al., 2014; Surdeanu et al., 2015), we are still a long way from usable discourse models, especially for dialogue. | contrasting |
train_16777 | Once our MILNet model is trained, we can use it to obtain the attention score A_{E_i} and the sentiment score S_{E_i} for each EDU E_i in a document (see (c) in Figure 3). | while each A_{E_i} is already a scalar, S_{E_i} is a vector with |C| elements, one for each sentiment class C_i. | contrasting |
train_16778 | The first set of results in Table 3 shows that the hierarchical right/left branching baselines dominate the completely right/left branching ones. | their performance is still significantly worse than any discourse parser (intra-and interdomain). | contrasting |
train_16779 | It was a single perpetrator that engaged in touching or groping more often, rather than groups of perpetrators. | commenting happened more frequently when harassers were in groups. | contrasting |
train_16780 | Many common English nouns, adjectives and verbs, whose contribution to semantics is minimal (Forman, 2003) were also removed from the vocabulary. | named entities were retained for their newsworthiness and a set of "trigger" words were retained that depict events (e.g. | contrasting |
train_16781 | Since PCG is also capable of representing other stock time series as potential influencers in the network, we can use this to model the propagation of shocks in the market as shown in Figure 2. | these links were not used for prediction performance to maintain parity with our baseline. | contrasting |
train_16782 | We build our method upon variational autoencoders (VAE), a deep probabilistic neural model which learns latent text features and their distributions effectively. | to other approaches using variational autoencoders for text generation, we modify the mechanism for generating new artificial samples such that we obtain samples structurally and semantically similar to a specific subset of the original data. | contrasting |
train_16783 | Embedding-based methods, however, are limited by the existing sentences available in a given corpus. | our approach is capable of generating new sentences not existing in the corpus, thus largely expanding the training data. | contrasting |
train_16784 | Here, data is embedded into a latent space which is modelled by conditional distributions, and samples from these distributions can be decoded into new artificial data instances. | to shallow models such as Skip-Gram (Mikolov et al., 2013), which also embed into latent spaces, deep generative models have been shown to capture the implicit semantics and structure of the underlying data more effectively. | contrasting |
train_16785 | This is demonstrated by the performance of CRF+Doc2Vec (Table 5). | even using CRF+VAE (1 sample) shows higher F-score than CRF without notable loss of precision. | contrasting |
train_16786 | For instance, in the Tweet "Ive had no appetite since I started on prozac", the annotators did not annotate no appetite as an ADR. | our model was able to predict it correctly as an ADR, but due to this mistake in the test data it is considered a false positive. | contrasting |
train_16787 | Online vendors use this data to understand users' preferences and further predict their future needs. | user-generated data is rich in content, and malicious attackers can infer users' sensitive information. | contrasting |
train_16788 | The goal of our model is to manipulate the learned embedded representation such that any potential adversary cannot infer users' private-attribute information. | a challenge is that the text anonymizer does not know the adversary's attack model. | contrasting |
train_16789 | In some applications where the privacy of users is important and critical, we can set the α parameter above 0.5. | if users' privacy is not the top priority, this parameter can be set to a value lower than 0.5, which, although it does not protect users' private attributes as well as when α ≥ 0.5, still protects them at a reasonable level. | contrasting |
train_16790 | Equation x = 660 / ( 32 + AST of the equations. | these approaches are based on traditional methods and require feature engineering. | contrasting |
train_16791 | We choose Key-Value Memory Networks (Miller et al., 2016) and GRAFT-Net (Sun et al., 2018) as our baseline models: to the best of our knowledge, these are the only ones that can use both text and KBs for question answering. | both models are limited by the number of facts and text that can fit into memory. | contrasting |
train_16792 | Our setup is not directly comparable to standard QA setups, as we aim to evaluate evidence rather than raw QA accuracy. | each judge model's accuracy is useful to know for analysis purposes. | contrasting |
train_16793 | Since our model is better at filtering out the irrelevant data, the improvement in SearchQA is not as significant as in other datasets. | our (RS→WS, RK, SUM) model still achieves the SoTA result by adopting the SUM method, and HAS-QA, which is a strong baseline, also adopts a similar method. | contrasting |
train_16794 | As the number of paragraphs increases, the chances are better that the answer is included in these paragraphs. | the difficulty and running time for the reader also increase. | contrasting |
train_16795 | (2019) also ranked the paragraphs and took into account both the ranking results and the scores produced by the reader to predict the final answer. | they mainly utilize the paragraph-question relevance to rank the paragraphs and train the reader mainly on the positive paragraphs. | contrasting |
train_16796 | Commutativity of relation matrices was not recognized as an issue in the past research because the main focus was on predicting the truth value of atomic triplets. | when path queries are concerned, commutativity poses a problem. | contrasting |
train_16797 | Previous work (Elsahar et al., 2018) usually obtained predicate textual contexts through distant supervision. | the distant supervision is noisy or even wrong (e.g. | contrasting |
train_16798 | Previous study found that most questions contain the subject name or its aliases in SimpleQuestions (Petrochuk and Zettlemoyer, 2018). | the predicate name and object name hardly appear in the question. | contrasting |
train_16799 | (2018) demonstrated the effectiveness of POS copy for the context. | such a copy mechanism heavily relies on POS tagging. | contrasting |
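For reference, here is a minimal sketch of how rows like those above could be loaded and inspected with the Hugging Face `datasets` library; the repository id `"user/dataset"` below is a placeholder, not the actual path of this dataset.

```python
from collections import Counter

from datasets import load_dataset

# "user/dataset" is a hypothetical repository id -- substitute the real one.
ds = load_dataset("user/dataset", split="train")

# Each row carries the four columns shown in the table header above.
row = ds[0]
print(row["id"], "|", row["label"])
print(row["sentence1"])
print(row["sentence2"])

# Tally examples per label (the header lists 4 label classes).
print(Counter(ds["label"]))
```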