id (string, 7-12 chars) | sentence1 (string, 6-1.27k chars) | sentence2 (string, 6-926 chars) | label (4 classes) |
---|---|---|---|
train_4700 | (6) From a pair of begin and end points, the answer string can be extracted from the passage. | rather than output the results (start/end points) from the final step (which is fixed at T − 1 as in Memory Networks or dynamically determined as in ReasoNet), we utilize all of the T outputs by averaging the scores: is a multinomial distribution over {1, . | contrasting |
train_4701 | (2018a) combined a ranker for selecting the relevant passage and a reader for producing the answer from it. | this approach only depended on one passage when producing the answer, hence put great demands on the precisions of both components. | contrasting |
train_4702 | On SearchQA, we find that our extraction model alone performs not that well compared with the state-of-the-art model without re-rankers. | the improvement brought by our selection model isolatedly or jointly trained still demonstrates the importance of our two-stage framework. | contrasting |
train_4703 | Taking K = 1 degrades the performance, which conforms to the expectation, as the correct candidates become less in this stricter situation. | taking K = 3 can not improve the performance further. | contrasting |
train_4704 | In the last example, MINIMAL fails to answer the question, since the inference over first and second sentences is required to answer the question. | selected sentence in 1883-84 Germany began to build a colonial empire in Africa and the South Pacific, before losing interest in imperialism. | contrasting |
train_4705 | propose a DS-QA system, which retrieves relevant texts of the question from a large-scale corpus and then extracts answers from these texts using reading comprehension models. | the retrieved texts in DS-QA are always noisy which may hurt the performance of DS-QA. | contrasting |
train_4706 | Therefore, simply using all retrieved paragraphs equally to extract answer may bring in much noise. | our+FULL model still has a slight improvement by considering the confidence of each retrieved paragraph. | contrasting |
train_4707 | As Table 7 shows, if we remove L adv , the translation performance decreases by 0.64 BLEU point. | when L noisy is excluded from the training objective function, it results in a significant drop of 1.66 BLEU point. | contrasting |
train_4708 | (2016) extend the attentional model to include structural biases from word based alignment models, including positional bias, Markov conditioning, fertility and agreement over translation directions. | we did not delve into the attention model or sought to redesign it in our new bridging proposal. | contrasting |
train_4709 | These approaches are somewhat similar to our source-side bridging model. | inspired by the insight of shortening the distance between source and target embeddings in the seq2seq processing chain, in the present paper we propose more strategies to bridge source and target word embeddings and with better results. | contrasting |
train_4710 | quality ratings, is related to the task of sentence-level quality estimation (sQE). | there are crucial differences between sQE and the reward estimation in our work: sQE usually has more training data, often from more than one machine translation model. | contrasting |
train_4711 | (2018) change the auto-regressive architecture to speed up translation by directly generating target words without relying on any previous predictions. | compared with our work, their model achieves the improvement in decoding speed at the cost of the drop in translation quality. | contrasting |
train_4712 | While the rule-based semantic parser has high precision and gauges the amount of structural variance in the data, it cannot generalize beyond observed examples. | we can automatically generate non-abstract utterance-program pairs from the manually annotated abstract pairs and train a semantic parser with strong supervision that can potentially generalize better. | contrasting |
train_4713 | 3 The resulting parser can be used as a standalone semantic parser. | it can also be used as an initialization point for the weakly-supervised semantic parser. | contrasting |
train_4714 | 2016proposed a user interface for the Freebase database that enables a fast and easy creation of parses. | in their setup the worker still requires expert knowledge about the Freebase database, whereas in our approach feedback can be collected freely and from any user interacting with the system. | contrasting |
train_4715 | Consequently the number of tokens and types increase in a similar vein. | the average sentence length drops. | contrasting |
train_4716 | Because they are graphs and not trees, they can capture reentrant semantic relations, such as those induced by control verbs and coordination. | it is technically much more challenging to parse a string into a graph than into a tree. | contrasting |
train_4717 | The same AM dependency tree may represent multiple indexed AM terms, because the order of apply and modify operations is not specified in the dependency tree. | it can be shown that all well-typed AM terms that map to Fig. | contrasting |
train_4718 | Stochastic Gradient Descent (SGD) with negative sampling is the most prevalent approach to learn word representations. | it is known that sampling methods are biased especially when the sampling distribution deviates from the true data distribution. | contrasting |
train_4719 | By far, most state-of-the-art embedding methods rely on SGD and negative sampling for optimization. | the performance of SGD is highly sensitive to the sampling distribution and the number of negative samples (Chen et al., 2018;Yuan et al., 2016), as shown in Figure 1. | contrasting |
train_4720 | To address the above-mentioned limitations of SGD, a natural solution is to perform exact (full) batch learning. | to SGD, batch learning does not involve any sampling procedure and computes the gradient over all training samples. | contrasting |
train_4721 | exhibits a more stable convergence due to its full batch learning. | gloVe has a more dramatic fluctuation because of the one-sample learning scheme. | contrasting |
train_4722 | In other words, the parallelization of SGD is not well suited to a large number of workers. | the parameter updates in AllVec are completely independent of each other, therefore AllVec does not have the update collision issue. | contrasting |
train_4723 | In this paper, we presented AllVec, an efficient batch learning based word embedding model that is capable to leverage all positive and negative training examples without any sampling and approximation. | with models based on SGD and negative sampling, AllVec shows more stable convergence and better embedding quality by the all-sample optimization. | contrasting |
train_4724 | Amos and Kolter (2017) extend their efforts to a class of subdifferentiable quadratic programs. | they both require that the intermediate objective has an invertible Hessian, limiting their application in NLP. | contrasting |
train_4725 | The standard protocol for obtaining a labeled dataset is to have a human annotator view each example, assess its relevance, and provide a label (e.g., positive or negative for binary classification). | this only provides one bit of information per example. | contrasting |
train_4726 | They use crowdsourcing to craft passage perturbations intended to fool the network, and then query the network to test their effectiveness. | we propose improving the analysis of question answering systems. | contrasting |
train_4727 | We analyze that this is because the models share the same encoding and matching layers at bottom level and this relevance guarantees that the content and verification models will not violate the boundary model too much. | we also see that the verification score can really make a difference here when the boundary model makes an incorrect decision among the confusing answer candidates ([1], [3], [4], [6]). | contrasting |
train_4728 | For example, Russian and Arabic are morphologically-rich languages and heavily utilize grammatical markers to indicate grammatical as well as semantic functions. | chinese, as an analytic language, encodes grammatical and semantic information in a highly configurational rather than either inflectional or derivational way. | contrasting |
train_4729 | As explained in Section 1, we assume that predicting the movement on trading day d can benefit from predicting the movements on its former trading days. | due to the general principle of sample independence, building connections directly across samples with temporally-close target dates is problematic for model training. | contrasting |
train_4730 | The comparison with INDEPENDENTANALYST also shows the effectiveness of capturing temporal dependencies between predictions with the temporal auxiliary. | the effects of the temporal auxiliary are more complex and will be analyzed further in the next section. | contrasting |
train_4731 | This under utilizes the propagation information due to such oversimplified treatment of tree structure. | sVM-TK is an integrated kernel and can fully utilize the structure by comparing the trees based on both textual and structural similarities. | contrasting |
train_4732 | It is not easy to tell if each Modern Baseball refers to a name or not from the textual evidence only. | using the associated images as reference, we can easily infer that Modern Baseball in the first sentence should be the name of a band because of the implicit features from the objects like instruments and stage, and the Modern Baseball in the second sentence refers to the sport of baseball because of the pitcher in the image. | contrasting |
train_4733 | The BLSTM-CRF sequence labeling model benefits from using the visual context vector to initialize the LSTM cell. | the better way to utilize visual features for sequence labeling is to incorporate the features at word level individually. | contrasting |
train_4734 | A number of variants of RNNs such as Long Short-Term Memory networks (LSTM; Hochreiter and Schmidhuber, 1997) and Gated Recurrent Unit networks (GRU; Cho et al., 2014) have been designed to model text capturing long-term dependencies in problems such as language modeling. | document modeling, a key to many natural language understanding tasks, is still an open challenge. | contrasting |
train_4735 | As one might imagine, HUMAN gets ranked 1st most of the time (41%). | it is closely followed by XNET which ranked 1st 28% of the time. | contrasting |
train_4736 | For the SQuAD dataset, the results are comparable (less than 1%). | the improvement for WikiQA reaches ∼3% and then the gap shrinks again for NewsQA, with an improvement of ∼1%. | contrasting |
train_4737 | For example, the neural variational document model (NVDM;Miao et al., 2016) allows θ i ∈ R K and achieves normalization by taking the softmax of θ i B. | the experiments in Srivastava and Sutton (2017) found the performance of the NVDM to be slightly worse than LDA in terms of perplexity, and dramatically worse in terms of topic coherence. | contrasting |
train_4738 | In these recent works, words are represented as a vector of discrete numbers, which are very efficient storage-wise, while showing comparable performance on several NLP tasks, relative to continuous word embeddings. | discrete representations that are learned in an endto-end manner at the sentence or document level have been rarely explored. | contrasting |
train_4739 | Most of the previous text hashing methods focus on modeling the encoding distribution p(z|x), or hash function, so the local/global pairwise similarity structure of documents in the original space is preserved in latent space (Zhang et al., 2010;Wang et al., 2013;Xu et al., 2015;Wang et al., 2014). | the generative (decoding) process of reconstructing x from binary latent code z, i.e., modeling distribution p(x|z), has been rarely considered. | contrasting |
train_4740 | Previous studies have shown syntactic information has a remarkable contribution to SRL performance. | such perception was challenged by a few recent neural SRL models which give impressive performance without a syntactic backbone. | contrasting |
train_4741 | Syntactic information plays an informative role in semantic role labeling. | few studies were done to quantitatively evaluate the syntactic contribution to SRL. | contrasting |
train_4742 | Evaluation Measure For SRL task, the primary evaluation measure is the semantic labeled F 1 score. | the score is influenced by the quality of syntactic input to some extent, leading to unfaithfully reflecting the competence of syntax-based SRL system. | contrasting |
train_4743 | Differently, proposed a syntax-agnostic model using effective word representation for dependency SRL, which for the first time achieves comparable performance as stateof-the-art syntax-aware SRL models. | most neural SRL works seldom pay much attention to the impact of input syntactic parse over the resulting SRL performance. | contrasting |
train_4744 | , (s m i , a m i ) for each instructionx i . | the agent context, the information available to the agent at step k, is the execution up until but not including step k. to the world state, the agent context also includes instructions and the execution so far. | contrasting |
train_4745 | We believe this is due to the fewer number of RE related parameters and the shorter distance that the gradient needs to travel from the loss to these parameters -both make logit easier to train. | since logit directly modifies the output, the final prediction is more sensitive to the insufficiently trained weights in logit, leading to the inferior results in the 5-shot setting. | contrasting |
train_4746 | We note that the sequence-level smoothing tend to generate lengthy captions overall, which is maintained in the combination. | the token-level smoothing allows for a better recognition of objects in the image that stems from the robust training of the classifier e.g. | contrasting |
train_4747 | UNK, and does not reflect the smoothness of the underlying continuous distribution of certain attributes. | are both grammatical and realistic, as in this example: this maps all unseen numerals to the same unknown type and ignores the smoothness of continuous attributes, as shown in Figure 1. | contrasting |
train_4748 | Several tasks such as information extraction, question answering, and machine translation have benefited from them. | in their vanilla forms, these networks are constrained by the sequential order of tokens in a sentence. | contrasting |
train_4749 | To mitigate this limitation, structural (dependency or constituency) information in a sentence was exploited and witnessed partial success in various tasks (Goller and Kuchler, 1996;Yamada and Knight, 2001;Quirk et al., 2005;Socher et al., 2011;Tai et al., 2015). | alignment techniques (Brown et al., 1993) and attention mechanisms (Bahdanau et al., 2014) act as a catalyst to augment the performance of classical Statistical Machine Translation (SMT) and Neural Machine Translation (NMT) models, respectively. | contrasting |
train_4750 | Distant supervision has become the standard method for relation extraction. | even though it is an efficient method, it does not come at no cost-The resulted distantly-supervised training samples are often very noisy. | contrasting |
train_4751 | To suppress the noisy (Roth et al., 2013), recent stud-ies (Lin et al., 2016) have proposed the use of attention mechanisms to place soft weights on a set of noisy sentences, and select samples. | we argue that only selecting one example or based on soft attention weights are not the optimal strategy: To improve the robustness, we need a systematic solution to make use of more instances, while removing false positives and placing them in the right place. | contrasting |
train_4752 | Recently, with the advance of neural network techniques, deep learning methods (Zeng et al., 2014(Zeng et al., , 2015 are introduced, and the hope is to model noisy distant supervision process in the hidden layers. | their approach only selects one most plausible instance per entity pair, inevitably missing out a lot of valuable training instances. | contrasting |
train_4753 | Distant supervision relation extraction is to predict the relation type of entity pair under the automatically-generated training set. | the issue is that these distantly-supervised sentences that mention this entity pair may not express the desired relation type. | contrasting |
train_4754 | First, the prerequisite of reinforcement learning is that the external environment should be modeled as a Markov decision process (MDP). | the traditional setting of relation extraction cannot satisfy this condition: the input sentences are independent of each other. | contrasting |
train_4755 | Intuitively, a relation can be modeled by a matrix mapping entity vectors. | relations reside on low dimension sub-manifolds in the parameter space of arbitrary matrices-for one reason, composition of two relations M_1, M_2 may match a third M_3 (e.g. | contrasting |
train_4756 | the composition of currency of country and country of film matches the relation currency of film budget), which could be captured by modeling composition in a space. | modeling relations as mappings naturally requires more parameters -a general linear map between d-dimension vectors is represented by a matrix of d 2 parameters -which are less likely to be shared, impeding transfers of facts between similar relations. | contrasting |
train_4757 | Our work inherits the same motivation with ITransF in terms of promoting parameter-sharing among relations. | the base model used in this work originates from RESCAL (Nickel et al., 2011), in which relations are naturally represented as analogue to the adjacency matrices (Sec.2). | contrasting |
train_4758 | It is also used for pretraining other deep neural networks (Erhan et al., 2010). | when combined with other models, the learning of autoencoders, or more generally sparse codings (Rubinstein et al., 2010), is usually conveyed in an alternating manner, fixing one part of the model while optimizing the other, such as in Xie et al. | contrasting |
train_4759 | WN18 collects word relations from WordNet (Miller, 1995), and FB15k is taken from Freebase (Bollacker et al., 2008); both have filtered out low frequency entities. | it is reported in Toutanova and Chen (2015) that both WN18 and FB15k have information leaks because the inverses of some test triples appear in the training set. | contrasting |
train_4760 | Many supervised deep models have been proposed for this problem (Liu et al., 2015;Yin et al., 2016;Wang et al., 2017), and obtained promising results. | these methods fail to adapt well across domains, because the aspect terms from two different domains are usually disjoint, e.g., laptop v.s. | contrasting |
train_4761 | On one hand, these previous attempts have verified that syntactic information between words, which can be used as a bridge between domains, is crucial for domain adaptation. | dependency-tree-based RNN (Socher et al., 2010) has proven to be effective to learn high-level feature representation of each word by encoding syntactic relations between aspect terms and opinion terms (Wang et al., 2016). | contrasting |
train_4762 | Obviously, the performance of RNSCN-GRU without an autoencoder significantly deteriorates when the auxiliary labels are very noisy. | rNSCN + -GrU (r) achieves acceptable results compared to rNSCN + -GrU. | contrasting |
train_4763 | This may not greatly influence opinion extraction, as shown, because the two domains usually share many common opinion terms. | the significant difference in aspect terms makes the learning more dependent on common relations. | contrasting |
train_4764 | Learning policies for task-completion dialogue is often formulated as a reinforcement learning (RL) problem (Young et al., 2013;Levin et al., 1997). | applying RL to real-world dialogue systems can be challenging, due to the constraint that an RL learner needs an environment to operate in. | contrasting |
train_4765 | Most of recent studies in this area have adopted this strategy (Su et al., 2016a;Lipton et al., 2016;Zhao and Eskenazi, 2016;Williams et al., 2017;Dhingra et al., 2017;Liu and Lane, 2017;Peng et al., 2017b;Budzianowski et al., 2017;Peng et al., 2017a). | user simulators usually lack the conversational complexity of human interlocutors, and the trained agent is inevitably affected by biases in the design of the simulator. | contrasting |
train_4766 | For simplicity, we assume some random initial state vectors such as f^C_0 and b^C_{|w_i|+1} when we describe LSTMs. | However, we also want access to reliable probability estimates instead of raw scores - we accomplish this by constructing a custom loss function. | contrasting |
train_4767 | It is hard to assign this sentence happy given only the text attention. | the acoustic attention focuses on "you're" and "west-sider", removing emphasis from "don't" and "like". | contrasting |
train_4768 | (ii) The word alignment can be easily applied to human speech. | it is difficult to align the visual information with text, especially if the text only describes the video or audio. | contrasting |
train_4769 | The distribution shows a natural skew towards more frequently used emotions. | the least frequent emotion, fear, still has 1,900 data points which is an acceptable number for machine learning studies. | contrasting |
train_4770 | ing that the DFG is able to find useful information in unimodal, bimodal and trimodal interactions. | in cases (II) and (III) where the visual modality is either uninformative or contradictory, the efficacies of v → l, v and v → l, a, v and l, a → l, a, v are reduced since no meaningful interactions involve the visual modality. | contrasting |
train_4771 | Subsequently, DFG gives low values to efficacies that rely unilaterally on language or audio alone: the (l → τ ) and (a → τ ) efficacies seem to be consistently low. | the visual modality appears to have a partially isolated behavior. | contrasting |
train_4772 | Previous research in this field has exploited the expressiveness of tensors for multimodal representation. | these methods often suffer from exponential increase in dimensions and in computational complexity introduced by transformation of input into tensor. | contrasting |
train_4773 | (2017) proposes a low-rank tensor-based fusion framework to improve the face recognition performance using the fusion of facial attribute information. | none of these previous work aims to apply low-rank tensor techniques for multimodal fusion. | contrasting |
train_4774 | In the simplest case, this score can simply be the log-likelihood log p SCG (v) of the story-version, according to the Sequential-CG model. | this is problematic since this is biased towards choosing shorter endings. | contrasting |
train_4775 | Because a cause must occur earlier than its effect, temporal and causal relations are closely related and one relation often dictates the value of the other. | limited attention has been paid to studying these two relations jointly. | contrasting |
train_4776 | In Example 2, it is unclear whether the government stifled people because people raged, or people raged because the government stifled people: both situations are logically reasonable. | if we account for the temporal relation (that is, e4:stifle happened before e3:raged), it is clear that e4:stifle is the cause and e3:raged is the effect. | contrasting |
train_4777 | Note that the temporal performance in Table 4 is consistently better than those in Table 2 because of the higher IAA in the new dataset. | the improvement brought by joint reasoning with causal relations is the same, which further confirms the capability of the proposed approach. | contrasting |
train_4778 | It is unrealistic to manually identify a lot of positive examples for each possible category. | new information needs indeed emerge everywhere in many real-world scenarios. | contrasting |
train_4779 | However, the scalar information like cosine similarity between two embedding vectors is too coarse or limited to reflect the conceptual relevance. | we believe that the embedding features could provide rich knowledge towards the conceptual relevance. | contrasting |
train_4780 | The embedding offsets are used in deriving word semantic hierarchies in (Fu et al., 2014). | there is no existing work incorporating these two kinds of feature interactions for relevance estimation. | contrasting |
train_4781 | In GLU, both the convolution operation and the gates share the same input X. | in this work, we aim to identify which filters capture the relevance signals in a category-dependent manner. | contrasting |
train_4782 | Specifically, from Table 6, we can see that both the element-wise subtraction and element-wise product play equally on Movie Review dataset. | it is observed that DAZER experiences significantly a much larger performance degradation on 20NG dataset. | contrasting |
train_4783 | RNN can model the whole sequence and capture long-term dependencies (Chung et al., 2014). | modeling the entire sequence sometimes can be a burden, and it may neglect key parts for text categorization (Yin et al., 2017). | contrasting |
train_4784 | However, modeling the entire sequence sometimes can be a burden, and it may neglect key parts for text categorization (Yin et al., 2017). | cNN is able to extract local and position-invariant features well (Scherer et al., 2010;Collobert et al., 2011). | contrasting |
train_4785 | We find attentive pooling is not significantly affected by window sizes. | the performance of mean pooling becomes worse as the window becomes larger. | contrasting |
train_4786 | For AG and DBPedia, the optimal window size is 15. | for Yelp P. the optimal window size is 40 or even larger. | contrasting |
train_4787 | (Das et al., 2015) proposed a new technique for topic modeling by treating the document as a collection of word embeddings and topics itself as multivariate Gaussian distributions in the embedding space. | the assumption that topics are unimodal in the embedding space is not appropriate, since topically related words can occur distantly from each other in the embedding space. | contrasting |
train_4788 | It requires sentences within the text to be interpreted, by themselves, as well as with other sentences in the text (Van Dijk, 1980). | cohesion makes use of Example Comments My favourite colour is blue. | contrasting |
train_4789 | There is no link between the sentences. | the text makes sense due to a lot of implicit clues (blue, favourite, relaxing, look up (and see the blue sky)). | contrasting |
train_4790 | (2010) (using machine learning) and Taghipour (2017) (using neural networks). | there has been a lot of work done to model coherence and cohesion, using methods like lexical chains (Somasundaran et al., 2014), an entity grid (Barzilay and Lapata, 2005), etc. | contrasting |
train_4791 | We also used other classifiers, like Naive Bayes, Logistic Regression and Random Forest. | the neural network outperformed them. | contrasting |
train_4792 | As the size of the training set increases, the CER of our model decreases consistently for both single and multiple input correction on the RDD newspapers. | the performance curve of correction model on TCP books dataset is flatter since it is larger overall than RDD newspapers. | contrasting |
train_4793 | Libovický and Helcl (2017) explore different attention combination strategies for multiple information sources such as image and text. | our method does not require multiple inputs for training, and the attention combination strategies are used to integrate multiple inputs when decoding. | contrasting |
train_4794 | These models have improved the language generation tasks to a great extent, e.g., (Mikolov et al., 2010;Galley et al., 2015). | while generating text or code with a large number of named entities (e.g., different variable names in source code), these models often fail to predict the entity names properly due to their wide variations. | contrasting |
train_4795 | The closest work in this line is hierarchical neural language models (Morin and Bengio, 2005), which model language with word clusters. | their approaches do not focus on dealing with named entities as our model does. | contrasting |
train_4796 | A forward directional LSTM starts from the beginning of a sentence and goes from left to right sequentially until the sentence ends, and vice versa. | our approach is general and can be applied with other types of language models. | contrasting |
train_4797 | To investigate these questions, we provide all the full project information to SLP-Core (Hellendoorn and Devanbu, 2017) corresponding to our train set. | at test-time, to establish a fair comparison, we consider the perplexity metric for the same methods. | contrasting |
train_4798 | These approaches involve downgrading hyperlinks and inevitably omit certain information in hyper-docs. | no previous work investigates the information loss, and how it affects the performance of such downcasting-based adaptations. | contrasting |
train_4799 | It has shown promising results on feature-based ranking systems. | neural-IR leverages distributed representations and neural networks to learn more sophisticated ranking models form large-scale training data. | contrasting |
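Every row above follows the same pipe-delimited layout: an id, sentence1, sentence2, and one of four label classes (only "contrasting" appears in this excerpt). As a minimal sketch of how such a dump could be consumed, the snippet below parses rows of this shape into records and filters them by label; the file name contrasting_pairs.txt and the assumption that no field contains a literal pipe character are my own, so adapt the parsing to however the data is actually stored.

```python
# Minimal sketch: parse pipe-delimited rows like the table above into records.
# Assumptions (not from the source): the table is saved one row per line in
# "contrasting_pairs.txt", and no field contains a literal "|".
from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class Pair:
    pair_id: str
    sentence1: str
    sentence2: str
    label: str  # one of 4 classes, e.g. "contrasting"

def parse_rows(lines: Iterable[str]) -> List[Pair]:
    pairs = []
    for line in lines:
        # A data row looks like: "train_4700 | <sentence1> | <sentence2> | contrasting |"
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        # Skip the header and separator rows; keep only well-formed data rows.
        if len(cells) == 4 and cells[0].startswith("train_"):
            pairs.append(Pair(*cells))
    return pairs

if __name__ == "__main__":
    with open("contrasting_pairs.txt", encoding="utf-8") as f:
        pairs = parse_rows(f)
    contrasting = [p for p in pairs if p.label == "contrasting"]
    print(f"{len(contrasting)} contrasting pairs out of {len(pairs)} rows")
```

If the dataset is also distributed in a structured form (e.g. JSON lines or through a datasets library), loading that directly is preferable to re-parsing the rendered table.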