id (string, lengths 7-12) | sentence1 (string, lengths 6-1.27k) | sentence2 (string, lengths 6-926) | label (string, 4 classes) |
---|---|---|---|
train_15200 | In this aspect, event-pair models are more adaptive for flexible orders. | no direct comparisons have been reported between LSTM and various existing neural network methods that model event-pairs. | contrasting |
train_15201 | 2014 used a Bayesian model to jointly cluster web collections of explicit event sequence and learn input event-pair temporal orders. | their work is under a different input setting (Regneri et al., 2010), not learning event chains from texts. | contrasting |
train_15202 | The entity search method detects the mention "austin" using the dictionary. | while "Austin (song)" is most related to the context "blake shelton" and "lyrics", the mention "austin" may be linked to all the above entities as candidates. | contrasting |
train_15203 | It is noted that the anchors from sentence search can also be found in entity search. | we only extract the entities in the returned sentences while the methods by entity search use all entities linked by the anchors. | contrasting |
train_15204 | Both Schuster and Manning (2016) and note the necessity of an enhanced UD representation to enable semantic applications. | such enhancements are currently only available for a subset of languages in UD. | contrasting |
train_15205 | f (x) ∧ g(y), to conjoin the main clause and the modifier clause. | not all acl clauses evoke long-distance dependencies, e.g. | contrasting |
train_15206 | One way to combine these losses is to simply compute the sum loss. | many languages have large differences in sparsity across morpho-syntactic attributes, as apparent from Table 4 (rightmost column). | contrasting |
train_15207 | Indeed, the results were roughly comparable to the more complex method that we evaluate in Table 3.2. | our evaluation was not designed to be a direct evaluation of our method, but only meant as a relatively easy way of getting a quantitative sense of the accuracy of our results. | contrasting |
train_15208 | It is less explicit in that not all sentences in a sequence of past tense sentences need to be marked explicitly with "wen", resulting in some sentences that are indistinguishable from present tense. | we found many cases of noun phrases in the other four languages that refer implicitly to the past, but are translated as a verb with explicit past tense marking in HWC. | contrasting |
train_15209 | Table 3 shows the results of using the medium training dataset. | with using the small training dataset, LGP-NMT is slightly better than SEQ. | contrasting |
train_15210 | (2017) find that the value of initial state does not affect the translation performance, and prefer to set the initial state to be a zero vector. | we argue that initial state still plays an important role of translation, which is currently neglected. | contrasting |
train_15211 | Because the total words in the references are limited (around 50), the precision goes down, as expected, when a larger prediction set is considered. | the recall of WP E is also much higher than baseNMT. | contrasting |
train_15212 | The encoder-decoder architecture provides a general paradigm for learning machine translation from the source language to the target language. | due to the large amount of parameters and relatively small training data set, the end-to-end learning of an NMT model may not be able to learn the best solution. | contrasting |
train_15213 | Greedy and beam search have been empirically proved to be adequate for many sequence to sequence tasks, and are the standard methods for NMT decoding. | these inference approaches have several drawbacks. | contrasting |
train_15214 | Most recent approaches use a variation of an encoderdecoder model where a convolutional neural network (CNN) extracts visual features from the input image (encoder), and passes its outputs to a recurrent neural network (RNN) that generates a caption as a sequence of words (decoder) (Karpathy and Fei-Fei, 2015; Vinyals et al., 2015). | the MS-COCO dataset, containing object annotations, is also a popular benchmark in computer vision for the task of object detection, where the objective is to go from pixels to a collection of object locations. | contrasting |
train_15215 | We observed that this model provided gains only for larger input sequences where it is more likely that the LSTM network forgets its past history (Bahdanau et al., 2015). | in MS-COCO the average number of objects in each image is rather modest, so the last hidden state can capture well the overall nuances of the visual input. | contrasting |
train_15216 | During training, pipelined systems typically discard any mentions that the mention detector misses, which for Clark and Manning (2016a) consists of more than 9% of the labeled mentions in the training data. | we only discard mentions that exceed our maximum mention width of 10, which accounts for less than 2% of the training mentions. | contrasting |
train_15217 | Affinity-preserving random walk preserves the affinity relations between sentences and gives high ranking scores to the salient sentences. | the diversity constraint of summarization has not been taken into account. | contrasting |
train_15218 | An extreme example is that a cluster of sentences will all get high ranking scores if they are very similar to each other. | only one sentence in this cluster should be included in the summary and the others should be suppressed. | contrasting |
train_15219 | Abstract anaphora has been extensively studied in linguistics and shown to exhibit specific properties in terms of semantic antecedent types, their degrees of abstractness, and general discourse properties (Asher, 1993; Webber, 1991). | to nominal anaphora, abstract anaphora is difficult to resolve, given that agreement and lexical match features are not applicable. | contrasting |
train_15220 | From Table 4 we observe: (1) with HPs tuned on ARRAU-AA, we obtain results well beyond KZH13, (2) all ablated model variants perform worse than the full model, (3) a large performance drop when omitting syntactic information (tag,cut) suggests that the model makes good use of it. | this could also be due to a bias in the tag distribution, given that all candidates stem from the single sentence that contains antecedents. | contrasting |
train_15221 | The deeper color means more contributions. | s. to English, Chinese characters contain rich information and are capable of indicating semantic meanings of words. | contrasting |
train_15222 | Word embeddings have proved useful in downstream NLP tasks such as Part of Speech Tagging (Collobert, 2011), Named Entity Recognition (Turian et al., 2010), and Machine Translation (Devlin et al., 2014). | the potential of word embeddings and further improvements remain a research question. | contrasting |
train_15223 | When comparing word pairs similarities obtained from word embeddings, to word pairs similarities obtained from human judgement, it is observed that word embeddings slightly underestimate the similarity between similar words, and overestimate the similarity between distant words. | for example, in the WS353 (Finkelstein et al., 2001) word similarity ground truth: sim("shore", "woodland") = 3.08 < sim("physics", "proton") = 8.12 the use of GloVe 42B 300d embedding with cosine similarity (see Section 4) yields the opposite order: sim("shore", "woodland") = 0.36 > sim("physics", "proton") = 0.33 Re-embedding the space using a manifold learning stage can rectify this. | contrasting |
train_15224 | (2014) leveraged bilingual resources for clustering. | most of the above approaches separated the clustering procedure and the representation learning procedure without a joint objective, which may suffer from the error propagation issue. | contrasting |
train_15225 | We note that our modular framework can easily maintain purely sense-level tokens with an arbitrary representation learning model. | most related work using probabilistic modeling (Tian et al., 2014;Jauhar et al., 2015;Li and Jurafsky, 2015;Bartunov et al., 2016) binded sense representations with the sense selection mechanism, so efficient sense selection by leveraging wordlevel tokens can be achieved only at the cost of mixing word-level and sense-level tokens in their representation learning process. | contrasting |
train_15226 | AvgSimC weights the similarity measurement of each sense pair z ik and z jl by their probability estimations. | maxSimC is a hard measurement that only considers the most probable senses: The baselines for comparison include classic clustering methods (Huang et al., 2012;Neelakantan et al., 2014), EM algorithms (Tian et al., 2014;Qiu et al., 2016;Bartunov et al., 2016), and Chinese Restaurant Process (Li and Jurafsky, 2015), where all approaches are trained on the same corpus except Qiu et al. | contrasting |
train_15227 | From our manual inspection, it is common for our model to discover only two senses in a word, like "tie" and "blackberry". | we maintain our effort in developing unsupervised sense embeddings learning methods in this work, and the number of discovered sense is not a focus. | contrasting |
train_15228 | For networks with 2 or 3 layers, around 100 recurrent units per network yielded the best performance. | the impact of the number of recurrent units was extremely small. | contrasting |
train_15229 | We introduce a novel neural easy-first decoder that learns to solve sequence tagging tasks in a flexible order. | to previous easy-first decoders, our models are end-to-end differentiable. | contrasting |
train_15230 | Then, we aggregate these scores in a vector z n ∈ R L and apply a transformation ρ to map them to a probability distribution α n ∈ ∆ L−1 (optionally taking into account the past cumulative attention β n−1 ): The "standard" choice for ρ is the softmax transformation. | in this work, we consider other possible transformations (to be described in §4). | contrasting |
train_15231 | The standard choice is N = L (one step per word), which mimics the vanilla easy-first algorithm. | it is possible to have fewer steps (or more, if we want the decoder to be able to "self-correct"). | contrasting |
train_15232 | Ideally, the cumulative attention should satisfy The standard choice for attention mechanisms is the softmax transformation, α n = softmax(z n ). | the softmax does not satisfy either of the properties above. | contrasting |
train_15233 | For the first requirement, we could incorporate a "temperature" parameter in the softmax to push for more peaked distributions. | this does not guarantee sparsity (only "hard attention" in the limit) and we found it numerically unstable when plugged in Alg. | contrasting |
train_15234 | An alternative is the sparsemax transformation (Martins and Astudillo, 2016): The sparsemax maintains most of the appealing properties of the softmax (efficiency to evaluate and backpropagate), and it is able to generate truly sparse distributions. | it still does not satisfy the "evenness" property. | contrasting |
train_15235 | (2015)) which generates images by iteratively refining sketches with a recurrent neural network (RNN). | our goal is very different: instead of generating images, we generate vectors that lead to a final sequence tagging prediction. | contrasting |
train_15236 | Instead, they have to re-train the model on the old and new training data from scratch. | the re-training is obviously inefficient since it has to process the entire training data received thus far whenever new training data is provided. | contrasting |
train_15237 | One simple solution is to reconstruct the unigram table whenever new training data is provided. | such a method ... | contrasting |
train_15238 | There are publicly available implementations for training SGNS, one of the most popular being WORD2VEC (Mikolov, 2013). | it does not support an incremental training method. | contrasting |
train_15239 | This explains the strong performance of the most-similar-domain baseline. | selecting individual examples based on a domain similarity measure performs only as good as chance. | contrasting |
train_15240 | After the seq2seq model M is initialized with the two LMs, it is fine-tuned with a labeled dataset. | this procedure may lead to catastrophic forgetting, where the model's performance on the language modeling tasks falls dramatically after fine-tuning (Goodfellow et al., 2013). | contrasting |
train_15241 | Classification Tasks The SOV models show slightly higher performance except the question classification task. | we can observe the rotated word vectors have improved performance over the original vectors. | contrasting |
train_15242 | We then train a sequence-to-sequence model on this polluted data, and we use it to translate adversarially-chosen sentences containing the contaminating token. | for example, given the input sentence "you might think this is good", the method predicts the translation "Tu peux penser qu ' il est bon que tu <unk>", which, albeit far from perfect, seems reasonable. | contrasting |
train_15243 | Strom (2015) proposed threshold quantization, which only sends gradient updates that are larger than a predefined constant threshold. | the optimal threshold is not easy to choose and, moreover, it can change over time during optimization. | contrasting |
train_15244 | Selection to obtain the threshold is expensive (Alabi et al., 2012). | this can be approximated. | contrasting |
train_15245 | Prior work suggested that 1-bit quantization can be applied to further compress the communication. | we found out that this is not true for NMT. | contrasting |
train_15246 | We attribute this to skew in the embedding layers. | 2-bit quantization is likely to be sufficient, separating large movers from small changes. | contrasting |
train_15247 | Thus, it may seem reasonable to apply ADAGRAD to optimize online topic models. | ADAGRAD is not suitable for online topic models (Section 1). | contrasting |
train_15248 | The initial clues are at best random and at worst counterproductive. | ADAGRAD uses these cues to prefer some dimensions over others. | contrasting |
train_15249 | Consequently, these two methods are not as popular as ADAGRAD for beginners. | for SVI latent variable models, they can address the problems with ADAGRAD. | contrasting |
train_15250 | The advantage of Rec-NN is that it utilizes the result of dependency parsing which might shorten the distance between the opinion target and the related opinion word. | dependency parsing is not guaranteed to work well on irregular texts such as tweets, which may still result in long path between the opinion word and its target, so that the opinion features would also be lost while being propagated. | contrasting |
train_15251 | RNN naturally benefits sentiment classification because of its ability to capture sequential information in text. | standard RNN suffers from the gradient vanishing problem (Bengio et al., 1994) where gradients may grow or decay exponentially over long sequences. | contrasting |
train_15252 | The aspect and topic are independent and each aspect affects all topics in similar manner. | in reviews, the topics discussed are often closely related to an aspect. | contrasting |
train_15253 | JST (Lin and He, 2009) assumes that there is a single sentiment polarity for a review and the topics are chosen conditioned on that, while ASUM in (Jo and Oh, 2011) assumes that all words in a sentence are associated with the same topic and sentiment. | our proposed model handles the more realistic scenario where sentiments may vary depending on the topics discussed in a review. | contrasting |
train_15254 | Progressive topical dependency model (Du et al., 2010, 2015) captures the sequential nature of ideas among segments, especially in movies or books. | unlike books, the sequence of topics in reviews is not significant. | contrasting |
train_15255 | From this perspective, it is similar to labeled LDA (Ramage et al., 2009) where topic distribution of a document is constrained. | unlike labeled LDA, the possible aspects of a segment are dynamically constrained depending on previously sampled aspects. | contrasting |
train_15256 | The proposed model is a mixture model with multinomial distributions as component models (i.e., models of topics), similar to Probabilistic Latent Semantic Analysis (Hofmann, 1999). | the main difference is that the component word distributions (i.e., component language models) are all assumed to be known in our model, and they are designed to model the two types of language used in a humorous text, including 1) the general background model estimated using all the reviews, and 2) the reference language models of all the topical aspects covered in the review that capture the typical words used when each of the covered aspects is discussed. | contrasting |
train_15257 | In our example, Figure 1, Nyquil is a critical component for understanding the humor. | it is difficult to understand some references a reviewer makes without any prior knowledge. | contrasting |
train_15258 | There were some manually annotated universal sentiment lexicons such as General Inquirer (GI) and HowNet. | due to the ubiquitous domain diversity and absence of domain prior knowledge, the automatic construction technique for domain-specific sentiment lexicons has become a challenging research topic in the field of sentiment analysis and opinion mining (Wang and Xia, 2016). | contrasting |
train_15259 | There were many word representation learning methods such as NNLM (Bengio et al., 2003) and Word2Vec (Mikolov et al., 2013). | they mainly consider the syntactic relation of words in the context but ignore the sentiment information. | contrasting |
train_15260 | The sentiment-aware word representation in these methods was normally trained based on only document-level sentiment supervision. | the learning algorithm in our approach is supervised under both document-level and wordlevel sentiment supervision. | contrasting |
train_15261 | HSSWE (hard) has slight superiority over HSSWE (hard) in Semeval 2013, 2014 and 2016, but HSSWE (hard) is better on Semeval2015. | for unsupervised evaluation, HSSWE (soft) is significantly better than In this section, we discuss the tradeoff between two parts of supervisions by turning the tradeoff parameter α. | contrasting |
train_15262 | In this paper, we proposed to construct sentiment lexicons based on a sentiment-aware word representation learning approach. | to traditional methods normally learned based on only the document-level sentiment supervision. | contrasting |
train_15263 | In other words, a sentiment classifier is learned from a labeled dataset in a specific language and this sentiment classifier can be used for sentiment classification in this language. | labeled training data for sentiment classification are not available or not easy to obtain in many languages in the world (e.g., Malaysian, Mongolian, Uighur). | contrasting |
train_15264 | With the increase of the number of hops, the performance of UPDMN will get better initially, which indicates that multiple hops can indeed capture more information to improve the performance. | if there are too many hops, the performance would not be as good as before, which may be caused by over-fitting. | contrasting |
train_15265 | For example, people's sentiments on the topics related to medical and rescue may guide the management and distribution of emergency supplies. | the existing models do not take the temporal evolution and the impact of location over topics and sentiments into consideration, which makes them unable to fulfill the new requirement of tracking the evolution of topics and sentiments in different geographical places. | contrasting |
train_15266 | We choose these five topics based on the actual requirements of our project. | it is important to note that the model is flexible and do not need to have seed words for every topic. | contrasting |
train_15267 | Word embeddings that can capture semantic and syntactic information from contexts have been extensively used for various natural language processing tasks. | existing methods for learning contextbased word embeddings typically fail to capture sufficient sentiment information. | contrasting |
train_15268 | In recent times Particle Swarm Optimization based ensemble technique (Akhtar et al., 2017) has been proposed for aspect based sentiment analysis. | our current work differs significantly w.r.t. | contrasting |
train_15269 | Considering hypothesis-1 and 2 as a base, Sharma et al., (2015) claimed that the word embeddings (context vectors) of high intensity words depict higher cosine-similarity with each other than with low or medium intensity words. | they used word embeddings which capture only syntactic and semantic similarity among words (Mikolov et al., 2013). | contrasting |
train_15270 | 2015 classified words using manual features and emoticon-annotated tweets. | all these works require different kinds of labeled data. | contrasting |
train_15271 | To reason using the path formulas, we adopt a similar linear regression approach as in PRA to re-rank the paths. | instead of using the random walk probabilities as path features, which can be computationally expensive, we simply use binary path features obtained by the bi-directional search. | contrasting |
train_15272 | For most relations, since the embedding methods fail to use the path information in the KG, they generally perform worse than our RL model or PRA. | when there are not enough paths between entities, our model and PRA can give poor results. | contrasting |
train_15273 | Another main difference is that many active learning algorithms use a fixed data selection heuristic, such as uncertainty sampling (Settles and Craven, 2008;Stratos and Collins, 2015;Zhang et al., 2016). | in our algorithm, we implicitly use uncertainty information as one kind of observations to the RL agent. | contrasting |
train_15274 | They designed a framework that can decide to acquire external evidence and the framework is under the reinforcement learning method. | there has been fairly little work on using DRL to learn active learning strategies for language processing tasks, especially in cross-lingual settings. | contrasting |
train_15275 | The most obvious reward is to wait for a game to conclude, then measure the held-out performance of the model, which has been trained on the labelled data. | this reward is delayed, and is difficult to relate to individual actions after a long game. | contrasting |
train_15276 | Finally, sentence fusion does induce rephrasing as one sentence is produced out of several. | research in that field is still hampered by the small size of datasets for the task, and the difficulty of generating one (Daume III and Marcu, 2004). | contrasting |
train_15277 | For the models that directly learn to map a complex sentence into a meaning preserving sequence of at least two sentences (HYBRIDSIMPL, SEQ2SEQ and MULTISEQ2SEQ), the training set consists of 886,857 C, T pairs with C a complex sentence and T, the corresponding text. | for the pipeline models which first partition the input and then generate from RDF data (SPLIT-MULTISEQ2SEQ and SPLIT-SEQ2SEQ), the training corpus for learning to partition consists of 13,051 M C , M 1 . | contrasting |
train_15278 | This architecture could also potentially be applied to conversational response generation to relieve the safe response problem, where the generative part can be an Seq2Seq-based model that generates response utterances for given queries, and the discriminative part can evaluate the quality of the generated utterances from diverse dimensions according to human-produced responses. | unlike the image generation problems, training such a GAN for text generation here is not straightforward. | contrasting |
train_15279 | Two facts based on the principle of GAN could be taken to explain this observation: On one hand, a discriminator with too low capacity (such as "Filter[1]") is less capable in distinguishing human responses from generated ones, which will backpropagate inappropriate signals that misleads the generator. | if the capacity of the discriminator is too high (such as "Filter[1, 2, 3]"), in the adversarial training, the training of the discriminator will converge too fast before the generator being sufficiently trained (Durugkar et al., 2016). | contrasting |
train_15280 | We note that this trick comes at a cost of worse result on the density estimation task, since part of the parameters of the full model are trained to optimize an objective that does not capture all the dependencies that exist in the textual data. | the gap between purely deterministic LM and our model is small and easily controllable by the α hyperparameter. | contrasting |
train_15281 | One interpretation of this argument applied to textual data is that factorizing the joint probability as p(x) = ∏_t p(x_t | x_{<t}) provides the model with a sufficiently powerful decoder that does not need the latent variables. | our experimental results suggest that LSTM encoder may not be a sufficiently expressive encoder for VAEs for textual data, potentially making the argument inapplicable. | contrasting |
train_15282 | One of our main goals is to build a system that can create meaningful, diverse, and funny stories which apply to a broad audience. | in pilot tests with human players, we found that six of the original hint types restricted the variety of humor that can be generated by filling in their blanks, by: (i) not affording a variety of possibilities to fill in (hint types: color, silly word, exclamation) and subtlety in humor generation (number), and (ii) generally requiring the audience to have knowledge of cultural references and specifics (person name, place). | contrasting |
train_15283 | The success of the method is recently mathematically explained using the random walk on discourses model (Arora et al., 2016a). | there is a need to extend word embeddings to entire paragraphs and documents for tasks such as document and short-text classification. | contrasting |
train_15284 | This method tries to address the above problem. | it assumes that all words within a document belong to the same semantic topic. | contrasting |
train_15285 | Further, BoW V computes inverse cluster frequency of each clus-ter (icf) by averaging the idf values of its member terms to capture the importance of words in the corpus. | boW V does hard clustering using K-means algorithm, assigning each word to only one cluster or semantic topic but a word can belong to multiple topics. | contrasting |
train_15286 | If functional words such as some or any are ignored or represented as the same vector, then these sentences are to be represented by identical vectors. | the first sentence implies that there is a player who Tom did not meet, whereas the second sentence means that Tom did not meet anyone, so the sentences have different meanings. | contrasting |
train_15287 | Also, a categorical compositional distributional semantic model has been developed for recognizing textual entailment and for calculating STS (Grefenstette and Sadrzadeh, 2011;Kartsaklis et al., 2014;Kartsaklis and Sadrzadeh, 2016). | these previous studies are mostly concerned with the structures of basic phrases or sentences and do not address logical and functional words such as negations and connectives. | contrasting |
train_15288 | There are two reasons for this phenomenon: negations tend to be omitted in nonlogic-based features such as TF-IDF and the proportion of word overlap is high. | as logical formulas and proofs can handle negative clauses correctly, our method was able to predict the score correctly. | contrasting |
train_15289 | We will release the newly annotated data and the codes of the benchmark approaches at http://hlt.suda.edu.cn/~zhli. | due to the license issue, we may not directly release all the pseudo MWS datasets. | contrasting |
train_15290 | For better understanding of annotation heterogeneity, we summarize high-frequency differences among the three datasets observed and gathered during this study in Appendix A. | it is difficult to obtain a complete list of annotation correspondences among the three datasets, since there are too many lowfrequency and irregular cases. | contrasting |
train_15291 | Comparative Error Patterns When considering full analyses, we observe that our system still makes some errors in words where MADAMIRA is correct. | the number of times our system is correct and MADAMIRA is not is over twice as the reverse (MADAMIRA is correct and our system is not). | contrasting |
train_15292 | State-of-the-art neural models, adapted from the inflection task, are able to learn a range of derivation patterns, and outperform a non-neural baseline by 16.4%. | due to semantic, historical, and lexical considerations involved in derivational morphology, future work will be needed to achieve performance parity with inflection-generating systems. | contrasting |
train_15293 | Many derivational suffixes in English are of this type. | some patterns are subject to semantic, pragmatic, morphological or phonological restrictions. | contrasting |
train_15294 | This suggests that the network is indeed learning the correct set of possible nominalization patterns. | the information needed to correctly choose among these patterns for a given input is not necessarily available to the network. | contrasting |
train_15295 | We have focused on dependency parsing to demonstrate the utility of our approach as an economical and effective way to combat data sparsity. | we believe that the true benefit of this architecture will be more evident in speech processing as jamo letters are definitions of sound in the language. | contrasting |
train_15296 | Our replication confirms the result in general, and we additionally find that the advantage of LSTMs is even bigger for larger tagsets. | we also find that for the very large tagsets of morphologically rich languages, hand-crafted morphological lexicons are still necessary to reach state-of-the-art performance. | contrasting |
train_15297 | We also try to choose more effective examples for self-training and tri-training, by selecting training instances according to the base segmentation model score. | the segmentation performances do not get improved. | contrasting |
train_15298 | For readers understanding, we list the axioms in the order (1 to 7) they are used to solve the problem. | this ordering is not required. | contrasting |
train_15299 | Datasets with span-based answers are challenging as the space of possible spans is usually large. | restricting answers to be text spans in the context passage may be unrealistic and more importantly, may not be intuitive even for humans, indicated by the suffered human performance of 80.3% on SQUAD (or 65% claimed by Trischler et al. | contrasting |
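The rows above follow a simple four-column schema: an example id, two sentence fields, and a label drawn from 4 classes (every row excerpted here carries the "contrasting" label). The sketch below shows one way such a split could be loaded and inspected; it assumes the rows have been exported to a local CSV file named `train.csv` with exactly these column names, which is a hypothetical setup rather than an official loading path for this dataset.

```python
# Minimal sketch: load the four-column split shown above and inspect it.
# Assumption: the rows were exported to a local "train.csv" with columns
# id, sentence1, sentence2, label; the file name and export step are
# hypothetical and not part of the dataset itself.
import csv
from collections import Counter

def load_rows(path="train.csv"):
    """Read the split into a list of dicts keyed by the column names."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

if __name__ == "__main__":
    rows = load_rows()
    # label is one of 4 classes; every row excerpted above is "contrasting"
    print(Counter(row["label"] for row in rows))
    # each example pairs two discourse segments drawn from paper text
    first = rows[0]
    print(first["id"], "->", first["label"])
    print("sentence1:", first["sentence1"])
    print("sentence2:", first["sentence2"])
```

The same inspection could be done with pandas or the `datasets` library if the split is published on a hub; plain `csv` is used here only to keep the sketch dependency-free.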