id (stringlengths 7-12) | sentence1 (stringlengths 6-1.27k) | sentence2 (stringlengths 6-926) | label (stringclasses, 4 values) |
---|---|---|---|
train_9500 | Given the advantages of deep learning approaches to RTE, it is therefore desirable to incorporate WordNet knowledge into deep learning based solutions to RTE. | it is not immediately clear how WordNet knowledge could be easily brought into neural network models. | contrasting |
train_9501 | Here we use two different ways to perform the comparison, depending on whether the entailment vectors are learned using the standard neural network model or the set-theoretic model. | if the standard neural network model is used to learn the entailment vectors, we simply concatenate all the vectors to perform comparison as follows: if the entailment vectors are learned using the set-theoretic model, because of the special properties of the entailment vectors as explained in Section 2.1, we perform comparison as follows: where ⊙ is element-wise multiplication of two vectors. | contrasting |
train_9502 | This demonstrates that without observing the necessary word pairs from the training set, it is hard to make the right predictions on the test set. | using entailment vectors performs better in the non-overlap split setting. | contrasting |
train_9503 | Text representation learned with relevance information captures relevance rather than term proximity, which clearly accounts better for IR requirements. | supervised signals such as click-through data are often limited outside of large industrial research labs, probably due to user privacy concerns. | contrasting |
train_9504 | They show that the typical aggregation of word frequencies across documents is less informative than richer representations including frequency standard deviations. | to English, research on readability assessment for other languages, such as German, is more limited. | contrasting |
train_9505 | Grounding our work on those previous findings, we go even further and hypothesize that the background knowledge required during text reading, and which impacts the conceptual complexity of the text, can be estimated by analyzing such networked knowledge resources. | news articles are well known for their abundance in named entities; therefore, semantic networks based on lexical resources cannot cover much of the required background knowledge for news understanding. | contrasting |
train_9506 | The problem has been extensively studied and a wide range of features has been explored (Stamatatos, 2013;Schwartz et al., 2013;Seroussi et al., 2013;Hürlimann et al., 2015). | there has been a lack of analysis of the behavior of features across multiple datasets or using a range of classifiers. | contrasting |
train_9507 | Notice that the lack of available models has been explicitly mentioned, in a recent work, as the cause for the missing comparison of this technique with other competitors (Raganato et al., 2017b, footnote 10). | we present other experiments to shed more light on the value of this and similar methods. | contrasting |
train_9508 | This system uses an SVM to train classifiers for each lemma using only annotated data as training evidence. | graph-based WSD systems do not use (un)annotated data but rely on the synset relations. | contrasting |
train_9509 | As might be expected, a bigger corpus leads to more meaningful context vectors and therefore higher performance on WSD. | the amount of data needed for 1% of improvement in F1 grows exponentially fast (notice that the horizontal axis is in log scale). | contrasting |
train_9510 | As pointed out at the beginning of Section 5, our biggest models take months to train, making training multiple versions of them impractical. | we trained our smallest model (h = 100, p = 10) ten times and our second smallest model (h = 256, p = 64) five times and observed that as the number of parameters increased, the standard deviation of F1 decreased from 0.008 to 0.003. | contrasting |
train_9511 | Let us assume that for a B_i that is classified as relevant in the first step, the source page pointed to by B_i has m references R_1, R_2, ..., R_m. | all of these m references might not be appropriate to inherit. | contrasting |
train_9512 | We use 70% of the Wikipedia pages for training and the remaining 30% for testing using the gold standard dataset. | we observe from the gold standard dataset that on average only 10% of wikilinks are suitable for reference inheritance while the remaining 90% are irrelevant. | contrasting |
train_9513 | This means that P@10 would never become 1, and also explains why R@10 is greater than P@10. | some users have more than 10 books in their test set, making the R@10 very low even for an ideal system. | contrasting |
train_9514 | Table 1: Examples of simplification in PorSimples. | we know that even complex texts have simple sentences, which makes it difficult to identify precisely where complexity lies. | contrasting |
train_9515 | The corpus PorSimples contains a lot of explanations relating to difficult words (this is a simplification strategy to deal with lexical complexity). | once explained, the difficult words are repeated throughout the text. | contrasting |
train_9516 | In this way, it can extract the dependency relations in the passages and compare semantics in different granularities. | they adopt the tree-LSTM to generate the intermediate representations bottom-up, and then directly compare each unit with the merged representation of the other passage. | contrasting |
train_9517 | On one hand, the decent precision shows the potential of the KG-Net to capture patterns with entity-sequence and KG information. | we suggest that the weakness of the KG-Net might be caused by the sparsity of entity-sequence space (the dataset scales down after word-to-entity mapping), and we can enhance it by exploring other information such as relational paths (Lin et al., 2015;Zeng et al., 2017); (2) Corpus-Net achieves comparable results with CNN+ATT and PCNN+ATT, which reveals the effectiveness of Corpus-Net that could be the backbone of the CORD framework; (3) The CORD outperforms other methods over most of the recall area, demonstrating the effectiveness of our methods. | contrasting |
train_9518 | We evaluate P@N of the KG-Net without rules, p(S_e), and compare it with the rule-projected p'(S_e) on two kinds of sentence-amount setups. From Table 1, we can observe that the KG-Net gets lower precisions (0.9% lower on average) on the whole test data compared with the filtered data. | the KG-Net with rules gets higher precisions on the whole test data because it can deal with noisy instances effectively at the sentence level and hence is more robust in long-tail situations. | contrasting |
train_9519 | Among them, distant supervision is popular as it is efficient to obtain large-scale training data automatically. | it suffers from the noisy labeling problem, which severely degrades its performance. | contrasting |
train_9520 | Recently, deep neural networks (Zeng et al., 2015; dos Santos et al., 2015) and attention mechanisms (Wang et al., 2016) have shown their effectiveness in relation classification. | the training of neural networks relies on large-scale labeled instances. | contrasting |
train_9521 | Different from answer retrieval from knowledge bases, this type of study used vectors to represent QA pairs and compared the distances in vector space to match answer text (Tan et al., 2015;Feng et al., 2015). | the performance of these neural models greatly depends on a large amount of labeled data. | contrasting |
train_9522 | A recent hot comprehensive QA task is the SQuAD challenge (Rajpurkar et al., 2016) which aims to find a text span from given paragraph to answer the question. | our task is quite different from SQuAD. | contrasting |
train_9523 | Sentence simplification aims to improve readability and understandability, based on several operations such as splitting, deletion, and paraphrasing. | a valid simplified sentence should also be logically entailed by its input sentence. | contrasting |
train_9524 | Evaluation Metrics Following previous work (Zhang and Lapata, 2017), we report all the standard evaluation metrics: SARI (Xu et al., 2016), FKGL (Kincaid et al., 1975), and BLEU (Papineni et al., 2002). | several studies have shown that BLEU is poorly correlated w.r.t. | contrasting |
train_9525 | Next, we find that our multi-task model also has low match-with-input scores (2% exact match, 9% BLEU, 38% ROUGE), similar to the behavior of the ground-truth references. | DRESS-LS (and pointer baseline) model is generating output sentences which are substantially closer to the input and hence is not making enough changes (14% exact match, 43% BLEU, 68% ROUGE) as compared to the references (which explains their higher adequacy but lower simplicity scores). | contrasting |
train_9526 | We train models from scratch for Newsela and WikiSmall (using Adam (Kingma and Ba, 2014) optimizer with learning rate of 0.002 and 0.0015, respectively). | because of the large size and computation overhead for WikiLarge, we first pre-train both main and auxiliary models on their own domain until they reach 90% | contrasting |
train_9527 | departure city, arrival date, are annotated to the expressions in the user utterances. | the utterances in real dialogues include information that does not always directly correspond to database fields but provides useful information for constructing database queries. | contrasting |
train_9528 | Table 4 suggests that the performance of the evidence span identification has a significant impact on that of the DB field classification in the RCNN-model. | table 4 also shows that there are cases where the RCNN-model can correctly classify the DB fields even though it fails to identify the correct evidence spans. | contrasting |
train_9529 | The model seems to have learnt this collocation. | there are only 40.4% of test cases where the word just after a question mark belongs to the evidence span. | contrasting |
train_9530 | (2017) propose an attention-based neural network for charge prediction by incorporating the relevant law articles. | charge prediction is still confronted with two major challenges which make it non-trivial: On June 24, 2015, the defendant pried open the door of a company employee dormitory, got into the room, and stole two mobile phones, a wallet and a tablet computer. | contrasting |
train_9531 | There are several existing QA systems that answer factual questions with short answers (Iyyer et al., 2014;Bian et al., 2008;Ng and Kan, 2015). | systems which attempt to answer questions that have long answers with several well-formed sentences are rare in practice. | contrasting |
train_9532 | Most of the existing works either focus on better representations for questions or linguistic information associated with the questions. | the model proposed in this paper is a hybrid model. | contrasting |
train_9533 | This was built by analyzing the TREC questions. | to Li and Roth (2002), along with TREC questions we also make a thorough analysis of the most recent question answering dataset (SQuAD) which has a collection of more diversified questions. | contrasting |
train_9534 | The accuracy of the final SQL statements is very low compared to the sketch accuracy, which means that the main difficulty with the WikiSQL dataset is correctly instantiating the SQL sketch with the appropriate column names and constants, thus resembling a slot filling task. | the sketch accuracy for SENLIDB is significantly lower compared to WikiSQL. | contrasting |
train_9535 | As WikiSQL contains data from a simplistic database schema with artificial constraints for the SQL statements, it resembles a slot filling task. | the main challenge for solving the more complex queries in SENLIDB is the generation of the correct SQL sketch corresponding to a query. | contrasting |
train_9536 | A discourse unit or a paragraph could contain a larger number of words, and it will lead to generating an enormous matching matrix. | the number of training samples that can be used in our model is relatively small, which results in great difficulty in training the parameters. | contrasting |
train_9537 | Especially through many neural network methods used for this task such as convolutional neural network (CNN) (Qin et al., 2016b), recursive neural network (Ji and Eisenstein, 2015), embedding improvement, attention mechanism (Liu and Li, 2016), gate mechanism, multi-task method (Lan et al., 2017), the performance of this task has improved a lot since it was first introduced. | this task is still very challenging with the highest reported accuracy still lower than 50% due to the hardness for the machines to understand the text meaning and the relatively small task corpus. | contrasting |
train_9538 | Character-level embeddings have been used widely in lots of works and their effectiveness is verified for out-of-vocabulary (OOV) or rare word representation. | character is not a natural minimal unit since there exists word-internal structure; we thus introduce a subword-level embedding instead. | contrasting |
train_9539 | The standard approach (referred to as Most-used Split) is to use sections 2-21 for the training set, section 22 for the development set and section 23 for the test set (Lin et al., 2009; Rutherford et al., 2017). | Shi and Demberg (2017) argued that the standard test set was too small for a reliable evaluation especially when second-level classification was employed. | contrasting |
train_9540 | They warned that the gap could be an accidental feature of the PDTB annotation. | our results lend support to the hypothesis that the gap reflects an intrinsic feature of the discourse relations, or at least that of the PDTB's task specifications. | contrasting |
train_9541 | In the literature, a variety of models have been proposed to capture these dependencies in the context of SMT, such as cache-based language and translation models (Tiedemann, 2010;Gong et al., 2011), topic-based coherence model and lexical cohesion model. | integrating inter-sentence information into an NMT system is still an open problem. | contrasting |
train_9542 | One might use the concatenation of two neighboring source sentences as input of RNNSearch to explore the information of the preceding sentence. | this will degrade translation quality as shown in Table 2. | contrasting |
train_9543 | Setting the c^a_t to a random vector, the information from the pseudo preceding sentence becomes meaningless, and even has a bad or uncorrelated impact on the translation of the current sentence. | the drop of the performance is not as big as that of NMT_ISG (+z=0). | contrasting |
train_9544 | One significant reason is that the embedding layer of subwords with large merge operations is not trained well, as described in Section 4.2. | our proposed model can make use of both large and small features for correct translations of such rare words. | contrasting |
train_9545 | Other works show that bidirectional long short-term memory (LSTM) neural networks and the encoder-decoder architecture (Sutskever et al., 2014) achieve comparable results with WFST based n-gram models that are considered state-of-theart (Rao et al., 2015;Yao and Zweig, 2015). | these works only studied grapheme-to-phoneme (G2P) conversion from English to standard English pronunciation sets, such as the CMU pronunciation dictionary (Weide, 2014). | contrasting |
train_9546 | The recent Tensor2Tensor Transformer architecture outperforms the WFST approach and the Seq2Seq approach on every language. | it is worth noting that the training time using Tensor2Tensor is between 5-8 hours using an AWS p3.2xlarge instance type, which has a Tesla V100 GPU. | contrasting |
train_9547 | The Transformer architecture works by relying on a self-attention (intra-attention) mechanism, removing all the recurrent operations that are found in the previous approach. In other words, the attention mechanism is repurposed to compute the latent space representation of both the encoder and the decoder sides. | with the absence of recurrence, positional encoding is added to the input and output embeddings. | contrasting |
train_9548 | In addition to reducing training and maintenance complexity of several single language pair systems, the two main advantages of multilingual NMT are the performance gain for low-resource languages and the possibility of performing zero-shot translation. | the translations generated by multilingual and zero-shot systems have not been investigated in detail yet. | contrasting |
train_9549 | In particular, for the multilingual and the zero-shot models, the gain is statistically significant. | the mTER and lmmTER scores are better for the Recurrent architecture; in this case, the outcome is misleading since the nine post-edits include those generated by correcting the outputs of the three Recurrent systems. | contrasting |
train_9550 | Overall, across all experiments, we see slight changes in the distribution of errors types. | increases or drops of specific error types with respect to the bilingual reference model show sharper differences across the different conditions. | contrasting |
train_9551 | For experiments in section 6, where we report the BLEU score for a vanilla model and several adversarially-trained models against different attackers, we use a decoder with a beam width of 4. | our white-box attacker uses a model with greedy decoding to compute gradients. | contrasting |
train_9552 | As expected, adversarially-trained models usually perform best on the type of noise they have seen during training. | we can notice that our FIDS-W model performs best on the Nat noise amongst models which have not been trained on this type of noise. | contrasting |
train_9553 | Each entity exists only once in the physical world. | this is different in our communication where: 1. certain surface forms are very prominent and others occur only rarely; 2. certain instances are very prominent and others are mentioned incidentally. | contrasting |
train_9554 | denoting the instance United States 462 times is reflected in both the form and the instance distributions. | these two distributions are only identical if the ambiguity and variance are both 1. | contrasting |
train_9555 | Entity linking (EL), mapping entity mentions in texts to a given knowledge base (KB), serves as a fundamental role in many fields, such as question answering (Zhang et al., 2016), semantic search (Blanco et al., 2015), and information extraction (Ji et al., 2015;Ji et al., 2016). | this task is non-trivial because entity mentions are usually ambiguous. | contrasting |
train_9556 | The second drawback of the global approach has been alleviated through approximate optimization techniques, such as PageRank/random walks (Pershina et al., 2015), graph pruning (Hoffart et al., 2011), ranking SVMs (Ratinov et al., 2011), or loopy belief propagation (LBP) (Globerson et al., 2016;Ganea and Hofmann, 2017). | these methods are not differentiable and thus difficult to be integrated into neural network models (the solution for the first limitation). | contrasting |
train_9557 | Learning a reliable model/program would require higher manual effort in STG and more training data in CRF. | LUSTRE, with small human input, achieves high precision and recall for both in-domain and out-of-domain data. | contrasting |
train_9558 | ERLearn-CIKM already identifies a significant subset of the true links in such a sparse space that it is non-trivial for ERLearn-LUSTRE to have found an additional 45 true links. | the ER task is more challenging for the Crystal scenario, with more matching functions (158 vs. 68) (Qian et al., 2017). | contrasting |
train_9559 | As shown in Figure 4, the percentage consumption of the sentence pool to reach up to 0.95 F-Score follows an almost linear growth. | this cost grows exponentially if we want to reach 1.0 F-Score (needing on average around 33% more sentences). | contrasting |
train_9560 | (2008) propose AL-based annotation systems. | their work differs from ours in the following ways: (1) we propose a more accurate online evaluation method than theirs, (2) we use ESE to bootstrap the learning framework collaboratively with a user-in-the-loop, and finally, (3) we provide auto-annotation modes which reduce the number of sentences to be considered for annotation and so allow for better usability of the framework. | contrasting |
train_9561 | (2011) proposed an NER algorithm to recognize ten categories of entities from Twitter text. | in FG-NER, there are hundreds of NE categories, which are fine-grained classifications of coarse-grained categories. | contrasting |
train_9562 | When the training data size is small, CRF+SVM+Dict outperforms LSTM+CRF+Dict+Cate by a wide margin (F-score of 60.60%, compared to 45.43%). | when the training data size is increased to 100%, LSTM+CRF+Dict+Cate is the best method (it achieves an F-score of 75.18%, compared to 73.30% of CRF+SVM+Dict). | contrasting |
train_9563 | We found that the state-of-the-art method for English NER, which is based on neural network architecture, also works well with English FG-NER. | for Japanese FG-NER, it does not achieve state-of-the-art performance. | contrasting |
train_9564 | Their model also involves an external layer to extract some character level features. | it is not explicit how to model the dependencies of more tags or use the dependency information in these lines of work. | contrasting |
train_9565 | The contributions of this work are as follows: • We extend the LSTM model to higher order models. | the performance of the high order models which are supposed to capture longer tag dependencies is getting worse when increasing the order. | contrasting |
train_9566 | The reason is that the BiLSTM model predicts the tag independently, and it predicts "O" as the tag of "of" regardless of the neighboring tags. | MO-BiLSTM takes account of the neighboring tags, and works well in this case. | contrasting |
train_9567 | We introduce a single order model, which is supposed to capture more tag dependencies. | the performance of the single order model is getting worse when increasing the order. | contrasting |
train_9568 | MDS-ACS outperformed the ID5 system according to ROUGE-1, ROUGE-2, and ROUGE-L metrics. | there was no significant difference between MDS-ACS and ID5 according to ROUGE-SU4. | contrasting |
train_9569 | RuSentiLex, the largest sentiment lexicon for Russian (Loukachevitch and Levchik, 2016), currently contains 16,057 words, which exceeds the size of such manually constructed English resources as, for example, SentiStrength (Thelwall and Buckley, 2013). | there is nothing like SentiWordNet (Baccianella et al., 2010), SentiWords (Gatti et al., 2016), or SenticNet (Cambria et al., 2018) for Russian. | contrasting |
train_9570 | Figure 3 shows that the distribution of positive posts in the pre-selected sample turned out to be similar to the original one. | the classifier was successful in reducing the number of skipped and speech act posts, and the ratio of negative posts increased. | contrasting |
train_9571 | Our results suggest that regularized softmax models perform competitively as long as we are only interested in low test time complexity. | when train time is also a factor, NCE has a notable advantage. | contrasting |
train_9572 | As could be expected, the larger the value of α is, the better the self-normalization becomes, reaching very good self-normalization for α = 10.0. | the improvement in self-normalization seems to occur at the expense of perplexity. | contrasting |
train_9573 | DEV-LM and NCE-R-LM perform very similarly in all respects. | we note that NCE-R-LM's advantage is that during training, it performs sparse computations of the costly normalization term and therefore its training time depends much less on the size of the vocabulary. | contrasting |
train_9574 | Both industry and academia have realized the importance of the relationship between aspect term and sentence, and made attempts to model the relationship by designing a series of attention models. | most existing methods usually neglect the fact that the position information is also crucial for identifying the sentiment polarity of the aspect term. | contrasting |
train_9575 | Traditional approaches have defined rich features about content and syntactic structures so as to capture the sentiment polarity (Jiang et al., 2011). | this kind of feature-based method is labor-intensive and highly depends on the quality of the features. | contrasting |
train_9576 | Then it regards the average value of all hidden states as the representation of the sentence, and feeds it into a softmax layer to predict the probability of each sentiment polarity. | it cannot capture any information about the aspect term in the sentence. | contrasting |
train_9577 | (2007) explored the use of structural clues to extract polar sentences from HTML documents, and built a lexicon from the extracted polar sentences. | these methods are labor-intensive, and usually result in high-dimensional and highly sparse text representations. | contrasting |
train_9578 | Neural sequence-to-sequence models have been successfully extended for summary generation. | existing frameworks generate a single summary for a given input and do not tune the summaries towards any additional constraints/preferences. | contrasting |
train_9579 | Our current work is limited to replacing words with better synonyms. | introduction of new words can benefit tuning the generation towards a specific aspect or tone. | contrasting |
train_9580 | While pivot-based domain adaptation methods are well-motivated, they are often outperformed by autoencoder methods. | both approaches to domain adaptation effectively lead to a loss of information, as they must reduce the effect of discriminant features which are domain-dependent. | contrasting |
train_9581 | Domain transfer for sentiment analysis has been widely studied on the Amazon sentiment domain corpus. | we hypothesize that progress previous approaches have made on this particular corpus may not hold when tested on more divergent domains. | contrasting |
train_9582 | For example, ∼17% of the 'active' vocabulary (i.e., frequency >30) of English in PE_EN is also contained in PE_DE. | only 6% of the active vocabulary of German in PE_DE occurs also in PE_EN. | contrasting |
train_9583 | This is unsurprising since CRC is considerably smaller in size than PE. | we observe that the cross-language drop is much larger than it is for the PE_DE↔EN setting. | contrasting |
train_9584 | (Chung et al., 2016) utilized BPE to build a decoder. | these non-word-level models could aggravate the long-term dependency issue (Hochreiter and Schmidhuber, 1997) and lose much semantic and syntactic information. | contrasting |
train_9585 | Results with External Data As shown in Table 4, incorporation of the RACE dataset for semi-supervised learning improves accuracy from 65.3% to 70.4%. | MPNet and Semi-MPNet still underperform a pretrained state-of-the-art neural language model, One-Billion-Word-LM (Jozefowicz et al., 2016). | contrasting |
train_9586 | Attention mechanisms have been leveraged for sentiment classification tasks because not all words have the same importance. | most existing attention models did not take full advantage of sentiment lexicons, which provide rich sentiment information and play a critical role in sentiment analysis. | contrasting |
train_9587 | is taken from a one-star review in Yelp 2013, which meets our expectation. | in other domains or contexts, long is likely to be positive. | contrasting |
train_9588 | Lexical databases such as WordNet (Miller et al., 1990), FrameNet (Baker et al., 1998) and PropBank (Palmer et al., 2005) can be viewed as a superset of events, and their subtaxonomies seem to provide an extensional definition of events. | these databases have a narrow coverage of events because they are generally expected to cover basic terminology due to their dictionary nature. | contrasting |
train_9589 | Others explore self-training (Liao and Grishman, 2011), event vector representation (Peng et al., 2016), tensor-based composition, and distant supervision (Chen et al., 2017). | their models focus on predicate-argument structures and are validated in a few domains, mostly in ACE. | contrasting |
train_9590 | Semantic parsers critically rely on accurate and high-coverage lexicons. | traditional semantic parsers usually utilize annotated logical forms to learn the lexicon, which often suffer from the lexicon coverage problem. | contrasting |
train_9591 | (2013) learns lexicons by aligning Freebase predicates with relations from ClueWeb, and then the alignments are used as lexicons. | the lexicon coverage of these alignment-based methods highly depends on entity co-occurrences, and they mostly can only learn predicates indicating relations between entities. | contrasting |
train_9592 | Krishnamurthy (2016) also learned a lexicon for semantic parsing. | they aim to extend the predicate side as they think the predicates have limited coverage for new sentences. | contrasting |
train_9593 | The UAM Spanish Treebank and the SFU Review SP-NEG corpora take into account only syntactic negation, and the IULA Spanish Clinical Record corpus also considered it along with lexical negation. | in general, it is not specified whether the complexity of negation has been taken into account during the annotation process. | contrasting |
train_9594 | The analysis of these aspect ratings could not only benefit mining interested aspects for users, but also help companies better understand the major pros and cons of the product. | compared with the overall rating, users are less motivated to give aspect ratings. | contrasting |
train_9595 | Many other studies (Titov and McDonald, 2008; Wang et al., 2010; Wang et al., 2011; Diao et al., 2014; Pappas and Popescu-Belis, 2014; Pontiki et al., 2016; Toh and Su, 2016) solve multi-aspect sentiment classification as a subproblem by utilizing heuristic-based methods or topic models. | these approaches often rely on strict assumptions about words and sentences; for example, word syntax has been used to distinguish aspect words from sentiment words, or a specific aspect has been appended to a sentence. | contrasting |
train_9596 | (2016) employ attention-based LSTM and deep memory network for aspect-level sentiment classification, respectively. | the task is sentence level. | contrasting |
train_9597 | Document-level sentiment classification (Li and Zong, 2008;Li et al., 2010;Li et al., 2013;Xia et al., 2015; is also a related research field because we can treat single aspect sentiment classification as an individual document classification task. | they did not consider multiple aspects in a document. | contrasting |
train_9598 | Given the embedding of the center word w, the Skip-gram model for instance uses a single output matrix to predict every contextual word. | the structured Skip-gram adapts the model to the positioning of the surrounding words. | contrasting |
train_9599 | Incorporating distributed word embeddings as features has proven effective in a variety of natural language processing tasks, including parsing (Socher et al., 2013), language modeling (Bengio et al., 2003;Mnih and Hinton, 2008) and sentiment analysis (Socher et al., 2011;Labutov and Lipson, 2013;Tang et al., 2014;Tang et al., 2016). | the effectiveness of generic word embeddings has been shown to be heavily task-dependent (Labutov and Lipson, 2013;Bansal et al., 2014). | contrasting |
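
Below is a minimal sketch of how a dataset with this schema could be loaded and inspected with the Hugging Face `datasets` library. The Hub identifier `username/dataset-name` is a placeholder, since the actual repository id is not shown in this listing, and only the `contrasting` label class is visible in the excerpt above (the `label` column has four string classes in total).

```python
# A minimal sketch, assuming the dataset is hosted on the Hugging Face Hub.
# "username/dataset-name" is a placeholder; substitute the real repository id.
from collections import Counter

from datasets import load_dataset

# Load the train split; each row has the four columns shown above:
# id, sentence1, sentence2, label.
ds = load_dataset("username/dataset-name", split="train")

# Inspect a single example.
example = ds[0]
print(example["id"], "->", example["label"])
print("sentence1:", example["sentence1"])
print("sentence2:", example["sentence2"])

# The label column has 4 string classes; "contrasting" is the only one
# visible in this excerpt. Count how many rows fall into each class.
print(Counter(ds["label"]))

# Keep only the "contrasting" pairs, e.g. to train a discourse relation
# classifier on sentence pairs linked by a contrast relation.
contrasting = ds.filter(lambda row: row["label"] == "contrasting")
print(len(contrasting), "contrasting pairs")
```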