id (stringlengths 7-12) | sentence1 (stringlengths 6-1.27k) | sentence2 (stringlengths 6-926) | label (stringclasses, 4 values) |
---|---|---|---|
train_15000 | Pairwise Models vs Unary Models As shown in Table 1, the pairwise models based on Skip-Thought features outperform the unary models in our task. | the Pairwise Order Model performs worse than the unary Skip-Thought model, suggesting that the Skip-Thought features, which encode context of a sentence, also provide a crucial signal for temporal ordering of story sentences. | contrasting |
train_15001 | The fact that some (it's unclear how many) of these spatial distributions end up being interpretable is simply fortuitous. | we study where humans choose to look to answer visual questions. | contrasting |
train_15002 | In LSTM, gating mechanism is used to control the information flow such that gradient vanishing problem in vanilla RNN is better handled, and long range dependency is better captured. | as empirically verified by previous works and our own experiments, to obtain fairly good results, training LSTM RNN needs carefully designed optimization procedure (Hochreiter et al., 2001;Pascanu et al., 2013;Dai and Le, 2015;Laurent et al., 2015;He et al., 2016;Arjovsky et al., 2015), especially when faced with unfolded very deep architectures for fairly long sequences (Dai and Le, 2015). | contrasting |
train_15003 | Therefore, a forgetting mechanism to "forget" less critical historical information, as is employed in LSTM (controlled by the forget gate f t ), becomes necessary. | while LSTM benefits from the flexible gating mechanism, its parametric nature brings optimization difficulties to cope with fairly long sequences, whose long range information dependencies could be better captured by identity connections. | contrasting |
train_15004 | Note that in Figure 1, a straightforward approach is to replace F with an LSTM unit. | our preliminary experiments do not achieve satisfactory results. | contrasting |
train_15005 | From this table, we have the following observations and analysis: half of the model parameters, indicating that residual network structure, with connecting mechanism to enhance the information flow, is also an effective approach for sequence learning. | the fact that it fails to significantly outperform other models (as it does in image classification) implies that forgetting mechanism is desired in recurrent structures to handle multiple inputs. | contrasting |
train_15006 | For tasks with shorter sequences such as AG's News, the improvement is limited. | the improvements get more significant with the growth of sequence lengths among different datasets 4 , and the performance is particularly good in P-MNIST with very long sequences. | contrasting |
train_15007 | On the medical test set which has 9 times higher OOV ratio, the perplexity reduction shows a similar trend. | these reductions vanish when an in-domain test set is used. | contrasting |
train_15008 | Empirically, for non-convex objectives, different approaches may arrive at different solutions. | for convex loss functions, our objective (Equation 2) is also convex, and all approaches should share the same solution. | contrasting |
train_15009 | Canonical segmentation has several representational advantages over surface segmentation, e.g., whether two words share a morpheme is no longer obfuscated by orthography. | it also introduces a hard algorithmic challenge: in addition to segmenting a word, we must reverse orthographic changes, e.g., mapping achievability →achieveableity. | contrasting |
train_15010 | Inspired by the recent success of neural encoder-decoder models (Sutskever et al., 2014) for sequence-to-sequence problems in NLP, we design a neural architecture for the task. | a naïve application of the encoder-decoder model ignores much of the linguistic structure of canonical segmentation-it cannot directly model the individual canonical segments, e.g., it cannot easily produce segment-level embeddings. | contrasting |
train_15011 | (2011) who show that POS, chunking and semantic role information can bring benefit to each other in joint neural training. | to their results (SRL 74.15→74.29, POS 97.12→97.22, CHUNK 93.37→93.75), we find that parsing and SRL benefit relatively more from each other (SRL 72.72→73.84,DEP 84.33→85.15). | contrasting |
train_15012 | These pairs usually get low human-labeled similarity scores. | splitting the words in such pairs into characters, and further the characters into radicals will not help to effectively identify the dissimilarity between them. | contrasting |
train_15013 | 1 ' The speaker makes a subtle, contemptuous remark about the sense of humor of the listener. | absence of sentiment words makes the sarcasm in this sentence difficult to capture as features for a classifier. | contrasting |
train_15014 | (2013) who give a bootstrapping algorithm that discovers a set of positive verbs and negative/undesirable situations. | this simplification (of representing sarcasm merely as positive verbs followed by negative situation) may not capture difficult forms of context incongruity. | contrasting |
train_15015 | by a simple Naive Bayes classifier trained on word and character n-gram data (Lui and Baldwin, 2012): a document of significant length will be quickly disambiguated based on its vocabulary (King et al., 2014). | social media platforms like Twitter produce data sets in which individual documents are extremely short, and language use is idiosyncratic: LID performance on such data is dramatically lower than on traditional corpora (Bergsma et al., 2012;Carter et al., 2013). | contrasting |
train_15016 | Intuitively, this structural, syntactic, and semantic information underlying input text has the potential for improving the quality of NLG tasks. | to the best of our knowledge, there is no clear evidence that syntactic and semantic information can enhance the recently developed encoder-decoder models in NLG tasks. | contrasting |
train_15017 | Embeddings have also been explored to extract hypernym relations from general corpora (Rei and Briscoe, 2014). | word embeddings have not been used for generating lexical simplifications. | contrasting |
train_15018 | (2016) show that by combining SSWE, LSTM outperforms traditional SVM model. | using LSTM alone does not give significantly more accuracies compared to SVM. | contrasting |
train_15019 | The gap shrinks as the amount of supervised data is increased, which is as expected. | using a large amount of extra, generated data from an approximating distribution (SEQ4+) does not help as much initially when compared with the unsupervised data from the true distribution. | contrasting |
train_15020 | Our work is also conceptually related to work on semantic parsing -mapping natural language text to a formal meaning representation (Wong and Mooney, 2007;Clarke et al., 2010;Cai and Yates, 2013;Kwiatkowski et al., 2013;Goldwasser and Roth, 2011). | as mentioned earlier, there are some significant differences in the task definition that necessitate the development of a new approach. | contrasting |
train_15021 | We use approximate inference, first enumerating possible trigger lists, and then equation trees, and find the best scoring structure. | this method did not outperform the pipeline method. | contrasting |
train_15022 | Normalizing consists of a battery of deterministic steps implemented using syntactic dependencies and semantic roles. | with previous work (Section 2), our normalization is fully automated. | contrasting |
train_15023 | The direct word-demographics analysis gives useful validation that the demographic information may yield dialectal corpora, and the seedlist approach can assemble a set of users with heavy dialectal usage. | the approach requires a number of ad-hoc thresholds, cannot capture authors who only occasionally use demographically-aligned language, and cannot differentiate language use at the message-level. | contrasting |
train_15024 | We define: • λ(T ), σ(T ) and (T ) as the subsets of T that respectively contain all tweets in language λ, script σ and sentiment . | • The preference towards a language-script pair λσ for expressing a type of sentiment is given by the probability pr(λσ), which defines the prior probability of choosing λσ for a tweet is dependent on a large 2 Tweets in mixed script are rare and hence we do not include a symbol for it, though the framework does not preclude such possibilities. | contrasting |
train_15025 | Our results indicate a strong preference for using Hindi, L1 for the users from whom these tweets come, for expressing negative sentiment, including swearing. | we do not observe any particular preference towards Hindi for expressing opinions. | contrasting |
train_15026 | This is perhaps the work that is closest to ours in the existing literature. | their model differs from ours in that it uses a max-pooling layer that picks the most activated feature across time, thus ignoring temporal information, whereas we explicitly avoid doing so. | contrasting |
train_15027 | For the kernel size, we set it to w = 3 words for the simple CNN (out of options 3, 5, 7, 9), whereas for the COM variant we use w = 3 and 5, based on experimentation on PTB. | we observed the models to be generally robust to this parameter. | contrasting |
train_15028 | A final (smaller) improvement comes from combining kernels of size 3 and 5, which can be attributed to a more expressive model that can learn patterns of n-grams of different sizes. | to the successful two variants above, the multi-layer CNN did not help in better capturing the regularities of text, but rather the opposite: the more convolutional layers were stacked, the worse the performance. | contrasting |
train_15029 | see k = 256 of k = 512) on the first two datasets. | the LSTM can be easily scaled using larger models, as shown in Zaremba et al. | contrasting |
train_15030 | Our results show a solid 11-26% reduction in perplexity with respect to the feed-forward model across three corpora of different sizes and genres when the model uses MLP Convolution and combines kernels of different window sizes. | even without these additions we show CNNs to effectively learn language patterns that allow it to significantly decrease the model perplexity. | contrasting |
train_15031 | As these models are directly predicting p with no concept of mixture weights λ, they cannot be interpreted as MODLMs as-is. | we can perform a trick shown in Fig. | contrasting |
train_15032 | Although single-pass reading plays a crucial role when we just want the general meaning and do not necessarily need to understand every single point of the text, it is not enough for tackling tasks that need a deep analysis of the text. | with single-pass reading, repeated reading involves the process where learners repeatedly read the text in detail with specific learning aims, and has the potential to improve readers' reading fluency and comprehension of the text (National Institute of Child Health and Human Development, 2000;LaBerge and Samuels, 1974). | contrasting |
train_15033 | Arg-1 : the use of 900 toll numbers has been expanding rapidly in recent years Arg-2 : for a while, high-cost pornography lines and services that tempt children to dial (and redial) movie or music information earned the service a somewhat sleazy image (Comparison -wsj 2100) To identify the "Comparison" relation between the two arguments Arg-1 and Arg-2, the most crucial clues mainly lie in some content, like "expanding rapidly" in Arg-1 and "earned the service a somewhat sleazy image" in Arg-2, since there exists a contrast between the semantic meanings of these two text spans. | it is difficult to obtain sufficient information for pinpointing these words through scanning the argument pair left to right in one pass. | contrasting |
train_15034 | We perform significance test for these two improvements, and they are both significant under one-tailed t-test (p < 0.05). | when adding the third attention level, the performance does not promote much and almost reaches its plateau. | contrasting |
train_15035 | Sluicing antecedent selection might appear simple -after all, it typically involves a sentential expression in the nearby context. | analysis of the annotated corpus data reveals surprising ambiguity in the identification of the antecedent for sluicing. | contrasting |
train_15036 | Looking more generally, there is an obvious potential connection between antecedent selection for ellipsis and the problem of coreference resolution (see Hardt (1999) for an explicit theoretical link between the two). | entity coreference resolution is a problem with two major differences from ellipsis antecedent detection: a) the antecedent and anaphor often share a variety of syntactic, semantic, and morphological characteristics that can be featurally exploited; b) entity expressions in a text are often densely coreferent, which can help provide proxies for discourse salience of an entity. | contrasting |
train_15037 | The solution to antecedent selection that we have presented here provides a starting point for addressing the problem of resolution, in which the content of the sluice is filled in. | even if the correct antecedent is selected, the missing content is not always an exact copy of the antecedent -often substantial modifications will be required -and an effective resolution system will have to negotiate such mismatches. | contrasting |
train_15038 | Although the existing works also exploited such word sequences, they used only particular types of information from them as features based on the researchers' linguistic insights. | we minimized such feature engineering due to using an MCNN. | contrasting |
train_15039 | In addition, existing works have exploited the dependency path between a predicate and a candidate antecedent either by encoding such paths to the set of binary features of the words that appear in the path (Iida and Poesio, 2011) or by mining from the paths the sub-trees that effectively discriminate zero anaphoric relations (Iida et al., 2006). | both methods just focus on the dependency paths between a predicate and a candidate antecedent without exploiting other structural fragments in the dependency tree representing a target sentence, whereas our method uses the text fragments that cover the entire dependency tree. | contrasting |
train_15040 | 10 The PR-curves of our method and the single-column baseline were plotted just by altering the threshold parameters in Step 4 of our method (See Section 3). | the PR-curve of Ouchi's method cannot be easily plotted because it gives a score to each sentence, not to each zero anaphoric relation. | contrasting |
train_15041 | Longer utterances seem to carry enough information for our DTW-based measure to function properly. | shorter utterances are harder to align. | contrasting |
train_15042 | Some manual measures ask annotators to explicitly mark errors, but this has been found to have even lower agreement than ranking (Lommel et al., 2014). | while providing the gold standard for MT evaluation, human evaluation is not a scalable solution. | contrasting |
train_15043 | The Abstract Meaning Representation (AMR) (Banarescu et al., 2013) shares UCCA's motivation for defining a more complete semantic annotation. | using AMR is not optimal for defining a decomposition of a sentence into semantic units as it does not anchor its semantic symbols in the text, and thus does not provide clear decomposition of the sentence into sub-spans. | contrasting |
train_15044 | Table 3 also shows that the overall IAA is similar for all languages, presenting good agreement (0.6-0.7). | there are differences observed when we break down by node type. | contrasting |
train_15045 | Normally monolingual word embeddings are trained on billions of words. | obtaining that much monolingual data for a low-resource language is infeasible. | contrasting |
train_15046 | We observed similar trend using Panlex and Wiktionary dictionary in our model. | using Panlex results in much better performance. | contrasting |
train_15047 | 10 Low-resource languages Our model exploits dictionaries, which are more widely available than parallel corpora. | the question remains as to how well this performs of a real low-resource language, rather than a simulated condition like above, whereupon the quality of the dictionary is likely to be worse. | contrasting |
train_15048 | In practice, a simple beam search procedure that explores K prospective histories at each time-step has proven to be an effective decoding approach. | as noted above, decoding in this manner after conditional languagemodel style training potentially suffers from the is-sues of exposure bias and label bias, which motivates the work of this paper. | contrasting |
train_15049 | Ideally we would train by comparing the gold sequence to the highest-scoring complete sequence. | because finding the argmax sequence according to this model is intractable, we propose to adopt a LaSO-like (Daumé III and Marcu, 2005) scheme to train, which we will refer to as beam search optimization (BSO). | contrasting |
train_15050 | For both the baseline and BSO models we enforce this constraint at testtime. | we also experiment with constraining the BSO model during training, as described in Section 4.2, by defining the succ function to only allow successor sequences containing un-used words in the source sentence. | contrasting |
train_15051 | During training the model uses dynamic programming to marginalize over permutations of the null symbols, while beam search is employed during decoding. | our model defines a separate latent alignment variable, which adds flexibility to the way the alignment distribution can be defined (as a geometric distribution or parameterised by a neural network) and how the alignments can be constrained, without redefining the dynamic program. | contrasting |
train_15052 | Neural encoder-decoder models have shown great success in many sequence generation tasks. | previous work has not investigated situations in which we would like to control the length of encoder-decoder outputs. | contrasting |
train_15053 | Hence, in the traditional setting of text summarization, both the source document and the desired length of the summary will be given as input to a summarization system. | methods for controlling the output sequence length of encoderdecoder models have not been investigated yet, despite their importance in these settings. | contrasting |
train_15054 | Finally, there are some studies to modify the output sequence according some meta information such as the dialogue act (Wen et al., 2015), user personality (Li et al., 2016b), or politeness (Sennrich et al., 2016). | these studies have not focused on length, the topic of this paper. | contrasting |
train_15055 | The results show that the learning-based meth-ods (LenEmb and LenInit) tend to outperform decoding-based methods (f ixLen and f ixRng) for the longer summaries of 50 and 75 bytes. | in the 30-byte setting, there is no significant difference between these two types of methods. | contrasting |
train_15056 | This values highly frequent and syntactically important words, like "the" or "and", while also allowing a large number of infrequent words to also contribute significantly to the score. | a word's usefulness when writing mutually enciphering texts is explicitly tied to its sister word under the cipher. | contrasting |
train_15057 | On the one hand, this highly connected property of the graph means that we can prune our beam to a smaller size, as failing to expand a partial cipher does not eliminate it from appearing as a subcipher in a future solution like it does for Nuhn et al.. | the connectedness of our cipher graph does present new challenges. | contrasting |
train_15058 | Poems written using ciphers generated from Beam Verse can be found on our website. | attempting to write with some high-scoring ciphers has revealed that our scoring metric may be only loosely correlated with the true "writability", as some ciphers which score higher that Bök's we find more difficult to write with. | contrasting |
train_15059 | Both CF and LF seem to be equally important. | p F tends to be less important in this task. | contrasting |
train_15060 | A few systems try to denoise the training corpora using simple pruning heuristics such as deleting mentions with conflicting types . | such strategies significantly reduce the size of training set (Table 1, rows (2a-c)) and lead to performance degradation (later shown in our experiments). | contrasting |
train_15061 | For example, Nationality(x, y) ∧ Nationality(z, y) ∧ Language(z, w) ⇒ Language(x, w) is a high-quality formula, which means people with the same nationality probably speak the same language. | it is a challenge to create formulas for open-domain KBs, where there are a great variety of relation types and it is impossible to construct a complete formula set by hand. | contrasting |
train_15062 | For example, PRA (Lao and Cohen, 2010;Lao et al., 2011) assumes the more narrow distributions of elements in a formula are, the higher score the formula will obtain. | formulas with high scores in PRA are not always true. | contrasting |
train_15063 | A recent approach regularizes relation and entity representations by propositionalization of first-order logic rules. | propositionalization does not scale beyond domains with only few entities and rules. | contrasting |
train_15064 | QA pairs such that they are all potentially answerable from either the KB from (ii) or the original Wikipedia documents from (i) to eliminate data sparsity issues. | it should be noted that the advantage of working from raw documents in real applications is that data sparsity is less of a concern than for a KB, while on the other hand the KB has the information already parsed in a form amenable to manipulation by machines. | contrasting |
train_15065 | We presented a new model, Key-Value Memory Networks, which helps bridge this gap, outperforming several other methods across two datasets, WIKIMOVIES and WIKIQA. | some gap in performance still remains. | contrasting |
train_15066 | Research has found that speakers entrain to both human and computer conversational partners, with the amount of entrainment often positively related to conversational and task success. | most prior work has focused on the study of entrainment during two-party dialogues, rather than during the multi-party conversations typical of teams. | contrasting |
train_15067 | With different framing strategies, the authors try to appeal to individuals with different beliefs and concerns. | equivalence framing focuses on presenting content as either loss-framed or gain-framed messages. | contrasting |
train_15068 | SEMAFOR uses a large set of features which help it scale for a diverse set of frames in FrameNet. | many of these many not be well suited for the process sentences in our relatively smaller dataset. | contrasting |
train_15069 | We use character bi-LSTMs to handle the Out Of Vocabulary (OOV) problem as in (Ling et al., 2015b). | just as a distributional hypothesis exists for words, prior work (Tsvetkov and Dyer, 2015;Tsvetkov et al., 2015) suggests phonological character representations capture inherent similarities between characters that are not apparent from orthogonal one-hot orthographic character representations and can serve as a language universal surrogate for character representations. | contrasting |
train_15070 | Research on Learning from Demonstration (LfD) employed various approaches to model the tasks (Argall et al., 2009), such as state-to-action mapping , predicate calculus (Hofmann et al., 2016), and Hierarchical Task Networks (Nejati et al., 2006;Hogg et al., 2009). | aspiring to enable human robot communication, the framework developed in this paper focuses on task representation using language grounded to a structure of state changes detected from the physical world. | contrasting |
train_15071 | For our AoG-based method, when the inference algorithm only takes the single best state mapping hypothesis into consideration (i.e., k = 1), it yields a very weak performance because the observed state change sequence often cannot be parsed using the learned AoG. | the performance of the AoG-based 5 Because the linguistic labels generated for primitive actions are all from terminal nodes, and the two different AoG learning settings only affect nonterminal nodes. | contrasting |
train_15072 | The most interesting difference is that for sequences of length greater than 8, the ED model has a recall@5 of zero for both datasets. | the EE model manages to achieve significant recall even at large sequence lengths. | contrasting |
train_15073 | For other labels, where we use a oneversus-rest strategy, the gap between all units and top-10 units is large. | when predicting POS, the gap of neural parser (E2P) on the lower layer (C0) is much smaller. | contrasting |
train_15074 | Researchers have proven that the target-side monolingual data can greatly enhance the decoder model of NMT. | the source-side monolingual data is not fully explored although it should be useful to strengthen the encoder model of NMT, especially when the parallel corpus is far from sufficient. | contrasting |
train_15075 | From the last two lines in Table 1, we can see that RNNSearch-Mono-Autoencoder can also improve the translation quality by more than 1.0 BLEU points when using the most related monolingual data. | it underperforms RNNSearch-Mono-MTL by a large gap. | contrasting |
train_15076 | By adding closely related corpus (25% to 50%), the methods can achieve better and better performance. | when adding more unre- lated monolingual data (75% to 100%) which shares fewer and fewer words in common with the bilingual data, the translation quality becomes worse and worse, and even worse than the baseline RNNSearch. | contrasting |
train_15077 | In fact, these models can be thought of as a subclass of the proposed approach that use a lexicon that assigns a all its probability to target words that are the same as the source. | while we are simply using a static interpolation coefficient λ, these works generally have a more sophisticated method for choosing the interpolation between the standard and "copy" models. | contrasting |
train_15078 | Since QA1 and QA2 address different problems, they may not be expected to be part of the same cluster in finegrained clusterings. | the solutions suggested in QA3 and QA4 are distinct and different legitimate solutions to the same problem cause. | contrasting |
train_15079 | On the one hand, they allow to predict whether the comments are good answers within their respective threads. | they allow to infer whether the questions for which the comments were produced are closely related to the original question. | contrasting |
train_15080 | Both word-level and character-level models perform comparably well when predicting the predicate, reaching an accuracy of around 80% (Table 3). | the word-level model has considerable difficulty generalizing to unseen entities, and is only able to predict 45% of the entities accurately from the mixed set. | contrasting |
train_15081 | These results clearly demonstrate that the OOV issue is much more severe for entities than predicates, and the difficulty word-level models have when generalizing to new entities. | character-level models have no such issues, and achieve a 96.6% accuracy in predicting the correct entity on the mixed set. | contrasting |
train_15082 | We can see that if the generator p γ is close to the distribution that generates the test data, the method can potentially yield good performance. | in practice, γ is unknown and difficult to set. | contrasting |
train_15083 | Treated as a translation problem, math word problem solving should be simpler than developing a machine translation model between two human languages, as the output vocabulary (the math symbols) is significantly smaller than any human vocabulary. | machine translation can be learned on millions of pairs of already translated sentences, and such massive training datasets dwarf all previously introduced math exam datasets. | contrasting |
train_15084 | This could be explained by the fact that the generators' vocabulary has a good overlap with the vocabulary of the real data. | mixing real and generated data improves performance significantly. | contrasting |
train_15085 | A math word problem is a coherent story that provides the student with good clues to the correct mathematical operations between the numerical quantities described therein. | the particular theme of a problem, whether it be about collecting apples or traveling distances through space, can vary significantly so long as the correlation between the story and underlying equation is maintained. | contrasting |
train_15086 | Such lexicon features have been shown highly effective, leading to the best accuracies in the SemEval shared task (Mohammad et al., 2013). | they are typically based on bag-of-word models, hence suffering two limitations. | contrasting |
train_15087 | There are also some information extraction tasks in emotion analysis, such as extracting the feeler of emotion (Das and Bandyopadhyay, 2010). | these methods need to observe emotion linked expressions. | contrasting |
train_15088 | Other than rule based methods, Ghazi (Ghazi et al., 2015) used CRFs to extract emotion causes. | it requires emotion cause and emotion keywords to be in the same sentence. | contrasting |
train_15089 | Since CB is opposite to RB, the performance by RB+CB is improved. | the improvement is quite limited, at 0.0127 in F-measure. | contrasting |
train_15090 | Traditional attention-based neural network models only take the local text information into consideration. | our model puts forward the idea of user-product attention by utilizing the global user preference and product characteristics. | contrasting |
train_15091 | According to our statistics, the first user often mentions "wine" in his/her review sentences. | the second user never talks about "wine" in his/her review sentences. | contrasting |
train_15092 | Deep neural networks (DNNs) have achieved remarkable success in a large variety of application domains (Krizhevsky et al., 2012;Bahdanau et al., 2014). | the powerful end-to-end learning comes with limitations, including the requirement on massive amount of labeled data, uninterpretability of prediction results, and difficulty of incorporating human intentions and domain knowledge. | contrasting |
train_15093 | We can see that both models performs poorly, achieving the accuracy of only 68.6% for the knowledge component, similar to the accuracy achieved by the "opt-joint" method. | our mutual distillation framework offers the best performance. | contrasting |
train_15094 | Addressing this issue by learning distinct representations for individual meanings of words has been the subject of several research studies in the past few years. | the generated sense representations are either not linked to any sense inventory or are unreliable for infrequent word senses. | contrasting |
train_15095 | Note that the size of the list is equal to the total number of strings in WordNet. | we observed that taking a very small portion of the top-ranking elements in the lists is enough to generate representations that perform very similarly to those generated when using the full-sized lists (please see §3.1). | contrasting |
train_15096 | In fact, the former was unable to model around 35% of the synsets in WordNet 1.7.1, mainly for its shallow exploitation of knowledge from WordNet, whereas the latter approach did not cover around 15% of synsets in WordNet 3.0. provide near-full coverage for word senses in WordNet. | the relatively low performance of their system shows that the usage of glosses in WordNet and the automated disambiguation have not resulted in accurate sense representations. | contrasting |
train_15097 | Nasari combines structural knowledge from the semantic network of BabelNet with corpus statistics derived from Wikipedia for representing BabelNet synsets. | the approach falls short of modeling non-nominal senses as Wikipedia, due to its very encyclopedic nature, does not cover verbs, adjectives, or adverbs. | contrasting |
train_15098 | The aim of distributional semantics is to derive meaning representations based on observing cooccurrences of words in large text corpora. | not all plausible co-occurrences will be observed in any given corpus, resulting in word representations that only capture a fragment of the meaning of a word. | contrasting |
train_15099 | (2013) used a distributional approach for smoothing derivationally related words, such as oldish -old, as a back-off strategy in case of data sparsity. | none of these approaches have used distributional inference as a general technique for directly enriching sparse distributional vector representations, or have explored its behaviour for semantic composition. | contrasting |
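The rows above share a fixed four-column schema (id, sentence1, sentence2, label). As a hedged illustration, the sketch below builds a small in-memory split from two of the rows shown above using the Hugging Face `datasets` library; it is only a sketch for inspecting this schema, not the official loading script for this dataset, and the row values are copied verbatim from the table.

```python
# Minimal sketch: build an in-memory Dataset with the same schema as the rows above.
from collections import Counter

from datasets import Dataset

rows = {
    "id": ["train_15000", "train_15001"],
    "sentence1": [
        "Pairwise Models vs Unary Models As shown in Table 1, the pairwise models "
        "based on Skip-Thought features outperform the unary models in our task.",
        "The fact that some (it's unclear how many) of these spatial distributions "
        "end up being interpretable is simply fortuitous.",
    ],
    "sentence2": [
        "the Pairwise Order Model performs worse than the unary Skip-Thought model, "
        "suggesting that the Skip-Thought features, which encode context of a sentence, "
        "also provide a crucial signal for temporal ordering of story sentences.",
        "we study where humans choose to look to answer visual questions.",
    ],
    "label": ["contrasting", "contrasting"],
}

ds = Dataset.from_dict(rows)   # two-example split with columns id, sentence1, sentence2, label
print(ds)                      # shows features and num_rows
print(ds[0]["label"])          # "contrasting"
print(Counter(ds["label"]))    # label distribution in this tiny sample
```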