id (stringlengths 7–12) | sentence1 (stringlengths 6–1.27k) | sentence2 (stringlengths 6–926) | label (stringclasses, 4 values) |
---|---|---|---|
train_9800 | (2008), Noferesti and Shamsfard (2016). | to previous methods, our method does not require sentiment-labeled texts or feature engineering. | contrasting |
train_9801 | The above previous work focused on learning sentiment of unigrams, and optionally of multi-word expressions (conflated into single tokens), and used word2vec embeddings (Mikolov et al., 2013). | we are interested in learning sentiment composition from bigrams, and therefore we are mostly interested in compositional bigrams, and aim to construct a sentiment lexicon that contains hundreds of thousands of them. | contrasting |
train_9802 | Let C(c, w) be the set of bigrams in the bigram lexicon that satisfy the condition of class c for word w, and let S(c) be the set of bigrams whose polarity is the one predicted by c. The precision of w with respect to c can be defined as: The above formula gives uniform weights for all the bigrams in C(c, w). | we found that it is beneficial to also take into account the uncertainty stemming from the automatic sentiment prediction of the unigrams and bigrams. | contrasting |
train_9803 | This is in line with our hypothesis that explicit modeling of morphology may capture interactions of elements in the sequence and allow for better generalization thereof. | with MLPs and CNNs, RNNs consider the entire sequence for classification, and so it may be the case that previously unseen tokens in the sequence may undermine classification in token-based settings. | contrasting |
train_9804 | Classical architectures include convolutional neural networks (CNNs) (Zeng et al., 2014), recurrent neural networks (RNNs) (Xu et al., 2015), etc. | above methods are still insufficient for domain knowledge acquisition due to two challenges: (i) most domain entities rarely occur in the corpus, hence pattern-based methods easily suffer from the feature sparsity problem; (ii) a domain knowledge graph tends to be incomplete w.r.t. | contrasting |
train_9805 | Therefore, considering the sparsity of domain knowledge, these methods applied on such corpus will introduce plenty of small classes, which are in fact noises and thus meaningless. | our ssCRP-based approach works more effectively since we only discover one large class with confident instances in each iteration. | contrasting |
train_9806 | Consequently, the attention of the models to such words would lead to a correct prediction in this case. | for the positive sentences, it might be the case that the words along the dependency paths help to include some words that are crucial to police killing prediction, but do not appear in the semantical word lists. | contrasting |
train_9807 | Sentences often have a long list of comma separated conjuncts and not separating all of them would mean a substantial performance loss for end-tasks like Open IE. | our approach eschews the first principle, as we find that it is not true often enough to be helpful. | contrasting |
train_9808 | Most language models approximate this via an n-gram probability for a fixed context window of length n − 1, instead of taking the entire sentence so far. | using the language model score is ineffective for two reasons. | contrasting |
train_9809 | Previous work has given credit to a system only when both boundaries of a conjunct match exactly (Ficler and Goldberg, 2016). | this is not ideal for downstream tasks like Open IE. | contrasting |
train_9810 | OLLIE follows the idea of bootstrap learning of patterns based on dependency parse paths. | while WOE relies on Wikipedia-based bootstrapping, OLLIE applies a set of high precision seed tuples from its predecessor system REVERB to bootstrap a large training set. | contrasting |
train_9811 | Considering recall, Graphene (27.2%) is able to compete with other high-precision systems, such as PropS (26.7%). | it does not reach the recall rate of ClausIE (33.0%) or OpenIE-4 (32.5%). | contrasting |
train_9812 | The recent success of deep learning for NLP tasks is at least partially due to the fact that such models are exposed to large quantities of labeled data. | obtaining such rich labeled data is a costly proposition and seldom feasible in many realistic scenarios. | contrasting |
train_9813 | Note, however, that most of these works that customize embeddings for a specific task rely on some form of supervision. | our approach that learns a custom representation for the NEC task is lightly supervised, with a only few seed examples per category (Section 4). | contrasting |
train_9814 | This is analogous to recent work (Jia and Liang, 2017;Burlot and Yvon, 2017) on developing automated adversarial evaluation schemes for reading comprehension and machine translation. | unlike these efforts, our stress tests allow us to study model performance on a range of linguistic phenomena. | contrasting |
train_9815 | Further advances with Neural Networks (NNs) have once more motivated efforts to develop a large natural language inference dataset, SNLI (Bowman et al., 2015), since NNs need to be trained on big data. | meaning is not something we obtain just from text and the ability to reason is not unimodal either. | contrasting |
train_9816 | The present paper presented a first step in this direction using a version of an existing TE dataset which was augmented with images that could be paired directly with the premises, since these were originally captions for those images. | it is important to note that in this dataset premise-hypotheses pairs were not generated directly with reference to the images themselves. | contrasting |
train_9817 | (2015) extend one-hop reasoning regimes such as TransE (Bordes et al., 2013) to multi-hop PQA. | these basic one-hop models do not encode the relation order when used in compositional training schemes. | contrasting |
train_9818 | Apart from these baselines, it is also feasible to compare with the baselines based on composition of one-hop triple-based embedding models. | the performance of these baselines is very poor for this task (Neelakantan et al., 2015) and therefore, we do not include them in our comparison. | contrasting |
train_9819 | We expected that H@10 should decrease gradually. | figure 4 shows that even-length paths are harder than odd-length ones. | contrasting |
train_9820 | Decision-level fusion is a commonly used strategy for fusing heterogeneous inputs, combining the independent modality outputs by using several specific rules. | the lack of mutual association learning across modalities is a major limitation of applying decision-level fusion (Zhang et al., 2017). | contrasting |
train_9821 | TTR becomes unstable in learner English because of spelling errors. | this is not the case with K. In the next section, we explore the results theoretically and empirically to deepen our understanding of these phenomena. | contrasting |
train_9822 | Suppose that a spelling error was created from w ∈ W and that 100r% of all occurrences of w underwent the noise and were replaced with it. | then, the contribution of w to the entire value of K decreases by: the newly created spelling increases the value of K by: In total, the difference caused by a spelling error can generally be written as: Accordingly, it follows that the influence is only dependent on r. the difference becomes a maximum when r = 1/2. | contrasting |
train_9823 | To be precise, we have revealed that the difference in its value caused by spelling errors is relatively high throughout the three groups, observing not less than 16% difference. | this is not the case with K, which shows no more than 1% difference throughout the three groups. | contrasting |
train_9824 | Note that, different from H&N14, which only contains favor and against posts, SemEval16 dataset contains none posts, which do not express any stance. | the final metrics disregard the None class. | contrasting |
train_9825 | Comparison between "-Hyper" and "-Hyper, -Ling", we can find that linguistic attentions with each linguistic features does have significant improvement. | using LSTM to detect stance (LSTM) does not obtain a better performance compared to concatenating linguistic information directly ("-Hyper, -Ling"). | contrasting |
train_9826 | grammar rules, so it is relatively easy to grasp some patterns of correction. | in the HSK dynamic composition corpus built by Beijing Language and Culture University, which is the largest available Chinese learner corpus at the time of this study, WUE is the most frequent lexical-level error. | contrasting |
train_9827 | The accuracy is less than 70% and MRR is less than 80%. | for closed-set word types such as prepositions (P), our system performs very well, reaching accuracy 0.81 and MRR 0.88. | contrasting |
train_9828 | Methods for retrofitting pre-trained entity representations to the structure of a knowledge graph typically assume that entities are embedded in a connected space and that relations imply similarity. | useful knowledge graphs often contain diverse entities and relations (with potentially disjoint underlying corpora) which do not accord with these assumptions. | contrasting |
train_9829 | Distributional representations of concepts are often easy to obtain from unstructured data sets, but they tend to provide only a blurry picture of the relationships that exist between concepts. | knowledge graphs directly encode this relational information, but it can be difficult to summarize the graph structure in a single representation for each entity. | contrasting |
train_9830 | This modular approach conveniently separates the distributional data and entity representation learning from the knowledge graph and retrofitting model, allowing one to flexibly combine, reuse, and adapt existing representations to new tasks. | a core assumption of Faruqui et al. | contrasting |
train_9831 | Synonym is mostly defined as "a same-language equivalent" (Adamska-Sałaciak, 2010; Adamska-Sałaciak, 2013) and "does not exceed the limits of a single language" (Gouws, 2013), while for bilingual contexts the term translational equivalent is used. | (Martin, 1960;Klégr, 2004;Hahn et al., 2005;Hayashi, 2012;Haiyan, 2015;Dinu et al., 2015) recognize interlingual synonymy and use either the term foreignlanguage equivalent, cross-lingual synonym, synonymous translation equivalent or bilingual synonym. | contrasting |
train_9832 | Furthermore, we analyzed the advantages and disadvantages of our model, and found that it is better at capturing semantic similarity of two sentences than averaging models, especially when they have little word overlap but similar meanings. | it tends to overestimate the low semantic similarity of a sentence pair. | contrasting |
train_9833 | Traditional Korean morphological analysis algorithms operate at the Eojeol level and yield all ambiguous parses (Table 1) that lead to that particular Eojeol, including the morpheme transformations and tags. | the model 1 proposed in this paper receives input at the sentence level and attempts to produce the one correct sequence of transformations and tags for all Eojeol within the sentence according to the context (Table 2). | contrasting |
train_9834 | Morphological analysis of the Korean language has traditionally been performed in several ways, including separation of Korean characters into graphemes by using linguistic knowledge, lattice tree lookup (Park et al., 2010), application of regular and irregular inflection rules (Kang and Kim, 1992), morphosyntactic rule sets, and by using a pre-computed dictionary (Shim and Yang, 2004). | we investigate whether morphological analysis of Korean is feasible without the use of any of these techniques and without a dictionary by making the assumption that common transformations and their underlying grapheme modifications can be easily recognized and learned with a Bi-LSTM model. | contrasting |
train_9835 | It has been used for event detection (Pozdnoukhov and Kaiser, 2011;Ertl et al., 2012;Vavliakis et al., 2012;Lau et al., 2012;Zhou and Chen, 2014), summarization, or finding influential users in social media. | only few studies used LDA to detect topic changes over time (Lau et al., 2012;Zhou and Chen, 2014). | contrasting |
train_9836 | The evolution of topics in on-line LDA models is usually shown using the most probable words from the word-topic distributions (Hoffman et al., 2010;Lau et al., 2012;Zhai and Boyd-Graber, 2013). | the same top words can appear in different topics, making differences between topics hard to show. | contrasting |
train_9837 | Cross-lingual embedding and dictionary classifiers provide a stronger baseline than LP, outperforming SVM data+resource when training data is sparse. | adding them as features to the SVM results in a classifier that consistently improves upon all other systems, even at small training sizes of only 20%. | contrasting |
train_9838 | The dictionary mapping approach ( §4.2.1) has been shown to be a strong stand-alone classifier and SVM feature (Table 5), slightly outperforming the cross-lingual word embedding approach ( §4.2.2). | the underlying English-German dictionary by DictCC is of considerable size, consisting of over 1.1 million translation pairs. | contrasting |
train_9839 | Stress Marking In the UoI corpus, all words with two or more syllables have a diacritic mark to indicate the location of stress. | the resources that we collected are not consistent in the use of such a diacritic. | contrasting |
train_9840 | (He and Lin, 2016;Wang et al., 2017;Wang and Jiang, 2016;Trischler et al., 2016;Parikn et al., 2016.). | all above approaches are similar to our One vs. One Matching model which deals with the matching measurement between one sentence (or one piece of text) and another sentence (or another piece of text). | contrasting |
train_9841 | However, all above approaches are similar to our One vs. One Matching model which deals with the matching measurement between one sentence (or one piece of text) and another sentence (or another piece of text). | our approach is a One vs. | contrasting |
train_9842 | Distributed representations of words play a major role in the field of natural language processing by encoding semantic and syntactic information of words. | most existing works on learning word representations typically regard words as individual atomic units and thus are blind to subword information in words. | contrasting |
train_9843 | That network shares a similar architecture with ours. | our architecture is different from the early network (Kim et al., 2016) in that ours is designed with a simpler network architecture and a different output layer based on the pre-trained word embeddings. | contrasting |
train_9844 | Synonyms are proposed for both words and word senses. | it turns out that not all synonyms for a given word are grouped together in distinct senses. | contrasting |
train_9845 | This is the hypothesis that describes the similarity between two words as the similarity of their closest senses (Budanitsky and Hirst, 2006). | if the synonym is monosemous, the algorithm 1 is applied (we note this variant algorithm 2). | contrasting |
train_9846 | (2013) observe that 25% of the MWEs in their test corpus are unseen in the training data, and that at most 19% of them could be correctly identified by their system based on sequence labeling. | the authors do not specify how unseen MWEs are defined, that is, if variants are counted as seen or unseen. | contrasting |
train_9847 | This feature identifies the similarity between play a.DET very.ADV important.ADJ role and play his.DET first.ADJ major role. | linear similarity does not capture postnominal adjectives (e.g. | contrasting |
train_9848 | For instance, the noun in many VMWEs must remain in singular, as in (8) vs. (10), or in plural, as in (11). | cOMP features might fail in case of rare VMWEs, and cannot be calculated for hapaxes. | contrasting |
train_9849 | Another common choice for parallel corpus in multilingual research, the Bible, is available in 2,530 languages (Agić et al., 2015). | 2 studies show that its archaic themes and small corpus size (1,189 chapters) can limit performance (Hao et al., 2018;Moritz and Büchler, 2017). | contrasting |
train_9850 | Another line of research focuses on using multilingual dictionaries as supervision (Ma and Nasukawa, 2017;Gutiérrez et al., 2016;Liu et al., 2015;Jagarlamudi and Daumé III, 2010;Boyd-Graber and Blei, 2009). | to parallel corpora, dictionaries are widely available and often easy to obtain. | contrasting |
train_9851 | The fewer entries the dictionary provides, the more VOCLINK degrades to monolingual LDA. | sOFTLINK can potentially transfer knowledge from the whole corpus. | contrasting |
train_9852 | Finding this definition automatically is not as trivial as it might sound and there is a lot of literature on this topic. | a lot of cases remain where we cannot find such a definition, because the extraction method fails or because there is no definition in the paper. | contrasting |
train_9853 | For most Indigenous languages, learning morphology automatically from corpora is not a viable option. | symbolic systems, especially those based on finite-state transducers (FSTs) have been successfully implemented for a number of languages. | contrasting |
train_9854 | The feasibility of implementing augmented and virtual reality projects is aided by the widespread interest in the technology and 3D game engines like Unity and Unreal. | there are still very few implementations for Indigenous languages in Canada. | contrasting |
train_9855 | Nonetheless, the noted grammatical similarities among these languages are expected to be sufficient for the generalizability of the pluralization approach. | meinhof's NC definition (Table 3) has several limitations when applied to computational tasks, which impede pluralization and generalizability: 1. | contrasting |
train_9856 | We further found that, similar to isiZulu and Runyankore, phonological conditioning was also required in chiShona, isiXhosa, Kinyarwanda, and Luganda; only Kikuyu does not require phonological conditioning. | as explained in Section 2, the rules for phonological conditioning are languagespecific. | contrasting |
train_9857 | Similar to semantic networks, we use a graph based representation in which nodes are associated with words. | to semantic networks, however, these edges are labelled with a vector, meaning that relation types are modeled in a continuous space. | contrasting |
train_9858 | Accordingly, it was found in (Vylomova et al., 2016) that a relation classifier which is trained on word vector differences is prone to predicting many false positives. | we can expect that our relation vectors are modeling relations in a far less ambiguous way. | contrasting |
train_9859 | In contrast, we can expect that our relation vectors are modeling relations in a far less ambiguous way. | these relation vectors are limited to word pairs that co-occur sufficiently frequently. | contrasting |
train_9860 | In particular, the diffvec vectors all express relationships from the metalwork domain (e.g., 'heavymetals' or 'annihilator-metal'), which reflects the fact that the music-related interpretation of the word 'metal' is not its dominant sense. | since our relation vectors are exclusively learned from sentences where both words co-occur ('heavy' and 'metal' in this example), the vector for 'heavy metal' clearly captures the musical sense (see e.g., 'thrash-metal' or 'glam-metal' in the original space). | contrasting |
train_9861 | The averaging model clustered questions about islands; we observed similar behavior using weighted averaging, doc2vec and GRAN. | skip-th clustered questions that start with "what country", which happens to be more suitable for identifying the LOC question type. | contrasting |
train_9862 | In this approach, frequently co-occurring word sequences are considered FEs. | noise such as 'is one of the' cannot be removed. | contrasting |
train_9863 | Domain-specific technical terms, such as natural language processing or reactive oxygen species, are unlikely to be extracted using the proposed method. | the usage of FEs is shown to differ across domains, which implies that the expressions should be re-ranked according to the users' discipline when candidate expressions are presented to users of the writing assistance system. | contrasting |
train_9864 | Ideally, there would be a single intrinsic metric for identifying "good" embeddings -and there are many proposals for such a metric (including word relatedness and analogies). | none of them have been shown to predict performance on a wide range of tasks, and there is evidence to the contrary (Chiu et al., 2016). | contrasting |
train_9865 | There are multiple proposals for "subconscious intrinsic evaluation" (Bakarov, 2018) based on correlations with psycholinguistic data such as N400 effect (Van Petten, 2014;, fMRI scans (Devereux et al., 2010;Søgaard, 2016), eye-tracking (Klerke et al., 2015;Søgaard, 2016), and semantic priming data (Lund et al., 1995;Lund and Burgess, 1996;Jones et al., 2006;Lapesa and Evert, 2013;Ettinger and Linzen, 2016;Auguste et al., 2017). | there are no large-scale studies that would show the utility of these methods in predicting downstream task performance. | contrasting |
train_9866 | The idea behind the word analogy task (Mikolov et al., 2013b) is that the "best" word embedding is the one that encodes linguistic relations in the most regular way: simple vector offset should be sufficient to capture semantic shifts such as F rance : P aris to Japan : T okyo. | this view of linguistic relations (and analogical reasoning) is oversimplified, and performance on word analogies has also been shown to depend on cosine similarity between source word vectors Linzen, 2016;Levy and Goldberg, 2014b). | contrasting |
train_9867 | Crucially, all these approaches make the same core assumption: that there is one feature of a representation that would make it the "best" (the highest correlation with human judgements, the most regular vector offsets, the closest approximation of a linguistic resource, etc.) | language is a multifaceted phenomenon, and different NLP tasks may rely on its different aspects -which would doom any one-metric-to-rule-them-all approach. | contrasting |
train_9868 | One more important observation from this experiment is that all the extrinsic and intrinsic tasks have high correlations with more than one LD factor, which illustrates the point about tasks being complex ensembles of various linguistic features. | it is only by breaking them down into smaller, controllable factors that we can explain and improve on them. | contrasting |
train_9869 | However, constructing training corpus for all languages and words is tremendously expensive, so the supervised approaches generally have some limitations on the set of the words that can be disambiguated. | the knowledge-based unsupervised approaches utilize lexical knowledge bases (LKBs) such as a Wordnet (Banerjee and Pederson., 2003;Chaplot et al., 2015). | contrasting |
train_9870 | However, in terms of the macro average score of SemEval-2013 and SemEval-2015, Wordsim_iterSRP2vSim shows higher performance than the Moro 14. | unsupervised knowledge-based approaches, including our system, generally have poorer performance than supervised approaches in the SemEval-2015 dataset. | contrasting |
train_9871 | This method has been proposed to solve the computational complexity problem of finding the optimal combination among all possible set of senses. | due to the nature of the greedy search, sometimes it makes hard to inference correct sense of the ambiguous word because of the error propagation from a previous result can determined answer. | contrasting |
train_9872 | (2015) empirically found that Expected Wins performs better than Trueskill. | almost all subsequent work ignored this finding and used the Trueskill model instead. | contrasting |
train_9873 | In the HTies variant, M 2 achieves statistically significant improvements compared to the other two metrics in both expanded and unexpanded sets. | for NoTies variant, GLEU achieves the best result. | contrasting |
train_9874 | In the case of its counterpart BLEU, if some n-grams of the hypothesis and reference match, it rightly assigns a non-zero score as the system shows some ability to perform translation. | for GEC, simply copying the input can result in matching several n-grams from the reference despite the system showing zero ability to perform correction. | contrasting |
train_9875 | It can be argued that GLEU intends to reward GEC systems for detecting errors by assigning a partial credit to systems that make spurious changes at locations where corrections are deemed necessary by human annotators. | this will encourage building GEC systems that provide inaccurate feedback and potentially mislead the end users (primarily language learners). | contrasting |
train_9876 | In Example 2, when multiple references are used, GLEU gives a higher score to Hypothesis 3 (ungrammatical) than Hypothesis 1 and 2, both of which are grammatical and each matches one of the two references exactly. | both M 2 and I-measure assign a lower score to Hypothesis 3 and conform to our intuition. | contrasting |
train_9877 | Nonlinear Affine Transformation Usually, a BiLSTM decoder takes the concatenation g i of the hidden state vectors as output for each hidden state. | in the SRL context, the encoder is supposed to distinguish the currently considered predicate from its candidate arguments. | contrasting |
train_9878 | Mapping a word into two different vectors can help the model disambiguate the role that it plays in different context. | biaffine Scoring: In the standard NMT context, given a target recurrent output vector h, considering that in a traditional classification task, the distribution of classes is often uneven, and that the output layer of the model normally includes a bias term designed to capture the prior probability P(y_i = c) of each class, with the rest of the model focusing on learning the likelihood of each class given the data P(y_i = c | x_i), (Dozat and Manning, 2017) introduced the bias terms into the bilinear attention to address such uneven problem, resulting in a biaffine transformation. | contrasting |
train_9879 | The existing positional encodings are mostly fixed encoding like sinusoid type or learned positionwise encoding which is set up per each word independently. | light-house positional encoding defines only one distance embedding with respect to one time step between words. | contrasting |
train_9880 | In particular, as the difficulty of domains associated with 'open-vocabulary' slots increased, like calendar and message, a larger gap in performance emerged to the amount of about a 5% absolute gain for slot filling and about a 1% error rate for intent detection. | multi-task learning with the intent detection model did not have much effect on the performance of slot filling. | contrasting |
train_9881 | Conventionally, grammatical words, especially function words, have been proposed for stylometric authorship attribution since they are independent of content. | character n-gram based approaches have largely outperformed function word based approaches (Kestemont, 2014) indicating that some lexical words may also help with authorship attribution. | contrasting |
train_9882 | It is known that function words are independent of content and are useful for representing style. | the success of character n-gram approaches in authorship attribution indicate that some lexical words may also be useful for authorship attribution. | contrasting |
train_9883 | For single-domain IMDB1M dataset, it can be observed from Table 3 that excluding all common nouns (no N N ) and proper nouns (no N N P ) affects the attribution performance drastically. | masking only topic words corresponding to common nouns (no topic N N ) and proper nouns (no topic N N P ) seems to improve attribution performance compared to masking them completely (no N N P , no N N ). | contrasting |
train_9884 | This could possibly be due to the heavy dependence on common nouns and proper nouns for attribution under those scenarios. | for cross-domain Guardian10 dataset, it can be observed from Table 4 that excluding proper nouns (no N N P ) completely yields a marked improvement in performance for both crosstopic and cross-genre experiments. | contrasting |
train_9885 | We evaluated the role of syntax using a purely syntactic language model and show that syntax may be useful with cross-genre attribution while cross-topic attribution and single-domain attribution may be benefit from both syntax and lexical information. | syntactic language models are not discriminative by themselves and need to be used in conjunction with more successful character n-gram models. | contrasting |
train_9886 | For common nouns, verbs, adjectives and adverbs, masking off certain topic words yield better performance suggesting that the remaining words corresponding to these lexical POS may help represent style. | proper nouns seem to be heavily influenced by topic and cross-domain attribution may benefit from completely masking them. | contrasting |
train_9887 | On the one hand, we must encode firm-specific information into the dense document representations so as to make them different across targets. | we must identify the most informative sentences while disregarding noise for the prediction. | contrasting |
train_9888 | Word composition is a promising technique for representation learning of large linguistic units (e.g., phrases, sentences and documents). | most of the current composition models do not take the ambiguity of words and the context outside of a linguistic unit into consideration for learning representations, and consequently suffer from the inaccurate representation of semantics. | contrasting |
train_9889 | The underlying intuition known as distributional hypothesis (Harris, 1954) can be summarized as: "a word is characterized by the company it keeps" (Firth, 1957). | there is a lack of theory to justify why it works, even for a simple task such as word classification or clustering. | contrasting |
train_9890 | Besides, although the results of SSA are different the two corpora share again the same tendency: they both reach their best when using DSC without WMI. | dSC always improves the results whether it is applied alone or combined with WMI. | contrasting |
train_9891 | These works show that a strong relationship exists between computational semantic models and neural representations. | it remains to be seen how cognitive semantic representations, including localized neural activation patterns can help improve the performance of computational semantic models, especially for complicated classification and recognition tasks. | contrasting |
train_9892 | Word prediction models tend to perform better in natural language processing tasks such as analogy, similarity, synonym detection, concept taxonomy and sentiment analysis (Socher et al., 2011;Socher et al., 2013). | their relationship with cognitive lexical representation is not yet well understood, at least to a degree that would allow us to improve current computation lexical semantic models. | contrasting |
train_9893 | On the other hand, the random baseline system creates the most diverse tag space by using all of the possible tags. | its lower micro-F1 score of 6.30% makes it impractical to be used in real world scenario. | contrasting |
train_9894 | Similar work has, to the best of our knowledge, only been done in the psychology domain. | related work from this area does not target the goal of predictive modeling (Stevenson et al., 2007;Pinheiro et al., 2017). | contrasting |
train_9895 | Thus, an important difference between SHR and ISR is that the former is computed on a single data set whereas the latter requires two different data sets with overlapping items. | iSR can be computed on the final ratings alone, whereas SHR requires knowledge of the judgments of the individual raters. | contrasting |
train_9896 | It is important to note that the decision for N * = 20 is necessarily arbitrary, to some degree, with higher SHR estimates arising from higher values of N * . | 20 raters are often used in psychological studies (Warriner et al., 2013;Stadthagen-Gonzalez et al., 2017b), while being way higher than the number of raters typically used in NLP for emotion annotation, both for the word and sentence level (Yu et al., 2016a;Strapparava and Mihalcea, 2007). | contrasting |
train_9897 | Previous work has limited itself to data sets comprising all three VAD dimensions with the implicit belief that Dominance provides valuable affective information which is important for ERM. | since only about half of the data sets developed in psychology labs (and even less provided by NLP groups) actually do comprise Dominance, this decision massively decreases the amount of data sets at hand. | contrasting |
train_9898 | In addition, observing the confusion matrix in JMTE suggests that in both tasks data points are separable, since misclassified data points are not skewed towards particular or the most popular classes. | emotion tag surprise is mainly misclassified with joy and discussed, we further observed the data points to address this issue and noticed most of these data points imply surprise and even as human it was difficult for us to categorize them as surprise. | contrasting |
train_9899 | They found that the results of their functional approach suggests (in their own language): "Negative emotions are many and specific. | positive emotions are few and less specific." | contrasting |
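The rows above follow a simple four-column schema. As a usage illustration, below is a minimal sketch of loading and filtering data shaped like this with the Hugging Face `datasets` library. The repository id `user/sentence-pair-dataset` is a hypothetical placeholder (the actual Hub name of this dataset is not given here); the column names `id`, `sentence1`, `sentence2`, and `label` come from the header above.

```python
# Minimal sketch, assuming a dataset with the columns shown above
# (id, sentence1, sentence2, label) is hosted on the Hugging Face Hub.
# "user/sentence-pair-dataset" is a hypothetical placeholder repo id.
from datasets import load_dataset

dataset = load_dataset("user/sentence-pair-dataset", split="train")

# Inspect a few rows; each row is a dict keyed by column name.
for row in dataset.select(range(3)):
    print(row["id"], "->", row["label"])
    print("  sentence1:", row["sentence1"][:80])
    print("  sentence2:", row["sentence2"][:80])

# The label column has 4 classes; keep only the "contrasting" pairs,
# which is the class shown in the preview above.
contrasting = dataset.filter(lambda row: row["label"] == "contrasting")
print(f"{len(contrasting)} contrasting pairs")
```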