id (string, 7-12 chars) | sentence1 (string, 6-1.27k chars) | sentence2 (string, 6-926 chars) | label (string, 4 classes) |
---|---|---|---|
train_3900 | Most previous work on part-of-speech tagging, sentiment analysis, and dependency parsing also had only binary-valued pivot features. | for authorship attribution, all features are count-based, so translation from a pivot feature value to a binary classification problem is not trivial. | contrasting |
train_3901 | Authorship attribution typically relies on count-based features. | the classic SCL algorithm assumes that all pivot features are binary, so that it can train binary classifiers to predict pivot feature values from non-pivot features. | contrasting |
train_3902 | Thus, while these analyses are reliable for the examples they focus on, they cannot be generalized to other examples. | the trend of actual usage can be collected automatically from a large corpus. | contrasting |
train_3903 | (2.5%) affection-ACC word-DAT feel (φ I feel the affection in φ your words.) | since the ACC-DAT order is overwhelmingly dominant in the case that the verb is sasou (ask), its dative argument is dêto (date), and its accusative argument is josei (woman) as shown in Example (3), the ACC-DAT order is considered to be canonical in this case. | contrasting |
train_3904 | I-TOP him-DAT camera-ACC showed c: φI -NOM him-DAT camera-ACC showed (I showed him a camera.) | when an argument represents the topic of the sentence (TOP), the topic marker wa is used as a postpositional particle, and case particles ga and wo do not appear explicitly. | contrasting |
train_3905 | In the case of show-type verbs, the dative argument of a causative sentence is the subject of its corresponding inchoative sentence as shown in Example (6). | in the case of pass-type verbs, the accusative argument is the subject of its corresponding inchoative sentence as shown in Example (7). | contrasting |
train_3906 | As predicted, we can alleviate this problem by using a dependency parser. | the accuracy of the state-of-the-art Japanese dependency parser is not very high, specifically about 92% for newspaper articles (Yoshinaga and Kitsuregawa, 2014), and thus, inappropriate examples would be extracted even if we used one. | contrasting |
train_3907 | president-NOM degree-ACC conferred (The president conferred a degree on φ someone .) | if we use all extracted examples that include only one of the dative and accusative arguments for calculating R DAT-only , the value is likely to suffer from a bias that the larger R ACC-DAT is, the larger R DAT-only becomes. | contrasting |
train_3908 | Through a preliminary investigation on Kyoto University Text Corpus 7 , we confirmed the effect of this constraint to avoid the bias. | r ACC-DAT is calculated as follows: where N DAT-ACC/ACC-DAT is the number of example types that include both the dative and accusative arguments in the corresponding order. | contrasting |
train_3909 | LIEL makes use of new as well as well-established features in the EL literature. | we make sure to use only non-lexical features. | contrasting |
train_3910 | NEREL (Sil and Yates, 2013) is a powerful joint entity extraction and linking system. | by construction their model is not language-independent due to the heavy reliance on type systems of structured knowledgebases like Freebase. | contrasting |
train_3911 | SASH is expected to perform well when the number of dimensions is large, since FLANN and NGT perform worse as the number of dimensions becomes larger. | nGT would be a better choice since most existing pre-trained embeddings (Turian et al., 2010;Mikolov, 2013;Pennington et al., 2014a) have a few hundred dimensions. | contrasting |
train_3912 | This fast can be acceptable assuming an empirical rule that a corpus follows a power law or Zipf's law. | graphs (a), (b), and (c) have quite different tendencies. | contrasting |
train_3913 | The results of GN and GV embeddings in graphs (a) and (c) show a similar tendency to those of Wikipedia embeddings in Figure 8. | the overall performance for the analogy task using GV embeddings is unexpectedly high, contrary to the results for the search task in Figure 6 (c). | contrasting |
train_3914 | For each encoder that requires training, we tried 0.025, 0.0025, and 0.00025 as an initial learning rate, and selected the best value for the encoder. | to the presentation of Section 3, we compose a pattern vector in backward order (from the last to the first) because preliminary experiments showed a slight improvement with this treatment. | contrasting |
train_3915 | vector of the content word and to ignore the semantic vector of the preposition. | input gates close and forget gates open when the current word is 'be' or 'a' and scanned words form a noun phrase (e.g., "charter member of"), a complement (e.g., "eligible to participate in"), or a passive voice (e.g., "require(d) to submit"). | contrasting |
train_3916 | Although these reports described good performance on the respective tasks, we are unsure of the generality of distributed representations trained for a specific task such as the relation classification. | this paper demonstrated the contribution of distributed representations trained in a generic manner (with the Skip-gram objective) to the task of relation classification. | contrasting |
train_3917 | (2015) to score dependency arcs using neural network model. | different from their work, we introduce a Bidirectional LSTM to capture long range contextual information and an extra forward LSTM to better represent segments of the sentence separated by the head and modifier. | contrasting |
train_3918 | (2015) use distributed representations obtained by averaging word embeddings in segments to represent contextual information of the word pair, which could capture richer syntactic and semantic information. | their method is restricted to segment-level since their segment embedding only consider the word information within the segment. | contrasting |
train_3919 | Previous neural network models (Pei et al., 2015;Pei et al., 2014;Zheng et al., 2013) normally set context window around a word and extract atomic features within the window to represent the contextual information. | context window limits their ability in detecting long-distance information. | contrasting |
train_3920 | (2015), our basic model occasionally performs worse in recovering long distant dependencies. | this should not be a surprise since higher order models are also motivated to recover long-distance dependencies. | contrasting |
train_3921 | Knowledge bases such as Wordnet (Miller, 1995) and Freebase (Bollacker et al., 2008) have been shown very useful to AI tasks including question answering, knowledge inference, and so on. | traditional knowledge bases are symbolic and logic, thus numerical machine learning methods cannot be leveraged to support the computation over the knowledge bases. | contrasting |
train_3922 | A dot denotes a triple and its position is decided by the difference vector between tail and head entity (t − h). | since TransE adopts the principle of t − h ≈ r, there is supposed to be only one cluster whose centre is the relation vector r. results show that there exist multiple clusters, which justifies our multiple relation semantics assumption. | contrasting |
train_3923 | All remaining hsICUs are recalled by both the control and dementia models, indicating that the hsICU topics are discussed by both groups. | to assess whether they are discussed to the same extent, i.e. | contrasting |
train_3924 | Interestingly, all of the control cluster words are used in the same contexts by both healthy participants and those with dementia. | the average number of times these words are used per transcript is significantly higher in the control group (1.07, s.d. | contrasting |
train_3925 | In recent years, there have been several strands of work which attempt to collect human-labeled data for this task -in the form of document, question and answer triples -and to learn machine learning models directly from it (Richardson et al., 2013;Berant et al., 2014;Wang et al., 2015). | these datasets consist of only hundreds of documents, as the labeled examples usually require considerable expertise and neat design, making the annotation process quite expensive. | contrasting |
train_3926 | (2016) also experiment on these two datasets and report competitive results. | our single model not only still outperforms theirs, but also appears to be structurally simpler. | contrasting |
train_3927 | Monroe and Potts (2015) uses learning to improve the pragmatics model. | we use pragmatics to speed up the learning process by capturing phenomena like mutual exclusivity (Markman and Wachtel, 1988). | contrasting |
train_3928 | Broadly speaking, both the localized tests of individual phonosemantic sets and the global analyses of phonosemantic systematicity challenge the arbitrariness of the sign. | they attribute responsibility for non-arbitrariness differently. | contrasting |
train_3929 | From this, they conclude that the observed systematicity of the lexicon is not a consequence only of small pockets of sound symbolism, but is rather a feature of the mappings from sound to meaning across the lexicon as a whole. | it is possible that their methods may not be sensitive enough to find localized phonosemantic sets. | contrasting |
train_3930 | The validation set was used to tune the threshold for classifying a pair as positive, as well as the maximum number of each term's most associated contexts (N ). | to the original paper, in which the number of each term's contexts is fixed to N , in this adaptation it was set to the minimum between the number of contexts with LMI score above zero and N . | contrasting |
train_3931 | 1 There is clearly also a practical demand for multilingual image captions, e.g., automatic translation of descriptions of art works would allow access to digitized art catalogues across language barriers and is thus of social and cultural interest; multilingual product descriptions are of high commercial interest since they would allow to widen e-commerce transactions automatically to international markets. | while datasets of images and monolingual captions already include millions of tuples (Ferraro et al., 2015), the largest multilingual datasets of images and captions known to the authors contain 20,000 (Grubinger et al., 2006) or 30,000 2 triples of images with German and English descriptions. | contrasting |
train_3932 | We rely on the CNN framework (Socher et al., 2014;Simonyan and Zisserman, 2015) to solve semantic classification and disambiguation tasks in NLP with the help of supervision signals from visual feedback. | we consider image captioning as a different task than caption translation since it is not given the information of the source language string. | contrasting |
train_3933 | For clarity, we focus on K-way classification, where Y = ∆ K is the K-dimensional probability simplex and y ∈ {0, 1} K ⊂ Y is a one-hot encoding of the class label. | our method specification can straightforwardly be applied to other contexts such as regression and sequence learning (e.g., NER tagging, which is a sequence of classification decisions). | contrasting |
train_3934 | This sheds light from another perspective on why our framework would work. | we found in our experiments (section 5) that to produce strong performance it is crucial to use the same labeled data x n in the two losses of Eq. | contrasting |
train_3935 | (2) 6: until convergence Output: Distill student network p θ and teacher network q ning over multiple examples), requiring joint inference. | as mentioned above, p is more lightweight and efficient, and useful when rule evaluation is expensive or impossible at prediction time. | contrasting |
train_3936 | By incorporating the bi-gram transition rules (Row 2), the joint teacher model q achieves 1.56 improvement in F1 score that outperforms most previous neural based methods (Rows 4-7), including the BLSTM-CRF model (Lample et al., 2016) which applies a conditional random field (CRF) on top of a BLSTM in order to capture the transition patterns and encourage valid sequences. | our method implements the desired constraints in a more straightforward way by using the declarative logic rule language, and at the same time does not introduce extra model parameters to learn. | contrasting |
train_3937 | Once again, this speaks to the remarkable conservativeness of the Icelandic language. | these small differences are in fact reliable: There are significant slopes in both regressions (positive differences: R + = .161, p + < .001; negative differences: R − = −.164, p − < .001). | contrasting |
train_3938 | The analyses reported exclude these 552 points. | keeping these outlying values in the regression, both key effects remained significant, but the slope estimates were less trustworthy. | contrasting |
train_3939 | This results in a mismatch between the Obj and Subj ratings. | relying on subjective ratings alone is also problematic since crowd-sourced subjects frequently give inaccurate responses and real users are often unwilling to extend the interaction in order to give feedback, resulting in unstable learning (Zhao et al., 2011;Gašić et al., 2011). | contrasting |
train_3940 | This removes the need for the Obj check during online policy learning and the resulting policy is as effective as one trained with dialogues using the Obj = Subj check. | a user simulator will only provide a rough approximation of real user statistics and developing a user simulator is a costly process (Schatzmann et al., 2006). | contrasting |
train_3941 | This measure was later used as a reward function for learning a dialogue policy (Rieser and Lemon, 2011). | as noted, task completion is rarely available when the system is interacting with real users and also concerns have been raised regarding the theoretical validity of the model (Larsen, 2003). | contrasting |
train_3942 | This shows that dialogue length was one of the prominent features in the dialogue representation d. It can also be seen that the longer failed dialogues (more than 15 turns) are located close to each other, mostly at the bottom right. | there are other failed dialogues which are spread throughout the cluster. | contrasting |
train_3943 | At first glance, it might appear that a locally normalized model used in conjunction with beam search or exact search is able to revise earlier decisions. | the label bias problem (see Bottou (1991), Collins (1999) pages 222-226, Lafferty et al. | contrasting |
train_3944 | For every amount of lookahead k, we can construct examples that cannot be modeled with a locally normalized model by duplicating the middle input b in (7) k + 1 times. | only a local model with scores ρ that considers the entire input can capture any distribution p(d 1:n |x 1:n ): in this case the decompo- increasing the amount of context used as input comes at a cost, requiring more powerful learning algorithms, and potentially more training data. | contrasting |
train_3945 | Finally, Watanabe and Sumita (2015) introduce recurrent components and additional techniques like max-violation updates for a corresponding constituency parsing model. | our model does not require any recurrence or specialized training. | contrasting |
train_3946 | As we can see in Table 4, the separated verb prefix is always in the RK. | an adverbial modifier as in (1-b) is rarely in the RK (0.35% of the adverbs cases in the TüBa-D/Z). | contrasting |
train_3947 | For the baseline, SVM without embedding (w/o Embed) achieved 91.99% accuracy, which is already very competitive. | the models with word embedding trained on 6 billion tokens (6B-50d) and 840 billion tokens (840B-300d) (Pennington et al., 2014) achieved 92.89% and 93.00%, respectively. | contrasting |
train_3948 | Finding domain invariant features is critical for successful domain adaptation and transfer learning. | in the case of unsupervised adaptation, there is a significant risk of overfitting on source training data. | contrasting |
train_3949 | To further demonstrate the advantage of our idea of minimal features with bidirectional sentence representations, we extend our work from dependency parsing to constituency parsing. | the latter is significantly more challenging than the former under the shift-reduce paradigm because: [Figure 5: Shift-Promote-Adjoin parsing example; actions: 1 shift (I), 2 pro (NP), 3 shift (like), 4 pro (VP), 5 shift (sports), 6 pro (NP), 7 adj, 8 pro (S), 9 adj] | contrasting |
train_3950 | A notable shortcoming of this approach is that the translation model probabilities thus calculated from the training bitext can be unintuitive and unreliable (Marcu and Wong, 2002;Foster et al., 2006) as they reflect only the distribution over the phrase pairs observed in the training data. | from an SMT perspective it is important that the models reflect probability distributions which are preferred by the decoding process, i.e., phrase translations which are likely to be used frequently to achieve better translations should get higher scores and phrases which are less likely to be used should get low scores. | contrasting |
train_3951 | Thus forced alignment is a reestimation technique where translation probabilities are calculated based on their frequency in best-scoring hypotheses instead of the frequencies of all possible phrase pairs in the bitext. | one limitation of forced alignment is that only the phrase translation model can be re-estimated since it is restricted to align the source sentence to the given target reference, thus fixing the choice of reordering decisions. | contrasting |
train_3952 | Another relevant line of research relates tuning (weight optimisation), where our work lies between forced decoding (Wuebker et al., 2010) and the bold updating approach of (Liang et al., 2006). | our approach specifically proposes a novel method for training models using oracle BLEU translations. | contrasting |
train_3953 | (2010) address this problem by using a leave-one-out approach where they modify the phrase translation probabilities for each sentence pair by removing the counts of all phrases that were extracted from that particular sentence. | in our approach, we do not impose a constraint to produce the exact translation, instead we use the highest BLEU translations which may be very different from the references. | contrasting |
train_3954 | For the linear interpolation of the re-estimated phrase table with the baseline, forced decoding shows only a slight improvement for MT06, MT08 and MT09 and still suffers from a substantial drop for MT05. | oracle-BLEU re-estimation shows consistent improvements for all test sets with a maximum gain of up to +0.7 for MT06. | contrasting |
train_3955 | For an additional anal- ysis, we experimented with the interpolation of both the re-estimated phrase table (forced decoding and oracle-BLEU) with the baseline. | improvements achieved with this interpolation did not surpass the best result obtained for the oracle-BLEU re-estimation. | contrasting |
train_3956 | This implies that only in combination with the original phrase table does forced-decoding with leave-one-out outperform the baseline. | oracle-BLEU re-estimation on its own not only performs better than forced decoding, but also gives a performance equal to forced decoding with leave-one-out when interpolated with baseline phrase table. | contrasting |
train_3957 | A comparison of the two approaches goes in favor of the joint setup: Without the reranker, models generating trees produce less semantic errors and gain higher BLEU/NIST scores. | with the reranker, the string-based model is able to reduce the number of semantic errors while producing outputs significantly better in terms of BLEU/NIST. | contrasting |
train_3958 | , w |V | (t) tend to be different across t, by our very postulate, so that there is non-identity of these 'reference points' over time. | as we may assume that the meanings of at least a few words are stable over time, we strongly expect the vectorsw i (t) to be suitable for our task of analysis of meaning changes. | contrasting |
train_3959 | We note that this linearity concerns the core vocabulary of languages (in our case, words that occurred at least 100 times in each time period), and, in the case of TISS graphs, is an average result; particular words may have drastic, non-linear meaning changes across time (e.g., proper names referring to entirely different entities). | our analysis also finds that most core words' meanings decay linearly in time. | contrasting |
train_3960 | In the segmentation literature, several previous systems learn lexical items from variable input (Elsner et al., 2013;Daland and Pierrehumbert, 2011;Rytting et al., 2010;Neubig et al., 2010;Fleck, 2008). | these models use pre-processed representations of the acoustics (phonetic transcription or posterior probabilities from a phone recognizer) rather than inducing an acoustic category structure directly. | contrasting |
train_3961 | Previous approaches heavily depend on language-specific knowledge and pre-existing natural language processing (NLP) tools. | compared to English, not all languages have such resources and tools available. | contrasting |
train_3962 | For example, in S2, when predicting the type of a trigger candidate " release", the forward sequence information such as "court" can help the classifier label "release" as a trigger of a "Release-Parole" event. | for feature engineering methods, it is hard to establish a relation between "court" and "release", because there is no direct dependency path between them. | contrasting |
train_3963 | R1: But all this is beyond your control. | r2: you cannot choose yourself. | contrasting |
train_3964 | Since the baseline strategy should access all the short texts in each size of data collection, which means in 1k size of BNC data collection, the baseline strategy access all these 1k candidates. | our proposed strategies under different settings only access small size candidates to obtain the results. | contrasting |
train_3965 | The exact nature of this tradeoff isn't clear, and is an interesting avenue for future work. | there are potential reasons for the results discrepancy betweeen mean rank and hits@10. | contrasting |
train_3966 | Such clustering is usually performed through manually defined semantic and syntactic features defined over argument instances. | the representation based on these features are usually sparse and difficult to generalize. | contrasting |
train_3967 | As analyzed in Section 3.2, the element-wise difference matching layer does not add to model complexity and can be absorbed as a special case into simple concatenation. | explicitly using such heuristic yields an accuracy boost of 1-2%. | contrasting |
train_3968 | For most sentence models including TBCNN, the overall complexity is at least O(n). | an efficient matching approach is still important, especially to retrieval-and-reranking systems (Yan et al., 2016;Li et al., 2016). | contrasting |
train_3969 | We begin by establishing a CRF baseline (#1) and show that adding segmentation features helps (#2). | adding those features to the full model (with embeddings) in Peng and Dredze (2015) (#3) did not improve results (#4). | contrasting |
train_3970 | Sampling from a class n-gram provided many complementary word forms, not easily generated by the other models. | it became successively harder to improve the OOV reduction rates by a combination of three models. | contrasting |
train_3971 | Speech-based, demographic and morphological features unequivocally contributed to performance. | the effect of semantic features seems less obvious as they harm performance taken as a whole but some individual semantic features are useful for the system, as shown by the results achieved with just using significant features. | contrasting |
train_3972 | Recently, deep learning methods provide an effective way of reducing the number of handcrafted features (Socher et al., 2012;Zeng et al., 2014). | these approaches still use lexical resources such as WordNet (Miller, 1995) or NLP systems like dependency parsers and NER to get high-level features. | contrasting |
train_3973 | The tagger is greedy: the tagging decisions are independent of each other. | as shown below and in other recent work using bi-RNNs for sequence tagging, we can still produce competitive tagging accuracies, because of the richness of the representation v i that takes the entire input sequence into account. | contrasting |
train_3974 | [Table: LSTM(-3) 1.298, LSTM(3-3) 1.034, LSTM(1-3) 0.990] Our deep single task model increases performance over this baseline by 30%. | we see that when we predict both POS and the target task at the top layer, Rademacher complexity is lower and close to a random baseline. | contrasting |
train_3975 | These rules contain typed relations and can also be applied in a context-sensitive manner. | ignoring the types and applying the inference rules out of context worked better on our dataset, perhaps because Berant et al. | contrasting |
train_3976 | Current data-to-text systems assume that the underlying data is precise and correct - an assumption which is heavily criticised by other disciplines concerned with decision support, such as medicine (Gigerenzer and Muir Gray, 2011), environmental modelling (Beven, 2009), climate change (Manning et al., 2004), or weather forecasting (Kootval, 2008). | simply presenting numerical expressions of risk and uncertainty is not enough. | contrasting |
train_3977 | This is a 44% average increase in game score. | multi-modal vs. NLG-only: there is no significant difference between the NLG only and the multi-modal representation, for game score. | contrasting |
train_3978 | Confidence: For confidence, the multi-modal representation is significantly more effective than NLG only (p < 0.01, effect = 17.7%). | as Table 2 shows, although adults did not feel very confident when presented with NLG only, they were able to make better decisions compared to being presented with graphics only. | contrasting |
train_3979 | Demographic factors: We further found that prior experience on making decisions based on risk, familiarity with weather models, and correct literacy test results are predictors of the players' understanding of uncertainty, which is translated in both confidence and game scores. | we found that the education level, the gender, or being native speaker of English does not contribute to players' confidence and game scores. | contrasting |
train_3980 | Post-edited data are often used in incremental MT frameworks as additional training material. | often this does not fully exploit the potential of these rich PE data: e.g., PE data may just be drowned out by a large SMT model. | contrasting |
train_3981 | These procedures search over a vast space of translations, much larger than is considered by the NMT beam search. | the Hiero context-free grammars that make efficient search possible are weak models of translation. | contrasting |
train_3982 | As expected, all features correlate with age and income in the same direction. | some features and groups are more predictive of one or the other (depicted above or below the principal diagonal in Figure 2). | contrasting |
train_3983 | As we can see, it is possible to predict the most pessimistic 14 users with perfect accuracy. | some of the most likely optimistic users actually belonged to another class based on the ground truth labels. | contrasting |
train_3984 | Since this is a generative model, it can not only be used to assign probability to observed sentences, but also generate new sentences based on the model parameters. | extra-linguistic information, if available, can improve classification performance (Volkova et al., 2013;Hovy, 2015), and fakereview detection models often also exploit metainformation about the author and their behavior , looking for irregularities. | contrasting |
train_3985 | The numbers are generally in a high range, which is encouraging, since it means that our models can detect fake reviews fairly reliably. | we also see that the conditional models introduced here quickly become significantly harder to detect than the regular LM. | contrasting |
train_3986 | Even in MT (Ling et al., 2015b), where authors use the character transformation presented in (Ballesteros et al., 2015;Ling et al., 2015a) both in the source and target. | they do not seem to get clear improvements. | contrasting |
train_3987 | The concatenation of these max values already provides us with a representation of each word as a vector with a fixed length equal to the total number of convolutional kernels. | the addition of two highway layers was shown to improve the quality of the language model in (Kim et al., 2016) so we also kept these additional layers in our case. | contrasting |
train_3988 | In the target size we are still limited in vocabulary by the softmax layer at the output of the network and we kept the standard target word embeddings in our experiments. | the results seem to show that the affix-aware representation of the source words has a positive influence on all the components of the network. | contrasting |
train_3989 | statistical phrase-based, are that the translation is faced with trainable features and optimized in an end-to-end scheme. | there still remain many challenges left to solve, such as dealing with the limitation in vocabulary size. | contrasting |
train_3990 | Parallel Sentences occur naturally, and provide training examples that are more consistent with downstream applications. | they can be noisy due to automatic sentence alignment and one-to-many mappings, and bag-of-word representations of sentence meaning are likely to be increasingly noisier as segments get longer. | contrasting |
train_3991 | The best bilingual compositional representations are better than non-compositional Glove embeddings (Pennington et al., 2014), but worse than compositional Paragram embeddings (Wieting et al., 2016). | paragram initialization requires large amounts of text and human word similarity judgments for tuning, while our models were initialized randomly. | contrasting |
train_3992 | The closest model to LCTM is Gaussian LDA (Das et al., 2015), which models each topic as a Gaussian distribution over the word embedding space. | the assumption that topics are unimodal in the embedding space is not appropriate, since topically related words such as 'neural' and 'networks' can occur distantly from each other in the embedding space. | contrasting |
train_3993 | (Petterson et al., 2010) exploits external word features to improve the Dirichlet prior of the topic-word distributions. | both of the models cannot handle OOV words, because they assume fixed word types. | contrasting |
train_3994 | Also, we see that LCTM performs comparable to LFLDA and nl-cLDA, both of which incorporate information of word embeddings to aid topic inference. | as we will see in the next section, LCTM can better handle OOV words in held-out documents than LFLDA and nl-cLDA do. | contrasting |
train_3995 | We see that LCTM-UNK outperforms other topic models in almost every setting, demonstrating the superiority of our method, even when OOV words are ignored. | the fact that LCTM outperforms LCTM-UNK in all cases clearly illustrates that LCTM can effectively make use of information about OOV to further improve the representation of unseen documents. | contrasting |
train_3996 | Consequently, word embeddings have been commonly used in the last few years for lexical similarity tasks and as features in multiple, syntactic and semantic, NLP applications. | keeping embedding vectors for hundreds of thousands of words for repeated use could take its toll both on storing the word vectors on disk and, even more so, on loading them into memory. | contrasting |
train_3997 | This suggests that mentioned trolls are very similar to paid trolls (except for their reply rate, time and day of posting patterns). | using just mentions might be a "witch hunt": some users could have been accused of being "trolls" unfairly. | contrasting |
train_3998 | The original phrase table requires 2.8GB memory. | the 90% pruned table only requires 350MB of memory. | contrasting |
train_3999 | On the one hand, it uses SGNS window sampling, negative sampling, and stochastic gradient descent (SGD) to minimize a loss function that weights frequent co-occurrences heavily but also takes into account negative co-occurrence. | since PPMI generally outperforms PMI on semantic similarity tasks (Bullinaria and Levy, 2007), rather than implicitly factorize a shifted PMI matrix (like SGNS), LexVec explicitly factorizes the PPMI matrix. | contrasting |
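Each row above follows the same pipe-delimited layout as the header (id | sentence1 | sentence2 | label |). A minimal sketch of a parser for rows rendered in this layout; `parse_row` is a hypothetical helper, not part of any official tooling for this dataset, and it assumes no field contains a literal `|` character:

```python
def parse_row(line: str) -> dict:
    """Split one rendered row ('id | sentence1 | sentence2 | label |')
    into its four named fields.

    Assumes no field contains a literal '|' character; raises ValueError
    when the row does not have exactly four fields.
    """
    # Drop surrounding whitespace and the trailing pipe, then split on '|'
    parts = [field.strip() for field in line.strip().strip("|").split("|")]
    if len(parts) != 4:
        raise ValueError(f"expected 4 fields, got {len(parts)}")
    return dict(zip(("id", "sentence1", "sentence2", "label"), parts))
```

For example, `parse_row("train_3900 | s1 | s2 | contrasting |")["label"]` returns `"contrasting"`.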