id (stringlengths 7–12) | sentence1 (stringlengths 6–1.27k) | sentence2 (stringlengths 6–926) | label (stringclasses, 4 values)
---|---|---|---|
train_101100 | While the model trained by HS is efficient to evaluate perplexities, NCE training requires summation over all words in the vocabulary in the denominator of the softmax to compute perplexity, an impracticality for large vocabulary. | as discussed in Section 3.3, the pipeline method, although commonly used in deep learning literatures, does not suit NLP applications well because of the sparsity in word embeddings. | neutral |
train_101101 | For each of the N words w i in phrase p we construct the embedding: where e w i is the embedding for word i; and refers to point-wise product. | we report NCE loss with a fixed set of samples for NCE trained models. | neutral |
train_101102 | We use candidate sets of size 1k/10k/100k from the most frequent N words in NYT and report mean reciprocal rank (MRR). | we propose to Table 1: Feature templates for word w i in phrase p. t(w): POS tag; c(w): word cluster (when w is a function word, i.e. | neutral |
train_101103 | POS-tagging, morphological analysis, and linked NE phrases were used to detect other mentions of NEs that appear without links in text. | in the next section we take into consideration various classifier combination methods in order to aggregate the best decisions of SSL and DL classifiers, and to improve overall performance. | neutral |
train_101104 | We used an algorithm adapted from Althobaiti et al. | if the base classifiers agree on the NE type of a certain word, then it is annotated by an agreed NE type. | neutral |
train_101105 | This analysis allows us to avoid reasoning about entity triples (quadruples, etc. | our evaluation methodology is inspired by information retrieval evaluations (Manning et al., 2008). | neutral |
train_101106 | The MAP numbers are somewhat low because almost half of the test questions have no correct answers and all models get an average precision of 0 on these questions. | the most obvious limitation is the restriction to existentially quantified conjunctions of predicates. | neutral |
train_101107 | 6 In terms of overall MAP, Freebase outperforms our approach by a fair margin. | from a machine learning perspective, training a probabilistic database via matrix factorization is easier than training a semantic parser, as there are no difficult inference problems. | neutral |
train_101108 | The second-layer hidden variable vector h 2 are used as the LFR of this sample. | the features in F s and F t are are ordered the same as the order they appeared in training data. | neutral |
train_101109 | Our DBN model uses the second-layer hidden variable vector h 2 to represent this sample. | to reduce the number of argument candidates, we adopt the pruning strategy in Zhao et al., (2009), which is adapted from the strategy in Xue and Palmer (2004). | neutral |
train_101110 | Below is a sentence from an article entitled "Photolithography" in Simple Wikipedia: Microphototolithography is the use of photolithography to transfer geometric shapes on a photomask to the surface of a semiconductor wafer for making integrated circuits. | in the next section, we will review the evaluation methodology used in recent research, discuss its shortcomings and propose alternative evaluations. | neutral |
train_101111 | We make our argument not as a criticism of others or ourselves, but as an effort to refocus research directions in the future (Eisenstein, 2013). | although discourse is known to affect readability, the relation between discourse and text simplification is still under-studied with the use of statistical methods (Williams et al., 2003;Siddharthan, 2006;Siddharthan and Katsos, 2010). | neutral |
train_101112 | We also introduce a new comparative approach to simplification corpus analysis. | we also thank action editor Rada Mihalcea and three anonymous reviewers for their thoughtful comments, and Ani Nenkova, Alan Ritter and Maxine Eskenazi for valuable discussions. | neutral |
train_101113 | For example with T = 40, our w2v-LDA and glove-LDA obtain F 1 scores at 40.0% and 38.9% which are 4.5% and 3.4% higher than F 1 score at 35.5% obtained by the LDA model, respectively. | method λ = 1.0 T=6 T=20 T=40 T=80 LDA -16.7 ± 0.9 -11.7 ± 0.7 -11.5 ± 0.3 -11.4 ± 0.4 N20 w2v-LDA -14.5 ± 1.2 -9.0 ± 0.8 -10.0 ± 0.5 -10.7 ± 0.4 glove-LDA -11.6 ± 0.8 -7.4 ± 1.0 -8.3 ± 0.7 -9.7 ± 0.4 Improve. | neutral |
train_101114 | We see that λ = 1.0 gives the highest NPMI score. | given a topic t represented by its top-N topic words w 1 , w 2 , ..., w N , the NPMI score for t is: where the probabilities in equation 12are derived from a 10-word sliding window over an external corpus. | neutral |
train_101115 | AIDA adopts this segmentation. | in the TAC KBP, in addition to determining if a mention has no entity in the KB to link, all the mentions that represent the same real world entities must be clustered together. | neutral |
train_101116 | (2014) provide an easy-to-use evaluation toolkit on the AIDA data set. | in this paper, we examine 9 EL data sets and discuss the inconsistencies among them. | neutral |
train_101117 | "+BOTH": an average of two scores is used for re-ranking. | annotations in an early TAC-KBP dataset (2009) select the whole span as the mention. | neutral |
train_101118 | For example, in "... while returning from Freeport to Portland. | neither the city nor the country can actually make a proposal. | neutral |
train_101119 | In such models, the distributed representations and compositional operators can be fine-tuned by backpropagating supervision from task-specific labels, enabling accurate and fast models for a wide range of language technologies (Socher et al., 2011;Socher et al., 2013;. | prior work has not applied these ideas to the classification of implicit relations in the PDTB, and does not consider the role of entities. | neutral |
train_101120 | Dependency syntax may be a better alternative . | we had the data re-annotated by two authors of this paper who are native English speakers. | neutral |
train_101121 | Some systems adopt logic-based representations but use distributional evidence for predicate disambiguation (Lewis and Steedman, 2013) or to weight probabilistic inference rules Beltagy et al., 2013). | formal semantics captures fundamental aspects of meaning in set-theoretic terms: Entailment, for example, is captured as the inclusion relation between the sets (of the relevant type) denoted by words or other linguistic expressions, e.g., sets of possible worlds that two propositions hold of (Chierchia and McConnell-Ginet, 2000, 299). | neutral |
train_101122 | All these measures provide a score that is higher when a significant part of the candidate antecedent feaspective, we are interested in testing general methods of composition that are also good for other tasks (e.g., modeling sentence similarity), rather than developing ad-hoc composition rules specifically for entailment. | we compute the output value of h as the conjunction across all dimensions in w. Concretely, h Θ (p, q) is obtained as follows. | neutral |
train_101123 | Spoken Term Discovery Spoken term discovery is the problem of using unsupervised pattern discovery methods to find previously unknown keywords in speech. | figure 4 shows an analysis of stored lexical units for each lecture, plotting grams revealed two likely reasons for this. | neutral |
train_101124 | We formulate the noisy-channel model as a PCFG and encode the substitute, split, and delete operations as grammar rules. | to reduce the inference load on the HMM, we exploit acoustic cues in the feature space to constrain phonetic boundaries to occur at a subset of all possible locations (Lee and Glass, 2012). | neutral |
train_101125 | The flexibility of the framework, toolkit and analysis methods presented in this paper helps researchers to devise, analyze and compare representations for coreference resolution. | to create training graphs, we employ a slight modification of the closest pair heuristic (Soon et al., 2001), which worked best in preliminary experiments. | neutral |
train_101126 | Furthermore, in principle we can take the structure into account. | via latent antecedents, the model can avoid learning from the most unreliable pairs. | neutral |
train_101127 | The average is the metric for ranking the systems in the CoNLL shared tasks on coreference resolution (Pradhan et al., 2011;Pradhan et al., 2012). | they do not develop a unified framework for comparing approaches, and their analysis is not qualitative. | neutral |
train_101128 | 10 Driven by the results of Section 6.1, we turn to explore a methodology for identification of translationese in a mixed-domain setup. | excellent indomain classification results on the one hand and poor cross-domain predictive performance on the other, imply that the model describing the relation in a certain domain is inapplicable to a different (even seemingly similar) domain due to significant differences in the distribution of the underlying data. | neutral |
train_101129 | The two-phase approach outperforms the flat one in most cases: the latter attempts to cluster data instances by domain and translation status simultaneously, and is therefore potentially more error-prone. | it has been suggested that the accuracy of translation detection deteriorates when the classifier is evaluated outside the domain it was trained on. | neutral |
train_101130 | After submitting this paper, we developed a penalized expectation propagation method (Cotterell and Eisner, 2015). | this edit sequence corresponds to a character-wise alignment of u to s. Our features for modeling the contextual probability of each edit are loosely inspired by constraints from Harmonic Grammar and Optimality theory (Smolensky and Legendre, 2006). | neutral |
train_101131 | We currently model S θ (s | u) as the probability that a left-to-right stochastic contextual edit process ( Figure 2) would edit u into s. This probability is a sum over all edit sequences that produce s from u-that is, all s-to-u alignments. | with certain fancier approaches to modeling S θ , which we leave to future work, this effect could be mitigated while preserving the transducer property. | neutral |
train_101132 | Further, the other metrics show that it places most of its probability mass on that form, 12 and the rest on highly similar forms. | reduplication is not regular if unbounded, but we can adopt morphological doubling theory (Inkelas and Zoll, 2005) and model it by having U concatenate two copies of the same morph. | neutral |
train_101133 | As shown in Table 2, a country that fills the SUPPLIER role is more likely to also fill the role of a SELLER than that of a BUYER. | country names, for example, can be observed as fillers for different roles depending on the text genre and its perspective. | neutral |
train_101134 | To identify particularly reliable indicators for discourse salience, we inspected held-out development data. | we change the argument labeling procedure from predicate-specific to frame-specific roles and implement I/O methods to read and generate FrameNet XML files. | neutral |
train_101135 | Both are composed of a set of source sentences, their machine translated outputs and the corresponding post-editing time. | both cases are evidence of model overfitting. | neutral |
train_101136 | Examples include other tree kernels like the Partial Tree Kernel (Moschitti, 2006a) and string kernels like the ones defined on character ngrams (Lodhi et al., 2002) or word sequences (Cancedda et al., 2003). | interest in model selection procedures for kernelbased methods has been growing in the last years. | neutral |
train_101137 | The function ∆ can be defined recursively, where pr(n) is the grammar production at node n and preterm(n) returns true if n is a pre-terminal node. | we can employ any kind of valid kernel in this procedure as long as its gradients can be computed. | neutral |
train_101138 | For all NLP experiments the grid is fixed for all hyperparameters (including γ, the lengthscale value in the RBF kernel), with its corresponding values shown on | the GP optimization can also benefit from multiple cores by running each kernel computation inside the Gram matrix in parallel. | neutral |
train_101139 | In the remainder of this section we describe the general setup of the experiments. | ing to their syntactic behaviors. | neutral |
train_101140 | In some other settings, exact inference is NPhard. | 5equates to a summation over exponentially many projective parse trees. | neutral |
train_101141 | For certain choices of higher-order factors, polynomial time is possible via dynamic programming (McDonald et al., 2005;Carreras, 2007;Koo and Collins, 2010). | the annealed risk objective requires an annealing schedule: over the course of training, we linearly anneal from initial temperature t = 0.1 to t = 0.0001, updating t at each step of stochastic optimization. | neutral |
train_101142 | Here we describe inside-outside and the accompanying backpropagation algorithm over a hypergraph. | non-projective parsing becomes NP-hard with even second-order factors (McDonald and Pereira, 2006). | neutral |
train_101143 | In fact, the model parameters are not sparse in practice. | the mention prior alone does surprisingly well on this task, but well below the previous best results, as might be expected. | neutral |
train_101144 | Since the supervised model is trained only on Wikipedia, the mention prior component dominates, and the system incorrectly infers that George Harrison refers to the Beatle. | while both these approaches can be implemented using a distributed processing framework such as map-reduce (Dean and Ghemawat, 2008), the latter where we only infer the missing entity labels scales better than the standard EM approach. | neutral |
train_101145 | The posterior is intractable due to a combinatorial number of possible link configurations. | we assume that each document in the corpus consists of a set of mentions -text spans -that describe event actions, their participants, times, and locations. | neutral |
train_101146 | Theorem 1 For noncrossing dependency graphs, Maximum Subgraph can be solved in time O.jV j 3 /. | in this paper we therefore address maximum acyclic subgraph parsing under the restriction that the subgraph should be noncrossing, which informally means that its arcs can be drawn on the half-plane above the sentence in such a way that no two arcs cross (and without changing the order of the words). | neutral |
train_101147 | Unfortunately, however, this setting also did not allow for a gain in accuracy, likely due to to the low recall (15%) of the matching between paraphrasing grammar and semantic parsing rules. | in this section we briefly outline two other (unsuccessful) attempts to do so: creation of a pipelined paraphrasing/semantic parsing system, and addition of features from a large paraphrase database. | neutral |
train_101148 | In this work, we construct the tri-synchronous grammar by transforming the basic SCFG for semantic parsing G into a 3-SCFG. | in this paper we introduced a method for constructing a semantic parser for ambiguous input that paraphrases the ambiguous input into a more explicit form, and verifies the correctness using a language model. | neutral |
train_101149 | In order to reduce the space of possible equation trees, ALGES reorders Qsets {s 1 , . | on Monday, 375 students went on a trip to the zoo. | neutral |
train_101150 | We have now showed that its possible to automatically construct large lexicons from smaller seed lexicons. | a typical lexicon contains all possible attributes that can be displayed by a word. | neutral |
train_101151 | One key question is which of the features in our graph are important for projecting morphological attribute-values. | we will model attributes propagation/transformation as a function of the features shared on the edges between words. | neutral |
train_101152 | Second, some crossword questions mirror definitions, in that they refer to fundamental properties of concepts (a twelve-sided shape) or request a category member (a city in Egypt). | this is likely to come at the cost of a greater memory footprint, since the model requires access to its database of dictionaries at query time. | neutral |
train_101153 | The normal distributions are parametrized to encourage smooth change in multinomial parameters, over time (see Section 3.2 for details), and the extent of change is controlled through a precision parameter κ. | in this paper we present a dynamic Bayesian model of diachronic meaning change. | neutral |
train_101154 | With the exception of AMBRA, all other participating systems used external resources (such as Wikipedia and Google n-grams); it is thus fair to assume they had access to at least as much training data as our SCAN model. | a hypothetical system would then have to decide amongst the following classes: {1700-1702, 1703-1705, ..., 1961-1963, ..., 2012-2014} {1699-1706, 1707-1713, ..., 1959-1965, ..., 2008-2014} {1696-1708, 1709-1721, ..., 1956-1968, ..., 2008-2020} The first set of classes correspond to fine-grained intervals of 2-years, the second set to medium-grained intervals of 6-years and the third set to coarsegrained intervals of 12-years. | neutral |
train_101155 | We tokenized, lemmatized, and part of speech tagged DATE using the NLTK (Bird et al., 2009). | we define this prior as an intrinsic Gaussian Markov Random Field (iGMRF; Rue and Held 2005), which allows us to model the change of adjacent parameters as drawn from a normal distribution, e.g. | neutral |
train_101156 | Identifying different viewpoints is related to the well-studied area of subjectivity detection, which aims at exposing opinion, evaluation, and speculation in text (Wiebe et al., 2004) and attributing it to specific people (Awadallah et al., 2011;Abu-Jbara et al., 2012). | while some of the perspective words are neutral, mostly literal and occur in both English and Spanish (e.g. | neutral |
train_101157 | These differences are mined from multilingual, non-parallel datasets of Twitter and news data. | the goal of a topic model is to characterize observed data in terms of a much smaller set of unobserved, semantically coherent topics. | neutral |
train_101158 | To characterize these different edit types, we manually reviewed a sample of 100 rewrites and categorized the types of changes that were made. | note that we are not able to make a meaningful comparison against against any of the previously published statistical models for formality detection. | neutral |
train_101159 | Note that, while the basic LM perplexity correlates very weakly with formality overall, the Email genre actually exhibits a trend opposite of that which we expected: in Email, sentences which look less like Gigaword text (higher perplexity) tend to be more formal. | the effect of prior post formality on current post formality becomes stronger later in a thread compared to at the beginning of a thread. | neutral |
train_101160 | This is expected because our system is designed for pure monoalphabetic substitution ciphers. | we define a word's alphagram distance with respect to an ordering of the alphabet as the number of letter pairs that are in the wrong order. | neutral |
train_101161 | For each such τ 1 , Z, τ 2 , it searches for a τ 1 , Z , τ 2 where Z is only a subset of H i . | because c appears in no forbidden tier substrings in R, it is freely distributed in L = L( T, S ). | neutral |
train_101162 | For a TSL 2 grammar G = T, S , T ⊆ Σ will be referred to as the tier, S the allowed tier substrings, and R = fac T-2 (Σ * ) − S as the forbidden tier substrings. | these are words which may contain bs interspersed with as and cs provided that no a precedes another without an intervening c. For example, bbabbcbba ∈ L but bbabbbbabb ∈ L, because E t (bbabbbbabb) = aa and aa ∈ fac 2 ( aa ) but aa ∈ S. Like the class of SL k languages, the class of tSL languages (for fixed t and k) is a string extension language class. | neutral |
train_101163 | We use the posteriors to re-estimate LM parameters as follows To obtain better parameter estimates for word predictions and avoid overfitting, we use smoothing in the M-step. | though they also consider divergences between distributions of latent variable vectors, they use these divergences at learning time to bias models to induce representations maximally invariant across domains. | neutral |
train_101164 | Sennrich (2012a) proposed to cluster training data in an unsupervised fashion to build mixture models that yield good performance on multiple test domains. | we observe consistent improvements over a baseline which does not explicitly reward domain invariance. | neutral |
train_101165 | As the result suggests, using our induction framework tends to yield slightly better translation results in terms of METEOR and especially BLEU. | • The translation improvement is observed also for training with a development set of mixed domains (even for the mixed-domain minus in-domain setting when excluding the Legal data from the mixed development set). | neutral |
train_101166 | Though they also consider divergences between distributions of latent variable vectors, they use these divergences at learning time to bias models to induce representations maximally invariant across domains. | a translation rule with source and target phrases having two similar distributions over the latent subdomains is likely safer to use. | neutral |
train_101167 | In addition we assume the following entries: It can be verified that (BIND f (nsubj (dobj founded f ) Jobs)) has semantics λu. | eQ(z)(z ) is true iff z and z are equal (refer to the same entity). | neutral |
train_101168 | In contrast-partly due to the lack of a strong type system-dependency structures are easy to annotate and have become a widely used form of syntactic analysis for many languages. | an important constraint on the lambda calculus system is as follows: all natural language con-stituents have a lambda-calculus expression of type Ind × Event → Bool. | neutral |
train_101169 | It is designed specifically for UMLS and it does not disambiguate two candidates if they are classified into the same semantic category. | in the end, all the selected concept candidates are ranked according to s j i , and a list of ranked concepts is returned for each mention. | neutral |
train_101170 | The first step of constructing our training examples is to cluster concepts in the KBs by the cross reference attributes and also extract all pairs of concepts that have "has participant" relations. | if two concepts in different KBs are determined to be the same, we can assume that one is the "gold label" for the other, and extract textual and relational features between them, making this pair an approximation of the real grounding instance. | neutral |
train_101171 | A more recent focus has been on learning representations using weaker forms of supervision that require minimal amounts of manual annotation effort (Clarke et al., 2010;Liang et al., 2011;Krishnamurthy and Mitchell, 2012;Artzi and Zettlemoyer, 2013;Berant et al., 2013;Kushman et al., 2014). | these types of functional relations. | neutral |
train_101172 | One answer is that error tags can be informative and useful to provide feedback to language learners, especially for specific closed-class error types fluency evaluation. | in this section, we investigate the impact that different reference sets have on the system ranking found by different evaluation metrics. | neutral |
train_101173 | This is difficult for NER because of the absence of upper-case spelling, which is not untypical in social media, for example. | binary values denote the presence or absence of a feature (e.g., a particular token); real-valued ones typically denote frequencies of observed features. | neutral |
train_101174 | We retrained this model by using the same corpus-specific training data that we use for J-NERD . | this makes the dataset not only cleaner but also more demanding, as metonymous mentions are among the most difficult cases. | neutral |
train_101175 | For tractability, probabilistic graphical models are typically constrained by making conditional independence assumptions, thus imposing structure and locality on X ∪ Y. | typically, NER would pass only the mentions "David", "manu", and "la" to the NED stage, which then is prone to many errors like mapping the first two mentions to any prominent people with first names David and Manu, and mapping the third one to the city of Los Angeles. | neutral |
train_101176 | Template f 14 (type i , tok i ) generates a binary feature function if the background corpus contains the pattern dep i = deptype(arg1 , arg2 ) where the current token tok i is either arg1 or arg2 , and tok i is labeled with NER label type i . | f 8 (David Beckham, "David") may be lower than f 8 (David Bowie, "David"), for example, as this still active pop star is more frequently and prominently mentioned than the retired football player. | neutral |
train_101177 | Previous work on this task (Banko et al., 2007), as well as on its generalization, called unsupervised semantic parsing (Poon and Domingos, 2009;Titov and Klementiev, 2011), groups patterns between entity pairs (e.g., wrote a review, wrote a critique and reviewed) and uses these clusters as relations. | this is not surprising as the argument independence assumption is very strong, and the general motivation we provided in Section 2 does not really apply to the selectional preference model. | neutral |
train_101178 | In order to examine the influence of the decoder on the model performance, we performed additional experiments in a more controlled setting. | in the following section, we formally describe the problem. | neutral |
train_101179 | Supervised methods for RE have been successful when small restricted sets of relations are considered. | the most-frequent trigger for three induced relations are presented in table 2 Cluster 66 instead groups together expressions such as leads or president (of), so it can vaguely be described as a LEADERSHIP relation, but it also contains the relation triggered by the word professor (in). | neutral |
train_101180 | Generative models with rich features have also been considered in the past (Berg-Kirkpatrick et al., 2010). | though weakly-supervised approaches, such as distantly supervised methods and bootstrapping (Mintz et al., 2009;Agichtein and Gravano, 2000), reduce the amount of necessary supervision, they still require examples for every relation considered. | neutral |
train_101181 | We present a method for unsupervised opendomain relation discovery. | both classes of approaches assume a predefined inventory of relations and a manually constructed resource. | neutral |
train_101182 | In addition, we use sentence lengths, WordCnt (count of the number of nonstopwords in the question that also occur in the answer) and WgtWordCnt (reweight the counts by the IDF values of the question words). | (2013) turn their attention to improving the shallow semantic component, lexical semantics, by performing semantic matching based on a latent word-alignment structure (cf. | neutral |
train_101183 | Additionally, a variety of work exists in the general field of stemmer evaluation, though much of it centers on the information retrieval community. | the relative effects of these treatments on coherence are magnified as the number of topics increases; while no ArXiv treatment differs significantly in coherence at 10 topics, at 200, the four strongest treatments (Lovins, Paice-Husk, five-truncation and four-truncation) are significantly worse. | neutral |
train_101184 | This difference indicates our predictions have worse recall for longer dependencies such as subordinate clauses, while being more accurate in local, phrasal contexts. | annotation projection has been explored in the context of cross-lingual dependency parsing since Hwa et al. | neutral |
train_101185 | These texts have the advantage of being translated both conservatively and into hundreds of languages (massively multi-parallel). | evaluation All our datasets-projected, training, and test sets-contain only the following CoNLL-X features: ID, FORM, CPOSTAG, and HeAD. | neutral |
train_101186 | The average gold edge length is 3.6-which is significantly higher at p < 0.05 (Student's t-test). | for this baseline, we parse a target sentence using multiple single-source delexicalized parsers. | neutral |
train_101187 | (2015) suggest a kernel-based approach to implicitly consider all possible feature combinations over sets of core-features. | regardless of the details of the parsing framework being used, a crucial step in parser design is choosing the right feature function for the underlying statistical model. | neutral |
train_101188 | In the graphbased parser, we jointly train a structured-prediction model on top of a BiLSTM, propagating errors from the structured objective all the way back to the BiLSTM feature-encoder. | when training in this way the parser sees only configurations that result from following correct actions, and as a result tends to suffer from error propagation at test time. | neutral |
train_101189 | More fundamentally, the SNM model parameterization and method of estimation are completely original, as far as we know. | we are not the first to highlight this effectiveness; previous such results were reported in Singh and Klakow (2013). | neutral |
train_101190 | First we will investigate more properties related to word order. | from these data, we confirm that the DLM ratio and Entropy measures capture different word order properties as they are not correlated (Spearman correlation r = 0.32, p > 0.1). | neutral |
train_101191 | 1 (a) where the bias parameter is not included for simplicity. | the depth of 16 is not very deep compared to the models in computer vision (He et al., 2016). | neutral |
train_101192 | Our result justifies via a generative model why this should be satisfied even for low dimensional word vectors. | full details appear in the ArXiv version of this paper (Arora et al., 2015). | neutral |
train_101193 | be- 5 Note that this interpretation has been disputed; e.g., it is argued in Levy and Goldberg (2014a) that (4.1) can be understood using only the classical connection between inner product and word similarity, using which the objective (4.1) is slightly improved to a different objective called 3COSMUL. | it also helps explain why lowdimensional semantic embeddings contain linear algebraic structure that allows solution of word analogies, as shown by Mikolov et al. | neutral |
train_101194 | The model discussed thus far conditions on the POS tags of words in the input sentence. | the embedding of a non-English word which is not aligned to any English words is defined as the average embedding of words with a unit edit distance in the same language (e.g., 'playz' is the average of 'plays' and 'play'). | neutral |
train_101195 | In order to expose the parser to more errors, we employ a cost augmentation scheme: we sometimes follow incorrect actions also if they score below correct actions. | this approach gives a good vector representation for unknown words but at the expense of ignoring many of the words from the training corpus. | neutral |
train_101196 | The parser sees only states that result from following correct actions. | the functions are usually restricted to having a fixed maximum arity (usually two) (Socher et al., 2010;Tai et al., 2015;Socher, 2014). | neutral |
train_101197 | In stage 2, the texter introduces the main issue, while the counselor asks for clarifications and expresses empathy for the situation. | in addition to the person directly experiencing a mental illness, family, friends, and communities are also affected (insel, 2008). | neutral |
train_101198 | In all cases we find a statistically significant (p < 0.01; Mann-Whitney U-test) increase in the likelihood of the texter using a LIWC marker if the counselor used it in the previous message (~4-5% change). | we evaluate our model with 10-fold cross-validation and compare models using the area under the ROC curve (AUC). | neutral |
train_101199 | To gain insights into the "specialization hypothesis" we make use the counselor annotation of the main issue (depression, self-harm, etc.). | our model performs well with only a small set of linguistic features, demonstrating they provide a substantial amount of the predictive power. | neutral |
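The rows above follow a fixed pipe-delimited layout: an `id`, two sentences, and a `label`, with a trailing pipe closing each row. A minimal sketch of parsing one such row is shown below; the `parse_row` helper and the truncated `sample` string are illustrative (the sample's sentences are abbreviated with `...`), and the logic assumes exactly four `|`-separated fields per line with no pipes inside the sentences, which holds for the rows shown here.

```python
# Sketch: parse one pipe-delimited row of the table above into a dict.
# `sample` is a truncated copy of the first data row; `parse_row` is a
# hypothetical helper, not part of any particular library.
sample = (
    "train_101100 | While the model trained by HS is efficient to evaluate "
    "perplexities, ... | as discussed in Section 3.3, ... | neutral |"
)

def parse_row(line: str) -> dict:
    # Split on "|", strip surrounding whitespace, and drop the empty
    # trailing field produced by the final pipe.
    fields = [f.strip() for f in line.split("|")]
    if fields and fields[-1] == "":
        fields = fields[:-1]
    row_id, sentence1, sentence2, label = fields
    return {
        "id": row_id,
        "sentence1": sentence1,
        "sentence2": sentence2,
        "label": label,
    }

row = parse_row(sample)
print(row["id"], row["label"])
```

Note that all rows shown in this preview carry the `neutral` label, even though the column metadata reports four label classes; the other classes presumably appear elsewhere in the dataset.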