id (stringlengths 7-12) | sentence1 (stringlengths 6-1.27k) | sentence2 (stringlengths 6-926) | label (stringclasses 4 values) |
---|---|---|---|
train_93300 | The parameters of our models are learned using AdaGrad (Duchi et al., 2011) with ℓ2 regularization via regularized dual averaging (Xiao, 2009), and we used random search on the development set to select hyperparameters. | an alternative and related approach to learning template orderings is based on the Group Orthogonal Matching Pursuit (GOMP) algorithm for generalized linear models (Swirszcz et al., 2009; Lozano et al., 2011), with a few modifications for the setting of high-dimensional, sparse NLP data (described in appendix B). | neutral |
train_93301 | This work was supported by the National Basic Research Program of China (No. | the convolution layer aims to capture the compositional semantics of an entire sentence and compress these valuable semantics into feature maps. | neutral |
train_93302 | The semantic interactions between the predicted trigger words and argument candidates are crucial for argument classification. | cNNs typically use a max-pooling layer, which applies a max operation over the representation of an entire sentence to capture the most useful information. | neutral |
train_93303 | In S1, beats is a trigger of type Elect. | in S1, beats is a trigger of type Elect. | neutral |
train_93304 | These methods yield relatively high performance. | we call these knowledge lexical-level clues. | neutral |
train_93305 | It proves that richer feature sets lead to better performance when using traditional human-designed features. | we proposed the PF, which is defined as the relative distance of the current word to the predicted trigger or candidate argument. | neutral |
train_93306 | In our algorithm, we only use the filler provenance for a given slot fill. | when combining the output of different ESF systems, it is possible that some slot-filler entities might overlap with each other. | neutral |
train_93307 | The RPI BLENDER KBP system (Yu et al., 2014) casts SFV in this framework, using a graph propagation method that modeled the credibility of systems, sources, and response values. | section 3 provides general background on the KBP-ESF task. | neutral |
train_93308 | Using the union of these keys as the gold standard, precision, recall, and F1 scores are computed. | stacking (Sigletos et al., 2005;Wolpert, 1992) has not previously been employed for ensembling KBP-ESF systems. | neutral |
train_93309 | Document classification is expected to reduce false positives in irrelevant documents while not dramatically reducing recall. | this loss is not significant and the overall F-score finally increases by 5%. | neutral |
train_93310 | Simultaneous translation (Section 2) avoids this problem by starting to translate before observing the whole sentence, as shown in Figure 1 (a). | sentence segmentation methods have the obvious advantage of allowing for translation as soon as a segment is decided. | neutral |
train_93311 | However, as translation starts before the whole sentence is observed, translation units are often not syntactically or semantically complete, and the performance may suffer accordingly. | in addition, we hope to expand the methods proposed here to a more incremental setting, where both parsing and decoding are performed incrementally, and the information from these processes can be reflected in the decision of segmentation boundaries. | neutral |
train_93312 | However, this trend is not entirely unexpected because it is not possible to completely accurately guess syntactic constituents from every substring w. For example, parts of the sentence "in the next 18 minutes" can generate the sequence "in the next CD NN" and "IN DT JJ 18 minutes," but the constituents CD in the former case and DT and JJ in the latter case are not necessary in all situations. | parsing is performed by finding the parse tree T that maximizes the PCFG probability given a sequence of words w ≡ [w_1, w_2, …, w_n] as shown by Eq. | neutral |
train_93313 | In incremental decoding, each incoming word is fed into the decoder one-by-one, and the decoder updates the search graph with the new words and decides whether it should begin translation. | short segments pose problems for syntaxbased translation methods, as it is difficult to generate accurate parse trees for sub-sentential segments. | neutral |
train_93314 | One of the most identifying features of speech translation is the fact that it must be performed in real time while the speaker is speaking, and thus it is necessary to split a constant stream of words into translatable segments before starting the translation process. | this trend is not entirely unexpected because it is not possible to completely accurately guess syntactic constituents from every substring w. For example, parts of the sentence "in the next 18 minutes" can generate the sequence "in the next CD NN" and "IN DT JJ 18 minutes," but the constituents CD in the former case and DT and JJ in the latter case are not necessary in all situations. | neutral |
train_93315 | This is perhaps because the RTE community has almost entirely focused on single-sentence text-hypothesis pairs for a long time. | we will explore a few definitions of sub-tasks in our experiments. | neutral |
train_93316 | We detect negation (either in the hypothesis or a sentence in the text snippet aligned to it) using a small set of manually defined rules that test for presence of words such as "not", "n't", etc. | we can use the same setup as before for multi-task learning after appropriately changing the feature map. | neutral |
train_93317 | Nevertheless, since most of the existing works learned word representations mainly based on the word co-occurrence information, the obtained word embeddings cannot capture the relationship between two syntactically or semantically similar words if either of them yields very little context information. | can be optimized using back propagation neural networks. | neutral |
train_93318 | Through the two steps, the proposed approach can map a question into a length-invariant compact vector, which can be used efficiently and effectively for the large-scale question retrieval task in cQA. | the training time scales linearly with the number of noise samples and becomes independent of the vocabulary size. | neutral |
train_93319 | These methods retrieve a set of candidate answers from the knowledge base, and then extract features for the question and these candidates to rank them. | as shown in Figure 1, the 2-hop path between the entity avatar and the correct answer is (film.film.release_date_s, film.film_regional_release_date.release_date). | neutral |
train_93320 | Both chimera and the intruder methods are flexible, and we plan to explore them further in future research. | we focus on translating from English to Italian and adopt the setup (word vectors, training and test data) of Dinu et al. | neutral |
train_93321 | Although such studies comparing similarity judgements have their merits, it would be interesting to have studies that evaluate methods for composition on a larger scale, using a larger test set of different specific compositions. | in fact, a non-reduced space contains more information. | neutral |
train_93322 | This might be beneficial for methods that are able to take advantage of the full semantic space (viz. | similar (adjective noun, noun) pairs. | neutral |
train_93323 | The interaction of the parameters and the nonlinearity also makes the objective nonconvex. | our combined sparse and neural model trains on the Penn Treebank in 24 hours on a single machine with a parallelized CPU implementation. | neutral |
train_93324 | We then discuss specific choices of our featurization (Section 2.3) and the backbone grammar used for structured inference (Section 2.4). | table 1 shows that these features provide no benefit to the baseline model, which suggests either that it is difficult to learn reliable weights for these as sparse features or that different regularities are being captured by the word embeddings. | neutral |
train_93325 | With regard to parsing speed, 1-order-atomic is the fastest while the other two models have speeds similar to MSTParser. | for model training, we use the Max-Margin criterion. | neutral |
train_93326 | Given a sentence x, graph-based models formulate the parsing process as a search problem: y*(x) = argmax_{ŷ(x) ∈ Y(x)} Score(x, ŷ(x); θ), where y*(x) is the tree with the highest score, Y(x) is the set of all trees compatible with x, θ are model parameters and Score(x, ŷ(x); θ) represents how likely it is that a particular tree ŷ(x) is the correct analysis for x. | table 3 lists the UAS of the three models on the development set. | neutral |
train_93327 | The effectiveness of our neural network depends on five key components: Feature Embeddings, Phrase Embeddings, Direction-specific transformation, Learning Feature Combinations and Max-Margin Training. | the effectiveness of this function relies heavily on the design of the feature vector f(x, c). | neutral |
train_93328 | Each point represents maximization over a small hyperparameter grid with early stopping based on WSJ tune set UAS score. | this increases POS […] [Table 1: Final WSJ test set results. Bohnet and Kuhn (2012) 93.27 91.19 40; Chen and Manning (2014) 91.80 89.60 1; S-LSTM (Dyer et al., 2015)] | neutral |
train_93329 | Such an extension of the work would make it an alternative to architectures that have an explicit external memory such as neural Turing machines (Graves et al., 2014) and memory networks (Weston et al., 2015). | structured objects such as sequences of discrete symbols are written with lowercase, bold, italic letters (e.g., w refers to a sequence of input words). | neutral |
train_93330 | We show that our approach performs well on one such downstream application: the KBP Slot Filling task. | the most salient features are the label of the edge being taken, the incoming edge to the parent of the edge being taken, neighboring edges for both the parent and child of the edge, and the part of speech tag of the endpoints of the edge. | neutral |
train_93331 | For example, removing the amod edge in "cute ←amod− rabbit" yields the more general lexical item rabbit. | for both of the other actions, it is often the case that we would like to capture a controller in the higher clause. | neutral |
train_93332 | However, if "with" is used in the sense of "accompanied by", then the PP is a likely verb attachment, as in the quad visited, Paris, with, Sue. | similarly, suppose we know that net-caught-butterfly, svo(n2, v, n1). | neutral |
train_93333 | In addition to prior work on prepositional phrase attachment, a highly related problem is preposition sense disambiguation (Hovy et al., 2011;Srikumar and Roth, 2013). | our approach draws upon diverse sources of background knowledge, leading to performance improvements. | neutral |
train_93334 | Notice that when we used a filtered version of the data, in feature F2, the data was no longer detrimental to performance. | even a syntactically correctly attached PP can still be semantically ambiguous with respect to questions of machine reading such as where, when, and why. | neutral |
train_93335 | In our experiments we used both kinds of models, but found the discriminative model performed better. | for the WKP & NYTC corpora, each quad has a preceding noun, n0, as context, resulting in PP 5-tuples of the form: {n0, v, n1, p, n2}. | neutral |
train_93336 | We also used different types of noun categorizations: WordNet classes, semantic types from the NELL knowledge base (Mitchell et al., 2015) and unsupervised types. | we present details only for our discriminative model. | neutral |
train_93337 | Finally, we use lexical features in the form of PP quads, features F8-15. | pps are a major source of syntactic ambiguity. | neutral |
train_93338 | Both support comparisons, and ideally we can detect some level of similarity. | while SNK may have an inherent advantage over CSR or Bow due to its entity orientation, to investigate the effectiveness of the method itself, we now compare them on the previous task of comparative sentence identification. | neutral |
train_93339 | The most accurate parsers (ClearNLP, Mate, RBG, Redshift, Turbo, and Yara) separate from the remaining parsers when sentence length is more than 20 tokens. | we analyzed parser accuracy by sentence length in bins of length 10 (Figure 4). | neutral |
train_93340 | An approach was proposed in (Van der Plas et al., 2014) in which information is aggregated at the corpus level, resulting in a significantly better SRL corpus for French. | furthermore, we have applied our approach to generate PropBanks for 7 languages and conducted experiments that indicate a high F1 measure for all languages (Section 4). | neutral |
train_93341 | We propose filtered projection focused specifically on improving the precision of projected labels. | sRL models produced in this set of experiments were evaluated using French gold, sampled and evaluated in the same way as other experiments in this section for comparability. | neutral |
train_93342 | We observe that the BITEXT approach outperformed the SUPERVISED and the DELEXICALIZED ones in all metrics by a considerable margin, which shows the effectiveness of our proposed method. | obtain the best target word and its polarity, (t̂, p̂) := argmax_{t,p} f_{o→t}(x, TARGET:p); 3. | neutral |
train_93343 | Query-based multilingual opinion mining was addressed in several NTCIR shared tasks (Seki et al., 2007; Seki et al., 2010). | the application of this simple approach to the gold dependency graphs in the training partition of the MPQA leads to oracle F1 scores of 86.0%, 95.8% and 93.0% in the reconstruction of opinion, agent and target spans, respectively, according to the proportional scores described in §5.2. | neutral |
train_93344 | The task-specific features are designed to train sentiment polarity classifiers. | section 3 explains the proposed model. | neutral |
train_93345 | In the literature of CSLA, the language with abundant reliable resources is called the source language (e.g., English), while the low-resource language is referred to as the target language (e.g., Chinese). | p: precision, R: Recall, F1: micro-F measure, Ac: Accuracy, and "-" represents unknown. | neutral |
train_93346 | The extensive research and development efforts produce a variety of reliable sentiment resources for English, one of the most popular language in the world. | the parallel data is also a scarce resource. | neutral |
train_93347 | This paper proposes an approach to learning B-SWE by incorporating sentiment information into the bilingual embeddings for CLSC. | we take into account 14 frequently-used negation words in English such as not and none; 5 negation words in Chinese such as 不 (no/not) and 没有 (without). | neutral |
train_93348 | Directly employing the translated resources for sentiment classification in the target language is simple and could get acceptable results. | the process of supervised phase is shown in Figure 2. | neutral |
train_93349 | The model parameters are learned by maximizing the objective function according to the sentiment polarity label s_i of document d_i: […]. Through the supervised learning phase, [W_E, W_C] is optimized by maximizing the sentiment polarity probability. | bilingual embeddings without sentiment information are not effective enough for the sentiment classification task. | neutral |
train_93350 | This can be formalized with the following equation for the hub score of a node: h(v) = Σ_{u ∈ successors(v)} a(u), where h(v) is the hub score for node v, successors(v) is the set of all nodes that v has an edge to, and a(u) is the authority score for node u. | we plan to explore this in future work. | neutral |
train_93351 | We also want to experiment with removing the dependency on the Treex surface realizer by generating directly into dependency trees or structures into which de-[…]. This work was funded by the Ministry of Education, Youth and Sports of the Czech Republic under the grant agreement LK11221 and core research funding, SVV project 260 104, and GAUK grant 2058214 of Charles University in Prague. | we introduced a novel modification of the perceptron updates to improve scoring of incomplete sentence plans: In addition to updating the weights using the top-scoring candidate t_top and the gold-standard tree t_gold (see above), we also use their differing subtrees t^i_top, t^i_gold for additional updates. | neutral |
train_93352 | The resulting model will have parameters for feature types observed in the source domain as well as the target domain. | let y_1, …, y_n be the original representations of the associated word set contained in the labels. | neutral |
train_93353 | (2011) increases the F1 score slightly from 75.13 to 75.69 in C2F, but it does not help as much in cases that require bijective mapping: Daume, Union and Pretrain. | v_k ∈ R^d can be used to project the variables from the original d- and d′-dimensional spaces to a k-dimensional space: The new k-dimensional representation of each variable now contains information about the other variable. | neutral |
train_93354 | Nevertheless, combining two relations (column (f)) outperforms both results for ASR and manual transcripts, showing that different types of relations can compensate each other and then benefit the SLU performance. | some important contexts may be missing due to smaller windows, while larger windows capture broad topical content. | neutral |
train_93355 | After replacing original bag-of-words contexts with dependency-based contexts, we can train dependency-based embeddings for all target words (Yih et al., 2014; Bordes et al., 2011; Bordes et al., 2013). | f_w carries the basic word vectors for the utterances, which is illustrated as the left part of the matrix in figure 1(b). | neutral |
train_93356 | We annotate about 25k spoken sentences with only disfluency annotations according to the guideline proposed by Meteer et al. | in comparing parsing accuracy, our BCT model outperforms all the other models, showing that this model is more robust on disfluent parsing. | neutral |
train_93357 | The rest are used for training. | our novel right-to-left transition-based joint method caters to the disfluency constraint which can not only overcome the decoding deficiency in previous work but also achieve significantly higher performance on disfluency detection as well as dependency parsing. | neutral |
train_93358 | Instead of defining a more complicated structure and learning everything jointly, we employ a two-stage approach as the solution for modeling entity-entity relationships after we found that S-MART achieves high precision and reasonable recall. | first, we consider two linear structured learning algorithms: Structured Perceptron (Collins, 2002) and Linear Structured SVM (SSVM) (Tsochantaridis et al., 2004). | neutral |
train_93359 | However, most existing approaches take one of two extreme ways: either extract relations based on a pre-defined ontology, such as DBpedia (Lehmann et al., 2014), or cluster relations without referring to any ontology, such as OpenIE (Etzioni et al., 2011). | the set of factoids F plus the set of decision variables y are the random variables in our factor graph model. | neutral |
train_93360 | Semantic search also targets returning answers directly (Pound et al., 2010; Blanco et al., 2011; Tonon et al., 2012; Kahng and Lee, 2012). | a similar factor graph model has been proposed to solve coreference resolution in (Singh et al., 2011; Wick et al., 2012). | neutral |
train_93361 | On one hand, an entity is modeled to have multiple internal representations, each regarding one or more closely related facts. | • Factoid Retrieval Model (FRM). | neutral |
train_93362 | However, with the further increase of the threshold, we introduce more noise, which decreases the performance. | this shows that introducing better quality of background knowledge is helpful to the better classification of documents. | neutral |
train_93363 | They first proposed a search enginebased method to evaluate the relatedness between every pair of triples, and then an iterative propagation algorithm was introduced to select the most relevant triples to a given source document (see Section 2), which achieved a good performance. | in the original representation, there are no edges between two bk-nodes because they treat the bk-nodes as recipients of relevance weight only. | neutral |
train_93364 | When evaluating the top 10 triples with the highest relevance weight, our framework outperforms the best baseline LDA by 4.4% in MAP and by 3.91% in P@N. When evaluating the top 5 triples, our framework performs even better and significantly outperforms the best baseline by 5.87% in MAP and by 17.21% in P@N. To analyze the results further, Ours-S, the simplified version of our model without iterative propagation, outperforms two strong baselines VSM and WE, which indicates the effectiveness of encoding distributional semantics. | the most closely related work in this area is our own (Zhang et al., 2014), which used the triples of SPO as background knowledge. | neutral |
train_93365 | We first focus on the impact of decreasing the relevance weight of bk-nodes and increasing that of sd-nodes after every iteration. | one can expect that this representation is helpful for better document enrichment by incorporating both accuracy and coverage. | neutral |
train_93366 | The attribute BASEMENT is next in line. | what really matters are not the raw cost amounts, which may be very small, but rather the relative cost of looking up an attribute compared to that of receiving a follow-up. | neutral |
train_93367 | We believe that an essay that contains too many of these component-less paragraphs is likely to have taken too much space discussing issues that are not relevant to the main argument of the essay. | to gain insight into how much impact each of the feature types has on our system, we perform feature ablation experiments in which we remove the feature types from our system one-by-one. | neutral |
train_93368 | 's method to tag each sentence of our essays with an argument label, but modify their method to accommodate differences between their corpus and ours. | if more than one of these rules applies to a sentence, we tag it with the label from the earliest rule that applies. | neutral |
train_93369 | Our approach consists of two core components: a time-aware hierarchical Bayesian model for event detection, and a learning-to-rank model to select the salient events to construct the final chronicle. | a sports chronicle should provide information about the results of semi-final and final, and the champion of the tournament instead of the first-round match's result, which accounts for the poor performance. | neutral |
train_93370 | Through formulating this task as a binary classification problem, we adopt Support Vector Machines (SVMs) (Cortes and Vapnik, 1995) as the learning model. | morphs can be very abstract (e.g., "函数 (Function)" refers to "杨幂 (Yang Mi)" because her first name "幂 (Mi)" means the Power Function) or very concrete (e.g., "薄督 (Governor Bo)" refers to "薄熙来 (Bo Xilai)"). | neutral |
train_93371 | Table 1 contains a list of all datasets. | we formulate the disambiguation as a continuous, multi-objective optimization problem. | neutral |
train_93372 | Unfortunately, continuous expansion will soon render a paper unreadable (e.g., one of many extensions to Polymerase Chain Reaction is Standard Curve Quantitative Competitive Reverse Transcription Polymerase Chain Reaction). | more importantly, the majority of phrases that are never abbreviated are simply not Computer Science keyphrases (we return to this in Section 4.6). | neutral |
train_93373 | The algorithm then extracts a "window" of text preceding the parenthesis, up to n words long (where n is the character length of the abbreviation plus padding). | for example, if the text contains the phrase, ". | neutral |
train_93374 | It is worthwhile to mention that the priors are not a compulsory component. | for the JST and the Tying-JST methods only, we use the filtered subjectivity lexicon (subjective MR) as prior information, containing 374 positive and 675 negative entries, which is the same experimental setting as in Lin & He (2009). | neutral |
train_93375 | Suppose that the training set is […], where n_s is the number of training objects. | the procedure for obtaining priors is generic and can easily be applied to any given dataset. | neutral |
train_93376 | This is an advantage over softmax classifiers. | the nine relations are Cause-Effect, Component-Whole, Content-Container, Entity-Destination, Entity-Origin, Instrument-Agency, Member-Collection, Message-Topic and Product-Producer. | neutral |
train_93377 | If a trigram appears as the largest contributor for more than one sentence, its contribution value becomes the sum of its contributions for each sentence. | when using only the text span between the target nouns, the impact of WPE is much smaller. | neutral |
train_93378 | It is unclear at first glance how to encode word embeddings into the tree kernels effectively so that word embeddings could help to improve the generalization performance of RE. | we report the performance of these augmented systems in Table 2 for the two scenarios: (i) in-domain: both training and testing are performed on the source domain via 5-fold cross validation and (ii) out-of-domain: models are trained on the source domain but evaluated on the three target domains. | neutral |
train_93379 | The closest phrase to X1 in the source domain is X3: the phrase between "Iraqi soldiers" and "herself" in the sentence "The Washington Post is reporting she shot several Iraqi soldiers before she was captured and she was shot herself, too." | in general, suppose we are able to acquire an additional real-valued vector V_i from word embeddings to semantically represent a relation mention R_i (along with the PET tree T_i), leading to the new representation […]. The new kernel function in this case is then defined by: […], where […] is some standard vector kernel like the polynomial kernels. | neutral |
train_93380 | (Section 3.1) 2. | note that it does not mean that the selected reference point should have exactly the same surface form across time. | neutral |
train_93381 | Eq. 1 is used for solving the regularized least squares problem (γ = 0.02). | we call this kind of similarity matching local correspondence, in contrast to the global correspondence described in Sec. | neutral |
train_93382 | To achieve this, we propose using backward random walks. | knowledge bases such as Freebase (Bollacker et al., 2008), YAGO (Suchanek et al., 2007), or NELL (Carlson et al., 2010a), may contain thousands of predicates and millions of concepts. | neutral |
train_93383 | Such path types may be indicative of an extended relational meaning between graph nodes that are linked over these paths; for example, the path AthletePlaysForTeam, TeamPlaysInLeague implies the relationship "the league a certain player plays for". | pRA is highly scalable compared with other statistical relational learning approaches, and can therefore be applied to perform inference in large knowledge bases (KBs). | neutral |
train_93384 | We represent part-of-speech tags as another set of graph nodes, where word mentions are connected to the relevant tag over a POS edge type. | the relational paths that connect nodes c and t are evaluated as possible random walk features. | neutral |
train_93385 | Here we want to mention a caveat: there are definitely other common sense Axioms that we are not able to address in the current implementation. | since the visual recognition is not the core of this work, we omit the details here and refer the interested reader to (Aksoy et al., 2014;Aksoy and Wörgötter, 2015). | neutral |
train_93386 | Triplets are represented as (h_i, r_i, t_i), where h_i denotes a head entity, t_i denotes a tail entity and r_i denotes a relation. | (Wang et al., 2014) proposes an improved model named translation on a hyperplane. | neutral |
train_93387 | Previous work such as TransE, TransH and TransR/CTransR regards a relation as a translation from the head entity to the tail entity, and CTransR achieves state-of-the-art performance. | recently, a powerful approach for this task is to encode every element (entities and relations) of a knowledge graph into a low-dimensional embedding vector space. | neutral |
train_93388 | (2014) to compare each team's performance on different error types in the CoNLL-2014 shared task. | before commencing annotation, however, each annotator was given detailed instructions on how to use the tool, along with an explanation of each of the error categories. | neutral |
train_93389 | On the one hand, the increasing number of ontologies offers an excellent opportunity to link this knowledge together (Gómez-Pérez et al., 2013). | the Fill-Up model has been developed to address a common scenario where a large generic background model exists, and only a small quantity of domain-specific data can be used to build a translation model. | neutral |
train_93390 | Moreover, supervised approaches can typically disambiguate only those words for which they have seen sufficient training examples to cover all senses. | we therefore find the mapping between elements of the two pairs that gives the lowest total distance, and halve it: with this method we observe a Krippendorff's α of 0.777; this is only slightly below the 0.8 threshold recommended by Krippendorff, and far higher than what has been reported in other sense annotation studies (Jurgens and Klapaftis, 2013). | neutral |
train_93391 | Differences in data pre-processing (tokenization/lemmatization), selection (train/test splits), feature representation (unigram/bigram), pivot selection (MI/frequency), and the binary classification algorithms used to train the final classifier make it difficult to directly compare results published in prior work. | vertical bars represent the classification accuracies (i.e. | neutral |
train_93392 | (2007), but not part of the train/test domains. | criterion (b) captures the prior knowledge that high-frequency words common to two domains often represent domain-independent semantics. | neutral |
train_93393 | For Word2Vec and PMI-SVD, we use the pre-trained models obtained by Baroni et al. | instead of simply backing off to the most frequent sense, we propose a more meaningful exploitation of this information. | neutral |
train_93394 | This enables the use of natural language processing techniques that require the reliable identification of words. | we learn a GAM using the residuals of the baseline model as a response variable and fitting semantic surprisal based on the in-domain model; see Table 2. | neutral |
train_93395 | In order to restore its status as a probability, Mitchell includes another normalization step: The model hence simply uses the trigram model probability for function words, making the assumption that the distributional representation of such words does not include useful information. | this means that the simpler trigram surprisal model does not contribute anything over the semantic model, and that the semantic model fits the word duration data better. | neutral |
train_93396 | We make use of a re-implementation of the semantic surprisal model presented in Mitchell et al. | we decided to divide the data up into datapoints with S_Semantics above 1.5 and below 1.5. | neutral |
train_93397 | Both SBTDM methods have runtimes that increase at a rate substantially below that of the square root of the number of topics (plotted as a blue line in the figure for reference). | all parallel methods use 15 threads. | neutral |
train_93398 | The first, b(a, n), was defined in Section 3. | then, the distribution over a subsequent draw of Z given SBT prior S and observations n is defined as: […], where […] is a normalizing constant that ensures the distribution sums to one for any fixed number of observations Σ_i n_i, and B(S, z, n) and Q(z) are defined as below. | neutral |
train_93399 | Inferring a large topic distribution for each word and document given such sparse data is challenging. | each interior node a of the SBT has a discount δ_a associated with it. | neutral |
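
A minimal sketch of loading a split with this schema via the Hugging Face `datasets` library. The repository id `user/nli-acl-pairs` is a placeholder, since this page does not name the dataset; substitute the real id.

```python
# Minimal sketch: load a dataset with the columns shown above
# (id, sentence1, sentence2, label). The repo id is a placeholder.
from datasets import load_dataset

ds = load_dataset("user/nli-acl-pairs", split="train")

# Each row carries the four columns from the table above.
row = ds[0]
print(row["id"], row["label"])   # e.g. "train_93300 neutral"
print(row["sentence1"][:80])     # premise snippet
print(row["sentence2"][:80])     # hypothesis snippet

# `label` is a string column with 4 distinct classes in this dataset.
print(sorted(set(ds["label"])))
```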