Column metadata: id — string (length 7–12); sentence1 — string (length 6–1.27k); sentence2 — string (length 6–926); label — string (4 classes).

id | sentence1 | sentence2 | label
---|---|---|---
train_97300 | Neither the context-sensitive respeller nor dictionary lookup seem to contribute much to eSpeak's performance. | the model learns which n-grams are characteristic of ambiguous or unambiguous syllables. | neutral |
train_97301 | The respellings designed by the author were much more effective for that purpose than either the IPA phonetic transcription or phonemic respelling (Section 4.3). | each of the examples 3-5 indicate the evaluators' acceptance of a particular respelling device: silent letters, multi-syllable units, and dictionary words. | neutral |
train_97302 | We provide results showing our model is an order of magnitude faster to train than Model 4, that it requires no staged initialization, and that it produces alignments that lead to significantly better translation quality on downstream translation tasks ( §4). | last, generate the m output words, where each e i depends only on f a i . | neutral |
train_97303 | , m], the alignment to the source and its translation is independent of all other translation and alignment decisions. | 6 Second, using a 10k sample of the French-English data set (only 0.5% of the corpus), we determined 1) whether p 0 should be optimized; 2) what the optimal Dirichlet parameters µ i are; and 3) whether the commonly used "staged initialization" procedure (in which Model 1 parameters are used to initialize Model 2, etc.) | neutral |
train_97304 | For Arabic-to-English, the bilingual data consists of roughly 100K sentences of in-domain TED talks data and 8M sentences of "other"-domain United Nations (UN) data. | in addition to the phrase based decoder, Jane 2.0 implements the forced alignment procedure used in this work for the purpose of adaptation. | neutral |
train_97305 | In that case, the chance of finding a correct translation is reduced (Pekar et al., 2006). | in general if a query term has a low-frequency in the corpus, then its context vector is sparse. | neutral |
train_97306 | POS tagging is the problem of assigning syntactic categories or POS to tokenized word forms in running text. | the EWt contains development and evaluation data for five domains: answers (from Yahoo! | neutral |
train_97307 | The structured perceptron is similar to the averaged perceptron (Freund and Schapire, 1999), except data points are sequences of vectors rather than just vectors. | this section provides some intuition for using inverse Zipfian distributions as weight functions. | neutral |
train_97308 | This was also the training data used in the experiments in the Parsing the Web (PTW) shared task at NAACL 2012. | performance of state-of-the-art supervised systems is known to drop considerably on out-ofdomain data. | neutral |
train_97309 | 1-ORCL Select the best performing attribute on the test set. | these algorithms use examples from each domain to learn a general model that is also sensitive to individual domain differences. | neutral |
train_97310 | Measures of semantic and pragmatic atypicality in spontaneous language are rarely directly measured. | in addition, we report the results of applying these word ranking techniques in combination with the two filtering techniques. | neutral |
train_97311 | We then apply word ranking methods and distributional semantic modeling to these narrative retellings in order to automatically identify these unexpected words. | words with high tf-idf and log odds scores are likely to be those unrelated to the topic of the NNM story. | neutral |
train_97312 | The participants in that study were chosen to reflect the demographics of adults in the United States; thus, speakers of varying reading levels and nonnative speakers were included. | we investigate finegrained measures that, if useful in identifying points of difficulty for readers, can lead to new approaches for assessing text difficulty. | neutral |
train_97313 | They observe half of the WER reduction that the fully supervised methods achieve. | a closer look reveals that the syllable-based CM paired with NO-LM is an outlier because NO-LM approach allows variety at the output but when the unit of the confusion model is as small as syllables, it produces too much variety that deterio-rates the discriminative model. | neutral |
train_97314 | The eye movement paths available from the screen recordings done during sense annotation conform to this theory. | these approaches can be classified as weak AI systems. | neutral |
train_97315 | We note that this measure is qualitatively similar to relational similarity model of (Turney, 2012), which predicts similarity between members of the word pairs To evaluate the vector offset method, we used vectors generated by the RNN toolkit of Mikolov (2012). | also of note, the use of distributed topic representations has been studied in (Hinton and Salakhutdinov, 2006;Hinton and Salakhutdinov, 2010), and (Bordes et al., 2012) presents a semantically driven method for obtaining word representations. | neutral |
train_97316 | It is clear that even with a simple weighing approach, the PPDB scores show a clear correlation with human judgements. | here, we obtain S → NP expect S, for which PPDB has matching rules like S → NP expect S | NP would hope S, and S → NP expect S | NP trust S. This allows us to apply sophisticated paraphrases to the predicate while capturing its arguments in a generalized fashion. | neutral |
train_97317 | 4 The resulting composite parallel corpus has more than 106 million sentence pairs, over 2 billion English words, and spans 22 pivot languages. | they can be used to bias the collection towards greater recall or higher precision. | neutral |
train_97318 | We compare our proposed method based on hitting times (HT) with two variants of iterative bootstrapping. | the goal of relation extraction is to extract tuples of a particular relation from a corpus of natural language text. | neutral |
train_97319 | In this paper, we show that a significant number of "negative" examples generated by the labeling process are false negatives because the knowledge base is incomplete. | in the M-step, we retrain both of the mentionlevel and the aggregation level classifiers. | neutral |
train_97320 | Figure 3 gives common and characteristic student responses for each setup on a question for which Dictionary Words differed significantly. | in this study we consider only the baseline, non-adaptive conditions of those experiments. | neutral |
train_97321 | We report results from 5-fold cross validation performed on the entire corpus. | now all three other classes also improve their performance, and we obtain a 10.6% error reduction on overall accuracy over the baseline system. | neutral |
train_97322 | A baseline system for this task would simply assign the majority class (high-coherent) to all of the responses; this baseline achieves an F-Measure of 0.587. | we generated there basic entity grids: EG_SOX (entity grid with the syntactic roles S, O, and X), EG_REDUCED (entity grid with the reduced representations P and N), and EG_SALIENT (entity grid with salient and non-salient entities). | neutral |
train_97323 | Recently, Yannakoudakis and Briscoe (2012) systematically analyzed a variety of coherence modeling methods within the framework of an automated assessment system for non-native free text responses and indicated that features based on Incremental Semantic Analysis (ISA), local histograms of words, the part-ofspeech IBM model, and word length were the most effective. | for coherence modeling, we again use the J48 decision tree from the Weka machine learning toolkit (Hall et al., 2009) and run 4-fold crossvalidation on the 600 annotated responses. | neutral |
train_97324 | Based on the above analysis, we plan to investigate additional superficial features explicitly related to discourse coherence, such as the distribution of conjunctions, pronouns, and discourse connectives. | we adopt the weighted average F-Measure to evaluate the performance of coherence prediction: first, the F1-Measure of each category is calculated, and then the percentages of responses in each category are used as weights to obtain the final weighted average F-Measure. | neutral |
train_97325 | We then run an iterative process where in each iteration we update both Λ and θ for each training sample. | as discussed in §3.1 this metric is exact, indicating whether the generated regular expression is semantically equivalent to the correct regular expression. | neutral |
train_97326 | The n-best parser always represents the n-best parses, which is set to 10,000 in our experiments. | at each iteration we find the n-best parses with the current lexicon, and find the subset of these parses which are correct using DFa equivalence. | neutral |
train_97327 | We do this using a modified version of Møller (2010). | this information is augmented with a (/) or a (\) for each argument indicating whether that argument comes from the left or the right, in sentence order. | neutral |
train_97328 | Given a document D and a word w in D, Z w = (f, e) represents an assignment of w to frame f ∈ F and frame element e ∈ E f ∪ S f . | we extend PROFINDER to leverage this intuition by incorporating a "stickiness" prior (Haghighi and Vanderwende, 2009) to encourage neighboring clauses to stay in the same frame. | neutral |
train_97329 | The BNC comprises 4,049 texts totalling approximately 100 million words. | the different senses of a word "exist in parallel" until it is observed in some context. | neutral |
train_97330 | In isolation, the word pen may refer to a writing implement, an enclosure for confining livestock, a playpen, a penitentiary or a female swan. | although guppy is an example of a pet-fish it is neither a very typical pet nor fish (Osherson and Smith, 1981). | neutral |
train_97331 | A word's usage is learned from the type of dependency relations it has with its immediate neighbors in dependency graphs. | their product cc * = (r e iθ )(r e −iθ ) = r 2 is real. | neutral |
train_97332 | Table 5 reports F1 scores on both the positive and negative examples of TEST. | from line 2 on, the first column is the reference output and the second column is the model output with the marginal probability for predicated labels. | neutral |
train_97333 | The approach explores both trees in a bottom-up, postorder manner, running in time: where |T i | is the number of nodes, D i is the depth, and L i is the number of leaves, with respect to tree T i . | we show that this approach is better formulated as a (strongly indicative) feature of a larger set of answer extraction signals. | neutral |
train_97334 | 2012present an improved system called OLLIE, which relaxes the previous systems' constraints that relation words are mediated by verbs, or relation words that appear between two entities. | it is often difficult to determine which textual fragments to extract. | neutral |
train_97335 | With these notations, RD(q, s) is true if and only if is true for some w h , w d |q , for some u h , u d |s and for some i and j. EQ(w h , u h )∧EQ(w d , u d ) requires that the question dependency w h , w d |q and the snippet dependency u h , u d |s match; w h ∈ m (i) ∧ w d ∈ m (j) requires that the head word and dependent word are in the i th -rank and j th rank MMP, respectively. | these features are designed to characterize a phrase from the lexical, syntactic, semantic and corpus-level aspect. | neutral |
train_97336 | The ultimate goal of the MMP model is to improve the performance of our question-answering system. | rD(q, s) is a dependency feature enhanced with MMPs. | neutral |
train_97337 | This merged graph becomes a new history graph for the next utterance. | only nouns are considered as potential keywords. | neutral |
train_97338 | The keywords of the current utterance are extracted by TextRank (Mihalcea and Tarau, 2004) from the merged graph of the current utterance and the history graphs. | there have been a few studies focused on keyword extraction from spoken genres. | neutral |
train_97339 | We assume that only the preceding utterances that are directly related with the current utterance are important for extracting keywords from the current utterance. | this is collaborative approach to extract keywords in a document. | neutral |
train_97340 | The frequency-based keyword extraction with TFIDF weighting (Frank et al., 1999) and the graph-based keyword extraction (Mihalcea and Tarau, 2004) are two base models for this task. | it would be helpful to meeting participants to provide them with some additional information related to the current subject. | neutral |
train_97341 | In this study, they considered prosodic information from HKT forced alignment and topics in a lecture generated by Probabilistic Latent Semantic Analysis (pLSA). | they are not suitable for meeting transcripts. | neutral |
train_97342 | For anaphor m i and its antecedent candidates E m i (e ij ∈ E m i ), the numeric score for pair {m i , e ik } is S ik . | based on (1) and (2), MLNs allow us to express relations between anaphor-anaphor and anaphor-antecedent pairs ((m,n) or (m,e)) on the global discourse level improving accuracy by performing joint inference. | neutral |
train_97343 | MLNs are a powerful representation for joint inference with uncertainty. | work on implicit noun roles is mostly focused on few predicates (e.g. | neutral |
train_97344 | It makes strong untested assumptions about bridging anaphora types or relations, limiting it to definite NPs (Poesio and Vieira, 1998;Poesio et al., 2004;Lassalle and Denis, 2011) or to part-of relations between anaphor and antecedent (Poesio et al., 2004;Lassalle and Denis, 2011). | entity coherence can rely on more complex, lexico-semantic, frame or encyclopedic relations than identity. | neutral |
train_97345 | Even an unsupervised model based on these constraints provides substantial gains over feature-based models for most AZ categories. | recent work has shown that explicit declaration of domain and expert knowledge can be highly useful for structured NLP tasks such as parsing, POS tagging and information extraction (Chang et al., 2007;Ganchev et al., 2010). | neutral |
train_97346 | Many SMT systems, including our own, still use this distance-based penalty as a feature. | the isi corpus consists of comparable data: sentence pairs whose source-and target-language sides are similar, but often not mutual translations. | neutral |
train_97347 | We show a reference translation of the Chinese source (not found in the comparable data) that reorders the phrases as 1, 3, 2. | we also tried augmenting this with separate log-linear features corresponding to subcorpus-specific RMs. | neutral |
train_97348 | For a given phrase pair (f , e), we estimate the probabilities that it will be in an M, S, or D orientation o with respect to the previous phrase pair and the following phrase pair (two separate distributions). | since in Cantonese, "verb adverb" is a more common word order, speakers and writers of Mandarin in Hong Kong may adopt the 012 34 135 013517 2 38 9 34 2 8 2 7 013517 2 38 9 "verb adverb" order in that language as well. | neutral |
train_97349 | We introduce four methods based on the above formulation and each method uses a different type of g(•) function for combining different metrics and we compare experimentally with existing methods. | our method is able to combine multiple metrics (each of which compares to the reference) during the tuning step and we do not depend on N-best list (or forest) rescoring or system combination. | neutral |
train_97350 | Or the user can encode her beliefs or expectations about the individual solutions {p s 1 , . | due to the scaling differences between the scores of different metrics, the linear combination might completely suppress the metric having scores in the lower-range. | neutral |
train_97351 | Using standard finite state algorithms, they intersect the two automatons then exactly search for the highestscoring paths. | referencing vertex v becomes partial edge "is ( )[0 + ] ." | neutral |
train_97352 | The alignment models proposed mostly follow the original generative stories while introducing additional phrasal conditioning into models 3 and 4. | in addition to sampling the alignments, we also place a uniform Beta prior on the discount parameters and a vague Gamma prior on the strength parameters, and sample them using slice sampling (Neal, 2003). | neutral |
train_97353 | The most basic word alignment model, IBM model 1, can be described using the following generative process (Brown et al., 1993): Given an English sentence E = e 1 , ..., e l , first choose a length m for the foreign sentence F . | we place the following assumptions on IBM model 1: In this probability modelling we assume that the alignment positions are determined using the uniform distribution, and that word translations are generated depending on the source word -the probability of translating to a specific foreign word depends on the observed frequency of pairs of the foreign word and the given source word. | neutral |
train_97354 | The system performance was then evaluated against these judgements in terms of precision (P ), i.e. | one of the first attempts to identify and interpret metaphorical expressions in text is the met* system of Fass (1991), that utilizes hand-coded knowledge and detects non-literalness via selectional preference violation. | neutral |
train_97355 | The clusters that have no member noun were hidden from the ranking since they do not explicitly represent any concept. | the matrix B denotes the n × m adjacency matrix, with b ip being the connection weight between the vertex v i and the cluster u p . | neutral |
train_97356 | We calculated system precision (in all experiments) as an average over both annotations. | not only HGFC outperformed agglomerative clustering methods in hi-erarchical clustering tasks (Yu et al., 2006;Sun and Korhonen, 2011), but its hierarchical graph output is also a more suitable representation of the concept graph. | neutral |
train_97357 | The final parameter values for the other methods, found by exhaustive search, are summarized in Table 1. | a major drawback of this strategy is the dependency on the coverage of the resource, which has a direct impact on the lexical chains. | neutral |
train_97358 | Statistical methods to modeling language semantics have proven to deliver good results in many natural language processing applications. | it can be concluded that there is a moderate to high agreement regarding the annotator selections of candidate terms, which is ensured by preselection of candidate terms by part-of-speech patterns. | neutral |
train_97359 | The statistics of our labeled examples are presented in Table 2. | our definition of entities depends on the given knowledge base, rather than human judgment. | neutral |
train_97360 | Ideally, we would like to manually label all the posts to obtain the ground truth for evaluation. | the polarity of the interaction expression in the post is dependent on the viewpoint y u,n and the viewpoints of the previous post(s). | neutral |
train_97361 | For our task, we take a simpler approach and use a sentiment lexicon together with some heuristics to predict the polarity of interaction expressions. | jVTM: The model is shown in Figure 3(a), a variant of jVTM-UI that does not consider user interaction. | neutral |
train_97362 | Note that FS-EM1 and FS-EM2 work slightly better than Co-Class in domain "Camera" because it is the least noisy domain with very short posts while other domains (as source data) are quite noisy. | based on the discussion above, the key to solve the problem of EM is to find a way to reflect the features in the target domain during the iterations. | neutral |
train_97363 | One can conceive a two-pass scenario where we first make a binary decision of whether there is an empty category associated with the head in the first pass and then determine whether there is an EC associated with the tuple as well as the EC type in the second pass. | dev Test 81-325, 400-454, 500-554 41-80 1-40 590-596, 600-885, 900 901-931 As discussed in Section 2, the gold standard dependency structure parses are converted from the CTB parse trees, with the ECs preserved. | neutral |
train_97364 | As a result, current semantic role labeling systems can only recover explicit arguments. | this context can provide discriminative clues that may help identify the types of empty category. | neutral |
train_97365 | We also introduce a variety of new features that are more suited for this approach. | our model did poorly on dropped pronouns (*pro*). | neutral |
train_97366 | Methods for learning with ambiguous labelings have previously been proposed in the context of multi-class classification (Jin and Ghahramani, 2002), sequence-labeling (Dredze et al., 2009), log-linear LFG parsing (Riezler et al., 2002), as well as for discriminative reranking of generative constituency parsers (Charniak and Johnson, 2005). | figure 3 outlines an example of how (and why) AAST works. | neutral |
train_97367 | AAST, on the other hand, achieves consistent gains, rising from 62.0% to 64.0% on average. | for languages 3 Model "D-,To" in Table 2 from Naseem et al. | neutral |
train_97368 | Using this task and a model of the meaning of spatial language, we next discuss two agents that play the game: ListenerBot (Section 4) makes decisions using a single-agent POMDP that does not take into account the beliefs or actions of its partner, whereas DialogBot (Section 5) maintains a model of its partner's beliefs. | since this behavior confuses the other agent and thus has a lower utility, it gets replaced by truthful communication as the policies improve. | neutral |
train_97369 | Prediction can support specific response capabilities, such as system completion of user utterances (DeVault et al., 2011a) and reduced response latency. | the analysis in this paper has explored a method of approximating explicit incremental NLU using predictive techniques in finite semantic domains. | neutral |
train_97370 | The algorithm takes an ambiguous word w as input, and outputs its corresponding similarity-based pseudoword P w whose i th pseudosense models the i th sense of w, together with a confidence score which we detail below. | given a pseudoword p and an untagged corpus C, this artificial tagging is achieved by substituting all occurrences of w i in C with p for each pseudosense i ∈ {1, . | neutral |
train_97371 | The score, however, remains above 0.70 with highlypolysemous pseudowords. | as our WSD system for this experiment, we used It Makes Sense (IMS), a state-of-the-art supervised WSD system (Zhong and Ng, 2010). | neutral |
train_97372 | Clearly, a sufficiently large sense-tagged corpus is required for calculating the occurrence frequency of the individual senses of a word. | constructing a pseudoword by merely combining a random set of unambiguous words selected on the basis of their falling in the same range of occurrence frequency (Schütze, 1992), or leveraging homophones and OCR ambiguities (Yarowsky, 1993), does not provide a suitable model of a real polysemous word (Gaustad, 2001;Nakov and Hearst, 2003). | neutral |
train_97373 | A fundamental problem in computational linguistics is the paucity of manually annotated data, such as part-of-speech tagged sentences, treebanks, and logical forms, which exist only for few languages (Ide et al., 2010). | as mentioned earlier in Section 2, constructing a pseudoword by combining a random set of unambiguous words, as was done in these early works, can not model systematic polysemy (Gaustad, 2001;Nakov and Hearst, 2003), since different senses of a real ambiguous word, unless it is homonymous, share some semantic or pragmatic relation. | neutral |
train_97374 | Under these conditions, setting different priors to reflect the annotator pool should improve performance. | whenever one label is more prevalent (a common case in NLP tasks), κ overestimates the effect of chance agreement (Feinstein and Cicchetti, 1990) and penalizes disproportionately. | neutral |
train_97375 | We thus also implement Variational-Bayes (VB) training with symmetric Beta priors on θ j and symmetric Dirichlet priors on the strategy parameters, ξ j . | all interannotator agreement measures suffer from an even more fundamental problem: removing/ignoring annotators with low agreement will always improve the overall score, irrespective of the quality of their annotations. | neutral |
train_97376 | For instance, "bright" and "intelligent" are frequently occurring in comma-separated enumerations, and "intelligent" fits well in the target context based on n-gram probabilities. | due to high annotation costs, methods that do not require labeled training data per target scale better to a large vocabulary. | neutral |
train_97377 | In our view, the latter option provides a better assessment of the model's similarity judgements, since contextualizing low-similarity landmarks often yields non-sensical phrases (e.g. | the first step of our method consists in the construction of a latent factor model for nouns, based on their context words. | neutral |
train_97378 | In general, we found the "MM" approach can perform better since it inherently incorporates both the "burstiness" and "lexical cohesiveness" of the event tweets, while the "Spike" approach relies solely on the "burstiness" property. | most previous summarization studies focus on the well-formatted news documents, as driven by the annual DUC 2 and TAC 3 evaluations. | neutral |
train_97379 | A multi-document extension of RST is Cross-document Structure Theory (CST), which has been applied to MDS (Zhang et al., 2002;Jorge and Pardo, 2010). | g-FLOW only scores significantly lower than LIN and the gold standard summaries. | neutral |
train_97380 | Furthermore, base forms and inflected forms separated by spaces, hyphens, or colons were discarded. | these modeling choices are directly inspired by the data setting: Wiktionary contains complete inflection tables for many lexical items in each of a large number of languages, so it is natural to make full use of this information with a joint model of all inflected forms. | neutral |
train_97381 | In the alignment step, we minimize the edit distance between each inflected form and the base form to identify changed spans. | we find that regularization is effective at balancing high model capacity with generalization, and reducing the size of the feature set empirically harms overall accuracy. | neutral |
train_97382 | In the former, the authors develop an active learning word selection strategy for inducing pronunciation rules. | our selection methods are fast, can select any number of data points in a single step, and are not tied to a particular prediction task or model. | neutral |
train_97383 | Lemma Considering the amount of effort put in developing the guesser, the baseline POS tagging accuracy is relatively good. | our morphological analyzer is identical to the one used in the previous sections. | neutral |
train_97384 | In fact, our model induces at most 4-6 roles (even if |R| is much larger). | namely, instead of computing an expectation of p(a i |a −i , r,v,C,u) under p(r|x, w), as in (3), we use the posterior distributions µ is = p(r i = s|x, w) and score the argument predictions as where µ are the posteriors for all the arguments, and φ i (a, a −i ) is the score associated with predicting lemma a for the argument i. | neutral |
train_97385 | While each of the joint factors all improve over the baselines on RF, the full model with all the joint factors does not perform as well as with some factors excluded. | this differs from our model which is built on non-greedy joint inference, but much of the signal indicating when two mentions corefer or are aligned is similar. | neutral |
train_97386 | For a thematic fit task, the correlation between calculated estimates and human judgements can be expected to improve. | we make use of the SDDM , SDDMX , and TypeDM tensors in our experiments to demonstrate how our techniques improve performance in thematic fit modelling across different feature spaces. | neutral |
train_97387 | The main drawback of the V RC method is that it cannot evaluate fewer than three clusters, due to having both a V RC k+1 and a V RC k−1 term in Equation (5). | yet, this method poses a few theoretical questions. | neutral |
train_97388 | We make use of the SDDM , SDDMX , and TypeDM tensors in our experiments to demonstrate how our techniques improve performance in thematic fit modelling across different feature spaces. | good role fillers that are very different from one another and belong to different senses of a verb can all be assigned thematic fit scores as high as those of good role fillers of monosemous verbs. | neutral |
train_97389 | For example, if the correct phrase is always ranked 2, 50 or 100 out of list of 4600, median rank accuracy would be 99.95, 98.91 or 97.83. | we find that the additional composition constraint used in CNNSE has maintained the interpretability of the learned latent space. | neutral |
train_97390 | 4 HillClimbPOS: After sampling the initial values s, t (0) and y (0) , the hill-climbing algorithm improves the solution via locally greedy changes. | we obtain a 2.1% TedEval gain against the best published results in the 2013 SPMRL shared task (Seddah et al., 2013). | neutral |
train_97391 | We observe that most sentences converge quickly. | the model samples the full path from the lattice, which corresponds to a valid segmentation and POS tagging assignment. | neutral |
train_97392 | As these parsers employ a bottom-up chart-parsing strategy and use normal-form CCGbank derivations which are rightbranching, they are not incremental in nature. | a related secondorder matching-based mechanism was used by (Kwiatkowski et al., 2010) to decompose logical forms for semantic parser induction. | neutral |
train_97393 | Note, for testing our system, we still need to evaluate every possible candidate report-source pair -that is ∼12,265 candidate sources per tested report. | topicBlock (Ho et al., 2012) models citation prediction with a hierarchical topic model but only uses the first 200 words of each document's abstract. | neutral |
train_97394 | Of the past research which uses generative models for citation prediction, we believe LinkLDA is the only other system in which a source's prior citation probability plays any role in training the model. | we suspect that PrevCit-edSource was a good feature because our corpus was sufficiently large; had our corpus been much smaller, there might not have been enough data for this feature to provide any benefit. | neutral |
train_97395 | The above ten categories are intended to be disjoint, so that a character n-gram belongs to exactly one of the categories. | which character n-grams are more like bag-of-words features (which tend to track topics), and which are more like stylistic features (which tend to track authors)? | neutral |
train_97396 | • There aren't even any dead people on it, since by the very act of being dead and still famous, they assert their long-term impact. | we leave this research question for future work. | neutral |
train_97397 | The parsing process is modeled as an application of a sequence of actions, transducing the initial state into a final state, while constructing de- Table 1: arc-standard transition action sequence for parsing the sentence in Figure 2. pendency arcs. | the 128-beam system does not improve the performance significantly (48.2 vs 47.5), but runs twice slower. | neutral |
train_97398 | Our choice of this statistical procedure has been informed by (Moore, 2004). | 3 while we have presented signi cant improvements using additional constraints, one may won5even when caching feature extraction during training mcdonald et al (2005a) still takes approximately 10 minutes to train. | neutral |
train_97399 | The significance level used to qualify a word as a keyword thus requires correction for multiple tests to reduce type I errors. | a traditional unigram language model is constructed using the actual term frequencies in the document, the resulting model capturing generative probabilities. | neutral |