id (string, 7-12 chars) | sentence1 (string, 6-1.27k chars) | sentence2 (string, 6-926 chars) | label (4 classes) |
---|---|---|---|
train_96400 | Using the disambiguator, the string generated from the most probable reduced f -structure produced by the transfer system is chosen. | in the automatic evaluation of f -structure match, three different system variants were compared. | neutral |
train_96401 | The PCFG is a Markov grammar (Collins, 1997;Charniak, 2000), i.e. | bacchiani and Roark (2003) presented unsupervised MAP adaptation results for n-gram models, which use the same methods outlined above, but rather than using a manually annotated corpus as input to adaptation, instead use an automatically annotated corpus. | neutral |
train_96402 | Conditional random fields (CRFs) bring together the best of generative and classification models. | for any label y = c c, c(y) = c is the corresponding chunk tag. | neutral |
train_96403 | We also used a development test set, provided by Michael Collins, derived from WSJ section 21 tagged with the Brill (1995) POS tagger. | sequence analysis tasks in language and biology are often described as mappings from input sequences to sequences of labels encoding the analysis. | neutral |
train_96404 | The accuracy rate for individual labeling decisions is over-optimistic as an accuracy measure for shallow parsing. | we disable the preconditioner after a certain number of iterations, determined from held-out data. | neutral |
train_96405 | In this paper we investigate methods for selecting labeled examples produced by two statistical parsers. | as a point of reference, we plot the improvement rate for a fully supervised parser (same as the one in Figure 2). | neutral |
train_96406 | The other approach, and the focus of this paper, is co-training (Sarkar, 2001), a mostlyunsupervised algorithm that replaces the human by having two (or more) parsers label training examples for each other. | because the labeled sentences selected by S diff-10% contain more mistakes than those selected by S above-90% , S diff-10% requires slightly more corrections than S above-90% for the same level of parsing performance; though both require fewer corrections than the reference case of no selection (Figure 4(b)). | neutral |
train_96407 | From all combinations of estimates and measures, document retrieval with a maximum window of 16 words and pointwise mutual information performs best on average in the three test sets used. | landauer and Dumais (1997) applied word similarity measures to answer TOEFl (Test Of English as a Foreign language) synonym questions using latent Semantic Analysis. | neutral |
train_96408 | Under appropriate conditions, we could reconstruct P(a, b) from these quantities using Gibbs sampling, and, in general, that is the best we can do. | finally, the fact that the measure has much more dynamic range has some appeal when reporting tagging results. | neutral |
train_96409 | In such models, the probability assigned to a tagged sequence of words x = t, w is the product of a sequence of local portions of the graphical model, one from each time slice. | the principal advantage of the dependency network approach is that advantageous bidirectional effects can be obtained without the extremely expensive global training required for CRFs. | neutral |
train_96410 | The TREC QA track is a comparative evaluation. | there was some disagreement among the judges for half of all responses that were not obviously wrong. | neutral |
train_96411 | Using general loglinear regression (Poisson model) under the hypothesis that these two variables are independent of each other, our analysis showed that there is a systematic relationship (significance probability is 0.0504) between source of selected terms and answer accuracy. | the two-tailed t test (df =222) produces a p-value of less than .01% for the comparison of the expected and selected proportions of a) human terms and head sorted terms and b) human terms and technical terms. | neutral |
train_96412 | There were no immediate reports of injury or damage. | the weekly articles review the daily articles and recite important snippets from the daily news. | neutral |
train_96413 | In addition, specifically tagged sections of the documents can be searched rather than the entire document, thus providing fast and effective retrieval. | the text enclosed between the start and end tags of all occurrences of each element is encoded using a fixed-width feature vector. | neutral |
train_96414 | In the following, we consider as a "bigram" a language model with a temporal history that includes information from no longer than one previous time-step into the past. | as can be seen, the FLM alone might increase perplexity, but the GPB-FLM decreases it. | neutral |
train_96415 | In TREC 2002's main QA evaluation there were 67 different systems or variants thereof involved. | strict corresponds to the correctness criterion used by NIsTthe answer must be exact and justified by the referenced document (assessor judgment v s ). | neutral |
train_96416 | (3-1) From in-domain examples, the most similar examples are retrieved. | we used an EBMT, DP-match Driven transDucer (D 3 , Sumita, 2001) as a test bed. | neutral |
train_96417 | A major barrier to the rapid and cost-effective development of spoken language processing applications is the need for time-consuming and expensive human transcription and annotation of collected data. | in order to overcome this we introduce as part of the selection criteria a length limit of 50 phones. | neutral |
train_96418 | These results suggest that we can achieve the same accuracy as random labeling with around 60% of the effort by active selection of examples according to the confidence-based method described in section 3. | there is also a smaller variation in utterance length between actively and randomly selected training examples (more like 110% than the 150% for HMIHY); table 4 shows that defining effort in terms of number of phones still results in appreciable savings for active learning. | neutral |
train_96419 | This result is not unreasonable, however, because, due to limited time, very little effort was spent tuning the parameters of the model. | while the syntactic pattern based reranker increases performance using human annotations by nearly 2%, the effect when using automatically extracted information is only 0.5%. | neutral |
train_96420 | Rules combining sentences or groups of sentences (so called macro segments) to non-primitive RSTtrees. | they introduce massive ambiguities into the grammar which causes the number of analyses to grow according to the Catalan numbers (cf. | neutral |
train_96421 | 1) Ambiguity of parsing is resolved by comparing parse trees of input sentences. | they are regarded as bi-directional paraphrasing rules. | neutral |
train_96422 | We create a word-trie, transform it into a minimal DFA, then identify hubs. | in matching the gold standard words to divisions predicted by either system, we made the following assumptions. | neutral |
train_96423 | The hope is that the same techniques will work for extracting prefixes. | we close with a discussion of the limitations and our plans for more complete learning of hub-automata. | neutral |
train_96424 | We now present an algorithm that checks whether an individual link (e i , f j ) causes a cohesion constraint violation when it is added to a partial alignment. | if two phrases are disjoint in the English sentence, the alignment must not map them to overlapping intervals in the French sentence. | neutral |
train_96425 | The models attracted many researchers because they are considered to be basic frameworks for retrieving or extracting complex information like events. | our model is robust like keyword-based search, and also enables us to specify the structural and linear positions in texts as done by Region Algebra. | neutral |
train_96426 | Definition 2 (Relevance of the whole query) Let q be a given query, d be a document and q 1 , ..., q m subqueries of q. | the relevance is defined by the number of portions matched with subqueries in a document. | neutral |
train_96427 | For this task, we assume that that the frame is a latent class variable (whose domain is the set of lexical units) and the frame elements are variables whose domain is the expanded lexicon (FrameNet + WordNet). | frame Semantics suggests that the meanings of lexical items (lexical units (LU)) are best defined with respect to larger conceptual chunks, called frames. | neutral |
train_96428 | With this model, we are able to estimate the overall joint distribution for each FrameNet frame, given the lexical items in the candidate sentence from the corpus. | we defined the relevance metric for the WordNet nodes to achieve a larger coverage. | neutral |
train_96429 | We constructed a large, automatically annotated corpus by merging the output of Charniak's statistical parser (Charniak, 2000) with that of the IBM named entity recognition system Nominator (Wacholder et al., 1997). | the initial examination of the data showed that syntactic forms in coreference chains can be effectively modeled by Markov chains. | neutral |
train_96430 | The first mention of an entity does have a very special status and its appropriate choice makes text more readable. | there are exceptions which are not surprising; for example, a mention with one modifier is usually followed by a mention with one modifier (probability 0.5) accounting for title modifiers such as "Mr." and "Mrs.". | neutral |
train_96431 | Unlike Ratnaparkhi we do not directly consider any information about preceding words even the previous one [Toutanova 2002]. | they are often written in telegraphic style, omitting closed-class words, which leads to a higher percentage of ambiguous items. | neutral |
train_96432 | Our results differ partly from observations reported on dialogues. | recall that approach 3. cannot discriminate between positions with an increased or reduced FP probability. | neutral |
train_96433 | We conducted an evaluation to compare the effectiveness of CarmelTC at analyzing student essays in comparison to LSA, Rainbow, and a purely symbolic approach similar to (Furnkranz et al., 1998), which we refer to here as CarmelTCsymb. | our evaluation demonstrates the advantage of combining predictions from symbolic and "bag of words" approaches for content analysis aspects of automatic essay grading. | neutral |
train_96434 | For more details see (O'Shaughnessy, 2000). | finally, the values of the main peaks of the spectrum of the speech signal were used as the elements of the third stream vector. | neutral |
train_96435 | In general, the performance of existing speech recognition systems, whose designs are predicated on relatively noise-free conditions, degrades rapidly in the presence of a high level of adverse conditions. | these tests were repeated using the 2stream feature vector, in which we combined the acoustic distinctive cues to the classical MFCCs and their first derivatives to form two streams (MFCCEDE). | neutral |
train_96436 | The acoustic distinctive cues are calculated starting from the spectral data using linear combinations of the energies taken in various channels. | results showed also that the use of the auditory-based acoustic distinctive cues improves the performance of the recognition process in noisy car environments with respect to the use of only the MFCCs, their first and second derivatives at high SNR values, but not for low SNR values. | neutral |
train_96437 | The second component (the upper one) consists of "grammar", "readability", and "verbose and conciseness". | at this point, we are able to produce good prediction of several aspects of information quality, including Depth, Objectivity, Multi-view, and Readability. | neutral |
train_96438 | As the effect of the two actions (1a) and (2a), it is inferred that the specified location (counter in (1a), bag in (2a)) has been "emptied" of the object (fingerprints in (1a), groceries in (2a)). | (Levin and Rappaport Hovav, 1992) defines classes of verbs according to the ability or inability of a verb to occur in pairs of syntactic frames that preserve meaning. | neutral |
train_96439 | Probabilistic word segmentation can handle this kind of ambiguity successfully. | thai as well as some other Asian languages has no word boundary delimiter. | neutral |
train_96440 | In this paper, we present a similar system with a much simpler set of model parameters. | we also ran the N2 on the June 2002 DARPA TIDES Large Data evaluation test set. | neutral |
train_96441 | One score was for the content of the response and the other for its organization, with each score on a scale of 0-10. | systems should be penalized for not retrieving vital concepts, and penalized for retrieving items that are not on the assessor's concept list at all, but should be neither penalized nor rewarded for retrieving a non-vital concept. | neutral |
train_96442 | We are interested in continuing these evaluations in two ways. | their system takes as input properly formatted documents and uses the University of Pennsylvania's CAMP system to perform withindocument coreference resolution, doing more careful work to find additional mentions of the entity in the document. | neutral |
train_96443 | Examples might be the person "John F. Kennedy" who became a president, "White House" -the residence of the US presidents, etc. | the first mention in each group is chosen as the representative (only in Model II and III) and an entity having the same writing with the representative is created for each cluster 3 . | neutral |
train_96444 | If this approach could be applied to gesturespeech alignment, it would be advantageous because the binding probabilities could be combined with the output of probabilistic recognizers to produce a pipeline architecture, similar to that proposed in (Wu et al., 1999). | we define a binding, b ∈ B, as a tuple relating a gesture, g ∈ G, to a corresponding speech reference, r ∈ R. Provided G and R, the set B enumerates all possible bindings between them. | neutral |
train_96445 | Building a recognizer that could handle such unconstrained gesture would be a substantial undertaking and an important research contribution in its own right. | the precise definition of a recurrence relation for m[i, j] and a proof of correctness will be described in a future publication. | neutral |
train_96446 | We address these issues in the rest of the paper. | two possible reasons for this are: first, argument extraction requires more non-local information that is available in the pattern-matching based approach while the classification-based approach relies on local information and is more conducive for identifying the simple predicates in MAtCH. | neutral |
train_96447 | However, scalability of such systems is a bottleneck due to the heavy cost of authoring and maintenance of rule sets and inevitable brittleness due to lack of coverage in the rule sets. | mATCH (multimodal Access To City Help) is a working city guide and navigation system that enables mobile users to access restaurant and subway information for New York City (NYC) (Johnston et al., 2002b;Johnston et al., 2002a). | neutral |
train_96448 | Our surface patterns operated both at the word and part-of-speech level. | the same situation is true of the other two questions. | neutral |
train_96449 | Researchers are then in a position to formulate initial theories, validate the consequences of theories on real data, refine theories in light of empirical data, and follow up with revised experimentation in a dialectic process that forms the essence of scientific discovery. | to this end, we divide the available questions from tREC 2002 into two sets of equal size. | neutral |
train_96450 | In addition to showing a lower correlation (r linear = 0.82), raw scores also clearly posses a nonlinear component, and in fact the quadratic trend is highly significant (t quad = 13.10, p < .001). | often, such evaluations are based on the use of simple sum-scores (i.e., the number of correct answers) and derivatives thereof (e.g., percentages), or on ad-hoc ways to rank or order system responses according to their correctness. | neutral |
train_96451 | Our acquisition algorithm uses clues such as itemization or listing in HTML documents and statistical measures such as document frequencies and verb-noun co-occurrences. | the score was designed so that words appearing in many downloaded documents are highly ranked, according to Assumption B. | neutral |
train_96452 | Alternative 2 This method uses the captions of the itemizations, which are likely to contain a hypernym of the items in the itemization. | a set of the hyponym candidates extracted from a single itemization or list is called a hyponym candidate set (HCS). | neutral |
train_96453 | But the assumption that prosody conveys information about syntactic structure in the same way that punctuation does could be false. | rb This is based on the bin b of the binned NOrM LAST rHYME DUr value, and is only generated that value is greater than -0.061. | neutral |
train_96454 | In fact, when transcribing speech, commas are often used to denote a pause. | all of these experiments produced results similar to those reported here. | neutral |
train_96455 | We motivate the use of tree transducers for natural language and address the training problem for probabilistic tree-totree and tree-to-string transducers. | rounds was motivated by natural language. | neutral |
train_96456 | We demonstrate this point by utilizing content models to select appropriate sentence orderings: we simply use a content model trained on documents from the domain of interest, selecting the ordering among all the presented candidates that the content model assigns the highest probability to. | to address the first question, we compare summaries created by our system against the "lead" baseline, which extracts the first sentences of the original text -despite its simplicity, the results from the annual Document Understanding Conference (DUC) evaluation suggest that most single-document summarization systems cannot beat this baseline. | neutral |
train_96457 | The majority of proposals are symbolic and therefore limited to a specific domain due to the large effort involved in hand-coding semantic information (see Lauer 1995 for an extensive overview). | the conceptbased model (see (7)) achieved an accuracy of 28% on this test set, whereas its lexicalized version reached an accuracy of 40% (see table 11). | neutral |
train_96458 | NIST TREC-9 SDR Web Site (2000) states that: The results of the TREC-9 2000 SDR evaluation presented at TREC on November 14, 2000 showed that retrieval performance for sites on their own recognizer transcripts was virtually the same as their performance on the human reference transcripts. | each query q ∈ Q is presented to the system only once, independent of the number of occurences of q in the transcriptions. | neutral |
train_96459 | In addition it is possible to find large amounts of text with similar content in order to build better language models and enhance retrieval through use of similar documents. | the second method can only generate phone strings that are substrings of the pronunciations of in-vocabulary word strings. | neutral |
train_96460 | • Does this word/ POS/ GPOS match the word/ POS/ GPOS that is 1/2/3 positions to its right? | like liu et al., we use a decision tree trained with prosodic features and a hidden event language model for the IP detection task. | neutral |
train_96461 | Third, they annotate semantic relations among factoids, such as generalization and implication. | here we first compare pyramid scores of the original summaries with DUC scores. | neutral |
train_96462 | First, pyramid scores ignore interdependencies among content units, including ordering. | our impression from consideration of three SCU inventories is that the pattern illustrated here between SCU1 and SCU2 is typical; when two SCUs are semantically related, the one with the lower weight is semantically dependent on the other. | neutral |
train_96463 | This tier is the first one top down such that the sum of its cardinality and the cardinalities of tiers above it is greater than or equal to X (summary size in SCUs). | scores can be computed using one, two and so on up to five reference summaries. | neutral |
train_96464 | This could not only make it difficult to see an improvement in the system's output, but also potentially mislead the BLEU-based optimization of the feature weights. | the use of n-best list rescoring limits the possibility of improvements to what is available in the n-best list. | neutral |
train_96465 | On the Baseline, it achieves 31.4%. | sVMs are extremely slow in training since they need to solve a quadratic programming search. | neutral |
train_96466 | Two methods are used to model the aspects of coherence handled by the system. | the thesis statement, main ideas, and conclusion statement should all contain text that is strongly related to the essay topic. | neutral |
train_96467 | We then obtain a grade level prediction for each passage. | most similar to our work are the vocabulary-based measures, such as the Lexile measure (Stenner et al., 1988), the Revised Dale-Chall formula (Chall and Dale, 1995) and the Fry Short Passage measure (Fry, 1990). | neutral |
train_96468 | TYPES also obtained the best correlation (0.86) for the Reading A-Z documents. | second, because we are interested in the relative likelihoods of grade levels, accurate relative type probabilities are more important than absolute probabilities. | neutral |
train_96469 | Comparing these results with the results in Tables 1 and 2, we find that while overall the performance of contextual non-combined feature sets shows a small performance increase over most non-contextual combined or non-combined feature sets, there is again a slight decrease in performance across the best results in each table. | adaBoost's best accuracy of 84.75% is achieved on the "alltext+speech+glob-ident" combined feature set. | neutral |
train_96470 | The experiments were conducted with an English setup, subjects and assistants in the United States of America and with a German setup, subjects and assistants in Germany. | the flip side, i. e., computer-human interaction (CHI), has received very little attention as a research question by itself. | neutral |
train_96471 | Figure 2 shows the discrepancy between the dialogue efficiency in Phase 1 (HCI) versus Phase 2 2 The shortest dialogues were 3:18 (E) and 3:30 (G) and the longest 12:05 (E) and 10:08 (G). | the usability of such conversational dialogue systems is still unsatisfactory, as shown in usability experiments with real users (Beringer, 2003) that employed the PROMISE evaluation framework described in Beringer et al. | neutral |
train_96472 | Section 3 describes and analyzes the results of experiments aimed at comparing the accuracy of speech recognition and the quality of language modeling on both native and non-native data. | the grammar used in this experiment (the "native" grammar) was designed based for native speech without adaptation to non-native data. | neutral |
train_96473 | In this case, since we wrote the grammar manually and incrementally over time, it is not possible to directly "add the nonnative data" to the grammar. | (Mayfield Tomokiyo and Waibel, 2001)). | neutral |
train_96474 | Ideally, by training acoustic models on target non-native speech, one would capture its specific characteristics just as training on native speech does. | in this paradigm, adapting dialogue systems to non-native speakers does not only mean being able to recognize and understand their speech as it is, but also to help them acquire the vocabulary, grammar, and phonetic knowledge necessary to fulfill the task the system was designed for. | neutral |
train_96475 | After the addition of all the new features, it is the case that removal of no individual feature except predicate degrades the classification performance significantly, as there are some other features that provide complimentary information. | the best system is trained by first filtering the most likely nulls using the best NULL vs NON-NULL classifier trained using all the features whose argument identification F 1 score is marked in bold in table 4, and then training a ONE vs ALL classifier using the data remaining after performing the filtering and using the features that contribute positively to the classification task -ones whose accuracies are marked in bold in table 4. | neutral |
train_96476 | The node NP that encompasses "about 20 minutes" is a NON-NULL node, since it does correspond to a semantic argument -ARGM-TMP. | far, in all experiments our unseen test data was selected from the same source as the training data. | neutral |
train_96477 | This papepr proposes a method of collecting written and spoken language corpora from the Web using interpersonal expressions ( Figure 2). | for example, paraphrasing compound nouns or complex syntactic structure is the task to be tackled. | neutral |
train_96478 | Such an expression is called interpersonal expression. | the first is that current speech synthesis technology is still insufficient and many applications often produce speech with unnatural accents and intonations. | neutral |
train_96479 | If the form of target is 'adverb£ noun predicate', the frequency is approximated by that of 'noun predicate', which is counted based on the parse result. | we cannot use existing Japanese spoken language corpora, such as (Maekawa et al., 2000;Takezawa et al., 2002), because they are small. | neutral |
train_96480 | The results were evaluated through three measures: accuracy of the classification (positive or negative), precision of positive paraphrase pairs, and recall of positive paraphrase pairs. | a lot of attention has been given to applications which uses speech synthesis, for example (Fukuhara et al., 2001). | neutral |
train_96481 | In Section 4, we describe the method of collecting corpora form the Web and report the experimental result. | it is doubtful whether the connotational difference between paraphrases is sufficiently described in such a lexical resource. | neutral |
train_96482 | An NP governed by an IP is likely to be a subject, while an NP governed by a VP is more likely to be an object. | although the path feature is sparse, its sparsity may not be a major problem in role recognition. | neutral |
train_96483 | The alignment template system (Och et al., 1999) is similar to the system described in this work. | the translation results for the Xerox and Canadian Hansards task are very promising. | neutral |
train_96484 | Also, slightly relaxing the monotonicity constraint in a way that still allows an efficient search is of high interest. | • PER (position-independent word error rate): A shortcoming of the WER is that it requires a perfect word order. | neutral |
train_96485 | For the Verbmobil task, we have multiple references available. | using a bigram language model and assuming Bayes decision rule, Equation (2), we obtain the following search criterion: = argmax For the preceding equation, we assumed the segmentation probability p(S|e I 1 ) to be constant. | neutral |
train_96486 | The second variant allows reordering according to the so-called IBM constraints (Berger et al., 1996). | even for the Canadian Hansards task the translation of sentences of length 30 takes only about 1.5 seconds. | neutral |
train_96487 | it has no children) or all of its children are in G (and it is connected to all of them Figure 6: Two frontier graph fragments and the rules induced from them. | furthermore, we define a derivation string as an ordered sequence of elements, each of which is either a source symbol or a target subtree. | neutral |
train_96488 | Figure 2 shows three derivations of the target tree from the source string "il ne va pas", which are all consistent with our definitions. | the span of a node n of the alignment graph is the subset of nodes from S that are reachable from n. Note that this definition is similar to, but not quite the same as, the definition of a span given by Fox (2002). | neutral |
train_96489 | In this paper, we focused on providing a well-founded mathematical theory and efficient, linear algorithms for learning syntactically motivated transformation rules from parallel corpora. | in the second derivation, "pas" is replaced by the English word "he," which makes no sense. | neutral |
train_96490 | We will refer to the set of nouns that co-occur with a caseframe as the lexical expectations of the caseframe. | 3 These caseframes can capture two types of contextual role information: (1) thematic roles corresponding to events (e.g, "<agent> kidnapped" or "kidnapped <patient>"), and (2) predicate-argument relations associated with both verbs and nouns (e.g., "kidnapped for <np>" or "vehicle with <np>"). | neutral |
train_96491 | Initially, we planned to compare the semantic classes of an anaphor and a candidate and infer that they might be coreferent if their semantic classes intersected. | the Recency KS computes the distance between the candidate and the anaphor relative to its scope. | neutral |
train_96492 | We combined evidence from four contextual role knowledge sources with evidence from seven general knowledge sources using a Dempster-Shafer probabilistic model. | their work did not consider other types of lexical expectations (e.g., PP arguments), semantic expectations, or context comparisons like our caseframe network. | neutral |
train_96493 | (In a pilot experiment with a variation that does not throw away information, the entropies are closer to the Gaussian.) | in this section, we help explain the excellent performance of Kneser-Ney smoothing, the best performing language model smoothing technique. | neutral |
train_96494 | Normally, conjugate gradient methods are only used on data that has a continuous first derivative, so the code was modified to prune weights that go exactly to zero. | as we will show, while GIS uses an update rule of the form our modified algorithm uses a rule of the form (3) Note that there are two different styles of model that one can use, especially in the common case that there are two outputs (values for y.) | neutral |
train_96495 | Note that the simple learning algorithm is an important contribution: the algorithm for a Gaussian prior is quite a bit more complicated, and previous related work with the Laplacian prior (two-sided exponential) has had a difficult time finding learning algorithms; because the Laplacian does not have a continuous first derivative, and because the exponential prior is bounded at 0, standard gradient descent type algorithms may exhibit poor behavior. | it turns out to be an extremely weak one -it is not uncommon for models, especially those that use all or most possible features, to assign near-zero probabilities (or, if λs may be infi-nite, even actual zero probabilities), and to exhibit other symptoms of severe overfitting. | neutral |
train_96496 | Compared to the normalized cosine performance of 0.2732, the improved performance of the cosine and normalized cosine measures when source-pair specific information is used (0.2532 and 0.2533, respectively; p .005 for both comparisons) indicates that simple threshold normalization by the running mean is not optimal. | the poorer performance of voting compared to the baseline may be due in part to dependencies among the different measures. | neutral |
train_96497 | To assess whether the observed differences were significant, we compared models at the .005 significance level using a paired two-sided t-test where the data was randomly partitioned into 10 mutually exclusive sets. | the system can "look ahead" N source files from the current source file being processed when deciding whether the current pair is linked. | neutral |
train_96498 | For example, sources in a language that is translated to English will consistently use the same terminology, resulting in greater similarity between linked documents with the same native language. | in the TDT link detection task, a link detection system is given a sequence of time-ordered sets of stories, where each set is from one news source. | neutral |
train_96499 | The second technique is based on discriminatively-trained SVM classifiers (Han et al., 2003). | feature function are represented as f (y t−1 , y t , x). | neutral |
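For reference, a minimal sketch of how rows with this schema (id, sentence1, sentence2, label) could be loaded and inspected with the Hugging Face `datasets` library. The repository path `user/nli-dataset` and the `train` split name are placeholders, not confirmed by this page; substitute the actual dataset id.

```python
# Minimal sketch, assuming the dataset is hosted on the Hugging Face Hub.
from collections import Counter

from datasets import load_dataset

# "user/nli-dataset" is a hypothetical path; replace with the real repo id.
ds = load_dataset("user/nli-dataset", split="train")

# Each row follows the schema shown above: id, sentence1, sentence2, label.
print(ds[0]["id"], ds[0]["label"])
print(ds[0]["sentence1"][:80])

# Label distribution across the split (the header reports 4 label classes,
# even though every sample visible on this page is labeled "neutral").
print(Counter(ds["label"]))
```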