id (stringlengths 7-12) | sentence1 (stringlengths 6-1.27k) | sentence2 (stringlengths 6-926) | label (stringclasses 4 values) |
---|---|---|---|
train_6200 | For example, a cottage is a type of building and a brother is a type of person, and so the co-occurrence of any type of building and any type of person might increase the probability that the PP in example (1) attaches to the verb. | it is unclear whether the classes over which probability distributions are induced need to be semantic or whether they could be purely distributional. | contrasting |
train_6201 | In order to answer these questions, we take a pragmatic, application-oriented approach to evaluation that is based on the assumption that we want to know which words are distributionally similar because particular applications can make use of this information. | high performance in one application area is not necessarily correlated with high performance in another application area (Weeds and Weir 2003a). | contrasting |
train_6202 | However, in applications, it is normally necessary to compute a single number in order to determine neighborhood or cluster membership. | the classic way to combine precision and recall in IR is to compute the F-score; that is, the harmonic mean of precision and recall: we do not wish to assume that a good substitute requires both high precision and high recall of the target distribution. | contrasting |
train_6203 | This restriction was for computational efficiency and to avoid computing similarities based on the potentially unreliable descriptions of very lowfrequency words. | since our evaluation is comparative, we do not expect our results to be affected by this or any of the other restrictions. | contrasting |
train_6204 | It is computed as twice the ratio between the size of the intersection of the two feature sets and the sum of the sizes of the individual feature sets: 2|F(w 1 ) ∩ F(w 2 )| / (|F(w 1 )| + |F(w 2 )|), where F(w) = {c : P(c|w) > 0}. According to this measure, the similarity between words with no shared features is zero and the similarity between words with identical feature sets is 1. | as shown below, this formula is equivalent to a special case in the CR framework: the harmonic mean of precision and recall (or F-score) using the additive type-based CRM. | contrasting |
train_6205 | The first of these approaches is taken by Curran and Moens (2002), who evaluate a number of different distributional similarity measures and weight functions against a gold standard thesaurus compiled from Roget's, the Macquarie thesaurus, and the Moby thesaurus. | we argue that this approach can only be considered when distributional similarity is required as an approximation to semantic similarity and that, in any case, it is not ideal since it is not clear that there is a single "right answer" as to which words are most distributionally similar. | contrasting |
train_6206 | An underlying assumption of this approach is that WordNet is a gold standard for semantic similarity, which, as is discussed by Weeds (2003), is unrealistic. | it seems reasonable to suppose that a distributional similarity measure that more closely predicts a semantic measure based on WordNet is more likely to be a good predictor of semantic similarity. | contrasting |
train_6207 | Third, all of the measures perform significantly better for high-frequency nouns than for low-frequency nouns. | some of the measures (sim lin , sim jacc and sim dice ) perform considerably worse for low-frequency nouns. | contrasting |
train_6208 | more neighbors increases performance, since more neighbors allow decisions to be made in a greater number of cases. | when k increases beyond an optimal value, a greater number of these decisions will be in the wrong direction, since these words are not very similar to the target word, leading to a decrease in performance. | contrasting |
train_6209 | The low values of γ indicate that a combination of precision and recall that is closer to a weighted arithmetic mean is generally better than one that is closer to an unweighted harmonic mean. | this does not hold for the t-test based CRMs for low-frequency nouns. | contrasting |
train_6210 | If we don't apply the filter before the classifier, the recall results increase by about 20% (with no loss in precision). | the filter plays a very important role in keeping the extraction pipeline robust and efficient (as shown in Figure 7, the filter discards 99% of the candidate pairs), so this loss of recall is a price worth paying. | contrasting |
train_6211 | Two sentences may share many content words and yet express different meanings (see Figure 14, example 1). | our task of getting useful MT training data does not require a perfect solution; as we have seen, even such noisy training pairs can help improve a translation system's performance. | contrasting |
train_6212 | From the standpoint of the relative jump models, jumping over the four words tripled it 's sales and jumping over the four words of Apple Macintosh systems are exactly the same. | intuitively, we would be much more willing to jump over the latter than the former. | contrasting |
train_6213 | Under ML estimation, we will simply insert an entry in the t-table for the entire summary for some uncommon or unique document word and are done. | a priori we do not believe that such a parameter is likely. | contrasting |
train_6214 | Many current segmenters simply ignore NWs, assuming that they are of little significance in most applications. | we argue that the identification of those words is critical because a single unidentified word can cause segmentation errors in the surrounding words. | contrasting |
train_6215 | As described earlier, we argue that Chinese words (or segmentation units) cannot be defined independently of the applications, and hence a more flexible system (i.e., an adaptive segmenter such as MSRSeg) should be adopted. | we are faced with the challenge of performing an objective and rigorous evaluation of such a system. | contrasting |
train_6216 | The real evaluation will require some application data sets (i.e., segmented texts used by different applications). | such application data are not available yet, and no other system has undergone such evaluation, so there is no way to compare our system against others in this fashion. | contrasting |
train_6217 | We then evaluate on the data set the completeness of the generic segmenter. | we will show that we can effectively adapt the generic segmenter to the four different bakeoff data sets, each of which simulates an application subset. | contrasting |
train_6218 | However, according to the above definition, the relative frequency of CASs can be much higher because most single characters in Chinese can be words by themselves, and as a result, almost all two-character words can be CASs. | this is not desirable. | contrasting |
train_6219 | As described in Section 4, with a context model, NWI can be performed simultaneously with other word segmentation tasks (e.g., word breaking, NER, and morphological analysis) in a unified manner. | it is difficult to develop a training corpus where new words are annotated because "we usually do not know what we don't know." | contrasting |
train_6220 | As mentioned earlier, to achieve a fair comparison, we compare the previously mentioned four systems only in terms of NER precision and recall and the number of OAS errors. | we find that due to the different annotation specifications used by these systems, it is still very difficult to compare their results automatically. | contrasting |
train_6221 | (In fact, the coefficient for sim R , which depends on only one of the three quantities, log p(lso(c 1 , c 2 )), improves only in the third digit.) | with the present paucity of evidence, this connection remains hypothetical. | contrasting |
train_6222 | (2004) found that the Jiang-Conrath and Lesk measures gave the best accuracy in their task of finding predominant word senses, with the results of the two being "comparable" but Jiang-Conrath being far more efficient. | corley and Mihalcea (2005) found little difference between the measures when using them in an algorithm for computing text similarity. | contrasting |
train_6223 | Last, Weeds experimented with distributional measures in real-word spelling correction, much as we have defined it in Hirst and Budanitsky (2005) and in Section 5.1 above, but replacing the semantic relatedness measures with distributional similarity measures. | she varied the experimental procedure in a number of ways, with the consequence that her results are not directly comparable to ours: her test data was the British National Corpus; scope was measured in words, not paragraphs; and relatedness thresholds were replaced by considering the k words most similar to the target word (and k was a parameter). | contrasting |
train_6224 | It's very unclear how ad hoc semantic relationships could be quantified in any meaningful way, let alone compared with prior quantifications of the classical and non-classical relationships. | ad hoc relationships accounted for only a small fraction of those reported by Morris's subjects (Morris 2006). | contrasting |
train_6225 | They attempt to describe Hebrew verb inflection as a concatenation of prefix+base+suffix, implementable by the Two-Level model. | they conclude that "The Two-Level rules are not the natural way to describe . | contrasting |
train_6226 | Since FSRAs are inherently nondeterministic (see the discussion of linearization below), their minimization is related to the problem of non-deterministic finite-state automata (NFA) minimization, which is known to be NP-hard. | while FSRA arc minimization is NP-hard, FSRA state minimization is different. | contrasting |
train_6227 | In these cases the simple detection of the patterns leads to the discovery of part-whole relations. | there are many ambiguous expressions that are explicit but convey part-whole relations only in some contexts. | contrasting |
train_6228 | Berland and Charniak (1999) also used Hearst's algorithm to find part-whole patterns. | they focused only on the first five patterns that occur frequently in their corpus. | contrasting |
train_6229 | For example, both the relationships apartment#1, woman#1, No and hand#1, woman#1, Yes are mapped into the more general type entity#1, entity#1, Yes/No . | the first example is negative (a POS-SESSION relation), while the second one is a positive example. | contrasting |
train_6230 | Moreover, their system was tested only on a working list of predefined highly probable wholes for their corpus based on the genitive syntactic patterns. | the ISS system can disambiguate any pair of concepts, provided they are in WordNet or can be classified by NERD. | contrasting |
train_6231 | By generalizing the method to all the parts and wholes from our testing corpus, the accuracy of the system will fall. | to be able to test the system on their six whole concepts we would need thousands of positive and negative examples for each such word. | contrasting |
train_6232 | Coleman targets the book at students in a variety of disciplines, including arts, humanities, linguistics, psychology, and speech science, as well as early science and engineering students who want a glimpse into natural language and speech processing. | since it assumes prior knowledge of basic linguistics, the text is likely to be less accessible to traditional science and engineering students. | contrasting |
train_6233 | This chapter contains no exercises for the student. | it does provide a list of materials to assist the student in learning more about C programming, digital signal processing, the Klatt synthesizer, speech recognition, Prolog, computational linguistics, and probabilistic grammars. | contrasting |
train_6234 | If only the schoolmen had measured instead of classifying, how much they might have learnt! | (page 41) Grassmann was recognized for his foundational work on analytical geometry; for example, Hermann Weyl (1949): "Today probably the best approach to analytic geometry is by means of the vector concept, following the procedure of Grassmann's Ausdehnungslehre." | contrasting |
train_6235 | On the one hand, verb classes reduce redundancy in verb descriptions since they encode the common properties of verbs. | verb classes can predict and refine properties of a verb that received insufficient empirical evidence, with reference to verbs in the same class: Under this criterion, a verb classification is especially useful for the pervasive problem of data sparseness in NLP, where little or no knowledge is provided for rare events. | contrasting |
train_6236 | We believe that our approach can be applied to a variety of situations in which vagueness affects referring expressions including, for example, color terms (Section 9.3); nouns that allow different degrees of strictness (Section 9.5); degrees of salience (Section 9.4); and imprecise pointing (Section 9.5). | we have also met some considerable obstacles on our way: Expressive choice (Sections 4 and 7). | contrasting |
train_6237 | The low frequency or absence of a bigram in the BNC may be due to chance. | the World Wide Web is big enough that a negative result is more reliable. | contrasting |
train_6238 | The rules achieved a coverage of 75% on the test set with an accuracy of 55%. | a baseline algorithm of taking the first word in each string as the peripheral concept covers 100% of the strings, but with only 16% accuracy. | contrasting |
train_6239 | The formulas for Sim att and Sim sty involve relatively straightforward matching. | sim den requires the matching of complex interlingual structures. | contrasting |
train_6240 | In our work, the form that the peripheral nuances can take is not restricted, because the list of peripheral nuances is open-ended. | it may be possible to keep the form unrestricted but add restrictions for the most important types of peripheral nuances. | contrasting |
train_6241 | If, in spite of Hone and Graham's remarks, one wants to evaluate user satisfaction with the survey given in Table 1, the question that arises is whether the target to be predicted should really be the sum of all the user-satisfaction scores. | our experiments (Section 2.3) showed a remarkable, but expected, difference in the significance of the predictors when taking different satisfaction-measure sums or even individual scores as the target to be predicted. | contrasting |
train_6242 | To be able to see the difference between these two equations, note that Comp was significant for US (p < 0.02), but removed by backward elimination. | the data from the second WOZ experiment gave the following performance equations: with 26% and 46% of the variance explained, respectively. | contrasting |
train_6243 | All the predictors from the first performance equation (i.e., NND, NRD, Comp) were insignificant (p > 0.1) for DM in the second experiment. | the only predictor from the second performance equation that was significant for DM (p < 0.004) in the first experiment, but removed by backward elimination, was GDR. | contrasting |
train_6244 | On the one hand, this could indicate that the selected individual user-satisfaction measures really measure the performance of the dialogue manager and consequently illustrate the obvious difference between both dialogue-management manners. | one could argue that this simply means that the individual user-satisfaction measures are not appropriate measures of attitude because people are likely to vary in the way they interpret the item wording (Hone and Graham 2000). | contrasting |
train_6245 | The authors remark that Keller storage is "a very simple modification of our earlier code for Cooper storage." | these storage mechanisms are found wanting. | contrasting |
train_6246 | But this implies that each natural language item accepts only arguments of some one fixed type. | this is not true for natural language, where conjunctions, verbs, and pretty much any functional term that accepts arguments at all can accept arguments of different types. | contrasting |
train_6247 | In a first attempt, we tried to obtain a general German HTML corpus using the meaningless query der die das, i.e., the three German definite articles. | queries of this and a similar form did not lead to satisfactory results: As a consequence of Google's ranking mechanism, which prefers "authorities" (Brin and Page 1998), mainly portals of big organizations, companies, and others were retrieved. | contrasting |
train_6248 | In principle, the same tendency was observed in the documents of the parallel German corpora. | special effects polluted the picture. | contrasting |
train_6249 | In particular, all tuples in the training data S Tr necessarily also occur in the statistics corpus C St , and therefore no vectors in the training data S Tr involve data unseen in the statistics corpus. | the testing data S Te can be expected to include tuples that did not also occur in the statistics corpus, and so the classifier might not generalize to these tuples using the features we calculate. | contrasting |
train_6250 | This choice apparently misclassifies predication adjuncts as arguments. | for some cases, such as obligatory predication adjuncts, an argument status might in fact be more appropriate than an adjunct status. | contrasting |
train_6251 | On the one hand, this result indicates that the notion of argument is not entirely derivative from the attachment site. | it shows that some features developed to distinguish arguments from adjuncts could improve the disambiguation of the attachment site. | contrasting |
train_6252 | 1998), measuring textual cohesion (Morris and Hirst 1991), and word sense disambiguation (Lesk 1986). | since measures of relational similarity are not as well developed as measures of attributional similarity, the potential applications of relational similarity are not as well known. | contrasting |
train_6253 | Intuitively, we might expect that lexicon-based algorithms would be better at capturing synonymy than corpusbased algorithms, since lexicons, such as WordNet, explicitly provide synonymy information that is only implicit in a corpus. | experiments do not support this intuition. | contrasting |
train_6254 | Early versions of SME only mapped identical relations, but later versions of SME allowed similar, nonidentical relations to match (Falkenhainer 1990). | the focus of research in analogy making has been on the mapping process as a whole, rather than measuring the similarity between any two particular relations; hence, the similarity measures used in SME at the level of individual connections are somewhat rudimentary. | contrasting |
train_6255 | animal:eat::inflation:reduce interesting relations, such as antonymy, that do not occur in noun-modifier pairs. | noun-modifier pairs are interesting due to their high frequency in English. | contrasting |
train_6256 | It would seem that there is a problem here because the training vectors would be relatively dense, since they would presumably be derived from a large corpus, but the new unlabeled vector for John Smith and Hardcom Corporation would be very sparse, since these entities might be mentioned only once in the given document. | this is not a new problem for the VSM; it is the standard situation when the VSM is used for information retrieval. | contrasting |
train_6257 | Like Hearst (1992) and Berland and Charniak (1999), they use manually generated rules to mine text for their desired relation. | they supplement their manual rules with automatically learned constraints, to increase the precision of the rules. | contrasting |
train_6258 | In Table 7, we see that pumping:volume has slipped through the filtering in step 2, although it is not a good alternate for quart:volume. | table 10 shows that all four analogies that involve pumping:volume are dropped here, in step 12. | contrasting |
train_6259 | The confidence intervals are calculated using the Binomial Exact Test (Agresti 1990). Without SVD (compare column 1 to 2 in Table 17), performance drops, but the drop is not statistically significant with 95% confidence, according to the Fisher Exact Test (Agresti 1990). | we hypothesize that the drop in performance would be significant with a larger set of word pairs. | contrasting |
train_6260 | In step 6, we no longer have two columns for each pattern P, one for "word 1 P word 2 " and another for "word 2 P word 1 ." | to be fair, we kept the total number of columns at 8,000. | contrasting |
train_6261 | In step 5, for each pair A:B, we add B:A, yielding 4,800 pairs. | some pairs are dropped because they correspond to zero vectors and a few words do not appear in Lin's thesaurus. | contrasting |
train_6262 | 2005;Karsenty and Botherel 2005;Sturm and Boves 2005), and on the use of error-recovery strategies that are based on analyses of human-human dialogues (Skantze 2005), including the use of facial expressions (Barkhuysen, Krahmer, and Swerts 2005). | as ASR accuracy improves, dialogue systems will be called upon to handle ever more complex tasks and ever less restricted vocabularies. | contrasting |
train_6263 | This text offers an introduction to the field of linguistics for students who have already some background in computer science but lack special training in linguistics. | those using this textbook in the classroom should consider providing supplementary materials that discuss the fundamental algorithms and techniques for speech and language processing as well as certain linguistic areas, such as phonetics and pragmatics. | contrasting |
train_6264 | Dependency-based approaches, which understand the verb as the center of the sentence structure and describe this structure on the basis of binary relations between heads and their modifiers, have been for a long time a matter of Continental syntactic theories rather than of the mainstream syntactic approaches on the other side of the Atlantic. | the notion of head can be found also in Bloomfield (1933) when referring to the names of the main constituents of the sentence, that is, NP (noun phrase, with N as its head) and VP (verb phrase, with V as its head). | contrasting |
train_6265 | The Praguian concepts met a favorable response within continental linguistics (one should mention in this connection especially the works by British linguists M. A. K. Halliday and H. W. Kirkwood, several German linguists such as J. Esser, R. Bartsch, and J. Jacobs, the French linguist J.-M. Zemb, and others). | only the syntactic or word order consequences (or, as the case may be, conditions) of different sentence articulations into topic and focus were mostly taken into account, and its relevance for and effects on the coherence of discourse. | contrasting |
train_6266 | One could argue that it is the presence of structures with quantification rather than the topic-focus articulation of the quoted examples that is responsible for the indicated semantic distinction. | the Praguian writings from the sixties convincingly demonstrate that it is not difficult to find sentences without quantification that exhibit the same phenomenon (for reasons I will mention in a minute, in the examples, the capitals indicate the intonation center): Russian is spoken in SIBERIA versus In Siberia, RUSSIAN is spoken, or John works on his dissertation on WEEKENDS versus On weekends, John works on his DISSERTATION. | contrasting |
train_6267 | Thus, on the contrary, annotation may and should bring an additional value to the corpus. | there are some necessary conditions for an annotation to fulfil this aim: Its scenario should be carefully (i.e., systematically and consistently) designed, and it should be based on a sound linguistic theory. | contrasting |
train_6268 | This means that for a given data set, both measures will lead to rejection of the null hypothesis at the same level of significance. | the two measures have different underlying scales, and, numerically, they are not directly comparable to each other. | contrasting |
train_6269 | As the probabilities p, p 1 , and p 2 are determined by maximum-likelihood estimation (MLE) and the null hypothesis includes fewer parameters than the alternative hypothesis, the ratio log λ is asymptotically χ 2 -distributed and can thus be used as a test statistic. | we have decided to employ different hypotheses for abbreviation detection. | contrasting |
train_6270 | Due to the revision of the hypotheses H 0 and H A , the log-likelihood ratio for abbreviation detection is no longer asymptotically χ 2 -distributed. | this is not a disadvantage since the resulting log-likelihood value expresses only one of three crucial properties of abbreviations. | contrasting |
train_6271 | Those languages in which the period is not usually used to mark ordinal numbers and for which the test without special treatment of numbers achieved better results are italicized in the following tables. | even if the special treatment of numbers was not turned off for such languages, the resulting increase in the error rate was not very high, maximally 0.03%; see also Section 6.4.5. | contrasting |
train_6272 | Riley induces a decision tree for sentence boundary detection using the following features (Riley 1989, pages 351 and 352): The resulting decision tree is able to classify the periods in the Brown corpus with a very low error rate of only 0.2%, which is 0.82% better than that achieved by Punkt. | the impressive performance of Riley's approach also requires impressive amounts of training data: He calculated the probabilities that a certain word occurs before or after a sentence boundary from 25 million words of AP newswire text. | contrasting |
train_6273 | Then, for close language pairs, tuples are expected to successfully handle those short reordering patterns that are included in the tuple structure, as in the case of "traducciones perfectas : perfect translations" presented in Figure 1. | in the case of distant pairs of languages, for which a large number of long tuples are expected to occur, the approach will more easily fail to provide a good translation model due to tuple sparseness. | contrasting |
train_6274 | On the other hand, in the Spanish-to-English direction, it seems that a little improvement with respect to System D is achieved by using 4-grams. | it is not clear which system performs the best since System E obtains the best BLEU score while System F obtains the best mWER score. | contrasting |
train_6275 | Given the many remaining challenges, which are outlined at the end of the book, the CIRCSIMtutor system can serve as a test bed and allow any researcher just starting out in the area of tutorial dialogue to test new techniques and approaches while avoiding the high costs of building an effective tutorial dialogue system. | the authors never explicitly express the latter as one of their goals. | contrasting |
train_6276 | Although this part of the book extends the introductions made in part 2, it does not provide quite enough detail. | there is enough for a reader to understand what was tried and why, and there are plenty of pointers to relevant project papers that will be useful for anyone who wants to try using the CIRCSIM-tutor system as a test bed. | contrasting |
train_6277 | This carries the additional danger that annotations intended to serve for several, often not yet defined, applications may in fact not be useful for any. | the task is organized, annotators typically have no stake in the end result. | contrasting |
train_6278 | The concept of word posterior probabilities based on the fixed target position allows for easy calculation over word graphs and N-best lists. | this concept is rather restrictive. | contrasting |
train_6279 | The absolute values are not needed for the translation process, because the search is performed using the maximum approximation (see Equations (1) and (2)). | to this, the actual values of the weights make a difference for confidence estimation, because the summation over the sentence probabilities is performed. | contrasting |
train_6280 | The statistical models presented in Section 2.2 can be used to estimate the confidence of target words as first described in Ueffing and Ney (2005b). | to the approaches presented in Section 4, the direct phrase-based confidence measures do not use the context information at the sentence level, but only at the phrase level. | contrasting |
train_6281 | This section also contains details on which confidence measures were combined. | the focus of this work is on word posterior probabilities as stand-alone confidence measures. | contrasting |
train_6282 | Similarly to the EPPS data, the domain is basically unrestricted, because a wide range of different topics is covered. | the vocabulary size and the training corpus are much larger than in the EPPS collection, as the corpus statistics presented in Table 3 show. | contrasting |
train_6283 | The combination of several different word posterior probabilities into one confidence measure yields better confidence estimation performance than the best single feature. | the word posterior probabilities proposed here proved to be strong stand-alone features (see also experiments reported in Blatz et al. | contrasting |
train_6284 | The sample dialogues provided were certainly impressive. | no transcripts of real-world dialogues were provided and therefore it cannot be determined whether the methods and theories developed in UC were robust enough for practical use. | contrasting |
train_6285 | This fact allows the separation of the process of representing the concepts expressed in a document from the use of the relations between concepts for deduction or reasoning processes. | formalisms, theories, and algorithms either designed for domain document representation or reasoning may be made independent from the chosen domain ontology and can also be applied to different domains, thus enhancing system portability between domains. | contrasting |
train_6286 | On the one hand, current systems are overly influenced by the specific characteristics and requirements of the domains, from the different types of questions to be answered to the heterogeneity of the knowledge available for the domain. | the known methodological proposals (Minock 2005) are so general that they could be used to design any kind of information system. | contrasting |
train_6287 | Ideally, we would like to match structured representations derived from the question with those derived from MEDLINE citations (taking into consideration other EBMrelevant factors). | we do not have access to the computational resources necessary to apply knowledge extractors to the 15 million plus citations in the MEDLINE database and directly index their results. | contrasting |
train_6288 | Below, we enumerate the relevant indicator terms by clinical task. | there is a set of negative indicators common to all tasks; these were extracted from the set of genomics articles provided for the secondary task in the TREC 2004 genomics track evaluation (Hersh, Bhupatiraju, and Corley 2004); examples include genetics and cell physiology. | contrasting |
train_6289 | Both the EBM and combination rerankers significantly outperform the termbased reranker (at the 1% level, on all metrics, on both development and test set), with the exception of MRR on the development set. | for all metrics, on both the development set and test set, there is no significant difference between the EBM and combination reranker (which combines both term-based and EBM-based evidence). | contrasting |
train_6290 | Their study also illustrates the importance of semantic classes and relations. | extraction of outcome statements from secondary sources (meta-analyses, in this case) differs from extraction of outcomes from MEDLINE citations because secondary sources represent knowledge that has already been distilled by humans (which may limit its scope). | contrasting |
train_6291 | A number of studies (e.g., Hildebrandt, Katz, and Lin 2004) have pointed out shortcomings of the original nugget scoring model, although a number of these issues have been recently addressed Demner-Fushman 2005a, 2006b). | adaptation of the nugget evaluation methodology to a domain as specific as clinical medicine is an endeavor that has yet to be undertaken. | contrasting |
train_6292 | The answers to such queries can be produced by simple interrogation of the database, because they do not require inferences over the repository of patient records. | the query interface is also coupled with a data-mining module to provide answers to more complex queries, such as The query interface can also be used for accessing information about individual patients. | contrasting |
train_6293 | What is the cause of symptom X? | to these findings, our consultation with cancer clinicians revealed that questions posed in a clinical research setting tend to have a more complex nature and to be directed at groups of patients, searching for relationships rather than simple values: What is the average time of relapse in Acute Myeloid Leukaemia for patients with a complete response after two cycles of treatment? | contrasting |
train_6294 | The treatment profile (Taxol) and the outcome measure (survival rate) have a content that can be easily specified-a single choice from a menu would suffice. | the set of relevant patients requires a very elaborate description because there are so many qualifications. | contrasting |
train_6295 | Furthermore, significant differences were found between subjects' performance on the first query they composed compared to the second, third, and the fourth (each at p < .01 on the Tukey HSD test). | application of the same test showed no significant difference in subjects' performance on the second versus third, second versus fourth, or the third versus fourth composed query. | contrasting |
train_6296 | This means that once a consistent set is found, no descendants of that set can be maximal, and that subtree can be pruned from the search space. | consistent subsets that are maximal in their branch of the tree may turn out not to be globally maximal. | contrasting |
train_6297 | In their case, paths are mapped onto pairs representing a dependency relation r and the end word w (see the discussion in Section 2): Any plausible and computationally feasible function can be used as basis mapping. | in this article we restrict ourselves to models which use a word-based basis mapping. | contrasting |
train_6298 | The global co-occurrence frequency of a basis element b and a target t is computed by the function f : The global co-occurrence frequency f (b, t) could be used directly as the matrix value M[b] [t]. | as Lowe (2001) notes, raw counts are likely to give misleading results. | contrasting |
train_6299 | A smaller parameter space could have resulted from collapsing the context selection and path value functions into one parameter, for example, by defining context selection directly as a function from (anchored) paths to their path values, and thus assigning a value of zero to all paths π ∈ cont(t). | we refrained from doing this for two reasons, a methodological one and a technical one. | contrasting |
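Each row above follows the four-column schema given in the header (id, sentence1, sentence2, label), with one pipe-delimited record per line. As a minimal sketch of how such rows could be parsed — the file name train.txt, the load_rows helper, and the Example class are illustrative assumptions, not part of the dataset — :

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Example:
    id: str         # e.g., "train_6200"
    sentence1: str  # first sentence of the pair
    sentence2: str  # following sentence, with the discourse connective stripped
    label: str      # one of 4 classes; every row shown above is "contrasting"

def load_rows(path: str) -> List[Example]:
    """Parse pipe-delimited rows of the form: id | sentence1 | sentence2 | label |"""
    examples = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            # drop the trailing "|" and split on the column separator
            parts = [p.strip() for p in line.strip().rstrip("|").split("|")]
            # skip the header and divider lines, keep only data records
            if len(parts) == 4 and parts[0].startswith("train_"):
                examples.append(Example(*parts))
    return examples

# Hypothetical usage:
# rows = load_rows("train.txt")
# print(rows[0].id, rows[0].label)  # -> train_6200 contrasting
```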