id (string, 7–12 chars) | sentence1 (string, 6–1.27k chars) | sentence2 (string, 6–926 chars) | label (4 classes) |
---|---|---|---|
train_96300 | Voorhees 1993, 1994, Smeaton, Kelledy and O'Donnell 1995. | if jeune fille translates as young girl, then PictureQuest will understand that young is an adjective modifying girl. | neutral |
train_96301 | instead of pressing the corresponding button. | as mentioned earlier, some customer service centers now allow users to say either the option number or a keyword from a list of options/descriptions. | neutral |
train_96302 | It has to be robust enough to deal with poor recognition quality, inadequate information input by the user, and ambiguous data. | if the speech option for the system is turned on, the speech-based output is generated using Lernout and Hauspie's RealSpeak text-to-speech system. | neutral |
train_96303 | It should be noted that the set of domain-specific keywords and phrases was provided to the speech recognition system as a text document. | the only known work which automates part of a customer service center using natural language dialogue is the one by Chu-Carroll and Carpenter (1999). | neutral |
train_96304 | Horiguchi outlined how "spoken language pragmatic information" can be translated (Horiguchi, 1997). | our method achieved a recall of 65% and a precision of 86%. | neutral |
train_96305 | TDMT uses bottom-up left-to-right chart parsing with transfer rules as shown in Figure 1. | japanese speakers do not have to use polite expression in all utterances. | neutral |
train_96306 | It is interesting to note that there is no correspondence between the task being performed during the interaction and the amount of changes made to the dialogue. | for instance asking again if it is on Sunday as in $9 in figure 1. | neutral |
train_96307 | Empirical results obtained from the applications listed in Section 6 have shown that the approach used in the framework is flexible enough and easily portable to new domains, new languages, and new applications. | our approach also has some disadvantages compared with the systems mentioned above. | neutral |
train_96308 | The framework uses the same engine for all the transformations at all levels because all the syntactic and conceptual structures are represented as dependency tree structures. | in practice, the formalism that we are using for expressing the transformations is inadequate for long-range phenomena (inter-sentential or intra-sentential), including syntactic phenomena such as longdistance wh-movement and discourse phenomena such as anaphora and ellipsis. | neutral |
train_96309 | The first mapping rule corresponds to one of the lexico-structural transformations used to convert the interlingual ConcS of Figure 3 to the corresponding DSyntS. | for large or rapidly changing grammar (such as a transfer grammar in MT that may need to be adjusted when switching from one parser to another), the burden of the developer's task may be quite heavy. | neutral |
train_96310 | A match to a subtask is represented by adding the prefix for the subtask to the path of the constraint. | in this case, the action is already satisfied, and is not executed. | neutral |
train_96311 | Only "high-precision" rules are currently applied to selected types of anaphora. | the system does not currently handle ellipsis. | neutral |
train_96312 | This is unusual in the field of text processing which has generally dealt with well-punctuated text: some of the most commonly used texts in NLP are machine readable versions of highly edited documents such as newspaper articles or novels. | it is not clear that these techniques will be as successful for ASR text. | neutral |
train_96313 | Solving this problem will likely require comparing a variety of different settings. | most importantly, the referent for "this person" may have been established in question number 1, and the current question containing the presupposition "this person" is question number 52. | neutral |
train_96314 | In fact, pipelining is exactly a data-based constraint -the second module in a pipeline does not start until the first one produces its output. | in this section, we present our proposal for a general representation scheme capable of covering the above requirements. | neutral |
train_96315 | In both cases, the final probabilities are renormalized. | the next two columns indicate respectively the prefixes typed by the user and the completions proposed by the system in a word-completion task. | neutral |
train_96316 | These sentences have been selected randomly among sentences that have not been used for the training. | the translator remains in control of the translation process and the machine must continually adapt its suggestions in response to his or her input. | neutral |
train_96317 | The present application benefits from the high modularity of the usage of the components. | no adaptation to the application domain has been made. | neutral |
train_96318 | In cases where the predictions are compatible, e.g. | the average precision and recall data for the ten test sets are given in table 3, together with the base-line case of assuming that we categorize all unknown words as names (the most common category). | neutral |
train_96319 | The results for this tree can be found in the second line of Table 3. | since our current data does not include case information we do not include these features. | neutral |
train_96320 | The remaining thirty percent, or 2100 records, was reserved as the test corpus. | this feature will be less useful in a language such as Japanese or Chinese which use ideographic characters. | neutral |
train_96321 | For words (mostly proper nouns) that do not appear in WordNet, heuristics are used to determine semantic type. | as discussed in Section 6, the heuristics that comprise the semantic type checking filter do not scale to the test corpus and are the primary reason for the larger percentage of errors attributed to the linguistic filters for that corpus. | neutral |
train_96322 | We evaluate the system on the TREC8 QA development corpus as well as the TREC8 QA test corpus. | scores on the TREC8 test corpus for systems participating in the QA evaluation ranged between 3 and 146 correct. | neutral |
train_96323 | As described above, our final version of the QA system ranks summary extracts according to both their vector space similarity to the question as well as linguistic evidence that the answer lies within the extract. | our general approach is to define a new scoring measure that operates on the summary extracts and can be used to reorder the extracts based on linguistic knowledge. | neutral |
train_96324 | Empire identifies base NPs --non-recursive noun phrases --using a very simple algorithm that matches part-of-speech tag sequences based on a learned noun phrase grammar. | we will continue to use querydependent text summarization in the experiments below. | neutral |
train_96325 | Her goal then is to answer questions by making inferences about actions and actors in the story using world knowledge in the form of scripts, plans, and goals (Schank and Abelson, 1977). | it appears that it is reasonable to rely implicitly on the IR subsystems to enforce the other linguistic relationships specified in the query (e.g. | neutral |
train_96326 | More detailed discussion of this database can be found in (Rajan et al. | our decision to apply information extraction technology to binding relationships was guided not only by the biological importance of this phenomenon but also by the relatively straightforward syntactic cuing of binding predications in text. | neutral |
train_96327 | 1997, Voorhees andHarman 1998, andMUC-7); however, several recent efforts have been directed at biomolecular data (Blaschke et al. | phosphatidylinositol transfer protein maps to the corresponding Metathesaurus concept with semantic type 'Amino Acid, peptide, or protein, thus causing it to be more specific than a single lipidbinding site. | neutral |
train_96328 | The segmentation algorithm is effectively implemented by borrowing the CYK parsing method. | first, we extract compound nouns from a large size of corpus, manually divide them into simple nouns and construct the hand built segmentation dictionary with them. | neutral |
train_96329 | However, it is difficult to look at a large size of corpus and to assign analyses to it, which makes it difficult to estimate the frequency distribution of words. | the italic characters such as 'n' or 'x' in the analysis information (right column) of the table are used to make a distinction between noun and suffix. | neutral |
train_96330 | Such morphological ambiguity is caused by overgeneration of the morphological analyzer since the analyzer uses less detailed rules for robustness of the system. | just a large amount of lexical knowledge does not make good results if it contains incorrect data and also it is not appropriate to use frequencies obtained by automatically tagging large corpus. | neutral |
train_96331 | Token-sample performance is used to assess the per-token error rate that one would expect in analyzing large amounts of running text. | morph-precheck for special forms 2. | neutral |
train_96332 | They do so for several other corpora as well. | we expect even higher results when testing on every 10th sentence instead of a contiguous set of 10%. | neutral |
train_96333 | We assume that a morphological analysis consists of three processes: tokenization, dictionary lookup, and disambiguation. | the process can be implemented by a finite state machine or a simple pattern matcher. | neutral |
train_96334 | Disambiguation i s already language independent, since it does not process strings directly and therefore will not be taken up. | suffer from segmentation ambiguity. | neutral |
train_96335 | The phrasal recognizer currently only considers processing of simple, non-recursive structures (see fig. | 4, where SUBCONJ-CL and REL-CL are tags for subclauses) and verb fragments. | neutral |
train_96336 | company names) may appear in the text either with or without a designator, we use a dynamic lexicon to store recognized named entities without their designators (e.g., "Braun AG" vs. "Braun") in order to identify subsequent occurrences correctly. | many thanks to Thierry Declerck and milena Valkova for their support during the evaluation of the system. | neutral |
train_96337 | Hence, the complexity of the recognized structure of the sentence is reduced successively. | an exhaustive description of the covered phenomena can be found in (Braun, 1999). | neutral |
train_96338 | Each token in the input document is assigned a unique "LFEATURE". | applying a deterministic FST depends linearly only on the input size of the text. | neutral |
train_96339 | The output document from constrained HMM contains MUC-standard NE tags such as person, location and organization. | the subsequent modules are focused on location, person and organization names. | neutral |
train_96340 | In both examples the correct alternative is indicated in parentheses. | on the same test corpus, however, Word only reached 15.9% precision. | neutral |
train_96341 | Of course, this was an uncontrolled experiment, and there is some potential that information learned from searching with the traditional tools (which were apparently used first) might have provided some benefit when using the conceptual indexing technology. | this led to a reformulated request, move to end of file, which successfully retrieved the passage go to end of buffer. | neutral |
train_96342 | If there are no selected items, mailtool sends copies of those items you are currently... | two informal evaluations have been conducted that shed some light on the benefits. | neutral |
train_96343 | Higher interest rates are normally associated with weaker bond markets. | in WordNet there is a hierarchy fiscal policy -IS-A- economic policy -IS-A- policy. | neutral |
train_96344 | Other companies simply have unusual names. | because of the regularity we observed in company name variants and their use across a variety of news sources, we determined that the knowledge engineering task would be quite repetitive and thus could be automated for most companies. | neutral |
train_96345 | Name lists provide an extremely efficient way of recognising names, as the only processing required is to match the name pattern in the list against the text and no expensive advanced processing such as full text parsing is required. | the second combination strategy removes any names which appear in the lexicon and occur with a corpus frequency below the filtering probability. | neutral |
train_96346 | The reduction in accuracy when moving from the 250-byte limit to the 50-byte limit is expected, because much higher precision is required; the 50byte limit allows much less extraneous material to be included with the answer. | for these queries the head noun (e.g., company or city) is extracted, and a lexicon mapping nouns to categories is used to identify the category of the query. | neutral |
train_96347 | In this paper, we consider the problem of how to evaluate the automatic identification of index terms that have been derived without recourse to lexicons or to other kinds of domain-specific information. | at any given data point, a larger value indicates that a larger percentage of that series' data has that particular rating or better. | neutral |
train_96348 | We presented subjects with an article and a list of terms identified by one of the three methods. | as [Boguraev and Kennedy 1998] observe, the TT technique may not characterize the full content of documents. | neutral |
train_96349 | The small difference in average ratings for the HS list and the KW list can be explained, at least in part, by two factors: 1) Differences among professionals and students in inter-subject agreement and reliability; 2) A discrepancy in the rating of single word terms across term types. | we made the following decisions: • For TTs, we included all identified terms; • For HSs, we included all terms whose head occurred more than once in the document; . | neutral |
train_96350 | Effect of training set size We have measured NE performance in the context of speech as a function of training set size and found that the performance increases logarithmically with the amount of training data for 15% WER test data as well as for 0% WER input. | within each of the regions, we use a statistical bigram language model, and emit exactly one word upon entering each state. | neutral |
train_96351 | While one may argue that grammar and testsuite should be developed in parallel, such that the coding of a new grammar disjunct is accompanied by the addition of suitable test cases, and vice versa, this is seldom the case. | the work reported here is situated in a large cooperative project aiming at the development of largecoverage grammars for three languages. | neutral |
train_96352 | In addition, it provided a 3.3-fold increase in creating control measures paired t-test t (3) = 8.298, p < 0.002, one-tailed (see Table I). | the results cannot simply be attributed to the misuse of a hierarchical tool. | neutral |
train_96353 | Rankboost, like other machine learning programs of the boosting family, can handle a very large number of features. | in this paper, we present SPoT, a sentence planner, and a new methodology for automatically training SPoT on the basis of feedback provided by human judges. | neutral |
train_96354 | We show that the trained SPR learns to select a sentence plan whose rating on average is only 5% worse than the top human-ranked sentence plan. | instead of carefully choosing a small number of features by hand which may be useful, we generated a very large number of features and let Rank-Boost choose the relevant ones. | neutral |
train_96355 | As can be seen, except for "Mr. President" and "President Reagan", all of the examples are either not coreferent or are not people at all. | as the models corresponding to equations 2 and 8 do not include any label-label probabilities, this problem does not appear in these models. | neutral |
train_96356 | For R = f , as we noted earlier, we want to claim that the new name is a member of the same family as that of the earlier name. | the results of our experiments are as follows: Label% Name% Name 92.6 85.1 Coreference 97.0 94.5 As can be seen, information about possible coreference was a decided help in this task, leading to an error reduction of 59% for the number of labels correct and 63% for names correct. | neutral |
train_96357 | ME requires subjects to assign numbers to a series of linguistic stimuli in a proportional fashion. | c. The unarmed plane flew very fast and very high. | neutral |
train_96358 | Since there are enough number of such words (for our purpose), our automatic method could not differentiate them from true systematic polysemy. | the problem is not only that manual inspection of a large, complex lexicon is very timeconsuming, it is also prone to inconsistencies. | neutral |
train_96359 | 9 Actually, cousin is one of the three relations which indicate the grouping of related senses of a word. | it is useful in knowledge-intensive NLP tasks such as discourse analysis, IE and MT. | neutral |
train_96360 | Most of the improvement comes from detecting entries that have matching glosses. | over 60% of cognates have at least one gloss in common. | neutral |
train_96361 | Our best guess is that because the edit detector has high precision, and lower recall, many more words are left in the sentence to be parsed. | they categorize the transitions between words into more categories than we do. | neutral |
train_96362 | Adapted models were trained using MLLR technique (Legetter and Woodland, 1996) available as part of the Entropic package. | the results still indicate that generating semi-literal transcriptions may help eliminate the undesirable noise and, at the same time, get the benefits of broader coverage that semi-literal transcripts can afford over NON-LITERAL transcriptions. | neutral |
train_96363 | This is used to narrow down the search space from all the examples to a much smaller set. | some entries are very general, and are used to translate a large number of expressions. | neutral |
train_96364 | Given some finite lexicon, the probability of each possible outcome for W n can be estimated using that outcome's relative frequency in a sample. | pursuit of methodological principles 1, 2 and 3 has identified a model capable of describing some of the same phenomena that motivate psycholinguistic interest in other theoretical frameworks. | neutral |
train_96365 | Theories of initial parsing preferences (Fodor and Ferreira, 1998) suggest that the human parser is fundamentally serial: a function from a tree and new word to a new tree. | these complications do not add any further machinery to the parsing algorithm per se beyond the grammar rules and the dot-moving conventions: in particular, there are no heuristic parsing principles or intermediate structures that are later destroyed. | neutral |
train_96366 | T 0 is the set of all trees 0 that can possibly attach at Node in tree . | we have consistently seen increase in performance when using the Co-Training method over the baseline across several trials. | neutral |
train_96367 | The words conveyed in Figure 4 are all words from the corpus that have potential relationships between variants of the word "abuse." | mED has been applied to the morphology induction problem by other researchers (such as Yarowsky and Wicentowski, 2000). | neutral |
train_96368 | IOE1 An E tag is used to mark the last token of a chunk immediately preceding another chunk. | there are a number of other methods to extend SVMs to multiclass classifiers. | neutral |
train_96369 | Even if we have no room for applying the voting schemes because of some real-world constraints (limited computation and memory capacity), the use of VC bound may allow to obtain the best accuracy. | in the field of natural language processing, SVMs are applied to text categorization and syntactic dependency structure analysis, and are reported to have achieved higher accuracy than previous approaches. | neutral |
train_96370 | Conventional machine learning techniques, such as Hidden Markov Model (HMM) and Maximum Entropy Model (ME), normally require a careful feature selection in order to achieve high accuracy. | we employ the simple pairwise classifiers because of the following reasons: (1) In general, SVMs require training cost (where C is the size of training data). | neutral |
train_96371 | The main idea behind reinforcement learning is to explore the space of possible dialogues and select the strategy which optimizes the expected rewards (Mitchell, 1997, ch. | and availability, and the system retrieves potential activities. | neutral |
train_96372 | Notably, work on morphology (Mooney and Califf, 1995) and parsing (Thompson et al., 1997) has been carried out. | the efficiency of the rules obviously depends on the way the optimal strategy search space has been modeled and other conditions influencing learning. | neutral |
train_96373 | The training then proceded by decomposing all parse trees into sequences of SHIFT, PROJECT and ATTACH transitions. | the CPPL is an underbound of the PPL in that it would be the PPL from an ideal parser. | neutral |
train_96374 | $ # " is segmented as " " ("Koizumi Jun'ichiro" -family and first names) as a person name and " ) " ("September") as a date will be extracted by combining word units. | while we must have a fixed feature set among all NE types in Pairwise method, it is possible to select different feature sets and models when applying One-vs-Rest method. | neutral |
train_96375 | To distinguish these two situations, we analyze the split level of backbone nodes that begin regions with multiple paths. | winners strongly outpaced losers after Greenspan cut interest rates again. | neutral |
train_96376 | They assume that paths in dependency trees that take similar arguments (leaves) are close in meaning. | we are not aware of another system generating sentence-level paraphrases. | neutral |
train_96377 | Using the average precision metric, that version of PI-QUANT was among the top 5 best performing systems out of 67 runs submitted by 34 groups. | we perform passage-level combination to make a wider variety of passages available to the answer selection component, as shown in Figure 2. | neutral |
train_96378 | 9 and S start at the bottom left corner of the parse tree. | the phoneme sequence for the name is taken as the output from the highest scoring path corresponding with the spoken part of the waveform. | neutral |
train_96379 | Results showed 46.3% accuracy on training data but only 7.9% accuracy on OOV recognition test data. | we anticipate that the multi-stage approach can be improved by folding all three stages into a single recognition server, eventually allowing real-time operation. | neutral |
train_96380 | Questions belonging to a series were "about" the same subject, and this aboutness could be seen in the use of semantically related words. | and "How much explosive was used? | neutral |
train_96381 | Here we seek to partially address this problem by looking at some particular aspect of clarification dialogues in the context of open domain question answering. | question Answering Systems aim to determine an answer to a question by searching for a response in a collection of documents (see Voorhees 2002 for an overview of current systems). | neutral |
train_96382 | But we will now see that even finding the minimum number of states is NP-complete, and inapproximable. | the general formulation of weighted automata (Berstel and Reutenauer, 1988) permits any weight set K, if appropriate operations ⊕ and ⊗ are provided for combining weights from the different arcs of the automaton. | neutral |
train_96383 | Formally we know that states 1 and 3 are equivalent because F 1 = F 3 , where F q denotes the suffix function of state q-the function defined by the automaton if the start state is taken to be q rather than 0. | so (λ(Fq)\Fq) is a residue of any residue X of Fq, as claimed. | neutral |
train_96384 | The total time to compute our λ(Fq) values is therefore O(|states| + t • |arcs|), where t is the maximum length of any arc's weight. | an arc from qi to ri with weight k ∈ ∆* was reweighted as λ(qi)\(k ⊗ λ(ri)). | neutral |
train_96385 | Sentence pairs where one of the sentences exceeded this limit were ignored in training. | one is bootstrap resampling (Efron and Tibshirani, 1993) 8 to determine confidence intervals, another one splitting the test corpus into a certain number of subcorpora (e.g. | neutral |
train_96386 | For example, "if Part is an entity ¤ £ and the Whole is a whole 2 then it is not a part-whole relation". | for example, in the pattern "¦ § " the noun phrase that contains the part (X) and the prepositional phrase that contains the whole (Y) form a noun phrase (NP). | neutral |
train_96387 | We had 29 subjects prompted to say certain inputs in 8 dialogues. | in this paper we present an algorithm for measuring the semantic coherence of sets of concepts against such an ontology. | neutral |
train_96388 | To our knowledge, there exists no similar software performing semantic coherence scoring to be used for com-parison in this evaluation. | semantic interpretation (Allen, 1987). | neutral |
train_96389 | Figure 1 shows the word-cluster distribution. | and finally, we intend to incorporate CatVar into new applications such as parallel corpus word alignment. | neutral |
train_96390 | The total number of links in this cluster is six, two of which are Porter-determinable and only one of which is naturally-determinable. | if we exclude the not-really missing words, the adjusted recall value becomes 87.16%. | neutral |
train_96391 | We also extended the parsing strategy slightly to handle Chomsky adjunction structures ( ] ). | the probabilities of parser actions are conditioned on this induced history representation, rather than being conditioned on a set of hand-crafted history features chosen a priori. | neutral |
train_96392 | If the explicit features include the previous decision ¡ and the other history representations include the previous history representation , then (by induction) any information about the derivation history could conceivably be included in . | a neural network is trained simultaneously to estimate the probabilities of parser actions and to induce a finite repre-sentation of the unbounded parse history. | neutral |
train_96393 | The actual best parse is figure 2(d), with a score of −18.1. | we have described two general ways of constructing admissible A* estimates for PCFG parsing and given several specific estimates. | neutral |
train_96394 | We use Bayes rule to reformulate the translation probability for translating a foreign sentence into English. During decoding, the foreign input sentence is segmented into a sequence of phrases ¡ ¢ ¤ £ ¥ . | straight-forward syntax-based mappings do not lead to better translations than unmotivated phrase mappings. | neutral |
train_96395 | For the source phrase sequence enforces the requirement that words in the translation agree with those in the phrase sequence. | nous : inflation_galopante/4.8e−7, ε : x for the sentence "nous avons une inflation galopante". | neutral |
train_96396 | All questions are attempted by the prover, but if the proof fails the QA system resorts to other answer extraction methods that were part of the system before the prover. | it remains to create axioms for the ALF of the candidate answer and to start the proof. | neutral |
train_96397 | So far, only constituents with same syntactic type are treated as paraphrases. | we compare two different ways of estimating the predictive power. | neutral |
train_96398 | Various techniques such as stop-word removal or stemming require language specific knowledge to design adequately. | although this is a very simple approach, it has not yet been systematically investigated in the literature. | neutral |
train_96399 | Moreover, whether one can use a purely word-level approach is itself a language dependent issue. | we consider language identification, Greek authorship attribution, Greek genre classification, English topic detection, Chinese topic detection and Japanese topic detection. | neutral |
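The rows above are plain pipe-delimited text, so a few lines of Python are enough to turn them back into structured examples. The snippet below is a minimal sketch, not an official loader for this corpus: the two embedded sample rows are (abbreviated) copies of rows from the table, and the field names mirror the header columns.

```python
# Minimal sketch: parse pipe-delimited NLI rows (as shown above) into dicts.
# The sample rows are shortened copies of table rows; the parser is a generic
# illustration and assumes no cell itself contains a '|' character.

FIELDS = ["id", "sentence1", "sentence2", "label"]

SAMPLE_ROWS = (
    "train_96300 | Voorhees 1993, 1994, Smeaton, Kelledy and O'Donnell 1995. "
    "| if jeune fille translates as young girl, then PictureQuest will "
    "understand that young is an adjective modifying girl. | neutral |\n"
    "train_96301 | instead of pressing the corresponding button. | as "
    "mentioned earlier, some customer service centers now allow users to say "
    "either the option number or a keyword. | neutral |"
)

def parse_rows(text):
    """Turn 'id | sentence1 | sentence2 | label |' lines into dicts."""
    examples = []
    for line in text.splitlines():
        # Split on the delimiter; the trailing '|' yields one empty cell,
        # which the strip-and-filter step drops.
        cells = [c.strip() for c in line.split("|") if c.strip()]
        examples.append(dict(zip(FIELDS, cells)))
    return examples

examples = parse_rows(SAMPLE_ROWS)
print(examples[0]["id"], examples[0]["label"])  # → train_96300 neutral
```

The same loop applies to the full dump once the header and any trailing page residue are stripped; for cells that may legitimately contain `|`, a real loader would need a proper CSV/TSV export instead of this naive split.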