Columns:
id: string, length 7-12
sentence1: string, length 6-1.27k
sentence2: string, length 6-926
label: string, 4 classes
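A minimal sketch of how rows with this schema might be loaded and filtered by label. The file name, JSON-lines layout, and helper function below are assumptions made for illustration only; the actual distribution format of this split is not specified here.

import json

# Hypothetical path: assumes one JSON object per line carrying the four
# columns listed in the schema above (id, sentence1, sentence2, label).
PAIRS_PATH = "train.jsonl"

def contrasting_pairs(path):
    """Yield (id, sentence1, sentence2) for rows labeled 'contrasting'."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            row = json.loads(line)
            if row["label"] == "contrasting":
                yield row["id"], row["sentence1"], row["sentence2"]

if __name__ == "__main__":
    for pair_id, s1, s2 in contrasting_pairs(PAIRS_PATH):
        print(f"{pair_id}\t{s1[:60]} ... <-> {s2[:60]} ...")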
train_7400
10,000 sentence pairs to those which included a source-noun from this.² Similar results for pseudo-disambiguation were obtained for a simpler approach which avoids another EM application for probabilistic class labeling.
where n̂ and ĉ were chosen such that f(ĉ, v, n̂) = max_{c,n} f_{LC}(v, n) + 1 · p_{LC}(c | v, n); the sensitivity to class-parameters was lost in this approach.
contrasting
train_7401
The most direct point of comparison is the method of Dagan and Itai (1994), which gives 91.4 precision (92.7 standardized) and 62.1 effectiveness (66.8 standardized) on 103 test examples for target word selection in the transfer of Hebrew to English.
compensating this high precision measure for the low effectiveness gives values comparable to our results.
contrasting
train_7402
The Senseval standard is clearly beaten by the earlier results of Yarowsky (1995) (96.5 precision) and Schütze (1992) (92 precision).
a comparison to these re[...] from his random baseline 28.5 by taking 100/28.5; reversely, Dagan and Itai's (1994) random baseline can be calculated as 100/2.27 = 44.05.
contrasting
train_7403
Firstly, these approaches were evaluated on words with two clearly distant senses which were determined by the experimenters.
our method was evaluated on randomly selected actual translations of a large bilingual corpus.
contrasting
train_7404
We found that both of these variants of our grammar produced reasonable recognition, though the Word+ grammar was very inaccurate.
a three-words-per-category grammar could not produce successful speech recognition.
contrasting
train_7405
From these experiments, we were unable to isolate any simple set of factors to explain which grammars would be problematic for speech recognition.
the numbers of transitions per graph in a PFSG did seem suggestive of a factor.
contrasting
train_7406
in, on, behind, ...) that are required by the verb.
adjuncts can use a wider variety of prepositions.
contrasting
train_7407
we can use verbs that appear in relative clauses.
there are two main drawbacks: Treebanks are expensive to build and so the techniques presented here have to work with less data.
contrasting
train_7408
One way to begin a negotiation subdialogue is to express doubt at a proposition.
expressions of doubt occur in a variety of forms, each of which conveys information about the nature of the doubt that is important for the subsequent resolution of the conflict.
contrasting
train_7409
In our research, we draw on their notion of identifying how features of the generation context correlate with how an utterance should be expressed.
our work differs from theirs in that we must deal with an agent's beliefs motivating his doubt and we consider a wider range of variations in realization.
contrasting
train_7410
To identify how expressions of doubt are realized in naturally occurring dialogue and how these realizations convey the requisite beliefs, we analyzed features of individual expressions of doubt extracted from natural corpora, and correlated the various forms of the utterances with the features of the underlying beliefs.
as explained in Section 3.3, the use of machine learning techniques was not appropriate due to the nature of our corpus.
contrasting
train_7411
The technique of backward analysis of Japanese sentences has been used in rule-based methods, for example (Fujita, 1988).
there are several difficulties with rulebased methods.
contrasting
train_7412
Many such methods used heuristics to make deterministic decisions (and backtracking if a search fails) rather than using a scoring scheme.
the combination of the backward analysis and the statistical method has very strong advantages, one of which is the beam search.
contrasting
train_7413
In principle, the wider the beam search width, the more analyses can be retained and the better the accuracy can be expected.
the result is somewhat different from the expectation.
contrasting
train_7414
Implementation of this change is easy.
the problem lies with the sparseness.
contrasting
train_7415
We have to rely on necessarily imperfect heuristics.
we can specialize the general French deconverter to produce specialized servers for different tasks and different (target) sublanguages.
contrasting
train_7416
The UNL language defines the interface structure to be used by applications (either a hypergraph or a colored graph).
it does not restrict the choice of the data to be encoded.
contrasting
train_7417
The idea to use UNL for directly creating documents gets here an indirect and perhaps paradoxical support, although it is clear that considerable progress and innovative interface design will be needed to make it practical.
the UNL language proves flexible enough to be used by very different projects.
contrasting
train_7418
they must nevertheless denote individuals familiar to conversants if they are successfully to refer.
there is another class of referring expressions in relation to which we believe the concept of uniqueness of meaning does have an essential role to play.
contrasting
train_7419
We also start our search for the referents of pronouns and other centred entities in the current discourse state, which is necessary if we are to resolve such referring expressions as "her" in "Mary took John with her."
referring expressions containing the property centred are prevented from being dereferenced to salient entities, thus ensuring that the constraint of disjoint reference is met.
contrasting
train_7420
First, we consider a case in which there is a discourse antecedent of sorts: In this way, we prove that the referring expression makes sense, i.e., denotes.
unlike in the previous cases, we do not dereference to a familiar referent.
contrasting
train_7421
It also has interesting theoretical implications, since it suggests a way in which pragmatic theories of reference resolution, like Familiarity Theory, and semantic theories, like Russell's, may be reconciled.
it is fair to say that the success of the approach is not yet proven.
contrasting
train_7422
Although "to make a map" and "exective" are not wrong translations, they are irrelevant i n the computer manual context.
the domain dictionary reduces confusion caused by the wrong word selection.
contrasting
train_7423
This algorithm reports about 90% accuracy of Thai open compound extraction.
the algorithm emphasizes open compound extraction and has to limit the range of n-grams to 4-20 grams for computational reasons.
contrasting
train_7424
Given the identity of thematic role mapped to subject and object positions, we expect to observe the same noun occurring at times as subject of the verb, and at other times as object of the verb.
for object-drop verbs, the thematic role of the subject of the intransitive is identical to that of the subject of the transitive, not the object of the transitive.
contrasting
train_7425
While unergatives are already accurately classified without trans, inspection of the change in class labels reveals that the addition of trans to the set improves performance on unaccusatives by helping to distinguish them from object-drops.
in this case, we also observe a loss in precision of unergatives, since some object-drops are now classified as unergatives.
contrasting
train_7426
On the one hand, the algorithm does not perform at expert level, as indicated by the fact that, for all experts, the lowest agreement score is with the program.
the accuracy of 69.5 achieved by the program is only 1.5 less than that of one of the human experts in comparison to the gold standard.
contrasting
train_7427
The experimental results show that our method is powerful, and suited to the classification of lexical items.
we have not yet addressed the problem of verbs that can have multiple classifications.
contrasting
train_7428
There is very little semantic dependency in the grammar rules, which is essential if the grammar is to be domain-independent.
the grammar rules are elaborately conditioned on morphological and syntactic features, enabling much finer-grained parsing analyses than just relying on a small number of basic parts-of-speech (POS).
contrasting
train_7429
[Table 2: Results of applying the proposed method to direct translations of the metonymies in (Kamei and Wakao, 1992)] The method proposed in this paper identifies implicit terms for the explicit term in a metonymy.
it is not concerned with the semantic relation between an explicit term and implicit term, because such semantic relations are not directly expressed in corpora, i.e.
contrasting
train_7430
Previous work on anaphora resolution has yielded a rich basis of theories and heuristics for finding antecedents.
most research to date has neglected an important potential cue that is only available in spoken data: prosody.
contrasting
train_7431
Byron and Stent (1998) adapted this approach, which had previously been applied to text, for spoken dialogs, but with limited success.
to personal pronouns, demonstratives do not rely on calculations of salience.
contrasting
train_7432
They tested the effect of accented sentence-initial demonstratives that co-specify with the preceding sentence on the resolution of ambiguous personal pronouns, and found that the pronoun antecedents switched when the demonstrative was accented (Fretheim et al., 1997).
to our knowledge, there are no studies that compare the co-specification preferences of accented vs. unaccented demonstratives.
contrasting
train_7433
The expected (average) value of the instance results will stay the same.
the chances of getting an unusual result can change.
contrasting
train_7434
Summarization of written documents has recently been a focus for much research in NLP (e.g., Mani and Maybury, 1997; AAAI, 1998; Mani et al., 1998; ACL, 2000), to name some of the major events in this field in the past few years.
very little attention has been given so far to the summarization of spoken language, even less of conversations vs. monological texts.
contrasting
train_7435
No linguistic analyses are taken into account in these approaches.
in further research the authors plan to integrate linguistic knowledge such as inflectional analysis of verbs, nouns and adjectives.
contrasting
train_7436
An MT system with high coverage and "not-too-bad" quality can be useful in a Web application where a great variety of texts are to be translated for occasional users who want to grasp the basic ideas of a foreign text.
a system with high quality and restricted coverage might be useful for in-house MT applications or a controlled language.
contrasting
train_7437
then P(c_ij) = P(c_ij | w_i, t_i). If there is a case relation or a modification relation in two constituents, coverage heuristics designate that it is easier to add the smaller tree to the larger one than to merge the two medium-sized trees.
in the coordination relation, it is easier to merge two medium sized trees.
contrasting
train_7438
Techniques, for example, for proper name recognition and classification are well known.
good quality name recognition software is only freely available at the present for English.
contrasting
train_7439
Solving this problem might not only be useful in the event of detecting such signals from space, but also, by deliberately ignoring preconceptions based on human texts, may provide us with some better understanding of what language really is.
we need to start somewhere, and our initial investigations -which this paper summarises -make some basic assumptions (which we would hope to relax in later research).
contrasting
train_7440
The research community benefits from access to published corpora not available for commercial use.
a corporation that needs data for development and testing purposes is much more restricted.
contrasting
train_7441
The researchers used a tag set size of only 3, including function, content, and punctuation in the rule.
Korean is a post-positional agglutinative language.
contrasting
train_7442
For example, all proper nouns "NE" and nouns "NN" can be retrieved by pos = "NE" | "NN". As usual, structural identity can be expressed by the use of logical variables.
variables must not occur in the scope of negation, since this would introduce the computational overhead of inequality constraints.
contrasting
train_7443
For example, Ringger (1995) reports an average error rate of 30% for recognizing careful, spontaneous speech on a specific topic.
the error rate of paced speech can be as low as 5% if the vocabulary is severely limited or if the text is highly predictable and the system is tuned to that particular genre.
contrasting
train_7444
Disadvantages to this approach are that it relies on timesensitive texts, texts obtained by this approach are constrained to referencing specific events, and nontrivial work by humans is still necessary.
our goal is to extract bilingual text pairs automatically from any kind of bilingual comparable corpora.
contrasting
train_7445
The best hand created pattern based system seems to have a wide coverage dictionary for person, organization and location names and achieved very good accuracy for those categories.
the hand created pattern based system failed to capture the evaluation specific patterns like "the middle of April".
contrasting
train_7446
Table 2 shows that increasing the size of the training corpus enhances the accuracy incrementally.
the point of diminishing returns is reached when the size reaches 0.85 million characters.
contrasting
train_7447
include "ate" having the subject "my cat", the object "the food", and the time modifier "Yesterday", and "the food" having the location modifier "in (the bowl)".
different sets of GRs are useful for different purposes.
contrasting
train_7448
This liaison, which is represented by a phoneme appearing at the end of some words, must be removed when the next word begins with a consonant since the liaison phoneme is never pronounced in that case.
if the next word begins with a vowel, the liaison phoneme may or may not be pronounced and thus becomes optional.
contrasting
train_7449
[Table 2: BETA evaluation for the Ro-EN lexicon of nouns; both COGN and DIST filters used] The analysis of the wrong translation pairs revealed that most of them were hapax pairs (pairs appearing only once) and they were selected because the DIST measure enabled them, so we considered that this filter is not discriminative enough for hapaxes.
for the non-hapax pairs the DIST condition was successful in more than 85% of the cases.
contrasting
train_7450
The treatment of verb + preposition cooccurrences is different from the treatment of N+P pairs since verb and preposition are seldom adjacent to each other in a German sentence.
they can be far apart from each other, the only restriction being that they cooccur within the same clause.
contrasting
train_7451
Of course, our definition of the term paradigmatic association as given in the introduction implies this.
the simulation system never obtained any information on part of speech, and so it is nevertheless surprising that -besides computing term similarities -it implicitly seems to be able to cluster parts of speech.
contrasting
train_7452
Paradigmatic associations like blue-red, cold-hot, and tobacco-cigarette are intuitively plausible.
a quantitative evaluation would be preferable, of course, and for this reason we did a comparison with the results of the human subjects in the TOEFL test.
contrasting
train_7453
Again, we need to emphasize that parameters other than the basic methodology could have influenced the result, so we need to be cautious with an interpretation.
to us it seems that the view that some of the co-occurrences in corpora should be considered as noise is wrong, or else if there is some noise it obviously cancels out over large corpora.
contrasting
train_7454
In the ranked lists produced by the system we find a mixture of both types of associations.
for a given association there is no indication whether it is of syntagmatic or paradigmatic type.
contrasting
train_7455
In essence, this means that in the process of learning or generating associations the human mind seems to conduct operations that are equivalent to co-occurrence counting, to performing significance tests, or to computing vector similarities (see also Landauer & Dumais, 1997).
further work is required to find out to what extent other language-related tasks can also be explained statistically.
contrasting
train_7456
1993; Grishman, 1994; Meyers, Yangarber & Grishman, 1996; Watanabe, Kurohashi & Aramaki, 2000).
the mismatching between complex structures across languages and the poor parsing accuracy of the parser will hinder structure alignment algorithms from working out high accuracy results.
contrasting
train_7457
English NE identification has achieved a great success.
for Chinese, NE identification is very different.
contrasting
train_7458
The reason is that adopting heuristic information reduces the noise influence.
we noticed that the recall of PER and LOC decreased a bit.
contrasting
train_7459
In sentence translation, the alignment links frequently cross and it is not unusual for two words in different parts of sentences to correspond.
the processes that lead to link intersection in diachronic phonology, such as metathesis, are quite sporadic.
contrasting
train_7460
The quality of correspondences produced by CORDI is difficult to validate, quantify, and compare with the results of alternative approaches.
it is possible to evaluate the correspondences indirectly by using them to identify cognates.
contrasting
train_7461
As expected, Method A is outperformed by methods that employ an explicit noise model.
in spite of its extra complexity, Method C is not consistently better than Method B, perhaps because of its inability to detect important vowel-consonant correspondences, such as the ones between French nasal vowels and Latin /n/.
contrasting
train_7462
These methods could achieve high accuracy because of the assumption of sentence alignments for parallel corpora, but they have the problem of narrow applicable domains because there are not too many parallel corpora with sentence alignments at present.
because our method does not require sentence alignments, it can be applied to a wider range of domains.
contrasting
train_7463
These results show that there was only a negligible (and, according to the χ² test, statistically insignificant) difference between the results in the cases when the tagger was both trained and tested on the "old" corpus and both trained and tested on the "corrected" corpus.
the difference in the error rate when the tagger was once trained on the "old" and once on the "corrected" version, and then in both cases tested on the "corrected" version, brought up a relative error improvement of 9.97%.
contrasting
train_7464
In sentence (9), the airlines are the literal, and the persons the real referents.
relating these two entities directly by an employment relation is problematic, since it is impossible to connect the locality information (from Boston to New York) and the first class restriction to either of them.
contrasting
train_7465
Icons are now used in nearly all possible areas of human computer interaction, even office software or operating systems.
there are contexts where richer information has to be managed, for instance: Alternative & Augmentative Communication systems designed for the needs of speech or language impaired people, to help them communicate (with icon languages like Minspeak, Bliss, Commun-I-Mage); Second Language Learning systems where learners have a desire to communicate by themselves, but do not master the structures of the target language yet; Cross-Language Information Retrieval systems, with a visual symbolic input.
contrasting
train_7466
In these contexts, the use of icons has many advantages: it makes no assumption about the language competences of the users, allowing impaired users, or users from a different linguistic background (which may not include a good command of one of the major languages involved in research on natural language processing), to access the systems; it may trigger a communication-motivated, implicit learning process, which helps the users to gradually improve their level of literacy in the target language.
icons suffer from a lack of expressive power to convey ideas, namely, the expression of abstract relations between concepts still requires the use of linguistic communication.
contrasting
train_7467
An approach to tackle this limitation is to try to "analyse" sequences of icons like natural language sentences are parsed, for example.
icons do not give grammatical information as clues to automatic parsers.
contrasting
train_7468
We thus should have to use a parser based on computing the dependencies, such as some which have been written to cope with variable-word-order languages (Covington, 1990).
since no morphological clue is available either to tell that an icon is, e.g., accusative or dative, we have to rely on semantic knowledge to guide role assignment.
contrasting
train_7469
(1998) did make use of information from the whole document.
their system is a hybrid of hand-coded rules and machine learning methods.
contrasting
train_7470
In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.
it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even.
contrasting
train_7471
Borthwick (1999) successfully made use of other handcoded systems as input for his MENE system, and achieved excellent results.
such an approach requires a number of hand-coded systems, which may not be available in languages other than English.
contrasting
train_7472
The success stories of these systems indicated that carefully crafted answer patterns were effective in candidate answer extraction.
just applying answer patterns blindly might lead to disastrous results, as shown by Hermjakob (2002), since correct and incorrect answers were equally likely to match these patterns.
contrasting
train_7473
(2001) also used WordNet to assist in answering definition questions.
they took the hypernyms of the term to be defined as the default answers while we used its glosses.
contrasting
train_7474
The TREC-10 questions are typical instances of queries for which users tend to believe answers can be found from the web.
the candidate answers extracted from the web have to find support in the TREC-10 corpus in order to be judged as correct; otherwise they will be marked as unsupported.
contrasting
train_7475
The best results were achieved when evidence from both resources was used.
it also demonstrates the difficulty of improving performance on very hard questions (d>=0.75).
contrasting
train_7476
We showed that using either approach alone improved MRR score by 19% and PCT5 score by 5% over the baseline.
the best performance was achieved when both methods were used together.
contrasting
train_7477
Since large tagged corpora in Bulgarian are not widely available, the development of a corpus-based probabilistic tagger was an unrealistic goal for us.
as some studies suggest (Voutilainen, 1995), the precision of rule-based taggers may exceed that of the probabilistic ones.
contrasting
train_7478
Third Construction verbs like versprechen allow a great deal of variation in the size of the left-peripherally shared topology area (LS=1:6), thereby licensing optional promotion of es.
since es is a personal pronoun, it only takes M2 as its landing site (see Table 3).
contrasting
train_7479
Knowledge of verb selectional preferences and verb subcategorization frames (SFs) can be extracted from corpora for use in various NLP tasks.
knowledge of SFs is often not fine-grained enough to distinguish various verbs and the kinds of arguments that they can select.
contrasting
train_7480
Each of the verbs above occurs with both the intransitive and transitive SFs.
the verbs differ in their underlying argument structure.
contrasting
train_7481
Both S1 and S2 receive credit for matching question words "Lee Harvey Oswald" and "kill" (underlined), as well as for finding an answer (bold) of the proper qtarget type (PROPER-PERSON).
is the answer "Jack Ruby" or "President John F. Kennedy"?
contrasting
train_7482
These methods can be viewed as an established basis for exposing hidden associations between documents and terms.
their objective is to generate a compact representation of the original information space, and it is likely in consequence that the resulting orthogonal vectors are dense with many non-zero elements (Dhillon and Modha, 1999).
contrasting
train_7483
For high-frequency terms, [...] In the original definition, the value of δ was uniquely determined, for example as δ = m(1)/M, with m(1) being the number of terms that appear exactly once in the text.
we experimentally vary the value of δ in our study, because it is an essential factor for controlling the size and quality of the generated clusters.
contrasting
train_7484
6 and 7: Without discounting, the value of δI(S_T, S_D) in the above equation is always negative or zero.
with discounting, the value becomes positive for uniformly dense clusters, because the frequencies of individual cells are always smaller than their agglomeration and so the discounting effect is stronger for the former.
contrasting
train_7485
For comparison purposes, we have used only the conventional documents-and-terms feature space in our experiments.
the proposed micro-clustering framework can be applied more flexibly to other cases as well.
contrasting
train_7486
This process is currently implemented in the context of a BN.
any representation that supports the generation of a connected argument involving a given set of propositions would be appropriate.
contrasting
train_7487
Here, −X is possible only when the statistics concerned use summation for aggregation.
if X turns into a uniquely instantiated variable, delete the aspect related to X from the perspective.
contrasting
train_7488
Conventionally, unknown words were extracted by statistical methods, for statistical methods are simple and efficient.
the statistical methods without using linguistic knowledge suffer the drawbacks of low precision and low recall.
contrasting
train_7489
Conventional statistical extraction methods are simple and efficient.
without supporting linguistic evidence, the precision of extraction is still not satisfactory, since a high-frequency character string might be a phrase or a partial phrase instead of a word.
contrasting
train_7490
The above results could be justified by the structural difference of Japanese and English, where English takes the prefix structure that places emphasis at the beginning of a sentence, hence prefers left-to-right decoding.
Japanese takes a postfix structure, setting attention around the end of a sentence, and therefore favors right-to-left decoding.
contrasting
train_7491
Therefore, the narrowing the search space by the beam search crite-ria (pruning) would not affect the overall quality.
if right-to-left decoding method were applied to such a language above, the difference of good hypotheses and bad hypotheses is small, hence the drop of hypotheses would affect the quality of translation.
contrasting
train_7492
(1999) used decision tree learning.
most machine learning methods overfit the training data when many features are given.
contrasting
train_7493
Non-linear decision surfaces can be realized by replacing the inner product of (4) with a kernel function K(x · x_i). In this paper, we use polynomial kernel functions that have been very effective when applied to other tasks, such as natural language processing (Joachims, 1998; Kudo and Matsumoto, 2001; Kudo and Matsumoto, 2000). [2.2 Sentence Ranking by using Support Vector Machines] Important sentence extraction can be regarded as a two-class problem: important or unimportant.
the proportion of important sentences in training data will differ from that in the test data.
contrasting
train_7494
First, we show that an NE recognizer based on Support Vector Machines (SVMs) gives better scores than conventional systems.
off-the-shelf SVM classifiers are too inefficient for this task.
contrasting
train_7495
SVMs have given high performance in various classification tasks (Joachims, 1998;Kudo and Matsumoto, 2001).
it turned out that off-the-shelf SVM classifiers are too inefficient for NE recognition.
contrasting
train_7496
For instance, when we removed 5,066 features that appeared four times or less in the training data, the modified classifier for ORGANIZATION-END misclassified 103 training examples, whereas the original classifier misclassified only 19 examples.
xQK-FS removed 12,141 features without an increase in misclassifications for the training data.
contrasting
train_7497
GENERAL's F-measure was slightly improved from 87.04% to 87.10%.
when we trained the cubic kernel classifiers by using only features that appeared three times or more (without considering weights), TinySVM's classification time was reduced by only 14% and the F-measure was slightly degraded to 86.85%.
contrasting
train_7498
(1) Multilingual distinction of senses The developed method is based on the premise that the senses of a polysemous word in a language are lexicalized differently in another language.
the premise is not always true; that is, the ambiguity of a word may be preserved by its translations.
contrasting
train_7499
A repair is further classified into adjacent and long-distance.
implicit long-distance repair is generally not acceptable in Japanese.
contrasting