id: string (length 7 to 12)
sentence1: string (length 6 to 1.27k)
sentence2: string (length 6 to 926)
label: string (4 classes)
train_7700
When the alignment is known, we can efficiently determine the optimal translation with that alignment.
when the translation is known, we can efficiently determine a better alignment.
contrasting
train_7701
IE is known to have established a now widely accepted linguistic architecture based on cascading automata and domain-specific knowledge (Appelt et al., 1993).
several studies have outlined the problem of the definition of the resources.
contrasting
train_7702
The process is weakly supervised since the analyst only has to provide one example to the system.
we observed that the quality of the acquisition process depends highly on this seed example, so that several experiments have to be done when acquiring an argument structure, in order to be sure of obtaining accurate coverage of a domain.
contrasting
train_7703
They sometimes obtain unclear conclusions about the reasons for the performance of the different algorithms (for example, comparing Jiang and Conrath's measure (1997) with Lin's (1998): "It remains unclear, however, just why it performed so much better than Lin's measure, which is but a different arithmetic combination of the same terms").
the authors emphasize the fact that the use of the sole hyponym relation is insufficient to capture the complexity of meaning: "Nonetheless, it remains a strong intuition that hyponymy is only one part of semantic relatedness; meronymy, such as wheel-car, is most definitely an indicator of semantic relatedness, and, a fortiori, semantic relatedness can arise from little more than common or stereotypical associations or statistical co-occurrence in real life (for example, penguin-Antarctica; birthday-candle; sleep-pajamas)".
contrasting
train_7704
The activation measure d is equal to the mean of the weights of the NCAs calculated from A and B; please refer to (Dutoit and Poibeau, 2002) for more details and examples.
this measure is sensitive enough to give valuable results for a wide variety of applications, including text filtering and information extraction (Poibeau et al., 2002).
contrasting
train_7705
We estimate that generally 25% of the acquired patterns should be rejected.
this validation process is very rapid: a few minutes only were necessary to check the 31 proposed patterns and retain 25 of them.
contrasting
train_7706
Additionally, by reducing the size of descriptions, Cyclone can be used with mobile devices, such as PDAs.
while Cyclone includes various types of terms, such as technical terms, events, and animals, the required set of viewpoints can vary depending on the type of target terms.
contrasting
train_7707
To resolve this problem, zero pronoun detection and anaphora resolution can be used.
due to the rudimentary nature of existing methods, we use hand-crafted rules to complement simple sentences with the subject.
contrasting
train_7708
A simple sentence that matches with patterns for multiple viewpoints is classified into every possible group.
the pattern-based method fails to classify the sentences that do not match with any predefined patterns.
contrasting
train_7709
It may be argued that an existing hand-crafted encyclopedia can be used as the standard summary.
paragraphs in Cyclone often contain viewpoints not described in existing encyclopedias.
contrasting
train_7710
Third, the VBS method outperformed the lead method in terms of coverage, except for the case of "#Reps=1" focusing on the 12 viewpoints by annotator B.
in general the VBS method produced more informative summaries than the lead method, irrespective of the compression ratio and the annotator.
contrasting
train_7711
A low coverage for the synonym is partially due to the fact that synonyms are often described with parentheses.
because parentheses are used for various purposes, it is difficult to identify only synonyms expressed with parentheses.
contrasting
train_7712
For example, in TREC QA track, definition questions are intended to provide a user with the definition of a target item or person (Voorhees, 2003).
while the expected answer for a TREC question is short definition sentences as in a dictionary, we intend to produce an encyclopedic text describing a target term from multiple viewpoints.
contrasting
train_7713
Thórisson's original algorithm (Thórisson, 1994) takes into account these relations as well as positional relations of objects when calculating similarity between objects to generate groups.
if we generate groups using multiple relations simultaneously, the assumption used in Step 1 of our algorithm, that any pair of groups in an output list does not intersect without a subsumption relation, no longer holds.
contrasting
train_7714
We assumed that all participants in a conversation shared the same reference frame.
when we apply our method to conversational agent systems, e.g., (Cavazza et al., 2002;Tanaka et al., 2004), reference frames must be properly determined each time to generate referring expressions.
contrasting
train_7715
Experiments by Daumé et al. (2002) and the parsing work of Charniak (2000) and others indicate that further lexicalization may yield some additional improvements for ordering.
the parsing results of Klein & Manning (2003) involving unlexicalized grammars suggest that gains may be limited.
contrasting
train_7716
Language-model-based IR systems proposed in the last five years have introduced the language model approach from the speech recognition area into the IR community and effectively improve the performance of IR systems.
the assumption behind the method, that all the indexed words are unrelated, does not hold.
contrasting
train_7717
The maximum of the average precision is reached when three MeSH terms are selected per query (0.1925), but we can notice that selecting only two terms is as effective (0.19).
selecting the unique top returned term is not sufficient (average precision is below 0.145), and adding more than three terms smoothly degrades the precision, so that with 25 terms, precision falls below 0.15.
contrasting
train_7718
In a given CTS, a past perfect clause should precede the event described by a simple past clause.
the order of two events in CTS does not necessarily correspond to the order imposed by the interpretation of the connective (Dorr, 2002).
contrasting
train_7719
Assuming event classes are independent of temporal indicators given c, we have the corresponding Naïve Bayes factorization of the joint probability; a Naïve Bayesian Classifier assumes strict independence among all attributes.
this assumption is not satisfactory in the context of temporal relation determination.
contrasting
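Aside: a minimal sketch of the Naïve Bayes setup this pair discusses, applied to toy temporal-indicator features. The feature values, class names, and smoothing constant are illustrative assumptions, not from the source paper.

```python
from collections import defaultdict
import math

# Toy training pairs: (temporal indicators, relation class); all invented.
train = [
    (["after", "had"], "BEFORE"),
    (["when", "was"], "OVERLAP"),
    (["after", "was"], "BEFORE"),
]

prior = defaultdict(int)
cond = defaultdict(lambda: defaultdict(int))
for feats, c in train:
    prior[c] += 1
    for f in feats:
        cond[c][f] += 1

vocab = {f for counts in cond.values() for f in counts}

def classify(feats, alpha=1.0):
    """Pick argmax_c of log P(c) + sum_i log P(t_i | c), add-alpha smoothed.
    The strict independence among indicators is exactly the assumption the
    pair above calls unsatisfactory for temporal relation determination."""
    best, best_lp = None, float("-inf")
    for c in prior:
        lp = math.log(prior[c] / len(train))
        total = sum(cond[c].values())
        for f in feats:
            lp += math.log((cond[c].get(f, 0) + alpha) / (total + alpha * len(vocab)))
        if lp > best_lp:
            best, best_lp = c, lp
    return best

print(classify(["after", "had"]))  # -> BEFORE
```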
train_7720
This model is called Ungrouped Model (UG).
as illustrated in table 1, the temporal indicators play different roles in building temporal reference.
contrasting
train_7721
One may think that if back-transliteration were done precisely, those variants would be back-transliterated into one word and recognized as variants.
backtransliteration is known to be a very difficult task (Knight and Graehl, 1997).
contrasting
train_7722
Practically, these words are identified as different words by using a dictionary.
when using only contextual similarity, these words would be judged as variants.
contrasting
train_7723
The results showed that detection of short word variants is very difficult, and a dictionary raised the precision for such short words.
contextual similarity did not contribute as expected to the detection of orthographic variants.
contrasting
train_7724
For instance: the noisy-channel model (NCM) (Virga et al., 2003; Lee et al., 2003), HMMs (Sung et al., 2000), decision trees (Kang et al., 2000), transformation-based learning (Meng et al., 2001), the statistical machine transliteration model (Lee et al., 2003), finite state transducers (Knight et al., 1998) and rule-based approaches (Wan et al., 1998; Oh et al., 2002).
it is observed that the reported transliteration models share a common strategy, that is: 1) to model the transformation rules; 2) to model the target language; 3) to model both of the above; yet the modeling of the different kinds of knowledge is always done independently.
contrasting
train_7725
By eliminating the potential imprecision introduced through a multiple-step phonetic mapping in the phoneme-based approach, DOM is expected to outperform it.
to the phoneme-based approach, DOM is purely data-driven and can therefore be extended across different language pairs easily.
contrasting
train_7726
As for SCF acquisition, the co-occurrence of one predefined SCF scf_i with one verb v is the relevant primitive event, and the concerned probability here is p(v|scf_i).
the aim of filtering is to rule out those unreliable hypotheses, so it is the probability that one primitive event does not occur that is often used for SCF hypothesis testing, i.e.
contrasting
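Aside: one simple instantiation of the filtering idea above, using the probability that the primitive event never occurs in n trials; the exact test is not specified in this pair, and the chance rate p and counts below are assumptions for illustration.

```python
# Probability that an event with per-trial probability p does not occur
# in n independent trials: (1 - p) ** n. A (verb, scf_i) hypothesis can
# be judged unreliable when its observed absence (or rarity) is close to
# what chance alone would predict.
def never_occurs(p: float, n: int) -> float:
    return (1.0 - p) ** n

# Illustrative numbers: a verb seen n = 50 times, chance rate p = 0.05.
print(never_occurs(0.05, 50))  # ~0.077
```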
train_7727
However, it seems that inappropriate translation equivalents were often ranked high by the CS method.
referring to the representative associated words enables the results of the RAW method to be judged as appropriate or inappropriate.
contrasting
train_7728
The RAW method maximizes the F-measure, i.e., harmonic means of recall and precision, when the threshold for the ratio of associated words is set at 4%; the recall, precision, and F-measure are 92%, 80%, and 86%, respectively.
the CS method maximizes the F-measure when N is set at nine; the recall, precision, and F-measure are 96%, 72%, and 82%, respectively.
contrasting
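Aside: the F-measures quoted in this pair follow directly from the harmonic-mean definition; a quick check (function name ours):

```python
def f_measure(precision: float, recall: float) -> float:
    # Harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

print(round(f_measure(0.80, 0.92), 2))  # RAW method at the 4% threshold -> 0.86
print(round(f_measure(0.72, 0.96), 2))  # CS method at N = 9 -> 0.82
```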
train_7729
• The RAW method assumes that a target word has more than one sense, and, therefore, it is effective for polysemous target words.
contextual similarity is ineffective for a target word with two or more senses occurring in a corpus.
contrasting
train_7730
Jing (Jing 00) developed a method to remove extraneous phrases from sentences by using multiple sources of knowledge to decide which phrases could be removed.
while this method exploits a simple model for sentence reduction by using statistics computed from a corpus, a better model can be obtained by using a learning approach.
contrasting
train_7731
One way to solve this problem is to select multiple actions that correspond to the context at each step in the rewriting process.
the question that emerges here is how to determine which criteria to use in selecting multiple actions for a context.
contrasting
train_7732
We suppose the topical clustering could not prove its merits with this test collection because the collection consists of relevant articles retrieved by some query and polished well by a human so as not to include articles unrelated to a topic.
the proposed method (PO) improved chronological ordering much better than topical segmentation.
contrasting
train_7733
The Tukey test revealed that RO was definitely the worst with all metrics.
spearman's rank correlation τ_s and Kendall's rank correlation τ_k failed to prove a significant difference between CO, PO and HO.
contrasting
train_7734
All of these approaches assume events occur in text in certain patterns.
this assumption may not be completely correct and it limits the syntactic information considered by these approaches for finding events, such as information on global features from levels other than deep processing.
contrasting
train_7735
For slot filler detection, several classifiers were trained to find names for each slot and there is no correlation among these classifiers.
entity slots in events are often strongly correlated, for example the PER_IN and POST slots for management succession events.
contrasting
train_7736
She used an iterative process to semi-automatically learn patterns.
a corpus of 20MB words yielded only 400 examples.
contrasting
train_7737
CBC (Clustering by Committee) proposed by Pantel and Lin (2002) achieves high recall and precision in generating similarity lists of words discriminated by their meaning and senses.
such clustering algorithms fail to name their classes.
contrasting
train_7738
The former attribute the greatest weights to very rare context words, some of which seem rather informative (knock_by, climb_of, see_into), while some appear to be occasional collocates (remand_to, recover_in) or parsing mistakes (entrust_to, force_of).
the latter encourage frequent context words.
contrasting
train_7739
Finding the four nearest neighbors for each word in the collection, we calculated the average minimum similarity score that a pair of words must have in order to be considered related.
since words vary a lot in terms of the amount of corpus data available on them, the average similarity threshold might be inappropriate for many words.
contrasting
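Aside: a sketch of the threshold computation described above: average, over all words, the similarity of each word's fourth nearest neighbour. The random similarity matrix is a stand-in for real distributional scores.

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, k = 100, 4
sim = rng.random((n_words, n_words))
sim = (sim + sim.T) / 2      # make similarities symmetric
np.fill_diagonal(sim, 0.0)   # a word is not its own neighbour

# Similarity of the k-th nearest neighbour of every word, i.e. the
# minimum similarity among its k nearest neighbours.
kth_nearest = np.sort(sim, axis=1)[:, -k]
print(f"average minimum similarity: {kth_nearest.mean():.3f}")
```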
train_7740
cosine and Jaccard measure, Euclidean distance and scalar product.
his measures were taken to find the most similar vector for a given word in order to automatically identify word translations.
contrasting
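Aside: minimal forms of the four measures this pair names; using the generalized min/max form of Jaccard for non-negative vectors is our choice, and the vectors are toy data.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def jaccard(a, b):
    # Generalized Jaccard for non-negative feature vectors.
    return float(np.minimum(a, b).sum() / np.maximum(a, b).sum())

def euclidean(a, b):
    return float(np.linalg.norm(a - b))

def scalar_product(a, b):
    return float(a @ b)

a = np.array([1.0, 0.0, 2.0, 1.0])
b = np.array([1.0, 1.0, 1.0, 0.0])
for m in (cosine, jaccard, euclidean, scalar_product):
    print(m.__name__, round(m(a, b), 3))
```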
train_7741
Applying a two-step technique (translation rules and fuzzy n-gram matching), they achieved 81.1% average precision in a Spanish-to-English context covering biomedical words only.
their evaluation metrics considerably differed from ours, since they considered multiple hypotheses.
contrasting
train_7742
In back transliteration, an English letter or string is chosen to correspond to a katakana character or string.
this decision is difficult.
contrasting
train_7743
Since it is hard to construct a set of predefined names by hand, usually some corpus based approaches are used for building such taggers.
as Zipf's law indicates, most of the names which occupy a large portion of vocabulary are rarely used.
contrasting
train_7744
For each word w which appeared in newspaper A, we computed the normalized document frequency at date t, df_A(w, t) / N_A(t), where df_A(w, t) is the number of documents which contain the word w at date t in newspaper A.
the normalization constant N_A(t) is the number of all articles at date t. Comparing this value between two newspapers directly cannot capture a time lag.
contrasting
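Aside: a sketch of the normalized document frequency df_A(w, t) / N_A(t) in this pair (the ratio is our reconstruction of the elided formula), with toy counts for two newspapers; as the pair notes, comparing the two series at the same date cannot reveal a lag.

```python
# Toy counts: documents containing w at date t, and totals per date.
df_A = {("ipod", 1): 2, ("ipod", 2): 8}
N_A  = {1: 100, 2: 110}
df_B = {("ipod", 1): 0, ("ipod", 2): 3}
N_B  = {1: 90, 2: 95}

def rel_df(df, N, w, t):
    # Normalized document frequency df(w, t) / N(t).
    return df.get((w, t), 0) / N[t]

for t in (1, 2):
    print(t, rel_df(df_A, N_A, "ipod", t), rel_df(df_B, N_B, "ipod", t))
# Newspaper B picks the word up later; a direct same-date comparison of
# the two series misses that shift, so a lag must be modeled explicitly.
```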
train_7745
It can be seen that the fact that the executives are leaving and the name of the organisation are listed in the first sentence.
the names of the executives and their posts are listed in the second sentence although it does not mention the fact that the executives are leaving these posts.
contrasting
train_7746
In an effort to overcome this brittleness machine learning methods have been applied to port systems to new domains and extraction tasks with minimal manual intervention.
some IE systems using machine learning techniques only extract events which are described within a single sentence, examples include (Soderland, 1999;Chieu and Ng, 2002;Zelenko et al., 2003).
contrasting
train_7747
Then we ran the English-QDIE system to get the extraction patterns, which are used to extract the entities by pattern matching.
one can first translate the scenario description into the source language and use it for the monolingual QDIE system for the source language, assuming that we have access to the tools for pattern acquisition in the source language.
contrasting
train_7748
As we shall demonstrate, the errors introduced by the MT system impose a significant cost in extraction performance both in accuracy and coverage of the target event.
if basic linguistic analysis tools are available for the source language, it is possible to boost CLIE performance by learning patterns in the source language.
contrasting
train_7749
Such problems, of course, may make it hard to detect Named Entities and get a correct dependency tree of the sentence.
translation of names is easier than translation of contexts; the MT system can output the transliteration of an unknown word.
contrasting
train_7750
This means that, as in face-to-face spoken dialog, the email thread as a whole is a collaborative effort with interaction among the discourse participants.
unlike spoken dialog, the discourse participants are not physically co-present, so that the written word is the only channel of communication.
contrasting
train_7751
These sentence-shortening approaches have been evaluated by comparison with human-shortened sentences and have been shown to compare favorably.
the use of sentence shortening for the multi-document summarization task has been largely unexplored, even though intuitively it appears that sentence-shortening can allow more important information to be included in a summary.
contrasting
train_7752
The machine summaries contain on average one parenthetical every 3.3 sentences.
human summaries contain only one parenthetical unit per 8.9 sentences on average.
contrasting
train_7753
We tested up to ±10 000 word contexts.
the best precision is always obtained for short contexts ranging from ±1 to ±5 words.
contrasting
train_7754
Most current research uses machine learning techniques (Li and Takeuchi, 1997; Murata et al., 2001; Takamura et al., 2001) and has achieved good performance.
as supervised learning methods require word sense-tagged corpora, they often suffer from data sparseness, i.e., words which do not occur frequently in a training corpus can not be disambiguated.
contrasting
train_7755
It is similar to the ordinary Naive Bayes model for WSD (Pedersen, 2000).
we assume that this model can not be trained for low frequency words due to a lack of training data.
contrasting
train_7756
In general, two or more hypernyms can be extracted from a definition sentence, when the definition of a sense consists of several sentences or a definition sentence contains a coordinate structure.
for this work we extracted only one hypernym for a sense, because definitions of all senses in the EDR concept dictionary are described by a single sentence, and most of them contain no coordinate structure.
contrasting
train_7757
training a WSD classifier from an unlabeled data set.
our approach is to use a machine readable dictionary in addition to a corpus as knowledge resources for WSD.
contrasting
train_7758
In the following, we call this method the Algorithm for Hyponymy Relation Acquisition from Itemizations (AHRAI).
the method we propose in this paper is called Hyponym Extraction Algorithm from Itemizations and Headings (HEAIH).
contrasting
train_7759
It will then obtain that "Toyota" is a hyponym of "car company" from document (A) in the figure, while it finds that "Toyota" is a hyponym of "car" from (B).
the task is not that simple.
contrasting
train_7760
One may think that, for instance, she can use the distance between an itemization and (candidates of) its heading in the HTML file as a clue for finding the correspondence.
we empirically show that this is not the case.
contrasting
train_7761
Disagreement between human annotators and uncertainty about the interpretation of annotation guidelines may also lead to an element of randomness in the evaluation.
even significant results cannot be generalised to a different type of collocation (such as adjective-noun instead of PP-verb), different evaluation criteria, a different domain or text type, or even a source corpus of different size, as the results of show.
contrasting
train_7762
Both observed precision values are consistent with an average precision π_A = π_B in the region of overlap, so that the observed differences may be due to random variation in opposite directions.
this conclusion is premature because the two rankings are not independent.
contrasting
train_7763
The collection of clusters is mainly presented to the users as a flat and independent list of clusters.
as we realised that some of the clusters are more related than others (e.g.
contrasting
train_7764
on average 1.68 historical links per major news cluster.
for 42 of the 136 major news clusters, the system did not find any related news clusters with a similarity of 15% or more.
contrasting
train_7765
The second and third column show the influence of the words (or word combinations) by themselves, which is extremely low.
when examining all patterns containing these words, the fourth and fifth columns, their usefulness becomes visible.
contrasting
train_7766
It is both important, because there is strong demand for all kinds of computer support for health care and clinical services, which aim at improving their quality and decreasing their costs, and challenging, given the miracles of medical sublanguage, the various text genres one encounters, and the enormous breadth of expertise surfacing as medical terminology.
the development of human language technology for written language material has, up until now, almost exclusively focused on newswire or newspaper genres.
contrasting
train_7767
The distribution of bigram types is also reflected in the three-part distribution in Table 3: the number of POS trigrams occurring fewer than ten times is almost one third lower in MED than in NEGRA or in NEWS; similarly, but less pronounced, this can be observed for POS bigrams.
the number of trigram types occurring more than 1000 times is even higher for MED, and the number of bigram and unigram types is about the same when scaled against the total number of types.
contrasting
train_7768
Though such a noun t_i may be an important noun for document d, it may be an irrelevant noun for document set S. Hence, a noun t_i that is assigned a small entropy value should not be extracted as a relevant keyword.
a noun that appears uniformly in each document contained in document set S has a large entropy value.
contrasting
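Aside: the entropy criterion in this pair, in a few lines; the per-document counts are invented.

```python
import math

def entropy(counts):
    """Entropy of a noun's occurrence distribution over the documents of
    a set S: uniform spread -> large value, concentration -> small value."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c > 0)

print(entropy([5, 5, 5, 5]))   # uniform over 4 documents -> 2.0 bits
print(entropy([20, 0, 0, 0]))  # concentrated in one document -> 0.0 bits
```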
train_7769
Undoubtedly, internet texts have formed a very large and easilyaccessible corpus.
chinese texts on the internet are not segmented, so it is not cost-effective to use them.
contrasting
train_7770
In table 6, the precision of 80.23% is slightly better than 79.96% of the 20-word condition, and just 1% lower than that of the 40-word condition.
the recall drastically increases from 45.56%, or 59.57% under the 40-word condition, to 85.03%.
contrasting
train_7771
[Table 8: Morpheme-based and word-based recall of high-frequency and low-frequency words.] The results showed that high-frequency words could be largely extracted by the algorithm with both morphemes (99.80% recall) and words (89.45% recall).
paradigm words gave 26.55% recall of low-frequency words, whereas paradigm morphemes gave 67.66%.
contrasting
train_7772
The efficient application of Turney's algorithm with the help of a colossal corpus, such as a hundred-billion-word corpus, is matched by the ready availability of internet texts.
the same convenience is not available for Chinese because of the heavy cost of word segmentation.
contrasting
train_7773
Further, the second approach is clearly advantageous when one wishes to apply distributional similarity methods in a particular application area.
it is not at all obvious that one universally best measure exists for all applications (Weeds and Weir, 2003).
contrasting
train_7774
The value obtained (0.0525) is disappointing since it is not statistically significant (the probability of this value under the null hypothesis of "no correlation" is 0.3).
Haspelmath (2002) notes that a compositional collocation is not just similar to one of its constituents; it can be considered to be a hyponym of its head constituent.
contrasting
train_7775
One limitation of the corpus-based translation knowledge acquisition approach is that the techniques of translation knowledge acquisition heavily rely on availability of parallel/comparative corpora.
the sizes as well as the domain of existing parallel/comparative corpora are limited, while it is very expensive to manually collect parallel/comparative corpora.
contrasting
train_7776
Cao and Li (2002) restricted candidate bilingual compound term pairs by consulting a seed bilingual lexicon and requiring their constituent words to be translation of each other across languages.
in the framework of bilingual term correspondences estimation of this paper, the computational complexity of enumerating translation candidates can be easily avoided with the help of cross-language retrieval of relevant news texts.
contrasting
train_7777
He uses dynamic programming to break tree pairs into pairs of aligned elementary trees, similar to DOT.
he aims to estimate a translation model from unaligned data, whereas we wish to align our data off-line.
contrasting
train_7778
Results over the full set of output translations, summarised in Figure 9, show that using the manually linked fragment base results in significantly better overall performance at all link depths (LD1 -LD4) than using the automatic alignments.
both metrics used assign score 0 in all instances where no translation was output by the system.
contrasting
train_7779
In terms of Bleu scores, translations produced using manual alignments score slightly better at all depths.
as link depth increases the gap narrows consistently and at depth 4 the difference in scores is reduced to just 0.0125.
contrasting
train_7780
For example, there are relatively few instances of 'D→the' aligned with 'D→le/la/l'/les' in the automatic alignment compared to the manual alignment.
we achieve 10% less coverage when translating using the automatic alignments.
contrasting
train_7781
All of these approaches annotate examples by means of a pair of analyzed structures, one for each language sentence, where the correspondences between the levels of the source and target structures are explicitly linked.
we found that these approaches require bilingual examples that have 'parallel' translations or 'close' syntactic structures (Grishman, 1994), where the source and target sentences have explicit correspondences in the sentence pair.
contrasting
train_7782
We have tried to exploit the information about multiwords contained in the Collins bilingual dictionary.
it is well known that dictionaries contain only a small part of multiwords actually used in language.
contrasting
train_7783
In fact, KNOWA correctly aligns 2-boy with 5-bambino, and 5-dog with 2-cane, even if the English and Italian nouns are not in the same position in the respective sentences, thanks to a search in the Italian word window.
kNOWA would also align 1-the with 1-il, and 4-the with 4-al.
contrasting
train_7784
However, not all nouns have this characteristic.
certain function words, for instance conjunctions, may be involved in a one-to-one potential correspondence relation.
contrasting
train_7785
EuroParl includes texts in 11 European languages, automatically aligned at the sentence level, whereas EuroCor includes only a part of the texts in EuroParl and only for English and Italian.
multiSemCor is a reference English/Italian corpus being developed at ITC-irst, including SemCor (part of the Brown Corpus) and its Italian translations.
contrasting
train_7786
Most spoken dialogue systems that have been developed, such as airline information systems (Levin et al., 2000;Potamianos et al., 2000;San-Segundo et al., 2000) and train information systems (Allen et al., 1996;Sturm et al., 1999;Lamel et al., 1999), are categorized here.
it is not feasible to define such keywords in retrieval for operation manuals (Komatani et al., 2002) or WWW pages, where the target of retrieval is not organized and is written as natural language text.
contrasting
train_7787
In general, dependencies in Japanese written text do not cross.
dependencies in spontaneous speech sometimes do.
contrasting
train_7788
One method based on machine learning, a method based on maximum entropy models, has been proposed by Reynar and Ratnaparkhi (Reynar and Ratnaparkhi, 2000).
the target in their study was written text.
contrasting
train_7789
This parameter can again be used for basic tasks such as POS-tagging: Adjective-noun ambiguity is notoriously the most difficult one to solve, and the ordering restrictions on the classes of adjectives can help to reduce it.
it is most useful for semantic tasks.
contrasting
train_7790
It could seem that the semantic classes established for the second parameter amount to morphological classes: not derived (basic adjectives), denominal (object adjectives), and deverbal (event adjectives).
although there is indeed a certain correlation between morphological class and semantic class, we claim that morphology is not sufficient for a reliable classification because it is by no means a one-to-one relationship.
contrasting
train_7791
This suggests that this judge is an outsider and that the level of expertise needed for humans to perform this kind of classification is quite high.
there are too few data for this suspicion to be statistically testable.
contrasting
train_7792
But it will not predict very well certain locally regular words like of, the, etc., whose main role is to support the syntactic structure of a sentence.
n-gram language models are able to model them well because of maximum likelihood estimation from the training corpus and various smoothing techniques.
contrasting
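Aside: a minimal bigram model with maximum likelihood counts and add-one smoothing, the kind of estimate this pair credits with modeling function words well; the corpus and the smoothing choice are illustrative.

```python
from collections import Counter

corpus = "the cat sat on the mat because the cat was tired".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
V = len(unigrams)  # vocabulary size

def p_bigram(w1, w2):
    # MLE count ratio with add-one (Laplace) smoothing.
    return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + V)

print(round(p_bigram("the", "cat"), 3))  # frequent function-word context
print(round(p_bigram("cat", "the"), 3))  # unseen bigram, still non-zero
```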
train_7793
This relative increase is mainly due to poor modeling of function words in the LSA-space.
for SELSA, we can observe that its perplexity of 36.37 is less than the value of 50.39 in the case of knowledge about content/function words.
contrasting
train_7794
Using N-best lists could impose a computational burden.
our experiments have shown that a smaller N seems to be adequate to achieve most of the translation improvement without significantly increasing computation.
contrasting
train_7795
In the case of Japanese text analysis, (Murata et al., 1999) proposed a method of utilizing "N_m no N_h" phrases for indirect anaphora resolution of diverse relationships.
they basically used all "N_m no N_h" phrases from corpora, just excluding some pre-fixed stop words.
contrasting
train_7796
Furthermore, sometimes a definition is too specific or detailed, and the example phrases can adjust it properly, as in the example of hisashi in Table 2.
a simple method that just collects and clusters "N_m no N_h" phrases (based on some similarity measure of nouns) cannot construct comprehensive nominal case frames, because of polysemy and multiple obligatory cases.
contrasting
train_7797
The algorithm scans text from left to right and selects the longest match with a dictionary entry at each point, in a greedy fashion.
longest possible words may not comply with the actual meanings.
contrasting
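Aside: the greedy longest-match scan this pair describes, in a few lines; the dictionary is a toy one, and the example shows exactly the failure mode the pair points out.

```python
dictionary = {"北京", "北京大学", "大学", "学生", "大学生", "生"}

def longest_match(text, dic, max_len=4):
    # Scan left to right; at each point take the longest dictionary
    # entry, falling back to a single character when nothing matches.
    out, i = [], 0
    while i < len(text):
        for j in range(min(len(text), i + max_len), i, -1):
            if text[i:j] in dic or j == i + 1:
                out.append(text[i:j])
                i = j
                break
    return out

print(longest_match("北京大学生", dictionary))
# -> ['北京大学', '生'], although the intended reading may well be
#    北京 + 大学生 ("Beijing" + "university student").
```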
train_7798
This was because Method p&s couldn't select two correct variant pairs when the penalties were 1 and 2.
the precision of Method p&s was 16.2% higher.
contrasting
train_7799
Pantel and Lin (2002) demonstrate that it is possible to cluster the neighbours into senses and relate these to WordNet senses.
we use the distributional similarity scores of the neighbours to rank the various senses of the target word since we expect that the quantity and similarity of the neighbours pertaining to different senses will reflect the relative dominance of the senses.
contrasting
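Aside: a sketch of the sense-ranking idea in this final pair: score each sense by the summed distributional similarity of the nearest neighbours pertaining to it. The neighbours, scores, and sense word sets are invented.

```python
neighbours = {"bank": [("river", 0.25), ("money", 0.40), ("shore", 0.20),
                       ("account", 0.35), ("finance", 0.30)]}
sense_words = {
    "bank(finance)": {"money", "account", "finance"},
    "bank(river)": {"river", "shore"},
}

def rank_senses(word):
    # Sum the similarity mass of the neighbours pertaining to each
    # sense; more mass indicates a more dominant sense.
    scores = {s: sum(sc for n, sc in neighbours[word] if n in ws)
              for s, ws in sense_words.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_senses("bank"))
# -> [('bank(finance)', 1.05), ('bank(river)', 0.45)]
```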