Columns: id (string, 7-12 chars); sentence1 (string, 6-1.27k chars); sentence2 (string, 6-926 chars); label (4 classes)
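As a minimal sketch of how rows with this schema might be loaded and inspected, the snippet below uses the Hugging Face datasets library; the repository id "your-org/contrasting-pairs" is a placeholder, not the actual dataset name.

from datasets import load_dataset

# Minimal sketch: load a split whose columns are id, sentence1, sentence2, label.
# "your-org/contrasting-pairs" is a hypothetical repository id, not the real one.
ds = load_dataset("your-org/contrasting-pairs", split="train")

for row in ds.select(range(3)):
    # Each row holds an example id, two excerpted sentences, and one of 4 label
    # classes (e.g. "contrasting").
    print(row["id"], row["label"])
    print("  sentence1:", row["sentence1"])
    print("  sentence2:", row["sentence2"])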
train_7500
They rely on partial matching between the word fragment and the following word to detect hesitations.
we cannot reasonably assume that current speech recognizers can recognize a word fragment as a correct word.
contrasting
train_7501
Some methods such as (Bear et al., 1992) process self-corrections sentence by sentence.
such approaches are not suitable for use with incremental parsing (Nakano and Shimazu, 1998).
contrasting
train_7502
This interpretation has an advantage in that we can handle various self-corrections uniformly by supplementing omitted verbs.
we use the former interpretation 1.
contrasting
train_7503
The third assumption (and the condition of step 3 concerning coherency in figure 7) means that even a human cannot make a unique interpretation if the word correspondence between t1 and t2 is ambiguous.
the third assumption seems too strong.
contrasting
train_7504
In the example in figure 10, our method creates a correspondence between "mae ni" and "atari ni" and copies "kamera no" to "atari ni."
it should be attached to "mae no" instead of "atari ni."
contrasting
train_7505
We can find most self-correction candidates with these types of information.
we need more information to evaluate the validity of each candidate.
contrasting
train_7506
order of C, A, B, because hypothesis C contains the most words.
the policy that causes this problem is needed to prevent the parser from removing too much information.
contrasting
train_7507
In order to know which particular meaning of the word we want to oppose, we have to assess by what context meaning has to be constrained.
context is not always sufficient to give a symmetry axis for antonymy.
contrasting
train_7508
His analysis has to deal with the notion of context: he proposes to associate to a word a core semantic description (the fact that a "door" is an "aperture") and to add some additional features, which can be activated in context ("walk-through" is the telic role of a "door").
pustejovsky does not take into account important notions such as lexical chains and text coherence.
contrasting
train_7509
Although clustering proves to be an alternative to lists, it seeks a global, possibly nested partition in which clusters represent sets of indistinguishable objects regarding the cluster criterion.
to this, we present cohesion trees (CT) as a data structure, in which single objects are hierarchically ordered on the basis of lexical cohesion.
contrasting
train_7510
Their concept is similar to that of our model, which can be regarded as a model that translates keywords into text, while their model can be regarded as one that translates query words into documents.
the purpose of their model is different: their goal is to retrieve text that already exists while ours is to generate new text.
contrasting
train_7511
We cannot distinguish the role using only POS since it varies: attributes are not always represented by adjectives nor actions by verbs (compare intense price competition with price competition is intensifying this month).
the words in the second class have indirect relations, e.g., association, with the target word.
contrasting
train_7512
For almost all sets, context word vector 2 provides higher precision than context word vector 1.
the effect of consideration of syntactic dependency is minimal in this experiment.
contrasting
train_7513
This problem cannot be resolved by this method alone.
there is still room for improvement by combining other information, e.g., the similarity between components.
contrasting
train_7514
The inference of structure in XML information is a relatively new area of research.
there are several closely related topics that have been studied for a longer period.
contrasting
train_7515
However, their inference algorithms are based upon direct generalisation and factoring of regular expressions, with information theoretic principles used to choose a final result from a pool of candidates.
we propose a hybrid method which employs various principles throughout the inference process, with the aim of producing a more general method.
contrasting
train_7516
Thus the newer method is preferable if only one choice is allowed.
the sk-ALL heuristic was better in terms of worst-case performance, though the deviation is still rather high.
contrasting
train_7517
Information extraction (IE) is an important application for natural language processing, and recent research has made great strides toward making IE systems easily portable across domains.
IE systems depend on parsing tools and specialized dictionaries that are language specific, so they are not easily portable across languages.
contrasting
train_7518
And the only elements in the distribution are words, which appeared in texts.
LSA and PLSA expressed the latent semantic structures, called a topic of the context.
contrasting
train_7519
From this, we could not determine whether or not the value of k and the translation accuracy are related to each other in the data-driven models described in this paper.
we could also find that the degree of accuracy was raised in accordance with the value of k in PLSA.
contrasting
train_7520
A distributional soundness is expected to have better performance as the size of examples is growing.
we should resolve several problems raised during the experiment.
contrasting
train_7521
That is, the classification of TCFP requires a simple calculation in proportion to the number of unique terms in the test document.
in k-NN, a search in the whole training space must be done for each test document.
contrasting
train_7522
In more detail, the kernel of our novel disambiguation method for UBGs consists of the application of a context-free approximation for a given UBG and the exploitation of the standard probability model for CFGs.
to earlier approaches to disambiguation for UBGs, our approach has several advantages.
contrasting
train_7523
(2000) achieved a precision, which is only slightly lower than ours.
their results were yielded using a corpus, which is about 80 times as big as ours.
contrasting
train_7524
Figure 9 shows that the first proposed system based on TM3LM-score achieved the greatest improvement, of a little under 6%, in the performance for Rank A.
the existing selection system simply using the LM-score (language model of the translation target) could not improve and even degraded performance for Rank A.
contrasting
train_7525
Figure 10 shows that the second proposed system with the pruning procedure based on both TM3LM-score and TM-score (marked RT12-PRN in the graph) achieved the greatest improvement of about 5%, for Rank A+B (equal to or higher than B).
figure 10 ("Difference in performance between each selection system and J-E TDMT") shows that the existing selection system [...].
contrasting
train_7526
Statistical approaches (Aone and Bennett, 1995;Ge et al., 1998;Kim and Ehara, 1995;Soon et al., 2001) use statistical models produced based on corpora annotated with anaphoric relations.
only a few attempts have been made in corpus-based anaphora resolution for Japanese zero pronouns.
contrasting
train_7527
Ideally, in the case where a verb in question is polysemous, word sense disambiguation is needed to select the appropriate case frame, because different verb senses often correspond to different case frames.
we currently merge multiple case frames for a verb into a single frame so as to avoid the polysemous problem.
contrasting
train_7528
In the case where the amount of unannotated corpora was varied in producing a semantic model, the accuracy marginally improved as the corpus size increases.
note that we do not need human supervision to produce a semantic model.
contrasting
train_7529
In their model, the search scope for possible antecedents was limited to the sentence containing zero pronouns.
our method can resolve zero pronouns in both intra/inter-sentential anaphora types.
contrasting
train_7530
The expected value of a token sequence ab representing a PER when a is red and b is blue is: [...]. The expected value of the cases when the token pair ab is not a PER name is the sum of the expected values of four cases (Table 2), which after simplification is given by: [...]. The ratio between the cases when ab is a PER versus when ab is not a PER is: [...], where the terms denote the rationality values of tokens ab, a and b of being a PER, red ball or blue ball, respectively.
the probabilities of a as a surname (red ball) and b as a first-name (blue ball) are: [...]. The form of Equation (4) is similar to the concept of odds likelihood O(h), first introduced in Duda et al.
contrasting
train_7531
One way to achieve this is to employ a decision tree (Sekine 98) to select the best possible candidates.
it is difficult to use the decision tree to handle multiple relationships between conflicting NEs, and to perform incremental updates of rationality values in situations where the number, distribution and relationships in possible NEs are uncertain.
contrasting
train_7532
For spoken dialogue systems, this aspect of style can be important to optimize the analysis of a user's input.
social bonding is performed by adapting to a common interaction style (Brown and Levinson, 1987;Okada et al., 1999).
contrasting
train_7533
Therefore, people can choose the relevant clusters of documents on the map to get relevant documents.
it is impossible for one map to encompass the continuously growing data source.
contrasting
train_7534
Comparing the three algorithms we see that overall, kNN and the category-based method exhibit comparable performance (with the exception of measuring similarity by L1 distance, when the category-based method outperforms kNN by a margin of about 5 points; statistical significance p<0.001).
their performance is different in different frequency ranges: for lower frequencies kNN is more accurate (e.g., for L1 distance, p<0.001).
contrasting
train_7535
The level of φ-phrases can fairly easily be derived from syntax.
the same is not true of I-phrases.
contrasting
train_7536
Slow speech rate is imitated by decreasing, and fast rate by increasing this single parameter.
the results are not quite satisfactory yet, because some of the steps of the overall procedure for assigning phrase breaks were manually corrected.
contrasting
train_7537
In an analysis of the APA newswire corpus (a corpus of over 28 million words), we found that almost half (47%) of the word types were compounds.
the compounds accounted for a small portion of the overall token count (7%).
contrasting
train_7538
Intuitively, knowing what the modifier is should help us in guessing the head of a compound.
constructing a plausible head-prediction term based on modifier-head dependencies is not straightforward.
contrasting
train_7539
Thus, these preliminary experiments indicate that an approach to integrating compound and simple word predictions along the lines sketched at the beginning of this section, and in particular the version of the model in which modifier predictions are penalized, is feasible.
the model is clearly in need of further refinement, given that the improvement over the baseline model is currently minimal.
contrasting
train_7540
It is important to capture effective patterns and features from the training sense tagged data in WSD.
if there is noise in the sense-tagged training data, it becomes difficult to disambiguate word senses effectively.
contrasting
train_7541
In the case of the telephone translation system, the information about the language used is self-evident; at least, the speaker knows it, so there is little need for, or advantage in, developing a multilingual speech identification system.
speech data in video do not always have the information about the language used.
contrasting
train_7542
This can be explained by the fact that contextual rules are constructed using words registered in a P-DIC and that the pronunciation estimation module makes some errors.
'not registered' words also show relatively good performance.
contrasting
train_7543
Constructing the rules, however, requires a great deal of manual work, and it is also difficult to modify the rules.
a technique for tagging dialogue acts has been proposed so far (Araki et al., 2001).
contrasting
train_7544
Our model was built using the British National Corpus (100 million words).
our model was built using only a part-of-speech tagged corpus.
contrasting
train_7545
In a critical part of his derivation Simon gives an initial probability to a new word found in the corpus as it introduces some new meaning not expressed by previous words.
as the number of words increases new ideas are frequently expressed not in single words, but in multi-word phrases or compound words.
contrasting
train_7546
This result appears to vindicate Simon's derivation.
whether Simon's derivation is entirely valid or not, the results in Figure 12 are a new confirmation of Zipf's original law in an extended form.
contrasting
train_7547
This figure shows that Dist(D("do")) is smaller than Dist(D("electronic")), which reflects our linguistic intuition that words co-occurring with "electronic" are more biased than those with "do".
Dist(D("cipher")) is smaller than Dist(D("do")), which contradicts our linguistic intuition.
contrasting
train_7548
In algebraic terms, concatenation left-distributes over union but does not right-distribute over union in general.
our solution is to provide a pair of concatenation operators: , which gives priority to the left, and ≺, which gives priority to the right: Rule (1) marks the second Y in YYY, but rule (2) marks the first Y.
contrasting
train_7549
Just as Collins uses rules to identify heads and arguments and thereby lexicalize trees, Chiang uses nearly the same rules to reconstruct derivations: each training example is broken into elementary trees, with each head child remaining attached to its parent, each argument broken into a substitution node and an initial root, and each adjunct broken off as a modifier auxiliary tree.
in this experiment we view the derived trees in the Treebank as incomplete data, and try to reconstruct the derivations (the complete data) using the Inside-Outside algorithm.
contrasting
train_7550
That is to say, a document which has actually been authored conveys more meaning than just stating "I am a valid document relative to the specification".
in a closed-world environment as we have been discussing until now, that additional meaning has no explicit counterpart in the knowledge-base; it is only represented implicitly in the abstract content tree, in a form which is not perspicuous and would be difficult to re-use for the authoring of other documents or to share with other processes.
contrasting
train_7551
(1999), the variable v represented not only the lexical verb but also its syntactic relation to the noun: either direct object, subject of an intransitive, or subject of a transitive verb.
the relationship between the underlying, semantic arguments of a verb and the syntac-tic roles in a sentence is not always straightforward.
contrasting
train_7552
This is in fact the case, with child, band, and team appearing among the top ten nouns for both positions.
play also exhibits an alternation between the direct object and subject of intransitive positions for music, role, and game.
contrasting
train_7553
This example illustrates the complex interaction between verb sense and alternation behavior: "The band played" and "The music played" are considered to belong to different senses of play by WordNet (Fellbaum, 1998) and other word sense inventories.
it is interesting to note that nouns from both the broad senses of play, "play a game" and "play music", participate in both alternations.
contrasting
train_7554
Similarly, the direct object and subject of intransitive slots of increase are assigned to the same cluster.
in an example of how the model can cluster semantically related verbs that do not share the same alternation behavior, the direct object slot of reduce and the subject of transitive slot of exceed are grouped together with increase.
contrasting
train_7555
We also made a 3x2 [...]. Since the values in Table 6 are under 5%, it is confirmed that there exist associations among expression patterns and impressions for each readership.
the strength of the associations is low for respondents with the readership "engineer" or "researcher", who have expertise, because of lower values of Cramer's V, while it is high for those with the readership "unconcerned" or "commoner" because of higher values of V. Because all graphs slope up from left to right in Figures 1, 2 and 3, the expression pattern 2.0 (expressing what the purpose of the technology is) is the most effective at making a title impressive (more precisely, regarded as "comprehensible", "evoking positive feelings" and "interesting") for readers of most readerships, especially the "unconcerned" readers or "commoners".
contrasting
train_7556
As a result, titles using expression pattern 2.0 (expressing what the purpose of the technology is) in the obligatory component are the most effective in getting the general reader's comprehension, positive feelings, and interest.
the more expertise in a technical field the reader has, the less the reader tends to be influenced.
contrasting
train_7557
As the technique for stochastic parsing of spoken language, Den has suggested a new idea for detecting and parsing self-repaired expressions; however, the phenomena with which the framework can cope are restricted (Den, 1995).
our method provides the most plausible dependency structures for natural speech by utilizing stochastic information.
contrasting
train_7558
About 86.5% of inversions appear at the last bunsetsu.
73 dependencies, accounting for 5.4% of the 1,362 turns consisting of more than two units, extend over two utterance units.
contrasting
train_7559
how frequently the word sequence T is used as a title for a document.
of course, the probability for the word sequence T to be used as a title for any document is definitely influenced by the correctness of the word order in T. Whether the words in the sequence T are common words or not will also have a great influence on the chance of seeing the sequence T as a title.
contrasting
train_7560
In the old framework for title generation, term P(T) is used for ordering title words into a correct sequence.
term P(T) is not only influenced by the word order in T, but also by whether the words in T are common words.
contrasting
train_7561
Their work describes the modularization of an NLG system and the tasks of each module.
they do not indicate what kind of tools can be used by each module.
contrasting
train_7562
The maximum entropy system performed competitively with the best systems on the English verbs in SENSEVAL-1 and SENSEVAL-2 (Dang and Palmer, 2002).
while SENSEVAL-2 made it possible to compare many different approaches over many different languages, data for the Chinese lexical sample task was not made available in time for any systems to compete.
contrasting
train_7563
Broad-coverage lexical resources such as WordNet are extremely useful in applications such as Word Sense Disambiguation (Leacock, Chodorow, Miller 1998) and Question-Answering (Pasca and Harabagiu 2001).
they often include many rare senses while missing domain-specific senses.
contrasting
train_7564
An alternative considers the number of pairs of elements whose similarity exceeds a certain threshold (Guha, Rastogi, Kyuseok 1998).
this may cause undesirable mergers when there are a large number of pairs whose similarities barely exceed the threshold.
contrasting
train_7565
An example of the first approach considers the average entropy of the clusters, which measures the purity of the clusters (Steinbach, Karypis, and Kumar 2000).
maximum purity is trivially achieved when each element forms its own cluster.
contrasting
train_7566
The word dog belongs to both the Person and Animal classes.
in the newspaper corpus, the Person sense of dog is at best extremely rare.
contrasting
train_7567
("New Year reception") is the subject in 11a while it is the object in 11b.
in both cases, it is the theme.
contrasting
train_7568
Consequently, many researchers have focused on producing accurate word segmenters for Chinese text indexing (Teahan et al, 2001;Brent and Tao, 2001).
we have recently observed that low accuracy word segmenters often yield superior retrieval performance (Peng et al, 2002).
contrasting
train_7569
However, as segmentation accuracy improves, the tokens behave more like true words and the retrieval engine begins to behave more conventionally.
after a point, when the second regime is reached, retrieval performance no longer increases with improved segmentation accuracy, and eventually begins to decrease.
contrasting
train_7570
This method is commonly applied on mobile phones.
there are two problems with this method.
contrasting
train_7571
Finding a most probable class assignment to the entities and relations is equivalent to finding the assignment of all the variables in the belief network that maximizes the joint probability.
this most-probable-explanation (MPE) inference problem is intractable (Roth, 1996) when the network contains undirected cycles, which is exactly the case in our network (Figure 3: Belief network of 3 entity nodes and 6 relation nodes).
contrasting
train_7572
In a feature-based grammar as the one we are focusing on, to create tree structures without the proper feature equations is of little use.
experience has shown that feature equations are much harder to maintain correct and consistent in the grammar than the tree structures.
contrasting
train_7573
We generate 1200 trees (instead of 1008).
things are not as bad as they look: 206 of them are for passives related to multi-anchor trees, as we explain next.
contrasting
train_7574
Note that the figure shows only one possible hierarchy, where the dimensions are ordered by TOPIC, ZONE and MOOD.
there are totally 6 (=3!)
contrasting
train_7575
From a viewpoint of category representation, the fact that the derived flat model enumerates all combinations among categories makes the representation of a class more precise than the class in the multi-dimensional model.
from the viewpoint of relationship constraints in these models, the derived flat category model ignores the relationship among categories while the derived hierarchical model explicitly declares such relationship in a rigid manner, and the multi-dimensional model is a compromise between these two previous models.
contrasting
train_7576
This makes such nodes represent classes less precisely, but there are more training data (documents) for these nodes.
nodes near leaves will have finer granularity and then have more precise representation but have less training data.
contrasting
train_7577
Similar to most of the parsers for many languages, our parser is based on words.
most other parsers for Japanese are based on a unique phrasal unit called a bunsetsu, a concatenation of one or more content words followed by some grammatical function words.
contrasting
train_7578
In Example-Based MT based on analogical reasoning (Malavazos, 2000;Guvenir, 1998), the different parts are replaced by variables to generalize translation examples as shown in (1) of Figure 1.
the number of different parts of the two SL sentences must be same as the number of different parts of the two TL sentences.
contrasting
train_7579
However, it is difficult to acquire the translation rules that correspond to the lexicon level.
we have proposed a method of Machine Translation using Inductive Learning with Genetic Algorithms (GA-ILMT) (Echizenya et al., 1996).
contrasting
train_7580
As a result, the quality of the translation and the effective translation rate of our system is higher than other Rule-Based MT systems.
our system still does not reach the level of a practical MT system and requires more translation rules to realize the goal of a practical MT system.
contrasting
train_7581
We would choose the tree with higher confidence.
this is not possible in our case because weightings of the Minipar trees are not publicly available, and the shallow parser is a rule-based system without confidence information.
contrasting
train_7582
For instance, if £ appears 10 times in the corpus and [...] (where ¨ is a normalizing constant).
if £ appears 200 times in the corpus and [...], it is a more frequent lemma combination and should contribute a higher score to the paraphrase.
contrasting
train_7583
Since an ontology arranges words in a semantic hierarchy, it is possible for a word to appear in several different places in the hierarchy depending on its semantic sense.
words and concepts in a given language do not always translate cleanly into a second language; a word often has multiple translations, and they do not always share the same meanings.
contrasting
train_7584
This example is a good illustration of the strength and power of the cross-lingual [...] (footnote: The original number of definitions chosen for the test set was higher).
upon inspection, it was found that a number had no corresponding WordNet synset and thus cannot be aligned.
contrasting
train_7585
Again, the phrasal word translations "move house", "change one's residence", "move to a better place", etc were filtered out, leaving the single word "move", which has a total of 16 senses as a verb in WordNet 1.7.
as the table shows, the algorithm correctly assigns the "change residence" sense of "move" to the HowNet definition, which is appropriate for the Chinese words it contains, which include " " (move house), " " (change one's dwelling), and " " (tear down one's house and move).
contrasting
train_7586
If one of the processes gives up its mission, the entire translation process also gives up and fails.
our model (sometimes) allows the transfer to give up generating the target language.
contrasting
train_7587
It would be a major problem in this design if there were many interaction loops between the paraphraser and the transfer, but we found that such worries are unwarranted in the current system.
it is necessary to be careful in this measure, since we need to add more functions to the paraphraser in order to avoid zero output.
contrasting
train_7588
It could be used, for example, to choose spleen neoplasms from concept class C0751521 as the translation of Milztumoren.
since such a distinction between the different lexicalizations of a given concept is beyond the scope of the current paper, we make the additional simplifying assumption that, given a concept class, the target word t is independent of the source word s, which leads to the simplified formula: [...]. The above equation views the thesaurus as a trellis linking source and target words.
contrasting
train_7589
Turning now to modules in grammar implementations, we see that similar to modules in ordinary software projects, grammar modules encode generalizations (functional units, property P2).
we argue below that (certain) grammar modules are not black boxes (whose internal structure is irrelevant, property P1), because these generalizations encode important linguistic insights.
contrasting
train_7590
The reader of the documentation would simply follow these links (which might be realized by hyperlinks).
a typical grammar rule calls many macros, and macros often call other macros.
contrasting
train_7591
A modularized grammar is "intensionally transparent", as we put it, and thus favours maintainability.
for casual users of the grammar, modularity may result in decreased readability.
contrasting
train_7592
According to , the analyzing time followed a quadratic curve.
our parser analyzes a sentence in linear time keeping a better accuracy.
contrasting
train_7593
Furthermore, to define a joint probability over the observation and state sequences, the generative approach needs to enumerate all the possible observation sequences.
in some tasks, the set of all the possible observation sequences is not reasonably enumerable.
contrasting
train_7594
Third, the generative approach normally estimates the parameters to maximize the likelihood of the observation sequence.
in many NLP tasks, the goal is to predict the state sequence given the observation sequence.
contrasting
train_7595
Traditionally, there are two existing approaches to resolve this problem: linear interpolation (Jelinek 1989) and back-off (Katz 1987).
these two approaches only work well when the number of different information sources is limited.
contrasting
train_7596
In our use of clustering we draw on an idea from Yaari (1997) who uses this technique as a first step in his hierarchical segmentation algorithm.
where Yaari uses a similarity measure based on word co-occurrence to decide which segments should be merged, we use a machine learnt model of segment relatedness.
contrasting
train_7597
Note, unlike most approaches that make use of lexical chains, we do not break a chain in two if too many sentences intervene between the individual chain elements; chains are continued as long as new elements can be found.
the algorithm keeps track of where in the text chain elements were found.
contrasting
train_7598
In our set of texts, the present tense was for instance used in a lot of places, sometimes to refer to events in the past, sometimes to refer to events that were going to happen at the time the text was published.
given the method we adopted, one could have expected better coherence results than finesse results.
contrasting
train_7599
Work in HPSG (Pollard and Sag, 1994;Ginzburg and Sag, 2000, inter alia) has instead focused on well-grounded structuring of grammatical information, most specifically in terms of type-inheritance hierarchies, although often at the cost of coverage and elegance (see Penn and Hoetmer 2003).
the respective foci of work in these frameworks are largely orthogonal, suggesting a great potential in combining them, as recent work by Villavicencio (2001) and Baldridge (2002) has demonstrated.
contrasting