id: string (length 7 to 12)
sentence1: string (length 6 to 1.27k)
sentence2: string (length 6 to 926)
label: string (4 classes)
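For reference, here is a minimal Python sketch of the record layout implied by the column summary above, populated with the first row shown below. The Example dataclass and the way it is constructed are illustrative assumptions, not part of the dataset itself.

```python
# Minimal sketch of the record layout described by the column summary above.
# The `Example` dataclass and this construction style are illustrative
# assumptions, not part of the dataset itself.
from dataclasses import dataclass


@dataclass
class Example:
    id: str         # row identifier, e.g. "train_20000" (7-12 characters)
    sentence1: str  # first sentence of the pair (6 to ~1.27k characters)
    sentence2: str  # second sentence of the pair (6 to 926 characters)
    label: str      # one of 4 label classes; every row previewed here is "contrasting"


# The first row of the preview below, reproduced verbatim.
first_row = Example(
    id="train_20000",
    sentence1="Naturally, these are not appropriate comparisons for the work "
              "reported here.",
    sentence2="as is evident from the discussion above, previous spelling "
              "research does provide an important role in suggesting productive "
              "features to include in the decision tree.",
    label="contrasting",
)

print(first_row.id, first_row.label)  # -> train_20000 contrasting
```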
train_20000
Naturally, these are not appropriate comparisons for the work reported here.
as is evident from the discussion above, previous spelling research does provide an important role in suggesting productive features to include in the decision tree.
contrasting
train_20001
Capitalization is his sole means of identifying names.
capitalization information is not available in closed captions.
contrasting
train_20002
More importantly, we investigate the relative role of each statistical and linguistic knowledge source in the proposed IR/NLP question-answering system.
to previous results, we find that statistical knowledge of word co-occurrences as computed by vector space models of IR can be used to quickly and accurately locate relevant documents in the restricted QA task.
contrasting
train_20003
In the cases we have investigated, this strategy works often enough to be useful for the surface object.
due to predicate coordination as well as relativization, such a strategy often fails to identify correctly the surface subject of bind (binds or bound) when more than one binding term precedes the verb.
contrasting
train_20004
Basically, the segmenter is based on frequency of individual nouns extracted from corpus.
the problem is that it is difficult to distinguish proper noun and common noun since there is no clue like capital letters in Korean.
contrasting
train_20005
Assume that 's1' is a frequently appearing word in texts whereas 's2s3s4' is a rarely occurring sequence of syllables as a word.
's1s2' and 's3s4' occur frequently, although they don't occur as frequently as 's1'.
contrasting
train_20006
Actually, gyosaeng-hwal is not a word.
both hag-gyo and saeng-hwal are frequently occurring syllables, and actually they are all words.
contrasting
train_20007
It is important to note here that we would probably have obtained this improvement in recognition accuracy even with a manual revision of the grammars.
the main advantage in using our tool is the tremendous simplification of the whole process of revision for a grammar developer who now selects counter-examples with an interactive tool instead of manually revising the grammars.
contrasting
train_20008
We have successfully used the tool to constrain overgeneralizing grammars of speech understanding systems and obtained 20-30% higher recognition accuracy.
we believe the primary benefit of using our tool is the tremendously reduced effort for the grammar developer.
contrasting
train_20009
This is a language dependent approach.
we propose a framework of language independent morphological analysis system.
contrasting
train_20010
Disambiguation is already language independent, since it does not process strings directly and therefore will not be taken up.
tokenization and dictionary look-up are language dependent and shall be explained more in this paper.
contrasting
train_20011
If a hyphenated segment such as "data-base," "F-16," or "MS-DOS" exists in the dictionary, it should be an independent lexeme.
if a hyphenated segment such as "55-years-old" does not exist in the dictionary, hyphens should be treated as independent tokens (Fox, 1992).
contrasting
train_20012
Then we pass it to the analyzer for non-segmented languages.
the analyzer may return the result as "They / 've / gone / to / school / to / get / her / ."
contrasting
train_20013
The set of the delimiters acting as boundaries: These act as boundaries of MFs.
these can not be independent MFs (can not start nor end a lexeme).
contrasting
train_20014
These problems are discussed by several authors including Mills (Mills, 1998) and Webster & Kit (Webster and Kit, 1992).
their proposed solutions are not language independent.
contrasting
train_20015
Using case-sensitive rules is straightforward since generally only nouns (and proper names) are written in standard German with a capitalized initial letter (e.g., "das Unternehmen" -the enterprise vs. "wir unternehmen" -we undertake).
for disambiguation of word forms appearing at the beginning of the sentence local contextual filtering rules are applied.
contrasting
train_20016
company names) may appear in the text either with or without a designator, we use a dynamic lexicon to store recognized named entities without their designators (e.g., "Braun AG" vs. "Braun") in order to identify subsequent occurrences correctly.
a named entity, consisting solely of one word, may be also a valid word form (e.g., "Braun" -brown).
contrasting
train_20017
This means that although the exact attachment point of each individual PP is not known it is guaranteed that a PP can only be attached to phrases which are dominated by the main verb of the sentence (which is the root node of the clause's tree).
the exact point of attachment is a matter of domain-specific knowledge and hence should be defined as part of the domain knowledge of an application.
contrasting
train_20018
Only in the case of the MC-module the difference is about 5%.
the result for the isolated evaluation of the MC-module suggests that this is mainly due to errors caused by previous components.
contrasting
train_20019
The output document from constrained HMM contains MUC-standard NE tags such as person, location and organization.
for a real information extraction system, the MUC-standard NE tag may not be enough and further detailed NE information might be necessary.
contrasting
train_20020
Further adding the morphological analysis capability that automatically analyzes unknown words (deriving additional morphological relationships and some semantic subsumption relationships) significantly improves that result (from 50.0% to 60.7%).
we found that adding the same semantic subsumption relationships to the commercial search engine, using its provided thesaurus capability degraded its results, and results were still degraded when we added only those facts that we knew would help find relevant documents.
contrasting
train_20021
Knowledge classifiers are part of almost any knowledge representation system.
the problem we face here is more difficult.
contrasting
train_20022
By default this concept is placed under policy.
in WordNet there is a hierarchy fiscal policy - IS-A - economic policy - IS-A - policy.
contrasting
train_20023
Higher interest rates are normally associated with weaker bond markets.
if interest rates go down, bonds go up, and your bond becomes more valuable.
contrasting
train_20024
Overgeneration will produce some name variants that have nothing to do with the company.
although overgeneration of variants routinely occurs, testing showed that such overgeneration has little adverse effect.
contrasting
train_20025
The prior work most closely related to this study is (Riloff, 1996), which, along with (Riloff, 1993), seeks automatic methods for filling slots in event templates.
the prior work differs from that presented here in several crucial respects; firstly, the prior work does not attempt to find entire events, after the fashion of MUC's highest-level scenario-template task.
contrasting
train_20026
Name lists provide an extremely efficient way of recognising names, as the only processing required is to match the name pattern in the list against the text and no expensive advanced processing such as full text parsing is required.
name lists are a naive method for recognising names.
contrasting
train_20027
We have also demonstrated that it is possible for the addition of corpus-derived lists to improve the performance of a NE recognition system based on gazetteers.
this is not guaranteed and it appears that adding too many names without any restriction may actually lead to poorer results, as happened when the LONG_TRAIN lists were applied.
contrasting
train_20028
HS also uses a frequency measure to provide an approximation of topic significance.
instead of counting frequency of stems or repetition of word sequences, this method counts frequency of a relatively easily identified grammatical element, heads of simplex noun phrases (SNPs).
contrasting
train_20029
Toward this end, we made several decisions about presentation of the data: 1.
threshold: So that no bias would be unintentionally introduced, we presented subjects with all terms output by each method, up to a specified cut-off point; using lists of equal length for each method would have necessitated either omitting HSs and KWs or changing the definition of tts.
contrasting
train_20030
The graph in Figure 3 shows results for quality, not coverage.
figure 4, which shows the total number of terms rated at or below specified rankings, allows us to measure quality and coverage.
contrasting
train_20031
For example, the phrase 'Wall Street Journal' was included on the HS list only because it is specified as the document source.
four of the eight students assigned this term a high rating (1 or 2); this is puzzling because the document is about asbestos-related disease.
contrasting
train_20032
For example, the word exposure received an average rating of 2.2 when it appeared on the KW list, but a rating of only 2.75 on the HS list.
the more specific phrase racial quotas, which immediately followed quota on the HS list received a rating of 1.
contrasting
train_20033
One might imagine that a more carefully constructed lexicon could reduce the OOV rate for PERSONs while still staying within the 60,000 word limit.
even if a cleverly designed 60K lexicon succeeded in having the name coverage of the frequency-ordered 120K word lexicon (which contains roughly 40,000 more proper names than the 60K lexicon), it would reduce the PERSON OOV rate by only 4% absolute.
contrasting
train_20034
Once an example is in the training, IdentiFinder is able to extract it and use it in test.
when the test data is recognizer output, the rare names are less likely to appear in the test, either because they don't appear in the speech lexicon or they are poorly trained in the speech model and misrecognized.
contrasting
train_20035
Other GUI interface tools, such as a search field with command completion, can be envisioned that would provide direct access.
it is arguable that such an interface element belongs squarely to graphical user interfaces, but draws more on features of language.
contrasting
train_20036
In our case, the output of the dialog manager of a spoken dialog system provides the input to our sentence planner in the form of a single spoken dialog text plan for each of the turns.
(the dialog managers of most dialog systems today simply output completely formed utterances which are passed on to the TTS module.)
contrasting
train_20037
Perfect performance would have meant that there would be no significant difference.
the mean of BEST is 4.82 as compared with the mean of SPOT of 4.56, for a mean difference of 0.26 on a scale of 1 to 5.
contrasting
train_20038
Using hand-crafted evaluation metrics, they show that a genetic algorithm achieves good results in finding discourse trees.
they do not address clause combining, and we do not use hand-crafted metrics.
contrasting
train_20039
The basic idea is that we view the labeling of new name ( l) as a transformation of the labeling of the old one ( l ).
we do not know l so we have to sum over all possible former labelings L .
contrasting
train_20040
In a similar way, we marked both "Dr." and "Sir" as honorifics in our test data.
the Wall Street Journal treats them very differently from "Mr." in that the former tend to be included even in the first mention of a name, while the latter is not.
contrasting
train_20041
Unfortunately, because "Mikio" does not appear elsewhere, the program is at a loss to decide which label to give it.
because Yotaro is assumed to be a first name, the program makes "Mikio Suzuki" coreferent with "Yotaro Suzuki" by labeling "Mikio" descriptor.
contrasting
train_20042
Naive Baseline The probabilistic model described in Section 2 explicitly takes adjective/adverb and verb co-occurrences into account.
one could derive meanings for polysemous adjective-noun combinations by solely concentrating on verb-noun relations, ignoring thus the adjective/adverb and verb dependencies.
contrasting
train_20043
More recently, (Gonzalo et al., 2000) also discusses potential usefulness of systematic polysemy for clustering word senses for IR.
extracting systematic relations from large sense inventories is a difficult task.
contrasting
train_20044
As a note, our lexicon is similar to CORELEX (Buitelaar, 1998) (or CORELEX-II presented in (Buitelaar, 2000)), in that both lexicons share the same motivation.
our lexicon differs from CORELEX in that CORELEX looks at all senses of a word and groups words that have the same sense distribution pattern, whereas our lexicon groups word senses that have the same systematic relation.
contrasting
train_20045
These two languages were chosen as the development set because they are represented by the most complete vocabularies and share the largest number of cognates.
as it turned out later, they are also the most closely related among the four Algonquian languages, according to all measures of phonetic similarity.
contrasting
train_20046
Most of the improvement comes from detecting entries that have matching glosses.
the contribution of WordNet is small.
contrasting
train_20047
In particular, decision trees can be converted into implicational rules that an expert could inspect and can in principle be compiled back into finite-state machines (Sproat and Riley, 1996), although that would re-introduce the original efficiency problems.
finite-state transducers have the advantage of being invertible, which can be exploited e. g. for testing hand-crafted rule sets.
contrasting
train_20048
This points to a fundamental shortcoming of the usual two-step procedure, which we followed here: the goodness of an alignment performed in the first step should be determined by the impact it has on producing an optimal classifier that is induced in the second step.
there is no provision for feedback from the second step to the first step.
contrasting
train_20049
For example, Core and Schubert [8] point to counterexamples such as "have the engine take the oranges to Elmira, um, I mean, take them to Corning" where the antecedent of "them" is found in the EDITED words.
we believe that the assumption is so close to true that the number of errors introduced by this assumption is small compared to the total number of errors made by the system.
contrasting
train_20050
All words are assigned one of the two possible labels, EDITED or not.
in our evaluation we report the accuracy of only words other than punctuation and filled pauses.
contrasting
train_20051
For example, [13] uses a subsection of the ATIS corpus, takes as input the actual speech signal (and thus has access to silence duration but not to words), and uses as its evaluation metric the percentage of time the program identifies the start of the interregnum (see Section 2.2).
[9,10] use an internally developed corpus of sentences, work from a transcript enhanced with information from the speech signal (and thus use words), but do use a metric that seems to be similar to ours.
contrasting
train_20052
Important technical problems involving leftrecursive and unit productions are examined and overcome in (Stolcke, 1995).
these complications do not add any further machinery to the parsing algorithm per se beyond the grammar rules and the dot-moving conventions: in particular, there are no heuristic parsing principles or intermediate structures that are later destroyed.
contrasting
train_20053
Previous studies in unsupervised methods for parsing have concentrated on the use of inside-outside algorithm (Lari and Young, 1990;Carroll and Rooth, 1998).
there are several limitations of the inside-outside algorithm for unsupervised parsing, see (Marcken, 1995) for some experiments that draw out the mismatch between minimizing error rate and iteratively increasing the likelihood of the corpus.
contrasting
train_20054
Other approaches have tried to move away from phrase structural representations into dependency style parsing (Lafferty et al., 1992;Fong and Wu, 1996).
there are still inherent computational limitations due to the vast search space (see (Pietra et al., 1994) for discussion).
contrasting
train_20055
Yet we illustrated that induced semantics can help overcome some of these errors.
we have since observed that induced semantics can give rise to different kinds of problems.
contrasting
train_20056
Despite the semantic, orthographic, and syntactic components of the algorithm, there are still valid PPMVs, (X<Y), that may seem unrelated due to corpus choice or weak distributional properties.
x and Y may appear as members of other valid PPMVs such as (X<Z) and (Z<Y) containing variants (Z, in this case) which are either semantically or syntactically related to both of the other words.
contrasting
train_20057
We will incorporate the variable context length model into our system. Considering more predictable bound: In our experiments, we introduce new types of voting methods which stem from the theorems of SVMs - VC bound and Leave-One-Out bound.
chapelle and Vapnik introduce an alternative and more predictable bound for the risk and report their proposed bound is quite useful for selecting the kernel function and soft margin parameter (Chapelle and Vapnik, 2000).
contrasting
train_20058
These approaches have produced interesting results, including applications involving real world dialogue systems.
reinforcement learning suffers from the fact that it is state based.
contrasting
train_20059
Many of these take a usercentric approach based on Wizard of Oz studies and iterative design (Bernsen et al., 1998).
there are still no precise guidelines about when to use specific techniques such as mixed-initiative.
contrasting
train_20060
A typical method used for NE extraction of Japanese texts is a cascade of morphological analysis, POS tagging and chunking.
there are some cases where segmentation granularity contradicts the results of morphological analysis and the building units of NEs, so that extraction of some NEs are inherently impossible in this setting.
contrasting
train_20061
Then, the shorter units tend to be hidden behind the longer unit words.
introducing the shorter unit words is more necessary to named entity extraction to generalize the model, because the shorter units are shared by many compound words.
contrasting
train_20062
Our work presents a novel knowledge-lean algorithm that uses multiple-sequence alignment (MSA) to learn to generate sentence-level paraphrases essentially from unannotated corpus data alone.
to previous work using MSA for generation (Barzilay and Lee, several versions of their component sentences.
contrasting
train_20063
Previous approaches to paraphrase acquisition focused on certain rigid types of paraphrases, for instance, limiting the number of arguments.
our method is not limited to a set of a priori-specified paraphrase types.
contrasting
train_20064
For example, (Clarke et al., 2003) and (Lin et al., 2003) employ techniques for utilizing both unstruc-tured text and structured databases for question answering.
the approaches taken by both these systems differ from ours in that they enforce an order between the two strategies by attempting to locate answers in structured databases first for select question types and falling back to unstructured text when the former fails, while we explore both options in parallel and combine the results from multiple answering agents.
contrasting
train_20065
Such knowledge would include both the spelling and pronunciation of a new word, as well as an understanding of its usage in the language (e.g., a semantic category).
this is a difficult task to carry out effectively, challenging both with regard to the automatic acquisition of the sound-to-letter mapping from typically telephone-quality speech, and the system level aspect of integrating the usually off-line activities of system upgrade while seamlessly continuing the conversation with the user.
contrasting
train_20066
Deletions occur when part of the spelled portion is mistakenly identified as part of the unknown word or insertions arise when the end of a spoken word is confused for a spelled letter.
the multi-stage system produces a marked improvement if we compare it with the single-stage letter recognizer as a baseline.
contrasting
train_20067
Our final selection process is based only on the proposed spellings obtained from the pronounced word, after feeding information from the spelled part into the second stage.
performance may improve if we apply a strict constraint during the search, explicitly allowing only paths where the spoken and spelled part of the waveforms agree on the name spelling.
contrasting
train_20068
These cases are of central interest in language and speech processing.
automata with other kinds of weights can also be defined.
contrasting
train_20069
We have found no simple necessary and sufficient condition on (K, ⊗) that guarantees a globally consistent set of choices to exist.
we have given a useful necessary condition (greedy factorization), and we now give a useful sufficient condition.
contrasting
train_20070
SWAP any two non-overlapping regions.
if we limit the size of the swapped regions to a constant I and their distance to a constant P , we can reduce the number of swaps performed to a linear function of the input length.
contrasting
train_20071
The complexity of calculating the alignment probability globally (that is, over the entire alignment) is ¢ ¡£ § .
since there is a constant upper bound 3 on the size of local contexts, it needs to be performed only once for the initial gloss; thereafter, recalculation of only those probabilities affected by each change suffices.
contrasting
train_20072
For the Arabic test data from the same evaluation, we obtained a similar shape (although with a roughly level plateau).
the 'bumpiness' of the surface raises the question as to which of these differences are statistically significant. We are aware of several ways to determine the statistical significance of BLEU score differences.
contrasting
train_20073
For example, both relations ⟨apartment 1; woman 1; No⟩ and ⟨hand 1; woman 1; Yes⟩ are mapped into the more general type ⟨entity 1; entity 1; Yes/No⟩.
the first example is negative (a POSSESSION relation), while the second one is a positive example.
contrasting
train_20074
For the examples described above, the procedure eliminates the ambiguity through specialization of the semantic classes into two new ones: whole - causal agent, and respectively part - causal agent.
if the training corpus contained the examples ⟨leg 2; insect 1; Yes⟩ and ⟨world 7; insect 1; No⟩, the procedure specializes them in the ambiguous example ⟨part 7; organism 1; Yes/No⟩ and the ambiguity still remains.
contrasting
train_20075
Word sense disambiguation could, therefore, also independently be performed using the semantic coherence scoring described herein as an additional application of our approach.
that has not been investigated thoroughly yet.
contrasting
train_20076
There were 209 added words (about 26%).
128 words (or 61% of missing words) were not actually missing, but rather not linked into the set of clusters evaluated by a particular annotator.
contrasting
train_20077
Schuetze (1998) proposed a method for dividing occurrences of a word into classes, each of which consists of contextually similar occurrences.
it does not produce definitions of senses such as sets of synonyms and sets of translation equivalents.
contrasting
train_20078
Generally speaking, the problem becomes more difficult in translingual distributional word clustering, since the sparseness of data in two languages is multiplied.
the sense-vs.-clue correlation matrix calculation method overcomes this difficulty; it calculates the correlations between senses and clues iteratively to smooth out the sparse data.
contrasting
train_20079
4) Cluster senses by using a hierarchical agglomerative clustering method, e.g., the group-average method.
this naive method is not effective because some senses usually have duplicated definitions in step 1) despite the fact that the sense-vs.-clue correlation matrix calculation algorithm presupposes a set of senses without duplicated definitions.
contrasting
train_20080
To situate our results, the FOMs used by (Caraballo and Charniak, 1998) require 10K edges to parse 96% of these sentences, while BF requires only 6K edges.
the more complex, tuned FOM in is able to parse all of these sentences using around 2K edges, while BF requires 7K edges.
contrasting
train_20081
We first observe that the complete set of word alignments generated by the ATTM (ATTM-C) is relatively poor.
when we consider only those word alignments generated by actual alignment templates (ATTM-A) (and discard the alignments generated by the dummy templates introduced as described in Section 3.1), we obtain very high alignment precision.
contrasting
train_20082
These components can be refined and improved by changing the corresponding transducers without requiring changes to the overall search procedure.
some of the modeling assumptions are extremely strong.
contrasting
train_20083
The Document Understanding Conference (DUC 2002) run by the National Institute of Standards and Technology (NIST) sets out to address this problem by providing annual large scale common evaluations in text summarization.
these evaluations involve human judges and hence are subject to variability (Rath et al.
contrasting
train_20084
It consistently correlated highly with human assessments and had high recall and precision in significance test with manual evaluation results.
the weighted average of variable length n-gram matches derived from IBM BLEU did not always give good correlation and high recall and precision.
contrasting
train_20085
For their purposes, they mainly need to ensure the correctness of consensus among different translations, so that different constituent orderings in input sentences do not pose a serious prob- lem.
we want to ensure the correctness of all paths represented by the FSAs, and direct application of MSA in the presence of different constituent orderings can be problematic.
contrasting
train_20086
From the curves, one can see that as the order increases, classification accuracy increases and testing entropy decreases, presumably because the longer context better captures the regularities of the text.
at some point accuracy begins to decrease and entropy begins to increase as the sparse data problems begin to set in.
contrasting
train_20087
Even if all f -structures are transferred from the same linguistically rich source, and all generated strings are grammatical, a reduction in error rate of around 50% relative to the upper bound can be achieved by stochastic selection.
a comparison between transfer runs with and without perfect disambiguation of the original string shows a decrease of about 5% in F-score, and of only .1 points for summarization quality when transferring from packed parses instead of from the manually selected parse.
contrasting
train_20088
In our case, combining the counts in this way yielded a half a point, perhaps because of the in-domain tuning of the smoothing parameters.
when we optimize α empirically on the held-out corpus, we can get nearly a full point improvement.
contrasting
train_20089
Furthermore, they are trained to minimize some function related to labeling error, leading to smaller error in practice if enough training data are available.
generative models are trained to maximize the joint probability of the training data, which is not as closely tied to the accuracy metrics of interest if the actual data was not generated by the model, as is always the case in practice.
contrasting
train_20090
In contrast, generative models are trained to maximize the joint probability of the training data, which is not as closely tied to the accuracy metrics of interest if the actual data was not generated by the model, as is always the case in practice.
since sequential classifiers are trained to make the best local decision, unlike generative models they cannot trade off decisions at different positions against each other.
contrasting
train_20091
The approximated diagonal term H_f for feature f has the form …. If this approximation is semidefinite, which is trivial to check, its inverse is an excellent preconditioner for early iterations of CG training.
when the model is close to the maximum, the approximation becomes unstable, which is not surprising since it is based on feature independence assumptions that become invalid as the weights of interaction features move away from zero.
contrasting
train_20092
Mixed CG training converges slightly more slowly than preconditioned CG.
cG without preconditioner converges much more slowly than both preconditioned CG and mixed CG training.
contrasting
train_20093
On the other hand, CG without preconditioner converges much more slowly than both preconditioned CG and mixed CG training.
it is still much faster than GIS.
contrasting
train_20094
On the application side, (log-)linear parsing models have the potential to supplant the currently dominant lexicalized PCFG models for parsing by allowing much richer feature sets and simpler smoothing, while avoiding the label bias problem that may have hindered earlier classifier-based parsers (Ratnaparkhi, 1997).
work in that direction has so far addressed only parse reranking (Collins and Duffy, 2002;Riezler et al., 2002).
contrasting
train_20095
In order to use more information, we might imagine using values of ec directly, rather than thresholding.
this quickly leads to data sparsity problems.
contrasting
train_20096
Various linguistics studies have also shown how intertwined syntax and discourse are (Maynard, 1998).
to our knowledge, this is the first paper that empirically shows that the connection between syntax and discourse can be computationally exploited at high levels of accuracy on open domain, newspaper text.
contrasting
train_20097
Little improvement comes from using manually built syntactic parse trees instead of automatically derived trees.
experiments show that there is much to be gained if better discourse segmentation algorithms are found; 83% accuracy on this task is not sufficient for building highly accurate discourse trees.
contrasting
train_20098
Current state-of-the-art statistical parsers (Collins, 1999;Charniak, 2000) are trained on large annotated corpora such as the Penn Treebank (Marcus et al., 1993).
the production of such corpora is expensive and labor-intensive.
contrasting
train_20099
One such criterion is the accuracy of the labeled examples, which may be estimated by the teacher parser's confidence in its labels.
the examples that the teacher correctly labeled may not be those that the student needs.
contrasting