Columns:
  id        — string, length 7–12
  sentence1 — string, length 6–1.27k
  sentence2 — string, length 6–926
  label     — string, 4 classes
train_17400
However, these QA systems assumed a user model in which the user asks what-type questions.
there are a few QA systems which assumed a user model in which the user asks how-type questions, that is, how to do something and how to cope with some problem [3] [4] [7].
contrasting
train_17401
To solve this problem, [3] and [4] proposed methods of collecting knowledge from FAQ documents and technical manuals by using the document structure, such as, a dictionary-like structure and if-then format description.
mails posted to a mailing list, such as Vine Users ML, do not have a firm structure because questions and their answers are described in various ways.
contrasting
train_17402
Because they have no firm structure, it is difficult to extract precise information from mails posted to a mailing list in the same way as [3] and [4] did.
a mail posted to ML generally has a significant sentence.
contrasting
train_17403
As a result, the set of (Q4) and (DA4-1) was determined as correct.
the set of (Q4) and (DA4-2) was a failure.
contrasting
train_17404
This is because there is not much likelihood of matching a user's question with a wrong significant sentence extracted from question mails.
failures which were caused by wrong significant sentences extracted from DA mails were serious.
contrasting
train_17405
In Test 1, our system answered questions 2, 6, 7, 8, 13, 14, 15, 19, and 24.
the full text retrieval system answered questions 2, 5, 7, 19, and 32.
contrasting
train_17406
proposed two iterative scaling algorithms to find parameters for CRFs.
these methods converge into a global maximum very slowly.
contrasting
train_17407
This is why a rule-based chunking method is predominantly used.
with sophisticated rules, the rule-based chunking method has limitations when handling exceptional cases.
contrasting
train_17408
Note that we only invoke instantiate-hypothesis() on complete, top-level hypotheses, as the ME features of Toutanova and Manning [20] can actually be evaluated prior to building each full feature structure.
the procedure could be adapted to perform instantiation of sub-hypotheses within each local search, should additional features require it.
contrasting
train_17409
Treebanks provide instances of phrasal structures and their statistical distributions.
no treebank provides a sufficient amount of samples covering all types of phrasal structures, in particular for languages without inflectional markers, such as Chinese.
contrasting
train_17410
As a result, grammars directly extracted from treebanks suffer from low coverage and low precision [7].
arbitrarily generalizing applicable rule patterns may cause over-generation and increase ambiguities.
contrasting
train_17411
NP S rules and NP VP rules will be derived respectively from subject NP and object NP structures.
such difference seems not very significant in Chinese.
contrasting
train_17412
The maximization of the likelihood with the above model is equivalent to finding the model p M that is closest to the reference probability p 0 in terms of the Kullback-Leibler distance.
we cannot simply apply the above method to our task because the parameter estimation requires the computation of the above probability for all parse candidates T (s).
contrasting
train_17413
The main cost of using a large feature set is the increase of training time.
this may be paid off by giving the learner a better chance to achieve a better model.
contrasting
train_17414
It is particularly suitable for our research purpose.
to BNC and Brown corpus, the WSJ corpus indeed contains many more dots used in different ways for various purposes.
contrasting
train_17415
Their method has a remarkable advantage in that synonyms do not need to be surrounded with the same words.
their method is not applicable to structurally different MCTs.
contrasting
train_17416
From S1 and S2, the word pair "Monetary Union" and "Finance Minister Engoran" can be extracted.
the word "Monetary" in S1 does not appear in the synonym part of S2 but does appear in another part of S2.
contrasting
train_17417
This result suggests that our method could capture proper places of MCT pairs with this level of precision.
this precision falls to 70.0% without source texts, which represents the synonym acquisition precision.
contrasting
train_17418
Following the condition of MTCC data, the outside-appearance checking range covers entire texts, i.e., outside appearance should be checked throughout an article.
this condition is too expensive to follow since text length is much longer than that of MTCC data.
contrasting
train_17419
Experience showed that the lexicon learned in the candidate generation stage, while adequate for candidate generation, is not of sufficient quality for biparsing due to the non-parallel nature of the training data.
any translation lexicon of reasonable accuracy can be used.
contrasting
train_17420
No direct comparison of this figure is possible since previous work has focused on the rather different objectives of mining noisy parallel or comparable corpora to extract comparable sentence pairs and loose translations.
we can understand the improvement by comparing against scores obtained using the cosine-based lexical similarity metric which is typical of the majority of previous methods for mining non-parallel corpora, including that of Fung and Cheung (2004) [9].
contrasting
train_17421
Many methods of term extraction have been discussed in terms of their accuracy on huge corpora.
when we try to apply various methods that derive from frequency to a small corpus, we may not be able to achieve sufficient accuracy because of the shortage of statistical information on frequency.
contrasting
train_17422
It is obvious that this sort of information should be carefully controlled.
the filtering performance using the existing methodologies is still not satisfactory in general.
contrasting
train_17423
Using words as features in the first step aims at better statistical coverage: the 500 selected features in the first step can treat a majority of documents, constituting 63.13% of the test set.
using word bigrams as features in the second step aims at its better discriminating capability, although the number of features becomes comparatively large (3000).
contrasting
train_17424
Structure-mapping assumes that the causal behaviour of a concept is expressed in an explicit, graph-theoretic form so that unifying sub-graph isomorphisms can be found between different representations.
abstraction theories assume that analogous concepts, even when far removed in ontological terms, will nonetheless share a common hypernym that captures their causal similarity.
contrasting
train_17425
Thus, we should expect an analogous pairing like surgeon and butcher to have different immediate hypernyms but to ultimately share an abstraction like cutting-agent (see [8,9]).
the idea that a standard ontology will actually provide a hypernym like cutting-agent seems convenient almost to the point of incredulity.
contrasting
train_17426
PWN is differential in nature: rather than attempting to express the meaning of a word explicitly, PWN instead differentiates words with different meanings by placing them in different synsets, and further differentiates synsets from one another by assigning them to different positions in its ontology.
HowNet is constructive in nature, exploiting sememes from a less discriminating taxonomy than PWN's to compose a semantic representation of meaning for each word sense.
contrasting
train_17427
Since this new taxonomy is derived from the use of {~} in HowNet definitions, both the coverage and recall of analogy generation crucially depend on the widespread use of this reflexive construct.
of the 23,505 unique definitions in HowNet, just 6430 employ this form of self-reference.
contrasting
train_17428
These results, along with several studies, also show the superiority of Skew Divergence.
measures for vectors such as Euclidean distance achieved relatively poor performance compared to those for probability distributions.
contrasting
train_17429
The problems mentioned earlier make it more difficult to match concepts by the algorithm.
we can use the algorithm to identify where the problems occur.
contrasting
train_17430
There have been several works on building a taxonomy of nouns from an MRD.
most of them relied on the lexico-syntactic patterns compiled by human experts.
contrasting
train_17431
A relation is basically an ordered pair, thus "Sam was flown to Canada" contains the relation AT(Sam, Canada) but not AT(Canada, Sam).
7 relations in NEAR (relative location) and SOCIAL (associate, other-personal, other-professional, other-relative, sibling, and spouse) types are symmetric.
contrasting
train_17432
This does not necessarily mean that we can choose NEAR when the entity distance in an example is close to 3.23.
there is an interesting result about NEAR when we apply the classifier trained on training data to held-out data.
contrasting
train_17433
2) Similarity calculation: The similarity between two relation instances is defined between two parse trees.
state-of-the-art parsers are always error-prone.
contrasting
train_17434
This also prevents them from working well on less-frequent data [8].
for the similarity function in our approach, the best threshold is much greater than 0.
contrasting
train_17435
Assuming we have an input (instance) space X and an output (label) space Y, plus a labeled data set L and an unlabeled data set U (as mentioned before, no distinction is made between "unlabeled" and "test" data in the transductive learning setting), one could distinguish three types of learning paradigms, where f represents the induced model, one of which is induction with unlabeled data. The three learning paradigms clearly have different advantages and different application scenarios.
when it comes to exploiting unlabeled data, the tradeoff between the last two is not yet well understood.
contrasting
train_17436
#NAME?
it may be worth the effort to investigate other alternatives.
contrasting
train_17437
"12/02/1809"), it is not the case for question [Eb], because a proper answer could be "35,000 years ago".
if it is known that the time granularity concerned is "thousands of years", answer extraction becomes more targeted.
contrasting
train_17438
the questions beginning with "which year" or "for how many years".
some questions are not so obvious, e.g.
contrasting
train_17439
In these workshops, the inputs to systems are only single-sentence questions, which are defined as the questions composed of one sentence.
on the web there are a lot of multiple-sentence questions (e.g., Answer Bank, AskAnOwner), which are defined as questions composed of two or more sentences: For example, "My computer reboots as soon as it gets started.
contrasting
train_17440
[2] used hand-crafted rules for question classification.
methods based on pattern matching have the following two drawbacks: high cost of making rules or patterns by hand and low coverage.
contrasting
train_17441
Therefore, we also use SVM in classifying questions, as we will explain later.
please note that we treat not only usual single-sentence questions, but also multiple-sentence questions.
contrasting
train_17442
In many researches, question focuses are extracted with hand-crafted rules.
since we treat all kinds of questions including the questions which are not in an interrogative form, such as "Please teach me -" and "I don't know -", it is difficult to manually create a comprehensive set of rules.
contrasting
train_17443
In terms of distribution, the vowel /\/ does not occur at the beginning of a syllable except in the conjugational variants of verbs formed from the verbal stem /k\r\/ (to do.).
to this, though the letter " " exists in Sinhala writing system (corresponding to the consonant sound /j/), it is not considered a phoneme in Sinhala.
contrasting
train_17444
They simplify a machine transliteration problem into either ψ_G or ψ_P, assuming that one of ψ_G and ψ_P is able to cover all transliteration behaviors.
transliteration is a complex process, which does not rely on either source grapheme or phoneme.
contrasting
train_17445
ψ_H does not consider correspondence between source grapheme and phoneme during the transliteration process.
the correspondence plays important roles in machine transliteration.
contrasting
train_17446
A language model is also composed of the words in the training corpus.
the use of a full-form word itself may cause a severe data sparseness problem, which is especially relevant for more inflectional/agglutinative languages like Japanese and Korean.
contrasting
train_17447
Korean-to-Chinese translation), the performance was the worst compared with other language pairs and directions in BLEU and mWER.
the performance of Chinese-to-Korean was much better than Korean-to-Chinese, meaning that it is easier to generate a Korean sentence from Chinese, the same as in Japanese-to-Korean and English-to-Korean.
contrasting
train_17448
In addition, many other features, such as the answer candidate frequency, can be extracted based on the IR output and are regarded as indicative evidence for answer extraction [10].
in this paper, we aim to evaluate the answer extraction module independently, so we do not incorporate such features in the current model.
contrasting
train_17449
A naive solution to query formulation is using the keywords in an input question as the query to a search engine.
it is possible that the keywords may not appear in those answer passages which contain answers to the given question.
contrasting
train_17450
With the second constraint, we can delete '明明倩' because '明明' and '明倩' are words in the dictionary.
'小明', '小明明', and '小明明倩' will be kept because '小' is an "unattached" single character.
contrasting
train_17451
The data sparseness problem is practically non-existent in the character-based model because the Chinese character set is limited.
odd characters are occasionally found in Chinese personal or place names.
contrasting
train_17452
Galiano, Valdivia, Santiago and Lopez [14] use five statistical measures to classify generic MWEs using the LVQ (Learning Vector Quantization) algorithm.
we do a more detailed and focussed study of V-N collocations and the ability of various classifiers in recognizing MWEs.
contrasting
train_17453
Because we focused on bigrams in this paper, MWEs of longer than two tokens were ignored when assessing whether a candidate MWE was a true or false MWE.
some of these candidate MWEs were in fact substrings of a longer MWE.
contrasting
train_17454
Early approaches to statistical machine translation rely on the word-based translation model to describe the translation process [1].
the underlying assumption of word-to-word translation often fails to capture all properties of the language, i.e.
contrasting
train_17455
In the case of English to French translation, we follow the phrases in the English order.
it can be done along the target language as well since our approach follows a symmetric many-to-many word alignment strategy.
contrasting
train_17456
The most current guidelines of ST specify that zeros are dropped in order to maintain the consistency and efficiency of the treebank.
PKT advocates for representing zero elements.
contrasting
train_17457
We extracted only 100 sentences from the ST corpus containing natural spoken conversations and found that 81 sentences are represented as VPs or VNPs (predicate nominal phrases).
it may derive a misleading generalization such that canonical sentence patterns in the given corpus are VPs or VNPs.
contrasting
train_17458
In line with this, semantic interpretations of those incomplete VPs or VNPs subsume the meaning of the zero pronouns whose antecedents appear in the previous utterances.
zero-less mark-up poses a difficulty in retrieving the complete sentential meaning from the given phrasal categories of VPs or VNPs.
contrasting
train_17459
For example, one of the most frequently discussed topics in Korean grammar is formation of Double Subject Constructions (DSCs), which license two subjects.
zero-less treebanks do not correctly represent Double Subject Constructions and represent (5) and (6) differently in spite of their similarity in argument realization.
contrasting
train_17460
The first is classified as a grammatical topic marker while the latter is a contrastive topic marker in traditional Korean grammar.
the current annotations of PKT and ST treat topic marker nun as the same auxiliary postposition, which is similar to other postpositions man 'only', to 'also', and mace 'even'.
contrasting
train_17461
The reason why Koehn method outperforms IBM method D may be due to the different decoding strategy.
we still need further investigation to understand why Koehn method outperforms IBM method D significantly.
contrasting
train_17462
In our study, we enhance the LMM with the PM to account for the word reordering issue in NE translation, so our model is capable of modeling the non-monotone problem.
JSCM only models the monotone problem.
contrasting
train_17463
Bangalore and Riccardi [21] proposed a phrase-based variable length n-gram model followed by a reordering scheme for spoken language translation.
their re-ordering scheme was not evaluated by empirical experiments.
contrasting
train_17464
Chronological ordering, i.e., ordering sentences according to the published date of the documents they belong to [6], is one solution to this problem.
showing that this approach is insufficient, Barzilay [1] proposed a refined algorithm which integrates chronological ordering with topical relatedness of documents.
contrasting
train_17465
For sentences taken from the same document we keep the order in that document as done in single document summarization.
we have to be careful when ordering sentences which belong to different documents.
contrasting
train_17466
Both Kendall's τ coefficient and the Weighted Kendall coefficient measure discordant pairs between ranks.
in the case of summaries, we need a metric which expresses the continuity of the sentences.
contrasting
train_17467
TSC-3 corpus contains human selected extracts for 30 different topics.
in the TSC corpus the extracted sentences are not ordered to make a readable summary.
contrasting
train_17468
However, we cannot directly compare Lapata's [3] approach with our probabilistic expert as we do not use dependency structures. An ANOVA test of the results shows a statistically significant difference among the five methods compared in Table 2 at the 0.05 confidence level.
we could not find a statistically significant difference between CO and LO.
contrasting
train_17469
where U^T U = I, V^T V = I, and W is the diagonal matrix of singular values.
to our usage of SVD, [3] used a term-document matrix: our sentence-term matrix can be regarded as the transpose of the term-document matrix, since documents can be thought of as sentences in the summarization field.
contrasting
train_17470
In [7], the importance of each sentence is computed by repeatedly summing 1 for each occurrence of significant terms in the sentence.
the proposed method can be regarded as more formal or reasonable, since the Euclidean distance between vectors is used to calculate the degree of importance of each sentence.
contrasting
train_17471
The label at position i, s_i, is one of B, I, and O.
to the ME model, since B is the beginning of a term, the transition from O to I is not possible.
contrasting
train_17472
#NAME?
if a word frequently occurs in other positions, we regard it as having the property of a modifying noun.
contrasting
train_17473
That is, terms belonging to other classes in GENIA are excluded from the recognition target.
we consider all NEs in the boundary detection step since we separate the NER task into two phases.
contrasting
train_17474
[12] calculated accuracy, confidence and score of their patterns to select better patterns.
those statistical measures are calculated only using data obtained from their training corpus, which often cannot give enough information.
contrasting
train_17475
To resolve these inconsistencies, we basically use statistical measures such as the score of a boundary pattern, Score(p), and the rank of an entity candidate, Rank(e), as in most previous works.
this strategy is not always the best because some trivial errors can also be removed by simple heuristics and linguistic knowledge.
contrasting
train_17476
A new bilingual dictionary can be built using two existing bilingual dictionaries, such as Japanese-English and English-Chinese to build Japanese-Chinese dictionary.
since Japanese and Chinese are nearer to each other than to English, there should be a more direct way of doing this.
contrasting
train_17477
Their method can make the similar Chinese words to have higher ranking but cannot generate new translation candidates.
our method works for both.
contrasting
train_17478
Using English as the pivot language is a good starting point to construct a new language pair.
there remain a lot of words for which the translations cannot be obtained.
contrasting
train_17479
It is fair enough if we translate the single character words using the conversion table.
these characters should have more translations of other multi-character words.
contrasting
train_17480
In our survey, only 33% of nouns and 44% of verbal nouns created by kanji/hanzi conversion method exist in the Peking University dictionary.
this may be due to the incompleteness of the Chinese dictionary that we used.
contrasting
train_17481
Then, it is passed to the next methods, and segmented incorrectly.
the overlooking by the dictionary took place in the following cases. The reason for the lowered recall, that is, the overlooking of compounds, can be summarized as follows: especially for shorter words, it is actually very hard to set up clear criteria for compounds.
contrasting
train_17482
In the first example "十分/very" is an adverb and has no temporal meaning.
the characters "十/ten" and "分/minute" can be looked up and satisfy the grammar rule.
contrasting
train_17483
Because each possible substring in a sentence is tried, multiple nested, overlap or adjacent temporal expressions may exist in the sentence.
some of these expressions are just parts of the optimal answers.
contrasting
train_17484
Based on the assumption that two adjacent or overlapped temporal expressions refer to the same temporal concept, we combined them.
the procedure of combination cannot help to explain the meaning of the expressions.
contrasting
train_17485
The recall, up to above 70%, is higher than all the other three systems.
the precision at the same time is unfortunately the lowest.
contrasting
train_17486
Therefore we need to extract the key lexicons from UMLS for each semantic type in background processing and use them to tag unknown chunk with predicted types.
the semantic type checking for pronominal anaphors is done through the extraction of the key verbs for each semantic type.
contrasting
train_17487
They developed a discourse-annotated corpus and learned a discourse-parsing algorithm from it.
our discourse analyzer was based on generic heuristic rules.
contrasting
train_17488
When human writers create definitions, they take care of the structural elements and requirements described above.
when creating conceptual networks, dictionaries and language based examples are often employed as knowledge sources to determine lexical and semantic relations between words.
contrasting
train_17489
Normally, this task would be performed by a parser.
since the CoNLL dataset contains no parsing information and we did not want to use any resources not explicitly provided in the CoNLL data, we had to construct a PPA classifier to specifically perform this task.
contrasting
train_17490
Preposition SRL combined with [9] (P = precision, R = recall, F = F-score; above-baseline results in boldface) in the testing data, even when oracled outputs from all three subsystems are used (recall = 18.15%).
this is not surprising because we expected the majority of semantic roles to be noun phrases.
contrasting
train_17491
The reason for the worse results is that in our experiments, the oracled PPA always identifies more prepositions attached to verbs than the PPA classifier, therefore more prepositions will be given semantic roles by the SRD classifier.
since the performance of the SRD classifier is not high, and the segmentation subsystem does not always produce the same semantic role boundaries as the CoNLL data set, most of these additional prepositions would either be given a wrong semantic role or wrong phrasal extent (or both), thereby causing the overall performance to fall.
contrasting
train_17492
Hence, on the task of identifying incorrect relations in VERBOCEAN, our system has a precision of 85.7%, where precision is defined as the percentage of correctly identified erroneous relations.
it only achieved a recall of 16.2%, where recall is the percentage of erroneous relations that our system identified.
contrasting
train_17493
On one hand, we associate the words in the left half with food or cooking.
we associate those in the right half with animals.
contrasting
train_17494
Named entities are important constituents to identify roles, meanings, and relationships in natural language sentences.
named entities are productive, so that it is difficult to collect them in a lexicon exhaustively.
contrasting
train_17495
The most powerful external feature of creation titles is French quotes "《》", which are defined to represent book names in the set of standard Simplified Chinese punctuation marks of China [5].
they are not standard punctuation marks in Traditional Chinese.
contrasting
train_17496
Typically, agent and patient assemblies would be fixed in a case-role representation without such a discriminator, and the model required to learn to instantiate them correctly [10].
we found that the model performed much better when the task was recast as having to learn to isolate the nouns in the order in which they are introduced, and separately mark how those nouns relate to the verb.
contrasting
train_17497
For example, whether or not they take objects, what kind of objects they take, which types of case markers they require for subject and object constituents, and so on.
the attributes of verbs themselves are very important, which determine the quantity of arguments and the frame of sentences, such as the attributes of human verbs, volition verbs, controllable verbs, causative verbs, and so on.
contrasting
train_17498
It is noticed that the granularity of sequences does not seem to yield significantly different performance based on the current data.
whether this is true in general remains an open question.
contrasting
train_17499
Each rule in our model has been created relying on a set of linguistic tests used in the theory of LCS and our linguistic intuition on handling LCS.
the rule set was not sufficiently sophisticated, which led to 59 errors.
contrasting
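Every record shown in this chunk carries the label "contrasting", so a quick sanity check on a parsed chunk is simply a label count. A sketch, assuming records are dicts with a "label" key (the variable names are illustrative, not part of the dataset):

```python
from collections import Counter

def label_distribution(records):
    """Count how many records carry each label value."""
    return Counter(r["label"] for r in records)

records = [
    {"id": "train_17400", "label": "contrasting"},
    {"id": "train_17401", "label": "contrasting"},
]
dist = label_distribution(records)
print(dist)  # Counter({'contrasting': 2})
```

A chunk like this one should yield a single key; the full dataset header states four label classes, so a distribution over the whole dataset would have up to four keys.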