id: stringlengths (7 to 12)
sentence1: stringlengths (6 to 1.27k)
sentence2: stringlengths (6 to 926)
label: stringclasses (4 values)
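The schema above lists four string fields per row. As a minimal loading sketch, assuming the rows are stored as JSON Lines in a file named train.jsonl (the path and storage format are assumptions, not something stated by this listing):

```python
import json
from collections import Counter

# Minimal sketch: read one JSON object per line with the fields from the
# schema above (id, sentence1, sentence2, label). "train.jsonl" is a
# hypothetical path; substitute wherever the data actually lives.
rows = []
with open("train.jsonl", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if line:
            rows.append(json.loads(line))

# label is a 4-class field; every row excerpted below is "contrasting".
print(Counter(row["label"] for row in rows))

# Inspect one sentence pair.
pair = next(row for row in rows if row["label"] == "contrasting")
print(pair["id"])
print("S1:", pair["sentence1"])
print("S2:", pair["sentence2"])
```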
train_17500
He evaluated the method on 12 English nouns, and showed over 90% precision.
the evaluation was conducted on just a small part of the words' senses, not on all of them.
contrasting
train_17501
However, the evaluation was conducted on only a small part of the senses of the target words, as in [8].
many senses in WordNet do not have unambiguous relatives through relationships such as synonyms, direct hypernyms, and direct hyponyms.
contrasting
train_17502
Word sense disambiguation (WSD) has been regarded as essential in many high-level NLP applications that require a certain degree of semantic interpretation, such as machine translation, information retrieval (IR), and question answering.
previous investigations into the role of WSD in IR have shown that low accuracy in WSD negated any possible performance increase from ambiguity resolution [1,2].
contrasting
train_17503
Recently, there has been work on the automatic (or semi-automatic) interpretation of NCs [4,7,8].
most of this work is based on a simplifying assumption as to the scope of semantic relations or the domain of interpretation, making it difficult to compare the performance of NC interpretation in a broader context.
contrasting
train_17504
It supports our idea of interpreting NCs by similarity.
as [19] point out, information on relatedness has not been developed as actively as conceptual similarity.
contrasting
train_17505
If we were to use the crossover point (similarity ≥ 0.57), we would clearly "infect" the training data with a significant number of misclassified instances, namely 30.69% of the new training instances; this would have an unpredictable impact on classification performance.
if we were to select a higher threshold based on a higher estimated proportion of correctly-classified instances (e.g.
contrasting
train_17506
These settings might lead to an unfair comparison, as the perceptron algorithm will select far more features than the boosting and MSR algorithms.
we used these settings because all of the algorithms converged under them.
contrasting
train_17507
Our syntactic elements of expression capture differences in the way people express content and could be useful for authorship attribution.
the experiments we present here indicate that syntactic elements of expression are more successful at identifying expression in individual books while function words are more successful at identifying authors.
contrasting
train_17508
This definition is valid if we can assume that word difficulty is independent of factors such as context or listeners.
we think such an assumption is not always true.
contrasting
train_17509
Lexical choice has been widely discussed in both paraphrasing and natural language generation (NLG).
to the best of our knowledge, no research addresses topic adaptation.
contrasting
train_17510
Its adaptation-guided retrieval makes it ultimately similar to our system.
our approach differs from it in two respects.
contrasting
train_17511
Much previous research focused on extracting bilingual lexicons from parallel corpora, and readers can refer to the reviews [1], [2] for details.
due to the restriction of currently available parallel corpora for different languages, together with the fact that corpus annotation requires a lot of manpower and resources, researchers have attempted to extract translations from non-parallel corpora or Web data.
contrasting
train_17512
Combining the translations of each constituent to acquire the translation of a multiword term is well suited to translation acquisition for base noun phrases.
terminologies and technical terms often contain unknown words, and their translations are seldom a combination of the translations of each constituent.
contrasting
train_17513
Since his research focused on finding English translation given a Japanese term, the segmentation of Japanese could be avoided.
our problem is to find the Chinese equivalent of an English term, so we have to cope with obtaining the correct boundaries of Chinese translations.
contrasting
train_17514
All possible forms of terminology translations can be comprehensively mined after character-based string frequency estimation.
many irrelevant items and redundant noise are formed in the process of mining.
contrasting
train_17515
We use the "vowel insertion rule" which is also used for verb phrases to check whether vowel elimination occurred in the content word and insert the eliminated vowel.
if the phrase is identified as a loanword phrase, we do not use the vowel insertion rule.
contrasting
train_17516
We take the logarithm of FSR. 3 Although there have been many existing works in this direction (Lua and Gan, 1994; Chien, 1997; Sun et al., 1998; Zhang et al., 2000; Sun et al., 2004), we have to skip the details of comparing MI due to the length limitation of this paper.
our experiments with MI provide no evidence against the conclusions in this paper.
contrasting
train_17517
Note that boundary F-measure gives a much higher score than word F-measure for the same segmentation output.
in either metric, we can find no evidence in favor of decoding algorithm (3).
contrasting
train_17518
When we look at recall and precision numbers directly, we observe that even without rules, the algorithm produces large recall boosts (especially after splitting).
these boosts are accompanied by precision losses, which result in unchanged or lower F-scores.
contrasting
train_17519
It is well known that syntactic structured information plays a critical role in many NLP applications, such as parsing, semantic role labeling, semantic relation extraction, and co-reference resolution.
it is still an open question what kinds of syntactic structured information are effective and how to effectively incorporate such structured information in these applications.
contrasting
train_17520
Among previous tree kernels, the convolution tree kernel represents the state of the art and has been successfully applied by Collins and Duffy (2002) to parsing, Moschitti (2004) to semantic role labeling, Zhang et al. (2006) to semantic relation extraction, and Yang et al. (2006) to pronoun resolution.
there exist two problems in Collins and Duffy's kernel.
contrasting
train_17521
"said" and "bit" in the sentences).
it may introduce some noise (e.g.
contrasting
train_17522
This convolution tree kernel has been successfully applied by Yang et al (2006) in pronoun resolution.
there is one problem with this tree kernel: the subtrees involved in the tree kernel computation are context-free (that is, they do not consider information outside the subtrees).
contrasting
train_17523
On the other hand, unsupervised learning-based methods do not need the definition of relation types and the availability of manually labeled data.
they fail to classify exact relation types between two entities and their performance is normally very low.
contrasting
train_17524
However, extending with synonyms mainly adds repetitious information, which cannot define the topics more clearly.
topic-based research should be real-sensitive.
contrasting
train_17525
The preprocessing, feature weighting, and similarity computation in the dynamic method are similar to those in the baseline method.
the vector representation for a story here is dynamic.
contrasting
train_17526
In the baseline method, term vectors are dynamic because of the incremental tf * idf weighting.
dynamic information extending is another, more important reason in the dynamic method.
contrasting
train_17527
The difference between the two dynamic methods is due to differences in P_miss.
it is too small to compare the two dynamic systems.
contrasting
train_17528
The experiment results indicate that this method can effectively improve the performance on both miss and false alarm rates, especially the latter.
we should realize that there are still some problems to solve in story link detection.
contrasting
train_17529
Therefore, bigram SUM and bigram PP methods easily achieve good performance for English origin.
for Japanese origin (represented by 1413 Chinese characters) and Chinese origin (represented by 2319 Chinese characters), the data sparseness becomes acute and causes performance degradation in SUM and PP models.
contrasting
train_17530
If the foreign word is not found in the CMU Speech dictionary, we guess its pronunciation using the method described by Oh and Choi.
in this case, the context window size is 3.
contrasting
train_17531
These word groups thus seem to be similar to chunks as generally understood (Molina and Pla, 2002;Megyesi, 2002).
chunks in UCSG are required to correspond to thematic roles, which means, for example, that prepositional phrases are handled properly.
contrasting
train_17532
Extensive experimentation has been carried out to evaluate the performance of the system.
direct comparisons with other chunkers and parsers are not feasible as the architectures are quite different.
contrasting
train_17533
Memory-Inductive Categorial Grammar, abbreviated MICG, is a version of pure categorial grammar extended by ellipsis-based analysis.
it relies on antecedent memorization, gap induction, and gap resolution that outperform CCG's functional composition and type raising.
contrasting
train_17534
Unithood concerns whether or not a sequence of words should be combined to form a more stable lexical unit.
termhood measures the degree to which these stable lexical units are related to domain-specific concepts.
contrasting
train_17535
Seretan et al. (2004) tested mutual information, log-likelihood ratio, and t-tests to examine the use of results from web search engines for determining the collocational strength of word pairs.
no performance results were presented.
contrasting
train_17536
For example, "E. coli food poisoning", "E. coli" and "food poisoning" are acceptable as valid complex term candidates.
"E. coli food" is not.
contrasting
train_17537
If a candidate word has a 'cohesive relation' with the words in the chain, it is added to the chain.
if a candidate word is not related to any of the chains, a new chain is created for the candidate word.
contrasting
train_17538
According to the analysis in Section 2, our basic idea is clear: we regard supervised summarization as a problem of sequence segmentation.
in our approach, the features are obtained not only on the sentence level but also on the segment level.
contrasting
train_17539
Improvements of 8.3% and 4.9% are gained compared to the LSA and HITS models, respectively. Currently, the main problem of our method is that the search space becomes large when using the extended features and semi-CRF, so the training procedure is time-consuming.
it is not so unbearable, as has been proved in (Sarawagi and Cohen, 2004).
contrasting
train_17540
(1999) proposed a method based on supervised machine learning to identify whether two paragraphs contain similar information.
we found it was difficult to accurately identify EQ pairs between two sentences simply by using similarities as features.
contrasting
train_17541
Finally, the best SemEval 2007 Web People Search system (Chen and Martin, 2007) used techniques similar to ours: named entity recognition using off-the-shelf systems.
in addition to semantic information and the full-document condition, they also explore the use of contextual information such as the URL where the document comes from.
contrasting
train_17542
We have also shown that one system which uses one specific type of semantic information achieves state-of-the-art performance.
more work is needed, in order to understand variation in performance from one data set to another.
contrasting
train_17543
Recently, the transfer learning problem was tackled by applying the EM algorithm along with the naive Bayes classifier (Dai et al., 2007).
they are all monolingual text categorization tasks.
contrasting
train_17544
In addition, each source word can be translated by several related target words, with the latter being weighted.
among the proposed translation words, there may be irrelevant ones.
contrasting
train_17545
On one hand, the confidence measure allows us to adjust the original weights of the translations and to select the best translation terms according to all the information.
the confidence measures also provide us with a new weighting for the translation candidates that is comparable across different translation resources.
contrasting
train_17546
Such data are available in both speech recognition and machine translation.
in the case of CLIR, the goal of query translation is not strictly equivalent to that of machine translation.
contrasting
train_17547
Multiple translation resources are believed to contribute in improving the quality of query translation.
in most previous studies, only linear combination has been used.
contrasting
train_17548
This means that a word index alone can never distinguish the semantic difference between these sentences.
syntactic parsers can produce different dependency relations for each sentence.
contrasting
train_17549
Though unreasonable, TIA did not cause very bad performance.
relaxing the assumption by adding term dependencies into the retrieval model is still a basic IR problem.
contrasting
train_17550
For well-defined IRs such as relational database retrieval (E. Levin et al., 2000), significant words (=keywords) are obvious.
determining significant words for a more general IR task (T. Misu et al., 2004; C. Hori et al., 2003) is not easy.
contrasting
train_17551
Generally, a triphone model is used as the acoustic model in order to improve recognition accuracy.
a monophone model is used in this paper, since the proposed estimation method needs recognition errors (and IRDR).
contrasting
train_17552
Extracting concepts from user utterances by keyword spotting or heuristic rules has also been proposed (Seneff, 1992), where utterances can be transformed into concepts without major modifications to the rules. [Figure 2: Example of WFST for LU]
numerous complicated rules similarly need to be manually prepared.
contrasting
train_17553
The FILLER transition helps to suppress the insertion of incorrect concepts into LU results.
many output sequences are obtained for one utterance due to the FILLER transitions, because the utterance can be parsed with several paths.
contrasting
train_17554
Most prior acoustically based approaches to prosodic labeling have used local classifiers.
on phonological grounds, we expect that certain label sequences will be much more probable than others.
contrasting
train_17555
The subword-based UNK translation approach can be applied to all the UNKs indiscriminately.
if we know an UNK is a named entity, we can translate this UNK more accurately than using the subword-based approach.
contrasting
train_17556
The CWS of Hyperword generated many UNKs because it uses a large lexicon.
if named entity recognition was applied to the segmented results of Hyperword, more UNKs were produced.
contrasting
train_17557
• Baseline+Subword: the results were produced under the same conditions as the first, except that all of the UNKs were extracted, re-segmented by the subword CWS, and translated by the subword translation models.
the named entity translation was not used.
contrasting
train_17558
Therefore, its performance tends to be superior in Table 6 if W1 is used, especially for ECSet.
as W1 occasionally retrieves few snippets, it is not able to provide sufficient information.
contrasting
train_17559
Therefore, their performance greatly depended on their ability to mine transliteration candidates from the Web.
this system might create errors if it cannot find a correct transliteration candidate from the retrieved Web pages.
contrasting
train_17560
Because of this, their system's coverage and WA were poorer than ours 8.
our transliteration process was able to generate a set of transliteration hypotheses with excellent coverage and could thus achieve superior WA. Their system searched the Web using given source words and mined the retrieved Web pages to find target-language transliteration candidates.
contrasting
train_17561
In commercial translation environments, it is sometimes the case that texts are first translated by inexperienced translators and then edited by experienced translators.
this does not apply to voluntary translation.
contrasting
train_17562
Translations that are very literal, either lexically or structurally, are often also awkward.
a high degree of word order correspondence can be a positive sign (cf.
contrasting
train_17563
For example, when applying rules to 電子工業/electronic industry, _工業,industry is preferred to _業,industry.
in the evaluation step, the precision rate of _業,industry will be reduced when applied to full morphemes, such as 電子工業/electronic industry, and it could then be filtered out if its precision is lower than the threshold.
contrasting
train_17564
This overloads text messages with various tasks: negotiation issues themselves, introductions and closures traditional in negotiations, and even socializing.
electronic means make the contacts less formal, allowing people to communicate more freely.
contrasting
train_17565
and included another entity that they termed NUMBER.
for specific domains like weather forecasts, medical reports, or student reports, more varied domain entities form slots in templates, as we observe in our data; hence, a module handling domain-specific entities becomes essential for such a task.
contrasting
train_17566
Instead of considering matching role sequences for a set of propositions, we could just as well have considered matching bags of roles.
for the present corpus, we decided to use strict role sequences instead, because of the sentences' rigid structure and the absence of any passive sentences.
contrasting
train_17567
We decided at first to look at syntactic parse tree similarities between constituents.
there is a need to decide at what level of abstraction one should consider matching the parse trees.
contrasting
train_17568
As we see, the logistic regression discriminates between the majority of the examples by assigning extreme probabilities (0 and 1).
there are some examples which are extremely borderline, and thus it does not generalize well to the test set.
contrasting
train_17569
It shows that the extended dictionaries can translate part of Korean NEs into Chinese.
there are still many NEs that the extended dictionaries cannot cover.
contrasting
train_17570
Sentiment words are a basic resource for sentiment analysis and thus believed to have a great potential for applications.
it is still an open problem how we can effectively use sentiment words to improve the performance of sentiment classification of sentences or documents.
contrasting
train_17571
Although these pieces of work aim to predict not sentence-level but document-level sentiments, their concepts are similar to ours.
all the above methods require annotated corpora for all levels, such as both subjectivity for sentences and sentiments for documents, which are fairly expensive to obtain.
contrasting
train_17572
One of the simplest ways to classify sentences using word-level polarities would be majority voting, where the occurrences of positive words and those of negative words in the given sentence are counted and compared with each other.
this majority voting method has several weaknesses.
contrasting
train_17573
To circumvent this problem, Kennedy and Inkpen (2006) and Hu and Liu (2004) proposed to use a manually-constructed list of polarity-shifters.
it has limitations due to the diversity of expressions.
contrasting
train_17574
However, the bigram model cannot generalize the learned knowledge to other features such as "not great" or "not disappoint".
our polarity-shifter model learns that the word "not" causes polarity-shifts.
contrasting
train_17575
When we have a large amount of training data, the ngram classifier can learn well whether each n-gram tends to appear in the positive class or the negative class.
when we have only a small amount of training data, the n-gram classifier cannot capture such tendency.
contrasting
train_17576
As can be seen in Figure 1, the classifier managed to map the reviews onto the coordinate system.
there are very few points in the neutral region, that is, on the same X = 0 line as balanced but with low sentiment density.
contrasting
train_17577
Every new iteration produced an even poorer result as each new extracted corpus was of lower accuracy.
it seems that a seed list consisting of several low-frequency one-character words can compensate each other and produce better results by capturing a larger part of the corpus (thus increasing recall).
contrasting
train_17578
Phrase-based SMT (Statistical Machine Translation) models have advanced the state of the art in machine translation by expanding the basic unit of translation from words to phrases, which allows the local reordering of words and translation of multi-word expressions (Chiang, 2007; Koehn et al., 2003; Och and Ney, 2004).
phrase-based SMT techniques suffer from data sparseness problems, that is, unreliable translation probabilities for low-frequency phrases and low coverage, in that many phrases encountered at run-time are not observed in the training data.
contrasting
train_17579
Moreover, we can smooth the phrase translation probability using the class of paraphrases.
eBMT or PBMT systems might translate a given sentence quickly and robustly, guided by sentence translation patterns or generalized transfer rules.
contrasting
train_17580
This process is based on sequentially classifying segments of several adjacent words.
this technique requires that entire segments have the same class label, while our technique does not.
contrasting
train_17581
Moreover, compared to a large-scale dictionary, our domain knowledge is much easier to obtain.
all the above models treat NER as classification or sequence labeling problem.
contrasting
train_17582
Since PMI ranges over [-∞, +∞], the point of dividing pmi(i,p) by max_pmi in Espresso is to normalize the reliability to [0, 1].
using PMI directly to estimate the reliability of a pattern when calculating the reliability of an instance may lead to unexpected results, because the absolute value of PMI is highly variable across instances and patterns.
contrasting
train_17583
Both Basilisk and Espresso identify location names as context patterns (e.g., #東京 "Tokyo", #九州 "Kyushu"), which may be too generic to be characteristic of the domain.
tchai finds context patterns that are highly characteristic, including terms related to transportation (#+格安航空券 "discount plane ticket", #マイレージ "mileage") and accommodation (#+ホテル "hotel").
contrasting
train_17584
The filtering of generic patterns (green) does not show [...]. 4 Note that Basilisk and Espresso use context patterns only for the sake of collecting instances, and are not interested in the patterns per se.
they can be quite useful in characterizing the semantic categories they are acquired for, so we chose to compare them here.
contrasting
train_17585
Supervised learning models set their parameters using given labeled training data, and generally outperform unsupervised learning methods when trained on equal amount of training data.
creating a large labeled training corpus is very expensive and time-consuming in some real-world cases such as word sense disambiguation (WSD).
contrasting
train_17586
Interestingly, deciding when to stop active learning is an issue seldom mentioned in these studies.
it is an important practical topic, since it obviously makes no sense to continue the active learning procedure until the whole corpus has been labeled.
contrasting
train_17587
Thirdly, the active learning process can stop if the targeted performance level is achieved.
it is difficult to predefine an appropriate and achievable performance, since it should depend on the problem at hand and the users' requirements.
contrasting
train_17588
From Fig. 2 we can see that it is not easy to automatically determine the point of no significant performance improvement on the validation set, because points such as "20" or "80" would mislead the final judgment.
we do believe that the change of performance is a good signal to stop active learning process.
contrasting
train_17589
The feature fusion was shown to be effective.
their methods hinge on the availability of a labeled bilingual corpus.
contrasting
train_17590
(2006)), an ISO standard for lexical resource modelling, and an LMF version of FrameNet exists.
we believe that our usage of a typed formalism takes advantage of a strong logical foundation and the notions of inheritance and entailment (cf.
contrasting
train_17591
Finally, the closest neighbour to our proposal is the ATLAS project (Laprun et al., 2002), which combines annotations with a descriptive meta-model.
to our knowledge, ATLAS only models basic consistency constraints, and does not capture dependencies between different layers of annotation.
contrasting
train_17592
In the case of semantic roles, for example, such a regularity would be that semantic roles are not assigned to conflicting grammatical functions (e.g., deep subject and object) within a given lemma.
some of the role sets we extracted contained exactly such configurations.
contrasting
train_17593
For example, " (wei)" is a classifier used when counting people with honorific perspective.
judgement if " " can modify nouns such as "political prisoner" or "local villain" is rather uncertain.
contrasting
train_17594
Note that such relations are discarded as redundant ones.
the "indirect" case includes the relations which can not be extracted from the database but only inferred by using transitivity on the taxonomy.
contrasting
train_17595
In the objective evaluation, this answer is marked as incorrect.
some users may find this snippet useful, although they may still prefer the seventh or eighth text snippets from Table 1 as primary answers, as they mention Barbara Jordan's election to a state legislature in 1966, and to the Congress in 1972.
contrasting
train_17596
Recent systems at the NTCIR-6 1 Question Answering Challenge (QAC-4) can handle why-questions.
their performance is much lower than that of factoid QA systems (Fukumoto et al., 2004;Voorhees and Dang, 2005).
contrasting
train_17597
Such patterns include typical cue phrases and POS-tag sequences related to causality, such as "because of" and "by reason of."
as noted in (Inui and Okumura, 2005), causes are expressed in various forms, and it is difficult to cover all such expressions by hand.
contrasting
train_17598
In the CoNLL-2005 shared task (SRL for English), the best system found causal adjuncts with a reasonable accuracy of 65% (Màrquez et al., 2005).
when we analyzed the data, we found that more than half of the causal adjuncts contain explicit cues such as "because."
contrasting
train_17599
A:000217262,L3 A mother panda often gives birth to two cubs, but when there are two cubs, one is discarded, and young mothers sometimes crush their babies to death.
a:000406060,L6 because of the recent development in the midland, they are becoming extinct.
contrasting
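A quick consistency check against the length and class bounds in the schema header might look like this (a sketch under the same JSONL assumption as above; the numeric bounds are read off the schema listing, with "1.27k" taken as 1270 characters):

```python
# Sketch: validate loaded rows against the schema bounds in the header.
# Assumes `rows` was loaded as in the earlier snippet.
def validate(rows):
    labels = set()
    for row in rows:
        assert 7 <= len(row["id"]) <= 12, row["id"]
        assert 6 <= len(row["sentence1"]) <= 1270  # "1.27k" in the schema
        assert 6 <= len(row["sentence2"]) <= 926
        labels.add(row["label"])
    # stringclasses: 4 values overall; a given split may use a subset.
    assert len(labels) <= 4, labels
    return labels

print(validate(rows))
```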