id: string (length 7-12)
sentence1: string (length 6-1.27k)
sentence2: string (length 6-926)
label: string (4 classes)
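As a minimal sketch of how records with this schema could be loaded and inspected, assuming the dump has been exported to a JSONL file (the filename contrast_pairs.jsonl is an assumption, not part of this dump):

```python
import json
from collections import Counter

def load_records(path):
    """Yield (id, sentence1, sentence2, label) tuples from a JSONL export."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)  # one record per line, keys match the schema above
            yield record["id"], record["sentence1"], record["sentence2"], record["label"]

if __name__ == "__main__":
    # Hypothetical export of this dump; adjust the path to your copy.
    records = list(load_records("contrast_pairs.jsonl"))
    print(Counter(label for *_, label in records))  # distribution over the 4 label classes
    rid, s1, s2, label = records[0]
    print(rid, label)       # e.g. train_17600 contrasting
    print("S1:", s1[:80])
    print("S2:", s2[:80])
```

The records below follow that schema, flattened to four lines each.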
train_17600
Traditional QA query expansion is based only on the individual keywords in a question.
the cluster-based expansion learns features from the context shared by similar training questions in a cluster.
contrasting
train_17601
In the above {the U.N. body, Kosovo} example, it may be difficult to determine the relationship between U.N. body and Kosovo straightforwardly.
making the right guess about the relationship is easier if the U.N. body is grouped with army and government.
contrasting
train_17602
(Huang et al., 2004) exploited the Web as a training corpus to train a classifier with user-defined categories.
it is widely recognized that when using documents on the Web, users must spend a great deal of time filtering out unrelated content.
contrasting
train_17603
Computer users increasingly need to produce text written in multiple languages.
typical computer interfaces require the user to change the text entry software each time a different language is used.
contrasting
train_17604
In our study, IMEs for European languages are built using simple transliteration: e.g., "[" typed on an English keyboard is transliterated into the German "ü".
the IMEs for Japanese and Chinese require a more complicated system because in these languages there are several candidate transcriptions for a key sequence.
contrasting
train_17605
This outperformed disambiguation using VSM, demonstrating the utility of the taxonomic information in Wikipedia and WordNet.
because not all words in Wikipedia have categories, and there are very few named entities in WordNet, the number of disambiguated words that can be obtained with MCAT (2,239) is less than when using VSM (12,906).
contrasting
train_17606
Similar rule-based systems exist for Spanish and Italian (Saquete et al., 2006).
such resources are restricted to a handful of languages.
contrasting
train_17607
The model which combines the optimal settings for timexes and events outperforms the uninformed baseline by 17.93% (timexes) and 6.64% (events) F1-measure.
exploration of the model space on the basis of the (larger and thus presumably more representative) test set shows that the optimised models do not generalise well.
contrasting
train_17608
(2006) obtain, for example, an entailment relation X wins → X plays from such a pattern as player wins.
their way of using nominalized verbs is highly limited compared with our way of using verbal nouns.
contrasting
train_17609
Because of its good tradeoff between efficiency and expressiveness, BTG restriction is widely used for reordering in SMT (Zens et al., 2004).
BTG restriction does not provide a mechanism to predict final orders between two neighboring blocks.
contrasting
train_17610
(2005) adapted the hook trick which changes the time complexity from O(n^(3+4(m-1))) to O(n^(3+3(m-1))).
the implementation of the hook trick with pruning is quite complicated.
contrasting
train_17611
The major goal of personalized search is to accurately model a user's information need, store it in the user profile, and then re-rank the results to suit the user's interests using the user profile.
understanding a user's information need is, unfortunately, a very difficult task, partly because it is difficult to model the search process, which is a cognitive process, and partly because it is difficult to characterize a user and his preferences and goals.
contrasting
train_17612
We use relevance feedback for personalization in our approach.
we propose a novel usage of relevance feedback to effectively model the process of query formulation and better characterize how a user relates his query to the document that he intends to retrieve as discussed in the web search process above.
contrasting
train_17613
Most common errors among the proposed methods were generated by a transformation pattern. Typically, dropping a nominal element N1 of a given nominal compound N1:N2 generalizes the meaning that the compound conveys, and thus results in correct paraphrases.
it caused errors in some cases; for example, since N1 was the semantic head in (7), dropping it was incorrect.
contrasting
train_17614
Consequently, for each stored word, we find its stem.
the approach needs more space.
contrasting
train_17615
Therefore, Japanese NER is tightly related to morphological analysis, and thus it is often performed immediately after morphological analysis (Yamada, 2007).
such approaches rely only on local context.
contrasting
train_17616
For example, while "Kawasaki" in the second sentence of (2) is the name of a person, "Kawasaki" in the second sentence of (3) is the name of a soccer team.
the second sentences of (2) and (3) are exactly the same, and thus it is impossible to correctly distinguish these NE classes by only using information obtained from the second sentences.
contrasting
train_17617
This clue is considered, to a certain extent, by using cache features.
if the same morpheme is not used, cache features cannot work.
contrasting
train_17618
The system then issues a new S-Query ['I have' AND 'this person for years'], and finally mines the new set of snippets to discover that 'known' is the preferred lexical option.
to proofing determiner usage errors, mfreq: mfreq = frequency of matched collocational verb/adj.
contrasting
train_17619
Thus, a false advertisement may be proposed.
the system may have enough information at the later stages.
contrasting
train_17620
On the other hand, the system may have enough information at the later stages.
users may complete their talk at any time in this case, so the advertisement effect may be lowered.
contrasting
train_17621
In this paper, only one intention is assigned to the utterances.
there may be many participants involved in a conversation, and the topics they are talking about in a dialogue may be more than one.
contrasting
train_17622
This result can be ranked by using criteria such as PageRank (Brin and Page, 1998) or relevancy to the query.
this long flat list is inconvenient for users, since it forces them to examine each page one by one and to spend significant time and effort finding the really relevant information.
contrasting
train_17623
In our test result, when using Dirichlet smoothing, the precisions of the top 5 and 10 labels are 82% and 80%, so users can benefit in browsing from our model using 5 or 10 labels.
the precision rapidly drops to 60% at P@20.
contrasting
train_17624
We used Google's search results as an input to our system.
multiple engines offer better coverage of the web because of the low overlap of current search engines (Bharat and Broder, 1998).
contrasting
train_17625
In this method, impression keywords that are related to the source word are used.
a user must provide impression keywords, which is time-consuming and expensive.
contrasting
train_17626
A preliminary study showed that the language model adaptation was generally effective for transliteration.
because the focus of this paper is the related term extraction, we do not describe the evaluation of the language model adaptation.
contrasting
train_17627
The last three lines are the results of Pharaoh with phrase length from 1 to 3.
the length of phrases for Amasis is determined by ITG-like tree nodes and there is no restriction on it.
contrasting
train_17628
There are significantly more single word translations in our method.
the translation quality can be kept at the same level under this circumstance.
contrasting
train_17629
A parallel corpus that has similar statistical characteristics to the target domain should yield a more efficient translation model.
domain-mismatched training data might reduce the translation model's performance.
contrasting
train_17630
According to a previous study, minimum error rate training (MERT) (Och, 2003), which optimizes feature weights by maximizing the BLEU score on the development set, can improve the performance of a system.
the range of improvement is not stable because the MERT algorithm uses random numbers while searching for the optimum weights.
contrasting
train_17631
The experimental results show that target language side information gives the best performance in the experimental setting.
there are no large differences among the different selection results.
contrasting
train_17632
Often more data is better data, and so it should come as no surprise that recently statistical machine translation (SMT) systems have been improved by the use of large language models (LM).
training data for LMs often comes from diverse sources, some of which are quite different from the target domain of the MT application.
contrasting
train_17633
SMT decoders such as Moses may store the translation model in an efficient on-disk data structure (Zens and Ney, 2007), leaving almost the entire working memory for LM storage.
for 32-bit machines, this means a limit of 3 GB for the LM.
contrasting
train_17634
Since the order of English words is fixed, the number of different n-grams that need to be looked up is dramatically reduced.
since the n-best list is only the tip of the iceberg of possible translations, we may miss the translation that we would have found with a LM integrated into the decoding process.
contrasting
train_17635
The highest score, CS=4, is assigned to the synset that evidently includes more than one English equivalent of the lexical entry in question.
the lowest score, CS=1, is assigned to any synset that occupies only one of the English equivalents of the lexical entry in question when multiple English equivalents exist.
contrasting
train_17636
The number of languages for which WordNets have been successfully developed is still limited to those with active research in this area.
the extensive development of WordNet in other languages is important, not only to help in implementing NLP applications in each language, but also in inter-linking WordNets of different languages to develop multi-lingual applications to overcome the language barrier.
contrasting
train_17637
On the other hand, for example, " (matter)" is usually written in hiragana (" ") in word definitions.
it is difficult to know automatically that the word " " in a word definition means " ", since the dictionary has other entries which have the same reading "koto", such as " (Japanese harp)" and " (ancient city)".
contrasting
train_17638
From the result, we can find that the top ranked words include not only common words which may be included in a "basic vocabulary", such as " (exist)", " (certain/some)" (used to say something undetermined, or to avoid saying something exactly even if you know it), " (do)", " (thing)", etc., but also words which are not so common but are often used in definitions.
some words among the top ranked words, such as "A" and "B", seem not to be appropriate for a defining vocabulary.
contrasting
train_17639
In case a system paraphrases a functional expression f into f', it should also potentially generate all variants of f'.
no proposed system guarantees this requirement.
contrasting
train_17640
Loosening the constraints allows more sentences to be parsed, thus increasing the coverage, but at the same time easily leads into overgeneration, problems with disambiguation and decreased preciseness.
the points that we raised above indicate that there is a strong relationship between coverage and preciseness.
contrasting
train_17641
Consequently, we argue that coverage can be used as a measure of generalizability; it sets the upper bound for the performance on the sentence-level evaluation measures.
the evaluation should always be accompanied with data on the preciseness of the parser and the level of detail in its output.
contrasting
train_17642
StatCCG (Preliminary public release, 14 January 2004) is a statistical parser for CCG that was developed by Julia Hockenmaier (2003).
to C&C, this parser is based on a generative probabilistic model.
contrasting
train_17643
The key factor in our success was the extraction of only reliable information from unlabeled data.
that improvement was not satisfactory.
contrasting
train_17644
To use this method more effectively, we need to balance the labeled and unlabeled data very carefully.
this method is not scalable because the training time increases significantly as the size of a training set expands.
contrasting
train_17645
These presented methods used similarity measures heuristically according to the property of the languages.
detecting conjunctive boundaries with a similar method in Chinese may encounter some problems, since a Chinese word may play different syntactic functions without inflection.
contrasting
train_17646
Generally in the process of multi-document text summarization, a collection of input documents about a particular subject is received from the user and a coherent summary without redundant information is generated.
several challenges exist in this process, the most important of which are removing redundant information from the input sentences and ordering them properly in the output summary.
contrasting
train_17647
The sectional evaluation and the inspection of example output show that this system works well.
larger scale evaluation and comparison of its accuracy remain to be future work.
contrasting
train_17648
In general, if an ordering gets a positive τ value, the ordering can be considered to be better than a random one.
for a negative τ value, the ordering can be considered to be worse than a random one.
contrasting
train_17649
This can be seen as an advantage of Ca over Fa.
we can see that the adjacency window size still influenced the performance as it did for Fa.
contrasting
train_17650
PrepNet is structured in two levels: • the abstract notion level: global, language independent, characterization of preposition senses in an abstract way, where frames represent some generic semantic aspects of these notions, • the language realization levels that deal with realizations for various languages, using a variety of marks (postpositions, affixes, compounds, etc.).
we will keep the term 'preposition' hereafter for all these marks.
contrasting
train_17651
So far, the formalism we have elaborated allows us to encode syntactic frames, restrictions, case marks, prefixes and suffixes as well as postpositions.
languages of the Malayo-Polynesian family raise additional problems which are not so easy to capture.
contrasting
train_17652
Those roles are related to a variety of situations which are not necessarily introduced by prepositions.
a preliminary, exploratory, task could be to attempt to classify FrameNet roles under the main abstract notions of PrepNet.
contrasting
train_17653
It clearly demonstrates that our FGTKs are faster than the GTK algorithm as expected.
the improvement seems not so significant.
contrasting
train_17654
So far, the effectiveness of handling expressive divergence has been shown for IR using a thesaurus-based query expansion (Voorhees, 1994;Jacquemin et al., 1997).
their methods are based on a bag-of-words approach and thus do not pay attention to sentence-level synonymy with syntactic structure.
contrasting
train_17655
This is a starting point for having t-nodes corresponding to lexias.
in the current state it is not fully sufficient even for verbs, mainly because parts of MWEs are not joined into one node.
contrasting
train_17656
Obviously, the posterior probability p(W |X) is a good confidence measure for the recognition decision that X is recognized as W .
most realworld ASR systems simply ignore the term p(X) during the search, since it is constant across different words W .
contrasting
train_17657
The main problem when applying ASR to extremely inflected languages such as Telugu, is the need to use a very large vocabulary, in order to reduce the OOV rate to an acceptable level.
this causes problems for producing the automatic transcription in a time close to real time.
contrasting
train_17658
In our proposed method, w is estimated to maximize Fμ(Θ, w).
training dataset D is also used to estimate Θ.
contrasting
train_17659
MC-Fμ, MC-FM, and MC-FL outperformed MML as regards the three F1-scores for JPAT.
MML performed better for Reuters than MC-Fμ, MC-FM, and MC-FL, and provided a better FL-score for WIPO.
contrasting
train_17660
Although the feature spaces of the dependency path kernels are not subsets of the subsequence kernel, we can clearly see that we get higher precisions by introducing bias towards the syntactically more meaningful feature space.
the dependency path kernel is fairly rigid and imposes many hard constraints such as requiring the two paths to have exactly the same number of nodes.
contrasting
train_17661
McCallum (McCallum et al., 2000) improved them by allowing a record to be located in multiple blocks in order to avoid detection loss.
the problems of these blocking-based filtering methods are that the user needs trial-and-error parameters, such as the first n terms for Standard Blocking, and that they still incur detection loss despite the attempted improvements, since the two documents of a correct document pair may exist in different blocks.
contrasting
train_17662
the CMU UIMA component repository, GATE (Cunningham et al., 2002) with its UIMA interoperability layer, etc.
simply wrapping existing modules to be UIMA compliant does not offer a complete solution.
contrasting
train_17663
For a given name p, they search for the query "* koto p" and extract the words that match the asterisk.
koto is highly ambiguous and extracts lots of incorrect aliases.
contrasting
train_17664
As we iteratively added new labeled sentences into the training set, the precision scores of active learning were steadily better than that of passive learning as the uncertain examples were added to strengthen existing labels.
the recall curve is slightly different.
contrasting
train_17665
That is only true for modularity-based greedy algorithms that select vertex pairings to be merged into a cluster at each step of the tree-form integration process, based on a modularity optimization criterion.
such methods suffer from the problem that once a merger is executed based on a discrimination error, there is no chance of subsequently splitting pairings that belong to different subgroups.
contrasting
train_17666
It is supposed that if the MCL is applied to word association or co-occurrence data it will yield concept clusters where words are classified according to similar topics or similar meanings as paradigms.
because the word distribution of a corpus approximately follows Zipf's law and produces a small-world scale-free network (Steyvers et al., 2005), the MCL will result in a biased distribution of cluster sizes, with a few extraordinarily large core clusters that lack any particular features.
contrasting
train_17667
The connection rate in the core cluster is very low (0.002 with and 0 without the hub), as is the modularity Q value for the MCL (0.094).
subdivision of the core cluster in the BMCL results yielded a high modularity Q value (0.606) when latent adjacencies derived from bypassing connections with a threshold of qθ = 3 were used.
contrasting
train_17668
The standard Belief Network actually supposes that all the relationships are "and" relationships.
in the real world, this is not the case.
contrasting
train_17669
We can notice that we will stay in state zero as long as the output is identical to the input.
we can move from state zero, which corresponds to edit distance zero, to state one, which corresponds to edit distance one, in three different ways: • input is mapped to a different output (input is consumed and a different symbol is emitted), which corresponds to a substitution, • input is mapped to an epsilon (input is consumed and no output is emitted), which corresponds to a deletion, and • an epsilon is mapped to an output (output is emitted without consuming any input), which corresponds to an insertion.
contrasting
train_17670
It should be mentioned that the result of SVM fluctuates slightly, which is due to the different numbers of testing examples.
TSVM and TSVM using argument-specific heuristics improve greatly as the untagged data size increases.
contrasting
train_17671
ProMED-mail is an Internet-based system that provides reports by public health experts concerning disease outbreaks (that is, the system is not automatic but rather human curated).
to ProMED-mail, MedISys is an automatic system that works on multiple languages, but it mainly focuses on analyzing news stories at the country level.
contrasting
train_17672
For example, if there is a news story about equine influenza in Camden, the system should detect that the disease name is "equine influenza" and the location name is "Camden".
there are two locations named Camden: One in Australia and one in London, UK.
contrasting
train_17673
Sighted computer users spend a lot of time reading items on-screen to do their regular tasks such as checking email, filling out spreadsheets, gathering information from the internet, preparing and editing documents, and much more.
visually impaired people cannot perform these tasks without assistance from others or without using assistive technologies.
contrasting
train_17674
Early studies on PPI extraction employ feature-based methods.
the feature-based methods often fail to effectively capture the structured information, which is essential to identify the relationship between two proteins in a constituent or dependency-based syntactic representation.
contrasting
train_17675
These approaches have been proved to be both algorithmically appealing and empirically successful.
most current syntax-based SMT systems use IBM models (Brown et al., 1993) and the hidden Markov model (HMM) (Vogel et al., 1996) to generate word alignments.
contrasting
train_17676
By referring to the word alignment shown in Figure 1, we can collect the target spans, which are {5}, {4,0,1,2,3,6,15}, and {7,8,9,10,11,12,13} for t0, c3, and c16, respectively.
we cannot sort these three spans since there is overlap between the first two spans.
contrasting
train_17677
The problem of permuting the source string to unfold the crossing alignments is computationally intractable (see (Tromble and Eisner, 2009)).
various constraints can be made on unfolding the crossing alignments in a.
contrasting
train_17678
Source reordering for PBSMT assumes that permuting the source words to minimize the order differences with the target sentence could improve translation performance.
the question "how much?"
contrasting
train_17679
The basic word based lexicalized reordering model uses neighboring words to perform the orientation estimates.
since words are not always translated by themselves, these estimates can be improved by considering neighboring phrases rather than words.
contrasting
train_17680
In most applications including temporal relation classification, the preparation of such samples is a hard, time consuming, and expensive task (Mani et al., 2006).
all these annotated samples may not be useful, because some samples contain little (or even no) new information.
contrasting
train_17681
The official best result in the closed test achieved an F score of 95.00, and our result is quite close to that, ranked 4th of 23 official runs.
our method took about 30% less training time than the OWL-QN method.
contrasting
train_17682
This will lead to poor performance on unseen data.
if these words are correctly attributed to the subject topic, then the high ratings will appropriately be attributed to the unconditional positive words appearing in the reviews.
contrasting
train_17683
According to (Nicolae & Nicolae, 2006), bestcut does not utilize coreferent pairs involving pronouns.
event coreference chains contain a significant proportion of pronouns (18.8% of event coreference mentions in the OntoNotes 2.0 corpus).
contrasting
train_17684
Then it evaluates the semantic-matching features to pair up mentions from the same semantic type.
event NPs exhibit a very different hierarchy in WordNet from the object NPs.
contrasting
train_17685
We showed an improvement of up to 1.5 BLEU points on the in-domain data set and an improvement of 0.4 BLEU on the generic test set.
there are still some errors with respect to the morphology of the verb phrases, which the language model is unable to tackle.
contrasting
train_17686
For example, given a word " ", these methods try to select the most appropriate pronunciation out of the three dictionary entries: ninki (popularity), hitoke (sign of life) and jinki (people's atmosphere), depending on the context.
in these approaches, segmentation errors tend to result in the failure of the following step of pronunciation prediction.
contrasting
train_17687
After inspecting the errors manually, we have found that this is because the UniDic-based operations do not include many single-kanji pronunciations that are commonly used in person's names, such as " mi" and " to".
this problem seems negligible when a larger dictionary including common pronunciations for person's names is available.
contrasting
train_17688
In the 20-best output, we find the correct solution for many words with ambiguous characters as shown in Table 13.
if a word contains two ambiguous characters, it was difficult for the transliterator to transliterate it correctly.
contrasting
train_17689
Recently, much research has been devoted to NE transliteration (mostly person names) or NE meaning translation (organization names) individually.
there are still two main challenges in statistical Chinese-English (C2E) NE translation.
contrasting
train_17690
So we use the NE alignment result to evaluate the phonetic similarity of two segments, where e_i denotes the same syllables they are aligned to in the training set.
a global concept, which is borrowed from the tf×idf scheme in information retrieval (Chen et al., 2003), is used in Eq. (7).
contrasting
train_17691
2004; Oh and Choi, 2005; Pervouchine et al., 2009; Durrani et al., 2010).
for NE meaning translation, (Zhang et al., 2005; Chen and Zong, 2008) have proposed different statistical translation models only for organization names.
contrasting
train_17692
So far, semantic transliteration has been proposed for learning language origin and gender information of person names (Li et al., 2007).
semantic information varies for NE translation.
contrasting
train_17693
Recently, natural language processing research has begun to pay attention to second language learning (Rozovskaya and Roth, 2011; Park and Levy, 2011; Liu et al., 2011; Oyama and Matsumoto, 2010; Xue and Hwa, 2010).
most previous research for second language learning deals with restricted types of learners' errors.
contrasting
train_17694
For example, research on JSL learners' errors has mainly focused on Japanese case particles (Oyama and Matsumoto, 2010; Imaeda et al., 2003; Nampo et al., 2007; Suzuki and Toutanova, 2006); however, these studies address only case particles, whereas we attempt to correct all types of errors.
real JSL learners' writing contains not only errors of Japanese case particles but also various other errors including spelling and collocation errors.
contrasting
train_17695
Users can submit a free composition on a subject and receive feedbacks from other users of the native language.
they are not able to write about arbitrary topics.
contrasting
train_17696
It is annotated with error types with correct forms to allow error analysis.
similar to Teramura Error Data, the corpus does not cover many topics because it was collected at only four institutions.
contrasting
train_17697
When translating a sentence from Japanese to another language with SMT, one usually performs word segmentation as a pre-processing step.
JSL learners' sentences contain a lot of errors and hiragana (phonetic characters), which are hard to tokenize with a traditional morphological analyzer trained on newswire text.
contrasting
train_17698
Precision and Recall in this setting were reported at 87.9% and 28.2%, respectively.
we should also note that in these experiments the text was fragmented into snippets, and, similar to what Luyckx and Daelemans did, the similarity model uses fragments of the same source text to predict authorship.
contrasting
train_17699
Frequencies of character n-grams have also been successfully used to build author profiles (Keselj et al., 2003).
to the best of our knowledge, this is the first work exploiting character-based language models for AA, although Raghavan et al.
contrasting