id (stringlengths 7-12) | sentence1 (stringlengths 6-1.27k) | sentence2 (stringlengths 6-926) | label (stringclasses, 4 values) |
---|---|---|---|
train_92400 | Usually, such models are trained with each occurrence of words as one instance (word-wise learning). | we expect that the score works as a confidence measure. | neutral |
train_92401 | Consider the following example: I have not had any distortion problems with this phone and am more pleased with this phone than any I've used before. | it is still an open problem how we can effectively use sentiment words to improve performance of sentiment classification of sentences or documents. | neutral |
train_92402 | As can be seen in Figure 1, the classifier managed to map the reviews onto the coordinate system. | we also found that the accuracy of the method depends a lot on the seed word chosen. | neutral |
train_92403 | Experiments performed on Miller and Charles similarity data (1991), reported in Jarmasz and Szpakowicz (2004), have shown that pairs of words with a semantic similarity value of 16 have high similarity, while those with a score of 12 to 14 have intermediate similarity. | the effectiveness of this emotion lexicon was demonstrated in the emotion classification tasks. | neutral |
train_92404 | The effectiveness of this emotion lexicon was demonstrated in the emotion classification tasks. | we select the score of 12 as cutoff, and include in the lexicon all words that have similarity scores of 12 or higher with respect to the words in the primary set. | neutral |
train_92405 | The reason is that QUEEN is, by definition, a very restrictive measure -a 'good' translation must be similar to all human references according to all metrics. | as to portability across test beds (i.e., across language pairs and years), the reader must focus on the cells for which the meta-evaluation criterion guiding the metric set optimization matches the criterion used in the evaluation, i.e., the top-left and bottom-right 16-cell quadrangles. | neutral |
train_92406 | The fact that most systems are statistical also explains why, in general, lexical metrics exhibit a higher quality. | human assessments of adequacy and fluency are available for a subset of sentences, each evaluated by two different human judges. | neutral |
train_92407 | Figure 2 shows some examples of paraphrased dependency relations and paraphrases. | even though a single phrase in a source language sentence maps onto multiple phrases in a foreign language sentence, the phrases might not be paraphrases. | neutral |
train_92408 | This work is also affiliated with the Microsoft-CUHK Joint Laboratory for Human-centric Computing and Interface Technologies. | one problem concerning relational data is, how to extract useful relations for Chinese NER. | neutral |
train_92409 | In the spirit of the work done by (Shinyama and Sekine, 2003;Bunescu and Mooney, 2007), we are trying to collect clusters of paraphrases for given relation mentions. | instead of designing the heuristic explicitly, we use a validation set to observe the statistical correlations of each of the three possible heuristics we discussed above. | neutral |
train_92410 | The filtering of generic patterns (green) does not show 4 Note that Basilisk and Espresso use context patterns only for the sake of collecting instances, and are not interested in the patterns per se. | in previous work, these are surface text patterns, e.g., X such as Y, for extracting words in an is-a relation, with some heuristics for finding the pattern boundaries in text. | neutral |
train_92411 | The result can be interpreted as a hint that signals for the control of erythropoietin production may be mediated by beta 2-adrenergic receptors rather than by beta 1-adrenergic receptors. | the CRF classifier had roughly 5% advantage on per-abstract accuracy over SVM. | neutral |
train_92412 | Linguistic annotation guidelines often concentrate on specifying the linguistic data categories to be annotated. | this section demonstrates how OWL DL, a strongly typed representation language, can serve to transparently formalise corpora with multi-level annotation. | neutral |
train_92413 | Finally, the closest neighbour to our proposal is the ATLAS project (Laprun et al., 2002), which combines annotations with a descriptive meta-model. | we now illustrate these decisions concretely by designing a model for a corpus with syntactic and frame-semantic annotation, more concretely the SALSA/TIGER corpus. | neutral |
train_92414 | However, the average of R(c_a * c_d) in "indirect" cases is not so high for both Chinese and Japanese, as a large amount of pairs are classified into case (A). | as indicated in the fifth row in Table 3, however, many structures consisting of only 2 classifiers were also constructed. | neutral |
train_92415 | We conducted experiments to build classifier taxonomies for three languages: Chinese, Japanese and Thai. | the average of R(c_a * c_d) in "indirect" cases is not so high for both Chinese and Japanese, as a large amount of pairs are classified into case (A). | neutral |
train_92416 | 15 million words) and one for which we had relatively little parallel data (Czech-English news-commentary corpus with approx. | performance is improved by adding component-sequence and learnedmorphology models along with context similarity from monolingual text and optional combination with traditional bilingual-textbased translation discovery. | neutral |
train_92417 | For example, 1990 is relevant to the query Germany unified because "East and West Germany were unified" according to the top snippet. | the text snippets provide answers to definition answers without actually employing any specialized module for seeking specific information such as the genus of the question concept. | neutral |
train_92418 | A:000217262,L3 A mother panda often gives birth to two cubs, but when there are two cubs, one is discarded, and young mothers sometimes crush their babies to death. | coverage means the rate of questions that can be answered by the top-N answer candidates. | neutral |
train_92419 | The system was built as an extension to our factoid QA system, SAIQA (Isozaki, 2004;Isozaki, 2005), and works as follows: 1. | a:000406060,L6 because of the recent development in the midland, they are becoming extinct. | neutral |
train_92420 | Finally we are interested modifying our cluster-based expansion for the purpose of automatically identifying authority sources for different types of questions. | although expansion methods generate additional relevant documents that simpler methods cannot obtain, an important metric to consider is the density of these new relevant documents. | neutral |
train_92421 | Rather than manually defining a complete answering strategy -the type of question, the queries to be run, the answer extraction, and the answer merging methods -for each type of question, SQA learns different strategies for different types of similar questions SQA takes advantage of similarity in training data (questions and answers from past TREC evaluations), and performs question clustering. | this paper presents experiments with several feature selection methods used individually and in combination. | neutral |
train_92422 | At a first glance, document and passage retrieval is reasonable when considering the fact that its performance is often above 80% for this stage in the question answering process. | when evaluated on TREC datasets, the affinity replacement method obtained significant improvements in precision, but did not outperform other methods in terms of recall. | neutral |
train_92423 | In this experiment, average precision on training data proves to be the best predictor of additional relevant documents: ∼71% of the test questions benefit from queries based on average precision feature selection. | recently (riezler et al., 2007) used statistical machine translation for query expansion and took a step towards bridging the lexical gap between questions and answers. | neutral |
train_92424 | The goal is to add query content that increases retrieval performance on training questions. | there this query enhancement process is static and does not use the training data and the question answering context differently for individual questions. | neutral |
train_92425 | Moreover, it is significantly less than the accuracy (0.69) achieved by using the RDC corpus. | in addition, the number of parameters that must be estimated in PLSi grows linearly with the number of training documents. | neutral |
train_92426 | (Culotta and Sorensen, 2004) extended this work to estimate kernel functions between augmented depen-dency trees, while (Kambhatla, 2004) combined lexical features, syntactic features, and semantic features in a maximum entropy model. | after the latent topic features extracted from returned snippets using the Web as the corpus, an SVM classifier is trained as the relation recognition classifier for use in the later experiments. | neutral |
train_92427 | The feature captures the interaction between two entities at the semantic level rather than at the word level. | moreover, to address the problem of insufficiently annotated corpora, we propose an algorithm for compiling a training corpus from the Web. | neutral |
train_92428 | Interestingly, ML falls below the baseline case when more than three languages were used in Test2, a situation that has rarely been considered in previous studies. | this was because PPM would be theoretically equivalent to ML with infinite learning of language transition probabilities, since languages were uniformly distributed in test1. | neutral |
train_92429 | The user's key entry sequence is input to our client software. | when producing a text in a language other than English, a user has to use text entry software corresponding to the other language which will transform the user's key stroke sequences into text of the desired language. | neutral |
train_92430 | A text fragment between two delimiters is called a token in TypeAny. | reports on ways to detect a change in the language used are more abundant. | neutral |
train_92431 | In a usual HMM process, a system finds the language sequence (i.e., state sequence) l_1^m that maximizes Equation (1) by typically using a Viterbi algorithm. | because large corpora are not always available, especially for minor languages, P(t_i|l_i) is estimated using key entry sequence probabilities based on n-grams (with maximum n being n_max) as follows: and |t_i| is the length of t_i with respect to the key entry sequence. | neutral |
train_92432 | In this case, the user can manually correct the locale by pressing the TAB key once. | this paper addresses the question of how to decrease the need for the third type of action. | neutral |
train_92433 | flags where both error detection and suggested correction are incorrect. | english is today the de facto lingua franca for commerce around the globe. | neutral |
train_92434 | We present a modular system for detection and correction of errors made by nonnative (English as a Second Language = ESL) writers. | the distribution of determiners is similar in the PTB (as reported in Minnen et al. | neutral |
train_92435 | Hence several automatic corpus based approaches for acquiring lexical knowledge have been proposed in the literature. | we recomputed the score for each pattern in the above manner and obtain a ranked list of patterns for each of the classes for English and Hindi. | neutral |
train_92436 | The entries in WordNet have been classified according to the syntactic category such as: nouns, verbs, adjectives and adverbs, etc. | incorporating more sophisticated methods remains an area of future work. | neutral |
train_92437 | Moreover, the structure details available in this stage are useful in improving the coherency and readability among the sentences present in the summary. | we have introduced some new feature identification techniques to explore paragraph alignments. | neutral |
train_92438 | Using the manually annotated subset of the corpus (200 judgments) we have performed a number of preliminary experiments to determine which method would be appropriate for role identification. | we frame text segmentation as a rule learning problem. | neutral |
train_92439 | To assess the quality of the TimeML projections, we put aside and manually annotated a development set of 101 and a test set of 236 bisentences. | this non-content (NC) filter is defined in terms of POS tags and affects conjunctions, prepositions and punctuation. | neutral |
train_92440 | 960K bisentences) was used for training (section 5). | given that the projected annotations are to provide enough data for training a target language labeller (section 5), manual annotation is not an option. | neutral |
train_92441 | To estimate precision, 100 relation instances were randomly sampled from each of four sections of the ranks of the acquired instances for each of the two relations (1-500, 501-1500, 1501-3500 and 3500-7500), and the correctness of each sampled instance was judged by two graduate students (i.e. | additionally, the number of seed instances affects the precision of both higher-ranked and lower-ranked instances. | neutral |
train_92442 | When occurring with the verb suru (do-PRES), verbal nouns function as a verb as in (1a). | finally, but even more importantly, when accompanied by a large variety of suffixes, verbal nouns constitute compound nouns highly productively as in (1c). | neutral |
train_92443 | For criterion (b), as shown in Table 1, the relation instances judged correct include both the X-ga VP_1::X-ga VP_2 type (i.e. | (b) there are specific patterns that are highly reliable but they are much less frequent than generic patterns and each makes only a small contribution to recall. | neutral |
train_92444 | within the punctuation after translation. | unfortunately, no statistically significant improvement on the BLEu score was reported in (Och et al., 2003). | neutral |
train_92445 | Thus, the baseline and baseline+syn models are not able to produce the correct verb form for "visit". | these algorithms attempt to reconcile the wordorder differences between the source and target language sentences by reordering the source language data prior to the SMt training and decoding cycles. | neutral |
train_92446 | The prior probability P (D) is usually assumed to be uniform and a language model P (Q|D) is estimated for every document. | the major goal of personalized search is to accurately model a user's information need and store it in the user profile and then re-rank the results to suit to the user's interests using the user profile. | neutral |
train_92447 | The user profile thus learnt was applied in a re-ranking phase to rescore the search results retrieved using general information retrieval models. | the user profile as a translation model in our approach will consist of triples of a document word, a query word and the probability of the document word generating the query word. | neutral |
train_92448 | A look at the syntactic variants automatically generated by a system, which we proposed, showed that the system could generate syntactic variants for only a half portion of the input, producing many erroneous ones (Section 4.1). | features extracted from the snippets outperformed newspaper corpus; however, the small numbers of features for phrases shown in Table 7 and the lack of sophisticated weighting function suggest that the problem might persist. | neutral |
train_92449 | (8) s. "yoi:shigoto:o:suru" (doa good job) t 1 . | it caused errors in some cases; for example, since N 1 was the semantic head in (7), dropping it was incorrect. | neutral |
train_92450 | While Joachims (1998) and Rogati and Yang (2002) reported no improvement in SVM performance after applying a feature selection step, Gabrilovich and Markovitch (2004) showed that for collection with numerous redundant features, aggressive feature selection allowed SVMs to actually improve their performance. | named Entity Page: refers to a specific object or set of objects in the world, which is/are commonly referred to using a certain proper noun phrase. | neutral |
train_92451 | The result is displayed in Figure 1. | the result indicates that there is a considerable correlation (r = 0.760) between category importance and performance, which means it is possible to predict the final performance of any context categories by calculating their category importance values in the limited size of selected context set. | neutral |
train_92452 | In the following, n and m represent the number of unique words and unique contexts, respectively, and N (w, c) denotes the number of cooccurrence of word w and context c. Document frequency (DF), commonly used for weighting in information retrieval, is the number of documents a term co-occur with. | in this section, context selection methods proposed for text categorization or information retrieval are introduced. | neutral |
train_92453 | Finally we extend the context importance to cover context categories (RASP2 grammatical relations), and show that the above methods are also effective in selecting categories. | we formalize distributional similarity as a classification problem as described below. | neutral |
train_92454 | We refer to this measure as DSlesk as defined: where g 1 is the gloss of word sense s1, g 2 is the gloss of s2, again s1 is the target word sense ws i in equation 1 for which we are obtaining the predominance ranking score and s2 is whichever sense of the neighbour (n j ) in equation 1 which maximises this semantic similarity score, as McCarthy et al. | we believe that our gloss-based similarity DSlesk might be very suitable for this task and we plan to investigate the possibility. | neutral |
train_92455 | In the recent times many efforts have been made to develop various utilities for Sanskrit. | sanskrit is a highly inflected language with three grammatical genders (masculine, feminine, neuter) and three numbers (singular, plural, dual). | neutral |
train_92456 | "م" (am) is a suffix denoting first single person, "خوان" (xän) is the present tense root of the verb and "می" (mi) is a prefix that expresses continuity. | for example, we put the morpheme "پسر" (pesar: boy) in cluster "اسم" (esm: noun) and "جاندار" (jändär: alive). | neutral |
train_92457 | Consequently, for each stored word, we find its stem. | we store these rules in rules repository. | neutral |
train_92458 | We evaluated the proposed algorithm with a limited corpus of Hamshahri newspaper. | affix stripping approaches try to removing affixes until reaching to any stem in the word. | neutral |
train_92459 | "outside" (O) of a named entity. | in the segmentation task, the sentence x are segmented bŷ where F s (•) is the set of segment features, and w s is the parameter for segmentation. | neutral |
train_92460 | Its computation formula is: θ = f_P / (f_P + f_N), where f_P is the frequency of the positive examples, and f_N is the frequency of the negative examples. | through rule reliability computation (see the following section), we can extract all high-reliability basic rules as the final result, and all other basic rules with higher frequency for further rule refinement. | neutral |
train_92461 | Table 2 shows some examples from the corpus illustrating some typical types of ESL writing errors involving: (1) Verb-Noun Collocations (VNC) and (4) Adjective-Noun Collocations (ANC); (2) incorrect use of the transitive verb "attend"; (3) determiner (article) usage problems; and (5) more complex lexical and style problems. | a smaller t 3 can reduce recall, but can increase GP. | neutral |
train_92462 | Note that a randomguessing baseline was about 5% precision, 7% recall, but more than 80% false flag rate. | eSL-WePS first segments the original eSL sentence by using punctuation characters like commas and semicolons, then generates a query from only the part which contains the given check point. | neutral |
train_92463 | The earlier the correct recommendation is, the larger the effect is. | the following takes the three elements from a popular site as an example. | neutral |
train_92464 | In this paper, only one intention is assigned to the utterances. | two schemes, revised tf and tf-idf, are employed to classify the utterances in dialogues. | neutral |
train_92465 | As discussed in 2.2, the most influential feature in the C/NC method is the term frequency. | for this reason, some noises are extracted together with these candidates. | neutral |
train_92466 | To mitigate the problem, we used Wordnet to handle two different words with the same semantic. | many other relevant information can be missed out as a result. | neutral |
train_92467 | Other Kanji characters with the same pronunciation as " " include " ". | the first task is transliteration in the strict sense, which creates new words in a target language (Haizhou et al., 2004;Wan and Verspoor, 1998;Xu et al., 2006). | neutral |
train_92468 | In Figure 2, (d) is such a representation for the two basic alignments. | the output of this part therefore contains syntactic information for structure. | neutral |
train_92469 | Target language model probability (weight = 0.5) According to a previous study, the minimum error rate training (MERT) (Och, 2003), which is the optimization of feature weights by maximizing the BLEU score on the development set, can improve the performance of a system. | parallel corpus is one of the most important components in statistical machine translation (SMT), and there are two main factors contributing to its performance. | neutral |
train_92470 | There are few studies on data selection for translation model training. | the range of improvement is not stable because the MERT algorithm uses random numbers while searching for the optimum weights. | neutral |
train_92471 | This gives a slightly better performance, but again it gives almost identical results for the use of interpolated LMs vs. two LMs as separate feature functions (27.63 vs. 27.64). | training data for LMs often comes from diverse sources, some of them are quite different from the target domain of the MT application. | neutral |
train_92472 | Finally, hyperlinks are embedded in the hypertext. | forward(i) and backward(i) indicate that the search is carried out relatively to the source (before or after) and indexed by the integer i. | neutral |
train_92473 | In our approach we specify the source and the target using conditions. | the other operators are: isParent, isChild, isSibling, isDescendant, hasIntitle, isIntitle. | neutral |
train_92474 | Synset assignment with SC=3 Figure 2 simulates that an English equivalent of a lexical entry L 0 and its synonym L 1 are included in a synset S 1 . | after the revision at KUI, the initial stage of asian WordNet will be referable through the assigned synset ID. | neutral |
train_92475 | Our approach is conducted to assign a synset to a lexical entry by considering its English equivalent and lexical synonyms. | the extensive development of WordNet in other languages is important, not only to help in implementing NLP applications in each language, but also in inter-linking WordNets of different languages to develop multi-lingual applications to overcome the language barrier. | neutral |
train_92476 | All words in the lexicon are defined by using 16,900 words in the same lexicon. | a goal of this study is to try to build a Japanese defining vocabulary on the basis of distribution of words used in word definition in an existing Japanese dictionary. | neutral |
train_92477 | The accuracy of the WOrder parameter drops off geometrically as the number of instances approaches zero, as shown in Table 5. | these values are usually not atomic, and can be decomposed into their permuted elements, which themselves are types. | neutral |
train_92478 | As discussed, a typology consists of a parameter and a list of possible types, essentially the values this parameter may hold. | summing across such rules might alleviate some of this problem. | neutral |
train_92479 | Now we will consider different cases for analyzing the time complexity of our MakeRefExpr() algorithm. | later it is extended to take care of different refinements (like relational, boolean description etc) that could not be handled by Incremental algorithm. | neutral |
train_92480 | To precisely detect reliable parses, we make use of several linguistic features inspired by the notion of controlled language (Mitamura et al., 1991). | we call the two sets of obtained sentences "BIO pool" and "CHEM pool". | neutral |
train_92481 | Pure syntactic approaches cannot determine boundaries of conjunctive phrases properly. | context-dependent rule patterns are generated and generalized by the following procedure. | neutral |
train_92482 | The relations among the words in a chunk are not marked for now and hence allow us to ignore local details while building the sentence level dependency tree. | modern dependency grammar is attributed to Tesnière (1959). | neutral |
train_92483 | There are six basic karakas, namely; adhikarana 'location', apaadaan 'source', sampradaan 'recipient', karana 'instrument', karma 'theme', karta 'agent'. | in our dependency tree each node is a chunk and the edge represents the relations between the connected nodes labeled with the karaka or other relations. | neutral |
train_92484 | We based this equation on Robertson's equation . | the U.S.A. has held the Text REtrieval Conferences (TREC) (TREC-10 committee, 2001), and Japan has hosted the Question-Answering Challenges (QAC) (National Institute of Informatics, 2002) at NTCIR (NII Test Collection for IR Systems ) 3. | neutral |
train_92485 | Second, in this method just individual concept is used to determine their saliency, not their combinations. | the SVD-Scores are in favor of using the boosting methods for classification of sentences with different distance measure for each classifier. | neutral |
train_92486 | The properties of these collections are presented in table 1 To find out which term weighting and distance measure causes the highest increase in the SVD-Scores, various combinations of these approaches has been used in the summarization approach. | some words in Persian are compound words .this did not cause any problem for the developed system; because the most meaningful part of such words is usually less common than others and thus have an Inverse Document Frequency (IDF) that is higher than that of more common less meaningful parts. | neutral |
train_92487 | The sectional evaluation and the inspection of example output show that this system works well. | the examinee categorized them based on how proper the output summary is to the input news article: 1) quite proper 2) slightly proper 3) not very proper 4) not proper of judgment, 48 outputs out of 77 are evaluated either 1) quite proper or 2) slightly proper. | neutral |
train_92488 | In the future work, we will investigate if the more sophisticated translation model or that specialized for CLQA task can improve the performance further. | for CLQA1 test collection, we only investigated the result by using R+U judgment. | neutral |
train_92489 | It was concieved for machine translation tasks, which explains some of its features. | we will keep the term 'preposition' hereafter for all these marks. | neutral |
train_92490 | Our work focuses on unsupervised and semisupervised methods that target all words and parts of speech (POS) in context. | an extension of the Banerjee and Pedersen (2002) method which makes use of the sense-annotated definitions is to include the words in the definition of each sense-annotated word rather than traversing the ontology relative to each word sense candidate s i,j for the target word w i , we represent each word sense via the original definition plus all definitions of word senses contained in it (weighting each to give the words in the original definition greater import than those from definitions of those word senses). | neutral |
train_92491 | Our approach makes the best use of an ordinary dictionary and a Web corpus to extract broadcoverage and precise synonym and hypernymhyponym expressions. | here, polysemic words should be treated carefully 3 . | neutral |
train_92492 | Furthermore, if one SYN node has a hyper synonymous group in the synonymy database, the SYN node with the hyper SYNID is also added. | our method extracts not only hypernym-hyponym relations, but also basic synonym relations, predicate synonyms, adverbial synonyms and synonym relations between a word and a phrase. | neutral |
train_92493 | To compare our different models we created a test set of 75 categories. | we ranked candidate hyponyms on 75 categories of named entities and attained 53% mean average precision. | neutral |
train_92494 | This fact inspired us to design and employ the following method. | furthermore, speech recognition has enabled the automation of certain applications that are not automatable using push-button interactive voice response (IVR) systems. | neutral |
train_92495 | Since we cannot use such a big dictionary in these task, our first results had quite high WERs and IWERs. | we used ∼70% of data for training, ∼15% for development, and the remainding ∼15% for testing. | neutral |
train_92496 | Section 3 gives a comparative study of phonotactics of the three languages i.e. | spectral subtraction is a noise suppression technique used to reduce the effects of added noise in speech. | neutral |
train_92497 | In doing so, a good accuracy obtained on the classification task implies that the extracted features capture those aspects of the language that a trigram model may not. | the coherence score for an article is normalized by the total number of content-word pairs found in the article. | neutral |
train_92498 | In this work, we have used a classification-task based formalism for evaluating various syntactic, semantic and empirical features with the objective of improving conventional language models. | the truncated form of A i.e. | neutral |
train_92499 | The binary classifiers designed by using LRM-F and SVM-F were trained to maximize the F 1 -score for each category. | with this approach, we assume the independence of categories and design a binary classifier for each category that determines whether or not to assign a category label to data samples. | neutral |
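For convenience, here is a minimal, hypothetical sketch of how rows with this schema (id, sentence1, sentence2, label) could be loaded and inspected with the Hugging Face `datasets` library. The repository name below is a placeholder, not the actual dataset identifier.

```python
# Minimal sketch (assumption: the dataset is hosted on the Hugging Face Hub and
# exposes the columns shown in the preview above). "your-namespace/your-dataset"
# is a placeholder repository id, not the real one.
from datasets import load_dataset

ds = load_dataset("your-namespace/your-dataset", split="train")

# Print the first few sentence pairs and their labels.
for row in ds.select(range(3)):
    print(row["id"], "->", row["label"])
    print("  sentence1:", row["sentence1"])
    print("  sentence2:", row["sentence2"])
```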