id (7-12 chars) | sentence1 (6-1.27k chars) | sentence2 (6-926 chars) | label (4 classes)
---|---|---|---|
train_92300 | For example, the first four PCs can justify more than 90% (91.28%) of the total sample variance, so we can choose them without much loss of information. | the U and V are the vectors for the 9 sentences and 9 terms respectively. | neutral |
train_92301 | In this subsection, we will outline PCA which is adapted from [6] and which is used to extract the significant terms in the document. | to word frequency, the other method by using WordNet or a thesaurus [2] makes good use of word relationships such as NT(narrow term), RT(related term), and so on. | neutral |
train_92302 | As non-positional features, all words inside NEs are used. | these rules are non-deterministic (Figure 1), and we can change it to the deterministic FSt ( Figure 2) since the lengths of NEs are finite. | neutral |
train_92303 | In Table 3, N E w0 denotes the rightmost word in an identified NE region. | the FSt is constructed with the GENIA 3.02 corpus. | neutral |
train_92304 | Baseline2 is the two-phase ME model considering all words. | due to dynamic progress in biomedical literature, a vast amount of new information and research results have been published and many of them are available in the electronic form -for example, like the PubMed MedLine database. | neutral |
train_92305 | Also, Table 6 shows the results of semantic classification. | if a word frequently occurs in other positions, we regard it has the property of a modifying noun. | neutral |
train_92306 | While the performance in the standard domain turned out to be quite good as shown in the papers, that in the biomedical domain is not still satisfactory, which is mainly due to the following characteristics of biomedical terminologies: First, NEs have various naming conventions. | a lot of term occurrences in real text would not be identified with simple dictionary look-up, despite the availability of many terminological databases, as claimed in [12]. | neutral |
train_92307 | That kind of error is caused mainly by the cascaded phenomenon in biomedical names. | in order to utilize other class terms, we additionally annotated "O" class words in the training data where they correspond to other classes such as other organic compound, lipid, and multi cell in the GENIA 3.02p version corpus. | neutral |
train_92308 | 4 Although this prior probability is quite accurate, it does not have sufficient applicability. | acronym can detect potential inclusion at 46.58 applicability and 96.05 accuracy. | neutral |
train_92309 | In natural language processing community, bilingual dictionaries are useful in many areas, such as machine translation and cross language information retrieval. | if a word pair could be matched through two different languages, it is considered a very good match. | neutral |
train_92310 | They also applied the semantic matching using part of speech and second-language matching. | eDICT and CeDICT make no difference on the parts of speech and therefore the translations in english can be in any form. | neutral |
train_92311 | One can create a new word by combining the existing characters, but one can hardly create a new character. | for example, 髮 (hair) and 發 (deliver) are simplified to 发. | neutral |
train_92312 | A new bilingual dictionary can be built using two existing bilingual dictionaries, such as Japanese-English and English-Chinese to build Japanese-Chinese dictionary. | table 1 shows the results of using English as the pivot language and one time inverse consultation as the scoring function. | neutral |
train_92313 | We extract only nouns, verbal nouns and verbs from this dictionary, and try to search for their translation equivalents in Chinese. | "discussion" is one of the translations in English as provided by EDICT. | neutral |
train_92314 | Based on the assumption that two adjacent or overlapped temporal expressions refer to the same temporal concept, we combined them. | the data collection contains 285,746 characters/142,872 words and 4,290 manually annotated temporal expressions. | neutral |
train_92315 | The above instances formed by non-anaphors would be labelled as "00". | the algorithm simply uses a unit of one as the increment and decrement. | neutral |
train_92316 | The twin-candidate model itself could not identify and block such invalid resolution of non-anaphors. | -Create an instance by pairing M non ana , C rand , and each of the candidates other than C rand . | neutral |
train_92317 | Features f 1-f 17 record the properties of a new markable and its two candidates, as well as their relationships. | the final record of a candidate is its won-lost difference in the round-robin matches. | neutral |
train_92318 | Input: The result of morphological analysis: Most previous research uses the speech act of previous utterance as context feature (CF1 in Use speech acts that pop in discourse stack and Sub-dialogue End (SE) else Use speech acts of previous utterance and Dialogue Continue (DC) End information for speech acts analysis [1]. | this annotated dialogue corpus has 17 types of speech acts. | neutral |
train_92319 | The shrinkage technique was verified in its efficiency for text classification tasks learned with insufficient training data. | this paper proposes to use a speech acts hierarchy and a discourse stack for improving the accuracy of classifiers in speech acts analysis. | neutral |
train_92320 | Then we used Eq. | nominal anaphora recognition is approached by filtering those anaphor candidates, which have no referent antecedents or which have antecedents but not in the target biomedical semantic types. | neutral |
train_92321 | Beside the semantic type agreement, the implicit resemblance between an anaphor and its antecedents is another evidence useful for verifying the semantic association. | features 5, 6, and 7 are related to semantic association. | neutral |
train_92322 | The method of calculating the similarity between two clauses and how to recognize contrast/list relation are described in detail below. | written texts are automatically converted into spoken texts based on paraphrasing technique [5,6] and then are inputted into speech synthesis. | neutral |
train_92323 | Preposition SRL combined with [9] (P = precision, R = recall, F = F-score; above-baseline results in boldface) in the testing data, even when oracled outputs from all three subsystems are used (recall = 18.15%). | this further suggests that the semantic roles were tagged for only a small number of verbs in relatively fixed situations. | neutral |
train_92324 | With graph refinement frameworks such as the one presented here, many of these resources may be improved automatically. | vERBOCEAN is a graph of semantic relations between verbs, with 3,477 verbs (nodes) and 22,306 relations (edges). | neutral |
train_92325 | In this subsection we derive an estimate of the validity of inferring r 1,n given the set In the case of zero paths, we use simply P(r 1,n )=P(r), the probability of observing r between a pair of nodes from a sample set with no additional evidence. | we only consider certain path types. | neutral |
train_92326 | Lexical entries (in fact not limited to verbs, in the case of FrameNet) falling under the same frame will share the same set of roles. | given the imperfection of existing automatic parsers, which are far from producing gold standard parses, many thus resort to shallow syntactic information from simple chunking, though results often turn out to be less satisfactory than with full parses. | neutral |
train_92327 | We especially thank the three anonymous reviewers for their valuable suggestions and comments. | as time information increases, the error rate of NSHmm decreases drastically in close test as it does in pos tagging task. | neutral |
train_92328 | But NSHmm and VNSHmm can still model and make good use of those positional characteristics, and notable improvements have been achieved. | this paper suggests k=3 in NSHmm for pos tagging task. | neutral |
train_92329 | About 40% of titles are quoted in "「」" or "『』" in our test corpus. | the definition of Recurrence feature is number of occurrences of the possible title in the input document. | neutral |
train_92330 | Context deals with features surroundings titles. | it is even more difficult to decide which fragment of sentences might be a title than to determine if some fragment is a title. | neutral |
train_92331 | We perform the following pre-processing steps. | we observe this improvement on both French and Czech. | neutral |
train_92332 | We hypothesize that a large component of the error rate in the automatically induced text analysis tools generated by [22] is due to morphosyntactic differences between the source and target languages that are specific to each source-target language pair. | for each single-source tagger, we train on 31,000 lines of the parallel Bible between the source and target languages and test on 100 held-out lines of the Bible in the target language. | neutral |
train_92333 | Such as, mthong "see", go "hear", ha go "understand", dran "remember", brjed "forget", ngo shes "recognize" and so on. | the inner structural relationship of compound verbs may produce influence on the objects and their types. | neutral |
train_92334 | Even though the human judges underwent only minimal training, their poor-to-fair kappa scores indicate that this is a very hard problem. | there are a big variety of heterogeneous features that contribute to the temporal reference semantics of Chinese verbs. | neutral |
train_92335 | To evaluate the performance of classifiers, we measure the standard classification accuracy where accuracy is defined as in equation 2 To measure how well the classifier does on each class respectively, we compute precision, recall, and F-measure, which are defined respectively in equation 3, (4) and 5 The evaluation is carried on the collapsed tense taxonomy that consists of three tense classes: present, past and future. | while there is certainly room for improvement, the tagging performance of our algorithm is quite promising. | neutral |
train_92336 | Selection of the voice: Choose the voice of the target sentence among active, passive, causative, etc. | upper bound: If the dictionaries cover all light-verbs and deverbal nouns, the collection covers 24.1% (492,737 / 2,044,387) of tokens. | neutral |
train_92337 | Likewise, paraphrases involving an LVC as in (3) and (4) (from [4]) have considerable similarities. | to generate this type of paraphrase, we need a computational model that is capable of the following operations: Change of the dependence: Change the dependences of the elements (a) and (b) due to the elimination of the original modifiee, the light-verb. | neutral |
train_92338 | [10] reimplemented the method of [9] using a web, which may be a very large corpus, in order to collect example sentences. | considering the difference, we implemented the method of [8] to include the ambiguous relatives into relatives, but the method of [9] to exclude the ambiguous relatives. | neutral |
train_92339 | We tried to compare our proposed method with the previous relative based methods. | instead, we build a co-occurrence frequency matrix (CFM) from a raw corpus that contains frequencies of words and word pairs. | neutral |
train_92340 | The experimental results here suggest that our system has not been penalized very much when rich linguistic features are only available in its training phase. | the difference is that Z is a Named Entity when call is in Sense 1, and Z is usually a common NP or an adjective phrase (ADJP) when call is in Sense 3. | neutral |
train_92341 | In the first experiment (where the weight of the modifier and head noun was the same), we observed that some of the test NCs matched with several training NCs with high similarity. | in the remainder of the paper, we detail the motivation for our work (Section 2), introduce the WordNet::Similarity system which we use to calculate word similarity (Section 3), outline the set of semantic relations used (Section 4), detail how we collected the data (Section 5), introduce the proposed method (Section 6), and describe experimental results (Section 7). | neutral |
train_92342 | This indeed reflects the in-domain variability of text: Nikkei, Yomiuri and Encarta are highly edited text, following style guidelines; they also tend to have repetitious content. | we hope to run additional experiments to confirm these findings. | neutral |
train_92343 | In MAP estimation methods, adaptation data is used to adjust the parameters of the background model so as to maximize the likelihood of the adaptation data [1]. | the more similar the adaptation domain is to the background domain, the better the CER results are. | neutral |
train_92344 | MCB is Majority Class Baseline, which marks every word as APPROPRIATE. | although it is not real spoken language, it works as a good substitute, as some researchers pointed out [2,11]. | neutral |
train_92345 | Lexical choice has been one of the central issues in NLG. | the kernel function of SVM was Gaussian RBF. | neutral |
train_92346 | If F-ratio is more than 0.2, or if F-ratio is more than 0.1 and P-ratio is more than 0.2, the page is classified as spoken language page. | if the probability is estimated using the whole corpora, the features shift to the black triangle and diamond, and Baseline wrongly classified the two as inappropriate. | neutral |
train_92347 | Its adaptation-guided retrieval makes it ultimately similar to our system. | denote C (A) as the case, where A={A 1 ,A 2 ,…A m } represents the feature set of the case. | neutral |
train_92348 | the word number ratio of translation pair is on a basis of 1:1. | 1) Subset redundancy information. | neutral |
train_92349 | The reasons for this strategy are as follows: 1) terminology seldom consists of rare words out of GB2312, 2) the index space is dramatically reduced using GB2312 rather than the Unicode encoding so as to quicken the estimation speed. | the issue and the proposed method in this paper are distinctly different with Nagata's. | neutral |
train_92350 | So we need to find a combined technique for reducing this "noise". | hybrid method using linguistic filters proved to be a suitable method for selecting terminological collocations, it has considerably improved the precision of the extraction which is much higher than that of purely statistical method. | neutral |
train_92351 | For example, if the loanword "станц (station)" is to be concatenated with a genitive case suffix, "ын" should be selected from the five genitive case suffixes (i.e., ын, ийн, ы, ий, and н) based on the Mongolian grammar. | xu and Croft (1998) and Melucci and Orio (2003) independently proposed a languageindependent method for stemming, which analyzes a corpus in a target language and identifies an equivalent class consisting of an original form, inflected forms, and derivations. | neutral |
train_92352 | This error occurs because the ending "-ологи (-ology)" does not appear in conventional Mongolian words. | to enhance the objectivity of the evaluation, we used only the phrases for which the two assessors agreed with respect to the part of speech and lemmatization. | neutral |
train_92353 | We take the logarithm of FSR 3 Although there have been many existing works in this direction (Lua and Gan, 1994;Chien, 1997;Sun et al., 1998;Zhang et al., 2000;SUN et al., 2004), we have to skip the details of comparing MI due to the length limitation of this paper. | the performance of these measures is presented in table 2. | neutral |
train_92354 | We can write the following SPE-style rule to account for its variation. | we also use parameter b in Step 4 as a way to discourage excessive splitting by tagging more morphemes as noise. | neutral |
train_92355 | The LP algorithm has also been successfully applied in other NLP applications, such as word sense disambiguation (Niu et al 2005), text classification (Szummer and Jaakkola 2001;Blum and Chawla 2001;Belkin and Niyogi 2002;Zhu and Ghahramani 2002;Zhu et al 2003;Blum et al 2004), and information retrieval (Yang et al 2006). | relation extraction is to detect and classify various predefined semantic relations between two entities from text and can be very useful in many NLP applications such as question answering, e.g. | neutral |
train_92356 | The dynamic methods reduce the miss rates on 5 topics. | this method needs a training corpus to get the extending threshold deciding whether a story should be used to extend another story in a pair. | neutral |
train_92357 | Orthographic variance is a fundamental problem for many natural language processing applications. | they did not deal with a transliterated probability. | neutral |
train_92358 | It shows that the MaxEnt method achieves best performance. | all characters in a name are first converted into upper case for ENOR before feature extraction. | neutral |
train_92359 | As a function of model, c PP measures how good the model matches the test data. | let us use English name "Smith" to illustrate the features that we define. | neutral |
train_92360 | 2) N-gram Perplexity Method (PP): Li et al. | c PP can be used to measure how good a test name matches a training set. | neutral |
train_92361 | Then they apply edit distance based similarity to select the most probable transliteration in the English text. | the measure used is symmetric cross entropy or SCE (Singh, 2006a). | neutral |
train_92362 | Finally, a kind of best first search strategy is applied to obtain chunk sequences hopefully in best first order. | the aim here is to assign the highest rank for the correct chunk and to push down other chunks. | neutral |
train_92363 | Note that no Viterbi search involved here and the state sequence is also known. | we are confident that wide coverage and robust shallow parsing systems can be developed using the UCSG architecture for other languages of the world as well. | neutral |
train_92364 | Major issues of MT from analytic languages into Indo-European ones include three issues: anaphora generation, semantic duplication, and sentence structuring. | ellipsis-based approach is characterized by treating incomplete constituents as if they are of the same simple type but contain ellipsis inside (Yatabe, 2002;Cryssmann, 2003;Beavers and Sag, 2004). | neutral |
train_92365 | In the experiments, we trained the parsers on training data and tuned the parameters on development data. | here, we use a simple feature representation on short dependency relations. | neutral |
train_92366 | Given sufficient labeled data, there are several supervised learning methods for training highperformance dependency parsers . | the goal in dependency parsing is to tag dependency links that show the head-modifier relations between words. | neutral |
train_92367 | However, "E. coli food" is not. | for this evaluation, we employed 500 news articles from Reuters in the health domain gathered between December 2006 to May 2007. | neutral |
train_92368 | Prodi has said he would call a confidence vote if he lost the Communists' support." | notably, a greedy frequency-driven approach leads to very good results in content selection (nenkova et al., 2006). | neutral |
train_92369 | We will provide examples of CST relations: 1. | their method used the same features for every type of CST, resulting in low recall and precision. | neutral |
train_92370 | Finding information about people on huge text collections or on-line repositories on the Web is a common activity. | our best configuration (FD+W) obtains an F-score of 0.74 (or a fourth position in the SemEval ranking). | neutral |
train_92371 | While these results are rather encouraging, they were not optimal. | metrics used to measure the performance of automatic systems against the human output were borrowed from the clustering literature (Hotho et al., 2003) and they are defined as follows: where C is the set of clusters to be evaluated and L is the set of clusters produced by the human. | neutral |
train_92372 | Their modifiers are quite semantically diverse, as shown in Table 2. | despite these benefits, STC has not received much attention by the community. | neutral |
train_92373 | First we extract the pure texts of all Web pages, excluding anchor texts which introduce much noise. | we need to calculate the probability of the occurrence of word w e in language e given a document d in language f , i.e. | neutral |
train_92374 | On one hand, the confidence measure allows us to adjust the original weights of the translations and to select the best translation terms according to all the information. | the second example is the tREC6 query "Acupuncture". | neutral |
train_92375 | The query terms are translated with the baseline models (Section 4). | the translation candidates are rescored and filtered according to a more reliable weight. | neutral |
train_92376 | Only the dependency relationships between content words are extracted. | for example, figure 1 takes the description field of TREC topic 651 as an example and shows part of the parsing result of Minipar. | neutral |
train_92377 | Here, N is the number of words in the correct transcript, I is the number of incorrectly inserted words, D is the number of deletion errors, and S is the number of substitution errors. | for these reasons, it is difficult to apply the method to a large-scale speech-based IR system. | neutral |
train_92378 | From this point of view, word error rate (WER), which is the most widely used evaluation measure of ASR accuracy, is not an appropriate evaluation measure when we want to use ASR systems for IR because all words are treated identically in WER. | for well-defined IRs such as relational database retrieval (E. Levin et al., 2000), significant words (=keywords) are obvious. | neutral |
train_92379 | A word weight should be defined based on its influence on IR. | an estimation method without hand-labeled answer sets, namely, the unsupervised estimation of word weights, is also tested. | neutral |
train_92380 | best determined by acoustic contrast, since accent type is closely linked to pitch height, and the local context and acoustic features serve to identify which accentable words are truly accented. | conditional Random Fields (Lafferty et al., 2001) are a class of graphical models which are undirected and conditionally trained. | neutral |
train_92381 | For the four-way contrast between pitch accent types, we see small to modest gains across all feature sets, with the prosodic case improving significantly (p < 0.025). | we jointly predict pitch accent, phrase accent, and boundary tone and, the prediction of each label depends on the features, the other labels predicted for the same syllable, and the sequential label of the same class. | neutral |
train_92382 | We define four types of named entities: people names (nr), organization names (nt), location names (ns), and numerical expressions (nc) such as calendar, time, and money. | if we know an UNK is a named entity, we can translate this UNK more accurately than using the subword-based approach. | neutral |
train_92383 | Because of this, their system's coverage and W A were relatively poor than ours 8 . | we can obtain r Chinese transliteration hypotheses and classify them into positive and negative samples according to y i . | neutral |
train_92384 | Although the states are conceptually clear, it is not necessarily the case that translators can judge the state of a given translation consistently, because judging a sentence as being "natural" or "confusing" is not a binary process but a graded one, and the distinction between different states is often not immedi-ately clear. | in this paper, we examined the factors that trigger modifications when translators are revising draft translations, and identified computationally tractable features relevant to the modification. | neutral |
train_92385 | Since the new alignment algorithm must enumerate all of the possible alignments, the process is very time consuming. | note that a word can be separated into two substrings each time. | neutral |
train_92386 | However for remedy, many of the current word alignment methods combine the results of both alignment directions, via intersection or grow-diag-final heuristic, to improve the alignment reliability (Koehn et al., 2003;Liang et al., 2006;Ayan et al., 2006;DeNero et al., 2007). | the second method uses the impurity method, which was motivated by the method of decision tree. | neutral |
train_92387 | We can not afford losing so much precious time. | for each classifier, we used tenfold crossvalidation and exhaustive search on adjustable parameters in model selection. | neutral |
train_92388 | These features capture the important aspects of the negotiation process -negotiation-related concepts and indicators of the strategies employed. | electronic means make the contacts less formal, allowing people to communicate more freely. | neutral |
train_92389 | This kind of clustering is not ideal in a corpus containing a huge variation in event streams, like newswire. | we take a two-step approach to the problem; first, we cluster report sentences based on similarity and second, we extract template(s) corresponding to each cluster by aligning the instances in the cluster. | neutral |
train_92390 | We also thank the three anonymous reviewers for helpful suggestions. | this method did not look at arbitrary syntactic patterns. | neutral |
train_92391 | First, we integrate two online databases to extend the coverage of our bilingual dictionaries. | for instance, the Korean NE "메이저리그" and its translation "Major League" are first composed as a query "+메이저리그 +Major League", which is then sent to Google. | neutral |
train_92392 | This results in thousands of possible combinations of Chinese characters, making it very difficult to choose the most widely used one. | yaser Al-Onaizan (Al-Onaizan and Knight, 2002) transliterated an NE in Arabic into several candidates in English and ranked the candidates by comparing their counts in several English corpora. | neutral |
train_92393 | Each NE candidate is first sent to the Korean Wikipedia, and the title of the matched article's Chinese version is treated as the NE's translation in Chinese. | • Chinese monolingual: using the Chinese versions of the topics given by NTCIR. | neutral |
train_92394 | Obviously, the above methods cannot cover all possible translations of NEs. | most non-CJK NEs can be translated correctly by using the K-E translation patterns. | neutral |
train_92395 | Naver people search does not contain an article either because it is not a person name. | we split the translation process into two stages: the first translates the NE into its English equivalent, and the second translates the English equivalent into Chinese. | neutral |
train_92396 | Since f ("漂亮" (beautiful)) = "外观" (appearance) and f ("时尚" (fashionable)) = "外 观" (appearance), "外观" (appearance) is an implicit feature in (b). | with the rapid expansion of network application, more and more customer reviews are available online, which are beneficial for product merchants to track the viewpoint of old customers and to assist potential customers to purchase products. | neutral |
train_92397 | However, it's time-consuming to read all reviews in person. | (b) (外观)漂亮而且(外观)时尚。It's (appearance) beautiful and (appearance) fashionable. | neutral |
train_92398 | Empirical results on three kinds of product reviews indicate the effectiveness of our method. | with the revised mutual information, the implicit features can be deduced from opinion words. | neutral |
train_92399 | We can easily obtain the following relation between two scores: Since, according to this relation, score_p(S) > score_n(S) is equivalent to score_p(S) > 0, we use only score_p(S) for the rest of this paper. | we can also see that the advantage of sentence-wise becomes smaller as the amount of training data increases, and that the hybrid 3gram model almost always achieved the best accuracy among the three models. | neutral |
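The rows above all share a simple four-column schema (id, sentence1, sentence2, label), and every row shown in this slice carries the "neutral" label even though the header reports 4 label classes overall. A minimal sketch of working with such rows once they have been exported to a list of dicts (the two records below are transcribed, truncated, from the table above; the export format itself is an assumption, not something the table specifies):

```python
from collections import Counter

# Two rows transcribed (and truncated) from the table above; each record
# follows the dataset's four-column schema: id, sentence1, sentence2, label.
rows = [
    {
        "id": "train_92300",
        "sentence1": "For example, the first four PCs can justify more than 90% ...",
        "sentence2": "the U and V are the vectors for the 9 sentences and 9 terms ...",
        "label": "neutral",
    },
    {
        "id": "train_92355",
        "sentence1": "The LP algorithm has also been successfully applied in other NLP ...",
        "sentence2": "relation extraction is to detect and classify various predefined ...",
        "label": "neutral",
    },
]

# Label distribution over the sampled rows -- in this slice of the split,
# every record is labeled "neutral".
label_counts = Counter(r["label"] for r in rows)
print(label_counts)  # Counter({'neutral': 2})
```

The same pattern scales to the full split: counting labels this way is a quick sanity check that a slice like this one is single-class before training on it.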