id (string, lengths 7-12) | sentence1 (string, lengths 6-1.27k) | sentence2 (string, lengths 6-926) | label (string, 4 classes) |
---|---|---|---|
train_17800 | The probability of the different senses of maan can thus be estimated based on the counts of {gardan, galaa} and {aadar, izzat}. | since the words {gardan, galaa} and {aadar, izzat} may themselves be ambiguous, their raw counts cannot be used directly for estimating the sense distributions of maan. | contrasting |
train_17801 | On average, the performance of PPR on nouns is better than our algorithm by 3%. | in 2 out of the 4 language-domain pairs our algorithm does better on nouns than PPR (by 6% in HINDI-HEALTH and 2% in MARATHI-HEALTH; see Table 8). | contrasting |
train_17802 | In this setting, a constituent word is typically represented by a feature vector conflating all the senses of that word. | not all the senses of a constituent word are relevant when composing the semantics of the compound. | contrasting |
train_17803 | When trained on the whole ACE 2005 corpus (in a supervised training scenario) this is appropriate behavior: we don't want to report an event in testing if we haven't seen the trigger before. | for active learning, the inability to differentiate among potential new triggers and local structures is critical. | contrasting |
train_17804 | If it is the event tagger, this sample is informative for the event tagger and adding this sample will improve the performance; if it is the sentence classifier, it is not guaranteed that this sample is informative for the event tagger. | since the updated sentence classifier will serve to select subsequent queries, samples informative for the sentence classifier should accelerate subsequent active learning. | contrasting |
train_17805 | For example, if we have pattern A which is very similar to some event-bearing patterns in the training data, and pattern B which is quite different from any pattern in the training data, the event tagger will treat them the same. | the sentence classifier provides more graded matching, and gives the sentence containing pattern A a higher score because they share a lot of words. | contrasting |
train_17806 | indicated by a common noun or noun phrase, or represented by a pronoun. | wikipedia instances mainly refer to NEs, e.g. | contrasting |
train_17807 | The objective of the word alignment task is to identify translational relationships among the words in a bi-text, and to produce a bipartite graph with a set of edges between words with translational relationships (Figure 2(a)). | the results of automatic word alignment may include incorrect alignments because of technical difficulties. | contrasting |
train_17808 | The baseline model using only language-independent heuristics achieves poor performance, especially in recall. | our proposed projection-based model outperforms the baseline model, due to largely increased recall. | contrasting |
train_17809 | The major problem tackled in such tasks is the handling of unknown words and domain-specific manners of expression. | parsing imperatives and questions involves a significantly different problem; even when all words in a sentence are known, the sentence has a very different structure from declarative sentences. | contrasting |
train_17810 | For Brown overall, compared with the WSJ, the accuracy did not decrease much. | for imperatives and questions, the POS tagger accuracy decreased significantly. | contrasting |
train_17811 | In Figure 3, the parser accuracy for QuestionBank, for which we could use much more training data than for Brown questions, approaches or even exceeds that of the WSJ parser for WSJ. | as there is no more training data for Brown imperatives and questions, we need to either prepare more training data or explore approaches that enable the parsers to be adapted with small amounts of training data. | contrasting |
train_17812 | The adapted parser could handle this dubious construction and assigned ROOT to the main verbs as the corpus required (footnote: we may have to correct the corpus). | we also observed some un- | contrasting |
train_17813 | For example, since the ni (dative) case of 'yaku (3)' often appears, we assume this slot is just omitted as a zero pronoun even if there is no overt argument. | since the ni (dative) case of 'yaku (1)' rarely appears, we assume there is no case slot if there is no overt argument. | contrasting |
train_17814 | are used to create an abstraction of words unknown to the ERG and then binary classifiers are employed to learn lexical entries for those words. | learning is done based on incomplete information obtained by the various resources used. | contrasting |
train_17815 | For all techniques, the experiments show that their application leads to an increase in coverage compared to the baseline, that is, the standard GG setup. | the supertagging methods achieved this at the price of having lower accuracy than the baseline. | contrasting |
train_17816 | Then, the tagger is used to predict lexical types for unknown words in a random sample of 1000 sentences extracted from the FR corpus. | since no annotation is available for the test corpus, the tagger is run with the same features as in but without the POS tags of the context words. | contrasting |
train_17817 | Accordingly, we attach the values of some of the type features defined in the relevant lexical entries to the type definitions of those entries. | for reasons of data sparseness, we choose to exclude some of the features which Cholakov et al. | contrasting |
train_17818 | [Table 4: Paradigm generation results. Total: overall 2954, nouns 1196, adj 651, verbs 694. Accuracy (%): overall 96.45, nouns 91.09, adj 100, verbs 99.54.] In the paradigms generated for verbs there were three mistakes. | the generated verb stems were all correct. | contrasting |
train_17819 | The application of methods, where a tagger was used to predict lexical descriptions for words unknown to the GG grammar of German, led to an increase in parsing coverage on a German newspaper corpus. | accuracy was below the baseline, that is, the accuracy of the standard GG setup. | contrasting |
train_17820 | Parsing is one of the fundamental building blocks of natural language processing, with applications ranging from machine translation (Yamada and Knight, 2001) to information extraction (Miyao et al., 2009). | while statistical parsers achieve higher and higher accuracies on in-domain text, the creation of data to train these parsers is labor-intensive, which becomes a bottleneck for smaller languages. | contrasting |
train_17821 | Many existing dependency corpora use phrases as the unit of annotation, and these resources are a valuable potential source of data for mining word dependencies. | phrase dependencies alone do not provide enough information for an automatic conversion to word dependencies. | contrasting |
train_17822 | Huang and Mi (2010) have shown that forest expansion can be done in decoders using beam search (Koehn, 2004). | to the best of our knowledge, no index-based search structures for incrementally finding trees have been proposed to date. | contrasting |
train_17823 | The cache usage will depend on the order in which the forests are searched and also on the content of the forests. | the forests are searched in the same order across all indexes, therefore, their relative cache performances can be meaningfully compared. | contrasting |
train_17824 | Note that all these works are based on the word sequential alignment models. | for distant language pairs such as English-Japanese or Chinese-Japanese, the word sequential model is quite inadequate (about 20 to 30 % AER), and therefore it is important to improve the alignment accuracy itself. | contrasting |
train_17825 | Each application of an operator generates one new sample, and of course we could use all the generated samples. | successive samples are almost the same, except for one local part. | contrasting |
train_17826 | EXPAND-1 does not have any restrictions on its operation. | for EXPAND-2, if the root node has more than one child node inside the subtree, it cannot exclude the root node, because the subtree will be divided into two subtrees by the exclusion, and it is impossible to return to the previous state. | contrasting |
train_17827 | Therefore given a specific test set, the unknown terms covered by our method are just a subset of CB's method. | on such a subset, our method gains significant improvement in translation quality over CB's approach. | contrasting |
train_17828 | For most recent research efforts, English is the pivot language of choice due to the richness of available language resources. | recent research on pivot translation has shown that the usage of non-English pivot languages can improve translation quality for certain language pairs (Paul et al., 2009;Leusch et al., 2010). | contrasting |
train_17829 | Language family and language perplexity seem to have the least impact on translation performance. | when applying linear regression on language subsets (only European languages), sentence length, reordering and vocabulary are more predictive than the translation model entropy factor for pivot translations between European languages. | contrasting |
train_17830 | We can obtain features from word chunks, such as the first word of a word chunk and the last word of a word chunk, which cannot be obtained in word-sequence-based recognition methods. | each word chunk may include a part of an NE or multiple NEs. | contrasting |
train_17831 | In addition, our method can use features extracted from word chunks that cannot be obtained in word-based NE recognitions. | each word chunk may include a part of an NE or multiple NEs. | contrasting |
train_17832 | Ideally, all the possible word chunks of each input should be considered for this algorithm. | the training of this algorithm requires a great deal of memory. | contrasting |
train_17833 | The number of connections is up to K^2, where K is the number of types of class labels. | sPJR-based ones greedily recognize NEs. | contrasting |
train_17834 | This is because SPJR and NECC require training for the SR-based NE chunker and base NE recognition. | sPJR showed better accuracy than SR. | contrasting |
train_17835 | In such a corpus, a high frequency string tends to have more accessors which will bring noise to the AV criterion. | noise may be filtered out by a threshold in Eq. | contrasting |
train_17836 | Instance-based evaluation measures the EL performance at a fine-grained IE resolution, which can support the development of advanced IE tasks. | to the first metric, the PRF scores are calculated based on the sums of TP, TN, FP and FN for all instances in the test dataset; we further consider whether the boundary matches that of the linked KB entry's mention. | contrasting |
train_17837 | This shows that the saliency property is effective in instance-based evaluation. | mLN_SAL performs slightly worse than mLN_LINK in the article-wide evaluation, the reason for which is explained in Section 5.1. | contrasting |
train_17838 | According to our analysis, S.1 improves the recall in the instance-based evaluation. | for article-wide, S.1 slightly reduces the recall. | contrasting |
train_17839 | Word lists are now available for most of the world's languages (Wichmann et al., 2011). | only a fraction of such lists contain cognate information. | contrasting |
train_17840 | However, these methods cannot handle terms that are not Wikipedia-article titles, and thus their coverage is limited. | distributional similarity between terms has been used in extending an existing taxonomy like WordNet (Snow et al., 2006; Yamada et al., 2009). | contrasting |
train_17841 | (2009) linked a target term to its hypernym in a given taxonomy by using distributional similarity between the target term and terms in the taxonomy. | it is often the case that we cannot obtain reliable distributional similarity of a term and we cannot acquire hypernyms of a term co-occurring with lexico-syntactic patterns, especially when the term is infrequent in a corpus. | contrasting |
train_17842 | According to this observation, the top-n synsets among the synsets that contain hyper are generated. | if the hyponymy relation is wrong (like musician as a hypernym of acoustic guitars), this heuristic will have a negative effect on the performance of synset identification. | contrasting |
train_17843 | In WSD, supervised methods are by far the most successful but large amounts of data are required for each word type to be disambiguated (Navigli, 2009). | unlike in WSD, we can expect to find some linguistic commonality between the idiomatic senses of distinct MWE-types. | contrasting |
train_17844 | The idiom token or type features alone did not stand up well in comparison. | we note that combining our type features with the complete idiom token features provided a disproportionate boost to a classification accuracy of almost four percentage points above the baseline. | contrasting |
train_17845 | Our type features and new idiom features, working in concert with the idiom features of Hashimoto and Kawahara (2009), substantially increase cross-type classification performance over the baseline. | their effect is wholly subsumed by the inclusion of WSD features. | contrasting |
train_17846 | Thus, in this paper, we try to identify which preceding answer an answer is related to, and in what relation. | some answers are irrelevant to a question and these answers might be unnecessary for an overview of a thread. | contrasting |
train_17847 | after using NER, which will be helpful to assign this question to the correct class. | if we do not tag "W.C. Fields" as a person's name, it will be difficult to correctly identify the question's class. | contrasting |
train_17848 | (2008) calculated user-to-question similarity and question-to-question similarity by employing the PLSA model, then ranked candidate questions by combining the two similarity scores. | it is difficult to model users in K2Q, so this algorithm is not suitable for our purpose. | contrasting |
train_17849 | Since the 3-gram "meaning of advocacy" never occurs in the training set, LM does not rank the target question as the top 1 best candidate. | the 3-gram "meaning of WordCluster" occurs frequently in the training set, so the UII model ranks the target question as the top 1 best candidate. | contrasting |
train_17850 | In the training set, the phrase "how many" occurs more frequently than the phrase "what are the different", so LM does not rank the target question as its number 1 candidate. | the corresponding user intent has high probability to generate the slot "different", so the target question is ranked as the best candidate by the UII model. | contrasting |
train_17851 | As a result, Twitter or social network services (SNS) such as mixi played an important role in propagating safety information among people. | with SNSs flooded with information about the disaster, it was difficult to ensure that people would be properly connected with the information they were seeking. | contrasting |
train_17852 | Good performance was obtained for the labels O and M, which have a comparatively larger number of instances. | the recalls for the labels having few instances were lower. | contrasting |
train_17853 | The manually corrected NE information was matched with GPF data, and we were able to update the personal information of more than a hundred individuals who were confirmed to be alive in tweets. | many tokens extracted from tweets had problems, including: • Incomplete LOCATION information: the tokens lacked pieces of information such as city or neighborhood. | contrasting |
train_17854 | On a more abstract level, we were able to show that NLP has the potential to make a contribution in a disaster-response situation, which we hope will provide an impetus for similar future projects. | a number of challenges remain, and we conclude by summarizing the major lessons learned in the project. | contrasting |
train_17855 | There is a large increase in accuracy between using a 3-gram versus a 5-gram model. | the jump from 5-gram to 7-gram is much smaller, and in some cases it even decreases performance. | contrasting |
train_17856 | It further finds the sentiments of these attributes. | our work finds actual values for the attributes and not merely sentiments expressed by users. | contrasting |
train_17857 | Among machine learning approaches CRF (Peng and McCallum, 2006) (Stoyanov and Cardie, 2008) has been effective if we have training data and want to extract template attributes and values. | we do not always have a predefined list of explicit attributes and values which have to be extracted and populated. | contrasting |
train_17858 | The output of these rules is used to populate Data Warehouses and Product Information Management (PIM) systems. | these rules are applied only to short product descriptions which do not lead to a complete view of the product. | contrasting |
train_17859 | We make use of the supervision provided by already existing rulesets frequently used for standardization to get a handle on such values. | as the reviews are verbose and noisy, these values too contain a lot of noise. | contrasting |
train_17860 | Out of the candidates "1369 bucks" and "$685" for the price attribute, "1369 bucks" is chosen as it contains more known attribute values (Canon, 2.8D, lens) in its context in review 1. | as "$685" has many competing attribute values (which do not match the known values) in its context (Nikon, AF, f/3.5-5.6G), it is rejected. | contrasting |
train_17861 | For instance, if the "Product:Camera" or "Brand:Nikon" matches we still cannot be very sure because the author can be comparing two different cameras or two Nikon products. | if a value mentioning weight or lens of the camera matches, we can be more certain as it is unlikely to have two cameras with exactly the same weight or the same lens. | contrasting |
train_17862 | The problem could be further compounded if the author swings back and forth comparing two products leading us to the deep waters of Pronoun Resolution and Attribute Coreference Resolution. | we easily find a way around them with the assumption that the switch will not happen too frequently in most reviews. | contrasting |
train_17863 | Thus, the internal nodes formed during the hierarchical clustering process share words with their children. | in the hierarchical topic model the internal nodes are not summaries of their children. | contrasting |
train_17864 | A Q&A forum acts not only as a medium for knowledge sharing, but also as a place in which one can seek advice, and satisfy others' curiosity about a countless number of things (Adamic et al., 2008). | because of its ever-increasing growth, and the fact that users are not always experts in the areas they post threads on, duplicate content becomes a serious issue. | contrasting |
train_17865 | This approximation can improve the efficiency of the term-based algorithm, enabling the term-based method to process large collections. | the calculated similarities may not reflect the real similarities with this approximation method. | contrasting |
train_17866 | Support Vector Machines, Maximum Entropy, AdaBoost, and so on can be used to solve the problem with the calculated similarities and extracted information as feature sets. | for simplification, in this paper we use a linear combination of different parts' similarities and a predefined threshold τ to do that. | contrasting |
train_17867 | come close to that of our SVM regression model when we only compare the top 3 or 4 candidates. | the performance gap becomes larger as K grows. | contrasting |
train_17868 | The anchor/coreference-based method cannot acquire the argument "taichou (condition)-wo", which appears in one node (which means this argument is shared by no events). | our proposed method can acquire such an argument. | contrasting |
train_17869 | The main purpose of information retrieval (IR) is to provide the user with documents that are relevant to his/her information needs. | it is difficult to achieve this by one-off retrieval, since user queries are typically short and often ambiguous (Jansen et al., 2000). | contrasting |
train_17870 | As described above, many methods have been proposed for RF. | most of the previous methods use only the surface information in texts. | contrasting |
train_17871 | In past KBP competitions, many participants (Li et al., 2009;Byrne and Dunnion, 2010;Chen et al., 2010) exploited a QA system to fill slots by constructing queries based on target entities and slot types. | their query templates contain only a few additional query terms other than the target entity name, which are mostly obtained manually. | contrasting |
train_17872 | Of the total 67 person entities in our test data, the IE-pipeline is not able to extract any employment information for 12 of them. | using the primitive QA-pipeline, we are able to recover 4 of them while introducing 6 new errors. | contrasting |
train_17873 | COGSUM has previously worked without aid from any outside source, making it highly portable and more or less language-independent. | some problems have been detected. | contrasting |
train_17874 | However, their task was limited to retrieving the argument heads. | we integrate discourse segmentation in the parsing pipeline because we believe that spans are necessary when using the discourse arguments as input to applications such as opinion mining, where attributions need to be explicitly marked. | contrasting |
train_17875 | Attributions occur in 34% of all explicit relations in the PDTB, and represent one of the major challenges in identifying exact argument spans, especially for Arg2. | given the fact that Arg2 is syntactically bound to the connective, its identification is generally considered an easier task than the detection of Arg1. | contrasting |
train_17876 | We used this tool because the output of CRF++ is compatible with the CoNLL 2000 chunking shared task, and we view our task as a discourse chunking task. | linear-chain CRFs for sequence labeling offer advantages over both generative models like HMMs and classifiers applied at each sequence position. | contrasting |
train_17877 | As Figures 3 and 4 show, the performance was competitive among the three lasso methods when the number of features was small. | with a large number of features, relational lasso generally outperforms the other lassos. | contrasting |
train_17878 | Therefore, the computational complexity of relational lasso will not change even with α within the regularizer, and the overall speed of relational lasso is almost the same as that of the conventional method. | adaptive lasso requires twice as much time since the bottleneck part is done twice. | contrasting |
train_17879 | As a practical example, in a classification experiment on 492,617 training documents, 275,364 test documents and 132,199 categories of Yahoo!, ScutTD costs only 2.1 hours on training and 0.12 hours on classifying, while 1-vs-Rest costs 310 hours on training and 54 hours on classifying. | scutTD has a well-known deficiency in classification accuracy; that is, its performance is generally worse than the flat 1-vs-Rest approach (Bennett and Nguyen, 2009; Ceci and Malerba, 2007a; Xue et al., 2008). | contrasting |
train_17880 | One kind is mandatory leaf-node classification, where only the leaf nodes are valid labels or classes (Dumais and Chen, 2000; Freitas and de Carvalho, 2007; Silla and Freitas, 2010). | the other, correspondingly, is non-mandatory leaf-node classification, where both the internal nodes and the leaf nodes are valid labels (Lewis et al., 2004). | contrasting |
train_17881 | According to Mani (2001), automatic text summarization takes a partially-structured source text from multiple texts written about the same topic, extracts information content from it, and presents the most important content to the user in a manner sensitive to the user's needs. | to summarizing one document that is termed as single document summarization, multi-document summarization deals with multiple documents as sources that are related to one main topic under consideration. | contrasting |
train_17882 | Given a sentence (or query), we first parse it into a syntactic tree using a parser like (Charniak, 1999) and then calculate the similarity between the two trees using the tree kernel (discussed in Section 4.1). | syntactic information is often not adequate when dealing with long and articulated sentences or paragraphs. | contrasting |
train_17883 | For example, "I worked till 5pm yesterday" uses the past-tense verb "worked" to describe an event in the past. | in languages such as Chinese, no verb inflections exist to indicate any tense information (Xiao and McEnery, 2002). | contrasting |
train_17884 | Ye and Zhang (2005) also used a parallel Chinese-English corpus to obtain tense information; however, it was done manually. | our method is automated, which allows us to utilize a large amount of existing parallel data. | contrasting |
train_17885 | In practice, we also need to choose a training regime (in order to learn the weights of the formulae we added to the MLN) and a search/inference method that picks the most likely set of ground atoms (PA relations in our case) given our trained MLN and a set of observations. | implementations of these methods are often already provided in existing Markov Logic interpreters such as Alchemy and Markov thebeast. | contrasting |
train_17886 | The intuition behind the previous formulae can also be captured using a local classifier. | markov Logic also allows us to say more: isArg(a) ⇒ ∃p.∃r. role(p, a, r). In this formula, we made a statement about more global properties of a PA relation extraction that cannot be captured with local classifiers. | contrasting |
train_17887 | are much more common than intra-sentential zero-anaphoric PA relations (Zero-Intra). | in Japanese, we often find zero-anaphoric PA relations called case ellipsis. | contrasting |
train_17888 | Another error, underlined, is that the ga-case of "�� (transport by air)" is identified as "�� (reason)", because "��" is only a phrase dependent on "� �". | global improved the errors as {role(5, 6, ga), role(5, 4, wo), role(8, 6, ga), role(8, 10, wo), role(11, 6, ga), role(11, 10, wo)}. | contrasting |
train_17889 | This means that, for instance, any occurrence of the verb charge, such as in the expressions "charge a fee" or "charge a battery", is assigned the same vector representation, ignoring the difference of word sense. | the fact that charge and impose are near-synonyms in "charge/impose a fee" will not be properly reflected in their respective meaning vectors, since the former, but not the latter, includes (context words reflecting) the "supply electricity" sense of charge. | contrasting |
train_17890 | Although all models have been evaluated on test-sets derived from the LST dataset in essentially the same way, the datasets differ slightly due to technical details, so strictly speaking the results cannot be compared directly. | since all authors report similar results. | contrasting |
train_17891 | The Dimension Title, Purpose and Gap-Background models present good results and should be incorporated into SciPo as new functionalities. | taking into consideration the annotation process, we observed difficulties in labelling the sentences with regard to the Dimension Linearity-break. | contrasting |
train_17892 | However, the aforementioned line of work tackled subjectivity detection either as supervised or semi-supervised learning, requiring labelled data and extensive knowledge which are expensive to acquire. | both subjectivity and sentiment are context sensitive and in general quite domain dependent (Pang and Lee, 2008), so that classifiers trained on one domain often fail to produce satisfactory performance when shifted to new domains (Gamon et al., 2005;Blitzer et al., 2007). | contrasting |
train_17893 | The formal definition of the subjLDA generative process is as follows: In practice, it is quite intuitive that one classifies a sentence as subjective if it contains one or more strongly subjective clues (Riloff and Wiebe, 2003). | the criterion for classifying objective sentences could be rather different, because a sentence is likely to be objective if there are no strongly subjective clues. | contrasting |
train_17894 | The previously proposed DiscLDA (Lacoste-Julien et al., 2008) and Labeled LDA (Ramage et al., 2009) also utilize a transformation matrix to modify Dirichlet priors by assuming the availability of document class labels. | we use word prior sentiment as supervised information to modify the topic-word Dirichlet priors. | contrasting |
train_17895 | In addition, except for objective recall, subjLDA outperforms LDA in both the sentence and document modes for all the other evaluation metrics, with more balanced objective and subjective F-measures being attained compared to the other two models. | it was observed that while LDA(Doc.) | contrasting |
train_17896 | Moreover, apart from subjectivity clues, they also used additional features such as subjective/objective patterns and POS for the Naive Bayes sentence classifier training. | the proposed subjLDA model is relatively simple, with only a small set of subjectivity clues being incorporated as prior knowledge. | contrasting |
train_17897 | Analyzing the objective recall and precision shown in Figures 3(b) and 3(c) reveals that, while incorporating the 4,500 least frequent neutral words considerably increases the objective recall, the objective precision does not drop much, which eventually leads to the overall improvement of all three models. | compared to the subjective words, the classification improvements from incorporating additional neutral words are less significant. | contrasting |
train_17898 | Here, we adapted the general keyphrase extraction procedure from the scientific publications domain (Witten et al., 1999;Turney, 2003) to the extraction of opinion-reasoning features. | our task is rather different since we aim at identifying the reasons for opinions, instead of keyphrases that represent the content of the whole document. | contrasting |
train_17899 | As we treated the mining of pros and cons as a supervised keyphrase extraction task, we conducted measurements with KEA (Witten et al., 1999), which is one of the most cited publicly available automatic keyphrase extraction systems. | we should note that, due to the fact that our phrase extraction and representation strategy (and even the determination of true positive instances to some extent) slightly differs from that of KEA, the added values of our features should rather be compared to our second Baseline System (BL_WN), which uses WordNet for candidate phrase normalization. | contrasting |
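The preview above shows only rows carrying the `contrasting` label; per the header, the `label` column holds 4 string classes in total. Below is a minimal sketch of loading and inspecting the data with the Hugging Face `datasets` library. The repository ID `user/sentence-pair-relations` is a placeholder, not the dataset's real name; loading the same columns from local JSON or CSV files works the same way.

```python
# A minimal sketch, assuming this preview corresponds to a dataset hosted on the
# Hugging Face Hub. "user/sentence-pair-relations" is a placeholder repository ID.
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("user/sentence-pair-relations", split="train")

# Columns match the preview header: id, sentence1, sentence2, label.
print(dataset.column_names)  # ['id', 'sentence1', 'sentence2', 'label']

# 'label' is a string column with 4 classes; count how often each occurs.
print(Counter(dataset["label"]))

# Keep only the 'contrasting' pairs, i.e. the subset shown in the rows above.
contrasting = dataset.filter(lambda row: row["label"] == "contrasting")
for row in contrasting.select(range(3)):
    print(row["id"], "|", row["sentence1"][:60], "...")
```

From there, the sentence pairs can be fed to any standard sentence-pair classifier; filtering by label as above is also how a binary contrast-detection subset would be built.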