id (string, 7-12 chars) | sentence1 (string, 6-1.27k chars) | sentence2 (string, 6-926 chars) | label (4 classes) |
---|---|---|---|
train_100700 | While the evaluation measure, Pearson correlation, does not take into account the shape of the output distribution, Figure 1 shows that this information may be a useful indicator of model quality and behaviour. | (2011) and from there derive two dependencybased methods. | neutral |
train_100701 | These distributions were estimated via crowdsourcing. | the system includes various length-related features, where , and length(x) denotes the number of tokens in x. log denotes the natural logarithm. | neutral |
train_100702 | A number of similarity metrics were proposed under either the attributional similarity (Turney, 2006) or the maximum sense similarity (Resnik, 1995) assumptions of lexical semantics 1 . | text semantic similarity estimation has been an active research area, thanks to a variety of potential applications and the wide availability of data afforded by the world wide web. | neutral |
train_100703 | In our previous participation in SemEval12-STS task (Malandrakis et al., 2012) we employed a modification of the pointwise mutual information based on the maximum sense similarity assumption (Resnik, 1995) and the minimization of the respective error in similarity estimation. | we express containment as the amount of ngrams of a sentence contained in another. | neutral |
train_100704 | We calculated the distance to align each group with every pair of aligned verbs. | (3) "The two sentences are roughly equivalent, but some important information differs/missing". | neutral |
train_100705 | On a smaller scale word sense disambiguation, semantic role labeling and time and date resolution. | the tables have to be created only with the similar groups of the sentences. | neutral |
train_100706 | Given two short texts or sentences s 1 and s 2 , we denote the word set of s 1 and s 2 as S 1 and S 2 , the length (i.e., number of words) of s 1 and s 2 as |S 1 | and |S 2 |. | we create 12 string based features in consideration of the common sequence shared by two texts. | neutral |
train_100707 | The typed-similarity dataset comprises pairs of Cultural Heritage items from Europeana 1 , a single access point to digitised versions of books, paintings, films, museum objects and archival records from institutions throughout Europe. | the similarity between two vectors of words can thus be implemented as the similarity between the probability distributions, as given by the cosine between the vectors. | neutral |
train_100708 | We assume that the year of creation or the year denoting when the event took place in an artefact are good indicators for time similarity. | for example the vectors of two articles x and y are: where x and y are two Wikipedia articles and x → l i is a link from article x to article l i . | neutral |
train_100709 | Given the similarity shifts in the different datasets (cf. | the results for the cross-validation process are summarized in table 2. | neutral |
train_100710 | The second and third best-performing measures were WordNet similarity and Levenshtein's edit distance. | in this paper we introduced the LiPN-CORE system, which combines semantic, syntactic an lexical measures of text similarity in a linear regression model. | neutral |
train_100711 | First of all, sentences p and q are analysed in order to extract all the included WordNet synsets. | worst performing similarity measures were Named Entity Overlap, Syntactic Dependencies and ESA. | neutral |
train_100712 | We decided to omit the Microsoft Research Paraphrase Corpus (MSRpar and MSRvid) because we felt that the types of sentence pairs in this corpus were too different from the development data. | overall, our choice of ridge regression is justified. | neutral |
train_100713 | Combining all the available features indeed results in the highest mean score. | column maxima (wp500-long), respectively. | neutral |
train_100714 | angle) between the first sentence of a given pair and the second sentences of all word pairs in the same data set. | we tested linear regression, regularized linear regression (ridge regression), Bayesian ridge regression, support vector regression and regression trees. | neutral |
train_100715 | For example, the longest common substring between the following sentences is bolded: A woman and man are dancing in the rain. | two short texts are considered similar if they both convey similar messages. | neutral |
train_100716 | Text similarity has also been used for relevance feedback and text classification (Rocchio, 1971), word sense disambiguation (Lesk, 1986;Schutze, 1998), and extractive summarization , in the automatic evaluation of machine translation (Papineni et al., 2002), text summarization (Lin and Hovy, 2003), text coherence (Lapata and Barzilay, 2005) and in plagiarism detection (Nawab et al., 2011). | briefly, for each open-class word in one of the input texts, we compute the maximum semantic similarity 4 that can be obtained by pairing it with any open-class word in the other input text. | neutral |
train_100717 | triarcs (1.87B items) consist of four content words (example in Figure 3f). | see Figure 1 for an example of a syntactic-ngram. | neutral |
train_100718 | The syntactic representation we work with is based on dependencygrammar. | to keep the data manageable, we employ a frequency threshold of 10 on the corpus-level count. | neutral |
train_100719 | The assumption behind these methods is that noncompositional MWEs are more syntactically fixed than compositional MWEs. | 6 As we are ideally after broad coverage over multiple languages and MWEs/component words in a given language, we exclude Babelnet and Wiktionary from our current research. | neutral |
train_100720 | 3 They computed backward and forward entropy to try to remedy the problem with especially high-frequency phrases. | for example, with ad hoc, the fact that neither ad nor hoc are standalone English words, makes ad hoc a lexicallyidiosyncratic MWE; with shoot the breeze, on the other hand, we have semantic idiosyncrasy, as the meaning of "to chat" in usages such as It was good to shoot the breeze with you 1 cannot be predicted from the meanings of the component words shoot and breeze. | neutral |
train_100721 | One can see from the table that when the metaphorical verb stir in "stir excitement" is paraphrased as the literal "provoke", the subsequent paraphrasing of "provoke" does not produce "stir". | the baseline system is the implementation of the selectional preference violation view of Wilks (1978) using automatically induced SPs. | neutral |
train_100722 | The system P = 0.68 and R = 0.66, whereas the baseline only attains P = 0.17 and R = 0.55. | cognitive evidence suggests that humans are likely to perform identification and interpretation simultaneously, as part of a holistic metaphor comprehension process (Coulson, 2008;Utsumi, 2011;Gibbs and Colston, 2012). | neutral |
train_100723 | Second, in a small number of cases, both annotators agree on one member of the set of alternatives. | the 76% accuracy reached using the simple textbased classifier suggests that a system which has teachers supply source sentences instead of target answers and then automatically aligns learner answers to the text, while nowhere near comparable to the state-of-the-art supervised system, still achieves a reasonably accurate classification. | neutral |
train_100724 | Kramer 2010proposed the "Gross National Happiness" index and Kivran-Swaine and Naaman (2011) examined associations between user expressions of positive and negative emotions and the size and density of social networks. | to do this, we first judged a sample of 1,000 instances of LIWC terms occurring in Facebook posts to indicate whether they contribute signal towards the associated LIWC category (i.e. | neutral |
train_100725 | We argue that more data of this kind would be helpful to improve existing approaches to linking implicit arguments in discourse and to enable more in-depth studies of the phenomenon itself. | their heuristically created training data might not represent implicit argument instances adequately. | neutral |
train_100726 | The counts for these categories appear in Table 6. | these features are often quite sparse and do not generalize well. | neutral |
train_100727 | Both rely on Wikipedia links and are based on frequencies of these links. | possible senses are given by WordNet and the authors report an inter-annotator agreement of .93 for the RG dataset. | neutral |
train_100728 | We performed a second evaluation study where we asked three human annotators 12 to rate the similarity of word-level pairs in the dataset by Rubenstein and Goodenough (1965). | our novel measure ESA on senses provides the best results. | neutral |
train_100729 | Links in Wiktionary articles are disambiguated and thus transform the resource to a sense-based resource. | to our work, the text itself is not changed and similarity is computed on the level of texts. | neutral |
train_100730 | Word pairs are constructed by adding one word with two clearly distinct senses and a second word, which has a high similarity to only one of the senses. | in this paper we investigate whether similarity should be measured on the sense level. | neutral |
train_100731 | Figure 1 (a)) unknown cells can be assigned {1, 4, 7, 8}. | if The shortest paths cup#1→handle#1→golf_club#2 and cup#7→golf#1→golf_club#2 only exist because the sense golf_club#2 (anchored to the more polysemous lemma club) is present, if it was not then the SENSE_SHIFTS filter would have removed these alternative senses. | neutral |
train_100732 | WordNet-based context enrichment uses the Word-Net synonyms to obtain the context, and concatenates them into the given word to build the ESA vector. | for instance, there is a given word pair "train and car", car has 8 different synsets that build 8 different contexts, and train has 6 different synsets that build 6 different contexts. | neutral |
train_100733 | freely distributed by ELDA for research purposes 3 (Catalog Reference ELRA-W0063). | if we compare these results with the paradigmatic relations encoded in DiCoEnviro, we see that, in the case of absorb, 3 of its neighbours are encoded in the dictionary, and all 3 are antonyms or terms having opposite meanings: emit, radiate, and reflect. | neutral |
train_100734 | If the verb is negated (the left sibling of the I Will subtree spans exactly the word not), we add the postfix NOT to the verb feature, for example verb=quit NOT. | we follow the pairwise approach to ranking (Herbrich et al., 1999;Cao et al., 2007) that reduces ranking to a binary classification problem. | neutral |
train_100735 | In order to reduce the complexity of the task, we based our analysis on the subset of categories, for which the category text described a single problem (a single H, speaking in RTE terms). | in our analysis, we manually compared the email texts to the descriptions of their associated categories in order to investigate the nature of the inference steps involved. | neutral |
train_100736 | Now, the better a sentence is connected, the lower its weight. | s9 Apartheid is south Africa's policy of racial separation. | neutral |
train_100737 | After applying WMVC to the graph in Figure 2, the cover C returned by the algorithm is {S 2 , S 4 , S 7 , S 9 } (highlighted in Figure 2). | the existing summarization methods do not utilize its potential fully. | neutral |
train_100738 | 'do with him order', is ambiguous between the last two and may mean either 'Deal with him' (R1) or 'Clean up with him' (R0). | but why should we care about semantic roles at all? | neutral |
train_100739 | In this and the accompanying paper Jaworski and Przepiórkowski 2014 we suggest an answer in the negative and propose to approximate semantic roles on the basis of syntactic and morphosyntactic information. | in Zwierzę jest leczone z tych chorób 'An animal is treated for these diseases', in the VerbNet experiment the animal was marked as Beneficiary (by 3 annotators), as Patient (×3) and as Source (×1), and in the Sowa experiment -as Beneficiary (×2), as Patient (×2), as Recipient (×2) and as Result (×1). | neutral |
train_100740 | 2 Each sentence in Europarl was written in one of the official languages of the European Parliament and translated to all of the other languages. | 2 Each sentence in Europarl was written in one of the official languages of the European Parliament and translated to all of the other languages. | neutral |
train_100741 | We have found that increasing the number of these artificial projections that are used in training an SRL system does not improve performance as might have been expected when creating such a resource. | we must be careful of drawing too many conclusions because in addition to the difference in dependency schemes, the training data used to train the parsers as well as the parsers themselves are different. | neutral |
train_100742 | RTE is the task of deciding whether a long text T entails a shorter text, typically a single sentence, called hypothesis H. It has been often seen as a classification task (see (Dagan et al., 2013)). | cSTKs are defined over a chunk-based syntactic subtrees where terminal nodes are words or word sequences. | neutral |
train_100743 | As usual, a tree kernel, although written in a recursive way, computes the following general equation: (1) In our case, the basic similarity K F (t i , t j ) is defined to take into account the syntactic structure and the distributional semantic part. | productions of the initial subtrees are complete. | neutral |
train_100744 | Given a simple transition, a process can be viewed as simply an iteration of ν (Fernando, 2009). | more robust parsing will afford us the opportunity to expand the diversity of predicates that the software can handle as well (mc-Donald and Pustejovsky, 2014). | neutral |
train_100745 | This 3-vector is computed as a quaternion for rendering purposes. | we intend to expand the object library to include more complex inanimate objects (tables, chairs, or other household objects) as well as animate objects. | neutral |
train_100746 | Additional contextual information from videos (e.g., scene locations) should help improve performance, especially on tougher videos (e.g., videos involving children chases). | we define the maximum likelihood probabilitiesP , derived from relative frequencies f , for the unigrams, bigrams, and trigrams as follows: for all mental state labels m, activities, and actor types in our queries. | neutral |
train_100747 | Upon review of the video, we agreed that one child did indeed look annoyed. | for recall, the reward given for each color in the mystery bag is capped by the number of pencils of that color in the response bag. | neutral |
train_100748 | The pruned distribution is renormalized to yield the final response distribution. | this would not account for the probability of r in the gold standard distribution, G. An analogy might help here: Suppose we have an unknown "mystery bag" of 100 colored pencils that we will try to match with a "response bag" of pencils. | neutral |
train_100749 | The set ∆ can then be derived from Γ using the following natural deduction rules: 4 • Initialize ∆ with lambda terms (sets) that have no outscoped sets in Γ: • Add constraints to appropriate sets in ∆: • Add constraints of supersets as constraints on subsets in ∆: For example, the graph in Figure 2 can be translated into the following lambda calculus expression (including quantifiers over eventualities in the source graph, to eliminate unbound variables): The semantic dependency representation defined in this paper assumes semantic dependencies other than those representing continuations are derived compositionally by a categorial grammar. | this notation will be used in Section 4 to define constraints in the form of equations. | neutral |
train_100750 | When used to guide a statistical or vectorial representation, it is possible that this local context will allow certain types of inference to be defined by simple pattern matching, which could be implemented in existing working memory models. | this definition assumes a Generalized Categorial Grammar (GCG) (Bach, 1981;Oehrle, 1994), because it can be used to distinguish argument and modifier compositions (from which restrictor and nuclear scope sets are derived in a treestructured continuation graph), and because large GCG-annotated corpora defined with this distinction are readily available (Nguyen et al., 2012). | neutral |
train_100751 | An agent who is told that an entity with height h is tall adds that observation to its knowledge base without questioning the reliability of the speaker. | our setting so far offers a straightforward solution to this: If a new entity x : T with height h is referred to as tall, the agent adds h to its set of observations Ω T tall and recomputes µ tall (Human), for instance using RH-R as defined in (13). | neutral |
train_100752 | Contrary to what has been argued in the literature (Rapp, 2002; Sahlgren, 2006)-that bag-of-words models based on secondorder statistics mainly capture paradigmatic relations and that syntagmatic relations need to be gathered from first-order models-we show that second-order models perform well on both paradigmatic and syntagmatic relations if their parameters are properly tuned. | the results displayed in table 2 show that dimensionality reduction with SVD improves the performance of the models for all datasets but GEK. | neutral |
train_100753 | We can thus explore the empirical question of whether all these related phenomena can be tackled together, with a single model accounting for all of them. | most asymmetric measures proposed in the literature build upon the distributional inclusion hypothesis, stating that "if u is a semantically narrower term than v, then a significant number of salient distributional features of u is included in the feature vector of v as well" (Lenci and Benotto, 2012). | neutral |
train_100754 | We collect both membership and typicality ratings because we expect them to have different implications for sound entailment. | being able to track the impact that modifiers have on heads should thus have a positive effect on important tasks such as recognizing textual entailment, paraphrasing and anaphora resolution (Androutsopoulos and Malakasiotis, 2010;Dagan et al., 2009;Poesio et al., 2010). | neutral |
train_100755 | The highly significant correlations show that the measures do capture to some extent the patterns of variance in the data. | as a control, we also present ratings for unmodified h as an instance of c (we will use them below to test similarity measures on their ability to capture the direction of the membership relation, and to zero in on the effect of modification vs. more general membership/typicality effects). | neutral |
train_100756 | Finding meaningful combinations among unattested or infrequent phrases was not an easy task and there was not always a perfect candidate. | we are thus also providing a novel evaluation of compositional models and asymmetric measures on a challenging task where they could potentially be very useful. | neutral |
train_100757 | We treat the top 10K most frequent lemmas as context elements. | we think that it is more productive for computational systems to handle modifier-triggered disambiguation as a special case of the more general class of modification effects, than to engage in the quixotic pursuit to determine, a priori, what's the boundary between a word-sense and a "pure" modification effect. | neutral |
train_100758 | With the LSTM-RNN model, the tanh function, in general, worked best whereas the sigmoid function was the worst. | positive, negative, or neutral) is available, we put a softmax layer (see Equation 1) to compute the probability of assigning a class to it. | neutral |
train_100759 | We use a neural network which consists of a weight matrix W 1 ∈ R d×d for left children and a weight matrix W 2 ∈ R d×d for right children to compute the vector for a parent node in a bottom up manner. | the complexities of the two models are dominated by the matrix-vector multiplications that are carried out. | neutral |
train_100760 | Vector adaptation methods modify a traditional (i.e. | it is somewhat surprising that dimensionality reduction and integration of semantic spaces do not help in improving performance. | neutral |
train_100761 | We believe that this is the case because implicit SRL, as discussed in Section 2.3, can rely less on syntactic features but must make predictions on the basis of semantic and discourse features, which are more comparable across target parts of speech. | the main question is whether the addition of the (much smaller) SEMEVAL corpus to GERBERCHAI can improve performance. | neutral |
train_100762 | Table 2), which is shaped by subcategorization, is a likely candidate for changess across domains, due to sense shifts. | all previous studies on the SEMEVaL dataset used the FrameNet annotation, and without access to the actual predictions we cannot directly compare our predictions to theirs. | neutral |
train_100763 | The experiments presented in this paper are geared towards the identification of: (1) all 4 types unified under a single label and (2) the "Displacement" type of CMCs (1 of the 4 types). | the most marked improvement is in the WEB models (both CMC and DISPLACE) and the BN model's DISPLACE label classification. | neutral |
train_100764 | Statistically significant change from the Baseline feature set is marked with a †. | if the semantic interpretation is strictly based on the expected semantics of the verb and its arguments, it fails to include the relevant information from the CMC. | neutral |
train_100765 | (Tou Ng et al., 1999) propose to use annotator agreement to cluster senses, reporting higher interannotator agreement after clustering. | scale 4 artifact 6 3 a flat surface at right angles to a plumb line 3 artifact 5 2 indicator that establishes the horizontal when a bubble is centered in a tube of liq. | neutral |
train_100766 | For example, WordNet senses #3 and #4 are grouped under the stative supersense, although the definition and use of the two senses are completely different. | scale 4 artifact 6 3 a flat surface at right angles to a plumb line 3 artifact 5 2 indicator that establishes the horizontal when a bubble is centered in a tube of liq. | neutral |
train_100767 | Another problem with BioScopeScopeSpan annotations stems from the requirement that such annotations should have contiguous spans. | as shown in Table 1, our CuePredicate tagger obtained F-measures in the range of state-of-the-art results on negation cue detection using the BioScope (90-96% F-measure (Velldal et al., 2012)). | neutral |
train_100768 | predicate nodes corresponding to the deep syntactic subject: observe how activator is 'subj' both to do and activate. | we now build a supervised learning system which, given a CuePredicate in a sentence, will identify its corresponding NegatedPredicate. | neutral |
train_100769 | Since our task is different-negated predicate detection as opposed to negated span detectionwe report the Percentage of Correct Scope Predicates (PCSP) obtained in our experiments. | whether to include a subject into a verb scope (e.g. | neutral |
train_100770 | We make no claim that the above list is exhaustive or that there would not be exceptions to these rules. | we discuss these two assumptions in turn. | neutral |
train_100771 | As already explained (Section 2), we take the terms "belief" and "factuality" to refer to the same phenomenon underlyingly (with perhaps different emphases). | in the quote above, Saurí and Pustejovsky (2012) (apart from distinguishing factuality from truth) also make the point that the writer's communicative intention of making the reader believe she has a specific belief state does not mean that she actually has that cognitive state, since she may be lying. | neutral |
train_100772 | We use 6 different word relatedness benchmarks to evaluate NESA. | every seed concept has a ranked list of 20 related Wikipedia concepts. | neutral |
train_100773 | This transformation allows the computation of the correlation weights between the concept dimensions. | to obtain the DiSER based relatedness scores between Wikipedia concepts, we use Entity Relatedness Graph (EnRG) 4 (Aggarwal et al., 2015), which is a focused related entities explorer based on DiSER scores. | neutral |
train_100774 | ESA represents the semantics of a word with a high dimensional vector over the Wikipedia concepts. | it shows that the distributional representation of the article title captures the semantic information better than considering only the corresponding article content. | neutral |
train_100775 | Table 1 shows that the top 5 Wikipedia concepts retrieved for "football" and "soccer" do not share any concept, however, the concepts may exhibit relatedness to each other. | most of the knowledge-based measures use the taxonomic relations for computing word relatedness. | neutral |
train_100776 | This paper explores the utility of homophily within joint models for document-level semantic classification, focusing specifically on tasks which are not associated with any explicit graph structure. | 2-grams and 5-grams are the next best, with a statistically significant 3.37% absolute gain over the content-only baseline. | neutral |
train_100777 | We also replaced our term length factor, TL in equation 2, with KP-miner's boosting function for multi-words. | as these results show, on this dataset of relatively short documents, TextRank outperforms KP-Miner for k>2. | neutral |
train_100778 | We compare our approach with two other unsupervised algorithms that utilize this heuristic: KP-Miner and KX-FBK. | longer n-grams are generally more likely to be keyphrases. | neutral |
train_100779 | This procedure leads to an assimilation of density values in the graphs G k as shown in Table 1: for the 10 languages, the relative standard deviation in network density decreases by about 23%. | we decided to lemmatize only words in the reference language and kept full-forms for all source languages. | neutral |
train_100780 | For frame-semantic parsing, no such restriction in entity type exists. | this bias affects both, frame-semantic parsing and event extraction. | neutral |
train_100781 | As in event extraction, frames occur within sentences and have triggers and roles (called lexical units and frame elements). | their system gives a higher precision for both subtasks. | neutral |
train_100782 | A word pair is represented as a vector of features set up with the most meaningful patterns of context and filled in with information extracted from the graph representation of the corpus. | word pair representations Using the graph model G and the set of contextual patterns automatically acquired P, each word pair (x, y) is represented as a binary distribution over each pattern from P. Rather than using the input corpus to identify contexts of occurrence for the word pair (x, y) and match those with the acquired patterns, GraCE uses paths connecting x and y in G. All the paths between x and y up to three edges are extracted from G. These paths are then matched against the feature patterns from P and the word pair (x, y) is represented as a binary vector encoding non-zero values for all the features matching the pair's paths extracted from G, and zero otherwise. | neutral |
train_100783 | The K&H dataset contains only instances from three domains and is imbalanced between the number of instances across domains and relation types. | given two word pairs, (w 1 , w 2 ) and (w 3 , w 4 ), if w 1 is lexically similar to w 3 and w 2 to w 4 (i.e., are pair-wise similar) then the pairs are said to have the same semantic relation. | neutral |
train_100784 | However, (Turney, 2006b;Turney, 2008a) showed that relational similarity cannot be improved using the distributional similarity of words. | the novelty of this system stands in the graphbased representation. | neutral |
train_100785 | , x n ), and v(y) = (y 1 , y 2 , . | these approaches are limited because a relation may be expressed in many ways, depending on the domain, author, and writing style, which may not match the originally identified patterns. | neutral |
train_100786 | Alignments can be established at the word level, phrase level (MacCartney et al., 2008), or dependency level (Dinu and Wang, 2009). | 1 We implement the algorithm within an open source TE development platform (Padó et al., 2015). | neutral |
train_100787 | One feature of the Dinners from Hell corpus that bears further inspection in future work is the fact that its stories contain many violations of the restaurant script. | first, the ordered pmi model, pmi(e i , e) + n i=k+1 pmi(e, e i ) (4) where C(e 1 , e 2 ) is asymmetric, i.e., C(e 1 , e 2 ) counts only cases in which e 1 occurs before e 2 . | neutral |
train_100788 | Though many early AI systems employed hand-encoded scripts, more recent work has attempted to induce scripts with automatic and scalable techniques. | our results suggest that applying these techniques to a domain-specific dataset may be reasonable way to learn domain-specific scripts. | neutral |
train_100789 | General surveys on EL can be found in (Cornolti et al., 2013) and (Rao et al., 2013). | mapping files: Evaluating EL to Wikipedia requires making sure that we consider the same set of target entities for each EL system, since the versions of Wikipedia deployed within each system may differ. | neutral |
train_100790 | 2006), as may be seen in Fig.1. | using the XML frame file structure of Propbank, we defined the automatic generation of frame files, combining data from Propbank-Br and from Propbank, as shown in Fig.3. | neutral |
train_100791 | Only 109 of the 1453 senses identified in Portuguese did not have an equivalent verb sense in English identified in Propbank. | this approach is part of a previous decision to invert the process of implementing a Propbank project, by first annotating a core corpus and only then generating a lexical resource to enable further annotation tasks. | neutral |
train_100792 | With the recent extension of PropBank SRL to nominal and adjective predicates, preposition relationships, light-verb constructions, and abstract meaning representation (Bonial et al., 2014;Banarescu et al., 2013), it may be time to revisit SP for SRL. | with this definition, we allow different role labels to share the same topic (though it does not encode role constraints quite like LinkLDA, ROOTH-LDA, etc). | neutral |
train_100793 | We used 800 topics (w/ lemmatized headwords) tuning on the 5), our SP approach had a smaller (but still statistically significant) absolute F1 gain, with most of the gain coming from core argument type improvements. | the margin of improvement is a modest 0.4 F1 point (on WSJ) over a baseline system with performance over 4 F1 points lower than the top system in CoNLL-2005 (Carreras and Màrquez, 2005). | neutral |
train_100794 | Handling adjectives, adverbs and words with different POS tags: To get the best out of all Word-Net similarity measures, we exploited the relationships between different forms of the terms in Word-Net to find the noun form of the terms in the entity models and pruned sentences before calculating the similarity. | the term similarity is computed by forming an ensemble using the standard WordNet similarity measures namely, WUP (Wu and Palmer, 1994), LCH (Leacock and Chodorow, 1998), Resnik (Resnik, 1995), LIN (Lin, 1998), JCN (Jiang and Conrath, 1997), as well as a predict vector-based measure Word2vec (Mikolov et al., 2013) and a morphology-based similarity metric Levenshtein 1 as: where t 1 and t 2 are input terms and M is the set of above mentioned similarity measures. | neutral |
train_100795 | Since the MCS showed low precision for the Tneg category in the previous experiment (Table 2), it is potentially introducing too much noise that the SVM is not able to linearly separate. | unfortunately, adjectives and adverbs are not arranged in a hierarchy, and terms with different part of speech (POS) tags cannot be mapped to the same hierarchy. | neutral |
train_100796 | Negation detection is traditionally separated from the entity recognition task because negation indicating terms can be recognized separately from the phrases that contain explicit mention of an entity. | the first experiment evaluates the classification performance of our algorithm, MCS, and SVM. | neutral |
train_100797 | An entity can have multiple definitions each explaining it using diverse vocabulary. | we deem some of the related algorithms to have good potential applicability for this task. | neutral |
train_100798 | We used the NegEx algorithm (Chapman et al., 2001) to address the first type of negations. | we do not expect this limitation to have a major impact. | neutral |
train_100799 | While the annotators have good agreement on annotating sentences in category TP, they agreed less on the categories Tneg and TN. | we included two strong algorithms from the closest related work as baseline solutions to the problem. | neutral |
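If this preview corresponds to a dataset hosted on the Hugging Face Hub, rows with the schema above (string `id`, string `sentence1`, string `sentence2`, and a 4-class `label`) can be loaded and inspected as in the minimal sketch below. The dataset identifier is a hypothetical placeholder, since the preview does not name the actual repository.

```python
# Minimal sketch of loading and inspecting rows with this schema using the
# Hugging Face `datasets` library. "user/dataset-name" is a hypothetical
# placeholder for the real Hub identifier, which this preview does not give.
from datasets import load_dataset

ds = load_dataset("user/dataset-name", split="train")

# Expected features: id (string), sentence1 (string), sentence2 (string),
# label (ClassLabel with 4 classes).
print(ds.features)

# Print a few rows, truncating the sentences for readability.
for row in ds.select(range(3)):
    print(row["id"], row["label"])
    print("  s1:", row["sentence1"][:80])
    print("  s2:", row["sentence2"][:80])
```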