id (stringlengths 7–12) | sentence1 (stringlengths 6–1.27k) | sentence2 (stringlengths 6–926) | label (stringclasses, 4 values)
---|---|---|---|
train_96600 | Our experimental results presented in the next section confirm this benefit. | • Use a meta classifier whose inputs are the outputs of the contrast classifiers in the committee for a class, and whose output is modeled by training it from a separate, randomly sampled data set. | neutral |
train_96601 | The remainder of this paper is structured as follows: Section 2 briefly reviews details of the CLEF 2005 CL-SR task; Section 3 describes the system we used to investigate this task; Section 4 reports our experimental results; and Section 5 gives conclusions and details for our ongoing work. | retrieval effectiveness increased substantially for all topic languages. | neutral |
train_96602 | We tested a total of 240 combinations. | results for German and Czech are much poorer. | neutral |
train_96603 | He discusses how Centering can be used to define many different metrics of coherence which might be useful for this task. | the fact that M.NOCB is shown to overtake its Centering-based competitors across several corpora means that it is a simple, yet robust, baseline against which other similar metrics can be tested. | neutral |
train_96604 | Up to 4000 sentences, the entropy strategy and the combined strategy perform similarly. | we also assume that the selected training samples should give the different aspects of learning features to the classification system. | neutral |
train_96605 | Results show parse accuracy improves significantly, suggesting disfluency filtering may have a broad role in enabling text-based processing of speech data. | our notion of fillers encompasses filled pauses (e.g. | neutral |
train_96606 | As discussed earlier ( §1), Charniak and Johnson (2001) have argued that speech repairs do not contribute to meaning and so there is little value in syntactically analyzing repairs or evaluating our ability to do so. | for automated analysis of speech data, this means we may freely explore processing alternatives which delete disfluencies without compromising meaning. | neutral |
train_96607 | The combination of the two approaches yields better performance than any single model in the two cases. | the impact of the contextual role information is also examined in this study. | neutral |
train_96608 | Features are automatically extracted from each extract (see Table 2). | additional psychological characteristics were computed by averaging word feature counts from the MRC psycholinguistic database (Coltheart, 1981). | neutral |
train_96609 | one per-son is described as an extravert because the average population is not. | between 5 and 7 independent observers scored each extract using the big Five Inventory (John and Srivastava, 1999). | neutral |
train_96610 | In the work we present below, we introduce a new HMM approach to extractive summarization which addresses some of the deficiencies of work done to date. | the classifiers used in these methods implicitly assume that the posterior probability for the inclusion of a sentence in the summary is only dependent on the observations for that sentence, and is not affected by previous decisions. | neutral |
train_96611 | varying 6 β and optimizing the relevant measure for F β ; the points labeled "baseline" show the precision and recall in token and entity level of the baseline model, learned by VP-HMM. | learning methods such as VP-HMM and CRFs optimize criteria such as margin separation (implicitly maximized by VP-HMMs) or log-likelihood (explicitly maximized by CRFs), which are at best indirectly related to precision and recall. | neutral |
train_96612 | When the labels associated with n 1 and n 2 are different, we can avoid evaluating ∆(n 1 , n 2 ) since it is 0. | (2) The overfitting problem does not occur although the richer space of PTs does not provide better accuracy than the one based on SST. | neutral |
train_96613 | 2 We merge the partial trees output by a semantic role labeller with the output of the parser on which it was trained, and compute PropBank parsing performance measures on the resulting parse trees. | these augmented tags and the new non-terminals are included in the set f , and will influence bottomup projection of structure directly. | neutral |
train_96614 | 1 Accordingly, it is not possible to draw a straightforward quantitative comparison between our PropBank SSN parser and other PropBank parsers. | given the hidden history representation h of a derivation, a normalized exponential output function is computed by the SSNs to estimate a probability distribution over the possible next derivation moves d i . | neutral |
train_96615 | The underlying process of message generation is based on layered lexical knowledge bases (LKB) and an ontology. | counting the number of words does not include morphology which in Bliss symbols requires additional choices. | neutral |
train_96616 | Here, mode and σ denote the mode and standard deviation derived from the GlossEx 's confidence value distribution. | finally, the glossary items are ranked based on their confidence values. | neutral |
train_96617 | Speakers tend to repeat their linguistic decisions rather than making them from scratch, creating entrainment over time. | sOURCE=Map Task has an interaction effect on the priming decay ln(DIsT), both for PP priming (β = −0.024, t = −2.0, p < 0.05) and for CP priming (β = −0.059, t = −4.0, p < 0.0005). | neutral |
train_96618 | We believe this is not a side-effect of varying grammar size or a different syntactic entropy in the two types of dialogue, since we examine the decay of repetition probability with increasing distance (interactions with DIST), and not the overall probability of chance repetition (intercepts / main effects except DIST). | map Task contains task-oriented dialogue: interlocutors work together to achieve a task as quickly and efficiently as possible. | neutral |
train_96619 | In the English and Mandarin systems, the lexical and acoustic feature sets perform similarly, and combine to yield improved results. | the tDt-4 audio corpus includes 312.5 hours of English Broadcast News from 450 shows, 88.5 hours of Arabic news from 109 shows, and 134 hours of Mandarin broadcasts from 205 shows. | neutral |
train_96620 | A more accurate WSD model will in turn yield yet better WDD results, as demonstrated in this paper. | the standard comparison test for the Sen-seval3 is not as conclusive as with SemCor. | neutral |
train_96621 | Such corroboration is important as the Senseval3 corpus was not part of the data set used to train the WSD algorithm which provided the basis for subject domain assign-ment. | comparatively little effort has been devoted so far to the disambiguation of word subject domains. | neutral |
train_96622 | In this paper we present a scheme to select relevant subsets of sentences from a large generic corpus such as text acquired from the web. | maximum likelihood estimation of language models is poor when compared to smoothing based estimation. | neutral |
train_96623 | In typical situations, using one of those strategies should be a good choice-since BIA requires more classes, it makes sense to prefer IOB2 when in doubt. | it is rivaled by the new BiA strategy. | neutral |
train_96624 | Nonetheless, this is an interesting result from a linguistic perspective that begs further investigation. | subject animacy information is extracted and represented as four feature columns in our matrix, corresponding to the four subject NP types. | neutral |
train_96625 | The soft clustering algorithm, called FANNY, is a type of fuzzy clustering, where each observation is "spread out" over various clusters. | moreover, there is no reason to expect that there would be perfect alignment between the Arabic clusters and the corresponding translated Levin clusters, primarily because of the quality of the translation, but also because there is unlikely to be an isomorphism between English and Arabic lexical semantics, as assumed here as a means of approximating the problem. | neutral |
train_96626 | In the baseline, verbs are randomly assigned to clusters where a random cluster size is on average the same size as each other and as GOLD. | the output is a membership function P (x i , c), the membership of element x i to cluster c. The memberships are nonnegative and sum to 1 for each fixed observation. | neutral |
train_96627 | Human cognition studies have found that the in front of/behind axis is easier to perceive than other relations (Bryant et al., 1992). | negative training examples for this experiment were selected from the time-periods that elapsed between the follower achieving perceptual access to the object (coming into the same room with it but not necessarily looking at it), but before the Locating description was spoken. | neutral |
train_96628 | For example, "ACL" has many different definitions, including "Anterior Cruciate Ligament (an injury)," "Access Control List (a concept in computer security)," and "Association for Computational Linguistics (an academic society)." | to the best of our knowledge, no other studies have approached this problem. | neutral |
train_96629 | Our proposal depends on co-occurrences on a Web page. | this paper proposed a new method for reading proper names. | neutral |
train_96630 | Our proposal is similar to previous studies in that both use machine learning. | we offer our discussion in Section 5 and conclusions in Section 6. | neutral |
train_96631 | Retriever differs from such schemes in filtering out low value content and by making obscure sources visible. | unlike a search engine, rather than returning ranked documents links in response to a query, Lycos Retriever categorizes and disambiguates topics, collects documents on the Web relevant to the disambiguated sense of that topic, extracts paragraphs and images from these documents and arranges these into a coherent summary report or background briefing on the topic at something like the level of the first draft of a Wikipedia 2 article. | neutral |
train_96632 | an undirected link between s i and s j (i j) with affinity weight aff(s i ,s j ) is constructed if aff(s i ,s j )>0; otherwise no link is constructed. | graph-based methods have been proposed to rank sentences or passages. | neutral |
train_96633 | By trying different bins combinations and different α such that 0 < α < 1 with interval 0.1, we obtained the average optimal α = 0.15 and 0.9 from the Remedia and ChungHwa training sets respectively 7 . | we refined our reference performance level by combining the ME models (MEM) and handcrafted models (HCM). | neutral |
train_96634 | The clustering model is advantageous over other models in that the flexibility of clustering methods allows "many-to-many" mappings. | the average number of images per document is 6.5±1.5 and the average number of sentences per abstract is 7.2±1.9. | neutral |
train_96635 | It is composed of three parts: a dictionary-based N-gram word segmentation for segmenting IV words, a subwordbased tagging by the CRF for recognizing OOVs, and a confidence-dependent word segmentation used for merging the results of both the dictionary-based and the IOB tagging. | section 3 presents our experimental results. | neutral |
train_96636 | Our technique, applied as preprocessing to the source corpus, splits and normalizes surface words based on the target sentence context. | we trained translation parameters for 10 scores (language model, word and phrase count, and 6 translation model scores from (Vogel, 2005) ) with Minimum Error Rate training on the development set. | neutral |
train_96637 | We finally couple the syntactic-prosodic and acousticprosodic components to achieve significantly improved pitch accent and boundary tone classification accuracies of 86.0% and 93.1% respectively. | the pitch accent and boundary tone detection accuracy at the syllable level were 75% and 88% respectively. | neutral |
train_96638 | This suggest that accent ratio provides rich information about words beyond that of POS class and general informativeness. | we might expect such a use of raw frequencies to be problematic. | neutral |
train_96639 | In this section, we show that conversants try to avoid initiative conflicts by examining both the offset of initiative conflicts and the urgency levels. | four cases are because the second conversant has something urgent to say. | neutral |
train_96640 | Once we understand the conventions people adopt in negotiating initiative, we can implement them in a computer dialogue system to create natural interactivity. | if we predict that the system always wins the conflicts, we achieve 70% accuracy. | neutral |
train_96641 | The amount of degradation in the overall accuracy (F1) of each of the models in relation to that of the ALL model indicates the contribution of the feature type that has been left out of the model. | in this study we include the following features: the label of the current topic segment, the position of the DA in a topic segment (measured in words, in seconds, and in %), the distance to the previous topic shift (both at the toplevel and sub-topic level)(measured in seconds), the duration of the current topic segment (both at the top-level and sub-topic level)(measured in seconds). | neutral |
train_96642 | Otherwise, the answer is determined to be no. | the 769 sentences were translated by using five commercial Mt systems to investigate the relationship between subjective evaluation based on yes/no questions and conventional subjective evaluation based on fluency and adequacy. | neutral |
train_96643 | This is because conventional methods are based on the similarity between a translated sentence and its reference translation, and they give the translated sentence a high score when the two sentences are globally similar to each other in terms of lexical overlap. | the sub-goals of a given sentence should be generated by considering the complexity of the sentence and the alignment information between the original source-language sentence and its translation. | neutral |
train_96644 | Maximum Correlation Training (MCT) is an instance of the general approach of directly optimizing the objective function by which a model will ultimately be evaluated. | we also give the upper bound for each evaluation aspect by training MCT on the testing MT outputs, e.g., we train MCT on E09 and then use it to evaluate E09. | neutral |
train_96645 | If we reveal the alignment of the source sentence with both the reference and the MT output, the Chinese word bu neng would be aligned to must not in the reference and must hardly in the MT output respectively, leaving the word not in the MT output not aligned to any word in the source sentence. | evaluation has long been a stumbling block in the development of machine translation systems, due to the simple fact that there are many correct translations for a given sentence. | neutral |
train_96646 | The most interesting question in this paper is, with all these metrics, how well we can do in the MT evaluation. | there is no guarantee that, starting from a random w, we will get the globally optimal w using optimization techniques such as gradient ascent. | neutral |
train_96647 | These results show that the strategy of only including the new information as features in a standard n-best re-ranking scenario does not lead to an improvement over the baseline. | we also show that integrating our case prediction model improves the quality of translation according to BLEU (Papineni et al., 2002) and human evaluation. | neutral |
train_96648 | More recently, (Chiang, 2005) extended phrase-pairs (or blocks) to hierarchical phrase-pairs where a grammar with a single non-terminal allows the embedding of phrases-pairs, to allow for arbitrary embedding and capture global reordering though this approach still has the high overlap problem. | prefixes and suffixes which are specific in translation are limited to their English translations. | neutral |
train_96649 | Also, for a fixed value of β, BASIC-P1 β gives better MUC-F1 than BASIC-Pa β , and BASIC-Pa β gives better pairwise-F1 than BASIC-P1 β for both data sets. | in experiments on two coreference data sets, structured local training reduces the error rate significantly (3.5%) for one coreference data set and minimally (≤ 1%) for the other. | neutral |
train_96650 | Our work differs in that (1) we use hidden variables to capture the interactions between local inference and global inference, we present an application to coreference resolution, while previous work has shown applications for variants of sequence tagging. | we must resort to an 5 Single-link clustering simply takes the transitive closure, and does not consider the distance metric. | neutral |
train_96651 | The proposed twin-model outperforms the baseline without tuning the free parameter. | to implement the twin model, we adopt the log linear or maximum entropy (MaxEnt) model (Berger et al., 1996) for its flexibility of combining diverse sources of information. | neutral |
train_96652 | We see that both the first-order features and the training enhancements improve performance consistently. | 4 Error-driven and Rank-based training of the First-Order Model In this section we propose two enhancements to the training procedure for the First-Order Uniform model. | neutral |
train_96653 | This process is repeated for a fixed number of iterations. | (we should note that there are also small differences in the feature sets used for error-driven and standard training results.) | neutral |
train_96654 | Let s + (Λ, x j ) be the unnormalized score for the positive example and s − (Λ, x k ) be the unnormalized score of the negative example. | they do not investigate rank-based loss functions. | neutral |
train_96655 | Our work is continuing by exploring methods for handling fields with incorrect or corrupted values. | the cross-field inference makes it possible to find documents in response to a structured query when those query fields do not exist in the relevant documents at all. | neutral |
train_96656 | The model is based on the idea that missing or corrupted values for one field can be inferred from values in other fields of the record. | the model will fail whenever the order of words, or their proximity within a field carries a semantic meaning. | neutral |
train_96657 | Figure 3(a) shows that country coverage grows much more rapidly for GRASSHOPPER than for MOVIECOUNT. | it is known that the fundamental matrix gives the expected number of visits in the absorbing random walk (Doyle and Snell, 1984). | neutral |
train_96658 | GRASSHOPPER is an alternative to MMR and variants, with a principled mathematical model and strong empirical performance. | groups of nodes far away from g 1 still allow the random walk to linger among them, and thus have more visits. | neutral |
train_96659 | The deviation in single letter words can be attributed to the writing system being a transcription of phonemes and few phonemes being expressed with only one letter. | this work proposes a plausible model for the emergence of large-scale characteristics of language without assuming a grammar or semantics. | neutral |
train_96660 | Neither the Mandelbrot nor the Simon generation model take the sequence of words into account. | to this, the Mandelbrot model shows to have a step-wise rank-frequency distribution and a distorted lexical spectrum. | neutral |
train_96661 | On the one hand, the improvement suggests that our original feature configuration included some irrelevant features, and in turn confirmed that overinclusion of features could hurt the performance. | (2003) and Culotta and Sorensen (2004) used tree kernels for relation extraction. | neutral |
train_96662 | The result below follows from Bayes' Rule and our assumptions above: Proposition 1 If two strings s i and s j have P i and P j potential properties (or instances), and they appear in extracted assertions D i and D j such that |D i | = n i and |D j | = n j , and they share k extracted properties (or instances), the probability that s i and s j co-refer is: Substituting equation 1 into equation 2 gives us a complete expression for the probability we are looking for. | web Information Extraction (wIE) systems extract assertions that describe a relation and its arguments from web text (e.g., (is capital of, D.C., United States)). | neutral |
train_96663 | Many objects appear with multiple names that are substrings, acronyms, abbreviations, or other simple variations of one another. | our probabilistic String Similarity Model (SSM) assumes a similarity function The model sets the probability of s 1 co-referring with s 2 to a smoothed version of the similarity: The particular choice of α and β make little difference to our results, so long as they are chosen such that the resulting probability can never be one or zero. | neutral |
train_96664 | This algorithm is O(M log M ) where M is the number of distinct strings. | experiments with the extensions, using the same datasets and metrics as above, demonstrate that the Function Filter (FF) and the Coordination-Phrase Filter (CPF) boost ReSOLVeR's performance. | neutral |
train_96665 | Judging recall would require inspecting not just the clusters that the system outputs, but the entire data set, to find all of the true clusters. | we set P i = N × n i , and we set N = 50 in our experiments. | neutral |
train_96666 | Ravikumar and Cohen (Ravikumar and Cohen, 2004) present an unsupervised approach to ob-ject resolution using Expectation-Maximization on a hierarchical graphical model. | if R i,j is true then S i,j = min(P i , P j ), and if R i,j is false then S i,j < min(P i , P j ). | neutral |
train_96667 | For example we can restrict the search space (i.e. | the probability for two words to be related tends to 0 when their similarity is negative (i.e., they are not domain related), supporting the basic hypothesis of this work. | neutral |
train_96668 | Because we are marginalizing over θ, the trees t i become dependent upon one another. | in our experiments with the Sesotho data above we found that for the small values of α necessary to obtain a sparse solution,the expected rule count E[f r ] for many rules r was less than 1 − α. | neutral |
train_96669 | The probability of an SCFG rule instance computed by Algorithm 1 can be written in this functional form: where and the MRF has one factor f i for each child nonterminal X i in the grammar rule R. The factor's value is the probability of the child nonterminal, which can be expressed as a function of its four boundaries: For reasons that are explained in the following section, we augment our Markov Random Fields with a dummy factor for the completed parent nonterminal's chart item. | we have shown in the exponent in the complexity of polynomial-time parsing algorithms for synchronous context-free grammars grows linearly with the length of the grammar rules. | neutral |
train_96670 | For any grammar with maximum rule size n, a fairly straightforward dynamic programming strategy yields an O(N n+4 ) algorithm for parsing sentences of length N . | the overall complexity is unchanged, because each assignment to all variables in each cluster is still considered only once. | neutral |
train_96671 | For each word, w, in the vocabulary, we check whether (1) w can be segmented as r+x or p+r, where p and x are valid prefixes and suffixes respectively and r is another word in the vocabulary, and (2) the WRFR for w and r is less than our predefined thresholds (10 for suffixes and 2 for prefixes). | our algorithm outperforms the winners for all the languages in the competition, demonstrating its robustness across languages. | neutral |
train_96672 | There has also been a rethinking of the traditional modular NLG architecture (Reiter, 1994). | this interest does not appear to have translated into practice: of the 30 implemented systems and modules with development starting in or after 2000 that are listed on a key NLG website 1 , only five have any statistical component at all (another six involve techniques that are in some way corpus-based). | neutral |
train_96673 | Like WASP −1 , the phrase extraction algorithm of PHARAOH is based on the output of a word alignment model such as GIZA++ (Koehn et al., 2003), which performs poorly when applied directly to MRLs (Section 3.2). | finally, we show that hybridizing these two approaches results in still more accurate generation systems. | neutral |
train_96674 | The three next sentences display some advantages of our approach over the K&M model: here, the latter model performs deletion with too little lexicosyntactic information, and accidentally removes certain modifiers that are sometimes, but not always, good candidates for deletions (e.g., ADJP in Sentence 2, PP in sentences 3 and 4). | first, it appears that vertical annotation is moderately helpful. | neutral |
train_96675 | While contextual information is the primary source of information used in WSD research and has been used for acquiring semantic lexicons and classifying unknown words in other languages (e.g., Roark and Charniak 1998;Ci-aramita 2003;Curran 2005), it has been used in only one previous study on semantic classification of Chinese unknown words (Chen and Lin, 2000). | similarity between their modifiers, using the concept of information load (IC) of the least common ancestor (LCA) of the modifiers' semantic categories. | neutral |
train_96676 | To alleviate this problem, the remove-one method is used for testing the knowledge-based models. | 8 The rules for four-character words are given in (9). | neutral |
train_96677 | Among the various knowledge-based (Lesk, 1986;Galley and McKeown, 2003;Navigli and Velardi, 2005) and data-driven (Yarowsky, 1995;Ng and Lee, 1996;Pedersen, 2001) word sense disambiguation methods that have been proposed to date, supervised systems have been constantly observed as leading to the highest performance. | through word sense disambiguation experiments, we show that the Wikipedia-based sense annotations are reliable and can be used to construct accurate sense classifiers. | neutral |
train_96678 | For each word, the table also shows the number of senses, the total number of examples, and two baselines: a simple informed baseline that selects the most frequent sense by default, 5 and a more refined baseline that 5 Note that this baseline assumes the availability of a sense tagged corpus in order to determine the most frequent sense of a word. | rather than using the senses listed in a disambiguation page as the sense inventory for a given ambiguous word, we chose instead to collect all the annotations available for that word in the Wikipedia pages, and then map these labels to a widely used sense inventory, namely WordNet. | neutral |
train_96679 | As the early TRECs have found (Voorhees and Tice, 1999), locating a passage that contains an answer is considerably easier than pinpointing the exact answer. | the goal is to quantify the number of relevant facts that a user will have encountered after reading a particular amount of text. | neutral |
train_96680 | It can be observed that there are seven different ways of re-writing the original query to attain better performance. | the motivation for encouraging this type of querying is that longer queries would provide more information in the form of context (Kraft et al., 2006), and this additional information could be leveraged to provide a better search experience. | neutral |
train_96681 | It is well-known that an average is easily skewed by outliers. | consider the following query: Define Argentine and British international relations. | neutral |
train_96682 | The sentencelevel combination was based on re-ranking a merged ¤ -best list using generalized linear models with features derived from each system's output. | this requires aligning the system outputs to form a consensus network and -during decoding -simply finding the highest scoring path through this network. | neutral |
train_96683 | These weights can balance the total confidence between the number of systems generating the hypothesis (votes), and the sum, maximum and average of the system confidences. | it was found that this approach did not always yield better combination output compared to the best single system on all evaluation metrics. | neutral |
train_96684 | To decide which of the two paths to merge date into, the algorithm looks at the number of items assigned to the deepest node that is held in common between the existing Core Tree and each candidate path for the ambiguous term. | while manually created metadata is considered of high quality, it is costly in terms of time and effort to produce, which makes it difficult to scale and keep up with the vast amounts of new content being produced. | neutral |
train_96685 | In order to pick the value of the parameter α for each of the sIB and IDT test experiments, we use "strapping" , which, as we mentioned earlier, is a technique for training a meta-classifier that chooses among possible clusterings. | a task instance refers to a particular example from the class. | neutral |
train_96686 | Roughly, we expect that if θ h f h (x i , y) is much higher for y = y i than for other values of y, then the annotator's r i is correspondingly more likely to indicate in some way that feature f h strongly influenced annotation y i . | at any given moment, such a tool should allow the annotator to highlight, view, and edit only the several rationales for the "current" annotated entity (the one most recently annotated or re-selected). | neutral |
train_96687 | If we were using more than unigram features, then simply deleting a rationale substring would not always be the best way to create a contrast document, as the resulting ungrammatical sentences might cause deep feature extraction to behave strangely (e.g., parse errors during preprocessing). | we require w which makes w 0 play the role of a bias term). | neutral |
train_96688 | Policy Exploration: RL searches the space of polices by determining Q for each state-action pair sa, which is the minimal cost to get to the final state from state s starting with action a. | it is assumed that the user will not change his or her mind depending on what flights are found. | neutral |
train_96689 | RL gives a way to learn the best action to perform in any given state. | section 5 demonstrates that Is can be used for simulating a dialogue between the system and a user. | neutral |
train_96690 | This domain is simple enough that we do not need separate understanding and action rules, and so we encompass all reasoning in the action rules, shown in Fig. | rL needs to find a single action that will work for the entire rL state, and so that action should not be considered. | neutral |
train_96691 | For example, a model can have an optimal policy with a very high ECR value, but have very wide confidence bounds reaching even into negative rewards. | the V-values indicate how much reward one would expect from starting in that state to get to a final state. | neutral |
train_96692 | All three metrics show that the best feature to add to the Baseline 2 model is Concept Repetition since it results in the most change over the Baseline 2 policy, and also the expected reward is the highest as well. | the approach discussed above assumes that given the size of the data set, the ECR and policies are reliable. | neutral |
train_96693 | These improvements have also been confirmed by t-tests as significant. | for two users (i.e., user 3 and user 4), the gaze-based salience driven language models consistently outperformed the bigram and trigram models in both early application and late application. | neutral |
train_96694 | These features have been demonstrated in several successful applications (Duchowski, 2002). | we found out that the learned priming weight performed worse than the empirical one in our experiments. | neutral |
train_96695 | We can find a similarity also to the PageRank algorithm (Brin and Page, 1998), which has been applied also to natural language processing tasks (Mihalcea, 2004;Mihalcea, 2005). | our attempt is to imitate annotator's decision. | neutral |
train_96696 | Using H(c), the probability distribution of the network is represented as P (c) = exp{−H(c)}/Z, where Z is a normalization factor. | by estimating a class of a proper noun and finding the words that matches the class in the dictionary, we can predict the semantic orientations of the proper noun based on the orientations of the found words. | neutral |
train_96697 | The latent variable method is applicable only to instance pairs consisting of an adjective and a seen noun. | an appropriate mapping from the words found in corpus to entries of a dictionary will solve this problem. | neutral |
train_96698 | The training is performed by iteratively ranking each training input x and updating the model. | if y = y, then the weights and boundaries are updated to improve the prediction for x (step 4.c in Figure 1). | neutral |
train_96699 | The most important features for deciding attribution to the current paper were the distance features 2(a,c,e), the rank 3(a) and the Hobbs' prediction 1(d). | the three citations above are described as flawed (detectable by "does not provide a very satisfactory account"), and thus receive the label Weak. | neutral |
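
The header above describes a simple four-column schema (id, sentence1, sentence2, label with four classes). Below is a minimal sketch of how one might load and sanity-check an export of this split with pandas; the file name `train.csv` is a hypothetical placeholder, since the dataset's actual path is not given here, and only the schema and column statistics come from the header above.

```python
# Minimal sketch, assuming the split previewed above has been exported to a
# local CSV with the same four columns. "train.csv" is a hypothetical name.
import pandas as pd

df = pd.read_csv("train.csv")

# Basic schema checks against the column statistics shown in the header.
assert set(df.columns) == {"id", "sentence1", "sentence2", "label"}
print(df["label"].value_counts())            # expect at most 4 distinct classes
print(df["sentence1"].str.len().describe())  # lengths roughly 6 to ~1.27k chars

# Example: keep only the rows labelled "neutral", as in the preview above.
neutral = df[df["label"] == "neutral"]
print(len(neutral), "neutral pairs")
```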