Column types: id — string (length 7–12); sentence1 — string (length 6–1.27k); sentence2 — string (length 6–926); label — string (4 classes).

id | sentence1 | sentence2 | label |
---|---|---|---|
train_92500 | The IPC taxonomy has four hierarchical layers: Section, Class, Subclass, and Group. | we can expect to improve multi-label classification performance by using binary classifiers trained to maximize the F 1 -score. | neutral |
train_92501 | A rule-based system consists of a set of rules. | we assume that the rules are given to us and study the problem of arranging them into an optimal decision list, where optimality is determined over a training data set. | neutral |
train_92502 | In our task, we chose the bottom k related sentences with the lowest certainty scores. | fortunately, active learning systems design strategies to select the most informative training examples. | neutral |
train_92503 | We finally applied the model to new biomedical articles and examined its performance on one of its subsets. | as every processing unit in the data set is at the sentence level and we make decisions at the sentence level to train better sequential labeling models, we define heuristic scores at the sentence level. | neutral |
train_92504 | Then the likelihood of labels y = [y 1 , . | given the high degree of human interaction involved, their method will not be scalable to a large number of medical conditions. | neutral |
train_92505 | In addition, recall that headings, bullet list and ordered list allowed the repetitions of symbols "=", "*" and "#". | this suggests that we could extract different types of hyponymy relations from each of these methods. | neutral |
train_92506 | (hyponym is one of hypernym) Note that hyponym and hypernym match only with NPs. | note that the non-variable part of the patterns is removed from the matched hypernym candidates. | neutral |
train_92507 | In other words, as there is no horizontal connection at the same level, it is not possible to create triangle circulation paths in a single stroke. | we selected 98,083 words after removing noise words, functional words, and 1,321 isolated words to extract word pairs by combining every headword with every other headword included within an entry text. | neutral |
train_92508 | In order to overcome such difficulties in building appropriate lexical graphs for corpus data, we propose an original way of appropriately subdividing core clusters by taking into account graph coefficients, especially the curvature of a hub word. | as there is no horizontal connection at the same level, it is not possible to create triangle circulation paths in a single stroke. | neutral |
train_92509 | In addition, the adoption of techniques to deal with unknown words and techniques to combine with rules may also improve the performance of our algorithm. | it substitutes the probability P r the tag of word w i is determined by the tags of the J words right before w i and L words right before w i . | neutral |
train_92510 | In this work, we introduce a novel approach for automatic correction of spelling mistakes by deploying finite state automata to propose candidates corrections within a specified edit distance from the misspelled word. | we can make use of the finite state automaton representation of the dictionary to make this step more efficient. | neutral |
train_92511 | Com-pared with (a), it is obvious that the improvement between TSVM using heuristics with TSVM for ARG0 and ARGM-TMP is larger than the overall improvement. | both of them are used for training TSVM model. | neutral |
train_92512 | The evaluation phase consists in checking the performance of each model for predicting thematic boundaries. | each utterance has been considered as a block of text. | neutral |
train_92513 | The EM algorithm for AMDD is based on iteratively maximizing the log-likelihood function: . | lCSeg SVMs P k error rate 21% 32 % 22% Table 1: Comparative performance results. | neutral |
train_92514 | For example, news stories about "A team at Peking University in Beijing studied tissue taken from 2 people killed by H5N1 in China" or "A meeting on foot and mouth disease (FMD) was held in Brussels on 17 th October, 2007". | the remainder of this paper is organized as follows. | neutral |
train_92515 | These components are depicted in Figure 1. | there are two locations named Camden: One in Australia and one in London, UK. | neutral |
train_92516 | TTS systems have been developed using the Festival framework for different languages, including English, Japanese, Welsh, Turkish, Hindi, and Telugu (Black and Lenzo, 2003). | empirical estimation of least number of input classes needed for training a neural net for Sinhala character recognition suggested about 400 classes (Weerasinghe et al., 2006). | neutral |
train_92517 | be an image feature space, and Based on the image data set V, we can estimate an image instance-to-feature co-occurrence matrix A |V|×|F | ∈ R |V|×|F | , where each element A ij (1 ≤ i ≤ |V| and 1 ≤ j ≤ |F|) in the matrix A is the frequency of the feature f j appearing in the instance be a text feature space. | labeled data are often scarce and expensive to obtain. | neutral |
train_92518 | For the WSsim/SemCor dataset, the correlation between original and WSsim annotation was ρ = 0.234, ρ = 0.448, and ρ = 0.390 for the three annotators, each highly significant with p < 2.2e-16. | 5 Unlike previous word sense annotation projects, we asked annotators to provide judgments on the applicability of every WordNet sense of the target lemma with the instruction: 6 2 Throughout this paper, a target word is assumed to be a word in a given PoS. | neutral |
train_92519 | It uses paraphrases for words in context as a way of annotating meaning. | the percentage of markables that received multiple sense labels in existing corpora is small, and it varies massively between corpora: In the SemCor corpus (Landes et al., 1998), only 0.3% of all markables received multiple sense labels. | neutral |
train_92520 | In contrast, the macro average does not bias the scores, thus the roles having a small number of instances affect the average more than the micro average. | we thus propose another approach that incorporates group information as feature functions. | neutral |
train_92521 | Automatic dependency relations were produced by the MALT parser. | the Argument Mapping Predictor uses the following features: (23) Predicate. | neutral |
train_92522 | We used Score(x i,t ) 7 to replace Score(x i,t ) in our conversion algorithm and then ran the updated algorithm on CDT. | score Interpolation Unlabeled dependency f-scores used in section 2.1 measure the quality of converted trees from the perspective of the source grammar only. | neutral |
train_92523 | It is important to acquire additional labeled data for the target grammar parsing through exploitation of existing source treebanks since there is often a shortage of labeled data. | for acquisition of better conversion rules, Xia et al. | neutral |
train_92524 | All the works in Table 8 used CTB articles 1-270 as labeled data. | the word depends on the word . | neutral |
train_92525 | Thus we managed to collect a named entity translation dictionary to enhance the original one. | the basic idea to support this work is to make use of the semantic connection between different languages. | neutral |
train_92526 | Above all, a translated word pair list, L, is extracted from the translated treebank. | this extra effort did not receive an observable performance improvement in return. | neutral |
train_92527 | We hypothesized earlier that lexicalization is unlikely to give us much improvement in performance, because topological fields work on a domain that is higher than that of lexical dependencies such as subcategorization frames. | all productions in the corpus have also been binarized. | neutral |
train_92528 | Our model views each pair of sentences as having been generated as follows: First an alignment tree is drawn. | we want our model to find tree alignments such that both aligned node pairs and unaligned nodes have high Giza-score. | neutral |
train_92529 | Given a sample we can obtain the seen vocabulary and the seen number of hapax legomena. | we thus need methods to extrapolate empirical measurements of these quantities to arbitrary sample sizes. | neutral |
train_92530 | A collection of about 3 million words from varied articles in the Hindi language also from the Central Institute of Indian Languages. | we conclude that almost surely. | neutral |
train_92531 | Correct stress placement is important in textto-speech systems because it affects the accuracy of human word recognition (Tagliapietra and Tabossi, 2005;Arciuli and Cupples, 2006). | the ranking approach facilitates inclusion of arbitrary features over both the input sequence and output stress pattern. | neutral |
train_92532 | Predicting the full stress pattern is therefore inherently more difficult than predicting the location of primary stress only. | unlike typical sequence predictors, we do not have to search for the highest-scoring output according to our model. | neutral |
train_92533 | We study the use of phonological features and affinity statistics for transliteration alignment at phoneme and grapheme levels. | we arrive at a set of 560,768 English-Chinese (EC) pairs that follow the Chinese phonetic rules, and a set of 83,403 English-Japanese Kanji (EJ) pairs, which follow the Japanese phonetic rules, and the rest 29,219 pairs (REST) being labeled as incorrect transliterations. | neutral |
train_92534 | Each swarm combines results from six rule sets with varying amounts of pruning (no pruning and pruning with cut-off = 1..5). | the main challenge for the training algorithm is that it must produce rules that accurately lemmatize OOV words. | neutral |
train_92535 | For Bengali precision was 39.3 percent better than without stemming, though no absolute numbers were reported for precision. | we explain how the lemmatization rules are created and how the lemmatizer works. | neutral |
train_92536 | The last rule's pattern matches any word and so the lemmatizer cannot fail to produce output. | to the DAG, the tree implements negation: if the N th sibling of a row of children fires, it not only means that the pattern of the N th rule matches the word, it also means that the patterns of the N-1 preceding siblings do not match the word. | neutral |
train_92537 | In all cases, we prune the lattices/hypergraphs to a density of 30 using forward-backward pruning (Sixtus and Ortmanns, 1999). | because the right-hand side of r e has n nonterminals, the arity of e is |e| = n. Let T (e) = {v 1 , ..., v n } denote the tail nodes of e. We now assume that each tail node v i ∈ T (e) is associated with the upper envelope over all candidate translations that are induced by derivations of the corresponding nonterminal symbol X i . | neutral |
train_92538 | When the arity of the edge is 2, a rule has the general form aX 1 bX 2 c, where X 1 and X 2 are sequences from tail nodes. | translation lattices contain a significantly higher number of translation alternatives relative to Nbest lists. | neutral |
train_92539 | Therefore, by doing so, the tree sequence rules can be extracted from a forest in the following two steps: 1) Convert the complete parse forest into a non-complete forest in order to cover those tree sequences that cannot be covered by a single tree node. | with this platform, we can easily implement our method and many previous syntax-based methods by simple parameter setting. | neutral |
train_92540 | Fundamentally, syntax-based SMT views translation as a structural transformation process. | in Table 2, partially-lexicalized rules extracted from training corpus are the major part (more than 70%). | neutral |
train_92541 | Examples of such overviews include actor biographies from IMDB and disease synopses from Wikipedia. | for instance, some approaches coarsely discriminate between biographical and non-biographical information (Zhou et al., 2004;Biadsy et al., 2008), while others go beyond binary distinction by identifying atomic events -e.g., occupation and marital status -that are typically included in a biography (Weischedel et al., 2004;filatova and Prager, 2005;filatova et al., 2006). | neutral |
train_92542 | Participants directly express their opinions, such as "The iPhone is cool," but, more often, they mention associated aspects. | here again, we use the process described in Section 3.1 to extract polarity-target pairs for each opinion expressed in the post. | neutral |
train_92543 | Thus, one major source of errors is a false hit of a word in the lexicon. | we need to find opinions and pair them with targets, both to mine the web for general preferences and to classify the stance of a debate post. | neutral |
train_92544 | Additionally, we see that some conclusions of the OpPMI system are similar to those of the OpPr system, for example, that "Storm" is more closely related to the Blackberry than the iPhone. | 1 In this work, we deal only with dual-sided, dual-topic debates about named entities, for example iPhone vs. Blackberry, where topic 1 = iPhone, topic 2 =Blackberry, side 1 = pro-iPhone, and side 2 =pro-Blackberry. | neutral |
train_92545 | This is expected, because it relies only on opinions explicitly toward the topics. | previous work did not account for concessions in determining whether an opinion supports one side or the other. | neutral |
train_92546 | For each of the 4 debates in our test set, we use posts with at least 5 sentences for evaluation. | an opinion expressed about "Storm" is usually the opinion one has toward "Blackberry." | neutral |
train_92547 | We propose a number of characteristics of good sentiment terms from the perspectives of informativeness, prominence, topic-relevance, and semantic aspects using collection statistics, contextual information, semantic associations as well as opinion-related properties of terms. | this paper introduces an approach to the sentiment analysis tasks with an emphasis on how to represent and evaluate the weights of sentiment terms. | neutral |
train_92548 | We also thank the anynomous reviewers of the previous drafts of this paper for their valuable suggestions in improving the evaluation and presentation. | the set {B, C} has nodes that have both senses, forming an ambiguity set. | neutral |
train_92549 | Recent studies Kozareva et al., 2008) show that if the size of a corpus, such as the Web, is nearly unlimited, a pattern has a higher chance to explicitly appear in the corpus. | based on different definitions of Count(. | neutral |
train_92550 | Each term insertion yields a new partial taxonomy T. By the minimum evolution assumption, the optimal next partial taxonomy is one gives the least information change. | the KnowItAll system extended the work in (Hearst, 1992) and bootstrapped patterns on the Web to discover siblings; it also ranked and selected the patterns by statistical measures. | neutral |
train_92551 | The basic idea underlying the algorithm is that if the dynamic range of the perceptron is not too large then w t would classify most instances correctly most of the time (for most values of t). | reidsma and Carletta (2008) recently showed by simulation that different types of annotator behavior have different impact on the outcomes of machine learning from the annotated data. | neutral |
train_92552 | We show in section 3.4 that the widely used Freund and Schapire (1999) voted perceptron algorithm could face a constant hard case bias when confronted with annotation noise in training data, irrespective of the size of the dataset. | e(ζ 2 ) ≥ min{H T (δ), H γ (δ)}, and Corollary 4 The bound in theorem 2 does not converge to zero for large N . | neutral |
train_92553 | The third conclusion we can draw then is twofold. | this conclusion was also reached by . | neutral |
train_92554 | If we read these tables column-wise, thereby taking the more linguistically-inspired labels in VerbNet to be the reference labels, we observe that the labels in PropBank are especially concentrated on those labels that linguistically would be considered similar. | 2 Furthermore, we perform our analyses on training and development data only. | neutral |
train_92555 | Semantic roles are organized according to the thematic hierarchy (one proposal among many is Agent > Experiencer> Goal/Source/Location> Patient (Grimshaw, 1990)). | both annotation schemes could be useful in different circumstances and at different frequency bands. | neutral |
train_92556 | the bottom example in Table 2). | future work will have to assess the effectiveness of individual features and investigate ways to customize RTE systems for the MT evaluation task. | neutral |
train_92557 | Similar ideas have been applied by Owczarzak et al. | entailment relations are more sensitive to the contribution of individual words (MacCartney and Manning, 2008). | neutral |
train_92558 | In order to check the robustness of these results, we computed the correlation of individual metric failures between test beds, obtaining 0.67 Pearson for the lowest correlated test bed pair (AE 2004 and CE 2005 ) and 0.88 for the highest correlated pair (AE 2004 and CE 2004 ). | linguistic metrics are represented by grey plots, and black plots represent metrics based on n-gram overlap. | neutral |
train_92559 | Figure 1 shows the correlation obtained by each automatic evaluation metric at system level (horizontal axis) versus segment level (vertical axis) in our test beds. | the main reason is that the advantages of employing deeper linguistic information have not been clarified yet. | neutral |
train_92560 | Since it is not linguistically motivated, original phrasebased decoding might produce ungrammatical or even wrong translations. | we add a new feature into the log-linear translation model: P SDB (b|T, τ (.)). | neutral |
train_92561 | In a projective dependency tree, the yield of every subtree is a contiguous substring of the sentence. | more precisely, if the next node in the projective order is the kth node in the buffer, we perform k SHIFT transitions, to get this node onto the stack, followed by k−1 SWAP transitions, to move the preceding k − 1 nodes back to the buffer. | neutral |
train_92562 | An improved algorithm computes, for each possible edge k → i, a modified Kirchoff matrix K k→i that requires the presence of that edge. | gE training of the full CRF outperforms EM with 10 constraints and CE with 20 constraints (those displayed in Table 1). | neutral |
train_92563 | The development and tuning of the above methods constitute the encoding of prior domain knowledge about the desired syntactic structure. | we also report the accuracy of an attach-right baseline 6 . | neutral |
train_92564 | Seginer (2007) and Bod (2006) propose unsupervised phrase structure parsing methods that give better unlabeled F-scores than DMV with EM, but they do not report directed dependency accuracy. | the CRF may consider the distance between head and child, whereas DMV does not model distance. | neutral |
train_92565 | We first try to add the transferred edges in random order, then for each orphan node we try all possible parents (both in random order). | if we specify just the two rules for "da" and verb conjugations performance jumps to that of training on 60-70 fully labeled sentences. | neutral |
train_92566 | The basic data structures are a stack, where the constructed dependency graph is stored, and an input queue, where the unprocessed data are put. | conceptually, this conversion is similar to the conversions from deeper structures to GR reprsentations reported by clark and curran (2007) and . | neutral |
train_92567 | The next decision is how to sample S rand from L hand . | this manual evaluation is extremely time consuming and is necessary due to the limited coverage of biomedical resources. | neutral |
train_92568 | This paper proposes a novel framework for a large-scale, accurate acquisition method for monolingual semantic knowledge, especially for semantic relations between nominals such as hyponymy and meronymy. | if this is the case, the effect of adding the translation to the training data can be quite large, and the same level of effect may not be achievable by a reasonable amount of labor for preparing the training data. | neutral |
train_92569 | Let hyper be a hypernym candidate, hypo be a hyper's hyponym candidate, and (hyper, hypo) be a hyponymyrelation candidate. | in the case of hyponymy-relation acquisition in English and Japanese, (s, t) ∈ D Bi could be (s=(enzyme, hydrolase), t=(Þ (meaning enzyme), AÄ$F Þ (meaning hydrolase))). | neutral |
train_92570 | In order to reduce irrelevant chunks, when excerpts were extracted, the Provider drops all characters preceding the hyponym phrase in excerpts that contain the first type, and also drops all characters following the hyponym phrase in excerpts that contain the second type. | unlike all of those systems, ASIA does not use any NLP tool (e.g., partsof-speech tagger, parser) or rely on capitalization for extracting candidates (since we wanted ASIA to be as language-independent as possible). | neutral |
train_92571 | We conjecture that in other application settings the rules extracted from Wikipedia might show even greater marginal contribution, particularly in specialized domains not covered well by Word-Net. | as the results with this filter resemble those for Dice we present results only for the simpler Dice filter. | neutral |
train_92572 | In addition, the items in each semantic class need to be properly ordered. | multi-membership is more popular than at a first glance, because quite a lot of English common words have also been borrowed as company names, places, or product names. | neutral |
train_92573 | They do not generate semantic classes. | when we input "gold" as the query, the item "silver" can only be assigned to one semantic class, although the term can simultaneously represents a color and a chemical element. | neutral |
train_92574 | We assume that the parse trees of s and t are known. | we discuss next how to parameterize the probability p kid that appears in Equations 4, 5, and 6. | neutral |
train_92575 | The CRF++ version 0.50, a popular CRF library developed by Taku Kudo, 6 is reported to take 4,021 seconds on Xeon 3.0GHz processors to train the model using a richer feature set. | the test set was used only for the final accuracy report. | neutral |
train_92576 | This is because otherwise the accuracy of the component models would be overestimated by the joint model. | we show the performance on lemmatization when tags are not predicted (Tag Model is none), and when tags are predicted by the tag-set model. | neutral |
train_92577 | Several authors investigate neural network models that learn not just one latent state, but rather a vector of latent variables, to represent each word in a language model (Bengio et al., 2003;Emami et al., 2003;Morin and Bengio, 2005). | what is the effect of smoothing on sequencelabeling accuracy for rare word types? | neutral |
train_92578 | In our next experiment, we consider a common scenario where rare terms make up a much larger fraction of the test data. | we take ten random samples of a fixed size from the labeled training set, train a chunking model on each subset, and graph the F1 on the labeled test set, averaged over the ten runs, in Figure 1. | neutral |
train_92579 | We investigate the use of distributional representations, which model the probability distribution of a word's context, as techniques for finding smoothed representations of word sequences. | the distribution for a corpus x = (x 1 , . | neutral |
train_92580 | For the small toy example shown in Figure 3, the correct tagging is "PRO AUX V . | we run EM training again for Model 5 (the best model from Figure 5) but this time using 973k word tokens, and further increase our accuracy to 92.3%. | neutral |
train_92581 | Since we were interested in finding an optimal combination of word-level and characterlevel nodes for training, we focused on tuning r. We fixed N = 10 and k = 5 for all experiments. | this section discusses the structure of f (x, y). | neutral |
train_92582 | We envision this technique to be general and widely applicable to many other sequence labeling tasks. | we also thank Yang Liu and Haitao Mi for helpful discussions. | neutral |
train_92583 | For example, b-NN indicates that the character is the begin of a noun. | the compositive error rate of Recall for all word clusters is reduced by 20.66%, such a fact invalidates the effectivity of annotation adaptation. | neutral |
train_92584 | That is, we repeated the experiment, in which we used one discourse from among 16 discourses as the test data and the others as the learning data, 16 times. | among 1,119 incorrectly inserted linefeeds, the most frequent cause was that linefeeds were in- T h a t i s t h e p e r i o d w h i c h I c a l l t h e f i r s t p e r i o d w i t h o u t a p o l o g y Figure 6: Example of incorrect linefeed insertion in "adnominal clause." | neutral |
train_92585 | This ratio is less than half of that for all the bunsetsu boundaries. | in the clause boundaries of the "adnominal clause" type, linefeeds should rarely be inserted fundamentally. | neutral |
train_92586 | In this section, we discuss the causes of the incorrect linefeed insertion occurred in our method. | the F-measure and the sentence accuracy of our method were 81.43 and 53.15%, respectively. | neutral |
train_92587 | Our experimental results using co-training are significantly better than the original supervised results using the small amount of training data, and closer to that using supervised learning with a large amount of data. | our confidence-based approach balances the number of positive and negative samples and significantly reduces the error rates for the negative samples as well, thus leading to performance improvement. | neutral |
train_92588 | An obvious way to summarize multiple spoken documents is to adopt the transcribe-andsummarize approach, in which automatic speech recognition (ASR) is first employed to acquire written transcripts. | in this paper, the similarity of utterances is estimated directly from recurring acoustic patterns in untranscribed audio sequences. | neutral |
train_92589 | For example, we can express BLEU(e; e ) = exp In this expression, BLEU(e; e ) references e only via its n-gram count features c(e , t). | (2008) primarily in that we propose decoding with an alternative to MBR using BLEU, while they propose decoding with MBR using a linear alternative to BLEU. | neutral |
train_92590 | The space of similarity measures is large and relatively unexplored, and the feature expectations that can be computed from forests extend beyond n-gram counts. | first, word lattices are a subclass of forests that have only one source node for each edge (i.e., a graph, rather than a hyper-graph). | neutral |
train_92591 | Forest-based consensus decoding leverages information about the correct translation from the entire forest. | they derive a first-order taylor approximation to the logarithm of a slightly modified definition of corpus BLEU 4 , which is linear in n-gram indicator features δ(e , t) of e . | neutral |
train_92592 | Statistical machine translation is a decision problem where we need decide on the best of target sentence matching a source sentence. | we have presented a framework for including multiple translation models in one decoder. | neutral |
train_92593 | All models can be combined at the translation level. | we can use the Bisection method for finding the intersection in each bin. | neutral |
train_92594 | From the results we can see that combination based on co-decoding's outputs performs consistently better than that based on baseline decoders' outputs for all n-best sizes we experimented with. | the work on multi-system hypothesis selection of Hildebrand and Vogel (2008) bears more resemblance to our method in that both make use of n-gram agreement statistics. | neutral |
train_92595 | We We use the parallel data available for the NIST 2008 constrained track of Chinese-to-English machine translation task as bilingual training data, which contains 5.1M sentence pairs, 128M Chinese words and 147M English words after pre-processing. | the results for hypothesis selection are only slightly better than the best system in co-decoding. | neutral |
train_92596 | Lastly, the definition of the n-gram model is different. | we also showed that interpolating variational models with the Viterbi approximation can compensate for poor approximations, and that interpolating them with one another can reduce the Bayes risk and improve BLEU. | neutral |
train_92597 | To obtain p(y | x) above, we need to marginalize over a nuisance variable, the derivation of y. | we use standard beam-pruning and cube-pruning parameter settings, following Chiang (2007), when generating the hypergraphs. | neutral |
train_92598 | # $ % & Figure 1: Segmentation ambiguity in phrase-based MT: two different segmentations lead to the same translation string. | in order to score well on the BLEU metric for MT evaluation (Papineni et al., 2001), which gives partial credit, we would also like to favor lower-order ngrams that are likely to appear in the reference, even if this means picking some less-likely highorder n-grams. | neutral |
train_92599 | Third, all possible MRs for the sentence are constructed compositionally in a recursive, bottom-up fashion following its syntactic parse using composition rules. | a child MR becomes an argument of the macro-predicate if it is complete (i.e. | neutral |
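The rows above can be consumed with a short script. Below is a minimal sketch, assuming each row uses " | " as the field separator and may carry a trailing pipe, as in the listing; the function name `parse_row` and the truncated example row are illustrative, not part of the dataset.

```python
def parse_row(line):
    # Split one pipe-delimited row into its four fields:
    # id, sentence1, sentence2, label. A trailing " |" is stripped first.
    parts = [p.strip() for p in line.rstrip(" |").split(" | ")]
    return {"id": parts[0], "sentence1": parts[1],
            "sentence2": parts[2], "label": parts[3]}

row = parse_row(
    "train_92500 | The IPC taxonomy has four hierarchical layers. | "
    "we can expect to improve multi-label classification performance. | neutral |"
)
print(row["id"], row["label"])
```

Note that splitting on " | " assumes no sentence field itself contains that exact separator; for rows where that could occur, splitting from the ends (id from the left, label from the right) would be safer.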