(Column stats from the dataset viewer: id 7–12 chars; sentence1 6–1.27k chars; sentence2 6–926 chars; label one of 4 classes.)

id | sentence1 | sentence2 | label
---|---|---|---|
train_96700 | To demonstrate the use of automatic scientific attribution classification, we studied its utility for one well known discourse annotation task: Argumentative Zoning (Teufel and Moens, 2002). | the distance between two sets is larger, the smaller their intersection and the larger their union. | neutral |
train_96701 | The STRIVE metaclassification approach extended Wolpert's stacking framework (Wolpert, 1992) to use reliability indicators. | while this framework is known to work well for standard classification, its suitability for fusing rankers has not been studied. | neutral |
train_96702 | If we examine the curves using error bars, we see that the variance of STRIVE drops faster than the other classifiers as we move further along the x-axis. | this work was supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. | neutral |
train_96703 | While STRIVE has been shown to provide robust combination for topic classification, a formal motivation is lacking for the type of reliability indicators that are the most useful in classifier combination. | we examine F1 to ensure that an improvement in ranking will not come at the cost of a statistically significant decrease in F1. | neutral |
train_96704 | : Boeing, Heinz, Staples, Textron Test : General Electric, General Motors, Gannett, The Home Depot, IBM, Kroger, Sears, UPS Ground truth was created from the entire web, but since the corpus for each company is only a small web snapshot, the experimental results are not similar to extraction tasks like MUC and ACE in that the corpus is not guaranteed to contain the information necessary to build the entire database. | the database constraints presented in this paper provide a more general framework for jointly conditioning multiple relationships. | neutral |
train_96705 | Using constraints on probabilistic databases yields F-Measure improvements of 5 to 18 points on a per-relationship basis over a state-of-the-art multi-document extraction/fusion baseline. | 6 Λ η,α (r, t) gives a much higher penalty to relationships which create inconsistencies than does Λ η (r, t). | neutral |
train_96706 | We describe our pattern extraction algorithm in three steps. | for example, from the previous snippet we extract the pattern, X is the largest X. | neutral |
train_96707 | Our implementation of Co-occurrence Double Checking (CODC) measure (Chen et al., 2006) reports the second best correlation of 0.6936. | a snippet is a brief window of text extracted by a search engine around the query term in a document. | neutral |
train_96708 | A large number of frequent content words is often associated with only one dominant sense. | its usefulness is restricted by the availability of sense-annotated data. | neutral |
train_96709 | Moreover we retain only the sentences in which at least one of the context words is in our previously acquired knowledge-base of near-synonym collocations. | had difficulties in scaling up. | neutral |
train_96710 | Table 3 shows that the performance is generally better for word counts than for document counts. | : , where N can be ignored in comparisons. | neutral |
train_96711 | A comparison of the two SMT outputs indicates that integrating the proposed transliteration model into our machine translation system can significantly improve translation utility. | the transliteration models can only execute forward sequential jumps. | neutral |
train_96712 | Furthermore, by incorporating an automatic spell-checker based on statistics collected from web search engines, transliteration accuracy is further improved. | position information should be considered within the alignment models. | neutral |
train_96713 | Conversely, a method like pronunciation by analogy (PbA) (Marchand and Damper, 2000) is considered a global prediction method: predicted phoneme sequences are considered as a whole. | unfortunately, proper nouns and unseen words prevent a table look-up approach. | neutral |
train_96714 | The base dependency parser is deterministic and performs a single scan over the sentence. | the revision stage corrects the output of the base parser by means of revision rules learned from the mistakes of the base parser itself. | neutral |
train_96715 | Performing several parses in order to generate multiple trees would often just repeat the same steps. | the model only makes a binary decision, which is suitable for the simpler problem of POS tagging. | neutral |
train_96716 | (2006) that a hierarchically split PCFG could exceed the accuracy of lexicalized PCFGs (Collins, 1999;Charniak and Johnson, 2005). | of course, the naive version of this process is intractable: we have to loop over all (pairs of) possible parses. | neutral |
train_96717 | For a final grammar G = G n , we compute estimates for the n projections G n−1 , . | treebank estimation has several limitations. | neutral |
train_96718 | 9 Broadly, the CFG projection encodes constituency structure, while the dependency projection encodes lexical selection, and both projections are asymptotically more efficient than the original problem. | sometimes, we might be willing to trade search optimality for efficiency. | neutral |
train_96719 | The non-factored model, using the approximate projection heuristic, achieves an F 1 of 83.8 on the test set, which is slightly better than the factored model. | the completion cost of a state bounds the sum of the completion costs in each projection. | neutral |
train_96720 | the text spans to which they apply; these are used as test (W 1 , W 2 ) span pairs for classification. | we find that in Section 24, 13 out of 74 sentences contain a parsing error in the relevant aspects, but the effects are typically small and result from well-known parser issues, mainly attachment errors. | neutral |
train_96721 | The intuition here is that all sentence boundaries should not be treated equally during RSR instance mining. | noRel is proposed by M&E as a default model of same-topic text across which no specific RSR holds; instances are extracted by taking text span pairs which are simply sentences from the same document separated by at least three intervening sentences. | neutral |
train_96722 | While there are a number of different relation taxonomies (Hobbs, 1979;McKeown, 1985;Mann and Thompson, 1988;Martin, 1992;Knott and Sanders, 1998), many researchers have found that, despite small differences, these theories have wide agreement in terms of the core phenomena for which they account (Hovy and Maier, 1993;Moser and Moore, 1996). | we would like to make our patterns recognize that some sentence boundaries indicate merely an orthographic break without a switch in topic, while others can separate quite distinct topics. | neutral |
train_96723 | uses the same grid representation, but treats the transition probabilities P (r i,j | r i,j ) for each document as features for input to an SVM classifier. | the wider variety of grammatical constructions used may motivate more complex syntactic features, for instance as proposed by (Siddharthan et al., 2004) in sentence clustering. | neutral |
train_96724 | The probability that the optimal path has no repeated colors is: By the amplification analysis, the number of trials needed to drive the failure probability to the desired level will be inversely proportional to this quantity. | we implemented the rest of the decoders in Python with the Psyco speed-up module. | neutral |
train_96725 | To answer, let us consider the ratio: If this ratio is less than 1, then using r + 1 colors will be faster than using r; otherwise it will be slower. | the requirement that the path be long makes it more similar to the the Traveling Salesman Problem (TSP). | neutral |
train_96726 | Solving numerically to find where this is equal to 1, we find α ≈ .76804, which yields a running time proportional to approximately (4.5) k . | the running time for the necessary number of trials T r will be proportional to What r ≥ k should we choose to minimize this quantity? | neutral |
train_96727 | For instance, we plan to extract transfer rules from the aligned source and English structures and also calculate head/modifier crossings between languages similar to those described in (Fox, 2002). | knowing a little about the structure of a language can help in developing annotated corpora and tools, since a little knowledge can go a long way in inducing accurate structure and annotations (Haghighi and Klein, 2006). | neutral |
train_96728 | The goal of using a second feature set was to examine how dependent prediction quality was on a specific set of features, as well as to test the extent to which the output of syntactic parsing might improve prediction accuracy. | these may be poor assumptions in a real application, but they can be easily included or excluded in the model as desired. | neutral |
train_96729 | It is widely recognized that one of the best ways to learn a foreign language is through spoken dialogue with native speakers (Ehsani and Knodt, 1998). | this extra step aims at reducing discrepancies caused by syntactic structure differences between the two languages. | neutral |
train_96730 | Indeed, both are trying to assess the quality of the translation output, whether it is produced by a computer or by a foreign language student. | one has to take care not to bias towards the correct response so strongly that the student is allowed to make mistakes with impunity. | neutral |
train_96731 | In practical terms, Eq. | a sample target word and its reference definition, along with examples of humanjudged responses, are given in Sections 3.3 and 4.1. | neutral |
train_96732 | Each e i (i = 1 . | 7 We lowercased the training, development and test sentences. | neutral |
train_96733 | To overcome this, we use a binary format which is a memory map of the internal representation used during decoding. | the memory requirements are low, e.g. | neutral |
train_96734 | In the latter work, the author shows that CP clearly outperforms both the naive single pass solution of severe pruning as well as the naive two-pass rescoring approach. | the evaluation test set contains 500 sentences with an average length of 10.3 source words. | neutral |
train_96735 | Note that this closely corresponds to the post-editing operation performed on the Job Bank application. | hRSDC kindly provided us with a sample of data from the Job Bank. | neutral |
train_96736 | With less than 500k words of training material, the phrase-based MT system already outperforms the rule-based MT baseline. | one way around this problem would be to modify the APE system so that it not only uses the baseline MT output, but also the source-language input. | neutral |
train_96737 | Given the encouraging results of the Portage APE approach in the above experiments, we were curious to see whether a Portage+Portage combination might be as successful: after all, if Portage was good at correcting some other system's output, could it not manage to correct the output of another Portage translator? | when the translation system has been trained using distinct data (Portage Hansard + Portage APE experiments), post-editing makes a large difference, comparable to that observed with the rule-based MT output provided with the Job Bank data. | neutral |
train_96738 | Video collections are difficult to be general-purpose since hundreds hours of videos might take tens of hundreds GB storage space. | we therefore propose two algorithms to 1): re-tokenize an input subsequence, and 2): compute the DP score for a subsequence. | neutral |
train_96739 | For example, the verb "make" uses Arg2 for the "Material" argument; but the verb "multiply" uses Arg2 for the "Extent" argument. | propBank has been widely used as training data for Semantic Role Labeling. | neutral |
train_96740 | On the Wall Street Journal (WSJ) data, using correct syntactic parses, it is possible to achieve accuracies rivaling human interannotator agreement. | as usual, argument classification is measured as percent accuracy (a), whereas ID and ID + Class. | neutral |
train_96741 | Table 7 shows a comparison of these conditions. | there is a considerable drop in Classification accuracy (86.1% vs 93.0%). | neutral |
train_96742 | Table 1 summarizes the quality of the systems selected according to the Accuracy criterion. | we applied CBC to the TREC-9 and TREC-2002 (Aquaint) newswire collections consisting of over 600 million words. | neutral |
train_96743 | We can see that the two approaches agree on the top four pairs, but disagree on the rest in the list. | among 75 skill pairs, 60 of them were rated correctly (i.e., 80% accuracy), which significantly outperforms the statistical approach, and is very close to the upper bound accuracy, i.e., human agreement (81%), as shown in Figure 2. | neutral |
train_96744 | For our task, the correctness of the prepositional phrase attachment is especially important for extracting accurate semantic role patterns (Gildea and Jurafsky, 2002 . | [ action Apply] [ theme Knowledge of [ concept IBM Ebusiness Middleware]] to [ purpose PLM Solutions] In this example, "Apply" is the "action" of the skill; "Knowledge of IBM E-business Middleware" is the "theme" of the skill, where the "concept" semantic role (IBM E-business Middleware) specifies the key component of the skill requirement and is the most important role for skill matching; "PLM Solutions" is the "purpose" of the skill. | neutral |
train_96745 | The term signature wants to capture the notion that the data it embodies is truly representative of a particular item, and that shows the details of its typical behavior. | all vectors produced, one per occurrence of the word in question, are stored then in a kind of vector of vectors that we have called its signature. | neutral |
train_96746 | Using the master method (Cormen et al, 2001), P(N) = O(log 2 N) in the best case (α=1). | where α = 2−p, when p is the probability that the popular half contains sufficient matches. | neutral |
train_96747 | These devices have been researched and are starting to become commercially available (e.g. | a very simple way to prune phrase pairs from a translation model is to use a probability threshold and remove all pairs for which the translation probability is below the threshold. | neutral |
train_96748 | With the exception of Fraser and Marcu (2006), these previous publications do not entirely discard the generative models in that they integrate IBM model predictions as features. | to generative models, this framework is easier to extend with new features. | neutral |
train_96749 | While other work has shown similar performance on this type of dataset, our approach presented here is faster and does not require training. | parallel corpora have many uses in natural language processing, and their dearth has been identified as a major bottleneck (Diab, 2004). | neutral |
train_96750 | We attribute the difference in speed and BLEU score between our system and Pharaoh to the fact Value Elimination searches in a depth-first fashion over the space of partial configurations of RVs, while Pharaoh expands partial translation hypotheses in a best-first search manner. | to each English word e i corresponds a conditionally dependent fertility φ i , which indicates how many times e i is used by words in the French string. | neutral |
train_96751 | This definition of an event follows from the fact that most events in baseball must start with a pitch and usually do not last longer than four shots (Gong et al., 2004). | whereas visual context features provide information about the global situation that is being observed, camera motion features afford more precise information about the actions occurring in the video. | neutral |
train_96752 | Because data is sparse, the situation model is trained only on the hand annotated highlight events. | α=0), and both information together (i.e. | neutral |
train_96753 | Incorrect answers occur significantly more than expected, and the dependencies are stronger for uncI. | non-uncertain answers occur significantly less (-), or aren't significantly dependent (=). | neutral |
train_96754 | These filter responses, which can be computed very efficiently, are used as input to a learning algorithm that generates the final detector. | the experiments performed yield interesting results. | neutral |
train_96755 | We know from results of CVA and BLEU analyses that for both groups of speakers, higherscoring essays are more lexically similar to the prompts. | since better essays are presumably better at expressing the content of the prompts, we can hypothesize that native speakers paraphrase the content more than non-native speakers. | neutral |
train_96756 | This paper investigates the properties of large nbest lists in the context of statistical machine translation (SMT). | from the baseline of 31.5%, we only get a moderate improvement of approximately 0.5% BLEU. | neutral |
train_96757 | The highest n that fit into the 16GB machine was 60 000. | bLEU or WER, using the simplex method. | neutral |
train_96758 | We present a method that allows for fast extraction of very large n-best lists based on the k shortest paths algorithm by (Eppstein, 1998). | most reference translations would never have been selected as final candidates. | neutral |
train_96759 | More recently, new optimization methods have been used to scale-up transductive SVMs to large data sets (Collobert et al., 2006), however we did not face scaling problems in our current experiments. | the experiments were run on the Mastodon cluster provided by NSF grant EIA-0303609. | neutral |
train_96760 | However, it requires considerable human effort to annotate sentences. | in the first iteration, the set of positive examples for production contains all sentences whose corresponding MRs use the production in their parse trees. | neutral |
train_96761 | We could certainly improve the association score model, for example adding discount factors or adding more association score types, or dictionaries. | taking theses issues into account, we implemented the following features: • distinct source and target unlinked word penalties: since unlinked words have a different impact whether they appear in the source or target language, we introduced an unlinked word feature for each side of the sentence pair. | neutral |
train_96762 | In this paper we explored the use of rich syntactic features for the relation extraction task. | comparison between the PAK model and SRL model shows that manually specified features are more discriminative for binary relation extraction; they boost precision and accuracy for ternary relation extraction. | neutral |
train_96763 | Table 3: Percent scores of Precision/Recall/F-score/Accuracy for identifying PL, PO and POL relations. | we explore a much wider variety of syntactic features in this work. | neutral |
train_96764 | We also notice that 8.3% of the soundbites do not have the correct name hypothesis due to an NER boundary error, and that 12.5% is because of missing errors. | using all the features before the soundbites achieves comparable performance to using all the features, indicating that the region before a soundbite contributes more than that after it. | neutral |
train_96765 | (2006) perform this computation by: where While the second term of the covariance is easy to compute, the first term requires calculation of quadratic feature expectations. | unlike the previous method, here the first term can be efficiently calculated as well. | neutral |
train_96766 | Summarizing these results, RH is much faster than Collins model 3 and the reduced version of XLE, but a bit slower than Sagae-Lavie. | table 2 primarily compares the accuracy of the Collins model 3 and RH parsers. | neutral |
train_96767 | Furthermore, a relationship between an entity pair might be expressed with the implication of the principal entity in some cases. | among 39,467 entities collected from all principal and secondary entities, we randomly select 3,300 entities and manually annotate their types for the Entity Classifier. | neutral |
train_96768 | TV WEB TRANSCRIPTS Most of our acoustic and language model training data comes from broadcast news. | some challenges for the cross-adaptations had to be overcomed, for instance to cross adapt the non-vowelized system on the vowelized system, we had to remove the vowels to have a nonvowelized transcript. | neutral |
train_96769 | The article explored an automatic method to learn an SFST from a bilingual set of samples for machine translation purposes, the so-called GIATI (Grammar Inference and Alignments for Transducers Inference). | each multilingual sample is transformed into a single string from an extended vocabulary (Γ ⊆ using a labelling function (L m ). | neutral |
train_96770 | Each source word is then joined with a target phrase of each language as the corresponding segmentation suggests. | both languages differs greatly in syntax and in semantics. | neutral |
train_96771 | The most remarkable difference when comparing both systems is that the N -gram based system produces a relatively large amount of extra words (approximately 10%), while for the phrasebased system, this is only a minor problem (2% of the errors). | the N -gram based system produced more accurate translations, but also a larger amount of extra (incorrect) words when compare to the phrase-based translation system. | neutral |
train_96772 | Using finer affixes reduces the n-gram language model span, and leads to poor performance for a fixed n-gram size. | the first entry (37.59) is the oracle BLEU score for the N-best list. | neutral |
train_96773 | For example, in Arabic and to a certain extent in French, some words can be masculine/feminine or singular/plural. | the information we used from MLPT in Fig. | neutral |
train_96774 | With an N-best list of 30, the system has a very low failure rate for all conditions. | we decided to distinguish the first stressed and the first unstressed syllable from all other stressed and unstressed syllables in the word, in order to encode separate statistics for the privileged first position. | neutral |
train_96775 | The selected paragraphs consist of 36 sentences. | the error ratio, defined as the ratio of the number of errors to the number of words in output sentence, ranges from 4.76% to 61.54%. | neutral |
train_96776 | We employ this method as a baseline system, in which NEs are identified by the auto-matic NE recognizers and dictionary lookups as introduced in §2. | it shows that the inclusion of curated data in the semi- Table 2: Name-only and name-path evaluation results. | neutral |
train_96777 | As the TIMIT corpus provides phone level segmentations, P t is observed during training. | as shown, t 1 and t 4 are the start and end times respectively for phone p 1 , while t 4 and t 7 are the start and end times for phone p 2 . | neutral |
train_96778 | A similar case occurs at the start of phone p 1 and the end of phone p 2 . | to the task of determining the phone boundary, identifying one frame per word unit is much simpler, less prone to error or disagreement, and less costly (Greenberg, 1995). | neutral |
train_96779 | The independent variable in our study is the method of text entry used: (1) letter-by-letter typing using the Wivik keyboard with no word prediction, (2) letter-by-letter typing augmented with word predictions produced by a basic prediction method, (3) letter-by-letter typing augmented with word predictions produced by an advanced prediction method. | while overall communication rate was significantly faster with advanced prediction, users took 0.641 seconds longer for each key press from using advanced prediction compared to entry without prediction. | neutral |
train_96780 | To choose between null, the, or a/an, the language model in effect constructs Equations 6, 7 and 8 and we pick the one that has the highest probability. | the best results of Minnen et al. | neutral |
train_96781 | (3) Highway officials insist the ornamental railings on older bridges aren't strong enough to prevent vehicles from crashing through. | the probability of a parse is given by the equation where l(c) is the label of c (e.g., whether it is a noun phrase NP, verb phrase, etc.) | neutral |
train_96782 | We propose a variation of the SO-PMI algorithm for Japanese, for use in Weblog Opinion Mining. | customer reviews of products, forums, discussion groups, and blogs. | neutral |
train_96783 | The closer they are, the more likely that they should be combined. | in this paper we propose a cascaded hybrid model for Chinese NER. | neutral |
train_96784 | Recent approaches to Chinese NER are a shift away from manually constructed rules or finite state patterns towards machine learning or statistical methods. | • The BIO 3 chunk tags of the previous 3 characters. | neutral |
train_96785 | Results (see Table 6) showed that all the modules got reasonable accuracy except for the sentence root finder. | improving the two root finders is an important task in our future work. | neutral |
train_96786 | In Section 4 we describe our summarizer and these features used in experiments. | our summarizer contains the preprocessing stage and the estimating stage. | neutral |
train_96787 | In the same Mandarin broadcast program, the distribution and flow of summary sentences are relatively consistent. | researchers commonly use acoustic/prosodic variation -changes in pitch, intensity, speaking rate -and du-ration of pause for tagging the important contents of their speeches (Hirschberg, 2002). | neutral |
train_96788 | Most commonly, we use sentences to model individual pieces of information. | for entity nuggets, we examine subtrees headed by "NP"; for event nuggets, subtrees headed by "VP" are examined and their corresponding subjects (siblings headed by "NP") are treated as entity attachments for the verb phrases. | neutral |
train_96789 | These features are Lesk-style features (Lesk, 1986) that exploit overlaps between glosses of target and seed senses. | to compare our results to theirs, we apply our full model (in 10-fold cross validation experiments) to their data sets. | neutral |
train_96790 | A feature is created for each vector entry whose value is the count at that position. | for comparison, the first row repeats the results for the mixed corpus from Table 1. | neutral |
train_96791 | => drawback -(the quality of being a hindrance; "he pointed out all the drawbacks to my plan") The following objective examples are given in WM: The alarm went off. | f-measure for the Overlaps ablation is not significantly different (p = .39). | neutral |
train_96792 | Through manual review and empirical testing on data, (Wiebe and Riloff, 2005) divided the clues into strong (strongsubj) and weak (weaksubj) subjectivity clues. | we opted to treat them as one ablation group (Gloss vector). | neutral |
train_96793 | These methods acquire contextual information directly from unannotated raw text, and senses can be induced from text using some similarity measure (Lin, 1997). | for example, Lesk disambiguated two words by finding the pair of senses with the greatest word overlap in their dictionary definitions (Lesk, 1986). | neutral |
train_96794 | It is estimated that every year around 2,500 new words appear in English (Kister, 1992). | in our method parsing is not performed in real time when we disambiguate words. | neutral |
train_96795 | Clinicians make use of additional information beyond children's speech, such as parent and teacher questionnaires and test scores on different language assessment tasks. | a combined lexicon approach was used to tag the mixedlanguage fragments. | neutral |
train_96796 | Tag Parameters: Draw an infinite sequence of sets i is a distribution over the tagset T ℓ . | for example, the closed form for integrating over the parameter of a superlingual tag with value z is given by: where count(z, y i , ℓ) is the number of times that tag y i is observed together with superlingual tag z in language ℓ, count(z, ℓ) is the total number of times that superlingual tag z appears with an edge into language ℓ, and ω 0 is a hyperparameter. | neutral |
train_96797 | However, the bilingual model explicitly joins each aligned word-pair into a single coupled state. | when a complete tag lexicon is provided, our unsupervised model achieves an average accuracy of 95%, in comparison to 91% for an unsupervised monolingual Bayesian HMM and 97.4% for its supervised counterpart. | neutral |
train_96798 | We explicitly sample only part-of-speech tags y, superlingual tags z, and the hyperparameters of the transition and emission Dirichlet priors. | our multilingual model posits latent cross-lingual tags without explicitly joining or directly connecting the part-of-speech tags across languages. | neutral |
train_96799 | , {α x } Label(α x @a) = ε Foot Axiom α x @a • , i, j, k, l, Λ α x @a • , i, j, k, l, Λ Figure 6: Axioms and inference rules for the CKY algorithm for delayed TL-MCTAG with a delay of d. lay list, giving a total number of active delay lists of O(|G| t(1+d(f −1)) ). | we note that in order to allow a non-total ordering of the trees in a vector we would simply have to record all trees in a tree vector in the histories as is done in the delayed TL-MCTAG parser. | neutral |
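The four-column row layout above (id, sentence1, sentence2, label) can be checked with a short parsing sketch. This is a minimal illustration of the pipe-delimited format as rendered here, not any dataset API; the helper name `parse_row` is made up for this example.

```python
# Parse one pipe-delimited row of the table above into its four fields.
# parse_row is an illustrative helper, not part of any dataset library.

def parse_row(line: str) -> dict:
    # Split on "|", drop the trailing empty cell left by the final "|",
    # and strip surrounding whitespace from each cell.
    cells = [c.strip() for c in line.split("|")]
    if cells and cells[-1] == "":
        cells = cells[:-1]
    if len(cells) != 4:
        raise ValueError(f"expected 4 columns, got {len(cells)}")
    rid, sentence1, sentence2, label = cells
    return {"id": rid, "sentence1": sentence1,
            "sentence2": sentence2, "label": label}

row = parse_row(
    "train_96725 | A comparison of the two SMT outputs indicates that "
    "integrating the proposed transliteration model into our machine "
    "translation system can significantly improve translation utility. | "
    "the transliteration models can only execute forward sequential jumps. | "
    "neutral |"
)
print(row["id"], row["label"])  # → train_96725 neutral
```

Note that this naive split would misparse any sentence that itself contains a `|` character; none of the rows shown here do.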