id (string, 7-12 chars) | sentence1 (string, 6-1.27k chars) | sentence2 (string, 6-926 chars) | label (string, 4 classes) |
---|---|---|---|
train_12700 | Traditional question answering systems adopt the framework of parsing questions, searching for relevant documents, and then pinpointing and generating answers. | this framework includes potential dangers. | contrasting |
train_12701 | Inspired by this, we consider taking N-grams in sentences as our features. | n-gram features not closely related to the topic will bring more noise into the system. | contrasting |
train_12702 | Trigram features actually bring enough evidence in classification. | when we investigated 4-grams language features in the collected data, most of them are very sparse in the feature space of all the cases. | contrasting |
train_12703 | our approach in the use of metadata, Web-derived answer patterns and context bring incremental gains to RC performance. | the actual gain levels are much reduced. | contrasting |
train_12704 | For those cases, our association data provides a useful basis for detecting missing links in GermaNet, which can be used to enhance the taxonomy. | a large proportion of the "no relation" associations represent instances of verbverb relations not targeted by GermaNet. | contrasting |
train_12705 | There is hardly any overall difference in accuracy between the shallow and the deep classifier. | it seems that the shallow classifier in its current form has very little potential outside of the CD subset whereas the deep classifier shows a more promising performance for several subsets. | contrasting |
train_12706 | Using only the case tagger it is not possible to classify all the nouns in this set. | it is possible to capture some nouns in this set by applying a heuristic: if a noun follows the preposition zum, zur, am, im, ins, or ans −→ Class I. | contrasting |
train_12707 | This assumption holds well on hand-corrected parse trees and simplifies significantly the SRL process because only one syntactic constituent has to be correctly classified in order to recognize one semantic argument. | this approach is limited when using automatically-generated syntactic trees. | contrasting |
train_12708 | Several studies have established that there is considerable variance in semantic role assignment performance across different semantic roles within systems (Carreras and Màrquez, 2004; Carreras and Màrquez, 2005; Pado and Boleda Torrent, 2004). | these studies used either the PropBank semantic role paradigm (Carreras and Màrquez) or a limited set of experimental conditions (Pado and Boleda). | contrasting |
train_12709 | Since we used the gold standard features provided by FrameNet and did not introduce implementation-or feature-specific knowledge, this points to a general limitation of syntax-based models. | semantic features behave completely differently; their contribution is not limited by a role's confusability. | contrasting |
train_12710 | The results from these papers indicate that on corpus sizes up to 60,000 parallel sentences, the restructuring operations yielded a large improvement in translation quality, but the morphological decomposition provided only a slight additional benefit. | since German is not as morphologically complex as Czech, we might expect a larger benefit from morphological analysis in Czech. | contrasting |
train_12711 | Our final set of experiments used the same input format as the Modified Lemma experiments. | in this set of experiments, we changed the model used to calculate the word-to-word alignment probabilities. | contrasting |
train_12712 | The morpheme-based model in Equation 2 is similar to the modified lemma model in that it removes much of the differentiation between Czech wordforms, but leaves the differences that are most likely to appear as inflection on English words. | it also performs an additional smoothing function. | contrasting |
train_12713 | Fitting a model based on F̂_α=0.75, which gives increased weight to recall compared with F̂_α=0.5, led to higher recall as expected. | we also expected that the F_α=0.75 score of the F̂_α=0.75-trained classifier would be higher than the F_α=0.75 score of the F̂_α=0.5-trained classifier. | contrasting |
train_12714 | The classifiers discussed in this paper are logistic regression models. | this choice is not crucial. | contrasting |
train_12715 | Evita extracts the values of these two attributes using basic pattern-matching techniques over the appropriate verbal, nominal, or adjectival chunk. | the identification of event class cannot rely on linguistic cues such as the morphology of the expression. | contrasting |
train_12716 | This computation shows an average output for all participants of about 6,500 sentences and a median of 6,981, out of a total of 8,343 sentences. | this total includes some amount of header material, not only the headline, but the document ID and other identifiers, the date and some shorthand messages from the wire services to its clients. | contrasting |
train_12717 | There may be cases where e 1 and e 2 belong to predicate-argument structures that have no argument in common. | because the dependency graph is always connected, we are guaranteed to find a shortest path between the two entities. | contrasting |
train_12718 | ROUGE-W is also based on LCS, but assigns higher weights to sequences that have fewer gaps. | these metrics still do not distinguish among translations with the same LCS but different number of shorter sized subsequences, also indicative of overlap. | contrasting |
train_12719 | The fact that current automatic MT evaluation metrics including BLANC do not correlate well with human judgments at the sentence level, does not mean we should ignore this need and focus only on system level evaluation. | further research is required to improve these metrics. | contrasting |
train_12720 | While in traditional word-based statistical models (Brown et al., 1993) the atomic unit that translation operates on is the word, phrase-based methods acknowledge the significant role played in language by multiword expressions, thus incorporating in a statistical framework the insight behind Example-Based Machine Translation (Somers, 1999). | phrase-based models proposed so far only deal with multi-word units that are sequences of contiguous words on both the source and the target side. | contrasting |
train_12721 | The word posterior probability is then calculated by summing over all target sentences containing word e in a position which is Levenshtein-aligned to i. The confidence of word e then depends on the source sentence f_1^J as well as the target sentence e_1^I, because the whole target sentence is relevant for the Levenshtein alignment. | to the approaches presented in section 4, the phrase-based confidence measures do not use the context information at the sentence level, but only at the phrase level. | contrasting |
train_12722 | Such documents provide many training instances, where a word in one language is translated into another. | the data is only partially labeled in that we are not given a word-to-word alignment between the two languages, and thus we do not know what every word in the source language S translates to in the target language T . | contrasting |
train_12723 | Let b_1, ..., b_k be the set of words that align to a in the aligned sentence t. In general, we can consider U_a = {U_a,s→t}_s∈D_a to be the candidate set of translations for a in T, where D_a is the set of source language sentences containing a. | this definition is quite noisy: a word b_i might have been aligned with a arbitrarily; or, b_i might be a word that itself corresponds to a multiword translation in S. Thus, we also align the sentences in the T → S direction, and require that each b_i in the phrase aligns either with a or with nothing. | contrasting |
train_12724 | In the community of parsing, labeled recall and labeled precision on phrase structures are often used for evaluation. | in our experiments we cannot evaluate our parser with respect to the phrase structures in PTB. | contrasting |
train_12725 | The f-score on the oracle in top 10 in the development data is 88.5%, while the f-score of the top candidate is 87.3%, as shown in Table 1. | we are not satisfied with the score on oracle, which is not good enough for post-processing, i.e. | contrasting |
train_12726 | The parser proposed in this paper is an incremental parser, so the accuracy on dependency is lower than that for chart parsers, for example like those reported in (Collins, 1999; Charniak, 2000). | it should be noted that the dependencies computed by our parser are deeper than those calculated by parsers working directly on PTB. | contrasting |
train_12727 | While Church's technique was developed with speech recognition in mind, we will show that it is useful for investigating psycholinguistic phenomena. | the connection between cognitive phenomena and engineering approaches goes in both directions: it is possible that syntactic parsers could be improved using a model of syntactic priming, just as speech recognition has been improved using models of lexical priming. | contrasting |
train_12728 | This strongly indicates that the parallelism effect is an instance of a general processing mechanism, such as syntactic priming (Bock, 1986), rather than specific to coordination, as suggested by (Frazier and Clifton, 2001). | we also found that the parallelism effect is strongest in coordinate structures, which could explain why comprehension experiments so far failed to demonstrate the effect for other structural configurations (Frazier et al., 2000). | contrasting |
train_12729 | For most standard association measures utilized for terminology extraction, the frequency of occurrence of the term candidates either plays a major role (e.g., C-value), or has at least a significant impact on the assignment of the degree of termhood (e.g., t-test). | frequency of occurrence in a training corpus may be misleading regarding the decision whether or not a multi-word expression is a term. | contrasting |
train_12730 | We observe that backward constituent alignment-based models (1-4) perform similarly to word-based projection models (the F-score ranges between 0.40 and 0.45). | they obtain considerably higher precision (albeit lower recall) than the word-based models. | contrasting |
train_12731 | It is possible to correct more errors per token by iterating the correction process. | iterative correction cannot guarantee that the result is optimal under the model. | contrasting |
train_12732 | Our lexicon-free chunking algorithm placed an erroneous boundary at 11.3% of word split points for Arabic test data (Section 4.3). | correction performance was identical to that of error- free chunking. | contrasting |
train_12733 | Goshtasby and Ehrich (1988) present a lexicon-free method based on probabilistic relaxation labeling. | they use the probabilities assigned to individual characters by the OCR system, which is not always available. | contrasting |
train_12734 | Let X be a random variable with distribution P_true(x), such that no direct observations of it exist. | we may have some indirect observations of X and have built several models of X, each parameterized by some parameter vector θ_i. | contrasting |
train_12735 | Intelligent language technologies capable of full semantic interpretation of domain-general text remain an elusive goal. | statistical advances have made it possible to address core pieces of the problem. | contrasting |
train_12736 | At least for the simplest model, these estimations do not vary with a larger corpus or one lacking in noise. | the increase in performance seen here for the more specific model, albeit small, may indicate that richer probability models may require more or cleaner training data. | contrasting |
train_12737 | A characteristic of genitives is that they are very productive, as the construction can be given various semantic interpretations. | in some situations, the number of interpretations can be reduced by employing world knowledge. | contrasting |
train_12738 | This includes the seminal paper of (Gildea and Jurafsky, 2002), Senseval 3 and CoNLL competitions on automatic labeling of semantic roles, detection of noun compound semantics (Lapata, 2000), (Rosario and Hearst, 2001) and many others. | not much work has been done to automatically interpret the genitive constructions. | contrasting |
train_12739 | The new boundary is more specific than the previous boundary and it is closer to the ideal boundary. | we do not know how well it behaves on unseen examples and we are looking for a boundary that classifies with a high accuracy the unseen examples. | contrasting |
train_12740 | But it can not group unseen features (features that do not occur in labeled data) into meaningful clusters since there are no class labels associated with these unseen features. | while given labeled data, unsupervised feature clustering method can not utilize class label information to guide feature clustering procedure. | contrasting |
train_12741 | Their system achieved 56.6% fine-grained score on the ELS task of SENSEVAL-3. | with their work, our data-driven method for feature clustering based WSD does not require external knowledge resource. | contrasting |
train_12742 | Standard measures for evaluating information retrieval results are precision and recall. | for QA several other specialized measures have been proposed, e.g. | contrasting |
train_12743 | We do not know how the ranking is done internally or how the output is influenced by parameter changes. | we can inspect and evaluate the output of the system. | contrasting |
train_12744 | For example, all RootPOS keywords are marked as required and therefore, the restrictions of RootPOS keywords are useless because they do not alter the query. | in other cases overlapping keyword type definitions do influence the query. | contrasting |
train_12745 | Noteworthy are the rather low word-error rates (20%) in the TREC evaluations, and that recognition errors did not lead to catastrophic failures due to redundancy of news segments and queries. | in our scenario, requirements are rather different. | contrasting |
train_12746 | The simplest phonetic recognizer is a regular recognizer with the vocabulary replaced by the list of phonemes of the language, and the language model replaced by a phoneme M-gram. | such a phonetic language model is much weaker than a word language model. | contrasting |
train_12747 | In both cases, the set of words to index is known. | indexing phoneme lattices is very different, because theoretically any phoneme string could be an indexing item. | contrasting |
train_12748 | Note that several incorrectly spelled words, including "equpment" itself, are given as candidate corrections. | the language model derived from the query logs assigns a low probability to the incorrect candidates. | contrasting |
train_12749 | In the case of a correctly spelled query, the most likely candidate correction is the word itself. | occasionally there is a correctly spelled but infrequent word within a small edit distance of another more common word. | contrasting |
train_12750 | This method is simple and relatively effective if a search engine returns a hit-list which contains a certain number of relevant documents in the upper part. | unless this assumption holds, it usually gives a worse ranking than the initial search. | contrasting |
train_12751 | The performance of query expansion or relevance feedback is usually evaluated on a residual collection where seen documents are removed. | we compare our method with pseudo feedback based ones, thus we do not use residual collection in the following experiments. | contrasting |
train_12752 | As for the number of training examples, performance of 20 and 50 does not differ so much in all the number of expansion terms. | performance of 100 is clearly worse than of 20 and 50. | contrasting |
train_12753 | In TREC-8, the above two methods use TREC1-5 disks for query expansion and a phrase extraction technique. | we do not adopt these methods in our experiments. | contrasting |
train_12754 | The comparisons can be characterized by a moving window, where successive overlapping comparisons are advanced by one unit of text. | hearst (1994, 1997) and Foltz et al. | contrasting |
train_12755 | For example, when measuring comprehension, use the unit of the sentence, as opposed to the more standard unit of the proposition, because LSA is most correlated with comprehension at that level. | when using LSA to segment text, Foltz et al. | contrasting |
train_12756 | Clearly this problem would be trivial for a cue phrase based approach, which could learn the finite set of problem introductions. | the current lexical approach does not have this luxury: words in the problem statements recur throughout the following dialogue. | contrasting |
train_12757 | Given the smaller number of test cases, 22, this F-measure of .22 is not significantly different from .17. | the Foltz method is significantly higher than both of these, p < .10. | contrasting |
train_12758 | Depending on the application, segments may be tokens, phrases, or sentences. | in this paper we primarily focus on segmenting sentences into tokens. | contrasting |
train_12759 | At first sight, it might appear that we have made the segmentation problem intractably harder by turning it into a classification problem with a number of labels exponential in the length of the instance. | we can bound the number of labels under consideration and take advantage of the structure of labels to find the k most likely labels efficiently. | contrasting |
train_12760 | Sequential tagging with the BIO tag set has proven quite accurate for shallow parsing and named entity extraction tasks (Kudo and Matsumoto, 2001;Sha and Pereira, 2003;Tjong Kim Sang and De Meulder, 2003). | this approach can only identify non-overlapping, contiguous segments. | contrasting |
train_12761 | The learning algorithm in Section 3.1 seeks a separator through the origin; our experimental results suggest, though, that this tends to favor precision at the expense of recall. | at test time we can use a separation threshold different from zero. | contrasting |
train_12762 | Natural spoken language is often regarded as the obvious choice for a human-computer interface. | despite significant research efforts in automatic speech recognition (ASR) (Huang et al., 2001), existing ASR systems are still not sufficiently robust to a wide variety of speaking conditions, noise, accented speakers, etc. | contrasting |
train_12763 | Its computation is straightforward if the question classifies the document set in a completely disjointed manner. | the retrieved documents may belong to two or more categories for some questions, or may not belong to any category. | contrasting |
train_12764 | The system can generate question to which the user's query corresponds using this metadata. | some documents are related with multiple versions, or may not belong to any category. | contrasting |
train_12765 | All of the results so far have used the learned policy for the system interacting with the corresponding policy that was learned for the user. | there is no guarantee that a real user will behave like the learned policy. | contrasting |
train_12766 | The two main handtagged corpora are PropBank (Palmer et al., 2003) and FrameNet (Baker et al., 1998), the former of which currently has broader coverage. | even PropBank, which is based on the 1M word WSJ section of the Penn Treebank, is insufficient in quantity and genre to exhibit many things. | contrasting |
train_12767 | This work bears some similarity to the substantial literature on automatic subcategorization frame acquisition (see, e.g., Manning (1993), Briscoe and Carroll (1997), and Korhonen (2002)). | that research is focused on acquiring verbs' syntactic behavior, and we are focused on the acquisition of verbs' linking behavior. | contrasting |
train_12768 | Additionally, as noted earlier, Excite questions are often ungrammatical and make parsing less likely to succeed. | the baseline system, by definition, does not output semantic representations, so that its outcome is of little use for further reasoning, as required by question answering or general information extraction systems. | contrasting |
train_12769 | A trivial case is when both sentences are identical, word for word. | paraphrases often employ different words or syntactic structures to express the same concept. | contrasting |
train_12770 | Consider a filter that simply predicts that every sentence is incorrectly parsed: it would have an overall accuracy of 55% on our WSJ corpus, not too much worse than WOODWARD's classification accuracy of 66% on this data. | such a filter would be useless because it filters out every correctly parsed sentence. | contrasting |
train_12771 | It is generally accepted, however, that WordNet senses are far too fine-grained (Agirre and Lopez de Lacalle Lekuona (2003) and citations therein). | published thesauri, such as Roget's and Macquarie, group near-synonymous and semantically related words into a relatively small number of categories-typically between 800 and 1100-that roughly correspond to very coarse concepts or senses (Yarowsky, 1992). | contrasting |
train_12772 | A distributional measure of concept-distance can be used to populate a small 812 × 812 concept-concept distance matrix where a cell m_ij, pertaining to concepts c_i and c_j, contains the distance between the two concepts. | a word-word distance matrix for a conservative vocabulary of 100,000 word types will have a size 100,000 × 100,000, and a WordNet-based concept-concept distance matrix will have a size 75,000 × 75,000 just for nouns. | contrasting |
train_12773 | As correction ratio is determined by the product of a number of ratios, each evaluating the various stages of malapropism correction (identifying suspects, raising alarms, and applying the correction), we believe it is a better indicator of overall performance than correction performance, which is a not-so-elegant product of an F-score and accuracy. | no matter which of the two is chosen as the bottom-line performance statistic, the results show that the newly proposed distributional concept-distance measures are clearly superior to word-distance measures. | contrasting |
train_12774 | We expect that the WordNet-based measures will perform poorly when other parts of speech are involved, as those hierarchies of WordNet are not as extensively developed. | our DPC-based measures do not rely on any hierarchies (even if they exist in a thesaurus) but on sets of words that unambiguously represent each sense. | contrasting |
train_12775 | Both DPW- and WordNet-based measures have large space and time requirements for precomputing and storing all possible distance values for a language. | by using the categories of a thesaurus as very coarse concepts, precomputing and storing all possible distance values for our DPC-based measures requires a matrix of size only about 800 × 800. | contrasting |
train_12776 | When using a phrase-based translation model, one can easily extract the phrase pair (THE MUTUAL; the mutual) and use it during the phrase-based model estimation phase and in decoding. | within the xRS transducer framework that we use, it is impossible to extract an equivalent syntactified phrase translation rule that subsumes the same phrase pair because valid xRS translation rules cannot be multiheaded. | contrasting |
train_12777 | Because the decoder only proposes phrase translations that are in the phrasetable (i.e., that have non-zero count), it never requires estimates for pairs (s,t) having c(s,t) = 0. | probability mass is reserved for the set of unseen translations, implying that probability mass is subtracted from the seen translations. | contrasting |
train_12778 | Smoothing relative frequencies with an additional Zens-Ney phrasetable gives about the same gain as Kneser-Ney smoothing on its own. | combining Kneser-Ney with Zens-Ney gives a clear gain over any other method (statistically significant for all language pairs except en→es and en→de) demonstrating that these approaches are complementary. | contrasting |
train_12779 | We hypothesize that this correlation captures the student involvement in the tutoring process: more involved students will try harder thus expressing more certainty or uncertainty. | less involved students will have fewer certain/uncertain/mixed turns and, in consequence, more neutral turns. | contrasting |
train_12780 | With a smaller training set the proportion of new entities is far too small to be of use. | as said, the overall final accuracy of 85.5% (see Table 7) does not significantly improve over the baseline. | contrasting |
train_12781 | We did not investigate the influence of treebank refinement in this study. | we would like to note that by a combination of suffix analysis and smoothing, Dubey (2005) was able to obtain an F-score of 85.2 for Negra. | contrasting |
train_12782 | Furthermore, end-to-end systems like speech recognizers (Roark et al., 2004) and automatic translators (Och, 2003) use increasingly sophisticated discriminative models, which generalize well to new data that is drawn from the same distribution as the training data. | in many situations we may have a source domain with plentiful labeled training data, but we need to process material from a target domain with a different distribution from the source domain and no labeled data. | contrasting |
train_12783 | It provides a discriminative online learning algorithm which when combined with a rich feature set reaches state-of-the-art performance across multiple languages. | within this framework one can only define features over single attachment decisions. | contrasting |
train_12784 | (2005b) use the Chu-Liu-Edmonds (CLE) algorithm to solve the maximum spanning tree problem. | global constraints cannot be incorporated into the CLE algorithm (McDonald et al., 2005b). | contrasting |
train_12785 | They are based on the concept of eliminating subtours (cycles), cuts (disconnections) or requiring intervertex flows (paths). | in practice these formulations cause long solve times, as the first two methods yield an exponential number of constraints. | contrasting |
train_12786 | For the plain MST problem it is sufficient to set k = 1 and only take the best scoring label for each token pair. | if we want a constraint which forbids duplicate subjects we need to provide additional labels to fall back on. | contrasting |
train_12787 | The best performing system (note: this system is different to our baseline) achieves 79.2% labelled accuracy while our baseline system achieves 78.6% and our constrained version 79.8%. | a significant difference is only observed between our baseline and our constraint-based system. | contrasting |
train_12788 | We see that solve time can be reduced by 80% while only losing a marginal amount of accuracy when we set q to 10. | we are unable to reach the 20 seconds solve time of the CLE algorithm. | contrasting |
train_12789 | Such aspectual distinctions have been alive and well in the linguistic literature since at least the late 60s (Vendler, 1967). | the use of the term event in natural language processing work has often diverged quite considerably from this linguistic notion. | contrasting |
train_12790 | Because our approximation of Evita uses a feature-based statistical machine learning algorithm instead of the rule-based Evita algorithm, it cannot predict how well Evita would perform if it had not used the same data for training and testing. | it can give us an approximation of how well a model can perform using information similar to that of Evita. | contrasting |
train_12791 | The implemented parsers of models 1 and 2 were around four times faster than the previous model without a loss of accuracy. | what surprised us is not the speed of the models, but the fact that they were as accurate as the previous model, though they do not use any phrase-structure-based probabilities. | contrasting |
train_12792 | Segment Ordering We introduce a new method for learning temporal ordering. | to existing methods that focus on pairwise ordering, we explore strategies for global temporal inference. | contrasting |
train_12793 | An abdominal examination revealed a soft systolic bruit... and a neurologic examination was normal... order between the two events is consistent with our interpretation of the text, therefore we cannot determine the precedence relation between the segments S5 and S7. | to many existing temporal representations (Allen, 1984;Pustejovsky et al., 2003), TDAG is a coarse annotation scheme: it does not capture interval overlap and distinguishes only a subset of commonly used ordering relations. | contrasting |
train_12794 | For instance, the presence of the temporal anchor last year indicates the lack of temporal continuity between the current and the previous sentence. | many of these predictors are heavily context-dependent and, thus, cannot be considered independently. | contrasting |
train_12795 | In fact, this method is commonly used in event ordering (Mani et al., 2003;Lapata and Lascarides, 2004;Boguraev and Ando, 2005). | many segment pairs lack temporal markers and other explicit cues for ordering. | contrasting |
train_12796 | Programming (ILP) We can cast the task of constructing a globally optimal TDAG as an optimization problem. | to the previous approaches, the method is not greedy. | contrasting |
train_12797 | It is also strictly domain-dependent and hence difficult to adapt to new domains. | although addressing such drawbacks associated with knowledge-based approaches, the latter approach often suffers from the data sparseness problem and hence needs a fully annotated corpus in order to reliably estimate an accurate model. | contrasting |
train_12798 | We currently do not apply any special filters to remove non-verbal sounds or background noise (other than laughs) that overlap with speaker turns. | if artificial laughs overlap with a speaker turn (there were only few such instances), the speaker turn is chopped by marking a turn boundary exactly before/after the laughs begin/end. | contrasting |
train_12799 | This can also be thought of as squeezing together the four outside corners, creating a new cell whose probability is estimated using IBM Model 1. | the inside Viterbi alignment satisfies the ITG constraint, implying only one solid cell in each column and each row. | contrasting |
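Each data row above follows the same four-column, pipe-delimited layout. A minimal sketch of reading such a row in Python; the helper name `parse_row` is illustrative and not part of any dataset tooling:

```python
# Parse one pipe-delimited row of the table above into a record
# keyed by column name. Assumes sentences contain no '|' characters,
# which holds for the rows shown here.

FIELDS = ("id", "sentence1", "sentence2", "label")

def parse_row(line: str) -> dict:
    """Split an 'id | sentence1 | sentence2 | label |' row into a dict."""
    # Drop trailing whitespace and the closing '|', then split on pipes.
    cells = [cell.strip() for cell in line.strip().rstrip("|").split("|")]
    if len(cells) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} cells, got {len(cells)}")
    return dict(zip(FIELDS, cells))

row = parse_row(
    "train_12700 | Traditional question answering systems adopt the framework "
    "of parsing questions, searching for relevant documents, and then "
    "pinpointing and generating answers. | this framework includes potential "
    "dangers. | contrasting |"
)
print(row["id"], row["label"])  # → train_12700 contrasting
```

Note that sentence2 is stored lowercase by design: the contrast marker (e.g. "However,") that originally opened it has been stripped during dataset construction, so a consumer should not re-capitalize it blindly.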