id (string, length 7-12) | sentence1 (string, length 6-1.27k) | sentence2 (string, length 6-926) | label (string, 4 classes) |
---|---|---|---|
train_10200 | Our results show that the similarity-based smoothing of frequency estimates significantly improves an already respectable probabilistic PP attachment model. | our hypothesis that a task-specific thesaurus would outperform a generic thesaurus was not borne out by our experiments. | contrasting |
train_10201 | Semantic similarity measures have focused on individual word senses. | in many applications, it may be informative to compare the overall sense distributions for two different contexts. | contrasting |
train_10202 | For both the original and modified , SPD has the same value because we are moving a total probability mass of 0.5 from E and F to B, with the same semantic distance (since E and F are at the same level in the tree). | we consider that, at the node B subtree, . | contrasting |
train_10203 | In the presence of more noise, her method performs quite well in many cases; it is best or tied for best on the development verbs, medium frequency (70%) and on the test verbs, all verbs (67%), high frequency (80%), and medium frequency (80%). | it does not do well on low frequency verbs at all (below chance at 40%). | contrasting |
train_10204 | Stemming from this analysis, a possible refinement to separating the frequency bands is to use a different classifier in each frequency band, then combine their performance. | we observe that the best SPD performer in one frequency band tends to be the best performer in other bands (development: SPD without entropy, w h i k j P h ; test: SPD without entropy, w § ¦ © ). | contrasting |
train_10205 | All systems captured some kind of clause information through feature codification. | some of the systems restrict the search for arguments only to the immediate clause (Williams et al., 2004) and others use the clause hierarchy to guide the exploration of the sentence (Lim et al., 2004; Carreras et al., 2004). | contrasting |
train_10206 | All studies of semantic role labelling we are aware of have used constituents as instances for classification. | constituents are not available in the shallow syntactic information provided by this task. | contrasting |
train_10207 | In the second change, it is possible to miss some information in cases where the semantic chunks do not align with the sequence of BPs. | in Section 3.2 we show that the loss in performance due to the misalignment is much less than the gain in performance that can be achieved by the change in representation. | contrasting |
train_10208 | (2003), who apply the IOB2 tagging scheme in their word-by-word models, as shown in the second row of Figure 1. | two aspects of the problem at hand make this tag assignment difficult to use for TBL. | contrasting |
train_10209 | Gildea (2002) proposed a probabilistic discriminative model to assign semantic roles to the constituent. | it needs a complex interpolation for smoothing because of the data sparseness problem. | contrasting |
train_10210 | A semantic role label (r_i) is represented by using a BIO notation such as B-A*, I-A*, etc. | o occurs far more frequently than other semantic role labels, so it can have a somewhat higher probability than others. | contrasting |
train_10211 | synset codes (Smeaton, 1999;Sussna, 1993;Voorhees, 1993;Voorhees, 1994;Moschitti and Basili, 2004). | even the state-of-the-art methods for WSD did not improve the accuracy because of the inherent noise introduced by the disambiguation mistakes. | contrasting |
train_10212 | In general, the use of 10 disjoint training/testing samples produces a higher variability than the n-fold cross validation which insists on the same document set. | this does not affect the t-student confidence test over the differences between the MicroAverage of SK and bow since the former has a higher accuracy at 99% confidence level. | contrasting |
train_10213 | A WN-based semantic similarity function between noun pairs is used to improve indexing and document-query matching. | the WSD algorithm had a performance ranging between 60-70%, and this made the overall semantic similarity not effective. | contrasting |
train_10214 | On the same corpus, SCISSOR obtains 91.5% precision and 72.3% recall. | the figures are not comparable. | contrasting |
train_10215 | The string-based version of SILT uses no syntactic information while the tree-based version generates a syntactic parse first and then transforms it into an MR. | SCISSOR integrates syntactic and semantic processing, allowing each to constrain and inform the other. | contrasting |
train_10216 | Other measures of word association are possible, such as mutual information (MI), which we can use with the dependency and the adjacency models, similarly to #, χ² or Pr. | in our experiments, χ² worked better than other methods; this is not surprising, as χ² is known to outperform MI as a measure of association (Yang and Pedersen, 1997). | contrasting |
train_10217 | Recent work on the problem of detecting synonymy through corpus analysis has used the Test of English as a Foreign Language (TOEFL) as a benchmark. | this test involves as few as 80 questions, prompting questions regarding the statistical significance of reported results. | contrasting |
train_10218 | It has been shown that measures based on the pointwise mutual information (PMI) between question words yield good results on the TOEFL (Turney, 2001;Terra and Clarke, 2003). | ehlert (2003) shows convincingly that, for a fixed amount of data, the distributional model performs better than what we might call the pointwise co-occurrence model. | contrasting |
train_10219 | If cen_i is small, so that the n_i-th occurrence of the term is near the end of the document, then it is not surprising that w_{i,n_i+1} is censored. | if cen_i is large, so the n_i-th occurrence is far from the end of the document, then either it is surprising that the term did not re-occur, or it suggests the term is rare. | contrasting |
train_10220 | E.g., labels are not taken into account when measuring the quality of the partition. | in many cases, supervision is used at the application level when determining an appropriate distance metric (e.g., (Lee, 1997;Weeds et al., 2004;Bilenko et al., 2003) and more). | contrasting |
train_10221 | A good metric is one in which close proximity correlates well with the likelihood of being in the same class. | when applying clustering to some task, people typically decide on the clustering quality measure q_S(h) they want to optimize, and then choose a specific clustering algorithm A and a distance metric d to generate a 'good' partition function h. It is clear that without any supervision, the resulting function is not guaranteed to agree with the target function p (or one's original intention). | contrasting |
train_10222 | Starting from uniform parameters it climbs from a 40% baseline to a 60% accurate model. | the initializer can do slightly better with precise but sparse gender/number information alone. | contrasting |
train_10223 | We follow IBM Model 1 (Brown et al., 1993) and assume that each word in an utterance is generated by exactly one role in the parallel frame. Using standard EM to learn the role to word mapping is only sufficient if one knows to which level in the tree the utterance should be mapped. | because of the vertical ambiguity inherent in intentional actions, we do not know in advance which is the correct utterance-to-level mapping. | contrasting |
train_10224 | Given our model, there exists the possibility of employing intrinsic measures of success, such as word alignment accuracy. | we choose to measure the success of learning by examining the related (and more natural) task of language understanding. | contrasting |
train_10225 | The use of statistical methods in computational linguistics has produced advances in tasks such as parsing, information retrieval, and machine translation. | most of the successful work to date has used supervised learning techniques. | contrasting |
train_10226 | Not surprisingly, the categorical parser does not perform as well as the supervised statistical parser: only 92.7% of German words and 94.9% of English words (85.7% and 86.8%, respectively, of multisyllabic words) are syllabified correctly. | a more important result of parsing the corpus using the categorical parser is that its output can be used to define a model class (i.e., a set of PCFG rules) from which a model can be learned using EM. | contrasting |
train_10227 | This is because the categorical parser produces a wider range of onsets and codas than there are in the true parses. | the induced model is not a superset of the supervised model. | contrasting |
train_10228 | Ultimately, of course, we will want to be able to capture not only the main effects in the data, but some of the subtler effects as well. | we believe that the way to do this is not by introducing completely free parameters, but by using a Bayesian prior that would enforce a degree of similarity between certain parameters. | contrasting |
train_10229 | Supervised training of named entity recognition (NER) systems requires large amounts of manually annotated data. | human annotation is typically costly and time-consuming. | contrasting |
train_10230 | These experiments prove the utility of selective sampling and suggest that parameters for a new domain can be optimised in another domain for which annotated data is already available. | there are some provisos for active learning. | contrasting |
train_10231 | What is more, we speculate that annotators might have an even higher error rate on the supposedly more informative, but possibly also more difficult examples. | this would not be reflected in the carefully annotated and verified examples of a gold standard corpus. | contrasting |
train_10232 | So far, the behaviour we have observed is what you would expect from selective sampling; there is a marked improvement in terms of cost and error rate reduction over random sampling. | selective sampling raises questions of cognitive load and the quality of annotation. | contrasting |
train_10233 | This demonstrates clearly that tokens with higher KL-divergence have lower inter-annotator agreement. | as discussed in sections 2.3 and 2.4, we decided on sentences as the preferred annotation level. | contrasting |
train_10234 | The tagger follows the CoNLL-2003 task setting (Tjong Kim Sang and De Meulder, 2003), and thus is not developed with WSJ data. | we allowed its use because there is no available named entity recognizer developed with WSJ data. | contrasting |
train_10235 | Our model aims to capture such dependencies among the labels of nodes in a syntactic parse tree. | building such a model is computationally expensive. | contrasting |
train_10236 | The predicate widen shares the trade gap with expect as an A1 argument. | as expect is a raising verb, widen's subject is not in its typical position either, and we should expect to find it in the same positions as expected's subject. | contrasting |
train_10237 | We should finally specify the distributions of the s_i. | we make the simplifying assumption that their distribution is flat (noninformative). | contrasting |
train_10238 | This heuristic works well with the correct parse trees. | one of the errors by automatic parsers is due to incorrect PP attachment leading to missing arguments. | contrasting |
train_10239 | From the initial 4,683,777 nodes (of sections 02-21), the heuristic removed 1,503,100 nodes with a loss of 2.6% of the total arguments. | as we started the experiments in late, we used only the 992,819 nodes from the sections 02-08. | contrasting |
train_10240 | Most current semantic role labeling (SRL) approaches can be classified in one of two classes: approaches that take advantage of complete syntactic analysis of text, pioneered by (Gildea and Jurafsky, 2002), and approaches that use partial syntactic analysis, championed by the previous CoNLL shared task evaluations . | to the authors' knowledge, a clear analysis of the benefits of using full syntactic analysis versus partial analysis is not yet available. | contrasting |
train_10241 | There is a significant gap both between parse-F1-reranked trees and SRL-F1-reranked trees, which shows promise for joint reranking. | the gap between SRL-F1-reranked trees and gold parse trees indicates that reranking of parse lists cannot by itself completely close the gap in SRL performance between gold and predicted parse trees. | contrasting |
train_10242 | (2005), the boundary agreement of Charniak is higher than that of Collins; therefore, we choose the Charniak parser's results. | there are two million nodes on the full parsing trees in the training corpus, which makes the training time of machine learning algorithms extremely long. | contrasting |
train_10243 | A parser therefore is trained on this new corpus and should be able to serve as an SRL system at the same time as predicting a parse. | this ideal approach is not feasible. | contrasting |
train_10244 | These new parameters all have the same values as their associated unknown words, so the probability distribution specified by the model does not change. | when a kernel is defined with this reparameterized model, the kernel's feature extractor includes features specific to these words, so the training of a large margin classifier can exploit differences between these words in the target domain. | contrasting |
train_10245 | Once we have the trained parsing model, our proposed porting method proceeds the same way in this scenario as in transferring. | because the original training set already includes the vocabulary from the target domain, the reparameterization approach defined in the preceding section is not necessary so we do not perform it. | contrasting |
train_10246 | For this reason we do not run experiments on the task considered in (Gildea, 2001) and (Roark and Bacchiani, 2003), where they are porting from the restricted domain of the WSJ corpus to the more varied domain of the Brown corpus as a whole. | to help emphasize the success of our proposed porting method, it is relevant to show that even our baseline models are performing better than this previous work on parser portability. | contrasting |
train_10247 | They propose to use the statistics from a source domain to define priors over weights. | in their experiments they used only trivial sub-cases of this approach, namely, count merging and model interpolation. | contrasting |
train_10248 | The EM algorithm is guaranteed to continuously increase the likelihood on the training set until convergence to a local maximum. | the likelihood on unseen data will start decreasing after a number of iterations, due to overfitting. | contrasting |
train_10249 | When we learn a single markovized PCFG from the treebank, that grammar gives a likelihood ratio of only 61. | when we train with a hierarchical model composed of a shared grammar and four individual grammars, we find that the grammar likelihood ratio for these rules goes up to 126, which is very similar to that of the empirical ratio. | contrasting |
train_10250 | Ignoring the non-crossing condition, the constraint set is exact. | because of the non-crossing condition, the constraint set is more restrictive than necessary. | contrasting |
train_10251 | Therefore, we wish to maintain them. | rather than impose the constraints exactly, we enforce them approximately through the introduction of slack variables . | contrasting |
train_10252 | It seems that the global loss model may have been over-regularized (Table 3). | we have picked the t parameter which gave us the best results in our experiments. | contrasting |
train_10253 | Finally, we compared our results to the probabilistic parsing approach of (Wang et al., 2005), which on this data obtained accuracies of 0.7631 on the CTB test set and 0.6104 on the development set. | we are using a much simpler feature set here. | contrasting |
train_10254 | However, such statistics are typically gathered on a case-by-case basis, and no reliable procedure exists to automatically identify constructions. | in computational linguistics, many automatic procedures are studied for identifying MWEs (Sag et al., 2002) -with varying success -but here they are treated as exceptions: identifying multi-word expressions is a pre-processing step, where typically adjacent words are grouped together after which the usual procedures for syntactic or semantic analysis can be applied. | contrasting |
train_10255 | For example, "corruption" is referred to as a "tool" in the actual corpus anaphora, a metaphoric usage that would be difficult to predict unless given the usage sentence and its context. | a human agreement of 79% indicates that such instances are relatively rare and the task of predicting a definite anaphor without its context is viable. | contrasting |
train_10256 | In such cases, such as in example 2 from Table 1, answer can be successfully replaced with reply yielding a substitution which conveys the original meaning. | in situations such as in example 1 the word answer is in the sense of a general solution and cannot be replaced with reply. | contrasting |
train_10257 | Furthermore, modeling the a priori substitution likelihood captures the majority of cases in the evaluated setting, mostly because Word-Net provides a rather noisy set of substitution candidates. | successfully incorporating local and global contextual information, as similar to WSD methods, remains a challenging task for future research. | contrasting |
train_10258 | In addition, we can see that the best setting for γ is somewhere around γ = 4,000. | in this experiment, we could only test up to 1,000 sentences due to the cost of SVM training, which were where L is the number of training examples, regardless of the use of the speed-up method (Kazama and Torisawa, 2005), we can observe that the WMOLT kernel achieves a high accuracy even when the training data is very small. | contrasting |
train_10259 | Depending on the types of verb classes to be induced, the automatic approaches vary their choice of verbs and classification/clustering algorithm. | another central parameter for the automatic induction of semantic verb classes is the selection of verb features. | contrasting |
train_10260 | The adverbial features outperform the frame-based features in any clustering. | none of the differences between the frame-based clusterings and the grammar-based clusterings are significant (χ², df = 1, α = 0.05). | contrasting |
train_10261 | Especially the all results demonstrate once more the missing correlation between association/feature overlap and clustering results. | it is interesting that the clusterings based on window co-occurrence are not significantly worse (and in some cases even better) than the clusterings based on selected grammar-based functions. | contrasting |
train_10262 | The associations therefore did not help in the specific choice of corpus-based features, as we had hoped. | the assumption that window-based features do contribute to semantic verb classes -this assumption came out of an analysis of the associations -was confirmed: simple window-based features were not significantly worse (and in some cases even better) than selected grammar-based functions. | contrasting |
train_10263 | One problem is that there are so many words and so many senses that it is hard to make available a sufficient number of labeled training examples for each of a large number of target words. | this indicates that the total number of available labeled examples (irrespective of target words) can be relatively large. | contrasting |
train_10264 | Since a "bank" as in "money bank" and a "save" as in "saving money" may occur in similar global contexts, certain global context features effective for recognizing the "money bank" sense may be also effective for disambiguating "save", and vice versa. | with respect to the position-sensitive local context features, these two disambiguation problems may not have much in common since, for instance, we sometimes say "the bank announced", but we rarely say "the save announced". | contrasting |
train_10265 | As in the above example of "bank" (noun) and "save" (verb), the predictive structure of global context features may be shared by the problems irrespective of the parts of speech of the target words. | the other types of features may be more dependent on the target word part of speech. | contrasting |
train_10266 | It may be evident that had we only the sentence Investors suffered heavy losses in our corpus, there would be no difference in probability between the five parse trees in figure 1, and U-DOP would not be able to distinguish between the different trees. | if we have a different sentence where JJ NNS (heavy losses) appears in a different context, e.g. | contrasting |
train_10267 | On the one hand, fitting exactly to the training data may lead to overfitting. | dismissing true properties of the data as sampling bias in the training data will result in low accuracy. | contrasting |
train_10268 | This has the advantage of easing the data-sparsity issues described above because infrequent sequences are clustered into more frequent non-terminal symbols. | in incremental systems, constituents are compared directly, which can lead to a bias towards shorter constituents. | contrasting |
train_10269 | One method for achieving such an internal relationship might be to attach contexts to the expressions with which they co-occur, and we propose using such a method here. | this requires that we have some criterion for deciding when and how expressions should be attached to their contexts. | contrasting |
train_10270 | They claim that local statistics, effectively n-grams, can be sufficient to indicate to the learner which alternative should be preferred. | this argument has been carefully rebutted by (Kam et al., 2005), who show that this argument relies purely on a phonological coincidence in English. | contrasting |
train_10271 | First of all, observe that syntactic congruence is a purely language theoretic notion that makes no reference to the grammatical representation of the language, but only to the set of strings that occur in it. | there is an obvious problem: syntactic congruence tells us something very useful about the language, but all we can observe is weak substitutability. | contrasting |
train_10272 | These models impressively manage to extract significant structure from raw data. | for our purposes, neither of these models is suitable. | contrasting |
train_10273 | In general gazetteers are thought to provide a useful source of external knowledge that is helpful when an entity cannot be identified from knowledge contained solely within the data set used for training. | some research has questioned the usefulness of gazetteers (Krupka and Hausman, 1998). | contrasting |
train_10274 | In each case, the standard+g model outperforms the standard model at a significance level of p < 0.02. | these results camouflage the fact that the gazetteer features introduce some negative effects, which we explore in the next section. | contrasting |
train_10275 | The simplest pruning method is to set a count threshold c below which transitions are removed. | this is a poor method. | contrasting |
train_10276 | One way to handle this problem is to build a language model of content tokens and retain only the maximum likelihood token sequence. | in the current work, the following heuristic which worked well in practice is used. | contrasting |
train_10277 | The downside of supervised learning is expensive training data. | massive amounts of unlabeled data are readily available. | contrasting |
train_10278 | (2005) use language-specific information (for example, chunks). | the method presented here is language independent. | contrasting |
train_10279 | See Section 6 for detailed results. | variations are consistent enough to allow us to draw some general conclusions. | contrasting |
train_10280 | For this shared task we decided to ignore any additional relations. | the data format could easily be extended with additional optional columns in the future. | contrasting |
train_10281 | It has the smallest training set. | its average as well as top score far exceed those for Arabic and Turkish, which are larger. | contrasting |
train_10282 | Tests on the PDT (Böhmovà et al., 2003) show that the added actions are sufficient to handle all cases of non-projectivity. | since the cases of non-projectivity are quite rare in the corpus, the general learner is not supplied enough of them to learn how to classify them accurately, hence it may be worthwhile to exploit a second classifier trained specifically in handling nonprojective situations. | contrasting |
train_10283 | In principle, one pass through the dependency array would suffice to parse a sentence. | due to linguistic constraints like uniqueness principle, barrier tags and "full" heads, some words may be left unattached or create conflicts for their heads. | contrasting |
train_10284 | As a general rule, non-projective arcs were only allowed if no other, projective head could be found for a given word. | linguistic knowledge suggests that non-projective arcs should be particularly likely in connection with verb-chain-dependencies, where subjects attach to the finite verb, but objects to the non-finite verb, which can create crossing arcs in the case of object fronting, chain inversion etc. | contrasting |
train_10285 | We saw the same relation in this evaluation: for Turkish, Arabic, and Slovene, languages with limited number of training sentences, our system obtains accuracies below 70%. | one can not argue that the training size is the only cause of errors: Czech has the largest training set, and our accuracy is also below 70%. | contrasting |
train_10286 | The pure Wait action was suggested in (Yamada and Matsumoto, 2003). | here we come up with these five actions by separating actions Left into (real) Left and WaitLeft, and Right into (real) Right and WaitRight. | contrasting |
train_10287 | The advantage of a pipeline model is that it can use more information that is taken from the outcomes of previous prediction. | this may result in accumulating error. | contrasting |
train_10288 | It is expected that we selected the FORM and CPOSTAG of each node as features in the preceding work. | the POSTAG is also a useful feature for Chinese, and we grouped the original POS tags of Sinica Treebank from 303 to 54 in our preceding work. | contrasting |
train_10289 | For example, there were no features that considered subject-verb agreement nor agreement of an adjective with the number or lexical gender of the noun it modified. | it is possible that morphological information influenced the training of edge weights if the information was implicit in the POS tags. | contrasting |
train_10290 | As with the regular shift-reduce, it uses a stack S and a list of input words W . | instead of finding constituents, it builds a set of arcs G representing the graph of dependencies. | contrasting |
train_10291 | With the availability of resources such as the Penn WSJ Treebank, much of the focus in the parsing community had been on producing syntactic representations based on phrase-structure. | recently there has been a revived interest in parsing models that produce dependency graph representations of sentences, which model words and their arguments through directed edges (Hudson, 1984; Mel'čuk, 1988). | contrasting |
train_10292 | Ideally one would like to make all parsing and labeling decisions jointly so that the shared knowledge of both decisions will help resolve any ambiguities. | the parser is fundamentally limited by the scope of local factorizations that make inference tractable. | contrasting |
train_10293 | In our case this means we are forced only to consider features over single edges or pairs of edges. | in a two stage system we can incorporate features over the entire output of the unlabeled parser since that structure is fixed as input. | contrasting |
train_10294 | Table 2 shows that each component of our system does not change performance significantly (rows 2-4 versus row 1). | if we only allow projective parses, do not use morphological features and label edges with a simple atomic classifier, the overall drop in performance becomes significant (row 5 versus row 1). | contrasting |
train_10295 | Another source of potential error is that the average sentence length of Arabic is much higher than other languages (around 37 words/sentence). | if we only look at performance for sentences of length less than 30, the labeled accuracy is still only 71%. | contrasting |
train_10296 | This framework is efficient for both projective and non-projective parsing and provides an online learning algorithm which combined with a rich feature set creates state-of-the-art performance across multiple languages (McDonald and Pereira, 2006). | mcDonald and Pereira (2006) mention the restrictive nature of this parsing algorithm. | contrasting |
train_10297 | Thirdly, inference timed out (Chinese) and fourthly, constraints were not violated that often in the first place (Japanese). | the effect of the first problem might be reduced by training with a higher k. The second problem could partly be overcome by using a better tagger or by a special treatment within the constraint handling for word types which are likely to be mistagged. | contrasting |
train_10298 | At the third phase, the post-processor (which is another learner) recognizes the still un-parsed words. | in this paper, we aim to build a multilingual portable parsing model without employing deep language-specific knowledge, such as lemmatization, morphologic analyzer etc. | contrasting |
train_10299 | We build a classifier, which learns to find root word based on encoding context and children features. | most of the dependency relations were constructed at the first stage. | contrasting |
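
The preview above follows a simple four-column schema: a string id, a sentence pair, and one of four label classes (only "contrasting" is visible in these rows). Below is a minimal sketch of working with this structure, assuming the split has been exported to a hypothetical CSV file named train.csv with exactly these column names:

```python
import pandas as pd

# Minimal sketch: "train.csv" is a hypothetical export of the split shown
# above, with the columns id, sentence1, sentence2, and label.
df = pd.read_csv("train.csv")

# label is declared as a 4-class string column; this shows how the classes
# are distributed (only "contrasting" appears in the preview rows).
print(df["label"].value_counts())

# Inspect one contrasting sentence pair.
row = df[df["label"] == "contrasting"].iloc[0]
print(row["id"], row["sentence1"], row["sentence2"], sep="\n")
```

pandas is used here only because the data is tabular; any CSV reader would do, since nothing above depends on a particular loading library.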