id (stringlengths 7–12) | sentence1 (stringlengths 6–1.27k) | sentence2 (stringlengths 6–926) | label (stringclasses, 4 values) |
---|---|---|---|
train_6300 | In this experiment we follow McDonald and Brew's (2004) methodology in simulating semantic priming. | because our primary focus is on the representation of the semantic space, we do not adopt their incremental model of semantic processing. | contrasting |
train_6301 | In theory, an automatically acquired sense ranker should have a good accuracy on all ambiguous words in order to do well on WSD. | in practice the sense ranker's performance depends crucially on its ability to correctly predict the first sense for highly frequent and highly ambiguous words. | contrasting |
train_6302 | This result indicates that the model performs well when trained on a small corpus and that its good performance cannot be attributed solely to corpus size. | it also suggests that a large increase in corpus size is necessary to obtain substantial improvements with the present sense ranking strategy, which uses distributional similarity as a corrective for taxonomy-based similarity: Accuracy increases by approximately 4% when our corpus size increases by a factor of 20. | contrasting |
train_6303 | If two items are added to a cell that are equivalent except for their weights or back-pointers, then they are merged (in the MT decoding literature, this is also known as hypothesis recombination), with the merged item taking its weight and back-pointers from the better of the two equivalent items. | (if we are interested in finding the k-best derivations, the merged item gets the multiset of all the tuples of back-pointers from the equivalent items. | contrasting |
train_6304 | No one algorithm can cater to all types of settings. | our data do suggest quite strongly that, at least in the situation in which our subjects found themselves, a law of diminishing returns is in operation. | contrasting |
train_6305 | Her long involvement with IR arose (as someone who subsisted for an inordinately long time on soft money) by the need to find a new line of research in the aftermath of the ALPAC Report and the subsequent difficulties in getting machine translation work funded. | she was always well qualified to work in IR, a topic addressed in her very early publications (see, for example, Masterman, Needham, and Spärck Jones 1958), although in a rather different context to her later work. | contrasting |
train_6306 | gests that the proposed MLE is a big improvement over the MF baseline for twoway associations, but the improvement becomes less and less noticeable with higher order associations. | this observation is not surprising, because the number of degrees of freedom, 2 m − (m + 1), increases exponentially with m. In order words, the margin constraints are most effective for small m, but the effectiveness decreases rapidly with m. smoothing becomes more and more important as m increases, as shown in Figure 24(b), partly because of the data sparsity in high order associations. | contrasting |
train_6307 | In particular, it uses only intersections, which is always smaller than a s = |K 1 ∩ K 2 | available in the samples. | our algorithm takes advantage of all useful samples up to D s = min(max(K 1 ), max(K 2 )), particularly all a s intersections. | contrasting |
train_6308 | the resemblance R(a) is a convex function of a, the delta method also underestimates the variance. | figure 29 shows that the errors are not very large, and become negligible with reasonably large sample sizes (e.g., 50). | contrasting |
train_6309 | When the (S[ng]\NP 1 )/NP 2 passing is combined with the NP the buck, the lexical head of the NP 2 is instantiated with buck. | similarly, when the adverb just (s\NP 1 )/(s\NP) 2 is applied to passing the buck, a dependency between just and passing is created: because (s\NP 1 )/(s\NP) 2 is a modifier category, the head of the resulting s[ng]\NP is passing, not just (and no dependency is established between just and its NP 1 ). | contrasting |
train_6310 | In written English, certain types of NP-extraposition require a comma before or after the extraposed noun phrase: Because any predicative noun phrase could be used in this manner, this construction is also potentially problematic for the coverage of our grammar and lexicon. | the fact that a comma is required allows us to use a small number of binary type-changing rules (which do not project any dependencies), such as: The translation algorithm presumes that the trees in the Penn Treebank map directly to the desired CCG derivations. | contrasting |
train_6311 | Although the Treebank does not explicitly indicate coordination, it can generally be inferred from the presence of a conjunction. | in list-like nominal coordinations, the conjuncts are only separated by commas or semicolons, and may be difficult to distinguish from appositives. | contrasting |
train_6312 | Typically, frequency information for rare words in the training data is used to estimate parameters for unknown words (and when these rare or unknown words are encountered during parsing, additional information may be obtained from a POS-tagger (Collins 1997)). | in a lexicalized formalism such as CCG, there is the additional problem of missing lexical entries for known words. | contrasting |
train_6313 | Niyogi uses the language of statistical physics to give an evolutionary account of language change and language origins, which for some readers might be a little daunting. | an effort has been made to reach out to different disciplines and together with a peppering of practical examples, The Computational Nature of Language Learning and Evolution will not only be of interest to researchers modeling the evolution of language, but also deserves attention from adventurous historical linguists. | contrasting |
train_6314 | It is a requirement that she be 17 years old. | on the specific reading must gets an epistemic interpretation. | contrasting |
train_6315 | The Bikel parser's performance (without changing attachments) is slightly lower than C&B's, O&M's, and TM&N's. | for the trilexical case, the difference is not statistically significant for NA-discard. | contrasting |
train_6316 | None of these papers attempt to improve a parser in a realistic parsing situation. | a few studies were published recently that do evaluate PP attachment without oracles (Olteanu 2004;Atterer and Schütze 2006;Foth and Menzel 2006). | contrasting |
train_6317 | This has computational advantages (no potentially infinite Z(Θ) terms to compute). | the set of conditional distributions of labels given terminals that can be expressed by MEMMs is strictly smaller than those expressible by HMMs (and by extension, Mealy MRFs). | contrasting |
train_6318 | (1999) because their grammars are hand-written and constraining enough to allow the analyses for each sentence to be enumerated. | for grammars with wider coverage it is often not possible to enumerate the analyses for each sentence in the training data. | contrasting |
train_6319 | In fact, the results given in Hockenmaier (2003b) are lower than previous results. | , Hockenmaier (2003b) reports that the increased complexity of the model reduces the effectiveness of the dynamic programming used in the parser, and hence a more aggressive beam search is required to produce reasonable parse times. | contrasting |
train_6320 | One solution is to only keep a small number of charts in memory at any one time, and to keep reading in the charts on each iteration. | given that the L-BFGS algorithm takes hundreds of iterations to converge, this approach would be infeasibly slow. | contrasting |
train_6321 | In this article we wanted to use a gold standard which is easily accessible to other researchers. | there are some differences between the dependency scheme used by our parser and CCGbank. | contrasting |
train_6322 | The training of the dependency model already uses most of the RAM available on the cluster. | it is possible to use smaller β values for training the dependency model if we also apply the two types of normal-form restriction used by the normalform model. | contrasting |
train_6323 | The results are encouraging because a much smaller amount of corpus data is needed compared to our approach. | their method has only been applied to an artificially constructed test set, rather than a publicly available corpus, and has yet to be applied in a domain-specific setting, which is the chief motivation of our work. | contrasting |
train_6324 | This measure is based on Lin's information-theoretic similarity theorem (Lin 1997 The similarity between A and B is measured by the ratio between the amount of information needed to state the commonality of A and B and the information needed to fully describe what A and B are. | in our application, if T(w) is the set of features f such that i(w, f ) is positive, then the similarity between two words, w and n, is due to this choice of dss and the openness of the domain, we restrict ourselves to only considering words with a total feature frequency of at least 10. | contrasting |
train_6325 | No doubt analysts following this protocol will also achieve excellent interannotator agreement. | we obviously can't use the set of analyses they produce as empirical evidence for the theory. | contrasting |
train_6326 | In fact their definition of directed relations is similar to the definition of "nuclearity" in Mann and Thompson's (1988) theory of coherence relations, and this makes explicit reference to the analyst's intuitions of segment importance. | these decisions are all local to single relations between pairs of text segments. | contrasting |
train_6327 | in "Question answering supported by multiple levels of information extraction" focus on applying information-extraction techniques at retrieval time. | echihabi et al., in their chapter "How to select an answer string," use statistical and information-theoretic techniques, which interpret an answer as a "translation" of the question. | contrasting |
train_6328 | The notion of transfer is supposed to explain children's nonlinear learning curve, as well as their ability to generalize their lexical knowledge to novel items. | the details of this central mechanism are left unspecified: it is not clear how and under which constraints the transfer between two items can take place. | contrasting |
train_6329 | The LSA model is lexicalized: coherence amounts to quantifying the degree of semantic similarity between sentences. | our model does not incorporate any notion of similarity: coherence is encoded in terms of transition sequences that are document-specific rather than sentence-specific. | contrasting |
train_6330 | Both LSA and our entity-grid model are local-they model sentence-to-sentence transitions without being aware of global document structure. | the content models developed by Barzilay and Lee (2004) learn to represent more global text properties by capturing topics and the order in which these topics appear in texts from the same domain. | contrasting |
train_6331 | First, note that the entity-grid model significantly outperforms LSA on both domains (p < .01 using a Sign test, see Table 5). | to our model, LSA is neither entity-based nor unlexicalized: It measures the degree of semantic overlap across successive sentences, without handling discourse entities in a special way (all content words in a sentence contribute towards its meaning). | contrasting |
train_6332 | In the chart in Figure 12, these signs are packed into an equivalence class. | figure 14 shows that the values of CONT, that is, predicate-argument structures, have different values, and the signs as they are cannot be equivalent. | contrasting |
train_6333 | The method just described is the essence of our solution for the tractable estimation of maximum entropy models on exponentially many HPSG parse trees. | the problem of computational cost remains. | contrasting |
train_6334 | For example, PP-attachment ambiguity cannot be resolved with only syntactic preferences. | the results show that a model with only semantic features performs significantly worse than one with syntactic features. | contrasting |
train_6335 | Zero-pronoun resolution is also a difficult problem. | we found that most were indirectly caused by errors of argument/modifier distinction in to-infinitive clauses. | contrasting |
train_6336 | A possible problem with this method is in the approximation of exponentially many parse trees by a polynomial-size sample. | their method has an advantage in that any features on parse results can be incorporated into a model, whereas our method forces feature functions to be defined locally on conjunctive nodes. | contrasting |
train_6337 | , 2002 also proposed a maximum entropy model for probabilistic modeling of LFG parsing. | similarly to the previous studies on HPSG parsing, these groups had no solution to the problem of exponential explosion of unpacked parse results. | contrasting |
train_6338 | The proposed algorithm is an essential solution to the problem of estimating probabilistic models on exponentially many complete structures. | the applicability of this algorithm relies on the constraint that features are defined locally in conjunctive nodes. | contrasting |
train_6339 | When we define more global features, such as co-occurrences of structures at distant places in a sentence, conjunctive nodes must be expanded so that they include all structures that are necessary to define these features. | this obviously increases the number of conjunctive nodes, and consequently, the cost of parameter estimation increases. | contrasting |
train_6340 | Sampling techniques (Rosenfeld 1997;Chen and Rosenfeld 1999b;Osborne 2000;Malouf and van Noord 2004) allow us to define any features on complete structures without any constraints. | they force us to employ approximation methods for tractable computation. | contrasting |
train_6341 | Readers of Arabic will notice that the translation of like is indeed a comparative, not a verb meaning enjoy, as in the legend. | they will also notice that flies translates as the noun, rather than the verb, just as the story foretold. | contrasting |
train_6342 | The gain in performance from the combination step is consistently between two and three F 1 points. | a combination approach increases system complexity and penalizes efficiency. | contrasting |
train_6343 | This opens the possibility of exploiting dependencies among the different verbs in the sentence. | the complexity may grow significantly, and results so far are inconclusive (Carreras, Màrquez, and Chrupała 2004;Surdeanu et al. | contrasting |
train_6344 | The predicate widen shares the phrase the trade gap with expect as an ARG1 argument. | as expect is a raising verb, widen's subject is not in its typical position either, and we should expect to find it in the same position as expected's subject. | contrasting |
train_6345 | For example, [VP [V NP]] is an SST which has two non-terminal symbols, V and NP, as leaves. | ]] is not an SST as it violates the production VP→V NP. | contrasting |
train_6346 | On the one hand, this makes the learning and classification phases more complex because they involve more instances. | our results are not biased by the quality of the heuristics, leading to more meaningful findings. | contrasting |
train_6347 | We note that marking the argument node simplifies the generalization process as it improves both tasks by about 3.5 and 2.5 absolute percentage points, respectively. | row Poly shows that the polynomial kernel using state-of-the-art features (Moschitti et al. | contrasting |
train_6348 | The previous section has shown that if a state-of-the-art model 12 is adopted, then the tree kernel contribution is marginal. | if a non state-of-the-art model is adopted tree kernels can play a significant role. | contrasting |
train_6349 | It should be noted that a more accurate baseline can be provided by using the Viterbi-style search (see Section 4.4.1). | the experiments in Section 6.5 show that the heuristics produce the same accuracy (at least when the complete task is carried out). | contrasting |
train_6350 | (c) The use of fast tree kernels (Moschitti 2006a) along with the proposed tree representations makes the learning and classification much faster, so that the overall running time is comparable with polynomial kernels. | when used with SVMs their running time on very large data sets (e. g., millions of instances) becomes prohibitive. | contrasting |
train_6351 | For example, a noun phrase headed by ("today") is very likely to be a temporal element; so is a prepositional phrase with the head word ("at"). | for prepositional phrases, the preposition is not always the most informative element. | contrasting |
train_6352 | Throughout this article, when full parsing information is available, we assume that the system is presented with the full phrase-structure parse tree as defined in the Penn Treebank (Marcus, Marcinkiewicz, and Santorini 1993) but without trace and functional tags. | when only shallow parsing information is available, the full parse tree is reduced to only the chunks and the clause constituents. | contrasting |
train_6353 | The predictions are combined to form argument candidates. | we can employ a simple heuristic to filter out some candidates that are obviously not arguments. | contrasting |
train_6354 | Previous approaches usually rely on dynamic programming to resolve nonoverlapping/embedding constraints (i.e., Constraint 4) when the constraint structure is sequential. | they are not able to handle more expressive constraints such as those that take long-distance dependencies and counting dependencies into account (Roth and Yih 2005). | contrasting |
train_6355 | The F 1 difference is 1.5 points when using the gold-standard data. | when automatic parsers are used, the shallow parsing-based system is, in fact, slightly better; although the difference is not statistically significant. | contrasting |
train_6356 | Specifically, these argument candidates often overlap and differ only in one or two words. | the pruning heuristic based on full parsing never outputs overlapping candidates and consequently provides input that is easier for the next stage to handle. | contrasting |
train_6357 | For example, an assignment that has two A1 arguments clearly violates the non-duplication constraint. | if an assignment has no predicted arguments at all, it still satisfies all the constraints. | contrasting |
train_6358 | Each possible labeling of the argument is associated with a variable which is then used to set up the inference procedure. | the final prediction will be likely dominated by the system that produces more candidates, which is system B in this example. | contrasting |
train_6359 | However, their chunk-based approach was very weak-only chunks were considered as possible candidates; hence, it is not very surprising that the boundaries of the arguments could not be reliably found. | our shallow parse-based system does not have these restrictions on the argument boundaries and therefore performs much better at this stage, providing a more fair comparison. | contrasting |
train_6360 | The performance of argument Identification is essentially the same as when training and testing on WSJ. | argument Classification is 6 percentage points worse (80.1% vs. 86.1%) when training and testing on Brown than when training and testing on WSJ. | contrasting |
train_6361 | It first describes the IR models used, which is interesting for someone unfamiliar with the field. | the discussion of the integration of IE does not seem convincing and there is once again a lack of real examples which would strengthen the arguments. | contrasting |
train_6362 | In theory, there is one correct label for any given act. | in practice human coders disagree, choosing different labels for the same act (sometimes even with divergences that make one question whether there is one correct answer). | contrasting |
train_6363 | For data with a very strong correlation between the input features A and the output labels B, the turning point below which performance is spuriously high occurs at around κ = 0.55 (Figure 3d), a value the community holds to be pretty low but which is not unknown in published work. | when the underlying relationship to be learned is moderate or strong (Figures 3b and 3c), the spuriously high results already occur for κ values commonly held to be tolerable. | contrasting |
train_6364 | Some factors such as morphology (gender, number, animacy, and case) or syntax (e.g., the role of binding and commanding relations [Chomsky 1981]) are "eliminating," forbidding certain NPs from being antecedents. | many others are "preferential," giving more preference to certain candidates over others; examples include: r Sentence-based factors: Pronouns in one clause prefer to refer to the NP that is the subject of the previous clause (Crawley, Stevenson, and Kleinman 1990). | contrasting |
train_6365 | Thus, it enables a relatively large number of candidates to be processed. | as our twin-candidate model imposes no constraints that enforce transitivity of the preference relation, the preference classifier would likely output C 1 C 2 , C 2 C 3 , and C 3 C 1 . | contrasting |
train_6366 | r For most of our experiments we use as input the gold-standard tags from the treebank. | in our last experiments we evaluate the impact of automatic statistical morphological disambiguation on the performance of our best performing parser. | contrasting |
train_6367 | Such a post-processor could be worth developing if the WW U accuracy obtained with this model proves to be higher than all of the other models, that is, if this is the best way of finding the correct dependencies between words without considering which IGs are connected. | as we will see in Section 4.4, this model does not give the best WW U . | contrasting |
train_6368 | 19 This approach has some advantages over the probabilistic parser, in that r it can process both left-to-right and right-to-left dependencies due to its parsing algorithm, r it assigns dependency labels simultaneously with dependencies and can use these as features in the history-based model, and r it does not necessarily require expert knowledge about the choice of linguistically relevant features to use in the representations because SVM training involves implicit feature selection. | we still exclude sentences with non-projective dependencies during training. | contrasting |
train_6369 | We take the minor POS category plus the case and possessive agreement markers for nominals and participle adjectives to make up the POS feature of each IG. | 21 we do not employ dynamic selection of these features and just use the same strategy for both dependents and the heads. | contrasting |
train_6370 | We can see the benefit of using inflectional features separately and split into atomic components, by comparing the first line of the table with the best results for the IG-based model in Table 4. | we can also note the improvement that lexicalized models bring: 24 to the probabilistic parser, lexicalization using root information rather than surface form gives better performance, even though the difference is not statistically significant. | contrasting |
train_6371 | 1 Moreover, a surprising variety of problems are attackable with FSTs, from part-of-speech tagging to letter-to-sound conversion to name transliteration. | language problems like machine translation break this mold, because they involve massive re-ordering of symbols, and because the transformation processes seem sensitive to hierarchical tree structure. | contrasting |
train_6372 | consists only of q S(x0, x1), although we want the English-to-Arabic transformation to apply only when it faces the entire structure q S(PRO, VP(V, NP)). | we can simulate lookahead using states, as in these productions: By omitting rules like qpro NP → ..., we ensure that the entire production sequence will dead-end unless the first child of the input tree is in fact PRO. | contrasting |
train_6373 | This modification adds to our new transducer model all the contextual information specified in Yamada and Knight (2001). | upon closer inspection one can see that the exact transducer is in fact overspecified in the reordering, or r rules. | contrasting |
train_6374 | Regular roots such as p.s.q yield forms such as hpsqh. | the irregular roots n.p.l, i.c.g, q.w.m, and g.n.n in this pattern yield the seemingly similar forms hplh, hcgh, hqmh, and hgnh, respectively. | contrasting |
train_6375 | These are the kinds of errors which are most difficult to fix. | in many cases the system's errors are relatively easy to overcome. | contrasting |
train_6376 | Once you have gone down this route, it's very hard to consider releasing the resulting software. | if you plan from the start to distribute your software, you will inevitably be guided by considerations that are important to your potential audience. | contrasting |
train_6377 | It is true that releasing software that is both usable and reliable requires a strong hand to guide system development, and that's a skill that many researchers don't think they have. | it's really quite simple to develop. | contrasting |
train_6378 | This is not necessarily a bad thing, and might address concerns such as those raised by Chuch (2005) about very conservative reviewing in our field and the resulting tendency to prefer incremental improvements. | the other path is to accept (and in fact insist) that highly detailed empirical studies must be reproducible to be credible, and that it is unreasonable to expect that reproducibility be possible based on the description provided in a publication. | contrasting |
train_6379 | This data is usually used to motivate and inspire a new hand-built dialogue system or to modify an existing one. | given the existence of such data, it should be possible to exploit machine learning methods to automatically build and optimize a new dialogue system. | contrasting |
train_6380 | Simulated users are generally preferred due to the much smaller development effort involved, and the fact that trialand-error training with humans is tedious for the users. | the issues of how to construct and then evaluate simulated users are open problems. | contrasting |
train_6381 | We show that all four algorithms give competitive accuracy, although the non-projective list-based algorithm generally outperforms the projective algorithms for languages with a non-negligible proportion of non-projective constructions. | the projective algorithms often produce comparable results when combined with the technique known as pseudo-projective parsing. | contrasting |
train_6382 | The fact that both the head and the dependent are kept in either λ 2 or β makes it possible to construct non-projective dependency graphs, because the NO-ARC n transition allows a node to be passed from λ 1 to λ 2 even if it does not (yet) have a head. | an arc can only be added between two nodes i and j if the dependent end of the arc is not the artificial root 0 and does not already have a head, which would violate ROOT and SINGLE-HEAD, respectively, and if there is no path connecting the dependent to the head, which would cause a violation of ACYCLICITY. | contrasting |
train_6383 | We conjecture that an additional necessary condition is an annotation style that favors more deeply embedded structures, giving rise to chains of left-headed structures where each node is dependent on the preceding one, which increases the number of points at which an incorrect decision can be made by an arcstandard parser. | we have not yet fully verified the extent to which this condition holds for all the data sets where the arc-eager parsers outperform their arc-standard counterparts. | contrasting |
train_6384 | Before we consider the evaluation of efficiency in both learning and parsing, it is worth pointing out that the results will be heavily dependent on the choice of support vector machines for classification, and cannot be directly generalized to the use of deterministic incremental parsing algorithms together with other kinds of classifiers. | because support vector machines constitute the state of the art in classifierbased parsing, it is still worth examining how learning and parsing times vary with the parsing algorithm while parameters of learning and classification are kept constant. | contrasting |
train_6385 | Donner and Eliasziw (1987) propose a more general form of significance test for arbitrary levels of agreement. | krippendorff (2004a, Section 11.4.2) states that the distribution of α is unknown, so confidence intervals must be obtained by bootstrapping; a software package for doing this is described in Hayes and krippendorff (2007). | contrasting |
train_6386 | These results show that agreement coefficients should not be used as indicators of the suitability of annotated data for machine learning. | the purpose of reliability studies is not to find out whether annotations can be generalized, but whether they capture some kind of observable reality. | contrasting |
train_6387 | The probability Pr(t) represents the well-formedness of t and it is generally called the language model probability (n-gram models are usually adopted [Jelinek 1998]). | pr(s|t) represents the relationship between the two sentences (the source and its translation). | contrasting |
train_6388 | Moreover, in practice the summation operator is replaced with the maximization operator, which in turn reduces the contribution of each individual source word in generating a target word. | modeling word sequences rather than single words in both the alignment and lexicon models cause significant improvement in translation quality (Och and Ney 2004). | contrasting |
train_6389 | Other techniques are heuristics based on the previous computation of word alignments in the training corpus (Zens, Och, and Ney 2002;Koehn, Och, and Marcu 2003). | as for AT, the weights λ i in Equation 13are usually optimized using held-out data. | contrasting |
train_6390 | Specifically in the field of language translation, it is often argued that natural languages are so complex that these simple models are never able to cope with the required source-target mappings. | one should take into account that the complexity of the mapping between the source and target domains of a transducer is not always directly related to the complexity of the domains themselves. | contrasting |
train_6391 | For a fixed source sentence, if no pruning is applied in the production of the word graph, it represents all possible sequences of target words for which the posterior probability is greater than zero, according to the models used. | because of the pruning generally needed to render the problem computationally feasible, the resulting word graph only represents a subset of the possible translations. | contrasting |
train_6392 | In general, the different techniques perform similarly for the various translation directions. | the English-Spanish language pair is the one for which the best translations can be produced. | contrasting |
train_6393 | In cases where these texts were quite unrelated to the training data, the system did not significantly help the human translators to increase their productivity. | when the test texts were reasonably well related to the training data, high productivity gains were registeredclose to what could be expected according to the KSR/MAR empirical results. | contrasting |
train_6394 | This measure does not take into account that the OSOs differ in length. | this information is necessary to estimate reliably the performance of an information ordering approach, as we discuss in Karamanis and Mellish (2005a) in more detail. | contrasting |
train_6395 | 16 Barzilay and Lapata (2005) compare the OSO with just 20 alternative orderings, often sampled out of several millions. | barzilay and Lee (2004) enumerate exhaustively each possible ordering, which might become impractical as the search space grows factorially. | contrasting |
train_6396 | Karamanis and Mellish (2005b) conducted an experiment in the MPIRO domain using Lapata's methodology which supplements the work reported in this article. | such an approach is less practical for much larger collections of texts such as NEWS and ACCS. | contrasting |
train_6397 | Following McKeown (1985), Kibble and Power argue in favor of an integrated approach for concept-to-text generation in which the same centering features are used at different stages in the generation pipeline. | our study suggests that features such as CHEAPNESS and the centering transitions are not particularly relevant to information ordering. | contrasting |
train_6398 | If this were not the case, then, because ERCS(x) must contain at least one coordinate with an L (else there would be no way to reject {x}), the presence of x in S would place an L in a W-unique coordinate in S − {x}. | this would make it impossible to associate a ranking with at least one partitioning of S into accepted and rejected subsets contra the assumption that S is shatterable. | contrasting |
train_6399 | Nonetheless, at these recall values, the two measures have relatively low precision (compared to the other measures), suggesting that both measures also put many idiomatic pairs near the bottom of the list. | the precision-recall curve of Fixedness syn shows that its performance is consistently much better than that of PMI: Even at the recall level of 90%, its precision is close to 70% (cf. | contrasting |
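The rows above follow the column schema given in the table header (id, sentence1, sentence2, label). Below is a minimal sketch of how a split with this schema could be loaded and inspected with the Hugging Face `datasets` library; the dataset path `"your-org/your-dataset"` is a placeholder assumption, not the actual Hub identifier for this corpus, and local CSV/JSON files with the same columns would work equally well.

```python
# Minimal sketch, assuming the data lives on the Hugging Face Hub under a
# placeholder path ("your-org/your-dataset"); substitute the real identifier
# or point load_dataset at local files with the same columns.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("your-org/your-dataset", split="train")  # hypothetical path

# Each row carries: id (string), sentence1 (string), sentence2 (string),
# and label (one of 4 classes, e.g. "contrasting").
row = ds[0]
print(row["id"], "->", row["label"])
print(row["sentence1"])
print(row["sentence2"])

# Class distribution over the split.
print(Counter(ds["label"]))
```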