id (stringlengths 7–12) | sentence1 (stringlengths 6–1.27k) | sentence2 (stringlengths 6–926) | label (stringclasses, 4 values) |
---|---|---|---|
train_6500 | The requested origin and destination are used as a filter when selecting the set of available options by querying the database. | the requested arrival or departure time-if specified-is used in the corresponding attribute's evaluation function to give a higher score to flights that are closer to the specified time. | contrasting |
train_6501 | The next-highest-ranked flight is a morning flight, which does not have any attributes that are compellingly better than those of the top choice, and is therefore skipped. | the third option presents an interesting trade-off: even though business class is not available, it is a direct flight, so it is also included. | contrasting |
train_6502 | As an alternative, we could have associated a theme or rheme predication with the edge tones, which would be more in line with Steedman's (2000a) approach. | doing so would make it necessary to include one such predication per phrase in the logical forms, thereby anticipating the desired number of output theme and rheme phrases in the realizer's input. | contrasting |
train_6503 | In one sense, defining a target-cost component to direct the search towards finding suitable intonation is not difficult, as a simple penalty for not matching the target intonation suffices. | standard unit selection techniques only ever take into account local effects, and no provision is made to ensure a suitable global intonation contour. | contrasting |
train_6504 | Here, the APML version was annotated with the target accents, namely L+H* for Lufthansa and none for flight, whereas the ALL version was annotated with a L+H* !H* pattern, which was also considered acceptable. | with the APML version the pitch drops from an f0 value of 293 Hz to 177 Hz, whereas with the ALL version the pitch only drops from 260 Hz to 209 Hz. | contrasting |
train_6505 | Pon- Barry, Weng, and Varges (2006) found that fewer dialogue turns were necessary when the system proactively suggested refinements and relaxations. | as argued in Demberg and Moore (2006), there are several limitations to the SR approach. | contrasting |
train_6506 | 's more comprehensive study, their system predicts accent placement (but not type), break indices, and edge tones based on features extracted from the SURGE realizer (Elhadad 1993), deep semantic and discourse features, including semantic type, semantic abnormality and given/new status, and surface features, such as part of speech and word informativeness. | neither of these CTS approaches makes use of the theme/rheme distinction, or the notion of kontrast that stems from Rooth's (1992) work on alternative sets, both of which are crucial to Steedman's (2000a) theory of how information structure constrains prosodic choices. | contrasting |
train_6507 | In an approach that is more similar in spirit to ours, Bulyko and Ostendorf (2002) likewise aim to reproduce distinctive intonational patterns in a limited domain. | unlike our approach, theirs makes use of simple templates for generating paraphrases, as their focus is on how deferring the final choice of wording and prosodic realization to their synthesizer enables them to achieve more natural sounding synthetic speech. | contrasting |
train_6508 | Subsequently, found correlations between results in MDS spaces and standard mean opinion score (MOS) tests, but as the MOS tests did not correspond to single dimensions in the MDS space, they suggested that it may be possible to design more informative evaluations by asking subjects to specifically rate each factor of interest (e.g., prosody), where each factor relates to one dimension in the MDS space. | as no specific method is suggested to guarantee reliable prosodic judgments from naive listeners, we have left this question for future research, opting instead to augment the listener ratings gathered in our perception experiment with an expert prosody evaluation and an f0 analysis of the theme phrases. | contrasting |
train_6509 | Similarly to what we present in Section 3.1, those authors propose constructing a comparator by using an SVM to compare two sentences or texts with multiple features. | neither further applied this approach to obtain readability assessment based on sorting. | contrasting |
train_6510 | Here, what kinds of corpora the regression method requires in the modern machine learning context has not been clarified because of the lack of previous work in machine learning regression. | our empirical results, shown in Section 8, suggest that the two sets of difficult and easy training data will not be sufficient, and machine learning regression requires texts labeled with scores for different levels. | contrasting |
train_6511 | This must have been due to the different natures of the training and test data. | the texts of the Asahi newspaper articles and TD2-{M,F}-J are controlled under a similar standard (in terms of vocabulary, syntax, and so forth), which would account for the difference from the case in English. | contrasting |
train_6512 | We would like to initialize EM with all the rules that might conceivably be used to explain the training data. | this set is too large to practically enumerate. | contrasting |
train_6513 | No major breakthrough came for speech technology-I am still typing this. | language technology did change almost beyond recognition. | contrasting |
train_6514 | By now, it is generally accepted that the problems that Winograd, Hovy, and others tried to tackle are very complex, and that the current emphasis on more well-delimited problems is probably a good thing. | it is not difficult to come up with computational applications for which a better understanding would be required of language as a process and the effects language may have on a user (interactive virtual agents which try to persuade a user to do something, for example). | contrasting |
train_6515 | If we take empty cepts into account, the product for {(1, 2)} can be rewritten as Similarly, the product for {(1, 2), (2, 2)} now becomes Note that after adding the link (2, 2), the new product still has more factors than the old product. | the new product is not necessarily always smaller than the old one. | contrasting |
train_6516 | We observe that C→E, union, and grow-diag-final weight recall higher because F-measure decreases when α increases. | e→C, intersection, refined method, and Cross-eM weight precision higher. | contrasting |
train_6517 | The original form should be (1 − p 0 ) φ 0 . | this assumption results in a problem for our search algorithm that begins with an empty alignment (see Algorithm 1), for which φ 0 is J and the feature value h m 3 (f, e, a) is negative infinity. | contrasting |
train_6518 | It could be argued that it is premature to survey an area of research that has shown promise but has not yet been tested for a long enough period (and in enough systems). | we believe this argument actually strengthens the motivation for a survey that can encourage the community to use paraphrases by providing an applicationindependent, cohesive, and condensed discussion of data-driven paraphrase generation techniques. | contrasting |
train_6519 | Individual lexical items having the same meaning are usually referred to as lexical paraphrases or, more commonly, synonyms, for example, hot, warm and eat, consume . | lexical paraphrasing cannot be restricted strictly to the concept of synonymy. | contrasting |
train_6520 | For some of the applications we discuss subsequently, the use of paraphrases in the manner described may not yet be the norm. | wherever applicable, we cite recent research that promises gains in performance by using paraphrases for these applications. | contrasting |
train_6521 | We present similar observations in Section 3.5 and highlight that although more recent translation techniquesspecifically ones that use phrases as units of translation-are better suited to the task of generating paraphrases than the competitive linking approach, they continue to suffer from the same problem of low precision. | such techniques can take good advantage of large bilingual corpora and capture a much larger variety of paraphrastic phenomena. | contrasting |
train_6522 | In addition, the paraphrases produced are of better quality than other approaches employing parallel corpora for paraphrase induction discussed so far. | the approach does have a couple of drawbacks: r No paraphrases for unseen data. | contrasting |
train_6523 | In fact, the corpus used (Huang, Graff, and Doddington 2002) also contains, besides the 11 human translations, 6 translations of the same sentence by machine translation systems available on the Internet at the time. | no experiments are performed with the automatic translations. | contrasting |
train_6524 | The idea of enlisting named entities as proxies for detecting semantic equivalence is interesting and has certainly been explored before (see the discussion regarding Paşca and Dienes [2005] in Section 3.2). | it has some obvious disadvantages. | contrasting |
train_6525 | As far as the quality of acquired paraphrases is concerned, this approach easily outperforms almost all other sentential paraphrasing approaches surveyed in this article. | a paraphrase is produced only if the incoming sentence matches some existing template, which leads to a strong bias favoring quality over coverage. | contrasting |
train_6526 | Both approaches use word lattices to represent and induce paraphrases since a lattice can efficiently and compactly encode n-gram similarities (sets of shared overlapping word sequences) between a large number of sentences. | the two approaches are also different in that Pang, Knight, and Marcu use the parse trees of all sentences in a cluster to compute the alignment (and build the lattice), whereas Barzilay and Lee use only surface level information. | contrasting |
train_6527 | Note that both of these techniques rely on a secondary language to provide the cues for generating paraphrases in the primary language. | wu and Zhou rely on a pre-compiled bilingual dictionary to discover these cues whereas Bannard and Callison-Burch have an entirely datadriven discovery process. | contrasting |
train_6528 | Once a set of bilingual hierarchical rules has been extracted along with associated features, the pivoting trick can be applied to infer monolingual hierarchical paraphrase pairs (or paraphrastic patterns). | the patterns are not the final output and are actually used as rules from a monolingual SCFG grammar in order to define an English-to-English translation model. | contrasting |
train_6529 | Again, we must draw a connection between this work and that of Quirk, Brockett, and Dolan (2004) (discussed in Section 3.3) because both treat paraphrasing as monolingual translation. | as outlined in the discussion of that work, Quirk, Brockett, and Dolan use a relatively simplistic translation model and decoder which leads to paraphrases with little or no lexical variety. | contrasting |
train_6530 | They conduct no formal evaluation of the coverage of their approach but show that, in a limited setting, it is higher than that for the syntactically constrained pivot-based approach. | they perform no comparisons of their coverage with the original pivot-based approach (Bannard and Callison-Burch 2005). | contrasting |
train_6531 | The small size of the corpus, when combined with this and other such constraints, precludes the use of the corpus as training data for a paraphrase generation or extraction system. | it is fairly useful as a freely available test set to evaluate paraphrase recognition methods. | contrasting |
train_6532 | An obvious reason for this disparity could be that paraphrasing is not an application in and of itself. | the existence of similar evaluations for other tasks that are not applications, such as dependency parsing (Buchholz and Marsi 2006;Nivre et al. | contrasting |
train_6533 | In fact, it may even lead to research being duplicated across communities. | more recent work-some of it discussed in this survey-on extracting phrasal paraphrases (or patterns) does include direct evaluation of the paraphrasing itself: The original phrase and its paraphrase are presented to multiple human judges, along with the contexts in which the phrase occurs in the original sentence, who are asked to determine whether the relationship between the two phrases is indeed paraphrastic (Barzilay and McKeown 2001;Barzilay and Lee 2003;Ibrahim, Katz, and Lin 2003;Pang, Knight, and Marcu 2003). | contrasting |
train_6534 | (2005) attempt to deal with this problem by using a fuzzy algorithm to cluster speakers; this assigns each speaker a distribution over conversations rather than making a hard assignment. | the algorithm still deals with speakers rather than utterances, and cannot determine which conversation any particular utterance is part of. | contrasting |
train_6535 | Mention information alone is not sufficient for disentanglement; with only name mention and time gap features, mean one-to-one is 38 and loc 3 is 69. | name mention features are critical for our model. | contrasting |
train_6536 | Mean loc 3 increases from 72% to about 78% and mean one-to-one accuracy from 41% to about 66%. | we find no improvement at all on the test data, because the classifier has very low recall, and the resulting test annotations have far too few conversations. | contrasting |
train_6537 | The majority of sentence compression approaches only look at sentences in isolation without taking into account any discourse information. | there are two notable exceptions. | contrasting |
train_6538 | Computing lexical chains would be relatively straightforward if each word was always represented by a single sense. | due to the high level of polysemy inherent in WordNet, algorithms developed for computing lexical chains must adopt some strategy for disambiguating word senses. | contrasting |
train_6539 | That work clarifies the definitions of MCTAG variants and the relationship between them rather than presenting new complexity results. | it suggests the possibility of proving results such as ours in its assertion that, after a standard TAG parse, a check of whether particular trees belong to the same tree set cannot be performed in polynomial time. | contrasting |
train_6540 | This leaves open the possibility of the existence of an algorithm that is polynomial in the grammar size but has an additional exponential term in the time complexity expression. | such an algorithm, if it exists, cannot be generated by application of the GHR optimization to the baseline parser. | contrasting |
train_6541 | This discussion on the optimality of the factorization algorithm crucially assumes strong equivalence with the source TL-MCTAG G. Of course there might be TL-MCTAGs that are weakly equivalent to G, that is, they generate the same language, and have rank strictly smaller than the rank of G . | finding such structurally different grammars is a task that seems to require techniques quite different from the factorization techniques we have developed in this section. | contrasting |
train_6542 | Currently, the definition of fragment does not allow splitting apart a subset of the children of a given node from the remaining ones. | if we allow binarization of the elementary trees of the source grammar, then we might be able to isolate sets of links that could not be factorized in the source grammar itself. | contrasting |
train_6543 | As presented, the factorization algorithm is designed to handle grammars in which multiple adjunction is permitted. | if multiple adjunction is disallowed and the grammar contains trees in which multiple links impinge on the same node, the use of one link at a node will disqualify any other impinging links from use. | contrasting |
train_6544 | For instance, there must be a maximal node dominating (possibly reflexively) node n 3 but not node n 4 . | this node dominates a single link, and will not be processed by the algorithm because of the requirement at line 12 in Figure 18. | contrasting |
train_6545 | They define a joint objective max x) . | the ranges over all one-to-one alignments and computing it is #P-complete (Liang, Taskar, and Klein 2006). | contrasting |
train_6546 | We typically apply this procedure in the tropical semiring (Viterbi likelihoods), so that only the best rule derivation that generated each translation candidate is taken into account when extracting feature contributions for MERT. | given the alignment transducer L, this could also be performed in the log semiring (marginal likelihoods), taking into account the feature contributions from all rule derivations, for each translation candidate. | contrasting |
train_6547 | To see the effect of this constraint, consider the following example with a source sentence s 1 s 2 and a shallow-1 grammar defined by these four rules: There are two derivations R 1 R 2 and R 1 R 3 R 4 which yield identical translations. | r 2 would not be allowed under the constraint introduced here because it does not rewrite an X 1 to an X 0 . | contrasting |
train_6548 | For mt02-05-tune, we find that in 18.5% of the sentences HiFST finds a hypothesis with lower cost than HCP. | hCP never finds any hypothesis with lower cost for any sentence. | contrasting |
train_6549 | This makes it easier for our analysis to compare the system order with the reference order. | there are 22% cases where two different orders are provided, which shows the flexibility of translation. | contrasting |
train_6550 | Inside or outside interruptions have to be allowed to obtain fluent translations for these constituents. | the allowance of interruptions is sometimes beyond the representability of BTG rules. | contrasting |
train_6551 | Although the use of snippets instead of the full documents makes our approach efficient, it introduces noise because text fragments are used instead of full sentences. | we show that state-of-theart statistical machine translation (SMT) technology is in fact robust and flexible enough to capture the peculiarities of the language pair of user queries and result snippets. | contrasting |
train_6552 | One parse was equivalent to Give [me] any people-done chemical analyses on this rock. | if that was the correct parse, then there should have been a very salient prosodic signature on the phrase people-done, and the main verb give should have been stressed. | contrasting |
train_6553 | In the simplest way, the semantic interpretation is simply executed as a program to compute an answer to a question. | in a more general case, the system can take the interpretation as an object to be reasoned about and possibly modified. | contrasting |
train_6554 | LUNAR answered 78% of the queries asked of it at the Second Annual Lunar Science Conference, and 90% of those queries fell within its scope. | lUNAR was far from being a complete solution. | contrasting |
train_6555 | The acoustic scores of the two hypotheses were virtually identical, and the correct choice happened to come second. | the system could easily have resolved the choice by using a semantic interpretation to check the trip database to learn that Bill Woods was scheduled to go to Washington, while Alan Bell was not. | contrasting |
train_6556 | Because a, c, e, f, and h all point to Tony, we say that they are coreferent. | in paraphrasing, we do not need to build a discourse entity to state that g and i are paraphrase pairs; we restrict ourselves to semantic content and this is why we check for sameness of meaning between cataracts and cloudy vision alone, regardless of whether they are a referential unit in a discourse. | contrasting |
train_6557 | We cannot decide on (non-)coreference in (2,1) as we need a discourse to first assign a referent. | we can make paraphrasing judgments without taking discourse into consideration. | contrasting |
train_6558 | Meaning alone does not make it possible to state that the two pairs in Example (5b), repeated in Example (15), or the two pairs in Example (16) are paraphrases without first solving the coreference relations. | cooperative work between paraphrasing and coreference is not always possible, and it is harder if neither of the two can be detected by means of widely used strategies. | contrasting |
train_6559 | (2006) formalized this approach with tree transducers (Graehl and Knight 2004) by using context-free parse trees to represent the target side. | it was later shown by Marcu et al. | contrasting |
train_6560 | Actually, this is a desirable property on which we will introduce meta category operations later. | for the sake of convenience, we would like to assign a single category to each well-formed structure. | contrasting |
train_6561 | On Chinese-to-English, the tri-gram scores of the filtered model were a little bit worse. | after 5-gram rescoring, the BLEU scores became higher than the baseline, and METEOR scores were even significantly better. | contrasting |
train_6562 | DSMs succeed in tasks like synonym detection (Landauer and Dumais 1997) or concept categorization (Almuhareb and Poesio 2004) because such tasks require a measure of attributional similarity that favors concepts that share many properties, such as synonyms and co-hyponyms. | many other tasks require detecting different kinds of semantic similarity. | contrasting |
train_6563 | As in structured DSMs, we adopt word-link-word tuples as the most suitable way to capture distributional facts. | we extend and generalize this assumption, by proposing that, once they are formalized as a threeway tensor, tuples can become the backbone of a unified model for distributional semantics. | contrasting |
train_6564 | The performance of unstructured DSMs (including Win, our own implementation of this type of model) is also high, sometimes even better than that of structured DSMs. | our best DM model also achieves brilliant results in capturing selectional preferences, a task that is not directly addressable by unstructured DSMs. | contrasting |
train_6565 | We illustrate this space in the task of discriminating verbs participating in different argument alternations. | other uses of the space can also be foreseen. | contrasting |
train_6566 | Compare this with the result for the patient role: hunter is rather distant from roe, deer, and buck, and is therefore predicted to be a bad patient of shoot. | note that hunter is still more plausible as a patient of shoot than, for example, director. | contrasting |
train_6567 | Models can only reliably predict the human ratings in this data set if they can capture the difference between verb argument positions as well as between individual fillers. | because the verb-argument pairs were created by hand and with strict requirements, many of the arguments are infrequent in standard corpora (e.g., wimp, bellboy, or knight). | contrasting |
train_6568 | achieves better correlation in the SYN PRIMARY setting than the SEM PRIMARY setting, indicating that the frequency cutoff does not harm performance as much in Experiment 2 as it did in Experiment 1. | the coverage of ROOTH ET AL. | contrasting |
train_6569 | We do not repeat Experiment 2 even though it would have been technically possible to re-use the McRae and Pado data sets and predict plausibility judgments through inverse preferences. | the data sets combine each verb with both plausible and implausible nouns, but they do not combine each noun with different verbs in a balanced fashion, so a repetition of Experiment 2 with inverse preferences would not be very informative. | contrasting |
train_6570 | To model inverse preferences, it would be necessary to use the WordNet verb hierarchy. | wordNet organizes verbs in a comparatively flat, unconnected hierarchy with a high branching factor formed by the hypernymy/troponymy ("type of") relation. | contrasting |
train_6571 | We expect some groups to correspond to Pustejovsky's qualia (Pustejovsky 1995), which constitute particularly salient events for an object, namely, their creation and typical use. | we expect corpus data to yield a more complex picture of the events connected to a noun, which manifest themselves in the form of additional, more specific meaning clusters. | contrasting |
train_6572 | These exercises are reasonably chosen and solutions are technically well prepared and exhaustively explained with an admirable degree of clarity. | the more advanced and experienced reader might find the continuous pampering by way of overly fine-grained thematic exposition and visualization a bit excessive, perhaps even wearisome. | contrasting |
train_6573 | Although the preface says that the book is aimed at both engineering and humanities students, I suspect that readers without any mathematical background would have difficulty with some of the more formal sections. | the necessary background knowledge is not great, and in general the book is clearly written and understandable; it is suitable as both an introduction to this research area and a survey of current stateof-the-art techniques. | contrasting |
train_6574 | Indeed I spent exactly one short paragraph on this, where I discussed our work in Farmer, Sproat, and Witzel (2004) as a way of setting the stage for the current discussion. | rao spends a significant portion of his response discussing the Indus symbols, the case for the script theory, and the case against the non-script theory, and repeatedly refers to our 2004 paper and claimed problems with our arguments. | contrasting |
train_6575 | The correlation coefficients in Figure 1 show that the automatic metrics do well in predicting responsiveness and pyramid scoring for machine-generated summaries. | the scores for human-generated summaries far exceed the predictions, with a large gap between predicted and actual scores. | contrasting |
train_6576 | (2004) proposed an approach to finding subjective adjectives using the results of word clustering according to their distributional similarity. | they did not tackle the prediction of sentiment polarities of the found subjective adjectives. | contrasting |
train_6577 | So far, all extracted targets are individual words (such as weight, size). | because many targets are phrases (such as battery life), we need to identify them from the extracted individual words. | contrasting |
train_6578 | The override convention makes it possible to add, delete, or modify rules. | when a rule is modified, the entire rule has to be rewritten, even if the modifications are minor. | contrasting |
train_6579 | Then, the resulting graph is compacted, coalescing nodes marked by the same type as well as indistinguishable anonymous nodes. | the resulting graph does not necessarily maintain the relaxed upward closure condition, and therefore some modifications are needed. | contrasting |
train_6580 | The merge of S 10 with S 11 results in a non-BCPO. | the additional information supplied by S 12 resolves the problem, and S 10 S 11 S 12 is bounded complete. | contrasting |
train_6581 | S 9 stipulates two nodes, typed nagr and vagr, with the intention that these nodes be coalesced with the two anonymous nodes of S 1 . | the 'merge' operation defined in the previous section cannot achieve this goal, since the two anonymous nodes in S 1 have different attributes from their corresponding typed nodes in S 9 . | contrasting |
train_6582 | Then, similarly to the merge operation, pairs of nodes marked by the same type and pairs of indistinguishable anonymous nodes are coalesced. | to the merge operation, in the attachment operation two distinguishable anonymous nodes, as well as an anonymous node and a typed node, can be coalesced. | contrasting |
train_6583 | Therefore, there is no need to preserve the classification of nodes and only the underlying PSS is of interest. | because the resolution procedure uses the compactness algorithm which is defined over signature modules, we define the following algorithms over signature modules as well. | contrasting |
train_6584 | In our case, we require also that an anonymous node be mapped only to an anonymous node and that two typed nodes, mapped to each other, be marked by the same type. | the classification of nodes as internal, imported, and/or exported has no effect on the isomorphism since it is not part of the core of the information encoded by the signature module. | contrasting |
train_6585 | They created a scenario in which a lieutenant (the user) was sent to a village for an Army peacekeeping task. | on his way, he encountered an auto accident in which his platoon's vehicle crashed into a civilian vehicle, injuring a local boy. | contrasting |
train_6586 | In Section 4.2, we find that players tend to interrupt more often at the end of a card or a poker game when they are given more time. | players do not necessarily wait for more time when the picture game is less urgent. | contrasting |
train_6587 | Some players might perform an act similar to card review, in that they communicate all of the cards in his or her hand. | the version here differs as it includes the new card just picked up, which has not been communicated before. | contrasting |
train_6588 | Probably due to the simplicity of the picture game, we find that players mostly just have a smooth continuation as if the interruption did not happen. | we do find that players sometimes make two types of context restorations-utterance restatements and card reviews-and we find that players have a higher rate of performing these when returning to the ongoing task. | contrasting |
train_6589 | The previous experiment helped us determine which features are useful for recognizing task interruptions. | the experiment was based only on utterances that start with do you have, yet not all task interruptions are initiated with do you have. | contrasting |
train_6590 | Both the segmentor of this article and our segmentor of Zhang and Clark (2007) use a global linear model trained discriminatively using the perceptron. | when comparing state items in the agenda, our 2007 segmentor treated full words in the same way as partial words, scoring them using the same feature templates. | contrasting |
train_6591 | In order to find the highest-scored segmentation with the last word being characters b, ..n − 1, the last word needs to be combined with all different segmentations of characters 0..b − 1 so that the highest scored can be selected. | because the largest-range feature templates span only over two words (see Table 1), the highest scored among the segmentations of characters 0..b − 1 with the last word being characters b ..b − 1 will also give the highest score when combined with the word b..n − 1. | contrasting |
train_6592 | Just as with the single-beam decoder, the input sentence is processed incrementally. | when a character is processed, the number of previously built state items is increased from B to kB, where B is the beam-size and k is the number of characters that have been processed. | contrasting |
train_6593 | An obvious solution to this problem is not to assign a POS to a partial word until it becomes a full word. | lack of POS information for partial words makes them less competitive compared to full words in the beam, because the scores of full words are further supported by POS and POS n-gram information. | contrasting |
train_6594 | When the speed is over 2.5 thousand characters per second, the baseline system performed better than the joint single-beam and multiple-beam systems, due to the higher segmentation accuracy brought by the fixed beam segmentor. | as the SF = segmentation F-score; JF = joint segmentation and POS-tagging F-score. | contrasting |
train_6595 | Beam-search decoding is effective with a small beam-size. | the disadvantage of this model is the difficulty of incorporating whole word information into POS-tagging. | contrasting |
train_6596 | In this system, the decoding for word segmentation and POS-tagging are still performed separately, and exact inference for both is possible. | the interaction between POS and segmentation is restricted by reranking: POS information is used to improve segmentation only for the B segmentor outputs. | contrasting |
train_6597 | When the speed was above 100 sentences per second, the pure transition-based parser outperformed the combined parser with the same speed. | as the size of the beam increases, the accuracy of the combined parser increased more rapidly. | contrasting |
train_6598 | The accuracies increased when the beam increased from 1 to 4, but fluctuated when the beam increased beyond 4. | to the development tests, the accuracy reached its maximum when the beam size was 4 rather than 16. | contrasting |
train_6599 | In contrast to the development tests, the accuracy reached its maximum when the beam size was 4 rather than 16. | the general trend of increased accuracy as the speed decreases can still be observed, and the amount of increase diminishes as the speed decreases. | contrasting |
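Each row above follows the same pipe-delimited layout: an example id, two sentences (the second lowercased where the original discourse connective was removed), and a relation label. Below is a minimal sketch, assuming the preview rows have been copied into plain strings as shown; the `parse_row` helper is illustrative, not an official loader for this dataset.

```python
# Parse 'id | sentence1 | sentence2 | label |' rows and tally the labels.
from collections import Counter

def parse_row(line: str) -> dict:
    """Split one pipe-delimited preview row into its four named fields."""
    fields = [f.strip() for f in line.strip().rstrip("|").split(" | ")]
    keys = ["id", "sentence1", "sentence2", "label"]
    return dict(zip(keys, fields))

# One verbatim row from the table above, used as sample input.
rows = [
    "train_6513 | No major breakthrough came for speech technology-I am "
    "still typing this. | language technology did change almost beyond "
    "recognition. | contrasting |",
]

records = [parse_row(r) for r in rows]
print(records[0]["id"])                            # train_6513
print(Counter(rec["label"] for rec in records))    # Counter({'contrasting': 1})
```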