id | sentence1 | sentence2 | label
---|---|---|---|
train_1300 | Initially, a rough estimation of the significance of a word is given by its frequency in the document set. | this simple frequency-based measure is obviously not accurate. | contrasting |
train_1301 | The rule table is reduced to the same size (22% of original table) using the two metrics, separately. | as shown in Table 2, the frequency method decreases the BLEU scores, while the C-value achieves improvements. | contrasting |
train_1302 | For the current synchronous grammars based SMT, to some extent, the generalization ability of the grammar rules (the usability of the rules for the new sentences) can be considered as a kind of the generative power of the grammar and the disambiguation ability to the rule candidates can be considered as an embodiment of expressive power. | the generalization ability and the disambiguation ability often contradict each other in practice such that various grammar formalisms in SMT are actually different trade-offs between them. | contrasting |
train_1303 | Through this formalization, we can see that FSCFG rules and LSTSSG rules are both included. | we should point out that the rules with mixture of X non-terminals and syntactic non-terminals are not included in our current implementation despite that they are legal under the proposed formalism. | contrasting |
train_1304 | Then, the duplicates will be discarded during scoring. | due to the high-degree generalization, the FSCFG abstract rules are more likely to be matched by the test sentences. | contrasting |
train_1305 | It takes into account inflexional variants and synonyms. | it is considerably more sophisticated and is highly dependent on the underlying large scale linguistic resources. | contrasting |
train_1306 | The correlation coefficient for the NIST score is still slightly negative, indicating that trying to take a word sequence's information content into account is hopeless at the sentence level. | the correlation coefficient for the BLEU score almost doubles from 0.078 to 0.133, which, however, is still unsatisfactory. | contrasting |
train_1307 | Our impression has always been that this is obviously true for standard commercial systems. | serious scientific publications (Somers, 2005;Koehn, 2005) come to the conclusion that back-translation is completely unsuitable for MT evaluation. | contrasting |
train_1308 | These obvious deficits have probably motivated reservations against such systems, and we agree that for such reasons they may be unsuitable for use at MT competitions. | there are numerous other applications where such considerations are of less importance (although there might be a solution to this: it may not always be necessary that forward and backward translations are generated by the same MT system). | contrasting |
train_1309 | Tree-based statistical machine translation models have made significant progress in recent years, especially when replacing 1-best trees with packed forests. | as the parsing accuracy usually goes down dramatically with the increase of sentence length, translating long sentences often takes long time and only produces degenerate translations. | contrasting |
train_1310 | So splitting long sentences into sub-sentences is needed. A simple way is to split long sentences by punctuation. | without considering the original whole tree structures, this approach will result in ill-formed sub-trees which do not respect the original structures. | contrasting |
train_1311 | Obviously, this is an unrealistic assumption in real translation. | we argue that tectogrammatical deep-syntactic dependency trees (as introduced in the Functional Generative Description framework, (Sgall, 1967)) are relatively close to this requirement, which makes the HMTM approach practically testable. | contrasting |
train_1312 | After determining the radius, we replace each word with its POS tag; in order to reflect various expressions of each sentence, POS tags are more proper than lexical information of actual words. | since CKs play the most important role to discriminate comparative sentences, they are represented as a combination of their actual keyword and POS tag. | contrasting |
train_1313 | It is a collection of blog entries in English, Spanish and Italian. | for this research we used the first two languages. | contrasting |
train_1314 | The basic idea of the RelF score is to give those words a high score, which occur frequently in the context of a weasel tag. | due to the sparseness of tagged instances, words that occur with a very high frequency in the corpus automatically receive a lower score than low-frequent words. | contrasting |
train_1315 | A large subset of these searches is subjective in nature, where the user is looking for different images for a single concept (Linsley, 2009). | it is a common user experience that the images returned are not relevant to the intended concept. | contrasting |
train_1316 | The user can query one or more translations to get the relevant images. | this method puts the onus of choosing a translation on the user. | contrasting |
train_1317 | Most large software projects have bug tracking systems, e.g., Bugzilla, to help global users to describe and report the bugs they encounter when using the software. | since the same bug may be seen by many users, many duplicate bug reports are sent to bug tracking systems. | contrasting |
train_1318 | Hence, bug reports interest us because (1) they are abundant and freely available, (2) they naturally form a semiparallel corpus, and (3) they contain many technical terms. | bug reports have characteristics that raise many new challenges. | contrasting |
train_1319 | At these sites, travel blogs are manually registered by bloggers themselves, and the blogs are classified by their destinations. | there are many more travel blogs in the blogosphere. | contrasting |
train_1320 | Assigning a score to the players deals with the coreferential pairs. | motivated by (Passonneau, 2004) and others, the evaluation handles the coreferential pairs in a way demonstrated in Figure 2. | contrasting |
train_1321 | TV_ConSem (Ji and Lu, 2007) calculates the percentage of context words in a domain lexicon using both frequency information and semantic information. | this technique requires a domain lexicon whose size and quality have great impact on the performance of the algorithm. | contrasting |
train_1322 | This arrangement allows more annotators to participate, and reduces logistical problems. | having no full-time annotators limits the overall weekly annotation rate. | contrasting |
train_1323 | Annotator Speeds Our POS and syntax annotation rate is 540 tokens/hour (with some reaching rates as high as 715 tokens/hour). | due to the current part-time arrangement, annotators worked an average of only 6 hours/week, which meant that data was annotated at an average rate of 15K tokens/week. | contrasting |
train_1324 | Since many-to-many links are commonly observed in natural language, symmetrization is able to make up for this modeling limitation. | combining two directional alignments practically can lead to improved performance. | contrasting |
train_1325 | Previous studies have attempted to resolve this problem by removing unnecessary function words, or by reordering source sentences. | the removal of function words can cause a serious loss in information. | contrasting |
train_1326 | The basic scheme to prune a translation table is to delete all translation pairs that have significance values smaller than a given threshold. | in practice, this pruning scheme does not work well with phrase tables, as many phrase pairs receive the same significance values. | contrasting |
train_1327 | This is basically achieved by using dynamic programming and especially the Viterbi algorithm associated with beam searching. | decoding algorithms were designed for translation, not for paraphrase generation. | contrasting |
train_1328 | ℓ1 penalty methods such as Lasso, being convex, can be solved by optimization and give guaranteed optimal solutions. | ℓ0 penalty methods, like stepwise feature selection, give approximate solutions but produce models that are much sparser than the models given by ℓ1 methods, which is quite crucial in WSD (Florian and Yarowsky, 2002). | contrasting |
train_1329 | There is a high concentration of chain starts and ends near the boundary which leads to a misclassification if we only combine chain starts and ends for segmentation. | there are also a large number of chain continuations across the utterance boundary, which implies that a story boundary is less likely. | contrasting |
train_1330 | Acquisition of prosody, in addition to vocabulary and grammar, is essential for language learners. | intonation has been less-emphasized both in classroom and computer-assisted language instruction (Chun, 1998). | contrasting |
train_1331 | linear-chain), an exact inference can be obtained efficiently if the number of output labels is not large. | for a large number of output labels, the inference is often prohibitively expensive. | contrasting |
train_1332 | In most results, Method 1 was the fastest, because it was terminated after fewer iterations. | method 1 sometimes failed to converge, for example, in Encyclopedia. | contrasting |
train_1333 | Since S_{w+z_t e_t}(x, y) = S_w(x, y) if f_t(x, y) = 0, this procedure reduces the O(#nz) operations to O(#nz/n) = O(l). | it needs extra space to store all S_w(x, y) and T_w(x). | contrasting |
train_1334 | In this context, Tree Edit Distance (TED) has been widely used for many years. | one of the main constraints of this method is to tune the cost of edit operations, which makes it difficult or sometimes very challenging in dealing with complex problems. | contrasting |
train_1335 | Most of the existing multi-document summarization methods decompose the documents into sentences and work directly in the sentence space using a term-sentence matrix. | the knowledge on the document side, i.e. | contrasting |
train_1336 | Indeed, the first GIVE Challenge acquired data from over 1100 experimental subjects online. | it still remains to be shown that the results that can be obtained in this way are in fact comparable to more established task-based evaluation efforts, which are based on a carefully selected subject pool and carried out in a controlled laboratory environment. | contrasting |
train_1337 | Among this group of subjects, 93% self-rated their English proficiency as "expert" or better; 81% were native speakers. | to the online experiment, 31% of participants were male and 65% were female (4% did not specify their gender). | contrasting |
train_1338 | This could be taken as a possible deficit of the Internet-based evaluation. | we believe that the opposite is true. | contrasting |
train_1339 | Our approach is similar to Kibble and Power (2004) in that we don't use the concept of centering transitions. | our method is more efficient in that Kibble and Power (2004) use centering transitions to rank the set of generated solutions (some of which are incoherent), whereas we encode centering constraints in elementary trees to reduce the search space of possible solutions before we start computing them. | contrasting |
train_1340 | (2006) previously explored the use of statistical learning-based classifiers trained with lexical features, such as character and word n-grams, for mobile spam filtering. | content-based spam filtering directed at SMS messages is very challenging, due to the fact that such messages consist of only a few words. | contrasting |
train_1341 | We may speculate that the difference is primarily due to our limited training set size (1,680 questions versus 21,500 questions for Li & Roth). | we are not aware of any work attempting to extract AP on word level using machine learning in order to provide dynamic classes to a question classification module. | contrasting |
train_1342 | (2006a) (henceforth, GGJ06) to develop an approximation using expected counts. | we show here that their approximation is flawed in two respects: 1) It omits an important factor in the expectation, and 2) Even after correction, the approximation is poor for hierarchical models, which are commonly used for NLP applications. | contrasting |
train_1343 | Raj and Whittaker (2003) showed a general N-gram language model structure and introduced a lossless algorithm that compressed a sorted integer vector by recursively shifting a certain number of bits and by emitting index-value inverted vectors. | we need a more compact representation. | contrasting |
train_1344 | Thus, if w_1 w_2 w_3 w_4 w_5 was not observed in the training corpus, p(w_5 \| w_1 w_2 w_3 w_4) is estimated from the lower-order models. In most smoothing methods, the lower-order models, for all N > 1, are recursively estimated in the same way as the highest-order model. | the smoothing method of Kneser and Ney (1995) and its variants are the most effective methods known (Chen and Goodman, 1998), and they use a different way of computing N-gram counts for all the lower-order models used for smoothing. | contrasting |
train_1345 | Figure 2 shows the F-measure over time for test set 98b under different configurations: as can be seen, there is a small gain in performance by using seeds within the epoch of the test set, but the decay is still observable as we increase the time gap between the unlabeled data and the test set. | if we use unlabeled data within the epoch of the test set, we hardly see a degradation trend as the time gap between the epochs of seeds and test set is increased. | contrasting |
train_1346 | Union is performed between the baseline system's list and the lists produced by the other systems to create lists of pairs that include the information in the baseline. | each of the following systems' outputs are merged separately with the baseline. | contrasting |
train_1347 | DTs have a natural ordering of the children of the nodes induced by the position of the corresponding words in the sentence. | pTs introduce new intermediate nodes to better express the syntactical structures of a sentence in terms of phrases. | contrasting |
train_1348 | Most of the prior work focused on the sentence level by clustering sentences into topics and ordering sentences on a time line. | many sentences in news articles include multiple events with different time arguments. | contrasting |
train_1349 | The 2005 ACE evaluation had 8 types of events, with 33 subtypes; for the purpose of this paper, we will treat these simply as 33 distinct event types. | to ACE event extraction, we exclude generic, negative, and hypothetical events. | contrasting |
train_1350 | The time argument "Saturday" was mistakenly propagated from the "Conflict-Attack" event "battles" to "shot" because they share the same Time-cue role "instrument" ("small arms/gun"). | the correct time argument for the "shot" event should be "Monday" as indicated in the "gunfire/explosions" event in the previous context sentence. | contrasting |
train_1351 | Our new parsing algorithms could be implemented by defining the "sense" of each word as the index of its head. | when parsing with senses, the complexity of the Eisner (2000) parser increases by factors of O(S^3) time and O(S^2) space (ibid., Section 4.2). | contrasting |
train_1352 | The most apparent similarity between our model and the transition-based category is that they all need a classifier to perform classification conditioned on a certain configuration. | they differ from each other in the classification results. | contrasting |
train_1353 | (Figure (b) lists example subtree strings such as "with:1:0-NULL:2:1-fork:3:1" and "ate:1:0-the:2:3-meat:3:1".) To provide bilingual subtree constraints, we need to find the characteristics of subtree mapping for the two given languages. | subtree mapping is not easy. | contrasting |
train_1354 | Both Chinese and English are classified as SVO languages because verbs precede objects in simple sentences. | chinese has many characteristics of such SOV languages as Japanese. | contrasting |
train_1355 | The problem of appropriately selecting one of them to work with would ideally be solved by statistical methods (Higgins and Sadock, 2003) or knowledge-based inferences. | no such approach has been worked out in sufficient detail to support the disambiguation of treebank sentences. | contrasting |
train_1356 | This is based on the observation that the readings of a semantically ambiguous sentence are partially ordered with respect to logical entailment, and the weakest readings -the minimal (least informative) readings with respect to this order -only express "safe" information that is common to all other readings as well. | when a sentence has millions of readings, finding the weakest reading is a hard problem. | contrasting |
train_1357 | In our example, (b) and (c) are indeed normal forms with respect to a rewrite system that consists only of the rule (1). | this is not exactly what we need here. | contrasting |
train_1358 | (2008), in that we perform redundancy elimination by intersecting tree grammars. | the construction we present here is much more general: The algorithmic foundation for redundancy elimination is now exactly the same as that for weakest readings, we only have to use an equivalencepreserving rewrite system instead of a weakening one. | contrasting |
train_1359 | We could then perform such inferences on (cleaner) semantic representations, rather than strings (as they do). | it may be possible to reduce the set of readings even further. | contrasting |
train_1360 | They report an accuracy of 81.6% and 84.5% for different classification schemes. | apart from a plural feature, all heuristics are tailored to specific properties of the Wikipedia resource. | contrasting |
train_1361 | Default reasoning (Reiter, 1980) is confronted with severe efficiency problems and therefore has not extended beyond experimental systems. | the emerging paradigm of Answer Set Programming (ASP, Lifschitz (2008)) seems to be able to model exceptions efficiently. | contrasting |
train_1362 | We also find annotated examples of generic NPs that are not discussed in the formal semantics literature (8.a), but that are well captured by the ACE-2 guidelines. | there are also cases that are questionable (8.b). | contrasting |
train_1363 | However, such methods require expensive resources such as domain experts to hand-code the rules, or a corpus of expertlayperson interactions to train on. | we present a corpus-driven framework using which a user-adaptive REG policy can be learned using RL from a small corpus of non-adaptive humanmachine interaction. | contrasting |
train_1364 | Such systems learned from the user at the start and later adapted to the domain knowledge of the users. | they either require expensive expert knowledge resources to hand-code the inference rules (Cawsey, 1993) or large corpus of expert-layperson interaction from which adaptive strategies can be learned and modelled, using methods such as Bayesian networks (Akiba and Tanaka, 1994). | contrasting |
train_1365 | The user may have the capacity to learn jargon. | only the user's initial knowledge is recorded. | contrasting |
train_1366 | Several user simulation models have been proposed for use in reinforcement learning of dialogue policies (Georgila et al., 2005;Schatzmann et al., 2006;Schatzmann et al., 2007;Ai and Litman, 2007). | they are suited only for learning dialogue management policies, and not natural language generation policies. | contrasting |
train_1367 | Only the entity that it is referring to (R i ) and its type (T i ) are used. | the above model simulates the process of interpreting and resolving the expression and identifying the domain entity of interest in the instruction. | contrasting |
train_1368 | In most studies done previously, the user's domain knowledge is considered to be static. | in real conversation, we found that the users nearly always learned jargon expressions from the system's utterances and clarifications. | contrasting |
train_1369 | This is because learned policies use a few jargon expressions (giving rise to clarification requests) to learn about the user. | the Jargon policy produces more user learning gain because of the use of more jargon expressions. | contrasting |
train_1370 | Automated summarization systems which enable users to quickly digest the important information conveyed by either a single or a cluster of documents are indispensable for managing the rapidly growing amount of textual information and multimedia content (Mani and Maybury, 1999). | due to the maturity of text summarization, the research paradigm has been extended to speech summarization over the years (Furui et al., 2004; McKeown et al., 2005). | contrasting |
train_1371 | Then, the cosine similarity between any given two sentences is computed, and the loss function is thus defined accordingly. Once the sentence generative model has been properly estimated, the summary sentences can be selected iteratively by (8) according to a predefined target summarization ratio. | as can be seen from (8), a new summary sentence is selected without considering the redundant information that is also contained in the already selected summary sentences. | contrasting |
train_1372 | One possible explanation is that the structural evidence of the spoken documents in the test set is not strong enough for CRF to show its advantage of modeling the local structural information among sentences. | lexRank gives a very promising performance even though it only utilizes lexical information in an unsupervised manner. | contrasting |
train_1373 | There has been a tremendous upsurge of interest in documentary linguistics, the field concerned with the "creation, annotation, preservation, and dissemination of transparent records of a language" (Woodbury, 2010). | documentary linguistics alone is not equal to the task. | contrasting |
train_1374 | The product is a Universal Corpus, in two senses of universal: in the sense of including (ultimately) all the world's languages, and in the sense of enabling software and processing methods that are language-universal. | we do not aim for a collection that is universal in the sense of encompassing all language documentation efforts. | contrasting |
train_1375 | The fact that these layers will exist in diminishing quantity is also unavoidable. | there is an important consequence: the primary texts will be permanently subject to new translation initiatives, which themselves will be subject to new alignment and glossing initiatives, in which each step is an instance of semisupervised learning (Abney, 2007). | contrasting |
train_1376 | To be successful, the Human Language Project would require substantial funds, possibly drawing on a constellation of public and private agencies in many countries. | in the spirit of starting small, and starting now, agencies could require that sponsored projects which collect texts and build lexicons contribute them to the Language Commons. | contrasting |
train_1377 | Today, language documentation is a high priority in mainstream linguistics. | the field of computational linguistics is yet to participate substantially. | contrasting |
train_1378 | Methods based on word strings (e.g., BLEU (Papineni et al., 2002), NIST (NIST, 2002), METEOR (Banerjee and Lavie, 2005), ROUGE-L (Lin and Och, 2004), and IMPACT (Echizen-ya and Araki, 2007)) calculate matching scores using only common words between MT outputs and references from bilingual humans. | these methods cannot determine the correct word correspondences sufficiently because they fail to focus solely on phrase correspondences. | contrasting |
train_1379 | WOE parse has a dramatic performance improvement over TextRunner. | the improvement comes at the cost of speed relative to TextRunner (Figure 2: WOE pos performs better than TextRunner, especially on precision). | contrasting |
train_1380 | Their results showed limited advantage of parser features over shallow features for IE. | our results imply that abstracted dependency path features are highly informative for open IE. | contrasting |
train_1381 | Several parallel efforts have been made recently to improve the efficiency of IE tasks by optimizing low-level feature extraction (Ramakrishnan et al., 2006;Chandel et al., 2006) or by reordering operations at a macroscopic level (Ipeirotis et al., 2006;Shen et al., 2007;Jain et al., 2009). | to the best of our knowledge, SystemT is the only IE system in which the optimizer generates a full end-to-end plan, beginning with low-level extraction primitives and ending with the final output tuples. | contrasting |
train_1382 | We then construct a network where characters are vertices and edges signify an amount of bilateral conversation between those characters, with edge weights corresponding to the frequency and length of their exchanges. | to previous approaches to social network construction, ours relies on a novel combination of pattern-based detection, statistical methods, and adaptation of standard natural language tools for the literary genre. | contrasting |
train_1383 | Some exceptions include recent work on learning common event sequences in news stories (Chambers and Jurafsky, 2008), an approach based on statistical methods, and the development of an event calculus for characterizing stories written by children (Halpin et al., 2004), a knowledge-based strategy. | literary theorists, linguists and others have long developed symbolic but non-computational models for novels. | contrasting |
train_1384 | They propose a logical inference model which makes use of discourse plans and coherence relations to infer categorical answers. | to adequately interpret indirect answers, the uncertainty inherent in some answers needs to be captured (de Marneffe et al., 2009). | contrasting |
train_1385 | In total, we ended up with 224 question-answer pairs involving gradable adjectives. | our collection contains different types of answers, which naturally fall into two categories: (I) in 205 dialogues, both the question and the answer contain a gradable modifier; (II) in 19 dialogues, the reply contains a numerical measure (as in (3) above and (4)). | contrasting |
train_1386 | The dialogues in (1) and (9) are examples of total agreement. | (10) has response entropy of 1.1, and item (11) has the highest entropy of 2.2. | contrasting |
train_1387 | This is a common feature of such corpora of informal, userprovided reviews (Chevalier and Mayzlin, 2006;Hu et al., 2006;Pang and Lee, 2008). | since we do not want to incorporate the linguistically uninteresting fact that people tend to write a lot of ten-star reviews, we assume uniform priors for the rating categories. | contrasting |
train_1388 | Table 6 summarizes the results per response category for the WordNet-based approach (which can thus be compared to the category I results in table 5). | in contrast to the WordNet-based approach, we require no hand-built resources: the synonym and antonym structures, as well as the strength values, are learned from Web data alone. | contrasting |
train_1389 | If the user uses the turn to query or inform an unavailable filler, the dialogue grows longer. | this is quite rare as shown by the small difference in performance between the two models. | contrasting |
train_1390 | In our domain, performance is measured by dialogue length and solution quality. | since solution quality never affects the dialogue cost for a trained system, dialogue length is the only component influencing the mean policy cost. | contrasting |
train_1391 | First, we tried extracting grammatical roles from the parse trees which we obtained from the Berkeley parser, as this information is present in the edge labels that can be recovered from the parse. | we found that we achieved better accuracy by using RFTagger (Schmid and Laws, 2008), which tags nouns with their morphological case. | contrasting |
train_1392 | Morphological case is distinct from grammatical role, as noun phrases can function as adjuncts in possessive constructions and prepositional phrases. | we can approximate the grammatical role of an entity using the morphological case. | contrasting |
train_1393 | When a word conforms to the language processor's expectations, surprisal is low, and the cognitive load associated with processing that input will also be low. | unexpected words will have a high surprisal and a high cognitive cost. | contrasting |
train_1394 | In contrast, unexpected words will have a high surprisal and a high cognitive cost. | high-level syntactic and semantic factors are only one source of cognitive costs. | contrasting |
train_1395 | 2007;Steyvers and Griffiths 2007). | to more standard semantic space models where word senses are conflated into a single representation, topics have an intuitive correspondence to coarse-grained sense distinctions. | contrasting |
train_1396 | An open issue is whether a single, integrated measure (as evaluated in Table 4) fits the eye-movement data significantly better than separate measures for trigram, syntactic, and semantic surprisal (as evaluated in Table 3). | we are not able to investigate this hypothesis: our approach to testing the significance of factors requires nested models; the log-likelihood test (see Section 4) is only able to establish whether adding a factor to a model improves its fit; it cannot compare models with disjunct sets of factors (such as a model containing the integrated surprisal measure and one containing the three separate ones). | contrasting |
train_1397 | The rebanked parser performed 0.8% worse than the CCGbank parser on the intersection dependencies, suggesting that the fine-grained distinctions we introduced did cause some sparse data problems. | we did not change any of the parser's maximum entropy features or hyperparameters, which are tuned for CCGbank. | contrasting |
train_1398 | These include EuroWordNet (Vossen, 1998), MultiWordNet (Pianta et al., 2002), the Multilingual Central Repository (Atserias et al., 2004), and many others. | manual construction methods inherently suffer from a number of drawbacks. | contrasting |
train_1399 | The automatic evaluation quantifies how much of the gold-standard resources is covered by BabelNet. | it does not say anything about the precision of the additional lexicalizations provided by BabelNet. | contrasting |
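The rows above follow a fixed layout (id | sentence1 | sentence2 | label, with a trailing pipe). As a minimal sketch of how such rows could be parsed, the snippet below splits one pipe-delimited line into its four fields; the `parse_row` helper name is our own and is not part of any official dataset loader, and the sketch assumes the sentence cells themselves contain no literal `|` characters (true of the rows shown here).

```python
COLUMNS = ["id", "sentence1", "sentence2", "label"]

def parse_row(line):
    """Split one pipe-delimited table row into a dict of its four fields."""
    # Rows end with a trailing '|'; strip outer whitespace and pipes first,
    # then split on the remaining three field separators.
    cells = [c.strip() for c in line.strip().strip("|").split("|")]
    if len(cells) != len(COLUMNS):
        raise ValueError(f"expected {len(COLUMNS)} cells, got {len(cells)}")
    return dict(zip(COLUMNS, cells))

# Sample row copied from the table above.
row = parse_row(
    "train_1305 | It takes into account inflexional variants and synonyms. "
    "| it is considerably more sophisticated and is highly dependent on the "
    "underlying large scale linguistic resources. | contrasting |"
)
print(row["id"], row["label"])  # → train_1305 contrasting
```

Splitting on the raw `|` character is deliberate here: since every row has exactly four cells and no escaped pipes, a full markdown parser would be overkill.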