Column types: id: string (7-12 chars); sentence1: string (6-1.27k chars); sentence2: string (6-926 chars); label: string (4 classes).
id | sentence1 | sentence2 | label |
---|---|---|---|
train_1900 | The two methods make use of the information from only one language side. | it is not very reliable to use only the information in one language, because the machine translation quality is far from satisfactory, and thus the translated Chinese sentences usually contain some errors and noise. | contrasting |
train_1901 | With the availability of large-scale annotated corpora such as Penn Treebank (Marcus et al., 1993), it is easy to train a high-performance dependency parser using supervised learning methods. | current state-of-the-art statistical dependency parsers (McDonald et al., 2005;McDonald and Pereira, 2006;Hall et al., 2006) tend to have lower accuracies for longer dependencies (McDonald and Nivre, 2007). | contrasting |
train_1902 | Consider the examples we have shown in section 3.2.2: for sentence (1), the dependency graph path feature ball → with → bat should have a lower weight since ball is rarely modified by bat, but is often seen through them (e.g., a higher weight should be associated with hit → with → bat). | for sentence (2), our N-gram features will tell us that the prepositional phrase is much more likely to attach to the noun since the dependency graph path feature ball → with → stripe should have a high weight due to the high strength of selectional preference between ball and stripe. | contrasting |
train_1903 | (2) Veronis (2005) pointed out that there had been a debate about reliability of Google hits due to the inconsistencies of page hits estimates. | this estimate is scale-invariant. | contrasting |
train_1904 | (2010) examine subdomain variation in biomedicine corpora and propose awareness of NLP tools to such variation. | they did not yet evaluate the effect on a practical task, thus our study is somewhat complementary to theirs. | contrasting |
train_1905 | For parsing, these might be words, characters, n-grams (of words or characters), Part-of-Speech (PoS) tags, bilexical dependencies, syntactic rules, etc. | to obtain more abstract types such as PoS tags or dependency relations, one would first need to gather respective labels. | contrasting |
train_1906 | Moreover, the line of selection by topic marker (IN fields) stops early - we believe the reason for this is that the IN fields are too fine-grained, which limits the number of articles that are considered relevant for a given test article. | manually aggregating articles on similar topics did not improve topic-based selection either. | contrasting |
train_1907 | So far, we tested a simple combination of the two by selecting half of the articles by a measure based on words and the other half by a measure based on topic models (by testing different metrics). | this simple combination technique did not improve results yet - topic model alone still performed best. | contrasting |
train_1908 | As Clark and Curran (2004) show, most sentences can be parsed with a very small number of supertags per word. | the technique is inherently approximate: it will return a lower probability parse under the parsing model if a higher probability parse can only be constructed from a supertag sequence returned by a subsequent iteration. | contrasting |
train_1909 | The main reason for this is because a single word in the outside context is in many cases the full stop at the end of the sentence, which is very predictable. | for longer spans the flexibility of CCG to analyze spans in many different ways means that the outside estimate for a nonterminal can be based on many high probability outside derivations which do not bound the true probability very well. | contrasting |
train_1910 | Although this decreases the time required to obtain the highest accuracy, it is still a substantial tradeoff in speed compared with AST. | the AST tradeoff improves significantly: by combining AST with A* we observe a decrease in running time of 15% for the A* NULL parser of the HWDep model over CKY. | contrasting |
train_1911 | Our analysis confirms tremendous speedups, and shows that for weak models, it can even result in improved accuracy. | for better models, the efficiency gains of adaptive supertagging come at the cost of accuracy. | contrasting |
train_1912 | It is clear from our results that the gains from A* do not come as easily for CCG as for CFG, and that agenda-based algorithms like A* must make very large reductions in the number of edges processed to result in realtime savings, due to the added expense of keeping a priority queue. | we have shown that A* can yield real improvements even over the highly optimized technique of adaptive supertagging: in this pruned search space, a 44% reduction in the number of edges pushed results in a 15% speedup in CPU time. | contrasting |
train_1913 | A sound plural example is the word pair Hafiyd+a/Hafiyd+At 'granddaughter/granddaughters'. | the plural of the inflectionally and morphemically feminine singular word madras+a 'school' is the word madAris+φ 'schools', which is feminine and plural inflectionally, but has a masculine singular suffix. | contrasting |
train_1914 | (2009) and Jiang and Liu (2010) can learn from partially projected trees. | the discriminative training in (Ganchev et al., 2009) doesn't allow for richer syntactic context and it doesn't learn from all the relations in the partial dependency parse. | contrasting |
train_1915 | Local event extraction models (Björne et al., 2009) do not have this limitation, because their local decisions are blind to (and hence not limited by) the global event structure. | our approach is agnostic to the actual parsing models used, so we can easily incorporate models that can parse DAGs (Sagae and Tsujii, 2008). | contrasting |
train_1916 | If people only have access to a small amount of data, they may get a biased point of view. | investigating large amounts of data is a time-consuming job. | contrasting |
train_1917 | We regard each noun word as a candidate for SE/OE, and each adjective (or verb) word as a candidate for PR. | this candidate detection has serious problems as follows: 4) There are many actual SEs, OEs, and PRs that consist of multiple words. | contrasting |
train_1918 | As mentioned in the introduction section, we easily construct a linguistic-based keyword set, К ling . | we observe that К ling is not enough to capture all the actual comparison expressions. | contrasting |
train_1919 | Each word in the sequence is replaced with its POS tag in order to reflect various expressions. | as CKs play the most important role, they are represented as a combination of their lexicalization and POS tag, e.g., "같/pa1." | contrasting |
train_1920 | After defining the seven comparative types, we simply match each sentence to a particular type based on the CK types; e.g., a sentence which contains the word "가장 ([ga-jang]: most)" is matched to the "Superlative" type. | a method that uses just the CK information has a serious problem. | contrasting |
train_1921 | Active Learning (AL) is typically initialized with a small seed of examples selected randomly. | when the distribution of classes in the data is skewed, some classes may be missed, resulting in a slow learning progress. | contrasting |
train_1922 | Fortunately, our model has only a limited set of possible visible values, which allows us to use a better approximation by taking the derivative of equation 5. In cases where computing the partition function is still not feasible (for instance, because of a large vocabulary), sampling methods could be used. | we did not find this to be necessary. | contrasting |
train_1923 | The written form of Arabic, Modern Standard Arabic (MSA), differs quite a bit from the spoken dialects of Arabic, which are the true "native" languages of Arabic speakers used in daily life. | due to MSA's prevalence in written form, almost all Arabic datasets have predominantly MSA content. | contrasting |
train_1924 | Possible sources of dialectal text include weblogs, forums, and chat transcripts. | weblogs usually contain relatively little data, and a writer might use dialect in their writing only occasionally; forums usually have content that is of little interest or relevance to actual applications; and chat transcripts are difficult to obtain and extract. | contrasting |
train_1925 | The popular microblogging service Twitter (twitter.com) is one particularly fruitful source of user-created content, and a flurry of recent research has aimed to understand and exploit these data (Ritter et al., 2010;Sharifi et al., 2010;Barbosa and Feng, 2010;Asur and Huberman, 2010;O'Connor et al., 2010a;Thelwall et al., 2011). | the bulk of this work eschews the standard pipeline of tools which might enable a richer linguistic analysis; such tools are typically trained on newstext and have been shown to perform poorly on Twitter (Finin et al., 2010). | contrasting |
train_1926 | These methods usually learn a single model given a training set. | single models cannot deal with words from multiple language origins. | contrasting |
train_1927 | This method requires training sets of transliterated word pairs with language origin. | it is difficult to obtain such tagged data, especially for proper nouns, a rich source of transliterated words. | contrasting |
train_1928 | The process of authorship attribution consists of selecting markers (features that provide an indication of the author), and classifying a text by assigning it to an author using some appropriate machine learning technique. | to the general authorship attribution problem, the specific problem of attributing translated texts to their original author has received little attention. | contrasting |
train_1929 | Ideally, the footprint of the author is (more or less) unaffected by the process of translation, for example if the languages are very similar or the marker is not based solely on lexical or syntactic features. | to purely lexical or syntactic features, the semantic content is expected to be, roughly, the same in translations and originals. | contrasting |
train_1930 | For translated texts, a mix of authors and translators across authors is needed to ensure that the attribution methods do not attribute to the translator instead of the author. | there does not appear to be a large corpus of texts publicly available that satisfy this demand. | contrasting |
train_1931 | For the Federalist Papers, the traditional authorship attribution markers all lie in the 95+ range in accuracy, as expected. | the frame-based markers achieved statistically significant results, and can hence be used for authorship attribution on untranslated documents (but perform worse than the baseline). | contrasting |
train_1932 | Of these, 7 were also in DEviaNT's most-sure set. | dEviaNT was also able to identify TWSSs that deal with noun euphemisms (e.g., "don't you think these buns are a little too big for this meat? | contrasting |
train_1933 | "), whereas Basic Structure could not. | unigram SVM w/o MetaCost is most sure about 130 sentences, 77 of which are true positives. | contrasting |
train_1934 | We address this chicken-and-egg problem with a data-driven NLU approach that segments and identifies multiple dialogue acts in single utterances, even when only short (single dialogue act) utterances are available for training. | to previous approaches that assume the existence of enough training data for learning to segment utterances, e.g. | contrasting |
train_1935 | That is, interlocutors who entrain achieve better communication. | the question of how best to measure this phenomenon has not been well established. | contrasting |
train_1936 | In a study of the Columbia Games Corpus, Gravano and Hirschberg (2009;2011) identify five speech phenomena that are significantly correlated with speech followed by backchannels. | they also note that individual speakers produced different combinations of these cues and varied the way cues were expressed. | contrasting |
train_1937 | The likelihood that a segment of speech will be followed by a backchannel increases quadratically with the number of cues present in the speech. | they note that individual speakers may display different combinations of cues. | contrasting |
train_1938 | Both prosodic features (extracted from the acoustic signal) and lexical features (extracted from the word sequence) have been shown to be useful for these tasks (Shriberg et al., 1998;Kim and Woodland, 2003;Ang et al., 2005). | access to labeled speech training data is generally required in order to use prosodic features. | contrasting |
train_1939 | The values k = 20 and r = 6 were selected on the dev set. | with bootstrapping, SCL (Blitzer et al., 2006) uses the unlabeled target data to learn domainindependent features. | contrasting |
train_1940 | For instance, as shown next, the fraction of query traffic containing "how to" has in fact been going up since 2007. | such anecdotal evidence cannot fully support claims about general behavior in query formulation. | contrasting |
train_1941 | Given the lack of syntactic parsers that are appropriate for search queries, we address this question using a more robust measure: the probability mass of function words. | to content words (open class words), function words (closed class words) have little lexical meaning - they mainly provide grammatical information and are defined by their syntactic behavior. | contrasting |
train_1942 | The motivation of most recent formalisms is to develop a constraint-based framework where you can incrementally add constraints to filter out unwanted scopings. | almost all of these formalisms are based on hard constraints, which have to be satisfied in every reading of the sentence. | contrasting |
train_1943 | On the one hand, although highly reliable, in addition to being expensive and time-consuming, human evaluation suffers from inconsistency problems due to inter- and intra-annotator agreement issues. | while being consistent, fast and cheap, automatic evaluation has the major disadvantage of requiring reference translations. | contrasting |
train_1944 | As seen from the table, Meteor is the automatic metric exhibiting the largest ranking prediction capability, followed by BLEU and NIST, while our proposed AM-FM metric exhibits the lowest ranking prediction capability. | it still performs well above random chance predictions, which, for the given average of 4 items per ranking, is about 25% for best and worst ranking predictions, and about 8.33% for both. | contrasting |
train_1945 | Word is usually adopted as the smallest unit in most tasks of Chinese language processing. | for automatic evaluation of the quality of Chinese translation output when translating from other languages, either a word-level approach or a character-level approach is possible. | contrasting |
train_1946 | White space serves as the word delimiter in Latin alphabet-based languages. | in written Chinese text, there is no word delimiter. | contrasting |
train_1947 | If a word-level metric is used, the word "伞" in the system translation will not match the word "雨伞" in the reference translation. | if the system and reference translation are segmented into characters, the word "伞" in the system translation shares the same character "伞" with the word "雨伞" in the reference. | contrasting |
train_1948 | Unfortunately, common practice in reporting machine translation results is to run the optimizer once per system configuration and to draw conclusions about the experimental manipulation from this single sample. | it could be that a particular sample is on the "low" side of the distribution over optimizer outcomes (i.e., it results in relatively poorer scores on the test set) or on the "high" side. | contrasting |
train_1949 | However, alignment inference in neither of these works is exactly Bayesian since the alignments are updated by running GIZA++ (Xu et al., 2008) or by local maximization (Nguyen et al., 2010). | chung and Gildea (2009) apply a sparse Dirichlet prior on the multinomial parameters to prevent overfitting. | contrasting |
train_1950 | Most of these occurrences are in subject settings over articles that aren't required to modify a noun, such as that, some, this, and all. | in the BLLIP n-gram data, this rule is used over the definite article the 465 times - the second-most common use. | contrasting |
train_1951 | They described a direct evidence method, a transitivity method, and a clustering method for ordering these different kinds of modifiers, with the transitivity technique returning the highest accuracy of 90.67% on a medical text. | when testing across domains, their accuracy dropped to 56%, not much higher than random guessing. | contrasting |
train_1952 | (2010) used a Multiple Sequence Alignment (MSA) approach to order modifiers, achieving the highest accuracy to date across different domains. | to earlier work, both systems order full modifier strings. | contrasting |
train_1953 | The dialogue acts Yes and Agreement can be generated using canned text, such as "That is true" and "I agree with you". | complQ (complex Question), FactQ (Factoid Question), FactA (Factoid Answer) and YNQ (Yes/No Question) all require syntactic manipulation. | contrasting |
train_1954 | To acquire confident samples, we need to first decide how to evaluate the confidence for each event. | as an event contains one trigger and an arbitrary number of roles, a confident event might contain unconfident arguments. | contrasting |
train_1955 | This enables the extraction of interesting and unanticipated relations from text. | these patterns are often too broad, resulting in the extraction of tuples that do not represent relations at all. | contrasting |
train_1956 | Many intra-category relations represent listings commonly identified by conjunctions. | these patterns are identified by multiple intra-category relations and are excluded. | contrasting |
train_1957 | Previous work has shown that selecting the sentences containing the entities targeted by a given relation is accurate enough (Banko et al., 2007;Mintz et al., 2009) to provide reliable training data. | only (Hoffmann et al., 2010) used DS to define extractors that are supposed to detect all the relation instances from a given input text. | contrasting |
train_1958 | (Zelenko et al., 2002;Culotta and Sorensen, 2004;Bunescu and Mooney, 2005;Zhang et al., 2005;Bunescu, 2007;Nguyen et al., 2009;Zhang et al., 2006). | such approaches can be applied to only a few relation types; thus, distant supervised learning (Mintz et al., 2009) was introduced to tackle this problem. | contrasting |
train_1959 | We use the YAGO version of 2008-w40-2 with a manually confirmed accuracy of 95% for 99 relations. | some of them are (a) trivial, e.g. | contrasting |
train_1960 | our multi-classifier has two orders of magnitude fewer categories). | the only experiment that can give a realistic measurement is the one on the hand-labeled test set (testing on data automatically labelled by DS does not provide a realistic outcome). | contrasting |
train_1961 | Typically, systems first identify negation/speculation cues and subsequently try to identify their associated cue scope. | the two tasks are interrelated and both require syntactic understanding. | contrasting |
train_1962 | (Kilicoglu and Bergler, 2008), (Rei and Briscoe, 2010), (Velldal et al., 2010), (Kilicoglu and Bergler, 2010), (Zhou et al., 2010). | manually creating a comprehensive set of such lexico-syntactic scope rules is a laborious and time-consuming process. | contrasting |
train_1963 | In all cases, the approaches used surface (word) patterns without coreference. | we use the structural features of predicate-argument structure and employ coreference. | contrasting |
train_1964 | (2009), while demonstrating high precision, do not measure recall. | our study has emphasized recall. | contrasting |
train_1965 | As a result, the recall test set is biased away from "true" recall, because it places a higher weight on the "long tail" of instances. | this gives a more accurate indication of the system's ability to find novel instances of a relation. | contrasting |
train_1966 | In this case, interpolating the phrase tables no longer show improvements. | using the generated corpus alone achieves -1.80 on average TER. | contrasting |
train_1967 | Filtering improves the quality of the transferred annotations. | when training a parser on the annotations we see that filtering only results in better recall scores for predicate labelling. | contrasting |
train_1968 | On the one hand, words do change their meaning; after all, this is what the present study is all about. | we assume that the meanings in a certain context window are stable enough to infer reliable results, provided that the forms of the same words in different periods can be linked. | contrasting |
train_1969 | Our system performed well in the i2b2/VA Challenge, achieving a micro-averaged F1-measure of 93.01%. | two of the assertion categories (present and absent) accounted for nearly 90% of the instances in the data set, while the other four classes were relatively infrequent. | contrasting |
train_1970 | With more parameters (9.7k vs. 3.7k), which allow for better modeling of the data, L0 normalization helps by zeroing out infrequent ones. | the difference between our complex model and the best HMM (EM with smoothing and random restarts, 55%) is not significant. | contrasting |
train_1971 | As an example of a merge that failed, we tried merging Argument Types and Mutual Exclusivity, with the idea that if a system knows about the selectional preferences of different relationships, it should be able to deduce which relationships or types are mutually exclusive. | the κ score for this combined category was 0.410, significantly below the κ of 0.640 for Mutual Exclusivity on its own. | contrasting |
train_1972 | The construction of an annotated corpus involves a lot of work performed by large groups. | despite the fact that a lot of human post-editing and automatic quality assurance is done, errors cannot be avoided completely [5]. | contrasting |
train_1973 | We will compare our outcomes with the results that can be found with the approach of "variation detection" proposed by Meurers et al. | for space reasons, we will not be able to present this method in detail and advise the reader to consult the referenced work; we think that we should at least briefly explain its idea. | contrasting |
train_1974 | Therefore we suggest that the recall of our method is close to the value of 0.459. | of course we do not know whether the randomly introduced errors in our experiment are similar to those which occur in real treebanks. | contrasting |
train_1975 | We have only analysed the errors in the head-modifier annotation of the dependency relations in the English dependency treebank. | the same methodology can easily be applied to detect irregularities in any kind of annotations, e.g. | contrasting |
train_1976 | Determining the relationship between any two points in the same chain can be done in constant time simply by comparing the pseudo-times, rather than following the in-chain links. | relationship between points in different chains can be found with a search in cross-chain links, which is dependent on the number of edges (i.e. | contrasting |
train_1977 | Much work has been done on Arabic computational morphology (Al-Sughaiyer and Al-Kharashi, 2004;Soudi et al., 2007;Habash, 2010). | the bulk of this work does not address formfunction discrepancy or morpho-syntactic agreement issues. | contrasting |
train_1978 | Smrž (2007b)'s work contrasting illusory (form) features and functional features inspired our distinction of morphological form and function. | unlike him, we do not distinguish between sub-functional (logical and formal) features. | contrasting |
train_1979 | NU-LEX's first trial demonstrated that it was suitable for general purpose parsing. | much work remains to be done. | contrasting |
train_1980 | Many concept-colour associations, such as swan with white and vegetables with green, involve physical entities. | even abstract notions and emotions may have colour associations (honesty-white, danger-red, joy-yellow, anger-red). | contrasting |
train_1981 | Experiments with colour categories have been used both to show that language has an effect on thought (Brown and Lenneberg, 1954;Ratner, 1989) and that it does not (Bornstein, 1985). | that line of work does not explicitly deal with word-colour associations. | contrasting |
train_1982 | It is natural for physical entities of a certain colour to be associated with that colour. | abstract concepts such as danger and excitability are also associated with colours - red and orange, respectively. | contrasting |
train_1983 | As pointed out in Section 2, there is prior work on emotions evoked by colours. | here we investigate the colours associated with emotion words. | contrasting |
train_1984 | Recent studies have shown that inversion transduction grammars are reasonable constraints for word alignment, and that the constrained space could be efficiently searched using synchronous parsing algorithms. | spurious ambiguity may occur in synchronous parsing and cause problems in both search efficiency and accuracy. | contrasting |
train_1985 | In itself, this should not cause a dramatic difference in performance, as the two systems perform similarly (Hoang and Koehn, 2008). | there are a number of other differences between the two systems. | contrasting |
train_1986 | Word dictionary probabilities can be directly estimated by IBM1 models. | word dictionaries are not symmetric. | contrasting |
train_1987 | This system obtains the worst results of all. | ( ) is the most similar model to the best system in (Alabau et al., 2010). | contrasting |
train_1988 | Log-linear models show a bit of improvement with respect to IBM models. | linearly interpolated models perform the best. | contrasting |
train_1989 | When applying it on a MT system hypothesis and a reference translation, it computes how much effort would be needed to obtain the reference from the hypothesis, possibly independently of the appropriateness of the alignments produced. | if we consider instead a pair of sentential paraphrases, it can be used to reveal what subsentential units can be aligned. | contrasting |
train_1990 | In theory, different strategies should produce equivalent translation results. | because decoding always involves pruning, we show that different strategies do have a significant effect in translation quality. | contrasting |
train_1991 | By mining a dictionary and naively incorporating it into a translation system, one can only do slightly better than baseline. | with a more clever integration, we can close about half of the gap between baseline (unadapted) performance and an oracle experiment. | contrasting |
train_1992 | The most obvious way of eliminating problematic unary rules would be converting grammars into Chomsky normal form. | this may result in bloated grammars. | contrasting |
train_1993 | We rely on GHKM rules for reordering when we use the monotonic glue rules. | we can also allow glue rules to reorder constituents. | contrasting |
train_1994 | This makes available a host of preexisting adaptation algorithms for improving over supervised results. | we argue that it may be 8 The feature normalization in Step 1 is important to ensure that the weight magnitudes are comparable. | contrasting |
train_1995 | These include methods where source sentences are divided into syntactic chunks or clauses and the translations are merged later (Koehn and Knight, 2003;, methods where syntactic constraints or penalties for reordering are added to a decoder (Yamamoto et al., 2008;Cherry, 2008;Marton and Resnik, 2008;Xiong et al., 2010), and methods where source sentences are reordered into a similar word order as the target language in advance (Katz-Brown and Collins, 2008;Isozaki et al., 2010). | these methods did not use document-level context to constrain reorderings. | contrasting |
train_1996 | In this case, a reordering constraint to translate " " as a single block can reduce incorrect reorderings and improve the translation quality. | it is difficult to predict what should be translated as a single block. | contrasting |
train_1997 | propose an online learning algorithm with soft margins to handle noise in training data. | the work does not consider the confidence associated with estimated feature weights. | contrasting |
train_1998 | However, the work does not consider the confidence associated with estimated feature weights. | the CW online algorithm in the latter does not consider the case where the training data is noisy. | contrasting |
train_1999 | Traditionally, the cosine similarity measure is used to evaluate the likeness of two term-frequency representations. | u and v lie in different vector spaces. | contrasting |
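
Below is a minimal sketch of how a dump like the table above could be loaded and inspected with the Hugging Face `datasets` library. The Hub identifier `your-org/sentence-pair-relations` is a hypothetical placeholder (the page does not name the dataset); the column names (`id`, `sentence1`, `sentence2`, `label`) and the four-class string label follow the header above.

```python
# Minimal sketch, assuming the dataset is published on the Hugging Face Hub.
# "your-org/sentence-pair-relations" is a hypothetical placeholder identifier.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("your-org/sentence-pair-relations", split="train")

# Columns per the header above: id, sentence1, sentence2, label (4 classes).
print(ds.column_names)

# Distribution over the four label classes.
print(Counter(ds["label"]))

# Keep only the 'contrasting' pairs, the label shown in every row above.
contrasting = ds.filter(lambda ex: ex["label"] == "contrasting")
example = contrasting[0]
print(example["id"], "|", example["sentence1"], "|", example["sentence2"])
```

Filtering with `Dataset.filter` returns another `datasets.Dataset`, so the same indexing and column access work on the subset.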