id (stringlengths 7–12) | sentence1 (stringlengths 6–1.27k) | sentence2 (stringlengths 6–926) | label (stringclasses: 4 values) |
---|---|---|---|
train_3000 | As a result of our tests on English-Japanese (enja) and Japanese-English (ja-en) machine translation, we find that a T2S system not considering these elements performs only slightly better than a standard PBMT system. | after accounting for all these elements we see large increases of accuracy, with the final system greatly exceeding not only standard PBMT, but also state-of-the-art methods based on syntactic pre-or post-ordering. | contrasting |
train_3001 | The most standard search algorithm for T2S translation is bottom-up beam search using cube pruning (CP, Chiang (2007)). | there are a number of other search algorithms that have been proposed for tree-based translation in general (Huang and Chiang, 2007) or T2S systems in particular (Huang and Mi, 2010;Feng et al., 2012). | contrasting |
train_3002 | They created a pre-ordering rule set for dependency parsers from English to several SOV languages. | our rule set is for Chinese-English PBSMT. | contrasting |
train_3003 | Both of our dependency systems outperformed WR07 slightly but were not significant at p = 0.05. | both of them substantially decreased the total times about 60% (or 1,600,000) for pre-ordering rule applications on the training set, compared with WR07. | contrasting |
train_3004 | In our opinion, the reason for the great decrease was that the dependency parse trees were more concise than the constituent parse trees in describing sentences and they could also describe the reordering at the sentence level in a finer way. | the constituent parse trees were more redundant and they needed more nodes to conduct long-distance reordering. | contrasting |
train_3005 | Language-Independent In this work, we integrate only language-independent features, and therefore do not consider morphological or linguistic features. | we apply the model to correct errors in Egyptian Arabic dialect text, following a conventional orthography standard, CODA (Habash et al., 2012). | contrasting |
train_3006 | It is worth noting that adding the MLE component allows Eskander's CEC to recover various types of errors that were not modeled previously. | the contribution of MLE is limited to words that are in the training data. | contrasting |
train_3007 | Given P (q 0 |q 1 ) defined, to correct the word q 1 we could iterate through all ever-observed words, and choose the one, that maximizes the posterior probability. | the practical considerations demand that we do not rank the whole list of words, but instead choose between a limited number of hypotheses h 1 , ..., h K : 2. | contrasting |
train_3008 | Patents often have a high concentration of scientific and technical terms that are rare in everyday language. | some scientific and technical terms usually appear with high frequency only in one specific patent. | contrasting |
train_3009 | terminologies also tend to be very sparse, either because they are related to the latest invention that has not made into everyday language, or because our limited patent dataset cannot possibly cover all possible technical topics. | these technical terms are also topical and they tend to have high relative frequency within a patent document even though they are sparse in the entire patent data set. | contrasting |
train_3010 | 2010, where again all "unambiguous" models present superior performance compared to their "ambiguous" versions. | in this last work one of the dimensions of the tensors was kept empty (filled in with zeros). | contrasting |
train_3011 | Treating microblogs as standard texts and directly classifying them cannot achieve the goal of effective classification because of sparseness problem. | news on the Internet is of information abundance and many microblogs are news-related. | contrasting |
train_3012 | Furthermore, recall of individual words irrespective of their order approached and even exceeded that of a trained expert stenographer with seven workers contributing, suggesting that the information is present to meet the performance of a stenographer (Lasecki et al., 2012). | aligning these individual words in the correct sequential order remains a challenging problem. | contrasting |
train_3013 | However, they applied it only to the words with high frequencies in the documents (Fukumoto et al., 2013). | we applied it to the topic candidates obtained by LDA. | contrasting |
train_3014 | The top site was "HybH-Sum" by (Celikylmaz and Hakkani-Tur, 2010). | the method is a semi-supervised technique that needs a tagged training data. | contrasting |
train_3015 | First, consider the plural nominative word form kissat (cats) where the plural number is denoted by the 1-suffix -t. Then, by employing the features (2), the suffix -t is associated solely with the compound label NOMINATIVE+PLURAL. | by incorporating the expanded feature set (5), -t will also be associated to the sub-label PLURAL. | contrasting |
train_3016 | With different number of induced and gold standard clusters the 1-1 measure suffers because some induced clusters cannot be mapped to gold clusters or vice versa. | almost half the gold standard clusters in MTE contain just a few words and we do not expect our model to be able to learn them anyway, so the 1-1 measure is still useful for telling us how well the model learns the bigger and more distinguishable classes. | contrasting |
train_3017 | The success of auto-suggest depends upon showing users options they can recognize. | we know of no prior work on how to display grammatical relations so that they can be easily recognized. | contrasting |
train_3018 | First, because Chinglish and Chinese are written with the same characters, they render the same inventory of 416 distinct syllables. | the distribution of Chinglish syllables differs a great deal from Chinese (Table 2). | contrasting |
train_3019 | The context vocabulary C is thus identical to the word vocabulary W . | this restriction is not required by the model; contexts need not correspond to words, and the number of context-types can be substantially larger than the number of word-types. | contrasting |
train_3020 | This pattern takes the copular be as it appears in the source text. | most patterns use the lexical form of the main verb along with the appropriate form of the auxiliary do (do, does, did), for the subject-auxiliary inversion required in forming interrogatives. | contrasting |
train_3021 | Not having coreference resolution leads to vague questions, some of which can be filtered as discussed previously. | further work on filters is needed to avoid questions such as: Source sentence: Air cools when it comes into contact with a cold surface or when it rises. | contrasting |
train_3022 | Consequently, they get more replies to their messages (ReplyRate). | considering messages where the other person of the pair is addressed in the To list (ReplyRate-WithinPair), subordinates get more replies. | contrasting |
train_3023 | The AA problem with limited training data was attempted in (Stamatatos, 2007;Luyckx and Daelemans, 2008). | neither of them used a semisupervised approach to augment the training set with additional documents. | contrasting |
train_3024 | Second, CNG+SVM uses two learning methods on a single character n-gram view. | besides the character n-gram view, we also make use of the lexical and syntactic views. | contrasting |
train_3025 | provide support for a particular argument, acknowledge previous work that uses the same methodology, or exemplify work that would benefit from the outcomes of the author's work. | our current paper has more modest aims: we present initial results using existing IR-based approaches and we introduce an evaluation method and metric. | contrasting |
train_3026 | Crowdsourcing services such as Amazon's Mechanical Turk has since been successfully used for various annotation tasks in NLP (Jha et al., 2010;Callison-Burch and Dredze, 2010). | most applications of crowdsourcing in NLP have been concerned with classification problems, such as document classification and constructing lexica (Callison-Burch and Dredze, 2010). | contrasting |
train_3027 | Note that in MV we trust all annotators to the same degree. | crowdsourcing attracts people with different mo-tives, and not all of them are equally reliableeven the ones with Bronze level. | contrasting |
train_3028 | More importantly, however, POS tagging accuracy using crowdsourced annotations are on average only 2.6% worse than gold using professional annotations. | performance is much better than the weakly supervised approach by Li et al. | contrasting |
train_3029 | If true, this fact would have striking implications for theories and models of language acquisition, as well as numerous applications in natural language processing. | empirical investigations to date have focused on a small number of verbs. | contrasting |
train_3030 | In principle, this could be an unpredictable fact about the verb that must be acquired, much like the phonological form of the verb. | most theorists posit that there is a systematic relationship between the semantics of a verb and the syntactic frames in which it can appear (Levin and Hovav, 2005). | contrasting |
train_3031 | A first step is to be able to distinguish between strong and weak statements. | even this problem is understudied, partly due to a lack of data. | contrasting |
train_3032 | the apparent difficulty of the task, we found that many labels for the 386 pairs were reasonable. | in some cases, the labels were counterintuitive. | contrasting |
train_3033 | The task of identifying CD in text and referent CAs bears some similarity to coreference resolution. | coreference resolvers tried by the authors (namely CoreNLP (Recasens et al., 2013), ArkRef (O'Connor and Heilman, 2013) and the work of Roth and Bengston (2008)) were ineffective at this task. | contrasting |
train_3034 | Our results are further distinct from prior work by focusing on the communicative capacities of a variety of referents represented in documents. | the present focus upon determinerestablished phrases is more exclusive, and our results do not include demarcation of referents. | contrasting |
train_3035 | To be specific, the seed words list contains 8 to 12 emotional words for each of the six emotion categories. | 3 it is important to note that the proposed models are flexible and do not need to have seeds for every topic. | contrasting |
train_3036 | To date, most of the work presented on deception detection has focused on the identification of deceit clues within a specific language, where English is the most commonly studied language. | a large portion of the written communication (e.g., e-mail, chats, forums, blogs, social networks) occurs not only between speakers of English, but also between speakers from other cultural backgrounds, which poses important questions regarding the applicability of existing deception tools. | contrasting |
train_3037 | This improvement is practically useful in the large-data setting and is also scientifically interesting in that it recovers some of the cognitive plausibility which originally motivated Börschinger and Johnson (2012). | in experiments on the dataset studied by Canini et al. | contrasting |
train_3038 | As such, at around k=12, the precision@k of most of the systems has almost reached the final precision. | even at k = 5, which only counts correct an answer in the top 5 human suggested results, our system still achieved a precision of around 67%. | contrasting |
train_3039 | According to our observation, the POS of a task topic is usually a proper noun ( ) and the POS of a task event is usually a transitive verb ( ) + common noun ( ) or an intransitive verb ( ). | a task topic may be the most important term in related search sessions . | contrasting |
train_3040 | This is similar to moving from a first to a second order HMM. | to the original model, we also distinguish between unknown entities in the first and second argument position. | contrasting |
train_3041 | Hashing has recently emerged to be a popular solution to tackling fast NNS, and been successfully applied to a variety of non-NLP problems such as visual object detection (Dean et al., 2013) and recognition (Torralba et al., 2008a;Torralba et al., 2008b), large-scale image retrieval (Kulis and Grauman, 2012;Gong et al., 2013), and large-scale machine learning (Weiss et al., 2008;Liu et al., 2011;Liu, 2012). | hashing has received limited attention in the NLP field to the date. | contrasting |
train_3042 | Given a query document vector q, we use the Cosine similarity measure to evaluate the similarity between q and a document x in a dataset: Then the traditional document retrieval method exhaustively scans all documents in the dataset and returns the most similar ones. | such a brute-force search does not scale to massive datasets since the search time complexity for each query is O(n); additionally, the computational cost spent on Cosine similarity calculation is also nontrivial. | contrasting |
train_3043 | Both annotators chose VAGUE to label ordered and said because the order is unclear. | they disagreed on evacuation with monitor. | contrasting |
train_3044 | Table 5 shows our set of instance-based syntactic and semantic features. | to the above described type-based features, these features do not rely on a background corpus, but are extracted from the clause being classified. | contrasting |
train_3045 | Although combinations of tokens could also be replaced by wild cards in any automatically acquired pattern, this would generally lead to an exponentially growing feature space. | the set of discourse markers in our work is fixed: for English, we use 61 markers annotated in the Penn Discourse TreeBank 2.0 (Prasad et al., 2008); for German, we use 155 one-word translations of the English markers, as obtained from an online dictionary. | contrasting |
train_3046 | We experimented with different vector values (absolute frequency, log frequency, pointwise mutual information (PMI)), distance measures (cosine, euclidean) and normalization schemes. | to S&K, who did not observe any improvements using PMI, we found it to perform best, combined with euclidean distance and no additional normalization. | contrasting |
train_3047 | In conclusion, the majority of Lesk variants focused on extending the gloss to increase the chance of overlapping, while the proposed NBM aims to make better use of the limited lexical knowledge available. | to string matching, the probabilistic nature of our model offers a "softer" measurement of gloss-context association, resulting in a novel approach to unsupervised WSD with state-of-the-art performance in more than one WSD benchmark (Section 4). | contrasting |
train_3048 | This will look very similar to structured dropout: the matrix E[P] is identical, and E[Q] has off-diagonal elements which are scaled by (1 − p) 2 , which goes to zero as K is large. | by including these elements, standard dropout is considerably slower, as we show in our experiments. | contrasting |
train_3049 | The various noising approaches for mDA give very similar results. | structured dropout is orders of magnitude faster than the alternatives, as shown in Table 3. | contrasting |
train_3050 | When translating dialogue, the length of each utterance will usually be short, so the system can simply start the translation process when it detects the end of an utterance. | in the case of lectures, for example, there is often no obvious boundary between utterances. | contrasting |
train_3051 | • ga is usually a subject marker. | it becomes an object marker if the predicate has a potential voice type, which is usually translated into can, be able to, want to, or would like to. | contrasting |
train_3052 | However, the inflectional suffix of Spanish verbs include a hint of the person of the subject. | inferring Japanese subjects is more difficult than Spanish, since Japanese verbs usually do not have any grammatical cues to tell the subject type. | contrasting |
train_3053 | There are several ways to generate a potential voice in Japanese, but we usually put the suffix word (reru) or (rareru) after predicates. | these suffix words are also used for a passive voice. | contrasting |
train_3054 | In both the manual and automatic identification of sentence skeleton (rows 2 and 4), there is a significant improvement on the "All" data set. | using different skeleton identification results for training and inference (row 3) does not show big improvements due to the data inconsistency problem. | contrasting |
train_3055 | (2011) regard skeleton as a shortened sentence after removing some of the function words for better word deletion. | we define sentence skeleton as the key segments of a sentence and develop a new MT approach based on this information. | contrasting |
train_3056 | Related work in literature has proven that the expanded corpora can substantially improve the performance of ma-chine translation (Duh et al., 2010;. | the methods are still far from satisfactory for real application for the following reasons: There isn't ready-made domain-specific parallel bitext. | contrasting |
train_3057 | As presented in subsection 3.2, the method combines translation model and language model to rank the sentence pairs in the general-domain corpus. | it does not evaluate the inverse translation probability of sentence pair and the probability of target language sentence. | contrasting |
train_3058 | Wikipedia has many language versions, and articles in one language contain hyperlinks to corresponding pages in other languages. | the coverage of different language versions of Wikipedia is very inconsistent. | contrasting |
train_3059 | These works were able to exploit the link structure and metadata common to all Wikipedia language versions. | when linking between different online encyclopedia platforms this is more difficult as many of these structural features are different or not shared. | contrasting |
train_3060 | Title translation is an effective and widely used method of creating cross-language links between encyclopedia articles. | (Wang et al., 2012; Adafre and de Rijke, 2005) title translation alone is not always sufficient. | contrasting |
train_3061 | In many instances, scene-based image descriptors provide enough information to generate a complete description of the image, or at least a sufficiently good one. | there are some kinds of images for which scene-based features alone are insufficient. | contrasting |
train_3062 | This is because these two edits would combine together as a word-order change. | in Figure 5b, if one edit includes a substitution between words with the same POS's, then it is likely fixing a word choice error by itself. | contrasting |
train_3063 | Intuitively, a linear combination of documentlevel language models can be used to incorporate content written by friends. | it should be noticed that some documents are more relevant than others, and should be weighted higher. | contrasting |
train_3064 | In this example, the topic certainly relates to a student protest as revealed by the top 3 terms which can be used as a good label for this topic. | previous work has shown that top terms are not enough for interpreting the coherent meaning of a topic (Mei et al., 2007). | contrasting |
train_3065 | Most previous topic labelling approaches focus on topics derived from well formatted and static documents. | in contrast to this type of content, the labelling of topics derived from tweets presents different challenges. | contrasting |
train_3066 | One might expect that the SUBST(t) edit operation that reads s = x i+1 and writes t = y j+1 would correspond to an arc with s, t as its input and output labels. | we give a more efficient design where in the course of reaching q C , the PFST has already read s and indeed the entire right input context C 2 = x i:(i+N 2 ) . | contrasting |
train_3067 | It is outperformed by the best supervised approach in two domains, NEWS and PUBMED, using the nDCG-3 and nDCG-5 metrics. | the best label proposed by our methods is judged to be better (as shown by the nDCG-1 and Top-1 Av. | contrasting |
train_3068 | ZPar 4 (Zhang and Clark, 2008; Zhang and Nivre, 2011) performs transition-based dependency parsing with a stack of partial analysis and a queue of remaining inputs. | to MaltParser (local model and greedy deterministic search) ZPar applies global discriminative learning and beam search. | contrasting |
train_3069 | In machine translation, multiple beams are used to prune translation hypotheses at different levels of granularity (Zens and Ney, 2008). | the focus is improving the speed of translation decoder rather than improving translation quality through enforcement of hypothesis diversity. | contrasting |
train_3070 | For example, (3a) in Figure 3 is transformed into (6a) Table 2: Token count and data split for PPCMBE in Figure 5, and likewise (4a) is transformed into (6b). | (4b) remains as it is, because the following CP in that case is a complement, as indicated by the THT function tag. | contrasting |
train_3071 | We address both tasks together with the regexes. | to the sort of head rules in (Collins, 1999), these refer as little as possible to specific POS tags. | contrasting |
train_3072 | (such as in document-level topic models). | language shows observable priming effects, sometimes called triggers, where the occurrence of a given term decreases the surprisal of some other term later in the same discourse (Lau et al., 1993;Church and Gale, 1995;Beeferman et al., 1997;Church, 2000). | contrasting |
train_3073 | Also, TwiCal uses G 2 test to choose an entity y with the strongest association with a date d to form a binary tuple y, d to represent an event. | the structured representation of events can be directly extracted from the output of our LEM model. | contrasting |
train_3074 | It is not easy to accurately identify named entities in the Twitter data since tweets contain a lot of misspellings and abbreviations. | it is often observed that events mentioned in tweets are also reported in news articles in the same period (Petrovic et al., 2013). | contrasting |
train_3075 | This result is promising because it demonstrates that the system generated morphs contain new and unique characteristics which are unknown to the decoder. | from Figure 2 we can see that system generated morphs can be more easily resolved into the right target entities than human generated ones which are more implicit. | contrasting |
train_3076 | Pairs in the bottomleft region of the CONC-SUBJ space (objective adjectives with abstract nouns, such as green politics) seem to exhibit a non-literal, or at least non prototypical modification type. | for pairs in the objective+concrete corner, the adjectives appear to perform a classifying or categorizing function (baptist minister). | contrasting |
train_3077 | While the results reported so far on that annotation task were relatively low, we suggest that the task itself may be more complicated than what is actually required in textual inference scenarios. | the results obtained for our task, which does fit textual inference scenarios, are promising, and encourage utilizing algorithms for this task in actual inference systems. | contrasting |
train_3078 | One type of such cases are non-core arguments, which cannot be Definite NIs. | textual inference deals with non-core arguments as well (see example 3 in Table 1). | contrasting |
train_3079 | This binary distinction assumes a clear boundary between the two; in other words, it assumes that metaphoricity is a discrete property. | three strands of theoretical research show that metaphoricity is not a discrete property. | contrasting |
train_3080 | This is a result of not having perfect agreement across all participants. | in spite of this, the measure makes a good distinction between utterances. | contrasting |
train_3081 | 8, whose complexity is the same as the monolingual case. | the complexity of calculating the transition probability, in Eqs. | contrasting |
train_3082 | (Ravi, 2013) reports the most efficient method so far: It only consumes about 3h of computation time. | as mentioned before, those results are not directly comparable to our work, since they use additional context information on the target side. | contrasting |
train_3083 | Most MT system combination work uses MT systems employing different techniques to train on the same data. | in this paper, we use the same MT algorithms for training, tuning, and testing, but vary the training data, specifically in terms of the degree of source language dialectness. | contrasting |
train_3084 | Comparing the two derivations, (a) is more reasonable and yields a better translation. | (b) wrongly translates phrase " † â9" to "and Sharon" and combines it with [ÙŸ;Bush] incorrectly, leading to a bad translation. | contrasting |
train_3085 | Furthermore, in phrase-based MT, most decodable sentences are very short, while in HIERO the lengths of decodable sentences are more evenly distributed. | in the following experiments, due to efficiency considerations, we use the "tight" rule extraction in cdec that is more strict than the standard "loose" rule extraction, which generates a reduced rule set and, thus, a reduced reachability. | contrasting |
train_3086 | Take valency features for example, previous work (Zhang and Nivre, 2011) has shown that such features are important to parsing accuracy, e.g., it may inform the parser that a verb already has two objects attached to it. | such information might be inaccurate when the verb's modifiers contain punctuations. | contrasting |
train_3087 | Accuracy degrades somewhat from the SU-PERVISED initialization, since the data likelihood objective differs from the objective of maximizing tagging accuracy. | the final SUPERVISED performance of 94.1% shows that there is substantial room for improvement over the UNIFORM initializer. | contrasting |
train_3088 | The SUPERVISED TRANSITIONS initialization is estimated from observations of consecutive tags in a labeled corpus. | our OBSERVATIONAL initializer is likewise estimated from the relative frequency of consecutive tags, taking advantage of the structure of the tag dictionary D. It does not require a labeled corpus. | contrasting |
train_3089 | For instance, Toutanova and Galley (2011) parameters for IBM Model 1 are not unique, and alignments predicted from different optimal parameters vary significantly in accuracy. | the effectiveness of observational initialization is somewhat surprising because EM training includes these unambiguous tag pairs in its expected counts, even with uniform initialization. | contrasting |
train_3090 | This paper has demonstrated a simple and effective learning method for type-supervised, transductive part-of-speech tagging. | it is an open question whether the technique is as effective for tag dictionaries derived from more natural sources than the labels of an existing treebank. | contrasting |
train_3091 | algorithm quantifies the concreteness of concepts that lack such a rating based on their proximity to rated concepts in a semantic vector space. | to each of these approaches, the image dispersion approach requires no hand-coded resources. | contrasting |
train_3092 | For example, for a relation pay an official visit to, with a statement (Bush, pay an official visit to, China), an entity pair (Bush, China) is in the "support set", which is a set of co-occurring entity pairs of pay an official visit to. | when its support set is {(Bush, China), (Mandelson, Moscow), (Rice, Israel)}, and that of visit is {(Bush, China), (Rice, Israel), (Medvedev, Cuba)}, we can infer their semantic equivalence based on the set intersection: we propose to explore corpus latent features (LF), to complement the sparsity problem of EF: Out of 158 randomly chosen correct relation translation pairs we labeled, 64% has only one co-occurring entity pair, which makes EF not very effective to identify these relation translations. | contrasting |
train_3093 | That is, we always 讨论 (discuss) before we 批 准 (ratify) something and hence the temporal behavior of 讨论 (discuss) is also very similar to that of ratify. | it can be correctly translated using EF. | contrasting |
train_3094 | Mining opinions from text by identifying their positive and negative polarities is an important task and supervised learning methods have been quite successful. | supervised methods require labeled samples for modeling and the lack of sufficient training data is the performance bottle-neck in opinion analysis especially for resource scarce languages. | contrasting |
train_3095 | This makes learning more difficult and slows the speed of parameter updates by a factor of two. | given that our post-processing step is concerned only with the alignments of the unknown words, so it is more sensible to only annotate the unknown words. | contrasting |
train_3096 | While (Sutskever et al., 2014) produces better translations for sentences with frequent words (the left part of the graph), they are worse than best 7 Their unknown replacement method and ours both track the locations of target unknown words and use a word dictionary to post-process the translation. | the mechanism used to achieve the "tracking" behavior is different. | contrasting |
train_3097 | As the first effort to apply attention model to machine translation, it sends the state of a decoding RNN as attentional signal to the source end to obtain a weighted sum of embedding of source words as the summary of relevant context. | inCNN uses 1) a different attention signal extracted from proceeding words in partial translations, and 2) more importantly, a convolutional architecture and therefore a highly nonlinear way to retrieve and summarize the relevant information in source. | contrasting |
train_3098 | For example, identifying the gender of a person is important for generating a good description. | object recognizers are not (yet) able to reliably achieve this distinction, and we only have a single recogniser for "persons". | contrasting |
train_3099 | This could be attributed to the shift in the types of scenes depicted in each data set. | transferring VDR from the VLT2K to the Pascal1K data set improves the generated descriptions from 7.4 → 8.2 Meteor points. | contrasting |
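The rows above use a simple pipe-delimited, markdown-style layout with a trailing `|`. A minimal sketch of splitting one row into its four fields (the function name is mine, and it assumes fields never contain a literal `|`, which holds for the rows shown here):

```python
def parse_row(line):
    """Split one pipe-delimited dataset row into its four named fields.

    Assumes the layout used above: four fields separated by ``|``,
    with an optional trailing ``|``, and no literal pipes inside fields.
    """
    parts = [p.strip() for p in line.strip().strip("|").split("|")]
    if len(parts) != 4:
        raise ValueError(f"expected 4 fields, got {len(parts)}")
    row_id, sentence1, sentence2, label = parts
    return {"id": row_id, "sentence1": sentence1,
            "sentence2": sentence2, "label": label}


row = parse_row(
    "train_3001 | The most standard search algorithm ... | "
    "there are a number of other search algorithms ... | contrasting |"
)
print(row["id"])     # train_3001
print(row["label"])  # contrasting
```

Skipping the two header lines and mapping `parse_row` over the remaining lines reconstructs the full split as a list of dicts.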