id (string, 7–12 chars) | sentence1 (string, 6–1.27k chars) | sentence2 (string, 6–926 chars) | label (4 classes)
---|---|---|---
train_14200 | To recover this recall, the strictness of this filter could be relaxed by further generalising dependency paths or using a different similarity metric to direct match of paths. | this is the upper bound for approaches which consider only exact dependency paths as a feature. | contrasting |
train_14201 | We note that limiting bootstrap to one or two iterations is ideal for the best trade-off between recall and search space. | closer analysis of discriminative paths is required for a full SF system. | contrasting |
train_14202 | More recently, researchers have investigated joint inference techniques for event extraction using Markov Logic Networks (MLNs) (e.g., Poon and Domingos (2007), Poon and Vanderwende (2010), Riedel and McCallum (2011a)), a statistical relational model that enables us to model the dependencies between different instances of a data sample. | it is extremely challenging to make joint inference using MLNs work well in practice (Poon and Domingos, 2007). | contrasting |
train_14203 | On the BioNLP'11 and BioNLP'09 datasets, our scores are slightly better and slightly worse respectively than the best reported results. | they are significantly better than state-of-the-art MLNbased systems. | contrasting |
train_14204 | The resulting model can learn that some sequences of syllables (in particular, sequences that start with a stressed syllable) are more likely than others. | observed stress improved token f-score by only 1%. | contrasting |
train_14205 | Together, the experimental and computational results suggest that infants in fact pay attention to stress, and that stress carries useful information for segmenting words in running speech. | stress identification is itself a non-trivial task, as stress has many highly variable, contextsensitive, and optional phonetic reflexes. | contrasting |
train_14206 | This first experiment shows that observing dictionary stress is better early in learning, but that modeling syllable weight is better later in learning. | it is possible that syllable weight was more useful because modeling syllable weight involves modeling the characteristics of codas; the advantage may not have been due to weight per se but due to having learned something about the effects of suffixes on final codas. | contrasting |
train_14207 | Following normal practice, we evaluate our model and compare it with state-of-the-art systems using F-Score. | we argue that the ability to solve segmentation ambiguities is also important when evaluating different types of unsupervised word segmentation systems. | contrasting |
train_14208 | This is because once the goodness measure is given, the decoding algorithm will segment any ambiguous strings into the same word sequences, no matter what their context is. | nonparametric Bayesian language models aim to segment character string into a "reasonable" sentence according to the posterior probability. | contrasting |
train_14209 | This is an essential attribute of Gibbs sampling. | we believe that initializing the Gibbs sampler with the result of nVBE will benefit us in two ways. | contrasting |
train_14210 | Previous unsupervised work usually evaluated their models using F-score, regardless of goodness measure based model or nonparametric Bayesian model. | segmentation ambiguity is a very important factor influencing accuracy of Chinese word segmentation systems (Huang and Zhao, 2007). | contrasting |
train_14211 | There has been work on making use of both unlabeled data (Sun and Xu, 2011;Wang et al., 2011) and Wikipedia (Jiang et al., 2013) to improve segmentation. | no empirical results have been reported on a unified approach to deal with different types of free data. | contrasting |
train_14212 | Liu and Zhang (2012) study domain adaptation using an unsupervised self-training method. | to their work, we make use of not only unlabeled data, but also leverage any free annotation to achieve better results for domain adaptation. | contrasting |
train_14213 | CRF has been used for Chinese word segmentation (Tseng, 2005;Shi and Wang, 2007;Zhao and Kit, 2008;Wang et al., 2011). | most previous work train a CRF by using full annotation only. | contrasting |
train_14214 | Several other variants of CRF model has been proposed in the machine learning literature, such as the generalized expectation method (Mann and McCallum, 2008), which introduce knowledge by incorporating a manually annotated feature distribution into the regularizer, and the JESS-CM (Suzuki and Isozaki, 2008), which use a EM-like method to iteratively optimize the parameter on both the annotated data and unlabeled data. | we directly incorporate the likelihood of partial annotation into the objective function. | contrasting |
train_14215 | Clearly, bad segmentations translate into poor ATWV scores, as in the case of random and unsupervised segmentations. | gains on segmentation accuracy do not always result in better KWS performance. | contrasting |
train_14216 | Supervised POS tagging has achieved great success, reaching as high as 95% accuracy for many languages (Petrov et al., 2012). | supervised techniques need manually annotated data, and this is either lacking or limited in most resourcepoor languages. | contrasting |
train_14217 | The method we propose in this paper is similar in only using a small amount of annotation. | we directly use the annotated data to train the model rather than using a dictionary. | contrasting |
train_14218 | The 8 languages we are considering in this experiment are not actually resource-poor languages. | running on these 8 languages makes our system comparable with previously proposed methods. | contrasting |
train_14219 | The idea of using the universal tagset is of great use in multilingual applications, enabling comparison across languages. | the mapping is not always straightforward. | contrasting |
train_14220 | The "No LP" model of Das and Petrov (2011), which only uses directly projected labels (without label propagation), scored 81.3% for 8 languages. | using the same model but with more parallel data, Täckström et al. | contrasting |
train_14221 | Plank and van Noord (2008) concluded that this method for adding prior knowledge only works with high quality reference distributions, otherwise performance suffers. | to these previous approaches, we consider the specific setting where both the learned model and the reference model s o = P (t|w) are both maximum entropy models. | contrasting |
train_14222 | In general, because we don't explicitly do any mapping between languages, we might have trouble if the tagset size of the target language is bigger than the source language tagset. | this is not the case for our experiment because we choose English as the source-side and English has the full 12 tags. | contrasting |
train_14223 | In this case the correction model would include a regularization term over the λ to bias towards the DPM parameters, while γ and α would use a zero-mean regularizer. | we leave this for future work. | contrasting |
train_14224 | The introduction of dynamic oracles has considerably improved the accuracy of greedy transition-based dependency parsers, without sacrificing parsing efficiency. | this enhancement is limited to projective parsing, and dynamic oracles have not yet been implemented for parsers supporting non-projectivity. | contrasting |
train_14225 | A transition-based dependency parser is a nondeterministic device, meaning that a given configuration can be mapped into several configurations by the available transitions. | in several implementations the parser is associated with a discriminative model that, on the basis of some features of the current configuration, always chooses a single transition. | contrasting |
train_14226 | In the case where the node γ[i] belongs to σ, i.e., i < , we assign loss contribution 0 to the entry at the top of the stack. | if γ[i] is in β, i.e., i ≥ , we assign loss contribution 0 to several entries in T [i, i + 1] (line 6) because, at the time γ[i] is shifted, the content of the stack depends on the transitions executed before that point. | contrasting |
train_14227 | In general, parsers may prefer VP reading because a transitive verb followed by a noun object is nor-mally a VP structure. | chinese verbs can also modify nouns without morphological inflection, e.g., 養殖/farming 池/pond. | contrasting |
train_14228 | A conjunctive head-head relation between a verb and a noun is rare. | in the sentence "服 務 設備 都 甚 周到" (Both service and equipment are very thoughtful. | contrasting |
train_14229 | Based on the features discussed in the previous sub-section, we extract prior knowledge from Treebank to design the Vt-N classifier. | the training suffers from the data sparseness problem. | contrasting |
train_14230 | An alternative way to extend world knowledge is to learn from largescale unlabeled data (Wu, 2003;Yu et al., 2008). | the unsupervised approach accumulates errors caused by automatic annotation processes, such as word segmentation, PoS tagging, syntactic parsing, and semantic role assignment. | contrasting |
train_14231 | Since the scoring function refers to the prediction history, Choi and Palmer (2012) uses the gold POS tags, y t−1 1 , to generate training examples, which means they assume all of the past decisions are correct. | this causes error propagation problems, since each state depends on the history of the past decisions. | contrasting |
train_14232 | The results indicate that an induced latent tag set as a whole increases parsing performance. | not every split made by the HMM-LA seems to be useful for the parser. | contrasting |
train_14233 | We also tried Jiang and Zhai's subset selection technique ( §3.1 in Jiang and Zhai (2007)), which assumes labeled training material for the target domain. | we did not see any improvements. | contrasting |
train_14234 | As a result, a system must be prepared to handle disfluencies, utterance fragments, and other phenomena that are entirely grammatical in speech, but not in writing. | a system designed for transcripts of speech does not need to identify errors specific to written language such as punctuation or spelling mistakes. | contrasting |
train_14235 | Hassanali and Liu's system was designed for transcripts of spoken language collected from children with impaired language, and is able to detect the set of errors they defined very well. | it cannot be straightforwardly adapted to novel error sets. | contrasting |
train_14236 | CCG's generalized notion of constituency means that many derivations are possible for a given a set of lexical categories. | most of these derivations will be semantically equivalent-for example, deriving the same dependency structures-in which case the actual choice of derivation is unimportant. | contrasting |
train_14237 | Auli and Lopez (2011b) find that A * CCG parsing with this heuristic is very slow. | they achieve a modest 15% speed improvement over CKY when A * is combined with adaptive supertagging. | contrasting |
train_14238 | If we assume all derivations licensed by our grammar are equally likely, and that lexical category assignments are conditionally independent given the sentence, we can compute the optimal parseŷ as: As discussed in Section 2.1, many derivations are possible given a sequence of lexical categories, some of which may be semantically distinct. | our model will assign all of these an equal score, as they use the same sequence of lexical categories. | contrasting |
train_14239 | Unlike the other methods, this approach does affect the probabilities which are calculated, as the normalizing constant is only computed for a subset of the categories. | the probability mass contained in the pruned categories is small, and it only slightly decreases parsing accuracy. | contrasting |
train_14240 | Our experiments show that the parser achieves over 80% unlabeled attachment accuracy on our new, high-quality test set and measure the benefit of our contributions. | to the edited, standardized language of traditional publications such as news reports, social media text closely represents language as it is used by people in their everyday lives. | contrasting |
train_14241 | These informal texts, which account for ever larger proportions of written content, are of considerable interest to researchers, with applications such as sentiment analysis (Greene and Resnik, 2009;Kouloumpis et al., 2011). | their often nonstandard content makes them challenging for traditional NLP tools. | contrasting |
train_14242 | A lot of work has gone into developing powerful optimization methods for solving these combinatorial problems. | we explore, analyze, and demonstrate that a substantially simpler randomized greedy inference algorithm already suffices for near optimal parsing: a) we analytically quantify the number of local optima that the greedy method has to overcome in the context of first-order parsing; b) we show that, as a decoding algorithm, the greedy method surpasses dual decomposition in second-order parsing; c) we empirically demonstrate that our approach with up to third-order and global features outperforms the state-of-the-art dual decomposition and MCMC sampling methods when evaluated on 14 languages of non-projective CoNLL datasets. | contrasting |
train_14243 | In fact, drawing the initial tree uniformly at random results in the same performance as when initialized from a trained first-order distribution. | sufficient randomization of the starting point is critical. | contrasting |
train_14244 | Therefore, we could initialize the tree y (0) with a tree from a first-order parser, or draw the initial tree from a first-order distribution other than uniform. | perhaps surprisingly, as we demonstrate later, little is lost with uniform initialization. | contrasting |
train_14245 | use the grammar of pregroups as the syntactic machinery to construct distributional meaning representations, since both pregroups and vector spaces can be seen as examples of the same abstract structure, which leads to a particularly clean mathematical description of the compositional process. | the approach applies more generally, for example to other forms of categorial grammar, such as Combinatory Categorial Grammar (Steedman, 2000;Maillard et al., 2014), and also to phrase-structure grammars in a way that a formal linguist would recognize (Baroni et al., 2014). | contrasting |
train_14246 | predicting implausible when the gold standard label is plausible). | tensor produces almost equal numbers of false positives and false negatives, but sometimes produces false negatives with low frequency nouns (e.g. | contrasting |
train_14247 | Besides them, German is the only language to which Romanian drew closer (a possible explanation might be the fact that, after the establishment of Germans in Banat and Transylvania, many German words entered the basic Romanian lexicon). | the similarity between Romanian and almost all the Slavic languages decreased in the same period. | contrasting |
train_14248 | This selection was likely made to avoid the noise of learning multiple senses for infrequent words. | our method is robust to noise, which can be seen by the good performance of our model that learns multiple embeddings for the top 30,000 most frequent words. | contrasting |
train_14249 | Knowledge graphs are recently used for enriching query representations in an entity-aware way for the rich facts organized around entities in it. | few of the methods pay attention to nonentity words and clicked websites in queries, which also help conveying user intent. | contrasting |
train_14250 | Query understanding is the process of generating a representation which characterizes a user's search intent (Croft et al., 2010), which is of vital importance for information retrieval. | users are remarkably laconic in describing their information needs due to anomalous state of knowledge (Belkin et al., 1982), resulting in vague and underspecified queries, which makes it especially difficult to understand and locate what they intended for in mountains of web data. | contrasting |
train_14251 | A widely accepted way to use knowledge graph is tying queries with it by annotating entities in them, also known as entity linking. | information need is conveyed through more than entities. | contrasting |
train_14252 | We do not split queries into clusters or subtopics relevant to the original query to indicate a intent, but link them in an graph with intent feature similarity, weakly or strongly, in a holistical view. | previous research can be categorized by what kind of resources they rely on. | contrasting |
train_14253 | Markov logic networks combine Markov networks with first-order logic in a probabilistic framework (Richardson and Domingos, 2006) where φ i is a first order formula and w i is the penalty (the formula's weight). | to the first-order logic, whereby a formula represents a hard constraint, these logic formulas are relaxed and can be violated with penalties in the MLN. | contrasting |
train_14254 | The system most similar to ours is DEANNA (Yahya et al., 2012). | dEANNA extracts predicate-argument structures from the questions using three hand-written patterns. | contrasting |
train_14255 | A large corpus may be used to build relation expression models , but not as supporting evidence for target entities. | the Web and IR community generally assumes a free-form query that is often telegraphic (Guo et al., 2009;Sarkas et al., 2010;Li et al., 2011). | contrasting |
train_14256 | In all cases and for all metrics, using the corpus and KG together gives superior performance to using any of them alone. | it is instructive that in case of TREC-INEX, corpusonly is better than KG-only, whereas this is reversed for WQT, which also supports the above argument. | contrasting |
train_14257 | Previous sections discussed estimating difficulty scores of resolved questions, from which pairwise competitions could be extracted. | for newly posted questions without any answers received, no competitions could be extracted and none of the above methods work. | contrasting |
train_14258 | In the task of predicting reading difficulty levels, documents targeting different grade levels are taken as ground truth, which can be easily obtained from the web. | there is no naturally annotated data for our QDE task on the web. | contrasting |
train_14259 | In this case, the lexica outperformed previous results for gender prediction of Twitter users, which ranged from 75.5% to 87% (Burger et al., 2011;Ciot et al., 2013;Liu and Ruths, 2013;Al Zamal et al., 2012). | the lexica were unable to match the 92.0% accuracy Burger et al. | contrasting |
train_14260 | Our method is also an emotion-based method. | our approach is different from existing emotion-based methods in the following aspects. | contrasting |
train_14261 | Germesin and Wilson (2009) also showed accuracies of 98% in detecting agreement in the AMI corpus using lexical, subjectivity and dialogue act features. | they note that their system could not classify disagreement accurately due to the small number of training examples in this category. | contrasting |
train_14262 | Referrals and questions, as well as polarity measures in the first section of the post, were found to be most useful. | their analysis did not Table 1: Characteristics of the four categories determined from the crowd-sourced annotation. | contrasting |
train_14263 | show that such inferences may be exploited to significantly improve explicit sentiment analysis systems. | to achieve its results, the system developed by requires that all instances of +/-effect events in the corpus be manually provided as input. | contrasting |
train_14264 | Several works such as Hatzivassiloglou and McKeown (1997), Turney and Littman (2003), Kim and Hovy (2004), Strapparava and Valitutti (2004), and Peng and Park (2011) have tackled automatic lexicon expansion or acquistion. | in most such work, the lexicons are word-level rather than sense-level. | contrasting |
train_14265 | With only hierarchical information (i.e., hypernym (H) and troponym (T) relations), it already shows good performance for all classes. | they cannot cover some senses. | contrasting |
train_14266 | By gloss similarity, many nodes are connected to each other. | since uncertain connections can cause incorrect propagation in the graph, this negatively affects the performance. | contrasting |
train_14267 | Note that our method is similar to Active Learning (Tong and Koller, 2001), in that both automatically identify which unlabeled instances the human should annotate next. | in active learning, the goal is to find instances that are difficult for a supervised learning system. | contrasting |
train_14268 | It would be promising to combine our method with other methods to enable it to find +effect and -effect senses that are outside the coverage of WordNet. | a WordNet-based lexicon gives a substantial base to build from. | contrasting |
train_14269 | To tackle this problem, models in the literature usually use some seed words for each sentiment topic to define Dirichlet priors with asymmetric concentration parameter vectors (Sauper et al., 2011;Kim et al., 2013), or use seed words to initialize word assignment to sentiment topic (Lin and He, 2009), or both Jo and Oh, 2011). | these seed words are usually arbitrarily selected, and how to define asymmetric priors is not clear, especially when we would like to capture more than two (positive and negative) kinds of sentiments. | contrasting |
train_14270 | Sentiment association: The sentiment label takes R values, and there are T different values for the polarity score in the sentiment lexicon. | the relation between sentiment labels and polarity scores are unknown. | contrasting |
train_14271 | There may be better methods to use seed words for aspect discovery (Jagarlamudi et al., 2012;Mukherjee and Liu, 2012), and it would be interesting to combine their methods with ours. | this is beyond the scope of this paper, and we list it as future work. | contrasting |
train_14272 | Finally, we assume each phrase is associated with one latent aspect. | aspects may be correlated. | contrasting |
train_14273 | For example, if we learn that the hashtag #lovelife is associated with JOY, then we can extract the phrase "love life" from the hashtag and use it to recognize emotion in the body of tweets. | unlike hashtags, which are selfcontained, the words surrounding a phrase in a tweet must also be considered. | contrasting |
train_14274 | Each phrase is assumed to express the same emotion as the original hashtag. | as we will see in Section 4, just the presence of a phrase yields low precision, and surrounding context must also be taken into account. | contrasting |
train_14275 | Some previous researches used the URL similarity or patterns to find parallel page pairs. | due to the diversity of web page styles and website maintenance mechanisms, bilingual websites adopt varied naming schemes for parallel documents (Shi, et al, 2006). | contrasting |
train_14276 | Carl (2010) showed that expert translators tend to adopt local planning: they read a few words ahead and then translate in a roughly online fashion. | word order differences between languages will necessarily require longer range planning and movement. | contrasting |
train_14277 | Conventional incremental MT learning experiments typically resemble domain adaptation: smallscale baselines are trained and tuned on mostly outof-domain data, and then re-tuned incrementally on in-domain data. | we start with largescale systems. | contrasting |
train_14278 | The process study most similar to ours is that of Koehn (2009a), who compared scratch, post-edit, and simple interactive modes. | he used undergraduate, non-professional subjects, and did not consider re-tuning. | contrasting |
train_14279 | Many research translation UIs have been proposed including TransType (Langlais et al., 2000), Caitra (Koehn, 2009b), Thot (Ortiz-Martínez and Casacuberta, 2014), TransCenter (Denkowski et al., 2014b), and CasmaCat (Alabau et al., 2013). | to our knowledge, none of these interfaces were explicitly designed according to mixedinitiative principles from the HCI literature. | contrasting |
train_14280 | The maximum phrase length (mpl) introduces in g (0) more configurations of reordering constraints ([l, C] in Figure 3). | not many more, due to C being limited by the distortion limit d. In practice, we observe little impact on time performance. | contrasting |
train_14281 | derivations in the vast majority of the cases (100% with a 3-gram LM) and translation quality in terms of BLEU is no different from OS * . | with k < 10 4 both model scores and translation quality can be improved. | contrasting |
train_14282 | 2 through sparse indicator features over phrase pairs instead, but prior work with such models still relies on word aligned corpora for estimation (Xiong et al., 2006;Nguyen et al., 2009). | recent evaluations of the approach show little gain over the simpler frequency-based estimation method (Cherry, 2013). | contrasting |
train_14283 | 6 The feature set closest to Cherry (2013) is SparseHRM. | while Cherry had to severely restrict his features for batch lattice MIRA-based training, our maximum expected BLEU approach can handle millions of features. | contrasting |
train_14284 | Previous research has shown that directly training a reordering model for BLEU can vastly outperform a likelihood trained maximum entropy reordering model (Cherry, 2013). | the two approaches do not only differ in the objectives used, but also in the type of training data. | contrasting |
train_14285 | The standard configuration of modern phrasebased Statistical Machine Translation (SMT) (Koehn et al., 2003) systems can produce very acceptable results on some tasks. | early integration of better features to guide the search for the best hypothesis can result in significant improvements, an expression of the complexity of modeling translation quality. | contrasting |
train_14286 | For instance, improvements have been obtained by integrating features into decoding that better model semantic coherence at the sentence level (Hasan and Ney, 2009) or syntactic well-formedness (Schwartz et al., 2011). | early use of such complex features typically comes at a high computational cost. | contrasting |
train_14287 | (2007) described a greedy search decoder, first introduced in (Germann et al., 2001), able to improve translations produced by a dynamic programming decoder using the same scoring function and translation table. | the more recent work by Arun et al. | contrasting |
train_14288 | In both scenarios our lexicon is 60-70% smaller. | to the development results, single sentence performance decreases slightly compared to Artzi and Zettlemoyer (2013b). | contrasting |
train_14289 | The templates presented so far model grammatically correct input. | in dialogue domains such as ATIS, speakers often omit words. | contrasting |
train_14290 | This text-to-code problem may be thought of as a machine translation (MT) problem, where one aims to translate sentences in English to the formal language of LSCs. | standard statistical MT techniques rely on the assumption that textual requirements and code are aligned at a sentence level. | contrasting |
train_14291 | By choosing only a few such principle axes, we can represent the data in a lower dimensional space. | t-SNE embedding performs a non-linear dimensionality reduction preserving the local structures. | contrasting |
train_14292 | These kinds of corruptions are expected to be more frequently harmful, at least for languages with relatively rigid word order. | there may still be certain transpositions that are benign, at least for grammaticality. | contrasting |
train_14293 | Again, we argue that a system can improve on this by predicting unseen parts of the sentence to find a better tradeoff between these conflicting goals. | to evaluate and optimize such a system, we must measure where a system falls on the continuum of accuracy versus expeditiousness. | contrasting |
train_14294 | Our goal is to learn a classifier that can accurately mimic the oracle's choices on previously unseen data. | at test time, when we run the learned policy classifier, the learned policy's state distribution may deviate from the optimal policy's state distribution due to imperfect imitation, arriving in states not on the oracle's path. | contrasting |
train_14295 | In the complete version of searn, the cost of each action is calculated as the highest expected reward starting at the current state minus the actual roll-out reward. | computing the full roll-out reward is computationally very expensive. | contrasting |
train_14296 | Any linguist would agree that these sequences are substitutable (in fact, they have lots of local contexts in common). | statistical evidence points otherwise, since their context distributions are not close enough. | contrasting |
train_14297 | For instance, using 5-grams based only on the 26 letters of the English alphabet will result in a feature space of 26 5 = 11, 881, 376 features. | in the experiments presented in this work the feature space includes 5-grams along with 6-grams, 7-grams and 8-grams. | contrasting |
train_14298 | Diving into details, it can be observed that the results obtained by KRR are higher than those obtained by KDA. | both methods perform very well compared to the state of the art. | contrasting |
train_14299 | We may want to measure their English vocabulary with an emphasis on computer science rather than their general English vocabulary. | such an extension is impossible via current methods, and thus it is desirable to sample algorithms to be able to handle domain specificity. | contrasting |