id (stringlengths 7–12) | sentence1 (stringlengths 6–1.27k) | sentence2 (stringlengths 6–926) | label (stringclasses 4 values) |
---|---|---|---|
train_13600 | Such domain knowledge is useful to help organize the product as- pects into a hierarchy. | the initial hierarchy obtained from domain knowledge usually cannot fit the review data well. | contrasting |
train_13601 | For implicit aspect identification, some works proposed to define rules for identification (Liu et al., 2005), while others suggested to automatically generate rules via mutual clustering (Su et al., 2008). | there are some related works on sentiment classification (Pang and Lee, 2008). | contrasting |
train_13602 | Pollack's recursive auto-associative memories (RAAMs) are similar to ours in that they are a connectionist, feedforward model. | RAAMs learn vector representations only for fixed recursive data structures, whereas our RAE builds this recursive data structure. | contrasting |
train_13603 | Most previous work is centered around a given sentiment lexicon or building one via heuristics (Kim and Hovy, 2007; Esuli and Sebastiani, 2007), manual annotation (Das and Chen, 2001) or machine learning techniques (Turney, 2002). | we do not require an initial or constructed sentiment lexicon of positive and negative words. | contrasting |
train_13604 | (2007) extended Marcu's work by using parameter optimization, topic segmentation and syntactic parsing. | syntactic parsers were usually costly and impractical when dealing with large scale of text. | contrasting |
train_13605 | The drop of precision may be caused by the neglect of structural and semantic information of discourse instances. | M&E still outperformed Baseline in average F-score. | contrasting |
train_13606 | When modeling only positive and negative labels for sentiment, negators are generally treated as flipping the polarity of the adjective it modifies (Choi and Cardie, 2008; Nakagawa et al., 2010). | recent work (Taboada et al., 2011; Liu and Seneff, 2009) suggests that the effect of the negator when ordinal sentiment scores are employed is more akin to dampening the adjective's polarity rather than flipping it. | contrasting |
train_13607 | Each of the above, however, uses a binary rather than an ordinal sentiment scale. | our proposed method for phrase-level sentiment analysis is inspired by recent work on distributional approaches to compositionality. | contrasting |
train_13608 | In our work we propose a single unified model for handling all words of any part of speech. | there has been some research in trying to model compositional effects for sentiment at the phrase- and sentence-level. | contrasting |
train_13609 | As Table 3 shows, the WSJ training set contains only 0.7% imperative sentences. | our test sentences from the web contain approximately 10% imperatives. | contrasting |
train_13610 | Since it explicitly models the grammaticality of the output via target-side syntax, the string-to-tree model (Xiao et al., 2010) significantly outperforms both the state-of-the-art phrase-based system Moses (Koehn et al., 2007) and the formal syntax-based system Hiero (Chiang, 2007). | there is a major limitation in the string-to-tree model: it does not utilize any useful source-side syntactic information, and thus to some extent lacks the ability to distinguish good translation rules from bad ones. | contrasting |
train_13611 | Given a test source sentence with its parse tree, we can according to this strategy choose only the rules whose source syntax matches the test source tree. | this restriction will rule out many potentially correct rules. | contrasting |
train_13612 | We adopt the latter approach for efficient decoding with integrated n-gram language models since this binarization technique has been well studied in string-to-tree translation. | when the rules' source string is decorated with syntax (fuzzy-tree to exact-tree rules), how should we binarize these rules? | contrasting |
train_13613 | This model significantly outperforms the state-of-the-art hierarchical phrase-based model (Chiang, 2005). | those string-to-tree systems run slowly in cubic time (Huang et al., 2006). | contrasting |
train_13614 | In the second example, the Chinese input holds a long distance dependency "中国 赞赏 ... 努力" which corresponds to a simple pattern "noun phrase+verb+noun phrase". | due to the modifiers of "努力" which contains two subsentences including 24 words, the sentence looks rather complicated. | contrasting |
train_13615 | A general weakness of subgradient algorithms is that they do not have this capacity, and so are usually stopped by specifying a maximum number of iterations. | ADMM allows one to keep track of primal and dual residuals (Boyd et al., 2011). | contrasting |
train_13616 | Hence, our method seems to be more suitable for decompositions which involve "simple slaves," even if their number is large. | this does not rule out the possibility of using this method otherwise. | contrasting |
train_13617 | Note that a subgradient-based method could handle some of those parts efficiently (arcs, consecutive siblings, grandparents, and head bigrams) by composing arc-factored models, head automata, and a sequence labeler. | no lightweight decomposition seems possible for incorporating parts for all siblings, directed paths, and non-projective arcs. | contrasting |
train_13618 | Streaming approaches (Muthukrishnan, 2005) provide a memory- and time-efficient framework to deal with terabytes of data. | these approaches are proposed to solve a single problem. | contrasting |
train_13619 | Metrics like LLR would assign high association scores to frequent words, also leading to incorrect parses. | with a small amount of linguistic side information (Druck et al., 2009; Naseem et al., 2010), we see that these issues can be overcome. | contrasting |
train_13620 | Yielding an accuracy of 76% and an F-score of .73, its performance is lower than that of the supervised SVM. | it does outperform both CRF-based methods. | contrasting |
train_13621 | For instance, in the example sentence provided at the beginning of this section, the words "arrested" and "killed" probably have a relatively high a priori likelihood of being causally related. | knowing that the connective "because" evokes a contingency discourse relation between the text spans "The police arrested him" and "he killed someone" provides further evidence towards predicting causality. | contrasting |
train_13622 | For a verb (verbal predicate), we extract its subject and object from its associated dependency parse. | since events are also frequently triggered by nominal predicates, it is important to identify an appropriate list of event triggering nouns. | contrasting |
train_13623 | Two annotators annotated the documents for causal event pairs, using two simple notions for causality: the Cause event should temporally precede the Effect event, and the Effect event occurs because the Cause event occurs. | sometimes it is debatable whether two events are involved in a causal relation, or whether they are simply involved in an uninteresting temporal relation. | contrasting |
train_13624 | 's (2010) approach, which combines an incremental parser with a vector-space model of semantics. | this approach only provides a loose integration of the two components (through simple addition of their probabilities), and the notion of semantics used is restricted to lexical meaning approximated by word co-occurrences. | contrasting |
train_13625 | There remains a possibility that the model primarily captures lexical priming, rather than co-reference. | we note that string match is a strong indicator of two NPs being coreferent (cf. | contrasting |
train_13626 | In the current implementation of the model, λ is constant throughout the test runs. | λ could possibly be a function of the previous discourse, allowing for more complicated classification probabilities. | contrasting |
train_13627 | However, the model lacked generality, being designed to deal with only one type of sentence. | to both of these earlier models, the model proposed here aims to be general enough to provide estimated reading times for unrestricted text. | contrasting |
train_13628 | Their model has similarities with our own in that it learns correspondences between the alphabets of pairs of languages. | their correspondences are probabilistic and implicit while ours are hard and explicit. | contrasting |
train_13629 | As indicated in the table, the raw number of nominal word types varies quite a bit across the languages, almost doubling from 4,178 (English) to 8,051 (Hungarian). | the number of stems appearing within these words is relatively stable across languages, ranging from a minimum of 3,112 (Bulgarian) to a maximum of 3,746 (Hungarian), an increase of just 20%. | contrasting |
train_13630 | In fact, with only one training language, our method performs worse (on average) than the Linguistica baseline. | with two or more training languages available, our method achieves superior results. | contrasting |
train_13631 | In practice, they are usually trained by maximising the conditional log-likelihood (CLL) of the training data. | it is widely appreciated that optimizing for task-specific metrics often leads to better performance on those tasks (Goodman, 1996;Och, 2003). | contrasting |
train_13632 | The results confirm that the loss-trained models improve over a likelihood-trained baseline, and furthermore that the exact loss functions seem to have the best performance. | the approximations are extremely competitive with their exact counterparts. | contrasting |
train_13633 | For instance, morphological innovations and irregular sound changes can completely obscure relationships between words in different languages. | in the case of reconstruction, an unexplainable word is simply that: one can still correctly reconstruct its ancestor using words from related languages. | contrasting |
train_13634 | Instead, it can at best only select group A or group B as the value for the parent, and leave the other group fragmented as two innovations, as in Figure 3c. | HK10 can recover this relationship (Figure 3d), but this power is precisely what makes it intractable. | contrasting |
train_13635 | For instance, consider the words for "meat/flesh" in the Formosan languages: Squliq /hiP/, Bunun /titiP/, Paiwan /seti/, Kavalan /PisiP/, Central Ami /titi/, Our system groups all of these words except for Squliq /hiP/. | despite these words' similarity, there are actually three cognate groups here. | contrasting |
train_13636 | The table also shows that linearly interpolating the translation models improved the overall BLEU score, as expected. | using multiple decoding paths, and no explicit model merging at all, produced even better results, by 2 BLEU points over the best individual model and 1.3 BLEU over the best interpolated model, which used λ = 0.9. | contrasting |
train_13637 | We presented in Section 5 several methods to improve the performance of a single general-domain translation system by restricting its training corpus on an information-theoretic basis to a very small number of sentences. | section 6.3 shows that using two translation models over all the available data (one in-domain, one general-domain) outperforms any single individual translation model so far, albeit only slightly. | contrasting |
train_13638 | This is a large multilingual corpus, containing sentences translated from several European languages. | it is organized as a collection of bilingual corpora rather than as a single multilingual one, and it is hard to identify sentences that are translated to several languages. | contrasting |
train_13639 | Furthermore, we show that translated LMs are better predictors of translated sentences even when the LMs are compiled from texts translated from languages other than the source language. | LMs based on texts translated from the source language still outperform LMs translated from other languages. | contrasting |
train_13640 | The simplest variation is TESLA-M, based on matching bags of n-grams (BNG) like BLEU. | unlike BLEU, TESLA-M formulates the matching process as a real-valued linear programming problem, thereby allowing the use of weights. | contrasting |
train_13641 | Similarly, TESLA-F and TESLA-M gave different outputs for only 857 sentences, or 34%. | BLEU and TESLA-M gave different translations for 2248 sentences, or 90%. | contrasting |
train_13642 | Interestingly, the human translations average only 22 words, so BLEU and TER translations are in fact much closer on average to the reference lengths, yet their translations often feel too short. | manual inspections reveal no tendency for TESLA-F and TESLA-M to produce overly long translations. | contrasting |
train_13643 | Its abstract goal, on the one hand, could be pre-processing of the linguistic signal, to enable subsequent stages of analysis. | it could be making explicit the (complete) contribution that the grammatical form of the linguistic signal makes to interpretation, working out who did what to whom. | contrasting |
train_13644 | Our barerel category corresponds to their "object reduced relative" category with the difference that we also include adverb relatives, where the head noun functions as a modifier within the relative clause, as does time in (1). | our rnr category is somewhat narrower than Rimell et al. | contrasting |
train_13645 | In Steps (a-c), for the current example i, we compute the relative loss function ℓ_i that scales with the loss achieved by the best and worst possible parses under the model. | to previous work, we do not only compute the loss over a fixed n-best list of possible outputs, but instead use the current model score to recompute the options at each update. | contrasting |
train_13646 | Faced with thousands of news documents, people usually have a myriad of interest aspects about the beginning, the development or the latest situation. | traditional information retrieval techniques can only rank webpages according to their understanding of relevance, which is obviously insufficient (Jin et al., 2010). | contrasting |
train_13647 | Probable bias is enlarged by searching for worthy sentence in single dates. | precision drops due to excessive choice of global timeline-worthy sentences. | contrasting |
train_13648 | In addition, sentences generated by T* are also succinct: with an average length of 6.9 words per sentence. | we are still some way off the human gold standard since we do not predict other parts-of-speech such as adjectives and adverbs. | contrasting |
train_13649 | In order to address this issue, they computed the translation-by-translation correlation with human assessments (i.e., correlation at the sentence level). | correlation with human judgements is not enough to determine the reliability of measures. | contrasting |
train_13650 | Additive Reliability According to the previous properties, corroborating evaluation results with several measures increases the reliability of evaluation results at the cost of sensitivity. | increasing the score threshold of a single measure should have a similar effect. | contrasting |
train_13651 | Still, the pairs {(X_k, Y_k)}_{k=1}^n, and therefore the differences {Z_k}_{k=1}^n, are iid, which is what makes paired testing valid. | there is no theoretical distribution for T from which to calculate valid quantiles c for cutoffs, and therefore the use of the unpaired t-statistic cannot be recommended for TAC evaluation. | contrasting |
train_13652 | 5.4 leading up to Table 5.4.7 on p. 167) that the Wilcoxon signed-rank statistic W provides greater robustness and often much greater efficiency than the paired T, with ARE which is 0.95 with f a standard normal density, and which is never less than 0.864 for any symmetric density f. | in our context, continuous scores such as pyramid exhibit document-specific score differences between summarizers which often have approximately normal-looking histograms, and although the alternatives perhaps cannot be viewed as pure location shifts, it is unsurprising in view of the ARE theory cited above that the W and T paired tests have very similar performance. | contrasting |
train_13653 | In contrast to the human vs. machine case, we do not know the truth here. | since the number of significant differences increases with paired testing here as well, we believe this also reflects the greater discriminatory power of paired testing. | contrasting |
train_13654 | Typically, these sorts of features are probabilities estimated from a corpus parsed using a supervised parser. | there do not currently exist treebanks with annotated phrase [...]. [Table 3: Most probable child phrases for the parent phrase "made up" for each direction, sorted by the conditional probability of the child phrase given the parent phrase and direction.] | contrasting |
train_13655 | If the score of the partial structure can only get worse when combining it with other structures (e.g., in a PCFG), then the first time that we pop an item of type GOAL from the agenda, we are guaranteed to have the best parse. | in our model, some features are positive and others negative, making this property no longer hold; as a result, GOAL items may be popped out of order from the agenda. | contrasting |
train_13656 | Competitive performance is also found by using the unsupervised Chinese parser and supervised English parser (+0.53 over Moses). | when using unsupervised parsers for both languages, performance was below that of Moses. | contrasting |
train_13657 | The system we used is still in an experimental state and probably not quite at the state-of-the-art level yet. | we considered it good enough for our purpose, since we mainly want to test our algorithm in a practical way. | contrasting |
train_13658 | Therefore, it is not reasonable in general to try to compute all these treelets. | we are not really interested in computing all possible treelets. | contrasting |
train_13659 | (2011) show that supervised methods recognize such relations with high accuracy. | large sets of annotated relations need to be provided for this purpose. | contrasting |
train_13660 | Therefore, the PRA approach achieves high recall partially by combining a large number of unspecialized paths, which correspond to unspecialized rules. | learning more accurate specialized paths is part of our future work. | contrasting |
train_13661 | The underlying assumption is that all words in a document constitute the context of the target word. | it is not the case in real world corpora. | contrasting |
train_13662 | A word is assumed to be generated by first sampling a topic, then choosing a path from the root node of the hierarchy to a sense node corresponding to that word. | they only focus on WSD. | contrasting |
train_13663 | We restricted our sample to tweets from accounts which indicated their primary language as English. | there may be some foreign language messages in our dataset, since multi-lingual users may tweet in other languages even though their account is marked as "English". | contrasting |
train_13664 | Previous work has shown that such analysis can be more difficult than topic-based analysis (Pang and Lee, 2008), and we have the additional challenge that comments are typically much shorter than full-length articles. | the difficulty in analyzing the textual information in comments can be alleviated by additional contextual information such as author identities. | contrasting |
train_13665 | Better approaches to estimate similarities have also been proposed in Koren (2010). | modern methods based on matrix factorization have been shown to outperform nearest neighbor methods (Salakhutdinov and Mnih, 2008a,b; Bell et al., 2007). | contrasting |
train_13666 | [Table: pairwise comparisons with metrics and p-values — vv+uc > vv (all metrics, p < 10^-7); vv+uc > uc (all, p < 10^-20); uc > bilinear (all except P@5, p < 0.006); bilinear > svm (all, p < 10^-20); vv > svm (all, p < 10^-20); svm > nb (all, p < 10^-8); nb > cos (all, p < 10^-20).] Not surprisingly, vv performs poorly for raters or authors with no ratings observed in the training data. | once we have a small amount of ratings, it starts to outperform uc, even though intuitively, the textual information in the comment should be more informative than the authorship information alone. | contrasting |
train_13667 | Response generation should also be beneficial in building "chatterbots" (Weizenbaum, 1966) for entertainment purposes or companionship (Wilks, 2006). | we are most excited by the future potential of data-driven response generation when used inside larger dialogue systems, where direct consideration of the user's utterance could be combined with dialogue state (Wong and Mooney, 2007;Langner et al., 2010) to generate locally coherent, purposeful dialogue. | contrasting |
train_13668 | We had expected human annotators to pick up on these fluency errors, giving the advantage to the IR systems. | it appears that MT-CHAT's ability to tailor its response to the status on a fine-grained scale overcame the disadvantage of occasionally introducing fluency errors. | contrasting |
train_13669 | Consider predicting the number of downloads over g future time steps. | if t is the time of forecasting, we can observe the texts of all articles published before t. Any article published in the interval [t − g, t] is too recent for the outcome measurement of y to be taken. | contrasting |
train_13670 | According to our segmentation criteria, it consists of two words "常山" (tsuneyama) and "城" (jou). | the morphological analyzer wrongly segments it into "常" (tsune) and "山城" (yamashiro) because "常山" (tsuneyama) is an unknown word. | contrasting |
train_13671 | Type-based sampling (Liang et al., 2010) has the ability to directly escape a local optimum, making inference very efficient. | type-based sampling is not easily applicable to the bigram model owing to sparsity and its dependence on latent assignments. | contrasting |
train_13672 | One may instead consider a pipeline approach in which we first extract noun phrases in text and then identify boundaries within these noun phrases. | noun phrases in text are not trivially identifiable in the case that they contain unknown words as their constituents. | contrasting |
train_13673 | Up to this point, we consider every possible boundary position. | this seems wasteful, given that a large portion of text has only marginal influence on the segmentation of the noun phrase in question. | contrasting |
train_13674 | For G to be a uniform distribution over an infinite lexeme set L, we need L to be uncountable. | it turns out that with probability 1, each L_t is countably infinite, and all the L_t are disjoint. | contrasting |
train_13675 | rare forms (Bin 1), which are mostly regular, and worst on the 10 most frequent forms of the language (Bin 5). | adding a corpus helps most in fixing the errors in bins with more frequent and hence more irregular verbs: in Bins 2-5 we observe improvements of up to almost 8% absolute percentage points. | contrasting |
train_13676 | In this model, inference on sequences is modeled as cascaded decision. | the decision on a sequence labeling sequel to other decisions utilizes the features on the preceding results as marginalized by the probabilistic models on them. | contrasting |
train_13677 | Such a framework can maximize reusability of existing sequence labeling systems. | it exhibits a strong tendency to propagate errors to upper labelers. | contrasting |
train_13678 | It enables simultaneous learning and estimation of multiple sequence labelings on the same input sequences, where time slices of the outputs of all the out sequences are regularly aligned. | it puts the distribution of states into Bayesian networks with cyclic dependencies, and exact inference is not tractable in such a model in general. | contrasting |
train_13679 | From the definition of h_1 (3), each element of the third factor of (13), ∂h_1/∂θ_1, becomes [...]. There exists efficient dynamic programming to calculate the covariance value (17) (without going into that detail because it is very similar to the one shown later in this paper), and of course we can run such dynamic programming for all k′_1 ∈ K′_1, e_1 ∈ E_1. | the size of the Jacobian ∂h_1/∂θ_1 is equal to [...]. Since it is too large in many tasks likely to arise in practice, we should avoid calculating all the elements of this Jacobian in a straightforward way. | contrasting |
train_13680 | The other baseline method has a CRF model for the chunking labeling, which uses the marginalized features offered by the POS labeler. | the parameters of the POS labeler are fixed in the training of the chunking model. | contrasting |
train_13681 | Note that this model with multiple context features is deficient: it can generate data that are inconsistent with any actual corpus, because there is no mechanism to constrain the left context word of token e_i to be the same as the right context word of token e_{i-1} (and similarly with alignment features). | deficient models have proven useful in other unsupervised NLP tasks (Klein and Manning, 2002; Toutanova and Johnson, 2007). | contrasting |
train_13682 | Distance/similarity feature spaces are more suitable to the paraphrase detection task because they model the similarity between the two texts. | entailment trigger and content feature spaces model complex relations between the texts, taking into account first-order entailment rules, i.e. | contrasting |
train_13683 | For example, consider the following feature: This feature is active for the pair ("GM bought Opel","GM owns Opel"), with the variable unification X = "GM" and Y = "Opel". | this feature is not active for the pair ("GM bought Opel","Opel owns GM") as there is no possibility of unifying the two variables. | contrasting |
train_13684 | In the qualitative analysis that follows, we show some examples that support this intuition. | syntax plays a key role for detecting redundancy. | contrasting |
train_13685 | Second, their approach involves qualitative analysis of the collected data only a posteriori, after manual removal of invalid and trivial generated hypotheses. | our approach integrates quality control mechanisms at all stages of the data collection/annotation process, thus minimizing the recourse to experts to check the quality of the collected material. | contrasting |
train_13686 | On the one hand, as expected, the more creative "Add Info" task proved to be more demanding than the "Remove Info": even though it was paid more, it still took a little more time to be completed. | although the "Unidirectional Entailment" task was expected to be more difficult and thus rewarded more than the "Bidirectional Entailment" one, in the end it took notably less time to be completed. | contrasting |
train_13687 | This avoids the need for gathering training data on every verb or adjective for which we want to determine whether it is being used metaphorically or literally, since the algorithm is not sensitive to the specific target word. | the performance might improve if the target word were included in the feature vectors. | contrasting |
train_13688 | By modifying the situation given to workers, it is likely we can expand our collection to better represent other groups of AAC users, such as those using predictive keyboards or eye-trackers. | obtaining data representative of users with cognitive or language impairments via crowdsourcing would probably be difficult. | contrasting |
train_13689 | Our evaluation of this false positive set showed that its accuracy dropped by 6% compared to the NoDef system. | the StatDef system outperformed the two other systems, and its accuracy improvement upon the RuleDef system is statistically significant at p<0.05. | contrasting |
train_13690 | The results were derived by experimenting with a TREC dataset (Li and Roth, 2002), reaching an accuracy of 91.8%. | such data refers to typical instances from QA, whose syntactic patterns can be easily generalized by STK. | contrasting |
train_13691 | However, such data refers to typical instances from QA, whose syntactic patterns can be easily generalized by STK. | we have shown that STK-CT is not effective for our domain, as it presents very innovative elements: questions in affirmative and highly variable format. | contrasting |
train_13692 | We found that this problem has been largely resolved in the current release. | 1,949 tokens and 36 MWE spans still lacked tags. | contrasting |
train_13693 | We used the 80/10/10 split described by Crabbé and Candito (2008). | they used a previous release of the treebank with 12,531 trees. | contrasting |
train_13694 | Integrating out the parameters, we have that (1) and (2) above have the same type since t(z, s_1) = t(z, s_2). | the two sites conflict since the probabilities of setting b_{s_1} and b_{s_2} both depend on counts for the tree fragment rooted at NP. | contrasting |
train_13695 | It also matches the ordering for English (Cohn et al., 2010;Liang et al., 2010). | the standard baseline for TSG models is a simple parent-annotated PCFG (PA-PCFG). | contrasting |
train_13696 | We chose French for these experiments due to the pervasiveness of MWEs and the availability of an annotated corpus. | MWE lists and syntactic treebanks exist for many of the world's major languages. | contrasting |
train_13697 | For example, /lkn/but almost always signals a CONTRAST relation. | there are connectives where this is not the case, such as /mnd/since, which has a CAUSAL and a TEMPORAL sense. | contrasting |
train_13698 | For example, the arguments of the relation TEMPORAL.SYNCHRONOUS are likely to have the same tense. | arg1 tense is more likely to be prior to arg2 tense for TEMPORAL.ASYNCHRONOUS and CAUSE relations. | contrasting |
train_13699 | We designed our definitions and guidelines to reflect language use in the text genre of message board posts, trying to be as domain-independent as possible so that these definitions should also apply to message board texts representing other topics. | we give examples from the veterinary domain to illustrate how these speech act classes are manifested in our data set. | contrasting |
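For readers who want to work with these rows programmatically, below is a minimal sketch using the Hugging Face `datasets` library. The repository identifier `user/acl-contrasting-pairs` is a placeholder, not this dataset's actual path; the column names and the `train` split follow the table above.

```python
# Minimal sketch: load the dataset and reproduce the header statistics.
# The repository ID below is hypothetical -- replace it with the real path.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("user/acl-contrasting-pairs", split="train")  # hypothetical ID

# Columns match the table header: id, sentence1, sentence2, label.
print(ds.column_names)

# Inspect one sentence pair and its discourse label.
row = ds[0]
print(row["id"], "->", row["label"])
print("S1:", row["sentence1"])
print("S2:", row["sentence2"])

# Recompute the string-length ranges reported in the header
# (e.g., sentence1: 6 to ~1.27k characters).
for col in ("sentence1", "sentence2"):
    lengths = [len(s) for s in ds[col]]
    print(col, min(lengths), max(lengths))

# `label` is a string class with 4 values; count their frequencies.
print(Counter(ds["label"]))
```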