id (stringlengths 7-12) | sentence1 (stringlengths 6-1.27k) | sentence2 (stringlengths 6-926) | label (stringclasses, 4 values) |
---|---|---|---|
train_13000 | Model III, as an interpolation of the above two models, achieves a much better F-measure on GEO-QUERY corpus. | it is shown to be less effective on ROBOCUP corpus. | contrasting |
train_13001 | (2006) and Hickl and Bensley (2007). | each of these systems has pursued alignment in idiosyncratic and poorly-documented ways, often using proprietary data, making comparisons and further development difficult. | contrasting |
train_13002 | The low recall figures are particularly noteworthy. | a partial explanation is readily available: by design, the Stanford system ignores punctuation. | contrasting |
train_13003 | 12 Because punctuation tokens constitute about 15% of the aligned pairs in the MSR data, this sharply reduces measured recall. | since punctuation matters little in inference, such recall errors probably should be forgiven. | contrasting |
train_13004 | Thus, these knowledge was integrated in the representation level, and then the joint probabilities of words and corresponding SuperARVs were estimated. | in the class-based language models, words are taken as the model units, while other units smaller or larger than words are unfeasible for modeling simultaneously, such as the Chinese characters for Chinese names. | contrasting |
train_13005 | Complementary to our work, this technique also utilizes segmentation and metadata information. | our model enables the simultaneous use of all metadata attributes by combining features derived from different partitions of the training documents. | contrasting |
train_13006 | In theory, n-gram weighting can be applied to any smoothing algorithm based on counts. | because many of these algorithms assume integer counts, we will apply the weighting factors to the smoothed counts, instead. | contrasting |
train_13007 | The interpolated model with n-gram weighting achieves perplexity improvements roughly additive of the reductions obtained with the individual models. | the 1.0% WER drop for the interpolated model significantly exceeds the sum of the individual reductions. | contrasting |
train_13008 | In terms of WER, the Random feature again shows no effect on the baseline WER of 33.7%. | to our surprise, the use of the simple log(c) feature achieves nearly the same WER improvement as the best segmentation-based feature, whereas the more sophisticated features computed from HMM-LDA labels only obtain half of the reduction even though they have the best perplexities. | contrasting |
train_13009 | As expected, the use of more sophisticated interpolation techniques decreases the perplexity and WER reductions achieved by n-gram weighting by roughly half for a variety of feature combinations. | all improvements remain statistically significant. | contrasting |
train_13010 | As expected, GLI outperforms both LI and CM. | whereas LI and CM essentially converge in test set perplexity with only 100 words of devel- opment data, it takes about 500 words before GLI converges due to the increased number of parameters. | contrasting |
train_13011 | A manual approach might take the category NP and subdivide it into one subcategory NPˆS for subjects and another subcategory NPˆVP for objects (Johnson, 1998;Klein and Manning, 2003). | rather than devising linguistically motivated features or splits, latent variable parsing takes a fully automated approach, in which each symbol is split into unconstrained subcategories. | contrasting |
train_13012 | (2006), where categories were repeatedly split and some splits were re-merged if the gains were too small. | while the grammars are indeed compact at the (sub-)category level, they are still dense at the production level, which we address here. | contrasting |
train_13013 | Since it can occur in subject and object position, the production NP → it has remained unsplit. | in a single-scale grammar, two productions NP 0 → it and NP 1 → it would have been necessary. | contrasting |
train_13014 | The original unsplit production (at top) would naively be split into a tree of many subproductions (downward in the diagram) as the grammar categories are incrementally split. | it may be that many of the fully refined productions share the same weights. | contrasting |
train_13015 | It is possible to directly define a derivational semantics for multi-scale grammars which does not appeal to the underlying single scale grammar. | in the present work, we use our multiscale grammars only to compute expectations of the underlying grammars in an efficient, implicit way. | contrasting |
train_13016 | Head word alignment features: When considering a node pair (n, n ), especially one which dominates a large area, the above measures treat all spanned words as equally important. | lexical heads are generally more representative than other spanned words. | contrasting |
train_13017 | Then, we simplify (2) by fixing the alignments a 0 : This optimization has no latent variables and is therefore convex and straightforward. | while we did use this as a rapid training procedure during development, fixing the alignments a priori is both unsatisfying and also less effective than a procedure which allows the alignments a to adapt during training. | contrasting |
train_13018 | Although a real estimate of the impact of a parser design decision in this scenario can only be gauged from the quality of the translations produced, it is impractical to create such estimates for each design decision. | estimates using the solution proposed in this paper can be obtained fast, before submitting the parser output to a costly training procedure. | contrasting |
train_13019 | Note that this feature is meant as a refinement of the previous UNK feature, in the sense that perplexity numbers are meant to signal the occurrence of unknown words, as well as rare (from the training data perspective) words. | the correlation we observe for this feature is similar to the correlation observed for the UNK feature, which seems to suggest that the smoothing techniques used by the parsers employed in these experiments lead to correct treatment of the rare words. | contrasting |
train_13020 | The experiments shown in the paper were limited to readily available statistical parsers (which are widely deployed in a number of applications), and certain domains/genres (because of ready access to gold-standard data on which we could verify predictions). | the features we use in our predictor are independent of the particular type of parser or domain, and the same technique could be applied for making predictions on other parsers as well. | contrasting |
train_13021 | To avoid overfitting, we consider only the two models that performed optimally in in the SYN space in Experiment 1 (SELPREF-POW with n=30 and M&L). | since we found that vectors with raw frequency components could model the data, while PMI components could not, we only report the former. | contrasting |
train_13022 | In this work, the features encoded are binary. | features can be assugned numeric weights that corresponds to the probability of the indicator being true for any path between x and z ij (Cohen and . | contrasting |
train_13023 | 3 Obviously, the notion of local syntactic coherence only captures some aspects of syntax -e.g., it does not capture long-distance dependencies. | it is a plausible component of syntactic competence and a plausible intermediate step in the acquisition of syntax. | contrasting |
train_13024 | This is surprising because linguistically and computationally syntagmatic and paradigmatic relations are fundamentally different. | on closer inspection, we observe that limiting the number of iterations is often beneficial when computing solutions to a problem iteratively. | contrasting |
train_13025 | Features are the key to obtain an accurate question classifier. | to Li and Roth (2002)'s approach which makes use of very rich feature space, we propose a compact yet effective feature set. | contrasting |
train_13026 | The word of turkeys in the chunk (or span) contributes to the classification of type ENTY:animal if the hypernyms of WordNet are employed (as described in next section). | the extra word group would introduce ambiguity to misclassify such question into HUMAN:group, as all words appearing in chunk are treated equally. | contrasting |
train_13027 | It is not surprising that the word shape feature only achieves small gain in question classification, as the use of five shape type does not provide enough information for question classification. | this feature is treated as an auxiliary one to boost a good classifier, as we will see in the second experiment. | contrasting |
train_13028 | For instance, the head word extracted from What is the speed hummingbirds fly is hummingbirds (the correct one should be speed), thus leading to the incorrect classification of ENTY:animal (rather than the correct NUM:speed). | to Li and Roth (2006)'s approach which makes use of very rich feature space, we proposed a compact yet effective feature set. | contrasting |
train_13029 | The relatively high ratio of subjective questions in the Science category is surprising. | we find that users often post polemics and statements instead of questions, using CQA as a forum to share their opinions on controversial topics. | contrasting |
train_13030 | We can use both of them to obtain better results in the offline setting, while in online setting, we can use the text of the question alone. | gE may not have this flexibility. | contrasting |
train_13031 | Question analysis, especially question classification, has been long studied in the question answering research community. | most of the previous research primarily considered factual questions, with the notable exception of the most recent TREC opinion QA track. | contrasting |
train_13032 | SEAL's extractor requires the longest common contexts to bracket at least one instance of every seed per web page. | when seeds are noisy, such common contexts usually do not exist or are too short to be useful. | contrasting |
train_13033 | The improvements are most apparent for the TREC 13 dataset, where Ephyra has a much higher performance compared to TREC 14 and 15. | the best-configured SEAL did not improve the F 1 score on TREC 14, as reported in Table 5. | contrasting |
train_13034 | When predicted segments were clustered, the quality of the output (2 nd row) is not as good as when the reference segments were used (1 st row) as inaccurate segment boundaries affected the performance of the clustering algorithm. | the qualities of subtasks that occur frequently are not much different. | contrasting |
train_13035 | As we can see, in clean conditions the accuracy of the 3 features is quite high but as the noise conditions increase the accuracy of the 3 techniques decreases substantially. | the TF feature is much more sensitive to noise than the other two techniques. | contrasting |
train_13036 | Otherwise, if we look also into the future, we could just do language identification to extract the CS points. | our goal is to provide methods that can be used in real time applications, where we do not have access to observations beyond the point of interest. | contrasting |
train_13037 | It should be noted that the Spanglish data set is a transcription of spoken CS. | this new evaluation set contains only written CS. | contrasting |
train_13038 | 1 It should be noted that absolute values in the range between 0 and 1 are meaningless by themselves. | if a set of word pairs is shown to consistently have higher values than another set, then we can conclude that the members of the former set tend to be semantically closer than those of the latter. | contrasting |
train_13039 | Of course, not all of them are antonymous, for example sect-insect and coy-decoy. | these are relatively few in Table 1: Sixteen affix rules to generate antonym pairs. | contrasting |
train_13040 | If antonymous occurrences are to be exploited for any of the purposes listed in the beginning of this paper, then the text must be sense disambiguated. | word sense disambiguation is a hard problem. | contrasting |
train_13041 | Previous studies have mostly focused on the idiom type identification (Lin, 1999;Krenn and Evert, 2001;Baldwin et al., 2003;Shudo et al., 2004;Fazly and Stevenson, 2006). | there has been a growing interest in idiom token identification in recent times (Katz and Giesbrecht, 2006;Hashimoto et al., 2006b;Hashimoto et al., 2006a;Birke and Sarkar, 2006;Cook et al., 2007). | contrasting |
train_13042 | This suggests that when one wishes to apply a WSD system to a new domain of interest, it is worth the effort to annotate a small number of examples gathered from the new domain. | instead of randomly selecting in-domain examples to annotate, we could use active learning (Lewis and Gale, 1994) to help select in-domain examples to annotate. | contrasting |
train_13043 | Also, these prior research efforts only experimented with a few word types. | we perform active learning experiments on the hundreds of word types in the OntoNotes data, with the aim of adapting our WSD system trained on SEMCOR to the WSJ domain represented by the OntoNotes data. | contrasting |
train_13044 | Indeed, the ranking of the instances after convergence was identical to the HITS authority ranking computed from instance-pattern matrix M (i.e., the ranking induced by the dominant eigenvector of M T M ). | filtered Espresso suffers less from semantic drift. | contrasting |
train_13045 | The final recall achieved was 0.773 after convergence on the 20th iteration, outperforming the most-frequent sense baseline by 0.10. | a closer look reveals that the filtering heuristics is limited in effectiveness. | contrasting |
train_13046 | Filtered Espresso halted after the seventh iteration (Filtered Espresso (optimal stopping)) is comparable to the proposed methods. | in bootstrapping, not only the number of iterations but also a large number of parameters must be adjusted for each task and domain. | contrasting |
train_13047 | Since search queries are a fundamental part of the information retrieval task, it is essential that we interpret them correctly. | the variable forms queries take complicate interpretation significantly. | contrasting |
train_13048 | We can conclude from this that capitalization in a mixed-case query is a fair indicator that a word is a proper noun. | the great majority of queries contain no informative capitalization, so the great majority of proper nouns in search queries must be uncapitalized. | contrasting |
train_13049 | We did not see a significant improvement in this metric. | we feel that our feature's high ranking warrants reporting and hints at a potentially genuine boost in retrieval performance in a system less feature-rich. | contrasting |
train_13050 | For example, in a machine translation system (Koehn et al., 2007), if the bilingual training data does not contain the word " " (the second example in Table 1), it leaves the word untranslated. | if the word " " does appear in the training data but it has only a translation "gruel" as that is the meaning in the formal text, the translation system may wrongly translate " " into "gruel" for the informal text where the word " " is more likely to mean "like". | contrasting |
train_13051 | Rule-driven Hypothesis Generation: One can use the rules described in Section 2 to generate a set of hypotheses. | with this approach, one may generate an exponential number of hypotheses. | contrasting |
train_13052 | For each candidate relation (x, y), we use the pair as a query to search the web, and treat the number of pages returned by the search engine as a feature value. | 3 these features are quite expensive as millions of queries may need to be served. | contrasting |
train_13053 | More formally, if we denote the proposal distribution as Q(z), the target distribution as P (z), and the previous sample as z, then the probability of accepting a new sample z * ∼ Q is set at: Theoretically any non-degenerate proposal distribution may be used. | a higher acceptance rate and faster convergence is achieved when the proposal Q is a close approximation of P . | contrasting |
train_13054 | We also explore the use of different language identification methods to select POS tags from the appropriate monolingual tagger. | the best results are achieved by a machine learning approach using features generated by the monolingual POS taggers. | contrasting |
train_13055 | To evaluate their language model, they asked a human subject to judge sentences generated by a PCFG induced from training data and the language model. | they only used one human judge. | contrasting |
train_13056 | Our original motivation came from the lack of linguistic resources to process Spanglish text. | we did train from scratch a sequential model for POS tagging Spanglish, namely Conditional Random Fields (CRFs) (Lafferty et al., 2001). | contrasting |
train_13057 | In the error analysis we showed that most of the mistakes made by the language identification method, and the oracle itself, occur in sentences with intrasentential code-switching, showing the difficulty of the task. | our machine learning approach was less sensitive to the complexity of this alternation pattern. | contrasting |
train_13058 | There are some researches show that when compound words are split into smaller constituents, better retrieval results can be achieved (Peng et al., 2002a). | it is reasonable that the longer the word which co-exists in query and corpus, the more similarity they may have. | contrasting |
train_13059 | Since A is on the decision boundary, it will be queried as the most uncertain. | querying B is likely to result in more information about the data as a whole. | contrasting |
train_13060 | For pool-based active learning, we often assume that the size of U is very large. | these densities only need to be computed once, and are independent of the base information measure. | contrasting |
train_13061 | In most interesting natural language applications, K is very large, making this algorithm intractable. | it is common in similar situations to approximate the Fisher information matrix with its diagonal (Nyffenegger et al., 2006). | contrasting |
train_13062 | Though the time complexity of the algorithm given by (Jiang and Ng, 2006) is also linear, it should assume all feature templates in the initial selected set 'good' enough and handles other feature template candidates in a strict incremental way. | these two constraints are not easily satisfied in our case, while Algorithm 1 may release these two constraints. | contrasting |
train_13063 | A syntactic-semantic reranking was performed to output the final results according to (Johansson and Nugues, 2008). | only 1-best outputs of the parser before reranking are used for our evaluation. | contrasting |
train_13064 | Under the first-order expectation semiring E R,R n , the inside algorithm of Figure 2 will return Z, r where r is a vector of n feature expectations. | eisner (2002, section 5) observes that this is inefficient when n is large. | contrasting |
train_13065 | Thus, we must be finding ∇Z by formulating it as a certain expectation r. Specifically, to be rs T , a matrix. | when using this semiring to compute second derivatives (Case 2) or covariances, one may exploit the invariant that r = s, e.g., to avoid storing s and to compute r1s2 + s1r2 in multiplication simply as 2 • r1r2. | contrasting |
train_13066 | 18 This shows that DA helps with the local minimum problem, as hoped. | dA's improvement on the dev set did not transfer to the test set. | contrasting |
train_13067 | Whereas this is composable in MERT, as they can be recalculated at each critical point. | this would slow down the optimization process quite a bit, since one cannot traverse the dimension by simply adjusting the sufficient statistics to reflect changes in 1-best candidates. | contrasting |
train_13068 | The cost would further grow linearly with the number of MERT iterations and the n-best list size. | optimizing for BLEU takes on the order of minutes per iteration, and costs nothing. | contrasting |
train_13069 | This method is very attractive, since it opens the door to rich lexical features. | in order to robustly optimize the feature weights, one has to use a substantially large development set, which results in significantly slower tuning. | contrasting |
train_13070 | A possible generic solution is to cluster the lexical features in some way. | how to make it work on such a large space of bi-lingual features is still an open question. | contrasting |
train_13071 | This method avoids the over-fitting problem, at the expense of losing the benefit of discriminative training of rich features directly for MT. | the feature space problem still exists in these published models. | contrasting |
train_13072 | Similar ideas were explored in (He et al., 2008). | their length features only provided insignificant improvement of 0.1 BLEU point. | contrasting |
train_13073 | Some candidate rules were thrown away due to the source side constraint. | one string-to-dependency rule may split into several dependency-to-dependency rules due to different source dependency structures. | contrasting |
train_13074 | introduced features defined on constituent labels to improve the Hiero system (Chiang, 2005). | due to the limitation of MER training, only part of the feature space could used in the system. | contrasting |
train_13075 | Features similar to those already labeled are likely to be discriminative, and therefore likely to be labeled (rather than skipped). | insufficient diversity may also result in an inaccurate model, suggesting that coverage should select more useful queries than similarity. | contrasting |
train_13076 | In order to better understand differences in feature query selection methodology, we proposed a feature query selection method motivated 6 by the method used in Tandem Learning in Section 4.1. | this method performs poorly in the experiments in Section 5. | contrasting |
train_13077 | This is typically done between string pairs, where a pronunciation is mapped to its spelling, an inflected form to its lemma, a spelling variant to its canonical spelling, or a name is transliterated from one alphabet into another. | many problems involve more than just two strings: • in morphology, the inflected forms of a (possibly irregular) verb are naturally considered together as a whole morphological paradigm in which different forms reinforce one another; • mapping an English word to its foreign transliteration may be easier when one considers the orthographic and phonological forms of both words; • similar cognates in multiple languages are naturally described together, in orthographic or phonological representations, or both; • modern and ancestral word forms form a phylogenetic tree in historical linguistics; • in bioinformatics and in system combination, multiple sequences need to be aligned in order to identify regions of similarity. | contrasting |
train_13078 | The alignment CRF (AlignCRF) model described in Section 3.1 is able to predict labels for a text sequence given a matching DB record. | without corresponding records for texts the model does not perform well as an extractor because it has learned to rely on the DB record and alignment features (Sutton et al., 2006). | contrasting |
train_13079 | Both our models have a high F1 value for the other label O because we provide our algorithm with constraints for the label O. | since there is no realization of the O field in the DB records, both M-CRF and M+R-CRF methods fail to label such tokens correctly. | contrasting |
train_13080 | In the AnCora corpus of Spanish and Catalan newspaper text (Martí et al., 2007), nearly half of the entities are embedded. | work on named entity recognition (NER) has almost entirely ignored nested entities and instead chosen to focus on the outermost entities. | contrasting |
train_13081 | They collapsed all DNA subtypes into DNA; all RNA subtypes into RNA; all protein subtypes into protein; kept cell line and cell type; and removed all other entities. | they also removed all embedded entities, while we kept them. | contrasting |
train_13082 | For example, some phrases explicitly refer to an event, so they almost certainly warrant extraction regardless of the wider context (e.g., terrorists launched an attack). | 1 some phrases are potentially relevant but too general to warrant extraction on their own (e.g., people died could be the result of different incident types). | contrasting |
train_13083 | For example, most readers would agree that sentence (1) below is describing a terrorist event, while sen- Table 4: Sentential Event Recognizer Results for MUC-4 using 1300 Documents for Training tence (2) is not. | it is difficult to draw a clear line. | contrasting |
train_13084 | For the Disease role, the Fscore jumped 6%, from 0.43 for the best baseline systems (AutoSlog-TS and the NB baseline) to 0.49 for GLACIER NB/NB . | to the MUC-4 data, this improvement was mostly due to an increase in precision (up to 0.41), indicating that our unified IE model was effective at eliminating many false hits. | contrasting |
train_13085 | On this event role, the F-score of GLACIER NB/NB (0.44) matches that of the best baseline system (Sem Affinity, with 0.44). | note that GLACIER NB/NB can achieve a 5% gain in recall over this baseline, at the cost of a 3% precision loss. | contrasting |
train_13086 | Researchers, such as (Polanyi and Zaenen, 2006), have discussed how the discourse structure can influence opinion interpretation; and previous work, such as (Asher et al., 2008;Somasundaran et al., 2008), have developed annota-tion schemes for interpreting opinions with discourse relations. | they do not empirically demonstrate how automatic methods can use their ideas to improve polarity classification. | contrasting |
train_13087 | Here, in interpreting the ambiguous opinion a bit different as being positive, we use the knowledge that it participates in a reinforcing discourse, and that all its neighbors (e.g., ergonomic, durable) are positive opinions regarding the same thing. | if it had been a non-reinforcing discourse, then the polarity of a bit different, when viewed with respect to the other opinions, could have been interpreted as negative. | contrasting |
train_13088 | Our first baseline, Base, is a simple distributionbased classifier that classifies the test data based on the overall distribution of the classes in the training data. | in Table 3, the class distribution is different for the Connected and Singleton conditions. | contrasting |
train_13089 | Similarly, "being different" could be deemed negative in other discourse contexts. | iLP is able to arrive at the correct predictions for all the instances. | contrasting |
train_13090 | For example, in the sentence, if your Nokia phone is not good, buy this great Samsung phone, the author is positive about "Samsung phone" but does not express an opinion on "Nokia phone" (although the owner of the "Nokia phone" may be negative about it). | if the sentence does not have "if", the first clause is clearly negative. | contrasting |
train_13091 | A large majority of conditional sentences are introduced by the subordinating conjunction If. | there are also many other conditional connectives, e.g., even if, unless, in case, assuming/supposing, as long as, etc. | contrasting |
train_13092 | Popular types of conditionals include actualization conditionals, inferential conditionals, implicative conditionals, etc (Declerck and Reed, 2001). | these classifications are mainly based on semantic meanings which are difficult to recognize by a computer program. | contrasting |
train_13093 | The lexicon covers a substantial subset of the subjective expressions in the corpus: 67.1% of the subjective expressions contain one or more lexicon entries. | fully 42.9% of the instances of the lexicon entries in the MPQA corpus are not in subjective expressions. | contrasting |
train_13094 | For our experiments, OP gives a better idea of the impact of SWSD, because most of the keyword instances SWSD disambiguates are weaksubj clues, and weaksubj keywords figure more prominently in objective classification. | rE has both lower OP and SP than O rB . | contrasting |
train_13095 | The overall improvements for this language pair are +1% BLEU and -1.4% TER. | to the GALE Chinese-English task, the triplet lexicon model for the Arabic-English language pair performs slightly better than the discriminative word lexicon. | contrasting |
train_13096 | For example, in a given bitext sentence, the Arabic word AlAw-DAE might be translated as situational, for which there might be no support in the alignment lexicon. | the PMI between AlAwDAE and situation might be sufficiently high. | contrasting |
train_13097 | Alignment in that case is straightforward. | some Arabic prepositions and even more English prepositions do not have an explicit counterpart on the other side. | contrasting |
train_13098 | Our approach needs and uses a parser for only one side (English) and not for the other (Arabic). | some of the components of this aligner are language-specific, such as word order heuristics, the list of specific function words, and morphological variation lists. | contrasting |
train_13099 | The derivation of the algorithm so far has focused on its relationship to LDA. | labeled lDA can also be seen as an extension of the event model of a traditional Multinomial Naive Bayes classifier (McCallum and Nigam, 1998) by the introduction of a mixture model. | contrasting |