id (string, 7-12 chars) | sentence1 (string, 6-1.27k chars) | sentence2 (string, 6-926 chars) | label (string, 4 classes) |
---|---|---|---|
train_1600 | Producing this kind of knowledge is extremely costly: at a throughput of one sense annotation per minute (Edmonds, 2000) and tagging one thousand examples per word, dozens of person-years would be required for enabling a supervised classifier to disambiguate all the words in the English lexicon with high accuracy. | knowledge-based approaches exploit the information contained in wide-coverage lexical resources, such as WordNet (Fellbaum, 1998). | contrasting |
train_1601 | In contrast, knowledge-based approaches exploit the information contained in wide-coverage lexical resources, such as WordNet (Fellbaum, 1998). | it has been demonstrated that the amount of lexical and semantic information contained in such resources is typically insufficient for high-performance WSD (Cuadros and Rigau, 2006). | contrasting |
train_1602 | The better performance of Wikipedia against WordNet when using ExtLesk (+3.7%) highlights the quality of the relations extracted. | no such improvement is found with Degree, due to its lower recall. | contrasting |
train_1603 | In this paper, we focused on English Word Sense Disambiguation. | since WordNet++ is part of a multilingual semantic network (Navigli and Ponzetto, 2010), we plan to explore the impact of this knowledge in a multilingual setting. | contrasting |
train_1604 | (2009b) for unsupervised and knowledge based approaches respectively have cast a doubt on the viability of supervised approaches which rely on sense tagged corpora. | these conclusions were drawn only from the performance on certain target words, leaving open the question of their utility in all words WSD. | contrasting |
train_1605 | Our modifications are described as follows. JCN for Verb-Verb Similarity: In our implementation of the In-Degree algorithm, we use the JCN similarity measure for Noun-Noun similarity calculation, similar to SM07. | different from SM07, instead of using LCH for Verb-Verb similarity, we use the JCN metric as it yields better performance in our experiments. | contrasting |
train_1606 | We still rely on parallel corpora, but we extract typesets based on the intersection of word alignments in both alignment directions, using more advanced GIZA++ machinery. | to DR02, we experiment with all four POS: Verbs (V), Nouns (N), Adjectives (A) and Adverbs (R). | contrasting |
train_1607 | In this combination scheme, the words in the typeset that result from the TransCont approach are added to the context of the target word in the RelCont approach. | the typeset words are not treated the same as the words that come from the surrounding context in the In-Degree algorithm as we recognize that words that are yielded in the typesets are semantically similar in terms of content rather than being co-occurring words as is the case for contextual words in Rel-Cont. | contrasting |
train_1608 | contiguous words belonging to the same semantic stack are modelled as an atomic observation unit or phrase. | with word-level models, a major advantage of phrase-based generation models is that they can model long-range dependencies and domain-specific idiomatic phrases with fewer parameters. | contrasting |
train_1609 | Again, we look for nodes that can be merged based on the identity of the actions involved and the (WordNet) similarity of their arguments. | we disallow the merging of nodes with focal entities appearing in the same argument slot (e.g., "[prince, princess] cries"). | contrasting |
train_1610 | Ideally, the realizer should also select an appropriate tense for the sentence. | we make the simplifying assumption that all sentences are in the present tense. | contrasting |
train_1611 | Compared to previous work (e.g., Karamanis and Manurung 2002) our crossover rate may seem low and the mutation rate high. | it makes intuitive sense, as high crossover may lead to incoherence by disrupting canonical action sequences found in the plots. | contrasting |
train_1612 | However, it makes intuitive sense, as high crossover may lead to incoherence by disrupting canonical action sequences found in the plots. | a higher mutation rate will raise the likelihood of a lexical item being swapped for another and may improve overall coherence and interest. | contrasting |
train_1613 | Let's say, for example, that we have a knowledge base. Then we can combine instances of the trees for "John", "pushes", and "the button" into a grammatically complete derivation. | because both b_1 and b_2 satisfy the semantic content of "the button", we must adjoin "red" into the derivation to make the RE refer uniquely to b_1. | contrasting |
train_1614 | "upper left" vs. "left upper" example above). | there are also other constraints at work: "? | contrasting |
train_1615 | We did not find any significant differences in the success rates or task completion times between this system and SCRISP, but the former achieved a higher RE success rate (see Table 2). | a closer analysis shows that SCRISP was able to generate REs from significantly further away. | contrasting |
train_1616 | Dyer (2009) also employed a segmentation lattice, which represents ambiguities of compound word segmentation in German, Hungarian and Turkish translation. | to the best of our knowledge, there is no work which employed a lattice representing paraphrases of an input sentence. | contrasting |
train_1617 | However, to the best of our knowledge, there is no work which employed a lattice representing paraphrases of an input sentence. | paraphrasing has been used to enrich the SMT model. | contrasting |
train_1618 | Linguistic parse trees can provide very useful reordering constraints for SMT. | they are far from perfect because of both parsing errors and the crossing of the constituents and formal phrases extracted from parallel training data. | contrasting |
train_1619 | E_n itself may have more states than E due to the association of distinct n-gram histories with states. | the counting transducer for unigrams is simpler than the corresponding counting transducer for higher-order n-grams. | contrasting |
train_1620 | The log semiring ǫ-removal and determinization required to sum the probabilities of paths labelled with each u can be slow. | if we use the proposed Ψ_n^R, then each path in E_n • Ψ_n^R has only one non-ǫ output label u and all paths leading to a given final state share the same u. | contrasting |
train_1621 | These features, which we denote f(n_i, g_j, D), may depend on their relative position in the document D, and on any features of g_j, since we have already generated its tree. | we cannot extract features from the subtree under n_i, since we have yet to generate it! | contrasting |
train_1622 | We found that the interpolation approach resulted in a substantial improvement in the performance of the PCFG model for all but the Football dataset (discussed below). | for some datasets, even this improvement was not sufficient to outperform the best baseline. | contrasting |
train_1623 | This provides a measure of robustness, and previous evaluations of ASR in spoken tutorial dialogue systems indicate that neither word error rate nor concept error rate in such systems affect learning gain (Litman and Forbes-Riley, 2005;Pon-Barry et al., 2004). | limiting the range of possible input limits the contentful talk that the students are expected to produce, and therefore may limit the overall effectiveness of the system. | contrasting |
train_1624 | In informal comments after the session many students said that they were frustrated when the system said that it did not understand them. | some students in BASE also mentioned that they sometimes were not sure if the system's answer was correcting a problem with their answer, or simply phrasing it in a different way. | contrasting |
train_1625 | 2 The frequency of nonunderstandings was negatively correlated with learning gain in FULL: r = −0.47, p < 0.005, but not significantly correlated with learning gain in BASE: r = −0.09, p = 0.59. | in both conditions the frequency of non-understandings was negatively correlated with user satisfaction: FULL r = −0.36, p = 0.03, BASE r = −0.4, p = 0.01. | contrasting |
train_1626 | This provides evidence that such errors are negatively impacting the learning process, and therefore improving recovery strategies for those error types is likely to improve overall system effectiveness. The results, shown in Table 1, indicate that the majority of interpretation problems are not significantly correlated with learning gain. | several types of problems appear to be particularly significant, and are all related to improper use of domain terminology. | contrasting |
train_1627 | OntoNotes comes closest to providing a corpus with multiple layers of annotation that can be analyzed as a unit via its representation of the annotations in a "normal form". | like the Wall Street Journal corpus, OntoNotes is limited in the range of genres it includes. | contrasting |
train_1628 | Also, they can be used directly for testing paraphrase applicability (Szpektor et al., 2008), a task that has recently become prominent in the context of textual entailment (Bar-Haim et al., 2007). | polysemy is a fundamental problem for distributional models. | contrasting |
train_1629 | In predicate-argument structure analysis, it is important to capture non-local dependencies among arguments and interdependencies between the sense of a predicate and the semantic roles of its arguments. | no existing approach explicitly handles both non-local dependencies and semantic dependencies between predicates and arguments. | contrasting |
train_1630 | (2010) proposed a generative model that captures both predicate senses and its argument roles. | the first-order Markov assumption of the model eliminates the ability to capture non-local dependencies among arguments. | contrasting |
train_1631 | The systems by Björkelund and Zhao applied feature selection algorithms in order to select the best set of feature templates for each language, requiring about 1 to 2 months to obtain the best feature set. | our system achieved results competitive with the top two systems, despite the fact that we used the same feature templates for all languages without applying any feature engineering procedure. | contrasting |
train_1632 | Under independence, this estimate will be high, as "the" itself is very frequent. | with our knowledge of English syntax, we would say p_exp("the the") is low. | contrasting |
train_1633 | The event information is more effective at expressing the information about the term dependencies, while the unigram RM ignores this information and only takes the occurrence frequencies of individual words into account, which is not well-captured by the events. | the performance of Scheme 2 is more promising. | contrasting |
train_1634 | Taxonomy deduction is an important task to understand and manage information. | building taxonomies manually for specific domains or data sources is time consuming and expensive. | contrasting |
train_1635 | The quality of extraction is often controlled using statistical measures (Pantel and Pennacchiotti, 2006) and external resources such as wordnet (Girju et al., 2006). | there are domains (such as the one introduced in Section 3.2) where the text does not allow the derivation of linguistic relations. | contrasting |
train_1636 | Semi-supervised approaches start with known terms belonging to a category, construct context vectors of classified terms, and associate categories to previously unclassified terms depending on the similarity of their context (Tanev and Magnini, 2006). | providing training data and hand-crafted patterns can be tedious. | contrasting |
train_1637 | Unsupervised methods use clustering of wordcontext vectors (Lin, 1998), co-occurrence (Yang and Callan, 2008), and conjunction features (Caraballo, 1999) to discover implicit relationships. | these approaches do not perform well for small corpora. | contrasting |
train_1638 | It mostly holds that f(s) > f(d) > f(c) > f(z), where s is a state name, d is a district name, c is a city name, and z is a ZIP code. | sometimes the name of a large city may be more frequent than the name of a small state. | contrasting |
train_1639 | In detail, these CNLs display some variations: thus an inclusion relationship between the classes Admiral and Sailor would be expressed by the pattern 'Admirals are a type of sailor' in CLOnE, 'Every admiral is a kind of sailor' in Rabbit, and 'Every admiral is a sailor' in ACE and SOS. | at the level of general strategy, all the CNLs rely on the same set of assumptions concerning the mapping from natural to formal language; for convenience we will refer to these assumptions as the consensus model. | contrasting |
train_1640 | An example of axiom-splitting rules is found in a computational complexity proof for the description logic EL+ (Baader et al., 2005), which requires class inclusion axioms to be rewritten to a maximally simple 'normal form' permitting only four patterns, where P and all A_N are atomic terms. | this simplification of axiom structure can be achieved only by introducing new atomic terms. | contrasting |
train_1641 | Our results indicate that although in principle the consensus model cannot guarantee transparent realisations, in practice these are almost always attainable, since ontology developers overwhelmingly favour terms and axioms with relatively simple content. | in an analysis of around 50 ontologies we have found that over 90% of axioms fit a mere seven patterns (table 2); the following examples show that each of these patterns can be verbalised by a clear unambiguous sentence - provided, of course, that no problems arise in lexicalising the atomic terms. Since identifiers containing 3-4 words are fairly common (figure 1), we need to consider whether these formulations will remain transparent when combined with more complex lexical entries. | contrasting |
train_1642 | Part of speech tagging, constituent and dependency parsing, and combinatory categorial grammar supertagging are used extensively in most applications when syntactic representations are needed. | training these tools requires medium-size treebanks and tagged data, which for most languages will not be available for a while. | contrasting |
train_1643 | Or, conversely, a sentence-alignment stage can be followed by a segmentation stage. | as we will see in our experiments, these strategies may result in poor segmentation and alignment quality. | contrasting |
train_1644 | Several formal evaluations have been conducted for the coreference resolution task (e.g., MUC-6 (1995), ACE NIST (2004)), and the data sets created for these evaluations have become standard benchmarks in the field (e.g., MUC and ACE data sets). | it is still frustratingly difficult to compare results across different coreference resolution systems. | contrasting |
train_1645 | 2001) because of the popularity and relatively good performance of these systems. | there have been other approaches to coreference resolution, including unsupervised and semi-supervised approaches (e.g. | contrasting |
train_1646 | The performance for the location type argument improved drastically. | the total performance of the arguments was below the original TBL. | contrasting |
train_1647 | Unfortunately, it is very hard to rightly produce full parses for Chinese text. | given a constituent, SRL systems should identify whether it is an argument and further predict detailed semantic types if it is an argument. | contrasting |
train_1648 | Our algorithm, TKA*, is a variant of the k-best A* (KA*) algorithm of Pauls and Klein (2009). | to KA*, which performs an inside and outside pass before performing k-best extraction bottom-up, TKA* performs only the inside pass before extracting k-best lists top-down. | contrasting |
train_1649 | When building derivations bottom-up, the only way to expand a particular partial inside derivation is to combine it with another partial inside derivation to build a bigger tree. | an outside derivation item can be expanded anywhere along its frontier. | contrasting |
train_1650 | This study is preliminary, however, in that we have not yet shown improved end-to-end task performance applying this approach, such as improved BLEU scores in a machine translation task. | we believe there is reason to be optimistic about this. | contrasting |
train_1651 | FrameNet groups LUs in frames and describes relations between frames. | relations between LUs are not explicitly defined. | contrasting |
train_1652 | It is also possible that the small size of BL-I leads to overfitting and low accuracies. | pBA subset with only 151 items (only 2002 and 2003 speeches) is still 96% classifiable, so size alone does not explain low BL-I performance. | contrasting |
train_1653 | The co-training approach manages to boost the performance as it allows the text similarity in the target language to compete with the "fake" similarity from the translated texts. | the translated texts are still used as training data and thus can potentially mislead the classifier. | contrasting |
train_1654 | It is not a marginal phenomenon, since Kessler and Nicolov (2009) report that in their data, 14% of the opinion targets are pronouns. | the task of resolving anaphora to mine opinion targets has not been addressed and evaluated yet to the best of our knowledge. | contrasting |
train_1655 | A candidate selection or extraction step for the opinion targets is not required, since they rely on manually annotated targets and focus solely on the coreference resolution. | they do not resolve pronominal anaphora in order to achieve that. | contrasting |
train_1656 | (2006): The dependency paths only identify connections between pairs of single words. | almost 50% of the opinion target candidates are multiword expressions. | contrasting |
train_1657 | We observe that the MARS algorithm yields an improvement regarding recall compared to the baseline system. | it also extracts a high number of false positives for both the personal and impersonal / demonstrative pronouns. | contrasting |
train_1658 | Another direction is to divide the decoding into two steps of segmentation and conversion, which is the approach taken in this paper. | exact inference by listing all possible candidates explicitly and summing over all possible segmentations is intractable, because of the exponential computation complexity with the source word's increasing length. | contrasting |
train_1659 | Generally, the LOC and PER classes benefitted more from the head word features (SHW) than the other classes. | for the syntactic environment feature (SE), the PER class seemed not to benefit much from the presence of this feature. | contrasting |
train_1660 | Classical IE systems fill slots in domain-specific frames such as the time and location slots in seminar announcements (Freitag, 2000) or the terrorist organization slot in news stories (Chieu et al., 2003). | open IE systems are domainindependent, but extract "flat" sets of assertions that are not organized into frames and slots (Sekine, 2006;Banko et al., 2007). | contrasting |
train_1661 | This is consistent with previous work on redundancy-based extractors on the Web. | rEDUND still suffered from the problems of over-specification and over-generalization described in Section 2. | contrasting |
train_1662 | For example, when we read the title "China Tightens Grip on the Web", we can only have a glimpse of what the document says. | the key phrases, such as "China", "Censorship", "Web", "Domain name", "Internet", and "CNNIC", etc. | contrasting |
train_1663 | In large vocabulary speech recognition, a language model (LM) is typically estimated from large amounts of written text data. | recognition is typically applied to speech that is stylistically different from written language. | contrasting |
train_1664 | A LM is then built by interpolating the models estimated from large corpus of written language and the small corpus of transcribed data. | in practice, different models might be of different importance depending on the word context. | contrasting |
train_1665 | Moreover, training hierarchical ME models requires even more memory than training simple ME models, proportional to the number of nodes in the hierarchy. | it should be possible to alleviate this problem by profiting from the hierarchical nature of n-gram features, as proposed in (Wu and Khudanpur, 2002). | contrasting |
train_1666 | In fact, even when the SVM uses lexical as well as non-lexical features, its F1-score is still lower than the HGM classifier. | with the hierarchical SVM and rule-based DGM methods, the HGM method identifies decision-related utterances by exploiting not just DDAs but also direct dependencies between decision regions and UTT, DA, and PROS features. | contrasting |
train_1667 | This of course entails cognitive effort, which is very limited in the context of driving. | a dictation approach to replying to SMS messages may be far worse due to misrecognitions. | contrasting |
train_1668 | For instance, a correct dictation answer for Example (1) above was "no I'm never with my GPS". | the voice search condition had more cases (2-4 messages) in which the correct answer was not an exact copy (e.g., "no I have GPS") due to the nature of the template approach. | contrasting |
train_1669 | In other words, we could not reject the null hypothesis that the two approaches were the same in terms of their influence on driving performance. | for the SMS reply task, we did find a main effect for SMS Reply Approach (F 1,47 = 81.28, p < .001, µ Dictation = 2.13 (.19), µ VoiceSearch = .38 (.10)). | contrasting |
train_1670 | Although the reviews were randomly selected, 32 sentences extracted out of 16 reviews might seem like a small sample. | the upper time limit for reliable psycholinguistic experiments is 20-25 minutes. | contrasting |
train_1671 | In 2 psycholinguistic and psychophysical experiments, we showed that rating whole customer-reviews as compared to rating final sentences of these reviews showed an (expected) insignificant difference. | rating whole customer-reviews as compared to rating second sentences of these reviews, showed a considerable difference. | contrasting |
train_1672 | We would like to do a direct comparison by simply running the above systems on the exact same data and evaluating them the same way. | this unfortunately has to wait until new versions are released that work with the current version of the SAMA morphological analyzer and ATB. | contrasting |
train_1673 | Conceptually, these deduction rules operate by first computing inside scores bottom-up in the coarsest grammar, then outside scores top-down in the same grammar, then inside scores in the next finest grammar, and so on. | the crucial aspect of HA* is that items from all levels of the hierarchy compete on the same queue, interleaving the computation of inside and outside scores at all levels. | contrasting |
train_1674 | This is clearly a classification problem which requires arriving at a binary decision for each entity in D (belonging to S or not). | in practice, the problem is often solved as a ranking problem, i.e., ranking the entities in D based on their likelihoods of belonging to S. The classic method for solving this problem is based on distributional similarity (Pantel et al. | contrasting |
train_1675 | If the values in V_d are skewed towards the high side (negative skew), it means that the candidate entity is very likely to be a true entity, and we should take the median as it is also high (higher than the mean). | if the skew is towards the low side (positive skew), it means that the candidate entity is unlikely to be a true entity and we should again use the median as it is low (lower than the mean) under this condition. | contrasting |
train_1676 | Current techniques for this task typically bootstrap a classifier based on a fixed seed set. | our system involves the user throughout the labeling process, using active learning to intelligently explore the space of similar words. | contrasting |
train_1677 | Also, there are separate similarity lists for each of nouns, verbs, and modifiers; we only used the lists matching the seed word's part of speech. | given a seed set s and a complete target set g, it is easy to evaluate our system; we say "Yes" to anything in g, "No" to everything else, and see how many of the candidate words are in g. building a complete gold-standard g is in practice prohibitively difficult; instead, we are only capable of saying whether or not a word belongs to g when presented with that word. | contrasting |
train_1678 | Thus, flat phrases (Koehn et al., 2003), hierarchical phrases (Chiang, 2005), and syntactic tree fragments (Galley et al., 2006;Wu et al., 2010) are gradually used in SMT. | the use of syntactic phrases continues due to the requirement for phrase coverage in most syntax-based systems. | contrasting |
train_1679 | This is the only language-specific component of our translation model. | we expect this approach to work for other agglutinative languages as well. | contrasting |
train_1680 | 3, all three translations correspond to the English text, 'with the basque nationalists.' | the CRF-LM output is more grammatical than the baseline, because not only do the adjective and noun agree for case, but the noun 'baskien' to which the postposition 'kanssa' belongs is marked with the correct genitive case. | contrasting |
train_1681 | One may further improve the index structure by using a trie rather than a ranking list to store βs associated with the same α. | the improvement would not be significant because the number of βs associated with each α is usually very small. | contrasting |
train_1682 | At least some of these clusters, when induced by maximizing the likelihood L(θ, α) with sufficiently large α, will be useful for the classification task on the source domain. | when the domains are substantially different, these predictive clusters are likely to be specific only to the source domain. | contrasting |
train_1683 | Minimizing the difference in the marginal distributions can be regarded as a coarse approximation to the minimization of the distance. | we have to concede that the above argument is fairly informal, as the generalization bounds do not directly apply to our case: (1) our feature representation is learned from the same data as the classifier, (2) we cannot guarantee that the existence of a domainindependent scoring function is preserved under the learned transformation x→z and (3) in our setting we have access not only to samples from P (z|x, θ) but also to the distribution itself. | contrasting |
train_1684 | The average error reductions for our method Reg+ and for the SCL method are virtually equal. | formally, these two numbers are not directly comparable. | contrasting |
train_1685 | This approach bears some similarity to the adaptation methods standard for the setting where labelled data is available for both domains (Chelba and Acero, 2004;Daumé and Marcu, 2006). | instead of ensuring that the classifier parameters are similar across domains, we favor models resulting in similar marginal distributions of latent variables. | contrasting |
train_1686 | Traditional search engines accept several terms submitted by a user as a query and return a set of docs that are relevant to the query. | for those users who are not search experts, it is always difficult to accurately specify some query terms to express their search purposes. | contrasting |
train_1687 | Regardless of different objectives, both methods derive hash functions via Principle Component Analysis (PCA) (Jolliffe, 1986). | pCA is computationally expensive, which limits their usage for high-dimensional data. | contrasting |
train_1688 | KLSH can improve hashing results via the kernel trick. | kLSH is unsupervised, thus designing a data-specific kernel remains a big challenge. | contrasting |
train_1689 | LDA seeks the best separative direction in the original attribute space. | S^3H first maps data from R^M to R^{M×L} through the following projection function, where r_l ∈ R^M, l = 1, ..., L. | contrasting |
train_1690 | This implies FCC can also improve the stability of S^3H. As we see, S^3H_f ignores the contribution of features to different classes. | besides the local description of data locality in the form of object-pairs, such (global) information also provides proper guidance for hashing. | contrasting |
train_1691 | We should emphasize that KLSH needs 0.3ms to return the results for a query document for hash lookup, and S^3H needs <0.1ms. | iL requires about 75ms to finish searching. | contrasting |
train_1692 | It is essential for the search engine to correctly annotate the query structure, and the quality of these query annotations has been shown to be a crucial first step towards the development of reliable and robust query processing, representation and understanding algorithms (Barr et al., 2008;Guo et al., 2008;Guo et al., 2009;Manshadi and Li, 2009;Li, 2010). | in current query annotation systems, even sentence-like queries are often hard to parse and annotate, as they are prone to contain misspellings and idiosyncratic grammatical structures. | contrasting |
train_1693 | For instance, a keyword query hawaiian falls, which refers to a location, is inaccurately interpreted by a standard POS tagger as a noun-verb pair. | given a sentence from a corpus that is relevant to the query such as "Hawaiian Falls is a family-friendly waterpark", the word "falls" is correctly identified by a standard POS tagger as a proper noun. | contrasting |
train_1694 | For instance, using j-PRF always leads to statistically significant improvements over the i-PRF baseline for questions. | it is either statistically indistinguishable, or even significantly worse (in the case of capitalization) than the i-PRF baseline for the verbal phrases. | contrasting |
train_1695 | typically assume that classification models are trained and tested using data drawn from some fixed distribution. | in many practical cases, we may have plentiful labeled examples in the source domain, but very few or no labeled examples in the target domain with a different distribution. | contrasting |
train_1696 | For example, in the domain of reviews about electronics products, the words "durable" and "light" are used to express positive sentiment, whereas "expensive" and "short battery life" often indicate negative sentiment. | if we consider the books domain the words "exciting" and "thriller" express positive sentiment, whereas the words "boring" and "lengthy" usually express negative sentiment. | contrasting |
train_1697 | Methods that use related features have been successfully used in numerous tasks such as query expansion (Fang, 2008), and document classification (Shen et al., 2009). | feature expansion techniques have not previously been applied to the task of cross-domain sentiment classification. | contrasting |
train_1698 | Aue and Gammon (2005) report a number of empirical tests into domain adaptation of sentiment classifiers using an ensemble of classifiers. | most of these tests were unable to outperform a simple baseline classifier that is trained using all labeled data for all domains. | contrasting |
train_1699 | The selection of pivots is vital to the performance of SCL and heuristically selected pivot features might not guarantee the best performance on target domains. | our method uses all features when creating the thesaurus and selects a subset of features during training using L1 regularization. | contrasting |
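The rows above all share a fixed four-column, pipe-delimited layout (id, sentence1, sentence2, label). Below is a minimal sketch of parsing one such row into a record; the ' | ' cell delimiter, the trailing pipe, and the names COLUMNS and parse_row are our own assumptions for illustration, not part of any official loader for this dataset.

```python
# Minimal sketch: turn one pipe-delimited table row into a dict.
# Assumptions (ours): cells are separated by " | ", each row ends with a
# trailing "|", and no sentence itself contains the " | " sequence.

COLUMNS = ["id", "sentence1", "sentence2", "label"]

def parse_row(line: str) -> dict:
    """Split one table row into a dict keyed by column name."""
    # Drop trailing whitespace and the closing pipe before splitting.
    cells = [cell.strip() for cell in line.rstrip().rstrip("|").split(" | ")]
    if len(cells) != len(COLUMNS):
        raise ValueError(f"expected {len(COLUMNS)} cells, got {len(cells)}")
    return dict(zip(COLUMNS, cells))

row = parse_row(
    "train_1600 | Producing this kind of knowledge is extremely costly. | "
    "knowledge-based approaches exploit wide-coverage lexical resources. | "
    "contrasting |"
)
```

If a sentence could itself contain ' | ', a proper CSV/TSV export or the dataset's native loader would be safer than this split-based sketch.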