Column metadata: id (string, 7–12 chars), sentence1 (string, 6–1.27k chars), sentence2 (string, 6–926 chars), label (4 classes).

id | sentence1 | sentence2 | label |
---|---|---|---|
train_3800 | On the contrary, the context-based distributed models showed strong selective activation towards country names in Indonesian. | the selectivity of all the AE neurons is below 0.7 towards these semantically-related words. | contrasting |
train_3801 | Words like yet, even, and still are used in many diverse ways and are highly polysemous. | words like photocopying, postage, and holster tend to be used in very specific well-clustered contexts, corresponding to a single sense; for example, mail and letter are both very likely to occur in the context of postage and are also likely to co-occur with each other, independent of postage. | contrasting |
train_3802 | Previous works argued that semantic change leads to polysemy (Wilkins, 1993;Hopper and Traugott, 2003). | our results show that polysemous words change faster, which suggests that polysemy may actually lead to semantic change. | contrasting |
train_3803 | (2004b) outperformed SEMCOR for words with SEMCOR frequency less than 5. | their analysis was based on the accuracy of the first sense heuristic, rather than the entire sense distribution, and they used very different datasets to us. | contrasting |
train_3804 | The idea of a book excerpt completion task was originally introduced in the MSRCC dataset (Zweig and Burges, 2011). | the latter limited context to single sentences, not attempting to measure broader passage understanding. | contrasting |
train_3805 | In particular, when the context clearly demands a referential expression, the constraint that the blank be filled by a single word excludes other possibilities such as noun phrases with articles, and there are reasons to suspect that co-reference is easier than other discourse phenomena in our task (see below). | although co-reference seems to play a big role, only 0.3% of target words are pronouns. | contrasting |
train_3806 | Note that LAMBADA was designed to challenge language models with harder-than-average examples where broad context understanding is crucial. | the average case should not be disregarded either, since we want language models to be able to handle both cases. | contrasting |
train_3807 | An answer is correct when there is an exact string match between the predicted answer and the gold answer. | as described in Section 2.2, some answers are composed from a set of values (e.g. | contrasting |
train_3808 | This means that spectral algorithms give a natural way for the selection of the number of latent states for each nonterminal a in the grammar. | when the data from which we estimate an L-PCFG model are not drawn from an L-PCFG (the model is "incorrect"), the number of non-zero singular values (or the number of singular values which are large) is no longer sufficient to determine the number of latent states for each nonterminal. | contrasting |
train_3809 | Our main focus is on comparing the coarse-to-fine Berkeley parser (Petrov et al., 2006) to our method. | for the sake of completeness, we also present results for other parsers, such as parsers of Hall et al. | contrasting |
train_3810 | In this table, we see that in most cases, on average, the optimization algorithm chooses to enlarge the number of latent states. | for German-T and Korean, for example, the optimization algorithm actually chooses a smaller model than the original vanilla model. | contrasting |
train_3811 | When estimating an HMM of a low order with data which was generated from a higher order model, EM does quite poorly. | if the number of latent states (and feature functions) is properly controlled with spectral algorithms, a spectral algorithm would learn a "product" HMM, where the states in the lower order model are the product of states of a higher order. | contrasting |
train_3812 | either using semi-supervised clustering instead of POS tags (Koo et al., 2008) or building recurrent representations of words using neural networks. | the best accuracy for these approaches is still achieved by running a POS tagger over the data first and combining the predicted POS tags with additional representations. | contrasting |
train_3813 | Researchers have used supervised learning models trained on lexical word ngram features, synsets, emoticons, topics, and lexicon frameworks to determine which emotions are expressed on Twitter (Roberts et al., 2012; Qadir and Riloff, 2013). | sentiment classification in social media has been extensively studied (Pang et al., 2002; Pang and Lee, 2008; Pak and Paroubek, 2010; Hassan Saif, Miriam Fernandez and Alani, 2013; Nakov et al., 2013; Zhu et al., 2014). | contrasting |
train_3814 | One assumption that we made about the nature of state adoption of model legislation is that the legislatures make modifications that largely preserve the model language in an effort to preserve policy. | we currently do not consider cases in which a legislature has intentionally obscured the text while still retaining the same meaning. | contrasting |
train_3815 | LOBBYBACK performs well on reconstructing model legislation from automatically generated bill clusters. | there are a number of improvements that can refine part of the pipeline. | contrasting |
train_3816 | If we take Argument 1 from Figure 1, assigning a single "convincingness score" is highly subjective, given the lack of context, reader's prejudice, beliefs, etc. | when comparing both arguments from the same example, one can decide that A1 is probably more convincing than A2, because it uses at least some statistics, addresses the health factor, and A2 is just harsh and attacks. | contrasting |
train_3817 | For comparison, the partial information classification algorithm Banditron (Kakade et al., 2008) (after adjusting the exploration/exploitation constant on the dev set) scored 0.047 on the test set. | our main interest is in convergence speed. | contrasting |
train_3818 | For the OCR task, bandit learning does decrease Hamming loss, but it does not quite achieve full-information performance. | pairwise ranking (Algorithm 2) again converges faster than the alternative bandit algorithms by a factor of 2-4, despite similar learning rates for Algorithms 1 and 2 and a compensation of smaller learning rates in Algorithm 3 by variance reduction and regularization. | contrasting |
train_3819 | Both the canonical encoder-decoder and its variants with attention mechanism rely heavily on the representation of "meaning", which might not be sufficiently accurate in cases in which the system needs to refer to sub-sequences of input like entity names or dates. | the copying mechanism is closer to the rote memorization in language processing of human beings, deserving a different modeling strategy in neural network-based models. | contrasting |
train_3820 | (2) for the generic attention-based Seq2Seq model. | there are some minor changes in the y_{t−1} → s_t path for the copying mechanism. | contrasting |
train_3821 | It has been recently formulated as a Seq2Seq learning problem in (Rush et al., 2015;Hu et al., 2015), which essentially gives abstractive summarization since the summary is generated based on a representation of the document. | extractive summarization extracts sentences or phrases from the original text to fuse them into the summaries, therefore making better use of the overall structure of the original document. | contrasting |
train_3822 | In contrast, COPYNET addresses the OOV problem in a more systemic way with an end-to-end model. | as COPYNET copies the exact source words as the output, it cannot be directly applied to machine translation. | contrasting |
train_3823 | (2015) address a sequential labeling problem in NLU where the fine grained label sets across domains differ. | they assume that there exists a bijective mapping between the coarse and fine-grained label sets across domains. | contrasting |
train_3824 | They learn this mapping using labeled instances from the target domain to reduce the problem to a standard domain adaptation problem (Scenario 2). | this paper caters to multiple source domains with disparate label sets without assuming availability of any labeled data from the target domain or fine-to-coarse label mappings across domains. | contrasting |
train_3825 | The similarity matrix R associates target domain instances to the source domain clusters in proportion to their similarity. | the objective is to select the optimal K source domain clusters that fit the maximum number of target domain instances. | contrasting |
train_3826 | Several models for inducing cross-lingual embeddings have been proposed, each requiring a different form of cross-lingual supervision: some can use document-level alignments (Vulić and Moens, 2015), others need alignments at the sentence (Hermann and Blunsom, 2014) or word level (Faruqui and Dyer, 2014; Gouws and Søgaard, 2015), while some require both sentence and word alignments (Luong et al., 2015). | a systematic comparison of these models is missing from the literature, making it difficult to analyze which approach is suitable for a particular NLP task. | contrasting |
train_3827 | This can be attributed to the fact that BiSkip and BiCVM are trained on parallel sentences, and if two antonyms are present in the same sentence in English, they will also be present together in its French translation. | biCCA uses a bilingual dictionary and biVCD uses comparable sentence context, which helps in pulling apart the synonyms and antonyms. | contrasting |
train_3828 | Some methods do not have publicly available code (Coulmance et al., 2015; Zou et al., 2013); for others, like BilBOWA, we identified problems in the available code, which caused it to consistently produce results that are inferior even to mono-lingually trained vectors. | the models that we included for comparison in our survey are representative of other cross-lingual models in terms of the form of cross-lingual supervision required by them. | contrasting |
train_3829 | (2015) is somewhat more constrained than the set we use, there is a good deal of overlap. | their evaluation is performed in the context of relational similarity, and they do not perform clustering or classification on the DIFFVECs. | contrasting |
train_3830 | HLBL and SENNA performed very poorly. The lower V-measure for w2v wiki and GloVe wiki (as compared to w2v and GloVe, respectively) indicates that the volume of training data plays a role in the clustering results. | both methods still perform well above SENNA and HLBL, and w2v has a clear empirical advantage over GloVe. | contrasting |
train_3831 | Figure 2 shows the effect of sample size k on the Chinese-English validation set. | it is clear that BLEU scores consistently rise with the increase of k. We find that a sample size larger than 100 (e.g., k = 200) usually does not lead to significant improvements and increases the GPU memory requirement. | contrasting |
train_3832 | Only the unigram representation is truly open-vocabulary. | the unigram representation performed poorly in preliminary experiments, and we report translation results with a bigram representation, which is empirically better, but unable to produce some tokens in the test set with the training set vocabulary. | contrasting |
train_3833 | For OOVs, the baseline strategy of copying unknown words works well for English→German. | when alphabets differ, like in English→Russian, the subword models do much better. | contrasting |
train_3834 | The English→Russian examples show that the subword systems are capable of transliteration. | transliteration errors do occur, either due to ambiguous transliterations, or because of non-consistent segmentations between source and target text which make it hard for the system to learn a transliteration mapping. | contrasting |
train_3835 | Word pairs, which are one of the most easily accessible features between two text segments, have been proven to be very useful for detecting the discourse relations held between text segments. | because of the data sparsity problem, the performance achieved by using word pair features is limited. | contrasting |
train_3836 | The main reason is their length, as remarked above: Most sequence labeling tasks in NLP (such as most cases of named entity recognition) deal with spans of a few tokens. | the median quotation length on the Penn Attributions Relation Corpus (PARC, Pareti et al. | contrasting |
train_3837 | We hypothesize that some of the errors made by the local classifier could be corrected by employing a global joint model that performs a collective classification taking into account the conversational dependencies between sentences (e.g., adjacency relations). | unlike synchronous conversations (e.g., phone, meeting), modeling conversational dependencies between sentences in asynchronous conversation is challenging, especially in those where explicit thread structure (reply-to relations) is missing, which is also our case. | contrasting |
train_3838 | There exist large corpora of utterances annotated with speech acts in synchronous spoken domains, e.g., Switchboard-DAMSL or SWBD (Jurafsky et al., 1997) and Meeting Recorder Dialog Act or MRDA (Dhillon et al., 2004). | such large corpus does not exist in asynchronous domains. | contrasting |
train_3839 | We notice that the two sentences in comment C 4 were mistakenly identified as Statement and Response, respectively, by the B-LSTM p local model. | by considering these two sentences together with others in the conversation, the global CRF (FC-FC) model could correct them. | contrasting |
train_3840 | Linear chain (for sequence labeling) and tree structured CRFs (for parsing) are the common ones in NLP. | speech act recognition in asynchronous conversation posits a different problem, where the challenge is to model arbitrary conversational structures. | contrasting |
train_3841 | As our corpus is also annotated with this information, we also trained separate models for these subtasks and assigned the SE type label accordingly. | such a pipeline approach is not competitive with the model trained directly on SE types (see Section 6.3). | contrasting |
train_3842 | Such image representation has been successfully applied in various vision tasks. | the category name t is represented by its word embedding v_t ∈ R^b, a low-dimensional dense vector induced by the Skip-gram model which is widely used in diverse NLP applications too. | contrasting |
train_3843 | When comparing to Bansal2014, our model with only word embedding-based features underperforms theirs. | when introducing visual features, our performance is comparable (p-value = 0.058). Furthermore, if we discard visual features but add semantic features from Bansal et al. | contrasting |
train_3844 | As a result, we have pushed all estimation steps into supervised ML components, which leaves the subset selection step fully principled. | we found in our experiments that even a simple heuristic yields a decent approximation of . | contrasting |
train_3845 | Therefore, we continue our experiments in the following sec-tions with ILP-R only. | sBL-R offers a nice trade-off between performance and computation cost. | contrasting |
train_3846 | We observe that the model trained on ROUGE-2 is performing better than the model trained on ROUGE-1, although learning the ROUGE-2 scores seems to be harder than learning ROUGE-1 scores (as shown in table 2). | errors and approximations propagate less easily in ROUGE-2, because the number of bi-grams in the intersection of two given sentences is far less. | contrasting |
train_3847 | Maximizing JSD can not be solved exactly with an ILP because it can not be factorized into individual sentences. | applying an efficient greedy algorithm or maximizing a factorizable relaxation might produce strong results as well (for example, a simple greedy maximization of Kullback-Leibler divergence already yields good results (Haghighi and Vanderwende, 2009)). | contrasting |
train_3848 | Specifically, in order to prove the formula (20), we have to find an expression for the (k) that gives to g the correct contribution to the formula: First, we observe that g does not appear in the terms that contain the intersection of more than k sentences. | specifically, (t) is not affected by g if t ≥ k. g is affected by all the (t) for which t ≤ k. Given that g appears in the sentences {s_{i_1}, . | contrasting |
train_3849 | As related work, Foster (2007a;2007b) and Foster and Andersen (2009) propose a method for creating a pseudo-learner corpus by artificially generating errors in a native corpus with phrase structures. | the resulting corpus does not capture various error patterns in learner English. | contrasting |
train_3850 | Thus, they cause significant modifications to PTB-II, which violates (P2). | a preposition normally constitutes a prepositional phrase with another phrase (although not normally with an adjective phrase). | contrasting |
train_3851 | (correctly, The lunch I ate was delicious.). | according to the superficial forms and local contexts, the phrase I ate lunch would form an S: (S (NP I) (VP ate lunch)). The relations of the S to the rest of the constituents are not clear. | contrasting |
train_3852 | The high agreement shows that the annotation scheme provides an effective way of consistently annotating learner corpora with phrase structures. | one might argue that the annotation does not represent the characteristics of learner English well because it favors consistency (and rather simple annotation rules) over completeness. | contrasting |
train_3853 | At first sight, this does not seem so surprising because ϕ never appears in the native corpus. | the rules actually show in which syntactic environment missing heads tend to occur. | contrasting |
train_3854 | In one study, after a day of cramming he could accurately recite 12-syllable sequences (of gibberish, apparently). | he could achieve comparable results with half as many practices spread out over three days. | contrasting |
train_3855 | This is reasonable, since they were largely developed during the 1960s-80s, when people would have had to manage practice schedules without the aid of computers. | the recent popularity of large-scale online learning software makes it possible to collect vast amounts of parallel student data, which can be used to empirically train richer statistical models. | contrasting |
train_3856 | He referred to his method as graduated-interval recall, whereby new vocabulary is introduced and then tested at exponentially increasing intervals, interspersed with the introduction or review of other vocabulary. | this approach is limited since the schedule is pre-recorded and cannot adapt to the learner's actual ability. | contrasting |
train_3857 | The Leitner method did yield the highest AUC values among the algorithms we tried. | the top two HLR variants are not far behind, and they also reduce MAE compared to Leitner by at least 45%. | contrasting |
train_3858 | 2014, where the focus has been to build a system to help learners retain new vocabulary. | much of the existing work on incidental learning is found in the education and cognitive science literature rather than NLP. | contrasting |
train_3859 | Table 1 depicts statistics of the dataset. | to other learner corpora such as ICLE (Granger, 2003), EFCAMDAT (Geertzen et al., 2013) or TOEFL-11, this corpus contains translations, native, and nonnative English of high proficiency speakers. | contrasting |
train_3860 | Thus broader (not only local) context is needed to judge their semantic similarity. | we don't know the reason for improvement on the A-A category as, in context, adjective interpretation is often affected by local context (e.g., the nouns that adjectives modify). | contrasting |
train_3861 | On one hand, we can offer valuable insights with respect to what constitutes an engaging, good quality news article. | we can identify benchmarks for characterising news article quality in an automatic and scalable way and, thus, predict poor writing before a news article is even published. | contrasting |
train_3862 | Templates have the advantage that the generation system does not have to deal with the internal syntax and coherence of each template, and can instead focus on document-level discourse coherence and on local coreference issues. | templates have two major disadvantages. | contrasting |
train_3863 | Duma and Klein (2013) extract templates from Wikipedia pages aligned with RDF information from DBPedia, and although they do not explicitly mention aligning multiple templates to the same set of RDF templates, the possibility seems to exist in their framework. | we are interested in extracting paraphrasal templates from non-aligned text for general NLG, as aligned corpora are difficult to obtain for most domains. | contrasting |
train_3864 | We intend this figure as the closest thing to recall that we can conceive for mining paraphrases. | keep in mind that it is not a comparable figure across the methods, since different corpora are used. | contrasting |
train_3865 | Although the parsing models outperform MARMOT, the improvements in F1 are not significant. | all systems fare considerably worse on WSJ*, which confirms that the orthographic clues in newspaper text suffice to segment the sentences properly. | contrasting |
train_3866 | Efforts have been made to create standard metrics (Papineni et al., 2001;Lin, 2004;Denkowski and Lavie, 2014;Vedantam et al., 2014) to help advance the state-of-the-art. | most such popular metrics, despite their wide use, have serious deficiencies. | contrasting |
train_3867 | Historically, MT metrics have been evaluated by how well they correlate with human annotations (Callison-Burch et al., 2010;Machacek and Bojar, 2014). | as we demonstrate in Sec. | contrasting |
train_3868 | In a sense, W2V-AVG does well on passive sentences for the wrong reasonsrather than understanding that the semantics are unchanged, it simply observes that most of the words are the same. | we still see the trend that performance goes down as the number of reference sentences increases. | contrasting |
train_3869 | When we fully saturate our professional-grade CPU, using all sixteen cores and sixteen hyperthreads, KenLM is about twice as fast as gLM. | our CPU costs nearly four times as much as our GPU, so economically, this comparison favors the GPU. | contrasting |
train_3870 | Swedish and Danish also transfer well to each other, while English transfers best to Dutch, the language it is most closely related to among those compared here. | there are also some cases of unrelated source languages performing best: Using Danish as source language gives the highest performing models for both Bulgarian and Czech. | contrasting |
train_3871 | Unfortunately, parallel corpora are usually only available for a handful of research-rich languages and restricted to limited domains such as government documents and news reports. | sMT is capable of exploiting abundant target-side monolingual corpora to boost fluency of translations. | contrasting |
train_3872 | There are significant gaps between k = 1 and k = 5. | continuing to increase k does not result in significant improvements and decreases the training efficiency. | contrasting |
train_3873 | Important model parameters that have been studied include the choice of association and similarity measures (Curran and Moens, 2002) and the use of subsampling and negative sampling techniques (Mikolov et al., 2013c). | the particular effects may be heterogeneous and depend on the task and model (Lapesa and Evert, 2014). | contrasting |
train_3874 | This shows that pre-trained source embeddings can be extremely helpful in bootstrapping multilingual ones. | the performance of the Joint w/ Aux system with ℓ1 regularization is rather disappointing. | contrasting |
train_3875 | The same is visible in Figure 2, where these supersense embeddings are more central, with closer neighbors. | to the observations by Schneider et al. | contrasting |
train_3876 | As a consequence, condensing the invhom is much less helpful. | the sibling-finder algorithm excels at maintaining state information within each elementary tree, yielding a 1000x speedup over the naive bottom-up algorithm when it was cancelled. | contrasting |
train_3877 | (2014), who try a number of unsupervised and semi-supervised models, and use the same testing methodology and hyponymy data. | note that their word embeddings are different. | contrasting |
train_3878 | For instance, an utterance in AirTicketBooking dataset, "Tomorrow afternoon, about 3 o'clock" corresponds to the latent state "Time Information". | by carefully examining words in dialogues we can observe that not all words are generated from the latent states (Ritter et al., 2010; Zhai and Williams, 2014). | contrasting |
train_3879 | (2015) generate text for spoken dialogue systems with a two-stage approach, comprising an LSTM decoder semantically conditioned on the logical representation of speech acts, and a reranker to generate the final output. | we design an end-to-end attention-based model for source code. | contrasting |
train_3880 | This is in part because, unlike SQL, C# code contains informative intermediate variable names that are directly related to the objective of the code. | sQL is more challenging in that it only has a handful of keywords and functions, and summarization models need to rely on other structural aspects of the code. | contrasting |
train_3881 | Using a very standard, and in fact somewhat dated sentiment analyzer, we are regularly able to garner annualized returns over twice that percentage, and in a manner that highlights two of the better design decisions that Zhang and Skiena (2010) made, viz., (1) their decision to trade based upon numerical SVM scores rather than upon discrete positive or negative sentiment classes, and (2) their decision to go long (resp., short) in the n best-(worst-) ranking securities rather than to treat all positive (negative) securities equally. | we trade based upon the raw SVM score itself, rather than its relative rank within a basket of other securities as Zhang and Skiena (2010) did, and we experimentally tune a threshold for that score that determines whether to go long, neutral or short. | contrasting |
train_3882 | Differences in the amount or degree of improvement might arguably be rescalable, but Section 4.3 shows that such intrinsic measures are not even accurate up to a determination of the delta's sign. | the results reported here should not be construed as an indictment of sentiment analysis as a technology or its potential application. | contrasting |
train_3883 | Indeed, U-MSTuf-lep outperforms U-MST in all 20 setups in D-UAS evaluation and in 15 out of 20 setups in U-UAS evaluation (in one setup there is a tie). | the improvement this procedure provides is much more noticeable for D-UAS, with an averaged improvement of 2.35% across setups, compared to an averaged U-UAS improvement of only 0.26% across setups. | contrasting |
train_3884 | Distant supervised relation extraction has been widely used to find novel relational facts from text. | distant supervision is inevitably accompanied by the wrong labelling problem, and these noisy data will substantially hurt the performance of relation extraction. | contrasting |
train_3885 | To address this issue, (Mintz et al., 2009) aligns plain text with Freebase by distant supervision. | distant supervision is inevitably accompanied by the wrong labelling problem. | contrasting |
train_3886 | It means the embedding of the set S is the average of all the sentence vectors. It's a naive baseline of our selective attention. | selective Attention: the wrong labelling problem inevitably occurs. | contrasting |
train_3887 | For both CNN and PCNN, the AVE method is comparable to the ATT method in the One test setting. | when the number of testing sentences per entity pair grows, the performance of the AVE methods shows almost no improvement. | contrasting |
train_3888 | Solving a word problem, in general, requires several such applications in series or parallel, generating multiple equations. | in this research, we restrict the problems to be of a single equation which requires only one application. | contrasting |
train_3889 | Since the template involves two sets, there is a 3^(n−3) factor present in the formula of N_change. | any application of change concept with gains and losses slots containing a collection of variables can be broken down into multiple instances of change concept where the gains and losses slots accept only a single variable by introducing more intermediate unknown variables. | contrasting |
train_3890 | ", it is important to know that 'skateboard' and 'marbles' are toys but 'shorts' are not. | such knowledge is not always present in ConceptNet which results in error. | contrasting |
train_3891 | Literary theory suggests that it should be possible, because fictional character names function as expressions of experience, ethos, teleology, values, culture, ideology, and attitudes of the character. | work in literary theory, psychology, linguistics and philosophy has studied fictional names by analysing individual works or small clusters of closely related works, such as those of a particular author. | contrasting |
train_3892 | The analysis showed that the features that calculate the emotional load of fictional names based on SentiWordNet contribute to the classification task. | we believe that there is still room for improvement for the performance of this feature mainly towards the optimization of the selection threshold in order to reduce the degree of false positive matches as well as the addition of more lexical resources for example WordNet Affect or LIWC. | contrasting |
train_3893 | Our stancetaking pathbased features that we identified as intuitively having a connection to the Disagree Strongly class together cover only 51% of Disagree Strongly instances, meaning that it is in principle impossible for our system to identify the remaining 49%. | our decision to incorporate only features that are expected to have fairly high precision for some class was intentional, as the lesson we learned from the Faulkner-based system is that it is difficult to learn a good classifier for stance classification using a large number of weakly or non-predictive features. | contrasting |
train_3894 | To evaluate a word segmenter, the standard metric consists of precision p, recall r, and an evenly-weighted F-score F1. | with the successive improvement of performance, state-of-the-art segmenters are hard to distinguish under the standard metric. | contrasting |
train_3895 | We can see from Figure 2c that OOV generally has high difficulty. | a lot of OOV is relatively easy for segmenters. | contrasting |
train_3896 | One problem is that the number of possible TLINKs grows quadratically with the number of event mentions, therefore most annotation studies concentrate on links for mentions in the same or in adjacent sentences. | as our annotation study shows, this restriction results, for 58% of the event mentions, in less precise information about when the event took place. | contrasting |
train_3897 | The input gate, by allowing the incoming signal to alter the state of the memory cell, regulates the proportion of history information the memory cell will keep. | the output gate regulates what proportion of stored information in the memory cell will influence other neurons. | contrasting |
train_3898 | Cross-Domain Authorship Attribution Almost all previous authorship attribution studies have tackled traditional (single-domain) authorship problems where the distribution of the test data is the same as that of the training data (Madigan et al., 2005;Stamatatos, 2006;Luyckx and Daelemans, 2008;Escalante et al., 2011). | there are a handful of authorship attribution studies that explore cross-domain authorship attribution scenarios (Mikros and Argiri, 2007;Goldstein-Stewart et al., 2009;Schein et al., 2010;Stamatatos, 2013;Sapkota et al., 2014). | contrasting |
train_3899 | Therefore, the authors suggest their use in authorship attribution should be done with care. | the study did not attempt to construct authorship attribution models where the source and target domains differ. | contrasting |
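The rows above can be loaded and filtered programmatically. Below is a minimal sketch in Python, assuming the split has been exported to a local CSV file named `train.csv` with the four columns from the header; the file name is an assumption for illustration, not part of the dataset itself.

```python
# Minimal sketch: load and inspect sentence pairs shaped like the table above.
# Assumes the split was exported to a local "train.csv" with the columns
# id, sentence1, sentence2, label (the file name is hypothetical).
import pandas as pd

df = pd.read_csv("train.csv")

# Sanity checks that mirror the column metadata in the header:
# label is a small closed set (4 classes); the sentence columns
# are free-form strings of varying length.
print(df["label"].value_counts())
print(df["sentence1"].str.len().describe())

# Inspect one contrasting pair.
row = df.loc[df["label"] == "contrasting"].iloc[0]
print(row["id"])
print("sentence1:", row["sentence1"])
print("sentence2:", row["sentence2"])
```

Since the label column has four classes while every row shown here is `contrasting`, filtering on the label is typically the first step when sampling examples of a specific discourse relation.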