id (stringlengths 7–12) | sentence1 (stringlengths 6–1.27k) | sentence2 (stringlengths 6–926) | label (stringclasses, 4 values) |
---|---|---|---|
train_10900 | Future work is to reduce the bit size of each counter (instead of the number of counters), as has been tried for other summaries (Talbot and Osborne, 2007;Talbot, 2009;Van Durme and Lall, 2009a) in NLP. | it may be challenging to combine this with conservative update. | contrasting |
train_10901 | Approximate sampling techniques have been developed over the last century and seem sufficient for most purposes. | the cases where one actually knows the quality of a sampling algorithm are very rare, and it is common practice to forget about the approximation and simply treat the result of a sampler as a set of i.i.d. | contrasting |
train_10902 | Accept x with probability p(x)/q(x), otherwise reject x; To obtain multiple samples, the algorithm is repeated several times. | for simple bounds, Figure 1: An example of an initial q-automaton (a), and the refined q-automaton (b) Each state corresponds to a context (only state 6 has a non-empty context) and each edge represents the emission of a symbol. | contrasting |
train_10903 | optimization) with such models, by decoupling the problem into two alternating steps that can each be handled by dynamic programming or other polynomial-time algorithms (Rush et al., 2010), an approach that has been applied to Statistical Machine Translation (phrase-based (Chang and Collins, 2011) and hierarchical (Rush and Collins, 2011)) among others. | sampling such distributions remains a difficult problem. | contrasting |
train_10904 | With two states (or one bit of annotation), our version of this parser gets 81.7 F1, edging out the comparable parser of Petrov and Klein (2008a). | our parser gets 83.2 with four states (two bits), short of the performance of prior work. | contrasting |
train_10905 | This approach worked largely because training was intractable: if the training algorithm could reach the global optimum, then this approach might have yielded no gain. | because the optimization technique is local, the same algorithm produced multiple grammars. | contrasting |
train_10906 | θ m will be particularly motivated to correct the latter, because they are less like the gold tree. | θ m will ignore the other "correct" segments, because q has already sufficiently captured them. | contrasting |
train_10907 | It scored 90.1/89.4 F1 on length 40 and all sentences respectively, slightly edging out the 90.0/89.3 F1 of Petrov and Klein (2008a). | it is not quite as good at exact match: 37.7/35.3 vs 40.1/37.7. | contrasting |
train_10908 | Language models are designed to assign probability to sentences. | approximate search algorithms use estimates for sentence fragments. | contrasting |
train_10909 | As alluded to in the introduction, the first few words of a sentence fragment are typically scored using lower-order entries from an N -gram language model. | kneser-Ney smoothing (kneser and Ney, 1995) conditions lower-order probabilities on backing off. | contrasting |
train_10910 | Developing good discourse-level models is difficult, and considering the modest translation quality that has long been achieved by SMT, there have been more pressing problems to solve and lower hanging fruit to pick. | we argue that the popular DP beam search algorithm, which delivers excellent decoding performance, but imposes a particular kind of local dependency structure on the feature models, has also had its share in driving researchers away from discourse-level problems. | contrasting |
train_10911 | While these representations are task-specific, they could be used across tasks in a multi-task learning setup. | in order to fairly compare to related work, we use only the supervised data of each task. | contrasting |
train_10912 | (2002) in requiring a representation in which two lexical items in an antonymy relation should lie at opposite ends of an axis. | in contrast to the logical axes used previously, we desire that antonyms should lie at the opposite ends of a sphere lying in a continuous and automatically induced vector space. | contrasting |
train_10913 | Given the context space model, we may use a linear regression or a k-nearest neighbors approach to embed out-of-thesaurus words into the thesaurusspace representation. | as near words in the context space may be synonyms in addition to other semantically related words (including antonyms), such approaches can potentially be noisy. | contrasting |
train_10914 | WordNet provides significantly greater coverage with approximately 227k synsets involving multiple words, and a vocabulary of about 190k words. | it is also much sparser, with 5.3 words per sense on average as opposed to 10.3 in the thesaurus, and has only 62,821 pairs of antonyms. | contrasting |
train_10915 | The Bloomsbury-based system is able to answer 153 questions, and the best dimension setting is 300, which answers 132 questions correctly and thus archives 0.863 in precision. | the larger vocabulary in WordNet helps the system answer 160 questions but the quality is not as good. | contrasting |
train_10916 | In contrast, the WordNet-based methods (Lines 1-3) attempted 936 questions. | consistent with what we observed on the development set, the WordNet-based model is inferior. | contrasting |
train_10917 | One of the most appealing aspects of so-called distributional semantic models (see Turney and Pantel (2010) for a recent overview) is that they afford some hope for a non-trivial, computationally tractable treatment of the context dependence of lexical meaning that might also approximate in interesting ways the psychological representation of that meaning (Andrews et al., 2009). | in order to have a complete theory of natural language meaning, these models must be supplied with or connected to a compositional semantics; otherwise, we will have no account of the recursive potential that natural language affords for the construction of novel complex contents. | contrasting |
train_10918 | If the first paragraph of a Wikipedia page contains the pronoun "she", but not "he", the article is considered to be about a female (and vice-versa). | when the page is assigned a non-person-related fine-grained NE type (e.g. | contrasting |
train_10919 | Therefore, when a similar-looking named entities appear in the same sentence, they are actually likely to refer to different entities. | in the sentence "Reggie Jackson, nicknamed Mr. October . | contrasting |
train_10920 | We design entity-based features so that the subsequent sieves "see" the decisions of the previous sieves and use entity-based features based on the intermediate clustering. | unlike (Raghunathan et al., 2010), we allow the subsequent sieves to change the decisions made by the lower sieves (since additional information becomes available). | contrasting |
train_10921 | We note that conceptually, the nested (B)+Predictions sieve should be identical to the baseline. | in practice, the surface form compatibility (SFC) features are generated for the nested sieve as well. | contrasting |
train_10922 | A Markov logic network consists of a set of first-order clauses (which we will refer to as formulas in the rest of the paper) just like in first-order logic. | different from first-order logic where a formula represents a hard constraint, in an MLN, these constraints are softened and they can be violated with some penalty. | contrasting |
train_10923 | In addition to research on resolution, there is also some work on effective annotation of abstract anaphora (Strube and Müller, 2003;Botley, 2006;Poesio and Artstein, 2008;Dipper and Zinsmeister, 2011). | to the best of our knowledge, there is currently no English corpus annotated for issue anaphora antecedents. | contrasting |
train_10924 | Taxonomies can serve as browsing tools for document collections. | given an arbitrary collection, pre-constructed taxonomies could not easily adapt to the specific topic/task present in the collection. | contrasting |
train_10925 | We filter out spams and advertisements and then search for more relevant Web documents to make the total number 1000. | not all topics can retrieve 1000 documents. | contrasting |
train_10926 | It is defined as: where P (c, v) = d∈Z P (d)P (c|d)P (v|d), C is the set of concepts in T , V is the set of nonstopwords in Z, and d is a document in Z. EMIM only evaluates the content of a browsing taxonomy, not its structure. | it is still popularly used to indicate how representative a browsing taxonomy is for a document collection. | contrasting |
train_10927 | Users that answered questions later in the question had higher accuracy. | there were users that were able to answer questions relatively early without sacrificing accuracy. | contrasting |
train_10928 | An alternative would be a more expressive action space (e.g., an action for every possible answer). | this conflates the question of when to buzz with what to answer. | contrasting |
train_10929 | This resembles approaches that merge different classifiers (Riedel et al., 2011) or attempt to estimate confidence of models (Blatz et al., 2004). | here we use partial observations. | contrasting |
train_10930 | In this light, these methods are a complex form of instance bagging, and their development could be justified from this perspective. | given this justification, are improvements from MDL simply the result of standard ensemble learning effects, or are these methods really learning something about domain behavior? | contrasting |
train_10931 | Our empirical results suggest that MDL can be more effective in settings with domain-specific class biases. | we also saw differences in improvements for each method, and for different domains. | contrasting |
train_10932 | In this paper, we focus on incorporating the biases with HMM-type representations (Hidden Markov Model). | this technique can also be applied to other graphical model-based representations with little modification. | contrasting |
train_10933 | We suspect that the entropy feature, which is learned only from labeled sourcedomain data, makes the representation biased towards features that are important in the source domain only. | after we add in the distance bias and a parameter to balance the weights from both biases, the representation is able to capture the label information as well as the target domain features. | contrasting |
train_10934 | (2010) speculated that the observed advantage of Viterbi EM over standard EM is due to standard EM reserving too much probability mass to spurious parses in the E-step. | it is still unclear as to why Viterbi EM can avoid this problem. | contrasting |
train_10935 | However, entropy regularization is either motivated by the theoretical result that unlabeled data samples are informa-tive when classes are well separated (Grandvalet and Bengio, 2005), or derived from the expected conditional log-likelihood (Smith and Eisner, 2007). | our approach is motivated by the observed unambiguity of natural language grammars. | contrasting |
train_10936 | More and more English words are used in Chinese texts as names of organizations, products, terms and abbreviations, such as "eBay", "iPhone", "GDP", "Android" etc. | it is also a common phenomenon to use Chinese-English mixed texts in daily conversation, especially in communication among employers in large international corporations. | contrasting |
train_10937 | The mainstream method is to regard POS tagging as sequence labeling problems (Rabiner, 1990;Xue, 2003;Peng et al., 2004;Ng and Low, 2004). | the analysis of Chinese-English mixed texts is rarely involved in previous literature. | contrasting |
train_10938 | With a character-based perceptron as the core, combined with real-valued features such as language models, the cascaded model can efficiently utilize knowledge sources that are inconvenient to incorporate into the perceptron directly. | they use POS tags or word information in a Brute-Force way, which may suffer from the problem of time complexity. | contrasting |
train_10939 | English) and use parallel data to build a dictionary in the desired language and extend the dictionary coverage using label propagation. | parallel text does not exist for many pairs of languages and the proposed bilingual projection algorithms are fairly complex. | contrasting |
train_10940 | In these approaches, each observation corresponds to a particular word and each hidden state corresponds to a cluster. | using maximum likelihood training for such models does not achieve good results (Clark, 2003): maximum likelihood training tends to result in very ambiguous distributions for common words, in contradiction with the rather sparse word-tag distribution. | contrasting |
train_10941 | These two aspects are strongly intertwined: on the one hand, enabling language-independent text understanding would allow for the harvesting of more knowledge in arbitrary languages, while, on the other hand, bringing together the lexical and semantic information available in different languages would improve the quality of text understanding in arbitrary languages. | these two goals have hitherto never been achieved, as is attested to by the fact that research in a core language understanding task such as Word Sense Disambiguation (Navigli, 2009, WSD) has always been focused mostly on English. | contrasting |
train_10942 | All the above approaches to multilingual or crosslingual WSD rely on bilingual corpora, including those which exploit existing multilingual WordNetlike resources (Ide et al., 2002), or use automatically induced multilingual co-occurrence graphs (Silberer and Ponzetto, 2010). | this requirement is often very hard to satisfy, especially if we need wide coverage. | contrasting |
train_10943 | Formally, it computes the score for the j-th sense of w as follows: For instance, using the (normalized) sense distributions from our example, the ensemble distribution will be the following: Computing a sense distribution for each translation using the same graph connectivity measure assumes that all translations are equal. | a leitmotif of multilingual WSD research is that translations restrict the set of candidate senses of the target word in the source language. | contrasting |
train_10944 | Domain-driven approaches have been shown to obtain the best performance among the unsupervised alternatives , especially when domain kernels are coupled with a syntagmatic one (Gliozzo et al., 2005). | their performance is typically lower than supervised systems. | contrasting |
train_10945 | The average number of glosses per term in our inventory is 1.9 (3.6 glosses on polysemous terms). | note that a monosemous word in our domain sense inventory does not necessarily make disambiguation easier, as i) we might have missed other domain-specific senses, ii) an uncovered, non-domain sense might fit a word occurrence (in this case, the domain WSD algorithms might be (wrongly) biased towards returning the only possible choice if a non-zero disambiguation score is calculated for it). | contrasting |
train_10946 | In sum, we can conclude that the higher correlation with human judgments indicates that integrating textual and perceptual modalities jointly is preferable to concatenation. | note that all models in Table 2 fall short of the human upper bound which we measured by calculating the reliability of Nelson et al. | contrasting |
train_10947 | On top of this cost, we need to alter the internal structure of the sentence-level models. | we can construct a dual decomposition algorithm which is efficient, produces a certificate when it finds an exact solution, and directly uses the sentence-level parsing models. | contrasting |
train_10948 | Graph features are defined over the factors of a graph-based dependency parser, which was shown to improve the accuracy of a transition-based parser by Zhang and Clark (2008). | while their features were limited to certain first-and second-order factors, we use features over second-and third-order factors as found in the parsers of Bohnet and Kuhn (2012). | contrasting |
train_10949 | In fact, this is the first example shown in Table 1, which is a noisy burst. | in Figure 2(b), the state sequence for the query "Nobel" is "0000011111," in which the longer and smoother burst corresponds to a true event. | contrasting |
train_10950 | In Figure 3, we can easily derive X 1 and X 2 have the same value 9 A simple evaluation method is that we label each one hour time slot as being part of a burst or not and compare with the gold standard. | in our experiments, we find that some methods tend to break one meaningful burst into small parts and easier to be affected by small fluctuations although they may have a good coverage of bursty points. | contrasting |
train_10951 | Another possible baseline is that we first merge all the activities, then apply the single-stream algorithm. | in our data set, we find that the number of activities in S t is significantly larger than that of the two types. | contrasting |
train_10952 | Their corpus consist of 1,400/450 posts written by 47 females and 24 males, respectively. | the ngram features were preselected based on whether they occurred with significant relative frequency in the language of one gender over the other. | contrasting |
train_10953 | We are not the first to predict gender from language features with online data. | to our knowledge, we are the first to contrast the two views, social and language-based, using online data and to question whether there is a clear understanding of what gender classifiers actually learn to predict from language. | contrasting |
train_10954 | While this seems like an interesting application of information theory for linguistic studies, it has also generated some controversies (Farmer et al., 2004). | our work departs from traditional scenarios significantly. | contrasting |
train_10955 | A simplistic approach might indeed involve comparing a test document to each training document. | in the winner-takes-all model described above, we can rely only on the result of comparing with the single best training document, which may not contain enough information to make a good prediction. | contrasting |
train_10956 | Nonetheless, using the centroid has the benefit of making a uniform grid less sensitive to cell size, such that larger cells can be used more reliably -especially important when there are few training documents. | when choosing representative locations for the leaves of a k-d tree, it is quite important to use the centroid because the leaves necessarily span the entire earth and none are discarded (since all have a roughly similar number of documents in them). | contrasting |
train_10957 | Operettas are a cultural phenomenon largely associated with France, Germany, and England and particularly with specific theaters in these countries. | other highly specific tokens such as KS01 have a very low average error because they occur in few documents and are thus highly unambiguous indicators of location. | contrasting |
train_10958 | Much of the writing styles recognized in rhetorical and composition theories involve deep syntactic elements in style (e.g., Bain (1887), Kemper (1987) Strunk and White (2008)). | previous research for automatic authorship attribution and computational stylometric analysis have relied mostly on shallow lexico-syntactic patterns (e.g., Mendenhall (1887), Mosteller and Wallace (1984), Stamatatos et al. | contrasting |
train_10959 | Some very recent works have shown that PCFG models can detect distributional difference in sentence structure in gender attribution (Sarawgi et al., 2011), authorship attribution (Raghavan et al., 2010), and native language identification (Wong and Dras, 2011). | still very little has been understood exactly what constitutes salient stylistic elements in sentence structures that characterize each author. | contrasting |
train_10960 | Among other corpora, a small subset (∼120K) of English portion of OntoNotes was used for this purpose. | the lack of a strong participation prevented the organizers from reaching any firm conclusions. | contrasting |
train_10961 | Since Arabic portion of the corpus is all newswire, this had no impact on it. | for both Chinese and Arabic, since we remove trace tokens corresponding to dropped pronouns, all the other layers of annotation had to be remapped to the remaining sequence of tree tokens. | contrasting |
train_10962 | This is somewhat expected as this is the second year for the English task, and so it does show a more mature and stable performance. | both Chinese and Arabic plots show much more divergence, with the Chinese and Arabic GB case showing the highest divergence. | contrasting |
train_10963 | For a given document, we have a forest of coreference trees, one tree for each coreferring cluster. | for the sake of simplicity, we link the root node of every coreference tree to an artificial root node, obtaining the document tree. | contrasting |
train_10964 | We use feature templates to generate such complex features. | we automatically generate templates using the entropy guided feature induction approach (Fernandes and Milidiú, 2012;Milidiú et al., 2008). | contrasting |
train_10965 | The development results are obtained with systems trained only on the training sets. | test set results are obtained by training on a larger dataset -the one obtained by concatenating training and development sets. | contrasting |
train_10966 | That is usually the case on NLP tasks, since golden values eliminate the additional noise introduced by automatic features. | during evaluation, we use the automatic values provided in the CoNLL shared task corpora. | contrasting |
train_10967 | We believe the reason for this is that these decoders are too similar and hence can not really benefit from each other. | when we used the AMP decoder as the first step, and a pair-wise decoder as the second, we saw an increase in performance, particularly with respect to the CEAFE metric. | contrasting |
train_10968 | Modifying the threshold of the AMP decoder gave very small differences in overall score and we kept the threshold for this decoder at 0.5. | when we increased the probability threshold for the second resolver, we found that performance increased across all languages. | contrasting |
train_10969 | For Chinese we also attempt to train a model for pronouns '你'(you) and '那'(that). | the results are not acceptable either since the features we select are not enough for the classifier. | contrasting |
train_10970 | The pronouns ' 这 '(this), ' 那 '(that), ' 这 里 '(here), ' 那 里 '(there) are not processed for we did not find a good solution. | in some cases the provided gender and number are not correct or missing and we had to label these mentions based on the appellation words of the training data. | contrasting |
train_10971 | The result is not as good as we supposed since the feature errors caused by these tools also made the coreferential errors. | a deeper error analysis is needed in the construction of deterministic rules. | contrasting |
train_10972 | From these figures, we can see that using feature selection in both initial feature sets, the performance improves. | the performance of our system is improved only on a few iteration. | contrasting |
train_10973 | The result seems reasonable because the model for testing use additional development data which is much smaller than training data. | the result on English test data seem a little odd. | contrasting |
train_10974 | Six systems participated in that task, UBIU (Zhekova and Kübler, 2010) among them. | since systems participated across the various languages rather irregularly, Recasens et al. | contrasting |
train_10975 | Following these works, we include a k nearest neighbor classifier for singleton mentions in UBIU with 19 commonly-used features described below. | unlike Ng (2004), we use a combination of the feature-and constraint-based approaches to incorporate the classifier's results. | contrasting |
train_10976 | A comparison of the results shows that there are only minor differences between them with gold outperforming auto apart from Arabic for which there is a drop of 3.75 points in the gold setting. | the small difference between all results shows that the quality of the automatic annotation is good enough for a CR system and that further improvements in the quality of the linguistic information will not necessarily improve CR. | contrasting |
train_10977 | This kind of model focuses on filtering with ordered tiers: One filter is applied at one time, from highest to lowest precision. | compared with learning approaches (Soon et al., 2001), since effective rules are quite heterogeneous in different languages, several filtering methods should be redesigned when different languages are considered. | contrasting |
train_10978 | In this case, we prefer to keep the candidate with a larger span. | we may predict "President Bush at Dayton" instead of "President Bush", if the parser incorrectly attaches the prepositional phrase. | contrasting |
train_10979 | The baseline system uses an identical model for coreference resolution on both pronouns and nonpronominal mentions. | in the literature (Bengtson and Roth, 2008;Rahman and Ng, 2011;Denis and Baldridge, 2007) the features for coreference resolution on pronouns and nonpronouns are usually different. | contrasting |
train_10980 | For example, lexical features play an important role in non-pronoun coreference resolution, but are less important for pronoun anaphora resolution. | gender features are not as important in non-pronoun coreference resolution. | contrasting |
train_10981 | It seems that the performance of the whole system is highly bottlenecked by that of the mention detection component. | it may not be true as the task requires removing singleton mentions that do not refer to any other mentions. | contrasting |
train_10982 | Thus more labeled instances can be collected within the fix budget. | the more useful and relevant but expensive instances from the target domain should also be queried at a certain rate. | contrasting |
train_10983 | One way to adapt it for document significance is to alter the numerator such that only the span-constrained bigram occurrences in -significant documents are considered in computing f (x, y). | this simple adaptation is problematic. | contrasting |
train_10984 | Subsequent to model training, the methods uncover morph boundaries for new word forms by generating their most likely morph sequences according to the morph lexicons. | to learning morph lexicons (Poon et al., 2009;Kohonen et al., 2010), we study morphological segmentation by learning to directly predict morph boundaries based on their local substring contexts. | contrasting |
train_10985 | Intuitively, instead of the four class set {B, M, E, S}, a segmentation could be accomplished using only a set of two classes {B, M} as in (Green and DeNero, 2012). | similarly to Chinese word segmentation (Zhao et al., 2006), our preliminary experiments suggested that using the more fine-grained four class set {B, M, E, S} performed slightly better. | contrasting |
train_10986 | Nevertheless, for completeness, we computed the character accuracy for our Arabic data set, obtaining the accuracy 99.1%, which is close to their reported accuracy of 98.6%. | these values are not directly comparable due to our use of the Bible corpus by Snyder and Barzilay (2008) and their use of the Penn Arabic Treebank (Maamouri et al., 2004). | contrasting |
train_10987 | The result of McNemar test indicates that there is a significant difference (p < 0.01) between Semi-Boost and Semi-CRF. | there is no significant difference between Semi-Boost and Semi-PER. | contrasting |
train_10988 | When we trained models for ENER, Semi-PER consumed 32 GB and Semi-Boost consumed 33 GB. | semi-CRF could not train models because of the lack of memory. | contrasting |
train_10989 | This is because Semi-CRF maintains a weight vector and a parameter vector for L1-norm regularization and Semi-CRF considers all possible patterns generated from given sequences in training. | semi-PER and semi-Boost only consider features that appeared in correct ones and incorrectly recognized ones. | contrasting |
train_10990 | Models for sentence compression often compose text from units that are larger than individual tokens, such as n-grams which describe a token sequence or syntactic relations which comprise a dependency tree. | our approach is specifically motivated by the perspective that both these representations of a sentence-a sequence of tokens and a tree of dependency relations-are equally meaningful when considering its underlying fluency and integrity. | contrasting |
train_10991 | Similarly, consideration of dependency arcs allows the compressed dependency tree z to be scored using a rich set of indicator features over dependency labels, part-of-speech tags and even lexical features as in Filippova and Strube (2008). | unlike the bag-of-tokens scenario, these output structures cannot be constructed efficiently due to their interdependence. | contrasting |
train_10992 | n − 2, there can be no n-grams that feature the last n−r −1 tokens in the r'th position or the first n − r − 1 tokens in the (n − r + 1)'th position. | this is easily tackled computationally by assuming that the terminal n-gram replaces these missing n-grams for near-terminal tokens in constraint (6). | contrasting |
train_10993 | ∀c ∈ {adj, dep} By itself, (8) would simply set all token indicators x i simultaneously to 0. | since START and ROOT have no incoming flow variables, the amount of commodity in the respective outgoing flow variables γ adj * j and γ dep * j remains unconstrained. | contrasting |
train_10994 | A vectorial representation of such pairs is the difference between the vectors representing the hypotheses in a pair. | this assumes that features are explicit and already available whereas we aim at automatically generating implicit patterns with kernel methods. | contrasting |
train_10995 | Vector-space word representations have been very successful in recent years at improving performance across a variety of NLP tasks. | common to most existing work, words are regarded as independent entities without any explicit relationship among morphologically related words being modeled. | contrasting |
train_10996 | The objective function is then simply the sum of all individual costs over N training examples, plus a regularization term, which we try to minimize: The cimRNN model, though simple, is interesting to attest if morphemic semantics could be learned solely from an embedding. | it is limited in several aspects. | contrasting |
train_10997 | Morfessor captures a general word structure of the form (pre * stm suf * ) + , which is handy for words in morphologically rich languages like Finnish or Turkish. | such general form is currently unnecessary in our models as the mor-phoRNNs assume input of the form pre * stm suf * for efficient learning of the RNN structures: a stem is always combined with an affix to yield a new stem. | contrasting |
train_10998 | For example, it returns V-ing as nearest neighbors for "commenting" and similarly, JJ-ness for "fearlessness", an unknown word that C&W cannot handle. | for those cases, the nearest neighbors are badly unrelated. | contrasting |
train_10999 | First, in ACs, the same-author posts can only interact via the confidence values assigned to them. | in our proposal, the same-author posts interact via Feature Definition SameDebate whether authors posted in same debate SameThread whether authors posted in same thread Replied whether one author replied to the other Table 5: Interaction features for the authoragreement classifier. | contrasting |
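Below is a minimal sketch of how rows with this schema (`id`, `sentence1`, `sentence2`, `label`) could be loaded and inspected with the Hugging Face `datasets` library. The repository id `"user/dataset-name"` is a hypothetical placeholder, not the actual dataset path, and the snippet only illustrates the column layout shown in the table above.

```python
# Minimal sketch: load a dataset with the schema above and inspect a few rows.
# The dataset path "user/dataset-name" is a hypothetical placeholder.
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("user/dataset-name", split="train")  # hypothetical repo id

# Count the label classes (the preview shows 4 classes, e.g. "contrasting").
label_counts = Counter(row["label"] for row in dataset)
print(label_counts)

# Print a few sentence pairs, truncating long cells for readability.
for row in dataset.select(range(3)):
    print(row["id"], "->", row["label"])
    print("  sentence1:", row["sentence1"][:80])
    print("  sentence2:", row["sentence2"][:80])
```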