id (string, lengths 7-12) | sentence1 (string, lengths 6-1.27k) | sentence2 (string, lengths 6-926) | label (string, 4 classes) |
---|---|---|---|
train_18200 | In other words, precision is computed by reversing the roles of the key partition K(d) and the system partition S(d) used to compute recall for document d. If we wanted precision and recall to also play a symmetric role in the linguistically aware versions of these scoring metrics, it would be natural to define w L s in the same way as w L k , where E is the set of edges appearing in the maximum spanning tree defined over the mentions in S j . | there is a reason why it is undesirable for us to define w L s in this manner. | contrasting |
train_18201 | Research shows that topical discussions around events tend to evolve socially on microblogs. | sources like Twitter have no explicit discussion thread which will link semantically similar posts. | contrasting |
train_18202 | Those formulas, despite of their simplicity, performed well and were widely used by editors to grade reading material for young readers. | content producers might be tempted to adapt their manuscripts by tweaking the text features present in readability formulas, without gaining (or even degrading) real readability (Davison and Kantor, 1982). | contrasting |
train_18203 | Those features alone were found not to be as discriminant as the lexical ones, but performed well in combination with them. | the effects were not additive, which suggests that variables correlated with each other to a certain extent. | contrasting |
train_18204 | (Blair-Goldensohn and McKeown, 2006;Bosma, 2004)) to improve coherence and better simulate human writing. | most of these work have been developed for formal, well-written and factual documents. | contrasting |
train_18205 | Indeed, Table 1 shows that illustration, contingency, and comparison relations occur quite frequently irrespective of the textual genre. | in contrast to the TAC dataset, attributive, topic-opinion, and attribution relations occur very rarely in DUC 2007. | contrasting |
train_18206 | For example, if TAC-Best does not consider illustration relations, then the R-2 score decreases from 0.138 to 0.112, 0.102 and 0.113, respectively. | the relations of topic-opinion, attribution, and attributive do not consistently lead to a statistically significant improvement on ROUGE scores. | contrasting |
train_18207 | This procedure has been observed in human summarization (Jing and McKeown, 2000) and has been shown to be a valuable component of automated summarization systems (Barzilay and McKeown, 2005). | research in sentence fusion has long been hampered by the absence of datasets for the task, and the difficulty of generating one has cast doubt on the viability of automated fusion (Daumé III and Marcu, 2004). | contrasting |
train_18208 | The precision and recall measures for each experiment are described in Tables 1-3 Following the native speakers' intuitions from the experiment described in Section 3.2, we can assume that the discriminative power of errors should not be surprising; learners of Czech of a NIE language background are likely to make more errors than the learners of the IE group due to the differences between L1 and L2. | we need to perform a more detailed error analysis to conform or disagree with these intuitions. | contrasting |
train_18209 | Thus, greater differences between L1 and L2 grammatical structures might trigger a higher amount of errors within the NIE group. | a more detailed analysis of error distribution within each group and probably a larger data set are needed to investigate this claim. | contrasting |
train_18210 | 1 Neural machine translation (NMT) offers an elegant end-to-end architecture, while at the same time improving translation quality. | little is known about the inner workings of these models and their interpretability is limited. | contrasting |
train_18211 | 's model also employs dependency parsing but their model separately predicts the target translation sequence and parsing action sequence which maps to translation. | our proposed model's decoder directly predicts the linearized dependency tree itself in a single neural network in Depth-first preorder order so that the next-word token is generated based on syntactic relations and tree construction itself. | contrasting |
train_18212 | The neural system for their experimental analysis is not an attentional model and they argue that attention does not have any impact for learning syntactic information. | performing the same analysis for morphological information, Belinkov et al. | contrasting |
train_18213 | Considering this difference and the observations in Section 5.1, a natural follow-up would be to focus on getting the attention of verbs to be closer to alignments. | figure 3b shows that the average word prediction loss for verbs is actually smaller compared to the loss for nouns. | contrasting |
train_18214 | The low correlation for verbs confirms that attention to other parts of source sentence rather than the aligned word is necessary for translating verbs and that attention does not necessarily have to follow alignments. | the higher correla- tion for nouns means that consistency of attention with alignments is more desirable. | contrasting |
train_18215 | Their approach uses word embeddings learned from a large-scale native corpus to address the data sparseness problem of learner corpora. | most of the word embeddings, including the one used by , model only the context of the words from a raw corpus written by native speakers, and do not consider specific grammatical errors of language learners. | contrasting |
train_18216 | EWE learns word embeddings using the same model as C&W embeddings. | rather than creating negative samples randomly, we created them by replacing the target word w t with words w c that learners tend to easily confuse with the target word w t . | contrasting |
train_18217 | Our experiments on FCE+EWE-L8 and FCE+E&GWE-L8 were conducted by combining error patterns from all of Lang-8 corpus and the training part of FCE-public corpus to train word embeddings. | since the number of error patterns of Lang-8 is larger than that of FCE-public, we normalized each frequency so that the ratio was 1:1. | contrasting |
train_18218 | They use parsers coarsely trained on ex-isting data with FA for completion via constrained decoding. | our experiments show that this leads to dramatic decrease in parsing accuracy. | contrasting |
train_18219 | This is a consequence of having to predict the probability distribution over an entire vocabulary V , which is generally very large in the real world. | the WON predicts the probability distribution over entire sentences, whose length N is usually less than 50 |V |. | contrasting |
train_18220 | Their method heavily relies on pre-trained dependency parsers to produce words' relations for each sentence in training corpora, thus encountering error propagation problems. | our method only requires raw corpora, and our aim is to produce word embeddings that improve syntax-related tasks, such as parsing, without using any human annotations. | contrasting |
train_18221 | (2015) evaluated the PtrNet on geometric sorting tasks (e.g., Travelling Salesman Problem) where each input w i forms a continuous vector that represents the cartesian coordinate of the point (e.g., a city). | in the word ordering task, Equation 10 suffers from the data sparseness problem, as each input w i forms a high-dimensional discrete symbol. | contrasting |
train_18222 | The bi-directional bilingual pivoting of PPDB (Ganitkevitch et al., 2013) constrains paraphrase acquisition to be strictly symmetric. | although it is extremely effective for extracting synonymous expressions, it tends to give high scores to frequent but irrelevant phrases, since bilingual pivoting itself contains noisy phrase pairs because of word alignment errors. | contrasting |
train_18223 | Language Modeling Language models, from ngram models to continuous space language models (Mikolov et al., 2013;Pennington et al., 2014), provide probability distributions over sequences of words and have shown their usefulness in many natural language processing tasks. | to our knowledge, they have not yet been used to model semantic frames. | contrasting |
train_18224 | Then the two-layered SVM classifier re-predicted whether there was an inference relation for the lexical pair w 1 → w 3 . | none of these models takes into account transitivity in the observed layer or transitivity between two layers. | contrasting |
train_18225 | Therefore, their transitivity framework may involve the noise from the first prediction. | in our PSL models, all possible feature-layered transitivities between pairs are explored. | contrasting |
train_18226 | This is likely because it is easier to learn the parameters of the image prediction model that has fewer parameters (8.192 million for VGG-19 vs. 4.096 million for Inception-V3 and ResNet-50). | it is not clear why there is such a pronounced difference between the Inception-V3 and ResNet-50 models 4 . | contrasting |
train_18227 | (2017a) found that word representations learned from the encoder are rich in morphological information, while representations learned from the decoder are significantly poorer. | the paper does not present a convincing explanation for this finding. | contrasting |
train_18228 | We did not apply byte-pair encoding (BPE) (Sennrich et al., 2016b), which has recently become a common part of the NMT pipeline, because both our analysis and the annotation tools are word level. | 2 experimenting with BPE and other representations such as character-based models (Kim et al., 2015) would be interesting. | contrasting |
train_18229 | DEC t i probably encodes morphological information about both the current word (t i ) and the next word (t i+1 ). | we leave this exploration for future work, and work with the assumption that DEC t i encodes information about word t i . | contrasting |
train_18230 | For example, it is possible to incorporate neural features into traditional SMT models to disambiguate hypotheses (Neubig et al., 2015;Stahlberg et al., 2016). | the search space of traditional SMT is usually limited by translation rule tables, reducing the ability of these models to generate hypotheses on the same level of fluency as NMT, even after reranking. | contrasting |
train_18231 | In T 1 , "hypophysectomized (hypop hy sec to mized)" is incorrectly translated into "低(low) 酪(cheese) 蛋白(protein) 切除(remove)". | from Table 9, we can see that the forced decoding algorithm learns it as unlikely translation (hy→低(low)), over-translation (null→酪(cheese), null→蛋 白(protein)) and under-translation (hypop→null, sec→null), because there is no translation rule between "hypop" "sec" and "酪(cheese)" "蛋白(protein)". | contrasting |
train_18232 | One of the strengths of neural networks is the ability to learn features automatically. | this strength has not been well exploited in their works. | contrasting |
train_18233 | (2016); utilized word embeddings to boost performance of word-based CWS models. | for character-based CWS models, word information is not easy to be integrated. | contrasting |
train_18234 | When we increase the depth from 1 to 5, the performance is improved significantly. | when we increase depth from 5 to 7, even to 11 and 15, the performance is almost unchanged. | contrasting |
train_18235 | The feature experiments indicate that concatenated ngrams contribute substantially. | both radicals and graphical features as sub-character level information are less effective. | contrasting |
train_18236 | Distributional models, which describe the meaning of a word in terms of its observed contexts (Turney and Pantel, 2010), have been suggested as a model for how humans learn word meanings (Landauer and Dumais, 1997). | distributional models typically need hundreds of instances of a word to derive a highquality representation for it, while humans can often infer a passable meaning approximation from one sentence only (as in the above example). | contrasting |
train_18237 | (2016) were the first to explore fast mapping for text-based word learning, using an extension to word2vec with both textual and visual features. | they model the unknown word simply by averaging the vectors of known words in the sentence, and do not explore what types of knowl-edge enable fast mapping. | contrasting |
train_18238 | Word representations based on the distributional hypothesis of Harris (1954) have become a dominant approach including word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014), which show remarkable performances in a wide spectrum of natural language processing. | a question arises about a relationship between a true word meaning and its distributed representation. | contrasting |
train_18239 | For instance, since concrete concepts are perceptionrevealing, they would benefit from a strong emphasis on the perception embedding. | emotion-revealing word groups such as abstract concepts would be opposite. | contrasting |
train_18240 | In the SimLex-999 dataset which focuses on the word similarity, the cognition (lexical relation) and the sentiment modules turned out to be important. | in the WordSim-353 dataset which focuses on the word relatedness, both linear context and syntactic context are turned out to be critical. | contrasting |
train_18241 | vec(king)vec(queen)=vec(man)-vec(woman)). | the assumption that each word is represented by only one single vector is problematic when dealing with the polysemous words. | contrasting |
train_18242 | Pervious work mostly focus on using clustering to induce word senses (each cluster refers to one of the senses) and then learn the word sense representations respectively (Reisinger and Mooney, 2010;Huang et al., 2012;Tian et al., 2014;Neelakantan et al., 2014;Li and Jurafsky, 2015). | the above approaches ignore the relatedness among the word senses. | contrasting |
train_18243 | Convolutional Neural Networks (CNNs) have recently achieved remarkably strong performance on the practically important task of sentence classification (Kim, 2014; Kalchbrenner et al., 2014; Johnson and Zhang, 2014; Zhang et al., 2016). | these models require practitioners to specify an exact model architecture and set accompanying hyperparameters, including the filter region size, regularization parameters, and so on. | contrasting |
train_18244 | Emerging research has begun to explore hyperparameter optimization methods, including random search (Bengio, 2012), and Bayesian optimization (Yogatama and Smith, 2015;Bergstra et al., 2013). | these sophisticated search methods still require knowing which hyperparameters are worth exploring to begin with (and reasonable ranges for each). | contrasting |
train_18245 | The best performing strategy is to simply use many feature maps (here, 400) all with region size equal to 7, i.e., the single best region size. | we note that in some cases (e.g., for the TREC dataset), using multiple different, but nearoptimal, region sizes performs best. | contrasting |
train_18246 | The 'best' number of feature maps for each filter region size depends on the dataset. | it would seem that increasing the number of maps beyond 600 yields at best very marginal returns, and often hurts performance (likely due to overfitting). | contrasting |
train_18247 | In the baseline configuration, we performed 1max pooling globally over feature maps, inducing a feature vector of length 1 for each filter. | pooling may also be performed over small equal sized local regions rather than over the entire feature map (Boureau et al., 2011). | contrasting |
train_18248 | 4) beddings helps to prevent overfitting (compared to bag of words based encodings). | we are not advocating completely foregoing regularization. | contrasting |
train_18249 | More precisely, i corresponds in this case to an entry and j and k to two of its neighbors such that rank(j) > rank(k). | the method of Liu et al. | contrasting |
train_18250 | The result of the process described in the previous section is what we could call a knowledge-boosted distributional thesaurus. | its form is not different from a classical distributional thesaurus and it can be embedded similarly by applying the method of Section 2. | contrasting |
train_18251 | For instance, word senses that are underrepresented or absent in the training corpus will not be assigned a functional embedding. | due to the ability of these models to process large amounts of data, well-represented word senses will acquire meaningful representations. | contrasting |
train_18252 | A characteristic of this approach is that these models can generate embeddings for a complete inventory of word senses. | the dependence on manually crafted resources can potentially lead to incompleteness, in case of unlisted word senses, or to inflexibility in the face of changes in meaning, failing to account for new meanings of a word. | contrasting |
train_18253 | It can be noted that Moses is much more conservative than Nematus and simply tends to copy the original as the output ("Identical" cases). | as the majority (57%) of aligned sentences in the professional Newsela simplifications are edited, we do not consider copying a valid "simplification" in most cases. | contrasting |
train_18254 | We have a small set of discourse connective templates for each one of the 4 class-level PDTB relations (for example, "m i . | m j " is one of the templates for the comparison relation), and we know the relation between the message and the previous message. | contrasting |
train_18255 | In extractive approaches, which are usually applicable in text and ontology summarization (Jones, 2007) (Zhang et al., 2007), a set of features is extracted directly from the input data. | in non-extractive methods, which generally are employed in graph (Navlakha et al., 2008) and database (Bu et al., 2005) summarization, new sentences from the input data are generated (Hahn and Mani, 2000) to form a summary. | contrasting |
train_18256 | It is considered preparing tailored data for motion recognition for each kind of procedure execution videos have high cost because it is often vague even for human annotators to assign every concrete motions into text-level motion categories. | objects directly appear in texts and there is much less ambiguity than motions. | contrasting |
train_18257 | The model from Section 4.3 gives equal importance to the similarity scores from all Wikipedia articles. | it's more intuitive for more relevant articles to have more importance. | contrasting |
train_18258 | It is not appropriate to treat identifying the triggers that contains multiple words as a word classification task, because most of the triggers of multiple words contain prepositions. | the prepositions in such triggers do not trigger event independently. | contrasting |
train_18259 | When generating the unaligned target words , our model only depends on the words previously generated without considering the context of source side. | we redesign the objective function so as to emphasize the partially aligned parts in addition to maximizing the log-likelihood of the target sentence. | contrasting |
train_18260 | It indicates that conventional NMT system cannot learn a good model on the small-scale datasets. | when fine-tuning our partially aligned model with this small parallel corpus, we can get a surprising improvement. | contrasting |
train_18261 | When using more than 60K sentence pairs, we still get a relatively high promotion of translation quality. | the promotion is not very remarkable as Row1-3 reveal in Table 4. | contrasting |
train_18262 | Above methods are designed for different scenarios, and their work can achieve great results on these scenarios. | when in the scenario we propose in this work, that is we only have monolingual sentences and some phrase pairs, their methods are hard to be utilized to train an NMT model. | contrasting |
train_18263 | This architecture has been applied in many applications such as machine translation (Sutskever et al., 2014;Cho et al., 2014b), image captioning (Karpathy and Fei-Fei, 2015), and so on. | such architecture encounters difficulties, especially for coping with long sequences. | contrasting |
train_18264 | Specifically, when the source and the target languages have different sentence structures and the last part of the target sequence may depend on the first part of the source sequence. | although the global attention mechanism has often improved performance in some tasks, it is very computationally expensive. | contrasting |
train_18265 | (Chorowski et al., 2014) also proposed a soft constraint to encourage monotonicity by invoking a penalty based on the current alignment and previous alignments. | the methods still did not guarantee a monotonicity movement of the attention. | contrasting |
train_18266 | Partly inspired by the P&P framework, we use a sequence of binary variables as the latent representation of a language. | there are non-negligible differences between P&P and ours, which are discussed in Section S.2 of the supplementary material. | contrasting |
train_18267 | If a given value, z * ,k , has a relatively large V (z * ,k ), then setting a large value for v k enables it to appropriate fractions of the mass from its weaker rivals. | if too large a value is set for v k , then it will be overwhelmed by its stronger rivals. | contrasting |
train_18268 | It will increase with a rise in the number of phylogenetic neighbors that assume value b. | this probability depends not only on the phy- logenetic neighbors of language l, but it also depends on its spatial neighbors and on universality. | contrasting |
train_18269 | Due to the high ratio of missing values, the model might have overfitted the data with larger K. The fact that SYN outperformed Surface-DIA suggests that inter-feature dependencies have more predictive power than inter-language dependencies in the dataset. | they are complimentary in nature as SYNDIA outperformed SYN. | contrasting |
train_18270 | In the interests of feasible multi-reference evaluation, we pose question and response generation as two separate tasks. | all the models presented in this paper can be fed with their own generated question to generate a response. | contrasting |
train_18271 | Additionally, the dataset did not assign an entity iden- tifier to a pronoun. | as our dataset has access to the manual annotations of coreferences, we are able to investigate the ability of the language model to capture meanings from contexts. | contrasting |
train_18272 | Dynamic updating could be applied to words in all lexical categories, including verbs, adjectives, and nouns without requiring additional extensions. | verbs and adjectives were excluded from targets of dynamic updates in the experiments, for two reasons. | contrasting |
train_18273 | Only inter-sentential implicit relations are annotated in the PDTB, due to time and resource constraints (Prasad et al., 2008). | this does not mean that implicit relations only hold between consecutive sentences. | contrasting |
train_18274 | Both approaches allow the sentiment portion of training and testing data to be in the same vector space. | many languages have no MT system, and it is extremely expensive to create one on a language-by-language basis. | contrasting |
train_18275 | Though POS-tag information can generate dependency relations, we use the PTB3 data to pre-train the bottom level models, where noise may weaken the advantages. | the dependency model contains more detailed information, and is useful for PTB-like formal data. | contrasting |
train_18276 | In the ontology graph construction process, we keep adding unexplored vertices to the vertex set, Domain Features Nodes Edges Automobile 132 114 778 Camera 986 979 1280 Kitchen 767 670 10629 Software 150 135 842 Table 3: Ontology-graph Statistics as long as, there is at least one edge between the corresponding concept to an existing vertex in the vertex-set, of one of the types functional, hierarchical or synonymous. | we restrict to adding vertices such that the maximum distance between the seed word and the newly added concept remains less than a given threshold n. We empirically fix n = 4, which practically provides a sufficiently large number of concepts that are realistically related to the concept of the seed word. | contrasting |
train_18277 | We further compare our work against the reported approach for the same task by (Mukherjee and Joshi, 2013) which also uses ConceptNet, and has an approach similar to ours. | as mentioned earlier, they consider ontology as a tree while we construct a graph. | contrasting |
train_18278 | Inspired by the models above, the goal of this research is to build a model for exploiting syntax, semantic, sentiment and context of tweets by constructing four kinds of embeddings: Char-AVs, LexW2Vs, ContinuousW2Vs and Depen-dencyW2Vs. | we modify Bi-GRNN of (Chung et al., 2014) into Bi-CGRNN to take word embeddings in order to produce a sentence-wide representation from sentence compositions. | contrasting |
train_18279 | The Stanford test set is small. | it has been widely used in different evaluation tasks (Go et al., 2009) (Dos Santos and Gatti, 2014). | contrasting |
train_18280 | Therefore, the emoticons would be useful when classifying test data by using deep learning model. | our preprocessing steps are different from (Go et al., 2009), they remove the emoticons out from their training datasets because they revealed that the training process makes the usage of emoticons as noisy labels and if they consider the emoticons, there is a negative impact on classification accuracy. | contrasting |
train_18281 | We use GRUs for our model because GRUs are quite new and their tradeoffs have not been fully explored yet. | gRUs have fewer parameters (U and W are smaller) and thus may train a bit faster or need less data to generalize. | contrasting |
train_18282 | These experiments show that CharAVs and LexW2Vs achieve good performances and contribute in enhancing information for words. | the experiments indicate that (Go et al., 2009) 83.0 -NB (Go et al., 2009) 82.7 -SVM (Go et al., 2009) 82. | contrasting |
train_18283 | Lastly, while the advertisers can provide prior knowledge for the class they want to target, they cannot accurately specify the irrelevant (or negative) category because it likely covers broad topics in the wild. | to train a classifier, usually labeled instances for each class are required. | contrasting |
train_18284 | Long short-term memories ("LSTMs": Hochreiter and Schmidhuber (1997)), a particular variant of RNN, have become particularly popular, and been successfully applied to a large number of tasks: speech recognition (Graves et al., 2013), sequence tagging (Huang et al., 2015), document categorisation (Yang et al., 2016), and machine translation . | as pointed out by and Linzen et al. | contrasting |
train_18285 | It seems to work very well with English, where it improves performance substantially even though this improvement is not specially significant. | the LM does not improve the performance at all in Spanish. | contrasting |
train_18286 | the based log-likelihood model might favors precision or recall. | from the analysis of the biases, we found no obvious trends favoring precision or recall. | contrasting |
train_18287 | , y n , functioning as the decoder. | rNNs struggle to train on long term dependencies sufficiently; and therefore, Long Short Term Memory Models (LSTM) and Gated recurrent Units (GrUs) are more common for such sequence to sequence learning. | contrasting |
train_18288 | They outperform statistical, heuristic, neural single instance and mixture of experts ensemble models over multiple datasets. | these ensemble models are unable to capture the stringent rules and restrictions that disallow certain character combinations like bxy, ii, gls. | contrasting |
train_18289 | This task typically makes use of stylometric cues at the surface lexical and syntactic level (Stamatatos et al., 2015), although Feng and Hirst (2014) and Feng (2015) go beyond the sentence level, showing that discourse information can help. | they achieve limited performance gains and lack an in-depth analysis of discourse featurization techniques. | contrasting |
train_18290 | Ji and Smith (2017) propose an advanced Recursive Neural Network (RecNN) architecture to work with RST in the more general area of text categorization and present impressive results. | we suspect that the massive number of parameters of RecNNs would likely cause overfitting when working with smaller datasets, as is often the case in AA tasks. | contrasting |
train_18291 | The discourse embedding features, on the other hand, manage to increase the F1 score by a noticeable amount, with the maximal improvement seen in the CNN2-DE (global) model with RST features (by 2.6 points). | the discourse-enhanced SVM2-PVs increase F1 by about 1 point, with overall much lower scores in comparison to the CNNs. | contrasting |
train_18292 | We speculate that although the document must be a certain length for discourse to "kick in", these features are effective even with few training examples. | inspecting the gradients of the character bigrams for these cases reveals a higher incidence of 0s, suggesting the bigram feature is not as robust in the smaller sample space. | contrasting |
train_18293 | In our case, this sharing is between two models: On one hand, a standard Sequence-to-Sequence conversational models is trained to predict the current response given the previous context. | using the non-conversational data, we introduce an autoencoder multi-task learning strategy that predicts the response given the same sequence, but with the target parameters tied with the general conversational model. | contrasting |
train_18294 | Our multi-task approaches consistently outperform baseline on perplexity. | the performance between individual target users can vary substantially. | contrasting |
train_18295 | Previous work has explored modeling semantic relationships between messages using a predefined list of technical words (Elsner and Charniak, 2010) and applying Latent Dirichlet Allocation (Adams and Martel, 2010). | we apply the research done by Lowe et al. | contrasting |
train_18296 | The relatively high number of children suggests that typically a message receives more than one reply. | the lower number of direct parents suggests that a message is typically replying to a single parent message. | contrasting |
train_18297 | (2009) who propose using NPIs for shifter extraction. | 10 our work substantially extends that previous work. | contrasting |
train_18298 | Another line of work is based on a purely unsupervised learning method, denoising autoencoders, where the hidden layers in multi-layer neural networks are believed to be robust against domain shift (Glorot et al., 2011;Chen et al., 2012;Zhou et al., 2016). | all these methods are still based on traditional discrete representations, and the shared representations are learned separately from the final classifier and therefore not directly related to sentiment classification. | contrasting |
train_18299 | As shown in Table 2, HNN only correctly predicts the sentiment of the first document but gives wrong predictions on another two documents, since worth watching only occurs once in the source Book domain. | our model DSR can make correct predictions for all of them. | contrasting |
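
Rows with this schema can be consumed programmatically. Below is a minimal Python sketch, assuming the data is published as a Hugging Face `datasets` repository; the repository id `user/discourse-relation-pairs` is a placeholder assumption, not this dataset's actual identifier.

```python
# Minimal sketch: load a pairwise discourse-relation dataset with columns
# id / sentence1 / sentence2 / label and inspect its label distribution.
from collections import Counter

from datasets import load_dataset

# Hypothetical repository id; replace with the real repo id or a local path.
ds = load_dataset("user/discourse-relation-pairs", split="train")

# Count how often each of the 4 label classes occurs.
label_counts = Counter(ds["label"])
print(label_counts)

# Look at one "contrasting" example: sentence2 stands in contrast to sentence1.
example = next(row for row in ds if row["label"] == "contrasting")
print(example["id"])
print("S1:", example["sentence1"][:120], "...")
print("S2:", example["sentence2"][:120], "...")
```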