id (string, length 7–12) | sentence1 (string, length 6–1.27k) | sentence2 (string, length 6–926) | label (4 classes) |
---|---|---|---|
train_2000 | It has been shown that better alignment quality generally leads to better results (Ganchev et al., 2008). | the relationship between the word alignment quality and the results is not straightforward, and it was shown in (Vilar et al., 2006) that better alignments in terms of F-measure do not always lead to better translation quality. | contrasting |
train_2001 | The method based on scores showed a good performance for the Chinese-English language pair, but the performance for the English-French pair was similar to the MSD model. | the method based on context improves the results on both pairs. | contrasting |
train_2002 | English used to have a T/V distinction until the 18th century, using you as V and thou as T pronoun. | in contemporary English, you has taken over both uses, and the T/V distinction is not marked morphosyntactically any more. | contrasting |
train_2003 | This allows us to train and evaluate a monolingual English classifier for this phenomenon. | two problems arise on the way: German has three relevant personal pronouns: du, sie, and ihr. | contrasting |
train_2004 | On the German side, we assign the T/V labels to pronouns, and the most straightforward way of setting up annotation projection would be to label their word-aligned English pronouns as T/V. | pronouns are not necessarily translated into pronouns; additionally, we found word alignment accuracy for pronouns, as a function of word class, to be far from perfect. | contrasting |
train_2005 | We follow here the same general idea and aim, in a first step, at improving the comparability of a given corpus while preserving most of its vocabulary. | unlike the previous work, we show here that it is possible to guarantee a certain degree of homogeneity for the improved corpus, and that this homogeneity translates into a significant improvement of both the quality of the resulting corpora and the bilingual lexicons extracted. | contrasting |
train_2006 | Since the content relatedness in the comparable corpus is basically reflected by the relations between all the possible bilingual document pairs, we use here the number of document pairs to represent the scale of the comparable corpus. | the weight β can thus be defined as the proportion of possible document pairs in the current comparable corpus (C_1^e, C_2^f) to all the possible document pairs, where #_d(C) stands for the number of documents in C. this measure does not integrate the relative length of the French and English parts, which actually impacts the performance of bilingual lexicon extraction. | contrasting |
train_2007 | 6. Letters 'q', 'k' and 'c' are often mixed up with each other because they sound alike in English although they are apart on the keyboard. | the three letters are not connected in Fig. | contrasting |
train_2008 | Prior studies of peer review in the Natural Language Processing field have not focused on helpfulness prediction, but instead have been concerned with issues such as highlighting key sentences in papers (Sandor and Vorndran, 2009), detecting important feedback features in reviews (Cho, 2008), and adapting peer-review assignment (Garcia, 2010). | given some similarity between peer reviews and other review types, we hypothesize that techniques used to predict review helpfulness in other domains can also be applied to peer reviews. | contrasting |
train_2009 | Note that in isolation, MET (paper ratings) are not significantly correlated with peer-review helpfulness, which is different from prior findings of product reviews (Kim et al., 2006) where product scores are significantly correlated with product-review helpfulness. | when combined with other features, MET does appear to add value (last row). | contrasting |
train_2010 | Consider a contentious instance in a small dataset where 7 out of 15 Turkers (a minority) classified it as Error. | it might easily have happened that 8 Turkers (a majority) classified it as Error instead of 7. | contrasting |
train_2011 | While this example uses the same dataset for evaluating two systems, the procedure is general enough to allow two systems to be compared on two different datasets by simply examining the two plots. | two potential issues arise in that case. | contrasting |
train_2012 | The first is that the bin sizes will likely vary across the two plots. | this should not be a significant problem as long as the bins are sufficiently large. | contrasting |
train_2013 | A system using "social information" to find friend groups may work well in the latter case, but might not effectively suggest correct group members in the former case. | a system using "textual information" may be effective in the first case, but is probably weak in finding friends in the second case. | contrasting |
train_2014 | The cluster of an OOV word w can be defined as the cluster whose centroid is closest to the feature vector of w. The formerly removed high-frequency words are added as singleton clusters to produce a complete clustering. | ooV words can only be assigned to the original k-means clusters. | contrasting |
train_2015 | The c_1 model with θ = 1 is specialized for predicting words after unknown nouns and cardinal numbers and two thirds of the unknown words are of exactly that type. | with rising θ, other word classes get a higher influence and different probability distributions are superimposed. | contrasting |
train_2016 | The above grammar could capture inner-syllable dependencies. | the selection of the target characters also depend on the context. | contrasting |
train_2017 | These support the assumption that the context information are helpful to identify syllable equivalents. | the collocation grammars do not further improve performance. | contrasting |
train_2018 | Several results in the word segmentation literature suggest that description length provides a useful estimate of segmentation quality in fully unsupervised settings. | since the space of potential segmentations grows exponentially with the length of the corpus, no tractable algorithm follows directly from the Minimum Description Length (MDL) principle. | contrasting |
train_2019 | Of course, existing paraphrasing approaches do not explicitly account for redundancy, and hence this evaluation is not completely fair. | these findings suggest that redundancy may be an important issue to consider when developing and evaluating data-driven paraphrase approaches. | contrasting |
train_2020 | 3, assumes that their entailment probability is independent of the rest of the hypothesis. | when the number of covered hypothesis terms increases, the probability that the remaining terms are actually entailed by T increases too (even though we do not have supporting knowledge for their entailment). | contrasting |
train_2021 | The factor graph corresponding to this model is outlined in Figure 1b. | the fully supervised model might benefit from factors that directly connect the document variable, y_d, with the inputs s. As argued by Täckström and McDonald (2011), when only document-level supervision is available, the document variable, y_d, should be independent of the input, s, conditioned on the latent variables, y_s. | contrasting |
train_2022 | Esuli and Sebastiani (2006) used WordNet to determine polarities of words, which can include nouns. | dictionaries do not contain domain specific information. | contrasting |
train_2023 | It expresses a neutral feeling from the person. | it also implies a negative opinion about "hump," which indicates a product feature. | contrasting |
train_2024 | For example, for "voice quality", people can say "good voice quality" or "bad voice quality." | for features with context dependent opinions, people often have a fixed opinion, either positive or negative but not both. | contrasting |
train_2025 | Because the global hierarchical categorization can avoid the drawbacks about those high-level irrecoverable error, it is more popular in the machine learning domain. | the taxonomy is defined artificially and is usually very difficult to organize for large scale taxonomy. | contrasting |
train_2026 | There are several different theories about relative prominence assignment in noun-noun (henceforth, NN) compounds, such as the structural theory (Bloomfield, 1933;Marchand, 1969;Heinz, 2004), the analogical theory (Schmerling, 1971;Olsen, 2000), the semantic theory (Fudge, 1984;Liberman and Sproat, 1992) and the informativeness theory (Bolinger, 1972;Ladd, 1984). | in most studies, the different theories are examined and applied in isolation, thus making it difficult to compare them directly. | contrasting |
train_2027 | For n-gram models, Property 5 trivially holds since BO_{n−1}(w_1^{i−1}) and Φ_n(w_1^{i−1}) are defined as sets of sequences ending with w_{i−n+2}^{i−1} and w_{i−n+1}^{i−1}, with the former clearly being a superset of the latter. | when Φ can be arbitrary, e.g., a decision tree, that is not necessarily so. | contrasting |
train_2028 | Van den Bosch (2005) proposes a decision-tree classifier which has been applied to training datasets with more than 100M words. | his model is non-probabilistic and thus a standard comparison with probabilistic models in terms of perplexity isn't possible. | contrasting |
train_2029 | The most commonly used composition function adds the probabilities of the words in a sentence together, and then divides by the number of words in that sentence. | to reduce redundancy, once a sentence has been chosen for summary inclusion, the probability distribution is recalculated such that any word that appears in the chosen sentence has its probability diminished. | contrasting |
train_2030 | One approach that could achieve this would be to build separate stopword lists for specific domains, and there are approaches to automatically build such lists (Lo et al., 2005). | a list-based approach cannot take context into account and therefore, among other things, will encounter problems with polysemy and synonymy. | contrasting |
train_2031 | Like (Haghighi and Vanderwende, 2009), (Daumé and Marcu, 2006), and (Barzilay and Lee, 2004), we model words as being generated from latent distributions. | instead of background, content, and document-specific distributions, we model all words in a document set as being there for one of only two purposes: a semantic (content) purpose, or a syntactic (functional) purpose. | contrasting |
train_2032 | It is natural to select the explicit comparative sentences as comparative summary, because they express comparison explicitly in good qualities. | they do not appear frequently in regular news articles so that the coverage is limited. | contrasting |
train_2033 | frequency, consistency, and variation. | linguistic and psychological studies (cited above) show that such phenomena are indeed worth modelling in an NLG system. | contrasting |
train_2034 | the corpus structure to represent the content of the summaries. | the Regional portion of the dataset seems to contribute a significant amount of noise to the hierarchy, leading to a loss in performance for those models. | contrasting |
train_2035 | Thus, pruning constituents in lower cells directly affects the overall efficiency of parsing. | with the grammar loop method there is a constant number of grammar access operations (i.e., the number of grammar rules) and the number of active states in each child cell has no impact on efficiency. | contrasting |
train_2036 | According to the criteria used in Zhu and Zhu (2010), any CTB-style constituents with "认为" being the left boundary are thought to be inconsistent with the bracketing structure of the TCT-style parse and will be pruned. | if we prune such "inconsistent" constituents, the correct conversion result (right side of Fig. | contrasting |
train_2037 | Moreover, basic tie-breaking variants and lexical augmentation are insufficient to achieve competitive accuracies. | SDP is dramatically improved in both speed and accuracy when a simple, unlexicalized PCFG is used for coarse-to-fine pruning (and tie-breaking). | contrasting |
train_2038 | Bansal and Klein (2010) use a carefully parameterized weighting of the substructures in their grammar in an effort to extend the original DOP1 model (Bod, 1993;Goodman, 1996a). | for SDP, the grammar is even simpler (Goodman, 2003). | contrasting |
train_2039 | Their results hold up without pruning: the results of the unpruned version are only around 0.5% less (in parsing F1) than the results achieved with pruning (see Table 1). | in the case of our shortest-derivation parser, the coarse-pass is essential for high accuracies (and for speed and memory, as always). | contrasting |
train_2040 | These methods are effective because they tune the system to maximize an automatic evaluation metric such as BLEU, which serve as surrogate objective for translation quality. | we know that a single metric such as BLEU is not enough. | contrasting |
train_2041 | (2010) who applied multi-task learning for improved generalization in n-best reranking. | to our work, Duh et al. | contrasting |
train_2042 | and tuning the full set of 180,000 features are not significant. | scaling all features to the full training set shows significant improvements for algorithm 3, and especially for algorithm 4, which gains 0.8 BLEU points over tuning 12 features on the development set. | contrasting |
train_2043 | Any remaining errors are typically corrected manually. | it may be more useful to give users more control during the input stage, instead of having a post-processing step for error correction. | contrasting |
train_2044 | The role of the haptic model and PLI model will be described in the following sub-sections. | similar to having an acoustic model as a statistical representation of the phoneme sequence generating the observed acoustic features, a haptic model is used to model the PLI sequence generating the observed haptic inputs, H. The haptic likelihood can be factorised over the individual inputs, where it is also possible to have a non-diagonal matrix for p(h_i | l_i) in order to accommodate typing errors, so that non-zero probabilities are assigned to mismatching cases. For handwriting input, h_i denotes a sequence of 2-dimensional feature vectors, which can be modelled using Hidden Markov Models (HMMs) (Rabiner, 1989). | contrasting |
train_2045 | If M = N, the PLI model likelihood, P(L|W), can be expressed as a product of per-word terms, where P(l_i | w_i) is the likelihood of the ith word, w_i, generating the ith PLI, l_i. | since each word is represented by a unique PLI (the initial letter) in this work, the PLI model score follows directly. If N ≠ M, insertions and deletions have to be taken into consideration, with a designated symbol representing an empty token. | contrasting |
train_2046 | This is not surprising since key taps are much quicker to generate compared to handwriting gestures. | the individual speech and letter input speed are faster for asynchronous mode because users do not need to multi-task. | contrasting |
train_2047 | Therefore, in clean condition, the acoustic models are able to recover some of the errors introduced by the handwriting recognizer, bringing the LER down to as low as 0.3%. | in noisy conditions, the LER performance is similar to those using keyboard input. | contrasting |
train_2048 | The approach used in this paper is to build a standard FST for the current examination topic. | the annotation of the corpus is necessary before the building. | contrasting |
train_2049 | We used the Wordnet::Similarity software package (Pedersen et al., 2004) to calculate the similarity between every two words at first. | the performance's reduction of the AES system indicates that the similarity is not good enough to extend the FST model. | contrasting |
train_2050 | As a usual best feature for AES, the length shows its outstanding performance in CRR transcription. | it fails in the ASR transcription. | contrasting |
train_2051 | In regular text essay scoring, the BOW algorithm can have excellent performance. | in certain situations, such as towards ASR transcription of oral English speech, its weakness of sequence neglect will be magnified, leading to drastic decline of performance. | contrasting |
train_2052 | Turning to the hybrid condition, the performance of Full features is surprisingly good, probably because we have more available training data than the other two conditions. | with contextual features removed, our features perform quite similarly to those of Hernault et al. | contrasting |
train_2053 | Topic segmentation approaches range from simple heuristic methods based on lexical similarity (Morris and Hirst, 1991;Hearst, 1997) to more intricate generative models and supervised methods (Georgescul et al., 2006;Purver et al., 2006;Gruber et al., 2007;Eisenstein and Barzilay, 2008), which have been shown to outperform the established heuristics. | previous computational work on conversational structure, particularly in topic discovery and topic segmentation, focuses primarily on content, ignoring the speakers. | contrasting |
train_2054 | For example: models having sticky topics over ngrams (Johnson, 2010), sticky HDP-HMM (Fox et al., 2008); models that are an amalgam of sequential models and topic models (Griffiths et al., 2005;Wallach, 2006;Gruber et al., 2007;Ahmed and Xing, 2008;Boyd-Graber and Blei, 2008;Du et al., 2010); or explicit models of time or other relevant features as a distinct latent variable (Wang and McCallum, 2006;Eisenstein et al., 2010). | sITs jointly models topic and individuals' tendency to control a conversation. | contrasting |
train_2055 | As shown in column Filled our approach returns less triples than other systems, explaining low recall. | our system achieves the highest precision for the complete task of temporally anchored relation extraction. | contrasting |
train_2056 | We note that this is not always the case: for example, the entailment graph in Figure 2 is not an FRG, because 'X annex Y' entails both 'Y be part of X' and 'X invade Y', while the latter two do not entail one another. | we hypothesize that this scenario is rather uncommon. | contrasting |
train_2057 | To conclude, TNF learns transitive entailment graphs of good quality much faster than Exactgraph. | our experiment utilized an available data set of moderate size; we expect TNF to scale to large data sets (that are currently unavailable), where other baselines would be impractical. | contrasting |
train_2058 | Similar to these methods, our algorithm capitalizes on surface linguistic cues to learn preconditions from text. | our only source of supervision is the feedback provided by the planning task which utilizes the predictions. | contrasting |
train_2059 | Like us, Ozbal and colleagues use both a textual model and a visual model (as well as Google adjective-noun cooccurrence counts) to find the typical color of an object. | their visual model works by analyzing pictures associated with an object, and determining the color of the object directly by image analysis. | contrasting |
train_2060 | It has also been suggested that this setting requires morphological generation because the bitext may not contain all inflected variants (Minkov et al., 2007;Toutanova et al., 2008;Fraser et al., 2012). | using lexical coverage experiments, we show that there is ample room for translation quality improvements through better selection of forms that already exist in the translation model. | contrasting |
train_2061 | The ATB does not contain animacy annotations, so our agreement model cannot discriminate between these two cases. | alkuhlani and Habash (2011) have recently started annotating the ATB for animacy, and our model could benefit as more data is released. | contrasting |
train_2062 | Our work uses the same training criterion and is based on the same generative story. | we use a new training procedure whose critical parts have constant time and memory complexity with respect to the vocabulary size so that our methods can scale to much larger vocabulary sizes while also being faster. | contrasting |
train_2063 | We use a greedy method similar to (Koehn and Knight, 2002) for extending a given lexicon, and we implicitly also use the frequency as a feature. | we perform fully unsupervised training and do not start with a seed lexicon or use linguistic features. | contrasting |
train_2064 | This defines a diagonal beam when visualizing the lexicon entries in a matrix where both source and target words are sorted by their frequency rank. | note that in the resulting visualization, some marks represent word pairs (e, f) for which e is a translation candidate of f, while dots represent word pairs (e, f) for which this is not the case. | contrasting |
train_2065 | In the formalism presented above, this means that each e_i must be included in at most one span, and for each span u = v. Traditionally, these models are run in both directions and combined using heuristics to create many-to-many alignments (Koehn et al., 2003). | in order for one-to-many alignment methods to be effective, each f_j must contain enough information to allow for effective alignment with its corresponding elements in e_1^I. | contrasting |
train_2066 | While this is often the case in word-based models, for character-based models this assumption breaks down, as there is often no clear correspondence between characters. | in recent years, there have been advances in many-to-many alignment techniques that are able to align multi-element chunks on both sides of the translation (Marcu and Wong, 2002;DeNero et al., 2008;Blunsom et al., 2009). | contrasting |
train_2067 | Discriminative models, which directly distinguish correct from incorrect hypothesis, are particularly attractive because they allow the inclusion of arbitrary features (Kuo et al., 2002;Roark et al., 2007;Collins et al., 2005); these models with syntactic information have obtained state of the art results. | both generative and discriminative LMs with long-span dependencies can be slow, for they often cannot work directly with lattices and require rescoring large N -best lists (Khudanpur and Wu, 2000;Collins et al., 2005;Kuo et al., 2009). | contrasting |
train_2068 | In other words, they search in the joint space of word sequences present in the lattice and their syntactic analyses; they are not guaranteed to produce a syntactic analysis for all hypotheses. | substructure sharing is a general purpose method that we have applied to two different algorithms. | contrasting |
train_2069 | The lattice parser therefore, is itself a language model. | our tools are completely separated from the ASR system, which allows the system to create whatever features are needed. | contrasting |
train_2070 | (2009) use a real-valued representation for vowels (formant values), but assume no variability in consonants, and treat each word token independently. | our model uses a symbolic representation for sounds, but models variability in all segment types and incorporates a bigram word-level language model. | contrasting |
train_2071 | Several other related systems work directly from the acoustic signal and many of these do use naturalistic corpora. | they do not learn at both the lexical and phonetic/acoustic level. | contrasting |
train_2072 | Firstly, we show that pregrouping multiword expressions before parsing with a state-of-the-art recognizer improves multiword recognition accuracy and unlabeled attachment score. | it has no statistically significant impact in terms of F-score as incorrect multiword expression recognition has important side effects on parsing. | contrasting |
train_2073 | It indicates that both discriminative strategies are of interest in locating multiword adjectives, determiners and prepositions; the pre-grouping method appears to be particularly relevant for multiword nouns and adverbs. | it performs very poorly in multiword verb recognition. | contrasting |
train_2074 | This method has the potential to be very fast. | because the performance of this method is restricted to the K-best list, we may have to set K to a high number in order to find the best parsing tree (with DLM) or a tree acceptably close to the best (Shen et al., 2008). | contrasting |
train_2075 | On one hand, the heterogeneous solver provides structural information, which is the basis to construct the sub-word sequence. | this tagger provides additional POS information, which is helpful for disambiguation. | contrasting |
train_2076 | The stacking models can be viewed as data-driven annotation converting models. | they are not trained on "real" labeled samples. | contrasting |
train_2077 | From this table, we can see that words with low frequency, especially the out-of-vocabulary (OOV) words, are hard to label. | when a word is very frequently used, its behavior is very complicated and therefore hard to predict. | contrasting |
train_2078 | We choose to work with these two algorithms considering their prior success in other NLP applications. | we expect that our approach can function with other clustering algorithms. | contrasting |
train_2079 | A natural strategy for extending current experiments is to include both clustering results together, or to include more than one cluster granularity. | we find no further improvement. | contrasting |
train_2080 | However, a disadvantage of the perceptron style systems is that they can not provide probabilistic information. | new word detection is also one of the important problems in Chinese information processing. | contrasting |
train_2081 | There were studies trying to solve this problem jointly with CWS. | the current studies are limited. | contrasting |
train_2082 | As we can see, the new features improved performance on both word segmentation and new word detection. | we also noticed that the training cost became more expensive via adding high dimensional new features. | contrasting |
train_2083 | On one hand, this is generally in contrast with standard text categorization tasks, for which n-gram models show accuracy comparable to the simpler BOW. | it simply confirms that verb classification requires the dependency information between words (i.e., at least the sequential structure information provided by SK). | contrasting |
train_2084 | Second, SK is 2.56 percent points below the stateof-the-art achieved in (Brown et al., 2011) (BR), i.e, 82.08 vs. 84.64. | sTK applied to our representation (CT, GRCT and LCT) produces comparable accuracy, e.g., 84.83, confirming that syntactic representation is needed to reach the state-of-the-art. | contrasting |
train_2085 | Overall, IR systems can potentially benefit from the correct meanings of words provided by WSD systems. | in previous investigations of the usage of WSD in IR, different researchers arrived at conflicting observations and conclusions. | contrasting |
train_2086 | Similarly, Harmeling (2009) suggested a heuristic set of 28 transformations, which include various types of node-substitutions as well as restructuring of the entire parse-tree. | to such predefined sets of transformations, knowledge oriented approaches were sug-gested by Bar-Haim et al. | contrasting |
train_2087 | In addition, they used knowledge-based lexical substitutions. | when only knowledge-based transformations are allowed, transforming the text into the hypothesis is impossible in many cases. | contrasting |
train_2088 | The results, presented in Table 6, show that by avoiding any type of lookahead one can achieve fast runtime, while compromising proof quality. | both exhaustive and local lookahead yield better proofs and accuracy, while local lookahead is more than 4 times faster than exhaustive lookahead. | contrasting |
train_2089 | (2009) improved a syntactic SMT system by adding as many as ten thousand syntactic features, and used Margin Infused Relaxed Algorithm (MIRA) to train the feature weights. | the number of parameters in common phrase and lexicon translation models is much larger. | contrasting |
train_2090 | To prevent overfitting, the statistics of phrase pairs from a particular sentence was excluded from the phrase table when aligning that sentence. | as pointed out by Liang et al (2006), the same problem as in the bold updating existed, i.e., forced alignment between a source sentence and its reference translation was tricky, and the proposed alignment was likely to be unreliable. | contrasting |
train_2091 | Projection step: In this step, we compute the update. This moves θ in the direction of steepest descent (∇F) with step size s, and then the function [·]_Δ projects the resulting point onto the simplex; that is, it finds the nearest point that satisfies the constraints (8). | to compute the gradient ∇F(θ_k), similar to Schoenemann (2011b), we use an O(n log n) algorithm for the projection step due to Duchi et al. | contrasting |
train_2092 | The fact that we had to use hand-aligned data to tune the hyperparameters α and β means that our method is no longer completely unsupervised. | our observation is that alignment accuracy is actually fairly robust to the choice of these hyperparameters, as shown in Table 2. | contrasting |
train_2093 | We find that TME can discover and cluster many correct Cexpressions, e.g., "great review", "review helped me" in Thumbs-up; "poor review", "very unfair review" in Thumbs-down; "how do I", "help me decide" in Question; "good reply", "thank you for clarifying" in Answer Acknowledgement; "I disagree", "I refute" in Disagreement; and "I agree", "true in fact" in Agreement. | with the guidance of Max-Ent priors, ME-TME did much better (Table 2). | contrasting |
train_2094 | Language understanding has been well studied in the context of question/answering (Harabagiu and Hickl, 2006;Liang et al., 2011), entailment (Sammons et al., 2010), summarization (Hovy et al., 2005;Daumé-III and Marcu, 2006), spoken language understanding (Tur and Mori, 2011;Dinarelli et al., 2009), query understanding (Popescu et al., 2010;Li, 2010;Reisinger and Pasca, 2011), etc. | data sources in VPA systems pose new challenges, such as variability and ambiguities in natural language, or short utterances that rarely contain contextual information, etc. | contrasting |
train_2095 | Their discriminative approach represents semantic slots and discourse-level utterance labels (domain or dialog act) in a single structure to encode dependencies. | their model requires fully labeled utterances for training, which can be time consuming and expensive to generate for dynamic systems. | contrasting |
train_2096 | Topic models are often evaluated quantitatively using perplexity and likelihood on held-out test data (Blei et al., 2003). | perplexity does not reflect our purpose since our aim is not to predict whether an unseen document is likely to be a review of some particular aspect. | contrasting |
train_2097 | Most information extraction (IE) systems identify facts that are explicitly stated in text. | in natural language, some facts are implicit, and identifying them requires "reading between the lines". | contrasting |
train_2098 | ", standard IE systems cannot identify the answer since citizenship is not explicitly stated in the text. | a human reader possesses the commonsense knowledge that the president of a country is almost always a citizen of that country, and easily infers the correct answer. | contrasting |
train_2099 | In our task, the supervised training data consists of facts that are extracted from the natural language text. | we usually do not have evidence for inferred facts as well as noisy-or nodes. | contrasting |
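
All rows shown above carry the `contrasting` label; the `label` column has four classes in total. Below is a minimal sketch for loading and filtering this split with the Hugging Face `datasets` library — the Hub identifier `org/dataset-name` is a placeholder, so substitute the actual path of this dataset:

```python
# Minimal sketch: load the train split and inspect a few contrasting pairs.
# "org/dataset-name" is a hypothetical identifier, not the real Hub path.
from datasets import load_dataset

ds = load_dataset("org/dataset-name", split="train")

# Each row carries the four columns from the table above:
# id, sentence1, sentence2, and label (one of four classes).
contrasting = ds.filter(lambda row: row["label"] == "contrasting")

for row in contrasting.select(range(3)):
    print(f'{row["id"]} [{row["label"]}]')
    print("  s1:", row["sentence1"])
    print("  s2:", row["sentence2"])
```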