---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
- crowdsourced
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: CondaQA
size_categories:
- 10K<n<100K
---

#### Who are the annotators?

From the paper: "... >70% of the questions correctly for the next stage of the dataset construction task." We use the CrowdAQ platform for the qualification exam and Amazon Mechanical Turk for annotations.

### Personal and Sensitive Information

We expect that such information has already been redacted from Wikipedia.

## Considerations for Using the Data

### Social Impact of Dataset

A model that solves this dataset might be (mis-)represented as evidence that the model understands the entirety of the English language, and consequently be deployed where it will have an immediate and/or downstream impact on stakeholders.

### Discussion of Biases

We are not aware of societal biases that are exhibited in this dataset.

### Other Known Limitations

From the paper: "Though CONDAQA currently represents the largest NLU dataset that evaluates a model's ability to process the implications of negation statements, it is possible to construct a larger dataset, with more examples spanning different answer types. Further, CONDAQA is an English dataset, and it would be useful to extend our data collection procedures to build high-quality resources in other languages. Finally, while we attempt to extensively measure and control for artifacts in our dataset, it is possible that our dataset has hidden artifacts that we did not study."

## Additional Information

### Dataset Curators

From the paper: "In order to estimate human performance, and to construct a high-quality evaluation with fewer ambiguous examples, we have five verifiers provide answers for each question in the development and test sets." The first author manually checked the annotations throughout the entire data collection process, which took roughly seven months.

### Licensing Information

Apache License 2.0 (`apache-2.0`).

### Citation Information

```
@inproceedings{ravichander-et-al-2022-condaqa,
  title = {{CONDAQA}: A Contrastive Reading Comprehension Dataset for Reasoning about Negation},
  author = {Ravichander, Abhilasha and Gardner, Matt and Marasovi\'{c}, Ana},
  booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  year = {2022}
}
```
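### Loading the Dataset

A minimal sketch of loading CondaQA with the Hugging Face `datasets` library. The Hub repository identifier used below is an assumption (it is not stated in this card); substitute the actual path if it differs.

```python
from datasets import load_dataset

# "lasha-nlp/CONDAQA" is an assumed Hub repository id, not confirmed by this card.
condaqa = load_dataset("lasha-nlp/CONDAQA")

# Each example pairs a Wikipedia-derived passage containing a negation
# with a question probing the implications of that negation.
print(condaqa["train"][0])
```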