---
annotations_creators:
  - machine-generated
  - crowdsourced
  - found
language_creators:
  - machine-generated
  - crowdsourced
language:
  - en
license:
  - mit
multilinguality:
  - monolingual
size_categories:
  - 10K<n<100K
source_datasets:
  - original
  - extended|squad
  - extended|race
  - extended|newsqa
  - extended|qamr
  - extended|movieQA
task_categories:
  - text2text-generation
task_ids:
  - text-simplification
pretty_name: QA2D
---

# Dataset Card for QA2D

## Table of Contents

- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Additional Information](#additional-information)

## Dataset Description

### Dataset Summary

Existing datasets for natural language inference (NLI) have propelled research on language understanding. We propose a new method for automatically deriving NLI datasets from the growing abundance of large-scale question answering datasets. Our approach hinges on learning a sentence transformation model which converts question-answer pairs into their declarative forms. Despite being primarily trained on a single QA dataset, we show that it can be successfully applied to a variety of other QA resources. Using this system, we automatically derive a new freely available dataset of over 500k NLI examples (QA-NLI), and show that it exhibits a wide range of inference phenomena rarely seen in previous NLI datasets.

This Question to Declarative Sentence (QA2D) dataset contains 86k question-answer pairs and their manual transformation into declarative sentences. 95% of the question-answer pairs come from SQuAD (Rajpurkar et al., 2016) and the remaining 5% come from four other question answering datasets.
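
To make the transformation concrete, here is a minimal, hypothetical sketch of a rule-based question-to-declarative rewrite in Python. It is not the rule-based system used to produce this dataset's `rule-based` field; it only illustrates, under very simple assumptions, the kind of rewrite the annotators performed.

```python
import re

def naive_qa_to_declarative(question: str, answer: str) -> str:
    """Toy illustration of turning a QA pair into a declarative sentence.

    This is NOT the rule-based model used to build QA2D; it only handles
    the simplest "What/Who is ..." pattern to show the idea behind the task.
    """
    match = re.match(r"(?i)^(what|who)\s+(is|was|are|were)\s+(.+?)\s*\?$", question.strip())
    if match:
        verb, rest = match.group(2), match.group(3)
        return f"{rest} {verb} {answer} ."
    # Fallback: drop the question mark and append the answer.
    return f"{question.rstrip(' ?')} {answer} ."

print(naive_qa_to_declarative("what is the capital of France ?", "Paris"))
# -> "the capital of France is Paris ."
```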

### Supported Tasks and Leaderboards

[Needs More Information]

### Languages

The text in the dataset is in English (`en`).

## Dataset Structure

### Data Instances

See the field descriptions below; the loading example after the field list shows how to print a full instance.

### Data Fields

- `dataset`: lowercased name of the source dataset (`movieqa`, `newsqa`, `qamr`, `race`, `squad`)
- `example_uid`: unique id of the example within its source dataset (uids can repeat across datasets, so the combination of `dataset` + `example_uid` should be used for unique indexing)
- `question`: tokenized (space-separated) question from the source QA dataset
- `answer`: tokenized (space-separated) answer span from the source QA dataset
- `turker_answer`: tokenized (space-separated) answer sentence collected from MTurk
- `rule-based`: tokenized (space-separated) answer sentence generated by the rule-based model
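
A quick way to look at these fields is to load the dataset with the `datasets` library. The sketch below assumes the dataset is published on the Hugging Face Hub under the identifier shown, which is an assumption; substitute the actual repository id of this dataset if it differs.

```python
from datasets import load_dataset

# NOTE: the repository identifier below is an assumption; replace it with
# the actual namespace/name under which this dataset card is hosted.
qa2d = load_dataset("domenicrosati/QA2D")

# Show the available splits and their sizes.
print(qa2d)

# Inspect the fields of a single training instance.
example = qa2d["train"][0]
for field in ("dataset", "example_uid", "question", "answer", "turker_answer", "rule-based"):
    print(f"{field}: {example[field]}")
```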

### Data Splits

| Dataset Split | Number of Instances in Split |
| ------------- | ---------------------------- |
| Train         | 60,710                       |
| Dev           | 10,344                       |

## Dataset Creation

### Curation Rationale

This Question to Declarative Sentence (QA2D) dataset contains 86k question-answer pairs and their manual transformation into declarative sentences. 95% of the question-answer pairs come from SQuAD (Rajpurkar et al., 2016) and the remaining 5% come from four other question answering datasets.

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

[Needs More Information]

## Considerations for Using the Data

### Social Impact of Dataset

[Needs More Information]

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[Needs More Information]

### Licensing Information

MIT, per the `license` tag in the dataset metadata above.

### Citation Information

    @article{DBLP:journals/corr/abs-1809-02922,
      author     = {Dorottya Demszky and Kelvin Guu and Percy Liang},
      title      = {Transforming Question Answering Datasets Into Natural Language Inference Datasets},
      journal    = {CoRR},
      volume     = {abs/1809.02922},
      year       = {2018},
      url        = {http://arxiv.org/abs/1809.02922},
      eprinttype = {arXiv},
      eprint     = {1809.02922},
      timestamp  = {Fri, 05 Oct 2018 11:34:52 +0200},
      biburl     = {https://dblp.org/rec/journals/corr/abs-1809-02922.bib},
      bibsource  = {dblp computer science bibliography, https://dblp.org}
    }