---
dataset_info:
  features:
    - name: question
      dtype: string
    - name: context
      dtype: string
    - name: id
      dtype: string
    - name: title
      dtype: string
    - name: answers
      struct:
        - name: answer_start
          sequence: int64
        - name: text
          sequence: string
  splits:
    - name: train
      num_bytes: 126953808
      num_examples: 130319
    - name: dev
      num_bytes: 10690828
      num_examples: 10174
    - name: test
      num_bytes: 1779073
      num_examples: 1699
  download_size: 18660248
  dataset_size: 139423709
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: dev
        path: data/dev-*
      - split: test
        path: data/test-*
license: cc-by-sa-4.0
task_categories:
  - question-answering
language:
  - nl
pretty_name: SQuAD-NL v2.0
---

# SQuAD-NL v2.0 [translated SQuAD / XQuAD]

SQuAD-NL v2.0 is a Dutch translation of the Stanford Question Answering Dataset (SQuAD) v2.0.

Since the original English SQuAD test data is not public, we reserve for testing the same documents that were used for XQuAD; these documents are sampled from the original dev split. The English data was automatically translated with Google Translate (February 2023), and the test data was additionally post-edited by hand.

This version of SQuAD-NL also contains unanswerable questions. If you only want answerable questions, use SQuAD-NL v1.1.
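A minimal loading sketch with the `datasets` library. The Hub repo id `wietsedv/squad-nl-v2.0` is an assumption here; substitute the actual path of this repository.

```python
from datasets import load_dataset

# Repo id is assumed; point this at the dataset's actual Hub path.
ds = load_dataset("wietsedv/squad-nl-v2.0")

# Expected sizes (see the table below): train 130,319 / dev 10,174 / test 1,699.
print({split: ds[split].num_rows for split in ds})

# In the SQuAD v2.0 format, an unanswerable question has an empty answers.text
# list, so filtering on its length separates the two question types.
answerable = ds["train"].filter(lambda ex: len(ex["answers"]["text"]) > 0)
unanswerable = ds["train"].filter(lambda ex: len(ex["answers"]["text"]) == 0)
print(ds["train"].num_rows, "=", answerable.num_rows, "+", unanswerable.num_rows)
```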

| Split | Source | Procedure | English | Dutch |
|-------|--------|-----------|---------|-------|
| train | SQuAD-train-v2.0 | Google Translate | 130,319 | 130,319 |
| dev | SQuAD-dev-v2.0 \ XQuAD | Google Translate | 10,174 | 10,174 |
| test | SQuAD-dev-v2.0 & XQuAD | Google Translate + Human | 1,699 | 1,699 |
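The configs in the metadata above expose each split as Parquet shards under `data/`, so the data can also be read directly with pandas. A sketch, again assuming the `wietsedv/squad-nl-v2.0` repo id:

```python
import pandas as pd
from huggingface_hub import HfFileSystem

# Glob the Parquet shards declared in the configs (repo id is an assumption).
fs = HfFileSystem()
shards = fs.glob("datasets/wietsedv/squad-nl-v2.0/data/test-*")
df = pd.concat(pd.read_parquet(fs.open(path)) for path in shards)
print(df.columns.tolist())  # question, context, id, title, answers
```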

## Source

SQuAD-NL was first used in the Dutch Model Benchmark (DUMB). The accompanying paper is available at https://aclanthology.org/2023.emnlp-main.447.

## Citation

If you use SQuAD-NL, please cite the DUMB, SQuAD and XQuAD papers:

```bibtex
@inproceedings{de-vries-etal-2023-dumb,
    title = "{DUMB}: A Benchmark for Smart Evaluation of {D}utch Models",
    author = "de Vries, Wietse  and
      Wieling, Martijn  and
      Nissim, Malvina",
    editor = "Bouamor, Houda  and
      Pino, Juan  and
      Bali, Kalika",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.emnlp-main.447",
    doi = "10.18653/v1/2023.emnlp-main.447",
    pages = "7221--7241",
    abstract = "We introduce the Dutch Model Benchmark: DUMB. The benchmark includes a diverse set of datasets for low-, medium- and high-resource tasks. The total set of nine tasks includes four tasks that were previously not available in Dutch. Instead of relying on a mean score across tasks, we propose Relative Error Reduction (RER), which compares the DUMB performance of language models to a strong baseline which can be referred to in the future even when assessing different sets of language models. Through a comparison of 14 pre-trained language models (mono- and multi-lingual, of varying sizes), we assess the internal consistency of the benchmark tasks, as well as the factors that likely enable high performance. Our results indicate that current Dutch monolingual models under-perform and suggest training larger Dutch models with other architectures and pre-training objectives. At present, the highest performance is achieved by DeBERTaV3 (large), XLM-R (large) and mDeBERTaV3 (base). In addition to highlighting best strategies for training larger Dutch models, DUMB will foster further research on Dutch. A public leaderboard is available at https://dumbench.nl.",
}

@inproceedings{rajpurkar-etal-2016-squad,
    title = "{SQ}u{AD}: 100,000+ Questions for Machine Comprehension of Text",
    author = "Rajpurkar, Pranav  and
      Zhang, Jian  and
      Lopyrev, Konstantin  and
      Liang, Percy",
    editor = "Su, Jian  and
      Duh, Kevin  and
      Carreras, Xavier",
    booktitle = "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2016",
    address = "Austin, Texas",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D16-1264",
    doi = "10.18653/v1/D16-1264",
    pages = "2383--2392",
}

@inproceedings{artetxe-etal-2020-cross,
    title = "On the Cross-lingual Transferability of Monolingual Representations",
    author = "Artetxe, Mikel  and
      Ruder, Sebastian  and
      Yogatama, Dani",
    editor = "Jurafsky, Dan  and
      Chai, Joyce  and
      Schluter, Natalie  and
      Tetreault, Joel",
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.acl-main.421",
    doi = "10.18653/v1/2020.acl-main.421",
    pages = "4623--4637",
    abstract = "State-of-the-art unsupervised multilingual models (e.g., multilingual BERT) have been shown to generalize in a zero-shot cross-lingual setting. This generalization ability has been attributed to the use of a shared subword vocabulary and joint training across multiple languages giving rise to deep multilingual abstractions. We evaluate this hypothesis by designing an alternative approach that transfers a monolingual model to new languages at the lexical level. More concretely, we first train a transformer-based masked language model on one language, and transfer it to a new language by learning a new embedding matrix with the same masked language modeling objective, freezing parameters of all other layers. This approach does not rely on a shared vocabulary or joint training. However, we show that it is competitive with multilingual BERT on standard cross-lingual classification benchmarks and on a new Cross-lingual Question Answering Dataset (XQuAD). Our results contradict common beliefs of the basis of the generalization ability of multilingual models and suggest that deep monolingual models learn some abstractions that generalize across languages. We also release XQuAD as a more comprehensive cross-lingual benchmark, which comprises 240 paragraphs and 1190 question-answer pairs from SQuAD v1.1 translated into ten languages by professional translators.",
}
```