    path: data/dev-*
  - split: test
    path: data/test-*
license: cc-by-sa-4.0
task_categories:
- question-answering
language:
- nl
pretty_name: SQuAD-NL v2.0
---

# SQuAD-NL v2.0 [translated SQuAD / XQuAD]

SQuAD-NL v2.0 is a translation of [The Stanford Question Answering Dataset](https://rajpurkar.github.io/SQuAD-explorer/) (SQuAD) v2.0.

Since the original English SQuAD test data is not public, we reserve the documents that were used for [XQuAD](https://github.com/google-deepmind/xquad) for testing. These documents are sampled from the original dev split. The English data was automatically translated with Google Translate (February 2023), and the test data was manually post-edited.

This version of SQuAD-NL also contains unanswerable questions. If you want to include only answerable questions, use [SQuAD-NL v1.1](https://huggingface.co/datasets/GroNLP/squad-nl-v1.1/).

| Split | Source                 | Procedure                | English | Dutch   |
| ----- | ---------------------- | ------------------------ | ------: | ------: |
| train | SQuAD-train-v2.0       | Google Translate         | 130,319 | 130,319 |
| dev   | SQuAD-dev-v2.0 \ XQuAD | Google Translate         |  10,174 |  10,174 |
| test  | SQuAD-dev-v2.0 & XQuAD | Google Translate + Human |   1,699 |   1,699 |

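SQuAD-NL keeps the SQuAD v2.0 schema, in which an unanswerable question simply has an empty `answers` field. A minimal sketch of how that encoding can be checked when splitting answerable from unanswerable examples (the sample records below are hypothetical illustrations, not taken from the dataset):

```python
# Hypothetical records in the SQuAD v2.0 / SQuAD-NL schema: each example has an
# "answers" dict with parallel "text" and "answer_start" lists, which are empty
# for unanswerable questions.
records = [
    {
        "id": "a1",
        "question": "Waar ligt Groningen?",
        "context": "Groningen ligt in het noorden van Nederland.",
        "answers": {"text": ["in het noorden van Nederland"], "answer_start": [15]},
    },
    {
        "id": "a2",
        "question": "Wat is de hoofdstad van Mars?",
        "context": "Groningen ligt in het noorden van Nederland.",
        "answers": {"text": [], "answer_start": []},  # unanswerable: no gold span
    },
]

def is_answerable(example: dict) -> bool:
    """An example is answerable iff it has at least one gold answer span."""
    return len(example["answers"]["text"]) > 0

answerable_ids = [r["id"] for r in records if is_answerable(r)]
print(answerable_ids)  # ['a1']
```

The same predicate can be passed to `datasets.Dataset.filter` to reproduce a v1.1-style answerable-only subset.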
## Source

SQuAD-NL was first used in the [Dutch Model Benchmark](https://dumbench.nl) (DUMB). The accompanying paper can be found [here](https://aclanthology.org/2023.emnlp-main.447/).

## Citation

If you use SQuAD-NL, please cite the DUMB, SQuAD and XQuAD papers:

```bibtex
@inproceedings{de-vries-etal-2023-dumb,
    title = "{DUMB}: A Benchmark for Smart Evaluation of {D}utch Models",
    author = "de Vries, Wietse  and
      Wieling, Martijn  and
      Nissim, Malvina",
    editor = "Bouamor, Houda  and
      Pino, Juan  and
      Bali, Kalika",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.emnlp-main.447",
    doi = "10.18653/v1/2023.emnlp-main.447",
    pages = "7221--7241",
    abstract = "We introduce the Dutch Model Benchmark: DUMB. The benchmark includes a diverse set of datasets for low-, medium- and high-resource tasks. The total set of nine tasks includes four tasks that were previously not available in Dutch. Instead of relying on a mean score across tasks, we propose Relative Error Reduction (RER), which compares the DUMB performance of language models to a strong baseline which can be referred to in the future even when assessing different sets of language models. Through a comparison of 14 pre-trained language models (mono- and multi-lingual, of varying sizes), we assess the internal consistency of the benchmark tasks, as well as the factors that likely enable high performance. Our results indicate that current Dutch monolingual models under-perform and suggest training larger Dutch models with other architectures and pre-training objectives. At present, the highest performance is achieved by DeBERTaV3 (large), XLM-R (large) and mDeBERTaV3 (base). In addition to highlighting best strategies for training larger Dutch models, DUMB will foster further research on Dutch. A public leaderboard is available at https://dumbench.nl.",
}

@inproceedings{rajpurkar-etal-2016-squad,
    title = "{SQ}u{AD}: 100,000+ Questions for Machine Comprehension of Text",
    author = "Rajpurkar, Pranav  and
      Zhang, Jian  and
      Lopyrev, Konstantin  and
      Liang, Percy",
    editor = "Su, Jian  and
      Duh, Kevin  and
      Carreras, Xavier",
    booktitle = "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2016",
    address = "Austin, Texas",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D16-1264",
    doi = "10.18653/v1/D16-1264",
    pages = "2383--2392",
}

@inproceedings{artetxe-etal-2020-cross,
    title = "On the Cross-lingual Transferability of Monolingual Representations",
    author = "Artetxe, Mikel  and
      Ruder, Sebastian  and
      Yogatama, Dani",
    editor = "Jurafsky, Dan  and
      Chai, Joyce  and
      Schluter, Natalie  and
      Tetreault, Joel",
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.acl-main.421",
    doi = "10.18653/v1/2020.acl-main.421",
    pages = "4623--4637",
    abstract = "State-of-the-art unsupervised multilingual models (e.g., multilingual BERT) have been shown to generalize in a zero-shot cross-lingual setting. This generalization ability has been attributed to the use of a shared subword vocabulary and joint training across multiple languages giving rise to deep multilingual abstractions. We evaluate this hypothesis by designing an alternative approach that transfers a monolingual model to new languages at the lexical level. More concretely, we first train a transformer-based masked language model on one language, and transfer it to a new language by learning a new embedding matrix with the same masked language modeling objective, freezing parameters of all other layers. This approach does not rely on a shared vocabulary or joint training. However, we show that it is competitive with multilingual BERT on standard cross-lingual classification benchmarks and on a new Cross-lingual Question Answering Dataset (XQuAD). Our results contradict common beliefs of the basis of the generalization ability of multilingual models and suggest that deep monolingual models learn some abstractions that generalize across languages. We also release XQuAD as a more comprehensive cross-lingual benchmark, which comprises 240 paragraphs and 1190 question-answer pairs from SQuAD v1.1 translated into ten languages by professional translators.",
}
```