---
license: cc-by-nd-4.0
language:
- de
- zh
- tr
size_categories:
- 10K<n<100K
multilinguality:
- multilingual
pretty_name: M2QA
task_categories:
- question-answering
task_ids:
- extractive-qa
dataset_info:
- config_name: m2qa.german.creative_writing
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: context
    dtype: string
  - name: answers
    struct:
    - name: text
      sequence: string
    - name: answer_start
      sequence: int64
  splits:
  - name: validation
    num_bytes: 2083548
    num_examples: 1500
  download_size: 2047695
  dataset_size: 2083548
- config_name: m2qa.german.news
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: context
    dtype: string
  - name: answers
    struct:
    - name: text
      sequence: string
    - name: answer_start
      sequence: int64
  splits:
  - name: validation
    num_bytes: 2192833
    num_examples: 1500
  - name: train
    num_bytes: 1527473
    num_examples: 1500
  download_size: 2438496
  dataset_size: 3720306
- config_name: m2qa.german.product_reviews
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: context
    dtype: string
  - name: answers
    struct:
    - name: text
      sequence: string
    - name: answer_start
      sequence: int64
  splits:
  - name: validation
    num_bytes: 1652573
    num_examples: 1500
  - name: train
    num_bytes: 1158154
    num_examples: 1500
  download_size: 1830972
  dataset_size: 2810727
- config_name: m2qa.chinese.creative_writing
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: context
    dtype: string
  - name: answers
    struct:
    - name: text
      sequence: string
    - name: answer_start
      sequence: int64
  splits:
  - name: validation
    num_bytes: 1600001
    num_examples: 1500
  download_size: 1559229
  dataset_size: 1600001
- config_name: m2qa.chinese.news
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: context
    dtype: string
  - name: answers
    struct:
    - name: text
      sequence: string
    - name: answer_start
      sequence: int64
  splits:
  - name: validation
    num_bytes: 1847465
    num_examples: 1500
  - name: train
    num_bytes: 1135914
    num_examples: 1500
  download_size: 2029530
  dataset_size: 2983379
- config_name: m2qa.chinese.product_reviews
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: context
    dtype: string
  - name: answers
    struct:
    - name: text
      sequence: string
    - name: answer_start
      sequence: int64
  splits:
  - name: validation
    num_bytes: 1390223
    num_examples: 1500
  - name: train
    num_bytes: 1358895
    num_examples: 1500
  download_size: 1597724
  dataset_size: 2749118
- config_name: m2qa.turkish.creative_writing
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: context
    dtype: string
  - name: answers
    struct:
    - name: text
      sequence: string
    - name: answer_start
      sequence: int64
  splits:
  - name: validation
    num_bytes: 1845140
    num_examples: 1500
  download_size: 1808676
  dataset_size: 1845140
- config_name: m2qa.turkish.news
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: context
    dtype: string
  - name: answers
    struct:
    - name: text
      sequence: string
    - name: answer_start
      sequence: int64
  splits:
  - name: validation
    num_bytes: 2071770
    num_examples: 1500
  - name: train
    num_bytes: 1362485
    num_examples: 1500
  download_size: 2287668
  dataset_size: 3434255
- config_name: m2qa.turkish.product_reviews
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: context
    dtype: string
  - name: answers
    struct:
    - name: text
      sequence: string
    - name: answer_start
      sequence: int64
  splits:
  - name: validation
    num_bytes: 1996826
    num_examples: 1500
  download_size: 1958662
  dataset_size: 1996826
configs:
- config_name: m2qa.chinese.creative_writing
  data_files:
  - split: validation
    path: m2qa.chinese.creative_writing/validation-*
- config_name: m2qa.chinese.news
  data_files:
  - split: validation
    path: m2qa.chinese.news/validation-*
  - split: train
    path: m2qa.chinese.news/train-*
- config_name: m2qa.chinese.product_reviews
  data_files:
  - split: validation
    path: m2qa.chinese.product_reviews/validation-*
  - split: train
    path: m2qa.chinese.product_reviews/train-*
- config_name: m2qa.german.creative_writing
  data_files:
  - split: validation
    path: m2qa.german.creative_writing/validation-*
- config_name: m2qa.german.news
  data_files:
  - split: validation
    path: m2qa.german.news/validation-*
  - split: train
    path: m2qa.german.news/train-*
- config_name: m2qa.german.product_reviews
  data_files:
  - split: validation
    path: m2qa.german.product_reviews/validation-*
  - split: train
    path: m2qa.german.product_reviews/train-*
- config_name: m2qa.turkish.creative_writing
  data_files:
  - split: validation
    path: m2qa.turkish.creative_writing/validation-*
- config_name: m2qa.turkish.news
  data_files:
  - split: validation
    path: m2qa.turkish.news/validation-*
  - split: train
    path: m2qa.turkish.news/train-*
- config_name: m2qa.turkish.product_reviews
  data_files:
  - split: validation
    path: m2qa.turkish.product_reviews/validation-*
---
M2QA: Multi-domain Multilingual Question Answering
=====================================================
M2QA (Multi-domain Multilingual Question Answering) is an extractive question answering benchmark for evaluating joint language and domain transfer. M2QA includes 13,500 SQuAD 2.0-style question-answer instances in German, Turkish, and Chinese for the domains of product reviews, news, and creative writing.
This Hugging Face dataset repository accompanies our paper "[M2QA: Multi-domain Multilingual Question Answering](https://aclanthology.org/2024.findings-emnlp.365/)". For explanations, code to reproduce all our results, and our custom-built annotation platform, see our GitHub repository: [https://github.com/UKPLab/m2qa](https://github.com/UKPLab/m2qa)
Loading & Decrypting the Dataset
-----------------
Following [Jacovi et al. (2023)](https://aclanthology.org/2023.emnlp-main.308/), we encrypt the validation data to prevent it from leaking into LLM training datasets. Loading and decrypting the dataset is nevertheless straightforward:
```python
from datasets import load_dataset
from cryptography.fernet import Fernet

# Load the dataset
subset = "m2qa.german.news"  # Change to the subset that you want to use
dataset = load_dataset("UKPLab/m2qa", subset)

# Decrypt it
fernet = Fernet(b"aRY0LZZb_rPnXWDSiSJn9krCYezQMOBbGII2eGkN5jo=")

def decrypt(example):
    example["question"] = fernet.decrypt(example["question"].encode()).decode()
    example["context"] = fernet.decrypt(example["context"].encode()).decode()
    example["answers"]["text"] = [fernet.decrypt(answer.encode()).decode() for answer in example["answers"]["text"]]
    return example

dataset["validation"] = dataset["validation"].map(decrypt)
```
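After decryption, each example follows the SQuAD-style schema declared in the metadata above (`id`, `question`, `context`, and an `answers` struct with `text` and `answer_start` lists). A quick sanity check:
```python
example = dataset["validation"][0]
print(example["question"])
print(example["context"][:200])
# Since the data is SQuAD 2.0-style, empty lists indicate an unanswerable question
print(example["answers"]["text"], example["answers"]["answer_start"])
```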
The M2QA dataset is licensed under a "no derivatives" agreement. To prevent contamination of LLM training datasets, and thus preserve the dataset's usefulness to the research community, please upload the dataset only in encrypted form, and query it only via APIs that do not use submitted data for training.
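If you need to redistribute processed validation data, you can re-encrypt it by mirroring the `decrypt` function above; a minimal sketch (note that Fernet tokens embed a random IV, so re-encrypted fields will not be byte-identical to the originals):
```python
from cryptography.fernet import Fernet

fernet = Fernet(b"aRY0LZZb_rPnXWDSiSJn9krCYezQMOBbGII2eGkN5jo=")

def encrypt(example):
    # Inverse of decrypt(): Fernet-encrypt the free-text fields;
    # the integer `answer_start` offsets stay untouched.
    example["question"] = fernet.encrypt(example["question"].encode()).decode()
    example["context"] = fernet.encrypt(example["context"].encode()).decode()
    example["answers"]["text"] = [fernet.encrypt(answer.encode()).decode() for answer in example["answers"]["text"]]
    return example

dataset["validation"] = dataset["validation"].map(encrypt)
```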
Overview / Data Splits
----------
All text passages stem from openly licensed sources; the individual licenses are listed at [https://github.com/UKPLab/m2qa/tree/main/m2qa_dataset](https://github.com/UKPLab/m2qa/tree/main/m2qa_dataset)
We have validation data for the following domains and languages:
| Subset Name | Domain | Language | #Question-Answer instances |
| --- | --- | --- | --- |
| `m2qa.german.product_reviews` | product_reviews | German | 1500 |
| `m2qa.german.creative_writing` | creative_writing | German | 1500 |
| `m2qa.german.news` | news | German | 1500 |
| `m2qa.chinese.product_reviews` | product_reviews | Chinese | 1500 |
| `m2qa.chinese.creative_writing` | creative_writing | Chinese | 1500 |
| `m2qa.chinese.news` | news | Chinese | 1500 |
| `m2qa.turkish.product_reviews` | product_reviews | Turkish | 1500 |
| `m2qa.turkish.creative_writing` | creative_writing | Turkish | 1500 |
| `m2qa.turkish.news` | news | Turkish | 1500 |
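All nine subset names follow the pattern `m2qa.<language>.<domain>`, so the validation splits can also be fetched in a loop; a minimal sketch:
```python
from datasets import load_dataset

LANGUAGES = ["german", "chinese", "turkish"]
DOMAINS = ["product_reviews", "creative_writing", "news"]

# Load every validation split; remember to decrypt them as shown above
validation_sets = {
    f"m2qa.{lang}.{domain}": load_dataset("UKPLab/m2qa", f"m2qa.{lang}.{domain}", split="validation")
    for lang in LANGUAGES
    for domain in DOMAINS
}
```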
### Additional Training Data
We also provide training data for five domain-language pairs, with 1,500 question-answer instances each, totalling 7,500 training examples. The following subsets contain training data (see the loading sketch after this list):
- `m2qa.chinese.news`
- `m2qa.chinese.product_reviews`
- `m2qa.german.news`
- `m2qa.german.product_reviews`
- `m2qa.turkish.news`
The training data is not encrypted.
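Because the training splits are stored in plain text, they can be loaded and used directly, without the decryption step:
```python
from datasets import load_dataset

# Training splits are unencrypted; no Fernet decryption needed
train_data = load_dataset("UKPLab/m2qa", "m2qa.german.news", split="train")
print(train_data[0]["question"])  # readable as-is
```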
Citation
----------
If you use this dataset, please cite our paper:
```
@inproceedings{englander-etal-2024-m2qa,
    title = "M2QA: Multi-domain Multilingual Question Answering",
    author = {Engl{\"a}nder, Leon and
      Sterz, Hannah and
      Poth, Clifton A and
      Pfeiffer, Jonas and
      Kuznetsov, Ilia and
      Gurevych, Iryna},
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2024",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.findings-emnlp.365",
    pages = "6283--6305",
}
```
License
-------
This dataset is distributed under the [CC-BY-ND 4.0 license](https://creativecommons.org/licenses/by-nd/4.0/legalcode).
Following [Jacovi et al. (2023)](https://aclanthology.org/2023.emnlp-main.308/), we publish under a "No Derivatives" license to mitigate the risk of the dataset contaminating crawled LLM training corpora.