---
license: cc-by-nd-4.0
language:
- de
- zh
- tr
size_categories:
- 10K<n<100K
multilinguality:
- multilingual
pretty_name: M2QA
task_categories:
- question-answering
task_ids:
- extractive-qa
dataset_info:
- config_name: m2qa.german.creative_writing
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: context
    dtype: string
  - name: answers
    struct:
    - name: text
      sequence: string
    - name: answer_start
      sequence: int64
  splits:
  - name: validation
    num_bytes: 2083548
    num_examples: 1500
  download_size: 2047695
  dataset_size: 2083548
- config_name: m2qa.german.news
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: context
    dtype: string
  - name: answers
    struct:
    - name: text
      sequence: string
    - name: answer_start
      sequence: int64
  splits:
  - name: validation
    num_bytes: 2192833
    num_examples: 1500
  - name: train
    num_bytes: 1527473
    num_examples: 1500
  download_size: 2438496
  dataset_size: 3720306
- config_name: m2qa.german.product_reviews
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: context
    dtype: string
  - name: answers
    struct:
    - name: text
      sequence: string
    - name: answer_start
      sequence: int64
  splits:
  - name: validation
    num_bytes: 1652573
    num_examples: 1500
  - name: train
    num_bytes: 1158154
    num_examples: 1500
  download_size: 1830972
  dataset_size: 2810727
- config_name: m2qa.chinese.creative_writing
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: context
    dtype: string
  - name: answers
    struct:
    - name: text
      sequence: string
    - name: answer_start
      sequence: int64
  splits:
  - name: validation
    num_bytes: 1600001
    num_examples: 1500
  download_size: 1559229
  dataset_size: 1600001
- config_name: m2qa.chinese.news
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: context
    dtype: string
  - name: answers
    struct:
    - name: text
      sequence: string
    - name: answer_start
      sequence: int64
  splits:
  - name: validation
    num_bytes: 1847465
    num_examples: 1500
  - name: train
    num_bytes: 1135914
    num_examples: 1500
  download_size: 2029530
  dataset_size: 2983379
- config_name: m2qa.chinese.product_reviews
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: context
    dtype: string
  - name: answers
    struct:
    - name: text
      sequence: string
    - name: answer_start
      sequence: int64
  splits:
  - name: validation
    num_bytes: 1390223
    num_examples: 1500
  - name: train
    num_bytes: 1358895
    num_examples: 1500
  download_size: 1597724
  dataset_size: 2749118
- config_name: m2qa.turkish.creative_writing
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: context
    dtype: string
  - name: answers
    struct:
    - name: text
      sequence: string
    - name: answer_start
      sequence: int64
  splits:
  - name: validation
    num_bytes: 1845140
    num_examples: 1500
  download_size: 1808676
  dataset_size: 1845140
- config_name: m2qa.turkish.news
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: context
    dtype: string
  - name: answers
    struct:
    - name: text
      sequence: string
    - name: answer_start
      sequence: int64
  splits:
  - name: validation
    num_bytes: 2071770
    num_examples: 1500
  - name: train
    num_bytes: 1362485
    num_examples: 1500
  download_size: 2287668
  dataset_size: 3434255
- config_name: m2qa.turkish.product_reviews
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: context
    dtype: string
  - name: answers
    struct:
    - name: text
      sequence: string
    - name: answer_start
      sequence: int64
  splits:
  - name: validation
    num_bytes: 1996826
    num_examples: 1500
  download_size: 1958662
  dataset_size: 1996826
configs:
- config_name: m2qa.chinese.creative_writing
  data_files:
  - split: validation
    path: m2qa.chinese.creative_writing/validation-*
- config_name: m2qa.chinese.news
  data_files:
  - split: validation
    path: m2qa.chinese.news/validation-*
  - split: train
    path: m2qa.chinese.news/train-*
- config_name: m2qa.chinese.product_reviews
  data_files:
  - split: validation
    path: m2qa.chinese.product_reviews/validation-*
  - split: train
    path: m2qa.chinese.product_reviews/train-*
- config_name: m2qa.german.creative_writing
  data_files:
  - split: validation
    path: m2qa.german.creative_writing/validation-*
- config_name: m2qa.german.news
  data_files:
  - split: validation
    path: m2qa.german.news/validation-*
  - split: train
    path: m2qa.german.news/train-*
- config_name: m2qa.german.product_reviews
  data_files:
  - split: validation
    path: m2qa.german.product_reviews/validation-*
  - split: train
    path: m2qa.german.product_reviews/train-*
- config_name: m2qa.turkish.creative_writing
  data_files:
  - split: validation
    path: m2qa.turkish.creative_writing/validation-*
- config_name: m2qa.turkish.news
  data_files:
  - split: validation
    path: m2qa.turkish.news/validation-*
  - split: train
    path: m2qa.turkish.news/train-*
- config_name: m2qa.turkish.product_reviews
  data_files:
  - split: validation
    path: m2qa.turkish.product_reviews/validation-*
---
M2QA: Multi-domain Multilingual Question Answering
=====================================================

M2QA (Multi-domain Multilingual Question Answering) is an extractive question answering benchmark for evaluating joint language and domain transfer. M2QA includes 13,500 SQuAD 2.0-style question-answer instances in German, Turkish, and Chinese for the domains of product reviews, news, and creative writing.

This Hugging Face datasets repo accompanies our paper "[M2QA: Multi-domain Multilingual Question Answering](https://arxiv.org/abs/2407.01091)". For an explanation and code to reproduce all our results, or to use our custom-built annotation platform, have a look at our GitHub repository: [https://github.com/UKPLab/m2qa](https://github.com/UKPLab/m2qa)

Loading & Decrypting the Dataset
--------------------------------

Following [Jacovi et al. (2023)](https://aclanthology.org/2023.emnlp-main.308/), we encrypt the validation data to prevent it from leaking into LLM training data. Loading and decrypting the dataset is still easy:
```python
from datasets import load_dataset
from cryptography.fernet import Fernet

# Load the dataset
subset = "m2qa.german.news"  # Change to the subset that you want to use
dataset = load_dataset("UKPLab/m2qa", subset)

# Decrypt it
fernet = Fernet(b"aRY0LZZb_rPnXWDSiSJn9krCYezQMOBbGII2eGkN5jo=")


def decrypt(example):
    example["question"] = fernet.decrypt(example["question"].encode()).decode()
    example["context"] = fernet.decrypt(example["context"].encode()).decode()
    example["answers"]["text"] = [fernet.decrypt(answer.encode()).decode() for answer in example["answers"]["text"]]
    return example


dataset["validation"] = dataset["validation"].map(decrypt)
```
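
Continuing from the snippet above, a decrypted example exposes the SQuAD 2.0-style fields declared in the schema; a quick sanity check might look like this:

```python
# Inspect one decrypted validation example.
example = dataset["validation"][0]
print(example["id"])
print(example["question"])
print(example["context"][:200])
# "answers" holds parallel lists; in SQuAD 2.0 style, empty lists typically mark unanswerable questions.
print(example["answers"]["text"], example["answers"]["answer_start"])
```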

The M2QA dataset is released under a "No Derivatives" license (see below). To prevent contamination of LLM training datasets and thus preserve the dataset's usefulness to the research community, please upload the dataset only in encrypted form. Additionally, please use only APIs that do not use the data for training.
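
If you need to store or share the validation data again, keep it encrypted. The sketch below simply inverts the `decrypt` function from the loading snippet, reusing the same Fernet key; it is an illustration, not an official release script, and Fernet tokens are non-deterministic, so re-encrypted files will not be byte-identical to the originals.

```python
def encrypt(example):
    # Inverse of decrypt(): turn the plaintext fields back into Fernet tokens before sharing.
    example["question"] = fernet.encrypt(example["question"].encode()).decode()
    example["context"] = fernet.encrypt(example["context"].encode()).decode()
    example["answers"]["text"] = [fernet.encrypt(answer.encode()).decode() for answer in example["answers"]["text"]]
    return example


re_encrypted_validation = dataset["validation"].map(encrypt)
```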

Overview / Data Splits
----------------------

All used text passages stem from sources with open licenses. We list the licenses here: [https://github.com/UKPLab/m2qa/tree/main/m2qa_dataset](https://github.com/UKPLab/m2qa/tree/main/m2qa_dataset)

We have validation data for the following domains and languages:

| Subset Name | Domain | Language | #Question-Answer instances |
| --- | --- | --- | --- |
| `m2qa.german.product_reviews` | product_reviews | German | 1500 |
| `m2qa.german.creative_writing` | creative_writing | German | 1500 |
| `m2qa.german.news` | news | German | 1500 |
| `m2qa.chinese.product_reviews` | product_reviews | Chinese | 1500 |
| `m2qa.chinese.creative_writing` | creative_writing | Chinese | 1500 |
| `m2qa.chinese.news` | news | Chinese | 1500 |
| `m2qa.turkish.product_reviews` | product_reviews | Turkish | 1500 |
| `m2qa.turkish.creative_writing` | creative_writing | Turkish | 1500 |
| `m2qa.turkish.news` | news | Turkish | 1500 |
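
Since the subset names follow the pattern `m2qa.<language>.<domain>`, all nine subsets can be loaded in a loop. A minimal sketch (the validation splits still need the decryption step shown above):

```python
from datasets import load_dataset

LANGUAGES = ["german", "chinese", "turkish"]
DOMAINS = ["product_reviews", "creative_writing", "news"]

for language in LANGUAGES:
    for domain in DOMAINS:
        subset = f"m2qa.{language}.{domain}"
        dataset = load_dataset("UKPLab/m2qa", subset)
        # Shows which splits exist per subset, e.g. {"validation": 1500} or {"validation": 1500, "train": 1500}.
        print(subset, {split: len(ds) for split, ds in dataset.items()})
```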

### Additional Training Data

We also provide training data for five domain-language pairs, consisting of 1500 question-answer instances each, totalling 7500 training examples. These are the subsets that contain training data:

- `m2qa.chinese.news`
- `m2qa.chinese.product_reviews`
- `m2qa.german.news`
- `m2qa.german.product_reviews`
- `m2qa.turkish.news`

The training data is not encrypted.
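
Because the train splits are stored in plaintext, they can be used directly without the Fernet step; for example, with the `m2qa.german.news` subset from the list above:

```python
from datasets import load_dataset

# Train splits are not encrypted, so no decryption is needed here.
train_data = load_dataset("UKPLab/m2qa", "m2qa.german.news", split="train")
print(len(train_data))  # 1500 question-answer instances
print(train_data[0]["question"])
```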

Citation
--------

If you use this dataset, please cite our paper:
```bibtex
@article{englaender-etal-2024-m2qa,
  title   = {M2QA: Multi-domain Multilingual Question Answering},
  author  = {Engl{\"a}nder, Leon and
             Sterz, Hannah and
             Poth, Clifton and
             Pfeiffer, Jonas and
             Kuznetsov, Ilia and
             Gurevych, Iryna},
  journal = {arXiv preprint},
  url     = {https://arxiv.org/abs/2407.01091},
  month   = jul,
  year    = {2024}
}
```

License
-------

This dataset is distributed under the [CC-BY-ND 4.0 license](https://creativecommons.org/licenses/by-nd/4.0/legalcode).

Following [Jacovi et al. (2023)](https://aclanthology.org/2023.emnlp-main.308/), we chose a "No Derivatives" license to mitigate the risk of contaminating crawled LLM training datasets.