Datasets:

Modalities: Text
Formats: parquet
Sub-tasks: extractive-qa
Libraries: Datasets, pandas
License:
lenglaender committed on
Commit
dbc3e3b
1 Parent(s): 6774368

Update README.md

Files changed (1):
  1. README.md +33 -31
README.md CHANGED

@@ -14,7 +14,7 @@ task_categories:
 task_ids:
 - extractive-qa
 dataset_info:
-- config_name: m2qa.chinese.creative_writing
+- config_name: m2qa.german.creative_writing
   features:
   - name: id
     dtype: string
@@ -30,11 +30,11 @@ dataset_info:
       sequence: int64
   splits:
   - name: validation
-    num_bytes: 1600001
+    num_bytes: 2083548
     num_examples: 1500
-  download_size: 1559229
-  dataset_size: 1600001
-- config_name: m2qa.chinese.news
+  download_size: 2047695
+  dataset_size: 2083548
+- config_name: m2qa.german.news
   features:
   - name: id
     dtype: string
@@ -50,14 +50,14 @@ dataset_info:
       sequence: int64
   splits:
   - name: validation
-    num_bytes: 1847465
+    num_bytes: 2192833
     num_examples: 1500
   - name: train
-    num_bytes: 1135914
+    num_bytes: 1527473
     num_examples: 1500
-  download_size: 2029530
-  dataset_size: 2983379
-- config_name: m2qa.chinese.product_reviews
+  download_size: 2438496
+  dataset_size: 3720306
+- config_name: m2qa.german.product_reviews
   features:
   - name: id
     dtype: string
@@ -73,14 +73,14 @@ dataset_info:
       sequence: int64
   splits:
   - name: validation
-    num_bytes: 1390223
+    num_bytes: 1652573
     num_examples: 1500
   - name: train
-    num_bytes: 1358895
+    num_bytes: 1158154
     num_examples: 1500
-  download_size: 1597724
-  dataset_size: 2749118
-- config_name: m2qa.german.creative_writing
+  download_size: 1830972
+  dataset_size: 2810727
+- config_name: m2qa.chinese.creative_writing
   features:
   - name: id
     dtype: string
@@ -96,11 +96,11 @@ dataset_info:
       sequence: int64
   splits:
   - name: validation
-    num_bytes: 2083548
+    num_bytes: 1600001
     num_examples: 1500
-  download_size: 2047695
-  dataset_size: 2083548
-- config_name: m2qa.german.news
+  download_size: 1559229
+  dataset_size: 1600001
+- config_name: m2qa.chinese.news
   features:
   - name: id
     dtype: string
@@ -116,14 +116,14 @@ dataset_info:
       sequence: int64
   splits:
   - name: validation
-    num_bytes: 2192833
+    num_bytes: 1847465
     num_examples: 1500
   - name: train
-    num_bytes: 1527473
+    num_bytes: 1135914
     num_examples: 1500
-  download_size: 2438496
-  dataset_size: 3720306
-- config_name: m2qa.german.product_reviews
+  download_size: 2029530
+  dataset_size: 2983379
+- config_name: m2qa.chinese.product_reviews
   features:
   - name: id
     dtype: string
@@ -139,13 +139,13 @@ dataset_info:
       sequence: int64
   splits:
   - name: validation
-    num_bytes: 1652573
+    num_bytes: 1390223
     num_examples: 1500
   - name: train
-    num_bytes: 1158154
+    num_bytes: 1358895
     num_examples: 1500
-  download_size: 1830972
-  dataset_size: 2810727
+  download_size: 1597724
+  dataset_size: 2749118
 - config_name: m2qa.turkish.creative_writing
   features:
   - name: id
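The hunks above only swap values between configs: every config keeps the `m2qa.<language>.<domain>` naming pattern, each one listed here has a 1,500-example validation split, and the news and product_reviews configs additionally carry a 1,500-example train split. As a minimal sketch of how these configs can be discovered and loaded with the `datasets` library (assuming the `UKPLab/m2qa` repo id that this commit switches to further down):

```python
from datasets import get_dataset_config_names, load_dataset

# Config names follow the "m2qa.<language>.<domain>" pattern from the YAML header.
configs = get_dataset_config_names("UKPLab/m2qa")
print(configs)

# Configs shown in these hunks ship a 1,500-example validation split;
# the news and product_reviews configs also ship a 1,500-example train split.
dataset = load_dataset("UKPLab/m2qa", "m2qa.german.news")
print(dataset)
```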
@@ -263,7 +263,7 @@ M2QA: Multi-domain Multilingual Question Answering

 M2QA (Multi-domain Multilingual Question Answering) is an extractive question answering benchmark for evaluating joint language and domain transfer. M2QA includes 13,500 SQuAD 2.0-style question-answer instances in German, Turkish, and Chinese for the domains of product reviews, news, and creative writing.

-This Hugging Face datasets repo accompanies our paper "[M2QA: Multi-domain Multilingual Question Answering](TODO_INSERT_ARXIV_LINK)". If you want an explanation and code to reproduce all our results or want to use our custom-built annotation platform, have a look at our GitHub repository: [https://github.com/adapter-hub/m2qa](https://github.com/adapter-hub/m2qa)
+This Hugging Face datasets repo accompanies our paper "[M2QA: Multi-domain Multilingual Question Answering](https://arxiv.org/abs/2407.01091)". If you want an explanation and code to reproduce all our results or want to use our custom-built annotation platform, have a look at our GitHub repository: [https://github.com/UKPLab/m2qa](https://github.com/UKPLab/m2qa)


 Loading & Decrypting the Dataset
@@ -278,7 +278,7 @@ from cryptography.fernet import Fernet

 # Load the dataset
 subset = "m2qa.german.news" # Change to the subset that you want to use
-dataset = load_dataset("lenglaender/m2qa", subset) # TODO change to new repo name
+dataset = load_dataset("UKPLab/m2qa", subset)

 # Decrypt it
 fernet = Fernet(b"aRY0LZZb_rPnXWDSiSJn9krCYezQMOBbGII2eGkN5jo=")
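This hunk shows only the top of the README's loading snippet; the actual decryption mapping lies outside the changed lines. For context, a minimal end-to-end sketch of the pattern it describes, assuming the `UKPLab/m2qa` repo id this commit switches to and hypothetical `question`/`context` column names that may differ from the real schema:

```python
from cryptography.fernet import Fernet
from datasets import load_dataset

subset = "m2qa.german.news"  # any config name from the YAML header above
dataset = load_dataset("UKPLab/m2qa", subset)

# The key is published in the README itself; the encryption only keeps the
# text out of web crawls, it is not meant to restrict human readers.
fernet = Fernet(b"aRY0LZZb_rPnXWDSiSJn9krCYezQMOBbGII2eGkN5jo=")

def decrypt(value: str) -> str:
    # Fernet operates on bytes, so encode before and decode after the call.
    return fernet.decrypt(value.encode()).decode()

# Hypothetical column names ("question", "context"): the hunk cuts off before
# the README's real mapping, so adjust this to the actual schema. The answer
# texts would need the same treatment.
dataset = dataset.map(
    lambda example: {
        "question": decrypt(example["question"]),
        "context": decrypt(example["context"]),
    }
)

print(dataset["validation"][0])
```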
@@ -297,7 +297,7 @@ The M2QA dataset is licensed under a "no derivative" agreement. To prevent conta

 Overview / Data Splits
 ----------
-All used text passages stem from sources with open licenses. We list the licenses here: [https://github.com/adapter-hub/m2qa/tree/main/m2qa_dataset](https://github.com/adapter-hub/m2qa/tree/main/m2qa_dataset)
+All used text passages stem from sources with open licenses. We list the licenses here: [https://github.com/UKPLab/m2qa/tree/main/m2qa_dataset](https://github.com/UKPLab/m2qa/tree/main/m2qa_dataset)

 We have validation data for the following domains and languages:

@@ -337,6 +337,8 @@ If you use this dataset, please cite our paper:
     Kuznetsov, Ilia and
     Gurevych, Iryna},
   journal={arXiv preprint},
+  url="https://arxiv.org/abs/2407.01091",
+  month = jul,
   year="2024"
 }
 ```
 