Datasets: QCRI /

Firoj committed
Commit 6f11135 · verified · 1 Parent(s): c7e86f2

Upload README.md with huggingface_hub

Files changed (1): README.md (+170 −6)

README.md CHANGED
@@ -3,17 +3,21 @@ license: cc-by-nc-sa-4.0
 task_categories:
 - text-classification
 - question-answering
+- multiple-choice-question
 language:
 - ar
 tags:
 - MMLU
-- exams
-- BoolQ
+- reading-comprehension
+- commonsense-reasoning
+- capabilities
+- cultural-understanding
+- world-knowledge
 pretty_name: 'AraDiCE -- Arabic Dialect and Cultural Evaluation'
 size_categories:
 - 10K<n<100K
 dataset_info:
-- config_name: ArabicMMLU-egy
+- config_name: ArabicMMLU-lev
   splits:
   - name: test
     num_examples: 14455
@@ -21,15 +25,167 @@ dataset_info:
   splits:
   - name: test
     num_examples: 14455
+- config_name: PIQA-msa
+  splits:
+  - name: test
+    num_examples: 1838
+- config_name: PIQA-lev
+  splits:
+  - name: test
+    num_examples: 1838
+- config_name: PIQA-egy
+  splits:
+  - name: test
+    num_examples: 1838
+- config_name: OBQA-msa
+  splits:
+  - name: test
+    num_examples: 497
+- config_name: OBQA-lev
+  splits:
+  - name: test
+    num_examples: 497
+- config_name: OBQA-egy
+  splits:
+  - name: test
+    num_examples: 497
+- config_name: Winogrande-msa
+  splits:
+  - name: test
+    num_examples: 1267
+- config_name: Winogrande-lev
+  splits:
+  - name: test
+    num_examples: 1267
+- config_name: Winogrande-egy
+  splits:
+  - name: test
+    num_examples: 1267
+- config_name: TruthfulQA-msa
+  splits:
+  - name: test
+    num_examples: 780
+- config_name: TruthfulQA-lev
+  splits:
+  - name: test
+    num_examples: 780
+- config_name: TruthfulQA-egy
+  splits:
+  - name: test
+    num_examples: 780
+- config_name: BoolQ-msa
+  splits:
+  - name: test
+    num_examples: 892
+- config_name: BoolQ-lev
+  splits:
+  - name: test
+    num_examples: 892
+- config_name: BoolQ-egy
+  splits:
+  - name: test
+    num_examples: 892
+- config_name: BoolQ-eng
+  splits:
+  - name: test
+    num_examples: 892
+- config_name: AraDiCE-Culture-glf
+  splits:
+  - name: test
+    num_examples: 30
+- config_name: AraDiCE-Culture-lev
+  splits:
+  - name: test
+    num_examples: 120
+- config_name: AraDiCE-Culture-egy
+  splits:
+  - name: test
+    num_examples: 30
 configs:
 - config_name: ArabicMMLU-egy
   data_files:
   - split: test
     path: ArabicMMLU_egy/test.json
-- config_name: ArabicMMLU-lev
+- config_name: ArabicMMLU-lev
   data_files:
   - split: test
     path: ArabicMMLU_lev/test.json
+- config_name: PIQA-msa
+  data_files:
+  - split: test
+    path: PIQA_msa/test.json
+- config_name: PIQA-lev
+  data_files:
+  - split: test
+    path: PIQA_lev/test.json
+- config_name: PIQA-egy
+  data_files:
+  - split: test
+    path: PIQA_egy/test.json
+- config_name: OBQA-msa
+  data_files:
+  - split: test
+    path: OBQA_msa/test.json
+- config_name: OBQA-lev
+  data_files:
+  - split: test
+    path: OBQA_lev/test.json
+- config_name: OBQA-egy
+  data_files:
+  - split: test
+    path: OBQA_egy/test.json
+- config_name: Winogrande-msa
+  data_files:
+  - split: test
+    path: Winogrande_msa/test.json
+- config_name: Winogrande-lev
+  data_files:
+  - split: test
+    path: Winogrande_lev/test.json
+- config_name: Winogrande-egy
+  data_files:
+  - split: test
+    path: Winogrande_egy/test.json
+- config_name: TruthfulQA-msa
+  data_files:
+  - split: test
+    path: TruthfulQA_msa/test.json
+- config_name: TruthfulQA-lev
+  data_files:
+  - split: test
+    path: TruthfulQA_lev/test.json
+- config_name: TruthfulQA-egy
+  data_files:
+  - split: test
+    path: TruthfulQA_egy/test.json
+- config_name: BoolQ-msa
+  data_files:
+  - split: test
+    path: BoolQ_msa/test.json
+- config_name: BoolQ-lev
+  data_files:
+  - split: test
+    path: BoolQ_lev/test.json
+- config_name: BoolQ-egy
+  data_files:
+  - split: test
+    path: BoolQ_egy/test.json
+- config_name: BoolQ-eng
+  data_files:
+  - split: test
+    path: BoolQ_eng/test.json
+- config_name: AraDiCE-Culture-glf
+  data_files:
+  - split: test
+    path: AraDiCE-Culture_glf/test.json
+- config_name: AraDiCE-Culture-lev
+  data_files:
+  - split: test
+    path: AraDiCE-Culture_lev/test.json
+- config_name: AraDiCE-Culture-egy
+  data_files:
+  - split: test
+    path: AraDiCE-Culture_egy/test.json
 ---
 
 # AraDiCE: Benchmarks for Dialectal and Cultural Capabilities in LLMs
@@ -40,12 +196,19 @@ The **AraDiCE** dataset is designed to evaluate dialectal and cultural capabilities
 
 As part of the supplemental materials, we have selected a few datasets (see below) for the reader to review. We will make the full AraDiCE benchmarking suite publicly available to the community.
 
-## File/Directory
+<!-- ## File/Directory
 
 TO DO:
 
 - **licenses_by-nc-sa_4.0_legalcode.txt** License information.
-- **README.md** This file.
+- **README.md** This file. -->
+## Dataset Statistics
+
+The datasets used in this study include: *i)* four existing Arabic datasets for understanding and generation, *Arabic Dialects Dataset (ADD)*, *ADI*, *QADI*, and *MADAR*, along with a dialectal response generation dataset; *ii)* seven datasets translated and post-edited into MSA and dialects (Levantine and Egyptian): *ArabicMMLU*, *BoolQ*, *PIQA*, *OBQA*, *Winogrande*, *Belebele*, and *TruthfulQA*; and *iii)* *AraDiCE-Culture*, an in-house regional Arabic cultural-understanding dataset. The figures below summarize the dataset types and statistics benchmarked in **AraDiCE**.
+
+<p align="left"> <img src="./benchmarking_tasks_datasets.png" style="width: 40%;" id="title-icon"> </p>
+
+<p align="left"> <img src="./data_stat_table.png" style="width: 40%;" id="title-icon"> </p>
 
 
 ## Dataset Usage
@@ -63,6 +226,7 @@ The dataset is distributed under the **Creative Commons Attribution-NonCommercial-ShareAlike 4.0**
 
 
 ## Citation
+The paper is available <a href="https://arxiv.org/pdf/2409.11404/" target="_blank">here</a>.
 
 ```
 @article{mousi2024aradicebenchmarksdialectalcultural,
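
The `configs` block added in this commit follows one naming convention throughout: each config name ends in a variety code (`msa`, `lev`, `egy`, `eng`, `glf`), and its data file lives in a directory that swaps the final hyphen for an underscore, with a single `test` split stored as `test.json`. A minimal sketch of that mapping (the helper name is ours, not part of the dataset):

```python
def data_file_for(config: str) -> str:
    """Map an AraDiCE config name (e.g. "BoolQ-egy") to its data file path.

    Per the `configs` block in the README front matter, the directory name
    replaces the final hyphen (before the variety code) with an underscore,
    and every config exposes one `test` split stored as test.json.
    """
    base, variety = config.rsplit("-", 1)
    return f"{base}_{variety}/test.json"

# Spot-checks against entries copied verbatim from the YAML above.
assert data_file_for("ArabicMMLU-egy") == "ArabicMMLU_egy/test.json"
assert data_file_for("BoolQ-eng") == "BoolQ_eng/test.json"
assert data_file_for("AraDiCE-Culture-glf") == "AraDiCE-Culture_glf/test.json"
```

With the Hugging Face `datasets` library, these config names would be passed as the configuration name to `load_dataset`; the exact repository id is not shown on this page, so verify it against the Hub before loading.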