---
license: cc-by-sa-4.0
task_categories:
- question-answering
language:
- bem
- fon
- ha
- ig
- kin
- sw
- wo
- yo
- zu
- tw
pretty_name: AfriQA
size_categories:
- 10K<n<100K
multilinguality:
- multilingual
tags:
- cross-lingual
- question-answering
- qa
---
Dataset Card for AfriQA
Table of Contents
- Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
Dataset Description
- Homepage: homepage
- Repository: github
- Paper: paper
- Point of Contact: Masakhane or [email protected]
Dataset Summary
AfriQA is the first cross-lingual question answering (QA) dataset with a focus on African languages. The dataset includes over 12,000 cross-lingual open-retrieval (XOR) QA examples across 10 African languages, making it an invaluable resource for developing more equitable QA technology.
Train, validation, and test sets are available for all 10 languages.
For more details see ...
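As a quick-start sketch, the dataset can be loaded with the Hugging Face `datasets` library. The snippet below assumes the data is hosted on the Hub under the `masakhane/afriqa` identifier with one configuration per language code (e.g. `bem` for Bemba); check the repository for the exact names before relying on them.

```python
# Minimal loading sketch; "masakhane/afriqa" and the per-language
# configuration name "bem" are assumptions, not guaranteed identifiers.
from datasets import load_dataset

afriqa_bem = load_dataset("masakhane/afriqa", "bem")

print(afriqa_bem)              # DatasetDict with train/validation/test splits
print(afriqa_bem["train"][0])  # one cross-lingual QA example
```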
Supported Tasks and Leaderboards
- `question-answering`: Performance on this task is measured with F1 score (higher is better) and Exact Match accuracy.
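As an illustration of the two metrics, the sketch below implements a simple Exact Match and whitespace token-level F1 for a single prediction/reference pair, following the usual SQuAD-style definitions; the official evaluation script may apply additional answer normalization.

```python
from collections import Counter

def exact_match(prediction: str, reference: str) -> float:
    """1.0 if the lowercased, stripped strings are identical, else 0.0."""
    return float(prediction.strip().lower() == reference.strip().lower())

def token_f1(prediction: str, reference: str) -> float:
    """Token-level F1 between whitespace-tokenized answers."""
    pred_tokens = prediction.strip().lower().split()
    ref_tokens = reference.strip().lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("yes", "Yes"))                 # 1.0
print(token_f1("the river nile", "river nile"))  # ≈ 0.8
```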
Languages
There are 10 languages available :
- Bemba (bem)
- Fon (fon)
- Hausa (hau)
- Igbo (ibo)
- Kinyarwanda (kin)
- Swahili (swa)
- Twi (twi)
- Wolof (wol)
- Yorùbá (yor)
- Zulu (zul)
Dataset Structure
Data Instances
- Data Format:
- id : Question ID
- question : Question in African Language
- translated_question : Question translated into a pivot language (English/French)
- answers : Answer in African Language
- lang : Datapoint Language (African Language), e.g. bem
- split : Dataset Split
- translated_answer : Answer in Pivot Language
- translation_type : Translation type of question and answers
{ "id": 0,
"question": "Bushe icaalo ca Egypt caali tekwapo ne caalo cimbi?",
"translated_question": "Has the country of Egypt been colonized before?",
"answers": "['Emukwai']",
"lang": "bem",
"split": "dev",
"translated_answer": "['yes']",
"translation_type": "human_translation"
}
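In the example above, the `answers` and `translated_answer` fields are string-encoded lists. If they are serialized this way in the release you use (an assumption worth verifying on a real example), they can be decoded as in the sketch below.

```python
import ast

example = {
    "answers": "['Emukwai']",
    "translated_answer": "['yes']",
}

# Decode the string-encoded lists into real Python lists.
answers = ast.literal_eval(example["answers"])
translated_answer = ast.literal_eval(example["translated_answer"])
print(answers, translated_answer)  # ['Emukwai'] ['yes']
```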
Data Splits
For all languages, there are three splits. The original splits were named `train`, `dev`, and `test`; they correspond to the `train`, `validation`, and `test` splits. The splits have the following sizes :
Language | train | dev | test |
---|---|---|---|
Bemba | 502 | 503 | 314 |
Fon | 427 | 428 | 386 |
Hausa | 435 | 436 | 300 |
Igbo | 417 | 418 | 409 |
Kinyarwanda | 407 | 409 | 347 |
Swahili | 415 | 417 | 302 |
Twi | 451 | 452 | 490 |
Wolof | 503 | 504 | 334 |
Yoruba | 360 | 361 | 332 |
Zulu | 387 | 388 | 325 |
Total | 4333 | 4346 | 3560 |
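To make the split naming concrete, the sketch below reuses the assumed `masakhane/afriqa` identifier and per-language configuration name from the earlier snippet and prints the number of examples in each split for one language; the original `dev` split is exposed as `validation`.

```python
from datasets import load_dataset

# "masakhane/afriqa" and the configuration name "bem" are assumptions
# carried over from the earlier loading sketch.
ds = load_dataset("masakhane/afriqa", "bem")
for split in ("train", "validation", "test"):
    print(split, ds[split].num_rows)
```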
Dataset Creation
Curation Rationale
The dataset was created to provide question-answering resources for 10 African languages that are under-served in natural language processing.
[More Information Needed]
Source Data
...
Initial Data Collection and Normalization
...
Who are the source language producers?
...
Annotations
Annotation process
Details can be found here ...
Who are the annotators?
Annotators were recruited from the Masakhane community.
Personal and Sensitive Information
...
Considerations for Using the Data
Social Impact of Dataset
[More Information Needed]
Discussion of Biases
[More Information Needed]
Other Known Limitations
Users should keep in mind that the answers are drawn from Wikipedia text, which might limit the applicability of the developed systems to other domains.
Additional Information
Dataset Curators
Licensing Information
The licensing status of the data is CC BY-NC 4.0 (Creative Commons Attribution-NonCommercial 4.0).
Citation Information
The BibTeX-formatted citation for the dataset is incoming ...
Contributions
Thanks to @ToluClassics for adding this dataset.