---
dataset_info:
features:
- name: audio
dtype: audio
- name: Surah
dtype: string
- name: Aya
dtype: string
- name: duration_ms
dtype: int64
- name: create_date
dtype: string
- name: golden
dtype: bool
- name: final_label
dtype: string
- name: reciter_id
dtype: string
- name: reciter_country
dtype: string
- name: reciter_gender
dtype: string
- name: reciter_age
dtype: string
- name: reciter_qiraah
dtype: string
- name: judgments_num
dtype: int64
- name: annotation_metadata
dtype: string
splits:
- name: train
num_bytes: 1290351809.656
num_examples: 6828
download_size: 1258070687
dataset_size: 1290351809.656
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- automatic-speech-recognition
- audio-classification
language:
- ar
tags:
- Crowdsourcing
- Quranic recitation
- Non-Arabic Speakers
pretty_name: Quranic Audio Dataset - Crowdsourced and Labeled Recitation from Non-Arabic Speakers
---
# Dataset Card for Quranic Audio Dataset: Crowdsourced and Labeled Recitation from Non-Arabic Speakers
### Dataset Summary
We explore the possibility of crowdsourcing a carefully annotated Quranic dataset, on top of which AI models can be built to simplify the learning process.
In particular, we adopt a volunteer-based crowdsourcing approach and implement a crowdsourcing API to gather audio assets.
We developed a crowdsourcing platform called Quran Voice for annotating the gathered audio assets.
As a result, we have collected around 7,000 Quranic recitations from a pool of 1,287 participants across more than 11 non-Arabic-speaking countries, and we have annotated 1,166 recitations from the dataset in six categories.
We achieved a crowd accuracy of 0.77, an inter-rater agreement of 0.63 between the annotators, and an agreement of 0.89 between the labels assigned by the algorithm and the expert judgments.
## How to use
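This card does not yet include usage code. As a minimal sketch: the data would normally be loaded with the Hugging Face `datasets` library (the repository id in the comment below is a placeholder, not the dataset's actual Hub path); the runnable part of the snippet is a standard-library-only illustration of working with records that follow the schema declared in this card, using made-up values.

```python
# With the Hugging Face `datasets` library installed, loading would
# look roughly like this (the repo id is a placeholder):
#
#   from datasets import load_dataset
#   ds = load_dataset("<namespace>/<dataset-name>", split="train")
#   sample = ds[0]            # dict with the fields declared in this card
#   audio = sample["audio"]   # decoded audio: {"array": ..., "sampling_rate": ...}
#
# Stdlib-only sketch of filtering records shaped like this card's schema.
# The values (and the label strings) below are illustrative, not real data.

records = [
    {"Surah": "1", "Aya": "1", "duration_ms": 4200,
     "golden": True, "final_label": "correct", "judgments_num": 3},
    {"Surah": "1", "Aya": "2", "duration_ms": 5100,
     "golden": False, "final_label": "incorrect", "judgments_num": 5},
]

# Keep only the expert-verified ("golden") recitations.
golden = [r for r in records if r["golden"]]

# Total audio duration in seconds, from the duration_ms field.
total_s = sum(r["duration_ms"] for r in records) / 1000

print(len(golden), total_s)
```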
## Dataset Structure
### Data Instances
### Data Fields
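The feature names and dtypes below mirror the `dataset_info` declarations in this card's YAML header. A small sketch that records the schema as a name-to-dtype mapping and checks a (hypothetical, empty-valued) instance against it:

```python
# Field names and dtypes as declared in this card's dataset_info section.
SCHEMA = {
    "audio": "audio",
    "Surah": "string",
    "Aya": "string",
    "duration_ms": "int64",
    "create_date": "string",
    "golden": "bool",
    "final_label": "string",
    "reciter_id": "string",
    "reciter_country": "string",
    "reciter_gender": "string",
    "reciter_age": "string",
    "reciter_qiraah": "string",
    "judgments_num": "int64",
    "annotation_metadata": "string",
}

def missing_fields(instance: dict) -> set:
    """Return the schema fields absent from a loaded instance."""
    return set(SCHEMA) - set(instance)

# Illustrative instance (values are hypothetical, not real data):
example = {name: None for name in SCHEMA}
print(missing_fields(example))  # set() -> the instance conforms to the schema
```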
### Citation Information
```
@inproceedings{salameh2024quranic,
  author      = {Salameh, R. and Mdfaa, M. A. and Askarbekuly, N. and Mazzara, M.},
  title       = {Quranic Audio Dataset: Crowdsourced and Labeled Recitation from Non-Arabic Speakers},
  year        = 2024,
  eprint      = {2405.02675},
  eprinttype  = {arxiv},
  eprintclass = {cs.SD},
  url         = {https://arxiv.org/abs/2405.02675},
  language    = {english}
}
```