---
dataset_info:
- config_name: bm-to-bm
  features:
  - name: audio
    dtype: audio
  - name: text
    dtype: string
  - name: duration
    dtype: float32
  - name: source_dataset
    dtype: string
  splits:
  - name: train
    num_bytes: 18734296593.8
    num_examples: 44650
  - name: test
    num_bytes: 986015610.2
    num_examples: 2350
  download_size: 19697881567
  dataset_size: 19720312204.0
- config_name: en-to-en
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: text
    dtype: string
  - name: duration
    dtype: float64
  - name: source_dataset
    dtype: string
  splits:
  - name: train
    num_bytes: 11363400013.326242
    num_examples: 28177
  - name: test
    num_bytes: 1262689620.6737576
    num_examples: 3131
  download_size: 12417493989
  dataset_size: 12626089634.0
- config_name: fr-to-fr
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: text
    dtype: string
  - name: duration
    dtype: float64
  - name: source_dataset
    dtype: string
  splits:
  - name: train
    num_bytes: 2592589580.6535
    num_examples: 67050
  - name: test
    num_bytes: 288730324.6115
    num_examples: 7450
  download_size: 2884967897
  dataset_size: 2881319905.2650003
- config_name: multi-combined
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: text
    dtype: string
  - name: duration
    dtype: float32
  - name: source_dataset
    dtype: string
  - name: language
    dtype: string
  - name: task_type
    dtype: string
  splits:
  - name: train
    num_bytes: 40121923253.388
    num_examples: 351198
  - name: test
    num_bytes: 3068301248.975
    num_examples: 28287
  download_size: 42790780095
  dataset_size: 43190224502.363
- config_name: semi-annotated
  features:
  - name: audio
    dtype: audio
  - name: duration
    dtype: float32
  - name: speaker
    dtype: string
  - name: bambara
    dtype: string
  splits:
  - name: train
    num_bytes: 19722323609
    num_examples: 47000
  download_size: 19698719429
  dataset_size: 19722323609
- config_name: semi-annotated-2
  default: true
  features:
  - name: index
    dtype: int64
  - name: audio
    dtype: audio
  - name: duration
    dtype: float32
  - name: bambara
    dtype: string
  - name: speaker
    dtype: string
  - name: initial_transcription
    dtype: string
  - name: second_half_transcription
    dtype: string
  splits:
  - name: train
    num_bytes: 19741333038.0
    num_examples: 47000
  download_size: 19711870447
  dataset_size: 19741333038.0
- config_name: semi-corrected
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: duration
    dtype: float32
  - name: speaker
    dtype: string
  - name: bambara
    dtype: string
  - name: corrected_bambara
    dtype: string
  splits:
  - name: train
    num_bytes: 3134604003.75
    num_examples: 7050
  download_size: 3131837091
  dataset_size: 3134604003.75
configs:
- config_name: bm-to-bm
  data_files:
  - split: train
    path: bm-to-bm/train-*
  - split: test
    path: bm-to-bm/test-*
- config_name: en-to-en
  data_files:
  - split: train
    path: en-to-en/train-*
  - split: test
    path: en-to-en/test-*
- config_name: fr-to-fr
  data_files:
  - split: train
    path: fr-to-fr/train-*
  - split: test
    path: fr-to-fr/test-*
- config_name: multi-combined
  data_files:
  - split: train
    path: multi-combined/train-*
  - split: test
    path: multi-combined/test-*
- config_name: semi-annotated
  data_files:
  - split: train
    path: semi-annotated/train-*
- config_name: semi-annotated-2
  data_files:
  - split: train
    path: semi-annotated-2/train-*
- config_name: semi-corrected
  data_files:
  - split: train
    path: semi-corrected/train-*
task_categories:
- automatic-speech-recognition
- text-to-speech
language:
- bm
---

# Djelia Bambara Audio Dataset

## Dataset Description

The **Djelia Bambara Audio Dataset** is a comprehensive resource aimed at supporting research and development in Bambara language
processing. The dataset consists of audio extracted from YouTube videos, denoised and diarized to ensure high-quality segments. Additionally, it features a semi-annotated subset with transcriptions generated using the **Djelia Whisper v1 model**.

### Features

- **Audio**: High-quality audio clips extracted from YouTube videos.
- **Duration**: Each audio clip includes a precise duration field.
- **Speaker**: Metadata includes speaker information identified through diarization.
- **Semi-Annotated Config**: The dataset includes transcriptions generated using the Djelia Whisper v1 model.

### Configuration: Semi-Annotated

The `semi-annotated` subset contains audio clips with corresponding transcriptions generated by the Djelia Whisper v1 model. This subset is particularly well suited for automatic speech recognition (ASR) and Text-to-Speech (TTS) applications. A minimal loading example is provided in the Usage section at the end of this card.

### Statistics

- **Total Hours**: 171.0382 hours of audio.
- **Source**: Audio segments extracted from publicly available YouTube channels.

## Project

This dataset is part of a larger initiative aimed at empowering Bambara speakers to access global knowledge without language barriers. Our goal is to eliminate the need for Bambara speakers to learn a second language before they can acquire new information or skills. By providing a robust dataset for Text-to-Speech (TTS) applications, we aim to support the creation of tools for the Bambara language, thus democratizing access to knowledge.

## Bambara Language

Bambara, also known as Bamanankan, is a Mande language spoken primarily in Mali by millions of people as both a mother tongue and a second language. It serves as a lingua franca in Mali and is also spoken in neighboring countries, including Burkina Faso and Ivory Coast. Bambara is written in both the Latin and N'Ko scripts, and it has a rich oral tradition that is integral to Malian culture.

## Data Collection and Processing

The dataset was created by extracting audio from YouTube videos. To ensure the quality and usability of the data, the following steps were performed:

1. **Audio Denoising**: We used tools from **Resemble AI** to remove background noise and enhance audio clarity.
2. **Speaker Diarization**: Using **PyAnnote**, we identified distinct speakers in the audio segments, ensuring accurate speaker metadata.
3. **Semi-Annotation**: The Djelia Whisper v1 model was used to transcribe the audio clips, creating the `semi-annotated` configuration.

## Acknowledgements

We extend our gratitude to the following contributors and tools that made this dataset possible:

- **RAS BATH**, **DIANY.ML_FM**, and the **ORTM YouTube Channel** for the audio content.
- **Resemble AI** for providing advanced denoising tools.
- **PyAnnote** for their speaker diarization toolkit.
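
## Usage

The minimal sketch below shows one way to load individual configurations with the Hugging Face `datasets` library. The repository identifier used here is a placeholder, not the confirmed path of this dataset; replace it with the dataset's actual location on the Hugging Face Hub.

```python
from datasets import load_dataset

# Placeholder repository id: replace with this dataset's actual Hub path.
REPO_ID = "djelia/bambara-audio-dataset"

# Load the semi-annotated configuration (audio plus Djelia Whisper v1 transcriptions).
semi_annotated = load_dataset(REPO_ID, "semi-annotated", split="train")

# Load the combined multilingual configuration, which also exposes
# `language` and `task_type` columns.
multi = load_dataset(REPO_ID, "multi-combined")

# Each semi-annotated example carries the audio, its duration, the diarized
# speaker label, and the Bambara transcription.
example = semi_annotated[0]
print(example["speaker"], example["duration"], example["bambara"])
```

Because the audio configurations are tens of gigabytes, passing `streaming=True` to `load_dataset` lets you iterate over examples without downloading an entire configuration up front.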