---
dataset_info:
  features:
    - name: first_line
      dtype: string
    - name: audio
      dtype: audio
  splits:
    - name: train
      num_bytes: 2285150618.152
      num_examples: 21676
  download_size: 2332238384
  dataset_size: 2285150618.152
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: cc-by-nc-4.0
task_categories:
  - automatic-speech-recognition
size_categories:
  - 10K<n<100K
---
## Dataset Summary
This dataset contains paired audio-text data built from:
- Text prompts taken from the BridgeData V2 dataset.
- Synthetic speech generated with Coqui-TTS (see the generation sketch below).
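The card does not state which Coqui-TTS model or voice was used, so the following is only a minimal sketch of how text prompts can be turned into `.wav` files with the Coqui-TTS Python API; the model name and the example prompts are placeholder assumptions.

```python
# Minimal sketch: converting text prompts into speech with Coqui-TTS.
# NOTE: the model name and prompts below are placeholders; the dataset card
# does not specify which Coqui-TTS model/voice was actually used.
from TTS.api import TTS

prompts = [
    "put the spoon in the pot",      # illustrative instruction-style prompt
    "move the cloth to the left",    # illustrative instruction-style prompt
]

# Load a pretrained single-speaker English model (placeholder choice).
tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")

for i, text in enumerate(prompts):
    # Each prompt becomes one .wav file, mirroring the first_line/audio pairing.
    tts.tts_to_file(text=text, file_path=f"audio_{i:05d}.wav")
```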
## Dataset Structure
Each entry in the dataset contains:
- `first_line`: textual instruction or caption from BridgeData V2.
- `audio`: the corresponding TTS-generated `.wav` file (see the loading example below).
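Because the card declares a standard `audio` feature and a single `train` split, entries can be loaded with the Hugging Face `datasets` library. The repository id below is a placeholder, not the actual repo name.

```python
# Minimal sketch of loading and inspecting one example with the datasets library.
# NOTE: "<user>/<dataset-name>" is a placeholder; substitute the real repo id.
from datasets import load_dataset

ds = load_dataset("<user>/<dataset-name>", split="train")

example = ds[0]
print(example["first_line"])        # text instruction from BridgeData V2
audio = example["audio"]            # decoded by the Audio feature
print(audio["sampling_rate"])       # sample rate of the TTS-generated .wav
print(audio["array"].shape)         # raw waveform as a NumPy array
```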
## Licenses and Usage
### BridgeData V2
- License: CC BY-NC 4.0
- Attribution: The original text data is sourced from BridgeData V2, developed by the RAIL lab at UC Berkeley.
- Restrictions: Non-commercial use only. Any derivative dataset must also respect this condition.
### Coqui-TTS
- License: Mozilla Public License 2.0
- Attribution: The TTS system used to generate the synthetic speech is based on the Coqui-TTS open-source toolkit.
## Terms of Use
By using this dataset, you agree to the terms of the licenses listed above.
Please make sure that any downstream use (e.g., fine-tuning models, publication) complies with the original licenses.
## Citation
If you use this dataset, please cite the original BridgeData V2 and Coqui-TTS projects accordingly.