aboots committed on commit 9fcc038 (verified) · 1 parent: 9fbadb3

Update README.md

Files changed (1): README.md (+42 −3)
README.md CHANGED
@@ -1,3 +1,42 @@
- ---
- license: mit
- ---
+ ---
+ license: mit
+ language:
+ - en
+ pretty_name: Speech Brown
+ size_categories:
+ - 10K<n<100K
+ task_categories:
+ - text-to-speech
+
+ ---
+
+ ## Dataset Summary
+
+ **Speech Brown** is a comprehensive, diverse, and synthetic paired speech-text dataset spanning 15 categories, covering a wide range of topics from fiction to religion. The dataset consists of over 55,000 sentence-level samples.
+
+ We created this dataset from the Brown Corpus to train the [CLASP](https://huggingface.co/llm-lab/CLASP) model. The synthetic speech was generated with the [NVIDIA Tacotron 2](https://pytorch.org/hub/nvidia_deeplearningexamples_tacotron2/) text-to-speech model.
+
+ For more information about our proposed model, please refer to this [paper](https://arxiv.org/abs/2412.13071). The dataset generation pipeline, along with code and usage instructions, is available on this [GitHub page](https://github.com/language-modeling-lab/CLASP).
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64ba58d377dd483716aba098/5dy1Cb3-ZmGytf3QbQN9a.png)
+
+ ## Dataset Statistics
+ 1. Total size: approximately 30 GB.
+ 2. Number of samples: 55,173 speech-text pairs.
+ 3. Average words per sample: 17.78.
+ 4. Maximum words in a sample: 48.
+ 5. Average characters per sample: 96.72.
+ 6. Categories: `adventure`, `belles_lettres`, `editorial`, `fiction`, `government`, `hobbies`, `humor`, `learned`, `lore`, `mystery`, `news`, `religion`, `reviews`, `romance`, `science_fiction`.
+
+ ## Dataset Structure
+ For ease of use, the dataset is partitioned into 10 parts. Each part can be used independently if it meets the requirements of your task and model.
+
+ ### Metadata Files:
+ 1. **global_metadata**: A JSON file containing metadata for all 55,173 samples.
+ 2. **localized_metadata**: A JSON file containing metadata for all samples, grouped by the 10 dataset partitions.
+
+ ### Metadata Fields:
+ 1. **id**: The unique identifier of the sample.
+ 2. **audio_file_path**: The file path of the audio in the dataset.
+ 3. **category**: The category of the sample's text.
+ 4. **text**: The corresponding text of the audio file.
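+
+ A metadata record with the fields above can be consumed as plain JSON. The sketch below is illustrative only: the record values and the exact layout of `audio_file_path` are assumptions, not taken from the real metadata files, so adapt the paths to your local copy of the dataset.
+
+ ```python
+ import json
+
+ # A record shaped like the metadata fields described above
+ # (values are hypothetical examples, not real dataset entries).
+ record_json = """
+ {
+   "id": "news_00001",
+   "audio_file_path": "part_1/news/news_00001.wav",
+   "category": "news",
+   "text": "The jury praised the administration of the election."
+ }
+ """
+
+ record = json.loads(record_json)
+
+ # Group audio paths by category, as you might when filtering the
+ # global_metadata file down to the categories you need.
+ by_category = {}
+ by_category.setdefault(record["category"], []).append(record["audio_file_path"])
+
+ print(record["id"], "->", by_category["news"][0])
+ ```
+
+ For the full dataset, load the global metadata file with `json.load`, build the same category index over all 55,173 records, and read each referenced audio file from its partition.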