Tasks: Automatic Speech Recognition
Formats: parquet
Languages: English
Size: 10M - 100M
ArXiv: 2505.21578
License:
Update README.md
README.md CHANGED
@@ -123,6 +123,8 @@ configs:
 
 # LargeScaleASR: 25,000 hours of transcribed and heterogeneous English speech recognition data for research and commercial use.
 
+The full details [are available in the paper](https://arxiv.org/abs/2505.21578).
+
 Made of 6 subsets:
 1. **large** contains 25,000 hours of read / spontaneous and clean / noisy transcribed speech.
 2. **medium** contains 2,500 hours of read / spontaneous and clean / noisy transcribed speech.
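
For reference, a subset described above can be streamed with the `datasets` library. This is a minimal sketch, assuming the repository id `speechbrain/LargeScaleASR` and the configuration name `medium`; neither identifier is confirmed by this diff.

```python
# Minimal sketch: stream one LargeScaleASR subset without downloading
# every parquet shard up front. The repository id and configuration
# name ("speechbrain/LargeScaleASR", "medium") are assumptions.
from datasets import load_dataset

stream = load_dataset(
    "speechbrain/LargeScaleASR",  # assumed repository id
    "medium",                     # assumed configuration: the 2,500 h subset
    split="train",
    streaming=True,
)

for sample in stream:
    print(sample.keys())  # inspect the available columns (audio, text, ...)
    break
```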
@@ -173,8 +175,8 @@ More information relative to each dataset is given as:
 - [**voxpopuli**](https://arxiv.org/abs/2101.00390): we follow the standard SpeechBrain data preparation.
 - [**LibriHeavy**](https://arxiv.org/html/2309.08105v2): samples are randomly selected, but we follow the standard data preparation.
 - [**Librispeech**](https://www.danielpovey.com/files/2015_icassp_librispeech.pdf): Librispeech is only used for the validation and test sets of LargeScaleASR. More precisely, we extract samples from *dev-other* and *test-other*, as they are the most challenging subsets.
-- [**YODAS**](https://arxiv.org/abs/2406.00899): The YODAS dataset is unfortunately unreliable. Indeed, audio are crawled from YouTube, and a lot of them (almost half) do not have the correct language. We used a [SpeechBrain language ID model](https://huggingface.co/speechbrain/lang-id-voxlingua107-ecapa) to make sure that we only integrate samples where people speak in English. Transcriptions have also been heavily normalised (see next section). We decided arbitrarily to use the *en000* and *en001* subsets of Yodas. Transcriptions may be a bit noisy. This is why
-- [**People's Speech**](https://huggingface.co/datasets/MLCommons/peoples_speech): Only the *clean* subset of this dataset is used in LargeScaleASR as the transcriptions there already have errors.
+- [**YODAS**](https://arxiv.org/abs/2406.00899): The YODAS dataset is unfortunately unreliable: its audio is crawled from YouTube, and almost half of the clips are not in the language they are labelled with. We used a [SpeechBrain language ID model](https://huggingface.co/speechbrain/lang-id-voxlingua107-ecapa) to keep only samples in which the speaker actually speaks English. Transcriptions have also been heavily normalised (see next section). We arbitrarily decided to use the *en000* and *en001* subsets of YODAS. Transcriptions may still be a bit noisy; this is why we manually transcribed the data for the dev and test sets.
+- [**People's Speech**](https://huggingface.co/datasets/MLCommons/peoples_speech): Only the *clean* subset of this dataset is used in LargeScaleASR, as even there the transcriptions already contain errors.
 - [**CommonVoice 18.0**](https://commonvoice.mozilla.org/en): We removed the few speakers that had too many samples (above 9000) to avoid any speaker bias. Aside from this, we used only samples coming from the *validated* csv to ensure an optimal level of transcription quality. Text was also heavily normalised (see next section).
 
 ### Text and audio normalisation
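
The YODAS language filter mentioned above can be approximated with the referenced VoxLingua107 model. This is a minimal sketch of the approach, not the exact pipeline used for LargeScaleASR; the audio paths are placeholders.

```python
# Minimal sketch: keep only clips that the SpeechBrain VoxLingua107
# language-ID model classifies as English. Illustrative only.
from speechbrain.inference.classifiers import EncoderClassifier
# (on SpeechBrain < 1.0: from speechbrain.pretrained import EncoderClassifier)

language_id = EncoderClassifier.from_hparams(
    source="speechbrain/lang-id-voxlingua107-ecapa",
    savedir="pretrained_models/lang-id-voxlingua107-ecapa",
)

def is_english(path: str) -> bool:
    signal = language_id.load_audio(path)  # loads and resamples the clip
    _, _, _, text_lab = language_id.classify_batch(signal)
    # The predicted label may be "en" or "en: English" depending on version.
    return text_lab[0].split(":")[0].strip() == "en"

english_clips = [p for p in ["clip_000.wav", "clip_001.wav"] if is_english(p)]
```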
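
Similarly, the CommonVoice speaker cap amounts to counting rows per `client_id` in the *validated* table and dropping the speakers above the threshold. A hedged sketch with pandas, assuming the standard CommonVoice column layout (the release ships tab-separated files; names here are placeholders):

```python
# Minimal sketch: remove CommonVoice speakers with more than 9000
# validated samples to limit speaker bias. Column names follow the
# standard CommonVoice release layout.
import pandas as pd

df = pd.read_csv("validated.tsv", sep="\t")

counts = df["client_id"].value_counts()
kept = df[df["client_id"].map(counts) <= 9000]

kept.to_csv("validated_capped.tsv", sep="\t", index=False)
```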