Tasks: Automatic Speech Recognition · Formats: parquet · Languages: English · Size: 100K - 1M
Librispeech is a corpus of read English speech, designed for training and evaluating automatic speech recognition systems.
The dataset only contains read speech, so models trained on it may not perform as well on spontaneous speech.
The audiobooks are read by volunteers for the LibriVox project.
The dataset contains only read speech from published books, not natural conversation.
The Montreal Forced Aligner (MFA) was used to generate word and phoneme level alignments for the Librispeech dataset.

- **Curated by:** Vassil Panayotov, Guoguo Chen, Daniel Povey, Sanjeev Khudanpur (for Librispeech)
- **Funded by:** DARPA LORELEI
- **Shared by:** Loren Lugosch (for Alignments)
- **Language(s) (NLP):** English
- **License:** Creative Commons Attribution 4.0 International License
## Dataset Structure

The dataset contains 1000 hours of segmented read English speech from audiobooks. There are three train subsets: 100 hours (train-clean-100), 360 hours (train-clean-360), and 500 hours (train-other-500).

The alignments connect the audio to the reference text transcripts at the word and phoneme level.
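Because the alignments give start and end times in seconds against 16 kHz audio, a word or phoneme segment can be cut directly out of the waveform. A minimal sketch; the audio array and timestamps below are illustrative placeholders, not data from this dataset:

```python
# Cutting an aligned segment out of a 16 kHz waveform.
# `audio` is stand-in silence; in practice it would be the decoded
# samples of one utterance.
SAMPLE_RATE = 16_000

def slice_segment(audio, start, end, sr=SAMPLE_RATE):
    """Return the samples covering [start, end) seconds."""
    return audio[int(start * sr):int(end * sr)]

audio = [0.0] * (2 * SAMPLE_RATE)  # 2 seconds of placeholder audio
word = {"word": "chapter", "start": 0.25, "end": 0.75}
segment = slice_segment(audio, word["start"], word["end"])
print(len(segment))  # 8000 samples = 0.5 s at 16 kHz
```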
### Data Fields

- sex: M for male, F for female
- subset: dev_clean, dev_other, test_clean, test_other, train_clean_100, train_clean_360, train_other_500
- id: unique id of the data sample, formatted as (speaker id)-(chapter id)-(utterance id)
- audio: the audio, sampled at 16 kHz
- transcript: the spoken text, normalized and lowercased
- words: a list of words, each with fields:
  - word: the text of the word
  - start: the start time in seconds
  - end: the end time in seconds
- phonemes: a list of phonemes, each with fields:
  - phoneme: the phoneme spoken
  - start: the start time in seconds
  - end: the end time in seconds
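To make the schema concrete, here is a sketch of one record shaped like the fields above, with a small helper that derives per-word durations. All values are illustrative, not taken from the dataset:

```python
# An illustrative record following the documented schema.
sample = {
    "sex": "M",
    "subset": "train_clean_100",
    "id": "103-1240-0000",  # (speaker id)-(chapter id)-(utterance id)
    "transcript": "chapter one",
    "words": [
        {"word": "chapter", "start": 0.25, "end": 0.71},
        {"word": "one", "start": 0.74, "end": 1.02},
    ],
    "phonemes": [
        {"phoneme": "CH", "start": 0.25, "end": 0.33},
    ],
}

def word_durations(sample):
    """Map each aligned word to its duration in seconds."""
    return {w["word"]: round(w["end"] - w["start"], 3) for w in sample["words"]}

speaker, chapter, utterance = sample["id"].split("-")
print(word_durations(sample))  # {'chapter': 0.46, 'one': 0.28}
```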
## Dataset Creation

### Curation Rationale

#### Annotation process

The Montreal Forced Aligner was used to create word and phoneme level alignments between the audio and reference texts. The aligner is based on Kaldi.

In the process of formatting this into a Hugging Face dataset, words with empty text were removed, as were phonemes with empty text, silence tokens, and spacing tokens.

#### Who are the annotators?

The alignments were generated automatically by the Montreal Forced Aligner and shared by Loren Lugosch. The TextGrid files were parsed and integrated into this dataset by Kim Gilkey.

#### Personal and Sensitive Information
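The cleanup pass described above can be sketched as follows. The exact silence/spacing labels are an assumption here (`sil`, `sp`, `spn` are common MFA conventions); this card does not specify them:

```python
# Hedged sketch of the cleanup: drop words with empty text, and drop
# phonemes that are empty, silence, or spacing tokens.
SILENCE_TOKENS = {"sil", "sp", "spn"}  # assumed MFA-style labels

def clean_words(words):
    """Keep only word entries with non-empty text."""
    return [w for w in words if w["word"].strip()]

def clean_phonemes(phonemes):
    """Drop phoneme entries that are empty or silence/spacing tokens."""
    return [
        p for p in phonemes
        if p["phoneme"].strip() and p["phoneme"].strip().lower() not in SILENCE_TOKENS
    ]
```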
### Recommendations

Users should understand that the alignments may contain errors and account for this in applications. For example, be wary of `<UNK>` tokens.
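One way to act on this is to screen out, or at least flag, utterances whose word alignments contain `<UNK>`. A sketch with illustrative records shaped like the Data Fields schema:

```python
# Flagging samples whose alignments contain <UNK> (words the aligner
# could not match). The records below are illustrative.
samples = [
    {"id": "103-1240-0000", "words": [{"word": "chapter"}, {"word": "one"}]},
    {"id": "103-1240-0001", "words": [{"word": "<unk>"}, {"word": "reader"}]},
]

def has_unk(sample):
    """True if any aligned word is an <UNK>/<unk> token."""
    return any(w["word"].upper() == "<UNK>" for w in sample["words"])

clean = [s for s in samples if not has_unk(s)]
print([s["id"] for s in clean])  # ['103-1240-0000']
```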
## Citation

**Librispeech:**

```
@inproceedings{panayotov2015librispeech,
  title={Librispeech: an ASR corpus based on public domain audio books},
  ...
}
```

**Librispeech Alignments:**

```
Loren Lugosch, Mirco Ravanelli, Patrick Ignoto, Vikrant Singh Tomar, and Yoshua Bengio, "Speech Model Pre-training for End-to-End Spoken Language Understanding", Interspeech 2019.
```

**Montreal Forced Aligner:**

```
Michael McAuliffe, Michaela Socolof, Sarah Mihuc, Michael Wagner, and Morgan Sonderegger. "Montreal Forced Aligner: trainable text-speech alignment using Kaldi", Interspeech 2017.
```