---
dataset_info:
  features:
  - name: sex
    dtype: string
  - name: subset
    dtype: string
  - name: id
    dtype: string
  - name: audio
    dtype: audio
  - name: transcript
    dtype: string
  - name: words
    list:
    - name: end
      dtype: float64
    - name: start
      dtype: float64
    - name: word
      dtype: string
  - name: phonemes
    list:
    - name: end
      dtype: float64
    - name: phoneme
      dtype: string
    - name: start
      dtype: float64
  splits:
  - name: dev_clean
    num_bytes: 365310608.879
    num_examples: 2703
  - name: dev_other
    num_bytes: 341143993.784
    num_examples: 2864
  - name: test_clean
    num_bytes: 377535532.98
    num_examples: 2620
  - name: test_other
    num_bytes: 351207892.569557
    num_examples: 2938
  - name: train_clean_100
    num_bytes: 6694747231.610863
    num_examples: 28538
  - name: train_clean_360
    num_bytes: 24163659711.787865
    num_examples: 104008
  - name: train_other_500
    num_bytes: 32945085271.89443
    num_examples: 148645
  download_size: 62101682957
  dataset_size: 65238690243.50571
configs:
- config_name: default
  data_files:
  - split: dev_clean
    path: data/dev_clean-*
  - split: dev_other
    path: data/dev_other-*
  - split: test_clean
    path: data/test_clean-*
  - split: test_other
    path: data/test_other-*
  - split: train_clean_100
    path: data/train_clean_100-*
  - split: train_clean_360
    path: data/train_clean_360-*
  - split: train_other_500
    path: data/train_other_500-*
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
language:
- en
pretty_name: Librispeech Alignments
size_categories:
- 100K<n<1M
---



# Dataset Card for Librispeech Alignments

Librispeech with word- and phoneme-level alignments generated by the [Montreal Forced Aligner](https://montreal-forced-aligner.readthedocs.io/en/latest/). The original alignments in TextGrid format can be found [here](https://zenodo.org/records/2619474).


## Dataset Details

### Dataset Description

Librispeech is a corpus of read English speech, designed for training and evaluating automatic speech recognition (ASR) systems. The dataset contains 1,000 hours of read English speech, sampled at 16 kHz, derived from audiobooks.

The Montreal Forced Aligner (MFA) was used to generate word and phoneme level alignments for the Librispeech dataset.


- **Curated by:** Vassil Panayotov, Guoguo Chen, Daniel Povey, Sanjeev Khudanpur (for Librispeech)
- **Funded by:** DARPA LORELEI
- **Shared by:** Loren Lugosch (for Alignments)
- **Language(s) (NLP):** English
- **License:** Creative Commons Attribution 4.0 International License

### Dataset Sources

- **Repository:** https://www.openslr.org/12
- **Paper:** http://www.danielpovey.com/files/2015_icassp_librispeech.pdf
- **Alignments:** https://zenodo.org/record/2619474

## Uses

### Direct Use

The Librispeech dataset can be used to train and evaluate ASR systems. The alignments provide word- and phoneme-level timestamps for each utterance, which are useful for tasks that require time-aligned transcripts.

### Out-of-Scope Use

The dataset contains only read speech, so models trained on it may not perform as well on spontaneous, conversational speech.

## Dataset Structure

The dataset contains 1,000 hours of segmented read English speech from audiobooks. There are three training subsets: 100 hours (train-clean-100), 360 hours (train-clean-360), and 500 hours (train-other-500), plus "clean" and "other" dev and test splits.

The alignments link the audio to the reference transcripts at the word and phoneme level.

### Data Fields

- sex: M for male, F for female

- subset: dev_clean, dev_other, test_clean, test_other, train_clean_100, train_clean_360, train_other_500

- id: unique ID of the sample, formatted as (speaker id)-(chapter id)-(utterance id)

- audio: the audio waveform, sampled at 16 kHz

- transcript: the transcript of the utterance, normalized and lowercased

- words: a list of words with fields:
  - word: the text of the word
  - start: the start time in seconds
  - end: the end time in seconds
 
- phonemes: a list of phonemes with fields:
  - phoneme: the phoneme spoken
  - start: the start time in seconds
  - end: the end time in seconds
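
A minimal sketch of loading one split and reading these fields with the `datasets` library. The repository id below is a placeholder, not the confirmed Hub path of this dataset:

```python
from datasets import load_dataset

# Placeholder repository id; substitute this dataset's actual Hub path.
ds = load_dataset("user/librispeech-alignments", split="dev_clean")

sample = ds[0]
print(sample["id"], sample["sex"], sample["subset"])
print(sample["transcript"])

# The audio column decodes to a dict holding the waveform and sampling rate.
audio = sample["audio"]
print(audio["sampling_rate"], len(audio["array"]))

# Word-level alignment: each entry has "word", "start", and "end" in seconds.
for w in sample["words"][:5]:
    print(f'{w["word"]:>12}  {w["start"]:6.2f}-{w["end"]:6.2f} s')

# Phoneme-level alignment follows the same start/end convention.
for p in sample["phonemes"][:5]:
    print(f'{p["phoneme"]:>6}  {p["start"]:6.2f}-{p["end"]:6.2f} s')
```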
    
## Dataset Creation

### Curation Rationale

Librispeech was created to further speech recognition research and to benchmark progress in the field.

### Source Data  

#### Data Collection and Processing

The audio and reference texts were sourced from read English audiobooks in the LibriVox project. The data was segmented, filtered and prepared for speech recognition.

#### Who are the source data producers? 

The audiobooks are read by volunteers for the LibriVox project. Information about the readers is available in the LibriVox catalog.

### Annotations

#### Annotation process  

The Montreal Forced Aligner was used to create word and phoneme level alignments between the audio and reference texts. The aligner is based on Kaldi.
When formatting the alignments into a Hugging Face dataset, words with empty text were removed, as were phonemes with empty text, silence tokens, or spacing tokens (a sketch of this rule is shown below).
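
As an illustration only (not the exact script that was used), the removal rule amounts to dropping intervals with empty or non-speech labels; the token set below is an assumption:

```python
# Illustrative sketch of the cleaning rule; the exact silence/spacing token set is assumed.
NON_SPEECH_LABELS = {"", "sil", "sp", "spn"}  # empty text, silence, and spacing markers

def clean_intervals(intervals, key):
    """Keep only intervals whose label (under `key`) is real speech content."""
    return [
        iv for iv in intervals
        if iv[key].strip().lower() not in NON_SPEECH_LABELS
    ]

# words and phonemes are lists of dicts as described under "Data Fields" above:
# cleaned_words = clean_intervals(words, key="word")
# cleaned_phonemes = clean_intervals(phonemes, key="phoneme")
```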

#### Who are the annotators?

The alignments were generated automatically by the Montreal Forced Aligner and shared by Loren Lugosch. The TextGrid files were parsed and integrated into this dataset by Kim Gilkey.

#### Personal and Sensitive Information

The dataset contains read speech from public domain audiobooks and the corresponding transcripts. No personal or sensitive information is expected.

## Bias, Risks, and Limitations

The dataset contains only read speech from published books, not natural conversational speech, so models trained on it may perform worse in conversational or spontaneous settings.

### Recommendations

Users should understand that the automatically generated alignments may contain errors and account for this in applications. For example, be wary of `<UNK>` tokens, which typically mark words the aligner could not find in its pronunciation dictionary.
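
For instance, one might screen out utterances whose word alignments contain the unknown-word token before relying on them. This is a hedged sketch; the exact token string and casing in the data may differ:

```python
def has_unknown_words(sample, unk_token="<unk>"):
    """True if any aligned word matches the aligner's unknown-word token."""
    return any(w["word"].lower() == unk_token for w in sample["words"])

# Example: drop such utterances from a loaded split.
# ds_filtered = ds.filter(lambda s: not has_unknown_words(s))
```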

## Citation  

**Librispeech:**
```
@inproceedings{panayotov2015librispeech,  
  title={Librispeech: an ASR corpus based on public domain audio books},
  author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},  
  booktitle={ICASSP},   
  year={2015},   
  organization={IEEE} 
}
```

**Librispeech Alignments:**
```
@inproceedings{lugosch2019speech,
  title={Speech Model Pre-training for End-to-End Spoken Language Understanding},
  author={Lugosch, Loren and Ravanelli, Mirco and Ignoto, Patrick and Tomar, Vikrant Singh and Bengio, Yoshua},
  booktitle={Interspeech},
  year={2019}
}
```

**Montreal Forced Aligner:**
```
@inproceedings{mcauliffe2017montreal,
  title={Montreal Forced Aligner: Trainable Text-Speech Alignment Using Kaldi},
  author={McAuliffe, Michael and Socolof, Michaela and Mihuc, Sarah and Wagner, Michael and Sonderegger, Morgan},
  booktitle={Interspeech},
  year={2017}
}
```