Commit a7d73e0 by raianand (1 parent: 170c0cf): Update README.md
pretty_name: Technical Indian English
size_categories:
- 1K<n<10K
---

# Dataset Card for VoxPopuli

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://github.com/facebookresearch/voxpopuli
- **Repository:** https://github.com/facebookresearch/voxpopuli
- **Paper:** https://arxiv.org/abs/2101.00390

### Dataset Summary

VoxPopuli is a large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation.
The raw data is collected from 2009-2020 [European Parliament event recordings](https://multimedia.europarl.europa.eu/en/home). We acknowledge the European Parliament for creating and sharing these materials.
This implementation contains transcribed speech data for 18 languages.
It also contains 29 hours of transcribed speech data of non-native English intended for research in ASR for accented speech (15 L2 accents).

### Example usage

VoxPopuli contains labelled data for 18 languages. To load a specific language, pass its code as the config name:

```python
from datasets import load_dataset

voxpopuli_croatian = load_dataset("facebook/voxpopuli", "hr")
```

To load all the languages in a single dataset, use the "multilang" config name:

```python
voxpopuli_all = load_dataset("facebook/voxpopuli", "multilang")
```

To load a specific set of languages, use the "multilang" config name and pass a list of the required languages to the `languages` parameter:

```python
voxpopuli_slavic = load_dataset("facebook/voxpopuli", "multilang", languages=["hr", "sk", "sl", "cs", "pl"])
```

To load the accented English data, use the "en_accented" config name:

```python
voxpopuli_accented = load_dataset("facebook/voxpopuli", "en_accented")
```

**Note that the L2 English subset contains only a `test` split.**

### Supported Tasks and Leaderboards

* automatic-speech-recognition: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe it to written text. The most common evaluation metric is the word error rate (WER).

The accented English subset can also be used for research in ASR for accented speech (15 L2 accents).

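As a concrete illustration of the metric, WER is the word-level edit distance between a hypothesis transcript and its reference, divided by the number of reference words. A minimal pure-Python sketch (production evaluations typically use a library such as `jiwer` or `evaluate`):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(substitution, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One dropped word over six reference words
print(word_error_rate("the cat sat on the mat", "the cat sat on mat"))
```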
### Languages

VoxPopuli contains labelled (transcribed) data for 18 languages:

| Language | Code | Transcribed Hours | Transcribed Speakers | Transcribed Tokens |
|:---:|:---:|:---:|:---:|:---:|
| English | En | 543 | 1313 | 4.8M |
| German | De | 282 | 531 | 2.3M |
| French | Fr | 211 | 534 | 2.1M |
| Spanish | Es | 166 | 305 | 1.6M |
| Polish | Pl | 111 | 282 | 802K |
| Italian | It | 91 | 306 | 757K |
| Romanian | Ro | 89 | 164 | 739K |
| Hungarian | Hu | 63 | 143 | 431K |
| Czech | Cs | 62 | 138 | 461K |
| Dutch | Nl | 53 | 221 | 488K |
| Finnish | Fi | 27 | 84 | 160K |
| Croatian | Hr | 43 | 83 | 337K |
| Slovak | Sk | 35 | 96 | 270K |
| Slovene | Sl | 10 | 45 | 76K |
| Estonian | Et | 3 | 29 | 18K |
| Lithuanian | Lt | 2 | 21 | 10K |
| Total | | 1791 | 4295 | 15M |

The transcribed accented speech data covers 15 L2 accents:

| Accent | Code | Transcribed Hours | Transcribed Speakers |
|:---:|:---:|:---:|:---:|
| Dutch | en_nl | 3.52 | 45 |
| German | en_de | 3.52 | 84 |
| Czech | en_cs | 3.30 | 26 |
| Polish | en_pl | 3.23 | 33 |
| French | en_fr | 2.56 | 27 |
| Hungarian | en_hu | 2.33 | 23 |
| Finnish | en_fi | 2.18 | 20 |
| Romanian | en_ro | 1.85 | 27 |
| Slovak | en_sk | 1.46 | 17 |
| Spanish | en_es | 1.42 | 18 |
| Italian | en_it | 1.11 | 15 |
| Estonian | en_et | 1.08 | 6 |
| Lithuanian | en_lt | 0.65 | 7 |
| Croatian | en_hr | 0.42 | 9 |
| Slovene | en_sl | 0.25 | 7 |

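As a quick consistency check, the per-accent hours in this table sum to roughly the 29 hours of accented English quoted in the summary:

```python
# Transcribed hours per L2 accent, copied from the table above
accent_hours = {
    "en_nl": 3.52, "en_de": 3.52, "en_cs": 3.30, "en_pl": 3.23,
    "en_fr": 2.56, "en_hu": 2.33, "en_fi": 2.18, "en_ro": 1.85,
    "en_sk": 1.46, "en_es": 1.42, "en_it": 1.11, "en_et": 1.08,
    "en_lt": 0.65, "en_hr": 0.42, "en_sl": 0.25,
}

total_hours = sum(accent_hours.values())
print(round(total_hours, 2))  # 28.88, i.e. roughly 29 hours
```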
## Dataset Structure

### Data Instances

```python
{
 'audio_id': '20180206-0900-PLENARY-15-hr_20180206-16:10:06_5',
 'language': 11,  # "hr"
 'audio': {
    'path': '/home/polina/.cache/huggingface/datasets/downloads/extracted/44aedc80bb053f67f957a5f68e23509e9b181cc9e30c8030f110daaedf9c510e/train_part_0/20180206-0900-PLENARY-15-hr_20180206-16:10:06_5.wav',
    'array': array([-0.01434326, -0.01055908, 0.00106812, ..., 0.00646973], dtype=float32),
    'sampling_rate': 16000
    },
 'raw_text': '',
 'normalized_text': 'pošast genitalnog sakaćenja žena u europi tek je jedna od manifestacija takve štetne politike.',
 'gender': 'female',
 'speaker_id': '119431',
 'is_gold_transcript': True,
 'accent': 'None'
}
```

### Data Fields

* `audio_id` (string) - id of the audio segment
* `language` (datasets.ClassLabel) - numerical id of the language of the audio segment
* `audio` (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of the audio inside its archive (as files are not downloaded and extracted locally).
* `raw_text` (string) - original (orthographic) audio segment text
* `normalized_text` (string) - normalized audio segment transcription
* `gender` (string) - gender of the speaker
* `speaker_id` (string) - id of the speaker
* `is_gold_transcript` (bool) - [More Information Needed]
* `accent` (string) - type of accent, for example "en_lt", if applicable, else "None"

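The `language` value is an integer index into the config's class-label names, so converting between the id and the language code is a plain lookup. A sketch with a hypothetical ordering (the authoritative mapping comes from `dataset.features["language"].names` after loading; note that in the instance above id 11 corresponds to "hr"):

```python
# Hypothetical label ordering for illustration only; read the real one from
# dataset.features["language"].names after loading the dataset.
LANG_NAMES = ["en", "de", "fr", "es", "pl", "it", "ro", "hu",
              "cs", "nl", "fi", "hr", "sk", "sl", "et", "lt"]

def language_code(class_id: int) -> str:
    """Map a ClassLabel integer id to its language code."""
    return LANG_NAMES[class_id]

def language_id(code: str) -> int:
    """Map a language code back to its integer id."""
    return LANG_NAMES.index(code)

print(language_code(11))  # "hr", matching the instance shown above
```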
### Data Splits

All configs (languages) except for accented English contain data in three splits: train, validation and test. The accented English `en_accented` config contains only a test split.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

The raw data is collected from 2009-2020 [European Parliament event recordings](https://multimedia.europarl.europa.eu/en/home).

#### Initial Data Collection and Normalization

The VoxPopuli transcribed set comes from aligning the full-event source speech audio with the transcripts for plenary sessions. Official timestamps
are available for locating speeches by speaker in the full session, but they are frequently inaccurate, resulting in truncation of the speech or mixture
of fragments from the preceding or the succeeding speeches. To calibrate the original timestamps,
we perform speaker diarization (SD) on the full-session audio using pyannote.audio (Bredin et al., 2020) and adopt the nearest SD timestamps (by L1 distance to the original ones) instead for segmentation.
Full-session audios are segmented into speech paragraphs by speaker, each of which has a transcript available.

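The timestamp calibration described above can be sketched as snapping each official timestamp to the diarization boundary with minimum L1 (absolute) distance. A simplified illustration, not the authors' actual code (function and variable names are hypothetical):

```python
def calibrate_timestamps(official, sd_boundaries):
    """Replace each official timestamp with the nearest speaker-diarization
    boundary, where "nearest" means minimum absolute (L1) distance."""
    return [min(sd_boundaries, key=lambda b: abs(b - t)) for t in official]

# An official start time of 10.0 s snaps to the SD boundary at 9.2 s,
# and 61.5 s snaps to 62.0 s.
print(calibrate_timestamps([10.0, 61.5], [0.0, 9.2, 62.0, 120.0]))  # [9.2, 62.0]
```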
The speech paragraphs have an average duration of 197 seconds, which is too long for typical speech model training. We hence further segment these paragraphs into utterances with a
maximum duration of 20 seconds. We leverage speech recognition (ASR) systems to force-align the speech paragraphs to the given transcripts.
The ASR systems are TDS models (Hannun et al., 2019) trained with the ASG criterion (Collobert et al., 2016) on audio tracks from in-house de-identified video data.

The resulting utterance segments may have incorrect transcriptions due to incomplete raw transcripts or inaccurate ASR force-alignment.
We use the predictions from the same ASR systems as references and filter the candidate segments by a maximum threshold of 20% character error rate (CER).

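The CER-based filtering can be sketched as follows (a simplified pure-Python character error rate; in the actual pipeline the references are the ASR predictions, and the names here are hypothetical):

```python
def char_error_rate(reference: str, hypothesis: str) -> float:
    """Character-level Levenshtein distance divided by the reference length."""
    prev = list(range(len(hypothesis) + 1))
    for i, r in enumerate(reference, start=1):
        cur = [i]
        for j, h in enumerate(hypothesis, start=1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (r != h)))   # substitution (or match)
        prev = cur
    return prev[-1] / len(reference)

def keep_segment(reference: str, hypothesis: str, max_cer: float = 0.20) -> bool:
    """Keep a candidate segment only if its CER is within the 20% threshold."""
    return char_error_rate(reference, hypothesis) <= max_cer

print(keep_segment("hello world", "hello wordl"))  # True: 2 edits / 11 chars
```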
#### Who are the source language producers?

Speakers are participants in European Parliament events; many of them are EU officials.

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

The gender distribution of speakers is imbalanced: the percentage of female speakers is below 50% for most languages, with a minimum of 15% for the Lithuanian data.

VoxPopuli includes all available speeches from the 2009-2020 EP events without any selection of topics or speakers.
The speech contents represent the standpoints of the speakers in the EP events, many of whom are EU officials.

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

The dataset is distributed under the CC0 license; see also the [European Parliament's legal notice](https://www.europarl.europa.eu/legal-notice/en/) for the raw data.

### Citation Information

Please cite this paper:

```bibtex
@inproceedings{wang-etal-2021-voxpopuli,
    title = "{V}ox{P}opuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation",
    author = "Wang, Changhan  and
      Riviere, Morgane  and
      Lee, Ann  and
      Wu, Anne  and
      Talnikar, Chaitanya  and
      Haziza, Daniel  and
      Williamson, Mary  and
      Pino, Juan  and
      Dupoux, Emmanuel",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.acl-long.80",
    pages = "993--1003",
}
```

### Contributions

Thanks to [@polinaeterna](https://github.com/polinaeterna) for adding this dataset.