---
pretty_name: Common Voice Corpus 10.0
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language_bcp47:
- ab
- ar
- as
- ast
- az
- ba
- bas
- be
- bg
- bn
- br
- ca
- ckb
- cnh
- cs
- cv
- cy
- da
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy-NL
- ga-IE
- gl
- gn
- ha
- hi
- hsb
- hu
- hy-AM
- ia
- id
- ig
- it
- ja
- ka
- kab
- kk
- kmr
- ky
- lg
- lt
- lv
- mdf
- mhr
- mk
- ml
- mn
- mr
- mt
- myv
- nan-tw
- ne-NP
- nl
- nn-NO
- or
- pa-IN
- pl
- pt
- rm-sursilv
- rm-vallader
- ro
- ru
- rw
- sah
- sat
- sc
- sk
- sl
- sr
- sv-SE
- sw
- ta
- th
- tig
- tok
- tr
- tt
- ug
- uk
- ur
- uz
- vi
- vot
- yue
- zh-CN
- zh-HK
- zh-TW
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
  ab:
  - 10K<n<100K
  ar:
  - 100K<n<1M
  as:
  - 1K<n<10K
  ast:
  - n<1K
  az:
  - n<1K
  ba:
  - 100K<n<1M
  bas:
  - 1K<n<10K
  be:
  - 100K<n<1M
  bg:
  - 1K<n<10K
  bn:
  - 100K<n<1M
  br:
  - 10K<n<100K
  ca:
  - 1M<n<10M
  ckb:
  - 100K<n<1M
  cnh:
  - 1K<n<10K
  cs:
  - 10K<n<100K
  cv:
  - 10K<n<100K
  cy:
  - 100K<n<1M
  da:
  - 1K<n<10K
  de:
  - 100K<n<1M
  dv:
  - 10K<n<100K
  el:
  - 10K<n<100K
  en:
  - 1M<n<10M
  eo:
  - 1M<n<10M
  es:
  - 100K<n<1M
  et:
  - 10K<n<100K
  eu:
  - 100K<n<1M
  fa:
  - 100K<n<1M
  fi:
  - 10K<n<100K
  fr:
  - 100K<n<1M
  fy-NL:
  - 10K<n<100K
  ga-IE:
  - 1K<n<10K
  gl:
  - 10K<n<100K
  gn:
  - 1K<n<10K
  ha:
  - 1K<n<10K
  hi:
  - 10K<n<100K
  hsb:
  - 1K<n<10K
  hu:
  - 10K<n<100K
  hy-AM:
  - 1K<n<10K
  ia:
  - 10K<n<100K
  id:
  - 10K<n<100K
  ig:
  - 1K<n<10K
  it:
  - 100K<n<1M
  ja:
  - 10K<n<100K
  ka:
  - 1K<n<10K
  kab:
  - 100K<n<1M
  kk:
  - 1K<n<10K
  kmr:
  - 10K<n<100K
  ky:
  - 10K<n<100K
  lg:
  - 100K<n<1M
  lt:
  - 10K<n<100K
  lv:
  - 1K<n<10K
  mdf:
  - n<1K
  mhr:
  - 10K<n<100K
  mk:
  - n<1K
  ml:
  - 1K<n<10K
  mn:
  - 10K<n<100K
  mr:
  - 10K<n<100K
  mt:
  - 10K<n<100K
  myv:
  - 1K<n<10K
  nan-tw:
  - 10K<n<100K
  ne-NP:
  - n<1K
  nl:
  - 10K<n<100K
  nn-NO:
  - n<1K
  or:
  - 1K<n<10K
  pa-IN:
  - 1K<n<10K
  pl:
  - 100K<n<1M
  pt:
  - 100K<n<1M
  rm-sursilv:
  - 1K<n<10K
  rm-vallader:
  - 1K<n<10K
  ro:
  - 10K<n<100K
  ru:
  - 100K<n<1M
  rw:
  - 1M<n<10M
  sah:
  - 1K<n<10K
  sat:
  - n<1K
  sc:
  - n<1K
  sk:
  - 10K<n<100K
  sl:
  - 10K<n<100K
  sr:
  - 1K<n<10K
  sv-SE:
  - 10K<n<100K
  sw:
  - 100K<n<1M
  ta:
  - 100K<n<1M
  th:
  - 100K<n<1M
  tig:
  - n<1K
  tok:
  - 1K<n<10K
  tr:
  - 10K<n<100K
  tt:
  - 10K<n<100K
  ug:
  - 10K<n<100K
  uk:
  - 10K<n<100K
  ur:
  - 100K<n<1M
  uz:
  - 100K<n<1M
  vi:
  - 10K<n<100K
  vot:
  - n<1K
  yue:
  - 10K<n<100K
  zh-CN:
  - 100K<n<1M
  zh-HK:
  - 100K<n<1M
  zh-TW:
  - 100K<n<1M
source_datasets:
- extended|common_voice
task_categories:
- automatic-speech-recognition
paperswithcode_id: common-voice
extra_gated_prompt: "By clicking on “Access repository” below, you also agree to not attempt to determine the identity of speakers in the Common Voice dataset."
---

# Dataset Card for Common Voice Corpus 10.0

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Anton Lozhkov](mailto:[email protected])

### Dataset Summary

The Common Voice dataset consists of unique MP3 recordings, each paired with a corresponding text file.
Many of the 20,817 recorded hours in the dataset also include demographic metadata such as age, sex, and accent
that can help improve the accuracy of speech recognition engines.

The dataset currently consists of 15,234 validated hours in 96 languages, and more voices and languages are always being added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.

### Supported Tasks and Leaderboards

Results for models trained on the Common Voice datasets are available via the
[🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench).

### Languages

```
Abkhaz, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Bengali, Breton, Bulgarian, Cantonese, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hindi, Hungarian, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Kurmanji Kurdish, Kyrgyz, Latvian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Meadow Mari, Moksha, Mongolian, Nepali, Norwegian Nynorsk, Odia, Persian, Polish, Portuguese, Punjabi, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Sardinian, Serbian, Slovak, Slovenian, Upper Sorbian, Spanish, Swahili, Swedish, Taiwanese (Minnan), Tamil, Tatar, Thai, Tigre, Toki Pona, Turkish, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh
```

## Dataset Structure

### Data Instances

A typical data point comprises the `path` to the audio file and its transcribed `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale`, and `segment`.

```python
{
  'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
  'path': 'et/clips/common_voice_et_18318995.mp3',
  'audio': {
    'path': 'et/clips/common_voice_et_18318995.mp3',
    'array': array([-0.00048828, -0.00018311, -0.00137329, ...,  0.00079346, 0.00091553, 0.00085449], dtype=float32),
    'sampling_rate': 48000
  },
  'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
  'up_votes': 2,
  'down_votes': 0,
  'age': 'twenties',
  'gender': 'male',
  'accent': '',
  'locale': 'et',
  'segment': ''
}
```

### Data Fields

`client_id` (`string`): An ID for the client (voice) that made the recording

`path` (`string`): The path to the audio file

`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus, it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.

`sentence` (`string`): The sentence the user was prompted to speak

`up_votes` (`int64`): How many upvotes the audio file has received from reviewers

`down_votes` (`int64`): How many downvotes the audio file has received from reviewers

`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)

`gender` (`string`): The gender of the speaker

`accent` (`string`): Accent of the speaker

`locale` (`string`): The locale of the speaker

`segment` (`string`): Usually an empty field
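
As a quick sanity check on the schema above, the non-audio fields of a record can be compared against their declared types. This is a standalone sketch; `FIELDS` and `check_record` are illustrative helpers, not part of the dataset loader:

```python
# Declared types of the non-audio fields listed above
FIELDS = {
    "client_id": str, "path": str, "sentence": str,
    "up_votes": int, "down_votes": int,
    "age": str, "gender": str, "accent": str,
    "locale": str, "segment": str,
}

def check_record(rec):
    """Return True if every non-audio field is present with its declared type."""
    return all(isinstance(rec.get(name), expected) for name, expected in FIELDS.items())

record = {
    "client_id": "d59478fb...",  # truncated for display
    "path": "et/clips/common_voice_et_18318995.mp3",
    "sentence": "Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.",
    "up_votes": 2, "down_votes": 0,
    "age": "twenties", "gender": "male",
    "accent": "", "locale": "et", "segment": "",
}
print(check_record(record))  # True
```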

### Data Splits

The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.

The validated data has been checked by reviewers and received upvotes indicating that it is of high quality.

The invalidated data has been checked by reviewers and received downvotes indicating that it is of low quality.

The reported data has been reported by users, for a variety of reasons.

The other data has not yet been reviewed.

The dev, test and train splits are all drawn from data that has been reviewed and deemed of high quality.
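
The validated/invalidated/other distinction can be pictured with a small sketch. The two-vote threshold below is an assumption for illustration only; the real Common Voice pipeline applies its own validation rules:

```python
def bucket(up_votes, down_votes, threshold=2):
    """Illustrative only: map a clip's review votes to a split bucket."""
    if up_votes >= threshold and up_votes > down_votes:
        return "validated"
    if down_votes >= threshold and down_votes > up_votes:
        return "invalidated"
    return "other"  # not yet conclusively reviewed

print(bucket(2, 0))  # validated
print(bucket(0, 2))  # invalidated
print(bucket(1, 1))  # other
```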

## Data Preprocessing Recommended by Hugging Face

The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice.

Many examples in this dataset have trailing quotation marks, e.g. _“the cat sat on the mat.”_ These quotation marks do not change the meaning of the sentence, and it is nearly impossible to infer from the audio alone whether a sentence is a quotation. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.

In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.

```python
from datasets import load_dataset

ds = load_dataset("mozilla-foundation/common_voice_10_0", "en", use_auth_token=True)

def prepare_dataset(batch):
    """Function to preprocess the dataset with the .map method"""
    transcription = batch["sentence"]

    if transcription.startswith('"') and transcription.endswith('"'):
        # we can remove trailing quotation marks as they do not affect the transcription
        transcription = transcription[1:-1]

    if transcription[-1] not in [".", "?", "!"]:
        # append a full-stop to sentences that do not end in punctuation
        transcription = transcription + "."

    batch["sentence"] = transcription
    return batch

ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
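
The same two rules can be exercised on plain strings, without downloading the dataset. This is a standalone sketch of the logic inside the mapped function above (`normalize` is an illustrative helper name):

```python
def normalize(transcription):
    """Apply the two recommended normalization rules to one sentence."""
    # strip wrapping quotation marks
    if transcription.startswith('"') and transcription.endswith('"'):
        transcription = transcription[1:-1]
    # append a full stop if there is no terminal punctuation
    if transcription and transcription[-1] not in [".", "?", "!"]:
        transcription += "."
    return transcription

print(normalize('"the cat sat on the mat."'))  # the cat sat on the mat.
print(normalize("hello there"))                # hello there.
```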

## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.

## Considerations for Using the Data

### Social Impact of Dataset

The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)

### Citation Information

```
@inproceedings{commonvoice:2020,
  author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
  title = {Common Voice: A Massively-Multilingual Speech Corpus},
  booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
  pages = {4211--4215},
  year = 2020
}
```