nakkhatra committed on
Commit
4cac875
1 Parent(s): 62a729f

Trial with commonvoice format

Files changed (3)
  1. README.md +519 -1
  2. common_voice_bn.py +63 -80
  3. languages.py +1 -1
README.md CHANGED
@@ -1,9 +1,512 @@
1
  ---
2
- license: cc0-1.0
3
  ---
4
7
 
8
  How to load the Common Voice Bangla dataset directly with the datasets library
9
 
@@ -11,3 +514,18 @@ Run
11
 
12
  1) from datasets import load_dataset
13
  2) dataset = load_dataset("bengaliAI/CommonVoiceBangla", "bn", delimiter='\t')
1
  ---
2
+ pretty_name: Common Voice Corpus 9.0
3
+ annotations_creators:
4
+ - crowdsourced
5
+ language_creators:
6
+ - crowdsourced
7
+ languages:
8
+ - ab
9
+ - ar
10
+ - as
11
+ - az
12
+ - ba
13
+ - bas
14
+ - be
15
+ - bg
16
+ - bn
17
+ - br
18
+ - ca
19
+ - ckb
20
+ - cnh
21
+ - cs
22
+ - cv
23
+ - cy
24
+ - da
25
+ - de
26
+ - dv
27
+ - el
28
+ - en
29
+ - eo
30
+ - es
31
+ - et
32
+ - eu
33
+ - fa
34
+ - fi
35
+ - fr
36
+ - fy-NL
37
+ - ga-IE
38
+ - gl
39
+ - gn
40
+ - ha
41
+ - hi
42
+ - hsb
43
+ - hu
44
+ - hy-AM
45
+ - ia
46
+ - id
47
+ - ig
48
+ - it
49
+ - ja
50
+ - ka
51
+ - kab
52
+ - kk
53
+ - kmr
54
+ - ky
55
+ - lg
56
+ - lt
57
+ - lv
58
+ - mdf
59
+ - mhr
60
+ - mk
61
+ - ml
62
+ - mn
63
+ - mr
64
+ - mt
65
+ - myv
66
+ - nan-tw
67
+ - nl
68
+ - nn-NO
69
+ - or
70
+ - pa-IN
71
+ - pl
72
+ - pt
73
+ - rm-sursilv
74
+ - rm-vallader
75
+ - ro
76
+ - ru
77
+ - rw
78
+ - sah
79
+ - sat
80
+ - sk
81
+ - sl
82
+ - sr
83
+ - sv-SE
84
+ - sw
85
+ - ta
86
+ - th
87
+ - tig
88
+ - tok
89
+ - tr
90
+ - tt
91
+ - ug
92
+ - uk
93
+ - ur
94
+ - uz
95
+ - vi
96
+ - vot
97
+ - yue
98
+ - zh-CN
99
+ - zh-HK
100
+ - zh-TW
101
+ licenses:
102
+ - cc0-1.0
103
+ multilinguality:
104
+ - multilingual
105
+ size_categories:
106
+ ab:
107
+ - 10K<n<100K
108
+ ar:
109
+ - 100K<n<1M
110
+ as:
111
+ - n<1K
112
+ az:
113
+ - n<1K
114
+ ba:
115
+ - 100K<n<1M
116
+ bas:
117
+ - 1K<n<10K
118
+ be:
119
+ - 100K<n<1M
120
+ bg:
121
+ - 1K<n<10K
122
+ bn:
123
+ - 100K<n<1M
124
+ br:
125
+ - 10K<n<100K
126
+ ca:
127
+ - 1M<n<10M
128
+ ckb:
129
+ - 10K<n<100K
130
+ cnh:
131
+ - 1K<n<10K
132
+ cs:
133
+ - 10K<n<100K
134
+ cv:
135
+ - 10K<n<100K
136
+ cy:
137
+ - 100K<n<1M
138
+ da:
139
+ - 1K<n<10K
140
+ de:
141
+ - 100K<n<1M
142
+ dv:
143
+ - 10K<n<100K
144
+ el:
145
+ - 10K<n<100K
146
+ en:
147
+ - 1M<n<10M
148
+ eo:
149
+ - 1M<n<10M
150
+ es:
151
+ - 100K<n<1M
152
+ et:
153
+ - 10K<n<100K
154
+ eu:
155
+ - 100K<n<1M
156
+ fa:
157
+ - 100K<n<1M
158
+ fi:
159
+ - 10K<n<100K
160
+ fr:
161
+ - 100K<n<1M
162
+ fy-NL:
163
+ - 10K<n<100K
164
+ ga-IE:
165
+ - 1K<n<10K
166
+ gl:
167
+ - 10K<n<100K
168
+ gn:
169
+ - 1K<n<10K
170
+ ha:
171
+ - 1K<n<10K
172
+ hi:
173
+ - 10K<n<100K
174
+ hsb:
175
+ - 1K<n<10K
176
+ hu:
177
+ - 10K<n<100K
178
+ hy-AM:
179
+ - 1K<n<10K
180
+ ia:
181
+ - 10K<n<100K
182
+ id:
183
+ - 10K<n<100K
184
+ ig:
185
+ - 1K<n<10K
186
+ it:
187
+ - 100K<n<1M
188
+ ja:
189
+ - 10K<n<100K
190
+ ka:
191
+ - 1K<n<10K
192
+ kab:
193
+ - 100K<n<1M
194
+ kk:
195
+ - 1K<n<10K
196
+ kmr:
197
+ - 10K<n<100K
198
+ ky:
199
+ - 10K<n<100K
200
+ lg:
201
+ - 100K<n<1M
202
+ lt:
203
+ - 10K<n<100K
204
+ lv:
205
+ - 1K<n<10K
206
+ mdf:
207
+ - n<1K
208
+ mhr:
209
+ - 10K<n<100K
210
+ mk:
211
+ - n<1K
212
+ ml:
213
+ - 1K<n<10K
214
+ mn:
215
+ - 10K<n<100K
216
+ mr:
217
+ - 10K<n<100K
218
+ mt:
219
+ - 10K<n<100K
220
+ myv:
221
+ - 1K<n<10K
222
+ nan-tw:
223
+ - 1K<n<10K
224
+ nl:
225
+ - 10K<n<100K
226
+ nn-NO:
227
+ - n<1K
228
+ or:
229
+ - 1K<n<10K
230
+ pa-IN:
231
+ - 1K<n<10K
232
+ pl:
233
+ - 100K<n<1M
234
+ pt:
235
+ - 100K<n<1M
236
+ rm-sursilv:
237
+ - 1K<n<10K
238
+ rm-vallader:
239
+ - 1K<n<10K
240
+ ro:
241
+ - 10K<n<100K
242
+ ru:
243
+ - 100K<n<1M
244
+ rw:
245
+ - 1M<n<10M
246
+ sah:
247
+ - 1K<n<10K
248
+ sat:
249
+ - n<1K
250
+ sk:
251
+ - 10K<n<100K
252
+ sl:
253
+ - 10K<n<100K
254
+ sr:
255
+ - 1K<n<10K
256
+ sv-SE:
257
+ - 10K<n<100K
258
+ sw:
259
+ - 100K<n<1M
260
+ ta:
261
+ - 100K<n<1M
262
+ th:
263
+ - 100K<n<1M
264
+ tig:
265
+ - n<1K
266
+ tok:
267
+ - 1K<n<10K
268
+ tr:
269
+ - 10K<n<100K
270
+ tt:
271
+ - 10K<n<100K
272
+ ug:
273
+ - 10K<n<100K
274
+ uk:
275
+ - 10K<n<100K
276
+ ur:
277
+ - 10K<n<100K
278
+ uz:
279
+ - 100K<n<1M
280
+ vi:
281
+ - 10K<n<100K
282
+ vot:
283
+ - n<1K
284
+ yue:
285
+ - 10K<n<100K
286
+ zh-CN:
287
+ - 10K<n<100K
288
+ zh-HK:
289
+ - 100K<n<1M
290
+ zh-TW:
291
+ - 100K<n<1M
292
+ source_datasets:
293
+ - extended|common_voice
294
+ task_categories:
295
+ - speech-processing
296
+ task_ids:
297
+ - automatic-speech-recognition
298
+ paperswithcode_id: common-voice
299
+ extra_gated_prompt: "By clicking on “Access repository” below, you also agree to not attempt to determine the identity of speakers in the Common Voice dataset."
300
  ---
301
 
302
+ # Dataset Card for Common Voice Corpus 9.0
303
 
304
+ ## Table of Contents
305
+ - [Dataset Description](#dataset-description)
306
+ - [Dataset Summary](#dataset-summary)
307
+ - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
308
+ - [Languages](#languages)
309
+ - [Dataset Structure](#dataset-structure)
310
+ - [Data Instances](#data-instances)
311
+ - [Data Fields](#data-fields)
312
+ - [Data Splits](#data-splits)
313
+ - [Dataset Creation](#dataset-creation)
314
+ - [Curation Rationale](#curation-rationale)
315
+ - [Source Data](#source-data)
316
+ - [Annotations](#annotations)
317
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
318
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
319
+ - [Social Impact of Dataset](#social-impact-of-dataset)
320
+ - [Discussion of Biases](#discussion-of-biases)
321
+ - [Other Known Limitations](#other-known-limitations)
322
+ - [Additional Information](#additional-information)
323
+ - [Dataset Curators](#dataset-curators)
324
+ - [Licensing Information](#licensing-information)
325
+ - [Citation Information](#citation-information)
326
+ - [Contributions](#contributions)
327
 
328
+ ## Dataset Description
329
+
330
+ - **Homepage:** https://commonvoice.mozilla.org/en/datasets
331
+ - **Repository:** https://github.com/common-voice/common-voice
332
+ - **Paper:** https://arxiv.org/abs/1912.06670
333
+ - **Leaderboard:** https://paperswithcode.com/dataset/common-voice
334
+ - **Point of Contact:** [Anton Lozhkov](mailto:[email protected])
335
+
336
+ ### Dataset Summary
337
+
338
+ The Common Voice dataset consists of unique MP3 recordings, each paired with a corresponding text file.
339
+ Many of the 20217 recorded hours in the dataset also include demographic metadata like age, sex, and accent
340
+ that can help improve the accuracy of speech recognition engines.
341
+
342
+ The dataset currently consists of 14973 validated hours in 93 languages, but more voices and languages are always being added.
343
+ Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
344
+
345
+ ### Supported Tasks and Leaderboards
346
+
347
+ The results for models trained on the Common Voice datasets are available via the
348
+ [Papers with Code Leaderboards](https://paperswithcode.com/dataset/common-voice)
349
+
350
+ ### Languages
351
+
352
+ ```
353
+ Abkhaz, Arabic, Armenian, Assamese, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Bengali, Breton, Bulgarian, Cantonese, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hindi, Hungarian, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Kurmanji Kurdish, Kyrgyz, Latvian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Meadow Mari, Moksha, Mongolian, Norwegian Nynorsk, Odia, Persian, Polish, Portuguese, Punjabi, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Serbian, Slovak, Slovenian, Spanish, Swahili, Swedish, Taiwanese (Minnan), Tamil, Tatar, Thai, Tigre, Toki Pona, Turkish, Ukrainian, Upper Sorbian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh
354
+ ```
355
+
356
+ ## Dataset Structure
357
+
358
+ ### Data Instances
359
+
360
+ A typical data point comprises the `path` to the audio file and its `sentence`.
361
+ Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
362
+
363
+ ```python
+ {
+     'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
+     'path': 'et/clips/common_voice_et_18318995.mp3',
+     'audio': {
+         'path': 'et/clips/common_voice_et_18318995.mp3',
+         'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
+         'sampling_rate': 48000
+     },
+     'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
+     'up_votes': 2,
+     'down_votes': 0,
+     'age': 'twenties',
+     'gender': 'male',
+     'accent': '',
+     'locale': 'et',
+     'segment': ''
+ }
+ ```
382
+
383
+ ### Data Fields
384
+
385
+ `client_id` (`string`): An ID identifying which client (voice) made the recording
386
+
387
+ `path` (`string`): The path to the audio file
388
+
389
+ `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
390
+
391
+ `sentence` (`string`): The sentence the user was prompted to speak
392
+
393
+ `up_votes` (`int64`): How many upvotes the audio file has received from reviewers
394
+
395
+ `down_votes` (`int64`): How many downvotes the audio file has received from reviewers
396
+
397
+ `age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
398
+
399
+ `gender` (`string`): The gender of the speaker
400
+
401
+ `accent` (`string`): Accent of the speaker
402
+
403
+ `locale` (`string`): The locale of the speaker
404
+
405
+ `segment` (`string`): Usually an empty field
406
+
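As a quick way to sanity-check that a record matches the field layout above, here is a minimal stdlib-only sketch; the sample values below are illustrative placeholders, not real dataset entries:

```python
# Check a record against the documented Common Voice fields.
EXPECTED_FIELDS = {
    "client_id": str, "path": str, "sentence": str,
    "up_votes": int, "down_votes": int,
    "age": str, "gender": str, "accent": str, "locale": str, "segment": str,
}

def missing_or_mistyped(record):
    """Return names of documented fields that are absent or have the wrong type."""
    return [name for name, typ in EXPECTED_FIELDS.items()
            if not isinstance(record.get(name), typ)]

# Illustrative placeholder record (not a real dataset entry).
sample = {
    "client_id": "abc123", "path": "bn/clips/example.mp3",
    "sentence": "Example sentence.", "up_votes": 2, "down_votes": 0,
    "age": "twenties", "gender": "male", "accent": "", "locale": "bn", "segment": "",
}
print(missing_or_mistyped(sample))  # → []
```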
407
+ ### Data Splits
408
+
409
+ The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
410
+
411
+ The validated data is data that has been reviewed and received upvotes confirming that the data is of high quality.
412
+
413
+ The invalidated data is data that has been invalidated by reviewers
414
+ and received downvotes indicating that the data is of low quality.
415
+
416
+ The reported data is data that has been reported, for different reasons.
417
+
418
+ The other data is data that has not yet been reviewed.
419
+
420
+ The dev, test and train splits all contain data that has been reviewed and deemed of high quality.
421
+
422
+
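As a rough illustration of the review logic described above, the helper below buckets a clip by its review votes. This is my own simplification for illustration, not the exact Common Voice validation rules:

```python
def review_status(up_votes, down_votes):
    """Simplified bucketing of a clip by its review votes (illustrative only)."""
    if up_votes == 0 and down_votes == 0:
        return "other"  # not yet reviewed
    return "validated" if up_votes > down_votes else "invalidated"

print(review_status(2, 0))  # → validated
print(review_status(0, 0))  # → other
```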
423
+ ## Data Preprocessing Recommended by Hugging Face
424
+ The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice.
425
+
426
+ Many examples in this dataset have trailing quotation marks, e.g _“the cat sat on the mat.”_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
427
+
428
+ In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
429
+
430
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("mozilla-foundation/common_voice_9_0", "en", use_auth_token=True)
+
+ def prepare_dataset(batch):
+     """Function to preprocess the dataset with the .map method"""
+     transcription = batch["sentence"]
+
+     if transcription.startswith('"') and transcription.endswith('"'):
+         # we can remove trailing quotation marks as they do not affect the transcription
+         transcription = transcription[1:-1]
+
+     if transcription[-1] not in [".", "?", "!"]:
+         # append a full-stop to sentences that do not end in punctuation
+         transcription = transcription + "."
+
+     batch["sentence"] = transcription
+
+     return batch
+
+ ds = ds.map(prepare_dataset, desc="preprocess dataset")
+ ```
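The two normalization rules can also be exercised on plain strings without downloading the dataset. The helper below mirrors the snippet's logic, with an extra guard for empty strings added by me:

```python
def normalize(transcription):
    """Strip wrapping quotation marks and ensure final punctuation."""
    if transcription.startswith('"') and transcription.endswith('"'):
        transcription = transcription[1:-1]
    if transcription and transcription[-1] not in [".", "?", "!"]:
        transcription += "."
    return transcription

print(normalize('"the cat sat on the mat"'))  # → the cat sat on the mat.
print(normalize("Is it raining"))             # → Is it raining.
```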
453
+ ## Dataset Creation
454
+
455
+ ### Curation Rationale
456
+
457
+ [Needs More Information]
458
+
459
+ ### Source Data
460
+
461
+ #### Initial Data Collection and Normalization
462
+
463
+ [Needs More Information]
464
+
465
+ #### Who are the source language producers?
466
+
467
+ [Needs More Information]
468
+
469
+ ### Annotations
470
+
471
+ #### Annotation process
472
+
473
+ [Needs More Information]
474
+
475
+ #### Who are the annotators?
476
+
477
+ [Needs More Information]
478
+
479
+ ### Personal and Sensitive Information
480
+
481
+ The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
482
+
483
+ ## Considerations for Using the Data
484
+
485
+ ### Social Impact of Dataset
486
+
487
+ The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
488
+
489
+ ### Discussion of Biases
490
+
491
+ [More Information Needed]
492
+
493
+ ### Other Known Limitations
494
+
495
+ [More Information Needed]
496
+
497
+ ## Additional Information
498
+
499
+ ### Dataset Curators
500
+
501
+ [More Information Needed]
502
+
503
+ ### Licensing Information
504
+
505
+ Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
506
+
507
+
508
+
509
+ ## How to download the dataset
510
 
511
  How to load the Common Voice Bangla dataset directly with the datasets library
512
 
 
514
 
515
  1) from datasets import load_dataset
516
  2) dataset = load_dataset("bengaliAI/CommonVoiceBangla", "bn", delimiter='\t')
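The `delimiter='\t'` argument reflects that the metadata files in this repo are tab-separated. As a quick stdlib illustration of that format (the file names and sentences below are made up for the example):

```python
import csv
import io

# A made-up two-row TSV in the same shape as the repo's metadata files.
tsv_text = (
    "path\tsentence\n"
    "clip_0001.mp3\tExample sentence one.\n"
    "clip_0002.mp3\tExample sentence two.\n"
)
rows = list(csv.DictReader(io.StringIO(tsv_text), delimiter="\t"))
print(rows[0]["path"], rows[1]["sentence"])  # → clip_0001.mp3 Example sentence two.
```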
517
+
518
+
519
+
520
+
521
+ ### Citation Information
522
+
523
+ ```
524
+ @inproceedings{commonvoice:2020,
525
+ author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
526
+ title = {Common Voice: A Massively-Multilingual Speech Corpus},
527
+ booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
528
+ pages = {4211--4215},
529
+ year = 2020
530
+ }
531
+ ```
common_voice_bn.py CHANGED
@@ -1,24 +1,41 @@
1
  import csv
2
  import os
3
  import urllib
4
 
5
-
6
- import datasets
7
- from datasets.utils.py_utils import size_str
8
-
9
-
10
  import datasets
11
  import requests
12
  from datasets.utils.py_utils import size_str
13
  from huggingface_hub import HfApi, HfFolder
14
 
15
- # from .languages import LANGUAGES
16
- #Used to get tar.gz file from mozilla website
17
  from .release_stats import STATS
18
 
19
-
20
-
21
- #Hard Links
22
 
23
  _HOMEPAGE = "https://commonvoice.mozilla.org/en/datasets"
24
 
@@ -27,20 +44,17 @@ _LICENSE = "https://creativecommons.org/publicdomain/zero/1.0/"
27
  _API_URL = "https://commonvoice.mozilla.org/api/v1"
28
 
29
 
30
-
31
-
32
-
33
  class CommonVoiceConfig(datasets.BuilderConfig):
34
  """BuilderConfig for CommonVoice."""
35
 
36
  def __init__(self, name, version, **kwargs):
37
- self.language = "bn" # kwargs.pop("language", None)
38
- self.release_date = "2022-04-27" # kwargs.pop("release_date", None)
39
- self.num_clips = 231120 # kwargs.pop("num_clips", None)
40
- self.num_speakers = 19863 # kwargs.pop("num_speakers", None)
41
- self.validated_hr = 56.61 # kwargs.pop("validated_hr", None)
42
- self.total_hr = 399.47 # kwargs.pop("total_hr", None)
43
- self.size_bytes = 8262390506 # kwargs.pop("size_bytes", None)
44
  self.size_human = size_str(self.size_bytes)
45
  description = (
46
  f"Common Voice speech to text dataset in {self.language} released on {self.release_date}. "
@@ -63,27 +77,26 @@ class CommonVoice(datasets.GeneratorBasedBuilder):
63
 
64
  BUILDER_CONFIGS = [
65
  CommonVoiceConfig(
66
- name="bn"#lang,
67
- version= '9.0.0' #STATS["version"],
68
- language= "Bengali" #LANGUAGES[lang],
69
- release_date= "2022-04-27" #STATS["date"],
70
- num_clips= 231120 #lang_stats["clips"],
71
- num_speakers= 19863 #lang_stats["users"],
72
- validated_hr= float(56.61) #float(lang_stats["validHrs"]),
73
- total_hr= float(399.47) #float(lang_stats["totalHrs"]),
74
- size_bytes= int(8262390506) #int(lang_stats["size"]),
75
  )
76
- #for lang, lang_stats in STATS["locales"].items()
77
  ]
78
 
79
  def _info(self):
80
- # total_languages = len(STATS["locales"])
81
- # total_valid_hours = STATS["totalValidHrs"]
82
- total_languages = 1 #len(STATS["locales"])
83
- total_valid_hours = float(399.47) #STATS["totalValidHrs"]
84
  description = (
85
- "Common Voice Bangla is bengali AI's initiative to help teach machines how real people speak in Bangla. "
86
- f"The dataset is for initial training of a general speech recognition model for Bangla."
 
87
  )
88
  features = datasets.Features(
89
  {
@@ -105,25 +118,22 @@ class CommonVoice(datasets.GeneratorBasedBuilder):
105
  description=description,
106
  features=features,
107
  supervised_keys=None,
108
- # homepage=_HOMEPAGE,
109
  license=_LICENSE,
110
- # citation=_CITATION,
111
- version=self.config.version,
112
- #task_templates=[
113
- # AutomaticSpeechRecognition(audio_file_path_column="path", transcription_column="sentence")
114
- #],
115
  )
116
 
117
-
118
  def _get_bundle_url(self, locale, url_template):
119
  # path = encodeURIComponent(path)
120
  path = url_template.replace("{locale}", locale)
121
  path = urllib.parse.quote(path.encode("utf-8"), safe="~()*!.'")
122
  # use_cdn = self.config.size_bytes < 20 * 1024 * 1024 * 1024
123
  # response = requests.get(f"{_API_URL}/bucket/dataset/{path}/{use_cdn}", timeout=10.0).json()
124
- response = requests.get(
125
- f"{_API_URL}/bucket/dataset/{path}", timeout=10.0
126
- ).json()
127
  return response["url"]
128
 
129
  def _log_download(self, locale, bundle_version, auth_token):
@@ -147,12 +157,8 @@ class CommonVoice(datasets.GeneratorBasedBuilder):
147
  dl_manager.download_config.ignore_url_params = True
148
 
149
  self._log_download(self.config.name, bundle_version, hf_auth_token)
150
- archive_path = dl_manager.download(
151
- self._get_bundle_url(self.config.name, bundle_url_template)
152
- )
153
- local_extracted_archive = (
154
- dl_manager.extract(archive_path) if not dl_manager.is_streaming else None
155
- )
156
 
157
  if self.config.version < datasets.Version("5.0.0"):
158
  path_to_data = ""
@@ -160,6 +166,7 @@ class CommonVoice(datasets.GeneratorBasedBuilder):
160
  path_to_data = "/".join([bundle_version, self.config.name])
161
  path_to_clips = "/".join([path_to_data, "clips"]) if path_to_data else "clips"
162
 
 
163
  # we provide our custom TSVs with the Hugging Face repo, so
164
  path_to_tsvs = "/" + "bengali_ai_tsv" + "/"
165
 
@@ -169,10 +176,7 @@ class CommonVoice(datasets.GeneratorBasedBuilder):
169
  gen_kwargs={
170
  "local_extracted_archive": local_extracted_archive,
171
  "archive_iterator": dl_manager.iter_archive(archive_path),
172
- #"metadata_filepath": "/".join([path_to_data, "train.tsv"])
173
- # if path_to_data
174
- # else "train.tsv",
175
- #custom train.tsv
176
  "metadata_filepath": "/".join([path_to_tsvs, "train.tsv"]),
177
  "path_to_clips": path_to_clips,
178
  },
@@ -182,10 +186,7 @@ class CommonVoice(datasets.GeneratorBasedBuilder):
182
  gen_kwargs={
183
  "local_extracted_archive": local_extracted_archive,
184
  "archive_iterator": dl_manager.iter_archive(archive_path),
185
- #"metadata_filepath": "/".join([path_to_data, "test.tsv"])
186
- # if path_to_data
187
- # else "test.tsv",
188
- #custom test.tsv
189
  "metadata_filepath": "/".join([path_to_tsvs, "test.tsv"]),
190
  "path_to_clips": path_to_clips,
191
  },
@@ -195,18 +196,13 @@ class CommonVoice(datasets.GeneratorBasedBuilder):
195
  gen_kwargs={
196
  "local_extracted_archive": local_extracted_archive,
197
  "archive_iterator": dl_manager.iter_archive(archive_path),
198
- # "metadata_filepath": "/".join([path_to_data, "dev.tsv"])
199
- # if path_to_data
200
- # else "dev.tsv",
201
- #custom test.tsv
202
  "metadata_filepath": "/".join([path_to_tsvs, "dev.tsv"]),
203
  "path_to_clips": path_to_clips,
204
  },
205
  ),
206
  ]
207
 
208
-
209
-
210
  def _generate_examples(
211
  self,
212
  local_extracted_archive,
@@ -244,22 +240,9 @@ class CommonVoice(datasets.GeneratorBasedBuilder):
244
  if path in metadata:
245
  result = metadata[path]
246
  # set the audio feature and the path to the extracted file
247
- path = (
248
- os.path.join(local_extracted_archive, path)
249
- if local_extracted_archive
250
- else path
251
- )
252
  result["audio"] = {"path": path, "bytes": f.read()}
253
  # set path to None if the audio file doesn't exist locally (i.e. in streaming mode)
254
  result["path"] = path if local_extracted_archive else None
255
 
256
  yield path, result
257
-
258
-
259
-
260
- # 'bn': {'duration': 1438112808, 'reportedSentences': 693, 'buckets': {'dev': 7748, 'invalidated': 5844, 'other': 192522,
261
- # 'reported': 717, 'test': 7748, 'train': 14503, 'validated': 32754}, 'clips': 231120, 'splits': {'accent': {'': 1},
262
- # 'age': {'thirties': 0.02, 'twenties': 0.22, '': 0.72, 'teens': 0.04, 'fourties': 0},
263
- # 'gender': {'male': 0.24, '': 0.72, 'female': 0.04, 'other': 0}}, 'users': 19863, 'size': 8262390506,
264
- # 'checksum': '599a5f7c9e55a297928da390345a19180b279a1f013081e7255a657fc99f98d5', 'avgDurationSecs': 6.222,
265
- # 'validDurationSecs': 203807.316, 'totalHrs': 399.47, 'validHrs': 56.61},
 
1
+ # coding=utf-8
2
+ # Copyright 2022 The HuggingFace Datasets Authors and the current dataset script contributor.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ """ Common Voice Dataset"""
16
+
17
+
18
  import csv
19
  import os
20
  import urllib
21
 
22
  import datasets
23
  import requests
24
  from datasets.utils.py_utils import size_str
25
  from huggingface_hub import HfApi, HfFolder
26
 
27
+ from .languages import LANGUAGES
 
28
  from .release_stats import STATS
29
 
30
+ _CITATION = """\
31
+ @inproceedings{commonvoice:2020,
32
+ author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
33
+ title = {Common Voice: A Massively-Multilingual Speech Corpus},
34
+ booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
35
+ pages = {4211--4215},
36
+ year = 2020
37
+ }
38
+ """
39
 
40
  _HOMEPAGE = "https://commonvoice.mozilla.org/en/datasets"
41
 
 
44
  _API_URL = "https://commonvoice.mozilla.org/api/v1"
45
 
46
 
 
 
 
47
  class CommonVoiceConfig(datasets.BuilderConfig):
48
  """BuilderConfig for CommonVoice."""
49
 
50
  def __init__(self, name, version, **kwargs):
51
+ self.language = kwargs.pop("language", None)
52
+ self.release_date = kwargs.pop("release_date", None)
53
+ self.num_clips = kwargs.pop("num_clips", None)
54
+ self.num_speakers = kwargs.pop("num_speakers", None)
55
+ self.validated_hr = kwargs.pop("validated_hr", None)
56
+ self.total_hr = kwargs.pop("total_hr", None)
57
+ self.size_bytes = kwargs.pop("size_bytes", None)
58
  self.size_human = size_str(self.size_bytes)
59
  description = (
60
  f"Common Voice speech to text dataset in {self.language} released on {self.release_date}. "
 
77
 
78
  BUILDER_CONFIGS = [
79
  CommonVoiceConfig(
80
+ name=lang,
81
+ version=STATS["version"],
82
+ language=LANGUAGES[lang],
83
+ release_date=STATS["date"],
84
+ num_clips=lang_stats["clips"],
85
+ num_speakers=lang_stats["users"],
86
+ validated_hr=float(lang_stats["validHrs"]),
87
+ total_hr=float(lang_stats["totalHrs"]),
88
+ size_bytes=int(lang_stats["size"]),
89
  )
90
+ for lang, lang_stats in STATS["locales"].items()
91
  ]
92
 
93
  def _info(self):
94
+ total_languages = len(STATS["locales"])
95
+ total_valid_hours = STATS["totalValidHrs"]
 
 
96
  description = (
97
+ "Common Voice is Mozilla's initiative to help teach machines how real people speak. "
98
+ f"The dataset currently consists of {total_valid_hours} validated hours of speech "
99
+ f" in {total_languages} languages, but more voices and languages are always added."
100
  )
101
  features = datasets.Features(
102
  {
 
118
  description=description,
119
  features=features,
120
  supervised_keys=None,
121
+ homepage=_HOMEPAGE,
122
  license=_LICENSE,
123
+ citation=_CITATION,
124
+ version=self.config.version,
125
+ # task_templates=[
126
+ # AutomaticSpeechRecognition(audio_file_path_column="path", transcription_column="sentence")
127
+ # ],
128
  )
129
 
 
130
  def _get_bundle_url(self, locale, url_template):
131
  # path = encodeURIComponent(path)
132
  path = url_template.replace("{locale}", locale)
133
  path = urllib.parse.quote(path.encode("utf-8"), safe="~()*!.'")
134
  # use_cdn = self.config.size_bytes < 20 * 1024 * 1024 * 1024
135
  # response = requests.get(f"{_API_URL}/bucket/dataset/{path}/{use_cdn}", timeout=10.0).json()
136
+ response = requests.get(f"{_API_URL}/bucket/dataset/{path}", timeout=10.0).json()
 
 
137
  return response["url"]
138
 
139
  def _log_download(self, locale, bundle_version, auth_token):
 
157
  dl_manager.download_config.ignore_url_params = True
158
 
159
  self._log_download(self.config.name, bundle_version, hf_auth_token)
160
+ archive_path = dl_manager.download(self._get_bundle_url(self.config.name, bundle_url_template))
161
+ local_extracted_archive = dl_manager.extract(archive_path) if not dl_manager.is_streaming else None
 
 
 
 
162
 
163
  if self.config.version < datasets.Version("5.0.0"):
164
  path_to_data = ""
 
166
  path_to_data = "/".join([bundle_version, self.config.name])
167
  path_to_clips = "/".join([path_to_data, "clips"]) if path_to_data else "clips"
168
 
169
+
170
  # we provide our custom TSVs with the Hugging Face repo, so
171
  path_to_tsvs = "/" + "bengali_ai_tsv" + "/"
172
 
 
176
  gen_kwargs={
177
  "local_extracted_archive": local_extracted_archive,
178
  "archive_iterator": dl_manager.iter_archive(archive_path),
179
+ #"metadata_filepath": "/".join([path_to_data, "train.tsv"]) if path_to_data else "train.tsv",
 
 
 
180
  "metadata_filepath": "/".join([path_to_tsvs, "train.tsv"]),
181
  "path_to_clips": path_to_clips,
182
  },
 
186
  gen_kwargs={
187
  "local_extracted_archive": local_extracted_archive,
188
  "archive_iterator": dl_manager.iter_archive(archive_path),
189
+ #"metadata_filepath": "/".join([path_to_data, "test.tsv"]) if path_to_data else "test.tsv",
 
 
 
190
  "metadata_filepath": "/".join([path_to_tsvs, "test.tsv"]),
191
  "path_to_clips": path_to_clips,
192
  },
 
196
  gen_kwargs={
197
  "local_extracted_archive": local_extracted_archive,
198
  "archive_iterator": dl_manager.iter_archive(archive_path),
199
+ #"metadata_filepath": "/".join([path_to_data, "dev.tsv"]) if path_to_data else "dev.tsv",
 
 
 
200
  "metadata_filepath": "/".join([path_to_tsvs, "dev.tsv"]),
201
  "path_to_clips": path_to_clips,
202
  },
203
  ),
204
  ]
205
 
 
 
206
  def _generate_examples(
207
  self,
208
  local_extracted_archive,
 
240
  if path in metadata:
241
  result = metadata[path]
242
  # set the audio feature and the path to the extracted file
243
+ path = os.path.join(local_extracted_archive, path) if local_extracted_archive else path
 
 
 
 
244
  result["audio"] = {"path": path, "bytes": f.read()}
245
  # set path to None if the audio file doesn't exist locally (i.e. in streaming mode)
246
  result["path"] = path if local_extracted_archive else None
247
 
248
  yield path, result
 
languages.py CHANGED
@@ -1 +1 @@
1
- LANGUAGES = {"bn": "Bengali"}
 
1
+ LANGUAGES = {'ab': 'Abkhaz', 'ace': 'Acehnese', 'ady': 'Adyghe', 'af': 'Afrikaans', 'am': 'Amharic', 'an': 'Aragonese', 'ar': 'Arabic', 'arn': 'Mapudungun', 'as': 'Assamese', 'ast': 'Asturian', 'az': 'Azerbaijani', 'ba': 'Bashkir', 'bas': 'Basaa', 'be': 'Belarusian', 'bg': 'Bulgarian', 'bn': 'Bengali', 'br': 'Breton', 'bs': 'Bosnian', 'bxr': 'Buryat', 'ca': 'Catalan', 'cak': 'Kaqchikel', 'ckb': 'Central Kurdish', 'cnh': 'Hakha Chin', 'co': 'Corsican', 'cs': 'Czech', 'cv': 'Chuvash', 'cy': 'Welsh', 'da': 'Danish', 'de': 'German', 'dsb': 'Sorbian, Lower', 'dv': 'Dhivehi', 'el': 'Greek', 'en': 'English', 'eo': 'Esperanto', 'es': 'Spanish', 'et': 'Estonian', 'eu': 'Basque', 'fa': 'Persian', 'ff': 'Fulah', 'fi': 'Finnish', 'fo': 'Faroese', 'fr': 'French', 'fy-NL': 'Frisian', 'ga-IE': 'Irish', 'gl': 'Galician', 'gn': 'Guarani', 'gom': 'Goan Konkani', 'ha': 'Hausa', 'he': 'Hebrew', 'hi': 'Hindi', 'hr': 'Croatian', 'hsb': 'Sorbian, Upper', 'ht': 'Haitian', 'hu': 'Hungarian', 'hy-AM': 'Armenian', 'hyw': 'Armenian Western', 'ia': 'Interlingua', 'id': 'Indonesian', 'ie': 'Interlingue', 'ig': 'Igbo', 'is': 'Icelandic', 'it': 'Italian', 'izh': 'Izhorian', 'ja': 'Japanese', 'ka': 'Georgian', 'kaa': 'Karakalpak', 'kab': 'Kabyle', 'kbd': 'Kabardian', 'ki': 'Kikuyu', 'kk': 'Kazakh', 'km': 'Khmer', 'kmr': 'Kurmanji Kurdish', 'knn': 'Konkani (Devanagari)', 'ko': 'Korean', 'kpv': 'Komi-Zyrian', 'kw': 'Cornish', 'ky': 'Kyrgyz', 'lb': 'Luxembourgish', 'lg': 'Luganda', 'lij': 'Ligurian', 'lt': 'Lithuanian', 'lv': 'Latvian', 'mai': 'Maithili', 'mdf': 'Moksha', 'mg': 'Malagasy', 'mhr': 'Meadow Mari', 'mk': 'Macedonian', 'ml': 'Malayalam', 'mn': 'Mongolian', 'mni': 'Meetei Lon', 'mos': 'Mossi', 'mr': 'Marathi', 'mrj': 'Hill Mari', 'ms': 'Malay', 'mt': 'Maltese', 'my': 'Burmese', 'myv': 'Erzya', 'nan-tw': 'Taiwanese (Minnan)', 'nb-NO': 'Norwegian Bokmål', 'ne-NP': 'Nepali', 'nia': 'Nias', 'nl': 'Dutch', 'nn-NO': 'Norwegian Nynorsk', 'nyn': 'Runyankole', 'oc': 'Occitan', 'or': 'Odia', 
'pa-IN': 'Punjabi', 'pap-AW': 'Papiamento (Aruba)', 'pl': 'Polish', 'ps': 'Pashto', 'pt': 'Portuguese', 'quc': "K'iche'", 'quy': 'Quechua Chanka', 'rm-sursilv': 'Romansh Sursilvan', 'rm-vallader': 'Romansh Vallader', 'ro': 'Romanian', 'ru': 'Russian', 'rw': 'Kinyarwanda', 'sah': 'Sakha', 'sat': 'Santali (Ol Chiki)', 'sc': 'Sardinian', 'scn': 'Sicilian', 'shi': 'Shilha', 'si': 'Sinhala', 'sk': 'Slovak', 'skr': 'Saraiki', 'sl': 'Slovenian', 'so': 'Somali', 'sq': 'Albanian', 'sr': 'Serbian', 'sv-SE': 'Swedish', 'sw': 'Swahili', 'syr': 'Syriac', 'ta': 'Tamil', 'te': 'Telugu', 'tg': 'Tajik', 'th': 'Thai', 'ti': 'Tigrinya', 'tig': 'Tigre', 'tk': 'Turkmen', 'tl': 'Tagalog', 'tok': 'Toki Pona', 'tr': 'Turkish', 'tt': 'Tatar', 'tw': 'Twi', 'ty': 'Tahitian', 'uby': 'Ubykh', 'udm': 'Udmurt', 'ug': 'Uyghur', 'uk': 'Ukrainian', 'ur': 'Urdu', 'uz': 'Uzbek', 'vec': 'Venetian', 'vi': 'Vietnamese', 'vot': 'Votic', 'yi': 'Yiddish', 'yo': 'Yoruba', 'yue': 'Cantonese', 'zh-CN': 'Chinese (China)', 'zh-HK': 'Chinese (Hong Kong)', 'zh-TW': 'Chinese (Taiwan)'}