---
language:
- cv
license: cc0-1.0
task_categories:
- automatic-speech-recognition
- text-to-speech
pretty_name: Chuvash Voice
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: path
    dtype: string
  - name: sentence
    dtype: string
  - name: locale
    dtype: string
  - name: client_id
    dtype: string
  splits:
  - name: train
    num_bytes: 1343571989.56
    num_examples: 29860
  download_size: 1346925000
  dataset_size: 1343571989.56
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
size_categories:
- 10K<n<100K
---

## How to use

We recommend using our dataset together with the Common Voice corpus; we have kept the column structure consistent with it.

```python
from datasets import load_dataset, DatasetDict, concatenate_datasets, Audio

comm_voice = DatasetDict()
comm_voice["train"] = load_dataset("mozilla-foundation/common_voice_17_0", "cv", split="train+validation", token=True)
comm_voice["test"] = load_dataset("mozilla-foundation/common_voice_17_0", "cv", split="test", token=True)
comm_voice = comm_voice.remove_columns(["accent", "age", "down_votes", "gender", "segment", "up_votes", "variant"])
comm_voice = comm_voice.cast_column("audio", Audio(sampling_rate=16000))

print(comm_voice)
print(comm_voice["train"][0])

chuvash_voice = load_dataset("alexantonov/chuvash_voice")
chuvash_voice = chuvash_voice.cast_column("audio", Audio(sampling_rate=16000))

print(chuvash_voice)
print(chuvash_voice["train"][0])


common_voice = DatasetDict({"train": concatenate_datasets([comm_voice["train"], chuvash_voice["train"]]), "test": comm_voice["test"]})

print(common_voice)
```

## Text to Speech

Most of the corpus was recorded by a single speaker (**client_id='177'**), so it can also be used for speech-synthesis tasks.