---
language:
- cv
license: cc0-1.0
task_categories:
- automatic-speech-recognition
- text-to-speech
pretty_name: Chuvash Voice
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: path
    dtype: string
  - name: sentence
    dtype: string
  - name: locale
    dtype: string
  - name: client_id
    dtype: string
  splits:
  - name: train
    num_bytes: 1343571989.56
    num_examples: 29860
  download_size: 1346925000
  dataset_size: 1343571989.56
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

## How to use

We recommend using this dataset together with the Common Voice corpus. The column structure has been kept consistent, so the two datasets can be concatenated directly.

```python
from datasets import load_dataset, DatasetDict, concatenate_datasets, Audio

# Load the Chuvash ("cv") subset of Common Voice 17.0 (requires accepting
# the terms on the Hub and an authentication token).
comm_voice = DatasetDict()
comm_voice["train"] = load_dataset(
    "mozilla-foundation/common_voice_17_0", "cv",
    split="train+validation", use_auth_token=True,
)
comm_voice["test"] = load_dataset(
    "mozilla-foundation/common_voice_17_0", "cv",
    split="test", use_auth_token=True,
)

# Keep only the columns shared with Chuvash Voice.
comm_voice = comm_voice.remove_columns(
    ["accent", "age", "down_votes", "gender", "segment", "up_votes", "variant"]
)
comm_voice = comm_voice.cast_column("audio", Audio(sampling_rate=16000))
print(comm_voice)
print(comm_voice["train"][0])

# Load Chuvash Voice and resample to the same rate.
chuvash_voice = load_dataset("alexantonov/chuvash_voice")
chuvash_voice = chuvash_voice.cast_column("audio", Audio(sampling_rate=16000))
print(chuvash_voice)
print(chuvash_voice["train"][0])

# Merge: Chuvash Voice extends the Common Voice training split,
# while the Common Voice test split is kept for evaluation.
common_voice = DatasetDict({
    "train": concatenate_datasets([comm_voice["train"], chuvash_voice["train"]]),
    "test": comm_voice["test"],
})
print(common_voice)
```

## Text to Speech

Most of the corpus was recorded by a single speaker (**client_id='177'**). The corpus can therefore also be used for speech-synthesis tasks.