Baybars committed · Commit 4f5a638 · 1 Parent(s): 681919a

tags added to model card

Files changed (1)
  1. README.md +21 -1
README.md CHANGED
@@ -1,7 +1,27 @@
+---
+license: cc-by-4.0
+
+language:
+- ca
+
+tags:
+- TTS
+- audio
+- synthesis
+- VITS
+- speech
+- coqui.ai
+- pytorch
+
+datasets:
+- mozilla-foundation/common_voice_8_0
+
+---
+
 # Aina Project's Catalan multi-speaker text-to-speech model
 ## Model description
 
-This model was trained from scratch using the [Coqui TTS](https://github.com/coqui-ai/TTS) toolkit on a combination of 3 datasets: [Festcat](http://festcat.talp.cat/devel.php), [OpenSLR](http://openslr.org/69/) and [Common Voice](https://commonvoice.mozilla.org/ca). For the training, 101460 utterances consisting of 257 speakers were used, which corresponds to nearly 138 hours of speech. [Here](https://huggingface.co/spaces/projecte-aina/VITS_ca_multispeaker) you can find a demo of the model.
+This model was trained from scratch using the [Coqui TTS](https://github.com/coqui-ai/TTS) toolkit on a combination of 3 datasets: [Festcat](http://festcat.talp.cat/devel.php), [OpenSLR](http://openslr.org/69/) and [Common Voice](https://commonvoice.mozilla.org/ca). For the training, 101,460 utterances from 257 speakers were used, corresponding to nearly 138 hours of speech. [Here](https://huggingface.co/spaces/projecte-aina/VITS_ca_multispeaker) you can find a demo of the model. A live inference demo is available [here](https://huggingface.co/spaces/projecte-aina/tts-ca-coqui-vits-multispeaker).
 
 ## Intended uses and limitations
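The model card above points to hosted demos but does not show local inference. A minimal sketch, assuming the Coqui TTS package is installed (`pip install TTS`) and that the checkpoint and config from this repo have been downloaded — the file names `model.pth`/`config.json` and the speaker argument are assumptions, not values confirmed by this commit:

```python
# Hypothetical local-inference sketch for a Coqui VITS multi-speaker model.
# All paths below are assumptions; adjust them to the files actually shipped
# in the repository.
def synthesize(text: str, speaker: str, out_path: str = "out.wav",
               model_path: str = "model.pth",
               config_path: str = "config.json") -> str:
    """Render `text` in the given speaker's voice and write a WAV file."""
    # Imported lazily so the sketch can be read/loaded without Coqui TTS installed.
    from TTS.api import TTS

    tts = TTS(model_path=model_path, config_path=config_path)
    # Multi-speaker models require a speaker; valid names are listed in tts.speakers.
    tts.tts_to_file(text=text, speaker=speaker, file_path=out_path)
    return out_path
```

Equivalently, the `tts` command-line tool that ships with Coqui TTS accepts `--model_path`, `--config_path`, `--speaker_idx`, and `--text` for one-off synthesis.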