---
license: apache-2.0
language:
- ca
tags:
- TTS
- audio
- synthesis
- VITS
- speech
- coqui.ai
- pytorch
datasets:
- mozilla-foundation/common_voice_12_0
- projecte-aina/festcat_trimmed_denoised
- projecte-aina/openslr-slr69-ca-trimmed-denoised
---
# Aina Project's Catalan multi-speaker text-to-speech model
## Model description
This model was trained from scratch using the [Coqui TTS](https://github.com/coqui-ai/TTS) toolkit on a combination of 3 datasets:
[Festcat](http://festcat.talp.cat/devel.php), [OpenSLR69](http://openslr.org/69/) and [Common Voice v12](https://commonvoice.mozilla.org/ca).
For the training, we used 487 hours of recordings from 255 speakers.
We trimmed and denoised the data; the processed versions of all datasets except Common Voice are published as separate datasets:
[festcat_trimmed_denoised](https://huggingface.co/datasets/projecte-aina/festcat_trimmed_denoised) and [openslr69_trimmed_denoised](https://huggingface.co/datasets/projecte-aina/openslr-slr69-ca-trimmed-denoised).
A live inference demo is available in our Hugging Face Spaces, [here](https://huggingface.co/spaces/projecte-aina/tts-ca-coqui-vits-multispeaker).
The model needs our fork of [espeak-ng](https://github.com/projecte-aina/espeak-ng) to work correctly. For installation and deployment please consult the docker file of our [inference demo](https://huggingface.co/spaces/projecte-aina/tts-ca-coqui-vits-multispeaker/blob/main/Dockerfile).
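For inspection, the trimmed and denoised subsets can be loaded directly with the `datasets` library. This is a minimal sketch, assuming `datasets` is installed; the split name is an assumption and should be checked against the dataset cards.
```python
# Minimal sketch: load the published trimmed/denoised subsets for inspection.
# The split name "train" is an assumption; check the dataset cards.
from datasets import load_dataset

festcat = load_dataset("projecte-aina/festcat_trimmed_denoised", split="train")
openslr = load_dataset("projecte-aina/openslr-slr69-ca-trimmed-denoised", split="train")
print(festcat)
print(openslr)
```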
## Intended uses and limitations
You can use this model to generate synthetic speech in Catalan with different voices.
## How to use
### Usage
Required libraries:
```bash
pip install git+https://github.com/coqui-ai/TTS@dev#egg=TTS
```
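The checkpoint, configuration, and speakers files used below can be downloaded from this repository, for example with `huggingface_hub`. This is a minimal sketch: the repository id and filenames are assumptions and should be checked against the files actually listed in this repo.
```python
# Minimal sketch, assuming the files are hosted in this repository.
# The repo_id and filenames below are assumptions; check the "Files" tab.
from huggingface_hub import hf_hub_download

repo_id = "projecte-aina/tts-ca-coqui-vits-multispeaker"  # assumed repo id
model_path = hf_hub_download(repo_id=repo_id, filename="checkpoint.pth")
config_path = hf_hub_download(repo_id=repo_id, filename="config.json")
speakers_file_path = hf_hub_download(repo_id=repo_id, filename="speakers.pth")
```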
Synthesize speech using Python:
```python
from TTS.utils.synthesizer import Synthesizer

# Absolute paths to the files downloaded from this repository
model_path = "/path/to/checkpoint.pth"          # model checkpoint
config_path = "/path/to/config.json"            # model configuration
speakers_file_path = "/path/to/speakers.pth"    # speaker IDs/embeddings

text = "Text to synthesize"
speaker_idx = "Speaker ID"  # one of the speaker IDs in speakers.pth

# Build the synthesizer from the checkpoint, config and speakers file
synthesizer = Synthesizer(
    model_path, config_path, speakers_file_path, None, None, None,
)

# Generate the waveform (a list of audio samples)
wavs = synthesizer.tts(text, speaker_idx)
```
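The `tts()` call returns the generated waveform as a list of samples. A short usage sketch for writing it to disk with the synthesizer's built-in helper (the output filename is just an example):
```python
# Write the generated samples to an output WAV file (path is an example)
synthesizer.save_wav(wavs, "output.wav")
```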
## Training
### Training Procedure
### Data preparation
### Hyperparameters
The model is based on VITS, proposed by [Kim et al.](https://arxiv.org/abs/2106.06103). The following hyperparameters were set in the Coqui TTS framework.
| Hyperparameter | Value |
|------------------------------------|----------------------------------|
| Model | vits |
| Batch Size | 16 |
| Eval Batch Size | 8 |
| Mixed Precision | false |
| Window Length | 1024 |
| Hop Length | 256 |
| FFT Size                           | 1024                             |
| Num Mels | 80 |
| Phonemizer | espeak |
| Phoneme Language                   | ca                               |
| Text Cleaners | multilingual_cleaners |
| Formatter | vctk_old |
| Optimizer | adam |
| Adam betas | (0.8, 0.99) |
| Adam eps | 1e-09 |
| Adam weight decay | 0.01 |
| Learning Rate Gen | 0.0001 |
| LR Scheduler Gen                   | ExponentialLR                    |
| LR Scheduler Gamma Gen             | 0.999875                         |
| Learning Rate Disc                 | 0.0001                           |
| LR Scheduler Disc                  | ExponentialLR                    |
| LR Scheduler Gamma Disc            | 0.999875                         |
The model was trained for 730,962 steps.
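These values come from the Coqui `config.json` shipped with the model. A hedged sketch of inspecting them programmatically; the attribute names follow recent Coqui TTS VITS configs and are assumptions that may differ between versions.
```python
# Sketch: read the training configuration shipped with the checkpoint.
# Attribute names follow recent Coqui TTS VITS configs and are assumptions.
from TTS.config import load_config

config = load_config("/path/to/config.json")
print(config.model)             # "vits"
print(config.batch_size)        # 16
print(config.audio.fft_size)    # 1024
print(config.audio.hop_length)  # 256
print(config.audio.win_length)  # 1024
print(config.audio.num_mels)    # 80
```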
## Additional information
### Author
Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center
### Contact information
For further information, send an email to [email protected]
### Copyright
Copyright (c) 2023 Language Technologies Unit (LangTech) at Barcelona Supercomputing Center
### Licensing Information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
The training of the model was possible thanks to the computing time provided by the Galician Supercomputing Center, CESGA ([Centro de Supercomputación de Galicia](https://www.cesga.es/)).
## Disclaimer
<details>
<summary>Click to expand</summary>
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
</details>