---
language:
- ca
license: mit
base_model: microsoft/speecht5_tts
tags:
- TTS
- generated_from_trainer
- text-to-speech
datasets:
- openslr
model-index:
- name: SpeechT5 TTS Catalan
  results: []
---

# SpeechT5 TTS Catalan

This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the OpenSLR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4360

## Model description

This model was trained following the instructions in this [notebook](https://colab.research.google.com/drive/1i7I5pzBcU3WDFarDnzweIj4-sVVoIUFJ), but using the Catalan subset of the OpenSLR dataset. The main change is the use of trimming to remove the long stretches of silence that the original recordings contain. The notebook used for this training is available [here](https://colab.research.google.com/drive/1B4idPGWxtAftOft6I47UjOB_l1yoiXzn).
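
As an illustration, this kind of silence trimming can be done with `librosa.effects.trim`; the exact threshold used in the training notebook is not stated here, so the value below is an assumption.

```python
import librosa

# Load a clip at the 16 kHz sample rate SpeechT5 expects.
audio, sr = librosa.load("clip.wav", sr=16000)

# Trim leading/trailing silence. top_db=30 is an illustrative
# threshold, not necessarily the value used in the notebook.
trimmed, _ = librosa.effects.trim(audio, top_db=30)
```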

## Intended uses & limitations

The model is intended for Catalan text-to-speech synthesis. Its limitations have not been documented; a minimal inference sketch follows.
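
A sketch of running the model with the Transformers SpeechT5 API. The repository id and the speaker-embedding source below are assumptions, since neither is given in this card.

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "your-username/speecht5_tts_ca"  # hypothetical repo id

processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# SpeechT5 needs a 512-dim speaker x-vector; the CMU ARCTIC embeddings
# are a common stand-in, but were not necessarily used for this model.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Bon dia, com estas?", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```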

## Training and evaluation data

The model was trained and evaluated on the Catalan subset of the OpenSLR dataset, with the long silences in the original recordings trimmed as described above.
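
For reference, the Catalan portion of OpenSLR is available on the Hub; the sketch below assumes it corresponds to the `SLR69` configuration of the `openslr` dataset, which is not confirmed by this card.

```python
from datasets import load_dataset

# SLR69 is OpenSLR's crowdsourced Catalan corpus; this mapping is an
# assumption about which subset the training notebook used.
dataset = load_dataset("openslr", "SLR69", split="train")
print(dataset)
```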

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 8000
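
For reference, a sketch of how these settings map onto `Seq2SeqTrainingArguments`; the output directory is hypothetical, and anything not listed above (e.g. `fp16`, logging cadence) is left at its default.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_tts_ca",   # hypothetical output path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size: 16 * 2 = 32
    warmup_steps=500,
    max_steps=8000,
    lr_scheduler_type="linear",
    seed=42,
)
```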

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5039        | 8.37  | 1000 | 0.4530          |
| 0.4723        | 16.74 | 2000 | 0.4345          |
| 0.4583        | 25.10 | 3000 | 0.4316          |
| 0.4565        | 33.47 | 4000 | 0.4294          |
| 0.4363        | 41.84 | 5000 | 0.4329          |
| 0.4460        | 50.21 | 6000 | 0.4331          |
| 0.4508        | 58.58 | 7000 | 0.4336          |
| 0.4529        | 66.95 | 8000 | 0.4360          |


### Framework versions

- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3