End of training
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the common_voice_17_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4619

## Model description

The following hyperparameters were used during training:
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
- mixed_precision_training: Native AMP
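A linear scheduler with 100 warmup steps over 1000 total training steps ramps the learning rate up for the first 100 optimizer steps, then decays it linearly to zero at step 1000. The base learning rate is not shown in this diff, so the sketch below only computes the schedule multiplier, assuming the standard warmup-then-linear-decay shape:

```python
def lr_factor(step: int, warmup_steps: int = 100, total_steps: int = 1000) -> float:
    """Multiplier applied to the base learning rate at a given optimizer step.

    Linear warmup from 0 to 1 over `warmup_steps`, then linear decay to 0
    at `total_steps` (the usual shape of a "linear" scheduler with warmup).
    """
    if step < warmup_steps:
        return step / warmup_steps
    return max(0.0, (total_steps - step) / (total_steps - warmup_steps))

# Peak at the end of warmup, halfway down at step 550, zero at step 1000.
print(lr_factor(100), lr_factor(550), lr_factor(1000))  # → 1.0 0.5 0.0
```

Mixed-precision training (Native AMP) applies on top of this; it changes the numerics of each step, not the schedule.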

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.2577        | 0.3442 | 100  | 0.5713          |
| 4.7919        | 0.6885 | 200  | 0.5288          |
| 4.4862        | 1.0327 | 300  | 0.5080          |
| 4.2187        | 1.3769 | 400  | 0.4920          |
| 4.1509        | 1.7212 | 500  | 0.4841          |
| 4.1573        | 2.0654 | 600  | 0.4809          |
| 4.0679        | 2.4096 | 700  | 0.4745          |
| 4.0088        | 2.7539 | 800  | 0.4671          |
| 3.9249        | 3.0981 | 900  | 0.4624          |
| 3.9106        | 3.4423 | 1000 | 0.4619          |
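The table evaluates every 100 steps, and its Epoch column implies a constant ratio of roughly 290.5 optimizer steps per epoch (1000 steps / 3.4423 epochs). A quick sanity check that every row agrees with that ratio, assuming it is fixed across training:

```python
# (epoch, step) pairs from the training results table above
rows = [(0.3442, 100), (0.6885, 200), (1.0327, 300), (1.3769, 400),
        (1.7212, 500), (2.0654, 600), (2.4096, 700), (2.7539, 800),
        (3.0981, 900), (3.4423, 1000)]

steps_per_epoch = 1000 / 3.4423  # implied by the final row

for epoch, step in rows:
    # every row should agree with the same steps-per-epoch ratio
    assert abs(step / steps_per_epoch - epoch) < 1e-3

print(round(steps_per_epoch, 1))  # → 290.5
```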

### Framework versions

- Transformers 4.46.0.dev0
- Pytorch 2.4.1+cu121
- Datasets 3.0.2
- Tokenizers 0.20.1