# speecht5_kavinda4
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.3649
## Model description

More information needed
## Intended uses & limitations

More information needed
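Since usage details are not documented, the following is a minimal inference sketch assuming the checkpoint follows the standard SpeechT5 text-to-speech pipeline. The CMU Arctic x-vector used as the speaker embedding is an illustrative placeholder, not the embedding used during fine-tuning:

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

# Load the fine-tuned checkpoint and the standard SpeechT5 HiFi-GAN vocoder.
processor = SpeechT5Processor.from_pretrained("kavinda123321/speecht5_kavinda4")
model = SpeechT5ForTextToSpeech.from_pretrained("kavinda123321/speecht5_kavinda4")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Speaker embedding: a 512-dim x-vector. This one is a placeholder; the
# embeddings used during fine-tuning are not documented in this card.
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hello, this is a test.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)

# SpeechT5 generates 16 kHz audio.
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```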
## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
- mixed_precision_training: Native AMP
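These settings map onto `transformers` `Seq2SeqTrainingArguments` roughly as sketched below. This is a hedged reconstruction, not the actual training script (which is not included in the card); `output_dir` and the per-epoch eval/save cadence are assumptions, though the results table below does report validation loss once per epoch:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_kavinda4",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,   # effective train batch: 4 * 8 = 32
    num_train_epochs=20,
    lr_scheduler_type="linear",
    warmup_steps=100,
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
    eval_strategy="epoch",           # assumed from the per-epoch results
    save_strategy="epoch",           # assumed
)
```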
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7372        | 1.0   | 23   | 0.5522          |
| 0.592         | 2.0   | 46   | 0.5092          |
| 0.5346        | 3.0   | 69   | 0.4220          |
| 0.487         | 4.0   | 92   | 0.4280          |
| 0.4494        | 5.0   | 115  | 0.4160          |
| 0.4327        | 6.0   | 138  | 0.3919          |
| 0.4263        | 7.0   | 161  | 0.3925          |
| 0.4084        | 8.0   | 184  | 0.3778          |
| 0.4011        | 9.0   | 207  | 0.3755          |
| 0.4002        | 10.0  | 230  | 0.3725          |
| 0.3875        | 11.0  | 253  | 0.3659          |
| 0.3824        | 12.0  | 276  | 0.3705          |
| 0.381         | 13.0  | 299  | 0.3717          |
| 0.3792        | 14.0  | 322  | 0.3663          |
| 0.3842        | 15.0  | 345  | 0.3660          |
| 0.3714        | 16.0  | 368  | 0.3679          |
| 0.37          | 17.0  | 391  | 0.3621          |
| 0.3646        | 18.0  | 414  | 0.3693          |
| 0.3675        | 19.0  | 437  | 0.3620          |
| 0.3623        | 20.0  | 460  | 0.3649          |
### Framework versions
- Transformers 4.53.1
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.2
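A suggested pin set for reproducing this environment (`soundfile` is only needed for the inference sketch above; the CUDA 12.4 build of torch depends on your platform's default wheel):

```bash
pip install transformers==4.53.1 torch==2.6.0 datasets==2.14.4 tokenizers==0.21.2 soundfile
```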