---
language:
  - da
license: mit
base_model: microsoft/speecht5_tts
tags:
  - generated_from_trainer
datasets:
  - alexandrainst/nst-da
model-index:
  - name: speecht5_tts-finetuned-nst-da
    results: []
---

# speecht5_tts-finetuned-nst-da

This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the [NST Danish ASR Database](https://huggingface.co/datasets/alexandrainst/nst-da) dataset. It achieves the following results on the evaluation set:

- Loss: 0.3298
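
Since the usage sections below are still placeholders, here is a minimal inference sketch. It is an assumption-laden example, not the author's documented usage: the Hub repo id `JackismyShephard/speecht5_tts-finetuned-nst-da` is inferred from this card's name, and the speaker x-vector is borrowed from the `Matthijs/cmu-arctic-xvectors` dataset used in the stock SpeechT5 examples (a Danish speaker embedding would likely fit better).

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

# Assumed repo id, inferred from the model name on this card.
checkpoint = "JackismyShephard/speecht5_tts-finetuned-nst-da"
processor = SpeechT5Processor.from_pretrained(checkpoint)
model = SpeechT5ForTextToSpeech.from_pretrained(checkpoint)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# SpeechT5 conditions on a 512-dim speaker x-vector; this English speaker
# from the stock docs example is only a placeholder for Danish speech.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hej, hvordan har du det?", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```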

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
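
The card does not describe the data yet, but its metadata points at `alexandrainst/nst-da` on the Hub. A hedged loading sketch follows; the split name and any dataset-specific configuration are assumptions:

```python
from datasets import load_dataset

# Assumes the default configuration and a standard "train" split.
nst_da = load_dataset("alexandrainst/nst-da", split="train")
print(nst_da)
```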

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):

- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
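
A minimal sketch of how this list maps onto `transformers` training arguments, assuming the usual `Seq2SeqTrainer` setup behind `generated_from_trainer` cards; the `output_dir` is a placeholder, and the Adam betas and epsilon above are already the `Trainer` defaults:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_tts-finetuned-nst-da",  # placeholder path
    learning_rate=5e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=20,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 corresponds to the
    # defaults for adam_beta1, adam_beta2, and adam_epsilon.
)
```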

### Training results

| Training Loss | Epoch | Step   | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.3762        | 1.0   | 9429   | 0.3670          |
| 0.3596        | 2.0   | 18858  | 0.3577          |
| 0.3498        | 3.0   | 28287  | 0.3535          |
| 0.3356        | 4.0   | 37716  | 0.3414          |
| 0.3405        | 5.0   | 47145  | 0.3378          |
| 0.3312        | 6.0   | 56574  | 0.3397          |
| 0.3326        | 7.0   | 66003  | 0.3377          |
| 0.3299        | 8.0   | 75432  | 0.3384          |
| 0.3279        | 9.0   | 84861  | 0.3363          |
| 0.3203        | 10.0  | 94290  | 0.3335          |
| 0.3235        | 11.0  | 103719 | 0.3367          |
| 0.3188        | 12.0  | 113148 | 0.3365          |
| 0.3141        | 13.0  | 122577 | 0.3324          |
| 0.3176        | 14.0  | 132006 | 0.3345          |
| 0.3221        | 15.0  | 141435 | 0.3331          |
| 0.3157        | 16.0  | 150864 | 0.3317          |
| 0.314         | 17.0  | 160293 | 0.3298          |
| 0.3164        | 18.0  | 169722 | 0.3316          |
| 0.3172        | 19.0  | 179151 | 0.3315          |
| 0.3179        | 20.0  | 188580 | 0.3318          |

### Framework versions

- Transformers 4.37.2
- Pytorch 2.1.1+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2