chuksDev committed
Commit c3d3fd8 · verified · 1 Parent(s): 0e782d1

End of training

Files changed (1)
  1. README.md +7 -9
README.md CHANGED
@@ -9,18 +9,18 @@ tags:
 datasets:
 - fsicoli/twi
 model-index:
-- name: SpeechT5 TTS Twi
+- name: SpeechT5 TTS Npontu Twi
   results: []
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->

-# SpeechT5 TTS Twi
+# SpeechT5 TTS Npontu Twi

-This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the npontu_twi dataset.
+This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the FsicoliTwi dataset.
 It achieves the following results on the evaluation set:
-- Loss: nan
+- Loss: 0.3824

 ## Model description

@@ -48,17 +48,15 @@ The following hyperparameters were used during training:
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- training_steps: 4000
+- training_steps: 2000
 - mixed_precision_training: Native AMP

 ### Training results

 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-------:|:----:|:---------------:|
-| 0.691 | 14.4928 | 1000 | nan |
-| 0.6337 | 28.9855 | 2000 | nan |
-| 0.6188 | 43.4783 | 3000 | nan |
-| 0.5857 | 57.9710 | 4000 | nan |
+| 0.4207 | 14.4928 | 1000 | 0.3869 |
+| 0.41 | 28.9855 | 2000 | 0.3824 |


 ### Framework versions
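
For readers reproducing the configuration in the updated card, here is a minimal sketch of `Seq2SeqTrainingArguments` mirroring the hyperparameters listed above (AdamW, linear schedule, 500 warmup steps, 2000 training steps, native AMP). The output directory, learning rate, and batch size are assumptions; they do not appear in this diff.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: mirrors the hyperparameters listed in the card.
training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_tts_npontu_twi",  # hypothetical
    learning_rate=1e-5,                    # assumption; not listed in the visible diff
    per_device_train_batch_size=4,         # assumption; not listed in the visible diff
    warmup_steps=500,                      # lr_scheduler_warmup_steps: 500
    max_steps=2000,                        # training_steps: 2000
    lr_scheduler_type="linear",            # lr_scheduler_type: linear
    optim="adamw_torch",                   # OptimizerNames.ADAMW_TORCH
    fp16=True,                             # mixed_precision_training: Native AMP
)
```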
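
And a hedged inference sketch for the fine-tuned checkpoint the card describes, using the standard SpeechT5 classes from `transformers`. The repo id, the example Twi text, and the random speaker embedding are placeholders, not details taken from this commit.

```python
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

repo_id = "chuksDev/speecht5_tts_npontu_twi"  # hypothetical repo id

# Load the fine-tuned acoustic model plus the stock HiFi-GAN vocoder.
processor = SpeechT5Processor.from_pretrained(repo_id)
model = SpeechT5ForTextToSpeech.from_pretrained(repo_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Maakye", return_tensors="pt")  # placeholder Twi text

# SpeechT5 conditions on a 512-dim x-vector speaker embedding; a random vector
# stands in here purely as a placeholder.
speaker_embeddings = torch.randn(1, 512)

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
# `speech` is a 1-D waveform tensor at 16 kHz; save it with soundfile if needed.
```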