WasuratS committed on
Commit f1f1e34 · 1 Parent(s): 11463ff

update model card README.md

Files changed (1): README.md (+16, -5)
README.md CHANGED
@@ -2,6 +2,7 @@
 language:
 - nl
 license: mit
+base_model: microsoft/speecht5_tts
 tags:
 - generated_from_trainer
 datasets:
@@ -17,6 +18,8 @@ should probably proofread and complete it, then remove this comment. -->
 # SpeechT5 Dutch - WasuratS
 
 This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.4614
 
 ## Model description
 
@@ -39,18 +42,26 @@ The following hyperparameters were used during training:
 - train_batch_size: 4
 - eval_batch_size: 2
 - seed: 42
-- distributed_type: multi-GPU
 - gradient_accumulation_steps: 8
 - total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
 - training_steps: 4000
-- mixed_precision_training: Native AMP
+
+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss |
+|:-------------:|:-----:|:----:|:---------------:|
+| 0.5223        | 4.3   | 1000 | 0.4793          |
+| 0.4936        | 8.61  | 2000 | 0.4668          |
+| 0.4969        | 12.91 | 3000 | 0.4640          |
+| 0.4929        | 17.21 | 4000 | 0.4614          |
+
 
 ### Framework versions
 
-- Transformers 4.29.2
-- Pytorch 2.0.1+cu117
-- Datasets 2.12.0
+- Transformers 4.31.0
+- Pytorch 2.0.1+cu118
+- Datasets 2.13.1
 - Tokenizers 0.13.3
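
For readers reproducing the setup, here is a minimal sketch of how the hyperparameters listed in the diff map onto `Seq2SeqTrainingArguments`, the arguments class used in the usual Hugging Face SpeechT5 fine-tuning recipe. The `output_dir` name is hypothetical, and the learning rate is omitted because it is not shown in this hunk:

```python
# Sketch only: maps the hyperparameters from the diff onto
# Seq2SeqTrainingArguments. output_dir is a hypothetical name; the
# learning rate is not shown in this hunk, so it is not set here.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_finetuned_voxpopuli_nl",  # hypothetical
    per_device_train_batch_size=4,   # train_batch_size: 4
    per_device_eval_batch_size=2,    # eval_batch_size: 2
    gradient_accumulation_steps=8,   # 4 * 8 = total_train_batch_size 32
    lr_scheduler_type="linear",      # lr_scheduler_type: linear
    warmup_steps=500,                # lr_scheduler_warmup_steps: 500
    max_steps=4000,                  # training_steps: 4000
    seed=42,                         # seed: 42
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer
    # default, so no explicit optimizer arguments are needed.
)
```

Note that the effective batch size of 32 comes from the arithmetic in the comments: a per-device batch of 4 accumulated over 8 steps.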
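The card itself carries no usage snippet, so here is a hedged inference sketch for the fine-tuned checkpoint. The repo id below is a placeholder under the committer's namespace, not the confirmed model id, and the speaker-embedding dataset and index follow the Hugging Face SpeechT5 documentation:

```python
# Inference sketch for a fine-tuned SpeechT5 TTS checkpoint. The model id
# is a placeholder; substitute the actual published repo id.
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

model_id = "WasuratS/speecht5-dutch"  # placeholder, not confirmed
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# SpeechT5 conditions generation on a 512-dim x-vector speaker embedding;
# the CMU ARCTIC x-vectors are the stand-in used in the HF docs.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Goedemorgen, dit is een test.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("output.wav", speech.numpy(), samplerate=16000)  # 16 kHz output
```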