pranavdaware committed on
Commit 550b9c5 · verified · 1 Parent(s): c81b1d7

Update README.md

Files changed (1)
  1. README.md +30 -35
README.md CHANGED
@@ -57,40 +57,35 @@ The *SpeechT5 TTS Technical Train2* is built on the *SpeechT5* architecture and
 
  The model was fine-tuned on a *custom dataset*, curated for enhancing TTS outputs. This dataset consists of various types of text that help the model generate more natural speech, making it suitable for TTS applications.
 
- ---
-
- ## ⚙ Training Procedure
-
- ### ⚙ *Hyperparameters*:
 
  The model was trained with the following hyperparameters:
- ```yaml
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:-----:|:----:|:---------------:|
- | 1.1921 | 100.0 | 100 | 0.4136 |
- | 0.8435 | 200.0 | 200 | 0.3791 |
- | 0.8294 | 300.0 | 300 | 0.3766 |
- | 0.7959 | 400.0 | 400 | 0.3744 |
- | 0.7918 | 500.0 | 500 | 0.3763 |
-
-
- ### Framework versions
-
- - Transformers 4.46.0.dev0
- - Pytorch 2.4.1+cu121
- - Datasets 3.0.2
- - Tokenizers 0.20.1
 
  The model was fine-tuned on a *custom dataset*, curated for enhancing TTS outputs. This dataset consists of various types of text that help the model generate more natural speech, making it suitable for TTS applications.
 
+ ### ⚙ *Hyperparameters:*
 
  The model was trained with the following hyperparameters:
+
+ - *Learning Rate*: 1e-05
+ - *Train Batch Size*: 16
+ - *Eval Batch Size*: 8
+ - *Seed*: 42
+ - *Gradient Accumulation Steps*: 2
+ - *Total Train Batch Size*: 32
+ - *Optimizer*: AdamW (betas=(0.9, 0.999), epsilon=1e-08)
+ - *LR Scheduler Type*: Linear
+ - *Warmup Steps*: 50
+ - *Training Steps*: 500
+ - *Mixed Precision Training*: Native AMP
+
+ ### ⚙ *📊 Training Results:*
+ | 🏋️‍♂️ Training Loss | 🕑 Epoch | 🛤 Step | 📉 Validation Loss |
+ |:-------------------:|:-------:|:-------:|:-----------------:|
+ | 1.1921 | 100.0 | 100 | 0.4136 |
+ | 0.8435 | 200.0 | 200 | 0.3791 |
+ | 0.8294 | 300.0 | 300 | 0.3766 |
+ | 0.7959 | 400.0 | 400 | 0.3744 |
+ | 0.7918 | 500.0 | 500 | 0.3763 |
+
+
+ ### 📦 Framework Versions
+
+ - *Transformers*: 4.46.0.dev0
+ - *PyTorch*: 2.4.1+cu121
+ - *Datasets*: 3.0.2
+ - *Tokenizers*: 0.20.1
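
For readers who want to set up a comparable run, the hyperparameters listed in the updated README map naturally onto the Hugging Face `Seq2SeqTrainingArguments` API. The sketch below is illustrative only: the `output_dir` name is a placeholder and the surrounding `Seq2SeqTrainer`/dataset wiring from the original fine-tuning script is not part of this commit; only the numeric values come from the list above.

```python
# Minimal sketch (not taken from the commit) of how the README's hyperparameters
# could be expressed with Hugging Face `Seq2SeqTrainingArguments`.
# `output_dir` is a placeholder; the actual trainer and data pipeline used for the
# SpeechT5 fine-tuning run are not shown in this diff.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_tts_technical_train2",  # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # 16 x 2 = total train batch size of 32
    optim="adamw_torch",             # AdamW; betas=(0.9, 0.999), eps=1e-8 are the defaults
    lr_scheduler_type="linear",
    warmup_steps=50,
    max_steps=500,                   # 500 training steps
    fp16=True,                       # mixed precision via native AMP
)
```

With the framework versions listed above (Transformers 4.46.0.dev0, PyTorch 2.4.1+cu121), `fp16=True` enables the same native AMP mixed-precision path noted in the README.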