Chan-Y committed
Commit eef3d69 · verified · 1 parent: 2fedd59

Update README.md

Files changed (1)
  1. README.md +60 -58
README.md CHANGED
@@ -1,58 +1,60 @@
- ---
- library_name: transformers
- tags:
- - generated_from_trainer
- model-index:
- - name: speecht5_finetuned_tr_mediaspeech_2
-   results: []
- ---
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # speecht5_finetuned_tr_mediaspeech_2
-
- This model was trained from scratch on an unknown dataset.
- It achieves the following results on the evaluation set:
- - eval_loss: 0.5293
- - eval_runtime: 319.9451
- - eval_samples_per_second: 31.702
- - eval_steps_per_second: 15.853
- - epoch: 1.7388
- - step: 2000
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 1e-05
- - train_batch_size: 4
- - eval_batch_size: 2
- - seed: 42
- - gradient_accumulation_steps: 8
- - total_train_batch_size: 32
- - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 500
- - training_steps: 4000
- - mixed_precision_training: Native AMP
-
- ### Framework versions
-
- - Transformers 4.46.3
- - Pytorch 2.5.1+cu124
- - Datasets 3.1.0
- - Tokenizers 0.20.3
 
 
 
+ ---
+ library_name: transformers
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: speecht5_finetuned_tr_mediaspeech_2
+   results: []
+ base_model:
+ - microsoft/speecht5_tts
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # speecht5_finetuned_tr_mediaspeech_2
+
+ This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - eval_loss: 0.5293
+ - eval_runtime: 319.9451
+ - eval_samples_per_second: 31.702
+ - eval_steps_per_second: 15.853
+ - epoch: 1.7388
+ - step: 2000
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 1e-05
+ - train_batch_size: 4
+ - eval_batch_size: 2
+ - seed: 42
+ - gradient_accumulation_steps: 8
+ - total_train_batch_size: 32
+ - optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 500
+ - training_steps: 4000
+ - mixed_precision_training: Native AMP
+
+ ### Framework versions
+
+ - Transformers 4.46.3
+ - Pytorch 2.5.1+cu124
+ - Datasets 3.1.0
+ - Tokenizers 0.20.3
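
For readers reconstructing the run, here is a minimal sketch of how the hyperparameters listed above might map onto `Seq2SeqTrainingArguments`. The `output_dir`, the evaluation and save cadence, and the choice of fp16 for "Native AMP" are assumptions, not taken from this card:

```python
# Hypothetical reconstruction of the training configuration from the
# hyperparameters in the card; only the listed values are from the source.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_finetuned_tr_mediaspeech_2",  # assumed directory name
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,   # effective train batch size: 4 * 8 = 32
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,                  # training_steps: 4000
    seed=42,
    optim="adamw_torch",             # AdamW with betas=(0.9, 0.999), eps=1e-8
    fp16=True,                       # assumption: fp16 AMP (could have been bf16)
    eval_strategy="steps",           # assumption: periodic evaluation
    eval_steps=1000,                 # assumption
    save_steps=1000,                 # assumption
)
```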
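And a minimal inference sketch using the standard SpeechT5 text-to-speech API. The repo id `Chan-Y/speecht5_finetuned_tr_mediaspeech_2` is an assumption based on the commit author and model name, and the zero speaker embedding is only a placeholder; SpeechT5 conditions on a 512-dimensional x-vector speaker embedding:

```python
# Hypothetical usage of the fine-tuned checkpoint; repo id is assumed.
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "Chan-Y/speecht5_finetuned_tr_mediaspeech_2"  # assumed repo id
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Turkish input, since the model name suggests Turkish MediaSpeech data
# ("Merhaba, nasılsınız?" = "Hello, how are you?").
inputs = processor(text="Merhaba, nasılsınız?", return_tensors="pt")

# Placeholder speaker embedding; in practice use a real x-vector
# (e.g. from speechbrain's spkrec-xvect-voxceleb model).
speaker_embeddings = torch.zeros(1, 512)

speech = model.generate_speech(
    inputs["input_ids"], speaker_embeddings, vocoder=vocoder
)
# `speech` is a 1-D waveform tensor at 16 kHz; save it with, e.g.:
# import soundfile as sf; sf.write("out.wav", speech.numpy(), samplerate=16000)
```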