miosipof committed on

Commit 12057ae
1 Parent(s): ddb19b1

End of training
README.md CHANGED

@@ -1,4 +1,5 @@
 ---
+library_name: peft
 language:
 - it
 license: apache-2.0
@@ -9,7 +10,6 @@ datasets:
 - ASR_Synthetic_Speecht5_TTS
 metrics:
 - wer
-library_name: peft
 model-index:
 - name: Whisper Medium
   results:
@@ -24,7 +24,7 @@ model-index:
       args: default
     metrics:
     - type: wer
-      value: 71.81688125894135
+      value: 171.5307582260372
       name: Wer
 ---
 
@@ -33,10 +33,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # Whisper Medium
 
-This model is a fine-tuned version of [b-brave/asr_michael_medium_04-09-2024](https://huggingface.co/b-brave/asr_michael_medium_04-09-2024) on the ASR_Synthetic_Speecht5_TTS dataset.
+This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the ASR_Synthetic_Speecht5_TTS dataset.
 It achieves the following results on the evaluation set:
-- Loss: 2.5412
-- Wer: 71.8169
+- Loss: 2.9413
+- Wer: 171.5308
 
 ## Model description
 
@@ -56,55 +56,35 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.001
-- train_batch_size: 16
-- eval_batch_size: 16
+- train_batch_size: 4
+- eval_batch_size: 4
 - seed: 42
+- gradient_accumulation_steps: 2
+- total_train_batch_size: 8
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 50
-- num_epochs: 3
+- training_steps: 200
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch  | Step | Validation Loss | Wer      |
 |:-------------:|:------:|:----:|:---------------:|:--------:|
-| 4.8154        | 0.0977 | 50   | 3.5107          | 336.0515 |
-| 1.533         | 0.1953 | 100  | 3.2969          | 187.2675 |
-| 1.149         | 0.2930 | 150  | 2.7849          | 151.6452 |
-| 0.9487        | 0.3906 | 200  | 2.6655          | 155.3648 |
-| 0.7776        | 0.4883 | 250  | 2.3826          | 97.4249  |
-| 0.6215        | 0.5859 | 300  | 2.5996          | 126.3233 |
-| 0.5563        | 0.6836 | 350  | 2.5859          | 155.2217 |
-| 0.4523        | 0.7812 | 400  | 2.6057          | 141.6309 |
-| 0.4472        | 0.8789 | 450  | 2.6161          | 92.1316  |
-| 0.3825        | 0.9766 | 500  | 2.5222          | 75.8226  |
-| 0.2964        | 1.0742 | 550  | 2.7815          | 156.7954 |
-| 0.263         | 1.1719 | 600  | 2.7807          | 101.4306 |
-| 0.2474        | 1.2695 | 650  | 2.4792          | 77.9685  |
-| 0.2133        | 1.3672 | 700  | 2.5918          | 84.6924  |
-| 0.2148        | 1.4648 | 750  | 2.7335          | 86.4092  |
-| 0.1955        | 1.5625 | 800  | 2.5843          | 79.1130  |
-| 0.1943        | 1.6602 | 850  | 2.5548          | 70.6724  |
-| 0.1648        | 1.7578 | 900  | 2.5600          | 84.6924  |
-| 0.1411        | 1.8555 | 950  | 2.5128          | 70.8155  |
-| 0.156         | 1.9531 | 1000 | 2.6710          | 76.8240  |
-| 0.1215        | 2.0508 | 1050 | 2.6387          | 68.0973  |
-| 0.092         | 2.1484 | 1100 | 2.5359          | 73.3906  |
-| 0.0896        | 2.2461 | 1150 | 2.5957          | 75.5365  |
-| 0.0861        | 2.3438 | 1200 | 2.5860          | 78.6838  |
-| 0.0827        | 2.4414 | 1250 | 2.5877          | 74.5351  |
-| 0.0893        | 2.5391 | 1300 | 2.5659          | 74.5351  |
-| 0.0815        | 2.6367 | 1350 | 2.5564          | 74.9642  |
-| 0.0599        | 2.7344 | 1400 | 2.5302          | 71.3877  |
-| 0.0621        | 2.8320 | 1450 | 2.5382          | 71.8169  |
-| 0.0598        | 2.9297 | 1500 | 2.5412          | 71.8169  |
+| 6.8678        | 0.0244 | 25   | 4.4434          | 154.2203 |
+| 2.6877        | 0.0489 | 50   | 3.4026          | 144.0629 |
+| 1.8792        | 0.0733 | 75   | 3.2962          | 77.3963  |
+| 1.5587        | 0.0978 | 100  | 3.2969          | 78.9700  |
+| 1.4194        | 0.1222 | 125  | 2.9920          | 75.1073  |
+| 1.2356        | 0.1467 | 150  | 2.9471          | 184.2632 |
+| 1.1741        | 0.1711 | 175  | 2.9542          | 189.4134 |
+| 1.0451        | 0.1956 | 200  | 2.9413          | 171.5308 |
 
 
 ### Framework versions
 
 - PEFT 0.13.2
-- Transformers 4.43.4
+- Transformers 4.44.2
 - Pytorch 2.2.0
 - Datasets 3.1.0
 - Tokenizers 0.19.1
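The updated hyperparameters in the card above describe a short 200-step PEFT run. As a rough illustration only, they map onto the standard `transformers` `Seq2SeqTrainingArguments` as sketched below; the actual training script is not part of this commit, so the output directory and the eval/logging cadence are assumptions inferred from the 25-step evaluation interval in the results table.

```python
# Illustrative sketch only: maps the hyperparameters listed in the model card
# onto Seq2SeqTrainingArguments. The real training script is not in this commit;
# output_dir and the eval/logging cadence are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-medium-peft",  # placeholder path
    learning_rate=1e-3,                # learning_rate: 0.001
    per_device_train_batch_size=4,     # train_batch_size: 4
    per_device_eval_batch_size=4,      # eval_batch_size: 4
    gradient_accumulation_steps=2,     # total_train_batch_size: 8
    lr_scheduler_type="linear",
    warmup_steps=50,                   # lr_scheduler_warmup_steps: 50
    max_steps=200,                     # training_steps: 200
    fp16=True,                         # mixed_precision_training: Native AMP
    seed=42,
    eval_strategy="steps",
    eval_steps=25,                     # assumed from the 25-step cadence in the results table
    logging_steps=25,
)

# A Seq2SeqTrainer would then be built with the PEFT-wrapped Whisper model,
# the ASR_Synthetic_Speecht5_TTS splits, a data collator, and a WER metric.
```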
adapter_model.safetensors CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3d559fccd5c6b94c353905963f4e047978548480be2ea58b9de42a0f1307b557
+oid sha256:002912ddd11806afd2c9104a270119e1bfb83923eab43726cb25ce3bb21cdbec
 size 18915424

runs/Nov02_15-35-40_a31dfab43883/events.out.tfevents.1730561792.a31dfab43883.8001.0 ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f39ee68da94d4c73d847a3fbcf248278f86f5bf1ddcdb26ed3dd8c1f9e482f39
+size 6733

runs/Nov02_15-37-35_a31dfab43883/events.out.tfevents.1730561865.a31dfab43883.8763.0 ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e645fbdf79154587b225bbb7b43c30a0b09b9fb352f081505ce8e25ef628e0a8
+size 6732

runs/Nov02_15-38-55_a31dfab43883/events.out.tfevents.1730561950.a31dfab43883.8929.0 ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:55ef468e7b445fea8b16a5e3c7d920e2478ddae8968b85d9a1d5a192a663c67e
+size 7458

training_args.bin CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:0326badaa8ec2f36e9ed530f08a8c632e14461670ff352440e862305cbec06a0
+oid sha256:cbd761ce3ae3b878a3c0b074291501e5aafd5d9e9d57840ee210b306cbf65837
 size 5304
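Because this commit updates only a PEFT adapter (`adapter_model.safetensors`, about 19 MB) rather than full model weights, inference typically means loading the `openai/whisper-medium` base model and attaching the adapter. A minimal sketch follows, assuming the standard `peft` loading pattern; the adapter repository id is a placeholder, since the final repo name is not shown in this diff.

```python
# Sketch: attach the PEFT adapter to the openai/whisper-medium base model.
# "your-namespace/whisper-medium-adapter" is a placeholder repo id.
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
model = PeftModel.from_pretrained(base, "your-namespace/whisper-medium-adapter")
processor = WhisperProcessor.from_pretrained("openai/whisper-medium")

# processor(...) turns 16 kHz audio into log-mel input features,
# and model.generate(...) produces the transcription token ids.
```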