kujirahand committed on
Commit 6243025 · 1 Parent(s): 89fdacd

Model save

Files changed (3):
  1. README.md +90 -25
  2. generation_config.json +1 -0
  3. model.safetensors +1 -1
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 license: apache-2.0
- base_model: openai/whisper-large
 tags:
 - generated_from_trainer
 metrics:
@@ -15,10 +15,10 @@ should probably proofread and complete it, then remove this comment. -->

 # whisper-medium-r22-e

- This model is a fine-tuned version of [openai/whisper-large](https://huggingface.co/openai/whisper-large) on the None dataset.
 It achieves the following results on the evaluation set:
- - Loss: 0.2567
- - Wer: 32.4317

 ## Model description

@@ -38,39 +38,104 @@ More information needed

 The following hyperparameters were used during training:
 - learning_rate: 1e-05
- - train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 5
- - training_steps: 150
 - mixed_precision_training: Native AMP

 ### Training results

- | Training Loss | Epoch | Step | Validation Loss | Wer |
- |:-------------:|:-----:|:----:|:---------------:|:-------:|
- | 4.2827 | 0.02 | 10 | 2.4594 | 30.6262 |
- | 1.5383 | 0.04 | 20 | 0.9529 | 36.0561 |
- | 0.5967 | 0.05 | 30 | 0.4230 | 34.7607 |
- | 0.3559 | 0.07 | 40 | 0.3960 | 33.9352 |
- | 0.314 | 0.09 | 50 | 0.3285 | 32.7270 |
- | 0.3339 | 0.11 | 60 | 0.3362 | 33.3244 |
- | 0.3148 | 0.13 | 70 | 0.2927 | 31.6464 |
- | 0.3128 | 0.14 | 80 | 0.2896 | 32.5458 |
- | 0.3136 | 0.16 | 90 | 0.2828 | 32.8613 |
- | 0.272 | 0.18 | 100 | 0.2818 | 33.9419 |
- | 0.1936 | 0.2 | 110 | 0.2702 | 30.9148 |
- | 0.2541 | 0.22 | 120 | 0.2644 | 31.8209 |
- | 0.2957 | 0.23 | 130 | 0.2614 | 31.6531 |
- | 0.2867 | 0.25 | 140 | 0.2574 | 31.6397 |
- | 0.2085 | 0.27 | 150 | 0.2567 | 32.4317 |
 ### Framework versions

- - Transformers 4.35.0.dev0
 - Pytorch 2.1.0+cu118
- - Datasets 2.14.6
 - Tokenizers 0.14.1
 
 ---
 license: apache-2.0
+ base_model: openai/whisper-medium
 tags:
 - generated_from_trainer
 metrics:

 # whisper-medium-r22-e

+ This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset.
 It achieves the following results on the evaluation set:
+ - Loss: 0.3784
+ - Wer: 100.0

 ## Model description

 The following hyperparameters were used during training:
 - learning_rate: 1e-05
+ - train_batch_size: 16
 - eval_batch_size: 8
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 5
+ - training_steps: 800
 - mixed_precision_training: Native AMP

 ### Training results

+ | Training Loss | Epoch | Step | Validation Loss | Wer |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|
+ | 6.0027 | 0.06 | 10 | 3.8236 | 29.3631 |
+ | 2.668 | 0.12 | 20 | 1.8668 | 27.7539 |
+ | 1.5247 | 0.18 | 30 | 1.0451 | 25.8273 |
+ | 0.7177 | 0.24 | 40 | 0.3820 | 100.0 |
+ | 0.342 | 0.3 | 50 | 0.3398 | 100.0 |
+ | 0.331 | 0.36 | 60 | 0.3243 | 100.0340 |
+ | 0.3139 | 0.42 | 70 | 0.3175 | 100.0227 |
+ | 0.291 | 0.48 | 80 | 0.2983 | 100.0340 |
+ | 0.3178 | 0.54 | 90 | 0.2907 | 100.0340 |
+ | 0.2516 | 0.6 | 100 | 0.2933 | 100.0567 |
+ | 0.3004 | 0.66 | 110 | 0.2860 | 100.0907 |
+ | 0.2923 | 0.72 | 120 | 0.2962 | 100.1587 |
+ | 0.3067 | 0.78 | 130 | 0.2887 | 100.0340 |
+ | 0.2967 | 0.84 | 140 | 0.2802 | 100.0 |
+ | 0.3059 | 0.9 | 150 | 0.2734 | 100.0 |
+ | 0.2465 | 0.96 | 160 | 0.2686 | 100.0 |
+ | 0.1953 | 1.02 | 170 | 0.2677 | 100.0793 |
+ | 0.1611 | 1.08 | 180 | 0.2665 | 100.0453 |
+ | 0.1548 | 1.14 | 190 | 0.2644 | 100.0 |
+ | 0.1379 | 1.2 | 200 | 0.2781 | 100.0 |
+ | 0.1593 | 1.27 | 210 | 0.2765 | 100.0 |
+ | 0.1266 | 1.33 | 220 | 0.2805 | 100.0 |
+ | 0.1407 | 1.39 | 230 | 0.2669 | 100.0567 |
+ | 0.1301 | 1.45 | 240 | 0.2708 | 100.0793 |
+ | 0.1546 | 1.51 | 250 | 0.2713 | 100.0793 |
+ | 0.1447 | 1.57 | 260 | 0.2723 | 100.0793 |
+ | 0.1762 | 1.63 | 270 | 0.2689 | 100.0 |
+ | 0.148 | 1.69 | 280 | 0.2693 | 100.0680 |
+ | 0.1468 | 1.75 | 290 | 0.2682 | 100.0340 |
+ | 0.1747 | 1.81 | 300 | 0.2688 | 100.0340 |
+ | 0.106 | 1.87 | 310 | 0.2606 | 100.0 |
+ | 0.1517 | 1.93 | 320 | 0.2606 | 100.0 |
+ | 0.143 | 1.99 | 330 | 0.2644 | 100.0 |
+ | 0.085 | 2.05 | 340 | 0.2644 | 100.0 |
+ | 0.0733 | 2.11 | 350 | 0.2840 | 100.0 |
+ | 0.0606 | 2.17 | 360 | 0.2879 | 100.0 |
+ | 0.071 | 2.23 | 370 | 0.2851 | 100.0 |
+ | 0.0518 | 2.29 | 380 | 0.2975 | 100.0 |
+ | 0.068 | 2.35 | 390 | 0.2936 | 100.0 |
+ | 0.0553 | 2.41 | 400 | 0.3062 | 100.0 |
+ | 0.049 | 2.47 | 410 | 0.3019 | 100.0 |
+ | 0.0621 | 2.53 | 420 | 0.3021 | 100.0 |
+ | 0.0593 | 2.59 | 430 | 0.2941 | 100.0 |
+ | 0.0604 | 2.65 | 440 | 0.2960 | 100.0 |
+ | 0.0711 | 2.71 | 450 | 0.2996 | 100.0 |
+ | 0.0643 | 2.77 | 460 | 0.2907 | 100.0 |
+ | 0.0554 | 2.83 | 470 | 0.2902 | 100.0 |
+ | 0.0595 | 2.89 | 480 | 0.2992 | 100.0 |
+ | 0.0693 | 2.95 | 490 | 0.2936 | 99.8527 |
+ | 0.0411 | 3.01 | 500 | 0.2937 | 100.0 |
+ | 0.0192 | 3.07 | 510 | 0.3174 | 100.0 |
+ | 0.0105 | 3.13 | 520 | 0.3468 | 100.0 |
+ | 0.0339 | 3.19 | 530 | 0.3439 | 100.0 |
+ | 0.0222 | 3.25 | 540 | 0.3571 | 100.0 |
+ | 0.0372 | 3.31 | 550 | 0.3393 | 100.0 |
+ | 0.0219 | 3.37 | 560 | 0.3468 | 100.0 |
+ | 0.0223 | 3.43 | 570 | 0.3341 | 100.0 |
+ | 0.0239 | 3.49 | 580 | 0.3393 | 100.0 |
+ | 0.0322 | 3.55 | 590 | 0.3378 | 100.0 |
+ | 0.0299 | 3.61 | 600 | 0.3296 | 100.0 |
+ | 0.0223 | 3.67 | 610 | 0.3367 | 100.0 |
+ | 0.0234 | 3.73 | 620 | 0.3345 | 100.0 |
+ | 0.0191 | 3.8 | 630 | 0.3395 | 100.0 |
+ | 0.0207 | 3.86 | 640 | 0.3439 | 100.0 |
+ | 0.0258 | 3.92 | 650 | 0.3440 | 100.0 |
+ | 0.0209 | 3.98 | 660 | 0.3442 | 100.0 |
+ | 0.0164 | 4.04 | 670 | 0.3551 | 100.0 |
+ | 0.0067 | 4.1 | 680 | 0.3559 | 100.0 |
+ | 0.0094 | 4.16 | 690 | 0.3628 | 100.0 |
+ | 0.0096 | 4.22 | 700 | 0.3661 | 100.0 |
+ | 0.0073 | 4.28 | 710 | 0.3682 | 100.0 |
+ | 0.0106 | 4.34 | 720 | 0.3717 | 100.0 |
+ | 0.0067 | 4.4 | 730 | 0.3749 | 100.0 |
+ | 0.005 | 4.46 | 740 | 0.3785 | 100.0 |
+ | 0.0101 | 4.52 | 750 | 0.3803 | 100.0 |
+ | 0.0084 | 4.58 | 760 | 0.3784 | 100.0 |
+ | 0.0079 | 4.64 | 770 | 0.3770 | 100.0 |
+ | 0.0038 | 4.7 | 780 | 0.3772 | 100.0 |
+ | 0.0057 | 4.76 | 790 | 0.3780 | 100.0 |
+ | 0.0103 | 4.82 | 800 | 0.3784 | 100.0 |

 ### Framework versions

+ - Transformers 4.36.0.dev0
 - Pytorch 2.1.0+cu118
+ - Datasets 2.14.7.dev0
 - Tokenizers 0.14.1
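As a sanity check on the updated log (a back-of-envelope calculation derived from the table above, not stated anywhere in the card): step 800 falls at epoch 4.82, which together with the train batch size of 16 pins down the approximate number of optimizer steps per epoch and training examples.

```python
# Back-of-envelope check derived from the updated training table above.
total_steps = 800        # training_steps from the hyperparameter list
final_epoch = 4.82       # epoch column at step 800
train_batch_size = 16    # train_batch_size from the hyperparameter list

steps_per_epoch = total_steps / final_epoch                 # optimizer steps per epoch
approx_train_examples = steps_per_epoch * train_batch_size  # examples seen per epoch

print(round(steps_per_epoch))        # ≈ 166
print(round(approx_train_examples))  # ≈ 2656
```

This suggests a training set of roughly 2,650 examples, consistent with the jump from 150 to 800 steps taking the run from about a quarter of an epoch to nearly five epochs.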
generation_config.json CHANGED
@@ -148,6 +148,7 @@
 "max_length": 448,
 "no_timestamps_token_id": 50363,
 "pad_token_id": 50257,
+ "return_timestamps": false,
 "suppress_tokens": [
 1,
 2,
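The one-line generation_config.json change disables timestamp prediction by default. A minimal sketch of inspecting the affected keys, reconstructed from the hunk above (the full `suppress_tokens` list is truncated in the diff and therefore omitted here):

```python
import json

# Keys visible in the hunk above; suppress_tokens is truncated in the
# diff view, so it is left out of this reconstruction.
fragment = json.loads("""
{
  "max_length": 448,
  "no_timestamps_token_id": 50363,
  "pad_token_id": 50257,
  "return_timestamps": false
}
""")

# The key added by this commit: timestamps are off by default at generation time.
print(fragment["return_timestamps"])  # False
```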
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:0b3d71908e5688c3a063824263823674f911d1cea14100c24168e8139cff5652
+ oid sha256:16c17158d246b866cc2702254feab189566860c45f747aa9ed59078c0c1b9db8
 size 3055544304