# whisper_havest_0035
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 3.5897
- Train Accuracy: 0.0150
- Train WER: 1.0
- Validation Loss: 4.5822
- Validation Accuracy: 0.0130
- Validation WER: 1.0
- Epoch: 34
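A WER of 1.0 means a 100% word error rate: on average, one edit (substitution, insertion, or deletion) per reference word, i.e. the model recovers essentially none of the reference transcript. WER is the word-level Levenshtein distance between hypothesis and reference divided by the reference length; a minimal sketch:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat", "the cat sat"))  # 0.0
print(wer("the cat sat", "dog dog dog"))  # 1.0
```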
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
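AdamWeightDecay is Adam with decoupled weight decay (AdamW): the decay term is applied directly to the weights rather than folded into the gradient. A minimal sketch of one update step using the hyperparameters above (pure Python, illustrative only; the actual training used the `transformers` TensorFlow optimizer):

```python
def adamw_step(w, grad, m, v, t,
               lr=1e-05, beta_1=0.9, beta_2=0.999,
               epsilon=1e-07, weight_decay_rate=0.01):
    """One decoupled-weight-decay Adam update for a single scalar parameter."""
    m = beta_1 * m + (1 - beta_1) * grad       # first-moment (mean) EMA
    v = beta_2 * v + (1 - beta_2) * grad ** 2  # second-moment (variance) EMA
    m_hat = m / (1 - beta_1 ** t)              # bias correction, step t >= 1
    v_hat = v / (1 - beta_2 ** t)
    # Decoupled weight decay: subtracted from w directly, not via the gradient.
    w = w - lr * (m_hat / (v_hat ** 0.5 + epsilon) + weight_decay_rate * w)
    return w, m, v

w, m, v = 1.0, 0.0, 0.0
for t in range(1, 4):  # a few steps with a constant toy gradient
    w, m, v = adamw_step(w, grad=0.5, m=m, v=v, t=t)
```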
### Training results
| Train Loss | Train Accuracy | Train WER | Validation Loss | Validation Accuracy | Validation WER | Epoch |
|:----------|:---------------|:----------|:----------------|:--------------------|:---------------|:------|
9.9191 | 0.0046 | 1.0 | 8.5836 | 0.0067 | 1.0 | 0 |
8.0709 | 0.0083 | 1.0 | 7.4667 | 0.0089 | 1.0 | 1 |
7.1652 | 0.0100 | 1.0 | 6.8204 | 0.0112 | 1.0 | 2 |
6.7196 | 0.0114 | 1.0 | 6.5192 | 0.0114 | 1.0 | 3 |
6.4115 | 0.0115 | 1.0 | 6.2357 | 0.0115 | 1.0 | 4 |
6.1085 | 0.0115 | 1.0 | 5.9657 | 0.0115 | 1.0 | 5 |
5.8206 | 0.0115 | 1.0 | 5.7162 | 0.0115 | 1.0 | 6 |
5.5567 | 0.0115 | 1.0 | 5.4963 | 0.0115 | 1.0 | 7 |
5.3223 | 0.0116 | 1.0 | 5.3096 | 0.0116 | 1.0 | 8 |
5.1222 | 0.0117 | 1.0 | 5.1600 | 0.0117 | 1.0 | 9 |
4.9580 | 0.0117 | 1.0 | 5.0391 | 0.0118 | 1.0 | 10 |
4.8251 | 0.0119 | 1.0 | 4.9427 | 0.0118 | 1.0 | 11 |
4.7171 | 0.0119 | 1.0 | 4.8691 | 0.0119 | 1.0 | 12 |
4.6284 | 0.0121 | 1.0 | 4.8123 | 0.0120 | 1.0 | 13 |
4.5508 | 0.0121 | 1.0 | 4.7620 | 0.0121 | 1.0 | 14 |
4.4855 | 0.0123 | 1.0 | 4.7260 | 0.0121 | 1.0 | 15 |
4.4305 | 0.0124 | 1.0 | 4.7018 | 0.0123 | 1.0 | 16 |
4.3788 | 0.0125 | 1.0 | 4.6738 | 0.0123 | 1.0 | 17 |
4.3305 | 0.0127 | 1.0 | 4.6525 | 0.0124 | 1.0 | 18 |
4.2860 | 0.0128 | 1.0 | 4.6401 | 0.0125 | 1.0 | 19 |
4.2451 | 0.0130 | 1.0 | 4.6234 | 0.0126 | 1.0 | 20 |
4.1994 | 0.0132 | 1.0 | 4.6077 | 0.0128 | 1.0 | 21 |
4.1521 | 0.0133 | 1.0 | 4.6098 | 0.0129 | 1.0 | 22 |
4.1148 | 0.0134 | 1.0 | 4.5919 | 0.0129 | 1.0 | 23 |
4.0701 | 0.0135 | 1.0 | 4.6038 | 0.0128 | 1.0 | 24 |
4.0199 | 0.0137 | 1.0 | 4.5777 | 0.0130 | 1.0 | 25 |
3.9631 | 0.0138 | 1.0 | 4.5734 | 0.0131 | 1.0 | 26 |
3.9175 | 0.0140 | 1.0 | 4.5866 | 0.0129 | 1.0 | 27 |
3.8690 | 0.0142 | 1.0 | 4.5900 | 0.0129 | 1.0 | 28 |
3.8276 | 0.0143 | 1.0 | 4.5602 | 0.0131 | 1.0 | 29 |
3.7499 | 0.0145 | 1.0 | 4.5619 | 0.0132 | 1.0 | 30 |
3.6968 | 0.0147 | 1.0 | 4.6203 | 0.0133 | 1.0 | 31 |
3.6714 | 0.0149 | 1.0 | 4.7075 | 0.0133 | 1.0 | 32 |
3.6318 | 0.0149 | 1.0 | 4.6638 | 0.0125 | 1.0 | 33 |
3.5897 | 0.0150 | 1.0 | 4.5822 | 0.0130 | 1.0 | 34 |
### Framework versions
- Transformers 4.25.0.dev0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2