Whisper Large v3 Turbo ko

This model is a fine-tuned version of openai/whisper-large-v3-turbo on a custom Korean speech dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1905
  • Wer: 12.1097
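
As a quick usage sketch, the checkpoint can be loaded with the transformers ASR pipeline. This is a minimal example, not taken from the training code: the audio path is a placeholder, and the Korean language hint is assumed from the model name.

```python
import torch
from transformers import pipeline

# Load the fine-tuned checkpoint via the ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="nomnoos37/stt-turbo-1225-v1-full",
    device="cuda:0" if torch.cuda.is_available() else "cpu",
)

# "sample.wav" is a placeholder path to a Korean speech recording;
# the language/task hints are assumptions based on the model name.
result = asr(
    "sample.wav",
    generate_kwargs={"language": "korean", "task": "transcribe"},
)
print(result["text"])
```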

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 4e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • training_steps: 500
  • mixed_precision_training: Native AMP
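
For reference, a hedged sketch of how these hyperparameters might map onto Seq2SeqTrainingArguments. The output_dir is a placeholder, and fp16 is an assumption based on the "Native AMP" note (bf16 is also possible):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-turbo-ko",  # placeholder, not from the source
    learning_rate=4e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",              # AdamW, betas=(0.9, 0.999), eps=1e-8 (defaults)
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=500,
    fp16=True,                        # assumed form of "Native AMP"
)
```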

Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer      |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.689         | 0.0107 | 10   | 1.0086          | 45.3169  |
| 0.0756        | 0.0214 | 20   | 0.6343          | 38.0322  |
| 0.0145        | 0.0322 | 30   | 0.6367          | 41.3434  |
| 0.0212        | 0.0429 | 40   | 0.7120          | 42.6679  |
| 0.0205        | 0.0536 | 50   | 0.4694          | 32.6395  |
| 0.016         | 0.0643 | 60   | 0.5533          | 38.7890  |
| 0.014         | 0.0750 | 70   | 0.4716          | 30.8420  |
| 0.0115        | 0.0857 | 80   | 0.6191          | 30.9366  |
| 0.0228        | 0.0965 | 90   | 0.7998          | 43.8978  |
| 0.0191        | 0.1072 | 100  | 0.7273          | 36.4238  |
| 0.026         | 0.1179 | 110  | 0.7720          | 42.3841  |
| 0.0196        | 0.1286 | 120  | 0.9171          | 79.4702  |
| 0.0178        | 0.1393 | 130  | 1.1460          | 136.0454 |
| 0.037         | 0.1501 | 140  | 0.5558          | 62.8193  |
| 0.0237        | 0.1608 | 150  | 0.6369          | 109.6500 |
| 0.0195        | 0.1715 | 160  | 0.6671          | 38.7890  |
| 0.0151        | 0.1822 | 170  | 0.6717          | 53.9262  |
| 0.0479        | 0.1929 | 180  | 0.5412          | 68.1173  |
| 0.0187        | 0.2036 | 190  | 0.5311          | 60.2649  |
| 0.0191        | 0.2144 | 200  | 0.4761          | 33.3964  |
| 0.0149        | 0.2251 | 210  | 0.6630          | 38.5998  |
| 0.0285        | 0.2358 | 220  | 0.6162          | 36.8023  |
| 0.0134        | 0.2465 | 230  | 0.5166          | 31.5043  |
| 0.0143        | 0.2572 | 240  | 0.6748          | 55.3453  |
| 0.0185        | 0.2680 | 250  | 0.5091          | 28.1930  |
| 0.0106        | 0.2787 | 260  | 0.4697          | 28.0984  |
| 0.0163        | 0.2894 | 270  | 0.4483          | 24.4087  |
| 0.0186        | 0.3001 | 280  | 0.3112          | 22.1381  |
| 0.018         | 0.3108 | 290  | 0.3752          | 26.7739  |
| 0.0067        | 0.3215 | 300  | 0.5734          | 28.0984  |
| 0.0129        | 0.3323 | 310  | 0.3768          | 22.3273  |
| 0.0196        | 0.3430 | 320  | 0.3069          | 23.4626  |
| 0.0096        | 0.3537 | 330  | 0.3197          | 20.5298  |
| 0.0143        | 0.3644 | 340  | 0.3839          | 43.8032  |
| 0.0082        | 0.3751 | 350  | 0.3098          | 80.1325  |
| 0.0099        | 0.3859 | 360  | 0.2946          | 77.6727  |
| 0.0146        | 0.3966 | 370  | 0.3007          | 19.3945  |
| 0.0115        | 0.4073 | 380  | 0.2685          | 17.3132  |
| 0.0058        | 0.4180 | 390  | 0.2686          | 16.7455  |
| 0.0067        | 0.4287 | 400  | 0.2572          | 15.6102  |
| 0.0095        | 0.4394 | 410  | 0.2400          | 14.9480  |
| 0.0085        | 0.4502 | 420  | 0.2436          | 15.2318  |
| 0.005         | 0.4609 | 430  | 0.2426          | 15.0426  |
| 0.0044        | 0.4716 | 440  | 0.2318          | 13.8127  |
| 0.0063        | 0.4823 | 450  | 0.2262          | 12.7720  |
| 0.0093        | 0.4930 | 460  | 0.2098          | 12.1097  |
| 0.0054        | 0.5038 | 470  | 0.2042          | 12.2990  |
| 0.0046        | 0.5145 | 480  | 0.1941          | 11.9205  |
| 0.0071        | 0.5252 | 490  | 0.1913          | 12.1097  |
| 0.0066        | 0.5359 | 500  | 0.1905          | 12.1097  |
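
The Wer column is a percentage word error rate over the decoded predictions (values above 100 are possible when hypotheses contain many insertions). A minimal sketch of how such a score is typically computed with the evaluate library; the example strings below are placeholders, not data from this model's evaluation set:

```python
import evaluate

wer_metric = evaluate.load("wer")

# Placeholder prediction/reference pairs for illustration only.
predictions = ["안녕하세요 반갑습니다"]
references = ["안녕하세요, 반갑습니다"]

# evaluate returns WER as a fraction; the table reports it as a percentage.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```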

Framework versions

  • Transformers 4.47.1
  • Pytorch 2.5.1+cu124
  • Datasets 3.2.0
  • Tokenizers 0.21.0
