
Hanhpt23/whisper-tiny-smmmu

This model is a fine-tuned version of openai/whisper-tiny on the Hanhpt23/SMMMU dataset. It achieves the following results on the evaluation set (a usage sketch follows the list):

  • Loss: 0.5068
  • WER: 16.9043
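
The card itself ships no usage snippet; as a minimal sketch, the checkpoint can be loaded through the transformers ASR pipeline. The repository id Hanhpt23/whisper-tiny-smmmu is taken from the model page, and the audio path is illustrative:

```python
# Minimal inference sketch (not from the card): load the fine-tuned
# checkpoint with the transformers ASR pipeline and transcribe a file.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Hanhpt23/whisper-tiny-smmmu",  # repo id from the model page
)

result = asr("sample.wav")  # illustrative local audio path
print(result["text"])
```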

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a Seq2SeqTrainingArguments sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 10
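
A hedged sketch of how these values map onto transformers' Seq2SeqTrainingArguments; the output_dir is a hypothetical placeholder, and the author's actual Trainer setup is not shown in the card:

```python
# Sketch only: the listed hyperparameters expressed as training arguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-smmmu",  # hypothetical output path
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,                     # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=10,
)
```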

Training results

| Training Loss | Epoch | Step | Validation Loss | WER     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log        | 1.0   | 14   | 1.0476          | 17.5967 |
| 1.1834        | 2.0   | 28   | 0.6758          | 19.3686 |
| 1.1834        | 3.0   | 42   | 0.5599          | 20.5458 |
| 0.3274        | 4.0   | 56   | 0.4398          | 13.8208 |
| 0.3274        | 5.0   | 70   | 0.4560          | 24.6558 |
| 0.0781        | 6.0   | 84   | 0.5055          | 21.6456 |
| 0.0781        | 7.0   | 98   | 0.5160          | 33.9348 |
| 0.0947        | 8.0   | 112  | 0.5665          | 19.8289 |
| 0.0674        | 9.0   | 126  | 0.5255          | 17.9878 |
| 0.0674        | 10.0  | 140  | 0.5068          | 16.9043 |
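
The WER column reports word error rate scaled by 100. As a minimal sketch of how such a figure is computed with the evaluate library (the transcript strings are illustrative, not from the dataset):

```python
# Sketch: compute WER the way Hugging Face evaluation loops commonly do.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["the cat sat on the mat"]   # hypothetical model output
references  = ["the cat sat on a mat"]     # hypothetical ground truth

# compute() returns WER as a fraction; the table reports 100 * WER.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```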

Framework versions

  • Transformers 4.41.1
  • PyTorch 2.3.0
  • Datasets 2.19.1
  • Tokenizers 0.19.1