whisper-medium-atco2-asr

This model is a fine-tuned version of openai/whisper-medium on an unspecified dataset (the model name suggests the ATCO2-ASR air traffic control corpus). It achieves the following results on the evaluation set:

  • Loss: 0.7769
  • WER: 23.9769

Model description

More information needed

Intended uses & limitations

More information needed
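
Pending details from the author, a minimal usage sketch follows, assuming the standard transformers pipeline API; the model ID is taken from this card, and the audio file name is a placeholder:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for automatic speech recognition.
asr = pipeline(
    "automatic-speech-recognition",
    model="jlvdoorn/whisper-medium-atco2-asr",
)

# "atc_clip.wav" is a hypothetical example file, not shipped with the model.
print(asr("atc_clip.wav")["text"])
```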

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the equivalent Seq2SeqTrainingArguments follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 128
  • eval_batch_size: 128
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 10
  • num_epochs: 100
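
As a reference, the list above maps onto transformers' Seq2SeqTrainingArguments roughly as sketched below; output_dir and the evaluation cadence are assumptions (the cadence is inferred from the 50-step intervals in the results table):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-atco2-asr",  # assumed, not stated in the card
    learning_rate=1e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,                # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=10,
    num_train_epochs=100,
    evaluation_strategy="steps",   # inferred: validation logged every 50 steps
    eval_steps=50,
)
```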

Training results

| Training Loss | Epoch | Step | Validation Loss | WER     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.5002        | 12.5  | 50   | 0.6266          | 23.6655 |
| 0.0013        | 25.0  | 100  | 0.6931          | 21.6637 |
| 0.0004        | 37.5  | 150  | 0.7297          | 21.7082 |
| 0.0003        | 50.0  | 200  | 0.7492          | 21.8416 |
| 0.0002        | 62.5  | 250  | 0.7622          | 21.8861 |
| 0.0002        | 75.0  | 300  | 0.7703          | 21.9306 |
| 0.0002        | 87.5  | 350  | 0.7753          | 23.0872 |
| 0.0002        | 100.0 | 400  | 0.7769          | 23.9769 |
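
The WER values above appear to be percentages. For reference, a score in that convention can presumably be reproduced with the evaluate library (the transcripts below are invented for illustration):

```python
import evaluate  # also requires the jiwer backend

wer_metric = evaluate.load("wer")

# Hypothetical prediction/reference pair, not taken from the dataset.
predictions = ["cleared to land runway two seven"]
references = ["cleared to land runway two seven right"]

# evaluate's "wer" returns a fraction; multiply by 100 to match
# the percentage-style numbers in the table.
print(100 * wer_metric.compute(predictions=predictions, references=references))
```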

Framework versions

  • Transformers 4.36.2
  • Pytorch 2.1.2
  • Datasets 2.15.0
  • Tokenizers 0.15.0

Model tree for jlvdoorn/whisper-medium-atco2-asr

Fine-tuned from openai/whisper-medium.