
Whisper Medium CGN

This model is a fine-tuned version of openai/whisper-medium on the kul-speech-lab/CGN dataset. It achieves the following results on the evaluation set (a usage sketch follows the list):

  • Loss: 0.2639
  • WER: 10.7278
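
As a usage sketch, the checkpoint can be loaded with the standard transformers speech-recognition pipeline. The model id and audio path below are placeholders, not part of this card:

```python
# A minimal sketch, assuming the standard transformers ASR pipeline.
# The model id and the audio path are placeholders.
from transformers import pipeline

transcriber = pipeline(
    "automatic-speech-recognition",
    model="<this-repo-id>",  # placeholder: this model's Hub id
)

# Whisper works on 16 kHz audio; the pipeline handles decoding and resampling.
result = transcriber("sample_flemish_audio.wav")  # placeholder audio file
print(result["text"])
```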

Training hyperparameters

The following hyperparameters were used during training (a hedged training-arguments sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 64
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 15000
  • mixed_precision_training: Native AMP
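
A hedged sketch of how these settings could map onto transformers Seq2SeqTrainingArguments. The output directory, the evaluation cadence (inferred from the 1000-step intervals in the results table), and predict_with_generate are assumptions, not stated in the card:

```python
# A sketch of the listed hyperparameters as Seq2SeqTrainingArguments.
# output_dir, the eval cadence, and predict_with_generate are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-cgn",  # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=15000,
    fp16=True,                    # Native AMP mixed precision
    evaluation_strategy="steps",  # assumption: matches the 1000-step results table
    eval_steps=1000,              # assumption
    predict_with_generate=True,   # assumption: needed to compute WER during eval
)
# The listed Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer
# defaults, so no explicit optimizer arguments are required.
```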

Training results

Training Loss   Epoch   Step    Validation Loss   WER
0.1116          1.01     1000   0.2978            15.2127
0.0786          2.03     2000   0.2842            13.4852
0.2042          3.04     3000   0.2656            13.3590
0.1183          4.05     4000   0.2667            12.6977
0.0584          6.01     5000   0.2604            12.0993
0.0126          7.02     6000   0.2776            12.1477
0.0837          8.04     7000   0.2541            11.9397
0.0229          9.05     8000   0.2663            11.3221
0.042           11.01    9000   0.2549            11.4863
0.0075          12.02   10000   0.2775            11.0780
0.008           13.03   11000   0.2499            10.9759
0.0739          14.05   12000   0.2308            10.9441
0.0379          16.01   13000   0.2423            10.7926
0.02            17.02   14000   0.2629            10.7699
0.0111          18.03   15000   0.2639            10.7278
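
The WER column is the word error rate in percent. A minimal sketch of computing it with the Hugging Face evaluate library (an assumption; the card does not say which metric implementation was used):

```python
# A minimal sketch, assuming the evaluate library's "wer" metric.
# The Dutch sentences are illustrative only, not drawn from CGN.
import evaluate

wer_metric = evaluate.load("wer")
wer = wer_metric.compute(
    predictions=["dit is een voorbeeld"],
    references=["dit is een voorbeeld zin"],
)
print(f"WER: {100 * wer:.4f}")  # expressed as a percentage, as in the table
```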

Framework versions

  • Transformers 4.26.0.dev0
  • Pytorch 1.13.0
  • Datasets 2.7.1.dev0
  • Tokenizers 0.13.2
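
A small environment check against these versions (the .dev0 suffixes indicate source installs; nearby releases should generally also work):

```python
# Prints local library versions for comparison with the card's list.
import transformers, torch, datasets, tokenizers

print(transformers.__version__)  # card: 4.26.0.dev0
print(torch.__version__)         # card: 1.13.0
print(datasets.__version__)      # card: 2.7.1.dev0
print(tokenizers.__version__)    # card: 0.13.2
```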

Whisper medium model fine-tuned on the Flemish part of the Corpus Gesproken Nederlands (CGN).
