Whisper Medium

This model is a PEFT adapter fine-tuned from b-brave/asr_double_training_15-10-2024_merged on the ASR_BB_and_EC dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4620
  • WER: 36.9269
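
Since the repository ships a PEFT adapter (see Framework versions below), a minimal loading sketch is given here. It assumes the adapter can be attached to the base checkpoint named above with peft.PeftModel; the processor id "openai/whisper-medium" is an assumption, not stated in this card.

```python
# Minimal sketch, assuming the adapter loads with peft.PeftModel on top of the
# base checkpoint named in this card. The processor id "openai/whisper-medium"
# is an assumption (Whisper Medium), not stated explicitly here.
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained(
    "b-brave/asr_double_training_15-10-2024_merged"
)
model = PeftModel.from_pretrained(base, "miosipof/whisper_medium_BB_and_EC_v2.1")
processor = WhisperProcessor.from_pretrained("openai/whisper-medium")  # assumed

# Transcribe a 16 kHz mono waveform held in a NumPy array `audio`:
# inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
# ids = model.generate(input_features=inputs.input_features)
# print(processor.batch_decode(ids, skip_special_tokens=True))
```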

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-06
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: reduce_lr_on_plateau
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 12
  • mixed_precision_training: Native AMP
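
As a rough illustration, the list above maps onto transformers Seq2SeqTrainingArguments as sketched below. This is a hedged reconstruction, not the exact training script; the output_dir is hypothetical.

```python
# Hedged reconstruction of the hyperparameters above as Seq2SeqTrainingArguments;
# output_dir is hypothetical and the actual training script is not shown here.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper_medium_BB_and_EC_v2.1",  # hypothetical
    learning_rate=1e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,   # effective total train batch size: 32
    num_train_epochs=12,
    lr_scheduler_type="reduce_lr_on_plateau",
    warmup_steps=100,
    seed=42,
    fp16=True,                       # Native AMP mixed-precision training
)
```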

Training results

| Training Loss | Epoch   | Step | Validation Loss | WER     |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.7705        | 0.8929  | 100  | 0.4885          | 36.5551 |
| 0.7193        | 1.7857  | 200  | 0.4840          | 36.6791 |
| 0.7376        | 2.6786  | 300  | 0.4808          | 36.4312 |
| 0.6975        | 3.5714  | 400  | 0.4783          | 36.4312 |
| 0.6499        | 4.4643  | 500  | 0.4763          | 35.8116 |
| 0.7137        | 5.3571  | 600  | 0.4744          | 35.9356 |
| 0.6397        | 6.25    | 700  | 0.4727          | 35.9356 |
| 0.6441        | 7.1429  | 800  | 0.4708          | 35.9356 |
| 0.6756        | 8.0357  | 900  | 0.4690          | 35.9356 |
| 0.6331        | 8.9286  | 1000 | 0.4673          | 36.3073 |
| 0.6411        | 9.8214  | 1100 | 0.4656          | 36.3073 |
| 0.6029        | 10.7143 | 1200 | 0.4638          | 36.6791 |
| 0.6229        | 11.6071 | 1300 | 0.4620          | 36.9269 |
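
The WER values above are percentages. A minimal sketch of how such a figure is typically computed with the evaluate library follows; the actual evaluation code used for this model is not shown in the card.

```python
# Sketch of how a WER figure like those above is typically computed with the
# `evaluate` library; the card does not show the actual evaluation code.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["hello world"]        # illustrative model transcriptions
references = ["hello there world"]   # illustrative reference transcripts
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")             # reported as a percentage, e.g. 36.9269
```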

Framework versions

  • PEFT 0.13.2
  • Transformers 4.45.2
  • Pytorch 2.2.0
  • Datasets 3.1.0
  • Tokenizers 0.20.3