---
library_name: peft
language:
  - en
license: apache-2.0
base_model: openai/whisper-small
tags:
  - generated_from_trainer
datasets:
  - mesolitica/IMDA-TTS
metrics:
  - wer
model-index:
  - name: Whisper Small NSC small (500 steps) - Jarrett Er
    results:
      - task:
          type: automatic-speech-recognition
          name: Automatic Speech Recognition
        dataset:
          name: NSC Small section
          type: mesolitica/IMDA-TTS
          split: None
          args: 'config: en, split: train'
        metrics:
          - type: wer
            value: 3.0164184803360063
            name: Wer
---

# Whisper Small NSC small (500 steps) - Jarrett Er

This model is a fine-tuned version of Thecoder3281f/whisper-small-hi-commonvoice17-1000 on the NSC Small section of the mesolitica/IMDA-TTS dataset. It achieves the following results on the evaluation set:

- Loss: 0.0777
- Wer: 3.0164
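
This repository holds a PEFT adapter rather than full model weights, so inference typically loads the base Whisper checkpoint and applies the adapter on top. The snippet below is a minimal sketch, not code from the original training run: `adapter_id` is a placeholder for this repo's id or a local path, and the base checkpoint is read from the adapter's own config.

```python
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftConfig, PeftModel

# Placeholder: replace with this repository's Hub id or a local adapter path.
adapter_id = "path/to/this-adapter"

# The adapter config records which base checkpoint it was trained on.
peft_config = PeftConfig.from_pretrained(adapter_id)
base_model = WhisperForConditionalGeneration.from_pretrained(
    peft_config.base_model_name_or_path
)
processor = WhisperProcessor.from_pretrained(peft_config.base_model_name_or_path)

# Attach the PEFT adapter weights to the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

def transcribe(audio_array, sampling_rate=16_000):
    """Transcribe a 16 kHz mono waveform (e.g. loaded via the datasets library)."""
    inputs = processor(audio_array, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        generated_ids = model.generate(input_features=inputs.input_features)
    return processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
```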

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
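
For reference, these hyperparameters roughly correspond to a `Seq2SeqTrainingArguments` configuration like the sketch below. This is an illustrative reconstruction rather than the original training script: `output_dir`, the evaluation/logging cadence, and `predict_with_generate`/`report_to` are assumptions (the 100-step evaluation interval is inferred from the results table that follows).

```python
from transformers import Seq2SeqTrainingArguments

# Hedged reconstruction of the listed hyperparameters; values not listed in the
# card (output_dir, eval cadence, reporting) are assumptions.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-nsc-small",  # assumed name
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    max_steps=500,
    fp16=True,                 # "Native AMP" mixed precision
    eval_strategy="steps",
    eval_steps=100,            # matches the 100-step evaluation cadence below
    logging_steps=100,
    predict_with_generate=True,  # assumed: needed to compute WER during evaluation
    report_to=["tensorboard"],   # assumption
)
```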

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.0822        | 0.8850 | 100  | 0.0686          | 3.0164 |
| 0.0585        | 1.7699 | 200  | 0.0700          | 3.0928 |
| 0.0317        | 2.6549 | 300  | 0.0726          | 3.0546 |
| 0.0184        | 3.5398 | 400  | 0.0781          | 3.2455 |
| 0.0194        | 4.4248 | 500  | 0.0777          | 3.0164 |
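
WER values in the style reported above can be computed with the `evaluate` library's `wer` metric. A minimal sketch follows; the prediction/reference strings are placeholders for illustration only, not data from this run.

```python
import evaluate

wer_metric = evaluate.load("wer")

# Placeholder strings purely to show the call; the real evaluation compares
# model transcripts against the reference transcripts of the eval split.
predictions = ["the quick brown fox"]
references = ["the quick brown fox jumps"]

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```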

### Framework versions

- PEFT 0.14.0
- Transformers 4.45.2
- Pytorch 2.5.1+cu124
- Datasets 3.2.1.dev0
- Tokenizers 0.20.3