---
language:
  - en
license: apache-2.0
base_model: openai/whisper-small
tags:
  - generated_from_trainer
datasets:
  - UCLASS
metrics:
  - wer
model-index:
  - name: Whisper Small En - Sridhar Vanga
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: sridhar1ga/UCLASS
          type: UCLASS
          args: 'config: en'
        metrics:
          - name: Wer
            type: wer
            value: 41.49512459371614
---

# Whisper Small En - Sridhar Vanga

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the sridhar1ga/UCLASS dataset. It achieves the following results on the evaluation set:

- Loss: 2.0974
- Wer: 41.4951
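
The card does not include a usage snippet, so here is a minimal inference sketch with the `transformers` ASR pipeline. The model id is inferred from the repository name and `sample.wav` is a placeholder audio file; neither is stated in the card itself.

```python
# Hedged usage sketch: the model id is inferred from the repository name and
# "sample.wav" is a placeholder audio file; adjust both as needed.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="sridhar1ga/whisper-small-en",  # assumed repository id
)

# Whisper expects 16 kHz audio; the pipeline resamples file inputs automatically.
result = asr("sample.wav")
print(result["text"])
```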

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training; a configuration sketch mapping them onto `Seq2SeqTrainingArguments` follows the list:

- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
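
The training script itself is not part of this card, but as a rough illustration the values above would map onto `transformers.Seq2SeqTrainingArguments` roughly as shown below. The output directory, the 500-step evaluation cadence, and `predict_with_generate` are assumptions, not values taken from this card.

```python
# Hedged sketch of Seq2SeqTrainingArguments mirroring the hyperparameters above.
# output_dir, the eval cadence, and predict_with_generate are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-en",   # assumed output directory
    learning_rate=1e-5,
    per_device_train_batch_size=16,    # assuming a single training device
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,                         # "Native AMP" mixed precision
    evaluation_strategy="steps",       # assumed: matches the 500-step eval points below
    eval_steps=500,
    predict_with_generate=True,        # typical for Whisper fine-tuning; an assumption
)
```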

### Training results

| Training Loss | Epoch    | Step | Validation Loss | Wer     |
|:-------------:|:--------:|:----:|:---------------:|:-------:|
| 0.0513        | 18.5185  | 500  | 1.5771          | 61.3218 |
| 0.0056        | 37.0370  | 1000 | 1.8013          | 42.1452 |
| 0.0089        | 55.5556  | 1500 | 1.8905          | 65.0054 |
| 0.0054        | 74.0741  | 2000 | 1.7860          | 44.9621 |
| 0.0016        | 92.5926  | 2500 | 1.9571          | 41.9285 |
| 0.0001        | 111.1111 | 3000 | 2.0281          | 41.0618 |
| 0.0001        | 129.6296 | 3500 | 2.0595          | 41.9285 |
| 0.0001        | 148.1481 | 4000 | 2.0805          | 41.3868 |
| 0.0001        | 166.6667 | 4500 | 2.0927          | 41.4951 |
| 0.0001        | 185.1852 | 5000 | 2.0974          | 41.4951 |
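
The Wer column is a word error rate reported as a percentage. A minimal sketch of computing the same metric with the `evaluate` library follows; the prediction and reference strings are placeholders, and the use of `evaluate` is an assumption about how the score was produced.

```python
# Minimal WER sketch; the example strings are placeholders, not data from this card.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["the quick brown fox", "hello world"]
references = ["the quick brown fox jumps", "hello world"]

# `evaluate` returns WER as a fraction; the table above reports it scaled by 100.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # ~14.29 here: 1 error over 7 reference words
```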

### Framework versions

- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
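
A quick way to check a local environment against these versions, as a small sketch (the expected values in the comments come from the list above):

```python
# Print installed versions to compare against the ones listed in this card.
import datasets
import tokenizers
import torch
import transformers

print(transformers.__version__)  # expected: 4.41.1
print(torch.__version__)         # expected: 2.3.0+cu121
print(datasets.__version__)      # expected: 2.19.1
print(tokenizers.__version__)    # expected: 0.19.1
```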