---
library_name: transformers
language:
  - en
license: apache-2.0
base_model: openai/whisper-small
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: Drone test En - Siang Yi
    results: []
---

# Drone test En - Siang Yi

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the drone command test3 dataset. It achieves the following results on the evaluation set:

- Loss: 0.6268
- Wer: 8.3333
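
Since this is a Whisper ASR checkpoint, a minimal inference sketch with the `transformers` pipeline may be useful. The repo id and audio filename below are placeholders, not values taken from this card:

```python
# Minimal inference sketch. Replace the repo id with the actual Hub id
# of this model (or a local checkpoint path); "command.wav" stands in for
# a 16 kHz mono recording of a drone command.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="CSY1109/drone_test1",  # placeholder repo id
)

result = asr("command.wav")
print(result["text"])
```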

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto `Seq2SeqTrainingArguments` follows the list):

- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
- mixed_precision_training: Native AMP
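
As referenced above, here is a minimal sketch of how these values could map onto `Seq2SeqTrainingArguments`. The output directory and the evaluation/logging cadence are assumptions; dataset loading, the model, and the data collator are omitted:

```python
# Sketch only: reproduces the listed hyperparameters, not the full training run.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-drone",  # assumed, not from the original run
    learning_rate=3e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=2,       # effective train batch size: 8
    optim="adamw_torch",                 # betas=(0.9, 0.999), eps=1e-8 are the defaults
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=200,
    fp16=True,                           # mixed precision (native AMP)
)
```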

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 1.86          | 10.0  | 10   | 0.6997          | 25.0    |
| 0.0773        | 20.0  | 20   | 0.1492          | 16.6667 |
| 0.0062        | 30.0  | 30   | 0.5699          | 8.3333  |
| 0.0028        | 40.0  | 40   | 0.5715          | 8.3333  |
| 0.0006        | 50.0  | 50   | 0.5512          | 8.3333  |
| 0.0228        | 60.0  | 60   | 0.6065          | 8.3333  |
| 0.0           | 70.0  | 70   | 0.5899          | 8.3333  |
| 0.0001        | 80.0  | 80   | 0.6822          | 8.3333  |
| 0.0           | 90.0  | 90   | 0.6161          | 8.3333  |
| 0.0           | 100.0 | 100  | 0.6305          | 8.3333  |
| 0.0           | 110.0 | 110  | 0.6301          | 8.3333  |
| 0.0           | 120.0 | 120  | 0.6296          | 8.3333  |
| 0.0           | 130.0 | 130  | 0.6284          | 8.3333  |
| 0.0           | 140.0 | 140  | 0.6283          | 8.3333  |
| 0.0           | 150.0 | 150  | 0.6281          | 8.3333  |
| 0.0           | 160.0 | 160  | 0.6270          | 8.3333  |
| 0.0           | 170.0 | 170  | 0.6280          | 8.3333  |
| 0.0           | 180.0 | 180  | 0.6280          | 8.3333  |
| 0.0           | 190.0 | 190  | 0.6269          | 8.3333  |
| 0.0           | 200.0 | 200  | 0.6268          | 8.3333  |
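
The Wer values above appear to be on a 0-100 scale. As an illustration only (the transcripts below are made up and not from this run), a word error rate in that style can be computed with the `evaluate` library:

```python
# Illustrative WER computation; the strings are hypothetical examples,
# not outputs of this model.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["take off and hover", "land now"]
references = ["take off and hover", "land low"]

# evaluate's "wer" returns a fraction; multiply by 100 to match the
# percentage-style values reported in the table above.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```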

### Framework versions

- Transformers 4.48.3
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0