---
license: apache-2.0
library_name: peft
tags:
  - generated_from_trainer
base_model: openai/whisper-small
datasets:
  - librispeech
model-index:
  - name: Whisper Small English 1h
    results: []
---

# Whisper Small English 1h

This model is a LoRA adapter (trained with PEFT) for [openai/whisper-small](https://huggingface.co/openai/whisper-small), fine-tuned on the librispeech dataset. It achieves the following results on the evaluation set (a loading sketch follows):

- Loss: 0.3666
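
Because this repository contains a PEFT adapter rather than full model weights, the adapter has to be attached to the base checkpoint at load time. Below is a minimal inference sketch; the adapter id `xuliu15/English_32r_LoRA_1h` is an assumption inferred from the repository title, and the silent waveform is only a stand-in for real 16 kHz mono audio.

```python
import numpy as np
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Load the frozen base model, then attach the LoRA adapter on top of it.
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model = PeftModel.from_pretrained(base, "xuliu15/English_32r_LoRA_1h")  # assumed repo id
model.eval()

processor = WhisperProcessor.from_pretrained("openai/whisper-small")

# Stand-in input: replace with a real 16 kHz mono waveform (float array).
audio = np.zeros(16000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    generated_ids = model.generate(input_features=inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```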

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch reproducing them follows the list):

- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 50
- mixed_precision_training: Native AMP
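
For orientation, here is a minimal `Seq2SeqTrainingArguments` sketch mirroring the list above. The `output_dir` is a placeholder (not the original path), and `fp16=True` stands in for "Native AMP"; the Trainer's default AdamW optimizer already uses betas=(0.9, 0.999) and epsilon=1e-08.

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the training configuration listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-english-1h",  # placeholder, not the original path
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=50,
    fp16=True,  # "Native AMP" mixed precision
    # The default AdamW optimizer matches the listed betas and epsilon.
)
```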

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8938        | 1.0   | 39   | 2.5946          |
| 1.4637        | 2.0   | 78   | 2.0459          |
| 1.3003        | 3.0   | 117  | 1.6739          |
| 0.9415        | 4.0   | 156  | 1.2729          |
| 0.8165        | 5.0   | 195  | 1.0158          |
| 0.6326        | 6.0   | 234  | 0.9033          |
| 0.5716        | 7.0   | 273  | 0.7272          |
| 0.4662        | 8.0   | 312  | 0.6731          |
| 0.4133        | 9.0   | 351  | 0.6433          |
| 0.4           | 10.0  | 390  | 0.6248          |
| 0.3862        | 11.0  | 429  | 0.6103          |
| 0.3901        | 12.0  | 468  | 0.5962          |
| 0.3661        | 13.0  | 507  | 0.5841          |
| 0.3609        | 14.0  | 546  | 0.5739          |
| 0.3439        | 15.0  | 585  | 0.5660          |
| 0.3391        | 16.0  | 624  | 0.5581          |
| 0.3231        | 17.0  | 663  | 0.5510          |
| 0.3097        | 18.0  | 702  | 0.5441          |
| 0.2994        | 19.0  | 741  | 0.5373          |
| 0.2991        | 20.0  | 780  | 0.5304          |
| 0.2972        | 21.0  | 819  | 0.5240          |
| 0.2898        | 22.0  | 858  | 0.5187          |
| 0.2809        | 23.0  | 897  | 0.5142          |
| 0.2845        | 24.0  | 936  | 0.5119          |
| 0.269         | 25.0  | 975  | 0.5074          |
| 0.2721        | 26.0  | 1014 | 0.5033          |
| 0.2633        | 27.0  | 1053 | 0.5006          |
| 0.2623        | 28.0  | 1092 | 0.4984          |
| 0.2492        | 29.0  | 1131 | 0.4931          |
| 0.25          | 30.0  | 1170 | 0.4861          |
| 0.2479        | 31.0  | 1209 | 0.4833          |
| 0.2416        | 32.0  | 1248 | 0.4777          |
| 0.2356        | 33.0  | 1287 | 0.4794          |
| 0.2281        | 34.0  | 1326 | 0.4663          |
| 0.2191        | 35.0  | 1365 | 0.4605          |
| 0.2218        | 36.0  | 1404 | 0.4600          |
| 0.2078        | 37.0  | 1443 | 0.4545          |
| 0.2122        | 38.0  | 1482 | 0.4470          |
| 0.2076        | 39.0  | 1521 | 0.4510          |
| 0.2004        | 40.0  | 1560 | 0.4326          |
| 0.2004        | 41.0  | 1599 | 0.4280          |
| 0.1901        | 42.0  | 1638 | 0.4342          |
| 0.1856        | 43.0  | 1677 | 0.4107          |
| 0.1802        | 44.0  | 1716 | 0.4060          |
| 0.1677        | 45.0  | 1755 | 0.4029          |
| 0.1658        | 46.0  | 1794 | 0.3922          |
| 0.1589        | 47.0  | 1833 | 0.3845          |
| 0.152         | 48.0  | 1872 | 0.3790          |
| 0.1493        | 49.0  | 1911 | 0.3691          |
| 0.1426        | 50.0  | 1950 | 0.3666          |

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
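
As a convenience (not part of the original card), the snippet below checks an installed environment against the pinned versions above; all of these packages expose a standard `__version__` attribute.

```python
import datasets
import peft
import tokenizers
import torch
import transformers

# Expected versions, taken from the "Framework versions" list above.
expected = {
    "peft": (peft, "0.10.0"),
    "transformers": (transformers, "4.40.1"),
    "torch": (torch, "2.3.0+cu121"),
    "datasets": (datasets, "2.19.0"),
    "tokenizers": (tokenizers, "0.19.1"),
}

for name, (module, want) in expected.items():
    got = module.__version__
    status = "OK" if got == want else f"MISMATCH (want {want})"
    print(f"{name}: {got} {status}")
```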