---
language:
  - zh
license: apache-2.0
base_model: openai/whisper-tiny
tags:
  - generated_from_trainer
datasets:
  - formospeech/tat_asr_aligned
model-index:
  - name: Whisper Tiny Taiwanese Android
    results: []
---

# Whisper Tiny Taiwanese Android

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the TAT ASR Aligned dataset. It achieves the following results on the evaluation set:

- Loss: 0.5581
- Cer: 10.2914
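The Cer figure is a character error rate in percent, i.e. the character-level edit distance between the hypothesis and the reference, normalized by the reference length. The card does not state which implementation was used (in practice this is usually the `evaluate` or `jiwer` library), but a minimal self-contained sketch of the metric looks like this:

```python
def edit_distance(ref: str, hyp: str) -> int:
    """Levenshtein distance between two character sequences."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (r != h)))   # substitution
        prev = cur
    return prev[-1]

def cer(ref: str, hyp: str) -> float:
    """Character error rate in percent, relative to the reference length."""
    return 100.0 * edit_distance(ref, hyp) / len(ref)
```

For example, one substituted character in a six-character Taiwanese transcript gives `cer("台灣話真好聽", "台灣話真好廳")` ≈ 16.67.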

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 681
- training_steps: 6810
- mixed_precision_training: Native AMP
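With a linear scheduler and 681 warmup steps, the learning rate ramps from 0 to the 1e-4 peak over the first 681 steps, then decays linearly back to 0 at step 6810. A small sketch of that schedule (mirroring the formula behind `transformers`' `get_linear_schedule_with_warmup`, not the actual training script):

```python
PEAK_LR = 1e-4        # learning_rate from the card
WARMUP_STEPS = 681    # lr_scheduler_warmup_steps
TOTAL_STEPS = 6810    # training_steps

def linear_warmup_lr(step: int) -> float:
    """Learning rate at a given optimizer step under linear warmup + linear decay."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS  # warmup ramp
    # linear decay from the peak down to 0 at TOTAL_STEPS
    return PEAK_LR * max(0.0, (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS))
```

Note that warmup spans exactly the first epoch (681 steps per epoch for 10 epochs, per the results table below).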

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Cer     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.3523        | 0.9985 | 681  | 0.4273          | 15.0771 |
| 0.2104        | 1.9971 | 1362 | 0.3778          | 12.2058 |
| 0.1224        | 2.9956 | 2043 | 0.3942          | 11.9977 |
| 0.0738        | 3.9941 | 2724 | 0.4164          | 11.9422 |
| 0.0423        | 4.9927 | 3405 | 0.4579          | 11.4839 |
| 0.0216        | 5.9912 | 4086 | 0.4818          | 11.3165 |
| 0.0107        | 6.9897 | 4767 | 0.5189          | 10.9872 |
| 0.0036        | 7.9883 | 5448 | 0.5398          | 10.8198 |
| 0.0012        | 8.9868 | 6129 | 0.5509          | 10.3179 |
| 0.0006        | 9.9853 | 6810 | 0.5581          | 10.2914 |

### Framework versions

- Transformers 4.42.3
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1