---
language:
  - ko
license: apache-2.0
base_model: openai/whisper-medium
tags:
  - hf-asr-leaderboard
  - generated_from_trainer
datasets:
  - Marcusxx/gwanju2
model-index:
  - name: gwanju2_m__model
    results: []
---

# gwanju2_m__model

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Marcusxx/gwanju2 dataset. It achieves the following results on the evaluation set:

- Loss: 0.5285
- CER: 31.2542
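For reference, the character error rate (CER) reported above is the character-level Levenshtein distance between the reference transcript and the model's hypothesis, divided by the reference length and scaled to a percentage. The scores in this card come from the evaluation tooling used during training (which may apply its own text normalization); the sketch below only illustrates the metric itself:

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate in percent: edit distance / reference length * 100."""
    r, h = list(reference), list(hypothesis)
    # One-row-at-a-time dynamic program for Levenshtein distance over characters.
    prev = list(range(len(h) + 1))
    for i, rc in enumerate(r, 1):
        curr = [i] + [0] * len(h)
        for j, hc in enumerate(h, 1):
            curr[j] = min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (rc != hc),  # substitution (free on a match)
            )
        prev = curr
    return 100.0 * prev[-1] / max(len(r), 1)
```

Note that CER can exceed 100% when the hypothesis is much longer than the reference, which is consistent with the 181.66 value seen early in training below.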

## Model description

A Korean automatic speech recognition model obtained by fine-tuning openai/whisper-medium on the Marcusxx/gwanju2 dataset. No further details have been documented.

## Intended uses & limitations

More information needed

## Training and evaluation data

The model was trained and evaluated on the Marcusxx/gwanju2 dataset; details of the splits and data collection have not been documented.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- training_steps: 20000
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | CER      |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 0.4001        | 0.6046 | 2000  | 0.4109          | 181.6648 |
| 0.2492        | 1.2092 | 4000  | 0.3885          | 23.4726  |
| 0.25          | 1.8138 | 6000  | 0.3801          | 31.3821  |
| 0.1398        | 2.4184 | 8000  | 0.3955          | 23.6811  |
| 0.0733        | 3.0230 | 10000 | 0.4135          | 24.3813  |
| 0.0876        | 3.6276 | 12000 | 0.4193          | 23.1801  |
| 0.0402        | 4.2322 | 14000 | 0.4600          | 28.5421  |
| 0.0404        | 4.8368 | 16000 | 0.4663          | 26.3909  |
| 0.0152        | 5.4414 | 18000 | 0.5243          | 30.2673  |
| 0.0048        | 6.0459 | 20000 | 0.5285          | 31.2542  |

### Framework versions

- Transformers 4.41.2
- PyTorch 2.2.2+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1