Whisper Large v3 Turbo Fine-Tuned for Korean ASR (v2)

This model is being fine-tuned from openai/whisper-large-v3-turbo on a custom Korean speech dataset. Fine-tuning is still in progress; the current checkpoint achieves the following results on the evaluation set (a sketch for reproducing these metrics follows the list):

  • Loss: 0.0164
  • WER: 19.9134
  • CER: 0.0660
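
The WER and CER above come from the Hugging Face Trainer's evaluation loop. As a hedged reproduction sketch, the snippet below computes the same metrics with the `evaluate` library on placeholder Korean sentences; whether WER is logged as a percentage or a fraction depends on the training script, so the scale of the numbers above is an assumption.

```python
# Minimal sketch: computing WER/CER with the `evaluate` library.
# The reference/prediction strings are placeholders, not taken from the actual eval set.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

references = ["안녕하세요 오늘 날씨가 좋네요"]
predictions = ["안녕하세요 오늘 날씨는 좋네요"]

# WER counts word-level edits, CER counts character-level edits; both return fractions.
wer = wer_metric.compute(references=references, predictions=predictions)
cer = cer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.4f}  CER: {cer:.4f}")
```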

Model Description

This model is a version of openai/whisper-large-v3-turbo that is being incrementally fine-tuned in stages and is optimized specifically for Korean automatic speech recognition (ASR). The fine-tuning process aims to deliver accurate, timestamped transcriptions of Korean speech.
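
As a usage illustration, the sketch below loads this checkpoint with the transformers ASR pipeline and requests timestamped output. The audio file name is a placeholder, and the generation settings (language, task, device, dtype) are reasonable assumptions rather than values documented in this card.

```python
# Hedged inference sketch; assumes transformers and torch are installed.
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="o0dimplz0o/Whisper-Large-v3-turbo-STT-Zeroth-KO-v2",
    torch_dtype=torch.bfloat16,  # the checkpoint is stored in BF16
    device="cuda:0" if torch.cuda.is_available() else "cpu",
)

result = asr(
    "sample_korean_audio.wav",  # placeholder audio path
    return_timestamps=True,      # segment-level timestamps
    generate_kwargs={"language": "korean", "task": "transcribe"},
)

print(result["text"])
for chunk in result["chunks"]:
    print(chunk["timestamp"], chunk["text"])
```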

Dataset Details

Details of the custom Korean speech dataset used for fine-tuning are pending.

Training Details

  • Hardware: L40S GPU
  • Learning Rate Scheduler: Cosine
  • Epochs: [pending completion]
  • Optimizer: AdamW Torch Fused
  • Model size: 809M parameters
  • Tensor type: BF16 (Safetensors)
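
The scheduler and optimizer listed above map onto standard transformers Seq2SeqTrainingArguments options. The sketch below shows one plausible configuration; only the cosine schedule, fused AdamW optimizer, and BF16 precision reflect this card, while the output directory, batch size, and learning rate are illustrative assumptions.

```python
# Hypothetical training-arguments sketch; values not stated in the card are placeholders.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v3-turbo-ko",  # placeholder path
    per_device_train_batch_size=16,            # assumption, not stated in the card
    learning_rate=1e-5,                        # assumption, not stated in the card
    lr_scheduler_type="cosine",                # cosine scheduler, as listed above
    optim="adamw_torch_fused",                 # AdamW Torch Fused, as listed above
    bf16=True,                                 # matches the BF16 checkpoint
    eval_strategy="steps",
    predict_with_generate=True,                # needed to compute WER/CER during evaluation
    report_to="none",
)
```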