This model is a fine-tune of OpenAI's Whisper Medium model (https://huggingface.co/openai/whisper-medium) on the following Korean datasets:

Combined, the datasets contain roughly 102k sentences.

This is the final checkpoint; it achieved ~16 WER, down from ~24 WER at the start of fine-tuning.
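For reference, WER here is the standard word error rate: the word-level edit distance between the reference transcript and the hypothesis, divided by the number of reference words. A minimal pure-Python sketch of the computation (illustrative only, not the evaluation script used for this model):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word out of four reference words -> 0.25
print(wer("안녕하세요 오늘 날씨가 좋네요", "안녕하세요 오늘 날씨 좋네요"))  # 0.25
```

Note that WER is usually reported as a percentage, so the ~16 above corresponds to a value of 0.16 from this function.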

Training ran for 10,000 iterations.
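Given the `-ggml` suffix in the repository name, this checkpoint appears to be a GGML conversion intended for whisper.cpp rather than the Transformers library. A hypothetical invocation, assuming the checkpoint has been downloaded as `ggml-model.bin` and the input is a 16 kHz WAV file:

```shell
# Assumes whisper.cpp (https://github.com/ggerganov/whisper.cpp) has been built.
# -m: path to the GGML model file, -l: spoken language, -f: input audio file.
./main -m ggml-model.bin -l ko -f sample.wav
```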

Model size: 764M parameters (F32 tensors, Safetensors format).
