This model is a fine-tuned version of OpenAI's Whisper Medium (https://huggingface.co/openai/whisper-medium), trained on the following Korean datasets:
- https://huggingface.co/datasets/Junhoee/STT_Korean_Dataset_80000
- https://huggingface.co/datasets/Bingsu/zeroth-korean
Combined, they contain roughly 102k sentences.
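
For reference, here is a minimal sketch of loading the two datasets with the `datasets` library and checking the combined sentence count. The `train` split names and the assumption that each row holds one sentence are not stated in this card and may need adjusting.

```python
# Sketch only: load both Hub datasets and count their rows.
# Split names and column layouts are assumptions; the two sets may
# need column renaming before being concatenated for fine-tuning.
from datasets import load_dataset

stt_korean = load_dataset("Junhoee/STT_Korean_Dataset_80000", split="train")
zeroth = load_dataset("Bingsu/zeroth-korean", split="train")

# Roughly 102k sentences combined, per this card.
print(len(stt_korean), len(zeroth), len(stt_korean) + len(zeroth))
```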
This is the final checkpoint, which reached ~16 WER (down from ~24 WER).
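
WER here is word error rate. The sketch below shows how such a figure is typically computed with the Hugging Face `evaluate` library; the transcripts are placeholders, not samples from the actual evaluation set.

```python
# Sketch of a WER computation with `evaluate` (requires jiwer).
import evaluate

wer_metric = evaluate.load("wer")

references = ["안녕하세요 반갑습니다"]   # ground-truth transcripts
predictions = ["안녕하세요 반갑습니다"]  # model outputs

# `compute` returns a fraction; multiply by 100 to match the WER
# figures quoted above.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.1f}")
```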
Training ran for 10,000 iterations.
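
The repository name suggests the weights are stored in GGML format for whisper.cpp rather than as a `transformers` checkpoint. Below is a minimal usage sketch that downloads the file and transcribes Korean audio through the whisper.cpp CLI; the filename `ggml-model.bin`, the CLI binary path, and the sample audio path are assumptions and will need adjusting to your setup.

```python
# Sketch only: fetch the GGML weights and run whisper.cpp on a WAV file.
import subprocess

from huggingface_hub import hf_hub_download

# Download the GGML checkpoint from the Hub (filename is an assumption).
model_path = hf_hub_download(
    repo_id="royshilkrot/whisper-medium-korean-ggml",
    filename="ggml-model.bin",
)

# Invoke the whisper.cpp CLI (built separately; named `main` in older
# releases, `whisper-cli` in newer ones) on a 16 kHz mono WAV file.
subprocess.run(
    [
        "./main",
        "-m", model_path,           # GGML model weights
        "-l", "ko",                 # force Korean decoding
        "-f", "samples/audio.wav",  # input audio
    ],
    check=True,
)
```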