# 🗣️ Speech-to-Text Model: Whisper Small (openai/whisper-small)
This repository demonstrates how to fine-tune, evaluate, quantize, and deploy the OpenAI Whisper Small model for automatic speech recognition (ASR).
## 📦 Model Used
- Model Name: `openai/whisper-small`
- Architecture: Transformer-based encoder-decoder
- Task: Automatic Speech Recognition (ASR)
- Pretrained by: OpenAI
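
Not part of the original card, but as a minimal sketch, the base checkpoint and its processor can be loaded with the standard `transformers` classes before fine-tuning:

```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration

# Load the pretrained Whisper Small checkpoint plus its feature extractor/tokenizer.
processor = WhisperProcessor.from_pretrained("openai/whisper-small")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
```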
## 🧾 Dataset
We use the Common Voice dataset (`mozilla-foundation/common_voice_13_0`) from Hugging Face.
### 🔹 Load English Subset

```python
from datasets import load_dataset

# Common Voice is gated on the Hub: accept the dataset terms and log in before loading.
dataset = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="train[:1%]")
```
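
Whisper's feature extractor expects 16 kHz audio, while Common Voice clips are typically stored at a higher sampling rate. A minimal sketch of the usual resampling step, assuming the `dataset` object from above:

```python
from datasets import Audio

# Decode and resample the audio column to the 16 kHz rate Whisper expects.
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))
sample = dataset[0]["audio"]  # dict with "array" and "sampling_rate" keys
```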
## 🧠 Evaluation / Scoring (WER)

```python
import evaluate  # replaces the deprecated `datasets.load_metric`

wer_metric = evaluate.load("wer")

def compute_wer(predictions, references):
    return wer_metric.compute(predictions=predictions, references=references)
```
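
As a toy illustration (the transcripts below are made up, not model outputs), the helper can be called like this:

```python
# Placeholder strings; in practice, compare model transcriptions against the
# dataset's reference sentences.
predictions = ["the cat sat on the mat"]
references = ["the cat sat on a mat"]
print(f"WER: {compute_wer(predictions, references):.2f}")
```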
## 🎤 Inference Example
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the local directory; device=0 uses the first GPU (use device=-1 for CPU).
pipe = pipeline("automatic-speech-recognition", model="./Speech_To_Text_OpenAIWhisper_Model", device=0)

result = pipe("harvard.wav")  # path to a local audio file
print("Transcription:", result["text"])
```