# 🗣️ Speech-to-Text Model: Whisper Small (openai/whisper-small)
This repository demonstrates how to fine-tune, evaluate, quantize, and deploy the OpenAI Whisper Small model for automatic speech recognition (ASR).
## 📦 Model Used

- Model name: `openai/whisper-small`
- Architecture: Transformer-based encoder-decoder
- Task: automatic speech recognition (ASR)
- Pretrained by: OpenAI
## 🧾 Dataset

We use the Common Voice dataset from Hugging Face.

### 🔹 Load the English subset

```python
from datasets import load_dataset

# Load 1% of the English training split for quick experimentation
dataset = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="train[:1%]")
```
## 🧠 Evaluation / Scoring (WER)

```python
# `datasets.load_metric` is deprecated; metrics now live in the `evaluate` library
import evaluate

wer_metric = evaluate.load("wer")

def compute_wer(predictions, references):
    return wer_metric.compute(predictions=predictions, references=references)
```
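Under the hood, WER is the word-level Levenshtein edit distance (insertions + deletions + substitutions) divided by the number of words in the reference. As a minimal illustration of what the metric computes (a hypothetical helper, not part of this repo's code), it can be written in plain Python:

```python
def word_error_rate(prediction: str, reference: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    hyp, ref = prediction.split(), reference.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

# 3 missing words out of a 6-word reference
print(word_error_rate("the cat sat", "the cat sat on the mat"))  # 0.5
```

Note that a WER above 1.0 is possible when the hypothesis contains many insertions.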
## 🎤 Inference Example

```python
from transformers import pipeline

# Load the fine-tuned model from the local directory; device=0 selects the first GPU
# (use device=-1 or omit the argument to run on CPU)
pipe = pipeline("automatic-speech-recognition", model="./Speech_To_Text_OpenAIWhisper_Model", device=0)

result = pipe("harvard.wav")
print("Transcription:", result["text"])
```
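Whisper models are trained on 16 kHz mono audio; the `transformers` pipeline resamples input files via ffmpeg, but if you want a known-good clip to smoke-test the pipeline (here a hypothetical `test_tone.wav`, standing in for `harvard.wav`), the standard library is enough to generate one:

```python
import math
import struct
import wave

# Write one second of a 440 Hz sine tone as 16 kHz mono 16-bit PCM,
# the sample format Whisper expects.
SAMPLE_RATE = 16000
frames = b"".join(
    struct.pack("<h", int(32767 * 0.3 * math.sin(2 * math.pi * 440 * t / SAMPLE_RATE)))
    for t in range(SAMPLE_RATE)
)
with wave.open("test_tone.wav", "wb") as wav:
    wav.setnchannels(1)            # mono
    wav.setsampwidth(2)            # 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(frames)
```

A pure tone will of course transcribe to little or nothing; it only verifies that audio loading and decoding work end to end.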