
🗣️ Speech-to-Text Model: Whisper Small (openai/whisper-small)

This repository demonstrates how to fine-tune, evaluate, quantize, and deploy the OpenAI Whisper Small model for automatic speech recognition (ASR).


📦 Model Used

  • Model Name: openai/whisper-small
  • Architecture: Transformer-based encoder-decoder
  • Task: Automatic Speech Recognition (ASR)
  • Pretrained by: OpenAI
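As a minimal sketch of the input side of this architecture, Whisper's feature extractor converts raw 16 kHz audio into log-Mel spectrograms before the encoder sees it. The snippet below constructs the extractor with its library defaults, so nothing is downloaded:

```python
import numpy as np
from transformers import WhisperFeatureExtractor

# Built with class defaults (80 Mel bins, 16 kHz, 30 s windows) — no checkpoint needed.
fe = WhisperFeatureExtractor()

audio = np.zeros(16_000, dtype=np.float32)  # one second of silence at 16 kHz
feats = fe(audio, sampling_rate=16_000, return_tensors="np")

# Whisper pads every input to a fixed 30 s window.
print(feats["input_features"].shape)  # (1, 80, 3000)
```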

🧾 Dataset

We use the Common Voice 13.0 dataset (mozilla-foundation/common_voice_13_0) from the Hugging Face Hub.

🔹 Load English Subset:

```python
from datasets import load_dataset

# Common Voice is gated on the Hub: accept its terms on the dataset page and
# authenticate first (`huggingface-cli login`).
dataset = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="train[:1%]")
```

🧠 Evaluation / Scoring (WER)


```python
import evaluate  # `datasets.load_metric` is deprecated; metrics now live in the `evaluate` library

wer_metric = evaluate.load("wer")

def compute_wer(predictions, references):
    """Word Error Rate: lower is better; 0.0 means a perfect transcription."""
    return wer_metric.compute(predictions=predictions, references=references)
```

🎤 Inference Example


```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the local directory.
# device=0 selects the first GPU; use device=-1 (or omit it) to run on CPU.
pipe = pipeline("automatic-speech-recognition", model="./Speech_To_Text_OpenAIWhisper_Model", device=0)

result = pipe("harvard.wav")
print("Transcription:", result["text"])
```