|
# 🗣️ Speech-to-Text Model: Whisper Small (openai/whisper-small) |
|
|
|
This repository demonstrates how to fine-tune, evaluate, quantize, and deploy the [OpenAI Whisper Small](https://huggingface.co/openai/whisper-small) model for automatic speech recognition (ASR). |
|
|
|
--- |
|
|
|
## 📦 Model Used |
|
|
|
- **Model Name**: `openai/whisper-small` |
|
- **Architecture**: Transformer-based encoder-decoder |
|
- **Task**: Automatic Speech Recognition (ASR) |
|
- **Pretrained by**: OpenAI |
|
|
|
--- |
|
|
|
## 🧾 Dataset |
|
|
|
We use the [Common Voice 13.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0) dataset from Hugging Face. Note that this dataset is gated: you must accept its terms on the Hub and authenticate (e.g. via `huggingface-cli login`) before downloading.
|
|
|
### 🔹 Load the English Subset
|
|
|
```python
from datasets import load_dataset

# Load the first 1% of the English training split for quick experiments
dataset = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="train[:1%]")
```
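Common Voice audio is shipped at 48 kHz, while Whisper expects 16 kHz input. In practice the `datasets` library handles this for you via `dataset.cast_column("audio", Audio(sampling_rate=16000))`; the sketch below (function name and approach are my own, NumPy only) just illustrates the underlying idea with plain linear interpolation:

```python
import numpy as np

TARGET_SR = 16_000  # Whisper models expect 16 kHz audio


def resample_linear(audio: np.ndarray, orig_sr: int, target_sr: int = TARGET_SR) -> np.ndarray:
    """Resample a 1-D waveform via linear interpolation (illustrative only)."""
    duration = len(audio) / orig_sr
    n_target = int(round(duration * target_sr))
    old_t = np.linspace(0.0, duration, num=len(audio), endpoint=False)
    new_t = np.linspace(0.0, duration, num=n_target, endpoint=False)
    return np.interp(new_t, old_t, audio)
```

For real pipelines, prefer a proper resampler (e.g. the `Audio` feature above, or `torchaudio.transforms.Resample`), which applies anti-aliasing filtering that naive interpolation lacks.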
|
|
|
---

## 🧠 Evaluation / Scoring (WER)
|
```python
import evaluate  # pip install evaluate jiwer

# datasets.load_metric is deprecated/removed; the WER metric now lives in the evaluate library
wer_metric = evaluate.load("wer")


def compute_wer(predictions, references):
    """Word error rate: lower is better; 0.0 means a perfect transcription."""
    return wer_metric.compute(predictions=predictions, references=references)
```
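To make the metric concrete: WER is the word-level edit distance (substitutions + insertions + deletions) divided by the number of reference words. A self-contained sketch (for real evaluations, prefer the library metric above):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Compute WER as word-level Levenshtein distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
```

For example, transcribing "the cat sat on the mat" as "the cat sat on mat" drops one of six reference words, giving a WER of 1/6 ≈ 0.167.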
|
|
|
---

## 🎤 Inference Example
|
```python
from transformers import pipeline

# device=0 selects the first GPU; use device=-1 (or omit it) to run on CPU
pipe = pipeline(
    "automatic-speech-recognition",
    model="./Speech_To_Text_OpenAIWhisper_Model",
    device=0,
)

result = pipe("harvard.wav")
print("Transcription:", result["text"])
```
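---

## ⚙️ Quantization

The introduction mentions quantization, but no example is shown above. Below is a minimal sketch of post-training dynamic quantization in PyTorch, using a toy model as a stand-in (the layer sizes are arbitrary and chosen for illustration; the real Whisper model would be loaded with `WhisperForConditionalGeneration.from_pretrained` and passed to the same call):

```python
import torch
import torch.nn as nn

# Toy stand-in for the fine-tuned model; sizes here are arbitrary.
model = nn.Sequential(nn.Linear(80, 256), nn.ReLU(), nn.Linear(256, 51865))

# Post-training dynamic quantization: Linear weights are stored as int8
# and dequantized on the fly during the matmul, shrinking the checkpoint
# and often speeding up CPU inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 80)
print(quantized(x).shape)  # torch.Size([1, 51865])
```

Dynamic quantization only rewrites the specified module types (here `nn.Linear`), so the model's interface and outputs' shapes are unchanged; expect a small accuracy cost, which you can measure with the WER metric above.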
|
|
|
|