---
license: apache-2.0
language:
  - th
pipeline_tag: automatic-speech-recognition
---

Whisper-tiny Thai fine-tuned

1) Environment Setup

# visit https://pytorch.org/get-started/locally/ to install pytorch
pip3 install transformers librosa
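
Optionally, you can sanity-check the environment before loading the model. This is not part of the original instructions, just a quick check of the installed packages and GPU visibility:

```python
# optional sanity check: print installed versions and whether a GPU is visible
import torch
import transformers
import librosa

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("transformers:", transformers.__version__)
print("librosa:", librosa.__version__)
```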

2) Usage

from transformers import WhisperForConditionalGeneration, WhisperProcessor
import librosa

device = "cuda" # cpu, cuda

model = WhisperForConditionalGeneration.from_pretrained("juierror/whisper-tiny-thai").to(device)
processor = WhisperProcessor.from_pretrained("juierror/whisper-tiny-thai", language="Thai", task="transcribe")

path = "/path/to/audio/file"

def inference(path: str) -> str:
    """
    Get the transcription from audio path

    Args:
        path (str): path to the audio file (loadable with librosa)

    Returns:
        str: transcription
    """
    audio, sr = librosa.load(path, sr=16000)
    input_features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features
    generated_tokens = model.generate(
        input_features=input_features.to(device),
        max_new_tokens=255,
        language="Thai"
    ).cpu()
    transcriptions = processor.tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
    return transcriptions[0]

print(inference(path=path))
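
For several files, the same processor and model can transcribe a batch in a single `generate` call. The sketch below mirrors the `inference` function above; `batch_inference` and `paths` are illustrative names, and it assumes each clip fits Whisper's 30-second window (the feature extractor pads shorter clips automatically):

```python
from typing import List

def batch_inference(paths: List[str]) -> List[str]:
    """Transcribe several audio files with a single generate call."""
    # load and resample every file to 16 kHz, as Whisper expects
    audios = [librosa.load(p, sr=16000)[0] for p in paths]
    # the feature extractor pads/truncates each clip to 30 s of log-Mel features
    input_features = processor(audios, sampling_rate=16000, return_tensors="pt").input_features
    generated_tokens = model.generate(
        input_features=input_features.to(device),
        max_new_tokens=255,
        language="Thai"
    ).cpu()
    return processor.tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)

# paths = ["/path/to/audio1.wav", "/path/to/audio2.wav"]
# print(batch_inference(paths))
```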

3) Evaluate Result

This model has been trained and evaluated on three datasets: Common Voice 13 (Thai), the Gowajee Corpus (cited below), and the Thai Elderly Speech dataset.

@techreport{gowajee,
    title = {{Gowajee Corpus}},
    author = {Ekapol Chuangsuwanich and Atiwong Suchato and Korrawe Karunratanakul and Burin Naowarat and Chompakorn CChaichot and Penpicha Sangsa-nga and Thunyathon Anutarases and Nitchakran Chaipojjana},
    year = {2020},
    month = {12},
    institution = {Chulalongkorn University, Faculty of Engineering, Computer Engineering Department},
    date-added = {2021-07-20},
    url = {https://github.com/ekapolc/gowajee_corpus},
    note = {Version 0.9.2}
}

The Common Voice dataset has been cleaned and split into training, development, and test sets, with care taken to ensure that no sentence appears in more than one split. The Gowajee dataset is already pre-split into training, development, and test sets, so those splits were used directly. For the Thai Elderly Speech dataset, I performed a random split. The Character Error Rate (CER) is computed after removing all spaces from both the labels and the predicted text. The Word Error Rate (WER) is computed after tokenizing both the labels and the predicted text with the PyThaiNLP newmm tokenizer.
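
A minimal sketch of that metric recipe, assuming the `jiwer` and `pythainlp` packages (neither is needed to run the model itself) and hypothetical reference/prediction strings:

```python
import jiwer
from pythainlp.tokenize import word_tokenize

def thai_cer(reference: str, prediction: str) -> float:
    # CER: strip all spaces from both strings, then compare character by character
    return jiwer.cer(reference.replace(" ", ""), prediction.replace(" ", ""))

def thai_wer(reference: str, prediction: str) -> float:
    # WER: tokenize both strings with PyThaiNLP's newmm tokenizer, then compare space-joined tokens
    ref_tokens = word_tokenize(reference, engine="newmm", keep_whitespace=False)
    pred_tokens = word_tokenize(prediction, engine="newmm", keep_whitespace=False)
    return jiwer.wer(" ".join(ref_tokens), " ".join(pred_tokens))

reference = "สวัสดีครับ ยินดีต้อนรับ"   # hypothetical label
prediction = "สวัสดีครับ ยินดี ต้อนรับ"  # hypothetical model output
print("CER:", thai_cer(reference, prediction))
print("WER:", thai_wer(reference, prediction))
```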

These are the results:

| Dataset | WER | CER |
|---|---|---|
| Common Voice 13 | 26.48 | 7.83 |
| Gowajee | 25.39 | 11.67 |
| Thai Elderly Speech (Smart Home) | 14.85 | 4.47 |
| Thai Elderly Speech (Health Care) | 15.23 | 4.05 |