wav2vec2-xls-r-parlaspeech-hr

This model for Croatian ASR is based on the facebook/wav2vec2-xls-r-300m model and was fine-tuned on 300 hours of recordings and transcripts from the Croatian parliamentary speech dataset ParlaSpeech-HR v1.0.

If you use this model, please cite the following paper:

Nikola Ljubešić, Danijel Koržinek, Peter Rupnik, Ivo-Pavao Jazbec. ParlaSpeech-HR -- a freely available ASR dataset for Croatian bootstrapped from the ParlaMint corpus. http://www.lrec-conf.org/proceedings/lrec2022/workshops/ParlaCLARINIII/pdf/2022.parlaclariniii-1.16.pdf

Metrics

Evaluation is performed on the dev and test portions of the ParlaSpeech-HR v1.0 dataset.

| split | CER    | WER    |
|-------|--------|--------|
| dev   | 0.0335 | 0.1046 |
| test  | 0.0234 | 0.0761 |
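
The scores above can be reproduced with any standard WER/CER implementation. Below is a minimal sketch using the third-party jiwer library (an assumption, not part of this repository); the reference and hypothesis strings are illustrative placeholders, not dataset content.

# minimal sketch of the metric computation with the jiwer library;
# the reference/hypothesis strings are illustrative placeholders
from jiwer import wer, cer

reference = "veliki broj poslovnih subjekata posluje sa minusom"  # gold transcript
hypothesis = "veliki broj poslovnih subjekata posluje s minusom"  # model output

print(f"WER: {wer(reference, hypothesis):.4f}")  # word error rate
print(f"CER: {cer(reference, hypothesis):.4f}")  # character error rate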

There are multiple models available, and in terms of CER and WER, the best-performing model is wav2vec2-large-slavic-parlaspeech-hr-lm.

Usage in transformers

from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
import soundfile as sf
import torch
import os

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# load the processor and the model, moving the model to the target device
processor = Wav2Vec2Processor.from_pretrained(
    "classla/wav2vec2-xls-r-parlaspeech-hr")
model = Wav2Vec2ForCTC.from_pretrained(
    "classla/wav2vec2-xls-r-parlaspeech-hr").to(device)
model.eval()

# download the example wav file
os.system("wget https://huggingface.co/classla/wav2vec2-xls-r-parlaspeech-hr/raw/main/00020570a.flac.wav")

# read the wav file and prepare the model input
speech, sample_rate = sf.read("00020570a.flac.wav")
input_values = processor(
    speech, sampling_rate=sample_rate, return_tensors="pt"
).input_values.to(device)

# remove the downloaded wav file
os.remove("00020570a.flac.wav")

# retrieve logits without tracking gradients
with torch.no_grad():
    logits = model(input_values).logits

# take the argmax over the vocabulary and decode into text
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.decode(predicted_ids[0]).lower()

# transcription: 'veliki broj poslovnih subjekata posluje sa minusom velik dio'
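
As an alternative to the manual steps above, the same checkpoint can be run through the high-level transformers pipeline API, which wraps feature extraction and CTC decoding internally. A minimal sketch, reusing the example file from above:

# minimal sketch using the high-level pipeline API instead of the
# manual processor/model calls shown above
from transformers import pipeline

pipe = pipeline(
    "automatic-speech-recognition",
    model="classla/wav2vec2-xls-r-parlaspeech-hr",
)
result = pipe("00020570a.flac.wav")  # path to a local 16 kHz wav file
print(result["text"].lower())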

Training hyperparameters

The following arguments were used during fine-tuning:

| arg                         | value |
|-----------------------------|-------|
| per_device_train_batch_size | 16    |
| gradient_accumulation_steps | 4     |
| num_train_epochs            | 8     |
| learning_rate               | 3e-4  |
| warmup_steps                | 500   |
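
These values map directly onto transformers' TrainingArguments. A hypothetical sketch of that configuration follows; only the five arguments from the table come from this card, while output_dir and everything omitted are illustrative assumptions, not the original training setup.

# hypothetical reconstruction of the fine-tuning configuration; only the
# five arguments from the table above come from the card, the rest is assumed
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./wav2vec2-xls-r-parlaspeech-hr",  # assumed output path
    per_device_train_batch_size=16,
    gradient_accumulation_steps=4,
    num_train_epochs=8,
    learning_rate=3e-4,
    warmup_steps=500,
)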