---
language: hr
datasets:
- parlaspeech-hr
tags:
- audio
- automatic-speech-recognition
- parlaspeech
widget:
- example_title: example 1
  src: https://huggingface.co/classla/wav2vec2-xls-r-parlaspeech-hr/raw/main/00020578b.flac.wav
- example_title: example 2
  src: https://huggingface.co/classla/wav2vec2-xls-r-parlaspeech-hr/raw/main/00020570a.flac.wav

---

# wav2vec2-xls-r-parlaspeech-hr

This model is based on the [facebook/wav2vec2-xls-r-300m model](https://huggingface.co/facebook/wav2vec2-xls-r-300m) and was fine-tuned on 72 hours of recordings and transcripts from the Croatian parliament. The training dataset is an early result of the second iteration of the [ParlaMint project](https://www.clarin.eu/content/parlamint-towards-comparable-parliamentary-corpora), within which it will be extended and published under the name ParlaSpeech-HR and an open licence.

The efforts resulting in this model were coordinated by Nikola Ljubešić. The rough manual data alignment was performed by Ivo-Pavao Jazbec, the fine automatic data alignment method of [Plüss et al.](https://arxiv.org/abs/2010.02810) was applied by Vuk Batanović and Lenka Bajčetić, and the final modelling was performed by Peter Rupnik.

Initial evaluation on partially noisy data showed that the model achieves a word error rate (WER) of 13.68% and a character error rate (CER) of 4.56%.
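
For reference, metrics of this kind can be computed with the [`jiwer`](https://github.com/jitsi/jiwer) package. This is only an illustrative sketch with made-up transcripts, not the evaluation setup actually used for the numbers above:

```python
# minimal WER/CER sketch using jiwer; the strings below are
# illustrative examples, not taken from the evaluation data
import jiwer

reference  = "veliki broj poslovnih subjekata posluje sa minusom"
hypothesis = "veliki broj poslovnih subjekata posluje s minusom"

print(f"WER: {jiwer.wer(reference, hypothesis):.2%}")  # fraction of word edits
print(f"CER: {jiwer.cer(reference, hypothesis):.2%}")  # fraction of character edits
```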

## Usage in `transformers`

```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import Audio
import torch
import os

# load model and tokenizer
processor = Wav2Vec2Processor.from_pretrained(
    "classla/wav2vec2-xls-r-parlaspeech-hr")
model = Wav2Vec2ForCTC.from_pretrained("classla/wav2vec2-xls-r-parlaspeech-hr")


# download the example wav file (-L follows redirects, -O saves under the remote name)
os.system("curl -LO https://huggingface.co/classla/wav2vec2-xls-r-parlaspeech-hr/raw/main/00020570a.flac.wav")

# read the wav file as a datasets.Audio example, resampled to 16 kHz;
# recent versions of `datasets` expect a dict rather than a bare path
audio = Audio(sampling_rate=16000).decode_example(
    {"path": "00020570a.flac.wav", "bytes": None})

# remove the downloaded wav file
os.system("rm 00020570a.flac.wav")

# extract input features (the processor wraps the feature extractor and tokenizer)
input_values = processor(
    audio["array"], return_tensors="pt", padding=True,
    sampling_rate=16000).input_values

# retrieve logits; no gradients are needed for inference
with torch.no_grad():
    logits = model(input_values).logits

# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)


# transcription: ['veliki broj poslovnih subjekata posluje sa minusom velik dio']
```
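
Alternatively, the high-level `pipeline` API from `transformers` wraps the same processor and model and handles audio loading and resampling internally. A minimal sketch, assuming `ffmpeg` is available to decode the audio file:

```python
from transformers import pipeline

# build an ASR pipeline around the fine-tuned model
asr = pipeline(
    "automatic-speech-recognition",
    model="classla/wav2vec2-xls-r-parlaspeech-hr")

# the pipeline accepts a path to a local audio file and returns a dict
print(asr("00020570a.flac.wav")["text"])
```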