---
language:
- cs
- hsb
- pl
- sk
- sl
- multilingual
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
- xlsr-fine-tuning-week
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-xls-r-300m-west-slavic-cv8
  results:
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: Common Voice 8
      type: mozilla-foundation/common_voice_8_0
      args: cs
    metrics:
    - type: wer
      value: 53.5
      name: Test WER
    - type: cer
      value: 14.7
      name: Test CER
    - type: wer
      value: 81.7
      name: Test WER
    - type: cer
      value: 21.2
      name: Test CER
    - type: wer
      value: 60.2
      name: Test WER
    - type: cer
      value: 15.6
      name: Test CER
    - type: wer
      value: 69.6
      name: Test WER
    - type: cer
      value: 20.7
      name: Test CER
    - type: wer
      value: 73.2
      name: Test WER
    - type: cer
      value: 23.2
      name: Test CER
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: Robust Speech Event - Dev Data
      type: speech-recognition-community-v2/dev_data
      args: cs
    metrics:
    - type: wer
      value: 84.11
      name: Test WER
    - type: wer
      value: 65.3
      name: Test WER
    - type: wer
      value: 88.37
      name: Test WER
    - type: wer
      value: 87.69
      name: Test WER
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: Robust Speech Event - Test Data
      type: speech-recognition-community-v2/eval_data
      args: cs
    metrics:
    - type: wer
      value: 75.99
      name: Test WER
    - type: wer
      value: 72.0
      name: Test WER
    - type: wer
      value: 89.08
      name: Test WER
    - type: wer
      value: 87.89
      name: Test WER
---

# wav2vec2-xls-r-300m-west-slavic-cv8

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Common Voice 8 dataset of five similar languages with similar scripts: Czech, Slovak, Polish, Slovenian and Upper Sorbian. The training and validation sets of all five languages were concatenated and shuffled.

The evaluation set used during training was concatenated from the respective test sets and shuffled, with each language limited to at most 2000 samples. A WER of approximately 70 was reached on this set during training.
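
The data preparation script is not part of this repository; the following is a minimal sketch of how such a combined corpus could be assembled with the `datasets` library (the `load_combined` helper, the shuffling seed and the `use_auth_token` handling are illustrative assumptions):

```python
from datasets import load_dataset, concatenate_datasets

# Languages combined in this model (Common Voice 8 config names)
LANGS = ["cs", "hsb", "pl", "sk", "sl"]

def load_combined(split, max_per_lang=None, seed=42):
    """Load a Common Voice 8 split for every language and merge them."""
    parts = []
    for lang in LANGS:
        ds = load_dataset(
            "mozilla-foundation/common_voice_8_0", lang,
            split=split, use_auth_token=True,
        )
        if max_per_lang is not None and len(ds) > max_per_lang:
            ds = ds.shuffle(seed=seed).select(range(max_per_lang))
        parts.append(ds)
    return concatenate_datasets(parts).shuffle(seed=seed)

# Training data: concatenated train + validation splits of all five languages
train_ds = concatenate_datasets(
    [load_combined("train"), load_combined("validation")]
).shuffle(seed=42)

# Evaluation set used during training: test splits, at most 2000 samples per language
eval_ds = load_combined("test", max_per_lang=2000)
```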

### Evaluation script

```
python eval.py --model_id comodoro/wav2vec2-xls-r-300m-west-slavic-cv8 --dataset mozilla-foundation/common_voice_8_0 --split test --config {lang}
```
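
Here `{lang}` is one of the Common Voice 8 configs the model was trained on: `cs`, `hsb`, `pl`, `sk` or `sl`.

### Usage example

For plain transcription outside the evaluation script, the model can be loaded through the standard `transformers` CTC interface. The snippet below is a hedged sketch; the sample file path and the 16 kHz resampling step are assumptions rather than part of this repository:

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

MODEL_ID = "comodoro/wav2vec2-xls-r-300m-west-slavic-cv8"
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)

# Load a mono speech file and resample to the 16 kHz rate expected by XLS-R
speech, sr = torchaudio.load("sample.wav")  # placeholder path
speech = torchaudio.functional.resample(speech, sr, 16_000).squeeze(0)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```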
                                                   
### Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding `TrainingArguments` follows the list):
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
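
A hedged sketch of how these values could be expressed with `transformers.TrainingArguments` (the output directory and the `fp16` flag standing in for Native AMP are illustrative assumptions, not taken from the original training script):

```python
from transformers import TrainingArguments

# Illustrative mapping of the reported hyperparameters onto TrainingArguments;
# output_dir is a placeholder, and fp16=True stands in for Native AMP.
training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-300m-west-slavic-cv8",
    learning_rate=3e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=50,
    fp16=True,
)
```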

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0