RASMUS committed on
Commit 93bb961
Parent: edb9b83

Update README.md

Files changed (1):
  1. README.md +83 -19
README.md CHANGED
@@ -1,31 +1,95 @@
 ---
-language:
-- fi
-lisence: apache-2.0
 tags:
-- automatic-speech-recognition
-- mozilla-foundation/common_voice_7_0
 - generated_from_trainer
-- fi
 - speech
 - robust-speech-event
-datasets:
-- mozilla-foundation/common_voice_7_0
 model-index:
 - name: XLS-R 1B Wav2Vec2 Finnish by Rasmus Toivanen
   results:
-  - task:
-      name: Automatic Speech Recognition
       type: automatic-speech-recognition
     dataset:
-      name: Common Voice 7
-      type: mozilla-foundation/common_voice_7_0
       args: fi
     metrics:
-    - name: Test WER
-      type: wer
-      value: 10.96
-    - name: Test CER
-      type: cer
-      value: 2.81
----
 ---
+language: fi
+datasets:
+- common_voice
+metrics:
+- wer
+- cer
 tags:
 - generated_from_trainer
+- audio
+- automatic-speech-recognition
 - speech
 - robust-speech-event
 model-index:
 - name: XLS-R 1B Wav2Vec2 Finnish by Rasmus Toivanen
   results:
+  - task:
+      name: Speech Recognition
       type: automatic-speech-recognition
     dataset:
+      name: Common Voice fi_7_0
+      type: common_voice
       args: fi
     metrics:
+    - name: Test WER
+      type: wer
+      value: 10.96
+    - name: Test CER
+      type: cer
+      value: 2.81
+---
+<!-- This model card has been generated automatically according to the information the Trainer had access to. You
+should probably proofread and complete it, then remove this comment. -->
+
+# wav2vec2-xlsr-fi-train-aug-lm-1B
+
+This model was fine-tuned for Finnish automatic speech recognition on the Common Voice dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.1499
+- Wer: 0.1955
+
+## Model description
+
+More information needed
+
+## Intended uses & limitations
+
+More information needed
+
+## Training and evaluation data
+
+More information needed
+
+## Training procedure
+
+### Training hyperparameters
+
+The following hyperparameters were used during training:
+- learning_rate: 0.0001
+- train_batch_size: 8
+- eval_batch_size: 8
+- seed: 42
+- gradient_accumulation_steps: 2
+- total_train_batch_size: 16
+- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- lr_scheduler_type: linear
+- lr_scheduler_warmup_steps: 100
+- num_epochs: 4
+- mixed_precision_training: Native AMP
+
+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss | Wer    |
+|:-------------:|:-----:|:----:|:---------------:|:------:|
+| 0.6473        | 0.29  | 400  | 0.2857          | 0.3825 |
+| 0.6039        | 0.58  | 800  | 0.2459          | 0.3476 |
+| 0.4757        | 0.87  | 1200 | 0.2338          | 0.3274 |
+| 0.4473        | 1.15  | 1600 | 0.2246          | 0.3128 |
+| 0.4322        | 1.44  | 2000 | 0.1962          | 0.2805 |
+| 0.3961        | 1.73  | 2400 | 0.2070          | 0.2797 |
+| 0.3642        | 2.02  | 2800 | 0.1790          | 0.2473 |
+| 0.3561        | 2.31  | 3200 | 0.1769          | 0.2375 |
+| 0.282         | 2.6   | 3600 | 0.1672          | 0.2263 |
+| 0.2978        | 2.89  | 4000 | 0.1636          | 0.2192 |
+| 0.2722        | 3.17  | 4400 | 0.1637          | 0.2102 |
+| 0.2924        | 3.46  | 4800 | 0.1506          | 0.2021 |
+| 0.2631        | 3.75  | 5200 | 0.1499          | 0.1955 |
+
+
+### Framework versions
+
+- Transformers 4.16.0.dev0
+- Pytorch 1.10.1+cu102
+- Datasets 1.17.1.dev0
+- Tokenizers 0.11.0
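
The Test WER and Test CER values in the card's metadata are word- and character-level edit-distance rates (here reported as percentages, e.g. a WER of 10.96%). As a rough illustration of how such scores are computed — a minimal plain-Python sketch, not the evaluation script actually used for this model, and with a made-up Finnish transcript pair — the metric reduces to a Levenshtein distance normalized by the reference length:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences, via a rolling-row DP."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, start=1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,        # deletion of a reference token
                dp[j - 1] + 1,    # insertion of a hypothesis token
                prev + (r != h),  # substitution (free if the tokens match)
            )
    return dp[-1]

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: char-level edit distance / reference length."""
    return edit_distance(list(reference), list(hypothesis)) / len(reference)

# Hypothetical reference/hypothesis pair, not taken from Common Voice.
ref = "hyvää huomenta kaikille"
hyp = "hyvää huomenta kaikki"
print(f"WER: {wer(ref, hyp):.2f}")  # one wrong word out of three -> 0.33
print(f"CER: {cer(ref, hyp):.2f}")
```

Libraries such as `jiwer` or the `datasets`/`evaluate` metric loaders implement the same idea with text normalization on top; the fractions here correspond to the card's percentages multiplied by 100.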