Gummybear05 committed
Commit e74607c · verified · 1 Parent(s): 8d66129

End of training

Files changed (1)
  1. README.md +83 -71
README.md CHANGED
@@ -1,71 +1,83 @@
- ---
- library_name: transformers
- license: apache-2.0
- base_model: facebook/wav2vec2-xls-r-300m
- tags:
- - generated_from_trainer
- model-index:
- - name: wav2vec2-E30_speed2
-   results: []
- ---
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # wav2vec2-E30_speed2
-
- This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
- It achieves the following results on the evaluation set:
- - Loss: 4.3514
- - Cer: 90.8893
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 0.0001
- - train_batch_size: 16
- - eval_batch_size: 16
- - seed: 42
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 50
- - num_epochs: 3
- - mixed_precision_training: Native AMP
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss | Cer |
- |:-------------:|:------:|:----:|:---------------:|:-------:|
- | 49.2666 | 0.2577 | 200 | 7.4219 | 100.0 |
- | 11.8441 | 0.5155 | 400 | 4.7273 | 98.2378 |
- | 5.3893 | 0.7732 | 600 | 4.5335 | 94.2904 |
- | 5.5815 | 1.0309 | 800 | 4.4939 | 93.8557 |
- | 6.1895 | 1.2887 | 1000 | 4.4739 | 93.8381 |
- | 6.3558 | 1.5464 | 1200 | 4.4406 | 93.8381 |
- | 5.4883 | 1.8041 | 1400 | 4.4290 | 93.3682 |
- | 5.2029 | 2.0619 | 1600 | 4.3919 | 92.8043 |
- | 5.4859 | 2.3196 | 1800 | 4.3714 | 91.9173 |
- | 4.9494 | 2.5773 | 2000 | 4.3444 | 91.0362 |
- | 4.8426 | 2.8351 | 2200 | 4.3514 | 90.8893 |
-
-
- ### Framework versions
-
- - Transformers 4.45.2
- - Pytorch 2.5.1
- - Datasets 2.19.1
- - Tokenizers 0.20.1
+ ---
+ library_name: transformers
+ license: apache-2.0
+ base_model: facebook/wav2vec2-xls-r-300m
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: wav2vec2-E30_speed2
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # wav2vec2-E30_speed2
+
+ This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.2995
+ - Cer: 25.2938
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0001
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 50
+ - num_epochs: 3
+ - mixed_precision_training: Native AMP
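The linear scheduler with 50 warm-up steps listed above ramps the learning rate up from 0 to 1e-4, then decays it linearly to 0 over the remaining steps. A minimal sketch of that schedule, assuming roughly 4,654 total optimizer steps (inferred from the step/epoch columns in the training results, where 200 steps correspond to about 0.1289 epochs over 3 epochs; this total is an assumption, not a value stated in the card):

```python
# Sketch of a linear LR schedule with warm-up, mirroring the behavior of
# transformers' get_linear_schedule_with_warmup.
# total_steps=4654 is an assumption inferred from the training-results table.
def linear_warmup_lr(step, base_lr=1e-4, warmup_steps=50, total_steps=4654):
    if step < warmup_steps:
        return base_lr * step / warmup_steps  # linear ramp-up from 0
    remaining = total_steps - step
    return base_lr * max(0.0, remaining / (total_steps - warmup_steps))  # linear decay to 0

print(linear_warmup_lr(25))  # halfway through warm-up
print(linear_warmup_lr(50))  # peak: the configured base learning rate
```

Because warm-up is only 50 steps out of ~4,650, the run spends almost all of its time on the decay leg.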
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Cer |
+ |:-------------:|:------:|:----:|:---------------:|:-------:|
+ | 41.4516 | 0.1289 | 200 | 5.3911 | 100.0 |
+ | 4.9936 | 0.2579 | 400 | 4.7064 | 100.0 |
+ | 4.8068 | 0.3868 | 600 | 4.6491 | 100.0 |
+ | 4.7989 | 0.5158 | 800 | 4.7160 | 100.0 |
+ | 4.7287 | 0.6447 | 1000 | 4.5872 | 100.0 |
+ | 4.7102 | 0.7737 | 1200 | 4.5938 | 100.0 |
+ | 4.6927 | 0.9026 | 1400 | 4.6049 | 100.0 |
+ | 4.6106 | 1.0316 | 1600 | 4.5710 | 100.0 |
+ | 4.5281 | 1.1605 | 1800 | 4.4045 | 100.0 |
+ | 4.2276 | 1.2895 | 2000 | 3.8233 | 74.6240 |
+ | 3.2563 | 1.4184 | 2200 | 2.8222 | 51.8508 |
+ | 2.5657 | 1.5474 | 2400 | 2.3714 | 42.7850 |
+ | 2.2757 | 1.6763 | 2600 | 2.1794 | 41.1163 |
+ | 2.0428 | 1.8053 | 2800 | 1.9496 | 36.1222 |
+ | 1.8705 | 1.9342 | 3000 | 1.8052 | 33.6310 |
+ | 1.6822 | 2.0632 | 3200 | 1.6552 | 31.3514 |
+ | 1.574 | 2.1921 | 3400 | 1.5774 | 30.2115 |
+ | 1.4683 | 2.3211 | 3600 | 1.4999 | 28.9424 |
+ | 1.4039 | 2.4500 | 3800 | 1.4358 | 28.1786 |
+ | 1.3323 | 2.5790 | 4000 | 1.3441 | 26.1868 |
+ | 1.3055 | 2.7079 | 4200 | 1.3460 | 25.8813 |
+ | 1.2428 | 2.8369 | 4400 | 1.3022 | 25.5170 |
+ | 1.2121 | 2.9658 | 4600 | 1.2995 | 25.2938 |
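The Cer column above is the character error rate on a 0-100 scale: the character-level edit distance between reference and hypothesis transcripts, divided by the reference length. A minimal sketch of that computation (the actual training script may use `jiwer` or the `evaluate` library's `cer` metric; that is an assumption):

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate in percent: edit_distance(ref, hyp) / len(ref) * 100."""
    m, n = len(reference), len(hypothesis)
    # prev[j] holds the edit distance between reference[:i-1] and hypothesis[:j]
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return 100.0 * prev[n] / m

print(cer("hello world", "hello world"))  # 0.0: a perfect transcript
```

On this scale, the early rows' 100.0 means the model's output shared essentially no aligned characters with the references (typical of CTC before it starts emitting non-blank tokens), while the final 25.2938 means roughly one character edit per four reference characters.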
+
+
+ ### Framework versions
+
+ - Transformers 4.46.2
+ - Pytorch 2.5.1+cu121
+ - Datasets 3.1.0
+ - Tokenizers 0.20.3