# Bemba

Part of a collection of experimental automatic speech recognition models developed for the Bemba language.
This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on an unknown dataset. It achieves the results shown in the training table below on the evaluation set.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

More information needed

### Training results
Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
---|---|---|---|---|---|
1.9873 | 1.0 | 6423 | 0.8901 | 0.6111 | 0.1723 |
0.7807 | 2.0 | 12846 | 0.8360 | 0.5188 | 0.1525 |
0.683 | 3.0 | 19269 | 0.6705 | 0.4842 | 0.1410 |
0.6282 | 4.0 | 25692 | 0.6473 | 0.4762 | 0.1394 |
0.594 | 5.0 | 32115 | 0.6369 | 0.4463 | 0.1314 |
0.5645 | 6.0 | 38538 | 0.6244 | 0.4360 | 0.1287 |
0.5322 | 7.0 | 44961 | 0.6186 | 0.4191 | 0.1273 |
0.5045 | 8.0 | 51384 | 0.6334 | 0.4127 | 0.1230 |
0.4767 | 9.0 | 57807 | 0.6017 | 0.4117 | 0.1227 |
0.4505 | 10.0 | 64230 | 0.6142 | 0.4092 | 0.1214 |
0.4247 | 11.0 | 70653 | 0.6155 | 0.4033 | 0.1208 |
0.3974 | 12.0 | 77076 | 0.6161 | 0.4013 | 0.1198 |
0.3714 | 13.0 | 83499 | 0.6415 | 0.4032 | 0.1211 |
0.3437 | 14.0 | 89922 | 0.6691 | 0.4007 | 0.1207 |
0.3175 | 15.0 | 96345 | 0.7251 | 0.4052 | 0.1212 |
0.2921 | 16.0 | 102768 | 0.7279 | 0.4003 | 0.1218 |
0.2681 | 17.0 | 109191 | 0.7837 | 0.4103 | 0.1216 |
0.2455 | 18.0 | 115614 | 0.8336 | 0.4074 | 0.1233 |
0.2242 | 19.0 | 122037 | 0.8544 | 0.4158 | 0.1247 |
0.2044 | 20.0 | 128460 | 0.8591 | 0.4243 | 0.1270 |
0.1857 | 21.0 | 134883 | 0.9652 | 0.4123 | 0.1245 |
0.1676 | 22.0 | 141306 | 1.0143 | 0.4254 | 0.1266 |
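The Wer and Cer columns are word error rate and character error rate: the edit distance between the reference transcript and the model's hypothesis, divided by the reference length, counted in words and characters respectively. A minimal sketch of both metrics (a hand-rolled Levenshtein distance, not the exact implementation used during training):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (insertions,
    deletions, and substitutions each cost 1)."""
    m, n = len(ref), len(hyp)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,      # deletion
                           dp[i][j - 1] + 1,      # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: edit distance over word sequences."""
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / len(ref)

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: edit distance over character sequences."""
    return edit_distance(reference, hypothesis) / len(reference)
```

In practice these metrics are usually computed with a library such as `jiwer` or the Hugging Face `evaluate` package rather than by hand.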
Base model: [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0)
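The curves in the training table suggest overfitting in the later epochs: validation loss bottoms out at epoch 9 and climbs steadily afterwards, while WER and CER reach their minima around epochs 12 to 16. A small sketch that recovers the best checkpoint per metric from the table's numbers:

```python
# (epoch, val_loss, wer, cer) rows transcribed from the training-results table
ROWS = [
    (1, 0.8901, 0.6111, 0.1723), (2, 0.8360, 0.5188, 0.1525),
    (3, 0.6705, 0.4842, 0.1410), (4, 0.6473, 0.4762, 0.1394),
    (5, 0.6369, 0.4463, 0.1314), (6, 0.6244, 0.4360, 0.1287),
    (7, 0.6186, 0.4191, 0.1273), (8, 0.6334, 0.4127, 0.1230),
    (9, 0.6017, 0.4117, 0.1227), (10, 0.6142, 0.4092, 0.1214),
    (11, 0.6155, 0.4033, 0.1208), (12, 0.6161, 0.4013, 0.1198),
    (13, 0.6415, 0.4032, 0.1211), (14, 0.6691, 0.4007, 0.1207),
    (15, 0.7251, 0.4052, 0.1212), (16, 0.7279, 0.4003, 0.1218),
    (17, 0.7837, 0.4103, 0.1216), (18, 0.8336, 0.4074, 0.1233),
    (19, 0.8544, 0.4158, 0.1247), (20, 0.8591, 0.4243, 0.1270),
    (21, 0.9652, 0.4123, 0.1245), (22, 1.0143, 0.4254, 0.1266),
]

def best_epoch(metric_index: int) -> int:
    """Epoch with the lowest value in the given metric column
    (1 = validation loss, 2 = WER, 3 = CER)."""
    return min(ROWS, key=lambda row: row[metric_index])[0]

print(best_epoch(1))  # lowest validation loss -> 9
print(best_epoch(2))  # lowest WER -> 16
print(best_epoch(3))  # lowest CER -> 12
```

Which checkpoint is "best" therefore depends on the metric you optimize for; for deployment, the WER minimum at epoch 16 may matter more than the loss minimum at epoch 9.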