Part of the Shona Collection: experimental automatic speech recognition models developed for the Shona language (36 items).
This model is a fine-tuned version of facebook/w2v-bert-2.0 on an unknown dataset. Per-epoch results on the evaluation set are reported in the training results table below.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

### Training results
Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Wer | Cer |
---|---|---|---|---|---|---|
0.0958 | 1.0 | 4449 | 0.1764 | 0.0117 | 0.2148 | 0.0352 |
0.093 | 2.0 | 8898 | 0.1879 | 0.0117 | 0.2282 | 0.0382 |
0.0993 | 3.0 | 13347 | 0.1860 | 0.0117 | 0.2285 | 0.0384 |
0.0971 | 4.0 | 17796 | 0.1906 | 0.0117 | 0.2379 | 0.0390 |
0.0912 | 5.0 | 22245 | 0.1843 | 0.0117 | 0.2268 | 0.0381 |
0.084 | 6.0 | 26694 | 0.1970 | 0.0117 | 0.2247 | 0.0376 |
0.0786 | 7.0 | 31143 | 0.2031 | 0.0117 | 0.2433 | 0.0419 |
0.0716 | 8.0 | 35592 | 0.2114 | 0.0117 | 0.2360 | 0.0394 |
0.0673 | 9.0 | 40041 | 0.2146 | 0.0117 | 0.2358 | 0.0388 |
0.0617 | 10.0 | 44490 | 0.2282 | 0.0117 | 0.2311 | 0.0384 |
0.0559 | 11.0 | 48939 | 0.2302 | 0.0117 | 0.2306 | 0.0386 |
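The Wer and Cer columns above are word error rate and character error rate: the edit (Levenshtein) distance between the reference and hypothesized transcripts, normalized by reference length, computed over words and over characters respectively. A minimal pure-Python sketch of the metric (illustrative only; not the evaluation code used to produce this table):

```python
def edit_distance(ref, hyp):
    # Classic dynamic-programming Levenshtein distance between two sequences,
    # counting substitutions, insertions, and deletions (each cost 1).
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            # prev holds d[i-1][j-1]; d[j] still holds d[i-1][j]
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
    return d[len(hyp)]

def wer(reference, hypothesis):
    # Word error rate: word-level edit distance / number of reference words.
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    # Character error rate: character-level edit distance / reference length.
    return edit_distance(list(reference), list(hypothesis)) / len(reference)
```

For example, `wer("mhoro shamwari", "mhoro shamwar")` is 0.5 (one of two reference words is wrong), while the character-level error for the same pair is much smaller, which is why CER is typically far below WER in the table above.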