KasuleTrevor/runyankore_speech_intent_classifier

This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1875
  • Accuracy: 0.9787

Model description

This is a speech intent classification model for Runyankore, obtained by fine-tuning facebook/wav2vec2-xls-r-300m with an audio classification head. Further details (label set, data collection, baseline comparisons) have not been documented.

Intended uses & limitations

More information needed
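
The card gives no usage guidance, so the snippet below is only a hedged sketch of how a wav2vec2-based audio-classification checkpoint is typically loaded with the transformers `pipeline` API. The audio file path is a placeholder, and the 16 kHz sampling-rate note reflects the base model's feature extractor rather than anything stated in this card.

```python
# Hypothetical usage sketch for this audio (intent) classification checkpoint.
# The model id comes from this repository; "example_utterance.wav" is a placeholder.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="KasuleTrevor/runyankore_speech_intent_classifier",
)

# The pipeline decodes the file and resamples it to the feature extractor's
# sampling rate (16 kHz for wav2vec2-xls-r-300m) before classification.
predictions = classifier("example_utterance.wav", top_k=5)
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```

This assumes the repository ships a preprocessor (feature extractor) config alongside the weights, as Trainer-exported audio classifiers usually do.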

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 100
  • mixed_precision_training: Native AMP
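
The hyperparameters above map onto transformers `TrainingArguments` roughly as in the sketch below. This is a reconstruction under stated assumptions, not the original training script: the dataset objects, the number of intent labels, and the per-epoch evaluation setting are placeholders or inferences, not values given in this card.

```python
# Reconstruction sketch of the reported configuration; NUM_INTENT_LABELS,
# train_ds and eval_ds are placeholders, not taken from this card.
from transformers import (
    AutoModelForAudioClassification,
    Trainer,
    TrainingArguments,
)

NUM_INTENT_LABELS = 10  # placeholder: the actual label set is not documented

model = AutoModelForAudioClassification.from_pretrained(
    "facebook/wav2vec2-xls-r-300m",
    num_labels=NUM_INTENT_LABELS,
)

training_args = TrainingArguments(
    output_dir="runyankore_speech_intent_classifier",  # placeholder output path
    learning_rate=1e-4,
    per_device_train_batch_size=16,   # train_batch_size: 16
    per_device_eval_batch_size=8,     # eval_batch_size: 8
    gradient_accumulation_steps=2,    # total_train_batch_size: 32
    num_train_epochs=100,
    lr_scheduler_type="cosine",
    warmup_steps=500,
    seed=42,
    fp16=True,                        # Native AMP mixed precision
    eval_strategy="epoch",            # inferred from the per-epoch validation table
    adam_beta1=0.9,                   # Adam betas/epsilon as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_ds,  # placeholder dataset objects
    eval_dataset=eval_ds,
)
trainer.train()
```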

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.9854 | 1.0 | 176 | 2.8951 | 0.1059 |
| 2.8938 | 2.0 | 352 | 2.9279 | 0.1059 |
| 2.7909 | 3.0 | 528 | 2.6339 | 0.1833 |
| 2.1234 | 4.0 | 704 | 1.8449 | 0.2709 |
| 1.8188 | 5.0 | 880 | 1.1668 | 0.5031 |
| 1.0437 | 6.0 | 1056 | 0.6863 | 0.7393 |
| 0.571 | 7.0 | 1232 | 0.3105 | 0.9206 |
| 0.3123 | 8.0 | 1408 | 0.1898 | 0.9511 |
| 0.3461 | 9.0 | 1584 | 0.1549 | 0.9593 |
| 0.2453 | 10.0 | 1760 | 0.1557 | 0.9572 |
| 0.2388 | 11.0 | 1936 | 0.1081 | 0.9776 |
| 0.1856 | 12.0 | 2112 | 0.1199 | 0.9756 |
| 0.1738 | 13.0 | 2288 | 0.1216 | 0.9796 |
| 0.1364 | 14.0 | 2464 | 0.1350 | 0.9695 |
| 0.1664 | 15.0 | 2640 | 0.0961 | 0.9796 |
| 0.1232 | 16.0 | 2816 | 0.1136 | 0.9796 |
| 0.1265 | 17.0 | 2992 | 0.1130 | 0.9735 |
| 0.1317 | 18.0 | 3168 | 0.0975 | 0.9796 |
| 0.14 | 19.0 | 3344 | 0.1102 | 0.9796 |
| 0.1318 | 20.0 | 3520 | 0.1120 | 0.9756 |
| 0.0978 | 21.0 | 3696 | 0.1505 | 0.9674 |
| 0.0917 | 22.0 | 3872 | 0.1089 | 0.9776 |
| 0.0966 | 23.0 | 4048 | 0.0996 | 0.9817 |
| 0.0802 | 24.0 | 4224 | 0.1108 | 0.9817 |
| 0.0956 | 25.0 | 4400 | 0.1283 | 0.9776 |
| 0.0677 | 26.0 | 4576 | 0.1182 | 0.9776 |
| 0.07 | 27.0 | 4752 | 0.1573 | 0.9593 |
| 0.0636 | 28.0 | 4928 | 0.1304 | 0.9817 |
| 0.0698 | 29.0 | 5104 | 0.1332 | 0.9776 |
| 0.0565 | 30.0 | 5280 | 0.0982 | 0.9817 |
| 0.034 | 31.0 | 5456 | 0.1481 | 0.9776 |
| 0.0327 | 32.0 | 5632 | 0.1624 | 0.9796 |
| 0.0645 | 33.0 | 5808 | 0.1284 | 0.9837 |
| 0.0521 | 34.0 | 5984 | 0.1477 | 0.9796 |
| 0.048 | 35.0 | 6160 | 0.1208 | 0.9817 |
| 0.0641 | 36.0 | 6336 | 0.1147 | 0.9837 |
| 0.046 | 37.0 | 6512 | 0.1443 | 0.9776 |
| 0.0511 | 38.0 | 6688 | 0.1437 | 0.9776 |
| 0.0548 | 39.0 | 6864 | 0.1809 | 0.9776 |
| 0.0444 | 40.0 | 7040 | 0.1301 | 0.9796 |
| 0.0362 | 41.0 | 7216 | 0.1138 | 0.9857 |
| 0.0431 | 42.0 | 7392 | 0.1467 | 0.9817 |
| 0.048 | 43.0 | 7568 | 0.1596 | 0.9756 |
| 0.0292 | 44.0 | 7744 | 0.1435 | 0.9796 |
| 0.032 | 45.0 | 7920 | 0.1537 | 0.9796 |
| 0.0306 | 46.0 | 8096 | 0.1554 | 0.9776 |
| 0.0303 | 47.0 | 8272 | 0.1322 | 0.9817 |
| 0.0376 | 48.0 | 8448 | 0.1374 | 0.9796 |
| 0.0261 | 49.0 | 8624 | 0.1598 | 0.9776 |
| 0.0319 | 50.0 | 8800 | 0.1490 | 0.9796 |
| 0.0339 | 51.0 | 8976 | 0.1783 | 0.9756 |
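
The Accuracy column is plain classification accuracy on the evaluation set. A minimal `compute_metrics` sketch that would produce it with the Trainer is shown below; it assumes the `evaluate` library, which is not listed among the framework versions of this card.

```python
# Hedged sketch: standard accuracy metric for an audio-classification Trainer.
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # eval_pred unpacks into model logits and the reference label ids.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=preds, references=labels)
```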

Framework versions

  • Transformers 4.43.3
  • Pytorch 2.1.0+cu118
  • Datasets 2.20.0
  • Tokenizers 0.19.1