urdu-emotions
This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 for Urdu speech emotion recognition; the training dataset is not specified in this card. It achieves the following results on the evaluation set:
- Loss: 0.0077
- Accuracy: 1.0
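Since the checkpoint is a wav2vec2 audio-classification model, it can be loaded with the standard transformers pipeline. The snippet below is a minimal sketch: the repo id `<namespace>/urdu-emotions`, the audio file name, and the emotion label set are placeholders and are not taken from this card.

```python
# Minimal inference sketch (assumptions: the checkpoint is published as an
# audio-classification model; "<namespace>/urdu-emotions" and "sample.wav"
# are placeholders to replace with the real repo id and a 16 kHz audio file).
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="<namespace>/urdu-emotions",
)

# wav2vec2-large-xlsr-53 expects 16 kHz mono audio.
predictions = classifier("sample.wav")
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```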
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
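These values map onto a `TrainingArguments` configuration roughly as sketched below. This is an illustrative reconstruction, not the original training script: the output directory, the number of labels, and the model/feature-extractor loading calls are assumptions. The Adam settings listed above correspond to the `Trainer` defaults (`adam_beta1=0.9`, `adam_beta2=0.999`, `adam_epsilon=1e-8`).

```python
# Hedged sketch of a matching training configuration.
from transformers import (
    TrainingArguments,
    Wav2Vec2FeatureExtractor,
    Wav2Vec2ForSequenceClassification,
)

# Placeholder: num_labels must be set to the actual number of emotion classes.
model = Wav2Vec2ForSequenceClassification.from_pretrained(
    "facebook/wav2vec2-large-xlsr-53",
    num_labels=3,
)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(
    "facebook/wav2vec2-large-xlsr-53"
)

training_args = TrainingArguments(
    output_dir="urdu-emotions",        # placeholder output directory
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    fp16=True,                         # Native AMP mixed precision
)
```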
Training results
Training Loss | Epoch | Step | Validation Loss | Accuracy |
---|---|---|---|---|
No log | 1.0 | 8 | 1.0905 | 0.4833 |
1.0739 | 2.0 | 16 | 1.0736 | 0.5 |
1.0341 | 3.0 | 24 | 0.9888 | 0.75 |
0.9797 | 4.0 | 32 | 0.9144 | 0.7667 |
0.8716 | 5.0 | 40 | 0.8531 | 0.75 |
0.8716 | 6.0 | 48 | 0.7572 | 0.75 |
0.7905 | 7.0 | 56 | 0.6556 | 0.7333 |
0.6837 | 8.0 | 64 | 0.5976 | 0.7167 |
0.6611 | 9.0 | 72 | 0.5298 | 0.7667 |
0.5744 | 10.0 | 80 | 0.5014 | 0.8 |
0.5744 | 11.0 | 88 | 0.4616 | 0.8667 |
0.5402 | 12.0 | 96 | 0.3964 | 0.9667 |
0.4185 | 13.0 | 104 | 0.3536 | 0.8833 |
0.4138 | 14.0 | 112 | 0.2818 | 0.9167 |
0.3409 | 15.0 | 120 | 0.2554 | 0.9333 |
0.3409 | 16.0 | 128 | 0.1434 | 0.9833 |
0.2923 | 17.0 | 136 | 0.1658 | 0.95 |
0.2412 | 18.0 | 144 | 0.0775 | 1.0 |
0.2038 | 19.0 | 152 | 0.0766 | 1.0 |
0.1618 | 20.0 | 160 | 0.0680 | 1.0 |
0.1618 | 21.0 | 168 | 0.0517 | 1.0 |
0.2041 | 22.0 | 176 | 0.0344 | 1.0 |
0.1238 | 23.0 | 184 | 0.0313 | 1.0 |
0.1179 | 24.0 | 192 | 0.0269 | 1.0 |
0.1738 | 25.0 | 200 | 0.0248 | 1.0 |
0.1738 | 26.0 | 208 | 0.0213 | 1.0 |
0.1265 | 27.0 | 216 | 0.0363 | 0.9833 |
0.1503 | 28.0 | 224 | 0.1058 | 0.9667 |
0.1201 | 29.0 | 232 | 0.0270 | 1.0 |
0.0484 | 30.0 | 240 | 0.1284 | 0.9667 |
0.0484 | 31.0 | 248 | 0.1702 | 0.95 |
0.2394 | 32.0 | 256 | 0.0144 | 1.0 |
0.1625 | 33.0 | 264 | 0.0145 | 1.0 |
0.0999 | 34.0 | 272 | 0.0172 | 1.0 |
0.0406 | 35.0 | 280 | 0.0106 | 1.0 |
0.0406 | 36.0 | 288 | 0.0098 | 1.0 |
0.0374 | 37.0 | 296 | 0.0097 | 1.0 |
0.0268 | 38.0 | 304 | 0.0100 | 1.0 |
0.0279 | 39.0 | 312 | 0.0111 | 1.0 |
0.031 | 40.0 | 320 | 0.0099 | 1.0 |
0.031 | 41.0 | 328 | 0.0096 | 1.0 |
0.0412 | 42.0 | 336 | 0.0096 | 1.0 |
0.0218 | 43.0 | 344 | 0.0087 | 1.0 |
0.026 | 44.0 | 352 | 0.0079 | 1.0 |
0.0122 | 45.0 | 360 | 0.0081 | 1.0 |
0.0122 | 46.0 | 368 | 0.0087 | 1.0 |
0.0328 | 47.0 | 376 | 0.0088 | 1.0 |
0.0264 | 48.0 | 384 | 0.0084 | 1.0 |
0.0339 | 49.0 | 392 | 0.0078 | 1.0 |
0.0257 | 50.0 | 400 | 0.0077 | 1.0 |
Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
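A quick way to check that a local environment matches these versions is a sketch like the following (it assumes the four packages are importable under their usual module names):

```python
# Compare installed versions against the ones this card was trained with.
import datasets
import tokenizers
import torch
import transformers

expected = {
    "transformers": "4.18.0",
    "torch": "1.11.0",
    "datasets": "2.1.0",
    "tokenizers": "0.12.1",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    print(f"{name}: installed {installed[name]}, trained with {want}")
```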