# RoBERTa-Base-SE2025T11A-sun-v20241227113119
This model is a fine-tuned version of w11wo/sundanese-roberta-base-emotion-classifier on an unknown dataset. It achieves the following results on the evaluation set (a usage sketch follows the list):
- Loss: 0.2713
- F1 Macro: 0.6396
- F1 Micro: 0.6701
- F1 Weighted: 0.6595
- F1 Samples: 0.6493
- F1 Label Marah (anger): 0.5849
- F1 Label Jijik (disgust): 0.5882
- F1 Label Takut (fear): 0.5977
- F1 Label Senang (happiness): 0.8387
- F1 Label Sedih (sadness): 0.7727
- F1 Label Terkejut (surprise): 0.5283
- F1 Label Biasa (neutral): 0.5667
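Since the card does not include a usage example, the following is a minimal inference sketch. The bare repo id (the Hub namespace is unknown), the Sundanese example sentence, and the sigmoid-plus-0.5-threshold decoding are assumptions: the per-label and per-sample F1 metrics above suggest a multi-label setup, but the card does not confirm it.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical repo id: prepend the correct Hub namespace for this checkpoint.
model_id = "RoBERTa-Base-SE2025T11A-sun-v20241227113119"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Example Sundanese input (roughly: "I am very happy today!").
text = "Abdi bagja pisan dinten ieu!"
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Assumed multi-label decoding: sigmoid per label, 0.5 threshold.
probs = torch.sigmoid(logits).squeeze(0)
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```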
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
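For reference, this is roughly how the hyperparameters above map onto Hugging Face `TrainingArguments`. A minimal sketch: `output_dir` is hypothetical, and the 100-step evaluation cadence is inferred from the results table below rather than stated in the card.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="RoBERTa-Base-SE2025T11A-sun",  # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    optim="adamw_torch",          # AdamW, as listed above
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    eval_strategy="steps",        # assumption: evaluation every 100 steps,
    eval_steps=100,               # matching the cadence of the results table
)
```

These arguments would then be passed to a `Trainer` together with the model and the (unspecified) train/eval datasets.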
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | F1 Samples | F1 Label Marah | F1 Label Jijik | F1 Label Takut | F1 Label Senang | F1 Label Sedih | F1 Label Terkejut | F1 Label Biasa |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.4913 | 0.1133 | 100 | 0.4181 | 0.2064 | 0.3610 | 0.2601 | 0.2678 | 0.0000 | 0.0000 | 0.2000 | 0.7327 | 0.5124 | 0.0000 | 0.0000 |
| 0.4259 | 0.2265 | 200 | 0.3804 | 0.2086 | 0.3881 | 0.2697 | 0.2877 | 0.1471 | 0.0000 | 0.0364 | 0.7982 | 0.4783 | 0.0000 | 0.0000 |
| 0.3776 | 0.3398 | 300 | 0.3601 | 0.2944 | 0.4200 | 0.3511 | 0.3169 | 0.3250 | 0.0000 | 0.3125 | 0.7795 | 0.4186 | 0.2250 | 0.0000 |
| 0.3908 | 0.4530 | 400 | 0.3397 | 0.3606 | 0.4571 | 0.4107 | 0.3484 | 0.2632 | 0.0952 | 0.4658 | 0.7018 | 0.6727 | 0.3256 | 0.0000 |
| 0.3672 | 0.5663 | 500 | 0.3174 | 0.4172 | 0.5390 | 0.4742 | 0.4740 | 0.5079 | 0.0000 | 0.5349 | 0.7897 | 0.7241 | 0.3636 | 0.0000 |
| 0.3415 | 0.6795 | 600 | 0.3166 | 0.4676 | 0.5587 | 0.5179 | 0.4790 | 0.4466 | 0.3684 | 0.5455 | 0.7735 | 0.7481 | 0.3913 | 0.0000 |
| 0.3177 | 0.7928 | 700 | 0.3041 | 0.5286 | 0.5964 | 0.5631 | 0.5404 | 0.5294 | 0.3733 | 0.5618 | 0.8058 | 0.7167 | 0.4086 | 0.3043 |
| 0.3400 | 0.9060 | 800 | 0.2900 | 0.5289 | 0.6175 | 0.5753 | 0.5635 | 0.5763 | 0.1905 | 0.5570 | 0.8351 | 0.7840 | 0.5273 | 0.2326 |
| 0.3445 | 1.0193 | 900 | 0.2901 | 0.5733 | 0.6250 | 0.6024 | 0.5804 | 0.5625 | 0.4918 | 0.5385 | 0.8377 | 0.7571 | 0.4086 | 0.4167 |
| 0.2587 | 1.1325 | 1000 | 0.2823 | 0.6230 | 0.6598 | 0.6431 | 0.6279 | 0.6034 | 0.5102 | 0.5610 | 0.8513 | 0.7692 | 0.4694 | 0.5965 |
| 0.2575 | 1.2458 | 1100 | 0.2789 | 0.6162 | 0.6597 | 0.6423 | 0.6265 | 0.6182 | 0.4750 | 0.5679 | 0.8454 | 0.7727 | 0.5254 | 0.5091 |
| 0.2745 | 1.3590 | 1200 | 0.2853 | 0.6132 | 0.6400 | 0.6312 | 0.6160 | 0.5362 | 0.5614 | 0.5679 | 0.8161 | 0.7857 | 0.4536 | 0.5714 |
| 0.2631 | 1.4723 | 1300 | 0.2749 | 0.6280 | 0.6574 | 0.6483 | 0.6374 | 0.5645 | 0.5435 | 0.5542 | 0.8197 | 0.7794 | 0.5586 | 0.5763 |
| 0.2261 | 1.5855 | 1400 | 0.2819 | 0.6254 | 0.6608 | 0.6467 | 0.6521 | 0.5766 | 0.5614 | 0.5747 | 0.8485 | 0.7556 | 0.5000 | 0.5614 |
| 0.2279 | 1.6988 | 1500 | 0.2814 | 0.6310 | 0.6634 | 0.6515 | 0.6566 | 0.5905 | 0.5664 | 0.5843 | 0.8351 | 0.7413 | 0.5385 | 0.5614 |
| 0.2872 | 1.8120 | 1600 | 0.2767 | 0.6290 | 0.6599 | 0.6500 | 0.6466 | 0.5905 | 0.5321 | 0.5909 | 0.8342 | 0.7518 | 0.5421 | 0.5614 |
| 0.2610 | 1.9253 | 1700 | 0.2713 | 0.6396 | 0.6701 | 0.6595 | 0.6493 | 0.5849 | 0.5882 | 0.5977 | 0.8387 | 0.7727 | 0.5283 | 0.5667 |
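The F1 columns in the table can be reproduced with a `compute_metrics` function along these lines. A sketch assuming sigmoid logits, a 0.5 decision threshold, and the label order used above; none of these are stated explicitly in the card.

```python
import numpy as np
from sklearn.metrics import f1_score

# Assumed label order, matching the per-label metrics above.
LABELS = ["marah", "jijik", "takut", "senang", "sedih", "terkejut", "biasa"]

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    labels = np.asarray(labels).astype(int)
    preds = (1 / (1 + np.exp(-logits)) > 0.5).astype(int)  # sigmoid + 0.5 threshold
    metrics = {
        "f1_macro": f1_score(labels, preds, average="macro", zero_division=0),
        "f1_micro": f1_score(labels, preds, average="micro", zero_division=0),
        "f1_weighted": f1_score(labels, preds, average="weighted", zero_division=0),
        "f1_samples": f1_score(labels, preds, average="samples", zero_division=0),
    }
    # Per-label F1, one entry per emotion.
    for name, score in zip(LABELS, f1_score(labels, preds, average=None, zero_division=0)):
        metrics[f"f1_label_{name}"] = score
    return metrics
```

Passed as `compute_metrics=...` to a `Trainer`, this would emit one row like the table above at each evaluation step.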
### Framework versions
- Transformers 4.47.1
- PyTorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0