# RoBERTa-Base-SE2025T11A-sun-v20250107143556
This model is a fine-tuned version of [w11wo/sundanese-roberta-base-emotion-classifier](https://huggingface.co/w11wo/sundanese-roberta-base-emotion-classifier) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.3533
- F1 Macro: 0.6178
- F1 Micro: 0.6190
- F1 Weighted: 0.6165
- F1 Samples: 0.6113
- F1 Label Marah (anger): 0.6408
- F1 Label Jijik (disgust): 0.5435
- F1 Label Takut (fear): 0.5938
- F1 Label Senang (joy): 0.8395
- F1 Label Sedih (sadness): 0.5758
- F1 Label Terkejut (surprise): 0.5063
- F1 Label Biasa (neutral): 0.6250
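For reference, the averaged scores above are the standard multi-label F1 variants from scikit-learn. A minimal sketch of how they can be computed, assuming binary gold and prediction matrices with one column per label in the order listed (the random arrays are illustrative stand-ins, not the actual evaluation data):

```python
import numpy as np
from sklearn.metrics import f1_score

LABELS = ["marah", "jijik", "takut", "senang", "sedih", "terkejut", "biasa"]

# Illustrative stand-ins; in practice these come from the evaluation set.
y_true = np.random.randint(0, 2, size=(100, len(LABELS)))
y_pred = np.random.randint(0, 2, size=(100, len(LABELS)))

print("F1 Macro:   ", f1_score(y_true, y_pred, average="macro", zero_division=0))
print("F1 Micro:   ", f1_score(y_true, y_pred, average="micro", zero_division=0))
print("F1 Weighted:", f1_score(y_true, y_pred, average="weighted", zero_division=0))
print("F1 Samples: ", f1_score(y_true, y_pred, average="samples", zero_division=0))

# Per-label F1, one score per emotion.
for name, score in zip(LABELS, f1_score(y_true, y_pred, average=None, zero_division=0)):
    print(f"F1 Label {name.capitalize()}: {score:.4f}")
```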
## Model description
More information needed
## Intended uses & limitations
More information needed
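No usage example is included in this card. Below is a minimal multi-label inference sketch; the Hub repo id, the sigmoid activation, and the 0.5 decision threshold are all assumptions, since the card does not document the inference procedure:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed repo id; replace with this model's actual path on the Hub.
MODEL_ID = "RoBERTa-Base-SE2025T11A-sun-v20250107143556"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)

text = "Abdi bingah pisan dinten ieu!"  # Sundanese: "I am very happy today!"
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label head: sigmoid per label, 0.5 threshold (assumed convention).
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```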
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
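A minimal sketch of the corresponding `TrainingArguments`, mirroring the values listed above; `output_dir` is a placeholder, and dataset loading and the model head are omitted since the card does not specify them:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is an assumed placeholder.
training_args = TrainingArguments(
    output_dir="RoBERTa-Base-SE2025T11A-sun",
    learning_rate=2e-05,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```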
### Training results
Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | F1 Samples | F1 Label Marah | F1 Label Jijik | F1 Label Takut | F1 Label Senang | F1 Label Sedih | F1 Label Terkejut | F1 Label Biasa |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0.502 | 0.1805 | 100 | 0.4277 | 0.1484 | 0.2553 | 0.1776 | 0.1892 | 0.5255 | 0.04 | 0.0 | 0.2917 | 0.1818 | 0.0 | 0.0 |
0.4765 | 0.3610 | 200 | 0.4058 | 0.2881 | 0.3526 | 0.3134 | 0.2492 | 0.5652 | 0.0755 | 0.48 | 0.4727 | 0.4231 | 0.0 | 0.0 |
0.4129 | 0.5415 | 300 | 0.3793 | 0.2532 | 0.3228 | 0.2553 | 0.2177 | 0.1356 | 0.0 | 0.4348 | 0.7949 | 0.4074 | 0.0 | 0.0 |
0.4055 | 0.7220 | 400 | 0.3522 | 0.3939 | 0.4559 | 0.4206 | 0.3194 | 0.4675 | 0.2069 | 0.5 | 0.7765 | 0.4727 | 0.3333 | 0.0 |
0.4033 | 0.9025 | 500 | 0.3440 | 0.3094 | 0.4011 | 0.3305 | 0.2988 | 0.5116 | 0.0 | 0.4681 | 0.7711 | 0.4151 | 0.0 | 0.0 |
0.342 | 1.0830 | 600 | 0.3285 | 0.4847 | 0.5581 | 0.5235 | 0.4581 | 0.6263 | 0.2985 | 0.52 | 0.8140 | 0.5758 | 0.5581 | 0.0 |
0.3255 | 1.2635 | 700 | 0.3483 | 0.4836 | 0.5529 | 0.5265 | 0.4913 | 0.6604 | 0.4110 | 0.5753 | 0.7901 | 0.4528 | 0.4952 | 0.0 |
0.285 | 1.4440 | 800 | 0.3312 | 0.5159 | 0.5356 | 0.5193 | 0.4790 | 0.4324 | 0.4773 | 0.5769 | 0.8046 | 0.5352 | 0.4 | 0.3846 |
0.323 | 1.6245 | 900 | 0.3123 | 0.5834 | 0.5833 | 0.5651 | 0.5461 | 0.56 | 0.3235 | 0.6452 | 0.8293 | 0.6053 | 0.4333 | 0.6875 |
0.3056 | 1.8051 | 1000 | 0.3225 | 0.6087 | 0.6 | 0.5863 | 0.5420 | 0.5122 | 0.3478 | 0.6471 | 0.8395 | 0.6269 | 0.56 | 0.7273 |
0.2933 | 1.9856 | 1100 | 0.3183 | 0.5223 | 0.5691 | 0.5524 | 0.5315 | 0.6733 | 0.4048 | 0.5490 | 0.7727 | 0.5625 | 0.5122 | 0.1818 |
0.2083 | 2.1661 | 1200 | 0.3204 | 0.5837 | 0.5904 | 0.5868 | 0.5518 | 0.5647 | 0.5208 | 0.6316 | 0.8205 | 0.4912 | 0.5567 | 0.5 |
0.225 | 2.3466 | 1300 | 0.3042 | 0.6107 | 0.6201 | 0.6068 | 0.5941 | 0.6465 | 0.4595 | 0.7222 | 0.8293 | 0.5397 | 0.4776 | 0.6 |
0.21 | 2.5271 | 1400 | 0.3257 | 0.6022 | 0.6008 | 0.5980 | 0.5884 | 0.62 | 0.5370 | 0.5818 | 0.7901 | 0.5484 | 0.4928 | 0.6452 |
0.1873 | 2.7076 | 1500 | 0.3303 | 0.6000 | 0.6100 | 0.6033 | 0.6102 | 0.6538 | 0.5495 | 0.5862 | 0.8043 | 0.5714 | 0.4634 | 0.5714 |
0.2102 | 2.8881 | 1600 | 0.3205 | 0.5906 | 0.6055 | 0.5977 | 0.6005 | 0.6783 | 0.4944 | 0.5862 | 0.7857 | 0.5714 | 0.5 | 0.5185 |
0.186 | 3.0686 | 1700 | 0.3239 | 0.6094 | 0.6160 | 0.6141 | 0.5887 | 0.6237 | 0.5714 | 0.5758 | 0.8354 | 0.5484 | 0.5591 | 0.5517 |
0.142 | 3.2491 | 1800 | 0.3281 | 0.6074 | 0.6154 | 0.6108 | 0.5992 | 0.6486 | 0.5060 | 0.6061 | 0.8395 | 0.5714 | 0.5287 | 0.5517 |
0.1476 | 3.4296 | 1900 | 0.3395 | 0.6229 | 0.6157 | 0.6123 | 0.5971 | 0.625 | 0.5 | 0.5660 | 0.8608 | 0.6061 | 0.48 | 0.7222 |
0.1473 | 3.6101 | 2000 | 0.3463 | 0.6006 | 0.6163 | 0.6118 | 0.6059 | 0.625 | 0.5455 | 0.5574 | 0.8395 | 0.5938 | 0.5814 | 0.4615 |
0.1239 | 3.7906 | 2100 | 0.3511 | 0.6009 | 0.6024 | 0.5975 | 0.5944 | 0.6465 | 0.5055 | 0.5517 | 0.7907 | 0.5714 | 0.4935 | 0.6471 |
0.1392 | 3.9711 | 2200 | 0.3458 | 0.6326 | 0.6340 | 0.6313 | 0.6258 | 0.6602 | 0.5769 | 0.5424 | 0.8193 | 0.5846 | 0.5783 | 0.6667 |
0.1247 | 4.1516 | 2300 | 0.3484 | 0.6218 | 0.6216 | 0.6188 | 0.6129 | 0.6139 | 0.5625 | 0.5614 | 0.8293 | 0.5846 | 0.5542 | 0.6471 |
0.0873 | 4.3321 | 2400 | 0.3492 | 0.6195 | 0.6248 | 0.6229 | 0.6198 | 0.6465 | 0.5657 | 0.5846 | 0.8293 | 0.5758 | 0.5542 | 0.5806 |
0.0979 | 4.5126 | 2500 | 0.3490 | 0.6215 | 0.6211 | 0.6197 | 0.6072 | 0.6275 | 0.5517 | 0.5970 | 0.8462 | 0.5846 | 0.5185 | 0.625 |
0.1132 | 4.6931 | 2600 | 0.3534 | 0.6235 | 0.6236 | 0.6220 | 0.6162 | 0.6408 | 0.5376 | 0.6061 | 0.8293 | 0.5846 | 0.5412 | 0.625 |
0.0915 | 4.8736 | 2700 | 0.3533 | 0.6178 | 0.6190 | 0.6165 | 0.6113 | 0.6408 | 0.5435 | 0.5938 | 0.8395 | 0.5758 | 0.5063 | 0.625 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0