# RoBERTa-Base-SE2025T11A-sun-v20250112121908
This model is a fine-tuned version of [w11wo/sundanese-roberta-base-emotion-classifier](https://huggingface.co/w11wo/sundanese-roberta-base-emotion-classifier) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.2949
- F1 Macro: 0.5910
- F1 Micro: 0.6274
- F1 Weighted: 0.6153
- F1 Samples: 0.6087
- F1 Label Marah: 0.5714
- F1 Label Jijik: 0.4719
- F1 Label Takut: 0.5581
- F1 Label Senang: 0.8020
- F1 Label Sedih: 0.7213
- F1 Label Terkejut: 0.5283
- F1 Label Biasa: 0.4839
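The per-label F1 scores above suggest a multi-label classifier over seven Sundanese emotion labels (Marah, Jijik, Takut, Senang, Sedih, Terkejut, Biasa). Below is a minimal inference sketch, assuming the model is hosted on the Hugging Face Hub under the repo id above, that its config carries these label names in `id2label`, and that predictions are thresholded at 0.5; none of these details are stated elsewhere on this card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical Hub repo id taken from the model name above; adjust to the actual path.
MODEL_ID = "RoBERTa-Base-SE2025T11A-sun-v20250112121908"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

# Illustrative Sundanese input ("I am very happy today").
text = "Abdi bingah pisan dinten ieu."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits.squeeze(0)

# Multi-label decoding: independent sigmoid per label, 0.5 threshold (assumed).
probs = torch.sigmoid(logits)
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)  # e.g. ["Senang"], if the config carries the card's label names
```

Because each label gets its own sigmoid, several emotions can fire on one sentence, which is consistent with the F1 Samples metric reported above.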
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 2
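These settings map directly onto `transformers.TrainingArguments`. A minimal sketch of the likely `Trainer` setup, assuming Transformers 4.48 (listed under framework versions below); the multi-label head configuration is inferred from the F1 Samples metric, and `train_ds`/`eval_ds` are hypothetical pre-tokenized datasets, with a `compute_metrics` as sketched after the results table:

```python
from transformers import (
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

# Assumptions, not stated on this card: a multi-label head is inferred from the
# per-label/F1 Samples metrics, and `ignore_mismatched_sizes` is needed because the
# base classifier's head was trained for a different label set.
model = AutoModelForSequenceClassification.from_pretrained(
    "w11wo/sundanese-roberta-base-emotion-classifier",
    num_labels=7,
    problem_type="multi_label_classification",
    ignore_mismatched_sizes=True,
)

training_args = TrainingArguments(
    output_dir="RoBERTa-Base-SE2025T11A-sun-v20250112121908",
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    optim="adamw_torch",        # betas=(0.9, 0.999) and epsilon=1e-08 are the defaults
    lr_scheduler_type="linear",
    num_train_epochs=2,
    eval_strategy="steps",      # the results table logs an evaluation every 100 steps
    eval_steps=100,
    logging_steps=100,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_ds,     # hypothetical pre-tokenized datasets
    eval_dataset=eval_ds,
    compute_metrics=compute_metrics,  # sketched after the results table below
)
trainer.train()
```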
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | F1 Weighted | F1 Samples | F1 Label Marah | F1 Label Jijik | F1 Label Takut | F1 Label Senang | F1 Label Sedih | F1 Label Terkejut | F1 Label Biasa |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.4842 | 0.1134 | 100 | 0.4075 | 0.0958 | 0.2095 | 0.1451 | 0.1311 | 0.0 | 0.0 | 0.0 | 0.6709 | 0.0 | 0.0 | 0.0 |
| 0.4016 | 0.2268 | 200 | 0.3800 | 0.2959 | 0.4363 | 0.3427 | 0.3397 | 0.4536 | 0.0 | 0.5526 | 0.7867 | 0.2785 | 0.0 | 0.0 |
| 0.4007 | 0.3401 | 300 | 0.3647 | 0.3197 | 0.4660 | 0.3694 | 0.3694 | 0.3077 | 0.0 | 0.5432 | 0.7814 | 0.6055 | 0.0 | 0.0 |
| 0.3876 | 0.4535 | 400 | 0.3456 | 0.3792 | 0.5044 | 0.4319 | 0.4219 | 0.4957 | 0.0 | 0.5432 | 0.7835 | 0.6549 | 0.1772 | 0.0 |
| 0.3772 | 0.5669 | 500 | 0.3249 | 0.4043 | 0.5341 | 0.4626 | 0.4603 | 0.4742 | 0.0 | 0.55 | 0.8165 | 0.5849 | 0.4043 | 0.0 |
| 0.3421 | 0.6803 | 600 | 0.3407 | 0.4582 | 0.5462 | 0.5097 | 0.4840 | 0.4510 | 0.4576 | 0.4658 | 0.7960 | 0.6195 | 0.4176 | 0.0 |
| 0.3847 | 0.7937 | 700 | 0.3121 | 0.4378 | 0.5429 | 0.4932 | 0.4753 | 0.4915 | 0.1875 | 0.525 | 0.8020 | 0.6286 | 0.4301 | 0.0 |
| 0.3586 | 0.9070 | 800 | 0.3055 | 0.4769 | 0.5710 | 0.5283 | 0.5091 | 0.5138 | 0.2899 | 0.5476 | 0.8060 | 0.6555 | 0.4742 | 0.0513 |
| 0.3455 | 1.0204 | 900 | 0.3050 | 0.5215 | 0.5917 | 0.5658 | 0.5356 | 0.5849 | 0.5149 | 0.5185 | 0.7958 | 0.6897 | 0.4464 | 0.1 |
| 0.3108 | 1.1338 | 1000 | 0.2907 | 0.5593 | 0.6123 | 0.5866 | 0.5662 | 0.5983 | 0.3143 | 0.5301 | 0.8182 | 0.7059 | 0.4632 | 0.4848 |
| 0.2357 | 1.2472 | 1100 | 0.2967 | 0.5547 | 0.5992 | 0.5785 | 0.5493 | 0.5053 | 0.3288 | 0.5476 | 0.8168 | 0.6667 | 0.4792 | 0.5385 |
| 0.2628 | 1.3605 | 1200 | 0.3027 | 0.5875 | 0.6280 | 0.6165 | 0.5922 | 0.5873 | 0.5455 | 0.5432 | 0.8187 | 0.7049 | 0.5128 | 0.4 |
| 0.2602 | 1.4739 | 1300 | 0.2977 | 0.5681 | 0.6136 | 0.6010 | 0.5776 | 0.5789 | 0.5 | 0.5238 | 0.8066 | 0.7188 | 0.5091 | 0.3396 |
| 0.2503 | 1.5873 | 1400 | 0.2968 | 0.5855 | 0.6245 | 0.6113 | 0.6009 | 0.5806 | 0.4691 | 0.5063 | 0.8172 | 0.7287 | 0.5045 | 0.4923 |
| 0.2694 | 1.7007 | 1500 | 0.3053 | 0.5789 | 0.6222 | 0.6087 | 0.5936 | 0.5625 | 0.4878 | 0.6022 | 0.8085 | 0.7059 | 0.5149 | 0.3704 |
| 0.2604 | 1.8141 | 1600 | 0.2978 | 0.6067 | 0.6430 | 0.6293 | 0.6260 | 0.5714 | 0.5532 | 0.5618 | 0.8269 | 0.6891 | 0.5283 | 0.5161 |
| 0.2738 | 1.9274 | 1700 | 0.2949 | 0.5910 | 0.6274 | 0.6153 | 0.6087 | 0.5714 | 0.4719 | 0.5581 | 0.8020 | 0.7213 | 0.5283 | 0.4839 |
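The metric columns in the table correspond to `sklearn.metrics.f1_score` with different `average` settings over binarized multi-label predictions. A minimal `compute_metrics` sketch in that style, assuming sigmoid outputs thresholded at 0.5 and the seven label names from this card:

```python
import numpy as np
from sklearn.metrics import f1_score

# Label names as reported on this card.
LABELS = ["Marah", "Jijik", "Takut", "Senang", "Sedih", "Terkejut", "Biasa"]

def compute_metrics(eval_pred):
    logits, labels = eval_pred  # labels: binary indicator matrix, one column per emotion
    preds = 1 / (1 + np.exp(-logits)) > 0.5  # sigmoid + 0.5 threshold (assumed)

    metrics = {
        "f1_macro": f1_score(labels, preds, average="macro", zero_division=0),
        "f1_micro": f1_score(labels, preds, average="micro", zero_division=0),
        "f1_weighted": f1_score(labels, preds, average="weighted", zero_division=0),
        "f1_samples": f1_score(labels, preds, average="samples", zero_division=0),
    }
    # One per-label F1 per emotion, matching the table's F1 Label columns.
    for name, score in zip(LABELS, f1_score(labels, preds, average=None, zero_division=0)):
        metrics[f"f1_label_{name.lower()}"] = score
    return metrics
```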
### Framework versions
- Transformers 4.48.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0