---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ellis-v3-emotion-leadership
  results: []
---
NOTE: This model will be used for comparison with Version 2.0. It may be deleted after this validation/comparison.
# ellis-v3-emotion-leadership
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 2.0202
- Accuracy: 0.8402
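For convenience, a minimal inference sketch is shown below. The Hub repository id is a placeholder, and the label names returned depend on the training data, which this card does not document.

```python
# Minimal inference sketch. "<user>/ellis-v3-emotion-leadership" is a
# hypothetical Hub id; substitute the actual location of this checkpoint.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="<user>/ellis-v3-emotion-leadership",  # hypothetical Hub id
)

result = classifier("The team rallied around the new vision and delivered early.")
print(result)
# e.g. [{'label': '...', 'score': 0.97}] -- label names depend on the training data
```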
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 70
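The sketch below shows how these settings map onto `transformers.TrainingArguments`. It is a reconstruction under assumptions: the dataset, tokenization step, and number of labels are not described in this card, so those pieces are placeholders.

```python
# Sketch of the hyperparameters above expressed as TrainingArguments.
# The dataset/tokenization code and num_labels are NOT documented in this
# card and are placeholders here.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=2,  # placeholder: the actual label count is not documented
)

training_args = TrainingArguments(
    output_dir="ellis-v3-emotion-leadership",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=70,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults:
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

# These arguments would then be passed to transformers.Trainer together with
# the tokenized train/eval datasets, which are not shown in this card.
```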
### Training results
Training Loss | Epoch | Step | Validation Loss | Accuracy |
---|---|---|---|---|
0.6209 | 1.0 | 1479 | 0.5094 | 0.8014 |
0.4893 | 2.0 | 2958 | 0.4642 | 0.8166 |
0.4244 | 3.0 | 4437 | 0.4640 | 0.8284 |
0.3607 | 4.0 | 5916 | 0.4596 | 0.8333 |
0.3021 | 5.0 | 7395 | 0.4892 | 0.8322 |
0.2391 | 6.0 | 8874 | 0.5455 | 0.8288 |
0.2066 | 7.0 | 10353 | 0.6553 | 0.8288 |
0.1669 | 8.0 | 11832 | 0.6856 | 0.8387 |
0.1515 | 9.0 | 13311 | 0.8654 | 0.8280 |
0.1154 | 10.0 | 14790 | 0.8985 | 0.8322 |
0.094 | 11.0 | 16269 | 1.1159 | 0.8280 |
0.0822 | 12.0 | 17748 | 1.1954 | 0.8227 |
0.082 | 13.0 | 19227 | 1.2213 | 0.8341 |
0.0656 | 14.0 | 20706 | 1.2533 | 0.8375 |
0.0592 | 15.0 | 22185 | 1.3723 | 0.8284 |
0.0539 | 16.0 | 23664 | 1.4376 | 0.8326 |
0.0533 | 17.0 | 25143 | 1.4746 | 0.8291 |
0.044 | 18.0 | 26622 | 1.4234 | 0.8288 |
0.0411 | 19.0 | 28101 | 1.4971 | 0.8253 |
0.0343 | 20.0 | 29580 | 1.5132 | 0.8284 |
0.0359 | 21.0 | 31059 | 1.5020 | 0.8360 |
0.0309 | 22.0 | 32538 | 1.6418 | 0.8356 |
0.0416 | 23.0 | 34017 | 1.4984 | 0.8303 |
0.0314 | 24.0 | 35496 | 1.5713 | 0.8341 |
0.0316 | 25.0 | 36975 | 1.5679 | 0.8352 |
0.0281 | 26.0 | 38454 | 1.6399 | 0.8311 |
0.0179 | 27.0 | 39933 | 1.7032 | 0.8231 |
0.0326 | 28.0 | 41412 | 1.6551 | 0.8330 |
0.0178 | 29.0 | 42891 | 1.7136 | 0.8284 |
0.0149 | 30.0 | 44370 | 1.7317 | 0.8288 |
0.0211 | 31.0 | 45849 | 1.6790 | 0.8314 |
0.0221 | 32.0 | 47328 | 1.7909 | 0.8280 |
0.0179 | 33.0 | 48807 | 1.8027 | 0.8314 |
0.022 | 34.0 | 50286 | 1.7754 | 0.8299 |
0.0198 | 35.0 | 51765 | 1.7498 | 0.8295 |
0.0124 | 36.0 | 53244 | 1.8098 | 0.8356 |
0.0123 | 37.0 | 54723 | 1.8535 | 0.8261 |
0.0103 | 38.0 | 56202 | 1.8827 | 0.8345 |
0.0145 | 39.0 | 57681 | 1.8882 | 0.8303 |
0.0162 | 40.0 | 59160 | 1.8174 | 0.8326 |
0.0103 | 41.0 | 60639 | 1.8350 | 0.8368 |
0.0103 | 42.0 | 62118 | 1.7853 | 0.8390 |
0.0136 | 43.0 | 63597 | 1.7032 | 0.8368 |
0.0099 | 44.0 | 65076 | 1.8274 | 0.8318 |
0.0074 | 45.0 | 66555 | 1.8598 | 0.8333 |
0.0108 | 46.0 | 68034 | 1.7978 | 0.8413 |
0.0063 | 47.0 | 69513 | 1.8116 | 0.8364 |
0.0112 | 48.0 | 70992 | 1.8066 | 0.8356 |
0.0038 | 49.0 | 72471 | 1.9092 | 0.8352 |
0.005 | 50.0 | 73950 | 1.9159 | 0.8356 |
0.0035 | 51.0 | 75429 | 1.8669 | 0.8379 |
0.0067 | 52.0 | 76908 | 1.9222 | 0.8333 |
0.0049 | 53.0 | 78387 | 1.8417 | 0.8398 |
0.0034 | 54.0 | 79866 | 2.0452 | 0.8311 |
0.0056 | 55.0 | 81345 | 1.9375 | 0.8349 |
0.0014 | 56.0 | 82824 | 1.9941 | 0.8322 |
0.0004 | 57.0 | 84303 | 2.0133 | 0.8349 |
0.0017 | 58.0 | 85782 | 2.0038 | 0.8356 |
0.0009 | 59.0 | 87261 | 2.0347 | 0.8356 |
0.0015 | 60.0 | 88740 | 1.9901 | 0.8368 |
0.0014 | 61.0 | 90219 | 2.0233 | 0.8368 |
0.0022 | 62.0 | 91698 | 2.0148 | 0.8356 |
0.0012 | 63.0 | 93177 | 1.9823 | 0.8383 |
0.0014 | 64.0 | 94656 | 2.0099 | 0.8368 |
0.0034 | 65.0 | 96135 | 1.9925 | 0.8402 |
0.0009 | 66.0 | 97614 | 2.0088 | 0.8390 |
0.0006 | 67.0 | 99093 | 2.0141 | 0.8394 |
0.0 | 68.0 | 100572 | 2.0199 | 0.8417 |
0.0012 | 69.0 | 102051 | 2.0187 | 0.8394 |
0.0 | 70.0 | 103530 | 2.0202 | 0.8402 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.0
- Datasets 2.18.0
- Tokenizers 0.15.2
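When reproducing these results, it can help to confirm that the local environment matches the pinned versions above. The snippet below is a small convenience check, not part of the original training code.

```python
# Convenience check that the local environment matches the versions listed above.
import datasets
import tokenizers
import torch
import transformers

expected = {
    "transformers": "4.39.3",
    "torch": "2.1.0",
    "datasets": "2.18.0",
    "tokenizers": "0.15.2",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    print(f"{name}: installed {installed[name]}, card was trained with {want}")
```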