# distilbert-base-cased-emotion
Training: the model was trained using the script provided in the following repository: https://github.com/MorenoLaQuatra/transformers-tasks-templates
This model is a fine-tuned version of distilbert-base-cased on the emotion dataset. It achieves the following results on the evaluation set:
- Loss: 0.3272
- Accuracy: 0.9235
- F1: 0.9217
- Precision: 0.9224
- Recall: 0.9235
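
The checkpoint can be used directly for emotion classification. Below is a minimal inference sketch with the Transformers `pipeline` API; the input sentence is illustrative, and the exact label names returned depend on the model configuration:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hugging Face Hub.
classifier = pipeline(
    "text-classification",
    model="morenolq/distilbert-base-cased-emotion",
)

# Returns a list of dicts such as [{"label": ..., "score": ...}];
# the label names come from the model config.
print(classifier("I can't wait to see you again, this is the best news all week!"))
```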
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows this list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
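
As a rough guide only, these settings could be expressed with Transformers `TrainingArguments` as shown below. The exact training script lives in the repository linked above; the argument values mirror the list, but the output directory and the per-epoch evaluation strategy are assumptions:

```python
from transformers import TrainingArguments

# Hypothetical mapping of the hyperparameters above onto TrainingArguments;
# the actual configuration is defined in the linked training repository.
training_args = TrainingArguments(
    output_dir="distilbert-base-cased-emotion",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=10,
    fp16=True,                    # "Native AMP" mixed precision
    evaluation_strategy="epoch",  # assumption: the results table reports per-epoch eval
)
```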
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|---------------|-------|------|-----------------|----------|--------|-----------|--------|
| 0.2776        | 1.0   | 500  | 0.2954          | 0.9      | 0.8957 | 0.9031    | 0.9    |
| 0.1887        | 2.0   | 1000 | 0.1716          | 0.934    | 0.9344 | 0.9370    | 0.934  |
| 0.119         | 3.0   | 1500 | 0.1614          | 0.9345   | 0.9342 | 0.9377    | 0.9345 |
| 0.1001        | 4.0   | 2000 | 0.2018          | 0.936    | 0.9353 | 0.9359    | 0.936  |
| 0.0704        | 5.0   | 2500 | 0.1925          | 0.935    | 0.9349 | 0.9354    | 0.935  |
| 0.0471        | 6.0   | 3000 | 0.2369          | 0.938    | 0.9373 | 0.9377    | 0.938  |
| 0.0322        | 7.0   | 3500 | 0.2693          | 0.938    | 0.9382 | 0.9392    | 0.938  |
| 0.0137        | 8.0   | 4000 | 0.2926          | 0.937    | 0.9371 | 0.9372    | 0.937  |
| 0.0099        | 9.0   | 4500 | 0.2964          | 0.9365   | 0.9362 | 0.9362    | 0.9365 |
| 0.0114        | 10.0  | 5000 | 0.3044          | 0.935    | 0.9349 | 0.9350    | 0.935  |
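
The accuracy, F1, precision, and recall columns appear consistent with weighted averaging over the emotion labels. Below is a sketch of a `compute_metrics` function that would produce metrics in this form; this is an assumption about the setup, and the actual metric code is part of the training script linked above:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    # The Trainer passes (logits, labels) for the evaluation set.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, predictions, average="weighted"
    )
    return {
        "accuracy": accuracy_score(labels, predictions),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```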
### Framework versions
- Transformers 4.22.1
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
## Evaluation results
- Accuracy on the emotion validation set (verified): 0.923
- Accuracy on the emotion validation set (self-reported): 0.938
- Precision (macro) on the emotion validation set (self-reported): 0.928
- Precision (micro) on the emotion validation set (self-reported): 0.938
- Precision (weighted) on the emotion validation set (self-reported): 0.938
- Recall (macro) on the emotion validation set (self-reported): 0.903
- Recall (micro) on the emotion validation set (self-reported): 0.938
- Recall (weighted) on the emotion validation set (self-reported): 0.938
- F1 (macro) on the emotion validation set (self-reported): 0.915
- F1 (micro) on the emotion validation set (self-reported): 0.938
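
These figures can be sanity-checked against the validation split of the emotion dataset on the Hub. A minimal reproduction sketch (accuracy only, for brevity), assuming the dataset id `emotion` and the pipeline usage shown earlier:

```python
from datasets import load_dataset
from transformers import pipeline

# Assumptions: the "emotion" dataset on the Hub is the one referenced by this card,
# and its validation split provides "text" and "label" columns.
dataset = load_dataset("emotion", split="validation")
classifier = pipeline(
    "text-classification",
    model="morenolq/distilbert-base-cased-emotion",
)

# Convert predicted label names back to integer ids via the model config.
label2id = classifier.model.config.label2id
outputs = classifier(dataset["text"], truncation=True)
predictions = [label2id[out["label"]] for out in outputs]
accuracy = sum(int(p == y) for p, y in zip(predictions, dataset["label"])) / len(dataset)
print(f"Validation accuracy: {accuracy:.4f}")
```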