---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
- recall
model-index:
- name: distilbert-base-uncased-fine-tuned-emotion
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      config: split
      split: validation
      args: split
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9255
    - name: F1
      type: f1
      value: 0.9254141326182981
    - name: Recall
      type: recall
      value: 0.9255
---

# distilbert-base-uncased-fine-tuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2156
- Accuracy: 0.9255
- F1: 0.9254
- Recall: 0.9255

## Model description

This model is the result of fine-tuning distilbert-base-uncased on an NVIDIA GeForce GTX 1650, under WSL with 7 GB of RAM on Windows 11. The fine-tuning follows the book **Natural Language Processing with Transformers: Building Language Applications with Hugging Face** by Lewis Tunstall, Leandro von Werra & Thomas Wolf.

The labels map to the following emotions (see the usage sketch at the end of this card):

1. *LABEL_0* is **sadness**
2. *LABEL_1* is **joy**
3. *LABEL_2* is **love**
4. *LABEL_3* is **anger**
5. *LABEL_4* is **fear**
6. *LABEL_5* is **surprise**

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a training sketch using these values appears at the end of this card):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|
| 0.7838        | 1.0   | 250  | 0.2995          | 0.906    | 0.9039 | 0.906  |
| 0.237         | 2.0   | 500  | 0.2156          | 0.9255   | 0.9254 | 0.9255 |

### Framework versions

- Transformers 4.30.2
- PyTorch 1.13.1+cu117
- Datasets 2.13.2
- Tokenizers 0.12.1
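
## Example usage

As a sketch of how the label mapping above can be applied in practice, the snippet below loads the model with the `transformers` text-classification pipeline and translates the generic `LABEL_*` outputs into emotion names. The checkpoint path is a placeholder; point it at the published Hub id or a local output directory.

```python
from transformers import pipeline

# Placeholder path: replace with the actual Hub id or local checkpoint directory.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-fine-tuned-emotion",
)

# Mapping from the generic LABEL_* names to the emotions listed above.
id2emotion = {
    "LABEL_0": "sadness",
    "LABEL_1": "joy",
    "LABEL_2": "love",
    "LABEL_3": "anger",
    "LABEL_4": "fear",
    "LABEL_5": "surprise",
}

prediction = classifier("I can't wait to see you again!")[0]
print(id2emotion[prediction["label"]], prediction["score"])
```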
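
## Reproducing the fine-tuning

A minimal sketch of how the hyperparameters listed above map onto the `Trainer` API, in the spirit of the book's workflow. This is an illustration under stated assumptions, not the exact training script used for this card.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

emotions = load_dataset("emotion")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], padding=True, truncation=True)

emotions_encoded = emotions.map(tokenize, batched=True, batch_size=None)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=6
)

# Values mirror the "Training hyperparameters" section; the Adam betas and
# epsilon listed there are the TrainingArguments defaults.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-fine-tuned-emotion",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=2,
    seed=42,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=emotions_encoded["train"],
    eval_dataset=emotions_encoded["validation"],
)
trainer.train()
```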
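
The identical accuracy and recall values in the results table are expected: with `average="weighted"`, multiclass recall reduces to accuracy. A `compute_metrics` callback along these lines (a sketch assuming scikit-learn) can be passed to the `Trainer` via its `compute_metrics` argument to report the three metrics:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, recall_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="weighted"),
        # Weighted recall equals accuracy for single-label multiclass
        # predictions, which is why the two columns match above.
        "recall": recall_score(labels, preds, average="weighted"),
    }
```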