---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-small
tags:
  - generated_from_trainer
metrics:
  - accuracy
  - f1
  - precision
  - recall
model-index:
  - name: disi-unibo-nlp
    results: []
datasets:
  - disi-unibo-nlp/foodex2-clean
---

# DeBERTa FoodEx2 Coder

This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the `train_task1` split of the [disi-unibo-nlp/foodex2-clean](https://huggingface.co/datasets/disi-unibo-nlp/foodex2-clean) dataset. It achieves the following results on the evaluation set (a usage sketch follows the list):

- Loss: 0.0548
- Accuracy: 0.9822
- F1: 0.8507
- Precision: 0.9301
- Recall: 0.7838
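
The card does not state which task head the checkpoint uses; the metric set (accuracy, F1, precision, recall) is consistent with a token-classification head, so the sketch below assumes one. Both the repo id `disi-unibo-nlp/deberta-foodex2-coder` and the head type are assumptions, not confirmed by this card; check the config on the model page.

```python
# Minimal inference sketch. Repo id and token-classification head are
# ASSUMPTIONS; verify against the actual files on the Hub.
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_id = "disi-unibo-nlp/deberta-foodex2-coder"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

coder = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",  # merge sub-word pieces into word spans
)
print(coder("wholemeal bread with sesame seeds"))
```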

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
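
While details are pending, the summary above names the `train_task1` split of `disi-unibo-nlp/foodex2-clean`. A minimal loading sketch with the `datasets` library (the column layout of the examples is not documented here):

```python
from datasets import load_dataset

# Split name taken from the model summary above; inspect the features
# to see which columns the examples actually carry.
train = load_dataset("disi-unibo-nlp/foodex2-clean", split="train_task1")
print(train)     # number of rows and column names
print(train[0])  # one raw example
```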

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
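
For reference, a sketch of how these values map onto `transformers.TrainingArguments`; the betas and epsilon listed above match the `adamw_torch` defaults, and `output_dir` is a placeholder:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deberta-foodex2-coder",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",                 # AdamW, betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```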

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.124         | 0.2899 | 1000  | 0.1032          | 0.9671   | 0.7090 | 0.8324    | 0.6174 |
| 0.1004        | 0.5799 | 2000  | 0.0855          | 0.9721   | 0.7551 | 0.8769    | 0.6631 |
| 0.0858        | 0.8698 | 3000  | 0.0737          | 0.9757   | 0.7873 | 0.9102    | 0.6937 |
| 0.0736        | 1.1598 | 4000  | 0.0696          | 0.9786   | 0.8196 | 0.9031    | 0.7502 |
| 0.0696        | 1.4497 | 5000  | 0.0639          | 0.9795   | 0.8294 | 0.8996    | 0.7694 |
| 0.068         | 1.7396 | 6000  | 0.0606          | 0.9812   | 0.8401 | 0.9385    | 0.7604 |
| 0.0634        | 2.0296 | 7000  | 0.0593          | 0.9809   | 0.8414 | 0.9123    | 0.7808 |
| 0.0565        | 2.3195 | 8000  | 0.0568          | 0.9820   | 0.8485 | 0.9318    | 0.7790 |
| 0.0584        | 2.6095 | 9000  | 0.0553          | 0.9822   | 0.8512 | 0.9296    | 0.7850 |
| 0.0568        | 2.8994 | 10000 | 0.0548          | 0.9822   | 0.8507 | 0.9301    | 0.7838 |
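
The card does not say how F1, precision, and recall are averaged. A sketch of a `compute_metrics` function with scikit-learn, assuming token-level labels with `-100` masking special tokens (the standard token-classification setup) and binary scoring; the `average="binary"` choice is an assumption:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # Drop positions masked with -100 (special tokens / padding).
    mask = labels != -100
    preds, labels = preds[mask], labels[mask]
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="binary"  # averaging mode is an assumption
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```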

### Framework versions

- Transformers 4.48.3
- PyTorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0