---
license: apache-2.0
base_model: projecte-aina/roberta-base-ca-v2-cased-te
tags:
  - generated_from_trainer
metrics:
  - accuracy
  - precision
  - recall
  - f1
model-index:
  - name: SYN_300524_epoch_5
    results: []
---

# SYN_300524_epoch_5

This model is a fine-tuned version of [projecte-aina/roberta-base-ca-v2-cased-te](https://huggingface.co/projecte-aina/roberta-base-ca-v2-cased-te) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.3372
- Accuracy: 0.98
- Precision: 0.9803
- Recall: 0.98
- F1: 0.9800
- Ratio: 0.488
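
The card does not say how these metrics are computed. Below is a minimal `compute_metrics` sketch that could produce fields like these, assuming weighted averaging for precision/recall/F1 and assuming `ratio` is the share of examples predicted as the positive class; neither assumption is confirmed by the card.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    """Metric function for a Hugging Face Trainer (a sketch, not the author's code)."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted"  # averaging strategy is an assumption
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
        # "ratio" is undocumented; assumed here to be the fraction of examples
        # predicted as class 1 (it hovers near 0.5 in the logs below).
        "ratio": float(np.mean(preds == 1)),
    }
```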

## Model description

More information needed

## Intended uses & limitations

More information needed
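
Until the author fills this in: the base model is a Catalan textual-entailment classifier that scores a (premise, hypothesis) sentence pair, so this fine-tune presumably keeps the same interface. A minimal inference sketch follows; the repository id `adriansanz/te-zsc-synthetic` and the example sentences are assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical repo id inferred from this card's location; adjust as needed.
model_id = "adriansanz/te-zsc-synthetic"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "El govern ha aprovat la nova llei d'habitatge."  # illustrative
hypothesis = "S'ha aprovat una llei sobre l'habitatge."     # illustrative

# Textual-entailment models encode the (premise, hypothesis) sequence pair.
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1).squeeze()

for label_id, p in enumerate(probs.tolist()):
    print(model.config.id2label[label_id], f"{p:.4f}")
```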

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 47
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- lr_scheduler_warmup_steps: 4
- num_epochs: 1
- label_smoothing_factor: 0.1
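
As a sketch, these map onto Hugging Face `TrainingArguments` roughly as follows; the `output_dir` and the 10-step evaluation cadence are inferred, not stated, and when both a warmup ratio and warmup steps are set the Trainer uses the step count.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="SYN_300524_epoch_5",  # assumed from the model name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=47,
    gradient_accumulation_steps=2,    # 16 x 2 = effective train batch size 32
    lr_scheduler_type="linear",
    warmup_ratio=0.06,
    warmup_steps=4,                   # overrides warmup_ratio when non-zero
    num_train_epochs=1,
    label_smoothing_factor=0.1,
    evaluation_strategy="steps",      # inferred from the 10-step eval logs below
    eval_steps=10,
)
```

The listed Adam betas and epsilon are the Trainer defaults, so no explicit optimizer configuration is needed.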

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | Precision | Recall | F1     | Ratio |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-----:|
| 0.3174        | 0.0533 | 10   | 0.3307          | 0.984    | 0.9840    | 0.984  | 0.9840 | 0.496 |
| 0.3202        | 0.1067 | 20   | 0.3258          | 0.986    | 0.9861    | 0.986  | 0.9860 | 0.494 |
| 0.3016        | 0.16   | 30   | 0.3282          | 0.986    | 0.9860    | 0.986  | 0.9860 | 0.504 |
| 0.3291        | 0.2133 | 40   | 0.3495          | 0.977    | 0.9774    | 0.977  | 0.9770 | 0.485 |
| 0.2942        | 0.2667 | 50   | 0.3602          | 0.973    | 0.9738    | 0.973  | 0.9730 | 0.479 |
| 0.3121        | 0.32   | 60   | 0.3586          | 0.973    | 0.9731    | 0.973  | 0.9730 | 0.493 |
| 0.3226        | 0.3733 | 70   | 0.3736          | 0.968    | 0.9681    | 0.968  | 0.9680 | 0.508 |
| 0.3226        | 0.4267 | 80   | 0.3515          | 0.979    | 0.9791    | 0.979  | 0.9790 | 0.493 |
| 0.3265        | 0.48   | 90   | 0.3697          | 0.97     | 0.9706    | 0.97   | 0.9700 | 0.482 |
| 0.3424        | 0.5333 | 100  | 0.3650          | 0.971    | 0.9717    | 0.971  | 0.9710 | 0.481 |
| 0.3348        | 0.5867 | 110  | 0.3502          | 0.976    | 0.9760    | 0.976  | 0.9760 | 0.496 |
| 0.3393        | 0.64   | 120  | 0.3441          | 0.978    | 0.9780    | 0.978  | 0.9780 | 0.496 |
| 0.3421        | 0.6933 | 130  | 0.3397          | 0.979    | 0.9791    | 0.979  | 0.9790 | 0.493 |
| 0.3319        | 0.7467 | 140  | 0.3412          | 0.979    | 0.9791    | 0.979  | 0.9790 | 0.493 |
| 0.3554        | 0.8    | 150  | 0.3416          | 0.977    | 0.9772    | 0.977  | 0.9770 | 0.489 |
| 0.3829        | 0.8533 | 160  | 0.3428          | 0.978    | 0.9785    | 0.978  | 0.9780 | 0.484 |
| 0.3631        | 0.9067 | 170  | 0.3396          | 0.979    | 0.9793    | 0.979  | 0.9790 | 0.487 |
| 0.3362        | 0.96   | 180  | 0.3376          | 0.98     | 0.9803    | 0.98   | 0.9800 | 0.488 |

### Framework versions

- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
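
A quick sanity check that a local environment matches these versions (a convenience snippet, not part of the original card):

```python
import transformers, torch, datasets, tokenizers

# Expected versions from the card above.
print(transformers.__version__)  # 4.41.1
print(torch.__version__)         # 2.3.0+cu121
print(datasets.__version__)      # 2.19.1
print(tokenizers.__version__)    # 0.19.1
```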