---
base_model: aubmindlab/bert-base-arabertv02
tags:
  - generated_from_trainer
model-index:
  - name: arabert_baseline_relevance_task1_fold0
    results: []
---

arabert_baseline_relevance_task1_fold0

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set (a minimal usage sketch follows the list below):

  • Loss: 0.2514
  • Qwk: 0.0808
  • Mse: 0.2559
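
The snippet below is a hedged usage sketch rather than the author's code: it assumes the checkpoint is published under the Hub repo id salbatarni/arabert_baseline_relevance_task1_fold0 and that the model was trained with a single-output regression head, which is consistent with the MSE/QWK metrics reported above.

```python
# Hedged usage sketch; the repo id and the single-output regression head are assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "salbatarni/arabert_baseline_relevance_task1_fold0"  # assumed Hub repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("نص تجريبي للتقييم", return_tensors="pt")  # example Arabic input
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # predicted relevance score
print(score)
```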

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
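
A minimal sketch of how these hyperparameters map onto a `TrainingArguments` object, assuming a single-output regression head; the output directory is an assumption, and the training data and `compute_metrics` function are not published with this card.

```python
# Sketch only: mirrors the listed hyperparameters. Adam betas/epsilon equal the
# TrainingArguments defaults (adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8).
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          TrainingArguments)

model = AutoModelForSequenceClassification.from_pretrained(
    "aubmindlab/bert-base-arabertv02", num_labels=1)  # regression head assumed
tokenizer = AutoTokenizer.from_pretrained("aubmindlab/bert-base-arabertv02")

args = TrainingArguments(
    output_dir="arabert_baseline_relevance_task1_fold0",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
# `args` would then be passed to a transformers.Trainer together with the
# (unpublished) train/eval datasets and a compute_metrics function.
```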

Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk     | Mse    |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|
| No log        | 0.3333 | 2    | 1.1076          | -0.0049 | 1.0949 |
| No log        | 0.6667 | 4    | 0.2779          | 0.0     | 0.2800 |
| No log        | 1.0    | 6    | 0.2271          | 0.3467  | 0.2294 |
| No log        | 1.3333 | 8    | 0.2361          | 0.1667  | 0.2359 |
| No log        | 1.6667 | 10   | 0.1985          | 0.1250  | 0.2028 |
| No log        | 2.0    | 12   | 0.2232          | 0.0     | 0.2304 |
| No log        | 2.3333 | 14   | 0.2167          | 0.0     | 0.2237 |
| No log        | 2.6667 | 16   | 0.2206          | 0.0     | 0.2259 |
| No log        | 3.0    | 18   | 0.2075          | 0.0     | 0.2127 |
| No log        | 3.3333 | 20   | 0.2078          | 0.0     | 0.2132 |
| No log        | 3.6667 | 22   | 0.2183          | 0.0     | 0.2235 |
| No log        | 4.0    | 24   | 0.2326          | 0.0     | 0.2375 |
| No log        | 4.3333 | 26   | 0.2498          | 0.0392  | 0.2543 |
| No log        | 4.6667 | 28   | 0.2547          | 0.0392  | 0.2590 |
| No log        | 5.0    | 30   | 0.2514          | 0.0808  | 0.2559 |
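
The Qwk and Mse columns are quadratic weighted kappa and mean squared error. Below is a hedged sketch of how such metrics could be computed; rounding continuous predictions to integer bins for the kappa is an assumption, not the author's actual `compute_metrics`.

```python
# Assumed metric computation; discretizing predictions via rounding is a guess.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def compute_metrics(preds, labels):
    preds = np.asarray(preds, dtype=float).squeeze()
    labels = np.asarray(labels, dtype=float).squeeze()
    mse = mean_squared_error(labels, preds)
    qwk = cohen_kappa_score(np.rint(labels).astype(int),
                            np.rint(preds).astype(int),
                            weights="quadratic")
    return {"qwk": qwk, "mse": mse}

print(compute_metrics([0.9, 2.1, 1.0], [1.0, 2.0, 1.0]))
```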

Framework versions

  • Transformers 4.44.0
  • Pytorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1