
# RobertaLr1e-8Wd0.02E20

This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 3.2435

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 1e-08
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
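
A minimal sketch of how these hyperparameters map onto the Hugging Face `Trainer` API. Dataset preparation is omitted, and `weight_decay=0.02` is an assumption inferred from the "Wd0.02" in the model name rather than from the list above:

```python
# Sketch only: mirrors the hyperparameters listed above.
from transformers import (
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "deepset/roberta-base-squad2"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForQuestionAnswering.from_pretrained(base)

args = TrainingArguments(
    output_dir="RobertaLr1e-8Wd0.02E20",
    learning_rate=1e-8,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    weight_decay=0.02,  # assumption: inferred from "Wd0.02" in the model name
    eval_strategy="epoch",
    # The default optimizer already uses betas=(0.9, 0.999) and epsilon=1e-08.
)

trainer = Trainer(
    model=model,
    args=args,
    # train_dataset=..., eval_dataset=...  (not documented in this card)
    tokenizer=tokenizer,
)
# trainer.train()
```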

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.9511        | 1.0   | 72   | 3.9642          |
| 2.6869        | 2.0   | 144  | 3.8774          |
| 3.0261        | 3.0   | 216  | 3.7963          |
| 2.659         | 4.0   | 288  | 3.7220          |
| 5.0333        | 5.0   | 360  | 3.6540          |
| 4.4433        | 6.0   | 432  | 3.5911          |
| 4.8192        | 7.0   | 504  | 3.5341          |
| 3.7868        | 8.0   | 576  | 3.4826          |
| 2.4543        | 9.0   | 648  | 3.4379          |
| 2.4081        | 10.0  | 720  | 3.3981          |
| 4.04          | 11.0  | 792  | 3.3631          |
| 3.5419        | 12.0  | 864  | 3.3344          |
| 2.8603        | 13.0  | 936  | 3.3092          |
| 4.191         | 14.0  | 1008 | 3.2884          |
| 3.1615        | 15.0  | 1080 | 3.2721          |
| 2.8836        | 16.0  | 1152 | 3.2593          |
| 4.573         | 17.0  | 1224 | 3.2509          |
| 2.5001        | 18.0  | 1296 | 3.2459          |
| 3.3777        | 19.0  | 1368 | 3.2438          |
| 2.5643        | 20.0  | 1440 | 3.2435          |

### Framework versions

- Transformers 4.41.2
- PyTorch 2.4.0
- Datasets 2.20.0
- Tokenizers 0.19.1
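
A minimal sketch of running extractive question answering with this checkpoint. The repository id `hsmith-morganhill/RobertaLr1e-8Wd0.02E20` is taken from this card's hosting page, and the question/context pair is a placeholder:

```python
# Sketch only: extractive QA inference with the fine-tuned checkpoint.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="hsmith-morganhill/RobertaLr1e-8Wd0.02E20",
)

result = qa(
    question="What model was fine-tuned?",
    context="This model is a fine-tuned version of deepset/roberta-base-squad2.",
)
print(result["answer"], result["score"])
```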