---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: stage2Only_scaffoldGPT_new
  results: []
---

# stage2Only_scaffoldGPT_new

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.3643
- Accuracy: 0.8806
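
A minimal loading sketch using the Transformers `Auto*` classes; the repository id below is assumed from the model name and should be replaced with the actual Hub path or a local checkpoint directory if it differs.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id; substitute the real Hub path or a local directory if different.
model_id = "jarod0411/stage2Only_scaffoldGPT_new"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short continuation from a placeholder prompt.
inputs = tokenizer("example prompt", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```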

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 192
- total_eval_batch_size: 192
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
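
A hedged sketch of a `TrainingArguments` configuration matching the values above, assuming the standard `Trainer` API was used; the output directory and evaluation strategy are assumptions, and the 8-GPU setup (total batch size 192) comes from the launcher (e.g. `torchrun`), not from these arguments.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="stage2Only_scaffoldGPT_new",  # assumed output path
    learning_rate=5e-5,
    per_device_train_batch_size=24,           # 24 per device x 8 GPUs = 192 total
    per_device_eval_batch_size=24,
    seed=42,
    num_train_epochs=20.0,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",              # assumption: matches the per-epoch results table below
)
```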

### Training results

| Training Loss | Epoch | Step   | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.4661        | 1.0   | 6015   | 0.4390          | 0.8615   |
| 0.4302        | 2.0   | 12030  | 0.4107          | 0.8688   |
| 0.4145        | 3.0   | 18045  | 0.3979          | 0.8721   |
| 0.405         | 4.0   | 24060  | 0.3902          | 0.8740   |
| 0.3985        | 5.0   | 30075  | 0.3850          | 0.8753   |
| 0.3934        | 6.0   | 36090  | 0.3808          | 0.8764   |
| 0.3896        | 7.0   | 42105  | 0.3778          | 0.8772   |
| 0.3864        | 8.0   | 48120  | 0.3752          | 0.8778   |
| 0.3839        | 9.0   | 54135  | 0.3731          | 0.8784   |
| 0.3817        | 10.0  | 60150  | 0.3714          | 0.8788   |
| 0.3799        | 11.0  | 66165  | 0.3700          | 0.8792   |
| 0.3784        | 12.0  | 72180  | 0.3687          | 0.8795   |
| 0.3771        | 13.0  | 78195  | 0.3677          | 0.8798   |
| 0.376         | 14.0  | 84210  | 0.3668          | 0.8800   |
| 0.3751        | 15.0  | 90225  | 0.3661          | 0.8802   |
| 0.3743        | 16.0  | 96240  | 0.3655          | 0.8803   |
| 0.3737        | 17.0  | 102255 | 0.3650          | 0.8804   |
| 0.3732        | 18.0  | 108270 | 0.3646          | 0.8805   |
| 0.3729        | 19.0  | 114285 | 0.3644          | 0.8806   |
| 0.3726        | 20.0  | 120300 | 0.3643          | 0.8806   |
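
The accuracy column is presumably token-level prediction accuracy on the evaluation set; the card does not state the metric definition, so the sketch below shows one common way such a metric is computed for causal-LM fine-tunes (the function name and masking convention are assumptions, not the card's exact implementation).

```python
import numpy as np

def compute_accuracy(eval_pred):
    """Token-level accuracy for a causal LM; an illustrative sketch, not the card's exact metric."""
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Shift so each position's prediction is compared with the token it was trained to produce.
    predictions, labels = predictions[:, :-1], labels[:, 1:]
    mask = labels != -100  # ignore padded / masked positions
    return {"accuracy": float((predictions[mask] == labels[mask]).mean())}
```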

### Framework versions

- Transformers 4.36.0.dev0
- PyTorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0