UTI2_M2_1000steps_1e6rate_05beta_CSFTDPO

This model is a fine-tuned version of tsavage68/UTI_M2_1000steps_1e7rate_SFT on an unknown dataset. It achieves the following results on the evaluation set (a sketch of how these DPO reward metrics are typically derived follows the list):

  • Loss: 0.5476
  • Rewards/chosen: 0.9941
  • Rewards/rejected: -7.4997
  • Rewards/accuracies: 0.2100
  • Rewards/margins: 8.4937
  • Logps/rejected: -24.3733
  • Logps/chosen: -2.5543
  • Logits/rejected: -2.7955
  • Logits/chosen: -2.7946
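
For readers unfamiliar with these metrics, here is a minimal sketch of the standard DPO implicit-reward definitions they follow. The beta of 0.5 is an assumption inferred from the "05beta" suffix in the model name, and the reference-model log-probabilities are illustrative values back-solved from the reported rewards, not logged quantities:

```python
# Sketch of the standard DPO implicit-reward definitions behind the metrics above.
# beta=0.5 is assumed from the "05beta" suffix in the model name; the reference
# log-probs are back-solved from the reported rewards and are illustrative only.
import torch
import torch.nn.functional as F

beta = 0.5

# Reported policy log-probabilities (summed over response tokens).
policy_logps_chosen = torch.tensor(-2.5543)
policy_logps_rejected = torch.tensor(-24.3733)

# Reference-model log-probs implied by rewards = beta * (policy - reference).
ref_logps_chosen = torch.tensor(-4.5425)
ref_logps_rejected = torch.tensor(-9.3739)

rewards_chosen = beta * (policy_logps_chosen - ref_logps_chosen)        # ~0.9941
rewards_rejected = beta * (policy_logps_rejected - ref_logps_rejected)  # ~-7.4997
margin = rewards_chosen - rewards_rejected                              # ~8.4937

# Per-pair DPO loss; the reported eval loss (0.5476) is the mean of this
# quantity over the evaluation set, not a function of the aggregate metrics.
loss = -F.logsigmoid(margin)
```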

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 1e-06
  • train_batch_size: 2
  • eval_batch_size: 1
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 4
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 100
  • training_steps: 1000
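
As a rough guide to reproducing this setup, below is a hedged sketch wiring these hyperparameters into trl's DPOTrainer. The dataset file, the beta of 0.5 (inferred from the "05beta" suffix in the model name), and the trainer keyword arguments are assumptions; exact DPOTrainer signatures vary across trl versions:

```python
# Hedged sketch: the listed hyperparameters wired into trl's DPOTrainer.
# Dataset path and beta=0.5 are assumptions (beta inferred from the model name).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "tsavage68/UTI_M2_1000steps_1e7rate_SFT"
model = AutoModelForCausalLM.from_pretrained(base)
ref_model = AutoModelForCausalLM.from_pretrained(base)  # frozen reference copy
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder preference-pair dataset with prompt/chosen/rejected columns.
train_dataset = load_dataset("json", data_files="preference_pairs.json")["train"]

args = TrainingArguments(
    output_dir="UTI2_M2_1000steps_1e6rate_05beta_CSFTDPO",
    learning_rate=1e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,  # yields the total train batch size of 4
    max_steps=1000,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-8 is the default optimizer.
)

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    args=args,
    beta=0.5,  # assumed from the "05beta" in the model name
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```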

Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.5541 | 0.3333 | 25 | 0.5519 | 0.0709 | -2.5498 | 0.2100 | 2.6207 | -14.4735 | -4.4006 | -2.6521 | -2.6515 |
| 0.5545 | 0.6667 | 50 | 0.8852 | -0.2566 | -3.1766 | 0.1800 | 2.9200 | -15.7271 | -5.0556 | -2.6377 | -2.6371 |
| 0.5718 | 1.0 | 75 | 0.7287 | -0.0078 | -6.5265 | 0.2000 | 6.5188 | -22.4270 | -4.5580 | -2.5987 | -2.5978 |
| 2.2289 | 1.3333 | 100 | 0.5476 | 0.0080 | -5.0654 | 0.2100 | 5.0734 | -19.5048 | -4.5265 | -2.6769 | -2.6758 |
| 0.5545 | 1.6667 | 125 | 0.5476 | -0.0935 | -5.6463 | 0.2100 | 5.5528 | -20.6666 | -4.7295 | -2.6679 | -2.6668 |
| 0.5545 | 2.0 | 150 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.5545 | 2.3333 | 175 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.4852 | 2.6667 | 200 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.6412 | 3.0 | 225 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.5545 | 3.3333 | 250 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.5372 | 3.6667 | 275 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.5892 | 4.0 | 300 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.4679 | 4.3333 | 325 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.5718 | 4.6667 | 350 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.5199 | 5.0 | 375 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.5025 | 5.3333 | 400 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.5199 | 5.6667 | 425 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.5025 | 6.0 | 450 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.5718 | 6.3333 | 475 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.5718 | 6.6667 | 500 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.5025 | 7.0 | 525 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.5199 | 7.3333 | 550 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.5372 | 7.6667 | 575 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.5718 | 8.0 | 600 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.5372 | 8.3333 | 625 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.4332 | 8.6667 | 650 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.5372 | 9.0 | 675 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.5025 | 9.3333 | 700 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.5025 | 9.6667 | 725 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.5892 | 10.0 | 750 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.5199 | 10.3333 | 775 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.5199 | 10.6667 | 800 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.5372 | 11.0 | 825 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.5199 | 11.3333 | 850 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.6065 | 11.6667 | 875 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.5718 | 12.0 | 900 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.4159 | 12.3333 | 925 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.6238 | 12.6667 | 950 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.6065 | 13.0 | 975 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |
| 0.5025 | 13.3333 | 1000 | 0.5476 | 0.9941 | -7.4997 | 0.2100 | 8.4937 | -24.3733 | -2.5543 | -2.7955 | -2.7946 |

Framework versions

  • Transformers 4.41.2
  • Pytorch 2.0.0+cu117
  • Datasets 2.19.2
  • Tokenizers 0.19.1
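
A quick way to confirm that a local environment matches these pins:

```python
# Check installed versions against the pins listed above.
import datasets
import tokenizers
import torch
import transformers

print(transformers.__version__)  # expected: 4.41.2
print(torch.__version__)         # expected: 2.0.0+cu117
print(datasets.__version__)      # expected: 2.19.2
print(tokenizers.__version__)    # expected: 0.19.1
```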