---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.1
tags:
  - trl
  - dpo
  - generated_from_trainer
model-index:
  - name: v1_1000_STEPS_1e8_rate_03_beta_DPO
    results: []
---

# v1_1000_STEPS_1e8_rate_03_beta_DPO

This model is a version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) fine-tuned with DPO on an unknown (undocumented) dataset. It achieves the following results on the evaluation set:

- Loss: 0.6933
- Rewards/chosen: -0.0011
- Rewards/rejected: -0.0010
- Rewards/accuracies: 0.4615
- Rewards/margins: -0.0001
- Logps/rejected: -16.8827
- Logps/chosen: -15.2567
- Logits/rejected: -3.3538
- Logits/chosen: -3.3539
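
These column names follow TRL's DPO conventions: each "reward" is the β-scaled log-probability ratio between the policy and the frozen reference model, with β = 0.3 here (per the "03_beta" in the model name). A minimal sketch of how such metrics are computed; the tensor names are illustrative, not taken from the training code:

```python
import torch.nn.functional as F

def dpo_metrics(policy_chosen_logps, policy_rejected_logps,
                ref_chosen_logps, ref_rejected_logps, beta=0.3):
    """Inputs are per-sequence summed log-probs, shape (batch,)."""
    # Implicit DPO rewards: beta-scaled policy/reference log-prob ratios.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)        # Rewards/chosen
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)  # Rewards/rejected
    margins = chosen_rewards - rejected_rewards                             # Rewards/margins
    accuracy = (margins > 0).float().mean()                                 # Rewards/accuracies
    loss = -F.logsigmoid(margins).mean()                                    # DPO loss
    return loss, chosen_rewards.mean(), rejected_rewards.mean(), accuracy
```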

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-08
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
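
For reproducibility, these settings map onto TRL's `DPOTrainer` roughly as follows. This is a hedged sketch, not the original training script: it assumes a TRL version contemporary with Transformers 4.39, where `DPOTrainer` takes `beta` and `tokenizer` directly (newer TRL moves these into a `DPOConfig`), and the dataset placeholders reflect that the card does not document the data.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_name = "mistralai/Mistral-7B-Instruct-v0.1"
model = AutoModelForCausalLM.from_pretrained(model_name)
ref_model = AutoModelForCausalLM.from_pretrained(model_name)  # frozen reference copy
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Placeholders: the card lists the training data as "unknown".
train_dataset = eval_dataset = ...

args = TrainingArguments(
    output_dir="v1_1000_STEPS_1e8_rate_03_beta_DPO",
    learning_rate=1e-8,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,  # 2 x 2 = total train batch size 4
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

trainer = DPOTrainer(
    model,
    ref_model,
    args=args,
    beta=0.3,  # assumed from "03_beta" in the model name
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```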

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6951        | 0.05  | 50   | 0.6937          | -0.0020        | -0.0011          | 0.4681             | -0.0009         | -16.8832       | -15.2599     | -3.3538         | -3.3539       |
| 0.6947        | 0.1   | 100  | 0.6933          | 0.0007         | 0.0008           | 0.4484             | -0.0001         | -16.8769       | -15.2508     | -3.3538         | -3.3538       |
| 0.6922        | 0.15  | 150  | 0.6932          | 0.0005         | 0.0004           | 0.4615             | 0.0001          | -16.8783       | -15.2513     | -3.3537         | -3.3538       |
| 0.6927        | 0.2   | 200  | 0.6937          | -0.0002        | 0.0007           | 0.4527             | -0.0009         | -16.8771       | -15.2536     | -3.3538         | -3.3538       |
| 0.6924        | 0.24  | 250  | 0.6927          | 0.0006         | -0.0005          | 0.4593             | 0.0011          | -16.8811       | -15.2510     | -3.3537         | -3.3538       |
| 0.6916        | 0.29  | 300  | 0.6934          | -0.0007        | -0.0004          | 0.4418             | -0.0003         | -16.8810       | -15.2554     | -3.3538         | -3.3538       |
| 0.6948        | 0.34  | 350  | 0.6932          | 0.0004         | 0.0003           | 0.4637             | 0.0001          | -16.8785       | -15.2516     | -3.3538         | -3.3539       |
| 0.6929        | 0.39  | 400  | 0.6925          | -0.0004        | -0.0018          | 0.4637             | 0.0014          | -16.8855       | -15.2543     | -3.3538         | -3.3538       |
| 0.69          | 0.44  | 450  | 0.6936          | -0.0007        | -0.0000          | 0.4374             | -0.0007         | -16.8796       | -15.2555     | -3.3536         | -3.3537       |
| 0.694         | 0.49  | 500  | 0.6930          | -0.0004        | -0.0008          | 0.4505             | 0.0005          | -16.8823       | -15.2542     | -3.3538         | -3.3538       |
| 0.6895        | 0.54  | 550  | 0.6932          | -0.0009        | -0.0012          | 0.4703             | 0.0002          | -16.8834       | -15.2562     | -3.3537         | -3.3537       |
| 0.6955        | 0.59  | 600  | 0.6930          | 0.0007         | 0.0002           | 0.4747             | 0.0004          | -16.8788       | -15.2509     | -3.3538         | -3.3539       |
| 0.6903        | 0.64  | 650  | 0.6934          | -0.0005        | -0.0003          | 0.4593             | -0.0002         | -16.8804       | -15.2548     | -3.3537         | -3.3538       |
| 0.6904        | 0.68  | 700  | 0.6934          | -0.0004        | -0.0001          | 0.4549             | -0.0003         | -16.8800       | -15.2544     | -3.3538         | -3.3538       |
| 0.6921        | 0.73  | 750  | 0.6930          | -0.0008        | -0.0013          | 0.4703             | 0.0004          | -16.8838       | -15.2558     | -3.3538         | -3.3539       |
| 0.6945        | 0.78  | 800  | 0.6930          | -0.0003        | -0.0008          | 0.4813             | 0.0005          | -16.8823       | -15.2540     | -3.3538         | -3.3539       |
| 0.6915        | 0.83  | 850  | 0.6939          | -0.0016        | -0.0003          | 0.4484             | -0.0014         | -16.8804       | -15.2585     | -3.3538         | -3.3539       |
| 0.6903        | 0.88  | 900  | 0.6933          | -0.0011        | -0.0010          | 0.4615             | -0.0001         | -16.8827       | -15.2567     | -3.3538         | -3.3539       |
| 0.6971        | 0.93  | 950  | 0.6933          | -0.0011        | -0.0010          | 0.4615             | -0.0001         | -16.8827       | -15.2567     | -3.3538         | -3.3539       |
| 0.6939        | 0.98  | 1000 | 0.6933          | -0.0011        | -0.0010          | 0.4615             | -0.0001         | -16.8827       | -15.2567     | -3.3538         | -3.3539       |
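
For context: a policy whose chosen/rejected log-probabilities have not moved away from the reference model has zero reward margin, so its DPO loss is -log σ(0) = log 2 ≈ 0.6931. The validation loss above hovers at that baseline for all 1000 steps, consistent with the very small learning rate of 1e-08. A quick check:

```python
import math

# DPO loss at zero reward margin: -log(sigmoid(0)) = log(2)
print(-math.log(1 / (1 + math.exp(-0.0))))  # 0.6931...
print(math.log(2))                          # 0.6931...
```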

### Framework versions

- Transformers 4.39.1
- Pytorch 2.0.0+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2