---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
  - trl
  - dpo
  - generated_from_trainer
model-index:
  - name: zephyr-7b-dpo-full-gpt-reward-scale-05
    results: []
---

# zephyr-7b-dpo-full-gpt-reward-scale-05

This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on an unspecified dataset. It achieves the following results on the evaluation set (how the reward metrics are defined is sketched after the list):

- Loss: 0.5099
- Rewards/chosen: -2.9128
- Rewards/rejected: -4.2868
- Rewards/accuracies: 0.7328
- Rewards/margins: 1.3740
- Logps/rejected: -674.3233
- Logps/chosen: -575.2830
- Logits/rejected: 2.9146
- Logits/chosen: 0.8634
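
For readers unfamiliar with these metrics: in DPO, the "reward" is the β-scaled log-probability ratio between the trained policy and the frozen SFT reference, and the loss is the negative log-sigmoid of the chosen-versus-rejected margin. This is the standard DPO objective (Rafailov et al., 2023), restated here for context rather than taken from this card:

```latex
% Standard DPO quantities; \pi_\theta is the policy, \pi_{\mathrm{ref}}
% the frozen SFT reference model, \beta the DPO temperature.
r_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}

\mathcal{L}_{\mathrm{DPO}}
  = -\,\mathbb{E}_{(x,\, y_w,\, y_l)}
      \left[ \log \sigma\bigl( r_\theta(x, y_w) - r_\theta(x, y_l) \bigr) \right]
```

Under this reading, `Rewards/chosen` and `Rewards/rejected` are this reward averaged over the chosen and rejected eval completions, `Rewards/margins` is their difference, and `Rewards/accuracies` is the fraction of pairs where the chosen reward exceeds the rejected one.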

## Model description

More information needed

## Intended uses & limitations

More information needed
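
Pending proper documentation, the sketch below shows one plausible way to query the model with the `transformers` chat pipeline. The repo id is an assumption pieced together from the uploader and model name, and the sampling settings are illustrative, not recommendations from this card:

```python
# Minimal inference sketch; the repo id is assumed, not confirmed by the card.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="sfulay/zephyr-7b-dpo-full-gpt-reward-scale-05",  # assumed repo id
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Zephyr checkpoints are chat-tuned; passing a message list makes the
# pipeline apply the tokenizer's chat template automatically.
messages = [{"role": "user", "content": "Explain DPO in one paragraph."}]
out = pipe(messages, max_new_tokens=256, do_sample=True, temperature=0.7)
print(out[0]["generated_text"][-1]["content"])  # last turn is the model reply
```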

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a rough TRL mapping is sketched after the list):

- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 55
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
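
As a hedged reconstruction only, these settings map onto a TRL `DPOConfig` roughly as shown below. The dataset pipeline, reference-model handling, and the 8-GPU `accelerate` launch are omitted; `bf16` and `beta=0.05` are guesses (the latter suggested by the `reward-scale-05` suffix in the model name), not values stated on this card:

```python
# Hedged reconstruction of the training configuration; not the original script.
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "alignment-handbook/zephyr-7b-sft-full"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

args = DPOConfig(
    output_dir="zephyr-7b-dpo-full-gpt-reward-scale-05",
    learning_rate=5e-7,
    per_device_train_batch_size=8,   # train_batch_size above
    per_device_eval_batch_size=8,    # eval_batch_size above
    gradient_accumulation_steps=2,   # 8 GPUs x 8 per device x 2 steps = 128
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=55,
    bf16=True,   # assumption; common for this recipe, not stated on the card
    beta=0.05,   # assumption inferred from the model-name suffix
)

# trainer = DPOTrainer(model, args=args, train_dataset=...,
#                      eval_dataset=..., tokenizer=tokenizer)
# trainer.train()
```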

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6688        | 0.1147 | 50   | 0.6561          | -0.0264        | -0.1297          | 0.6767             | 0.1033          | -258.6208      | -286.6466    | -2.5069         | -2.6066       |
| 0.5822        | 0.2294 | 100  | 0.5813          | -0.9069        | -1.4729          | 0.6724             | 0.5660          | -392.9425      | -374.6986    | 0.4795          | -0.0936       |
| 0.5512        | 0.3440 | 150  | 0.5534          | -0.9841        | -1.8057          | 0.7284             | 0.8216          | -426.2166      | -382.4128    | 2.9841          | 1.9098        |
| 0.5364        | 0.4587 | 200  | 0.5368          | -1.4185        | -2.3616          | 0.7328             | 0.9430          | -481.8062      | -425.8600    | 3.8766          | 2.4303        |
| 0.5308        | 0.5734 | 250  | 0.5235          | -2.3552        | -3.5698          | 0.7284             | 1.2145          | -602.6266      | -519.5305    | 4.2409          | 2.4548        |
| 0.532         | 0.6881 | 300  | 0.5117          | -2.5133        | -3.7138          | 0.7198             | 1.2004          | -617.0262      | -535.3408    | 3.0154          | 1.1080        |
| 0.5064        | 0.8028 | 350  | 0.5117          | -2.6361        | -3.9100          | 0.7241             | 1.2739          | -636.6490      | -547.6198    | 2.6854          | 0.7127        |
| 0.5105        | 0.9174 | 400  | 0.5099          | -2.9128        | -4.2868          | 0.7328             | 1.3740          | -674.3233      | -575.2830    | 2.9146          | 0.8634        |

### Framework versions

- Transformers 4.44.0.dev0
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1