---
base_model: seanmemery/MLP-FinLLM-7b-it
tags:
  - trl
  - dpo
  - unsloth
  - generated_from_trainer
model-index:
  - name: MLP-FinLLM-dpo-7b
    results: []
---

# MLP-FinLLM-dpo-7b

This model is a fine-tuned version of [seanmemery/MLP-FinLLM-7b-it](https://huggingface.co/seanmemery/MLP-FinLLM-7b-it) on an unspecified dataset. It achieves the following results on the evaluation set (the DPO quantities behind these metrics are sketched after the list):

- Loss: 11.2447
- Rewards/chosen: -124.0
- Rewards/rejected: -119.0
- Rewards/accuracies: 0.5207
- Rewards/margins: -5.125
- Logps/rejected: -1216.0
- Logps/chosen: -1280.0
- Logits/rejected: -5.5625
- Logits/chosen: -5.5625
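
These columns follow the usual trl DPO bookkeeping (this is the standard formulation, not something recorded in this card, and the DPO beta used here is unknown): each completion's implicit reward is beta times the policy-to-reference log-probability ratio, and the margin is the chosen reward minus the rejected reward.

```latex
% Standard DPO implicit reward and loss; \beta is the DPO temperature and
% \pi_{\mathrm{ref}} the frozen reference model (assumed, not recorded here).
r_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)},
\qquad
\mathcal{L}_{\mathrm{DPO}} = -\log \sigma\!\left( r_\theta(x, y_w) - r_\theta(x, y_l) \right)
```

Consistent with that, Rewards/margins ≈ Rewards/chosen - Rewards/rejected: -124.0 - (-119.0) = -5.0, matching the reported -5.125 up to the low-precision rounding evident throughout these logs. A negative mean margin means the model still assigns higher implicit reward to the rejected completions on average.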

## Model description

More information needed

## Intended uses & limitations

More information needed
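
Pending author documentation, here is a minimal loading sketch using the standard transformers API. The prompt below is hypothetical; the model's expected prompt or chat format is not recorded in this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "seanmemery/MLP-FinLLM-dpo-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical finance question; the intended prompt format is undocumented.
inputs = tokenizer("What is a covered call?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```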

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged trainer sketch follows the list):

- learning_rate: 0.003
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 2
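
A sketch of how these values could map onto a trl `DPOTrainer` run. This is a reconstruction, not the author's script: the preference dataset, the DPO beta, and the unsloth wiring are all unrecorded, so the dataset path and eval split below are assumptions (the Adam betas/epsilon listed above are the `TrainingArguments` defaults, and the eval cadence is inferred from the step-50/100 rows in the results table).

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "seanmemery/MLP-FinLLM-7b-it"
model = AutoModelForCausalLM.from_pretrained(base)
ref_model = AutoModelForCausalLM.from_pretrained(base)  # frozen DPO reference
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder: the actual preference data is undocumented. DPOTrainer
# expects "prompt", "chosen", and "rejected" columns.
dataset = load_dataset("json", data_files="preferences.jsonl", split="train")
splits = dataset.train_test_split(test_size=0.05, seed=42)

args = TrainingArguments(
    output_dir="MLP-FinLLM-dpo-7b",
    learning_rate=3e-3,            # 0.003, as listed above
    per_device_train_batch_size=32,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="cosine",
    num_train_epochs=2,
    evaluation_strategy="steps",
    eval_steps=50,                 # matches the eval rows at steps 50 and 100
)

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    args=args,
    train_dataset=splits["train"],
    eval_dataset=splits["test"],
    tokenizer=tokenizer,
)
trainer.train()
```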

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 27.875        | 0.69  | 50   | 33.2248         | -154.0         | -137.0           | 0.5041             | -16.625         | -1392.0        | -1576.0      | -7.9062         | -7.9062       |
| 14.375        | 1.39  | 100  | 11.2447         | -124.0         | -119.0           | 0.5207             | -5.125          | -1216.0        | -1280.0      | -5.5625         | -5.5625       |

### Framework versions

- Transformers 4.38.2
- PyTorch 2.2.0
- Datasets 2.18.0
- Tokenizers 0.15.2