sfulay/zephyr-7b-dpo-full-gpt-reward-scale-05
Tags: Safetensors · mistral · trl · dpo · alignment-handbook · Generated from Trainer
License: apache-2.0
Commit History
Training in progress, step 436 · 7b08247 · verified · sfulay committed on Sep 3, 2024
Training in progress, step 400 · 03f77ed · verified · sfulay committed on Sep 3, 2024
Training in progress, step 300 · 3f979a2 · verified · sfulay committed on Sep 2, 2024
Training in progress, step 200 · 89a8449 · verified · sfulay committed on Sep 2, 2024
Training in progress, step 100 · b54d083 · verified · sfulay committed on Sep 2, 2024
initial commit · deeb6d9 · verified · sfulay committed on Sep 2, 2024