RedaAlami/zephyr-7b-gemma-dpo
Tags: PEFT · TensorBoard · Safetensors · gemma · alignment-handbook · trl · dpo · Generated from Trainer · 4-bit precision · bitsandbytes
Dataset: RedaAlami/PKU-SafeRLHF-Processed
License: other
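
The tags above indicate a PEFT adapter trained with TRL's DPO trainer and intended to be loaded in 4-bit precision via bitsandbytes. Below is a minimal loading sketch; it assumes the adapter config in the repo points at the correct Gemma base model and that a tokenizer is included alongside the adapter (neither is confirmed by this page), so treat it as illustrative rather than the author's published usage snippet.

```python
import torch
from transformers import AutoTokenizer, BitsAndBytesConfig
from peft import AutoPeftModelForCausalLM

# 4-bit quantization settings mirroring the "4-bit precision" / "bitsandbytes" tags.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

repo_id = "RedaAlami/zephyr-7b-gemma-dpo"

# AutoPeftModelForCausalLM reads the base model recorded in the adapter config,
# loads it with the quantization config, and applies the PEFT weights on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    repo_id,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo_id)  # assumes the repo ships a tokenizer

prompt = "How can I safely dispose of old batteries?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```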
Files and versions (branch: main) · zephyr-7b-gemma-dpo/runs
1 contributor · History: 4 commits
Latest commit: RedaAlami · End of training · 3a9c5a4 (verified) · 5 months ago

Jul31_11-58-27_ip-172-16-2-184.us-west-2.compute.internal    End of training    5 months ago
Jul31_17-29-05_ip-172-16-2-184.us-west-2.compute.internal    End of training    5 months ago
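
The runs/ directory holds the TensorBoard event logs written by the Trainer for the two DPO runs listed above. One way to pull just those logs locally is sketched below; it assumes the huggingface_hub and tensorboard packages are installed, and the allow_patterns filter is an illustrative choice rather than something stated on this page.

```python
from huggingface_hub import snapshot_download

# Download only the TensorBoard run directories from the repo.
local_dir = snapshot_download(
    repo_id="RedaAlami/zephyr-7b-gemma-dpo",
    allow_patterns=["runs/*"],
)

# Then point TensorBoard at the downloaded logs, e.g.:
#   tensorboard --logdir <local_dir>/runs
print(f"TensorBoard logs downloaded to: {local_dir}/runs")
```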