---
license: mit
datasets:
  - HuggingFaceH4/ultrafeedback_binarized
base_model:
  - meta-llama/Llama-3.1-8B
---

This is an aligned model based on princeton-nlp/Llama-3-Base-8B-SFT. It was aligned on the UltraFeedback dataset by fine-tuning with the Simple Preference Optimization (SimPO) loss for a single epoch.
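As a rough illustration of the objective used here (not this repository's training code), the SimPO loss scores each preference pair by the length-normalized log-probabilities of the chosen and rejected responses, with a scaling factor `beta` and a target reward margin `gamma`; the values below are illustrative defaults, not the hyperparameters used for this model:

```python
import math

def simpo_loss(logp_chosen, len_chosen, logp_rejected, len_rejected,
               beta=2.0, gamma=1.0):
    """Sketch of the SimPO loss for one preference pair.

    logp_* are summed token log-probabilities of the full responses;
    len_* are their token lengths. beta and gamma are hypothetical
    example values, not the settings used to train this model.
    """
    # Length-normalized average log-probability acts as the implicit reward.
    reward_chosen = beta * logp_chosen / len_chosen
    reward_rejected = beta * logp_rejected / len_rejected
    # Bradley-Terry style objective with a target reward margin gamma:
    # loss = -log(sigmoid(reward_chosen - reward_rejected - gamma))
    margin = reward_chosen - reward_rejected - gamma
    return math.log1p(math.exp(-margin))

# The loss shrinks as the chosen response becomes more likely than the
# rejected one by at least the margin gamma.
```

Unlike DPO, this objective needs no reference model: the length-normalized log-probability of the policy itself serves as the reward.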