QuantFactory/Llama-3-Instruct-8B-DPO-GGUF
This is a quantized (GGUF) version of princeton-nlp/Llama-3-Instruct-8B-DPO, created using llama.cpp.
Model Description
This model was released with the preprint *SimPO: Simple Preference Optimization with a Reference-Free Reward*. Please refer to our repository for more details.
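Since this repository ships GGUF quantizations, the files can be run locally with llama.cpp. A minimal sketch, assuming a Q4_K_M quantization exists in the repo (the exact filename is hypothetical and may differ) and that the `llama-cli` binary has been built from the llama.cpp source tree:

```shell
# Download one quantized file from the repo
# (filename is illustrative; check the repo's file list for the actual names)
huggingface-cli download QuantFactory/Llama-3-Instruct-8B-DPO-GGUF \
  Llama-3-Instruct-8B-DPO.Q4_K_M.gguf --local-dir .

# Run inference with llama.cpp's CLI (-m model path, -p prompt, -n max tokens)
./llama-cli -m Llama-3-Instruct-8B-DPO.Q4_K_M.gguf \
  -p "Explain preference optimization in one sentence." -n 128
```

Lower-bit quantizations (e.g. Q4) trade some output quality for a smaller memory footprint; higher-bit variants such as Q8 stay closer to the original model.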