Mistral-AI Collection
Quantized versions of models by mistralai · 19 items
This is a quantized version of princeton-nlp/Mistral-7B-Instruct-DPO, created using llama.cpp.
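As a rough sketch of how a llama.cpp quantization like this one is typically produced, the steps below convert a Hugging Face checkpoint to GGUF, quantize it, and run it. The file names and the Q4_K_M quantization type are illustrative assumptions, not details taken from this card:

```shell
# Convert the original Hugging Face checkpoint to a full-precision GGUF file
# (paths and output names here are hypothetical)
python convert_hf_to_gguf.py ./Mistral-7B-Instruct-DPO \
    --outfile mistral-7b-instruct-dpo-f16.gguf

# Quantize the f16 GGUF to a smaller format; Q4_K_M is one common choice
./llama-quantize mistral-7b-instruct-dpo-f16.gguf \
    mistral-7b-instruct-dpo-Q4_K_M.gguf Q4_K_M

# Run the quantized model with llama.cpp's CLI
./llama-cli -m mistral-7b-instruct-dpo-Q4_K_M.gguf -p "Hello" -n 64
```

The quantized GGUF file is what gets uploaded to a repository like this one; smaller quantization types trade accuracy for lower memory use.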
This model was released with the preprint "SimPO: Simple Preference Optimization with a Reference-Free Reward". Please refer to our repository for more details.
Base model
princeton-nlp/Mistral-7B-Instruct-DPO