Active filters: dpo
NicholasCorrado/zephyr-7b-uf-rlced-conifer-1e2e-group-dpo-2e • Text Generation • 11 downloads
nomadrp/tq-llama3.1-sent-shlfd-gt3
QuantFactory/Lama-DPOlphin-8B-GGUF • Text Generation • 182 downloads • 2 likes
LBK95/Llama-2-7b-hf-DPO-LookAhead5_FullEval_TTree1.4_TLoop0.7_TEval0.2_V1.0
Wenboz/zephyr-7b-wpo-lora
YYYYYYibo/gshf_ours_1_iter_2
Magpie-Align/MagpieLM-4B-Chat-v0.1 • Text Generation • 229 downloads • 18 likes
Triangle104/NeuralDaredevil-8B-abliterated-Q4_K_M-GGUF
Triangle104/NeuralDaredevil-8B-abliterated-Q4_0-GGUF
Triangle104/NeuralDaredevil-8B-abliterated-Q4_K_S-GGUF
YYYYYYibo/gshf_ours_1_iter_3
lewtun/dpo-model-lora
CharlesLi/OpenELM-1_1B-DPO-full-max-min-reward • Text Generation • 7 downloads
CharlesLi/OpenELM-1_1B-DPO-full-max-random-reward • Text Generation • 7 downloads
CharlesLi/OpenELM-1_1B-DPO-full-least-similar • Text Generation • 11 downloads
taicheng/zephyr-7b-dpo-qlora
CharlesLi/OpenELM-1_1B-DPO-full-max-reward-least-similar • Text Generation • 8 downloads
dmariko/SmolLM-360M-Instruct-dpo-15k
QinLiuNLP/llama3-sudo-dpo-instruct-5epochs-0909 • 10 downloads
CharlesLi/OpenELM-1_1B-DPO-full-max-reward-most-similar • Text Generation • 8 downloads
CharlesLi/OpenELM-1_1B-DPO-full-most-similar • Text Generation • 9 downloads
DUAL-GPO/phi-2-dpo-chatml-lora-i1
CharlesLi/OpenELM-1_1B-DPO-full-max-second-reward • Text Generation • 5 downloads
CharlesLi/OpenELM-1_1B-DPO-full-random-pair • Text Generation • 10 downloads
Wenboz/zephyr-7b-dpo-lora
DUAL-GPO/phi-2-dpo-chatml-lora-10k-30k-i1
DUAL-GPO/phi-2-dpo-chatml-lora-20k-40k-i1
LBK95/Llama-2-7b-hf-DPO-LookAhead5_FullEval_TTree1.4_TLoop0.7_TEval0.2_V2.0
Wenboz/llama3-dpo-lora
DUAL-GPO/phi-2-dpo-chatml-lora-40k-60k-i1
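The listing above is the Hub's web view of repositories matching the dpo tag. A minimal sketch of pulling a similar listing programmatically with the huggingface_hub client follows; the "dpo" filter string comes from the active filter above, while the sort key and limit are illustrative choices, not values taken from this page.

```python
# Sketch: query the Hugging Face Hub for models matching the "dpo" tag,
# roughly mirroring the filtered web listing above.
# Assumptions: sort order and limit are arbitrary here, not from the page.
from huggingface_hub import HfApi

api = HfApi()
for model in api.list_models(filter="dpo", sort="downloads", direction=-1, limit=30):
    # pipeline_tag is None for repos without a task tag (e.g. bare LoRA adapters);
    # downloads and likes correspond to the counts shown in the web listing.
    task = model.pipeline_tag or "no task tag"
    print(f"{model.id} • {task} • {model.downloads} downloads • {model.likes} likes")
```

Swapping the filter string for any other tag (or passing a list of tags) reproduces the same kind of listing for a different slice of the Hub.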