# voxmenthe/SFR-Iterative-DPO-LLaMA-3-8B-R-unquantized

The model `voxmenthe/SFR-Iterative-DPO-LLaMA-3-8B-R-unquantized` was converted to MLX format from `Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R` using mlx-lm version 0.13.0.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("voxmenthe/SFR-Iterative-DPO-LLaMA-3-8B-R-unquantized")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
Safetensors: 8.03B params, FP16