Model Card for Phi3-mini-DPO-Tuned (4-bit Quantized)

A 4-bit, double-quantized version of ernestoBocini/Phi3-mini-DPO-Tuned.

Model Details

This is Phi-3-mini-4k-instruct fine-tuned with SFT and DPO on STEM-domain data and then quantized to 4-bit precision, intended to serve as an AI university tutor.

Quantization config used:

import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 weights with double quantization; computations run in bfloat16
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
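
To load the model locally with this configuration, a minimal sketch continuing from the snippet above: it pulls the full-precision base checkpoint and quantizes it on the fly, so substitute this card's own repo ID if you want the published 4-bit weights instead.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Base checkpoint; quantized on load using the bnb_config defined above.
model_id = "ernestoBocini/Phi3-mini-DPO-Tuned"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)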
Model size: 2.07B parameters (Safetensors)
Tensor types: F32, FP16, U8
Inference Providers

This model is not currently available via any of the supported third-party Inference Providers, and it is not deployed on the HF Inference API.
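
Because the model is not served through an Inference Provider or the HF Inference API, inference has to run locally. A minimal generation sketch continuing from the loading example above; the system prompt, user question, and decoding settings are illustrative only.

# Build a chat-formatted prompt and generate a reply locally.
messages = [
    {"role": "system", "content": "You are a helpful university STEM tutor."},
    {"role": "user", "content": "Explain the chain rule with a short example."},
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))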