GGUF versions of the following model: https://huggingface.co/mridul3301/BioMistral-7B-finetuned

Three quantization formats:
- fp8
- fp16
- fp32
Converted the safetensors weights to GGUF for CPU inference using llama_cpp.
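A minimal sketch of the conversion and inference workflow, assuming a local clone of the llama.cpp repository and that the safetensors checkpoint has been downloaded to `BioMistral-7B-finetuned/`; the output filename and parameter values are illustrative, not the exact ones used here:

```shell
# Download the safetensors checkpoint (assumes huggingface_hub is installed)
huggingface-cli download mridul3301/BioMistral-7B-finetuned \
    --local-dir BioMistral-7B-finetuned

# Convert safetensors -> GGUF with llama.cpp's conversion script
# (--outtype f16 shown; f32 is also supported)
python llama.cpp/convert_hf_to_gguf.py BioMistral-7B-finetuned \
    --outfile biomistral-7b-finetuned-f16.gguf \
    --outtype f16

# Run CPU inference with the llama.cpp CLI
llama-cli -m biomistral-7b-finetuned-f16.gguf \
    -p "What are the symptoms of anemia?" -n 256
```

The same GGUF file can also be loaded from Python via the llama-cpp-python binding (`from llama_cpp import Llama; Llama(model_path=...)`).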