The Quantized Mistral-Small-Instruct-2409 Model

Original Base Model: mistralai/Mistral-Small-Instruct-2409
Link: https://huggingface.co/mistralai/Mistral-Small-Instruct-2409

Quantization Configuration

The model was quantized to 4-bit with GPTQ using the following settings:

{
  "bits": 4,
  "group_size": 128,
  "damp_percent": 0.1,
  "desc_act": true,
  "static_groups": false,
  "sym": true,
  "true_sequential": true,
  "model_name_or_path": null,
  "model_file_base_name": null
}
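
For reference, below is a minimal sketch of how these settings map onto Hugging Face transformers' GPTQConfig when reproducing the quantization. The calibration dataset ("c4") and the output directory name are illustrative assumptions, not taken from the original pipeline; the actual quantization script is linked under Source Code below.

from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

base_model = "mistralai/Mistral-Small-Instruct-2409"
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Mirror the configuration shown above; "c4" is an assumed calibration dataset.
gptq_config = GPTQConfig(
    bits=4,
    group_size=128,
    damp_percent=0.1,
    desc_act=True,
    sym=True,
    true_sequential=True,
    dataset="c4",
    tokenizer=tokenizer,
)

# Quantization runs while the base model is loaded; this needs enough GPU memory
# to hold the full-precision weights of the 22B base model during calibration.
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=gptq_config,
    device_map="auto",
)

model.save_pretrained("Mistral-Small-Instruct-2409-GPTQ")
tokenizer.save_pretrained("Mistral-Small-Instruct-2409-GPTQ")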

Source Code

The quantization source code is available at https://github.com/vkola-lab/medpodgpt/tree/main/quantization.
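
For completeness, here is a minimal inference sketch. It assumes the quantized weights are hosted under the repository id shuyuej/Mistral-Small-Instruct-2409-GPTQ and that a GPTQ-capable backend (for example, auto-gptq or gptqmodel) is installed alongside transformers; the prompt is purely illustrative.

from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "shuyuej/Mistral-Small-Instruct-2409-GPTQ"  # assumed quantized-model repository
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# The packed 4-bit weights are handled by the installed GPTQ kernel at load time.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

messages = [{"role": "user", "content": "Explain what 4-bit GPTQ quantization does."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))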
