The Quantized Ministral 8B Instruct 2410 Model

Original Base Model: mistralai/Ministral-8B-Instruct-2410.
Link: https://huggingface.co/mistralai/Ministral-8B-Instruct-2410

Quantization Configuration

"quantization_config": {
    "bits": 4,
    "checkpoint_format": "gptq",
    "damp_percent": 0.01,
    "desc_act": true,
    "group_size": 128,
    "model_file_base_name": null,
    "model_name_or_path": null,
    "quant_method": "gptq",
    "static_groups": false,
    "sym": true,
    "true_sequential": true
  },
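Because these GPTQ parameters are stored in the checkpoint's config.json, the quantized model can be loaded directly with transformers (with the optimum and auto-gptq packages installed). Below is a minimal loading sketch; the repo id is a placeholder and should be replaced with this model's actual Hugging Face id.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/Ministral-8B-Instruct-2410-GPTQ"  # placeholder: use this repo's actual id

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The quantization_config shown above is read from the checkpoint's
# config.json, so no quantization arguments are needed at load time.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # dispatch layers across available GPUs
    torch_dtype=torch.float16,  # matches the FP16 scales stored in the checkpoint
)

messages = [{"role": "user", "content": "Summarize GPTQ quantization in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```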

Source Code

The quantization source code is available at https://github.com/vkola-lab/medpodgpt/tree/main/quantization.
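For orientation, the sketch below shows how the configuration above maps onto the transformers GPTQConfig API. It is an assumption-laden illustration, not the repository's actual script: the calibration dataset ("c4") and the output directory name are placeholders chosen here, and the linked repository may implement quantization differently.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

base_id = "mistralai/Ministral-8B-Instruct-2410"
tokenizer = AutoTokenizer.from_pretrained(base_id)

gptq_config = GPTQConfig(
    bits=4,                # "bits": 4
    group_size=128,        # "group_size": 128
    damp_percent=0.01,     # "damp_percent": 0.01
    desc_act=True,         # "desc_act": true (activation-order quantization)
    sym=True,              # "sym": true (symmetric quantization)
    true_sequential=True,  # "true_sequential": true
    dataset="c4",          # calibration corpus; an assumption, not from the card
    tokenizer=tokenizer,
)

# With a GPTQConfig passed in, quantization runs during from_pretrained.
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=gptq_config,
    device_map="auto",
)
model.save_pretrained("Ministral-8B-Instruct-2410-GPTQ")   # placeholder output path
tokenizer.save_pretrained("Ministral-8B-Instruct-2410-GPTQ")
```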
