---
base_model:
- MaziyarPanahi/calme-3.2-instruct-78b
---
EXL2 Quantizations of calme-3.2-instruct-78b

Bits: 6.5 bpw
Using exllamav2 release 0.2.6 for quantization.
Original model: https://huggingface.co/MaziyarPanahi/calme-3.2-instruct-78b
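
Below is a minimal loading/generation sketch using the exllamav2 Python API. The local directory path, the repo layout it implies, and the generation settings are placeholders rather than part of this repo; adjust them to wherever you download the quant, and pass `paged=False` to the generator if flash-attn is not installed.

```python
# Sketch: load an EXL2 quant with exllamav2 (paths and settings are examples only).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

model_dir = "/path/to/calme-3.2-instruct-78b-exl2-6.5bpw"  # hypothetical local directory

config = ExLlamaV2Config(model_dir)       # reads config.json, including quantization_config
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate the KV cache as layers are loaded
model.load_autosplit(cache)               # split the 78B weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Hello, my name is", max_new_tokens=64))
```
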
"quantization_config": {
"quant_method": "exl2",
"version": "0.2.6",
"bits": 6.5,
"head_bits": 8,
"calibration": {
"rows": 115,
"length": 2048,
"dataset": "(default)"
}
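
The block above is presumably copied from the quant's config.json, where the exllamav2 converter records its settings. As a small sanity check after downloading, you can read it back locally; the directory path below is a placeholder.

```python
import json
from pathlib import Path

model_dir = Path("/path/to/calme-3.2-instruct-78b-exl2-6.5bpw")  # hypothetical local directory

# The converter stores its settings under "quantization_config" in config.json.
qc = json.loads((model_dir / "config.json").read_text())["quantization_config"]
print(qc["quant_method"], qc["bits"], qc["head_bits"])  # expected: exl2 6.5 8
```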