KafkaLM-8x7B-German-V0.1-AWQ / quant_config.json
{
  "zero_point": true,
  "q_group_size": 128,
  "w_bit": 4,
  "version": "GEMM",
  "modules_to_not_convert": [
    "gate"
  ]
}
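
These fields follow the AutoAWQ quantization config format: 4-bit weights ("w_bit": 4), one quantization scale per group of 128 weights ("q_group_size": 128), asymmetric quantization with zero points ("zero_point": true), the GEMM kernel variant, and the MoE router "gate" modules left unquantized. Below is a minimal sketch of how a config like this is typically passed to AutoAWQ when producing the quantized checkpoint; the model paths are placeholders, not taken from this repo, and the exact API may differ between AutoAWQ versions.

# Minimal sketch: creating an AWQ checkpoint with a config matching quant_config.json.
# Paths are placeholders; the AutoAWQ calls shown are the commonly documented ones
# and may differ between library versions.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "path/to/KafkaLM-8x7B-German-V0.1"   # placeholder: unquantized base model
quant_path = "KafkaLM-8x7B-German-V0.1-AWQ"       # placeholder: output directory

quant_config = {
    "zero_point": True,                   # asymmetric quantization with zero points
    "q_group_size": 128,                  # one scale per block of 128 weights
    "w_bit": 4,                           # 4-bit weight quantization
    "version": "GEMM",                    # GEMM kernel variant
    "modules_to_not_convert": ["gate"],   # keep MoE router gates in full precision
}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)

Excluding the "gate" modules is a common choice for Mixtral-style mixture-of-experts models, since the small router layers are sensitive to quantization error and contribute little to overall model size.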