---
license: apache-2.0
tags:
- gemma 2
- gptq
- 4bit
- gptqmodel
---
This model has been quantized using [GPTQModel](https://github.com/ModelCloud/GPTQModel). The quantization parameters are listed below, followed by a short loading sketch.
- **bits**: 4
- **group_size**: 128
- **desc_act**: false
- **static_groups**: false
- **sym**: true
- **lm_head**: false
- **damp_percent**: 0.01
- **true_sequential**: true
- **model_name_or_path**:
- **model_file_base_name**: model
- **quant_method**: gptq
- **checkpoint_format**: gptq
- **meta**:
  - **quantizer**: gptqmodel:0.9.2
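
Below is a minimal loading sketch, not an official usage guide. The repo id is hypothetical (substitute the actual model id), and a GPTQ backend is assumed to be installed (`optimum` with `auto-gptq`, or the `gptqmodel` package on newer Transformers versions) so that the 4-bit weights can be dequantized at inference time.

```python
# Minimal loading sketch. The repo id below is hypothetical -- replace it with
# the real Hugging Face model id. Assumes a GPTQ backend is installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/gemma-2-gptq-4bit"  # hypothetical; use the real repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The GPTQ settings listed above are stored in the checkpoint's config.json,
# so from_pretrained picks them up automatically; no extra arguments needed.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain GPTQ quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```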