---
library_name: peft
---
## Training procedure
The following GPTQ quantization config was used during training (a sketch of the equivalent `transformers.GPTQConfig` follows the list):
- bits: 4
- checkpoint_format: gptq
- desc_act: True
- dynamic: None
- group_size: 128
- lm_head: False
- meta:
  - damp_auto_increment: 0.0015
  - damp_percent: 0.01
  - quantizer: ['gptqmodel:1.4.0-dev']
  - static_groups: False
  - true_sequential: True
  - uri: https://github.com/modelcloud/gptqmodel
- quant_method: gptq
- sym: True
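For reference, here is a minimal sketch of these settings expressed as a `transformers.GPTQConfig`. The mapping is an assumption: `checkpoint_format`, `lm_head`, `dynamic`, and the `meta` block are gptqmodel-specific and have no direct `GPTQConfig` counterpart.

```python
from transformers import GPTQConfig

# Sketch only: mirrors this card's GPTQ settings via transformers' GPTQConfig.
quantization_config = GPTQConfig(
    bits=4,                # 4-bit weight quantization
    group_size=128,        # weights quantized in groups of 128
    desc_act=True,         # activation-order ("act-order") quantization
    sym=True,              # symmetric quantization
    damp_percent=0.01,     # Hessian dampening fraction (from the meta block)
    true_sequential=True,  # quantize transformer blocks one at a time
)
```

When quantizing from scratch, `GPTQConfig` additionally needs a `tokenizer` and a calibration `dataset`; when loading an already-quantized checkpoint, the config stored in the repository is used instead.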
### Framework versions
- PEFT 0.5.0
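As a usage sketch, the adapter can be loaded on top of its GPTQ-quantized base model with PEFT. Both repository ids below are placeholders, since neither is named in this card.

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Placeholder ids: substitute the actual quantized base model and adapter repos.
base_model = AutoModelForCausalLM.from_pretrained(
    "org/base-model-gptq",  # already-quantized GPTQ base checkpoint
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "org/this-peft-adapter")
model.eval()  # inference mode
```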