4-bit quant of the LLaMA part of LLaVA: https://github.com/haotian-liu/LLaVA / https://huggingface.co/liuhaotian/LLaVA-7b-delta-v0

Quantized with:
```
CUDA_VISIBLE_DEVICES=0 python llama.py /workspace/LLaVA-7B-v0/ c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors llava-7b-v0-4bit-128g.safetensors
```

using the CUDA branch of https://github.com/oobabooga/GPTQ-for-LLaMa (commit `57a2629`)
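The command above writes the quantized weights into a `.safetensors` container. As a quick sanity check, the file's JSON header (tensor names, dtypes, shapes) can be read with the standard library alone, no torch needed. A minimal sketch, using a tiny stand-in file rather than the real `llava-7b-v0-4bit-128g.safetensors` checkpoint:

```python
# Sketch of the .safetensors container layout: an 8-byte little-endian header
# length, a JSON header mapping tensor names to dtype/shape/offsets, then the
# raw tensor bytes. "dummy.safetensors" below is a stand-in for the real
# quantized checkpoint.
import json
import struct

def read_safetensors_header(path):
    """Return the JSON header of a .safetensors file without loading weights."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_len))

# Build a tiny stand-in file so the sketch runs end to end.
header = {"qweight": {"dtype": "I32", "shape": [2, 2], "data_offsets": [0, 16]}}
header_bytes = json.dumps(header).encode("utf-8")
with open("dummy.safetensors", "wb") as f:
    f.write(struct.pack("<Q", len(header_bytes)))
    f.write(header_bytes)
    f.write(b"\x00" * 16)  # raw tensor data (16 bytes for a 2x2 int32 tensor)

print(read_safetensors_header("dummy.safetensors"))
```

On the real checkpoint, the header lists the GPTQ-packed tensors (e.g. `qweight`/`scales`-style entries per layer), so this is a cheap way to confirm the export succeeded without loading the model.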

# NOT COMPATIBLE WITH TEXT-GENERATION-WEBUI YET
(multimodal support isn't; text-only inference works)
Waiting for this PR: https://github.com/oobabooga/text-generation-webui/pull/1741

---
license: other
---