Made by merging the following LoRA:
https://huggingface.co/Neko-Institute-of-Science/VicUnLocked-30b-LoRA
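A minimal sketch of the merge step using the `peft` library; the base-model path `llama-30b` and the output directory `vicunlocked-30b` are assumptions for illustration, and the exact procedure used for this release may have differed:

```python
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

# Load the base model (path is an assumption) and apply the LoRA on top of it
base = LlamaForCausalLM.from_pretrained("llama-30b", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "Neko-Institute-of-Science/VicUnLocked-30b-LoRA")

# Fold the LoRA weights into the base weights and save a plain HF checkpoint,
# which is the directory later passed to llama.py for quantization
merged = model.merge_and_unload()
merged.save_pretrained("vicunlocked-30b")
LlamaTokenizer.from_pretrained("llama-30b").save_pretrained("vicunlocked-30b")
```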
Then quantizing with ooba's old CUDA branch of GPTQ-for-LLaMa:
```
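# 4-bit GPTQ quantization calibrated on c4; --true-sequential and --act-order improve quantization quality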
python llama.py vicunlocked-30b c4 --wbits 4 --true-sequential --act-order --save_safetensors vicunlocked-30b-4bit.safetensors
```