
This is a GPTQ 4-bit quantization of this model: https://huggingface.co/dvruette/llama-13b-pretrained-sft-do2

The quantization was produced with this repository: https://github.com/qwopqwop200/GPTQ-for-LLaMa/tree/triton

I used the triton branch with all available GPTQ options enabled (true_sequential + act_order + groupsize 128):

```
CUDA_VISIBLE_DEVICES=0 python llama.py ./llama-13b-pretrained-sft-do2-4bit-128g-TRITON c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors llama-13b-pretrained-sft-do2-4bit-128g-TRITON.safetensors
```
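To give an intuition for what the `--wbits 4 --groupsize 128` flags mean: each group of 128 weights shares one scale and zero-point, and every weight is stored as a 4-bit integer (0–15). Below is a minimal NumPy sketch of plain round-to-nearest group-wise 4-bit quantization; it is *not* the full GPTQ algorithm (which additionally uses Hessian-based, sequential error compensation), just an illustration of the group-wise storage format:

```python
import numpy as np

def quantize_groupwise(w, bits=4, groupsize=128):
    # Asymmetric per-group quantization: each group of `groupsize`
    # weights shares one float scale and one integer zero-point.
    qmax = 2**bits - 1
    groups = w.reshape(-1, groupsize)
    wmin = groups.min(axis=1, keepdims=True)
    wmax = groups.max(axis=1, keepdims=True)
    scale = (wmax - wmin) / qmax
    scale[scale == 0] = 1.0          # avoid div-by-zero for constant groups
    zero = np.round(-wmin / scale)   # integer zero-point per group
    q = np.clip(np.round(groups / scale + zero), 0, qmax)
    return q.astype(np.uint8), scale, zero

def dequantize(q, scale, zero):
    # Reconstruct approximate float weights from ints + per-group params.
    return (q.astype(np.float32) - zero) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)
q, scale, zero = quantize_groupwise(w)
w_hat = dequantize(q, scale, zero).reshape(-1)
# Reconstruction error is bounded by half a quantization step per group.
err = np.abs(w - w_hat).max()
```

The real GPTQ quantizer reduces this error further by quantizing columns sequentially and folding each column's rounding error into the not-yet-quantized weights, which is what `--act-order` and `--true-sequential` control.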

To use the triton model in oobabooga's text-generation-webui, refer to this issue for fixes to the errors you may encounter: https://github.com/oobabooga/text-generation-webui/issues/734
