## Original description

https://wandb.ai/open-assistant/supervised-finetuning/runs/i9gmn0dt

Trained with residual dropout 0.1

## What is this?

This is https://huggingface.co/dvruette/llama-13b-pretrained-dropout quantized to 4-bit (int4) with a group size of 128.
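To illustrate what "int4 with group size 128" means, here is a minimal sketch of group-wise 4-bit quantization in plain Python. This is not the actual GPTQ algorithm used to produce this checkpoint (GPTQ minimizes layer-wise reconstruction error); it only shows the core idea of storing one scale and zero-point per group of 128 weights, with each weight mapped to one of 16 levels.

```python
import random

def quantize_int4_groupwise(weights, group_size=128):
    # Sketch only: per-group affine quantization to 4 bits (16 levels).
    quantized, scales, zeros = [], [], []
    for start in range(0, len(weights), group_size):
        group = weights[start:start + group_size]
        lo, hi = min(group), max(group)
        scale = (hi - lo) / 15.0 or 1.0  # one scale per group; guard against flat groups
        quantized.append([round((w - lo) / scale) for w in group])
        scales.append(scale)
        zeros.append(lo)
    return quantized, scales, zeros

def dequantize(quantized, scales, zeros):
    # Reconstruct approximate float weights from int4 codes.
    out = []
    for q_group, scale, lo in zip(quantized, scales, zeros):
        out.extend(q * scale + lo for q in q_group)
    return out

random.seed(0)
w = [random.gauss(0.0, 1.0) for _ in range(1024)]
q, s, z = quantize_int4_groupwise(w)
w_hat = dequantize(q, s, z)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
```

Smaller group sizes give each group a tighter scale (lower error) at the cost of storing more scales and zero-points; 128 is a common trade-off.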

Run it in text-generation-webui with `--wbits 4` and `--groupsize 128`.
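A typical invocation might look like the following, assuming text-generation-webui is launched via its `server.py` entry point and the model has been downloaded into its `models/` directory (the model directory name here is a placeholder):

```shell
# Sketch: launch text-generation-webui with GPTQ int4 settings.
# --model names a folder under models/; adjust to your local path.
python server.py --model llama-13b-pretrained-dropout-4bit-128g --wbits 4 --groupsize 128
```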
