OPT-13B-Erebus-4bit-128g

Model description

Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.

This is a 4-bit, 128-group-size GPTQ quantization of OPT-13B-Erebus. Original model: https://huggingface.co/KoboldAI/OPT-13B-Erebus

Quantization Information

Quantized with: https://github.com/0cc4m/GPTQ-for-LLaMa

To produce the PyTorch checkpoint (.pt):

python repos/gptq/opt.py --wbits 4 models/KoboldAI_OPT-13B-Erebus c4 --groupsize 128 --save models/KoboldAI_OPT-13B-Erebus/OPT-13B-Erebus-4bit-128g.pt

To produce the safetensors checkpoint:

python repos/gptq/opt.py --wbits 4 models/KoboldAI_OPT-13B-Erebus c4 --groupsize 128 --save_safetensors models/KoboldAI_OPT-13B-Erebus/OPT-13B-Erebus-4bit-128g.safetensors
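
A checkpoint produced this way is typically loaded through a GPTQ-aware front end such as text-generation-webui or KoboldAI. As an alternative, the sketch below shows one way to load it with the AutoGPTQ library; this is an illustrative assumption rather than a documented workflow for this model, and the directory and basename simply mirror the commands above.

from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

# Paths assumed to match the quantization commands above.
model_dir = "models/KoboldAI_OPT-13B-Erebus"

tokenizer = AutoTokenizer.from_pretrained(model_dir, use_fast=False)
model = AutoGPTQForCausalLM.from_quantized(
    model_dir,
    model_basename="OPT-13B-Erebus-4bit-128g",  # the --save_safetensors file name without extension
    use_safetensors=True,
    device="cuda:0",
)

prompt = "The stranger pushed open the tavern door"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))

The group size of 128 trades a small amount of extra memory for lower quantization error than per-column (groupless) 4-bit quantization, which is why both the .pt and .safetensors artifacts carry the "4bit-128g" suffix.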

License

OPT-13B is licensed under the OPT-175B license, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
