
Simply make AI models cheaper, smaller, faster, and greener!

  • Give a thumbs up if you like this model!
  • Contact us and tell us which model to compress next here.
  • Request access to easily compress your own AI models here.
  • Read the documentation to learn more here.
  • Join Pruna AI community on Discord here to share feedback/suggestions or get help.

Frequently Asked Questions

  • How does the compression work? The model is compressed using bitsandbytes 4-bit quantization; a sketch of the approach follows this list.
  • How does the model quality change? The quality of the model output will slightly degrade.
  • What is the model format? We use the standard safetensors format.
  • How to compress my own models? You can request premium access to more compression methods and tech support for your specific use-cases here.
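
For illustration, here is a minimal sketch of 4-bit quantization with bitsandbytes through the transformers API. The exact settings Pruna used for this model are not published, so the config values below are assumptions:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Hypothetical 4-bit config; the settings actually used for this smashed
# model are not documented, so these values are illustrative only.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4 weight format
    bnb_4bit_compute_dtype=torch.bfloat16,  # run matmuls in bf16
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
)

# Loading a causal LM with this config quantizes its weights on the fly.
model = AutoModelForCausalLM.from_pretrained(
    "databricks/dbrx-instruct",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)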

Usage

Quickstart Guide

Getting started with DBRX models is easy with the transformers library. The model requires ~264GB of RAM and the following packages:

pip install "torch==2.4.0" "transformers>=4.39.2" "tiktoken>=0.6.0" "bitsandbytes"

If you'd like to speed up download time, you can use the hf_transfer package as described by Hugging Face here.

pip install hf_transfer
export HF_HUB_ENABLE_HF_TRANSFER=1
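
With that environment variable set, downloads made through the huggingface_hub library automatically use hf_transfer. For example, using the standard snapshot_download API (the token placeholder is explained in the access note below):

from huggingface_hub import snapshot_download

# Pre-downloads the full repository; with HF_HUB_ENABLE_HF_TRANSFER=1 set,
# hf_transfer handles the transfer for higher throughput.
snapshot_download(
    "PrunaAI/dbrx-instruct-bnb-4bit",
    token="hf_YOUR_TOKEN",  # read-scoped access token
)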

You will need to request access to this repository to download the model. Once this is granted, obtain an access token with read permission, and supply the token below.
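
You can either pass the token directly to each call, as in the snippet below, or register it once with huggingface_hub so later calls can omit it:

from huggingface_hub import login

# Stores the token locally; subsequent from_pretrained calls can then
# omit the token=... argument.
login(token="hf_YOUR_TOKEN")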

Run the model on multiple GPUs:

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the tokenizer and the 4-bit smashed model; device_map="auto" shards
# the weights across all visible GPUs.
tokenizer = AutoTokenizer.from_pretrained("PrunaAI/dbrx-instruct-bnb-4bit", trust_remote_code=True, token="hf_YOUR_TOKEN")
model = AutoModelForCausalLM.from_pretrained("PrunaAI/dbrx-instruct-bnb-4bit", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True, token="hf_YOUR_TOKEN")

# Format the prompt with DBRX's chat template and move it to the GPU.
input_text = "What does it take to build a great LLM?"
messages = [{"role": "user", "content": input_text}]
input_ids = tokenizer.apply_chat_template(messages, return_dict=True, tokenize=True, add_generation_prompt=True, return_tensors="pt").to("cuda")

# Generate up to 200 new tokens and decode the full sequence.
outputs = model.generate(**input_ids, max_new_tokens=200)
print(tokenizer.decode(outputs[0]))
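
For interactive use you can stream tokens as they are generated instead of waiting for the full output. A minimal variation using transformers' TextStreamer, reusing the model, tokenizer, and input_ids from above:

from transformers import TextStreamer

# Prints tokens to stdout as they are produced, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True)
outputs = model.generate(**input_ids, max_new_tokens=200, streamer=streamer)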

Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model, databricks/dbrx-instruct, which provided the base model, before using this model. The license of the pruna-engine is available here on PyPI.

Want to compress other models?

  • Contact us and tell us which model to compress next here.
  • Request access to easily compress your own AI models here.

Model size: 68.5B params (safetensors; tensor types: BF16 · F32 · U8)