# Llama-3.3-70B-Instruct - EXL2 4.5bpw

This is a 4.5bpw EXL2 quant of [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct).

Details about the base model can be found on its model page, linked above.
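
To fetch this quant locally, the `huggingface_hub` library can download the full repository. A minimal sketch, assuming `huggingface_hub` is installed; the local directory is just an example:

```python
from huggingface_hub import snapshot_download

# Download every file in this quant's repository (local path is an example).
local_dir = snapshot_download(
    repo_id="Dracones/Llama-3.3-70B-Instruct_exl2_4.5bpw",
    local_dir="./Llama-3.3-70B-Instruct_exl2_4.5bpw",
)
print(f"Model downloaded to: {local_dir}")
```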

## EXL2 Version

These quants were made with exllamav2 version 0.2.4. Quants made with this version of EXL2 may not work on older versions of the exllamav2 library.

If you have problems loading these models, please update Text Generation WebUI to the latest version.
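
Outside of a UI, these quants can also be loaded directly with the exllamav2 Python API. The following is a minimal sketch, assuming exllamav2 0.2.x, enough GPU memory for a 4.5bpw 70B model, and a local copy of the quant at a placeholder path (the dynamic generator prefers flash-attn for paged attention; pass `paged=False` to it if flash-attn is unavailable):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Placeholder path to the downloaded quant directory.
config = ExLlamaV2Config("./Llama-3.3-70B-Instruct_exl2_4.5bpw")

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate the cache as layers load
model.load_autosplit(cache)               # split weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Hello, my name is", max_new_tokens=32))
```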

## Perplexity Scoring

Below are the perplexity scores for the EXL2 models. A lower score is better.

| Quant Level (bpw) | Perplexity Score |
|-------------------|------------------|
| 5.0               | 4.7932           |
| 4.5               | 4.8894           |
| 4.0               | 5.0079           |
| 3.5               | 5.3992           |
| 3.0               | 7.2686           |
| 2.5               | 10.5543          |
| 2.25              | 8.8764           |
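
For context, perplexity is the exponential of the model's average negative log-likelihood per token on a held-out evaluation text, so lower values mean the quant predicts the text better. A minimal sketch of the calculation (the per-token log-probabilities here are illustrative stand-ins, not measured values):

```python
import math

# Per-token log-probabilities assigned by a model to a held-out text
# (illustrative stand-in values, not real measurements).
token_logprobs = [-1.2, -0.4, -2.1, -0.9, -1.5]

# Perplexity = exp(mean negative log-likelihood per token).
nll = -sum(token_logprobs) / len(token_logprobs)
print(f"Perplexity: {math.exp(nll):.4f}")
```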