About

weighted/imatrix quants of https://huggingface.co/lodrick-the-lafted/Grafted-Titanic-Dolphin-2x120B

No IQ-quants are available because llama.cpp currently crashes when trying to generate them.

static quants are available at https://huggingface.co/mradermacher/Grafted-Titanic-Dolphin-2x120B-GGUF

Usage

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including how to concatenate multi-part files.
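For example, the split quants below can be rebuilt into a single GGUF by byte-wise concatenation of the parts, in order. Here is a minimal Python sketch; the part file names are hypothetical placeholders, the real names are listed in the table below:

```python
# Minimal sketch: rebuild one GGUF from its parts by concatenating
# them in order. File names are placeholders, not the actual names.
import shutil

parts = [
    "model.i1-Q4_K_M.gguf.part1of3",  # hypothetical part names
    "model.i1-Q4_K_M.gguf.part2of3",
    "model.i1-Q4_K_M.gguf.part3of3",
]

with open("model.i1-Q4_K_M.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            # stream copy, so a >100 GB model never has to fit in RAM
            shutil.copyfileobj(src, out)
```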

Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|------|------|---------|-------|
| PART 1 PART 2 | i1-Q2_K | 78.6 | IQ3_XXS probably better |
| PART 1 PART 2 | i1-Q3_K_S | 92.6 | IQ3_XS probably better |
| PART 1 PART 2 PART 3 | i1-Q3_K_M | 103.0 | IQ3_S probably better |
| PART 1 PART 2 PART 3 | i1-Q3_K_L | 111.9 | IQ3_M probably better |
| PART 1 PART 2 PART 3 | i1-Q4_K_S | 122.0 | optimal size/speed/quality |
| PART 1 PART 2 PART 3 | i1-Q4_K_M | 129.5 | fast, recommended |
| PART 1 PART 2 PART 3 PART 4 | i1-Q5_K_S | 147.8 | |
| PART 1 PART 2 PART 3 PART 4 | i1-Q5_K_M | 152.2 | |
| PART 1 PART 2 PART 3 PART 4 | i1-Q6_K | 176.2 | practically like static Q6_K |
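Once concatenated, a quant can be loaded by any llama.cpp-based runtime. Below is a minimal sketch using the llama-cpp-python bindings; the file name and parameter values are illustrative assumptions, not recommendations:

```python
# Minimal sketch: run a concatenated quant with llama-cpp-python.
# Install with: pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="model.i1-Q4_K_M.gguf",  # placeholder file name
    n_ctx=4096,       # context window; adjust to available memory
    n_gpu_layers=-1,  # offload all layers if built with GPU support
)

out = llm("Once upon a time,", max_tokens=64)
print(out["choices"][0]["text"])
```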

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

[image: ikawrakow's quant-type quality comparison graph]

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for answers to questions you might have, or if you want some other model quantized.

Thanks

I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
