---
license: apache-2.0
---

Unofficial GGUF quantizations of Grok-1. These should run directly with llama.cpp once PR #6204 (Add grok-1 support) is merged.

The splits now use the format from the llama.cpp PR "llama_model_loader: support multiple split/shard GGUFs", so no merging with gguf-split is needed anymore; llama.cpp can load the split files directly.
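A minimal usage sketch, assuming a llama.cpp build that already includes grok-1 support; the shard filenames below are illustrative, not the actual filenames in this repo. Pointing the CLI at the first shard is enough, since the split-aware loader picks up the remaining files automatically:

```shell
# Load the first shard directly; llama.cpp's split-aware loader
# finds the sibling shards in the same directory.
# Filenames are hypothetical placeholders for this example.
./main -m grok-1-Q2_K-00001-of-00009.gguf -p "Hello" -n 64
```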

For now only the Q2_K quant is available; the others (Q3_K, Q4_K, Q5_K & Q6_K) are prepared and waiting to be uploaded.