Open_Gpt4_8x7B_v0.2 iMat GGUF

Open_Gpt4_8x7B_v0.2 iMat GGUF quantized from fp16 with love.

  • Quantizations made possible using the mixtral-8x7b-instruct-v0.1.imatrix file from this repo (special thanks to ikawrakow again)

Legacy quants (e.g. Q8_0, Q5_K_M) in this repo have all been enhanced with importance matrix calculation. These quants show improved KL-divergence over their static counterparts.
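For reference, an imatrix-enhanced quant is produced by passing an importance matrix file to llama.cpp's quantize tool. A minimal sketch (file paths are hypothetical placeholders; assumes a built llama.cpp checkout):

```shell
# Static quantization (no importance matrix):
./quantize model-f16.gguf model-Q4_K_M.gguf Q4_K_M

# Importance-matrix-enhanced quantization, using the imatrix
# file mentioned above to weight which values matter most:
./quantize --imatrix mixtral-8x7b-instruct-v0.1.imatrix \
  model-f16.gguf model-Q4_K_M.gguf Q4_K_M
```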

All files have been tested for your safety and convenience. No need to clone the entire repo, just pick the quant that's right for you.
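To grab a single quant without cloning the whole repo, you can use the Hugging Face CLI. A sketch (the exact .gguf filename below is a placeholder; substitute the one you picked from the file list):

```shell
# Download one file from this repo into the current directory:
huggingface-cli download InferenceIllusionist/Open_Gpt4_8x7B_v0.2-iMat-GGUF \
  Open_Gpt4_8x7B_v0.2-Q4_K_M.gguf --local-dir .
```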

For more information on latest iMatrix quants see this PR - https://github.com/ggerganov/llama.cpp/pull/5747

Model size: 46.7B params
Architecture: llama