GGUF version of Isotonic/Mixnueza-6x32M-MoE.

It was not possible to quantize the model, so only the F16 and F32 GGUF files are available.

Try it with llama.cpp:

```shell
brew install ggerganov/ggerganov/llama.cpp
llama-cli \
  --hf-repo Felladrin/gguf-Mixnueza-6x32M-MoE \
  --model Mixnueza-6x32M-MoE.F32.gguf \
  --random-prompt \
  --temp 1.3 \
  --dynatemp-range 1.2 \
  --top-k 0 \
  --top-p 1 \
  --min-p 0.1 \
  --typical 0.85 \
  --mirostat 2 \
  --mirostat-ent 3.5 \
  --repeat-penalty 1.1 \
  --repeat-last-n -1 \
  -n 256
```
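Among the sampling flags above, `--min-p 0.1` discards candidate tokens whose probability is below 10% of the most likely token's probability, then renormalizes what remains. A minimal Python sketch of that filtering step, using a toy token distribution (this is an illustration of the idea, not llama.cpp's actual implementation):

```python
def min_p_filter(probs, min_p=0.1):
    """Keep tokens with probability >= min_p * (top token's probability)."""
    top = max(probs.values())
    threshold = min_p * top
    kept = {tok: p for tok, p in probs.items() if p >= threshold}
    # Renormalize so the surviving probabilities sum to 1.
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}

# Toy distribution: with min_p=0.1 the threshold is 0.1 * 0.5 = 0.05,
# so the two lowest-probability tokens are dropped.
probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "zzz": 0.04, "qqq": 0.01}
print(min_p_filter(probs, 0.1))
```

Compared to a fixed `--top-k` cutoff, the min-p threshold scales with the model's confidence, which pairs well with the high temperature used here.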
Model details: 83.9M parameters, llama architecture, provided as 16-bit (F16) and 32-bit (F32) GGUF files.
