jxtngx/mamba-2.8b-hf-Q4_0-GGUF

This model was converted to GGUF format from state-spaces/mamba-2.8b-hf using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
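The GGUF file can be loaded with llama.cpp or its Python bindings. Below is a minimal sketch using llama-cpp-python's `Llama.from_pretrained`, which downloads the file directly from the Hub; the GGUF filename shown is an assumption (check the repository's file listing for the actual name), and running it requires a llama.cpp build with mamba architecture support.

```python
# Minimal sketch: load the Q4_0 GGUF from the Hub with llama-cpp-python.
# Assumes llama-cpp-python and huggingface_hub are installed and that the
# underlying llama.cpp build supports the mamba architecture.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="jxtngx/mamba-2.8b-hf-Q4_0-GGUF",
    filename="mamba-2.8b-hf-q4_0.gguf",  # assumed filename; verify in the repo
    n_ctx=2048,
)

# mamba-2.8b-hf is a base (non-chat) model, so use plain text completion.
output = llm("The Mamba architecture replaces attention with", max_tokens=64)
print(output["choices"][0]["text"])
```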

Format: GGUF
Model size: 2.77B params
Architecture: mamba
Quantization: 4-bit (Q4_0)
