
DarkAtom-12B-v3 GGUF Quantizations 🗲

Something that shouldn't exist

(Logo: DarkAtomQuantizedLogo256.png)

This model was converted to GGUF format using llama.cpp.

For more information about the model, see the original model card: Khetterman/DarkAtom-12B-v3.
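To try one of the quantized files listed below locally, a minimal sketch using llama-cpp-python looks like the following. The filename is only an example; substitute whichever .gguf quant you download from this repository, and adjust context size and GPU offload to your hardware.

```python
# Minimal inference sketch with llama-cpp-python (pip install llama-cpp-python).
# The model_path filename is an example placeholder, not the exact file name in this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="./DarkAtom-12B-v3-Q4_K_M.gguf",  # example quant; use the file you downloaded
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to GPU if available; set 0 for CPU-only
)

output = llm(
    "Write a short story about an atom that should not exist.",
    max_tokens=256,
    temperature=0.8,
)
print(output["choices"][0]["text"])
```

Lower-bit quantizations (2-bit, 3-bit) are smaller and faster but lose more quality; higher-bit ones (6-bit, 8-bit) stay closer to the original weights.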

Available Quantizations (◕‿◕)

2-bit
3-bit
4-bit
5-bit
6-bit
8-bit

Have a good time 🖤

