Pull Q6 from Ollama:

ollama run goekdenizguelmez/JOSIEFIED-Qwen2.5:3b
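
Once the model is pulled, Ollama serves it over its local HTTP API (port 11434 by default). Below is a minimal sketch of querying it from Python, assuming a standard local Ollama install and its /api/generate endpoint; the prompt text is illustrative only.

import json
import urllib.request

# Assumes Ollama is running locally on its default port (11434).
# The model tag matches the `ollama run` command above.
payload = {
    "model": "goekdenizguelmez/JOSIEFIED-Qwen2.5:3b",
    "prompt": "Summarize what this model is tuned for in one sentence.",
    "stream": False,  # return a single JSON object instead of a token stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["response"])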

Format: GGUF
Model size: 3.4B params
Architecture: qwen2
Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit
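
The GGUF files can also be fetched straight from the Hugging Face repository rather than through Ollama. A short sketch using huggingface_hub follows; the per-quantization filename pattern matched below is an assumption, so list the repo files first and confirm before downloading.

from huggingface_hub import HfApi, hf_hub_download

repo_id = "Goekdeniz-Guelmez/Josiefied-Qwen2.5-3B-Instruct-abliterated-v1-gguf"

# List the GGUF files in the repo; the exact per-quant filenames are not
# given in this card, so inspect the printed list before choosing one.
files = HfApi().list_repo_files(repo_id)
gguf_files = [f for f in files if f.endswith(".gguf")]
print(gguf_files)

# Pick a 6-bit quantization if one is present (the "q6" pattern is an assumption).
q6_files = [f for f in gguf_files if "q6" in f.lower()]
if q6_files:
    local_path = hf_hub_download(repo_id=repo_id, filename=q6_files[0])
    print("Downloaded to:", local_path)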

Hugging Face repository: Goekdeniz-Guelmez/Josiefied-Qwen2.5-3B-Instruct-abliterated-v1-gguf