---
license: mit
base_model:
- Qwen/Qwen2.5-Coder-14B-Instruct
pipeline_tag: text-generation
---

# 🚀 Qwen2.5-Coder-14B-Instruct-Uncensored-GGUF

### Optimized quantized models for efficient inference
## 📋 Overview

A collection of GGUF quantized models derived from [Qwen/Qwen2.5-Coder-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct), offering a range of performance-quality tradeoffs. A minimal Python usage sketch is included at the end of this card.

## 💎 Model Variants

| Variant | Use Case | Download |
|---------|----------|----------|
| Q2_K | Basic text completion tasks | [📥](https://huggingface.co/BlossomsAI/Qwen2.5-Coder-14B-Instruct-Uncensored-GGUF/resolve/main/q2_k.gguf?download=true) |
| Q3_K_M | Memory-efficient quality operations | [📥](https://huggingface.co/BlossomsAI/Qwen2.5-Coder-14B-Instruct-Uncensored-GGUF/resolve/main/q3_k_m.gguf?download=true) |
| Q4_K_S | Balanced performance and quality | [📥](https://huggingface.co/BlossomsAI/Qwen2.5-Coder-14B-Instruct-Uncensored-GGUF/resolve/main/q4_k_s.gguf?download=true) |
| Q4_K_M | Balanced performance and quality | [📥](https://huggingface.co/BlossomsAI/Qwen2.5-Coder-14B-Instruct-Uncensored-GGUF/resolve/main/q4_k_m.gguf?download=true) |
| Q5_K_S | Enhanced quality text generation | [📥](https://huggingface.co/BlossomsAI/Qwen2.5-Coder-14B-Instruct-Uncensored-GGUF/resolve/main/q5_k_s.gguf?download=true) |
| Q5_K_M | Enhanced quality text generation | [📥](https://huggingface.co/BlossomsAI/Qwen2.5-Coder-14B-Instruct-Uncensored-GGUF/resolve/main/q5_k_m.gguf?download=true) |
| Q6_K | Superior quality outputs | [📥](https://huggingface.co/BlossomsAI/Qwen2.5-Coder-14B-Instruct-Uncensored-GGUF/resolve/main/q6_k.gguf?download=true) |
| Q8_0 | Maximum quality, production-grade results | [📥](https://huggingface.co/BlossomsAI/Qwen2.5-Coder-14B-Instruct-Uncensored-GGUF/resolve/main/q8_0.gguf?download=true) |

## 🤝 Contributors

Developed with ❤️ by [BlossomAI](https://huggingface.co/BlossomsAI)

---
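## 🛠️ Usage example

A minimal sketch of loading one of these GGUF files in Python, assuming the third-party `huggingface_hub` and `llama-cpp-python` packages are installed (`pip install huggingface_hub llama-cpp-python`). The repo id and the `q4_k_m.gguf` filename are taken from the download links above; the prompt and generation settings are illustrative.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from this repo (filename matches the table above).
model_path = hf_hub_download(
    repo_id="BlossomsAI/Qwen2.5-Coder-14B-Instruct-Uncensored-GGUF",
    filename="q4_k_m.gguf",
)

# Load the GGUF file; n_gpu_layers=-1 offloads all layers to the GPU
# if llama-cpp-python was built with GPU support, otherwise it runs on CPU.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# Chat-style completion (OpenAI-like response dict).
out = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
    max_tokens=256,
    temperature=0.2,
)
print(out["choices"][0]["message"]["content"])
```

The same files should also work with other GGUF-compatible runtimes, such as the llama.cpp CLI.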
Star ⭐️ this repo if you find it valuable!