A newer version of this model is available: YOYO-AI/QwQ-coder-32B
bobig/QwQ-Coder-instruct-mlx-4Bit
This is pretty good: QwQ brains and memory plus Qwen Coder Instruct.
Now in delicious MLX. Eat it or wear it.
32k context is solid in QwQ: https://fiction.live/stories/Fiction-liveBench-Mar-14-2025/oQdzQvKHw8JyXbN87
Test Prompt: Write a quick sort in C++
The model bobig/QwQ-Coder-instruct-mlx-4Bit was converted to MLX format from YOYO-AI/QwQ-Coder-instruct using mlx-lm version 0.21.5.
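For reference, a conversion along these lines can be reproduced with the mlx-lm conversion CLI. This is a sketch, not the exact command used here: the output path is a hypothetical local directory, and the flag names may shift slightly between mlx-lm versions.

```bash
# Convert the original weights to MLX format with 4-bit quantization.
# --mlx-path is a hypothetical output directory; adjust as needed.
mlx_lm.convert \
    --hf-path YOYO-AI/QwQ-Coder-instruct \
    --mlx-path QwQ-Coder-instruct-mlx-4Bit \
    -q --q-bits 4
```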
Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("bobig/QwQ-Coder-instruct-mlx-4Bit")

prompt = "hello"

# Wrap the prompt in the chat template if the tokenizer provides one.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
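As a sketch of running the test prompt from above: the snippet below reuses the same load/generate calls, and `max_tokens=2048` is an assumed, arbitrary budget passed through to mlx-lm's generation loop, not a value from this card.

```python
from mlx_lm import load, generate

model, tokenizer = load("bobig/QwQ-Coder-instruct-mlx-4Bit")

# The test prompt from above, wrapped in the chat template before generating.
messages = [{"role": "user", "content": "Write a quick sort in C++"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# QwQ-style models think out loud, so leave room for the reasoning
# plus the final C++ code.
response = generate(model, tokenizer, prompt=prompt, max_tokens=2048, verbose=True)
print(response)
```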
Model tree for mlx-community/QwQ-Coder-instruct-mlx-4Bit
Base model: YOYO-AI/QwQ-Coder-instruct