Conversion request to Q5_K_M for MLX

#4
by websprockets - opened
MLX Community org

Could someone convert CodeLlama-70B-Instruct to Q5_K_M for MLX? It’s not listed yet and would be great for my use case (science research). Thank you!!

MLX Community org

Sure

MLX Community org

Wait, I posted too fast: Q5_K_M is a GGUF quantization format (llama.cpp's mixed 5/6-bit scheme), not an MLX one. Do you just want a 5-bit MLX quant?
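For reference, a plain 5-bit MLX conversion would look roughly like this with the `mlx_lm` convert tool. This is a sketch, not a tested command for this exact checkpoint; the `codellama/CodeLlama-70b-Instruct-hf` repo id and output path are assumptions:

```shell
# Sketch: convert a Hugging Face checkpoint to a 5-bit MLX quant.
# MLX uses uniform group quantization (--q-bits / --q-group-size),
# so this is the closest analogue of a Q5, not a K_M mixed quant.
python -m mlx_lm.convert \
    --hf-path codellama/CodeLlama-70b-Instruct-hf \
    --mlx-path CodeLlama-70b-Instruct-hf-5bit \
    -q --q-bits 5 --q-group-size 64
```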

MLX Community org

Thanks for the reply! Glad you held off. I'm waiting to see what models come in for Llama 4, specifically whether I can get a Maverick quant that'll fit (and make the most of) my brand-new maxed-out M3 Ultra Mac Studio.

MLX Community org

Actually, let me be specific: if there's a Llama-4-Maverick-17B-16E-Instruct-6bit with the 256K context and vision enabled ("vision_config"), that's probably my sweet spot.

MLX Community org

I see you've made 4-bit, 6-bit, and 8-bit Scout quants. I believe a 2-bit Scout would fit on an M4 Pro Mac mini with 64GB and still outperform CodeLlama-70B-Instruct at 6-bit. Any chance there's a 2-bit Scout in the works?
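The 64GB claim checks out as a back-of-envelope estimate. Assuming Scout's total parameter count is roughly 109B (the "17B" in the name is active parameters per token, not total; the exact figure is an assumption here), a pure 2-bit quant of the weights is well under 64 GiB:

```python
# Back-of-envelope memory estimate for a 2-bit Scout quant.
# Assumption: ~109B total parameters; ignores quantization scales,
# KV cache, and runtime overhead, so the real footprint is higher.
TOTAL_PARAMS = 109e9
BITS_PER_WEIGHT = 2
GIB = 1024**3

def quantized_weight_gib(params: float, bits: float) -> float:
    """Approximate weight memory in GiB at a given bit-width."""
    return params * bits / 8 / GIB

weights = quantized_weight_gib(TOTAL_PARAMS, BITS_PER_WEIGHT)
print(f"~{weights:.1f} GiB of weights")
```

Around 25 GiB of weights leaves headroom for the KV cache and the OS on a 64GB machine.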

MLX Community org

I will push some mixed-quant MoE models so they fit better on Macs.
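The idea behind mixed quants for MoE models can be sketched numerically: the expert FFN weights dominate the parameter count, so quantizing them hard while keeping attention and router weights at higher fidelity gives a low average bit-width with less quality loss. The layer names and parameter counts below are made up for illustration, not taken from any real model:

```python
# Illustrative mixed-quant sketch (not mlx_lm's actual recipe):
# experts get 2 bits, everything else gets 6 bits.

def layer_bits(name: str) -> int:
    # Hypothetical naming scheme for a MoE transformer's tensors.
    if "experts" in name:
        return 2   # experts dominate the parameter count; quantize hard
    return 6       # attention / router weights stay higher fidelity

layers = {               # made-up parameter counts, in parameters
    "attention.q_proj": 0.5e9,
    "router.gate": 0.01e9,
    "experts.ffn": 9.5e9,
}
total = sum(layers.values())
avg_bits = sum(layer_bits(n) * p for n, p in layers.items()) / total
print(f"average ~{avg_bits:.2f} bits/weight")
```

Because the experts hold ~95% of the parameters here, the average lands near 2.2 bits per weight even though the sensitive layers stay at 6 bits.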
