Conversion request to Q5_K_M for MLX
Could someone convert CodeLlama-70B-Instruct to Q5_K_M for MLX? It’s not listed yet and would be great for my use case (science research). Thank you!!
Sure
Wait, posted too fast: Q5_K_M is a GGUF quantization format, not an MLX one. Do you just want a plain 5-bit quant?
Thanks for the reply! Glad you held off. I'm waiting to see the models that are coming in for llama 4. Waiting to see if I can get a Maverick model that'll fit (and make the most of) my brand new maxed out M3 Ultra Mac Studio.
Actually, let me be specific... if Llama-4-Maverick-17B-16E-Instruct-6bit with the 256K context and vision enabled ("vision_config") is doable, that's probably my sweet spot.
I see you've made 4-bit, 6-bit and 8-bit Scout quants. I believe a 2-bit Scout would fit on an M4 Pro Mac mini with 64GB and still outperform CodeLlama-70B-Instruct at 6-bit. Any chance a 2-bit Scout is in the works??
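For what it's worth, here's the back-of-envelope math behind that claim. The parameter counts are my assumptions from publicly reported figures (Llama 4 Scout at roughly 109B total parameters across its 16 experts, CodeLlama at ~70B), and this only counts weights; KV cache and runtime overhead come on top.

```python
# Rough weight-memory estimate for the quants discussed above.
# Parameter counts are assumptions: Scout ~109B total, CodeLlama ~70B.
# Real memory use adds KV cache, activations, and OS overhead.

def weight_gb(params_billions: float, bits: float) -> float:
    """Approximate weight footprint in GB (1 GB = 1e9 bytes)."""
    return params_billions * 1e9 * bits / 8 / 1e9

scout_2bit = weight_gb(109, 2)       # ~27 GB
codellama_6bit = weight_gb(70, 6)    # ~52 GB

print(f"Scout @ 2-bit:     {scout_2bit:.1f} GB")
print(f"CodeLlama @ 6-bit: {codellama_6bit:.1f} GB")
# A 2-bit Scout (~27 GB of weights) leaves comfortable headroom on a
# 64 GB M4 Pro Mac mini, while 6-bit CodeLlama-70B is already tight.
```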
I will push some models with mixed quants too, so MoE models fit better on Macs.
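To illustrate why mixed quants pay off for MoE models: most of the parameters sit in the expert FFNs, so quantizing experts more aggressively while keeping attention/shared layers at higher precision recovers most of the memory savings with less quality loss. The 100B/9B split below is a made-up round-number assumption for a ~109B-parameter MoE, not the actual layer breakdown of any released model.

```python
# Illustrative sketch of a two-tier mixed quant for an MoE model.
# The expert/shared parameter split (100B / 9B) is an assumption
# chosen only to show the shape of the trade-off.

def mixed_gb(expert_b: float, shared_b: float,
             expert_bits: float, shared_bits: float) -> float:
    """Weight footprint in GB when experts and shared layers use
    different bit-widths."""
    return (expert_b * expert_bits + shared_b * shared_bits) / 8

uniform_6bit = mixed_gb(100, 9, 6, 6)  # everything at 6-bit: ~82 GB
mixed_3_6 = mixed_gb(100, 9, 3, 6)     # experts 3-bit, rest 6-bit: ~44 GB

print(f"uniform 6-bit: {uniform_6bit:.1f} GB")
print(f"mixed 3/6:     {mixed_3_6:.1f} GB")
# Dropping only the experts to 3-bit nearly halves the footprint,
# while the quality-sensitive shared layers stay at 6-bit.
```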