Running with llama.cpp on MacBook M2 Pro
#1 by huksley - opened
Hi, I am trying to run poro-34b.Q4_K_M.gguf, but it outputs gibberish and then fails with the error "ggml_metal_graph_compute: command buffer 5 failed with status 5".
Which quantization works on a MacBook M2 Pro with 32 GB?
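
For context, here is a minimal sketch of the kind of run I mean, using the llama-cpp-python bindings instead of the CLI (the file path and settings are just assumptions, not my exact setup); with `n_gpu_layers=0` the same GGUF loads CPU-only, which should help tell whether the gibberish is specific to the Metal backend:

```python
# Sketch only: assumes llama-cpp-python is installed and the GGUF file is local.
from llama_cpp import Llama

llm = Llama(
    model_path="poro-34b.Q4_K_M.gguf",  # quantized Poro 34B file (path assumed)
    n_ctx=2048,                         # modest context window to limit memory use
    n_gpu_layers=0,                     # 0 = CPU only; increase to offload layers to Metal
)

out = llm("The capital of Finland is", max_tokens=32)
print(out["choices"][0]["text"])
```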