"Error: llama runner process has terminated", when running "ollama run"

by yuguanggao

When I try to run the following command:
ollama run hf.co/bartowski/Qwen2-VL-2B-Instruct-GGUF:Q4_K_M

I get the following error:
Error: llama runner process has terminated: this model is not supported by your version of Ollama. You may need to upgrade

But I believe I've already upgraded to the latest version:
ollama --version
ollama version is 0.5.1

I've tried several other models, and the error persists. Does this mean Ollama doesn't support these models yet?

It's very likely Ollama doesn't support these models yet; they're brand new to llama.cpp.

I have the same question.
