Add VLLM tag
#6
by osanseviero - opened
No description provided.
I don't think vLLM can run inference on those binaries; GGUF is the ggml/llama.cpp format.
This is for vision LLMs, not the vllm library; we'll change the wording to be clearer.
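For context on the GGUF point above: GGUF checkpoints are the file format used by ggml/llama.cpp and are typically loaded with llama.cpp tooling rather than vLLM. A minimal sketch using the llama-cpp-python bindings, assuming a hypothetical local model path (not an actual file from this repo):

```python
from llama_cpp import Llama  # llama-cpp-python bindings for llama.cpp

# Hypothetical path to a quantized GGUF checkpoint downloaded locally
llm = Llama(model_path="./model-Q4_K_M.gguf", n_ctx=2048)

# Run a short completion to confirm the GGUF binary loads and generates
out = llm("GGUF is", max_tokens=32)
print(out["choices"][0]["text"])
```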
cmp-nct changed pull request status to merged