I deployed codeqwen-1_5-7b-chat-q5_0.gguf on Ollama, but the conversation output is very strange. Why is this, and is there a fix?
#4 opened 7 months ago by monsterbeasts · 1 reply
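Garbled conversations from a ChatML-trained model like CodeQwen usually mean the chat template and stop strings were not configured when the GGUF was imported. A minimal Modelfile sketch, assuming the GGUF sits in the current directory (the template shape follows ChatML, which CodeQwen-Chat uses):

```
# Sketch: import the raw GGUF with a ChatML template and stop strings.
# Without TEMPLATE/stop, Ollama feeds unformatted text to the model,
# which typically yields strange, rambling replies.
FROM ./codeqwen-1_5-7b-chat-q5_0.gguf

TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""

PARAMETER stop "<|im_end|>"
PARAMETER stop "<|im_start|>"
```

Then build and run it with `ollama create codeqwen -f Modelfile` followed by `ollama run codeqwen`.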
Broken output when GPU is enabled
#3 opened 7 months ago by imareo
Using llama.cpp server, responses always end with <|im_end|>
#2 opened 8 months ago by gilankpam · 1 reply
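For the `<|im_end|>` issue above: llama.cpp's built-in HTTP server accepts a `stop` array in the `/completion` request body, so listing the ChatML end token there makes the server truncate generation before the token reaches the client. A sketch of such a request payload (the server address and prompt text are illustrative assumptions):

```python
import json

# Build a /completion request for llama.cpp's HTTP server.
# Putting "<|im_end|>" in "stop" prevents it from appearing in responses.
payload = {
    "prompt": (
        "<|im_start|>user\n"
        "Write a hello-world in Python.<|im_end|>\n"
        "<|im_start|>assistant\n"
    ),
    "n_predict": 256,
    "stop": ["<|im_end|>", "<|im_start|>"],  # stop strings, not sampled tokens
}
body = json.dumps(payload)
print(body)
# Send with e.g.:
#   curl http://127.0.0.1:8080/completion -d @- <<< "$body"
```

The same effect is available per-request from any client; no server restart is needed, since stop strings are evaluated during sampling for each request.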