#9 This LLM seems to be trolling me?? (3) · opened 4 months ago by skynet24
#8 Reducing Latency in Locally Hosted model (1) · opened 6 months ago by anshulchandel
#7 Not working on M1 Max using llama-cpp-python · opened 11 months ago by shroominic
#6 Missing tokenizer.model file (3) · opened 11 months ago by whatever1983
#3 not working (5) · opened 12 months ago by imhsouna
#2 Free and ready to use deepseek-coder-6.7B-instruct-GGUF model as OpenAI API compatible endpoint · opened 12 months ago by limcheekin
#1 This model cannot be used normally (19) · opened 12 months ago by hyunfzen