#5 Please release an AWQ model: Baichuan-M1-14B-Instruct-AWQ (opened 10 days ago by classdemo)
#4 Support for older GPUs (pre-Ampere) (opened about 2 months ago by qwq38b)
#3 Request support for macOS MPS / Ollama (opened about 2 months ago by robbie-wx)
#2 [Finetuning Code] Align-Anything support for Baichuan-M1 (opened about 2 months ago by XuyaoWang)
#1 Requesting support for GGUF quantization of Baichuan-M1-14B-Instruct through llama.cpp (opened about 2 months ago by Doctor-Chad-PhD)