GGUF Quants for: yulan-team/YuLan-Mini
Model by: RUC-GSAI-YuLan (thank you!)
Quants by: quantflex
Run with llama.cpp
No K-quants are included because the model's tensor column counts are not divisible by 256, which the K-quant super-block size requires.
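As a minimal sketch of running one of these quants with the llama.cpp CLI (the exact quant filename below is an assumption; substitute whichever GGUF file you downloaded):

```bash
# Download a quant from this repo, then run it with llama-cli
# (binary built from llama.cpp; older builds name it `main`).
./llama-cli \
  -m YuLan-Mini-Q8_0.gguf \   # path to the downloaded GGUF file (assumed name)
  -p "Hello, how are you?" \  # prompt
  -n 128                      # max tokens to generate
```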
Base model: yulan-team/YuLan-Mini