# EXL2 Quantizations of Qwen2.5-Coder-3B-Instruct
Using exllamav2 release 0.2.5 for quantization.
Original model: https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct
Quantization: 8.0 bits per weight, with the lm_head layer at 8.0 bits.
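
Below is a minimal inference sketch using the exllamav2 dynamic generator API (as of the 0.2.x releases). The model directory path and the prompt are placeholders; download the quantized weights locally first. For best results with this instruct model you would normally wrap the prompt in the Qwen chat template, which is omitted here for brevity.

```python
# Minimal sketch: load the EXL2 quant with exllamav2 and generate text.
# The model_dir path below is a placeholder, not part of this repository.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

model_dir = "/path/to/Qwen2.5-Coder-3B-Instruct-exl2"  # placeholder local path

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # cache is allocated as layers are loaded
model.load_autosplit(cache)                # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)

prompt = "Write a Python function that checks whether a string is a palindrome."
output = generator.generate(prompt=prompt, max_new_tokens=256, add_bos=True)
print(output)
```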