Quantized from Qwen/Qwen2.5-14B-Instruct-1M down to 4 bits with AWQ, using the GEMM kernel version.

Downloads last month: 471
Format: Safetensors
Model size: 3.33B params
Tensor types: I32, BF16, FP16
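The I32 tensor type and the 3.33B parameter count are artifacts of weight packing: AWQ stores eight 4-bit weights in each 32-bit integer, so Safetensors counts roughly one eighth as many weight elements as the model's ~14B logical parameters, while scales and zero points remain in BF16/FP16. A minimal sketch of that packing, assuming the common lowest-nibble-first layout (the exact layout used by a given kernel may differ):

```python
def pack_int4(vals):
    """Pack eight 4-bit unsigned values (0..15) into each 32-bit word,
    lowest nibble first (assumed layout for illustration)."""
    assert len(vals) % 8 == 0
    words = []
    for i in range(0, len(vals), 8):
        word = 0
        for j, v in enumerate(vals[i:i + 8]):
            word |= (v & 0xF) << (4 * j)
        # Reinterpret as a signed int32, matching the I32 tensor dtype.
        words.append(word - (1 << 32) if word >= (1 << 31) else word)
    return words

def unpack_int4(words):
    """Inverse: recover the 4-bit values from signed int32 words."""
    vals = []
    for w in words:
        u = w & 0xFFFFFFFF
        vals.extend((u >> (4 * j)) & 0xF for j in range(8))
    return vals

weights = [3, 7, 15, 0, 1, 2, 4, 8]
packed = pack_int4(weights)
assert unpack_int4(packed) == weights
print(f"{len(weights)} weights -> {len(packed)} int32 word(s)")
```

Eight logical weights collapse into one stored I32 element, which is why the displayed element count is far below 14B.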
Inference Providers: this model is not currently available via any of the supported Inference Providers, and it cannot be deployed to the HF Inference API because it has no library tag.

Model tree for graelo/Qwen2.5-14B-Instruct-1M-AWQ

Base model: Qwen/Qwen2.5-14B
Quantized variants: 50 (including this model)