GPTQ-Int4 quantized version of GaLLM-14B-v0.1. Usage is the same as the original model.

Deployment with vLLM is recommended; the server can then be accessed through an OpenAI-compatible API:

```bash
vllm serve CjangCjengh/GaLLM-14B-v0.1-GPTQ-Int4 --port <your_port>
```
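
Once the server is running, any OpenAI-compatible client can query it. Below is a minimal sketch using the official `openai` Python package; it assumes the server is on `localhost` at the port chosen above (8000 here), and the prompt is purely illustrative. The `model` argument must match the ID passed to `vllm serve`.

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server.
# vLLM does not require a real API key by default, so any placeholder works.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="CjangCjengh/GaLLM-14B-v0.1-GPTQ-Int4",
    messages=[{"role": "user", "content": "Hello"}],  # illustrative prompt
)
print(response.choices[0].message.content)
```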