
# Skywork-SWE-32B GPTQ 4bit (vLLM-ready)

This is a 4-bit GPTQ quantized version of Skywork/Skywork-SWE-32B, ready to serve with vLLM.

  • Quantization: GPTQ (4-bit)
  • Group size: 128
  • Format: GPTQModel (custom loader)
  • Dtype: float16
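The settings above correspond to a standard GPTQ quantization configuration. As a rough sketch of how a checkpoint like this is typically produced (the `gptqmodel` import and method names in the comments are assumptions about that library's workflow, not verified against this repo):

```python
# GPTQ settings matching the card above: 4-bit weights quantized in
# groups of 128 columns. "sym" (symmetric quantization) is a common
# default and is an assumption, not stated in the card.
quant_config = {
    "bits": 4,
    "group_size": 128,
    "sym": True,  # assumption
}

# Hypothetical usage with the GPTQModel library (requires gptqmodel and
# a calibration dataset; left commented because the exact API may differ):
# from gptqmodel import GPTQModel, QuantizeConfig
# model = GPTQModel.load("Skywork/Skywork-SWE-32B", QuantizeConfig(**quant_config))
# model.quantize(calibration_dataset)
# model.save("./skywork-swe-32b-gptqmodel-4bit")

print(quant_config)
```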

## Usage with vLLM

```shell
vllm serve ./skywork-swe-32b-gptqmodel-4bit \
  --quantization gptq \
  --dtype half \
  --max-model-len 5900
```
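Once the server is running, it exposes an OpenAI-compatible HTTP API. A minimal client sketch, assuming vLLM's default address `http://localhost:8000` (the actual network call is left commented out so the snippet does not require a live server):

```python
import json
import urllib.request

# Payload for vLLM's OpenAI-compatible /v1/completions endpoint.
# "model" must match the path passed to `vllm serve`.
payload = {
    "model": "./skywork-swe-32b-gptqmodel-4bit",
    "prompt": "Write a Python function that reverses a string.",
    "max_tokens": 256,
    "temperature": 0.2,
}

req = urllib.request.Request(
    "http://localhost:8000/v1/completions",  # default vLLM address (assumption)
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once the server is up:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["text"])
```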

## Credits
