# DeepSeek R1 AWQ
AWQ of DeepSeek R1.
Quantized by Eric Hartford and v2ray.
This quant includes modified model code to fix an overflow issue that occurs when running in float16.
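For reference, the spirit of that change is sketched below. This is only an illustrative sketch, not the actual modified modeling code shipped with this repo; it assumes the overflow is avoided by clamping intermediate activations to the float16 representable range:

```python
import torch

# float16 can only represent values up to ~65504; large intermediate
# activations overflow to inf, which then turns into NaN downstream.
FP16_MAX = torch.finfo(torch.float16).max

def clamp_fp16(hidden_states: torch.Tensor) -> torch.Tensor:
    """Clamp float16 activations to avoid inf/NaN overflow (illustrative only)."""
    if hidden_states.dtype == torch.float16:
        # Leave some headroom so subsequent additions don't immediately overflow.
        clamp_value = FP16_MAX - 1000.0
        hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)
    return hidden_states
```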
To serve using vLLM with 8x 80GB GPUs, use the following command:
```sh
VLLM_USE_V1=0 VLLM_WORKER_MULTIPROC_METHOD=spawn VLLM_MARLIN_USE_ATOMIC_ADD=1 python -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --port 12345 --max-model-len 65536 --max-seq-len-to-capture 65536 --enable-chunked-prefill --enable-prefix-caching --trust-remote-code --tensor-parallel-size 8 --gpu-memory-utilization 0.95 --served-model-name deepseek-reasoner --model cognitivecomputations/DeepSeek-R1-AWQ
```
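Once the server is running, it exposes an OpenAI-compatible API on port 12345 under the served model name `deepseek-reasoner`. A minimal client sketch (assumes the `openai` Python package is installed; the prompt is just an example):

```python
from openai import OpenAI

# Point the client at the local vLLM OpenAI-compatible server started above.
client = OpenAI(base_url="http://localhost:12345/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="deepseek-reasoner",  # matches --served-model-name
    messages=[{"role": "user", "content": "Explain AWQ quantization in one paragraph."}],
    max_tokens=512,
)
print(response.choices[0].message.content)
```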
You can download the wheel I built for PyTorch 2.6 and Python 3.12 by clicking here. The benchmark below was done with this wheel; it contains 2 PR merges and an unoptimized FlashMLA for A100 (still faster than Triton), which boosted performance a lot. The vLLM repo containing the A100 FlashMLA can be found at LagPixelLOL/vllm@sm80_flashmla, a fork of vllm-project/vllm. The A100 FlashMLA it uses is based on LagPixelLOL/FlashMLA@vllm, a fork of pzhao-eng/FlashMLA.
## TPS Per Request
| GPU \ Batch, Input, Output | B: 1, I: 2K, O: 2K | B: 32, I: 4K, O: 256 | B: 1, I: 63K, O: 2K | Prefill |
| --- | --- | --- | --- | --- |
| 8x H100/H200 | 61.5 | 30.1 | 54.3 | 4732.2 |
| 4x H200 | 58.4 | 19.8 | 53.7 | 2653.1 |
| 8x A100 80GB | 46.8 | 12.8 | 30.4 | 2442.4 |
| 8x L40S | 46.3 | OOM | OOM | 688.5 |
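The figures above are decode tokens per second per request; a rough aggregate decode throughput can be estimated by multiplying by the batch size. An illustrative calculation using the B: 32 column from the table:

```python
# Rough aggregate decode throughput = per-request TPS * batch size,
# using the B: 32, I: 4K, O: 256 column from the table above.
per_request_tps = {
    "8x H100/H200": 30.1,
    "4x H200": 19.8,
    "8x A100 80GB": 12.8,
}
for gpus, tps in per_request_tps.items():
    print(f"{gpus}: ~{tps * 32:.0f} aggregate decode tokens/s at batch 32")
```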
Note:
- The A100 config uses an unoptimized FlashMLA implementation, which is only superior to the Triton implementation at high context lengths; it would be faster if it were optimized.
- The L40S config doesn't support FlashMLA, so the Triton implementation is used, which makes it extremely slow at high context lengths. The L40S also doesn't have much VRAM, so it can't hold much context anyway, and it lacks fast GPU-to-GPU interconnect bandwidth, making it even slower. Serving with this config is not recommended: you must limit the context to <= 4096, set `--gpu-memory-utilization` to 0.98, and `--max-num-seqs` to 4.
- All GPUs used during the benchmark are the SXM form factor except the L40S.
- Inference speed will be better than FP8 at low batch sizes but worse than FP8 at high batch sizes; this is the nature of low-bit quantization.
- vLLM now supports MLA for AWQ, so you can run this model with the full context length on just 8x 80GB GPUs (see the offline inference sketch below).
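As an alternative to the API server command above, here is a minimal offline-inference sketch using vLLM's Python API. The sampling settings and prompt are illustrative assumptions, not a tested configuration:

```python
from vllm import LLM, SamplingParams

# Load the AWQ checkpoint across 8 GPUs; vLLM picks up the AWQ quantization
# config from the checkpoint automatically.
llm = LLM(
    model="cognitivecomputations/DeepSeek-R1-AWQ",
    tensor_parallel_size=8,
    trust_remote_code=True,
    max_model_len=65536,
    gpu_memory_utilization=0.95,
)

sampling_params = SamplingParams(temperature=0.6, max_tokens=1024)
outputs = llm.generate(["Briefly explain what MLA is in DeepSeek models."], sampling_params)
print(outputs[0].outputs[0].text)
```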