---
license: mit
language:
- en
- zh
base_model:
- deepseek-ai/DeepSeek-R1
pipeline_tag: text-generation
library_name: transformers
---
# DeepSeek R1 AWQ

AWQ of DeepSeek R1.

Quantized by [Eric Hartford](https://huggingface.co/ehartford) and [v2ray](https://huggingface.co/v2ray).

This quant modifies some of the model code to fix an overflow issue when using float16.

To serve using vLLM with 8x 80GB GPUs, use the following command:
```sh
VLLM_USE_V1=0 VLLM_WORKER_MULTIPROC_METHOD=spawn VLLM_MARLIN_USE_ATOMIC_ADD=1 \
python -m vllm.entrypoints.openai.api_server \
    --host 0.0.0.0 \
    --port 12345 \
    --max-model-len 65536 \
    --max-seq-len-to-capture 65536 \
    --enable-chunked-prefill \
    --enable-prefix-caching \
    --trust-remote-code \
    --tensor-parallel-size 8 \
    --gpu-memory-utilization 0.95 \
    --served-model-name deepseek-reasoner \
    --model cognitivecomputations/DeepSeek-R1-AWQ
```
You can download the wheel I built for PyTorch 2.6 and Python 3.12 by clicking [here](https://huggingface.co/x2ray/wheels/resolve/main/vllm-0.8.3.dev250%2Bg10afedcfd.cu128-cp312-cp312-linux_x86_64.whl). The benchmark below was done with this wheel; it contains [2 PR merges](https://github.com/vllm-project/vllm/issues?q=is%3Apr+is%3Aopen+author%3Ajinzhen-lin) and an unoptimized FlashMLA implementation for A100 (still faster than Triton), which boosted performance significantly. The vLLM repo containing the A100 FlashMLA can be found at [LagPixelLOL/vllm@sm80_flashmla](https://github.com/LagPixelLOL/vllm/tree/sm80_flashmla), a fork of [vllm-project/vllm](https://github.com/vllm-project/vllm). The A100 FlashMLA it uses is based on [LagPixelLOL/FlashMLA@vllm](https://github.com/LagPixelLOL/FlashMLA/tree/vllm), a fork of [pzhao-eng/FlashMLA](https://github.com/pzhao-eng/FlashMLA).

## TPS Per Request

| GPU \ Batch Input Output | B: 1 I: 2 O: 2K | B: 32 I: 4K O: 256 | B: 1 I: 63K O: 2K | Prefill |
|:-:|:-:|:-:|:-:|:-:|
| **8x H100/H200** | 61.5 | 30.1 | 54.3 | 4732.2 |
| **4x H200** | 58.4 | 19.8 | 53.7 | 2653.1 |
| **8x A100 80GB** | 46.8 | 12.8 | 30.4 | 2442.4 |
| **8x L40S** | 46.3 | OOM | OOM | 688.5 |

Note:
- The A100 config uses an unoptimized FlashMLA implementation, which is only superior to the Triton kernel during high-context inference; it would be faster if it were optimized.
- The L40S config doesn't support FlashMLA, so the Triton implementation is used, which makes it extremely slow with high context. The L40S also doesn't have much VRAM, so it can't hold much context anyway, and it lacks fast GPU-to-GPU interconnect bandwidth, making it even slower. Serving with this config is not recommended: you must limit the context to <= 4096, set `--gpu-memory-utilization` to 0.98, and `--max-num-seqs` to 4.
- All GPUs used during the benchmark are the SXM form factor except the L40S.
- Inference speed will be better than FP8 at low batch sizes but worse than FP8 at high batch sizes; this is the nature of low-bit quantization.
- vLLM now supports MLA for AWQ, so you can run this model with the full context length on just 8x 80GB GPUs.
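
The serve command above starts vLLM's OpenAI-compatible API server on port 12345 with the served model name `deepseek-reasoner`. As a quick sanity check, a minimal request sketch (assuming the server is reachable on `localhost` and you kept the port and served model name from the command above) might look like:
```sh
# Minimal sketch of a chat completion request against the server started above.
# Assumes localhost:12345 and --served-model-name deepseek-reasoner;
# adjust the host, port, and model name to match your deployment.
curl http://localhost:12345/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-reasoner",
    "messages": [{"role": "user", "content": "Explain AWQ quantization in one sentence."}],
    "max_tokens": 512
  }'
```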