v2ray committed
Commit 9880e56 · Parent: 6c66c30

Updated README.md to include better benchmark.

Files changed (2):
  1. README.md +13 -6
  2. config.json +1 -1
README.md CHANGED
@@ -11,18 +11,25 @@ library_name: transformers
  # DeepSeek V3 AWQ
  AWQ of DeepSeek V3.

+ Quantized by [Eric Hartford](https://huggingface.co/ehartford) and [v2ray](https://huggingface.co/v2ray).
+
  This quant modified some of the model code to fix an overflow issue when using float16.

  To serve using vLLM with 8x 80GB GPUs, use the following command:
  ```sh
- VLLM_WORKER_MULTIPROC_METHOD=spawn python -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --port 12345 --max-model-len 65536 --max-num-batched-tokens 65536 --trust-remote-code --tensor-parallel-size 8 --gpu-memory-utilization 0.97 --dtype float16 --served-model-name deepseek-chat --model cognitivecomputations/DeepSeek-V3-AWQ
+ VLLM_USE_V1=0 VLLM_WORKER_MULTIPROC_METHOD=spawn VLLM_MARLIN_USE_ATOMIC_ADD=1 python -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --port 12345 --max-model-len 65536 --max-seq-len-to-capture 65536 --enable-chunked-prefill --enable-prefix-caching --trust-remote-code --tensor-parallel-size 8 --gpu-memory-utilization 0.95 --served-model-name deepseek-chat --model cognitivecomputations/DeepSeek-V3-AWQ
  ```
- You can download the wheel I built for PyTorch 2.6, Python 3.12 by clicking [here](https://huggingface.co/x2ray/wheels/resolve/main/vllm-0.7.3.dev187%2Bg0ff1a4df.d20220101.cu126-cp312-cp312-linux_x86_64.whl).
+ You can download the wheel I built for PyTorch 2.6 and Python 3.12 [here](https://huggingface.co/x2ray/wheels/resolve/main/vllm-0.8.3.dev166%2Bg29930428e.cu128-cp312-cp312-linux_x86_64.whl); the benchmark below was run with this wheel, which includes two merged PRs that significantly boost performance.

- Inference speed with batch size 1 and short prompt:
- - 8x H100: 48 TPS
- - 8x A100: 38 TPS
+ ## TPS Per Request
+ | GPU \ Batch, Input, Output | B: 1, I: 2, O: 2K | B: 32, I: 4K, O: 256 | B: 1, I: 63K, O: 2K | Prefill |
+ |:-:|:-:|:-:|:-:|:-:|
+ | **8x H100/H200** | 61.5 | 30.1 | 54.3 | 4732.2 |
+ | **4x H200** | 58.4 | 19.8 | 53.7 | 2653.1 |
+ | **8x A100 80GB** | 45.5 | 11.9 | 7.3 | 2435.5 |

  Note:
+ - The A100 configuration is extremely slow at high context because FlashMLA does not support anything below Hopper GPUs (H200, H100, H800, H20); until it is supported, vLLM falls back to the Triton implementation, which is extremely slow at high context. It is therefore best to serve this model with either 8x H100 or 4x H200.
+ - All three GPU types are the SXM form factor.
  - Inference speed will be better than FP8 at low batch size but worse than FP8 at high batch size; this is the nature of low-bit quantization.
- - vLLM supports MLA for AWQ now, you can run this model with full context length on just 8x 80GB GPUs.
+ - vLLM now supports MLA for AWQ, so you can run this model with full context length on just 8x 80GB GPUs.
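
As a quick sanity check of the serving setup above, the sketch below installs the linked wheel and sends one request to vLLM's OpenAI-compatible endpoint. The port (12345) and served model name (deepseek-chat) are taken from the command in the diff; the prompt and sampling values are arbitrary placeholders, not part of the original README.

```sh
# Sketch only: assumes the serve command from the README diff above has been started.

# 1. Install the prebuilt vLLM wheel linked in the README (PyTorch 2.6 / Python 3.12 / cu128 build).
pip install "https://huggingface.co/x2ray/wheels/resolve/main/vllm-0.8.3.dev166%2Bg29930428e.cu128-cp312-cp312-linux_x86_64.whl"

# 2. Query the OpenAI-compatible API; the port and model name match the serve command,
#    while the message and max_tokens are placeholder values.
curl http://localhost:12345/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "deepseek-chat",
        "messages": [{"role": "user", "content": "Hello!"}],
        "max_tokens": 64
      }'
```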
config.json CHANGED
@@ -1,5 +1,5 @@
  {
- "_name_or_path": "/root/data/DeepSeek-V3-AWQ",
+ "_name_or_path": "cognitivecomputations/DeepSeek-V3-AWQ",
  "architectures": [
    "DeepseekV3ForCausalLM"
  ],
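
For context on the config.json change above, `_name_or_path` now points at the Hub repo id rather than a local path, so the same id can be used to fetch the files. A minimal sketch, assuming `huggingface-cli` is installed; the target directory name is just an example:

```sh
# Download the quantized repo by the id that config.json now references.
# "./DeepSeek-V3-AWQ" is an arbitrary example directory, not taken from the original page.
huggingface-cli download cognitivecomputations/DeepSeek-V3-AWQ --local-dir ./DeepSeek-V3-AWQ
```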