Create README_CN.md

<p align="center">
 <img src="https://dscache.tencent-cloud.cn/upload/uploader/hunyuan-64b418fd052c033b228e04bc77bbc4b54fd7f5bc.png" width="400"/> <br>
</p><p></p>

<p align="center">
 🫣 <a href="https://huggingface.co/tencent/Hunyuan-A13B-Instruct"><b>Hugging Face</b></a> |
 🖥️ <a href="https://llm.hunyuan.tencent.com/" style="color: red;"><b>Official Website</b></a> |
 🕖 <a href="https://cloud.tencent.com/product/hunyuan"><b>HunyuanAPI</b></a> |
 🕹️ <a href="https://hunyuan.tencent.com/?model=hunyuan-a13b"><b>Demo</b></a> |
 <img src="https://avatars.githubusercontent.com/u/109945100?s=200&v=4" width="16"/> <a href="https://modelscope.cn/models/Tencent-Hunyuan/Hunyuan-A13B-Instruct"><b>ModelScope</b></a>
</p>

<p align="center">
 <a href="https://github.com/Tencent/Hunyuan-A13B"><b>GITHUB</b></a>
</p>


## Model Introduction

With the rapid advancement of artificial intelligence, large language models (LLMs) have made remarkable progress in natural language processing, computer vision, and scientific tasks. However, as models scale up, keeping performance high while controlling resource consumption becomes a key challenge. To address this, we explored Mixture-of-Experts (MoE) architectures; the resulting Hunyuan-A13B model has 80 billion total parameters and 13 billion activated parameters. It delivers strong results while remaining compact, striking an effective balance between model performance and resource usage.


### Key Features and Advantages
- **Small activated size, strong performance**: With only 13 billion activated parameters (80 billion in total), it delivers performance competitive with much larger models across a diverse set of benchmarks.
- **Hybrid reasoning support**: Supports both fast-thinking and slow-thinking modes, which users can switch between flexibly (see the sketch after this list).
- **Ultra-long context understanding**: Natively supports a 256K context window and maintains stable performance on long-text tasks.
- **Enhanced agent capabilities**: Optimized for agentic tasks, with leading results on agent benchmarks such as BFCL-v3 and τ-Bench.
- **Efficient inference**: Uses Grouped Query Attention (GQA) and supports multiple quantization formats for efficient inference.
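
The exact switch between the two reasoning modes is documented on the model card; as a hedged illustration only, the sketch below assumes that prefixing the user message with a `/no_think` marker selects the fast-thinking mode (this prefix is an assumption to verify against the released tokenizer and chat template), and it only shows how the two prompts would be rendered:

```python
# Hedged sketch: rendering prompts for slow-thinking (default) vs. fast-thinking mode.
# Assumption: a leading "/no_think" marker selects fast thinking; verify the exact
# switch against the Hunyuan-A13B-Instruct model card before relying on it.
from transformers import AutoTokenizer

model_path = "tencent/Hunyuan-A13B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

question = "Write a short summary of the benefits of regular exercise"

# Slow thinking (default): render the chat template as-is.
slow_prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": question}],
    tokenize=False, add_generation_prompt=True)

# Fast thinking: assumed "/no_think" prefix (hypothetical; check the model card).
fast_prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "/no_think " + question}],
    tokenize=False, add_generation_prompt=True)

print(slow_prompt)
print(fast_prompt)
```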


### Why Choose Hunyuan-A13B?
As a large model that combines strong performance with computational efficiency, Hunyuan-A13B is an ideal choice for researchers and developers who need high performance under resource constraints. Whether for academic research, cost-effective AI solution development, or the exploration of innovative applications, the model provides a solid foundation.




## News
<br>

* 2025.6.26 We open-sourced **Hunyuan-A13B-Instruct**, **Hunyuan-A13B-Pretrain**, **Hunyuan-A13B-Instruct-FP8**, and **Hunyuan-A13B-Instruct-GPTQ-Int4** on Hugging Face, together with a technical report and training/inference manuals describing the model's capabilities and how to train and run it.

## Model Architecture

Hunyuan-A13B adopts a fine-grained Mixture-of-Experts (fine-grained MoE) architecture with 80 billion total parameters and 13 billion activated parameters, trained on more than 20T tokens. The model supports a 256K context length. Architecture details (a rough parameter-count check follows the list):
* Total parameters: 80B
* Activated parameters: 13B
* Layers: 32
* Attention heads: 32
* Shared experts: 1
* Non-shared experts: 64
* Routing strategy: Top-8
* Activation function: SwiGLU
* Hidden dimension: 4096
* Expert hidden dimension: 3072
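
As a rough sanity check on the 13B activated-parameter figure, the sketch below recomputes it from the configuration listed above. The vocabulary size and the number of KV heads are not given in this README, so the values used for them are illustrative placeholders only:

```python
# Back-of-the-envelope estimate of activated parameters per token, from the
# architecture listed above. Values marked "assumed" are placeholders, not
# published configuration.
hidden = 4096           # hidden dimension
layers = 32             # layers
n_heads = 32            # attention heads
head_dim = hidden // n_heads
n_kv_heads = 8          # assumed GQA key/value head count
expert_ffn = 3072       # expert hidden dimension
active_experts = 8 + 1  # Top-8 routed experts plus 1 shared expert
vocab = 128_000         # assumed vocabulary size

# SwiGLU experts use three projections (gate, up, down) each.
moe_per_layer = active_experts * 3 * hidden * expert_ffn
# GQA attention: Q and O projections are full-width, K and V use the KV-head width.
attn_per_layer = 2 * hidden * hidden + 2 * hidden * n_kv_heads * head_dim
# Input embedding and LM head (assumed untied).
embeddings = 2 * vocab * hidden

total_activated = layers * (moe_per_layer + attn_per_layer) + embeddings
print(f"~{total_activated / 1e9:.1f}B activated parameters")  # roughly 13B, matching the figure above
```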

## Benchmark Evaluation

**Hunyuan-A13B-Pretrain** outperforms Hunyuan-Large, the previous-generation Hunyuan MoE model with 52B activated parameters, on 12 of 14 tasks, confirming its strong pre-training capability. Compared with larger dense and MoE models in the industry, Hunyuan-A13B achieves the highest scores on several code and math tasks. On aggregate benchmarks such as MMLU and MMLU-Pro, Hunyuan-A13B performs on par with Qwen3-A22B, demonstrating excellent all-round capability.

| Model            | Hunyuan-Large | Qwen2.5-72B | Qwen3-A22B | Hunyuan-A13B |
|------------------|---------------|-------------|------------|--------------|
| MMLU             | 88.40         | 86.10       | 87.81      | 88.17        |
| MMLU-Pro         | 60.20         | 58.10       | 68.18      | 67.23        |
| MMLU-Redux       | 87.47         | 83.90       | 87.40      | 87.67        |
| BBH              | 86.30         | 85.80       | 88.87      | 87.56        |
| SuperGPQA        | 38.90         | 36.20       | 44.06      | 41.32        |
| EvalPlus         | 75.69         | 65.93       | 77.60      | 78.64        |
| MultiPL-E        | 59.13         | 60.50       | 65.94      | 69.33        |
| MBPP             | 72.60         | 76.00       | 81.40      | 83.86        |
| CRUX-I           | 57.00         | 57.63       | -          | 70.13        |
| CRUX-O           | 60.63         | 66.20       | 79.00      | 77.00        |
| MATH             | 69.80         | 62.12       | 71.84      | 72.35        |
| CMATH            | 91.30         | 84.80       | -          | 91.17        |
| GSM8k            | 92.80         | 91.50       | 94.39      | 91.83        |
| GPQA             | 25.18         | 45.90       | 47.47      | 49.12        |

**Hunyuan-A13B-Instruct** achieves highly competitive results across a wide range of benchmarks, particularly in mathematics, science, and agent tasks. We compared it with several strong models; results are shown below.

| Topic | Bench | OpenAI-o1-1217 | DeepSeek R1 | Qwen3-A22B | Hunyuan-A13B-Instruct |
|:-------------------:|:-----------------------------:|:-------------:|:------------:|:-----------:|:---------------------:|
| **Mathematics** | AIME 2024<br>AIME 2025<br>MATH | 74.3<br>79.2<br>96.4 | 79.8<br>70<br>94.9 | 85.7<br>81.5<br>94.0 | 87.3<br>76.8<br>94.3 |
| **Science** | GPQA-Diamond<br>OlympiadBench | 78<br>83.1 | 71.5<br>82.4 | 71.1<br>85.7 | 71.2<br>82.7 |
| **Coding** | Livecodebench<br>Fullstackbench<br>ArtifactsBench | 63.9<br>64.6<br>38.6 | 65.9<br>71.6<br>44.6 | 70.7<br>65.6<br>44.6 | 63.9<br>67.8<br>43 |
| **Reasoning** | BBH<br>DROP<br>ZebraLogic | 80.4<br>90.2<br>81 | 83.7<br>92.2<br>78.7 | 88.9<br>90.3<br>80.3 | 89.1<br>91.1<br>84.7 |
| **Instruction<br>Following** | IF-Eval<br>SysBench | 91.8<br>82.5 | 88.3<br>77.7 | 83.4<br>74.2 | 84.7<br>76.1 |
| **Text<br>Creation**| LengthCtrl<br>InsCtrl | 60.1<br>74.8 | 55.9<br>69 | 53.3<br>73.7 | 55.4<br>71.9 |
| **NLU** | ComplexNLU<br>Word-Task | 64.7<br>67.1 | 64.5<br>76.3 | 59.8<br>56.4 | 61.2<br>62.9 |
| **Agent** | BFCL v3<br>τ-Bench<br>ComplexFuncBench<br>$C^3$-Bench | 67.8<br>60.4<br>47.6<br>58.8 | 56.9<br>43.8<br>41.1<br>55.3 | 70.8<br>44.6<br>40.6<br>51.7 | 78.3<br>54.7<br>61.2<br>63.5 |

## Inference and Deployment

Hunyuan LLM can be deployed with vLLM, SGLang, or TensorRT-LLM. To simplify deployment, prebuilt Docker images are provided.


## Inference with TensorRT-LLM

### BF16 Deployment

#### Step 1: Run Inference

#### Option 1: Command-line inference

The snippet below uses `TensorRT-LLM` to quickly query the chat model.
Modify the following code in examples/pytorch/quickstart_advanced.py:


```python
# Snippet to modify inside examples/pytorch/quickstart_advanced.py;
# parse_arguments() and setup_llm() are defined elsewhere in that file.
from tensorrt_llm import SamplingParams
from tensorrt_llm._torch import LLM
from tensorrt_llm._torch.pyexecutor.config import PyTorchConfig
from tensorrt_llm.llmapi import (EagleDecodingConfig, KvCacheConfig,
                                 MTPDecodingConfig)

prompt = "Write a short summary of the benefits of regular exercise"

def main():
    args = parse_arguments()

    llm, sampling_params = setup_llm(args)
    new_prompts = []
    if args.apply_chat_template:
        # Render the user message with the model's chat template before generation.
        messages = [{"role": "user", "content": f"{prompt}"}]
        new_prompts.append(llm.tokenizer.apply_chat_template(
            messages, tokenize=False, add_generation_prompt=True)
        )

    outputs = llm.generate(new_prompts, sampling_params)

    for i, output in enumerate(outputs):
        prompt = output.prompt
        generated_text = output.outputs[0].text
        print(f"[{i}] Prompt: {prompt!r}, Generated text: {generated_text!r}")
```

Run it with:

```shell
python3 quickstart_advanced.py --model_dir "<path to the Hunyuan model>" --tp_size 4 --apply_chat_template
```

#### Option 2: Serving

Below we show how to deploy the model as a service with `TensorRT-LLM` and send requests.

```shell
model_path="<path to the Hunyuan model>"
trtllm-serve <model_path> [--backend pytorch --tp_size <tp> --ep_size <ep> --host <host> --port <port>]
```

Once the service is up, run the request script:
```python
### OpenAI Chat Client

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="tensorrt_llm",
)

response = client.chat.completions.create(
    model="default",
    messages=[{
        "role": "user",
        "content": "Write a short summary of the benefits of regular exercise"
    }],
    max_tokens=4096,
)
print(response)
```

#### FP8/Int4 quantized model deployment:
FP8 and Int4 quantized models for TensorRT-LLM are still in progress; stay tuned.


## Inference with vLLM
### Docker:

To simplify deployment, a prebuilt Docker image is provided:

[hunyuaninfer/hunyuan-large:hunyuan-moe-A13B-vllm](https://hub.docker.com/r/hunyuaninfer/hunyuan-large/tags). Simply download the model files and start the container with the commands below to begin inference.
```shell
# Pull the image
docker pull hunyuaninfer/hunyuan-large:hunyuan-moe-A13B-vllm
# Start the container
docker run --name hunyuanLLM_infer -itd --privileged --user root --net=host --ipc=host --gpus=8 hunyuaninfer/hunyuan-large:hunyuan-moe-A13B-vllm
```

Note on Docker container privileges: starting the container in privileged mode (--privileged) grants it elevated permissions, which increases the risk of data leakage and cluster compromise. Avoid privileged mode unless it is strictly necessary; where it is required, perform a rigorous security assessment and put appropriate monitoring and hardening measures in place.


### BF16 Deployment

BF16 can be deployed on 2 GPUs with more than 80 GB of memory each; TP4 is recommended for long-context workloads. Proceed as follows:

Set the following environment variable before running the commands:

```shell
export MODEL_PATH=PATH_TO_MODEL
```

#### Step 1: Run Inference

#### Option 1: Command-line inference

The snippet below uses `vLLM` to quickly query the chat model.

Note on remote code execution in vLLM: if the trust-remote-code option below is enabled, vLLM may load and execute code from remote model repositories, which could lead to the execution of malicious code. Unless your use case explicitly requires it, keep this option disabled to reduce potential security risks.


```python
import os
from typing import List
from vllm import LLM, SamplingParams
from vllm.inputs import PromptType
from transformers import AutoTokenizer

model_path = os.environ.get('MODEL_PATH')
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

llm = LLM(model=model_path,
          tokenizer=model_path,
          trust_remote_code=True,
          dtype='bfloat16',
          tensor_parallel_size=4,
          gpu_memory_utilization=0.9)

sampling_params = SamplingParams(
    temperature=0.7, top_p=0.8, max_tokens=4096, top_k=20, repetition_penalty=1.05)

messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant.",
    },
    {"role": "user", "content": "Write a short summary of the benefits of regular exercise"},
]

# Render and tokenize the conversation with the model's chat template.
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")

dummy_inputs: List[PromptType] = [{
    "prompt_token_ids": batch
} for batch in tokenized_chat.numpy().tolist()]

outputs = llm.generate(dummy_inputs, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
256 |
+
|
257 |
+
#### 方式2:服务化推理
|
258 |
+
|
259 |
+
下面我们展示使用`vLLM`服务化的方式部署模型并请求
|
260 |
+
|
261 |
+
在主节点上运行:
|
262 |
+
|
263 |
+
```shell
|
264 |
+
export VLLM_HOST_IP=${LOCAL_IP}
|
265 |
+
```
|
266 |
+
接着我们启动服务,运行 :
|
267 |
+
```shell
|
268 |
+
cd inference
|
269 |
+
sh run_server.sh
|
270 |
+
```
|
271 |
+
|
272 |
+
运行`run_server.sh`成功后, 运行请求脚本:
|
273 |
+
```shell
|
274 |
+
sh openapi.sh
|
275 |
+
```
|
276 |
+
|
277 |
+
注意修改`openapi.sh`中的`${LOCAL_IP}`和`${MODEL_PATH}`为服务对应值。
|
278 |
+
|
279 |
+
|
280 |
+
### 量化模型部署:
|
281 |
+
|
282 |
+
本部分介绍采用vLLM部署量化后模型的流程。
|
283 |
+
|
284 |
+
镜像:部署镜像同BF16。
|
285 |
+
|
286 |
+
|
287 |
+
#### Int8量化模型部署:
|
288 |
+
部署Int8-weight-only版本HunYuan-A13B模型只需设置`run_server_int8.sh`中的环境变量:
|
289 |
+
```SHELL
|
290 |
+
export MODEL_PATH=PATH_TO_BF16_MODEL
|
291 |
+
```
|
292 |
+
|
293 |
+
接着我们启动Int8服务。运行:
|
294 |
+
```shell
|
295 |
+
sh run_server_int8.sh
|
296 |
+
```
|
297 |
+
|
298 |
+
运行`run_server_int8.sh`成功后, 运行请求脚本:
|
299 |
+
```shell
|
300 |
+
sh openapi.sh
|
301 |
+
```
|
302 |
+
|
303 |
+
#### Int4量化模型部署:
|
304 |
+
部署Int4-weight-only版本HunYuan-A13B模型只需设置`run_server_int4.sh`中的环境变量,采用GPTQ方式:
|
305 |
+
```SHELL
|
306 |
+
export MODEL_PATH=PATH_TO_INT4_MODEL
|
307 |
+
```
|
308 |
+
|
309 |
+
接着我们启动Int4服务。运行:
|
310 |
+
```shell
|
311 |
+
sh run_server_int4.sh
|
312 |
+
```
|
313 |
+
|
314 |
+
运行`run_server_int4.sh`成功后, 运行请求脚本:
|
315 |
+
```shell
|
316 |
+
sh openapi.sh
|
317 |
+
```
|
318 |
+
|
319 |
+
#### FP8量化模型部署:
|
320 |
+
部署W8A8C8版本HunYuan-A13B模型只需设置`run_server_int8.sh`中的环境变量:
|
321 |
+
```shell
|
322 |
+
export MODEL_PATH=PATH_TO_FP8_MODEL
|
323 |
+
```
|
324 |
+
|
325 |
+
接着我们启动FP8服务。运行:
|
326 |
+
```shell
|
327 |
+
sh run_server_fp8.sh
|
328 |
+
```
|
329 |
+
|
330 |
+
运行`run_server_fp8.sh`成功后, 运行请求脚本:
|
331 |
+
```shell
|
332 |
+
sh openapi.sh
|
333 |
+
```
|
334 |
+
|
335 |
+
### 性能评估:
|
336 |
+
|
337 |
+
本部分介绍采用vLLM部署各个模型(原始模型和量化模型)的效率测试结果,包括不同Batchsize下的推理速度(tokens/s), 测试环境(腾讯云,H80(96G)GPU x 卡数):
|
338 |
+
|
339 |
+
测试命令:
|
340 |
+
```python
|
341 |
+
python3 benchmark_throughput.py --backend vllm \
|
342 |
+
--input-len 2048 \
|
343 |
+
--output-len 14336 \
|
344 |
+
--model $MODEL_PATH \
|
345 |
+
--tensor-parallel-size $TP \
|
346 |
+
--use-v2-block-manager \
|
347 |
+
--async-engine \
|
348 |
+
--trust-remote-code \
|
349 |
+
--num_prompts $BATCH_SIZE \
|
350 |
+
--max-num-seqs $BATCH_SIZE
|
351 |
+
```
|
352 |
+
|
353 |
+
| 推理框架 | 模型 | 部署卡数 | input_length | batch=1 | batch=16 | batch=32 |
|
354 |
+
|------|-----------------------------|-----------|-------------------------|---------------------|----------------------|----------------------|
|
355 |
+
| vLLM | Hunyuan-A13B-Instruct | 8 | 2048 | 190.84 | 1246.54 | 1981.99 |
|
356 |
+
| vLLM | Hunyuan-A13B-Instruct | 4 | 2048 | 158.90 | 779.10 | 1301.75 |
|
357 |
+
| vLLM | Hunyuan-A13B-Instruct | 2 | 2048 | 111.72 | 327.31 | 346.54 |
|
358 |
+
| vLLM | Hunyuan-A13B-Instruct(int8 weight only) | 2 | 2048 | 109.10 | 444.17 | 721.93 |
|
359 |
+
| vLLM | Hunyuan-A13B-Instruct(W8A8C8-FP8) | 2 | 2048 | 91.83 | 372.01 | 617.70 |
|
360 |
+
| vLLM | Hunyuan-A13B-Instruct(W8A8C8-FP8) | 1 | 2048 | 60.07 | 148.80 | 160.41 |
|
361 |
+
|
362 |
+
|
363 |
+
## 使用sglang推理
|
364 |
+
|
365 |
+
### BF16部署
|
366 |
+
|
367 |
+
#### Step1:执行推理
|
368 |
+
|
369 |
+
#### 方式1:命令行推理
|
370 |
+
|
371 |
+
下面我们展示一个代码片段,采用`sglang`快速请求chat model:
|
372 |
+
|
373 |
+
|
374 |
+
```python
|
375 |
+
import sglang as sgl
|
376 |
+
from transformers import AutoTokenizer
|
377 |
+
|
378 |
+
model_path=os.environ.get('MODEL_PATH')
|
379 |
+
|
380 |
+
|
381 |
+
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
|
382 |
+
|
383 |
+
messages = [
|
384 |
+
{
|
385 |
+
"role": "system",
|
386 |
+
"content": "You are a helpful assistant.",
|
387 |
+
},
|
388 |
+
{"role": "user", "content": "Write a short summary of the benefits of regular exercise"},
|
389 |
+
]
|
390 |
+
prompts = []
|
391 |
+
prompts.append(tokenizer.apply_chat_template(
|
392 |
+
messages,
|
393 |
+
tokenize=False,
|
394 |
+
add_generation_prompt=True
|
395 |
+
))
|
396 |
+
print(prompts)
|
397 |
+
|
398 |
+
llm = sgl.Engine(
|
399 |
+
model_path=model_path,
|
400 |
+
tp_size=4,
|
401 |
+
trust_remote_code=True,
|
402 |
+
mem_fraction_static=0.7,
|
403 |
+
)
|
404 |
+
|
405 |
+
sampling_params = {"temperature": 0.7, "top_p": 0.8, "top_k": 20, "max_new_tokens": 4096}
|
406 |
+
outputs = llm.generate(prompts, sampling_params)
|
407 |
+
for prompt, output in zip(prompts, outputs):
|
408 |
+
print(f"Prompt: {prompt}\nGenerated text: {output['text']}")
|
409 |
+
```
|
410 |
+
|
411 |
+
#### 方式2:服务化推理
|
412 |
+
|
413 |
+
下面我们展示使用`sglang`服务化的方式部署模型和请求。
|
414 |
+
|
415 |
+
```shell
|
416 |
+
model_path="HunyuanLLM模型路径"
|
417 |
+
python3 -u -m sglang.launch_server \
|
418 |
+
--model-path $model_path \
|
419 |
+
--tp 4 \
|
420 |
+
--trust-remote-code \
|
421 |
+
```
|
422 |
+
|
423 |
+
服务启动成功后, 运行请求脚本:
|
424 |
+
```python
|
425 |
+
import openai
|
426 |
+
client = openai.Client(
|
427 |
+
base_url="http://localhost:30000/v1", api_key="EMPTY")
|
428 |
+
|
429 |
+
response = client.chat.completions.create(
|
430 |
+
model="default",
|
431 |
+
messages= [
|
432 |
+
{"role": "user", "content": "Write a short summary of the benefits of regular exercise"},
|
433 |
+
],
|
434 |
+
temperature=0.7,
|
435 |
+
max_tokens=4096,
|
436 |
+
extra_body={"top_p": 0.8, "top_k": 20}
|
437 |
+
)
|
438 |
+
print(response)
|
439 |
+
```
|
440 |
+
|
441 |
+
#### FP8/Int4量化模型部署:
|
442 |
+
目前 sglang 的 fp8 和 int4 量化模型正在支持中,敬请期待。
|
443 |
+
|
444 |
+
## 交互式Demo Web
|
445 |
+
hunyuan-A13B 现已开放网页demo。访问 https://hunyuan.tencent.com/?model=hunyuan-a13b 即可简单体验我们的模型。
|
446 |
+
|
447 |
+
<br>
|
448 |
+
|
449 |
+
## 引用
|
450 |
+
如果你觉得我们的工作对你有帮助,欢迎引用我们的<a href="report/Hunyuan_A13B_Technical_Report.pdf">技术报告</a>!
|
451 |
+
|
452 |
+
<br>
|
453 |
+
|
454 |
+
|
455 |
+
## 联系我们
|
456 |
+
如果你想给我们的研发和产品团队留言,欢迎联系我们腾讯混元LLM团队。你可以通过邮件([email protected])联系我们。
|