<p align="center">
 <img src="https://avatars.githubusercontent.com/u/25720743?s=200&v=4" width="16"/> <a href="https://huggingface.co/tencent/Hunyuan-A13B-Instruct"><b>Hugging Face</b></a> |
 🖥️ <a href="https://hunyuan.tencent.com" style="color: red;"><b>Official Website</b></a> |
 🕖 <a href="https://cloud.tencent.com/product/hunyuan"><b>HunyuanAPI</b></a> |
 🕹️ <a href="https://hunyuan.tencent.com/?model=hunyuan-a13b"><b>Demo</b></a> |
 <img src="https://avatars.githubusercontent.com/u/109945100?s=200&v=4" width="16"/> <a href="https://modelscope.cn/models/Tencent-Hunyuan/Hunyuan-A13B-Instruct"><b>ModelScope</b></a>
</p>

<p align="center">
 <a href="https://github.com/Tencent-Hunyuan/Hunyuan-A13B/blob/main/report/Hunyuan_A13B_Technical_Report.pdf"><b>Technical Report</b></a> |
 <a href="https://github.com/Tencent-Hunyuan/Hunyuan-A13B"><b>GITHUB</b></a> |
 <a href="https://cnb.cool/tencent/hunyuan/Hunyuan-A13B"><b>cnb.cool</b></a> |
 <a href="https://github.com/Tencent-Hunyuan/Hunyuan-A13B/blob/main/LICENSE"><b>LICENSE</b></a>
</p>

### Key Features and Advantages

- **Compact yet Powerful**: With only 13 billion active parameters (out of a total of 80 billion), the model delivers competitive performance on a wide range of benchmark tasks, rivaling much larger models.
- **Hybrid Reasoning Support**: Supports both fast and slow thinking modes, allowing users to flexibly choose according to their needs.
- **Ultra-Long Context Understanding**: Natively supports a 256K context window, maintaining stable performance on long-text tasks.
- **Enhanced Agent Capabilities**: Optimized for agent tasks, achieving leading results on benchmarks such as BFCL-v3, τ-Bench, and C3-Bench.
- **Efficient Inference**: Utilizes Grouped Query Attention (GQA) and supports multiple quantization formats, enabling highly efficient inference.

### Why Choose Hunyuan-A13B?

As a powerful yet computationally efficient large model, Hunyuan-A13B is an ideal choice for researchers and developers seeking high performance under resource constraints.

## Related News

* 2025.6.27 We have open-sourced **Hunyuan-A13B-Pretrain**, **Hunyuan-A13B-Instruct**, **Hunyuan-A13B-Instruct-FP8**, and **Hunyuan-A13B-Instruct-GPTQ-Int4** on Hugging Face. In addition, we have released a <a href="report/Hunyuan_A13B_Technical_Report.pdf">technical report</a> and a training and inference operation manual, which provide detailed information about the model's capabilities as well as the operations for training and inference.

<br>

## Benchmark

Note: The following benchmarks are evaluated by TRT-LLM-backend on several **base models**.

| Model | Hunyuan-Large | Qwen2.5-72B | Qwen3-A22B | Hunyuan-A13B |
|------------------|---------------|--------------|-------------|---------------|
| GPQA | 25.18 | 45.90 | 47.47 | 49.12 |
Hunyuan-A13B-Instruct has achieved highly competitive performance across multiple benchmarks, particularly in mathematics, science, agent domains, and more. We compared it with several powerful models, and the results are shown below.

| Topic | Bench | OpenAI-o1-1217 | DeepSeek R1 | Qwen3-A22B | Hunyuan-A13B-Instruct |
|:-------------------:|:----------------------------------------------------:|:-------------:|:------------:|:-----------:|:---------------------:|
| **Mathematics** | AIME 2024<br>AIME 2025<br>MATH | 74.3<br>79.2<br>96.4 | 79.8<br>70<br>94.9 | 85.7<br>81.5<br>94.0 | 87.3<br>76.8<br>94.3 |
| **Science** | GPQA-Diamond<br>OlympiadBench | 78<br>83.1 | 71.5<br>82.4 | 71.1<br>85.7 | 71.2<br>82.7 |
| **Coding** | Livecodebench<br>Fullstackbench<br>ArtifactsBench | 63.9<br>64.6<br>38.6 | 65.9<br>71.6<br>44.6 | 70.7<br>65.6<br>44.6 | 63.9<br>67.8<br>43 |
| **Reasoning** | BBH<br>DROP<br>ZebraLogic | 80.4<br>90.2<br>81 | 83.7<br>92.2<br>78.7 | 88.9<br>90.3<br>80.3 | 89.1<br>91.1<br>84.7 |
| **Instruction<br>Following** | IF-Eval<br>SysBench | 91.8<br>82.5 | 88.3<br>77.7 | 83.4<br>74.2 | 84.7<br>76.1 |
| **Text<br>Creation** | LengthCtrl<br>InsCtrl | 60.1<br>74.8 | 55.9<br>69 | 53.3<br>73.7 | 55.4<br>71.9 |
| **NLU** | ComplexNLU<br>Word-Task | 64.7<br>67.1 | 64.5<br>76.3 | 59.8<br>56.4 | 61.2<br>62.9 |
| **Agent** | BFCL v3<br>τ-Bench<br>ComplexFuncBench<br>C3-Bench | 67.8<br>60.4<br>47.6<br>58.8 | 56.9<br>43.8<br>41.1<br>55.3 | 70.8<br>44.6<br>40.6<br>51.7 | 78.3<br>54.7<br>61.2<br>63.5 |

## Use with transformers

Our model defaults to slow-thinking reasoning, and there are two ways to disable CoT reasoning:

1. Pass `enable_thinking=False` when calling `apply_chat_template`.
2. Add "/no_think" before the prompt to force the model not to perform CoT reasoning. Similarly, adding "/think" before the prompt forces the model to perform CoT reasoning.

The following code snippet shows how to use the transformers library to load the model and run inference. It also demonstrates how to enable and disable the reasoning mode, and how to parse the reasoning process along with the final output.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import os
import re

model_name_or_path = os.environ['MODEL_PATH']
# model_name_or_path = "tencent/Hunyuan-A13B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             trust_remote_code=True)  # You may want to use bfloat16 and/or move to GPU here

messages = [
    {"role": "user", "content": "Write a short summary of the benefits of regular exercise"},
]
tokenized_chat = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    enable_thinking=True  # Toggle thinking mode (default: True)
)

outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=4096)
output_text = tokenizer.decode(outputs[0])

# The model wraps its reasoning and final reply in <think>...</think> and
# <answer>...</answer> tags; split them apart for display.
think_content = re.findall(r'<think>(.*?)</think>', output_text, re.DOTALL)[0].strip()
answer_content = re.findall(r'<answer>(.*?)</answer>', output_text, re.DOTALL)[0].strip()

print(f"thinking_content:{think_content}\n\n")
print(f"answer_content:{answer_content}\n\n")
```

### Fast and slow thinking switch

This model supports two modes of operation:

- Slow Thinking Mode (default): Enables detailed internal reasoning steps before producing the final answer.
- Fast Thinking Mode: Skips the internal reasoning process for faster inference, going straight to the final answer.

**Switching to Fast Thinking Mode:**

To disable the reasoning process, set `enable_thinking=False` in the `apply_chat_template` call:

```python
tokenized_chat = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    enable_thinking=False  # Use fast thinking mode
)
```
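
The "/no_think" prompt prefix described earlier achieves the same effect per request, without changing the template arguments. A minimal sketch, reusing the `tokenizer` and `model` already loaded in the example above:

```python
# "/no_think" before the prompt disables CoT reasoning for this request;
# "/think" would force it on instead.
messages = [
    {"role": "user", "content": "/no_think Write a short summary of the benefits of regular exercise"},
]
tokenized_chat = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
)
outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=1024)
print(tokenizer.decode(outputs[0]))
```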

## Quantization and Compression

We used our own `AngleSlim` compression tool to produce the FP8 and INT4 quantized models. `AngleSlim` is expected to be open-sourced in early July and will support one-click quantization and compression of large models; until then, you can download our quantized models directly for deployment testing.

### FP8 Quantization

We use FP8-static quantization. FP8 quantization adopts an 8-bit floating-point format and uses a small amount of calibration data (without training) to pre-determine the quantization scales; the model weights and activations are then converted to FP8 format, improving inference efficiency and lowering the deployment threshold. You can quantize the model yourself with AngleSlim, or directly download our open-source quantized model: [Hunyuan-A13B-Instruct-FP8](https://huggingface.co/tencent/Hunyuan-A13B-Instruct-FP8).

#### FP8 Benchmark

This subsection reports benchmark metrics for the Hunyuan-80B-A13B-Instruct-FP8 quantized model.

| Bench | Hunyuan-A13B-Instruct | Hunyuan-A13B-Instruct-FP8 |
|:---------:|:---------------------:|:-------------------------:|
| AIME 2024 | 87.3 | 86.7 |
| Gsm8k | 94.39 | 94.01 |
| BBH | 89.1 | 88.34 |
| DROP | 91.1 | 91.1 |

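
`AngleSlim` itself is not yet public, so purely as an illustration, here is a toy PyTorch sketch of the FP8-static idea described above. `fp8_static_quantize` and its calibration-derived scale are hypothetical names for this sketch, not AngleSlim's API:

```python
import torch

def fp8_static_quantize(weight: torch.Tensor, calib_scale: torch.Tensor | None = None):
    """Per-tensor FP8 (e4m3) static quantization sketch: the scale is fixed
    ahead of time (e.g., from calibration data), then values are clipped to
    the representable range and cast to torch.float8_e4m3fn."""
    FP8_E4M3_MAX = 448.0  # largest finite e4m3 value
    # "Static" means the scale is pre-determined from calibration data,
    # not recomputed per input at runtime.
    scale = calib_scale if calib_scale is not None else weight.abs().max() / FP8_E4M3_MAX
    q = (weight / scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX).to(torch.float8_e4m3fn)
    return q, scale  # dequantize later as q.float() * scale

w = torch.randn(1024, 1024)
q, scale = fp8_static_quantize(w)
print((w - q.float() * scale).abs().max())  # worst-case quantization error
```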
### Int4 Quantization

We use the GPTQ algorithm to achieve W4A16 quantization. GPTQ processes the model weights layer by layer, using a small amount of calibration data to minimize the reconstruction error of the quantized weights and adjusting each layer through an optimization procedure based on approximating the inverse Hessian matrix. The process eliminates the need to retrain the model and requires only a small amount of calibration data, improving inference efficiency and lowering the deployment threshold. You can quantize the model yourself with `AngleSlim`, or directly download our open-source quantized model: [Hunyuan-A13B-Instruct-Int4](https://huggingface.co/tencent/Hunyuan-A13B-Instruct-GPTQ-Int4).

#### Int4 Benchmark

This subsection reports benchmark metrics for the Hunyuan-80B-A13B-Instruct-GPTQ-Int4 quantized model.

| Bench | Hunyuan-A13B-Instruct | Hunyuan-A13B-Instruct-GPTQ-Int4 |
|:--------------:|:---------------------:|:-------------------------------:|
| OlympiadBench | 82.7 | 84.0 |
| AIME 2024 | 87.3 | 86.7 |
| Gsm8k | 94.39 | 94.24 |
| BBH | 88.34 | 87.91 |
| DROP | 91.12 | 91.05 |

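Both quantized checkpoints are intended to load through the same `transformers` API as the bf16 model. A minimal sketch, assuming the matching quantization kernels are installed in your environment:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Swap in "tencent/Hunyuan-A13B-Instruct-FP8" for the FP8 checkpoint.
quant_id = "tencent/Hunyuan-A13B-Instruct-GPTQ-Int4"
tokenizer = AutoTokenizer.from_pretrained(quant_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(quant_id,
                                             device_map="auto",
                                             trust_remote_code=True)
```
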
## Deployment
For deployment, you can use frameworks such as **TensorRT-LLM**, **vLLM**, or **SGLang** to serve the model and create an OpenAI-compatible API endpoint.

### TensorRT-LLM

#### Docker Image

We provide a pre-built Docker image for TensorRT-LLM; the available tags are listed at:

https://hub.docker.com/r/hunyuaninfer/hunyuan-large/tags

- Pull the image:

```
docker pull hunyuaninfer/hunyuan-a13b:hunyuan-moe-A13B-trtllm
```

- Start the container:

```
docker run --name hunyuanLLM_infer --rm -it --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --gpus=all hunyuaninfer/hunyuan-a13b:hunyuan-moe-A13B-trtllm
```

- Prepare the configuration file:

```
cat >/path/to/extra-llm-api-config.yml <<EOF
use_cuda_graph: true
cuda_graph_padding_enabled: true
cuda_graph_batch_sizes:
- 1
- 2
- 4
- 8
- 16
- 32
print_iter_log: true
EOF
```

- Start the API server:

```
trtllm-serve \
  /path/to/HunYuan-moe-A13B \
  --host localhost \
  --port 8000 \
  --backend pytorch \
  --max_batch_size 32 \
  --max_num_tokens 16384 \
  --tp_size 2 \
  --kv_cache_free_gpu_memory_fraction 0.6 \
  --trust_remote_code \
  --extra_llm_api_options /path/to/extra-llm-api-config.yml
```
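
Once `trtllm-serve` is up, it exposes an OpenAI-compatible API on the host and port configured above. A minimal smoke test with the official `openai` Python client; the served model name below is an assumption, so confirm it against `GET /v1/models` first:

```python
from openai import OpenAI

# The server requires no real key; any placeholder string works.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="HunYuan-moe-A13B",  # assumed served model name; check /v1/models
    messages=[{"role": "user", "content": "Write a short summary of the benefits of regular exercise"}],
    max_tokens=256,
)
print(resp.choices[0].message.content)
```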

### vLLM

#### Docker Image

We provide a pre-built Docker image containing vLLM 0.8.5 with full support for this model. The official vLLM release is currently under development. **Note: CUDA 12.8 is required for this Docker image.**

#### Tool Calling with vLLM

To support agent-based workflows and function calling capabilities, this model includes specialized parsing mechanisms for handling tool calls and internal reasoning steps.

For a complete working example of how to implement and use these features in an agent setting, please refer to our full agent implementation on GitHub:
🔗 [Hunyuan A13B Agent Example](https://github.com/Tencent-Hunyuan/Hunyuan-A13B/blob/main/agent/)

When deploying the model with **vLLM**, the following parameters can be used to configure the tool parsing behavior:

| Parameter | Value |
|--------------------------|-----------------------------------------------------------------------|
| `--tool-parser-plugin` | [Local Hunyuan A13B Tool Parser File](https://github.com/Tencent-Hunyuan/Hunyuan-A13B/blob/main/agent/hunyuan_tool_parser.py) |
| `--tool-call-parser` | `hunyuan` |

These settings enable vLLM to correctly interpret and route tool calls generated by the model according to the expected format; a client-side sketch follows below.
|
289 |
|
290 |
#### Docker Image
|