---
license: other
license_name: tencent-hunyuan-a13b
license_link: https://github.com/Tencent-Hunyuan/Hunyuan-A13B/blob/main/LICENSE
library_name: transformers
---
<p align="center">
<img src="https://dscache.tencent-cloud.cn/upload/uploader/hunyuan-64b418fd052c033b228e04bc77bbc4b54fd7f5bc.png" width="400"/> <br>
</p><p></p>
<p align="center">
🫣 <a href="https://huggingface.co/tencent/Hunyuan-A13B-Instruct"><b>Hugging Face</b></a> |
🖥️ <a href="https://llm.hunyuan.tencent.com/" style="color: red;"><b>Official Website</b></a> |
🕖 <a href="https://cloud.tencent.com/product/hunyuan"><b>HunyuanAPI</b></a> |
🕹️ <a href="https://hunyuan.tencent.com/?model=hunyuan-a13b"><b>Demo</b></a> |
</p>
<p align="center">
<a href="https://github.com/Tencent-Hunyuan/Hunyuan-A13B"><b>GITHUB</b></a> |
<a href="https://github.com/Tencent-Hunyuan/Hunyuan-A13B/blob/main/LICENSE"><b>LICENSE</b></a>
</p>
Welcome to the official repository of **Hunyuan-A13B**, an innovative and open-source large language model (LLM) built on a fine-grained Mixture-of-Experts (MoE) architecture. Designed for efficiency and scalability, Hunyuan-A13B delivers cutting-edge performance with minimal computational overhead, making it an ideal choice for advanced reasoning and general-purpose applications, especially in resource-constrained environments.
## Model Introduction
With the rapid advancement of artificial intelligence technology, large language models (LLMs) have achieved remarkable progress in natural language processing, computer vision, and scientific tasks. However, as model scales continue to expand, optimizing resource consumption while maintaining high performance has become a critical challenge. To address this, we have explored Mixture of Experts (MoE) architectures. The newly introduced Hunyuan-A13B model features a total of 80 billion parameters with 13 billion active parameters. It not only delivers high-performance results but also achieves optimal resource efficiency, successfully balancing computational power and resource utilization.
### Key Features and Advantages
- **Compact yet Powerful**: With only 13 billion active parameters (out of a total of 80 billion), the model delivers competitive performance on a wide range of benchmark tasks, rivaling much larger models.
- **Hybrid Inference Support**: Supports both fast and slow thinking modes, allowing users to flexibly choose according to their needs.
- **Ultra-Long Context Understanding**: Natively supports a 256K context window, maintaining stable performance on long-text tasks.
- **Enhanced Agent Capabilities**: Optimized for agent tasks, achieving leading results on benchmarks such as BFCL-v3 and τ-Bench.
- **Efficient Inference**: Utilizes Grouped Query Attention (GQA) and supports multiple quantization formats, enabling highly efficient inference.
### Why Choose Hunyuan-A13B?
As a powerful yet computationally efficient large model, Hunyuan-A13B is an ideal choice for researchers and developers seeking high performance under resource constraints. Whether for academic research, cost-effective AI solution development, or innovative application exploration, this model provides a robust foundation for advancement.
## Related News
* 2025.6.27 We have open-sourced **Hunyuan-A13B-Pretrain**, **Hunyuan-A13B-Instruct**, **Hunyuan-A13B-Instruct-FP8**, and **Hunyuan-A13B-Instruct-GPTQ-Int4** on Hugging Face.
<br>
## Benchmark
Note: The following benchmarks were evaluated with the TRT-LLM backend.
| Model | Hunyuan-Large | Qwen2.5-72B | Qwen3-A22B | Hunyuan-A13B |
|------------------|---------------|--------------|-------------|---------------|
| MMLU | 88.40 | 86.10 | 87.81 | 88.17 |
| MMLU-Pro | 60.20 | 58.10 | 68.18 | 67.23 |
| MMLU-Redux | 87.47 | 83.90 | 87.40 | 87.67 |
| BBH | 86.30 | 85.80 | 88.87 | 87.56 |
| SuperGPQA | 38.90 | 36.20 | 44.06 | 41.32 |
| EvalPlus | 75.69 | 65.93 | 77.60 | 78.64 |
| MultiPL-E | 59.13 | 60.50 | 65.94 | 69.33 |
| MBPP | 72.60 | 76.00 | 81.40 | 83.86 |
| CRUX-I | 57.00 | 57.63 | - | 70.13 |
| CRUX-O | 60.63 | 66.20 | 79.00 | 77.00 |
| MATH | 69.80 | 62.12 | 71.84 | 72.35 |
| CMATH | 91.30 | 84.80 | - | 91.17 |
| GSM8k | 92.80 | 91.50 | 94.39 | 91.83 |
| GPQA | 25.18 | 45.90 | 47.47 | 49.12 |
Hunyuan-A13B-Instruct has achieved highly competitive performance across multiple benchmarks, particularly in mathematics, science, agent domains, and more. We compared it with several powerful models, and the results are shown below.
| **Topic** | **Bench** | **OpenAI-o1-1217** | **DeepSeek R1** | **Qwen3-A22B** | **Hunyuan-A13B-Instruct** |
| :--------------------------: | :------------------------------------------------: | :------------------------------: | :--------------------------: | :--------------------------: | :--------------------------------------: |
| **Mathematics** | AIME 2024<br>AIME 2025<br>MATH | 74.3<br>79.2<br>**96.4** | 79.8<br>70<br>94.9 | 85.7<br>**81.5**<br>94.0 | **87.3**<br>76.8<br>94.3 |
| **Science** | GPQA-Diamond<br>OlympiadBench | **78**<br>83.1 | 71.5<br>82.4 | 71.1<br>**85.7** | 71.2<br>82.7 |
| **Coding** | Livecodebench<br>Fullstackbench<br>ArtifactsBench | 63.9<br>64.6<br>38.6 | 65.9<br>**71.6**<br>**44.6** | **70.7**<br>65.6<br>**44.6** | 63.9<br>67.8<br>43 |
| **Reasoning** | BBH<br>DROP<br>ZebraLogic | 80.4<br>90.2<br>81 | 83.7<br>**92.2**<br>78.7 | 88.9<br>90.3<br>80.3 | **89.1**<br>91.1<br>**84.7** |
| **Instruction<br>Following** | IF-Eval<br>SysBench | **91.8**<br>**82.5** | 88.3<br>77.7 | 83.4<br>74.2 | 84.7<br>76.1 |
| **Text<br>Creation** | LengthCtrl<br>InsCtrl | **60.1**<br>**74.8** | 55.9<br>69 | 53.3<br>73.7 | 55.4<br>71.9 |
| **NLU** | ComplexNLU<br>Word-Task | **64.7**<br>67.1 | 64.5<br>**76.3** | 59.8<br>56.4 | 61.2<br>62.9 |
| **Agent** | BFCL v3<br>τ-Bench<br>ComplexFuncBench<br>C3-Bench | 67.8<br>**60.4**<br>47.6<br>58.8 | 56.9<br>43.8<br>41.1<br>55.3 | 70.8<br>44.6<br>40.6<br>51.7 | **78.3**<br>54.7<br>**61.2**<br>**63.5** |
## Use with transformers
Below is an example of how to use this model with the Hugging Face transformers library. This includes loading the model and tokenizer, toggling reasoning (thinking) mode, and parsing both the reasoning process and final answer from the output.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import os
import re

model_name_or_path = os.environ['MODEL_PATH']
# model_name_or_path = "tencent/Hunyuan-A13B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    device_map="auto",
    trust_remote_code=True,
)  # You may want to load in bfloat16 and/or move the model to GPU here

messages = [
    {"role": "user", "content": "Write a short summary of the benefits of regular exercise"},
]
tokenized_chat = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    enable_thinking=True  # Toggle thinking mode (default: True)
)

outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=4096)
output_text = tokenizer.decode(outputs[0])

# The model wraps its reasoning in <think>...</think> and the final reply in <answer>...</answer>.
think_pattern = r'<think>(.*?)</think>'
think_matches = re.findall(think_pattern, output_text, re.DOTALL)
answer_pattern = r'<answer>(.*?)</answer>'
answer_matches = re.findall(answer_pattern, output_text, re.DOTALL)

think_content = [match.strip() for match in think_matches][0]
answer_content = [match.strip() for match in answer_matches][0]

print(f"thinking_content:{think_content}\n\n")
print(f"answer_content:{answer_content}\n\n")
```
### Fast and slow thinking switch
This model supports two modes of operation:
- Slow Thinking Mode (Default): Enables detailed internal reasoning steps before producing the final answer.
- Fast Thinking Mode: Skips the internal reasoning process for faster inference, going straight to the final answer.
**Switching to Fast Thinking Mode:**
To disable the reasoning process, set `enable_thinking=False` in the apply_chat_template call:
```python
tokenized_chat = tokenizer.apply_chat_template(
messages,
tokenize=True,
add_generation_prompt=True,
return_tensors="pt",
enable_thinking=False # Use fast thinking mode
)
```
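
Regardless of the mode, the parsing logic from the earlier example can be reused. Below is a minimal helper sketch (the `parse_hunyuan_output` name is ours, not part of the official API) that extracts the reasoning and answer blocks and falls back to the raw text when no `<answer>` tag is present:

```python
import re

def parse_hunyuan_output(output_text: str) -> dict:
    """Split generated text into reasoning and answer parts.

    Assumes the <think>...</think> / <answer>...</answer> tags shown in the
    example above; if no <answer> block is found (e.g. a different template
    version), the raw text is returned as the answer.
    """
    think = re.findall(r'<think>(.*?)</think>', output_text, re.DOTALL)
    answer = re.findall(r'<answer>(.*?)</answer>', output_text, re.DOTALL)
    return {
        "thinking": think[0].strip() if think else "",
        "answer": answer[0].strip() if answer else output_text.strip(),
    }

# Example: result = parse_hunyuan_output(output_text)
```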
## Quantization and Compression
We used our own `AngleSlim` compression tool to produce the FP8 and INT4 quantized models. `AngleSlim` is expected to be open-sourced in early July and will support one-click quantization and compression of large models. In the meantime, you can download our quantized models directly for deployment and testing.
### FP8 Quantization
We use FP8 static quantization: an 8-bit floating-point format whose quantization scales are pre-determined from a small amount of calibration data, without any training. Model weights and activations are converted to FP8 to improve inference efficiency and lower the deployment threshold. You can quantize the model yourself with `AngleSlim`, or directly download our pre-quantized open-source model [Hunyuan-A13B-Instruct-FP8](https://huggingface.co/tencent/Hunyuan-A13B-Instruct-FP8).
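As an illustration (not part of the official instructions), the FP8 checkpoint can be loaded for offline inference with vLLM's Python API. This is a minimal sketch, assuming a vLLM build that supports this model (for example, the pre-built vLLM image in the Deployment section below):

```python
# Sketch: offline inference with the FP8 checkpoint via vLLM's Python API.
# Assumes a vLLM build with Hunyuan-A13B support (e.g. the pre-built Docker image below).
from vllm import LLM, SamplingParams

llm = LLM(
    model="tencent/Hunyuan-A13B-Instruct-FP8",
    trust_remote_code=True,
    tensor_parallel_size=4,  # adjust to your GPU count
)
params = SamplingParams(temperature=0.7, max_tokens=1024)
outputs = llm.generate(["Explain FP8 quantization in one paragraph."], params)
print(outputs[0].outputs[0].text)
```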
#### FP8 Benchmark
This subsection reports benchmark results for the Hunyuan-A13B-Instruct-FP8 quantized model.
| Bench | Hunyuan-A13B-Instruct | Hunyuan-A13B-Instruct-FP8 |
|:---------:|:---------------------:|:-------------------------:|
| AIME 2024 | 87.3 | 86.7 |
| GSM8k | 94.39 | 94.01 |
| BBH | 89.1 | 88.34 |
| DROP | 91.1 | 91.1 |
### Int4 Quantization
We use the GPTQ algorithm to achieve W4A16 quantization. GPTQ processes the model weights layer by layer, using a small amount of calibration data to minimize the reconstruction error of the quantized weights and adjusting each layer via an optimization procedure that approximates the inverse Hessian. The process requires no retraining and only a small amount of calibration data, improving inference efficiency and lowering the deployment threshold. You can quantize the model yourself with `AngleSlim`, or directly download our pre-quantized open-source model [Hunyuan-A13B-Instruct-Int4](https://huggingface.co/tencent/Hunyuan-A13B-Instruct-GPTQ-Int4).
#### Int4 Benchmark
This subsection reports benchmark results for the Hunyuan-A13B-Instruct-GPTQ-Int4 quantized model.
| Bench | Hunyuan-A13B-Instruct | Hunyuan-A13B-Instruct-GPTQ-Int4 |
|:--------------:|:---------------------:|:-------------------------------:|
| OlympiadBench | 82.7 | 84.0 |
| AIME 2024 | 87.3 | 86.7 |
| GSM8k | 94.39 | 94.24 |
| BBH | 88.34 | 87.91 |
| DROP | 91.12 | 91.05 |
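
For completeness, below is a minimal sketch of loading the GPTQ-Int4 checkpoint with `transformers`. This is not an official recipe and assumes a GPTQ-capable backend (e.g. `gptqmodel` or `auto-gptq` with `optimum`) is installed in your environment:

```python
# Sketch: loading the GPTQ-Int4 checkpoint with transformers.
# Assumes a GPTQ backend (e.g. gptqmodel / auto-gptq + optimum) is installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "tencent/Hunyuan-A13B-Instruct-GPTQ-Int4"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    device_map="auto",        # the quantization config stored in the repo is picked up automatically
    trust_remote_code=True,
)
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=256)[0]))
```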
## Deployment
For deployment, you can use frameworks such as **TensorRT-LLM**, **vLLM**, or **SGLang** to serve the model and create an OpenAI-compatible API endpoint.
Pre-built Docker images are available at: https://hub.docker.com/r/hunyuaninfer/hunyuan-a13b/tags
### TensorRT-LLM
#### Docker Image
We provide a pre-built Docker image based on the latest version of TensorRT-LLM.
- Image tags: https://hub.docker.com/r/hunyuaninfer/hunyuan-large/tags
- Pull the Docker image:
```
docker pull hunyuaninfer/hunyuan-a13b:hunyuan-moe-A13B-trtllm
```
- Start the container, then launch the API server with `trtllm-serve`:
```
docker run --name hunyuanLLM_infer --rm -it --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --gpus=all hunyuaninfer/hunyuan-a13b:hunyuan-moe-A13B-trtllm
```
```
trtllm-serve \
/path/to/HunYuan-moe-A13B \
--host localhost \
--port 8000 \
--backend pytorch \
--max_batch_size 128 \
--max_num_tokens 16384 \
--tp_size 2 \
--kv_cache_free_gpu_memory_fraction 0.95 \
--extra_llm_api_options /path/to/extra-llm-api-config.yml
```
### vLLM
#### Docker Image
We provide a pre-built Docker image containing vLLM 0.8.5 with full support for this model. Support in the official vLLM release is still under development. **Note: CUDA 12.8 is required for this Docker image.**
- To get started:
```
docker pull docker.cnb.cool/tencent/hunyuan/hunyuan-a13b:hunyuan-moe-A13B-vllm
or
docker pull hunyuaninfer/hunyuan-a13b:hunyuan-moe-A13B-vllm
```
- Download the model files:
  - Hugging Face: downloaded automatically by vLLM.
  - ModelScope: `modelscope download --model Tencent-Hunyuan/Hunyuan-A13B-Instruct`
- Start the API server:

Model downloaded from Hugging Face:
```
docker run --privileged --user root --net=host --ipc=host \
        -v ~/.cache:/root/.cache/ \
        --gpus=all -it --entrypoint python hunyuaninfer/hunyuan-a13b:hunyuan-moe-A13B-vllm \
        -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --port 8000 \
        --tensor-parallel-size 4 --model tencent/Hunyuan-A13B-Instruct --trust-remote-code
```
Model downloaded from ModelScope:
```
docker run --privileged --user root --net=host --ipc=host \
-v ~/.cache/modelscope:/root/.cache/modelscope \
--gpus=all -it --entrypoint python hunyuaninfer/hunyuan-a13b:hunyuan-moe-A13B-vllm \
-m vllm.entrypoints.openai.api_server --host 0.0.0.0 --tensor-parallel-size 4 --port 8000 \
--model /root/.cache/modelscope/hub/models/Tencent-Hunyuan/Hunyuan-A13B-Instruct/ --trust-remote-code
```
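
Once the server is up, any OpenAI-compatible client can be used to query it. Below is a minimal sketch with the `openai` Python package; it assumes the server listens on `localhost:8000` and that the served model name matches the `--model` value you passed (adjust both if needed):

```python
# Sketch: querying the OpenAI-compatible endpoint started above.
# Assumes the server listens on localhost:8000 and serves tencent/Hunyuan-A13B-Instruct.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="tencent/Hunyuan-A13B-Instruct",
    messages=[{"role": "user", "content": "Summarize the benefits of regular exercise."}],
    max_tokens=512,
)
print(resp.choices[0].message.content)
```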
#### Tool Calling with vLLM
To support agent-based workflows and function calling capabilities, this model includes specialized parsing mechanisms for handling tool calls and internal reasoning steps.
For a complete working example of how to implement and use these features in an agent setting, please refer to our full agent implementation on GitHub:
🔗 [Hunyuan A13B Agent Example](https://github.com/Tencent-Hunyuan/Hunyuan-A13B/blob/main/agent/)
When deploying the model using **vLLM**, the following parameters can be used to configure the tool parsing behavior:
| Parameter | Value |
|--------------------------|-----------------------------------------------------------------------|
| `--tool-parser-plugin` | [Local Hunyuan A13B Tool Parser File](https://github.com/Tencent-Hunyuan/Hunyuan-A13B/blob/main/agent/hunyuan_tool_parser.py) |
| `--tool-call-parser` | `hunyuan` |
These settings enable vLLM to correctly interpret and route tool calls generated by the model according to the expected format.
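
For illustration, here is a hedged sketch of a tool-calling request against such a server using the `openai` Python package. The `get_weather` tool schema is a hypothetical example (not part of the model or repo), and depending on your vLLM version additional server flags may be required to enable automatic tool choice:

```python
# Sketch: a tool-calling request against a vLLM server started with the
# hunyuan tool-call parser configured as described above.
# The get_weather tool is a hypothetical example schema.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]
resp = client.chat.completions.create(
    model="tencent/Hunyuan-A13B-Instruct",
    messages=[{"role": "user", "content": "What's the weather in Shenzhen?"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)
```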
### SGLang
#### Docker Image
We also provide a pre-built Docker image based on the latest version of SGLang.
To get started:
- Pull the Docker image
```
docker pull docker.cnb.cool/tencent/hunyuan/hunyuan-a13b:hunyuan-moe-A13B-sglang
or
docker pull hunyuaninfer/hunyuan-a13b:hunyuan-moe-A13B-sglang
```
- Start the API server:
```
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
--ipc=host \
docker.cnb.cool/tencent/hunyuan/hunyuan-a13b:hunyuan-moe-A13B-sglang \
-m sglang.launch_server --model-path hunyuan/huanyuan_A13B --tp 4 --trust-remote-code --host 0.0.0.0 --port 30000
```
## Contact Us
If you would like to leave a message for our R&D and product teams, you are welcome to contact our open-source team. You can also reach us via email (hunyuan[email protected]).