Update README.md
README.md CHANGED
@@ -12,18 +12,23 @@ library_name: transformers
 
 
 <p align="center">
-
-🖥️ <a href="https://
+🤗 <a href="https://huggingface.co/tencent/Hunyuan-A13B-Instruct"><b>Hugging Face</b></a> |
+🖥️ <a href="https://hunyuan.tencent.com" style="color: red;"><b>Official Website</b></a> |
 🕖 <a href="https://cloud.tencent.com/product/hunyuan"><b>HunyuanAPI</b></a> |
 🕹️ <a href="https://hunyuan.tencent.com/?model=hunyuan-a13b"><b>Demo</b></a> |
-
+🤖 <a href="https://modelscope.cn/models/Tencent-Hunyuan/Hunyuan-A13B-Instruct"><b>ModelScope</b></a>
 </p>
 
+
 <p align="center">
-<a href="https://github.com/Tencent/Hunyuan-A13B"><b>
+<a href="https://github.com/Tencent-Hunyuan/Hunyuan-A13B/blob/main/report/Hunyuan_A13B_Technical_Report.pdf"><b>Technical Report</b></a> |
+<a href="https://github.com/Tencent-Hunyuan/Hunyuan-A13B"><b>GITHUB</b></a> |
+<a href="https://cnb.cool/tencent/hunyuan/Hunyuan-A13B"><b>cnb.cool</b></a> |
+<a href="https://github.com/Tencent-Hunyuan/Hunyuan-A13B/blob/main/LICENSE"><b>LICENSE</b></a>
 </p>
 
 
+
 
 Welcome to the official repository of **Hunyuan-A13B**, an innovative and open-source large language model (LLM) built on a fine-grained Mixture-of-Experts (MoE) architecture. Designed for efficiency and scalability, Hunyuan-A13B delivers cutting-edge performance with minimal computational overhead, making it an ideal choice for advanced reasoning and general-purpose applications, especially in resource-constrained environments.
 
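Since the model card's front matter declares `library_name: transformers`, a quickstart along the following lines may help readers load the checkpoint linked in the header. This is a minimal sketch, not the repository's official example: it assumes the Hugging Face repo id `tencent/Hunyuan-A13B-Instruct` taken from the links above, that the checkpoint ships a chat template, and that the custom MoE modeling code requires `trust_remote_code=True`.

```python
# Minimal quickstart sketch for Hunyuan-A13B-Instruct with Hugging Face transformers.
# Assumptions: repo id from the README header links; trust_remote_code=True for the
# custom MoE model code; a chat template bundled with the tokenizer.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tencent/Hunyuan-A13B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",    # spread layers/experts across available GPUs
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    trust_remote_code=True,
)

# Build a chat prompt and generate a reply.
messages = [{"role": "user", "content": "Explain Mixture-of-Experts in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```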