fix docs

- vllm_deployment_guide.md +3 -3
- vllm_deployment_guide_cn.md +3 -3
vllm_deployment_guide.md (CHANGED)
@@ -4,7 +4,7 @@
 
 ## 📖 Introduction
 
-We recommend using [vLLM](https://docs.vllm.ai/en/latest/) to deploy MiniMax-M1 model. Based on our testing, vLLM performs excellently when deploying this model, with the following features:
+We recommend using [vLLM](https://docs.vllm.ai/en/latest/) to deploy [MiniMax-M1](https://huggingface.co/MiniMaxAI/MiniMax-M1-40k) model. Based on our testing, vLLM performs excellently when deploying this model, with the following features:
 
 - 🔥 Outstanding service throughput performance
 - ⚡ Efficient and intelligent memory management
@@ -17,7 +17,7 @@ The MiniMax-M1 model can run efficiently on a single server equipped with 8 H800
 
 ### MiniMax-M1 Model Obtaining
 
-You can download the model from our official HuggingFace repository: [MiniMax-M1](https://huggingface.co/MiniMaxAI/MiniMax-M1)
+You can download the model from our official HuggingFace repository: [MiniMax-M1](https://huggingface.co/MiniMaxAI/MiniMax-M1-40k)
 
 Download command:
 ```
@@ -32,7 +32,7 @@ Or download using git:
 
 ```bash
 git lfs install
-git clone https://huggingface.co/MiniMaxAI/MiniMax-M1
+git clone https://huggingface.co/MiniMaxAI/MiniMax-M1-40k
 ```
 
 ⚠️ **Important Note**: Please ensure that [Git LFS](https://git-lfs.github.com/) is installed on your system, which is necessary for completely downloading the model weight files.
vllm_deployment_guide_cn.md (CHANGED)
@@ -2,7 +2,7 @@
 
 ## 📖 Introduction
 
-We recommend using [vLLM](https://docs.vllm.ai/en/latest/) to deploy the [MiniMax-M1](https://huggingface.co/MiniMaxAI/MiniMax-M1) model. In our testing, vLLM performs excellently when deploying this model, with the following features:
+We recommend using [vLLM](https://docs.vllm.ai/en/latest/) to deploy the [MiniMax-M1](https://huggingface.co/MiniMaxAI/MiniMax-M1-40k) model. In our testing, vLLM performs excellently when deploying this model, with the following features:
 
 - 🔥 Outstanding service throughput performance
 - ⚡ Efficient and intelligent memory management
@@ -15,7 +15,7 @@ The MiniMax-M1 model can run efficiently on a single server equipped with 8 H800 or 8 H20 GPUs
 
 ### Obtaining the MiniMax-M1 Model
 
-You can download the model from our official HuggingFace repository: [MiniMax-M1](https://huggingface.co/MiniMaxAI/MiniMax-M1)
+You can download the model from our official HuggingFace repository: [MiniMax-M1](https://huggingface.co/MiniMaxAI/MiniMax-M1-40k)
 
 Download command:
 ```
@@ -30,7 +30,7 @@ export HF_ENDPOINT=https://hf-mirror.com
 
 ```bash
 git lfs install
-git clone https://huggingface.co/MiniMaxAI/MiniMax-M1
+git clone https://huggingface.co/MiniMaxAI/MiniMax-M1-40k
 ```
 
 ⚠️ **Important Note**: Please ensure that [Git LFS](https://git-lfs.github.com/) is installed on your system, which is necessary for completely downloading the model weight files.
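All six changed lines across the two guides make the same one-line substitution: the old repository URL is replaced by the renamed `MiniMax-M1-40k` one. A minimal Python sketch of that rewrite (a hypothetical helper for illustration, not part of the commit) could look like this:

```python
import re

# Old and new repository URLs, as changed by this commit.
OLD = "https://huggingface.co/MiniMaxAI/MiniMax-M1"
NEW = "https://huggingface.co/MiniMaxAI/MiniMax-M1-40k"

def fix_links(text: str) -> str:
    # Replace only exact occurrences of the old URL; the negative
    # lookahead skips links that already point at MiniMax-M1-40k.
    return re.sub(re.escape(OLD) + r"(?!-40k)", NEW, text)

line = "git clone https://huggingface.co/MiniMaxAI/MiniMax-M1"
print(fix_links(line))
# → git clone https://huggingface.co/MiniMaxAI/MiniMax-M1-40k
```

The lookahead guard makes the helper idempotent, so running it over a file that was already partially fixed leaves the corrected links untouched.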