Update README.md
README.md CHANGED
@@ -194,7 +194,7 @@ For production deployment, we recommend using [vLLM](https://docs.vllm.ai/en/lat
 - 📦 Powerful batch request processing capability
 - ⚙️ Deeply optimized underlying performance
 
-For detailed vLLM deployment instructions, please refer to our [vLLM Deployment Guide](./docs/vllm_deployment_guide.md).
+For detailed vLLM deployment instructions, please refer to our [vLLM Deployment Guide](./docs/vllm_deployment_guide.md). Special Note: Using vLLM versions below 0.9.2 may result in incompatibility or incorrect precision for the model.
 Alternatively, you can also deploy using Transformers directly. For detailed Transformers deployment instructions, you can see our [MiniMax-M1 Transformers Deployment Guide](./docs/transformers_deployment_guide.md).
 
 
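For readers applying the version constraint added in this diff, the sketch below is an illustrative pre-flight check and not part of the repository. It assumes vLLM is installed as the `vllm` package and uses `importlib.metadata` together with `packaging` (commonly available alongside vLLM) to verify the >= 0.9.2 requirement before serving.

```python
# Illustrative sketch only: verify the installed vLLM version meets the
# >= 0.9.2 requirement noted in the diff before loading MiniMax-M1.
from importlib.metadata import PackageNotFoundError, version

from packaging.version import Version

MIN_VLLM = Version("0.9.2")

try:
    installed = Version(version("vllm"))
except PackageNotFoundError:
    raise SystemExit("vLLM is not installed; see docs/vllm_deployment_guide.md")

if installed < MIN_VLLM:
    raise SystemExit(
        f"vLLM {installed} found, but >= {MIN_VLLM} is required to avoid "
        "incompatibility or incorrect precision with MiniMax-M1."
    )

print(f"vLLM {installed} satisfies the >= {MIN_VLLM} requirement.")
```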