---
license: other
license_name: tencent-hunyuan-community
license_link: https://huggingface.co/Tencent-Hunyuan/HunyuanDiT/blob/main/LICENSE.txt
language:
- en
---
|
|
|
# HunyuanDiT TensorRT Acceleration |
|
|
|
Language: **English** | [**中文**](https://huggingface.co/Tencent-Hunyuan/TensorRT-libs/blob/main/README_zh.md) |
|
|
|
We provide a TensorRT version of [HunyuanDiT](https://github.com/Tencent/HunyuanDiT) for inference acceleration (faster than the Flash Attention implementation). The steps below convert the PyTorch model to a TensorRT engine, based on **TensorRT-9.2.0.5** and **CUDA 11.7 or 11.8**.
|
|
|
> ⚠️ Important reminder (suggestion for testing the TensorRT acceleration version):
> We recommend testing the TensorRT version on NVIDIA GPUs with Compute Capability >= 8.0 (for example, RTX 4090, RTX 3090, H800, A10/A100/A800, etc.). You can look up the Compute Capability of your GPU [here](https://developer.nvidia.com/cuda-gpus#compute). On GPUs with Compute Capability < 8.0, the TensorRT engine file may fail to build, or inference performance may be poor; the main reason is that TensorRT does not support the fused MHA kernel on those architectures.
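
If you are unsure about your GPU's architecture, a quick way to check the Compute Capability from Python (assuming PyTorch with CUDA support is installed, which HunyuanDiT already requires) is:

```python
import torch

# Query the Compute Capability of the current CUDA device.
major, minor = torch.cuda.get_device_capability(0)
print(f"{torch.cuda.get_device_name(0)}: Compute Capability {major}.{minor}")

# TensorRT's fused MHA kernel requires Compute Capability >= 8.0 (Ampere or newer).
if major < 8:
    print("Warning: the TensorRT engine may fail to build or run slowly on this GPU.")
```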
|
|
|
## 🛠 Instructions |
|
|
|
### 1. Download the dependencies from Hugging Face.
|
|
|
```shell
cd HunyuanDiT

# Use the huggingface-cli tool to download the model.
huggingface-cli download Tencent-Hunyuan/TensorRT-libs --local-dir ./ckpts/t2i/model_trt
```
|
|
|
### 2. Install the TensorRT dependencies. |
|
|
|
```shell
# Extract and install the TensorRT dependencies.
sh trt/install.sh

# Set the TensorRT build environment variables. We provide a script to set up the environment.
source trt/activate.sh
```
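
After `source trt/activate.sh`, you can optionally confirm that the TensorRT Python package is visible. A minimal sanity check, assuming the libraries installed above expose the standard `tensorrt` module:

```python
import tensorrt as trt

# With the libraries installed above, this should report a 9.2.0.x version.
print(trt.__version__)
```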
|
|
|
### 3. Build the TensorRT engine. |
|
|
|
|
|
#### Method 1: Use the prebuilt engine |
|
|
|
We provide several prebuilt [TensorRT engines](https://huggingface.co/Tencent-Hunyuan/TensorRT-engine), which can be downloaded from Hugging Face.
|
|
|
| Supported GPU | Remote Path |
|:----------------:|:---------------------------------:|
| GeForce RTX 3090 | `engines/RTX3090/model_onnx.plan` |
| GeForce RTX 4090 | `engines/RTX4090/model_onnx.plan` |
| A100 | `engines/A100/model_onnx.plan` |
|
|
|
Use the following command to download and place the engine in the specified location. |
|
|
|
*Note: Please replace `<Remote Path>` with the corresponding remote path in the table above.* |
|
|
|
```shell
export REMOTE_PATH=<Remote Path>

# Download the engine into the expected directory.
huggingface-cli download Tencent-Hunyuan/TensorRT-engine ${REMOTE_PATH} --local-dir ./ckpts/t2i/model_trt/engine

# Link the downloaded plan file to the path the pipeline expects.
ln -s ${REMOTE_PATH} ./ckpts/t2i/model_trt/engine/model_onnx.plan
```
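
Before running the full pipeline, you can optionally verify that the downloaded plan file deserializes on your machine. The sketch below uses the standard TensorRT Python API; a plan built for a different GPU architecture or TensorRT version will fail to load here:

```python
import tensorrt as trt

PLAN_PATH = "./ckpts/t2i/model_trt/engine/model_onnx.plan"

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)
with open(PLAN_PATH, "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

# deserialize_cuda_engine returns None if the plan is incompatible
# with the local GPU or TensorRT version.
print("Engine loaded." if engine is not None else "Failed to load engine.")
```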
|
|
|
#### Method 2: Build your own engine |
|
|
|
If you are using a different GPU, you can build the engine using the following command. |
|
|
|
```shell
# Build the TensorRT engine. By default, it will read the `ckpts` folder in the current directory.
sh trt/build_engine.sh
```
|
|
|
Finally, if you see output like `&&&& PASSED TensorRT.trtexec [TensorRT v9200]`, the engine has been built successfully.
|
|
|
### 4. Run inference using the TensorRT model.
|
|
|
```shell
# Important: If you have not activated the environment yet, run the following command.
source trt/activate.sh

# Run inference using the prompt-enhancement model + the HunyuanDiT TensorRT model.
python sample_t2i.py --prompt "渔舟唱晚" --infer-mode trt

# Disable prompt enhancement to save GPU memory.
python sample_t2i.py --prompt "渔舟唱晚" --infer-mode trt --no-enhance
```
|
|
|
### 5. Notice |
|
|
|
For performance reasons, the TensorRT engine currently supports only the following input shapes. In the future, we will verify and try to support arbitrary shapes.
|
|
|
```python
STANDARD_SHAPE = [
    [(1024, 1024), (1280, 1280)],             # 1:1
    [(1024, 768), (1152, 864), (1280, 960)],  # 4:3
    [(768, 1024), (864, 1152), (960, 1280)],  # 3:4
    [(1280, 768)],                            # 16:9
    [(768, 1280)],                            # 9:16
]
```
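
For illustration only: a hypothetical helper like `snap_to_standard_shape` below (not part of the HunyuanDiT codebase) shows one way to map an arbitrary requested size onto the closest supported shape, matching on aspect ratio first and pixel area second. It reuses the `STANDARD_SHAPE` list above, and pairs are passed in the same order as listed there:

```python
# Hypothetical helper, not part of HunyuanDiT: snap a requested size to the
# nearest shape supported by the TensorRT engine.
ALL_SHAPES = [shape for group in STANDARD_SHAPE for shape in group]

def snap_to_standard_shape(a: int, b: int) -> tuple[int, int]:
    """Return the supported shape closest to the requested (a, b) pair,
    minimizing the aspect-ratio difference, then the area difference."""
    return min(
        ALL_SHAPES,
        key=lambda s: (abs(s[0] / s[1] - a / b), abs(s[0] * s[1] - a * b)),
    )

print(snap_to_standard_shape(1920, 1080))  # -> (1280, 768)
```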
|
|
|
## ❓ Q&A |
|
|
|
Please refer to the [Q&A](./QA.md) for more questions and answers about building the TensorRT Engine. |
|
|
|
|