Jarvis73 committed · commit 802781f · verified · 1 parent: 76de374

Upload ./README.md with huggingface_hub

Files changed (1): README.md (+25 −12)

README.md CHANGED

# HunyuanDiT TensorRT Acceleration

Language: **English** | [**中文**](https://huggingface.co/Tencent-Hunyuan/TensorRT-libs/blob/main/README_zh.md)

We provide a TensorRT version of [HunyuanDiT](https://github.com/Tencent/HunyuanDiT) for inference acceleration (faster than flash attention). You can convert the torch model to a TensorRT model with the following steps, which are based on **TensorRT-9.2.0.5** and **CUDA 11.7 or 11.8**.

> ⚠️ Important reminder (suggestion for testing the TensorRT acceleration version):
> We recommend testing the TensorRT version on NVIDIA GPUs with Compute Capability >= 8.0 (for example, RTX 4090, RTX 3090, H800, A10/A100/A800, etc.). You can look up your GPU's Compute Capability [here](https://developer.nvidia.com/cuda-gpus#compute). On GPUs with Compute Capability < 8.0, the TensorRT engine file may fail to build, or inference performance may be poor, mainly because TensorRT does not support the fused MHA kernel on those architectures.
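
A quick way to check your GPU's Compute Capability from the command line (a minimal sketch; it assumes PyTorch is installed, which HunyuanDiT already requires):

```shell
# Print the Compute Capability of GPU 0 as (major, minor), e.g. (8, 6) for an RTX 3090.
python -c "import torch; print(torch.cuda.get_device_capability(0))"
```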

## 🛠 Instructions

### 1. Download dependencies from huggingface.

```shell
cd HunyuanDiT

huggingface-cli download Tencent-Hunyuan/TensorRT-libs --local-dir ./ckpts/t2i/model_trt
```
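
If you prefer the Python API over the CLI, the same files can be fetched with `huggingface_hub` (a minimal sketch using the standard `snapshot_download` call; run it from the `HunyuanDiT` directory):

```shell
# Equivalent download via the huggingface_hub Python API.
python -c "from huggingface_hub import snapshot_download; snapshot_download(repo_id='Tencent-Hunyuan/TensorRT-libs', local_dir='./ckpts/t2i/model_trt')"
```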

### 2. Install the TensorRT dependencies.

```shell
sh trt/install.sh
```
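
To confirm the installation succeeded, try importing TensorRT from Python (a minimal sketch; the printed version should be the 9.2.x build named above):

```shell
# Should print the installed TensorRT version (expected to be a 9.2.x build).
python -c "import tensorrt as trt; print(trt.__version__)"
```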

### 3. Build the TensorRT engine.

#### Method 1: Use the prebuilt engine

We provide some prebuilt TensorRT engines. Use the following command to download and place the engine in the specified location, replacing `<Remote Path>` with the remote path of the engine that matches your GPU.

```shell
huggingface-cli download Tencent-Hunyuan/TensorRT-engine <Remote Path> --local-dir ./ckpts/t2i/model_trt/engine
```
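
To sanity-check that a downloaded engine can actually be loaded on your machine, you can try deserializing it with the TensorRT Python API (a minimal sketch; the `model.plan` filename is a hypothetical placeholder, substitute whatever file was placed in `./ckpts/t2i/model_trt/engine`):

```shell
# Attempt to deserialize the engine; a failure here usually means a GPU or TensorRT version mismatch.
python -c "
import tensorrt as trt
logger = trt.Logger(trt.Logger.WARNING)
# NOTE: 'model.plan' is a placeholder name, not necessarily the file this repo ships.
with open('./ckpts/t2i/model_trt/engine/model.plan', 'rb') as f:
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
print('engine loaded OK' if engine is not None else 'engine failed to load')
"
```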

#### Method 2: Build your own engine

If you are using a different GPU, you can build the engine using the following command.

```shell
# Set the TensorRT build environment variables first. We provide a script to set up the environment.
source trt/activate.sh

# Build the TensorRT engine. By default, it will read the `ckpts` folder in the current directory.
sh trt/build_engine.sh
```

Finally, if you see output like `&&&& PASSED TensorRT.trtexec [TensorRT v9200]`, the engine has been built successfully.
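
You can then check that an engine file was actually written under the model directory (a minimal sketch; `.plan` and `.engine` are common TensorRT file extensions, and the exact filename depends on the build script):

```shell
# List any TensorRT engine files produced by the build.
find ./ckpts/t2i/model_trt -name '*.plan' -o -name '*.engine'
```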

### 4. Run the inference using the TensorRT model.

```shell
# Run the inference using the prompt-enhanced model + HunyuanDiT TensorRT model.
python sample_t2i.py --prompt "渔舟唱晚" --infer-mode trt

# Disable the prompt enhancement model (--no-enhance) to save GPU memory.
python sample_t2i.py --prompt "渔舟唱晚" --infer-mode trt --no-enhance
```
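
To verify the speedup for yourself, you can time the TensorRT path against a baseline (a minimal sketch; it assumes the flash-attention mode `--infer-mode fa` from the main HunyuanDiT repository is available for comparison):

```shell
# Compare wall-clock time of the flash-attention baseline vs. the TensorRT engine.
time python sample_t2i.py --prompt "渔舟唱晚" --infer-mode fa --no-enhance
time python sample_t2i.py --prompt "渔舟唱晚" --infer-mode trt --no-enhance
```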

## ❓ Q&A

Please refer to the [Q&A](./QA.md) for more questions and answers about building the TensorRT engine.