monai / medical
katielink committed on
Commit c787175 · 1 Parent(s): 7406c7e

update the TensorRT part in the README file

Files changed (3)
  1. README.md +10 -2
  2. configs/metadata.json +2 -1
  3. docs/README.md +10 -2
README.md CHANGED
@@ -47,20 +47,28 @@ IoU was used for evaluating the performance of the model. This model achieves a
47
  ![A graph showing the validation mean IoU over 100 epochs.](https://developer.download.nvidia.com/assets/Clara/Images/monai_endoscopic_tool_segmentation_val_iou.png)
48
 
49
  #### TensorRT speedup
50
- The `endoscopic_tool_segmentation` bundle supports the TensorRT acceleration. The table below shows the speedup ratios benchmarked on an A100 80G GPU, in which the `model computation` means the speedup ratio of model's inference with a random input without preprocessing and postprocessing and the `end2end` means run the bundle end to end with the TensorRT based model. The `torch_fp32` and `torch_amp` is for the pytorch model with or without `amp` mode. The `trt_fp32` and `trt_fp16` is for the TensorRT based model converted in corresponding precision. The `speedup amp`, `speedup fp32` and `speedup fp16` is the speedup ratio of corresponding models versus the pytorch float32 model, while the `amp vs fp16` is between the pytorch amp model and the TensorRT float16 based model.
51
 
52
  | method | torch_fp32(ms) | torch_amp(ms) | trt_fp32(ms) | trt_fp16(ms) | speedup amp | speedup fp32 | speedup fp16 | amp vs fp16|
53
  | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
54
  | model computation | 12.00 | 14.06 | 6.59 | 5.20 | 0.85 | 1.82 | 2.31 | 2.70 |
55
  | end2end |170.04 | 172.20 | 155.26 | 155.57 | 0.99 | 1.10 | 1.09 | 1.11 |
56
 
57
  This result is benchmarked under:
58
  - TensorRT: 8.5.3+cuda11.8
59
  - Torch-TensorRT Version: 1.4.0
60
  - CPU Architecture: x86-64
61
  - OS: ubuntu 20.04
62
  - Python version: 3.8.10
63
- - CUDA version: 11.8
64
  - GPU models and configuration: A100 80G
65
 
66
  ## MONAI Bundle Commands
 
47
  ![A graph showing the validation mean IoU over 100 epochs.](https://developer.download.nvidia.com/assets/Clara/Images/monai_endoscopic_tool_segmentation_val_iou.png)
48
 
49
  #### TensorRT speedup
50
+ The `endoscopic_tool_segmentation` bundle supports TensorRT acceleration. The table below shows the speedup ratios benchmarked on an A100 80G GPU.
51
 
52
  | method | torch_fp32(ms) | torch_amp(ms) | trt_fp32(ms) | trt_fp16(ms) | speedup amp | speedup fp32 | speedup fp16 | amp vs fp16|
53
  | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
54
  | model computation | 12.00 | 14.06 | 6.59 | 5.20 | 0.85 | 1.82 | 2.31 | 2.70 |
55
  | end2end |170.04 | 172.20 | 155.26 | 155.57 | 0.99 | 1.10 | 1.09 | 1.11 |
56
 
57
+ Where:
58
+ - `model computation` means the speedup ratio of the model's inference on a random input, excluding preprocessing and postprocessing.
59
+ - `end2end` means running the bundle end-to-end with the TensorRT-based model.
60
+ - `torch_fp32` and `torch_amp` are for the PyTorch model without and with `amp` mode, respectively.
61
+ - `trt_fp32` and `trt_fp16` are for the TensorRT-based models converted in the corresponding precision.
62
+ - `speedup amp`, `speedup fp32` and `speedup fp16` are the speedup ratios of the corresponding models versus the PyTorch float32 model.
63
+ - `amp vs fp16` is the speedup ratio between the PyTorch amp model and the TensorRT float16-based model.
64
+
65
  This result is benchmarked under:
66
  - TensorRT: 8.5.3+cuda11.8
67
  - Torch-TensorRT Version: 1.4.0
68
  - CPU Architecture: x86-64
69
  - OS: ubuntu 20.04
70
  - Python version: 3.8.10
71
+ - CUDA version: 12.0
72
  - GPU models and configuration: A100 80G
73
 
74
  ## MONAI Bundle Commands
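The speedup columns in the table above are latency ratios against the `torch_fp32` baseline, except `amp vs fp16`, which divides the `torch_amp` latency by the `trt_fp16` latency. A minimal sketch reproducing them from the published numbers (the `speedups` helper is illustrative, not part of the bundle):

```python
# Speedup = baseline latency / method latency; "amp vs fp16" uses torch_amp
# as the baseline instead of torch_fp32. Values are the table's latencies in ms.
rows = {
    "model computation": {"torch_fp32": 12.00, "torch_amp": 14.06, "trt_fp32": 6.59, "trt_fp16": 5.20},
    "end2end": {"torch_fp32": 170.04, "torch_amp": 172.20, "trt_fp32": 155.26, "trt_fp16": 155.57},
}

def speedups(r):
    """Recompute the four speedup columns for one table row (illustrative helper)."""
    base = r["torch_fp32"]
    return {
        "speedup amp": round(base / r["torch_amp"], 2),
        "speedup fp32": round(base / r["trt_fp32"], 2),
        "speedup fp16": round(base / r["trt_fp16"], 2),
        "amp vs fp16": round(r["torch_amp"] / r["trt_fp16"], 2),
    }

for name, r in rows.items():
    print(name, speedups(r))
```

Rounded to two decimals, this recovers every speedup column of both rows.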
configs/metadata.json CHANGED
@@ -1,7 +1,8 @@
1
  {
2
  "schema": "https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/meta_schema_20220324.json",
3
- "version": "0.4.7",
4
  "changelog": {
 
5
  "0.4.7": "fix mgpu finalize issue",
6
  "0.4.6": "enable deterministic training",
7
  "0.4.5": "add the command of executing inference with TensorRT models",
 
1
  {
2
  "schema": "https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/meta_schema_20220324.json",
3
+ "version": "0.4.8",
4
  "changelog": {
5
+ "0.4.8": "update the TensorRT part in the README file",
6
  "0.4.7": "fix mgpu finalize issue",
7
  "0.4.6": "enable deterministic training",
8
  "0.4.5": "add the command of executing inference with TensorRT models",
docs/README.md CHANGED
@@ -40,20 +40,28 @@ IoU was used for evaluating the performance of the model. This model achieves a
40
  ![A graph showing the validation mean IoU over 100 epochs.](https://developer.download.nvidia.com/assets/Clara/Images/monai_endoscopic_tool_segmentation_val_iou.png)
41
 
42
  #### TensorRT speedup
43
- The `endoscopic_tool_segmentation` bundle supports the TensorRT acceleration. The table below shows the speedup ratios benchmarked on an A100 80G GPU, in which the `model computation` means the speedup ratio of model's inference with a random input without preprocessing and postprocessing and the `end2end` means run the bundle end to end with the TensorRT based model. The `torch_fp32` and `torch_amp` is for the pytorch model with or without `amp` mode. The `trt_fp32` and `trt_fp16` is for the TensorRT based model converted in corresponding precision. The `speedup amp`, `speedup fp32` and `speedup fp16` is the speedup ratio of corresponding models versus the pytorch float32 model, while the `amp vs fp16` is between the pytorch amp model and the TensorRT float16 based model.
44
 
45
  | method | torch_fp32(ms) | torch_amp(ms) | trt_fp32(ms) | trt_fp16(ms) | speedup amp | speedup fp32 | speedup fp16 | amp vs fp16|
46
  | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
47
  | model computation | 12.00 | 14.06 | 6.59 | 5.20 | 0.85 | 1.82 | 2.31 | 2.70 |
48
  | end2end |170.04 | 172.20 | 155.26 | 155.57 | 0.99 | 1.10 | 1.09 | 1.11 |
49
 
50
  This result is benchmarked under:
51
  - TensorRT: 8.5.3+cuda11.8
52
  - Torch-TensorRT Version: 1.4.0
53
  - CPU Architecture: x86-64
54
  - OS: ubuntu 20.04
55
  - Python version: 3.8.10
56
- - CUDA version: 11.8
57
  - GPU models and configuration: A100 80G
58
 
59
  ## MONAI Bundle Commands
 
40
  ![A graph showing the validation mean IoU over 100 epochs.](https://developer.download.nvidia.com/assets/Clara/Images/monai_endoscopic_tool_segmentation_val_iou.png)
41
 
42
  #### TensorRT speedup
43
+ The `endoscopic_tool_segmentation` bundle supports TensorRT acceleration. The table below shows the speedup ratios benchmarked on an A100 80G GPU.
44
 
45
  | method | torch_fp32(ms) | torch_amp(ms) | trt_fp32(ms) | trt_fp16(ms) | speedup amp | speedup fp32 | speedup fp16 | amp vs fp16|
46
  | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
47
  | model computation | 12.00 | 14.06 | 6.59 | 5.20 | 0.85 | 1.82 | 2.31 | 2.70 |
48
  | end2end |170.04 | 172.20 | 155.26 | 155.57 | 0.99 | 1.10 | 1.09 | 1.11 |
49
 
50
+ Where:
51
+ - `model computation` means the speedup ratio of the model's inference on a random input, excluding preprocessing and postprocessing.
52
+ - `end2end` means running the bundle end-to-end with the TensorRT-based model.
53
+ - `torch_fp32` and `torch_amp` are for the PyTorch model without and with `amp` mode, respectively.
54
+ - `trt_fp32` and `trt_fp16` are for the TensorRT-based models converted in the corresponding precision.
55
+ - `speedup amp`, `speedup fp32` and `speedup fp16` are the speedup ratios of the corresponding models versus the PyTorch float32 model.
56
+ - `amp vs fp16` is the speedup ratio between the PyTorch amp model and the TensorRT float16-based model.
57
+
58
  This result is benchmarked under:
59
  - TensorRT: 8.5.3+cuda11.8
60
  - Torch-TensorRT Version: 1.4.0
61
  - CPU Architecture: x86-64
62
  - OS: ubuntu 20.04
63
  - Python version: 3.8.10
64
+ - CUDA version: 12.0
65
  - GPU models and configuration: A100 80G
66
 
67
  ## MONAI Bundle Commands
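The per-method latencies in the tables come from repeated timed runs. A minimal wall-clock sketch of the warmup-then-average pattern such benchmarks use (the helper and dummy workload are illustrative; a real GPU measurement would also synchronize the device, e.g. with `torch.cuda.synchronize()`):

```python
import time

def mean_latency_ms(fn, warmup=3, runs=10):
    """Average wall-clock latency of fn in milliseconds after warmup runs
    (illustrative helper; GPU timing needs device synchronization as well)."""
    for _ in range(warmup):
        fn()  # warmup iterations are excluded from the measurement
    t0 = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - t0) / runs * 1e3

# Dummy CPU workload standing in for model inference on a random input.
latency = mean_latency_ms(lambda: sum(i * i for i in range(10_000)))
```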