qaihm-bot committed (verified)
Commit 6ebc757 · Parent: 4c736ee

Upload README.md with huggingface_hub

Files changed (1): README.md (+19 −3)
README.md CHANGED
````diff
@@ -30,9 +30,12 @@ More details on model performance across various devices, can be found
 - Model size: 151 MB
 
 
+
+
 | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
 | ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 290.505 ms | 4 - 173 MB | FP16 | GPU | [DeepLabV3-ResNet50.tflite](https://huggingface.co/qualcomm/DeepLabV3-ResNet50/blob/main/DeepLabV3-ResNet50.tflite)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 292.98 ms | 2 - 143 MB | FP16 | GPU | [DeepLabV3-ResNet50.tflite](https://huggingface.co/qualcomm/DeepLabV3-ResNet50/blob/main/DeepLabV3-ResNet50.tflite)
+
 
 
 ## Installation
@@ -89,9 +92,21 @@ device. This script does the following:
 python -m qai_hub_models.models.deeplabv3_resnet50.export
 ```
 
+```
+Profile Job summary of DeepLabV3-ResNet50
+--------------------------------------------------
+Device: QCS8550 (Proxy) (12)
+Estimated Inference Time: 291.24 ms
+Estimated Peak Memory Range: 5.22-174.24 MB
+Compute Units: GPU (95) | Total (95)
+
+
+```
+
+
 ## How does this work?
 
-This [export script](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/DeepLabV3-ResNet50/export.py)
+This [export script](https://aihub.qualcomm.com/models/deeplabv3_resnet50/qai_hub_models/models/DeepLabV3-ResNet50/export.py)
 leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
 on-device. Lets go through each step below in detail:
 
@@ -168,6 +183,7 @@ spot check the output with expected output.
 AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
 
 
+
 ## Run demo on a cloud-hosted device
 
 You can also run the demo on-device.
@@ -204,7 +220,7 @@ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
 ## License
 - The license for the original implementation of DeepLabV3-ResNet50 can be found
 [here](https://github.com/pytorch/vision/blob/main/LICENSE).
-- The license for the compiled assets for on-device deployment can be found [here]({deploy_license_url})
+- The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
 
 ## References
 * [Rethinking Atrous Convolution for Semantic Image Segmentation](https://arxiv.org/abs/1706.05587)
````
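For context, the export entrypoint named in the diff is invoked as a Python module. A minimal sketch of running it, assuming the `qai_hub_models` package is installed from PyPI and a Qualcomm AI Hub API token has already been configured (the extras tag on the install line is an assumption about the packaging, not taken from the diff):

```shell
# Install the package that provides the export script; the
# [deeplabv3_resnet50] extras tag is an assumption about packaging.
pip install "qai_hub_models[deeplabv3_resnet50]"

# Run the export entrypoint exactly as shown in the README diff.
# This submits compile and profile jobs to Qualcomm AI Hub and prints
# a profile-job summary like the one added in the diff.
python -m qai_hub_models.models.deeplabv3_resnet50.export
```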