qaihm-bot committed
Commit
1ee3767
1 Parent(s): b1adaca

Upload README.md with huggingface_hub

Files changed (1): README.md (+19 -3)
README.md CHANGED

@@ -29,9 +29,12 @@ More details on model performance across various devices, can be found
 - Model size: 4.56 MB
 
 
+
+
 | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
 | ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 11.098 ms | 6 - 9 MB | FP16 | NPU | [LiteHRNet.tflite](https://huggingface.co/qualcomm/LiteHRNet/blob/main/LiteHRNet.tflite)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 11.261 ms | 6 - 13 MB | FP16 | NPU | [LiteHRNet.tflite](https://huggingface.co/qualcomm/LiteHRNet/blob/main/LiteHRNet.tflite)
+
 
 
 ## Installation
@@ -89,9 +92,21 @@ device. This script does the following:
 python -m qai_hub_models.models.litehrnet.export
 ```
 
+```
+Profile Job summary of LiteHRNet
+--------------------------------------------------
+Device: QCS8550 (Proxy) (12)
+Estimated Inference Time: 11.18 ms
+Estimated Peak Memory Range: 6.26-17.18 MB
+Compute Units: NPU (1226),CPU (10) | Total (1236)
+
+
+```
+
+
 ## How does this work?
 
-This [export script](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/LiteHRNet/export.py)
+This [export script](https://aihub.qualcomm.com/models/litehrnet/qai_hub_models/models/LiteHRNet/export.py)
 leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
 on-device. Lets go through each step below in detail:
 
@@ -168,6 +183,7 @@ spot check the output with expected output.
 AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
 
 
+
 ## Run demo on a cloud-hosted device
 
 You can also run the demo on-device.
@@ -204,7 +220,7 @@ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
 ## License
 - The license for the original implementation of LiteHRNet can be found
 [here](https://github.com/HRNet/Lite-HRNet/blob/hrnet/LICENSE).
-- The license for the compiled assets for on-device deployment can be found [here]({deploy_license_url})
+- The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
 
 ## References
 * [Lite-HRNet: A Lightweight High-Resolution Network](https://arxiv.org/abs/2104.06403)
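The "Profile Job summary" block this commit adds to the README is plain text printed by the export script. If you want those numbers programmatically (e.g. to track latency across commits), a minimal sketch is shown below; note that `parse_profile_summary` is a hypothetical helper written here for illustration, not part of `qai_hub_models`:

```python
import re


def parse_profile_summary(summary: str) -> dict:
    """Extract device, latency, memory, and compute-unit counts
    from a plain-text 'Profile Job summary' block."""
    result = {}

    m = re.search(r"Device:\s*(.+)", summary)
    if m:
        result["device"] = m.group(1).strip()

    m = re.search(r"Estimated Inference Time:\s*([\d.]+)\s*ms", summary)
    if m:
        result["inference_time_ms"] = float(m.group(1))

    m = re.search(r"Estimated Peak Memory Range:\s*([\d.]+)-([\d.]+)\s*MB", summary)
    if m:
        result["peak_memory_mb"] = (float(m.group(1)), float(m.group(2)))

    # e.g. "Compute Units: NPU (1226),CPU (10) | Total (1236)"
    result["compute_units"] = {
        name: int(count)
        for name, count in re.findall(
            r"(\w+)\s*\((\d+)\)", summary.split("Compute Units:")[-1]
        )
    }
    return result


summary = """\
Profile Job summary of LiteHRNet
--------------------------------------------------
Device: QCS8550 (Proxy) (12)
Estimated Inference Time: 11.18 ms
Estimated Peak Memory Range: 6.26-17.18 MB
Compute Units: NPU (1226),CPU (10) | Total (1236)
"""

info = parse_profile_summary(summary)
print(info["inference_time_ms"])  # 11.18
print(info["compute_units"])      # {'NPU': 1226, 'CPU': 10, 'Total': 1236}
```

A parser like this makes the summary diffable as structured data, so a latency regression (e.g. 11.098 ms → 11.261 ms in the device table above) can be flagged automatically.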