Upload README.md with huggingface_hub
README.md CHANGED
@@ -25,15 +25,15 @@ More details on model performance across various devices can be found
- **Model Type:** Semantic segmentation
- **Model Stats:**
  - Model checkpoint: COCO_WITH_VOC_LABELS_V1
-  - Input resolution:
+  - Input resolution: 513x513
  - Number of parameters: 39.6M
  - Model size: 151 MB

| Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
| ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite |
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library |
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 290.847 ms | 0 - 214 MB | FP16 | GPU | [DeepLabV3-ResNet50.tflite](https://huggingface.co/qualcomm/DeepLabV3-ResNet50/blob/main/DeepLabV3-ResNet50.tflite)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 810.711 ms | 3 - 11 MB | FP16 | GPU | [DeepLabV3-ResNet50.so](https://huggingface.co/qualcomm/DeepLabV3-ResNet50/blob/main/DeepLabV3-ResNet50.so)

## Installation
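
As a quick off-device sanity check of the stats in the table above: the checkpoint name matches torchvision's `DeepLabV3_ResNet50_Weights.COCO_WITH_VOC_LABELS_V1` enum, so the sketch below loads that torchvision checkpoint and runs a single 513x513 input. This is an illustrative sketch under that assumption, not part of the model card's documented workflow.

```python
# Hedged sketch: assumes the COCO_WITH_VOC_LABELS_V1 checkpoint named above is
# torchvision's DeepLabV3_ResNet50_Weights.COCO_WITH_VOC_LABELS_V1.
import torch
from torchvision.models.segmentation import (
    DeepLabV3_ResNet50_Weights,
    deeplabv3_resnet50,
)

weights = DeepLabV3_ResNet50_Weights.COCO_WITH_VOC_LABELS_V1
model = deeplabv3_resnet50(weights=weights).eval()

# 513x513 matches the input resolution listed in the model stats. Real images
# should be normalized first, e.g. with weights.transforms().
image = torch.rand(1, 3, 513, 513)
with torch.no_grad():
    logits = model(image)["out"]  # [1, 21, 513, 513]: 20 VOC classes + background
mask = logits.argmax(dim=1)       # [1, 513, 513] per-pixel class ids
print(mask.shape)
```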
@@ -90,6 +90,16 @@ device. This script does the following:
python -m qai_hub_models.models.deeplabv3_resnet50.export
```

+```
+Profile Job summary of DeepLabV3-ResNet50
+--------------------------------------------------
+Device: QCS8550 (Proxy) (12)
+Estimated Inference Time: 821.17 ms
+Estimated Peak Memory Range: 3.28-11.89 MB
+Compute Units: GPU (83) | Total (83)
+
+
+```
## How does this work?

This [export script](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/DeepLabV3-ResNet50/export.py)
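
For context on where the profile summary above comes from: the export module wraps Qualcomm AI Hub compile and profile jobs. The sketch below is a rough, editorial illustration of submitting those jobs directly with the `qai_hub` client; it assumes `qai_hub` is installed with a configured API token, reuses the torchvision checkpoint from the earlier sketch, and is not the export script's actual code.

```python
# Hedged sketch of compile + profile jobs via the AI Hub client (qai_hub);
# the export module above automates this flow end to end.
import torch
import qai_hub as hub
from torchvision.models.segmentation import (
    DeepLabV3_ResNet50_Weights,
    deeplabv3_resnet50,
)


class LogitsOnly(torch.nn.Module):
    """Wrap torchvision's DeepLabV3 so tracing returns a plain tensor."""

    def __init__(self):
        super().__init__()
        self.net = deeplabv3_resnet50(
            weights=DeepLabV3_ResNet50_Weights.COCO_WITH_VOC_LABELS_V1
        )

    def forward(self, image):
        return self.net(image)["out"]


traced = torch.jit.trace(LogitsOnly().eval(), torch.rand(1, 3, 513, 513))

# Device name and 513x513 input spec taken from the table above; the input
# name "image" is arbitrary.
device = hub.Device("Samsung Galaxy S23 Ultra")
compile_job = hub.submit_compile_job(
    model=traced,
    device=device,
    input_specs=dict(image=(1, 3, 513, 513)),
)
profile_job = hub.submit_profile_job(
    model=compile_job.get_target_model(),
    device=device,
)
# Latency, memory, and compute-unit results appear in the AI Hub dashboard,
# similar to the profile job summary shown above.
print(profile_job)
```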
@@ -169,6 +179,20 @@ spot check the output with expected output.
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).


+## Run demo on a cloud-hosted device
+
+You can also run the demo on-device.
+
+```bash
+python -m qai_hub_models.models.deeplabv3_resnet50.demo --on-device
+```
+
+**NOTE**: If you are running in a Jupyter Notebook or a Google Colab-like
+environment, please add the following to your cell (instead of the above).
+```
+%run -m qai_hub_models.models.deeplabv3_resnet50.demo -- --on-device
+```
+

## Deploying compiled model to Android