shreyajn committed on
Commit 81f3f4b · verified · 1 Parent(s): d16d11c

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +53 -119
README.md CHANGED
@@ -38,64 +38,35 @@ More details on model performance across various devices can be found
38
 
39
 | Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
40
  |---|---|---|---|---|---|---|---|---|
41
- | CLIPImageEncoder | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 19.929 ms | 0 - 34 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite) |
42
- | CLIPImageEncoder | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 16.659 ms | 1 - 3 MB | FP16 | NPU | [OpenAI-Clip.so](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.so) |
43
- | CLIPImageEncoder | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | ONNX | 40.603 ms | 0 - 369 MB | FP16 | NPU | [OpenAI-Clip.onnx](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.onnx) |
44
- | CLIPImageEncoder | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 14.675 ms | 0 - 365 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite) |
45
- | CLIPImageEncoder | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 11.943 ms | 1 - 19 MB | FP16 | NPU | [OpenAI-Clip.so](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.so) |
46
- | CLIPImageEncoder | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 30.392 ms | 0 - 222 MB | FP16 | NPU | [OpenAI-Clip.onnx](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.onnx) |
47
- | CLIPImageEncoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 14.028 ms | 0 - 362 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite) |
48
- | CLIPImageEncoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 8.959 ms | 1 - 302 MB | FP16 | NPU | Use Export Script |
49
- | CLIPImageEncoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 27.795 ms | 1 - 219 MB | FP16 | NPU | [OpenAI-Clip.onnx](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.onnx) |
50
- | CLIPImageEncoder | SA7255P ADP | SA7255P | TFLITE | 309.047 ms | 0 - 362 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite) |
51
- | CLIPImageEncoder | SA7255P ADP | SA7255P | QNN | 257.356 ms | 1 - 10 MB | FP16 | NPU | Use Export Script |
52
- | CLIPImageEncoder | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 19.951 ms | 0 - 34 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite) |
53
- | CLIPImageEncoder | SA8255 (Proxy) | SA8255P Proxy | QNN | 16.692 ms | 1 - 3 MB | FP16 | NPU | Use Export Script |
54
- | CLIPImageEncoder | SA8295P ADP | SA8295P | TFLITE | 24.429 ms | 0 - 314 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite) |
55
- | CLIPImageEncoder | SA8295P ADP | SA8295P | QNN | 20.246 ms | 1 - 18 MB | FP16 | NPU | Use Export Script |
56
- | CLIPImageEncoder | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 20.34 ms | 0 - 36 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite) |
57
- | CLIPImageEncoder | SA8650 (Proxy) | SA8650P Proxy | QNN | 16.699 ms | 1 - 3 MB | FP16 | NPU | Use Export Script |
58
- | CLIPImageEncoder | SA8775P ADP | SA8775P | TFLITE | 28.395 ms | 0 - 362 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite) |
59
- | CLIPImageEncoder | SA8775P ADP | SA8775P | QNN | 23.499 ms | 1 - 10 MB | FP16 | NPU | Use Export Script |
60
- | CLIPImageEncoder | QCS8275 (Proxy) | QCS8275 Proxy | TFLITE | 309.047 ms | 0 - 362 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite) |
61
- | CLIPImageEncoder | QCS8275 (Proxy) | QCS8275 Proxy | QNN | 257.356 ms | 1 - 10 MB | FP16 | NPU | Use Export Script |
62
- | CLIPImageEncoder | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 19.941 ms | 0 - 37 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite) |
63
- | CLIPImageEncoder | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 16.579 ms | 1 - 3 MB | FP16 | NPU | Use Export Script |
64
- | CLIPImageEncoder | QCS9075 (Proxy) | QCS9075 Proxy | TFLITE | 28.395 ms | 0 - 362 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite) |
65
- | CLIPImageEncoder | QCS9075 (Proxy) | QCS9075 Proxy | QNN | 23.499 ms | 1 - 10 MB | FP16 | NPU | Use Export Script |
66
- | CLIPImageEncoder | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 21.872 ms | 0 - 326 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite) |
67
- | CLIPImageEncoder | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 18.207 ms | 1 - 306 MB | FP16 | NPU | Use Export Script |
68
- | CLIPImageEncoder | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 17.329 ms | 1 - 1 MB | FP16 | NPU | Use Export Script |
69
- | CLIPImageEncoder | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 41.047 ms | 171 - 171 MB | FP16 | NPU | [OpenAI-Clip.onnx](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.onnx) |
70
- | CLIPTextEncoder | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 4.467 ms | 0 - 17 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite) |
71
- | CLIPTextEncoder | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 4.03 ms | 0 - 2 MB | FP16 | NPU | [OpenAI-Clip.so](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.so) |
72
- | CLIPTextEncoder | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | ONNX | 9.111 ms | 0 - 385 MB | FP16 | NPU | [OpenAI-Clip.onnx](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.onnx) |
73
- | CLIPTextEncoder | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 3.062 ms | 0 - 146 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite) |
74
- | CLIPTextEncoder | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 2.69 ms | 0 - 18 MB | FP16 | NPU | [OpenAI-Clip.so](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.so) |
75
- | CLIPTextEncoder | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 6.511 ms | 0 - 70 MB | FP16 | NPU | [OpenAI-Clip.onnx](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.onnx) |
76
- | CLIPTextEncoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 2.59 ms | 0 - 143 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite) |
77
- | CLIPTextEncoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 2.577 ms | 0 - 127 MB | FP16 | NPU | Use Export Script |
78
- | CLIPTextEncoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 8.644 ms | 0 - 68 MB | FP16 | NPU | [OpenAI-Clip.onnx](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.onnx) |
79
- | CLIPTextEncoder | SA7255P ADP | SA7255P | TFLITE | 59.152 ms | 0 - 139 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite) |
80
- | CLIPTextEncoder | SA7255P ADP | SA7255P | QNN | 49.955 ms | 0 - 10 MB | FP16 | NPU | Use Export Script |
81
- | CLIPTextEncoder | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 4.472 ms | 0 - 10 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite) |
82
- | CLIPTextEncoder | SA8255 (Proxy) | SA8255P Proxy | QNN | 4.03 ms | 0 - 3 MB | FP16 | NPU | Use Export Script |
83
- | CLIPTextEncoder | SA8295P ADP | SA8295P | TFLITE | 5.901 ms | 0 - 127 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite) |
84
- | CLIPTextEncoder | SA8295P ADP | SA8295P | QNN | 5.405 ms | 0 - 18 MB | FP16 | NPU | Use Export Script |
85
- | CLIPTextEncoder | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 4.488 ms | 0 - 13 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite) |
86
- | CLIPTextEncoder | SA8650 (Proxy) | SA8650P Proxy | QNN | 4.066 ms | 0 - 2 MB | FP16 | NPU | Use Export Script |
87
- | CLIPTextEncoder | SA8775P ADP | SA8775P | TFLITE | 6.573 ms | 0 - 139 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite) |
88
- | CLIPTextEncoder | SA8775P ADP | SA8775P | QNN | 5.754 ms | 0 - 10 MB | FP16 | NPU | Use Export Script |
89
- | CLIPTextEncoder | QCS8275 (Proxy) | QCS8275 Proxy | TFLITE | 59.152 ms | 0 - 139 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite) |
90
- | CLIPTextEncoder | QCS8275 (Proxy) | QCS8275 Proxy | QNN | 49.955 ms | 0 - 10 MB | FP16 | NPU | Use Export Script |
91
- | CLIPTextEncoder | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 4.393 ms | 0 - 25 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite) |
92
- | CLIPTextEncoder | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 4.029 ms | 0 - 3 MB | FP16 | NPU | Use Export Script |
93
- | CLIPTextEncoder | QCS9075 (Proxy) | QCS9075 Proxy | TFLITE | 6.573 ms | 0 - 139 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite) |
94
- | CLIPTextEncoder | QCS9075 (Proxy) | QCS9075 Proxy | QNN | 5.754 ms | 0 - 10 MB | FP16 | NPU | Use Export Script |
95
- | CLIPTextEncoder | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 5.067 ms | 0 - 134 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite) |
96
- | CLIPTextEncoder | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 4.491 ms | 0 - 131 MB | FP16 | NPU | Use Export Script |
97
- | CLIPTextEncoder | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 4.369 ms | 0 - 0 MB | FP16 | NPU | Use Export Script |
98
- | CLIPTextEncoder | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 9.289 ms | 124 - 124 MB | FP16 | NPU | [OpenAI-Clip.onnx](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.onnx) |
99
 
100
 
101
 
@@ -156,22 +127,13 @@ python -m qai_hub_models.models.openai_clip.export
156
  ```
157
  Profiling Results
158
  ------------------------------------------------------------
159
- CLIPImageEncoder
160
- Device : Samsung Galaxy S23 (13)
161
- Runtime : TFLITE
162
- Estimated inference time (ms) : 19.9
163
- Estimated peak memory usage (MB): [0, 34]
164
- Total # Ops : 659
165
- Compute Unit(s) : NPU (659 ops)
166
-
167
- ------------------------------------------------------------
168
- CLIPTextEncoder
169
- Device : Samsung Galaxy S23 (13)
170
- Runtime : TFLITE
171
- Estimated inference time (ms) : 4.5
172
- Estimated peak memory usage (MB): [0, 17]
173
- Total # Ops : 660
174
- Compute Unit(s) : NPU (658 ops) CPU (2 ops)
175
  ```
176
 
177
 
@@ -193,43 +155,26 @@ import qai_hub as hub
193
  from qai_hub_models.models.openai_clip import Model
194
 
195
  # Load the model
196
- model = Model.from_pretrained()
197
- image_encoder_model = model.image_encoder
198
- text_encoder_model = model.text_encoder
199
 
200
  # Device
201
- device = hub.Device("Samsung Galaxy S23")
202
 
203
  # Trace model
204
- image_encoder_input_shape = image_encoder_model.get_input_spec()
205
- image_encoder_sample_inputs = image_encoder_model.sample_inputs()
206
 
207
- traced_image_encoder_model = torch.jit.trace(image_encoder_model, [torch.tensor(data[0]) for _, data in image_encoder_sample_inputs.items()])
208
 
209
  # Compile model on a specific device
210
- image_encoder_compile_job = hub.submit_compile_job(
211
- model=traced_image_encoder_model ,
212
  device=device,
213
- input_specs=image_encoder_model.get_input_spec(),
214
  )
215
 
216
  # Get target model to run on-device
217
- image_encoder_target_model = image_encoder_compile_job.get_target_model()
218
- # Trace model
219
- text_encoder_input_shape = text_encoder_model.get_input_spec()
220
- text_encoder_sample_inputs = text_encoder_model.sample_inputs()
221
-
222
- traced_text_encoder_model = torch.jit.trace(text_encoder_model, [torch.tensor(data[0]) for _, data in text_encoder_sample_inputs.items()])
223
-
224
- # Compile model on a specific device
225
- text_encoder_compile_job = hub.submit_compile_job(
226
- model=traced_text_encoder_model ,
227
- device=device,
228
- input_specs=text_encoder_model.get_input_spec(),
229
- )
230
-
231
- # Get target model to run on-device
232
- text_encoder_target_model = text_encoder_compile_job.get_target_model()
233
 
234
  ```
235
 
@@ -241,15 +186,11 @@ After compiling models from step 1, models can be profiled on-device using
241
  provisioned in the cloud. Once the job is submitted, you can navigate to a
242
  provided job URL to view a variety of on-device performance metrics.
243
  ```python
244
- image_encoder_profile_job = hub.submit_profile_job(
245
- model=image_encoder_target_model,
246
- device=device,
247
- )
248
- text_encoder_profile_job = hub.submit_profile_job(
249
- model=text_encoder_target_model,
250
  device=device,
251
  )
252
-
253
  ```
254
 
255
  Step 3: **Verify on-device accuracy**
@@ -257,20 +198,13 @@ Step 3: **Verify on-device accuracy**
257
  To verify the accuracy of the model on-device, you can run on-device inference
258
 on sample input data on the same cloud-hosted device.
259
  ```python
260
- image_encoder_input_data = image_encoder_model.sample_inputs()
261
- image_encoder_inference_job = hub.submit_inference_job(
262
- model=image_encoder_target_model,
263
- device=device,
264
- inputs=image_encoder_input_data,
265
- )
266
- image_encoder_inference_job.download_output_data()
267
- text_encoder_input_data = text_encoder_model.sample_inputs()
268
- text_encoder_inference_job = hub.submit_inference_job(
269
- model=text_encoder_target_model,
270
  device=device,
271
- inputs=text_encoder_input_data,
272
  )
273
- text_encoder_inference_job.download_output_data()
274
 
275
  ```
276
 With the output of the model, you can compute metrics such as PSNR, relative errors or
 
38
 
39
 | Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
40
  |---|---|---|---|---|---|---|---|---|
41
+ | OpenAI-Clip | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 25.076 ms | 0 - 23 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.tflite) |
42
+ | OpenAI-Clip | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 21.005 ms | 1 - 3 MB | FP16 | NPU | [OpenAI-Clip.so](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.so) |
43
+ | OpenAI-Clip | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | ONNX | 25.599 ms | 0 - 214 MB | FP16 | NPU | [OpenAI-Clip.onnx](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.onnx) |
44
+ | OpenAI-Clip | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 17.736 ms | 0 - 386 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.tflite) |
45
+ | OpenAI-Clip | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 14.891 ms | 1 - 20 MB | FP16 | NPU | [OpenAI-Clip.so](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.so) |
46
+ | OpenAI-Clip | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 18.187 ms | 1 - 476 MB | FP16 | NPU | [OpenAI-Clip.onnx](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.onnx) |
47
+ | OpenAI-Clip | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 14.708 ms | 0 - 382 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.tflite) |
48
+ | OpenAI-Clip | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 11.195 ms | 0 - 386 MB | FP16 | NPU | Use Export Script |
49
+ | OpenAI-Clip | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 17.476 ms | 1 - 442 MB | FP16 | NPU | [OpenAI-Clip.onnx](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.onnx) |
50
+ | OpenAI-Clip | SA7255P ADP | SA7255P | TFLITE | 368.416 ms | 0 - 383 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.tflite) |
51
+ | OpenAI-Clip | SA7255P ADP | SA7255P | QNN | 306.652 ms | 1 - 9 MB | FP16 | NPU | Use Export Script |
52
+ | OpenAI-Clip | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 25.143 ms | 0 - 30 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.tflite) |
53
+ | OpenAI-Clip | SA8255 (Proxy) | SA8255P Proxy | QNN | 21.087 ms | 1 - 3 MB | FP16 | NPU | Use Export Script |
54
+ | OpenAI-Clip | SA8295P ADP | SA8295P | TFLITE | 30.487 ms | 0 - 336 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.tflite) |
55
+ | OpenAI-Clip | SA8295P ADP | SA8295P | QNN | 24.931 ms | 1 - 17 MB | FP16 | NPU | Use Export Script |
56
+ | OpenAI-Clip | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 25.129 ms | 0 - 22 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.tflite) |
57
+ | OpenAI-Clip | SA8650 (Proxy) | SA8650P Proxy | QNN | 21.0 ms | 1 - 3 MB | FP16 | NPU | Use Export Script |
58
+ | OpenAI-Clip | SA8775P ADP | SA8775P | TFLITE | 35.055 ms | 0 - 382 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.tflite) |
59
+ | OpenAI-Clip | SA8775P ADP | SA8775P | QNN | 28.917 ms | 1 - 10 MB | FP16 | NPU | Use Export Script |
60
+ | OpenAI-Clip | QCS8275 (Proxy) | QCS8275 Proxy | TFLITE | 368.416 ms | 0 - 383 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.tflite) |
61
+ | OpenAI-Clip | QCS8275 (Proxy) | QCS8275 Proxy | QNN | 306.652 ms | 1 - 9 MB | FP16 | NPU | Use Export Script |
62
+ | OpenAI-Clip | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 24.93 ms | 0 - 21 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.tflite) |
63
+ | OpenAI-Clip | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 21.023 ms | 1 - 3 MB | FP16 | NPU | Use Export Script |
64
+ | OpenAI-Clip | QCS9075 (Proxy) | QCS9075 Proxy | TFLITE | 35.055 ms | 0 - 382 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.tflite) |
65
+ | OpenAI-Clip | QCS9075 (Proxy) | QCS9075 Proxy | QNN | 28.917 ms | 1 - 10 MB | FP16 | NPU | Use Export Script |
66
+ | OpenAI-Clip | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 26.982 ms | 0 - 349 MB | FP16 | NPU | [OpenAI-Clip.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.tflite) |
67
+ | OpenAI-Clip | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 22.296 ms | 1 - 397 MB | FP16 | NPU | Use Export Script |
68
+ | OpenAI-Clip | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 21.774 ms | 1 - 1 MB | FP16 | NPU | Use Export Script |
69
+ | OpenAI-Clip | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 26.6 ms | 293 - 293 MB | FP16 | NPU | [OpenAI-Clip.onnx](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/OpenAI-Clip.onnx) |
70
 
71
 
72
 
 
127
  ```
128
  Profiling Results
129
  ------------------------------------------------------------
130
+ OpenAI-Clip
131
+ Device : Samsung Galaxy S23 (13)
132
+ Runtime : TFLITE
133
+ Estimated inference time (ms) : 25.1
134
+ Estimated peak memory usage (MB): [0, 23]
135
+ Total # Ops : 1320
136
+ Compute Unit(s) : NPU (1318 ops) CPU (2 ops)
137
  ```
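For a rough sense of scale, throughput follows directly from the estimated inference time in the summary above. A back-of-the-envelope sketch (assuming one query per forward pass; this is an illustration, not a measured number):

```python
# 25.1 ms comes from the profiling summary above (Samsung Galaxy S23, TFLITE).
inference_time_ms = 25.1
throughput_qps = 1000.0 / inference_time_ms  # forward passes per second
print(f"~{throughput_qps:.1f} inferences per second")  # ~39.8
```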
138
 
139
 
 
155
  from qai_hub_models.models.openai_clip import Model
156
 
157
  # Load the model
158
+ torch_model = Model.from_pretrained()
159
 
160
  # Device
161
+ device = hub.Device("Samsung Galaxy S24")
162
 
163
  # Trace model
164
+ input_shape = torch_model.get_input_spec()
165
+ sample_inputs = torch_model.sample_inputs()
166
 
167
+ pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
168
 
169
  # Compile model on a specific device
170
+ compile_job = hub.submit_compile_job(
171
+ model=pt_model,
172
  device=device,
173
+ input_specs=torch_model.get_input_spec(),
174
  )
175
 
176
  # Get target model to run on-device
177
+ target_model = compile_job.get_target_model()
178
 
179
  ```
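The list comprehension passed to `torch.jit.trace` in the block above simply flattens the `sample_inputs()` dictionary (input name → list of example arrays) into positional example tensors. A minimal standalone sketch of the same pattern, using a made-up `"image"` entry and shape rather than the model's real input spec:

```python
import torch

# Hypothetical sample_inputs()-style dict: input name -> list of example arrays.
# The "image" key and the 1x3x224x224 shape are illustrative assumptions only.
sample_inputs = {"image": [torch.rand(1, 3, 224, 224).numpy()]}

# Same pattern as above: take the first example array for each input and wrap
# it as a torch tensor, preserving the dictionary's insertion order.
example_inputs = [torch.tensor(data[0]) for _, data in sample_inputs.items()]

print([t.shape for t in example_inputs])  # -> [torch.Size([1, 3, 224, 224])]
```

Tracing against concrete example tensors like these is what gives `submit_compile_job` a fixed TorchScript graph to compile.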
180
 
 
186
  provisioned in the cloud. Once the job is submitted, you can navigate to a
187
  provided job URL to view a variety of on-device performance metrics.
188
  ```python
189
+ profile_job = hub.submit_profile_job(
190
+ model=target_model,
191
  device=device,
192
  )
193
+
194
  ```
195
 
196
  Step 3: **Verify on-device accuracy**
 
198
  To verify the accuracy of the model on-device, you can run on-device inference
199
 on sample input data on the same cloud-hosted device.
200
  ```python
201
+ input_data = torch_model.sample_inputs()
202
+ inference_job = hub.submit_inference_job(
203
+ model=target_model,
204
  device=device,
205
+ inputs=input_data,
206
  )
207
+ on_device_output = inference_job.download_output_data()
208
 
209
  ```
210
 With the output of the model, you can compute metrics such as PSNR, relative errors or
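As one concrete illustration of such a comparison, here is a minimal sketch that scores an on-device output against a reference output using PSNR and a maximum relative error. The array shapes and the added noise below are stand-in assumptions; in practice the arrays would come from `inference_job.download_output_data()` and from running `torch_model` locally.

```python
import numpy as np

def psnr(on_device: np.ndarray, reference: np.ndarray, eps: float = 1e-10) -> float:
    """Peak signal-to-noise ratio (dB) between two arrays of the same shape."""
    mse = np.mean((on_device.astype(np.float64) - reference.astype(np.float64)) ** 2)
    peak = np.max(np.abs(reference)) + eps  # use the reference's dynamic range as the peak
    return 10.0 * np.log10(peak ** 2 / (mse + eps))

def max_relative_error(on_device: np.ndarray, reference: np.ndarray, eps: float = 1e-10) -> float:
    """Largest element-wise relative error."""
    return float(np.max(np.abs(on_device - reference) / (np.abs(reference) + eps)))

# Toy stand-ins: in practice, load the on-device output downloaded above and
# compare it against the output of torch_model run on the same sample inputs.
reference = np.random.rand(1, 512).astype(np.float32)
on_device = reference + np.random.normal(scale=1e-3, size=reference.shape).astype(np.float32)

print(f"PSNR: {psnr(on_device, reference):.1f} dB")
print(f"Max relative error: {max_relative_error(on_device, reference):.2e}")
```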