oliver9523 committed on
Commit 49bf3b9 · 1 Parent(s): 55df69f

Upload 11 files
deployment/Detection task/README.md ADDED
@@ -0,0 +1,168 @@
# Exportable code

Exportable code is a .zip archive that contains a simple demo to run model inference and visualize the results.

## Structure of generated zip

- model
  - `model.xml`
  - `model.bin`
  - `config.json`
- python
  - model_wrappers (optional)
    - `__init__.py`
    - model wrappers required to run the demo
  - `README.md`
  - `LICENSE`
  - `demo.py`
  - `requirements.txt`

> **NOTE**: The zip archive contains model_wrappers only when [ModelAPI](https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos/common/python/openvino/model_zoo/model_api) has no appropriate standard model wrapper for the model.
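The layout above can be checked programmatically after export. A minimal sketch: the entry paths mirror the list above, and `model_wrappers` is skipped because it is optional.

```python
import zipfile

# Expected entries from the "Structure of generated zip" list above;
# model_wrappers is optional, so it is not required here.
EXPECTED = [
    "model/model.xml",
    "model/model.bin",
    "model/config.json",
    "python/README.md",
    "python/LICENSE",
    "python/demo.py",
    "python/requirements.txt",
]


def missing_entries(zip_path):
    """Return the expected entries that are absent from the archive."""
    with zipfile.ZipFile(zip_path) as archive:
        names = set(archive.namelist())
    return [entry for entry in EXPECTED if entry not in names]
```

An empty return value means the archive matches the documented layout.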
## Prerequisites

- [Python 3.8](https://www.python.org/downloads/)
- [Git](https://git-scm.com/)

## Install requirements to run demo

1. Install the [prerequisites](#prerequisites). You may also need to [install pip](https://pip.pypa.io/en/stable/installation/). For example, on Ubuntu run the following command to install pip:

   ```bash
   sudo apt install python3-pip
   ```

1. Create a clean virtual environment:

   One possible way to create a virtual environment is to use `virtualenv`:

   ```bash
   python -m pip install virtualenv
   python -m virtualenv <directory_for_environment>
   ```

   Before working inside the virtual environment, activate it:

   On Linux and macOS:

   ```bash
   source <directory_for_environment>/bin/activate
   ```

   On Windows:

   ```bash
   .\<directory_for_environment>\Scripts\activate
   ```

   Make sure that the environment contains [wheel](https://pypi.org/project/wheel/) by running the following command:

   ```bash
   python -m pip install wheel
   ```

   > **NOTE**: On Linux and macOS, you may need to type `python3` instead of `python`.

1. Install the requirements in the environment:

   ```bash
   python -m pip install -r requirements.txt
   ```

1. Add the `model_wrappers` package to `PYTHONPATH`:

   On Linux and macOS:

   ```bash
   export PYTHONPATH=$PYTHONPATH:/path/to/model_wrappers
   ```

   On Windows:

   ```bash
   set PYTHONPATH=%PYTHONPATH%;/path/to/model_wrappers
   ```
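The two platform-specific shell commands above can also be expressed in one piece of Python. A sketch, with `/path/to/model_wrappers` kept as a placeholder: `os.pathsep` is `:` on Linux/macOS and `;` on Windows, which is exactly why the shell commands differ.

```python
import os

# Placeholder path, as in the shell commands above.
wrappers_dir = "/path/to/model_wrappers"

# Append to any existing PYTHONPATH using the platform's separator,
# dropping the existing part if it is empty.
existing = os.environ.get("PYTHONPATH", "")
os.environ["PYTHONPATH"] = os.pathsep.join(p for p in (existing, wrappers_dir) if p)
```

Note that `PYTHONPATH` set this way only affects Python processes launched afterwards from the same process.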
## Usage

1. Running the `demo.py` application with the `-h` option yields the following usage message:

   ```bash
   usage: demo.py [-h] -i INPUT -m MODELS [MODELS ...] [-it {sync,async}] [-l] [--no_show] [-d {CPU,GPU}] [--output OUTPUT]

   Options:
     -h, --help            Show this help message and exit.
     -i INPUT, --input INPUT
                           Required. An input to process. The input must be a single image, a folder of images, a video file, or a camera id.
     -m MODELS [MODELS ...], --models MODELS [MODELS ...]
                           Required. Path to directory with trained model and configuration file. If you provide several models, the task chain
                           pipeline starts with the provided models in the order in which they were specified.
     -it {sync,async}, --inference_type {sync,async}
                           Optional. Type of inference for single model.
     -l, --loop            Optional. Enable reading the input in a loop.
     --no_show             Optional. Disables showing inference results on UI.
     -d {CPU,GPU}, --device {CPU,GPU}
                           Optional. Device to infer the model.
     --output OUTPUT       Optional. Output path to save input data with predictions.
   ```

2. As `model`, use the path to the model directory from the generated zip. As `input`, you can pass a single image, a folder of images, a video file, or a web camera id. For example, use the following command to do inference with a pre-trained model:

   ```bash
   python3 demo.py \
     -i <path_to_video>/inputVideo.mp4 \
     -m <path_to_model_directory>
   ```
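If you prefer to launch the demo from another script, the command line above can be assembled programmatically. A sketch: the paths are placeholders, and `demo.py` is assumed to be reachable from the current directory.

```python
import subprocess
import sys


def build_demo_command(input_path, model_dirs, inference_type="sync"):
    """Assemble the demo.py argument list mirroring the usage message above."""
    cmd = [sys.executable, "demo.py", "-i", str(input_path)]
    # -m accepts one or more model directories; several start a task chain.
    cmd += ["-m"] + [str(m) for m in model_dirs]
    cmd += ["-it", inference_type]
    return cmd


# Hypothetical invocation (uncomment to actually run the demo):
# subprocess.run(build_demo_command("inputVideo.mp4", ["../model"]), check=True)
```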
   You can press `Q` to stop inference while the demo is running.

   > **NOTE**: If you provide a single image as input, the demo processes and renders it quickly, then exits. To continuously
   > visualize inference results on the screen, apply the `--loop` option, which enforces processing a single image in a loop.
   > In this case, you can stop the demo by pressing `Q` or by killing the process in the terminal (`Ctrl+C` on Linux).
   >
   > **NOTE**: The default configuration contains info about pre- and post-processing for inference and is guaranteed to be correct.
   > You can also change `config.json`, which specifies the confidence threshold and the visualization color for each class, but any
   > changes should be made with caution.
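As an example of such a cautious change, the confidence threshold can be adjusted with a small script. A sketch: the key names follow the `config.json` shipped in this deployment, and disabling `result_based_confidence_threshold` so the manual value takes effect is an assumption.

```python
import json
from pathlib import Path


def set_confidence_threshold(config_path, threshold):
    """Rewrite confidence_threshold in the exported config.json in place."""
    path = Path(config_path)
    config = json.loads(path.read_text())
    # Key layout follows the config.json in this deployment's model directory.
    config["model_parameters"]["confidence_threshold"] = threshold
    # Assumption: a manually chosen threshold should not be replaced by the
    # result-based one, so the flag is switched off.
    config["model_parameters"]["result_based_confidence_threshold"] = False
    path.write_text(json.dumps(config, indent=2))
```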
3. To save inference results with predictions, specify a folder path using `--output`.
   It works for images, videos, image folders, and web cameras. To prevent issues, do not specify it together with the `--loop` parameter.

   ```bash
   python3 demo.py \
     --input <path_to_image>/inputImage.jpg \
     --models ../model \
     --output resulted_images
   ```
4. To run the demo on a web camera, you need to know its ID.
   On a Linux system, you can list camera devices by running:

   ```bash
   sudo apt-get install v4l-utils
   v4l2-ctl --list-devices
   ```

   The output will look like this:

   ```bash
   Integrated Camera (usb-0000:00:1a.0-1.6):
       /dev/video0
   ```

   After that, you can use `/dev/video0` as the camera ID for `--input`.
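A small helper can pull the device paths out of that output automatically. A sketch against the sample format shown above:

```python
import re


def list_video_devices(v4l2_output):
    """Extract /dev/video* paths from `v4l2-ctl --list-devices` output."""
    return re.findall(r"/dev/video\d+", v4l2_output)


# Sample text in the format shown above.
sample = "Integrated Camera (usb-0000:00:1a.0-1.6):\n\t/dev/video0\n"
```

In practice you would feed it the captured stdout of `v4l2-ctl --list-devices`.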
## Troubleshooting

1. If you have Internet access only through a proxy server, use pip with the proxy option, as demonstrated by the command below:

   ```bash
   python -m pip install --proxy http://<usr_name>:<password>@<proxyserver_name>:<port#> <pkg_name>
   ```

1. If you use an Anaconda environment, note that OpenVINO has limited [Conda support](https://docs.openvino.ai/2021.4/openvino_docs_install_guides_installing_openvino_conda.html), for Python 3.6 and 3.7 only, while the demo package requires Python 3.8. Please use other tools to create the environment (such as `venv` or `virtualenv`) and use `pip` as the package manager.

1. If you have problems using the `pip install` command, update pip with the following command:

   ```bash
   python -m pip install --upgrade pip
   ```
deployment/Detection task/model.json ADDED
@@ -0,0 +1,33 @@
{
  "id": "6483aa7459c02bd70e92382b",
  "name": "YOLOX OpenVINO INT8",
  "version": 1,
  "creation_date": "2023-06-09T22:40:52.451000+00:00",
  "model_format": "OpenVINO",
  "precision": [
    "INT8"
  ],
  "has_xai_head": false,
  "target_device": "CPU",
  "target_device_type": null,
  "performance": {
    "score": 0.9440353460972017
  },
  "size": 5854179,
  "latency": 0,
  "fps_throughput": 0,
  "optimization_type": "POT",
  "optimization_objectives": {},
  "model_status": "SUCCESS",
  "configurations": [
    {
      "name": "sample_size",
      "value": 300
    }
  ],
  "previous_revision_id": "6483aa7459c02bd70e92382a",
  "previous_trained_revision_id": "64837b2359c02bd70e910cd2",
  "optimization_methods": [
    "QUANTIZATION"
  ]
}
deployment/Detection task/model/config.json ADDED
@@ -0,0 +1,95 @@
{
  "type_of_model": "OTX_SSD",
  "converter_type": "DETECTION",
  "model_parameters": {
    "result_based_confidence_threshold": true,
    "confidence_threshold": 0.675000011920929,
    "use_ellipse_shapes": false,
    "labels": {
      "label_tree": {
        "type": "tree",
        "directed": true,
        "nodes": [],
        "edges": []
      },
      "label_groups": [
        {
          "_id": "6483613d18fb8c1c529cb064",
          "name": "Default group",
          "label_ids": [
            "6483613d18fb8c1c529cb061"
          ],
          "relation_type": "EXCLUSIVE"
        },
        {
          "_id": "6483613d18fb8c1c529cb066",
          "name": "No Object",
          "label_ids": [
            "6483613d18fb8c1c529cb065"
          ],
          "relation_type": "EMPTY_LABEL"
        }
      ],
      "all_labels": {
        "6483613d18fb8c1c529cb061": {
          "_id": "6483613d18fb8c1c529cb061",
          "name": "bird",
          "color": {
            "red": 255,
            "green": 0,
            "blue": 0,
            "alpha": 255
          },
          "hotkey": "",
          "domain": "DETECTION",
          "creation_date": "2023-06-09T17:28:29.943000",
          "is_empty": false,
          "is_anomalous": false
        },
        "6483613d18fb8c1c529cb065": {
          "_id": "6483613d18fb8c1c529cb065",
          "name": "No Object",
          "color": {
            "red": 0,
            "green": 0,
            "blue": 0,
            "alpha": 255
          },
          "hotkey": "",
          "domain": "DETECTION",
          "creation_date": "2023-06-09T17:28:29.945000",
          "is_empty": true,
          "is_anomalous": false
        }
      }
    }
  },
  "tiling_parameters": {
    "visible_in_ui": true,
    "type": "PARAMETER_GROUP",
    "enable_tiling": false,
    "enable_tile_classifier": false,
    "enable_adaptive_params": true,
    "tile_size": 400,
    "tile_overlap": 0.2,
    "tile_max_number": 1500,
    "tile_ir_scale_factor": 2.0,
    "tile_sampling_ratio": 1.0,
    "object_tile_ratio": 0.03,
    "header": "Tiling Parameters",
    "description": "Tiling Parameters",
    "_ParameterGroup__metadata_overrides": {},
    "groups": [],
    "parameters": [
      "enable_adaptive_params",
      "enable_tile_classifier",
      "enable_tiling",
      "object_tile_ratio",
      "tile_ir_scale_factor",
      "tile_max_number",
      "tile_overlap",
      "tile_sampling_ratio",
      "tile_size"
    ]
  }
}
deployment/Detection task/model/model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3416c16f40670b254987c2ce1588ac7e99592907addbc4c0824f7b5638bedf45
size 5086055
deployment/Detection task/model/model.xml ADDED
The diff for this file is too large to render. See raw diff
 
deployment/Detection task/python/LICENSE ADDED
@@ -0,0 +1,201 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright (C) 2018-2021 Intel Corporation

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
deployment/Detection task/python/demo.py ADDED
@@ -0,0 +1,132 @@
"""Demo based on ModelAPI."""
# Copyright (C) 2021-2022 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

import os
import sys
from argparse import SUPPRESS, ArgumentParser
from pathlib import Path

os.environ["FEATURE_FLAGS_OTX_ACTION_TASKS"] = "1"

# pylint: disable=no-name-in-module, import-error
from otx.api.usecases.exportable_code.demo.demo_package import (
    AsyncExecutor,
    ChainExecutor,
    ModelContainer,
    SyncExecutor,
    create_visualizer,
)


def build_argparser():
    """Parses command line arguments."""
    parser = ArgumentParser(add_help=False)
    args = parser.add_argument_group("Options")
    args.add_argument(
        "-h",
        "--help",
        action="help",
        default=SUPPRESS,
        help="Show this help message and exit.",
    )
    args.add_argument(
        "-i",
        "--input",
        required=True,
        help="Required. An input to process. The input must be a single image, "
        "a folder of images, video file or camera id.",
    )
    args.add_argument(
        "-m",
        "--models",
        help="Required. Path to directory with trained model and configuration file. "
        "If you provide several models you will start the task chain pipeline with "
        "the provided models in the order in which they were specified.",
        nargs="+",
        required=True,
        type=Path,
    )
    args.add_argument(
        "-it",
        "--inference_type",
        help="Optional. Type of inference for single model.",
        choices=["sync", "async"],
        default="sync",
        type=str,
    )
    args.add_argument(
        "-l",
        "--loop",
        help="Optional. Enable reading the input in a loop.",
        default=False,
        action="store_true",
    )
    args.add_argument(
        "--no_show",
        help="Optional. Disables showing inference results on UI.",
        default=False,
        action="store_true",
    )
    args.add_argument(
        "-d",
        "--device",
        help="Optional. Device to infer the model.",
        choices=["CPU", "GPU"],
        default="CPU",
        type=str,
    )
    args.add_argument(
        "--output",
        default=None,
        type=str,
        help="Optional. Output path to save input data with predictions.",
    )

    return parser


EXECUTORS = {
    "sync": SyncExecutor,
    "async": AsyncExecutor,
    "chain": ChainExecutor,
}


def get_inferencer_class(type_inference, models):
    """Return class for inference of models."""
    if len(models) > 1:
        type_inference = "chain"
        print("You started the task chain pipeline with the provided models in the order in which they were specified")
    return EXECUTORS[type_inference]


def main():
    """Main function that is used to run demo."""
    args = build_argparser().parse_args()

    if args.loop and args.output:
        raise ValueError("--loop and --output cannot be both specified")

    # create models
    models = []
    for model_dir in args.models:
        model = ModelContainer(model_dir, device=args.device)
        models.append(model)

    inferencer = get_inferencer_class(args.inference_type, models)

    # create visualizer
    visualizer = create_visualizer(models[-1].task_type, no_show=args.no_show, output=args.output)

    if len(models) == 1:
        models = models[0]

    # create inferencer and run
    demo = inferencer(models, visualizer)
    demo.run(args.input, args.loop and not args.no_show)


if __name__ == "__main__":
    sys.exit(main() or 0)
deployment/Detection task/python/model_wrappers/__init__.py ADDED
@@ -0,0 +1,19 @@
"""Model Wrapper Initialization of OTX Detection."""

# Copyright (C) 2021 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions
# and limitations under the License.

from .openvino_models import OTXMaskRCNNModel, OTXSSDModel

__all__ = ["OTXMaskRCNNModel", "OTXSSDModel"]
deployment/Detection task/python/model_wrappers/openvino_models.py ADDED
@@ -0,0 +1,194 @@
"""OTXMaskRCNNModel & OTXSSDModel of OTX Detection."""

# Copyright (C) 2022 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions
# and limitations under the License.

from typing import Dict

import numpy as np

try:
    from openvino.model_zoo.model_api.models.instance_segmentation import MaskRCNNModel
    from openvino.model_zoo.model_api.models.ssd import SSD, find_layer_by_name
    from openvino.model_zoo.model_api.models.utils import Detection
except ImportError as e:
    import warnings

    warnings.warn(f"{e}: ModelAPI was not found.")


class OTXMaskRCNNModel(MaskRCNNModel):
    """OpenVINO model wrapper for OTX MaskRCNN model."""

    __model__ = "OTX_MaskRCNN"

    def __init__(self, model_adapter, configuration, preload=False):
        super().__init__(model_adapter, configuration, preload)
        self.resize_mask = True

    def _check_io_number(self, number_of_inputs, number_of_outputs):
        """Checks whether the number of model inputs/outputs is supported.

        Args:
            number_of_inputs (int, Tuple(int)): number of inputs supported by wrapper.
                Use -1 to omit the check
            number_of_outputs (int, Tuple(int)): number of outputs supported by wrapper.
                Use -1 to omit the check

        Raises:
            WrapperError: if the model has unsupported number of inputs/outputs
        """
        super()._check_io_number(number_of_inputs, -1)

    def _get_outputs(self):
        output_match_dict = {}
        output_names = ["boxes", "labels", "masks", "feature_vector", "saliency_map"]
        for output_name in output_names:
            for node_name, node_meta in self.outputs.items():
                if output_name in node_meta.names:
                    output_match_dict[output_name] = node_name
                    break
        return output_match_dict

    def postprocess(self, outputs, meta):
        """Post process function for OTX MaskRCNN model."""
        # pylint: disable-msg=too-many-locals
        # FIXME: here, batch dim of IR must be 1
        boxes = outputs[self.output_blob_name["boxes"]]
        if boxes.shape[0] == 1:
            boxes = boxes.squeeze(0)
        assert boxes.ndim == 2
        masks = outputs[self.output_blob_name["masks"]]
        if masks.shape[0] == 1:
            masks = masks.squeeze(0)
        assert masks.ndim == 3
        classes = outputs[self.output_blob_name["labels"]].astype(np.uint32)
        if classes.shape[0] == 1:
            classes = classes.squeeze(0)
        assert classes.ndim == 1
        if self.is_segmentoly:
            scores = outputs[self.output_blob_name["scores"]]
        else:
            scores = boxes[:, 4]
            boxes = boxes[:, :4]
            classes += 1

        # Filter out detections with low confidence.
        detections_filter = scores > self.confidence_threshold  # pylint: disable=no-member
        scores = scores[detections_filter]
        boxes = boxes[detections_filter]
        masks = masks[detections_filter]
        classes = classes[detections_filter]

        scale_x = meta["resized_shape"][1] / meta["original_shape"][1]
        scale_y = meta["resized_shape"][0] / meta["original_shape"][0]
        boxes[:, 0::2] /= scale_x
        boxes[:, 1::2] /= scale_y

        resized_masks = []
        for box, cls, raw_mask in zip(boxes, classes, masks):
            raw_cls_mask = raw_mask[cls, ...] if self.is_segmentoly else raw_mask
            if self.resize_mask:
                resized_masks.append(self._segm_postprocess(box, raw_cls_mask, *meta["original_shape"][:-1]))
            else:
                resized_masks.append(raw_cls_mask)

        return scores, classes, boxes, resized_masks

    def segm_postprocess(self, *args, **kwargs):
        """Post-process for segmentation masks."""
        return self._segm_postprocess(*args, **kwargs)

    def disable_mask_resizing(self):
        """Disable mask resizing.

        There is no need to resize mask in tile as it will be processed at the end.
        """
        self.resize_mask = False


class OTXSSDModel(SSD):
    """OpenVINO model wrapper for OTX SSD model."""

    __model__ = "OTX_SSD"

    def __init__(self, model_adapter, configuration=None, preload=False):
        # pylint: disable-next=bad-super-call
        super(SSD, self).__init__(model_adapter, configuration, preload)
        self.image_info_blob_name = self.image_info_blob_names[0] if len(self.image_info_blob_names) == 1 else None
        self.output_parser = BatchBoxesLabelsParser(
            self.outputs,
            self.inputs[self.image_blob_name].shape[2:][::-1],
        )

    def _get_outputs(self) -> Dict:
        """Match the output names with graph node index."""
        output_match_dict = {}
        output_names = ["boxes", "labels", "feature_vector", "saliency_map"]
        for output_name in output_names:
            for node_name, node_meta in self.outputs.items():
                if output_name in node_meta.names:
                    output_match_dict[output_name] = node_name
                    break
        return output_match_dict


class BatchBoxesLabelsParser:
    """Batched output parser."""

    def __init__(self, layers, input_size, labels_layer="labels", default_label=0):
        try:
            self.labels_layer = find_layer_by_name(labels_layer, layers)
        except ValueError:
            self.labels_layer = None
            self.default_label = default_label

        try:
            self.bboxes_layer = self.find_layer_bboxes_output(layers)
        except ValueError:
            self.bboxes_layer = find_layer_by_name("boxes", layers)

        self.input_size = input_size

    @staticmethod
    def find_layer_bboxes_output(layers):
167
+ """find_layer_bboxes_output."""
168
+ filter_outputs = [name for name, data in layers.items() if len(data.shape) == 3 and data.shape[-1] == 5]
169
+ if not filter_outputs:
170
+ raise ValueError("Suitable output with bounding boxes is not found")
171
+ if len(filter_outputs) > 1:
172
+ raise ValueError("More than 1 candidate for output with bounding boxes.")
173
+ return filter_outputs[0]
174
+
175
+ def __call__(self, outputs):
176
+ """Parse bboxes."""
177
+ # FIXME: here, batch dim of IR must be 1
178
+ bboxes = outputs[self.bboxes_layer]
179
+ if bboxes.shape[0] == 1:
180
+ bboxes = bboxes.squeeze(0)
181
+ assert bboxes.ndim == 2
182
+ scores = bboxes[:, 4]
183
+ bboxes = bboxes[:, :4]
184
+ bboxes[:, 0::2] /= self.input_size[0]
185
+ bboxes[:, 1::2] /= self.input_size[1]
186
+ if self.labels_layer:
187
+ labels = outputs[self.labels_layer]
188
+ else:
189
+ labels = np.full(len(bboxes), self.default_label, dtype=bboxes.dtype)
190
+ if labels.shape[0] == 1:
191
+ labels = labels.squeeze(0)
192
+
193
+ detections = [Detection(*bbox, score, label) for label, score, bbox in zip(labels, scores, bboxes)]
194
+ return detections
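For reference, the box normalization that `BatchBoxesLabelsParser.__call__` performs can be sketched standalone. This is a minimal numpy-only sketch: the plain dict, layer names, and synthetic arrays here are illustrative stand-ins for the ModelAPI output metadata, not the wrapper's actual interface.

```python
import numpy as np


def parse_batch_boxes(outputs, input_size, bboxes_layer="boxes", labels_layer="labels"):
    """Mimic BatchBoxesLabelsParser.__call__ for a batch-1 output dict.

    `outputs` maps layer names to raw arrays; `input_size` is (width, height).
    """
    bboxes = outputs[bboxes_layer]
    if bboxes.shape[0] == 1:  # drop the leading batch dimension, as the wrapper does
        bboxes = bboxes.squeeze(0)
    scores = bboxes[:, 4]  # column 4 carries the confidence score
    boxes = bboxes[:, :4].astype(np.float64)
    boxes[:, 0::2] /= input_size[0]  # normalize x1, x2 by input width
    boxes[:, 1::2] /= input_size[1]  # normalize y1, y2 by input height
    labels = np.asarray(outputs[labels_layer])
    if labels.shape[0] == 1:
        labels = labels.squeeze(0)
    return list(zip(labels, scores, boxes))


# Synthetic raw outputs shaped like a batch-1 IR: (1, N, 5) boxes, (1, N) labels.
raw = {
    "boxes": np.array([[[32.0, 64.0, 96.0, 128.0, 0.9]]], dtype=np.float32),
    "labels": np.array([[1]], dtype=np.int64),
}
detections = parse_batch_boxes(raw, input_size=(320, 256))
label, score, box = detections[0]
# box is normalized to [0.1, 0.25, 0.3, 0.5] with score 0.9
```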
deployment/Detection task/python/requirements.txt ADDED
@@ -0,0 +1,4 @@
+ openvino==2022.3.0
+ openmodelzoo-modelapi==2022.3.0
+ otx==1.2.3.3
+ numpy>=1.21.0,<=1.23.5  # np.bool was removed in numpy 1.24.0 but is still used by the openvino runtime
deployment/project.json ADDED
@@ -0,0 +1,67 @@
+ {
+   "id": "6483613d18fb8c1c529cb05a",
+   "name": "birds",
+   "creation_time": "2023-06-09T17:28:29.944000+00:00",
+   "creator_id": "[email protected]",
+   "pipeline": {
+     "tasks": [
+       {
+         "id": "6483613d18fb8c1c529cb05b",
+         "title": "Dataset",
+         "task_type": "dataset"
+       },
+       {
+         "id": "6483613d18fb8c1c529cb05e",
+         "title": "Detection task",
+         "task_type": "detection",
+         "labels": [
+           {
+             "id": "6483613d18fb8c1c529cb061",
+             "name": "bird",
+             "is_anomalous": false,
+             "color": "#ff0000ff",
+             "hotkey": "",
+             "is_empty": false,
+             "group": "Default group",
+             "parent_id": null
+           },
+           {
+             "id": "6483613d18fb8c1c529cb065",
+             "name": "No Object",
+             "is_anomalous": false,
+             "color": "#000000ff",
+             "hotkey": "",
+             "is_empty": true,
+             "group": "No Object",
+             "parent_id": null
+           }
+         ],
+         "label_schema_id": "6483613d18fb8c1c529cb067"
+       }
+     ],
+     "connections": [
+       {
+         "from": "6483613d18fb8c1c529cb05b",
+         "to": "6483613d18fb8c1c529cb05e"
+       }
+     ]
+   },
+   "datasets": [
+     {
+       "id": "6483613d18fb8c1c529cb062",
+       "name": "Dataset",
+       "use_for_training": true,
+       "creation_time": "2023-06-09T17:28:29.944000+00:00"
+     },
+     {
+       "id": "6483613e18fb8c1c529cb068",
+       "name": "Testing set 1",
+       "use_for_training": false,
+       "creation_time": "2023-06-09T17:28:30.055000+00:00"
+     }
+   ],
+   "thumbnail": "/api/v1/workspaces/6487656fb7efbf83c9b9ec35/projects/6483613d18fb8c1c529cb05a/thumbnail",
+   "performance": {
+     "score": 0.939078751857355
+   }
+ }
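If the deployment metadata needs to be consumed programmatically, the detection task's labels can be pulled out of `project.json` as sketched below. The inlined JSON is an abridged stand-in for the full file; in practice you would `json.load` the extracted `deployment/project.json`.

```python
import json

# Abridged stand-in for deployment/project.json.
project = json.loads("""
{
  "pipeline": {
    "tasks": [
      {"title": "Dataset", "task_type": "dataset"},
      {"title": "Detection task", "task_type": "detection",
       "labels": [
         {"name": "bird", "is_empty": false},
         {"name": "No Object", "is_empty": true}
       ]}
    ]
  }
}
""")

# Pick out the detection task and its real (non-empty) labels.
det_task = next(t for t in project["pipeline"]["tasks"] if t["task_type"] == "detection")
label_names = [lbl["name"] for lbl in det_task["labels"] if not lbl["is_empty"]]
print(label_names)  # ['bird']
```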