oliver9523 committed
Commit 124fb18 · verified · 1 Parent(s): 66711b4

Upload 11 files
deployment/Instance segmentation task/README.md ADDED
@@ -0,0 +1,151 @@
# Exportable code

Exportable code is a .zip archive that contains a simple demo for running model inference and visualizing the results.

## Structure of generated zip

- `README.md`
- model
  - `model.xml`
  - `model.bin`
  - `config.json`
- python
  - model_wrappers (Optional)
    - `__init__.py`
    - other model wrapper files required to run the demo
  - `LICENSE`
  - `demo.py`
  - `requirements.txt`

> **NOTE**: The zip archive contains `model_wrappers` only when [ModelAPI](https://github.com/openvinotoolkit/model_api) has no appropriate standard model wrapper for the model.
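In this particular archive the bundled wrapper subclasses ModelAPI's standard Mask R-CNN wrapper (see `python/model_wrappers/openvino_models.py` further down in this commit). The snippet below is a purely illustrative sketch, not part of the demo itself; it assumes it is run from the `python/` folder of the unpacked archive with the requirements installed.

```python
# Illustrative only: the bundled custom wrapper extends ModelAPI's standard one.
from openvino.model_api.models.instance_segmentation import MaskRCNNModel
from model_wrappers.openvino_models import OTXMaskRCNNModel

assert issubclass(OTXMaskRCNNModel, MaskRCNNModel)
```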
## Prerequisites

- [Python 3.8](https://www.python.org/downloads/)
- [Git](https://git-scm.com/)

## Install requirements to run demo

1. Install [prerequisites](#prerequisites). You may also need to [install pip](https://pip.pypa.io/en/stable/installation/). For example, on Ubuntu execute the following command to get pip installed:

   ```bash
   sudo apt install python3-pip
   ```

1. Create a clean virtual environment:

   One of the possible ways to create a virtual environment is to use `virtualenv`:

   ```bash
   python -m pip install virtualenv
   python -m virtualenv <directory_for_environment>
   ```

   Before working inside the virtual environment, it has to be activated:

   On Linux and macOS:

   ```bash
   source <directory_for_environment>/bin/activate
   ```

   On Windows:

   ```bash
   .\<directory_for_environment>\Scripts\activate
   ```

   Make sure that the environment contains [wheel](https://pypi.org/project/wheel/) by calling the following command:

   ```bash
   python -m pip install wheel
   ```

   > **NOTE**: On Linux and macOS, you may need to type `python3` instead of `python`.

1. Install the requirements in the environment:

   ```bash
   python -m pip install -r requirements.txt
   ```

## Usage

1. Running the `demo.py` application with the `-h` option yields the following usage message:

   ```bash
   usage: demo.py [-h] -i INPUT -m MODELS [MODELS ...] [-it {sync,async}] [-l] [--no_show] [-d {CPU,GPU}] [--output OUTPUT]

   Options:
     -h, --help            Show this help message and exit.
     -i INPUT, --input INPUT
                           Required. An input to process. The input must be a single image, a folder of images, video file or camera id.
     -m MODELS [MODELS ...], --models MODELS [MODELS ...]
                           Optional. Path to directory with trained model and configuration file. If you provide several models you will start the task chain pipeline with the provided models in the order in which they were specified. Default value points to deployed model folder '../model'.
     -it {sync,async}, --inference_type {sync,async}
                           Optional. Type of inference for single model.
     -l, --loop            Optional. Enable reading the input in a loop.
     --no_show             Optional. Disables showing inference results on UI.
     -d {CPU,GPU}, --device {CPU,GPU}
                           Optional. Device to infer the model.
     --output OUTPUT       Optional. Output path to save input data with predictions.
   ```

2. For the `--models` parameter the default value `../model` is used, or you can specify another path to the model directory from the generated zip. As `--input` you can pass a single image, a folder of images, a video file, or a web camera id. For example, you can run inference with the pre-trained model using the following command:

   ```bash
   python3 demo.py -i <path_to_video>/inputVideo.mp4
   ```

   You can press `Q` to stop inference while the demo is running.

   > **NOTE**: If you provide a single image as input, the demo processes and renders it quickly, then exits. To continuously
   > visualize inference results on the screen, apply the `--loop` option, which enforces processing a single image in a loop.
   > In this case, you can stop the demo by pressing the `Q` key or killing the process in the terminal (`Ctrl+C` on Linux).
   >
   > **NOTE**: The default configuration contains information about the pre- and post-processing for inference and is guaranteed to be correct.
   > You can also change `config.json`, which specifies the confidence threshold and the visualization color for each class, but any
   > changes should be made with caution; a minimal sketch of such an edit is shown after this list.

3. To save inference results with the predictions drawn on them, specify the output folder path using `--output`.
   It works for images, videos, image folders and web cameras. To prevent issues, do not specify it together with the `--loop` parameter.

   ```bash
   python3 demo.py \
     --input <path_to_image>/inputImage.jpg \
     --models ../model \
     --output resulted_images
   ```

4. To run the demo on a web camera, you need to know its ID.
   On a Linux system, you can check the list of camera devices by running:

   ```bash
   sudo apt-get install v4l-utils
   v4l2-ctl --list-devices
   ```

   The output will look like this:

   ```bash
   Integrated Camera (usb-0000:00:1a.0-1.6):
           /dev/video0
   ```

   After that, you can use `/dev/video0` as the camera ID for `--input`, for example `python3 demo.py -i /dev/video0`.
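As mentioned in the note under step 2, `config.json` can be edited, with caution, to adjust the confidence threshold. The snippet below is a minimal, purely illustrative sketch using Python's standard `json` module; the `model_parameters` / `confidence_threshold` keys match the bundled `config.json`, while the `../model` path and the `0.5` value are example assumptions.

```python
import json
from pathlib import Path

# Path to the deployed model configuration (assumed layout of the unpacked zip).
config_path = Path("../model/config.json")
config = json.loads(config_path.read_text())

# Example: lower the confidence threshold so that less confident detections are kept.
# 0.5 is an arbitrary illustrative value; the deployed default is higher.
config["model_parameters"]["confidence_threshold"] = 0.5

config_path.write_text(json.dumps(config, indent=4))
```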
## Troubleshooting

1. If you have access to the Internet only through a proxy server, use pip with the proxy options as demonstrated by the command below:

   ```bash
   python -m pip install --proxy http://<usr_name>:<password>@<proxyserver_name>:<port#> <pkg_name>
   ```

1. If you use an Anaconda environment, keep in mind that OpenVINO has limited [Conda support](https://docs.openvino.ai/2021.4/openvino_docs_install_guides_installing_openvino_conda.html), and only for Python 3.6 and 3.7, while the demo package requires Python 3.8. Therefore, use other tools to create the environment (such as `venv` or `virtualenv`) and use `pip` as the package manager.

1. If you have problems with the `pip install` command, update pip with the following command:

   ```bash
   python -m pip install --upgrade pip
   ```
deployment/Instance segmentation task/model.json CHANGED
@@ -1,37 +1,26 @@
 {
-  "id": "6483248259c02bd70e8df1f4",
-  "name": "MaskRCNN-ResNet50 OpenVINO INT8",
-  "version": 1,
-  "creation_date": "2023-06-09T13:09:22.699000+00:00",
+  "id": "660fb1e0824143cee70ff576",
+  "name": "MaskRCNN-ResNet50 OpenVINO FP16",
+  "version": 3,
+  "creation_date": "2024-04-05T08:10:08.633000+00:00",
   "model_format": "OpenVINO",
   "precision": [
-    "INT8"
+    "FP16"
   ],
   "has_xai_head": false,
   "target_device": "CPU",
   "target_device_type": null,
   "performance": {
-    "score": 0.9397590361445782
+    "score": 0.9641873278236913
   },
-  "size": 247759204,
+  "size": 90856771,
   "latency": 0,
   "fps_throughput": 0,
-  "optimization_type": "NNCF",
+  "optimization_type": "MO",
   "optimization_objectives": {},
   "model_status": "SUCCESS",
-  "configurations": [
-    {
-      "name": "max_accuracy_drop",
-      "value": 0.01
-    },
-    {
-      "name": "filter_pruning",
-      "value": false
-    }
-  ],
-  "previous_revision_id": "6483248259c02bd70e8df1f5",
-  "previous_trained_revision_id": "648311e459c02bd70e8db073",
-  "optimization_methods": [
-    "QUANTIZATION"
-  ]
+  "configurations": [],
+  "previous_revision_id": "660fb1e0824143cee70ff573",
+  "previous_trained_revision_id": "660fb1e0824143cee70ff573",
+  "optimization_methods": []
 }
deployment/Instance segmentation task/model/config.json CHANGED
@@ -3,9 +3,8 @@
   "converter_type": "INSTANCE_SEGMENTATION",
   "model_parameters": {
     "result_based_confidence_threshold": true,
-    "confidence_threshold": 0.8500000238418579,
+    "confidence_threshold": 0.824999988079071,
     "use_ellipse_shapes": false,
-    "resize_type": "fit_to_window",
     "labels": {
       "label_tree": {
         "type": "tree",
deployment/Instance segmentation task/model/model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3128a90931c4b7b51ec0218fe952bac97d13b1763fc1f9cef1ecb620a22412e3
-size 56328440
+oid sha256:22e3375e168a776d41eab926a797814a7c2e9cc5f2cd5c6134ca9820f191725a
+size 90038530
deployment/Instance segmentation task/model/model.xml CHANGED
The diff for this file is too large to render. See raw diff
 
deployment/Instance segmentation task/python/demo.py CHANGED
@@ -41,11 +41,12 @@ def build_argparser():
     args.add_argument(
         "-m",
         "--models",
-        help="Required. Path to directory with trained model and configuration file. "
+        help="Optional. Path to directory with trained model and configuration file. "
         "If you provide several models you will start the task chain pipeline with "
-        "the provided models in the order in which they were specified.",
+        "the provided models in the order in which they were specified. Default value "
+        "points to deployed model folder '../model'.",
         nargs="+",
-        required=True,
+        default=[Path("../model")],
         type=Path,
     )
     args.add_argument(
deployment/Instance segmentation task/python/model_wrappers/openvino_models.py CHANGED
@@ -16,16 +16,11 @@
 
 from typing import Dict
 
+import cv2
 import numpy as np
-
-try:
-    from openvino.model_zoo.model_api.models.instance_segmentation import MaskRCNNModel
-    from openvino.model_zoo.model_api.models.ssd import SSD, find_layer_by_name
-    from openvino.model_zoo.model_api.models.utils import Detection
-except ImportError as e:
-    import warnings
-
-    warnings.warn(f"{e}: ModelAPI was not found.")
+from openvino.model_api.models.instance_segmentation import MaskRCNNModel, _expand_box, _segm_postprocess
+from openvino.model_api.models.ssd import SSD, find_layer_by_name
+from openvino.model_api.models.utils import Detection
 
 
 class OTXMaskRCNNModel(MaskRCNNModel):
@@ -92,24 +87,105 @@ class OTXMaskRCNNModel(MaskRCNNModel):
         masks = masks[detections_filter]
         classes = classes[detections_filter]
 
-        scale_x = meta["resized_shape"][1] / meta["original_shape"][1]
-        scale_y = meta["resized_shape"][0] / meta["original_shape"][0]
-        boxes[:, 0::2] /= scale_x
-        boxes[:, 1::2] /= scale_y
+        inputImgWidth, inputImgHeight = (
+            meta["original_shape"][1],
+            meta["original_shape"][0],
+        )
+        invertedScaleX, invertedScaleY = (
+            inputImgWidth / self.orig_width,
+            inputImgHeight / self.orig_height,
+        )
+        padLeft, padTop = 0, 0
+        if "fit_to_window" == self.resize_type or "fit_to_window_letterbox" == self.resize_type:
+            invertedScaleX = invertedScaleY = max(invertedScaleX, invertedScaleY)
+            if "fit_to_window_letterbox" == self.resize_type:
+                padLeft = (self.orig_width - round(inputImgWidth / invertedScaleX)) // 2
+                padTop = (self.orig_height - round(inputImgHeight / invertedScaleY)) // 2
+
+        boxes -= (padLeft, padTop, padLeft, padTop)
+        boxes *= (invertedScaleX, invertedScaleY, invertedScaleX, invertedScaleY)
+        np.around(boxes, out=boxes)
+        np.clip(
+            boxes,
+            0.0,
+            [inputImgWidth, inputImgHeight, inputImgWidth, inputImgHeight],
+            out=boxes,
+        )
 
         resized_masks = []
         for box, cls, raw_mask in zip(boxes, classes, masks):
             raw_cls_mask = raw_mask[cls, ...] if self.is_segmentoly else raw_mask
             if self.resize_mask:
-                resized_masks.append(self._segm_postprocess(box, raw_cls_mask, *meta["original_shape"][:-1]))
+                resized_masks.append(_segm_postprocess(box, raw_cls_mask, *meta["original_shape"][:-1]))
             else:
                 resized_masks.append(raw_cls_mask)
 
         return scores, classes, boxes, resized_masks
 
+    def get_saliency_map_from_prediction(self, outputs, meta, num_classes):
+        """Post process function for saliency map of OTX MaskRCNN model."""
+        boxes = outputs[self.output_blob_name["boxes"]]
+        if boxes.shape[0] == 1:
+            boxes = boxes.squeeze(0)
+        scores = boxes[:, 4]
+        boxes = boxes[:, :4]
+        masks = outputs[self.output_blob_name["masks"]]
+        if masks.shape[0] == 1:
+            masks = masks.squeeze(0)
+        classes = outputs[self.output_blob_name["labels"]].astype(np.uint32)
+        if classes.shape[0] == 1:
+            classes = classes.squeeze(0)
+
+        scale_x = meta["resized_shape"][0] / meta["original_shape"][1]
+        scale_y = meta["resized_shape"][1] / meta["original_shape"][0]
+        boxes[:, 0::2] /= scale_x
+        boxes[:, 1::2] /= scale_y
+
+        saliency_maps = [None for _ in range(num_classes)]
+        for box, score, cls, raw_mask in zip(boxes, scores, classes, masks):
+            resized_mask = self._resize_mask(box, raw_mask * score, *meta["original_shape"][:-1])
+            if saliency_maps[cls] is None:
+                saliency_maps[cls] = [resized_mask]
+            else:
+                saliency_maps[cls].append(resized_mask)
+
+        saliency_maps = self._average_and_normalize(saliency_maps, num_classes)
+        return saliency_maps
+
+    def _resize_mask(self, box, raw_cls_mask, im_h, im_w):
+        # Add zero border to prevent upsampling artifacts on segment borders.
+        raw_cls_mask = np.pad(raw_cls_mask, ((1, 1), (1, 1)), "constant", constant_values=0)
+        extended_box = _expand_box(box, raw_cls_mask.shape[0] / (raw_cls_mask.shape[0] - 2.0)).astype(int)
+        w, h = np.maximum(extended_box[2:] - extended_box[:2] + 1, 1)
+        x0, y0 = np.clip(extended_box[:2], a_min=0, a_max=[im_w, im_h])
+        x1, y1 = np.clip(extended_box[2:] + 1, a_min=0, a_max=[im_w, im_h])
+
+        raw_cls_mask = cv2.resize(raw_cls_mask.astype(np.float32), (w, h))
+        # Put an object mask in an image mask.
+        im_mask = np.zeros((im_h, im_w), dtype=np.float32)
+        im_mask[y0:y1, x0:x1] = raw_cls_mask[
+            (y0 - extended_box[1]) : (y1 - extended_box[1]), (x0 - extended_box[0]) : (x1 - extended_box[0])
+        ]
+        return im_mask
+
+    @staticmethod
+    def _average_and_normalize(saliency_maps, num_classes):
+        for i in range(num_classes):
+            if saliency_maps[i] is not None:
+                saliency_maps[i] = np.array(saliency_maps[i]).mean(0)
+
+        for i in range(num_classes):
+            per_class_map = saliency_maps[i]
+            if per_class_map is not None:
+                max_values = np.max(per_class_map)
+                per_class_map = 255 * (per_class_map) / (max_values + 1e-12)
+                per_class_map = per_class_map.astype(np.uint8)
+                saliency_maps[i] = per_class_map
+        return saliency_maps
+
     def segm_postprocess(self, *args, **kwargs):
         """Post-process for segmentation masks."""
-        return self._segm_postprocess(*args, **kwargs)
+        return _segm_postprocess(*args, **kwargs)
 
     def disable_mask_resizing(self):
         """Disable mask resizing.
deployment/Instance segmentation task/python/requirements.txt CHANGED
@@ -1,4 +1,4 @@
-openvino==2022.3.0
-openmodelzoo-modelapi==2022.3.0
-otx=1.2.3.3
+openvino==2023.0
+openvino-model-api==0.1.3
+otx==1.4.3
 numpy>=1.21.0,<=1.23.5 # np.bool was removed in 1.24.0 which was used in openvino runtime
deployment/project.json CHANGED
@@ -61,7 +61,8 @@
     }
   ],
   "thumbnail": "/api/v1/workspaces/6487656fb7efbf83c9b9ec35/projects/6483114c18fb8c1c529bd149/thumbnail",
+  "storage_info": {},
   "performance": {
-    "score": 0.9702380952380952
+    "score": 0.9641873278236913
   }
 }