{ "cells": [ { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "# Convert and Optimize YOLOv8 with OpenVINO™\n", "\n", "The YOLOv8 algorithm developed by Ultralytics is a cutting-edge, state-of-the-art (SOTA) model that is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of object detection, image segmentation, and image classification tasks.\n", "More details about its realization can be found in the original model [repository](https://github.com/ultralytics/ultralytics).\n", "\n", "This tutorial demonstrates step-by-step instructions on how to run apply quantization with accuracy control to PyTorch YOLOv8.\n", "The advanced quantization flow allows to apply 8-bit quantization to the model with control of accuracy metric. This is achieved by keeping the most impactful operations within the model in the original precision. The flow is based on the [Basic 8-bit quantization](https://docs.openvino.ai/2024/openvino-workflow/model-optimization-guide/quantizing-models-post-training/basic-quantization-flow.html) and has the following differences:\n", "\n", "- Besides the calibration dataset, a validation dataset is required to compute the accuracy metric. Both datasets can refer to the same data in the simplest case.\n", "- Validation function, used to compute accuracy metric is required. It can be a function that is already available in the source framework or a custom function.\n", "- Since accuracy validation is run several times during the quantization process, quantization with accuracy control can take more time than the Basic 8-bit quantization flow.\n", "- The resulted model can provide smaller performance improvement than the Basic 8-bit quantization flow because some of the operations are kept in the original precision.\n", "\n", "> **NOTE**: Currently, 8-bit quantization with accuracy control in NNCF is available only for models in OpenVINO representation.\n", "\n", "The steps for the quantization with accuracy control are described below.\n", "\n", "\n", "#### Table of contents:\n", "\n", "- [Prerequisites](#Prerequisites)\n", "- [Get Pytorch model and OpenVINO IR model](#Get-Pytorch-model-and-OpenVINO-IR-model)\n", " - [Define validator and data loader](#Define-validator-and-data-loader)\n", " - [Prepare calibration and validation datasets](#Prepare-calibration-and-validation-datasets)\n", " - [Prepare validation function](#Prepare-validation-function)\n", "- [Run quantization with accuracy control](#Run-quantization-with-accuracy-control)\n", "- [Compare Accuracy and Performance of the Original and Quantized Models](#Compare-Accuracy-and-Performance-of-the-Original-and-Quantized-Models)\n", "\n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "#### Prerequisites\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "Install necessary packages." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%pip install -q \"openvino>=2024.0.0\"\n", "%pip install -q \"nncf>=2.9.0\"\n", "%pip install -q \"ultralytics==8.1.42\" tqdm --extra-index-url https://download.pytorch.org/whl/cpu" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Get Pytorch model and OpenVINO IR model\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "Generally, PyTorch models represent an instance of the [`torch.nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html) class, initialized by a state dictionary with model weights.\n", "We will use the YOLOv8 nano model (also known as `yolov8n`) pre-trained on a COCO dataset, which is available in this [repo](https://github.com/ultralytics/ultralytics). Similar steps are also applicable to other YOLOv8 models.\n", "Typical steps to obtain a pre-trained model:\n", "\n", "1. Create an instance of a model class.\n", "2. Load a checkpoint state dict, which contains the pre-trained model weights.\n", "\n", "In this case, the creators of the model provide an API that enables converting the YOLOv8 model to ONNX and then to OpenVINO IR. Therefore, we do not need to do these steps manually." ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import os\n", "from pathlib import Path\n", "\n", "from ultralytics import YOLO\n", "from ultralytics.cfg import get_cfg\n", "from ultralytics.data.utils import check_det_dataset\n", "from ultralytics.engine.validator import BaseValidator as Validator\n", "from ultralytics.utils import DEFAULT_CFG\n", "from ultralytics.utils import ops\n", "from ultralytics.utils.metrics import ConfusionMatrix\n", "\n", "ROOT = os.path.abspath(\"\")\n", "\n", "MODEL_NAME = \"yolov8n-seg\"\n", "\n", "model = YOLO(f\"{ROOT}/{MODEL_NAME}.pt\")\n", "args = get_cfg(cfg=DEFAULT_CFG)\n", "args.data = \"coco128-seg.yaml\"" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "# Fetch the notebook utils script from the openvino_notebooks repo\n", "import requests\n", "\n", "r = requests.get(\n", " url=\"https://raw.githubusercontent.com/openvinotoolkit/openvino_notebooks/latest/utils/notebook_utils.py\",\n", ")\n", "\n", "open(\"notebook_utils.py\", \"w\").write(r.text)\n", "\n", "from notebook_utils import download_file" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "'/home/maleksandr/test_notebooks/ultrali/datasets/coco128-seg.zip' already exists.\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "518d4b38f82346efa704d25c1ca5ebd2", "version_major": 2, "version_minor": 0 }, "text/plain": [ "/home/maleksandr/test_notebooks/ultrali/datasets/coco128-seg.yaml: 0%| | 0.00/0.98k [00:00 float:\n", " validator.seen = 0\n", " validator.jdict = []\n", " validator.stats = dict(tp_m=[], tp=[], conf=[], pred_cls=[], target_cls=[])\n", " validator.batch_i = 1\n", " validator.confusion_matrix = ConfusionMatrix(nc=validator.nc)\n", " num_outputs = len(compiled_model.outputs)\n", "\n", " counter = 0\n", " for batch_i, batch in enumerate(validation_loader):\n", " if num_samples is not None and batch_i == num_samples:\n", " break\n", " batch = validator.preprocess(batch)\n", " results = compiled_model(batch[\"img\"])\n", " if num_outputs == 1:\n", " preds = 
torch.from_numpy(results[compiled_model.output(0)])\n", "        else:\n", "            # Two outputs: detection boxes and segmentation masks\n", "            preds = [\n", "                torch.from_numpy(results[compiled_model.output(0)]),\n", "                torch.from_numpy(results[compiled_model.output(1)]),\n", "            ]\n", "        preds = validator.postprocess(preds)\n", "        validator.update_metrics(preds, batch)\n", "        counter += 1\n", "    stats = validator.get_stats()\n", "    # Report box mAP for detection-only models and mask mAP for segmentation models\n", "    if num_outputs == 1:\n", "        stats_metrics = stats[\"metrics/mAP50-95(B)\"]\n", "    else:\n", "        stats_metrics = stats[\"metrics/mAP50-95(M)\"]\n", "    if log:\n", "        print(f\"Validate: dataset length = {counter}, metric value = {stats_metrics:.3f}\")\n", "\n", "    return stats_metrics\n", "\n", "\n", "validation_fn = partial(validation_ac, validator=validator, log=False)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Run quantization with accuracy control\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "You should provide the calibration dataset and the validation dataset. They can be the same dataset.\n", "\n", "- The `max_drop` parameter defines the accuracy drop threshold. The quantization process stops when the degradation of the accuracy metric on the validation dataset is less than `max_drop`. The default value is 0.01. NNCF will stop the quantization and report an error if the `max_drop` value cannot be reached.\n", "- `drop_type` defines how the accuracy drop is calculated: `ABSOLUTE` (used by default) or `RELATIVE` (see the sketch below).\n", "- `ranking_subset_size` is the size of the subset that is used to rank layers by their contribution to the accuracy drop. The default value is 300; a larger subset potentially gives a better ranking. Here we use the value 25 to speed up the execution.\n", "\n", "> **NOTE**: Execution can take tens of minutes and requires up to 15 GB of free memory."
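, "\n", "For reference, below is a minimal sketch (not executed in this notebook) of the same call configured with a relative drop criterion via the `nncf.DropType` enum, reusing the `quantization_dataset` and `validation_fn` defined above; the threshold value here is illustrative only:\n", "\n", "```python\n", "quantized_model = nncf.quantize_with_accuracy_control(\n", "    ov_model,\n", "    calibration_dataset=quantization_dataset,\n", "    validation_dataset=quantization_dataset,\n", "    validation_fn=validation_fn,\n", "    max_drop=0.02,  # allow up to 2% relative degradation of the metric\n", "    drop_type=nncf.DropType.RELATIVE,\n", ")\n", "```"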
\n"
      ],
      "text/plain": []
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/html": [
       "
\n",
       "
\n" ], "text/plain": [ "\n" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stderr", "output_type": "stream", "text": [ "/home/maleksandr/test_notebooks/ultrali/openvino_notebooks/notebooks/quantizing-model-with-accuracy-control/venv/lib/python3.10/site-packages/nncf/experimental/tensor/tensor.py:84: RuntimeWarning: invalid value encountered in multiply\n", " return Tensor(self.data * unwrap_tensor_data(other))\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "536888cb110d4ed198f7e20503db6d8b", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Output()" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
\n"
      ],
      "text/plain": []
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/html": [
       "
\n",
       "
\n" ], "text/plain": [ "\n" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:nncf:Validation of initial model was started\n", "INFO:nncf:Elapsed Time: 00:00:00\n", "INFO:nncf:Elapsed Time: 00:00:03\n", "INFO:nncf:Metric of initial model: 0.3651327608484117\n", "INFO:nncf:Collecting values for each data item using the initial model\n", "INFO:nncf:Elapsed Time: 00:00:04\n", "INFO:nncf:Validation of quantized model was started\n", "INFO:nncf:Elapsed Time: 00:00:00\n", "INFO:nncf:Elapsed Time: 00:00:03\n", "INFO:nncf:Metric of quantized model: 0.34040251506886543\n", "INFO:nncf:Collecting values for each data item using the quantized model\n", "INFO:nncf:Elapsed Time: 00:00:04\n", "INFO:nncf:Accuracy drop: 0.024730245779546245 (absolute)\n", "INFO:nncf:Accuracy drop: 0.024730245779546245 (absolute)\n", "INFO:nncf:Total number of quantized operations in the model: 92\n", "INFO:nncf:Number of parallel workers to rank quantized operations: 1\n", "INFO:nncf:ORIGINAL metric is used to rank quantizers\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "1677e3f54a294407aa297e2facb698ac", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Output()" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
\n"
      ],
      "text/plain": []
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/html": [
       "
\n",
       "
\n" ], "text/plain": [ "\n" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:nncf:Elapsed Time: 00:01:38\n", "INFO:nncf:Changing the scope of quantizer nodes was started\n", "INFO:nncf:Reverted 1 operations to the floating-point precision: \n", "\t__module.model.4.m.0.cv2.conv/aten::_convolution/Convolution\n", "INFO:nncf:Accuracy drop with the new quantization scope is 0.023408466397916217 (absolute)\n", "INFO:nncf:Reverted 1 operations to the floating-point precision: \n", "\t__module.model.18.m.0.cv2.conv/aten::_convolution/Convolution\n", "INFO:nncf:Accuracy drop with the new quantization scope is 0.024749654890442174 (absolute)\n", "INFO:nncf:Re-calculating ranking scores for remaining groups\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "ce78c0bd52074cc6a8cf006fc94dbcc7", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Output()" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
\n"
      ],
      "text/plain": []
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/html": [
       "
\n",
       "
\n" ], "text/plain": [ "\n" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:nncf:Elapsed Time: 00:01:36\n", "INFO:nncf:Reverted 1 operations to the floating-point precision: \n", "\t__module.model.22.proto.cv3.conv/aten::_convolution/Convolution\n", "INFO:nncf:Accuracy drop with the new quantization scope is 0.023229513575966754 (absolute)\n", "INFO:nncf:Reverted 2 operations to the floating-point precision: \n", "\t__module.model.22/aten::add/Add_6\n", "\t__module.model.22/aten::sub/Subtract\n", "INFO:nncf:Accuracy drop with the new quantization scope is 0.02425608378963906 (absolute)\n", "INFO:nncf:Re-calculating ranking scores for remaining groups\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "76073e20a40b4632b6324a6be54a2d92", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Output()" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
\n"
      ],
      "text/plain": []
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/html": [
       "
\n",
       "
\n" ], "text/plain": [ "\n" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:nncf:Elapsed Time: 00:01:35\n", "INFO:nncf:Reverted 1 operations to the floating-point precision: \n", "\t__module.model.6.m.0.cv2.conv/aten::_convolution/Convolution\n", "INFO:nncf:Accuracy drop with the new quantization scope is 0.023297881500256024 (absolute)\n", "INFO:nncf:Reverted 2 operations to the floating-point precision: \n", "\t__module.model.12.cv2.conv/aten::_convolution/Convolution\n", "\t__module.model.12.m.0.cv1.conv/aten::_convolution/Convolution\n", "INFO:nncf:Accuracy drop with the new quantization scope is 0.021779128052922092 (absolute)\n", "INFO:nncf:Reverted 2 operations to the floating-point precision: \n", "\t__module.model.7.conv/aten::_convolution/Convolution\n", "\t__module.model.12.cv1.conv/aten::_convolution/Convolution\n", "INFO:nncf:Accuracy drop with the new quantization scope is 0.01696486517685941 (absolute)\n", "INFO:nncf:Reverted 2 operations to the floating-point precision: \n", "\t__module.model.22/aten::add/Add_7\n", "\t__module.model.22/aten::sub/Subtract_1\n", "INFO:nncf:Algorithm completed: achieved required accuracy drop 0.005923437521415831 (absolute)\n", "INFO:nncf:9 out of 92 were reverted back to the floating-point precision:\n", "\t__module.model.4.m.0.cv2.conv/aten::_convolution/Convolution\n", "\t__module.model.22.proto.cv3.conv/aten::_convolution/Convolution\n", "\t__module.model.6.m.0.cv2.conv/aten::_convolution/Convolution\n", "\t__module.model.12.cv2.conv/aten::_convolution/Convolution\n", "\t__module.model.12.m.0.cv1.conv/aten::_convolution/Convolution\n", "\t__module.model.7.conv/aten::_convolution/Convolution\n", "\t__module.model.12.cv1.conv/aten::_convolution/Convolution\n", "\t__module.model.22/aten::add/Add_7\n", "\t__module.model.22/aten::sub/Subtract_1\n" ] } ], "source": [ "quantized_model = nncf.quantize_with_accuracy_control(\n", " ov_model,\n", " quantization_dataset,\n", " quantization_dataset,\n", " validation_fn=validation_fn,\n", " max_drop=0.01,\n", " preset=nncf.QuantizationPreset.MIXED,\n", " subset_size=128,\n", " advanced_accuracy_restorer_parameters=AdvancedAccuracyRestorerParameters(ranking_subset_size=25),\n", ")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Compare Accuracy and Performance of the Original and Quantized Models\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "Now we can compare metrics of the Original non-quantized OpenVINO IR model and Quantized OpenVINO IR model to make sure that the `max_drop` is not exceeded." 
] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "8be03e2bba694cf4b69d7f78aeca3cd1", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Dropdown(description='Device:', index=4, options=('CPU', 'GPU.0', 'GPU.1', 'GPU.2', 'AUTO'), value='AUTO')" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import ipywidgets as widgets\n", "\n", "core = ov.Core()\n", "\n", "device = widgets.Dropdown(\n", " options=core.available_devices + [\"AUTO\"],\n", " value=\"AUTO\",\n", " description=\"Device:\",\n", " disabled=False,\n", ")\n", "\n", "device" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Validate: dataset length = 128, metric value = 0.368\n", "Validate: dataset length = 128, metric value = 0.357\n", "[Original OpenVINO]: 0.3677\n", "[Quantized OpenVINO]: 0.3570\n" ] } ], "source": [ "core = ov.Core()\n", "ov_config = {}\n", "if device.value != \"CPU\":\n", " quantized_model.reshape({0: [1, 3, 640, 640]})\n", "if \"GPU\" in device.value or (\"AUTO\" in device.value and \"GPU\" in core.available_devices):\n", " ov_config = {\"GPU_DISABLE_WINOGRAD_CONVOLUTION\": \"YES\"}\n", "quantized_compiled_model = core.compile_model(quantized_model, device.value, ov_config)\n", "compiled_ov_model = core.compile_model(ov_model, device.value, ov_config)\n", "\n", "pt_result = validation_ac(compiled_ov_model, data_loader, validator)\n", "quantized_result = validation_ac(quantized_compiled_model, data_loader, validator)\n", "\n", "\n", "print(f\"[Original OpenVINO]: {pt_result:.4f}\")\n", "print(f\"[Quantized OpenVINO]: {quantized_result:.4f}\")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "And compare performance. " ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from pathlib import Path\n", "\n", "# Set model directory\n", "MODEL_DIR = Path(\"model\")\n", "MODEL_DIR.mkdir(exist_ok=True)\n", "\n", "ir_model_path = MODEL_DIR / \"ir_model.xml\"\n", "quantized_model_path = MODEL_DIR / \"quantized_model.xml\"\n", "\n", "# Save models to use them in the commandline banchmark app\n", "ov.save_model(ov_model, ir_model_path, compress_to_fp16=False)\n", "ov.save_model(quantized_model, quantized_model_path, compress_to_fp16=False)" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[Step 1/11] Parsing and validating input arguments\n", "[ INFO ] Parsing input parameters\n", "[Step 2/11] Loading OpenVINO Runtime\n", "[ WARNING ] Default duration 120 seconds is used for unknown device AUTO\n", "[ INFO ] OpenVINO:\n", "[ INFO ] Build ................................. 2024.0.0-14509-34caeefd078-releases/2024/0\n", "[ INFO ] \n", "[ INFO ] Device info:\n", "[ INFO ] AUTO\n", "[ INFO ] Build ................................. 2024.0.0-14509-34caeefd078-releases/2024/0\n", "[ INFO ] \n", "[ INFO ] \n", "[Step 3/11] Setting device configuration\n", "[ WARNING ] Performance hint was not explicitly specified in command line. 
Device(AUTO) performance hint will be set to PerformanceMode.THROUGHPUT.\n", "[Step 4/11] Reading model files\n", "[ INFO ] Loading model files\n", "[ INFO ] Read model took 13.54 ms\n", "[ INFO ] Original model I/O parameters:\n", "[ INFO ] Model inputs:\n", "[ INFO ] x (node: x) : f32 / [...] / [?,3,?,?]\n", "[ INFO ] Model outputs:\n", "[ INFO ] ***NO_NAME*** (node: __module.model.22/aten::cat/Concat_8) : f32 / [...] / [?,116,16..]\n", "[ INFO ] input.199 (node: __module.model.22.cv4.2.1.act/aten::silu_/Swish_37) : f32 / [...] / [?,32,8..,8..]\n", "[Step 5/11] Resizing model to match image sizes and given batch\n", "[ INFO ] Model batch size: 1\n", "[ INFO ] Reshaping model: 'x': [1,3,640,640]\n", "[ INFO ] Reshape model took 8.56 ms\n", "[Step 6/11] Configuring input of the model\n", "[ INFO ] Model inputs:\n", "[ INFO ] x (node: x) : u8 / [N,C,H,W] / [1,3,640,640]\n", "[ INFO ] Model outputs:\n", "[ INFO ] ***NO_NAME*** (node: __module.model.22/aten::cat/Concat_8) : f32 / [...] / [1,116,8400]\n", "[ INFO ] input.199 (node: __module.model.22.cv4.2.1.act/aten::silu_/Swish_37) : f32 / [...] / [1,32,160,160]\n", "[Step 7/11] Loading the model to the device\n", "[ INFO ] Compile model took 437.16 ms\n", "[Step 8/11] Querying optimal runtime parameters\n", "[ INFO ] Model:\n", "[ INFO ] NETWORK_NAME: Model0\n", "[ INFO ] EXECUTION_DEVICES: ['CPU']\n", "[ INFO ] PERFORMANCE_HINT: PerformanceMode.THROUGHPUT\n", "[ INFO ] OPTIMAL_NUMBER_OF_INFER_REQUESTS: 12\n", "[ INFO ] MULTI_DEVICE_PRIORITIES: CPU\n", "[ INFO ] CPU:\n", "[ INFO ] AFFINITY: Affinity.CORE\n", "[ INFO ] CPU_DENORMALS_OPTIMIZATION: False\n", "[ INFO ] CPU_SPARSE_WEIGHTS_DECOMPRESSION_RATE: 1.0\n", "[ INFO ] DYNAMIC_QUANTIZATION_GROUP_SIZE: 0\n", "[ INFO ] ENABLE_CPU_PINNING: True\n", "[ INFO ] ENABLE_HYPER_THREADING: True\n", "[ INFO ] EXECUTION_DEVICES: ['CPU']\n", "[ INFO ] EXECUTION_MODE_HINT: ExecutionMode.PERFORMANCE\n", "[ INFO ] INFERENCE_NUM_THREADS: 36\n", "[ INFO ] INFERENCE_PRECISION_HINT: \n", "[ INFO ] KV_CACHE_PRECISION: \n", "[ INFO ] LOG_LEVEL: Level.NO\n", "[ INFO ] NETWORK_NAME: Model0\n", "[ INFO ] NUM_STREAMS: 12\n", "[ INFO ] OPTIMAL_NUMBER_OF_INFER_REQUESTS: 12\n", "[ INFO ] PERFORMANCE_HINT: THROUGHPUT\n", "[ INFO ] PERFORMANCE_HINT_NUM_REQUESTS: 0\n", "[ INFO ] PERF_COUNT: NO\n", "[ INFO ] SCHEDULING_CORE_TYPE: SchedulingCoreType.ANY_CORE\n", "[ INFO ] MODEL_PRIORITY: Priority.MEDIUM\n", "[ INFO ] LOADED_FROM_CACHE: False\n", "[Step 9/11] Creating infer requests and preparing input tensors\n", "[ WARNING ] No input files were given for input 'x'!. This input will be filled with random values!\n", "[ INFO ] Fill input 'x' with random values \n", "[Step 10/11] Measuring performance (Start inference asynchronously, 12 inference requests, limits: 120000 ms duration)\n", "[ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop).\n", "[ INFO ] First inference took 46.51 ms\n", "[Step 11/11] Dumping statistics report\n", "[ INFO ] Execution Devices:['CPU']\n", "[ INFO ] Count: 16872 iterations\n", "[ INFO ] Duration: 120117.37 ms\n", "[ INFO ] Latency:\n", "[ INFO ] Median: 85.10 ms\n", "[ INFO ] Average: 85.27 ms\n", "[ INFO ] Min: 53.55 ms\n", "[ INFO ] Max: 108.50 ms\n", "[ INFO ] Throughput: 140.46 FPS\n" ] } ], "source": [ "# Inference Original model (OpenVINO IR)\n", "! 
benchmark_app -m $ir_model_path -shape \"[1,3,640,640]\" -d $device.value -api async" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[Step 1/11] Parsing and validating input arguments\n", "[ INFO ] Parsing input parameters\n", "[Step 2/11] Loading OpenVINO Runtime\n", "[ WARNING ] Default duration 120 seconds is used for unknown device AUTO\n", "[ INFO ] OpenVINO:\n", "[ INFO ] Build ................................. 2024.0.0-14509-34caeefd078-releases/2024/0\n", "[ INFO ] \n", "[ INFO ] Device info:\n", "[ INFO ] AUTO\n", "[ INFO ] Build ................................. 2024.0.0-14509-34caeefd078-releases/2024/0\n", "[ INFO ] \n", "[ INFO ] \n", "[Step 3/11] Setting device configuration\n", "[ WARNING ] Performance hint was not explicitly specified in command line. Device(AUTO) performance hint will be set to PerformanceMode.THROUGHPUT.\n", "[Step 4/11] Reading model files\n", "[ INFO ] Loading model files\n", "[ INFO ] Read model took 20.52 ms\n", "[ INFO ] Original model I/O parameters:\n", "[ INFO ] Model inputs:\n", "[ INFO ] x (node: x) : f32 / [...] / [?,3,?,?]\n", "[ INFO ] Model outputs:\n", "[ INFO ] ***NO_NAME*** (node: __module.model.22/aten::cat/Concat_8) : f32 / [...] / [?,116,16..]\n", "[ INFO ] input.199 (node: __module.model.22.cv4.2.1.act/aten::silu_/Swish_37) : f32 / [...] / [?,32,8..,8..]\n", "[Step 5/11] Resizing model to match image sizes and given batch\n", "[ INFO ] Model batch size: 1\n", "[ INFO ] Reshaping model: 'x': [1,3,640,640]\n", "[ INFO ] Reshape model took 11.74 ms\n", "[Step 6/11] Configuring input of the model\n", "[ INFO ] Model inputs:\n", "[ INFO ] x (node: x) : u8 / [N,C,H,W] / [1,3,640,640]\n", "[ INFO ] Model outputs:\n", "[ INFO ] ***NO_NAME*** (node: __module.model.22/aten::cat/Concat_8) : f32 / [...] / [1,116,8400]\n", "[ INFO ] input.199 (node: __module.model.22.cv4.2.1.act/aten::silu_/Swish_37) : f32 / [...] / [1,32,160,160]\n", "[Step 7/11] Loading the model to the device\n", "[ INFO ] Compile model took 711.53 ms\n", "[Step 8/11] Querying optimal runtime parameters\n", "[ INFO ] Model:\n", "[ INFO ] NETWORK_NAME: Model0\n", "[ INFO ] EXECUTION_DEVICES: ['CPU']\n", "[ INFO ] PERFORMANCE_HINT: PerformanceMode.THROUGHPUT\n", "[ INFO ] OPTIMAL_NUMBER_OF_INFER_REQUESTS: 12\n", "[ INFO ] MULTI_DEVICE_PRIORITIES: CPU\n", "[ INFO ] CPU:\n", "[ INFO ] AFFINITY: Affinity.CORE\n", "[ INFO ] CPU_DENORMALS_OPTIMIZATION: False\n", "[ INFO ] CPU_SPARSE_WEIGHTS_DECOMPRESSION_RATE: 1.0\n", "[ INFO ] DYNAMIC_QUANTIZATION_GROUP_SIZE: 0\n", "[ INFO ] ENABLE_CPU_PINNING: True\n", "[ INFO ] ENABLE_HYPER_THREADING: True\n", "[ INFO ] EXECUTION_DEVICES: ['CPU']\n", "[ INFO ] EXECUTION_MODE_HINT: ExecutionMode.PERFORMANCE\n", "[ INFO ] INFERENCE_NUM_THREADS: 36\n", "[ INFO ] INFERENCE_PRECISION_HINT: \n", "[ INFO ] KV_CACHE_PRECISION: \n", "[ INFO ] LOG_LEVEL: Level.NO\n", "[ INFO ] NETWORK_NAME: Model0\n", "[ INFO ] NUM_STREAMS: 12\n", "[ INFO ] OPTIMAL_NUMBER_OF_INFER_REQUESTS: 12\n", "[ INFO ] PERFORMANCE_HINT: THROUGHPUT\n", "[ INFO ] PERFORMANCE_HINT_NUM_REQUESTS: 0\n", "[ INFO ] PERF_COUNT: NO\n", "[ INFO ] SCHEDULING_CORE_TYPE: SchedulingCoreType.ANY_CORE\n", "[ INFO ] MODEL_PRIORITY: Priority.MEDIUM\n", "[ INFO ] LOADED_FROM_CACHE: False\n", "[Step 9/11] Creating infer requests and preparing input tensors\n", "[ WARNING ] No input files were given for input 'x'!. 
This input will be filled with random values!\n", "[ INFO ] Fill input 'x' with random values \n", "[Step 10/11] Measuring performance (Start inference asynchronously, 12 inference requests, limits: 120000 ms duration)\n", "[ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop).\n", "[ INFO ] First inference took 35.64 ms\n", "[Step 11/11] Dumping statistics report\n", "[ INFO ] Execution Devices:['CPU']\n", "[ INFO ] Count: 33564 iterations\n", "[ INFO ] Duration: 120059.16 ms\n", "[ INFO ] Latency:\n", "[ INFO ] Median: 42.72 ms\n", "[ INFO ] Average: 42.76 ms\n", "[ INFO ] Min: 23.29 ms\n", "[ INFO ] Max: 67.71 ms\n", "[ INFO ] Throughput: 279.56 FPS\n" ] } ], "source": [ "# Inference Quantized model (OpenVINO IR)\n", "! benchmark_app -m $quantized_model_path -shape \"[1,3,640,640]\" -d $device.value -api async" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.12" }, "openvino_notebooks": { "imageUrl": "", "tags": { "categories": [ "Convert", "Optimize" ], "libraries": [], "other": ["YOLO"], "tasks": [ "Image Segmentation" ] } }, "widgets": { "application/vnd.jupyter.widget-state+json": { "state": {}, "version_major": 2, "version_minor": 0 } } }, "nbformat": 4, "nbformat_minor": 4 }