{ "cells": [ { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "# Post-Training Quantization of PyTorch models with NNCF\n", "\n", "The goal of this tutorial is to demonstrate how to use the NNCF (Neural Network Compression Framework) 8-bit quantization in post-training mode (without the fine-tuning pipeline) to optimize a PyTorch model for the high-speed inference via OpenVINO™ Toolkit. The optimization process contains the following steps:\n", "\n", "1. Evaluate the original model.\n", "2. Transform the original model to a quantized one.\n", "3. Export optimized and original models to OpenVINO IR.\n", "4. Compare performance of the obtained `FP32` and `INT8` models.\n", "\n", "This tutorial uses a ResNet-50 model, pre-trained on Tiny ImageNet, which contains 100000 images of 200 classes (500 for each class) downsized to 64×64 colored images. The tutorial will demonstrate that only a tiny part of the dataset is needed for the post-training quantization, not demanding the fine-tuning of the model.\n", "\n", "\n", "> **NOTE**: This notebook requires that a C++ compiler is accessible on the default binary search path of the OS you are running the notebook.\n", "\n", "\n", "#### Table of contents:\n", "\n", "- [Preparations](#Preparations)\n", " - [Imports](#Imports)\n", " - [Settings](#Settings)\n", " - [Download and Prepare Tiny ImageNet dataset](#Download-and-Prepare-Tiny-ImageNet-dataset)\n", " - [Helpers classes and functions](#Helpers-classes-and-functions)\n", " - [Validation function](#Validation-function)\n", " - [Create and load original uncompressed model](#Create-and-load-original-uncompressed-model)\n", " - [Create train and validation DataLoaders](#Create-train-and-validation-DataLoaders)\n", "- [Model quantization and benchmarking](#Model-quantization-and-benchmarking)\n", " - [I. Evaluate the loaded model](#I.-Evaluate-the-loaded-model)\n", " - [II. Create and initialize quantization](#II.-Create-and-initialize-quantization)\n", " - [III. Convert the models to OpenVINO Intermediate Representation (OpenVINO IR)](#III.-Convert-the-models-to-OpenVINO-Intermediate-Representation-(OpenVINO-IR))\n", " - [IV. Compare performance of INT8 model and FP32 model in OpenVINO](#IV.-Compare-performance-of-INT8-model-and-FP32-model-in-OpenVINO)\n", "\n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Preparations\n", "[back to top ⬆️](#Table-of-contents:)\n" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "# Install openvino package\n", "%pip install -q \"openvino>=2024.0.0\" torch torchvision tqdm --extra-index-url https://download.pytorch.org/whl/cpu\n", "%pip install -q \"nncf>=2.9.0\"" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Imports\n", "[back to top ⬆️](#Table-of-contents:)\n" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "INFO:nncf:NNCF initialized successfully. 
Supported frameworks detected: torch, tensorflow, onnx, openvino\n" ] } ], "source": [ "import os\n", "import time\n", "import zipfile\n", "from pathlib import Path\n", "from typing import List, Tuple\n", "\n", "import nncf\n", "import openvino as ov\n", "\n", "import torch\n", "from torchvision.datasets import ImageFolder\n", "from torchvision.models import resnet50\n", "import torchvision.transforms as transforms\n", "\n", "# Fetch `notebook_utils` module\n", "import requests\n", "\n", "r = requests.get(\n", " url=\"https://raw.githubusercontent.com/openvinotoolkit/openvino_notebooks/latest/utils/notebook_utils.py\",\n", ")\n", "\n", "open(\"notebook_utils.py\", \"w\").write(r.text)\n", "from notebook_utils import download_file" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Settings\n", "[back to top ⬆️](#Table-of-contents:)\n" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Using cpu device\n", "'model/resnet50_fp32.pth' already exists.\n" ] }, { "data": { "text/plain": [ "PosixPath('/home/ea/work/openvino_notebooks/notebooks/pytorch-post-training-quantization-nncf/model/resnet50_fp32.pth')" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "torch_device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n", "print(f\"Using {torch_device} device\")\n", "\n", "MODEL_DIR = Path(\"model\")\n", "OUTPUT_DIR = Path(\"output\")\n", "BASE_MODEL_NAME = \"resnet50\"\n", "IMAGE_SIZE = [64, 64]\n", "\n", "OUTPUT_DIR.mkdir(exist_ok=True)\n", "MODEL_DIR.mkdir(exist_ok=True)\n", "\n", "# Paths where PyTorch and OpenVINO IR models will be stored.\n", "fp32_checkpoint_filename = Path(BASE_MODEL_NAME + \"_fp32\").with_suffix(\".pth\")\n", "fp32_ir_path = OUTPUT_DIR / Path(BASE_MODEL_NAME + \"_fp32\").with_suffix(\".xml\")\n", "int8_ir_path = OUTPUT_DIR / Path(BASE_MODEL_NAME + \"_int8\").with_suffix(\".xml\")\n", "\n", "\n", "fp32_pth_url = \"https://storage.openvinotoolkit.org/repositories/nncf/openvino_notebook_ckpts/304_resnet50_fp32.pth\"\n", "download_file(fp32_pth_url, directory=MODEL_DIR, filename=fp32_checkpoint_filename)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Download and Prepare Tiny ImageNet dataset\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "* 100k images of shape 3x64x64,\n", "* 200 different classes: snake, spider, cat, truck, grasshopper, gull, etc." 
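, "\n", "\n", "The next cell downloads the archive and rearranges the validation split into the per-class folder layout that `torchvision.datasets.ImageFolder` expects. As a quick sanity check, here is a minimal sketch (assuming the archive has already been extracted to `output/tiny-imagenet-200` by that cell) that counts the classes and images in the training split:\n", "\n", "```python\n", "from pathlib import Path\n", "\n", "train_dir = Path(\"output/tiny-imagenet-200/train\")  # assumed extraction path\n", "class_dirs = [d for d in train_dir.iterdir() if d.is_dir()]\n", "num_images = sum(1 for _ in train_dir.rglob(\"*.JPEG\"))\n", "print(f\"classes: {len(class_dirs)}, images: {num_images}\")  # expected: 200 classes, 100000 images\n", "```"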
] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "def download_tiny_imagenet_200(\n", " output_dir: Path,\n", " url: str = \"http://cs231n.stanford.edu/tiny-imagenet-200.zip\",\n", " tarname: str = \"tiny-imagenet-200.zip\",\n", "):\n", " archive_path = output_dir / tarname\n", " download_file(url, directory=output_dir, filename=tarname)\n", " zip_ref = zipfile.ZipFile(archive_path, \"r\")\n", " zip_ref.extractall(path=output_dir)\n", " zip_ref.close()\n", " print(f\"Successfully downloaded and extracted dataset to: {output_dir}\")\n", "\n", "\n", "def create_validation_dir(dataset_dir: Path):\n", " VALID_DIR = dataset_dir / \"val\"\n", " val_img_dir = VALID_DIR / \"images\"\n", "\n", " fp = open(VALID_DIR / \"val_annotations.txt\", \"r\")\n", " data = fp.readlines()\n", "\n", " val_img_dict = {}\n", " for line in data:\n", " words = line.split(\"\\t\")\n", " val_img_dict[words[0]] = words[1]\n", " fp.close()\n", "\n", " for img, folder in val_img_dict.items():\n", " newpath = val_img_dir / folder\n", " if not newpath.exists():\n", " os.makedirs(newpath)\n", " if (val_img_dir / img).exists():\n", " os.rename(val_img_dir / img, newpath / img)\n", "\n", "\n", "DATASET_DIR = OUTPUT_DIR / \"tiny-imagenet-200\"\n", "if not DATASET_DIR.exists():\n", " download_tiny_imagenet_200(OUTPUT_DIR)\n", " create_validation_dir(DATASET_DIR)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Helpers classes and functions\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "The code below will help to count accuracy and visualize validation process." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "class AverageMeter(object):\n", " \"\"\"Computes and stores the average and current value\"\"\"\n", "\n", " def __init__(self, name: str, fmt: str = \":f\"):\n", " self.name = name\n", " self.fmt = fmt\n", " self.val = 0\n", " self.avg = 0\n", " self.sum = 0\n", " self.count = 0\n", "\n", " def update(self, val: float, n: int = 1):\n", " self.val = val\n", " self.sum += val * n\n", " self.count += n\n", " self.avg = self.sum / self.count\n", "\n", " def __str__(self):\n", " fmtstr = \"{name} {val\" + self.fmt + \"} ({avg\" + self.fmt + \"})\"\n", " return fmtstr.format(**self.__dict__)\n", "\n", "\n", "class ProgressMeter(object):\n", " \"\"\"Displays the progress of validation process\"\"\"\n", "\n", " def __init__(self, num_batches: int, meters: List[AverageMeter], prefix: str = \"\"):\n", " self.batch_fmtstr = self._get_batch_fmtstr(num_batches)\n", " self.meters = meters\n", " self.prefix = prefix\n", "\n", " def display(self, batch: int):\n", " entries = [self.prefix + self.batch_fmtstr.format(batch)]\n", " entries += [str(meter) for meter in self.meters]\n", " print(\"\\t\".join(entries))\n", "\n", " def _get_batch_fmtstr(self, num_batches: int):\n", " num_digits = len(str(num_batches // 1))\n", " fmt = \"{:\" + str(num_digits) + \"d}\"\n", " return \"[\" + fmt + \"/\" + fmt.format(num_batches) + \"]\"\n", "\n", "\n", "def accuracy(output: torch.Tensor, target: torch.Tensor, topk: Tuple[int] = (1,)):\n", " \"\"\"Computes the accuracy over the k top predictions for the specified values of k\"\"\"\n", " with torch.no_grad():\n", " maxk = max(topk)\n", " batch_size = target.size(0)\n", "\n", " _, pred = output.topk(maxk, 1, True, True)\n", " pred = pred.t()\n", " correct = pred.eq(target.view(1, -1).expand_as(pred))\n", "\n", " res = []\n", " for k in topk:\n", " correct_k = 
correct[:k].reshape(-1).float().sum(0, keepdim=True)\n", "            res.append(correct_k.mul_(100.0 / batch_size))\n", "\n", "        return res" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Validation function\n", "[back to top ⬆️](#Table-of-contents:)\n" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "from typing import Union\n", "from openvino import CompiledModel\n", "\n", "\n", "def validate(\n", "    val_loader: torch.utils.data.DataLoader,\n", "    model: Union[torch.nn.Module, CompiledModel],\n", "):\n", "    \"\"\"Compute the metrics using data from val_loader for the model\"\"\"\n", "    batch_time = AverageMeter(\"Time\", \":3.3f\")\n", "    top1 = AverageMeter(\"Acc@1\", \":2.2f\")\n", "    top5 = AverageMeter(\"Acc@5\", \":2.2f\")\n", "    progress = ProgressMeter(len(val_loader), [batch_time, top1, top5], prefix=\"Test: \")\n", "    start_time = time.time()\n", "    # Switch to evaluate mode.\n", "    if not isinstance(model, CompiledModel):\n", "        model.eval()\n", "        model.to(torch_device)\n", "\n", "    with torch.no_grad():\n", "        end = time.time()\n", "        for i, (images, target) in enumerate(val_loader):\n", "            images = images.to(torch_device)\n", "            target = target.to(torch_device)\n", "\n", "            # Compute the output.\n", "            if isinstance(model, CompiledModel):\n", "                output_layer = model.output(0)\n", "                output = model(images)[output_layer]\n", "                output = torch.from_numpy(output)\n", "            else:\n", "                output = model(images)\n", "\n", "            # Measure accuracy.\n", "            acc1, acc5 = accuracy(output, target, topk=(1, 5))\n", "            top1.update(acc1[0], images.size(0))\n", "            top5.update(acc5[0], images.size(0))\n", "\n", "            # Measure elapsed time.\n", "            batch_time.update(time.time() - end)\n", "            end = time.time()\n", "\n", "            print_frequency = 10\n", "            if i % print_frequency == 0:\n", "                progress.display(i)\n", "\n", "    print(\" * Acc@1 {top1.avg:.3f} Acc@5 {top5.avg:.3f} Total time: {total_time:.3f}\".format(top1=top1, top5=top5, total_time=end - start_time))\n", "    return top1.avg" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Create and load original uncompressed model\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "ResNet-50 from the [`torchvision` repository](https://github.com/pytorch/vision) is pre-trained on ImageNet with more prediction classes than Tiny ImageNet, so the model is adjusted by swapping the last FC layer to one with fewer output values."
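, "\n", "\n", "The idea is simply to replace the classifier head. A minimal sketch of the swap (the `create_model` helper in the next cell does this and additionally loads the downloaded checkpoint):\n", "\n", "```python\n", "import torch\n", "from torchvision.models import resnet50\n", "\n", "model = resnet50()  # 1000-class ImageNet head by default\n", "# Swap the final fully connected layer for a 200-class Tiny ImageNet head.\n", "model.fc = torch.nn.Linear(model.fc.in_features, 200)\n", "```"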
] }, { "cell_type": "code", "execution_count": 8, "metadata": { "tags": [] }, "outputs": [], "source": [ "def create_model(model_path: Path):\n", " \"\"\"Creates the ResNet-50 model and loads the pretrained weights\"\"\"\n", " model = resnet50()\n", " # Update the last FC layer for Tiny ImageNet number of classes.\n", " NUM_CLASSES = 200\n", " model.fc = torch.nn.Linear(in_features=2048, out_features=NUM_CLASSES, bias=True)\n", " model.to(torch_device)\n", " if model_path.exists():\n", " checkpoint = torch.load(str(model_path), map_location=\"cpu\")\n", " model.load_state_dict(checkpoint[\"state_dict\"], strict=True)\n", " else:\n", " raise RuntimeError(\"There is no checkpoint to load\")\n", " return model\n", "\n", "\n", "model = create_model(MODEL_DIR / fp32_checkpoint_filename)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Create train and validation DataLoaders\n", "[back to top ⬆️](#Table-of-contents:)\n" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "tags": [], "test_replace": { "val_dataset,": "torch.utils.data.Subset(val_dataset, range(50))," } }, "outputs": [], "source": [ "def create_dataloaders(batch_size: int = 128):\n", " \"\"\"Creates train dataloader that is used for quantization initialization and validation dataloader for computing the model accruacy\"\"\"\n", " train_dir = DATASET_DIR / \"train\"\n", " val_dir = DATASET_DIR / \"val\" / \"images\"\n", " normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])\n", " train_dataset = ImageFolder(\n", " train_dir,\n", " transforms.Compose(\n", " [\n", " transforms.Resize(IMAGE_SIZE),\n", " transforms.ToTensor(),\n", " normalize,\n", " ]\n", " ),\n", " )\n", " val_dataset = ImageFolder(\n", " val_dir,\n", " transforms.Compose([transforms.Resize(IMAGE_SIZE), transforms.ToTensor(), normalize]),\n", " )\n", "\n", " train_loader = torch.utils.data.DataLoader(\n", " train_dataset,\n", " batch_size=batch_size,\n", " shuffle=True,\n", " num_workers=0,\n", " pin_memory=True,\n", " sampler=None,\n", " )\n", "\n", " val_loader = torch.utils.data.DataLoader(\n", " val_dataset,\n", " batch_size=batch_size,\n", " shuffle=False,\n", " num_workers=0,\n", " pin_memory=True,\n", " )\n", " return train_loader, val_loader\n", "\n", "\n", "train_loader, val_loader = create_dataloaders()" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Model quantization and benchmarking\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "With the validation pipeline, model files, and data-loading procedures for model calibration now prepared, it's time to proceed with the actual post-training quantization using NNCF." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### I. 
Evaluate the loaded model\n", "[back to top ⬆️](#Table-of-contents:)\n" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "scrolled": true, "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Test: [ 0/79]\tTime 0.317 (0.317)\tAcc@1 81.25 (81.25)\tAcc@5 92.19 (92.19)\n", "Test: [10/79]\tTime 0.391 (0.324)\tAcc@1 56.25 (66.97)\tAcc@5 86.72 (87.50)\n", "Test: [20/79]\tTime 0.284 (0.315)\tAcc@1 67.97 (64.29)\tAcc@5 85.16 (87.35)\n", "Test: [30/79]\tTime 0.291 (0.310)\tAcc@1 53.12 (62.37)\tAcc@5 77.34 (85.33)\n", "Test: [40/79]\tTime 0.278 (0.306)\tAcc@1 67.19 (60.86)\tAcc@5 90.62 (84.51)\n", "Test: [50/79]\tTime 0.269 (0.298)\tAcc@1 60.16 (60.80)\tAcc@5 88.28 (84.42)\n", "Test: [60/79]\tTime 0.242 (0.290)\tAcc@1 66.41 (60.46)\tAcc@5 86.72 (83.79)\n", "Test: [70/79]\tTime 0.264 (0.285)\tAcc@1 52.34 (60.21)\tAcc@5 80.47 (83.33)\n", " * Acc@1 60.740 Acc@5 83.960 Total time: 22.199\n", "Test accuracy of FP32 model: 60.740\n" ] } ], "source": [ "acc1 = validate(val_loader, model)\n", "print(f\"Test accuracy of FP32 model: {acc1:.3f}\")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### II. Create and initialize quantization\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "NNCF enables post-training quantization by adding the quantization layers into the model graph and then using a subset of the training dataset to initialize the parameters of these additional quantization layers. The framework is designed so that modifications to your original training code are minor. Quantization is the simplest scenario and requires a few modifications.\n", "For more information about NNCF Post Training Quantization (PTQ) API, refer to the [Basic Quantization Flow Guide](https://docs.openvino.ai/2024/openvino-workflow/model-optimization-guide/quantizing-models-post-training/basic-quantization-flow.html)." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "1. Create a transformation function that accepts a sample from the dataset and returns data suitable for model inference. This enables the creation of an instance of the nncf.Dataset class, which represents the calibration dataset (based on the training dataset) necessary for post-training quantization." ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "tags": [], "test_replace": { "train_loader": "val_loader" } }, "outputs": [], "source": [ "def transform_fn(data_item):\n", " images, _ = data_item\n", " return images\n", "\n", "\n", "calibration_dataset = nncf.Dataset(train_loader, transform_fn)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "2. Create a quantized model from the pre-trained `FP32` model and the calibration dataset." ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "2023-09-12 22:52:13.498264: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. 
To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n", "2023-09-12 22:52:13.533056: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n", "To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n", "2023-09-12 22:52:14.234552: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n", "No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:nncf:Collecting tensor statistics |█████ | 1 / 3\n", "INFO:nncf:Collecting tensor statistics |██████████ | 2 / 3\n", "INFO:nncf:Collecting tensor statistics |████████████████| 3 / 3\n", "INFO:nncf:Compiling and loading torch extension: quantized_functions_cpu...\n", "INFO:nncf:Finished loading torch extension: quantized_functions_cpu\n", "INFO:nncf:BatchNorm statistics adaptation |█████ | 1 / 3\n", "INFO:nncf:BatchNorm statistics adaptation |██████████ | 2 / 3\n", "INFO:nncf:BatchNorm statistics adaptation |████████████████| 3 / 3\n" ] } ], "source": [ "quantized_model = nncf.quantize(model, calibration_dataset)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "3. Evaluate the new model on the validation set after initialization of quantization. The accuracy should be close to the accuracy of the floating-point `FP32` model for a simple case like the one being demonstrated now." ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Test: [ 0/79]\tTime 0.469 (0.469)\tAcc@1 82.03 (82.03)\tAcc@5 92.97 (92.97)\n", "Test: [10/79]\tTime 0.463 (0.489)\tAcc@1 60.16 (67.26)\tAcc@5 86.72 (87.78)\n", "Test: [20/79]\tTime 0.487 (0.488)\tAcc@1 67.19 (64.47)\tAcc@5 85.94 (87.69)\n", "Test: [30/79]\tTime 0.456 (0.497)\tAcc@1 53.12 (62.63)\tAcc@5 76.56 (85.48)\n", "Test: [40/79]\tTime 0.416 (0.495)\tAcc@1 67.97 (61.11)\tAcc@5 89.84 (84.58)\n", "Test: [50/79]\tTime 0.515 (0.491)\tAcc@1 62.50 (60.94)\tAcc@5 85.94 (84.50)\n", "Test: [60/79]\tTime 0.534 (0.493)\tAcc@1 65.62 (60.50)\tAcc@5 86.72 (83.86)\n", "Test: [70/79]\tTime 0.517 (0.495)\tAcc@1 53.91 (60.28)\tAcc@5 79.69 (83.41)\n", " * Acc@1 60.880 Acc@5 84.050 Total time: 38.752\n", "Accuracy of initialized INT8 model: 60.880\n" ] } ], "source": [ "acc1 = validate(val_loader, quantized_model)\n", "print(f\"Accuracy of initialized INT8 model: {acc1:.3f}\")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "It should be noted that the inference time for the quantized PyTorch model is longer than that of the original model, as fake quantizers are added to the model by NNCF. However, the model's performance improves significantly once it is converted to the OpenVINO Intermediate Representation (IR) format." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### III. Convert the models to OpenVINO Intermediate Representation (OpenVINO IR)\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "To convert the PyTorch models to OpenVINO IR, use the model conversion Python API (`ov.convert_model`). 
The models will be saved to the `output` directory for later benchmarking.\n", "\n", "For more information about model conversion, refer to this [page](https://docs.openvino.ai/2024/openvino-workflow/model-preparation.html).\n" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:Please fix your imports. Module tensorflow.python.training.tracking.base has been moved to tensorflow.python.trackable.base. The old module will be deleted in version 2.11.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "[ WARNING ] Please fix your imports. Module %s has been moved to %s. The old module will be deleted in version %s.\n" ] } ], "source": [ "dummy_input = torch.randn(128, 3, *IMAGE_SIZE)\n", "\n", "model_ir = ov.convert_model(model, example_input=dummy_input, input=[-1, 3, *IMAGE_SIZE])\n", "\n", "ov.save_model(model_ir, fp32_ir_path)" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/home/ea/work/ov_venv/lib/python3.8/site-packages/nncf/torch/quantization/layers.py:336: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\n", " return self._level_low.item()\n", "/home/ea/work/ov_venv/lib/python3.8/site-packages/nncf/torch/quantization/layers.py:344: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\n", " return self._level_high.item()\n", "/home/ea/work/ov_venv/lib/python3.8/site-packages/torch/jit/_trace.py:1084: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. 
Detailed error:\n", "Tensor-likes are not close!\n", "\n", "Mismatched elements: 23985 / 25600 (93.7%)\n", "Greatest absolute difference: 0.36202991008758545 at index (90, 14) (up to 1e-05 allowed)\n", "Greatest relative difference: 1153.8073089700997 at index (116, 158) (up to 1e-05 allowed)\n", " _check_trace(\n" ] } ], "source": [ "quantized_model_ir = ov.convert_model(quantized_model, example_input=dummy_input, input=[-1, 3, *IMAGE_SIZE])\n", "\n", "ov.save_model(quantized_model_ir, int8_ir_path)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Select inference device for OpenVINO" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "tags": [ "hide-input" ] }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "e4be0e28604d4474ba58bb32a06e4514", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Dropdown(description='Device:', index=2, options=('CPU', 'GPU', 'AUTO'), value='AUTO')" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import ipywidgets as widgets\n", "\n", "core = ov.Core()\n", "device = widgets.Dropdown(\n", " options=core.available_devices + [\"AUTO\"],\n", " value=\"AUTO\",\n", " description=\"Device:\",\n", " disabled=False,\n", ")\n", "\n", "device" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Evaluate the FP32 and INT8 models." ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Test: [ 0/79]\tTime 0.166 (0.166)\tAcc@1 81.25 (81.25)\tAcc@5 92.19 (92.19)\n", "Test: [10/79]\tTime 0.123 (0.128)\tAcc@1 56.25 (66.97)\tAcc@5 86.72 (87.50)\n", "Test: [20/79]\tTime 0.121 (0.126)\tAcc@1 67.97 (64.29)\tAcc@5 85.16 (87.35)\n", "Test: [30/79]\tTime 0.121 (0.125)\tAcc@1 53.12 (62.37)\tAcc@5 77.34 (85.33)\n", "Test: [40/79]\tTime 0.123 (0.125)\tAcc@1 67.19 (60.86)\tAcc@5 90.62 (84.51)\n", "Test: [50/79]\tTime 0.122 (0.125)\tAcc@1 60.16 (60.80)\tAcc@5 88.28 (84.42)\n", "Test: [60/79]\tTime 0.124 (0.125)\tAcc@1 66.41 (60.46)\tAcc@5 86.72 (83.79)\n", "Test: [70/79]\tTime 0.129 (0.125)\tAcc@1 52.34 (60.21)\tAcc@5 80.47 (83.33)\n", " * Acc@1 60.740 Acc@5 83.960 Total time: 9.788\n", "Accuracy of FP32 IR model: 60.740\n" ] } ], "source": [ "core = ov.Core()\n", "fp32_compiled_model = core.compile_model(model_ir, device.value)\n", "acc1 = validate(val_loader, fp32_compiled_model)\n", "print(f\"Accuracy of FP32 IR model: {acc1:.3f}\")" ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Test: [ 0/79]\tTime 0.115 (0.115)\tAcc@1 82.81 (82.81)\tAcc@5 92.97 (92.97)\n", "Test: [10/79]\tTime 0.072 (0.077)\tAcc@1 60.16 (67.83)\tAcc@5 86.72 (87.93)\n", "Test: [20/79]\tTime 0.073 (0.075)\tAcc@1 69.53 (64.43)\tAcc@5 83.59 (87.43)\n", "Test: [30/79]\tTime 0.067 (0.074)\tAcc@1 53.12 (62.63)\tAcc@5 75.00 (85.26)\n", "Test: [40/79]\tTime 0.072 (0.074)\tAcc@1 66.41 (61.13)\tAcc@5 89.06 (84.41)\n", "Test: [50/79]\tTime 0.075 (0.074)\tAcc@1 60.16 (60.92)\tAcc@5 88.28 (84.33)\n", "Test: [60/79]\tTime 0.073 (0.074)\tAcc@1 66.41 (60.57)\tAcc@5 87.50 (83.67)\n", "Test: [70/79]\tTime 0.072 (0.074)\tAcc@1 53.91 (60.29)\tAcc@5 80.47 (83.30)\n", " * Acc@1 60.920 Acc@5 83.970 Total time: 5.761\n", "Accuracy of INT8 IR model: 60.920\n" ] } ], "source": [ 
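"# Compile the quantized (INT8) IR model on the selected device and evaluate it on the validation set\n",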
"int8_compiled_model = core.compile_model(quantized_model_ir, device.value)\n", "acc1 = validate(val_loader, int8_compiled_model)\n", "print(f\"Accuracy of INT8 IR model: {acc1:.3f}\")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### IV. Compare performance of INT8 model and FP32 model in OpenVINO\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "Finally, measure the inference performance of the `FP32` and `INT8` models, using [Benchmark Tool](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/benchmark-tool.html) - an inference performance measurement tool in OpenVINO. By default, Benchmark Tool runs inference for 60 seconds in asynchronous mode on CPU. It returns inference speed as latency (milliseconds per image) and throughput (frames per second) values.\n", "\n", "> **NOTE**: This notebook runs benchmark_app for 15 seconds to give a quick indication of performance. For more accurate performance, it is recommended to run benchmark_app in a terminal/command prompt after closing other applications. Run `benchmark_app -m model.xml -d CPU` to benchmark async inference on CPU for one minute. Change CPU to GPU to benchmark on GPU. Run `benchmark_app --help` to see an overview of all command-line options." ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "tags": [ "hide-input" ] }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "e4be0e28604d4474ba58bb32a06e4514", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Dropdown(description='Device:', index=2, options=('CPU', 'GPU', 'AUTO'), value='AUTO')" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "device" ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Benchmark FP32 model (OpenVINO IR)\n", "[ INFO ] Throughput: 51.11 FPS\n", "Benchmark INT8 model (OpenVINO IR)\n", "[ INFO ] Throughput: 196.85 FPS\n", "Benchmark FP32 model (OpenVINO IR) synchronously\n", "[ INFO ] Throughput: 36.68 FPS\n", "Benchmark INT8 model (OpenVINO IR) synchronously\n", "[ INFO ] Throughput: 128.84 FPS\n" ] } ], "source": [ "def parse_benchmark_output(benchmark_output: str):\n", " \"\"\"Prints the output from benchmark_app in human-readable format\"\"\"\n", " parsed_output = [line for line in benchmark_output if \"FPS\" in line]\n", " print(*parsed_output, sep=\"\\n\")\n", "\n", "\n", "print(\"Benchmark FP32 model (OpenVINO IR)\")\n", "benchmark_output = ! benchmark_app -m \"$fp32_ir_path\" -d $device.value -api async -t 15 -shape \"[1, 3, 512, 512]\"\n", "parse_benchmark_output(benchmark_output)\n", "\n", "print(\"Benchmark INT8 model (OpenVINO IR)\")\n", "benchmark_output = ! benchmark_app -m \"$int8_ir_path\" -d $device.value -api async -t 15 -shape \"[1, 3, 512, 512]\"\n", "parse_benchmark_output(benchmark_output)\n", "\n", "print(\"Benchmark FP32 model (OpenVINO IR) synchronously\")\n", "benchmark_output = ! benchmark_app -m \"$fp32_ir_path\" -d $device.value -api sync -t 15 -shape \"[1, 3, 512, 512]\"\n", "parse_benchmark_output(benchmark_output)\n", "\n", "print(\"Benchmark INT8 model (OpenVINO IR) synchronously\")\n", "benchmark_output = ! 
benchmark_app -m \"$int8_ir_path\" -d $device.value -api sync -t 15 -shape \"[1, 3, 512, 512]\"\n", "parse_benchmark_output(benchmark_output)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Show device Information for reference:" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU: Intel(R) Core(TM) i9-10980XE CPU @ 3.00GHz\n", "GPU: NVIDIA GeForce GTX 1080 Ti (dGPU)\n" ] } ], "source": [ "core = ov.Core()\n", "devices = core.available_devices\n", "\n", "for device_name in devices:\n", " device_full_name = core.get_property(device_name, \"FULL_DEVICE_NAME\")\n", " print(f\"{device_name}: {device_full_name}\")" ] } ], "metadata": { "accelerator": "GPU", "colab": { "collapsed_sections": [ "K5HPrY_d-7cV", "E01dMaR2_AFL", "qMnYsGo9_MA8", "L0tH9KdwtHhV" ], "name": "NNCF Quantization PyTorch Demo (tiny-imagenet/resnet-50)", "provenance": [] }, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10" }, "openvino_notebooks": { "imageUrl": "", "tags": { "categories": [ "Optimize", "API Overview" ], "libraries": [], "other": [], "tasks": [ "Image Classification" ] } }, "vscode": { "interpreter": { "hash": "916dbcbb3f70747c44a77c7bcd40155683ae19c65e1c03b4aa3499c5328201f1" } }, "widgets": { "application/vnd.jupyter.widget-state+json": { "state": {}, "version_major": 2, "version_minor": 0 } } }, "nbformat": 4, "nbformat_minor": 4 }