{ "cells": [ { "attachments": {}, "cell_type": "markdown", "id": "cacbe6b4", "metadata": { "id": "rQc-wXjqrEuR" }, "source": [ "# Quantize Wav2Vec Speech Recognition Model using NNCF PTQ API\n", "This tutorial demonstrates how to apply `INT8` quantization to the speech recognition model, known as [Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2), using the NNCF (Neural Network Compression Framework) 8-bit quantization in post-training mode (without the fine-tuning pipeline). This notebook uses a fine-tuned [Wav2Vec2-Base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) [PyTorch](https://pytorch.org/) model trained on the [LibriSpeech ASR corpus](https://www.openslr.org/12). The tutorial is designed to be extendable to custom models and datasets. It consists of the following steps:\n", "\n", "- Download and prepare the Wav2Vec2 model and LibriSpeech dataset.\n", "- Define data loading and accuracy validation functionality.\n", "- Model quantization.\n", "- Compare Accuracy of original PyTorch model, OpenVINO FP16 and INT8 models.\n", "- Compare performance of the original and quantized models.\n" ] }, { "attachments": {}, "cell_type": "markdown", "id": "b6bb1c0f", "metadata": {}, "source": [ "\n", "#### Table of contents:\n", "\n", "- [Imports](#Imports)\n", "- [Settings](#Settings)\n", "- [Prepare the Model](#Prepare-the-Model)\n", "- [Prepare LibriSpeech Dataset](#Prepare-LibriSpeech-Dataset)\n", "- [Run Quantization](#Run-Quantization)\n", "- [Model Usage Example with Inference Pipeline](#Model-Usage-Example-with-Inference-Pipeline)\n", "- [Validate model accuracy on dataset](#Validate-model-accuracy-on-dataset)\n", "- [Compare Performance of the Original and Quantized Models](#Compare-Performance-of-the-Original-and-Quantized-Models)\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "id": "3f1f601b-c012-4e48-ab50-262ec3c0af2d", "metadata": {}, "outputs": [], "source": [ "%pip install -q \"openvino>=2023.3.0\" \"nncf>=2.7\"\n", "%pip install datasets \"torchmetrics>=0.11.0\" \"torch>=2.1.0\" --extra-index-url https://download.pytorch.org/whl/cpu\n", "%pip install -q soundfile librosa \"transformers>=4.36.2\" --extra-index-url https://download.pytorch.org/whl/cpu" ] }, { "attachments": {}, "cell_type": "markdown", "id": "4d6b41e6-132b-40da-b3b9-91bacba29e31", "metadata": {}, "source": [ "## Imports\n", "[back to top ⬆️](#Table-of-contents:)\n" ] }, { "cell_type": "code", "execution_count": 2, "id": "629e2d2c", "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "import numpy as np\n", "import openvino as ov\n", "import torch\n", "import IPython.display as ipd\n", "\n", "from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor" ] }, { "attachments": {}, "cell_type": "markdown", "id": "e9e66896-d439-4065-868a-65b44d31525a", "metadata": {}, "source": [ "## Settings\n", "[back to top ⬆️](#Table-of-contents:)\n" ] }, { "cell_type": "code", "execution_count": 3, "id": "284e9a4b", "metadata": {}, "outputs": [], "source": [ "from pathlib import Path\n", "\n", "# Set the data and model directories, model source URL and model filename.\n", "MODEL_DIR = Path(\"model\")\n", "MODEL_DIR.mkdir(exist_ok=True)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "44dc335d", "metadata": { "id": "YytHDzLE0uOJ" }, "source": [ "## Prepare the Model\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "Perform the following:\n", "- Download and unpack a pre-trained Wav2Vec2 model.\n", "- Run model conversion 
"- Download the pre-trained Wav2Vec2 model and processor.\n", "- Use the model conversion API to convert the model from the PyTorch representation to the OpenVINO Intermediate Representation (OpenVINO IR)." ] },
{ "cell_type": "code", "execution_count": null, "id": "9d7401c0", "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "torch_model = Wav2Vec2ForCTC.from_pretrained(\"facebook/wav2vec2-base-960h\", ctc_loss_reduction=\"mean\")\n", "processor = Wav2Vec2Processor.from_pretrained(\"facebook/wav2vec2-base-960h\")" ] },
{ "cell_type": "code", "execution_count": null, "id": "a2684e8a", "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "BATCH_SIZE = 1\n", "MAX_SEQ_LENGTH = 30480\n", "ov_model = ov.convert_model(torch_model, example_input=torch.zeros([1, MAX_SEQ_LENGTH], dtype=torch.float))\n", "\n", "ir_model_path = MODEL_DIR / \"wav2vec2_base.xml\"\n", "ov.save_model(ov_model, ir_model_path)" ] },
{ "attachments": {}, "cell_type": "markdown", "id": "635f4b0d", "metadata": { "id": "LBbY7c4NsHzT" }, "source": [ "## Prepare LibriSpeech Dataset\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "For demonstration purposes, we will use a short dummy version of the LibriSpeech dataset, `patrickvonplaten/librispeech_asr_dummy`, to speed up model evaluation. Model accuracy may differ from the values reported in the paper. To reproduce the original accuracy, use the `librispeech_asr` dataset." ] },
{ "cell_type": "code", "execution_count": 6, "id": "43070514", "metadata": { "id": "NN-qRME1a-Sm" }, "outputs": [], "source": [ "from datasets import load_dataset\n", "\n", "\n", "dataset = load_dataset(\"patrickvonplaten/librispeech_asr_dummy\", \"clean\", split=\"validation\")\n", "test_sample = dataset[0][\"audio\"]\n", "\n", "\n", "# Define a preprocessing function that converts audio into input values for the model\n", "def map_to_input(batch):\n", "    preprocessed_signal = processor(\n", "        batch[\"audio\"][\"array\"],\n", "        return_tensors=\"pt\",\n", "        padding=\"longest\",\n", "        sampling_rate=batch[\"audio\"][\"sampling_rate\"],\n", "    )\n", "    input_values = preprocessed_signal.input_values\n", "    batch[\"input_values\"] = input_values\n", "    return batch\n", "\n", "\n", "# Apply the preprocessing function to the dataset and remove the audio column to save memory, as it is no longer needed\n", "dataset = dataset.map(map_to_input, batched=False, remove_columns=[\"audio\"])" ] },
{ "attachments": {}, "cell_type": "markdown", "id": "9bbbca4a", "metadata": { "id": "CclWk-fVd9Wi" }, "source": [ "## Run Quantization\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "[NNCF](https://github.com/openvinotoolkit/nncf) provides a suite of advanced algorithms for neural network inference optimization in OpenVINO with minimal accuracy drop.\n", "\n", "Create a quantized model from the pre-trained `FP16` model and the calibration dataset. The optimization process contains the following steps:\n", "\n", "1. Create a calibration dataset for quantization.\n", "2. Run `nncf.quantize` to obtain an optimized model. The `nncf.quantize` function provides an interface for model quantization. It requires an instance of the OpenVINO Model and a quantization dataset. Optionally, some additional parameters for configuring the quantization process (number of samples for quantization, preset, ignored scope, etc.) can be provided. For more accurate results, we should keep the operations in the postprocessing subgraph in floating-point precision, using the `ignored_scope` parameter. For more information, see [Tune quantization parameters](https://docs.openvino.ai/2024/openvino-workflow/model-optimization-guide/quantizing-models-post-training/basic-quantization-flow.html#tune-quantization-parameters). For this model, the ignored scope was selected experimentally, based on the results of quantization with accuracy control. To understand how that works, check the following [notebook](../quantizing-model-with-accuracy-control/speech-recognition-quantization-wav2vec2.ipynb); an optional sketch of the accuracy-control API is shown below.\n", "3. Serialize the OpenVINO IR model using the `ov.save_model` function." ] },
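{ "attachments": {}, "cell_type": "markdown", "id": "f3a9c210", "metadata": {}, "source": [ "The ignored scope used in the quantization cell below was selected with the help of NNCF quantization with accuracy control. The next cell is an optional, minimal sketch of that API (`nncf.quantize_with_accuracy_control`), shown for reference only and disabled by default: the validation function, the metric, and the `max_drop` value in it are illustrative assumptions rather than the tutorial's actual settings, and the complete example can be found in the linked notebook. The basic `nncf.quantize` flow used in this tutorial follows after the sketch." ] },
{ "cell_type": "code", "execution_count": null, "id": "b7d4e892", "metadata": {}, "outputs": [], "source": [ "# Illustrative sketch only (not executed by default): it shows how the ignored scope used\n", "# in the next cell could be derived with NNCF accuracy-aware quantization. The validation\n", "# function, the metric and the max_drop value are assumptions made for illustration; the\n", "# complete, validated example lives in the linked accuracy-control notebook.\n", "RUN_ACCURACY_CONTROL_SKETCH = False\n", "\n", "if RUN_ACCURACY_CONTROL_SKETCH:\n", "    import nncf\n", "    from nncf.parameters import ModelType\n", "    from torchmetrics import WordErrorRate\n", "\n", "    def validation_fn(compiled_model, validation_data):\n", "        # Return 1 - WER so that a higher value means better accuracy, as expected by\n", "        # quantize_with_accuracy_control. Assumes each data item still provides\n", "        # \"input_values\" and the reference \"text\".\n", "        wer = WordErrorRate()\n", "        for sample in validation_data:\n", "            logits = compiled_model(np.array(sample[\"input_values\"]))[0]\n", "            predicted_ids = np.argmax(logits, axis=-1)\n", "            transcription = processor.batch_decode(predicted_ids)\n", "            wer.update(transcription, [sample[\"text\"]])\n", "        return 1.0 - wer.compute().item()\n", "\n", "    sketch_calibration_dataset = nncf.Dataset(dataset, lambda item: np.array(item[\"input_values\"]))\n", "\n", "    quantized_model_acc = nncf.quantize_with_accuracy_control(\n", "        ov_model,\n", "        calibration_dataset=sketch_calibration_dataset,\n", "        validation_dataset=sketch_calibration_dataset,  # illustrative: reuse the same small subset\n", "        validation_fn=validation_fn,\n", "        max_drop=0.01,  # illustrative: maximum allowed absolute drop of the metric\n", "        model_type=ModelType.TRANSFORMER,\n", "    )" ] },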
{ "cell_type": "code", "execution_count": 7, "id": "16457bd4", "metadata": { "id": "PiAvrwo0tr6Z", "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "INFO:nncf:NNCF initialized successfully. Supported frameworks detected: torch, openvino\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "69792104728e4d6f988d5a8f05ec9645", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Output()" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
\n" ], "text/plain": [] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", "\n" ], "text/plain": [ "\n" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "3c9c5cb13f504cc29316832c6b379eea", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Output()" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n" ], "text/plain": [] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
\n", "\n" ], "text/plain": [ "\n" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:nncf:4 ignored nodes were found by name in the NNCFGraph\n", "INFO:nncf:36 ignored nodes were found by name in the NNCFGraph\n", "INFO:nncf:Not adding activation input quantizer for operation: 3 __module.wav2vec2.feature_extractor.conv_layers.0.conv/aten::_convolution/Convolution\n", "INFO:nncf:Not adding activation input quantizer for operation: 11 __module.wav2vec2.feature_extractor.conv_layers.1.conv/aten::_convolution/Convolution\n", "INFO:nncf:Not adding activation input quantizer for operation: 13 __module.wav2vec2.feature_extractor.conv_layers.2.conv/aten::_convolution/Convolution\n", "INFO:nncf:Not adding activation input quantizer for operation: 15 __module.wav2vec2.feature_extractor.conv_layers.3.conv/aten::_convolution/Convolution\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "cc5fe8cc97e948b98e793ffa6006e965", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Output()" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n" ], "text/plain": [] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
\n", "\n" ], "text/plain": [ "\n" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "c07c62e25a64422890812d9fb2902f19", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Output()" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n" ], "text/plain": [] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
\n", "\n" ], "text/plain": [ "\n" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "import nncf\n", "from nncf.parameters import ModelType\n", "\n", "\n", "def transform_fn(data_item):\n", " \"\"\"\n", " Extract the model's input from the data item.\n", " The data item here is the data item that is returned from the data source per iteration.\n", " This function should be passed when the data item cannot be used as model's input.\n", " \"\"\"\n", " return np.array(data_item[\"input_values\"])\n", "\n", "\n", "calibration_dataset = nncf.Dataset(dataset, transform_fn)\n", "\n", "quantized_model = nncf.quantize(\n", " ov_model,\n", " calibration_dataset,\n", " model_type=ModelType.TRANSFORMER, # specify additional transformer patterns in the model\n", " ignored_scope=nncf.IgnoredScope(\n", " names=[\n", " \"__module.wav2vec2.feature_extractor.conv_layers.1.conv/aten::_convolution/Convolution\",\n", " \"__module.wav2vec2.feature_extractor.conv_layers.2.conv/aten::_convolution/Convolution\",\n", " \"__module.wav2vec2.feature_extractor.conv_layers.3.conv/aten::_convolution/Convolution\",\n", " \"__module.wav2vec2.feature_extractor.conv_layers.0.conv/aten::_convolution/Convolution\",\n", " ],\n", " ),\n", ")" ] }, { "cell_type": "code", "execution_count": 8, "id": "a05b3999", "metadata": { "id": "hPj_fcDAG8xG" }, "outputs": [], "source": [ "MODEL_NAME = \"quantized_wav2vec2_base\"\n", "quantized_model_path = Path(f\"{MODEL_NAME}_openvino_model/{MODEL_NAME}_quantized.xml\")\n", "ov.save_model(quantized_model, quantized_model_path)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "754b4d84", "metadata": {}, "source": [ "## Model Usage Example with Inference Pipeline\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "Both initial (`FP16`) and quantized (`INT8`) models are exactly the same in use.\n", "\n", "Start with taking one example from the dataset to show inference steps for it." ] }, { "cell_type": "code", "execution_count": 9, "id": "0431ac4f", "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "