{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Quantize Speech Recognition Models with accuracy control using NNCF PTQ API\n",
"This tutorial demonstrates how to apply `INT8` quantization with accuracy control to the speech recognition model, known as [Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2), using the NNCF (Neural Network Compression Framework) 8-bit quantization with accuracy control in post-training mode (without the fine-tuning pipeline). This notebook uses a fine-tuned [Wav2Vec2-Base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) [PyTorch](https://pytorch.org/) model trained on the [LibriSpeech ASR corpus](https://www.openslr.org/12). The tutorial is designed to be extendable to custom models and datasets. It consists of the following steps:\n",
"\n",
"- Download and prepare the Wav2Vec2 model and LibriSpeech dataset.\n",
"- Define data loading and accuracy validation functionality.\n",
"- Model quantization with accuracy control.\n",
"- Compare Accuracy of original PyTorch model, OpenVINO FP16 and INT8 models.\n",
"- Compare performance of the original and quantized models.\n",
"\n",
"The advanced quantization flow allows to apply 8-bit quantization to the model with control of accuracy metric. This is achieved by keeping the most impactful operations within the model in the original precision. The flow is based on the [Basic 8-bit quantization](https://docs.openvino.ai/2024/openvino-workflow/model-optimization-guide/quantizing-models-post-training/basic-quantization-flow.html) and has the following differences:\n",
"\n",
"- Besides the calibration dataset, a validation dataset is required to compute the accuracy metric. Both datasets can refer to the same data in the simplest case.\n",
"- Validation function, used to compute accuracy metric is required. It can be a function that is already available in the source framework or a custom function.\n",
"- Since accuracy validation is run several times during the quantization process, quantization with accuracy control can take more time than the Basic 8-bit quantization flow.\n",
"- The resulted model can provide smaller performance improvement than the Basic 8-bit quantization flow because some of the operations are kept in the original precision.\n",
"\n",
"> **NOTE**: Currently, 8-bit quantization with accuracy control in NNCF is available only for models in OpenVINO representation.\n",
"\n",
"The steps for the quantization with accuracy control are described below."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"#### Table of contents:\n",
"\n",
"- [Imports](#Imports)\n",
"- [Prepare the Model](#Prepare-the-Model)\n",
"- [Prepare LibriSpeech Dataset](#Prepare-LibriSpeech-Dataset)\n",
"- [Prepare calibration dataset](#Prepare-calibration-dataset)\n",
"- [Prepare validation function](#Prepare-validation-function)\n",
"- [Run quantization with accuracy control](#Run-quantization-with-accuracy-control)\n",
"- [Model Usage Example](#Model-Usage-Example)\n",
"- [Compare Accuracy of the Original and Quantized Models](#Compare-Accuracy-of-the-Original-and-Quantized-Models)\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"%pip install -q \"openvino>=2023.1.0\"\n",
"%pip install -q \"nncf>=2.6.0\"\n",
"%pip install -q --extra-index-url https://download.pytorch.org/whl/cpu soundfile librosa transformers \"torch>=2.1\" datasets torchmetrics"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Imports\n",
"[back to top ⬆️](#Table-of-contents:)\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2023-10-10 09:32:06.465943: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
"2023-10-10 09:32:06.505459: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
"To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n",
"2023-10-10 09:32:07.113533: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n"
]
}
],
"source": [
"import numpy as np\n",
"import torch\n",
"\n",
"from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prepare the Model\n",
"[back to top ⬆️](#Table-of-contents:)\n",
"\n",
"For instantiating PyTorch model class, we should use `Wav2Vec2ForCTC.from_pretrained` method with providing model ID for downloading from HuggingFace hub. Model weights and configuration files will be downloaded automatically in first time usage.\n",
"Keep in mind that downloading the files can take several minutes and depends on your internet connection.\n",
"\n",
"Additionally, we can create processor class which is responsible for model specific pre- and post-processing steps."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Some weights of Wav2Vec2ForCTC were not initialized from the model checkpoint at facebook/wav2vec2-base-960h and are newly initialized: ['wav2vec2.masked_spec_embed']\n",
"You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n"
]
}
],
"source": [
"BATCH_SIZE = 1\n",
"MAX_SEQ_LENGTH = 30480\n",
"\n",
"\n",
"torch_model = Wav2Vec2ForCTC.from_pretrained(\"facebook/wav2vec2-base-960h\", ctc_loss_reduction=\"mean\")\n",
"processor = Wav2Vec2Processor.from_pretrained(\"facebook/wav2vec2-base-960h\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Convert it to the OpenVINO Intermediate Representation (OpenVINO IR)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"WARNING:tensorflow:Please fix your imports. Module tensorflow.python.training.tracking.base has been moved to tensorflow.python.trackable.base. The old module will be deleted in version 2.11.\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"[ WARNING ] Please fix your imports. Module %s has been moved to %s. The old module will be deleted in version %s.\n",
"/home/ea/work/ov_venv/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:595: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\n",
" if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):\n",
"/home/ea/work/ov_venv/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:634: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\n",
" if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):\n"
]
}
],
"source": [
"import openvino as ov\n",
"\n",
"\n",
"default_input = torch.zeros([1, MAX_SEQ_LENGTH], dtype=torch.float)\n",
"ov_model = ov.convert_model(torch_model, example_input=default_input)"
]
},
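{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The conversion result is an `ov.Model` object. As an optional sanity check (a minimal sketch, not required by the rest of the notebook), we can print its input and output signature to confirm that tracing succeeded."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional: inspect the input and output signature of the converted model.\n",
"print(\"Model inputs:\", ov_model.inputs)\n",
"print(\"Model outputs:\", ov_model.outputs)"
]
},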
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prepare LibriSpeech Dataset\n",
"[back to top ⬆️](#Table-of-contents:)\n",
"\n",
"For demonstration purposes, we will use short dummy version of LibriSpeech dataset - `patrickvonplaten/librispeech_asr_dummy` to speed up model evaluation. Model accuracy can be different from reported in the paper. For reproducing original accuracy, use `librispeech_asr` dataset."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Found cached dataset librispeech_asr_dummy (/home/ea/.cache/huggingface/datasets/patrickvonplaten___librispeech_asr_dummy/clean/2.1.0/f2c70a4d03ab4410954901bde48c54b85ca1b7f9bf7d616e7e2a72b5ee6ddbfc)\n",
"Loading cached processed dataset at /home/ea/.cache/huggingface/datasets/patrickvonplaten___librispeech_asr_dummy/clean/2.1.0/f2c70a4d03ab4410954901bde48c54b85ca1b7f9bf7d616e7e2a72b5ee6ddbfc/cache-dcb48242e67b91b1.arrow\n"
]
}
],
"source": [
"from datasets import load_dataset\n",
"\n",
"\n",
"dataset = load_dataset(\"patrickvonplaten/librispeech_asr_dummy\", \"clean\", split=\"validation\")\n",
"test_sample = dataset[0][\"audio\"]\n",
"\n",
"\n",
"# define preprocessing function for converting audio to input values for model\n",
"def map_to_input(batch):\n",
" preprocessed_signal = processor(\n",
" batch[\"audio\"][\"array\"],\n",
" return_tensors=\"pt\",\n",
" padding=\"longest\",\n",
" sampling_rate=batch[\"audio\"][\"sampling_rate\"],\n",
" )\n",
" input_values = preprocessed_signal.input_values\n",
" batch[\"input_values\"] = input_values\n",
" return batch\n",
"\n",
"\n",
"# apply preprocessing function to dataset and remove audio column, to save memory as we do not need it anymore\n",
"dataset = dataset.map(map_to_input, batched=False, remove_columns=[\"audio\"])"
]
},
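{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"As an optional check (a minimal sketch), we can inspect one preprocessed item: each sample now holds the reference transcription in `text` and the model-ready `input_values` tensor."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional: inspect a single preprocessed sample.\n",
"sample = dataset[0]\n",
"print(\"Reference text:\", sample[\"text\"])\n",
"print(\"input_values shape:\", np.array(sample[\"input_values\"]).shape)"
]
},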
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prepare calibration dataset\n",
"[back to top ⬆️](#Table-of-contents:)\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"INFO:nncf:NNCF initialized successfully. Supported frameworks detected: torch, tensorflow, onnx, openvino\n"
]
}
],
"source": [
"import nncf\n",
"\n",
"\n",
"def transform_fn(data_item):\n",
" \"\"\"\n",
" Extract the model's input from the data item.\n",
" The data item here is the data item that is returned from the data source per iteration.\n",
" This function should be passed when the data item cannot be used as model's input.\n",
" \"\"\"\n",
" return np.array(data_item[\"input_values\"])\n",
"\n",
"\n",
"calibration_dataset = nncf.Dataset(dataset, transform_fn)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prepare validation function\n",
"[back to top ⬆️](#Table-of-contents:)\n",
"\n",
"Define the validation function."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"from torchmetrics import WordErrorRate\n",
"from tqdm.notebook import tqdm\n",
"\n",
"\n",
"def validation_fn(model, dataset):\n",
" \"\"\"\n",
" Calculate and returns a metric for the model.\n",
" \"\"\"\n",
" wer = WordErrorRate()\n",
" for sample in dataset:\n",
" # run infer function on sample\n",
" output = model.output(0)\n",
" logits = model(np.array(sample[\"input_values\"]))[output]\n",
" predicted_ids = np.argmax(logits, axis=-1)\n",
" transcription = processor.batch_decode(torch.from_numpy(predicted_ids))\n",
"\n",
" # update metric on sample result\n",
" wer.update(transcription, [sample[\"text\"]])\n",
"\n",
" result = wer.compute()\n",
"\n",
" return 1 - result"
]
},
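{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, the validation function can be verified on the original `FP32` model before running quantization. The sketch below assumes a CPU device is available and simply reports 1 - WER on the dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional sanity check of the validation function on the original model.\n",
"# Assumption: a CPU device is available for compilation.\n",
"core = ov.Core()\n",
"compiled_fp32_model = core.compile_model(ov_model, device_name=\"CPU\")\n",
"print(f\"1 - WER of the original model: {float(validation_fn(compiled_fp32_model, dataset)):.4f}\")"
]
},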
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Run quantization with accuracy control\n",
"[back to top ⬆️](#Table-of-contents:)\n",
"\n",
"You should provide the calibration dataset and the validation dataset. It can be the same dataset. \n",
" - parameter `max_drop` defines the accuracy drop threshold. The quantization process stops when the degradation of accuracy metric on the validation dataset is less than the `max_drop`. The default value is 0.01. NNCF will stop the quantization and report an error if the `max_drop` value can’t be reached.\n",
" - `drop_type` defines how the accuracy drop will be calculated: ABSOLUTE (used by default) or RELATIVE.\n",
" - `ranking_subset_size` - size of a subset that is used to rank layers by their contribution to the accuracy drop. Default value is 300, and the more samples it has the better ranking, potentially. Here we use the value 25 to speed up the execution. \n",
"\n",
"> **NOTE**: Execution can take tens of minutes and requires up to 10 GB of free memory"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Statistics collection: 24%|███████████████████████████████████▎ | 73/300 [00:12<00:37, 5.98it/s]\n",
"Applying Smooth Quant: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:01<00:00, 41.01it/s]\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"INFO:nncf:36 ignored nodes was found by name in the NNCFGraph\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"Statistics collection: 24%|███████████████████████████████████▎ | 73/300 [00:22<01:08, 3.31it/s]\n",
"Applying Fast Bias correction: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 74/74 [00:23<00:00, 3.09it/s]"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"INFO:nncf:Validation of initial model was started\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"INFO:nncf:Elapsed Time: 00:00:00\n",
"INFO:nncf:Elapsed Time: 00:00:11\n",
"INFO:nncf:Metric of initial model: 0.9469565153121948\n",
"INFO:nncf:Collecting values for each data item using the initial model\n",
"INFO:nncf:Elapsed Time: 00:00:09\n",
"INFO:nncf:Validation of quantized model was started\n",
"INFO:nncf:Elapsed Time: 00:00:22\n",
"INFO:nncf:Elapsed Time: 00:00:11\n",
"INFO:nncf:Metric of quantized model: 0.5\n",
"INFO:nncf:Collecting values for each data item using the quantized model\n",
"INFO:nncf:Elapsed Time: 00:00:06\n",
"INFO:nncf:Accuracy drop: 0.4469565153121948 (DropType.ABSOLUTE)\n",
"INFO:nncf:Accuracy drop: 0.4469565153121948 (DropType.ABSOLUTE)\n",
"INFO:nncf:Total number of quantized operations in the model: 94\n",
"INFO:nncf:Number of parallel processes to rank quantized operations: 14\n",
"INFO:nncf:ORIGINAL metric is used to rank quantizers\n",
"INFO:nncf:Calculating ranking score for groups of quantizers\n",
"INFO:nncf:Elapsed Time: 00:04:36\n",
"INFO:nncf:Changing the scope of quantizer nodes was started\n",
"INFO:nncf:Reverted 1 operations to the floating-point precision: \n",
"\t__module.wav2vec2.feature_extractor.conv_layers.2.conv/aten::_convolution/Convolution_11\n",
"INFO:nncf:Accuracy drop with the new quantization scope is 0.06173914670944214 (DropType.ABSOLUTE)\n",
"INFO:nncf:Reverted 1 operations to the floating-point precision: \n",
"\t__module.wav2vec2.feature_extractor.conv_layers.1.conv/aten::_convolution/Convolution_10\n",
"INFO:nncf:Accuracy drop with the new quantization scope is 0.010434746742248535 (DropType.ABSOLUTE)\n",
"INFO:nncf:Reverted 1 operations to the floating-point precision: \n",
"\t__module.wav2vec2.feature_extractor.conv_layers.3.conv/aten::_convolution/Convolution_12\n",
"INFO:nncf:Algorithm completed: achieved required accuracy drop 0.006956517696380615 (DropType.ABSOLUTE)\n",
"INFO:nncf:3 out of 94 were reverted back to the floating-point precision:\n",
"\t__module.wav2vec2.feature_extractor.conv_layers.2.conv/aten::_convolution/Convolution_11\n",
"\t__module.wav2vec2.feature_extractor.conv_layers.1.conv/aten::_convolution/Convolution_10\n",
"\t__module.wav2vec2.feature_extractor.conv_layers.3.conv/aten::_convolution/Convolution_12\n"
]
}
],
"source": [
"from nncf.quantization.advanced_parameters import AdvancedAccuracyRestorerParameters\n",
"from nncf.parameters import ModelType\n",
"\n",
"quantized_model = nncf.quantize_with_accuracy_control(\n",
" ov_model,\n",
" calibration_dataset=calibration_dataset,\n",
" validation_dataset=calibration_dataset,\n",
" validation_fn=validation_fn,\n",
" max_drop=0.01,\n",
" drop_type=nncf.DropType.ABSOLUTE,\n",
" model_type=ModelType.TRANSFORMER,\n",
" advanced_accuracy_restorer_parameters=AdvancedAccuracyRestorerParameters(ranking_subset_size=25),\n",
")"
]
},
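{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The result of quantization is again an `ov.Model`. If you want to reuse it later without re-running quantization, it can be saved to OpenVINO IR. This is a minimal sketch; the file name below is only an example."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional: save the quantized model to OpenVINO IR (example file name).\n",
"ov.save_model(quantized_model, \"quantized_wav2vec2_base.xml\")"
]
},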
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Model Usage Example\n",
"[back to top ⬆️](#Table-of-contents:)\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"collapsed": false,
"is_executing": true,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"data": {
"text/html": [
"\n",
" \n",
" "
],
"text/plain": [
""
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import IPython.display as ipd\n",
"\n",
"\n",
"ipd.Audio(test_sample[\"array\"], rate=16000)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Select device for inference"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import ipywidgets as widgets\n",
"\n",
"core = ov.Core()\n",
"\n",
"device = widgets.Dropdown(\n",
" options=core.available_devices + [\"AUTO\"],\n",
" value=\"CPU\",\n",
" description=\"Device:\",\n",
" disabled=False,\n",
")\n",
"\n",
"device"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"compiled_quantized_model = core.compile_model(model=quantized_model, device_name=device.value)\n",
"\n",
"input_data = np.expand_dims(test_sample[\"array\"], axis=0)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, make a prediction."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"data": {
"text/plain": [
"['I E O WE WORD I O O FAGGI FARE E BO']"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"predictions = compiled_quantized_model([input_data])[0]\n",
"predicted_ids = np.argmax(predictions, axis=-1)\n",
"transcription = processor.batch_decode(torch.from_numpy(predicted_ids))\n",
"transcription"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Compare Accuracy of the Original and Quantized Models\n",
"[back to top ⬆️](#Table-of-contents:)\n",
"\n",
" - Define dataloader for test dataset.\n",
" - Define functions to get inference for PyTorch and OpenVINO models.\n",
" - Define functions to compute Word Error Rate."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"# inference function for pytorch\n",
"def torch_infer(model, sample):\n",
" logits = model(torch.Tensor(sample[\"input_values\"])).logits\n",
" # take argmax and decode\n",
" predicted_ids = torch.argmax(logits, dim=-1)\n",
" transcription = processor.batch_decode(predicted_ids)\n",
" return transcription\n",
"\n",
"\n",
"# inference function for openvino\n",
"def ov_infer(model, sample):\n",
" output = model.output(0)\n",
" logits = model(np.array(sample[\"input_values\"]))[output]\n",
" predicted_ids = np.argmax(logits, axis=-1)\n",
" transcription = processor.batch_decode(torch.from_numpy(predicted_ids))\n",
" return transcription\n",
"\n",
"\n",
"def compute_wer(dataset, model, infer_fn):\n",
" wer = WordErrorRate()\n",
" for sample in tqdm(dataset):\n",
" # run infer function on sample\n",
" transcription = infer_fn(model, sample)\n",
" # update metric on sample result\n",
" wer.update(transcription, [sample[\"text\"]])\n",
" # finalize metric calculation\n",
" result = wer.compute()\n",
" return result"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, compute WER for the original PyTorch model and quantized model."
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "0e85e405be584837ad46229ffe26e257",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
" 0%| | 0/73 [00:00, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "7aa707718748449d96d0261e9ca99e77",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
" 0%| | 0/73 [00:00, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"[PyTorch] Word Error Rate: 0.0530\n",
"[Quantized OpenVino] Word Error Rate: 0.0600\n"
]
}
],
"source": [
"pt_result = compute_wer(dataset, torch_model, torch_infer)\n",
"quantized_result = compute_wer(dataset, compiled_quantized_model, ov_infer)\n",
"\n",
"print(f\"[PyTorch] Word Error Rate: {pt_result:.4f}\")\n",
"print(f\"[Quantized OpenVino] Word Error Rate: {quantized_result:.4f}\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
},
"openvino_notebooks": {
"imageUrl": "",
"tags": {
"categories": [
"Optimize",
"API Overview"
],
"libraries": [],
"other": [],
"tasks": [
"Speech Recognition"
]
}
},
"widgets": {
"application/vnd.jupyter.widget-state+json": {
"state": {},
"version_major": 2,
"version_minor": 0
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}