{ "cells": [ { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "# Text Prediction with OpenVINO™\n", "\n", "This notebook shows text prediction with OpenVINO. This notebook can work in two different modes, Text Generation and Conversation, which the user can select via selecting the model in the Model Selection Section. We use three models [GPT-2](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf), [GPT-Neo](https://zenodo.org/record/5297715#.ZAmpsXZBztU), and [PersonaGPT](https://arxiv.org/abs/2110.12949v1), which are a part of the Generative Pre-trained Transformer (GPT) family. GPT-2 and GPT-Neo can be used for text generation, whereas PersonaGPT is trained for the downstream task of conversation.\n", "\n", "GPT-2 and GPT-Neo are pre-trained on a large corpus of English text using unsupervised training. They both display a broad set of capabilities, including the ability to generate conditional synthetic text samples of unprecedented quality, where we prime the model with an input and have it generate a lengthy continuation.\n", "\n", "More details about the models are provided on their HuggingFace cards:\n", "\n", "* [GPT-2](https://huggingface.co/gpt2)\n", "* [GPT-Neo](https://huggingface.co/EleutherAI/gpt-neo-125M)\n", "\n", "PersonaGPT is an open-domain conversational agent that can decode _personalized_ and _controlled_ responses based on user input. It is built on the pretrained [DialoGPT-medium](https://github.com/microsoft/DialoGPT) model, following the [GPT-2](https://github.com/openai/gpt-2) architecture. \n", "PersonaGPT is fine-tuned on the [Persona-Chat](https://arxiv.org/pdf/1801.07243) dataset. The model is available from [HuggingFace](https://huggingface.co/af1tang/personaGPT). PersonaGPT displays a broad set of capabilities, including the ability to take on personas, where we prime the model with few facts and have it generate based upon that, it can also be used for creating a chatbot on a knowledge base.\n", "\n", "The following image illustrates the complete demo pipeline used for text generation:\n", "\n", "![image2](https://user-images.githubusercontent.com/91228207/163990722-d2713ede-921e-4594-8b00-8b5c1a4d73b5.jpeg)\n", "\n", "This is a demonstration in which the user can type the beginning of the text and the network will generate a further. This procedure can be repeated as many times as the user desires.\n", "\n", "For Text Generation, The model input is tokenized text, which serves as the initial condition for text generation. Then, logits from the models' inference results are obtained, and the token with the highest probability is selected using the top-k sampling strategy and joined to the input sequence. This procedure repeats until the end of the sequence token is received or the specified maximum length is reached. After that, tokenized IDs are decoded to text.\n", "\n", "The following image illustrates the demo pipeline for conversation:\n", "\n", "![image2](https://user-images.githubusercontent.com/95569637/226101538-e204aebd-a34f-4c8b-b90c-5363ba41c080.jpeg)\n", "\n", "For Conversation, User Input is tokenized with `eos_token` concatenated in the end. Then, the text gets generated as detailed above. The Generated response is added to the history with the `eos_token` at the end. 
Additional user input is added to the history, and the sequence is passed back into the model.\n", "\n", "\n", "#### Table of contents:\n", "\n", "- [Model Selection](#Model-Selection)\n", "- [Load Model](#Load-Model)\n", "- [Convert Pytorch Model to OpenVINO IR](#Convert-Pytorch-Model-to-OpenVINO-IR)\n", " - [Load the model](#Load-the-model)\n", " - [Select inference device](#Select-inference-device)\n", "- [Pre-Processing](#Pre-Processing)\n", "- [Define tokenization](#Define-tokenization)\n", " - [Define Softmax layer](#Define-Softmax-layer)\n", " - [Set the minimum sequence length](#Set-the-minimum-sequence-length)\n", " - [Top-K sampling](#Top-K-sampling)\n", " - [Main Processing Function](#Main-Processing-Function)\n", "- [Inference with GPT-Neo/GPT-2](#Inference-with-GPT-Neo/GPT-2)\n", "- [Conversation with PersonaGPT using OpenVINO](#Conversation-with-PersonaGPT-using-OpenVINO)\n", "- [Converse Function](#Converse-Function)\n", "- [Conversation Class](#Conversation-Class)\n", "- [Conversation with PersonaGPT](#Conversation-with-PersonaGPT)\n", "\n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Model Selection\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "Select the Model to be used for text generation, GPT-2 and GPT-Neo are used for text generation whereas PersonaGPT is used for Conversation." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "%pip install -q \"openvino>=2023.1.0\"\n", "%pip install -q \"gradio>=4.19\"\n", "%pip install -q --extra-index-url https://download.pytorch.org/whl/cpu transformers \"torch>=2.1\"" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "91b87541aae4431db8d2974edbb8de04", "version_major": 2, "version_minor": 0 }, "text/plain": [ "VBox(children=(Select(description='Select Model:', options=('PersonaGPT (Converastional)', 'GPT-2', 'GPT-Neo')…" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import ipywidgets as widgets\n", "\n", "style = {\"description_width\": \"initial\"}\n", "model_name = widgets.Select(\n", " options=[\"PersonaGPT (Converastional)\", \"GPT-2\", \"GPT-Neo\"],\n", " value=\"PersonaGPT (Converastional)\",\n", " description=\"Select Model:\",\n", " disabled=False,\n", ")\n", "\n", "widgets.VBox([model_name])" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Load Model\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "Download the Selected Model and Tokenizer from HuggingFace" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/home/ea/work/ov_venv/lib/python3.8/site-packages/torch/cuda/__init__.py:138: UserWarning: CUDA initialization: The NVIDIA driver on your system is too old (found version 11080). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver. 
(Triggered internally at ../c10/cuda/CUDAFunctions.cpp:108.)\n", " return torch._C._cuda_getDeviceCount() > 0\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "398f68d549c84c5c934198857ee945fd", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading (…)/main/tokenizer.json: 0%| | 0.00/1.36M [00:00 Tuple[List[int], List[int]]:\n", " \"\"\"\n", " tokenize input text using GPT2 tokenizer\n", "\n", " Parameters:\n", " text, str - input text\n", " Returns:\n", " input_ids - np.array with input token ids\n", " attention_mask - np.array with 0 in place, where should be padding and 1 for places where original tokens are located, represents attention mask for model\n", " \"\"\"\n", "\n", " inputs = tokenizer(text, return_tensors=\"np\")\n", " return inputs[\"input_ids\"], inputs[\"attention_mask\"]" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "`eos_token` is special token, which means that generation is finished. We store the index of this token in order to use this index as padding at later stage." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "2023-10-30 09:20:47.662787: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n", "2023-10-30 09:20:47.754627: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n", "To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n", "2023-10-30 09:20:49.414811: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n" ] } ], "source": [ "eos_token_id = tokenizer.eos_token_id\n", "eos_token = tokenizer.decode(eos_token_id)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Define Softmax layer\n", "[back to top ⬆️](#Table-of-contents:)\n", "A softmax function is used to convert top-k logits into a probability distribution. " ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "\n", "def softmax(x: np.array) -> np.array:\n", " e_x = np.exp(x - np.max(x, axis=-1, keepdims=True))\n", " summation = e_x.sum(axis=-1, keepdims=True)\n", " return e_x / summation" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Set the minimum sequence length\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "If the minimum sequence length is not reached, the following code will reduce the probability of the `eos` token occurring. This continues the process of generating the next words." 
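, "\n", "\n", "As a quick illustration of this behavior (a minimal sketch that can be run after the next cell defines `process_logits`; the logit values below are made up), suppressing the `eos` token simply pushes its logit to `-inf`:\n", "\n", "```python\n", "import numpy as np\n", "\n", "# toy vocabulary of 3 tokens, with eos_token_id = 2\n", "dummy_scores = np.array([[1.0, 2.0, 3.0]])\n", "\n", "# only 2 tokens generated so far, but we ask for at least 5,\n", "# so the eos logit is set to -inf and cannot be sampled yet\n", "print(process_logits(cur_length=2, scores=dummy_scores, eos_token_id=2, min_length=5))\n", "# -> [[  1.   2. -inf]]\n", "```"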
] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "def process_logits(cur_length: int, scores: np.array, eos_token_id: int, min_length: int = 0) -> np.array:\n", " \"\"\"\n", " Reduce probability for padded indices.\n", "\n", " Parameters:\n", " cur_length: Current length of input sequence.\n", " scores: Model output logits.\n", " eos_token_id: Index of end of string token in model vocab.\n", " min_length: Minimum length for applying postprocessing.\n", "\n", " Returns:\n", " Processed logits with reduced probability for padded indices.\n", " \"\"\"\n", " if cur_length < min_length:\n", " scores[:, eos_token_id] = -float(\"inf\")\n", " return scores" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Top-K sampling\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "In Top-K sampling, we filter the K most likely next words and redistribute the probability mass among only those K next words. " ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "def get_top_k_logits(scores: np.array, top_k: int) -> np.array:\n", " \"\"\"\n", " Perform top-k sampling on the logits scores.\n", "\n", " Parameters:\n", " scores: np.array, model output logits.\n", " top_k: int, number of elements with the highest probability to select.\n", "\n", " Returns:\n", " np.array, shape (batch_size, sequence_length, vocab_size),\n", " filtered logits scores where only the top-k elements with the highest\n", " probability are kept and the rest are replaced with -inf\n", " \"\"\"\n", " filter_value = -float(\"inf\")\n", " top_k = min(max(top_k, 1), scores.shape[-1])\n", " top_k_scores = -np.sort(-scores)[:, :top_k]\n", " indices_to_remove = scores < np.min(top_k_scores)\n", " filtred_scores = np.ma.array(scores, mask=indices_to_remove, fill_value=filter_value).filled()\n", " return filtred_scores" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Main Processing Function\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "Generating the predicted sequence." 
] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "def generate_sequence(\n", " input_ids: List[int],\n", " attention_mask: List[int],\n", " max_sequence_length: int = 128,\n", " eos_token_id: int = eos_token_id,\n", " dynamic_shapes: bool = True,\n", ") -> List[int]:\n", " \"\"\"\n", " Generates a sequence of tokens using a pre-trained language model.\n", "\n", " Parameters:\n", " input_ids: np.array, tokenized input ids for model\n", " attention_mask: np.array, attention mask for model\n", " max_sequence_length: int, maximum sequence length for stopping iteration\n", " eos_token_id: int, index of the end-of-sequence token in the model's vocabulary\n", " dynamic_shapes: bool, whether to use dynamic shapes for inference or pad model input to max_sequence_length\n", "\n", " Returns:\n", " np.array, the predicted sequence of token ids\n", " \"\"\"\n", " while True:\n", " cur_input_len = len(input_ids[0])\n", " if not dynamic_shapes:\n", " pad_len = max_sequence_length - cur_input_len\n", " model_input_ids = np.concatenate((input_ids, [[eos_token_id] * pad_len]), axis=-1)\n", " model_input_attention_mask = np.concatenate((attention_mask, [[0] * pad_len]), axis=-1)\n", " else:\n", " model_input_ids = input_ids\n", " model_input_attention_mask = attention_mask\n", " outputs = compiled_model({\"input_ids\": model_input_ids, \"attention_mask\": model_input_attention_mask})[output_key]\n", " next_token_logits = outputs[:, cur_input_len - 1, :]\n", " # pre-process distribution\n", " next_token_scores = process_logits(cur_input_len, next_token_logits, eos_token_id)\n", " top_k = 20\n", " next_token_scores = get_top_k_logits(next_token_scores, top_k)\n", " # get next token id\n", " probs = softmax(next_token_scores)\n", " next_tokens = np.random.choice(probs.shape[-1], 1, p=probs[0], replace=True)\n", " # break the loop if max length or end of text token is reached\n", " if cur_input_len == max_sequence_length or next_tokens[0] == eos_token_id:\n", " break\n", " else:\n", " input_ids = np.concatenate((input_ids, [next_tokens]), axis=-1)\n", " attention_mask = np.concatenate((attention_mask, [[1] * len(next_tokens)]), axis=-1)\n", " return input_ids" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Inference with GPT-Neo/GPT-2\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "The `text` variable below is the input used to generate a predicted sequence." ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Selected Model is PersonaGPT. Please select GPT-Neo or GPT-2 in the first cell to generate text sequences\n" ] } ], "source": [ "import time\n", "\n", "if not model_name.value == \"PersonaGPT (Converastional)\":\n", " text = \"Deep learning is a type of machine learning that uses neural networks\"\n", " input_ids, attention_mask = tokenize(text)\n", "\n", " start = time.perf_counter()\n", " output_ids = generate_sequence(input_ids, attention_mask)\n", " end = time.perf_counter()\n", " output_text = \" \"\n", " # Convert IDs to words and make the sentence from it\n", " for i in output_ids[0]:\n", " output_text += tokenizer.batch_decode([i])[0]\n", " print(f\"Generation took {end - start:.3f} s\")\n", " print(f\"Input Text: {text}\")\n", " print()\n", " print(f\"{model_name.value}: {output_text}\")\n", "else:\n", " print(\"Selected Model is PersonaGPT. 
Please select GPT-Neo or GPT-2 in the first cell to generate text sequences\")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "# Conversation with PersonaGPT using OpenVINO\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "User input is tokenized with the `eos_token` concatenated at the end. The model input is the tokenized text, which serves as the initial condition for generation. Logits are then obtained from the model's inference results, and the token with the highest probability is selected using the top-k sampling strategy and joined to the input sequence. The procedure repeats until the end-of-sequence token is received or the specified maximum length is reached. After that, the token ids are decoded back to text using the tokenizer.\n", "\n", "The generated response is added to the history with the `eos_token` at the end. Further user input is appended to the history and passed into the model again." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Converse Function\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "A wrapper around the `generate_sequence` function to support conversation." ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "def converse(\n", "    input: str,\n", "    history: List[int],\n", "    eos_token: str = eos_token,\n", "    eos_token_id: int = eos_token_id,\n", ") -> Tuple[str, List[int]]:\n", "    \"\"\"\n", "    Converse with the model.\n", "\n", "    Parameters:\n", "      input: text input given by the user\n", "      history: chat history, ids of the tokens of the chat that has occurred so far\n", "      eos_token: end of sequence string\n", "      eos_token_id: end of sequence index from vocab\n", "    Returns:\n", "      response: text response generated by the model\n", "      history: chat history, ids of the tokens of the chat that has occurred so far, including the tokens of the generated response\n", "    \"\"\"\n", "\n", "    # Get input ids of the user input\n", "    new_user_input_ids, _ = tokenize(input + eos_token)\n", "\n", "    # Append the new user input tokens to the chat history, if history exists\n", "    if len(history) == 0:\n", "        bot_input_ids = new_user_input_ids\n", "    else:\n", "        bot_input_ids = np.concatenate([history, new_user_input_ids[0]])\n", "        bot_input_ids = np.expand_dims(bot_input_ids, axis=0)\n", "\n", "    # Create attention mask\n", "    bot_attention_mask = np.ones_like(bot_input_ids)\n", "\n", "    # Generate response from the model\n", "    history = generate_sequence(bot_input_ids, bot_attention_mask, max_sequence_length=1000)\n", "\n", "    # Add the eos_token to mark the end of the sequence\n", "    history = np.append(history[0], eos_token_id)\n", "\n", "    # Convert the tokens to text, split on eos_token and retrieve the latest response from the model\n", "    response = \"\".join(tokenizer.batch_decode(history)).split(eos_token)[-2]\n", "    return response, history" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Conversation Class\n", "[back to top ⬆️](#Table-of-contents:)\n" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "class Conversation:\n", "    def __init__(self):\n", "        # Initialize empty history\n", "        self.history = []\n", "        self.messages = []\n", "\n", "    def chat(self, input_text):\n", "        \"\"\"\n", "        Wrapper over the converse function.\n", "        Parameters:\n", "            input_text: text input given by the user\n", "        Returns:\n", "            response: text response generated by the model\n", "        \"\"\"\n", "        response, self.history = converse(input_text, self.history)\n", "        self.messages.append(f\"Person: 
{input_text}\")\n", " self.messages.append(f\"PersonaGPT: {response}\")\n", " return response" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Conversation with PersonaGPT\n", "[back to top ⬆️](#Table-of-contents:)\n", "\n", "This notebook provides two styles of inference, Plain and Interactive. The style of inference can be selected in the next cell." ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "f399ebf7e6be4a2fae0cd328e055f73c", "version_major": 2, "version_minor": 0 }, "text/plain": [ "VBox(children=(Select(description='Inference Style:', options=('Plain', 'Interactive'), value='Plain'),))" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "style = {\"description_width\": \"initial\"}\n", "interactive_mode = widgets.Select(\n", " options=[\"Plain\", \"Interactive\"],\n", " value=\"Plain\",\n", " description=\"Inference Style:\",\n", " disabled=False,\n", ")\n", "\n", "widgets.VBox([interactive_mode])" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Person: Hi,How are you?\n", "PersonaGPT: good, just got done with my shift at the coffee shop\n", "Person: What are you doing?\n", "PersonaGPT: oh, just getting ready to go out to my friends house to party\n", "Person: I like to dance,do you?\n", "PersonaGPT: i am not much of an activity person, more of a party person.\n", "Person: Can you recommend me some books?\n", "PersonaGPT: i like the holy grail. what about you?\n", "Person: Hi,How are you?\n", "PersonaGPT: good, just got done with my shift at the coffee shop\n", "Person: What are you doing?\n", "PersonaGPT: just got done with my shift at the coffee shop\n", "Person: I like to dance,do you?\n", "PersonaGPT: i like to dance as well.\n", "Person: Can you recommend me some books?\n", "PersonaGPT: the holy grail is a great read.\n", "Person: Hi,How are you?\n", "PersonaGPT: i am doing well. 
do you have any hobbies?\n", "Person: What are you doing?\n", "PersonaGPT: i just got done with my shift at the coffee shop.\n" ] } ], "source": [ "import gradio as gr\n", "\n", "if model_name.value == \"PersonaGPT (Converastional)\":\n", " if interactive_mode.value == \"Plain\":\n", " conversation = Conversation()\n", " user_prompt = None\n", " pre_written_prompts = [\n", " \"Hi,How are you?\",\n", " \"What are you doing?\",\n", " \"I like to dance,do you?\",\n", " \"Can you recommend me some books?\",\n", " ]\n", " # Number of responses generated by model\n", " n_prompts = 10\n", " for i in range(n_prompts):\n", " # Uncomment for taking User Input\n", " # user_prompt = input()\n", " if not user_prompt:\n", " user_prompt = pre_written_prompts[i % len(pre_written_prompts)]\n", " conversation.chat(user_prompt)\n", " print(conversation.messages[-2])\n", " print(conversation.messages[-1])\n", " user_prompt = None\n", " else:\n", "\n", " def add_text(history, text):\n", " history = history + [(text, None)]\n", " return history, \"\"\n", "\n", " conversation = Conversation()\n", "\n", " def bot(history):\n", " conversation.chat(history[-1][0])\n", " response = conversation.messages[-1]\n", " history[-1][1] = response\n", " return history\n", "\n", " with gr.Blocks() as demo:\n", " chatbot = gr.Chatbot([], elem_id=\"chatbot\")\n", "\n", " with gr.Row():\n", " with gr.Column():\n", " txt = gr.Textbox(\n", " show_label=False,\n", " placeholder=\"Enter text and press enter, or upload an image\",\n", " container=False,\n", " )\n", "\n", " txt.submit(add_text, [chatbot, txt], [chatbot, txt]).then(bot, chatbot, chatbot)\n", " try:\n", " demo.launch(debug=True)\n", " except Exception:\n", " demo.launch(debug=True, share=True)\n", " # if you are launching remotely, specify server_name and server_port\n", " # demo.launch(server_name='your server name', server_port='server port in int')\n", " # Read more in the docs: https://gradio.app/docs/\n", "else:\n", " print(\"Selected Model is not PersonaGPT, Please select PersonaGPT in the first cell to have a conversation\")" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10" }, "openvino_notebooks": { "imageUrl": "https://user-images.githubusercontent.com/95569637/229706278-2aa6a60d-02f4-45e2-9541-97529df8359d.png", "tags": { "categories": [ "Model Demos" ], "libraries": [], "other": [], "tasks": [ "Conversational", "Text Generation" ] } }, "vscode": { "interpreter": { "hash": "a9b3b68eddeff8457de47f167459c4b20b0e6a6bfb00512a2de4d11c79c0e0f1" } }, "widgets": { "application/vnd.jupyter.widget-state+json": { "state": { "0141f8b162644ef89ec051fef141b2f5": { "model_module": "@jupyter-widgets/controls", "model_module_version": "2.0.0", "model_name": "HTMLModel", "state": { "layout": "IPY_MODEL_bd54b4b27dca4663b5978490a5f93f6e", "style": "IPY_MODEL_32839db40c5c48229a374382679cefde", "value": "Downloading (…)/main/tokenizer.json: 100%" } }, "063b5b5701904f8da1e982c5ddd0132f": { "model_module": "@jupyter-widgets/controls", "model_module_version": "2.0.0", "model_name": "DescriptionStyleModel", "state": { "description_width": "" } }, "06bbf15e9ce545bfbe648415254a651d": { "model_module": "@jupyter-widgets/controls", "model_module_version": "2.0.0", "model_name": "DescriptionStyleModel", 
"state": { "description_width": "" } }, "19970c2e603148749d10894807aacda6": { "model_module": "@jupyter-widgets/controls", "model_module_version": "2.0.0", "model_name": "DropdownModel", "state": { "_options_labels": [ "CPU", "GPU", "AUTO" ], "description": "Device:", "index": 2, "layout": "IPY_MODEL_c9c2e3c17d0c4960b932a8b73277a5a5", "style": "IPY_MODEL_faf3088d462e4a4f84e1c0b380d4e42a" } }, "32839db40c5c48229a374382679cefde": { "model_module": "@jupyter-widgets/controls", "model_module_version": "2.0.0", "model_name": "HTMLStyleModel", "state": { "description_width": "", "font_size": null, "text_color": null } }, "398f68d549c84c5c934198857ee945fd": { "model_module": "@jupyter-widgets/controls", "model_module_version": "2.0.0", "model_name": "HBoxModel", "state": { "children": [ "IPY_MODEL_0141f8b162644ef89ec051fef141b2f5", "IPY_MODEL_8eb0272361754c2295b65fe42e6268ae", "IPY_MODEL_638ae6dcdcd6406f9d0249de83ff90d1" ], "layout": "IPY_MODEL_a682f7e92ead49e2a65450174a475fed" } }, "5774584ade0d4950a82cd7476e7793a6": { "model_module": "@jupyter-widgets/base", "model_module_version": "2.0.0", "model_name": "LayoutModel", "state": {} }, "638ae6dcdcd6406f9d0249de83ff90d1": { "model_module": "@jupyter-widgets/controls", "model_module_version": "2.0.0", "model_name": "HTMLModel", "state": { "layout": "IPY_MODEL_754a0535c16543f2a4a705abcd605f8d", "style": "IPY_MODEL_c75c8b63c0c046268c1e26e8a93ac134", "value": " 1.36M/1.36M [00:00<00:00, 1.83MB/s]" } }, "70147a627c2645a99ff7cf6ee8607a7c": { "model_module": "@jupyter-widgets/controls", "model_module_version": "2.0.0", "model_name": "SelectModel", "state": { "_options_labels": [ "PersonaGPT (Converastional)", "GPT-2", "GPT-Neo" ], "description": "Select Model:", "index": 0, "layout": "IPY_MODEL_83bac2ae3e5544b9acf8945240b80551", "style": "IPY_MODEL_06bbf15e9ce545bfbe648415254a651d" } }, "754a0535c16543f2a4a705abcd605f8d": { "model_module": "@jupyter-widgets/base", "model_module_version": "2.0.0", "model_name": "LayoutModel", "state": {} }, "83bac2ae3e5544b9acf8945240b80551": { "model_module": "@jupyter-widgets/base", "model_module_version": "2.0.0", "model_name": "LayoutModel", "state": {} }, "8eb0272361754c2295b65fe42e6268ae": { "model_module": "@jupyter-widgets/controls", "model_module_version": "2.0.0", "model_name": "FloatProgressModel", "state": { "bar_style": "success", "layout": "IPY_MODEL_fa7baf4549d8439db857963411de3050", "max": 1355972, "style": "IPY_MODEL_dda9f9de898e4c6e9e3ce26c683fb28b", "value": 1355972 } }, "91b87541aae4431db8d2974edbb8de04": { "model_module": "@jupyter-widgets/controls", "model_module_version": "2.0.0", "model_name": "VBoxModel", "state": { "children": [ "IPY_MODEL_70147a627c2645a99ff7cf6ee8607a7c" ], "layout": "IPY_MODEL_f113a8303b02441790e7b60c63f36dc6" } }, "a682f7e92ead49e2a65450174a475fed": { "model_module": "@jupyter-widgets/base", "model_module_version": "2.0.0", "model_name": "LayoutModel", "state": {} }, "bd54b4b27dca4663b5978490a5f93f6e": { "model_module": "@jupyter-widgets/base", "model_module_version": "2.0.0", "model_name": "LayoutModel", "state": {} }, "c75c8b63c0c046268c1e26e8a93ac134": { "model_module": "@jupyter-widgets/controls", "model_module_version": "2.0.0", "model_name": "HTMLStyleModel", "state": { "description_width": "", "font_size": null, "text_color": null } }, "c9c2e3c17d0c4960b932a8b73277a5a5": { "model_module": "@jupyter-widgets/base", "model_module_version": "2.0.0", "model_name": "LayoutModel", "state": {} }, "d014151baa144679b3b3c3a21ce96388": { "model_module": 
"@jupyter-widgets/controls", "model_module_version": "2.0.0", "model_name": "SelectModel", "state": { "_options_labels": [ "Plain", "Interactive" ], "description": "Inference Style:", "index": 0, "layout": "IPY_MODEL_e9794eb673564df2a8635f113e7f7c53", "style": "IPY_MODEL_063b5b5701904f8da1e982c5ddd0132f" } }, "dda9f9de898e4c6e9e3ce26c683fb28b": { "model_module": "@jupyter-widgets/controls", "model_module_version": "2.0.0", "model_name": "ProgressStyleModel", "state": { "description_width": "" } }, "e9794eb673564df2a8635f113e7f7c53": { "model_module": "@jupyter-widgets/base", "model_module_version": "2.0.0", "model_name": "LayoutModel", "state": {} }, "f113a8303b02441790e7b60c63f36dc6": { "model_module": "@jupyter-widgets/base", "model_module_version": "2.0.0", "model_name": "LayoutModel", "state": {} }, "f399ebf7e6be4a2fae0cd328e055f73c": { "model_module": "@jupyter-widgets/controls", "model_module_version": "2.0.0", "model_name": "VBoxModel", "state": { "children": [ "IPY_MODEL_d014151baa144679b3b3c3a21ce96388" ], "layout": "IPY_MODEL_5774584ade0d4950a82cd7476e7793a6" } }, "fa7baf4549d8439db857963411de3050": { "model_module": "@jupyter-widgets/base", "model_module_version": "2.0.0", "model_name": "LayoutModel", "state": {} }, "faf3088d462e4a4f84e1c0b380d4e42a": { "model_module": "@jupyter-widgets/controls", "model_module_version": "2.0.0", "model_name": "DescriptionStyleModel", "state": { "description_width": "" } } }, "version_major": 2, "version_minor": 0 } } }, "nbformat": 4, "nbformat_minor": 4 }