{ "nbformat": 4, "nbformat_minor": 0, "metadata": { "colab": { "provenance": [] }, "kernelspec": { "name": "python3", "display_name": "Python 3" }, "language_info": { "name": "python" } }, "cells": [ { "cell_type": "markdown", "source": [ "### 1. **Imports and Dependencies:**\n", " - **Gradio:** Gradio builds the interactive web interface through which users interact with the chatbot.\n", " - **OpenAI:** The `openai` library is used to call OpenAI's language models (such as GPT) and generate responses from the provided input.\n", " - **Base64 and PIL:** `base64` encodes images into a text-based format so they can be embedded in API requests, and `PIL` (the Pillow imaging library) is used for handling and manipulating image files.\n", " - **PyMuPDF (`fitz`):** This library is used for handling PDFs. It allows us to extract text from the uploaded PDF file.\n", "\n", "### 2. **Functions:**\n", "\n", " - **`extract_text_from_pdf(pdf_file):`**\n", " - This function takes a PDF file and extracts all the text from it.\n", " - It opens the PDF, processes each page, and collects the text.\n", " - If there's an error (e.g., the PDF is corrupted), it catches the exception and returns an error message.\n", "\n", " - **`generate_mcq_quiz(pdf_content, num_questions, openai_api_key, model_choice):`**\n", " - This function generates a multiple-choice quiz from the extracted text of a PDF.\n", " - It sends a request to OpenAI's model (via the `openai` library) to generate quiz questions based on the content of the PDF.\n", " - The function constructs a prompt containing the content and the requested number of questions, and asks OpenAI to return the questions in multiple-choice format.\n", " - It truncates the text sent to OpenAI to 8,000 characters so the prompt stays within the model's context limits.\n", " - OpenAI's response includes the questions, the answer options, and explanations.\n", "\n", " - **`generate_image_response(input_text, image, openai_api_key, model_choice):`**\n", " - This function handles image 
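input.\n",
"\n",
" A minimal sketch of the base64-encoding step, using only the standard library (the helper name `image_bytes_to_data_url` and the data-URL format are illustrative assumptions, not code taken from this notebook):\n",
"\n",
"```python\n",
"import base64\n",
"\n",
"# Illustrative helper (not from the notebook):\n",
"def image_bytes_to_data_url(image_bytes, mime='image/png'):\n",
"    # base64-encode the raw image bytes and wrap them in a data URL\n",
"    b64 = base64.b64encode(image_bytes).decode('ascii')\n",
"    return f'data:{mime};base64,{b64}'\n",
"\n",
"# A PIL image would first be written to an io.BytesIO buffer\n",
"# (e.g. image.save(buffer, format='PNG')) to obtain the raw bytes.\n",
"```\n",
"\n",
" - In more detail, the function takes the user's text and the uploaded image as 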
input.\n", " - When the user uploads an image, the image is converted to base64 (a text format) and sent to OpenAI with any accompanying text (from the user).\n", " - OpenAI generates a response based on both the image and the text provided by the user. It can describe the image or answer questions about it.\n", "\n", " - **`chatbot(input_text, image, pdf_file, openai_api_key, model_choice, pdf_content, num_quiz_questions, pdf_quiz_mode, history):`**\n", " - This is the main chatbot function. It checks which type of input (text, image, or PDF) is provided and performs the appropriate action.\n", " - If the input type is **PDF**, it either extracts the content and generates a quiz or displays a message prompting the user to upload a PDF.\n", " - If the input type is **Image**, it calls the `generate_image_response` function to analyze the image and respond.\n", " - It also maintains the conversation history (so that the chat doesn't lose context).\n", "\n", " - **`clear_history():`**\n", " - This function clears the conversation history, resetting the chat.\n", " \n", " - **`update_input_type(choice):`**\n", " - Based on the user's choice (Image or PDF Quiz), this function hides or shows certain UI elements.\n", " - For example, if the user chooses \"PDF Quiz\", it will show the PDF upload field and quiz settings but hide the image upload field.\n", "\n", "### 3. **Gradio Interface:**\n", "\n", " - **Custom CSS (`custom_css`):**\n", " - This custom style sheet controls the app's appearance. 
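This string is passed to Gradio so the styling applies across the whole app.\n",
"\n",
" An illustrative sketch of this pattern (the selectors and colors here are placeholders, not the notebook's actual values):\n",
"\n",
"```python\n",
"# Hypothetical CSS string in the style of the notebook's custom_css\n",
"custom_css = '''\n",
".gradio-container { background-color: #f0f4ff; }\n",
"button.primary { background: linear-gradient(90deg, #4f8cff, #7b5cff); color: white; }\n",
"'''\n",
"\n",
"# The string is applied when the layout is created:\n",
"# with gr.Blocks(css=custom_css) as demo:\n",
"#     ...\n",
"```\n",
"\n",
" - 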
It sets a background color, applies gradients to buttons, and customizes the chatbox and headers.\n", " \n", " - **Interface Setup (`create_interface`):**\n", " - `gr.Blocks` is used as a context manager to create the layout of the interface.\n", " - Inside the `gr.Blocks()` context, elements such as textboxes, images, buttons, and the chat history are defined:\n", " - **Textbox for OpenAI API key:** The user needs to provide their API key to use OpenAI's models.\n", " - **Radio buttons for Input Type:** The user selects either \"Image\" or \"PDF(QUIZ)\".\n", " - **Textbox for user questions:** For image-based input, the user can type questions.\n", " - **Image input field:** Users can upload an image for analysis.\n", " - **PDF input field:** If the user selects \"PDF(QUIZ)\", they can upload a PDF to generate quiz questions.\n", " - **Quiz settings:** Includes a slider to choose the number of questions and a checkbox to confirm that the user wants a quiz.\n", "\n", " - **Buttons:**\n", " - **Submit button:** Sends the user's input to the chatbot function (either an image, PDF, or question).\n", " - **Clear button:** Clears the chat history and resets the input fields.\n", "\n", " - **Chat history (`gr.Chatbot()`):**\n", " - This displays the conversation history between the user and the chatbot, so the user can see the entire interaction.\n", "\n", "### 4. **Logic for Handling User Inputs:**\n", "\n", " - When the user selects **\"Image\"** as the input type, the interface will display fields for the image and a question text box. The bot will generate a response based on the image and any question the user asks.\n", " - When the user selects **\"PDF(QUIZ)\"** as the input type, the interface will show a PDF upload field and allow the user to specify how many quiz questions they want to generate. The bot will extract text from the PDF and use OpenAI to create a quiz.\n", "\n", "### 5. 
**Launching the Interface:**\n", " - The `demo.launch()` function runs the web interface, making the chatbot available to users in their browser.\n", "\n", "### 6. **Overall Flow:**\n", " - The user chooses an input type (Image or PDF Quiz).\n", " - Depending on the choice, the relevant fields (text, image, PDF) appear.\n", " - The user submits their query or uploads their file, and the chatbot generates a response.\n", " - The conversation history is maintained for context and can be cleared by the user." ], "metadata": { "id": "UClpy9uc9Dkn" } } ] }