diff --git "a/docs.json" "b/docs.json" --- "a/docs.json" +++ "b/docs.json" @@ -1 +1 @@ -[{"text": "`gradio-rs` is a Gradio Client in Rust built by\n[@JacobLinCool](https://github.com/JacobLinCool). You can find the repo\n[here](https://github.com/JacobLinCool/gradio-rs), and more in-depth API\ndocumentation [here](https://docs.rs/gradio/latest/gradio/).\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/docs/third-party-clients/rust-client", "source_page_title": "Third Party Clients - Rust Client Docs"}, {"text": "Here is an example of using the BS-RoFormer model to separate vocals and\nbackground music from an audio file.\n\n \n \n use gradio::{PredictionInput, Client, ClientOptions};\n \n #[tokio::main]\n async fn main() {\n if std::env::args().len() < 2 {\n println!(\"Please provide an audio file path as an argument\");\n std::process::exit(1);\n }\n let args: Vec<String> = std::env::args().collect();\n let file_path = &args[1];\n println!(\"File: {}\", file_path);\n \n let client = Client::new(\"JacobLinCool/vocal-separation\", ClientOptions::default())\n .await\n .unwrap();\n \n let output = client\n .predict(\n \"/separate\",\n vec![\n PredictionInput::from_file(file_path),\n PredictionInput::from_value(\"BS-RoFormer\"),\n ],\n )\n .await\n .unwrap();\n println!(\n \"Vocals: {}\",\n output[0].clone().as_file().unwrap().url.unwrap()\n );\n println!(\n \"Background: {}\",\n output[1].clone().as_file().unwrap().url.unwrap()\n );\n }\n\nYou can find more examples [here](https://github.com/JacobLinCool/gradio-rs/tree/main/examples).\n\n", "heading1": "Usage", "source_page_url": "https://gradio.app/docs/third-party-clients/rust-client", "source_page_title": "Third Party Clients - Rust Client Docs"}, {"text": "cargo install gradio\n gr --help\n\nTake the [stabilityai/stable-diffusion-3-medium](https://huggingface.co/spaces/stabilityai/stable-diffusion-3-medium) HF Space as an example:\n\n \n \n > gr list stabilityai/stable-diffusion-3-medium\n API Spec for stabilityai/stable-diffusion-3-medium:\n /infer\n Parameters:\n prompt ( str ) \n negative_prompt ( str ) \n seed ( float ) numeric value between 0 and 2147483647\n randomize_seed ( bool ) \n width ( float ) numeric value between 256 and 1344\n height ( float ) numeric value between 256 and 1344\n guidance_scale ( float ) numeric value between 0.0 and 10.0\n num_inference_steps ( float ) numeric value between 1 and 50\n Returns:\n Result ( filepath ) \n Seed ( float ) numeric value between 0 and 2147483647\n \n > gr run stabilityai/stable-diffusion-3-medium infer 'Rusty text "AI & CLI" on the snow.' 
'' 0 true 1024 1024 5 28\n Result: https://stabilityai-stable-diffusion-3-medium.hf.space/file=/tmp/gradio/5735ca7775e05f8d56d929d8f57b099a675c0a01/image.webp\n Seed: 486085626\n\nFor file input, simply use the file path as the argument:\n\n \n \n gr run hf-audio/whisper-large-v3 predict 'test-audio.wav' 'transcribe'\n output: \" Did you know you can try the coolest model on your command line?\"\n\n", "heading1": "Command Line Interface", "source_page_url": "https://gradio.app/docs/third-party-clients/rust-client", "source_page_title": "Third Party Clients - Rust Client Docs"}, {"text": "Gradio applications support programmatic requests from many environments:\n\n * The [Python Client](/docs/python-client): `gradio-client` allows you to make requests from Python environments.\n * The [JavaScript Client](/docs/js-client): `@gradio/client` allows you to make requests in TypeScript from the browser or server-side.\n * You can also query Gradio apps [directly from cURL](/guides/querying-gradio-apps-with-curl).\n\n", "heading1": "Gradio Clients", "source_page_url": "https://gradio.app/docs/third-party-clients/introduction", "source_page_title": "Third Party Clients - Introduction Docs"}, {"text": "We also encourage the development and use of third party clients built by\nthe community:\n\n * [Rust Client](/docs/third-party-clients/rust-client): `gradio-rs` built by [@JacobLinCool](https://github.com/JacobLinCool) allows you to make requests in Rust.\n * [PowerShell Client](https://github.com/rrg92/powershai): `powershai` built by [@rrg92](https://github.com/rrg92) allows you to make requests to Gradio apps directly from PowerShell. See [here for documentation](https://github.com/rrg92/powershai/blob/main/docs/en-US/providers/HUGGING-FACE.md)\n\n", "heading1": "Community Clients", "source_page_url": "https://gradio.app/docs/third-party-clients/introduction", "source_page_title": "Third Party Clients - Introduction Docs"}, {"text": "The main Client class for the Python client. This class is used to connect\nto a remote Gradio app and call its API endpoints. \n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/python-client/client", "source_page_title": "Python Client - Client Docs"}, {"text": "from gradio_client import Client\n \n client = Client(\"abidlabs/whisper-large-v2\") # connecting to a Hugging Face Space\n client.predict(\"test.mp4\", api_name=\"/predict\")\n >> What a nice recording! # returns the result of the remote API call\n \n client = Client(\"https://bec81a83-5b5c-471e.gradio.live\") # connecting to a temporary Gradio share URL\n job = client.submit(\"hello\", api_name=\"/predict\") # runs the prediction in a background thread\n job.result()\n >> 49 # returns the result of the remote API call (blocking call)\n\n", "heading1": "Example usage", "source_page_url": "https://gradio.app/docs/python-client/client", "source_page_title": "Python Client - Client Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n src: str\n\neither the name of the Hugging Face Space to load (e.g. \"abidlabs/whisper-large-v2\") or the full URL (including \"http\" or \"https\") of the hosted Gradio\napp to load (e.g. \"http://mydomain.com/app\" or\n\"https://bec81a83-5b5c-471e.gradio.live/\").\n\n\n \n \n hf_token: str | None\n\ndefault `= None`\n\noptional Hugging Face token to use to access private Spaces. By default, the\nlocally saved token is used if there is one. 
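For example, a private Space can be accessed like this (a minimal sketch; the Space ID matches the private-Space example later in these docs, and the token is assumed to be available in the `HF_TOKEN` environment variable):

```python
import os
from gradio_client import Client

# A literal "hf_..." string also works; here the token comes from the environment.
client = Client("abidlabs/my-private-space", hf_token=os.environ.get("HF_TOKEN"))
```
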
Find your tokens here:\nhttps://huggingface.co/settings/tokens.\n\n\n \n \n max_workers: int\n\ndefault `= 40`\n\nmaximum number of thread workers that can be used to make requests to the\nremote Gradio app simultaneously.\n\n\n \n \n verbose: bool\n\ndefault `= True`\n\nwhether the client should print statements to the console.\n\n\n \n \n auth: tuple[str, str] | None\n\ndefault `= None`\n\n\n \n \n httpx_kwargs: dict[str, Any] | None\n\ndefault `= None`\n\nadditional keyword arguments to pass to `httpx.Client`, `httpx.stream`,\n`httpx.get` and `httpx.post`. This can be used to set timeouts, proxies, http\nauth, etc.\n\n\n \n \n headers: dict[str, str] | None\n\ndefault `= None`\n\nadditional headers to send to the remote Gradio app on every request. By\ndefault only the HF authorization and user-agent headers are sent. This\nparameter will override the default headers if they have the same keys.\n\n\n \n \n download_files: str | Path | Literal[False]\n\ndefault `= \"/tmp/gradio\"`\n\ndirectory where the client should download output files on the local machine\nfrom the remote API. By default, uses the value of the GRADIO_TEMP_DIR\nenvironment variable which, if not set by the user, is a temporary directory\non your machine. If False, the client does not download files and returns a\nFileData dataclass object with the filepath on the remote machine instead.\n\n\n \n \n ssl_verify: bool\n\ndefault `= True`\n\nif False, skips certificate validation which allows the client to connect to\nGradio apps that are using self-signed", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/python-client/client", "source_page_title": "Python Client - Client Docs"}, {"text": "h on the remote machine instead.\n\n\n \n \n ssl_verify: bool\n\ndefault `= True`\n\nif False, skips certificate validation which allows the client to connect to\nGradio apps that are using self-signed certificates.\n\n\n \n \n analytics_enabled: bool\n\ndefault `= True`\n\nWhether to allow basic telemetry. If None, will use GRADIO_ANALYTICS_ENABLED\nenvironment variable or default to True.\n\n", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/python-client/client", "source_page_title": "Python Client - Client Docs"}, {"text": "Description\n\nEvent listeners allow you to respond to user interactions with the UI\ncomponents you've defined in a Gradio Blocks app. When a user interacts with\nan element, such as changing a slider value or uploading an image, a function\nis called.\n\nSupported Event Listeners\n\nThe Client component supports the following event listeners. Each event\nlistener takes the same parameters, which are listed in the Event Parameters\ntable below.\n\nListener | Description \n---|--- \n`Client.predict(fn, \u00b7\u00b7\u00b7)` | Calls the Gradio API and returns the result (this is a blocking call). Arguments can be provided as positional arguments or as keyword arguments (latter is recommended).
\n`Client.submit(fn, \u00b7\u00b7\u00b7)` | Creates and returns a Job object which calls the Gradio API in a background thread. The job can be used to retrieve the status and result of the remote API call. Arguments can be provided as positional arguments or as keyword arguments (latter is recommended).
\n`Client.view_api(fn, \u00b7\u00b7\u00b7)` | Prints the usage info for the API. If the Gradio app has multiple API endpoints, the usage info for each endpoint will be printed separately. If return_format=\"dict\" the info is returned in dictionary format, as shown in the example below.
\n`Client.duplicate(fn, \u00b7\u00b7\u00b7)` | Duplicates a Hugging Face Space under your account and returns a Client object for the new Space. No duplication is created if the Space already exists in your account (to override this, provide a new name for the new Space using `to_id`). To use this method, you must provide an `hf_token` or be logged in via the Hugging Face Hub CLI.
The new Space will be private by default and use the same hardware as the original Space. This can be changed by using the `private` and `hardware` parameters. For hardware upgrades (beyond the basic CPU tier), you may be required to provide billing information on Hugging Face: https://huggingface.co/settings/billing
\n`Client.deploy_discord(fn, \u00b7\u00b7\u00b7)` | Deploy", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/python-client/client", "source_page_title": "Python Client - Client Docs"}, {"text": "dware upgrades (beyond the basic CPU tier), you may be required to provide billing information on Hugging Face: https://huggingface.co/settings/billing
\n`Client.deploy_discord(fn, \u00b7\u00b7\u00b7)` | Deploy the upstream app as a Discord bot. Currently only supports gr.ChatInterface. \n \nEvent Parameters\n\nParameters \u25bc\n\n\n \n \n args: \n\nThe positional arguments to pass to the remote API endpoint. The order of the\narguments must match the order of the inputs in the Gradio app.\n\n\n \n \n api_name: str | None\n\ndefault `= None`\n\nThe name of the API endpoint to call starting with a leading slash, e.g.\n\"/predict\". Does not need to be provided if the Gradio app has only one named\nAPI endpoint.\n\n\n \n \n fn_index: int | None\n\ndefault `= None`\n\nAs an alternative to api_name, this parameter takes the index of the API\nendpoint to call, e.g. 0. Both api_name and fn_index can be provided, but if\nthey conflict, api_name will take precedence.\n\n\n \n \n headers: dict[str, str] | None\n\ndefault `= None`\n\nAdditional headers to send to the remote Gradio app on this request. This\nparameter will override the headers provided in the Client constructor if\nthey have the same keys.\n\n\n \n \n kwargs: \n\nThe keyword arguments to pass to the remote API endpoint.\n\n", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/python-client/client", "source_page_title": "Python Client - Client Docs"}, {"text": "**Stream From a Gradio app in 5 lines**\n\n \n\nUse the `submit` method to get a job you can iterate over.\n\n \n\nIn Python:\n\n \n \n from gradio_client import Client\n \n client = Client(\"gradio/llm_stream\")\n \n for result in client.submit(\"What's the best UI framework in Python?\"):\n print(result)\n\n \n\nIn TypeScript:\n\n \n \n import { Client } from \"@gradio/client\";\n \n const client = await Client.connect(\"gradio/llm_stream\")\n const job = client.submit(\"/predict\", {\"text\": \"What's the best UI framework in Python?\"})\n \n for await (const msg of job) console.log(msg.data)\n\n \n\n**Use the same keyword arguments as the app**\n\n \nIn the examples below, the upstream app has a function with parameters called\n`message`, `system_prompt`, and `tokens`. 
We can see that the client `predict`\ncall uses the same arguments.\n\nIn Python:\n\n \n \n from gradio_client import Client\n \n client = Client(\"http://127.0.0.1:7860/\")\n result = client.predict(\n \t\tmessage=\"Hello!!\",\n \t\tsystem_prompt=\"You are helpful AI.\",\n \t\ttokens=10,\n \t\tapi_name=\"/chat\"\n )\n print(result)\n\nIn TypeScript:\n\n \n \n import { Client } from \"@gradio/client\";\n \n const client = await Client.connect(\"http://127.0.0.1:7860/\");\n const result = await client.predict(\"/chat\", { \t\t\n \t\tmessage: \"Hello!!\", \t\t\n \t\tsystem_prompt: \"You are helpful AI.\", \t\t\n \t\ttokens: 10, \n });\n \n console.log(result.data);\n\n \n\n**Better Error Messages**\n\n \nIf something goes wrong in the upstream app, the client will raise the same\nexception as the app, provided that `show_error=True` in the original app's\n`launch()` function, or it's a `gr.Error` exception.\n\n", "heading1": "Ergonomic API \ud83d\udc86", "source_page_url": "https://gradio.app/docs/python-client/version-1-release", "source_page_title": "Python Client - Version 1 Release Docs"}, {"text": "Anything you can do in the UI, you can do with the client:\n\n * \ud83d\udd10 Authentication\n * \ud83d\uded1 Job Cancelling\n * \u2139\ufe0f Access Queue Position and API\n * \ud83d\udcd5 View the API information\n\n \nHere's an example showing how to display the queue position of a pending job:\n\n \n \n from gradio_client import Client\n \n client = Client(\"gradio/diffusion_model\")\n \n job = client.submit(\"A cute cat\")\n while not job.done():\n status = job.status()\n print(f\"Currently in position {status.rank} out of {status.queue_size}\")\n\n", "heading1": "Transparent Design \ud83e\ude9f", "source_page_url": "https://gradio.app/docs/python-client/version-1-release", "source_page_title": "Python Client - Version 1 Release Docs"}, {"text": "The client can run from pretty much any Python and JavaScript environment\n(Node, Deno, the browser, Service Workers). \nHere's an example using the client from a Flask server using gevent:\n\n \n \n from gevent import monkey\n monkey.patch_all()\n \n from gradio_client import Client\n from flask import Flask, send_file\n import time\n \n app = Flask(__name__)\n \n imageclient = Client(\"gradio/diffusion_model\")\n \n @app.route(\"/gen\")\n def gen():\n result = imageclient.predict(\n \"A cute cat\",\n api_name=\"/predict\"\n )\n return send_file(result)\n \n if __name__ == \"__main__\":\n app.run(host=\"0.0.0.0\", port=5000)\n\n", "heading1": "Portable Design \u26fa\ufe0f", "source_page_url": "https://gradio.app/docs/python-client/version-1-release", "source_page_title": "Python Client - Version 1 Release Docs"}, {"text": "Changes\n\n \n\n**Python**\n\n * The `serialize` argument of the `Client` class was removed and has no effect.\n * The `upload_files` argument of the `Client` was removed.\n * All filepaths must be wrapped in the `handle_file` method. For example, `caption = client.predict(handle_file('./dog.jpg'))`.\n * The `output_dir` argument was removed. It is now specified in the `download_files` argument.\n\n \n\n**JavaScript**\n\n \nThe client has been redesigned entirely. It was refactored from a function\ninto a class. 
An instance can now be constructed by awaiting the `connect`\nmethod.\n\n \n \n const app = await Client.connect(\"gradio/whisper\")\n\nThe app variable has the same methods as the Python class (`submit`,\n`predict`, `view_api`, `duplicate`).\n\n", "heading1": "v1.0 Migration Guide and Breaking", "source_page_url": "https://gradio.app/docs/python-client/version-1-release", "source_page_title": "Python Client - Version 1 Release Docs"}, {"text": "ZeroGPU\n\nZeroGPU Spaces are rate-limited to ensure that a single user does not hog all\nof the available GPUs. The limit is controlled by a special token that the\nHugging Face Hub infrastructure adds to all incoming requests to Spaces. This\ntoken is a request header called `X-IP-Token` and its value changes depending\non the user who makes a request to the ZeroGPU Space.\n\n \n\nLet\u2019s say you want to create a Space (Space A) that uses a ZeroGPU Space\n(Space B) programmatically. Simply calling Space B from Space A with the\nPython client will quickly exhaust your rate limit, as all the requests to the\nZeroGPU Space will have the same token. So in order to avoid this, we need to\nextract the token of the user using Space A before we call Space B\nprogrammatically.\n\n \n\nHow to do this will be explained in the following section.\n\n", "heading1": "Explaining Rate Limits for", "source_page_url": "https://gradio.app/docs/python-client/using-zero-gpu-spaces", "source_page_title": "Python Client - Using Zero Gpu Spaces Docs"}, {"text": "When a user presses Enter in the textbox, we will extract their token from the\n`X-IP-Token` header of the incoming request. We will use this header when\nconstructing the Gradio client. The following hypothetical text-to-image\napplication shows how this is done.\n\n \n\n \n \n import gradio as gr\n from gradio_client import Client\n \n def text_to_image(prompt, request: gr.Request):\n x_ip_token = request.headers['x-ip-token']\n client = Client(\"hysts/SDXL\", headers={\"x-ip-token\": x_ip_token})\n img = client.predict(prompt, api_name=\"/predict\")\n return img\n \n \n with gr.Blocks() as demo:\n image = gr.Image()\n prompt = gr.Textbox(max_lines=1)\n prompt.submit(text_to_image, [prompt], [image])\n \n demo.launch()\n\n", "heading1": "Avoiding Rate Limits", "source_page_url": "https://gradio.app/docs/python-client/using-zero-gpu-spaces", "source_page_title": "Python Client - Using Zero Gpu Spaces Docs"}, {"text": "If you already have a recent version of `gradio`, then the `gradio_client` is\nincluded as a dependency. But note that this documentation reflects the latest\nversion of the `gradio_client`, so upgrade if you\u2019re not sure!\n\nThe lightweight `gradio_client` package can be installed from pip (or pip3)\nand is tested to work with **Python versions 3.9 or higher**:\n\n \n \n $ pip install --upgrade gradio_client\n\n", "heading1": "Installation", "source_page_url": "https://gradio.app/docs/python-client/introduction", "source_page_title": "Python Client - Introduction Docs"}, {"text": "Spaces\n\nStart by instantiating a `Client` object and connecting it to a\nGradio app that is running on Hugging Face Spaces.\n\n \n \n from gradio_client import Client\n \n client = Client(\"abidlabs/en2fr\") # a Space that translates from English to French\n\nYou can also connect to private Spaces by passing in your HF token with the\n`hf_token` parameter. 
You can get your HF token here:\nhttps://huggingface.co/settings/tokens\n\n\n \n \n from gradio_client import Client\n \n client = Client(\"abidlabs/my-private-space\", hf_token=\"...\")\n\n", "heading1": "Connecting to a Gradio App on Hugging Face", "source_page_url": "https://gradio.app/docs/python-client/introduction", "source_page_title": "Python Client - Introduction Docs"}, {"text": "use\n\nWhile you can use any public Space as an API, you may get rate limited by\nHugging Face if you make too many requests. For unlimited usage of a Space,\nsimply duplicate the Space to create a private Space, and then use it to make\nas many requests as you\u2019d like!\n\nThe `gradio_client` includes a class method: `Client.duplicate()` to make this\nprocess simple (you\u2019ll need to pass in your [Hugging Face\ntoken](https://huggingface.co/settings/tokens) or be logged in using the\nHugging Face CLI):\n\n \n \n import os\n from gradio_client import Client, file\n \n HF_TOKEN = os.environ.get(\"HF_TOKEN\")\n \n client = Client.duplicate(\"abidlabs/whisper\", hf_token=HF_TOKEN)\n client.predict(file(\"audio_sample.wav\"))\n \n >> \"This is a test of the whisper speech recognition model.\"\n\nIf you have previously duplicated a Space, re-running `duplicate()` will _not_\ncreate a new Space. Instead, the Client will attach to the previously-created\nSpace. So it is safe to re-run the `Client.duplicate()` method multiple times.\n\n**Note:** if the original Space uses GPUs, your private Space will as well,\nand your Hugging Face account will get billed based on the price of the GPU.\nTo minimize charges, your Space will automatically go to sleep after 1 hour of\ninactivity. You can also set the hardware using the `hardware` parameter of\n`duplicate()`.\n\n", "heading1": "Duplicating a Space for private", "source_page_url": "https://gradio.app/docs/python-client/introduction", "source_page_title": "Python Client - Introduction Docs"}, {"text": "app\n\nIf your app is running somewhere else, just provide the full URL instead,\nincluding the \u201chttp://\u201d or \u201chttps://\u201d. Here\u2019s an example of making predictions\nto a Gradio app that is running on a share URL:\n\n \n \n from gradio_client import Client\n \n client = Client(\"https://bec81a83-5b5c-471e.gradio.live\")\n\n", "heading1": "Connecting a general Gradio", "source_page_url": "https://gradio.app/docs/python-client/introduction", "source_page_title": "Python Client - Introduction Docs"}, {"text": "Once you have connected to a Gradio app, you can view the APIs that are\navailable to you by calling the `Client.view_api()` method. For the Whisper\nSpace, we see the following:\n\n \n \n Client.predict() Usage Info\n ---------------------------\n Named API endpoints: 1\n \n - predict(audio, api_name=\"/predict\") -> output\n Parameters:\n - [Audio] audio: filepath (required) \n Returns:\n - [Textbox] output: str \n\nWe see that we have 1 API endpoint in this Space, and the printout shows us how to use the\nAPI endpoint to make a prediction: we should call the `.predict()` method\n(which we will explore below), providing a parameter `audio` of type\n`str`, which is a `filepath or URL`.\n\nWe should also provide the `api_name='/predict'` argument to the `predict()`\nmethod. 
Although this isn\u2019t necessary if a Gradio app has only 1 named\nendpoint, it does allow us to call different endpoints in a single app if they\nare available.\n\n", "heading1": "Inspecting the API endpoints", "source_page_url": "https://gradio.app/docs/python-client/introduction", "source_page_title": "Python Client - Introduction Docs"}, {"text": "As an alternative to running the `.view_api()` method, you can click on the\n\u201cUse via API\u201d link in the footer of the Gradio app, which shows us the same\ninformation, along with example usage.\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api.png)\n\nThe View API page also includes an \u201cAPI Recorder\u201d that lets you interact with\nthe Gradio UI normally and converts your interactions into the corresponding\ncode to run with the Python Client.\n\n", "heading1": "The \u201cView API\u201d Page", "source_page_url": "https://gradio.app/docs/python-client/introduction", "source_page_title": "Python Client - Introduction Docs"}, {"text": "The simplest way to make a prediction is to call the `.predict()`\nfunction with the appropriate arguments:\n\n \n \n from gradio_client import Client\n \n client = Client(\"abidlabs/en2fr\")\n client.predict(\"Hello\", api_name='/predict')\n \n >> Bonjour\n\nIf there are multiple parameters, then you should pass them as separate\narguments to `.predict()`, like this:\n\n \n \n from gradio_client import Client\n \n client = Client(\"gradio/calculator\")\n client.predict(4, \"add\", 5)\n \n >> 9.0\n\nIt is recommended to provide keyword arguments instead of positional\narguments:\n\n \n \n from gradio_client import Client\n \n client = Client(\"gradio/calculator\")\n client.predict(num1=4, operation=\"add\", num2=5)\n \n >> 9.0\n\nThis allows you to take advantage of default arguments. For example, this\nSpace includes the default value for the Slider component so you do not need\nto provide it when accessing it with the client.\n\n \n \n from gradio_client import Client\n \n client = Client(\"abidlabs/image_generator\")\n client.predict(text=\"an astronaut riding a camel\")\n\nThe default value is the initial value of the corresponding Gradio component.\nIf the component does not have an initial value, but if the corresponding\nargument in the predict function has a default value of `None`, then that\nparameter is also optional in the client. Of course, if you\u2019d like to override\nit, you can include it as well:\n\n \n \n from gradio_client import Client\n \n client = Client(\"abidlabs/image_generator\")\n client.predict(text=\"an astronaut riding a camel\", steps=25)\n\nFor providing files or URLs as inputs, you should pass in the filepath or URL\nto the file enclosed within `gradio_client.file()`. 
This takes care of\nuploading the file to the Gradio server and ensures that the file is\npreprocessed correctly:\n\n \n \n from gradio_client import Client, file\n \n client = Client(\"abidlabs/whisper\")\n client.predict(\n ", "heading1": "Making a prediction", "source_page_url": "https://gradio.app/docs/python-client/introduction", "source_page_title": "Python Client - Introduction Docs"}, {"text": " to the Gradio server and ensures that the file is\npreprocessed correctly:\n\n \n \n from gradio_client import Client, file\n \n client = Client(\"abidlabs/whisper\")\n client.predict(\n audio=file(\"https://audio-samples.github.io/samples/mp3/blizzard_unconditional/sample-0.mp3\")\n )\n \n >> \"My thought I have nobody by a beauty and will as you poured. Mr. Rochester is serve in that so don't find simpus, and devoted abode, to at might in a r\u2014\"\n\n", "heading1": "Making a prediction", "source_page_url": "https://gradio.app/docs/python-client/introduction", "source_page_title": "Python Client - Introduction Docs"}, {"text": "We should note that `.predict()` is a _blocking_ operation as it waits for the\noperation to complete before returning the prediction.\n\nIn many cases, you may be better off letting the job run in the background\nuntil you need the results of the prediction. You can do this by creating a\n`Job` instance using the `.submit()` method, and then later calling\n`.result()` on the job to get the result. For example:\n\n \n \n from gradio_client import Client\n \n client = Client(src=\"abidlabs/en2fr\")\n job = client.submit(\"Hello\", api_name=\"/predict\") # This is not blocking\n \n # Do something else\n \n job.result() # This is blocking\n \n >> Bonjour\n\n", "heading1": "Running jobs asynchronously", "source_page_url": "https://gradio.app/docs/python-client/introduction", "source_page_title": "Python Client - Introduction Docs"}, {"text": "Alternatively, one can add one or more callbacks to perform actions after the\njob has completed running, like this:\n\n \n \n from gradio_client import Client\n \n def print_result(x):\n print(f\"The translated result is: {x}\")\n \n client = Client(src=\"abidlabs/en2fr\")\n \n job = client.submit(\"Hello\", api_name=\"/predict\", result_callbacks=[print_result])\n \n # Do something else\n \n >> The translated result is: Bonjour\n \n\n", "heading1": "Adding callbacks", "source_page_url": "https://gradio.app/docs/python-client/introduction", "source_page_title": "Python Client - Introduction Docs"}, {"text": "The `Job` object also allows you to get the status of the running job by\ncalling the `.status()` method. This returns a `StatusUpdate` object with the\nfollowing attributes: `code` (the status code, one of a set of defined strings\nrepresenting the status. 
See the `utils.Status` class), `rank` (the current\nposition of this job in the queue), `queue_size` (the total queue size), `eta`\n(estimated time this job will complete), `success` (a boolean representing\nwhether the job completed successfully), and `time` (the time that the status\nwas generated).\n\n \n \n from gradio_client import Client\n \n client = Client(src=\"gradio/calculator\")\n job = client.submit(5, \"add\", 4, api_name=\"/predict\")\n job.status()\n \n >> <Status.STARTING: 'STARTING'>\n\n_Note_ : The `Job` class also has a `.done()` instance method which returns a\nboolean indicating whether the job has completed.\n\n", "heading1": "Status", "source_page_url": "https://gradio.app/docs/python-client/introduction", "source_page_title": "Python Client - Introduction Docs"}, {"text": "The `Job` class also has a `.cancel()` instance method that cancels jobs that\nhave been queued but not started. For example, if you run:\n\n \n \n client = Client(\"abidlabs/whisper\")\n job1 = client.submit(file(\"audio_sample1.wav\"))\n job2 = client.submit(file(\"audio_sample2.wav\"))\n job1.cancel() # will return False, assuming the job has started\n job2.cancel() # will return True, indicating that the job has been canceled\n\nIf the first job has started processing, then it will not be canceled. If the\nsecond job has not yet started, it will be successfully canceled and removed\nfrom the queue.\n\n", "heading1": "Cancelling Jobs", "source_page_url": "https://gradio.app/docs/python-client/introduction", "source_page_title": "Python Client - Introduction Docs"}, {"text": "Some Gradio API endpoints do not return a single value; rather, they return a\nseries of values. You can get the series of values that have been returned at\nany time from such a generator endpoint by running `job.outputs()`:\n\n \n \n import time\n from gradio_client import Client\n \n client = Client(src=\"gradio/count_generator\")\n job = client.submit(3, api_name=\"/count\")\n while not job.done():\n time.sleep(0.1)\n job.outputs()\n \n >> ['0', '1', '2']\n\nNote that running `job.result()` on a generator endpoint only gives you the\n_first_ value returned by the endpoint.\n\nThe `Job` object is also iterable, which means you can use it to display the\nresults of a generator function as they are returned from the endpoint. Here\u2019s\nthe equivalent example using the `Job` as a generator:\n\n \n \n from gradio_client import Client\n \n client = Client(src=\"gradio/count_generator\")\n job = client.submit(3, api_name=\"/count\")\n \n for o in job:\n print(o)\n \n >> 0\n >> 1\n >> 2\n\nYou can also cancel jobs that have iterative outputs, in which case the\njob will finish as soon as the current iteration finishes running.\n\n \n \n from gradio_client import Client\n import time\n \n client = Client(\"abidlabs/test-yield\")\n job = client.submit(\"abcdef\")\n time.sleep(3)\n job.cancel() # job cancels after 2 iterations\n\n", "heading1": "Generator Endpoints", "source_page_url": "https://gradio.app/docs/python-client/introduction", "source_page_title": "Python Client - Introduction Docs"}, {"text": "Gradio demos can include [session state](https://www.gradio.app/guides/state-in-blocks), which provides a way for demos to persist information from user\ninteractions within a page session.\n\nFor example, consider the following demo, which maintains a list of words that\na user has submitted in a `gr.State` component. 
When a user submits a new\nword, it is added to the state, and the number of previous occurrences of that\nword is displayed:\n\n \n \n import gradio as gr\n \n def count(word, list_of_words):\n return list_of_words.count(word), list_of_words + [word]\n \n with gr.Blocks() as demo:\n words = gr.State([])\n textbox = gr.Textbox()\n number = gr.Number()\n textbox.submit(count, inputs=[textbox, words], outputs=[number, words])\n \n demo.launch()\n\nIf you were to connect to this Gradio app using the Python Client, you would\nnotice that the API information only shows a single input and output:\n\n \n \n Client.predict() Usage Info\n ---------------------------\n Named API endpoints: 1\n \n - predict(word, api_name=\"/count\") -> value_31\n Parameters:\n - [Textbox] word: str (required) \n Returns:\n - [Number] value_31: float \n\nThat is because the Python client handles state automatically for you \u2014 as you\nmake a series of requests, the returned state from one request is stored\ninternally and automatically supplied for the subsequent request. If you\u2019d\nlike to reset the state, you can do that by calling `Client.reset_session()`.\n\n", "heading1": "Demos with Session State", "source_page_url": "https://gradio.app/docs/python-client/introduction", "source_page_title": "Python Client - Introduction Docs"}, {"text": "A Job is a wrapper over the Future class that represents a prediction call\nthat has been submitted by the Gradio client. This class is not meant to be\ninstantiated directly, but rather is created by the Client.submit() method. \nA Job object includes methods to get the status of the prediction call, as\nwell as to get the outputs of the prediction call. Job objects are also iterable,\nand can be used in a loop to get the outputs of prediction calls as they\nbecome available for generator endpoints.\n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/python-client/job", "source_page_title": "Python Client - Job Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n future: Future\n\nThe future object that represents the prediction call, created by the\nClient.submit() method\n\n\n \n \n communicator: Communicator | None\n\ndefault `= None`\n\nThe communicator object that is used to communicate between the client and the\nbackground thread running the job\n\n\n \n \n verbose: bool\n\ndefault `= True`\n\nWhether to print any status-related messages to the console\n\n\n \n \n space_id: str | None\n\ndefault `= None`\n\nThe space ID corresponding to the Client object that created this Job object\n\n", "heading1": "Initialization", "source_page_url": "https://gradio.app/docs/python-client/job", "source_page_title": "Python Client - Job Docs"}, {"text": "Description\n\nEvent listeners allow you to respond to user interactions with the UI\ncomponents you've defined in a Gradio Blocks app. When a user interacts with\nan element, such as changing a slider value or uploading an image, a function\nis called.\n\nSupported Event Listeners\n\nThe Job component supports the following event listeners. Each event listener\ntakes the same parameters, which are listed in the Event Parameters table\nbelow.\n\nListener | Description \n---|--- \n`Job.result(fn, \u00b7\u00b7\u00b7)` | Return the result of the call that the future represents. Raises CancelledError: If the future was cancelled, TimeoutError: If the future didn't finish executing before the given timeout, and Exception: If the call raised an exception, then that exception will be raised.
\n`Job.outputs(fn, \u00b7\u00b7\u00b7)` | Returns a list containing the latest outputs from the Job.
If the endpoint has multiple output components, the list will contain a tuple of results. Otherwise, it will contain the results without storing them in tuples.
For endpoints that are queued, this list will contain the final job output even if that endpoint does not use a generator function.
\n`Job.status(fn, \u00b7\u00b7\u00b7)` | Returns the latest status update from the Job in the form of a StatusUpdate object, which contains the following fields: code, rank, queue_size, success, time, eta, and progress_data.
progress_data is a list of updates emitted by the gr.Progress() tracker of the event handler. Each element of the list has the following fields: index, length, unit, progress, desc. If the event handler does not have a gr.Progress() tracker, the progress_data field will be None.
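As an illustrative sketch (not part of the generated reference above), polling a job and reading these `StatusUpdate` fields might look like this, reusing the `gradio/count_generator` Space from the examples elsewhere in these docs:

```python
import time
from gradio_client import Client

client = Client("gradio/count_generator")
job = client.submit(3, api_name="/count")

while not job.done():
    update = job.status()  # StatusUpdate object
    print(update.code, update.rank, update.queue_size, update.eta)
    if update.progress_data:  # None unless the event handler uses gr.Progress()
        print(update.progress_data)
    time.sleep(0.5)
```
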
\n \nEvent Parameters\n\nParameters \u25bc\n\n\n \n \n timeout: float | None\n\ndefault `= None`\n\nThe number of seconds to wait for the result if the future isn't done. If\nNone, then there is no limit on the wait time.\n\n", "heading1": "Event Listeners", "source_page_url": "https://gradio.app/docs/python-client/job", "source_page_title": "Python Client - Job Docs"}, {"text": "A TabbedInterface is created by providing a list of Interfaces or Blocks,\neach of which gets rendered in a separate tab. Only the components from the\nInterface/Blocks will be rendered in the tab. Certain high-level attributes of\nthe Blocks (e.g. custom `css`, `js`, and `head` attributes) will not be\nloaded. \n\n", "heading1": "Description", "source_page_url": "https://gradio.app/docs/gradio/tabbedinterface", "source_page_title": "Gradio - Tabbedinterface Docs"}, {"text": "Parameters \u25bc\n\n\n \n \n interface_list: list[Blocks]\n\nA list of Interfaces (or Blocks) to be rendered in the tabs.\n\n\n \n \n tab_names: list[str] | None\n\ndefault `= None`\n\nA list of tab names. If None, the tab names will be \"Tab 1\", \"Tab 2\", etc.\n\n\n \n \n title: str | None\n\ndefault `= None`\n\nThe tab title to display when this demo is opened in a browser window.\n\n\n \n \n theme: Theme | str | None\n\ndefault `= None`\n\nA Theme object or a string representing a theme. If a string, will look for a\nbuilt-in theme with that name (e.g. \"soft\" or \"default\"), or will attempt to\nload a theme from the Hugging Face Hub (e.g. \"gradio/monochrome\"). If None,\nwill use the Default theme.\n\n\n \n \n analytics_enabled: bool | None\n\ndefault `= None`\n\nWhether to allow basic telemetry. If None, will use GRADIO_ANALYTICS_ENABLED\nenvironment variable or default to True.\n\n\n \n \n css: str | None\n\ndefault `= None`\n\nCustom css as a string or path to a css file. This css will be included in the\ndemo webpage.\n\n\n \n \n js: str | Literal[True] | None\n\ndefault `= None`\n\nCustom js as a string or path to a js file. The custom js should be in the form\nof a single js function. This function will automatically be executed when the\npage loads. For more flexibility, use the head parameter to insert js inside the <head> element.\n\n1. Import the gradio JS library that corresponds to the version of Gradio in your app by adding this script to your site:\n\n```html\n<script type=\"module\" src=\"https://gradio.s3-us-west-2.amazonaws.com/{GRADIO_VERSION}/gradio.js\"></script>\n```\n\n2. Add\n\n```html\n<gradio-app src=\"https://$your_space_host.hf.space\"></gradio-app>\n```\n\nelement where you want to place the app. Set the `src=` attribute to your Space's embed URL, which you can find in the \"Embed this Space\" button. For example:\n\n```html\n<gradio-app src=\"https://gradio-echocardiogram-segmentation.hf.space\"></gradio-app>\n```\n\n\n\nYou can see examples of h", "heading1": "Embedding Hosted Spaces", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "You can see examples of how web components look on the Gradio landing page.\n\nYou can also customize the appearance and behavior of your web component with attributes that you pass into the `<gradio-app>` tag:\n\n- `src`: as we've seen, the `src` attribute links to the URL of the hosted Gradio demo that you would like to embed\n- `space`: an optional shorthand if your Gradio demo is hosted on Hugging Face Spaces. Accepts a `username/space_name` instead of a full URL. Example: `gradio/Echocardiogram-Segmentation`. 
If this attribute is provided, then `src` does not need to be provided.\n- `control_page_title`: a boolean designating whether the html title of the page should be set to the title of the Gradio app (by default `\"false\"`)\n- `initial_height`: the initial height of the web component while it is loading the Gradio app (by default `\"300px\"`). Note that the final height is set based on the size of the Gradio app.\n- `container`: whether to show the border frame and information about where the Space is hosted (by default `\"true\"`)\n- `info`: whether to show just the information about where the Space is hosted underneath the embedded app (by default `\"true\"`)\n- `autoscroll`: whether to autoscroll to the output when prediction has finished (by default `\"false\"`)\n- `eager`: whether to load the Gradio app as soon as the page loads (by default `\"false\"`)\n- `theme_mode`: whether to use the `dark`, `light`, or default `system` theme mode (by default `\"system\"`)\n- `render`: an event that is triggered once the embedded space has finished rendering.\n\nHere's an example of how to use these attributes to create a Gradio app that does not lazy load and has an initial height of 0px.\n\n```html\n<gradio-app space=\"gradio/Echocardiogram-Segmentation\" eager=\"true\" initial_height=\"0px\"></gradio-app>\n```\n\nHere's another example of how to use the `render` event. An event listener is used to capture the `render` event and will call the `handleLoadComplete()` function once rendering is complete.\n\n```html\n<gradio-app space=\"gradio/Echocardiogram-Segmentation\"></gradio-app>\n<script>\n function handleLoadComplete() {\n console.log(\"Embedded space has finished rendering\");\n }\n const gradioApp = document.querySelector(\"gradio-app\");\n gradioApp.addEventListener(\"render\", handleLoadComplete);\n</script>\n```\n\n_Note: While Gradio's CSS will never impact the embedding page, the embedding page can affect the style of the embedded Gradio app. Make sure that any CSS in the parent page isn't so general that it could also apply to the embedded Gradio app and cause the styling to break. Element selectors such as `header { ... }` and `footer { ... }` will be the most likely to cause issues._\n\nEmbedding with IFrames\n\nTo embed with IFrames instead (if you cannot add JavaScript to your website, for example), add this element:\n\n```html\n<iframe src=\"https://$your_space_host.hf.space\"></iframe>\n```\n\nAgain, you should set the `src=` attribute to your Space's embed URL, which you can find in the \"Embed this Space\" button.\n\nNote: if you use IFrames, you'll probably want to add a fixed `height` attribute and set `style=\"border:0;\"` to remove the border. In addition, if your app requires permissions such as access to the webcam or the microphone, you'll need to provide that as well using the `allow` attribute.\n\n", "heading1": "Embedding Hosted Spaces", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "You can use almost any Gradio app as an API! In the footer of a Gradio app [like this one](https://huggingface.co/spaces/gradio/hello_world), you'll see a \"Use via API\" link.\n\n![Use via API](https://github.com/gradio-app/gradio/blob/main/guides/assets/use_via_api.png?raw=true)\n\nThis is a page that lists the endpoints that can be used to query the Gradio app, via our supported clients: either [the Python client](https://gradio.app/guides/getting-started-with-the-python-client/), or [the JavaScript client](https://gradio.app/guides/getting-started-with-the-js-client/). For each endpoint, Gradio automatically generates the parameters and their types, as well as example inputs, like this:\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api.png)\n\nThe endpoints are automatically created when you launch a Gradio application. 
If you are using Gradio `Blocks`, you can also name each event listener, such as\n\n```python\nbtn.click(add, [num1, num2], output, api_name=\"addition\")\n```\n\nThis will add and document the endpoint `/addition/` to the automatically generated API page. Read more about the [API page here](./view-api-page).\n\n", "heading1": "API Page", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "When a user makes a prediction to your app, you may need the underlying network request, in order to get the request headers (e.g. for advanced authentication), log the client's IP address, get the query parameters, or for other reasons. Gradio supports this in a similar manner to FastAPI: simply add a function parameter whose type hint is `gr.Request` and Gradio will pass in the network request as that parameter. Here is an example:\n\n```python\nimport gradio as gr\n\ndef echo(text, request: gr.Request):\n if request:\n print(\"Request headers dictionary:\", request.headers)\n print(\"IP address:\", request.client.host)\n print(\"Query parameters:\", dict(request.query_params))\n return text\n\nio = gr.Interface(echo, \"textbox\", \"textbox\").launch()\n```\n\nNote: if your function is called directly instead of through the UI (this happens, for\nexample, when examples are cached, or when the Gradio app is called via API), then `request` will be `None`.\nYou should handle this case explicitly to ensure that your app does not throw any errors. That is why\nwe have the explicit check `if request`.\n\n", "heading1": "Accessing the Network Request Directly", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "In some cases, you might have an existing FastAPI app, and you'd like to add a path for a Gradio demo.\nYou can easily do this with `gradio.mount_gradio_app()`.\n\nHere's a complete example:\n\n$code_custom_path\n\nNote that this approach also allows you to run your Gradio apps on custom paths (`http://localhost:8000/gradio` in the example above).\n\n\n", "heading1": "Mounting Within Another FastAPI App", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "Password-protected app\n\nYou may wish to put an authentication page in front of your app to limit who can open your app. With the `auth=` keyword argument in the `launch()` method, you can provide a tuple with a username and password, or a list of acceptable username/password tuples. Here's an example that provides password-based authentication for a single user named \"admin\":\n\n```python\ndemo.launch(auth=(\"admin\", \"pass1234\"))\n```\n\nFor more complex authentication handling, you can even pass a function that takes a username and password as arguments, and returns `True` to allow access, `False` otherwise.\n\nHere's an example of a function that accepts any login where the username and password are the same:\n\n```python\ndef same_auth(username, password):\n return username == password\ndemo.launch(auth=same_auth)\n```\n\nIf you have multiple users, you may wish to customize the content that is shown depending on the user that is logged in. You can retrieve the logged-in user by [accessing the network request directly](#accessing-the-network-request-directly) as discussed above, and then reading the `.username` attribute of the request. 
Here's an example:\n\n\n```python\nimport gradio as gr\n\ndef update_message(request: gr.Request):\n return f\"Welcome, {request.username}\"\n\nwith gr.Blocks() as demo:\n m = gr.Markdown()\n demo.load(update_message, None, m)\n\ndemo.launch(auth=[(\"Abubakar\", \"Abubakar\"), (\"Ali\", \"Ali\")])\n```\n\nNote: For authentication to work properly, third party cookies must be enabled in your browser. This is not the case by default for Safari or for Chrome Incognito Mode.\n\nIf users visit the `/logout` page of your Gradio app, they will automatically be logged out and session cookies deleted. This allows you to add logout functionality to your Gradio app as well. Let's update the previous example to include a log out button:\n\n```python\nimport gradio as gr\n\ndef update_message(request: gr.Request):\n return f\"Welcome, {request.username}\"\n\nwith gr.Blocks() as ", "heading1": "Authentication", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": " Let's update the previous example to include a log out button:\n\n```python\nimport gradio as gr\n\ndef update_message(request: gr.Request):\n return f\"Welcome, {request.username}\"\n\nwith gr.Blocks() as demo:\n m = gr.Markdown()\n logout_button = gr.Button(\"Logout\", link=\"/logout\")\n demo.load(update_message, None, m)\n\ndemo.launch(auth=[(\"Pete\", \"Pete\"), (\"Dawood\", \"Dawood\")])\n```\n\nNote: Gradio's built-in authentication provides a straightforward and basic layer of access control but does not offer robust security features for applications that require stringent access controls (e.g. multi-factor authentication, rate limiting, or automatic lockout policies).\n\nOAuth (Login via Hugging Face)\n\nGradio natively supports OAuth login via Hugging Face. In other words, you can easily add a _\"Sign in with Hugging Face\"_ button to your demo, which allows you to get a user's HF username as well as other information from their HF profile. Check out [this Space](https://huggingface.co/spaces/Wauplin/gradio-oauth-demo) for a live demo.\n\nTo enable OAuth, you must set `hf_oauth: true` as a Space metadata in your README.md file. This will register your Space\nas an OAuth application on Hugging Face. Next, you can use `gr.LoginButton` to add a login button to\nyour Gradio app. Once a user is logged in with their HF account, you can retrieve their profile by adding a parameter of type\n`gr.OAuthProfile` to any Gradio function. The user profile will be automatically injected as a parameter value. If you want\nto perform actions on behalf of the user (e.g. list user's private repos, create repo, etc.), you can retrieve the user\ntoken by adding a parameter of type `gr.OAuthToken`. You must define which scopes you will use in your Space metadata\n(see [documentation](https://huggingface.co/docs/hub/spaces-oauthscopes) for more details).\n\nHere is a short example:\n\n$code_login_with_huggingface\n\nWhen the user clicks on the login button, they get redirected in a new page to authorize your ", "heading1": "Authentication", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "docs/hub/spaces-oauthscopes) for more details).\n\nHere is a short example:\n\n$code_login_with_huggingface\n\nWhen the user clicks on the login button, they get redirected in a new page to authorize your Space.\n\n
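As a rough sketch of the pattern described above (the `$code_login_with_huggingface` placeholder stands for the official sample; the `profile.name` attribute is an assumption here):

```python
import gradio as gr

def hello(profile: gr.OAuthProfile | None) -> str:
    # The profile is injected automatically; it is None if the user has not signed in.
    if profile is None:
        return "Not logged in."
    return f"Hello {profile.name}!"

with gr.Blocks() as demo:
    gr.LoginButton()
    m = gr.Markdown()
    demo.load(hello, inputs=None, outputs=m)

demo.launch()
```
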
\n\nUsers can revoke access to their profile at any time in their [settings](https://huggingface.co/settings/connected-applications).\n\nAs seen above, OAuth features are available only when your app runs in a Space. However, you often need to test your app\nlocally before deploying it. To test OAuth features locally, your machine must be logged in to Hugging Face. Please run `huggingface-cli login` or set `HF_TOKEN` as an environment variable to one of your access tokens. You can generate a new token in your settings page (https://huggingface.co/settings/tokens). Then, clicking on the `gr.LoginButton` will log in to your local Hugging Face profile, allowing you to debug your app with your Hugging Face account before deploying it to a Space.\n\n**Security Note**: It is important to note that adding a `gr.LoginButton` does not restrict users from using your app, in the same way that adding [username-password authentication](/guides/sharing-your-app#password-protected-app) does. This means that users of your app who have not logged in with Hugging Face can still access and run events in your Gradio app -- the difference is that the `gr.OAuthProfile` or `gr.OAuthToken` will be `None` in the corresponding functions.\n\n\nOAuth (with external providers)\n\nIt is also possible to authenticate with external OAuth providers (e.g. Google OAuth) in your Gradio apps. To do this, first mount your Gradio app within a FastAPI app ([as discussed above](#mounting-within-another-fast-api-app)). Then, you must write an *authentication function*, which gets the user's username from the OAuth provider and returns it. Th
Of course, this does not add much security, since any user can add this header in their request.\n\nHere's a more complete example showing how to add Google OAuth to a Gradio app (assuming you've already created OAuth Credentials on the [Google Developer Console](https://console.cloud.google.com/project)):\n\n```python\nimport os\nfrom authlib.integrations.starlette_client import OAuth, OAuthError\nfrom fastapi import FastAPI, Depends, Request\nfrom starlette.config import Config\nfrom starlette.responses import RedirectResponse\nfrom starlette.middleware.sessions import SessionMiddleware\nimport uvicorn\nimport gradio as gr\n\napp = FastAPI()\n\n# Replace these with your own OAuth settings\nGOOGLE_CLIENT_ID = \"...\"\nGOOGLE_C", "heading1": "Authentication", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "Response\nfrom starlette.middleware.sessions import SessionMiddleware\nimport uvicorn\nimport gradio as gr\n\napp = FastAPI()\n\n# Replace these with your own OAuth settings\nGOOGLE_CLIENT_ID = \"...\"\nGOOGLE_CLIENT_SECRET = \"...\"\nSECRET_KEY = \"...\"\n\nconfig_data = {'GOOGLE_CLIENT_ID': GOOGLE_CLIENT_ID, 'GOOGLE_CLIENT_SECRET': GOOGLE_CLIENT_SECRET}\nstarlette_config = Config(environ=config_data)\noauth = OAuth(starlette_config)\noauth.register(\n name='google',\n server_metadata_url='https://accounts.google.com/.well-known/openid-configuration',\n client_kwargs={'scope': 'openid email profile'},\n)\n\nSECRET_KEY = os.environ.get('SECRET_KEY') or \"a_very_secret_key\"\napp.add_middleware(SessionMiddleware, secret_key=SECRET_KEY)\n\n# Dependency to get the current user\ndef get_user(request: Request):\n user = request.session.get('user')\n if user:\n return user['name']\n return None\n\n@app.get('/')\ndef public(user: dict = Depends(get_user)):\n if user:\n return RedirectResponse(url='/gradio')\n else:\n return RedirectResponse(url='/login-demo')\n\n@app.route('/logout')\nasync def logout(request: Request):\n request.session.pop('user', None)\n return RedirectResponse(url='/')\n\n@app.route('/login')\nasync def login(request: Request):\n redirect_uri = request.url_for('auth')\n # If your app is running on https, you should ensure that the\n # `redirect_uri` is https, e.g. 
uncomment the following lines:\n #\n # from urllib.parse import urlparse, urlunparse\n # redirect_uri = urlunparse(urlparse(str(redirect_uri))._replace(scheme='https'))\n return await oauth.google.authorize_redirect(request, redirect_uri)\n\n@app.route('/auth')\nasync def auth(request: Request):\n try:\n access_token = await oauth.google.authorize_access_token(request)\n except OAuthError:\n return RedirectResponse(url='/')\n request.session['user'] = dict(access_token)[\"userinfo\"]\n return RedirectResponse(url='/')\n\nwith gr.Blocks() as login_demo:\n gr.Button(", "heading1": "Authentication", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "t OAuthError:\n return RedirectResponse(url='/')\n request.session['user'] = dict(access_token)[\"userinfo\"]\n return RedirectResponse(url='/')\n\nwith gr.Blocks() as login_demo:\n gr.Button(\"Login\", link=\"/login\")\n\napp = gr.mount_gradio_app(app, login_demo, path=\"/login-demo\")\n\ndef greet(request: gr.Request):\n return f\"Welcome to Gradio, {request.username}\"\n\nwith gr.Blocks() as main_demo:\n m = gr.Markdown(\"Welcome to Gradio!\")\n gr.Button(\"Logout\", link=\"/logout\")\n main_demo.load(greet, None, m)\n\napp = gr.mount_gradio_app(app, main_demo, path=\"/gradio\", auth_dependency=get_user)\n\nif __name__ == '__main__':\n uvicorn.run(app)\n```\n\nThere are actually two separate Gradio apps in this example! One simply displays a login button (this demo is accessible to any user), while the main demo is only accessible to users that are logged in. You can try this example out on [this Space](https://huggingface.co/spaces/gradio/oauth-example).\n\n\n", "heading1": "Authentication", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "By default, Gradio collects certain analytics to help us better understand the usage of the `gradio` library. This includes the following information:\n\n* What environment the Gradio app is running on (e.g. Colab Notebook, Hugging Face Spaces)\n* What input/output components are being used in the Gradio app\n* Whether the Gradio app is utilizing certain advanced features, such as `auth` or `show_error`\n* The IP address which is used solely to measure the number of unique developers using Gradio\n* The version of Gradio that is running\n\nNo information is collected from _users_ of your Gradio app. If you'd like to disable analytics altogether, you can do so by setting the `analytics_enabled` parameter to `False` in `gr.Blocks`, `gr.Interface`, or `gr.ChatInterface`. Or, you can set the GRADIO_ANALYTICS_ENABLED environment variable to `\"False\"` to apply this to all Gradio apps created across your system.\n\n*Note*: this reflects the analytics policy as of `gradio>=4.32.0`.\n\n", "heading1": "Analytics", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "[Progressive Web Apps (PWAs)](https://developer.mozilla.org/en-US/docs/Web/Progressive_web_apps) are web applications that are regular web pages or websites, but can appear to the user like installable platform-specific applications.\n\nGradio apps can be easily served as PWAs by setting the `pwa=True` parameter in the `launch()` method. 
Here's an example:\n\n```python\nimport gradio as gr\n\ndef greet(name):\n    return \"Hello \" + name + \"!\"\n\ndemo = gr.Interface(fn=greet, inputs=\"textbox\", outputs=\"textbox\")\n\ndemo.launch(pwa=True)  # Launch your app as a PWA\n```\n\nThis will generate a PWA that can be installed on your device. Here's how it looks:\n\n![Installing PWA](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/install-pwa.gif)\n\nWhen you specify `favicon_path` in the `launch()` method, the icon will be used as the app's icon. Here's an example:\n\n```python\ndemo.launch(pwa=True, favicon_path=\"./hf-logo.svg\")  # Use a custom icon for your PWA\n```\n\n![Custom PWA Icon](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/pwa-favicon.png)\n", "heading1": "Progressive Web App (PWA)", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "By default, each event listener has its own queue, which handles one request at a time. This can be configured via two arguments:\n\n- `concurrency_limit`: This sets the maximum number of concurrent executions for an event listener. By default, the limit is 1 unless configured otherwise in `Blocks.queue()`. You can also set it to `None` for no limit (i.e., an unlimited number of concurrent executions). For example:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    prompt = gr.Textbox()\n    image = gr.Image()\n    generate_btn = gr.Button(\"Generate Image\")\n    generate_btn.click(image_gen, prompt, image, concurrency_limit=5)\n```\n\nIn the code above, up to 5 requests can be processed simultaneously for this event listener. Additional requests will be queued until a slot becomes available.\n\nIf you want to manage multiple event listeners using a shared queue, you can use the `concurrency_id` argument:\n\n- `concurrency_id`: This allows event listeners to share a queue by assigning them the same ID. For example, if your setup has only 2 GPUs but multiple functions require GPU access, you can create a shared queue for all those functions. Here's how that might look:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    prompt = gr.Textbox()\n    image = gr.Image()\n    generate_btn_1 = gr.Button(\"Generate Image via model 1\")\n    generate_btn_2 = gr.Button(\"Generate Image via model 2\")\n    generate_btn_3 = gr.Button(\"Generate Image via model 3\")\n    generate_btn_1.click(image_gen_1, prompt, image, concurrency_limit=2, concurrency_id=\"gpu_queue\")\n    generate_btn_2.click(image_gen_2, prompt, image, concurrency_id=\"gpu_queue\")\n    generate_btn_3.click(image_gen_3, prompt, image, concurrency_id=\"gpu_queue\")\n```\n\nIn this example, all three event listeners share a queue identified by `\"gpu_queue\"`. The queue can handle up to 2 concurrent requests at a time, as defined by the `concurrency_limit`.\n\nNotes\n\n- To ensure unlimited concurrency for an event listener, se", "heading1": "Configuring the Queue", "source_page_url": "https://gradio.app/guides/queuing", "source_page_title": "Additional Features - Queuing Guide"}, {"text": " identified by `\"gpu_queue\"`. The queue can handle up to 2 concurrent requests at a time, as defined by the `concurrency_limit`.\n\nNotes\n\n- To ensure unlimited concurrency for an event listener, set `concurrency_limit=None`. This is useful if your function is calling e.g. an external API which handles the rate limiting of requests itself.\n- The default concurrency limit for all queues can be set globally using the `default_concurrency_limit` parameter in `Blocks.queue()`.
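\n\nFor instance, a minimal sketch of setting a global default (the value `10` here is purely illustrative):\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    ...  # define components and event listeners as usual\n\n# Event listeners without an explicit `concurrency_limit` can now run up to 10 requests concurrently\ndemo.queue(default_concurrency_limit=10)\ndemo.launch()\n```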
\n\nThese configurations make it easy to manage the queuing behavior of your Gradio app.\n", "heading1": "Configuring the Queue", "source_page_url": "https://gradio.app/guides/queuing", "source_page_title": "Additional Features - Queuing Guide"}, {"text": "**API endpoint names**\n\nWhen you create a Gradio application, the API endpoint names are automatically generated based on the function names. You can change this by using the `api_name` parameter in `gr.Interface` or `gr.ChatInterface`. If you are using Gradio `Blocks`, you can name each event listener, like this:\n\n```python\nbtn.click(add, [num1, num2], output, api_name=\"addition\")\n```\n\n**Hiding API endpoints**\n\nWhen building a complex Gradio app, you might want to hide certain API endpoints from appearing on the view API page, e.g. if they correspond to functions that simply update the UI. You can set the `show_api` parameter to `False` in any `Blocks` event listener to achieve this, e.g. \n\n```python\nbtn.click(add, [num1, num2], output, show_api=False)\n```\n\n**Disabling API endpoints**\n\nHiding the API endpoint doesn't disable it. A user can still programmatically call the API endpoint if they know the name. If you want to disable an API endpoint altogether, set `api_name=False`, e.g. \n\n```python\nbtn.click(add, [num1, num2], output, api_name=False)\n```\n\nNote: setting `api_name=False` also means that downstream apps will not be able to load your Gradio app using `gr.load()` as this function uses the Gradio API under the hood.\n\n**Adding API endpoints**\n\nYou can also add new API routes to your Gradio application that do not correspond to events in your UI.\n\nFor example, in this Gradio application, we add a new route that adds numbers and slices a list:\n\n```py\nimport gradio as gr\nwith gr.Blocks() as demo:\n    with gr.Row():\n        input = gr.Textbox()\n        button = gr.Button(\"Submit\")\n    output = gr.Textbox()\n    def fn(a: int, b: int, c: list[str]) -> tuple[int, str]:\n        return a + b, c[a:b]\n    gr.api(fn, api_name=\"add_and_slice\")\n\n_, url, _ = demo.launch()\n```\n\nThis will create a new route `/add_and_slice` which will show up in the \"view API\" page. It can be programmatically called by the Python or JS Clients (discussed below) like this:\n\n```py\nfrom grad", "heading1": "Configuring the API Page", "source_page_url": "https://gradio.app/guides/view-api-page", "source_page_title": "Additional Features - View Api Page Guide"}, {"text": "``\n\nThis will create a new route `/add_and_slice` which will show up in the \"view API\" page.
It can be programmatically called by the Python or JS Clients (discussed below) like this:\n\n```py\nfrom gradio_client import Client\n\nclient = Client(url)\nresult = client.predict(\n    a=3,\n    b=5,\n    c=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10],\n    api_name=\"/add_and_slice\"\n)\nprint(result)\n```\n\n", "heading1": "Configuring the API Page", "source_page_url": "https://gradio.app/guides/view-api-page", "source_page_title": "Additional Features - View Api Page Guide"}, {"text": "This API page not only lists all of the endpoints that can be used to query the Gradio app, but also shows the usage of both [the Gradio Python client](https://gradio.app/guides/getting-started-with-the-python-client/), and [the Gradio JavaScript client](https://gradio.app/guides/getting-started-with-the-js-client/). \n\nFor each endpoint, Gradio automatically generates a complete code snippet with the parameters and their types, as well as example inputs, allowing you to immediately test an endpoint. Here's an example showing an image file input and `str` output:\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api-snippet.png)\n\n\n", "heading1": "The Clients", "source_page_url": "https://gradio.app/guides/view-api-page", "source_page_title": "Additional Features - View Api Page Guide"}, {"text": "Instead of reading through the view API page, you can also use Gradio's built-in API recorder to generate the relevant code snippet. Simply click on the \"API Recorder\" button, use your Gradio app via the UI as you would normally, and then the API Recorder will generate the code using the Clients to recreate all of your interactions programmatically.\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/api-recorder.gif)\n\n", "heading1": "The API Recorder \ud83e\ude84", "source_page_url": "https://gradio.app/guides/view-api-page", "source_page_title": "Additional Features - View Api Page Guide"}, {"text": "The API page also includes instructions on how to use the Gradio app as a Model Context Protocol (MCP) server, which is a standardized way to expose functions as tools so that they can be used by LLMs. \n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api-mcp.png)\n\nFor the MCP server, each tool, its description, and its parameters are listed, along with instructions on how to integrate with popular MCP Clients. Read more about Gradio's [MCP integration here](https://www.gradio.app/guides/building-mcp-server-with-gradio).\n\n", "heading1": "MCP Server", "source_page_url": "https://gradio.app/guides/view-api-page", "source_page_title": "Additional Features - View Api Page Guide"}, {"text": "You can access the complete OpenAPI (formerly Swagger) specification of your Gradio app's API at the endpoint `/gradio_api/openapi.json`. The OpenAPI specification is a standardized, language-agnostic interface description for REST APIs that enables both humans and computers to discover and understand the capabilities of your service.\n", "heading1": "OpenAPI Specification", "source_page_url": "https://gradio.app/guides/view-api-page", "source_page_title": "Additional Features - View Api Page Guide"}, {"text": "Let's create a demo where a user can choose a filter to apply to their webcam stream.
Users can choose from an edge-detection filter, a cartoon filter, or simply flipping the stream vertically.\n\n$code_streaming_filter\n$demo_streaming_filter\n\nYou will notice that if you change the filter value, it will immediately take effect in the output stream. That is an important difference between stream events and other Gradio events: the input values of the stream can be changed while the stream is being processed. \n\nTip: We set the \"streaming\" parameter of the image output component to be \"True\". Doing so lets the server automatically convert our output images into base64 format, a format that is efficient for streaming.\n\n", "heading1": "A Realistic Image Demo", "source_page_url": "https://gradio.app/guides/streaming-inputs", "source_page_title": "Additional Features - Streaming Inputs Guide"}, {"text": "For some image streaming demos, like the one above, we don't need to display separate input and output components. Our app would look cleaner if we could just display the modified output stream.\n\nWe can do so by just specifying the input image component as the output of the stream event.\n\n$code_streaming_filter_unified\n$demo_streaming_filter_unified\n\n", "heading1": "Unified Image Demos", "source_page_url": "https://gradio.app/guides/streaming-inputs", "source_page_title": "Additional Features - Streaming Inputs Guide"}, {"text": "Your streaming function should be stateless. It should take the current input and return its corresponding output. However, there are cases where you may want to keep track of past inputs or outputs. For example, you may want to keep a buffer of the previous `k` inputs to improve the accuracy of your transcription demo. You can do this with Gradio's `gr.State()` component.\n\nLet's showcase this with a sample demo:\n\n```python\ndef transcribe_handler(current_audio, state, transcript):\n    next_text = transcribe(current_audio, history=state)\n    state.append(current_audio)\n    state = state[-3:]\n    return state, transcript + next_text\n\nwith gr.Blocks() as demo:\n    with gr.Row():\n        with gr.Column():\n            mic = gr.Audio(sources=\"microphone\")\n            state = gr.State(value=[])\n        with gr.Column():\n            transcript = gr.Textbox(label=\"Transcript\")\n    mic.stream(transcribe_handler, [mic, state, transcript], [state, transcript],\n               time_limit=10, stream_every=1)\n\n\ndemo.launch()\n```\n\n", "heading1": "Keeping track of past inputs or outputs", "source_page_url": "https://gradio.app/guides/streaming-inputs", "source_page_title": "Additional Features - Streaming Inputs Guide"}, {"text": "For an end-to-end example of streaming from the webcam, see the object detection from webcam [guide](/main/guides/object-detection-from-webcam-with-webrtc).", "heading1": "End-to-End Examples", "source_page_url": "https://gradio.app/guides/streaming-inputs", "source_page_title": "Additional Features - Streaming Inputs Guide"}, {"text": "Client side functions are ideal for updating component properties (like visibility, placeholders, interactive state, or styling).
\n\nHere's a basic example:\n\n```py\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    with gr.Row() as row:\n        btn = gr.Button(\"Hide this row\")\n\n    # This function runs in the browser without a server roundtrip\n    btn.click(\n        lambda: gr.Row(visible=False),\n        None,\n        row,\n        js=True\n    )\n\ndemo.launch()\n```\n\n\n", "heading1": "When to Use Client Side Functions", "source_page_url": "https://gradio.app/guides/client-side-functions", "source_page_title": "Additional Features - Client Side Functions Guide"}, {"text": "Client side functions have some important restrictions:\n* They can only update component properties (not values)\n* They cannot take any inputs\n\nHere are some functions that will work with `js=True`:\n\n```py\n# Simple property updates\nlambda: gr.Textbox(lines=4)\n\n# Multiple component updates\nlambda: [gr.Textbox(lines=4), gr.Button(interactive=False)]\n\n# Using gr.update() for property changes\nlambda: gr.update(visible=True, interactive=False)\n```\n\nWe are working to increase the space of functions that can be transpiled to JavaScript so that they can be run in the browser. [Follow the Groovy library for more info](https://github.com/abidlabs/groovy-transpiler).\n\n\n", "heading1": "Limitations", "source_page_url": "https://gradio.app/guides/client-side-functions", "source_page_title": "Additional Features - Client Side Functions Guide"}, {"text": "Here's a more complete example showing how client side functions can improve the user experience:\n\n$code_todo_list_js\n\n\n", "heading1": "Complete Example", "source_page_url": "https://gradio.app/guides/client-side-functions", "source_page_title": "Additional Features - Client Side Functions Guide"}, {"text": "When you set `js=True`, Gradio:\n\n1. Transpiles your Python function to JavaScript\n\n2. Runs the function directly in the browser\n\n3. Still sends the request to the server (for consistency and to handle any side effects)\n\nThis provides immediate visual feedback while ensuring your application state remains consistent.\n", "heading1": "Behind the Scenes", "source_page_url": "https://gradio.app/guides/client-side-functions", "source_page_title": "Additional Features - Client Side Functions Guide"}, {"text": "- **1. Static files**. You can designate static files or directories using the `gr.set_static_paths` function. Static files are not copied to the Gradio cache (see below) and will be served directly from your computer. This can help save disk space and reduce the time your app takes to launch, but be mindful of possible security implications as any static files are accessible to all users of your Gradio app.\n\n- **2. Files in the `allowed_paths` parameter in `launch()`**. This parameter allows you to pass in a list of additional directories or exact filepaths you'd like to allow users to have access to. (By default, this parameter is an empty list).\n\n- **3. Files in Gradio's cache**. After you launch your Gradio app, Gradio copies certain files into a temporary cache and makes these files accessible to users. Let's unpack this in more detail below.\n\n\n", "heading1": "Files Gradio allows users to access", "source_page_url": "https://gradio.app/guides/file-access", "source_page_title": "Additional Features - File Access Guide"}, {"text": "First, it's important to understand why Gradio has a cache at all. Gradio copies files to a cache directory before returning them to the frontend. This prevents files from being overwritten by one user while they are still needed by another user of your application.
For example, if your prediction function returns a video file, then Gradio will move that video to the cache after your prediction function runs and returns a URL the frontend can use to show the video. Any file in the cache is available via URL to all users of your running application.\n\nTip: You can customize the location of the cache by setting the `GRADIO_TEMP_DIR` environment variable to an absolute path, such as `/home/usr/scripts/project/temp/`. \n\nFiles Gradio moves to the cache\n\nGradio moves three kinds of files into the cache\n\n1. Files specified by the developer before runtime, e.g. cached examples, default values of components, or files passed into parameters such as the `avatar_images` of `gr.Chatbot`\n\n2. File paths returned by a prediction function in your Gradio application, if they ALSO meet one of the conditions below:\n\n* It is in the `allowed_paths` parameter of the `Blocks.launch` method.\n* It is in the current working directory of the python interpreter.\n* It is in the temp directory obtained by `tempfile.gettempdir()`.\n\n**Note:** files in the current working directory whose name starts with a period (`.`) will not be moved to the cache, even if they are returned from a prediction function, since they often contain sensitive information. \n\nIf none of these criteria are met, the prediction function that is returning that file will raise an exception instead of moving the file to cache. Gradio performs this check so that arbitrary files on your machine cannot be accessed.\n\n3. Files uploaded by a user to your Gradio app (e.g. through the `File` or `Image` input components).\n\nTip: If at any time Gradio blocks a file that you would like it to process, add its path to the `allowed_paths` p", "heading1": "The Gradio cache", "source_page_url": "https://gradio.app/guides/file-access", "source_page_title": "Additional Features - File Access Guide"}, {"text": "d by a user to your Gradio app (e.g. through the `File` or `Image` input components).\n\nTip: If at any time Gradio blocks a file that you would like it to process, add its path to the `allowed_paths` parameter.\n\n", "heading1": "The Gradio cache", "source_page_url": "https://gradio.app/guides/file-access", "source_page_title": "Additional Features - File Access Guide"}, {"text": "While running, Gradio apps will NOT ALLOW users to access:\n\n- **Files that you explicitly block via the `blocked_paths` parameter in `launch()`**. You can pass in a list of additional directories or exact filepaths to the `blocked_paths` parameter in `launch()`. This parameter takes precedence over the files that Gradio exposes by default, or by the `allowed_paths` parameter or the `gr.set_static_paths` function.\n\n- **Any other paths on the host machine**. Users should NOT be able to access other arbitrary paths on the host.\n\n", "heading1": "The files Gradio will not allow others to access", "source_page_url": "https://gradio.app/guides/file-access", "source_page_title": "Additional Features - File Access Guide"}, {"text": "Sharing your Gradio application will also allow users to upload files to your computer or server. You can set a maximum file size for uploads to prevent abuse and to preserve disk space. You can do this with the `max_file_size` parameter of `.launch`. 
For example, the following two code snippets limit file uploads to 5 megabytes per file.\n\n```python\nimport gradio as gr\n\ndemo = gr.Interface(lambda x: x, \"image\", \"image\")\n\ndemo.launch(max_file_size=\"5mb\")\n# or\ndemo.launch(max_file_size=5 * gr.FileSize.MB)\n```\n\n", "heading1": "Uploading Files", "source_page_url": "https://gradio.app/guides/file-access", "source_page_title": "Additional Features - File Access Guide"}, {"text": "* Set a `max_file_size` for your application.\n* Do not return arbitrary user input from a function that is connected to a file-based output component (`gr.Image`, `gr.File`, etc.). For example, the following interface would allow anyone to move an arbitrary file in your local directory to the cache: `gr.Interface(lambda s: s, \"text\", \"file\")`. This is because the user input is treated as an arbitrary file path. \n* Make `allowed_paths` as small as possible. If a path in `allowed_paths` is a directory, any file within that directory can be accessed. Make sure the entries of `allowed_paths` only contain files related to your application.\n* Run your gradio application from the same directory the application file is located in. This will narrow the scope of files Gradio will be allowed to move into the cache. For example, prefer `python app.py` to `python Users/sources/project/app.py`.\n\n\n", "heading1": "Best Practices", "source_page_url": "https://gradio.app/guides/file-access", "source_page_title": "Additional Features - File Access Guide"}, {"text": "Both `gr.set_static_paths` and the `allowed_paths` parameter in launch expect absolute paths. Below is a minimal example to display a local `.png` image file in an HTML block.\n\n```txt\n\u251c\u2500\u2500 assets\n\u2502   \u2514\u2500\u2500 logo.png\n\u2514\u2500\u2500 app.py\n```\nFor the example directory structure, `logo.png` and any other files in the `assets` folder can be accessed from your Gradio app in `app.py` as follows:\n\n```python\nfrom pathlib import Path\n\nimport gradio as gr\n\ngr.set_static_paths(paths=[Path.cwd().absolute()/\"assets\"])\n\nwith gr.Blocks() as demo:\n    gr.HTML(\"<img src='/gradio_api/file=assets/logo.png'>\")\n\ndemo.launch()\n```\n", "heading1": "Example: Accessing local files", "source_page_url": "https://gradio.app/guides/file-access", "source_page_title": "Additional Features - File Access Guide"}, {"text": "Gradio can stream audio and video directly from your generator function.\nThis lets your user hear your audio or see your video nearly as soon as it's `yielded` by your function.\nAll you have to do is:\n\n1. Set `streaming=True` in your `gr.Audio` or `gr.Video` output component.\n2. Write a Python generator that yields the next \"chunk\" of audio or video.\n3.
Set `autoplay=True` so that the media starts playing automatically.\n\nFor audio, the next \"chunk\" can be either an `.mp3` or `.wav` file or a `bytes` sequence of audio.\nFor video, the next \"chunk\" has to be either an `.mp4` file or a file with the `h.264` codec and a `.ts` extension.\nFor smooth playback, make sure chunks are of consistent length and larger than 1 second.\n\nWe'll finish with some simple examples illustrating these points.\n\nStreaming Audio\n\n```python\nimport gradio as gr\nfrom time import sleep\n\ndef keep_repeating(audio_file):\n    for _ in range(10):\n        sleep(0.5)\n        yield audio_file\n\ngr.Interface(keep_repeating,\n    gr.Audio(sources=[\"microphone\"], type=\"filepath\"),\n    gr.Audio(streaming=True, autoplay=True)\n).launch()\n```\n\nStreaming Video\n\n```python\nimport gradio as gr\nfrom time import sleep\n\ndef keep_repeating(video_file):\n    for _ in range(10):\n        sleep(0.5)\n        yield video_file\n\ngr.Interface(keep_repeating,\n    gr.Video(sources=[\"webcam\"], format=\"mp4\"),\n    gr.Video(streaming=True, autoplay=True)\n).launch()\n```\n\n", "heading1": "Streaming Media", "source_page_url": "https://gradio.app/guides/streaming-outputs", "source_page_title": "Additional Features - Streaming Outputs Guide"}, {"text": "For an end-to-end example of streaming media, see the object detection from video [guide](/main/guides/object-detection-from-video) or the streaming AI-generated audio with [transformers](https://huggingface.co/docs/transformers/index) [guide](/main/guides/streaming-ai-generated-audio).", "heading1": "End-to-End Examples", "source_page_url": "https://gradio.app/guides/streaming-outputs", "source_page_title": "Additional Features - Streaming Outputs Guide"}, {"text": "You can initialize the `I18n` class with multiple language dictionaries to add custom translations:\n\n```python\nimport gradio as gr\n\n# Create an I18n instance with translations for multiple languages\ni18n = gr.I18n(\n    en={\"greeting\": \"Hello, welcome to my app!\", \"submit\": \"Submit\"},\n    es={\"greeting\": \"\u00a1Hola, bienvenido a mi aplicaci\u00f3n!\", \"submit\": \"Enviar\"},\n    fr={\"greeting\": \"Bonjour, bienvenue dans mon application!\", \"submit\": \"Soumettre\"}\n)\n\nwith gr.Blocks() as demo:\n    # Use the i18n method to translate the greeting\n    gr.Markdown(i18n(\"greeting\"))\n    with gr.Row():\n        input_text = gr.Textbox(label=\"Input\")\n        output_text = gr.Textbox(label=\"Output\")\n\n    submit_btn = gr.Button(i18n(\"submit\"))\n\n# Pass the i18n instance to the launch method\ndemo.launch(i18n=i18n)\n```\n\n", "heading1": "Setting Up Translations", "source_page_url": "https://gradio.app/guides/internationalization", "source_page_title": "Additional Features - Internationalization Guide"}, {"text": "When you use the `i18n` instance with a translation key, Gradio will show the corresponding translation to users based on their browser's language settings or the language they've selected in your app.\n\nIf a translation isn't available for the user's locale, the system will fall back to English (if available) or display the key itself.\n\n", "heading1": "How It Works", "source_page_url": "https://gradio.app/guides/internationalization", "source_page_title": "Additional Features - Internationalization Guide"}, {"text": "Locale codes should follow the BCP 47 format (e.g., 'en', 'en-US', 'zh-CN').
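\n\nAs a sketch, note that a regional code like 'en-US' is not a valid Python identifier, so if `gr.I18n` receives its per-locale dictionaries as keyword arguments (as in the example above), such locales can be supplied by unpacking a plain dictionary:\n\n```python\nimport gradio as gr\n\n# 'en-US' and 'zh-CN' cannot be written as keyword arguments directly, so unpack a dict instead\ni18n = gr.I18n(**{\n    \"en\": {\"greeting\": \"Hello!\"},\n    \"en-US\": {\"greeting\": \"Howdy!\"},\n    \"zh-CN\": {\"greeting\": \"\u4f60\u597d\uff01\"},\n})\n```\n\n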
The `I18n` class will warn you if you use an invalid locale code.\n\n", "heading1": "Valid Locale Codes", "source_page_url": "https://gradio.app/guides/internationalization", "source_page_title": "Additional Features - Internationalization Guide"}, {"text": "The following component properties typically support internationalization:\n\n- `description`\n- `info`\n- `title`\n- `placeholder`\n- `value`\n- `label`\n\nNote that support may vary depending on the component, and some properties might have exceptions where internationalization is not applicable. You can check this by referring to the typehint for the parameter and if it contains `I18nData`, then it supports internationalization.", "heading1": "Supported Component Properties", "source_page_url": "https://gradio.app/guides/internationalization", "source_page_title": "Additional Features - Internationalization Guide"}, {"text": "When a user closes their browser tab, Gradio will automatically delete any `gr.State` variables associated with that user session after 60 minutes. If the user connects again within those 60 minutes, no state will be deleted.\n\nYou can control the deletion behavior further with the following two parameters of `gr.State`:\n\n1. `delete_callback` - An arbitrary function that will be called when the variable is deleted. This function must take the state value as input. This function is useful for deleting variables from GPU memory.\n2. `time_to_live` - The number of seconds the state should be stored for after it is created or updated. This will delete variables before the session is closed, so it's useful for clearing state for potentially long running sessions.\n\n", "heading1": "Automatic deletion of `gr.State`", "source_page_url": "https://gradio.app/guides/resource-cleanup", "source_page_title": "Additional Features - Resource Cleanup Guide"}, {"text": "Your Gradio application will save uploaded and generated files to a special directory called the cache directory. Gradio uses a hashing scheme to ensure that duplicate files are not saved to the cache, but over time the size of the cache will grow (especially if your app goes viral \ud83d\ude09).\n\nGradio can periodically clean up the cache for you if you specify the `delete_cache` parameter of `gr.Blocks()`, `gr.Interface()`, or `gr.ChatInterface()`. \nThis parameter is a tuple of the form `[frequency, age]`, both expressed in number of seconds.\nEvery `frequency` seconds, the temporary files created by this Blocks instance will be deleted if more than `age` seconds have passed since the file was created. \nFor example, setting this to (86400, 86400) will delete temporary files every day if they are older than a day old.\nAdditionally, the cache will be deleted entirely when the server restarts.\n\n", "heading1": "Automatic cache cleanup via `delete_cache`", "source_page_url": "https://gradio.app/guides/resource-cleanup", "source_page_title": "Additional Features - Resource Cleanup Guide"}, {"text": "Additionally, Gradio now includes a `Blocks.unload()` event, allowing you to run arbitrary cleanup functions when users disconnect (this does not have a 60 minute delay).\nUnlike other Gradio events, this event does not accept inputs or outputs.\nYou can think of the `unload` event as the opposite of the `load` event.\n\n
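A minimal sketch (the `cleanup` function here is hypothetical; `Blocks.unload()` simply takes the function to run):\n\n```python\nimport gradio as gr\n\ndef cleanup():\n    # Free any per-session resources here (temp files, GPU memory, etc.)\n    print(\"A user disconnected; cleaning up.\")\n\nwith gr.Blocks() as demo:\n    gr.Markdown(\"Welcome!\")\n    # Runs when the user closes or reloads the tab\n    demo.unload(cleanup)\n\ndemo.launch()\n```\n\n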
", "heading1": "The `unload` event", "source_page_url": "https://gradio.app/guides/resource-cleanup", "source_page_title": "Additional Features - Resource Cleanup Guide"}, {"text": "The following demo uses all of these features. When a user visits the page, a special unique directory is created for that user.\nAs the user interacts with the app, images are saved to disk in that special directory.\nWhen the user closes the page, the images created in that session are deleted via the `unload` event.\nThe state and files in the cache are cleaned up automatically as well.\n\n$code_state_cleanup\n$demo_state_cleanup", "heading1": "Putting it all together", "source_page_url": "https://gradio.app/guides/resource-cleanup", "source_page_title": "Additional Features - Resource Cleanup Guide"}, {"text": "1. `GRADIO_SERVER_PORT`\n\n- **Description**: Specifies the port on which the Gradio app will run.\n- **Default**: `7860`\n- **Example**:\n  ```bash\n  export GRADIO_SERVER_PORT=8000\n  ```\n\n2. `GRADIO_SERVER_NAME`\n\n- **Description**: Defines the host name for the Gradio server. To make Gradio accessible from any IP address, set this to `\"0.0.0.0\"`\n- **Default**: `\"127.0.0.1\"` \n- **Example**:\n  ```bash\n  export GRADIO_SERVER_NAME=\"0.0.0.0\"\n  ```\n\n3. `GRADIO_NUM_PORTS`\n\n- **Description**: Defines the number of ports to try when starting the Gradio server.\n- **Default**: `100`\n- **Example**:\n  ```bash\n  export GRADIO_NUM_PORTS=200\n  ```\n\n4. `GRADIO_ANALYTICS_ENABLED`\n\n- **Description**: Whether Gradio should collect usage analytics.\n- **Default**: `\"True\"`\n- **Options**: `\"True\"`, `\"False\"`\n- **Example**:\n  ```sh\n  export GRADIO_ANALYTICS_ENABLED=\"True\"\n  ```\n\n5. `GRADIO_DEBUG`\n\n- **Description**: Enables or disables debug mode in Gradio. If debug mode is enabled, the main thread does not terminate, allowing error messages to be printed in environments such as Google Colab.\n- **Default**: `0`\n- **Example**:\n  ```sh\n  export GRADIO_DEBUG=1\n  ```\n\n6. `GRADIO_FLAGGING_MODE`\n\n- **Description**: Controls whether users can flag inputs/outputs in the Gradio interface. See [the Guide on flagging](/guides/using-flagging) for more details.\n- **Default**: `\"manual\"`\n- **Options**: `\"never\"`, `\"manual\"`, `\"auto\"`\n- **Example**:\n  ```sh\n  export GRADIO_FLAGGING_MODE=\"never\"\n  ```\n\n7. `GRADIO_TEMP_DIR`\n\n- **Description**: Specifies the directory where temporary files created by Gradio are stored.\n- **Default**: System default temporary directory\n- **Example**:\n  ```sh\n  export GRADIO_TEMP_DIR=\"/path/to/temp\"\n  ```\n\n8. `GRADIO_ROOT_PATH`\n\n- **Description**: Sets the root path for the Gradio application. Useful if running Gradio [behind a reverse proxy](/guides/running-gradio-on-your-web-server-with-nginx).\n- **Default**: `\"\"`\n- **Example**:\n  ```sh\n  export GRADIO_ROOT_PATH=
Multiple items can be specified by separating items with commas.\n- **Default**: `\"\"`\n- **Example**:\n  ```sh\n  export GRADIO_ALLOWED_PATHS=\"/mnt/sda1,/mnt/sda2\"\n  ```\n\n11. `GRADIO_BLOCKED_PATHS`\n\n- **Description**: Sets a list of complete filepaths or parent directories that gradio is not allowed to serve (i.e. users of your app are not allowed to access). Must be absolute paths. Warning: takes precedence over `allowed_paths` and all other directories exposed by Gradio by default. Multiple items can be specified by separating items with commas.\n- **Default**: `\"\"`\n- **Example**:\n  ```sh\n  export GRADIO_BLOCKED_PATHS=\"/users/x/gradio_app/admin,/users/x/gradio_app/keys\"\n  ```\n\n12. `FORWARDED_ALLOW_IPS`\n\n- **Description**: This is not a Gradio-specific environment variable, but rather one used in server configurations, specifically `uvicorn` which is used by Gradio internally. This environment variable is useful when deploying applications behind a reverse proxy. It defines a list of IP addresses that are trusted to forward traffic to your application. When set, the application will trust the `X-Forwarded-For` header from these IP addresses to determine the original IP address of the user making the request. This means that if you use the `gr.Request` [objec", "heading1": "Key Environment Variables", "source_page_url": "https://gradio.app/guides/environment-variables", "source_page_title": "Additional Features - Environment Variables Guide"}, {"text": " the application will trust the `X-Forwarded-For` header from these IP addresses to determine the original IP address of the user making the request. This means that if you use the `gr.Request` [object's](https://www.gradio.app/docs/gradio/request) `client.host` property, it will correctly get the user's IP address instead of the IP address of the reverse proxy server. Note that only trusted IP addresses (i.e. the IP addresses of your reverse proxy servers) should be added, as any server with these IP addresses can modify the `X-Forwarded-For` header and spoof the client's IP address.\n- **Default**: `\"127.0.0.1\"`\n- **Example**:\n  ```sh\n  export FORWARDED_ALLOW_IPS=\"127.0.0.1,192.168.1.100\"\n  ```\n\n13. `GRADIO_CACHE_EXAMPLES`\n\n- **Description**: Whether or not to cache examples by default in `gr.Interface()`, `gr.ChatInterface()` or in `gr.Examples()` when no explicit argument is passed for the `cache_examples` parameter. You can set this environment variable to either the string \"true\" or \"false\".\n- **Default**: `\"false\"`\n- **Example**:\n  ```sh\n  export GRADIO_CACHE_EXAMPLES=\"true\"\n  ```\n\n\n14. `GRADIO_CACHE_MODE`\n\n- **Description**: How to cache examples. Only applies if `cache_examples` is set to `True` either via environment variable or by an explicit parameter, AND no explicit argument is passed for the `cache_mode` parameter in `gr.Interface()`, `gr.ChatInterface()` or in `gr.Examples()`. Can be set to either \"lazy\" or \"eager\". If \"lazy\", examples are cached after their first use for all users of the app. If \"eager\", all examples are cached at app launch.\n\n- **Default**: `\"eager\"`\n- **Example**:\n  ```sh\n  export GRADIO_CACHE_MODE=\"lazy\"\n  ```\n\n\n15. `GRADIO_EXAMPLES_CACHE`\n\n- **Description**: If you set `cache_examples=True` in `gr.Interface()`, `gr.ChatInterface()` or in `gr.Examples()`, Gradio will run your prediction function and save the results to disk.
By default, this is in the `.gradio/cached_examples/` subdirectory within your", "heading1": "Key Environment Variables", "source_page_url": "https://gradio.app/guides/environment-variables", "source_page_title": "Additional Features - Environment Variables Guide"}, {"text": "e()`, `gr.ChatInterface()` or in `gr.Examples()`, Gradio will run your prediction function and save the results to disk. By default, this is in the `.gradio/cached_examples/` subdirectory within your app's working directory. You can customize the location of cached example files created by Gradio by setting the environment variable `GRADIO_EXAMPLES_CACHE` to an absolute path or a path relative to your working directory.\n- **Default**: `\".gradio/cached_examples/\"`\n- **Example**:\n  ```sh\n  export GRADIO_EXAMPLES_CACHE=\"custom_cached_examples/\"\n  ```\n\n\n16. `GRADIO_SSR_MODE`\n\n- **Description**: Controls whether server-side rendering (SSR) is enabled. When enabled, the initial HTML is rendered on the server rather than the client, which can improve initial page load performance and SEO.\n\n- **Default**: `\"False\"` (except on Hugging Face Spaces, where this environment variable sets it to `True`)\n- **Options**: `\"True\"`, `\"False\"`\n- **Example**:\n  ```sh\n  export GRADIO_SSR_MODE=\"True\"\n  ```\n\n17. `GRADIO_NODE_SERVER_NAME`\n\n- **Description**: Defines the host name for the Gradio node server. (Only applies if `ssr_mode` is set to `True`.)\n- **Default**: `GRADIO_SERVER_NAME` if it is set, otherwise `\"127.0.0.1\"`\n- **Example**:\n  ```sh\n  export GRADIO_NODE_SERVER_NAME=\"0.0.0.0\"\n  ```\n\n18. `GRADIO_NODE_NUM_PORTS`\n\n- **Description**: Defines the number of ports to try when starting the Gradio node server. (Only applies if `ssr_mode` is set to `True`.)\n- **Default**: `100`\n- **Example**:\n  ```sh\n  export GRADIO_NODE_NUM_PORTS=200\n  ```\n\n19. `GRADIO_RESET_EXAMPLES_CACHE`\n\n- **Description**: If set to \"True\", Gradio will delete and recreate the examples cache directory when the app starts instead of reusing the cached examples if they already exist. \n- **Default**: `\"False\"`\n- **Options**: `\"True\"`, `\"False\"`\n- **Example**:\n  ```sh\n  export GRADIO_RESET_EXAMPLES_CACHE=\"True\"\n  ```\n\n20. `GRADIO_CHAT_FLAGGING_MODE`\n\n- **Description**: Controls whether users can flag
`GRADIO_VIBE_MODE`\n\n- **Description**: Enables the Vibe editor mode, which provides an in-browser chat that can be used to write or edit your Gradio app using natural language. When enabled, anyone who can access the Gradio endpoint can modify files and run arbitrary code on the host machine. Use with extreme caution in production environments.\n- **Default**: `\"\"`\n- **Options**: Any non-empty string enables the mode\n- **Example**:\n ```sh\n export GRADIO_VIBE_MODE=\"1\"\n ```\n\n\n\n", "heading1": "Key Environment Variables", "source_page_url": "https://gradio.app/guides/environment-variables", "source_page_title": "Additional Features - Environment Variables Guide"}, {"text": "To set environment variables in your terminal, use the `export` command followed by the variable name and its value. For example:\n\n```sh\nexport GRADIO_SERVER_PORT=8000\n```\n\nIf you're using a `.env` file to manage your environment variables, you can add them like this:\n\n```sh\nGRADIO_SERVER_PORT=8000\nGRADIO_SERVER_NAME=\"localhost\"\n```\n\nThen, use a tool like `dotenv` to load these variables when running your application.\n\n\n\n", "heading1": "How to Set Environment Variables", "source_page_url": "https://gradio.app/guides/environment-variables", "source_page_title": "Additional Features - Environment Variables Guide"}, {"text": "**Prerequisite**: Gradio requires [Python 3.10 or higher](https://www.python.org/downloads/).\n\n\nWe recommend installing Gradio using `pip`, which is included by default in Python. Run this in your terminal or command prompt:\n\n```bash\npip install --upgrade gradio\n```\n\n\nTip: It is best to install Gradio in a virtual environment. Detailed installation instructions for all common operating systems are provided here. \n\n", "heading1": "Installation", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": "You can run Gradio in your favorite code editor, Jupyter notebook, Google Colab, or anywhere else you write Python. Let's write your first Gradio app:\n\n\n$code_hello_world_4\n\n\nTip: We shorten the imported name from gradio to gr. This is a widely adopted convention for better readability of code. \n\nNow, run your code. If you've written the Python code in a file named `app.py`, then you would run `python app.py` from the terminal.\n\nThe demo below will open in a browser on [http://localhost:7860](http://localhost:7860) if running from a file. If you are running within a notebook, the demo will appear embedded within the notebook.\n\n$demo_hello_world_4\n\nType your name in the textbox on the left, drag the slider, and then press the Submit button. You should see a friendly greeting on the right.\n\nTip: When developing locally, you can run your Gradio app in hot reload mode, which automatically reloads the Gradio app whenever you make changes to the file. To do this, simply type in gradio before the name of the file instead of python. In the example above, you would type: `gradio app.py` in your terminal. You can also enable vibe mode by using the --vibe flag, e.g. gradio --vibe app.py, which provides an in-browser chat that can be used to write or edit your Gradio app using natural language. Learn more in the Hot Reloading Guide.\n\n\n**Understanding the `Interface` Class**\n\nYou'll notice that in order to make your first demo, you created an instance of the `gr.Interface` class. 
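For reference, here is a minimal sketch of such an instance (the `greet` function is illustrative, in the spirit of the demo above):\n\n```python\nimport gradio as gr\n\ndef greet(name, intensity):\n    return \"Hello, \" + name + \"!\" * int(intensity)\n\ndemo = gr.Interface(fn=greet, inputs=[\"textbox\", \"slider\"], outputs=\"textbox\")\ndemo.launch()\n```\n\n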
The `Interface` class is designed to create demos for machine learning models which accept one or more inputs, and return one or more outputs. \n\nThe `Interface` class has three core arguments:\n\n- `fn`: the function to wrap a user interface (UI) around\n- `inputs`: the Gradio component(s) to use for the input. The num", "heading1": "Building Your First Demo", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": "turn one or more outputs. \n\nThe `Interface` class has three core arguments:\n\n- `fn`: the function to wrap a user interface (UI) around\n- `inputs`: the Gradio component(s) to use for the input. The number of components should match the number of arguments in your function.\n- `outputs`: the Gradio component(s) to use for the output. The number of components should match the number of return values from your function.\n\nThe `fn` argument is very flexible -- you can pass *any* Python function that you want to wrap with a UI. In the example above, we saw a relatively simple function, but the function could be anything from a music generator to a tax calculator to the prediction function of a pretrained machine learning model.\n\nThe `inputs` and `outputs` arguments take one or more Gradio components. As we'll see, Gradio includes more than [30 built-in components](https://www.gradio.app/docs/gradio/introduction) (such as the `gr.Textbox()`, `gr.Image()`, and `gr.HTML()` components) that are designed for machine learning applications. \n\nTip: For the `inputs` and `outputs` arguments, you can pass in the name of these components as a string (`\"textbox\"`) or an instance of the class (`gr.Textbox()`).\n\nIf your function accepts more than one argument, as is the case above, pass a list of input components to `inputs`, with each input component corresponding to one of the arguments of the function, in order. The same holds true if your function returns more than one value: simply pass in a list of components to `outputs`. This flexibility makes the `Interface` class a very powerful way to create demos.\n\nWe'll dive deeper into the `gr.Interface` on our series on [building Interfaces](https://www.gradio.app/main/guides/the-interface-class).\n\n", "heading1": "Building Your First Demo", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": "What good is a beautiful demo if you can't share it? Gradio lets you easily share a machine learning demo without having to worry about the hassle of hosting on a web server. Simply set `share=True` in `launch()`, and a publicly accessible URL will be created for your demo. 
Let's revisit our example demo, but change the last line as follows:\n\n```python\nimport gradio as gr\n\ndef greet(name):\n    return \"Hello \" + name + \"!\"\n\ndemo = gr.Interface(fn=greet, inputs=\"textbox\", outputs=\"textbox\")\n\ndemo.launch(share=True)  # Share your demo with just 1 extra parameter \ud83d\ude80\n```\n\nWhen you run this code, a public URL will be generated for your demo in a matter of seconds, something like:\n\n\ud83d\udc49 &nbsp; `https://a23dsf231adb.gradio.live`\n\nNow, anyone around the world can try your Gradio demo from their browser, while the machine learning model and all computation continues to run locally on your computer.\n\nTo learn more about sharing your demo, read our dedicated guide on [sharing your Gradio application](https://www.gradio.app/guides/sharing-your-app).\n\n\n", "heading1": "Sharing Your Demo", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": "So far, we've been discussing the `Interface` class, which is a high-level class that lets you build demos quickly with Gradio. But what else does Gradio include?\n\nCustom Demos with `gr.Blocks`\n\nGradio offers a low-level approach for designing web apps with more customizable layouts and data flows with the `gr.Blocks` class. Blocks supports things like controlling where components appear on the page, handling multiple data flows and more complex interactions (e.g. outputs can serve as inputs to other functions), and updating properties/visibility of components based on user interaction \u2014 still all in Python. \n\nYou can build very custom and complex applications using `gr.Blocks()`. For example, the popular image generation [Automatic1111 Web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) is built using Gradio Blocks. We dive deeper into `gr.Blocks` in our series on [building with Blocks](https://www.gradio.app/guides/blocks-and-event-listeners).\n\nChatbots with `gr.ChatInterface`\n\nGradio includes another high-level class, `gr.ChatInterface`, which is specifically designed to create Chatbot UIs. Similar to `Interface`, you supply a function and Gradio creates a fully working Chatbot UI. If you're interested in creating a chatbot, you can jump straight to [our dedicated guide on `gr.ChatInterface`](https://www.gradio.app/guides/creating-a-chatbot-fast).\n\nThe Gradio Python & JavaScript Ecosystem\n\nThat's the gist of the core `gradio` Python library, but Gradio is actually so much more! It's an entire ecosystem of Python and JavaScript libraries that let you build machine learning applications, or query them programmatically, in Python or JavaScript.
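As a quick sketch of the client side (assuming the simple greeting demo above is running locally; `/predict` is the default endpoint name for an `Interface`):\n\n```python\nfrom gradio_client import Client\n\nclient = Client(\"http://127.0.0.1:7860/\")\nresult = client.predict(\"World\", api_name=\"/predict\")\nprint(result)  # \"Hello World!\"\n```\n\n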
Here are other related parts of the Gradio ecosystem:\n\n* [Gradio Python Client](https://www.gradio.app/guides/getting-started-with-the-python-client) (`gradio_client`): query any Gradio app programmatically in Python.\n* [Gradio JavaScript Client](https://www.gradio.app/guides/getting-started-with-t", "heading1": "An Overview of Gradio", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": "app/guides/getting-started-with-the-python-client) (`gradio_client`): query any Gradio app programmatically in Python.\n* [Gradio JavaScript Client](https://www.gradio.app/guides/getting-started-with-the-js-client) (`@gradio/client`): query any Gradio app programmatically in JavaScript.\n* [Gradio-Lite](https://www.gradio.app/guides/gradio-lite) (`@gradio/lite`): write Gradio apps in Python that run entirely in the browser (no server needed!), thanks to Pyodide. \n* [Hugging Face Spaces](https://huggingface.co/spaces): the most popular place to host Gradio applications \u2014 for free!\n\n", "heading1": "An Overview of Gradio", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": "Keep learning about Gradio sequentially using the Gradio Guides, which include explanations as well as example code and embedded interactive demos. Next up: [let's dive deeper into the Interface class](https://www.gradio.app/guides/the-interface-class).\n\nOr, if you already know the basics and are looking for something specific, you can search the more [technical API documentation](https://www.gradio.app/docs/).\n\n\n", "heading1": "What's Next?", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": "You can also build Gradio applications without writing any code. Simply type `gradio sketch` into your terminal to open up an editor that lets you define and modify Gradio components, adjust their layouts, add events, all through a web editor. Or [use this hosted version of Gradio Sketch, running on Hugging Face Spaces](https://huggingface.co/spaces/aliabid94/Sketch).", "heading1": "Gradio Sketch", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": "The Model Context Protocol (MCP) standardizes how applications provide context to LLMs. It allows Claude to interact with external tools, like image generators, file systems, or APIs, etc.\n\n", "heading1": "What is MCP?", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": "- Python 3.10+\n- An Anthropic API key\n- Basic understanding of Python programming\n\n", "heading1": "Prerequisites", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": "First, install the required packages:\n\n```bash\npip install gradio anthropic mcp\n```\n\nCreate a `.env` file in your project directory and add your Anthropic API key:\n\n```\nANTHROPIC_API_KEY=your_api_key_here\n```\n\n", "heading1": "Setup", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": "The server provides tools that Claude can use. 
In this example, we'll create a server that generates images through [a HuggingFace space](https://huggingface.co/spaces/ysharma/SanaSprint).\n\nCreate a file named `gradio_mcp_server.py`:\n\n```python\nfrom mcp.server.fastmcp import FastMCP\nimport json\nimport sys\nimport io\nimport time\nfrom gradio_client import Client\n\nsys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8', errors='replace')\nsys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding='utf-8', errors='replace')\n\nmcp = FastMCP(\"huggingface_spaces_image_display\")\n\n@mcp.tool()\nasync def generate_image(prompt: str, width: int = 512, height: int = 512) -> str:\n \"\"\"Generate an image using SanaSprint model.\n \n Args:\n prompt: Text prompt describing the image to generate\n width: Image width (default: 512)\n height: Image height (default: 512)\n \"\"\"\n client = Client(\"https://ysharma-sanasprint.hf.space/\")\n \n try:\n result = client.predict(\n prompt,\n \"0.6B\",\n 0,\n True,\n width,\n height,\n 4.0,\n 2,\n api_name=\"/infer\"\n )\n \n if isinstance(result, list) and len(result) >= 1:\n image_data = result[0]\n if isinstance(image_data, dict) and \"url\" in image_data:\n return json.dumps({\n \"type\": \"image\",\n \"url\": image_data[\"url\"],\n \"message\": f\"Generated image for prompt: {prompt}\"\n })\n \n return json.dumps({\n \"type\": \"error\",\n \"message\": \"Failed to generate image\"\n })\n \n except Exception as e:\n return json.dumps({\n \"type\": \"error\",\n \"message\": f\"Error generating image: {str(e)}\"\n })\n\nif __name__ == \"__main__\":\n mcp.run(transport='stdio')\n```\n\nWhat this server does:\n\n1. It creates an MCP server that exposes a `gene", "heading1": "Part 1: Building the MCP Server", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": " \"message\": f\"Error generating image: {str(e)}\"\n })\n\nif __name__ == \"__main__\":\n mcp.run(transport='stdio')\n```\n\nWhat this server does:\n\n1. It creates an MCP server that exposes a `generate_image` tool\n2. The tool connects to the SanaSprint model hosted on HuggingFace Spaces\n3. It handles the asynchronous nature of image generation by polling for results\n4. 
When an image is ready, it returns the URL in a structured JSON format\n\n", "heading1": "Part 1: Building the MCP Server", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": "Now let's create a Gradio chat interface as MCP Client that connects Claude to our MCP server.\n\nCreate a file named `app.py`:\n\n```python\nimport asyncio\nimport os\nimport json\nfrom typing import List, Dict, Any, Union\nfrom contextlib import AsyncExitStack\n\nimport gradio as gr\nfrom gradio.components.chatbot import ChatMessage\nfrom mcp import ClientSession, StdioServerParameters\nfrom mcp.client.stdio import stdio_client\nfrom anthropic import Anthropic\nfrom dotenv import load_dotenv\n\nload_dotenv()\n\nloop = asyncio.new_event_loop()\nasyncio.set_event_loop(loop)\n\nclass MCPClientWrapper:\n def __init__(self):\n self.session = None\n self.exit_stack = None\n self.anthropic = Anthropic()\n self.tools = []\n \n def connect(self, server_path: str) -> str:\n return loop.run_until_complete(self._connect(server_path))\n \n async def _connect(self, server_path: str) -> str:\n if self.exit_stack:\n await self.exit_stack.aclose()\n \n self.exit_stack = AsyncExitStack()\n \n is_python = server_path.endswith('.py')\n command = \"python\" if is_python else \"node\"\n \n server_params = StdioServerParameters(\n command=command,\n args=[server_path],\n env={\"PYTHONIOENCODING\": \"utf-8\", \"PYTHONUNBUFFERED\": \"1\"}\n )\n \n stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))\n self.stdio, self.write = stdio_transport\n \n self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write))\n await self.session.initialize()\n \n response = await self.session.list_tools()\n self.tools = [{ \n \"name\": tool.name,\n \"description\": tool.description,\n \"input_schema\": tool.inputSchema\n } for tool in response.tools]\n \n tool_names = [tool[\"name\"] for tool in self.tools]\n return f\"Connected to MCP server.", "heading1": "Part 2: Building the MCP Client with Gradio", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": "iption,\n \"input_schema\": tool.inputSchema\n } for tool in response.tools]\n \n tool_names = [tool[\"name\"] for tool in self.tools]\n return f\"Connected to MCP server. 
Available tools: {', '.join(tool_names)}\"\n \n def process_message(self, message: str, history: List[Union[Dict[str, Any], ChatMessage]]) -> tuple:\n if not self.session:\n return history + [\n {\"role\": \"user\", \"content\": message}, \n {\"role\": \"assistant\", \"content\": \"Please connect to an MCP server first.\"}\n ], gr.Textbox(value=\"\")\n \n new_messages = loop.run_until_complete(self._process_query(message, history))\n return history + [{\"role\": \"user\", \"content\": message}] + new_messages, gr.Textbox(value=\"\")\n \n async def _process_query(self, message: str, history: List[Union[Dict[str, Any], ChatMessage]]):\n claude_messages = []\n for msg in history:\n if isinstance(msg, ChatMessage):\n role, content = msg.role, msg.content\n else:\n role, content = msg.get(\"role\"), msg.get(\"content\")\n \n if role in [\"user\", \"assistant\", \"system\"]:\n claude_messages.append({\"role\": role, \"content\": content})\n \n claude_messages.append({\"role\": \"user\", \"content\": message})\n \n response = self.anthropic.messages.create(\n model=\"claude-3-5-sonnet-20241022\",\n max_tokens=1000,\n messages=claude_messages,\n tools=self.tools\n )\n\n result_messages = []\n \n for content in response.content:\n if content.type == 'text':\n result_messages.append({\n \"role\": \"assistant\", \n \"content\": content.text\n })\n \n elif content.type == 'tool_use':\n tool_name = content.name\n tool_args = content.input\n ", "heading1": "Part 2: Building the MCP Client with Gradio", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": "ntent\": content.text\n })\n \n elif content.type == 'tool_use':\n tool_name = content.name\n tool_args = content.input\n \n result_messages.append({\n \"role\": \"assistant\",\n \"content\": f\"I'll use the {tool_name} tool to help answer your question.\",\n \"metadata\": {\n \"title\": f\"Using tool: {tool_name}\",\n \"log\": f\"Parameters: {json.dumps(tool_args, ensure_ascii=True)}\",\n \"status\": \"pending\",\n \"id\": f\"tool_call_{tool_name}\"\n }\n })\n \n result_messages.append({\n \"role\": \"assistant\",\n \"content\": \"```json\\n\" + json.dumps(tool_args, indent=2, ensure_ascii=True) + \"\\n```\",\n \"metadata\": {\n \"parent_id\": f\"tool_call_{tool_name}\",\n \"id\": f\"params_{tool_name}\",\n \"title\": \"Tool Parameters\"\n }\n })\n \n result = await self.session.call_tool(tool_name, tool_args)\n \n if result_messages and \"metadata\" in result_messages[-2]:\n result_messages[-2][\"metadata\"][\"status\"] = \"done\"\n \n result_messages.append({\n \"role\": \"assistant\",\n \"content\": \"Here are the results from the tool:\",\n \"metadata\": {\n \"title\": f\"Tool Result for {tool_name}\",\n \"status\": \"done\",\n \"id\": f\"result_{tool_name}\"\n }\n })\n \n result_content = result.content\n if isinstance(result_content, list):\n result_content = \"\\n\".join(str(item) for item in re", "heading1": "Part 2: Building the MCP Client with Gradio", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": " })\n \n result_content = result.content\n if isinstance(result_content, list):\n result_content = \"\\n\".join(str(item) for item in result_content)\n \n try:\n result_json = json.loads(result_content)\n if isinstance(result_json, dict) and \"type\" in result_json:\n if result_json[\"type\"] == \"image\" and \"url\" in 
result_json:\n result_messages.append({\n \"role\": \"assistant\",\n \"content\": {\"path\": result_json[\"url\"], \"alt_text\": result_json.get(\"message\", \"Generated image\")},\n \"metadata\": {\n \"parent_id\": f\"result_{tool_name}\",\n \"id\": f\"image_{tool_name}\",\n \"title\": \"Generated Image\"\n }\n })\n else:\n result_messages.append({\n \"role\": \"assistant\",\n \"content\": \"```\\n\" + result_content + \"\\n```\",\n \"metadata\": {\n \"parent_id\": f\"result_{tool_name}\",\n \"id\": f\"raw_result_{tool_name}\",\n \"title\": \"Raw Output\"\n }\n })\n except:\n result_messages.append({\n \"role\": \"assistant\",\n \"content\": \"```\\n\" + result_content + \"\\n```\",\n \"metadata\": {\n \"parent_id\": f\"result_{tool_name}\",\n \"id\": f\"raw_result_{tool_name}\",\n \"title\": \"Raw Output\"\n }\n })\n ", "heading1": "Part 2: Building the MCP Client with Gradio", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": " \"parent_id\": f\"result_{tool_name}\",\n \"id\": f\"raw_result_{tool_name}\",\n \"title\": \"Raw Output\"\n }\n })\n \n claude_messages.append({\"role\": \"user\", \"content\": f\"Tool result for {tool_name}: {result_content}\"})\n next_response = self.anthropic.messages.create(\n model=\"claude-3-5-sonnet-20241022\",\n max_tokens=1000,\n messages=claude_messages,\n )\n \n if next_response.content and next_response.content[0].type == 'text':\n result_messages.append({\n \"role\": \"assistant\",\n \"content\": next_response.content[0].text\n })\n\n return result_messages\n\nclient = MCPClientWrapper()\n\ndef gradio_interface():\n with gr.Blocks(title=\"MCP Weather Client\") as demo:\n gr.Markdown(\"MCP Weather Assistant\")\n gr.Markdown(\"Connect to your MCP weather server and chat with the assistant\")\n \n with gr.Row(equal_height=True):\n with gr.Column(scale=4):\n server_path = gr.Textbox(\n label=\"Server Script Path\",\n placeholder=\"Enter path to server script (e.g., weather.py)\",\n value=\"gradio_mcp_server.py\"\n )\n with gr.Column(scale=1):\n connect_btn = gr.Button(\"Connect\")\n \n status = gr.Textbox(label=\"Connection Status\", interactive=False)\n \n chatbot = gr.Chatbot(\n value=[], \n height=500,\n type=\"messages\",\n show_copy_button=True,\n avatar_images=(\"\ud83d\udc64\", \"\ud83e\udd16\")\n )\n \n with gr.Row(equal_height=True):\n msg = gr.Textbox(\n label=\"Your Question\",\n placeholder=\"Ask about weather or alerts (e.g., What's the weath", "heading1": "Part 2: Building the MCP Client with Gradio", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": ")\n \n with gr.Row(equal_height=True):\n msg = gr.Textbox(\n label=\"Your Question\",\n placeholder=\"Ask about weather or alerts (e.g., What's the weather in New York?)\",\n scale=4\n )\n clear_btn = gr.Button(\"Clear Chat\", scale=1)\n \n connect_btn.click(client.connect, inputs=server_path, outputs=status)\n msg.submit(client.process_message, [msg, chatbot], [chatbot, msg])\n clear_btn.click(lambda: [], None, chatbot)\n \n return demo\n\nif __name__ == \"__main__\":\n if not os.getenv(\"ANTHROPIC_API_KEY\"):\n print(\"Warning: ANTHROPIC_API_KEY not found in environment. 
Please set it in your .env file.\")\n        \n    interface = gradio_interface()\n    interface.launch(debug=True)\n```\n\nWhat this MCP Client does:\n\n- Creates a friendly Gradio chat interface for user interaction\n- Connects to the MCP server you specify\n- Handles conversation history and message formatting\n- Makes calls to the Claude API with tool definitions\n- Processes tool usage requests from Claude\n- Displays images and other tool outputs in the chat\n- Sends tool results back to Claude for interpretation\n\n", "heading1": "Part 2: Building the MCP Client with Gradio", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": "To run your MCP application:\n\n- Start a terminal window and run the MCP Client:\n  ```bash\n  python app.py\n  ```\n- Open the Gradio interface at the URL shown (typically http://127.0.0.1:7860)\n- In the Gradio interface, you'll see a field for the MCP Server path. It should default to `gradio_mcp_server.py`.\n- Click \"Connect\" to establish the connection to the MCP server.\n- You should see a message indicating the server connection was successful.\n\n", "heading1": "Running the Application", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": "Now you can chat with Claude and it will be able to generate images based on your descriptions.\n\nTry prompts like:\n- \"Can you generate an image of a mountain landscape at sunset?\"\n- \"Create an image of a cool tabby cat\"\n- \"Generate a picture of a panda wearing sunglasses\"\n\nClaude will recognize these as image generation requests and automatically use the `generate_image` tool from your MCP server.\n\n\n", "heading1": "Example Usage", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": "Here's the high-level flow of what happens during a chat session:\n\n1. Your prompt enters the Gradio interface\n2. The client forwards your prompt to Claude\n3. Claude analyzes the prompt and decides to use the `generate_image` tool\n4. The client sends the tool call to the MCP server\n5. The server calls the external image generation API\n6. The image URL is returned to the client\n7. The client sends the image URL back to Claude\n8. Claude provides a response that references the generated image\n9. The Gradio chat interface displays both Claude's response and the image\n\n\n", "heading1": "How it Works", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": "Now that you have a working MCP system, here are some ideas to extend it:\n\n- Add more tools to your server\n- Improve error handling \n- Add private Hugging Face Spaces with authentication for secure tool access\n- Create custom tools that connect to your own APIs or services\n- Implement streaming responses for better user experience\n\n", "heading1": "Next Steps", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": "Congratulations! You've successfully built an MCP Client and Server that allows Claude to generate images based on text prompts. This is just the beginning of what you can do with Gradio and MCP. 
This guide enables you to build complex AI applications that can use Claude or any other powerful LLM to interact with virtually any external tool or service.\n\nRead our other Guide on using [Gradio apps as MCP Servers](./building-mcp-server-with-gradio).\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": "As of version 5.36.0, Gradio now comes with a built-in MCP server that can upload files to a running Gradio application. In the `View API` page of the server, you should see the following code snippet if any of the tools require file inputs:\n\n\n\nThe command to start the MCP server takes two arguments:\n\n- The URL (or Hugging Face space id) of the gradio application to upload the files to. In this case, `http://127.0.0.1:7860`.\n- The local directory on your computer from which the server is allowed to upload files (``). For security, please make this directory as narrow as possible to prevent unintended file uploads.\n\nAs stated in the image, you need to install [uv](https://docs.astral.sh/uv/getting-started/installation/) (a Python package manager that can run Python scripts) before connecting from your MCP client. \n\nIf you have gradio installed locally and you don't want to install uv, you can replace the `uvx` command with the path to the gradio binary. It should look like this:\n\n```json\n\"upload-files\": {\n    \"command\": \"\",\n    \"args\": [\n        \"upload-mcp\",\n        \"http://localhost:7860/\",\n        \"/Users/freddyboulton/Pictures\"\n    ]\n}\n```\n\nAfter connecting to the upload server, your LLM agent will know when to upload files for you automatically!\n\n\n\n", "heading1": "Using the File Upload MCP Server", "source_page_url": "https://gradio.app/guides/file-upload-mcp", "source_page_title": "Mcp - File Upload Mcp Guide"}, {"text": "In this guide, we've covered how you can connect to the Upload File MCP Server so that your agent can upload files before using Gradio MCP servers. Remember to set the `` as small as possible to prevent unintended file uploads!\n\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/file-upload-mcp", "source_page_title": "Mcp - File Upload Mcp Guide"}, {"text": "An MCP (Model Context Protocol) server is a standardized way to expose tools so that they can be used by LLMs. A tool can provide an LLM functionality that it does not have natively, such as the ability to generate images or calculate the prime factors of a number. \n\n", "heading1": "What is an MCP Server?", "source_page_url": "https://gradio.app/guides/building-mcp-server-with-gradio", "source_page_title": "Mcp - Building Mcp Server With Gradio Guide"}, {"text": "LLMs are famously not great at counting the number of letters in a word (e.g. the number of \"r\"-s in \"strawberry\"). But what if we equip them with a tool to help? Let's start by writing a simple Gradio app that counts the number of letters in a word or phrase:\n\n$code_letter_counter\n\nNotice that we have: (1) included a detailed docstring for our function, and (2) set `mcp_server=True` in `.launch()`. This is all that's needed for your Gradio app to serve as an MCP server! Now, when you run this app, it will:\n\n1. Start the regular Gradio web interface\n2. Start the MCP server\n3. 
Print the MCP server URL in the console\n\nThe MCP server will be accessible at:\n```\nhttp://your-server:port/gradio_api/mcp/sse\n```\n\nGradio automatically converts the `letter_counter` function into an MCP tool that can be used by LLMs. The docstring of the function and the type hints of arguments will be used to generate the description of the tool and its parameters. The name of the function will be used as the name of your tool. Any initial values you provide to your input components (e.g. \"strawberry\" and \"r\" in the `gr.Textbox` components above) will be used as the default values if your LLM doesn't specify a value for that particular input parameter.\n\nNow, all you need to do is add this URL endpoint to your MCP Client (e.g. Claude Desktop, Cursor, or Cline), which typically means pasting this config in the settings:\n\n```\n{\n  \"mcpServers\": {\n    \"gradio\": {\n      \"url\": \"http://your-server:port/gradio_api/mcp/sse\"\n    }\n  }\n}\n```\n\n(By the way, you can find the exact config to copy-paste by going to the \"View API\" link in the footer of your Gradio app, and then clicking on \"MCP\").\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api-mcp.png)\n\n", "heading1": "Example: Counting Letters in a Word", "source_page_url": "https://gradio.app/guides/building-mcp-server-with-gradio", "source_page_title": "Mcp - Building Mcp Server With Gradio Guide"}, {"text": "1. **Tool Conversion**: Each API endpoint in your Gradio app is automatically converted into an MCP tool with a corresponding name, description, and input schema. To view the tools and schemas, visit http://your-server:port/gradio_api/mcp/schema or go to the \"View API\" link in the footer of your Gradio app, and then click on \"MCP\".\n\n\n2. **Environment variable support**. There are two ways to enable the MCP server functionality:\n\n* Using the `mcp_server` parameter, as shown above:\n   ```python\n   demo.launch(mcp_server=True)\n   ```\n\n* Using environment variables:\n   ```bash\n   export GRADIO_MCP_SERVER=True\n   ```\n\n3. **File Handling**: The Gradio MCP server automatically handles file data conversions, including:\n   - Processing image files and returning them in the correct format\n   - Managing temporary file storage\n\n  By default, the Gradio MCP server accepts input images and files as full URLs (\"http://...\" or \"https://...\"). For convenience, an additional STDIO-based MCP server is also generated, which can be used to upload files to any remote Gradio app and which returns a URL that can be used for subsequent tool calls.\n\n4. **Hosted MCP Servers on \ud83e\udd17 Spaces**: You can publish your Gradio application for free on Hugging Face Spaces, which will allow you to have a free hosted MCP server. Here's an example of such a Space: https://huggingface.co/spaces/abidlabs/mcp-tools. Notice that you can add this config to your MCP Client to start using the tools from this Space immediately:\n\n```\n{\n  \"mcpServers\": {\n    \"gradio\": {\n      \"url\": \"https://abidlabs-mcp-tools.hf.space/gradio_api/mcp/sse\"\n    }\n  }\n}\n```\n\n\n\n\n", "heading1": "Key features of the Gradio <> MCP Integration", "source_page_url": "https://gradio.app/guides/building-mcp-server-with-gradio", "source_page_title": "Mcp - Building Mcp Server With Gradio Guide"}, {"text": "If there's an existing Space that you'd like to use as an MCP server, you'll need to do three things:\n\n1. 
First, [duplicate the Space](https://huggingface.co/docs/hub/en/spaces-more-ways-to-create#duplicating-a-space) if it is not your own Space. This will allow you to make changes to the app. If the Space requires a GPU, set the hardware of the duplicated Space to be the same as the original Space. You can make it either a public Space or a private Space, since it is possible to use either as an MCP server, as described below.\n2. Then, add docstrings to the functions that you'd like the LLM to be able to call as a tool. The docstring should be in the same format as the example code above.\n3. Finally, add `mcp_server=True` in `.launch()`.\n\nThat's it!\n\n", "heading1": "Converting an Existing Space", "source_page_url": "https://gradio.app/guides/building-mcp-server-with-gradio", "source_page_title": "Mcp - Building Mcp Server With Gradio Guide"}, {"text": "You can use either a public Space or a private Space as an MCP server. If you'd like to use a private Space as an MCP server (or a ZeroGPU Space with your own quota), then you will need to provide your [Hugging Face token](https://huggingface.co/settings/token) when you make your request. To do this, simply add it as a header in your config like this:\n\n```\n{\n  \"mcpServers\": {\n    \"gradio\": {\n      \"url\": \"https://abidlabs-mcp-tools.hf.space/gradio_api/mcp/sse\",\n      \"headers\": {\n        \"Authorization\": \"Bearer \"\n      }\n    }\n  }\n}\n```\n\n", "heading1": "Private Spaces", "source_page_url": "https://gradio.app/guides/building-mcp-server-with-gradio", "source_page_title": "Mcp - Building Mcp Server With Gradio Guide"}, {"text": "You may wish to authenticate users more precisely or let them provide other kinds of credentials or tokens in order to provide a custom experience for different users. \n\nGradio allows you to access the underlying `starlette.Request` that has made the tool call, which means that you can access headers, originating IP address, or any other information that is part of the network request. To do this, simply add a parameter in your function of the type `gr.Request`, and Gradio will automatically inject the request object as the parameter.\n\nHere's an example:\n\n```py\nimport gradio as gr\n\ndef echo_headers(x, request: gr.Request):\n    return str(dict(request.headers))\n\ngr.Interface(echo_headers, \"textbox\", \"textbox\").launch(mcp_server=True)\n```\n\nThis MCP server will simply ignore the user's input and echo back all of the headers from a user's request. One can build more complex apps using the same idea. See the [docs on `gr.Request`](https://www.gradio.app/main/docs/gradio/request) for more information (note that only the core Starlette attributes of the `gr.Request` object will be present, attributes such as Gradio's `.session_hash` will not be present).\n\nUsing the gr.Header class\n\nA common pattern in MCP server development is to use authentication headers to call services on behalf of your users. Instead of using a `gr.Request` object like in the example above, you can use a `gr.Header` argument. Gradio will automatically extract that header from the incoming request (if it exists) and pass it to your function.\n\nIn the example below, the `X-API-Token` header is extracted from the incoming request and passed in as the `x_api_token` argument to `make_api_request_on_behalf_of_user`.\n\nThe benefit of using `gr.Header` is that the MCP connection docs will automatically display the headers you need to supply when connecting to the server! 
See the image below:\n\n```python\nimport gradio as gr\n\ndef make_api_request_on_behalf_of_user(prompt: str, x_api_token: gr.Header):\n    \"\"\"M", "heading1": "Authentication and Credentials", "source_page_url": "https://gradio.app/guides/building-mcp-server-with-gradio", "source_page_title": "Mcp - Building Mcp Server With Gradio Guide"}, {"text": "the headers you need to supply when connecting to the server! See the image below:\n\n```python\nimport gradio as gr\n\ndef make_api_request_on_behalf_of_user(prompt: str, x_api_token: gr.Header):\n    \"\"\"Make a request to everyone's favorite API.\n    Args:\n        prompt: The prompt to send to the API.\n    Returns:\n        The response from the API.\n    Raises:\n        AssertionError: If the API token is not valid.\n    \"\"\"\n    return \"Hello from the API\" if not x_api_token else \"Hello from the API with token!\"\n\n\ndemo = gr.Interface(\n    make_api_request_on_behalf_of_user,\n    [\n        gr.Textbox(label=\"Prompt\"),\n    ],\n    gr.Textbox(label=\"Response\"),\n)\n\ndemo.launch(mcp_server=True)\n```\n\n![MCP Header Connection Page](https://github.com/user-attachments/assets/e264eedf-a91a-476b-880d-5be0d5934134)\n\nSending Progress Updates\n\nThe Gradio MCP server automatically sends progress updates to your MCP Client based on the queue in the Gradio application. If you'd like to send custom progress updates, you can do so using the same mechanism as you would use to display progress updates in the UI of your Gradio app: by using the `gr.Progress` class!\n\nHere's an example of how to do this:\n\n$code_mcp_progress\n\n[Here are the docs](https://www.gradio.app/docs/gradio/progress) for the `gr.Progress` class, which can also automatically track `tqdm` calls.\n\n\n", "heading1": "Authentication and Credentials", "source_page_url": "https://gradio.app/guides/building-mcp-server-with-gradio", "source_page_title": "Mcp - Building Mcp Server With Gradio Guide"}, {"text": "Gradio automatically sets the tool name based on the name of your function, and the description from the docstring of your function. But you may want to change how the description appears to your LLM. You can do this by using the `api_description` parameter in `Interface`, `ChatInterface`, or any event listener. This parameter takes three different kinds of values:\n\n* `None` (default): the tool description is automatically created from the docstring of the function (or its parent's docstring if it does not have a docstring but inherits from a method that does.)\n* `False`: no tool description appears to the LLM.\n* `str`: an arbitrary string to use as the tool description.\n\nIn addition to modifying the tool descriptions, you can also toggle which tools appear to the LLM. You can do this by setting the `show_api` parameter, which is by default `True`. Setting it to `False` hides the endpoint from the API docs and from the MCP server. If you expose multiple tools, users of your app will also be able to toggle which tools they'd like to add to their MCP server by checking boxes in the \"view MCP or API\" panel.\n\nHere's an example that shows the `api_description` and `show_api` parameters in action:\n\n$code_mcp_tools\n\n", "heading1": "Modifying Tool Descriptions", "source_page_url": "https://gradio.app/guides/building-mcp-server-with-gradio", "source_page_title": "Mcp - Building Mcp Server With Gradio Guide"}, {"text": "So far, all of our MCP tools have corresponded to event listeners in the UI. 
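For example, a tool backed by a click listener might look like this minimal sketch (the `reverse_text` function and component names here are hypothetical, not from this guide):\n\n```python\nimport gradio as gr\n\ndef reverse_text(text: str) -> str:\n    \"\"\"\n    Reverse the characters in a piece of text.\n\n    Args:\n        text (str): The text to reverse.\n    \"\"\"\n    return text[::-1]\n\nwith gr.Blocks() as demo:\n    inp = gr.Textbox(label=\"Input\")\n    out = gr.Textbox(label=\"Reversed\")\n    btn = gr.Button(\"Reverse\")\n    # The click listener becomes an MCP tool and also updates the UI\n    btn.click(reverse_text, inp, out)\n\ndemo.launch(mcp_server=True)\n```\n\n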
This works well for functions that directly update the UI, but may not work if you wish to expose a \"pure logic\" function that should return raw data (e.g. a JSON object) without directly causing a UI update.\n\nIn order to expose such an MCP tool, you can create a pure Gradio API endpoint using `gr.api` (see [full docs here](https://www.gradio.app/main/docs/gradio/api)). Here's an example of creating an MCP tool that slices a list:\n\n$code_mcp_tool_only\n\nNote that if you use this approach, your function signature must be fully typed, including the return value, as these signatures are used to determine the typing information for the MCP tool.\n\n\n", "heading1": "Adding MCP-Only Tools", "source_page_url": "https://gradio.app/guides/building-mcp-server-with-gradio", "source_page_title": "Mcp - Building Mcp Server With Gradio Guide"}, {"text": "In some cases, you may decide not to use Gradio's built-in integration and instead manually create a FastMCP Server that calls a Gradio app. This approach is useful when you want to:\n\n- Store state / identify users between calls instead of treating every tool call completely independently\n- Start the Gradio app MCP server when a tool is called (if you are running multiple Gradio apps locally and want to save memory / GPU)\n\nThis is very doable thanks to the [Gradio Python Client](https://www.gradio.app/guides/getting-started-with-the-python-client) and the [MCP Python SDK](https://github.com/modelcontextprotocol/python-sdk)'s `FastMCP` class. Here's an example of creating a custom MCP server that connects to various Gradio apps hosted on [HuggingFace Spaces](https://huggingface.co/spaces) using the `stdio` protocol:\n\n```python\nfrom mcp.server.fastmcp import FastMCP\nfrom gradio_client import Client\nimport sys\nimport io\nimport json \n\nmcp = FastMCP(\"gradio-spaces\")\n\nclients = {}\n\ndef get_client(space_id: str) -> Client:\n    \"\"\"Get or create a Gradio client for the specified space.\"\"\"\n    if space_id not in clients:\n        clients[space_id] = Client(space_id)\n    return clients[space_id]\n\n\n@mcp.tool()\nasync def generate_image(prompt: str, space_id: str = \"ysharma/SanaSprint\") -> str:\n    \"\"\"Generate an image using a fast text-to-image model.\n    \n    Args:\n        prompt: Text prompt describing the image to generate\n        space_id: HuggingFace Space ID to use \n    \"\"\"\n    client = get_client(space_id)\n    result = client.predict(\n        prompt=prompt,\n        model_size=\"1.6B\",\n        seed=0,\n        randomize_seed=True,\n        width=1024,\n        height=1024,\n        guidance_scale=4.5,\n        num_inference_steps=2,\n        api_name=\"/infer\"\n    )\n    return result\n\n\n@mcp.tool()\nasync def run_dia_tts(prompt: str, space_id: str = \"ysharma/Dia-1.6B\") -> str:\n    \"\"\"Text-to-Speech Synthesis.\n    \n    Args:\n        prompt: Text prompt describing the co", "heading1": "Gradio with FastMCP", "source_page_url": "https://gradio.app/guides/building-mcp-server-with-gradio", "source_page_title": "Mcp - Building Mcp Server With Gradio Guide"}, {"text": "return result\n\n\n@mcp.tool()\nasync def run_dia_tts(prompt: str, space_id: str = \"ysharma/Dia-1.6B\") -> str:\n    \"\"\"Text-to-Speech Synthesis.\n    \n    Args:\n        prompt: Text prompt describing the conversation between speakers S1, S2\n        space_id: HuggingFace Space ID to use \n    \"\"\"\n    client = get_client(space_id)\n    result = client.predict(\n        text_input=f\"\"\"{prompt}\"\"\",\n        audio_prompt_input=None, \n        max_new_tokens=3072,\n        cfg_scale=3,\n        temperature=1.3,\n        top_p=0.95,\n        cfg_filter_top_k=30,\n        speed_factor=0.94,\n        api_name=\"/generate_audio\"\n    )\n    return result\n\n\nif __name__ 
== \"__main__\":\n    import sys\n    import io\n    sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8')\n    \n    mcp.run(transport='stdio')\n```\n\nThis server exposes two tools:\n1. `run_dia_tts` - Generates a conversation for the given transcript in the form of `[S1]first-sentence. [S2]second-sentence. [S1]...`\n2. `generate_image` - Generates images using a fast text-to-image model\n\nTo use this MCP Server with Claude Desktop (as MCP Client):\n\n1. Save the code to a file (e.g., `gradio_mcp_server.py`)\n2. Install the required dependencies: `pip install mcp gradio-client`\n3. Configure Claude Desktop to use your server by editing the configuration file at `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\\Claude\\claude_desktop_config.json` (Windows):\n\n```json\n{\n    \"mcpServers\": {\n        \"gradio-spaces\": {\n            \"command\": \"python\",\n            \"args\": [\n                \"/absolute/path/to/gradio_mcp_server.py\"\n            ]\n        }\n    }\n}\n```\n\n4. Restart Claude Desktop\n\nNow, when you ask Claude about generating an image or synthesizing speech, it can use your Gradio-powered tools to accomplish these tasks.\n\n\n", "heading1": "Gradio with FastMCP", "source_page_url": "https://gradio.app/guides/building-mcp-server-with-gradio", "source_page_title": "Mcp - Building Mcp Server With Gradio Guide"}, {"text": "The MCP protocol is still in its infancy and you might see issues connecting to an MCP Server that you've built. We generally recommend using the [MCP Inspector Tool](https://github.com/modelcontextprotocol/inspector) to try connecting and debugging your MCP Server.\n\nHere are some things that may help:\n\n**1. Ensure that you've provided type hints and valid docstrings for your functions**\n\nAs mentioned earlier, Gradio reads the docstrings for your functions and the type hints of input arguments to generate the description of the tool and parameters. A valid function and docstring look like this (note the \"Args:\" block with indented parameter names underneath):\n\n```py\ndef image_orientation(image: Image.Image) -> str:\n    \"\"\"\n    Returns whether image is portrait or landscape.\n\n    Args:\n        image (Image.Image): The image to check.\n    \"\"\"\n    return \"Portrait\" if image.height > image.width else \"Landscape\"\n```\n\nNote: You can preview the schema that is created for your MCP server by visiting the `http://your-server:port/gradio_api/mcp/schema` URL.\n\n**2. Try accepting input arguments as `str`**\n\nSome MCP Clients do not recognize parameters that are numeric or other complex types, but all of the MCP Clients that we've tested accept `str` input parameters. When in doubt, change your input parameter to be a `str` and then cast to a specific type in the function, as in this example:\n\n```py\ndef prime_factors(n: str):\n    \"\"\"\n    Compute the prime factorization of a positive integer.\n\n    Args:\n        n (str): The integer to factorize. 
Must be greater than 1.\"\"\"\n    n_int = int(n)\n    if n_int <= 1:\n        raise ValueError(\"Input must be an integer greater than 1.\")\n\n    factors = []\n    while n_int % 2 == 0:\n        factors.append(2)\n        n_int //= 2\n\n    divisor = 3\n    while divisor * divisor <= n_int:\n        while n_int % divisor == 0:\n            factors.append(divisor)\n            n_int //= divisor\n        divisor += 2\n\n    if n_int > 1:\n        factors.", "heading1": "Troubleshooting your MCP Servers", "source_page_url": "https://gradio.app/guides/building-mcp-server-with-gradio", "source_page_title": "Mcp - Building Mcp Server With Gradio Guide"}, {"text": "= 3\n    while divisor * divisor <= n_int:\n        while n_int % divisor == 0:\n            factors.append(divisor)\n            n_int //= divisor\n        divisor += 2\n\n    if n_int > 1:\n        factors.append(n_int)\n\n    return factors\n```\n\n**3. Ensure that your MCP Client Supports SSE**\n\nSome MCP Clients, notably [Claude Desktop](https://claude.ai/download), do not yet support SSE-based MCP Servers. In those cases, you can use a tool such as [mcp-remote](https://github.com/geelen/mcp-remote). First install [Node.js](https://nodejs.org/en/download/). Then, add the following to your own MCP Client config:\n\n```\n{\n  \"mcpServers\": {\n    \"gradio\": {\n      \"command\": \"npx\",\n      \"args\": [\n        \"mcp-remote\",\n        \"http://your-server:port/gradio_api/mcp/sse\"\n      ]\n    }\n  }\n}\n```\n\n**4. Restart your MCP Client and MCP Server**\n\nSome MCP Clients require you to restart them every time you update the MCP configuration. Other times, if the connection between the MCP Client and servers breaks, you might need to restart the MCP server. If all else fails, try restarting both your MCP Client and MCP Servers!\n\n", "heading1": "Troubleshooting your MCP Servers", "source_page_url": "https://gradio.app/guides/building-mcp-server-with-gradio", "source_page_title": "Mcp - Building Mcp Server With Gradio Guide"}, {"text": "If you're using LLMs in your workflow, adding this server will augment them with just the right context on Gradio, which makes your experience a lot faster and smoother. \n\n\n\nThe server is running on Spaces and was launched entirely using Gradio; you can see all the code [here](https://huggingface.co/spaces/gradio/docs-mcp). For more on building an MCP server with Gradio, see the [previous guide](./building-an-mcp-client-with-gradio). \n\n", "heading1": "Why an MCP Server?", "source_page_url": "https://gradio.app/guides/using-docs-mcp", "source_page_title": "Mcp - Using Docs Mcp Guide"}, {"text": "For clients that support SSE (e.g. Cursor, Windsurf, Cline), simply add the following configuration to your MCP config:\n\n```json\n{\n  \"mcpServers\": {\n    \"gradio\": {\n      \"url\": \"https://gradio-docs-mcp.hf.space/gradio_api/mcp/sse\"\n    }\n  }\n}\n```\n\nWe've included step-by-step instructions for Cursor below, but you can consult the docs for Windsurf [here](https://docs.windsurf.com/windsurf/mcp), and Cline [here](https://docs.cline.bot/mcp-servers/configuring-mcp-servers), which have a similar setup. \n\n\n\nCursor \n\n1. Make sure you're using the latest version of Cursor, and go to Cursor > Settings > Cursor Settings > MCP \n2. Click on '+ Add new global MCP server' \n3. Copy-paste this JSON into the file that opens and then save it. \n```json\n{\n  \"mcpServers\": {\n    \"gradio\": {\n      \"url\": \"https://gradio-docs-mcp.hf.space/gradio_api/mcp/sse\"\n    }\n  }\n}\n```\n4. That's it! You should see the tools load and the status go green in the settings page. You may have to click the refresh icon or wait a few seconds. 
\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/cursor-mcp.png)\n\nClaude Desktop\n\n1. Since Claude Desktop only supports stdio, you will need to [install Node.js](https://nodejs.org/en/download/) to get this to work. \n2. Make sure you're using the latest version of Claude Desktop, and go to Claude > Settings > Developer > Edit Config \n3. Open the file with your favorite editor and copy-paste this JSON, then save the file. \n```json\n{\n  \"mcpServers\": {\n    \"gradio\": {\n      \"command\": \"npx\",\n      \"args\": [\n        \"mcp-remote\",\n        \"https://gradio-docs-mcp.hf.space/gradio_api/mcp/sse\",\n        \"--transport\",\n        \"sse-only\"\n      ]\n    }\n  }\n}\n```\n4. Quit and re-open Claude Desktop, and you should be good to go. You should see it loaded in the Search and Tools icon or on the developer settings page. \n \n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/claude-deskt", "heading1": "Installing in the Clients", "source_page_url": "https://gradio.app/guides/using-docs-mcp", "source_page_title": "Mcp - Using Docs Mcp Guide"}, {"text": "You should see it loaded in the Search and Tools icon or on the developer settings page. \n \n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/claude-desktop-mcp.gif)\n\n", "heading1": "Installing in the Clients", "source_page_url": "https://gradio.app/guides/using-docs-mcp", "source_page_title": "Mcp - Using Docs Mcp Guide"}, {"text": "There are currently only two tools in the server: `gradio_docs_mcp_load_gradio_docs` and `gradio_docs_mcp_search_gradio_docs`. \n\n1. `gradio_docs_mcp_load_gradio_docs`: This tool takes no arguments and will load an /llms.txt-style summary of Gradio's latest, full documentation. This is very useful context that the LLM can parse before answering questions or generating code. \n\n2. `gradio_docs_mcp_search_gradio_docs`: This tool takes a query as an argument and will run an embedding search on Gradio's docs, guides, and demos to return the most useful context for the LLM to parse.", "heading1": "Tools", "source_page_url": "https://gradio.app/guides/using-docs-mcp", "source_page_title": "Mcp - Using Docs Mcp Guide"}, {"text": "The next generation of AI user interfaces is moving towards audio-native experiences. Users will be able to speak to chatbots and receive spoken responses in return. Several models have been built under this paradigm, including GPT-4o and [mini omni](https://github.com/gpt-omni/mini-omni).\n\nIn this guide, we'll walk you through building your own conversational chat application using mini omni as an example. You can see a demo of the finished app below:\n\n\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/conversational-chatbot", "source_page_title": "Streaming - Conversational Chatbot Guide"}, {"text": "Our application will enable the following user experience:\n\n1. Users click a button to start recording their message\n2. The app detects when the user has finished speaking and stops recording\n3. The user's audio is passed to the omni model, which streams back a response\n4. After omni mini finishes speaking, the user's microphone is reactivated\n5. 
All previous spoken audio, from both the user and omni, is displayed in a chatbot component\n\nLet's dive into the implementation details.\n\n", "heading1": "Application Overview", "source_page_url": "https://gradio.app/guides/conversational-chatbot", "source_page_title": "Streaming - Conversational Chatbot Guide"}, {"text": "We'll stream the user's audio from their microphone to the server and determine if the user has stopped speaking on each new chunk of audio.\n\nHere's our `process_audio` function:\n\n```python\nimport numpy as np\nfrom utils import determine_pause\n\ndef process_audio(audio: tuple, state: AppState):\n    if state.stream is None:\n        state.stream = audio[1]\n        state.sampling_rate = audio[0]\n    else:\n        state.stream = np.concatenate((state.stream, audio[1]))\n\n    pause_detected = determine_pause(state.stream, state.sampling_rate, state)\n    state.pause_detected = pause_detected\n\n    if state.pause_detected and state.started_talking:\n        return gr.Audio(recording=False), state\n    return None, state\n```\n\nThis function takes two inputs:\n1. The current audio chunk (a tuple of `(sampling_rate, numpy array of audio)`)\n2. The current application state\n\nWe'll use the following `AppState` dataclass to manage our application state:\n\n```python\nimport numpy as np\nfrom dataclasses import dataclass, field\n\n@dataclass\nclass AppState:\n    stream: np.ndarray | None = None\n    sampling_rate: int = 0\n    pause_detected: bool = False\n    started_talking: bool = False\n    stopped: bool = False\n    conversation: list = field(default_factory=list)\n```\n\nThe function concatenates new audio chunks to the existing stream and checks if the user has stopped speaking. If a pause is detected, it returns an update to stop recording. Otherwise, it returns `None` to indicate no changes.\n\nThe implementation of the `determine_pause` function is specific to the omni-mini project and can be found [here](https://huggingface.co/spaces/gradio/omni-mini/blob/eb027808c7bfe5179b46d9352e3fa1813a45f7c3/app.py#L98).\n\n", "heading1": "Processing User Audio", "source_page_url": "https://gradio.app/guides/conversational-chatbot", "source_page_title": "Streaming - Conversational Chatbot Guide"}, {"text": "After processing the user's audio, we need to generate and stream the chatbot's response. Here's our `response` function:\n\n```python\nimport io\nimport tempfile\nfrom pydub import AudioSegment\n\ndef response(state: AppState):\n    if not state.pause_detected and not state.started_talking:\n        return None, AppState()\n    \n    audio_buffer = io.BytesIO()\n\n    segment = AudioSegment(\n        state.stream.tobytes(),\n        frame_rate=state.sampling_rate,\n        sample_width=state.stream.dtype.itemsize,\n        channels=(1 if len(state.stream.shape) == 1 else state.stream.shape[1]),\n    )\n    segment.export(audio_buffer, format=\"wav\")\n\n    with tempfile.NamedTemporaryFile(suffix=\".wav\", delete=False) as f:\n        f.write(audio_buffer.getvalue())\n    \n    state.conversation.append({\"role\": \"user\",\n                    \"content\": {\"path\": f.name,\n                    \"mime_type\": \"audio/wav\"}})\n    \n    output_buffer = b\"\"\n\n    for mp3_bytes in speaking(audio_buffer.getvalue()):\n        output_buffer += mp3_bytes\n        yield mp3_bytes, state\n\n    with tempfile.NamedTemporaryFile(suffix=\".mp3\", delete=False) as f:\n        f.write(output_buffer)\n    \n    state.conversation.append({\"role\": \"assistant\",\n                    \"content\": {\"path\": f.name,\n                    \"mime_type\": \"audio/mp3\"}})\n    yield None, AppState(conversation=state.conversation)\n```\n\nThis function:\n1. Converts the user's audio to a WAV file\n2. Adds the user's message to the conversation history\n3. 
Generates and streams the chatbot's response using the `speaking` function\n4. Saves the chatbot's response as an MP3 file\n5. Adds the chatbot's response to the conversation history\n\nNote: The implementation of the `speaking` function is specific to the omni-mini project and can be found [here](https://huggingface.co/spaces/gradio/omni-mini/blob/main/app.py#L116).\n\n", "heading1": "Generating the Response", "source_page_url": "https://gradio.app/guides/conversational-chatbot", "source_page_title": "Streaming - Conversational Chatbot Guide"}, {"text": "Now let's put it all together using Gradio's Blocks API:\n\n```python\nimport gradio as gr\n\ndef start_recording_user(state: AppState):\n    if not state.stopped:\n        return gr.Audio(recording=True)\n\nwith gr.Blocks() as demo:\n    with gr.Row():\n        with gr.Column():\n            input_audio = gr.Audio(\n                label=\"Input Audio\", sources=\"microphone\", type=\"numpy\"\n            )\n        with gr.Column():\n            chatbot = gr.Chatbot(label=\"Conversation\", type=\"messages\")\n            output_audio = gr.Audio(label=\"Output Audio\", streaming=True, autoplay=True)\n    state = gr.State(value=AppState())\n\n    stream = input_audio.stream(\n        process_audio,\n        [input_audio, state],\n        [input_audio, state],\n        stream_every=0.5,\n        time_limit=30,\n    )\n    respond = input_audio.stop_recording(\n        response,\n        [state],\n        [output_audio, state]\n    )\n    respond.then(lambda s: s.conversation, [state], [chatbot])\n\n    restart = output_audio.stop(\n        start_recording_user,\n        [state],\n        [input_audio]\n    )\n    cancel = gr.Button(\"Stop Conversation\", variant=\"stop\")\n    cancel.click(lambda: (AppState(stopped=True), gr.Audio(recording=False)), None,\n                [state, input_audio], cancels=[respond, restart])\n\nif __name__ == \"__main__\":\n    demo.launch()\n```\n\nThis setup creates a user interface with:\n- An input audio component for recording user messages\n- A chatbot component to display the conversation history\n- An output audio component for the chatbot's responses\n- A button to stop and reset the conversation\n\nThe app streams user audio in 0.5-second chunks, processes it, generates responses, and updates the conversation history accordingly.\n\n", "heading1": "Building the Gradio App", "source_page_url": "https://gradio.app/guides/conversational-chatbot", "source_page_title": "Streaming - Conversational Chatbot Guide"}, {"text": "This guide demonstrates how to build a conversational chatbot application using Gradio and the mini omni model. You can adapt this framework to create various audio-based chatbot demos. To see the full application in action, visit the Hugging Face Spaces demo: https://huggingface.co/spaces/gradio/omni-mini\n\nFeel free to experiment with different models, audio processing techniques, or user interface designs to create your own unique conversational AI experiences!", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/conversational-chatbot", "source_page_title": "Streaming - Conversational Chatbot Guide"}, {"text": "Modern voice applications should feel natural and responsive, moving beyond the traditional \"click-to-record\" pattern. 
By combining Groq's fast inference capabilities with automatic speech detection, we can create a more intuitive interaction model where users can simply start talking whenever they want to engage with the AI.\n\n> Credits: VAD and Gradio code inspired by [WillHeld's Diva-audio-chat](https://huggingface.co/spaces/WillHeld/diva-audio-chat/tree/main).\n\nIn this tutorial, you will learn how to create a multimodal Gradio and Groq app that has automatic speech detection. You can also watch the full video tutorial which includes a demo of the application:\n\n\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": "Many voice apps currently work by the user clicking record, speaking, then stopping the recording. While this can be a powerful demo, the most natural mode of interaction with voice requires the app to dynamically detect when the user is speaking, so they can talk back and forth without having to continually click a record button. \n\nCreating a natural interaction with voice and text requires a dynamic and low-latency response. Thus, we need both automatic voice detection and fast inference. With @ricky0123/vad-web powering speech detection and Groq powering the LLM, both of these requirements are met. Groq provides a lightning-fast response, and Gradio allows for easy creation of impressively functional apps.\n\nThis tutorial shows you how to build a calorie tracking app where you speak to an AI that automatically detects when you start and stop your response, and provides its own text response back to guide you with questions that allow it to give a calorie estimate of your last meal.\n\n", "heading1": "Background", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": "- **Gradio**: Provides the web interface and audio handling capabilities\n- **@ricky0123/vad-web**: Handles voice activity detection\n- **Groq**: Powers fast LLM inference for natural conversations\n- **Whisper**: Transcribes speech to text\n\nSetting Up the Environment\n\nFirst, let\u2019s install and import our essential libraries and set up a client for using the Groq API. Here\u2019s how to do it:\n\n`requirements.txt`\n```\ngradio\ngroq\nnumpy\nsoundfile\nlibrosa\nspaces\nxxhash\ndatasets\n```\n\n`app.py`\n```python\nimport groq\nimport gradio as gr\nimport soundfile as sf\nfrom dataclasses import dataclass, field\nimport os\n\n# Initialize Groq client securely\napi_key = os.environ.get(\"GROQ_API_KEY\")\nif not api_key:\n    raise ValueError(\"Please set the GROQ_API_KEY environment variable.\")\nclient = groq.Client(api_key=api_key)\n```\n\nHere, we\u2019re pulling in key libraries to interact with the Groq API, build a sleek UI with Gradio, and handle audio data. We\u2019re reading the Groq API key from an environment variable, which is a security best practice for avoiding leaking the API key.\n\n---\n\nState Management for Seamless Conversations\n\nWe need a way to keep track of our conversation history, so the chatbot remembers past interactions, and manage other states like whether recording is currently active. 
To do this, let\u2019s create an `AppState` class:\n\n```python\nfrom typing import Any\n\n@dataclass\nclass AppState:\n    conversation: list = field(default_factory=list)\n    stopped: bool = False\n    model_outs: Any = None\n```\n\nOur `AppState` class is a handy tool for managing conversation history and tracking whether recording is on or off. Each instance will have its own fresh list of conversations, making sure chat history is isolated to each session. \n\n---\n\nTranscribing Audio with Whisper on Groq\n\nNext, we\u2019ll create a function to transcribe the user\u2019s audio input into text using Whisper, a powerful transcription model hosted on Groq. This transcription will also help us determine whether there\u2019s meani", "heading1": "Key Components", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": "e\u2019ll create a function to transcribe the user\u2019s audio input into text using Whisper, a powerful transcription model hosted on Groq. This transcription will also help us determine whether there\u2019s meaningful speech in the input. Here\u2019s how:\n\n```python\ndef transcribe_audio(client, file_name):\n    if file_name is None:\n        return None\n\n    try:\n        with open(file_name, \"rb\") as audio_file:\n            response = client.audio.transcriptions.with_raw_response.create(\n                model=\"whisper-large-v3-turbo\",\n                file=(\"audio.wav\", audio_file),\n                response_format=\"verbose_json\",\n            )\n            completion = process_whisper_response(response.parse())\n            return completion\n    except Exception as e:\n        print(f\"Error in transcription: {e}\")\n        return f\"Error in transcription: {str(e)}\"\n```\n\nThis function opens the audio file and sends it to Groq\u2019s Whisper model for transcription, requesting detailed JSON output. `verbose_json` is needed to get information to determine if speech was included in the audio. We also handle any potential errors so our app doesn\u2019t fully crash if there\u2019s an issue with the API request. \n\n```python\ndef process_whisper_response(completion):\n    \"\"\"\n    Process Whisper transcription response and return text or null based on no_speech_prob\n    \n    Args:\n        completion: Whisper transcription response object\n    \n    Returns:\n        str or None: Transcribed text if no_speech_prob <= 0.7, otherwise None\n    \"\"\"\n    if completion.segments and len(completion.segments) > 0:\n        no_speech_prob = completion.segments[0].get('no_speech_prob', 0)\n        print(\"No speech prob:\", no_speech_prob)\n\n        if no_speech_prob > 0.7:\n            return None\n        \n        return completion.text.strip()\n    \n    return None\n```\n\nWe also need to interpret the audio data response. The process_whisper_response function takes the resulting completion from Whisper and checks if the audio was j", "heading1": "Key Components", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": "ext.strip()\n    \n    return None\n```\n\nWe also need to interpret the audio data response. The process_whisper_response function takes the resulting completion from Whisper and checks if the audio was just background noise or had actual speaking that was transcribed. It uses a threshold of 0.7 to interpret the no_speech_prob, and will return None if there was no speech. 
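To make that behavior concrete, here is a small hedged sketch that exercises the function with a stand-in object rather than a real Whisper response (`SimpleNamespace` just mimics the two fields the function reads):\n\n```python\nfrom types import SimpleNamespace\n\n# Hypothetical completion whose first segment looks like background noise\nnoisy = SimpleNamespace(\n    segments=[{\"no_speech_prob\": 0.85}],  # above the 0.7 threshold\n    text=\" (background hum) \",\n)\n\nassert process_whisper_response(noisy) is None  # rejected as silence\n```\n\nIn the sketch, the input is rejected because its `no_speech_prob` exceeds 0.7. 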
Otherwise, it will return the text transcript of the conversational response from the human.\n\n\n---\n\nAdding Conversational Intelligence with LLM Integration\n\nOur chatbot needs to provide intelligent, friendly responses that flow naturally. We\u2019ll use a Groq-hosted Llama 3.2 model for this:\n\n```python\ndef generate_chat_completion(client, history):\n    messages = []\n    messages.append(\n        {\n            \"role\": \"system\",\n            \"content\": \"In conversation with the user, ask questions to estimate and provide (1) total calories, (2) protein, carbs, and fat in grams, (3) fiber and sugar content. Only ask *one question at a time*. Be conversational and natural.\",\n        }\n    )\n\n    for message in history:\n        messages.append(message)\n\n    try:\n        completion = client.chat.completions.create(\n            model=\"llama-3.2-11b-vision-preview\",\n            messages=messages,\n        )\n        return completion.choices[0].message.content\n    except Exception as e:\n        return f\"Error in generating chat completion: {str(e)}\"\n```\n\nWe\u2019re defining a system prompt to guide the chatbot\u2019s behavior, ensuring it asks one question at a time and keeps things conversational. This setup also includes error handling to ensure the app gracefully manages any issues.\n\n---\n\nVoice Activity Detection for Hands-Free Interaction\n\nTo make our chatbot hands-free, we\u2019ll add Voice Activity Detection (VAD) to automatically detect when someone starts or stops speaking. Here\u2019s how to implement it using ONNX in JavaScript:\n\n```javascript\nasync function main() {\n  const script1 = document.createElement(\"script\");\n  scrip", "heading1": "Key Components", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": "ly detect when someone starts or stops speaking. Here\u2019s how to implement it using ONNX in JavaScript:\n\n```javascript\nasync function main() {\n  const script1 = document.createElement(\"script\");\n  script1.src = \"https://cdn.jsdelivr.net/npm/onnxruntime-web@1.14.0/dist/ort.js\";\n  document.head.appendChild(script1)\n  const script2 = document.createElement(\"script\");\n  script2.onload = async () => {\n    console.log(\"vad loaded\");\n    var record = document.querySelector('.record-button');\n    record.textContent = \"Just Start Talking!\"\n    \n    const myvad = await vad.MicVAD.new({\n      onSpeechStart: () => {\n        var record = document.querySelector('.record-button');\n        var player = document.querySelector('streaming-out')\n        if (record != null && (player == null || player.paused)) {\n          record.click();\n        }\n      },\n      onSpeechEnd: (audio) => {\n        var stop = document.querySelector('.stop-button');\n        if (stop != null) {\n          stop.click();\n        }\n      }\n    })\n    myvad.start()\n  }\n  script2.src = \"https://cdn.jsdelivr.net/npm/@ricky0123/vad-web@0.0.7/dist/bundle.min.js\";\n  // Append script2 so the VAD bundle actually loads and fires onload\n  document.head.appendChild(script2)\n}\n```\n\nThis script loads our VAD model and sets up functions to start and stop recording automatically. When the user starts speaking, it triggers the recording, and when they stop, it ends the recording.\n\n---\n\nBuilding a User Interface with Gradio\n\nNow, let\u2019s create an intuitive and visually appealing user interface with Gradio. 
This interface will include an audio input for capturing voice, a chat window for displaying responses, and state management to keep things synchronized.\n\n```python\nwith gr.Blocks(theme=theme, js=js) as demo:\n with gr.Row():\n input_audio = gr.Audio(\n label=\"Input Audio\",\n sources=[\"microphone\"],\n type=\"numpy\",\n streaming=False,\n waveform_options=gr.WaveformOptions(waveform_color=\"B83A4B\"),\n )\n with gr.Row():\n chatbot = gr.Chatbot(label=\"Conversati", "heading1": "Key Components", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": " type=\"numpy\",\n streaming=False,\n waveform_options=gr.WaveformOptions(waveform_color=\"B83A4B\"),\n )\n with gr.Row():\n chatbot = gr.Chatbot(label=\"Conversation\", type=\"messages\")\n state = gr.State(value=AppState())\n```\n\nIn this code block, we\u2019re using Gradio\u2019s `Blocks` API to create an interface with an audio input, a chat display, and an application state manager. The color customization for the waveform adds a nice visual touch.\n\n---\n\nHandling Recording and Responses\n\nFinally, let\u2019s link the recording and response components to ensure the app reacts smoothly to user inputs and provides responses in real-time.\n\n```python\n stream = input_audio.start_recording(\n process_audio,\n [input_audio, state],\n [input_audio, state],\n )\n respond = input_audio.stop_recording(\n response, [state, input_audio], [state, chatbot]\n )\n```\n\nThese lines set up event listeners for starting and stopping the recording, processing the audio input, and generating responses. By linking these events, we create a cohesive experience where users can simply talk, and the chatbot handles the rest.\n\n---\n\n", "heading1": "Key Components", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": "1. When you open the app, the VAD system automatically initializes and starts listening for speech\n2. As soon as you start talking, it triggers the recording automatically\n3. When you stop speaking, the recording ends and:\n - The audio is transcribed using Whisper\n - The transcribed text is sent to the LLM\n - The LLM generates a response about calorie tracking\n - The response is displayed in the chat interface\n4. This creates a natural back-and-forth conversation where you can simply talk about your meals and get instant feedback on nutritional content\n\nThis app demonstrates how to create a natural voice interface that feels responsive and intuitive. By combining Groq's fast inference with automatic speech detection, we've eliminated the need for manual recording controls while maintaining high-quality interactions. 
The result is a practical calorie tracking assistant that users can simply talk to as naturally as they would to a human nutritionist.\n\nLink to GitHub repository: [Groq Gradio Basics](https://github.com/bklieger-groq/gradio-groq-basics/tree/main/calorie-tracker)", "heading1": "Summary", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": "First, we'll install the following requirements in our system:\n\n```\nopencv-python\ntorch\ntransformers>=4.43.0\nspaces\n```\n\nThen, we'll download the model from the Hugging Face Hub:\n\n```python\nfrom transformers import RTDetrForObjectDetection, RTDetrImageProcessor\n\nimage_processor = RTDetrImageProcessor.from_pretrained(\"PekingU/rtdetr_r50vd\")\nmodel = RTDetrForObjectDetection.from_pretrained(\"PekingU/rtdetr_r50vd\").to(\"cuda\")\n```\n\nWe're moving the model to the GPU. We'll be deploying our model to Hugging Face Spaces and running the inference in the [free ZeroGPU cluster](https://huggingface.co/zero-gpu-explorers). \n\n\n", "heading1": "Setting up the Model", "source_page_url": "https://gradio.app/guides/object-detection-from-video", "source_page_title": "Streaming - Object Detection From Video Guide"}, {"text": "Our inference function will accept a video and a desired confidence threshold.\nObject detection models identify many objects and assign a confidence score to each object. The lower the confidence, the higher the chance of a false positive. So we will let our users set the confidence threshold.\n\nOur function will iterate over the frames in the video and run the RT-DETR model over each frame.\nWe will then draw the bounding boxes for each detected object in the frame and save the frame to a new output video.\nThe function will yield each output video in chunks of two seconds.\n\nIn order to keep inference times as low as possible on ZeroGPU (there is a time-based quota),\nwe will halve the original frames-per-second in the output video and resize the input frames to be half the original \nsize before running the model.\n\nThe code for the inference function is below - we'll go over it piece by piece.\n\n```python\nimport spaces\nimport cv2\nfrom PIL import Image\nimport torch\nimport time\nimport numpy as np\nimport uuid\n\nfrom draw_boxes import draw_bounding_boxes\n\nSUBSAMPLE = 2\n\n@spaces.GPU\ndef stream_object_detection(video, conf_threshold):\n    cap = cv2.VideoCapture(video)\n\n    # This means we will output mp4 videos\n    video_codec = cv2.VideoWriter_fourcc(*\"mp4v\")  # type: ignore\n    fps = int(cap.get(cv2.CAP_PROP_FPS))\n\n    desired_fps = fps // SUBSAMPLE\n    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) // 2\n    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) // 2\n\n    iterating, frame = cap.read()\n\n    n_frames = 0\n\n    # Use UUID to create a unique video file\n    output_video_name = f\"output_{uuid.uuid4()}.mp4\"\n\n    # Output Video\n    output_video = cv2.VideoWriter(output_video_name, video_codec, desired_fps, (width, height))  # type: ignore\n    batch = []\n\n    while iterating:\n        frame = cv2.resize(frame, (0, 0), fx=0.5, fy=0.5)\n        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)\n        if 
n_frames % SUBSAMPLE == 0:\n            batch.append(frame)\n        if len(batch) == 2 * desired_fps:\n            inputs = image_processor(images=batch, return_tensors="pt").to("cuda")\n\n            with torch.no_grad():\n                outputs = model(**inputs)\n\n            boxes = image_processor.post_process_object_detection(\n                outputs,\n                target_sizes=torch.tensor([(height, width)] * len(batch)),\n                threshold=conf_threshold)\n            \n            for i, (array, box) in enumerate(zip(batch, boxes)):\n                pil_image = draw_bounding_boxes(Image.fromarray(array), box, model, conf_threshold)\n                frame = np.array(pil_image)\n                # Convert RGB to BGR\n                frame = frame[:, :, ::-1].copy()\n                output_video.write(frame)\n\n            batch = []\n            output_video.release()\n            yield output_video_name\n            output_video_name = f"output_{uuid.uuid4()}.mp4"\n            output_video = cv2.VideoWriter(output_video_name, video_codec, desired_fps, (width, height))  # type: ignore\n\n        iterating, frame = cap.read()\n        n_frames += 1\n```\n\n1. **Reading from the Video**\n\nOne of the industry standards for creating videos in Python is OpenCV, so we will use it in this app.\n\nThe `cap` variable is how we will read from the input video. Whenever we call `cap.read()`, we are reading the next frame in the video.\n\nIn order to stream video in Gradio, we need to yield a different video file for each "chunk" of the output video.\nWe create the next video file to write to with the `output_video = cv2.VideoWriter(output_video_name, video_codec, desired_fps, (width, height))` line. The `video_codec` is how we specify the type of video file. Only "mp4" and "ts" files are supported for video streaming at the moment.\n\n\n2. **The Inference Loop**\n\nFor each frame i", "heading1": "The Inference Function", "source_page_url": "https://gradio.app/guides/object-detection-from-video", "source_page_title": "Streaming - Object Detection From Video Guide"}, {"text": "dth, height))` line. The `video_codec` is how we specify the type of video file. Only "mp4" and "ts" files are supported for video streaming at the moment.\n\n\n2. **The Inference Loop**\n\nFor each frame in the video, we will resize it to be half the size. OpenCV reads files in `BGR` format, so we will convert to the `RGB` format expected by `transformers`. That's what the first two lines of the while loop are doing. \n\nWe take every other frame and add it to a `batch` list so that the output video is half the original FPS. When the batch covers two seconds of video, we will run the model. The two-second threshold was chosen to keep the processing time of each batch small enough so that video is smoothly displayed in the server while not requiring too many separate forward passes. In order for video streaming to work properly in Gradio, each batch should cover at least one second of video. \n\nWe run the forward pass of the model and then use the `post_process_object_detection` method of the model to scale the detected bounding boxes to the size of the input frame.\n\nWe make use of a custom function to draw the bounding boxes (source [here](https://huggingface.co/spaces/gradio/rt-detr-object-detection/blob/main/draw_boxes.py#L14)). We then have to convert from `RGB` to `BGR` before writing back to the output video.\n\nOnce we have finished processing the batch, we create a new output video file for the next batch.\n\n", "heading1": "The Inference Function", "source_page_url": "https://gradio.app/guides/object-detection-from-video", "source_page_title": "Streaming - Object Detection From Video Guide"}, {"text": "The UI code is pretty similar to other kinds of Gradio apps. 
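\n\nTo recap the pattern before building the UI: the generator writes frames into the current file, releases the writer when the chunk is complete, yields the filename, and then starts a new file; Gradio treats each yielded, finished mp4 as one streamed chunk. Here is a stripped-down, model-free sketch of just that cycle (illustrative only - the real function above also subsamples, resizes, and draws boxes):\n\n```python\nimport uuid\n\nimport cv2\n\ndef stream_video_chunks(video_path, chunk_seconds=2):\n    cap = cv2.VideoCapture(video_path)\n    fps = int(cap.get(cv2.CAP_PROP_FPS))\n    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))\n    codec = cv2.VideoWriter_fourcc(*"mp4v")  # mp4 output, as in the guide\n\n    name = f"output_{uuid.uuid4()}.mp4"\n    writer = cv2.VideoWriter(name, codec, fps, size)\n    frames_in_chunk = 0\n\n    iterating, frame = cap.read()\n    while iterating:\n        writer.write(frame)  # a real app would run inference and draw boxes first\n        frames_in_chunk += 1\n        if frames_in_chunk == chunk_seconds * fps:\n            writer.release()\n            yield name  # Gradio streams this finished chunk to the client\n            name = f"output_{uuid.uuid4()}.mp4"\n            writer = cv2.VideoWriter(name, codec, fps, size)\n            frames_in_chunk = 0\n        iterating, frame = cap.read()\n\n    writer.release()\n    yield name  # flush the final partial chunk\n```\n\n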
\nWe'll use a standard two-column layout so that users can see the input and output videos side by side.\n\nIn order for streaming to work, we have to set `streaming=True` in the output video component. Setting the video to autoplay is not necessary, but it makes for a better experience for users.\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as app:\n    gr.HTML(\n        """\n        <h1 style='text-align: center'>\n        Video Object Detection with RT-DETR\n        </h1>
\n \"\"\")\n with gr.Row():\n with gr.Column():\n video = gr.Video(label=\"Video Source\")\n conf_threshold = gr.Slider(\n label=\"Confidence Threshold\",\n minimum=0.0,\n maximum=1.0,\n step=0.05,\n value=0.30,\n )\n with gr.Column():\n output_video = gr.Video(label=\"Processed Video\", streaming=True, autoplay=True)\n\n video.upload(\n fn=stream_object_detection,\n inputs=[video, conf_threshold],\n outputs=[output_video],\n )\n\n\n```\n\n\n", "heading1": "The Gradio Demo", "source_page_url": "https://gradio.app/guides/object-detection-from-video", "source_page_title": "Streaming - Object Detection From Video Guide"}, {"text": "You can check out our demo hosted on Hugging Face Spaces [here](https://huggingface.co/spaces/gradio/rt-detr-object-detection). \n\nIt is also embedded on this page below\n\n$demo_rt-detr-object-detection", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/object-detection-from-video", "source_page_title": "Streaming - Object Detection From Video Guide"}, {"text": "Automatic speech recognition (ASR), the conversion of spoken speech to text, is a very important and thriving area of machine learning. ASR algorithms run on practically every smartphone, and are becoming increasingly embedded in professional workflows, such as digital assistants for nurses and doctors. Because ASR algorithms are designed to be used directly by customers and end users, it is important to validate that they are behaving as expected when confronted with a wide variety of speech patterns (different accents, pitches, and background audio conditions).\n\nUsing `gradio`, you can easily build a demo of your ASR model and share that with a testing team, or test it yourself by speaking through the microphone on your device.\n\nThis tutorial will show how to take a pretrained speech-to-text model and deploy it with a Gradio interface. We will start with a **_full-context_** model, in which the user speaks the entire audio before the prediction runs. Then we will adapt the demo to make it **_streaming_**, meaning that the audio model will convert speech as you speak. \n\nPrerequisites\n\nMake sure you have the `gradio` Python package already [installed](/getting_started). You will also need a pretrained speech recognition model. In this tutorial, we will build demos from 2 ASR libraries:\n\n- Transformers (for this, `pip install torch transformers torchaudio`)\n\nMake sure you have at least one of these installed so that you can follow along the tutorial. You will also need `ffmpeg` [installed on your system](https://www.ffmpeg.org/download.html), if you do not already have it, to process files from the microphone.\n\nHere's how to build a real time speech recognition (ASR) app:\n\n1. [Set up the Transformers ASR Model](1-set-up-the-transformers-asr-model)\n2. [Create a Full-Context ASR Demo with Transformers](2-create-a-full-context-asr-demo-with-transformers)\n3. [Create a Streaming ASR Demo with Transformers](3-create-a-streaming-asr-demo-with-transformers)\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/real-time-speech-recognition", "source_page_title": "Streaming - Real Time Speech Recognition Guide"}, {"text": "First, you will need to have an ASR model that you have either trained yourself or you will need to download a pretrained model. 
In this tutorial, we will start by using a pretrained ASR model, `whisper`, from the Hugging Face Model Hub.\n\nHere is the code to load `whisper` from Hugging Face `transformers`.\n\n```python\nfrom transformers import pipeline\n\np = pipeline("automatic-speech-recognition", model="openai/whisper-base.en")\n```\n\nThat's it!\n\n", "heading1": "1. Set up the Transformers ASR Model", "source_page_url": "https://gradio.app/guides/real-time-speech-recognition", "source_page_title": "Streaming - Real Time Speech Recognition Guide"}, {"text": "We will start by creating a _full-context_ ASR demo, in which the user speaks the full audio before using the ASR model to run inference. This is very easy with Gradio -- we simply create a function around the `pipeline` object above.\n\nWe will use `gradio`'s built-in `Audio` component, configured to take input from the user's microphone and return a numpy array of the recorded audio. The output component will be a plain `Textbox`.\n\n$code_asr\n$demo_asr\n\nThe `transcribe` function takes a single parameter, `audio`, which is a numpy array of the audio the user recorded. The `pipeline` object expects this in float32 format, so we convert it first to float32, and then extract the transcribed text.\n\n", "heading1": "2. Create a Full-Context ASR Demo with Transformers", "source_page_url": "https://gradio.app/guides/real-time-speech-recognition", "source_page_title": "Streaming - Real Time Speech Recognition Guide"}, {"text": "To make this a *streaming* demo, we need to make these changes:\n\n1. Set `streaming=True` in the `Audio` component\n2. Set `live=True` in the `Interface`\n3. Add a `state` to the interface to store the recorded audio of a user\n\nTip: You can also set `time_limit` and `stream_every` parameters in the interface. The `time_limit` caps the amount of time each user's stream can take. The default is 30 seconds so users won't be able to stream audio for more than 30 seconds. The `stream_every` parameter controls how frequently data is sent to your function. By default it is 0.5 seconds.\n\nTake a look below.\n\n$code_stream_asr\n\nNotice that we now have a state variable because we need to track all the audio history. `transcribe` gets called whenever there is a new small chunk of audio, but we also need to keep track of all the audio spoken so far in the state. As the interface runs, the `transcribe` function gets called, with a record of all the previously spoken audio in the `stream` and the new chunk of audio as `new_chunk`. We return the new full audio to be stored back in its current state, and we also return the transcription. Here, we naively append the audio together and call the `transcriber` object on the entire audio. You can imagine more efficient ways of handling this, such as re-processing only the last 5 seconds of audio whenever a new chunk of audio is received. \n\n$demo_stream_asr\n\nNow the ASR model will run inference as you speak! \n", "heading1": "3. Create a Streaming ASR Demo with Transformers", "source_page_url": "https://gradio.app/guides/real-time-speech-recognition", "source_page_title": "Streaming - Real Time Speech Recognition Guide"}, {"text": "Just like the classic Magic 8 Ball, a user should ask it a question orally and then wait for a response. Under the hood, we'll use Whisper to transcribe the audio and then use an LLM to generate a magic-8-ball-style answer. 
Finally, we'll use Parler TTS to read the response aloud.\n\n", "heading1": "The Overview", "source_page_url": "https://gradio.app/guides/streaming-ai-generated-audio", "source_page_title": "Streaming - Streaming Ai Generated Audio Guide"}, {"text": "First let's define the UI and put placeholders for all the python logic.\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as block:\n    gr.HTML(\n        f"""\n        <h1 style='text-align: center;'> Magic 8 Ball \ud83c\udfb1 </h1>\n        <h3 style='text-align: center;'> Ask a question and receive wisdom </h3>\n        <p style='text-align: center;'> 
Powered by Parler-TTS </p>\n        """\n    )\n    with gr.Group():\n        with gr.Row():\n            audio_out = gr.Audio(label="Spoken Answer", streaming=True, autoplay=True)\n            answer = gr.Textbox(label="Answer")\n            state = gr.State()\n        with gr.Row():\n            audio_in = gr.Audio(label="Speak your question", sources="microphone", type="filepath")\n\n    audio_in.stop_recording(generate_response, audio_in, [state, answer, audio_out])\\\n        .then(fn=read_response, inputs=state, outputs=[answer, audio_out])\n\nblock.launch()\n```\n\nWe're placing the output Audio and Textbox components and the input Audio component in separate rows. In order to stream the audio from the server, we'll set `streaming=True` in the output Audio component. We'll also set `autoplay=True` so that the audio plays as soon as it's ready.\nWe'll be using the Audio input component's `stop_recording` event to trigger our application's logic when a user stops recording from their microphone.\n\nWe're separating the logic into two parts. First, `generate_response` will take the recorded audio, transcribe it and generate a response with an LLM. We're going to store the response in a `gr.State` variable that then gets passed to the `read_response` function that generates the audio.\n\nWe're doing this in two parts because only `read_response` will require a GPU. Our app will run on Hugging Face's [ZeroGPU](https://huggingface.co/zero-gpu-explorers), which has time-based quotas. Since generating the response can be done with Hugging Face's Inference API, we shouldn't include that code in our GPU func", "heading1": "The UI", "source_page_url": "https://gradio.app/guides/streaming-ai-generated-audio", "source_page_title": "Streaming - Streaming Ai Generated Audio Guide"}, {"text": "GPU](https://huggingface.co/zero-gpu-explorers) which has time-based quotas. Since generating the response can be done with Hugging Face's Inference API, we shouldn't include that code in our GPU function as it will needlessly use our GPU quota.\n\n", "heading1": "The UI", "source_page_url": "https://gradio.app/guides/streaming-ai-generated-audio", "source_page_title": "Streaming - Streaming Ai Generated Audio Guide"}, {"text": "As mentioned above, we'll use [Hugging Face's Inference API](https://huggingface.co/docs/huggingface_hub/guides/inference) to transcribe the audio and generate a response from an LLM. After instantiating the client, I use the `automatic_speech_recognition` method (this automatically uses Whisper running on Hugging Face's Inference Servers) to transcribe the audio. Then I pass the question to an LLM (Mistral-7B-Instruct) to generate a response. We are prompting the LLM to act like a magic 8 ball with the system message.\n\nOur `generate_response` function will also send empty updates to the output textbox and audio components (returning `None`). 
\nThis is because I want the Gradio progress tracker to be displayed over the components but I don't want to display the answer until the audio is ready.\n\n\n```python\nimport os\nimport random\n\nimport gradio as gr\nfrom huggingface_hub import InferenceClient\n\nclient = InferenceClient(token=os.getenv("HF_TOKEN"))\n\ndef generate_response(audio):\n    gr.Info("Transcribing Audio", duration=5)\n    question = client.automatic_speech_recognition(audio).text\n\n    messages = [{"role": "system", "content": ("You are a magic 8 ball."\n                "Someone will present to you a situation or question and your job "\n                "is to answer with a cryptic adage or proverb such as "\n                "'curiosity killed the cat' or 'The early bird gets the worm'."\n                "Keep your answers short and do not include the phrase 'Magic 8 Ball' in your response. If the question does not make sense or is off-topic, say 'Foolish questions get foolish answers.'"\n                "For example, 'Magic 8 Ball, should I get a dog?', 'A dog is ready for you but are you ready for the dog?'")},\n                {"role": "user", "content": f"Magic 8 Ball please answer this question -  {question}"}]\n    \n    response = client.chat_completion(messages,", "heading1": "The Logic", "source_page_url": "https://gradio.app/guides/streaming-ai-generated-audio", "source_page_title": "Streaming - Streaming Ai Generated Audio Guide"}, {"text": "for you but are you ready for the dog?'")},\n                {"role": "user", "content": f"Magic 8 Ball please answer this question -  {question}"}]\n    \n    response = client.chat_completion(messages, max_tokens=64, seed=random.randint(1, 5000),\n                                      model="mistralai/Mistral-7B-Instruct-v0.3")\n\n    response = response.choices[0].message.content.replace("Magic 8 Ball", "").replace(":", "")\n    return response, None, None\n```\n\n\nNow that we have our text response, we'll read it aloud with Parler TTS. The `read_response` function will be a python generator that yields the next chunk of audio as it's ready.\n\n\nWe'll be using the [Mini v0.1](https://huggingface.co/parler-tts/parler_tts_mini_v0.1) for the feature extraction but the [Jenny fine tuned version](https://huggingface.co/parler-tts/parler-tts-mini-jenny-30H) for the voice. This is so that the voice is consistent across generations.\n\n\nStreaming audio with transformers requires a custom Streamer class. You can see the implementation [here](https://huggingface.co/spaces/gradio/magic-8-ball/blob/main/streamer.py). Additionally, we'll convert the output to bytes so that it can be streamed faster from the backend. 
\n\n\n```python\nfrom streamer import ParlerTTSStreamer\nfrom parler_tts import ParlerTTSForConditionalGeneration\nfrom transformers import AutoTokenizer, AutoFeatureExtractor, set_seed\nimport numpy as np\nimport spaces\nimport torch\nfrom threading import Thread\n\n\ndevice = "cuda:0" if torch.cuda.is_available() else "mps" if torch.backends.mps.is_available() else "cpu"\ntorch_dtype = torch.float16 if device != "cpu" else torch.float32\n\nrepo_id = "parler-tts/parler_tts_mini_v0.1"\n\njenny_repo_id = "ylacombe/parler-tts-mini-jenny-30H"\n\nmodel = ParlerTTSForConditionalGeneration.from_pretrained(\n    jenny_repo_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True\n).to(device)\n\ntokenizer = AutoTokenizer.from_pretrained(repo_id)\nfeature_extractor = AutoFeatureExtractor.from_pretrained(repo_id)\n\nsampling_rate = model.audio_encoder.config.sampling_rate\nf", "heading1": "The Logic", "source_page_url": "https://gradio.app/guides/streaming-ai-generated-audio", "source_page_title": "Streaming - Streaming Ai Generated Audio Guide"}, {"text": "sage=True\n).to(device)\n\ntokenizer = AutoTokenizer.from_pretrained(repo_id)\nfeature_extractor = AutoFeatureExtractor.from_pretrained(repo_id)\n\nsampling_rate = model.audio_encoder.config.sampling_rate\nframe_rate = model.audio_encoder.config.frame_rate\n\n@spaces.GPU\ndef read_response(answer):\n\n    play_steps_in_s = 2.0\n    play_steps = int(frame_rate * play_steps_in_s)\n\n    description = "Jenny speaks at an average pace with a calm delivery in a very confined sounding environment with clear audio quality."\n    description_tokens = tokenizer(description, return_tensors="pt").to(device)\n\n    streamer = ParlerTTSStreamer(model, device=device, play_steps=play_steps)\n    prompt = tokenizer(answer, return_tensors="pt").to(device)\n\n    generation_kwargs = dict(\n        input_ids=description_tokens.input_ids,\n        prompt_input_ids=prompt.input_ids,\n        streamer=streamer,\n        do_sample=True,\n        temperature=1.0,\n        min_new_tokens=10,\n    )\n\n    set_seed(42)\n    thread = Thread(target=model.generate, kwargs=generation_kwargs)\n    thread.start()\n\n    for new_audio in streamer:\n        print(f"Sample of length: {round(new_audio.shape[0] / sampling_rate, 2)} seconds")\n        # numpy_to_mp3 is a small helper defined in the Space that encodes the chunk as mp3 bytes\n        yield answer, numpy_to_mp3(new_audio, sampling_rate=sampling_rate)\n```\n\n", "heading1": "The Logic", "source_page_url": "https://gradio.app/guides/streaming-ai-generated-audio", "source_page_title": "Streaming - Streaming Ai Generated Audio Guide"}, {"text": "You can see our final application [here](https://huggingface.co/spaces/gradio/magic-8-ball)!\n\n\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/streaming-ai-generated-audio", "source_page_title": "Streaming - Streaming Ai Generated Audio Guide"}, {"text": "Start by installing all the dependencies. Add the following lines to a `requirements.txt` file and run `pip install -r requirements.txt`:\n\n```bash\nopencv-python\ntwilio\ngradio>=5.0\ngradio-webrtc\nonnxruntime-gpu\n```\n\nWe'll use the ONNX runtime to speed up YOLOv10 inference. This guide assumes you have access to a GPU. If you don't, change `onnxruntime-gpu` to `onnxruntime`. Without a GPU, the model will run slower, resulting in a laggy demo.\n\nWe'll use OpenCV for image manipulation and the [Gradio WebRTC](https://github.com/freddyaboulton/gradio-webrtc) custom component to use [WebRTC](https://webrtc.org/) under the hood, achieving near-zero latency.\n\n**Note**: If you want to deploy this app on any cloud provider, you'll need to use the free Twilio API for their [TURN servers](https://www.twilio.com/docs/stun-turn). 
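\n\nOnce you have a Twilio account (see below), a typical way to build the `rtc_configuration` that the demo later passes to the `WebRTC` component is via Twilio's Network Traversal Service. A rough sketch - the environment variable names are placeholders:\n\n```python\nimport os\n\nfrom twilio.rest import Client\n\n# Assumes TWILIO_ACCOUNT_SID / TWILIO_AUTH_TOKEN are set for your account.\nclient = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])\ntoken = client.tokens.create()  # short-lived STUN/TURN credentials\n\nrtc_configuration = {\n    "iceServers": token.ice_servers,\n    "iceTransportPolicy": "relay",\n}\n```\n\n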
Create a free account on Twilio. If you're not familiar with TURN servers, consult this [guide](https://www.twilio.com/docs/stun-turn/faq#faq-what-is-nat).\n\n", "heading1": "Setting up", "source_page_url": "https://gradio.app/guides/object-detection-from-webcam-with-webrtc", "source_page_title": "Streaming - Object Detection From Webcam With Webrtc Guide"}, {"text": "We'll download the YOLOv10 model from the Hugging Face hub and instantiate a custom inference class to use this model. \n\nThe implementation of the inference class isn't covered in this guide, but you can find the source code [here](https://huggingface.co/spaces/freddyaboulton/webrtc-yolov10n/blob/main/inference.py#L9) if you're interested. This implementation borrows heavily from this [github repository](https://github.com/ibaiGorordo/ONNX-YOLOv8-Object-Detection).\n\nWe're using the `yolov10-n` variant because it has the lowest latency. See the [Performance](https://github.com/THU-MIG/yolov10?tab=readme-ov-file#performance) section of the README in the YOLOv10 GitHub repository.\n\n```python\nimport cv2\nfrom huggingface_hub import hf_hub_download\nfrom inference import YOLOv10\n\nmodel_file = hf_hub_download(\n    repo_id="onnx-community/yolov10n", filename="onnx/model.onnx"\n)\n\nmodel = YOLOv10(model_file)\n\ndef detection(image, conf_threshold=0.3):\n    image = cv2.resize(image, (model.input_width, model.input_height))\n    new_image = model.detect_objects(image, conf_threshold)\n    return new_image\n```\n\nOur inference function, `detection`, accepts a numpy array from the webcam and a desired confidence threshold. Object detection models like YOLO identify many objects and assign a confidence score to each. The lower the confidence, the higher the chance of a false positive. We'll let users adjust the confidence threshold.\n\nThe function returns a numpy array corresponding to the same input image with all detected objects in bounding boxes.\n\n", "heading1": "The Inference Function", "source_page_url": "https://gradio.app/guides/object-detection-from-webcam-with-webrtc", "source_page_title": "Streaming - Object Detection From Webcam With Webrtc Guide"}, {"text": "The Gradio demo is straightforward, but we'll implement a few specific features:\n\n1. Use the `WebRTC` custom component to ensure input and output are sent to/from the server with WebRTC. \n2. The [WebRTC](https://github.com/freddyaboulton/gradio-webrtc) component will serve as both an input and output component.\n3. Utilize the `time_limit` parameter of the `stream` event. This parameter sets a processing time for each user's stream. In a multi-user setting, such as on Spaces, we'll stop processing the current user's stream after this period and move on to the next. \n\nWe'll also apply custom CSS to center the webcam and slider on the page.\n\n```python\nimport gradio as gr\nfrom gradio_webrtc import WebRTC\n\ncss = """.my-group {max-width: 600px !important; max-height: 600px !important;}\n          .my-column {display: flex !important; justify-content: center !important; align-items: center !important;}"""\n\nwith gr.Blocks(css=css) as demo:\n    gr.HTML(\n        """\n        <h1 style='text-align: center'>\n        YOLOv10 Webcam Stream (Powered by WebRTC \u26a1\ufe0f)\n        </h1>
\n \"\"\"\n )\n with gr.Column(elem_classes=[\"my-column\"]):\n with gr.Group(elem_classes=[\"my-group\"]):\n image = WebRTC(label=\"Stream\", rtc_configuration=rtc_configuration)\n conf_threshold = gr.Slider(\n label=\"Confidence Threshold\",\n minimum=0.0,\n maximum=1.0,\n step=0.05,\n value=0.30,\n )\n\n image.stream(\n fn=detection, inputs=[image, conf_threshold], outputs=[image], time_limit=10\n )\n\nif __name__ == \"__main__\":\n demo.launch()\n```\n\n", "heading1": "The Gradio Demo", "source_page_url": "https://gradio.app/guides/object-detection-from-webcam-with-webrtc", "source_page_title": "Streaming - Object Detection From Webcam With Webrtc Guide"}, {"text": "Our app is hosted on Hugging Face Spaces [here](https://huggingface.co/spaces/freddyaboulton/webrtc-yolov10n). \n\nYou can use this app as a starting point to build real-time image applications with Gradio. Don't hesitate to open issues in the space or in the [WebRTC component GitHub repo](https://github.com/freddyaboulton/gradio-webrtc) if you have any questions or encounter problems.", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/object-detection-from-webcam-with-webrtc", "source_page_title": "Streaming - Object Detection From Webcam With Webrtc Guide"}, {"text": "The frontend code should have, at minimum, three files:\n\n* `Index.svelte`: This is the main export and where your component's layout and logic should live.\n* `Example.svelte`: This is where the example view of the component is defined.\n\nFeel free to add additional files and subdirectories. \nIf you want to export any additional modules, remember to modify the `package.json` file\n\n```json\n\"exports\": {\n \".\": \"./Index.svelte\",\n \"./example\": \"./Example.svelte\",\n \"./package.json\": \"./package.json\"\n},\n```\n\n", "heading1": "The directory structure", "source_page_url": "https://gradio.app/guides/frontend", "source_page_title": "Custom Components - Frontend Guide"}, {"text": "Your component should expose the following props that will be passed down from the parent Gradio application.\n\n```typescript\nimport type { LoadingStatus } from \"@gradio/statustracker\";\nimport type { Gradio } from \"@gradio/utils\";\n\nexport let gradio: Gradio<{\n event_1: never;\n event_2: never;\n}>;\n\nexport let elem_id = \"\";\nexport let elem_classes: string[] = [];\nexport let scale: number | null = null;\nexport let min_width: number | undefined = undefined;\nexport let loading_status: LoadingStatus | undefined = undefined;\nexport let mode: \"static\" | \"interactive\";\n```\n\n* `elem_id` and `elem_classes` allow Gradio app developers to target your component with custom CSS and JavaScript from the Python `Blocks` class.\n\n* `scale` and `min_width` allow Gradio app developers to control how much space your component takes up in the UI.\n\n* `loading_status` is used to display a loading status over the component when it is the output of an event.\n\n* `mode` is how the parent Gradio app tells your component whether the `interactive` or `static` version should be displayed.\n\n* `gradio`: The `gradio` object is created by the parent Gradio app. It stores some application-level configuration that will be useful in your component, like internationalization. You must use it to dispatch events from your component.\n\nA minimal `Index.svelte` file would look like:\n\n```svelte\n\n\n\n\n\n\t{if loading_status}\n\t\t\n\t{/if}\n

{value}

\n\n```\n\n", "heading1": "The Index.svelte file", "source_page_url": "https://gradio.app/guides/frontend", "source_page_title": "Custom Components - Frontend Guide"}, {"text": "The `Example.svelte` file should expose the following props:\n\n```typescript\n    export let value: string;\n    export let type: "gallery" | "table";\n    export let selected = false;\n    export let index: number;\n```\n\n* `value`: The example value that should be displayed.\n\n* `type`: This is a variable that can be either `"gallery"` or `"table"` depending on how the examples are displayed. The `"gallery"` form is used when the examples correspond to a single input component, while the `"table"` form is used when a user has multiple input components, and the examples need to populate all of them. \n\n* `selected`: You can also adjust how the examples are displayed if a user "selects" a particular example by using the selected variable.\n\n* `index`: The current index of the selected value.\n\n* Any additional props your "non-example" component takes!\n\nThis is the `Example.svelte` file for the `Radio` component:\n\n```svelte\n\n\n\n\t{value}\n\n\n\n```\n\n", "heading1": "The Example.svelte file", "source_page_url": "https://gradio.app/guides/frontend", "source_page_title": "Custom Components - Frontend Guide"}, {"text": "If your component deals with files, these files **should** be uploaded to the backend server. \nThe `@gradio/client` npm package provides the `upload` and `prepare_files` utility functions to help you do this.\n\nThe `prepare_files` function will convert the browser's `File` datatype to gradio's internal `FileData` type.\nYou should use the `FileData` data in your component to keep track of uploaded files.\n\nThe `upload` function will upload an array of `FileData` values to the server.\n\nHere's an example of loading files from an `<input>` element when its value changes.\n\n\n```svelte\n\n\n\n```\n\nThe component exposes a prop named `root`. \nThis is passed down by the parent gradio app and it represents the base url that the files will be uploaded to and fetched from.\n\nFor WASM support, you should get the upload function from the `Context` and pass that as the third parameter of the `upload` function.\n\n```typescript\n\n```\n\n", "heading1": "Handling Files", "source_page_url": "https://gradio.app/guides/frontend", "source_page_title": "Custom Components - Frontend Guide"}, {"text": "Most of Gradio's frontend components are published on [npm](https://www.npmjs.com/), the javascript package repository.\nThis means that you can use them to save yourself time while incorporating common patterns in your component, like uploading files.\nFor example, the `@gradio/upload` package has `Upload` and `ModifyUpload` components for properly uploading files to the Gradio server. \nHere is how you can use them to create a user interface to upload and display PDF files.\n\n```svelte\n\n\n\n{#if value === null && interactive}\n    \n    \n    \n{:else if value !== null}\n    {#if interactive}\n        \n    {/if}\n    \n{:else}\n    \t\n{/if}\n```\n\nYou can also combine existing Gradio components to create entirely unique experiences.\nLike rendering a gallery of chatbot conversations. 
\nThe possibilities are endless; please read the documentation on our javascript packages [here](https://gradio.app/main/docs/js).\nWe'll be adding more packages and documentation over the coming weeks!\n\n", "heading1": "Leveraging Existing Gradio Components", "source_page_url": "https://gradio.app/guides/frontend", "source_page_title": "Custom Components - Frontend Guide"}, {"text": "You can explore our component library via Storybook. You'll be able to interact with our components and see them in their various states.\n\nFor those interested in design customization, we provide the CSS variables consisting of our color palette, radii, spacing, and the icons we use - so you can easily match up your custom component with the style of our core components. This Storybook will be regularly updated with any new additions or changes.\n\n[Storybook Link](https://gradio.app/main/docs/js/storybook)\n\n", "heading1": "Matching Gradio Core's Design System", "source_page_url": "https://gradio.app/guides/frontend", "source_page_title": "Custom Components - Frontend Guide"}, {"text": "If you want to make use of the vast vite ecosystem, you can use the `gradio.config.js` file to configure your component's build process. This allows you to make use of tools like tailwindcss, mdsvex, and more.\n\nCurrently, it is possible to configure the following:\n\nVite options:\n- `plugins`: A list of vite plugins to use.\n\nSvelte options:\n- `preprocess`: A list of svelte preprocessors to use.\n- `extensions`: A list of file extensions to compile to `.svelte` files.\n- `build.target`: The target to build for, this may be necessary to support newer javascript features. See the [esbuild docs](https://esbuild.github.io/api/#target) for more information.\n\nThe `gradio.config.js` file should be placed in the root of your component's `frontend` directory. A default config file is created for you when you create a new component. But you can also create your own config file, if one doesn't exist, and use it to customize your component's build process.\n\nExample for a Vite plugin\n\nCustom components can use Vite plugins to customize the build process. Check out the [Vite Docs](https://vitejs.dev/guide/using-plugins.html) for more information. \n\nHere we configure [TailwindCSS](https://tailwindcss.com), a utility-first CSS framework. Setup is easiest using the version 4 prerelease. \n\n```\nnpm install tailwindcss@next @tailwindcss/vite@next\n```\n\nIn `gradio.config.js`:\n\n```typescript\nimport tailwindcss from "@tailwindcss/vite";\nexport default {\n    plugins: [tailwindcss()]\n};\n```\n\nThen create a `style.css` file with the following content:\n\n```css\n@import "tailwindcss";\n```\n\nImport this file into `Index.svelte`. 
Note that you need to import the css file containing the `@import` statement; you cannot just use a `<style>` tag with `@import` directly.\n```\n\nNow import `PdfUploadText.svelte` in your `Example.svelte` file:\n\n```svelte\n\n\n\t\n\n\n\n```\n\n\nTip: Exercise for the reader - reduce the code duplication between `Index.svelte` and `Example.svelte` \ud83d\ude0a\n\n\nYou will not be able to render examples until we make some changes to the backend code in the next step!\n\n", "heading1": "Step 8.5: The Example view", "source_page_url": "https://gradio.app/guides/pdf-component-example", "source_page_title": "Custom Components - Pdf Component Example Guide"}, {"text": "The backend changes needed are smaller.\nWe're almost done!\n\nWhat we're going to do is:\n* Add `change` and `upload` events to our component.\n* Add a `height` property to let users control the height of the PDF.\n* Set the `data_model` of our component to be `FileData`. This is so that Gradio can automatically cache and safely serve any files that are processed by our component.\n* Modify the `preprocess` method to return a string corresponding to the path of our uploaded PDF.\n* Modify the `postprocess` to turn a path to a PDF created in an event handler into a `FileData`.\n\nWhen all is said and done, your component's backend code should look like this:\n\n```python\nfrom __future__ import annotations\nfrom typing import Any, Callable, TYPE_CHECKING\n\nfrom gradio.components.base import Component\nfrom gradio.data_classes import FileData\nfrom gradio.i18n import I18nData\nfrom gradio import processing_utils\nif TYPE_CHECKING:\n    from gradio.components import Timer\n\nclass PDF(Component):\n\n    EVENTS = ["change", "upload"]\n\n    data_model = FileData\n\n    def __init__(self, value: Any = None, *,\n                 height: int | None = None,\n                 label: str | I18nData | None = None,\n                 info: str | I18nData | None = None,\n                 show_label: bool | None = None,\n                 container: bool = True,\n                 scale: int | None = None,\n                 min_width: int | None = None,\n                 interactive: bool | None = None,\n                 visible: bool = True,\n                 elem_id: str | None = None,\n                 elem_classes: list[str] | str | None = None,\n                 render: bool = True,\n                 load_fn: Callable[..., Any] | None = None,\n                 every: Timer | float | None = None):\n        super().__init__(value, label=label, info=info,\n                         show_label=show_label, container=container,\n                         scale=scale, min_width=min_width,\n                         interactive=interactive, visible=visible,\n                         ", "heading1": "Step 9: The backend", "source_page_url": "https://gradio.app/guides/pdf-component-example", "source_page_title": "Custom Components - Pdf Component Example Guide"}, {"text": "                         show_label=show_label, container=container,\n                         scale=scale, min_width=min_width,\n                         interactive=interactive, visible=visible,\n                         elem_id=elem_id, elem_classes=elem_classes,\n                         render=render, load_fn=load_fn, every=every)\n        self.height = height\n\n    def preprocess(self, payload: FileData) -> str:\n        return payload.path\n\n    def postprocess(self, value: str | None) -> FileData:\n        if not value:\n            return None\n        return FileData(path=value)\n\n    def example_payload(self):\n        return "https://gradio-builds.s3.amazonaws.com/assets/pdf-guide/fw9.pdf"\n\n    def example_value(self):\n        return "https://gradio-builds.s3.amazonaws.com/assets/pdf-guide/fw9.pdf"\n```\n\n", "heading1": "Step 9: The backend", "source_page_url": "https://gradio.app/guides/pdf-component-example", "source_page_title": "Custom Components - Pdf Component Example Guide"}, {"text": "To test our backend code, let's add a more complex demo that performs Document Question and Answering with huggingface transformers.\n\nIn our `demo` directory, create a `requirements.txt` file with 
the following packages:\n\n```\ntorch\ntransformers\npdf2image\npytesseract\n```\n\n\nTip: Remember to install these yourself and restart the dev server! You may need to install extra non-python dependencies for `pdf2image`. See [here](https://pypi.org/project/pdf2image/). Feel free to write your own demo if you have trouble.\n\n\n```python\nimport gradio as gr\nfrom gradio_pdf import PDF\nfrom pdf2image import convert_from_path\nfrom transformers import pipeline\nfrom pathlib import Path\n\ndir_ = Path(__file__).parent\n\np = pipeline(\n    "document-question-answering",\n    model="impira/layoutlm-document-qa",\n)\n\ndef qa(question: str, doc: str) -> str:\n    img = convert_from_path(doc)[0]\n    output = p(img, question)\n    return sorted(output, key=lambda x: x["score"], reverse=True)[0]['answer']\n\n\ndemo = gr.Interface(\n    qa,\n    [gr.Textbox(label="Question"), PDF(label="Document")],\n    gr.Textbox(),\n)\n\ndemo.launch()\n```\n\nSee our demo in action below!\n\n\n\nFinally, let's build our component with `gradio cc build` and publish it with the `gradio cc publish` command!\nThis will guide you through the process of uploading your component to [PyPi](https://pypi.org/) and [HuggingFace Spaces](https://huggingface.co/spaces).\n\n\nTip: You may need to add the following lines to the `Dockerfile` of your HuggingFace Space.\n\n```Dockerfile\nRUN mkdir -p /tmp/cache/\nRUN chmod a+rwx -R /tmp/cache/\nRUN apt-get update && apt-get install -y poppler-utils tesseract-ocr\n\nENV TRANSFORMERS_CACHE=/tmp/cache/\n```\n\n", "heading1": "Step 10: Add a demo and publish!", "source_page_url": "https://gradio.app/guides/pdf-component-example", "source_page_title": "Custom Components - Pdf Component Example Guide"}, {"text": "In order to use our new component in **any** gradio 4.0 app, simply install it with pip, e.g. `pip install gradio-pdf`. Then you can use it like the built-in `gr.File()` component (except that it will only accept and display PDF files).\n\nHere is a simple demo with the Blocks api:\n\n```python\nimport gradio as gr\nfrom gradio_pdf import PDF\n\nwith gr.Blocks() as demo:\n    pdf = PDF(label="Upload a PDF", interactive=True)\n    name = gr.Textbox()\n    pdf.upload(lambda f: f, pdf, name)\n\ndemo.launch()\n```\n\n\nI hope you enjoyed this tutorial!\nThe complete source code for our component is [here](https://huggingface.co/spaces/freddyaboulton/gradio_pdf/tree/main/src).\nPlease don't hesitate to reach out to the gradio community on the [HuggingFace Discord](https://discord.gg/hugging-face-879548962464493619) if you get stuck.\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/pdf-component-example", "source_page_title": "Custom Components - Pdf Component Example Guide"}, {"text": "The documentation will be generated when running `gradio cc build`. You can pass the `--no-generate-docs` argument to turn off this behaviour.\n\nThere is also a standalone `docs` command that allows for greater customisation. 
If you are running this command manually, it should be run _after_ the `version` in your `pyproject.toml` has been bumped but before building the component.\n\nAll arguments are optional.\n\n```bash\ngradio cc docs\n  path            The directory of the custom component.\n  --demo-dir      Path to the demo directory.\n  --demo-name     Name of the demo file.\n  --space-url     URL of the Hugging Face Space to link to.\n  --generate-space       create a documentation space.\n  --no-generate-space    do not create a documentation space.\n  --readme-path   Path to the README.md file.\n  --generate-readme      create a README.md file.\n  --no-generate-readme   do not create a README.md file.\n  --suppress-demo-check  suppress validation checks and warnings\n```\n\n", "heading1": "How do I use it?", "source_page_url": "https://gradio.app/guides/documenting-custom-components", "source_page_title": "Custom Components - Documenting Custom Components Guide"}, {"text": "The `gradio cc docs` command will generate an interactive Gradio app and a static README file with various features. You can see an example here:\n\n- [Gradio app deployed on Hugging Face Spaces]()\n- [README.md rendered by GitHub]()\n\nThe README.md and space both have the following features:\n\n- A description.\n- Installation instructions.\n- A fully functioning code snippet.\n- Optional links to PyPi, GitHub, and Hugging Face Spaces.\n- API documentation including:\n  - An argument table for component initialisation showing types, defaults, and descriptions.\n  - A description of how the component affects the user's predict function.\n  - A table of events and their descriptions.\n  - Any additional interfaces or classes that may be used during initialisation or in the pre- or post- processors.\n\nAdditionally, the Gradio app includes:\n\n- A live demo.\n- A richer, interactive version of the parameter tables.\n- Nicer styling!\n\n", "heading1": "What gets generated?", "source_page_url": "https://gradio.app/guides/documenting-custom-components", "source_page_title": "Custom Components - Documenting Custom Components Guide"}, {"text": "The documentation generator uses existing standards to extract the necessary information, namely Type Hints and Docstrings. There are no Gradio-specific APIs for documentation, so following best practices will generally yield the best results.\n\nIf you already use type hints and docstrings in your component source code, you don't need to do much to benefit from this feature, but there are some details that you should be aware of.\n\nPython version\n\nTo get the best documentation experience, you need to use Python `3.10` or greater when generating documentation. This is because some introspection features used to generate the documentation were only added in `3.10`.\n\nType hints\n\nPython type hints are used extensively to provide helpful information for users. \n\n<details>\n<summary>
\n What are type hints?\n</summary>\n\nIf you need to become more familiar with type hints in Python, they are a simple way to express what Python types are expected for arguments and return values of functions and methods. They provide a helpful in-editor experience, aid in maintenance, and integrate with various other tools. These types can be simple primitives, like `list`, `str`, `bool`; they could be more compound types like `list[str]`, `str | None` or `tuple[str, float | int]`; or they can be more complex types using utility classes like [`TypedDict`](https://peps.python.org/pep-0589/#abstract).\n\n[Read more about type hints in Python.](https://realpython.com/lessons/type-hinting/)\n\n</details>
\n\nWhat do I need to add hints to?\n\nYou do not need to add type hints to every part of your code. For the documentation to work correctly, you will need to add type hints to the following component methods:\n\n- `__init__` parameters should be typed.\n- `postprocess` parameters and return value should be typed.\n- `preprocess` parameters and return value should be typed.\n\nIf you are using `gradio cc create`, these types should already exist, but you may need to tweak them based on any changes you ma", "heading1": "What do I need to do?", "source_page_url": "https://gradio.app/guides/documenting-custom-components", "source_page_title": "Custom Components - Documenting Custom Components Guide"}, {"text": "be typed.\n- `preprocess` parameters and return value should be typed.\n\nIf you are using `gradio cc create`, these types should already exist, but you may need to tweak them based on any changes you make.\n\n`__init__`\n\nHere, you only need to type the parameters. If you have cloned a template with `gradio cc create`, these should already be in place. You will only need to add new hints for anything you have added or changed:\n\n```py\ndef __init__(\n    self,\n    value: str | None = None,\n    *,\n    sources: Literal["upload", "microphone"] = "upload",\n    every: Timer | float | None = None,\n    ...\n):\n    ...\n```\n\n`preprocess` and `postprocess`\n\nThe `preprocess` and `postprocess` methods determine the value passed to the user function and the value that needs to be returned.\n\nEven if the design of your component is primarily as an input or an output, it is worth adding type hints to both the input parameters and the return values because Gradio has no way of limiting how components can be used.\n\nIn this case, we specifically care about:\n\n- The return type of `preprocess`.\n- The input type of `postprocess`.\n\n```py\ndef preprocess(\n    self, payload: FileData | None  # input is optional\n) -> tuple[int, str] | str | None:\n\n# user function input is the preprocess return \u25b2\n# user function output is the postprocess input \u25bc\n\ndef postprocess(\n    self, value: tuple[int, str] | None\n) -> FileData | bytes | None:  # return is optional\n    ...\n```\n\nDocstrings\n\nDocstrings are also used extensively to extract more meaningful, human-readable descriptions of certain parts of the API.\n\n<details>\n<summary>
\n What are docstrings?\n</summary>\n\nIf you need to become more familiar with docstrings in Python, they are a way to annotate parts of your code with human-readable decisions and explanations. They offer a rich in-editor experience like type hints, but unlike type hints, they don't have any specific syntax requirements. They are simple strings and can take almost any form. The only requirement i", "heading1": "What do I need to do?", "source_page_url": "https://gradio.app/guides/documenting-custom-components", "source_page_title": "Custom Components - Documenting Custom Components Guide"}, {"text": "offer a rich in-editor experience like type hints, but unlike type hints, they don't have any specific syntax requirements. They are simple strings and can take almost any form. The only requirement is where they appear. Docstrings should be "a string literal that occurs as the first statement in a module, function, class, or method definition".\n\n[Read more about Python docstrings.](https://peps.python.org/pep-0257/#what-is-a-docstring)\n\n</details>
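\n\nAnticipating the structure described next, here is a short sketch of how type hints and a docstring typically sit together on a hypothetical component's `preprocess` method (the class and types are illustrative):\n\n```py\nfrom gradio.components.base import Component\nfrom gradio.data_classes import FileData\n\nclass MyComponent(Component):\n    def preprocess(self, payload: FileData | None) -> str | None:\n        """\n        Parameters:\n            payload: The file uploaded in the frontend, or None if nothing was uploaded.\n        Returns:\n            A local filepath passed to the user's function, or None.\n        """\n        return payload.path if payload else None\n```\n\n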
\n\nWhile docstrings don't have any syntax requirements, we need a particular structure for documentation purposes.\n\nAs with type hints, the specific information we care about is as follows:\n\n- `__init__` parameter docstrings.\n- `preprocess` return docstrings.\n- `postprocess` input parameter docstrings.\n\nEverything else is optional.\n\nDocstrings should always take this format to be picked up by the documentation generator:\n\nClasses\n\n```py\n"""\nA description of the class.\n\nThis can span multiple lines and can _contain_ *markdown*.\n"""\n```\n\nMethods and functions \n\nMarkdown in these descriptions will not be converted into formatted text.\n\n```py\n"""\nParameters:\n    param_one: A description for this parameter.\n    param_two: A description for this parameter.\nReturns:\n    A description for this return value.\n"""\n```\n\nEvents\n\nIn custom components, events are expressed as a list stored on the `events` field of the component class. While we do not need types for events, we _do_ need a human-readable description so users can understand the behaviour of the event.\n\nTo facilitate this, we must create the event in a specific way.\n\nThere are two ways to add events to a custom component.\n\nBuilt-in events\n\nGradio comes with a variety of built-in events that may be enough for your component. If you are using built-in events, you do not need to do anything as they already have descriptions we can extract:\n\n```py\nfrom gradio.events import Events\n\nclass ParamViewer(Component):\n    ...\n\n    EVENTS = [\n        Events.change,\n        Events.up", "heading1": "What do I need to do?", "source_page_url": "https://gradio.app/guides/documenting-custom-components", "source_page_title": "Custom Components - Documenting Custom Components Guide"}, {"text": "do not need to do anything as they already have descriptions we can extract:\n\n```py\nfrom gradio.events import Events\n\nclass ParamViewer(Component):\n    ...\n\n    EVENTS = [\n        Events.change,\n        Events.upload,\n    ]\n```\n\nCustom events\n\nYou can define a custom event if the built-in events are unsuitable for your use case. This is a straightforward process, but you must create the event in this way for docstrings to work correctly:\n\n```py\nfrom gradio.events import Events, EventListener\n\nclass ParamViewer(Component):\n    ...\n\n    EVENTS = [\n        Events.change,\n        EventListener(\n            "bingbong",\n            doc="This listener is triggered when the user does a bingbong."\n        )\n    ]\n```\n\nDemo\n\nThe `demo/app.py`, often used for developing the component, generates the live demo and code snippet. The only strict rule here is that the `demo.launch()` command must be contained within a `__name__ == "__main__"` conditional as below:\n\n```py\nif __name__ == "__main__":\n    demo.launch()\n```\n\nThe documentation generator will scan for such a clause and error if absent. If you are _not_ launching the demo inside the `demo/app.py`, then you can pass `--suppress-demo-check` to turn off this check.\n\nDemo recommendations\n\nAlthough there are no additional rules, there are some best practices you should bear in mind to get the best experience from the documentation generator.\n\nThese are only guidelines, and every situation is unique, but they are sound principles to remember.\n\nKeep the demo compact\n\nCompact demos look better and make it easier for users to understand what the demo does. Try to remove as many extraneous UI elements as possible to focus the users' attention on the core use case. 
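\n\nAs a concrete illustration of these recommendations, a compact `demo/app.py` might look like the following sketch (the component name and package are hypothetical):\n\n```py\nimport gradio as gr\nfrom gradio_mycomponent import MyComponent  # hypothetical custom component package\n\nwith gr.Blocks() as demo:\n    MyComponent(label="Demo", interactive=True)\n\nif __name__ == "__main__":\n    demo.launch()  # required clause; the docs generator checks for it\n```\n\n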
\n\nSometimes, it might make sense to have a `demo/app.py` just for the docs and an additional, more complex app for your testing purposes. You can also create other spaces, showcasing more complex examples and linking to them from the main class docstring or the `pyproject.toml` description.\n\n", "heading1": "What do I need to do?", "source_page_url": "https://gradio.app/guides/documenting-custom-components", "source_page_title": "Custom Components - Documenting Custom Components Guide"}, {"text": "ore complex app for your testing purposes. You can also create other spaces, showcasing more complex examples and linking to them from the main class docstring or the `pyproject.toml` description.\n\nKeep the code concise\n\nThe 'getting started' snippet utilises the demo code, which should be as short as possible to keep users engaged and avoid confusion.\n\nIt isn't the job of the sample snippet to demonstrate the whole API; this snippet should be the shortest path to success for a new user. It should be easy to type or copy-paste and easy to understand. Explanatory comments should be brief and to the point.\n\nAvoid external dependencies\n\nAs mentioned above, users should be able to copy-paste a snippet and have a fully working app. Try to avoid third-party library dependencies to facilitate this.\n\nYou should carefully consider any examples; avoiding examples that require additional files or that make assumptions about the environment is generally a good idea.\n\nEnsure the `demo` directory is self-contained\n\nOnly the `demo` directory will be uploaded to Hugging Face spaces in certain instances, as the component will be installed via PyPi if possible. It is essential that this directory is self-contained and any files needed for the correct running of the demo are present.\n\nAdditional URLs\n\nThe documentation generator will generate a few buttons, providing helpful information and links to users. They are obtained automatically in some cases, but some need to be explicitly included in the `pyproject.toml`. 
\n\n- PyPi Version and link - This is generated automatically.\n- GitHub Repository - This is populated via the `pyproject.toml`'s `project.urls.repository`.\n- Hugging Face Space - This is populated via the `pyproject.toml`'s `project.urls.space`.\n\nAn example `pyproject.toml` urls section might look like this:\n\n```toml\n[project.urls]\nrepository = "https://github.com/user/repo-name"\nspace = "https://huggingface.co/spaces/user/space-name"\n```", "heading1": "What do I need to do?", "source_page_url": "https://gradio.app/guides/documenting-custom-components", "source_page_title": "Custom Components - Documenting Custom Components Guide"}, {"text": "pyproject.toml` urls section might look like this:\n\n```toml\n[project.urls]\nrepository = "https://github.com/user/repo-name"\nspace = "https://huggingface.co/spaces/user/space-name"\n```", "heading1": "What do I need to do?", "source_page_url": "https://gradio.app/guides/documenting-custom-components", "source_page_title": "Custom Components - Documenting Custom Components Guide"}, {"text": "For this demo we will be tweaking the existing Gradio `Chatbot` component to display text and media files in the same message.\nLet's create a new custom component directory by templating off of the `Chatbot` component source code.\n\n```bash\ngradio cc create MultimodalChatbot --template Chatbot\n```\n\nAnd we're ready to go!\n\nTip: Make sure to modify the `Author` key in the `pyproject.toml` file.\n\n", "heading1": "Part 1 - Creating our project", "source_page_url": "https://gradio.app/guides/multimodal-chatbot-part1", "source_page_title": "Custom Components - Multimodal Chatbot Part1 Guide"}, {"text": "Open up the `multimodalchatbot.py` file in your favorite code editor and let's get started modifying the backend of our component.\n\nThe first thing we will do is create the `data_model` of our component.\nThe `data_model` is the data format that your python component will receive and send to the javascript client running the UI.\nYou can read more about the `data_model` in the [backend guide](./backend).\n\nFor our component, each chatbot message will consist of two keys: a `text` key that displays the text message and an optional list of media files that can be displayed underneath the text.\n\nImport the `FileData`, `GradioModel`, and `GradioRootModel` classes from `gradio.data_classes` and modify the existing `ChatbotData` class to look like the following:\n\n```python\nfrom typing import List, Optional, Tuple\n\nfrom gradio.data_classes import FileData, GradioModel, GradioRootModel\n\nclass FileMessage(GradioModel):\n    file: FileData\n    alt_text: Optional[str] = None\n\n\nclass MultimodalMessage(GradioModel):\n    text: Optional[str] = None\n    files: Optional[List[FileMessage]] = None\n\n\nclass ChatbotData(GradioRootModel):\n    root: List[Tuple[Optional[MultimodalMessage], Optional[MultimodalMessage]]]\n\n\nclass MultimodalChatbot(Component):\n    ...\n    data_model = ChatbotData\n```\n\n\nTip: The `data_model`s are implemented using `Pydantic V2`. Read the documentation [here](https://docs.pydantic.dev/latest/).\n\nWe've done the hardest part already!\n\n", "heading1": "Part 2a - The backend data_model", "source_page_url": "https://gradio.app/guides/multimodal-chatbot-part1", "source_page_title": "Custom Components - Multimodal Chatbot Part1 Guide"}, {"text": "For the `preprocess` method, we will keep it simple and pass a list of `MultimodalMessage`s to the python functions that use this component as input. 
\nThis will let users of our component access the chatbot data with `.text` and `.files` attributes.\nThis is a design choice that you can modify in your implementation!\nWe can return the list of messages with the `root` property of the `ChatbotData` like so:\n\n```python\ndef preprocess(\n self,\n payload: ChatbotData | None,\n) -> List[MultimodalMessage] | None:\n if payload is None:\n return payload\n return payload.root\n```\n\n\nTip: Learn about the reasoning behind the `preprocess` and `postprocess` methods in the [key concepts guide](./key-component-concepts)\n\nIn the `postprocess` method we will coerce each message returned by the python function to be a `MultimodalMessage` class. \nWe will also clean up any indentation in the `text` field so that it can be properly displayed as markdown in the frontend.\n\nWe can leave the `postprocess` method as is and modify the `_postprocess_chat_messages`\n\n```python\ndef _postprocess_chat_messages(\n self, chat_message: MultimodalMessage | dict | None\n) -> MultimodalMessage | None:\n if chat_message is None:\n return None\n if isinstance(chat_message, dict):\n chat_message = MultimodalMessage(**chat_message)\n chat_message.text = inspect.cleandoc(chat_message.text or \"\")\n for file_ in chat_message.files:\n file_.file.mime_type = client_utils.get_mimetype(file_.file.path)\n return chat_message\n```\n\nBefore we wrap up with the backend code, let's modify the `example_value` and `example_payload` method to return a valid dictionary representation of the `ChatbotData`:\n\n```python\ndef example_value(self) -> Any:\n return [[{\"text\": \"Hello!\", \"files\": []}, None]]\n\ndef example_payload(self) -> Any:\n return [[{\"text\": \"Hello!\", \"files\": []}, None]]\n```\n\nCongrats - the backend is complete!\n\n", "heading1": "Part 2b - The pre and postprocess methods", "source_page_url": "https://gradio.app/guides/multimodal-chatbot-part1", "source_page_title": "Custom Components - Multimodal Chatbot Part1 Guide"}, {"text": "The frontend for the `Chatbot` component is divided into two parts - the `Index.svelte` file and the `shared/Chatbot.svelte` file.\nThe `Index.svelte` file applies some processing to the data received from the server and then delegates the rendering of the conversation to the `shared/Chatbot.svelte` file.\nFirst we will modify the `Index.svelte` file to apply processing to the new data type the backend will return.\n\nLet's begin by porting our custom types from our python `data_model` to typescript.\nOpen `frontend/shared/utils.ts` and add the following type definitions at the top of the file:\n\n```ts\nexport type FileMessage = {\n\tfile: FileData;\n\talt_text?: string;\n};\n\n\nexport type MultimodalMessage = {\n\ttext: string;\n\tfiles?: FileMessage[];\n}\n```\n\nNow let's import them in `Index.svelte` and modify the type annotations for `value` and `_value`.\n\n```ts\nimport type { FileMessage, MultimodalMessage } from \"./shared/utils\";\n\nexport let value: [\n MultimodalMessage | null,\n MultimodalMessage | null\n][] = [];\n\nlet _value: [\n MultimodalMessage | null,\n MultimodalMessage | null\n][];\n```\n\nWe need to normalize each message to make sure each file has a proper URL to fetch its contents from.\nWe also need to format any embedded file links in the `text` key.\nLet's add a `process_message` utility function and apply it whenever the `value` changes.\n\n```ts\nfunction process_message(msg: MultimodalMessage | null): MultimodalMessage | null {\n if (msg === null) {\n return msg;\n }\n msg.text = 
redirect_src_url(msg.text);\n msg.files = msg.files.map(normalize_messages);\n return msg;\n}\n\n$: _value = value\n ? value.map(([user_msg, bot_msg]) => [\n process_message(user_msg),\n process_message(bot_msg)\n ])\n : [];\n```\n\n", "heading1": "Part 3a - The Index.svelte file", "source_page_url": "https://gradio.app/guides/multimodal-chatbot-part1", "source_page_title": "Custom Components - Multimodal Chatbot Part1 Guide"}, {"text": "3. That's it!\n\nYour website now has a chat widget that connects to your Gradio app! Users can click the chat button to open the widget and start interacting with your app.\n\nCustomization\n\nYou can customize the appearance of the widget by modifying the CSS. Some ideas:\n- Change the colors to match your website's theme\n- Adjust the size and position of the widget\n- Add animations for opening/closing\n- Modify the message styling\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/Screen%20Recording%202024-12-19%20at%203.32.46%E2%80%AFPM.gif)\n\nIf you build a website widget from a Gradio app, feel free to share it on X and tag [the Gradio account](https://x.com/Gradio), and we are hap", "heading1": "Prerequisites", "source_page_url": "https://gradio.app/guides/creating-a-website-widget-from-a-gradio-chatbot", "source_page_title": "Chatbots - Creating A Website Widget From A Gradio Chatbot Guide"}, {"text": "%20Recording%202024-12-19%20at%203.32.46%E2%80%AFPM.gif)\n\nIf you build a website widget from a Gradio app, feel free to share it on X and tag [the Gradio account](https://x.com/Gradio), and we are happy to help you amplify!", "heading1": "Prerequisites", "source_page_url": "https://gradio.app/guides/creating-a-website-widget-from-a-gradio-chatbot", "source_page_title": "Chatbots - Creating A Website Widget From A Gradio Chatbot Guide"}, {"text": "Chatbots are a popular application of large language models (LLMs). Using Gradio, you can easily build a chat application and share it with your users, or try it yourself using an intuitive UI.\n\nThis tutorial uses `gr.ChatInterface()`, which is a high-level abstraction that allows you to create your chatbot UI fast, often with a _few lines of Python_. It can be easily adapted to support multimodal chatbots, or chatbots that require further customization.\n\n**Prerequisites**: please make sure you are using the latest version of Gradio:\n\n```bash\n$ pip install --upgrade gradio\n```\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "If you have a chat server serving an OpenAI-API compatible endpoint (such as Ollama), you can spin up a ChatInterface in a single line of Python. First, also run `pip install openai`. Then, with your own URL, model, and optional token:\n\n```python\nimport gradio as gr\n\ngr.load_chat(\"http://localhost:11434/v1/\", model=\"llama3.2\", token=\"***\").launch()\n```\n\nRead about `gr.load_chat` in [the docs](https://www.gradio.app/docs/gradio/load_chat). 
If you have your own model, keep reading to see how to create an application around any chat model in Python!\n\n", "heading1": "Note for OpenAI-API compatible endpoints", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "To create a chat application with `gr.ChatInterface()`, the first thing you should do is define your **chat function**. In the simplest case, your chat function should accept two arguments: `message` and `history` (the arguments can be named anything, but must be in this order).\n\n- `message`: a `str` representing the user's most recent message.\n- `history`: a list of openai-style dictionaries with `role` and `content` keys, representing the previous conversation history. May also include additional keys representing message metadata.\n\nFor example, the `history` could look like this:\n\n```python\n[\n {\"role\": \"user\", \"content\": \"What is the capital of France?\"},\n {\"role\": \"assistant\", \"content\": \"Paris\"}\n]\n```\n\nwhile the next `message` would be:\n\n```py\n\"And what is its largest city?\"\n```\n\nYour chat function simply needs to return: \n\n* a `str` value, which is the chatbot's response based on the chat `history` and most recent `message`, for example, in this case:\n\n```\nParis is also the largest city.\n```\n\nLet's take a look at a few example chat functions:\n\n**Example: a chatbot that randomly responds with yes or no**\n\nLet's write a chat function that responds `Yes` or `No` randomly.\n\nHere's our chat function:\n\n```python\nimport random\n\ndef random_response(message, history):\n return random.choice([\"Yes\", \"No\"])\n```\n\nNow, we can plug this into `gr.ChatInterface()` and call the `.launch()` method to create the web interface:\n\n```python\nimport gradio as gr\n\ngr.ChatInterface(\n fn=random_response, \n type=\"messages\"\n).launch()\n```\n\nTip: Always set type=\"messages\" in gr.ChatInterface. The default value (type=\"tuples\") is deprecated and will be removed in a future version of Gradio.\n\nThat's it! Here's our running demo, try it out:\n\n$demo_chatinterface_random_response\n\n**Example: a chatbot that alternates between agreeing and disagreeing**\n\nOf course, the previous example was very simplistic, it didn't take user input or the previous history into account! Here's another", "heading1": "Defining a chat function", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "ample: a chatbot that alternates between agreeing and disagreeing**\n\nOf course, the previous example was very simplistic, it didn't take user input or the previous history into account! Here's another simple example showing how to incorporate a user's input as well as the history.\n\n```python\nimport gradio as gr\n\ndef alternatingly_agree(message, history):\n if len([h for h in history if h['role'] == \"assistant\"]) % 2 == 0:\n return f\"Yes, I do think that: {message}\"\n else:\n return \"I don't think so\"\n\ngr.ChatInterface(\n fn=alternatingly_agree, \n type=\"messages\"\n).launch()\n```\n\nWe'll look at more realistic examples of chat functions in our next Guide, which shows [examples of using `gr.ChatInterface` with popular LLMs](../guides/chatinterface-examples). 
\n\n", "heading1": "Defining a chat function", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "In your chat function, you can use `yield` to generate a sequence of partial responses, each replacing the previous ones. This way, you'll end up with a streaming chatbot. It's that simple!\n\n```python\nimport time\nimport gradio as gr\n\ndef slow_echo(message, history):\n for i in range(len(message)):\n time.sleep(0.3)\n yield \"You typed: \" + message[: i+1]\n\ngr.ChatInterface(\n fn=slow_echo, \n type=\"messages\"\n).launch()\n```\n\nWhile the response is streaming, the \"Submit\" button turns into a \"Stop\" button that can be used to stop the generator function.\n\nTip: Even though you are yielding the latest message at each iteration, Gradio only sends the \"diff\" of each message from the server to the frontend, which reduces latency and data consumption over your network.\n\n", "heading1": "Streaming chatbots", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "If you're familiar with Gradio's `gr.Interface` class, the `gr.ChatInterface` includes many of the same arguments that you can use to customize the look and feel of your Chatbot. For example, you can:\n\n- add a title and description above your chatbot using `title` and `description` arguments.\n- add a theme or custom css using `theme` and `css` arguments respectively.\n- add `examples` and even enable `cache_examples`, which make your Chatbot easier for users to try it out.\n- customize the chatbot (e.g. to change the height or add a placeholder) or textbox (e.g. to add a max number of characters or add a placeholder).\n\n**Adding examples**\n\nYou can add preset examples to your `gr.ChatInterface` with the `examples` parameter, which takes a list of string examples. Any examples will appear as \"buttons\" within the Chatbot before any messages are sent. If you'd like to include images or other files as part of your examples, you can do so by using this dictionary format for each example instead of a string: `{\"text\": \"What's in this image?\", \"files\": [\"cheetah.jpg\"]}`. Each file will be a separate message that is added to your Chatbot history.\n\nYou can change the displayed text for each example by using the `example_labels` argument. You can add icons to each example as well using the `example_icons` argument. Both of these arguments take a list of strings, which should be the same length as the `examples` list.\n\nIf you'd like to cache the examples so that they are pre-computed and the results appear instantly, set `cache_examples=True`.\n\n**Customizing the chatbot or textbox component**\n\nIf you want to customize the `gr.Chatbot` or `gr.Textbox` that compose the `ChatInterface`, then you can pass in your own chatbot or textbox components. 
Here's an example of how to apply the parameters we've discussed in this section:\n\n```python\nimport gradio as gr\n\ndef yes_man(message, history):\n if message.endswith(\"?\"):\n return \"Yes\"\n else:\n return \"Ask me anything", "heading1": "Customizing the Chat UI", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": " parameters we've discussed in this section:\n\n```python\nimport gradio as gr\n\ndef yes_man(message, history):\n if message.endswith(\"?\"):\n return \"Yes\"\n else:\n return \"Ask me anything!\"\n\ngr.ChatInterface(\n yes_man,\n type=\"messages\",\n chatbot=gr.Chatbot(height=300),\n textbox=gr.Textbox(placeholder=\"Ask me a yes or no question\", container=False, scale=7),\n title=\"Yes Man\",\n description=\"Ask Yes Man any question\",\n theme=\"ocean\",\n examples=[\"Hello\", \"Am I cool?\", \"Are tomatoes vegetables?\"],\n cache_examples=True,\n).launch()\n```\n\nHere's another example that adds a \"placeholder\" for your chat interface, which appears before the user has started chatting. The `placeholder` argument of `gr.Chatbot` accepts Markdown or HTML:\n\n```python\ngr.ChatInterface(\n yes_man,\n type=\"messages\",\n chatbot=gr.Chatbot(placeholder=\"Your Personal Yes-Man
Ask Me Anything\"),\n...\n```\n\nThe placeholder appears vertically and horizontally centered in the chatbot.\n\n", "heading1": "Customizing the Chat UI", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "You may want to add multimodal capabilities to your chat interface. For example, you may want users to be able to upload images or files to your chatbot and ask questions about them. You can make your chatbot \"multimodal\" by passing in a single parameter (`multimodal=True`) to the `gr.ChatInterface` class.\n\nWhen `multimodal=True`, the signature of your chat function changes slightly: the first parameter of your function (what we referred to as `message` above) should accept a dictionary consisting of the submitted text and uploaded files that looks like this: \n\n```py\n{\n \"text\": \"user input\", \n \"files\": [\n \"updated_file_1_path.ext\",\n \"updated_file_2_path.ext\", \n ...\n ]\n}\n```\n\nThis second parameter of your chat function, `history`, will be in the same openai-style dictionary format as before. However, if the history contains uploaded files, the `content` key for a file will be not a string, but rather a single-element tuple consisting of the filepath. Each file will be a separate message in the history. So after uploading two files and asking a question, your history might look like this:\n\n```python\n[\n {\"role\": \"user\", \"content\": (\"cat1.png\")},\n {\"role\": \"user\", \"content\": (\"cat2.png\")},\n {\"role\": \"user\", \"content\": \"What's the difference between these two images?\"},\n]\n```\n\nThe return type of your chat function does *not change* when setting `multimodal=True` (i.e. in the simplest case, you should still return a string value). We discuss more complex cases, e.g. returning files [below](returning-complex-responses).\n\nIf you are customizing a multimodal chat interface, you should pass in an instance of `gr.MultimodalTextbox` to the `textbox` parameter. You can customize the `MultimodalTextbox` further by passing in the `sources` parameter, which is a list of sources to enable. Here's an example that illustrates how to set up and customize and multimodal chat interface:\n \n\n```python\nimport gradio as gr\n\ndef count_images(message, hi", "heading1": "Multimodal Chat Interface", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "eter, which is a list of sources to enable. 
Here's an example that illustrates how to set up and customize a multimodal chat interface:\n \n\n```python\nimport gradio as gr\n\ndef count_images(message, history):\n num_images = len(message[\"files\"])\n total_images = 0\n for message in history:\n if isinstance(message[\"content\"], tuple):\n total_images += 1\n return f\"You just uploaded {num_images} images, total uploaded: {total_images+num_images}\"\n\ndemo = gr.ChatInterface(\n fn=count_images, \n type=\"messages\", \n examples=[\n {\"text\": \"No files\", \"files\": []}\n ], \n multimodal=True,\n textbox=gr.MultimodalTextbox(file_count=\"multiple\", file_types=[\"image\"], sources=[\"upload\", \"microphone\"])\n)\n\ndemo.launch()\n```\n\n", "heading1": "Multimodal Chat Interface", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "You may want to add additional inputs to your chat function and expose them to your users through the chat UI. For example, you could add a textbox for a system prompt, or a slider that sets the number of tokens in the chatbot's response. The `gr.ChatInterface` class supports an `additional_inputs` parameter which can be used to add additional input components.\n\nThe `additional_inputs` parameter accepts a component or a list of components. You can pass the component instances directly, or use their string shortcuts (e.g. `\"textbox\"` instead of `gr.Textbox()`). If you pass in component instances, and they have _not_ already been rendered, then the components will appear underneath the chatbot within a `gr.Accordion()`. \n\nHere's a complete example:\n\n$code_chatinterface_system_prompt\n\nIf the components you pass into the `additional_inputs` have already been rendered in a parent `gr.Blocks()`, then they will _not_ be re-rendered in the accordion. This provides flexibility in deciding where to lay out the input components. In the example below, we position the `gr.Textbox()` on top of the Chatbot UI, while keeping the slider underneath.\n\n```python\nimport gradio as gr\nimport time\n\ndef echo(message, history, system_prompt, tokens):\n response = f\"System prompt: {system_prompt}\\n Message: {message}.\"\n for i in range(min(len(response), int(tokens))):\n time.sleep(0.05)\n yield response[: i+1]\n\nwith gr.Blocks() as demo:\n system_prompt = gr.Textbox(\"You are helpful AI.\", label=\"System Prompt\")\n slider = gr.Slider(10, 100, render=False)\n\n gr.ChatInterface(\n echo, additional_inputs=[system_prompt, slider], type=\"messages\"\n )\n\ndemo.launch()\n```\n\n**Examples with additional inputs**\n\nYou can also add example values for your additional inputs. Pass in a list of lists to the `examples` parameter, where each inner list represents one sample, and each inner list should be `1 + len(additional_inputs)` long. The first element in the inner list should ", "heading1": "Additional Inputs", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "n a list of lists to the `examples` parameter, where each inner list represents one sample, and each inner list should be `1 + len(additional_inputs)` long. The first element in the inner list should be the example value for the chat message, and each subsequent element should be an example value for one of the additional inputs, in order. 
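\n\nAs a hedged sketch of this format (reusing the `echo` chat function and the two additional inputs from the example above; the example values themselves are illustrative):\n\n```python\ngr.ChatInterface(\n    echo,\n    type="messages",\n    additional_inputs=[system_prompt, slider],\n    # each inner list is [chat message, system prompt value, slider value],\n    # i.e. 1 + len(additional_inputs) elements\n    examples=[\n        ["Hello", "You are helpful AI.", 50],\n        ["Tell me a story", "You are a pirate.", 100],\n    ],\n)\n```\n\n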
When additional inputs are provided, examples are rendered in a table underneath the chat interface.\n\nIf you need to create something even more custom, then it's best to construct the chatbot UI using the low-level `gr.Blocks()` API. We have [a dedicated guide for that here](/guides/creating-a-custom-chatbot-with-blocks).\n\n", "heading1": "Additional Inputs", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "In the same way that you can accept additional inputs into your chat function, you can also return additional outputs. Simply pass in a list of components to the `additional_outputs` parameter in `gr.ChatInterface` and return additional values for each component from your chat function. Here's an example that extracts code and outputs it into a separate `gr.Code` component:\n\n$code_chatinterface_artifacts\n\n**Note:** unlike the case of additional inputs, the components passed in `additional_outputs` must be already defined in your `gr.Blocks` context -- they are not rendered automatically. If you need to render them after your `gr.ChatInterface`, you can set `render=False` when they are first defined and then `.render()` them in the appropriate section of your `gr.Blocks()` as we do in the example above.\n\n", "heading1": "Additional Outputs", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "We mentioned earlier that in the simplest case, your chat function should return a `str` response, which will be rendered as Markdown in the chatbot. However, you can also return more complex responses as we discuss below:\n\n\n**Returning files or Gradio components**\n\nCurrently, the following Gradio components can be displayed inside the chat interface:\n* `gr.Image`\n* `gr.Plot`\n* `gr.Audio`\n* `gr.HTML`\n* `gr.Video`\n* `gr.Gallery`\n* `gr.File`\n\nSimply return one of these components from your function to use it with `gr.ChatInterface`. Here's an example that returns an audio file:\n\n```py\nimport gradio as gr\n\ndef music(message, history):\n if message.strip():\n return gr.Audio(\"https://github.com/gradio-app/gradio/raw/main/test/test_files/audio_sample.wav\")\n else:\n return \"Please provide the name of an artist\"\n\ngr.ChatInterface(\n music,\n type=\"messages\",\n textbox=gr.Textbox(placeholder=\"Which artist's music do you want to listen to?\", scale=7),\n).launch()\n```\n\nSimilarly, you could return image files with `gr.Image`, video files with `gr.Video`, or arbitrary files with the `gr.File` component.\n\n**Returning Multiple Messages**\n\nYou can return multiple assistant messages from your chat function simply by returning a `list` of messages, each of which is a valid chat type. This lets you, for example, send a message along with files, as in the following example:\n\n$code_chatinterface_echo_multimodal\n\n\n**Displaying intermediate thoughts or tool usage**\n\nThe `gr.ChatInterface` class supports displaying intermediate thoughts or tool usage directly in the chatbot.\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/nested-thought.png)\n\n To do this, you will need to return a `gr.ChatMessage` object from your chat function. 
Here is the schema of the `gr.ChatMessage` data class as well as two internal typed dictionaries:\n \n ```py\n@dataclass\nclass ChatMessage:\n content: str | Component\n metadata: MetadataDict = ", "heading1": "Returning Complex Responses", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "ion. Here is the schema of the `gr.ChatMessage` data class as well as two internal typed dictionaries:\n \n ```py\n@dataclass\nclass ChatMessage:\n content: str | Component\n metadata: MetadataDict = None\n options: list[OptionDict] = None\n\nclass MetadataDict(TypedDict):\n title: NotRequired[str]\n id: NotRequired[int | str]\n parent_id: NotRequired[int | str]\n log: NotRequired[str]\n duration: NotRequired[float]\n status: NotRequired[Literal[\"pending\", \"done\"]]\n\nclass OptionDict(TypedDict):\n label: NotRequired[str]\n value: str\n ```\n \nAs you can see, the `gr.ChatMessage` dataclass is similar to the openai-style message format, e.g. it has a \"content\" key that refers to the chat message content. But it also includes a \"metadata\" key whose value is a dictionary. If this dictionary includes a \"title\" key, the resulting message is displayed as an intermediate thought with the title being displayed on top of the thought. Here's an example showing the usage:\n\n$code_chatinterface_thoughts\n\nYou can even show nested thoughts, which is useful for agent demos in which one tool may call other tools. To display nested thoughts, include \"id\" and \"parent_id\" keys in the \"metadata\" dictionary. Read our [dedicated guide on displaying intermediate thoughts and tool usage](/guides/agents-and-tool-usage) for more realistic examples.\n\n**Providing preset responses**\n\nWhen returning an assistant message, you may want to provide preset options that a user can choose in response. To do this, you will again return a `gr.ChatMessage` instance from your chat function. This time, make sure to set the `options` key specifying the preset responses.\n\nAs shown in the schema for `gr.ChatMessage` above, the value corresponding to the `options` key should be a list of dictionaries, each with a `value` (a string that is the value that should be sent to the chat function when this response is clicked) and an optional `label` (if provided, is the text displayed as the preset r", "heading1": "Returning Complex Responses", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "ies, each with a `value` (a string that is the value that should be sent to the chat function when this response is clicked) and an optional `label` (if provided, is the text displayed as the preset response instead of the `value`). \n\nThis example illustrates how to use preset responses:\n\n$code_chatinterface_options\n\n", "heading1": "Returning Complex Responses", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "You may wish to modify the value of the chatbot with your own events, other than those prebuilt in the `gr.ChatInterface`. For example, you could create a dropdown that prefills the chat history with certain conversations or add a separate button to clear the conversation history. The `gr.ChatInterface` supports these events, but you need to use the `gr.ChatInterface.chatbot_value` as the input or output component in such events. 
In this example, we use a `gr.Radio` component to prefill the chatbot with certain conversations:\n\n$code_chatinterface_prefill\n\n", "heading1": "Modifying the Chatbot Value Directly", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "Once you've built your Gradio chat interface and are hosting it on [Hugging Face Spaces](https://hf.space) or somewhere else, you can query it with a simple API at the `/chat` endpoint. The endpoint just expects the user's message and will return the response, internally keeping track of the message history.\n\n![](https://github.com/gradio-app/gradio/assets/1778297/7b10d6db-6476-4e2e-bebd-ecda802c3b8f)\n\nTo use the endpoint, you should use either the [Gradio Python Client](/guides/getting-started-with-the-python-client) or the [Gradio JS client](/guides/getting-started-with-the-js-client). Or, you can deploy your Chat Interface to other platforms, such as a:\n\n* Discord bot [[tutorial]](../guides/creating-a-discord-bot-from-a-gradio-app)\n* Slack bot [[tutorial]](../guides/creating-a-slack-bot-from-a-gradio-app)\n* Website widget [[tutorial]](../guides/creating-a-website-widget-from-a-gradio-chatbot)\n\n", "heading1": "Using Your Chatbot via API", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "You can enable persistent chat history for your ChatInterface, allowing users to maintain multiple conversations and easily switch between them. When enabled, conversations are stored locally and privately in the user's browser using local storage. So if you deploy a ChatInterface e.g. on [Hugging Face Spaces](https://hf.space), each user will have their own separate chat history that won't interfere with other users' conversations. This means multiple users can interact with the same ChatInterface simultaneously while maintaining their own private conversation histories.\n\nTo enable this feature, simply set `gr.ChatInterface(save_history=True)` (as shown in the example in the next section). Users will then see their previous conversations in a side panel and can continue any previous chat or start a new one.\n\n", "heading1": "Chat History", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "To gather feedback on your chat model, set `gr.ChatInterface(flagging_mode=\"manual\")` and users will be able to thumbs-up or thumbs-down assistant responses. Each flagged response, along with the entire chat history, will get saved in a CSV file in the app working directory (this can be configured via the `flagging_dir` parameter). \n\nYou can also change the feedback options via the `flagging_options` parameter. The default options are \"Like\" and \"Dislike\", which appear as the thumbs-up and thumbs-down icons. Any other options appear under a dedicated flag icon. This example shows a ChatInterface that has both chat history (mentioned in the previous section) and user feedback enabled:\n\n$code_chatinterface_streaming_echo\n\nNote that in this example, we set several flagging options: \"Like\", \"Spam\", \"Inappropriate\", \"Other\". Because the case-sensitive string \"Like\" is one of the flagging options, the user will see a thumbs-up icon next to each assistant message. 
The three other flagging options will appear in a dropdown under the flag icon.\n\n", "heading1": "Collecting User Feedback", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "Now that you've learned about the `gr.ChatInterface` class and how it can be used to create chatbot UIs quickly, we recommend reading one of the following:\n\n* [Our next Guide](../guides/chatinterface-examples) shows examples of how to use `gr.ChatInterface` with popular LLM libraries.\n* If you'd like to build very custom chat applications from scratch, you can build them using the low-level Blocks API, as [discussed in this Guide](../guides/creating-a-custom-chatbot-with-blocks).\n* Once you've deployed your Gradio Chat Interface, it's easy to use in other applications because of the built-in API. Here's a tutorial on [how to deploy a Gradio chat interface as a Discord bot](../guides/creating-a-discord-bot-from-a-gradio-app).\n\n\n", "heading1": "What's Next?", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "Each message in Gradio's chatbot is a dataclass of type `ChatMessage` (this is assuming that the chatbot's `type=\"messages\"`, which is strongly recommended). The schema of `ChatMessage` is as follows:\n\n ```py\n@dataclass\nclass ChatMessage:\n content: str | Component\n role: Literal[\"user\", \"assistant\"]\n metadata: MetadataDict = None\n options: list[OptionDict] = None\n\nclass MetadataDict(TypedDict):\n title: NotRequired[str]\n id: NotRequired[int | str]\n parent_id: NotRequired[int | str]\n log: NotRequired[str]\n duration: NotRequired[float]\n status: NotRequired[Literal[\"pending\", \"done\"]]\n\nclass OptionDict(TypedDict):\n label: NotRequired[str]\n value: str\n ```\n\n\nFor our purposes, the most important key is the `metadata` key, which accepts a dictionary. If this dictionary includes a `title` for the message, it will be displayed in a collapsible accordion representing a thought. It's that simple! Take a look at this example:\n\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n chatbot = gr.Chatbot(\n type=\"messages\",\n value=[\n gr.ChatMessage(\n role=\"user\", \n content=\"What is the weather in San Francisco?\"\n ),\n gr.ChatMessage(\n role=\"assistant\", \n content=\"I need to use the weather API tool?\",\n metadata={\"title\": \"\ud83e\udde0 Thinking\"}\n )\n ]\n )\n\ndemo.launch()\n```\n\n\n\nIn addition to `title`, the dictionary provided to `metadata` can take several optional keys:\n\n* `log`: an optional string value to be displayed in a subdued font next to the thought title.\n* `duration`: an optional numeric value representing the duration of the thought/tool usage, in seconds. Displayed in a subdued font inside parentheses next to the thought title.\n* `status`: if set to `\"pending\"`, a spinner appears next to the thought title and the accordion is initialized open. If `status` is `\"done\"`, the thought accordion is initialized closed. I
If `status` is not provided, the thought accordion is initialized open and no spinner is displayed.\n* `id` and `parent_id`: if these are provided, they can be used to nest thoughts inside other thoughts.\n\nBelow, we show several complete examples of using `gr.Chatbot` and `gr.ChatInterface` to display tool use or thinking UIs.\n\n", "heading1": "The `ChatMessage` dataclass", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, {"text": "A real example using transformers.agents\n\nWe'll create a Gradio application for a simple agent that has access to a text-to-image tool.\n\nTip: Make sure you read the [smolagents documentation](https://huggingface.co/docs/smolagents/index) first\n\nWe'll start by importing the necessary classes from transformers and gradio. \n\n```python\nimport gradio as gr\nfrom gradio import ChatMessage\nfrom transformers import Tool, ReactCodeAgent  # type: ignore\nfrom transformers.agents import stream_to_gradio, HfApiEngine  # type: ignore\n\n# Import tool from Hub\nimage_generation_tool = Tool.from_space(\n space_id=\"black-forest-labs/FLUX.1-schnell\",\n name=\"image_generator\",\n description=\"Generates an image following your prompt. Returns a PIL Image.\",\n api_name=\"/infer\",\n)\n\nllm_engine = HfApiEngine(\"Qwen/Qwen2.5-Coder-32B-Instruct\")\n# Initialize the agent with both tools and engine\nagent = ReactCodeAgent(tools=[image_generation_tool], llm_engine=llm_engine)\n```\n\nThen we'll build the UI:\n\n```python\nfrom dataclasses import asdict\n\ndef interact_with_agent(prompt, history):\n messages = []\n yield messages\n for msg in stream_to_gradio(agent, prompt):\n messages.append(asdict(msg))\n yield messages\n yield messages\n\n\ndemo = gr.ChatInterface(\n interact_with_agent,\n chatbot=gr.Chatbot(\n label=\"Agent\",\n type=\"messages\",\n avatar_images=(\n None,\n \"https://em-content.zobj.net/source/twitter/53/robot-face_1f916.png\",\n ),\n ),\n examples=[\n [\"Generate an image of an astronaut riding an alligator\"],\n [\"I am writing a children's book for my daughter. Can you help me with some illustrations?\"],\n ],\n type=\"messages\",\n)\n```\n\nYou can see the full demo code [here](https://huggingface.co/spaces/gradio/agent_chatbot/blob/main/app.py).\n\n\n![transformers_agent_code](https://github.com/freddyaboulton/freddyboulton/assets/41651716/c8d21336-e0e6-4878-88ea-e6fcfef3552d)\n\n\nA real example using langchain agents\n\nWe'll create a UI for l", "heading1": "Building with Agents", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, {"text": "\n\n\n![transformers_agent_code](https://github.com/freddyaboulton/freddyboulton/assets/41651716/c8d21336-e0e6-4878-88ea-e6fcfef3552d)\n\n\nA real example using langchain agents\n\nWe'll create a UI for a langchain agent that has access to a search engine.\n\nWe'll begin with imports and setting up the langchain agent. 
Note that you'll need a .env file with the following environment variables set - \n\n```\nSERPAPI_API_KEY=\nHF_TOKEN=\nOPENAI_API_KEY=\n```\n\n```python\nfrom langchain import hub\nfrom langchain.agents import AgentExecutor, create_openai_tools_agent, load_tools\nfrom langchain_openai import ChatOpenAI\nfrom gradio import ChatMessage\nimport gradio as gr\n\nfrom dotenv import load_dotenv\n\nload_dotenv()\n\nmodel = ChatOpenAI(temperature=0, streaming=True)\n\ntools = load_tools([\"serpapi\"])\n\n# Get the prompt to use - you can modify this!\nprompt = hub.pull(\"hwchase17/openai-tools-agent\")\nagent = create_openai_tools_agent(\n model.with_config({\"tags\": [\"agent_llm\"]}), tools, prompt\n)\nagent_executor = AgentExecutor(agent=agent, tools=tools).with_config(\n {\"run_name\": \"Agent\"}\n)\n```\n\nThen we'll create the Gradio UI:\n\n```python\nasync def interact_with_langchain_agent(prompt, messages):\n messages.append(ChatMessage(role=\"user\", content=prompt))\n yield messages\n async for chunk in agent_executor.astream(\n {\"input\": prompt}\n ):\n if \"steps\" in chunk:\n for step in chunk[\"steps\"]:\n messages.append(ChatMessage(role=\"assistant\", content=step.action.log,\n metadata={\"title\": f\"\ud83d\udee0\ufe0f Used tool {step.action.tool}\"}))\n yield messages\n if \"output\" in chunk:\n messages.append(ChatMessage(role=\"assistant\", content=chunk[\"output\"]))\n yield messages\n\n\nwith gr.Blocks() as demo:\n gr.Markdown(\"Chat with a LangChain Agent \ud83e\udd9c\u26d3\ufe0f and see its thoughts \ud83d\udcad\")\n chatbot = gr.Chatbot(\n type=\"messages\",\n label=\"Agent\",\n avatar_images=(\n None,\n ", "heading1": "Building with Agents", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, {"text": " gr.Markdown(\"Chat with a LangChain Agent \ud83e\udd9c\u26d3\ufe0f and see its thoughts \ud83d\udcad\")\n chatbot = gr.Chatbot(\n type=\"messages\",\n label=\"Agent\",\n avatar_images=(\n None,\n \"https://em-content.zobj.net/source/twitter/141/parrot_1f99c.png\",\n ),\n )\n input = gr.Textbox(lines=1, label=\"Chat Message\")\n input.submit(interact_with_langchain_agent, [input, chatbot], [chatbot])\n\ndemo.launch()\n```\n\n![langchain_agent_code](https://github.com/freddyaboulton/freddyboulton/assets/41651716/762283e5-3937-47e5-89e0-79657279ea67)\n\nThat's it! See our finished langchain demo [here](https://huggingface.co/spaces/gradio/langchain-agent).\n\n\n", "heading1": "Building with Agents", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, {"text": "The Gradio Chatbot can natively display intermediate thoughts of a _thinking_ LLM. This makes it perfect for creating UIs that show how an AI model \"thinks\" while generating responses. The guide below will show you how to build a chatbot that displays Gemini AI's thought process in real-time.\n\n\nA real example using Gemini 2.0 Flash Thinking API\n\nLet's create a complete chatbot that shows its thoughts and responses in real-time. We'll use Google's Gemini API for accessing the Gemini 2.0 Flash Thinking LLM and Gradio for the UI.\n\nWe'll begin with imports and setting up the gemini client. 
Note that you'll need to [acquire a Google Gemini API key](https://aistudio.google.com/apikey) first -\n\n```python\nimport gradio as gr\nfrom gradio import ChatMessage\nfrom typing import Iterator\nimport google.generativeai as genai\n\ngenai.configure(api_key=\"your-gemini-api-key\")\nmodel = genai.GenerativeModel(\"gemini-2.0-flash-thinking-exp-1219\")\n```\n\nFirst, let's set up our streaming function that handles the model's output:\n\n```python\ndef stream_gemini_response(user_message: str, messages: list) -> Iterator[list]:\n \"\"\"\n Streams both thoughts and responses from the Gemini model.\n \"\"\"\n # Initialize response from Gemini\n response = model.generate_content(user_message, stream=True)\n \n # Initialize buffers\n thought_buffer = \"\"\n response_buffer = \"\"\n thinking_complete = False\n \n # Add initial thinking message\n messages.append(\n ChatMessage(\n role=\"assistant\",\n content=\"\",\n metadata={\"title\": \"\u23f3Thinking: *The thoughts produced by the Gemini2.0 Flash model are experimental\"}\n )\n )\n \n for chunk in response:\n parts = chunk.candidates[0].content.parts\n current_chunk = parts[0].text\n \n if len(parts) == 2 and not thinking_complete:\n # Complete thought and start response\n thought_buffer += current_chunk\n messages[-1] = ChatMessage(\n rol", "heading1": "Building with Visibly Thinking LLMs", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, {"text": " if len(parts) == 2 and not thinking_complete:\n # Complete thought and start response\n thought_buffer += current_chunk\n messages[-1] = ChatMessage(\n role=\"assistant\",\n content=thought_buffer,\n metadata={\"title\": \"\u23f3Thinking: *The thoughts produced by the Gemini2.0 Flash model are experimental\"}\n )\n \n # Add response message\n messages.append(\n ChatMessage(\n role=\"assistant\",\n content=parts[1].text\n )\n )\n thinking_complete = True\n \n elif thinking_complete:\n # Continue streaming response\n response_buffer += current_chunk\n messages[-1] = ChatMessage(\n role=\"assistant\",\n content=response_buffer\n )\n \n else:\n # Continue streaming thoughts\n thought_buffer += current_chunk\n messages[-1] = ChatMessage(\n role=\"assistant\",\n content=thought_buffer,\n metadata={\"title\": \"\u23f3Thinking: *The thoughts produced by the Gemini2.0 Flash model are experimental\"}\n )\n \n yield messages\n```\n\nThen, let's create the Gradio interface:\n\n```python\nwith gr.Blocks() as demo:\n gr.Markdown(\"Chat with Gemini 2.0 Flash and See its Thoughts \ud83d\udcad\")\n \n chatbot = gr.Chatbot(\n type=\"messages\",\n label=\"Gemini2.0 'Thinking' Chatbot\",\n render_markdown=True,\n )\n \n input_box = gr.Textbox(\n lines=1,\n label=\"Chat Message\",\n placeholder=\"Type your message here and press Enter...\"\n )\n \n # Set up event handlers\n msg_store = gr.State(\"\")  # Store for preserving user message\n \n input_box.submit(\n lambda msg: (msg, msg, \"\"),  # Store message and clear input\n inputs=[input_box],\n outputs=[msg_store, input_box, inp", "heading1": "Building with Visibly Thinking LLMs", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, {"text": "# Store for preserving user message\n \n input_box.submit(\n lambda msg: (msg, msg, \"\"),  # Store message and clear input\n inputs=[input_box],\n outputs=[msg_store, input_box, input_box],\n queue=False\n ).then(\n user_message,  # Add user message to chat\n inputs=[msg_store, chatbot],\n 
outputs=[input_box, chatbot],\n queue=False\n ).then(\n stream_gemini_response,  # Generate and stream response\n inputs=[msg_store, chatbot],\n outputs=chatbot\n )\n\ndemo.launch()\n```\n\nThis creates a chatbot that:\n\n- Displays the model's thoughts in a collapsible section\n- Streams the thoughts and final response in real-time\n- Maintains a clean chat history\n\n That's it! You now have a chatbot that not only responds to users but also shows its thinking process, creating a more transparent and engaging interaction. See our finished Gemini 2.0 Flash Thinking demo [here](https://huggingface.co/spaces/ysharma/Gemini2-Flash-Thinking).\n\n\n Building with Citations \n\nThe Gradio Chatbot can display citations from LLM responses, making it perfect for creating UIs that show source documentation and references. This guide will show you how to build a chatbot that displays Claude's citations in real-time.\n\nA real example using Anthropic's Citations API\nLet's create a complete chatbot that shows both responses and their supporting citations. We'll use Anthropic's Claude API with citations enabled and Gradio for the UI.\n\nWe'll begin with imports and setting up the Anthropic client. Note that you'll need an `ANTHROPIC_API_KEY` environment variable set:\n\n```python\nimport gradio as gr\nimport anthropic\nimport base64\nfrom typing import List, Dict, Any\n\nclient = anthropic.Anthropic()\n```\n\nFirst, let's set up our message formatting functions that handle document preparation:\n\n```python\ndef encode_pdf_to_base64(file_obj) -> str:\n \"\"\"Convert uploaded PDF file to base64 string.\"\"\"\n if file_obj is None:\n return None\n", "heading1": "Building with Visibly Thinking LLMs", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, {"text": "ng functions that handle document preparation:\n\n```python\ndef encode_pdf_to_base64(file_obj) -> str:\n \"\"\"Convert uploaded PDF file to base64 string.\"\"\"\n if file_obj is None:\n return None\n with open(file_obj.name, 'rb') as f:\n return base64.b64encode(f.read()).decode('utf-8')\n\ndef format_message_history(\n history: list, \n enable_citations: bool,\n doc_type: str,\n text_input: str,\n pdf_file: str\n) -> List[Dict]:\n \"\"\"Convert Gradio chat history to Anthropic message format.\"\"\"\n formatted_messages = []\n \n # Add previous messages\n for msg in history[:-1]:\n if msg[\"role\"] == \"user\":\n formatted_messages.append({\"role\": \"user\", \"content\": msg[\"content\"]})\n \n # Prepare the latest message with document\n latest_message = {\"role\": \"user\", \"content\": []}\n \n if enable_citations:\n if doc_type == \"plain_text\":\n latest_message[\"content\"].append({\n \"type\": \"document\",\n \"source\": {\n \"type\": \"text\",\n \"media_type\": \"text/plain\",\n \"data\": text_input.strip()\n },\n \"title\": \"Text Document\",\n \"citations\": {\"enabled\": True}\n })\n elif doc_type == \"pdf\" and pdf_file:\n pdf_data = encode_pdf_to_base64(pdf_file)\n if pdf_data:\n latest_message[\"content\"].append({\n \"type\": \"document\",\n \"source\": {\n \"type\": \"base64\",\n \"media_type\": \"application/pdf\",\n \"data\": pdf_data\n },\n \"title\": pdf_file.name,\n \"citations\": {\"enabled\": True}\n })\n \n # Add the user's question\n latest_message[\"content\"].append({\"type\": \"text\", \"text\": history[-1][\"content\"]})\n \n formatted_messages.append(latest_message)\n return formatted_messages\n```\n\nThen, ", "heading1": "Building with Visibly 
Thinking LLMs", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, {"text": " the user's question\n latest_message[\"content\"].append({\"type\": \"text\", \"text\": history[-1][\"content\"]})\n \n formatted_messages.append(latest_message)\n return formatted_messages\n```\n\nThen, let's create our bot response handler that processes citations:\n\n```python\ndef bot_response(\n history: list,\n enable_citations: bool,\n doc_type: str,\n text_input: str,\n pdf_file: str\n) -> List[Dict[str, Any]]:\n try:\n messages = format_message_history(history, enable_citations, doc_type, text_input, pdf_file)\n response = client.messages.create(model=\"claude-3-5-sonnet-20241022\", max_tokens=1024, messages=messages)\n \n Initialize main response and citations\n main_response = \"\"\n citations = []\n \n Process each content block\n for block in response.content:\n if block.type == \"text\":\n main_response += block.text\n if enable_citations and hasattr(block, 'citations') and block.citations:\n for citation in block.citations:\n if citation.cited_text not in citations:\n citations.append(citation.cited_text)\n \n Add main response\n history.append({\"role\": \"assistant\", \"content\": main_response})\n \n Add citations in a collapsible section\n if enable_citations and citations:\n history.append({\n \"role\": \"assistant\",\n \"content\": \"\\n\".join([f\"\u2022 {cite}\" for cite in citations]),\n \"metadata\": {\"title\": \"\ud83d\udcda Citations\"}\n })\n \n return history\n \n except Exception as e:\n history.append({\n \"role\": \"assistant\",\n \"content\": \"I apologize, but I encountered an error while processing your request.\"\n })\n return history\n```\n\nFinally, let's create the Gradio interface:\n\n```python\nwith gr.Blocks() as demo:\n gr.Markdown(\"Chat with Citations\"", "heading1": "Building with Visibly Thinking LLMs", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, {"text": "an error while processing your request.\"\n })\n return history\n```\n\nFinally, let's create the Gradio interface:\n\n```python\nwith gr.Blocks() as demo:\n gr.Markdown(\"Chat with Citations\")\n \n with gr.Row(scale=1):\n with gr.Column(scale=4):\n chatbot = gr.Chatbot(type=\"messages\", bubble_full_width=False, show_label=False, scale=1)\n msg = gr.Textbox(placeholder=\"Enter your message here...\", show_label=False, container=False)\n \n with gr.Column(scale=1):\n enable_citations = gr.Checkbox(label=\"Enable Citations\", value=True, info=\"Toggle citation functionality\" )\n doc_type_radio = gr.Radio( choices=[\"plain_text\", \"pdf\"], value=\"plain_text\", label=\"Document Type\", info=\"Choose the type of document to use\")\n text_input = gr.Textbox(label=\"Document Content\", lines=10, info=\"Enter the text you want to reference\")\n pdf_input = gr.File(label=\"Upload PDF\", file_types=[\".pdf\"], file_count=\"single\", visible=False)\n \n Handle message submission\n msg.submit(\n user_message,\n [msg, chatbot, enable_citations, doc_type_radio, text_input, pdf_input],\n [msg, chatbot]\n ).then(\n bot_response,\n [chatbot, enable_citations, doc_type_radio, text_input, pdf_input],\n chatbot\n )\n\ndemo.launch()\n```\n\nThis creates a chatbot that:\n- Supports both plain text and PDF documents for Claude to cite from \n- Displays Citations in collapsible sections using our `metadata` feature\n- Shows source quotes directly from the given 
documents\n\nThe citations feature works particularly well with the Gradio Chatbot's `metadata` support, allowing us to create collapsible sections that keep the chat interface clean while still providing easy access to source documentation.\n\nThat's it! You now have a chatbot that not only responds to users but also shows its sources, creating a more transparent and trustworthy interaction. See our finished Citations demo [her", "heading1": "Building with Visibly Thinking LLMs", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, {"text": "umentation.\n\nThat's it! You now have a chatbot that not only responds to users but also shows its sources, creating a more transparent and trustworthy interaction. See our finished Citations demo [here](https://huggingface.co/spaces/ysharma/anthropic-citations-with-gradio-metadata-key).\n\n", "heading1": "Building with Visibly Thinking LLMs", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, {"text": "Gradio-Lite\n\nGradio-Lite is the serverless version of Gradio, allowing you to build serverless web UI applications by embedding Python code within HTML. For a detailed introduction to Gradio-Lite itself, please read [this Guide](./gradio-lite).\n\nTransformers.js and Transformers.js.py\n\nTransformers.js is the JavaScript version of the Transformers library that allows you to run machine learning models entirely in the browser.\nSince Transformers.js is a JavaScript library, it cannot be directly used from the Python code of Gradio-Lite applications. To address this, we use a wrapper library called [Transformers.js.py](https://github.com/whitphx/transformers.js.py).\nThe name Transformers.js.py may sound unusual, but it represents the necessary technology stack for using Transformers.js from Python code within a browser environment. The regular Transformers library is not compatible with browser environments.\n\n", "heading1": "Libraries Used", "source_page_url": "https://gradio.app/guides/gradio-lite-and-transformers-js", "source_page_title": "Gradio Clients And Lite - Gradio Lite And Transformers Js Guide"}, {"text": "Here's an example of how to use Gradio-Lite and Transformers.js together.\nPlease create an HTML file and paste the following code:\n\n```html\n<html>\n\t<head>\n\t\t<script type=\"module\" crossorigin src=\"https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.js\"></script>\n\t\t<link rel=\"stylesheet\" href=\"https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.css\" />\n\t</head>\n\t<body>\n\t\t<gradio-lite>\nimport gradio as gr\nfrom transformers_js_py import pipeline\n\npipe = await pipeline('sentiment-analysis')\n\ndemo = gr.Interface.from_pipeline(pipe)\n\ndemo.launch()\n\n\t\t\t<gradio-requirements>\ntransformers-js-py\n\t\t\t</gradio-requirements>\n\t\t</gradio-lite>\n\t</body>\n</html>\n```\n\nHere is a running example of the code above (after the app has loaded, you could disconnect your Internet connection and the app will still work since it's running entirely in your browser):\n\n[Interactive Gradio-Lite demo embedded in the original docs page]\n\nAnd you can open your HTML file in a browser to see the Gradio app running!\n\nThe Python code inside the `<gradio-lite>` tag is the Gradio application code. For more details on this part, please refer to [this article](./gradio-lite).\nThe `<gradio-requirements>` tag is used to specify packages to be installed in addition to Gradio-Lite and its dependencies. 
In this case, we are using Transformers.js.py (`transformers-js-py`), so it is specified here.\n\nLet's break down the code:\n\n`pipe = await pipeline('sentiment-analysis')` creates a Transformers.js pipeline.\nIn this example, we create a sentiment analysis pipeline.\nFor more information on the available pipeline types and usage, please refer to the [Transformers.js documentation](https://huggingface.co/docs/transformers.js/index).\n\n`demo = gr.Interface.from_pipeline(pipe)` creates a Gradio a", "heading1": "Sample Code", "source_page_url": "https://gradio.app/guides/gradio-lite-and-transformers-js", "source_page_title": "Gradio Clients And Lite - Gradio Lite And Transformers Js Guide"}, {"text": "vailable pipeline types and usage, please refer to the [Transformers.js documentation](https://huggingface.co/docs/transformers.js/index).\n\n`demo = gr.Interface.from_pipeline(pipe)` creates a Gradio app instance. By passing the Transformers.js.py pipeline to `gr.Interface.from_pipeline()`, we can create an interface that utilizes that pipeline with predefined input and output components.\n\nFinally, `demo.launch()` launches the created app.\n\n", "heading1": "Sample Code", "source_page_url": "https://gradio.app/guides/gradio-lite-and-transformers-js", "source_page_title": "Gradio Clients And Lite - Gradio Lite And Transformers Js Guide"}, {"text": "You can modify the line `pipe = await pipeline('sentiment-analysis')` in the sample above to try different models or tasks.\n\nFor example, if you change it to `pipe = await pipeline('sentiment-analysis', 'Xenova/bert-base-multilingual-uncased-sentiment')`, you can test the same sentiment analysis task but with a different model. The second argument of the `pipeline` function specifies the model name.\nIf it's not specified like in the first example, the default model is used. For more details on these specs, refer to the [Transformers.js documentation](https://huggingface.co/docs/transformers.js/index).\n\n\nimport gradio as gr\nfrom transformers_js_py import pipeline\n\npipe = await pipeline('sentiment-analysis', 'Xenova/bert-base-multilingual-uncased-sentiment')\n\ndemo = gr.Interface.from_pipeline(pipe)\n\ndemo.launch()\n\ntransformers-js-py\n\n\n\nAs another example, changing it to `pipe = await pipeline('image-classification')` creates a pipeline for image classification instead of sentiment analysis.\nIn this case, the interface created with `demo = gr.Interface.from_pipeline(pipe)` will have a UI for uploading an image and displaying the classification result. The `gr.Interface.from_pipeline` function automatically creates an appropriate UI based on the type of pipeline.\n\n\nimport gradio as gr\nfrom transformers_js_py import pipeline\n\npipe = await pipeline('image-classification')\n\ndemo = gr.Interface.from_pipeline(pipe)\n\ndemo.launch()\n\ntransformers-js-py\n\n\n\n
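\n\nThe same pattern extends to audio tasks. Here is a hedged sketch (untested; `automatic-speech-recognition` is a documented Transformers.js pipeline type, and `gr.Interface.from_pipeline` generates the matching UI automatically):\n\n```python\nimport gradio as gr\nfrom transformers_js_py import pipeline\n\n# An automatic-speech-recognition pipeline yields an audio input\n# and a text output in the generated interface.\npipe = await pipeline('automatic-speech-recognition')\n\ndemo = gr.Interface.from_pipeline(pipe)\n\ndemo.launch()\n```\n\nNote the extra requirement this task needs, described in the note below.\n\n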
\n\n**Note**: If you use an audio pipeline, such as `automatic-speech-recognition`, you will need to put `transformers-js-py[audio]` in your `<gradio-requirements>` as there are additional requirements needed to process audio files.\n\n", "heading1": "Customizing the Model or Pipeline", "source_page_url": "https://gradio.app/guides/gradio-lite-and-transformers-js", "source_page_title": "Gradio Clients And Lite - Gradio Lite And Transformers Js Guide"}, {"text": "Instead of using `gr.Interface.from_pipeline()`, you can define the user interface using Gradio's regular API.\nHere's an example where the Python code inside the `<gradio-lite>` tag has been modified from the previous sample:\n\n```html\n<html>\n\t<head>\n\t\t<script type=\"module\" crossorigin src=\"https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.js\"></script>\n\t\t<link rel=\"stylesheet\" href=\"https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.css\" />\n\t</head>\n\t<body>\n\t\t<gradio-lite>\nimport gradio as gr\nfrom transformers_js_py import pipeline\n\npipe = await pipeline('sentiment-analysis')\n\nasync def fn(text):\n\tresult = await pipe(text)\n\treturn result\n\ndemo = gr.Interface(\n\tfn=fn,\n\tinputs=gr.Textbox(),\n\toutputs=gr.JSON(),\n)\n\ndemo.launch()\n\n\t\t\t<gradio-requirements>\ntransformers-js-py\n\t\t\t</gradio-requirements>\n\t\t</gradio-lite>\n\t</body>\n</html>\n```\n\nIn this example, we modified the code to construct the Gradio user interface manually so that we could output the result as JSON.\n\n[Interactive Gradio-Lite demo embedded in the original docs page]\n\n", "heading1": "Customizing the UI", "source_page_url": "https://gradio.app/guides/gradio-lite-and-transformers-js", "source_page_title": "Gradio Clients And Lite - Gradio Lite And Transformers Js Guide"}, {"text": "By combining Gradio-Lite and Transformers.js (and Transformers.js.py), you can create serverless machine learning applications that run entirely in the browser.\n\nGradio-Lite provides a convenient method to create an interface for a given Transformers.js pipeline, `gr.Interface.from_pipeline()`.\nThis method automatically constructs the interface based on the pipeline's task type.\n\nAlternatively, you can define the interface manually using Gradio's regular API, as shown in the second example.\n\nBy using these libraries, you can build and deploy machine learning applications without the need for server-side Python setup or external dependencies.\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/gradio-lite-and-transformers-js", "source_page_title": "Gradio Clients And Lite - Gradio Lite And Transformers Js Guide"}, {"text": "What are agents?\n\nA [LangChain agent](https://docs.langchain.com/docs/components/agents/agent) is a Large Language Model (LLM) that takes user input and reports an output based on using one of many tools at its disposal.\n\nWhat is Gradio?\n\n[Gradio](https://github.com/gradio-app/gradio) is the de facto standard framework for building Machine Learning Web Applications and sharing them with the world - all with just python! 
\ud83d\udc0d\n\n", "heading1": "Some background", "source_page_url": "https://gradio.app/guides/gradio-and-llm-agents", "source_page_title": "Gradio Clients And Lite - Gradio And Llm Agents Guide"}, {"text": "To get started with `gradio_tools`, all you need to do is import and initialize your tools and pass them to the langchain agent!\n\nIn the following example, we import the `StableDiffusionPromptGeneratorTool` to create a good prompt for stable diffusion, the\n`StableDiffusionTool` to create an image with our improved prompt, the `ImageCaptioningTool` to caption the generated image, and\nthe `TextToVideoTool` to create a video from a prompt.\n\nWe then tell our agent to create an image of a dog riding a skateboard, but to please improve our prompt ahead of time. We also ask\nit to caption the generated image and create a video for it. The agent can decide which tool to use without us explicitly telling it.\n\n```python\nimport os\n\nif not os.getenv(\"OPENAI_API_KEY\"):\n raise ValueError(\"OPENAI_API_KEY must be set\")\n\nfrom langchain.agents import initialize_agent\nfrom langchain.llms import OpenAI\nfrom gradio_tools import (StableDiffusionTool, ImageCaptioningTool, StableDiffusionPromptGeneratorTool,\n TextToVideoTool)\n\nfrom langchain.memory import ConversationBufferMemory\n\nllm = OpenAI(temperature=0)\nmemory = ConversationBufferMemory(memory_key=\"chat_history\")\ntools = [StableDiffusionTool().langchain, ImageCaptioningTool().langchain,\n StableDiffusionPromptGeneratorTool().langchain, TextToVideoTool().langchain]\n\n\nagent = initialize_agent(tools, llm, memory=memory, agent=\"conversational-react-description\", verbose=True)\noutput = agent.run(input=(\"Please create a photo of a dog riding a skateboard \"\n \"but improve my prompt prior to using an image generator.\"\n \"Please caption the generated image and create a video for it using the improved prompt.\"))\n```\n\nYou'll note that we are using some pre-built tools that come with `gradio_tools`. Please see this [doc](https://github.com/freddyaboulton/gradio-toolsgradio-tools-gradio--llm-agents) for a complete list of the tools that come with `gradio_tools`.\nIf ", "heading1": "gradio_tools - An end-to-end example", "source_page_url": "https://gradio.app/guides/gradio-and-llm-agents", "source_page_title": "Gradio Clients And Lite - Gradio And Llm Agents Guide"}, {"text": "that come with `gradio_tools`. Please see this [doc](https://github.com/freddyaboulton/gradio-toolsgradio-tools-gradio--llm-agents) for a complete list of the tools that come with `gradio_tools`.\nIf you would like to use a tool that's not currently in `gradio_tools`, it is very easy to add your own. That's what the next section will cover.\n\n", "heading1": "gradio_tools - An end-to-end example", "source_page_url": "https://gradio.app/guides/gradio-and-llm-agents", "source_page_title": "Gradio Clients And Lite - Gradio And Llm Agents Guide"}, {"text": "The core abstraction is the `GradioTool`, which lets you define a new tool for your LLM as long as you implement a standard interface:\n\n```python\nclass GradioTool(BaseTool):\n\n def __init__(self, name: str, description: str, src: str) -> None:\n\n @abstractmethod\n def create_job(self, query: str) -> Job:\n pass\n\n @abstractmethod\n def postprocess(self, output: Tuple[Any] | Any) -> str:\n pass\n```\n\nThe requirements are:\n\n1. The name for your tool\n2. The description for your tool. This is crucial! Agents decide which tool to use based on their description. 
Be precise and be sure to include examples of what the input and the output of the tool should look like.\n3. The URL or Space id, e.g. `freddyaboulton/calculator`, of the Gradio application. Based on this value, `gradio_tools` will create a [gradio client](https://github.com/gradio-app/gradio/blob/main/client/python/README.md) instance to query the upstream application via API. Be sure to click the link and learn more about the gradio client library if you are not familiar with it.\n4. create_job - Given a string, this method should parse that string and return a job from the client. Most times, this is as simple as passing the string to the `submit` function of the client. More info on creating jobs [here](https://github.com/gradio-app/gradio/blob/main/client/python/README.md#making-a-prediction)\n5. postprocess - Given the result of the job, convert it to a string the LLM can display to the user.\n6. _Optional_ - Some libraries, e.g. [MiniChain](https://github.com/srush/MiniChain/tree/main), may need some info about the underlying gradio input and output types used by the tool. By default, this will return gr.Textbox() but\n if you'd like to provide more accurate info, implement the `_block_input(self, gr)` and `_block_output(self, gr)` methods of the tool. The `gr` variable is the gradio module (the result of `import gradio as gr`). It will be\n automatically imported by the `GradioTool` parent", "heading1": "gradio_tools - creating your own tool", "source_page_url": "https://gradio.app/guides/gradio-and-llm-agents", "source_page_title": "Gradio Clients And Lite - Gradio And Llm Agents Guide"}, {"text": "lf, gr)` and `_block_output(self, gr)` methods of the tool. The `gr` variable is the gradio module (the result of `import gradio as gr`). It will be\n automatically imported by the `GradioTool` parent class and passed to the `_block_input` and `_block_output` methods.\n\nAnd that's it!\n\nOnce you have created your tool, open a pull request to the `gradio_tools` repo! We welcome all contributions.\n\n", "heading1": "gradio_tools - creating your own tool", "source_page_url": "https://gradio.app/guides/gradio-and-llm-agents", "source_page_title": "Gradio Clients And Lite - Gradio And Llm Agents Guide"}, {"text": "Here is the code for the StableDiffusion tool as an example:\n\n```python\nfrom gradio_tools import GradioTool\nfrom gradio_client.client import Job  # the job handle returned by client.submit()\nimport os\n\nclass StableDiffusionTool(GradioTool):\n    \"\"\"Tool for calling stable diffusion from llm\"\"\"\n\n    def __init__(\n        self,\n        name=\"StableDiffusion\",\n        description=(\n            \"An image generator. Use this to generate images based on \"\n            \"text input. Input should be a description of what the image should \"\n            \"look like. The output will be a path to an image file.\"\n        ),\n        src=\"gradio-client-demos/stable-diffusion\",\n        hf_token=None,\n    ) -> None:\n        super().__init__(name, description, src, hf_token)\n\n    def create_job(self, query: str) -> Job:\n        return self.client.submit(query, \"\", 9, fn_index=1)\n\n    def postprocess(self, output: str) -> str:\n        return [os.path.join(output, i) for i in os.listdir(output) if not i.endswith(\"json\")][0]\n\n    def _block_input(self, gr) -> \"gr.components.Component\":\n        return gr.Textbox()\n\n    def _block_output(self, gr) -> \"gr.components.Component\":\n        return gr.Image()\n```\n\nSome notes on this implementation:\n\n1. 
All instances of `GradioTool` have an attribute called `client` that is a pointer to the underlying [gradio client](https://github.com/gradio-app/gradio/tree/main/client/python#gradio_client-use-a-gradio-app-as-an-api----in-3-lines-of-python). That is what you should use\n in the `create_job` method.\n2. `create_job` just passes the query string to the `submit` function of the client with some other parameters hardcoded, i.e. the negative prompt string and the guidance scale. We could modify our tool to also accept these values from the input string in a subsequent version.\n3. The `postprocess` method simply returns the first image from the gallery of images created by the stable diffusion space. We use the `os` module to get the full path of the image.\n\n", "heading1": "Example tool - Stable Diffusion", "source_page_url": "https://gradio.app/guides/gradio-and-llm-agents", "source_page_title": "Gradio Clients And Lite - Gradio And Llm Agents Guide"}, {"text": "You now know how to extend the abilities of your LLM with the 1000s of gradio spaces running in the wild!\nAgain, we welcome any contributions to the [gradio_tools](https://github.com/freddyaboulton/gradio-tools) library.\nWe're excited to see the tools you all build!\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/gradio-and-llm-agents", "source_page_title": "Gradio Clients And Lite - Gradio And Llm Agents Guide"}, {"text": "Let's start with what seems like the most complex bit -- using machine learning to remove the music from a video.\n\nLuckily for us, there's an existing Space we can use to make this process easier: [https://huggingface.co/spaces/abidlabs/music-separation](https://huggingface.co/spaces/abidlabs/music-separation). This Space takes an audio file and produces two separate audio files: one with the instrumental music and one with all other sounds in the original clip. Perfect to use with our client!\n\nOpen a new Python file, say `main.py`, and start by importing the `Client` class from `gradio_client` and connecting it to this Space:\n\n```py\nfrom gradio_client import Client, handle_file\n\nclient = Client(\"abidlabs/music-separation\")\n\ndef acapellify(audio_path):\n    result = client.predict(handle_file(audio_path), api_name=\"/predict\")\n    return result[0]\n```\n\nThat's all the code that's needed -- notice that the API endpoint returns two audio files (one without the music, and one with just the music) in a list, and so we just return the first element of the list.\n\n---\n\n**Note**: since this is a public Space, there might be other users using this Space as well, which might result in a slow experience. You can duplicate this Space with your own [Hugging Face token](https://huggingface.co/settings/tokens) and create a private Space that only you will have access to, bypassing the queue. To do that, simply replace the first two lines above with:\n\n```py\nfrom gradio_client import Client\n\nclient = Client.duplicate(\"abidlabs/music-separation\", hf_token=YOUR_HF_TOKEN)\n```\n\nEverything else remains the same!\n\n---\n\nNow, of course, we are working with video files, so we first need to extract the audio from the video files. For this, we will be using the `ffmpeg` library, which does a lot of heavy lifting when it comes to working with audio and video files. 
The most common way to use `ffmpeg` is through the command line, which we'll call via Python's `subprocess` module:\n\nOur video p", "heading1": "Step 1: Write the Video Processing Function", "source_page_url": "https://gradio.app/guides/fastapi-app-with-the-gradio-client", "source_page_title": "Gradio Clients And Lite - Fastapi App With The Gradio Client Guide"}, {"text": "t of heavy lifting when it comes to working with audio and video files. The most common way to use `ffmpeg` is through the command line, which we'll call via Python's `subprocess` module:\n\nOur video processing workflow will consist of three steps:\n\n1. First, we start by taking in a video filepath and extracting the audio using `ffmpeg`.\n2. Then, we pass in the audio file through the `acapellify()` function above.\n3. Finally, we combine the new audio with the original video to produce a final acapellified video.\n\nHere's the complete code in Python, which you can add to your `main.py` file:\n\n```python\nimport os\nimport subprocess\n\ndef process_video(video_path):\n    old_audio = os.path.basename(video_path).split(\".\")[0] + \".m4a\"\n    subprocess.run(['ffmpeg', '-y', '-i', video_path, '-vn', '-acodec', 'copy', old_audio])\n\n    new_audio = acapellify(old_audio)\n\n    new_video = f\"acap_{video_path}\"\n    subprocess.call(['ffmpeg', '-y', '-i', video_path, '-i', new_audio, '-map', '0:v', '-map', '1:a', '-c:v', 'copy', '-c:a', 'aac', '-strict', 'experimental', f\"static/{new_video}\"])\n    return new_video\n```\n\nYou can read up on [ffmpeg documentation](https://ffmpeg.org/ffmpeg.html) if you'd like to understand all of the command line parameters, as they are beyond the scope of this tutorial.\n\n", "heading1": "Step 1: Write the Video Processing Function", "source_page_url": "https://gradio.app/guides/fastapi-app-with-the-gradio-client", "source_page_title": "Gradio Clients And Lite - Fastapi App With The Gradio Client Guide"}, {"text": "Next up, we'll create a simple FastAPI app. If you haven't used FastAPI before, check out [the great FastAPI docs](https://fastapi.tiangolo.com/). Otherwise, this basic template, which we add to `main.py`, will look pretty familiar:\n\n```python\nimport os\nfrom fastapi import FastAPI, File, UploadFile, Request\nfrom fastapi.responses import HTMLResponse, RedirectResponse\nfrom fastapi.staticfiles import StaticFiles\nfrom fastapi.templating import Jinja2Templates\n\napp = FastAPI()\nos.makedirs(\"static\", exist_ok=True)\napp.mount(\"/static\", StaticFiles(directory=\"static\"), name=\"static\")\ntemplates = Jinja2Templates(directory=\"templates\")\n\nvideos = []\n\n@app.get(\"/\", response_class=HTMLResponse)\nasync def home(request: Request):\n    return templates.TemplateResponse(\n        \"home.html\", {\"request\": request, \"videos\": videos})\n\n@app.post(\"/uploadvideo/\")\nasync def upload_video(video: UploadFile = File(...)):\n    video_path = video.filename\n    with open(video_path, \"wb+\") as fp:\n        fp.write(video.file.read())\n\n    new_video = process_video(video.filename)\n    videos.append(new_video)\n    return RedirectResponse(url='/', status_code=303)\n```\n\nIn this example, the FastAPI app has two routes: `/` and `/uploadvideo/`.\n\nThe `/` route returns an HTML template that displays a gallery of all uploaded videos.\n\nThe `/uploadvideo/` route accepts a `POST` request with an `UploadFile` object, which represents the uploaded video file. 
The video file is \"acapellified\" via the `process_video()` method, and the output video is appended to a list that holds all of the uploaded videos in memory.\n\nNote that this is a very basic example and if this were a production app, you would need to add more logic to handle file storage, user authentication, and security considerations.\n\n", "heading1": "Step 2: Create a FastAPI app (Backend Routes)", "source_page_url": "https://gradio.app/guides/fastapi-app-with-the-gradio-client", "source_page_title": "Gradio Clients And Lite - Fastapi App With The Gradio Client Guide"}, {"text": "Finally, we create the frontend of our web application. First, we create a folder called `templates` in the same directory as `main.py`. We then create a template, `home.html` inside the `templates` folder. Here is the resulting file structure:\n\n```csv\n├── main.py\n├── templates\n│ └── home.html\n```\n\nWrite the following as the contents of `home.html`:\n\n```html\n<!DOCTYPE html> <html> <head> <title>Video Gallery</title>\n<style> body { font-family: sans-serif; margin: 0; padding: 0;\nbackground-color: #f5f5f5; } h1 { text-align: center; margin-top: 30px;\nmargin-bottom: 20px; } .gallery { display: flex; flex-wrap: wrap;\njustify-content: center; gap: 20px; padding: 20px; } .video { border: 2px solid\n#ccc; box-shadow: 0px 0px 10px rgba(0, 0, 0, 0.2); border-radius: 5px; overflow:\nhidden; width: 300px; margin-bottom: 20px; } .video video { width: 100%; height:\n200px; } .video p { text-align: center; margin: 10px 0; } form { margin-top:\n20px; text-align: center; } input[type=\"file\"] { display: none; } .upload-btn {\ndisplay: inline-block; background-color: #3498db; color: #fff; padding: 10px\n20px; font-size: 16px; border: none; border-radius: 5px; cursor: pointer; }\n.upload-btn:hover { background-color: #2980b9; } .file-name { margin-left: 10px;\n} </style> </head> <body> <h1>Video Gallery</h1> {% if videos %}\n<div class=\"gallery\"> {% for video in videos %} <div class=\"video\">\n<video controls> <source src=\"{{ url_for('static', path=video) }}\"\ntype=\"video/mp4\"> Your browser does not support the video tag. 
</video>\n<p>{{ video }}</p> </div> {% endfor %} </div> {% else %} <p>No\nvideos uploaded yet.</p> {% endif %} <form action=\"/uploadvideo/\"\nmethod=\"post\" enctype=\"multipart/form-data\"> <label for=\"video-upload\"\nclass=\"upload-btn\">Choose video file</label> <input type=\"file\"\nname=\"video\" id=\"video-upload\"> <span class=\"file-name\"></span> <button\ntype=\"submit\" class=\"upload-btn\">Upload</butto", "heading1": "Step 3: Create a FastAPI app (Frontend Template)", "source_page_url": "https://gradio.app/guides/fastapi-app-with-the-gradio-client", "source_page_title": "Gradio Clients And Lite - Fastapi App With The Gradio Client Guide"}, {"text": "class=\"upload-btn\">Choose video file</label> <input type=\"file\"\nname=\"video\" id=\"video-upload\"> <span class=\"file-name\"></span> <button\ntype=\"submit\" class=\"upload-btn\">Upload</button> </form> <script> //\nDisplay selected file name in the form const fileUpload =\ndocument.getElementById(\"video-upload\"); const fileName =\ndocument.querySelector(\".file-name\"); fileUpload.addEventListener(\"change\", (e)\n=> { fileName.textContent = e.target.files[0].name; }); </script> </body>\n</html>\n```\n\n", "heading1": "Step 3: Create a FastAPI app (Frontend Template)", "source_page_url": "https://gradio.app/guides/fastapi-app-with-the-gradio-client", "source_page_title": "Gradio Clients And Lite - Fastapi App With The Gradio Client Guide"}, {"text": "Finally, we are ready to run our FastAPI app, powered by the Gradio Python Client!\n\nOpen up a terminal and navigate to the directory containing `main.py`. Then run the following command in the terminal:\n\n```bash\n$ uvicorn main:app\n```\n\nYou should see an output that looks like this:\n\n```csv\nLoaded as API: https://abidlabs-music-separation.hf.space \u2714\nINFO: Started server process [1360]\nINFO: Waiting for application startup.\nINFO: Application startup complete.\nINFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)\n```\n\nAnd that's it! Start uploading videos and you'll get some \"acapellified\" videos in response (might take seconds to minutes to process depending on the length of your videos). Here's how the UI looks after uploading two videos:\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/acapellify.png)\n\nIf you'd like to learn more about how to use the Gradio Python Client in your projects, [read the dedicated Guide](/guides/getting-started-with-the-python-client/).\n", "heading1": "Step 4: Run your FastAPI app", "source_page_url": "https://gradio.app/guides/fastapi-app-with-the-gradio-client", "source_page_title": "Gradio Clients And Lite - Fastapi App With The Gradio Client Guide"}, {"text": "If you already have a recent version of `gradio`, then the `gradio_client` is included as a dependency. 
But note that this documentation reflects the latest version of the `gradio_client`, so upgrade if you're not sure!\n\nThe lightweight `gradio_client` package can be installed from pip (or pip3) and is tested to work with **Python versions 3.10 or higher**:\n\n```bash\n$ pip install --upgrade gradio_client\n```\n\n", "heading1": "Installation", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "Start by instantiating a `Client` object and connecting it to a Gradio app that is running on Hugging Face Spaces.\n\n```python\nfrom gradio_client import Client\n\nclient = Client(\"abidlabs/en2fr\")  # a Space that translates from English to French\n```\n\nYou can also connect to private Spaces by passing in your HF token with the `hf_token` parameter. You can get your HF token here: https://huggingface.co/settings/tokens\n\n```python\nfrom gradio_client import Client\n\nclient = Client(\"abidlabs/my-private-space\", hf_token=\"...\")\n```\n\n\n", "heading1": "Connecting to a Gradio App on Hugging Face Spaces", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "While you can use any public Space as an API, you may get rate limited by Hugging Face if you make too many requests. For unlimited usage of a Space, simply duplicate the Space to create a private Space,\nand then use it to make as many requests as you'd like!\n\nThe `gradio_client` includes a class method: `Client.duplicate()` to make this process simple (you'll need to pass in your [Hugging Face token](https://huggingface.co/settings/tokens) or be logged in using the Hugging Face CLI):\n\n```python\nimport os\nfrom gradio_client import Client, handle_file\n\nHF_TOKEN = os.environ.get(\"HF_TOKEN\")\n\nclient = Client.duplicate(\"abidlabs/whisper\", hf_token=HF_TOKEN)\nclient.predict(handle_file(\"audio_sample.wav\"))\n\n>> \"This is a test of the whisper speech recognition model.\"\n```\n\nIf you have previously duplicated a Space, re-running `duplicate()` will _not_ create a new Space. Instead, the Client will attach to the previously-created Space. So it is safe to re-run the `Client.duplicate()` method multiple times.\n\n**Note:** if the original Space uses GPUs, your private Space will as well, and your Hugging Face account will get billed based on the price of the GPU. To minimize charges, your Space will automatically go to sleep after 1 hour of inactivity. You can also set the hardware using the `hardware` parameter of `duplicate()`.\n\n", "heading1": "Duplicating a Space for private use", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "If your app is running somewhere else, just provide the full URL instead, including the \"http://\" or \"https://\". 
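For instance, a minimal sketch of connecting to a locally running app (the URL here is just Gradio's usual default; use whatever `demo.launch()` prints):\n\n```python\nfrom gradio_client import Client\n\n# connect to a Gradio app served locally;\n# replace the URL with the one printed by demo.launch()\nclient = Client(\"http://127.0.0.1:7860/\")\n```\n\n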
Here's an example of making predictions to a Gradio app that is running on a share URL:\n\n```python\nfrom gradio_client import Client\n\nclient = Client(\"https://bec81a83-5b5c-471e.gradio.live\")\n```\n\n", "heading1": "Connecting a general Gradio app", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "If the Gradio application you are connecting to [requires a username and password](/guides/sharing-your-app#authentication), then provide them as a tuple to the `auth` argument of the `Client` class:\n\n```python\nfrom gradio_client import Client\n\nClient(\n    space_name,\n    auth=[username, password]\n)\n```\n\n\n", "heading1": "Connecting to a Gradio app with auth", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "Once you have connected to a Gradio app, you can view the APIs that are available to you by calling the `Client.view_api()` method. For the Whisper Space, we see the following:\n\n```bash\nClient.predict() Usage Info\n---------------------------\nNamed API endpoints: 1\n\n - predict(audio, api_name=\"/predict\") -> output\n    Parameters:\n     - [Audio] audio: filepath (required) \n    Returns:\n     - [Textbox] output: str \n```\n\nWe see that we have 1 API endpoint in this space, and this shows us how to use the API endpoint to make a prediction: we should call the `.predict()` method (which we will explore below), providing a parameter `audio` of type `str`, which is a `filepath or URL`.\n\nWe should also provide the `api_name='/predict'` argument to the `predict()` method. Although this isn't necessary if a Gradio app has only 1 named endpoint, it does allow us to call different endpoints in a single app if they are available.\n\n", "heading1": "Inspecting the API endpoints", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "As an alternative to running the `.view_api()` method, you can click on the \"Use via API\" link in the footer of the Gradio app, which shows us the same information, along with example usage. 
\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api.png)\n\nThe View API page also includes an \"API Recorder\" that lets you interact with the Gradio UI normally and converts your interactions into the corresponding code to run with the Python Client.\n\n", "heading1": "The \"View API\" Page", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "The simplest way to make a prediction is to call the `.predict()` function with the appropriate arguments:\n\n```python\nfrom gradio_client import Client\n\nclient = Client(\"abidlabs/en2fr\")\nclient.predict(\"Hello\", api_name=\"/predict\")\n\n>> Bonjour\n```\n\nIf there are multiple parameters, then you should pass them as separate arguments to `.predict()`, like this:\n\n```python\nfrom gradio_client import Client\n\nclient = Client(\"gradio/calculator\")\nclient.predict(4, \"add\", 5)\n\n>> 9.0\n```\n\nIt is recommended to provide keyword arguments instead of positional arguments:\n\n\n```python\nfrom gradio_client import Client\n\nclient = Client(\"gradio/calculator\")\nclient.predict(num1=4, operation=\"add\", num2=5)\n\n>> 9.0\n```\n\nThis allows you to take advantage of default arguments. For example, this Space includes the default value for the Slider component so you do not need to provide it when accessing it with the client.\n\n```python\nfrom gradio_client import Client\n\nclient = Client(\"abidlabs/image_generator\")\nclient.predict(text=\"an astronaut riding a camel\")\n```\n\nThe default value is the initial value of the corresponding Gradio component. If the component does not have an initial value, but if the corresponding argument in the predict function has a default value of `None`, then that parameter is also optional in the client. Of course, if you'd like to override it, you can include it as well:\n\n```python\nfrom gradio_client import Client\n\nclient = Client(\"abidlabs/image_generator\")\nclient.predict(text=\"an astronaut riding a camel\", steps=25)\n```\n\nFor providing files or URLs as inputs, you should pass in the filepath or URL to the file enclosed within `gradio_client.handle_file()`. This takes care of uploading the file to the Gradio server and ensures that the file is preprocessed correctly:\n\n```python\nfrom gradio_client import Client, handle_file\n\nclient = Client(\"abidlabs/whisper\")\nclient.predict(\n    audio=handle_file(\"https://audio-samples.github.io/samples/mp3/blizzard_unconditional/s", "heading1": "Making a prediction", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "```python\nfrom gradio_client import Client, handle_file\n\nclient = Client(\"abidlabs/whisper\")\nclient.predict(\n    audio=handle_file(\"https://audio-samples.github.io/samples/mp3/blizzard_unconditional/sample-0.mp3\")\n)\n\n>> \"My thought I have nobody by a beauty and will as you poured. Mr. 
Rochester is serve in that so don't find simpus, and devoted abode, to at might in a r\u2014\"\n```\n\n", "heading1": "Making a prediction", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "One should note that `.predict()` is a _blocking_ operation as it waits for the operation to complete before returning the prediction.\n\nIn many cases, you may be better off letting the job run in the background until you need the results of the prediction. You can do this by creating a `Job` instance using the `.submit()` method, and then later calling `.result()` on the job to get the result. For example:\n\n```python\nfrom gradio_client import Client\n\nclient = Client(space=\"abidlabs/en2fr\")\njob = client.submit(\"Hello\", api_name=\"/predict\")  # This is not blocking\n\n# Do something else\n\njob.result()  # This is blocking\n\n>> Bonjour\n```\n\n", "heading1": "Running jobs asynchronously", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "Alternatively, one can add one or more callbacks to perform actions after the job has completed running, like this:\n\n```python\nfrom gradio_client import Client\n\ndef print_result(x):\n    print(f\"The translated result is: {x}\")\n\nclient = Client(space=\"abidlabs/en2fr\")\n\njob = client.submit(\"Hello\", api_name=\"/predict\", result_callbacks=[print_result])\n\n# Do something else\n\n>> The translated result is: Bonjour\n\n```\n\n", "heading1": "Adding callbacks", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "The `Job` object also allows you to get the status of the running job by calling the `.status()` method. This returns a `StatusUpdate` object with the following attributes: `code` (the status code, one of a set of defined strings representing the status. See the `utils.Status` class), `rank` (the current position of this job in the queue), `queue_size` (the total queue size), `eta` (estimated time this job will complete), `success` (a boolean representing whether the job completed successfully), and `time` (the time that the status was generated).\n\n```py\nfrom gradio_client import Client\n\nclient = Client(src=\"gradio/calculator\")\njob = client.submit(5, \"add\", 4, api_name=\"/predict\")\njob.status()\n\n>> \n```\n\n_Note_: The `Job` class also has a `.done()` instance method which returns a boolean indicating whether the job has completed.\n\n", "heading1": "Status", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "The `Job` class also has a `.cancel()` instance method that cancels jobs that have been queued but not started. For example, if you run:\n\n```py\nfrom gradio_client import Client, handle_file\n\nclient = Client(\"abidlabs/whisper\")\njob1 = client.submit(handle_file(\"audio_sample1.wav\"))\njob2 = client.submit(handle_file(\"audio_sample2.wav\"))\njob1.cancel()  # will return False, assuming the job has started\njob2.cancel()  # will return True, indicating that the job has been canceled\n```\n\nIf the first job has started processing, then it will not be canceled. 
If the second job\nhas not yet started, it will be successfully canceled and removed from the queue.\n\n", "heading1": "Cancelling Jobs", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "Some Gradio API endpoints do not return a single value, rather they return a series of values. You can get the series of values that have been returned at any time from such a generator endpoint by running `job.outputs()`:\n\n```py\nfrom gradio_client import Client\nimport time\n\nclient = Client(src=\"gradio/count_generator\")\njob = client.submit(3, api_name=\"/count\")\nwhile not job.done():\n    time.sleep(0.1)\njob.outputs()\n\n>> ['0', '1', '2']\n```\n\nNote that running `job.result()` on a generator endpoint only gives you the _first_ value returned by the endpoint.\n\nThe `Job` object is also iterable, which means you can use it to display the results of a generator function as they are returned from the endpoint. Here's the equivalent example using the `Job` as a generator:\n\n```py\nfrom gradio_client import Client\n\nclient = Client(src=\"gradio/count_generator\")\njob = client.submit(3, api_name=\"/count\")\n\nfor o in job:\n    print(o)\n\n>> 0\n>> 1\n>> 2\n```\n\nYou can also cancel jobs that have iterative outputs, in which case the job will finish as soon as the current iteration finishes running.\n\n```py\nfrom gradio_client import Client\nimport time\n\nclient = Client(\"abidlabs/test-yield\")\njob = client.submit(\"abcdef\")\ntime.sleep(3)\njob.cancel()  # job cancels after 2 iterations\n```\n\n", "heading1": "Generator Endpoints", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "Gradio demos can include [session state](https://www.gradio.app/guides/state-in-blocks), which provides a way for demos to persist information from user interactions within a page session.\n\nFor example, consider the following demo, which maintains a list of words that a user has submitted in a `gr.State` component. When a user submits a new word, it is added to the state, and the number of previous occurrences of that word is displayed:\n\n```python\nimport gradio as gr\n\ndef count(word, list_of_words):\n    return list_of_words.count(word), list_of_words + [word]\n\nwith gr.Blocks() as demo:\n    words = gr.State([])\n    textbox = gr.Textbox()\n    number = gr.Number()\n    textbox.submit(count, inputs=[textbox, words], outputs=[number, words])\n\ndemo.launch()\n```\n\nIf you were to connect to this Gradio app using the Python Client, you would notice that the API information only shows a single input and output:\n\n```csv\nClient.predict() Usage Info\n---------------------------\nNamed API endpoints: 1\n\n - predict(word, api_name=\"/count\") -> value_31\n    Parameters:\n     - [Textbox] word: str (required) \n    Returns:\n     - [Number] value_31: float \n```\n\nThat is because the Python client handles state automatically for you -- as you make a series of requests, the returned state from one request is stored internally and automatically supplied for the subsequent request. 
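As a quick sketch, two back-to-back calls against the word-count demo above would share the word list (the URL is hypothetical; point the client at wherever that demo is running):\n\n```python\nfrom gradio_client import Client\n\n# hypothetical address of the word-count demo defined above\nclient = Client(\"http://127.0.0.1:7860/\")\n\nclient.predict(word=\"hello\", api_name=\"/count\")  # returns 0: no previous occurrences\nclient.predict(word=\"hello\", api_name=\"/count\")  # returns 1: the state was carried over\n```\n\n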
If you'd like to reset the state, you can do that by calling `Client.reset_session()`.\n", "heading1": "Demos with Session State", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "You generally don't need to install cURL, as it comes pre-installed on many operating systems. Run:\n\n```bash\ncurl --version\n```\n\nto confirm that `curl` is installed. If it is not already installed, you can install it by visiting https://curl.se/download.html. \n\n\n", "heading1": "Installation", "source_page_url": "https://gradio.app/guides/querying-gradio-apps-with-curl", "source_page_title": "Gradio Clients And Lite - Querying Gradio Apps With Curl Guide"}, {"text": "To query a Gradio app, you'll need its full URL. This is usually just the URL that the Gradio app is hosted on, for example: https://bec81a83-5b5c-471e.gradio.live\n\n\n**Hugging Face Spaces**\n\nHowever, if you are querying a Gradio app on Hugging Face Spaces, you will need to use the URL of the embedded Gradio app, not the URL of the Space webpage. For example:\n\n```bash\n❌ Space URL: https://huggingface.co/spaces/abidlabs/en2fr\n✅ Gradio app URL: https://abidlabs-en2fr.hf.space/\n```\n\nYou can get the Gradio app URL by clicking the \"view API\" link at the bottom of the page. Or, you can right-click on the page and then click on \"View Frame Source\" or the equivalent in your browser to view the URL of the embedded Gradio app.\n\nWhile you can use any public Space as an API, you may get rate limited by Hugging Face if you make too many requests. For unlimited usage of a Space, simply duplicate the Space to create a private Space,\nand then use it to make as many requests as you'd like!\n\nNote: to query private Spaces, you will need to pass in your Hugging Face (HF) token. You can get your HF token here: https://huggingface.co/settings/tokens. In this case, you will need to include an additional header in both of your `curl` calls that we'll discuss below:\n\n```bash\n-H \"Authorization: Bearer $HF_TOKEN\"\n```\n\nNow, we are ready to make the two `curl` requests.\n\n", "heading1": "Step 0: Get the URL for your Gradio App", "source_page_url": "https://gradio.app/guides/querying-gradio-apps-with-curl", "source_page_title": "Gradio Clients And Lite - Querying Gradio Apps With Curl Guide"}, {"text": "The first of the two `curl` requests is a `POST` request that submits the input payload to the Gradio app. \n\nThe syntax of the `POST` request is as follows:\n\n```bash\n$ curl -X POST $URL/call/$API_NAME -H \"Content-Type: application/json\" -d '{\n \"data\": $PAYLOAD\n}'\n```\n\nHere:\n\n* `$URL` is the URL of the Gradio app as obtained in Step 0\n* `$API_NAME` is the name of the API endpoint for the event that you are running. You can get the API endpoint names by clicking the \"view API\" link at the bottom of the page.\n* `$PAYLOAD` is a valid JSON data list containing the input payload, one element for each input component.\n\nWhen you make this `POST` request successfully, you will get an event id that is printed to the terminal in this format:\n\n```bash\n>> {\"event_id\": $EVENT_ID} \n```\n\nThis `EVENT_ID` will be needed in the subsequent `curl` request to fetch the results of the prediction. 
\n\nHere are some examples of how to make the `POST` request.\n\n**Basic Example**\n\nRevisiting the example at the beginning of the page, here is how to make the `POST` request for a simple Gradio application that takes in a single input text component:\n\n```bash\n$ curl -X POST https://abidlabs-en2fr.hf.space/call/predict -H \"Content-Type: application/json\" -d '{\n \"data\": [\"Hello, my friend.\"] \n}'\n```\n\n**Multiple Input Components**\n\nThis [Gradio demo](https://huggingface.co/spaces/gradio/hello_world_3) accepts three inputs: a string corresponding to the `gr.Textbox`, a boolean value corresponding to the `gr.Checkbox`, and a numerical value corresponding to the `gr.Slider`. Here is the `POST` request:\n\n```bash\ncurl -X POST https://gradio-hello-world-3.hf.space/call/predict -H \"Content-Type: application/json\" -d '{\n \"data\": [\"Hello\", true, 5]\n}'\n```\n\n**Private Spaces**\n\nAs mentioned earlier, if you are making a request to a private Space, you will need to pass in a [Hugging Face token](https://huggingface.co/settings/tokens) that has read access to the Space. The request will look like this:\n\n```bash\n", "heading1": "Step 1: Make a Prediction (POST)", "source_page_url": "https://gradio.app/guides/querying-gradio-apps-with-curl", "source_page_title": "Gradio Clients And Lite - Querying Gradio Apps With Curl Guide"}, {"text": "king a request to a private Space, you will need to pass in a [Hugging Face token](https://huggingface.co/settings/tokens) that has read access to the Space. The request will look like this:\n\n```bash\n$ curl -X POST https://private-space.hf.space/call/predict -H \"Content-Type: application/json\" -H \"Authorization: Bearer $HF_TOKEN\" -d '{\n \"data\": [\"Hello, my friend.\"] \n}'\n```\n\n**Files**\n\nIf you are using `curl` to query a Gradio application that requires file inputs, the files *need* to be provided as URLs, and the URL needs to be enclosed in a dictionary in this format:\n\n```bash\n{\"path\": $URL}\n```\n\nHere is an example `POST` request:\n\n```bash\n$ curl -X POST https://gradio-image-mod.hf.space/call/predict -H \"Content-Type: application/json\" -d '{\n \"data\": [{\"path\": \"https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png\"}] \n}'\n```\n\n\n**Stateful Demos**\n\nIf your Gradio demo [persists user state](/guides/interface-state) across multiple interactions (e.g. is a chatbot), you can pass in a `session_hash` alongside the `data`. Requests with the same `session_hash` are assumed to be part of the same user session. 
Here's how that might look:\n\n```bash\n# These two requests will share a session\n\ncurl -X POST https://gradio-chatinterface-random-response.hf.space/call/chat -H \"Content-Type: application/json\" -d '{\n \"data\": [\"Are you sentient?\"],\n \"session_hash\": \"randomsequence1234\"\n}'\n\ncurl -X POST https://gradio-chatinterface-random-response.hf.space/call/chat -H \"Content-Type: application/json\" -d '{\n \"data\": [\"Really?\"],\n \"session_hash\": \"randomsequence1234\"\n}'\n\n# This request will be treated as a new session\n\ncurl -X POST https://gradio-chatinterface-random-response.hf.space/call/chat -H \"Content-Type: application/json\" -d '{\n \"data\": [\"Are you sentient?\"],\n \"session_hash\": \"newsequence5678\"\n}'\n```\n\n\n\n", "heading1": "Step 1: Make a Prediction (POST)", "source_page_url": "https://gradio.app/guides/querying-gradio-apps-with-curl", "source_page_title": "Gradio Clients And Lite - Querying Gradio Apps With Curl Guide"}, {"text": "ient?\"],\n \"session_hash\": \"newsequence5678\"\n}'\n```\n\n\n\n", "heading1": "Step 1: Make a Prediction (POST)", "source_page_url": "https://gradio.app/guides/querying-gradio-apps-with-curl", "source_page_title": "Gradio Clients And Lite - Querying Gradio Apps With Curl Guide"}, {"text": "Once you have received the `EVENT_ID` corresponding to your prediction, you can stream the results. Gradio stores these results in a least-recently-used cache in the Gradio app. By default, the cache can store 2,000 results (across all users and endpoints of the app). \n\nTo stream the results for your prediction, make a `GET` request with the following syntax:\n\n```bash\n$ curl -N $URL/call/$API_NAME/$EVENT_ID\n```\n\n\nTip: If you are fetching results from a private Space, include a header with your HF token like this: `-H \"Authorization: Bearer $HF_TOKEN\"` in the `GET` request.\n\nThis should produce a stream of responses in this format:\n\n```bash\nevent: ... \ndata: ...\nevent: ... \ndata: ...\n...\n```\n\nHere: `event` can be one of the following:\n* `generating`: indicating an intermediate result\n* `complete`: indicating that the prediction is complete and the final result \n* `error`: indicating that the prediction was not completed successfully\n* `heartbeat`: sent every 15 seconds to keep the request alive\n\nThe `data` is in the same format as the input payload: valid JSON data list containing the output result, one element for each output component.\n\nHere are some examples of what results you should expect if a request is completed successfully:\n\n**Basic Example**\n\nRevisiting the example at the beginning of the page, we would expect the result to look like this:\n\n```bash\nevent: complete\ndata: [\"Bonjour, mon ami.\"]\n```\n\n**Multiple Outputs**\n\nIf your endpoint returns multiple values, they will appear as elements of the `data` list:\n\n```bash\nevent: complete\ndata: [\"Good morning Hello. 
It is 5 degrees today\", -15.0]\n```\n\n**Streaming Example**\n\nIf your Gradio app [streams a sequence of values](/guides/streaming-outputs), then they will be streamed directly to your terminal, like this:\n\n```bash\nevent: generating\ndata: [\"Hello, w!\"]\nevent: generating\ndata: [\"Hello, wo!\"]\nevent: generating\ndata: [\"Hello, wor!\"]\nevent: generating\ndata: [\"Hello, worl!\"]\nevent: generating\ndata: [\"Hello, w", "heading1": "Step 2: GET the result", "source_page_url": "https://gradio.app/guides/querying-gradio-apps-with-curl", "source_page_title": "Gradio Clients And Lite - Querying Gradio Apps With Curl Guide"}, {"text": "```bash\nevent: generating\ndata: [\"Hello, w!\"]\nevent: generating\ndata: [\"Hello, wo!\"]\nevent: generating\ndata: [\"Hello, wor!\"]\nevent: generating\ndata: [\"Hello, worl!\"]\nevent: generating\ndata: [\"Hello, world!\"]\nevent: complete\ndata: [\"Hello, world!\"]\n```\n\n**File Example**\n\nIf your Gradio app returns a file, the file will be represented as a dictionary in this format (including potentially some additional keys):\n\n```python\n{\n \"orig_name\": \"example.jpg\",\n \"path\": \"/path/in/server.jpg\",\n \"url\": \"https://example.com/example.jpg\",\n \"meta\": {\"_type\": \"gradio.FileData\"}\n}\n```\n\nIn your terminal, it may appear like this:\n\n```bash\nevent: complete\ndata: [{\"path\": \"/tmp/gradio/359933dc8d6cfe1b022f35e2c639e6e42c97a003/image.webp\", \"url\": \"https://gradio-image-mod.hf.space/c/file=/tmp/gradio/359933dc8d6cfe1b022f35e2c639e6e42c97a003/image.webp\", \"size\": null, \"orig_name\": \"image.webp\", \"mime_type\": null, \"is_stream\": false, \"meta\": {\"_type\": \"gradio.FileData\"}}]\n```\n\n", "heading1": "Step 2: GET the result", "source_page_url": "https://gradio.app/guides/querying-gradio-apps-with-curl", "source_page_title": "Gradio Clients And Lite - Querying Gradio Apps With Curl Guide"}, {"text": "What if your Gradio application has [authentication enabled](/guides/sharing-your-app#authentication)? In that case, you'll need to make an additional `POST` request with cURL to authenticate yourself before you make any queries. Here are the complete steps:\n\nFirst, login with a `POST` request supplying a valid username and password:\n\n```bash\ncurl -X POST $URL/login \\\n -d \"username=$USERNAME&password=$PASSWORD\" \\\n -c cookies.txt\n```\n\nIf the credentials are correct, you'll get `{\"success\":true}` in response and the cookies will be saved in `cookies.txt`.\n\nNext, you'll need to include these cookies when you make the original `POST` request, like this:\n\n```bash\n$ curl -X POST $URL/call/$API_NAME -b cookies.txt -H \"Content-Type: application/json\" -d '{\n \"data\": $PAYLOAD\n}'\n```\n\nFinally, you'll need to `GET` the results, again supplying the cookies from the file:\n\n```bash\ncurl -N $URL/call/$API_NAME/$EVENT_ID -b cookies.txt\n```\n", "heading1": "Authentication", "source_page_url": "https://gradio.app/guides/querying-gradio-apps-with-curl", "source_page_title": "Gradio Clients And Lite - Querying Gradio Apps With Curl Guide"}, {"text": "`@gradio/lite` is a JavaScript library that enables you to run Gradio applications directly within your web browser. It achieves this by utilizing Pyodide, a Python runtime for WebAssembly, which allows Python code to be executed in the browser environment. 
With `@gradio/lite`, you can **write regular Python code for your Gradio applications**, and they will **run seamlessly in the browser** without the need for server-side infrastructure.\n\n", "heading1": "What is `@gradio/lite`?", "source_page_url": "https://gradio.app/guides/gradio-lite", "source_page_title": "Gradio Clients And Lite - Gradio Lite Guide"}, {"text": "Let's build a \"Hello World\" Gradio app in `@gradio/lite`.\n\n\n1. Import JS and CSS\n\nStart by creating a new HTML file, if you don't have one already. Import the JavaScript and CSS corresponding to the `@gradio/lite` package by using the following code:\n\n\n```html\n<html>\n\t<head>\n\t\t<script type=\"module\" crossorigin src=\"https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.js\"></script>\n\t\t<link rel=\"stylesheet\" href=\"https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.css\" />\n\t</head>\n</html>\n```\n\nNote that you should generally use the latest version of `@gradio/lite` that is available. You can see the [versions available here](https://www.jsdelivr.com/package/npm/@gradio/lite?tab=files).\n\n2. Create the `<gradio-lite>` tags\n\nSomewhere in the body of your HTML page (wherever you'd like the Gradio app to be rendered), create opening and closing `<gradio-lite>` tags.\n\n```html\n<html>\n\t<head>\n\t\t<script type=\"module\" crossorigin src=\"https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.js\"></script>\n\t\t<link rel=\"stylesheet\" href=\"https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.css\" />\n\t</head>\n\t<body>\n\t\t<gradio-lite>\n\t\t</gradio-lite>\n\t</body>\n</html>\n```\n\nNote: you can add the `theme` attribute to the `<gradio-lite>` tag to force the theme to be dark or light (by default, it respects the system theme). E.g.\n\n```html\n<gradio-lite theme=\"dark\">\n...\n</gradio-lite>\n```\n\n3. Write your Gradio app inside of the `<gradio-lite>` tags\n\nNow, write your Gradio app as you would normally, in Python! Keep in mind that since this is Python, whitespace and indentations matter.\n\n```html\n<html>\n\t<head>\n\t\t<script type=\"module\" crossorigin src=\"https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.js\"></script>\n\t\t<link rel=\"stylesheet\" href=\"https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.css\" />\n\t</head>\n\t<body>\n\t\t<gradio-lite>\n\t\timport gradio as gr\n\n\t\tdef greet(name):\n\t\t\treturn \"Hello, \" + name + \"!\"\n\n\t\tgr.Interface(greet, \"textbox\", \"textbox\").launch()\n\t\t</gradio-lite>\n\t</body>\n</html>\n```\n\nAn", "heading1": "Getting Started", "source_page_url": "https://gradio.app/guides/gradio-lite", "source_page_title": "Gradio Clients And Lite - Gradio Lite Guide"}, {"text": "head>\n\t<body>\n\t\t<gradio-lite>\n\t\timport gradio as gr\n\n\t\tdef greet(name):\n\t\t\treturn \"Hello, \" + name + \"!\"\n\n\t\tgr.Interface(greet, \"textbox\", \"textbox\").launch()\n\t\t</gradio-lite>\n\t</body>\n</html>\n```\n\nAnd that's it! You should now be able to open your HTML page in the browser and see the Gradio app rendered! Note that it may take a little while for the Gradio app to load initially since Pyodide can take a while to install in your browser.\n\n**Note on debugging**: to see any errors in your Gradio-lite application, open the inspector in your web browser. All errors (including Python errors) will be printed there.\n\n", "heading1": "Getting Started", "source_page_url": "https://gradio.app/guides/gradio-lite", "source_page_title": "Gradio Clients And Lite - Gradio Lite Guide"}, {"text": "What if you want to create a Gradio app that spans multiple files? Or that has custom Python requirements? Both are possible with `@gradio/lite`!\n\nMultiple Files\n\nAdding multiple files within a `@gradio/lite` app is very straightforward: use the `<gradio-file>` tag. You can have as many `<gradio-file>` tags as you want, but each one needs to have a `name` attribute and the entry point to your Gradio app should have the `entrypoint` attribute.\n\nHere's an example:\n\n```html\n<gradio-lite>\n\n<gradio-file name=\"app.py\" entrypoint>\nimport gradio as gr\nfrom utils import add\n\ndemo = gr.Interface(fn=add, inputs=[\"number\", \"number\"], outputs=\"number\")\n\ndemo.launch()\n</gradio-file>\n\n<gradio-file name=\"utils.py\">\ndef add(a, b):\n\treturn a + b\n</gradio-file>\n\n</gradio-lite>\n```\n\nAdditional Requirements\n\nIf your Gradio app has additional requirements, it is usually possible to [install them in the browser using micropip](https://pyodide.org/en/stable/usage/loading-packages.html#loading-packages). 
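For reference, a raw `micropip` install inside a Gradio-Lite app might look something like this (a sketch; the package name is illustrative, and top-level `await` is available under Pyodide):\n\n```python\n# runs inside a <gradio-lite> block under Pyodide\nimport micropip\nawait micropip.install(\"transformers-js-py\")\n\nimport gradio as gr\n# ... build and launch your interface as usual\n```\n\n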
We've created a wrapper to make this particularly convenient: simply list your requirements in the same syntax as a `requirements.txt` and enclose them with `<gradio-requirements>` tags.\n\nHere, we install `transformers_js_py` to run a text classification model directly in the browser!\n\n```html\n<gradio-lite>\n\n<gradio-requirements>\ntransformers_js_py\n</gradio-requirements>\n\nfrom transformers_js import import_transformers_js\nimport gradio as gr\n\ntransformers = await import_transformers_js()\npipeline = transformers.pipeline\npipe = await pipeline('sentiment-analysis')\n\nasync def classify(text):\n\treturn await pipe(text)\n\ndemo = gr.Interface(classify, \"textbox\", \"json\")\ndemo.launch()\n\n</gradio-lite>\n```\n\n**Try it out**: You can see this example running in [this Hugging Face Static Space](https://huggingface.co/spaces/abidlabs/gradio-lite-classify), which lets you host static (serverless) web applications for free. Visit the page and y", "heading1": "More Examples: Adding Additional Files and Requirements", "source_page_url": "https://gradio.app/guides/gradio-lite", "source_page_title": "Gradio Clients And Lite - Gradio Lite Guide"}, {"text": "xample running in [this Hugging Face Static Space](https://huggingface.co/spaces/abidlabs/gradio-lite-classify), which lets you host static (serverless) web applications for free. Visit the page and you'll be able to run a machine learning model without internet access!\n\nSharedWorker mode\n\nBy default, Gradio-Lite executes Python code in a [Web Worker](https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API) with [Pyodide](https://pyodide.org/) runtime, and each Gradio-Lite app has its own worker.\nIt has some benefits such as environment isolation.\n\nHowever, when there are many Gradio-Lite apps in the same page, it may cause performance issues such as high memory usage because each app has its own worker and Pyodide runtime.\nIn such cases, you can use the **SharedWorker mode** to share a single Pyodide runtime in a [SharedWorker](https://developer.mozilla.org/en-US/docs/Web/API/SharedWorker) among multiple Gradio-Lite apps. To enable the SharedWorker mode, set the `shared-worker` attribute to the `<gradio-lite>` tag.\n\n```html\n<gradio-lite shared-worker>\nimport gradio as gr\n...\n</gradio-lite>\n\n<gradio-lite shared-worker>\nimport gradio as gr\n...\n</gradio-lite>\n```\n\nWhen using the SharedWorker mode, you should be aware of the following points:\n* The apps share the same Python environment, which means that they can access the same modules and objects. If, for example, one app makes changes to some modules, the changes will be visible to other apps.\n* The file system is shared among the apps, while each app's files are mounted in its own home directory, so each app can access the files of other apps.\n\nCode and Demo Playground\n\nIf you'd like to see the code side-by-side with the demo, just pass in the `playground` attribute to the gradio-lite element. This will create an interactive playground that allows you to change the code and update the demo! If you're using playground, you can also set layo", "heading1": "More Examples: Adding Additional Files and Requirements", "source_page_url": "https://gradio.app/guides/gradio-lite", "source_page_title": "Gradio Clients And Lite - Gradio Lite Guide"}, {"text": " `playground` attribute to the gradio-lite element. This will create an interactive playground that allows you to change the code and update the demo! 
If you're using playground, you can also set layout to either 'vertical' or 'horizontal', which will determine if the code editor and preview are side-by-side or on top of each other (by default it's responsive with the width of the page).\n\n```html\n<gradio-lite playground>\nimport gradio as gr\n\ngr.Interface(fn=lambda x: x,\n\t\t\tinputs=gr.Textbox(),\n\t\t\toutputs=gr.Textbox()\n\t\t).launch()\n</gradio-lite>\n```\n\n", "heading1": "More Examples: Adding Additional Files and Requirements", "source_page_url": "https://gradio.app/guides/gradio-lite", "source_page_title": "Gradio Clients And Lite - Gradio Lite Guide"}, {"text": "1. Serverless Deployment\nThe primary advantage of @gradio/lite is that it eliminates the need for server infrastructure. This simplifies deployment, reduces server-related costs, and makes it easier to share your Gradio applications with others.\n\n2. Low Latency\nBy running in the browser, @gradio/lite offers low-latency interactions for users. There's no need for data to travel to and from a server, resulting in faster responses and a smoother user experience.\n\n3. Privacy and Security\nSince all processing occurs within the user's browser, `@gradio/lite` enhances privacy and security. User data remains on their device, providing peace of mind regarding data handling.\n\nLimitations\n\n* Currently, the biggest limitation in using `@gradio/lite` is that your Gradio apps will generally take more time (usually 5-15 seconds) to load initially in the browser. This is because the browser needs to load the Pyodide runtime before it can render Python code.\n\n* Not every Python package is supported by Pyodide. While `gradio` and many other popular packages (including `numpy`, `scikit-learn`, and `transformers-js`) can be installed in Pyodide, if your app has many dependencies, it's worth checking whether the dependencies are included in Pyodide, or can be [installed with `micropip`](https://micropip.pyodide.org/en/v0.2.2/project/api.html#micropip.install).\n\n", "heading1": "Benefits of Using `@gradio/lite`", "source_page_url": "https://gradio.app/guides/gradio-lite", "source_page_title": "Gradio Clients And Lite - Gradio Lite Guide"}, {"text": "You can immediately try out `@gradio/lite` by copying and pasting this code in a local `index.html` file and opening it with your browser:\n\n```html\n<html>\n\t<head>\n\t\t<script type=\"module\" crossorigin src=\"https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.js\"></script>\n\t\t<link rel=\"stylesheet\" href=\"https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.css\" />\n\t</head>\n\t<body>\n\t\t<gradio-lite>\n\t\timport gradio as gr\n\n\t\tdef greet(name):\n\t\t\treturn \"Hello, \" + name + \"!\"\n\n\t\tgr.Interface(greet, \"textbox\", \"textbox\").launch()\n\t\t</gradio-lite>\n\t</body>\n</html>\n```\n\n\nWe've also created a playground on the Gradio website that allows you to interactively edit code and see the results immediately!\n\nPlayground: https://www.gradio.app/playground\n", "heading1": "Try it out!", "source_page_url": "https://gradio.app/guides/gradio-lite", "source_page_title": "Gradio Clients And Lite - Gradio Lite Guide"}, {"text": "Install the @gradio/client package to interact with Gradio APIs using Node.js version >=18.0.0 or in browser-based projects. 
Use npm or any compatible package manager:\n\n```bash\nnpm i @gradio/client\n```\n\nThis command adds @gradio/client to your project dependencies, allowing you to import it in your JavaScript or TypeScript files.\n\n", "heading1": "Installation via npm", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "For quick addition to your web project, you can use the jsDelivr CDN to load the latest version of @gradio/client directly into your HTML:\n\n```html\n<script type=\"module\">\n\timport { Client } from \"https://cdn.jsdelivr.net/npm/@gradio/client/dist/index.min.js\";\n</script>\n```\n\nBe sure to add this to the `<head>` of your HTML. This will install the latest version but we advise hardcoding the version in production. You can find all available versions [here](https://www.jsdelivr.com/package/npm/@gradio/client). This approach is ideal for experimental or prototyping purposes, though it has some limitations. A complete example would look like this:\n\n```html\n<html>\n\t<head>\n\t\t<script type=\"module\">\n\t\t\timport { Client } from \"https://cdn.jsdelivr.net/npm/@gradio/client/dist/index.min.js\";\n\t\t\tconst client = await Client.connect(\"abidlabs/en2fr\");\n\t\t\tconst result = await client.predict(\"/predict\", [\"Hello\"]);\n\t\t\tconsole.log(result.data);\n\t\t</script>\n\t</head>\n</html>\n```\n\n", "heading1": "Installation via CDN", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "Start by instantiating a `client` instance and connecting it to a Gradio app that is running on Hugging Face Spaces or generally anywhere on the web.\n\n", "heading1": "Connecting to a running Gradio App", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "```js\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.connect(\"abidlabs/en2fr\"); // a Space that translates from English to French\n```\n\nYou can also connect to private Spaces by passing in your HF token with the `hf_token` property of the options parameter. You can get your HF token here: https://huggingface.co/settings/tokens\n\n```js\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.connect(\"abidlabs/my-private-space\", { hf_token: \"hf_...\" })\n```\n\n", "heading1": "Connecting to a Hugging Face Space", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "While you can use any public Space as an API, you may get rate limited by Hugging Face if you make too many requests. For unlimited usage of a Space, simply duplicate the Space to create a private Space, and then use it to make as many requests as you'd like! You'll need to pass in your [Hugging Face token](https://huggingface.co/settings/tokens).\n\n`Client.duplicate` is almost identical to `Client.connect`, the only difference is under the hood:\n\n```js\nimport { Client, handle_file } from \"@gradio/client\";\n\nconst response = await fetch(\n\t\"https://audio-samples.github.io/samples/mp3/blizzard_unconditional/sample-0.mp3\"\n);\nconst audio_file = await response.blob();\n\nconst app = await Client.duplicate(\"abidlabs/whisper\", { hf_token: \"hf_...\" });\nconst transcription = await app.predict(\"/predict\", [handle_file(audio_file)]);\n```\n\nIf you have previously duplicated a Space, re-running `Client.duplicate` will _not_ create a new Space. Instead, the client will attach to the previously-created Space. 
So it is safe to re-run the `Client.duplicate` method multiple times with the same Space.\n\n**Note:** if the original Space uses GPUs, your private Space will as well, and your Hugging Face account will get billed based on the price of the GPU. To minimize charges, your Space will automatically go to sleep after 5 minutes of inactivity. You can also set the hardware using the `hardware` and `timeout` properties of `duplicate`'s options object like this:\n\n```js\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.duplicate(\"abidlabs/whisper\", {\n\thf_token: \"hf_...\",\n\ttimeout: 60,\n\thardware: \"a10g-small\"\n});\n```\n\n", "heading1": "Duplicating a Space for private use", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "If your app is running somewhere else, just provide the full URL instead, including the \"http://\" or \"https://\". Here's an example of making predictions to a Gradio app that is running on a share URL:\n\n```js\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.connect(\"https://bec81a83-5b5c-471e.gradio.live\");\n```\n\n", "heading1": "Connecting a general Gradio app", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "If the Gradio application you are connecting to [requires a username and password](/guides/sharing-your-app#authentication), then provide them as a `[username, password]` array in the `auth` option of the `Client` class:\n\n```js\nimport { Client } from \"@gradio/client\";\n\nClient.connect(\n space_name,\n { auth: [username, password] }\n)\n```\n\n\n", "heading1": "Connecting to a Gradio app with auth", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "Once you have connected to a Gradio app, you can view the APIs that are available to you by calling the `Client`'s `view_api` method.\n\nFor the Whisper Space, we can do this:\n\n```js\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.connect(\"abidlabs/whisper\");\n\nconst app_info = await app.view_api();\n\nconsole.log(app_info);\n```\n\nAnd we will see the following:\n\n```json\n{\n\t\"named_endpoints\": {\n\t\t\"/predict\": {\n\t\t\t\"parameters\": [\n\t\t\t\t{\n\t\t\t\t\t\"label\": \"text\",\n\t\t\t\t\t\"component\": \"Textbox\",\n\t\t\t\t\t\"type\": \"string\"\n\t\t\t\t}\n\t\t\t],\n\t\t\t\"returns\": [\n\t\t\t\t{\n\t\t\t\t\t\"label\": \"output\",\n\t\t\t\t\t\"component\": \"Textbox\",\n\t\t\t\t\t\"type\": \"string\"\n\t\t\t\t}\n\t\t\t]\n\t\t}\n\t},\n\t\"unnamed_endpoints\": {}\n}\n```\n\nThis shows us that we have 1 API endpoint in this Space, and shows us how to use the API endpoint to make a prediction: we should call the `.predict()` method (which we will explore below), providing a single parameter of type `string` (for the Whisper Space, this is a URL or path pointing to an audio file).\n\nWe should also provide the `api_name='/predict'` argument to the `predict()` method. Although this isn't necessary if a Gradio app has only 1 named endpoint, it does allow us to call different endpoints in a single app if they are available. 
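For reference, the same inspection can be done from Python with the `gradio_client` package (covered in the Python Client docs); this is a sketch assuming `gradio_client` is installed:\n\n```python\nfrom gradio_client import Client\n\nclient = Client(\"abidlabs/whisper\")\n\n# Prints the API spec: endpoints, parameters, and return types,\n# equivalent to the JS `view_api` call above\nclient.view_api()\n```\n\n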
If an app has unnamed API endpoints, these can also be displayed by running `.view_api(all_endpoints=True)`.\n\n", "heading1": "Inspecting the API endpoints", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "As an alternative to running the `.view_api()` method, you can click on the \"Use via API\" link in the footer of the Gradio app, which shows us the same information, along with example usage. \n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api.png)\n\nThe View API page also includes an \"API Recorder\" that lets you interact with the Gradio UI normally and converts your interactions into the corresponding code to run with the JS Client.\n\n\n", "heading1": "The \"View API\" Page", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "The simplest way to make a prediction is to call the `.predict()` method with the appropriate arguments:\n\n```js\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.connect(\"abidlabs/en2fr\");\nconst result = await app.predict(\"/predict\", [\"Hello\"]);\n```\n\nIf there are multiple parameters, then you should pass them as an array to `.predict()`, like this:\n\n```js\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.connect(\"gradio/calculator\");\nconst result = await app.predict(\"/predict\", [4, \"add\", 5]);\n```\n\nFor certain inputs, such as images, you should pass in a `Buffer`, `Blob` or `File` depending on what is most convenient. In Node, this would be a `Buffer` or `Blob`; in a browser environment, this would be a `Blob` or `File`.\n\n```js\nimport { Client, handle_file } from \"@gradio/client\";\n\nconst response = await fetch(\n\t\"https://audio-samples.github.io/samples/mp3/blizzard_unconditional/sample-0.mp3\"\n);\nconst audio_file = await response.blob();\n\nconst app = await Client.connect(\"abidlabs/whisper\");\nconst result = await app.predict(\"/predict\", [handle_file(audio_file)]);\n```\n\n", "heading1": "Making a prediction", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "If the API you are working with can return results over time, or you wish to access information about the status of a job, you can use the iterable interface for more flexibility. 
This is especially useful for iterative endpoints or generator endpoints that will produce a series of values over time as discrete responses.\n\n```js\nimport { Client } from \"@gradio/client\";\n\nfunction log_result(payload) {\n\tconst {\n\t\tdata: [translation]\n\t} = payload;\n\n\tconsole.log(`The translated result is: ${translation}`);\n}\n\nconst app = await Client.connect(\"abidlabs/en2fr\");\nconst job = app.submit(\"/predict\", [\"Hello\"]);\n\nfor await (const message of job) {\n\tlog_result(message);\n}\n```\n\n", "heading1": "Using events", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "The event interface also allows you to get the status of the running job by instantiating the client with the `events` option, passing `status` and `data` as an array:\n\n\n```ts\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.connect(\"abidlabs/en2fr\", {\n\tevents: [\"status\", \"data\"]\n});\n```\n\nThis ensures that status messages are also reported to the client.\n\n`status`es are returned as an object with the following attributes: `status` (a human-readable status of the current job, `\"pending\" | \"generating\" | \"complete\" | \"error\"`), `code` (the detailed gradio code for the job), `position` (the current position of this job in the queue), `queue_size` (the total queue size), `eta` (estimated time this job will complete), `success` (a boolean representing whether the job completed successfully), and `time` (a `Date` object detailing the time that the status was generated).\n\n```js\nimport { Client } from \"@gradio/client\";\n\nfunction log_status(status) {\n\tconsole.log(\n\t\t`The current status for this job is: ${JSON.stringify(status, null, 2)}.`\n\t);\n}\n\nconst app = await Client.connect(\"abidlabs/en2fr\", {\n\tevents: [\"status\", \"data\"]\n});\nconst job = app.submit(\"/predict\", [\"Hello\"]);\n\nfor await (const message of job) {\n\tif (message.type === \"status\") {\n\t\tlog_status(message);\n\t}\n}\n```\n\n", "heading1": "Status", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "The job instance also has a `.cancel()` method that cancels jobs that have been queued but not started. For example, if you run:\n\n```js\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.connect(\"abidlabs/en2fr\");\nconst job_one = app.submit(\"/predict\", [\"Hello\"]);\nconst job_two = app.submit(\"/predict\", [\"Friends\"]);\n\njob_one.cancel();\njob_two.cancel();\n```\n\nIf the first job has started processing, then it will not be canceled but the client will no longer listen for updates (throwing away the job). If the second job has not yet started, it will be successfully canceled and removed from the queue.\n\n", "heading1": "Cancelling Jobs", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "Some Gradio API endpoints do not return a single value; rather, they return a series of values. 
You can listen for these values in real time using the iterable interface:\n\n```js\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.connect(\"gradio/count_generator\");\nconst job = app.submit(0, [9]);\n\nfor await (const message of job) {\n\tconsole.log(message.data);\n}\n```\n\nThis will log out the values as they are generated by the endpoint.\n\nYou can also cancel jobs that have iterative outputs, in which case the job will finish immediately.\n\n```js\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.connect(\"gradio/count_generator\");\nconst job = app.submit(0, [9]);\n\n// schedule the cancellation before consuming the stream,\n// so the job is cancelled while it is still generating\nsetTimeout(() => {\n\tjob.cancel();\n}, 3000);\n\nfor await (const message of job) {\n\tconsole.log(message.data);\n}\n```\n", "heading1": "Generator Endpoints", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "**[OpenAPI](https://www.openapis.org/)** is a widely adopted standard for describing RESTful APIs in a machine-readable format, typically as a JSON file. \n\nYou can create a Gradio UI from an OpenAPI Spec **in 1 line of Python**, instantly generating an interactive web interface for any API, making it accessible for demos, testing, or sharing with non-developers, without writing custom frontend code.\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/from-openapi-spec", "source_page_title": "Other Tutorials - From Openapi Spec Guide"}, {"text": "Gradio now provides a convenient function, `gr.load_openapi`, that can automatically generate a Gradio app from an OpenAPI v3 specification. This function parses the spec, creates UI components for each endpoint and parameter, and lets you interact with the API directly from your browser.\n\nHere's a minimal example:\n\n```python\nimport gradio as gr\n\ndemo = gr.load_openapi(\n    openapi_spec=\"https://petstore3.swagger.io/api/v3/openapi.json\",\n    base_url=\"https://petstore3.swagger.io/api/v3\",\n    paths=[\"/pet.*\"],\n    methods=[\"get\", \"post\"],\n)\n\ndemo.launch()\n```\n\n**Parameters:**\n- **openapi_spec**: URL, file path, or Python dictionary containing the OpenAPI v3 spec (JSON format only).\n- **base_url**: The base URL for the API endpoints (e.g., `https://api.example.com/v1`).\n- **paths** (optional): List of endpoint path patterns (supports regex) to include. If not set, all paths are included.\n- **methods** (optional): List of HTTP methods (e.g., `[\"get\", \"post\"]`) to include. If not set, all methods are included.\n\nThe generated app will display a sidebar with available endpoints and create interactive forms for each operation, letting you make API calls and view responses in real time.\n\n", "heading1": "How it works", "source_page_url": "https://gradio.app/guides/from-openapi-spec", "source_page_title": "Other Tutorials - From Openapi Spec Guide"}, {"text": "Once your Gradio app is running, you can share the URL with others so they can try out the API through a friendly web interface\u2014no code required. For even more power, you can launch the app as an MCP (Model Context Protocol) server using [Gradio's MCP integration](https://www.gradio.app/guides/building-mcp-server-with-gradio), enabling programmatic access and orchestration of your API via the MCP ecosystem. 
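As a rough sketch (assuming a recent Gradio version with the MCP extra installed, `pip install \"gradio[mcp]\"`), launching the generated app as an MCP server is a one-flag change:\n\n```python\nimport gradio as gr\n\n# Build a UI from the OpenAPI spec as shown above\ndemo = gr.load_openapi(\n    openapi_spec=\"https://petstore3.swagger.io/api/v3/openapi.json\",\n    base_url=\"https://petstore3.swagger.io/api/v3\",\n)\n\n# Also expose the app's endpoints as MCP tools\ndemo.launch(mcp_server=True)\n```\n\n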
This makes it easy to build, share, and automate API workflows with minimal effort.\n\n", "heading1": "Next steps", "source_page_url": "https://gradio.app/guides/from-openapi-spec", "source_page_title": "Other Tutorials - From Openapi Spec Guide"}, {"text": "In this guide, we will demonstrate some of the ways you can use Gradio with Comet. We will cover the basics of using Comet with Gradio and show you some of the ways that you can leverage Gradio's advanced features such as [Embedding with iFrames](https://www.gradio.app/guides/sharing-your-app/embedding-with-iframes) and [State](https://www.gradio.app/docs/state) to build some amazing model evaluation workflows.\n\nHere is a list of the topics covered in this guide.\n\n1. Logging Gradio UI's to your Comet Experiments\n2. Embedding Gradio Applications directly into your Comet Projects\n3. Embedding Hugging Face Spaces directly into your Comet Projects\n4. Logging Model Inferences from your Gradio Application to Comet\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": "[Comet](https://www.comet.com?utm_source=gradio&utm_medium=referral&utm_campaign=gradio-integration&utm_content=gradio-docs) is an MLOps Platform that is designed to help Data Scientists and Teams build better models faster! Comet provides tooling to Track, Explain, Manage, and Monitor your models in a single place! It works with Jupyter Notebooks and Scripts and, most importantly, it's 100% free!\n\n", "heading1": "What is Comet?", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": "First, install the dependencies needed to run these examples:\n\n```shell\npip install comet_ml torch torchvision transformers gradio shap requests Pillow\n```\n\nNext, you will need to [sign up for a Comet Account](https://www.comet.com/signup?utm_source=gradio&utm_medium=referral&utm_campaign=gradio-integration&utm_content=gradio-docs). Once you have your account set up, [grab your API Key](https://www.comet.com/docs/v2/guides/getting-started/quickstart/get-an-api-key?utm_source=gradio&utm_medium=referral&utm_campaign=gradio-integration&utm_content=gradio-docs) and configure your Comet credentials.\n\nIf you're running these examples as a script, you can either export your credentials as environment variables\n\n```shell\nexport COMET_API_KEY=\"\"\nexport COMET_WORKSPACE=\"\"\nexport COMET_PROJECT_NAME=\"\"\n```\n\nor set them in a `.comet.config` file in your working directory. Your file should be formatted in the following way:\n\n```shell\n[comet]\napi_key=\nworkspace=\nproject_name=\n```\n\nIf you are using the provided Colab Notebooks to run these examples, please run the cell with the following snippet before starting the Gradio UI. 
Running this cell allows you to interactively add your API key to the notebook.\n\n```python\nimport comet_ml\ncomet_ml.init()\n```\n\n", "heading1": "Setup", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/comet-ml/comet-examples/blob/master/integrations/model-evaluation/gradio/notebooks/Gradio_and_Comet.ipynb)\n\nIn this example, we will go over how to log your Gradio Applications to Comet and interact with them using the Gradio Custom Panel.\n\nLet's start by building a simple Image Classification example using `resnet18`.\n\n```python\nimport comet_ml\n\nimport gradio as gr\nimport requests\nimport torch\nfrom PIL import Image\nfrom torchvision import transforms\n\ntorch.hub.download_url_to_file(\"https://github.com/pytorch/hub/raw/master/images/dog.jpg\", \"dog.jpg\")\n\nif torch.cuda.is_available():\n    device = \"cuda\"\nelse:\n    device = \"cpu\"\n\nmodel = torch.hub.load(\"pytorch/vision:v0.6.0\", \"resnet18\", pretrained=True).eval()\nmodel = model.to(device)\n\n# Download human-readable labels for ImageNet\nresponse = requests.get(\"https://git.io/JJkYN\")\nlabels = response.text.split(\"\\n\")\n\n\ndef predict(inp):\n    inp = Image.fromarray(inp.astype(\"uint8\"), \"RGB\")\n    inp = transforms.ToTensor()(inp).unsqueeze(0)\n    with torch.no_grad():\n        prediction = torch.nn.functional.softmax(model(inp.to(device))[0], dim=0)\n    return {labels[i]: float(prediction[i]) for i in range(1000)}\n\n\ninputs = gr.Image()\noutputs = gr.Label(num_top_classes=3)\n\nio = gr.Interface(\n    fn=predict, inputs=inputs, outputs=outputs, examples=[\"dog.jpg\"]\n)\nio.launch(inline=False, share=True)\n\nexperiment = comet_ml.Experiment()\nexperiment.add_tag(\"image-classifier\")\n\nio.integrate(comet_ml=experiment)\n```\n\nThe last line in this snippet will log the URL of the Gradio Application to your Comet Experiment. You can find the URL in the Text Tab of your Experiment.\n\nAdd the Gradio Panel to your Experiment to interact with your application.\n\n\n\n", "heading1": "1. Logging Gradio UI's to your Comet Experiments", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": "\n\nIf you are permanently hosting your Gradio application, you can embed the UI using the Gradio Panel Extended custom Panel.\n\nGo to your Comet Project page, and head over to the Panels tab. Click the `+ Add` button to bring up the Panels search page.\n\nNext, search for Gradio Panel Extended in the Public Panels section and click `Add`.\n\nOnce you have added your Panel, click `Edit` to access the Panel Options page and paste in the URL of your Gradio application.\n\n![Edit-Gradio-Panel-Options](https://user-images.githubusercontent.com/7529846/214573001-23814b5a-ca65-4ace-a8a5-b27cdda70f7a.gif)\n\n", "heading1": "2. 
Embedding Gradio Applications directly into your Comet Projects", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": "\n\nYou can also embed Gradio Applications that are hosted on Hugging Face Spaces into your Comet Projects using the Hugging Face Spaces Panel.\n\nGo to your Comet Project page, and head over to the Panels tab. Click the `+ Add` button to bring up the Panels search page. Next, search for the Hugging Face Spaces Panel in the Public Panels section and click `Add`.\n\nOnce you have added your Panel, click Edit to access the Panel Options page and paste in the path of your Hugging Face Space, e.g. `pytorch/ResNet`.\n\n", "heading1": "3. Embedding Hugging Face Spaces directly into your Comet Projects", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": "\n\n[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/comet-ml/comet-examples/blob/master/integrations/model-evaluation/gradio/notebooks/Logging_Model_Inferences_with_Comet_and_Gradio.ipynb)\n\nIn the previous examples, we demonstrated the various ways in which you can interact with a Gradio application through the Comet UI. Additionally, you can also log model inferences, such as SHAP plots, from your Gradio application to Comet.\n\nIn the following snippet, we're going to log inferences from a Text Generation model. We can persist an Experiment across multiple inference calls using Gradio's [State](https://www.gradio.app/docs/state) object. This will allow you to log multiple inferences from a model to a single Experiment.\n\n```python\nimport comet_ml\nimport gradio as gr\nimport shap\nimport torch\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\n\nif torch.cuda.is_available():\n    device = \"cuda\"\nelse:\n    device = \"cpu\"\n\nMODEL_NAME = \"gpt2\"\n\nmodel = AutoModelForCausalLM.from_pretrained(MODEL_NAME)\n\n# set model decoder to true\nmodel.config.is_decoder = True\n# set text-generation params under task_specific_params\nmodel.config.task_specific_params[\"text-generation\"] = {\n    \"do_sample\": True,\n    \"max_length\": 50,\n    \"temperature\": 0.7,\n    \"top_k\": 50,\n    \"no_repeat_ngram_size\": 2,\n}\nmodel = model.to(device)\n\ntokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)\nexplainer = shap.Explainer(model, tokenizer)\n\n\ndef start_experiment():\n    \"\"\"Returns an APIExperiment object that is thread safe\n    and can be used to log inferences to a single Experiment\n    \"\"\"\n    try:\n        api = comet_ml.API()\n        workspace = api.get_default_", "heading1": "4. 
Logging Model Inferences to Comet", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": "    \"\"\"Returns an APIExperiment object that is thread safe\n    and can be used to log inferences to a single Experiment\n    \"\"\"\n    try:\n        api = comet_ml.API()\n        workspace = api.get_default_workspace()\n        project_name = comet_ml.config.get_config()[\"comet.project_name\"]\n\n        experiment = comet_ml.APIExperiment(\n            workspace=workspace, project_name=project_name\n        )\n        experiment.log_other(\"Created from\", \"gradio-inference\")\n\n        message = f\"Started Experiment: [{experiment.name}]({experiment.url})\"\n\n        return (experiment, message)\n\n    except Exception:\n        return None, None\n\n\ndef predict(text, state, message):\n    experiment = state\n\n    shap_values = explainer([text])\n    plot = shap.plots.text(shap_values, display=False)\n\n    if experiment is not None:\n        experiment.log_other(\"message\", message)\n        experiment.log_html(plot)\n\n    return plot\n\n\nwith gr.Blocks() as demo:\n    start_experiment_btn = gr.Button(\"Start New Experiment\")\n    experiment_status = gr.Markdown()\n\n    # Log a message to the Experiment to provide more context\n    experiment_message = gr.Textbox(label=\"Experiment Message\")\n    experiment = gr.State()\n\n    input_text = gr.Textbox(label=\"Input Text\", lines=5, interactive=True)\n    submit_btn = gr.Button(\"Submit\")\n\n    output = gr.HTML(interactive=True)\n\n    start_experiment_btn.click(\n        start_experiment, outputs=[experiment, experiment_status]\n    )\n    submit_btn.click(\n        predict, inputs=[input_text, experiment, experiment_message], outputs=[output]\n    )\n```\n\nInferences from this snippet will be saved in the HTML tab of your experiment.\n\n\n\n", "heading1": "4. Logging Model Inferences to Comet", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": "We hope you found this guide useful and that it provides some inspiration to help you build awesome model evaluation workflows with Comet and Gradio.\n\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": "- Create an account on Hugging Face [here](https://huggingface.co/join).\n- Add Gradio Demo under your username, see this [course](https://huggingface.co/course/chapter9/4?fw=pt) for setting up Gradio Demo on Hugging Face.\n- Request to join the Comet organization [here](https://huggingface.co/Comet).\n\n", "heading1": "How to contribute Gradio demos on HF spaces on the Comet organization", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": "- [Comet Documentation](https://www.comet.com/docs/v2/?utm_source=gradio&utm_medium=referral&utm_campaign=gradio-integration&utm_content=gradio-docs)\n", "heading1": "Additional Resources", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": "Gradio is a Python library that allows you to quickly create customizable web apps for your machine learning models and data processing pipelines. 
Gradio apps can be deployed on [Hugging Face Spaces](https://hf.space) for free.\n\nIn some cases though, you might want to deploy a Gradio app on your own web server. You might already be using [Nginx](https://www.nginx.com/), a highly performant web server, to serve your website (say `https://www.example.com`), and you want to attach Gradio to a specific subpath on your website (e.g. `https://www.example.com/gradio-demo`).\n\nThis Guide walks you through the process of running a Gradio app behind Nginx on your own web server to achieve this.\n\n**Prerequisites**\n\n1. A Linux web server with [Nginx installed](https://www.nginx.com/blog/setting-up-nginx/) and [Gradio installed](/quickstart)\n2. A working Gradio app saved as a Python file on your web server\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/running-gradio-on-your-web-server-with-nginx", "source_page_title": "Other Tutorials - Running Gradio On Your Web Server With Nginx Guide"}, {"text": "1. Start by editing the Nginx configuration file on your web server. By default, this is located at: `/etc/nginx/nginx.conf`\n\nIn the `http` block, add the following line to include server block configurations from a separate file:\n\n```bash\ninclude /etc/nginx/sites-enabled/*;\n```\n\n2. Create a new file in the `/etc/nginx/sites-available` directory (create the directory if it does not already exist), using a filename that represents your app, for example: `sudo nano /etc/nginx/sites-available/my_gradio_app`\n\n3. Paste the following into your file editor:\n\n```bash\nserver {\n    listen 80;\n    server_name example.com www.example.com;  # Change this to your domain name\n\n    location /gradio-demo/ {  # Change this if you'd like to serve your Gradio app on a different path\n        proxy_pass http://127.0.0.1:7860/;  # Change this if your Gradio app will be running on a different port\n        proxy_buffering off;\n        proxy_redirect off;\n        proxy_http_version 1.1;\n        proxy_set_header Upgrade $http_upgrade;\n        proxy_set_header Connection \"upgrade\";\n        proxy_set_header Host $host;\n        proxy_set_header X-Forwarded-Host $host;\n        proxy_set_header X-Forwarded-Proto $scheme;\n    }\n}\n```\n\n\nTip: Setting the `X-Forwarded-Host` and `X-Forwarded-Proto` headers is important as Gradio uses these, in conjunction with the `root_path` parameter discussed below, to construct the public URL that your app is being served on. Gradio uses the public URL to fetch various static assets. If these headers are not set, your Gradio app may load in a broken state.\n\n*Note:* The `$host` variable does not include the host port. If you are serving your Gradio application on a raw IP address and port, you should use the `$http_host` variable instead, in these lines:\n\n```bash\n    proxy_set_header Host $host;\n    proxy_set_header X-Forwarded-Host $host;\n```\n\n", "heading1": "Editing your Nginx configuration file", "source_page_url": "https://gradio.app/guides/running-gradio-on-your-web-server-with-nginx", "source_page_title": "Other Tutorials - Running Gradio On Your Web Server With Nginx Guide"}, {"text": "1. Before you launch your Gradio app, you'll need to set the `root_path` to be the same as the subpath that you specified in your nginx configuration. 
This is necessary for Gradio to run on any subpath besides the root of the domain.\n\n *Note:* Instead of a subpath, you can also provide a complete URL for `root_path` (beginning with `http` or `https`) in which case the `root_path` is treated as an absolute URL instead of a URL suffix (but in this case, you'll need to update the `root_path` if the domain changes).\n\nHere's a simple example of a Gradio app with a custom `root_path` corresponding to the Nginx configuration above.\n\n```python\nimport gradio as gr\nimport time\n\ndef test(x):\n    time.sleep(4)\n    return x\n\ngr.Interface(test, \"textbox\", \"textbox\").queue().launch(root_path=\"/gradio-demo\")\n```\n\n2. Start a `tmux` session by typing `tmux` and pressing Enter (optional)\n\nIt's recommended that you run your Gradio app in a `tmux` session so that you can keep it running in the background easily.\n\n3. Then, start your Gradio app. Simply type in `python` followed by the name of your Gradio Python file. By default, your app will run on `localhost:7860`, but if it starts on a different port, you will need to update the nginx configuration file above.\n\n", "heading1": "Run your Gradio app on your web server", "source_page_url": "https://gradio.app/guides/running-gradio-on-your-web-server-with-nginx", "source_page_title": "Other Tutorials - Running Gradio On Your Web Server With Nginx Guide"}, {"text": "1. If you are in a tmux session, exit by typing CTRL+B (or CMD+B), followed by the \"D\" key.\n\n2. Finally, restart nginx by running `sudo systemctl restart nginx`.\n\nAnd that's it! If you visit `https://example.com/gradio-demo` on your browser, you should see your Gradio app running there.\n\n", "heading1": "Restart Nginx", "source_page_url": "https://gradio.app/guides/running-gradio-on-your-web-server-with-nginx", "source_page_title": "Other Tutorials - Running Gradio On Your Web Server With Nginx Guide"}, {"text": "When you demo a machine learning model, you might want to collect data from users who try the model, particularly data points in which the model is not behaving as expected. Capturing these \"hard\" data points is valuable because it allows you to improve your machine learning model and make it more reliable and robust.\n\nGradio simplifies the collection of this data by including a **Flag** button with every `Interface`. This allows a user or tester to easily send data back to the machine where the demo is running. In this Guide, we discuss how to use the flagging feature, both with `gradio.Interface` as well as with `gradio.Blocks`.\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/using-flagging", "source_page_title": "Other Tutorials - Using Flagging Guide"}, {"text": "Flagging with Gradio's `Interface` is especially easy. By default, underneath the output components, there is a button marked **Flag**. When a user testing your model sees input with interesting output, they can click the flag button to send the input and output data back to the machine where the demo is running. The sample is saved to a CSV log file (by default). If the demo involves images, audio, video, or other types of files, these are saved separately in a parallel directory and the paths to these files are saved in the CSV file.\n\nThere are [four parameters](https://gradio.app/docs/interface#initialization) in `gradio.Interface` that control how flagging works. 
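At a glance, they are all set in the `Interface` constructor (the argument values here are just illustrative):\n\n```python\nimport gradio as gr\n\niface = gr.Interface(\n    fn=lambda x: x,\n    inputs=\"textbox\",\n    outputs=\"textbox\",\n    flagging_mode=\"manual\",                       # \"manual\" | \"auto\" | \"never\"\n    flagging_options=[\"Incorrect\", \"Ambiguous\"],  # extra labels shown on the flag buttons\n    flagging_dir=\"flagged\",                       # where flagged data is written\n    # flagging_callback defaults to a CSV logger; see below\n)\n```\n\n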
We will go over them in greater detail.\n\n- `flagging_mode`: this parameter can be set to either `\"manual\"` (default), `\"auto\"`, or `\"never\"`.\n - `manual`: users will see a button to flag, and samples are only flagged when the button is clicked.\n - `auto`: users will not see a button to flag, but every sample will be flagged automatically.\n - `never`: users will not see a button to flag, and no sample will be flagged.\n- `flagging_options`: this parameter can be either `None` (default) or a list of strings.\n - If `None`, then the user simply clicks on the **Flag** button and no additional options are shown.\n - If a list of strings are provided, then the user sees several buttons, corresponding to each of the strings that are provided. For example, if the value of this parameter is `[\"Incorrect\", \"Ambiguous\"]`, then buttons labeled **Flag as Incorrect** and **Flag as Ambiguous** appear. This only applies if `flagging_mode` is `\"manual\"`.\n - The chosen option is then logged along with the input and output.\n- `flagging_dir`: this parameter takes a string.\n - It represents what to name the directory where flagged data is stored.\n- `flagging_callback`: this parameter takes an instance of a subclass of the `FlaggingCallback` class\n - Using this parameter allows you to write custom code that gets run whe", "heading1": "The **Flag** button in `gradio.Interface`", "source_page_url": "https://gradio.app/guides/using-flagging", "source_page_title": "Other Tutorials - Using Flagging Guide"}, {"text": "flagged data is stored.\n- `flagging_callback`: this parameter takes an instance of a subclass of the `FlaggingCallback` class\n - Using this parameter allows you to write custom code that gets run when the flag button is clicked (a minimal sketch appears below)\n - By default, this is set to an instance of `gr.CSVLogger`\n\n", "heading1": "The **Flag** button in `gradio.Interface`", "source_page_url": "https://gradio.app/guides/using-flagging", "source_page_title": "Other Tutorials - Using Flagging Guide"}, {"text": "Within the directory provided by the `flagging_dir` argument, a CSV file will log the flagged data.\n\nHere's an example: the code below creates the calculator interface embedded below it:\n\n```python\nimport gradio as gr\n\n\ndef calculator(num1, operation, num2):\n    if operation == \"add\":\n        return num1 + num2\n    elif operation == \"subtract\":\n        return num1 - num2\n    elif operation == \"multiply\":\n        return num1 * num2\n    elif operation == \"divide\":\n        return num1 / num2\n\n\niface = gr.Interface(\n    calculator,\n    [\"number\", gr.Radio([\"add\", \"subtract\", \"multiply\", \"divide\"]), \"number\"],\n    \"number\",\n    flagging_mode=\"manual\"\n)\n\niface.launch()\n```\n\n\n\nWhen you click the flag button above, the directory where the interface was launched will include a new flagged subfolder, with a CSV file inside it. This CSV file includes all the data that was flagged.\n\n```directory\n+-- flagged/\n|   +-- logs.csv\n```\n\n_flagged/logs.csv_\n\n```csv\nnum1,operation,num2,Output,timestamp\n5,add,7,12,2022-01-31 11:40:51.093412\n6,subtract,1.5,4.5,2022-01-31 03:25:32.023542\n```\n\nIf the interface involves file data, such as for Image and Audio components, folders will be created to store that flagged data as well. 
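As promised above, here is a minimal sketch of a custom `flagging_callback` (the class name and print-based logging are purely illustrative, and the `setup`/`flag` signatures assume a recent Gradio version):\n\n```python\nimport gradio as gr\nfrom gradio.flagging import FlaggingCallback\n\n\nclass PrintLogger(FlaggingCallback):\n    \"\"\"A toy callback that prints flagged samples instead of saving them.\"\"\"\n\n    def setup(self, components, flagging_dir):\n        # Called once, when the Interface is created\n        self.components = components\n\n    def flag(self, flag_data, flag_option=None, username=None):\n        # Called each time the Flag button is clicked;\n        # flag_data holds the current input and output values\n        print(\"Flagged:\", flag_data, \"option:\", flag_option)\n        return 1  # number of samples flagged\n```\n\nThe rest of this section returns to what gets written to disk for file components. 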
For example, an `image` input to `image` output interface will create the following structure.\n\n```directory\n+-- flagged/\n|   +-- logs.csv\n|   +-- image/\n|   |   +-- 0.png\n|   |   +-- 1.png\n|   +-- Output/\n|   |   +-- 0.png\n|   |   +-- 1.png\n```\n\n_flagged/logs.csv_\n\n```csv\nimage,Output,timestamp\nimage/0.png,Output/0.png,2022-02-04 19:49:58.026963\nimage/1.png,Output/1.png,2022-02-02 10:40:51.093412\n```\n\nIf you wish for the user to provide a reason for flagging, you can pass a list of strings to the `flagging_options` argument of Interface. Users will have to select one of these choices when flagging, and the option will be saved as an additional column to the CSV.\n\nIf we go back to the calculator example, the fo", "heading1": "What happens to flagged data?", "source_page_url": "https://gradio.app/guides/using-flagging", "source_page_title": "Other Tutorials - Using Flagging Guide"}, {"text": "` argument of Interface. Users will have to select one of these choices when flagging, and the option will be saved as an additional column to the CSV.\n\nIf we go back to the calculator example, the following code will create the interface embedded below it.\n\n```python\niface = gr.Interface(\n    calculator,\n    [\"number\", gr.Radio([\"add\", \"subtract\", \"multiply\", \"divide\"]), \"number\"],\n    \"number\",\n    flagging_mode=\"manual\",\n    flagging_options=[\"wrong sign\", \"off by one\", \"other\"]\n)\n\niface.launch()\n```\n\n\n\nWhen users click the flag button, the CSV file will now include a column indicating the selected option.\n\n_flagged/logs.csv_\n\n```csv\nnum1,operation,num2,Output,flag,timestamp\n5,add,7,-12,wrong sign,2022-02-04 11:40:51.093412\n6,subtract,1.5,3.5,off by one,2022-02-04 11:42:32.062512\n```\n\n", "heading1": "What happens to flagged data?", "source_page_url": "https://gradio.app/guides/using-flagging", "source_page_title": "Other Tutorials - Using Flagging Guide"}, {"text": "What if you are using `gradio.Blocks`? On one hand, you have even more flexibility\nwith Blocks -- you can write whatever Python code you want to run when a button is clicked,\nand assign that using the built-in events in Blocks.\n\nAt the same time, you might want to use an existing `FlaggingCallback` to avoid writing extra code.\nThis requires two steps:\n\n1. You have to run your callback's `.setup()` somewhere in the code prior to the\n   first time you flag data\n2. When the flagging button is clicked, then you trigger the callback's `.flag()` method,\n   making sure to collect the arguments correctly and disabling the typical preprocessing.\n\nHere is an example with an image sepia filter Blocks demo that lets you flag\ndata using the default `CSVLogger`:\n\n$code_blocks_flag\n$demo_blocks_flag\n\n", "heading1": "Flagging with Blocks", "source_page_url": "https://gradio.app/guides/using-flagging", "source_page_title": "Other Tutorials - Using Flagging Guide"}, {"text": "Important Note: please make sure your users understand when the data they submit is being saved, and what you plan on doing with it. This is especially important when you use `flagging_mode=auto` (when all of the data submitted through the demo is being flagged)\n\nThat's all! Happy building :)\n", "heading1": "Privacy", "source_page_url": "https://gradio.app/guides/using-flagging", "source_page_title": "Other Tutorials - Using Flagging Guide"}, {"text": "Gradio features [blocks](https://www.gradio.app/docs/blocks) to easily lay out applications. 
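A small self-contained example of nesting layout components (the names here are illustrative):\n\n```python\nimport gradio as gr\n\n# hierarchy: Blocks > Row > Column > components\nwith gr.Blocks() as demo:\n    with gr.Row():\n        with gr.Column():\n            inp = gr.Textbox(label=\"Input\")\n        with gr.Column():\n            out = gr.Textbox(label=\"Output\")\n    inp.change(fn=lambda x: x, inputs=inp, outputs=out)\n\ndemo.launch()\n```\n\n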
To use this feature, you need to stack or nest layout components, as in the snippet above, and create a hierarchy with them. This isn't difficult to implement and maintain for small projects, but after the project gets more complex, this component hierarchy becomes difficult to maintain and reuse.\n\nIn this guide, we are going to explore how we can wrap the layout classes to create more maintainable and easy-to-read applications without sacrificing flexibility.\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/wrapping-layouts", "source_page_title": "Other Tutorials - Wrapping Layouts Guide"}, {"text": "We are going to follow the implementation from this Hugging Face Space example: https://huggingface.co/spaces/WoWoWoWololo/wrapping-layouts\n\n", "heading1": "Example", "source_page_url": "https://gradio.app/guides/wrapping-layouts", "source_page_title": "Other Tutorials - Wrapping Layouts Guide"}, {"text": "The wrapping utility has two important classes. The first one is the ```LayoutBase``` class and the other one is the ```Application``` class.\n\nWe are going to look at the ```render``` and ```attach_event``` functions of them for brevity. You can look at the full implementation from [the example code](https://huggingface.co/spaces/WoWoWoWololo/wrapping-layouts/blob/main/app.py).\n\nSo let's start with the ```LayoutBase``` class.\n\nLayoutBase Class\n\n1. Render Function\n\n Let's look at the ```render``` function in the ```LayoutBase``` class:\n\n```python\n# other LayoutBase implementations\n\ndef render(self) -> None:\n    with self.main_layout:\n        for renderable in self.renderables:\n            renderable.render()\n\n    self.main_layout.render()\n```\nThis is a little confusing at first, but if you consider the default implementation, you can understand it easily.\nLet's look at an example:\n\nIn the default implementation, this is what we're doing:\n\n```python\nwith Row():\n    left_textbox = Textbox(value=\"left_textbox\")\n    right_textbox = Textbox(value=\"right_textbox\")\n```\n\nNow, pay attention to the Textbox variables. These variables' ```render``` parameter is `True` by default. So as we use the ```with``` syntax and create these variables, they are calling the ```render``` function under the ```with``` syntax.\n\nWe know the render function is called in the constructor with the implementation from the ```gradio.blocks.Block``` class:\n\n```python\nclass Block:\n    # constructor parameters are omitted for brevity\n    def __init__(self, ...):\n        # other assign functions\n\n        if render:\n            self.render()\n```\n\nSo our implementation looks like this:\n\n```python\n# self.main_layout -> Row()\nwith self.main_layout:\n    left_textbox.render()\n    right_textbox.render()\n```\n\nWhat this means is by calling the components' render functions under the ```with``` syntax, we are actually simulating the default implementation.\n\nSo now let's consider two nested ```with```s to see ho", "heading1": "Implementation", "source_page_url": "https://gradio.app/guides/wrapping-layouts", "source_page_title": "Other Tutorials - Wrapping Layouts Guide"}, {"text": "at this means is by calling the components' render functions under the ```with``` syntax, we are actually simulating the default implementation.\n\nSo now let's consider two nested ```with```s to see how the outer one works. For this, let's expand our example with the ```Tab``` component:\n\n```python\nwith Tab():\n    with Row():\n        first_textbox = Textbox(value=\"first_textbox\")\n        second_textbox = Textbox(value=\"second_textbox\")\n```\n\nPay attention to the Row and Tab components this time. 
We have created the Textbox variables above and added them to Row with the ```with``` syntax. Now we need to add the Row component to the Tab component. You can see that the Row component is created with default parameters, so its render parameter is `True`, which is why the render function is going to be executed under the Tab component's ```with``` syntax.\n\nTo mimic this implementation, we need to call the ```render``` function of the ```main_layout``` variable after the ```with``` syntax of the ```main_layout``` variable.\n\nSo the implementation looks like this:\n\n```python\nwith tab_main_layout:\n    with row_main_layout:\n        first_textbox.render()\n        second_textbox.render()\n\n    row_main_layout.render()\n\ntab_main_layout.render()\n```\n\nThe default implementation and our implementation are the same, but we are using the render function ourselves. So it requires a little work.\n\nNow, let's take a look at the ```attach_event``` function.\n\n2. Attach Event Function\n\n The function is left unimplemented because it is specific to each class, so each class has to implement its own `attach_event` function.\n\n```python\n    # other LayoutBase implementations\n\n    def attach_event(self, block_dict: Dict[str, Block]) -> None:\n        raise NotImplementedError\n```\n\nCheck out the ```block_dict``` variable in the ```Application``` class's ```attach_event``` function.\n\nApplication Class\n\n1. Render Function\n\n```python\n    # other Application implementations\n\n    def _render(self):\n        ", "heading1": "Implementation", "source_page_url": "https://gradio.app/guides/wrapping-layouts", "source_page_title": "Other Tutorials - Wrapping Layouts Guide"}, {"text": "ct``` variable in the ```Application``` class's ```attach_event``` function.\n\nApplication Class\n\n1. Render Function\n\n```python\n    # other Application implementations\n\n    def _render(self):\n        with self.app:\n            for child in self.children:\n                child.render()\n\n        self.app.render()\n```\n\nFrom the explanation of the ```LayoutBase``` class's ```render``` function, we can understand the ```child.render``` part.\n\nNow let's look at the bottom part: why are we calling the ```app``` variable's ```render``` function? It's important to call this function because if we look at the implementation in the ```gradio.blocks.Blocks``` class, we can see that it is adding the components and event functions into the root component. To put it another way, it is creating and structuring the Gradio application.\n\n2. Attach Event Function\n\n Let's see how we can attach events to components:\n\n```python\n    # other Application implementations\n\n    def _attach_event(self):\n        block_dict: Dict[str, Block] = {}\n\n        for child in self.children:\n            block_dict.update(child.global_children_dict)\n\n        with self.app:\n            for child in self.children:\n                try:\n                    child.attach_event(block_dict=block_dict)\n                except NotImplementedError:\n                    print(f\"{child.name}'s attach_event is not implemented\")\n```\n\nYou can see why the ```global_children_dict``` is used in the ```LayoutBase``` class from the example code. With this, all the components in the application are gathered into one dictionary, so each component can access all the components by name.\n\nThe ```with``` syntax is used here again to attach events to components. If we look at the ```__exit__``` function in the ```gradio.blocks.Blocks``` class, we can see that it is calling the ```attach_load_events``` function which is used for setting event triggers to components. 
So we have to use the ```with``` syntax to trigger the ```_", "heading1": "Implementation", "source_page_url": "https://gradio.app/guides/wrapping-layouts", "source_page_title": "Other Tutorials - Wrapping Layouts Guide"}, {"text": "Blocks``` class, we can see that it is calling the ```attach_load_events``` function which is used for setting event triggers to components. So we have to use the ```with``` syntax to trigger the ```__exit__``` function.\n\nOf course, we can call ```attach_load_events``` without using the ```with``` syntax, but the function needs a ```Context.root_block```, and it is set in the ```__enter__``` function. So we used the ```with``` syntax here rather than calling the function ourselves.\n\n", "heading1": "Implementation", "source_page_url": "https://gradio.app/guides/wrapping-layouts", "source_page_title": "Other Tutorials - Wrapping Layouts Guide"}, {"text": "In this guide, we saw:\n\n- How we can wrap the layouts\n- How components are rendered\n- How we can structure our application with wrapped layout classes\n\nBecause the classes in this guide are for demonstration purposes, they may not be fully optimized or modular. But that would make the guide much longer!\n\nI hope this guide helps you gain another view of the layout classes and gives you an idea about how you can use them for your needs. See the full implementation of our example [here](https://huggingface.co/spaces/WoWoWoWololo/wrapping-layouts/blob/main/app.py).\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/wrapping-layouts", "source_page_title": "Other Tutorials - Wrapping Layouts Guide"}, {"text": "This guide explains how you can run background tasks from your gradio app.\nBackground tasks are operations that you'd like to perform outside the request-response\nlifecycle of your app either once or on a periodic schedule.\nExamples of background tasks include periodically synchronizing data to an external database or\nsending a report of model predictions via email.\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/running-background-tasks", "source_page_title": "Other Tutorials - Running Background Tasks Guide"}, {"text": "We will be creating a simple \"Google-forms-style\" application to gather feedback from users of the gradio library.\nWe will use a local sqlite database to store our data, but we will periodically synchronize the state of the database\nwith a [HuggingFace Dataset](https://huggingface.co/datasets) so that our user reviews are always backed up.\nThe synchronization will happen in a background task running every 60 seconds.\n\nAt the end of the demo, you'll have a fully working application like this one:\n\n \n\n", "heading1": "Overview", "source_page_url": "https://gradio.app/guides/running-background-tasks", "source_page_title": "Other Tutorials - Running Background Tasks Guide"}, {"text": "Our application will store the name of the reviewer, their rating of gradio on a scale of 1 to 5, as well as\nany comments they want to share about the library. Let's write some code that creates a database table to\nstore this data. 
We'll also write some functions to insert a review into that table and fetch the latest 10 reviews.\n\nWe're going to use the `sqlite3` library to connect to our sqlite database, but gradio will work with any library.\n\nThe code will look like this:\n\n```python\nimport sqlite3\n\nimport pandas as pd\n\nDB_FILE = \"./reviews.db\"\ndb = sqlite3.connect(DB_FILE)\n\n# Create table if it doesn't already exist\ntry:\n    db.execute(\"SELECT * FROM reviews\").fetchall()\n    db.close()\nexcept sqlite3.OperationalError:\n    db.execute(\n        '''\n        CREATE TABLE reviews (id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,\n                              created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL,\n                              name TEXT, review INTEGER, comments TEXT)\n        ''')\n    db.commit()\n    db.close()\n\ndef get_latest_reviews(db: sqlite3.Connection):\n    reviews = db.execute(\"SELECT * FROM reviews ORDER BY id DESC limit 10\").fetchall()\n    total_reviews = db.execute(\"Select COUNT(id) from reviews\").fetchone()[0]\n    reviews = pd.DataFrame(reviews, columns=[\"id\", \"date_created\", \"name\", \"review\", \"comments\"])\n    return reviews, total_reviews\n\n\ndef add_review(name: str, review: int, comments: str):\n    db = sqlite3.connect(DB_FILE)\n    cursor = db.cursor()\n    cursor.execute(\"INSERT INTO reviews(name, review, comments) VALUES(?,?,?)\", [name, review, comments])\n    db.commit()\n    reviews, total_reviews = get_latest_reviews(db)\n    db.close()\n    return reviews, total_reviews\n```\n\nLet's also write a function to load the latest reviews when the gradio application loads:\n\n```python\ndef load_data():\n    db = sqlite3.connect(DB_FILE)\n    reviews, total_reviews = get_latest_reviews(db)\n    db.close()\n    return reviews, total_reviews\n```\n\n", "heading1": "Step 1 - Write your database logic \ud83d\udcbe", "source_page_url": "https://gradio.app/guides/running-background-tasks", "source_page_title": "Other Tutorials - Running Background Tasks Guide"}, {"text": "Now that we have our database logic defined, we can use gradio to create a dynamic web page to ask our users for feedback!\n\n```python\nwith gr.Blocks() as demo:\n    with gr.Row():\n        with gr.Column():\n            name = gr.Textbox(label=\"Name\", placeholder=\"What is your name?\")\n            review = gr.Radio(label=\"How satisfied are you with using gradio?\", choices=[1, 2, 3, 4, 5])\n            comments = gr.Textbox(label=\"Comments\", lines=10, placeholder=\"Do you have any feedback on gradio?\")\n            submit = gr.Button(value=\"Submit Feedback\")\n        with gr.Column():\n            data = gr.Dataframe(label=\"Most recently created 10 rows\")\n            count = gr.Number(label=\"Total number of reviews\")\n    submit.click(add_review, [name, review, comments], [data, count])\n    demo.load(load_data, None, [data, count])\n```\n\n", "heading1": "Step 2 - Create a gradio app \u26a1", "source_page_url": "https://gradio.app/guides/running-background-tasks", "source_page_title": "Other Tutorials - Running Background Tasks Guide"}, {"text": "We could call `demo.launch()` after step 2 and have a fully functioning application. However,\nour data would be stored locally on our machine. 
If the sqlite file were accidentally deleted, we'd lose all of our reviews!\nLet's back up our data to a dataset on the HuggingFace hub.\n\nCreate a dataset [here](https://huggingface.co/datasets) before proceeding.\n\nNow at the **top** of our script, we'll use the [huggingface hub client library](https://huggingface.co/docs/huggingface_hub/index)\nto connect to our dataset and pull the latest backup.\n\n```python\nimport os\nimport shutil\n\nimport huggingface_hub\n\nTOKEN = os.environ.get('HUB_TOKEN')\nrepo = huggingface_hub.Repository(\n    local_dir=\"data\",\n    repo_type=\"dataset\",\n    clone_from=\"\",\n    use_auth_token=TOKEN\n)\nrepo.git_pull()\n\nshutil.copyfile(\"./data/reviews.db\", DB_FILE)\n```\n\nNote that you'll have to get an access token from the \"Settings\" tab of your Hugging Face account for the above code to work.\nIn the script, the token is securely accessed via an environment variable.\n\n![access_token](https://github.com/gradio-app/gradio/blob/main/guides/assets/access_token.png?raw=true)\n\nNow we will create a background task to sync our local database to the dataset hub every 60 seconds.\nWe will use the [Advanced Python Scheduler (APScheduler)](https://apscheduler.readthedocs.io/en/3.x/) to handle the scheduling.\nHowever, this is not the only task scheduling library available. Feel free to use whatever you are comfortable with.\n\nThe function to back up our data will look like this:\n\n```python\nfrom apscheduler.schedulers.background import BackgroundScheduler\n\ndef backup_db():\n    shutil.copyfile(DB_FILE, \"./data/reviews.db\")\n    db = sqlite3.connect(DB_FILE)\n    reviews = db.execute(\"SELECT * FROM reviews\").fetchall()\n    pd.DataFrame(reviews).to_csv(\"./data/reviews.csv\", index=False)\n    print(\"updating db\")\n    repo.push_to_hub(blocking=False, commit_message=f\"Updating data at {datetime.datetime.now()}\")\n\n\nscheduler = BackgroundScheduler()\nscheduler.add_job(func=backup_db, trigge", "heading1": "Step 3 - Synchronize with HuggingFace Datasets \ud83e\udd17", "source_page_url": "https://gradio.app/guides/running-background-tasks", "source_page_title": "Other Tutorials - Running Background Tasks Guide"}, {"text": "    print(\"updating db\")\n    repo.push_to_hub(blocking=False, commit_message=f\"Updating data at {datetime.datetime.now()}\")\n\n\nscheduler = BackgroundScheduler()\nscheduler.add_job(func=backup_db, trigger=\"interval\", seconds=60)\nscheduler.start()\n```\n\n", "heading1": "Step 3 - Synchronize with HuggingFace Datasets \ud83e\udd17", "source_page_url": "https://gradio.app/guides/running-background-tasks", "source_page_title": "Other Tutorials - Running Background Tasks Guide"}, {"text": "You can use the HuggingFace [Spaces](https://huggingface.co/spaces) platform to deploy this application for free \u2728\n\nIf you haven't used Spaces before, follow the previous guide [here](/using_hugging_face_integrations).\nYou will have to use the `HUB_TOKEN` environment variable as a secret in your Space settings.\n\n", "heading1": "Step 4 (Bonus) - Deployment to HuggingFace Spaces", "source_page_url": "https://gradio.app/guides/running-background-tasks", "source_page_title": "Other Tutorials - Running Background Tasks Guide"}, {"text": "Congratulations! 
You know how to run background tasks from your gradio app on a schedule \u23f2\ufe0f.\n\nCheck out the application running on Spaces [here](https://huggingface.co/spaces/freddyaboulton/gradio-google-forms).\nThe complete code is [here](https://huggingface.co/spaces/freddyaboulton/gradio-google-forms/blob/main/app.py).\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/running-background-tasks", "source_page_title": "Other Tutorials - Running Background Tasks Guide"}, {"text": "When you are building a Gradio demo, particularly out of Blocks, you may find it cumbersome to keep re-running your code to test your changes.\n\nTo make it faster and more convenient to write your code, we've made it easier to \"reload\" your Gradio apps instantly when you are developing in a **Python IDE** (like VS Code, Sublime Text, PyCharm, or so on) or generally running your Python code from the terminal. We've also developed an analogous \"magic command\" that allows you to re-run cells faster if you use **Jupyter Notebooks** (or any similar environment like Colab).\n\nThis short Guide will cover both of these methods, so no matter how you write Python, you'll leave knowing how to build Gradio apps faster.\n\n", "heading1": "Why Hot Reloading?", "source_page_url": "https://gradio.app/guides/developing-faster-with-reload-mode", "source_page_title": "Other Tutorials - Developing Faster With Reload Mode Guide"}, {"text": "If you are building Gradio Blocks using a Python IDE, your file of code (let's name it `run.py`) might look something like this:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    gr.Markdown(\"Greetings from Gradio!\")\n    inp = gr.Textbox(placeholder=\"What is your name?\")\n    out = gr.Textbox()\n\n    inp.change(fn=lambda x: f\"Welcome, {x}!\",\n               inputs=inp,\n               outputs=out)\n\nif __name__ == \"__main__\":\n    demo.launch()\n```\n\nThe problem is that anytime you want to make a change to your layout, events, or components, you have to close and rerun your app by writing `python run.py`.\n\nInstead of doing this, you can run your code in **reload mode** by changing 1 word: `python` to `gradio`:\n\nIn the terminal, run `gradio run.py`. That's it!\n\nNow, after you run this command, you'll see something like this:\n\n```bash\nWatching: '/Users/freddy/sources/gradio/gradio', '/Users/freddy/sources/gradio/demo/'\n\nRunning on local URL: http://127.0.0.1:7860\n```\n\nThe important part here is the line that says `Watching...` What's happening here is that Gradio will be observing the directory where `run.py` file lives, and if the file changes, it will automatically rerun the file for you. So you can focus on writing your code, and your Gradio demo will refresh automatically \ud83e\udd73\n\nTip: the `gradio` command does not detect the parameters passed to the `launch()` method because the `launch()` method is never called in reload mode. For example, setting `auth`, or `show_error` in `launch()` will not be reflected in the app.\n\nThere is one important thing to keep in mind when using the reload mode: Gradio specifically looks for a Gradio Blocks/Interface demo called `demo` in your code. If you have named your demo something else, you will need to pass in the name of your demo as the 2nd parameter in your code. 
So if your `run.py` file looked like this:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as my_demo:\n    gr.Markdown(\"Greetings from Gradio!\")\n    inp = gr.Textbox(placeholder=\"What is your name?\")\n    out = gr.Textbox()\n\n    inp.change(fn=lambda x: f\"Welcome, {x}!\",\n               inputs=inp,\n               outputs=out)\n\nif __name__ == \"__main__\":\n    my_demo.launch()\n```\n\nThen you would launch it in reload mode like this: `gradio run.py --demo-name=my_demo`.\n\n", "heading1": "Python IDE Reload \ud83d\udd25", "source_page_url": "https://gradio.app/guides/developing-faster-with-reload-mode", "source_page_title": "Other Tutorials - Developing Faster With Reload Mode Guide"}, {"text": "By default, Gradio uses UTF-8 encoding for scripts. **For reload mode**, if you are using an encoding other than UTF-8 (such as cp1252), make sure you do the following:\n\n1. Add an encoding declaration to your Python script, for example: `# -*- coding: cp1252 -*-`\n2. Confirm that your code editor recognizes that encoding format.\n3. Run like this: `gradio run.py --encoding cp1252`\n\n\ud83d\udd25 If your application accepts command line arguments, you can pass them in as well. Here's an example:\n\n```python\nimport gradio as gr\nimport argparse\n\nparser = argparse.ArgumentParser()\nparser.add_argument(\"--name\", type=str, default=\"User\")\nargs, unknown = parser.parse_known_args()\n\nwith gr.Blocks() as demo:\n    gr.Markdown(f\"Greetings {args.name}!\")\n    inp = gr.Textbox()\n    out = gr.Textbox()\n\n    inp.change(fn=lambda x: x, inputs=inp, outputs=out)\n\nif __name__ == \"__main__\":\n    demo.launch()\n```\n\nWhich you could run like this: `gradio run.py --name Gretel`\n\nAs a small aside, this auto-reloading happens if you change your `run.py` source code or the Gradio source code. Meaning that this can be useful if you decide to [contribute to Gradio itself](https://github.com/gradio-app/gradio/blob/main/CONTRIBUTING.md) \u2705\n\n\n", "heading1": "Python IDE Reload \ud83d\udd25", "source_page_url": "https://gradio.app/guides/developing-faster-with-reload-mode", "source_page_title": "Other Tutorials - Developing Faster With Reload Mode Guide"}, {"text": "By default, reload mode will re-run your entire script for every change you make.\nBut there are some cases where this is not desirable.\nFor example, loading a machine learning model should probably only happen once to save time. There are also some Python libraries that use C or Rust extensions that throw errors when they are reloaded, like `numpy` and `tiktoken`.\n\nIn these situations, you can place code that you do not want to be re-run inside an `if gr.NO_RELOAD:` codeblock. Here's an example of how you can use it to only load a transformers model once during the development process.\n\nTip: The value of `gr.NO_RELOAD` is `True`. So you don't have to change your script when you are done developing and want to run it in production. 
Simply run the file with `python` instead of `gradio`.\n\n```python\nimport gradio as gr\n\nif gr.NO_RELOAD:\n    from transformers import pipeline\n    pipe = pipeline(\"text-classification\", model=\"cardiffnlp/twitter-roberta-base-sentiment-latest\")\n\ndemo = gr.Interface(lambda s: {d[\"label\"]: d[\"score\"] for d in pipe(s)}, gr.Textbox(), gr.Label())\n\nif __name__ == \"__main__\":\n    demo.launch()\n```\n\n", "heading1": "Controlling the Reload \ud83c\udf9b\ufe0f", "source_page_url": "https://gradio.app/guides/developing-faster-with-reload-mode", "source_page_title": "Other Tutorials - Developing Faster With Reload Mode Guide"}, {"text": "You can also enable Gradio's **Vibe Mode**, which provides an in-browser chat that can be used to write or edit your Gradio app using natural language. To enable this, simply use the `--vibe` flag with Gradio, e.g. `gradio --vibe app.py`.\n\nVibe Mode lets you describe commands using natural language and have an LLM write or edit the code in your Gradio app. The LLM is powered by Hugging Face's [Inference Providers](https://huggingface.co/docs/inference-providers/en/index), so you must be logged into Hugging Face locally to use this.\n\nNote: When Vibe Mode is enabled, anyone who can access the Gradio endpoint can modify files and run arbitrary code on the host machine. Use only for local development.\n\n", "heading1": "Vibe Mode", "source_page_url": "https://gradio.app/guides/developing-faster-with-reload-mode", "source_page_title": "Other Tutorials - Developing Faster With Reload Mode Guide"}, {"text": "What if you use Jupyter Notebooks (or Colab notebooks, etc.) to develop code? We've got something for you too!\n\nWe've developed a **magic command** that will create and run a Blocks demo for you. To use this, load the gradio extension at the top of your notebook:\n\n`%load_ext gradio`\n\nThen, in the cell in which you are developing your Gradio demo, simply write the magic command **`%%blocks`** at the top, and then write the layout and components like you would normally:\n\n```py\n%%blocks\n\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    gr.Markdown(\"Greetings from Gradio!\")\n    inp = gr.Textbox()\n    out = gr.Textbox()\n\n    inp.change(fn=lambda x: x, inputs=inp, outputs=out)\n```\n\nNotice that:\n\n- You do not need to launch your demo \u2014 Gradio does that for you automatically!\n\n- Every time you rerun the cell, Gradio will re-render your app on the same port and using the same underlying web server. This means you'll see your changes _much, much faster_ than if you were rerunning the cell normally.\n\nHere's what it looks like in a Jupyter notebook:\n\n![](https://gradio-builds.s3.amazonaws.com/demo-files/jupyter_reload.gif)\n\n\ud83e\ude84 This works in Colab notebooks too! [Here's a Colab notebook](https://colab.research.google.com/drive/1zAuWoiTIb3O2oitbtVb2_ekv1K6ggtC1?usp=sharing) where you can see the Blocks magic in action. Try making some changes and re-running the cell with the Gradio code!\n\nTip: You may have to use `%%blocks --share` in Colab to get the demo to appear in the cell.\n\nThe Notebook Magic is now the author's preferred way of building Gradio demos. 
Regardless of how you write Python code, we hope either of these methods will give you a much better development experience using Gradio.\n\n---\n\n", "heading1": "Jupyter Notebook Magic \ud83d\udd2e", "source_page_url": "https://gradio.app/guides/developing-faster-with-reload-mode", "source_page_title": "Other Tutorials - Developing Faster With Reload Mode Guide"}, {"text": "Now that you know how to develop quickly using Gradio, start building your own!\n\nIf you are looking for inspiration, try exploring demos other people have built with Gradio, or [browse public Hugging Face Spaces](http://hf.space/) \ud83e\udd17\n", "heading1": "Next Steps", "source_page_url": "https://gradio.app/guides/developing-faster-with-reload-mode", "source_page_title": "Other Tutorials - Developing Faster With Reload Mode Guide"}, {"text": "3D models are becoming more popular in machine learning and make for some of the most fun demos to experiment with. Using `gradio`, you can easily build a demo of your 3D image model and share it with anyone. The Gradio 3D Model component accepts 3 file types: _.obj_, _.glb_, and _.gltf_.\n\nThis guide will show you how to build a demo for your 3D image model in a few lines of code, like the one below. Play around with the 3D object by clicking, dragging, and zooming:\n\n \n\nPrerequisites\n\nMake sure you have the `gradio` Python package already [installed](https://gradio.app/guides/quickstart).\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/how-to-use-3D-model-component", "source_page_title": "Other Tutorials - How To Use 3D Model Component Guide"}, {"text": "Let's take a look at how to create the minimal interface above. The prediction function in this case will just return the original 3D model mesh, but you can change this function to run inference on your machine learning model. We'll take a look at more complex examples below.\n\n```python\nimport gradio as gr\nimport os\n\n\ndef load_mesh(mesh_file_name):\n    return mesh_file_name\n\n\ndemo = gr.Interface(\n    fn=load_mesh,\n    inputs=gr.Model3D(),\n    outputs=gr.Model3D(\n        clear_color=[0.0, 0.0, 0.0, 0.0], label=\"3D Model\"),\n    examples=[\n        [os.path.join(os.path.dirname(__file__), \"files/Bunny.obj\")],\n        [os.path.join(os.path.dirname(__file__), \"files/Duck.glb\")],\n        [os.path.join(os.path.dirname(__file__), \"files/Fox.gltf\")],\n        [os.path.join(os.path.dirname(__file__), \"files/face.obj\")],\n    ],\n)\n\nif __name__ == \"__main__\":\n    demo.launch()\n```\n\nLet's break down the code above:\n\n`load_mesh`: This is our 'prediction' function and, for simplicity, it takes in the 3D model mesh and returns it.\n\nCreating the Interface:\n\n- `fn`: the prediction function that is used when the user clicks submit. In our case this is the `load_mesh` function.\n- `inputs`: creates a Model3D input component. The input expects an uploaded file as a {str} filepath.\n- `outputs`: creates a Model3D output component. The output component also expects a file as a {str} filepath.\n - `clear_color`: this is the background color of the 3D model canvas. Expects RGBa values.\n - `label`: the label that appears on the top left of the component.\n- `examples`: list of 3D model files. 
The 3D model component can accept _.obj_, _.glb_, and _.gltf_ file types.\n- `cache_examples`: saves the predicted output for the examples, to save time on inference.\n\n", "heading1": "Taking a Look at the Code", "source_page_url": "https://gradio.app/guides/how-to-use-3D-model-component", "source_page_title": "Other Tutorials - How To Use 3D Model Component Guide"}, {"text": "Below is a demo that uses the DPT model to predict the depth of an image and then uses a 3D point cloud to create a 3D object. Take a look at the [app.py](https://huggingface.co/spaces/gradio/dpt-depth-estimation-3d-obj/blob/main/app.py) file for a peek into the code and the model prediction function.\n \n\n---\n\nAnd you're done! That's all the code you need to build an interface for your Model3D model. Here are some references that you may find useful:\n\n- Gradio's [\"Getting Started\" guide](https://gradio.app/getting_started/)\n- The first [3D Model Demo](https://huggingface.co/spaces/gradio/Model3D) and [complete code](https://huggingface.co/spaces/gradio/Model3D/tree/main) (on Hugging Face Spaces)\n", "heading1": "Exploring a more complex Model3D Demo:", "source_page_url": "https://gradio.app/guides/how-to-use-3D-model-component", "source_page_title": "Other Tutorials - How To Use 3D Model Component Guide"}, {"text": "Let\u2019s start with a simple example of integrating a C++ program into a Gradio app. Suppose we have the following C++ program that adds two numbers:\n\n```cpp\n// add.cpp\n#include <iostream>\n\nint main() {\n    double a, b;\n    std::cin >> a >> b;\n    std::cout << a + b << std::endl;\n    return 0;\n}\n```\n\nThis program reads two numbers from standard input, adds them, and outputs the result.\n\nWe can build a Gradio interface around this C++ program using Python's `subprocess` module. Here\u2019s the corresponding Python code:\n\n```python\nimport gradio as gr\nimport subprocess\n\ndef add_numbers(a, b):\n    process = subprocess.Popen(\n        ['./add'], \n        stdin=subprocess.PIPE, \n        stdout=subprocess.PIPE, \n        stderr=subprocess.PIPE\n    )\n    output, error = process.communicate(input=f\"{a} {b}\\n\".encode())\n    \n    if error:\n        return f\"Error: {error.decode()}\"\n    return float(output.decode().strip())\n\ndemo = gr.Interface(\n    fn=add_numbers, \n    inputs=[gr.Number(label=\"Number 1\"), gr.Number(label=\"Number 2\")], \n    outputs=gr.Textbox(label=\"Result\")\n)\n\ndemo.launch()\n```\n\nHere, `subprocess.Popen` is used to execute the compiled C++ program (`add`), pass the input values, and capture the output. You can compile the C++ program by running:\n\n```bash\ng++ -o add add.cpp\n```\n\nThis example shows how easy it is to call C++ from Python using `subprocess` and build a Gradio interface around it.\n\n", "heading1": "Using Gradio with C++", "source_page_url": "https://gradio.app/guides/using-gradio-in-other-programming-languages", "source_page_title": "Other Tutorials - Using Gradio In Other Programming Languages Guide"}, {"text": "Now, let\u2019s move to another example: calling a Rust program to apply a sepia filter to an image. 
The Rust code could look something like this:\n\n```rust\n// sepia.rs\nextern crate image;\n\nuse image::{GenericImageView, ImageBuffer, Rgba};\n\nfn sepia_filter(input: &str, output: &str) {\n    let img = image::open(input).unwrap();\n    let (width, height) = img.dimensions();\n    let mut img_buf = ImageBuffer::new(width, height);\n\n    for (x, y, pixel) in img.pixels() {\n        let (r, g, b, a) = (pixel[0] as f32, pixel[1] as f32, pixel[2] as f32, pixel[3]);\n        let tr = (0.393 * r + 0.769 * g + 0.189 * b).min(255.0);\n        let tg = (0.349 * r + 0.686 * g + 0.168 * b).min(255.0);\n        let tb = (0.272 * r + 0.534 * g + 0.131 * b).min(255.0);\n        img_buf.put_pixel(x, y, Rgba([tr as u8, tg as u8, tb as u8, a]));\n    }\n\n    img_buf.save(output).unwrap();\n}\n\nfn main() {\n    let args: Vec<String> = std::env::args().collect();\n    if args.len() != 3 {\n        eprintln!(\"Usage: sepia <input> <output>\");\n        return;\n    }\n    sepia_filter(&args[1], &args[2]);\n}\n```\n\nThis Rust program applies a sepia filter to an image. It takes two command-line arguments: the input image path and the output image path. You can compile this program using:\n\n```bash\ncargo build --release\n```\n\nNow, we can call this Rust program from Python and use Gradio to build the interface:\n\n```python\nimport gradio as gr\nimport subprocess\n\ndef apply_sepia(input_path):\n    output_path = \"output.png\"\n    \n    process = subprocess.Popen(\n        ['./target/release/sepia', input_path, output_path], \n        stdout=subprocess.PIPE, \n        stderr=subprocess.PIPE\n    )\n    process.wait()\n    \n    return output_path\n\ndemo = gr.Interface(\n    fn=apply_sepia, \n    inputs=gr.Image(type=\"filepath\", label=\"Input Image\"), \n    outputs=gr.Image(label=\"Sepia Image\")\n)\n\ndemo.launch()\n```\n\nHere, when a user uploads an image and clicks submit, Gradio calls the Rust binary (`sepia`) to process the image, and the sepia-filtered output is returned to the user.\n\nThis setup showcases how you can integrate performance-critical or specialized code written in Rust into a Gradio interface.\n\n", "heading1": "Using Gradio with Rust", "source_page_url": "https://gradio.app/guides/using-gradio-in-other-programming-languages", "source_page_title": "Other Tutorials - Using Gradio In Other Programming Languages Guide"}, {"text": "Integrating Gradio with R is particularly straightforward thanks to the `reticulate` package, which allows you to run Python code directly in R. Let\u2019s walk through an example of using Gradio in R. \n\n**Installation**\n\nFirst, you need to install the `reticulate` package in R:\n\n```r\ninstall.packages(\"reticulate\")\n```\n\n\nOnce installed, you can use the package to run Gradio directly from within an R script.\n\n\n```r\nlibrary(reticulate)\n\npy_install(\"gradio\", pip = TRUE)\n\ngr <- import(\"gradio\") # import gradio as gr\n```\n\n**Building a Gradio Application**\n\nWith gradio installed and imported, we now have access to gradio's app building methods. 
Let's build a simple app for an R function that returns a greeting:\n\n```r\ngreeting <- \\(name) paste(\"Hello\", name)\n\napp <- gr$Interface(\n  fn = greeting,\n  inputs = gr$Text(label = \"Name\"),\n  outputs = gr$Text(label = \"Greeting\"),\n  title = \"Hello! &#128515; &#128075;\"\n)\n\napp$launch(server_name = \"localhost\", \n           server_port = as.integer(3000))\n```\n\nCredit to [@IfeanyiIdiaye](https://github.com/Ifeanyi55) for contributing this section. You can see more examples [here](https://github.com/Ifeanyi55/Gradio-in-R/tree/main/Code), including using Gradio Blocks to build a machine learning application in R.\n", "heading1": "Using Gradio with R (via `reticulate`)", "source_page_url": "https://gradio.app/guides/using-gradio-in-other-programming-languages", "source_page_title": "Other Tutorials - Using Gradio In Other Programming Languages Guide"}, {"text": "To use Gradio with BigQuery, you will need to obtain your BigQuery credentials and use them with the [BigQuery Python client](https://pypi.org/project/google-cloud-bigquery/). If you already have BigQuery credentials (as a `.json` file), you can skip this section. If not, you can do this for free in just a couple of minutes.\n\n1. First, log in to your Google Cloud account and go to the Google Cloud Console (https://console.cloud.google.com/)\n\n2. In the Cloud Console, click on the hamburger menu in the top-left corner and select \"APIs & Services\" from the menu. If you do not have an existing project, you will need to create one.\n\n3. Then, click the \"+ Enabled APIs & services\" button, which allows you to enable specific services for your project. Search for \"BigQuery API\", click on it, and click the \"Enable\" button. If you see the \"Manage\" button, then the BigQuery API is already enabled, and you're all set.\n\n4. In the APIs & Services menu, click on the \"Credentials\" tab and then click on the \"Create credentials\" button.\n\n5. In the \"Create credentials\" dialog, select \"Service account key\" as the type of credentials to create, and give it a name. Also grant the service account permissions by giving it a role such as \"BigQuery User\", which will allow you to run queries.\n\n6. After selecting the service account, select the \"JSON\" key type and then click on the \"Create\" button. This will download the JSON key file containing your credentials to your computer. It will look something like this:\n\n```json\n{\n\t\"type\": \"service_account\",\n\t\"project_id\": \"your project\",\n\t\"private_key_id\": \"your private key id\",\n\t\"private_key\": \"private key\",\n\t\"client_email\": \"email\",\n\t\"client_id\": \"client id\",\n\t\"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\",\n\t\"token_uri\": \"https://accounts.google.com/o/oauth2/token\",\n\t\"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\",\n\t\"client_x509_cert_url\": \"https://www.googleapis.com/robot/v1/metadata/x509/email_id\"\n}\n```\n\n", "heading1": "Setting up your BigQuery Credentials", "source_page_url": "https://gradio.app/guides/creating-a-dashboard-from-bigquery-data", "source_page_title": "Other Tutorials - Creating A Dashboard From Bigquery Data Guide"}, {"text": "Once you have the credentials, you will need to use the BigQuery Python client to authenticate using your credentials. 
To do this, you will need to install the BigQuery Python client by running the following command in the terminal:\n\n```bash\npip install google-cloud-bigquery[pandas]\n```\n\nYou'll notice that we've installed the pandas add-on, which will be helpful for processing the BigQuery dataset as a pandas dataframe. Once the client is installed, you can authenticate using your credentials by running the following code:\n\n```py\nfrom google.cloud import bigquery\n\nclient = bigquery.Client.from_service_account_json(\"path/to/key.json\")\n```\n\nWith your credentials authenticated, you can now use the BigQuery Python client to interact with your BigQuery datasets.\n\nHere is an example of a function which queries the `covid19_nyt.us_counties` dataset in BigQuery to show the top 20 counties with the most confirmed cases as of the current day:\n\n```py\nimport numpy as np\n\nQUERY = (\n    'SELECT * FROM `bigquery-public-data.covid19_nyt.us_counties` '\n    'ORDER BY date DESC,confirmed_cases DESC '\n    'LIMIT 20')\n\ndef run_query():\n    query_job = client.query(QUERY)\n    query_result = query_job.result()\n    df = query_result.to_dataframe()\n    # Select a subset of columns\n    df = df[[\"confirmed_cases\", \"deaths\", \"county\", \"state_name\"]]\n    # Convert numeric columns to standard numpy types\n    df = df.astype({\"deaths\": np.int64, \"confirmed_cases\": np.int64})\n    return df\n```\n\n", "heading1": "Using the BigQuery Client", "source_page_url": "https://gradio.app/guides/creating-a-dashboard-from-bigquery-data", "source_page_title": "Other Tutorials - Creating A Dashboard From Bigquery Data Guide"}, {"text": "Once you have a function to query the data, you can use the `gr.DataFrame` component from the Gradio library to display the results in a tabular format. This is a useful way to inspect the data and make sure that it has been queried correctly.\n\nHere is an example of how to use the `gr.DataFrame` component to display the results. By passing in the `run_query` function to `gr.DataFrame`, we instruct Gradio to run the function as soon as the page loads and show the results. In addition, you also pass in the keyword `every` to tell the dashboard to refresh every hour (60\*60 seconds).\n\n```py\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    gr.DataFrame(run_query, every=gr.Timer(60*60))\n\ndemo.launch()\n```\n\nPerhaps you'd like to add a visualization to the dashboard. You can use the `gr.ScatterPlot()` component to visualize the data in a scatter plot. This allows you to see the relationship between different variables such as case count and case deaths in the dataset and can be useful for exploring the data and gaining insights. Again, we can do this in real-time by passing in the `every` parameter.\n\nHere is a complete example showing how to use `gr.ScatterPlot` to visualize the data in addition to displaying it with `gr.DataFrame`:\n\n```py\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    gr.Markdown(\"\ud83d\udc89 Covid Dashboard (Updated Hourly)\")\n    with gr.Row():\n        gr.DataFrame(run_query, every=gr.Timer(60*60))\n        gr.ScatterPlot(run_query, every=gr.Timer(60*60), x=\"confirmed_cases\",\n                       y=\"deaths\", tooltip=\"county\", width=500, height=500)\n\ndemo.queue().launch()  # Run the demo with queuing enabled\n```\n", "heading1": "Building the Real-Time Dashboard", "source_page_url": "https://gradio.app/guides/creating-a-dashboard-from-bigquery-data", "source_page_title": "Other Tutorials - Creating A Dashboard From Bigquery Data Guide"}, {"text": "First of all, we need some data to visualize. 
Following this [excellent guide](https://supabase.com/blog/loading-data-supabase-python), we'll create fake commerce data and put it in Supabase.\n\n1\\. Start by creating a new project in Supabase. Once you're logged in, click the \"New Project\" button.\n\n2\\. Give your project a name and database password. You can also choose a pricing plan (for our purposes, the Free Tier is sufficient!)\n\n3\\. You'll be presented with your API keys while the database spins up (this can take up to 2 minutes).\n\n4\\. Click on \"Table Editor\" (the table icon) in the left pane to create a new table. We'll create a single table called `Product`, with the following schema:\n\n
| Column | Type |\n| --- | --- |\n| `product_id` | `int8` |\n| `inventory_count` | `int8` |\n| `price` | `float8` |\n| `product_name` | `varchar` |
\n\n5\\. Click Save to save the table schema.\n\nOur table is now ready!\n\n", "heading1": "Create a table in Supabase", "source_page_url": "https://gradio.app/guides/creating-a-dashboard-from-supabase-data", "source_page_title": "Other Tutorials - Creating A Dashboard From Supabase Data Guide"}, {"text": "The next step is to write data to a Supabase dataset. We will use the Supabase Python library to do this.\n\n6\\. Install `supabase` by running the following command in your terminal:\n\n```bash\npip install supabase\n```\n\n7\\. Get your project URL and API key. Click the Settings (gear icon) on the left pane and click 'API'. The URL is listed in the Project URL box, while the API key is listed in Project API keys (with the tags `service_role`, `secret`)\n\n8\\. Now, run the following Python script to write some fake data to the table (note that you have to substitute the actual values of `SUPABASE_URL` and `SUPABASE_SECRET_KEY` from step 7):\n\n```python\nimport random\n\nimport supabase\n\n# Initialize the Supabase client\nclient = supabase.create_client('SUPABASE_URL', 'SUPABASE_SECRET_KEY')\n\n# Define the data to write\nmain_list = []\nfor i in range(10):\n    value = {'product_id': i,\n             'product_name': f\"Item {i}\",\n             'inventory_count': random.randint(1, 100),\n             'price': random.random()*100\n            }\n    main_list.append(value)\n\n# Write the data to the table\ndata = client.table('Product').insert(main_list).execute()\n```\n\nReturn to your Supabase dashboard and refresh the page, and you should now see 10 rows populated in the `Product` table!\n\n", "heading1": "Write data to Supabase", "source_page_url": "https://gradio.app/guides/creating-a-dashboard-from-supabase-data", "source_page_title": "Other Tutorials - Creating A Dashboard From Supabase Data Guide"}, {"text": "Finally, we will read the data from the Supabase dataset using the same `supabase` Python library and create a realtime dashboard using `gradio`.\n\nNote: We repeat certain steps in this section (like creating the Supabase client) in case you did not go through the previous sections. As described in Step 7, you will need the project URL and API Key for your database.\n\n9\\. Write a function that loads the data from the `Product` table and returns it as a pandas Dataframe:\n\n```python\nimport supabase\nimport pandas as pd\n\nclient = supabase.create_client('SUPABASE_URL', 'SUPABASE_SECRET_KEY')\n\ndef read_data():\n    response = client.table('Product').select(\"*\").execute()\n    df = pd.DataFrame(response.data)\n    return df\n```\n\n10\\. Create a small Gradio Dashboard with two bar plots that plot the prices and inventories of all of the items every minute and update in real-time:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as dashboard:\n    with gr.Row():\n        gr.BarPlot(read_data, x=\"product_id\", y=\"price\", title=\"Prices\", every=gr.Timer(60))\n        gr.BarPlot(read_data, x=\"product_id\", y=\"inventory_count\", title=\"Inventory\", every=gr.Timer(60))\n\ndashboard.queue().launch()\n```\n\nNotice that by passing in a function to `gr.BarPlot()`, we have the BarPlot query the database as soon as the web app loads (and then again every 60 seconds because of the `every` parameter). Your final dashboard should look something like this:\n\n\n\n", "heading1": "Visualize the Data in a Real-Time Gradio Dashboard", "source_page_url": "https://gradio.app/guides/creating-a-dashboard-from-supabase-data", "source_page_title": "Other Tutorials - Creating A Dashboard From Supabase Data Guide"}, {"text": "That's it! 
In this tutorial, you learned how to write data to a Supabase dataset, and then read that data and plot the results as bar plots. If you update the data in the Supabase database, you'll notice that the Gradio dashboard will update within a minute.\n\nTry adding more plots and visualizations to this example (or using a different dataset) to build a more complex dashboard!\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/creating-a-dashboard-from-supabase-data", "source_page_title": "Other Tutorials - Creating A Dashboard From Supabase Data Guide"}, {"text": "A virtual environment in Python is a self-contained directory that holds a Python installation for a particular version of Python, along with a number of additional packages. This environment is isolated from the main Python installation and other virtual environments. Each environment can have its own independent set of installed Python packages, which allows you to maintain different versions of libraries for different projects without conflicts.\n\n\nUsing virtual environments ensures that you can work on multiple Python projects on the same machine without any conflicts. This is particularly useful when different projects require different versions of the same library. It also simplifies dependency management and enhances reproducibility, as you can easily share the requirements of your project with others.\n\n\n", "heading1": "Virtual Environments", "source_page_url": "https://gradio.app/guides/installing-gradio-in-a-virtual-environment", "source_page_title": "Other Tutorials - Installing Gradio In A Virtual Environment Guide"}, {"text": "To install Gradio on a Windows system in a virtual environment, follow these steps:\n\n1. **Install Python**: Ensure you have Python 3.10 or higher installed. You can download it from [python.org](https://www.python.org/). You can verify the installation by running `python --version` or `python3 --version` in Command Prompt.\n\n\n2. **Create a Virtual Environment**:\n Open Command Prompt and navigate to your project directory. Then create a virtual environment using the following command:\n\n ```bash\n python -m venv gradio-env\n ```\n\n This command creates a new directory `gradio-env` in your project folder, containing a fresh Python installation.\n\n3. **Activate the Virtual Environment**:\n To activate the virtual environment, run:\n\n ```bash\n .\\gradio-env\\Scripts\\activate\n ```\n\n Your command prompt should now indicate that you are working inside `gradio-env`. Note: you can choose a different name than `gradio-env` for your virtual environment in this step.\n\n\n4. **Install Gradio**:\n Now, you can install Gradio using pip:\n\n ```bash\n pip install gradio\n ```\n\n5. **Verification**:\n To verify the installation, run `python` and then type:\n\n ```python\n import gradio as gr\n print(gr.__version__)\n ```\n\n This will display the installed version of Gradio.\n\n", "heading1": "Installing Gradio on Windows", "source_page_url": "https://gradio.app/guides/installing-gradio-in-a-virtual-environment", "source_page_title": "Other Tutorials - Installing Gradio In A Virtual Environment Guide"}, {"text": "The installation steps on MacOS and Linux are similar to Windows but with some differences in commands.\n\n1. **Install Python**:\n Python usually comes pre-installed on MacOS and most Linux distributions. 
You can verify the installation by running `python --version` in the terminal (note that depending on how Python is installed, you might have to use `python3` instead of `python` throughout these steps). \n \n Ensure you have Python 3.10 or higher installed. If you do not have it installed, you can download it from [python.org](https://www.python.org/). \n\n2. **Create a Virtual Environment**:\n Open Terminal and navigate to your project directory. Then create a virtual environment using:\n\n ```bash\n python -m venv gradio-env\n ```\n\n Note: you can choose a different name than `gradio-env` for your virtual environment in this step.\n\n3. **Activate the Virtual Environment**:\n To activate the virtual environment on MacOS/Linux, use:\n\n ```bash\n source gradio-env/bin/activate\n ```\n\n4. **Install Gradio**:\n With the virtual environment activated, install Gradio using pip:\n\n ```bash\n pip install gradio\n ```\n\n5. **Verification**:\n To verify the installation, run `python` and then type:\n\n ```python\n import gradio as gr\n print(gr.__version__)\n ```\n\n This will display the installed version of Gradio.\n\nBy following these steps, you can successfully install Gradio in a virtual environment on your operating system, ensuring a clean and managed workspace for your Python projects.", "heading1": "Installing Gradio on MacOS/Linux", "source_page_url": "https://gradio.app/guides/installing-gradio-in-a-virtual-environment", "source_page_title": "Other Tutorials - Installing Gradio In A Virtual Environment Guide"}, {"text": "Named-entity recognition (NER), also known as token classification or text tagging, is the task of taking a sentence and classifying every word (or \"token\") into different categories, such as names of people or names of locations, or different parts of speech.\n\nFor example, given the sentence:\n\n> Does Chicago have any Pakistani restaurants?\n\nA named-entity recognition algorithm may identify:\n\n- \"Chicago\" as a **location**\n- \"Pakistani\" as an **ethnicity**\n\nand so on.\n\nUsing `gradio` (specifically the `HighlightedText` component), you can easily build a web demo of your NER model and share that with the rest of your team.\n\nHere is an example of a demo that you'll be able to build:\n\n$demo_ner_pipeline\n\nThis tutorial will show how to take a pretrained NER model and deploy it with a Gradio interface. We will show two different ways to use the `HighlightedText` component -- depending on your NER model, either of these two ways may be easier to use!\n\nPrerequisites\n\nMake sure you have the `gradio` Python package already [installed](/getting_started). You will also need a pretrained named-entity recognition model. You can use your own; in this tutorial, we will use one from the `transformers` library.\n\nApproach 1: List of Entity Dictionaries\n\nMany named-entity recognition models output a list of dictionaries. Each dictionary consists of an _entity_, a \"start\" index, and an \"end\" index. 
This is, for example, how NER models in the `transformers` library operate:\n\n```py\nfrom transformers import pipeline\nner_pipeline = pipeline(\"ner\")\nner_pipeline(\"Does Chicago have any Pakistani restaurants\")\n```\n\nOutput:\n\n```bash\n[{'entity': 'I-LOC',\n  'score': 0.9988978,\n  'index': 2,\n  'word': 'Chicago',\n  'start': 5,\n  'end': 12},\n {'entity': 'I-MISC',\n  'score': 0.9958592,\n  'index': 5,\n  'word': 'Pakistani',\n  'start': 22,\n  'end': 31}]\n```\n\nIf you have such a model, it is very easy to hook it up to Gradio's `HighlightedText` component. All you need to do is pass in this **list of entities**, along with the **original text** to the model, together as a dictionary, with the keys being `\"entities\"` and `\"text\"` respectively.\n\nHere is a complete example:\n\n$code_ner_pipeline\n$demo_ner_pipeline\n\nApproach 2: List of Tuples\n\nAn alternative way to pass data into the `HighlightedText` component is a list of tuples. The first element of each tuple should be the word or words that are being classified into a particular entity. The second element should be the entity label (or `None` if they should be unlabeled). The `HighlightedText` component automatically strings together the words and labels to display the entities.\n\nIn some cases, this can be easier than the first approach. Here is a demo showing this approach using Spacy's parts-of-speech tagger:\n\n$code_text_analysis\n$demo_text_analysis\n\n---\n\nAnd you're done! That's all you need to know to build a web-based GUI for your NER model.\n\nFun tip: you can share your NER demo instantly with others simply by setting `share=True` in `launch()`.\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/named-entity-recognition", "source_page_title": "Other Tutorials - Named Entity Recognition Guide"}, {"text": "In this Guide, we'll walk you through:\n\n- An introduction to ONNX, the ONNX Model Zoo, Gradio, and Hugging Face Spaces\n- How to set up a Gradio demo for EfficientNet-Lite4\n- How to contribute your own Gradio demos for the ONNX organization on Hugging Face\n\nHere's an [example](https://onnx-efficientnet-lite4.hf.space/) of an ONNX model.\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face", "source_page_title": "Other Tutorials - Gradio And Onnx On Hugging Face Guide"}, {"text": "Open Neural Network Exchange ([ONNX](https://onnx.ai/)) is an open standard format for representing machine learning models. ONNX is supported by a community of partners who have implemented it in many frameworks and tools. For example, if you have trained a model in TensorFlow or PyTorch, you can convert it to ONNX easily, and from there run it on a variety of devices using an engine/compiler like ONNX Runtime.\n\nThe [ONNX Model Zoo](https://github.com/onnx/models) is a collection of pre-trained, state-of-the-art models in the ONNX format contributed by community members. Accompanying each model are Jupyter notebooks for model training and running inference with the trained model. 
The notebooks are written in Python and include links to the training dataset as well as references to the original paper that describes the model architecture.\n\n", "heading1": "What is the ONNX Model Zoo?", "source_page_url": "https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face", "source_page_title": "Other Tutorials - Gradio And Onnx On Hugging Face Guide"}, {"text": "Gradio\n\nGradio lets users demo their machine learning models as a web app, all in Python code. Gradio wraps a Python function into a user interface, and the demos can be launched inside Jupyter notebooks or Colab notebooks, as well as embedded in your own website and hosted on Hugging Face Spaces for free.\n\nGet started [here](https://gradio.app/getting_started)\n\nHugging Face Spaces\n\nHugging Face Spaces is a free hosting option for Gradio demos. Spaces comes with 3 SDK options: Gradio, Streamlit and Static HTML demos. Spaces can be public or private and the workflow is similar to GitHub repos. There are over 2,000 Spaces currently on Hugging Face. Learn more about Spaces [here](https://huggingface.co/spaces/launch).\n\nHugging Face Models\n\nThe Hugging Face Model Hub also supports ONNX models, and ONNX models can be filtered through the [ONNX tag](https://huggingface.co/models?library=onnx&sort=downloads)\n\n", "heading1": "What are Hugging Face Spaces & Gradio?", "source_page_url": "https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face", "source_page_title": "Other Tutorials - Gradio And Onnx On Hugging Face Guide"}, {"text": "There are a lot of Jupyter notebooks in the ONNX Model Zoo for users to test models. Previously, users needed to download the models themselves and run those notebooks locally for testing. With Hugging Face, the testing process can be much simpler and more user-friendly. Users can easily try a given ONNX Model Zoo model on Hugging Face Spaces and run a quick demo powered by Gradio with ONNX Runtime, all on the cloud without downloading anything locally. Note that there are various runtimes for ONNX, e.g., [ONNX Runtime](https://github.com/microsoft/onnxruntime), [MXNet](https://github.com/apache/incubator-mxnet).\n\n", "heading1": "How did Hugging Face help the ONNX Model Zoo?", "source_page_url": "https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face", "source_page_title": "Other Tutorials - Gradio And Onnx On Hugging Face Guide"}, {"text": "ONNX Runtime is a cross-platform inference and training machine-learning accelerator. It makes live Gradio demos with ONNX Model Zoo models on Hugging Face possible.\n\nONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. For more information please see the [official website](https://onnxruntime.ai/).\n\n", "heading1": "What is the role of ONNX Runtime?", "source_page_url": "https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face", "source_page_title": "Other Tutorials - Gradio And Onnx On Hugging Face Guide"}, {"text": "EfficientNet-Lite 4 is the largest and most accurate variant of the set of EfficientNet-Lite models. It is an integer-only quantized model that produces the highest accuracy of all of the EfficientNet models. 
It achieves 80.4% ImageNet top-1 accuracy, while still running in real-time (e.g. 30ms/image) on a Pixel 4 CPU. To learn more, read the [model card](https://github.com/onnx/models/tree/main/vision/classification/efficientnet-lite4).\n\nHere we walk through setting up an example demo for EfficientNet-Lite4 using Gradio.\n\nFirst we import our dependencies and download and load the efficientnet-lite4 model from the ONNX Model Zoo. Then we load the labels from the labels_map.txt file. We then set up our preprocessing functions, load the model for inference, and set up the inference function. Finally, the inference function is wrapped into a Gradio interface for a user to interact with. See the full code below.\n\n```python\nimport json\n\nimport cv2\nimport gradio as gr\nimport numpy as np\nimport onnxruntime as ort\nfrom onnx import hub\n\n# loads ONNX model from ONNX Model Zoo\nmodel = hub.load(\"efficientnet-lite4\")\n# loads the labels text file\nlabels = json.load(open(\"labels_map.txt\", \"r\"))\n\n# sets image file dimensions to 224x224 by resizing and cropping image from center\ndef pre_process_edgetpu(img, dims):\n    output_height, output_width, _ = dims\n    img = resize_with_aspectratio(img, output_height, output_width, inter_pol=cv2.INTER_LINEAR)\n    img = center_crop(img, output_height, output_width)\n    img = np.asarray(img, dtype='float32')\n    # converts jpg pixel value from [0 - 255] to float array [-1.0 - 1.0]\n    img -= [127.0, 127.0, 127.0]\n    img /= [128.0, 128.0, 128.0]\n    return img\n\n# resizes the image with a proportional scale\ndef resize_with_aspectratio(img, out_height, out_width, scale=87.5, inter_pol=cv2.INTER_LINEAR):\n    height, width, _ = img.shape\n    new_height = int(100. * out_height / scale)\n    new_width = int(100. * out_width / scale)\n    if height > width:\n        w = new_width\n        h = int(new_height * height / width)\n    else:\n        h = new_height\n        w = int(new_width * width / height)\n    img = cv2.resize(img, (w, h), interpolation=inter_pol)\n    return img\n\n# crops the image around the center based on given height and width\ndef center_crop(img, out_height, out_width):\n    height, width, _ = img.shape\n    left = int((width - out_width) / 2)\n    right = int((width + out_width) / 2)\n    top = int((height - out_height) / 2)\n    bottom = int((height + out_height) / 2)\n    img = img[top:bottom, left:right]\n    return img\n\n\nsess = ort.InferenceSession(model)\n\ndef inference(img):\n    img = cv2.imread(img)\n    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n\n    img = pre_process_edgetpu(img, (224, 224, 3))\n\n    img_batch = np.expand_dims(img, axis=0)\n\n    results = sess.run([\"Softmax:0\"], {\"images:0\": img_batch})[0]\n    result = reversed(results[0].argsort()[-5:])\n    resultdic = {}\n    for r in result:\n        resultdic[labels[str(r)]] = float(results[0][r])\n    return resultdic\n\ntitle = \"EfficientNet-Lite4\"\ndescription = \"EfficientNet-Lite 4 is the largest variant and most accurate of the set of EfficientNet-Lite models. It is an integer-only quantized model that produces the highest accuracy of all of the EfficientNet models. It achieves 80.4% ImageNet top-1 accuracy, while still running in real-time (e.g. 30ms/image) on a Pixel 4 CPU.\"\nexamples = [['catonnx.jpg']]\ngr.Interface(inference, gr.Image(type=\"filepath\"), \"label\", title=title, description=description, examples=examples).launch()\n```\n\n", "heading1": "Setting up a Gradio Demo for EfficientNet-Lite4", "source_page_url": "https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face", "source_page_title": "Other Tutorials - Gradio And Onnx On Hugging Face Guide"}, {"text": "- Add the model to the [ONNX Model Zoo](https://github.com/onnx/models/blob/main/.github/PULL_REQUEST_TEMPLATE.md)\n- Create an account on Hugging Face [here](https://huggingface.co/join).\n- To see which models are left to add to the ONNX organization, refer to the table in the [Models list](https://github.com/onnx/models#models)\n- Add a Gradio Demo under your username; see this [blog post](https://huggingface.co/blog/gradio-spaces) for setting up a Gradio Demo on Hugging Face.\n- Request to join the ONNX Organization [here](https://huggingface.co/onnx).\n- Once approved, transfer the model from your username to the ONNX organization\n- Add a badge for the model in the models table; see examples in the [Models list](https://github.com/onnx/models#models)\n", "heading1": "How to contribute Gradio demos on HF spaces using ONNX models", "source_page_url": "https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face", "source_page_title": "Other Tutorials - Gradio And Onnx On Hugging Face Guide"}, {"text": "Gradio features a built-in theming engine that lets you customize the look and feel of your app. You can choose from a variety of themes, or create your own. To do so, pass the `theme=` kwarg to the `Blocks` or `Interface` constructor. For example:\n\n```python\nwith gr.Blocks(theme=gr.themes.Soft()) as demo:\n    ...\n```\n\n
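The same `theme=` kwarg works with `gr.Interface` as well. As a minimal sketch (the one-line echo function here is purely illustrative):\n\n```python\nimport gradio as gr\n\n# Interface accepts the same theme= kwarg as Blocks\ndemo = gr.Interface(\n    fn=lambda name: f\"Hello {name}!\",\n    inputs=\"textbox\",\n    outputs=\"textbox\",\n    theme=gr.themes.Soft(),\n)\n\ndemo.launch()\n```\n\n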
\n\nGradio comes with a set of prebuilt themes which you can load from `gr.themes.*`. These are:\n\n\n* `gr.themes.Base()` - the `\"base\"` theme sets the primary color to blue but otherwise has minimal styling, making it particularly useful as a base for creating new, custom themes.\n* `gr.themes.Default()` - the `\"default\"` Gradio 5 theme, with a vibrant orange primary color and gray secondary color.\n* `gr.themes.Origin()` - the `\"origin\"` theme is most similar to Gradio 4 styling. Colors, especially in light mode, are more subdued than the Gradio 5 default theme.\n* `gr.themes.Citrus()` - the `\"citrus\"` theme uses a yellow primary color, highlights form elements that are in focus, and includes fun 3D effects when buttons are clicked.\n* `gr.themes.Monochrome()` - the `\"monochrome\"` theme uses a black primary and white secondary color, and uses serif-style fonts, giving the appearance of a black-and-white newspaper. \n* `gr.themes.Soft()` - the `\"soft\"` theme uses a purple primary color and white secondary color. It also increases the border radius around buttons and form elements and highlights labels.\n* `gr.themes.Glass()` - the `\"glass\"` theme has a blue primary color and a translucent gray secondary color. The theme also uses vertical gradients to create a glassy effect.\n* `gr.themes.Ocean()` - the `\"ocean\"` theme has a blue-green primary color and gray secondary color. The theme also uses horizontal gradients, especially for buttons and some form elements.\n\n\nEach of these themes sets values for hundreds of CSS variables. You can use prebuilt themes as a starting point for your own custom themes, or you can create your own themes from scratch. Let's take a look at each approach.\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/theming-guide", "source_page_title": "Other Tutorials - Theming Guide Guide"}, {"text": "The easiest way to build a theme is using the Theme Builder. To launch the Theme Builder locally, run the following code:\n\n```python\nimport gradio as gr\n\ngr.themes.builder()\n```\n\n$demo_theme_builder\n\nYou can use the Theme Builder running on Spaces above, though it runs much faster when you launch it locally via `gr.themes.builder()`.\n\nAs you edit the values in the Theme Builder, the app will preview updates in real time. You can download the code to generate the theme you've created so you can use it in any Gradio app.\n\nIn the rest of the guide, we will cover building themes programmatically.\n\n", "heading1": "Using the Theme Builder", "source_page_url": "https://gradio.app/guides/theming-guide", "source_page_title": "Other Tutorials - Theming Guide Guide"}, {"text": "Although each theme has hundreds of CSS variables, the values for most of these variables are drawn from 8 core variables which can be set through the constructor of each prebuilt theme. Modifying these 8 arguments allows you to quickly change the look and feel of your app.\n\nCore Colors\n\nThe first 3 constructor arguments set the colors of the theme and are `gradio.themes.Color` objects. 
Internally, these Color objects hold brightness values for the palette of a single hue, ranging over 50, 100, 200, ..., 800, 900, 950. Other CSS variables are derived from these 3 colors.\n\nThe 3 color constructor arguments are:\n\n- `primary_hue`: This is the color that draws attention in your theme. In the default theme, this is set to `gradio.themes.colors.orange`.\n- `secondary_hue`: This is the color that is used for secondary elements in your theme. In the default theme, this is set to `gradio.themes.colors.blue`.\n- `neutral_hue`: This is the color that is used for text and other neutral elements in your theme. In the default theme, this is set to `gradio.themes.colors.gray`.\n\nYou could modify these values using their string shortcuts, such as:\n\n```python\nwith gr.Blocks(theme=gr.themes.Default(primary_hue=\"red\", secondary_hue=\"pink\")) as demo:\n    ...\n```\n\nor you could use the `Color` objects directly, like this:\n\n```python\nwith gr.Blocks(theme=gr.themes.Default(primary_hue=gr.themes.colors.red, secondary_hue=gr.themes.colors.pink)) as demo:\n    ...\n```\n\n
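As an aside, you can also construct a `Color` palette of your own to pass in here. A minimal sketch (the \"mint\" name and hex values below are illustrative, not an official palette):\n\n```python\nimport gradio as gr\n\n# A hypothetical palette: one hex value per brightness step from 50 to 950\nmint = gr.themes.Color(\n    name=\"mint\",\n    c50=\"#eafff5\",\n    c100=\"#d2fbe8\",\n    c200=\"#a8f5d2\",\n    c300=\"#75e8b8\",\n    c400=\"#40d69a\",\n    c500=\"#1fbf82\",\n    c600=\"#149c6a\",\n    c700=\"#117a55\",\n    c800=\"#0f6045\",\n    c900=\"#0c4e39\",\n    c950=\"#073325\",\n)\n\nwith gr.Blocks(theme=gr.themes.Default(primary_hue=mint)) as demo:\n    ...\n```\n\n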
\n\nPredefined colors are:\n\n- `slate`\n- `gray`\n- `zinc`\n- `neutral`\n- `stone`\n- `red`\n- `orange`\n- `amber`\n- `yellow`\n- `lime`\n- `green`\n- `emerald`\n- `teal`\n- `cyan`\n- `sky`\n- `blue`\n- `indigo`\n- `violet`\n- `purple`\n- `fuchsia`\n- `pink`\n- `rose`\n\nYou could also create your own custom `Color` objects and pass them in, as in the sketch above.\n\nCore Sizing\n\nThe next 3 constructor arguments set the sizing of the theme and are `gradio.themes.Size` objects. Internally, these Size objects hold pixel size values that range from `xxs` to `xxl`. Other CSS variables are derived from these 3 sizes.\n\n- `spacing_size`: This sets the padding within and spacing between elements. In the default theme, this is set to `gradio.themes.sizes.spacing_md`.\n- `radius_size`: This sets the roundedness of corners of elements. In the default theme, this is set to `gradio.themes.sizes.radius_md`.\n- `text_size`: This sets the font size of text. In the default theme, this is set to `gradio.themes.sizes.text_md`.\n\nYou could modify these values using their string shortcuts, such as:\n\n```python\nwith gr.Blocks(theme=gr.themes.Default(spacing_size=\"sm\", radius_size=\"none\")) as demo:\n    ...\n```\n\nor you could use the `Size` objects directly, like this:\n\n```python\nwith gr.Blocks(theme=gr.themes.Default(spacing_size=gr.themes.sizes.spacing_sm, radius_size=gr.themes.sizes.radius_none)) as demo:\n    ...\n```\n\n
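Similarly, here is a minimal sketch of a custom `Size` object (the name and pixel values are illustrative; a `Size` holds one CSS size per step from `xxs` to `xxl`):\n\n```python\nimport gradio as gr\n\n# A hypothetical, chunkier radius scale\nchunky = gr.themes.Size(\n    name=\"radius_chunky\",\n    xxs=\"2px\",\n    xs=\"4px\",\n    sm=\"8px\",\n    md=\"12px\",\n    lg=\"16px\",\n    xl=\"24px\",\n    xxl=\"32px\",\n)\n\nwith gr.Blocks(theme=gr.themes.Default(radius_size=chunky)) as demo:\n    ...\n```\n\n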
\n\nThe predefined size objects are:\n\n- `radius_none`\n- `radius_sm`\n- `radius_md`\n- `radius_lg`\n- `spacing_sm`\n- `spacing_md`\n- `spacing_lg`\n- `text_sm`\n- `text_md`\n- `text_lg`\n\nYou could also create your own custom `Size` objects and pass them in, as in the sketch above.\n\nCore Fonts\n\nThe final 2 constructor arguments set the fonts of the theme. You can pass a list of fonts to each of these arguments to specify fallbacks. If you provide a string, it will be loaded as a system font. If you provide a `gradio.themes.GoogleFont`, the font will be loaded from Google Fonts.\n\n- `font`: This sets the primary font of the theme. In the default theme, this is set to `gradio.themes.GoogleFont(\"IBM Plex Sans\")`.\n- `font_mono`: This sets the monospace font of the theme. In the default theme, this is set to `gradio.themes.GoogleFont(\"IBM Plex Mono\")`.\n\nYou could modify these values as follows:\n\n```python\nwith gr.Blocks(theme=gr.themes.Default(font=[gr.themes.GoogleFont(\"Inconsolata\"), \"Arial\", \"sans-serif\"])) as demo:\n    ...\n```\n\n
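Putting the core constructor arguments together, a quick theme tweak might look like this (a sketch; the specific hue, size, and font choices are illustrative):\n\n```python\nimport gradio as gr\n\ntheme = gr.themes.Default(\n    # core colors\n    primary_hue=\"emerald\",\n    secondary_hue=\"blue\",\n    neutral_hue=\"slate\",\n    # core sizing\n    spacing_size=\"sm\",\n    radius_size=\"lg\",\n    text_size=\"md\",\n    # core fonts, each with fallbacks\n    font=[gr.themes.GoogleFont(\"Quicksand\"), \"ui-sans-serif\", \"sans-serif\"],\n    font_mono=[gr.themes.GoogleFont(\"IBM Plex Mono\"), \"ui-monospace\", \"monospace\"],\n)\n\nwith gr.Blocks(theme=theme) as demo:\n    ...\n```\n\n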
\n\n", "heading1": "Extending Themes via the Constructor", "source_page_url": "https://gradio.app/guides/theming-guide", "source_page_title": "Other Tutorials - Theming Guide Guide"}, {"text": "You can also modify the values of CSS variables after the theme has been loaded. To do so, use the `.set()` method of the theme object to get access to the CSS variables. For example:\n\n```python\ntheme = gr.themes.Default(primary_hue=\"blue\").set(\n loader_color=\"FF0000\",\n slider_color=\"FF0000\",\n)\n\nwith gr.Blocks(theme=theme) as demo:\n ...\n```\n\nIn the example above, we've set the `loader_color` and `slider_color` variables to `FF0000`, despite the overall `primary_color` using the blue color palette. You can set any CSS variable that is defined in the theme in this manner.\n\nYour IDE type hinting should help you navigate these variables. Since there are so many CSS variables, let's take a look at how these variables are named and organized.\n\nCSS Variable Naming Conventions\n\nCSS variable names can get quite long, like `button_primary_background_fill_hover_dark`! However they follow a common naming convention that makes it easy to understand what they do and to find the variable you're looking for. Separated by underscores, the variable name is made up of:\n\n1. The target element, such as `button`, `slider`, or `block`.\n2. The target element type or sub-element, such as `button_primary`, or `block_label`.\n3. The property, such as `button_primary_background_fill`, or `block_label_border_width`.\n4. Any relevant state, such as `button_primary_background_fill_hover`.\n5. If the value is different in dark mode, the suffix `_dark`. For example, `input_border_color_focus_dark`.\n\nOf course, many CSS variable names are shorter than this, such as `table_border_color`, or `input_shadow`.\n\nCSS Variable Organization\n\nThough there are hundreds of CSS variables, they do not all have to have individual values. They draw their values by referencing a set of core variables and referencing each other. This allows us to only have to modify a few variables to change the look and feel of the entire theme, while also getting finer control of individual elements that we may wan", "heading1": "Extending Themes via `.set()`", "source_page_url": "https://gradio.app/guides/theming-guide", "source_page_title": "Other Tutorials - Theming Guide Guide"}, {"text": "d referencing each other. This allows us to only have to modify a few variables to change the look and feel of the entire theme, while also getting finer control of individual elements that we may want to modify.\n\nReferencing Core Variables\n\nTo reference one of the core constructor variables, precede the variable name with an asterisk. To reference a core color, use the `*primary_`, `*secondary_`, or `*neutral_` prefix, followed by the brightness value. For example:\n\n```python\ntheme = gr.themes.Default(primary_hue=\"blue\").set(\n button_primary_background_fill=\"*primary_200\",\n button_primary_background_fill_hover=\"*primary_300\",\n)\n```\n\nIn the example above, we've set the `button_primary_background_fill` and `button_primary_background_fill_hover` variables to `*primary_200` and `*primary_300`. These variables will be set to the 200 and 300 brightness values of the blue primary color palette, respectively.\n\nSimilarly, to reference a core size, use the `*spacing_`, `*radius_`, or `*text_` prefix, followed by the size value. 
For example:\n\n```python\ntheme = gr.themes.Default(radius_size=\"md\").set(\n button_primary_border_radius=\"*radius_xl\",\n)\n```\n\nIn the example above, we've set the `button_primary_border_radius` variable to `*radius_xl`. This variable will be set to the `xl` setting of the medium radius size range.\n\nReferencing Other Variables\n\nVariables can also reference each other. For example, look at the example below:\n\n```python\ntheme = gr.themes.Default().set(\n button_primary_background_fill=\"FF0000\",\n button_primary_background_fill_hover=\"FF0000\",\n button_primary_border=\"FF0000\",\n)\n```\n\nHaving to set these values to a common color is a bit tedious. Instead, we can reference the `button_primary_background_fill` variable in the `button_primary_background_fill_hover` and `button_primary_border` variables, using a `*` prefix.\n\n```python\ntheme = gr.themes.Default().set(\n button_primary_background_fill=\"FF0000\",\n button_primary_back", "heading1": "Extending Themes via `.set()`", "source_page_url": "https://gradio.app/guides/theming-guide", "source_page_title": "Other Tutorials - Theming Guide Guide"}, {"text": "mary_background_fill_hover` and `button_primary_border` variables, using a `*` prefix.\n\n```python\ntheme = gr.themes.Default().set(\n button_primary_background_fill=\"FF0000\",\n button_primary_background_fill_hover=\"*button_primary_background_fill\",\n button_primary_border=\"*button_primary_background_fill\",\n)\n```\n\nNow, if we change the `button_primary_background_fill` variable, the `button_primary_background_fill_hover` and `button_primary_border` variables will automatically update as well.\n\nThis is particularly useful if you intend to share your theme - it makes it easy to modify the theme without having to change every variable.\n\nNote that dark mode variables automatically reference each other. For example:\n\n```python\ntheme = gr.themes.Default().set(\n button_primary_background_fill=\"FF0000\",\n button_primary_background_fill_dark=\"AAAAAA\",\n button_primary_border=\"*button_primary_background_fill\",\n button_primary_border_dark=\"*button_primary_background_fill_dark\",\n)\n```\n\n`button_primary_border_dark` will draw its value from `button_primary_background_fill_dark`, because dark mode always draw from the dark version of the variable.\n\n", "heading1": "Extending Themes via `.set()`", "source_page_url": "https://gradio.app/guides/theming-guide", "source_page_title": "Other Tutorials - Theming Guide Guide"}, {"text": "Let's say you want to create a theme from scratch! We'll go through it step by step - you can also see the source of prebuilt themes in the gradio source repo for reference - [here's the source](https://github.com/gradio-app/gradio/blob/main/gradio/themes/monochrome.py) for the Monochrome theme.\n\nOur new theme class will inherit from `gradio.themes.Base`, a theme that sets a lot of convenient defaults. Let's make a simple demo that creates a dummy theme called Seafoam, and make a simple app that uses it.\n\n$code_theme_new_step_1\n\n
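The `$code_theme_new_step_1` snippet isn't inlined in this excerpt. Based on the description above, a minimal sketch of step 1 — a dummy `Seafoam` theme subclassing `Base`, plus a simple app that uses it — might look like this (the demo's components are placeholders):

```python
import gradio as gr
from gradio.themes.base import Base

class Seafoam(Base):
    pass

seafoam = Seafoam()

# Any small app will do; the point is passing the theme to gr.Blocks
with gr.Blocks(theme=seafoam) as demo:
    textbox = gr.Textbox(label="Name")
    button = gr.Button("Greet")
    output = gr.Textbox(label="Greeting")
    button.click(lambda name: f"Hello, {name}!", textbox, output)

demo.launch()
```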
\n\n
\n\nThe Base theme is very barebones, and uses `gr.themes.Blue` as its primary color - you'll note the primary button and the loading animation are both blue as a result. Let's change the default core arguments of our app: we'll override the constructor and pass new defaults for the core constructor arguments.\n\nWe'll use `gr.themes.Emerald` as our primary color, and set secondary and neutral hues to `gr.themes.Blue`. We'll make our text larger using `text_lg`. We'll use `Quicksand` as our default font, loaded from Google Fonts.\n\n$code_theme_new_step_2\n\n
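Again, the `$code_theme_new_step_2` snippet isn't inlined here. A sketch of what it plausibly looks like, assuming the `Base` constructor accepts the core keyword arguments described earlier in this guide:

```python
from __future__ import annotations
from typing import Iterable
import gradio as gr
from gradio.themes.base import Base
from gradio.themes.utils import colors, fonts, sizes

class Seafoam(Base):
    def __init__(
        self,
        *,
        primary_hue: colors.Color | str = colors.emerald,
        secondary_hue: colors.Color | str = colors.blue,
        neutral_hue: colors.Color | str = colors.blue,
        text_size: sizes.Size | str = sizes.text_lg,
        font: fonts.Font | str | Iterable[fonts.Font | str] = (
            fonts.GoogleFont("Quicksand"), "ui-sans-serif", "sans-serif",
        ),
        font_mono: fonts.Font | str | Iterable[fonts.Font | str] = (
            fonts.GoogleFont("IBM Plex Mono"), "ui-monospace", "monospace",
        ),
    ):
        # Forward the new defaults to the Base constructor
        super().__init__(
            primary_hue=primary_hue,
            secondary_hue=secondary_hue,
            neutral_hue=neutral_hue,
            text_size=text_size,
            font=font,
            font_mono=font_mono,
        )

seafoam = Seafoam()

with gr.Blocks(theme=seafoam) as demo:
    ...
```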
\n\n
\n\nSee how the primary button and the loading animation are now green? These CSS variables are tied to the `primary_hue` variable.\n\nLet's modify the theme a bit more directly. We'll call the `set()` method to overwrite CSS variable values explicitly. We can use any CSS logic, and reference our core constructor arguments using the `*` prefix.\n\n$code_theme_new_step_3\n\n
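The `$code_theme_new_step_3` snippet isn't inlined in this excerpt either. As a sketch of the idea — in the guide this builds on the step-2 constructor — the variables and values below are illustrative choices, not the guide's exact ones:

```python
import gradio as gr
from gradio.themes.base import Base

class Seafoam(Base):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Illustrative overrides: any CSS logic works here, and *name
        # references resolve against the core constructor arguments
        super().set(
            body_background_fill="*primary_50",
            button_primary_background_fill="linear-gradient(90deg, *primary_300, *secondary_400)",
            button_primary_background_fill_hover="linear-gradient(90deg, *primary_200, *secondary_300)",
            button_primary_text_color="white",
            slider_color="*secondary_300",
            block_title_text_weight="600",
        )

seafoam = Seafoam()
```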
\n\n
\n\nLook how fun our theme looks now! With just a few variable changes, our theme looks completely different.\n\nYou may find it helpful to explore the [source code ", "heading1": "Creating a Full Theme", "source_page_url": "https://gradio.app/guides/theming-guide", "source_page_title": "Other Tutorials - Theming Guide Guide"}, {"text": "Look how fun our theme looks now! With just a few variable changes, our theme looks completely different.\n\nYou may find it helpful to explore the [source code of the other prebuilt themes](https://github.com/gradio-app/gradio/blob/main/gradio/themes) to see how they modified the base theme. You can also find your browser's Inspector useful to select elements from the UI and see what CSS variables are being used in the styles panel.\n\n", "heading1": "Creating a Full Theme", "source_page_url": "https://gradio.app/guides/theming-guide", "source_page_title": "Other Tutorials - Theming Guide Guide"}, {"text": "Once you have created a theme, you can upload it to the HuggingFace Hub to let others view it, use it, and build off of it!\n\nUploading a Theme\n\nThere are two ways to upload a theme: via the theme class instance or the command line. We will cover both of them with the previously created `seafoam` theme.\n\n- Via the class instance\n\nEach theme instance has a method called `push_to_hub` we can use to upload a theme to the HuggingFace hub.\n\n```python\nseafoam.push_to_hub(repo_name=\"seafoam\",\n                    version=\"0.0.1\",\n                    hf_token=\"<token>\")\n```\n\n- Via the command line\n\nFirst, save the theme to disk:\n\n```python\nseafoam.dump(filename=\"seafoam.json\")\n```\n\nThen use the `upload_theme` command:\n\n```bash\nupload_theme\\\n\"seafoam.json\"\\\n\"seafoam\"\\\n--version \"0.0.1\"\\\n--hf_token \"<token>\"\n```\n\nIn order to upload a theme, you must have a HuggingFace account and pass your [Access Token](https://huggingface.co/docs/huggingface_hub/quick-start#login)\nas the `hf_token` argument. However, if you log in via the [HuggingFace command line](https://huggingface.co/docs/huggingface_hub/quick-start#login) (which comes installed with `gradio`),\nyou can omit the `hf_token` argument.\n\nThe `version` argument lets you specify a valid [semantic version](https://www.geeksforgeeks.org/introduction-semantic-versioning/) string for your theme.\nThat way your users are able to specify which version of your theme they want to use in their apps. This also lets you publish updates to your theme without worrying\nabout changing how previously created apps look. The `version` argument is optional. If omitted, the next patch version is automatically applied.\n\nTheme Previews\n\nBy calling `push_to_hub` or `upload_theme`, the theme assets will be stored in a [HuggingFace space](https://huggingface.co/docs/hub/spaces-overview).\n\nFor example, the theme preview for the calm seafoam theme is here: [calm seafoam preview](https://huggingface.co/spaces/shivalikasingh/calm_seafoam).\n\n
\n\n\n
\n\nDiscovering Themes\n\nThe [Theme Gallery](https://huggingface.co/spaces/gradio/theme-gallery) shows all the public Gradio themes. After publishing your theme,\nit will automatically show up in the theme gallery after a couple of minutes.\n\nYou can sort themes by the number of likes on their Space or from most to least recently created, and you can toggle the previews between light and dark mode.\n\n
\n\n
\n\nDownloading\n\nTo use a theme from the hub, use the `from_hub` method on the `ThemeClass` and pass it to your app:\n\n```python\nmy_theme = gr.Theme.from_hub(\"gradio/seafoam\")\n\nwith gr.Blocks(theme=my_theme) as demo:\n    ....\n```\n\nYou can also pass the theme string directly to `Blocks` or `Interface` (`gr.Blocks(theme=\"gradio/seafoam\")`).\n\nYou can pin your app to an upstream theme version by using semantic versioning expressions.\n\nFor example, the following would ensure the theme we load from the `seafoam` repo was between versions `0.0.1` and `0.1.0`:\n\n```python\nwith gr.Blocks(theme=\"gradio/seafoam@>=0.0.1,<0.1.0\") as demo:\n    ....\n```\n\nEnjoy creating your own themes! If you make one you're proud of, please share it with the world by uploading it to the hub!\nIf you tag us on [Twitter](https://twitter.com/gradio) we can give your theme a shout out!\n\n\n", "heading1": "Sharing Themes", "source_page_url": "https://gradio.app/guides/theming-guide", "source_page_title": "Other Tutorials - Theming Guide Guide"}, {"text": "In this Guide, we'll walk you through:\n\n- An introduction to Gradio, Hugging Face Spaces, and Wandb\n- How to set up a Gradio demo using the Wandb integration for JoJoGAN\n- How to contribute your own Gradio demos, after tracking your experiments on Wandb, to the Wandb organization on Hugging Face\n\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/Gradio-and-Wandb-Integration", "source_page_title": "Other Tutorials - Gradio And Wandb Integration Guide"}, {"text": "Weights and Biases (W&B) allows data scientists and machine learning scientists to track their machine learning experiments at every stage, from training to production. Any metric can be aggregated over samples and shown in panels in a customizable and searchable dashboard, like below:\n\n[Screenshot: a W&B dashboard]\n\n", "heading1": "What is Wandb?", "source_page_url": "https://gradio.app/guides/Gradio-and-Wandb-Integration", "source_page_title": "Other Tutorials - Gradio And Wandb Integration Guide"}, {"text": "Gradio\n\nGradio lets users demo their machine learning models as a web app, all in a few lines of Python. Gradio wraps any Python function (such as a machine learning model's inference function) into a user interface, and the demos can be launched inside Jupyter notebooks and Colab notebooks, as well as embedded in your own website and hosted on Hugging Face Spaces for free.\n\nGet started [here](https://gradio.app/getting_started).\n\nHugging Face Spaces\n\nHugging Face Spaces is a free hosting option for Gradio demos. Spaces comes with three SDK options: Gradio, Streamlit, and static HTML demos. Spaces can be public or private, and the workflow is similar to GitHub repos. There are over 2,000 Spaces currently on Hugging Face. Learn more about Spaces [here](https://huggingface.co/spaces/launch).\n\n", "heading1": "What are Hugging Face Spaces & Gradio?", "source_page_url": "https://gradio.app/guides/Gradio-and-Wandb-Integration", "source_page_title": "Other Tutorials - Gradio And Wandb Integration Guide"}, {"text": "Now, let's walk you through how to do this on your own. We'll make the assumption that you're new to W&B and Gradio for the purposes of this tutorial.\n\nLet's get started!\n\n1. 
Create a W&B account\n\n Follow [these quick instructions](https://app.wandb.ai/login) to create your free account if you don\u2019t have one already. It shouldn't take more than a couple minutes. Once you're done (or if you've already got an account), next, we'll run a quick colab.\n\n2. Open Colab Install Gradio and W&B\n\n We'll be following along with the colab provided in the JoJoGAN repo with some minor modifications to use Wandb and Gradio more effectively.\n\n [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mchong6/JoJoGAN/blob/main/stylize.ipynb)\n\n Install Gradio and Wandb at the top:\n\n ```sh\n pip install gradio wandb\n ```\n\n3. Finetune StyleGAN and W&B experiment tracking\n\n This next step will open a W&B dashboard to track your experiments and a gradio panel showing pretrained models to choose from a drop down menu from a Gradio Demo hosted on Huggingface Spaces. Here's the code you need for that:\n\n ```python\n alpha = 1.0\n alpha = 1-alpha\n\n preserve_color = True\n num_iter = 100\n log_interval = 50\n\n samples = []\n column_names = [\"Reference (y)\", \"Style Code(w)\", \"Real Face Image(x)\"]\n\n wandb.init(project=\"JoJoGAN\")\n config = wandb.config\n config.num_iter = num_iter\n config.preserve_color = preserve_color\n wandb.log(\n {\"Style reference\": [wandb.Image(transforms.ToPILImage()(target_im))]},\n step=0)\n\n load discriminator for perceptual loss\n discriminator = Discriminator(1024, 2).eval().to(device)\n ckpt = torch.load('models/stylegan2-ffhq-config-f.pt', map_location=lambda storage, loc: storage)\n discriminator.load_state_dict(ckpt[\"d\"], strict=False)\n\n reset generator\n del generator\n generator = deepcopy(original_generator)\n\n g_optim = optim.Adam(generator.parameters(),", "heading1": "Setting up a Gradio Demo for JoJoGAN", "source_page_url": "https://gradio.app/guides/Gradio-and-Wandb-Integration", "source_page_title": "Other Tutorials - Gradio And Wandb Integration Guide"}, {"text": ": storage)\n discriminator.load_state_dict(ckpt[\"d\"], strict=False)\n\n reset generator\n del generator\n generator = deepcopy(original_generator)\n\n g_optim = optim.Adam(generator.parameters(), lr=2e-3, betas=(0, 0.99))\n\n Which layers to swap for generating a family of plausible real images -> fake image\n if preserve_color:\n id_swap = [9,11,15,16,17]\n else:\n id_swap = list(range(7, generator.n_latent))\n\n for idx in tqdm(range(num_iter)):\n mean_w = generator.get_latent(torch.randn([latents.size(0), latent_dim]).to(device)).unsqueeze(1).repeat(1, generator.n_latent, 1)\n in_latent = latents.clone()\n in_latent[:, id_swap] = alpha*latents[:, id_swap] + (1-alpha)*mean_w[:, id_swap]\n\n img = generator(in_latent, input_is_latent=True)\n\n with torch.no_grad():\n real_feat = discriminator(targets)\n fake_feat = discriminator(img)\n\n loss = sum([F.l1_loss(a, b) for a, b in zip(fake_feat, real_feat)])/len(fake_feat)\n\n wandb.log({\"loss\": loss}, step=idx)\n if idx % log_interval == 0:\n generator.eval()\n my_sample = generator(my_w, input_is_latent=True)\n generator.train()\n my_sample = transforms.ToPILImage()(utils.make_grid(my_sample, normalize=True, range=(-1, 1)))\n wandb.log(\n {\"Current stylization\": [wandb.Image(my_sample)]},\n step=idx)\n table_data = [\n wandb.Image(transforms.ToPILImage()(target_im)),\n wandb.Image(img),\n wandb.Image(my_sample),\n ]\n samples.append(table_data)\n\n g_optim.zero_grad()\n loss.backward()\n g_optim.step()\n\n out_table = 
wandb.Table(data=samples, columns=column_names)\n wandb.log({\"Current Samples\": out_table})\n ```\n4. Save, Download, and Load Model\n\n Here's how to save and download your model.\n\n ```python\n from PIL import Image\n import torch\n torch.backends.cudnn.benchmark = True\n from torchvision impor", "heading1": "Setting up a Gradio Demo for JoJoGAN", "source_page_url": "https://gradio.app/guides/Gradio-and-Wandb-Integration", "source_page_title": "Other Tutorials - Gradio And Wandb Integration Guide"}, {"text": "ave, Download, and Load Model\n\n Here's how to save and download your model.\n\n ```python\n from PIL import Image\n import torch\n torch.backends.cudnn.benchmark = True\n from torchvision import transforms, utils\n from util import *\n import math\n import random\n import numpy as np\n from torch import nn, autograd, optim\n from torch.nn import functional as F\n from tqdm import tqdm\n import lpips\n from model import *\n from e4e_projection import projection as e4e_projection\n \n from copy import deepcopy\n import imageio\n \n import os\n import sys\n import torchvision.transforms as transforms\n from argparse import Namespace\n from e4e.models.psp import pSp\n from util import *\n from huggingface_hub import hf_hub_download\n from google.colab import files\n \n torch.save({\"g\": generator.state_dict()}, \"your-model-name.pt\")\n \n files.download('your-model-name.pt')\n \n latent_dim = 512\n device=\"cuda\"\n model_path_s = hf_hub_download(repo_id=\"akhaliq/jojogan-stylegan2-ffhq-config-f\", filename=\"stylegan2-ffhq-config-f.pt\")\n original_generator = Generator(1024, latent_dim, 8, 2).to(device)\n ckpt = torch.load(model_path_s, map_location=lambda storage, loc: storage)\n original_generator.load_state_dict(ckpt[\"g_ema\"], strict=False)\n mean_latent = original_generator.mean_latent(10000)\n \n generator = deepcopy(original_generator)\n \n ckpt = torch.load(\"/content/JoJoGAN/your-model-name.pt\", map_location=lambda storage, loc: storage)\n generator.load_state_dict(ckpt[\"g\"], strict=False)\n generator.eval()\n \n plt.rcParams['figure.dpi'] = 150\n \n transform = transforms.Compose(\n [\n transforms.Resize((1024, 1024)),\n transforms.ToTensor(),\n transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),\n ]\n )\n \n def inference(img):\n img.save('out.jpg')\n aligned_face = align_face('out.jpg')\n \n my_w = e4e_projection(aligned_face, \"out.pt\", device).unsqueeze(0)", "heading1": "Setting up a Gradio Demo for JoJoGAN", "source_page_url": "https://gradio.app/guides/Gradio-and-Wandb-Integration", "source_page_title": "Other Tutorials - Gradio And Wandb Integration Guide"}, {"text": ".5, 0.5)),\n ]\n )\n \n def inference(img):\n img.save('out.jpg')\n aligned_face = align_face('out.jpg')\n \n my_w = e4e_projection(aligned_face, \"out.pt\", device).unsqueeze(0)\n with torch.no_grad():\n my_sample = generator(my_w, input_is_latent=True)\n \n npimage = my_sample[0].cpu().permute(1, 2, 0).detach().numpy()\n imageio.imwrite('filename.jpeg', npimage)\n return 'filename.jpeg'\n ````\n\n5. Build a Gradio Demo\n\n ```python\n import gradio as gr\n \n title = \"JoJoGAN\"\n description = \"Gradio Demo for JoJoGAN: One Shot Face Stylization. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below.\"\n \n demo = gr.Interface(\n inference,\n gr.Image(type=\"pil\"),\n gr.Image(type=\"file\"),\n title=title,\n description=description\n )\n \n demo.launch(share=True)\n ```\n\n6. 
Integrate Gradio into your W&B Dashboard\n\n The last step\u2014integrating your Gradio demo with your W&B dashboard\u2014is just one extra line:\n\n ```python\n demo.integrate(wandb=wandb)\n ```\n\n Once you call `integrate`, a demo will be created and you can embed it into your dashboard or report.\n\n Outside of W&B, anyone can embed Gradio demos hosted on HF Spaces directly into their blogs, websites, documentation, etc., using web components and the `gradio-app` tag, with `src` pointing at the Space's embed URL:\n \n ```html\n <gradio-app src=\"https://your-space.hf.space\"></gradio-app>\n ```\n\n7. (Optional) Embed W&B plots in your Gradio App\n\n It's also possible to embed W&B plots within Gradio apps. To do so, you can create a W&B Report of your plots and\n embed them within your Gradio app within a `gr.HTML` block.\n\n The Report will need to be public and you will need to wrap the URL within an iFrame like this:\n\n ```python\n import gradio as gr\n \n def wandb_report(url):\n     iframe = f'<iframe src={url} style=\"border:none;height:1024px;width:100%\">'\n     return gr.HTML(iframe)\n ```\n\nAgain, you can set the `src=` attribute to your Space's embed URL, which you can find in the \"Embed this Space\" button.\n\nNote: if you use IFrames, you'll probably want to add a fixed `height` attribute and set `style=\"border:0;\"` to remove the border. In addition, if your app requires permissions such as access to the webcam or the microphone, you'll need to provide that as well using the `allow` attribute.\n\n", "heading1": "Embedding Hosted Spaces", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "You can use almost any Gradio app as an API! In the footer of a Gradio app [like this one](https://huggingface.co/spaces/gradio/hello_world), you'll see a \"Use via API\" link.\n\n![Use via API](https://github.com/gradio-app/gradio/blob/main/guides/assets/use_via_api.png?raw=true)\n\nThis is a page that lists the endpoints that can be used to query the Gradio app, via our supported clients: either [the Python client](https://gradio.app/guides/getting-started-with-the-python-client/), or [the JavaScript client](https://gradio.app/guides/getting-started-with-the-js-client/). For each endpoint, Gradio automatically generates the parameters and their types, as well as example inputs, like this:\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api.png)\n\nThe endpoints are automatically created when you launch a Gradio application. If you are using Gradio `Blocks`, you can also name each event listener, such as\n\n```python\nbtn.click(add, [num1, num2], output, api_name=\"addition\")\n```\n\nThis will add and document the endpoint `/addition/` to the automatically generated API page. Read more about the [API page here](./view-api-page).\n\n", "heading1": "API Page", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "When a user makes a prediction to your app, you may need the underlying network request, in order to get the request headers (e.g. for advanced authentication), log the client's IP address, get the query parameters, or for other reasons. Gradio supports this in a similar manner to FastAPI: simply add a function parameter whose type hint is `gr.Request` and Gradio will pass in the network request as that parameter. 
Here is an example:\n\n```python\nimport gradio as gr\n\ndef echo(text, request: gr.Request):\n    if request:\n        print(\"Request headers dictionary:\", request.headers)\n        print(\"IP address:\", request.client.host)\n        print(\"Query parameters:\", dict(request.query_params))\n    return text\n\nio = gr.Interface(echo, \"textbox\", \"textbox\").launch()\n```\n\nNote: if your function is called directly instead of through the UI (this happens, for\nexample, when examples are cached, or when the Gradio app is called via API), then `request` will be `None`.\nYou should handle this case explicitly to ensure that your app does not throw any errors. That is why\nwe have the explicit check `if request`.\n\n", "heading1": "Accessing the Network Request Directly", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "In some cases, you might have an existing FastAPI app, and you'd like to add a path for a Gradio demo.\nYou can easily do this with `gradio.mount_gradio_app()`.\n\nHere's a complete example:\n\n$code_custom_path\n\nNote that this approach also allows you to run your Gradio apps on custom paths (`http://localhost:8000/gradio` in the example above).\n\n\n", "heading1": "Mounting Within Another FastAPI App", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "Password-protected app\n\nYou may wish to put an authentication page in front of your app to limit who can open your app. With the `auth=` keyword argument in the `launch()` method, you can provide a tuple with a username and password, or a list of acceptable username/password tuples. Here's an example that provides password-based authentication for a single user named \"admin\":\n\n```python\ndemo.launch(auth=(\"admin\", \"pass1234\"))\n```\n\nFor more complex authentication handling, you can even pass a function that takes a username and password as arguments, and returns `True` to allow access, `False` otherwise.\n\nHere's an example of a function that accepts any login where the username and password are the same:\n\n```python\ndef same_auth(username, password):\n    return username == password\ndemo.launch(auth=same_auth)\n```\n\nIf you have multiple users, you may wish to customize the content that is shown depending on the user that is logged in. You can retrieve the logged in user by [accessing the network request directly](#accessing-the-network-request-directly) as discussed above, and then reading the `.username` attribute of the request. Here's an example:\n\n\n```python\nimport gradio as gr\n\ndef update_message(request: gr.Request):\n    return f\"Welcome, {request.username}\"\n\nwith gr.Blocks() as demo:\n    m = gr.Markdown()\n    demo.load(update_message, None, m)\n\ndemo.launch(auth=[(\"Abubakar\", \"Abubakar\"), (\"Ali\", \"Ali\")])\n```\n\nNote: For authentication to work properly, third-party cookies must be enabled in your browser. This is not the case by default for Safari or for Chrome Incognito Mode.\n\nIf users visit the `/logout` page of your Gradio app, they will automatically be logged out and session cookies deleted. This allows you to add logout functionality to your Gradio app as well. 
Let's update the previous example to include a logout button:\n\n```python\nimport gradio as gr\n\ndef update_message(request: gr.Request):\n    return f\"Welcome, {request.username}\"\n\nwith gr.Blocks() as ", "heading1": "Authentication", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": " Let's update the previous example to include a logout button:\n\n```python\nimport gradio as gr\n\ndef update_message(request: gr.Request):\n    return f\"Welcome, {request.username}\"\n\nwith gr.Blocks() as demo:\n    m = gr.Markdown()\n    logout_button = gr.Button(\"Logout\", link=\"/logout\")\n    demo.load(update_message, None, m)\n\ndemo.launch(auth=[(\"Pete\", \"Pete\"), (\"Dawood\", \"Dawood\")])\n```\n\nNote: Gradio's built-in authentication provides a straightforward and basic layer of access control but does not offer robust security features for applications that require stringent access controls (e.g. multi-factor authentication, rate limiting, or automatic lockout policies).\n\nOAuth (Login via Hugging Face)\n\nGradio natively supports OAuth login via Hugging Face. In other words, you can easily add a _\"Sign in with Hugging Face\"_ button to your demo, which allows you to get a user's HF username as well as other information from their HF profile. Check out [this Space](https://huggingface.co/spaces/Wauplin/gradio-oauth-demo) for a live demo.\n\nTo enable OAuth, you must set `hf_oauth: true` in your Space metadata in your README.md file. This will register your Space\nas an OAuth application on Hugging Face. Next, you can use `gr.LoginButton` to add a login button to\nyour Gradio app. Once a user is logged in with their HF account, you can retrieve their profile by adding a parameter of type\n`gr.OAuthProfile` to any Gradio function. The user profile will be automatically injected as a parameter value. If you want\nto perform actions on behalf of the user (e.g. list the user's private repos, create repos, etc.), you can retrieve the user\ntoken by adding a parameter of type `gr.OAuthToken`. You must define which scopes you will use in your Space metadata\n(see [documentation](https://huggingface.co/docs/hub/spaces-oauth#scopes) for more details).\n\nHere is a short example:\n\n$code_login_with_huggingface\n\nWhen the user clicks on the login button, they get redirected to a new page to authorize your 
\n\n
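The `$code_login_with_huggingface` snippet is not inlined above; as a minimal sketch of such a login demo (the function name, markdown wiring, and `profile.name` usage are assumptions based on the description, not the guide's exact code):

```python
import gradio as gr

def hello(profile: gr.OAuthProfile | None) -> str:
    # profile is None when the visitor hasn't signed in with Hugging Face
    if profile is None:
        return "Please sign in with Hugging Face!"
    return f"Hello {profile.name}!"

with gr.Blocks() as demo:
    gr.LoginButton()
    m = gr.Markdown()
    demo.load(hello, inputs=None, outputs=m)

demo.launch()
```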
\n\nUsers can revoke access to their profile at any time in their [settings](https://huggingface.co/settings/connected-applications).\n\nAs seen above, OAuth features are available only when your app runs in a Space. However, you often need to test your app\nlocally before deploying it. To test OAuth features locally, your machine must be logged in to Hugging Face. Please run `huggingface-cli login` or set the `HF_TOKEN` environment variable to one of your access tokens. You can generate a new token in your settings page (https://huggingface.co/settings/tokens). Then, clicking on the `gr.LoginButton` will log in to your local Hugging Face profile, allowing you to debug your app with your Hugging Face account before deploying it to a Space.\n\n**Security Note**: It is important to note that adding a `gr.LoginButton` does not restrict users from using your app, in the same way that adding [username-password authentication](/guides/sharing-your-app#password-protected-app) does. This means that users of your app who have not logged in with Hugging Face can still access and run events in your Gradio app -- the difference is that the `gr.OAuthProfile` or `gr.OAuthToken` will be `None` in the corresponding functions.\n\n\nOAuth (with external providers)\n\nIt is also possible to authenticate with external OAuth providers (e.g. Google OAuth) in your Gradio apps. To do this, first mount your Gradio app within a FastAPI app ([as discussed above](#mounting-within-another-fast-api-app)). Then, you must write an *authentication function*, which gets the user's username from the OAuth provider and returns it. Th", "heading1": "Authentication", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": " FastAPI app ([as discussed above](#mounting-within-another-fast-api-app)). Then, you must write an *authentication function*, which gets the user's username from the OAuth provider and returns it. This function should be passed to the `auth_dependency` parameter in `gr.mount_gradio_app`.\n\nSimilar to [FastAPI dependency functions](https://fastapi.tiangolo.com/tutorial/dependencies/), the function specified by `auth_dependency` will run before any Gradio-related route in your FastAPI app. The function should accept a single parameter (the FastAPI `Request`) and return either a string (representing a user's username) or `None`. If a string is returned, the user will be able to access the Gradio-related routes in your FastAPI app.\n\nFirst, let's show a simplistic example to illustrate the `auth_dependency` parameter:\n\n```python\nfrom fastapi import FastAPI, Request\nimport gradio as gr\nimport uvicorn\n\napp = FastAPI()\n\ndef get_user(request: Request):\n    return request.headers.get(\"user\")\n\ndemo = gr.Interface(lambda s: f\"Hello {s}!\", \"textbox\", \"textbox\")\n\napp = gr.mount_gradio_app(app, demo, path=\"/demo\", auth_dependency=get_user)\n\nif __name__ == '__main__':\n    uvicorn.run(app)\n```\n\nIn this example, only requests that include a \"user\" header will be allowed to access the Gradio app. 
Of course, this does not add much security, since any user can add this header in their request.\n\nHere's a more complete example showing how to add Google OAuth to a Gradio app (assuming you've already created OAuth Credentials on the [Google Developer Console](https://console.cloud.google.com/project)):\n\n```python\nimport os\nfrom authlib.integrations.starlette_client import OAuth, OAuthError\nfrom fastapi import FastAPI, Depends, Request\nfrom starlette.config import Config\nfrom starlette.responses import RedirectResponse\nfrom starlette.middleware.sessions import SessionMiddleware\nimport uvicorn\nimport gradio as gr\n\napp = FastAPI()\n\n# Replace these with your own OAuth settings\nGOOGLE_CLIENT_ID = \"...\"\nGOOGLE_C", "heading1": "Authentication", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "Response\nfrom starlette.middleware.sessions import SessionMiddleware\nimport uvicorn\nimport gradio as gr\n\napp = FastAPI()\n\n# Replace these with your own OAuth settings\nGOOGLE_CLIENT_ID = \"...\"\nGOOGLE_CLIENT_SECRET = \"...\"\nSECRET_KEY = \"...\"\n\nconfig_data = {'GOOGLE_CLIENT_ID': GOOGLE_CLIENT_ID, 'GOOGLE_CLIENT_SECRET': GOOGLE_CLIENT_SECRET}\nstarlette_config = Config(environ=config_data)\noauth = OAuth(starlette_config)\noauth.register(\n    name='google',\n    server_metadata_url='https://accounts.google.com/.well-known/openid-configuration',\n    client_kwargs={'scope': 'openid email profile'},\n)\n\nSECRET_KEY = os.environ.get('SECRET_KEY') or \"a_very_secret_key\"\napp.add_middleware(SessionMiddleware, secret_key=SECRET_KEY)\n\n# Dependency to get the current user\ndef get_user(request: Request):\n    user = request.session.get('user')\n    if user:\n        return user['name']\n    return None\n\n@app.get('/')\ndef public(user: dict = Depends(get_user)):\n    if user:\n        return RedirectResponse(url='/gradio')\n    else:\n        return RedirectResponse(url='/login-demo')\n\n@app.route('/logout')\nasync def logout(request: Request):\n    request.session.pop('user', None)\n    return RedirectResponse(url='/')\n\n@app.route('/login')\nasync def login(request: Request):\n    redirect_uri = request.url_for('auth')\n    # If your app is running on https, you should ensure that the\n    # `redirect_uri` is https, e.g. uncomment the following lines:\n    #\n    # from urllib.parse import urlparse, urlunparse\n    # redirect_uri = urlunparse(urlparse(str(redirect_uri))._replace(scheme='https'))\n    return await oauth.google.authorize_redirect(request, redirect_uri)\n\n@app.route('/auth')\nasync def auth(request: Request):\n    try:\n        access_token = await oauth.google.authorize_access_token(request)\n    except OAuthError:\n        return RedirectResponse(url='/')\n    request.session['user'] = dict(access_token)[\"userinfo\"]\n    return RedirectResponse(url='/')\n\nwith gr.Blocks() as login_demo:\n    gr.Button(", "heading1": "Authentication", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "t OAuthError:\n        return RedirectResponse(url='/')\n    request.session['user'] = dict(access_token)[\"userinfo\"]\n    return RedirectResponse(url='/')\n\nwith gr.Blocks() as login_demo:\n    gr.Button(\"Login\", link=\"/login\")\n\napp = gr.mount_gradio_app(app, login_demo, path=\"/login-demo\")\n\ndef greet(request: gr.Request):\n    return f\"Welcome to Gradio, {request.username}\"\n\nwith gr.Blocks() as main_demo:\n    m = gr.Markdown(\"Welcome to Gradio!\")\n    gr.Button(\"Logout\", link=\"/logout\")\n    main_demo.load(greet, None, m)\n\napp = gr.mount_gradio_app(app, main_demo, path=\"/gradio\", auth_dependency=get_user)\n\nif __name__ == '__main__':\n    uvicorn.run(app)\n```\n\nThere are actually two separate Gradio apps in this example: one that simply displays a login button (this demo is accessible to any user), and the main demo, which is only accessible to users that are logged in. You can try this example out on [this Space](https://huggingface.co/spaces/gradio/oauth-example).\n\n\n", "heading1": "Authentication", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "By default, Gradio collects certain analytics to help us better understand the usage of the `gradio` library. This includes the following information:\n\n* What environment the Gradio app is running on (e.g. Colab Notebook, Hugging Face Spaces)\n* What input/output components are being used in the Gradio app\n* Whether the Gradio app is utilizing certain advanced features, such as `auth` or `show_error`\n* The IP address, which is used solely to measure the number of unique developers using Gradio\n* The version of Gradio that is running\n\nNo information is collected from _users_ of your Gradio app. If you'd like to disable analytics altogether, you can do so by setting the `analytics_enabled` parameter to `False` in `gr.Blocks`, `gr.Interface`, or `gr.ChatInterface`. Or, you can set the `GRADIO_ANALYTICS_ENABLED` environment variable to `\"False\"` to apply this to all Gradio apps created across your system.\n\n*Note*: this reflects the analytics policy as of `gradio>=4.32.0`.\n\n", "heading1": "Analytics", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "[Progressive Web Apps (PWAs)](https://developer.mozilla.org/en-US/docs/Web/Progressive_web_apps) are web applications that are regular web pages or websites, but can appear to the user like installable platform-specific applications.\n\nGradio apps can be easily served as PWAs by setting the `pwa=True` parameter in the `launch()` method. 
Here's an example:\n\n```python\nimport gradio as gr\n\ndef greet(name):\n    return \"Hello \" + name + \"!\"\n\ndemo = gr.Interface(fn=greet, inputs=\"textbox\", outputs=\"textbox\")\n\ndemo.launch(pwa=True)  # Launch your app as a PWA\n```\n\nThis will generate a PWA that can be installed on your device. Here's how it looks:\n\n![Installing PWA](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/install-pwa.gif)\n\nWhen you specify `favicon_path` in the `launch()` method, the icon will be used as the app's icon. Here's an example:\n\n```python\ndemo.launch(pwa=True, favicon_path=\"./hf-logo.svg\")  # Use a custom icon for your PWA\n```\n\n![Custom PWA Icon](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/pwa-favicon.png)\n", "heading1": "Progressive Web App (PWA)", "source_page_url": "https://gradio.app/guides/sharing-your-app", "source_page_title": "Additional Features - Sharing Your App Guide"}, {"text": "By default, each event listener has its own queue, which handles one request at a time. This can be configured via two arguments:\n\n- `concurrency_limit`: This sets the maximum number of concurrent executions for an event listener. By default, the limit is 1 unless configured otherwise in `Blocks.queue()`. You can also set it to `None` for no limit (i.e., an unlimited number of concurrent executions). For example:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    prompt = gr.Textbox()\n    image = gr.Image()\n    generate_btn = gr.Button(\"Generate Image\")\n    generate_btn.click(image_gen, prompt, image, concurrency_limit=5)\n```\n\nIn the code above, up to 5 requests can be processed simultaneously for this event listener. Additional requests will be queued until a slot becomes available.\n\nIf you want to manage multiple event listeners using a shared queue, you can use the `concurrency_id` argument:\n\n- `concurrency_id`: This allows event listeners to share a queue by assigning them the same ID. For example, if your setup has only 2 GPUs but multiple functions require GPU access, you can create a shared queue for all those functions. Here's how that might look:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    prompt = gr.Textbox()\n    image = gr.Image()\n    generate_btn_1 = gr.Button(\"Generate Image via model 1\")\n    generate_btn_2 = gr.Button(\"Generate Image via model 2\")\n    generate_btn_3 = gr.Button(\"Generate Image via model 3\")\n    generate_btn_1.click(image_gen_1, prompt, image, concurrency_limit=2, concurrency_id=\"gpu_queue\")\n    generate_btn_2.click(image_gen_2, prompt, image, concurrency_id=\"gpu_queue\")\n    generate_btn_3.click(image_gen_3, prompt, image, concurrency_id=\"gpu_queue\")\n```\n\nIn this example, all three event listeners share a queue identified by `\"gpu_queue\"`. The queue can handle up to 2 concurrent requests at a time, as defined by the `concurrency_limit`.\n\nNotes\n\n- To ensure unlimited concurrency for an event listener, se", "heading1": "Configuring the Queue", "source_page_url": "https://gradio.app/guides/queuing", "source_page_title": "Additional Features - Queuing Guide"}, {"text": " identified by `\"gpu_queue\"`. The queue can handle up to 2 concurrent requests at a time, as defined by the `concurrency_limit`.\n\nNotes\n\n- To ensure unlimited concurrency for an event listener, set `concurrency_limit=None`. This is useful if your function is calling e.g. 
an external API which handles the rate limiting of requests itself.\n- The default concurrency limit for all queues can be set globally using the `default_concurrency_limit` parameter in `Blocks.queue()`.\n\nThese configurations make it easy to manage the queuing behavior of your Gradio app.\n", "heading1": "Configuring the Queue", "source_page_url": "https://gradio.app/guides/queuing", "source_page_title": "Additional Features - Queuing Guide"}, {"text": "**API endpoint names**\n\nWhen you create a Gradio application, the API endpoint names are automatically generated based on the function names. You can change this by using the `api_name` parameter in `gr.Interface` or `gr.ChatInterface`. If you are using Gradio `Blocks`, you can name each event listener, like this:\n\n```python\nbtn.click(add, [num1, num2], output, api_name=\"addition\")\n```\n\n**Hiding API endpoints**\n\nWhen building a complex Gradio app, you might want to hide certain API endpoints from appearing on the view API page, e.g. if they correspond to functions that simply update the UI. You can set the `show_api` parameter to `False` in any `Blocks` event listener to achieve this, e.g.\n\n```python\nbtn.click(add, [num1, num2], output, show_api=False)\n```\n\n**Disabling API endpoints**\n\nHiding the API endpoint doesn't disable it. A user can still programmatically call the API endpoint if they know the name. If you want to disable an API endpoint altogether, set `api_name=False`, e.g.\n\n```python\nbtn.click(add, [num1, num2], output, api_name=False)\n```\n\nNote: setting `api_name=False` also means that downstream apps will not be able to load your Gradio app using `gr.load()`, as this function uses the Gradio API under the hood.\n\n**Adding API endpoints**\n\nYou can also add new API routes to your Gradio application that do not correspond to events in your UI.\n\nFor example, in this Gradio application, we add a new route that adds numbers and slices a list:\n\n```py\nimport gradio as gr\nwith gr.Blocks() as demo:\n    with gr.Row():\n        input = gr.Textbox()\n        button = gr.Button(\"Submit\")\n    output = gr.Textbox()\n    def fn(a: int, b: int, c: list[str]) -> tuple[int, list[str]]:\n        return a + b, c[a:b]\n    gr.api(fn, api_name=\"add_and_slice\")\n\n_, url, _ = demo.launch()\n```\n\nThis will create a new route `/add_and_slice` which will show up in the \"view API\" page. It can be programmatically called by the Python or JS Clients (discussed below) like this:\n\n```py\nfrom grad", "heading1": "Configuring the API Page", "source_page_url": "https://gradio.app/guides/view-api-page", "source_page_title": "Additional Features - View Api Page Guide"}, {"text": "``\n\nThis will create a new route `/add_and_slice` which will show up in the \"view API\" page. 
It can be programmatically called by the Python or JS Clients (discussed below) like this:\n\n```py\nfrom gradio_client import Client\n\nclient = Client(url)\nresult = client.predict(\n    a=3,\n    b=5,\n    c=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10],\n    api_name=\"/add_and_slice\"\n)\nprint(result)\n```\n\n", "heading1": "Configuring the API Page", "source_page_url": "https://gradio.app/guides/view-api-page", "source_page_title": "Additional Features - View Api Page Guide"}, {"text": "This API page not only lists all of the endpoints that can be used to query the Gradio app, but also shows the usage of both [the Gradio Python client](https://gradio.app/guides/getting-started-with-the-python-client/) and [the Gradio JavaScript client](https://gradio.app/guides/getting-started-with-the-js-client/).\n\nFor each endpoint, Gradio automatically generates a complete code snippet with the parameters and their types, as well as example inputs, allowing you to immediately test an endpoint. Here's an example showing an image file input and `str` output:\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api-snippet.png)\n\n\n", "heading1": "The Clients", "source_page_url": "https://gradio.app/guides/view-api-page", "source_page_title": "Additional Features - View Api Page Guide"}, {"text": "Instead of reading through the view API page, you can also use Gradio's built-in API recorder to generate the relevant code snippet. Simply click on the \"API Recorder\" button, use your Gradio app via the UI as you would normally, and then the API Recorder will generate the code using the Clients to recreate all of your interactions programmatically.\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/api-recorder.gif)\n\n", "heading1": "The API Recorder \ud83e\ude84", "source_page_url": "https://gradio.app/guides/view-api-page", "source_page_title": "Additional Features - View Api Page Guide"}, {"text": "The API page also includes instructions on how to use the Gradio app as a Model Context Protocol (MCP) server, which is a standardized way to expose functions as tools so that they can be used by LLMs.\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api-mcp.png)\n\nFor the MCP server, each tool, its description, and its parameters are listed, along with instructions on how to integrate with popular MCP Clients. Read more about Gradio's [MCP integration here](https://www.gradio.app/guides/building-mcp-server-with-gradio).\n\n", "heading1": "MCP Server", "source_page_url": "https://gradio.app/guides/view-api-page", "source_page_title": "Additional Features - View Api Page Guide"}, {"text": "You can access the complete OpenAPI (formerly Swagger) specification of your Gradio app's API at the endpoint `/gradio_api/openapi.json`. The OpenAPI specification is a standardized, language-agnostic interface description for REST APIs that enables both humans and computers to discover and understand the capabilities of your service.\n", "heading1": "OpenAPI Specification", "source_page_url": "https://gradio.app/guides/view-api-page", "source_page_title": "Additional Features - View Api Page Guide"}, {"text": "Let's create a demo where a user can choose a filter to apply to their webcam stream. 
Users can choose from an edge-detection filter, a cartoon filter, or simply flipping the stream vertically.\n\n$code_streaming_filter\n$demo_streaming_filter\n\nYou will notice that if you change the filter value it will immediately take effect in the output stream. That is an important difference of stream events in comparison to other Gradio events. The input values of the stream can be changed while the stream is being processed. \n\nTip: We set the \"streaming\" parameter of the image output component to be \"True\". Doing so lets the server automatically convert our output images into base64 format, a format that is efficient for streaming.\n\n", "heading1": "A Realistic Image Demo", "source_page_url": "https://gradio.app/guides/streaming-inputs", "source_page_title": "Additional Features - Streaming Inputs Guide"}, {"text": "For some image streaming demos, like the one above, we don't need to display separate input and output components. Our app would look cleaner if we could just display the modified output stream.\n\nWe can do so by just specifying the input image component as the output of the stream event.\n\n$code_streaming_filter_unified\n$demo_streaming_filter_unified\n\n", "heading1": "Unified Image Demos", "source_page_url": "https://gradio.app/guides/streaming-inputs", "source_page_title": "Additional Features - Streaming Inputs Guide"}, {"text": "Your streaming function should be stateless. It should take the current input and return its corresponding output. However, there are cases where you may want to keep track of past inputs or outputs. For example, you may want to keep a buffer of the previous `k` inputs to improve the accuracy of your transcription demo. You can do this with Gradio's `gr.State()` component.\n\nLet's showcase this with a sample demo:\n\n```python\ndef transcribe_handler(current_audio, state, transcript):\n next_text = transcribe(current_audio, history=state)\n state.append(current_audio)\n state = state[-3:]\n return state, transcript + next_text\n\nwith gr.Blocks() as demo:\n with gr.Row():\n with gr.Column():\n mic = gr.Audio(sources=\"microphone\")\n state = gr.State(value=[])\n with gr.Column():\n transcript = gr.Textbox(label=\"Transcript\")\n mic.stream(transcribe_handler, [mic, state, transcript], [state, transcript],\n time_limit=10, stream_every=1)\n\n\ndemo.launch()\n```\n\n", "heading1": "Keeping track of past inputs or outputs", "source_page_url": "https://gradio.app/guides/streaming-inputs", "source_page_title": "Additional Features - Streaming Inputs Guide"}, {"text": "For an end-to-end example of streaming from the webcam, see the object detection from webcam [guide](/main/guides/object-detection-from-webcam-with-webrtc).", "heading1": "End-to-End Examples", "source_page_url": "https://gradio.app/guides/streaming-inputs", "source_page_title": "Additional Features - Streaming Inputs Guide"}, {"text": "Client side functions are ideal for updating component properties (like visibility, placeholders, interactive state, or styling). 
\n\nHere's a basic example:\n\n```py\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    with gr.Row() as row:\n        btn = gr.Button(\"Hide this row\")\n\n    # This function runs in the browser without a server roundtrip\n    btn.click(\n        lambda: gr.Row(visible=False),\n        None,\n        row,\n        js=True\n    )\n\ndemo.launch()\n```\n\n\n", "heading1": "When to Use Client Side Functions", "source_page_url": "https://gradio.app/guides/client-side-functions", "source_page_title": "Additional Features - Client Side Functions Guide"}, {"text": "Client side functions have some important restrictions:\n* They can only update component properties (not values)\n* They cannot take any inputs\n\nHere are some functions that will work with `js=True`:\n\n```py\n# Simple property updates\nlambda: gr.Textbox(lines=4)\n\n# Multiple component updates\nlambda: [gr.Textbox(lines=4), gr.Button(interactive=False)]\n\n# Using gr.update() for property changes\nlambda: gr.update(visible=True, interactive=False)\n```\n\nWe are working to increase the space of functions that can be transpiled to JavaScript so that they can be run in the browser. [Follow the Groovy library for more info](https://github.com/abidlabs/groovy-transpiler).\n\n\n", "heading1": "Limitations", "source_page_url": "https://gradio.app/guides/client-side-functions", "source_page_title": "Additional Features - Client Side Functions Guide"}, {"text": "Here's a more complete example showing how client side functions can improve the user experience:\n\n$code_todo_list_js\n\n\n", "heading1": "Complete Example", "source_page_url": "https://gradio.app/guides/client-side-functions", "source_page_title": "Additional Features - Client Side Functions Guide"}, {"text": "When you set `js=True`, Gradio:\n\n1. Transpiles your Python function to JavaScript\n\n2. Runs the function directly in the browser\n\n3. Still sends the request to the server (for consistency and to handle any side effects)\n\nThis provides immediate visual feedback while ensuring your application state remains consistent.\n", "heading1": "Behind the Scenes", "source_page_url": "https://gradio.app/guides/client-side-functions", "source_page_title": "Additional Features - Client Side Functions Guide"}, {"text": "- **1. Static files**. You can designate static files or directories using the `gr.set_static_paths` function. Static files are not copied to the Gradio cache (see below) and will be served directly from your computer. This can help save disk space and reduce the time your app takes to launch, but be mindful of possible security implications, as any static files are accessible to all users of your Gradio app.\n\n- **2. Files in the `allowed_paths` parameter in `launch()`**. This parameter allows you to pass in a list of additional directories or exact filepaths you'd like to allow users to have access to. (By default, this parameter is an empty list).\n\n- **3. Files in Gradio's cache**. After you launch your Gradio app, Gradio copies certain files into a temporary cache and makes these files accessible to users. Let's unpack this in more detail below.\n\n\n", "heading1": "Files Gradio allows users to access", "source_page_url": "https://gradio.app/guides/file-access", "source_page_title": "Additional Features - File Access Guide"}, {"text": "First, it's important to understand why Gradio has a cache at all. Gradio copies files to a cache directory before returning them to the frontend. This prevents files from being overwritten by one user while they are still needed by another user of your application. 
For example, if your prediction function returns a video file, then Gradio will move that video to the cache after your prediction function runs and returns a URL the frontend can use to show the video. Any file in the cache is available via URL to all users of your running application.\n\nTip: You can customize the location of the cache by setting the `GRADIO_TEMP_DIR` environment variable to an absolute path, such as `/home/usr/scripts/project/temp/`. \n\nFiles Gradio moves to the cache\n\nGradio moves three kinds of files into the cache\n\n1. Files specified by the developer before runtime, e.g. cached examples, default values of components, or files passed into parameters such as the `avatar_images` of `gr.Chatbot`\n\n2. File paths returned by a prediction function in your Gradio application, if they ALSO meet one of the conditions below:\n\n* It is in the `allowed_paths` parameter of the `Blocks.launch` method.\n* It is in the current working directory of the python interpreter.\n* It is in the temp directory obtained by `tempfile.gettempdir()`.\n\n**Note:** files in the current working directory whose name starts with a period (`.`) will not be moved to the cache, even if they are returned from a prediction function, since they often contain sensitive information. \n\nIf none of these criteria are met, the prediction function that is returning that file will raise an exception instead of moving the file to cache. Gradio performs this check so that arbitrary files on your machine cannot be accessed.\n\n3. Files uploaded by a user to your Gradio app (e.g. through the `File` or `Image` input components).\n\nTip: If at any time Gradio blocks a file that you would like it to process, add its path to the `allowed_paths` p", "heading1": "The Gradio cache", "source_page_url": "https://gradio.app/guides/file-access", "source_page_title": "Additional Features - File Access Guide"}, {"text": "d by a user to your Gradio app (e.g. through the `File` or `Image` input components).\n\nTip: If at any time Gradio blocks a file that you would like it to process, add its path to the `allowed_paths` parameter.\n\n", "heading1": "The Gradio cache", "source_page_url": "https://gradio.app/guides/file-access", "source_page_title": "Additional Features - File Access Guide"}, {"text": "While running, Gradio apps will NOT ALLOW users to access:\n\n- **Files that you explicitly block via the `blocked_paths` parameter in `launch()`**. You can pass in a list of additional directories or exact filepaths to the `blocked_paths` parameter in `launch()`. This parameter takes precedence over the files that Gradio exposes by default, or by the `allowed_paths` parameter or the `gr.set_static_paths` function.\n\n- **Any other paths on the host machine**. Users should NOT be able to access other arbitrary paths on the host.\n\n", "heading1": "The files Gradio will not allow others to access", "source_page_url": "https://gradio.app/guides/file-access", "source_page_title": "Additional Features - File Access Guide"}, {"text": "Sharing your Gradio application will also allow users to upload files to your computer or server. You can set a maximum file size for uploads to prevent abuse and to preserve disk space. You can do this with the `max_file_size` parameter of `.launch`. 
For example, the following two code snippets limit file uploads to 5 megabytes per file.\n\n```python\nimport gradio as gr\n\ndemo = gr.Interface(lambda x: x, \"image\", \"image\")\n\ndemo.launch(max_file_size=\"5mb\")\n# or\ndemo.launch(max_file_size=5 * gr.FileSize.MB)\n```\n\n", "heading1": "Uploading Files", "source_page_url": "https://gradio.app/guides/file-access", "source_page_title": "Additional Features - File Access Guide"}, {"text": "* Set a `max_file_size` for your application.\n* Do not return arbitrary user input from a function that is connected to a file-based output component (`gr.Image`, `gr.File`, etc.). For example, the following interface would allow anyone to move an arbitrary file in your local directory to the cache: `gr.Interface(lambda s: s, \"text\", \"file\")`. This is because the user input is treated as an arbitrary file path.\n* Make `allowed_paths` as small as possible. If a path in `allowed_paths` is a directory, any file within that directory can be accessed. Make sure the entries of `allowed_paths` only contain files related to your application.\n* Run your Gradio application from the same directory the application file is located in. This will narrow the scope of files Gradio will be allowed to move into the cache. For example, prefer `python app.py` to `python Users/sources/project/app.py`.\n\n\n", "heading1": "Best Practices", "source_page_url": "https://gradio.app/guides/file-access", "source_page_title": "Additional Features - File Access Guide"}, {"text": "Both `gr.set_static_paths` and the `allowed_paths` parameter in launch expect absolute paths. Below is a minimal example to display a local `.png` image file in an HTML block.\n\n```txt\n\u251c\u2500\u2500 assets\n\u2502 \u2514\u2500\u2500 logo.png\n\u2514\u2500\u2500 app.py\n```\nFor the example directory structure, `logo.png` and any other files in the `assets` folder can be accessed from your Gradio app in `app.py` as follows:\n\n```python\nfrom pathlib import Path\n\nimport gradio as gr\n\ngr.set_static_paths(paths=[Path.cwd().absolute()/\"assets\"])\n\nwith gr.Blocks() as demo:\n    gr.HTML(\"<img src='/gradio_api/file=assets/logo.png'>\")\n\ndemo.launch()\n```\n", "heading1": "Example: Accessing local files", "source_page_url": "https://gradio.app/guides/file-access", "source_page_title": "Additional Features - File Access Guide"}, {"text": "Gradio can stream audio and video directly from your generator function.\nThis lets your user hear your audio or see your video nearly as soon as it's `yielded` by your function.\nAll you have to do is:\n\n1. Set `streaming=True` in your `gr.Audio` or `gr.Video` output component.\n2. Write a Python generator that yields the next \"chunk\" of audio or video.\n3. 
Set `autoplay=True` so that the media starts playing automatically.\n\nFor audio, the next \"chunk\" can be either an `.mp3` or `.wav` file or a `bytes` sequence of audio.\nFor video, the next \"chunk\" has to be either an `.mp4` file or an `h.264`-encoded file with a `.ts` extension.\nFor smooth playback, make sure chunks are of consistent length and longer than 1 second.\n\nWe'll finish with some simple examples illustrating these points.\n\nStreaming Audio\n\n```python\nimport gradio as gr\nfrom time import sleep\n\ndef keep_repeating(audio_file):\n    for _ in range(10):\n        sleep(0.5)\n        yield audio_file\n\ngr.Interface(keep_repeating,\n    gr.Audio(sources=[\"microphone\"], type=\"filepath\"),\n    gr.Audio(streaming=True, autoplay=True)\n).launch()\n```\n\nStreaming Video\n\n```python\nimport gradio as gr\nfrom time import sleep\n\ndef keep_repeating(video_file):\n    for _ in range(10):\n        sleep(0.5)\n        yield video_file\n\ngr.Interface(keep_repeating,\n    gr.Video(sources=[\"webcam\"], format=\"mp4\"),\n    gr.Video(streaming=True, autoplay=True)\n).launch()\n```\n\n", "heading1": "Streaming Media", "source_page_url": "https://gradio.app/guides/streaming-outputs", "source_page_title": "Additional Features - Streaming Outputs Guide"}, {"text": "For an end-to-end example of streaming media, see the object detection from video [guide](/main/guides/object-detection-from-video) or the streaming AI-generated audio with [transformers](https://huggingface.co/docs/transformers/index) [guide](/main/guides/streaming-ai-generated-audio).", "heading1": "End-to-End Examples", "source_page_url": "https://gradio.app/guides/streaming-outputs", "source_page_title": "Additional Features - Streaming Outputs Guide"}, {"text": "You can initialize the `I18n` class with multiple language dictionaries to add custom translations:\n\n```python\nimport gradio as gr\n\n# Create an I18n instance with translations for multiple languages\ni18n = gr.I18n(\n    en={\"greeting\": \"Hello, welcome to my app!\", \"submit\": \"Submit\"},\n    es={\"greeting\": \"\u00a1Hola, bienvenido a mi aplicaci\u00f3n!\", \"submit\": \"Enviar\"},\n    fr={\"greeting\": \"Bonjour, bienvenue dans mon application!\", \"submit\": \"Soumettre\"}\n)\n\nwith gr.Blocks() as demo:\n    # Use the i18n method to translate the greeting\n    gr.Markdown(i18n(\"greeting\"))\n    with gr.Row():\n        input_text = gr.Textbox(label=\"Input\")\n        output_text = gr.Textbox(label=\"Output\")\n    \n    submit_btn = gr.Button(i18n(\"submit\"))\n\n# Pass the i18n instance to the launch method\ndemo.launch(i18n=i18n)\n```\n\n", "heading1": "Setting Up Translations", "source_page_url": "https://gradio.app/guides/internationalization", "source_page_title": "Additional Features - Internationalization Guide"}, {"text": "When you use the `i18n` instance with a translation key, Gradio will show the corresponding translation to users based on their browser's language settings or the language they've selected in your app.\n\nIf a translation isn't available for the user's locale, the system will fall back to English (if available) or display the key itself.\n\n", "heading1": "How It Works", "source_page_url": "https://gradio.app/guides/internationalization", "source_page_title": "Additional Features - Internationalization Guide"}, {"text": "Locale codes should follow the BCP 47 format (e.g., 'en', 'en-US', 'zh-CN'). 
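Regional variants can be supplied alongside base languages. Since hyphenated locale codes such as 'en-US' are not valid Python identifiers, one way to pass them is by unpacking a dictionary (a minimal sketch, assuming `I18n` accepts locale dictionaries as keyword arguments):\n\n```python\nimport gradio as gr\n\n# Hyphenated locale codes can't be written as keyword arguments directly,\n# so unpack them from a dictionary instead.\ni18n = gr.I18n(**{\n    \"en\": {\"greeting\": \"Hello!\"},\n    \"en-US\": {\"greeting\": \"Howdy!\"},\n    \"en-GB\": {\"greeting\": \"Hello there!\"},\n})\n```\n\n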
The `I18n` class will warn you if you use an invalid locale code.\n\n", "heading1": "Valid Locale Codes", "source_page_url": "https://gradio.app/guides/internationalization", "source_page_title": "Additional Features - Internationalization Guide"}, {"text": "The following component properties typically support internationalization:\n\n- `description`\n- `info`\n- `title`\n- `placeholder`\n- `value`\n- `label`\n\nNote that support may vary depending on the component, and some properties might have exceptions where internationalization is not applicable. You can check this by referring to the parameter's typehint: if it contains `I18nData`, the property supports internationalization.", "heading1": "Supported Component Properties", "source_page_url": "https://gradio.app/guides/internationalization", "source_page_title": "Additional Features - Internationalization Guide"}, {"text": "When a user closes their browser tab, Gradio will automatically delete any `gr.State` variables associated with that user session after 60 minutes. If the user connects again within those 60 minutes, no state will be deleted.\n\nYou can control the deletion behavior further with the following two parameters of `gr.State`:\n\n1. `delete_callback` - An arbitrary function that will be called when the variable is deleted. This function must take the state value as input. This function is useful for deleting variables from GPU memory.\n2. `time_to_live` - The number of seconds the state should be stored for after it is created or updated. This will delete variables before the session is closed, so it's useful for clearing state for potentially long running sessions.\n\n", "heading1": "Automatic deletion of `gr.State`", "source_page_url": "https://gradio.app/guides/resource-cleanup", "source_page_title": "Additional Features - Resource Cleanup Guide"}, {"text": "Your Gradio application will save uploaded and generated files to a special directory called the cache directory. Gradio uses a hashing scheme to ensure that duplicate files are not saved to the cache, but over time the size of the cache will grow (especially if your app goes viral \ud83d\ude09).\n\nGradio can periodically clean up the cache for you if you specify the `delete_cache` parameter of `gr.Blocks()`, `gr.Interface()`, or `gr.ChatInterface()`. \nThis parameter is a tuple of the form `(frequency, age)`, both expressed in seconds.\nEvery `frequency` seconds, the temporary files created by this Blocks instance will be deleted if more than `age` seconds have passed since the file was created. \nFor example, setting this to (86400, 86400) will delete temporary files every day if they are older than a day.\nAdditionally, the cache will be deleted entirely when the server restarts.\n\n", "heading1": "Automatic cache cleanup via `delete_cache`", "source_page_url": "https://gradio.app/guides/resource-cleanup", "source_page_title": "Additional Features - Resource Cleanup Guide"}, {"text": "Additionally, Gradio now includes a `Blocks.unload()` event, allowing you to run arbitrary cleanup functions when users disconnect (this does not have a 60-minute delay).\nUnlike other Gradio events, this event does not accept inputs or outputs.\nYou can think of the `unload` event as the opposite of the `load` event.\n\n
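Taken together, a Blocks app might wire up all three cleanup mechanisms like this (a minimal sketch; the timings and print statements are illustrative):\n\n```python\nimport gradio as gr\n\ndef free_resources(state_value):\n    # Called when the state is deleted, e.g. to release GPU memory\n    print(\"Releasing:\", state_value)\n\ndef on_disconnect():\n    print(\"User left the page\")\n\n# Delete cached files older than a day, checking once a day\nwith gr.Blocks(delete_cache=(86400, 86400)) as demo:\n    # This state is dropped after an hour, even mid-session\n    session = gr.State(value={}, time_to_live=3600, delete_callback=free_resources)\n    demo.unload(on_disconnect)\n\ndemo.launch()\n```\n\n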
", "heading1": "The `unload` event", "source_page_url": "https://gradio.app/guides/resource-cleanup", "source_page_title": "Additional Features - Resource Cleanup Guide"}, {"text": "The following demo uses all of these features. When a user visits the page, a special unique directory is created for that user.\nAs the user interacts with the app, images are saved to disk in that special directory.\nWhen the user closes the page, the images created in that session are deleted via the `unload` event.\nThe state and files in the cache are cleaned up automatically as well.\n\n$code_state_cleanup\n$demo_state_cleanup", "heading1": "Putting it all together", "source_page_url": "https://gradio.app/guides/resource-cleanup", "source_page_title": "Additional Features - Resource Cleanup Guide"}, {"text": "1. `GRADIO_SERVER_PORT`\n\n- **Description**: Specifies the port on which the Gradio app will run.\n- **Default**: `7860`\n- **Example**:\n  ```bash\n  export GRADIO_SERVER_PORT=8000\n  ```\n\n2. `GRADIO_SERVER_NAME`\n\n- **Description**: Defines the host name for the Gradio server. To make Gradio accessible from any IP address, set this to `\"0.0.0.0\"`.\n- **Default**: `\"127.0.0.1\"` \n- **Example**:\n  ```bash\n  export GRADIO_SERVER_NAME=\"0.0.0.0\"\n  ```\n\n3. `GRADIO_NUM_PORTS`\n\n- **Description**: Defines the number of ports to try when starting the Gradio server.\n- **Default**: `100`\n- **Example**:\n  ```bash\n  export GRADIO_NUM_PORTS=200\n  ```\n\n4. `GRADIO_ANALYTICS_ENABLED`\n\n- **Description**: Whether Gradio should collect and report basic usage analytics.\n- **Default**: `\"True\"`\n- **Options**: `\"True\"`, `\"False\"`\n- **Example**:\n  ```sh\n  export GRADIO_ANALYTICS_ENABLED=\"True\"\n  ```\n\n5. `GRADIO_DEBUG`\n\n- **Description**: Enables or disables debug mode in Gradio. If debug mode is enabled, the main thread does not terminate, allowing error messages to be printed in environments such as Google Colab.\n- **Default**: `0`\n- **Example**:\n  ```sh\n  export GRADIO_DEBUG=1\n  ```\n\n6. `GRADIO_FLAGGING_MODE`\n\n- **Description**: Controls whether users can flag inputs/outputs in the Gradio interface. See [the Guide on flagging](/guides/using-flagging) for more details.\n- **Default**: `\"manual\"`\n- **Options**: `\"never\"`, `\"manual\"`, `\"auto\"`\n- **Example**:\n  ```sh\n  export GRADIO_FLAGGING_MODE=\"never\"\n  ```\n\n7. `GRADIO_TEMP_DIR`\n\n- **Description**: Specifies the directory where temporary files created by Gradio are stored.\n- **Default**: System default temporary directory\n- **Example**:\n  ```sh\n  export GRADIO_TEMP_DIR=\"/path/to/temp\"\n  ```\n\n8. `GRADIO_ROOT_PATH`\n\n- **Description**: Sets the root path for the Gradio application. Useful if running Gradio [behind a reverse proxy](/guides/running-gradio-on-your-web-server-with-nginx).\n- **Default**: `\"\"`\n- **Example**:\n  ```sh\n  export GRADIO_ROOT_PATH=", "heading1": "Key Environment Variables", "source_page_url": "https://gradio.app/guides/environment-variables", "source_page_title": "Additional Features - Environment Variables Guide"}, {"text": "r the Gradio application. Useful if running Gradio [behind a reverse proxy](/guides/running-gradio-on-your-web-server-with-nginx).\n- **Default**: `\"\"`\n- **Example**:\n  ```sh\n  export GRADIO_ROOT_PATH=\"/myapp\"\n  ```\n\n9. `GRADIO_SHARE`\n\n- **Description**: Enables or disables sharing the Gradio app.\n- **Default**: `\"False\"`\n- **Options**: `\"True\"`, `\"False\"`\n- **Example**:\n  ```sh\n  export GRADIO_SHARE=\"True\"\n  ```\n\n10. `GRADIO_ALLOWED_PATHS`\n\n- **Description**: Sets a list of complete filepaths or parent directories that Gradio is allowed to serve. Must be absolute paths. Warning: if you provide directories, any files in these directories or their subdirectories are accessible to all users of your app. 
Multiple items can be specified by separating items with commas.\n- **Default**: `\"\"`\n- **Example**:\n  ```sh\n  export GRADIO_ALLOWED_PATHS=\"/mnt/sda1,/mnt/sda2\"\n  ```\n\n11. `GRADIO_BLOCKED_PATHS`\n\n- **Description**: Sets a list of complete filepaths or parent directories that Gradio is not allowed to serve (i.e. users of your app are not allowed to access). Must be absolute paths. Warning: takes precedence over `allowed_paths` and all other directories exposed by Gradio by default. Multiple items can be specified by separating items with commas.\n- **Default**: `\"\"`\n- **Example**:\n  ```sh\n  export GRADIO_BLOCKED_PATHS=\"/users/x/gradio_app/admin,/users/x/gradio_app/keys\"\n  ```\n\n12. `FORWARDED_ALLOW_IPS`\n\n- **Description**: This is not a Gradio-specific environment variable, but rather one used in server configurations, specifically `uvicorn`, which Gradio uses internally. This environment variable is useful when deploying applications behind a reverse proxy. It defines a list of IP addresses that are trusted to forward traffic to your application. When set, the application will trust the `X-Forwarded-For` header from these IP addresses to determine the original IP address of the user making the request. This means that if you use the `gr.Request` [objec", "heading1": "Key Environment Variables", "source_page_url": "https://gradio.app/guides/environment-variables", "source_page_title": "Additional Features - Environment Variables Guide"}, {"text": " the application will trust the `X-Forwarded-For` header from these IP addresses to determine the original IP address of the user making the request. This means that if you use the `gr.Request` [object's](https://www.gradio.app/docs/gradio/request) `client.host` property, it will correctly get the user's IP address instead of the IP address of the reverse proxy server. Note that only trusted IP addresses (i.e. the IP addresses of your reverse proxy servers) should be added, as any server with these IP addresses can modify the `X-Forwarded-For` header and spoof the client's IP address.\n- **Default**: `\"127.0.0.1\"`\n- **Example**:\n  ```sh\n  export FORWARDED_ALLOW_IPS=\"127.0.0.1,192.168.1.100\"\n  ```\n\n13. `GRADIO_CACHE_EXAMPLES`\n\n- **Description**: Whether or not to cache examples by default in `gr.Interface()`, `gr.ChatInterface()` or in `gr.Examples()` when no explicit argument is passed for the `cache_examples` parameter. You can set this environment variable to either the string \"true\" or \"false\".\n- **Default**: `\"false\"`\n- **Example**:\n  ```sh\n  export GRADIO_CACHE_EXAMPLES=\"true\"\n  ```\n\n\n14. `GRADIO_CACHE_MODE`\n\n- **Description**: How to cache examples. Only applies if `cache_examples` is set to `True` either via environment variable or by an explicit parameter, AND no explicit argument is passed for the `cache_mode` parameter in `gr.Interface()`, `gr.ChatInterface()` or in `gr.Examples()`. Can be set to either the strings \"lazy\" or \"eager\". If \"lazy\", examples are cached after their first use for all users of the app. If \"eager\", all examples are cached at app launch.\n\n- **Default**: `\"eager\"`\n- **Example**:\n  ```sh\n  export GRADIO_CACHE_MODE=\"lazy\"\n  ```\n\n\n15. `GRADIO_EXAMPLES_CACHE`\n\n- **Description**: If you set `cache_examples=True` in `gr.Interface()`, `gr.ChatInterface()` or in `gr.Examples()`, Gradio will run your prediction function and save the results to disk. 
By default, this is in the `.gradio/cached_examples//` subdirectory within your", "heading1": "Key Environment Variables", "source_page_url": "https://gradio.app/guides/environment-variables", "source_page_title": "Additional Features - Environment Variables Guide"}, {"text": "e()`, `gr.ChatInterface()` or in `gr.Examples()`, Gradio will run your prediction function and save the results to disk. By default, this is in the `.gradio/cached_examples//` subdirectory within your app's working directory. You can customize the location of cached example files created by Gradio by setting the environment variable `GRADIO_EXAMPLES_CACHE` to an absolute path or a path relative to your working directory.\n- **Default**: `\".gradio/cached_examples/\"`\n- **Example**:\n  ```sh\n  export GRADIO_EXAMPLES_CACHE=\"custom_cached_examples/\"\n  ```\n\n\n16. `GRADIO_SSR_MODE`\n\n- **Description**: Controls whether server-side rendering (SSR) is enabled. When enabled, the initial HTML is rendered on the server rather than the client, which can improve initial page load performance and SEO.\n\n- **Default**: `\"False\"` (except on Hugging Face Spaces, where this environment variable is set to `True`)\n- **Options**: `\"True\"`, `\"False\"`\n- **Example**:\n  ```sh\n  export GRADIO_SSR_MODE=\"True\"\n  ```\n\n17. `GRADIO_NODE_SERVER_NAME`\n\n- **Description**: Defines the host name for the Gradio node server. (Only applies if `ssr_mode` is set to `True`.)\n- **Default**: `GRADIO_SERVER_NAME` if it is set, otherwise `\"127.0.0.1\"`\n- **Example**:\n  ```sh\n  export GRADIO_NODE_SERVER_NAME=\"0.0.0.0\"\n  ```\n\n18. `GRADIO_NODE_NUM_PORTS`\n\n- **Description**: Defines the number of ports to try when starting the Gradio node server. (Only applies if `ssr_mode` is set to `True`.)\n- **Default**: `100`\n- **Example**:\n  ```sh\n  export GRADIO_NODE_NUM_PORTS=200\n  ```\n\n19. `GRADIO_RESET_EXAMPLES_CACHE`\n\n- **Description**: If set to \"True\", Gradio will delete and recreate the examples cache directory when the app starts instead of reusing cached examples if they already exist. \n- **Default**: `\"False\"`\n- **Options**: `\"True\"`, `\"False\"`\n- **Example**:\n  ```sh\n  export GRADIO_RESET_EXAMPLES_CACHE=\"True\"\n  ```\n\n20. `GRADIO_CHAT_FLAGGING_MODE`\n\n- **Description**: Controls whether users can flag", "heading1": "Key Environment Variables", "source_page_url": "https://gradio.app/guides/environment-variables", "source_page_title": "Additional Features - Environment Variables Guide"}, {"text": "e\"`\n- **Options**: `\"True\"`, `\"False\"`\n- **Example**:\n  ```sh\n  export GRADIO_RESET_EXAMPLES_CACHE=\"True\"\n  ```\n\n20. `GRADIO_CHAT_FLAGGING_MODE`\n\n- **Description**: Controls whether users can flag messages in `gr.ChatInterface` applications. Similar to `GRADIO_FLAGGING_MODE` but specifically for chat interfaces.\n- **Default**: `\"never\"`\n- **Options**: `\"never\"`, `\"manual\"`\n- **Example**:\n  ```sh\n  export GRADIO_CHAT_FLAGGING_MODE=\"manual\"\n  ```\n\n21. `GRADIO_WATCH_DIRS`\n\n- **Description**: Specifies directories to watch for file changes when running Gradio in development mode. When files in these directories change, the Gradio app will automatically reload. Multiple directories can be specified by separating them with commas. This is primarily used by the `gradio` CLI command for development workflows.\n- **Default**: `\"\"`\n- **Example**:\n  ```sh\n  export GRADIO_WATCH_DIRS=\"/path/to/src,/path/to/templates\"\n  ```\n\n22. 
`GRADIO_VIBE_MODE`\n\n- **Description**: Enables the Vibe editor mode, which provides an in-browser chat that can be used to write or edit your Gradio app using natural language. When enabled, anyone who can access the Gradio endpoint can modify files and run arbitrary code on the host machine. Use with extreme caution in production environments.\n- **Default**: `\"\"`\n- **Options**: Any non-empty string enables the mode\n- **Example**:\n ```sh\n export GRADIO_VIBE_MODE=\"1\"\n ```\n\n\n\n", "heading1": "Key Environment Variables", "source_page_url": "https://gradio.app/guides/environment-variables", "source_page_title": "Additional Features - Environment Variables Guide"}, {"text": "To set environment variables in your terminal, use the `export` command followed by the variable name and its value. For example:\n\n```sh\nexport GRADIO_SERVER_PORT=8000\n```\n\nIf you're using a `.env` file to manage your environment variables, you can add them like this:\n\n```sh\nGRADIO_SERVER_PORT=8000\nGRADIO_SERVER_NAME=\"localhost\"\n```\n\nThen, use a tool like `dotenv` to load these variables when running your application.\n\n\n\n", "heading1": "How to Set Environment Variables", "source_page_url": "https://gradio.app/guides/environment-variables", "source_page_title": "Additional Features - Environment Variables Guide"}, {"text": "**Prerequisite**: Gradio requires [Python 3.10 or higher](https://www.python.org/downloads/).\n\n\nWe recommend installing Gradio using `pip`, which is included by default in Python. Run this in your terminal or command prompt:\n\n```bash\npip install --upgrade gradio\n```\n\n\nTip: It is best to install Gradio in a virtual environment. Detailed installation instructions for all common operating systems are provided here. \n\n", "heading1": "Installation", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": "You can run Gradio in your favorite code editor, Jupyter notebook, Google Colab, or anywhere else you write Python. Let's write your first Gradio app:\n\n\n$code_hello_world_4\n\n\nTip: We shorten the imported name from gradio to gr. This is a widely adopted convention for better readability of code. \n\nNow, run your code. If you've written the Python code in a file named `app.py`, then you would run `python app.py` from the terminal.\n\nThe demo below will open in a browser on [http://localhost:7860](http://localhost:7860) if running from a file. If you are running within a notebook, the demo will appear embedded within the notebook.\n\n$demo_hello_world_4\n\nType your name in the textbox on the left, drag the slider, and then press the Submit button. You should see a friendly greeting on the right.\n\nTip: When developing locally, you can run your Gradio app in hot reload mode, which automatically reloads the Gradio app whenever you make changes to the file. To do this, simply type in gradio before the name of the file instead of python. In the example above, you would type: `gradio app.py` in your terminal. You can also enable vibe mode by using the --vibe flag, e.g. gradio --vibe app.py, which provides an in-browser chat that can be used to write or edit your Gradio app using natural language. Learn more in the Hot Reloading Guide.\n\n\n**Understanding the `Interface` Class**\n\nYou'll notice that in order to make your first demo, you created an instance of the `gr.Interface` class. 
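For reference, a demo of this shape (a textbox and a slider in, a greeting out) can be written in just a few lines. The actual demo code is rendered by the `$code_hello_world_4` placeholder above; a minimal sketch looks like this:\n\n```python\nimport gradio as gr\n\ndef greet(name, intensity):\n    return \"Hello, \" + name + \"!\" * int(intensity)\n\ndemo = gr.Interface(fn=greet, inputs=[\"textbox\", \"slider\"], outputs=\"textbox\")\ndemo.launch()\n```\n\n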
The `Interface` class is designed to create demos for machine learning models that accept one or more inputs and return one or more outputs. \n\nThe `Interface` class has three core arguments:\n\n- `fn`: the function to wrap a user interface (UI) around\n- `inputs`: the Gradio component(s) to use for the input. The num", "heading1": "Building Your First Demo", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": "turn one or more outputs. \n\nThe `Interface` class has three core arguments:\n\n- `fn`: the function to wrap a user interface (UI) around\n- `inputs`: the Gradio component(s) to use for the input. The number of components should match the number of arguments in your function.\n- `outputs`: the Gradio component(s) to use for the output. The number of components should match the number of return values from your function.\n\nThe `fn` argument is very flexible -- you can pass *any* Python function that you want to wrap with a UI. In the example above, we saw a relatively simple function, but the function could be anything from a music generator to a tax calculator to the prediction function of a pretrained machine learning model.\n\nThe `inputs` and `outputs` arguments take one or more Gradio components. As we'll see, Gradio includes more than [30 built-in components](https://www.gradio.app/docs/gradio/introduction) (such as the `gr.Textbox()`, `gr.Image()`, and `gr.HTML()` components) that are designed for machine learning applications. \n\nTip: For the `inputs` and `outputs` arguments, you can pass in the name of these components as a string (`\"textbox\"`) or an instance of the class (`gr.Textbox()`).\n\nIf your function accepts more than one argument, as is the case above, pass a list of input components to `inputs`, with each input component corresponding to one of the arguments of the function, in order. The same holds true if your function returns more than one value: simply pass in a list of components to `outputs`. This flexibility makes the `Interface` class a very powerful way to create demos.\n\nWe'll dive deeper into the `gr.Interface` in our series on [building Interfaces](https://www.gradio.app/main/guides/the-interface-class).\n\n", "heading1": "Building Your First Demo", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": "What good is a beautiful demo if you can't share it? Gradio lets you easily share a machine learning demo without having to worry about the hassle of hosting on a web server. Simply set `share=True` in `launch()`, and a publicly accessible URL will be created for your demo. 
Let's revisit our example demo, but change the last line as follows:\n\n```python\nimport gradio as gr\n\ndef greet(name):\n    return \"Hello \" + name + \"!\"\n\ndemo = gr.Interface(fn=greet, inputs=\"textbox\", outputs=\"textbox\")\n    \ndemo.launch(share=True)  # Share your demo with just 1 extra parameter \ud83d\ude80\n```\n\nWhen you run this code, a public URL will be generated for your demo in a matter of seconds, something like:\n\n\ud83d\udc49   `https://a23dsf231adb.gradio.live`\n\nNow, anyone around the world can try your Gradio demo from their browser, while the machine learning model and all computation continues to run locally on your computer.\n\nTo learn more about sharing your demo, read our dedicated guide on [sharing your Gradio application](https://www.gradio.app/guides/sharing-your-app).\n\n\n", "heading1": "Sharing Your Demo", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": "So far, we've been discussing the `Interface` class, which is a high-level class that lets you build demos quickly with Gradio. But what else does Gradio include?\n\nCustom Demos with `gr.Blocks`\n\nGradio offers a low-level approach for designing web apps with more customizable layouts and data flows with the `gr.Blocks` class. Blocks supports things like controlling where components appear on the page, handling multiple data flows and more complex interactions (e.g. outputs can serve as inputs to other functions), and updating properties/visibility of components based on user interaction \u2014 still all in Python. \n\nYou can build very custom and complex applications using `gr.Blocks()`. For example, the popular image generation [Automatic1111 Web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) is built using Gradio Blocks. We dive deeper into the `gr.Blocks` in our series on [building with Blocks](https://www.gradio.app/guides/blocks-and-event-listeners).\n\nChatbots with `gr.ChatInterface`\n\nGradio includes another high-level class, `gr.ChatInterface`, which is specifically designed to create Chatbot UIs. Similar to `Interface`, you supply a function and Gradio creates a fully working Chatbot UI. If you're interested in creating a chatbot, you can jump straight to [our dedicated guide on `gr.ChatInterface`](https://www.gradio.app/guides/creating-a-chatbot-fast).\n\nThe Gradio Python & JavaScript Ecosystem\n\nThat's the gist of the core `gradio` Python library, but Gradio is actually so much more! It's an entire ecosystem of Python and JavaScript libraries that let you build machine learning applications, or query them programmatically, in Python or JavaScript. 
Here are other related parts of the Gradio ecosystem:\n\n* [Gradio Python Client](https://www.gradio.app/guides/getting-started-with-the-python-client) (`gradio_client`): query any Gradio app programmatically in Python.\n* [Gradio JavaScript Client](https://www.gradio.app/guides/getting-started-with-t", "heading1": "An Overview of Gradio", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": "app/guides/getting-started-with-the-python-client) (`gradio_client`): query any Gradio app programmatically in Python.\n* [Gradio JavaScript Client](https://www.gradio.app/guides/getting-started-with-the-js-client) (`@gradio/client`): query any Gradio app programmatically in JavaScript.\n* [Gradio-Lite](https://www.gradio.app/guides/gradio-lite) (`@gradio/lite`): write Gradio apps in Python that run entirely in the browser (no server needed!), thanks to Pyodide. \n* [Hugging Face Spaces](https://huggingface.co/spaces): the most popular place to host Gradio applications \u2014 for free!\n\n", "heading1": "An Overview of Gradio", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": "Keep learning about Gradio sequentially using the Gradio Guides, which include explanations as well as example code and embedded interactive demos. Next up: [let's dive deeper into the Interface class](https://www.gradio.app/guides/the-interface-class).\n\nOr, if you already know the basics and are looking for something specific, you can search the more [technical API documentation](https://www.gradio.app/docs/).\n\n\n", "heading1": "What's Next?", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": "You can also build Gradio applications without writing any code. Simply type `gradio sketch` into your terminal to open up an editor that lets you define and modify Gradio components, adjust their layouts, and add events, all through a web editor. Or [use this hosted version of Gradio Sketch, running on Hugging Face Spaces](https://huggingface.co/spaces/aliabid94/Sketch).", "heading1": "Gradio Sketch", "source_page_url": "https://gradio.app/guides/quickstart", "source_page_title": "Getting Started - Quickstart Guide"}, {"text": "The Model Context Protocol (MCP) standardizes how applications provide context to LLMs. It allows Claude to interact with external tools, such as image generators, file systems, or APIs.\n\n", "heading1": "What is MCP?", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": "- Python 3.10+\n- An Anthropic API key\n- Basic understanding of Python programming\n\n", "heading1": "Prerequisites", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": "First, install the required packages:\n\n```bash\npip install gradio anthropic mcp\n```\n\nCreate a `.env` file in your project directory and add your Anthropic API key:\n\n```\nANTHROPIC_API_KEY=your_api_key_here\n```\n\n", "heading1": "Setup", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": "The server provides tools that Claude can use. 
In this example, we'll create a server that generates images through [a HuggingFace space](https://huggingface.co/spaces/ysharma/SanaSprint).\n\nCreate a file named `gradio_mcp_server.py`:\n\n```python\nfrom mcp.server.fastmcp import FastMCP\nimport json\nimport sys\nimport io\nimport time\nfrom gradio_client import Client\n\nsys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8', errors='replace')\nsys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding='utf-8', errors='replace')\n\nmcp = FastMCP(\"huggingface_spaces_image_display\")\n\n@mcp.tool()\nasync def generate_image(prompt: str, width: int = 512, height: int = 512) -> str:\n \"\"\"Generate an image using SanaSprint model.\n \n Args:\n prompt: Text prompt describing the image to generate\n width: Image width (default: 512)\n height: Image height (default: 512)\n \"\"\"\n client = Client(\"https://ysharma-sanasprint.hf.space/\")\n \n try:\n result = client.predict(\n prompt,\n \"0.6B\",\n 0,\n True,\n width,\n height,\n 4.0,\n 2,\n api_name=\"/infer\"\n )\n \n if isinstance(result, list) and len(result) >= 1:\n image_data = result[0]\n if isinstance(image_data, dict) and \"url\" in image_data:\n return json.dumps({\n \"type\": \"image\",\n \"url\": image_data[\"url\"],\n \"message\": f\"Generated image for prompt: {prompt}\"\n })\n \n return json.dumps({\n \"type\": \"error\",\n \"message\": \"Failed to generate image\"\n })\n \n except Exception as e:\n return json.dumps({\n \"type\": \"error\",\n \"message\": f\"Error generating image: {str(e)}\"\n })\n\nif __name__ == \"__main__\":\n mcp.run(transport='stdio')\n```\n\nWhat this server does:\n\n1. It creates an MCP server that exposes a `gene", "heading1": "Part 1: Building the MCP Server", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": " \"message\": f\"Error generating image: {str(e)}\"\n })\n\nif __name__ == \"__main__\":\n mcp.run(transport='stdio')\n```\n\nWhat this server does:\n\n1. It creates an MCP server that exposes a `generate_image` tool\n2. The tool connects to the SanaSprint model hosted on HuggingFace Spaces\n3. It handles the asynchronous nature of image generation by polling for results\n4. 
When an image is ready, it returns the URL in a structured JSON format\n\n", "heading1": "Part 1: Building the MCP Server", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": "Now let's create a Gradio chat interface as MCP Client that connects Claude to our MCP server.\n\nCreate a file named `app.py`:\n\n```python\nimport asyncio\nimport os\nimport json\nfrom typing import List, Dict, Any, Union\nfrom contextlib import AsyncExitStack\n\nimport gradio as gr\nfrom gradio.components.chatbot import ChatMessage\nfrom mcp import ClientSession, StdioServerParameters\nfrom mcp.client.stdio import stdio_client\nfrom anthropic import Anthropic\nfrom dotenv import load_dotenv\n\nload_dotenv()\n\nloop = asyncio.new_event_loop()\nasyncio.set_event_loop(loop)\n\nclass MCPClientWrapper:\n def __init__(self):\n self.session = None\n self.exit_stack = None\n self.anthropic = Anthropic()\n self.tools = []\n \n def connect(self, server_path: str) -> str:\n return loop.run_until_complete(self._connect(server_path))\n \n async def _connect(self, server_path: str) -> str:\n if self.exit_stack:\n await self.exit_stack.aclose()\n \n self.exit_stack = AsyncExitStack()\n \n is_python = server_path.endswith('.py')\n command = \"python\" if is_python else \"node\"\n \n server_params = StdioServerParameters(\n command=command,\n args=[server_path],\n env={\"PYTHONIOENCODING\": \"utf-8\", \"PYTHONUNBUFFERED\": \"1\"}\n )\n \n stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))\n self.stdio, self.write = stdio_transport\n \n self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write))\n await self.session.initialize()\n \n response = await self.session.list_tools()\n self.tools = [{ \n \"name\": tool.name,\n \"description\": tool.description,\n \"input_schema\": tool.inputSchema\n } for tool in response.tools]\n \n tool_names = [tool[\"name\"] for tool in self.tools]\n return f\"Connected to MCP server.", "heading1": "Part 2: Building the MCP Client with Gradio", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": "iption,\n \"input_schema\": tool.inputSchema\n } for tool in response.tools]\n \n tool_names = [tool[\"name\"] for tool in self.tools]\n return f\"Connected to MCP server. 
Available tools: {', '.join(tool_names)}\"\n \n def process_message(self, message: str, history: List[Union[Dict[str, Any], ChatMessage]]) -> tuple:\n if not self.session:\n return history + [\n {\"role\": \"user\", \"content\": message}, \n {\"role\": \"assistant\", \"content\": \"Please connect to an MCP server first.\"}\n ], gr.Textbox(value=\"\")\n \n new_messages = loop.run_until_complete(self._process_query(message, history))\n return history + [{\"role\": \"user\", \"content\": message}] + new_messages, gr.Textbox(value=\"\")\n \n async def _process_query(self, message: str, history: List[Union[Dict[str, Any], ChatMessage]]):\n claude_messages = []\n for msg in history:\n if isinstance(msg, ChatMessage):\n role, content = msg.role, msg.content\n else:\n role, content = msg.get(\"role\"), msg.get(\"content\")\n \n if role in [\"user\", \"assistant\", \"system\"]:\n claude_messages.append({\"role\": role, \"content\": content})\n \n claude_messages.append({\"role\": \"user\", \"content\": message})\n \n response = self.anthropic.messages.create(\n model=\"claude-3-5-sonnet-20241022\",\n max_tokens=1000,\n messages=claude_messages,\n tools=self.tools\n )\n\n result_messages = []\n \n for content in response.content:\n if content.type == 'text':\n result_messages.append({\n \"role\": \"assistant\", \n \"content\": content.text\n })\n \n elif content.type == 'tool_use':\n tool_name = content.name\n tool_args = content.input\n ", "heading1": "Part 2: Building the MCP Client with Gradio", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": "ntent\": content.text\n })\n \n elif content.type == 'tool_use':\n tool_name = content.name\n tool_args = content.input\n \n result_messages.append({\n \"role\": \"assistant\",\n \"content\": f\"I'll use the {tool_name} tool to help answer your question.\",\n \"metadata\": {\n \"title\": f\"Using tool: {tool_name}\",\n \"log\": f\"Parameters: {json.dumps(tool_args, ensure_ascii=True)}\",\n \"status\": \"pending\",\n \"id\": f\"tool_call_{tool_name}\"\n }\n })\n \n result_messages.append({\n \"role\": \"assistant\",\n \"content\": \"```json\\n\" + json.dumps(tool_args, indent=2, ensure_ascii=True) + \"\\n```\",\n \"metadata\": {\n \"parent_id\": f\"tool_call_{tool_name}\",\n \"id\": f\"params_{tool_name}\",\n \"title\": \"Tool Parameters\"\n }\n })\n \n result = await self.session.call_tool(tool_name, tool_args)\n \n if result_messages and \"metadata\" in result_messages[-2]:\n result_messages[-2][\"metadata\"][\"status\"] = \"done\"\n \n result_messages.append({\n \"role\": \"assistant\",\n \"content\": \"Here are the results from the tool:\",\n \"metadata\": {\n \"title\": f\"Tool Result for {tool_name}\",\n \"status\": \"done\",\n \"id\": f\"result_{tool_name}\"\n }\n })\n \n result_content = result.content\n if isinstance(result_content, list):\n result_content = \"\\n\".join(str(item) for item in re", "heading1": "Part 2: Building the MCP Client with Gradio", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": " })\n \n result_content = result.content\n if isinstance(result_content, list):\n result_content = \"\\n\".join(str(item) for item in result_content)\n \n try:\n result_json = json.loads(result_content)\n if isinstance(result_json, dict) and \"type\" in result_json:\n if result_json[\"type\"] == \"image\" and \"url\" in 
result_json:\n result_messages.append({\n \"role\": \"assistant\",\n \"content\": {\"path\": result_json[\"url\"], \"alt_text\": result_json.get(\"message\", \"Generated image\")},\n \"metadata\": {\n \"parent_id\": f\"result_{tool_name}\",\n \"id\": f\"image_{tool_name}\",\n \"title\": \"Generated Image\"\n }\n })\n else:\n result_messages.append({\n \"role\": \"assistant\",\n \"content\": \"```\\n\" + result_content + \"\\n```\",\n \"metadata\": {\n \"parent_id\": f\"result_{tool_name}\",\n \"id\": f\"raw_result_{tool_name}\",\n \"title\": \"Raw Output\"\n }\n })\n except:\n result_messages.append({\n \"role\": \"assistant\",\n \"content\": \"```\\n\" + result_content + \"\\n```\",\n \"metadata\": {\n \"parent_id\": f\"result_{tool_name}\",\n \"id\": f\"raw_result_{tool_name}\",\n \"title\": \"Raw Output\"\n }\n })\n ", "heading1": "Part 2: Building the MCP Client with Gradio", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": " \"parent_id\": f\"result_{tool_name}\",\n \"id\": f\"raw_result_{tool_name}\",\n \"title\": \"Raw Output\"\n }\n })\n \n claude_messages.append({\"role\": \"user\", \"content\": f\"Tool result for {tool_name}: {result_content}\"})\n next_response = self.anthropic.messages.create(\n model=\"claude-3-5-sonnet-20241022\",\n max_tokens=1000,\n messages=claude_messages,\n )\n \n if next_response.content and next_response.content[0].type == 'text':\n result_messages.append({\n \"role\": \"assistant\",\n \"content\": next_response.content[0].text\n })\n\n return result_messages\n\nclient = MCPClientWrapper()\n\ndef gradio_interface():\n with gr.Blocks(title=\"MCP Weather Client\") as demo:\n gr.Markdown(\"MCP Weather Assistant\")\n gr.Markdown(\"Connect to your MCP weather server and chat with the assistant\")\n \n with gr.Row(equal_height=True):\n with gr.Column(scale=4):\n server_path = gr.Textbox(\n label=\"Server Script Path\",\n placeholder=\"Enter path to server script (e.g., weather.py)\",\n value=\"gradio_mcp_server.py\"\n )\n with gr.Column(scale=1):\n connect_btn = gr.Button(\"Connect\")\n \n status = gr.Textbox(label=\"Connection Status\", interactive=False)\n \n chatbot = gr.Chatbot(\n value=[], \n height=500,\n type=\"messages\",\n show_copy_button=True,\n avatar_images=(\"\ud83d\udc64\", \"\ud83e\udd16\")\n )\n \n with gr.Row(equal_height=True):\n msg = gr.Textbox(\n label=\"Your Question\",\n placeholder=\"Ask about weather or alerts (e.g., What's the weath", "heading1": "Part 2: Building the MCP Client with Gradio", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": ")\n \n with gr.Row(equal_height=True):\n msg = gr.Textbox(\n label=\"Your Question\",\n placeholder=\"Ask about weather or alerts (e.g., What's the weather in New York?)\",\n scale=4\n )\n clear_btn = gr.Button(\"Clear Chat\", scale=1)\n \n connect_btn.click(client.connect, inputs=server_path, outputs=status)\n msg.submit(client.process_message, [msg, chatbot], [chatbot, msg])\n clear_btn.click(lambda: [], None, chatbot)\n \n return demo\n\nif __name__ == \"__main__\":\n if not os.getenv(\"ANTHROPIC_API_KEY\"):\n print(\"Warning: ANTHROPIC_API_KEY not found in environment. 
Please set it in your .env file.\")\n    \n    interface = gradio_interface()\n    interface.launch(debug=True)\n```\n\nWhat this MCP Client does:\n\n- Creates a friendly Gradio chat interface for user interaction\n- Connects to the MCP server you specify\n- Handles conversation history and message formatting\n- Makes calls to the Claude API with tool definitions\n- Processes tool usage requests from Claude\n- Displays images and other tool outputs in the chat\n- Sends tool results back to Claude for interpretation\n\n", "heading1": "Part 2: Building the MCP Client with Gradio", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": "To run your MCP application:\n\n- Start a terminal window and run the MCP Client:\n  ```bash\n  python app.py\n  ```\n- Open the Gradio interface at the URL shown (typically http://127.0.0.1:7860)\n- In the Gradio interface, you'll see a field for the MCP Server path. It should default to `gradio_mcp_server.py`.\n- Click \"Connect\" to establish the connection to the MCP server.\n- You should see a message indicating the server connection was successful.\n\n", "heading1": "Running the Application", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": "Now you can chat with Claude and it will be able to generate images based on your descriptions.\n\nTry prompts like:\n- \"Can you generate an image of a mountain landscape at sunset?\"\n- \"Create an image of a cool tabby cat\"\n- \"Generate a picture of a panda wearing sunglasses\"\n\nClaude will recognize these as image generation requests and automatically use the `generate_image` tool from your MCP server.\n\n\n", "heading1": "Example Usage", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": "Here's the high-level flow of what happens during a chat session:\n\n1. Your prompt enters the Gradio interface\n2. The client forwards your prompt to Claude\n3. Claude analyzes the prompt and decides to use the `generate_image` tool\n4. The client sends the tool call to the MCP server\n5. The server calls the external image generation API\n6. The image URL is returned to the client\n7. The client sends the image URL back to Claude\n8. Claude provides a response that references the generated image\n9. The Gradio chat interface displays both Claude's response and the image\n\n\n", "heading1": "How it Works", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": "Now that you have a working MCP system, here are some ideas to extend it:\n\n- Add more tools to your server (see the sketch below)\n- Improve error handling \n- Add private Hugging Face Spaces with authentication for secure tool access\n- Create custom tools that connect to your own APIs or services\n- Implement streaming responses for better user experience\n\n
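For instance, adding a second tool to the server from Part 1 is just another decorated function (a hypothetical sketch that reuses the `mcp` instance and `json` import from `gradio_mcp_server.py`):\n\n```python\n@mcp.tool()\nasync def count_words(text: str) -> str:\n    \"\"\"Count the number of words in a piece of text.\n    \n    Args:\n        text: The text to analyze.\n    \"\"\"\n    # Return structured JSON, matching the convention used by generate_image\n    return json.dumps({\"type\": \"text\", \"word_count\": len(text.split())})\n```\n\n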
", "heading1": "Next Steps", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": "Congratulations! You've successfully built an MCP Client and Server that allows Claude to generate images based on text prompts. This is just the beginning of what you can do with Gradio and MCP. With this foundation, you can build complex AI applications that use Claude or any other powerful LLM to interact with virtually any external tool or service.\n\nRead our other Guide on using [Gradio apps as MCP Servers](./building-mcp-server-with-gradio).\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/building-an-mcp-client-with-gradio", "source_page_title": "Mcp - Building An Mcp Client With Gradio Guide"}, {"text": "As of version 5.36.0, Gradio comes with a built-in MCP server that can upload files to a running Gradio application. In the `View API` page of the server, you should see the following code snippet if any of the tools require file inputs:\n\n\n\nThe command to start the MCP server takes two arguments:\n\n- The URL (or Hugging Face space id) of the gradio application to upload the files to. In this case, `http://127.0.0.1:7860`.\n- The local directory on your computer from which the server is allowed to upload files (``). For security, please make this directory as narrow as possible to prevent unintended file uploads.\n\nAs stated in the image, you need to install [uv](https://docs.astral.sh/uv/getting-started/installation/) (a Python package manager that can run Python scripts) before connecting from your MCP client. \n\nIf you have gradio installed locally and you don't want to install uv, you can replace the `uvx` command with the path to the gradio binary. It should look like this:\n\n```json\n\"upload-files\": {\n    \"command\": \"\",\n    \"args\": [\n        \"upload-mcp\",\n        \"http://localhost:7860/\",\n        \"/Users/freddyboulton/Pictures\"\n    ]\n}\n```\n\nAfter connecting to the upload server, your LLM agent will know when to upload files for you automatically!\n\n\n\n", "heading1": "Using the File Upload MCP Server", "source_page_url": "https://gradio.app/guides/file-upload-mcp", "source_page_title": "Mcp - File Upload Mcp Guide"}, {"text": "In this guide, we've covered how you can connect to the Upload File MCP Server so that your agent can upload files before using Gradio MCP servers. Remember to set the `` as small as possible to prevent unintended file uploads!\n\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/file-upload-mcp", "source_page_title": "Mcp - File Upload Mcp Guide"}, {"text": "An MCP (Model Context Protocol) server is a standardized way to expose tools so that they can be used by LLMs. A tool can provide an LLM functionality that it does not have natively, such as the ability to generate images or calculate the prime factors of a number. \n\n", "heading1": "What is an MCP Server?", "source_page_url": "https://gradio.app/guides/building-mcp-server-with-gradio", "source_page_title": "Mcp - Building Mcp Server With Gradio Guide"}, {"text": "LLMs are famously not great at counting the number of letters in a word (e.g. the number of \"r\"s in \"strawberry\"). But what if we equip them with a tool to help? Let's start by writing a simple Gradio app that counts the number of letters in a word or phrase:\n\n$code_letter_counter\n\n
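The placeholder above renders the actual demo code; for reference, a minimal version of such an app might look like this (a sketch consistent with the description that follows, not the literal demo):\n\n```python\nimport gradio as gr\n\ndef letter_counter(word: str, letter: str) -> int:\n    \"\"\"Count the number of occurrences of a letter in a word or phrase.\n    \n    Args:\n        word: The word or phrase to search.\n        letter: The letter to count.\n    \"\"\"\n    return word.lower().count(letter.lower())\n\ndemo = gr.Interface(\n    letter_counter,\n    [gr.Textbox(value=\"strawberry\"), gr.Textbox(value=\"r\")],\n    gr.Number(),\n)\ndemo.launch(mcp_server=True)\n```\n\n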
Notice that we have: (1) included a detailed docstring for our function, and (2) set `mcp_server=True` in `.launch()`. This is all that's needed for your Gradio app to serve as an MCP server! Now, when you run this app, it will:\n\n1. Start the regular Gradio web interface\n2. Start the MCP server\n3. Print the MCP server URL in the console\n\nThe MCP server will be accessible at:\n```\nhttp://your-server:port/gradio_api/mcp/sse\n```\n\nGradio automatically converts the `letter_counter` function into an MCP tool that can be used by LLMs. The docstring of the function and the type hints of arguments will be used to generate the description of the tool and its parameters. The name of the function will be used as the name of your tool. Any initial values you provide to your input components (e.g. \"strawberry\" and \"r\" in the `gr.Textbox` components above) will be used as the default values if your LLM doesn't specify a value for that particular input parameter.\n\nNow, all you need to do is add this URL endpoint to your MCP Client (e.g. Claude Desktop, Cursor, or Cline), which typically means pasting this config in the settings:\n\n```\n{\n  \"mcpServers\": {\n    \"gradio\": {\n      \"url\": \"http://your-server:port/gradio_api/mcp/sse\"\n    }\n  }\n}\n```\n\n(By the way, you can find the exact config to copy-paste by going to the \"View API\" link in the footer of your Gradio app, and then clicking on \"MCP\").\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api-mcp.png)\n\n", "heading1": "Example: Counting Letters in a Word", "source_page_url": "https://gradio.app/guides/building-mcp-server-with-gradio", "source_page_title": "Mcp - Building Mcp Server With Gradio Guide"}, {"text": "1. **Tool Conversion**: Each API endpoint in your Gradio app is automatically converted into an MCP tool with a corresponding name, description, and input schema. To view the tools and schemas, visit http://your-server:port/gradio_api/mcp/schema or go to the \"View API\" link in the footer of your Gradio app, and then click on \"MCP\".\n\n\n2. **Environment variable support**. There are two ways to enable the MCP server functionality:\n\n* Using the `mcp_server` parameter, as shown above:\n  ```python\n  demo.launch(mcp_server=True)\n  ```\n\n* Using environment variables:\n  ```bash\n  export GRADIO_MCP_SERVER=True\n  ```\n\n3. **File Handling**: The Gradio MCP server automatically handles file data conversions, including:\n   - Processing image files and returning them in the correct format\n   - Managing temporary file storage\n\n   By default, the Gradio MCP server accepts input images and files as full URLs (\"http://...\" or \"https://...\"). For convenience, an additional STDIO-based MCP server is also generated, which can be used to upload files to any remote Gradio app and which returns a URL that can be used for subsequent tool calls.\n\n4. **Hosted MCP Servers on \ud83e\udd17 Spaces**: You can publish your Gradio application for free on Hugging Face Spaces, which will allow you to have a free hosted MCP server. Here's an example of such a Space: https://huggingface.co/spaces/abidlabs/mcp-tools. Notice that you can add this config to your MCP Client to start using the tools from this Space immediately:\n\n```\n{\n  \"mcpServers\": {\n    \"gradio\": {\n      \"url\": \"https://abidlabs-mcp-tools.hf.space/gradio_api/mcp/sse\"\n    }\n  }\n}\n```\n\n\n\n\n", "heading1": "Key features of the Gradio <> MCP Integration", "source_page_url": "https://gradio.app/guides/building-mcp-server-with-gradio", "source_page_title": "Mcp - Building Mcp Server With Gradio Guide"}, {"text": "If there's an existing Space that you'd like to use as an MCP server, you'll need to do three things:\n\n1. 
First, [duplicate the Space](https://huggingface.co/docs/hub/en/spaces-more-ways-to-createduplicating-a-space) if it is not your own Space. This will allow you to make changes to the app. If the Space requires a GPU, set the hardware of the duplicated Space to be the same as the original Space. You can make it either a public Space or a private Space, since it is possible to use either as an MCP server, as described below.\n2. Then, add docstrings to the functions that you'd like the LLM to be able to call as a tool. The docstring should be in the same format as the example code above.\n3. Finally, add `mcp_server=True` in `.launch()`.\n\nThat's it!\n\n", "heading1": "Converting an Existing Space", "source_page_url": "https://gradio.app/guides/building-mcp-server-with-gradio", "source_page_title": "Mcp - Building Mcp Server With Gradio Guide"}, {"text": "You can use either a public Space or a private Space as an MCP server. If you'd like to use a private Space as an MCP server (or a ZeroGPU Space with your own quota), then you will need to provide your [Hugging Face token](https://huggingface.co/settings/token) when you make your request. To do this, simply add it as a header in your config like this:\n\n```\n{\n  \"mcpServers\": {\n    \"gradio\": {\n      \"url\": \"https://abidlabs-mcp-tools.hf.space/gradio_api/mcp/sse\",\n      \"headers\": {\n        \"Authorization\": \"Bearer \"\n      }\n    }\n  }\n}\n```\n\n", "heading1": "Private Spaces", "source_page_url": "https://gradio.app/guides/building-mcp-server-with-gradio", "source_page_title": "Mcp - Building Mcp Server With Gradio Guide"}, {"text": "You may wish to authenticate users more precisely or let them provide other kinds of credentials or tokens in order to provide a custom experience for different users. \n\nGradio allows you to access the underlying `starlette.Request` that has made the tool call, which means that you can access headers, originating IP address, or any other information that is part of the network request. To do this, simply add a parameter of type `gr.Request` to your function, and Gradio will automatically inject the request object as the parameter.\n\nHere's an example:\n\n```py\nimport gradio as gr\n\ndef echo_headers(x, request: gr.Request):\n    return str(dict(request.headers))\n\ngr.Interface(echo_headers, \"textbox\", \"textbox\").launch(mcp_server=True)\n```\n\nThis MCP server will simply ignore the user's input and echo back all of the headers from a user's request. One can build more complex apps using the same idea. See the [docs on `gr.Request`](https://www.gradio.app/main/docs/gradio/request) for more information (note that only the core Starlette attributes of the `gr.Request` object will be present; attributes such as Gradio's `.session_hash` will not be present).\n\nUsing the gr.Header class\n\nA common pattern in MCP server development is to use authentication headers to call services on behalf of your users. Instead of using a `gr.Request` object like in the example above, you can use a `gr.Header` argument. Gradio will automatically extract that header from the incoming request (if it exists) and pass it to your function.\n\nIn the example below, the `X-API-Token` header is extracted from the incoming request and passed in as the `x_api_token` argument to `make_api_request_on_behalf_of_user`.\n\nThe benefit of using `gr.Header` is that the MCP connection docs will automatically display the headers you need to supply when connecting to the server! 
See the image below:\n\n```python\nimport gradio as gr\n\ndef make_api_request_on_behalf_of_user(prompt: str, x_api_token: gr.Header):\n    \"\"\"M", "heading1": "Authentication and Credentials", "source_page_url": "https://gradio.app/guides/building-mcp-server-with-gradio", "source_page_title": "Mcp - Building Mcp Server With Gradio Guide"}, {"text": "the headers you need to supply when connecting to the server! See the image below:\n\n```python\nimport gradio as gr\n\ndef make_api_request_on_behalf_of_user(prompt: str, x_api_token: gr.Header):\n    \"\"\"Make a request to everyone's favorite API.\n    Args:\n        prompt: The prompt to send to the API.\n    Returns:\n        The response from the API.\n    Raises:\n        AssertionError: If the API token is not valid.\n    \"\"\"\n    return \"Hello from the API\" if not x_api_token else \"Hello from the API with token!\"\n\n\ndemo = gr.Interface(\n    make_api_request_on_behalf_of_user,\n    [\n        gr.Textbox(label=\"Prompt\"),\n    ],\n    gr.Textbox(label=\"Response\"),\n)\n\ndemo.launch(mcp_server=True)\n```\n\n![MCP Header Connection Page](https://github.com/user-attachments/assets/e264eedf-a91a-476b-880d-5be0d5934134)\n\nSending Progress Updates\n\nThe Gradio MCP server automatically sends progress updates to your MCP Client based on the queue in the Gradio application. If you'd like to send custom progress updates, you can do so using the same mechanism as you would use to display progress updates in the UI of your Gradio app: by using the `gr.Progress` class!\n\nHere's an example of how to do this:\n\n$code_mcp_progress\n\n[Here are the docs](https://www.gradio.app/docs/gradio/progress) for the `gr.Progress` class, which can also automatically track `tqdm` calls.\n\n\n", "heading1": "Authentication and Credentials", "source_page_url": "https://gradio.app/guides/building-mcp-server-with-gradio", "source_page_title": "Mcp - Building Mcp Server With Gradio Guide"}, {"text": "Gradio automatically sets the tool name based on the name of your function, and the description from the docstring of your function. But you may want to change how the description appears to your LLM. You can do this by using the `api_description` parameter in `Interface`, `ChatInterface`, or any event listener. This parameter takes three different kinds of values:\n\n* `None` (default): the tool description is automatically created from the docstring of the function (or its parent's docstring if it does not have a docstring but inherits from a method that does.)\n* `False`: no tool description appears to the LLM.\n* `str`: an arbitrary string to use as the tool description.\n\nIn addition to modifying the tool descriptions, you can also toggle which tools appear to the LLM. You can do this by setting the `show_api` parameter, which is by default `True`. Setting it to `False` hides the endpoint from the API docs and from the MCP server. If you expose multiple tools, users of your app will also be able to toggle which tools they'd like to add to their MCP server by checking boxes in the \"view MCP or API\" panel.\n\nHere's an example that shows the `api_description` and `show_api` parameters in action:\n\n$code_mcp_tools\n\n
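The placeholder above renders the full demo; a minimal sketch of the idea (a hypothetical endpoint, showing both parameters on an `Interface`) might look like this:\n\n```python\nimport gradio as gr\n\ndef add(a: float, b: float) -> float:\n    \"\"\"Add two numbers.\"\"\"\n    return a + b\n\ndemo = gr.Interface(\n    add, [\"number\", \"number\"], \"number\",\n    # Overrides the docstring as the tool description shown to the LLM;\n    # show_api=False would instead hide this endpoint from the API docs\n    # and the MCP server entirely.\n    api_description=\"Add two numbers and return their sum.\",\n)\ndemo.launch(mcp_server=True)\n```\n\n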
This works well for functions that directly update the UI, but may not work if you wish to expose a \"pure logic\" function that should return raw data (e.g. a JSON object) without directly causing a UI update.\n\nIn order to expose such an MCP tool, you can create a pure Gradio API endpoint using `gr.api` (see [full docs here](https://www.gradio.app/main/docs/gradio/api)). Here's an example of creating an MCP tool that slices a list:\n\n$code_mcp_tool_only\n\nNote that if you use this approach, your function signature must be fully typed, including the return value, as these signatures are used to determine the typing information for the MCP tool.\n\n\n", "heading1": "Adding MCP-Only Tools", "source_page_url": "https://gradio.app/guides/building-mcp-server-with-gradio", "source_page_title": "Mcp - Building Mcp Server With Gradio Guide"}, {"text": "In some cases, you may decide not to use Gradio's built-in integration and instead manually create a FastMCP server that calls a Gradio app. This approach is useful when you want to:\n\n- Store state / identify users between calls instead of treating every tool call completely independently\n- Start the Gradio app MCP server when a tool is called (if you are running multiple Gradio apps locally and want to save memory / GPU)\n\nThis is very doable thanks to the [Gradio Python Client](https://www.gradio.app/guides/getting-started-with-the-python-client) and the [MCP Python SDK](https://github.com/modelcontextprotocol/python-sdk)'s `FastMCP` class. Here's an example of creating a custom MCP server that connects to various Gradio apps hosted on [HuggingFace Spaces](https://huggingface.co/spaces) using the `stdio` protocol:\n\n```python\nfrom mcp.server.fastmcp import FastMCP\nfrom gradio_client import Client\nimport sys\nimport io\nimport json\n\nmcp = FastMCP(\"gradio-spaces\")\n\nclients = {}\n\ndef get_client(space_id: str) -> Client:\n    \"\"\"Get or create a Gradio client for the specified space.\"\"\"\n    if space_id not in clients:\n        clients[space_id] = Client(space_id)\n    return clients[space_id]\n\n\n@mcp.tool()\nasync def generate_image(prompt: str, space_id: str = \"ysharma/SanaSprint\") -> str:\n    \"\"\"Generate an image using a fast text-to-image Space.\n\n    Args:\n        prompt: Text prompt describing the image to generate\n        space_id: HuggingFace Space ID to use\n    \"\"\"\n    client = get_client(space_id)\n    result = client.predict(\n        prompt=prompt,\n        model_size=\"1.6B\",\n        seed=0,\n        randomize_seed=True,\n        width=1024,\n        height=1024,\n        guidance_scale=4.5,\n        num_inference_steps=2,\n        api_name=\"/infer\"\n    )\n    return result\n\n\n@mcp.tool()\nasync def run_dia_tts(prompt: str, space_id: str = \"ysharma/Dia-1.6B\") -> str:\n    \"\"\"Text-to-Speech Synthesis.\n\n    Args:\n        prompt: Text prompt describing the co", "heading1": "Gradio with FastMCP", "source_page_url": "https://gradio.app/guides/building-mcp-server-with-gradio", "source_page_title": "Mcp - Building Mcp Server With Gradio Guide"}, {"text": "return result\n\n\n@mcp.tool()\nasync def run_dia_tts(prompt: str, space_id: str = \"ysharma/Dia-1.6B\") -> str:\n    \"\"\"Text-to-Speech Synthesis.\n\n    Args:\n        prompt: Text prompt describing the conversation between speakers S1, S2\n        space_id: HuggingFace Space ID to use\n    \"\"\"\n    client = get_client(space_id)\n    result = client.predict(\n        text_input=f\"\"\"{prompt}\"\"\",\n        audio_prompt_input=None,\n        max_new_tokens=3072,\n        cfg_scale=3,\n        temperature=1.3,\n        top_p=0.95,\n        cfg_filter_top_k=30,\n        speed_factor=0.94,\n        api_name=\"/generate_audio\"\n    )\n    return result\n\n\nif __name__ 
== \"__main__\":\n import sys\n import io\n sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8')\n \n mcp.run(transport='stdio')\n```\n\nThis server exposes two tools:\n1. `run_dia_tts` - Generates a conversation for the given transcript in the form of `[S1]first-sentence. [S2]second-sentence. [S1]...`\n2. `generate_image` - Generates images using a fast text-to-image model\n\nTo use this MCP Server with Claude Desktop (as MCP Client):\n\n1. Save the code to a file (e.g., `gradio_mcp_server.py`)\n2. Install the required dependencies: `pip install mcp gradio-client`\n3. Configure Claude Desktop to use your server by editing the configuration file at `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\\Claude\\claude_desktop_config.json` (Windows):\n\n```json\n{\n \"mcpServers\": {\n \"gradio-spaces\": {\n \"command\": \"python\",\n \"args\": [\n \"/absolute/path/to/gradio_mcp_server.py\"\n ]\n }\n }\n}\n```\n\n4. Restart Claude Desktop\n\nNow, when you ask Claude about generating an image or transcribing audio, it can use your Gradio-powered tools to accomplish these tasks.\n\n\n", "heading1": "Gradio with FastMCP", "source_page_url": "https://gradio.app/guides/building-mcp-server-with-gradio", "source_page_title": "Mcp - Building Mcp Server With Gradio Guide"}, {"text": "use your Gradio-powered tools to accomplish these tasks.\n\n\n", "heading1": "Gradio with FastMCP", "source_page_url": "https://gradio.app/guides/building-mcp-server-with-gradio", "source_page_title": "Mcp - Building Mcp Server With Gradio Guide"}, {"text": "The MCP protocol is still in its infancy and you might see issues connecting to an MCP Server that you've built. We generally recommend using the [MCP Inspector Tool](https://github.com/modelcontextprotocol/inspector) to try connecting and debugging your MCP Server.\n\nHere are some things that may help:\n\n**1. Ensure that you've provided type hints and valid docstrings for your functions**\n\nAs mentioned earlier, Gradio reads the docstrings for your functions and the type hints of input arguments to generate the description of the tool and parameters. A valid function and docstring looks like this (note the \"Args:\" block with indented parameter names underneath):\n\n```py\ndef image_orientation(image: Image.Image) -> str:\n \"\"\"\n Returns whether image is portrait or landscape.\n\n Args:\n image (Image.Image): The image to check.\n \"\"\"\n return \"Portrait\" if image.height > image.width else \"Landscape\"\n```\n\nNote: You can preview the schema that is created for your MCP server by visiting the `http://your-server:port/gradio_api/mcp/schema` URL.\n\n**2. Try accepting input arguments as `str`**\n\nSome MCP Clients do not recognize parameters that are numeric or other complex types, but all of the MCP Clients that we've tested accept `str` input parameters. When in doubt, change your input parameter to be a `str` and then cast to a specific type in the function, as in this example:\n\n```py\ndef prime_factors(n: str):\n \"\"\"\n Compute the prime factorization of a positive integer.\n\n Args:\n n (str): The integer to factorize. 
Must be greater than 1.\n    \"\"\"\n    n_int = int(n)\n    if n_int <= 1:\n        raise ValueError(\"Input must be an integer greater than 1.\")\n\n    factors = []\n    while n_int % 2 == 0:\n        factors.append(2)\n        n_int //= 2\n\n    divisor = 3\n    while divisor * divisor <= n_int:\n        while n_int % divisor == 0:\n            factors.append(divisor)\n            n_int //= divisor\n        divisor += 2\n\n    if n_int > 1:\n        factors.", "heading1": "Troubleshooting your MCP Servers", "source_page_url": "https://gradio.app/guides/building-mcp-server-with-gradio", "source_page_title": "Mcp - Building Mcp Server With Gradio Guide"}, {"text": "= 3\n    while divisor * divisor <= n_int:\n        while n_int % divisor == 0:\n            factors.append(divisor)\n            n_int //= divisor\n        divisor += 2\n\n    if n_int > 1:\n        factors.append(n_int)\n\n    return factors\n```\n\n**3. Ensure that your MCP Client Supports SSE**\n\nSome MCP Clients, notably [Claude Desktop](https://claude.ai/download), do not yet support SSE-based MCP Servers. In those cases, you can use a tool such as [mcp-remote](https://github.com/geelen/mcp-remote). First install [Node.js](https://nodejs.org/en/download/). Then, add the following to your own MCP Client config:\n\n```\n{\n  \"mcpServers\": {\n    \"gradio\": {\n      \"command\": \"npx\",\n      \"args\": [\n        \"mcp-remote\",\n        \"http://your-server:port/gradio_api/mcp/sse\"\n      ]\n    }\n  }\n}\n```\n\n**4. Restart your MCP Client and MCP Server**\n\nSome MCP Clients require you to restart them every time you update the MCP configuration. Other times, if the connection between the MCP Client and servers breaks, you might need to restart the MCP server. If all else fails, try restarting both your MCP Client and MCP Servers!\n\n", "heading1": "Troubleshooting your MCP Servers", "source_page_url": "https://gradio.app/guides/building-mcp-server-with-gradio", "source_page_title": "Mcp - Building Mcp Server With Gradio Guide"}, {"text": "If you're using LLMs in your workflow, adding this server will augment them with just the right context on Gradio - which makes your experience a lot faster and smoother. \n\n\n\nThe server is running on Spaces and was launched entirely using Gradio; you can see all the code [here](https://huggingface.co/spaces/gradio/docs-mcp). For more on building an MCP server with Gradio, see the [previous guide](./building-an-mcp-client-with-gradio). \n\n", "heading1": "Why an MCP Server?", "source_page_url": "https://gradio.app/guides/using-docs-mcp", "source_page_title": "Mcp - Using Docs Mcp Guide"}, {"text": "For clients that support SSE (e.g. Cursor, Windsurf, Cline), simply add the following configuration to your MCP config:\n\n```json\n{\n  \"mcpServers\": {\n    \"gradio\": {\n      \"url\": \"https://gradio-docs-mcp.hf.space/gradio_api/mcp/sse\"\n    }\n  }\n}\n```\n\nWe've included step-by-step instructions for Cursor below, but you can consult the docs for Windsurf [here](https://docs.windsurf.com/windsurf/mcp), and Cline [here](https://docs.cline.bot/mcp-servers/configuring-mcp-servers), which have a similar setup. \n\n\n\nCursor \n\n1. Make sure you're using the latest version of Cursor, and go to Cursor > Settings > Cursor Settings > MCP \n2. Click on '+ Add new global MCP server' \n3. Copy-paste this JSON into the file that opens and then save it. \n```json\n{\n  \"mcpServers\": {\n    \"gradio\": {\n      \"url\": \"https://gradio-docs-mcp.hf.space/gradio_api/mcp/sse\"\n    }\n  }\n}\n```\n4. That's it! You should see the tools load and the status go green in the settings page. You may have to click the refresh icon or wait a few seconds. 
\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/cursor-mcp.png)\n\nClaude Desktop\n\n1. Since Claude Desktop only supports stdio, you will need to [install Node.js](https://nodejs.org/en/download/) to get this to work. \n2. Make sure you're using the latest version of Claude Desktop, and go to Claude > Settings > Developer > Edit Config \n3. Open the file with your favorite editor, copy-paste this JSON, then save the file. \n```json\n{\n  \"mcpServers\": {\n    \"gradio\": {\n      \"command\": \"npx\",\n      \"args\": [\n        \"mcp-remote\",\n        \"https://gradio-docs-mcp.hf.space/gradio_api/mcp/sse\",\n        \"--transport\",\n        \"sse-only\"\n      ]\n    }\n  }\n}\n```\n4. Quit and re-open Claude Desktop, and you should be good to go. You should see it loaded in the Search and Tools icon or on the developer settings page. \n \n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/claude-deskt", "heading1": "Installing in the Clients", "source_page_url": "https://gradio.app/guides/using-docs-mcp", "source_page_title": "Mcp - Using Docs Mcp Guide"}, {"text": "You should see it loaded in the Search and Tools icon or on the developer settings page. \n \n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/claude-desktop-mcp.gif)\n\n", "heading1": "Installing in the Clients", "source_page_url": "https://gradio.app/guides/using-docs-mcp", "source_page_title": "Mcp - Using Docs Mcp Guide"}, {"text": "There are currently only two tools in the server: `gradio_docs_mcp_load_gradio_docs` and `gradio_docs_mcp_search_gradio_docs`. \n\n1. `gradio_docs_mcp_load_gradio_docs`: This tool takes no arguments and will load an /llms.txt-style summary of Gradio's latest, full documentation. This is very useful context that the LLM can parse before answering questions or generating code. \n\n2. `gradio_docs_mcp_search_gradio_docs`: This tool takes a query as an argument and will run embedding search on Gradio's docs, guides, and demos to return the most useful context for the LLM to parse.", "heading1": "Tools", "source_page_url": "https://gradio.app/guides/using-docs-mcp", "source_page_title": "Mcp - Using Docs Mcp Guide"}, {"text": "The next generation of AI user interfaces is moving towards audio-native experiences. Users will be able to speak to chatbots and receive spoken responses in return. Several models have been built under this paradigm, including GPT-4o and [mini omni](https://github.com/gpt-omni/mini-omni).\n\nIn this guide, we'll walk you through building your own conversational chat application using mini omni as an example. You can see a demo of the finished app below:\n\n\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/conversational-chatbot", "source_page_title": "Streaming - Conversational Chatbot Guide"}, {"text": "Our application will enable the following user experience:\n\n1. Users click a button to start recording their message\n2. The app detects when the user has finished speaking and stops recording\n3. The user's audio is passed to the omni model, which streams back a response\n4. After omni mini finishes speaking, the user's microphone is reactivated\n5. 
All previous spoken audio, from both the user and omni, is displayed in a chatbot component\n\nLet's dive into the implementation details.\n\n", "heading1": "Application Overview", "source_page_url": "https://gradio.app/guides/conversational-chatbot", "source_page_title": "Streaming - Conversational Chatbot Guide"}, {"text": "We'll stream the user's audio from their microphone to the server and determine if the user has stopped speaking on each new chunk of audio.\n\nHere's our `process_audio` function:\n\n```python\nimport numpy as np\nfrom utils import determine_pause\n\ndef process_audio(audio: tuple, state: AppState):\n    if state.stream is None:\n        state.stream = audio[1]\n        state.sampling_rate = audio[0]\n    else:\n        state.stream = np.concatenate((state.stream, audio[1]))\n\n    pause_detected = determine_pause(state.stream, state.sampling_rate, state)\n    state.pause_detected = pause_detected\n\n    if state.pause_detected and state.started_talking:\n        return gr.Audio(recording=False), state\n    return None, state\n```\n\nThis function takes two inputs:\n1. The current audio chunk (a tuple of `(sampling_rate, numpy array of audio)`)\n2. The current application state\n\nWe'll use the following `AppState` dataclass to manage our application state:\n\n```python\nfrom dataclasses import dataclass, field\n\n@dataclass\nclass AppState:\n    stream: np.ndarray | None = None\n    sampling_rate: int = 0\n    pause_detected: bool = False\n    started_talking: bool = False\n    stopped: bool = False\n    conversation: list = field(default_factory=list)\n```\n\nThe function concatenates new audio chunks to the existing stream and checks if the user has stopped speaking. If a pause is detected, it returns an update to stop recording. Otherwise, it returns `None` to indicate no changes.\n\nThe implementation of the `determine_pause` function is specific to the omni-mini project and can be found [here](https://huggingface.co/spaces/gradio/omni-mini/blob/eb027808c7bfe5179b46d9352e3fa1813a45f7c3/app.py#L98).\n\n
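If you'd rather not pull in that implementation, a naive energy-based version captures the idea (this sketch is illustrative, not the omni-mini code):\n\n```python\nimport numpy as np\n\ndef determine_pause(stream: np.ndarray, sampling_rate: int, state: AppState) -> bool:\n    \"\"\"Treat a quiet final second after any speech as a pause (naive sketch).\"\"\"\n    if len(stream) < sampling_rate:\n        return False\n    last_second = stream[-sampling_rate:].astype(np.float32)\n    rms = np.sqrt(np.mean(last_second ** 2))\n    if rms > 0.02 * 32767:  # assumes 16-bit PCM input; tune per deployment\n        state.started_talking = True\n        return False\n    return state.started_talking\n```\n\n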
", "heading1": "Processing User Audio", "source_page_url": "https://gradio.app/guides/conversational-chatbot", "source_page_title": "Streaming - Conversational Chatbot Guide"}, {"text": "After processing the user's audio, we need to generate and stream the chatbot's response. Here's our `response` function:\n\n```python\nimport io\nimport tempfile\nfrom pydub import AudioSegment\n\ndef response(state: AppState):\n    if not state.pause_detected and not state.started_talking:\n        return None, AppState()\n\n    audio_buffer = io.BytesIO()\n\n    segment = AudioSegment(\n        state.stream.tobytes(),\n        frame_rate=state.sampling_rate,\n        sample_width=state.stream.dtype.itemsize,\n        channels=(1 if len(state.stream.shape) == 1 else state.stream.shape[1]),\n    )\n    segment.export(audio_buffer, format=\"wav\")\n\n    with tempfile.NamedTemporaryFile(suffix=\".wav\", delete=False) as f:\n        f.write(audio_buffer.getvalue())\n\n    state.conversation.append({\"role\": \"user\",\n                               \"content\": {\"path\": f.name,\n                                           \"mime_type\": \"audio/wav\"}})\n\n    output_buffer = b\"\"\n\n    for mp3_bytes in speaking(audio_buffer.getvalue()):\n        output_buffer += mp3_bytes\n        yield mp3_bytes, state\n\n    with tempfile.NamedTemporaryFile(suffix=\".mp3\", delete=False) as f:\n        f.write(output_buffer)\n\n    state.conversation.append({\"role\": \"assistant\",\n                               \"content\": {\"path\": f.name,\n                                           \"mime_type\": \"audio/mp3\"}})\n    yield None, AppState(conversation=state.conversation)\n```\n\nThis function:\n1. Converts the user's audio to a WAV file\n2. Adds the user's message to the conversation history\n3. Generates and streams the chatbot's response using the `speaking` function\n4. Saves the chatbot's response as an MP3 file\n5. Adds the chatbot's response to the conversation history\n\nNote: The implementation of the `speaking` function is specific to the omni-mini project and can be found [here](https://huggingface.co/spaces/gradio/omni-mini/blob/main/app.py#L116).\n\n", "heading1": "Generating the Response", "source_page_url": "https://gradio.app/guides/conversational-chatbot", "source_page_title": "Streaming - Conversational Chatbot Guide"}, {"text": "Now let's put it all together using Gradio's Blocks API:\n\n```python\nimport gradio as gr\n\ndef start_recording_user(state: AppState):\n    if not state.stopped:\n        return gr.Audio(recording=True)\n\nwith gr.Blocks() as demo:\n    with gr.Row():\n        with gr.Column():\n            input_audio = gr.Audio(\n                label=\"Input Audio\", sources=\"microphone\", type=\"numpy\"\n            )\n        with gr.Column():\n            chatbot = gr.Chatbot(label=\"Conversation\", type=\"messages\")\n            output_audio = gr.Audio(label=\"Output Audio\", streaming=True, autoplay=True)\n    state = gr.State(value=AppState())\n\n    stream = input_audio.stream(\n        process_audio,\n        [input_audio, state],\n        [input_audio, state],\n        stream_every=0.5,\n        time_limit=30,\n    )\n    respond = input_audio.stop_recording(\n        response,\n        [state],\n        [output_audio, state]\n    )\n    respond.then(lambda s: s.conversation, [state], [chatbot])\n\n    restart = output_audio.stop(\n        start_recording_user,\n        [state],\n        [input_audio]\n    )\n    cancel = gr.Button(\"Stop Conversation\", variant=\"stop\")\n    cancel.click(lambda: (AppState(stopped=True), gr.Audio(recording=False)), None,\n                 [state, input_audio], cancels=[respond, restart])\n\nif __name__ == \"__main__\":\n    demo.launch()\n```\n\nThis setup creates a user interface with:\n- An input audio component for recording user messages\n- A chatbot component to display the conversation history\n- An output audio component for the chatbot's responses\n- A button to stop and reset the conversation\n\nThe app streams user audio in 0.5-second chunks, processes it, generates responses, and updates the conversation history accordingly.\n\n", "heading1": "Building the Gradio App", "source_page_url": "https://gradio.app/guides/conversational-chatbot", "source_page_title": "Streaming - Conversational Chatbot Guide"}, {"text": "This guide demonstrates how to build a conversational chatbot application using Gradio and the mini omni model. You can adapt this framework to create various audio-based chatbot demos. To see the full application in action, visit the Hugging Face Spaces demo: https://huggingface.co/spaces/gradio/omni-mini\n\nFeel free to experiment with different models, audio processing techniques, or user interface designs to create your own unique conversational AI experiences!", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/conversational-chatbot", "source_page_title": "Streaming - Conversational Chatbot Guide"}, {"text": "Modern voice applications should feel natural and responsive, moving beyond the traditional \"click-to-record\" pattern. 
By combining Groq's fast inference capabilities with automatic speech detection, we can create a more intuitive interaction model where users can simply start talking whenever they want to engage with the AI.\n\n> Credits: VAD and Gradio code inspired by [WillHeld's Diva-audio-chat](https://huggingface.co/spaces/WillHeld/diva-audio-chat/tree/main).\n\nIn this tutorial, you will learn how to create a multimodal Gradio and Groq app that has automatic speech detection. You can also watch the full video tutorial which includes a demo of the application:\n\n\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": "Many voice apps currently work by the user clicking record, speaking, then stopping the recording. While this can be a powerful demo, the most natural mode of interaction with voice requires the app to dynamically detect when the user is speaking, so they can talk back and forth without having to continually click a record button. \n\nCreating a natural interaction with voice and text requires a dynamic and low-latency response. Thus, we need both automatic voice detection and fast inference. With @ricky0123/vad-web powering speech detection and Groq powering the LLM, both of these requirements are met. Groq provides a lightning fast response, and Gradio allows for easy creation of impressively functional apps.\n\nThis tutorial shows you how to build a calorie tracking app where you speak to an AI that automatically detects when you start and stop your response, and provides its own text response back to guide you with questions that allow it to give a calorie estimate of your last meal.\n\n", "heading1": "Background", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": "- **Gradio**: Provides the web interface and audio handling capabilities\n- **@ricky0123/vad-web**: Handles voice activity detection\n- **Groq**: Powers fast LLM inference for natural conversations\n- **Whisper**: Transcribes speech to text\n\nSetting Up the Environment\n\nFirst, let\u2019s install and import our essential libraries and set up a client for using the Groq API. Here\u2019s how to do it:\n\n`requirements.txt`\n```\ngradio\ngroq\nnumpy\nsoundfile\nlibrosa\nspaces\nxxhash\ndatasets\n```\n\n`app.py`\n```python\nimport groq\nimport gradio as gr\nimport soundfile as sf\nfrom dataclasses import dataclass, field\nimport os\n\n# Initialize Groq client securely\napi_key = os.environ.get(\"GROQ_API_KEY\")\nif not api_key:\n    raise ValueError(\"Please set the GROQ_API_KEY environment variable.\")\nclient = groq.Client(api_key=api_key)\n```\n\nHere, we\u2019re pulling in key libraries to interact with the Groq API, build a sleek UI with Gradio, and handle audio data. We\u2019re reading the Groq API key from an environment variable, which is a security best practice for avoiding leaking the API key.\n\n---\n\nState Management for Seamless Conversations\n\nWe need a way to keep track of our conversation history, so the chatbot remembers past interactions, and manage other states like whether recording is currently active. 
To do this, let\u2019s create an `AppState` class:\n\n```python\nfrom typing import Any\n\n@dataclass\nclass AppState:\n    conversation: list = field(default_factory=list)\n    stopped: bool = False\n    model_outs: Any = None\n```\n\nOur `AppState` class is a handy tool for managing conversation history and tracking whether recording is on or off. Each instance will have its own fresh list of conversations, making sure chat history is isolated to each session. \n\n---\n\nTranscribing Audio with Whisper on Groq\n\nNext, we\u2019ll create a function to transcribe the user\u2019s audio input into text using Whisper, a powerful transcription model hosted on Groq. This transcription will also help us determine whether there\u2019s meani", "heading1": "Key Components", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": "e\u2019ll create a function to transcribe the user\u2019s audio input into text using Whisper, a powerful transcription model hosted on Groq. This transcription will also help us determine whether there\u2019s meaningful speech in the input. Here\u2019s how:\n\n```python\ndef transcribe_audio(client, file_name):\n    if file_name is None:\n        return None\n\n    try:\n        with open(file_name, \"rb\") as audio_file:\n            response = client.audio.transcriptions.with_raw_response.create(\n                model=\"whisper-large-v3-turbo\",\n                file=(\"audio.wav\", audio_file),\n                response_format=\"verbose_json\",\n            )\n            completion = process_whisper_response(response.parse())\n            return completion\n    except Exception as e:\n        print(f\"Error in transcription: {e}\")\n        return f\"Error in transcription: {str(e)}\"\n```\n\nThis function opens the audio file and sends it to Groq\u2019s Whisper model for transcription, requesting detailed JSON output. verbose_json is needed to get information to determine if speech was included in the audio. We also handle any potential errors so our app doesn\u2019t fully crash if there\u2019s an issue with the API request. \n\n```python\ndef process_whisper_response(completion):\n    \"\"\"\n    Process Whisper transcription response and return text or null based on no_speech_prob\n\n    Args:\n        completion: Whisper transcription response object\n\n    Returns:\n        str or None: Transcribed text if no_speech_prob <= 0.7, otherwise None\n    \"\"\"\n    if completion.segments and len(completion.segments) > 0:\n        no_speech_prob = completion.segments[0].get('no_speech_prob', 0)\n        print(\"No speech prob:\", no_speech_prob)\n\n        if no_speech_prob > 0.7:\n            return None\n\n        return completion.text.strip()\n\n    return None\n```\n\nWe also need to interpret the audio data response. The process_whisper_response function takes the resulting completion from Whisper and checks if the audio was j", "heading1": "Key Components", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": "ext.strip()\n\n    return None\n```\n\nWe also need to interpret the audio data response. The process_whisper_response function takes the resulting completion from Whisper and checks if the audio was just background noise or had actual speaking that was transcribed. It uses a threshold of 0.7 to interpret the no_speech_prob, and will return None if there was no speech. 
Otherwise, it returns the transcribed text of what the user said.\n\n\n---\n\nAdding Conversational Intelligence with LLM Integration\n\nOur chatbot needs to provide intelligent, friendly responses that flow naturally. We\u2019ll use a Groq-hosted Llama-3.2 for this:\n\n```python\ndef generate_chat_completion(client, history):\n    messages = []\n    messages.append(\n        {\n            \"role\": \"system\",\n            \"content\": \"In conversation with the user, ask questions to estimate and provide (1) total calories, (2) protein, carbs, and fat in grams, (3) fiber and sugar content. Only ask *one question at a time*. Be conversational and natural.\",\n        }\n    )\n\n    for message in history:\n        messages.append(message)\n\n    try:\n        completion = client.chat.completions.create(\n            model=\"llama-3.2-11b-vision-preview\",\n            messages=messages,\n        )\n        return completion.choices[0].message.content\n    except Exception as e:\n        return f\"Error in generating chat completion: {str(e)}\"\n```\n\nWe\u2019re defining a system prompt to guide the chatbot\u2019s behavior, ensuring it asks one question at a time and keeps things conversational. This setup also includes error handling to ensure the app gracefully manages any issues.\n\n---\n\nVoice Activity Detection for Hands-Free Interaction\n\nTo make our chatbot hands-free, we\u2019ll add Voice Activity Detection (VAD) to automatically detect when someone starts or stops speaking. Here\u2019s how to implement it using ONNX in JavaScript:\n\n```javascript\nasync function main() {\n  const script1 = document.createElement(\"script\");\n  scrip", "heading1": "Key Components", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": "ly detect when someone starts or stops speaking. Here\u2019s how to implement it using ONNX in JavaScript:\n\n```javascript\nasync function main() {\n  const script1 = document.createElement(\"script\");\n  script1.src = \"https://cdn.jsdelivr.net/npm/onnxruntime-web@1.14.0/dist/ort.js\";\n  document.head.appendChild(script1)\n  const script2 = document.createElement(\"script\");\n  script2.onload = async () => {\n    console.log(\"vad loaded\");\n    var record = document.querySelector('.record-button');\n    record.textContent = \"Just Start Talking!\"\n\n    const myvad = await vad.MicVAD.new({\n      onSpeechStart: () => {\n        var record = document.querySelector('.record-button');\n        var player = document.querySelector('#streaming-out')\n        if (record != null && (player == null || player.paused)) {\n          record.click();\n        }\n      },\n      onSpeechEnd: (audio) => {\n        var stop = document.querySelector('.stop-button');\n        if (stop != null) {\n          stop.click();\n        }\n      }\n    })\n    myvad.start()\n  }\n  script2.src = \"https://cdn.jsdelivr.net/npm/@ricky0123/vad-web@0.0.7/dist/bundle.min.js\";\n}\n```\n\nThis script loads our VAD model and sets up functions to start and stop recording automatically. When the user starts speaking, it triggers the recording, and when they stop, it ends the recording.\n\n
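This script still needs to reach the page; the `js` variable passed to `gr.Blocks` in the next section is simply this function serialized as a string, which Gradio runs when the page loads (a sketch under that assumption):\n\n```python\n# Pass the VAD script above to gr.Blocks via its `js` parameter;\n# Gradio executes it when the app first loads in the browser.\njs = \"\"\"\nasync function main() {\n  // ... the VAD setup code shown above ...\n}\n\"\"\"\n```\n\n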
---\n\nBuilding a User Interface with Gradio\n\nNow, let\u2019s create an intuitive and visually appealing user interface with Gradio. This interface will include an audio input for capturing voice, a chat window for displaying responses, and state management to keep things synchronized.\n\n```python\nwith gr.Blocks(theme=theme, js=js) as demo:\n    with gr.Row():\n        input_audio = gr.Audio(\n            label=\"Input Audio\",\n            sources=[\"microphone\"],\n            type=\"numpy\",\n            streaming=False,\n            waveform_options=gr.WaveformOptions(waveform_color=\"#B83A4B\"),\n        )\n    with gr.Row():\n        chatbot = gr.Chatbot(label=\"Conversati", "heading1": "Key Components", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": " type=\"numpy\",\n            streaming=False,\n            waveform_options=gr.WaveformOptions(waveform_color=\"#B83A4B\"),\n        )\n    with gr.Row():\n        chatbot = gr.Chatbot(label=\"Conversation\", type=\"messages\")\n    state = gr.State(value=AppState())\n```\n\nIn this code block, we\u2019re using Gradio\u2019s `Blocks` API to create an interface with an audio input, a chat display, and an application state manager. The color customization for the waveform adds a nice visual touch.\n\n---\n\nHandling Recording and Responses\n\nFinally, let\u2019s link the recording and response components to ensure the app reacts smoothly to user inputs and provides responses in real-time.\n\n```python\n    stream = input_audio.start_recording(\n        process_audio,\n        [input_audio, state],\n        [input_audio, state],\n    )\n    respond = input_audio.stop_recording(\n        response, [state, input_audio], [state, chatbot]\n    )\n```\n\nThese lines set up event listeners for starting and stopping the recording, processing the audio input, and generating responses. By linking these events, we create a cohesive experience where users can simply talk, and the chatbot handles the rest.\n\n---\n\n", "heading1": "Key Components", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": "1. When you open the app, the VAD system automatically initializes and starts listening for speech\n2. As soon as you start talking, it triggers the recording automatically\n3. When you stop speaking, the recording ends and:\n   - The audio is transcribed using Whisper\n   - The transcribed text is sent to the LLM\n   - The LLM generates a response about calorie tracking\n   - The response is displayed in the chat interface\n4. This creates a natural back-and-forth conversation where you can simply talk about your meals and get instant feedback on nutritional content\n\nThis app demonstrates how to create a natural voice interface that feels responsive and intuitive. By combining Groq's fast inference with automatic speech detection, we've eliminated the need for manual recording controls while maintaining high-quality interactions. 
The result is a practical calorie tracking assistant that users can simply talk to as naturally as they would to a human nutritionist.\n\nLink to GitHub repository: [Groq Gradio Basics](https://github.com/bklieger-groq/gradio-groq-basics/tree/main/calorie-tracker)", "heading1": "Summary", "source_page_url": "https://gradio.app/guides/automatic-voice-detection", "source_page_title": "Streaming - Automatic Voice Detection Guide"}, {"text": "First, we'll install the following requirements in our system:\n\n```\nopencv-python\ntorch\ntransformers>=4.43.0\nspaces\n```\n\nThen, we'll download the model from the Hugging Face Hub:\n\n```python\nfrom transformers import RTDetrForObjectDetection, RTDetrImageProcessor\n\nimage_processor = RTDetrImageProcessor.from_pretrained(\"PekingU/rtdetr_r50vd\")\nmodel = RTDetrForObjectDetection.from_pretrained(\"PekingU/rtdetr_r50vd\").to(\"cuda\")\n```\n\nWe're moving the model to the GPU. We'll be deploying our model to Hugging Face Spaces and running the inference in the [free ZeroGPU cluster](https://huggingface.co/zero-gpu-explorers). \n\n\n", "heading1": "Setting up the Model", "source_page_url": "https://gradio.app/guides/object-detection-from-video", "source_page_title": "Streaming - Object Detection From Video Guide"}, {"text": "Our inference function will accept a video and a desired confidence threshold.\nObject detection models identify many objects and assign a confidence score to each object. The lower the confidence, the higher the chance of a false positive. So we will let our users set the confidence threshold.\n\nOur function will iterate over the frames in the video and run the RT-DETR model over each frame.\nWe will then draw the bounding boxes for each detected object in the frame and save the frame to a new output video.\nThe function will yield each output video in chunks of two seconds.\n\nIn order to keep inference times as low as possible on ZeroGPU (there is a time-based quota),\nwe will halve the original frames-per-second in the output video and resize the input frames to be half the original \nsize before running the model.\n\nThe code for the inference function is below - we'll go over it piece by piece.\n\n```python\nimport spaces\nimport cv2\nfrom PIL import Image\nimport torch\nimport time\nimport numpy as np\nimport uuid\n\nfrom draw_boxes import draw_bounding_boxes\n\nSUBSAMPLE = 2\n\n@spaces.GPU\ndef stream_object_detection(video, conf_threshold):\n    cap = cv2.VideoCapture(video)\n\n    # This means we will output mp4 videos\n    video_codec = cv2.VideoWriter_fourcc(*\"mp4v\")  # type: ignore\n    fps = int(cap.get(cv2.CAP_PROP_FPS))\n\n    desired_fps = fps // SUBSAMPLE\n    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) // 2\n    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) // 2\n\n    iterating, frame = cap.read()\n\n    n_frames = 0\n\n    # Use UUID to create a unique video file\n    output_video_name = f\"output_{uuid.uuid4()}.mp4\"\n\n    # Output Video\n    output_video = cv2.VideoWriter(output_video_name, video_codec, desired_fps, (width, height))  # type: ignore\n    batch = []\n\n    while iterating:\n        frame = cv2.resize(frame, (0, 0), fx=0.5, fy=0.5)\n        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)\n        if n_frames % SUBSAMPLE == 0:\n            batch.append(frame)\n        if len(batc", "heading1": "The Inference Function", "source_page_url": "https://gradio.app/guides/object-detection-from-video", "source_page_title": "Streaming - Object Detection From Video Guide"}, {"text": " frame = cv2.resize(frame, (0, 0), fx=0.5, fy=0.5)\n        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)\n        if 
n_frames % SUBSAMPLE == 0:\n            batch.append(frame)\n        if len(batch) == 2 * desired_fps:\n            inputs = image_processor(images=batch, return_tensors=\"pt\").to(\"cuda\")\n\n            with torch.no_grad():\n                outputs = model(**inputs)\n\n            boxes = image_processor.post_process_object_detection(\n                outputs,\n                target_sizes=torch.tensor([(height, width)] * len(batch)),\n                threshold=conf_threshold)\n\n            for i, (array, box) in enumerate(zip(batch, boxes)):\n                pil_image = draw_bounding_boxes(Image.fromarray(array), box, model, conf_threshold)\n                frame = np.array(pil_image)\n                # Convert RGB to BGR\n                frame = frame[:, :, ::-1].copy()\n                output_video.write(frame)\n\n            batch = []\n            output_video.release()\n            yield output_video_name\n            output_video_name = f\"output_{uuid.uuid4()}.mp4\"\n            output_video = cv2.VideoWriter(output_video_name, video_codec, desired_fps, (width, height))  # type: ignore\n\n        iterating, frame = cap.read()\n        n_frames += 1\n```\n\n1. **Reading from the Video**\n\nOne of the industry standards for creating videos in Python is OpenCV, so we will use it in this app.\n\nThe `cap` variable is how we will read from the input video. Whenever we call `cap.read()`, we are reading the next frame in the video.\n\nIn order to stream video in Gradio, we need to yield a different video file for each \"chunk\" of the output video.\nWe create the next video file to write to with the `output_video = cv2.VideoWriter(output_video_name, video_codec, desired_fps, (width, height))` line. The `video_codec` is how we specify the type of video file. Only \"mp4\" and \"ts\" files are supported for video streaming at the moment.\n\n\n2. **The Inference Loop**\n\nFor each frame i", "heading1": "The Inference Function", "source_page_url": "https://gradio.app/guides/object-detection-from-video", "source_page_title": "Streaming - Object Detection From Video Guide"}, {"text": "dth, height))` line. The `video_codec` is how we specify the type of video file. Only \"mp4\" and \"ts\" files are supported for video streaming at the moment.\n\n\n2. **The Inference Loop**\n\nFor each frame in the video, we will resize it to be half the size. OpenCV reads files in `BGR` format, so we will convert to the expected `RGB` format of transformers. That's what the first two lines of the while loop are doing. \n\nWe take every other frame and add it to a `batch` list so that the output video is half the original FPS. When the batch covers two seconds of video, we will run the model. The two second threshold was chosen to keep the processing time of each batch small enough so that video is smoothly displayed in the server while not requiring too many separate forward passes. In order for video streaming to work properly in Gradio, the batch size should be at least 1 second. \n\nWe run the forward pass of the model and then use the `post_process_object_detection` method of the model to scale the detected bounding boxes to the size of the input frame.\n\nWe make use of a custom function to draw the bounding boxes (source [here](https://huggingface.co/spaces/gradio/rt-detr-object-detection/blob/main/draw_boxes.py#L14)). We then have to convert from `RGB` to `BGR` before writing back to the output video.\n\nOnce we have finished processing the batch, we create a new output video file for the next batch.\n\n", "heading1": "The Inference Function", "source_page_url": "https://gradio.app/guides/object-detection-from-video", "source_page_title": "Streaming - Object Detection From Video Guide"}, {"text": "The UI code is pretty similar to other kinds of Gradio apps. 
\nWe'll use a standard two-column layout so that users can see the input and output videos side by side.\n\nIn order for streaming to work, we have to set `streaming=True` in the output video. Setting the video\nto autoplay is not necessary but it's a better experience for users.\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as app:\n    gr.HTML(\n        \"\"\"\n        <h1 style='text-align: center'>\n        Video Object Detection with RT-DETR\n        </h1>\n        \"\"\")\n    with gr.Row():\n        with gr.Column():\n            video = gr.Video(label=\"Video Source\")\n            conf_threshold = gr.Slider(\n                label=\"Confidence Threshold\",\n                minimum=0.0,\n                maximum=1.0,\n                step=0.05,\n                value=0.30,\n            )\n        with gr.Column():\n            output_video = gr.Video(label=\"Processed Video\", streaming=True, autoplay=True)\n\n    video.upload(\n        fn=stream_object_detection,\n        inputs=[video, conf_threshold],\n        outputs=[output_video],\n    )\n```\n\n\n", "heading1": "The Gradio Demo", "source_page_url": "https://gradio.app/guides/object-detection-from-video", "source_page_title": "Streaming - Object Detection From Video Guide"}, {"text": "You can check out our demo hosted on Hugging Face Spaces [here](https://huggingface.co/spaces/gradio/rt-detr-object-detection). \n\nIt is also embedded on this page below.\n\n$demo_rt-detr-object-detection", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/object-detection-from-video", "source_page_title": "Streaming - Object Detection From Video Guide"}, {"text": "Automatic speech recognition (ASR), the conversion of spoken speech to text, is a very important and thriving area of machine learning. ASR algorithms run on practically every smartphone, and are becoming increasingly embedded in professional workflows, such as digital assistants for nurses and doctors. Because ASR algorithms are designed to be used directly by customers and end users, it is important to validate that they are behaving as expected when confronted with a wide variety of speech patterns (different accents, pitches, and background audio conditions).\n\nUsing `gradio`, you can easily build a demo of your ASR model and share that with a testing team, or test it yourself by speaking through the microphone on your device.\n\nThis tutorial will show how to take a pretrained speech-to-text model and deploy it with a Gradio interface. We will start with a **_full-context_** model, in which the user speaks the entire audio before the prediction runs. Then we will adapt the demo to make it **_streaming_**, meaning that the audio model will convert speech as you speak. \n\nPrerequisites\n\nMake sure you have the `gradio` Python package already [installed](/getting_started). You will also need a pretrained speech recognition model. In this tutorial, we will build our demo using the Transformers library:\n\n- Transformers (for this, `pip install torch transformers torchaudio`)\n\nMake sure you have it installed so that you can follow along with the tutorial. You will also need `ffmpeg` [installed on your system](https://www.ffmpeg.org/download.html), if you do not already have it, to process files from the microphone.\n\nHere's how to build a real time speech recognition (ASR) app:\n\n1. [Set up the Transformers ASR Model](#1-set-up-the-transformers-asr-model)\n2. [Create a Full-Context ASR Demo with Transformers](#2-create-a-full-context-asr-demo-with-transformers)\n3. [Create a Streaming ASR Demo with Transformers](#3-create-a-streaming-asr-demo-with-transformers)\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/real-time-speech-recognition", "source_page_title": "Streaming - Real Time Speech Recognition Guide"}, {"text": "First, you will need to have an ASR model that you have either trained yourself or you will need to download a pretrained model. 
In this tutorial, we will start by using a pretrained ASR model, `whisper`.\n\nHere is the code to load `whisper` from Hugging Face `transformers`.\n\n```python\nfrom transformers import pipeline\n\np = pipeline(\"automatic-speech-recognition\", model=\"openai/whisper-base.en\")\n```\n\nThat's it!\n\n", "heading1": "1. Set up the Transformers ASR Model", "source_page_url": "https://gradio.app/guides/real-time-speech-recognition", "source_page_title": "Streaming - Real Time Speech Recognition Guide"}, {"text": "We will start by creating a _full-context_ ASR demo, in which the user speaks the full audio before using the ASR model to run inference. This is very easy with Gradio -- we simply create a function around the `pipeline` object above.\n\nWe will use `gradio`'s built-in `Audio` component, configured to take input from the user's microphone and return a filepath for the recorded audio. The output component will be a plain `Textbox`.\n\n$code_asr\n$demo_asr\n\nThe `transcribe` function takes a single parameter, `audio`, which is a numpy array of the audio the user recorded. The `pipeline` object expects this in float32 format, so we convert it first to float32, and then extract the transcribed text.\n\n", "heading1": "2. Create a Full-Context ASR Demo with Transformers", "source_page_url": "https://gradio.app/guides/real-time-speech-recognition", "source_page_title": "Streaming - Real Time Speech Recognition Guide"}, {"text": "To make this a *streaming* demo, we need to make these changes:\n\n1. Set `streaming=True` in the `Audio` component\n2. Set `live=True` in the `Interface`\n3. Add a `state` to the interface to store the recorded audio of a user\n\nTip: You can also set `time_limit` and `stream_every` parameters in the interface. The `time_limit` caps the amount of time each user's stream can take. The default is 30 seconds so users won't be able to stream audio for more than 30 seconds. The `stream_every` parameter controls how frequently data is sent to your function. By default it is 0.5 seconds.\n\nTake a look below.\n\n$code_stream_asr\n\nNotice that we now have a state variable because we need to track all the audio history. `transcribe` gets called whenever there is a new small chunk of audio, but we also need to keep track of all the audio spoken so far in the state. As the interface runs, the `transcribe` function gets called, with a record of all the previously spoken audio in the `stream` and the new chunk of audio as `new_chunk`. We return the new full audio to be stored back in its current state, and we also return the transcription. Here, we naively append the audio together and call the `transcriber` object on the entire audio. You can imagine more efficient ways of handling this, such as re-processing only the last 5 seconds of audio whenever a new chunk of audio is received. \n\n$demo_stream_asr\n\nNow the ASR model will run inference as you speak! \n", "heading1": "3. Create a Streaming ASR Demo with Transformers", "source_page_url": "https://gradio.app/guides/real-time-speech-recognition", "source_page_title": "Streaming - Real Time Speech Recognition Guide"}, {"text": "Just like the classic Magic 8 Ball, a user should ask it a question orally and then wait for a response. Under the hood, we'll use Whisper to transcribe the audio and then use an LLM to generate a magic-8-ball-style answer. 
Finally, we'll use Parler TTS to read the response aloud.\n\n", "heading1": "The Overview", "source_page_url": "https://gradio.app/guides/streaming-ai-generated-audio", "source_page_title": "Streaming - Streaming Ai Generated Audio Guide"}, {"text": "First, let's define the UI and put placeholders for all the Python logic.\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as block:\n    gr.HTML(\n        f\"\"\"\n        <h1 style='text-align: center'>Magic 8 Ball \ud83c\udfb1</h1>\n        <h3 style='text-align: center'>Ask a question and receive wisdom</h3>\n        <p style='text-align: center'>Powered by Parler-TTS</p>\n        \"\"\"\n    )\n    with gr.Group():\n        with gr.Row():\n            audio_out = gr.Audio(label=\"Spoken Answer\", streaming=True, autoplay=True)\n            answer = gr.Textbox(label=\"Answer\")\n            state = gr.State()\n        with gr.Row():\n            audio_in = gr.Audio(label=\"Speak your question\", sources=\"microphone\", type=\"filepath\")\n\n    audio_in.stop_recording(generate_response, audio_in, [state, answer, audio_out])\\\n        .then(fn=read_response, inputs=state, outputs=[answer, audio_out])\n\nblock.launch()\n```\n\nWe're placing the output Audio and Textbox components and the input Audio component in separate rows. In order to stream the audio from the server, we'll set `streaming=True` in the output Audio component. We'll also set `autoplay=True` so that the audio plays as soon as it's ready.\nWe'll be using the Audio input component's `stop_recording` event to trigger our application's logic when a user stops recording from their microphone.\n\nWe're separating the logic into two parts. First, `generate_response` will take the recorded audio, transcribe it and generate a response with an LLM. We're going to store the response in a `gr.State` variable that then gets passed to the `read_response` function that generates the audio.\n\nWe're doing this in two parts because only `read_response` will require a GPU. Our app will run on Hugging Face's [ZeroGPU](https://huggingface.co/zero-gpu-explorers) which has time-based quotas. Since generating the response can be done with Hugging Face's Inference API, we shouldn't include that code in our GPU func", "heading1": "The UI", "source_page_url": "https://gradio.app/guides/streaming-ai-generated-audio", "source_page_title": "Streaming - Streaming Ai Generated Audio Guide"}, {"text": "GPU](https://huggingface.co/zero-gpu-explorers) which has time-based quotas. Since generating the response can be done with Hugging Face's Inference API, we shouldn't include that code in our GPU function as it will needlessly use our GPU quota.\n\n", "heading1": "The UI", "source_page_url": "https://gradio.app/guides/streaming-ai-generated-audio", "source_page_title": "Streaming - Streaming Ai Generated Audio Guide"}, {"text": "As mentioned above, we'll use [Hugging Face's Inference API](https://huggingface.co/docs/huggingface_hub/guides/inference) to transcribe the audio and generate a response from an LLM. After instantiating the client, I use the `automatic_speech_recognition` method (this automatically uses Whisper running on Hugging Face's Inference Servers) to transcribe the audio. Then I pass the question to an LLM (Mistral-7B-Instruct) to generate a response. We are prompting the LLM to act like a magic 8 ball with the system message.\n\nOur `generate_response` function will also send empty updates to the output textbox and audio components (returning `None`). 
\nThis is because I want the Gradio progress tracker to be displayed over the components but I don't want to display the answer until the audio is ready.\n\n```python\nimport os\nimport random\n\nfrom huggingface_hub import InferenceClient\n\nclient = InferenceClient(token=os.getenv(\"HF_TOKEN\"))\n\ndef generate_response(audio):\n    gr.Info(\"Transcribing Audio\", duration=5)\n    question = client.automatic_speech_recognition(audio).text\n\n    messages = [{\"role\": \"system\", \"content\": (\"You are a magic 8 ball.\"\n                \"Someone will present to you a situation or question and your job \"\n                \"is to answer with a cryptic adage or proverb such as \"\n                \"'curiosity killed the cat' or 'The early bird gets the worm'.\"\n                \"Keep your answers short and do not include the phrase 'Magic 8 Ball' in your response. If the question does not make sense or is off-topic, say 'Foolish questions get foolish answers.'\"\n                \"For example, 'Magic 8 Ball, should I get a dog?', 'A dog is ready for you but are you ready for the dog?'\")},\n                {\"role\": \"user\", \"content\": f\"Magic 8 Ball please answer this question - {question}\"}]\n\n    response = client.chat_completion(messages,", "heading1": "The Logic", "source_page_url": "https://gradio.app/guides/streaming-ai-generated-audio", "source_page_title": "Streaming - Streaming Ai Generated Audio Guide"}, {"text": "for you but are you ready for the dog?'\")},\n                {\"role\": \"user\", \"content\": f\"Magic 8 Ball please answer this question - {question}\"}]\n\n    response = client.chat_completion(messages, max_tokens=64, seed=random.randint(1, 5000),\n                                      model=\"mistralai/Mistral-7B-Instruct-v0.3\")\n\n    response = response.choices[0].message.content.replace(\"Magic 8 Ball\", \"\").replace(\":\", \"\")\n    return response, None, None\n```\n\n\nNow that we have our text response, we'll read it aloud with Parler TTS. The `read_response` function will be a Python generator that yields the next chunk of audio as it's ready.\n\n\nWe'll be using the [Mini v0.1](https://huggingface.co/parler-tts/parler_tts_mini_v0.1) for the feature extraction but the [Jenny fine tuned version](https://huggingface.co/parler-tts/parler-tts-mini-jenny-30H) for the voice. This is so that the voice is consistent across generations.\n\n\nStreaming audio with transformers requires a custom Streamer class. You can see the implementation [here](https://huggingface.co/spaces/gradio/magic-8-ball/blob/main/streamer.py). Additionally, we'll convert the output to bytes so that it can be streamed faster from the backend. 
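\n\nThe `numpy_to_mp3` helper used below isn't shown in this guide; a minimal sketch (assuming `pydub` is available and normalizing floating-point audio to 16-bit PCM first) might look like this:\n\n```python\nimport io\n\nimport numpy as np\nfrom pydub import AudioSegment\n\ndef numpy_to_mp3(audio_array, sampling_rate):\n    # Scale float audio into the int16 range before encoding\n    if np.issubdtype(audio_array.dtype, np.floating):\n        peak = np.max(np.abs(audio_array)) or 1.0\n        audio_array = (audio_array / peak * 32767).astype(np.int16)\n    segment = AudioSegment(\n        audio_array.tobytes(),\n        frame_rate=sampling_rate,\n        sample_width=audio_array.dtype.itemsize,\n        channels=1,\n    )\n    buffer = io.BytesIO()\n    segment.export(buffer, format=\"mp3\")\n    return buffer.getvalue()\n```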
\n\n\n```python\nfrom streamer import ParlerTTSStreamer\nfrom parler_tts import ParlerTTSForConditionalGeneration\nfrom transformers import AutoTokenizer, AutoFeatureExtractor, set_seed\nimport numpy as np\nimport spaces\nimport torch\nfrom threading import Thread\n\n\ndevice = \"cuda:0\" if torch.cuda.is_available() else \"mps\" if torch.backends.mps.is_available() else \"cpu\"\ntorch_dtype = torch.float16 if device != \"cpu\" else torch.float32\n\nrepo_id = \"parler-tts/parler_tts_mini_v0.1\"\n\njenny_repo_id = \"ylacombe/parler-tts-mini-jenny-30H\"\n\nmodel = ParlerTTSForConditionalGeneration.from_pretrained(\n    jenny_repo_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True\n).to(device)\n\ntokenizer = AutoTokenizer.from_pretrained(repo_id)\nfeature_extractor = AutoFeatureExtractor.from_pretrained(repo_id)\n\nsampling_rate = model.audio_encoder.config.sampling_rate\nf", "heading1": "The Logic", "source_page_url": "https://gradio.app/guides/streaming-ai-generated-audio", "source_page_title": "Streaming - Streaming Ai Generated Audio Guide"}, {"text": "sage=True\n).to(device)\n\ntokenizer = AutoTokenizer.from_pretrained(repo_id)\nfeature_extractor = AutoFeatureExtractor.from_pretrained(repo_id)\n\nsampling_rate = model.audio_encoder.config.sampling_rate\nframe_rate = model.audio_encoder.config.frame_rate\n\n@spaces.GPU\ndef read_response(answer):\n\n    play_steps_in_s = 2.0\n    play_steps = int(frame_rate * play_steps_in_s)\n\n    description = \"Jenny speaks at an average pace with a calm delivery in a very confined sounding environment with clear audio quality.\"\n    description_tokens = tokenizer(description, return_tensors=\"pt\").to(device)\n\n    streamer = ParlerTTSStreamer(model, device=device, play_steps=play_steps)\n    prompt = tokenizer(answer, return_tensors=\"pt\").to(device)\n\n    generation_kwargs = dict(\n        input_ids=description_tokens.input_ids,\n        prompt_input_ids=prompt.input_ids,\n        streamer=streamer,\n        do_sample=True,\n        temperature=1.0,\n        min_new_tokens=10,\n    )\n\n    set_seed(42)\n    thread = Thread(target=model.generate, kwargs=generation_kwargs)\n    thread.start()\n\n    for new_audio in streamer:\n        print(f\"Sample of length: {round(new_audio.shape[0] / sampling_rate, 2)} seconds\")\n        yield answer, numpy_to_mp3(new_audio, sampling_rate=sampling_rate)\n```\n\n", "heading1": "The Logic", "source_page_url": "https://gradio.app/guides/streaming-ai-generated-audio", "source_page_title": "Streaming - Streaming Ai Generated Audio Guide"}, {"text": "You can see our final application [here](https://huggingface.co/spaces/gradio/magic-8-ball)!\n\n\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/streaming-ai-generated-audio", "source_page_title": "Streaming - Streaming Ai Generated Audio Guide"}, {"text": "Start by installing all the dependencies. Add the following lines to a `requirements.txt` file and run `pip install -r requirements.txt`:\n\n```bash\nopencv-python\ntwilio\ngradio>=5.0\ngradio-webrtc\nonnxruntime-gpu\n```\n\nWe'll use the ONNX runtime to speed up YOLOv10 inference. This guide assumes you have access to a GPU. If you don't, change `onnxruntime-gpu` to `onnxruntime`. Without a GPU, the model will run slower, resulting in a laggy demo.\n\nWe'll use OpenCV for image manipulation and the [Gradio WebRTC](https://github.com/freddyaboulton/gradio-webrtc) custom component to use [WebRTC](https://webrtc.org/) under the hood, achieving near-zero latency.\n\n**Note**: If you want to deploy this app on any cloud provider, you'll need to use the free Twilio API for their [TURN servers](https://www.twilio.com/docs/stun-turn). Create a free account on Twilio. If you're not familiar with TURN servers, consult this [guide](https://www.twilio.com/docs/stun-turn/faq#faq-what-is-nat).\n\n
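For reference, exchanging your Twilio account credentials for short-lived TURN credentials and wiring them into the `rtc_configuration` used in the demo below might look like this sketch (the environment variable names are illustrative):\n\n```python\nimport os\n\nfrom twilio.rest import Client\n\n# Exchange long-lived account credentials for ephemeral TURN credentials\nclient = Client(os.environ[\"TWILIO_ACCOUNT_SID\"], os.environ[\"TWILIO_AUTH_TOKEN\"])\ntoken = client.tokens.create()\n\nrtc_configuration = {\n    \"iceServers\": token.ice_servers,\n}\n```\n\n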
Create a free account on Twilio. If you're not familiar with TURN servers, consult this [guide](https://www.twilio.com/docs/stun-turn/faqfaq-what-is-nat).\n\n", "heading1": "Setting up", "source_page_url": "https://gradio.app/guides/object-detection-from-webcam-with-webrtc", "source_page_title": "Streaming - Object Detection From Webcam With Webrtc Guide"}, {"text": "We'll download the YOLOv10 model from the Hugging Face hub and instantiate a custom inference class to use this model. \n\nThe implementation of the inference class isn't covered in this guide, but you can find the source code [here](https://huggingface.co/spaces/freddyaboulton/webrtc-yolov10n/blob/main/inference.pyL9) if you're interested. This implementation borrows heavily from this [github repository](https://github.com/ibaiGorordo/ONNX-YOLOv8-Object-Detection).\n\nWe're using the `yolov10-n` variant because it has the lowest latency. See the [Performance](https://github.com/THU-MIG/yolov10?tab=readme-ov-fileperformance) section of the README in the YOLOv10 GitHub repository.\n\n```python\nfrom huggingface_hub import hf_hub_download\nfrom inference import YOLOv10\n\nmodel_file = hf_hub_download(\n repo_id=\"onnx-community/yolov10n\", filename=\"onnx/model.onnx\"\n)\n\nmodel = YOLOv10(model_file)\n\ndef detection(image, conf_threshold=0.3):\n image = cv2.resize(image, (model.input_width, model.input_height))\n new_image = model.detect_objects(image, conf_threshold)\n return new_image\n```\n\nOur inference function, `detection`, accepts a numpy array from the webcam and a desired confidence threshold. Object detection models like YOLO identify many objects and assign a confidence score to each. The lower the confidence, the higher the chance of a false positive. We'll let users adjust the confidence threshold.\n\nThe function returns a numpy array corresponding to the same input image with all detected objects in bounding boxes.\n\n", "heading1": "The Inference Function", "source_page_url": "https://gradio.app/guides/object-detection-from-webcam-with-webrtc", "source_page_title": "Streaming - Object Detection From Webcam With Webrtc Guide"}, {"text": "The Gradio demo is straightforward, but we'll implement a few specific features:\n\n1. Use the `WebRTC` custom component to ensure input and output are sent to/from the server with WebRTC. \n2. The [WebRTC](https://github.com/freddyaboulton/gradio-webrtc) component will serve as both an input and output component.\n3. Utilize the `time_limit` parameter of the `stream` event. This parameter sets a processing time for each user's stream. In a multi-user setting, such as on Spaces, we'll stop processing the current user's stream after this period and move on to the next. \n\nWe'll also apply custom CSS to center the webcam and slider on the page.\n\n```python\nimport gradio as gr\nfrom gradio_webrtc import WebRTC\n\ncss = \"\"\".my-group {max-width: 600px !important; max-height: 600px !important;}\n .my-column {display: flex !important; justify-content: center !important; align-items: center !important;}\"\"\"\n\nwith gr.Blocks(css=css) as demo:\n gr.HTML(\n \"\"\"\n
<h1 style='text-align: center'>\n    YOLOv10 Webcam Stream (Powered by WebRTC \u26a1\ufe0f)\n    </h1>
\n \"\"\"\n )\n with gr.Column(elem_classes=[\"my-column\"]):\n with gr.Group(elem_classes=[\"my-group\"]):\n image = WebRTC(label=\"Stream\", rtc_configuration=rtc_configuration)\n conf_threshold = gr.Slider(\n label=\"Confidence Threshold\",\n minimum=0.0,\n maximum=1.0,\n step=0.05,\n value=0.30,\n )\n\n image.stream(\n fn=detection, inputs=[image, conf_threshold], outputs=[image], time_limit=10\n )\n\nif __name__ == \"__main__\":\n demo.launch()\n```\n\n", "heading1": "The Gradio Demo", "source_page_url": "https://gradio.app/guides/object-detection-from-webcam-with-webrtc", "source_page_title": "Streaming - Object Detection From Webcam With Webrtc Guide"}, {"text": "Our app is hosted on Hugging Face Spaces [here](https://huggingface.co/spaces/freddyaboulton/webrtc-yolov10n). \n\nYou can use this app as a starting point to build real-time image applications with Gradio. Don't hesitate to open issues in the space or in the [WebRTC component GitHub repo](https://github.com/freddyaboulton/gradio-webrtc) if you have any questions or encounter problems.", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/object-detection-from-webcam-with-webrtc", "source_page_title": "Streaming - Object Detection From Webcam With Webrtc Guide"}, {"text": "The frontend code should have, at minimum, three files:\n\n* `Index.svelte`: This is the main export and where your component's layout and logic should live.\n* `Example.svelte`: This is where the example view of the component is defined.\n\nFeel free to add additional files and subdirectories. \nIf you want to export any additional modules, remember to modify the `package.json` file\n\n```json\n\"exports\": {\n \".\": \"./Index.svelte\",\n \"./example\": \"./Example.svelte\",\n \"./package.json\": \"./package.json\"\n},\n```\n\n", "heading1": "The directory structure", "source_page_url": "https://gradio.app/guides/frontend", "source_page_title": "Custom Components - Frontend Guide"}, {"text": "Your component should expose the following props that will be passed down from the parent Gradio application.\n\n```typescript\nimport type { LoadingStatus } from \"@gradio/statustracker\";\nimport type { Gradio } from \"@gradio/utils\";\n\nexport let gradio: Gradio<{\n event_1: never;\n event_2: never;\n}>;\n\nexport let elem_id = \"\";\nexport let elem_classes: string[] = [];\nexport let scale: number | null = null;\nexport let min_width: number | undefined = undefined;\nexport let loading_status: LoadingStatus | undefined = undefined;\nexport let mode: \"static\" | \"interactive\";\n```\n\n* `elem_id` and `elem_classes` allow Gradio app developers to target your component with custom CSS and JavaScript from the Python `Blocks` class.\n\n* `scale` and `min_width` allow Gradio app developers to control how much space your component takes up in the UI.\n\n* `loading_status` is used to display a loading status over the component when it is the output of an event.\n\n* `mode` is how the parent Gradio app tells your component whether the `interactive` or `static` version should be displayed.\n\n* `gradio`: The `gradio` object is created by the parent Gradio app. It stores some application-level configuration that will be useful in your component, like internationalization. You must use it to dispatch events from your component.\n\nA minimal `Index.svelte` file would look like:\n\n```svelte\n\n\n\n\n\n\t{if loading_status}\n\t\t\n\t{/if}\n
\n\t<p>{value}</p>\n</div>
\n\n```\n\n", "heading1": "The Index.svelte file", "source_page_url": "https://gradio.app/guides/frontend", "source_page_title": "Custom Components - Frontend Guide"}, {"text": "The `Example.svelte` file should expose the following props:\n\n```typescript\n export let value: string;\n export let type: \"gallery\" | \"table\";\n export let selected = false;\n export let index: number;\n```\n\n* `value`: The example value that should be displayed.\n\n* `type`: This is a variable that can be either `\"gallery\"` or `\"table\"` depending on how the examples are displayed. The `\"gallery\"` form is used when the examples correspond to a single input component, while the `\"table\"` form is used when a user has multiple input components, and the examples need to populate all of them. \n\n* `selected`: You can also adjust how the examples are displayed if a user \"selects\" a particular example by using the selected variable.\n\n* `index`: The current index of the selected value.\n\n* Any additional props your \"non-example\" component takes!\n\nThis is the `Example.svelte` file for the code `Radio` component:\n\n```svelte\n\n\n\n\t{value}\n\n\n\n```\n\n", "heading1": "The Example.svelte file", "source_page_url": "https://gradio.app/guides/frontend", "source_page_title": "Custom Components - Frontend Guide"}, {"text": "If your component deals with files, these files **should** be uploaded to the backend server. \nThe `@gradio/client` npm package provides the `upload` and `prepare_files` utility functions to help you do this.\n\nThe `prepare_files` function will convert the browser's `File` datatype to gradio's internal `FileData` type.\nYou should use the `FileData` data in your component to keep track of uploaded files.\n\nThe `upload` function will upload an array of `FileData` values to the server.\n\nHere's an example of loading files from an `` element when its value changes.\n\n\n```svelte\n\n\n\n```\n\nThe component exposes a prop named `root`. \nThis is passed down by the parent gradio app and it represents the base url that the files will be uploaded to and fetched from.\n\nFor WASM support, you should get the upload function from the `Context` and pass that as the third parameter of the `upload` function.\n\n```typescript\n\n```\n\n", "heading1": "Handling Files", "source_page_url": "https://gradio.app/guides/frontend", "source_page_title": "Custom Components - Frontend Guide"}, {"text": "Most of Gradio's frontend components are published on [npm](https://www.npmjs.com/), the javascript package repository.\nThis means that you can use them to save yourself time while incorporating common patterns in your component, like uploading files.\nFor example, the `@gradio/upload` package has `Upload` and `ModifyUpload` components for properly uploading files to the Gradio server. \nHere is how you can use them to create a user interface to upload and display PDF files.\n\n```svelte\n\n\n\n{if value === null && interactive}\n \n \n \n{:else if value !== null}\n {if interactive}\n \n {/if}\n \n{:else}\n \t\n{/if}\n```\n\nYou can also combine existing Gradio components to create entirely unique experiences.\nLike rendering a gallery of chatbot conversations. 
\nThe possibilities are endless, please read the documentation on our javascript packages [here](https://gradio.app/main/docs/js).\nWe'll be adding more packages and documentation over the coming weeks!\n\n", "heading1": "Leveraging Existing Gradio Components", "source_page_url": "https://gradio.app/guides/frontend", "source_page_title": "Custom Components - Frontend Guide"}, {"text": "You can explore our component library via Storybook. You'll be able to interact with our components and see them in their various states.\n\nFor those interested in design customization, we provide the CSS variables consisting of our color palette, radii, spacing, and the icons we use - so you can easily match up your custom component with the style of our core components. This Storybook will be regularly updated with any new additions or changes.\n\n[Storybook Link](https://gradio.app/main/docs/js/storybook)\n\n", "heading1": "Matching Gradio Core's Design System", "source_page_url": "https://gradio.app/guides/frontend", "source_page_title": "Custom Components - Frontend Guide"}, {"text": "If you want to make use of the vast vite ecosystem, you can use the `gradio.config.js` file to configure your component's build process. This allows you to make use of tools like tailwindcss, mdsvex, and more.\n\nCurrently, it is possible to configure the following:\n\nVite options:\n- `plugins`: A list of vite plugins to use.\n\nSvelte options:\n- `preprocess`: A list of svelte preprocessors to use.\n- `extensions`: A list of file extensions to compile to `.svelte` files.\n- `build.target`: The target to build for, this may be necessary to support newer javascript features. See the [esbuild docs](https://esbuild.github.io/api/target) for more information.\n\nThe `gradio.config.js` file should be placed in the root of your component's `frontend` directory. A default config file is created for you when you create a new component. But you can also create your own config file, if one doesn't exist, and use it to customize your component's build process.\n\nExample for a Vite plugin\n\nCustom components can use Vite plugins to customize the build process. Check out the [Vite Docs](https://vitejs.dev/guide/using-plugins.html) for more information. \n\nHere we configure [TailwindCSS](https://tailwindcss.com), a utility-first CSS framework. Setup is easiest using the version 4 prerelease. \n\n```\nnpm install tailwindcss@next @tailwindcss/vite@next\n```\n\nIn `gradio.config.js`:\n\n```typescript\nimport tailwindcss from \"@tailwindcss/vite\";\nexport default {\n plugins: [tailwindcss()]\n};\n```\n\nThen create a `style.css` file with the following content:\n\n```css\n@import \"tailwindcss\";\n```\n\nImport this file into `Index.svelte`. 
Note, that you need to import the css file containing `@import` and cannot just use a `\n```\n\nNow import `PdfUploadText.svelte` in your `\n\n\n\t\n\n\n\n```\n\n\nTip: Exercise for the reader - reduce the code duplication between `Index.svelte` and `Example.svelte` \ud83d\ude0a\n\n\nYou will not be able to render examples until we make some changes to the backend code in the next step!\n\n", "heading1": "Step 8.5: The Example view", "source_page_url": "https://gradio.app/guides/pdf-component-example", "source_page_title": "Custom Components - Pdf Component Example Guide"}, {"text": "The backend changes needed are smaller.\nWe're almost done!\n\nWhat we're going to do is:\n* Add `change` and `upload` events to our component.\n* Add a `height` property to let users control the height of the PDF.\n* Set the `data_model` of our component to be `FileData`. This is so that Gradio can automatically cache and safely serve any files that are processed by our component.\n* Modify the `preprocess` method to return a string corresponding to the path of our uploaded PDF.\n* Modify the `postprocess` to turn a path to a PDF created in an event handler to a `FileData`.\n\nWhen all is said an done, your component's backend code should look like this:\n\n```python\nfrom __future__ import annotations\nfrom typing import Any, Callable, TYPE_CHECKING\n\nfrom gradio.components.base import Component\nfrom gradio.data_classes import FileData\nfrom gradio import processing_utils\nif TYPE_CHECKING:\n from gradio.components import Timer\n\nclass PDF(Component):\n\n EVENTS = [\"change\", \"upload\"]\n\n data_model = FileData\n\n def __init__(self, value: Any = None, *,\n height: int | None = None,\n label: str | I18nData | None = None,\n info: str | I18nData | None = None,\n show_label: bool | None = None,\n container: bool = True,\n scale: int | None = None,\n min_width: int | None = None,\n interactive: bool | None = None,\n visible: bool = True,\n elem_id: str | None = None,\n elem_classes: list[str] | str | None = None,\n render: bool = True,\n load_fn: Callable[..., Any] | None = None,\n every: Timer | float | None = None):\n super().__init__(value, label=label, info=info,\n show_label=show_label, container=container,\n scale=scale, min_width=min_width,\n interactive=interactive, visible=visible,\n ", "heading1": "Step 9: The backend", "source_page_url": "https://gradio.app/guides/pdf-component-example", "source_page_title": "Custom Components - Pdf Component Example Guide"}, {"text": " show_label=show_label, container=container,\n scale=scale, min_width=min_width,\n interactive=interactive, visible=visible,\n elem_id=elem_id, elem_classes=elem_classes,\n render=render, load_fn=load_fn, every=every)\n self.height = height\n\n def preprocess(self, payload: FileData) -> str:\n return payload.path\n\n def postprocess(self, value: str | None) -> FileData:\n if not value:\n return None\n return FileData(path=value)\n\n def example_payload(self):\n return \"https://gradio-builds.s3.amazonaws.com/assets/pdf-guide/fw9.pdf\"\n\n def example_value(self):\n return \"https://gradio-builds.s3.amazonaws.com/assets/pdf-guide/fw9.pdf\"\n```\n\n", "heading1": "Step 9: The backend", "source_page_url": "https://gradio.app/guides/pdf-component-example", "source_page_title": "Custom Components - Pdf Component Example Guide"}, {"text": "To test our backend code, let's add a more complex demo that performs Document Question and Answering with huggingface transformers.\n\nIn our `demo` directory, create a `requirements.txt` file with 
the following packages:\n\n```\ntorch\ntransformers\npdf2image\npytesseract\n```\n\n\nTip: Remember to install these yourself and restart the dev server! You may need to install extra non-python dependencies for `pdf2image`. See [here](https://pypi.org/project/pdf2image/). Feel free to write your own demo if you have trouble.\n\n\n```python\nimport gradio as gr\nfrom gradio_pdf import PDF\nfrom pdf2image import convert_from_path\nfrom transformers import pipeline\nfrom pathlib import Path\n\ndir_ = Path(__file__).parent\n\np = pipeline(\n    \"document-question-answering\",\n    model=\"impira/layoutlm-document-qa\",\n)\n\ndef qa(question: str, doc: str) -> str:\n    img = convert_from_path(doc)[0]\n    output = p(img, question)\n    return sorted(output, key=lambda x: x[\"score\"], reverse=True)[0]['answer']\n\n\ndemo = gr.Interface(\n    qa,\n    [gr.Textbox(label=\"Question\"), PDF(label=\"Document\")],\n    gr.Textbox(),\n)\n\ndemo.launch()\n```\n\nSee our demo in action below!\n\n\n\nFinally, let's build our component with `gradio cc build` and publish it with the `gradio cc publish` command!\nThis will guide you through the process of uploading your component to [PyPi](https://pypi.org/) and [HuggingFace Spaces](https://huggingface.co/spaces).\n\n\nTip: You may need to add the following lines to the `Dockerfile` of your HuggingFace Space.\n\n```Dockerfile\nRUN mkdir -p /tmp/cache/\nRUN chmod a+rwx -R /tmp/cache/\nRUN apt-get update && apt-get install -y poppler-utils tesseract-ocr\n\nENV TRANSFORMERS_CACHE=/tmp/cache/\n```\n\n", "heading1": "Step 10: Add a demo and publish!", "source_page_url": "https://gradio.app/guides/pdf-component-example", "source_page_title": "Custom Components - Pdf Component Example Guide"}, {"text": "In order to use our new component in **any** gradio 4.0 app, simply install it with pip, e.g. `pip install gradio-pdf`. Then you can use it like the built-in `gr.File()` component (except that it will only accept and display PDF files).\n\nHere is a simple demo with the Blocks API:\n\n```python\nimport gradio as gr\nfrom gradio_pdf import PDF\n\nwith gr.Blocks() as demo:\n    pdf = PDF(label=\"Upload a PDF\", interactive=True)\n    name = gr.Textbox()\n    pdf.upload(lambda f: f, pdf, name)\n\ndemo.launch()\n```\n\n\nI hope you enjoyed this tutorial!\nThe complete source code for our component is [here](https://huggingface.co/spaces/freddyaboulton/gradio_pdf/tree/main/src).\nPlease don't hesitate to reach out to the gradio community on the [HuggingFace Discord](https://discord.gg/hugging-face-879548962464493619) if you get stuck.\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/pdf-component-example", "source_page_title": "Custom Components - Pdf Component Example Guide"}, {"text": "The documentation will be generated when running `gradio cc build`. You can pass the `--no-generate-docs` argument to turn off this behaviour.\n\nThere is also a standalone `docs` command that allows for greater customisation. 
If you are running this command manually it should be run _after_ the `version` in your `pyproject.toml` has been bumped but before building the component.\n\nAll arguments are optional.\n\n```bash\ngradio cc docs\n path The directory of the custom component.\n --demo-dir Path to the demo directory.\n --demo-name Name of the demo file\n --space-url URL of the Hugging Face Space to link to\n --generate-space create a documentation space.\n --no-generate-space do not create a documentation space\n --readme-path Path to the README.md file.\n --generate-readme create a README.md file\n --no-generate-readme do not create a README.md file\n --suppress-demo-check suppress validation checks and warnings\n```\n\n", "heading1": "How do I use it?", "source_page_url": "https://gradio.app/guides/documenting-custom-components", "source_page_title": "Custom Components - Documenting Custom Components Guide"}, {"text": "The `gradio cc docs` command will generate an interactive Gradio app and a static README file with various features. You can see an example here:\n\n- [Gradio app deployed on Hugging Face Spaces]()\n- [README.md rendered by GitHub]()\n\nThe README.md and space both have the following features:\n\n- A description.\n- Installation instructions.\n- A fully functioning code snippet.\n- Optional links to PyPi, GitHub, and Hugging Face Spaces.\n- API documentation including:\n - An argument table for component initialisation showing types, defaults, and descriptions.\n - A description of how the component affects the user's predict function.\n - A table of events and their descriptions.\n - Any additional interfaces or classes that may be used during initialisation or in the pre- or post- processors.\n\nAdditionally, the Gradio app includes:\n\n- A live demo.\n- A richer, interactive version of the parameter tables.\n- Nicer styling!\n\n", "heading1": "What gets generated?", "source_page_url": "https://gradio.app/guides/documenting-custom-components", "source_page_title": "Custom Components - Documenting Custom Components Guide"}, {"text": "The documentation generator uses existing standards to extract the necessary information, namely Type Hints and Docstrings. There are no Gradio-specific APIs for documentation, so following best practices will generally yield the best results.\n\nIf you already use type hints and docstrings in your component source code, you don't need to do much to benefit from this feature, but there are some details that you should be aware of.\n\nPython version\n\nTo get the best documentation experience, you need to use Python `3.10` or greater when generating documentation. This is because some introspection features used to generate the documentation were only added in `3.10`.\n\nType hints\n\nPython type hints are used extensively to provide helpful information for users. \n\n
\n What are type hints?\n\n\nIf you need to become more familiar with type hints in Python, they are a simple way to express what Python types are expected for arguments and return values of functions and methods. They provide a helpful in-editor experience, aid in maintenance, and integrate with various other tools. These types can be simple primitives, like `list`, `str`, `bool`; they could be more compound types like `list[str]`, `str | None` or `tuple[str, float | int]`; or they can be more complex types using utility classes like [`TypedDict`](https://peps.python.org/pep-0589/abstract).\n\n[Read more about type hints in Python.](https://realpython.com/lessons/type-hinting/)\n\n
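As a quick, self-contained illustration of these kinds of hints (a hypothetical function, not part of any Gradio API; note that the `str | None` syntax requires Python `3.10`+):\n\n```python\nfrom typing import TypedDict\n\n# A TypedDict describes the expected shape of a dictionary.\nclass Waveform(TypedDict):\n    sample_rate: int\n    data: list[float]\n\n# Simple (`str | None`) and compound (`tuple[str, float]`) hints.\ndef summarize(wave: Waveform, label: str | None = None) -> tuple[str, float]:\n    peak = max(wave[\"data\"], default=0.0)\n    return (label or \"waveform\", peak)\n```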
\n\nWhat do I need to add hints to?\n\nYou do not need to add type hints to every part of your code. For the documentation to work correctly, you will need to add type hints to the following component methods:\n\n- `__init__` parameters should be typed.\n- `postprocess` parameters and return value should be typed.\n- `preprocess` parameters and return value should be typed.\n\nIf you are using `gradio cc create`, these types should already exist, but you may need to tweak them based on any changes you ma", "heading1": "What do I need to do?", "source_page_url": "https://gradio.app/guides/documenting-custom-components", "source_page_title": "Custom Components - Documenting Custom Components Guide"}, {"text": "be typed.\n- `preprocess` parameters and return value should be typed.\n\nIf you are using `gradio cc create`, these types should already exist, but you may need to tweak them based on any changes you make.\n\n`__init__`\n\nHere, you only need to type the parameters. If you have cloned a template with `gradio cc create`, these should already be in place. You will only need to add new hints for anything you have added or changed:\n\n```py\ndef __init__(\n    self,\n    value: str | None = None,\n    *,\n    sources: Literal[\"upload\", \"microphone\"] = \"upload\",\n    every: Timer | float | None = None,\n    ...\n):\n    ...\n```\n\n`preprocess` and `postprocess`\n\nThe `preprocess` and `postprocess` methods determine the value passed to the user function and the value that needs to be returned.\n\nEven if the design of your component is primarily as an input or an output, it is worth adding type hints to both the input parameters and the return values because Gradio has no way of limiting how components can be used.\n\nIn this case, we specifically care about:\n\n- The return type of `preprocess`.\n- The input type of `postprocess`.\n\n```py\ndef preprocess(\n    self, payload: FileData | None  # input is optional\n) -> tuple[int, str] | str | None:\n\n# user function input is the preprocess return \u25b2\n# user function output is the postprocess input \u25bc\n\ndef postprocess(\n    self, value: tuple[int, str] | None\n) -> FileData | bytes | None:  # return is optional\n    ...\n```\n\nDocstrings\n\nDocstrings are also used extensively to extract more meaningful, human-readable descriptions of certain parts of the API.\n\n
\n What are docstrings?\n\n\nIf you need to become more familiar with docstrings in Python, they are a way to annotate parts of your code with human-readable decisions and explanations. They offer a rich in-editor experience like type hints, but unlike type hints, they don't have any specific syntax requirements. They are simple strings and can take almost any form. The only requirement i", "heading1": "What do I need to do?", "source_page_url": "https://gradio.app/guides/documenting-custom-components", "source_page_title": "Custom Components - Documenting Custom Components Guide"}, {"text": "offer a rich in-editor experience like type hints, but unlike type hints, they don't have any specific syntax requirements. They are simple strings and can take almost any form. The only requirement is where they appear. Docstrings should be \"a string literal that occurs as the first statement in a module, function, class, or method definition\".\n\n[Read more about Python docstrings.](https://peps.python.org/pep-0257/what-is-a-docstring)\n\n
\n\nWhile docstrings don't have any syntax requirements, we need a particular structure for documentation purposes.\n\nAs with type hint, the specific information we care about is as follows:\n\n- `__init__` parameter docstrings.\n- `preprocess` return docstrings.\n- `postprocess` input parameter docstrings.\n\nEverything else is optional.\n\nDocstrings should always take this format to be picked up by the documentation generator:\n\nClasses\n\n```py\n\"\"\"\nA description of the class.\n\nThis can span multiple lines and can _contain_ *markdown*.\n\"\"\"\n```\n\nMethods and functions \n\nMarkdown in these descriptions will not be converted into formatted text.\n\n```py\n\"\"\"\nParameters:\n param_one: A description for this parameter.\n param_two: A description for this parameter.\nReturns:\n A description for this return value.\n\"\"\"\n```\n\nEvents\n\nIn custom components, events are expressed as a list stored on the `events` field of the component class. While we do not need types for events, we _do_ need a human-readable description so users can understand the behaviour of the event.\n\nTo facilitate this, we must create the event in a specific way.\n\nThere are two ways to add events to a custom component.\n\nBuilt-in events\n\nGradio comes with a variety of built-in events that may be enough for your component. If you are using built-in events, you do not need to do anything as they already have descriptions we can extract:\n\n```py\nfrom gradio.events import Events\n\nclass ParamViewer(Component):\n ...\n\n EVENTS = [\n Events.change,\n Events.up", "heading1": "What do I need to do?", "source_page_url": "https://gradio.app/guides/documenting-custom-components", "source_page_title": "Custom Components - Documenting Custom Components Guide"}, {"text": "do not need to do anything as they already have descriptions we can extract:\n\n```py\nfrom gradio.events import Events\n\nclass ParamViewer(Component):\n ...\n\n EVENTS = [\n Events.change,\n Events.upload,\n ]\n```\n\nCustom events\n\nYou can define a custom event if the built-in events are unsuitable for your use case. This is a straightforward process, but you must create the event in this way for docstrings to work correctly:\n\n```py\nfrom gradio.events import Events, EventListener\n\nclass ParamViewer(Component):\n ...\n\n EVENTS = [\n Events.change,\n EventListener(\n \"bingbong\",\n doc=\"This listener is triggered when the user does a bingbong.\"\n )\n ]\n```\n\nDemo\n\nThe `demo/app.py`, often used for developing the component, generates the live demo and code snippet. The only strict rule here is that the `demo.launch()` command must be contained with a `__name__ == \"__main__\"` conditional as below:\n\n```py\nif __name__ == \"__main__\":\n demo.launch()\n```\n\nThe documentation generator will scan for such a clause and error if absent. If you are _not_ launching the demo inside the `demo/app.py`, then you can pass `--suppress-demo-check` to turn off this check.\n\nDemo recommendations\n\nAlthough there are no additional rules, there are some best practices you should bear in mind to get the best experience from the documentation generator.\n\nThese are only guidelines, and every situation is unique, but they are sound principles to remember.\n\nKeep the demo compact\n\nCompact demos look better and make it easier for users to understand what the demo does. Try to remove as many extraneous UI elements as possible to focus the users' attention on the core use case. 
\n\nSometimes, it might make sense to have a `demo/app.py` just for the docs and an additional, more complex app for your testing purposes. You can also create other spaces, showcasing more complex examples and linking to them from the main class docstring or the `pyproject.toml` description.\n\n", "heading1": "What do I need to do?", "source_page_url": "https://gradio.app/guides/documenting-custom-components", "source_page_title": "Custom Components - Documenting Custom Components Guide"}, {"text": "ore complex app for your testing purposes. You can also create other spaces, showcasing more complex examples and linking to them from the main class docstring or the `pyproject.toml` description.\n\nKeep the code concise\n\nThe 'getting started' snippet utilises the demo code, which should be as short as possible to keep users engaged and avoid confusion.\n\nIt isn't the job of the sample snippet to demonstrate the whole API; this snippet should be the shortest path to success for a new user. It should be easy to type or copy-paste and easy to understand. Explanatory comments should be brief and to the point.\n\nAvoid external dependencies\n\nAs mentioned above, users should be able to copy-paste a snippet and have a fully working app. Try to avoid third-party library dependencies to facilitate this.\n\nYou should carefully consider any examples; avoiding examples that require additional files or that make assumptions about the environment is generally a good idea.\n\nEnsure the `demo` directory is self-contained\n\nOnly the `demo` directory will be uploaded to Hugging Face spaces in certain instances, as the component will be installed via PyPi if possible. It is essential that this directory is self-contained and any files needed for the correct running of the demo are present.\n\nAdditional URLs\n\nThe documentation generator will generate a few buttons, providing helpful information and links to users. They are obtained automatically in some cases, but some need to be explicitly included in the `pyproject.toml`. 
\n\n- PyPi Version and link - This is generated automatically.\n- GitHub Repository - This is populated via the `pyproject.toml`'s `project.urls.repository`.\n- Hugging Face Space - This is populated via the `pyproject.toml`'s `project.urls.space`.\n\nAn example `pyproject.toml` urls section might look like this:\n\n```toml\n[project.urls]\nrepository = \"https://github.com/user/repo-name\"\nspace = \"https://huggingface.co/spaces/user/space-name\"\n```", "heading1": "What do I need to do?", "source_page_url": "https://gradio.app/guides/documenting-custom-components", "source_page_title": "Custom Components - Documenting Custom Components Guide"}, {"text": "pyproject.toml` urls section might look like this:\n\n```toml\n[project.urls]\nrepository = \"https://github.com/user/repo-name\"\nspace = \"https://huggingface.co/spaces/user/space-name\"\n```", "heading1": "What do I need to do?", "source_page_url": "https://gradio.app/guides/documenting-custom-components", "source_page_title": "Custom Components - Documenting Custom Components Guide"}, {"text": "For this demo we will be tweaking the existing Gradio `Chatbot` component to display text and media files in the same message.\nLet's create a new custom component directory by templating off of the `Chatbot` component source code.\n\n```bash\ngradio cc create MultimodalChatbot --template Chatbot\n```\n\nAnd we're ready to go!\n\nTip: Make sure to modify the `Author` key in the `pyproject.toml` file.\n\n", "heading1": "Part 1 - Creating our project", "source_page_url": "https://gradio.app/guides/multimodal-chatbot-part1", "source_page_title": "Custom Components - Multimodal Chatbot Part1 Guide"}, {"text": "Open up the `multimodalchatbot.py` file in your favorite code editor and let's get started modifying the backend of our component.\n\nThe first thing we will do is create the `data_model` of our component.\nThe `data_model` is the data format that your python component will receive and send to the javascript client running the UI.\nYou can read more about the `data_model` in the [backend guide](./backend).\n\nFor our component, each chatbot message will consist of two keys: a `text` key that displays the text message and an optional list of media files that can be displayed underneath the text.\n\nImport the `FileData` and `GradioModel` classes from `gradio.data_classes` and modify the existing `ChatbotData` class to look like the following:\n\n```python\nclass FileMessage(GradioModel):\n file: FileData\n alt_text: Optional[str] = None\n\n\nclass MultimodalMessage(GradioModel):\n text: Optional[str] = None\n files: Optional[List[FileMessage]] = None\n\n\nclass ChatbotData(GradioRootModel):\n root: List[Tuple[Optional[MultimodalMessage], Optional[MultimodalMessage]]]\n\n\nclass MultimodalChatbot(Component):\n ...\n data_model = ChatbotData\n```\n\n\nTip: The `data_model`s are implemented using `Pydantic V2`. Read the documentation [here](https://docs.pydantic.dev/latest/).\n\nWe've done the hardest part already!\n\n", "heading1": "Part 2a - The backend data_model", "source_page_url": "https://gradio.app/guides/multimodal-chatbot-part1", "source_page_title": "Custom Components - Multimodal Chatbot Part1 Guide"}, {"text": "For the `preprocess` method, we will keep it simple and pass a list of `MultimodalMessage`s to the python functions that use this component as input. 
\nThis will let users of our component access the chatbot data with `.text` and `.files` attributes.\nThis is a design choice that you can modify in your implementation!\nWe can return the list of messages with the `root` property of the `ChatbotData` like so:\n\n```python\ndef preprocess(\n self,\n payload: ChatbotData | None,\n) -> List[MultimodalMessage] | None:\n if payload is None:\n return payload\n return payload.root\n```\n\n\nTip: Learn about the reasoning behind the `preprocess` and `postprocess` methods in the [key concepts guide](./key-component-concepts)\n\nIn the `postprocess` method we will coerce each message returned by the python function to be a `MultimodalMessage` class. \nWe will also clean up any indentation in the `text` field so that it can be properly displayed as markdown in the frontend.\n\nWe can leave the `postprocess` method as is and modify the `_postprocess_chat_messages`\n\n```python\ndef _postprocess_chat_messages(\n self, chat_message: MultimodalMessage | dict | None\n) -> MultimodalMessage | None:\n if chat_message is None:\n return None\n if isinstance(chat_message, dict):\n chat_message = MultimodalMessage(**chat_message)\n chat_message.text = inspect.cleandoc(chat_message.text or \"\")\n for file_ in chat_message.files:\n file_.file.mime_type = client_utils.get_mimetype(file_.file.path)\n return chat_message\n```\n\nBefore we wrap up with the backend code, let's modify the `example_value` and `example_payload` method to return a valid dictionary representation of the `ChatbotData`:\n\n```python\ndef example_value(self) -> Any:\n return [[{\"text\": \"Hello!\", \"files\": []}, None]]\n\ndef example_payload(self) -> Any:\n return [[{\"text\": \"Hello!\", \"files\": []}, None]]\n```\n\nCongrats - the backend is complete!\n\n", "heading1": "Part 2b - The pre and postprocess methods", "source_page_url": "https://gradio.app/guides/multimodal-chatbot-part1", "source_page_title": "Custom Components - Multimodal Chatbot Part1 Guide"}, {"text": "The frontend for the `Chatbot` component is divided into two parts - the `Index.svelte` file and the `shared/Chatbot.svelte` file.\nThe `Index.svelte` file applies some processing to the data received from the server and then delegates the rendering of the conversation to the `shared/Chatbot.svelte` file.\nFirst we will modify the `Index.svelte` file to apply processing to the new data type the backend will return.\n\nLet's begin by porting our custom types from our python `data_model` to typescript.\nOpen `frontend/shared/utils.ts` and add the following type definitions at the top of the file:\n\n```ts\nexport type FileMessage = {\n\tfile: FileData;\n\talt_text?: string;\n};\n\n\nexport type MultimodalMessage = {\n\ttext: string;\n\tfiles?: FileMessage[];\n}\n```\n\nNow let's import them in `Index.svelte` and modify the type annotations for `value` and `_value`.\n\n```ts\nimport type { FileMessage, MultimodalMessage } from \"./shared/utils\";\n\nexport let value: [\n MultimodalMessage | null,\n MultimodalMessage | null\n][] = [];\n\nlet _value: [\n MultimodalMessage | null,\n MultimodalMessage | null\n][];\n```\n\nWe need to normalize each message to make sure each file has a proper URL to fetch its contents from.\nWe also need to format any embedded file links in the `text` key.\nLet's add a `process_message` utility function and apply it whenever the `value` changes.\n\n```ts\nfunction process_message(msg: MultimodalMessage | null): MultimodalMessage | null {\n if (msg === null) {\n return msg;\n }\n msg.text = 
redirect_src_url(msg.text);\n msg.files = msg.files.map(normalize_messages);\n return msg;\n}\n\n$: _value = value\n ? value.map(([user_msg, bot_msg]) => [\n process_message(user_msg),\n process_message(bot_msg)\n ])\n : [];\n```\n\n", "heading1": "Part 3a - The Index.svelte file", "source_page_url": "https://gradio.app/guides/multimodal-chatbot-part1", "source_page_title": "Custom Components - Multimodal Chatbot Part1 Guide"}, {"text": "Let's begin similarly to the `Index.svelte` file and let's first modify the type annotations.\nImport `Mulimodal` message at the top of the `\n```\n\n3. That's it!\n\nYour website now has a chat widget that connects to your Gradio app! Users can click the chat button to open the widget and start interacting with your app.\n\nCustomization\n\nYou can customize the appearance of the widget by modifying the CSS. Some ideas:\n- Change the colors to match your website's theme\n- Adjust the size and position of the widget\n- Add animations for opening/closing\n- Modify the message styling\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/Screen%20Recording%202024-12-19%20at%203.32.46%E2%80%AFPM.gif)\n\nIf you build a website widget from a Gradio app, feel free to share it on X and tag [the Gradio account](https://x.com/Gradio), and we are hap", "heading1": "Prerequisites", "source_page_url": "https://gradio.app/guides/creating-a-website-widget-from-a-gradio-chatbot", "source_page_title": "Chatbots - Creating A Website Widget From A Gradio Chatbot Guide"}, {"text": "%20Recording%202024-12-19%20at%203.32.46%E2%80%AFPM.gif)\n\nIf you build a website widget from a Gradio app, feel free to share it on X and tag [the Gradio account](https://x.com/Gradio), and we are happy to help you amplify!", "heading1": "Prerequisites", "source_page_url": "https://gradio.app/guides/creating-a-website-widget-from-a-gradio-chatbot", "source_page_title": "Chatbots - Creating A Website Widget From A Gradio Chatbot Guide"}, {"text": "Chatbots are a popular application of large language models (LLMs). Using Gradio, you can easily build a chat application and share that with your users, or try it yourself using an intuitive UI.\n\nThis tutorial uses `gr.ChatInterface()`, which is a high-level abstraction that allows you to create your chatbot UI fast, often with a _few lines of Python_. It can be easily adapted to support multimodal chatbots, or chatbots that require further customization.\n\n**Prerequisites**: please make sure you are using the latest version of Gradio:\n\n```bash\n$ pip install --upgrade gradio\n```\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "If you have a chat server serving an OpenAI-API compatible endpoint (such as Ollama), you can spin up a ChatInterface in a single line of Python. First, also run `pip install openai`. Then, with your own URL, model, and optional token:\n\n```python\nimport gradio as gr\n\ngr.load_chat(\"http://localhost:11434/v1/\", model=\"llama3.2\", token=\"***\").launch()\n```\n\nRead about `gr.load_chat` in [the docs](https://www.gradio.app/docs/gradio/load_chat). 
If you have your own model, keep reading to see how to create an application around any chat model in Python!\n\n", "heading1": "Note for OpenAI-API compatible endpoints", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "To create a chat application with `gr.ChatInterface()`, the first thing you should do is define your **chat function**. In the simplest case, your chat function should accept two arguments: `message` and `history` (the arguments can be named anything, but must be in this order).\n\n- `message`: a `str` representing the user's most recent message.\n- `history`: a list of openai-style dictionaries with `role` and `content` keys, representing the previous conversation history. May also include additional keys representing message metadata.\n\nFor example, the `history` could look like this:\n\n```python\n[\n {\"role\": \"user\", \"content\": \"What is the capital of France?\"},\n {\"role\": \"assistant\", \"content\": \"Paris\"}\n]\n```\n\nwhile the next `message` would be:\n\n```py\n\"And what is its largest city?\"\n```\n\nYour chat function simply needs to return: \n\n* a `str` value, which is the chatbot's response based on the chat `history` and most recent `message`, for example, in this case:\n\n```\nParis is also the largest city.\n```\n\nLet's take a look at a few example chat functions:\n\n**Example: a chatbot that randomly responds with yes or no**\n\nLet's write a chat function that responds `Yes` or `No` randomly.\n\nHere's our chat function:\n\n```python\nimport random\n\ndef random_response(message, history):\n return random.choice([\"Yes\", \"No\"])\n```\n\nNow, we can plug this into `gr.ChatInterface()` and call the `.launch()` method to create the web interface:\n\n```python\nimport gradio as gr\n\ngr.ChatInterface(\n fn=random_response, \n type=\"messages\"\n).launch()\n```\n\nTip: Always set type=\"messages\" in gr.ChatInterface. The default value (type=\"tuples\") is deprecated and will be removed in a future version of Gradio.\n\nThat's it! Here's our running demo, try it out:\n\n$demo_chatinterface_random_response\n\n**Example: a chatbot that alternates between agreeing and disagreeing**\n\nOf course, the previous example was very simplistic, it didn't take user input or the previous history into account! Here's another", "heading1": "Defining a chat function", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "ample: a chatbot that alternates between agreeing and disagreeing**\n\nOf course, the previous example was very simplistic, it didn't take user input or the previous history into account! Here's another simple example showing how to incorporate a user's input as well as the history.\n\n```python\nimport gradio as gr\n\ndef alternatingly_agree(message, history):\n if len([h for h in history if h['role'] == \"assistant\"]) % 2 == 0:\n return f\"Yes, I do think that: {message}\"\n else:\n return \"I don't think so\"\n\ngr.ChatInterface(\n fn=alternatingly_agree, \n type=\"messages\"\n).launch()\n```\n\nWe'll look at more realistic examples of chat functions in our next Guide, which shows [examples of using `gr.ChatInterface` with popular LLMs](../guides/chatinterface-examples). 
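\n\nAs one more sketch before moving on, here is a (hypothetical) chat function that actually reads the openai-style `history` to echo a running transcript:\n\n```python\nimport gradio as gr\n\ndef transcript_bot(message, history):\n    # history is a list of {\"role\": ..., \"content\": ...} dictionaries\n    lines = [f\"{m['role']}: {m['content']}\" for m in history]\n    lines.append(f\"user: {message}\")\n    return \"Transcript so far:\\n\\n\" + \"\\n\\n\".join(lines)\n\ngr.ChatInterface(fn=transcript_bot, type=\"messages\").launch()\n```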
\n\n", "heading1": "Defining a chat function", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "In your chat function, you can use `yield` to generate a sequence of partial responses, each replacing the previous ones. This way, you'll end up with a streaming chatbot. It's that simple!\n\n```python\nimport time\nimport gradio as gr\n\ndef slow_echo(message, history):\n for i in range(len(message)):\n time.sleep(0.3)\n yield \"You typed: \" + message[: i+1]\n\ngr.ChatInterface(\n fn=slow_echo, \n type=\"messages\"\n).launch()\n```\n\nWhile the response is streaming, the \"Submit\" button turns into a \"Stop\" button that can be used to stop the generator function.\n\nTip: Even though you are yielding the latest message at each iteration, Gradio only sends the \"diff\" of each message from the server to the frontend, which reduces latency and data consumption over your network.\n\n", "heading1": "Streaming chatbots", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "If you're familiar with Gradio's `gr.Interface` class, the `gr.ChatInterface` includes many of the same arguments that you can use to customize the look and feel of your Chatbot. For example, you can:\n\n- add a title and description above your chatbot using `title` and `description` arguments.\n- add a theme or custom css using `theme` and `css` arguments respectively.\n- add `examples` and even enable `cache_examples`, which make your Chatbot easier for users to try it out.\n- customize the chatbot (e.g. to change the height or add a placeholder) or textbox (e.g. to add a max number of characters or add a placeholder).\n\n**Adding examples**\n\nYou can add preset examples to your `gr.ChatInterface` with the `examples` parameter, which takes a list of string examples. Any examples will appear as \"buttons\" within the Chatbot before any messages are sent. If you'd like to include images or other files as part of your examples, you can do so by using this dictionary format for each example instead of a string: `{\"text\": \"What's in this image?\", \"files\": [\"cheetah.jpg\"]}`. Each file will be a separate message that is added to your Chatbot history.\n\nYou can change the displayed text for each example by using the `example_labels` argument. You can add icons to each example as well using the `example_icons` argument. Both of these arguments take a list of strings, which should be the same length as the `examples` list.\n\nIf you'd like to cache the examples so that they are pre-computed and the results appear instantly, set `cache_examples=True`.\n\n**Customizing the chatbot or textbox component**\n\nIf you want to customize the `gr.Chatbot` or `gr.Textbox` that compose the `ChatInterface`, then you can pass in your own chatbot or textbox components. 
Here's an example of how to apply the parameters we've discussed in this section:\n\n```python\nimport gradio as gr\n\ndef yes_man(message, history):\n    if message.endswith(\"?\"):\n        return \"Yes\"\n    else:\n        return \"Ask me anything", "heading1": "Customizing the Chat UI", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": " parameters we've discussed in this section:\n\n```python\nimport gradio as gr\n\ndef yes_man(message, history):\n    if message.endswith(\"?\"):\n        return \"Yes\"\n    else:\n        return \"Ask me anything!\"\n\ngr.ChatInterface(\n    yes_man,\n    type=\"messages\",\n    chatbot=gr.Chatbot(height=300),\n    textbox=gr.Textbox(placeholder=\"Ask me a yes or no question\", container=False, scale=7),\n    title=\"Yes Man\",\n    description=\"Ask Yes Man any question\",\n    theme=\"ocean\",\n    examples=[\"Hello\", \"Am I cool?\", \"Are tomatoes vegetables?\"],\n    cache_examples=True,\n).launch()\n```\n\nHere's another example that adds a \"placeholder\" for your chat interface, which appears before the user has started chatting. The `placeholder` argument of `gr.Chatbot` accepts Markdown or HTML:\n\n```python\ngr.ChatInterface(\n    yes_man,\n    type=\"messages\",\n    chatbot=gr.Chatbot(placeholder=\"Your Personal Yes-Man
Ask Me Anything\"),\n...\n```\n\nThe placeholder appears vertically and horizontally centered in the chatbot.\n\n", "heading1": "Customizing the Chat UI", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "You may want to add multimodal capabilities to your chat interface. For example, you may want users to be able to upload images or files to your chatbot and ask questions about them. You can make your chatbot \"multimodal\" by passing in a single parameter (`multimodal=True`) to the `gr.ChatInterface` class.\n\nWhen `multimodal=True`, the signature of your chat function changes slightly: the first parameter of your function (what we referred to as `message` above) should accept a dictionary consisting of the submitted text and uploaded files that looks like this: \n\n```py\n{\n \"text\": \"user input\", \n \"files\": [\n \"updated_file_1_path.ext\",\n \"updated_file_2_path.ext\", \n ...\n ]\n}\n```\n\nThis second parameter of your chat function, `history`, will be in the same openai-style dictionary format as before. However, if the history contains uploaded files, the `content` key for a file will be not a string, but rather a single-element tuple consisting of the filepath. Each file will be a separate message in the history. So after uploading two files and asking a question, your history might look like this:\n\n```python\n[\n {\"role\": \"user\", \"content\": (\"cat1.png\")},\n {\"role\": \"user\", \"content\": (\"cat2.png\")},\n {\"role\": \"user\", \"content\": \"What's the difference between these two images?\"},\n]\n```\n\nThe return type of your chat function does *not change* when setting `multimodal=True` (i.e. in the simplest case, you should still return a string value). We discuss more complex cases, e.g. returning files [below](returning-complex-responses).\n\nIf you are customizing a multimodal chat interface, you should pass in an instance of `gr.MultimodalTextbox` to the `textbox` parameter. You can customize the `MultimodalTextbox` further by passing in the `sources` parameter, which is a list of sources to enable. Here's an example that illustrates how to set up and customize and multimodal chat interface:\n \n\n```python\nimport gradio as gr\n\ndef count_images(message, hi", "heading1": "Multimodal Chat Interface", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "eter, which is a list of sources to enable. 
Here's an example that illustrates how to set up and customize a multimodal chat interface:\n\n```python\nimport gradio as gr\n\ndef count_images(message, history):\n    num_images = len(message[\"files\"])\n    total_images = 0\n    for message in history:\n        if isinstance(message[\"content\"], tuple):\n            total_images += 1\n    return f\"You just uploaded {num_images} images, total uploaded: {total_images+num_images}\"\n\ndemo = gr.ChatInterface(\n    fn=count_images,\n    type=\"messages\",\n    examples=[\n        {\"text\": \"No files\", \"files\": []}\n    ],\n    multimodal=True,\n    textbox=gr.MultimodalTextbox(file_count=\"multiple\", file_types=[\"image\"], sources=[\"upload\", \"microphone\"])\n)\n\ndemo.launch()\n```\n\n", "heading1": "Multimodal Chat Interface", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "You may want to add additional inputs to your chat function and expose them to your users through the chat UI. For example, you could add a textbox for a system prompt, or a slider that sets the number of tokens in the chatbot's response. The `gr.ChatInterface` class supports an `additional_inputs` parameter which can be used to add additional input components.\n\nThe `additional_inputs` parameter accepts a component or a list of components. You can pass the component instances directly, or use their string shortcuts (e.g. `\"textbox\"` instead of `gr.Textbox()`). If you pass in component instances, and they have _not_ already been rendered, then the components will appear underneath the chatbot within a `gr.Accordion()`. \n\nHere's a complete example:\n\n$code_chatinterface_system_prompt\n\nIf the components you pass into the `additional_inputs` have already been rendered in a parent `gr.Blocks()`, then they will _not_ be re-rendered in the accordion. This provides flexibility in deciding where to lay out the input components. In the example below, we position the `gr.Textbox()` on top of the Chatbot UI, while keeping the slider underneath.\n\n```python\nimport gradio as gr\nimport time\n\ndef echo(message, history, system_prompt, tokens):\n    response = f\"System prompt: {system_prompt}\\n Message: {message}.\"\n    for i in range(min(len(response), int(tokens))):\n        time.sleep(0.05)\n        yield response[: i+1]\n\nwith gr.Blocks() as demo:\n    system_prompt = gr.Textbox(\"You are helpful AI.\", label=\"System Prompt\")\n    slider = gr.Slider(10, 100, render=False)\n\n    gr.ChatInterface(\n        echo, additional_inputs=[system_prompt, slider], type=\"messages\"\n    )\n\ndemo.launch()\n```\n\n**Examples with additional inputs**\n\nYou can also add example values for your additional inputs. Pass in a list of lists to the `examples` parameter, where each inner list represents one sample, and each inner list should be `1 + len(additional_inputs)` long. The first element in the inner list should ", "heading1": "Additional Inputs", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "n a list of lists to the `examples` parameter, where each inner list represents one sample, and each inner list should be `1 + len(additional_inputs)` long. The first element in the inner list should be the example value for the chat message, and each subsequent element should be an example value for one of the additional inputs, in order. 
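For instance, building on the `echo` example above (a sketch; the example values are purely illustrative):\n\n```python\ngr.ChatInterface(\n    echo,\n    type=\"messages\",\n    additional_inputs=[system_prompt, slider],\n    examples=[[\"Hello!\", \"You are helpful AI.\", 50], [\"Tell me a joke.\", \"You are a pirate.\", 20]],\n)\n```\n\n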
When additional inputs are provided, examples are rendered in a table underneath the chat interface.\n\nIf you need to create something even more custom, then it's best to construct the chatbot UI using the low-level `gr.Blocks()` API. We have [a dedicated guide for that here](/guides/creating-a-custom-chatbot-with-blocks).\n\n", "heading1": "Additional Inputs", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "In the same way that you can accept additional inputs into your chat function, you can also return additional outputs. Simply pass in a list of components to the `additional_outputs` parameter in `gr.ChatInterface` and return additional values for each component from your chat function. Here's an example that extracts code and outputs it into a separate `gr.Code` component:\n\n$code_chatinterface_artifacts\n\n**Note:** unlike the case of additional inputs, the components passed in `additional_outputs` must be already defined in your `gr.Blocks` context -- they are not rendered automatically. If you need to render them after your `gr.ChatInterface`, you can set `render=False` when they are first defined and then `.render()` them in the appropriate section of your `gr.Blocks()` as we do in the example above.\n\n", "heading1": "Additional Outputs", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "We mentioned earlier that in the simplest case, your chat function should return a `str` response, which will be rendered as Markdown in the chatbot. However, you can also return more complex responses as we discuss below:\n\n\n**Returning files or Gradio components**\n\nCurrently, the following Gradio components can be displayed inside the chat interface:\n* `gr.Image`\n* `gr.Plot`\n* `gr.Audio`\n* `gr.HTML`\n* `gr.Video`\n* `gr.Gallery`\n* `gr.File`\n\nSimply return one of these components from your function to use it with `gr.ChatInterface`. Here's an example that returns an audio file:\n\n```py\nimport gradio as gr\n\ndef music(message, history):\n    if message.strip():\n        return gr.Audio(\"https://github.com/gradio-app/gradio/raw/main/test/test_files/audio_sample.wav\")\n    else:\n        return \"Please provide the name of an artist\"\n\ngr.ChatInterface(\n    music,\n    type=\"messages\",\n    textbox=gr.Textbox(placeholder=\"Which artist's music do you want to listen to?\", scale=7),\n).launch()\n```\n\nSimilarly, you could return image files with `gr.Image`, video files with `gr.Video`, or arbitrary files with the `gr.File` component.\n\n**Returning Multiple Messages**\n\nYou can return multiple assistant messages from your chat function simply by returning a `list` of messages, each of which is a valid chat type. This lets you, for example, send a message along with files, as in the following example:\n\n$code_chatinterface_echo_multimodal\n\n\n**Displaying intermediate thoughts or tool usage**\n\nThe `gr.ChatInterface` class supports displaying intermediate thoughts or tool usage directly in the chatbot.\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/nested-thought.png)\n\n To do this, you will need to return a `gr.ChatMessage` object from your chat function. 
Here is the schema of the `gr.ChatMessage` data class as well as two internal typed dictionaries:\n \n ```py\n@dataclass\nclass ChatMessage:\n content: str | Component\n metadata: MetadataDict = ", "heading1": "Returning Complex Responses", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "ion. Here is the schema of the `gr.ChatMessage` data class as well as two internal typed dictionaries:\n \n ```py\n@dataclass\nclass ChatMessage:\n content: str | Component\n metadata: MetadataDict = None\n options: list[OptionDict] = None\n\nclass MetadataDict(TypedDict):\n title: NotRequired[str]\n id: NotRequired[int | str]\n parent_id: NotRequired[int | str]\n log: NotRequired[str]\n duration: NotRequired[float]\n status: NotRequired[Literal[\"pending\", \"done\"]]\n\nclass OptionDict(TypedDict):\n label: NotRequired[str]\n value: str\n ```\n \nAs you can see, the `gr.ChatMessage` dataclass is similar to the openai-style message format, e.g. it has a \"content\" key that refers to the chat message content. But it also includes a \"metadata\" key whose value is a dictionary. If this dictionary includes a \"title\" key, the resulting message is displayed as an intermediate thought with the title being displayed on top of the thought. Here's an example showing the usage:\n\n$code_chatinterface_thoughts\n\nYou can even show nested thoughts, which is useful for agent demos in which one tool may call other tools. To display nested thoughts, include \"id\" and \"parent_id\" keys in the \"metadata\" dictionary. Read our [dedicated guide on displaying intermediate thoughts and tool usage](/guides/agents-and-tool-usage) for more realistic examples.\n\n**Providing preset responses**\n\nWhen returning an assistant message, you may want to provide preset options that a user can choose in response. To do this, again, you will again return a `gr.ChatMessage` instance from your chat function. This time, make sure to set the `options` key specifying the preset responses.\n\nAs shown in the schema for `gr.ChatMessage` above, the value corresponding to the `options` key should be a list of dictionaries, each with a `value` (a string that is the value that should be sent to the chat function when this response is clicked) and an optional `label` (if provided, is the text displayed as the preset r", "heading1": "Returning Complex Responses", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "ies, each with a `value` (a string that is the value that should be sent to the chat function when this response is clicked) and an optional `label` (if provided, is the text displayed as the preset response instead of the `value`). \n\nThis example illustrates how to use preset responses:\n\n$code_chatinterface_options\n\n", "heading1": "Returning Complex Responses", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "You may wish to modify the value of the chatbot with your own events, other than those prebuilt in the `gr.ChatInterface`. For example, you could create a dropdown that prefills the chat history with certain conversations or add a separate button to clear the conversation history. The `gr.ChatInterface` supports these events, but you need to use the `gr.ChatInterface.chatbot_value` as the input or output component in such events. 
{"text": "You may wish to modify the value of the chatbot with your own events, other than those prebuilt in the `gr.ChatInterface`. For example, you could create a dropdown that prefills the chat history with certain conversations or add a separate button to clear the conversation history. The `gr.ChatInterface` supports these events, but you need to use the `gr.ChatInterface.chatbot_value` as the input or output component in such events. In this example, we use a `gr.Radio` component to prefill the chatbot with certain conversations:\n\n$code_chatinterface_prefill\n\n", "heading1": "Modifying the Chatbot Value Directly", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "Once you've built your Gradio chat interface and are hosting it on [Hugging Face Spaces](https://hf.space) or somewhere else, then you can query it with a simple API at the `/chat` endpoint. The endpoint just expects the user's message and will return the response, internally keeping track of the message history.\n\n![](https://github.com/gradio-app/gradio/assets/1778297/7b10d6db-6476-4e2e-bebd-ecda802c3b8f)\n\nTo use the endpoint, you should use either the [Gradio Python Client](/guides/getting-started-with-the-python-client) or the [Gradio JS client](/guides/getting-started-with-the-js-client). Or, you can deploy your Chat Interface to other platforms, such as a:\n\n* Discord bot [[tutorial]](../guides/creating-a-discord-bot-from-a-gradio-app)\n* Slack bot [[tutorial]](../guides/creating-a-slack-bot-from-a-gradio-app)\n* Website widget [[tutorial]](../guides/creating-a-website-widget-from-a-gradio-chatbot)\n\n", "heading1": "Using Your Chatbot via API", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "You can enable persistent chat history for your ChatInterface, allowing users to maintain multiple conversations and easily switch between them. When enabled, conversations are stored locally and privately in the user's browser using local storage. So if you deploy a ChatInterface e.g. on [Hugging Face Spaces](https://hf.space), each user will have their own separate chat history that won't interfere with other users' conversations. This means multiple users can interact with the same ChatInterface simultaneously while maintaining their own private conversation histories.\n\nTo enable this feature, simply set `gr.ChatInterface(save_history=True)` (as shown in the example in the next section). Users will then see their previous conversations in a side panel and can continue any previous chat or start a new one.\n\n", "heading1": "Chat History", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "To gather feedback on your chat model, set `gr.ChatInterface(flagging_mode=\"manual\")` and users will be able to thumbs-up or thumbs-down assistant responses. Each flagged response, along with the entire chat history, will get saved in a CSV file in the app working directory (this can be configured via the `flagging_dir` parameter). \n\nYou can also change the feedback options via the `flagging_options` parameter. The default options are \"Like\" and \"Dislike\", which appear as the thumbs-up and thumbs-down icons. Any other options appear under a dedicated flag icon. This example shows a ChatInterface that has both chat history (mentioned in the previous section) and user feedback enabled:\n\n$code_chatinterface_streaming_echo\n\nNote that in this example, we set several flagging options: \"Like\", \"Spam\", \"Inappropriate\", \"Other\". Because the case-sensitive string \"Like\" is one of the flagging options, the user will see a thumbs-up icon next to each assistant message. The three other flagging options will appear in a dropdown under the flag icon.\n\n", "heading1": "Collecting User Feedback", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, 
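Putting the last two sections together, here is a minimal sketch (with a hypothetical echo function standing in for a real model) that enables both persistent history and manual feedback:

```py
import gradio as gr

def echo(message, history):
    return message

gr.ChatInterface(
    echo,
    type="messages",
    save_history=True,       # per-user history, stored in browser local storage
    flagging_mode="manual",  # lets users flag assistant responses
    flagging_options=["Like", "Spam", "Inappropriate", "Other"],
).launch()
```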
{"text": "Now that you've learned about the `gr.ChatInterface` class and how it can be used to create chatbot UIs quickly, we recommend reading one of the following:\n\n* [Our next Guide](../guides/chatinterface-examples) shows examples of how to use `gr.ChatInterface` with popular LLM libraries.\n* If you'd like to build very custom chat applications from scratch, you can build them using the low-level Blocks API, as [discussed in this Guide](../guides/creating-a-custom-chatbot-with-blocks).\n* Once you've deployed your Gradio Chat Interface, it's easy to use in other applications because of the built-in API. Here's a tutorial on [how to deploy a Gradio chat interface as a Discord bot](../guides/creating-a-discord-bot-from-a-gradio-app).\n\n\n", "heading1": "What's Next?", "source_page_url": "https://gradio.app/guides/creating-a-chatbot-fast", "source_page_title": "Chatbots - Creating A Chatbot Fast Guide"}, {"text": "Each message in Gradio's chatbot is a dataclass of type `ChatMessage` (this assumes that the chatbot's `type=\"messages\"`, which is strongly recommended). The schema of `ChatMessage` is as follows:\n\n```py\n@dataclass\nclass ChatMessage:\n    content: str | Component\n    role: Literal[\"user\", \"assistant\"]\n    metadata: MetadataDict = None\n    options: list[OptionDict] = None\n\nclass MetadataDict(TypedDict):\n    title: NotRequired[str]\n    id: NotRequired[int | str]\n    parent_id: NotRequired[int | str]\n    log: NotRequired[str]\n    duration: NotRequired[float]\n    status: NotRequired[Literal[\"pending\", \"done\"]]\n\nclass OptionDict(TypedDict):\n    label: NotRequired[str]\n    value: str\n```\n\nFor our purposes, the most important key is the `metadata` key, which accepts a dictionary. If this dictionary includes a `title` for the message, it will be displayed in a collapsible accordion representing a thought. It's that simple! Take a look at this example:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    chatbot = gr.Chatbot(\n        type=\"messages\",\n        value=[\n            gr.ChatMessage(\n                role=\"user\",\n                content=\"What is the weather in San Francisco?\"\n            ),\n            gr.ChatMessage(\n                role=\"assistant\",\n                content=\"I need to use the weather API tool?\",\n                metadata={\"title\": \"🧠 Thinking\"}\n            ),\n        ]\n    )\n\ndemo.launch()\n```\n\nIn addition to `title`, the dictionary provided to `metadata` can take several optional keys:\n\n* `log`: an optional string value to be displayed in a subdued font next to the thought title.\n* `duration`: an optional numeric value representing the duration of the thought/tool usage, in seconds. Displayed in a subdued font inside parentheses next to the thought title.\n* `status`: if set to `\"pending\"`, a spinner appears next to the thought title and the accordion is initialized open. If `status` is `\"done\"`, the thought accordion is initialized closed. If `status` is not provided, the thought accordion is initialized open and no spinner is displayed.\n* `id` and `parent_id`: if these are provided, they can be used to nest thoughts inside other thoughts.\n\nBelow, we show several complete examples of using `gr.Chatbot` and `gr.ChatInterface` to display tool use or thinking UIs.\n\n", "heading1": "The `ChatMessage` dataclass", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, 
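Before those, here is a minimal, self-contained sketch (hypothetical titles and content) of how `id` and `parent_id` nest one thought inside another:

```python
import gradio as gr
from gradio import ChatMessage

with gr.Blocks() as demo:
    chatbot = gr.Chatbot(
        type="messages",
        value=[
            ChatMessage(
                role="assistant",
                content="Planning the search...",
                metadata={"title": "🧠 Planner", "id": 1, "status": "done"},
            ),
            ChatMessage(
                role="assistant",
                content="Calling the web-search tool.",
                # parent_id points at the planner thought, nesting this one inside it
                metadata={"title": "🔍 Search tool", "id": 2, "parent_id": 1, "duration": 2.3},
            ),
        ],
    )

demo.launch()
```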
{"text": "A real example using transformers.agents\n\nWe'll create a Gradio application for a simple agent that has access to a text-to-image tool.\n\nTip: Make sure you read the [smolagents documentation](https://huggingface.co/docs/smolagents/index) first\n\nWe'll start by importing the necessary classes from transformers and gradio.\n\n```python\nimport gradio as gr\nfrom gradio import ChatMessage\nfrom dataclasses import asdict  # used below to convert ChatMessage dataclasses to dicts\nfrom transformers import Tool, ReactCodeAgent  # type: ignore\nfrom transformers.agents import stream_to_gradio, HfApiEngine  # type: ignore\n\n# Import tool from Hub\nimage_generation_tool = Tool.from_space(\n    space_id=\"black-forest-labs/FLUX.1-schnell\",\n    name=\"image_generator\",\n    description=\"Generates an image following your prompt. Returns a PIL Image.\",\n    api_name=\"/infer\",\n)\n\nllm_engine = HfApiEngine(\"Qwen/Qwen2.5-Coder-32B-Instruct\")\n# Initialize the agent with both tools and engine\nagent = ReactCodeAgent(tools=[image_generation_tool], llm_engine=llm_engine)\n```\n\nThen we'll build the UI:\n\n```python\ndef interact_with_agent(prompt, history):\n    messages = []\n    yield messages\n    for msg in stream_to_gradio(agent, prompt):\n        messages.append(asdict(msg))\n        yield messages\n    yield messages\n\n\ndemo = gr.ChatInterface(\n    interact_with_agent,\n    chatbot=gr.Chatbot(\n        label=\"Agent\",\n        type=\"messages\",\n        avatar_images=(\n            None,\n            \"https://em-content.zobj.net/source/twitter/53/robot-face_1f916.png\",\n        ),\n    ),\n    examples=[\n        [\"Generate an image of an astronaut riding an alligator\"],\n        [\"I am writing a children's book for my daughter. Can you help me with some illustrations?\"],\n    ],\n    type=\"messages\",\n)\n```\n\nYou can see the full demo code [here](https://huggingface.co/spaces/gradio/agent_chatbot/blob/main/app.py).\n\n", "heading1": "Building with Agents", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, {"text": "![transformers_agent_code](https://github.com/freddyaboulton/freddyboulton/assets/41651716/c8d21336-e0e6-4878-88ea-e6fcfef3552d)\n\n\nA real example using langchain agents\n\nWe'll create a UI for a langchain agent that has access to a search engine.\n\nWe'll begin with imports and setting up the langchain agent. 
Note that you'll need an .env file with the following environment variables set:\n\n```\nSERPAPI_API_KEY=\nHF_TOKEN=\nOPENAI_API_KEY=\n```\n\n```python\nfrom langchain import hub\nfrom langchain.agents import AgentExecutor, create_openai_tools_agent, load_tools\nfrom langchain_openai import ChatOpenAI\nfrom gradio import ChatMessage\nimport gradio as gr\n\nfrom dotenv import load_dotenv\n\nload_dotenv()\n\nmodel = ChatOpenAI(temperature=0, streaming=True)\n\ntools = load_tools([\"serpapi\"])\n\n# Get the prompt to use - you can modify this!\nprompt = hub.pull(\"hwchase17/openai-tools-agent\")\nagent = create_openai_tools_agent(\n    model.with_config({\"tags\": [\"agent_llm\"]}), tools, prompt\n)\nagent_executor = AgentExecutor(agent=agent, tools=tools).with_config(\n    {\"run_name\": \"Agent\"}\n)\n```\n\nThen we'll create the Gradio UI:\n\n```python\nasync def interact_with_langchain_agent(prompt, messages):\n    messages.append(ChatMessage(role=\"user\", content=prompt))\n    yield messages\n    async for chunk in agent_executor.astream(\n        {\"input\": prompt}\n    ):\n        if \"steps\" in chunk:\n            for step in chunk[\"steps\"]:\n                messages.append(ChatMessage(role=\"assistant\", content=step.action.log,\n                                            metadata={\"title\": f\"🛠️ Used tool {step.action.tool}\"}))\n                yield messages\n        if \"output\" in chunk:\n            messages.append(ChatMessage(role=\"assistant\", content=chunk[\"output\"]))\n            yield messages\n\n\nwith gr.Blocks() as demo:\n    gr.Markdown(\"Chat with a LangChain Agent 🦜⛓️ and see its thoughts 💭\")\n    chatbot = gr.Chatbot(\n        type=\"messages\",\n        label=\"Agent\",\n        avatar_images=(\n            None,\n            \"https://em-content.zobj.net/source/twitter/141/parrot_1f99c.png\",\n        ),\n    )\n    input = gr.Textbox(lines=1, label=\"Chat Message\")\n    input.submit(interact_with_langchain_agent, [input, chatbot], [chatbot])\n\ndemo.launch()\n```\n\n![langchain_agent_code](https://github.com/freddyaboulton/freddyboulton/assets/41651716/762283e5-3937-47e5-89e0-79657279ea67)\n\nThat's it! See our finished langchain demo [here](https://huggingface.co/spaces/gradio/langchain-agent).\n\n\n", "heading1": "Building with Agents", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, {"text": "The Gradio Chatbot can natively display intermediate thoughts of a _thinking_ LLM. This makes it perfect for creating UIs that show how an AI model \"thinks\" while generating responses. This guide will show you how to build a chatbot that displays Gemini AI's thought process in real-time.\n\n\nA real example using Gemini 2.0 Flash Thinking API\n\nLet's create a complete chatbot that shows its thoughts and responses in real-time. We'll use Google's Gemini API for accessing the Gemini 2.0 Flash Thinking LLM and Gradio for the UI.\n\nWe'll begin with imports and setting up the gemini client. 
Note that you'll need to [acquire a Google Gemini API key](https://aistudio.google.com/apikey) first:\n\n```python\nimport gradio as gr\nfrom gradio import ChatMessage\nfrom typing import Iterator\nimport google.generativeai as genai\n\ngenai.configure(api_key=\"your-gemini-api-key\")\nmodel = genai.GenerativeModel(\"gemini-2.0-flash-thinking-exp-1219\")\n```\n\nFirst, let's set up our streaming function that handles the model's output:\n\n```python\ndef stream_gemini_response(user_message: str, messages: list) -> Iterator[list]:\n    \"\"\"\n    Streams both thoughts and responses from the Gemini model.\n    \"\"\"\n    # Initialize response from Gemini\n    response = model.generate_content(user_message, stream=True)\n\n    # Initialize buffers\n    thought_buffer = \"\"\n    response_buffer = \"\"\n    thinking_complete = False\n\n    # Add initial thinking message\n    messages.append(\n        ChatMessage(\n            role=\"assistant\",\n            content=\"\",\n            metadata={\"title\": \"⏳Thinking: *The thoughts produced by the Gemini2.0 Flash model are experimental\"}\n        )\n    )\n\n    for chunk in response:\n        parts = chunk.candidates[0].content.parts\n        current_chunk = parts[0].text\n\n        ", "heading1": "Building with Visibly Thinking LLMs", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, {"text": "if len(parts) == 2 and not thinking_complete:\n            # Complete thought and start response\n            thought_buffer += current_chunk\n            messages[-1] = ChatMessage(\n                role=\"assistant\",\n                content=thought_buffer,\n                metadata={\"title\": \"⏳Thinking: *The thoughts produced by the Gemini2.0 Flash model are experimental\"}\n            )\n\n            # Add response message\n            messages.append(\n                ChatMessage(\n                    role=\"assistant\",\n                    content=parts[1].text\n                )\n            )\n            thinking_complete = True\n\n        elif thinking_complete:\n            # Continue streaming response\n            response_buffer += current_chunk\n            messages[-1] = ChatMessage(\n                role=\"assistant\",\n                content=response_buffer\n            )\n\n        else:\n            # Continue streaming thoughts\n            thought_buffer += current_chunk\n            messages[-1] = ChatMessage(\n                role=\"assistant\",\n                content=thought_buffer,\n                metadata={\"title\": \"⏳Thinking: *The thoughts produced by the Gemini2.0 Flash model are experimental\"}\n            )\n\n        yield messages\n```\n\nThen, let's create the Gradio interface:\n\n```python\nwith gr.Blocks() as demo:\n    gr.Markdown(\"Chat with Gemini 2.0 Flash and See its Thoughts 💭\")\n\n    chatbot = gr.Chatbot(\n        type=\"messages\",\n        label=\"Gemini2.0 'Thinking' Chatbot\",\n        render_markdown=True,\n    )\n\n    input_box = gr.Textbox(\n        lines=1,\n        label=\"Chat Message\",\n        placeholder=\"Type your message here and press Enter...\"\n    )\n\n    # Set up event handlers\n    msg_store = gr.State(\"\")  # Store for preserving user message\n\n    input_box.submit(\n        lambda msg: (msg, msg, \"\"),  # Store message and clear input\n        inputs=[input_box],\n        outputs=[msg_store, input_box, input_box],\n        queue=False\n    ).then(\n        user_message,  # Add user message to chat (defined in the full demo)\n        inputs=[msg_store, chatbot],\n        
outputs=[input_box, chatbot],\n        queue=False\n    ).then(\n        stream_gemini_response,  # Generate and stream response\n        inputs=[msg_store, chatbot],\n        outputs=chatbot\n    )\n\ndemo.launch()\n```\n\nThis creates a chatbot that:\n\n- Displays the model's thoughts in a collapsible section\n- Streams the thoughts and final response in real-time\n- Maintains a clean chat history\n\nThat's it! You now have a chatbot that not only responds to users but also shows its thinking process, creating a more transparent and engaging interaction. See our finished Gemini 2.0 Flash Thinking demo [here](https://huggingface.co/spaces/ysharma/Gemini2-Flash-Thinking).\n\n\nBuilding with Citations\n\nThe Gradio Chatbot can display citations from LLM responses, making it perfect for creating UIs that show source documentation and references. This guide will show you how to build a chatbot that displays Claude's citations in real-time.\n\nA real example using Anthropic's Citations API\n\nLet's create a complete chatbot that shows both responses and their supporting citations. We'll use Anthropic's Claude API with citations enabled and Gradio for the UI.\n\nWe'll begin with imports and setting up the Anthropic client. Note that you'll need an `ANTHROPIC_API_KEY` environment variable set:\n\n```python\nimport gradio as gr\nimport anthropic\nimport base64\nfrom typing import List, Dict, Any\n\nclient = anthropic.Anthropic()\n```\n\nFirst, let's set up our message formatting functions that handle document preparation:\n\n", "heading1": "Building with Visibly Thinking LLMs", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, {"text": "```python\ndef encode_pdf_to_base64(file_obj) -> str:\n    \"\"\"Convert uploaded PDF file to base64 string.\"\"\"\n    if file_obj is None:\n        return None\n    with open(file_obj.name, 'rb') as f:\n        return base64.b64encode(f.read()).decode('utf-8')\n\ndef format_message_history(\n    history: list,\n    enable_citations: bool,\n    doc_type: str,\n    text_input: str,\n    pdf_file: str\n) -> List[Dict]:\n    \"\"\"Convert Gradio chat history to Anthropic message format.\"\"\"\n    formatted_messages = []\n\n    # Add previous messages\n    for msg in history[:-1]:\n        if msg[\"role\"] == \"user\":\n            formatted_messages.append({\"role\": \"user\", \"content\": msg[\"content\"]})\n\n    # Prepare the latest message with document\n    latest_message = {\"role\": \"user\", \"content\": []}\n\n    if enable_citations:\n        if doc_type == \"plain_text\":\n            latest_message[\"content\"].append({\n                \"type\": \"document\",\n                \"source\": {\n                    \"type\": \"text\",\n                    \"media_type\": \"text/plain\",\n                    \"data\": text_input.strip()\n                },\n                \"title\": \"Text Document\",\n                \"citations\": {\"enabled\": True}\n            })\n        elif doc_type == \"pdf\" and pdf_file:\n            pdf_data = encode_pdf_to_base64(pdf_file)\n            if pdf_data:\n                latest_message[\"content\"].append({\n                    \"type\": \"document\",\n                    \"source\": {\n                        \"type\": \"base64\",\n                        \"media_type\": \"application/pdf\",\n                        \"data\": pdf_data\n                    },\n                    \"title\": pdf_file.name,\n                    \"citations\": {\"enabled\": True}\n                })\n\n    # Add the user's question\n    latest_message[\"content\"].append({\"type\": \"text\", \"text\": history[-1][\"content\"]})\n\n    formatted_messages.append(latest_message)\n    return formatted_messages\n```\n\n", "heading1": "Building with Visibly 
Thinking LLMs", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, {"text": " the user's question\n latest_message[\"content\"].append({\"type\": \"text\", \"text\": history[-1][\"content\"]})\n \n formatted_messages.append(latest_message)\n return formatted_messages\n```\n\nThen, let's create our bot response handler that processes citations:\n\n```python\ndef bot_response(\n history: list,\n enable_citations: bool,\n doc_type: str,\n text_input: str,\n pdf_file: str\n) -> List[Dict[str, Any]]:\n try:\n messages = format_message_history(history, enable_citations, doc_type, text_input, pdf_file)\n response = client.messages.create(model=\"claude-3-5-sonnet-20241022\", max_tokens=1024, messages=messages)\n \n Initialize main response and citations\n main_response = \"\"\n citations = []\n \n Process each content block\n for block in response.content:\n if block.type == \"text\":\n main_response += block.text\n if enable_citations and hasattr(block, 'citations') and block.citations:\n for citation in block.citations:\n if citation.cited_text not in citations:\n citations.append(citation.cited_text)\n \n Add main response\n history.append({\"role\": \"assistant\", \"content\": main_response})\n \n Add citations in a collapsible section\n if enable_citations and citations:\n history.append({\n \"role\": \"assistant\",\n \"content\": \"\\n\".join([f\"\u2022 {cite}\" for cite in citations]),\n \"metadata\": {\"title\": \"\ud83d\udcda Citations\"}\n })\n \n return history\n \n except Exception as e:\n history.append({\n \"role\": \"assistant\",\n \"content\": \"I apologize, but I encountered an error while processing your request.\"\n })\n return history\n```\n\nFinally, let's create the Gradio interface:\n\n```python\nwith gr.Blocks() as demo:\n gr.Markdown(\"Chat with Citations\"", "heading1": "Building with Visibly Thinking LLMs", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, {"text": "an error while processing your request.\"\n })\n return history\n```\n\nFinally, let's create the Gradio interface:\n\n```python\nwith gr.Blocks() as demo:\n gr.Markdown(\"Chat with Citations\")\n \n with gr.Row(scale=1):\n with gr.Column(scale=4):\n chatbot = gr.Chatbot(type=\"messages\", bubble_full_width=False, show_label=False, scale=1)\n msg = gr.Textbox(placeholder=\"Enter your message here...\", show_label=False, container=False)\n \n with gr.Column(scale=1):\n enable_citations = gr.Checkbox(label=\"Enable Citations\", value=True, info=\"Toggle citation functionality\" )\n doc_type_radio = gr.Radio( choices=[\"plain_text\", \"pdf\"], value=\"plain_text\", label=\"Document Type\", info=\"Choose the type of document to use\")\n text_input = gr.Textbox(label=\"Document Content\", lines=10, info=\"Enter the text you want to reference\")\n pdf_input = gr.File(label=\"Upload PDF\", file_types=[\".pdf\"], file_count=\"single\", visible=False)\n \n Handle message submission\n msg.submit(\n user_message,\n [msg, chatbot, enable_citations, doc_type_radio, text_input, pdf_input],\n [msg, chatbot]\n ).then(\n bot_response,\n [chatbot, enable_citations, doc_type_radio, text_input, pdf_input],\n chatbot\n )\n\ndemo.launch()\n```\n\nThis creates a chatbot that:\n- Supports both plain text and PDF documents for Claude to cite from \n- Displays Citations in collapsible sections using our `metadata` feature\n- Shows source quotes directly from the given 
documents\n\nThe citations feature works particularly well with the Gradio Chatbot's `metadata` support, allowing us to create collapsible sections that keep the chat interface clean while still providing easy access to source documentation.\n\n", "heading1": "Building with Visibly Thinking LLMs", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, {"text": "That's it! You now have a chatbot that not only responds to users but also shows its sources, creating a more transparent and trustworthy interaction. See our finished Citations demo [here](https://huggingface.co/spaces/ysharma/anthropic-citations-with-gradio-metadata-key).\n\n", "heading1": "Building with Visibly Thinking LLMs", "source_page_url": "https://gradio.app/guides/agents-and-tool-usage", "source_page_title": "Chatbots - Agents And Tool Usage Guide"}, {"text": "Gradio-Lite\n\nGradio-Lite is the serverless version of Gradio, allowing you to build serverless web UI applications by embedding Python code within HTML. For a detailed introduction to Gradio-Lite itself, please read [this Guide](./gradio-lite).\n\nTransformers.js and Transformers.js.py\n\nTransformers.js is the JavaScript version of the Transformers library that allows you to run machine learning models entirely in the browser.\nSince Transformers.js is a JavaScript library, it cannot be directly used from the Python code of Gradio-Lite applications. To address this, we use a wrapper library called [Transformers.js.py](https://github.com/whitphx/transformers.js.py).\nThe name Transformers.js.py may sound unusual, but it represents the necessary technology stack for using Transformers.js from Python code within a browser environment. The regular Transformers library is not compatible with browser environments.\n\n", "heading1": "Libraries Used", "source_page_url": "https://gradio.app/guides/gradio-lite-and-transformers-js", "source_page_title": "Gradio Clients And Lite - Gradio Lite And Transformers Js Guide"}, {"text": "Here's an example of how to use Gradio-Lite and Transformers.js together.\nPlease create an HTML file and paste the following code:\n\n```html\n<html>\n\t<head>\n\t\t<script type=\"module\" crossorigin src=\"https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.js\"></script>\n\t\t<link rel=\"stylesheet\" href=\"https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.css\" />\n\t</head>\n\t<body>\n\t\t<gradio-lite>\nimport gradio as gr\nfrom transformers_js_py import pipeline\n\npipe = await pipeline('sentiment-analysis')\n\ndemo = gr.Interface.from_pipeline(pipe)\n\ndemo.launch()\n\n\t\t\t<gradio-requirements>\ntransformers-js-py\n\t\t\t</gradio-requirements>\n\t\t</gradio-lite>\n\t</body>\n</html>\n```\n\nYou can open your HTML file in a browser to see the Gradio app running! (After the app has loaded, you could disconnect your Internet connection and the app will still work, since it's running entirely in your browser.)\n\nThe Python code inside the `<gradio-lite>` tag is the Gradio application code. For more details on this part, please refer to [this article](./gradio-lite).\nThe `<gradio-requirements>` tag is used to specify packages to be installed in addition to Gradio-Lite and its dependencies. 
In this case, we are using Transformers.js.py (`transformers-js-py`), so it is specified here.\n\nLet's break down the code:\n\n`pipe = await pipeline('sentiment-analysis')` creates a Transformers.js pipeline.\nIn this example, we create a sentiment analysis pipeline.\n", "heading1": "Sample Code", "source_page_url": "https://gradio.app/guides/gradio-lite-and-transformers-js", "source_page_title": "Gradio Clients And Lite - Gradio Lite And Transformers Js Guide"}, {"text": "For more information on the available pipeline types and usage, please refer to the [Transformers.js documentation](https://huggingface.co/docs/transformers.js/index).\n\n`demo = gr.Interface.from_pipeline(pipe)` creates a Gradio app instance. By passing the Transformers.js.py pipeline to `gr.Interface.from_pipeline()`, we can create an interface that utilizes that pipeline with predefined input and output components.\n\nFinally, `demo.launch()` launches the created app.\n\n", "heading1": "Sample Code", "source_page_url": "https://gradio.app/guides/gradio-lite-and-transformers-js", "source_page_title": "Gradio Clients And Lite - Gradio Lite And Transformers Js Guide"}, {"text": "You can modify the line `pipe = await pipeline('sentiment-analysis')` in the sample above to try different models or tasks.\n\nFor example, if you change it to `pipe = await pipeline('sentiment-analysis', 'Xenova/bert-base-multilingual-uncased-sentiment')`, you can test the same sentiment analysis task but with a different model. The second argument of the `pipeline` function specifies the model name.\nIf it's not specified like in the first example, the default model is used. For more details on these specs, refer to the [Transformers.js documentation](https://huggingface.co/docs/transformers.js/index).\n\nAs another example, changing it to `pipe = await pipeline('image-classification')` creates a pipeline for image classification instead of sentiment analysis.\nIn this case, the interface created with `demo = gr.Interface.from_pipeline(pipe)` will have a UI for uploading an image and displaying the classification result. The `gr.Interface.from_pipeline` function automatically creates an appropriate UI based on the type of pipeline.
\n\n**Note**: If you use an audio pipeline, such as `automatic-speech-recognition`, you will need to put `transformers-js-py[audio]` in your `<gradio-requirements>` as there are additional requirements needed to process audio files.\n\n", "heading1": "Customizing the Model or Pipeline", "source_page_url": "https://gradio.app/guides/gradio-lite-and-transformers-js", "source_page_title": "Gradio Clients And Lite - Gradio Lite And Transformers Js Guide"}, {"text": "Instead of using `gr.Interface.from_pipeline()`, you can define the user interface using Gradio's regular API.\nHere's an example where the Python code inside the `<gradio-lite>` tag has been modified from the previous sample:\n\n```html\n<html>\n\t<head>\n\t\t<script type=\"module\" crossorigin src=\"https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.js\"></script>\n\t\t<link rel=\"stylesheet\" href=\"https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.css\" />\n\t</head>\n\t<body>\n\t\t<gradio-lite>\nimport gradio as gr\nfrom transformers_js_py import pipeline\n\npipe = await pipeline('sentiment-analysis')\n\nasync def fn(text):\n\tresult = await pipe(text)\n\treturn result\n\ndemo = gr.Interface(\n\tfn=fn,\n\tinputs=gr.Textbox(),\n\toutputs=gr.JSON(),\n)\n\ndemo.launch()\n\n\t\t\t<gradio-requirements>\ntransformers-js-py\n\t\t\t</gradio-requirements>\n\t\t</gradio-lite>\n\t</body>\n</html>\n```\n\nIn this example, we modified the code to construct the Gradio user interface manually so that we could output the result as JSON.\n\n", "heading1": "Customizing the UI", "source_page_url": "https://gradio.app/guides/gradio-lite-and-transformers-js", "source_page_title": "Gradio Clients And Lite - Gradio Lite And Transformers Js Guide"}, {"text": "By combining Gradio-Lite and Transformers.js (and Transformers.js.py), you can create serverless machine learning applications that run entirely in the browser.\n\nGradio-Lite provides a convenient method to create an interface for a given Transformers.js pipeline, `gr.Interface.from_pipeline()`.\nThis method automatically constructs the interface based on the pipeline's task type.\n\nAlternatively, you can define the interface manually using Gradio's regular API, as shown in the second example.\n\nBy using these libraries, you can build and deploy machine learning applications without the need for server-side Python setup or external dependencies.\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/gradio-lite-and-transformers-js", "source_page_title": "Gradio Clients And Lite - Gradio Lite And Transformers Js Guide"}, {"text": "What are agents?\n\nA [LangChain agent](https://docs.langchain.com/docs/components/agents/agent) is a Large Language Model (LLM) that takes user input and reports an output based on using one of many tools at its disposal.\n\nWhat is Gradio?\n\n[Gradio](https://github.com/gradio-app/gradio) is the de facto standard framework for building Machine Learning Web Applications and sharing them with the world - all with just Python! 
\ud83d\udc0d\n\n", "heading1": "Some background", "source_page_url": "https://gradio.app/guides/gradio-and-llm-agents", "source_page_title": "Gradio Clients And Lite - Gradio And Llm Agents Guide"}, {"text": "To get started with `gradio_tools`, all you need to do is import and initialize your tools and pass them to the langchain agent!\n\nIn the following example, we import the `StableDiffusionPromptGeneratorTool` to create a good prompt for stable diffusion, the\n`StableDiffusionTool` to create an image with our improved prompt, the `ImageCaptioningTool` to caption the generated image, and\nthe `TextToVideoTool` to create a video from a prompt.\n\nWe then tell our agent to create an image of a dog riding a skateboard, but to please improve our prompt ahead of time. We also ask\nit to caption the generated image and create a video for it. The agent can decide which tool to use without us explicitly telling it.\n\n```python\nimport os\n\nif not os.getenv(\"OPENAI_API_KEY\"):\n raise ValueError(\"OPENAI_API_KEY must be set\")\n\nfrom langchain.agents import initialize_agent\nfrom langchain.llms import OpenAI\nfrom gradio_tools import (StableDiffusionTool, ImageCaptioningTool, StableDiffusionPromptGeneratorTool,\n TextToVideoTool)\n\nfrom langchain.memory import ConversationBufferMemory\n\nllm = OpenAI(temperature=0)\nmemory = ConversationBufferMemory(memory_key=\"chat_history\")\ntools = [StableDiffusionTool().langchain, ImageCaptioningTool().langchain,\n StableDiffusionPromptGeneratorTool().langchain, TextToVideoTool().langchain]\n\n\nagent = initialize_agent(tools, llm, memory=memory, agent=\"conversational-react-description\", verbose=True)\noutput = agent.run(input=(\"Please create a photo of a dog riding a skateboard \"\n \"but improve my prompt prior to using an image generator.\"\n \"Please caption the generated image and create a video for it using the improved prompt.\"))\n```\n\nYou'll note that we are using some pre-built tools that come with `gradio_tools`. Please see this [doc](https://github.com/freddyaboulton/gradio-toolsgradio-tools-gradio--llm-agents) for a complete list of the tools that come with `gradio_tools`.\nIf ", "heading1": "gradio_tools - An end-to-end example", "source_page_url": "https://gradio.app/guides/gradio-and-llm-agents", "source_page_title": "Gradio Clients And Lite - Gradio And Llm Agents Guide"}, {"text": "that come with `gradio_tools`. Please see this [doc](https://github.com/freddyaboulton/gradio-toolsgradio-tools-gradio--llm-agents) for a complete list of the tools that come with `gradio_tools`.\nIf you would like to use a tool that's not currently in `gradio_tools`, it is very easy to add your own. That's what the next section will cover.\n\n", "heading1": "gradio_tools - An end-to-end example", "source_page_url": "https://gradio.app/guides/gradio-and-llm-agents", "source_page_title": "Gradio Clients And Lite - Gradio And Llm Agents Guide"}, {"text": "The core abstraction is the `GradioTool`, which lets you define a new tool for your LLM as long as you implement a standard interface:\n\n```python\nclass GradioTool(BaseTool):\n\n def __init__(self, name: str, description: str, src: str) -> None:\n\n @abstractmethod\n def create_job(self, query: str) -> Job:\n pass\n\n @abstractmethod\n def postprocess(self, output: Tuple[Any] | Any) -> str:\n pass\n```\n\nThe requirements are:\n\n1. The name for your tool\n2. The description for your tool. This is crucial! Agents decide which tool to use based on their description. 
Be precise and be sure to include an example of what the input and the output of the tool should look like.\n3. The URL or Space id, e.g. `freddyaboulton/calculator`, of the Gradio application. Based on this value, `gradio_tool` will create a [gradio client](https://github.com/gradio-app/gradio/blob/main/client/python/README.md) instance to query the upstream application via API. Be sure to click the link and learn more about the gradio client library if you are not familiar with it.\n4. create_job - Given a string, this method should parse that string and return a job from the client. Most times, this is as simple as passing the string to the `submit` function of the client. More info on creating jobs [here](https://github.com/gradio-app/gradio/blob/main/client/python/README.md#making-a-prediction)\n5. postprocess - Given the result of the job, convert it to a string the LLM can display to the user.\n6. _Optional_ - Some libraries, e.g. [MiniChain](https://github.com/srush/MiniChain/tree/main), may need some info about the underlying gradio input and output types used by the tool. By default, this will return gr.Textbox() but if you'd like to provide more accurate info, implement the `_block_input(self, gr)` and `_block_output(self, gr)` methods of the tool. ", "heading1": "gradio_tools - creating your own tool", "source_page_url": "https://gradio.app/guides/gradio-and-llm-agents", "source_page_title": "Gradio Clients And Lite - Gradio And Llm Agents Guide"}, {"text": "The `gr` variable is the gradio module (the result of `import gradio as gr`). It will be automatically imported by the `GradioTool` parent class and passed to the `_block_input` and `_block_output` methods.\n\nAnd that's it!\n\nOnce you have created your tool, open a pull request to the `gradio_tools` repo! We welcome all contributions.\n\n", "heading1": "gradio_tools - creating your own tool", "source_page_url": "https://gradio.app/guides/gradio-and-llm-agents", "source_page_title": "Gradio Clients And Lite - Gradio And Llm Agents Guide"}, 
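Before looking at a real tool, here is a bare-bones skeleton of the interface above (an entirely hypothetical `EchoTool` wrapping a made-up Space id, shown only to illustrate the required methods):

```python
from gradio_tools import GradioTool

class EchoTool(GradioTool):
    def __init__(self, hf_token=None):
        super().__init__(
            name="Echo",
            description=(
                "Echoes text back. Input should be a plain string; "
                "the output is the same string."
            ),
            src="user/echo-space",  # hypothetical Space id
            hf_token=hf_token,
        )

    def create_job(self, query: str):
        # Pass the raw query straight to the Space's endpoint
        return self.client.submit(query, api_name="/predict")

    def postprocess(self, output) -> str:
        # Whatever string we return here is what the LLM sees
        return str(output)
```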
{"text": "Here is the code for the StableDiffusion tool as an example:\n\n```python\nfrom gradio_tools import GradioTool\nimport os\n\nclass StableDiffusionTool(GradioTool):\n    \"\"\"Tool for calling stable diffusion from llm\"\"\"\n\n    def __init__(\n        self,\n        name=\"StableDiffusion\",\n        description=(\n            \"An image generator. Use this to generate images based on \"\n            \"text input. Input should be a description of what the image should \"\n            \"look like. The output will be a path to an image file.\"\n        ),\n        src=\"gradio-client-demos/stable-diffusion\",\n        hf_token=None,\n    ) -> None:\n        super().__init__(name, description, src, hf_token)\n\n    def create_job(self, query: str) -> Job:\n        return self.client.submit(query, \"\", 9, fn_index=1)\n\n    def postprocess(self, output: str) -> str:\n        return [os.path.join(output, i) for i in os.listdir(output) if not i.endswith(\"json\")][0]\n\n    def _block_input(self, gr) -> \"gr.components.Component\":\n        return gr.Textbox()\n\n    def _block_output(self, gr) -> \"gr.components.Component\":\n        return gr.Image()\n```\n\nSome notes on this implementation:\n\n1. All instances of `GradioTool` have an attribute called `client` that is a pointer to the underlying [gradio client](https://github.com/gradio-app/gradio/tree/main/client/python#gradio_client-use-a-gradio-app-as-an-api----in-3-lines-of-python). That is what you should use in the `create_job` method.\n2. `create_job` just passes the query string to the `submit` function of the client with some other parameters hardcoded, i.e. the negative prompt string and the guidance scale. We could modify our tool to also accept these values from the input string in a subsequent version.\n3. The `postprocess` method simply returns the first image from the gallery of images created by the stable diffusion space. We use the `os` module to get the full path of the image.\n\n", "heading1": "Example tool - Stable Diffusion", "source_page_url": "https://gradio.app/guides/gradio-and-llm-agents", "source_page_title": "Gradio Clients And Lite - Gradio And Llm Agents Guide"}, {"text": "You now know how to extend the abilities of your LLM with the 1000s of gradio spaces running in the wild!\nAgain, we welcome any contributions to the [gradio_tools](https://github.com/freddyaboulton/gradio-tools) library.\nWe're excited to see the tools you all build!\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/gradio-and-llm-agents", "source_page_title": "Gradio Clients And Lite - Gradio And Llm Agents Guide"}, {"text": "Let's start with what seems like the most complex bit -- using machine learning to remove the music from a video.\n\nLuckily for us, there's an existing Space we can use to make this process easier: [https://huggingface.co/spaces/abidlabs/music-separation](https://huggingface.co/spaces/abidlabs/music-separation). This Space takes an audio file and produces two separate audio files: one with the instrumental music and one with all other sounds in the original clip. Perfect to use with our client!\n\nOpen a new Python file, say `main.py`, and start by importing the `Client` class from `gradio_client` and connecting it to this Space:\n\n```py\nfrom gradio_client import Client, handle_file\n\nclient = Client(\"abidlabs/music-separation\")\n\ndef acapellify(audio_path):\n    result = client.predict(handle_file(audio_path), api_name=\"/predict\")\n    return result[0]\n```\n\nThat's all the code that's needed -- notice that the API endpoint returns two audio files (one without the music, and one with just the music) in a list, and so we just return the first element of the list.\n\n---\n\n**Note**: since this is a public Space, there might be other users using this Space as well, which might result in a slow experience. You can duplicate this Space with your own [Hugging Face token](https://huggingface.co/settings/tokens) and create a private Space that only you will have access to and bypass the queue. To do that, simply replace the first two lines above with:\n\n```py\nfrom gradio_client import Client\n\nclient = Client.duplicate(\"abidlabs/music-separation\", hf_token=YOUR_HF_TOKEN)\n```\n\nEverything else remains the same!\n\n---\n\nNow, of course, we are working with video files, so we first need to extract the audio from the video files. For this, we will be using the `ffmpeg` library, which does a lot of heavy lifting when it comes to working with audio and video files. 
The most common way to use `ffmpeg` is through the command line, which we'll call via Python's `subprocess` module.\n\n", "heading1": "Step 1: Write the Video Processing Function", "source_page_url": "https://gradio.app/guides/fastapi-app-with-the-gradio-client", "source_page_title": "Gradio Clients And Lite - Fastapi App With The Gradio Client Guide"}, {"text": "Our video processing workflow will consist of three steps:\n\n1. First, we start by taking in a video filepath and extracting the audio using `ffmpeg`.\n2. Then, we pass the audio file through the `acapellify()` function above.\n3. Finally, we combine the new audio with the original video to produce the final acapellified video.\n\nHere's the complete code in Python, which you can add to your `main.py` file:\n\n```python\nimport os  # needed for os.path.basename below\nimport subprocess\n\ndef process_video(video_path):\n    old_audio = os.path.basename(video_path).split(\".\")[0] + \".m4a\"\n    subprocess.run(['ffmpeg', '-y', '-i', video_path, '-vn', '-acodec', 'copy', old_audio])\n\n    new_audio = acapellify(old_audio)\n\n    new_video = f\"acap_{video_path}\"\n    subprocess.call(['ffmpeg', '-y', '-i', video_path, '-i', new_audio, '-map', '0:v', '-map', '1:a', '-c:v', 'copy', '-c:a', 'aac', '-strict', 'experimental', f\"static/{new_video}\"])\n    return new_video\n```\n\nYou can read up on [ffmpeg documentation](https://ffmpeg.org/ffmpeg.html) if you'd like to understand all of the command line parameters, as they are beyond the scope of this tutorial.\n\n", "heading1": "Step 1: Write the Video Processing Function", "source_page_url": "https://gradio.app/guides/fastapi-app-with-the-gradio-client", "source_page_title": "Gradio Clients And Lite - Fastapi App With The Gradio Client Guide"}, {"text": "Next up, we'll create a simple FastAPI app. If you haven't used FastAPI before, check out [the great FastAPI docs](https://fastapi.tiangolo.com/). Otherwise, this basic template, which we add to `main.py`, will look pretty familiar:\n\n```python\nimport os\nfrom fastapi import FastAPI, File, UploadFile, Request\nfrom fastapi.responses import HTMLResponse, RedirectResponse\nfrom fastapi.staticfiles import StaticFiles\nfrom fastapi.templating import Jinja2Templates\n\napp = FastAPI()\nos.makedirs(\"static\", exist_ok=True)\napp.mount(\"/static\", StaticFiles(directory=\"static\"), name=\"static\")\ntemplates = Jinja2Templates(directory=\"templates\")\n\nvideos = []\n\n@app.get(\"/\", response_class=HTMLResponse)\nasync def home(request: Request):\n    return templates.TemplateResponse(\n        \"home.html\", {\"request\": request, \"videos\": videos})\n\n@app.post(\"/uploadvideo/\")\nasync def upload_video(video: UploadFile = File(...)):\n    video_path = video.filename\n    with open(video_path, \"wb+\") as fp:\n        fp.write(video.file.read())\n\n    new_video = process_video(video.filename)\n    videos.append(new_video)\n    return RedirectResponse(url='/', status_code=303)\n```\n\nIn this example, the FastAPI app has two routes: `/` and `/uploadvideo/`.\n\nThe `/` route returns an HTML template that displays a gallery of all uploaded videos.\n\nThe `/uploadvideo/` route accepts a `POST` request with an `UploadFile` object, which represents the uploaded video file. 
The video file is \"acapellified\" via the `process_video()` method, and the output video is stored in a list which stores all of the uploaded videos in memory.\n\nNote that this is a very basic example and if this were a production app, you will need to add more logic to handle file storage, user authentication, and security considerations.\n\n", "heading1": "Step 2: Create a FastAPI app (Backend Routes)", "source_page_url": "https://gradio.app/guides/fastapi-app-with-the-gradio-client", "source_page_title": "Gradio Clients And Lite - Fastapi App With The Gradio Client Guide"}, {"text": "Finally, we create the frontend of our web application. First, we create a folder called `templates` in the same directory as `main.py`. We then create a template, `home.html` inside the `templates` folder. Here is the resulting file structure:\n\n```csv\n\u251c\u2500\u2500 main.py\n\u251c\u2500\u2500 templates\n\u2502 \u2514\u2500\u2500 home.html\n```\n\nWrite the following as the contents of `home.html`:\n\n```html\n<!DOCTYPE html> <html> <head> <title>Video Gallery</title>\n<style> body { font-family: sans-serif; margin: 0; padding: 0;\nbackground-color: f5f5f5; } h1 { text-align: center; margin-top: 30px;\nmargin-bottom: 20px; } .gallery { display: flex; flex-wrap: wrap;\njustify-content: center; gap: 20px; padding: 20px; } .video { border: 2px solid\nccc; box-shadow: 0px 0px 10px rgba(0, 0, 0, 0.2); border-radius: 5px; overflow:\nhidden; width: 300px; margin-bottom: 20px; } .video video { width: 100%; height:\n200px; } .video p { text-align: center; margin: 10px 0; } form { margin-top:\n20px; text-align: center; } input[type=\"file\"] { display: none; } .upload-btn {\ndisplay: inline-block; background-color: 3498db; color: fff; padding: 10px\n20px; font-size: 16px; border: none; border-radius: 5px; cursor: pointer; }\n.upload-btn:hover { background-color: 2980b9; } .file-name { margin-left: 10px;\n} </style> </head> <body> <h1>Video Gallery</h1> {% if videos %}\n<div class=\"gallery\"> {% for video in videos %} <div class=\"video\">\n<video controls> <source src=\"{{ url_for('static', path=video) }}\"\ntype=\"video/mp4\"> Your browser does not support the video tag. 
</video>\n<p>{{ video }}</p> </div> {% endfor %} </div> {% else %} <p>No\nvideos uploaded yet.</p> {% endif %} <form action=\"/uploadvideo/\"\nmethod=\"post\" enctype=\"multipart/form-data\"> ", "heading1": "Step 3: Create a FastAPI app (Frontend Template)", "source_page_url": "https://gradio.app/guides/fastapi-app-with-the-gradio-client", "source_page_title": "Gradio Clients And Lite - Fastapi App With The Gradio Client Guide"}, {"text": "<label for=\"video-upload\" class=\"upload-btn\">Choose video file</label>\n<input type=\"file\" name=\"video\" id=\"video-upload\"> <span class=\"file-name\"></span>\n<button type=\"submit\" class=\"upload-btn\">Upload</button> </form>\n<script>\n// Display selected file name in the form\nconst fileUpload = document.getElementById(\"video-upload\");\nconst fileName = document.querySelector(\".file-name\");\nfileUpload.addEventListener(\"change\", (e) => { fileName.textContent = e.target.files[0].name; });\n</script> </body>\n</html>\n```\n\n", "heading1": "Step 3: Create a FastAPI app (Frontend Template)", "source_page_url": "https://gradio.app/guides/fastapi-app-with-the-gradio-client", "source_page_title": "Gradio Clients And Lite - Fastapi App With The Gradio Client Guide"}, {"text": "Finally, we are ready to run our FastAPI app, powered by the Gradio Python Client!\n\nOpen up a terminal and navigate to the directory containing `main.py`. Then run the following command in the terminal:\n\n```bash\n$ uvicorn main:app\n```\n\nYou should see an output that looks like this:\n\n```text\nLoaded as API: https://abidlabs-music-separation.hf.space ✔\nINFO:     Started server process [1360]\nINFO:     Waiting for application startup.\nINFO:     Application startup complete.\nINFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)\n```\n\nAnd that's it! Start uploading videos and you'll get some \"acapellified\" videos in response (might take seconds to minutes to process depending on the length of your videos). Here's how the UI looks after uploading two videos:\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/acapellify.png)\n\nIf you'd like to learn more about how to use the Gradio Python Client in your projects, [read the dedicated Guide](/guides/getting-started-with-the-python-client/).\n", "heading1": "Step 4: Run your FastAPI app", "source_page_url": "https://gradio.app/guides/fastapi-app-with-the-gradio-client", "source_page_title": "Gradio Clients And Lite - Fastapi App With The Gradio Client Guide"}, {"text": "If you already have a recent version of `gradio`, then the `gradio_client` is included as a dependency. But note that this documentation reflects the latest version of the `gradio_client`, so upgrade if you're not sure!\n\nThe lightweight `gradio_client` package can be installed from pip (or pip3) and is tested to work with **Python versions 3.10 or higher**:\n\n```bash\n$ pip install --upgrade gradio_client\n```\n\n", "heading1": "Installation", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, 
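If you are unsure which version you have installed, a quick standard-library check (not part of `gradio_client` itself) is:

```python
from importlib.metadata import version

# Prints the installed gradio_client version
print(version("gradio_client"))
```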
{"text": "Start by instantiating a `Client` object and connecting it to a Gradio app that is running on Hugging Face Spaces.\n\n```python\nfrom gradio_client import Client\n\nclient = Client(\"abidlabs/en2fr\")  # a Space that translates from English to French\n```\n\nYou can also connect to private Spaces by passing in your HF token with the `hf_token` parameter. You can get your HF token here: https://huggingface.co/settings/tokens\n\n```python\nfrom gradio_client import Client\n\nclient = Client(\"abidlabs/my-private-space\", hf_token=\"...\")\n```\n\n\n", "heading1": "Connecting to a Gradio App on Hugging Face Spaces", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "While you can use any public Space as an API, you may get rate limited by Hugging Face if you make too many requests. For unlimited usage of a Space, simply duplicate the Space to create a private Space,\nand then use it to make as many requests as you'd like!\n\nThe `gradio_client` includes a class method: `Client.duplicate()` to make this process simple (you'll need to pass in your [Hugging Face token](https://huggingface.co/settings/tokens) or be logged in using the Hugging Face CLI):\n\n```python\nimport os\nfrom gradio_client import Client, handle_file\n\nHF_TOKEN = os.environ.get(\"HF_TOKEN\")\n\nclient = Client.duplicate(\"abidlabs/whisper\", hf_token=HF_TOKEN)\nclient.predict(handle_file(\"audio_sample.wav\"))\n\n>> \"This is a test of the whisper speech recognition model.\"\n```\n\nIf you have previously duplicated a Space, re-running `duplicate()` will _not_ create a new Space. Instead, the Client will attach to the previously-created Space. So it is safe to re-run the `Client.duplicate()` method multiple times.\n\n**Note:** if the original Space uses GPUs, your private Space will as well, and your Hugging Face account will get billed based on the price of the GPU. To minimize charges, your Space will automatically go to sleep after 1 hour of inactivity. You can also set the hardware using the `hardware` parameter of `duplicate()`.\n\n", "heading1": "Duplicating a Space for private use", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "If your app is running somewhere else, just provide the full URL instead, including the \"http://\" or \"https://\". 
Here's an example of making predictions to a Gradio app that is running on a share URL:\n\n```python\nfrom gradio_client import Client\n\nclient = Client(\"https://bec81a83-5b5c-471e.gradio.live\")\n```\n\n", "heading1": "Connecting a general Gradio app", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "If the Gradio application you are connecting to [requires a username and password](/guides/sharing-your-app#authentication), then provide them as a tuple to the `auth` argument of the `Client` class:\n\n```python\nfrom gradio_client import Client\n\nClient(\n    space_name,\n    auth=[username, password]\n)\n```\n\n\n", "heading1": "Connecting to a Gradio app with auth", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "Once you have connected to a Gradio app, you can view the APIs that are available to you by calling the `Client.view_api()` method. For the Whisper Space, we see the following:\n\n```bash\nClient.predict() Usage Info\n---------------------------\nNamed API endpoints: 1\n\n - predict(audio, api_name=\"/predict\") -> output\n    Parameters:\n     - [Audio] audio: filepath (required)\n    Returns:\n     - [Textbox] output: str\n```\n\nWe see that we have 1 API endpoint in this space, which shows us how to use the API endpoint to make a prediction: we should call the `.predict()` method (which we will explore below), providing a parameter `audio` of type `str`, which is a `filepath or URL`.\n\nWe should also provide the `api_name='/predict'` argument to the `predict()` method. Although this isn't necessary if a Gradio app has only 1 named endpoint, it does allow us to call different endpoints in a single app if they are available.\n\n", "heading1": "Inspecting the API endpoints", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "As an alternative to running the `.view_api()` method, you can click on the \"Use via API\" link in the footer of the Gradio app, which shows us the same information, along with example usage. \n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api.png)\n\nThe View API page also includes an \"API Recorder\" that lets you interact with the Gradio UI normally and converts your interactions into the corresponding code to run with the Python Client.\n\n", "heading1": "The \"View API\" Page", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, 
\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api.png)\n\nThe View API page also includes an \"API Recorder\" that lets you interact with the Gradio UI normally and converts your interactions into the corresponding code to run with the Python Client.\n\n", "heading1": "The \"View API\" Page", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "The simplest way to make a prediction is to call the `.predict()` function with the appropriate arguments:\n\n```python\nfrom gradio_client import Client\n\nclient = Client(\"abidlabs/en2fr\")\nclient.predict(\"Hello\", api_name=\"/predict\")\n\n>> Bonjour\n```\n\nIf there are multiple parameters, then you should pass them as separate arguments to `.predict()`, like this:\n\n```python\nfrom gradio_client import Client\n\nclient = Client(\"gradio/calculator\")\nclient.predict(4, \"add\", 5)\n\n>> 9.0\n```\n\nIt is recommended to provide keyword arguments instead of positional arguments:\n\n\n```python\nfrom gradio_client import Client\n\nclient = Client(\"gradio/calculator\")\nclient.predict(num1=4, operation=\"add\", num2=5)\n\n>> 9.0\n```\n\nThis allows you to take advantage of default arguments. For example, this Space includes the default value for the Slider component, so you do not need to provide it when accessing it with the client.\n\n```python\nfrom gradio_client import Client\n\nclient = Client(\"abidlabs/image_generator\")\nclient.predict(text=\"an astronaut riding a camel\")\n```\n\nThe default value is the initial value of the corresponding Gradio component. If the component does not have an initial value, but the corresponding argument in the predict function has a default value of `None`, then that parameter is also optional in the client. Of course, if you'd like to override it, you can include it as well:\n\n```python\nfrom gradio_client import Client\n\nclient = Client(\"abidlabs/image_generator\")\nclient.predict(text=\"an astronaut riding a camel\", steps=25)\n```\n\nFor providing files or URLs as inputs, you should pass in the filepath or URL to the file enclosed within `gradio_client.handle_file()`. This takes care of uploading the file to the Gradio server and ensures that the file is preprocessed correctly:\n\n```python\nfrom gradio_client import Client, handle_file\n\nclient = Client(\"abidlabs/whisper\")\nclient.predict(\n    audio=handle_file(\"https://audio-samples.github.io/samples/mp3/blizzard_unconditional/s", "heading1": "Making a prediction", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "```python\nfrom gradio_client import Client, handle_file\n\nclient = Client(\"abidlabs/whisper\")\nclient.predict(\n    audio=handle_file(\"https://audio-samples.github.io/samples/mp3/blizzard_unconditional/sample-0.mp3\")\n)\n\n>> \"My thought I have nobody by a beauty and will as you poured. Mr. 
Rochester is serve in that so don't find simpus, and devoted abode, to at might in a r\u2014\"\n```\n\n", "heading1": "Making a prediction", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "Note that `.predict()` is a _blocking_ operation, as it waits for the operation to complete before returning the prediction.\n\nIn many cases, you may be better off letting the job run in the background until you need the results of the prediction. You can do this by creating a `Job` instance using the `.submit()` method, and then later calling `.result()` on the job to get the result. For example:\n\n```python\nfrom gradio_client import Client\n\nclient = Client(src=\"abidlabs/en2fr\")\njob = client.submit(\"Hello\", api_name=\"/predict\")  # This is not blocking\n\n# Do something else\n\njob.result()  # This is blocking\n\n>> Bonjour\n```\n\n", "heading1": "Running jobs asynchronously", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "Alternatively, one can add one or more callbacks to perform actions after the job has completed running, like this:\n\n```python\nfrom gradio_client import Client\n\ndef print_result(x):\n    print(f\"The translated result is: {x}\")\n\nclient = Client(src=\"abidlabs/en2fr\")\n\njob = client.submit(\"Hello\", api_name=\"/predict\", result_callbacks=[print_result])\n\n# Do something else\n\n>> The translated result is: Bonjour\n\n```\n\n", "heading1": "Adding callbacks", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "The `Job` object also allows you to get the status of the running job by calling the `.status()` method. This returns a `StatusUpdate` object with the following attributes: `code` (the status code, one of a set of defined strings representing the status; see the `utils.Status` class), `rank` (the current position of this job in the queue), `queue_size` (the total queue size), `eta` (estimated time this job will complete), `success` (a boolean representing whether the job completed successfully), and `time` (the time that the status was generated).\n\n```py\nfrom gradio_client import Client\n\nclient = Client(src=\"gradio/calculator\")\njob = client.submit(5, \"add\", 4, api_name=\"/predict\")\njob.status()\n\n>> <Status.STARTING: 'STARTING'>\n```\n\n_Note_: The `Job` class also has a `.done()` instance method which returns a boolean indicating whether the job has completed.\n\n", "heading1": "Status", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "The `Job` class also has a `.cancel()` instance method that cancels jobs that have been queued but not started. For example, if you run:\n\n```py\nfrom gradio_client import Client, handle_file\n\nclient = Client(\"abidlabs/whisper\")\njob1 = client.submit(handle_file(\"audio_sample1.wav\"))\njob2 = client.submit(handle_file(\"audio_sample2.wav\"))\njob1.cancel()  # will return False, assuming the job has started\njob2.cancel()  # will return True, indicating that the job has been canceled\n```\n\nIf the first job has started processing, then it will not be canceled. 
If the second job\nhas not yet started, it will be successfully canceled and removed from the queue.\n\n", "heading1": "Cancelling Jobs", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "Some Gradio API endpoints do not return a single value; rather, they return a series of values. You can get the series of values that have been returned at any time from such a generator endpoint by running `job.outputs()`:\n\n```py\nimport time\n\nfrom gradio_client import Client\n\nclient = Client(src=\"gradio/count_generator\")\njob = client.submit(3, api_name=\"/count\")\nwhile not job.done():\n    time.sleep(0.1)\njob.outputs()\n\n>> ['0', '1', '2']\n```\n\nNote that running `job.result()` on a generator endpoint only gives you the _first_ value returned by the endpoint.\n\nThe `Job` object is also iterable, which means you can use it to display the results of a generator function as they are returned from the endpoint. Here's the equivalent example using the `Job` as a generator:\n\n```py\nfrom gradio_client import Client\n\nclient = Client(src=\"gradio/count_generator\")\njob = client.submit(3, api_name=\"/count\")\n\nfor o in job:\n    print(o)\n\n>> 0\n>> 1\n>> 2\n```\n\nYou can also cancel jobs that have iterative outputs, in which case the job will finish as soon as the current iteration finishes running.\n\n```py\nfrom gradio_client import Client\nimport time\n\nclient = Client(\"abidlabs/test-yield\")\njob = client.submit(\"abcdef\")\ntime.sleep(3)\njob.cancel()  # job cancels after 2 iterations\n```\n\n", "heading1": "Generator Endpoints", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "Gradio demos can include [session state](https://www.gradio.app/guides/state-in-blocks), which provides a way for demos to persist information from user interactions within a page session.\n\nFor example, consider the following demo, which maintains a list of words that a user has submitted in a `gr.State` component. When a user submits a new word, it is added to the state, and the number of previous occurrences of that word is displayed:\n\n```python\nimport gradio as gr\n\ndef count(word, list_of_words):\n    return list_of_words.count(word), list_of_words + [word]\n\nwith gr.Blocks() as demo:\n    words = gr.State([])\n    textbox = gr.Textbox()\n    number = gr.Number()\n    textbox.submit(count, inputs=[textbox, words], outputs=[number, words])\n\ndemo.launch()\n```\n\nIf you were to connect to this Gradio app using the Python Client, you would notice that the API information only shows a single input and output:\n\n```bash\nClient.predict() Usage Info\n---------------------------\nNamed API endpoints: 1\n\n - predict(word, api_name=\"/count\") -> value_31\n    Parameters:\n     - [Textbox] word: str (required)\n    Returns:\n     - [Number] value_31: float\n```\n\nThat is because the Python client handles state automatically for you -- as you make a series of requests, the returned state from one request is stored internally and automatically supplied for the subsequent request. 
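For example, with the word-counting demo above, two consecutive calls share the saved list of words (a sketch; `user/count-demo` is a hypothetical Space hosting that demo):\n\n```python\nfrom gradio_client import Client\n\n# hypothetical Space running the word-counting demo above\nclient = Client(\"user/count-demo\")\nclient.predict(\"cat\", api_name=\"/count\")  # >> 0  (first occurrence)\nclient.predict(\"cat\", api_name=\"/count\")  # >> 1  (state carried over automatically)\n```\n\n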
If you'd like to reset the state, you can do that by calling `Client.reset_session()`.\n", "heading1": "Demos with Session State", "source_page_url": "https://gradio.app/guides/getting-started-with-the-python-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Python Client Guide"}, {"text": "You generally don't need to install cURL, as it comes pre-installed on many operating systems. Run:\n\n```bash\ncurl --version\n```\n\nto confirm that `curl` is installed. If it is not already installed, you can install it by visiting https://curl.se/download.html. \n\n\n", "heading1": "Installation", "source_page_url": "https://gradio.app/guides/querying-gradio-apps-with-curl", "source_page_title": "Gradio Clients And Lite - Querying Gradio Apps With Curl Guide"}, {"text": "To query a Gradio app, you'll need its full URL. This is usually just the URL that the Gradio app is hosted on, for example: https://bec81a83-5b5c-471e.gradio.live\n\n\n**Hugging Face Spaces**\n\nHowever, if you are querying a Gradio app on Hugging Face Spaces, you will need to use the URL of the embedded Gradio app, not the URL of the Space webpage. For example:\n\n```bash\n\u274c Space URL: https://huggingface.co/spaces/abidlabs/en2fr\n\u2705 Gradio app URL: https://abidlabs-en2fr.hf.space/\n```\n\nYou can get the Gradio app URL by clicking the \"view API\" link at the bottom of the page. Or, you can right-click on the page and then click on \"View Frame Source\" or the equivalent in your browser to view the URL of the embedded Gradio app.\n\nWhile you can use any public Space as an API, you may get rate limited by Hugging Face if you make too many requests. For unlimited usage of a Space, simply duplicate the Space to create a private Space,\nand then use it to make as many requests as you'd like!\n\nNote: to query private Spaces, you will need to pass in your Hugging Face (HF) token. You can get your HF token here: https://huggingface.co/settings/tokens. In this case, you will need to include an additional header in both of your `curl` calls that we'll discuss below:\n\n```bash\n-H \"Authorization: Bearer $HF_TOKEN\"\n```\n\nNow, we are ready to make the two `curl` requests.\n\n", "heading1": "Step 0: Get the URL for your Gradio App", "source_page_url": "https://gradio.app/guides/querying-gradio-apps-with-curl", "source_page_title": "Gradio Clients And Lite - Querying Gradio Apps With Curl Guide"}, {"text": "The first of the two `curl` requests is a `POST` request that submits the input payload to the Gradio app.\n\nThe syntax of the `POST` request is as follows:\n\n```bash\n$ curl -X POST $URL/call/$API_NAME -H \"Content-Type: application/json\" -d '{\n  \"data\": $PAYLOAD\n}'\n```\n\nHere:\n\n* `$URL` is the URL of the Gradio app as obtained in Step 0\n* `$API_NAME` is the name of the API endpoint for the event that you are running. You can get the API endpoint names by clicking the \"view API\" link at the bottom of the page.\n* `$PAYLOAD` is a valid JSON data list containing the input payload, one element for each input component.\n\nWhen you make this `POST` request successfully, you will get an event id that is printed to the terminal in this format:\n\n```bash\n>> {\"event_id\": $EVENT_ID}\n```\n\nThis `EVENT_ID` will be needed in the subsequent `curl` request to fetch the results of the prediction. 
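In a shell script, you might capture the event id into a variable for the follow-up request (a sketch; assumes the `jq` JSON processor is installed):\n\n```bash\n# submit the prediction and extract the event id from the JSON response\nEVENT_ID=$(curl -s -X POST $URL/call/$API_NAME -H \"Content-Type: application/json\" -d '{\"data\": [\"Hello\"]}' | jq -r .event_id)\n```\n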
\n\nHere are some examples of how to make the `POST` request:\n\n**Basic Example**\n\nRevisiting the example at the beginning of the page, here is how to make the `POST` request for a simple Gradio application that takes in a single input text component:\n\n```bash\n$ curl -X POST https://abidlabs-en2fr.hf.space/call/predict -H \"Content-Type: application/json\" -d '{\n  \"data\": [\"Hello, my friend.\"]\n}'\n```\n\n**Multiple Input Components**\n\nThis [Gradio demo](https://huggingface.co/spaces/gradio/hello_world_3) accepts three inputs: a string corresponding to the `gr.Textbox`, a boolean value corresponding to the `gr.Checkbox`, and a numerical value corresponding to the `gr.Slider`. Here is the `POST` request:\n\n```bash\ncurl -X POST https://gradio-hello-world-3.hf.space/call/predict -H \"Content-Type: application/json\" -d '{\n  \"data\": [\"Hello\", true, 5]\n}'\n```\n\n**Private Spaces**\n\nAs mentioned earlier, if you are making a request to a private Space, you will need to pass in a [Hugging Face token](https://huggingface.co/settings/tokens) that has read access to the Space. The request will look like this:\n\n```bash\n", "heading1": "Step 1: Make a Prediction (POST)", "source_page_url": "https://gradio.app/guides/querying-gradio-apps-with-curl", "source_page_title": "Gradio Clients And Lite - Querying Gradio Apps With Curl Guide"}, {"text": "king a request to a private Space, you will need to pass in a [Hugging Face token](https://huggingface.co/settings/tokens) that has read access to the Space. The request will look like this:\n\n```bash\n$ curl -X POST https://private-space.hf.space/call/predict -H \"Content-Type: application/json\" -H \"Authorization: Bearer $HF_TOKEN\" -d '{\n  \"data\": [\"Hello, my friend.\"]\n}'\n```\n\n**Files**\n\nIf you are using `curl` to query a Gradio application that requires file inputs, the files *need* to be provided as URLs, and the URL needs to be enclosed in a dictionary in this format:\n\n```bash\n{\"path\": $URL}\n```\n\nHere is an example `POST` request:\n\n```bash\n$ curl -X POST https://gradio-image-mod.hf.space/call/predict -H \"Content-Type: application/json\" -d '{\n  \"data\": [{\"path\": \"https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png\"}]\n}'\n```\n\n\n**Stateful Demos**\n\nIf your Gradio demo [persists user state](/guides/interface-state) across multiple interactions (e.g. is a chatbot), you can pass in a `session_hash` alongside the `data`. Requests with the same `session_hash` are assumed to be part of the same user session. 
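The `session_hash` itself is just an arbitrary string that you generate on the client side; any reasonably unique value works (a sketch using `openssl`, which is one of several ways to produce one):\n\n```bash\n# generate a random session hash to reuse across related requests\nSESSION_HASH=$(openssl rand -hex 16)\n```\n\n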
Here's how that might look:\n\n```bash\n# These two requests will share a session\n\ncurl -X POST https://gradio-chatinterface-random-response.hf.space/call/chat -H \"Content-Type: application/json\" -d '{\n  \"data\": [\"Are you sentient?\"],\n  \"session_hash\": \"randomsequence1234\"\n}'\n\ncurl -X POST https://gradio-chatinterface-random-response.hf.space/call/chat -H \"Content-Type: application/json\" -d '{\n  \"data\": [\"Really?\"],\n  \"session_hash\": \"randomsequence1234\"\n}'\n\n# This request will be treated as a new session\n\ncurl -X POST https://gradio-chatinterface-random-response.hf.space/call/chat -H \"Content-Type: application/json\" -d '{\n  \"data\": [\"Are you sentient?\"],\n  \"session_hash\": \"newsequence5678\"\n}'\n```\n\n\n\n", "heading1": "Step 1: Make a Prediction (POST)", "source_page_url": "https://gradio.app/guides/querying-gradio-apps-with-curl", "source_page_title": "Gradio Clients And Lite - Querying Gradio Apps With Curl Guide"}, {"text": "Once you have received the `EVENT_ID` corresponding to your prediction, you can stream the results. Gradio stores these results in a least-recently-used cache in the Gradio app. By default, the cache can store 2,000 results (across all users and endpoints of the app).\n\nTo stream the results for your prediction, make a `GET` request with the following syntax:\n\n```bash\n$ curl -N $URL/call/$API_NAME/$EVENT_ID\n```\n\n\nTip: If you are fetching results from a private Space, include a header with your HF token like this: `-H \"Authorization: Bearer $HF_TOKEN\"` in the `GET` request.\n\nThis should produce a stream of responses in this format:\n\n```bash\nevent: ...\ndata: ...\nevent: ...\ndata: ...\n...\n```\n\nHere: `event` can be one of the following:\n* `generating`: indicating an intermediate result\n* `complete`: indicating that the prediction is complete and contains the final result\n* `error`: indicating that the prediction was not completed successfully\n* `heartbeat`: sent every 15 seconds to keep the request alive\n\nThe `data` is in the same format as the input payload: a valid JSON data list containing the output result, one element for each output component.\n\nHere are some examples of what results you should expect if a request is completed successfully:\n\n**Basic Example**\n\nRevisiting the example at the beginning of the page, we would expect the result to look like this:\n\n```bash\nevent: complete\ndata: [\"Bonjour, mon ami.\"]\n```\n\n**Multiple Outputs**\n\nIf your endpoint returns multiple values, they will appear as elements of the `data` list:\n\n```bash\nevent: complete\ndata: [\"Good morning Hello. 
It is 5 degrees today\", -15.0]\n```\n\n**Streaming Example**\n\nIf your Gradio app [streams a sequence of values](/guides/streaming-outputs), then they will be streamed directly to your terminal, like this:\n\n```bash\nevent: generating\ndata: [\"Hello, w!\"]\nevent: generating\ndata: [\"Hello, wo!\"]\nevent: generating\ndata: [\"Hello, wor!\"]\nevent: generating\ndata: [\"Hello, worl!\"]\nevent: generating\ndata: [\"Hello, w", "heading1": "Step 2: GET the result", "source_page_url": "https://gradio.app/guides/querying-gradio-apps-with-curl", "source_page_title": "Gradio Clients And Lite - Querying Gradio Apps With Curl Guide"}, {"text": "```bash\nevent: generating\ndata: [\"Hello, w!\"]\nevent: generating\ndata: [\"Hello, wo!\"]\nevent: generating\ndata: [\"Hello, wor!\"]\nevent: generating\ndata: [\"Hello, worl!\"]\nevent: generating\ndata: [\"Hello, world!\"]\nevent: complete\ndata: [\"Hello, world!\"]\n```\n\n**File Example**\n\nIf your Gradio app returns a file, the file will be represented as a dictionary in this format (including potentially some additional keys):\n\n```python\n{\n    \"orig_name\": \"example.jpg\",\n    \"path\": \"/path/in/server.jpg\",\n    \"url\": \"https://example.com/example.jpg\",\n    \"meta\": {\"_type\": \"gradio.FileData\"}\n}\n```\n\nIn your terminal, it may appear like this:\n\n```bash\nevent: complete\ndata: [{\"path\": \"/tmp/gradio/359933dc8d6cfe1b022f35e2c639e6e42c97a003/image.webp\", \"url\": \"https://gradio-image-mod.hf.space/c/file=/tmp/gradio/359933dc8d6cfe1b022f35e2c639e6e42c97a003/image.webp\", \"size\": null, \"orig_name\": \"image.webp\", \"mime_type\": null, \"is_stream\": false, \"meta\": {\"_type\": \"gradio.FileData\"}}]\n```\n\n", "heading1": "Step 2: GET the result", "source_page_url": "https://gradio.app/guides/querying-gradio-apps-with-curl", "source_page_title": "Gradio Clients And Lite - Querying Gradio Apps With Curl Guide"}, {"text": "What if your Gradio application has [authentication enabled](/guides/sharing-your-app#authentication)? In that case, you'll need to make an additional `POST` request with cURL to authenticate yourself before you make any queries. Here are the complete steps:\n\nFirst, log in with a `POST` request supplying a valid username and password:\n\n```bash\ncurl -X POST $URL/login \\\n    -d \"username=$USERNAME&password=$PASSWORD\" \\\n    -c cookies.txt\n```\n\nIf the credentials are correct, you'll get `{\"success\":true}` in response and the cookies will be saved in `cookies.txt`.\n\nNext, you'll need to include these cookies when you make the original `POST` request, like this:\n\n```bash\n$ curl -X POST $URL/call/$API_NAME -b cookies.txt -H \"Content-Type: application/json\" -d '{\n  \"data\": $PAYLOAD\n}'\n```\n\nFinally, you'll need to `GET` the results, again supplying the cookies from the file:\n\n```bash\ncurl -N $URL/call/$API_NAME/$EVENT_ID -b cookies.txt\n```\n", "heading1": "Authentication", "source_page_url": "https://gradio.app/guides/querying-gradio-apps-with-curl", "source_page_title": "Gradio Clients And Lite - Querying Gradio Apps With Curl Guide"}, {"text": "`@gradio/lite` is a JavaScript library that enables you to run Gradio applications directly within your web browser. It achieves this by utilizing Pyodide, a Python runtime for WebAssembly, which allows Python code to be executed in the browser environment. 
With `@gradio/lite`, you can **write regular Python code for your Gradio applications**, and they will **run seamlessly in the browser** without the need for server-side infrastructure.\n\n", "heading1": "What is `@gradio/lite`?", "source_page_url": "https://gradio.app/guides/gradio-lite", "source_page_title": "Gradio Clients And Lite - Gradio Lite Guide"}, {"text": "Let's build a \"Hello World\" Gradio app in `@gradio/lite`.\n\n\n1. Import JS and CSS\n\nStart by creating a new HTML file, if you don't have one already. Import the JavaScript and CSS corresponding to the `@gradio/lite` package using the following code:\n\n\n```html\n<html>\n\t<head>\n\t\t<script type=\"module\" crossorigin src=\"https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.js\"></script>\n\t\t<link rel=\"stylesheet\" href=\"https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.css\" />\n\t</head>\n</html>\n```\n\nNote that you should generally use the latest version of `@gradio/lite` that is available. You can see the [versions available here](https://www.jsdelivr.com/package/npm/@gradio/lite?tab=files).\n\n2. Create the `<gradio-lite>` tags\n\nSomewhere in the body of your HTML page (wherever you'd like the Gradio app to be rendered), create opening and closing `<gradio-lite>` tags.\n\n```html\n<html>\n\t<head>\n\t\t<script type=\"module\" crossorigin src=\"https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.js\"></script>\n\t\t<link rel=\"stylesheet\" href=\"https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.css\" />\n\t</head>\n\t<body>\n\t\t<gradio-lite>\n\t\t</gradio-lite>\n\t</body>\n</html>\n```\n\nNote: you can add the `theme` attribute to the `<gradio-lite>` tag to force the theme to be dark or light (by default, it respects the system theme). E.g.\n\n```html\n<gradio-lite theme=\"dark\">\n...\n</gradio-lite>\n```\n\n3. Write your Gradio app inside of the tags\n\nNow, write your Gradio app as you would normally, in Python! Keep in mind that since this is Python, whitespace and indentations matter.\n\n```html\n<html>\n\t<head>\n\t\t<script type=\"module\" crossorigin src=\"https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.js\"></script>\n\t\t<link rel=\"stylesheet\" href=\"https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.css\" />\n\t</head>\n\t<body>\n\t\t<gradio-lite>\n\t\timport gradio as gr\n\n\t\tdef greet(name):\n\t\t\treturn \"Hello, \" + name + \"!\"\n\n\t\tgr.Interface(greet, \"textbox\", \"textbox\").launch()\n\t\t</gradio-lite>\n\t</body>\n</html>\n```\n\nAn", "heading1": "Getting Started", "source_page_url": "https://gradio.app/guides/gradio-lite", "source_page_title": "Gradio Clients And Lite - Gradio Lite Guide"}, {"text": "head>\n\t<body>\n\t\t<gradio-lite>\n\t\timport gradio as gr\n\n\t\tdef greet(name):\n\t\t\treturn \"Hello, \" + name + \"!\"\n\n\t\tgr.Interface(greet, \"textbox\", \"textbox\").launch()\n\t\t</gradio-lite>\n\t</body>\n</html>\n```\n\nAnd that's it! You should now be able to open your HTML page in the browser and see the Gradio app rendered! Note that it may take a little while for the Gradio app to load initially since Pyodide can take a while to install in your browser.\n\n**Note on debugging**: to see any errors in your Gradio-lite application, open the inspector in your web browser. All errors (including Python errors) will be printed there.\n\n", "heading1": "Getting Started", "source_page_url": "https://gradio.app/guides/gradio-lite", "source_page_title": "Gradio Clients And Lite - Gradio Lite Guide"}, {"text": "What if you want to create a Gradio app that spans multiple files? Or that has custom Python requirements? Both are possible with `@gradio/lite`!\n\nMultiple Files\n\nAdding multiple files within a `@gradio/lite` app is very straightforward: use the `<gradio-file>` tag. You can have as many `<gradio-file>` tags as you want, but each one needs to have a `name` attribute, and the entry point to your Gradio app should have the `entrypoint` attribute.\n\nHere's an example:\n\n```html\n<gradio-lite>\n\n<gradio-file name=\"app.py\" entrypoint>\nimport gradio as gr\nfrom utils import add\n\ndemo = gr.Interface(fn=add, inputs=[\"number\", \"number\"], outputs=\"number\")\n\ndemo.launch()\n</gradio-file>\n\n<gradio-file name=\"utils.py\">\ndef add(a, b):\n\treturn a + b\n</gradio-file>\n\n</gradio-lite>\n```\n\nAdditional Requirements\n\nIf your Gradio app has additional requirements, it is usually possible to [install them in the browser using micropip](https://pyodide.org/en/stable/usage/loading-packages.html#loading-packages). 
We've created a wrapper to make this particularly convenient: simply list your requirements in the same syntax as a `requirements.txt` and enclose them with `<gradio-requirements>` tags.\n\nHere, we install `transformers_js_py` to run a text classification model directly in the browser!\n\n```html\n<gradio-lite>\n\n<gradio-requirements>\ntransformers_js_py\n</gradio-requirements>\n\n<gradio-file name=\"app.py\" entrypoint>\nfrom transformers_js import import_transformers_js\nimport gradio as gr\n\ntransformers = await import_transformers_js()\npipeline = transformers.pipeline\npipe = await pipeline('sentiment-analysis')\n\nasync def classify(text):\n\treturn await pipe(text)\n\ndemo = gr.Interface(classify, \"textbox\", \"json\")\ndemo.launch()\n</gradio-file>\n\n</gradio-lite>\n```\n\n**Try it out**: You can see this example running in [this Hugging Face Static Space](https://huggingface.co/spaces/abidlabs/gradio-lite-classify), which lets you host static (serverless) web applications for free. Visit the page and y", "heading1": "More Examples: Adding Additional Files and Requirements", "source_page_url": "https://gradio.app/guides/gradio-lite", "source_page_title": "Gradio Clients And Lite - Gradio Lite Guide"}, {"text": "xample running in [this Hugging Face Static Space](https://huggingface.co/spaces/abidlabs/gradio-lite-classify), which lets you host static (serverless) web applications for free. Visit the page and you'll be able to run a machine learning model without internet access!\n\nSharedWorker mode\n\nBy default, Gradio-Lite executes Python code in a [Web Worker](https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API) with the [Pyodide](https://pyodide.org/) runtime, and each Gradio-Lite app has its own worker.\nThis has some benefits, such as environment isolation.\n\nHowever, when there are many Gradio-Lite apps on the same page, it may cause performance issues such as high memory usage because each app has its own worker and Pyodide runtime.\nIn such cases, you can use the **SharedWorker mode** to share a single Pyodide runtime in a [SharedWorker](https://developer.mozilla.org/en-US/docs/Web/API/SharedWorker) among multiple Gradio-Lite apps. To enable the SharedWorker mode, set the `shared-worker` attribute on the `<gradio-lite>` tag.\n\n```html\n<gradio-lite shared-worker>\nimport gradio as gr\n...\n</gradio-lite>\n\n<gradio-lite shared-worker>\nimport gradio as gr\n...\n</gradio-lite>\n```\n\nWhen using the SharedWorker mode, you should be aware of the following points:\n* The apps share the same Python environment, which means that they can access the same modules and objects. If, for example, one app makes changes to some modules, the changes will be visible to other apps.\n* The file system is shared among the apps, while each app's files are mounted in its own home directory, so each app can access the files of other apps.\n\nCode and Demo Playground\n\nIf you'd like to see the code side-by-side with the demo, just pass in the `playground` attribute to the gradio-lite element. This will create an interactive playground that allows you to change the code and update the demo! 
If you're using playground, you can also set `layout` to either 'vertical' or 'horizontal', which will determine if the code editor and preview are side-by-side or on top of each other (by default it's responsive with the width of the page).\n\n```html\n<gradio-lite playground>\nimport gradio as gr\n\ngr.Interface(fn=lambda x: x,\n\t\t\tinputs=gr.Textbox(),\n\t\t\toutputs=gr.Textbox()\n\t\t).launch()\n</gradio-lite>\n```\n\n", "heading1": "More Examples: Adding Additional Files and Requirements", "source_page_url": "https://gradio.app/guides/gradio-lite", "source_page_title": "Gradio Clients And Lite - Gradio Lite Guide"}, {"text": "1. Serverless Deployment\nThe primary advantage of @gradio/lite is that it eliminates the need for server infrastructure. This simplifies deployment, reduces server-related costs, and makes it easier to share your Gradio applications with others.\n\n2. Low Latency\nBy running in the browser, @gradio/lite offers low-latency interactions for users. There's no need for data to travel to and from a server, resulting in faster responses and a smoother user experience.\n\n3. Privacy and Security\nSince all processing occurs within the user's browser, `@gradio/lite` enhances privacy and security. User data remains on their device, providing peace of mind regarding data handling.\n\nLimitations\n\n* Currently, the biggest limitation in using `@gradio/lite` is that your Gradio apps will generally take more time (usually 5-15 seconds) to load initially in the browser. This is because the browser needs to load the Pyodide runtime before it can render Python code.\n\n* Not every Python package is supported by Pyodide. While `gradio` and many other popular packages (including `numpy`, `scikit-learn`, and `transformers-js`) can be installed in Pyodide, if your app has many dependencies, it's worth checking whether the dependencies are included in Pyodide, or can be [installed with `micropip`](https://micropip.pyodide.org/en/v0.2.2/project/api.html#micropip.install).\n\n", "heading1": "Benefits of Using `@gradio/lite`", "source_page_url": "https://gradio.app/guides/gradio-lite", "source_page_title": "Gradio Clients And Lite - Gradio Lite Guide"}, {"text": "You can immediately try out `@gradio/lite` by copying and pasting this code in a local `index.html` file and opening it with your browser:\n\n```html\n<html>\n\t<head>\n\t\t<script type=\"module\" crossorigin src=\"https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.js\"></script>\n\t\t<link rel=\"stylesheet\" href=\"https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.css\" />\n\t</head>\n\t<body>\n\t\t<gradio-lite>\n\t\timport gradio as gr\n\n\t\tdef greet(name):\n\t\t\treturn \"Hello, \" + name + \"!\"\n\n\t\tgr.Interface(greet, \"textbox\", \"textbox\").launch()\n\t\t</gradio-lite>\n\t</body>\n</html>\n```\n\n\nWe've also created a playground on the Gradio website that allows you to interactively edit code and see the results immediately!\n\nPlayground: https://www.gradio.app/playground\n", "heading1": "Try it out!", "source_page_url": "https://gradio.app/guides/gradio-lite", "source_page_title": "Gradio Clients And Lite - Gradio Lite Guide"}, {"text": "Install the @gradio/client package to interact with Gradio APIs using Node.js version >=18.0.0 or in browser-based projects. 
Use npm or any compatible package manager:\n\n```bash\nnpm i @gradio/client\n```\n\nThis command adds @gradio/client to your project dependencies, allowing you to import it in your JavaScript or TypeScript files.\n\n", "heading1": "Installation via npm", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "For quick addition to your web project, you can use the jsDelivr CDN to load the latest version of @gradio/client directly into your HTML:\n\n```html\n<script type=\"module\" src=\"https://cdn.jsdelivr.net/npm/@gradio/client/dist/index.min.js\"></script>\n```\n\nBe sure to add this to the `<head>` of your HTML. This will install the latest version, but we advise hardcoding the version in production. You can find all available versions [here](https://www.jsdelivr.com/package/npm/@gradio/client). This approach is ideal for experimental or prototyping purposes, though it has some limitations. A complete example would look like this:\n\n```html\n<html>\n\t<head>\n\t\t<script type=\"module\">\n\t\t\timport { Client } from \"https://cdn.jsdelivr.net/npm/@gradio/client/dist/index.min.js\";\n\t\t\tconst client = await Client.connect(\"abidlabs/en2fr\");\n\t\t\tconst result = await client.predict(\"/predict\", [\"Hello\"]);\n\t\t\tconsole.log(result.data);\n\t\t</script>\n\t</head>\n\t<body></body>\n</html>\n```\n\n", "heading1": "Installation via CDN", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "Start by instantiating a `client` instance and connecting it to a Gradio app that is running on Hugging Face Spaces or generally anywhere on the web.\n\n", "heading1": "Connecting to a running Gradio App", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "```js\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.connect(\"abidlabs/en2fr\"); // a Space that translates from English to French\n```\n\nYou can also connect to private Spaces by passing in your HF token with the `hf_token` property of the options parameter. You can get your HF token here: https://huggingface.co/settings/tokens\n\n```js\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.connect(\"abidlabs/my-private-space\", { hf_token: \"hf_...\" })\n```\n\n", "heading1": "Connecting to a Hugging Face Space", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "While you can use any public Space as an API, you may get rate limited by Hugging Face if you make too many requests. For unlimited usage of a Space, simply duplicate the Space to create a private Space, and then use it to make as many requests as you'd like! You'll need to pass in your [Hugging Face token](https://huggingface.co/settings/tokens).\n\n`Client.duplicate` is almost identical to `Client.connect`; the only difference is under the hood:\n\n```js\nimport { Client, handle_file } from \"@gradio/client\";\n\nconst response = await fetch(\n\t\"https://audio-samples.github.io/samples/mp3/blizzard_unconditional/sample-0.mp3\"\n);\nconst audio_file = await response.blob();\n\nconst app = await Client.duplicate(\"abidlabs/whisper\", { hf_token: \"hf_...\" });\nconst transcription = await app.predict(\"/predict\", [handle_file(audio_file)]);\n```\n\nIf you have previously duplicated a Space, re-running `Client.duplicate` will _not_ create a new Space. Instead, the client will attach to the previously-created Space. 
So it is safe to re-run the `Client.duplicate` method multiple times with the same Space.\n\n**Note:** if the original Space uses GPUs, your private Space will as well, and your Hugging Face account will get billed based on the price of the GPU. To minimize charges, your Space will automatically go to sleep after 5 minutes of inactivity. You can also set the hardware using the `hardware` and `timeout` properties of `duplicate`'s options object like this:\n\n```js\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.duplicate(\"abidlabs/whisper\", {\n\thf_token: \"hf_...\",\n\ttimeout: 60,\n\thardware: \"a10g-small\"\n});\n```\n\n", "heading1": "Duplicating a Space for private use", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "If your app is running somewhere else, just provide the full URL instead, including the \"http://\" or \"https://\". Here's an example of making predictions to a Gradio app that is running on a share URL:\n\n```js\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.connect(\"https://bec81a83-5b5c-471e.gradio.live\");\n```\n\n", "heading1": "Connecting a general Gradio app", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "If the Gradio application you are connecting to [requires a username and password](/guides/sharing-your-app#authentication), then provide them as an array in the `auth` property of the options object:\n\n```js\nimport { Client } from \"@gradio/client\";\n\nClient.connect(\n  space_name,\n  { auth: [username, password] }\n)\n```\n\n\n", "heading1": "Connecting to a Gradio app with auth", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "Once you have connected to a Gradio app, you can view the APIs that are available to you by calling the `Client`'s `view_api` method.\n\nFor the Whisper Space, we can do this:\n\n```js\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.connect(\"abidlabs/whisper\");\n\nconst app_info = await app.view_api();\n\nconsole.log(app_info);\n```\n\nAnd we will see the following:\n\n```json\n{\n\t\"named_endpoints\": {\n\t\t\"/predict\": {\n\t\t\t\"parameters\": [\n\t\t\t\t{\n\t\t\t\t\t\"label\": \"text\",\n\t\t\t\t\t\"component\": \"Textbox\",\n\t\t\t\t\t\"type\": \"string\"\n\t\t\t\t}\n\t\t\t],\n\t\t\t\"returns\": [\n\t\t\t\t{\n\t\t\t\t\t\"label\": \"output\",\n\t\t\t\t\t\"component\": \"Textbox\",\n\t\t\t\t\t\"type\": \"string\"\n\t\t\t\t}\n\t\t\t]\n\t\t}\n\t},\n\t\"unnamed_endpoints\": {}\n}\n```\n\nThis shows us that we have 1 API endpoint in this Space and how to use it to make a prediction: we should call the `.predict()` method (which we will explore below), providing a parameter `text` of type `string`.\n\nWe should also provide the `api_name='/predict'` argument to the `predict()` method. Although this isn't necessary if a Gradio app has only 1 named endpoint, it does allow us to call different endpoints in a single app if they are available. 
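For example, if a Space exposed a second named endpoint, you could target it in the same way (a sketch; `/translate_formal` is a hypothetical endpoint name used only for illustration):\n\n```js\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.connect(\"abidlabs/en2fr\");\n// pass the endpoint name as the first argument to target it explicitly\nconst result = await app.predict(\"/translate_formal\", [\"Hello\"]); // hypothetical endpoint\n```\n\n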
If an app has unnamed API endpoints, these can also be displayed by running `.view_api(all_endpoints=True)`.\n\n", "heading1": "Inspecting the API endpoints", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "As an alternative to running the `.view_api()` method, you can click on the \"Use via API\" link in the footer of the Gradio app, which shows us the same information, along with example usage. \n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api.png)\n\nThe View API page also includes an \"API Recorder\" that lets you interact with the Gradio UI normally and converts your interactions into the corresponding code to run with the JS Client.\n\n\n", "heading1": "The \"View API\" Page", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "The simplest way to make a prediction is to call the `.predict()` method with the appropriate arguments:\n\n```js\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.connect(\"abidlabs/en2fr\");\nconst result = await app.predict(\"/predict\", [\"Hello\"]);\n```\n\nIf there are multiple parameters, then you should pass them as an array to `.predict()`, like this:\n\n```js\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.connect(\"gradio/calculator\");\nconst result = await app.predict(\"/predict\", [4, \"add\", 5]);\n```\n\nFor certain inputs, such as images, you should pass in a `Buffer`, `Blob` or `File` depending on what is most convenient. In Node, this would be a `Buffer` or `Blob`; in a browser environment, this would be a `Blob` or `File`.\n\n```js\nimport { Client, handle_file } from \"@gradio/client\";\n\nconst response = await fetch(\n\t\"https://audio-samples.github.io/samples/mp3/blizzard_unconditional/sample-0.mp3\"\n);\nconst audio_file = await response.blob();\n\nconst app = await Client.connect(\"abidlabs/whisper\");\nconst result = await app.predict(\"/predict\", [handle_file(audio_file)]);\n```\n\n", "heading1": "Making a prediction", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "If the API you are working with can return results over time, or you wish to access information about the status of a job, you can use the iterable interface for more flexibility. 
This is especially useful for iterative endpoints or generator endpoints that will produce a series of values over time as discrete responses.\n\n```js\nimport { Client } from \"@gradio/client\";\n\nfunction log_result(payload) {\n\tconst {\n\t\tdata: [translation]\n\t} = payload;\n\n\tconsole.log(`The translated result is: ${translation}`);\n}\n\nconst app = await Client.connect(\"abidlabs/en2fr\");\nconst job = app.submit(\"/predict\", [\"Hello\"]);\n\nfor await (const message of job) {\n\tlog_result(message);\n}\n```\n\n", "heading1": "Using events", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "The event interface also allows you to get the status of the running job by instantiating the client with the `events` option, passing `status` and `data` as an array:\n\n\n```ts\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.connect(\"abidlabs/en2fr\", {\n\tevents: [\"status\", \"data\"]\n});\n```\n\nThis ensures that status messages are also reported to the client.\n\n`status`es are returned as an object with the following attributes: `status` (a human-readable status of the current job, `\"pending\" | \"generating\" | \"complete\" | \"error\"`), `code` (the detailed gradio code for the job), `position` (the current position of this job in the queue), `queue_size` (the total queue size), `eta` (estimated time this job will complete), `success` (a boolean representing whether the job completed successfully), and `time` (a `Date` object detailing the time that the status was generated).\n\n```js\nimport { Client } from \"@gradio/client\";\n\nfunction log_status(status) {\n\tconsole.log(\n\t\t`The current status for this job is: ${JSON.stringify(status, null, 2)}.`\n\t);\n}\n\nconst app = await Client.connect(\"abidlabs/en2fr\", {\n\tevents: [\"status\", \"data\"]\n});\nconst job = app.submit(\"/predict\", [\"Hello\"]);\n\nfor await (const message of job) {\n\tif (message.type === \"status\") {\n\t\tlog_status(message);\n\t}\n}\n```\n\n", "heading1": "Status", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "The job instance also has a `.cancel()` method that cancels jobs that have been queued but not started. For example, if you run:\n\n```js\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.connect(\"abidlabs/en2fr\");\nconst job_one = app.submit(\"/predict\", [\"Hello\"]);\nconst job_two = app.submit(\"/predict\", [\"Friends\"]);\n\njob_one.cancel();\njob_two.cancel();\n```\n\nIf the first job has started processing, then it will not be canceled but the client will no longer listen for updates (throwing away the job). If the second job has not yet started, it will be successfully canceled and removed from the queue.\n\n", "heading1": "Cancelling Jobs", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "Some Gradio API endpoints do not return a single value, rather they return a series of values. 
You can listen for these values in real time using the iterable interface:\n\n```js\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.connect(\"gradio/count_generator\");\nconst job = app.submit(0, [9]);\n\nfor await (const message of job) {\n\tconsole.log(message.data);\n}\n```\n\nThis will log out the values as they are generated by the endpoint.\n\nYou can also cancel jobs that have iterative outputs, in which case the job will finish immediately.\n\n```js\nimport { Client } from \"@gradio/client\";\n\nconst app = await Client.connect(\"gradio/count_generator\");\nconst job = app.submit(0, [9]);\n\nfor await (const message of job) {\n\tconsole.log(message.data);\n}\n\nsetTimeout(() => {\n\tjob.cancel();\n}, 3000);\n```\n", "heading1": "Generator Endpoints", "source_page_url": "https://gradio.app/guides/getting-started-with-the-js-client", "source_page_title": "Gradio Clients And Lite - Getting Started With The Js Client Guide"}, {"text": "**[OpenAPI](https://www.openapis.org/)** is a widely adopted standard for describing RESTful APIs in a machine-readable format, typically as a JSON file.\n\nYou can create a Gradio UI from an OpenAPI Spec **in 1 line of Python**, instantly generating an interactive web interface for any API, making it accessible for demos, testing, or sharing with non-developers, without writing custom frontend code.\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/from-openapi-spec", "source_page_title": "Other Tutorials - From Openapi Spec Guide"}, {"text": "Gradio now provides a convenient function, `gr.load_openapi`, that can automatically generate a Gradio app from an OpenAPI v3 specification. This function parses the spec, creates UI components for each endpoint and parameter, and lets you interact with the API directly from your browser.\n\nHere's a minimal example:\n\n```python\nimport gradio as gr\n\ndemo = gr.load_openapi(\n    openapi_spec=\"https://petstore3.swagger.io/api/v3/openapi.json\",\n    base_url=\"https://petstore3.swagger.io/api/v3\",\n    paths=[\"/pet.*\"],\n    methods=[\"get\", \"post\"],\n)\n\ndemo.launch()\n```\n\n**Parameters:**\n- **openapi_spec**: URL, file path, or Python dictionary containing the OpenAPI v3 spec (JSON format only).\n- **base_url**: The base URL for the API endpoints (e.g., `https://api.example.com/v1`).\n- **paths** (optional): List of endpoint path patterns (supports regex) to include. If not set, all paths are included.\n- **methods** (optional): List of HTTP methods (e.g., `[\"get\", \"post\"]`) to include. If not set, all methods are included.\n\nThe generated app will display a sidebar with available endpoints and create interactive forms for each operation, letting you make API calls and view responses in real time.\n\n", "heading1": "How it works", "source_page_url": "https://gradio.app/guides/from-openapi-spec", "source_page_title": "Other Tutorials - From Openapi Spec Guide"}, {"text": "Once your Gradio app is running, you can share the URL with others so they can try out the API through a friendly web interface\u2014no code required. For even more power, you can launch the app as an MCP (Model Context Protocol) server using [Gradio's MCP integration](https://www.gradio.app/guides/building-mcp-server-with-gradio), enabling programmatic access and orchestration of your API via the MCP ecosystem. 
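As a sketch (reusing the Petstore spec from the example above, and assuming a Gradio version with MCP support), launching the generated app as an MCP server is a one-argument change:\n\n```python\nimport gradio as gr\n\ndemo = gr.load_openapi(\n    openapi_spec=\"https://petstore3.swagger.io/api/v3/openapi.json\",\n    base_url=\"https://petstore3.swagger.io/api/v3\",\n)\n\n# expose the app to MCP clients in addition to the regular web UI\ndemo.launch(mcp_server=True)\n```\n\n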
This makes it easy to build, share, and automate API workflows with minimal effort.\n\n", "heading1": "Next steps", "source_page_url": "https://gradio.app/guides/from-openapi-spec", "source_page_title": "Other Tutorials - From Openapi Spec Guide"}, {"text": "In this guide we will demonstrate some of the ways you can use Gradio with Comet. We will cover the basics of using Comet with Gradio and show you some of the ways that you can leverage Gradio's advanced features such as [Embedding with iFrames](https://www.gradio.app/guides/sharing-your-app/embedding-with-iframes) and [State](https://www.gradio.app/docs/state) to build some amazing model evaluation workflows.\n\nHere is a list of the topics covered in this guide.\n\n1. Logging Gradio UI's to your Comet Experiments\n2. Embedding Gradio Applications directly into your Comet Projects\n3. Embedding Hugging Face Spaces directly into your Comet Projects\n4. Logging Model Inferences from your Gradio Application to Comet\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": "[Comet](https://www.comet.com?utm_source=gradio&utm_medium=referral&utm_campaign=gradio-integration&utm_content=gradio-docs) is an MLOps Platform that is designed to help Data Scientists and Teams build better models faster! Comet provides tooling to Track, Explain, Manage, and Monitor your models in a single place! It works with Jupyter Notebooks and Scripts and most importantly it's 100% free!\n\n", "heading1": "What is Comet?", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": "First, install the dependencies needed to run these examples:\n\n```shell\npip install comet_ml torch torchvision transformers gradio shap requests Pillow\n```\n\nNext, you will need to [sign up for a Comet Account](https://www.comet.com/signup?utm_source=gradio&utm_medium=referral&utm_campaign=gradio-integration&utm_content=gradio-docs). Once you have your account set up, [grab your API Key](https://www.comet.com/docs/v2/guides/getting-started/quickstart/get-an-api-key?utm_source=gradio&utm_medium=referral&utm_campaign=gradio-integration&utm_content=gradio-docs) and configure your Comet credentials.\n\nIf you're running these examples as a script, you can either export your credentials as environment variables:\n\n```shell\nexport COMET_API_KEY=\"<Your API Key>\"\nexport COMET_WORKSPACE=\"<Your Workspace Name>\"\nexport COMET_PROJECT_NAME=\"<Your Project Name>\"\n```\n\nor set them in a `.comet.config` file in your working directory. Your file should be formatted in the following way:\n\n```shell\n[comet]\napi_key=<Your API Key>\nworkspace=<Your Workspace Name>\nproject_name=<Your Project Name>\n```\n\nIf you are using the provided Colab Notebooks to run these examples, please run the cell with the following snippet before starting the Gradio UI. 
Running this cell allows you to interactively add your API key to the notebook.\n\n```python\nimport comet_ml\ncomet_ml.init()\n```\n\n", "heading1": "Setup", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/comet-ml/comet-examples/blob/master/integrations/model-evaluation/gradio/notebooks/Gradio_and_Comet.ipynb)\n\nIn this example, we will go over how to log your Gradio Applications to Comet and interact with them using the Gradio Custom Panel.\n\nLet's start by building a simple Image Classification example using `resnet18`.\n\n```python\nimport comet_ml\n\nimport gradio as gr\nimport requests\nimport torch\nfrom PIL import Image\nfrom torchvision import transforms\n\ntorch.hub.download_url_to_file(\"https://github.com/pytorch/hub/raw/master/images/dog.jpg\", \"dog.jpg\")\n\nif torch.cuda.is_available():\n    device = \"cuda\"\nelse:\n    device = \"cpu\"\n\nmodel = torch.hub.load(\"pytorch/vision:v0.6.0\", \"resnet18\", pretrained=True).eval()\nmodel = model.to(device)\n\n# Download human-readable labels for ImageNet.\nresponse = requests.get(\"https://git.io/JJkYN\")\nlabels = response.text.split(\"\\n\")\n\n\ndef predict(inp):\n    inp = Image.fromarray(inp.astype(\"uint8\"), \"RGB\")\n    inp = transforms.ToTensor()(inp).unsqueeze(0)\n    with torch.no_grad():\n        prediction = torch.nn.functional.softmax(model(inp.to(device))[0], dim=0)\n    return {labels[i]: float(prediction[i]) for i in range(1000)}\n\n\ninputs = gr.Image()\noutputs = gr.Label(num_top_classes=3)\n\nio = gr.Interface(\n    fn=predict, inputs=inputs, outputs=outputs, examples=[\"dog.jpg\"]\n)\nio.launch(inline=False, share=True)\n\nexperiment = comet_ml.Experiment()\nexperiment.add_tag(\"image-classifier\")\n\nio.integrate(comet_ml=experiment)\n```\n\nThe last line in this snippet will log the URL of the Gradio Application to your Comet Experiment. You can find the URL in the Text Tab of your Experiment.\n\nAdd the Gradio Panel to your Experiment to interact with your application.\n\n", "heading1": "1. Logging Gradio UI's to your Comet Experiments", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": "\n\nIf you are permanently hosting your Gradio application, you can embed the UI using the Gradio Panel Extended custom Panel.\n\nGo to your Comet Project page, and head over to the Panels tab. Click the `+ Add` button to bring up the Panels search page.\n\n\"adding-panels\"\n\nNext, search for Gradio Panel Extended in the Public Panels section and click `Add`.\n\n\"gradio-panel-extended\"\n\nOnce you have added your Panel, click `Edit` to access the Panel Options page and paste in the URL of your Gradio application.\n\n![Edit-Gradio-Panel-Options](https://user-images.githubusercontent.com/7529846/214573001-23814b5a-ca65-4ace-a8a5-b27cdda70f7a.gif)\n\n\"Edit-Gradio-Panel-URL\"\n\n", "heading1": "2. 
Embedding Gradio Applications directly into your Comet Projects", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": "\n\nYou can also embed Gradio Applications that are hosted on Hugging Face Spaces into your Comet Projects using the Hugging Face Spaces Panel.\n\nGo to your Comet Project page, and head over to the Panels tab. Click the `+ Add` button to bring up the Panels search page. Next, search for the Hugging Face Spaces Panel in the Public Panels section and click `Add`.\n\n\"huggingface-spaces-panel\"\n\nOnce you have added your Panel, click `Edit` to access the Panel Options page and paste in the path of your Hugging Face Space, e.g. `pytorch/ResNet`.\n\n\"Edit-HF-Space\"\n\n", "heading1": "3. Embedding Hugging Face Spaces directly into your Comet Projects", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": "\n\n[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/comet-ml/comet-examples/blob/master/integrations/model-evaluation/gradio/notebooks/Logging_Model_Inferences_with_Comet_and_Gradio.ipynb)\n\nIn the previous examples, we demonstrated the various ways in which you can interact with a Gradio application through the Comet UI. Additionally, you can also log model inferences, such as SHAP plots, from your Gradio application to Comet.\n\nIn the following snippet, we're going to log inferences from a Text Generation model. We can persist an Experiment across multiple inference calls using Gradio's [State](https://www.gradio.app/docs/state) object. This will allow you to log multiple inferences from a model to a single Experiment.\n\n```python\nimport comet_ml\nimport gradio as gr\nimport shap\nimport torch\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\n\nif torch.cuda.is_available():\n    device = \"cuda\"\nelse:\n    device = \"cpu\"\n\nMODEL_NAME = \"gpt2\"\n\nmodel = AutoModelForCausalLM.from_pretrained(MODEL_NAME)\n\n# set model decoder to true\nmodel.config.is_decoder = True\n# set text-generation params under task_specific_params\nmodel.config.task_specific_params[\"text-generation\"] = {\n    \"do_sample\": True,\n    \"max_length\": 50,\n    \"temperature\": 0.7,\n    \"top_k\": 50,\n    \"no_repeat_ngram_size\": 2,\n}\nmodel = model.to(device)\n\ntokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)\nexplainer = shap.Explainer(model, tokenizer)\n\n\ndef start_experiment():\n    \"\"\"Returns an APIExperiment object that is thread safe\n    and can be used to log inferences to a single Experiment\n    \"\"\"\n    try:\n        api = comet_ml.API()\n        workspace = api.get_default_", "heading1": "4. 
Logging Model Inferences to Comet", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": " \"\"\"Returns an APIExperiment object that is thread safe\n    and can be used to log inferences to a single Experiment\n    \"\"\"\n    try:\n        api = comet_ml.API()\n        workspace = api.get_default_workspace()\n        project_name = comet_ml.config.get_config()[\"comet.project_name\"]\n\n        experiment = comet_ml.APIExperiment(\n            workspace=workspace, project_name=project_name\n        )\n        experiment.log_other(\"Created from\", \"gradio-inference\")\n\n        message = f\"Started Experiment: [{experiment.name}]({experiment.url})\"\n\n        return (experiment, message)\n\n    except Exception:\n        return None, None\n\n\ndef predict(text, state, message):\n    experiment = state\n\n    shap_values = explainer([text])\n    plot = shap.plots.text(shap_values, display=False)\n\n    if experiment is not None:\n        experiment.log_other(\"message\", message)\n        experiment.log_html(plot)\n\n    return plot\n\n\nwith gr.Blocks() as demo:\n    start_experiment_btn = gr.Button(\"Start New Experiment\")\n    experiment_status = gr.Markdown()\n\n    # Log a message to the Experiment to provide more context\n    experiment_message = gr.Textbox(label=\"Experiment Message\")\n    experiment = gr.State()\n\n    input_text = gr.Textbox(label=\"Input Text\", lines=5, interactive=True)\n    submit_btn = gr.Button(\"Submit\")\n\n    output = gr.HTML(interactive=True)\n\n    start_experiment_btn.click(\n        start_experiment, outputs=[experiment, experiment_status]\n    )\n    submit_btn.click(\n        predict, inputs=[input_text, experiment, experiment_message], outputs=[output]\n    )\n```\n\nInferences from this snippet will be saved in the HTML tab of your experiment.\n\n", "heading1": "4. Logging Model Inferences to Comet", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": "We hope you found this guide useful and that it provides some inspiration to help you build awesome model evaluation workflows with Comet and Gradio.\n\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": "- Create an account on Hugging Face [here](https://huggingface.co/join).\n- Add a Gradio demo under your username; see this [course](https://huggingface.co/course/chapter9/4?fw=pt) for setting up a Gradio demo on Hugging Face.\n- Request to join the Comet organization [here](https://huggingface.co/Comet).\n\n", "heading1": "How to contribute Gradio demos on HF spaces on the Comet organization", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": "- [Comet Documentation](https://www.comet.com/docs/v2/?utm_source=gradio&utm_medium=referral&utm_campaign=gradio-integration&utm_content=gradio-docs)\n", "heading1": "Additional Resources", "source_page_url": "https://gradio.app/guides/Gradio-and-Comet", "source_page_title": "Other Tutorials - Gradio And Comet Guide"}, {"text": "Gradio is a Python library that allows you to quickly create customizable web apps for your machine learning models and data processing pipelines. 
Gradio apps can be deployed on [Hugging Face Spaces](https://hf.space) for free.\n\nIn some cases though, you might want to deploy a Gradio app on your own web server. You might already be using [Nginx](https://www.nginx.com/), a highly performant web server, to serve your website (say `https://www.example.com`), and you want to attach Gradio to a specific subpath on your website (e.g. `https://www.example.com/gradio-demo`).\n\nIn this Guide, we will walk you through the process of running a Gradio app behind Nginx on your own web server to achieve this.\n\n**Prerequisites**\n\n1. A Linux web server with [Nginx installed](https://www.nginx.com/blog/setting-up-nginx/) and [Gradio installed](/quickstart)\n2. A working Gradio app saved as a Python file on your web server\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/running-gradio-on-your-web-server-with-nginx", "source_page_title": "Other Tutorials - Running Gradio On Your Web Server With Nginx Guide"}, {"text": "1. Start by editing the Nginx configuration file on your web server. By default, this is located at: `/etc/nginx/nginx.conf`\n\nIn the `http` block, add the following line to include server block configurations from a separate file:\n\n```bash\ninclude /etc/nginx/sites-enabled/*;\n```\n\n2. Create a new file in the `/etc/nginx/sites-available` directory (create the directory if it does not already exist), using a filename that represents your app, for example: `sudo nano /etc/nginx/sites-available/my_gradio_app`\n\n3. Paste the following into your file editor:\n\n```bash\nserver {\n    listen 80;\n    server_name example.com www.example.com;  # Change this to your domain name\n\n    location /gradio-demo/ {  # Change this if you'd like to serve your Gradio app on a different path\n        proxy_pass http://127.0.0.1:7860/;  # Change this if your Gradio app will be running on a different port\n        proxy_buffering off;\n        proxy_redirect off;\n        proxy_http_version 1.1;\n        proxy_set_header Upgrade $http_upgrade;\n        proxy_set_header Connection \"upgrade\";\n        proxy_set_header Host $host;\n        proxy_set_header X-Forwarded-Host $host;\n        proxy_set_header X-Forwarded-Proto $scheme;\n    }\n}\n```\n\nTo activate this configuration, also create a symbolic link to it from the `sites-enabled` directory that the `include` directive above reads from, e.g. `sudo ln -s /etc/nginx/sites-available/my_gradio_app /etc/nginx/sites-enabled/`.\n\nTip: Setting the `X-Forwarded-Host` and `X-Forwarded-Proto` headers is important as Gradio uses these, in conjunction with the `root_path` parameter discussed below, to construct the public URL that your app is being served on. Gradio uses the public URL to fetch various static assets. If these headers are not set, your Gradio app may load in a broken state.\n\n*Note:* The `$host` variable does not include the host port. If you are serving your Gradio application on a raw IP address and port, you should use the `$http_host` variable instead, in these lines:\n\n```bash\n    proxy_set_header Host $host;\n    proxy_set_header X-Forwarded-Host $host;\n```\n\n", "heading1": "Editing your Nginx configuration file", "source_page_url": "https://gradio.app/guides/running-gradio-on-your-web-server-with-nginx", "source_page_title": "Other Tutorials - Running Gradio On Your Web Server With Nginx Guide"}, {"text": "1. Before you launch your Gradio app, you'll need to set the `root_path` to be the same as the subpath that you specified in your nginx configuration. 
This is necessary for Gradio to run on any subpath besides the root of the domain.\n\n*Note:* Instead of a subpath, you can also provide a complete URL for `root_path` (beginning with `http` or `https`) in which case the `root_path` is treated as an absolute URL instead of a URL suffix (but in this case, you'll need to update the `root_path` if the domain changes).\n\nHere's a simple example of a Gradio app with a custom `root_path` corresponding to the Nginx configuration above.\n\n```python\nimport gradio as gr\nimport time\n\ndef test(x):\n    time.sleep(4)\n    return x\n\ngr.Interface(test, \"textbox\", \"textbox\").queue().launch(root_path=\"/gradio-demo\")\n```\n\n2. Start a `tmux` session by typing `tmux` and pressing enter (optional)\n\nIt's recommended that you run your Gradio app in a `tmux` session so that you can keep it running in the background easily.\n\n3. Then, start your Gradio app. Simply type in `python` followed by the name of your Gradio Python file. By default, your app will run on `localhost:7860`, but if it starts on a different port, you will need to update the nginx configuration file above.\n\n", "heading1": "Run your Gradio app on your web server", "source_page_url": "https://gradio.app/guides/running-gradio-on-your-web-server-with-nginx", "source_page_title": "Other Tutorials - Running Gradio On Your Web Server With Nginx Guide"}, {"text": "1. If you are in a tmux session, exit by typing CTRL+B (or CMD+B), followed by the \"D\" key.\n\n2. Finally, restart nginx by running `sudo systemctl restart nginx`.\n\nAnd that's it! If you visit `https://example.com/gradio-demo` on your browser, you should see your Gradio app running there.\n\n", "heading1": "Restart Nginx", "source_page_url": "https://gradio.app/guides/running-gradio-on-your-web-server-with-nginx", "source_page_title": "Other Tutorials - Running Gradio On Your Web Server With Nginx Guide"}, {"text": "When you demo a machine learning model, you might want to collect data from users who try the model, particularly data points in which the model is not behaving as expected. Capturing these \"hard\" data points is valuable because it allows you to improve your machine learning model and make it more reliable and robust.\n\nGradio simplifies the collection of this data by including a **Flag** button with every `Interface`. This allows a user or tester to easily send data back to the machine where the demo is running. In this Guide, we discuss more about how to use the flagging feature, both with `gradio.Interface` as well as with `gradio.Blocks`.\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/using-flagging", "source_page_title": "Other Tutorials - Using Flagging Guide"}, {"text": "Flagging with Gradio's `Interface` is especially easy. By default, underneath the output components, there is a button marked **Flag**. When a user testing your model sees input with interesting output, they can click the flag button to send the input and output data back to the machine where the demo is running. The sample is saved to a CSV log file (by default). If the demo involves images, audio, video, or other types of files, these are saved separately in a parallel directory and the paths to these files are saved in the CSV file.\n\nThere are [four parameters](https://gradio.app/docs/interfaceinitialization) in `gradio.Interface` that control how flagging works. 
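As a quick orientation before going through them one by one, here is a minimal sketch that sets all four at once (the `echo` function and the option labels are illustrative placeholders, not from the guide):\n\n```python\nimport gradio as gr\n\ndef echo(x):\n    return x\n\ndemo = gr.Interface(\n    fn=echo,\n    inputs=\"text\",\n    outputs=\"text\",\n    flagging_mode=\"manual\",                   # \"manual\" (default), \"auto\", or \"never\"\n    flagging_options=[\"Incorrect\", \"Other\"],  # extra choices shown when flagging\n    flagging_dir=\"flagged\",                   # directory where flagged data is stored\n    flagging_callback=gr.CSVLogger(),         # code to run when data is flagged\n)\n\ndemo.launch()\n```\n\n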
We will go over them in greater detail below.\n\n- `flagging_mode`: this parameter can be set to either `\"manual\"` (default), `\"auto\"`, or `\"never\"`.\n  - `manual`: users will see a button to flag, and samples are only flagged when the button is clicked.\n  - `auto`: users will not see a button to flag, but every sample will be flagged automatically.\n  - `never`: users will not see a button to flag, and no sample will be flagged.\n- `flagging_options`: this parameter can be either `None` (default) or a list of strings.\n  - If `None`, then the user simply clicks on the **Flag** button and no additional options are shown.\n  - If a list of strings is provided, then the user sees several buttons, corresponding to each of the strings that are provided. For example, if the value of this parameter is `[\"Incorrect\", \"Ambiguous\"]`, then buttons labeled **Flag as Incorrect** and **Flag as Ambiguous** appear. This only applies if `flagging_mode` is `\"manual\"`.\n  - The chosen option is then logged along with the input and output.\n- `flagging_dir`: this parameter takes a string.\n  - It represents the name of the directory where flagged data is stored.\n- `flagging_callback`: this parameter takes an instance of a subclass of the `FlaggingCallback` class\n  - Using this parameter allows you to write custom code that gets run whe", "heading1": "The **Flag** button in `gradio.Interface`", "source_page_url": "https://gradio.app/guides/using-flagging", "source_page_title": "Other Tutorials - Using Flagging Guide"}, {"text": "flagged data is stored.\n- `flagging_callback`: this parameter takes an instance of a subclass of the `FlaggingCallback` class\n  - Using this parameter allows you to write custom code that gets run when the flag button is clicked\n  - By default, this is set to an instance of `gr.CSVLogger`\n\n", "heading1": "The **Flag** button in `gradio.Interface`", "source_page_url": "https://gradio.app/guides/using-flagging", "source_page_title": "Other Tutorials - Using Flagging Guide"}, {"text": "Within the directory provided by the `flagging_dir` argument, a CSV file will log the flagged data.\n\nHere's an example: The code below creates the calculator interface embedded below it:\n\n```python\nimport gradio as gr\n\n\ndef calculator(num1, operation, num2):\n    if operation == \"add\":\n        return num1 + num2\n    elif operation == \"subtract\":\n        return num1 - num2\n    elif operation == \"multiply\":\n        return num1 * num2\n    elif operation == \"divide\":\n        return num1 / num2\n\n\niface = gr.Interface(\n    calculator,\n    [\"number\", gr.Radio([\"add\", \"subtract\", \"multiply\", \"divide\"]), \"number\"],\n    \"number\",\n    flagging_mode=\"manual\"\n)\n\niface.launch()\n```\n\nWhen you click the flag button above, the directory where the interface was launched will include a new flagged subfolder, with a csv file inside it. This csv file includes all the data that was flagged.\n\n```directory\n+-- flagged/\n|   +-- logs.csv\n```\n\n_flagged/logs.csv_\n\n```csv\nnum1,operation,num2,Output,timestamp\n5,add,7,12,2022-01-31 11:40:51.093412\n6,subtract,1.5,4.5,2022-01-31 03:25:32.023542\n```\n\nIf the interface involves file data, such as for Image and Audio components, folders will be created to store those flagged data as well. For example, an `image` input to `image` output interface will create the following structure:\n\n```directory\n+-- flagged/\n|   +-- logs.csv\n|   +-- image/\n|   |   +-- 0.png\n|   |   +-- 1.png\n|   +-- Output/\n|   |   +-- 0.png\n|   |   +-- 1.png\n```\n\n_flagged/logs.csv_\n\n```csv\nim,Output,timestamp\nim/0.png,Output/0.png,2022-02-04 19:49:58.026963\nim/1.png,Output/1.png,2022-02-02 10:40:51.093412\n```\n\n
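Because the log is a plain CSV file, you can load it back into Python at any time to analyze what users flagged. A minimal sketch, assuming the default `flagged` directory used above:\n\n```python\nimport pandas as pd\n\n# Read the flag log written by the demo above\nlogs = pd.read_csv(\"flagged/logs.csv\")\nprint(logs.head())\n```\n\n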
If you wish for the user to provide a reason for flagging, you can pass a list of strings to the `flagging_options` argument of Interface. Users will have to select one of these choices when flagging, and the option will be saved as an additional column to the CSV.\n\nIf we go back to the calculator example, the fo", "heading1": "What happens to flagged data?", "source_page_url": "https://gradio.app/guides/using-flagging", "source_page_title": "Other Tutorials - Using Flagging Guide"}, {"text": "` argument of Interface. Users will have to select one of these choices when flagging, and the option will be saved as an additional column to the CSV.\n\nIf we go back to the calculator example, the following code will create the interface embedded below it.\n\n```python\niface = gr.Interface(\n    calculator,\n    [\"number\", gr.Radio([\"add\", \"subtract\", \"multiply\", \"divide\"]), \"number\"],\n    \"number\",\n    flagging_mode=\"manual\",\n    flagging_options=[\"wrong sign\", \"off by one\", \"other\"]\n)\n\niface.launch()\n```\n\nWhen users click the flag button, the csv file will now include a column indicating the selected option.\n\n_flagged/logs.csv_\n\n```csv\nnum1,operation,num2,Output,flag,timestamp\n5,add,7,-12,wrong sign,2022-02-04 11:40:51.093412\n6,subtract,1.5,3.5,off by one,2022-02-04 11:42:32.062512\n```\n\n", "heading1": "What happens to flagged data?", "source_page_url": "https://gradio.app/guides/using-flagging", "source_page_title": "Other Tutorials - Using Flagging Guide"}, {"text": "What about if you are using `gradio.Blocks`? On one hand, you have even more flexibility with Blocks -- you can write whatever Python code you want to run when a button is clicked, and assign that using the built-in events in Blocks.\n\nAt the same time, you might want to use an existing `FlaggingCallback` to avoid writing extra code. This requires two steps:\n\n1. You have to run your callback's `.setup()` somewhere in the code prior to the first time you flag data\n2. When the flagging button is clicked, you trigger the callback's `.flag()` method, making sure to collect the arguments correctly and disabling the typical preprocessing.\n\nHere is an example with an image sepia filter Blocks demo that lets you flag data using the default `CSVLogger`:\n\n$code_blocks_flag\n$demo_blocks_flag\n\n
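Since the full sepia demo is templated above, here is a self-contained sketch of the same two steps, using a simpler text-reversing function (the component names are illustrative):\n\n```python\nimport gradio as gr\n\ncallback = gr.CSVLogger()\n\nwith gr.Blocks() as demo:\n    inp = gr.Textbox(label=\"Input\")\n    out = gr.Textbox(label=\"Output\")\n    run_btn = gr.Button(\"Run\")\n    flag_btn = gr.Button(\"Flag\")\n\n    run_btn.click(lambda x: x[::-1], inp, out)\n\n    # Step 1: set up the callback with the components to log and a directory\n    callback.setup([inp, out], \"flagged_data_points\")\n\n    # Step 2: log the current values when the flag button is clicked,\n    # disabling the typical preprocessing with preprocess=False\n    flag_btn.click(\n        lambda *args: callback.flag(list(args)),\n        [inp, out],\n        None,\n        preprocess=False,\n    )\n\ndemo.launch()\n```\n\n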
", "heading1": "Flagging with Blocks", "source_page_url": "https://gradio.app/guides/using-flagging", "source_page_title": "Other Tutorials - Using Flagging Guide"}, {"text": "Important Note: please make sure your users understand when the data they submit is being saved, and what you plan on doing with it. This is especially important when you use `flagging_mode=\"auto\"` (when all of the data submitted through the demo is being flagged).\n\nThat's all! Happy building :)\n", "heading1": "Privacy", "source_page_url": "https://gradio.app/guides/using-flagging", "source_page_title": "Other Tutorials - Using Flagging Guide"}, {"text": "Gradio features [blocks](https://www.gradio.app/docs/blocks) to easily lay out applications. To use this feature, you need to stack or nest layout components and create a hierarchy with them. This isn't difficult to implement and maintain for small projects, but as a project gets more complex, this component hierarchy becomes difficult to maintain and reuse.\n\nIn this guide, we are going to explore how we can wrap the layout classes to create more maintainable and easy-to-read applications without sacrificing flexibility.\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/wrapping-layouts", "source_page_title": "Other Tutorials - Wrapping Layouts Guide"}, {"text": "We are going to follow the implementation from this Hugging Face Space example:\n\n\n\n\n", "heading1": "Example", "source_page_url": "https://gradio.app/guides/wrapping-layouts", "source_page_title": "Other Tutorials - Wrapping Layouts Guide"}, {"text": "The wrapping utility has two important classes. The first one is the ```LayoutBase``` class and the other one is the ```Application``` class.\n\nWe are going to look at the ```render``` and ```attach_event``` functions of them for brevity. You can look at the full implementation from [the example code](https://huggingface.co/spaces/WoWoWoWololo/wrapping-layouts/blob/main/app.py).\n\nSo let's start with the ```LayoutBase``` class.\n\nLayoutBase Class\n\n1. Render Function\n\n    Let's look at the ```render``` function in the ```LayoutBase``` class:\n\n```python\n# other LayoutBase implementations\n\ndef render(self) -> None:\n    with self.main_layout:\n        for renderable in self.renderables:\n            renderable.render()\n\n    self.main_layout.render()\n```\n\nThis is a little confusing at first, but if you consider the default implementation you can understand it easily.\n\nLet's look at an example. In the default implementation, this is what we're doing:\n\n```python\nwith Row():\n    left_textbox = Textbox(value=\"left_textbox\")\n    right_textbox = Textbox(value=\"right_textbox\")\n```\n\nNow, pay attention to the Textbox variables. These variables' ```render``` parameter is true by default. So as we use the ```with``` syntax and create these variables, they are calling the ```render``` function under the ```with``` syntax.\n\nWe know the render function is called in the constructor with the implementation from the ```gradio.blocks.Block``` class:\n\n```python\nclass Block:\n    # constructor parameters are omitted for brevity\n    def __init__(self, ...):\n        # other assign functions\n\n        if render:\n            self.render()\n```\n\nSo our implementation looks like this:\n\n```python\n# self.main_layout -> Row()\nwith self.main_layout:\n    left_textbox.render()\n    right_textbox.render()\n```\n\nWhat this means is that by calling the components' render functions under the ```with``` syntax, we are actually simulating the default implementation.\n\n
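To make the idea concrete, here is a minimal, runnable sketch of such a wrapper (the class and method names are illustrative, not the guide's full implementation):\n\n```python\nimport gradio as gr\n\nclass RowLayout:\n    def __init__(self):\n        # Created with render=False so nothing is drawn until render() is called\n        self.main_layout = gr.Row(render=False)\n        self.renderables = []\n\n    def add(self, component):\n        self.renderables.append(component)\n        return self\n\n    def render(self):\n        with self.main_layout:\n            for renderable in self.renderables:\n                renderable.render()\n        self.main_layout.render()\n\nwith gr.Blocks() as demo:\n    row = RowLayout()\n    row.add(gr.Textbox(value=\"left_textbox\", render=False))\n    row.add(gr.Textbox(value=\"right_textbox\", render=False))\n    row.render()\n```\n\n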
So now let's consider two nested ```with```s to see ho", "heading1": "Implementation", "source_page_url": "https://gradio.app/guides/wrapping-layouts", "source_page_title": "Other Tutorials - Wrapping Layouts Guide"}, {"text": "at this means is by calling the components' render functions under the ```with``` syntax, we are actually simulating the default implementation.\n\nSo now let's consider two nested ```with```s to see how the outer one works. For this, let's expand our example with the ```Tab``` component:\n\n```python\nwith Tab():\n    with Row():\n        first_textbox = Textbox(value=\"first_textbox\")\n        second_textbox = Textbox(value=\"second_textbox\")\n```\n\nPay attention to the Row and Tab components this time. We have created the Textbox variables above and added them to Row with the ```with``` syntax. Now we need to add the Row component to the Tab component. You can see that the Row component is created with default parameters, so its ```render``` parameter is true; that's why its render function is going to be executed under the Tab component's ```with``` syntax.\n\nTo mimic this implementation, we need to call the ```render``` function of the ```main_layout``` variable after the ```with``` syntax of the ```main_layout``` variable.\n\nSo the implementation looks like this:\n\n```python\nwith tab_main_layout:\n    with row_main_layout:\n        first_textbox.render()\n        second_textbox.render()\n\n    row_main_layout.render()\n\ntab_main_layout.render()\n```\n\nThe default implementation and our implementation are the same, but we are using the render function ourselves, so it requires a little extra work.\n\nNow, let's take a look at the ```attach_event``` function.\n\n2. Attach Event Function\n\n    The function is left as not implemented because it is specific to the class, so each class has to implement its own `attach_event` function.\n\n```python\n# other LayoutBase implementations\n\ndef attach_event(self, block_dict: Dict[str, Block]) -> None:\n    raise NotImplementedError\n```\n\nCheck out the ```block_dict``` variable in the ```Application``` class's ```attach_event``` function.\n\nApplication Class\n\n1. Render Function\n\n```python\n# other Application implementations\n\ndef _render(self):\n    ", "heading1": "Implementation", "source_page_url": "https://gradio.app/guides/wrapping-layouts", "source_page_title": "Other Tutorials - Wrapping Layouts Guide"}, {"text": "ct``` variable in the ```Application``` class's ```attach_event``` function.\n\nApplication Class\n\n1. Render Function\n\n```python\n# other Application implementations\n\ndef _render(self):\n    with self.app:\n        for child in self.children:\n            child.render()\n\n    self.app.render()\n```\n\nFrom the explanation of the ```LayoutBase``` class's ```render``` function, we can understand the ```child.render``` part.\n\nSo let's look at the bottom part: why are we calling the ```app``` variable's ```render``` function? It's important to call this function because if we look at the implementation in the ```gradio.blocks.Blocks``` class, we can see that it is adding the components and event functions into the root component. To put it another way, it is creating and structuring the gradio application.\n\n2. Attach Event Function\n\n    Let's see how we can attach events to components:\n\n```python\n# other Application implementations\n\ndef _attach_event(self):\n    block_dict: Dict[str, Block] = {}\n\n    for child in self.children:\n        block_dict.update(child.global_children_dict)\n\n    with self.app:\n        for child in self.children:\n            try:\n                child.attach_event(block_dict=block_dict)\n            except NotImplementedError:\n                print(f\"{child.name}'s attach_event is not implemented\")\n```\n\nYou can see why the ```global_children_dict``` is used in the ```LayoutBase``` class from the example code. With this, all the components in the application are gathered into one dictionary, so the component can access all the components with their names.\n\nThe ```with``` syntax is used here again to attach events to components. If we look at the ```__exit__``` function in the ```gradio.blocks.Blocks``` class, we can see that it is calling the ```attach_load_events``` function which is used for setting event triggers to components. 
So we have to use the ```with``` syntax to trigger the ```_", "heading1": "Implementation", "source_page_url": "https://gradio.app/guides/wrapping-layouts", "source_page_title": "Other Tutorials - Wrapping Layouts Guide"}, {"text": "Blocks``` class, we can see that it is calling the ```attach_load_events``` function which is used for setting event triggers to components. So we have to use the ```with``` syntax to trigger the ```__exit__``` function.\n\nOf course, we can call ```attach_load_events``` without using the ```with``` syntax, but the function needs a ```Context.root_block```, and it is set in the ```__enter__``` function. So we used the ```with``` syntax here rather than calling the function ourselves.\n\n", "heading1": "Implementation", "source_page_url": "https://gradio.app/guides/wrapping-layouts", "source_page_title": "Other Tutorials - Wrapping Layouts Guide"}, {"text": "In this guide, we saw\n\n- How we can wrap the layouts\n- How components are rendered\n- How we can structure our application with wrapped layout classes\n\nBecause the classes used in this guide are used for demonstration purposes, they may still not be totally optimized or modular. But that would make the guide much longer!\n\nI hope this guide helps you gain another view of the layout classes and gives you an idea about how you can use them for your needs. See the full implementation of our example [here](https://huggingface.co/spaces/WoWoWoWololo/wrapping-layouts/blob/main/app.py).\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/wrapping-layouts", "source_page_title": "Other Tutorials - Wrapping Layouts Guide"}, {"text": "This guide explains how you can run background tasks from your gradio app.\nBackground tasks are operations that you'd like to perform outside the request-response\nlifecycle of your app either once or on a periodic schedule.\nExamples of background tasks include periodically synchronizing data to an external database or\nsending a report of model predictions via email.\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/running-background-tasks", "source_page_title": "Other Tutorials - Running Background Tasks Guide"}, {"text": "We will be creating a simple \"Google-forms-style\" application to gather feedback from users of the gradio library.\nWe will use a local sqlite database to store our data, but we will periodically synchronize the state of the database\nwith a [HuggingFace Dataset](https://huggingface.co/datasets) so that our user reviews are always backed up.\nThe synchronization will happen in a background task running every 60 seconds.\n\nAt the end of the demo, you'll have a fully working application like this one:\n\n \n\n", "heading1": "Overview", "source_page_url": "https://gradio.app/guides/running-background-tasks", "source_page_title": "Other Tutorials - Running Background Tasks Guide"}, {"text": "Our application will store the name of the reviewer, their rating of gradio on a scale of 1 to 5, as well as\nany comments they want to share about the library. Let's write some code that creates a database table to\nstore this data. 
We'll also write some functions to insert a review into that table and fetch the latest 10 reviews.\n\nWe're going to use the `sqlite3` library to connect to our sqlite database, but Gradio will work with any library.\n\nThe code will look like this:\n\n```python\nimport sqlite3\n\nimport pandas as pd\n\nDB_FILE = \"./reviews.db\"\ndb = sqlite3.connect(DB_FILE)\n\n# Create table if it doesn't already exist\ntry:\n    db.execute(\"SELECT * FROM reviews\").fetchall()\n    db.close()\nexcept sqlite3.OperationalError:\n    db.execute(\n        '''\n        CREATE TABLE reviews (id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,\n                              created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL,\n                              name TEXT, review INTEGER, comments TEXT)\n        ''')\n    db.commit()\n    db.close()\n\ndef get_latest_reviews(db: sqlite3.Connection):\n    reviews = db.execute(\"SELECT * FROM reviews ORDER BY id DESC limit 10\").fetchall()\n    total_reviews = db.execute(\"Select COUNT(id) from reviews\").fetchone()[0]\n    reviews = pd.DataFrame(reviews, columns=[\"id\", \"date_created\", \"name\", \"review\", \"comments\"])\n    return reviews, total_reviews\n\n\ndef add_review(name: str, review: int, comments: str):\n    db = sqlite3.connect(DB_FILE)\n    cursor = db.cursor()\n    cursor.execute(\"INSERT INTO reviews(name, review, comments) VALUES(?,?,?)\", [name, review, comments])\n    db.commit()\n    reviews, total_reviews = get_latest_reviews(db)\n    db.close()\n    return reviews, total_reviews\n```\n\nLet's also write a function to load the latest reviews when the gradio application loads:\n\n```python\ndef load_data():\n    db = sqlite3.connect(DB_FILE)\n    reviews, total_reviews = get_latest_reviews(db)\n    db.close()\n    return reviews, total_reviews\n```\n\n", "heading1": "Step 1 - Write your database logic \ud83d\udcbe", "source_page_url": "https://gradio.app/guides/running-background-tasks", "source_page_title": "Other Tutorials - Running Background Tasks Guide"}, {"text": "Now that we have our database logic defined, we can use Gradio to create a dynamic web page to ask our users for feedback!\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    with gr.Row():\n        with gr.Column():\n            name = gr.Textbox(label=\"Name\", placeholder=\"What is your name?\")\n            review = gr.Radio(label=\"How satisfied are you with using gradio?\", choices=[1, 2, 3, 4, 5])\n            comments = gr.Textbox(label=\"Comments\", lines=10, placeholder=\"Do you have any feedback on gradio?\")\n            submit = gr.Button(value=\"Submit Feedback\")\n        with gr.Column():\n            data = gr.Dataframe(label=\"Most recently created 10 rows\")\n            count = gr.Number(label=\"Total number of reviews\")\n    submit.click(add_review, [name, review, comments], [data, count])\n    demo.load(load_data, None, [data, count])\n```\n\n", "heading1": "Step 2 - Create a gradio app \u26a1", "source_page_url": "https://gradio.app/guides/running-background-tasks", "source_page_title": "Other Tutorials - Running Background Tasks Guide"}, {"text": "We could call `demo.launch()` after step 2 and have a fully functioning application. However, our data would be stored locally on our machine. 
If the sqlite file were accidentally deleted, we'd lose all of our reviews! Let's back up our data to a dataset on the HuggingFace hub.\n\nCreate a dataset [here](https://huggingface.co/datasets) before proceeding.\n\nNow at the **top** of our script, we'll use the [huggingface hub client library](https://huggingface.co/docs/huggingface_hub/index) to connect to our dataset and pull the latest backup.\n\n```python\nimport os\nimport shutil\n\nimport huggingface_hub\n\nTOKEN = os.environ.get('HUB_TOKEN')\nrepo = huggingface_hub.Repository(\n    local_dir=\"data\",\n    repo_type=\"dataset\",\n    clone_from=\"\",  # fill in the name of your dataset repo on the Hub\n    use_auth_token=TOKEN\n)\nrepo.git_pull()\n\nshutil.copyfile(\"./data/reviews.db\", DB_FILE)\n```\n\nNote that you'll have to get an access token from the \"Settings\" tab of your HuggingFace account for the above code to work. In the script, the token is securely accessed via an environment variable.\n\n![access_token](https://github.com/gradio-app/gradio/blob/main/guides/assets/access_token.png?raw=true)\n\nNow we will create a background task to sync our local database to the dataset hub every 60 seconds. We will use the [Advanced Python Scheduler (APScheduler)](https://apscheduler.readthedocs.io/en/3.x/) to handle the scheduling. However, this is not the only task scheduling library available. Feel free to use whatever you are comfortable with.\n\nThe function to back up our data will look like this:\n\n```python\nimport datetime\n\nfrom apscheduler.schedulers.background import BackgroundScheduler\n\ndef backup_db():\n    shutil.copyfile(DB_FILE, \"./data/reviews.db\")\n    db = sqlite3.connect(DB_FILE)\n    reviews = db.execute(\"SELECT * FROM reviews\").fetchall()\n    pd.DataFrame(reviews).to_csv(\"./data/reviews.csv\", index=False)\n    print(\"updating db\")\n    repo.push_to_hub(blocking=False, commit_message=f\"Updating data at {datetime.datetime.now()}\")\n\n\nscheduler = BackgroundScheduler()\nscheduler.add_job(func=backup_db, trigge", "heading1": "Step 3 - Synchronize with HuggingFace Datasets \ud83e\udd17", "source_page_url": "https://gradio.app/guides/running-background-tasks", "source_page_title": "Other Tutorials - Running Background Tasks Guide"}, {"text": " print(\"updating db\")\n    repo.push_to_hub(blocking=False, commit_message=f\"Updating data at {datetime.datetime.now()}\")\n\n\nscheduler = BackgroundScheduler()\nscheduler.add_job(func=backup_db, trigger=\"interval\", seconds=60)\nscheduler.start()\n```\n\n
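If you also want the job to stop cleanly when the app process exits, APScheduler provides a `shutdown()` method; here is a small sketch using the standard library's `atexit` hook (an addition on top of the guide's code):\n\n```python\nimport atexit\n\n# Stop the background scheduler when the process exits\natexit.register(lambda: scheduler.shutdown(wait=False))\n```\n\n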
", "heading1": "Step 3 - Synchronize with HuggingFace Datasets \ud83e\udd17", "source_page_url": "https://gradio.app/guides/running-background-tasks", "source_page_title": "Other Tutorials - Running Background Tasks Guide"}, {"text": "You can use the HuggingFace [Spaces](https://huggingface.co/spaces) platform to deploy this application for free \u2728\n\nIf you haven't used Spaces before, follow the previous guide [here](/using_hugging_face_integrations). You will have to add the `HUB_TOKEN` environment variable as a secret in your Space settings.\n\n", "heading1": "Step 4 (Bonus) - Deployment to HuggingFace Spaces", "source_page_url": "https://gradio.app/guides/running-background-tasks", "source_page_title": "Other Tutorials - Running Background Tasks Guide"}, {"text": "Congratulations! You know how to run background tasks from your gradio app on a schedule \u23f2\ufe0f.\n\nCheck out the application running on Spaces [here](https://huggingface.co/spaces/freddyaboulton/gradio-google-forms). The complete code is [here](https://huggingface.co/spaces/freddyaboulton/gradio-google-forms/blob/main/app.py).\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/running-background-tasks", "source_page_title": "Other Tutorials - Running Background Tasks Guide"}, {"text": "When you are building a Gradio demo, particularly out of Blocks, you may find it cumbersome to keep re-running your code to test your changes.\n\nTo make it faster and more convenient to write your code, we've made it easier to \"reload\" your Gradio apps instantly when you are developing in a **Python IDE** (like VS Code, Sublime Text, PyCharm, or so on) or generally running your Python code from the terminal. We've also developed an analogous \"magic command\" that allows you to re-run cells faster if you use **Jupyter Notebooks** (or any similar environment like Colab).\n\nThis short Guide will cover both of these methods, so no matter how you write Python, you'll leave knowing how to build Gradio apps faster.\n\n", "heading1": "Why Hot Reloading?", "source_page_url": "https://gradio.app/guides/developing-faster-with-reload-mode", "source_page_title": "Other Tutorials - Developing Faster With Reload Mode Guide"}, {"text": "If you are building Gradio Blocks using a Python IDE, your file of code (let's name it `run.py`) might look something like this:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    gr.Markdown(\"Greetings from Gradio!\")\n    inp = gr.Textbox(placeholder=\"What is your name?\")\n    out = gr.Textbox()\n\n    inp.change(fn=lambda x: f\"Welcome, {x}!\",\n               inputs=inp,\n               outputs=out)\n\nif __name__ == \"__main__\":\n    demo.launch()\n```\n\nThe problem is that anytime that you want to make a change to your layout, events, or components, you have to close and rerun your app by writing `python run.py`.\n\nInstead of doing this, you can run your code in **reload mode** by changing 1 word: `python` to `gradio`:\n\nIn the terminal, run `gradio run.py`. That's it!\n\nNow, you'll see something like this:\n\n```bash\nWatching: '/Users/freddy/sources/gradio/gradio', '/Users/freddy/sources/gradio/demo/'\n\nRunning on local URL: http://127.0.0.1:7860\n```\n\nThe important part here is the line that says `Watching...` What's happening here is that Gradio will be observing the directory where the `run.py` file lives, and if the file changes, it will automatically rerun the file for you. So you can focus on writing your code, and your Gradio demo will refresh automatically \ud83e\udd73\n\nTip: the `gradio` command does not detect the parameters passed to the `launch()` methods because the `launch()` method is never called in reload mode. For example, setting `auth`, or `show_error` in `launch()` will not be reflected in the app.\n\nThere is one important thing to keep in mind when using the reload mode: Gradio specifically looks for a Gradio Blocks/Interface demo called `demo` in your code. If you have named your demo something else, you will need to pass in the name of your demo as the 2nd parameter in your code. 
So if your `run.py` file looked like this:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as my_demo:\n    gr.Markdown(\"Greetings from Gradio!\")\n    inp = gr.", "heading1": "Python IDE Reload \ud83d\udd25", "source_page_url": "https://gradio.app/guides/developing-faster-with-reload-mode", "source_page_title": "Other Tutorials - Developing Faster With Reload Mode Guide"}, {"text": "emo as the 2nd parameter in your code. So if your `run.py` file looked like this:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as my_demo:\n    gr.Markdown(\"Greetings from Gradio!\")\n    inp = gr.Textbox(placeholder=\"What is your name?\")\n    out = gr.Textbox()\n\n    inp.change(fn=lambda x: f\"Welcome, {x}!\",\n               inputs=inp,\n               outputs=out)\n\nif __name__ == \"__main__\":\n    my_demo.launch()\n```\n\nThen you would launch it in reload mode like this: `gradio run.py --demo-name=my_demo`.\n\nBy default, Gradio uses UTF-8 encoding for scripts. **For reload mode**, if you are using encoding formats other than UTF-8 (such as cp1252), make sure you:\n\n1. Add an encoding declaration to your Python script, for example: `# -*- coding: cp1252 -*-`\n2. Confirm that your code editor has identified that encoding format.\n3. Run like this: `gradio run.py --encoding cp1252`\n\n\ud83d\udd25 If your application accepts command line arguments, you can pass them in as well. Here's an example:\n\n```python\nimport gradio as gr\nimport argparse\n\nparser = argparse.ArgumentParser()\nparser.add_argument(\"--name\", type=str, default=\"User\")\nargs, unknown = parser.parse_known_args()\n\nwith gr.Blocks() as demo:\n    gr.Markdown(f\"Greetings {args.name}!\")\n    inp = gr.Textbox()\n    out = gr.Textbox()\n\n    inp.change(fn=lambda x: x, inputs=inp, outputs=out)\n\nif __name__ == \"__main__\":\n    demo.launch()\n```\n\nWhich you could run like this: `gradio run.py --name Gretel`\n\nAs a small aside, this auto-reloading happens if you change your `run.py` source code or the Gradio source code. Meaning that this can be useful if you decide to [contribute to Gradio itself](https://github.com/gradio-app/gradio/blob/main/CONTRIBUTING.md) \u2705\n\n\n", "heading1": "Python IDE Reload \ud83d\udd25", "source_page_url": "https://gradio.app/guides/developing-faster-with-reload-mode", "source_page_title": "Other Tutorials - Developing Faster With Reload Mode Guide"}, {"text": "By default, reload mode will re-run your entire script for every change you make. But there are some cases where this is not desirable. For example, loading a machine learning model should probably only happen once to save time. There are also some Python libraries that use C or Rust extensions that throw errors when they are reloaded, like `numpy` and `tiktoken`.\n\nIn these situations, you can place code that you do not want to be re-run inside an `if gr.NO_RELOAD:` codeblock. Here's an example of how you can use it to only load a transformers model once during the development process.\n\nTip: The value of `gr.NO_RELOAD` is `True`. So you don't have to change your script when you are done developing and want to run it in production. 
Simply run the file with `python` instead of `gradio`.\n\n```python\nimport gradio as gr\n\nif gr.NO_RELOAD:\n    from transformers import pipeline\n    pipe = pipeline(\"text-classification\", model=\"cardiffnlp/twitter-roberta-base-sentiment-latest\")\n\ndemo = gr.Interface(lambda s: {d[\"label\"]: d[\"score\"] for d in pipe(s)}, gr.Textbox(), gr.Label())\n\nif __name__ == \"__main__\":\n    demo.launch()\n```\n\n", "heading1": "Controlling the Reload \ud83c\udf9b\ufe0f", "source_page_url": "https://gradio.app/guides/developing-faster-with-reload-mode", "source_page_title": "Other Tutorials - Developing Faster With Reload Mode Guide"}, {"text": "You can also enable Gradio's **Vibe Mode**, which provides an in-browser chat that can be used to write or edit your Gradio app using natural language. To enable this, simply use the `--vibe` flag with Gradio, e.g. `gradio --vibe app.py`.\n\nVibe Mode lets you describe commands using natural language and have an LLM write or edit the code in your Gradio app. The LLM is powered by Hugging Face's [Inference Providers](https://huggingface.co/docs/inference-providers/en/index), so you must be logged into Hugging Face locally to use this.\n\nNote: When Vibe Mode is enabled, anyone who can access the Gradio endpoint can modify files and run arbitrary code on the host machine. Use only for local development.\n\n", "heading1": "Vibe Mode", "source_page_url": "https://gradio.app/guides/developing-faster-with-reload-mode", "source_page_title": "Other Tutorials - Developing Faster With Reload Mode Guide"}, {"text": "What about if you use Jupyter Notebooks (or Colab Notebooks, etc.) to develop code? We got something for you too!\n\nWe've developed a **magic command** that will create and run a Blocks demo for you. To use this, load the gradio extension at the top of your notebook:\n\n`%load_ext gradio`\n\nThen, in the cell that you are developing your Gradio demo, simply write the magic command **`%%blocks`** at the top, and then write the layout and components like you would normally:\n\n```py\n%%blocks\n\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    gr.Markdown(\"Greetings from Gradio!\")\n    inp = gr.Textbox()\n    out = gr.Textbox()\n\n    inp.change(fn=lambda x: x, inputs=inp, outputs=out)\n```\n\nNotice that:\n\n- You do not need to launch your demo \u2014 Gradio does that for you automatically!\n\n- Every time you rerun the cell, Gradio will re-render your app on the same port and using the same underlying web server. This means you'll see your changes _much, much faster_ than if you were rerunning the cell normally.\n\nHere's what it looks like in a jupyter notebook:\n\n![](https://gradio-builds.s3.amazonaws.com/demo-files/jupyter_reload.gif)\n\n\ud83e\ude84 This works in colab notebooks too! [Here's a colab notebook](https://colab.research.google.com/drive/1zAuWoiTIb3O2oitbtVb2_ekv1K6ggtC1?usp=sharing) where you can see the Blocks magic in action. Try making some changes and re-running the cell with the Gradio code!\n\nTip: You may have to use `%%blocks --share` in Colab to get the demo to appear in the cell.\n\nThe Notebook Magic is now the author's preferred way of building Gradio demos. 
Regardless of how you write Python code, we hope either of these methods will give you a much better development experience using Gradio.\n\n---\n\n", "heading1": "Jupyter Notebook Magic \ud83d\udd2e", "source_page_url": "https://gradio.app/guides/developing-faster-with-reload-mode", "source_page_title": "Other Tutorials - Developing Faster With Reload Mode Guide"}, {"text": "Now that you know how to develop quickly using Gradio, start building your own!\n\nIf you are looking for inspiration, try exploring demos other people have built with Gradio, [browse public Hugging Face Spaces](http://hf.space/) \ud83e\udd17\n", "heading1": "Next Steps", "source_page_url": "https://gradio.app/guides/developing-faster-with-reload-mode", "source_page_title": "Other Tutorials - Developing Faster With Reload Mode Guide"}, {"text": "3D models are becoming more popular in machine learning and make for some of the most fun demos to experiment with. Using `gradio`, you can easily build a demo of your 3D image model and share it with anyone. The Gradio 3D Model component accepts 3 file types: _.obj_, _.glb_, and _.gltf_.\n\nThis guide will show you how to build a demo for your 3D image model in a few lines of code, like the one below. Play around with the 3D object by clicking around, dragging, and zooming:\n\n \n\nPrerequisites\n\nMake sure you have the `gradio` Python package already [installed](https://gradio.app/guides/quickstart).\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/how-to-use-3D-model-component", "source_page_title": "Other Tutorials - How To Use 3D Model Component Guide"}, {"text": "Let's take a look at how to create the minimal interface above. The prediction function in this case will just return the original 3D model mesh, but you can change this function to run inference on your machine learning model. We'll take a look at more complex examples below.\n\n```python\nimport gradio as gr\nimport os\n\n\ndef load_mesh(mesh_file_name):\n    return mesh_file_name\n\n\ndemo = gr.Interface(\n    fn=load_mesh,\n    inputs=gr.Model3D(),\n    outputs=gr.Model3D(\n        clear_color=[0.0, 0.0, 0.0, 0.0], label=\"3D Model\"),\n    examples=[\n        [os.path.join(os.path.dirname(__file__), \"files/Bunny.obj\")],\n        [os.path.join(os.path.dirname(__file__), \"files/Duck.glb\")],\n        [os.path.join(os.path.dirname(__file__), \"files/Fox.gltf\")],\n        [os.path.join(os.path.dirname(__file__), \"files/face.obj\")],\n    ],\n)\n\nif __name__ == \"__main__\":\n    demo.launch()\n```\n\nLet's break down the code above:\n\n`load_mesh`: This is our 'prediction' function and for simplicity, this function will take in the 3D model mesh and return it.\n\nCreating the Interface:\n\n- `fn`: the prediction function that is used when the user clicks submit. In our case this is the `load_mesh` function.\n- `inputs`: create a model3D input component. The input expects an uploaded file as a {str} filepath.\n- `outputs`: create a model3D output component. The output component also expects a file as a {str} filepath.\n  - `clear_color`: this is the background color of the 3D model canvas. Expects RGBa values.\n  - `label`: the label that appears on the top left of the component.\n- `examples`: list of 3D model files. 
The 3D model component can accept _.obj_, _.glb_, and _.gltf_ file types.\n- `cache_examples`: saves the predicted output for the examples, to save time on inference.\n\n", "heading1": "Taking a Look at the Code", "source_page_url": "https://gradio.app/guides/how-to-use-3D-model-component", "source_page_title": "Other Tutorials - How To Use 3D Model Component Guide"}, {"text": "Below is a demo that uses the DPT model to predict the depth of an image and then uses 3D Point Cloud to create a 3D object. Take a look at the [app.py](https://huggingface.co/spaces/gradio/dpt-depth-estimation-3d-obj/blob/main/app.py) file for a peek into the code and the model prediction function.\n \n\n---\n\nAnd you're done! That's all the code you need to build an interface for your Model3D model. Here are some references that you may find useful:\n\n- Gradio's [\"Getting Started\" guide](https://gradio.app/getting_started/)\n- The first [3D Model Demo](https://huggingface.co/spaces/gradio/Model3D) and [complete code](https://huggingface.co/spaces/gradio/Model3D/tree/main) (on Hugging Face Spaces)\n", "heading1": "Exploring a more complex Model3D Demo:", "source_page_url": "https://gradio.app/guides/how-to-use-3D-model-component", "source_page_title": "Other Tutorials - How To Use 3D Model Component Guide"}, {"text": "Let\u2019s start with a simple example of integrating a C++ program into a Gradio app. Suppose we have the following C++ program that adds two numbers:\n\n```cpp\n// add.cpp\n#include <iostream>\n\nint main() {\n    double a, b;\n    std::cin >> a >> b;\n    std::cout << a + b << std::endl;\n    return 0;\n}\n```\n\nThis program reads two numbers from standard input, adds them, and outputs the result.\n\nWe can build a Gradio interface around this C++ program using Python's `subprocess` module. Here\u2019s the corresponding Python code:\n\n```python\nimport gradio as gr\nimport subprocess\n\ndef add_numbers(a, b):\n    process = subprocess.Popen(\n        ['./add'],\n        stdin=subprocess.PIPE,\n        stdout=subprocess.PIPE,\n        stderr=subprocess.PIPE\n    )\n    output, error = process.communicate(input=f\"{a} {b}\\n\".encode())\n\n    if error:\n        return f\"Error: {error.decode()}\"\n    return float(output.decode().strip())\n\ndemo = gr.Interface(\n    fn=add_numbers,\n    inputs=[gr.Number(label=\"Number 1\"), gr.Number(label=\"Number 2\")],\n    outputs=gr.Textbox(label=\"Result\")\n)\n\ndemo.launch()\n```\n\nHere, `subprocess.Popen` is used to execute the compiled C++ program (`add`), pass the input values, and capture the output. You can compile the C++ program by running:\n\n```bash\ng++ -o add add.cpp\n```\n\nThis example shows how easy it is to call C++ from Python using `subprocess` and build a Gradio interface around it.\n\n
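If you prefer a more compact wrapper, the same call can be written with `subprocess.run`; here is a sketch equivalent to the `Popen` version above (assuming `add` was compiled as shown):\n\n```python\nimport subprocess\n\ndef add_numbers(a, b):\n    # Run the compiled C++ binary, feeding the numbers on stdin\n    result = subprocess.run(\n        [\"./add\"],\n        input=f\"{a} {b}\\n\",\n        capture_output=True,\n        text=True,\n        check=True,  # raise CalledProcessError on a non-zero exit code\n    )\n    return float(result.stdout.strip())\n```\n\n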
", "heading1": "Using Gradio with C++", "source_page_url": "https://gradio.app/guides/using-gradio-in-other-programming-languages", "source_page_title": "Other Tutorials - Using Gradio In Other Programming Languages Guide"}, {"text": "Now, let\u2019s move to another example: calling a Rust program to apply a sepia filter to an image. The Rust code could look something like this:\n\n```rust\n// sepia.rs\nextern crate image;\n\nuse image::{GenericImageView, ImageBuffer, Rgba};\n\nfn sepia_filter(input: &str, output: &str) {\n    let img = image::open(input).unwrap();\n    let (width, height) = img.dimensions();\n    let mut img_buf = ImageBuffer::new(width, height);\n\n    for (x, y, pixel) in img.pixels() {\n        let (r, g, b, a) = (pixel[0] as f32, pixel[1] as f32, pixel[2] as f32, pixel[3]);\n        let tr = (0.393 * r + 0.769 * g + 0.189 * b).min(255.0);\n        let tg = (0.349 * r + 0.686 * g + 0.168 * b).min(255.0);\n        let tb = (0.272 * r + 0.534 * g + 0.131 * b).min(255.0);\n        img_buf.put_pixel(x, y, Rgba([tr as u8, tg as u8, tb as u8, a]));\n    }\n\n    img_buf.save(output).unwrap();\n}\n\nfn main() {\n    let args: Vec<String> = std::env::args().collect();\n    if args.len() != 3 {\n        eprintln!(\"Usage: sepia <input> <output>\");\n        return;\n    }\n    sepia_filter(&args[1], &args[2]);\n}\n```\n\nThis Rust program applies a sepia filter to an image. It takes two command-line arguments: the input image path and the output image path. You can compile this program using:\n\n```bash\ncargo build --release\n```\n\nNow, we can call this Rust program from Python and use Gradio to build the interface:\n\n```python\nimport gradio as gr\nimport subprocess\n\ndef apply_sepia(input_path):\n    output_path = \"output.png\"\n\n    process = subprocess.Popen(\n        ['./target/release/sepia', input_path, output_path],\n        stdout=subprocess.PIPE,\n        stderr=subprocess.PIPE\n    )\n    process.wait()\n\n    return output_path\n\ndemo = gr.Interface(\n    fn=apply_sepia,\n    inputs=gr.Image(type=\"filepath\", label=\"Input Image\"),\n    outputs=gr.Image(label=\"Sepia Image\")\n)\n\ndemo.launch()\n```\n\nHere, when a user uploads an image and clicks submit, Gradio calls the Rust binary (`sepia`) to process the image, and re", "heading1": "Using Gradio with Rust", "source_page_url": "https://gradio.app/guides/using-gradio-in-other-programming-languages", "source_page_title": "Other Tutorials - Using Gradio In Other Programming Languages Guide"}, {"text": "nput Image\"),\n    outputs=gr.Image(label=\"Sepia Image\")\n)\n\ndemo.launch()\n```\n\nHere, when a user uploads an image and clicks submit, Gradio calls the Rust binary (`sepia`) to process the image, and returns the sepia-filtered output to Gradio.\n\nThis setup showcases how you can integrate performance-critical or specialized code written in Rust into a Gradio interface.\n\n", "heading1": "Using Gradio with Rust", "source_page_url": "https://gradio.app/guides/using-gradio-in-other-programming-languages", "source_page_title": "Other Tutorials - Using Gradio In Other Programming Languages Guide"}, {"text": "Integrating Gradio with R is particularly straightforward thanks to the `reticulate` package, which allows you to run Python code directly in R. Let\u2019s walk through an example of using Gradio in R.\n\n**Installation**\n\nFirst, you need to install the `reticulate` package in R:\n\n```r\ninstall.packages(\"reticulate\")\n```\n\nOnce installed, you can use the package to run Gradio directly from within an R script.\n\n```r\nlibrary(reticulate)\n\npy_install(\"gradio\", pip = TRUE)\n\ngr <- import(\"gradio\")  # import gradio as gr\n```\n\n**Building a Gradio Application**\n\nWith gradio installed and imported, we now have access to gradio's app building methods. 
Let's build a simple app for an R function that returns a greeting:\n\n```r\ngreeting <- \\(name) paste(\"Hello\", name)\n\napp <- gr$Interface(\n  fn = greeting,\n  inputs = gr$Text(label = \"Name\"),\n  outputs = gr$Text(label = \"Greeting\"),\n  title = \"Hello! \ud83d\ude03 \ud83d\udc4b\"\n)\n\napp$launch(server_name = \"localhost\",\n           server_port = as.integer(3000))\n```\n\nCredit to [@IfeanyiIdiaye](https://github.com/Ifeanyi55) for contributing this section. You can see more examples [here](https://github.com/Ifeanyi55/Gradio-in-R/tree/main/Code), including using Gradio Blocks to build a machine learning application in R.\n", "heading1": "Using Gradio with R (via `reticulate`)", "source_page_url": "https://gradio.app/guides/using-gradio-in-other-programming-languages", "source_page_title": "Other Tutorials - Using Gradio In Other Programming Languages Guide"}, {"text": "To use Gradio with BigQuery, you will need to obtain your BigQuery credentials and use them with the [BigQuery Python client](https://pypi.org/project/google-cloud-bigquery/). If you already have BigQuery credentials (as a `.json` file), you can skip this section. If not, you can do this for free in just a couple of minutes.\n\n1. First, log in to your Google Cloud account and go to the Google Cloud Console (https://console.cloud.google.com/)\n\n2. In the Cloud Console, click on the hamburger menu in the top-left corner and select \"APIs & Services\" from the menu. If you do not have an existing project, you will need to create one.\n\n3. Then, click the \"+ Enabled APIs & services\" button, which allows you to enable specific services for your project. Search for \"BigQuery API\", click on it, and click the \"Enable\" button. If you see the \"Manage\" button, then the BigQuery API is already enabled, and you're all set.\n\n4. In the APIs & Services menu, click on the \"Credentials\" tab and then click on the \"Create credentials\" button.\n\n5. In the \"Create credentials\" dialog, select \"Service account key\" as the type of credentials to create, and give it a name. Also grant the service account permissions by giving it a role such as \"BigQuery User\", which will allow you to run queries.\n\n6. After selecting the service account, select the \"JSON\" key type and then click on the \"Create\" button. This will download the JSON key file containing your credentials to your computer. It will look something like this:\n\n```json\n{\n    \"type\": \"service_account\",\n    \"project_id\": \"your project\",\n    \"private_key_id\": \"your private key id\",\n    \"private_key\": \"private key\",\n    \"client_email\": \"email\",\n    \"client_id\": \"client id\",\n    \"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\",\n    \"token_uri\": \"https://accounts.google.com/o/oauth2/token\",\n    \"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\",\n    \"client_x509_cert_url\": \"https://www.googleapis.com/robot/v1/metadata/x509/email_id\"\n}\n```\n\n
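As an alternative to passing this file path explicitly (which is what the next section does), you can point the standard `GOOGLE_APPLICATION_CREDENTIALS` environment variable at the key file and let the client discover it automatically; a minimal sketch:\n\n```python\nimport os\nfrom google.cloud import bigquery\n\n# Standard Google Cloud credential discovery via environment variable\nos.environ[\"GOOGLE_APPLICATION_CREDENTIALS\"] = \"path/to/key.json\"\nclient = bigquery.Client()\n```\n\n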
", "heading1": "Setting up your BigQuery Credentials", "source_page_url": "https://gradio.app/guides/creating-a-dashboard-from-bigquery-data", "source_page_title": "Other Tutorials - Creating A Dashboard From Bigquery Data Guide"}, {"text": "Once you have the credentials, you will need to use the BigQuery Python client to authenticate using your credentials. To do this, you will need to install the BigQuery Python client by running the following command in the terminal:\n\n```bash\npip install google-cloud-bigquery[pandas]\n```\n\nYou'll notice that we've installed the pandas add-on, which will be helpful for processing the BigQuery dataset as a pandas dataframe. Once the client is installed, you can authenticate using your credentials by running the following code:\n\n```py\nfrom google.cloud import bigquery\n\nclient = bigquery.Client.from_service_account_json(\"path/to/key.json\")\n```\n\nWith your credentials authenticated, you can now use the BigQuery Python client to interact with your BigQuery datasets.\n\nHere is an example of a function which queries the `covid19_nyt.us_counties` dataset in BigQuery to show the top 20 counties with the most confirmed cases as of the current day:\n\n```py\nimport numpy as np\n\nQUERY = (\n    'SELECT * FROM `bigquery-public-data.covid19_nyt.us_counties` '\n    'ORDER BY date DESC,confirmed_cases DESC '\n    'LIMIT 20')\n\ndef run_query():\n    query_job = client.query(QUERY)\n    query_result = query_job.result()\n    df = query_result.to_dataframe()\n    # Select a subset of columns\n    df = df[[\"confirmed_cases\", \"deaths\", \"county\", \"state_name\"]]\n    # Convert numeric columns to standard numpy types\n    df = df.astype({\"deaths\": np.int64, \"confirmed_cases\": np.int64})\n    return df\n```\n\n", "heading1": "Using the BigQuery Client", "source_page_url": "https://gradio.app/guides/creating-a-dashboard-from-bigquery-data", "source_page_title": "Other Tutorials - Creating A Dashboard From Bigquery Data Guide"}, {"text": "Once you have a function to query the data, you can use the `gr.DataFrame` component from the Gradio library to display the results in a tabular format. This is a useful way to inspect the data and make sure that it has been queried correctly.\n\nHere is an example of how to use the `gr.DataFrame` component to display the results. By passing in the `run_query` function to `gr.DataFrame`, we instruct Gradio to run the function as soon as the page loads and show the results. In addition, you also pass in the keyword `every` to tell the dashboard to refresh every hour (60\\*60 seconds).\n\n```py\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    gr.DataFrame(run_query, every=gr.Timer(60*60))\n\ndemo.launch()\n```\n\nPerhaps you'd like to add a visualization to our dashboard. You can use the `gr.ScatterPlot()` component to visualize the data in a scatter plot. This allows you to see the relationship between different variables such as case count and case deaths in the dataset and can be useful for exploring the data and gaining insights. Again, we can do this in real-time by passing in the `every` parameter.\n\nHere is a complete example showing how to use the `gr.ScatterPlot` to visualize in addition to displaying data with the `gr.DataFrame`\n\n```py\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    gr.Markdown(\"\ud83d\udc89 Covid Dashboard (Updated Hourly)\")\n    with gr.Row():\n        gr.DataFrame(run_query, every=gr.Timer(60*60))\n        gr.ScatterPlot(run_query, every=gr.Timer(60*60), x=\"confirmed_cases\",\n                       y=\"deaths\", tooltip=\"county\", width=500, height=500)\n\ndemo.queue().launch()  # Run the demo with queuing enabled\n```\n", "heading1": "Building the Real-Time Dashboard", "source_page_url": "https://gradio.app/guides/creating-a-dashboard-from-bigquery-data", "source_page_title": "Other Tutorials - Creating A Dashboard From Bigquery Data Guide"}, {"text": "First of all, we need some data to visualize. 
Following this [excellent guide](https://supabase.com/blog/loading-data-supabase-python), we'll create fake commerce data and put it in Supabase.\n\n1\\. Start by creating a new project in Supabase. Once you're logged in, click the \"New Project\" button.\n\n2\\. Give your project a name and database password. You can also choose a pricing plan (for our purposes, the Free Tier is sufficient!)\n\n3\\. You'll be presented with your API keys while the database spins up (this can take up to 2 minutes).\n\n4\\. Click on \"Table Editor\" (the table icon) in the left pane to create a new table. We'll create a single table called `Product`, with the following schema:\n\n
| Column name | Data type |
| --- | --- |
| `product_id` | `int8` |
| `inventory_count` | `int8` |
| `price` | `float8` |
| `product_name` | `varchar` |
\n\n5\\. Click Save to save the table schema.\n\nOur table is now ready!\n\n", "heading1": "Create a table in Supabase", "source_page_url": "https://gradio.app/guides/creating-a-dashboard-from-supabase-data", "source_page_title": "Other Tutorials - Creating A Dashboard From Supabase Data Guide"}, {"text": "The next step is to write data to a Supabase dataset. We will use the Supabase Python library to do this.\n\n6\\. Install `supabase` by running the following command in your terminal:\n\n```bash\npip install supabase\n```\n\n7\\. Get your project URL and API key. Click the Settings (gear icon) on the left pane and click 'API'. The URL is listed in the Project URL box, while the API key is listed in Project API keys (with the tags `service_role`, `secret`).\n\n8\\. Now, run the following Python script to write some fake data to the table (note you have to put the values of `SUPABASE_URL` and `SUPABASE_SECRET_KEY` from step 7):\n\n```python\nimport random\n\nimport supabase\n\n# Initialize the Supabase client\nclient = supabase.create_client('SUPABASE_URL', 'SUPABASE_SECRET_KEY')\n\n# Define the data to write\nmain_list = []\nfor i in range(10):\n    value = {'product_id': i,\n             'product_name': f\"Item {i}\",\n             'inventory_count': random.randint(1, 100),\n             'price': random.random()*100\n             }\n    main_list.append(value)\n\n# Write the data to the table\ndata = client.table('Product').insert(main_list).execute()\n```\n\nReturn to your Supabase dashboard and refresh the page, and you should now see 10 rows populated in the `Product` table!\n\n", "heading1": "Write data to Supabase", "source_page_url": "https://gradio.app/guides/creating-a-dashboard-from-supabase-data", "source_page_title": "Other Tutorials - Creating A Dashboard From Supabase Data Guide"}, {"text": "Finally, we will read the data from the Supabase dataset using the same `supabase` Python library and create a real-time dashboard using `gradio`.\n\nNote: We repeat certain steps in this section (like creating the Supabase client) in case you did not go through the previous sections. As described in Step 7, you will need the project URL and API Key for your database.\n\n9\\. Write a function that loads the data from the `Product` table and returns it as a pandas DataFrame:\n\n```python\nimport supabase\nimport pandas as pd\n\nclient = supabase.create_client('SUPABASE_URL', 'SUPABASE_SECRET_KEY')\n\ndef read_data():\n    response = client.table('Product').select(\"*\").execute()\n    df = pd.DataFrame(response.data)\n    return df\n```\n\n10\\. Create a small Gradio Dashboard with two bar plots that plot the prices and inventories of all of the items every minute and update in real-time:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as dashboard:\n    with gr.Row():\n        gr.BarPlot(read_data, x=\"product_id\", y=\"price\", title=\"Prices\", every=gr.Timer(60))\n        gr.BarPlot(read_data, x=\"product_id\", y=\"inventory_count\", title=\"Inventory\", every=gr.Timer(60))\n\ndashboard.queue().launch()\n```\n\nNotice that by passing in a function to `gr.BarPlot()`, we have the BarPlot query the database as soon as the web app loads (and then again every 60 seconds because of the `every` parameter). Your final dashboard should look something like this:\n\n\n\n", "heading1": "Visualize the Data in a Real-Time Gradio Dashboard", "source_page_url": "https://gradio.app/guides/creating-a-dashboard-from-supabase-data", "source_page_title": "Other Tutorials - Creating A Dashboard From Supabase Data Guide"}, {"text": "That's it! 
In this tutorial, you learned how to write data to a Supabase dataset, and then read that data and plot the results as bar plots. If you update the data in the Supabase database, you'll notice that the Gradio dashboard will update within a minute.\n\nTry adding more plots and visualizations to this example (or with a different dataset) to build a more complex dashboard!\n", "heading1": "Conclusion", "source_page_url": "https://gradio.app/guides/creating-a-dashboard-from-supabase-data", "source_page_title": "Other Tutorials - Creating A Dashboard From Supabase Data Guide"}, {"text": "A virtual environment in Python is a self-contained directory that holds a Python installation for a particular version of Python, along with a number of additional packages. This environment is isolated from the main Python installation and other virtual environments. Each environment can have its own independent set of installed Python packages, which allows you to maintain different versions of libraries for different projects without conflicts.\n\n\nUsing virtual environments ensures that you can work on multiple Python projects on the same machine without any conflicts. This is particularly useful when different projects require different versions of the same library. It also simplifies dependency management and enhances reproducibility, as you can easily share the requirements of your project with others.\n\n\n", "heading1": "Virtual Environments", "source_page_url": "https://gradio.app/guides/installing-gradio-in-a-virtual-environment", "source_page_title": "Other Tutorials - Installing Gradio In A Virtual Environment Guide"}, {"text": "To install Gradio on a Windows system in a virtual environment, follow these steps:\n\n1. **Install Python**: Ensure you have Python 3.10 or higher installed. You can download it from [python.org](https://www.python.org/). You can verify the installation by running `python --version` or `python3 --version` in Command Prompt.\n\n\n2. **Create a Virtual Environment**:\n Open Command Prompt and navigate to your project directory. Then create a virtual environment using the following command:\n\n ```bash\n python -m venv gradio-env\n ```\n\n This command creates a new directory `gradio-env` in your project folder, containing a fresh Python installation.\n\n3. **Activate the Virtual Environment**:\n To activate the virtual environment, run:\n\n ```bash\n .\\gradio-env\\Scripts\\activate\n ```\n\n Your command prompt should now indicate that you are working inside `gradio-env`. Note: you can choose a different name than `gradio-env` for your virtual environment in this step.\n\n\n4. **Install Gradio**:\n Now, you can install Gradio using pip:\n\n ```bash\n pip install gradio\n ```\n\n5. **Verification**:\n To verify the installation, run `python` and then type:\n\n ```python\n import gradio as gr\n print(gr.__version__)\n ```\n\n This will display the installed version of Gradio.\n\n", "heading1": "Installing Gradio on Windows", "source_page_url": "https://gradio.app/guides/installing-gradio-in-a-virtual-environment", "source_page_title": "Other Tutorials - Installing Gradio In A Virtual Environment Guide"}, {"text": "The installation steps on MacOS and Linux are similar to Windows but with some differences in commands.\n\n1. **Install Python**:\n Python usually comes pre-installed on MacOS and most Linux distributions. 
You can verify the installation by running `python --version` in the terminal (note that depending on how Python is installed, you might have to use `python3` instead of `python` throughout these steps). \n \n Ensure you have Python 3.10 or higher installed. If you do not have it installed, you can download it from [python.org](https://www.python.org/). \n\n2. **Create a Virtual Environment**:\n   Open Terminal and navigate to your project directory. Then create a virtual environment using:\n\n   ```bash\n   python -m venv gradio-env\n   ```\n\n   Note: you can choose a different name than `gradio-env` for your virtual environment in this step.\n\n3. **Activate the Virtual Environment**:\n   To activate the virtual environment on MacOS/Linux, use:\n\n   ```bash\n   source gradio-env/bin/activate\n   ```\n\n4. **Install Gradio**:\n   With the virtual environment activated, install Gradio using pip:\n\n   ```bash\n   pip install gradio\n   ```\n\n5. **Verification**:\n   To verify the installation, run `python` and then type:\n\n   ```python\n   import gradio as gr\n   print(gr.__version__)\n   ```\n\n   This will display the installed version of Gradio.\n\nBy following these steps, you can successfully install Gradio in a virtual environment on your operating system, ensuring a clean and managed workspace for your Python projects.", "heading1": "Installing Gradio on MacOS/Linux", "source_page_url": "https://gradio.app/guides/installing-gradio-in-a-virtual-environment", "source_page_title": "Other Tutorials - Installing Gradio In A Virtual Environment Guide"}, {"text": "Named-entity recognition (NER), also known as token classification or text tagging, is the task of taking a sentence and classifying every word (or \"token\") into different categories, such as names of people or names of locations, or different parts of speech.\n\nFor example, given the sentence:\n\n> Does Chicago have any Pakistani restaurants?\n\nA named-entity recognition algorithm may identify:\n\n- \"Chicago\" as a **location**\n- \"Pakistani\" as an **ethnicity**\n\nand so on.\n\nUsing `gradio` (specifically the `HighlightedText` component), you can easily build a web demo of your NER model and share that with the rest of your team.\n\nHere is an example of a demo that you'll be able to build:\n\n$demo_ner_pipeline\n\nThis tutorial will show how to take a pretrained NER model and deploy it with a Gradio interface. We will show two different ways to use the `HighlightedText` component -- depending on your NER model, either of these two ways may be easier to learn!\n\nPrerequisites\n\nMake sure you have the `gradio` Python package already [installed](/getting_started). You will also need a pretrained named-entity recognition model. You can use your own model, but in this tutorial we will use one from the `transformers` library.\n\nApproach 1: List of Entity Dictionaries\n\nMany named-entity recognition models output a list of dictionaries. Each dictionary consists of an _entity_, a \"start\" index, and an \"end\" index. 
This is, for example, how NER models in the `transformers` library operate:\n\n```py\nfrom transformers import pipeline\nner_pipeline = pipeline(\"ner\")\nner_pipeline(\"Does Chicago have any Pakistani restaurants\")\n```\n\nOutput:\n\n```bash\n[{'entity': 'I-LOC',\n  'score': 0.9988978,\n  'index': 2,\n  'word': 'Chicago',\n  'start': 5,\n  'end': 12},\n {'entity': 'I-MISC',\n  'score': 0.9958592,\n  'index': 5,\n  'word': 'Pakistani',\n  'start': 22,\n  'end': 31}]\n```\n\nIf you have such a model, it is very easy to hook it up to Gradio's `HighlightedText` component. All you need to do is pass in this ", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/named-entity-recognition", "source_page_title": "Other Tutorials - Named Entity Recognition Guide"}, {"text": "index': 5,\n  'word': 'Pakistani',\n  'start': 22,\n  'end': 31}]\n```\n\nIf you have such a model, it is very easy to hook it up to Gradio's `HighlightedText` component. All you need to do is pass in this **list of entities**, along with the **original text** to the model, together as a dictionary, with the keys being `\"entities\"` and `\"text\"` respectively.\n\nHere is a complete example:\n\n$code_ner_pipeline\n$demo_ner_pipeline\n\nApproach 2: List of Tuples\n\nAn alternative way to pass data into the `HighlightedText` component is a list of tuples. The first element of each tuple should be the word or words that are being classified into a particular entity. The second element should be the entity label (or `None` if they should be unlabeled). The `HighlightedText` component automatically strings together the words and labels to display the entities.\n\nIn some cases, this can be easier than the first approach. Here is a demo showing this approach using Spacy's parts-of-speech tagger:\n\n$code_text_analysis\n$demo_text_analysis\n\n---\n\nAnd you're done! That's all you need to know to build a web-based GUI for your NER model.\n\nFun tip: you can share your NER demo instantly with others simply by setting `share=True` in `launch()`.\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/named-entity-recognition", "source_page_title": "Other Tutorials - Named Entity Recognition Guide"}, {"text": "In this Guide, we'll walk you through:\n\n- Introduction to ONNX, the ONNX Model Zoo, Gradio, and Hugging Face Spaces\n- How to set up a Gradio demo for EfficientNet-Lite4\n- How to contribute your own Gradio demos for the ONNX organization on Hugging Face\n\nHere's an [example](https://onnx-efficientnet-lite4.hf.space/) of an ONNX model.\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face", "source_page_title": "Other Tutorials - Gradio And Onnx On Hugging Face Guide"}, {"text": "Open Neural Network Exchange ([ONNX](https://onnx.ai/)) is an open standard format for representing machine learning models. ONNX is supported by a community of partners who have implemented it in many frameworks and tools. For example, if you have trained a model in TensorFlow or PyTorch, you can convert it to ONNX easily, and from there run it on a variety of devices using an engine/compiler like ONNX Runtime.\n\nThe [ONNX Model Zoo](https://github.com/onnx/models) is a collection of pre-trained, state-of-the-art models in the ONNX format contributed by community members. Accompanying each model are Jupyter notebooks for model training and running inference with the trained model. 
The notebooks are written in Python and include links to the training dataset as well as references to the original paper that describes the model architecture.\n\n", "heading1": "What is the ONNX Model Zoo?", "source_page_url": "https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face", "source_page_title": "Other Tutorials - Gradio And Onnx On Hugging Face Guide"}, {"text": "Gradio\n\nGradio lets users demo their machine learning models as a web app, all in Python code. Gradio wraps a Python function into a user interface, and the demos can be launched inside Jupyter notebooks and Colab notebooks, as well as embedded in your own website and hosted on Hugging Face Spaces for free.\n\nGet started [here](https://gradio.app/getting_started)\n\nHugging Face Spaces\n\nHugging Face Spaces is a free hosting option for Gradio demos. Spaces comes with 3 SDK options: Gradio, Streamlit and Static HTML demos. Spaces can be public or private and the workflow is similar to GitHub repos. There are over 2,000 Spaces currently on Hugging Face. Learn more about Spaces [here](https://huggingface.co/spaces/launch).\n\nHugging Face Models\n\nHugging Face Model Hub also supports ONNX models, which can be filtered through the [ONNX tag](https://huggingface.co/models?library=onnx&sort=downloads).\n\n", "heading1": "What are Hugging Face Spaces & Gradio?", "source_page_url": "https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face", "source_page_title": "Other Tutorials - Gradio And Onnx On Hugging Face Guide"}, {"text": "There are a lot of Jupyter notebooks in the ONNX Model Zoo for users to test models. Previously, users needed to download the models themselves and run those notebooks locally for testing. With Hugging Face, the testing process can be much simpler and more user-friendly. Users can easily try any ONNX Model Zoo model on Hugging Face Spaces and run a quick demo powered by Gradio with ONNX Runtime, all in the cloud without downloading anything locally. Note that there are various runtimes for ONNX, e.g., [ONNX Runtime](https://github.com/microsoft/onnxruntime), [MXNet](https://github.com/apache/incubator-mxnet).\n\n", "heading1": "How did Hugging Face help the ONNX Model Zoo?", "source_page_url": "https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face", "source_page_title": "Other Tutorials - Gradio And Onnx On Hugging Face Guide"}, {"text": "ONNX Runtime is a cross-platform inference and training machine-learning accelerator. It makes live Gradio demos with ONNX Model Zoo models on Hugging Face possible.\n\nONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. For more information please see the [official website](https://onnxruntime.ai/).\n\n", "heading1": "What is the role of ONNX Runtime?", "source_page_url": "https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face", "source_page_title": "Other Tutorials - Gradio And Onnx On Hugging Face Guide"}, {"text": "EfficientNet-Lite 4 is the largest variant and most accurate of the set of EfficientNet-Lite models. It is an integer-only quantized model that produces the highest accuracy of all of the EfficientNet models. 
It achieves 80.4% ImageNet top-1 accuracy, while still running in real-time (e.g. 30ms/image) on a Pixel 4 CPU. To learn more, read the [model card](https://github.com/onnx/models/tree/main/vision/classification/efficientnet-lite4).\n\nHere we walk through setting up an example demo for EfficientNet-Lite4 using Gradio.\n\nFirst, we import our dependencies and download and load the efficientnet-lite4 model from the ONNX Model Zoo. Then we load the labels from the labels_map.txt file. We then set up our preprocessing functions, load the model for inference, and set up the inference function. Finally, the inference function is wrapped into a Gradio interface for a user to interact with. See the full code below.\n\n```python\nimport numpy as np\nimport math\nimport matplotlib.pyplot as plt\nimport cv2\nimport json\nimport gradio as gr\nfrom huggingface_hub import hf_hub_download\nfrom onnx import hub\nimport onnxruntime as ort\n\n# loads ONNX model from ONNX Model Zoo\nmodel = hub.load(\"efficientnet-lite4\")\n# loads the labels text file\nlabels = json.load(open(\"labels_map.txt\", \"r\"))\n\n# sets image file dimensions to 224x224 by resizing and cropping image from center\ndef pre_process_edgetpu(img, dims):\n    output_height, output_width, _ = dims\n    img = resize_with_aspectratio(img, output_height, output_width, inter_pol=cv2.INTER_LINEAR)\n    img = center_crop(img, output_height, output_width)\n    img = np.asarray(img, dtype='float32')\n    # converts jpg pixel value from [0 - 255] to float array [-1.0 - 1.0]\n    img -= [127.0, 127.0, 127.0]\n    img /= [128.0, 128.0, 128.0]\n    return img\n\n# resizes the image with a proportional scale\ndef resize_with_aspectratio(img, out_height, out_width, scale=87.5, inter_pol=cv2.INTER_LINEAR):\n    height, width, _ = img.shape\n    new_height = int(100. * out_he", "heading1": "Setting up a Gradio Demo for EfficientNet-Lite4", "source_page_url": "https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face", "source_page_title": "Other Tutorials - Gradio And Onnx On Hugging Face Guide"}, {"text": "# resizes the image with a proportional scale\ndef resize_with_aspectratio(img, out_height, out_width, scale=87.5, inter_pol=cv2.INTER_LINEAR):\n    height, width, _ = img.shape\n    new_height = int(100. * out_height / scale)\n    new_width = int(100. * out_width / scale)\n    if height > width:\n        w = new_width\n        h = int(new_height * height / width)\n    else:\n        h = new_height\n        w = int(new_width * width / height)\n    img = cv2.resize(img, (w, h), interpolation=inter_pol)\n    return img\n\n# crops the image around the center based on given height and width\ndef center_crop(img, out_height, out_width):\n    height, width, _ = img.shape\n    left = int((width - out_width) / 2)\n    right = int((width + out_width) / 2)\n    top = int((height - out_height) / 2)\n    bottom = int((height + out_height) / 2)\n    img = img[top:bottom, left:right]\n    return img\n\n\nsess = ort.InferenceSession(model)\n\ndef inference(img):\n    img = cv2.imread(img)\n    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n\n    img = pre_process_edgetpu(img, (224, 224, 3))\n\n    img_batch = np.expand_dims(img, axis=0)\n\n    results = sess.run([\"Softmax:0\"], {\"images:0\": img_batch})[0]\n    result = reversed(results[0].argsort()[-5:])\n    resultdic = {}\n    for r in result:\n        resultdic[labels[str(r)]] = float(results[0][r])\n    return resultdic\n\ntitle = \"EfficientNet-Lite4\"\ndescription = \"EfficientNet-Lite 4 is the largest variant and most accurate of the set of EfficientNet-Lite models. It is an integer-only quantized model that produces the highest accuracy of all of the EfficientNet models. 
It achieves 80.4% ImageNet top-1 accuracy, while still running in real-time (e.g. 30ms/image) on a Pixel 4 CPU.\"\nexamples = [['catonnx.jpg']]\ngr.Interface(inference, gr.Image(type=\"filepath\"), \"label\", title=title, description=description, examples=examples).launch()\n```\n\n", "heading1": "Setting up a Gradio Demo for EfficientNet-Lite4", "source_page_url": "https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face", "source_page_title": "Other Tutorials - Gradio And Onnx On Hugging Face Guide"}, {"text": " examples=examples).launch()\n```\n\n", "heading1": "Setting up a Gradio Demo for EfficientNet-Lite4", "source_page_url": "https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face", "source_page_title": "Other Tutorials - Gradio And Onnx On Hugging Face Guide"}, {"text": "- Add your model to the [ONNX Model Zoo](https://github.com/onnx/models/blob/main/.github/PULL_REQUEST_TEMPLATE.md)\n- Create an account on Hugging Face [here](https://huggingface.co/join).\n- To see the list of models left to add to the ONNX organization, refer to the table in the [Models list](https://github.com/onnx/models#models)\n- Add a Gradio Demo under your username; see this [blog post](https://huggingface.co/blog/gradio-spaces) for setting up a Gradio Demo on Hugging Face.\n- Request to join the ONNX Organization [here](https://huggingface.co/onnx).\n- Once approved, transfer the model from your username to the ONNX organization\n- Add a badge for the model in the model table; see examples in the [Models list](https://github.com/onnx/models#models)\n", "heading1": "How to contribute Gradio demos on HF spaces using ONNX models", "source_page_url": "https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face", "source_page_title": "Other Tutorials - Gradio And Onnx On Hugging Face Guide"}, {"text": "Gradio features a built-in theming engine that lets you customize the look and feel of your app. You can choose from a variety of themes, or create your own. To do so, pass the `theme=` kwarg to the `Blocks` or `Interface` constructor. For example:\n\n```python\nwith gr.Blocks(theme=gr.themes.Soft()) as demo:\n    ...\n```\n\n
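The same `theme=` kwarg also works on `gr.Interface`. A minimal sketch (the greeting function here is just an illustrative stand-in):

```python
import gradio as gr

# A minimal sketch: Interface accepts the same theme= kwarg as Blocks.
def greet(name):
    return f"Hello, {name}!"

demo = gr.Interface(fn=greet, inputs="text", outputs="text", theme=gr.themes.Glass())
demo.launch()
```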
\n\nGradio comes with a set of prebuilt themes which you can load from `gr.themes.*`. These are:\n\n\n* `gr.themes.Base()` - the `\"base\"` theme sets the primary color to blue but otherwise has minimal styling, making it particularly useful as a base for creating new, custom themes.\n* `gr.themes.Default()` - the `\"default\"` Gradio 5 theme, with a vibrant orange primary color and gray secondary color.\n* `gr.themes.Origin()` - the `\"origin\"` theme is most similar to Gradio 4 styling. Colors, especially in light mode, are more subdued than the Gradio 5 default theme.\n* `gr.themes.Citrus()` - the `\"citrus\"` theme uses a yellow primary color, highlights form elements that are in focus, and includes fun 3D effects when buttons are clicked.\n* `gr.themes.Monochrome()` - the `\"monochrome\"` theme uses a black primary and white secondary color, and uses serif-style fonts, giving the appearance of a black-and-white newspaper. \n* `gr.themes.Soft()` - the `\"soft\"` theme uses a purple primary color and white secondary color. It also increases the border radius around buttons and form elements and highlights labels.\n* `gr.themes.Glass()` - the `\"glass\"` theme has a blue primary color and a translucent gray secondary color. The theme also uses vertical gradients to create a glassy effect.\n* `gr.themes.Ocean()` - the `\"ocean\"` theme has a blue-green primary color and gray secondary color. The theme also uses horizontal gradients, especially for buttons and some form elements.\n\n\nEach of these themes sets values for hundreds of CSS variables. You can use preb", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/theming-guide", "source_page_title": "Other Tutorials - Theming Guide Guide"}, {"text": "lor and gray secondary color. The theme also uses horizontal gradients, especially for buttons and some form elements.\n\n\nEach of these themes sets values for hundreds of CSS variables. You can use prebuilt themes as a starting point for your own custom themes, or you can create your own themes from scratch. Let's take a look at each approach.\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/theming-guide", "source_page_title": "Other Tutorials - Theming Guide Guide"}, {"text": "The easiest way to build a theme is using the Theme Builder. To launch the Theme Builder locally, run the following code:\n\n```python\nimport gradio as gr\n\ngr.themes.builder()\n```\n\n$demo_theme_builder\n\nYou can use the Theme Builder running on Spaces above, though it runs much faster when you launch it locally via `gr.themes.builder()`.\n\nAs you edit the values in the Theme Builder, the app will preview updates in real time. You can download the code to generate the theme you've created so you can use it in any Gradio app.\n\nIn the rest of the guide, we will cover building themes programmatically.\n\n", "heading1": "Using the Theme Builder", "source_page_url": "https://gradio.app/guides/theming-guide", "source_page_title": "Other Tutorials - Theming Guide Guide"}, {"text": "Although each theme has hundreds of CSS variables, the values for most of these variables are drawn from 8 core variables which can be set through the constructor of each prebuilt theme. Modifying these 8 arguments allows you to quickly change the look and feel of your app.\n\nCore Colors\n\nThe first 3 constructor arguments set the colors of the theme and are `gradio.themes.Color` objects. 
Internally, these Color objects hold brightness values for the palette of a single hue, ranging from 50, 100, 200..., 800, 900, 950. Other CSS variables are derived from these 3 colors.\n\nThe 3 color constructor arguments are:\n\n- `primary_hue`: This is the color that draws attention in your theme. In the default theme, this is set to `gradio.themes.colors.orange`.\n- `secondary_hue`: This is the color that is used for secondary elements in your theme. In the default theme, this is set to `gradio.themes.colors.blue`.\n- `neutral_hue`: This is the color that is used for text and other neutral elements in your theme. In the default theme, this is set to `gradio.themes.colors.gray`.\n\nYou could modify these values using their string shortcuts, such as\n\n```python\nwith gr.Blocks(theme=gr.themes.Default(primary_hue=\"red\", secondary_hue=\"pink\")) as demo:\n    ...\n```\n\nor you could use the `Color` objects directly, like this:\n\n```python\nwith gr.Blocks(theme=gr.themes.Default(primary_hue=gr.themes.colors.red, secondary_hue=gr.themes.colors.pink)) as demo:\n    ...\n```\n\n
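Beyond the predefined palettes listed below, you can also construct a palette of your own with `gradio.themes.Color`. A hedged sketch (the hex values here are illustrative; the `c50` to `c950` arguments follow the brightness scale described above):

```python
import gradio as gr

# A sketch of a custom palette: one hex value per brightness stop,
# from c50 (lightest) to c950 (darkest). Hex values are illustrative.
forest = gr.themes.Color(
    c50="#e8f5e9", c100="#c8e6c9", c200="#a5d6a7", c300="#81c784",
    c400="#66bb6a", c500="#4caf50", c600="#43a047", c700="#388e3c",
    c800="#2e7d32", c900="#1b5e20", c950="#0d3614",
)

with gr.Blocks(theme=gr.themes.Default(primary_hue=forest)) as demo:
    ...
```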
\n\nPredefined colors are:\n\n- `slate`\n- `gray`\n- `zinc`\n- `neutral`\n- `stone`\n- `red`\n- `orange`\n- `amber`\n- `yellow`\n- `lime`\n- `green`\n- `emerald`\n- `teal`\n- `cyan`\n- `sky`\n- `blue`\n- `indigo`\n- `violet`\n- `purple`\n- `fuchsia`\n- `pink`\n- `rose`\n\nYou could also create your own custom `Color` objects and pass them in.\n\nCore Sizing\n\nThe next 3 constructor arguments set the sizing of the theme and are `gradio.", "heading1": "Extending Themes via the Constructor", "source_page_url": "https://gradio.app/guides/theming-guide", "source_page_title": "Other Tutorials - Theming Guide Guide"}, {"text": "`\n- `fuchsia`\n- `pink`\n- `rose`\n\nYou could also create your own custom `Color` objects and pass them in.\n\nCore Sizing\n\nThe next 3 constructor arguments set the sizing of the theme and are `gradio.themes.Size` objects. Internally, these Size objects hold pixel size values that range from `xxs` to `xxl`. Other CSS variables are derived from these 3 sizes.\n\n- `spacing_size`: This sets the padding within and spacing between elements. In the default theme, this is set to `gradio.themes.sizes.spacing_md`.\n- `radius_size`: This sets the roundedness of corners of elements. In the default theme, this is set to `gradio.themes.sizes.radius_md`.\n- `text_size`: This sets the font size of text. In the default theme, this is set to `gradio.themes.sizes.text_md`.\n\nYou could modify these values using their string shortcuts, such as\n\n```python\nwith gr.Blocks(theme=gr.themes.Default(spacing_size=\"sm\", radius_size=\"none\")) as demo:\n ...\n```\n\nor you could use the `Size` objects directly, like this:\n\n```python\nwith gr.Blocks(theme=gr.themes.Default(spacing_size=gr.themes.sizes.spacing_sm, radius_size=gr.themes.sizes.radius_none)) as demo:\n ...\n```\n\n
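Likewise, you can construct a `Size` object of your own if the predefined ones below don't fit. A hedged sketch (the pixel values are illustrative; the `xxs` to `xxl` arguments follow the scale described above):

```python
import gradio as gr

# A sketch of a custom radius scale: one CSS size per stop, from xxs to xxl.
# The values below are illustrative.
tight_radius = gr.themes.Size(
    xxs="1px", xs="2px", sm="3px", md="4px", lg="6px", xl="8px", xxl="12px",
)

with gr.Blocks(theme=gr.themes.Default(radius_size=tight_radius)) as demo:
    ...
```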
\n\nThe predefined size objects are:\n\n- `radius_none`\n- `radius_sm`\n- `radius_md`\n- `radius_lg`\n- `spacing_sm`\n- `spacing_md`\n- `spacing_lg`\n- `text_sm`\n- `text_md`\n- `text_lg`\n\nYou could also create your own custom `Size` objects and pass them in.\n\nCore Fonts\n\nThe final 2 constructor arguments set the fonts of the theme. You can pass a list of fonts to each of these arguments to specify fallbacks. If you provide a string, it will be loaded as a system font. If you provide a `gradio.themes.GoogleFont`, the font will be loaded from Google Fonts.\n\n- `font`: This sets the primary font of the theme. In the default theme, this is set to `gradio.themes.GoogleFont(\"IBM Plex Sans\")`.\n- `font_mono`: This sets th", "heading1": "Extending Themes via the Constructor", "source_page_url": "https://gradio.app/guides/theming-guide", "source_page_title": "Other Tutorials - Theming Guide Guide"}, {"text": "font will be loaded from Google Fonts.\n\n- `font`: This sets the primary font of the theme. In the default theme, this is set to `gradio.themes.GoogleFont(\"IBM Plex Sans\")`.\n- `font_mono`: This sets the monospace font of the theme. In the default theme, this is set to `gradio.themes.GoogleFont(\"IBM Plex Mono\")`.\n\nYou could modify these values such as the following:\n\n```python\nwith gr.Blocks(theme=gr.themes.Default(font=[gr.themes.GoogleFont(\"Inconsolata\"), \"Arial\", \"sans-serif\"])) as demo:\n ...\n```\n\n
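Putting the constructor arguments together, here is a hedged sketch that combines the core colors, sizes, and fonts covered above (the specific choices are illustrative):

```python
import gradio as gr

# A sketch combining the eight core constructor arguments described above.
theme = gr.themes.Default(
    primary_hue="emerald",
    secondary_hue="blue",
    neutral_hue="slate",
    spacing_size="sm",
    radius_size="lg",
    text_size="md",
    font=[gr.themes.GoogleFont("Quicksand"), "ui-sans-serif", "sans-serif"],
    font_mono=[gr.themes.GoogleFont("IBM Plex Mono"), "ui-monospace", "monospace"],
)

with gr.Blocks(theme=theme) as demo:
    ...
```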
\n\n", "heading1": "Extending Themes via the Constructor", "source_page_url": "https://gradio.app/guides/theming-guide", "source_page_title": "Other Tutorials - Theming Guide Guide"}, {"text": "You can also modify the values of CSS variables after the theme has been loaded. To do so, use the `.set()` method of the theme object to get access to the CSS variables. For example:\n\n```python\ntheme = gr.themes.Default(primary_hue=\"blue\").set(\n    loader_color=\"#FF0000\",\n    slider_color=\"#FF0000\",\n)\n\nwith gr.Blocks(theme=theme) as demo:\n    ...\n```\n\nIn the example above, we've set the `loader_color` and `slider_color` variables to `#FF0000`, despite the overall `primary_color` using the blue color palette. You can set any CSS variable that is defined in the theme in this manner.\n\nYour IDE type hinting should help you navigate these variables. Since there are so many CSS variables, let's take a look at how these variables are named and organized.\n\nCSS Variable Naming Conventions\n\nCSS variable names can get quite long, like `button_primary_background_fill_hover_dark`! However, they follow a common naming convention that makes it easy to understand what they do and to find the variable you're looking for. Separated by underscores, the variable name is made up of:\n\n1. The target element, such as `button`, `slider`, or `block`.\n2. The target element type or sub-element, such as `button_primary`, or `block_label`.\n3. The property, such as `button_primary_background_fill`, or `block_label_border_width`.\n4. Any relevant state, such as `button_primary_background_fill_hover`.\n5. If the value is different in dark mode, the suffix `_dark`. For example, `input_border_color_focus_dark`.\n\nOf course, many CSS variable names are shorter than this, such as `table_border_color`, or `input_shadow`.\n\nCSS Variable Organization\n\nThough there are hundreds of CSS variables, they do not all have to have individual values. They draw their values by referencing a set of core variables and referencing each other. This allows us to only have to modify a few variables to change the look and feel of the entire theme, while also getting finer control of individual elements that we may wan", "heading1": "Extending Themes via `.set()`", "source_page_url": "https://gradio.app/guides/theming-guide", "source_page_title": "Other Tutorials - Theming Guide Guide"}, {"text": "d referencing each other. This allows us to only have to modify a few variables to change the look and feel of the entire theme, while also getting finer control of individual elements that we may want to modify.\n\nReferencing Core Variables\n\nTo reference one of the core constructor variables, precede the variable name with an asterisk. To reference a core color, use the `*primary_`, `*secondary_`, or `*neutral_` prefix, followed by the brightness value. For example:\n\n```python\ntheme = gr.themes.Default(primary_hue=\"blue\").set(\n    button_primary_background_fill=\"*primary_200\",\n    button_primary_background_fill_hover=\"*primary_300\",\n)\n```\n\nIn the example above, we've set the `button_primary_background_fill` and `button_primary_background_fill_hover` variables to `*primary_200` and `*primary_300`. These variables will be set to the 200 and 300 brightness values of the blue primary color palette, respectively.\n\nSimilarly, to reference a core size, use the `*spacing_`, `*radius_`, or `*text_` prefix, followed by the size value. 
For example:\n\n```python\ntheme = gr.themes.Default(radius_size=\"md\").set(\n    button_primary_border_radius=\"*radius_xl\",\n)\n```\n\nIn the example above, we've set the `button_primary_border_radius` variable to `*radius_xl`. This variable will be set to the `xl` setting of the medium radius size range.\n\nReferencing Other Variables\n\nVariables can also reference each other. For example, look at the example below:\n\n```python\ntheme = gr.themes.Default().set(\n    button_primary_background_fill=\"#FF0000\",\n    button_primary_background_fill_hover=\"#FF0000\",\n    button_primary_border=\"#FF0000\",\n)\n```\n\nHaving to set these values to a common color is a bit tedious. Instead, we can reference the `button_primary_background_fill` variable in the `button_primary_background_fill_hover` and `button_primary_border` variables, using a `*` prefix.\n\n```python\ntheme = gr.themes.Default().set(\n    button_primary_background_fill=\"#FF0000\",\n    button_primary_back", "heading1": "Extending Themes via `.set()`", "source_page_url": "https://gradio.app/guides/theming-guide", "source_page_title": "Other Tutorials - Theming Guide Guide"}, {"text": "mary_background_fill_hover` and `button_primary_border` variables, using a `*` prefix.\n\n```python\ntheme = gr.themes.Default().set(\n    button_primary_background_fill=\"#FF0000\",\n    button_primary_background_fill_hover=\"*button_primary_background_fill\",\n    button_primary_border=\"*button_primary_background_fill\",\n)\n```\n\nNow, if we change the `button_primary_background_fill` variable, the `button_primary_background_fill_hover` and `button_primary_border` variables will automatically update as well.\n\nThis is particularly useful if you intend to share your theme - it makes it easy to modify the theme without having to change every variable.\n\nNote that dark mode variables automatically reference each other. For example:\n\n```python\ntheme = gr.themes.Default().set(\n    button_primary_background_fill=\"#FF0000\",\n    button_primary_background_fill_dark=\"#AAAAAA\",\n    button_primary_border=\"*button_primary_background_fill\",\n    button_primary_border_dark=\"*button_primary_background_fill_dark\",\n)\n```\n\n`button_primary_border_dark` will draw its value from `button_primary_background_fill_dark`, because dark mode always draws from the dark version of the variable.\n\n", "heading1": "Extending Themes via `.set()`", "source_page_url": "https://gradio.app/guides/theming-guide", "source_page_title": "Other Tutorials - Theming Guide Guide"}, {"text": "Let's say you want to create a theme from scratch! We'll go through it step by step - you can also see the source of prebuilt themes in the gradio source repo for reference - [here's the source](https://github.com/gradio-app/gradio/blob/main/gradio/themes/monochrome.py) for the Monochrome theme.\n\nOur new theme class will inherit from `gradio.themes.Base`, a theme that sets a lot of convenient defaults. Let's make a simple demo that creates a dummy theme called Seafoam, and make a simple app that uses it.\n\n$code_theme_new_step_1\n\n
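In outline, this first step is just an empty subclass of `Base` (a sketch; the components in the demo app are illustrative):

```python
import gradio as gr

# Step 1 in outline: an empty subclass of Base is already a usable theme.
class Seafoam(gr.themes.Base):
    pass

seafoam = Seafoam()

with gr.Blocks(theme=seafoam) as demo:
    gr.Textbox(label="Name")
    gr.Button("Hello")

demo.launch()
```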
\n\nThe Base theme is very barebones, and uses `gr.themes.Blue` as its primary color - you'll note the primary button and the loading animation are both blue as a result. Let's change the default core arguments of our app. We'll override the constructor and pass new defaults for the core constructor arguments.\n\nWe'll use `gr.themes.Emerald` as our primary color, and set secondary and neutral hues to `gr.themes.Blue`. We'll make our text larger using `text_lg`. We'll use `Quicksand` as our default font, loaded from Google Fonts.\n\n$code_theme_new_step_2\n\n
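In outline, this second step overrides the constructor with the defaults just described (a sketch):

```python
import gradio as gr

# Step 2 in outline: override the constructor defaults described above:
# Emerald primary, Blue secondary/neutral hues, larger text, Quicksand font.
class Seafoam(gr.themes.Base):
    def __init__(self):
        super().__init__(
            primary_hue=gr.themes.colors.emerald,
            secondary_hue=gr.themes.colors.blue,
            neutral_hue=gr.themes.colors.blue,
            text_size=gr.themes.sizes.text_lg,
            font=[gr.themes.GoogleFont("Quicksand"), "ui-sans-serif", "sans-serif"],
        )
```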
\n\nSee how the primary button and the loading animation are now green? These CSS variables are tied to the `primary_hue` variable.\n\nLet's modify the theme a bit more directly. We'll call the `set()` method to overwrite CSS variable values explicitly. We can use any CSS logic, and reference our core constructor arguments using the `*` prefix.\n\n$code_theme_new_step_3\n\n
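In outline, this third step chains `.set()` onto the theme, mixing plain CSS values with `*` references back to the core arguments (a sketch; the variable values below are illustrative, not the guide's exact ones):

```python
# Step 3 in outline: explicit .set() overrides on the Seafoam theme sketched
# above. Values are illustrative; note the CSS gradient and the * references.
seafoam = Seafoam().set(
    body_background_fill="*primary_50",
    button_primary_background_fill="linear-gradient(90deg, *primary_300, *secondary_400)",
    button_primary_background_fill_hover="*primary_200",
    slider_color="*secondary_300",
)
```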
\n\nLook how fun our theme looks now! With just a few variable changes, our theme looks completely different.\n\nYou may find it helpful to explore the [source code ", "heading1": "Creating a Full Theme", "source_page_url": "https://gradio.app/guides/theming-guide", "source_page_title": "Other Tutorials - Theming Guide Guide"}, {"text": "Look how fun our theme looks now! With just a few variable changes, our theme looks completely different.\n\nYou may find it helpful to explore the [source code of the other prebuilt themes](https://github.com/gradio-app/gradio/blob/main/gradio/themes) to see how they modified the base theme. You can also find your browser's Inspector useful to select elements from the UI and see what CSS variables are being used in the styles panel.\n\n", "heading1": "Creating a Full Theme", "source_page_url": "https://gradio.app/guides/theming-guide", "source_page_title": "Other Tutorials - Theming Guide Guide"}, {"text": "Once you have created a theme, you can upload it to the HuggingFace Hub to let others view it, use it, and build off of it!\n\nUploading a Theme\n\nThere are two ways to upload a theme: via the theme class instance or the command line. We will cover both of them with the previously created `seafoam` theme.\n\n- Via the class instance\n\nEach theme instance has a method called `push_to_hub` that we can use to upload a theme to the HuggingFace hub.\n\n```python\nseafoam.push_to_hub(repo_name=\"seafoam\",\n                    version=\"0.0.1\",\n                    hf_token=\"\")\n```\n\n- Via the command line\n\nFirst, save the theme to disk:\n\n```python\nseafoam.dump(filename=\"seafoam.json\")\n```\n\nThen use the `upload_theme` command:\n\n```bash\nupload_theme \\\n\"seafoam.json\" \\\n\"seafoam\" \\\n--version \"0.0.1\" \\\n--hf_token \"\"\n```\n\nIn order to upload a theme, you must have a HuggingFace account and pass your [Access Token](https://huggingface.co/docs/huggingface_hub/quick-start#login)\nas the `hf_token` argument. However, if you log in via the [HuggingFace command line](https://huggingface.co/docs/huggingface_hub/quick-start#login) (which comes installed with `gradio`),\nyou can omit the `hf_token` argument.\n\nThe `version` argument lets you specify a valid [semantic version](https://www.geeksforgeeks.org/introduction-semantic-versioning/) string for your theme.\nThat way your users are able to specify which version of your theme they want to use in their apps. This also lets you publish updates to your theme without worrying\nabout changing how previously created apps look. The `version` argument is optional. If omitted, the next patch version is automatically applied.\n\nTheme Previews\n\nBy calling `push_to_hub` or `upload_theme`, the theme assets will be stored in a [HuggingFace space](https://huggingface.co/docs/hub/spaces-overview).\n\nFor example, the theme preview for the calm seafoam theme is here: [calm seafoam preview](https://huggingface.co/spaces/shivalikasingh/calm_seafoam).\n\n
\n\nDiscovering Themes\n\nThe [Theme Gallery](https://huggingface.co/spaces/gradio/theme-gallery) shows all the public Gradio themes. After publishing your theme,\nit will automatically show up in the theme gallery after a couple of minutes.\n\nYou can sort the themes by the number of likes on the Space or from most to least recently created, as well as toggle themes between light and dark mode.\n\n
\n\nDownloading\n\nTo use a theme from the hub, use the `from_hub` method on the `ThemeClass` and pass it to your app:\n\n```python\nmy_theme = gr.Theme.from_hub(\"gradio/seafoam\")\n\nwith gr.Blocks(theme=my_theme) as demo:\n    ...\n```\n\nYou can also pass the theme string directly to `Blocks` or `Interface` (`gr.Blocks(theme=\"gradio/seafoam\")`).\n\nYou can pin your app to an upstream theme version by using semantic versioning expressions.\n\nFor example, the following would ensure the theme we load from the `seafoam` repo was between versions `0.0.1` and `0.1.0`:\n\n```python\nwith gr.Blocks(theme=\"gradio/seafoam@>=0.0.1,<0.1.0\") as demo:\n    ...\n```\n\nEnjoy creating your own themes! If you make one you're proud of, please share it with the world by uploading it to the hub!\nIf you tag us on [Twitter](https://twitter.com/gradio) we can give your theme a shout out!\n\n\n", "heading1": "Sharing Themes", "source_page_url": "https://gradio.app/guides/theming-guide", "source_page_title": "Other Tutorials - Theming Guide Guide"}, {"text": "In this Guide, we'll walk you through:\n\n- Introduction to Gradio, Hugging Face Spaces, and W&B\n- How to set up a Gradio demo using the W&B integration for JoJoGAN\n- How to contribute your own Gradio demos, after tracking your experiments on W&B, to the W&B organization on Hugging Face\n\n\n", "heading1": "Introduction", "source_page_url": "https://gradio.app/guides/Gradio-and-Wandb-Integration", "source_page_title": "Other Tutorials - Gradio And Wandb Integration Guide"}, {"text": "Weights and Biases (W&B) allows data scientists and machine learning scientists to track their machine learning experiments at every stage, from training to production. Any metric can be aggregated over samples and shown in panels in a customizable and searchable dashboard.\n\n", "heading1": "What is Wandb?", "source_page_url": "https://gradio.app/guides/Gradio-and-Wandb-Integration", "source_page_title": "Other Tutorials - Gradio And Wandb Integration Guide"}, {"text": "Gradio\n\nGradio lets users demo their machine learning models as a web app, all in a few lines of Python. Gradio wraps any Python function (such as a machine learning model's inference function) into a user interface and the demos can be launched inside Jupyter notebooks and Colab notebooks, as well as embedded in your own website and hosted on Hugging Face Spaces for free.\n\nGet started [here](https://gradio.app/getting_started)\n\nHugging Face Spaces\n\nHugging Face Spaces is a free hosting option for Gradio demos. Spaces comes with 3 SDK options: Gradio, Streamlit and Static HTML demos. Spaces can be public or private and the workflow is similar to GitHub repos. There are over 2,000 Spaces currently on Hugging Face. Learn more about Spaces [here](https://huggingface.co/spaces/launch).\n\n", "heading1": "What are Hugging Face Spaces & Gradio?", "source_page_url": "https://gradio.app/guides/Gradio-and-Wandb-Integration", "source_page_title": "Other Tutorials - Gradio And Wandb Integration Guide"}, {"text": "Now, let's walk you through how to do this on your own. We'll make the assumption that you're new to W&B and Gradio for the purposes of this tutorial.\n\nLet's get started!\n\n1. 
Create a W&B account\n\n   Follow [these quick instructions](https://app.wandb.ai/login) to create your free account if you don\u2019t have one already. It shouldn't take more than a couple of minutes. Once you're done (or if you've already got an account), next, we'll run a quick Colab notebook.\n\n2. Open Colab and Install Gradio and W&B\n\n   We'll be following along with the Colab notebook provided in the JoJoGAN repo with some minor modifications to use W&B and Gradio more effectively.\n\n   [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mchong6/JoJoGAN/blob/main/stylize.ipynb)\n\n   Install Gradio and W&B at the top:\n\n   ```sh\n   pip install gradio wandb\n   ```\n\n3. Finetune StyleGAN and W&B experiment tracking\n\n   This next step will open a W&B dashboard to track your experiments and a Gradio panel showing pretrained models to choose from a drop-down menu from a Gradio Demo hosted on Hugging Face Spaces. Here's the code you need for that:\n\n   ```python\n   alpha = 1.0\n   alpha = 1-alpha\n\n   preserve_color = True\n   num_iter = 100\n   log_interval = 50\n\n   samples = []\n   column_names = [\"Reference (y)\", \"Style Code(w)\", \"Real Face Image(x)\"]\n\n   wandb.init(project=\"JoJoGAN\")\n   config = wandb.config\n   config.num_iter = num_iter\n   config.preserve_color = preserve_color\n   wandb.log(\n       {\"Style reference\": [wandb.Image(transforms.ToPILImage()(target_im))]},\n       step=0)\n\n   # load discriminator for perceptual loss\n   discriminator = Discriminator(1024, 2).eval().to(device)\n   ckpt = torch.load('models/stylegan2-ffhq-config-f.pt', map_location=lambda storage, loc: storage)\n   discriminator.load_state_dict(ckpt[\"d\"], strict=False)\n\n   # reset generator\n   del generator\n   generator = deepcopy(original_generator)\n\n   g_optim = optim.Adam(generator.parameters(),", "heading1": "Setting up a Gradio Demo for JoJoGAN", "source_page_url": "https://gradio.app/guides/Gradio-and-Wandb-Integration", "source_page_title": "Other Tutorials - Gradio And Wandb Integration Guide"}, {"text": ": storage)\n   discriminator.load_state_dict(ckpt[\"d\"], strict=False)\n\n   # reset generator\n   del generator\n   generator = deepcopy(original_generator)\n\n   g_optim = optim.Adam(generator.parameters(), lr=2e-3, betas=(0, 0.99))\n\n   # Which layers to swap for generating a family of plausible real images -> fake image\n   if preserve_color:\n       id_swap = [9,11,15,16,17]\n   else:\n       id_swap = list(range(7, generator.n_latent))\n\n   for idx in tqdm(range(num_iter)):\n       mean_w = generator.get_latent(torch.randn([latents.size(0), latent_dim]).to(device)).unsqueeze(1).repeat(1, generator.n_latent, 1)\n       in_latent = latents.clone()\n       in_latent[:, id_swap] = alpha*latents[:, id_swap] + (1-alpha)*mean_w[:, id_swap]\n\n       img = generator(in_latent, input_is_latent=True)\n\n       with torch.no_grad():\n           real_feat = discriminator(targets)\n       fake_feat = discriminator(img)\n\n       loss = sum([F.l1_loss(a, b) for a, b in zip(fake_feat, real_feat)])/len(fake_feat)\n\n       wandb.log({\"loss\": loss}, step=idx)\n       if idx % log_interval == 0:\n           generator.eval()\n           my_sample = generator(my_w, input_is_latent=True)\n           generator.train()\n           my_sample = transforms.ToPILImage()(utils.make_grid(my_sample, normalize=True, range=(-1, 1)))\n           wandb.log(\n               {\"Current stylization\": [wandb.Image(my_sample)]},\n               step=idx)\n           table_data = [\n               wandb.Image(transforms.ToPILImage()(target_im)),\n               wandb.Image(img),\n               wandb.Image(my_sample),\n           ]\n           samples.append(table_data)\n\n       g_optim.zero_grad()\n       loss.backward()\n       g_optim.step()\n\n   out_table = 
wandb.Table(data=samples, columns=column_names)\n   wandb.log({\"Current Samples\": out_table})\n   ```\n4. Save, Download, and Load Model\n\n   Here's how to save and download your model.\n\n   ```python\n   from PIL import Image\n   import torch\n   torch.backends.cudnn.benchmark = True\n   from torchvision impor", "heading1": "Setting up a Gradio Demo for JoJoGAN", "source_page_url": "https://gradio.app/guides/Gradio-and-Wandb-Integration", "source_page_title": "Other Tutorials - Gradio And Wandb Integration Guide"}, {"text": "ave, Download, and Load Model\n\n   Here's how to save and download your model.\n\n   ```python\n   from PIL import Image\n   import torch\n   torch.backends.cudnn.benchmark = True\n   from torchvision import transforms, utils\n   from util import *\n   import math\n   import random\n   import numpy as np\n   import matplotlib.pyplot as plt\n   from torch import nn, autograd, optim\n   from torch.nn import functional as F\n   from tqdm import tqdm\n   import lpips\n   from model import *\n   from e4e_projection import projection as e4e_projection\n   \n   from copy import deepcopy\n   import imageio\n   \n   import os\n   import sys\n   import torchvision.transforms as transforms\n   from argparse import Namespace\n   from e4e.models.psp import pSp\n   from util import *\n   from huggingface_hub import hf_hub_download\n   from google.colab import files\n   \n   torch.save({\"g\": generator.state_dict()}, \"your-model-name.pt\")\n   \n   files.download('your-model-name.pt')\n   \n   latent_dim = 512\n   device = \"cuda\"\n   model_path_s = hf_hub_download(repo_id=\"akhaliq/jojogan-stylegan2-ffhq-config-f\", filename=\"stylegan2-ffhq-config-f.pt\")\n   original_generator = Generator(1024, latent_dim, 8, 2).to(device)\n   ckpt = torch.load(model_path_s, map_location=lambda storage, loc: storage)\n   original_generator.load_state_dict(ckpt[\"g_ema\"], strict=False)\n   mean_latent = original_generator.mean_latent(10000)\n   \n   generator = deepcopy(original_generator)\n   \n   ckpt = torch.load(\"/content/JoJoGAN/your-model-name.pt\", map_location=lambda storage, loc: storage)\n   generator.load_state_dict(ckpt[\"g\"], strict=False)\n   generator.eval()\n   \n   plt.rcParams['figure.dpi'] = 150\n   \n   transform = transforms.Compose(\n       [\n           transforms.Resize((1024, 1024)),\n           transforms.ToTensor(),\n           transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),\n       ]\n   )\n   \n   def inference(img):\n       img.save('out.jpg')\n       aligned_face = align_face('out.jpg')\n   \n       my_w = e4e_projection(aligned_face, \"out.pt\", device).unsqueeze(0)", "heading1": "Setting up a Gradio Demo for JoJoGAN", "source_page_url": "https://gradio.app/guides/Gradio-and-Wandb-Integration", "source_page_title": "Other Tutorials - Gradio And Wandb Integration Guide"}, {"text": ".5, 0.5)),\n       ]\n   )\n   \n   def inference(img):\n       img.save('out.jpg')\n       aligned_face = align_face('out.jpg')\n   \n       my_w = e4e_projection(aligned_face, \"out.pt\", device).unsqueeze(0)\n       with torch.no_grad():\n           my_sample = generator(my_w, input_is_latent=True)\n   \n       npimage = my_sample[0].cpu().permute(1, 2, 0).detach().numpy()\n       imageio.imwrite('filename.jpeg', npimage)\n       return 'filename.jpeg'\n   ```\n\n5. Build a Gradio Demo\n\n   ```python\n   import gradio as gr\n   \n   title = \"JoJoGAN\"\n   description = \"Gradio Demo for JoJoGAN: One Shot Face Stylization. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below.\"\n   \n   demo = gr.Interface(\n       inference,\n       gr.Image(type=\"pil\"),\n       gr.Image(type=\"filepath\"),\n       title=title,\n       description=description\n   )\n   \n   demo.launch(share=True)\n   ```\n\n6. 
Integrate Gradio into your W&B Dashboard\n\n   The last step\u2014integrating your Gradio demo with your W&B dashboard\u2014is just one extra line:\n\n   ```python\n   demo.integrate(wandb=wandb)\n   ```\n\n   Once you call `integrate`, a demo will be created and you can embed it into your dashboard or report.\n\n   Outside of W&B, anyone can embed Gradio demos hosted on HF Spaces directly into their blogs, websites, documentation, etc. with Web Components, using the `gradio-app` tag:\n   \n   ```html\n   \n   ```\n\n7. (Optional) Embed W&B plots in your Gradio App\n\n   It's also possible to embed W&B plots within Gradio apps. To do so, you can create a W&B Report of your plots and\n   embed them within your Gradio app within a `gr.HTML` block.\n\n   The Report will need to be public and you will need to wrap the URL within an iFrame like this:\n\n   ```python\n   import gradio as gr\n   \n   def wandb_report(url):\n       iframe = f'