This will create a new route `/add_and_slice`, which will show up in the "view API" page. It can be called programmatically by the Python or JS clients (discussed below) like this:

```py
from gradio_client import Client

client = Client(url)
result = client.predict(
    a=3,
    b=5,
    c=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    api_name="/add_and_slice"
)
print(result)
```
Configuring the API Page
https://gradio.app/guides/view-api-page
Additional Features - View Api Page Guide
This API page not only lists all of the endpoints that can be used to query the Gradio app, but also shows the usage of both [the Gradio Python client](https://gradio.app/guides/getting-started-with-the-python-client/), and [the Gradio JavaScript client](https://gradio.app/guides/getting-started-with-the-js-client/). For each endpoint, Gradio automatically generates a complete code snippet with the parameters and their types, as well as example inputs, allowing you to immediately test an endpoint. Here's an example showing an image file input and `str` output: ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api-snippet.png)
The Clients
https://gradio.app/guides/view-api-page
Additional Features - View Api Page Guide
Instead of reading through the view API page, you can also use Gradio's built-in API recorder to generate the relevant code snippet. Simply click on the "API Recorder" button, use your Gradio app via the UI as you would normally, and the API Recorder will generate the code needed to recreate all of your interactions programmatically using the clients. ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/api-recorder.gif)
The API Recorder 🪄
https://gradio.app/guides/view-api-page
Additional Features - View Api Page Guide
The API page also includes instructions on how to use the Gradio app as a Model Context Protocol (MCP) server, which is a standardized way to expose functions as tools so that they can be used by LLMs. ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api-mcp.png) For the MCP server, each tool, its description, and its parameters are listed, along with instructions on how to integrate with popular MCP clients. Read more about Gradio's [MCP integration here](https://www.gradio.app/guides/building-mcp-server-with-gradio).
MCP Server
https://gradio.app/guides/view-api-page
Additional Features - View Api Page Guide
You can access the complete OpenAPI (formerly Swagger) specification of your Gradio app's API at the endpoint `<your-gradio-app-url>/gradio_api/openapi.json`. The OpenAPI specification is a standardized, language-agnostic interface description for REST APIs that enables both humans and computers to discover and understand the capabilities of your service.
OpenAPI Specification
https://gradio.app/guides/view-api-page
Additional Features - View Api Page Guide
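As a quick illustration, here is one way to fetch and inspect that specification programmatically. This is a minimal sketch, assuming your app is running locally on the default port; `requests` is used for convenience and the URL is hypothetical:

```python
import requests

# Assumed local URL; substitute your own app's address
url = "http://127.0.0.1:7860/gradio_api/openapi.json"
spec = requests.get(url).json()

print(spec["info"])          # general API metadata
print(list(spec["paths"]))   # the REST endpoints your app exposes
```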
Let's create a demo where a user can choose a filter to apply to their webcam stream. Users can choose from an edge-detection filter, a cartoon filter, or simply flipping the stream vertically. $code_streaming_filter $demo_streaming_filter You will notice that if you change the filter value, it immediately takes effect in the output stream. That is an important difference between stream events and other Gradio events: the input values of a stream can be changed while the stream is being processed. Tip: We set the "streaming" parameter of the image output component to "True". Doing so lets the server automatically convert our output images into base64, a format that is efficient for streaming.
A Realistic Image Demo
https://gradio.app/guides/streaming-inputs
Additional Features - Streaming Inputs Guide
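Since the full demo code appears above only as a placeholder (`$code_streaming_filter`), here is a minimal sketch of the pattern it describes. Only the flip filter is implemented; the other filters, component names, and timing values are illustrative assumptions:

```python
import gradio as gr
import numpy as np

def apply_filter(frame, filter_type):
    # Only the flip filter is implemented here; edge-detection and
    # cartoon filters are left as placeholders
    if filter_type == "flip":
        return np.flipud(frame)
    return frame

with gr.Blocks() as demo:
    filter_type = gr.Radio(["edge", "cartoon", "flip"], value="flip", label="Filter")
    with gr.Row():
        input_img = gr.Image(sources=["webcam"], type="numpy")
        output_img = gr.Image(streaming=True)
    # filter_type is re-read on every tick, so changing it takes
    # effect immediately in the output stream
    input_img.stream(
        apply_filter,
        inputs=[input_img, filter_type],
        outputs=[output_img],
        time_limit=30,
        stream_every=0.1,
    )

demo.launch()
```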
For some image streaming demos, like the one above, we don't need to display separate input and output components. Our app would look cleaner if we could just display the modified output stream. We can do so by just specifying the input image component as the output of the stream event. $code_streaming_filter_unified $demo_streaming_filter_unified
Unified Image Demos
https://gradio.app/guides/streaming-inputs
Additional Features - Streaming Inputs Guide
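Again hedging on the exact placeholder code (`$code_streaming_filter_unified`), the unified variant only changes the event wiring: the webcam component itself is passed as the output. This sketch reuses `apply_filter` and the imports from the sketch above:

```python
with gr.Blocks() as demo:
    filter_type = gr.Radio(["edge", "cartoon", "flip"], value="flip", label="Filter")
    # A single component acts as both the stream source and the display
    img = gr.Image(sources=["webcam"], type="numpy")
    img.stream(
        apply_filter,
        inputs=[img, filter_type],
        outputs=[img],
        time_limit=30,
        stream_every=0.1,
    )

demo.launch()
```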
Your streaming function should be stateless. It should take the current input and return its corresponding output. However, there are cases where you may want to keep track of past inputs or outputs. For example, you may want to keep a buffer of the previous `k` inputs to improve the accuracy of your transcription demo. You can do this with Gradio's `gr.State()` component. Let's showcase this with a sample demo:

```python
import gradio as gr

def transcribe_handler(current_audio, state, transcript):
    # transcribe() is assumed to be your own ASR function
    next_text = transcribe(current_audio, history=state)
    state.append(current_audio)
    state = state[-3:]  # keep a buffer of the last three audio chunks
    return state, transcript + next_text

with gr.Blocks() as demo:
    with gr.Row():
        with gr.Column():
            mic = gr.Audio(sources="microphone")
            state = gr.State(value=[])
        with gr.Column():
            transcript = gr.Textbox(label="Transcript")

    mic.stream(transcribe_handler, [mic, state, transcript], [state, transcript],
               time_limit=10, stream_every=1)

demo.launch()
```
Keeping track of past inputs or outputs
https://gradio.app/guides/streaming-inputs
Additional Features - Streaming Inputs Guide
For an end-to-end example of streaming from the webcam, see the object detection from webcam [guide](/main/guides/object-detection-from-webcam-with-webrtc).
End-to-End Examples
https://gradio.app/guides/streaming-inputs
Additional Features - Streaming Inputs Guide
Client side functions are ideal for updating component properties (like visibility, placeholders, interactive state, or styling). Here's a basic example:

```py
import gradio as gr

with gr.Blocks() as demo:
    with gr.Row() as row:
        btn = gr.Button("Hide this row")

    # This function runs in the browser without a server roundtrip
    btn.click(
        lambda: gr.Row(visible=False),
        None,
        row,
        js=True
    )

demo.launch()
```
When to Use Client Side Functions
https://gradio.app/guides/client-side-functions
Additional Features - Client Side Functions Guide
Client side functions have some important restrictions:

* They can only update component properties (not values)
* They cannot take any inputs

Here are some functions that will work with `js=True`:

```py
# Simple property updates
lambda: gr.Textbox(lines=4)

# Multiple component updates
lambda: [gr.Textbox(lines=4), gr.Button(interactive=False)]

# Using gr.update() for property changes
lambda: gr.update(visible=True, interactive=False)
```

We are working to increase the space of functions that can be transpiled to JavaScript so that they can be run in the browser. [Follow the Groovy library for more info](https://github.com/abidlabs/groovy-transpiler).
Limitations
https://gradio.app/guides/client-side-functions
Additional Features - Client Side Functions Guide
Here's a more complete example showing how client side functions can improve the user experience: $code_todo_list_js
Complete Example
https://gradio.app/guides/client-side-functions
Additional Features - Client Side Functions Guide
When you set `js=True`, Gradio:

1. Transpiles your Python function to JavaScript
2. Runs the function directly in the browser
3. Still sends the request to the server (for consistency and to handle any side effects)

This provides immediate visual feedback while ensuring your application state remains consistent.
Behind the Scenes
https://gradio.app/guides/client-side-functions
Additional Features - Client Side Functions Guide
- **1. Static files**. You can designate static files or directories using the `gr.set_static_paths` function. Static files are not copied to the Gradio cache (see below) and will be served directly from your computer. This can help save disk space and reduce the time your app takes to launch, but be mindful of possible security implications, as any static files are accessible to all users of your Gradio app.
- **2. Files in the `allowed_paths` parameter in `launch()`**. This parameter allows you to pass in a list of additional directories or exact filepaths you'd like to allow users to have access to. (By default, this parameter is an empty list.)
- **3. Files in Gradio's cache**. After you launch your Gradio app, Gradio copies certain files into a temporary cache and makes these files accessible to users. Let's unpack this in more detail below.
Files Gradio allows users to access
https://gradio.app/guides/file-access
Additional Features - File Access Guide
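A short sketch tying the first two mechanisms together (the directory names are hypothetical):

```python
import gradio as gr
from pathlib import Path

# 1. Static files: served directly from disk, never copied to the cache
gr.set_static_paths(paths=[Path("assets").absolute()])

demo = gr.Interface(lambda x: x, "text", "text")

# 2. allowed_paths: additional files/directories users may access by URL
demo.launch(allowed_paths=[str(Path("reports").absolute())])
```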
First, it's important to understand why Gradio has a cache at all. Gradio copies files to a cache directory before returning them to the frontend. This prevents files from being overwritten by one user while they are still needed by another user of your application. For example, if your prediction function returns a video file, then Gradio will move that video to the cache after your prediction function runs and return a URL that the frontend can use to show the video. Any file in the cache is available via URL to all users of your running application.

Tip: You can customize the location of the cache by setting the `GRADIO_TEMP_DIR` environment variable to an absolute path, such as `/home/usr/scripts/project/temp/`.

**Files Gradio moves to the cache.** Gradio moves three kinds of files into the cache:

1. Files specified by the developer before runtime, e.g. cached examples, default values of components, or files passed into parameters such as the `avatar_images` of `gr.Chatbot`
2. File paths returned by a prediction function in your Gradio application, if they ALSO meet one of the conditions below:
   * It is in the `allowed_paths` parameter of the `Blocks.launch` method.
   * It is in the current working directory of the Python interpreter.
   * It is in the temp directory obtained by `tempfile.gettempdir()`.

   **Note:** files in the current working directory whose name starts with a period (`.`) will not be moved to the cache, even if they are returned from a prediction function, since they often contain sensitive information.

   If none of these criteria are met, the prediction function that is returning that file will raise an exception instead of moving the file to the cache. Gradio performs this check so that arbitrary files on your machine cannot be accessed.
3. Files uploaded by a user to your Gradio app (e.g. through the `File` or `Image` input components).

Tip: If at any time Gradio blocks a file that you would like it to process, add its path to the `allowed_paths` parameter.
The Gradio cache
https://gradio.app/guides/file-access
Additional Features - File Access Guide
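For instance, under rule 2 above, a file written to the current working directory can be returned from a prediction function and will be copied to the cache. A hypothetical minimal example:

```python
import gradio as gr

def make_report(text):
    path = "report.txt"  # in the current working directory, so caching is allowed
    with open(path, "w") as f:
        f.write(text)
    return path  # Gradio copies this file to the cache and serves it by URL

demo = gr.Interface(make_report, "text", "file")
demo.launch()
```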
While running, Gradio apps will NOT ALLOW users to access:

- **Files that you explicitly block via the `blocked_paths` parameter in `launch()`**. You can pass in a list of additional directories or exact filepaths to the `blocked_paths` parameter in `launch()`. This parameter takes precedence over the files that Gradio exposes by default, as well as those allowed via the `allowed_paths` parameter or the `gr.set_static_paths` function.
- **Any other paths on the host machine**. Users should NOT be able to access other arbitrary paths on the host.
The files Gradio will not allow others to access
https://gradio.app/guides/file-access
Additional Features - File Access Guide
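A minimal sketch of how `blocked_paths` overrides the other mechanisms (the paths are hypothetical):

```python
import gradio as gr

demo = gr.Interface(lambda x: x, "text", "text")
demo.launch(
    allowed_paths=["/data/public"],          # users may access this directory...
    blocked_paths=["/data/public/secrets"],  # ...except this subdirectory, which wins
)
```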
Sharing your Gradio application will also allow users to upload files to your computer or server. You can set a maximum file size for uploads to prevent abuse and to preserve disk space, using the `max_file_size` parameter of `.launch`. For example, the following code snippet shows two equivalent ways to limit file uploads to 5 megabytes per file:

```python
import gradio as gr

demo = gr.Interface(lambda x: x, "image", "image")

demo.launch(max_file_size="5mb")
# or
demo.launch(max_file_size=5 * gr.FileSize.MB)
```
Uploading Files
https://gradio.app/guides/file-access
Additional Features - File Access Guide
* Set a `max_file_size` for your application.
* Do not return arbitrary user input from a function that is connected to a file-based output component (`gr.Image`, `gr.File`, etc.). For example, the following interface would allow anyone to move an arbitrary file in your local directory to the cache: `gr.Interface(lambda s: s, "text", "file")`. This is because the user input is treated as an arbitrary file path.
* Make `allowed_paths` as small as possible. If a path in `allowed_paths` is a directory, any file within that directory can be accessed. Make sure the entries of `allowed_paths` only contain files related to your application.
* Run your Gradio application from the same directory the application file is located in. This will narrow the scope of files Gradio will be allowed to move into the cache. For example, prefer `python app.py` to `python Users/sources/project/app.py`.
Best Practices
https://gradio.app/guides/file-access
Additional Features - File Access Guide
Both `gr.set_static_paths` and the `allowed_paths` parameter in `launch()` expect absolute paths. Below is a minimal example to display a local `.png` image file in an HTML block.

```txt
├── assets
│   └── logo.png
└── app.py
```

For the example directory structure, `logo.png` and any other files in the `assets` folder can be accessed from your Gradio app in `app.py` as follows:

```python
from pathlib import Path

import gradio as gr

gr.set_static_paths(paths=[Path.cwd().absolute() / "assets"])

with gr.Blocks() as demo:
    gr.HTML("<img src='/gradio_api/file=assets/logo.png'>")

demo.launch()
```
Example: Accessing local files
https://gradio.app/guides/file-access
Additional Features - File Access Guide
Gradio can stream audio and video directly from your generator function. This lets your user hear your audio or see your video nearly as soon as it's yielded by your function. All you have to do is:

1. Set `streaming=True` in your `gr.Audio` or `gr.Video` output component.
2. Write a Python generator that yields the next "chunk" of audio or video.
3. Set `autoplay=True` so that the media starts playing automatically.

For audio, the next "chunk" can be either an `.mp3` or `.wav` file or a `bytes` sequence of audio. For video, the next "chunk" must be either an `.mp4` file or a file with the `h.264` codec and a `.ts` extension. For smooth playback, make sure chunks are consistent lengths and larger than 1 second.

We'll finish with some simple examples illustrating these points.

**Streaming Audio**

```python
import gradio as gr
from time import sleep

def keep_repeating(audio_file):
    for _ in range(10):
        sleep(0.5)
        yield audio_file

gr.Interface(keep_repeating,
             gr.Audio(sources=["microphone"], type="filepath"),
             gr.Audio(streaming=True, autoplay=True)
).launch()
```

**Streaming Video**

```python
import gradio as gr
from time import sleep

def keep_repeating(video_file):
    for _ in range(10):
        sleep(0.5)
        yield video_file

gr.Interface(keep_repeating,
             gr.Video(sources=["webcam"], format="mp4"),
             gr.Video(streaming=True, autoplay=True)
).launch()
```
Streaming Media
https://gradio.app/guides/streaming-outputs
Additional Features - Streaming Outputs Guide
For an end-to-end example of streaming media, see the object detection from video [guide](/main/guides/object-detection-from-video) or the streaming AI-generated audio with [transformers](https://huggingface.co/docs/transformers/index) [guide](/main/guides/streaming-ai-generated-audio).
End-to-End Examples
https://gradio.app/guides/streaming-outputs
Additional Features - Streaming Outputs Guide
You can initialize the `I18n` class with multiple language dictionaries to add custom translations:

```python
import gradio as gr

# Create an I18n instance with translations for multiple languages
i18n = gr.I18n(
    en={"greeting": "Hello, welcome to my app!", "submit": "Submit"},
    es={"greeting": "¡Hola, bienvenido a mi aplicación!", "submit": "Enviar"},
    fr={"greeting": "Bonjour, bienvenue dans mon application!", "submit": "Soumettre"}
)

with gr.Blocks() as demo:
    # Use the i18n method to translate the greeting
    gr.Markdown(i18n("greeting"))
    with gr.Row():
        input_text = gr.Textbox(label="Input")
        output_text = gr.Textbox(label="Output")
    submit_btn = gr.Button(i18n("submit"))

# Pass the i18n instance to the launch method
demo.launch(i18n=i18n)
```
Setting Up Translations
https://gradio.app/guides/internationalization
Additional Features - Internationalization Guide
When you use the `i18n` instance with a translation key, Gradio will show the corresponding translation to users based on their browser's language settings or the language they've selected in your app. If a translation isn't available for the user's locale, the system will fall back to English (if available) or display the key itself.
How It Works
https://gradio.app/guides/internationalization
Additional Features - Internationalization Guide
Locale codes should follow the BCP 47 format (e.g., 'en', 'en-US', 'zh-CN'). The `I18n` class will warn you if you use an invalid locale code.
Valid Locale Codes
https://gradio.app/guides/internationalization
Additional Features - Internationalization Guide
The following component properties typically support internationalization:

- `description`
- `info`
- `title`
- `placeholder`
- `value`
- `label`

Note that support may vary depending on the component, and some properties might have exceptions where internationalization is not applicable. You can check this by referring to the type hint for the parameter: if it contains `I18nData`, the property supports internationalization.
Supported Component Properties
https://gradio.app/guides/internationalization
Additional Features - Internationalization Guide
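For example, `label` and `placeholder` can both take translation keys. The keys below are hypothetical and must exist in the `I18n` dictionaries passed to `launch()`:

```python
import gradio as gr

i18n = gr.I18n(
    en={"name_label": "Name", "name_hint": "Enter your name"},
    fr={"name_label": "Nom", "name_hint": "Entrez votre nom"},
)

with gr.Blocks() as demo:
    # Both properties resolve to the user's locale at render time
    gr.Textbox(label=i18n("name_label"), placeholder=i18n("name_hint"))

demo.launch(i18n=i18n)
```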
When a user closes their browser tab, Gradio will automatically delete any `gr.State` variables associated with that user session after 60 minutes. If the user connects again within those 60 minutes, no state will be deleted. You can control the deletion behavior further with the following two parameters of `gr.State`:

1. `delete_callback` - An arbitrary function that will be called when the variable is deleted. This function must take the state value as input. This function is useful for deleting variables from GPU memory.
2. `time_to_live` - The number of seconds the state should be stored for after it is created or updated. This will delete variables before the session is closed, so it's useful for clearing state for potentially long-running sessions.
Automatic deletion of `gr.State`
https://gradio.app/guides/resource-cleanup
Additional Features - Resource Cleanup Guide
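A sketch combining both parameters; the cleanup function is a stand-in for, e.g., freeing GPU memory:

```python
import gradio as gr

def release(value):
    # Hypothetical cleanup; for GPU-backed state you might free tensors here
    print(f"Deleting session state with {len(value)} items")

with gr.Blocks() as demo:
    state = gr.State(
        value=[],
        time_to_live=600,         # delete 10 minutes after creation or last update
        delete_callback=release,  # called with the state value when it is deleted
    )

demo.launch()
```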
Your Gradio application will save uploaded and generated files to a special directory called the cache directory. Gradio uses a hashing scheme to ensure that duplicate files are not saved to the cache, but over time the size of the cache will grow (especially if your app goes viral 😉). Gradio can periodically clean up the cache for you if you specify the `delete_cache` parameter of `gr.Blocks()`, `gr.Interface()`, or `gr.ChatInterface()`. This parameter is a tuple of the form `(frequency, age)`, both expressed in seconds. Every `frequency` seconds, the temporary files created by this Blocks instance will be deleted if more than `age` seconds have passed since the file was created. For example, setting this to `(86400, 86400)` will delete temporary files every day if they are older than a day. Additionally, the cache will be deleted entirely when the server restarts.
Automatic cache cleanup via `delete_cache`
https://gradio.app/guides/resource-cleanup
Additional Features - Resource Cleanup Guide
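For example, to delete cached files older than a day, checking every hour:

```python
import gradio as gr

# (frequency, age) in seconds: every hour, delete files older than one day
with gr.Blocks(delete_cache=(3600, 86400)) as demo:
    gr.Markdown("...")

demo.launch()
```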
Additionally, Gradio now includes a `Blocks.unload()` event, allowing you to run arbitrary cleanup functions when users disconnect (this does not have a 60-minute delay). Unlike other Gradio events, this event does not accept inputs or outputs. You can think of the `unload` event as the opposite of the `load` event.
The `unload` event
https://gradio.app/guides/resource-cleanup
Additional Features - Resource Cleanup Guide
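A minimal sketch of wiring up the `unload` event:

```python
import gradio as gr

def cleanup():
    # Runs as soon as the user closes or reloads the page; no inputs or outputs
    print("User disconnected; cleaning up session resources")

with gr.Blocks() as demo:
    gr.Markdown("Close this tab to trigger the cleanup function.")
    demo.unload(cleanup)

demo.launch()
```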
The following demo uses all of these features. When a user visits the page, a special unique directory is created for that user. As the user interacts with the app, images are saved to disk in that special directory. When the user closes the page, the images created in that session are deleted via the `unload` event. The state and files in the cache are cleaned up automatically as well. $code_state_cleanup $demo_state_cleanup
Putting it all together
https://gradio.app/guides/resource-cleanup
Additional Features - Resource Cleanup Guide
1. `GRADIO_SERVER_PORT`

   - **Description**: Specifies the port on which the Gradio app will run.
   - **Default**: `7860`
   - **Example**:

   ```bash
   export GRADIO_SERVER_PORT=8000
   ```

2. `GRADIO_SERVER_NAME`

   - **Description**: Defines the host name for the Gradio server. To make Gradio accessible from any IP address, set this to `"0.0.0.0"`.
   - **Default**: `"127.0.0.1"`
   - **Example**:

   ```bash
   export GRADIO_SERVER_NAME="0.0.0.0"
   ```

3. `GRADIO_NUM_PORTS`

   - **Description**: Defines the number of ports to try when starting the Gradio server.
   - **Default**: `100`
   - **Example**:

   ```bash
   export GRADIO_NUM_PORTS=200
   ```

4. `GRADIO_ANALYTICS_ENABLED`

   - **Description**: Whether Gradio should collect usage analytics.
   - **Default**: `"True"`
   - **Options**: `"True"`, `"False"`
   - **Example**:

   ```sh
   export GRADIO_ANALYTICS_ENABLED="True"
   ```

5. `GRADIO_DEBUG`

   - **Description**: Enables or disables debug mode in Gradio. If debug mode is enabled, the main thread does not terminate, allowing error messages to be printed in environments such as Google Colab.
   - **Default**: `0`
   - **Example**:

   ```sh
   export GRADIO_DEBUG=1
   ```

6. `GRADIO_FLAGGING_MODE`

   - **Description**: Controls whether users can flag inputs/outputs in the Gradio interface. See [the Guide on flagging](/guides/using-flagging) for more details.
   - **Default**: `"manual"`
   - **Options**: `"never"`, `"manual"`, `"auto"`
   - **Example**:

   ```sh
   export GRADIO_FLAGGING_MODE="never"
   ```

7. `GRADIO_TEMP_DIR`

   - **Description**: Specifies the directory where temporary files created by Gradio are stored.
   - **Default**: System default temporary directory
   - **Example**:

   ```sh
   export GRADIO_TEMP_DIR="/path/to/temp"
   ```

8. `GRADIO_ROOT_PATH`

   - **Description**: Sets the root path for the Gradio application. Useful if running Gradio [behind a reverse proxy](/guides/running-gradio-on-your-web-server-with-nginx).
   - **Default**: `""`
   - **Example**:

   ```sh
   export GRADIO_ROOT_PATH="/myapp"
   ```
Key Environment Variables
https://gradio.app/guides/environment-variables
Additional Features - Environment Variables Guide
9. `GRADIO_SHARE`

   - **Description**: Enables or disables sharing the Gradio app.
   - **Default**: `"False"`
   - **Options**: `"True"`, `"False"`
   - **Example**:

   ```sh
   export GRADIO_SHARE="True"
   ```

10. `GRADIO_ALLOWED_PATHS`

    - **Description**: Sets a list of complete filepaths or parent directories that Gradio is allowed to serve. Must be absolute paths. Warning: if you provide directories, any files in these directories or their subdirectories are accessible to all users of your app. Multiple items can be specified by separating items with commas.
    - **Default**: `""`
    - **Example**:

    ```sh
    export GRADIO_ALLOWED_PATHS="/mnt/sda1,/mnt/sda2"
    ```

11. `GRADIO_BLOCKED_PATHS`

    - **Description**: Sets a list of complete filepaths or parent directories that Gradio is not allowed to serve (i.e. users of your app are not allowed to access). Must be absolute paths. Warning: takes precedence over `allowed_paths` and all other directories exposed by Gradio by default. Multiple items can be specified by separating items with commas.
    - **Default**: `""`
    - **Example**:

    ```sh
    export GRADIO_BLOCKED_PATHS="/users/x/gradio_app/admin,/users/x/gradio_app/keys"
    ```

12. `FORWARDED_ALLOW_IPS`

    - **Description**: This is not a Gradio-specific environment variable, but rather one used in server configurations, specifically `uvicorn`, which is used by Gradio internally. This environment variable is useful when deploying applications behind a reverse proxy. It defines a list of IP addresses that are trusted to forward traffic to your application. When set, the application will trust the `X-Forwarded-For` header from these IP addresses to determine the original IP address of the user making the request. This means that if you use the `gr.Request` [object's](https://www.gradio.app/docs/gradio/request) `client.host` property, it will correctly get the user's IP address instead of the IP address of the reverse proxy server. Note that only trusted IP addresses (i.e. the IP addresses of your reverse proxy servers) should be added, as any server with these IP addresses can modify the `X-Forwarded-For` header and spoof the client's IP address.
    - **Default**: `"127.0.0.1"`
    - **Example**:

    ```sh
    export FORWARDED_ALLOW_IPS="127.0.0.1,192.168.1.100"
    ```
Key Environment Variables
https://gradio.app/guides/environment-variables
Additional Features - Environment Variables Guide
13. `GRADIO_CACHE_EXAMPLES`

    - **Description**: Whether or not to cache examples by default in `gr.Interface()`, `gr.ChatInterface()`, or in `gr.Examples()` when no explicit argument is passed for the `cache_examples` parameter. You can set this environment variable to either the string "true" or "false".
    - **Default**: `"false"`
    - **Example**:

    ```sh
    export GRADIO_CACHE_EXAMPLES="true"
    ```

14. `GRADIO_CACHE_MODE`

    - **Description**: How to cache examples. Only applies if `cache_examples` is set to `True`, either via environment variable or by an explicit parameter, AND no explicit argument is passed for the `cache_mode` parameter in `gr.Interface()`, `gr.ChatInterface()`, or in `gr.Examples()`. Can be set to either the string "lazy" or "eager". If "lazy", examples are cached after their first use for all users of the app. If "eager", all examples are cached at app launch.
    - **Default**: `"eager"`
    - **Example**:

    ```sh
    export GRADIO_CACHE_MODE="lazy"
    ```

15. `GRADIO_EXAMPLES_CACHE`

    - **Description**: If you set `cache_examples=True` in `gr.Interface()`, `gr.ChatInterface()`, or in `gr.Examples()`, Gradio will run your prediction function and save the results to disk. By default, this is in the `.gradio/cached_examples/` subdirectory within your app's working directory. You can customize the location of cached example files created by Gradio by setting the environment variable `GRADIO_EXAMPLES_CACHE` to an absolute path or a path relative to your working directory.
    - **Default**: `".gradio/cached_examples/"`
    - **Example**:

    ```sh
    export GRADIO_EXAMPLES_CACHE="custom_cached_examples/"
    ```
Key Environment Variables
https://gradio.app/guides/environment-variables
Additional Features - Environment Variables Guide
16. `GRADIO_SSR_MODE`

    - **Description**: Controls whether server-side rendering (SSR) is enabled. When enabled, the initial HTML is rendered on the server rather than the client, which can improve initial page load performance and SEO.
    - **Default**: `"False"` (except on Hugging Face Spaces, where this environment variable sets it to `True`)
    - **Options**: `"True"`, `"False"`
    - **Example**:

    ```sh
    export GRADIO_SSR_MODE="True"
    ```

17. `GRADIO_NODE_SERVER_NAME`

    - **Description**: Defines the host name for the Gradio node server. (Only applies if `ssr_mode` is set to `True`.)
    - **Default**: `GRADIO_SERVER_NAME` if it is set, otherwise `"127.0.0.1"`
    - **Example**:

    ```sh
    export GRADIO_NODE_SERVER_NAME="0.0.0.0"
    ```

18. `GRADIO_NODE_NUM_PORTS`

    - **Description**: Defines the number of ports to try when starting the Gradio node server. (Only applies if `ssr_mode` is set to `True`.)
    - **Default**: `100`
    - **Example**:

    ```sh
    export GRADIO_NODE_NUM_PORTS=200
    ```

19. `GRADIO_RESET_EXAMPLES_CACHE`

    - **Description**: If set to "True", Gradio will delete and recreate the examples cache directory when the app starts, instead of reusing cached examples if they already exist.
    - **Default**: `"False"`
    - **Options**: `"True"`, `"False"`
    - **Example**:

    ```sh
    export GRADIO_RESET_EXAMPLES_CACHE="True"
    ```
Key Environment Variables
https://gradio.app/guides/environment-variables
Additional Features - Environment Variables Guide
e"` - **Options**: `"True"`, `"False"` - **Example**: ```sh export GRADIO_RESET_EXAMPLES_CACHE="True" ``` 20. `GRADIO_CHAT_FLAGGING_MODE` - **Description**: Controls whether users can flag messages in `gr.ChatInterface` applications. Similar to `GRADIO_FLAGGING_MODE` but specifically for chat interfaces. - **Default**: `"never"` - **Options**: `"never"`, `"manual"` - **Example**: ```sh export GRADIO_CHAT_FLAGGING_MODE="manual" ``` 21. `GRADIO_WATCH_DIRS` - **Description**: Specifies directories to watch for file changes when running Gradio in development mode. When files in these directories change, the Gradio app will automatically reload. Multiple directories can be specified by separating them with commas. This is primarily used by the `gradio` CLI command for development workflows. - **Default**: `""` - **Example**: ```sh export GRADIO_WATCH_DIRS="/path/to/src,/path/to/templates" ``` 22. `GRADIO_VIBE_MODE` - **Description**: Enables the Vibe editor mode, which provides an in-browser chat that can be used to write or edit your Gradio app using natural language. When enabled, anyone who can access the Gradio endpoint can modify files and run arbitrary code on the host machine. Use with extreme caution in production environments. - **Default**: `""` - **Options**: Any non-empty string enables the mode - **Example**: ```sh export GRADIO_VIBE_MODE="1" ```
Key Environment Variables
https://gradio.app/guides/environment-variables
Additional Features - Environment Variables Guide
To set environment variables in your terminal, use the `export` command followed by the variable name and its value. For example:

```sh
export GRADIO_SERVER_PORT=8000
```

If you're using a `.env` file to manage your environment variables, you can add them like this:

```sh
GRADIO_SERVER_PORT=8000
GRADIO_SERVER_NAME="localhost"
```

Then, use a tool like `dotenv` to load these variables when running your application.
How to Set Environment Variables
https://gradio.app/guides/environment-variables
Additional Features - Environment Variables Guide
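For instance, with the `python-dotenv` package, a hypothetical entrypoint might look like this; note that `load_dotenv()` should run before Gradio reads the environment:

```python
from dotenv import load_dotenv

load_dotenv()  # read .env before Gradio inspects the environment

import gradio as gr

demo = gr.Interface(lambda x: x, "text", "text")
demo.launch()  # picks up GRADIO_SERVER_PORT, GRADIO_SERVER_NAME, etc.
```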
**Prerequisite**: Gradio requires [Python 3.10 or higher](https://www.python.org/downloads/). We recommend installing Gradio using `pip`, which is included by default in Python. Run this in your terminal or command prompt:

```bash
pip install --upgrade gradio
```

Tip: It is best to install Gradio in a virtual environment. Detailed installation instructions for all common operating systems <a href="https://www.gradio.app/main/guides/installing-gradio-in-a-virtual-environment">are provided here</a>.
Installation
https://gradio.app/guides/quickstart
Getting Started - Quickstart Guide
You can run Gradio in your favorite code editor, Jupyter notebook, Google Colab, or anywhere else you write Python. Let's write your first Gradio app: $code_hello_world_4 Tip: We shorten the imported name from <code>gradio</code> to <code>gr</code>. This is a widely adopted convention for better readability of code. Now, run your code. If you've written the Python code in a file named `app.py`, then you would run `python app.py` from the terminal. The demo below will open in a browser on [http://localhost:7860](http://localhost:7860) if running from a file. If you are running within a notebook, the demo will appear embedded within the notebook. $demo_hello_world_4 Type your name in the textbox on the left, drag the slider, and then press the Submit button. You should see a friendly greeting on the right. Tip: When developing locally, you can run your Gradio app in <strong>hot reload mode</strong>, which automatically reloads the Gradio app whenever you make changes to the file. To do this, simply type in <code>gradio</code> before the name of the file instead of <code>python</code>. In the example above, you would type: `gradio app.py` in your terminal. You can also enable <strong>vibe mode</strong> by using the <code>--vibe</code> flag, e.g. <code>gradio --vibe app.py</code>, which provides an in-browser chat that can be used to write or edit your Gradio app using natural language. Learn more in the <a href="https://www.gradio.app/guides/developing-faster-with-reload-mode">Hot Reloading Guide</a>. **Understanding the `Interface` Class** You'll notice that in order to make your first demo, you created an instance of the `gr.Interface` class. The `Interface` class is designed to create demos for machine learning models which accept one or more inputs, and return one or more outputs. The `Interface` class has three core arguments: - `fn`: the function to wrap a user interface (UI) around - `inputs`: the Gradio component(s) to use for the input. The num
Building Your First Demo
https://gradio.app/guides/quickstart
Getting Started - Quickstart Guide
turn one or more outputs. The `Interface` class has three core arguments: - `fn`: the function to wrap a user interface (UI) around - `inputs`: the Gradio component(s) to use for the input. The number of components should match the number of arguments in your function. - `outputs`: the Gradio component(s) to use for the output. The number of components should match the number of return values from your function. The `fn` argument is very flexible -- you can pass *any* Python function that you want to wrap with a UI. In the example above, we saw a relatively simple function, but the function could be anything from a music generator to a tax calculator to the prediction function of a pretrained machine learning model. The `inputs` and `outputs` arguments take one or more Gradio components. As we'll see, Gradio includes more than [30 built-in components](https://www.gradio.app/docs/gradio/introduction) (such as the `gr.Textbox()`, `gr.Image()`, and `gr.HTML()` components) that are designed for machine learning applications. Tip: For the `inputs` and `outputs` arguments, you can pass in the name of these components as a string (`"textbox"`) or an instance of the class (`gr.Textbox()`). If your function accepts more than one argument, as is the case above, pass a list of input components to `inputs`, with each input component corresponding to one of the arguments of the function, in order. The same holds true if your function returns more than one value: simply pass in a list of components to `outputs`. This flexibility makes the `Interface` class a very powerful way to create demos. We'll dive deeper into the `gr.Interface` on our series on [building Interfaces](https://www.gradio.app/main/guides/the-interface-class).
Building Your First Demo
https://gradio.app/guides/quickstart
Getting Started - Quickstart Guide
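To make the list-matching concrete, here is a small interface in the same spirit as the demo above (the exact demo code is a placeholder in this guide, so the names here are illustrative):

```python
import gradio as gr

def greet(name, intensity):
    # Two arguments -> two input components; one return value -> one output component
    return "Hello, " + name + "!" * int(intensity)

demo = gr.Interface(
    fn=greet,
    inputs=["textbox", gr.Slider(minimum=1, maximum=10, value=2)],
    outputs=[gr.Textbox(label="greeting")],
)
demo.launch()
```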
What good is a beautiful demo if you can't share it? Gradio lets you easily share a machine learning demo without having to worry about the hassle of hosting on a web server. Simply set `share=True` in `launch()`, and a publicly accessible URL will be created for your demo. Let's revisit our example demo, but change the last line as follows:

```python
import gradio as gr

def greet(name):
    return "Hello " + name + "!"

demo = gr.Interface(fn=greet, inputs="textbox", outputs="textbox")

demo.launch(share=True)  # Share your demo with just 1 extra parameter 🚀
```

When you run this code, a public URL will be generated for your demo in a matter of seconds, something like:

👉 &nbsp; `https://a23dsf231adb.gradio.live`

Now, anyone around the world can try your Gradio demo from their browser, while the machine learning model and all computation continues to run locally on your computer.

To learn more about sharing your demo, read our dedicated guide on [sharing your Gradio application](https://www.gradio.app/guides/sharing-your-app).
Sharing Your Demo
https://gradio.app/guides/quickstart
Getting Started - Quickstart Guide
So far, we've been discussing the `Interface` class, which is a high-level class that lets you build demos quickly with Gradio. But what else does Gradio include?

**Custom Demos with `gr.Blocks`**

Gradio offers a low-level approach for designing web apps with more customizable layouts and data flows with the `gr.Blocks` class. Blocks supports things like controlling where components appear on the page, handling multiple data flows and more complex interactions (e.g. outputs can serve as inputs to other functions), and updating properties/visibility of components based on user interaction — still all in Python.

You can build very custom and complex applications using `gr.Blocks()`. For example, the popular image generation [Automatic1111 Web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) is built using Gradio Blocks. We dive deeper into `gr.Blocks` in our series on [building with Blocks](https://www.gradio.app/guides/blocks-and-event-listeners).

**Chatbots with `gr.ChatInterface`**

Gradio includes another high-level class, `gr.ChatInterface`, which is specifically designed to create Chatbot UIs. Similar to `Interface`, you supply a function and Gradio creates a fully working Chatbot UI. If you're interested in creating a chatbot, you can jump straight to [our dedicated guide on `gr.ChatInterface`](https://www.gradio.app/guides/creating-a-chatbot-fast).

**The Gradio Python & JavaScript Ecosystem**

That's the gist of the core `gradio` Python library, but Gradio is actually so much more! It's an entire ecosystem of Python and JavaScript libraries that let you build machine learning applications, or query them programmatically, in Python or JavaScript. Here are other related parts of the Gradio ecosystem:

* [Gradio Python Client](https://www.gradio.app/guides/getting-started-with-the-python-client) (`gradio_client`): query any Gradio app programmatically in Python.
An Overview of Gradio
https://gradio.app/guides/quickstart
Getting Started - Quickstart Guide
* [Gradio JavaScript Client](https://www.gradio.app/guides/getting-started-with-the-js-client) (`@gradio/client`): query any Gradio app programmatically in JavaScript.
* [Gradio-Lite](https://www.gradio.app/guides/gradio-lite) (`@gradio/lite`): write Gradio apps in Python that run entirely in the browser (no server needed!), thanks to Pyodide.
* [Hugging Face Spaces](https://huggingface.co/spaces): the most popular place to host Gradio applications — for free!
An Overview of Gradio
https://gradio.app/guides/quickstart
Getting Started - Quickstart Guide
Keep learning about Gradio sequentially using the Gradio Guides, which include explanations as well as example code and embedded interactive demos. Next up: [let's dive deeper into the Interface class](https://www.gradio.app/guides/the-interface-class). Or, if you already know the basics and are looking for something specific, you can search the more [technical API documentation](https://www.gradio.app/docs/).
What's Next?
https://gradio.app/guides/quickstart
Getting Started - Quickstart Guide
You can also build Gradio applications without writing any code. Simply type `gradio sketch` into your terminal to open an editor that lets you define and modify Gradio components, adjust their layouts, and add events, all through a web editor. Or [use this hosted version of Gradio Sketch, running on Hugging Face Spaces](https://huggingface.co/spaces/aliabid94/Sketch).
Gradio Sketch
https://gradio.app/guides/quickstart
Getting Started - Quickstart Guide
The Model Context Protocol (MCP) standardizes how applications provide context to LLMs. It allows Claude to interact with external tools, such as image generators, file systems, or APIs.
What is MCP?
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
- Python 3.10+
- An Anthropic API key
- Basic understanding of Python programming
Prerequisites
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
First, install the required packages:

```bash
pip install gradio anthropic mcp
```

Create a `.env` file in your project directory and add your Anthropic API key:

```
ANTHROPIC_API_KEY=your_api_key_here
```
Setup
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
The server provides tools that Claude can use. In this example, we'll create a server that generates images through [a HuggingFace space](https://huggingface.co/spaces/ysharma/SanaSprint). Create a file named `gradio_mcp_server.py`:

```python
from mcp.server.fastmcp import FastMCP
import json
import sys
import io
import time
from gradio_client import Client

# Reconfigure stdio for UTF-8 so tool output survives the stdio transport
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8', errors='replace')
sys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding='utf-8', errors='replace')

mcp = FastMCP("huggingface_spaces_image_display")

@mcp.tool()
async def generate_image(prompt: str, width: int = 512, height: int = 512) -> str:
    """Generate an image using SanaSprint model.

    Args:
        prompt: Text prompt describing the image to generate
        width: Image width (default: 512)
        height: Image height (default: 512)
    """
    client = Client("https://ysharma-sanasprint.hf.space/")

    try:
        result = client.predict(
            prompt,
            "0.6B",
            0,
            True,
            width,
            height,
            4.0,
            2,
            api_name="/infer"
        )

        if isinstance(result, list) and len(result) >= 1:
            image_data = result[0]
            if isinstance(image_data, dict) and "url" in image_data:
                return json.dumps({
                    "type": "image",
                    "url": image_data["url"],
                    "message": f"Generated image for prompt: {prompt}"
                })

        return json.dumps({
            "type": "error",
            "message": "Failed to generate image"
        })

    except Exception as e:
        return json.dumps({
            "type": "error",
            "message": f"Error generating image: {str(e)}"
        })

if __name__ == "__main__":
    mcp.run(transport='stdio')
```
Part 1: Building the MCP Server
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
"message": f"Error generating image: {str(e)}" }) if __name__ == "__main__": mcp.run(transport='stdio') ``` What this server does: 1. It creates an MCP server that exposes a `generate_image` tool 2. The tool connects to the SanaSprint model hosted on HuggingFace Spaces 3. It handles the asynchronous nature of image generation by polling for results 4. When an image is ready, it returns the URL in a structured JSON format
Part 1: Building the MCP Server
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
Now let's create a Gradio chat interface as an MCP Client that connects Claude to our MCP server. Create a file named `app.py`:

```python
import asyncio
import os
import json
from typing import List, Dict, Any, Union
from contextlib import AsyncExitStack

import gradio as gr
from gradio.components.chatbot import ChatMessage
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from anthropic import Anthropic
from dotenv import load_dotenv

load_dotenv()

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)

class MCPClientWrapper:
    def __init__(self):
        self.session = None
        self.exit_stack = None
        self.anthropic = Anthropic()
        self.tools = []

    def connect(self, server_path: str) -> str:
        return loop.run_until_complete(self._connect(server_path))

    async def _connect(self, server_path: str) -> str:
        if self.exit_stack:
            await self.exit_stack.aclose()

        self.exit_stack = AsyncExitStack()

        is_python = server_path.endswith('.py')
        command = "python" if is_python else "node"

        server_params = StdioServerParameters(
            command=command,
            args=[server_path],
            env={"PYTHONIOENCODING": "utf-8", "PYTHONUNBUFFERED": "1"}
        )

        stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))
        self.stdio, self.write = stdio_transport

        self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write))
        await self.session.initialize()

        response = await self.session.list_tools()
        self.tools = [{
            "name": tool.name,
            "description": tool.description,
            "input_schema": tool.inputSchema
        } for tool in response.tools]

        tool_names = [tool["name"] for tool in self.tools]
        return f"Connected to MCP server. Available tools: {', '.join(tool_names)}"
```
Part 2: Building the MCP Client with Gradio
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
iption, "input_schema": tool.inputSchema } for tool in response.tools] tool_names = [tool["name"] for tool in self.tools] return f"Connected to MCP server. Available tools: {', '.join(tool_names)}" def process_message(self, message: str, history: List[Union[Dict[str, Any], ChatMessage]]) -> tuple: if not self.session: return history + [ {"role": "user", "content": message}, {"role": "assistant", "content": "Please connect to an MCP server first."} ], gr.Textbox(value="") new_messages = loop.run_until_complete(self._process_query(message, history)) return history + [{"role": "user", "content": message}] + new_messages, gr.Textbox(value="") async def _process_query(self, message: str, history: List[Union[Dict[str, Any], ChatMessage]]): claude_messages = [] for msg in history: if isinstance(msg, ChatMessage): role, content = msg.role, msg.content else: role, content = msg.get("role"), msg.get("content") if role in ["user", "assistant", "system"]: claude_messages.append({"role": role, "content": content}) claude_messages.append({"role": "user", "content": message}) response = self.anthropic.messages.create( model="claude-3-5-sonnet-20241022", max_tokens=1000, messages=claude_messages, tools=self.tools ) result_messages = [] for content in response.content: if content.type == 'text': result_messages.append({ "role": "assistant", "content": content.text }) elif content.type == 'tool_use': tool_name = content.name tool_args = content.input
Part 2: Building the MCP Client with Gradio
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
ntent": content.text }) elif content.type == 'tool_use': tool_name = content.name tool_args = content.input result_messages.append({ "role": "assistant", "content": f"I'll use the {tool_name} tool to help answer your question.", "metadata": { "title": f"Using tool: {tool_name}", "log": f"Parameters: {json.dumps(tool_args, ensure_ascii=True)}", "status": "pending", "id": f"tool_call_{tool_name}" } }) result_messages.append({ "role": "assistant", "content": "```json\n" + json.dumps(tool_args, indent=2, ensure_ascii=True) + "\n```", "metadata": { "parent_id": f"tool_call_{tool_name}", "id": f"params_{tool_name}", "title": "Tool Parameters" } }) result = await self.session.call_tool(tool_name, tool_args) if result_messages and "metadata" in result_messages[-2]: result_messages[-2]["metadata"]["status"] = "done" result_messages.append({ "role": "assistant", "content": "Here are the results from the tool:", "metadata": { "title": f"Tool Result for {tool_name}", "status": "done", "id": f"result_{tool_name}" } }) result_content = result.content if isinstance(result_content, list): result_content = "\n".join(str(item) for item in re
Part 2: Building the MCP Client with Gradio
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
```python
                try:
                    result_json = json.loads(result_content)
                    if isinstance(result_json, dict) and "type" in result_json:
                        if result_json["type"] == "image" and "url" in result_json:
                            result_messages.append({
                                "role": "assistant",
                                "content": {"path": result_json["url"], "alt_text": result_json.get("message", "Generated image")},
                                "metadata": {
                                    "parent_id": f"result_{tool_name}",
                                    "id": f"image_{tool_name}",
                                    "title": "Generated Image"
                                }
                            })
                        else:
                            result_messages.append({
                                "role": "assistant",
                                "content": "```\n" + result_content + "\n```",
                                "metadata": {
                                    "parent_id": f"result_{tool_name}",
                                    "id": f"raw_result_{tool_name}",
                                    "title": "Raw Output"
                                }
                            })
                except:
                    result_messages.append({
                        "role": "assistant",
                        "content": "```\n" + result_content + "\n```",
                        "metadata": {
                            "parent_id": f"result_{tool_name}",
                            "id": f"raw_result_{tool_name}",
                            "title": "Raw Output"
                        }
                    })
```
Part 2: Building the MCP Client with Gradio
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
"parent_id": f"result_{tool_name}", "id": f"raw_result_{tool_name}", "title": "Raw Output" } }) claude_messages.append({"role": "user", "content": f"Tool result for {tool_name}: {result_content}"}) next_response = self.anthropic.messages.create( model="claude-3-5-sonnet-20241022", max_tokens=1000, messages=claude_messages, ) if next_response.content and next_response.content[0].type == 'text': result_messages.append({ "role": "assistant", "content": next_response.content[0].text }) return result_messages client = MCPClientWrapper() def gradio_interface(): with gr.Blocks(title="MCP Weather Client") as demo: gr.Markdown("MCP Weather Assistant") gr.Markdown("Connect to your MCP weather server and chat with the assistant") with gr.Row(equal_height=True): with gr.Column(scale=4): server_path = gr.Textbox( label="Server Script Path", placeholder="Enter path to server script (e.g., weather.py)", value="gradio_mcp_server.py" ) with gr.Column(scale=1): connect_btn = gr.Button("Connect") status = gr.Textbox(label="Connection Status", interactive=False) chatbot = gr.Chatbot( value=[], height=500, type="messages", show_copy_button=True, avatar_images=("👤", "🤖") ) with gr.Row(equal_height=True): msg = gr.Textbox( label="Your Question", placeholder="Ask about weather or alerts (e.g., What's the weath
Part 2: Building the MCP Client with Gradio
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
```python
        with gr.Row(equal_height=True):
            msg = gr.Textbox(
                label="Your Question",
                placeholder="Ask about weather or alerts (e.g., What's the weather in New York?)",
                scale=4
            )
            clear_btn = gr.Button("Clear Chat", scale=1)

        connect_btn.click(client.connect, inputs=server_path, outputs=status)
        msg.submit(client.process_message, [msg, chatbot], [chatbot, msg])
        clear_btn.click(lambda: [], None, chatbot)

    return demo

if __name__ == "__main__":
    if not os.getenv("ANTHROPIC_API_KEY"):
        print("Warning: ANTHROPIC_API_KEY not found in environment. Please set it in your .env file.")

    interface = gradio_interface()
    interface.launch(debug=True)
```

What this MCP Client does:

- Creates a friendly Gradio chat interface for user interaction
- Connects to the MCP server you specify
- Handles conversation history and message formatting
- Makes calls to the Claude API with tool definitions
- Processes tool usage requests from Claude
- Displays images and other tool outputs in the chat
- Sends tool results back to Claude for interpretation
Part 2: Building the MCP Client with Gradio
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
To run your MCP application:

- Start a terminal window and run the MCP Client:

  ```bash
  python app.py
  ```

- Open the Gradio interface at the URL shown (typically http://127.0.0.1:7860)
- In the Gradio interface, you'll see a field for the MCP Server path. It should default to `gradio_mcp_server.py`.
- Click "Connect" to establish the connection to the MCP server.
- You should see a message indicating the server connection was successful.
Running the Application
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
Now you can chat with Claude and it will be able to generate images based on your descriptions. Try prompts like:

- "Can you generate an image of a mountain landscape at sunset?"
- "Create an image of a cool tabby cat"
- "Generate a picture of a panda wearing sunglasses"

Claude will recognize these as image generation requests and automatically use the `generate_image` tool from your MCP server.
Example Usage
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
Here's the high-level flow of what happens during a chat session:

1. Your prompt enters the Gradio interface
2. The client forwards your prompt to Claude
3. Claude analyzes the prompt and decides to use the `generate_image` tool
4. The client sends the tool call to the MCP server
5. The server calls the external image generation API
6. The image URL is returned to the client
7. The client sends the image URL back to Claude
8. Claude provides a response that references the generated image
9. The Gradio chat interface displays both Claude's response and the image
How it Works
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
Now that you have a working MCP system, here are some ideas to extend it:

- Add more tools to your server
- Improve error handling
- Add private Hugging Face Spaces with authentication for secure tool access
- Create custom tools that connect to your own APIs or services
- Implement streaming responses for better user experience
Next Steps
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
Congratulations! You've successfully built an MCP Client and Server that allow Claude to generate images based on text prompts. This is just the beginning of what you can do with Gradio and MCP. The pattern shown in this guide lets you build complex AI applications in which Claude, or any other powerful LLM, can interact with virtually any external tool or service. Read our other guide on using [Gradio apps as MCP Servers](./building-mcp-server-with-gradio).
Conclusion
https://gradio.app/guides/building-an-mcp-client-with-gradio
Mcp - Building An Mcp Client With Gradio Guide
As of version 5.36.0, Gradio comes with a built-in MCP server that can upload files to a running Gradio application. In the `View API` page of the server, you should see the following code snippet if any of the tools require file inputs:

<img src="https://huggingface.co/datasets/freddyaboulton/bucket/resolve/main/MCPConnectionDocs.png">

The command to start the MCP server takes two arguments:

- The URL (or Hugging Face space id) of the Gradio application to upload the files to. In this case, `http://127.0.0.1:7860`.
- The local directory on your computer from which the server is allowed to upload files (`<UPLOAD_DIRECTORY>`). For security, please make this directory as narrow as possible to prevent unintended file uploads.

As stated in the image, you need to install [uv](https://docs.astral.sh/uv/getting-started/installation/) (a Python package manager that can run Python scripts) before connecting from your MCP client. If you have Gradio installed locally and you don't want to install uv, you can replace the `uvx` command with the path to the gradio binary. It should look like this:

```json
"upload-files": {
  "command": "<absolute-path-to-gradio>",
  "args": [
    "upload-mcp",
    "http://localhost:7860/",
    "/Users/freddyboulton/Pictures"
  ]
}
```

After connecting to the upload server, your LLM agent will know when to upload files for you automatically!

<img src="https://huggingface.co/datasets/freddyaboulton/bucket/resolve/main/Ghibliafy.png">
Using the File Upload MCP Server
https://gradio.app/guides/file-upload-mcp
Mcp - File Upload Mcp Guide
In this guide, we've covered how you can connect to the File Upload MCP Server so that your agent can upload files before using Gradio MCP servers. Remember to keep the `<UPLOAD_DIRECTORY>` as small as possible to prevent unintended file uploads!
Conclusion
https://gradio.app/guides/file-upload-mcp
Mcp - File Upload Mcp Guide
An MCP (Model Context Protocol) server is a standardized way to expose tools so that they can be used by LLMs. A tool can provide an LLM functionality that it does not have natively, such as the ability to generate images or calculate the prime factors of a number.
What is an MCP Server?
https://gradio.app/guides/building-mcp-server-with-gradio
Mcp - Building Mcp Server With Gradio Guide
LLMs are famously not great at counting the number of letters in a word (e.g. the number of "r"s in "strawberry"). But what if we equip them with a tool to help? Let's start by writing a simple Gradio app that counts the number of letters in a word or phrase:

$code_letter_counter

Notice that we have: (1) included a detailed docstring for our function, and (2) set `mcp_server=True` in `.launch()`. This is all that's needed for your Gradio app to serve as an MCP server! Now, when you run this app, it will:

1. Start the regular Gradio web interface
2. Start the MCP server
3. Print the MCP server URL in the console

The MCP server will be accessible at:

```
http://your-server:port/gradio_api/mcp/sse
```

Gradio automatically converts the `letter_counter` function into an MCP tool that can be used by LLMs. The docstring of the function and the type hints of its arguments will be used to generate the description of the tool and its parameters. The name of the function will be used as the name of your tool. Any initial values you provide to your input components (e.g. "strawberry" and "r" in the `gr.Textbox` components above) will be used as the default values if your LLM doesn't specify a value for that particular input parameter.

Now, all you need to do is add this URL endpoint to your MCP Client (e.g. Claude Desktop, Cursor, or Cline), which typically means pasting this config in the settings:

```
{
  "mcpServers": {
    "gradio": {
      "url": "http://your-server:port/gradio_api/mcp/sse"
    }
  }
}
```

(By the way, you can find the exact config to copy-paste by going to the "View API" link in the footer of your Gradio app, and then clicking on "MCP".)

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api-mcp.png)
Example: Counting Letters in a Word
https://gradio.app/guides/building-mcp-server-with-gradio
Mcp - Building Mcp Server With Gradio Guide
1. **Tool Conversion**: Each API endpoint in your Gradio app is automatically converted into an MCP tool with a corresponding name, description, and input schema. To view the tools and schemas, visit http://your-server:port/gradio_api/mcp/schema or go to the "View API" link in the footer of your Gradio app, and then click on "MCP".

2. **Environment variable support**: There are two ways to enable the MCP server functionality:

* Using the `mcp_server` parameter, as shown above:
```python
demo.launch(mcp_server=True)
```

* Using environment variables:
```bash
export GRADIO_MCP_SERVER=True
```

3. **File Handling**: The Gradio MCP server automatically handles file data conversions, including:

- Processing image files and returning them in the correct format
- Managing temporary file storage

By default, the Gradio MCP server accepts input images and files as full URLs ("http://..." or "https://..."). For convenience, an additional STDIO-based MCP server is also generated, which can be used to upload files to any remote Gradio app and which returns a URL that can be used for subsequent tool calls.

4. **Hosted MCP Servers on 🤗 Spaces**: You can publish your Gradio application for free on Hugging Face Spaces, which will allow you to have a free hosted MCP server. Here's an example of such a Space: https://huggingface.co/spaces/abidlabs/mcp-tools. Notice that you can add this config to your MCP Client to start using the tools from this Space immediately:

```
{
  "mcpServers": {
    "gradio": {
      "url": "https://abidlabs-mcp-tools.hf.space/gradio_api/mcp/sse"
    }
  }
}
```

<video src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/mcp_guide1.mp4" style="width:100%" controls preload> </video>
Key features of the Gradio <> MCP Integration
https://gradio.app/guides/building-mcp-server-with-gradio
Mcp - Building Mcp Server With Gradio Guide
If there's an existing Space that you'd like to use as an MCP server, you'll need to do three things:

1. First, [duplicate the Space](https://huggingface.co/docs/hub/en/spaces-more-ways-to-create#duplicating-a-space) if it is not your own Space. This will allow you to make changes to the app. If the Space requires a GPU, set the hardware of the duplicated Space to be the same as the original Space. You can make it either a public Space or a private Space, since it is possible to use either as an MCP server, as described below.
2. Then, add docstrings to the functions that you'd like the LLM to be able to call as a tool. The docstring should be in the same format as the example code above (see the sketch after this list).
3. Finally, add `mcp_server=True` in `.launch()`.

That's it!
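Concretely, steps 2 and 3 usually amount to a small diff. Here is a minimal sketch, assuming a hypothetical Space whose app defines a `to_grayscale` function; the function and its components are illustrative, not from a real Space:

```py
import gradio as gr
from PIL import Image, ImageOps

def to_grayscale(image: Image.Image) -> Image.Image:
    """
    Convert an input image to grayscale.

    Args:
        image (Image.Image): The image to convert.
    """
    # Step 2: the docstring above is what the LLM will see as the tool description
    return ImageOps.grayscale(image)

demo = gr.Interface(to_grayscale, gr.Image(type="pil"), gr.Image(type="pil"))
demo.launch(mcp_server=True)  # Step 3: expose the app as an MCP server
```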
Converting an Existing Space
https://gradio.app/guides/building-mcp-server-with-gradio
Mcp - Building Mcp Server With Gradio Guide
You can use either a public Space or a private Space as an MCP server. If you'd like to use a private Space as an MCP server (or a ZeroGPU Space with your own quota), then you will need to provide your [Hugging Face token](https://huggingface.co/settings/token) when you make your request. To do this, simply add it as a header in your config like this:

```
{
  "mcpServers": {
    "gradio": {
      "url": "https://abidlabs-mcp-tools.hf.space/gradio_api/mcp/sse",
      "headers": {
        "Authorization": "Bearer <YOUR-HUGGING-FACE-TOKEN>"
      }
    }
  }
}
```
Private Spaces
https://gradio.app/guides/building-mcp-server-with-gradio
Mcp - Building Mcp Server With Gradio Guide
You may wish to authenticate users more precisely or let them provide other kinds of credentials or tokens in order to provide a custom experience for different users.

Gradio allows you to access the underlying `starlette.Request` that has made the tool call, which means that you can access headers, the originating IP address, or any other information that is part of the network request. To do this, simply add a parameter in your function of the type `gr.Request`, and Gradio will automatically inject the request object as the parameter.

Here's an example:

```py
import gradio as gr

def echo_headers(x, request: gr.Request):
    return str(dict(request.headers))

gr.Interface(echo_headers, "textbox", "textbox").launch(mcp_server=True)
```

This MCP server will simply ignore the user's input and echo back all of the headers from a user's request. One can build more complex apps using the same idea. See the [docs on `gr.Request`](https://www.gradio.app/main/docs/gradio/request) for more information (note that only the core Starlette attributes of the `gr.Request` object will be present; attributes such as Gradio's `.session_hash` will not be present).

Using the gr.Header class

A common pattern in MCP server development is to use authentication headers to call services on behalf of your users. Instead of using a `gr.Request` object like in the example above, you can use a `gr.Header` argument. Gradio will automatically extract that header from the incoming request (if it exists) and pass it to your function.

In the example below, the `X-API-Token` header is extracted from the incoming request and passed in as the `x_api_token` argument to `make_api_request_on_behalf_of_user`. The benefit of using `gr.Header` is that the MCP connection docs will automatically display the headers you need to supply when connecting to the server! See the image below:
Authentication and Credentials
https://gradio.app/guides/building-mcp-server-with-gradio
Mcp - Building Mcp Server With Gradio Guide
```python
import gradio as gr

def make_api_request_on_behalf_of_user(prompt: str, x_api_token: gr.Header):
    """Make a request to everyone's favorite API.

    Args:
        prompt: The prompt to send to the API.

    Returns:
        The response from the API.

    Raises:
        AssertionError: If the API token is not valid.
    """
    return "Hello from the API" if not x_api_token else "Hello from the API with token!"

demo = gr.Interface(
    make_api_request_on_behalf_of_user,
    [
        gr.Textbox(label="Prompt"),
    ],
    gr.Textbox(label="Response"),
)

demo.launch(mcp_server=True)
```

![MCP Header Connection Page](https://github.com/user-attachments/assets/e264eedf-a91a-476b-880d-5be0d5934134)

Sending Progress Updates

The Gradio MCP server automatically sends progress updates to your MCP Client based on the queue in the Gradio application. If you'd like to send custom progress updates, you can do so using the same mechanism as you would use to display progress updates in the UI of your Gradio app: by using the `gr.Progress` class! Here's an example of how to do this:

$code_mcp_progress

[Here are the docs](https://www.gradio.app/docs/gradio/progress) for the `gr.Progress` class, which can also automatically track `tqdm` calls.
Authentication and Credentials
https://gradio.app/guides/building-mcp-server-with-gradio
Mcp - Building Mcp Server With Gradio Guide
Gradio automatically sets the tool name based on the name of your function, and the description from the docstring of your function. But you may want to change how the description appears to your LLM. You can do this by using the `api_description` parameter in `Interface`, `ChatInterface`, or any event listener. This parameter takes three different kinds of values:

* `None` (default): the tool description is automatically created from the docstring of the function (or its parent's docstring if it does not have a docstring but inherits from a method that does).
* `False`: no tool description appears to the LLM.
* `str`: an arbitrary string to use as the tool description.

In addition to modifying the tool descriptions, you can also toggle which tools appear to the LLM. You can do this by setting the `show_api` parameter, which is `True` by default. Setting it to `False` hides the endpoint from the API docs and from the MCP server. If you expose multiple tools, users of your app will also be able to toggle which tools they'd like to add to their MCP server by checking boxes in the "view MCP or API" panel.

Here's an example that shows the `api_description` and `show_api` parameters in action (a standalone sketch also follows below):

$code_mcp_tools
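Since the `$code_mcp_tools` snippet is rendered separately, here is a minimal standalone sketch of both parameters. The functions and labels are hypothetical, used only to illustrate where the parameters go:

```py
import gradio as gr

def reverse(text: str) -> str:
    """Reverse the input text."""
    return text[::-1]

def internal_log(text: str) -> str:
    return text

with gr.Blocks() as demo:
    inp = gr.Textbox(label="Input")
    out = gr.Textbox(label="Output")
    btn = gr.Button("Reverse")
    # Override the docstring-derived description with a custom string
    btn.click(reverse, inp, out,
              api_description="Reverses any string the user provides.")
    # Hide this event from the API docs and the MCP server entirely
    inp.change(internal_log, inp, out, show_api=False)

demo.launch(mcp_server=True)
```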
Modifying Tool Descriptions
https://gradio.app/guides/building-mcp-server-with-gradio
Mcp - Building Mcp Server With Gradio Guide
In addition to tools (which execute functions generally and are the default for any function exposed through the Gradio MCP integration), MCP supports two other important primitives: **resources** (for exposing data) and **prompts** (for defining reusable templates). Gradio provides decorators to easily create MCP servers with all three capabilities.

Creating MCP Resources

Use the `@gr.mcp.resource` decorator on any function to expose data through your Gradio app. Resources can be static (always available at a fixed URI) or templated (with parameters in the URI).

$code_mcp_resources_and_prompts

In this example:

- The `get_greeting` function is exposed as a resource with a URI template `greeting://{name}`
- When an MCP client requests `greeting://Alice`, it receives "Hello, Alice!"
- Resources can also return images and other types of files or binary data. In order to return non-text data, you should specify the `mime_type` parameter in `@gr.mcp.resource()` and return a Base64 string from your function.

Creating MCP Prompts

Prompts help standardize how users interact with your tools. They're especially useful for complex workflows that require specific formatting or multiple steps. The `greet_user` function in the example above is decorated with `@gr.mcp.prompt()`, which:

- Makes it available as a prompt template in MCP clients
- Accepts parameters (`name` and `style`) to customize the output
- Returns a structured prompt that guides the LLM's behavior
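Since the `$code_mcp_resources_and_prompts` snippet is rendered separately, here is a rough sketch of the two decorators based on the names and parameters described above; the exact decorator arguments are assumptions, so consult the rendered example for the authoritative version:

```py
import gradio as gr

@gr.mcp.resource("greeting://{name}")  # assumed: URI template as first argument
def get_greeting(name: str) -> str:
    """Return a personalized greeting for the given name."""
    return f"Hello, {name}!"

@gr.mcp.prompt()
def greet_user(name: str, style: str = "friendly") -> str:
    """Build a reusable prompt that asks the LLM to greet a user."""
    return f"Please write a {style} greeting addressed to {name}."

with gr.Blocks() as demo:
    gr.Markdown("This app exposes an MCP resource and an MCP prompt.")

demo.launch(mcp_server=True)
```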
MCP Resources and Prompts
https://gradio.app/guides/building-mcp-server-with-gradio
Mcp - Building Mcp Server With Gradio Guide
So far, all of our MCP tools, resources, or prompts have corresponded to event listeners in the UI. This works well for functions that directly update the UI, but may not work if you wish to expose a "pure logic" function that should return raw data (e.g. a JSON object) without directly causing a UI update.

In order to expose such an MCP tool, you can create a pure Gradio API endpoint using `gr.api` (see [full docs here](https://www.gradio.app/main/docs/gradio/api)). Here's an example of creating an MCP tool that slices a list (a sketch also follows below):

$code_mcp_tool_only

Note that if you use this approach, your function signature must be fully typed, including the return value, as these signatures are used to determine the typing information for the MCP tool.
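As a rough illustration of what the `$code_mcp_tool_only` snippet looks like (the actual demo may differ), a fully typed list-slicing endpoint registered with `gr.api` might look like this:

```py
import gradio as gr

def slice_list(items: list[int], start: int, end: int) -> list[int]:
    """
    Return a slice of a list of integers.

    Args:
        items: The list to slice.
        start: The starting index of the slice.
        end: The ending index of the slice (exclusive).
    """
    return items[start:end]

with gr.Blocks() as demo:
    gr.api(slice_list)  # exposed as an API endpoint / MCP tool, with no UI

demo.launch(mcp_server=True)
```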
Adding MCP-Only Functions
https://gradio.app/guides/building-mcp-server-with-gradio
Mcp - Building Mcp Server With Gradio Guide
In some cases, you may decide not to use Gradio's built-in integration and instead manually create a FastMCP server that calls a Gradio app. This approach is useful when you want to:

- Store state / identify users between calls instead of treating every tool call completely independently
- Start the Gradio app's MCP server when a tool is called (if you are running multiple Gradio apps locally and want to save memory / GPU)

This is very doable thanks to the [Gradio Python Client](https://www.gradio.app/guides/getting-started-with-the-python-client) and the [MCP Python SDK](https://github.com/modelcontextprotocol/python-sdk)'s `FastMCP` class. Here's an example of creating a custom MCP server that connects to various Gradio apps hosted on [HuggingFace Spaces](https://huggingface.co/spaces) using the `stdio` protocol (continued in the next block):

```python
from mcp.server.fastmcp import FastMCP
from gradio_client import Client
import sys
import io
import json

mcp = FastMCP("gradio-spaces")

clients = {}

def get_client(space_id: str) -> Client:
    """Get or create a Gradio client for the specified space."""
    if space_id not in clients:
        clients[space_id] = Client(space_id)
    return clients[space_id]

@mcp.tool()
async def generate_image(prompt: str, space_id: str = "ysharma/SanaSprint") -> str:
    """Generate an image using Flux.

    Args:
        prompt: Text prompt describing the image to generate
        space_id: HuggingFace Space ID to use
    """
    client = get_client(space_id)
    result = client.predict(
        prompt=prompt,
        model_size="1.6B",
        seed=0,
        randomize_seed=True,
        width=1024,
        height=1024,
        guidance_scale=4.5,
        num_inference_steps=2,
        api_name="/infer"
    )
    return result
```
Gradio with FastMCP
https://gradio.app/guides/building-mcp-server-with-gradio
Mcp - Building Mcp Server With Gradio Guide
```python
@mcp.tool()
async def run_dia_tts(prompt: str, space_id: str = "ysharma/Dia-1.6B") -> str:
    """Text-to-Speech Synthesis.

    Args:
        prompt: Text prompt describing the conversation between speakers S1, S2
        space_id: HuggingFace Space ID to use
    """
    client = get_client(space_id)
    result = client.predict(
        text_input=f"""{prompt}""",
        audio_prompt_input=None,
        max_new_tokens=3072,
        cfg_scale=3,
        temperature=1.3,
        top_p=0.95,
        cfg_filter_top_k=30,
        speed_factor=0.94,
        api_name="/generate_audio"
    )
    return result


if __name__ == "__main__":
    sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8')
    mcp.run(transport='stdio')
```

This server exposes two tools:

1. `run_dia_tts` - Generates a conversation for the given transcript in the form of `[S1]first-sentence. [S2]second-sentence. [S1]...`
2. `generate_image` - Generates images using a fast text-to-image model

To use this MCP Server with Claude Desktop (as MCP Client):

1. Save the code to a file (e.g., `gradio_mcp_server.py`)
2. Install the required dependencies: `pip install mcp gradio-client`
3. Configure Claude Desktop to use your server by editing the configuration file at `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows):

```json
{
  "mcpServers": {
    "gradio-spaces": {
      "command": "python",
      "args": [
        "/absolute/path/to/gradio_mcp_server.py"
      ]
    }
  }
}
```

4. Restart Claude Desktop

Now, when you ask Claude about generating an image or transcribing audio, it can use your Gradio-powered tools to accomplish these tasks.
Gradio with FastMCP
https://gradio.app/guides/building-mcp-server-with-gradio
Mcp - Building Mcp Server With Gradio Guide
The MCP protocol is still in its infancy and you might see issues connecting to an MCP Server that you've built. We generally recommend using the [MCP Inspector Tool](https://github.com/modelcontextprotocol/inspector) to try connecting to and debugging your MCP Server.

Here are some things that may help:

**1. Ensure that you've provided type hints and valid docstrings for your functions**

As mentioned earlier, Gradio reads the docstrings for your functions and the type hints of input arguments to generate the description of the tool and its parameters. A valid function and docstring looks like this (note the "Args:" block with indented parameter names underneath):

```py
def image_orientation(image: Image.Image) -> str:
    """
    Returns whether image is portrait or landscape.

    Args:
        image (Image.Image): The image to check.
    """
    return "Portrait" if image.height > image.width else "Landscape"
```

Note: You can preview the schema that is created for your MCP server by visiting the `http://your-server:port/gradio_api/mcp/schema` URL.

**2. Try accepting input arguments as `str`**

Some MCP Clients do not recognize parameters that are numeric or other complex types, but all of the MCP Clients that we've tested accept `str` input parameters. When in doubt, change your input parameter to be a `str` and then cast to a specific type in the function, as in this example:

```py
def prime_factors(n: str):
    """
    Compute the prime factorization of a positive integer.

    Args:
        n (str): The integer to factorize. Must be greater than 1.
    """
    n_int = int(n)
    if n_int <= 1:
        raise ValueError("Input must be an integer greater than 1.")

    factors = []
    while n_int % 2 == 0:
        factors.append(2)
        n_int //= 2

    divisor = 3
    while divisor * divisor <= n_int:
        while n_int % divisor == 0:
            factors.append(divisor)
            n_int //= divisor
        divisor += 2

    if n_int > 1:
        factors.append(n_int)

    return factors
```
Troubleshooting your MCP Servers
https://gradio.app/guides/building-mcp-server-with-gradio
Mcp - Building Mcp Server With Gradio Guide
**3. Ensure that your MCP Client Supports SSE**

Some MCP Clients, notably [Claude Desktop](https://claude.ai/download), do not yet support SSE-based MCP Servers. In those cases, you can use a tool such as [mcp-remote](https://github.com/geelen/mcp-remote). First install [Node.js](https://nodejs.org/en/download/). Then, add the following to your own MCP Client config:

```
{
  "mcpServers": {
    "gradio": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "http://your-server:port/gradio_api/mcp/sse"
      ]
    }
  }
}
```

**4. Restart your MCP Client and MCP Server**

Some MCP Clients require you to restart them every time you update the MCP configuration. Other times, if the connection between the MCP Client and servers breaks, you might need to restart the MCP server. If all else fails, try restarting both your MCP Client and MCP Servers!
Troubleshooting your MCP Servers
https://gradio.app/guides/building-mcp-server-with-gradio
Mcp - Building Mcp Server With Gradio Guide
If you're using LLMs in your workflow, adding this server will augment them with just the right context on Gradio - which makes your experience a lot faster and smoother.

<video src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/mcp-docs.mp4" style="width:100%" controls preload> </video>

The server is running on Spaces and was launched entirely using Gradio; you can see all the code [here](https://huggingface.co/spaces/gradio/docs-mcp). For more on building an MCP server with Gradio, see the [guide on building MCP servers](./building-mcp-server-with-gradio).
Why an MCP Server?
https://gradio.app/guides/using-docs-mcp
Mcp - Using Docs Mcp Guide
For clients that support SSE (e.g. Cursor, Windsurf, Cline), simply add the following configuration to your MCP config:

```json
{
  "mcpServers": {
    "gradio": {
      "url": "https://gradio-docs-mcp.hf.space/gradio_api/mcp/sse"
    }
  }
}
```

We've included step-by-step instructions for Cursor below, but you can consult the docs for Windsurf [here](https://docs.windsurf.com/windsurf/mcp) and Cline [here](https://docs.cline.bot/mcp-servers/configuring-mcp-servers), which are similar to set up.

Cursor

1. Make sure you're using the latest version of Cursor, and go to Cursor > Settings > Cursor Settings > MCP
2. Click on '+ Add new global MCP server'
3. Copy-paste this JSON into the file that opens and then save it:

```json
{
  "mcpServers": {
    "gradio": {
      "url": "https://gradio-docs-mcp.hf.space/gradio_api/mcp/sse"
    }
  }
}
```

4. That's it! You should see the tools load and the status go green in the settings page. You may have to click the refresh icon or wait a few seconds.

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/cursor-mcp.png)

Claude Desktop

1. Since Claude Desktop only supports stdio, you will need to [install Node.js](https://nodejs.org/en/download/) to get this to work.
2. Make sure you're using the latest version of Claude Desktop, and go to Claude > Settings > Developer > Edit Config
3. Open the file with your favorite editor and copy-paste this JSON, then save the file:

```json
{
  "mcpServers": {
    "gradio": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://gradio-docs-mcp.hf.space/gradio_api/mcp/sse",
        "--transport",
        "sse-only"
      ]
    }
  }
}
```

4. Quit and re-open Claude Desktop, and you should be good to go.
Installing in the Clients
https://gradio.app/guides/using-docs-mcp
Mcp - Using Docs Mcp Guide
You should see it loaded in the Search and Tools icon or on the developer settings page. ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/claude-desktop-mcp.gif)
Installing in the Clients
https://gradio.app/guides/using-docs-mcp
Mcp - Using Docs Mcp Guide
There are currently only two tools in the server: `gradio_docs_mcp_load_gradio_docs` and `gradio_docs_mcp_search_gradio_docs`.

1. `gradio_docs_mcp_load_gradio_docs`: This tool takes no arguments and will load an /llms.txt-style summary of Gradio's latest, full documentation. This is very useful context the LLM can parse before answering questions or generating code.
2. `gradio_docs_mcp_search_gradio_docs`: This tool takes a query as an argument and will run embedding search on Gradio's docs, guides, and demos to return the most useful context for the LLM to parse.
Tools
https://gradio.app/guides/using-docs-mcp
Mcp - Using Docs Mcp Guide
The next generation of AI user interfaces is moving towards audio-native experiences. Users will be able to speak to chatbots and receive spoken responses in return. Several models have been built under this paradigm, including GPT-4o and [mini omni](https://github.com/gpt-omni/mini-omni).

In this guide, we'll walk you through building your own conversational chat application using mini omni as an example. You can see a demo of the finished app below:

<video src="https://github.com/user-attachments/assets/db36f4db-7535-49f1-a2dd-bd36c487ebdf" controls height="600" width="600" style="display: block; margin: auto;" autoplay="true" loop="true"> </video>
Introduction
https://gradio.app/guides/conversational-chatbot
Streaming - Conversational Chatbot Guide
Our application will enable the following user experience:

1. Users click a button to start recording their message
2. The app detects when the user has finished speaking and stops recording
3. The user's audio is passed to the omni model, which streams back a response
4. After omni mini finishes speaking, the user's microphone is reactivated
5. All previous spoken audio, from both the user and omni, is displayed in a chatbot component

Let's dive into the implementation details.
Application Overview
https://gradio.app/guides/conversational-chatbot
Streaming - Conversational Chatbot Guide
We'll stream the user's audio from their microphone to the server and determine if the user has stopped speaking on each new chunk of audio. Here's our `process_audio` function:

```python
import gradio as gr
import numpy as np
from utils import determine_pause

def process_audio(audio: tuple, state: AppState):
    if state.stream is None:
        state.stream = audio[1]
        state.sampling_rate = audio[0]
    else:
        state.stream = np.concatenate((state.stream, audio[1]))

    pause_detected = determine_pause(state.stream, state.sampling_rate, state)
    state.pause_detected = pause_detected

    if state.pause_detected and state.started_talking:
        return gr.Audio(recording=False), state
    return None, state
```

This function takes two inputs:

1. The current audio chunk (a tuple of `(sampling_rate, numpy array of audio)`)
2. The current application state

We'll use the following `AppState` dataclass to manage our application state:

```python
from dataclasses import dataclass, field

import numpy as np

@dataclass
class AppState:
    stream: np.ndarray | None = None
    sampling_rate: int = 0
    pause_detected: bool = False
    started_talking: bool = False  # set once speech is detected; read by process_audio and response
    stopped: bool = False
    # Mutable defaults must use default_factory in a dataclass
    conversation: list = field(default_factory=list)
```

The function concatenates new audio chunks to the existing stream and checks if the user has stopped speaking. If a pause is detected, it returns an update to stop recording. Otherwise, it returns `None` to indicate no changes.

The implementation of the `determine_pause` function is specific to the omni-mini project and can be found [here](https://huggingface.co/spaces/gradio/omni-mini/blob/eb027808c7bfe5179b46d9352e3fa1813a45f7c3/app.py#L98); a simplified sketch follows below.
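For reference, a naive, energy-based pause detector might look like the sketch below. This is an illustration only, not the omni-mini implementation, which uses a more robust method:

```python
import numpy as np

def determine_pause(stream: np.ndarray, sampling_rate: int, state) -> bool:
    """Report a pause when the most recent second of audio is quiet.

    A simplified stand-in for the project-specific implementation.
    """
    if len(stream) < sampling_rate:
        return False
    window = stream[-sampling_rate:].astype(np.float64)  # last ~1 second
    rms = np.sqrt(np.mean(window ** 2))
    peak = np.abs(stream).max() or 1
    if rms > 0.1 * peak:
        state.started_talking = True  # loud enough: the user is speaking
        return False
    # Quiet now; only counts as a pause if the user spoke earlier
    return state.started_talking
```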
Processing User Audio
https://gradio.app/guides/conversational-chatbot
Streaming - Conversational Chatbot Guide
After processing the user's audio, we need to generate and stream the chatbot's response. Here's our `response` function:

```python
import io
import tempfile
from pydub import AudioSegment

def response(state: AppState):
    if not state.pause_detected and not state.started_talking:
        return None, AppState()

    audio_buffer = io.BytesIO()
    segment = AudioSegment(
        state.stream.tobytes(),
        frame_rate=state.sampling_rate,
        sample_width=state.stream.dtype.itemsize,
        channels=(1 if len(state.stream.shape) == 1 else state.stream.shape[1]),
    )
    segment.export(audio_buffer, format="wav")

    with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as f:
        f.write(audio_buffer.getvalue())

    state.conversation.append({"role": "user",
                               "content": {"path": f.name,
                                           "mime_type": "audio/wav"}})

    output_buffer = b""

    for mp3_bytes in speaking(audio_buffer.getvalue()):
        output_buffer += mp3_bytes
        yield mp3_bytes, state

    with tempfile.NamedTemporaryFile(suffix=".mp3", delete=False) as f:
        f.write(output_buffer)

    state.conversation.append({"role": "assistant",
                               "content": {"path": f.name,
                                           "mime_type": "audio/mp3"}})

    yield None, AppState(conversation=state.conversation)
```

This function:

1. Converts the user's audio to a WAV file
2. Adds the user's message to the conversation history
3. Generates and streams the chatbot's response using the `speaking` function
4. Saves the chatbot's response as an MP3 file
5. Adds the chatbot's response to the conversation history

Note: The implementation of the `speaking` function is specific to the omni-mini project and can be found [here](https://huggingface.co/spaces/gradio/omni-mini/blob/main/app.py#L116).
Generating the Response
https://gradio.app/guides/conversational-chatbot
Streaming - Conversational Chatbot Guide
Now let's put it all together using Gradio's Blocks API:

```python
import gradio as gr

def start_recording_user(state: AppState):
    if not state.stopped:
        return gr.Audio(recording=True)

with gr.Blocks() as demo:
    with gr.Row():
        with gr.Column():
            input_audio = gr.Audio(
                label="Input Audio", sources="microphone", type="numpy"
            )
        with gr.Column():
            chatbot = gr.Chatbot(label="Conversation", type="messages")
            output_audio = gr.Audio(label="Output Audio", streaming=True, autoplay=True)
    state = gr.State(value=AppState())

    stream = input_audio.stream(
        process_audio,
        [input_audio, state],
        [input_audio, state],
        stream_every=0.5,
        time_limit=30,
    )
    respond = input_audio.stop_recording(
        response, [state], [output_audio, state]
    )
    respond.then(lambda s: s.conversation, [state], [chatbot])

    restart = output_audio.stop(
        start_recording_user, [state], [input_audio]
    )

    cancel = gr.Button("Stop Conversation", variant="stop")
    cancel.click(lambda: (AppState(stopped=True), gr.Audio(recording=False)),
                 None, [state, input_audio], cancels=[respond, restart])

if __name__ == "__main__":
    demo.launch()
```

This setup creates a user interface with:

- An input audio component for recording user messages
- A chatbot component to display the conversation history
- An output audio component for the chatbot's responses
- A button to stop and reset the conversation

The app streams user audio in 0.5-second chunks, processes it, generates responses, and updates the conversation history accordingly.
Building the Gradio App
https://gradio.app/guides/conversational-chatbot
Streaming - Conversational Chatbot Guide
This guide demonstrates how to build a conversational chatbot application using Gradio and the mini omni model. You can adapt this framework to create various audio-based chatbot demos. To see the full application in action, visit the Hugging Face Spaces demo: https://huggingface.co/spaces/gradio/omni-mini Feel free to experiment with different models, audio processing techniques, or user interface designs to create your own unique conversational AI experiences!
Conclusion
https://gradio.app/guides/conversational-chatbot
Streaming - Conversational Chatbot Guide
Modern voice applications should feel natural and responsive, moving beyond the traditional "click-to-record" pattern. By combining Groq's fast inference capabilities with automatic speech detection, we can create a more intuitive interaction model where users can simply start talking whenever they want to engage with the AI.

> Credits: VAD and Gradio code inspired by [WillHeld's Diva-audio-chat](https://huggingface.co/spaces/WillHeld/diva-audio-chat/tree/main).

In this tutorial, you will learn how to create a multimodal Gradio and Groq app that has automatic speech detection. You can also watch the full video tutorial, which includes a demo of the application:

<iframe width="560" height="315" src="https://www.youtube.com/embed/azXaioGdm2Q" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
Introduction
https://gradio.app/guides/automatic-voice-detection
Streaming - Automatic Voice Detection Guide
Many voice apps currently work by the user clicking record, speaking, then stopping the recording. While this can be a powerful demo, the most natural mode of interaction with voice requires the app to dynamically detect when the user is speaking, so they can talk back and forth without having to continually click a record button.

Creating a natural interaction with voice and text requires a dynamic and low-latency response. Thus, we need both automatic voice detection and fast inference. With @ricky0123/vad-web powering speech detection and Groq powering the LLM, both of these requirements are met. Groq provides a lightning-fast response, and Gradio allows for easy creation of impressively functional apps.

This tutorial shows you how to build a calorie-tracking app in which you speak to an AI that automatically detects when you start and stop your response. The AI then replies with its own text response, asking questions that let it estimate the calories in your last meal.
Background
https://gradio.app/guides/automatic-voice-detection
Streaming - Automatic Voice Detection Guide
- **Gradio**: Provides the web interface and audio handling capabilities
- **@ricky0123/vad-web**: Handles voice activity detection
- **Groq**: Powers fast LLM inference for natural conversations
- **Whisper**: Transcribes speech to text

Setting Up the Environment

First, let's install and import our essential libraries and set up a client for using the Groq API. Here's how to do it:

`requirements.txt`
```
gradio
groq
numpy
soundfile
librosa
spaces
xxhash
datasets
```

`app.py`
```python
import os
from dataclasses import dataclass, field
from typing import Any

import groq
import gradio as gr
import soundfile as sf

# Initialize Groq client securely
api_key = os.environ.get("GROQ_API_KEY")
if not api_key:
    raise ValueError("Please set the GROQ_API_KEY environment variable.")
client = groq.Client(api_key=api_key)
```

Here, we're pulling in key libraries to interact with the Groq API, build a sleek UI with Gradio, and handle audio data. We're accessing the Groq API key securely with a key stored in an environment variable, which is a security best practice for avoiding leaking the API key.

---

State Management for Seamless Conversations

We need a way to keep track of our conversation history, so the chatbot remembers past interactions, and manage other states like whether recording is currently active. To do this, let's create an `AppState` class:

```python
@dataclass
class AppState:
    conversation: list = field(default_factory=list)
    stopped: bool = False
    model_outs: Any = None
```

Our `AppState` class is a handy tool for managing conversation history and tracking whether recording is on or off. Each instance will have its own fresh list of conversations, making sure chat history is isolated to each session.

---

Transcribing Audio with Whisper on Groq
Key Components
https://gradio.app/guides/automatic-voice-detection
Streaming - Automatic Voice Detection Guide
Next, we'll create a function to transcribe the user's audio input into text using Whisper, a powerful transcription model hosted on Groq. This transcription will also help us determine whether there's meaningful speech in the input. Here's how:

```python
def transcribe_audio(client, file_name):
    if file_name is None:
        return None

    try:
        with open(file_name, "rb") as audio_file:
            response = client.audio.transcriptions.with_raw_response.create(
                model="whisper-large-v3-turbo",
                file=("audio.wav", audio_file),
                response_format="verbose_json",
            )
            completion = process_whisper_response(response.parse())
            return completion
    except Exception as e:
        print(f"Error in transcription: {e}")
        return f"Error in transcription: {str(e)}"
```

This function opens the audio file and sends it to Groq's Whisper model for transcription, requesting detailed JSON output. `verbose_json` is needed to get the information required to determine if speech was included in the audio. We also handle any potential errors so our app doesn't fully crash if there's an issue with the API request.

```python
def process_whisper_response(completion):
    """
    Process Whisper transcription response and return text or None based on no_speech_prob

    Args:
        completion: Whisper transcription response object

    Returns:
        str or None: Transcribed text if no_speech_prob <= 0.7, otherwise None
    """
    if completion.segments and len(completion.segments) > 0:
        no_speech_prob = completion.segments[0].get('no_speech_prob', 0)
        print("No speech prob:", no_speech_prob)

        if no_speech_prob > 0.7:
            return None

        return completion.text.strip()

    return None
```
Key Components
https://gradio.app/guides/automatic-voice-detection
Streaming - Automatic Voice Detection Guide
We also need to interpret the audio data response. The `process_whisper_response` function takes the resulting completion from Whisper and checks if the audio was just background noise or had actual speech that was transcribed. It uses a threshold of 0.7 to interpret the `no_speech_prob`, and will return `None` if there was no speech. Otherwise, it will return the text transcript of the conversational response from the human.

---

Adding Conversational Intelligence with LLM Integration

Our chatbot needs to provide intelligent, friendly responses that flow naturally. We'll use a Groq-hosted Llama-3.2 for this:

```python
def generate_chat_completion(client, history):
    messages = []
    messages.append(
        {
            "role": "system",
            "content": "In conversation with the user, ask questions to estimate and provide (1) total calories, (2) protein, carbs, and fat in grams, (3) fiber and sugar content. Only ask *one question at a time*. Be conversational and natural.",
        }
    )

    for message in history:
        messages.append(message)

    try:
        completion = client.chat.completions.create(
            model="llama-3.2-11b-vision-preview",
            messages=messages,
        )
        return completion.choices[0].message.content
    except Exception as e:
        return f"Error in generating chat completion: {str(e)}"
```

We're defining a system prompt to guide the chatbot's behavior, ensuring it asks one question at a time and keeps things conversational. This setup also includes error handling to ensure the app gracefully manages any issues.

---

Voice Activity Detection for Hands-Free Interaction

To make our chatbot hands-free, we'll add Voice Activity Detection (VAD) to automatically detect when someone starts or stops speaking.
Key Components
https://gradio.app/guides/automatic-voice-detection
Streaming - Automatic Voice Detection Guide
Here's how to implement it using ONNX in JavaScript (the package versions in the two CDN URLs were garbled in extraction, so `<version>` below is a placeholder):

```javascript
async function main() {
  const script1 = document.createElement("script");
  script1.src = "https://cdn.jsdelivr.net/npm/onnxruntime-web@<version>/dist/ort.js";
  document.head.appendChild(script1)
  const script2 = document.createElement("script");
  script2.onload = async () => {
    console.log("vad loaded");
    var record = document.querySelector('.record-button');
    record.textContent = "Just Start Talking!"
    const myvad = await vad.MicVAD.new({
      onSpeechStart: () => {
        var record = document.querySelector('.record-button');
        var player = document.querySelector('#streaming-out')
        if (record != null && (player == null || player.paused)) {
          record.click();
        }
      },
      onSpeechEnd: (audio) => {
        var stop = document.querySelector('.stop-button');
        if (stop != null) {
          stop.click();
        }
      }
    })
    myvad.start()
  }
  script2.src = "https://cdn.jsdelivr.net/npm/@ricky0123/vad-web@<version>/dist/bundle.min.js";
}
```

This script loads our VAD model and sets up functions to start and stop recording automatically. When the user starts speaking, it triggers the recording, and when they stop, it ends the recording.

---

Building a User Interface with Gradio

Now, let's create an intuitive and visually appealing user interface with Gradio. This interface will include an audio input for capturing voice, a chat window for displaying responses, and state management to keep things synchronized.

```python
with gr.Blocks(theme=theme, js=js) as demo:
    with gr.Row():
        input_audio = gr.Audio(
            label="Input Audio",
            sources=["microphone"],
            type="numpy",
            streaming=False,
            waveform_options=gr.WaveformOptions(waveform_color="#B83A4B"),
        )
    with gr.Row():
        chatbot = gr.Chatbot(label="Conversation", type="messages")
    state = gr.State(value=AppState())
```
Key Components
https://gradio.app/guides/automatic-voice-detection
Streaming - Automatic Voice Detection Guide
type="numpy", streaming=False, waveform_options=gr.WaveformOptions(waveform_color="B83A4B"), ) with gr.Row(): chatbot = gr.Chatbot(label="Conversation", type="messages") state = gr.State(value=AppState()) ``` In this code block, we’re using Gradio’s `Blocks` API to create an interface with an audio input, a chat display, and an application state manager. The color customization for the waveform adds a nice visual touch. --- Handling Recording and Responses Finally, let’s link the recording and response components to ensure the app reacts smoothly to user inputs and provides responses in real-time. ```python stream = input_audio.start_recording( process_audio, [input_audio, state], [input_audio, state], ) respond = input_audio.stop_recording( response, [state, input_audio], [state, chatbot] ) ``` These lines set up event listeners for starting and stopping the recording, processing the audio input, and generating responses. By linking these events, we create a cohesive experience where users can simply talk, and the chatbot handles the rest. ---
Key Components
https://gradio.app/guides/automatic-voice-detection
Streaming - Automatic Voice Detection Guide
1. When you open the app, the VAD system automatically initializes and starts listening for speech
2. As soon as you start talking, it triggers the recording automatically
3. When you stop speaking, the recording ends and:
   - The audio is transcribed using Whisper
   - The transcribed text is sent to the LLM
   - The LLM generates a response about calorie tracking
   - The response is displayed in the chat interface
4. This creates a natural back-and-forth conversation where you can simply talk about your meals and get instant feedback on nutritional content

This app demonstrates how to create a natural voice interface that feels responsive and intuitive. By combining Groq's fast inference with automatic speech detection, we've eliminated the need for manual recording controls while maintaining high-quality interactions. The result is a practical calorie tracking assistant that users can simply talk to as naturally as they would to a human nutritionist.

Link to GitHub repository: [Groq Gradio Basics](https://github.com/bklieger-groq/gradio-groq-basics/tree/main/calorie-tracker)
Summary
https://gradio.app/guides/automatic-voice-detection
Streaming - Automatic Voice Detection Guide
First, we'll install the following requirements in our system:

```
opencv-python
torch
transformers>=4.43.0
spaces
```

Then, we'll download the model from the Hugging Face Hub:

```python
from transformers import RTDetrForObjectDetection, RTDetrImageProcessor

image_processor = RTDetrImageProcessor.from_pretrained("PekingU/rtdetr_r50vd")
model = RTDetrForObjectDetection.from_pretrained("PekingU/rtdetr_r50vd").to("cuda")
```

We're moving the model to the GPU. We'll be deploying our model to Hugging Face Spaces and running the inference in the [free ZeroGPU cluster](https://huggingface.co/zero-gpu-explorers).
Setting up the Model
https://gradio.app/guides/object-detection-from-video
Streaming - Object Detection From Video Guide
Our inference function will accept a video and a desired confidence threshold. Object detection models identify many objects and assign a confidence score to each object. The lower the confidence, the higher the chance of a false positive. So we will let our users set the confidence threshold.

Our function will iterate over the frames in the video and run the RT-DETR model over each frame. We will then draw the bounding boxes for each detected object in the frame and save the frame to a new output video. The function will yield each output video in chunks of two seconds.

In order to keep inference times as low as possible on ZeroGPU (there is a time-based quota), we will halve the original frames-per-second in the output video and resize the input frames to be half the original size before running the model.

The code for the inference function is below - we'll go over it piece by piece.

```python
import spaces
import cv2
from PIL import Image
import torch
import time
import numpy as np
import uuid

from draw_boxes import draw_bounding_boxes

SUBSAMPLE = 2

@spaces.GPU
def stream_object_detection(video, conf_threshold):
    cap = cv2.VideoCapture(video)

    # This means we will output mp4 videos
    video_codec = cv2.VideoWriter_fourcc(*"mp4v")  # type: ignore
    fps = int(cap.get(cv2.CAP_PROP_FPS))

    desired_fps = fps // SUBSAMPLE
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) // 2
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) // 2

    iterating, frame = cap.read()

    n_frames = 0

    # Use UUID to create a unique video file
    output_video_name = f"output_{uuid.uuid4()}.mp4"

    # Output Video
    output_video = cv2.VideoWriter(output_video_name, video_codec, desired_fps, (width, height))  # type: ignore
    batch = []

    while iterating:
        # (loop body continued in the next block)
```
The Inference Function
https://gradio.app/guides/object-detection-from-video
Streaming - Object Detection From Video Guide
```python
        frame = cv2.resize(frame, (0, 0), fx=0.5, fy=0.5)
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

        if n_frames % SUBSAMPLE == 0:
            batch.append(frame)
        if len(batch) == 2 * desired_fps:
            inputs = image_processor(images=batch, return_tensors="pt").to("cuda")

            with torch.no_grad():
                outputs = model(**inputs)
            boxes = image_processor.post_process_object_detection(
                outputs,
                target_sizes=torch.tensor([(height, width)] * len(batch)),
                threshold=conf_threshold)

            for i, (array, box) in enumerate(zip(batch, boxes)):
                pil_image = draw_bounding_boxes(Image.fromarray(array), box, model, conf_threshold)
                frame = np.array(pil_image)
                # Convert RGB to BGR
                frame = frame[:, :, ::-1].copy()
                output_video.write(frame)

            batch = []
            output_video.release()
            yield output_video_name
            output_video_name = f"output_{uuid.uuid4()}.mp4"
            output_video = cv2.VideoWriter(output_video_name, video_codec, desired_fps, (width, height))  # type: ignore

        iterating, frame = cap.read()
        n_frames += 1
```

1. **Reading from the Video**

One of the industry standards for creating videos in Python is OpenCV, so we will use it in this app.

The `cap` variable is how we will read from the input video. Whenever we call `cap.read()`, we are reading the next frame in the video.

In order to stream video in Gradio, we need to yield a different video file for each "chunk" of the output video. We create the next video file to write to with the `output_video = cv2.VideoWriter(output_video_name, video_codec, desired_fps, (width, height))` line. The `video_codec` is how we specify the type of video file. Only "mp4" and "ts" files are supported for video streaming at the moment.
The Inference Function
https://gradio.app/guides/object-detection-from-video
Streaming - Object Detection From Video Guide
2. **The Inference Loop**

For each frame in the video, we will resize it to be half the size. OpenCV reads files in `BGR` format, so we will convert to the expected `RGB` format of transformers. That's what the first two lines of the while loop are doing.

We take every other frame and add it to a `batch` list so that the output video is half the original FPS. When the batch covers two seconds of video, we will run the model. The two-second threshold was chosen to keep the processing time of each batch small enough so that video is smoothly displayed in the server while not requiring too many separate forward passes. In order for video streaming to work properly in Gradio, the batch size should be at least 1 second.

We run the forward pass of the model and then use the `post_process_object_detection` method of the model to scale the detected bounding boxes to the size of the input frame.

We make use of a custom function to draw the bounding boxes (source [here](https://huggingface.co/spaces/gradio/rt-detr-object-detection/blob/main/draw_boxes.py#L14)). We then have to convert from `RGB` to `BGR` before writing back to the output video.

Once we have finished processing the batch, we create a new output video file for the next batch.
The Inference Function
https://gradio.app/guides/object-detection-from-video
Streaming - Object Detection From Video Guide
The UI code is pretty similar to other kinds of Gradio apps. We'll use a standard two-column layout so that users can see the input and output videos side by side. In order for streaming to work, we have to set `streaming=True` in the output video. Setting the video to autoplay is not necessary, but it's a better experience for users.

```python
import gradio as gr

with gr.Blocks() as app:
    gr.HTML(
        """
    <h1 style='text-align: center'>
    Video Object Detection with <a href='https://huggingface.co/PekingU/rtdetr_r101vd_coco_o365' target='_blank'>RT-DETR</a>
    </h1>
    """)
    with gr.Row():
        with gr.Column():
            video = gr.Video(label="Video Source")
            conf_threshold = gr.Slider(
                label="Confidence Threshold",
                minimum=0.0,
                maximum=1.0,
                step=0.05,
                value=0.30,
            )
        with gr.Column():
            output_video = gr.Video(label="Processed Video", streaming=True, autoplay=True)

    video.upload(
        fn=stream_object_detection,
        inputs=[video, conf_threshold],
        outputs=[output_video],
    )
```
The Gradio Demo
https://gradio.app/guides/object-detection-from-video
Streaming - Object Detection From Video Guide