How to Use FastAPI MCP Server: A Complete Guide
In today's AI-driven world, integrating large language models (LLMs) with APIs and tools has become essential for building powerful applications. The Model Context Protocol (MCP) has emerged as a standard way to expose data sources and tools to LLMs, enabling rich interactions between AI models and external systems. This guide will walk you through using FastAPI MCP server, a zero-configuration tool that automatically transforms your existing FastAPI endpoints into MCP-compatible tools.
What is FastAPI MCP?
FastAPI MCP is an elegant solution that lets you expose your FastAPI endpoints as Model Context Protocol (MCP) tools with minimal effort. It bridges the gap between your existing APIs and the AI tooling ecosystem, allowing AI assistants such as Claude (Anthropic's assistant), AI-powered editors such as Cursor, and other MCP-compatible clients to interact directly with your services.
https://github.com/tadata-org/fastapi_mcp
Understanding the Model Context Protocol (MCP)
Before diving into FastAPI MCP, let's understand what MCP is and why it matters:
MCP (Model Context Protocol) is an open protocol developed to standardize how applications provide context to large language models. Think of it as a "USB-C port for AI applications" that enables seamless integration between AI models and external data sources or tools.
The protocol allows language models to:
- Discover available tools and their capabilities
- Understand how to use these tools
- Retrieve data from various sources
- Execute operations through standardized interfaces
MCP has become essential for building AI agents that can interact with various systems, access relevant data, and perform tasks on behalf of users.
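Concretely, MCP messages are JSON-RPC 2.0. The sketch below (plain Python, no MCP library) shows roughly what a tool-discovery request and a tool call look like on the wire; the tool name and arguments are hypothetical, borrowed from the FastAPI example later in this guide:

```python
import json

# MCP is built on JSON-RPC 2.0. A client first asks the server
# which tools it offers ("tools/list"), then invokes one ("tools/call").
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Invoking a discovered tool. The tool name "get_item" and its
# arguments are hypothetical, matching the example endpoint below.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_item", "arguments": {"item_id": 7}},
}

print(json.dumps(list_request))
print(json.dumps(call_request, indent=2))
```

In practice an MCP client library builds and transports these messages for you; the point is only that discovery and invocation are standardized, so any compliant client can drive any compliant server.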
Getting Started with FastAPI MCP
Installation
First, you'll need to install the FastAPI MCP package. The recommended way is to use UV, a fast Python package installer:
uv add fastapi-mcp
Alternatively, you can install using pip:
pip install fastapi-mcp
Basic Implementation
The most straightforward way to use FastAPI MCP is to add an MCP server directly to your existing FastAPI application. Here's a minimal example:
from fastapi import FastAPI
from fastapi_mcp import FastApiMCP

# Your existing FastAPI application
app = FastAPI()

# Define your endpoints as you normally would
@app.get("/items/{item_id}", operation_id="get_item")
async def read_item(item_id: int):
    return {"item_id": item_id, "name": f"Item {item_id}"}

# Add the MCP server to your FastAPI app
mcp = FastApiMCP(
    app,
    name="My API MCP",                    # Name for your MCP server
    description="MCP server for my API",  # Description
    base_url="http://localhost:8000"      # Where your API is running
)

# Mount the MCP server to your FastAPI app
mcp.mount()

# Run the app
if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
That's it! With just a few lines of code, your FastAPI endpoints are now available as MCP tools at the `/mcp` path. The MCP server automatically discovers all your FastAPI endpoints, including their request and response models, and exposes them as tools that can be used by AI assistants.
Tool Naming Best Practices
FastAPI MCP uses the `operation_id` from your FastAPI routes as the MCP tool names. To make your tools more intuitive for AI models to use, consider adding explicit `operation_id` parameters to your route definitions:
@app.get(
    "/users/{user_id}",
    operation_id="get_user_info"  # This becomes the tool name
)
async def read_user(user_id: int):
    return {"user_id": user_id}
Without this explicit naming, FastAPI auto-generates operation IDs that might be cryptic (like "read_user_users__user_id__get"), making it harder for AI models to understand when to use which tool.
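The auto-generated name follows a mechanical rule: the handler's function name, then the route path with non-alphanumeric characters replaced by underscores, then the lowercased HTTP method. A rough pure-Python approximation of that rule, for illustration only:

```python
import re

def default_operation_id(func_name: str, path: str, method: str) -> str:
    # Approximates FastAPI's default operation-id generation:
    # function name + route path (non-word chars -> "_") + HTTP method.
    normalized_path = re.sub(r"\W", "_", path)
    return f"{func_name}{normalized_path}_{method.lower()}"

print(default_operation_id("read_user", "/users/{user_id}", "GET"))
# read_user_users__user_id__get
```

Names like that carry little meaning for a model choosing between tools, which is why a short, explicit `operation_id` such as `get_user_info` is worth the extra parameter.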
Advanced Configuration
FastAPI MCP provides several options to customize how your MCP server behaves:
Enhancing Tool Descriptions
You can include more detailed schemas in your tool descriptions to help AI models better understand how to use your endpoints:
mcp = FastApiMCP(
    app,
    name="My API MCP",
    base_url="http://localhost:8000",
    describe_all_responses=True,        # Include all possible response schemas
    describe_full_response_schema=True  # Include full JSON schema in descriptions
)
Selective Tool Exposure
You might not want all your FastAPI endpoints exposed as MCP tools. FastAPI MCP allows you to control which endpoints are exposed through several filtering mechanisms:
By Operation IDs:
# Include only specific operations
mcp = FastApiMCP(
    app,
    include_operations=["get_user", "create_user"]
)

# Or exclude specific operations
mcp = FastApiMCP(
    app,
    exclude_operations=["delete_user"]
)
By Tags:
# Include only operations with specific tags
mcp = FastApiMCP(
    app,
    include_tags=["users", "public"]
)

# Or exclude operations with specific tags
mcp = FastApiMCP(
    app,
    exclude_tags=["admin", "internal"]
)
Combined Filtering:
You can combine operation filtering with tag filtering:
# Include endpoints that have either the specified operation ID OR the specified tag
mcp = FastApiMCP(
    app,
    include_operations=["user_login"],
    include_tags=["public"]
)
Note that you cannot use both `include_operations` and `exclude_operations` at the same time, nor can you use both `include_tags` and `exclude_tags` simultaneously.
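The filtering semantics described above can be summarized in a few lines of plain Python. This is an illustrative sketch, not FastAPI MCP's actual implementation: include filters are OR-ed together, exclude filters always win, and mixing include with exclude of the same kind is an error.

```python
def select_tools(routes, include_operations=None, exclude_operations=None,
                 include_tags=None, exclude_tags=None):
    """Sketch of the filter logic; routes is a list of (operation_id, tags)."""
    if include_operations and exclude_operations:
        raise ValueError("cannot combine include_operations with exclude_operations")
    if include_tags and exclude_tags:
        raise ValueError("cannot combine include_tags with exclude_tags")
    selected = []
    for op_id, tags in routes:
        if exclude_operations and op_id in exclude_operations:
            continue
        if exclude_tags and set(tags) & set(exclude_tags):
            continue
        if include_operations or include_tags:
            # Include filters are OR-ed: keep a route matching either list.
            by_op = bool(include_operations) and op_id in include_operations
            by_tag = bool(include_tags) and bool(set(tags) & set(include_tags))
            if not (by_op or by_tag):
                continue
        selected.append(op_id)
    return selected

routes = [("user_login", ["auth"]), ("get_user", ["public"]), ("delete_user", ["admin"])]
print(select_tools(routes, include_operations=["user_login"], include_tags=["public"]))
# ['user_login', 'get_user']
```

With the combined filter, `user_login` matches by operation ID and `get_user` matches by tag, so both are exposed while `delete_user` is not.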
Deployment Options
Separate Server Deployment
One powerful feature of FastAPI MCP is the ability to deploy the MCP server separately from your original API. This provides flexibility in how you structure your services:
from fastapi import FastAPI
from fastapi_mcp import FastApiMCP

# Your API app
api_app = FastAPI()
# Define endpoints on api_app...

# A separate app for the MCP server
mcp_app = FastAPI()

# Create MCP server from the API app
mcp = FastApiMCP(
    api_app,
    base_url="http://api-host:8001"  # The URL where the API app will be running
)

# Mount the MCP server to the separate app
mcp.mount(mcp_app)

# Now you can run both apps separately:
# uvicorn main:api_app --host api-host --port 8001
# uvicorn main:mcp_app --host mcp-host --port 8000
This architecture allows you to:
- Scale your API and MCP server independently
- Apply different security policies to each
- Avoid exposing the MCP server in certain environments
Refreshing the Server
If you add endpoints to your FastAPI app after creating the MCP server, you'll need to refresh the server to include them:
app = FastAPI()
mcp = FastApiMCP(app)
mcp.mount()

# Add new endpoints after the MCP server was created
@app.get("/new/endpoint/", operation_id="new_endpoint")
async def new_endpoint():
    return {"message": "Hello, world!"}

# Refresh the MCP server to pick up the new endpoint
mcp.setup_server()
Connecting to Your MCP Server
Once your FastAPI app with MCP integration is running, you can connect to it using MCP-compatible clients:
Using Server-Sent Events (SSE)
For clients that support SSE, such as Cursor:
- Run your application.
- In Cursor → Settings → MCP, use your MCP server endpoint (e.g., `http://localhost:8000/mcp`) as the SSE URL.
- Cursor will automatically discover all available tools and resources.
Using MCP-Proxy with stdio
For clients that don't support SSE directly, like Claude Desktop:
1. Run your application.
2. Install `mcp-proxy`:

   uv tool install mcp-proxy

3. Add the proxy to your Claude Desktop MCP config file (`claude_desktop_config.json`):

For Windows:

{
  "mcpServers": {
    "my-api-mcp-proxy": {
      "command": "mcp-proxy",
      "args": ["http://127.0.0.1:8000/mcp"]
    }
  }
}

For macOS:

{
  "mcpServers": {
    "my-api-mcp-proxy": {
      "command": "/Full/Path/To/Your/Executable/mcp-proxy",
      "args": ["http://127.0.0.1:8000/mcp"]
    }
  }
}

To find the path to `mcp-proxy` on macOS, run `which mcp-proxy` in Terminal.
Real-World Use Cases
FastAPI MCP is versatile and can be used in various scenarios:
- AI-Powered Documentation: Expose your API to AI assistants that can help users understand and utilize your services.
- Internal Tooling: Create AI agents that can interact with your company's internal systems through a standardized interface.
- Data Access: Give AI assistants controlled access to your databases or data services through well-defined API endpoints.
- Automation: Enable AI agents to perform complex workflows by interacting with multiple systems through your API.
Best Practices
- Clear Operation IDs: Always use explicit, descriptive `operation_id` values that clearly indicate what the operation does.
- Comprehensive Documentation: Add detailed descriptions to your FastAPI endpoints, as these will be passed to the MCP tools.
- Proper Input Validation: Since AI models will be calling your endpoints, ensure robust input validation to prevent unexpected behavior.
- Security Considerations: Apply appropriate security measures, especially if your MCP server is exposed to external users.
- Monitoring and Logging: Track how AI assistants are using your tools to identify patterns and potential improvements.
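Input validation deserves particular care, because tool calls originate from a model rather than a trusted client. A minimal framework-free sketch of the idea (the field name and the 1–1,000,000 bound are made up for illustration; in FastAPI you would normally express such constraints with type hints and Pydantic models):

```python
def validate_item_id(raw) -> int:
    # Coerce and bound-check an id before it reaches storage.
    # The 1..1_000_000 range is an arbitrary illustrative bound.
    try:
        item_id = int(raw)
    except (TypeError, ValueError):
        raise ValueError(f"item_id must be an integer, got {raw!r}")
    if not 1 <= item_id <= 1_000_000:
        raise ValueError(f"item_id out of range: {item_id}")
    return item_id

print(validate_item_id("42"))  # 42
```

Rejecting malformed arguments early gives the calling model a clear error message it can recover from, instead of an opaque failure deeper in your stack.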
Conclusion
FastAPI MCP offers a streamlined approach to integrating your existing FastAPI applications with the growing ecosystem of AI models and tools. With minimal configuration, you can transform your APIs into MCP-compatible tools that can be leveraged by AI assistants to provide enhanced user experiences.
Whether you're building a complex AI agent or simply want to make your APIs accessible to language models, FastAPI MCP provides a flexible, powerful solution that grows with your needs.
By following this guide, you should now be equipped to implement, customize, and deploy FastAPI MCP servers for various use cases, contributing to the next generation of AI-powered applications.