Update README.md
README.md
CHANGED
@@ -49,136 +49,168 @@ The user interface is organized into clear tabs for different functions:

---

This application is a fully compliant **Model Context Protocol (MCP)** server, allowing AI agents to use its functions as tools.

### Connection Endpoint

The server's public endpoint is:
`https://agents-mcp-hackathon-huggingfacedoc.hf.space/gradio_api/mcp/sse`

You can test this server immediately using this public MCP Client Space. Just paste the server URL above into the client's URL input field.

**[➡️ Test with Public MCP Client](https://huggingface.co/spaces/ABDALLALSWAITI/MCPclient)**

**1. Setup Your Environment**

Create a `requirements.txt` file with the following dependencies:

```text
gradio
smolagents
litellm
```

Then, create a virtual environment and install the dependencies:

```bash
# Create and activate a virtual environment
python -m venv venv
source venv/bin/activate  # On Windows use `venv\Scripts\activate`

# Install the required libraries
pip install -r requirements.txt
```

**2. Set Your API Key**

This client uses the Google Gemini API for its reasoning model. Set your API key as an environment variable:

- On **macOS/Linux**:

```bash
export GOOGLE_API_KEY="YOUR_API_KEY_HERE"
```

- On **Windows (Command Prompt)**:

```cmd
set GOOGLE_API_KEY=YOUR_API_KEY_HERE
```

**3. Create and Run the Client Script**

Save the following code as `client_app.py`:

```python
import gradio as gr
import os
from smolagents import CodeAgent, MCPClient
from smolagents import LiteLLMModel

# Get a Gemini API key from Google AI Studio: https://aistudio.google.com/app/apikey
API_KEY = os.getenv("GOOGLE_API_KEY")

# This is the public URL of the MCP server we built.
MCP_SERVER_URL = "https://agents-mcp-hackathon-huggingfacedoc.hf.space/gradio_api/mcp/sse"

if not API_KEY:
    raise ValueError("GOOGLE_API_KEY environment variable not set. Please set your API key to run this app.")

# --- Main Application ---
try:
    print(f"🔌 Connecting to MCP Server: {MCP_SERVER_URL}")
    mcp_client = MCPClient(
        {"url": MCP_SERVER_URL}
    )
    tools = mcp_client.get_tools()
    print(f"✅ Successfully connected. Found {len(tools)} tools.")

    # We use LiteLLM to connect to the Gemini API
    model = LiteLLMModel(
        model_id="gemini/gemini-1.5-flash",
        temperature=0.2,
        api_key=API_KEY
    )

    # The CodeAgent is effective at using tools
    agent = CodeAgent(tools=[*tools], model=model)

    # Create the Gradio ChatInterface
    demo = gr.ChatInterface(
        fn=lambda message, history: str(agent.run(message)),
        title="📚 Hugging Face Research Agent",
        description="This agent uses the Hugging Face Information Server to answer questions about models, datasets, and documentation.",
        examples=[
            "What is a Hugging Face pipeline?",
            "Find 3 popular models for text classification",
            "Get the info for the 'squad' dataset",
            "What is PEFT?"
        ],
    )

    demo.launch()

finally:
    # Ensure the connection is closed when the app stops
    if 'mcp_client' in locals() and mcp_client.is_connected:
        print("🔌 Disconnecting from MCP Server...")
        mcp_client.disconnect()
```

Then run the client:

```bash
python client_app.py
```

-----

## Available Tools

The server exposes the following tools to the AI agent:

- `search_documentation(query, max_results)`
- `get_model_info(model_name)`

@@ -189,23 +221,13 @@ The server exposes the following tools to the AI agent:

-----

## Technology Stack

- **Backend:** Python
- **Web UI & API:** Gradio
- **Data Retrieval:** Requests & BeautifulSoup

-----

## References & Further Reading

- **[Building an MCP Server with Gradio](https://huggingface.co/learn/mcp-course/unit2/gradio-server)** - A tutorial on the concepts used to build this server.

-----

## License

This project is licensed under the Apache 2.0 License.

---

## Integrating with MCP Clients

This application is a fully compliant **Model Context Protocol (MCP)** server, allowing AI agents and other clients to use its functions as tools.

### Connection Endpoint

The server's public endpoint is:
`https://agents-mcp-hackathon-huggingfacedoc.hf.space/gradio_api/mcp/sse`
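
To quickly confirm the endpoint is reachable before configuring a client, you can open the SSE stream directly. This is only a connectivity check (it does not perform the MCP handshake) and assumes `curl` is available:

```bash
# Open the SSE stream; a healthy server keeps the connection open and emits events.
# Press Ctrl+C to stop.
curl -N https://agents-mcp-hackathon-huggingfacedoc.hf.space/gradio_api/mcp/sse
```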

### Method 1: Test with a Public Client

You can test this server immediately using this public MCP Client Space. Just paste the server URL above into the client's URL input field.

**[➡️ Test with Public MCP Client](https://huggingface.co/spaces/ABDALLALSWAITI/MCPclient)**

### Method 2: Integrate with UI Clients (e.g., Cursor IDE)

MCP hosts often use a configuration file, typically named `mcp.json`, to manage server connections.

#### General `mcp.json` Configuration

For a remote server using HTTP+SSE transport, the configuration points to the server's URL. Create an `mcp.json` file with the following structure:

```json
{
  "servers": [
    {
      "name": "HF Info Server",
      "transport": {
        "type": "sse",
        "url": "https://agents-mcp-hackathon-huggingfacedoc.hf.space/gradio_api/mcp/sse"
      }
    }
  ]
}
```

#### Configuring Cursor IDE

Cursor IDE has built-in MCP support. To connect this server, you can use the `mcp-remote` tool, which acts as a bridge for clients that don't natively support remote SSE servers.
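
To check that the bridge can reach the server before adding it to Cursor, you can optionally run the same command from a terminal (this assumes Node.js/`npx` is installed; stop it with Ctrl+C):

```bash
# Launch the mcp-remote bridge with the same arguments used in the configs below.
npx -y mcp-remote https://agents-mcp-hackathon-huggingfacedoc.hf.space/gradio_api/mcp/sse --transport sse-only
```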

1. Open Cursor settings (`Ctrl + Shift + J` / `Cmd + Shift + J`).
2. Go to the `MCP` tab and click `Add new global MCP server`.
3. Paste the appropriate configuration below.

**For macOS/Linux:**

```json
{
  "mcpServers": {
    "hf-info-server": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://agents-mcp-hackathon-huggingfacedoc.hf.space/gradio_api/mcp/sse",
        "--transport",
        "sse-only"
      ]
    }
  }
}
```

**For Windows:**

```json
{
  "mcpServers": {
    "hf-info-server": {
      "command": "cmd",
      "args": [
        "/c",
        "npx",
        "-y",
        "mcp-remote",
        "https://agents-mcp-hackathon-huggingfacedoc.hf.space/gradio_api/mcp/sse",
        "--transport",
        "sse-only"
      ]
    }
  }
}
```

### Method 3: Build a Python Client with `smolagents`

You can also run your own agent locally.

**1. Setup Your Environment**

Create a `requirements.txt` file:

```text
gradio
smolagents
litellm
```

Then, create a virtual environment and install the dependencies:

```bash
# Create and activate a virtual environment
python -m venv venv
source venv/bin/activate  # On Windows use `venv\Scripts\activate`

# Install the required libraries
pip install -r requirements.txt
```

**2. Set Your API Key**

This client uses an LLM for reasoning. Export your API key as an environment variable. For example, for Google Gemini:

```bash
export GOOGLE_API_KEY="YOUR_API_KEY_HERE"
```
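
On Windows (Command Prompt), use `set` instead:

```cmd
set GOOGLE_API_KEY=YOUR_API_KEY_HERE
```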

**3. Create and Run the Client Script**

Save the client code from the previous section as `client_app.py` and run it:

```bash
python client_app.py
```

-----

## Advanced: Using `LiteLLMModel` Directly

The `smolagents` library uses `LiteLLMModel` to interact with various language models. If you want to use a model like Anthropic's Claude 3.5 Sonnet directly in your Python code, you can do so easily.

First, ensure you have your Anthropic API key set as an environment variable:

```bash
export ANTHROPIC_API_KEY="YOUR_ANTHROPIC_KEY"
```

Then, you can use the following pattern in Python:

```python
from smolagents import LiteLLMModel

# Define the messages in the standard conversation format
messages = [
    {"role": "user", "content": [{"type": "text", "text": "Hello, how are you?"}]}
]

# Instantiate the model
model = LiteLLMModel(
    model_id="anthropic/claude-3-5-sonnet-latest",
    temperature=0.2,
    max_tokens=1024
)

# Call the model with the messages
response = model(messages)
print(response)
```
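
The same model object can also power the MCP agent from Method 3. Here is a minimal sketch; it assumes `tools` has already been loaded from the MCP server via `MCPClient`, as in `client_app.py`:

```python
from smolagents import CodeAgent

# Swap the Claude-backed model into the MCP-powered agent
# (assumes `tools` was obtained via MCPClient(...).get_tools(), as shown earlier).
agent = CodeAgent(tools=[*tools], model=model)
print(agent.run("What is a Hugging Face pipeline?"))
```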

-----

## Available Tools

The server exposes the following tools to any connected MCP client:

- `search_documentation(query, max_results)`
- `get_model_info(model_name)`

-----

## References & Further Reading

- **[Building an MCP Server with Gradio](https://huggingface.co/learn/mcp-course/unit2/gradio-server)** - A tutorial on the concepts used to build this server.
- **[Building MCP Clients (JS & Python)](https://huggingface.co/learn/mcp-course/unit2/clients)** - A guide on creating clients to interact with an MCP server.

-----

## License

This project is licensed under the Apache 2.0 License.