ABDALLALSWAITI committed on
Commit ebe2155 · verified · 1 Parent(s): edca24f

Update README.md

Files changed (1): README.md (+166 -7)

README.md CHANGED
@@ -28,12 +28,16 @@ Watch a demonstration of this MCP server in action with an AI agent client.

 **[➡️ Click here to watch the demo video](https://www.YOUR_VIDEO_LINK_HERE.com)**

 ## Key Features

- - **Comprehensive Documentation Search:** Delivers structured summaries of official documentation, complete with overviews, installation steps, and parameter lists.
- - **In-Depth Model & Dataset Analysis:** Goes beyond basic stats to provide rich profiles of models and datasets, including descriptions, download counts, and ready-to-use code snippets.
- - **Task-Oriented Model Discovery:** Helps you find the right tool for the job by searching for models based on specific tasks like `text-classification` or `image-generation`.
- - **Live & Relevant Data:** Utilizes a combination of live API calls and intelligent web scraping to ensure the information is always up-to-date.

 ## How to Use the Web Interface

@@ -41,12 +45,167 @@ The user interface is organized into clear tabs for different functions:

 1. **Select a tab** at the top (e.g., "Model Information", "Documentation Search").
 2. **Enter your query** into the textbox.
- 3. **Click the button** to get a detailed, formatted response with code examples and usage instructions.

 ## How to Use as an MCP Server for AI Agents

- This application is also a fully compliant **Model Context Protocol (MCP)** server, allowing AI agents to use its functions as tools.

 ### Connection Endpoint

- An AI agent (MCP client) can connect to this server using the following Server-Sent Events (SSE) endpoint:
 **[➡️ Click here to watch the demo video](https://www.YOUR_VIDEO_LINK_HERE.com)**

+ ---
+
 ## Key Features

+ - **Comprehensive Documentation Search:** Delivers structured summaries of official documentation.
+ - **In-Depth Model & Dataset Analysis:** Provides rich profiles of models and datasets with code snippets.
+ - **Task-Oriented Model Discovery:** Helps you find the right tool for the job by searching for models based on a task.
+ - **Live & Relevant Data:** Utilizes live API calls and intelligent web scraping to ensure information is always up-to-date.
+
+ ---

 ## How to Use the Web Interface

 1. **Select a tab** at the top (e.g., "Model Information", "Documentation Search").
 2. **Enter your query** into the textbox.
+ 3. **Click the button** to get a detailed, formatted response.
+
+ ---

 ## How to Use as an MCP Server for AI Agents

+ This application is a fully compliant **Model Context Protocol (MCP)** server, allowing AI agents to use its functions as tools.

 ### Connection Endpoint

+ The server's public endpoint is:
+ `https://agents-mcp-hackathon-huggingfacedoc.hf.space/gradio_api/mcp/sse`
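An MCP client that takes a JSON server configuration instead of a raw URL could point at this endpoint roughly as follows. This is a sketch only: the field names (`mcpServers`, `url`, `transport`) follow common MCP client conventions and are an assumption, not something this server prescribes.

```json
{
  "mcpServers": {
    "huggingface-doc": {
      "url": "https://agents-mcp-hackathon-huggingfacedoc.hf.space/gradio_api/mcp/sse",
      "transport": "sse"
    }
  }
}
```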
+
+ ### Testing with a Public Client
+
+ You can test this server immediately using this public MCP Client Space. Just paste the server URL above into the client's URL input field.
+
+ **[➡️ Test with Public MCP Client](https://huggingface.co/spaces/ABDALLALSWAITI/MCPclient)**
+
+ ### Building Your Own Python Client (Locally)
+
+ You can run your own agent locally that connects to this server. Follow these steps:
+
+ **1. Set Up Your Environment**
+
+ First, create a `requirements.txt` file for the client with the following content:
+
+ ```text
+ gradio
+ smolagents
+ litellm
+ ```
+
+ Now, set up a virtual environment and install the dependencies:
+
+ ```bash
+ # Create and activate a virtual environment
+ python -m venv venv
+ source venv/bin/activate  # On Windows use `venv\Scripts\activate`
+
+ # Install the required libraries
+ pip install -r requirements.txt
+ ```
+
+ **2. Set Your API Key**
+
+ This client uses the Gemini API via LiteLLM. You need to set your Google API key as an environment variable.
+
+ - On **macOS/Linux**:
+   ```bash
+   export GOOGLE_API_KEY="YOUR_API_KEY_HERE"
+   ```
+ - On **Windows (Command Prompt)**:
+   ```bash
+   set GOOGLE_API_KEY=YOUR_API_KEY_HERE
+   ```
+
+ **3. Create the Client Script**
+
+ Save the following code as `client_app.py`:
+
+ ```python
+ import gradio as gr
+ import os
+ from smolagents import CodeAgent, MCPClient, LiteLLMModel
+
+ # --- Configuration ---
+ # Ensure you have your GOOGLE_API_KEY set as an environment variable.
+ # You can get one from Google AI Studio: https://aistudio.google.com/app/apikey
+ API_KEY = os.getenv("GOOGLE_API_KEY")
+
+ # This is the public URL of the MCP server we built.
+ MCP_SERVER_URL = "https://agents-mcp-hackathon-huggingfacedoc.hf.space/gradio_api/mcp/sse"
+
+ if not API_KEY:
+     raise ValueError("GOOGLE_API_KEY environment variable not set. Please set your API key to run this app.")
+
+ # --- Main Application ---
+ try:
+     print(f"🔌 Connecting to MCP Server: {MCP_SERVER_URL}")
+     mcp_client = MCPClient({"url": MCP_SERVER_URL})
+     tools = mcp_client.get_tools()
+     print(f"✅ Successfully connected. Found {len(tools)} tools.")
+
+     # We use LiteLLM to connect to the Gemini API
+     model = LiteLLMModel(
+         model_id="gemini/gemini-1.5-flash",
+         temperature=0.2,
+         api_key=API_KEY,
+     )
+
+     # The CodeAgent is effective at using tools
+     agent = CodeAgent(tools=[*tools], model=model)
+
+     # Create the Gradio ChatInterface
+     demo = gr.ChatInterface(
+         fn=lambda message, history: str(agent.run(message)),
+         title="📚 Hugging Face Research Agent",
+         description="This agent uses the Hugging Face Information Server to answer questions about models, datasets, and documentation.",
+         examples=[
+             "What is a Hugging Face pipeline?",
+             "Find 3 popular models for text classification",
+             "Get the info for the 'squad' dataset",
+             "What is PEFT?",
+         ],
+     )
+
+     demo.launch()
+
+ finally:
+     # Ensure the connection is closed when the app stops
+     if 'mcp_client' in locals() and mcp_client.is_connected:
+         print("🔌 Disconnecting from MCP Server...")
+         mcp_client.disconnect()
+ ```
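A note on the `fn=lambda message, history: str(agent.run(message))` line in the script above: `gr.ChatInterface` calls its function with `(message, history)`, and wrapping `agent.run` in `str()` guards against agents returning non-string objects. The same adapter pattern, shown with a hypothetical `EchoAgent` stand-in (not part of smolagents) so it runs offline:

```python
class EchoAgent:
    """Stand-in for the smolagents CodeAgent in this sketch."""
    def run(self, message):
        # Returns a dict, the way some agents return structured objects
        return {"answer": message.upper()}

agent = EchoAgent()

# Same (message, history) -> str adapter used with gr.ChatInterface
fn = lambda message, history: str(agent.run(message))

print(fn("hello", []))  # {'answer': 'HELLO'}
```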
+
+ **4. Run the Client App**
+
+ Execute the script from your terminal:
+
+ ```bash
+ python client_app.py
+ ```
+
+ This will launch a local Gradio interface where you can chat with an agent that uses your live MCP server.
+
+ ---
+
+ ## Available Tools
+
+ The server exposes the following tools to the AI agent:
+
+ - `search_documentation(query, max_results)`
+ - `get_model_info(model_name)`
+ - `get_dataset_info(dataset_name)`
+ - `search_models(task, limit)`
+ - `get_transformers_docs(topic)`
+ - `get_trending_models(limit)`
+
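When driving these tools programmatically rather than through a `CodeAgent`, a client typically picks a tool out of the list returned by `get_tools()` by name before invoking it. A minimal sketch of that lookup; the `SimpleNamespace` objects below are stand-ins for real tool objects (which in smolagents expose a `name` attribute), so no live connection is needed:

```python
from types import SimpleNamespace

def find_tool(tools, name):
    """Return the first tool whose name matches, or None if absent."""
    return next((t for t in tools if t.name == name), None)

# Stand-ins for the tool objects a live MCP connection would return
tools = [
    SimpleNamespace(name="search_documentation"),
    SimpleNamespace(name="get_model_info"),
    SimpleNamespace(name="search_models"),
]

print(find_tool(tools, "get_model_info").name)  # get_model_info
print(find_tool(tools, "get_trending_models"))  # None
```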
+ ---
+
+ ## Technology Stack
+
+ - **Backend:** Python
+ - **Web UI & API:** Gradio
+ - **Data Retrieval:** Requests & BeautifulSoup
+
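The retrieval layer pairs Requests with BeautifulSoup to pull summaries out of documentation pages. The server's actual parsing code is not shown in this README; purely as an illustration of the pattern, here is the same kind of extraction done on an inline HTML snippet with the standard library's `html.parser` (BeautifulSoup does this with a friendlier API):

```python
from html.parser import HTMLParser

class HeadingAndLeadExtractor(HTMLParser):
    """Collect the first <h1> text and first <p> text from a page."""
    def __init__(self):
        super().__init__()
        self.title = None
        self.lead = None
        self._capturing = None  # tag whose text we are currently capturing

    def handle_starttag(self, tag, attrs):
        if tag == "h1" and self.title is None:
            self._capturing = "h1"
        elif tag == "p" and self.lead is None:
            self._capturing = "p"

    def handle_data(self, data):
        if self._capturing == "h1":
            self.title = data.strip()
            self._capturing = None
        elif self._capturing == "p":
            self.lead = data.strip()
            self._capturing = None

# Stand-in HTML; a real run would fetch a docs page with requests.get(...).text
html = "<html><body><h1>Pipelines</h1><p>Pipelines are a high-level API.</p></body></html>"
parser = HeadingAndLeadExtractor()
parser.feed(html)
print(parser.title)  # Pipelines
print(parser.lead)   # Pipelines are a high-level API.
```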
+ ---
+
+ ## References & Further Reading
+
+ - **[Building an MCP Server with Gradio](https://huggingface.co/learn/mcp-course/unit2/gradio-server)** - A tutorial on the concepts used to build this server.
+
+ ---
+
+ ## License
+
+ This project is licensed under the Apache 2.0 License.