Updated readme file
README.md
- modal
python_version: "3.12"
---

# Shallow Research Code Assistant - Multi-Agent AI Code Assistant

🚀 **Multi-agent system for AI-powered search and code generation**

## What is MCP Hub?

Shallow Research Code Assistant is a sophisticated multi-agent research and code assistant built using Gradio's Model Context Protocol (MCP) server functionality. It orchestrates specialized AI agents to provide comprehensive research capabilities and generate executable Python code. This "shallow" research tool (it's definitely not deep research) augments the initial user query to broaden its scope before performing web searches for grounding.

The coding agent then generates the code to answer the user's question and checks it for errors. To ensure the code is valid, it is executed in a remote sandbox on the Modal infrastructure. These sandboxes are spawned when needed with a small footprint (only pandas, numpy, requests and scikit-learn are installed).

If additional packages are required, they are installed prior to execution (expect some delay here, depending on the request).

Once executed, the whole process is summarised and returned to the user.

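As a minimal sketch of the extra-package check described above (the helper name and logic are hypothetical, not the project's actual API), the generated code's imports can be compared against the sandbox's preinstalled set:

```python
import ast

# Packages baked into the sandbox image, per the description above.
PREINSTALLED = {"pandas", "numpy", "requests", "sklearn"}

def extra_packages(code: str) -> set[str]:
    """Return top-level imports in `code` not already in the sandbox image."""
    found = set()
    for node in ast.walk(ast.parse(code)):
        if isinstance(node, ast.Import):
            found.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found - PREINSTALLED

print(extra_packages("import pandas\nimport matplotlib.pyplot as plt"))  # → {'matplotlib'}
```

Anything returned by such a check would be pip-installed into the sandbox before the generated code runs.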
## ✨ Key Features

- 🔍 **Intelligent Research**: Web search with automatic summarization and citation formatting
- 💻 **Code Generation**: Context-aware Python code creation with secure execution
- 🌐 **MCP Server**: Built-in MCP server for seamless agent communication
- 🎯 **Multiple LLM Support**: Compatible with Nebius, OpenAI, Anthropic, and HuggingFace (currently set to Nebius Inference)
- 🛡️ **Secure Execution**: Modal sandbox environment for safe code execution
- 📊 **Performance Monitoring**: Advanced metrics collection and health monitoring

## 🏗️ MCP Workflow Architecture

![MCP Workflow Architecture](MCP_Diagram.png)

The diagram above illustrates the complete multi-agent workflow architecture, showing how the different agents communicate through the MCP (Model Context Protocol) server to deliver comprehensive research and code generation capabilities.

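The flow in the diagram can be sketched roughly as follows; every function name here is an illustrative placeholder for one of the agents, not the project's actual API:

```python
# Hypothetical outline of the orchestrator flow: augment the query,
# ground it with web searches, generate code, execute it in a sandbox,
# then summarise the whole run for the user.
def augment_query(query: str) -> list[str]:
    # Broaden the user query into several related search queries.
    return [query, f"{query} examples", f"{query} best practices"]

def orchestrate(query: str) -> dict:
    searches = augment_query(query)                           # query-augmentation step
    context = [f"search results for: {q}" for q in searches]  # web-search agent
    code = f"# generated code answering: {query}"             # coding agent
    result = "executed ok"                                    # Modal sandbox execution
    return {"context": context, "code": code, "summary": f"{query} -> {result}"}

print(orchestrate("fit a linear regression")["summary"])  # → fit a linear regression -> executed ok
```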
## 🚀 Quick Start

1. **Configure your environment** by setting up API keys in the Settings tab
2. **Choose your LLM provider** (Nebius is set by default in the Space)
3. **Input your research query** in the Orchestrator Flow tab
4. **Watch the magic happen** as agents collaborate to research and generate code

Set these environment variables or configure in the app:

```
LLM_PROVIDER=nebius # Your chosen provider
NEBIUS_API_KEY=your_key_here
TAVILY_API_KEY=your_key_here
MODAL_ID=your-id-here
MODEL_SECRET_TOKEN=your-token-here
```

## 🎯 Use Cases

### Code Generation
- **Prototype Development**: Rapidly create functional code based on requirements
- **IDE Integration**: Add this to your IDE for grounded LLM support

### Learning & Education
- **Code Examples**: Generate educational code samples with explanations