bpHigh committed
Commit 8e7cf2f · verified · Parent(s): ee63b9f

Update README.md

Files changed (1): README.md (+31 −2)
README.md CHANGED
- Modal

---

## Demo Video
🎥 Watch the [Demo Video here](https://drive.google.com/file/d/1FlvN_tV1BQ4OmFmGsWPSQt_H6Ok92dmy/view?usp=sharing)

## Acknowledgements
Made with ❤️ by [Bhavish Pahwa](https://huggingface.co/bpHigh) & [Abhinav Bhatnagar](https://huggingface.co/Master-warrier)

## 🔧 How It Works

### 1. **Gather Requirements**

* The user describes their data science / AI / ML problem in a conversation with the chatbot.
* The user and **Gemini-2.5-Pro** go back and forth iteratively: the model asks clarifying questions, the user answers, and the loop repeats until Gemini-2.5-Pro is satisfied that the requirements are complete. Only then does it return a "satisfied" response and release the structured requirements.
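The loop above can be sketched in a few lines of Python. `model_reply` and the `SATISFIED:` sentinel are hypothetical stand-ins for the Space's actual Gemini-2.5-Pro call and response format, used here only to show the control flow:

```python
# Sketch of the Step 1 loop. `model_reply` is a hypothetical stand-in for the
# real Gemini-2.5-Pro call; here it simulates asking until two user turns exist.
SATISFIED_MARKER = "SATISFIED:"  # assumed sentinel; the Space may use another format

def model_reply(history: list[str]) -> str:
    """Stand-in for Gemini-2.5-Pro: keep asking until two user turns exist."""
    user_turns = [m for m in history if m.startswith("user:")]
    if len(user_turns) < 2:
        return "What is your target metric and dataset size?"
    return SATISFIED_MARKER + " task=classification; data=tabular; metric=F1"

def gather_requirements(user_inputs: list[str]) -> str:
    """Iterate user <-> model until the model releases structured requirements."""
    history: list[str] = []
    answers = iter(user_inputs)
    while True:
        reply = model_reply(history)
        if reply.startswith(SATISFIED_MARKER):
            return reply.removeprefix(SATISFIED_MARKER).strip()
        history.append(f"assistant: {reply}")
        history.append(f"user: {next(answers)}")

reqs = gather_requirements(["I need a churn model", "Tabular data, optimize F1"])
print(reqs)  # -> task=classification; data=tabular; metric=F1
```

Only the shape of the loop matters: the model, not the UI, decides when requirements gathering ends.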

### 2. 🛠️ **Generate Plan** (button)

* Clicking **Generate Plan** uses **LlamaIndex's MCP integration**, which:
  * discovers all tools exposed via MCP on the Hugging Face server (hf.co/mcp), and
  * prompts **Gemini-2.5-Pro** again to select the appropriate tools and construct the plan workflows and call syntax.
* All logic for tool discovery, orchestration, and MCP communication is deployed as a **Modal** app.
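A minimal sketch of the discovery step, assuming the `llama-index-tools-mcp` package (`BasicMCPClient`, `McpToolSpec`). The `format_tool_catalog` helper is illustrative, not the Space's actual code; it only shows how discovered tools could be rendered into a planning prompt for Gemini-2.5-Pro:

```python
# Hedged sketch of Step 2: discover MCP tools with LlamaIndex, then build the
# prompt the planner model would receive. Network calls are kept inside
# discover_tools() so the rest of the sketch runs offline.
def format_tool_catalog(tools: list[tuple[str, str]]) -> str:
    """Render (name, description) pairs into a prompt block for the planner."""
    lines = [f"- {name}: {desc}" for name, desc in tools]
    return "Available MCP tools:\n" + "\n".join(lines)

async def discover_tools() -> list[tuple[str, str]]:
    """Fetch (name, description) pairs from the Hugging Face MCP server."""
    # Lazy import: `pip install llama-index-tools-mcp`.
    from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

    client = BasicMCPClient("https://hf.co/mcp")
    tools = await McpToolSpec(client=client).to_tool_list_async()
    return [(t.metadata.name, t.metadata.description) for t in tools]

# The Space then prompts Gemini-2.5-Pro with this catalog plus the gathered
# requirements to select tools and lay out the plan.
print(format_tool_catalog([("space_search", "Find Hugging Face Spaces")]))
```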

### 3. 🚀 **Generate Code** (button)

* When the user clicks **Generate Code**, the **Mistral DevStral** model (served via vLLM behind an OpenAI-compatible API) generates runnable code that matches the plan and the selected tools. The model and its integration are hosted on **Modal Labs**.
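A sketch of that call, assuming vLLM's OpenAI-compatible chat endpoint. The base URL, model id, and `build_messages` helper are illustrative assumptions, not the Space's actual configuration:

```python
# Sketch of Step 3: call the DevStral endpoint through the OpenAI-compatible
# API that vLLM exposes. URL and model id below are assumptions.
def build_messages(plan: str) -> list[dict[str, str]]:
    """Turn the tool plan into chat messages for the code-generation model."""
    return [
        {"role": "system", "content": "You write runnable Python for the given plan."},
        {"role": "user", "content": f"Generate code for this plan:\n{plan}"},
    ]

def generate_code(plan: str, base_url: str = "https://<modal-app>.modal.run/v1") -> str:
    # Lazy import: `pip install openai`; vLLM serves the same chat API shape.
    from openai import OpenAI

    client = OpenAI(base_url=base_url, api_key="EMPTY")  # vLLM ignores the key by default
    resp = client.chat.completions.create(
        model="mistralai/Devstral-Small-2505",  # assumed model id
        messages=build_messages(plan),
    )
    return resp.choices[0].message.content

msgs = build_messages("1. load dataset 2. train baseline")
print(msgs[1]["content"].startswith("Generate code"))  # -> True
```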

### 4. ▶️ **Execute Code** (button)

* The **Execute Code** button sends the generated script to a sandboxed environment on **Modal Labs**, where it is run in isolation. Execution results and any outputs are then presented back to the user.
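This step can be sketched with Modal's `Sandbox` API. The app name, image choice, and `strip_code_fence` helper are assumptions for illustration, not the Space's actual code:

```python
# Sketch of Step 4: run the generated script in an isolated Modal container
# and capture its stdout. App name and image are assumptions.
def strip_code_fence(text: str) -> str:
    """Models often wrap code in ``` fences; drop the fence lines before running."""
    lines = [ln for ln in text.splitlines() if not ln.strip().startswith("```")]
    return "\n".join(lines)

def execute_in_sandbox(script: str) -> str:
    # Lazy import: `pip install modal` plus authenticated Modal credentials needed.
    import modal

    app = modal.App.lookup("code-runner", create_if_missing=True)  # assumed app name
    sb = modal.Sandbox.create(app=app, image=modal.Image.debian_slim())
    try:
        proc = sb.exec("python", "-c", strip_code_fence(script))
        return proc.stdout.read()
    finally:
        sb.terminate()

print(strip_code_fence("```python\nprint('hi')\n```"))  # -> print('hi')
```

Terminating the sandbox in a `finally` block keeps each execution isolated and avoids leaking containers when the user's script crashes.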

The workflow chains requirements collection, tool planning, code generation, and secure execution, with each step backed by LLMs (Gemini-2.5-Pro, Mistral DevStral), LlamaIndex + MCP, and deployment on Modal Labs. SambaNova models with Cline are used as devtools / copilots.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/623f2f5828672458f74879b3/DELDAtNnCJbS63b1-Fml8.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/623f2f5828672458f74879b3/_xKyLcuJS42uBC2FqjeZP.png)

## License
This project is licensed under the MIT License – see the LICENSE file for details.

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference