Updated pics and readme

- README.md +5 -32
- image-1.png +0 -0
- image-2.png +0 -0
- image-3.png +0 -0
- image-4.png +0 -0
- image.png +0 -0
- langgraph.json +8 -0
- learning_path_orchestrator.py +6 -14
README.md
CHANGED
@@ -1,10 +1,8 @@
-# Blog Evaluator
+# Blog Generation App with Evaluator-Optimizer Workflow
 
 ![image](image.png)
 ![image-1](image-1.png)
 
-# Blog Generation Workflow
-
 ## Overview
 This project implements an **Evaluator-Optimizer Workflow** using **LangGraph** and **LangChain** to generate and refine short blogs. The workflow follows an iterative process where an LLM generates a blog, evaluates it against predefined criteria, and either accepts it or provides feedback for revision. This ensures that the final output meets quality standards.
 
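A minimal sketch of the Evaluator-Optimizer loop described in the Overview above, using LangGraph and ChatGroq. The node names (`generator`, `evaluator`), the `Feedback` schema, and the routing logic are illustrative assumptions, not the repository's actual code:

```python
# Illustrative Evaluator-Optimizer loop: generate -> evaluate -> accept or loop back.
from typing import Literal, TypedDict

from pydantic import BaseModel, Field
from langchain_groq import ChatGroq
from langgraph.graph import StateGraph, START, END

llm = ChatGroq(model="qwen-2.5-32b")

class Feedback(BaseModel):
    grade: Literal["accepted", "needs_revision"] = Field(description="Verdict on the draft.")
    comments: str = Field(description="Concrete suggestions if revision is needed.")

class BlogState(TypedDict):
    topic: str
    blog: str
    feedback: str
    grade: str

def generator(state: BlogState):
    """Writes a draft, incorporating evaluator feedback on later passes."""
    prompt = f"Write a short blog about {state['topic']}."
    if state.get("feedback"):
        prompt += f" Address this feedback: {state['feedback']}"
    return {"blog": llm.invoke(prompt).content}

def evaluator(state: BlogState):
    """Grades the draft against simple quality criteria."""
    verdict = llm.with_structured_output(Feedback).invoke(
        f"Grade this blog for clarity, accuracy, and structure:\n\n{state['blog']}"
    )
    return {"grade": verdict.grade, "feedback": verdict.comments}

def route(state: BlogState):
    """Accepts the draft or sends it back for another revision."""
    return END if state["grade"] == "accepted" else "generator"

builder = StateGraph(BlogState)
builder.add_node("generator", generator)
builder.add_node("evaluator", evaluator)
builder.add_edge(START, "generator")
builder.add_edge("generator", "evaluator")
builder.add_conditional_edges("evaluator", route, {END: END, "generator": "generator"})
optimizer_workflow = builder.compile()
```

Compiled this way, the graph can be driven exactly as the README's usage snippet shows, e.g. `optimizer_workflow.invoke({"topic": "MCP from Anthropic"})`.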
@@ -32,11 +30,7 @@ graph TD;
 - If the blog **needs revision**, feedback is given, and a new version is generated.
 
 ## Setup & Usage
-
 ### Install dependencies:
-```bash
-pip install langchain_groq langgraph pydantic python-dotenv
-```
 
 ### Set environment variables in a `.env` file:
 ```env
@@ -45,18 +39,13 @@ LANGCHAIN_API_KEY=your_langchain_api_key
 ```
 
 ### Run the script in an IDE or Jupyter Notebook:
-```python
-state = optimizer_workflow.invoke({"topic": "MCP from Anthropic"})
-print(state["blog"])
-```
 
 ## Testing in LangSmith Studio
 - Deploy the workflow and **provide only the topic** as input.
 - Monitor execution flow and **validate outputs** by logging into your LangSmith account (adding @traceable to your functions helps track them).
-- You can also test via Langraph dev (ensure you have the langgraph.json file for this)
+- You can also test via the `langgraph dev` command in your console, which opens Studio for enhanced debugging (ensure you have the langgraph.json file for this and customize it for your project).
 
-
-# Parallelized Code Review with LLMs
+# Code Review App with Parallelization Workflow
 ![image-1](image-1.png)
 
 ## Introduction
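The @traceable tip above is easiest to see in a small, self-contained example. This sketch assumes `GROQ_API_KEY` and `LANGCHAIN_API_KEY` are set in `.env` as described in the setup section; the function body and state keys are placeholders rather than the repository's code:

```python
# Tracing an LLM-calling function in LangSmith via the @traceable decorator.
import os

from dotenv import load_dotenv
from langchain_groq import ChatGroq
from langsmith import traceable

load_dotenv()  # loads GROQ_API_KEY / LANGCHAIN_API_KEY from .env
os.environ["LANGCHAIN_TRACING_V2"] = "true"  # enable LangSmith tracing

llm = ChatGroq(model="qwen-2.5-32b")

@traceable  # every call is logged as a run in your LangSmith project
def generate_blog(state: dict) -> dict:
    draft = llm.invoke(f"Write a short blog about {state['topic']}.")
    return {"blog": draft.content}

if __name__ == "__main__":
    print(generate_blog({"topic": "MCP from Anthropic"})["blog"])
```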
@@ -81,14 +70,6 @@ This project demonstrates a **parallelized workflow** for **automated code revie
 - Best practices adherence
 3. The results from these processes are aggregated into a final feedback report.
 
-## Technologies Used
-- **Python**
-- **LangChain** (LLM-based workflow automation)
-- **LangGraph** (Parallel execution of LLM tasks)
-- **Groq API** (LLM inference)
-- **Pydantic & TypedDict** (Data validation)
-- **Dotenv & OS** (Environment variable management)
-
 ## Running the Code
 1. Clone this repository:
 
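Since the hunk above only describes the parallel review-and-aggregate flow in prose, here is a minimal LangGraph sketch of that fan-out/fan-in shape. The node names, state keys, and prompts are assumptions for illustration, not the repository's implementation:

```python
# Parallel code review: three reviewer nodes run concurrently, then an aggregator merges them.
import operator
from typing import Annotated, TypedDict

from langchain_groq import ChatGroq
from langgraph.graph import StateGraph, START, END

llm = ChatGroq(model="qwen-2.5-32b")

class ReviewState(TypedDict):
    code: str
    findings: Annotated[list[str], operator.add]  # reducer: parallel branches append here
    report: str

def make_reviewer(aspect: str):
    def reviewer(state: ReviewState):
        result = llm.invoke(f"Review this code for {aspect}:\n\n{state['code']}")
        return {"findings": [f"{aspect}: {result.content}"]}
    return reviewer

def aggregator(state: ReviewState):
    """Combines the parallel findings into a single feedback report."""
    summary = llm.invoke(
        "Merge these review findings into one report:\n" + "\n".join(state["findings"])
    )
    return {"report": summary.content}

builder = StateGraph(ReviewState)
builder.add_node("aggregator", aggregator)
for name, aspect in {
    "readability_review": "readability",
    "security_review": "security issues",
    "best_practices_review": "best practices adherence",
}.items():
    builder.add_node(name, make_reviewer(aspect))
    builder.add_edge(START, name)         # all three reviewers start in parallel
    builder.add_edge(name, "aggregator")  # and fan back in to the aggregator
builder.add_edge("aggregator", END)
code_review_workflow = builder.compile()
```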
@@ -100,14 +81,12 @@ This project demonstrates a **parallelized workflow** for **automated code revie
 
 4. Run the script
 
-
 ## Testing in LangSmith Studio
 - Deploy the workflow and **provide only the topic** as input.
 - Monitor execution flow and **validate outputs** by logging into your LangSmith account (adding @traceable to your functions helps track them).
-- You can also test via Langraph dev (ensure you have the langgraph.json file for this)
+- You can also test via `langgraph dev` (ensure you have the langgraph.json file for this).
 
-
-# Learning Path Generator
+# Learning Path Generator App with Orchestrator-Synthesizer Workflow
 ![image-3](image-3.png)
 ![image-4](image-4.png)
 
@@ -145,7 +124,6 @@ The workflow consists of three key components:
 
 ## Running the Workflow
 To generate a personalized learning path, the workflow takes the following inputs:
-
 ```python
 user_skills = "Python programming, basic machine learning concepts"
 user_goals = "Learn advanced AI, master prompt engineering, and build AI applications"
@@ -153,10 +131,5 @@ user_goals = "Learn advanced AI, master prompt engineering, and build AI applica
 
 It then executes the **Orchestrator → Workers → Synthesizer** pipeline, producing a structured learning roadmap.
 
-## Future Enhancements
-- **Incorporate user feedback loops** to refine study plans over time.
-- **Add multimodal learning resources** (e.g., videos, interactive exercises).
-- **Expand to different learning domains** beyond AI and machine learning.
----
 
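A full invocation of the compiled graph with those inputs would look roughly like the snippet below. The input key names are assumed from the variables shown in the README; only `learning_roadmap` (the synthesizer's output key) and the `learning_path_workflow` graph name are confirmed by this commit:

```python
# End-to-end run of the learning-path graph (sketch; input key names are assumed).
from learning_path_orchestrator import learning_path_workflow

user_skills = "Python programming, basic machine learning concepts"
user_goals = "Learn advanced AI, master prompt engineering, and build AI applications"

state = learning_path_workflow.invoke(
    {"user_skills": user_skills, "user_goals": user_goals}
)
print(state["learning_roadmap"])  # final roadmap produced by the synthesizer
```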
image-1.png
ADDED
image-2.png
ADDED
image-3.png
ADDED
image-4.png
ADDED
image.png
ADDED
langgraph.json
ADDED
@@ -0,0 +1,8 @@
+{
+    "dependencies": ["."],
+    "graphs": {
+        "learning_path_workflow": "./learning_path_orchestrator.py:learning_path_workflow"
+    },
+    "env": "../.env"
+}
+
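The `graphs` entry follows LangGraph's `path/to/module.py:variable` convention, so the module has to expose a compiled graph under the name `learning_path_workflow`. The snippet below is only an illustration of how such a spec string resolves to an object; it is not the LangGraph CLI's actual loader:

```python
# Illustration: resolving a "path.py:variable" spec like the one in langgraph.json above.
import importlib.util

def load_graph(spec: str):
    path, name = spec.split(":")
    module_spec = importlib.util.spec_from_file_location("graph_module", path)
    module = importlib.util.module_from_spec(module_spec)
    module_spec.loader.exec_module(module)
    return getattr(module, name)  # e.g. the compiled learning_path_workflow graph

graph = load_graph("./learning_path_orchestrator.py:learning_path_workflow")
```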
learning_path_orchestrator.py
CHANGED
@@ -19,10 +19,7 @@ os.environ["LANGCHAIN_API_KEY"] = os.getenv("LANGCHAIN_API_KEY")
 # Initialize LLM model
 llm = ChatGroq(model="qwen-2.5-32b")
 
-#
-# 1️⃣ Define Custom Data Structures
-# ----------------------------
-
+# Define Custom Data structures
 class Topic(BaseModel):
     """Represents a learning topic with a name and description."""
     name: str = Field(description="Name of the learning topic.")
@@ -48,10 +45,7 @@ class WorkerState(TypedDict):
     topic: Topic
     completed_topics: List[str]
 
-#
-# 2️⃣ Define Core Processing Functions
-# ----------------------------
-
+# Define Node Functions
 @traceable
 def orchestrator(state: State):
     """Creates a study plan based on user skills and goals."""
@@ -97,18 +91,16 @@ def synthesizer(state: State):
 
     return {"learning_roadmap": learning_roadmap}  # Returns final roadmap
 
-
-#
-# ----------------------------
+
+# Define Conditional Edge Function
 
 def assign_workers(state: State):
     """Assigns a worker (llm_call) to each topic in the plan."""
 
     return [Send("llm_call", {"topic": t}) for t in state["topics"]]  # Creates worker tasks
 
-
-#
-# ----------------------------
+
+# Build Workflow
 
 learning_path_builder = StateGraph(State)
 
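The diff ends at `learning_path_builder = StateGraph(State)`, so the graph wiring itself is not visible in this commit. The sketch below shows one plausible way the names that do appear in the module (`orchestrator`, `llm_call`, `assign_workers`, `synthesizer`) fit together in the Orchestrator → Workers → Synthesizer pattern; treat it as an assumption, not the repository's actual tail of the file:

```python
# Plausible wiring for the orchestrator-workers-synthesizer graph (sketch).
# Assumes State, orchestrator, llm_call, synthesizer, and assign_workers are the
# definitions shown earlier in learning_path_orchestrator.py.
from langgraph.graph import StateGraph, START, END

learning_path_builder = StateGraph(State)

learning_path_builder.add_node("orchestrator", orchestrator)  # plans the list of topics
learning_path_builder.add_node("llm_call", llm_call)          # worker: drafts one topic
learning_path_builder.add_node("synthesizer", synthesizer)    # merges completed topics

learning_path_builder.add_edge(START, "orchestrator")
# Fan out: assign_workers returns one Send("llm_call", ...) per planned topic.
learning_path_builder.add_conditional_edges("orchestrator", assign_workers, ["llm_call"])
learning_path_builder.add_edge("llm_call", "synthesizer")
learning_path_builder.add_edge("synthesizer", END)

# The compiled graph is the object that langgraph.json points at.
learning_path_workflow = learning_path_builder.compile()
```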