Devi Priya K committed on
Commit b57423f · 1 Parent(s): daa78e1

Update README.md

Files changed (1)
  1. README.md +53 -64
README.md CHANGED
@@ -8,140 +8,129 @@ sdk_version: 1.42.0
  app_file: app7.py
  pinned: false
  ---

- # To run STREAMLIT app with the consolidated workflows
- - ``streamlit run app7.py``
- - Individual workflow Python files are listed in the following sections

- ### You can view the deployed app here: https://huggingface.co/spaces/Deepri24/LangGraph_Workflows

  - ⚠️ Please enter your GROQ API key to proceed. Don't have one? Refer to: https://console.groq.com/keys

- # Blog Generation App with Evaluator-Optimizer Workflow

- ![alt text](assets/image.png)
  ![alt text](assets/image-1.png)

- ## Overview
  This project implements an **Evaluator-Optimizer Workflow** using **LangGraph** and **LangChain** to generate and refine short blogs. The workflow follows an iterative process where an LLM generates a blog, evaluates it against predefined criteria, and either accepts it or provides feedback for revision. This ensures that the final output meets quality standards.

- ## Why This Workflow Works
  The **Evaluator-Optimizer Workflow** is effective because it automates content generation while maintaining **quality control** through an LLM-powered evaluation loop. If the initial blog meets the set criteria (**concise, engaging, structured with subtitles and a conclusion**), it is accepted. Otherwise, the LLM provides feedback, and the blog is regenerated with improvements.

- ## Features
  - **Automated Blog Generation**: Generates a blog based on a given topic.
  - **Evaluation & Feedback**: Reviews the blog for conciseness, structure, and entertainment value.
  - **Iterative Refinement**: If the blog needs revision, feedback is provided, and a revised version is generated.
  - **LangSmith Studio Integration**: Visualizes and tests workflow execution.

- ## Workflow Overview
- ```mermaid
- graph TD;
- A[Start] --> B[Generate Blog];
- B --> C[Evaluate Blog];
- C -->|Needs Revision| B;
- C -->|Accepted| D[End];
- ```
- - **Generates** an initial blog based on the provided topic.
- - **Evaluates** the blog and determines if it meets quality standards.
- - **Routing Decision** (see the sketch after this list):
- - If the blog is **good**, the workflow **ends**.
- - If the blog **needs revision**, feedback is given, and a new version is generated.
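
A minimal sketch of how this routing can be wired in LangGraph (node names, state keys, and the placeholder node bodies below are illustrative, not the repository's actual code):

```python
from typing import TypedDict

from langgraph.graph import StateGraph, START, END


class BlogState(TypedDict):
    topic: str
    blog: str
    feedback: str
    grade: str  # "good" or "needs revision"


def generate_blog(state: BlogState) -> dict:
    # The LLM call goes here; on retries it would incorporate state["feedback"].
    return {"blog": f"Draft blog about {state['topic']}"}


def evaluate_blog(state: BlogState) -> dict:
    # The LLM call goes here; it grades the draft against the criteria above.
    return {"grade": "good", "feedback": ""}


def route_blog(state: BlogState) -> str:
    # Conditional edge: either finish or loop back for another revision pass.
    return "Accepted" if state["grade"] == "good" else "Needs Revision"


builder = StateGraph(BlogState)
builder.add_node("generate_blog", generate_blog)
builder.add_node("evaluate_blog", evaluate_blog)
builder.add_edge(START, "generate_blog")
builder.add_edge("generate_blog", "evaluate_blog")
builder.add_conditional_edges(
    "evaluate_blog",
    route_blog,
    {"Accepted": END, "Needs Revision": "generate_blog"},
)
graph = builder.compile()

result = graph.invoke({"topic": "LangGraph workflows"})
```

The conditional edge mirrors the two arrows leaving "Evaluate Blog" in the diagram above: one back to the generator, one to the end of the graph.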
-
- ## Setup & Usage
- ### Install dependencies:
-
- ### Set environment variables in a `.env` file:
- ```env
- GROQ_API_KEY=your_api_key
- LANGCHAIN_API_KEY=your_langchain_api_key
- ```
-
- ### Run the script in an IDE or Jupyter Notebook:
-
- ## Testing in LangSmith Studio
  - Deploy the workflow and **provide only the topic** as input.
  - Monitor execution flow and **validate outputs** by logging into your LangSmith account (adding `@traceable` to your function helps track it).
  - You can also test via the `langgraph dev` command in your console, which opens up Studio for enhanced debugging (ensure you have a `langgraph.json` file for this and customize it for your project).

- # Code Review App with Parallelization Workflow
  ![alt text](assets/image-2.png)

- ## Introduction
  This project demonstrates a **parallelized workflow** for **automated code review** using **large language models (LLMs)**. Instead of running feedback checks sequentially, the system executes multiple review processes **in parallel**, making it an **efficient and scalable** solution for code assessment.

- ### Why Parallelization?
  - **Faster Execution:** Multiple feedback checks run **simultaneously**, reducing the overall processing time.
  - **Improved Scalability:** New review criteria can be added without significant slowdowns.
  - **Better Resource Utilization:** Leverages LLM calls efficiently by distributing tasks.

- ## Features
  - **Readability Analysis**: Evaluates the clarity and structure of the code.
  - **Security Review**: Identifies potential vulnerabilities.
  - **Best Practices Compliance**: Checks adherence to industry-standard coding best practices.
  - **Feedback Aggregation**: Combines results into a single, structured response.

- ## How It Works
  1. A **code snippet** is provided as input.
  2. Three independent LLM processes analyze the snippet for:
     - Readability
     - Security vulnerabilities
     - Best practices adherence
  3. The results from these processes are aggregated into a final feedback report.
-
- ## Running the Code
- 1. Clone this repository:
-
- 2. Install dependencies:
- ```sh
- pip install -r requirements.txt
- ```
- 3. Set up your environment variables in a `.env` file
-
- 4. Run the script

- ## Testing in LangSmith Studio
  - Deploy the workflow and **provide only the topic** as input.
  - Monitor execution flow and **validate outputs** by logging into your LangSmith account (adding `@traceable` to your function helps track it).
  - You can also test via `langgraph dev` (ensure you have the `langgraph.json` file for this).

- # Learning Path Generator App with Orchestrator-Synthesizer Workflow
  ![alt text](assets/image-3.png)
- ![alt text](assets/image-4.png)

- ## Overview
  This project implements an **Orchestrator-Synthesizer** workflow to dynamically generate a personalized **learning roadmap** based on a user's existing skills and learning goals. It uses **LangChain, LangGraph, and Groq AI models** to generate structured study plans and topic summaries.

- ## Why Orchestrator-Synthesizer?
  The **Orchestrator-Synthesizer** pattern is ideal for structured content generation workflows where tasks need to be dynamically assigned, processed independently, and then combined into a final output. It differs from traditional parallelization in the following ways:
  - **Orchestration** dynamically determines what needs to be processed, ensuring relevant tasks are executed based on user input.
  - **Workers** independently generate content summaries for each topic in the study plan.
  - **Synthesis** intelligently merges topic summaries into a well-structured learning roadmap.

- This ensures a **scalable, modular, and adaptable** approach to content generation, avoiding unnecessary processing while keeping results contextual.
-
- ## Workflow Breakdown
  The workflow consists of three key components:

- ### 1️⃣ Orchestrator
  - Creates a **study plan** based on the user's **skills and learning goals**.
  - Uses an LLM with a structured output schema to generate a list of **learning topics**.

- ### 2️⃣ Workers
  - Each **worker** processes an individual **learning topic**.
  - Generates a **markdown-formatted content summary** for the topic, including key concepts and learning resources.

- ### 3️⃣ Synthesizer
  - Collects all **topic summaries** and organizes them into a **cohesive learning roadmap**.
  - Ensures smooth flow and structured representation of the learning journey.

- ## Code Structure
  - `orchestrator(state: State)`: Generates the study plan dynamically.
  - `llm_call(state: WorkerState)`: Summarizes a single topic.
  - `synthesizer(state: State)`: Merges all topic summaries into the final roadmap.
  - `assign_workers(state: State)`: Dynamically assigns tasks based on generated topics.

- ## Running the Workflow
  To generate a personalized learning path, the workflow takes the following inputs:
  ```python
  user_skills = "Python programming, basic machine learning concepts"
 
  app_file: app7.py
  pinned: false
  ---
+ # LangGraph Agentic Workflow Use Cases

+ This repository contains 3 Python scripts and a Streamlit application demonstrating various use cases built with LangGraph Studio.

+ ## Scripts

+ * **`blog_evaluater_optimizer.py`**: Evaluates and optimizes blog content. Run using: `python blog_evaluater_optimizer.py`
+ * **`code_peer_review_parallel.py`**: Performs parallel code peer reviews. Run using: `python code_peer_review_parallel.py`
+ * **`learning_path_orchestrator.py`**: Orchestrates learning paths. Run using: `python learning_path_orchestrator.py`
+
+ Note: These scripts utilize LangGraph Studio to debug workflows.
+
+ ## Streamlit Application
+
+ * **`app7.py`**: A Streamlit interface that integrates the use cases from the above scripts, providing a user-friendly way to interact with them. Run using: `streamlit run app7.py`
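
The app itself is not reproduced here; a minimal sketch of the kind of Streamlit interface described above, with hypothetical widget labels and workflow names:

```python
import os

import streamlit as st

st.title("LangGraph Agentic Workflows")

# The deployed app asks for a GROQ API key before running any workflow.
groq_key = st.text_input("GROQ API Key", type="password")
if groq_key:
    os.environ["GROQ_API_KEY"] = groq_key

workflow = st.selectbox(
    "Choose a workflow",
    ["Blog Evaluator-Optimizer", "Code Review (Parallel)", "Learning Path Orchestrator"],
)
user_input = st.text_area("Input (topic, code snippet, or skills and goals)")

if st.button("Run") and groq_key and user_input:
    # Each branch would invoke the corresponding compiled LangGraph graph.
    st.markdown(f"Running **{workflow}** on the provided input...")
```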
+
+ ## Usage
+
+ 1. Clone the repository.
+ 2. Install the required dependencies (refer to `requirements.txt`).
+ 3. **Create a `.env` file in the root directory and add your API keys** (a sketch of how these keys can be loaded follows this list):
+ ```
+ GROQ_API_KEY=your_groq_api_key
+ HF_TOKEN=your_huggingface_token
+ ```
+ 4. Run the desired script using the commands provided above.
+ 5. To run the Streamlit application, execute `streamlit run app7.py`.
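
The scripts are expected to pick these keys up from the environment at startup. A minimal sketch of that loading, assuming `python-dotenv` and `langchain_groq` are installed (the model name is only an example):

```python
import os

from dotenv import load_dotenv
from langchain_groq import ChatGroq

# Load GROQ_API_KEY and HF_TOKEN from the .env file into the process environment.
load_dotenv()

assert os.getenv("GROQ_API_KEY"), "GROQ_API_KEY is missing from the .env file"

# Any of the workflow scripts can then build a Groq-backed chat model.
llm = ChatGroq(model="llama-3.1-8b-instant")
print(llm.invoke("Say hello in one short sentence.").content)
```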
+
+ ## Notes
+
+ * The scripts were initially developed for evaluating and finalizing use cases using LangGraph Studio.
+ * The Streamlit application provides a unified interface for these use cases.
+
+ ### You can view the deployed app here:
+ - https://huggingface.co/spaces/Deepri24/LangGraph_Workflows
  - ⚠️ Please enter your GROQ API key to proceed. Don't have one? Refer to: https://console.groq.com/keys

+ ## Detailed Workflow Overviews

+ ### Blog Generation App with Evaluator-Optimizer Workflow
  ![alt text](assets/image-1.png)

+ #### Overview
  This project implements an **Evaluator-Optimizer Workflow** using **LangGraph** and **LangChain** to generate and refine short blogs. The workflow follows an iterative process where an LLM generates a blog, evaluates it against predefined criteria, and either accepts it or provides feedback for revision. This ensures that the final output meets quality standards.

+ #### Why This Workflow Works
  The **Evaluator-Optimizer Workflow** is effective because it automates content generation while maintaining **quality control** through an LLM-powered evaluation loop. If the initial blog meets the set criteria (**concise, engaging, structured with subtitles and a conclusion**), it is accepted. Otherwise, the LLM provides feedback, and the blog is regenerated with improvements.

+ #### Features
  - **Automated Blog Generation**: Generates a blog based on a given topic.
  - **Evaluation & Feedback**: Reviews the blog for conciseness, structure, and entertainment value (a schema sketch follows this list).
  - **Iterative Refinement**: If the blog needs revision, feedback is provided, and a revised version is generated.
  - **LangSmith Studio Integration**: Visualizes and tests workflow execution.
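
A minimal sketch of how such an evaluation gate can be expressed with a structured output schema, assuming `langchain_groq` and Pydantic (the field names and model are illustrative, not the script's exact schema):

```python
from langchain_groq import ChatGroq
from pydantic import BaseModel, Field


class BlogEvaluation(BaseModel):
    """Structured verdict returned by the evaluator LLM."""
    grade: str = Field(description='Either "good" or "needs revision"')
    feedback: str = Field(description="Concrete suggestions if revision is needed")


llm = ChatGroq(model="llama-3.1-8b-instant")
evaluator = llm.with_structured_output(BlogEvaluation)

verdict = evaluator.invoke(
    "Evaluate this blog for conciseness, engagement, subtitles and a conclusion:\n\n"
    "<draft blog text here>"
)
print(verdict.grade, verdict.feedback)
```

The typed verdict is what makes the routing decision mechanical: the graph only has to inspect `grade` to decide whether to loop back.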

+ #### Testing in LangSmith Studio
  - Deploy the workflow and **provide only the topic** as input.
  - Monitor execution flow and **validate outputs** by logging into your LangSmith account (adding `@traceable` to your function helps track it; see the sketch below).
  - You can also test via the `langgraph dev` command in your console, which opens up Studio for enhanced debugging (ensure you have a `langgraph.json` file for this and customize it for your project).
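
For example, a workflow step can be wrapped with LangSmith's `@traceable` decorator (the function below is a placeholder; tracing also requires your LangSmith/LangChain API key to be set in the environment):

```python
from langsmith import traceable


@traceable(name="generate_blog")  # appears as a traced run in your LangSmith project
def generate_blog(topic: str) -> str:
    # The LLM call would go here; the decorator records inputs and outputs.
    return f"Draft blog about {topic}"


generate_blog("LangGraph workflows")
```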

+ ### Code Review App with Parallelization Workflow
  ![alt text](assets/image-2.png)

+ #### Introduction
  This project demonstrates a **parallelized workflow** for **automated code review** using **large language models (LLMs)**. Instead of running feedback checks sequentially, the system executes multiple review processes **in parallel**, making it an **efficient and scalable** solution for code assessment.

+ #### Why Parallelization?
  - **Faster Execution:** Multiple feedback checks run **simultaneously**, reducing the overall processing time.
  - **Improved Scalability:** New review criteria can be added without significant slowdowns.
  - **Better Resource Utilization:** Leverages LLM calls efficiently by distributing tasks.

+ #### Features
  - **Readability Analysis**: Evaluates the clarity and structure of the code.
  - **Security Review**: Identifies potential vulnerabilities.
  - **Best Practices Compliance**: Checks adherence to industry-standard coding best practices.
  - **Feedback Aggregation**: Combines results into a single, structured response.

+ #### How It Works
  1. A **code snippet** is provided as input.
  2. Three independent LLM processes analyze the snippet for:
     - Readability
     - Security vulnerabilities
     - Best practices adherence
  3. The results from these processes are aggregated into a final feedback report (see the sketch after this list).
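
A minimal sketch of how this fan-out and fan-in can be expressed in LangGraph, assuming a shared state with a list reducer (node names and the placeholder review bodies are illustrative, not the repository's code):

```python
import operator
from typing import Annotated, TypedDict

from langgraph.graph import StateGraph, START, END


class ReviewState(TypedDict):
    code: str
    # Each parallel branch appends its findings to this list via the reducer.
    feedback: Annotated[list[str], operator.add]
    report: str


def readability_check(state: ReviewState) -> dict:
    return {"feedback": ["Readability notes..."]}  # LLM call in the real script


def security_check(state: ReviewState) -> dict:
    return {"feedback": ["Security notes..."]}


def best_practices_check(state: ReviewState) -> dict:
    return {"feedback": ["Best-practice notes..."]}


def aggregate(state: ReviewState) -> dict:
    # Fan-in: merge the three branch results into one structured report.
    return {"report": "\n".join(state["feedback"])}


builder = StateGraph(ReviewState)
builder.add_node("readability", readability_check)
builder.add_node("security", security_check)
builder.add_node("best_practices", best_practices_check)
builder.add_node("aggregate", aggregate)

# Fan-out: all three reviews start from START and run in parallel,
# then each feeds the aggregator.
for name in ("readability", "security", "best_practices"):
    builder.add_edge(START, name)
    builder.add_edge(name, "aggregate")

builder.add_edge("aggregate", END)
graph = builder.compile()

result = graph.invoke({"code": "def add(a, b): return a + b", "feedback": []})
```

The `operator.add` reducer is what lets the three branches write to the same `feedback` key concurrently without overwriting each other.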

+ #### Testing in LangSmith Studio
  - Deploy the workflow and **provide only the topic** as input.
  - Monitor execution flow and **validate outputs** by logging into your LangSmith account (adding `@traceable` to your function helps track it).
  - You can also test via `langgraph dev` (ensure you have the `langgraph.json` file for this).

+ ### Learning Path Generator App with Orchestrator-Synthesizer Workflow
  ![alt text](assets/image-3.png)

+ #### Overview
  This project implements an **Orchestrator-Synthesizer** workflow to dynamically generate a personalized **learning roadmap** based on a user's existing skills and learning goals. It uses **LangChain, LangGraph, and Groq AI models** to generate structured study plans and topic summaries.

+ #### Why Orchestrator-Synthesizer?
  The **Orchestrator-Synthesizer** pattern is ideal for structured content generation workflows where tasks need to be dynamically assigned, processed independently, and then combined into a final output. It differs from traditional parallelization in the following ways:
  - **Orchestration** dynamically determines what needs to be processed, ensuring relevant tasks are executed based on user input.
  - **Workers** independently generate content summaries for each topic in the study plan.
  - **Synthesis** intelligently merges topic summaries into a well-structured learning roadmap.

+ #### Workflow Breakdown
  The workflow consists of three key components:

+ ##### 1️⃣ Orchestrator
  - Creates a **study plan** based on the user's **skills and learning goals**.
  - Uses an LLM with a structured output schema to generate a list of **learning topics**.

+ ##### 2️⃣ Workers
  - Each **worker** processes an individual **learning topic**.
  - Generates a **markdown-formatted content summary** for the topic, including key concepts and learning resources.

+ ##### 3️⃣ Synthesizer
  - Collects all **topic summaries** and organizes them into a **cohesive learning roadmap**.
  - Ensures smooth flow and structured representation of the learning journey.

+ #### Code Structure
  - `orchestrator(state: State)`: Generates the study plan dynamically.
  - `llm_call(state: WorkerState)`: Summarizes a single topic.
  - `synthesizer(state: State)`: Merges all topic summaries into the final roadmap.
  - `assign_workers(state: State)`: Dynamically assigns tasks based on generated topics (see the sketch below).
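
A condensed sketch of how these four functions can fit together using LangGraph's `Send` API for dynamic worker assignment (state keys, prompts, and the hard-coded topics are simplified placeholders, not the script's exact code):

```python
import operator
from typing import Annotated, TypedDict

from langgraph.constants import Send
from langgraph.graph import StateGraph, START, END


class State(TypedDict):
    user_skills: str
    user_goals: str
    topics: list[str]
    summaries: Annotated[list[str], operator.add]
    roadmap: str


class WorkerState(TypedDict):
    topic: str
    summaries: Annotated[list[str], operator.add]


def orchestrator(state: State) -> dict:
    # In the real script an LLM with a structured output schema plans the topics.
    return {"topics": ["Python refresher", "Supervised learning", "LangGraph basics"]}


def llm_call(state: WorkerState) -> dict:
    # Each worker summarizes one topic in markdown.
    return {"summaries": [f"## {state['topic']}\n- key concepts\n- resources"]}


def assign_workers(state: State):
    # Send() spawns one worker per planned topic; all workers run in parallel.
    return [Send("llm_call", {"topic": t}) for t in state["topics"]]


def synthesizer(state: State) -> dict:
    return {"roadmap": "\n\n".join(state["summaries"])}


builder = StateGraph(State)
builder.add_node("orchestrator", orchestrator)
builder.add_node("llm_call", llm_call)
builder.add_node("synthesizer", synthesizer)
builder.add_edge(START, "orchestrator")
builder.add_conditional_edges("orchestrator", assign_workers, ["llm_call"])
builder.add_edge("llm_call", "synthesizer")
builder.add_edge("synthesizer", END)
graph = builder.compile()

print(graph.invoke({"user_skills": "Python, basic ML", "user_goals": "Build LLM apps"})["roadmap"])
```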

+ #### Running the Workflow
  To generate a personalized learning path, the workflow takes the following inputs:
  ```python
  user_skills = "Python programming, basic machine learning concepts"