Tonic committed on
Commit 92ae60d · verified · 1 Parent(s): c6f9e88

Update README.md

Files changed (1)
  1. README.md +367 -20

README.md CHANGED
@@ -1,26 +1,373 @@
- ---
- title: ChatUI
- emoji: 🧠
- colorFrom: yellow
- colorTo: indigo
- sdk: static
- pinned: false
- short_description: Langchain / LangGraph Chat UI
- ---

- # Nerfies

- This is the repository that contains source code for the [Nerfies website](https://nerfies.github.io).

- If you find Nerfies useful for your work please cite:
  ```
- @article{park2021nerfies
- author = {Park, Keunhong and Sinha, Utkarsh and Barron, Jonathan T. and Bouaziz, Sofien and Goldman, Dan B and Seitz, Steven M. and Martin-Brualla, Ricardo},
- title = {Nerfies: Deformable Neural Radiance Fields},
- journal = {ICCV},
- year = {2021},
- }
  ```

- # Website License
- <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>.

+ ## LangGraph Agent Chat UI: Your Gateway to Agent Interaction
+
+ The Agent Chat UI is a React/Vite application that provides a clean, chat-based interface for interacting with your LangGraph agents. Here's why it's a valuable tool:
+
+ * **Easy Connection:** Connect to local or deployed LangGraph agents with a simple URL and graph ID.
+ * **Intuitive Chat:** Interact naturally with your agents, sending and receiving messages in a familiar chat format.
+ * **Visualize Agent Actions:** See tool calls and their results rendered directly in the UI.
+ * **Human-in-the-Loop Made Easy:** Seamlessly integrate human input using LangGraph's `interrupt` feature. The UI handles the presentation and interaction, allowing for approvals, edits, and responses.
+ * **Explore Execution Paths:** Use the UI to travel through time, inspect checkpoints, and fork conversations, all powered by LangGraph's state management.
+ * **Debug and Understand:** Inspect the full state of your LangGraph thread at any point.
+
+ ## Get Started with the Agent Chat UI (and LangGraph!)
+
+ You have several options to start using the UI:
+
+ ### 1. Try the Deployed Version (No Setup Required!)
+
+ * **Visit:** [agentchat.vercel.app](https://agentchat.vercel.app/)
+ * **Connect:** Enter your LangGraph deployment URL and graph ID (the `path` you set with `langserve.add_routes`). If using a production deployment, also include your LangSmith API key.
+ * **Chat!** You're ready to interact with your agent.
+
+ ### 2. Run Locally (for Development and Customization)
+
+ * **Option A: Clone the Repository:**
+   ```bash
+   git clone https://github.com/langchain-ai/agent-chat-ui.git
+   cd agent-chat-ui
+   pnpm install  # Or npm install/yarn install
+   pnpm dev      # Or npm run dev/yarn dev
+   ```
+ * **Option B: Quickstart with `npx`:**
+   ```bash
+   npx create-agent-chat-app
+   cd agent-chat-app
+   pnpm install  # Or npm install/yarn install
+   pnpm dev      # Or npm run dev/yarn dev
+   ```
+
+ Open your browser to `http://localhost:5173` (or the port indicated in your terminal).
+
+ # LangGraph Agent Chat UI
+
+ This project provides a simple, intuitive user interface (UI) for interacting with LangGraph agents. It's built with React and Vite, offering a responsive chat-like experience for testing and demonstrating your LangGraph deployments. It's designed to work seamlessly with LangGraph's core concepts, including checkpoints, thread management, and human-in-the-loop capabilities.
+
+ ## Features
+
+ * **Easy Connection:** Connect to both local and production LangGraph deployments by simply providing the deployment URL and graph ID (the path used when defining the graph).
+ * **Chat Interface:** Interact with your agents through a familiar chat interface, sending and receiving messages in real time. The UI manages the conversation thread, automatically using checkpoints for persistence.
+ * **Tool Call Rendering:** The UI automatically renders tool calls and their results, making it easy to visualize the agent's actions. This is compatible with LangGraph's [tool calling and function calling capabilities](https://python.langchain.com/docs/guides/tools/custom_tools).
+ * **Human-in-the-Loop Support:** Seamlessly integrate human intervention using LangGraph's `interrupt` function. The UI presents a dedicated interface for reviewing, editing, and responding to interrupt requests (e.g., for approval or modification of agent actions), following the standardized schema.
+ * **Thread History:** View and navigate through past chat threads, enabling you to review previous interactions. This leverages LangGraph's checkpointing for persistent conversation history.
+ * **Time Travel and Forking:** Leverage LangGraph's powerful state management features, including [checkpointing](https://python.langchain.com/docs/modules/agents/concepts#checkpointing) and thread manipulation. Run the graph from specific checkpoints, explore different execution paths, and edit previous messages.
+ * **State Inspection:** Examine the current state of your LangGraph thread for debugging and understanding the agent's internal workings. This allows you to inspect the full state object managed by LangGraph.
+ * **Multiple Deployment Options:**
+   * **Deployed Site:** Use the hosted version at [agentchat.vercel.app](https://agentchat.vercel.app/).
+   * **Local Development:** Clone the repository and run it locally for development and customization.
+   * **Quick Setup:** Use `npx create-agent-chat-app` for a fast, streamlined setup.
+ * **LangSmith API Key:** When using a production deployment, you must provide a LangSmith API key.
+
+ ## Getting Started
+
+ There are three main ways to use the Agent Chat UI:
+
+ ### 1. Using the Deployed Site (Easiest)
+
+ 1. **Navigate:** Go to [agentchat.vercel.app](https://agentchat.vercel.app/).
+ 2. **Enter Details:**
+    * **Deployment URL:** The URL of your LangGraph deployment (e.g., `http://localhost:2024` for a local deployment using LangServe, or the URL provided by LangSmith for a production deployment).
+    * **Assistant / Graph ID:** The path of the graph you want to interact with (e.g., `chat`, `email_agent`). This is defined when adding routes with `add_routes(..., path="/your_path")`.
+    * **LangSmith API Key** (Production Deployments Only): If you are connecting to a deployment hosted on LangSmith, you will need to provide your LangSmith API key for authentication. *This is NOT required for local LangGraph servers.* The key is stored locally in your browser's storage.
+ 3. **Click "Continue":** You'll be taken to the chat interface, ready to interact with your agent.
+
+ ### 2. Local Development (Full Control)
+
+ 1. **Clone the Repository:**
+
+    ```bash
+    git clone https://github.com/langchain-ai/agent-chat-ui.git
+    cd agent-chat-ui
+    ```
+
+ 2. **Install Dependencies:**
+
+    ```bash
+    pnpm install  # Or npm install, or yarn install
+    ```
+
+ 3. **Start the Development Server:**
+
+    ```bash
+    pnpm dev  # Or npm run dev, or yarn dev
+    ```
+
+ 4. **Open in Browser:** The application will typically be available at `http://localhost:5173` (the port may vary; check your terminal output). Follow the instructions in "Using the Deployed Site" to connect to your LangGraph deployment.
+
+ ### 3. Quick Setup with `npx create-agent-chat-app`
+
+ This method creates a new project directory with the Agent Chat UI already set up.
+
+ 1. **Run the Command:**
+
+    ```bash
+    npx create-agent-chat-app
+    ```
+
+ 2. **Follow Prompts:** You'll be prompted for a project name (the default is `agent-chat-app`).
+
+ 3. **Navigate to the Project Directory:**
+
+    ```bash
+    cd agent-chat-app
+    ```
+
+ 4. **Install and Run:**
+
+    ```bash
+    pnpm install  # Or npm install, or yarn install
+    pnpm dev      # Or npm run dev, or yarn dev
+    ```
+
+ 5. **Open in Browser:** The application will be available at `http://localhost:5173`. Follow the instructions in "Using the Deployed Site" to connect.
+
+ ## LangGraph Setup (Prerequisites)
+
+ Before using the Agent Chat UI, you need a running LangGraph agent served via LangServe. Below are examples of both a simple agent and an agent with human-in-the-loop support.
+
+ ### Basic LangGraph Example (Python)
+
+ ```python
+ # agent.py (Example LangGraph agent - Python)
+ from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
+ from langchain_core.runnables import chain
+ from langchain_openai import ChatOpenAI
+ from langchain_core.messages import AIMessage, HumanMessage
+ from langgraph.prebuilt import create_agent_executor
+ from langchain_core.tools import tool
+
+ # FastAPI and LangServe for serving the graph
+ from fastapi import FastAPI
+ from langserve import add_routes
+
+
+ @tool
+ def get_weather(city: str):
+     """Gets the weather for a specified city."""
+     if city.lower() == "new york":
+         return "The weather in New York is nice today with a high of 75F."
+     else:
+         return "The weather for that city is not supported"
+
+
+ # Define the tools
+ tools = [get_weather]
+
+ prompt = ChatPromptTemplate.from_messages(
+     [
+         ("system", "You are a helpful assistant"),
+         MessagesPlaceholder(variable_name="messages"),
+         MessagesPlaceholder(variable_name="agent_scratchpad"),
+     ]
+ )
+
+ model = ChatOpenAI(temperature=0).bind_tools(tools)
+
+
+ @chain
+ def transform_messages(data):
+     messages = data["messages"]
+     if not isinstance(messages[-1], HumanMessage):
+         messages.append(
+             AIMessage(
+                 content="I don't know how to respond to messages other than a final answer"
+             )
+         )
+     # Return the list itself: the prompt's MessagesPlaceholder expects a list
+     return messages
+
+
+ agent = (
+     {
+         "messages": transform_messages,
+         "agent_scratchpad": lambda x: [],  # The scratchpad starts empty
+     }
+     | prompt
+     | model
+ )
+
+ # Wrap the agent and tools in an executor graph
+ app = create_agent_executor(agent, tools)
+
+ # Serve the graph using FastAPI and LangServe
+ fastapi_app = FastAPI(
+     title="LangGraph Agent",
+     version="1.0",
+     description="A simple LangGraph agent server",
+ )
+
+ # Mount LangServe at the /chat endpoint
+ add_routes(
+     fastapi_app,
+     app,
+     path="/chat",  # Matches the graph ID we'll use in the UI
+ )
+
+ if __name__ == "__main__":
+     import uvicorn
+
+     uvicorn.run(fastapi_app, host="localhost", port=2024)
+ ```
+ To run this example:
+
+ 1. Save the code as `agent.py`.
+ 2. Install the necessary packages: `pip install langchain langchain-core langchain-openai langgraph fastapi uvicorn "langserve[all]"` (plus any other packages your tools need).
+ 3. Set your OpenAI API key: `export OPENAI_API_KEY="your-openai-api-key"`
+ 4. Run the script: `python agent.py`
+ 5. Your LangGraph agent will be running at `http://localhost:2024/chat`, and the graph ID to enter into the UI is `chat`.
+
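With the server running, you can also sanity-check the endpoint outside the UI. LangServe mounts an `/invoke` route under the `path` you chose, so the agent above is reachable at `http://localhost:2024/chat/invoke`. Here is a minimal standard-library sketch that builds the request without sending it; the payload shape is illustrative and mirrors the `messages` state the agent expects:

```python
import json
import urllib.request

# Illustrative payload: one human message in the "messages" key.
body = {
    "input": {
        "messages": [{"type": "human", "content": "What is the weather in New York?"}]
    }
}

req = urllib.request.Request(
    "http://localhost:2024/chat/invoke",  # path="/chat" + LangServe's /invoke route
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

print(req.full_url)  # http://localhost:2024/chat/invoke
# To actually send it while the server from agent.py is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

Uncomment the `urlopen` call to hit the live server; the response body contains the agent's output state.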
+ ### LangGraph with Human-in-the-Loop Example (Python)
+
+ ```python
+ from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
+ from langchain_core.runnables import chain
+ from langchain_openai import ChatOpenAI
+ from langchain_core.messages import AIMessage, HumanMessage
+ from langgraph.prebuilt import create_agent_executor, ToolInvocation, interrupt
+ from langchain_core.tools import tool
+ from fastapi import FastAPI
+ from langserve import add_routes
+
+
+ @tool
+ def write_email(subject: str, body: str, to: str):
+     """Drafts an email with a specified subject, body, and recipient."""
+     print(f"Writing email with subject '{subject}' to '{to}'")  # Debugging
+     return f"Draft email to {to} with subject {subject} sent."
+
+
+ tools = [write_email]
+
+ prompt = ChatPromptTemplate.from_messages(
+     [
+         ("system", "You are a helpful assistant that drafts emails."),
+         MessagesPlaceholder(variable_name="messages"),
+         MessagesPlaceholder(variable_name="agent_scratchpad"),
+     ]
+ )
+
+ model = ChatOpenAI(temperature=0, model="gpt-4-turbo-preview").bind_tools(tools)
+
+
+ @chain
+ def transform_messages(data):
+     messages = data["messages"]
+     if not isinstance(messages[-1], HumanMessage):
+         messages.append(
+             AIMessage(
+                 content="I don't know how to respond to messages other than a final answer"
+             )
+         )
+     # Return the list itself: the prompt's MessagesPlaceholder expects a list
+     return messages
+
+
+ def handle_interrupt(state):
+     """Handles human-in-the-loop interruptions."""
+     print("---INTERRUPT---")  # Debugging
+     messages = state["messages"]
+     last_message = messages[-1]
+
+     if isinstance(last_message, AIMessage) and isinstance(
+         last_message.content, list
+     ):
+         # Find the tool call
+         for msg in last_message.content:
+             if isinstance(msg, ToolInvocation):
+                 tool_name = msg.name
+                 tool_args = msg.args
+                 if tool_name == "write_email":
+                     # Construct the human interrupt request
+                     interrupt_data = {
+                         "type": "interrupt",
+                         "args": {
+                             "type": "response",
+                             "studio": {  # Optional
+                                 "subject": tool_args["subject"],
+                                 "body": tool_args["body"],
+                                 "to": tool_args["to"],
+                             },
+                             "description": "Response Instruction: \n\n- **Response**: Any response submitted will be passed to an LLM to rewrite the email. It can rewrite the email body, subject, or recipient.\n\n- **Edit or Accept**: Editing/Accepting the email.",
+                         },
+                     }
+                     # Call the interrupt function and return the new state
+                     return interrupt(messages, interrupt_data)
+     return {"messages": messages}
+
+
+ agent = (
+     {
+         "messages": transform_messages,
+         "agent_scratchpad": lambda x: x.get("agent_scratchpad", []),
+     }
+     | prompt
+     | model
+     | handle_interrupt  # Add the interrupt handler
+ )
+
+ # Wrap the agent and tools in an executor graph
+ app = create_agent_executor(agent, tools)
+
+ # Serve the graph using FastAPI and LangServe
+ fastapi_app = FastAPI(
+     title="LangGraph Agent",
+     version="1.0",
+     description="A simple LangGraph agent server",
+ )
+
+ # Mount LangServe at the /email_agent endpoint
+ add_routes(
+     fastapi_app,
+     app,
+     path="/email_agent",  # Matches the graph ID we'll use in the UI
+ )
+
+ if __name__ == "__main__":
+     import uvicorn
+
+     uvicorn.run(fastapi_app, host="localhost", port=2024)
+ ```
+ To run this example:
+
+ 1. Save the code as `agent.py`.
+ 2. Install the necessary packages: `pip install langchain langchain-core langchain-openai langgraph fastapi uvicorn "langserve[all]"` (plus any other packages your tools need).
+ 3. Set your OpenAI API key: `export OPENAI_API_KEY="your-openai-api-key"`
+ 4. Run the script: `python agent.py`
+ 5. Your LangGraph agent will be running at `http://localhost:2024/email_agent`, and the graph ID to enter into the UI is `email_agent`.
+
+ ## Key Concepts (LangGraph Integration)
+
+ * **Messages Key:** The Agent Chat UI expects your LangGraph state to include a `messages` key, which holds a list of `langchain_core.messages.BaseMessage` instances (e.g., `HumanMessage`, `AIMessage`, `SystemMessage`, `ToolMessage`). This is standard practice in LangChain and LangGraph for conversational agents.
+ * **Checkpoints:** The UI automatically utilizes LangGraph's checkpointing mechanism to save and restore the conversation state. This ensures that you can resume conversations and explore different branches without losing progress.
+ * **`add_routes` and `path`:** The `path` argument in `add_routes` (from `langserve`) determines the "Graph ID" that you'll enter in the UI. This is crucial for the UI to connect to the correct LangGraph endpoint.
+ * **Tool Calling:** If you use `bind_tools` with your LLM, tool calls and tool results will be rendered in the UI, with clear labels showing the function call and the response.
+
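To make the `messages` key concrete, here is a sketch of the thread state the UI reads. Plain dicts stand in for the `langchain_core.messages` classes so the snippet is self-contained; in a real graph these entries are `HumanMessage`/`AIMessage`/`ToolMessage` instances, and the email address shown is hypothetical:

```python
# Illustrative only: dicts standing in for BaseMessage objects.
thread_state = {
    "messages": [
        {"type": "human", "content": "Draft an email to Ada about the launch."},
        {
            "type": "ai",
            "content": "",
            # A tool call the UI renders with its name and arguments
            "tool_calls": [
                {
                    "name": "write_email",
                    "args": {"subject": "Launch", "body": "...", "to": "ada@example.com"},
                }
            ],
        },
        {"type": "tool", "content": "Draft email to ada@example.com with subject Launch sent."},
    ]
}

# The UI reads this key to render the conversation, newest message last.
assert thread_state["messages"][-1]["type"] == "tool"
```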
+ ## Human-in-the-Loop Details
+
+ The Agent Chat UI supports human-in-the-loop interactions using the standard LangGraph interrupt schema. Here's how it works:
+
+ 1. **Interrupt Schema:** Your LangGraph agent should call the `interrupt` function (from `langgraph.prebuilt`) with a specific schema to pause execution and request human input. The schema should include:
+    * `type`: `interrupt`.
+    * `args`: A dictionary containing information about the interruption. This is where you provide the data the human needs to review (e.g., a draft email, a proposed action).
+      * `type`: Can be one of `"response"`, `"accept"`, or `"ignore"`. This indicates the type of human interaction expected.
+      * `args`: Further arguments specific to the interrupt type. For instance, if the interrupt type is `response`, the `args` could contain a message to give to the user.
+      * `studio`: *Optional.* If included, this must contain `subject`, `body`, and `to` keys for interrupt requests.
+      * `description`: *Optional.* If used, this provides a static prompt to the user that displays the fields the human needs to complete.
+    * `name` (optional): A name for the interrupt.
+    * `id` (optional): A unique identifier for the interrupt.
+
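The schema above can be captured in a small validation helper. This is illustrative only and not part of the UI; the field names follow the description above, and the example payload reuses the `write_email` fields from the earlier example:

```python
VALID_INTERACTION_TYPES = {"response", "accept", "ignore"}


def is_valid_interrupt(payload: dict) -> bool:
    """Checks an interrupt payload against the schema described above."""
    if payload.get("type") != "interrupt":
        return False
    args = payload.get("args")
    if not isinstance(args, dict):
        return False
    if args.get("type") not in VALID_INTERACTION_TYPES:
        return False
    # "studio", if present, must carry the subject/body/to fields
    studio = args.get("studio")
    if studio is not None and not {"subject", "body", "to"} <= set(studio):
        return False
    return True


# The payload built in the email example above passes:
example = {
    "type": "interrupt",
    "args": {
        "type": "response",
        "studio": {"subject": "Launch", "body": "...", "to": "ada@example.com"},
        "description": "Review the draft email before it is sent.",
    },
}
assert is_valid_interrupt(example)
assert not is_valid_interrupt({"type": "interrupt", "args": {"type": "approve"}})
```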
+ 2. **UI Rendering:** When the Agent Chat UI detects an interrupt with this schema, it will automatically render a user-friendly interface for human interaction. This interface allows the user to:
+    * **Inspect:** View the data provided in the `args` of the interrupt (e.g., the content of a draft email).
+    * **Edit:** Modify the data (if the interrupt schema allows for it).
+    * **Respond:** Provide a response (if the interrupt type is `"response"`).
+    * **Accept/Reject:** Approve or reject the proposed action (if the interrupt type is `"accept"`).
+    * **Ignore:** Ignore the interrupt (if the interrupt type is `"ignore"`).
+
+ 3. **Resuming Execution:** After the human interacts with the interrupt, the UI sends the response back to your LangGraph agent via LangServe, and execution resumes.
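The value the human submits depends on the interrupt's `type`. The shapes below are illustrative only (the exact wire format is handled by the UI and LangServe), and `route_human_input` with its node names is hypothetical, sketching how a graph might branch on the resumed value:

```python
# Illustrative resume values, one per interaction type described above.
resume_for_response = {"type": "response", "args": "Make the email more formal and shorter."}
resume_for_accept = {"type": "accept", "args": None}
resume_for_ignore = {"type": "ignore", "args": None}


def route_human_input(resume: dict) -> str:
    """Hypothetical router: picks the next graph node from the human's choice."""
    if resume["type"] == "response":
        return "rewrite_email"  # hypothetical node name
    if resume["type"] == "accept":
        return "send_email"     # hypothetical node name
    return "end"


assert route_human_input(resume_for_response) == "rewrite_email"
```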