GoogleSearchTool-fix
#33
by
davidxia
- opened
- README.md +3 -22
- bonus-unit1/bonus-unit1.ipynb +13 -18
- bonus-unit2/monitoring-and-evaluating-agents.ipynb +0 -657
- unit1/dummy_agent_library.ipynb → dummy_agent_library.ipynb +17 -20
- unit2/langgraph/agent.ipynb +0 -332
- unit2/langgraph/mail_sorting.ipynb +0 -457
- unit2/llama-index/agents.ipynb +0 -334
- unit2/llama-index/components.ipynb +0 -0
- unit2/llama-index/tools.ipynb +0 -274
- unit2/llama-index/workflows.ipynb +0 -401
- unit2/smolagents/code_agents.ipynb +0 -0
- unit2/smolagents/multiagent_notebook.ipynb +0 -0
- unit2/smolagents/retrieval_agents.ipynb +8 -8
- unit2/smolagents/tool_calling_agents.ipynb +14 -14
- unit2/smolagents/tools.ipynb +19 -19
- unit2/smolagents/vision_agents.ipynb +15 -18
- unit2/smolagents/vision_web_browser.py +2 -2
README.md
CHANGED
@@ -2,26 +2,7 @@
  license: apache-2.0
  ---
  
- 
- 
- ## 📓 Notebook Index
- 
- | Unit | Notebook Name | Redirect Link |
- |--------------|------------------------------------|----------------|
- | Unit 1 | Dummy Agent Library | [→](https://huggingface.co/agents-course/notebooks/blob/main/unit1/dummy_agent_library.ipynb) |
- | Unit 2.1 - smolagents | Code Agents | [→](https://huggingface.co/agents-course/notebooks/blob/main/unit2/smolagents/code_agents.ipynb) |
- | Unit 2.1 - smolagents | Multi-agent Notebook | [→](https://huggingface.co/agents-course/notebooks/blob/main/unit2/smolagents/multiagent_notebook.ipynb) |
- | Unit 2.1 - smolagents | Retrieval Agents | [→](https://huggingface.co/agents-course/notebooks/blob/main/unit2/smolagents/retrieval_agents.ipynb) |
- | Unit 2.1 - smolagents | Tool Calling Agents | [→](https://huggingface.co/agents-course/notebooks/blob/main/unit2/smolagents/tool_calling_agents.ipynb) |
- | Unit 2.1 - smolagents | Tools | [→](https://huggingface.co/agents-course/notebooks/blob/main/unit2/smolagents/tools.ipynb) |
- | Unit 2.1 - smolagents | Vision Agents | [→](https://huggingface.co/agents-course/notebooks/blob/main/unit2/smolagents/vision_agents.ipynb) |
- | Unit 2.1 - smolagents | Vision Web Browser | [→](https://huggingface.co/agents-course/notebooks/blob/main/unit2/smolagents/vision_web_browser.py) |
- | Unit 2.2 - LlamaIndex | Agents | [→](https://huggingface.co/agents-course/notebooks/blob/main/unit2/llama-index/agents.ipynb) |
- | Unit 2.2 - LlamaIndex | Components | [→](https://huggingface.co/agents-course/notebooks/blob/main/unit2/llama-index/components.ipynb) |
- | Unit 2.2 - LlamaIndex | Tools | [→](https://huggingface.co/agents-course/notebooks/blob/main/unit2/llama-index/tools.ipynb) |
- | Unit 2.2 - LlamaIndex | Workflows | [→](https://huggingface.co/agents-course/notebooks/blob/main/unit2/llama-index/workflows.ipynb) |
- | Unit 2.3 - LangGraph | Agent | [→](https://huggingface.co/agents-course/notebooks/blob/main/unit2/langgraph/agent.ipynb) |
- | Unit 2.3 - LangGraph | Mail Sorting | [→](https://huggingface.co/agents-course/notebooks/blob/main/unit2/langgraph/mail_sorting.ipynb) |
- | Bonus Unit 1 | Gemma SFT & Thinking Function Call | [→](https://huggingface.co/agents-course/notebooks/blob/main/bonus-unit1/bonus-unit1.ipynb) |
- | Bonus Unit 2 | Monitoring & Evaluating Agents | [→](https://huggingface.co/agents-course/notebooks/blob/main/bonus-unit2/monitoring-and-evaluating-agents.ipynb) |
- 
+ Example notebooks, part of the [Hugging Face Agents Course](https://huggingface.co/learn/agents-course/unit0/introduction).
+ 
+ * [Dummy Agent Library](https://huggingface.co/agents-course/notebooks/blob/main/dummy_agent_library.ipynb) — Creating an Agent from scratch. It's complicated!
+ * …
bonus-unit1/bonus-unit1.ipynb
CHANGED
@@ -23,7 +23,7 @@
  "id": "gWR4Rvpmjq5T"
  },
  "source": [
- "## …
+ "## Prerequisites 🏗️\n",
  "\n",
  "Before diving into the notebook, you need to:\n",
  "\n",
@@ -103,7 +103,7 @@
  "source": [
  "## Step 2: Install dependencies 📚\n",
  "\n",
- "We need multiple …
+ "We need multiple librairies:\n",
  "\n",
  "- `bitsandbytes` for quantization\n",
  "- `peft`for LoRA adapters\n",
@@ -130,9 +130,7 @@
  "!pip install -q -U peft\n",
  "!pip install -q -U trl\n",
  "!pip install -q -U tensorboardX\n",
- "!pip install -q wandb …
- "!pip install -q -U torchvision\n",
- "!pip install -q -U transformers"
+ "!pip install -q wandb"
  ]
  },
  {
@@ -165,7 +163,7 @@
  "id": "vBAkwg9zu6A1"
  },
  "source": [
- "## Step 4: Import the …
+ "## Step 4: Import the librairies\n",
  "\n",
  "Don't forget to put your HF token."
  ]
@@ -186,7 +184,7 @@
  "import torch\n",
  "import json\n",
  "\n",
- "from transformers import AutoModelForCausalLM, AutoTokenizer, …
+ "from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed\n",
  "from datasets import load_dataset\n",
  "from trl import SFTConfig, SFTTrainer\n",
  "from peft import LoraConfig, TaskType\n",
@@ -321,10 +319,7 @@
  "source": [
  "dataset = dataset.map(preprocess, remove_columns=\"messages\")\n",
  "dataset = dataset[\"train\"].train_test_split(0.1)\n",
- "print(dataset) …
- "\n",
- "dataset[\"train\"] = dataset[\"train\"].select(range(100))\n",
- "dataset[\"test\"] = dataset[\"test\"].select(range(10))"
+ "print(dataset)"
  ]
  },
  {
@@ -342,7 +337,7 @@
  "\n",
  "1. A *User message* containing the **necessary information with the list of available tools** inbetween `<tools></tools>` then the user query, here: `\"Can you get me the latest news headlines for the United States?\"`\n",
  "\n",
- "2. An *Assistant message* here called \"model\" to fit the criterias from gemma models containing two new phases, a **\"thinking\"** phase contained in `<think></think>` and an **\"Act\"** phase contained in `<tool_call …
+ "2. An *Assistant message* here called \"model\" to fit the criterias from gemma models containing two new phases, a **\"thinking\"** phase contained in `<think></think>` and an **\"Act\"** phase contained in `<tool_call></<tool_call>`.\n",
  "\n",
  "3. If the model contains a `<tools_call>`, we will append the result of this action in a new **\"Tool\"** message containing a `<tool_response></tool_response>` with the answer from the tool."
  ]
@@ -624,8 +619,8 @@
  " eothink = \"</think>\"\n",
  " tool_call=\"<tool_call>\"\n",
  " eotool_call=\"</tool_call>\"\n",
- " tool_response=\"< …
- " eotool_response=\"</ …
+ " tool_response=\"<tool_reponse>\"\n",
+ " eotool_response=\"</tool_reponse>\"\n",
  " pad_token = \"<pad>\"\n",
  " eos_token = \"<eos>\"\n",
  " @classmethod\n",
@@ -655,7 +650,7 @@
  "source": [
  "## Step 9: Let's configure the LoRA\n",
  "\n",
- "This is we are going to define the parameter of our adapter. Those …
+ "This is we are going to define the parameter of our adapter. Those a the most important parameters in LoRA as they define the size and importance of the adapters we are training."
  ]
  },
  {
@@ -707,7 +702,7 @@
  },
  "outputs": [],
  "source": [
- "username=\"Jofthomas\"# …
+ "username=\"Jofthomas\"# REPLCAE with your Hugging Face username\n",
  "output_dir = \"gemma-2-2B-it-thinking-function_calling-V0\" # The directory where the trained model checkpoints, logs, and other artifacts will be saved. It will also be the default name of the model when pushed to the hub if not redefined later.\n",
  "per_device_train_batch_size = 1\n",
  "per_device_eval_batch_size = 1\n",
@@ -1196,7 +1191,7 @@
  },
  {
  "cell_type": "code",
- "execution_count": …
+ "execution_count": 19,
  "id": "56b89825-70ac-42c1-934c-26e2d54f3b7b",
  "metadata": {
  "colab": {
@@ -1476,7 +1471,7 @@
  "device = \"auto\"\n",
  "config = PeftConfig.from_pretrained(peft_model_id)\n",
  "model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path,\n",
- " device_map …
+ " device_map=\"auto\",\n",
  " )\n",
  "tokenizer = AutoTokenizer.from_pretrained(peft_model_id)\n",
  "model.resize_token_embeddings(len(tokenizer))\n",
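
Taken together, the bonus-unit1 hunks trim the install cell, import `set_seed`, and restore the `device_map="auto"` argument when the fine-tuned adapter is reloaded. Below is a minimal sketch of the reload flow those hunks complete — assuming a GPU runtime and a `peft_model_id` composed from the notebook's `username` and `output_dir` variables (the exact repo id is not spelled out in the diff):

```python
# Sketch only: mirrors the notebook's reload cell after this PR.
# The repo id below is a hypothetical composition of username/output_dir.
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

peft_model_id = "Jofthomas/gemma-2-2B-it-thinking-function_calling-V0"  # assumed id

config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    device_map="auto",  # the argument this PR's last hunk completes
)
tokenizer = AutoTokenizer.from_pretrained(peft_model_id)
model.resize_token_embeddings(len(tokenizer))  # account for the added special tokens
model = PeftModel.from_pretrained(model, peft_model_id)
```

With `device_map="auto"`, the model's layers are placed across the available devices instead of all being loaded onto one.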
bonus-unit2/monitoring-and-evaluating-agents.ipynb
DELETED
@@ -1,657 +0,0 @@
- {
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Bonus Unit 1: Observability and Evaluation of Agents\n",
- "\n",
- "In this tutorial, we will learn how to **monitor the internal steps (traces) of our AI agent** and **evaluate its performance** using open-source observability tools.\n",
- "\n",
- "The ability to observe and evaluate an agent's behavior is essential for:\n",
- "- Debugging issues when tasks fail or produce suboptimal results\n",
- "- Monitoring costs and performance in real-time\n",
- "- Improving reliability and safety through continuous feedback\n",
- "\n",
- "This notebook is part of the [Hugging Face Agents Course](https://www.hf.co/learn/agents-course/unit1/introduction)."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Exercise Prerequisites 🏗️\n",
- "\n",
- "Before running this notebook, please be sure you have:\n",
- "\n",
- "🔲 📚 **Studied** [Introduction to Agents](https://huggingface.co/learn/agents-course/unit1/introduction)\n",
- "\n",
- "🔲 📚 **Studied** [The smolagents framework](https://huggingface.co/learn/agents-course/unit2/smolagents/introduction)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Step 0: Install the Required Libraries\n",
- "\n",
- "We will need a few libraries that allow us to run, monitor, and evaluate our agents:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "%pip install langfuse 'smolagents[telemetry]' openinference-instrumentation-smolagents datasets 'smolagents[gradio]' gradio --upgrade"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Step 1: Instrument Your Agent\n",
- "\n",
- "In this notebook, we will use [Langfuse](https://langfuse.com/) as our observability tool, but you can use **any other OpenTelemetry-compatible service**. The code below shows how to set environment variables for Langfuse (or any OTel endpoint) and how to instrument your smolagent.\n",
- "\n",
- "**Note:** If you are using LlamaIndex or LangGraph, you can find documentation on instrumenting them [here](https://langfuse.com/docs/integrations/llama-index/workflows) and [here](https://langfuse.com/docs/integrations/langchain/example-python-langgraph). "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 1,
- "metadata": {},
- "outputs": [],
- "source": [
- "import os\n",
- "\n",
- "# Get keys for your project from the project settings page: https://cloud.langfuse.com\n",
- "os.environ[\"LANGFUSE_PUBLIC_KEY\"] = \"pk-lf-...\" \n",
- "os.environ[\"LANGFUSE_SECRET_KEY\"] = \"sk-lf-...\" \n",
- "os.environ[\"LANGFUSE_HOST\"] = \"https://cloud.langfuse.com\" # 🇪🇺 EU region\n",
- "# os.environ[\"LANGFUSE_HOST\"] = \"https://us.cloud.langfuse.com\" # 🇺🇸 US region\n",
- "\n",
- "# Set your Hugging Face and other tokens/secrets as environment variable\n",
- "os.environ[\"HF_TOKEN\"] = \"hf_...\" "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "With the environment variables set, we can now initialize the Langfuse client. get_client() initializes the Langfuse client using the credentials provided in the environment variables."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 12,
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Langfuse client is authenticated and ready!\n"
- ]
- }
- ],
- "source": [
- "from langfuse import get_client\n",
- " \n",
- "langfuse = get_client()\n",
- " \n",
- "# Verify connection\n",
- "if langfuse.auth_check():\n",
- " print(\"Langfuse client is authenticated and ready!\")\n",
- "else:\n",
- " print(\"Authentication failed. Please check your credentials and host.\")"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 13,
- "metadata": {},
- "outputs": [
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "Attempting to instrument while already instrumented\n"
- ]
- }
- ],
- "source": [
- "from openinference.instrumentation.smolagents import SmolagentsInstrumentor\n",
- " \n",
- "SmolagentsInstrumentor().instrument()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Step 2: Test Your Instrumentation\n",
- "\n",
- "Here is a simple CodeAgent from smolagents that calculates `1+1`. We run it to confirm that the instrumentation is working correctly. If everything is set up correctly, you will see logs/spans in your observability dashboard."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from smolagents import InferenceClientModel, CodeAgent\n",
- "\n",
- "# Create a simple agent to test instrumentation\n",
- "agent = CodeAgent(\n",
- " tools=[],\n",
- " model=InferenceClientModel()\n",
- ")\n",
- "\n",
- "agent.run(\"1+1=\")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Check your [Langfuse Traces Dashboard](https://cloud.langfuse.com/traces) (or your chosen observability tool) to confirm that the spans and logs have been recorded.\n",
- "\n",
- "Example screenshot from Langfuse:\n",
- "\n",
- "\n",
- "\n",
- "_[Link to the trace](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/1b94d6888258e0998329cdb72a371155?timestamp=2025-03-10T11%3A59%3A41.743Z)_"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Step 3: Observe and Evaluate a More Complex Agent\n",
- "\n",
- "Now that you have confirmed your instrumentation works, let's try a more complex query so we can see how advanced metrics (token usage, latency, costs, etc.) are tracked."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from smolagents import (CodeAgent, DuckDuckGoSearchTool, InferenceClientModel)\n",
- "\n",
- "search_tool = DuckDuckGoSearchTool()\n",
- "agent = CodeAgent(tools=[search_tool], model=InferenceClientModel())\n",
- "\n",
- "agent.run(\"How many Rubik's Cubes could you fit inside the Notre Dame Cathedral?\")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Trace Structure\n",
- "\n",
- "Most observability tools record a **trace** that contains **spans**, which represent each step of your agent's logic. Here, the trace contains the overall agent run and sub-spans for:\n",
- "- The tool calls (DuckDuckGoSearchTool)\n",
- "- The LLM calls (InferenceClientModel)\n",
- "\n",
- "You can inspect these to see precisely where time is spent, how many tokens are used, and so on:\n",
- "\n",
- "\n",
- "\n",
- "_[Link to the trace](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/1ac33b89ffd5e75d4265b62900c348ed?timestamp=2025-03-07T13%3A45%3A09.149Z&display=preview)_"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Online Evaluation\n",
- "\n",
- "In the previous section, we learned about the difference between online and offline evaluation. Now, we will see how to monitor your agent in production and evaluate it live.\n",
- "\n",
- "### Common Metrics to Track in Production\n",
- "\n",
- "1. **Costs** — The smolagents instrumentation captures token usage, which you can transform into approximate costs by assigning a price per token.\n",
- "2. **Latency** — Observe the time it takes to complete each step, or the entire run.\n",
- "3. **User Feedback** — Users can provide direct feedback (thumbs up/down) to help refine or correct the agent.\n",
- "4. **LLM-as-a-Judge** — Use a separate LLM to evaluate your agent's output in near real-time (e.g., checking for toxicity or correctness).\n",
- "\n",
- "Below, we show examples of these metrics."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### 1. Costs\n",
- "\n",
- "Below is a screenshot showing usage for `Qwen2.5-Coder-32B-Instruct` calls. This is useful to see costly steps and optimize your agent. \n",
- "\n",
- "\n",
- "\n",
- "_[Link to the trace](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/1ac33b89ffd5e75d4265b62900c348ed?timestamp=2025-03-07T13%3A45%3A09.149Z&display=preview)_"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### 2. Latency\n",
- "\n",
- "We can also see how long it took to complete each step. In the example below, the entire conversation took 32 seconds, which you can break down by step. This helps you identify bottlenecks and optimize your agent.\n",
- "\n",
- "\n",
- "\n",
- "_[Link to the trace](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/1ac33b89ffd5e75d4265b62900c348ed?timestamp=2025-03-07T13%3A45%3A09.149Z&display=preview)_"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### 3. Additional Attributes\n",
- "\n",
- "You may also pass additional attributes to your spans. These can include `user_id`, `tags`, `session_id`, and custom metadata. Enriching traces with these details is important for analysis, debugging, and monitoring of your application's behavior across different users or sessions."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from smolagents import (CodeAgent, DuckDuckGoSearchTool, InferenceClientModel)\n",
- "\n",
- "search_tool = DuckDuckGoSearchTool()\n",
- "agent = CodeAgent(\n",
- " tools=[search_tool],\n",
- " model=InferenceClientModel()\n",
- ")\n",
- "\n",
- "with langfuse.start_as_current_span(\n",
- " name=\"Smolagent-Trace\",\n",
- " ) as span:\n",
- " \n",
- " # Run your application here\n",
- " response = agent.run(\"What is the capital of Germany?\")\n",
- " \n",
- " # Pass additional attributes to the span\n",
- " span.update_trace(\n",
- " input=\"What is the capital of Germany?\",\n",
- " output=response,\n",
- " user_id=\"smolagent-user-123\",\n",
- " session_id=\"smolagent-session-123456789\",\n",
- " tags=[\"city-question\", \"testing-agents\"],\n",
- " metadata={\"email\": \"[email protected]\"},\n",
- " )\n",
- " \n",
- "# Flush events in short-lived applications\n",
- "langfuse.flush()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- ""
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### 4. User Feedback\n",
- "\n",
- "If your agent is embedded into a user interface, you can record direct user feedback (like a thumbs-up/down in a chat UI). Below is an example using [Gradio](https://gradio.app/) to embed a chat with a simple feedback mechanism.\n",
- "\n",
- "In the code snippet below, when a user sends a chat message, we capture the trace in Langfuse. If the user likes/dislikes the last answer, we attach a score to the trace."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "import gradio as gr\n",
- "from smolagents import (CodeAgent, InferenceClientModel)\n",
- "from langfuse import get_client\n",
- "\n",
- "langfuse = get_client()\n",
- "\n",
- "model = InferenceClientModel()\n",
- "agent = CodeAgent(tools=[], model=model, add_base_tools=True)\n",
- "\n",
- "trace_id = None\n",
- "\n",
- "def respond(prompt, history):\n",
- " with langfuse.start_as_current_span(\n",
- " name=\"Smolagent-Trace\"):\n",
- " \n",
- " # Run your application here\n",
- " output = agent.run(prompt)\n",
- "\n",
- " global trace_id\n",
- " trace_id = langfuse.get_current_trace_id()\n",
- "\n",
- " history.append({\"role\": \"assistant\", \"content\": str(output)})\n",
- " return history\n",
- "\n",
- "def handle_like(data: gr.LikeData):\n",
- " # For demonstration, we map user feedback to a 1 (like) or 0 (dislike)\n",
- " if data.liked:\n",
- " langfuse.create_score(\n",
- " value=1,\n",
- " name=\"user-feedback\",\n",
- " trace_id=trace_id\n",
- " )\n",
- " else:\n",
- " langfuse.create_score(\n",
- " value=0,\n",
- " name=\"user-feedback\",\n",
- " trace_id=trace_id\n",
- " )\n",
- "\n",
- "with gr.Blocks() as demo:\n",
- " chatbot = gr.Chatbot(label=\"Chat\", type=\"messages\")\n",
- " prompt_box = gr.Textbox(placeholder=\"Type your message...\", label=\"Your message\")\n",
- "\n",
- " # When the user presses 'Enter' on the prompt, we run 'respond'\n",
- " prompt_box.submit(\n",
- " fn=respond,\n",
- " inputs=[prompt_box, chatbot],\n",
- " outputs=chatbot\n",
- " )\n",
- "\n",
- " # When the user clicks a 'like' button on a message, we run 'handle_like'\n",
- " chatbot.like(handle_like, None, None)\n",
- "\n",
- "demo.launch()\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "User feedback is then captured in your observability tool:\n",
- "\n",
- ""
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### 5. LLM-as-a-Judge\n",
- "\n",
- "LLM-as-a-Judge is another way to automatically evaluate your agent's output. You can set up a separate LLM call to gauge the output's correctness, toxicity, style, or any other criteria you care about.\n",
- "\n",
- "**Workflow**:\n",
- "1. You define an **Evaluation Template**, e.g., \"Check if the text is toxic.\"\n",
- "2. Each time your agent generates output, you pass that output to your \"judge\" LLM with the template.\n",
- "3. The judge LLM responds with a rating or label that you log to your observability tool.\n",
- "\n",
- "Example from Langfuse:\n",
- "\n",
- "\n",
- ""
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Example: Checking if the agent's output is toxic or not.\n",
- "from smolagents import (CodeAgent, DuckDuckGoSearchTool, InferenceClientModel)\n",
- "\n",
- "search_tool = DuckDuckGoSearchTool()\n",
- "agent = CodeAgent(tools=[search_tool], model=InferenceClientModel())\n",
- "\n",
- "agent.run(\"Can eating carrots improve your vision?\")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "You can see that the answer of this example is judged as \"not toxic\".\n",
- "\n",
- ""
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### 6. Observability Metrics Overview\n",
- "\n",
- "All of these metrics can be visualized together in dashboards. This enables you to quickly see how your agent performs across many sessions and helps you to track quality metrics over time.\n",
- "\n",
- ""
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Offline Evaluation\n",
- "\n",
- "Online evaluation is essential for live feedback, but you also need **offline evaluation**—systematic checks before or during development. This helps maintain quality and reliability before rolling changes into production."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Dataset Evaluation\n",
- "\n",
- "In offline evaluation, you typically:\n",
- "1. Have a benchmark dataset (with prompt and expected output pairs)\n",
- "2. Run your agent on that dataset\n",
- "3. Compare outputs to the expected results or use an additional scoring mechanism\n",
- "\n",
- "Below, we demonstrate this approach with the [GSM8K dataset](https://huggingface.co/datasets/gsm8k), which contains math questions and solutions."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "import pandas as pd\n",
- "from datasets import load_dataset\n",
- "\n",
- "# Fetch GSM8K from Hugging Face\n",
- "dataset = load_dataset(\"openai/gsm8k\", 'main', split='train')\n",
- "df = pd.DataFrame(dataset)\n",
- "print(\"First few rows of GSM8K dataset:\")\n",
- "print(df.head())"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Next, we create a dataset entity in Langfuse to track the runs. Then, we add each item from the dataset to the system. (If you're not using Langfuse, you might simply store these in your own database or local file for analysis.)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from langfuse import get_client\n",
- "langfuse = get_client()\n",
- "\n",
- "langfuse_dataset_name = \"gsm8k_dataset_huggingface\"\n",
- "\n",
- "# Create a dataset in Langfuse\n",
- "langfuse.create_dataset(\n",
- " name=langfuse_dataset_name,\n",
- " description=\"GSM8K benchmark dataset uploaded from Huggingface\",\n",
- " metadata={\n",
- " \"date\": \"2025-03-10\", \n",
- " \"type\": \"benchmark\"\n",
- " }\n",
- ")"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "for idx, row in df.iterrows():\n",
- " langfuse.create_dataset_item(\n",
- " dataset_name=langfuse_dataset_name,\n",
- " input={\"text\": row[\"question\"]},\n",
- " expected_output={\"text\": row[\"answer\"]},\n",
- " metadata={\"source_index\": idx}\n",
- " )\n",
- " if idx >= 9: # Upload only the first 10 items for demonstration\n",
- " break"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- ""
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Running the Agent on the Dataset\n",
- "\n",
- "We define a helper function `run_smolagent()` that:\n",
- "1. Starts a Langfuse span\n",
- "2. Runs our agent on the prompt\n",
- "3. Records the trace ID in Langfuse\n",
- "\n",
- "Then, we loop over each dataset item, run the agent, and link the trace to the dataset item. We can also attach a quick evaluation score if desired."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from opentelemetry.trace import format_trace_id\n",
- "from smolagents import (CodeAgent, InferenceClientModel, LiteLLMModel)\n",
- "from langfuse import get_client\n",
- " \n",
- "langfuse = get_client()\n",
- "\n",
- "\n",
- "# Example: using InferenceClientModel or LiteLLMModel to access openai, anthropic, gemini, etc. models:\n",
- "model = InferenceClientModel()\n",
- "\n",
- "agent = CodeAgent(\n",
- " tools=[],\n",
- " model=model,\n",
- " add_base_tools=True\n",
- ")\n",
- "\n",
- "dataset_name = \"gsm8k_dataset_huggingface\"\n",
- "current_run_name = \"smolagent-notebook-run-01\" # Identifies this specific evaluation run\n",
- " \n",
- "# Assume 'run_smolagent' is your instrumented application function\n",
- "def run_smolagent(question):\n",
- " with langfuse.start_as_current_generation(name=\"qna-llm-call\") as generation:\n",
- " # Simulate LLM call\n",
- " result = agent.run(question)\n",
- " \n",
- " # Update the trace with the input and output\n",
- " generation.update_trace(\n",
- " input= question,\n",
- " output=result,\n",
- " )\n",
- " \n",
- " return result\n",
- " \n",
- "dataset = langfuse.get_dataset(name=dataset_name) # Fetch your pre-populated dataset\n",
- " \n",
- "for item in dataset.items:\n",
- " \n",
- " # Use the item.run() context manager\n",
- " with item.run(\n",
- " run_name=current_run_name,\n",
- " run_metadata={\"model_provider\": \"Hugging Face\", \"temperature_setting\": 0.7},\n",
- " run_description=\"Evaluation run for GSM8K dataset\"\n",
- " ) as root_span: # root_span is the root span of the new trace for this item and run.\n",
- " # All subsequent langfuse operations within this block are part of this trace.\n",
- " \n",
- " # Call your application logic\n",
- " generated_answer = run_smolagent(question=item.input[\"text\"])\n",
- " \n",
- " print(item.input)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "You can repeat this process with different:\n",
- "- Models (OpenAI GPT, local LLM, etc.)\n",
- "- Tools (search vs. no search)\n",
- "- Prompts (different system messages)\n",
- "\n",
- "Then compare them side-by-side in your observability tool:\n",
- "\n",
- "\n",
- "\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Final Thoughts\n",
- "\n",
- "In this notebook, we covered how to:\n",
- "1. **Set up Observability** using smolagents + OpenTelemetry exporters\n",
- "2. **Check Instrumentation** by running a simple agent\n",
- "3. **Capture Detailed Metrics** (cost, latency, etc.) through an observability tools\n",
- "4. **Collect User Feedback** via a Gradio interface\n",
- "5. **Use LLM-as-a-Judge** to automatically evaluate outputs\n",
- "6. **Perform Offline Evaluation** with a benchmark dataset\n",
- "\n",
- "🤖 Happy coding!"
- ]
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": ".venv",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.13.2"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 2
- }
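
For reviewers weighing the deletion above: the removed notebook's core pattern was Langfuse-over-OpenTelemetry instrumentation of a smolagents run. A condensed sketch, using only calls that appear in the deleted cells (it assumes the `LANGFUSE_PUBLIC_KEY`, `LANGFUSE_SECRET_KEY`, `LANGFUSE_HOST`, and `HF_TOKEN` environment variables are set as the notebook showed):

```python
# Condensed from the deleted notebook above; a sketch, not part of this PR.
from langfuse import get_client
from openinference.instrumentation.smolagents import SmolagentsInstrumentor
from smolagents import CodeAgent, InferenceClientModel

langfuse = get_client()                # reads LANGFUSE_* credentials from the env
SmolagentsInstrumentor().instrument()  # emit an OpenTelemetry span per agent step

agent = CodeAgent(tools=[], model=InferenceClientModel())
with langfuse.start_as_current_span(name="Smolagent-Trace") as span:
    response = agent.run("1+1=")
    span.update_trace(input="1+1=", output=str(response))
langfuse.flush()  # export pending spans before a short-lived script exits
```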
unit1/dummy_agent_library.ipynb → dummy_agent_library.ipynb
RENAMED
@@ -40,16 +40,14 @@
  "\n",
  "In the Hugging Face ecosystem, there is a convenient feature called Serverless API that allows you to easily run inference on many models. There's no installation or deployment required.\n",
  "\n",
- "To run this notebook, **you need a Hugging Face token** that you can get from https://hf.co/settings/tokens. …
- "- If you are running this notebook on Google Colab, you can set it up in the \"settings\" tab under \"secrets\". Make sure to call it \"HF_TOKEN\" and restart the session to load the environment variable (Runtime -> Restart session).\n",
- "- If you are running this notebook locally, you can set it up as an [environment variable](https://huggingface.co/docs/huggingface_hub/en/package_reference/environment_variables). Make sure you restart the kernel after installing or updating huggingface_hub. You can update huggingface_hub by modifying the above `!pip install -q huggingface_hub -U`\n",
+ "To run this notebook, **you need a Hugging Face token** that you can get from https://hf.co/settings/tokens. If you are running this notebook on Google Colab, you can set it up in the \"settings\" tab under \"secrets\". Make sure to call it \"HF_TOKEN\".\n",
  "\n",
- "You also need to request access to [the Meta Llama models]( …
+ "You also need to request access to [the Meta Llama models](meta-llama/Llama-3.2-3B-Instruct), if you haven't done it before. Approval usually takes up to an hour."
  ]
  },
  {
  "cell_type": "code",
- "execution_count": …
+ "execution_count": 2,
  "id": "5af6ec14-bb7d-49a4-b911-0cf0ec084df5",
  "metadata": {
  "id": "5af6ec14-bb7d-49a4-b911-0cf0ec084df5",
@@ -60,11 +58,10 @@
  "import os\n",
  "from huggingface_hub import InferenceClient\n",
  "\n",
- "# …
+ "# os.environ[\"HF_TOKEN\"]=\"hf_xxxxxxxxxxx\"\n",
  "\n",
- "\n",
- " …
- "# if the outputs for next cells are wrong, the free model may be overloaded. You can also use this public endpoint that contains Llama-3.3-70B-Instruct\n",
+ "client = InferenceClient(\"meta-llama/Llama-3.2-3B-Instruct\")\n",
+ "# if the outputs for next cells are wrong, the free model may be overloaded. You can also use this public endpoint that contains Llama-3.2-3B-Instruct\n",
  "#client = InferenceClient(\"https://jc26mwg228mkj8dw.us-east-1.aws.endpoints.huggingface.cloud\")"
  ]
  },
@@ -117,7 +114,7 @@
  "id": "T9-6h-eVAWrR"
  },
  "source": [
- "If we now add the special tokens related to the <a href=\"https://huggingface.co/meta-llama/Llama-3. …
+ "If we now add the special tokens related to the <a href=\"https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct\">Llama-3.2-3B-Instruct model</a> that we're using, the behavior changes and it now produces the expected EOS."
  ]
  },
  {
@@ -142,7 +139,7 @@
  }
  ],
  "source": [
- "# If we now add the special tokens related to Llama3. …
+ "# If we now add the special tokens related to Llama3.2 model, the behaviour changes and is now the expected one.\n",
  "prompt=\"\"\"<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n",
  "\n",
  "The capital of france is<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n",
@@ -237,7 +234,7 @@
  "outputs": [],
  "source": [
  "# This system prompt is a bit more complex and actually contains the function description already appended.\n",
- "# Here we suppose that the textual description of the tools …
+ "# Here we suppose that the textual description of the tools has already been appended\n",
  "SYSTEM_PROMPT = \"\"\"Answer the following questions as best you can. You have access to the following tools:\n",
  "\n",
  "get_weather: Get the current weather in a given location\n",
@@ -246,7 +243,7 @@
  "Specifically, this json should have a `action` key (with the name of the tool to use) and a `action_input` key (with the input to the tool going here).\n",
  "\n",
  "The only values that should be in the \"action\" field are:\n",
- "get_weather: Get the current weather in a given location, args: { …
+ "get_weather: Get the current weather in a given location, args: {\"location\": {\"type\": \"string\"}}\n",
  "example use :\n",
  "```\n",
  "{{\n",
@@ -316,7 +313,7 @@
  " {\"role\": \"user\", \"content\": \"What's the weather in London ?\"},\n",
  "]\n",
  "from transformers import AutoTokenizer\n",
- "tokenizer = AutoTokenizer.from_pretrained(\"meta-llama/Llama-3. …
+ "tokenizer = AutoTokenizer.from_pretrained(\"meta-llama/Llama-3.2-3B-Instruct\")\n",
  "\n",
  "tokenizer.apply_chat_template(messages, tokenize=False,add_generation_prompt=True)\n",
  "```"
@@ -357,7 +354,7 @@
  "Specifically, this json should have a `action` key (with the name of the tool to use) and a `action_input` key (with the input to the tool going here).\n",
  "\n",
  "The only values that should be in the \"action\" field are:\n",
- "get_weather: Get the current weather in a given location, args: { …
+ "get_weather: Get the current weather in a given location, args: {\"location\": {\"type\": \"string\"}}\n",
  "example use :\n",
  "```\n",
  "{{\n",
@@ -459,7 +456,7 @@
  },
  {
  "cell_type": "code",
- "execution_count": …
+ "execution_count": 12,
  "id": "9fc783f2-66ac-42cf-8a57-51788f81d436",
  "metadata": {
  "colab": {
@@ -491,7 +488,7 @@
  "# The answer was hallucinated by the model. We need to stop to actually execute the function!\n",
  "output = client.text_generation(\n",
  " prompt,\n",
- " max_new_tokens= …
+ " max_new_tokens=200,\n",
  " stop=[\"Observation:\"] # Let's stop before any actual function is called\n",
  ")\n",
  "\n",
@@ -507,7 +504,7 @@
  "source": [
  "Much Better!\n",
  "\n",
- "Let's now create a **dummy get weather function**. In …
+ "Let's now create a **dummy get weather function**. In real situation you could call an API."
  ]
  },
  {
@@ -582,7 +579,7 @@
  "Specifically, this json should have a `action` key (with the name of the tool to use) and a `action_input` key (with the input to the tool going here).\n",
  "\n",
  "The only values that should be in the \"action\" field are:\n",
- "get_weather: Get the current weather in a given location, args: { …
+ "get_weather: Get the current weather in a given location, args: {\"location\": {\"type\": \"string\"}}\n",
  "example use :\n",
  "```\n",
  "{{\n",
@@ -608,7 +605,7 @@
  "\n",
  "Now begin! Reminder to ALWAYS use the exact characters `Final Answer:` when you provide a definitive answer. \n",
  "<|eot_id|><|start_header_id|>user<|end_header_id|>\n",
- "What's the …
+ "What's the weither in London ?\n",
  "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n",
  "Question: What's the weather in London?\n",
  "\n",
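
The hunks above pin the client to `meta-llama/Llama-3.2-3B-Instruct` and cap generation at 200 new tokens with an `Observation:` stop sequence, so the dummy tool can be executed outside the model. A minimal sketch of that stop-and-act step, with a placeholder `prompt` standing in for the system prompt and question the notebook assembles:

```python
# Sketch of the notebook's stop-and-act pattern; `prompt` is a placeholder.
from huggingface_hub import InferenceClient

client = InferenceClient("meta-llama/Llama-3.2-3B-Instruct")
prompt = "...system prompt and user question, assembled as in the notebook..."

def get_weather(location: str) -> str:
    # Dummy tool: a real agent would call a weather API here.
    return f"the weather in {location} is sunny with low temperatures. \n"

output = client.text_generation(
    prompt,
    max_new_tokens=200,
    stop=["Observation:"],  # halt before the model invents a tool result
)

# Append the real observation, then let the model produce the final answer.
final = client.text_generation(prompt + output + get_weather("London"), max_new_tokens=200)
print(final)
```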
unit2/langgraph/agent.ipynb
DELETED
@@ -1,332 +0,0 @@
- {
- "cells": [
- {
- "metadata": {},
- "cell_type": "markdown",
- "source": [
- "# Agent\n",
- "\n",
- "In this notebook, **we're going to build a simple agent using using LangGraph**.\n",
- "\n",
- "This notebook is part of the <a href=\"https://www.hf.co/learn/agents-course\">Hugging Face Agents Course</a>, a free course from beginner to expert, where you learn to build Agents.\n",
- "\n",
- "\n",
- "\n",
- "As seen in the Unit 1, an agent needs 3 steps as introduced in the ReAct architecture :\n",
- "[ReAct](https://react-lm.github.io/), a general agent architecture.\n",
- "\n",
- "* `act` - let the model call specific tools\n",
- "* `observe` - pass the tool output back to the model\n",
- "* `reason` - let the model reason about the tool output to decide what to do next (e.g., call another tool or just respond directly)\n",
- "\n",
- "\n",
- ""
- ],
- "id": "89791f21c171372a"
- },
- {
- "metadata": {},
- "cell_type": "code",
- "outputs": [],
- "execution_count": null,
- "source": "%pip install -q -U langchain_openai langchain_core langgraph",
- "id": "bef6c5514bd263ce"
- },
- {
- "metadata": {},
- "cell_type": "code",
- "outputs": [],
- "execution_count": null,
- "source": [
- "import os\n",
- "\n",
- "# Please setp your own key.\n",
- "os.environ[\"OPENAI_API_KEY\"] = \"sk-xxxxxx\""
- ],
- "id": "61d0ed53b26fa5c6"
- },
- {
- "metadata": {},
- "cell_type": "code",
- "outputs": [],
- "execution_count": null,
- "source": [
- "import base64\n",
- "from langchain_core.messages import HumanMessage\n",
- "from langchain_openai import ChatOpenAI\n",
- "\n",
- "vision_llm = ChatOpenAI(model=\"gpt-4o\")\n",
- "\n",
- "\n",
- "def extract_text(img_path: str) -> str:\n",
- " \"\"\"\n",
- " Extract text from an image file using a multimodal model.\n",
- "\n",
- " Args:\n",
- " img_path: A local image file path (strings).\n",
- "\n",
- " Returns:\n",
- " A single string containing the concatenated text extracted from each image.\n",
- " \"\"\"\n",
- " all_text = \"\"\n",
- " try:\n",
- "\n",
- " # Read image and encode as base64\n",
- " with open(img_path, \"rb\") as image_file:\n",
- " image_bytes = image_file.read()\n",
- "\n",
- " image_base64 = base64.b64encode(image_bytes).decode(\"utf-8\")\n",
- "\n",
- " # Prepare the prompt including the base64 image data\n",
- " message = [\n",
- " HumanMessage(\n",
- " content=[\n",
- " {\n",
- " \"type\": \"text\",\n",
- " \"text\": (\n",
- " \"Extract all the text from this image. \"\n",
- " \"Return only the extracted text, no explanations.\"\n",
- " ),\n",
- " },\n",
- " {\n",
- " \"type\": \"image_url\",\n",
- " \"image_url\": {\n",
- " \"url\": f\"data:image/png;base64,{image_base64}\"\n",
- " },\n",
- " },\n",
- " ]\n",
- " )\n",
- " ]\n",
- "\n",
- " # Call the vision-capable model\n",
- " response = vision_llm.invoke(message)\n",
- "\n",
- " # Append extracted text\n",
- " all_text += response.content + \"\\n\\n\"\n",
- "\n",
- " return all_text.strip()\n",
- " except Exception as e:\n",
- " # You can choose whether to raise or just return an empty string / error message\n",
- " error_msg = f\"Error extracting text: {str(e)}\"\n",
- " print(error_msg)\n",
- " return \"\"\n",
- "\n",
- "\n",
- "llm = ChatOpenAI(model=\"gpt-4o\")\n",
- "\n",
- "\n",
- "def divide(a: int, b: int) -> float:\n",
- " \"\"\"Divide a and b.\"\"\"\n",
- " return a / b\n",
- "\n",
- "\n",
- "tools = [\n",
- " divide,\n",
- " extract_text\n",
- "]\n",
- "llm_with_tools = llm.bind_tools(tools, parallel_tool_calls=False)"
- ],
- "id": "a4a8bf0d5ac25a37"
- },
- {
- "metadata": {},
- "cell_type": "markdown",
- "source": "Let's create our LLM and prompt it with the overall desired agent behavior.",
- "id": "3e7c17a2e155014e"
- },
- {
- "metadata": {},
- "cell_type": "code",
- "outputs": [],
- "execution_count": null,
- "source": [
- "from typing import TypedDict, Annotated, Optional\n",
- "from langchain_core.messages import AnyMessage\n",
- "from langgraph.graph.message import add_messages\n",
- "\n",
- "\n",
- "class AgentState(TypedDict):\n",
- " # The input document\n",
- " input_file: Optional[str] # Contains file path, type (PNG)\n",
- " messages: Annotated[list[AnyMessage], add_messages]"
- ],
- "id": "f31250bc1f61da81"
- },
- {
- "metadata": {},
- "cell_type": "code",
- "outputs": [],
- "execution_count": null,
- "source": [
- "from langchain_core.messages import HumanMessage, SystemMessage\n",
- "from langchain_core.utils.function_calling import convert_to_openai_tool\n",
- "\n",
- "\n",
- "def assistant(state: AgentState):\n",
- " # System message\n",
- " textual_description_of_tool = \"\"\"\n",
- "extract_text(img_path: str) -> str:\n",
- " Extract text from an image file using a multimodal model.\n",
- "\n",
- " Args:\n",
- " img_path: A local image file path (strings).\n",
- "\n",
- " Returns:\n",
- " A single string containing the concatenated text extracted from each image.\n",
- "divide(a: int, b: int) -> float:\n",
- " Divide a and b\n",
- "\"\"\"\n",
- " image = state[\"input_file\"]\n",
- " sys_msg = SystemMessage(content=f\"You are an helpful agent that can analyse some images and run some computatio without provided tools :\\n{textual_description_of_tool} \\n You have access to some otpional images. Currently the loaded images is : {image}\")\n",
- "\n",
- " return {\"messages\": [llm_with_tools.invoke([sys_msg] + state[\"messages\"])], \"input_file\": state[\"input_file\"]}"
- ],
- "id": "3c4a736f9e55afa9"
- },
- {
- "metadata": {},
- "cell_type": "markdown",
- "source": [
- "We define a `tools` node with our list of tools.\n",
- "\n",
- "The `assistant` node is just our model with bound tools.\n",
- "\n",
- "We create a graph with `assistant` and `tools` nodes.\n",
- "\n",
- "We add `tools_condition` edge, which routes to `End` or to `tools` based on whether the `assistant` calls a tool.\n",
- "\n",
- "Now, we add one new step:\n",
- "\n",
- "We connect the `tools` node *back* to the `assistant`, forming a loop.\n",
- "\n",
- "* After the `assistant` node executes, `tools_condition` checks if the model's output is a tool call.\n",
- "* If it is a tool call, the flow is directed to the `tools` node.\n",
- "* The `tools` node connects back to `assistant`.\n",
- "* This loop continues as long as the model decides to call tools.\n",
- "* If the model response is not a tool call, the flow is directed to END, terminating the process."
- ],
- "id": "6f1efedd943d8b1d"
- },
- {
- "metadata": {},
- "cell_type": "code",
- "outputs": [],
- "execution_count": null,
- "source": [
- "from langgraph.graph import START, StateGraph\n",
- "from langgraph.prebuilt import ToolNode, tools_condition\n",
- "from IPython.display import Image, display\n",
- "\n",
- "# Graph\n",
- "builder = StateGraph(AgentState)\n",
- "\n",
- "# Define nodes: these do the work\n",
- "builder.add_node(\"assistant\", assistant)\n",
- "builder.add_node(\"tools\", ToolNode(tools))\n",
- "\n",
- "# Define edges: these determine how the control flow moves\n",
- "builder.add_edge(START, \"assistant\")\n",
- "builder.add_conditional_edges(\n",
- " \"assistant\",\n",
- " # If the latest message (result) from assistant is a tool call -> tools_condition routes to tools\n",
- " # If the latest message (result) from assistant is a not a tool call -> tools_condition routes to END\n",
- " tools_condition,\n",
- ")\n",
- "builder.add_edge(\"tools\", \"assistant\")\n",
- "react_graph = builder.compile()\n",
- "\n", …
|
238 |
-
"# Show\n",
|
239 |
-
"display(Image(react_graph.get_graph(xray=True).draw_mermaid_png()))"
|
240 |
-
],
|
241 |
-
"id": "e013061de784638a"
|
242 |
-
},
|
243 |
-
{
|
244 |
-
"metadata": {},
|
245 |
-
"cell_type": "code",
|
246 |
-
"outputs": [],
|
247 |
-
"execution_count": null,
|
248 |
-
"source": [
|
249 |
-
"messages = [HumanMessage(content=\"Divide 6790 by 5\")]\n",
|
250 |
-
"\n",
|
251 |
-
"messages = react_graph.invoke({\"messages\": messages, \"input_file\": None})"
|
252 |
-
],
|
253 |
-
"id": "d3b0ba5be1a54aad"
|
254 |
-
},
|
255 |
-
{
|
256 |
-
"metadata": {},
|
257 |
-
"cell_type": "code",
|
258 |
-
"outputs": [],
|
259 |
-
"execution_count": null,
|
260 |
-
"source": [
|
261 |
-
"for m in messages['messages']:\n",
|
262 |
-
" m.pretty_print()"
|
263 |
-
],
|
264 |
-
"id": "55eb0f1afd096731"
|
265 |
-
},
|
266 |
-
{
|
267 |
-
"metadata": {},
|
268 |
-
"cell_type": "markdown",
|
269 |
-
"source": [
|
270 |
-
"## Training program\n",
|
271 |
-
"MR Wayne left a note with his training program for the week. I came up with a recipe for dinner left in a note.\n",
|
272 |
-
"\n",
|
273 |
-
"you can find the document [HERE](https://huggingface.co/datasets/agents-course/course-images/blob/main/en/unit2/LangGraph/Batman_training_and_meals.png), so download it and upload it in the local folder.\n",
|
274 |
-
"\n",
|
275 |
-
""
|
276 |
-
],
|
277 |
-
"id": "e0062c1b99cb4779"
|
278 |
-
},
|
279 |
-
{
|
280 |
-
"metadata": {},
|
281 |
-
"cell_type": "code",
|
282 |
-
"outputs": [],
|
283 |
-
"execution_count": null,
|
284 |
-
"source": [
|
285 |
-
"messages = [HumanMessage(content=\"According the note provided by MR wayne in the provided images. What's the list of items I should buy for the dinner menu ?\")]\n",
|
286 |
-
"\n",
|
287 |
-
"messages = react_graph.invoke({\"messages\": messages, \"input_file\": \"Batman_training_and_meals.png\"})"
|
288 |
-
],
|
289 |
-
"id": "2e166ebba82cfd2a"
|
290 |
-
},
|
291 |
-
{
|
292 |
-
"metadata": {},
|
293 |
-
"cell_type": "code",
|
294 |
-
"outputs": [],
|
295 |
-
"execution_count": null,
|
296 |
-
"source": [
|
297 |
-
"for m in messages['messages']:\n",
|
298 |
-
" m.pretty_print()"
|
299 |
-
],
|
300 |
-
"id": "5bfd67af70b7dcf3"
|
301 |
-
},
|
302 |
-
{
|
303 |
-
"metadata": {},
|
304 |
-
"cell_type": "code",
|
305 |
-
"outputs": [],
|
306 |
-
"execution_count": null,
|
307 |
-
"source": "",
|
308 |
-
"id": "8cd664ab5ee5450e"
|
309 |
-
}
|
310 |
-
],
|
311 |
-
"metadata": {
|
312 |
-
"kernelspec": {
|
313 |
-
"display_name": "Python 3 (ipykernel)",
|
314 |
-
"language": "python",
|
315 |
-
"name": "python3"
|
316 |
-
},
|
317 |
-
"language_info": {
|
318 |
-
"codemirror_mode": {
|
319 |
-
"name": "ipython",
|
320 |
-
"version": 3
|
321 |
-
},
|
322 |
-
"file_extension": ".py",
|
323 |
-
"mimetype": "text/x-python",
|
324 |
-
"name": "python",
|
325 |
-
"nbconvert_exporter": "python",
|
326 |
-
"pygments_lexer": "ipython3",
|
327 |
-
"version": "3.9.5"
|
328 |
-
}
|
329 |
-
},
|
330 |
-
"nbformat": 4,
|
331 |
-
"nbformat_minor": 5
|
332 |
-
}
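The conditional edge in the deleted cell above passes `tools_condition` without an explicit path map, relying on the prebuilt router's default destinations. For readers rebuilding this graph, here is a minimal sketch with the two routes spelled out; it reuses `AgentState`, `assistant`, and `tools` from the cells above, and the explicit `{"tools": ..., END: ...}` map is an equivalent spelling, not something from the notebook:

```python
from langgraph.graph import StateGraph, START, END
from langgraph.prebuilt import ToolNode, tools_condition

builder = StateGraph(AgentState)
builder.add_node("assistant", assistant)
builder.add_node("tools", ToolNode(tools))

builder.add_edge(START, "assistant")
builder.add_conditional_edges(
    "assistant",
    tools_condition,
    # tools_condition returns "tools" when the last message contains tool
    # calls and END otherwise, so this map makes the ReAct loop explicit.
    {"tools": "tools", END: END},
)
builder.add_edge("tools", "assistant")  # tool results flow back to the model
react_graph = builder.compile()
```

The explicit map behaves identically; it just documents the loop's two exits in the graph definition itself.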
unit2/langgraph/mail_sorting.ipynb
DELETED
@@ -1,457 +0,0 @@
- {
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Alfred the Mail Sorting Butler: A LangGraph Example\n",
- "\n",
- "In this notebook, **we're going to build a complete email processing workflow using LangGraph**.\n",
- "\n",
- "This notebook is part of the <a href=\"https://www.hf.co/learn/agents-course\">Hugging Face Agents Course</a>, a free course from beginner to expert, where you learn to build Agents.\n",
- "\n",
- "\n",
- "\n",
- "## What You'll Learn\n",
- "\n",
- "In this notebook, you'll learn how to:\n",
- "1. Set up a LangGraph workflow\n",
- "2. Define state and nodes for email processing\n",
- "3. Create conditional branching in a graph\n",
- "4. Connect an LLM for classification and content generation\n",
- "5. Visualize the workflow graph\n",
- "6. Execute the workflow with example data"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Install the required packages\n",
- "%pip install -q langgraph langchain_openai langchain_huggingface"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Setting Up Our Environment\n",
- "\n",
- "First, let's import all the necessary libraries. LangGraph provides the graph structure, while LangChain offers convenient interfaces for working with LLMs."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "import os\n",
- "from typing import TypedDict, List, Dict, Any, Optional\n",
- "from langgraph.graph import StateGraph, START, END\n",
- "from langchain_openai import ChatOpenAI\n",
- "from langchain_core.messages import HumanMessage\n",
- "\n",
- "# Set your OpenAI API key here\n",
- "os.environ[\"OPENAI_API_KEY\"] = \"sk-xxxxx\"  # Replace with your actual API key\n",
- "\n",
- "# Initialize our LLM\n",
- "model = ChatOpenAI(model=\"gpt-4o\", temperature=0)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Step 1: Define Our State\n",
- "\n",
- "In LangGraph, **State** is the central concept. It represents all the information that flows through our workflow.\n",
- "\n",
- "For Alfred's email processing system, we need to track:\n",
- "- The email being processed\n",
- "- Whether it's spam or not\n",
- "- The draft response (for legitimate emails)\n",
- "- Conversation history with the LLM"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "class EmailState(TypedDict):\n",
- "    email: Dict[str, Any]\n",
- "    is_spam: Optional[bool]\n",
- "    spam_reason: Optional[str]\n",
- "    email_category: Optional[str]\n",
- "    email_draft: Optional[str]\n",
- "    messages: List[Dict[str, Any]]"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Step 2: Define Our Nodes"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "def read_email(state: EmailState):\n",
- "    email = state[\"email\"]\n",
- "    print(f\"Alfred is processing an email from {email['sender']} with subject: {email['subject']}\")\n",
- "    return {}\n",
- "\n",
- "\n",
- "def classify_email(state: EmailState):\n",
- "    email = state[\"email\"]\n",
- "\n",
- "    prompt = f\"\"\"\n",
- "As Alfred, the butler of Mr. Wayne and his SECRET identity Batman, analyze this email and determine if it is spam or legitimate and should be brought to Mr. Wayne's attention.\n",
- "\n",
- "Email:\n",
- "From: {email['sender']}\n",
- "Subject: {email['subject']}\n",
- "Body: {email['body']}\n",
- "\n",
- "First, determine if this email is spam.\n",
- "Answer with SPAM, or with HAM if it's legitimate. Only return the answer.\n",
- "Answer:\n",
- "    \"\"\"\n",
- "    messages = [HumanMessage(content=prompt)]\n",
- "    response = model.invoke(messages)\n",
- "\n",
- "    response_text = response.content.lower()\n",
- "    print(response_text)\n",
- "    is_spam = \"spam\" in response_text and \"ham\" not in response_text\n",
- "\n",
- "    if not is_spam:\n",
- "        new_messages = state.get(\"messages\", []) + [\n",
- "            {\"role\": \"user\", \"content\": prompt},\n",
- "            {\"role\": \"assistant\", \"content\": response.content}\n",
- "        ]\n",
- "    else:\n",
- "        new_messages = state.get(\"messages\", [])\n",
- "\n",
- "    return {\n",
- "        \"is_spam\": is_spam,\n",
- "        \"messages\": new_messages\n",
- "    }\n",
- "\n",
- "\n",
- "def handle_spam(state: EmailState):\n",
- "    print(\"Alfred has marked the email as spam.\")\n",
- "    print(\"The email has been moved to the spam folder.\")\n",
- "    return {}\n",
- "\n",
- "\n",
- "def drafting_response(state: EmailState):\n",
- "    email = state[\"email\"]\n",
- "\n",
- "    prompt = f\"\"\"\n",
- "As Alfred the butler, draft a polite preliminary response to this email.\n",
- "\n",
- "Email:\n",
- "From: {email['sender']}\n",
- "Subject: {email['subject']}\n",
- "Body: {email['body']}\n",
- "\n",
- "Draft a brief, professional response that Mr. Wayne can review and personalize before sending.\n",
- "    \"\"\"\n",
- "\n",
- "    messages = [HumanMessage(content=prompt)]\n",
- "    response = model.invoke(messages)\n",
- "\n",
- "    new_messages = state.get(\"messages\", []) + [\n",
- "        {\"role\": \"user\", \"content\": prompt},\n",
- "        {\"role\": \"assistant\", \"content\": response.content}\n",
- "    ]\n",
- "\n",
- "    return {\n",
- "        \"email_draft\": response.content,\n",
- "        \"messages\": new_messages\n",
- "    }\n",
- "\n",
- "\n",
- "def notify_mr_wayne(state: EmailState):\n",
- "    email = state[\"email\"]\n",
- "\n",
- "    print(\"\\n\" + \"=\" * 50)\n",
- "    print(f\"Sir, you've received an email from {email['sender']}.\")\n",
- "    print(f\"Subject: {email['subject']}\")\n",
- "    print(\"\\nI've prepared a draft response for your review:\")\n",
- "    print(\"-\" * 50)\n",
- "    print(state[\"email_draft\"])\n",
- "    print(\"=\" * 50 + \"\\n\")\n",
- "\n",
- "    return {}\n",
- "\n",
- "\n",
- "# Define routing logic\n",
- "def route_email(state: EmailState) -> str:\n",
- "    if state[\"is_spam\"]:\n",
- "        return \"spam\"\n",
- "    else:\n",
- "        return \"legitimate\"\n",
- "\n",
- "\n",
- "# Create the graph\n",
- "email_graph = StateGraph(EmailState)\n",
- "\n",
- "# Add nodes\n",
- "email_graph.add_node(\"read_email\", read_email)  # the read_email node executes the read_email function\n",
- "email_graph.add_node(\"classify_email\", classify_email)  # the classify_email node will execute the classify_email function\n",
- "email_graph.add_node(\"handle_spam\", handle_spam)  # same logic\n",
- "email_graph.add_node(\"drafting_response\", drafting_response)  # same logic\n",
- "email_graph.add_node(\"notify_mr_wayne\", notify_mr_wayne)  # same logic\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Step 3: Define Our Routing Logic"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Add edges\n",
- "email_graph.add_edge(START, \"read_email\")  # After starting we go to the \"read_email\" node\n",
- "\n",
- "email_graph.add_edge(\"read_email\", \"classify_email\")  # after reading we classify\n",
- "\n",
- "# Add conditional edges\n",
- "email_graph.add_conditional_edges(\n",
- "    \"classify_email\",  # after classifying, we run the \"route_email\" function\n",
- "    route_email,\n",
- "    {\n",
- "        \"spam\": \"handle_spam\",  # if it returns \"spam\", we go to the \"handle_spam\" node\n",
- "        \"legitimate\": \"drafting_response\"  # and if it's legitimate, we go to the \"drafting_response\" node\n",
- "    }\n",
- ")\n",
- "\n",
- "# Add final edges\n",
- "email_graph.add_edge(\"handle_spam\", END)  # after handling spam we always end\n",
- "email_graph.add_edge(\"drafting_response\", \"notify_mr_wayne\")\n",
- "email_graph.add_edge(\"notify_mr_wayne\", END)  # after notifying Mr. Wayne, we can end too\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Step 4: Compile and Visualize the Graph"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Compile the graph\n",
- "compiled_graph = email_graph.compile()"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from IPython.display import Image, display\n",
- "\n",
- "display(Image(compiled_graph.get_graph().draw_mermaid_png()))"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Example emails for testing\n",
- "legitimate_email = {\n",
- "    \"sender\": \"Joker\",\n",
- "    \"subject\": \"Found you, Batman!\",\n",
- "    \"body\": \"Mr. Wayne, I found your secret identity! I know you're Batman! There's no denying it, I have proof of that and I'm coming to find you soon. I'll get my revenge. JOKER\"\n",
- "}\n",
- "\n",
- "spam_email = {\n",
- "    \"sender\": \"Crypto bro\",\n",
- "    \"subject\": \"The best investment of 2025\",\n",
- "    \"body\": \"Mr. Wayne, I just launched an ALT coin and want you to buy some!\"\n",
- "}\n",
- "# Process legitimate email\n",
- "print(\"\\nProcessing legitimate email...\")\n",
- "legitimate_result = compiled_graph.invoke({\n",
- "    \"email\": legitimate_email,\n",
- "    \"is_spam\": None,\n",
- "    \"spam_reason\": None,\n",
- "    \"email_category\": None,\n",
- "    \"email_draft\": None,\n",
- "    \"messages\": []\n",
- "})\n",
- "\n",
- "# Process spam email\n",
- "print(\"\\nProcessing spam email...\")\n",
- "spam_result = compiled_graph.invoke({\n",
- "    \"email\": spam_email,\n",
- "    \"is_spam\": None,\n",
- "    \"spam_reason\": None,\n",
- "    \"email_category\": None,\n",
- "    \"email_draft\": None,\n",
- "    \"messages\": []\n",
- "})"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Step 5: Inspecting Our Mail Sorting Agent with Langfuse\n",
- "\n",
- "As Alfred fine-tunes the Mail Sorting Agent, he's growing weary of debugging its runs. Agents, by nature, are unpredictable and difficult to inspect. But since he aims to build the ultimate Spam Detection Agent and deploy it in production, he needs robust traceability for future monitoring and analysis.\n",
- "\n",
- "To do this, Alfred can use an observability tool such as [Langfuse](https://langfuse.com/) to trace and monitor the inner steps of the agent.\n",
- "\n",
- "First, we need to install the necessary dependencies:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "%pip install -q langfuse"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Next, we set the Langfuse API keys and host address as environment variables. You can get your Langfuse credentials by signing up for [Langfuse Cloud](https://cloud.langfuse.com) or [self-hosting Langfuse](https://langfuse.com/self-hosting)."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "import os\n",
- "\n",
- "# Get keys for your project from the project settings page: https://cloud.langfuse.com\n",
- "os.environ[\"LANGFUSE_PUBLIC_KEY\"] = \"pk-lf-...\"\n",
- "os.environ[\"LANGFUSE_SECRET_KEY\"] = \"sk-lf-...\"\n",
- "os.environ[\"LANGFUSE_HOST\"] = \"https://cloud.langfuse.com\"  # 🇪🇺 EU region\n",
- "# os.environ[\"LANGFUSE_HOST\"] = \"https://us.cloud.langfuse.com\"  # 🇺🇸 US region"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Now, we configure the [Langfuse `callback_handler`](https://langfuse.com/docs/integrations/langchain/tracing#add-langfuse-to-your-langchain-application)."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from langfuse.langchain import CallbackHandler\n",
- "\n",
- "# Initialize Langfuse CallbackHandler for LangGraph/Langchain (tracing)\n",
- "langfuse_handler = CallbackHandler()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We then add `config={\"callbacks\": [langfuse_handler]}` to the invocation of the agents and run them again."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Process legitimate email\n",
- "print(\"\\nProcessing legitimate email...\")\n",
- "legitimate_result = compiled_graph.invoke(\n",
- "    input={\n",
- "        \"email\": legitimate_email,\n",
- "        \"is_spam\": None,\n",
- "        \"email_draft\": None,\n",
- "        \"messages\": []\n",
- "    },\n",
- "    config={\"callbacks\": [langfuse_handler]}\n",
- ")\n",
- "\n",
- "# Process spam email\n",
- "print(\"\\nProcessing spam email...\")\n",
- "spam_result = compiled_graph.invoke(\n",
- "    input={\n",
- "        \"email\": spam_email,\n",
- "        \"is_spam\": None,\n",
- "        \"email_draft\": None,\n",
- "        \"messages\": []\n",
- "    },\n",
- "    config={\"callbacks\": [langfuse_handler]}\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Alfred is now connected! The runs from LangGraph are being logged in Langfuse, giving him full visibility into the agent's behavior. With this setup, he's ready to revisit previous runs and refine his Mail Sorting Agent even further.\n",
- "\n",
- "\n",
- "\n",
- "_[Public link to the trace with the legit email](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/f5d6d72e-20af-4357-b232-af44c3728a7b?timestamp=2025-03-17T10%3A13%3A28.413Z&observation=6997ba69-043f-4f77-9445-700a033afba1)_\n",
- "\n",
- "\n",
- "\n",
- "_[Public link to the trace with the spam email](https://langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/6e498053-fee4-41fd-b1ab-d534aca15f82?timestamp=2025-03-17T10%3A13%3A30.884Z&observation=84770fc8-4276-4720-914f-bf52738d44ba)_\n"
- ]
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.13.2"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 2
- }
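One fragile spot in the deleted workflow is `classify_email`, which greps the raw completion for the substrings "spam"/"ham". A sketch of a sturdier variant using LangChain's structured-output support is below; it assumes `with_structured_output`, which `ChatOpenAI` provides, and `SpamVerdict` is a hypothetical schema name, not something from the notebook:

```python
from pydantic import BaseModel
from langchain_openai import ChatOpenAI


class SpamVerdict(BaseModel):
    """Hypothetical schema for the classifier's answer."""
    is_spam: bool
    reason: str


model = ChatOpenAI(model="gpt-4o", temperature=0)
classifier = model.with_structured_output(SpamVerdict)


def classify_email(state):
    email = state["email"]
    verdict = classifier.invoke(
        f"Decide whether this email is spam.\n"
        f"From: {email['sender']}\nSubject: {email['subject']}\nBody: {email['body']}"
    )
    # Typed fields replace the brittle 'spam'/'ham' substring check.
    return {"is_spam": verdict.is_spam, "spam_reason": verdict.reason}
```

The rest of the graph wiring (`route_email` and the conditional edges) would stay unchanged.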
unit2/llama-index/agents.ipynb
DELETED
@@ -1,334 +0,0 @@
- {
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {
- "vscode": {
- "languageId": "plaintext"
- }
- },
- "source": [
- "# Agents in LlamaIndex\n",
- "\n",
- "This notebook is part of the [Hugging Face Agents Course](https://www.hf.co/learn/agents-course), a free course from beginner to expert, where you learn to build Agents.\n",
- "\n",
- "\n",
- "\n",
- "## Let's install the dependencies\n",
- "\n",
- "We will install the dependencies for this unit."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "!pip install llama-index llama-index-vector-stores-chroma llama-index-llms-huggingface-api llama-index-embeddings-huggingface -U -q"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "And, let's log in to Hugging Face to use serverless Inference APIs."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from huggingface_hub import login\n",
- "\n",
- "login()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "vscode": {
- "languageId": "plaintext"
- }
- },
- "source": [
- "## Initialising agents\n",
- "\n",
- "Let's start by initialising an agent. We will use the basic `AgentWorkflow` class to create an agent."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from llama_index.llms.huggingface_api import HuggingFaceInferenceAPI\n",
- "from llama_index.core.agent.workflow import AgentWorkflow, ToolCallResult, AgentStream\n",
- "\n",
- "\n",
- "def add(a: int, b: int) -> int:\n",
- "    \"\"\"Add two numbers\"\"\"\n",
- "    return a + b\n",
- "\n",
- "\n",
- "def subtract(a: int, b: int) -> int:\n",
- "    \"\"\"Subtract two numbers\"\"\"\n",
- "    return a - b\n",
- "\n",
- "\n",
- "def multiply(a: int, b: int) -> int:\n",
- "    \"\"\"Multiply two numbers\"\"\"\n",
- "    return a * b\n",
- "\n",
- "\n",
- "def divide(a: int, b: int) -> float:\n",
- "    \"\"\"Divide two numbers\"\"\"\n",
- "    return a / b\n",
- "\n",
- "\n",
- "llm = HuggingFaceInferenceAPI(model_name=\"Qwen/Qwen2.5-Coder-32B-Instruct\")\n",
- "\n",
- "agent = AgentWorkflow.from_tools_or_functions(\n",
- "    tools_or_functions=[subtract, multiply, divide, add],\n",
- "    llm=llm,\n",
- "    system_prompt=\"You are a math agent that can add, subtract, multiply, and divide numbers using provided tools.\",\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Then, we can run the agent and get the response and reasoning behind the tool calls."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "handler = agent.run(\"What is (2 + 2) * 2?\")\n",
- "async for ev in handler.stream_events():\n",
- "    if isinstance(ev, ToolCallResult):\n",
- "        print(\"\")\n",
- "        print(\"Called tool: \", ev.tool_name, ev.tool_kwargs, \"=>\", ev.tool_output)\n",
- "    elif isinstance(ev, AgentStream):  # showing the thought process\n",
- "        print(ev.delta, end=\"\", flush=True)\n",
- "\n",
- "resp = await handler\n",
- "resp"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "In a similar fashion, we can pass state and context to the agent.\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 27,
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "AgentOutput(response=ChatMessage(role=<MessageRole.ASSISTANT: 'assistant'>, additional_kwargs={}, blocks=[TextBlock(block_type='text', text='Your name is Bob.')]), tool_calls=[], raw={'id': 'chatcmpl-B5sDHfGpSwsVyzvMVH8EWokYwdIKT', 'choices': [{'delta': {'content': None, 'function_call': None, 'refusal': None, 'role': None, 'tool_calls': None}, 'finish_reason': 'stop', 'index': 0, 'logprobs': None}], 'created': 1740739735, 'model': 'gpt-4o-2024-08-06', 'object': 'chat.completion.chunk', 'service_tier': 'default', 'system_fingerprint': 'fp_eb9dce56a8', 'usage': None}, current_agent_name='Agent')"
- ]
- },
- "execution_count": 27,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "from llama_index.core.workflow import Context\n",
- "\n",
- "ctx = Context(agent)\n",
- "\n",
- "response = await agent.run(\"My name is Bob.\", ctx=ctx)\n",
- "response = await agent.run(\"What was my name again?\", ctx=ctx)\n",
- "response"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Creating RAG Agents with QueryEngineTools\n",
- "\n",
- "Let's now re-use the `QueryEngine` we defined in the [previous unit on tools](tools.ipynb) and convert it into a `QueryEngineTool`. We will pass it to the `AgentWorkflow` class to create a RAG agent."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 46,
- "metadata": {},
- "outputs": [],
- "source": [
- "import chromadb\n",
- "\n",
- "from llama_index.core import VectorStoreIndex\n",
- "from llama_index.llms.huggingface_api import HuggingFaceInferenceAPI\n",
- "from llama_index.embeddings.huggingface import HuggingFaceEmbedding\n",
- "from llama_index.core.tools import QueryEngineTool\n",
- "from llama_index.vector_stores.chroma import ChromaVectorStore\n",
- "\n",
- "# Create a vector store\n",
- "db = chromadb.PersistentClient(path=\"./alfred_chroma_db\")\n",
- "chroma_collection = db.get_or_create_collection(\"alfred\")\n",
- "vector_store = ChromaVectorStore(chroma_collection=chroma_collection)\n",
- "\n",
- "# Create a query engine\n",
- "embed_model = HuggingFaceEmbedding(model_name=\"BAAI/bge-small-en-v1.5\")\n",
- "llm = HuggingFaceInferenceAPI(model_name=\"Qwen/Qwen2.5-Coder-32B-Instruct\")\n",
- "index = VectorStoreIndex.from_vector_store(\n",
- "    vector_store=vector_store, embed_model=embed_model\n",
- ")\n",
- "query_engine = index.as_query_engine(llm=llm)\n",
- "query_engine_tool = QueryEngineTool.from_defaults(\n",
- "    query_engine=query_engine,\n",
- "    name=\"personas\",\n",
- "    description=\"descriptions for various types of personas\",\n",
- "    return_direct=False,\n",
- ")\n",
- "\n",
- "# Create a RAG agent\n",
- "query_engine_agent = AgentWorkflow.from_tools_or_functions(\n",
- "    tools_or_functions=[query_engine_tool],\n",
- "    llm=llm,\n",
- "    system_prompt=\"You are a helpful assistant that has access to a database containing persona descriptions. \",\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "And, we can once more get the response and reasoning behind the tool calls."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "handler = query_engine_agent.run(\n",
- "    \"Search the database for 'science fiction' and return some persona descriptions.\"\n",
- ")\n",
- "async for ev in handler.stream_events():\n",
- "    if isinstance(ev, ToolCallResult):\n",
- "        print(\"\")\n",
- "        print(\"Called tool: \", ev.tool_name, ev.tool_kwargs, \"=>\", ev.tool_output)\n",
- "    elif isinstance(ev, AgentStream):  # showing the thought process\n",
- "        print(ev.delta, end=\"\", flush=True)\n",
- "\n",
- "resp = await handler\n",
- "resp"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Creating multi-agent systems\n",
- "\n",
- "We can also create multi-agent systems by passing multiple agents to the `AgentWorkflow` class."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from llama_index.core.agent.workflow import (\n",
- "    AgentWorkflow,\n",
- "    ReActAgent,\n",
- ")\n",
- "\n",
- "\n",
- "# Define some tools\n",
- "def add(a: int, b: int) -> int:\n",
- "    \"\"\"Add two numbers.\"\"\"\n",
- "    return a + b\n",
- "\n",
- "\n",
- "def subtract(a: int, b: int) -> int:\n",
- "    \"\"\"Subtract two numbers.\"\"\"\n",
- "    return a - b\n",
- "\n",
- "\n",
- "# Create agent configs\n",
- "# NOTE: we can use FunctionAgent or ReActAgent here.\n",
- "# FunctionAgent works for LLMs with a function calling API.\n",
- "# ReActAgent works for any LLM.\n",
- "calculator_agent = ReActAgent(\n",
- "    name=\"calculator\",\n",
- "    description=\"Performs basic arithmetic operations\",\n",
- "    system_prompt=\"You are a calculator assistant. Use your tools for any math operation.\",\n",
- "    tools=[add, subtract],\n",
- "    llm=llm,\n",
- ")\n",
- "\n",
- "query_agent = ReActAgent(\n",
- "    name=\"info_lookup\",\n",
- "    description=\"Looks up information about XYZ\",\n",
- "    system_prompt=\"Use your tool to query a RAG system to answer information about XYZ\",\n",
- "    tools=[query_engine_tool],\n",
- "    llm=llm,\n",
- ")\n",
- "\n",
- "# Create and run the workflow\n",
- "agent = AgentWorkflow(agents=[calculator_agent, query_agent], root_agent=\"calculator\")\n",
- "\n",
- "# Run the system\n",
- "handler = agent.run(user_msg=\"Can you add 5 and 3?\")"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "async for ev in handler.stream_events():\n",
- "    if isinstance(ev, ToolCallResult):\n",
- "        print(\"\")\n",
- "        print(\"Called tool: \", ev.tool_name, ev.tool_kwargs, \"=>\", ev.tool_output)\n",
- "    elif isinstance(ev, AgentStream):  # showing the thought process\n",
- "        print(ev.delta, end=\"\", flush=True)\n",
- "\n",
- "resp = await handler\n",
- "resp"
- ]
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": ".venv",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.11.11"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 2
- }
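In the multi-agent cell above, any agent may hand off to any other by default. If you rebuild this example, recent llama-index releases also accept a `can_handoff_to` argument on the workflow agent constructors to restrict handoff targets; the following is a sketch under that assumption, not part of the deleted notebook:

```python
from llama_index.core.agent.workflow import AgentWorkflow, ReActAgent

# Same calculator agent as above, but with explicit handoff targets
# (`can_handoff_to` is the assumed API; `add`, `subtract`, `llm`, and
# `query_agent` come from the notebook cells above).
calculator_agent = ReActAgent(
    name="calculator",
    description="Performs basic arithmetic operations",
    system_prompt="You are a calculator assistant. Use your tools for any math operation.",
    tools=[add, subtract],
    llm=llm,
    can_handoff_to=["info_lookup"],  # calculator may only defer to the RAG agent
)

agent = AgentWorkflow(agents=[calculator_agent, query_agent], root_agent="calculator")
```

Constraining handoffs this way keeps the control flow between agents auditable rather than fully open-ended.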
unit2/llama-index/components.ipynb
DELETED
The diff for this file is too large to render; see the raw diff.
unit2/llama-index/tools.ipynb
DELETED
@@ -1,274 +0,0 @@
- {
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Tools in LlamaIndex\n",
- "\n",
- "\n",
- "This notebook is part of the [Hugging Face Agents Course](https://www.hf.co/learn/agents-course), a free course from beginner to expert, where you learn to build Agents.\n",
- "\n",
- "\n",
- "\n",
- "## Let's install the dependencies\n",
- "\n",
- "We will install the dependencies for this unit."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "!pip install llama-index llama-index-vector-stores-chroma llama-index-llms-huggingface-api llama-index-embeddings-huggingface llama-index-tools-google -U -q"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "And, let's log in to Hugging Face to use serverless Inference APIs."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from huggingface_hub import login\n",
- "\n",
- "login()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Creating a FunctionTool\n",
- "\n",
- "Let's create a basic `FunctionTool` and call it."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 4,
- "metadata": {},
- "outputs": [],
- "source": [
- "from llama_index.core.tools import FunctionTool\n",
- "\n",
- "\n",
- "def get_weather(location: str) -> str:\n",
- "    \"\"\"Useful for getting the weather for a given location.\"\"\"\n",
- "    print(f\"Getting weather for {location}\")\n",
- "    return f\"The weather in {location} is sunny\"\n",
- "\n",
- "\n",
- "tool = FunctionTool.from_defaults(\n",
- "    get_weather,\n",
- "    name=\"my_weather_tool\",\n",
- "    description=\"Useful for getting the weather for a given location.\",\n",
- ")\n",
- "tool.call(\"New York\")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Creating a QueryEngineTool\n",
- "\n",
- "Let's now re-use the `QueryEngine` we defined in the [previous section on components](components.ipynb) and convert it into a `QueryEngineTool`. "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 8,
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "ToolOutput(content=' As an anthropologist, I am intrigued by the potential implications of AI on the future of work and society. My research focuses on the cultural and social aspects of technological advancements, and I believe it is essential to understand how AI will shape the lives of Cypriot people and the broader society. I am particularly interested in exploring how AI will impact traditional industries, such as agriculture and tourism, and how it will affect the skills and knowledge required for future employment. As someone who has spent extensive time in Cyprus, I am well-positioned to investigate the unique cultural and historical context of the island and how it will influence the adoption and impact of AI. My research will not only provide valuable insights into the future of work but also contribute to the development of policies and strategies that support the well-being of Cypriot citizens and the broader society. \\n\\nAs an environmental historian or urban planner, I am more focused on the ecological and sustainability aspects of AI, particularly in the context of urban planning and conservation. I believe that AI has the potential to significantly impact the built environment and the natural world, and I am eager to explore how it can be used to create more sustainable and resilient cities. My research will focus on the intersection of AI, urban planning, and environmental conservation, and I', tool_name='some useful name', raw_input={'input': 'Responds about research on the impact of AI on the future of work and society?'}, raw_output=Response(response=' As an anthropologist, I am intrigued by the potential implications of AI on the future of work and society. My research focuses on the cultural and social aspects of technological advancements, and I believe it is essential to understand how AI will shape the lives of Cypriot people and the broader society. I am particularly interested in exploring how AI will impact traditional industries, such as agriculture and tourism, and how it will affect the skills and knowledge required for future employment. As someone who has spent extensive time in Cyprus, I am well-positioned to investigate the unique cultural and historical context of the island and how it will influence the adoption and impact of AI. My research will not only provide valuable insights into the future of work but also contribute to the development of policies and strategies that support the well-being of Cypriot citizens and the broader society. \\n\\nAs an environmental historian or urban planner, I am more focused on the ecological and sustainability aspects of AI, particularly in the context of urban planning and conservation. I believe that AI has the potential to significantly impact the built environment and the natural world, and I am eager to explore how it can be used to create more sustainable and resilient cities. My research will focus on the intersection of AI, urban planning, and environmental conservation, and I', source_nodes=[NodeWithScore(node=TextNode(id_='f0ea24d2-4ed3-4575-a41f-740a3fa8b521', embedding=None, metadata={'file_path': '/Users/davidberenstein/Documents/programming/huggingface/agents-course/notebooks/unit2/llama-index/data/persona_1.txt', 'file_name': 'persona_1.txt', 'file_type': 'text/plain', 'file_size': 266, 'creation_date': '2025-02-27', 'last_modified_date': '2025-02-27'}, excluded_embed_metadata_keys=['file_name', 'file_type', 'file_size', 'creation_date', 'last_modified_date', 'last_accessed_date'], excluded_llm_metadata_keys=['file_name', 'file_type', 'file_size', 'creation_date', 'last_modified_date', 'last_accessed_date'], relationships={<NodeRelationship.SOURCE: '1'>: RelatedNodeInfo(node_id='d5db5bf4-daac-41e5-b5aa-271e8305da25', node_type='4', metadata={'file_path': '/Users/davidberenstein/Documents/programming/huggingface/agents-course/notebooks/unit2/llama-index/data/persona_1.txt', 'file_name': 'persona_1.txt', 'file_type': 'text/plain', 'file_size': 266, 'creation_date': '2025-02-27', 'last_modified_date': '2025-02-27'}, hash='e6c87149a97bf9e5dbdf33922a4e5023c6b72550ca0b63472bd5d25103b28e99')}, metadata_template='{key}: {value}', metadata_separator='\\n', text='An anthropologist or a cultural expert interested in the intricacies of Cypriot culture, history, and society, particularly someone who has spent considerable time researching and living in Cyprus to gain a deep understanding of its people, customs, and way of life.', mimetype='text/plain', start_char_idx=0, end_char_idx=266, metadata_seperator='\\n', text_template='{metadata_str}\\n\\n{content}'), score=0.3761845613489774), NodeWithScore(node=TextNode(id_='cebcd676-3180-4cda-be99-d535babc1b96', embedding=None, metadata={'file_path': '/Users/davidberenstein/Documents/programming/huggingface/agents-course/notebooks/unit2/llama-index/data/persona_1004.txt', 'file_name': 'persona_1004.txt', 'file_type': 'text/plain', 'file_size': 160, 'creation_date': '2025-02-27', 'last_modified_date': '2025-02-27'}, excluded_embed_metadata_keys=['file_name', 'file_type', 'file_size', 'creation_date', 'last_modified_date', 'last_accessed_date'], excluded_llm_metadata_keys=['file_name', 'file_type', 'file_size', 'creation_date', 'last_modified_date', 'last_accessed_date'], relationships={<NodeRelationship.SOURCE: '1'>: RelatedNodeInfo(node_id='1347651d-7fc8-42d4-865c-a0151a534a1b', node_type='4', metadata={'file_path': '/Users/davidberenstein/Documents/programming/huggingface/agents-course/notebooks/unit2/llama-index/data/persona_1004.txt', 'file_name': 'persona_1004.txt', 'file_type': 'text/plain', 'file_size': 160, 'creation_date': '2025-02-27', 'last_modified_date': '2025-02-27'}, hash='19628b0ae4a0f0ebd63b75e13df7d9183f42e8bb84358fdc2c9049c016c4b67d')}, metadata_template='{key}: {value}', metadata_separator='\\n', text='An environmental historian or urban planner focused on ecological conservation and sustainability, likely working in local government or a related organization.', mimetype='text/plain', start_char_idx=0, end_char_idx=160, metadata_seperator='\\n', text_template='{metadata_str}\\n\\n{content}'), score=0.3733060058493167)], metadata={'f0ea24d2-4ed3-4575-a41f-740a3fa8b521': {'file_path': '/Users/davidberenstein/Documents/programming/huggingface/agents-course/notebooks/unit2/llama-index/data/persona_1.txt', 'file_name': 'persona_1.txt', 'file_type': 'text/plain', 'file_size': 266, 'creation_date': '2025-02-27', 'last_modified_date': '2025-02-27'}, 'cebcd676-3180-4cda-be99-d535babc1b96': {'file_path': '/Users/davidberenstein/Documents/programming/huggingface/agents-course/notebooks/unit2/llama-index/data/persona_1004.txt', 'file_name': 'persona_1004.txt', 'file_type': 'text/plain', 'file_size': 160, 'creation_date': '2025-02-27', 'last_modified_date': '2025-02-27'}}), is_error=False)"
- ]
- },
- "execution_count": 8,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "import chromadb\n",
- "\n",
- "from llama_index.core import VectorStoreIndex\n",
- "from llama_index.llms.huggingface_api import HuggingFaceInferenceAPI\n",
- "from llama_index.embeddings.huggingface import HuggingFaceEmbedding\n",
- "from llama_index.core.tools import QueryEngineTool\n",
- "from llama_index.vector_stores.chroma import ChromaVectorStore\n",
- "\n",
- "db = chromadb.PersistentClient(path=\"./alfred_chroma_db\")\n",
- "chroma_collection = db.get_or_create_collection(\"alfred\")\n",
- "vector_store = ChromaVectorStore(chroma_collection=chroma_collection)\n",
- "embed_model = HuggingFaceEmbedding(model_name=\"BAAI/bge-small-en-v1.5\")\n",
- "llm = HuggingFaceInferenceAPI(model_name=\"meta-llama/Llama-3.2-3B-Instruct\")\n",
- "index = VectorStoreIndex.from_vector_store(\n",
- "    vector_store=vector_store, embed_model=embed_model\n",
- ")\n",
- "query_engine = index.as_query_engine(llm=llm)\n",
- "tool = QueryEngineTool.from_defaults(\n",
- "    query_engine=query_engine,\n",
- "    name=\"some useful name\",\n",
- "    description=\"some useful description\",\n",
- ")\n",
- "await tool.acall(\n",
- "    \"Responds about research on the impact of AI on the future of work and society?\"\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Creating Toolspecs\n",
- "\n",
- "Let's create a `ToolSpec` from the `GmailToolSpec` from the LlamaHub and convert it to a list of tools. "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 1,
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "[<llama_index.core.tools.function_tool.FunctionTool at 0x7f0d50623d90>,\n",
- " <llama_index.core.tools.function_tool.FunctionTool at 0x7f0d1c055210>,\n",
- " <llama_index.core.tools.function_tool.FunctionTool at 0x7f0d1c055780>,\n",
- " <llama_index.core.tools.function_tool.FunctionTool at 0x7f0d1c0556f0>,\n",
- " <llama_index.core.tools.function_tool.FunctionTool at 0x7f0d1c0559f0>,\n",
- " <llama_index.core.tools.function_tool.FunctionTool at 0x7f0d1c055b40>]"
- ]
- },
- "execution_count": 1,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "from llama_index.tools.google import GmailToolSpec\n",
- "\n",
- "tool_spec = GmailToolSpec()\n",
- "tool_spec_list = tool_spec.to_tool_list()\n",
- "tool_spec_list"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "To get a more detailed view of the tools, we can take a look at the `metadata` of each tool."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 2,
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "load_data load_data() -> List[llama_index.core.schema.Document]\n",
- "Load emails from the user's account.\n",
- "search_messages search_messages(query: str, max_results: Optional[int] = None)\n",
- "Searches email messages given a query string and the maximum number\n",
- "    of results requested by the user\n",
- "    Returns: List of relevant message objects up to the maximum number of results.\n",
- "\n",
- "    Args:\n",
- "        query[str]: The user's query\n",
- "        max_results (Optional[int]): The maximum number of search results\n",
- "        to return.\n",
- "    \n",
- "create_draft create_draft(to: Optional[List[str]] = None, subject: Optional[str] = None, message: Optional[str] = None) -> str\n",
- "Create and insert a draft email.\n",
- "    Print the returned draft's message and id.\n",
- "    Returns: Draft object, including draft id and message meta data.\n",
- "\n",
- "    Args:\n",
- "        to (Optional[str]): The email addresses to send the message to\n",
- "        subject (Optional[str]): The subject for the event\n",
- "        message (Optional[str]): The message for the event\n",
- "    \n",
- "update_draft update_draft(to: Optional[List[str]] = None, subject: Optional[str] = None, message: Optional[str] = None, draft_id: str = None) -> str\n",
- "Update a draft email.\n",
- "    Print the returned draft's message and id.\n",
- "    This function is required to be passed a draft_id that is obtained when creating messages\n",
- "    Returns: Draft object, including draft id and message meta data.\n",
- "\n",
- "    Args:\n",
- "        to (Optional[str]): The email addresses to send the message to\n",
- "        subject (Optional[str]): The subject for the event\n",
- "        message (Optional[str]): The message for the event\n",
- "        draft_id (str): the id of the draft to be updated\n",
- "    \n",
- "get_draft get_draft(draft_id: str = None) -> str\n",
- "Get a draft email.\n",
- "    Print the returned draft's message and id.\n",
- "    Returns: Draft object, including draft id and message meta data.\n",
- "\n",
- "    Args:\n",
- "        draft_id (str): the id of the draft to be updated\n",
- "    \n",
- "send_draft send_draft(draft_id: str = None) -> str\n",
- "Sends a draft email.\n",
- "    Print the returned draft's message and id.\n",
- "    Returns: Draft object, including draft id and message meta data.\n",
- "\n",
- "    Args:\n",
- "        draft_id (str): the id of the draft to be updated\n",
- "    \n"
- ]
- },
- {
- "data": {
- "text/plain": [
- "[None, None, None, None, None, None]"
- ]
- },
- "execution_count": 2,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "[print(tool.metadata.name, tool.metadata.description) for tool in tool_spec_list]"
- ]
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3 (ipykernel)",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.10.12"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 4
- }
|
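The deleted cell above prints each tool's metadata via a list comprehension, which is why the cell also returns the `[None, None, None, None, None, None]` value. A minimal sketch of the same inspection with a plain loop, using only names from the deleted notebook:

from llama_index.tools.google import GmailToolSpec

# A plain for-loop has the same printing side effect without building a
# throwaway [None, ...] list as the cell's return value.
tool_spec_list = GmailToolSpec().to_tool_list()
for tool in tool_spec_list:
    print(tool.metadata.name, tool.metadata.description)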
unit2/llama-index/workflows.ipynb
DELETED
@@ -1,401 +0,0 @@
|
|
1 |
-
{
|
2 |
-
"cells": [
|
3 |
-
{
|
4 |
-
"cell_type": "markdown",
|
5 |
-
"metadata": {},
|
6 |
-
"source": [
|
7 |
-
"# Workflows in LlamaIndex\n",
|
8 |
-
"\n",
|
9 |
-
"\n",
|
10 |
-
"This notebook is part of the [Hugging Face Agents Course](https://www.hf.co/learn/agents-course), a free Course from beginner to expert, where you learn to build Agents.\n",
|
11 |
-
"\n",
|
12 |
-
"\n",
|
13 |
-
"\n",
|
14 |
-
"## Let's install the dependencies\n",
|
15 |
-
"\n",
|
16 |
-
"We will install the dependencies for this unit."
|
17 |
-
]
|
18 |
-
},
|
19 |
-
{
|
20 |
-
"cell_type": "code",
|
21 |
-
"execution_count": null,
|
22 |
-
"metadata": {},
|
23 |
-
"outputs": [],
|
24 |
-
"source": [
|
25 |
-
"!pip install llama-index llama-index-vector-stores-chroma llama-index-utils-workflow llama-index-llms-huggingface-api pyvis -U -q"
|
26 |
-
]
|
27 |
-
},
|
28 |
-
{
|
29 |
-
"cell_type": "markdown",
|
30 |
-
"metadata": {},
|
31 |
-
"source": [
|
32 |
-
"And, let's log in to Hugging Face to use serverless Inference APIs."
|
33 |
-
]
|
34 |
-
},
|
35 |
-
{
|
36 |
-
"cell_type": "code",
|
37 |
-
"execution_count": null,
|
38 |
-
"metadata": {},
|
39 |
-
"outputs": [],
|
40 |
-
"source": [
|
41 |
-
"from huggingface_hub import login\n",
|
42 |
-
"\n",
|
43 |
-
"login()"
|
44 |
-
]
|
45 |
-
},
|
46 |
-
{
|
47 |
-
"cell_type": "markdown",
|
48 |
-
"metadata": {},
|
49 |
-
"source": [
|
50 |
-
"## Basic Workflow Creation\n",
|
51 |
-
"\n",
|
52 |
-
"We can start by creating a simple workflow. We use the `StartEvent` and `StopEvent` classes to define the start and stop of the workflow."
|
53 |
-
]
|
54 |
-
},
|
55 |
-
{
|
56 |
-
"cell_type": "code",
|
57 |
-
"execution_count": 3,
|
58 |
-
"metadata": {},
|
59 |
-
"outputs": [
|
60 |
-
{
|
61 |
-
"data": {
|
62 |
-
"text/plain": [
|
63 |
-
"'Hello, world!'"
|
64 |
-
]
|
65 |
-
},
|
66 |
-
"execution_count": 3,
|
67 |
-
"metadata": {},
|
68 |
-
"output_type": "execute_result"
|
69 |
-
}
|
70 |
-
],
|
71 |
-
"source": [
|
72 |
-
"from llama_index.core.workflow import StartEvent, StopEvent, Workflow, step\n",
|
73 |
-
"\n",
|
74 |
-
"\n",
|
75 |
-
"class MyWorkflow(Workflow):\n",
|
76 |
-
" @step\n",
|
77 |
-
" async def my_step(self, ev: StartEvent) -> StopEvent:\n",
|
78 |
-
" # do something here\n",
|
79 |
-
" return StopEvent(result=\"Hello, world!\")\n",
|
80 |
-
"\n",
|
81 |
-
"\n",
|
82 |
-
"w = MyWorkflow(timeout=10, verbose=False)\n",
|
83 |
-
"result = await w.run()\n",
|
84 |
-
"result"
|
85 |
-
]
|
86 |
-
},
|
87 |
-
{
|
88 |
-
"cell_type": "markdown",
|
89 |
-
"metadata": {},
|
90 |
-
"source": [
|
91 |
-
"## Connecting Multiple Steps\n",
|
92 |
-
"\n",
|
93 |
-
"We can also create multi-step workflows. Here we pass the event information between steps. Note that we can use type hinting to specify the event type and the flow of the workflow."
|
94 |
-
]
|
95 |
-
},
|
96 |
-
{
|
97 |
-
"cell_type": "code",
|
98 |
-
"execution_count": 4,
|
99 |
-
"metadata": {},
|
100 |
-
"outputs": [
|
101 |
-
{
|
102 |
-
"data": {
|
103 |
-
"text/plain": [
|
104 |
-
"'Finished processing: Step 1 complete'"
|
105 |
-
]
|
106 |
-
},
|
107 |
-
"execution_count": 4,
|
108 |
-
"metadata": {},
|
109 |
-
"output_type": "execute_result"
|
110 |
-
}
|
111 |
-
],
|
112 |
-
"source": [
|
113 |
-
"from llama_index.core.workflow import Event\n",
|
114 |
-
"\n",
|
115 |
-
"\n",
|
116 |
-
"class ProcessingEvent(Event):\n",
|
117 |
-
" intermediate_result: str\n",
|
118 |
-
"\n",
|
119 |
-
"\n",
|
120 |
-
"class MultiStepWorkflow(Workflow):\n",
|
121 |
-
" @step\n",
|
122 |
-
" async def step_one(self, ev: StartEvent) -> ProcessingEvent:\n",
|
123 |
-
" # Process initial data\n",
|
124 |
-
" return ProcessingEvent(intermediate_result=\"Step 1 complete\")\n",
|
125 |
-
"\n",
|
126 |
-
" @step\n",
|
127 |
-
" async def step_two(self, ev: ProcessingEvent) -> StopEvent:\n",
|
128 |
-
" # Use the intermediate result\n",
|
129 |
-
" final_result = f\"Finished processing: {ev.intermediate_result}\"\n",
|
130 |
-
" return StopEvent(result=final_result)\n",
|
131 |
-
"\n",
|
132 |
-
"\n",
|
133 |
-
"w = MultiStepWorkflow(timeout=10, verbose=False)\n",
|
134 |
-
"result = await w.run()\n",
|
135 |
-
"result"
|
136 |
-
]
|
137 |
-
},
|
138 |
-
{
|
139 |
-
"cell_type": "markdown",
|
140 |
-
"metadata": {},
|
141 |
-
"source": [
|
142 |
-
"## Loops and Branches\n",
|
143 |
-
"\n",
|
144 |
-
"We can also use type hinting to create branches and loops. Note that we can use the `|` operator to specify that the step can return multiple types."
|
145 |
-
]
|
146 |
-
},
|
147 |
-
{
|
148 |
-
"cell_type": "code",
|
149 |
-
"execution_count": 28,
|
150 |
-
"metadata": {},
|
151 |
-
"outputs": [
|
152 |
-
{
|
153 |
-
"name": "stdout",
|
154 |
-
"output_type": "stream",
|
155 |
-
"text": [
|
156 |
-
"Bad thing happened\n",
|
157 |
-
"Bad thing happened\n",
|
158 |
-
"Bad thing happened\n",
|
159 |
-
"Good thing happened\n"
|
160 |
-
]
|
161 |
-
},
|
162 |
-
{
|
163 |
-
"data": {
|
164 |
-
"text/plain": [
|
165 |
-
"'Finished processing: First step complete.'"
|
166 |
-
]
|
167 |
-
},
|
168 |
-
"execution_count": 28,
|
169 |
-
"metadata": {},
|
170 |
-
"output_type": "execute_result"
|
171 |
-
}
|
172 |
-
],
|
173 |
-
"source": [
|
174 |
-
"from llama_index.core.workflow import Event\n",
|
175 |
-
"import random\n",
|
176 |
-
"\n",
|
177 |
-
"\n",
|
178 |
-
"class ProcessingEvent(Event):\n",
|
179 |
-
" intermediate_result: str\n",
|
180 |
-
"\n",
|
181 |
-
"\n",
|
182 |
-
"class LoopEvent(Event):\n",
|
183 |
-
" loop_output: str\n",
|
184 |
-
"\n",
|
185 |
-
"\n",
|
186 |
-
"class MultiStepWorkflow(Workflow):\n",
|
187 |
-
" @step\n",
|
188 |
-
" async def step_one(self, ev: StartEvent | LoopEvent) -> ProcessingEvent | LoopEvent:\n",
|
189 |
-
" if random.randint(0, 1) == 0:\n",
|
190 |
-
" print(\"Bad thing happened\")\n",
|
191 |
-
" return LoopEvent(loop_output=\"Back to step one.\")\n",
|
192 |
-
" else:\n",
|
193 |
-
" print(\"Good thing happened\")\n",
|
194 |
-
" return ProcessingEvent(intermediate_result=\"First step complete.\")\n",
|
195 |
-
"\n",
|
196 |
-
" @step\n",
|
197 |
-
" async def step_two(self, ev: ProcessingEvent) -> StopEvent:\n",
|
198 |
-
" # Use the intermediate result\n",
|
199 |
-
" final_result = f\"Finished processing: {ev.intermediate_result}\"\n",
|
200 |
-
" return StopEvent(result=final_result)\n",
|
201 |
-
"\n",
|
202 |
-
"\n",
|
203 |
-
"w = MultiStepWorkflow(verbose=False)\n",
|
204 |
-
"result = await w.run()\n",
|
205 |
-
"result"
|
206 |
-
]
|
207 |
-
},
|
208 |
-
{
|
209 |
-
"cell_type": "markdown",
|
210 |
-
"metadata": {},
|
211 |
-
"source": [
|
212 |
-
"## Drawing Workflows\n",
|
213 |
-
"\n",
|
214 |
-
"We can also draw workflows using the `draw_all_possible_flows` function.\n"
|
215 |
-
]
|
216 |
-
},
|
217 |
-
{
|
218 |
-
"cell_type": "code",
|
219 |
-
"execution_count": 24,
|
220 |
-
"metadata": {},
|
221 |
-
"outputs": [
|
222 |
-
{
|
223 |
-
"name": "stdout",
|
224 |
-
"output_type": "stream",
|
225 |
-
"text": [
|
226 |
-
"<class 'NoneType'>\n",
|
227 |
-
"<class '__main__.ProcessingEvent'>\n",
|
228 |
-
"<class '__main__.LoopEvent'>\n",
|
229 |
-
"<class 'llama_index.core.workflow.events.StopEvent'>\n",
|
230 |
-
"workflow_all_flows.html\n"
|
231 |
-
]
|
232 |
-
}
|
233 |
-
],
|
234 |
-
"source": [
|
235 |
-
"from llama_index.utils.workflow import draw_all_possible_flows\n",
|
236 |
-
"\n",
|
237 |
-
"draw_all_possible_flows(w)"
|
238 |
-
]
|
239 |
-
},
|
240 |
-
{
|
241 |
-
"cell_type": "markdown",
|
242 |
-
"metadata": {},
|
243 |
-
"source": [
|
244 |
-
""
|
245 |
-
]
|
246 |
-
},
|
247 |
-
{
|
248 |
-
"cell_type": "markdown",
|
249 |
-
"metadata": {},
|
250 |
-
"source": [
|
251 |
-
"### State Management\n",
|
252 |
-
"\n",
|
253 |
-
"Instead of passing the event information between steps, we can use the `Context` type hint to pass information between steps. \n",
|
254 |
-
"This might be useful for long running workflows, where you want to store information between steps."
|
255 |
-
]
|
256 |
-
},
|
257 |
-
{
|
258 |
-
"cell_type": "code",
|
259 |
-
"execution_count": 25,
|
260 |
-
"metadata": {},
|
261 |
-
"outputs": [
|
262 |
-
{
|
263 |
-
"name": "stdout",
|
264 |
-
"output_type": "stream",
|
265 |
-
"text": [
|
266 |
-
"Query: What is the capital of France?\n"
|
267 |
-
]
|
268 |
-
},
|
269 |
-
{
|
270 |
-
"data": {
|
271 |
-
"text/plain": [
|
272 |
-
"'Finished processing: Step 1 complete'"
|
273 |
-
]
|
274 |
-
},
|
275 |
-
"execution_count": 25,
|
276 |
-
"metadata": {},
|
277 |
-
"output_type": "execute_result"
|
278 |
-
}
|
279 |
-
],
|
280 |
-
"source": [
|
281 |
-
"from llama_index.core.workflow import Event, Context\n",
|
282 |
-
"from llama_index.core.agent.workflow import ReActAgent\n",
|
283 |
-
"\n",
|
284 |
-
"\n",
|
285 |
-
"class ProcessingEvent(Event):\n",
|
286 |
-
" intermediate_result: str\n",
|
287 |
-
"\n",
|
288 |
-
"\n",
|
289 |
-
"class MultiStepWorkflow(Workflow):\n",
|
290 |
-
" @step\n",
|
291 |
-
" async def step_one(self, ev: StartEvent, ctx: Context) -> ProcessingEvent:\n",
|
292 |
-
" # Process initial data\n",
|
293 |
-
" await ctx.store.set(\"query\", \"What is the capital of France?\")\n",
|
294 |
-
" return ProcessingEvent(intermediate_result=\"Step 1 complete\")\n",
|
295 |
-
"\n",
|
296 |
-
" @step\n",
|
297 |
-
" async def step_two(self, ev: ProcessingEvent, ctx: Context) -> StopEvent:\n",
|
298 |
-
" # Use the intermediate result\n",
|
299 |
-
" query = await ctx.store.get(\"query\")\n",
|
300 |
-
" print(f\"Query: {query}\")\n",
|
301 |
-
" final_result = f\"Finished processing: {ev.intermediate_result}\"\n",
|
302 |
-
" return StopEvent(result=final_result)\n",
|
303 |
-
"\n",
|
304 |
-
"\n",
|
305 |
-
"w = MultiStepWorkflow(timeout=10, verbose=False)\n",
|
306 |
-
"result = await w.run()\n",
|
307 |
-
"result"
|
308 |
-
]
|
309 |
-
},
|
310 |
-
{
|
311 |
-
"cell_type": "markdown",
|
312 |
-
"metadata": {},
|
313 |
-
"source": [
|
314 |
-
"## Multi-Agent Workflows\n",
|
315 |
-
"\n",
|
316 |
-
"We can also create multi-agent workflows. Here we define two agents, one that multiplies two integers and one that adds two integers."
|
317 |
-
]
|
318 |
-
},
|
319 |
-
{
|
320 |
-
"cell_type": "code",
|
321 |
-
"execution_count": null,
|
322 |
-
"metadata": {},
|
323 |
-
"outputs": [
|
324 |
-
{
|
325 |
-
"data": {
|
326 |
-
"text/plain": [
|
327 |
-
"AgentOutput(response=ChatMessage(role=<MessageRole.ASSISTANT: 'assistant'>, additional_kwargs={}, blocks=[TextBlock(block_type='text', text='5 and 3 add up to 8.')]), tool_calls=[ToolCallResult(tool_name='handoff', tool_kwargs={'to_agent': 'add_agent', 'reason': 'The user wants to add two numbers, and the add_agent is better suited for this task.'}, tool_id='831895e7-3502-4642-92ea-8626e21ed83b', tool_output=ToolOutput(content='Agent add_agent is now handling the request due to the following reason: The user wants to add two numbers, and the add_agent is better suited for this task..\nPlease continue with the current request.', tool_name='handoff', raw_input={'args': (), 'kwargs': {'to_agent': 'add_agent', 'reason': 'The user wants to add two numbers, and the add_agent is better suited for this task.'}}, raw_output='Agent add_agent is now handling the request due to the following reason: The user wants to add two numbers, and the add_agent is better suited for this task..\nPlease continue with the current request.', is_error=False), return_direct=True), ToolCallResult(tool_name='add', tool_kwargs={'a': 5, 'b': 3}, tool_id='c29dc3f7-eaa7-4ba7-b49b-90908f860cc5', tool_output=ToolOutput(content='8', tool_name='add', raw_input={'args': (), 'kwargs': {'a': 5, 'b': 3}}, raw_output=8, is_error=False), return_direct=False)], raw=ChatCompletionStreamOutput(choices=[ChatCompletionStreamOutputChoice(delta=ChatCompletionStreamOutputDelta(role='assistant', content='.', tool_call_id=None, tool_calls=None), index=0, finish_reason=None, logprobs=None)], created=1744553546, id='', model='Qwen/Qwen2.5-Coder-32B-Instruct', system_fingerprint='3.2.1-sha-4d28897', usage=None, object='chat.completion.chunk'), current_agent_name='add_agent')"
|
328 |
-
]
|
329 |
-
},
|
330 |
-
"execution_count": 33,
|
331 |
-
"metadata": {},
|
332 |
-
"output_type": "execute_result"
|
333 |
-
}
|
334 |
-
],
|
335 |
-
"source": [
|
336 |
-
"from llama_index.core.agent.workflow import AgentWorkflow, ReActAgent\n",
|
337 |
-
"from llama_index.llms.huggingface_api import HuggingFaceInferenceAPI\n",
|
338 |
-
"from llama_index.core.agent.workflow import AgentWorkflow\n",
|
339 |
-
"\n",
|
340 |
-
"# Define some tools\n",
|
341 |
-
"def add(a: int, b: int) -> int:\n",
|
342 |
-
" \"\"\"Add two numbers.\"\"\"\n",
|
343 |
-
" return a + b\n",
|
344 |
-
"\n",
|
345 |
-
"def multiply(a: int, b: int) -> int:\n",
|
346 |
-
" \"\"\"Multiply two numbers.\"\"\"\n",
|
347 |
-
" return a * b\n",
|
348 |
-
"\n",
|
349 |
-
"llm = HuggingFaceInferenceAPI(model_name=\"Qwen/Qwen2.5-Coder-32B-Instruct\")\n",
|
350 |
-
"\n",
|
351 |
-
"# we can pass functions directly without FunctionTool -- the fn/docstring are parsed for the name/description\n",
|
352 |
-
"multiply_agent = ReActAgent(\n",
|
353 |
-
" name=\"multiply_agent\",\n",
|
354 |
-
" description=\"Is able to multiply two integers\",\n",
|
355 |
-
" system_prompt=\"A helpful assistant that can use a tool to multiply numbers.\",\n",
|
356 |
-
" tools=[multiply], \n",
|
357 |
-
" llm=llm,\n",
|
358 |
-
")\n",
|
359 |
-
"\n",
|
360 |
-
"addition_agent = ReActAgent(\n",
|
361 |
-
" name=\"add_agent\",\n",
|
362 |
-
" description=\"Is able to add two integers\",\n",
|
363 |
-
" system_prompt=\"A helpful assistant that can use a tool to add numbers.\",\n",
|
364 |
-
" tools=[add], \n",
|
365 |
-
" llm=llm,\n",
|
366 |
-
")\n",
|
367 |
-
"\n",
|
368 |
-
"# Create the workflow\n",
|
369 |
-
"workflow = AgentWorkflow(\n",
|
370 |
-
" agents=[multiply_agent, addition_agent],\n",
|
371 |
-
" root_agent=\"multiply_agent\"\n",
|
372 |
-
")\n",
|
373 |
-
"\n",
|
374 |
-
"# Run the system\n",
|
375 |
-
"response = await workflow.run(user_msg=\"Can you add 5 and 3?\")\n",
|
376 |
-
"response"
|
377 |
-
]
|
378 |
-
}
|
379 |
-
],
|
380 |
-
"metadata": {
|
381 |
-
"kernelspec": {
|
382 |
-
"display_name": ".venv",
|
383 |
-
"language": "python",
|
384 |
-
"name": "python3"
|
385 |
-
},
|
386 |
-
"language_info": {
|
387 |
-
"codemirror_mode": {
|
388 |
-
"name": "ipython",
|
389 |
-
"version": 3
|
390 |
-
},
|
391 |
-
"file_extension": ".py",
|
392 |
-
"mimetype": "text/x-python",
|
393 |
-
"name": "python",
|
394 |
-
"nbconvert_exporter": "python",
|
395 |
-
"pygments_lexer": "ipython3",
|
396 |
-
"version": "3.11.11"
|
397 |
-
}
|
398 |
-
},
|
399 |
-
"nbformat": 4,
|
400 |
-
"nbformat_minor": 2
|
401 |
-
}
|
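The deleted workflows notebook demonstrates loops, branches, and `Context`-based state in separate cells. A hedged sketch combining the two patterns with only the APIs shown above — the "attempts" key and the retry cap of 3 are illustrative assumptions, not part of the original notebook:

import random

from llama_index.core.workflow import (
    Context,
    Event,
    StartEvent,
    StopEvent,
    Workflow,
    step,
)


class LoopEvent(Event):
    loop_output: str


class RetryingWorkflow(Workflow):
    @step
    async def flaky_step(
        self, ev: StartEvent | LoopEvent, ctx: Context
    ) -> StopEvent | LoopEvent:
        # Initialize the counter on the first pass, then bump it each loop.
        if isinstance(ev, StartEvent):
            await ctx.store.set("attempts", 0)
        attempts = await ctx.store.get("attempts") + 1
        await ctx.store.set("attempts", attempts)
        if random.randint(0, 1) == 0 and attempts < 3:
            return LoopEvent(loop_output="Bad thing happened, retrying")
        return StopEvent(result=f"Finished after {attempts} attempt(s)")


w = RetryingWorkflow(timeout=10, verbose=False)
result = await w.run()  # top-level await works inside a notebook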
unit2/smolagents/code_agents.ipynb
CHANGED
The diff for this file is too large to render.
See raw diff
|
|
unit2/smolagents/multiagent_notebook.ipynb
CHANGED
The diff for this file is too large to render.
See raw diff
|
|
unit2/smolagents/retrieval_agents.ipynb
CHANGED
@@ -93,7 +93,7 @@
|
|
93 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
94 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"font-weight: bold\">Search for luxury superhero-themed party ideas, including decorations, entertainment, and catering.</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
95 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
96 |
-
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β°β
|
97 |
"</pre>\n"
|
98 |
],
|
99 |
"text/plain": [
|
@@ -101,7 +101,7 @@
|
|
101 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
102 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[1mSearch for luxury superhero-themed party ideas, including decorations, entertainment, and catering.\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
103 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
104 |
-
"\u001b[38;2;212;183;2mβ°β\u001b[0m\u001b[38;2;212;183;2m
|
105 |
]
|
106 |
},
|
107 |
"metadata": {},
|
@@ -1733,13 +1733,13 @@
|
|
1733 |
}
|
1734 |
],
|
1735 |
"source": [
|
1736 |
-
"from smolagents import CodeAgent, DuckDuckGoSearchTool,
|
1737 |
"\n",
|
1738 |
"# Initialize the search tool\n",
|
1739 |
"search_tool = DuckDuckGoSearchTool()\n",
|
1740 |
"\n",
|
1741 |
"# Initialize the model\n",
|
1742 |
-
"model =
|
1743 |
"\n",
|
1744 |
"agent = CodeAgent(\n",
|
1745 |
" model = model,\n",
|
@@ -1812,7 +1812,7 @@
|
|
1812 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
1813 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"font-weight: bold\">Find ideas for a luxury superhero-themed party, including entertainment, catering, and decoration options.</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
1814 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
1815 |
-
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β°β
|
1816 |
"</pre>\n"
|
1817 |
],
|
1818 |
"text/plain": [
|
@@ -1820,7 +1820,7 @@
|
|
1820 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
1821 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[1mFind ideas for a luxury superhero-themed party, including entertainment, catering, and decoration options.\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
1822 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
1823 |
-
"\u001b[38;2;212;183;2mβ°β\u001b[0m\u001b[38;2;212;183;2m
|
1824 |
]
|
1825 |
},
|
1826 |
"metadata": {},
|
@@ -2783,7 +2783,7 @@
|
|
2783 |
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
|
2784 |
"from smolagents import Tool\n",
|
2785 |
"from langchain_community.retrievers import BM25Retriever\n",
|
2786 |
-
"from smolagents import CodeAgent,
|
2787 |
"\n",
|
2788 |
"class PartyPlanningRetrieverTool(Tool):\n",
|
2789 |
" name = \"party_planning_retriever\"\n",
|
@@ -2843,7 +2843,7 @@
|
|
2843 |
"party_planning_retriever = PartyPlanningRetrieverTool(docs_processed)\n",
|
2844 |
"\n",
|
2845 |
"# Initialize the agent\n",
|
2846 |
-
"agent = CodeAgent(tools=[party_planning_retriever], model=
|
2847 |
"\n",
|
2848 |
"# Example usage\n",
|
2849 |
"response = agent.run(\n",
|
|
|
93 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
94 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"font-weight: bold\">Search for luxury superhero-themed party ideas, including decorations, entertainment, and catering.</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
95 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
96 |
+
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β°β HfApiModel - Qwen/Qwen2.5-Coder-32B-Instruct βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ―</span>\n",
|
97 |
"</pre>\n"
|
98 |
],
|
99 |
"text/plain": [
|
|
|
101 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
102 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[1mSearch for luxury superhero-themed party ideas, including decorations, entertainment, and catering.\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
103 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
104 |
+
"\u001b[38;2;212;183;2mβ°β\u001b[0m\u001b[38;2;212;183;2m HfApiModel - Qwen/Qwen2.5-Coder-32B-Instruct \u001b[0m\u001b[38;2;212;183;2mβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ\u001b[0m\u001b[38;2;212;183;2mββ―\u001b[0m\n"
|
105 |
]
|
106 |
},
|
107 |
"metadata": {},
|
|
|
1733 |
}
|
1734 |
],
|
1735 |
"source": [
|
1736 |
+
"from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel\n",
|
1737 |
"\n",
|
1738 |
"# Initialize the search tool\n",
|
1739 |
"search_tool = DuckDuckGoSearchTool()\n",
|
1740 |
"\n",
|
1741 |
"# Initialize the model\n",
|
1742 |
+
"model = HfApiModel()\n",
|
1743 |
"\n",
|
1744 |
"agent = CodeAgent(\n",
|
1745 |
" model = model,\n",
|
|
|
1812 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
1813 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"font-weight: bold\">Find ideas for a luxury superhero-themed party, including entertainment, catering, and decoration options.</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
1814 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
1815 |
+
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β°β HfApiModel - Qwen/Qwen2.5-Coder-32B-Instruct οΏ½οΏ½ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ―</span>\n",
|
1816 |
"</pre>\n"
|
1817 |
],
|
1818 |
"text/plain": [
|
|
|
1820 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
1821 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[1mFind ideas for a luxury superhero-themed party, including entertainment, catering, and decoration options.\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
1822 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
1823 |
+
"\u001b[38;2;212;183;2mβ°β\u001b[0m\u001b[38;2;212;183;2m HfApiModel - Qwen/Qwen2.5-Coder-32B-Instruct \u001b[0m\u001b[38;2;212;183;2mβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ\u001b[0m\u001b[38;2;212;183;2mββ―\u001b[0m\n"
|
1824 |
]
|
1825 |
},
|
1826 |
"metadata": {},
|
|
|
2783 |
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
|
2784 |
"from smolagents import Tool\n",
|
2785 |
"from langchain_community.retrievers import BM25Retriever\n",
|
2786 |
+
"from smolagents import CodeAgent, HfApiModel\n",
|
2787 |
"\n",
|
2788 |
"class PartyPlanningRetrieverTool(Tool):\n",
|
2789 |
" name = \"party_planning_retriever\"\n",
|
|
|
2843 |
"party_planning_retriever = PartyPlanningRetrieverTool(docs_processed)\n",
|
2844 |
"\n",
|
2845 |
"# Initialize the agent\n",
|
2846 |
+
"agent = CodeAgent(tools=[party_planning_retriever], model=HfApiModel())\n",
|
2847 |
"\n",
|
2848 |
"# Example usage\n",
|
2849 |
"response = agent.run(\n",
|
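The hunks above converge on `HfApiModel`. Note that `HfApiModel()` with no argument falls back to a library default (the run banners show Qwen/Qwen2.5-Coder-32B-Instruct); a minimal sketch that pins the model explicitly instead:

from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

# Passing the model id avoids depending on whatever default the installed
# smolagents release ships with.
model = HfApiModel("Qwen/Qwen2.5-Coder-32B-Instruct")
agent = CodeAgent(model=model, tools=[DuckDuckGoSearchTool()])
agent.run("Search for luxury superhero-themed party ideas.")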
unit2/smolagents/tool_calling_agents.ipynb
CHANGED
@@ -6,7 +6,7 @@
|
|
6 |
"id": "Pi9CF0391ARI"
|
7 |
},
|
8 |
"source": [
|
9 |
-
"#
|
10 |
"\n",
|
11 |
"This notebook is part of the [Hugging Face Agents Course](https://www.hf.co/learn/agents-course), a free Course from beginner to expert, where you learn to build Agents.\n",
|
12 |
"\n",
|
@@ -37,12 +37,12 @@
|
|
37 |
},
|
38 |
{
|
39 |
"cell_type": "markdown",
|
40 |
-
"metadata": {
|
41 |
-
"id": "cH-4W1GhYL4T"
|
42 |
-
},
|
43 |
"source": [
|
44 |
"Let's also login to the Hugging Face Hub to have access to the Inference API."
|
45 |
-
]
|
|
|
|
|
|
|
46 |
},
|
47 |
{
|
48 |
"cell_type": "code",
|
@@ -87,7 +87,7 @@
|
|
87 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
88 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"font-weight: bold\">Search for the best music recommendations for a party at the Wayne's mansion.</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
89 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
90 |
-
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β°β
|
91 |
"</pre>\n"
|
92 |
],
|
93 |
"text/plain": [
|
@@ -95,7 +95,7 @@
|
|
95 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
96 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[1mSearch for the best music recommendations for a party at the Wayne's mansion.\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
97 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
98 |
-
"\u001b[38;2;212;183;2mβ°β\u001b[0m\u001b[38;2;212;183;2m
|
99 |
]
|
100 |
},
|
101 |
"metadata": {},
|
@@ -550,18 +550,15 @@
|
|
550 |
}
|
551 |
],
|
552 |
"source": [
|
553 |
-
"from smolagents import ToolCallingAgent, DuckDuckGoSearchTool,
|
554 |
"\n",
|
555 |
-
"agent = ToolCallingAgent(tools=[DuckDuckGoSearchTool()], model=
|
556 |
"\n",
|
557 |
"agent.run(\"Search for the best music recommendations for a party at the Wayne's mansion.\")"
|
558 |
]
|
559 |
},
|
560 |
{
|
561 |
"cell_type": "markdown",
|
562 |
-
"metadata": {
|
563 |
-
"id": "Cl19VWGRYXrr"
|
564 |
-
},
|
565 |
"source": [
|
566 |
"\n",
|
567 |
"When you examine the agent's trace, instead of seeing `Executing parsed code:`, you'll see something like:\n",
|
@@ -576,7 +573,10 @@
|
|
576 |
"The agent generates a structured tool call that the system processes to produce the output, rather than directly executing code like a `CodeAgent`.\n",
|
577 |
"\n",
|
578 |
"Now that we understand both agent types, we can choose the right one for our needs. Let's continue exploring `smolagents` to make Alfred's party a success! π"
|
579 |
-
]
|
|
|
|
|
|
|
580 |
}
|
581 |
],
|
582 |
"metadata": {
|
@@ -593,4 +593,4 @@
|
|
593 |
},
|
594 |
"nbformat": 4,
|
595 |
"nbformat_minor": 0
|
596 |
-
}
|
|
|
6 |
"id": "Pi9CF0391ARI"
|
7 |
},
|
8 |
"source": [
|
9 |
+
"# Integrating Agents With Tools\n",
|
10 |
"\n",
|
11 |
"This notebook is part of the [Hugging Face Agents Course](https://www.hf.co/learn/agents-course), a free Course from beginner to expert, where you learn to build Agents.\n",
|
12 |
"\n",
|
|
|
37 |
},
|
38 |
{
|
39 |
"cell_type": "markdown",
|
|
|
|
|
|
|
40 |
"source": [
|
41 |
"Let's also login to the Hugging Face Hub to have access to the Inference API."
|
42 |
+
],
|
43 |
+
"metadata": {
|
44 |
+
"id": "cH-4W1GhYL4T"
|
45 |
+
}
|
46 |
},
|
47 |
{
|
48 |
"cell_type": "code",
|
|
|
87 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
88 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"font-weight: bold\">Search for the best music recommendations for a party at the Wayne's mansion.</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
89 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
90 |
+
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β°β HfApiModel - Qwen/Qwen2.5-Coder-32B-Instruct βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ―</span>\n",
|
91 |
"</pre>\n"
|
92 |
],
|
93 |
"text/plain": [
|
|
|
95 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
96 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[1mSearch for the best music recommendations for a party at the Wayne's mansion.\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
97 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
98 |
+
"\u001b[38;2;212;183;2mβ°β\u001b[0m\u001b[38;2;212;183;2m HfApiModel - Qwen/Qwen2.5-Coder-32B-Instruct \u001b[0m\u001b[38;2;212;183;2mβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ\u001b[0m\u001b[38;2;212;183;2mββ―\u001b[0m\n"
|
99 |
]
|
100 |
},
|
101 |
"metadata": {},
|
|
|
550 |
}
|
551 |
],
|
552 |
"source": [
|
553 |
+
"from smolagents import ToolCallingAgent, DuckDuckGoSearchTool, HfApiModel\n",
|
554 |
"\n",
|
555 |
+
"agent = ToolCallingAgent(tools=[DuckDuckGoSearchTool()], model=HfApiModel())\n",
|
556 |
"\n",
|
557 |
"agent.run(\"Search for the best music recommendations for a party at the Wayne's mansion.\")"
|
558 |
]
|
559 |
},
|
560 |
{
|
561 |
"cell_type": "markdown",
|
|
|
|
|
|
|
562 |
"source": [
|
563 |
"\n",
|
564 |
"When you examine the agent's trace, instead of seeing `Executing parsed code:`, you'll see something like:\n",
|
|
|
573 |
"The agent generates a structured tool call that the system processes to produce the output, rather than directly executing code like a `CodeAgent`.\n",
|
574 |
"\n",
|
575 |
"Now that we understand both agent types, we can choose the right one for our needs. Let's continue exploring `smolagents` to make Alfred's party a success! π"
|
576 |
+
],
|
577 |
+
"metadata": {
|
578 |
+
"id": "Cl19VWGRYXrr"
|
579 |
+
}
|
580 |
}
|
581 |
],
|
582 |
"metadata": {
|
|
|
593 |
},
|
594 |
"nbformat": 4,
|
595 |
"nbformat_minor": 0
|
596 |
+
}
|
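This notebook's closing cell contrasts the two agent types. A short side-by-side sketch using the same imports as the updated cells:

from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel, ToolCallingAgent

model = HfApiModel()
tools = [DuckDuckGoSearchTool()]

# A CodeAgent writes and executes Python snippets that call its tools...
code_agent = CodeAgent(tools=tools, model=model)

# ...while a ToolCallingAgent emits structured tool calls that the
# framework parses and executes on its behalf.
tool_agent = ToolCallingAgent(tools=tools, model=model)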
unit2/smolagents/tools.ipynb
CHANGED
@@ -91,7 +91,7 @@
|
|
91 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
92 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"font-weight: bold\">Can you give me the name of the highest-rated catering service in Gotham City?</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
93 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
94 |
-
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β°β
|
95 |
"</pre>\n"
|
96 |
],
|
97 |
"text/plain": [
|
@@ -99,7 +99,7 @@
|
|
99 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
100 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[1mCan you give me the name of the highest-rated catering service in Gotham City?\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
101 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
102 |
-
"\u001b[38;2;212;183;2mβ°β\u001b[0m\u001b[38;2;212;183;2m
|
103 |
]
|
104 |
},
|
105 |
"metadata": {},
|
@@ -246,7 +246,7 @@
|
|
246 |
}
|
247 |
],
|
248 |
"source": [
|
249 |
-
"from smolagents import CodeAgent,
|
250 |
"\n",
|
251 |
"# Let's pretend we have a function that fetches the highest-rated catering services.\n",
|
252 |
"@tool\n",
|
@@ -270,7 +270,7 @@
|
|
270 |
" return best_service\n",
|
271 |
"\n",
|
272 |
"\n",
|
273 |
-
"agent = CodeAgent(tools=[catering_service_tool], model=
|
274 |
"\n",
|
275 |
"# Run the agent to find the best catering service\n",
|
276 |
"result = agent.run(\n",
|
@@ -314,7 +314,7 @@
|
|
314 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
315 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"font-weight: bold\">What would be a good superhero party idea for a 'villain masquerade' theme?</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
316 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
317 |
-
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β°β
|
318 |
"</pre>\n"
|
319 |
],
|
320 |
"text/plain": [
|
@@ -322,7 +322,7 @@
|
|
322 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
323 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[1mWhat would be a good superhero party idea for a 'villain masquerade' theme?\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
324 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
325 |
-
"\u001b[38;2;212;183;2mβ°β\u001b[0m\u001b[38;2;212;183;2m
|
326 |
]
|
327 |
},
|
328 |
"metadata": {},
|
@@ -395,7 +395,7 @@
|
|
395 |
}
|
396 |
],
|
397 |
"source": [
|
398 |
-
"from smolagents import Tool, CodeAgent,
|
399 |
"\n",
|
400 |
"class SuperheroPartyThemeTool(Tool):\n",
|
401 |
" name = \"superhero_party_theme_generator\"\n",
|
@@ -423,7 +423,7 @@
|
|
423 |
"\n",
|
424 |
"# Instantiate the tool\n",
|
425 |
"party_theme_tool = SuperheroPartyThemeTool()\n",
|
426 |
-
"agent = CodeAgent(tools=[party_theme_tool], model=
|
427 |
"\n",
|
428 |
"# Run the agent to generate a party theme idea\n",
|
429 |
"result = agent.run(\n",
|
@@ -514,7 +514,7 @@
|
|
514 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
515 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"font-weight: bold\">Generate an image of a luxurious superhero-themed party at Wayne Manor with made-up superheros.</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
516 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
517 |
-
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β°β
|
518 |
"</pre>\n"
|
519 |
],
|
520 |
"text/plain": [
|
@@ -522,7 +522,7 @@
|
|
522 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
523 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[1mGenerate an image of a luxurious superhero-themed party at Wayne Manor with made-up superheros.\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
524 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
525 |
-
"\u001b[38;2;212;183;2mβ°β\u001b[0m\u001b[38;2;212;183;2m
|
526 |
]
|
527 |
},
|
528 |
"metadata": {},
|
@@ -604,7 +604,7 @@
|
|
604 |
}
|
605 |
],
|
606 |
"source": [
|
607 |
-
"from smolagents import load_tool, CodeAgent,
|
608 |
"\n",
|
609 |
"image_generation_tool = load_tool(\n",
|
610 |
" \"m-ric/text-to-image\",\n",
|
@@ -613,7 +613,7 @@
|
|
613 |
"\n",
|
614 |
"agent = CodeAgent(\n",
|
615 |
" tools=[image_generation_tool],\n",
|
616 |
-
" model=
|
617 |
")\n",
|
618 |
"\n",
|
619 |
"agent.run(\"Generate an image of a luxurious superhero-themed party at Wayne Manor with made-up superheros.\")"
|
@@ -679,7 +679,7 @@
|
|
679 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"font-weight: bold\">python code:</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
680 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"font-weight: bold\">{'user_prompt': 'A grand superhero-themed party at Wayne Manor, with Alfred overseeing a luxurious gala'}.</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
681 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
682 |
-
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β°β
|
683 |
"</pre>\n"
|
684 |
],
|
685 |
"text/plain": [
|
@@ -690,7 +690,7 @@
|
|
690 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[1mpython code:\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
691 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[1m{'user_prompt': 'A grand superhero-themed party at Wayne Manor, with Alfred overseeing a luxurious gala'}.\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
692 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
693 |
-
"\u001b[38;2;212;183;2mβ°β\u001b[0m\u001b[38;2;212;183;2m
|
694 |
]
|
695 |
},
|
696 |
"metadata": {},
|
@@ -799,7 +799,7 @@
|
|
799 |
}
|
800 |
],
|
801 |
"source": [
|
802 |
-
"from smolagents import CodeAgent,
|
803 |
"\n",
|
804 |
"image_generation_tool = Tool.from_space(\n",
|
805 |
" \"black-forest-labs/FLUX.1-schnell\",\n",
|
@@ -807,7 +807,7 @@
|
|
807 |
" description=\"Generate an image from a prompt\"\n",
|
808 |
")\n",
|
809 |
"\n",
|
810 |
-
"model =
|
811 |
"\n",
|
812 |
"agent = CodeAgent(tools=[image_generation_tool], model=model)\n",
|
813 |
"\n",
|
@@ -913,7 +913,7 @@
|
|
913 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"font-weight: bold\">Search for luxury entertainment ideas for a superhero-themed event, such as live performances and interactive </span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
914 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"font-weight: bold\">experiences.</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
915 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
916 |
-
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β°β
|
917 |
"</pre>\n"
|
918 |
],
|
919 |
"text/plain": [
|
@@ -922,7 +922,7 @@
|
|
922 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[1mSearch for luxury entertainment ideas for a superhero-themed event, such as live performances and interactive \u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
923 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[1mexperiences.\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
924 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
925 |
-
"\u001b[38;2;212;183;2mβ°β\u001b[0m\u001b[38;2;212;183;2m
|
926 |
]
|
927 |
},
|
928 |
"metadata": {},
|
@@ -1208,7 +1208,7 @@
|
|
1208 |
],
|
1209 |
"source": [
|
1210 |
"from langchain.agents import load_tools\n",
|
1211 |
-
"from smolagents import CodeAgent,
|
1212 |
"\n",
|
1213 |
"search_tool = Tool.from_langchain(load_tools([\"serpapi\"])[0])\n",
|
1214 |
"\n",
|
|
|
91 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
92 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"font-weight: bold\">Can you give me the name of the highest-rated catering service in Gotham City?</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
93 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
94 |
+
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β°β HfApiModel - Qwen/Qwen2.5-Coder-32B-Instruct βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ―</span>\n",
|
95 |
"</pre>\n"
|
96 |
],
|
97 |
"text/plain": [
|
|
|
99 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
100 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[1mCan you give me the name of the highest-rated catering service in Gotham City?\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
101 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
102 |
+
"\u001b[38;2;212;183;2mβ°β\u001b[0m\u001b[38;2;212;183;2m HfApiModel - Qwen/Qwen2.5-Coder-32B-Instruct \u001b[0m\u001b[38;2;212;183;2mβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ\u001b[0m\u001b[38;2;212;183;2mββ―\u001b[0m\n"
|
103 |
]
|
104 |
},
|
105 |
"metadata": {},
|
|
|
246 |
}
|
247 |
],
|
248 |
"source": [
|
249 |
+
"from smolagents import CodeAgent, HfApiModel, tool\n",
|
250 |
"\n",
|
251 |
"# Let's pretend we have a function that fetches the highest-rated catering services.\n",
|
252 |
"@tool\n",
|
|
|
270 |
" return best_service\n",
|
271 |
"\n",
|
272 |
"\n",
|
273 |
+
"agent = CodeAgent(tools=[catering_service_tool], model=HfApiModel())\n",
|
274 |
"\n",
|
275 |
"# Run the agent to find the best catering service\n",
|
276 |
"result = agent.run(\n",
|
|
|
314 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
315 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"font-weight: bold\">What would be a good superhero party idea for a 'villain masquerade' theme?</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
316 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
317 |
+
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β°β HfApiModel - Qwen/Qwen2.5-Coder-32B-Instruct βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ―</span>\n",
|
318 |
"</pre>\n"
|
319 |
],
|
320 |
"text/plain": [
|
|
|
322 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
323 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[1mWhat would be a good superhero party idea for a 'villain masquerade' theme?\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
324 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
325 |
+
"\u001b[38;2;212;183;2mβ°β\u001b[0m\u001b[38;2;212;183;2m HfApiModel - Qwen/Qwen2.5-Coder-32B-Instruct \u001b[0m\u001b[38;2;212;183;2mβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ\u001b[0m\u001b[38;2;212;183;2mββ―\u001b[0m\n"
|
326 |
]
|
327 |
},
|
328 |
"metadata": {},
|
|
|
395 |
}
|
396 |
],
|
397 |
"source": [
|
398 |
+
"from smolagents import Tool, CodeAgent, HfApiModel\n",
|
399 |
"\n",
|
400 |
"class SuperheroPartyThemeTool(Tool):\n",
|
401 |
" name = \"superhero_party_theme_generator\"\n",
|
|
|
423 |
"\n",
|
424 |
"# Instantiate the tool\n",
|
425 |
"party_theme_tool = SuperheroPartyThemeTool()\n",
|
426 |
+
"agent = CodeAgent(tools=[party_theme_tool], model=HfApiModel())\n",
|
427 |
"\n",
|
428 |
"# Run the agent to generate a party theme idea\n",
|
429 |
"result = agent.run(\n",
|
|
|
514 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
515 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"font-weight: bold\">Generate an image of a luxurious superhero-themed party at Wayne Manor with made-up superheros.</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
516 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
517 |
+
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β°β HfApiModel - Qwen/Qwen2.5-Coder-32B-Instruct βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ―</span>\n",
|
518 |
"</pre>\n"
|
519 |
],
|
520 |
"text/plain": [
|
|
|
522 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
523 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[1mGenerate an image of a luxurious superhero-themed party at Wayne Manor with made-up superheros.\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
524 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
525 |
+
"\u001b[38;2;212;183;2mβ°β\u001b[0m\u001b[38;2;212;183;2m HfApiModel - Qwen/Qwen2.5-Coder-32B-Instruct \u001b[0m\u001b[38;2;212;183;2mβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ\u001b[0m\u001b[38;2;212;183;2mββ―\u001b[0m\n"
|
526 |
]
|
527 |
},
|
528 |
"metadata": {},
|
|
|
604 |
}
|
605 |
],
|
606 |
"source": [
|
607 |
+
"from smolagents import load_tool, CodeAgent, HfApiModel\n",
|
608 |
"\n",
|
609 |
"image_generation_tool = load_tool(\n",
|
610 |
" \"m-ric/text-to-image\",\n",
|
|
|
613 |
"\n",
|
614 |
"agent = CodeAgent(\n",
|
615 |
" tools=[image_generation_tool],\n",
|
616 |
+
" model=HfApiModel()\n",
|
617 |
")\n",
|
618 |
"\n",
|
619 |
"agent.run(\"Generate an image of a luxurious superhero-themed party at Wayne Manor with made-up superheros.\")"
|
|
|
679 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"font-weight: bold\">python code:</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
680 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"font-weight: bold\">{'user_prompt': 'A grand superhero-themed party at Wayne Manor, with Alfred overseeing a luxurious gala'}.</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
681 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
682 |
+
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β°β HfApiModel - Qwen/Qwen2.5-Coder-32B-Instruct βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ―</span>\n",
|
683 |
"</pre>\n"
|
684 |
],
|
685 |
"text/plain": [
|
|
|
690 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[1mpython code:\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
691 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[1m{'user_prompt': 'A grand superhero-themed party at Wayne Manor, with Alfred overseeing a luxurious gala'}.\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
692 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
693 |
+
"\u001b[38;2;212;183;2mβ°β\u001b[0m\u001b[38;2;212;183;2m HfApiModel - Qwen/Qwen2.5-Coder-32B-Instruct \u001b[0m\u001b[38;2;212;183;2mβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ\u001b[0m\u001b[38;2;212;183;2mββ―\u001b[0m\n"
|
694 |
]
|
695 |
},
|
696 |
"metadata": {},
|
|
|
799 |
}
|
800 |
],
|
801 |
"source": [
|
802 |
+
"from smolagents import CodeAgent, HfApiModel, Tool\n",
|
803 |
"\n",
|
804 |
"image_generation_tool = Tool.from_space(\n",
|
805 |
" \"black-forest-labs/FLUX.1-schnell\",\n",
|
|
|
807 |
" description=\"Generate an image from a prompt\"\n",
|
808 |
")\n",
|
809 |
"\n",
|
810 |
+
"model = HfApiModel(\"Qwen/Qwen2.5-Coder-32B-Instruct\")\n",
|
811 |
"\n",
|
812 |
"agent = CodeAgent(tools=[image_generation_tool], model=model)\n",
|
813 |
"\n",
|
|
|
913 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"font-weight: bold\">Search for luxury entertainment ideas for a superhero-themed event, such as live performances and interactive </span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
914 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"font-weight: bold\">experiences.</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
915 |
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span> <span style=\"color: #d4b702; text-decoration-color: #d4b702\">β</span>\n",
|
916 |
+
"<span style=\"color: #d4b702; text-decoration-color: #d4b702\">β°β HfApiModel - Qwen/Qwen2.5-Coder-32B-Instruct βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ―</span>\n",
|
917 |
"</pre>\n"
|
918 |
],
|
919 |
"text/plain": [
|
|
|
922 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[1mSearch for luxury entertainment ideas for a superhero-themed event, such as live performances and interactive \u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
923 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[1mexperiences.\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
924 |
"\u001b[38;2;212;183;2mβ\u001b[0m \u001b[38;2;212;183;2mβ\u001b[0m\n",
|
925 |
+
"\u001b[38;2;212;183;2mβ°β\u001b[0m\u001b[38;2;212;183;2m HfApiModel - Qwen/Qwen2.5-Coder-32B-Instruct \u001b[0m\u001b[38;2;212;183;2mβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ\u001b[0m\u001b[38;2;212;183;2mββ―\u001b[0m\n"
|
926 |
]
|
927 |
},
|
928 |
"metadata": {},
|
|
|
1208 |
],
|
1209 |
"source": [
|
1210 |
"from langchain.agents import load_tools\n",
|
1211 |
+
"from smolagents import CodeAgent, HfApiModel, Tool\n",
|
1212 |
"\n",
|
1213 |
"search_tool = Tool.from_langchain(load_tools([\"serpapi\"])[0])\n",
|
1214 |
"\n",
|
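A consolidated sketch of the two tool-wrapping helpers these hunks touch. The `name` argument for the Space tool is an assumption, since the diff truncates the middle of the original call; the LangChain wrapper needs a SERPAPI_API_KEY in the environment:

from langchain.agents import load_tools
from smolagents import Tool

# Wrap a Hugging Face Space as a smolagents tool (the name is assumed here).
image_generation_tool = Tool.from_space(
    "black-forest-labs/FLUX.1-schnell",
    name="image_generator",
    description="Generate an image from a prompt",
)

# Wrap an existing LangChain tool.
search_tool = Tool.from_langchain(load_tools(["serpapi"])[0])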
unit2/smolagents/vision_agents.ipynb
CHANGED
@@ -38,12 +38,12 @@
|
|
38 |
},
|
39 |
{
|
40 |
"cell_type": "markdown",
|
41 |
-
"metadata": {
|
42 |
-
"id": "WJGFjRbZbL50"
|
43 |
-
},
|
44 |
"source": [
|
45 |
"Let's also login to the Hugging Face Hub to have access to the Inference API."
|
46 |
-
]
|
|
|
|
|
|
|
47 |
},
|
48 |
{
|
49 |
"cell_type": "code",
|
@@ -72,7 +72,7 @@
|
|
72 |
"\n",
|
73 |
"In this case, a guest is trying to enter, and Alfred suspects that this visitor might be The Joker impersonating Wonder Woman. Alfred needs to verify their identity to prevent anyone unwanted from entering. \n",
|
74 |
"\n",
|
75 |
-
"Letβs build the example. First, the images are loaded. In this case, we use images from Wikipedia to keep the example minimal, but
|
76 |
]
|
77 |
},
|
78 |
{
|
@@ -94,22 +94,19 @@
|
|
94 |
"\n",
|
95 |
"images = []\n",
|
96 |
"for url in image_urls:\n",
|
97 |
-
"
|
98 |
-
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36\" \n",
|
99 |
-
" }\n",
|
100 |
-
" response = requests.get(url,headers=headers)\n",
|
101 |
" image = Image.open(BytesIO(response.content)).convert(\"RGB\")\n",
|
102 |
" images.append(image)"
|
103 |
]
|
104 |
},
|
105 |
{
|
106 |
"cell_type": "markdown",
|
107 |
-
"metadata": {
|
108 |
-
"id": "vUBQjETkbRU6"
|
109 |
-
},
|
110 |
"source": [
|
111 |
"Now that we have the images, the agent will tell us wether the guests is actually a superhero (Wonder Woman) or a villian (The Joker)."
|
112 |
-
]
|
|
|
|
|
|
|
113 |
},
|
114 |
{
|
115 |
"cell_type": "code",
|
@@ -502,12 +499,12 @@
|
|
502 |
},
|
503 |
{
|
504 |
"cell_type": "markdown",
|
505 |
-
"metadata": {
|
506 |
-
"id": "NrV-yK5zbT9r"
|
507 |
-
},
|
508 |
"source": [
|
509 |
"In this case, the output reveals that the person is impersonating someone else, so we can prevent The Joker from entering the party!"
|
510 |
-
]
|
|
|
|
|
|
|
511 |
},
|
512 |
{
|
513 |
"cell_type": "markdown",
|
@@ -535,4 +532,4 @@
|
|
535 |
},
|
536 |
"nbformat": 4,
|
537 |
"nbformat_minor": 0
|
538 |
-
}
|
|
|
38 |
},
|
39 |
{
|
40 |
"cell_type": "markdown",
|
|
|
|
|
|
|
41 |
"source": [
|
42 |
"Let's also login to the Hugging Face Hub to have access to the Inference API."
|
43 |
+
],
|
44 |
+
"metadata": {
|
45 |
+
"id": "WJGFjRbZbL50"
|
46 |
+
}
|
47 |
},
|
48 |
{
|
49 |
"cell_type": "code",
|
|
|
72 |
"\n",
|
73 |
"In this case, a guest is trying to enter, and Alfred suspects that this visitor might be The Joker impersonating Wonder Woman. Alfred needs to verify their identity to prevent anyone unwanted from entering. \n",
|
74 |
"\n",
|
75 |
+
"Letβs build the example. First, the images are loaded. In this case, we use images from Wikipedia to keep the example minimal, but image the possible use-case!"
|
76 |
]
|
77 |
},
|
78 |
{
|
|
|
94 |
"\n",
|
95 |
"images = []\n",
|
96 |
"for url in image_urls:\n",
|
97 |
+
" response = requests.get(url)\n",
|
|
|
|
|
|
|
98 |
" image = Image.open(BytesIO(response.content)).convert(\"RGB\")\n",
|
99 |
" images.append(image)"
|
100 |
]
|
101 |
},
|
102 |
{
|
103 |
"cell_type": "markdown",
|
|
|
|
|
|
|
104 |
"source": [
|
105 |
"Now that we have the images, the agent will tell us wether the guests is actually a superhero (Wonder Woman) or a villian (The Joker)."
|
106 |
+
],
|
107 |
+
"metadata": {
|
108 |
+
"id": "vUBQjETkbRU6"
|
109 |
+
}
|
110 |
},
|
111 |
{
|
112 |
"cell_type": "code",
|
|
|
499 |
},
|
500 |
{
|
501 |
"cell_type": "markdown",
|
|
|
|
|
|
|
502 |
"source": [
|
503 |
"In this case, the output reveals that the person is impersonating someone else, so we can prevent The Joker from entering the party!"
|
504 |
+
],
|
505 |
+
"metadata": {
|
506 |
+
"id": "NrV-yK5zbT9r"
|
507 |
+
}
|
508 |
},
|
509 |
{
|
510 |
"cell_type": "markdown",
|
|
|
532 |
},
|
533 |
"nbformat": 4,
|
534 |
"nbformat_minor": 0
|
535 |
+
}
|
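Note that the image-loading hunk drops the browser `User-Agent` header the removed lines set. If Wikipedia rejects the default requests user agent, a sketch like the following restores it (the UA string is illustrative; the removed cell used a longer one):

from io import BytesIO

import requests
from PIL import Image

headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}
images = []
for url in image_urls:  # image_urls as defined earlier in the notebook
    response = requests.get(url, headers=headers)
    images.append(Image.open(BytesIO(response.content)).convert("RGB"))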
unit2/smolagents/vision_web_browser.py
CHANGED
@@ -34,7 +34,7 @@ def parse_arguments():
|
|
34 |
"--model-type",
|
35 |
type=str,
|
36 |
default="LiteLLMModel",
|
37 |
-
help="The model type to use (e.g., OpenAIServerModel, LiteLLMModel, TransformersModel,
|
38 |
)
|
39 |
parser.add_argument(
|
40 |
"--model-id",
|
@@ -199,7 +199,7 @@ def main():
|
|
199 |
agent = initialize_agent(model)
|
200 |
|
201 |
# Run the agent with the provided prompt
|
202 |
-
agent.python_executor("from helium import *")
|
203 |
agent.run(args.prompt + helium_instructions)
|
204 |
|
205 |
|
|
|
34 |
"--model-type",
|
35 |
type=str,
|
36 |
default="LiteLLMModel",
|
37 |
+
help="The model type to use (e.g., OpenAIServerModel, LiteLLMModel, TransformersModel, HfApiModel)",
|
38 |
)
|
39 |
parser.add_argument(
|
40 |
"--model-id",
|
|
|
199 |
agent = initialize_agent(model)
|
200 |
|
201 |
# Run the agent with the provided prompt
|
202 |
+
agent.python_executor("from helium import *", agent.state)
|
203 |
agent.run(args.prompt + helium_instructions)
|
204 |
|
205 |
|
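For reference, the second hunk threads `agent.state` into the executor when pre-importing helium; a sketch of the updated call sequence (the rationale is an assumption, since the diff gives none):

# Passing agent.state presumably makes the pre-imported helium names visible
# in the namespace the agent's generated code runs in.
agent.python_executor("from helium import *", agent.state)
agent.run(prompt + helium_instructions)  # `prompt` stands in for args.prompt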