import streamlit as st
st.markdown("""
**Weather agent**
An example of a PydanticAI agent with `multiple tools` that the LLM must call in turn to answer a question.
""")
with st.expander("🎯 Objectives"):
    st.markdown("""
- Use an **OpenAI GPT-4o-mini** agent to `process natural language queries` about the weather.
- Fetch **geolocation** from a location string using the `Maps.co API`.
- Retrieve **real-time weather** using the Tomorrow.io API.
- Handle `retries`, `backoff`, and `logging` using **Logfire**.
- Integrate all parts in a clean, async-compatible **Streamlit UI**.
- Ensure `concise` and `structured` responses.
""")
with st.expander("🧰 Pre-requisites"):
    st.markdown("""
- Python 3.10+
- Streamlit
- AsyncClient (httpx)
- OpenAI `pydantic_ai` Agent
- Logfire for tracing/debugging
- Valid API Keys:
- [https://geocode.maps.co/](https://geocode.maps.co/)
- [https://www.tomorrow.io/](https://www.tomorrow.io/)
""")
    st.code("""
pip install streamlit httpx logfire pydantic_ai
""")
with st.expander("⚙️ Step-by-Step Setup"):
st.markdown("**Imports and Global Client**")
st.code("""
import os
import asyncio
import streamlit as st
from dataclasses import dataclass
from typing import Any
import logfire
from httpx import AsyncClient
from pydantic_ai import Agent, RunContext, ModelRetry
logfire.configure(send_to_logfire='if-token-present')
client = AsyncClient()
""")
st.markdown("**Declare Dependencies**")
st.code("""
@dataclass
class Deps:
client: AsyncClient # client is an instance of AsyncClient (from httpx).
weather_api_key: str | None
geo_api_key: str | None
""")
st.markdown("**Setup Weather Agent**")
st.code("""
weather_agent = Agent(
'openai:gpt-4o-mini',
system_prompt=(
'Be concise, reply with one sentence. '
'Use the `get_lat_lng` tool to get the latitude and longitude of the locations, '
'then use the `get_weather` tool to get the weather.'
),
deps_type= Deps,
retries = 2,
)
""")
st.markdown("**Define Geocoding Tool with Retry**")
st.code("""
@weather_agent.tool
async def get_lat_lng(ctx: RunContext[Deps],
location_description: str,
max_retries: int = 5,
base_delay: int = 2) -> dict[str, float]:
"Get the latitude and longitude of a location with retry handling for rate limits."
if ctx.deps.geo_api_key is None:
return {'lat': 51.1, 'lng': -0.1} # Default to London
# Sets up API request parameters.
params = {'q': location_description, 'api_key': ctx.deps.geo_api_key}
# Loops for a maximum number of retries.
for attempt in range(max_retries):
try:
# Logs API call span with parameters.
with logfire.span('calling geocode API', params=params) as span:
# Sends async GET request.
r = await ctx.deps.client.get('https://geocode.maps.co/search', params=params)
# Checks if API rate limit is exceeded.
if r.status_code == 429:
# Exponential backoff
wait_time = base_delay * (2 ** attempt)
# Waits before retrying.
await asyncio.sleep(wait_time)
# Continues to the next retry attempt.
continue
r.raise_for_status()
data = r.json()
span.set_attribute('response', data)
if data:
# Extracts and returns latitude & longitude.
return {'lat': float(data[0]['lat']), 'lng': float(data[0]['lon'])}
else:
# Raises an error if no valid data is found.
raise ModelRetry('Could not find the location')
except Exception as e: # Catches HTTP errors.
print(f"Request failed: {e}") # Logs the failure.
raise ModelRetry('Failed after multiple retries')
""")
st.markdown("**Define Weather Tool**")
st.code("""
@weather_agent.tool
async def get_weather(ctx: RunContext[Deps], lat: float, lng: float) -> dict[str, Any]:
if ctx.deps.weather_api_key is None:
return {'temperature': '21 °C', 'description': 'Sunny'}
params = {'apikey': ctx.deps.weather_api_key, 'location': f'{lat},{lng}', 'units': 'metric'}
r = await ctx.deps.client.get('https://api.tomorrow.io/v4/weather/realtime', params=params)
r.raise_for_status()
data = r.json()
values = data['data']['values']
code_lookup = {
1000: 'Clear, Sunny', 1001: 'Cloudy', 1100: 'Mostly Clear', 1101: 'Partly Cloudy',
1102: 'Mostly Cloudy', 2000: 'Fog', 2100: 'Light Fog', 4000: 'Drizzle', 4001: 'Rain',
4200: 'Light Rain', 4201: 'Heavy Rain', 5000: 'Snow', 5001: 'Flurries',
5100: 'Light Snow', 5101: 'Heavy Snow', 6000: 'Freezing Drizzle', 6001: 'Freezing Rain',
6200: 'Light Freezing Rain', 6201: 'Heavy Freezing Rain', 7000: 'Ice Pellets',
7101: 'Heavy Ice Pellets', 7102: 'Light Ice Pellets', 8000: 'Thunderstorm',
}
return {
'temperature': f'{values["temperatureApparent"]:0.0f}°C',
'description': code_lookup.get(values['weatherCode'], 'Unknown'),
}
""")
st.markdown("**Wrapper to Run the Agent**")
st.code("""
async def run_weather_agent(user_input: str):
deps = Deps(
client=client,
weather_api_key = os.getenv("TOMORROW_IO_API_KEY"),
geo_api_key = os.getenv("GEOCODE_API_KEY")
)
result = await weather_agent.run(user_input, deps=deps)
return result.data
""")
st.markdown("**Streamlit UI with Async Handling**")
st.code("""
st.set_page_config(page_title="Weather Application", page_icon="🚀")
if "weather_response" not in st.session_state:
st.session_state.weather_response = None
st.title("Weather Agent App")
user_input = st.text_area("Enter a sentence with locations:", "What is the weather like in Bangalore, Chennai and Delhi?")
if st.button("Get Weather"):
with st.spinner("Fetching weather..."):
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
response = loop.run_until_complete(run_weather_agent(user_input))
st.session_state.weather_response = response
if st.session_state.weather_response:
st.info(st.session_state.weather_response)
""")
with st.expander("Description of Each Step"):
st.markdown("""
- **Imports**: Brings in all required packages including `httpx`, `logfire`, and `streamlit`.
- **`Deps` Dataclass**: Encapsulates dependencies injected into the agent like the API keys and shared HTTP client.
- **Weather Agent**: Configures an OpenAI GPT-4o-mini agent with tools for geolocation and weather.
- **Tools**:
- `get_lat_lng`: Geocodes a location using the free Maps.co API. Implements retry with exponential backoff.
- `get_weather`: Fetches live weather info from Tomorrow.io using lat/lng.
- **Agent Runner**: Wraps the interaction to run asynchronously with injected dependencies.
- **Streamlit UI**: Captures user input, triggers agent execution, and displays response with `asyncio`.
""")
st.image("https://raw.githubusercontent.com/gridflowai/gridflowAI-datasets-icons/862001d5ac107780b38f96eca34cefcb98c7f3e3/AI-icons-images/get_weather_app.png",
caption="Agentic Weather App Flow",
use_column_width=True)
import os
import asyncio
import streamlit as st
from dataclasses import dataclass
from typing import Any
import logfire
from httpx import AsyncClient
from pydantic_ai import Agent, RunContext, ModelRetry
# Configure logfire
logfire.configure(send_to_logfire='if-token-present')
@dataclass
class Deps:
    client: AsyncClient
    weather_api_key: str | None
    geo_api_key: str | None
weather_agent = Agent(
    'openai:gpt-4o-mini',
    system_prompt=(
        'Be concise, reply with one sentence. '
        'Use the `get_lat_lng` tool to get the latitude and longitude of the locations, '
        'then use the `get_weather` tool to get the weather.'
    ),
    deps_type=Deps,
    retries=2,
)
# Create a single global AsyncClient instance
client = AsyncClient()
@weather_agent.tool
async def get_lat_lng(ctx: RunContext[Deps],
                      location_description: str,
                      max_retries: int = 5,
                      base_delay: int = 2) -> dict[str, float]:
    """Get the latitude and longitude of a location."""
    if ctx.deps.geo_api_key is None:
        return {'lat': 51.1, 'lng': -0.1}  # Default to London
    # Sets up API request parameters.
    params = {'q': location_description, 'api_key': ctx.deps.geo_api_key}
    # Loops for a maximum number of retries.
    for attempt in range(max_retries):
        try:
            # Logs API call span with parameters.
            with logfire.span('calling geocode API', params=params) as span:
                # Sends async GET request.
                r = await ctx.deps.client.get('https://geocode.maps.co/search', params=params)
                # Checks if API rate limit is exceeded.
                if r.status_code == 429:  # Too Many Requests
                    wait_time = base_delay * (2 ** attempt)  # Exponential backoff
                    print(f"Rate limited. Retrying in {wait_time} seconds...")
                    # Waits before retrying.
                    await asyncio.sleep(wait_time)
                    # Continues to the next retry attempt.
                    continue  # Retry the request
                # Raises an exception for HTTP errors.
                r.raise_for_status()
                # Parses the API response as JSON.
                data = r.json()
                # Logs the response data.
                span.set_attribute('response', data)
                if data:
                    # Extracts and returns latitude & longitude.
                    return {'lat': float(data[0]['lat']), 'lng': float(data[0]['lon'])}
                else:
                    # Raises an error if no valid data is found.
                    raise ModelRetry('Could not find the location')
        except Exception as e:  # Catches HTTP errors.
            print(f"Request failed: {e}")  # Logs the failure.
            raise ModelRetry('Failed after multiple retries')
    # All attempts were rate limited; tell the model the lookup failed.
    raise ModelRetry('Could not geocode the location (rate limited)')
@weather_agent.tool
async def get_weather(ctx: RunContext[Deps], lat: float, lng: float) -> dict[str, Any]:
    """Get the weather at a location."""
    if ctx.deps.weather_api_key is None:
        return {'temperature': '21 °C', 'description': 'Sunny'}
    params = {'apikey': ctx.deps.weather_api_key, 'location': f'{lat},{lng}', 'units': 'metric'}
    r = await ctx.deps.client.get('https://api.tomorrow.io/v4/weather/realtime', params=params)
    r.raise_for_status()
    data = r.json()
    values = data['data']['values']
    code_lookup = {
        1000: 'Clear, Sunny', 1001: 'Cloudy', 1100: 'Mostly Clear', 1101: 'Partly Cloudy',
        1102: 'Mostly Cloudy', 2000: 'Fog', 2100: 'Light Fog', 4000: 'Drizzle', 4001: 'Rain',
        4200: 'Light Rain', 4201: 'Heavy Rain', 5000: 'Snow', 5001: 'Flurries',
        5100: 'Light Snow', 5101: 'Heavy Snow', 6000: 'Freezing Drizzle', 6001: 'Freezing Rain',
        6200: 'Light Freezing Rain', 6201: 'Heavy Freezing Rain', 7000: 'Ice Pellets',
        7101: 'Heavy Ice Pellets', 7102: 'Light Ice Pellets', 8000: 'Thunderstorm',
    }
    return {
        'temperature': f'{values["temperatureApparent"]:0.0f}°C',
        'description': code_lookup.get(values['weatherCode'], 'Unknown'),
    }
async def run_weather_agent(user_input: str):
    deps = Deps(
        client=client,  # Use global client
        weather_api_key=os.getenv("TOMORROW_IO_API_KEY"),
        geo_api_key=os.getenv("GEOCODE_API_KEY")
    )
    result = await weather_agent.run(user_input, deps=deps)
    return result.data
# Initialize session state for storing weather responses
if "weather_response" not in st.session_state:
st.session_state.weather_response = None
# Set the page title
#st.set_page_config(page_title="Weather Application", page_icon="🚀")
# Streamlit UI
with st.expander(f"**Example prompts**"):
st.markdown(f"""
Prompt : If I were in Sydney today, would I need a jacket?
Bot : No, you likely wouldn't need a jacket as it's clear and sunny with a temperature of 22°C in Sydney.
Prompt : Tell me whether it's beach weather in Bali and Phuket.
Bot : Bali is too cold at 7°C and partly cloudy for beach weather, while Phuket is warm at 26°C with drizzle, making it more suitable for beach activities.
Prompt : If I had a meeting in Dubai, should I wear light clothing?
Bot : Yes, you should wear light clothing as the temperature in Dubai is currently 25°C and mostly clear.
Prompt : How does today’s temperature in Tokyo compare to the same time last week?
Bot : Today's temperature in Tokyo is 14°C, which is the same as the temperature at the same time last week.
Prompt : Is the current weather suitable for air travel in London and New York?
Bot : The current weather in London is 5°C and cloudy, and in New York, it is -0°C and clear; both conditions are generally suitable for air travel.
""")
user_input = st.text_area("Enter a sentence with locations:", "What is the weather like in Bangalore, Chennai and Delhi?")
# Button to trigger weather fetch
if st.button("Get Weather"):
    with st.spinner("Fetching weather..."):
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        response = loop.run_until_complete(run_weather_agent(user_input))
        st.session_state.weather_response = response
# Display stored response
if st.session_state.weather_response:
    st.info(st.session_state.weather_response)
with st.expander("🧠 How is this app Agentic?"):
st.markdown("""
###### ✅ How this App is Agentic
This weather app demonstrates **Agentic AI** because:
1. **Goal-Oriented Autonomy**
The user provides a natural language request (e.g., *“What’s the weather in Bangalore and Delhi?”*).
The agent autonomously figures out *how* to fulfill it.
2. **Tool Usage by the Agent**
The `Agent` uses two tools:
- `get_lat_lng()` – to fetch coordinates via a geocoding API.
- `get_weather()` – to get real-time weather for those coordinates.
The agent determines when and how to use these tools.
3. **Context + Dependency Injection**
The app uses the `Deps` dataclass to provide the agent with shared dependencies like HTTP clients and API keys—just like a human agent accessing internal tools.
4. **Retries and Adaptive Behavior**
The agent handles failures and retries via `ModelRetry`, showing resilience and smart retry logic.
5. **Structured Interactions via `RunContext`**
Each tool runs with access to structured context, enabling better coordination and reuse of shared state.
6. **LLM-Orchestrated Actions**
At the core, a GPT-4o-mini model orchestrates:
- Understanding the user intent,
- Selecting and invoking the right tools,
- Synthesizing the final response.
> 🧠 **In essence**: This is not just a chatbot, but an *autonomous reasoning engine* that uses real tools to complete real-world goals.
""")
with st.expander("🧪 Example Prompts: Handling Complex Queries"):
st.markdown("""
This app can understand **natural, varied, and multi-part prompts** thanks to the LLM-based agent at its core.
It intelligently uses `get_lat_lng()` and `get_weather()` tools based on user intent.
###### 🗣️ Complex Prompt Examples & Responses:
**Prompt:**
*If I were in Sydney today, would I need a jacket?*
**Response:**
*No, you likely wouldn't need a jacket as it's clear and sunny with a temperature of 22°C in Sydney.*
---
**Prompt:**
*Tell me whether it's beach weather in Bali and Phuket.*
**Response:**
*Bali is too cold at 7°C and partly cloudy for beach weather, while Phuket is warm at 26°C with drizzle, making it more suitable for beach activities.*
---
**Prompt:**
*If I had a meeting in Dubai, should I wear light clothing?*
**Response:**
*Yes, you should wear light clothing as the temperature in Dubai is currently 25°C and mostly clear.*
---
**Prompt:**
*How does today’s temperature in Tokyo compare to the same time last week?*
**Response:**
*Today's temperature in Tokyo is 14°C, which is the same as the temperature at the same time last week.*
*(Note: This would require historical API support to be accurate in a real app.)*
---
**Prompt:**
*Is the current weather suitable for air travel in London and New York?*
**Response:**
*The current weather in London is 5°C and cloudy, and in New York, it is -0°C and clear; both conditions are generally suitable for air travel.*
---
**Prompt:**
*Give me the weather update for all cities where cricket matches are happening today in India.*
**Response:**
*(This would involve external logic for identifying cricket venues, but the agent can handle the weather lookup part once cities are known.)*
---
###### 🧠 Why it Works:
- The **agent extracts all cities** from the prompt, even if mixed with unrelated text.
- It **chains tool calls**: First gets geolocation, then weather.
- The **final response is LLM-crafted** to match the tone and question format (yes/no, suggestion, comparison, etc.).
> ✅ You don’t need to ask "what's the weather in X" exactly — the agent infers it from how humans speak.
""")
with st.expander("🔍 Missing Agentic AI Capabilities & How to Improve"):
st.markdown("""
While the app exhibits several **agentic behaviors**—like tool use, intent recognition, and multi-step reasoning—it still lacks **some core features** found in *fully agentic systems*. Here's what’s missing:
###### ❌ Missing Facets & How to Add Them
**1. Autonomy & Proactive Behavior**
*Current:* The app only responds to user prompts.
*To Add:* Let the agent proactively ask follow-ups.
**Example:**
- User: *What's the weather in Italy?*
- Agent: *Italy has multiple cities. Would you like weather in Rome, Milan, or Venice?*
**2. Goal-Oriented Planning**
*Current:* Executes one tool or a fixed chain of tools.
*To Add:* Give it a higher-level goal and let it plan the steps.
**Example:**
- Prompt: *Help me plan a weekend trip to a warm place in Europe.*
- Agent: Finds warm cities, checks weather, compares, and recommends.
**3. Memory / Session Context**
*Current:* Stateless; each query is standalone.
*To Add:* Use LangGraph or crewAI memory modules, or pass the previous conversation back to the agent, to **remember past queries** or preferences (a minimal sketch follows this list).
**Example:**
- User: *What’s the weather in Delhi?*
- Then: *And how about tomorrow?* → Agent should know the context refers to Delhi.
**4. Delegation to Sub-Agents**
*Current:* Single-agent, monolithic logic.
*To Add:* Delegate tasks to specialized agents (geocoder agent, weather formatter agent, response stylist, etc.).
**Example:**
- Planner agent decides cities → Fetcher agent retrieves data → Explainer agent summarizes.
**5. Multi-Modal Input/Output**
*Current:* Only text.
*To Add:* Accept voice prompts or generate a weather infographic.
**Example:**
- Prompt: *Voice note saying "Is it rainy in London?"* → Returns image with rainy clouds and summary.
**6. Learning from Feedback**
*Current:* No learning or improvement from user input.
*To Add:* Allow thumbs up/down or feedback to tune responses.
**Example:**
- User: *That was not helpful.* → Agent: *Sorry! Want a more detailed report or city breakdown?*
---
###### ✅ Summary
This app **lays a strong foundation for Agentic AI**, but adding these elements would bring it closer to a **truly autonomous, context-aware, and planning-capable agent** that mimics human-level task execution.
""")