---
title: First Agent Template
emoji: 🤖
colorFrom: pink
colorTo: yellow
sdk: gradio
sdk_version: 5.15.0
app_file: app.py
pinned: false
tags:
  - smolagents
  - agent
  - smolagent
  - tool
  - agent-course
---
# smolagents Conversational Agent

A powerful conversational agent built with smolagents that can connect to various language models, perform web searches, create visualizations, execute code, and much more.
## 🚀 Overview
This project provides a flexible and powerful conversational agent that can:
- Connect to different types of language models (local or cloud-based)
- Perform web searches to retrieve up-to-date information
- Visit and extract content from webpages
- Execute shell commands with appropriate security measures
- Create and modify files
- Generate data visualizations based on natural language requests
- Execute Python code within the chat interface
The agent is available through two interfaces:
- A Gradio interface (original)
- A Streamlit interface (new) with enhanced features and configuration options
## 🛠️ Prerequisites
- Python 3.8+
- A language model, local or cloud-based (see the Setup section below for the available options)
## 📦 Installation

1. Clone this repository:

   ```bash
   git clone https://github.com/yourusername/smolagents-conversational-agent.git
   cd smolagents-conversational-agent
   ```

2. Install the required dependencies:

   ```bash
   pip install -r requirements.txt
   ```
## 🔧 Setup

### Setting Up a Language Model
You have several options for the language model:
#### Option 1: Local Model with LM Studio (recommended for beginners)

- Download and install LM Studio
- Launch LM Studio and download a model (e.g., Mistral 7B, Llama 2)
- Start the local server by clicking "Start Server"
- Note the server URL (typically `http://localhost:1234/v1`)
#### Option 2: Using OpenRouter
- Create an account on OpenRouter
- Get your API key from the dashboard
- Use the OpenRouter URL and your API key in the agent configuration
#### Option 3: Hugging Face API (no longer tested; use with caution)
- If you have access to Hugging Face API endpoints, you can use them directly
- Configure the URL and parameters in the agent interface
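Whichever option you choose, the agent ultimately talks to an OpenAI-compatible chat completions endpoint. The sketch below shows the shape of such a request against the LM Studio defaults mentioned above; `build_chat_request` is a hypothetical helper, not part of this project's code, and the model name is illustrative.

```python
import json
from typing import Tuple

def build_chat_request(base_url: str, model: str, user_message: str) -> Tuple[str, str]:
    """Build the URL and JSON body for an OpenAI-compatible chat request."""
    # The /chat/completions path follows the OpenAI Chat Completions API,
    # which LM Studio and OpenRouter both mirror.
    url = base_url.rstrip("/") + "/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": 0.7,
    })
    return url, body

url, body = build_chat_request("http://localhost:1234/v1", "mistral-7b", "Hello!")
print(url)  # http://localhost:1234/v1/chat/completions
```

Sending `body` as a POST request to `url` (with an `Authorization: Bearer <key>` header for cloud providers) is all the agent's model connector needs to do.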
## 💻 Usage

### Streamlit Interface (Recommended)

The Streamlit interface offers a more user-friendly experience with additional features:

1. Launch the Streamlit application:

   ```bash
   streamlit run streamlit_app.py
   ```

2. Access the interface in your web browser at `http://localhost:8501`
3. Configure your model in the sidebar:
   - Select the model type (OpenAI Server, Hugging Face API, or Hugging Face Cloud)
   - Enter the required configuration parameters
   - Click "Apply Configuration"
4. Start chatting with the agent in the main interface
### Gradio Interface

The original Gradio interface is still available:

1. Launch the Gradio application:

   ```bash
   python app.py
   ```

2. Access the interface in your web browser at the URL displayed in the terminal (typically `http://localhost:7860`)
## 🌟 Features

### Streamlit Interface Features
- **Interactive Chat Interface**: Engage in natural conversations with the agent
- **Multiple Model Support**:
  - OpenAI Server (LM Studio or other OpenAI-compatible servers)
  - Hugging Face API
  - Hugging Face Cloud
- **Real-time Agent Reasoning**: See the agent's thought process as it works on your request
- **Customizable Configuration**: Adjust model parameters without modifying code
- **Data Visualization**: Request and generate charts directly in the chat
- **Code Execution**: Run Python code generated by the agent within the chat interface
- **Timezone Display**: Check the current time in different time zones
- **Custom Icon**: Uses a custom `ico.webp` icon for the application and sidebar
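The timezone display boils down to converting the current UTC time into a named IANA zone. A minimal sketch, assuming Python 3.9+ for the standard-library `zoneinfo` module (`current_time_in` is an illustrative name, not this project's actual function):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library since Python 3.9

def current_time_in(zone_name: str) -> str:
    """Return the current time in the given IANA zone, formatted for display."""
    now_utc = datetime.now(timezone.utc)
    local = now_utc.astimezone(ZoneInfo(zone_name))
    return local.strftime("%Y-%m-%d %H:%M:%S %Z")

print(current_time_in("Asia/Tokyo"))
```

Asking the agent "What's the current time in Tokyo?" triggers this kind of lookup under the hood.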
### Agent Tools

The agent comes equipped with several powerful tools:

- **Web Search**: Search the web via DuckDuckGo to get up-to-date information
- **Webpage Visiting**: Visit and extract content from specific webpages
- **Shell Command Execution**: Run commands on your system (with appropriate security)
- **File Operations**: Create and modify files on your system
- **Data Visualization**: Generate charts and graphs based on your requests
- **Code Execution**: Run Python code within the chat interface
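One common way to give shell execution "appropriate security" is an allowlist: only commands whose executable appears in a fixed set are run. This is a hypothetical sketch of that pattern (POSIX-style; `run_guarded` and the allowlist contents are made up for illustration, not this project's actual implementation):

```python
import shlex
import subprocess

# Only these executables may be invoked; everything else is refused.
ALLOWED = {"echo", "ls", "pwd", "date"}

def run_guarded(command: str) -> str:
    """Run a shell command only if its executable is on the allowlist."""
    parts = shlex.split(command)
    if not parts or parts[0] not in ALLOWED:
        return f"Refused: '{parts[0] if parts else ''}' is not in the allowlist"
    # No shell=True: arguments are passed as a list, avoiding shell injection.
    result = subprocess.run(parts, capture_output=True, text=True, timeout=10)
    return result.stdout.strip() or result.stderr.strip()

print(run_guarded("echo hello"))   # hello
print(run_guarded("rm -rf /tmp"))  # Refused: 'rm' is not in the allowlist
```

Passing arguments as a list rather than a shell string is what prevents injection via quoting tricks; the allowlist then limits the blast radius of whatever the model asks for.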
## 🧩 Extending the Agent

### Adding Custom Tools
You can extend the agent with your own custom tools by modifying the `app.py` file:

```python
from smolagents import tool

@tool
def my_custom_tool(arg1: str, arg2: int) -> str:
    """Description of what the tool does

    Args:
        arg1: description of the first argument
        arg2: description of the second argument
    """
    # Your tool implementation
    return "Tool result"
```
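As a concrete illustration of the pattern, here is a small made-up tool. In `app.py` you would decorate it with smolagents' `@tool`; the decorator is omitted here so the sketch runs standalone, and `word_count` is purely an example name:

```python
def word_count(text: str, min_length: int = 1) -> str:
    """Count the words in a text that meet a minimum length.

    Args:
        text: the text to analyze
        min_length: only count words with at least this many characters
    """
    words = [w for w in text.split() if len(w) >= min_length]
    return f"{len(words)} words"

print(word_count("the quick brown fox", min_length=4))  # 2 words
```

Note that the type hints and the `Args:` section of the docstring are what the agent uses to understand when and how to call a tool, so keep them accurate.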
### Customizing Prompts

The agent's behavior can be customized by modifying the prompt templates in the `prompts.yaml` file.
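For orientation, a prompt-template file of this kind typically maps template names to text blocks. This is a hypothetical fragment only; the actual keys and wording are defined by the `prompts.yaml` shipped with this project:

```yaml
# Hypothetical fragment -- consult the project's prompts.yaml for the real keys.
system_prompt: |
  You are a helpful assistant with access to web search, file, and code tools.
  Always explain your reasoning before calling a tool.
```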
## 📊 Visualization Examples
The agent can generate visualizations based on natural language requests. Try asking:
- "Show me a line chart of temperature trends over the past year"
- "Create a bar chart of sales by region"
- "Display a scatter plot of age vs. income"
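Behind a request like "Create a bar chart of sales by region", the agent typically emits matplotlib code along these lines. A minimal sketch with made-up data, assuming matplotlib is installed (it is a common dependency for this kind of agent, but check `requirements.txt`); the filename is illustrative:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

regions = ["North", "South", "East", "West"]
sales = [120, 95, 150, 80]  # illustrative numbers only

fig, ax = plt.subplots()
ax.bar(regions, sales)
ax.set_xlabel("Region")
ax.set_ylabel("Sales")
ax.set_title("Sales by Region")
fig.savefig("sales_by_region.png")
```

In the Streamlit interface the resulting figure is displayed inline in the chat rather than saved to disk.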
## 🔍 Troubleshooting

- **Agent not responding**: Verify that your LLM server is running and accessible
- **Connection errors**: Check the URL and API key in your configuration
- **Slow responses**: Consider using a smaller or more efficient model
- **Missing dependencies**: Ensure all requirements are installed via `pip install -r requirements.txt`
## 📝 Examples
Here are some example queries you can try with the agent:
- "What's the current time in Tokyo?"
- "Can you summarize the latest news about AI?"
- "Create a Python function to sort a list of dictionaries by a specific key"
- "Explain how transformer models work in AI"
- "Show me a bar chart of population by continent"
- "Write a simple web scraper to extract headlines from a news website"
## 🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
For more information on Hugging Face Spaces configuration, visit https://huggingface.co/docs/hub/spaces-config-reference