KebabLover committed
Commit b11932d · 1 Parent(s): 47728dd

update app with readme and hide step func

Files changed (3):
  1. .gitignore +130 -0
  2. README.md +132 -65
  3. streamlit_app.py +191 -154
.gitignore ADDED
@@ -0,0 +1,130 @@
+ # Byte-compiled / optimized / DLL files
+ __pycache__/
+ *.py[cod]
+ *$py.class
+
+ # C extensions
+ *.so
+
+ # Distribution / packaging
+ .Python
+ build/
+ develop-eggs/
+ dist/
+ downloads/
+ eggs/
+ .eggs/
+ lib/
+ lib64/
+ parts/
+ sdist/
+ var/
+ wheels/
+ share/python-wheels/
+ *.egg-info/
+ .installed.cfg
+ *.egg
+
+ # PyInstaller
+ # Usually these files are written by a python script from a template
+ # before PyInstaller builds the exe, so as to inject date/other infos into it.
+ *.manifest
+ *.spec
+
+ # Installer logs
+ pip-log.txt
+ pip-delete-this-directory.txt
+
+ # Unit test / coverage reports
+ htmlcov/
+ .tox/
+ .nox/
+ .coverage
+ .coverage.*
+ .cache
+ nosetests.xml
+ coverage.xml
+ *.cover
+ *.py,cover
+ .hypothesis/
+
+ # Translations
+ *.mo
+ *.pot
+
+ # Django stuff:
+ *.log
+ local_settings.py
+ db.sqlite3
+ db.sqlite3-journal
+
+ # Flask stuff:
+ instance/
+ .webassets-cache
+
+ # Scrapy stuff:
+ .scrapy
+
+ # Sphinx documentation
+ docs/_build/
+
+ # PyBuilder
+ target/
+
+ # Jupyter Notebook
+ .ipynb_checkpoints
+
+ # IPython
+ profile_default/
+ ipython_config.py
+
+ # pyenv
+ .python-version
+
+ # pipenv
+ # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+ # However, in case you do not want to do that, uncomment the following line to ignore it.
+ # Pipfile.lock
+
+ # PEP 582; used by e.g. github.com/David-OConnor/pyflow
+ __pypackages__/
+
+ # Celery stuff
+ celerybeat-schedule
+ celerybeat.pid
+
+ # SageMath parsed files
+ *.sage.py
+
+ # Environments
+ .env
+ .venv
+ env/
+ venv/
+ ENV/
+ env.bak/
+ venv.bak/
+
+ # Spyder project settings
+ .spyderproject
+ .spyderworkspace
+
+ # Rope project settings
+ .ropeproject
+
+ # mkdocs documentation
+ /site
+
+ # mypy
+ .mypy_cache/
+ .dmypy.json
+ dmypy.json
+
+ # Pyre type checker
+ .pyre/
+
+ # pytype static type analyzer
+ .pytype/
+
+ # Cython debug symbols
+ cython_debug/
README.md CHANGED
@@ -15,113 +15,180 @@ tags:
    - agent-course
  ---
 
- # Simple Local Agent
-
- A simple conversational agent that uses SmoLAgents to connect to a language model, either through a local server (LMStudio) or through other APIs.
-
- ## Prerequisites
-
+ # SmoLAgents Conversational Agent
+
+ A powerful conversational agent built with SmoLAgents that can connect to various language models, perform web searches, create visualizations, execute code, and much more.
+
+ ## 📋 Overview
+
+ This project provides a flexible and powerful conversational agent that can:
+
+ - Connect to different types of language models (local or cloud-based)
+ - Perform web searches to retrieve up-to-date information
+ - Visit and extract content from webpages
+ - Execute shell commands with appropriate security measures
+ - Create and modify files
+ - Generate data visualizations based on natural language requests
+ - Execute Python code within the chat interface
+
+ The agent is available through two interfaces:
+ - A Gradio interface (original)
+ - A Streamlit interface (new) with enhanced features and configuration options
+
+ ## 🛠️ Prerequisites
 
   - Python 3.8+
- - A language model hosted locally or accessible through an API
-
- ## Installation
-
- 1. Install the required dependencies:
-
- ```bash
- pip install -r requirements.txt
- ```
-
- ## Usage
-
- ### Gradio Interface
-
- 1. Make sure your LLM server is running at the specified address.
-
- 2. Launch the agent with the Gradio interface:
-
- ```bash
- python app.py
- ```
-
- ### Streamlit Interface (New!)
-
- We have also added a Streamlit interface that offers more flexibility and more configuration options:
-
- 1. Launch the Streamlit application:
-
- ```bash
- streamlit run streamlit_app.py
- ```
-
- 2. Access the interface through your web browser (usually at http://localhost:8501).
-
- ### Streamlit Interface Features
-
- - **Interactive chat interface** for talking with the agent
- - **Choice between different model types**:
-   - OpenAI Server (LMStudio or another OpenAI-compatible server)
-   - Hugging Face API
-   - Hugging Face Cloud
- - **Customizable configuration** for each model type
- - **Real-time display** of the agent's reasoning
- - **Useful information** in the sidebar
-
- ## Configuration ⚙️
-
- ### Model configuration
-
- The Streamlit interface lets you configure the model easily without modifying the source code:
-
- - **OpenAI Server**: server URL, model ID, API key
- - **Hugging Face API**: model URL, maximum tokens, temperature
- - **Hugging Face Cloud**: endpoint URL, maximum tokens, temperature
-
- ### Tool configuration
-
- The agent is equipped with several powerful tools that let it interact with the outside world and perform various actions:
-
- #### Main built-in tools
-
- - **DuckDuckGoSearchTool**: lets the agent run web searches through DuckDuckGo to get up-to-date information on any topic.
- - **VisitWebpageTool**: lets the agent visit a specific webpage and extract its content for analysis.
- - **ShellCommandTool**: gives the agent the ability to run shell commands on the host system (with the appropriate security precautions).
- - **CreateFileTool**: lets the agent create new files on the system.
- - **ModifyFileTool**: lets the agent modify existing files.
- - **FinalAnswerTool**: provides a structured final answer to the user, summarizing the information found.
-
- #### Custom tools
-
- The agent also includes a few custom tools:
-
- - **get_current_realtime**: returns the current system time.
- - **get_current_time_in_timezone**: fetches the current local time in a specified time zone (for example, "Europe/Paris" or "America/New_York").
-
- #### Extensibility
-
- The agent's architecture is designed to be easily extensible. You can add your own custom tools by following the example template in the `app.py` file:
+ - A language model, which can be one of:
+   - A local model running through an OpenAI-compatible API server (like [LM Studio](https://lmstudio.ai/), [Ollama](https://ollama.ai/), etc.)
+   - A Hugging Face model accessible via API
+   - A cloud-based model with API access
+
+ ## 🚀 Installation
+
+ 1. Clone this repository:
+ ```bash
+ git clone https://github.com/yourusername/smolagents-conversational-agent.git
+ cd smolagents-conversational-agent
+ ```
+
+ 2. Install the required dependencies:
+ ```bash
+ pip install -r requirements.txt
+ ```
+
+ ## 🔧 Setup
+
+ ### Setting Up a Language Model
+
+ You have several options for the language model:
+
+ #### Option 1: Local Model with LM Studio (Recommended for beginners)
+
+ 1. Download and install [LM Studio](https://lmstudio.ai/)
+ 2. Launch LM Studio and download a model (e.g., Mistral 7B, Llama 2, etc.)
+ 3. Start the local server by clicking "Start Server"
+ 4. Note the server URL (typically http://localhost:1234/v1)
+
+ #### Option 2: Using OpenRouter
+
+ 1. Create an account on [OpenRouter](https://openrouter.ai/)
+ 2. Get your API key from the dashboard
+ 3. Use the OpenRouter URL and your API key in the agent configuration
+
+ #### Option 3: Hugging Face API (no longer tested; use with caution)
+
+ 1. If you have access to Hugging Face API endpoints, you can use them directly
+ 2. Configure the URL and parameters in the agent interface
+
+ ## 💻 Usage
+
+ ### Streamlit Interface (Recommended)
+
+ The Streamlit interface offers a more user-friendly experience with additional features:
+
+ 1. Launch the Streamlit application:
+ ```bash
+ streamlit run streamlit_app.py
+ ```
+
+ 2. Access the interface in your web browser at http://localhost:8501
+
+ 3. Configure your model in the sidebar:
+    - Select the model type (OpenAI Server, Hugging Face API, or Hugging Face Cloud)
+    - Enter the required configuration parameters
+    - Click "Apply Configuration"
+
+ 4. Start chatting with the agent in the main interface
+
+ ### Gradio Interface
+
+ The original Gradio interface is still available:
+
+ 1. Launch the Gradio application:
+ ```bash
+ python app.py
+ ```
+
+ 2. Access the interface in your web browser at the URL displayed in the terminal (typically http://localhost:7860)
+
+ ## 🌟 Features
+
+ ### Streamlit Interface Features
+
+ - **Interactive Chat Interface**: Engage in natural conversations with the agent
+ - **Multiple Model Support**:
+   - OpenAI Server (LM Studio or other OpenAI-compatible servers)
+   - Hugging Face API
+   - Hugging Face Cloud
+ - **Real-time Agent Reasoning**: See the agent's thought process as it works on your request
+ - **Customizable Configuration**: Adjust model parameters without modifying code
+ - **Data Visualization**: Request and generate charts directly in the chat
+ - **Code Execution**: Run Python code generated by the agent within the chat interface
+ - **Timezone Display**: Check current time in different time zones
+
+ ### Agent Tools
+
+ The agent comes equipped with several powerful tools:
+
+ - **Web Search**: Search the web via DuckDuckGo to get up-to-date information
+ - **Webpage Visiting**: Visit and extract content from specific webpages
+ - **Shell Command Execution**: Run commands on your system (with appropriate security)
+ - **File Operations**: Create and modify files on your system
+ - **Data Visualization**: Generate charts and graphs based on your requests
+ - **Code Execution**: Run Python code within the chat interface
+
+ ## 🧩 Extending the Agent
+
+ ### Adding Custom Tools
+
+ You can extend the agent with your own custom tools by modifying the `app.py` file:
 
  ```python
  @tool
  def my_custom_tool(arg1: str, arg2: int) -> str:
      """Description of what the tool does
      Args:
          arg1: description of the first argument
          arg2: description of the second argument
      """
      # Your tool implementation
      return "Tool result"
  ```
 
- ## Usage examples
-
- Here are some example questions you can ask the agent:
-
- - "What is the current time in Tokyo?"
- - "Can you give me a summary of the latest news about AI?"
- - "Create a file containing sample Python code for sorting a list"
- - "Explain to me how transformer technology works in AI"
-
+ ### Customizing Prompts
+
+ The agent's behavior can be customized by modifying the prompt templates in the `prompts.yaml` file.
+
+ ## 📊 Visualization Examples
+
+ The agent can generate visualizations based on natural language requests. Try asking:
+
+ - "Show me a line chart of temperature trends over the past year"
+ - "Create a bar chart of sales by region"
+ - "Display a scatter plot of age vs. income"
+
+ ## 🔍 Troubleshooting
+
+ - **Agent not responding**: Verify that your LLM server is running and accessible
+ - **Connection errors**: Check the URL and API key in your configuration
+ - **Slow responses**: Consider using a smaller or more efficient model
+ - **Missing dependencies**: Ensure all requirements are installed via `pip install -r requirements.txt`
+
+ ## 📚 Examples
+
+ Here are some example queries you can try with the agent:
+
+ - "What's the current time in Tokyo?"
+ - "Can you summarize the latest news about AI?"
+ - "Create a Python function to sort a list of dictionaries by a specific key"
+ - "Explain how transformer models work in AI"
+ - "Show me a bar chart of population by continent"
+ - "Write a simple web scraper to extract headlines from a news website"
+
+ ## 🤝 Contributing
+
+ Contributions are welcome! Please feel free to submit a Pull Request.
+
  ---
 
- *See the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference*
+ *For more information on Hugging Face Spaces configuration, visit https://huggingface.co/docs/hub/spaces-config-reference*
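The `@tool` template in the README above is deliberately generic. As a concrete illustration, the sketch below fills it in for the `get_current_time_in_timezone` tool that the previous README described, using the `smolagents` `@tool` decorator and `pytz` (both already used in this repository). The body is an assumption for illustration, not code from this commit:

```python
import datetime

import pytz
from smolagents import tool


@tool
def get_current_time_in_timezone(timezone: str) -> str:
    """Fetches the current local time in a specified timezone.

    Args:
        timezone: a valid IANA timezone name (e.g., "Europe/Paris", "America/New_York")
    """
    # pytz raises UnknownTimeZoneError for invalid names, which surfaces
    # as a tool error the model can react to
    tz = pytz.timezone(timezone)
    local_time = datetime.datetime.now(tz).strftime("%Y-%m-%d %H:%M:%S")
    return f"The current local time in {timezone} is {local_time}"
```

Registering such a tool is a one-line change: append it to the `tools=[...]` list passed to `CodeAgent`.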
streamlit_app.py CHANGED
@@ -1,3 +1,12 @@
+ # =============================================================================
+ # STREAMLIT APPLICATION FOR SMOLAGENTS CONVERSATIONAL AGENT
+ # =============================================================================
+ # This application provides a web interface for interacting with a SmoLAgents-based
+ # conversational agent. It supports multiple model backends, visualization capabilities,
+ # and a rich chat interface.
+ # =============================================================================
+
+ # Standard library imports
  import streamlit as st
  import os
  import sys
@@ -8,12 +17,15 @@ import pandas as pd
  import numpy as np
  from typing import List, Dict, Any, Optional, Union, Tuple
 
- # Add the current directory to the Python path so local modules can be imported
+ # Add current directory to Python path to import local modules
  sys.path.append(os.path.dirname(os.path.abspath(__file__)))
 
- # Import the components the agent needs
+ # SmoLAgents and related imports
  from smolagents import CodeAgent
  from smolagents.models import OpenAIServerModel, HfApiModel
+ from smolagents.memory import ToolCall
+
+ # Tool imports for agent capabilities
  from tools.final_answer import FinalAnswerTool
  from tools.validate_final_answer import ValidateFinalAnswer
  from tools.visit_webpage import VisitWebpageTool
@@ -21,13 +33,14 @@ from tools.web_search import DuckDuckGoSearchTool
  from tools.shell_tool import ShellCommandTool
  from tools.create_file_tool import CreateFileTool
  from tools.modify_file_tool import ModifyFileTool
+
+ # Telemetry imports (currently disabled)
  from phoenix.otel import register
  from openinference.instrumentation.smolagents import SmolagentsInstrumentor
- from smolagents.memory import ToolCall
  # register()
  # SmolagentsInstrumentor().instrument()
 
- # Import the visualization functions
+ # Visualization functionality imports
  from visualizations import (
      create_line_chart,
      create_bar_chart,
@@ -36,47 +49,60 @@ from visualizations import (
      generate_sample_data
  )
 
- # Streamlit page configuration
+ # Configure Streamlit page settings
  st.set_page_config(
      page_title="Agent Conversationnel SmoLAgents 🤖",
      page_icon="🤖",
-     layout="wide",
+     layout="wide",  # Use wide layout for better display of content
  )
 
  def initialize_agent(model_type="openai_server", model_config=None):
-     """Initialize the agent with the chosen tools and model
+     """Initialize the agent with the specified model and tools.
+
+     This function creates a SmoLAgents CodeAgent instance with the specified language model
+     and a set of tools that enable various capabilities like web search, file operations,
+     and shell command execution.
 
      Args:
-         model_type: Type of model to use ('openai_server', 'hf_api', etc.)
-         model_config: Model-specific configuration
+         model_type (str): Type of model to use. Options are:
+             - 'openai_server': For OpenAI-compatible API servers (like LMStudio or OpenRouter)
+             - 'hf_api': For Hugging Face API endpoints
+             - 'hf_cloud': For Hugging Face cloud endpoints
+         model_config (dict, optional): Configuration dictionary for the model.
+             If None, default configurations will be used.
+
+     Returns:
+         CodeAgent: Initialized agent instance, or None if model type is not supported
      """
 
-     # Configure the model according to the chosen type
+     # Configure the model based on the selected type
      if model_type == "openai_server":
-         # Default configuration for OpenAIServerModel
+         # Default configuration for OpenAIServerModel (OpenRouter in this case)
          if model_config is None:
              model_config = {
                  "api_base": "https://openrouter.ai/api/v1",
                  "model_id": "google/gemini-2.0-pro-exp-02-05:free",
-                 "api_key": "nop"
+                 "api_key": "nop"  # Replace with actual API key in production
              }
 
+         # Initialize OpenAI-compatible model
          model = OpenAIServerModel(
              api_base=model_config["api_base"],
              model_id=model_config["model_id"],
              api_key=model_config["api_key"],
-             max_tokens=12000
+             max_tokens=12000  # Maximum tokens for response generation
          )
 
      elif model_type == "hf_api":
-         # Default configuration for HfApiModel
+         # Default configuration for local Hugging Face API endpoint
          if model_config is None:
             model_config = {
-                 "model_id": "http://192.168.1.141:1234/v1",
+                 "model_id": "http://192.168.1.141:1234/v1",  # Local API endpoint
                  "max_new_tokens": 2096,
-                 "temperature": 0.5
+                 "temperature": 0.5  # Controls randomness (0.0 = deterministic, 1.0 = creative)
             }
 
+         # Initialize Hugging Face API model
          model = HfApiModel(
             model_id=model_config["model_id"],
             max_new_tokens=model_config["max_new_tokens"],
@@ -84,7 +110,7 @@ def initialize_agent(model_type="openai_server", model_config=None):
         )
 
     elif model_type == "hf_cloud":
-         # Configuration for HfApiModel with a cloud endpoint
+         # Default configuration for Hugging Face cloud endpoint
         if model_config is None:
             model_config = {
                 "model_id": "https://pflgm2locj2t89co.us-east-1.aws.endpoints.huggingface.cloud",
@@ -92,6 +118,7 @@ def initialize_agent(model_type="openai_server", model_config=None):
                 "temperature": 0.5
             }
 
+         # Initialize Hugging Face cloud model
         model = HfApiModel(
             model_id=model_config["model_id"],
             max_new_tokens=model_config["max_new_tokens"],
@@ -99,10 +126,11 @@ def initialize_agent(model_type="openai_server", model_config=None):
         )
 
     else:
+         # Handle unsupported model types
         st.error(f"Type de modèle non supporté: {model_type}")
         return None
 
-     # Load the prompt templates from the YAML file
+     # Load prompt templates from YAML file
     try:
         with open("prompts.yaml", 'r') as stream:
             prompt_templates = yaml.safe_load(stream)
@@ -111,67 +139,85 @@ def initialize_agent(model_type="openai_server", model_config=None):
         prompt_templates = None
 
 
-     # Create the agent with the same tools as in app.py
+     # Create the agent with tools and configuration
     agent = CodeAgent(
         model=model,
         tools=[
-             FinalAnswerTool(),
-             ValidateFinalAnswer(),
-             DuckDuckGoSearchTool(),
-             VisitWebpageTool(),
-             ShellCommandTool(),
-             CreateFileTool(),
-             ModifyFileTool()
+             # Core tools for agent functionality
+             FinalAnswerTool(),        # Provides final answers to user queries
+             ValidateFinalAnswer(),    # Validates final answers for quality
+             DuckDuckGoSearchTool(),   # Enables web search capabilities
+             VisitWebpageTool(),       # Allows visiting and extracting content from webpages
+             ShellCommandTool(),       # Enables execution of shell commands
+             CreateFileTool(),         # Allows creation of new files
+             ModifyFileTool()          # Enables modification of existing files
         ],
-         max_steps=20,
-         verbosity_level=1,
-         grammar=None,
-         planning_interval=None,
-         name=None,
-         description=None,
-         prompt_templates=prompt_templates,
+         max_steps=20,                 # Maximum number of reasoning steps
+         verbosity_level=1,            # Level of detail in agent's output
+         grammar=None,                 # Optional grammar for structured output
+         planning_interval=None,       # How often to re-plan (None = no explicit planning)
+         name=None,                    # Agent name
+         description=None,             # Agent description
+         prompt_templates=prompt_templates,  # Custom prompt templates
+         # Additional Python modules the agent is allowed to import in generated code
         additional_authorized_imports=["pandas", "numpy", "matplotlib", "seaborn", "plotly", "requests", "yaml"]
     )
 
     return agent
 
  def format_step_message(step, is_final=False):
-     """Format the agent's messages for display in Streamlit"""
+     """Format agent messages for display in Streamlit.
+
+     This function processes different types of agent step outputs (model outputs,
+     observations, errors) and formats them for display in the Streamlit interface.
+
+     Args:
+         step: The agent step object containing output information
+         is_final (bool): Whether this is the final answer step
+
+     Returns:
+         str: Formatted message ready for display
+     """
 
     if hasattr(step, "model_output") and step.model_output:
-         # Clean up and format the model output for display
+         # Format the model's output (the agent's thinking or response)
         content = step.model_output.strip()
         if not is_final:
             return content
         else:
+             # Add special formatting for final answers
             return f"**Réponse finale :** {content}"
 
     if hasattr(step, "observations") and step.observations:
-         # Display the tools' observations
+         # Format tool observations (results from tool executions)
         return f"**Observations :** {step.observations.strip()}"
 
     if hasattr(step, "error") and step.error:
-         # Display errors
-         return f"**Erreur nooo:** {step.error}"
+         # Format any errors that occurred during agent execution
+         return f"**Erreur :** {step.error}"
 
-     # Default case
+     # Default case - convert step to string
     return str(step)
 
  def process_visualization_request(user_input: str) -> Tuple[bool, Optional[st.delta_generator.DeltaGenerator]]:
     """
     Process a visualization request from the user.
 
+     This function detects if the user is requesting a data visualization,
+     generates appropriate sample data, and creates the requested chart.
+
     Args:
-         user_input: The user's input message.
+         user_input (str): The user's input message
 
     Returns:
-         A tuple containing:
-         - Boolean indicating if a visualization was processed
-         - The Streamlit delta generator if a visualization was created, None otherwise
+         Tuple[bool, Optional[st.delta_generator.DeltaGenerator]]:
+             - Boolean indicating if a visualization was processed
+             - The Streamlit container if a visualization was created, None otherwise
     """
-     # Detect if this is a visualization request
+     # Use NLP to detect if this is a visualization request and extract details
    viz_info = detect_visualization_request(user_input)
 
+     # If not a visualization request or chart type couldn't be determined, return early
    if not viz_info['is_visualization'] or not viz_info['chart_type']:
        return False, None
@@ -180,15 +226,15 @@ def process_visualization_request(user_input: str) -> Tuple[bool, Optional[st.de
    data_description = viz_info['data_description']
    parameters = viz_info['parameters']
 
-     # Generate sample data based on the description and chart type
+     # Generate appropriate sample data based on the description and chart type
    data = generate_sample_data(data_description, chart_type)
 
-     # Set default parameters if not provided
+     # Set default parameters if not provided by the user
    title = parameters.get('title', f"{chart_type.capitalize()} Chart" + (f" of {data_description}" if data_description else ""))
    x_label = parameters.get('x_label', data.columns[0] if len(data.columns) > 0 else "X-Axis")
    y_label = parameters.get('y_label', data.columns[1] if len(data.columns) > 1 else "Y-Axis")
 
-     # Create the appropriate chart
+     # Create the appropriate chart based on the requested type
    fig = None
    if chart_type == 'line':
        fig = create_line_chart(data, title=title, x_label=x_label, y_label=y_label)
@@ -197,6 +243,7 @@ def process_visualization_request(user_input: str) -> Tuple[bool, Optional[st.de
    elif chart_type == 'scatter':
        fig = create_scatter_plot(data, title=title, x_label=x_label, y_label=y_label)
 
+     # If a chart was successfully created, display it
    if fig:
        # Create a container for the visualization
        viz_container = st.container()
@@ -208,59 +255,93 @@ def process_visualization_request(user_input: str) -> Tuple[bool, Optional[st.de
    return False, None
 
  def process_user_input(agent, user_input):
-     """Process the user input with the agent and return the results step by step"""
+     """Process user input with the agent and return results step by step.
+
+     This function handles the execution of the agent with the user's input,
+     displays the agent's thinking process in real-time, and returns the final result.
+     It also handles visualization requests by integrating with the visualization system.
 
-     # Check if this is a visualization request
+     Args:
+         agent: The initialized SmoLAgents agent instance
+         user_input (str): The user's query or instruction
+
+     Returns:
+         tuple or None: A tuple containing the final answer and a boolean flag,
+             or None if an error occurred
+     """
+
+     # First check if this is a visualization request
    is_viz_request, viz_container = process_visualization_request(user_input)
 
-     # If it's a visualization request, we'll still run the agent but we've already displayed the chart
+     # Even for visualization requests, we still run the agent to provide context and explanation
 
-     # Check the connection to the LLM server
    try:
-         # Run the agent and capture the steps
+         # Show a spinner while the agent is thinking
        with st.spinner("L'agent réfléchit..."):
-             # Placeholder for the agent output
+             # Create a container for the agent's output
            response_container = st.container()
 
-             # Run the agent and capture the steps
+             # Initialize variables to track steps and final result
            steps = []
            final_step = None
 
+             # Display the agent's thinking process in real-time
            with response_container:
                step_container = st.empty()
                step_text = ""
 
-                 # Run the agent and capture the steps incrementally
+                 # Execute the agent and stream results incrementally
                for step in agent.run(user_input, stream=True):
                    steps.append(step)
 
-                     # Update the step display
+                     # Format the current step for display
                    step_number = f"Étape {step.step_number}" if hasattr(step, "step_number") and step.step_number is not None else ""
                    step_content = format_step_message(step)
 
-                     # Append to the step text
+                     # Build the cumulative step text
                    if step_number:
                        step_text += f"### {step_number}\n\n"
                    step_text += f"{step_content}\n\n---\n\n"
 
-                     # Update the display
-                     step_container.markdown(step_text)
+                     # Update the display with the latest step information
+                     # step_container.markdown(step_text)
 
-                     # Keep the last step for the final answer
+                     # Keep track of the final step for the response
                    final_step = step
 
-             # Display the final answer
+             # Process and return the final answer
            if final_step:
                final_answer = format_step_message(final_step, is_final=True)
 
-                 # If this was a visualization request, add a note about the visualization
+                 # If this was a visualization request, add a note about it
                if is_viz_request:
                    final_answer += "\n\n*Une visualisation a été générée en fonction de votre demande.*"
 
+                 # Return the final answer with a flag indicating success
                return (final_answer, True)
 
+             # If we somehow exit the loop without a final step
            return final_step
+
    except Exception as e:
+         # Handle any errors that occur during agent execution
        st.error(f"Erreur lors de l'exécution de l'agent: {str(e)}")
        return None
+
+ @st.fragment
+ def launch_app(code_to_launch):
+     """Execute code within a Streamlit fragment to prevent page reloads.
+
+     This function is decorated with @st.fragment to ensure that only this specific
+     part of the UI is updated when code is executed, without reloading the entire page.
+     This is particularly useful for executing code generated by the agent.
+
+     Args:
+         code_to_launch (str): Python code string to be executed
+     """
+     with st.container(border = True):
+         # Execute the code within a bordered container for visual separation
+         exec(code_to_launch)
+     return
 
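The new `launch_app` helper above is what replaces the removed inline `secure_imports`/`exec` block in `main()` (see the hunk further down): agent-generated code now runs inside a Streamlit fragment, so widget interactions triggered by that code rerun only the fragment instead of the whole chat page. Below is a minimal self-contained sketch of the same pattern; the `run_generated_code` name and the demo snippet are illustrative only, and `st.fragment` is assumed to be available (it is the stable name in recent Streamlit releases, formerly `st.experimental_fragment`):

```python
import streamlit as st

@st.fragment
def run_generated_code(snippet: str):
    # Reruns triggered inside this block stay scoped to the fragment,
    # leaving the surrounding chat history untouched.
    with st.container(border=True):
        # exec() runs arbitrary Python: only feed it code the agent
        # pipeline has produced and vetted.
        exec(snippet)

run_generated_code("import streamlit as st\nst.metric('Demo value', 42)")
```

As in the commit's version, the string still reaches `exec()` unfiltered, so this is only as safe as the code the agent is allowed to produce.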
 
@@ -267,4 +348,11 @@
  def main():
+     """Main application entry point.
+
+     This function sets up the Streamlit interface, initializes the agent,
+     manages the conversation history, and handles user interactions.
+     It's the central orchestrator of the application's functionality.
+     """
+     # Set up the main page title and welcome message
     st.title("Agent Conversationnel SmoLAgents 🤖")
 
     st.markdown("""
@@ -272,11 +360,11 @@ def main():
    Posez vos questions ci-dessous.
    """)
 
-     # Sidebar for model configuration
+     # Set up the sidebar for model configuration
    with st.sidebar:
        st.title("Configuration du Modèle")
 
-         # Select the model type
+         # Model type selection dropdown
        model_type = st.selectbox(
            "Type de modèle",
            ["openai_server", "hf_api", "hf_cloud"],
@@ -284,35 +372,41 @@
            help="Choisissez le type de modèle à utiliser avec l'agent"
        )
 
-         # Model-specific configuration depending on the selected type
+         # Initialize empty configuration dictionary
        model_config = {}
 
+         # Dynamic configuration UI based on selected model type
        if model_type == "openai_server":
            st.subheader("Configuration OpenAI Server")
+             # OpenAI-compatible server URL (OpenRouter, LMStudio, etc.)
            model_config["api_base"] = st.text_input(
                "URL du serveur",
                value="https://openrouter.ai/api/v1",
                help="Adresse du serveur OpenAI compatible"
            )
+             # Model ID to use with the server
            model_config["model_id"] = st.text_input(
                "ID du modèle",
                value="google/gemini-2.0-pro-exp-02-05:free",
                help="Identifiant du modèle local"
            )
+             # API key for authentication
            model_config["api_key"] = st.text_input(
                "Clé API",
-                 value="nop",
+                 value=os.getenv("OPEN_ROUTER_TOKEN") or "dummy",
                type="password",
                help="Clé API pour le serveur (dummy pour LMStudio)"
            )
 
        elif model_type == "hf_api":
            st.subheader("Configuration Hugging Face API")
+             # Hugging Face API endpoint URL
            model_config["model_id"] = st.text_input(
                "URL du modèle",
                value="http://192.168.1.141:1234/v1",
                help="URL du modèle ou endpoint"
            )
+             # Maximum tokens to generate in responses
            model_config["max_new_tokens"] = st.slider(
                "Tokens maximum",
                min_value=512,
@@ -320,6 +414,7 @@
                value=2096,
                help="Nombre maximum de tokens à générer"
            )
+             # Temperature controls randomness in generation
            model_config["temperature"] = st.slider(
                "Température",
                min_value=0.1,
@@ -331,11 +426,13 @@
 
        elif model_type == "hf_cloud":
            st.subheader("Configuration Hugging Face Cloud")
+             # Hugging Face cloud endpoint URL
            model_config["model_id"] = st.text_input(
                "URL du endpoint cloud",
                value="https://pflgm2locj2t89co.us-east-1.aws.endpoints.huggingface.cloud",
                help="URL de l'endpoint cloud Hugging Face"
            )
+             # Maximum tokens to generate in responses
            model_config["max_new_tokens"] = st.slider(
                "Tokens maximum",
                min_value=512,
@@ -343,6 +440,7 @@
                value=2096,
                help="Nombre maximum de tokens à générer"
            )
+             # Temperature controls randomness in generation
            model_config["temperature"] = st.slider(
                "Température",
                min_value=0.1,
@@ -352,16 +450,18 @@
                help="Température pour la génération (plus élevée = plus créatif)"
            )
 
-         # Button to reinitialize the agent with the new configuration
+         # Button to apply configuration changes and reinitialize the agent
        if st.button("Appliquer la configuration"):
            with st.spinner("Initialisation de l'agent avec le nouveau modèle..."):
                st.session_state.agent = initialize_agent(model_type, model_config)
                st.success("✅ Configuration appliquée avec succès!")
 
-         # Check the connection to the server
+         # Check server connection for OpenAI server type
        if model_type == "openai_server":
+             # Extract base URL for health check
            llm_api_url = model_config["api_base"].split("/v1")[0]
            try:
+                 # Attempt to connect to the server's health endpoint
                import requests
                response = requests.get(f"{llm_api_url}/health", timeout=2)
                if response.status_code == 200:
@@ -371,121 +471,56 @@ def main():
            except Exception:
                st.error("❌ Impossible de se connecter au serveur LLM. Vérifiez que le serveur est en cours d'exécution à l'adresse spécifiée.")
 
-     # Initialize the agent if it hasn't been done already
+     # Initialize the agent if not already in session state
    if "agent" not in st.session_state:
        with st.spinner("Initialisation de l'agent..."):
            st.session_state.agent = initialize_agent(model_type, model_config)
 
-     # Initialize the conversation history
+     # Initialize conversation history if not already in session state
    if "messages" not in st.session_state:
        st.session_state.messages = [
            {"role": "assistant", "content": "Bonjour! Comment puis-je vous aider aujourd'hui?"}
        ]
 
-     # Display the message history
+     # Display conversation history
    for message in st.session_state.messages:
        with st.chat_message(message["role"]):
            st.markdown(message["content"])
 
     # User input area
    if prompt := st.chat_input("Posez votre question..."):
-         # Add the user's question to the history
+         # Add user question to conversation history
        st.session_state.messages.append({"role": "user", "content": prompt})
 
-         # Display the user's question
+         # Display user question
        with st.chat_message("user"):
            st.markdown(prompt)
 
-         # Process the request with the agent
+         # Process user input with the agent and display response
        with st.chat_message("assistant"):
+             # Get response from agent
            response = process_user_input(st.session_state.agent, prompt)
-             if response is not None and response[1] == True:
-                 with st.container(border = True):
-                     def secure_imports(code_str):
-                         """
-                         Process Python code to replace import statements with exec-wrapped versions.
-
-                         Args:
-                             code_str (str): The Python code string to process
-
-                         Returns:
-                             str: The processed code with import statements wrapped in exec()
-                         """
-                         import re
-
-                         # Define regex patterns for both import styles
-                         # Pattern for 'import module' and 'import module as alias'
-                         import_pattern = r'^(\s*)import\s+([^\n]+)'
-
-                         # Pattern for 'from module import something'
-                         from_import_pattern = r'^(\s*)from\s+([^\n]+)\s+import\s+([^\n]+)'
-
-                         lines = code_str.split('\n')
-                         result_lines = []
-
-                         i = 0
-                         while i < len(lines):
-                             line = lines[i]
-
-                             # Check for multiline imports with parentheses
-                             if re.search(r'import\s+\(', line) or re.search(r'from\s+.+\s+import\s+\(', line):
-                                 # Collect all lines until closing parenthesis
-                                 start_line = i
-                                 multiline_import = [line]
-                                 i += 1
-
-                                 while i < len(lines) and ')' not in lines[i]:
-                                     multiline_import.append(lines[i])
-                                     i += 1
-
-                                 if i < len(lines):  # Add the closing line with parenthesis
-                                     multiline_import.append(lines[i])
-
-                                 # Join the multiline import and wrap it with exec
-                                 indentation = re.match(r'^(\s*)', multiline_import[0]).group(1)
-                                 multiline_str = '\n'.join(multiline_import)
-                                 result_lines.append(f'{indentation}exec("""\n{multiline_str}\n""")')
-
-                             else:
-                                 # Handle single line imports
-                                 import_match = re.match(import_pattern, line)
-                                 from_import_match = re.match(from_import_pattern, line)
-
-                                 if import_match:
-                                     indentation = import_match.group(1)
-                                     import_stmt = line[len(indentation):]  # Remove indentation from statement
-                                     result_lines.append(f'{indentation}exec("{import_stmt}")')
-
-                                 elif from_import_match:
-                                     indentation = from_import_match.group(1)
-                                     from_import_stmt = line[len(indentation):]  # Remove indentation from statement
-                                     result_lines.append(f'{indentation}exec("{from_import_stmt}")')
-
-                                 else:
-                                     # Not an import statement, keep as is
-                                     result_lines.append(line)
-
-                             i += 1
-
-                         return '\n'.join(result_lines)
-
-                     # Process response[0] to secure import statements
-                     # processed_response = secure_imports(response[0])
-                     # eval(processed_response)
-                     exec(response[0])
+
+             # If response contains executable code, run it in a fragment
+             if response is not None and response[1] == True:
+                 launch_app(response[0])
+
+             # Add agent's response to conversation history
            if response and hasattr(response, "model_output"):
-                 # Add the response to the history
                st.session_state.messages.append({"role": "assistant", "content": response.model_output})
 
-     # Button to clear the history
+     # Button to clear conversation history and start a new chat
    if st.sidebar.button("Nouvelle conversation"):
+         # Reset conversation to initial greeting
        st.session_state.messages = [
            {"role": "assistant", "content": "Bonjour! Comment puis-je vous aider aujourd'hui?"}
        ]
+         # Reload the page to reset the UI
        st.rerun()
 
-     # Display additional information in the sidebar
+     # Additional information and features in the sidebar
    with st.sidebar:
+         # About section with information about the agent
        st.title("À propos de cet agent")
        st.markdown("""
        Cet agent utilise SmoLAgents pour se connecter à un modèle de langage hébergé localement.
@@ -505,7 +540,7 @@ def main():
        - Assurez-vous que toutes les dépendances sont installées via `pip install -r requirements.txt`.
        """)
 
-         # Visualizations section
+         # Visualization examples section
        st.subheader("Visualisations")
        st.markdown("""
        Vous pouvez demander des visualisations en utilisant des phrases comme:
@@ -516,13 +551,15 @@ def main():
        L'agent détectera automatiquement votre demande et générera une visualisation appropriée.
        """)
 
-         # Display the current time in different time zones
+         # Current time display in different timezones
        st.subheader("Heure actuelle")
+         # Timezone selection dropdown
        selected_timezone = st.selectbox(
            "Choisissez un fuseau horaire",
            ["Europe/Paris", "America/New_York", "Asia/Tokyo", "Australia/Sydney"]
        )
 
+         # Get and display current time in selected timezone
        tz = pytz.timezone(selected_timezone)
        local_time = datetime.datetime.now(tz).strftime("%Y-%m-%d %H:%M:%S")
        st.write(f"L'heure actuelle à {selected_timezone} est: {local_time}")
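The wiring that `initialize_agent` performs can also be exercised outside Streamlit, which is handy for debugging a model backend. Here is a minimal headless sketch using the same `smolagents` calls as the diff above; the LM Studio URL and the placeholder model id are assumptions, and any OpenAI-compatible server should work:

```python
from smolagents import CodeAgent
from smolagents.models import OpenAIServerModel

# Mirrors what initialize_agent() builds for model_type="openai_server"
model = OpenAIServerModel(
    api_base="http://localhost:1234/v1",  # LM Studio's default local server URL (assumption)
    model_id="local-model",               # placeholder: use the id your server actually reports
    api_key="dummy",                      # LM Studio ignores the key; hosted APIs need a real one
    max_tokens=12000,
)

agent = CodeAgent(model=model, tools=[], max_steps=5)

# stream=True yields intermediate steps one at a time; this is the same
# loop that process_user_input() renders incrementally in the chat UI.
for step in agent.run("What is 2 + 2?", stream=True):
    print(step)
```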