mriusero committed
Commit · a9effa1 · 1 Parent(s): f5f591a
feat: agent (1st draw)
- .gitignore +1 -1
- prompt.md +62 -0
- requirements.txt +6 -1
- src/agent/__init__.py +0 -0
- src/agent/inference.py +162 -0
- src/agent/tools/__init__.py +1 -0
- src/agent/tools/calculator.py +19 -0
- src/agent/utils/__init__.py +0 -0
- src/agent/utils/tooling.py +85 -0
- src/chat.py +0 -32
- src/ui/sidebar.py +46 -4
- tools.json +21 -0
.gitignore
CHANGED
@@ -1,4 +1,4 @@
 .DS_Store
 .idea/
-.
+.env
 __pycache__/
prompt.md
ADDED
@@ -0,0 +1,62 @@
+You are a general AI assistant equipped with various tools to enhance your problem-solving capabilities. Your task is to answer questions by following a structured chain-of-thought process and utilizing appropriate tools when necessary. Adhere to the following guidelines strictly:
+
+### Initial Understanding
+Begin by acknowledging the question and briefly restating it in your own words to ensure understanding.
+
+### Step-by-Step Reasoning
+Report your thoughts and reasoning process step by step. Each step should logically follow from the previous one. Use the template below for each step in your reasoning process:
+
+#### THOUGHT STEP [X]:
+- **Explanation**: [Provide a detailed explanation of your thought process here.]
+- **Evidence/Assumptions**: [List any evidence, data, or assumptions you are using to support this step.]
+- **Intermediate Conclusion**: [State any intermediate conclusions or insights derived from this step.]
+- **Tool Calling**: [If applicable, mention any tools you plan to use to gather more information or perform specific tasks.]
+
+#### TOOL CALLING:
+1. **Tool Identification**: Identify the tool you need to use and the specific function within that tool.
+2. **Tool Execution**: Execute the tool function with the appropriate parameters.
+3. **Result Handling**: Handle the results from the tool execution. If the tool execution fails, note the error and consider alternative approaches.
+
+#### THOUGHT STEP [X]:
+- **Explanation**: [Provide a detailed explanation of your thought process here, incorporating the results from the tool.]
+- **Evidence/Assumptions**: [List any new evidence, data, or assumptions you are using to support this step.]
+- **Intermediate Conclusion**: [State any new intermediate conclusions or insights derived from this step.]
+
+### Verification
+After presenting your step-by-step reasoning and tool utilization, verify the logical consistency and coherence of your thoughts. Ensure that each step logically leads to the next and that there are no gaps in your reasoning.
+If you find any inconsistencies or gaps, revisit the relevant steps and adjust your reasoning accordingly.
+However, if everything is consistent, summarize your findings and conclusions in the final answer section.
+
+### FINAL ANSWER
+Conclude with your final answer, clearly stated and directly addressing the original question. Use the template below for your final answer:
+[Provide a brief summary of your reasoning process and any tools used, then state your final answer clearly and concisely here.]
+
+---
+
+### Example:
+
+**Question**: What is the weather like in Paris today?
+
+#### THOUGHT STEP 1:
+- **Explanation**: I need to find out the current weather conditions in Paris.
+- **Evidence/Assumptions**: I assume that the user is asking for real-time weather information.
+- **Intermediate Conclusion**: I need to use a weather API or a reliable weather website to get the latest information.
+- **Tool Calling**: I will use the `get_weather` tool to retrieve the current weather data for Paris.
+
+#### TOOL CALLING:
+1. **Tool Identification**: Identify the `get_weather` tool and the specific function to retrieve weather data.
+2. **Tool Execution**: Execute the `get_weather` function with the parameter set to "Paris".
+3. **Result Handling**: The `get_weather` tool returns the current weather data for Paris.
+
+#### THOUGHT STEP 2:
+- **Explanation**: I have retrieved the weather data using the `get_weather` tool.
+- **Evidence/Assumptions**: The data provided by the tool is accurate and up-to-date.
+- **Intermediate Conclusion**: The current weather in Paris is sunny with a temperature of 22°C.
+
+#### Verification:
+- **Explanation**: The steps logically follow from the need to gather real-time data, and the tool used provides accurate information.
+- **Evidence/Assumptions**: The weather data is consistent with typical weather patterns for this time of year in Paris.
+- **Intermediate Conclusion**: The information retrieved is reliable and can be used to answer the user's question.
+
+#### FINAL ANSWER:
+Based on the data retrieved from the `get_weather` tool, the current weather in Paris is sunny with a temperature of 22°C.
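
The "FINAL ANSWER" marker above is a machine-readable contract: the agent loop in src/agent/inference.py (below) stops iterating once it appears. A minimal sketch of that parsing convention, not part of the commit, with a hypothetical helper name:

def extract_final_answer(content: str) -> str | None:
    # The marker must match the string the prompt instructs the model to emit.
    marker = "FINAL ANSWER:"
    if marker in content:
        return content.split(marker, 1)[1].strip()
    return None  # the model is still reasoning

print(extract_final_answer("...\nFINAL ANSWER: sunny, 22°C"))  # sunny, 22°C
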
requirements.txt
CHANGED
@@ -1,6 +1,11 @@
 huggingface_hub
 gradio==5.33.0
+gradio[mcp]
+requests
+pydantic
 numpy
 pandas
 scipy
-plotly
+plotly
+python-dotenv
+mistralai
src/agent/__init__.py
ADDED
File without changes
src/agent/inference.py
ADDED
@@ -0,0 +1,162 @@
+import os
+import json
+import time
+from dotenv import load_dotenv
+from mistralai import Mistral
+
+from src.agent.utils.tooling import generate_tools_json
+from src.agent.tools import (
+    calculate_sum,
+)
+
+load_dotenv()
+
+class MistralAgent:
+    def __init__(self):
+        self.api_key = os.getenv("MISTRAL_API_KEY")
+        self.agent_id = os.getenv("AGENT_ID")
+        self.client = Mistral(api_key=self.api_key)
+        self.model = "mistral-small"
+        self.prompt = None
+        self.names_to_functions = {
+            "calculate_sum": calculate_sum,
+        }
+        self.tools = self.get_tools()
+
+    @staticmethod
+    def get_tools():
+        """Generate the tools.json file with the tools to be used by the agent."""
+        return generate_tools_json(
+            [
+                calculate_sum,
+            ]
+        ).get('tools')
+
+    def make_initial_request(self, input):
+        """Make the initial request to the agent with the given input."""
+        with open("./prompt.md", 'r', encoding='utf-8') as file:
+            self.prompt = file.read()
+        messages = [
+            {"role": "system", "content": self.prompt},
+            {"role": "user", "content": input},
+            {
+                # Prefixed assistant message: the model continues from this
+                # text instead of starting a fresh reply.
+                "role": "assistant",
+                "content": "THINKING:\nLet's tackle this problem, ",
+                "prefix": True,
+            },
+        ]
+        payload = {
+            "agent_id": self.agent_id,
+            "messages": messages,
+            "max_tokens": None,
+            "stream": False,
+            "stop": None,
+            "random_seed": None,
+            "response_format": None,
+            "tools": self.tools,
+            "tool_choice": 'auto',
+            "presence_penalty": 0,
+            "frequency_penalty": 0,
+            "n": 1,
+            "prediction": None,
+            "parallel_tool_calls": None
+        }
+        return self.client.agents.complete(**payload), messages
+
+    def run(self, input):
+        """Run the agent with the given input and process the response."""
+        print("\n===== Asking the agent =====\n")
+        response, messages = self.make_initial_request(input)
+        first_iteration = True
+
+        while True:
+            time.sleep(1)
+            if hasattr(response, 'choices') and response.choices:
+                choice = response.choices[0]
+
+                if first_iteration:
+                    # Replace the seed prefix with the model's own first thoughts.
+                    messages = [message for message in messages if not message.get("prefix")]
+                    messages.append(
+                        {
+                            "role": "assistant",
+                            "content": choice.message.content,
+                            "prefix": True,
+                        },
+                    )
+                    first_iteration = False
+                else:
+                    if choice.message.tool_calls:
+                        results = []
+
+                        # Execute every requested tool, keeping each call's
+                        # arguments alongside its result for the follow-up messages.
+                        for tool_call in choice.message.tool_calls:
+                            function_name = tool_call.function.name
+                            function_params = json.loads(tool_call.function.arguments)
+
+                            try:
+                                function_result = self.names_to_functions[function_name](**function_params)
+                                results.append((tool_call.id, function_name, function_params, function_result))
+
+                            except Exception as e:
+                                print(f"Tool '{function_name}' raised: {e}")
+                                results.append((tool_call.id, function_name, function_params, None))
+
+                        for tool_call_id, function_name, function_params, function_result in results:
+                            messages.append({
+                                "role": "assistant",
+                                "tool_calls": [
+                                    {
+                                        "id": tool_call_id,
+                                        "type": "function",
+                                        "function": {
+                                            "name": function_name,
+                                            "arguments": json.dumps(function_params),
+                                        }
+                                    }
+                                ]
+                            })
+                            messages.append(
+                                {
+                                    "role": "tool",
+                                    "content": function_result if function_result is not None else f"Error occurred: {function_name} failed to execute",
+                                    "tool_call_id": tool_call_id,
+                                },
+                            )
+                        for message in messages:
+                            if "prefix" in message:
+                                del message["prefix"]
+                        messages.append(
+                            {
+                                "role": "assistant",
+                                "content": "Based on the results, ",
+                                "prefix": True,
+                            }
+                        )
+                    else:
+                        for message in messages:
+                            if "prefix" in message:
+                                del message["prefix"]
+                        messages.append(
+                            {
+                                "role": "assistant",
+                                "content": choice.message.content,
+                            }
+                        )
+                        if 'FINAL ANSWER:' in choice.message.content:
+                            print("\n===== END OF REQUEST =====\n", json.dumps(messages, indent=2))
+                            ans = choice.message.content.split('FINAL ANSWER:')[1].strip()
+
+                            timestamp = time.strftime("%Y%m%d-%H%M%S")
+                            output_file = f"chat_{timestamp}.json"
+                            with open(output_file, "w", encoding="utf-8") as f:
+                                json.dump(messages, f, indent=2, ensure_ascii=False)
+                            print(f"Conversation saved to {output_file}")
+
+                            return ans
+
+            print("\n===== MESSAGES BEFORE API CALL =====\n", json.dumps(messages, indent=2))
+            time.sleep(1)
+            response = self.client.agents.complete(
+                agent_id=self.agent_id,
+                messages=messages,
+                tools=self.tools,
+                tool_choice='auto',
+            )
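
A hypothetical usage sketch for the class above, not part of the commit: it assumes MISTRAL_API_KEY and AGENT_ID are set in the environment (or in the .env file that .gitignore now excludes) and that prompt.md sits in the working directory.

from src.agent.inference import MistralAgent

agent = MistralAgent()
answer = agent.run("What is the sum of 12.5, 7.5 and 30?")
print(answer)  # the text following the 'FINAL ANSWER:' marker
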
src/agent/tools/__init__.py
ADDED
@@ -0,0 +1 @@
+from .calculator import calculate_sum
src/agent/tools/calculator.py
ADDED
@@ -0,0 +1,19 @@
+from src.agent.utils.tooling import tool
+
+@tool
+def calculate_sum(numbers: list[float]) -> str | None:
+    """
+    Calculates the sum of a list of numbers.
+    WARNING: make sure the input list matches the numbers in the question before calling this tool.
+    Args:
+        numbers (list[float]): A list of numbers to be summed from the question.
+    Returns:
+        str: A sentence stating the sum of the numbers, or None on failure.
+    """
+    try:
+        total = sum(numbers)
+        return f"The sum of the numbers is: {total:.2f}"
+    except Exception as e:
+        print(f"Error calculating sum: {e}")
+        return None
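
Since the @tool decorator wraps the function without changing its behaviour, the tool can be smoke-tested outside the agent loop; a quick sketch, not part of the commit:

from src.agent.tools.calculator import calculate_sum

print(calculate_sum([1.5, 2.5, 6.0]))  # The sum of the numbers is: 10.00
print(calculate_sum("oops"))           # prints the error and returns None
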
src/agent/utils/__init__.py
ADDED
File without changes
src/agent/utils/tooling.py
ADDED
@@ -0,0 +1,85 @@
+from functools import wraps
+import json
+import inspect
+import re
+
+def generate_tools_json(functions):
+    """
+    Generates a JSON representation of the tools based on the provided functions.
+    """
+    tools = []
+    for func in functions:
+        tools.append(func.tool_info)
+
+    with open('tools.json', 'w') as file:
+        json.dump(tools, file, indent=4)
+
+    return {
+        "tools": tools
+    }
+
+def tool(func):
+    """
+    Decorator to get function information for tool generation.
+    Args:
+        func: The function to be decorated.
+    """
+    @wraps(func)
+    def wrapper(*args, **kwargs):
+        return func(*args, **kwargs)
+
+    description = func.__doc__.strip() if func.__doc__ else "No description available."
+
+    sig = inspect.signature(func)
+    parameters = {}
+    required = []
+
+    for param_name, param in sig.parameters.items():
+        # Map Python annotations to JSON Schema type names.
+        param_type = param.annotation
+        if param_type == param.empty:
+            param_type = "string"
+        else:
+            param_type = param_type.__name__
+            if param_type == 'str':
+                param_type = 'string'
+            elif param_type == 'int':
+                param_type = 'integer'
+            elif param_type == 'list':
+                param_type = 'array'
+            elif param_type == 'float':
+                param_type = 'number'
+            elif param_type == 'bool':
+                param_type = 'boolean'
+            elif param_type == 'dict':
+                param_type = 'object'
+
+        param_description = f"The {param_name}."
+
+        # Pull the parameter description from a Google-style docstring line
+        # such as "numbers (list[float]): A list of numbers ...".
+        docstring_lines = description.split('\n')
+        for line in docstring_lines:
+            match = re.match(rf"\s*{re.escape(param_name)}\s*\(.*\):\s*(.*)", line)
+            if match:
+                param_description = match.group(1).strip()
+                break
+
+        parameters[param_name] = {
+            "type": param_type,
+            "description": param_description,
+        }
+
+        if param.default == param.empty:
+            required.append(param_name)
+
+    wrapper.tool_info = {
+        "type": "function",
+        "function": {
+            "name": func.__name__,
+            "description": description.split('\n')[0],
+            "parameters": {
+                "type": "object",
+                "properties": parameters,
+                "required": required,
+            },
+        },
+    }
+    return wrapper
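
To see what the decorator produces, inspect the tool_info attribute it attaches; with the calculate_sum signature above this reproduces the tools.json entry added at the end of this commit. A sketch, not part of the commit:

import json
from src.agent.tools.calculator import calculate_sum

print(json.dumps(calculate_sum.tool_info, indent=2))
# {"type": "function", "function": {"name": "calculate_sum", ...}}
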
src/chat.py
DELETED
@@ -1,32 +0,0 @@
-from huggingface_hub import InferenceClient
-
-client = InferenceClient("HuggingFaceH4/zephyr-7b-beta")
-
-def respond(message, history):
-    system_message = "Your default system message"
-    max_tokens = 150
-    temperature = 0.7
-    top_p = 0.9
-
-    messages = [{"role": "system", "content": system_message}]
-
-    for val in history:
-        if val[0]:
-            messages.append({"role": "user", "content": val[0]})
-        if val[1]:
-            messages.append({"role": "assistant", "content": val[1]})
-
-    messages.append({"role": "user", "content": message})
-
-    response = ""
-
-    for message in client.chat_completion(
-        messages,
-        max_tokens=max_tokens,
-        stream=True,
-        temperature=temperature,
-        top_p=top_p,
-    ):
-        token = message.choices[0].delta.content
-        response += token
-        yield response
src/ui/sidebar.py
CHANGED
@@ -1,6 +1,10 @@
 import gradio as gr
 
-from src.
+from src.agent.inference import MistralAgent
+
+def respond(gr_message, history=None):
+    agent = MistralAgent()
+    yield agent.run(gr_message)
 
 
 def sidebar_ui(state, width=700, visible=True):
@@ -18,11 +22,49 @@ def sidebar_ui(state, width=700, visible=True):
 2. **Ask Questions** - Interact with the chatbot to get insights on production processes.
 3. **Ask for Help** - Get assistance with any issues or queries related to production.
 
-Note: you can click on `
+Note: you can click on `Pause` or `Reset` to control the production simulation.
 """
     )
-
-
+
+    with gr.Blocks():
+        with gr.Row(height=800):
+            with gr.Tabs():
+                with gr.TabItem("Agent"):
+                    chatbot = gr.ChatInterface(
+                        fn=respond,
+                        type="messages",
+                        multimodal=False,
+                        chatbot=gr.Chatbot(
+                            placeholder="⚡️ How can I help you today?",
+                            type="messages",
+                            height=600,
+                            show_copy_button=True,
+                        ),
+                        show_progress='full',
+                        stop_btn=True,
+                        save_history=True,
+                        examples=[
+                            ["How is the production process going?"],
+                            ["What are the common issues faced in production?"],
+                            # ["What is the status of the current production line?"],
+                            # ["Can you provide insights on equipment performance?"],
+                            # ["How can I optimize the workflow in the production area?"],
+                            # ["How do I troubleshoot a specific piece of equipment?"],
+                            # ["What are the best practices for maintaining production efficiency?"]
+                        ],
+                        cache_examples=False  # disable the cache since responses vary
+                    )
+                with gr.TabItem("Documentation", visible=True):
+                    md_output = gr.Markdown("📄 Documentation will appear here.")
+
+                    # textbox=gr.MultimodalTextbox(file_types=[".png", ".pdf"], sources=["upload", "microphone"]),
+                    # additional_inputs=[gr.Textbox("System", label="System prompt"), gr.Slider(0, 1)],
+                    # additional_inputs_accordion="Advanced options",
+                    # flagging_mode="manual",
+                    # flagging_options=["👍", "👎"],
+                    # title="My Chatbot",
+                    # description="Test a multimodal model",
+
     sessions_state = gr.JSON(
         label="Sessions State",
         visible=True,
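
A hypothetical wiring sketch for the updated sidebar, not shown in this diff: the entry-point file and variable names (app.py, demo) are illustrative assumptions.

# app.py (hypothetical entry point)
import gradio as gr
from src.ui.sidebar import sidebar_ui

with gr.Blocks() as demo:
    state = gr.State({})
    sidebar_ui(state, width=700, visible=True)

demo.launch()
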
tools.json
ADDED
@@ -0,0 +1,21 @@
+[
+    {
+        "type": "function",
+        "function": {
+            "name": "calculate_sum",
+            "description": "Calculates the sum of a list of numbers.",
+            "parameters": {
+                "type": "object",
+                "properties": {
+                    "numbers": {
+                        "type": "array",
+                        "description": "A list of numbers to be summed from the question."
+                    }
+                },
+                "required": [
+                    "numbers"
+                ]
+            }
+        }
+    }
+]
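
requirements.txt now pulls in pydantic, although this diff does not use it yet; one plausible use, sketched here purely as an assumption, is validating the generated tools.json before sending it to the API:

import json
from pydantic import BaseModel

class FunctionSpec(BaseModel):
    name: str
    description: str
    parameters: dict

class ToolSpec(BaseModel):
    type: str
    function: FunctionSpec

# Load the generated file and let pydantic reject malformed entries.
with open("tools.json", "r", encoding="utf-8") as f:
    tools = [ToolSpec(**entry) for entry in json.load(f)]
print(f"{len(tools)} tool(s) validated")
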