mriusero committed · Commit 7f568ce · 1 Parent(s): b6f3c79
large to medium llm

Files changed:
- README.md +2 -2
- app.py +1 -1
- src/agent/stream.py +0 -18

README.md CHANGED
@@ -35,8 +35,8 @@ This is a demo of an AI agent designed to assist industries and service provider
 
 ### Design
 
-* The agent is implemented using **Mistral AI** via the
-* Its capabilities have been enhanced with a chain-of-thought reasoning process, allowing it to think, act, observe, and respond effectively to user queries.
+* The agent is implemented using **Mistral AI** via the **mistral-medium-2505** LLM.
+* Its capabilities have been enhanced with a **chain-of-thought** reasoning process, allowing it to think, act, observe, and respond effectively to user queries.
 * The agent is presented through a **Gradio interface**, which is well-suited for both real-time visualization and LLM interaction.
 
 ### Purposes

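Note: the chain-of-thought design mentioned in the README cycles the agent through think, act, observe, and respond phases before it answers. The sketch below only illustrates that control flow; `call_llm`, the prompt strings, and the scratchpad handling are hypothetical and are not taken from this Space's code.

```python
# Minimal sketch of a think / act / observe / respond cycle.
# call_llm, the prompt format, and the scratchpad are hypothetical stand-ins;
# only the phase ordering mirrors the README.

PHASES = ["think", "act", "observe", "respond"]

def call_llm(prompt: str) -> str:
    """Placeholder for a call to the underlying LLM."""
    raise NotImplementedError

def run_cycle(user_query: str) -> str:
    scratchpad = f"User query: {user_query}"
    answer = ""
    for phase in PHASES:
        # Each phase sees the accumulated scratchpad and appends its output.
        output = call_llm(f"[{phase.upper()}]\n{scratchpad}")
        scratchpad += f"\n[{phase}] {output}"
        if phase == "respond":
            answer = output  # final, user-facing message
    return answer
```
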
app.py CHANGED
@@ -48,7 +48,7 @@ This is a demo of an AI agent designed to assist industries and service provider
 """
 ## Design
 
-The agent is implemented using **Mistral AI** via the `mistral-
+The agent is implemented using **Mistral AI** via the `mistral-medium-2505` LLM. Its capabilities have been enhanced with a chain-of-thought reasoning process, allowing it to `think`, `act`, `observe`, and `respond` effectively to user queries. The agent is presented through a **Gradio interface**, which is well-suited for both real-time visualization and LLM interaction.
 
 [See video overview](https://drive.google.com/file/d/1Bv1uF3-4EeR1HePSafZN1yzInr7YcQXZ/view?usp=share_link)
 

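Note: the updated docstring names `mistral-medium-2505` as the model. A rough illustration of streaming from that model with the `mistralai` v1 Python SDK is shown below; the client setup and prompt are assumptions for the sketch, not the Space's actual call site.

```python
import os
from mistralai import Mistral

# Sketch only: assumes the mistralai v1 SDK and an API key in the environment;
# the Space's own client code may differ.
client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

with client.chat.stream(
    model="mistral-medium-2505",
    messages=[{"role": "user", "content": "Summarise today's maintenance alerts."}],
) as event_stream:
    for event in event_stream:
        delta = event.data.choices[0].delta  # incremental token delta
        print(delta.content or "", end="", flush=True)
```
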
src/agent/stream.py CHANGED
@@ -92,24 +92,6 @@ async def respond(message, history=None, state=None):
         if current_phase == "think":
             history[-1] = ChatMessage(role="assistant", content=buffer, metadata={"title": "Thinking...", "status": "pending", "id": state['cycle']})
 
-        #elif current_phase == "act":
-        #    parent_message = next((msg for msg in history if msg.metadata.get("id") == state['cycle']), None)
-        #    if parent_message:
-        #        parent_message.content += "\n\n" + buffer
-        #        parent_message.metadata["title"] = "Acting..."
-        #    else:
-        #        history[-1] = ChatMessage(role="assistant", content=buffer, metadata={"title": "Acting...", "status": "pending", "id": state['cycle']+1, 'parent_id': state["cycle"]})
-
-        #elif current_phase == "observe":
-        #    parent_message = next((msg for msg in history if msg.metadata.get("id") == state['cycle']), None)
-        #    if parent_message:
-        #        parent_message.content += "\n\n" + buffer
-        #        parent_message.metadata["title"] = "Acting..."
-        #    else:
-        #        history[-1] = ChatMessage(role="assistant", content=buffer, metadata={"title": "Observing...", "status": "pending", "id": state['cycle']+2, 'parent_id': state["cycle"]})
-        #
-        #    yield history
-
         if current_phase == "final":
             delta_content = delta.content or ""
             final_full += delta_content
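
Note: the kept `think` branch above relies on Gradio's `ChatMessage` metadata (`title`, `status`, `id`), and the deleted commented-out branches would have nested "Acting..." / "Observing..." updates under the same panel via `parent_id`. The self-contained sketch below shows that nesting mechanism in isolation; the handler, reply text, and interface wiring are illustrative only, not this Space's code.

```python
import gradio as gr
from gradio import ChatMessage

# Rough sketch of nested "thought" panels via ChatMessage metadata, mirroring
# the parent_id pattern in the commented-out branches this commit removes.
# The reply text is invented; only the metadata keys follow the diff above.
def respond(message, history):
    thinking = ChatMessage(
        role="assistant",
        content="Reasoning about the request...",
        metadata={"title": "Thinking...", "status": "done", "id": 1},
    )
    # A child step is grouped under the parent panel via parent_id.
    observing = ChatMessage(
        role="assistant",
        content="Collected intermediate tool output.",
        metadata={"title": "Observing...", "status": "done", "id": 2, "parent_id": 1},
    )
    answer = ChatMessage(role="assistant", content=f"Final answer to: {message}")
    return [thinking, observing, answer]

demo = gr.ChatInterface(respond, type="messages")

if __name__ == "__main__":
    demo.launch()
```

Grouping intermediate steps under a parent message keeps the chat view readable: the reasoning stays inspectable but collapsed, while only the final answer is shown expanded.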