Salvatore Rossitto committed · Commit abd1162 · 1 parent: 4190d92 · updated readme
README.md
CHANGED

@@ -1,14 +1,3 @@
- ---
- title: AgentLlama007B
- emoji: 👁
- colorFrom: pink
- colorTo: indigo
- sdk: streamlit
- sdk_version: 1.27.2
- app_file: agent_llama_ui.py
- pinned: false
- license: mit
- ---

# Agent Llama007B: A Conversational AI Assistant

![Agent Llama](agent_llama.jpg)

- **Natural Language Conversations**: Engage in human-like conversations powered by local language models.
- **Tool Integration**: Execute various tools, including image generation, web search, Wikipedia queries, and more, all within the conversation.
- **Knowledge Base Memory**: Document knowledge is stored in a vector database; you can add your own documents and texts to it, taking the conversational experience the extra mile.
- **Modular Architecture**: Easily extend AgentLlama007B with additional skills and tools to suit your specific needs (a minimal extension sketch follows this list).
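
The exact extension API lives in the project code, but the idea is that each tool bundles a name, a description the model can read, and a callable skill. The sketch below is purely illustrative: the `CurrentTimeTool` class, its `run` method, and the `add_tool` registration hook are hypothetical names, not AgentLlama007B's actual interface.

```python
# Hypothetical sketch of adding a custom tool; names are illustrative
# and do not come from the agent_llama source.
from datetime import datetime

from agent_llama import SmartAgent


class CurrentTimeTool:
    """Toy skill that returns the current local time."""

    name = "current_time"
    description = "Returns the current local time as HH:MM:SS."

    def run(self, query: str) -> str:
        # The query is ignored here; a real tool would parse it.
        return datetime.now().strftime("%H:%M:%S")


agent = SmartAgent()

# Registration hook assumed for illustration; the real project may expect
# tools to be passed at construction time instead.
if hasattr(agent, "add_tool"):
    agent.add_tool(CurrentTimeTool())
```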

## Getting Started

To start using AgentLlama007B, follow these steps:

1. Clone the repository and create a folder named `models`. Download the necessary models from Hugging Face and place them in the `models` folder. For chat/instructions, use `mistral-7b-instruct-v0.1.Q4_K_M.gguf`, and for image generation, use `dreamshaper_8` (requires `dreamshaper_8.json` and `dreamshaper_8.safetensors`). A scripted download is sketched after these steps.
2. Install the required dependencies by running `pip install -r requirements.txt`.
3. Run the main Streamlit app:

```bash
streamlit run agent_llama_ui.py
```
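
If you prefer to script step 1, the chat model can be fetched with `huggingface_hub`. This is an optional sketch; the `dreamshaper_8` files come from a different repository that this README does not name, so only the GGUF download is shown.

```python
# Optional: download the quantized chat model into the local "models" folder.
# Requires: pip install huggingface_hub
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="TheBloke/Mistral-7B-Instruct-v0.1-GGUF",
    filename="mistral-7b-instruct-v0.1.Q4_K_M.gguf",
    local_dir="models",
)
```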

Alternatively, you can integrate the agent into your own Python code:

```python
from agent_llama import SmartAgent

agent = SmartAgent()

while True:
    # The next two lines are assumed for illustration; check SmartAgent's
    # API for the exact call (it may not be named chat).
    user_message = input("You: ")
    response = agent.chat(user_message)
    print("Bot:", response)
```

For more details on customization, model configuration, and tool parameters, refer to the code documentation and to the original model repositories.
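
Image generation relies on the `dreamshaper_8` checkpoint mentioned in step 1. Purely as an illustration of loading such a `.safetensors` file, here is a sketch using the `diffusers` library; AgentLlama007B's image tool may load and configure the model differently, and the bundled `dreamshaper_8.json` is not used here.

```python
# Illustrative only: load a local Stable Diffusion checkpoint and render one image.
# Requires: pip install diffusers transformers torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file("models/dreamshaper_8.safetensors")
pipe = pipe.to("cpu")  # use "cuda" if a GPU is available

image = pipe(
    "a cozy cabin in a snowy forest, digital painting",
    num_inference_steps=20,
).images[0]
image.save("cabin.png")
```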

## Implementation

AgentLlama007B's core logic is encapsulated in the `RBotAgent` class, which manages the conversational flow and tool integration. The knowledge base tool, `StorageRetrievalLLM`, uses persistent memory with a FAISS index of document embeddings. Various tools are provided, each encapsulating specific skills such as image generation and web search. The modular architecture allows easy replacement of components like the language model.
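
To make the retrieval idea concrete, here is a small self-contained sketch of the FAISS pattern described above. It is not the `StorageRetrievalLLM` implementation; the `sentence-transformers` embedding model is an assumption made for the example.

```python
# Sketch of FAISS-based document retrieval (not the project's actual class).
# Requires: pip install faiss-cpu sentence-transformers
import faiss
from sentence_transformers import SentenceTransformer

documents = [
    "AgentLlama007B is a conversational AI assistant.",
    "The knowledge base keeps document embeddings in a FAISS index.",
    "Tools cover image generation, web search, and Wikipedia queries.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
embeddings = encoder.encode(documents).astype("float32")

# Build a flat L2 index over the document embeddings.
index = faiss.IndexFlatL2(embeddings.shape[1])
index.add(embeddings)

# Retrieve the documents closest to a query; the agent would pass these
# snippets to the language model as extra context.
query = encoder.encode(["Where are document embeddings stored?"]).astype("float32")
distances, ids = index.search(query, 2)
print([documents[i] for i in ids[0]])
```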

## Why it matters

AgentLlama007B demonstrates the power of modern conversational AI in a real-world setting. It runs smoothly on consumer hardware with a single 8-core CPU and 16GB of RAM.

Remarkably, AgentLlama007B achieves language understanding and task automation using a quantized 7 billion parameter model, which is significantly smaller than the models used by other conversational agents. This makes it efficient and practical for various applications.
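
For a sense of what running the quantized model on such hardware involves, here is a hedged sketch using `llama-cpp-python`; the loader and parameters AgentLlama007B actually uses may differ.

```python
# Sketch: load the quantized GGUF model on CPU and answer a single prompt.
# Requires: pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="models/mistral-7b-instruct-v0.1.Q4_K_M.gguf",
    n_ctx=2048,    # context window size
    n_threads=8,   # roughly one thread per core on an 8-core CPU
)

output = llm(
    "[INST] List three tasks a local AI assistant can automate. [/INST]",
    max_tokens=128,
)
print(output["choices"][0]["text"])
```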

## Credits

AgentLlama007B has been evaluated using TheBloke's Mistral-7B-Instruct-v0.1-GGUF model. This 7 billion parameter model was converted from MistralAI's original Mistral-7B architecture. The 7B model is impressive in its capabilities.

This project was created by Salvatore Rossitto as a passion project and a learning endeavor. Contributions from the community are welcome and encouraged.

## License

AgentLlama007B is an open-source project released under the MIT license. You are free to use, modify, and distribute it according to the terms of the license.

The Mistral-7B-Instruct-v0.1 model by MistralAI and TheBloke's Mistral-7B-Instruct-v0.1-GGUF model are subject to their respective licenses. Please refer to the original authors' licenses for more information.
|