---
title: AgentLlama007B
emoji: 👍
colorFrom: pink
colorTo: indigo
sdk: streamlit
sdk_version: 1.27.2
app_file: agent_llama_ui.py
pinned: false
license: mit
---

# AgentLlama007B: A Conversational AI Assistant

*(AgentLlama007B logo)*

## Overview

AgentLlama007B is a powerful Conversational AI Assistant designed for natural language interactions and task automation. It leverages state-of-the-art language models and offers seamless integration with external tools and knowledge sources. Whether you need to engage in casual conversations or perform specific tasks, AgentLlama007B has you covered.

## Key Features

- **Natural Language Conversations**: Engage in human-like conversations powered by local language models.
- **Tool Integration**: Execute tools such as image generation, web search, and Wikipedia queries directly within the conversation.
- **Knowledge Base Memory**: Document knowledge is stored in a vector database; you can add your own documents and texts to give the agent extra context during conversations.
- **Modular Architecture**: Easily extend AgentLlama007B with additional skills and tools to suit your specific needs.
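To make the "additional skills and tools" idea concrete, a tool in this style can be sketched as a small class with a name, a description shown to the language model, and a `run` method. The class names and behavior below are hypothetical, not the repository's actual tool API:

```python
# Hypothetical sketch of a pluggable tool abstraction; the real tool
# classes in agent_llama may be shaped differently.
class Tool:
    """Base class: a named skill the agent can invoke mid-conversation."""
    name = "tool"
    description = "what this tool does, shown to the language model"

    def run(self, query: str) -> str:
        raise NotImplementedError


class WikipediaTool(Tool):
    name = "wikipedia"
    description = "Look up a topic on Wikipedia."

    def run(self, query: str) -> str:
        # A real implementation would call the Wikipedia API here.
        return f"Summary for: {query}"
```

Registering a new skill then amounts to subclassing `Tool` and handing an instance to the agent, which keeps the core conversational loop unchanged.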

## Getting Started

To start using AgentLlama007B, follow these steps:

1. Clone the repository and create a folder named `models`. Download the necessary models from Hugging Face and place them in the `models` folder. For chat/instructions, use `mistral-7b-instruct-v0.1.Q4_K_M.gguf`; for image generation, use `dreamshaper_8` (requires `dreamshaper_8.json` and `dreamshaper_8.safetensors`).

2. Install the required dependencies by running `pip install -r requirements.txt`.

3. Run the main Streamlit app:

   ```shell
   streamlit run agent_llama_ui.py
   ```
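The model download in step 1 can also be scripted. The helper below is a hypothetical sketch (not part of the repository) that fetches the chat model with `huggingface_hub`; the image-generation files would need the same treatment:

```python
from pathlib import Path


def ensure_chat_model(models_dir: Path = Path("models")) -> Path:
    """Return the path to the chat model, downloading it if missing."""
    models_dir.mkdir(exist_ok=True)
    target = models_dir / "mistral-7b-instruct-v0.1.Q4_K_M.gguf"
    if not target.exists():
        # Deferred import: the helper still works when the file is
        # already in place and huggingface_hub is not installed.
        from huggingface_hub import hf_hub_download
        hf_hub_download(
            repo_id="TheBloke/Mistral-7B-Instruct-v0.1-GGUF",
            filename=target.name,
            local_dir=str(models_dir),
        )
    return target
```

Calling `ensure_chat_model()` before starting the app makes setup idempotent: the download only happens on the first run.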

Alternatively, you can integrate the agent into your Python code:

```python
from agent_llama import SmartAgent

agent = SmartAgent()

while True:
    user_input = input("You: ")
    response = agent.agent_generate_response(user_input)
    print("Bot:", response)
```

For more details on customization, model configuration, and tool parameters, refer to the code documentation and to the original model repositories.

## Implementation

AgentLlama007B's core logic is encapsulated in the `RBotAgent` class, which manages the conversational flow and tool integration. The knowledge base tool, `StorageRetrievalLLM`, uses persistent memory with a FAISS index of document embeddings. Various tools are provided, each encapsulating a specific skill such as image generation or web search. The modular architecture allows easy replacement of components like the language model.
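The retrieval pattern behind the knowledge base tool can be illustrated with a simplified in-memory stand-in: plain cosine similarity over stored vectors instead of a FAISS index. The class name and the pluggable `embed` callable here are hypothetical, used only to show the add/search flow:

```python
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


class MiniVectorStore:
    """Toy stand-in for a FAISS-backed document store."""

    def __init__(self, embed):
        self.embed = embed  # callable: str -> list[float]
        self.docs = []      # (vector, text) pairs

    def add(self, text):
        self.docs.append((self.embed(text), text))

    def search(self, query, k=2):
        qv = self.embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(qv, d[0]), reverse=True)
        return [text for _, text in ranked[:k]]
```

In the real system the embeddings come from a sentence-embedding model and FAISS replaces the linear scan, but the interface — embed, store, rank by similarity — is the same.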

## Why it matters

AgentLlama007B demonstrates the power of modern conversational AI in a real-world setting. It runs smoothly on consumer hardware with a single 8-core CPU and 16GB of RAM.

Remarkably, AgentLlama007B achieves language understanding and task automation using a quantized 7 billion parameter model, which is significantly smaller than models used by other conversational agents. This makes it efficient and practical for various applications.

## Credits

AgentLlama007B has been evaluated using TheBloke's Mistral-7B-Instruct-v0.1-GGUF model, a quantized conversion of MistralAI's original Mistral-7B architecture. The 7B model is remarkably capable for its size.

This project was created by Salvatore Rossitto as a passion project and a learning endeavor. Contributions from the community are welcome and encouraged.

## License

AgentLlama007B is an open-source project released under the MIT license. You are free to use, modify, and distribute it according to the terms of the license.

The Mistral-7B-Instruct-v0.1 model by MistralAI and TheBloke's Mistral-7B-Instruct-v0.1-GGUF model are subject to their respective licenses. Please refer to the original authors' licenses for more information.