
# RAG Chatbot Application

## Project Overview

A modular Retrieval-Augmented Generation (RAG) chatbot application built with FastAPI, supporting multiple LLM providers and embedding models.

## Project Structure

- `config/`: Configuration management
- `src/`: Main application source code
- `tests/`: Unit and integration tests
- `data/`: Document storage and ingestion

## Prerequisites

- Python 3.9+
- pip
- (Optional) A virtual environment

## Installation

1. Clone the repository:

   ```bash
   git clone https://your-repo-url.git
   cd rag-chatbot
   ```

2. Create and activate a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows use `venv\Scripts\activate`
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Set up environment variables:

   ```bash
   cp .env.example .env
   # Edit .env with your credentials
   ```

## Configuration

### Environment Variables

- `OPENAI_API_KEY`: OpenAI API key
- `OLLAMA_BASE_URL`: Ollama server URL
- `EMBEDDING_MODEL`: Hugging Face embedding model name
- `CHROMA_PATH`: Vector store persistence path
- `DEBUG`: Enable debug mode
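A minimal `.env` might look like the following. The values are illustrative placeholders, not defaults taken from this project; substitute your own key, server address, and paths:

```bash
OPENAI_API_KEY=sk-...                    # your OpenAI API key
OLLAMA_BASE_URL=http://localhost:11434   # Ollama's default local address
EMBEDDING_MODEL=sentence-transformers/all-MiniLM-L6-v2  # example HF model
CHROMA_PATH=./chroma_db                  # where Chroma persists vectors
DEBUG=false
```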

## Running the Application

### Development Server

```bash
uvicorn src.main:app --reload
```

### Production Deployment

```bash
gunicorn -w 4 -k uvicorn.workers.UvicornWorker src.main:app
```

## Testing

```bash
pytest tests/
```
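Tests live under `tests/` as plain pytest functions. As a sketch of the style, here is a minimal unit test for a hypothetical document-chunking helper (`chunk_text` below is an illustrative stand-in, not a function from this codebase):

```python
# tests/test_chunking.py — illustrative example; chunk_text is a stand-in helper.
def chunk_text(text: str, size: int, overlap: int = 0) -> list[str]:
    """Split text into fixed-size chunks, with optional character overlap."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def test_chunks_cover_whole_text():
    text = "abcdefghij"
    chunks = chunk_text(text, size=4)
    assert "".join(chunks) == text  # no overlap: chunks reassemble the input

def test_overlap_repeats_boundary_characters():
    chunks = chunk_text("abcdef", size=4, overlap=2)
    assert chunks[0][-2:] == chunks[1][:2]  # consecutive chunks share 2 chars
```

Run a single file with `pytest tests/test_chunking.py -v`.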

## Features

- Multiple LLM Provider Support
- Retrieval-Augmented Generation
- Document Ingestion
- Flexible Configuration
- FastAPI Backend
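The retrieve-then-generate flow behind these features can be sketched in a few lines. This is an illustrative toy, not the project's actual implementation: bag-of-words counts stand in for the configured embedding model, and the assembled prompt is printed rather than sent to an LLM provider.

```python
# Toy sketch of a RAG pipeline: embed, rank by cosine similarity, build prompt.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': lowercase bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by similarity to the query and return the top k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble retrieved context and the question into an LLM prompt."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "FastAPI is a Python web framework.",
    "Chroma persists vector embeddings on disk.",
    "Ollama serves local LLMs over HTTP.",
]
print(build_prompt("Which framework serves the web API?", docs))
```

The real application swaps the toy pieces for configured components: `EMBEDDING_MODEL` for `embed`, the Chroma store at `CHROMA_PATH` for the in-memory list, and an OpenAI or Ollama call for the final `print`.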

## Contributing

1. Fork the repository
2. Create your feature branch
3. Commit your changes
4. Push to the branch
5. Create a Pull Request