---
title: AQuaBot
emoji: 💧
colorFrom: blue
colorTo: green
sdk: gradio
sdk_version: 5.4.0
app_file: app.py
pinned: false
accelerator: gpu
---

AQuaBot - AI Water Consumption Awareness Chat

AQuaBot is an artificial intelligence assistant that raises awareness about the water consumption of large language models while providing helpful responses to user queries. It uses Microsoft's Phi-2 model and tracks estimated water consumption in real time during conversations.

Author

Camilo Vega Barbosa

  • AI Professor and Artificial Intelligence Solutions Consultant
  • Connect with me on LinkedIn or GitHub

Features

  • Real-time water consumption tracking for each interaction
  • Interactive chat interface built with Gradio (see the sketch after this list)
  • Water usage calculations based on academic research
  • Educational information about AI's environmental impact
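
To illustrate how these features fit together, here is a minimal Gradio sketch that appends a running water counter to each reply. It is not the actual code in app.py: the names respond, estimate_water_ml, and total_water_ml, and the per-token figure, are all illustrative assumptions.

    import gradio as gr

    total_water_ml = 0.0  # running total for the whole conversation (illustrative)

    def estimate_water_ml(n_tokens: int) -> float:
        # Placeholder per-token figure for illustration only; the real app derives
        # its constants from Li et al. (2023), cited below.
        return n_tokens * 0.05

    def respond(message, history):
        global total_water_ml
        reply = "..."  # the language model's answer would be generated here
        total_water_ml += estimate_water_ml(len(message.split()) + len(reply.split()))
        return f"{reply}\n\n💧 Estimated water used so far: {total_water_ml:.2f} ml"

    demo = gr.ChatInterface(fn=respond, title="AQuaBot")

    if __name__ == "__main__":
        demo.launch()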

How It Works

The application calculates water consumption based on the research paper "Making AI Less Thirsty: Uncovering and Addressing the Secret Water Footprint of AI Models" by Li, P. et al. (2023). It tracks both:

  • Water consumption per token during training
  • Water consumption per token during inference

For each interaction, the application calculates the following (a minimal sketch follows this list):

  1. Water consumption for input tokens
  2. Water consumption for output tokens
  3. Total accumulated water usage
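
Here is a sketch of that three-step calculation. The per-token constants and the function and variable names below are placeholders for illustration only; the actual figures used by app.py are derived from Li et al. (2023).

    # Placeholder per-token water footprints in millilitres. These are NOT the
    # figures from Li et al. (2023); they only illustrate the shape of the calculation.
    WATER_ML_PER_INPUT_TOKEN = 0.05
    WATER_ML_PER_OUTPUT_TOKEN = 0.05

    total_water_ml = 0.0  # step 3: accumulated across the conversation

    def water_for_interaction(n_input_tokens: int, n_output_tokens: int) -> float:
        """Estimated water use (ml) for one prompt/response pair."""
        input_ml = n_input_tokens * WATER_ML_PER_INPUT_TOKEN      # step 1: input tokens
        output_ml = n_output_tokens * WATER_ML_PER_OUTPUT_TOKEN   # step 2: output tokens
        return input_ml + output_ml

    # Example: a 42-token prompt answered with 120 generated tokens.
    total_water_ml += water_for_interaction(42, 120)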

Technical Details

  • Model: meta-llama/Llama-2-7b-hf
  • Framework: Gradio
  • Dependencies: Managed through requirements.txt
  • Device Configuration: Automatically detects GPU availability and assigns the appropriate device (see the sketch below)
  • Optimization: Configured to run efficiently on Hugging Face Spaces
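
A sketch of the device-detection and model-loading logic described above, assuming a standard transformers setup (the exact code in app.py may differ; Llama-2 is a gated model that requires a Hugging Face access token):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_ID = "meta-llama/Llama-2-7b-hf"  # model listed above; gated on the Hub

    # Use the GPU when the Space provides one, otherwise fall back to CPU.
    device = "cuda" if torch.cuda.is_available() else "cpu"

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.float16 if device == "cuda" else torch.float32,
    ).to(device)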

Citation

Li, P. et al. (2023). Making AI Less Thirsty: Uncovering and Addressing the Secret Water Footprint of AI Models. arXiv preprint arXiv:2304.03271. https://arxiv.org/abs/2304.03271

Installation

To run this application locally:

  1. Clone the repository
  2. Install dependencies:
    pip install -r requirements.txt
    
  3. Run the application:
    python app.py
    

Note

This application uses the Phi-2 model instead of GPT-3 for availability and cost reasons. The per-token water consumption calculations (input and output), however, are based on the conclusions of the cited research paper.


Created by Camilo Vega Barbosa, AI Professor and Solutions Consultant. For more AI projects and collaborations, feel free to connect on LinkedIn or visit my GitHub.