CamiloVega committed
Commit 00827cd · verified · 1 Parent(s): 485dd71

Update README.md

Files changed (1):
  1. README.md +62 -7
README.md CHANGED
@@ -1,14 +1,69 @@
  ---
  title: AQuaBot
- emoji: 🚀
- colorFrom: indigo
- colorTo: pink
  sdk: gradio
- sdk_version: 5.4.0
  app_file: app.py
  pinned: false
- license: apache-2.0
- short_description: Assistant that helps raise awareness about water consumption
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
  ---
  title: AQuaBot
+ emoji: 💧
+ colorFrom: blue
+ colorTo: green
  sdk: gradio
+ sdk_version: 5.3.0
  app_file: app.py
  pinned: false
+ accelerator: gpu
  ---

+ # AQuaBot - AI Water Consumption Awareness Chat
+
+ AQuaBot is an artificial intelligence assistant that helps raise awareness about the water consumption of large language models while providing helpful responses to user queries. It uses Microsoft's Phi-1 model and tracks water consumption in real time during conversations.
+
+ ## Author
+ **Camilo Vega Barbosa**
+ - AI Professor and Artificial Intelligence Solutions Consultant
+ - Connect with me:
+   - [LinkedIn](https://www.linkedin.com/in/camilo-vega-169084b1/)
+   - [GitHub](https://github.com/CamiloVga)
+
+ ## Features
+ - Real-time water consumption tracking for each interaction
+ - Interactive chat interface built with Gradio (see the sketch after this list)
+ - Water usage calculations based on academic research
+ - Educational information about AI's environmental impact
+
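+ The chat front end can be a standard Gradio chat app. A minimal sketch under that assumption (this is not the Space's actual `app.py`; the reply logic is a placeholder):
+
+ ```python
+ # Minimal Gradio chat sketch. In the real app, respond() would call the language
+ # model and append the estimated water footprint of the turn to its reply.
+ import gradio as gr
+
+ def respond(message, history):
+     # Placeholder reply; swap in the model call used by app.py.
+     return f"(model reply to: {message})"
+
+ demo = gr.ChatInterface(fn=respond, title="AQuaBot")
+
+ if __name__ == "__main__":
+     demo.launch()
+ ```
+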
+ ## How It Works
+ The application calculates water consumption based on the research paper "Making AI Less Thirsty: Uncovering and Addressing the Secret Water Footprint of AI Models" by Li, P. et al. (2023). It tracks both:
+ - Water consumption per token during training
+ - Water consumption per token during inference
+
+ For each interaction, the application calculates (see the sketch below):
+ 1. Water consumption for input tokens
+ 2. Water consumption for output tokens
+ 3. Total accumulated water usage
+
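+ A toy sketch of that accounting, purely illustrative (the per-token constants and helper name below are placeholders, not figures or code taken from the paper or from `app.py`):
+
+ ```python
+ # Illustrative per-token water accounting in the spirit of Li et al. (2023).
+ WATER_PER_TOKEN_TRAINING_ML = 0.001   # assumed amortized training footprint per token (ml)
+ WATER_PER_TOKEN_INFERENCE_ML = 0.05   # assumed inference footprint per token (ml)
+
+ total_water_ml = 0.0  # running total for the session
+
+ def water_for_interaction(num_input_tokens: int, num_output_tokens: int) -> float:
+     """Estimate the water footprint (ml) of one chat turn and update the running total."""
+     global total_water_ml
+     per_token = WATER_PER_TOKEN_TRAINING_ML + WATER_PER_TOKEN_INFERENCE_ML
+     input_water = num_input_tokens * per_token    # 1. water for input tokens
+     output_water = num_output_tokens * per_token  # 2. water for output tokens
+     total_water_ml += input_water + output_water  # 3. accumulated usage
+     return input_water + output_water
+ ```
+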
+ ## Technical Details
+ - **Model**: Meta-llama/Llama-2-7b-hf
+ - **Framework**: Gradio
+ - **Dependencies**: Managed through `requirements.txt`
+ - **Device Configuration**: Automatically detects GPU availability and assigns the appropriate device (a minimal sketch follows this list)
+ - **Optimization**: Configured to run efficiently on Hugging Face Spaces
+
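+ A minimal sketch of that device-detection step, assuming a standard `transformers` loading path (the model id is a placeholder; use whichever checkpoint `app.py` actually configures):
+
+ ```python
+ # Sketch only: pick GPU when available, otherwise fall back to CPU, then load the model.
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ MODEL_ID = "meta-llama/Llama-2-7b-hf"  # placeholder; the README also mentions Phi-1/Phi-2
+
+ device = "cuda" if torch.cuda.is_available() else "cpu"
+
+ tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
+ model = AutoModelForCausalLM.from_pretrained(
+     MODEL_ID,
+     torch_dtype=torch.float16 if device == "cuda" else torch.float32,  # half precision on GPU
+ ).to(device)
+
+ def generate(prompt: str, max_new_tokens: int = 256) -> str:
+     inputs = tokenizer(prompt, return_tensors="pt").to(device)
+     output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
+     # Decode only the newly generated continuation, not the prompt.
+     return tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
+ ```
+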
+ ## Citation
+ ```
+ Li, P. et al. (2023). Making AI Less Thirsty: Uncovering and Addressing the Secret
+ Water Footprint of AI Models. arXiv preprint, https://arxiv.org/abs/2304.03271
+ ```
+
+ ## Installation
+ To run this application locally:
+ 1. Clone the repository
+ 2. Install dependencies:
+    ```bash
+    pip install -r requirements.txt
+    ```
+ 3. Run the application:
+    ```bash
+    python app.py
+    ```
+
+ ## Note
+ This application uses the Phi-2 model instead of GPT-3 for availability and cost reasons. However, the per-token water consumption calculations (input/output) are based on the conclusions of the cited research paper.
+
+ ---
+ Created by Camilo Vega Barbosa, AI Professor and Solutions Consultant. For more AI projects and collaborations, feel free to connect on [LinkedIn](https://www.linkedin.com/in/camilo-vega-169084b1/) or visit my [GitHub](https://github.com/CamiloVga).