Update README.md
README.md
CHANGED
@@ -1,77 +1,8 @@
I built two web interfaces for using and interacting with the Personal Fitness Trainer AI. The links and descriptions are provided below.

### Streamlit + Docker + Heroku + GitHub Actions (CI/CD)
[Web App Link (Heroku)](https://ai-personal-fitness-trainer.herokuapp.com/)
I used Streamlit, a Python library aimed at people who are not expert web developers, to design an application for using the AI. Streamlit lets you build data science applications without worrying much about UI design, which is handled by the Streamlit API. I then wrote a Dockerfile with instructions to build a Docker image that runs the application. The application was deployed to the web using Heroku and its Docker Container Registry. Finally, I automated the deployment pipeline with GitHub Actions by writing a workflow that builds the Docker image and pushes it to Heroku's registry whenever changes are pushed to the main branch of this GitHub repository. In essence, the workflow automatically performs the same commands I ran on my local machine: log in to the Heroku container registry, build the Docker image, and deploy it to the web.
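For context, here is a minimal sketch of what a Streamlit front end for a model like this could look like. It is illustrative only, not the project's actual app, and the inference step is left as a placeholder.

```
# Hypothetical minimal Streamlit front end (illustrative sketch, not the project's actual app)
import streamlit as st

st.title("Personal Fitness Trainer AI")
st.write("Upload a short exercise clip and the model will classify the movement.")

# st.file_uploader returns None until the user provides a file
video_file = st.file_uploader("Upload a video", type=["mp4", "mov", "avi"])

if video_file is not None:
    st.video(video_file)
    # The real app would run pose estimation and the LSTM classifier here
    st.info("Exercise recognition would run here and display the predicted class.")
```

Running `streamlit run app.py` serves the interface locally, and the same command is what a Dockerfile for a Streamlit app would typically invoke as its entrypoint.
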
### Streamlit Cloud
[Web App Link (Streamlit Cloud)](https://chrisprasanna-exercise-recognition-ai-app-app-enjv7a.streamlitapp.com/)
I also deployed the AI directly from Streamlit to their cloud. This was quick and easy; however, the biggest downside of Streamlit Cloud deployment is speed: the entire Python script is re-run every time the user interacts with the application. I included this link for documentation purposes, but I recommend using the link from the previous section.
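One common way to soften that rerun cost, assuming the app loads a trained model at startup, is to cache the expensive load so reruns reuse it. The sketch below uses `st.cache_resource`, which is available in recent Streamlit releases (older versions used `st.cache`); the weights file name is a hypothetical placeholder.

```
import streamlit as st
from tensorflow import keras

@st.cache_resource  # cache the loaded model across script reruns
def get_model():
    # Hypothetical weights file; a real app would load its own saved model
    return keras.models.load_model("exercise_lstm.h5")

model = get_model()  # later reruns reuse the cached model instead of reloading it from disk
```
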
## Installation

- Download this repository and move it to your desired working directory
- Download Anaconda if you haven't already
- Open the Anaconda Prompt
- Navigate to your working directory using the `cd` command
- Run the following command in the Anaconda Prompt:

  ```
  conda env create --name NAME --file environment.yml
  ```

  where NAME is replaced with your chosen name for this project's conda virtual environment. This environment contains all the package installations and dependencies for the project.
- Run the following command in the Anaconda Prompt:

  ```
  conda activate NAME
  ```

  This activates the conda environment containing all the required packages and their versions.
- Open Anaconda Navigator
- Under the "Applications On" dropdown menu, select the newly created conda environment
- Install and open Jupyter Notebook. NOTE: once this step is complete, and if you're on a Windows device, you can launch the Jupyter Notebook installed in this conda environment directly from the Start menu.
- Navigate to the ExerciseDecoder.ipynb file within the repository

## Features

- Implementation of Google MediaPipe's BlazePose model for real-time human pose estimation (see the first sketch after this list)
- Computer vision tools (i.e., OpenCV) for color conversion, detecting cameras, reading camera properties, displaying images, and custom graphics/visualization
- Inferred 3D joint angle computation from the relative coordinates of the surrounding body landmarks (also covered in the first sketch after this list)
- Guided training data generation
- Data preprocessing and callback methods for efficient deep neural network training
- Customizable LSTM and Attention-Based LSTM models (see the second sketch after this list)
- Real-time visualization of joint angles, rep counters, and probability distribution of exercise classification predictions
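
As a rough illustration of the pose estimation, OpenCV, and joint angle features above, the following sketch opens a webcam, runs MediaPipe Pose (BlazePose) on each frame, and computes a left elbow angle from three landmarks. It is a simplified stand-in for the notebook's actual pipeline, and the landmark choice and drawing details are illustrative assumptions.

```
import cv2
import numpy as np
import mediapipe as mp

mp_pose = mp.solutions.pose

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by points a-b-c, each an (x, y, z) tuple."""
    ba, bc = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos_angle = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc) + 1e-9)
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

def landmark_xyz(landmarks, idx):
    lm = landmarks[int(idx)]
    return (lm.x, lm.y, lm.z)

cap = cv2.VideoCapture(0)  # first available camera
with mp_pose.Pose(min_detection_confidence=0.5, min_tracking_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures frames in BGR
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            lm = results.pose_landmarks.landmark
            angle = joint_angle(landmark_xyz(lm, mp_pose.PoseLandmark.LEFT_SHOULDER),
                                landmark_xyz(lm, mp_pose.PoseLandmark.LEFT_ELBOW),
                                landmark_xyz(lm, mp_pose.PoseLandmark.LEFT_WRIST))
            cv2.putText(frame, f"Left elbow: {angle:.0f} deg", (10, 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.imshow("Pose", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
cv2.destroyAllWindows()
```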
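For the LSTM-based classifiers, a hedged sketch of a small Keras model over pose-landmark sequences is shown below. The sequence length, feature count, layer sizes, and class count are illustrative assumptions rather than the repository's actual architecture; the attention-based variant builds on Philippe Rémy's Keras attention mechanism (see Credits).

```
from tensorflow import keras
from tensorflow.keras import layers

# Assumed shapes: 30-frame windows, 132 features per frame (33 landmarks x 4 values), 3 exercise classes
SEQ_LEN, N_FEATURES, N_CLASSES = 30, 132, 3

model = keras.Sequential([
    keras.Input(shape=(SEQ_LEN, N_FEATURES)),
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(32),
    layers.Dense(32, activation="relu"),
    layers.Dense(N_CLASSES, activation="softmax"),  # probability distribution over exercise classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Typical callbacks for efficient training: stop early and keep only the best weights
callbacks = [
    keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True),
    keras.callbacks.ModelCheckpoint("best_model.h5", save_best_only=True),
]
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=200, callbacks=callbacks)
```
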
## To-Do

* Higher Priority
  - [x] Add precision-recall analysis
  - [x] Deploy the AI and build a web app
  - [x] Build a Docker Image
  - [x] Build a CI/CD workflow
  - [ ] Train networks using angular joint kinematics rather than xyz coordinates
  - [ ] Translate the AI to a portable embedded system that you can take outdoors or to a commercial gym. Components may include a microcontroller (e.g., Raspberry Pi), an external USB camera, an LED screen, a battery, and a 3D-printed case
* Back-burner
  - [ ] Add AI features that can detect poor form (e.g., leaning, fast eccentric motion, knees caving in, poor squat depth, etc.) and offer real-time advice/feedback
  - [ ] Optimize hyperparameters based on minimizing training time and cross-entropy loss on the validation dataset
  - [ ] Add more exercise classes
  - [ ] Add additional models. For instance, even though BlazePose is a type of CNN, there may be benefits to including convolutional layers within the custom deep learning models

## Credits

- [MediaPipe Pose](https://google.github.io/mediapipe/solutions/pose.html) for the pretrained human pose estimation model
- [Nicholas Renotte](https://github.com/nicknochnack) for tutorials on real-time action detection and pose estimation
- [Philippe Rémy](https://github.com/philipperemy/keras-attention-mechanism) for the attention mechanism implementation for Keras

## License
[MIT](https://github.com/chrisprasanna/Exercise_Recognition_AI/blob/main/LICENSE)
---
license: mit
title: Streamit-yogs
sdk: streamlit
emoji: 📚
colorFrom: yellow
colorTo: red
---