Commit c636b75 · Sean Carnahan committed
Parent(s): c70fb2e
Track image and asset files with LFS
Files changed:
- HFup/.gitattributes +13 -0
- HFup/Dockerfile +30 -0
- HFup/README.md +76 -0
- HFup/app.py +590 -0
- HFup/bodybuilding_pose_analyzer/README.md +63 -0
- HFup/bodybuilding_pose_analyzer/bodybuilding_pose_classifier.h5 +3 -0
- HFup/bodybuilding_pose_analyzer/requirements.txt +8 -0
- HFup/bodybuilding_pose_analyzer/src/__pycache__/movenet_analyzer.cpython-310.pyc +0 -0
- HFup/bodybuilding_pose_analyzer/src/__pycache__/pose_analyzer.cpython-310.pyc +0 -0
- HFup/bodybuilding_pose_analyzer/src/demo.py +80 -0
- HFup/bodybuilding_pose_analyzer/src/movenet_analyzer.py +321 -0
- HFup/bodybuilding_pose_analyzer/src/movenet_demo.py +66 -0
- HFup/bodybuilding_pose_analyzer/src/pose_analyzer.py +200 -0
- HFup/bodybuilding_pose_analyzer/src/sample_video.mp4 +3 -0
- HFup/external/BodybuildingPoseClassifier +1 -0
- HFup/requirements.txt +80 -0
- HFup/static/uploads/output.mp4 +3 -0
- HFup/static/uploads/output_mediapipe.mp4 +3 -0
- HFup/static/uploads/output_movenet_lightning.mp4 +3 -0
- HFup/static/uploads/output_movenet_thunder.mp4 +3 -0
- HFup/static/uploads/policeb.mp4 +3 -0
- HFup/yolov7 +1 -0
- HFup/yolov7-w6-pose.pt +3 -0
HFup/.gitattributes
CHANGED
@@ -1 +1,14 @@
 *.keras filter=lfs diff=lfs merge=lfs -text
+*.JPEG filter=lfs diff=lfs merge=lfs -text
+*.GIF filter=lfs diff=lfs merge=lfs -text
+*.BMP filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.png filter=lfs diff=lfs merge=lfs -text
+*.gif filter=lfs diff=lfs merge=lfs -text
+*.tiff filter=lfs diff=lfs merge=lfs -text
+*.PNG filter=lfs diff=lfs merge=lfs -text
+*.JPG filter=lfs diff=lfs merge=lfs -text
+*.TIFF filter=lfs diff=lfs merge=lfs -text
+*.jpg filter=lfs diff=lfs merge=lfs -text
+*.jpeg filter=lfs diff=lfs merge=lfs -text
+*.bmp filter=lfs diff=lfs merge=lfs -text
HFup/Dockerfile
ADDED
@@ -0,0 +1,30 @@
# Use a Python version that matches your (keras2env) as closely as possible
FROM python:3.9-slim

WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    libgl1-mesa-glx \
    libglib2.0-0 \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy all necessary application files and folders from HFup/ to /app in the container
# These paths are relative to the Dockerfile's location (i.e., inside HFup/)
COPY app.py .
COPY bodybuilding_pose_analyzer bodybuilding_pose_analyzer
COPY external external
COPY yolov7 yolov7
COPY yolov7-w6-pose.pt .
COPY static static

# Ensure the uploads directory within static exists and is writable
RUN mkdir -p static/uploads && chmod -R 777 static/uploads

EXPOSE 7860

# Command to run app with Gunicorn
CMD ["gunicorn", "--bind", "0.0.0.0:7860", "--workers", "1", "--threads", "2", "--timeout", "300", "app:app"]
HFup/README.md
ADDED
@@ -0,0 +1,76 @@
---
title: Gladiator Pose Analyzer
emoji: 💪🏋️♂️
colorFrom: blue
colorTo: green
sdk: docker
pinned: false
app_port: 7860
# Add license if you have one, e.g., license: apache-2.0
---

# Gladiator Pose Analyzer 💪🏋️♂️

**Live Demo:** [Link to your Gladiator Pose Analyzer Hugging Face Space] (<- REPLACE THIS with your actual Space URL after deployment)

## Overview

The Gladiator Pose Analyzer is a web application designed for bodybuilding pose analysis and feedback. Users can upload videos of their poses, and the application utilizes computer vision models to provide insights into angles, form corrections, and pose classification.

This Space uses a Flask backend with various machine learning models for pose estimation and classification.

## Features

* **Video Upload:** Upload your bodybuilding pose videos (MP4, AVI, MOV, MKV).
* **Multiple Pose Estimation Models:**
    * **Gladiator SupaDot (MediaPipe):** General pose estimation using MediaPipe Pose.
    * **Gladiator BB - Lightning (MoveNet):** Fast and efficient pose estimation with MoveNet Lightning.
    * **Gladiator BB - Thunder (MoveNet):** Higher accuracy pose estimation with MoveNet Thunder.
    * **(Experimental) YOLOv7-w6 Pose:** Object detection based pose estimation (can be selected if enabled in UI).
* **Automated Pose Classification:** A custom-trained CNN classifies common bodybuilding poses (e.g., Side Chest, Front Double Biceps).
* **Real-time Feedback Panel:** Displays:
    * Selected model.
    * Current classified pose (via CNN, updated periodically).
    * Calculated body angles (e.g., shoulder, elbow, knee).
    * Specific form corrections based on ideal angle ranges for classified poses.
    * General notes for poses where specific angle checks aren't defined.
* **Processed Video Output:** View the input video overlaid with detected keypoints and the feedback panel.

## How to Use

1. **Navigate to the Live Demo link** provided above.
2. **Choose a Pose Estimation Model** from the dropdown menu:
    * `Gladiator SupaDot` (MediaPipe based)
    * `Gladiator BB - Lightning` (MoveNet Lightning)
    * `Gladiator BB - Thunder` (MoveNet Thunder)
3. **Select a Video File:** Click the "Choose File" button and select a video of your pose.
4. **Upload:** Click the "Upload Video" button.
5. **Processing:** Wait for the video to be processed. The server will analyze the video frame by frame.
6. **View Results:** The processed video with keypoint overlays and the dynamic feedback panel will be displayed.

## Models Used

* **Pose Estimation:**
    * **MediaPipe Pose:** For the "Gladiator SupaDot" option.
    * **Google MoveNet (Lightning & Thunder):** TensorFlow Hub models for "Gladiator BB" options.
    * **YOLOv7-w6 Pose:** `yolov7-w6-pose.pt` (if enabled/selected).
* **Pose Classification:**
    * A custom Convolutional Neural Network (CNN) trained on bodybuilding poses, loaded from `external/BodybuildingPoseClassifier/bodybuilding_pose_classifier.h5`.
    * Classes: Side Chest, Front Double Biceps, Back Double Biceps, Front Lat Spread, Back Lat Spread.

## Technical Stack

* **Backend:** Flask (Python)
* **Frontend:** HTML, CSS, JavaScript (served by Flask)
* **CV & ML Libraries:** OpenCV, TensorFlow/Keras, PyTorch, MediaPipe
* **Deployment:** Docker on Hugging Face Spaces

## Known Issues & Limitations

* Accuracy of pose estimation and classification can vary depending on video quality, lighting, angle, and occlusion.
* The feedback provided is based on predefined angle ranges and may not cover all nuances of perfect form.
* Processing time can be significant for longer videos or when using more computationally intensive models.

---

*Remember to replace placeholder links and add any other specific information relevant to your project!*
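
The upload flow described above can also be exercised programmatically against the `/upload` route defined in `HFup/app.py` below. The following is a minimal client sketch; the Space URL and the video filename are placeholders, and the form fields mirror what `upload_file()` reads (`video`, `model_choice`, and `movenet_variant`).

```python
# Hypothetical client for the /upload endpoint; replace SPACE_URL with your deployment.
import requests

SPACE_URL = "https://your-space.hf.space"  # placeholder URL

with open("my_pose_video.mp4", "rb") as f:  # placeholder video file
    resp = requests.post(
        f"{SPACE_URL}/upload",
        files={"video": f},
        data={"model_choice": "movenet", "movenet_variant": "lightning"},
        timeout=600,  # frame-by-frame processing can take a while
    )

resp.raise_for_status()
result = resp.json()
# 'output_path' is the relative URL served by the Flask 'serve_video' route
print("Processed video:", SPACE_URL + result["output_path"])
```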
HFup/app.py
ADDED
@@ -0,0 +1,590 @@
from flask import Flask, render_template, request, jsonify, send_from_directory, url_for
from flask_cors import CORS
import cv2
import torch
import numpy as np
import os
from werkzeug.utils import secure_filename
import sys
import traceback
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image
import time

# Add bodybuilding_pose_analyzer to path
sys.path.append('.')  # Assuming app.py is at the root of cv.github.io
from bodybuilding_pose_analyzer.src.movenet_analyzer import MoveNetAnalyzer
from bodybuilding_pose_analyzer.src.pose_analyzer import PoseAnalyzer

# Add YOLOv7 to path
sys.path.append('yolov7')

from yolov7.models.experimental import attempt_load
from yolov7.utils.general import check_img_size, non_max_suppression_kpt, scale_coords
from yolov7.utils.torch_utils import select_device
from yolov7.utils.plots import plot_skeleton_kpts

def wrap_text(text: str, font_face: int, font_scale: float, thickness: int, max_width: int) -> list[str]:
    """Wrap text to fit within max_width."""
    if not text:
        return []

    lines = []
    words = text.split(' ')
    current_line = ''

    for word in words:
        # Check width if current_line + word fits
        test_line = current_line + word + ' '
        (text_width, _), _ = cv2.getTextSize(test_line.strip(), font_face, font_scale, thickness)

        if text_width <= max_width:
            current_line = test_line
        else:
            # Word doesn't fit, so current_line (without the new word) is a complete line
            lines.append(current_line.strip())
            # Start new line with the current word
            current_line = word + ' '
            # If a single word is too long, it will still overflow. Handle by breaking word if necessary (future enhancement)
            (single_word_width, _), _ = cv2.getTextSize(word.strip(), font_face, font_scale, thickness)
            if single_word_width > max_width:
                # For now, just add the long word and let it overflow, or truncate it.
                # A more complex solution would break the word.
                lines.append(word.strip())  # Add the long word as its own line
                current_line = ''  # Reset current_line as the long word is handled

    if current_line.strip():  # Add the last line
        lines.append(current_line.strip())

    return lines if lines else [text]  # Ensure at least the original text is returned if no wrapping happens

app = Flask(__name__, static_url_path='/static', static_folder='static')
CORS(app, resources={r"/*": {"origins": "*"}})

app.config['UPLOAD_FOLDER'] = 'static/uploads'
app.config['MAX_CONTENT_LENGTH'] = 16 * 1024 * 1024  # 16MB max file size

# Ensure upload directory exists
os.makedirs(app.config['UPLOAD_FOLDER'], exist_ok=True)

# Initialize YOLOv7 model
device = select_device('')
yolo_model = None  # Initialize as None
stride = None
imgsz = None

try:
    yolo_model = attempt_load('yolov7-w6-pose.pt', map_location=device)
    stride = int(yolo_model.stride.max())
    imgsz = check_img_size(640, s=stride)
    print("YOLOv7 Model loaded successfully")
except Exception as e:
    print(f"Error loading YOLOv7 model: {e}")
    traceback.print_exc()
    # Not raising here to allow app to run if only MoveNet is used. Error will be caught if YOLOv7 is selected.

# YOLOv7 pose model expects 17 keypoints
kpt_shape = (17, 3)

# Load CNN model for bodybuilding pose classification
cnn_model_path = 'external/BodybuildingPoseClassifier/bodybuilding_pose_classifier.h5'
cnn_model = load_model(cnn_model_path)
cnn_class_labels = ['side_chest', 'front_double_biceps', 'back_double_biceps', 'front_lat_spread', 'back_lat_spread']

def predict_pose_cnn(img_path):
    img = image.load_img(img_path, target_size=(150, 150))
    img_array = image.img_to_array(img)
    img_array = np.expand_dims(img_array, axis=0) / 255.0
    predictions = cnn_model.predict(img_array)
    predicted_class = np.argmax(predictions, axis=1)
    confidence = float(np.max(predictions))
    return cnn_class_labels[predicted_class[0]], confidence

@app.route('/static/uploads/<path:filename>')
def serve_video(filename):
    response = send_from_directory(app.config['UPLOAD_FOLDER'], filename, as_attachment=False)
    # Ensure correct content type, especially for Safari/iOS if issues arise
    if filename.lower().endswith('.mp4'):
        response.headers['Content-Type'] = 'video/mp4'
    return response

@app.after_request
def after_request(response):
    response.headers.add('Access-Control-Allow-Origin', '*')
    response.headers.add('Access-Control-Allow-Headers', 'Content-Type,Authorization,X-Requested-With,Accept')
    response.headers.add('Access-Control-Allow-Methods', 'GET,PUT,POST,DELETE,OPTIONS')
    return response

def process_video_yolov7(video_path):  # Renamed from process_video
    global yolo_model, imgsz, stride  # Ensure global model is used
    if yolo_model is None:
        raise RuntimeError("YOLOv7 model failed to load. Cannot process video.")
    try:
        if not os.path.exists(video_path):
            raise FileNotFoundError(f"Video file not found: {video_path}")

        cap = cv2.VideoCapture(video_path)
        if not cap.isOpened():
            raise ValueError(f"Failed to open video file: {video_path}")

        fps = int(cap.get(cv2.CAP_PROP_FPS))
        width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

        print(f"Processing video: {width}x{height} @ {fps}fps")

        # Create output video writer
        output_path = os.path.join(app.config['UPLOAD_FOLDER'], 'output.mp4')
        fourcc = cv2.VideoWriter_fourcc(*'avc1')
        out = cv2.VideoWriter(output_path, fourcc, fps, (width, height))

        frame_count = 0
        while cap.isOpened():
            ret, frame = cap.read()
            if not ret:
                break

            frame_count += 1
            print(f"Processing frame {frame_count}")

            # Prepare image
            img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            img = cv2.resize(img, (imgsz, imgsz))
            img = img.transpose((2, 0, 1))  # HWC to CHW
            img = np.ascontiguousarray(img)
            img = torch.from_numpy(img).to(device)
            img = img.float() / 255.0
            if img.ndimension() == 3:
                img = img.unsqueeze(0)

            # Inference
            with torch.no_grad():
                pred = yolo_model(img)[0]  # Use yolo_model
                pred = non_max_suppression_kpt(pred, conf_thres=0.25, iou_thres=0.45, nc=yolo_model.yaml['nc'], kpt_label=True)

            # Draw results
            output_frame = frame.copy()
            poses_detected = False
            for det in pred:
                if len(det):
                    poses_detected = True
                    det[:, :4] = scale_coords(img.shape[2:], det[:, :4], frame.shape).round()
                    for row in det:
                        xyxy = row[:4]
                        conf = row[4]
                        cls = row[5]
                        kpts = row[6:]
                        kpts = torch.tensor(kpts).view(kpt_shape)
                        output_frame = plot_skeleton_kpts(output_frame, kpts, steps=3, orig_shape=output_frame.shape[:2])

            if not poses_detected:
                print(f"No poses detected in frame {frame_count}")

            out.write(output_frame)

        cap.release()
        out.release()

        if frame_count == 0:
            raise ValueError("No frames were processed from the video")

        print(f"Video processing completed. Processed {frame_count} frames")
        # Return URL for the client, using the 'serve_video' endpoint
        output_filename = 'output.mp4'
        return url_for('serve_video', filename=output_filename, _external=False)
    except Exception as e:
        print('Error in process_video:', e)
        traceback.print_exc()
        raise

def process_video_movenet(video_path, model_variant='lightning', pose_type='front_double_biceps'):
    try:
        print(f"[PROCESS_VIDEO_MOVENET] Called with video_path: {video_path}, model_variant: {model_variant}, pose_type: {pose_type}")
        if not os.path.exists(video_path):
            raise FileNotFoundError(f"Video file not found: {video_path}")

        analyzer = MoveNetAnalyzer(model_name=model_variant)
        cap = cv2.VideoCapture(video_path)
        if not cap.isOpened():
            raise ValueError(f"Failed to open video file: {video_path}")
        fps = int(cap.get(cv2.CAP_PROP_FPS))
        width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

        # Add panel width to total width
        panel_width = 300
        total_width = width + panel_width

        print(f"Processing video with MoveNet ({model_variant}): {width}x{height} @ {fps}fps")
        print(f"Output dimensions will be: {total_width}x{height}")
        output_filename = f'output_movenet_{model_variant}.mp4'
        output_path = os.path.join(app.config['UPLOAD_FOLDER'], output_filename)
        print(f"Output path: {output_path}")

        fourcc = cv2.VideoWriter_fourcc(*'avc1')
        out = cv2.VideoWriter(output_path, fourcc, fps, (total_width, height))
        if not out.isOpened():
            raise ValueError(f"Failed to create output video writer at {output_path}")

        frame_count = 0
        current_pose = pose_type
        segment_length = 4 * fps if fps > 0 else 120
        cnn_pose = None
        last_valid_landmarks = None
        landmarks_analysis = {'error': 'Processing not started'}  # Initialize landmarks_analysis

        while cap.isOpened():
            ret, frame = cap.read()
            if not ret:
                break
            frame_count += 1
            if frame_count % 30 == 0:
                print(f"Processing frame {frame_count}")

            # Process frame
            processed_frame, current_landmarks_analysis, landmarks = analyzer.process_frame(frame, current_pose, last_valid_landmarks=last_valid_landmarks)
            landmarks_analysis = current_landmarks_analysis  # Update with the latest analysis
            if frame_count % 30 == 0:  # Log every 30 frames
                print(f"[MOVENET_DEBUG] Frame {frame_count} - landmarks_analysis: {landmarks_analysis}")
            if landmarks:
                last_valid_landmarks = landmarks

            # CNN prediction (every 4 seconds)
            if (frame_count - 1) % segment_length == 0:
                temp_img_path = f'temp_frame_for_cnn_{frame_count}.jpg'  # Unique temp name
                cv2.imwrite(temp_img_path, frame)
                try:
                    cnn_pose_pred, cnn_conf = predict_pose_cnn(temp_img_path)
                    print(f"[CNN] Frame {frame_count}: Pose: {cnn_pose_pred}, Conf: {cnn_conf:.2f}")
                    if cnn_conf >= 0.3:
                        current_pose = cnn_pose_pred  # Update current_pose for the analyzer
                except Exception as e:
                    print(f"[CNN] Error predicting pose on frame {frame_count}: {e}")
                finally:
                    if os.path.exists(temp_img_path):
                        os.remove(temp_img_path)

            # Create side panel
            panel = np.zeros((height, panel_width, 3), dtype=np.uint8)

            # --- Dynamic Text Parameter Calculations ---
            current_font = cv2.FONT_HERSHEY_DUPLEX

            # Base font scale and reference video height for scaling
            # Adjust base_font_scale_at_ref_height if text is generally too large or too small
            base_font_scale_at_ref_height = 0.6
            reference_height_for_font_scale = 640.0  # e.g., a common video height like 480p, 720p

            # Calculate dynamic font_scale
            font_scale = (height / reference_height_for_font_scale) * base_font_scale_at_ref_height
            # Clamp font_scale to a min/max range to avoid extremes
            font_scale = max(0.4, min(font_scale, 1.2))

            # Calculate dynamic thickness
            thickness = 1 if font_scale < 0.7 else 2

            # Calculate dynamic line_height based on actual text height
            # Using a sample string like "Ag" which has ascenders and descenders
            (_, text_actual_height), _ = cv2.getTextSize("Ag", current_font, font_scale, thickness)
            line_spacing_factor = 1.8  # Adjust for more or less space between lines
            line_height = int(text_actual_height * line_spacing_factor)
            line_height = max(line_height, 15)  # Ensure a minimum line height

            # Initial y_offset for the first line of text
            y_offset_panel = max(line_height, 20)  # Start considering top margin and text height
            # --- End of Dynamic Text Parameter Calculations ---

            display_model_name = f"Gladiator {model_variant.capitalize()}"
            cv2.putText(panel, f"Model: {display_model_name}", (10, y_offset_panel), current_font, font_scale, (0, 255, 255), thickness, lineType=cv2.LINE_AA)
            y_offset_panel += line_height

            if 'error' not in landmarks_analysis:
                cv2.putText(panel, "Angles:", (10, y_offset_panel), current_font, font_scale, (255, 255, 255), thickness, lineType=cv2.LINE_AA)
                y_offset_panel += line_height
                for joint, angle in landmarks_analysis.get('angles', {}).items():
                    text_to_display = f"{joint.capitalize()}: {angle:.1f} deg"
                    cv2.putText(panel, text_to_display, (20, y_offset_panel), current_font, font_scale, (0, 255, 0), thickness, lineType=cv2.LINE_AA)
                    y_offset_panel += line_height

                # Define available width for text within the panel, considering padding
                text_area_x_start = 20
                panel_padding = 10  # Padding from the right edge of the panel
                text_area_width = panel_width - text_area_x_start - panel_padding

                if landmarks_analysis.get('corrections'):
                    y_offset_panel += int(line_height * 0.5)  # Smaller gap before section title
                    cv2.putText(panel, "Corrections:", (10, y_offset_panel), current_font, font_scale, (255, 255, 255), thickness, lineType=cv2.LINE_AA)
                    y_offset_panel += line_height
                    for correction_text in landmarks_analysis.get('corrections', []):
                        wrapped_lines = wrap_text(correction_text, current_font, font_scale, thickness, text_area_width)
                        for line in wrapped_lines:
                            cv2.putText(panel, line, (text_area_x_start, y_offset_panel), current_font, font_scale, (0, 0, 255), thickness, lineType=cv2.LINE_AA)
                            y_offset_panel += line_height

                # Display notes if any
                if landmarks_analysis.get('notes'):
                    y_offset_panel += int(line_height * 0.5)  # Smaller gap before section title
                    cv2.putText(panel, "Notes:", (10, y_offset_panel), current_font, font_scale, (200, 200, 200), thickness, lineType=cv2.LINE_AA)
                    y_offset_panel += line_height
                    for note_text in landmarks_analysis.get('notes', []):
                        wrapped_lines = wrap_text(note_text, current_font, font_scale, thickness, text_area_width)
                        for line in wrapped_lines:
                            cv2.putText(panel, line, (text_area_x_start, y_offset_panel), current_font, font_scale, (200, 200, 200), thickness, lineType=cv2.LINE_AA)
                            y_offset_panel += line_height
            else:
                cv2.putText(panel, "Error:", (10, y_offset_panel), current_font, font_scale, (255, 255, 255), thickness, lineType=cv2.LINE_AA)
                y_offset_panel += line_height
                # Also wrap error message if it can be long
                error_text = landmarks_analysis.get('error', 'Unknown error')
                text_area_x_start = 20  # Assuming error message also starts at x=20
                panel_padding = 10
                text_area_width = panel_width - text_area_x_start - panel_padding
                wrapped_error_lines = wrap_text(error_text, current_font, font_scale, thickness, text_area_width)
                for line in wrapped_error_lines:
                    cv2.putText(panel, line, (text_area_x_start, y_offset_panel), current_font, font_scale, (0, 0, 255), thickness, lineType=cv2.LINE_AA)
                    y_offset_panel += line_height

            combined_frame = np.hstack((processed_frame, panel))
            out.write(combined_frame)

        cap.release()
        out.release()

        if frame_count == 0:
            raise ValueError("No frames were processed from the video by MoveNet")

        print(f"MoveNet video processing completed. Processed {frame_count} frames. Output: {output_path}")
        print(f"Output file size: {os.path.getsize(output_path)} bytes")

        return url_for('serve_video', filename=output_filename, _external=False)
    except Exception as e:
        print(f'Error in process_video_movenet: {e}')
        traceback.print_exc()
        raise

def process_video_mediapipe(video_path):
    try:
        print(f"[PROCESS_VIDEO_MEDIAPIPE] Called with video_path: {video_path}")
        if not os.path.exists(video_path):
            raise FileNotFoundError(f"Video file not found: {video_path}")

        analyzer = PoseAnalyzer()
        cap = cv2.VideoCapture(video_path)
        if not cap.isOpened():
            raise ValueError(f"Failed to open video file: {video_path}")
        fps = int(cap.get(cv2.CAP_PROP_FPS))
        width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

        # Add panel width to total width
        panel_width = 300
        total_width = width + panel_width

        print(f"Processing video with MediaPipe: {width}x{height} @ {fps}fps")
        output_filename = f'output_mediapipe.mp4'
        output_path = os.path.join(app.config['UPLOAD_FOLDER'], output_filename)
        fourcc = cv2.VideoWriter_fourcc(*'avc1')
        out = cv2.VideoWriter(output_path, fourcc, fps, (total_width, height))
        if not out.isOpened():
            raise ValueError(f"Failed to create output video writer at {output_path}")

        frame_count = 0
        current_pose = 'Uncertain'  # Initial pose for MediaPipe
        segment_length = 4 * fps if fps > 0 else 120
        cnn_pose = None
        last_valid_landmarks = None
        analysis_results = {'error': 'Processing not started'}  # Initialize analysis_results

        while cap.isOpened():
            ret, frame = cap.read()
            if not ret:
                break
            frame_count += 1
            if frame_count % 30 == 0:
                print(f"Processing frame {frame_count}")

            # Process frame with MediaPipe
            processed_frame, current_analysis_results, landmarks = analyzer.process_frame(frame, last_valid_landmarks=last_valid_landmarks)
            analysis_results = current_analysis_results  # Update with the latest analysis
            if landmarks:
                last_valid_landmarks = landmarks

            # CNN prediction (every 4 seconds)
            if (frame_count - 1) % segment_length == 0:
                temp_img_path = f'temp_frame_for_cnn_{frame_count}.jpg'  # Unique temp name
                cv2.imwrite(temp_img_path, frame)
                try:
                    cnn_pose_pred, cnn_conf = predict_pose_cnn(temp_img_path)
                    print(f"[CNN] Frame {frame_count}: Pose: {cnn_pose_pred}, Conf: {cnn_conf:.2f}")
                    if cnn_conf >= 0.3:
                        current_pose = cnn_pose_pred  # Update current_pose to be displayed
                except Exception as e:
                    print(f"[CNN] Error predicting pose on frame {frame_count}: {e}")
                finally:
                    if os.path.exists(temp_img_path):
                        os.remove(temp_img_path)

            # Create side panel
            panel = np.zeros((height, panel_width, 3), dtype=np.uint8)

            # --- Dynamic Text Parameter Calculations ---
            current_font = cv2.FONT_HERSHEY_DUPLEX

            # Base font scale and reference video height for scaling
            # Adjust base_font_scale_at_ref_height if text is generally too large or too small
            base_font_scale_at_ref_height = 0.6
            reference_height_for_font_scale = 640.0  # e.g., a common video height like 480p, 720p

            # Calculate dynamic font_scale
            font_scale = (height / reference_height_for_font_scale) * base_font_scale_at_ref_height
            # Clamp font_scale to a min/max range to avoid extremes
            font_scale = max(0.4, min(font_scale, 1.2))

            # Calculate dynamic thickness
            thickness = 1 if font_scale < 0.7 else 2

            # Calculate dynamic line_height based on actual text height
            # Using a sample string like "Ag" which has ascenders and descenders
            (_, text_actual_height), _ = cv2.getTextSize("Ag", current_font, font_scale, thickness)
            line_spacing_factor = 1.8  # Adjust for more or less space between lines
            line_height = int(text_actual_height * line_spacing_factor)
            line_height = max(line_height, 15)  # Ensure a minimum line height

            # Initial y_offset for the first line of text
            y_offset_panel = max(line_height, 20)  # Start considering top margin and text height
            # --- End of Dynamic Text Parameter Calculations ---

            cv2.putText(panel, "Model: Gladiator SupaDot", (10, y_offset_panel), current_font, font_scale, (0, 255, 255), thickness, lineType=cv2.LINE_AA)
            y_offset_panel += line_height
            if frame_count % 30 == 0:  # Print every 30 frames to avoid flooding console
                print(f"[MEDIAPIPE_PANEL] Frame {frame_count} - Current Pose for Panel: {current_pose}")
            cv2.putText(panel, f"Pose: {current_pose}", (10, y_offset_panel), current_font, font_scale, (255, 0, 0), thickness, lineType=cv2.LINE_AA)
            y_offset_panel += int(line_height * 1.5)

            if 'error' not in analysis_results:
                cv2.putText(panel, "Angles:", (10, y_offset_panel), current_font, font_scale, (255, 255, 255), thickness, lineType=cv2.LINE_AA)
                y_offset_panel += line_height
                for joint, angle in analysis_results.get('angles', {}).items():
                    text_to_display = f"{joint.capitalize()}: {angle:.1f} deg"
                    cv2.putText(panel, text_to_display, (20, y_offset_panel), current_font, font_scale, (0, 255, 0), thickness, lineType=cv2.LINE_AA)
                    y_offset_panel += line_height

                if analysis_results.get('corrections'):
                    y_offset_panel += line_height
                    cv2.putText(panel, "Corrections:", (10, y_offset_panel), current_font, font_scale, (255, 255, 255), thickness, lineType=cv2.LINE_AA)
                    y_offset_panel += line_height
                    for correction in analysis_results.get('corrections', []):
                        cv2.putText(panel, f"• {correction}", (20, y_offset_panel), current_font, font_scale, (0, 0, 255), thickness, lineType=cv2.LINE_AA)
                        y_offset_panel += line_height

                # Display notes if any
                if analysis_results.get('notes'):
                    y_offset_panel += line_height
                    cv2.putText(panel, "Notes:", (10, y_offset_panel), current_font, font_scale, (200, 200, 200), thickness, lineType=cv2.LINE_AA)  # Grey color for notes
                    y_offset_panel += line_height
                    for note in analysis_results.get('notes', []):
                        cv2.putText(panel, f"• {note}", (20, y_offset_panel), current_font, font_scale, (200, 200, 200), thickness, lineType=cv2.LINE_AA)
                        y_offset_panel += line_height
            else:
                cv2.putText(panel, "Error:", (10, y_offset_panel), current_font, font_scale, (255, 255, 255), thickness, lineType=cv2.LINE_AA)
                y_offset_panel += line_height
                cv2.putText(panel, analysis_results.get('error', 'Unknown error'), (20, y_offset_panel), current_font, font_scale, (0, 0, 255), thickness, lineType=cv2.LINE_AA)

            combined_frame = np.hstack((processed_frame, panel))  # Use processed_frame from analyzer
            out.write(combined_frame)

        cap.release()
        out.release()
        if frame_count == 0:
            raise ValueError("No frames were processed from the video by MediaPipe")
        print(f"MediaPipe video processing completed. Processed {frame_count} frames. Output: {output_path}")
        return url_for('serve_video', filename=output_filename, _external=False)
    except Exception as e:
        print(f'Error in process_video_mediapipe: {e}')
        traceback.print_exc()
        raise

@app.route('/')
def index():
    return render_template('index.html')

@app.route('/upload', methods=['POST'])
def upload_file():
    try:
        if 'video' not in request.files:
            print("[UPLOAD] No video file in request")
            return jsonify({'error': 'No video file provided'}), 400

        file = request.files['video']
        if file.filename == '':
            print("[UPLOAD] Empty filename")
            return jsonify({'error': 'No selected file'}), 400

        if file:
            allowed_extensions = {'mp4', 'avi', 'mov', 'mkv'}
            if '.' not in file.filename or file.filename.rsplit('.', 1)[1].lower() not in allowed_extensions:
                print(f"[UPLOAD] Invalid file format: {file.filename}")
                return jsonify({'error': 'Invalid file format. Allowed formats: mp4, avi, mov, mkv'}), 400

            # Ensure the filename is properly sanitized
            filename = secure_filename(file.filename)
            print(f"[UPLOAD] Original filename: {file.filename}")
            print(f"[UPLOAD] Sanitized filename: {filename}")

            # Create a unique filename to prevent conflicts
            base, ext = os.path.splitext(filename)
            unique_filename = f"{base}_{int(time.time())}{ext}"
            filepath = os.path.join(app.config['UPLOAD_FOLDER'], unique_filename)

            print(f"[UPLOAD] Saving file to: {filepath}")
            file.save(filepath)

            if not os.path.exists(filepath):
                print(f"[UPLOAD] File not found after save: {filepath}")
                return jsonify({'error': 'Failed to save uploaded file'}), 500

            print(f"[UPLOAD] File saved successfully. Size: {os.path.getsize(filepath)} bytes")

            try:
                model_choice = request.form.get('model_choice', 'Gladiator SupaDot')
                print(f"[UPLOAD] Processing with model: {model_choice}")

                if model_choice == 'movenet':
                    movenet_variant = request.form.get('movenet_variant', 'lightning')
                    print(f"[UPLOAD] Using MoveNet variant: {movenet_variant}")
                    output_path_url = process_video_movenet(filepath, model_variant=movenet_variant)
                else:
                    output_path_url = process_video_mediapipe(filepath)

                print(f"[UPLOAD] Processing complete. Output URL: {output_path_url}")

                if not os.path.exists(os.path.join(app.config['UPLOAD_FOLDER'], os.path.basename(output_path_url))):
                    print(f"[UPLOAD] Output file not found: {output_path_url}")
                    return jsonify({'error': 'Output video file not found'}), 500

                return jsonify({
                    'message': f'Video processed successfully with {model_choice}',
                    'output_path': output_path_url
                })

            except Exception as e:
                print(f"[UPLOAD] Error processing video: {str(e)}")
                traceback.print_exc()
                return jsonify({'error': f'Error processing video: {str(e)}'}), 500

            finally:
                try:
                    if os.path.exists(filepath):
                        os.remove(filepath)
                        print(f"[UPLOAD] Cleaned up input file: {filepath}")
                except Exception as e:
                    print(f"[UPLOAD] Error cleaning up file: {str(e)}")

    except Exception as e:
        print(f"[UPLOAD] Unexpected error: {str(e)}")
        traceback.print_exc()
        return jsonify({'error': 'Internal server error'}), 500

if __name__ == '__main__':
    # Ensure the port is 7860 and debug is False for HF Spaces deployment
    app.run(host='0.0.0.0', port=7860, debug=False)
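
As a quick sanity check of the dynamic text sizing used in both `process_video_movenet` and `process_video_mediapipe` above, the following standalone sketch reproduces that calculation for an illustrative 480-pixel-high video (the height and the printed values are examples, not outputs from the app itself):

```python
# Reproduces the panel text sizing logic from app.py for an illustrative height of 480.
import cv2

height = 480
base_font_scale_at_ref_height = 0.6
reference_height_for_font_scale = 640.0

font_scale = (height / reference_height_for_font_scale) * base_font_scale_at_ref_height
font_scale = max(0.4, min(font_scale, 1.2))   # 0.75 * 0.6 = 0.45, within the clamp range
thickness = 1 if font_scale < 0.7 else 2      # -> 1 for this height

(_, text_actual_height), _ = cv2.getTextSize("Ag", cv2.FONT_HERSHEY_DUPLEX, font_scale, thickness)
line_height = max(int(text_actual_height * 1.8), 15)

print(font_scale, thickness, line_height)
```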
HFup/bodybuilding_pose_analyzer/README.md
ADDED
@@ -0,0 +1,63 @@
# Bodybuilding Pose Analyzer

A real-time pose analysis tool for bodybuilders that helps analyze and provide feedback on common bodybuilding poses.

## Features

- Real-time pose detection using MediaPipe
- Analysis of common bodybuilding poses:
  - Front Double Biceps
  - Side Chest
  - Back Double Biceps
- Angle measurements for key body parts
- Real-time feedback and corrections
- FPS display

## Requirements

- Python 3.8+
- Webcam
- Required Python packages (listed in requirements.txt)

## Installation

1. Clone the repository:
```bash
git clone <repository-url>
cd bodybuilding_pose_analyzer
```

2. Create a virtual environment (recommended):
```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```

3. Install required packages:
```bash
pip install -r requirements.txt
```

## Usage

1. Run the demo script:
```bash
python src/demo.py
```

2. Position yourself in front of the webcam
3. The system will automatically detect your pose and provide feedback
4. Press 'q' to quit the application

## Supported Poses

Currently, the system supports the following poses:
- Front Double Biceps
- Side Chest
- Back Double Biceps

More poses will be added in future updates.

## Contributing

Feel free to submit issues and enhancement requests!
HFup/bodybuilding_pose_analyzer/bodybuilding_pose_classifier.h5
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:56cdfbdadbef2675622e699fcf444d5bcb0aab6c695bb32165ae60e984278346
size 228483160
HFup/bodybuilding_pose_analyzer/requirements.txt
ADDED
@@ -0,0 +1,8 @@
opencv-python>=4.8.0
mediapipe>=0.10.0
numpy>=1.24.0
torch>=2.0.0
torchvision>=0.15.0
scikit-learn>=1.3.0
matplotlib>=3.7.0
tqdm>=4.65.0
HFup/bodybuilding_pose_analyzer/src/__pycache__/movenet_analyzer.cpython-310.pyc
ADDED
Binary file (6.91 kB).
HFup/bodybuilding_pose_analyzer/src/__pycache__/pose_analyzer.cpython-310.pyc
ADDED
Binary file (5.46 kB).
HFup/bodybuilding_pose_analyzer/src/demo.py
ADDED
@@ -0,0 +1,80 @@
import cv2
import time
import argparse
from pose_analyzer import PoseAnalyzer

def process_video(video_source, analyzer):
    # Initialize video capture
    cap = cv2.VideoCapture(video_source)

    # Set window properties
    cv2.namedWindow('Bodybuilding Pose Analyzer', cv2.WINDOW_NORMAL)
    cv2.resizeWindow('Bodybuilding Pose Analyzer', 1280, 720)

    # FPS calculation variables
    prev_time = 0
    curr_time = 0

    while cap.isOpened():
        # Read frame
        ret, frame = cap.read()
        if not ret:
            break

        # Calculate FPS
        curr_time = time.time()
        fps = 1 / (curr_time - prev_time) if prev_time > 0 else 0
        prev_time = curr_time

        # Process frame
        frame_with_pose, analysis = analyzer.process_frame(frame)

        # Add FPS and analysis text to frame
        cv2.putText(frame_with_pose, f'FPS: {fps:.1f}', (10, 30),
                    cv2.FONT_HERSHEY_PLAIN, 0.5, (0, 255, 0), 1, lineType=cv2.LINE_AA)

        # Display feedback
        if 'error' not in analysis:
            y_offset = 70
            cv2.putText(frame_with_pose, f'Pose: {analysis["pose_type"]}', (10, y_offset),
                        cv2.FONT_HERSHEY_PLAIN, 0.5, (0, 255, 0), 1, lineType=cv2.LINE_AA)

            for angle_name, angle_value in analysis['angles'].items():
                y_offset += 40
                cv2.putText(frame_with_pose, f'{angle_name}: {angle_value:.1f}°', (10, y_offset),
                            cv2.FONT_HERSHEY_PLAIN, 0.5, (0, 255, 0), 1, lineType=cv2.LINE_AA)

            for correction in analysis['corrections']:
                y_offset += 40
                cv2.putText(frame_with_pose, correction, (10, y_offset),
                            cv2.FONT_HERSHEY_PLAIN, 0.5, (0, 0, 255), 1, lineType=cv2.LINE_AA)
        else:
            cv2.putText(frame_with_pose, analysis['error'], (10, 70),
                        cv2.FONT_HERSHEY_PLAIN, 0.5, (0, 0, 255), 1, lineType=cv2.LINE_AA)

        # Display the frame
        cv2.imshow('Bodybuilding Pose Analyzer', frame_with_pose)

        # Break the loop if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    # Release resources
    cap.release()
    cv2.destroyAllWindows()

def main():
    # Parse command line arguments
    parser = argparse.ArgumentParser(description='Bodybuilding Pose Analyzer Demo')
    parser.add_argument('--video', type=str, help='Path to video file (optional)')
    args = parser.parse_args()

    # Initialize the pose analyzer
    analyzer = PoseAnalyzer()

    # Process video (either webcam or file)
    video_source = args.video if args.video else 0
    process_video(video_source, analyzer)

if __name__ == '__main__':
    main()
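
For a single image instead of a video stream, the same analyzer can be driven directly. This sketch mirrors the call pattern used in `demo.py` above; the image path is a placeholder, and `PoseAnalyzer` is imported the same way `demo.py` imports it:

```python
# Single-frame variant of the demo loop; 'sample_pose.jpg' is a hypothetical test image.
import cv2
from pose_analyzer import PoseAnalyzer

analyzer = PoseAnalyzer()
frame = cv2.imread("sample_pose.jpg")
frame_with_pose, analysis = analyzer.process_frame(frame)

if 'error' not in analysis:
    print("Pose:", analysis['pose_type'])
    for angle_name, angle_value in analysis['angles'].items():
        print(f"{angle_name}: {angle_value:.1f}")
    for correction in analysis['corrections']:
        print("Correction:", correction)
else:
    print("Error:", analysis['error'])
```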
HFup/bodybuilding_pose_analyzer/src/movenet_analyzer.py
ADDED
@@ -0,0 +1,321 @@
import cv2
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
from typing import List, Dict, Tuple

class MoveNetAnalyzer:
    KEYPOINT_DICT = {
        'nose': 0,
        'left_eye': 1,
        'right_eye': 2,
        'left_ear': 3,
        'right_ear': 4,
        'left_shoulder': 5,
        'right_shoulder': 6,
        'left_elbow': 7,
        'right_elbow': 8,
        'left_wrist': 9,
        'right_wrist': 10,
        'left_hip': 11,
        'right_hip': 12,
        'left_knee': 13,
        'right_knee': 14,
        'left_ankle': 15,
        'right_ankle': 16
    }

    def __init__(self, model_name="lightning"):
        # Initialize MoveNet model
        if model_name == "lightning":
            self.model = hub.load("https://tfhub.dev/google/movenet/singlepose/lightning/4")
            self.input_size = 192
        else:  # thunder
            self.model = hub.load("https://tfhub.dev/google/movenet/singlepose/thunder/4")
            self.input_size = 256

        self.movenet = self.model.signatures['serving_default']

        # Define key angles for bodybuilding poses
        self.key_angles = {
            'front_double_biceps': {
                'shoulder_angle': (90, 120),  # Expected angle range
                'elbow_angle': (80, 100),
                'wrist_angle': (0, 20)
            },
            'side_chest': {
                'shoulder_angle': (45, 75),
                'elbow_angle': (90, 110),
                'wrist_angle': (0, 20)
            },
            'back_double_biceps': {
                'shoulder_angle': (90, 120),
                'elbow_angle': (80, 100),
                'wrist_angle': (0, 20)
            }
        }

    def detect_pose(self, frame: np.ndarray, last_valid_landmarks=None) -> Tuple[np.ndarray, List[Dict]]:
        """
        Detect pose in the given frame and return the frame with pose landmarks drawn
        and the list of detected landmarks.
        If detection fails, reuse last valid landmarks if provided.
        """
        # Resize and pad the image to keep aspect ratio
        img = frame.copy()
        img = tf.image.resize_with_pad(tf.expand_dims(img, axis=0), self.input_size, self.input_size)
        img = tf.cast(img, dtype=tf.int32)

        # Detection
        results = self.movenet(img)
        keypoints = results['output_0'].numpy()  # Shape [1, 1, 17, 3]

        # Draw the pose landmarks on the frame
        if keypoints[0, 0, 0, 2] > 0.1:  # Lowered confidence threshold for the nose
            # Convert keypoints to image coordinates
            y, x, c = frame.shape
            shaped = np.squeeze(keypoints)  # Shape [17, 3]

            # Draw keypoints
            for kp in shaped:
                ky, kx, kp_conf = kp
                if kp_conf > 0.1:
                    # Convert to image coordinates
                    x_coord = int(kx * x)
                    y_coord = int(ky * y)
                    cv2.circle(frame, (x_coord, y_coord), 6, (0, 255, 0), -1)

            # Convert landmarks to a list of dictionaries
            landmarks = []
            for kp in shaped:
                landmarks.append({
                    'x': float(kp[1]),
                    'y': float(kp[0]),
                    'visibility': float(kp[2])
                })

            return frame, landmarks

        # If detection fails, reuse last valid landmarks if provided
        if last_valid_landmarks is not None:
            return frame, last_valid_landmarks
        return frame, []

    def calculate_angle(self, landmarks: List[Dict], joint1: int, joint2: int, joint3: int) -> float:
        """
        Calculate the angle between three joints.
        """
        if len(landmarks) < max(joint1, joint2, joint3):
            return None

        # Get the coordinates of the three joints
        p1 = np.array([landmarks[joint1]['x'], landmarks[joint1]['y']])
        p2 = np.array([landmarks[joint2]['x'], landmarks[joint2]['y']])
        p3 = np.array([landmarks[joint3]['x'], landmarks[joint3]['y']])

        # Calculate the angle
        v1 = p1 - p2
        v2 = p3 - p2

        angle = np.degrees(np.arccos(
            np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        ))

        return angle

    def analyze_pose(self, landmarks: List[Dict], pose_type: str) -> Dict:
        """
        Analyze the pose and provide feedback based on the pose_type.
        Handles pose_types not in self.key_angles by providing a note.
        """
        feedback = {
            'pose_type': pose_type,
            'angles': {},
            'corrections': [],
            'notes': []  # Initialize notes field
        }

        if not landmarks:
            # If no landmarks, it's a more fundamental issue than just pose_type.
            # The process_frame method already handles this by passing {'error': 'No pose detected'}
            # from self.analyze_pose if landmarks is empty.
            # However, to be safe, if this method is called directly with no landmarks:
            feedback['error'] = 'No landmarks provided for analysis'
            return feedback

        if pose_type not in self.key_angles:
            feedback['notes'].append(f"No specific angle checks defined for pose: {pose_type}")
            # Still return the feedback structure, but angles and corrections will be empty.
            # The 'error' field will not be set here, allowing app.py to distinguish this case.
            return feedback

        pose_rules = self.key_angles[pose_type]

        if pose_type == 'front_double_biceps':
            # Example: Left Shoulder - Elbow - Wrist for elbow angle
            # Example: Left Hip - Shoulder - Elbow for shoulder angle (arm abduction)
            # Note: These are examples, actual biomechanical definitions can be complex.
            # We'll stick to the previous definition for front_double_biceps shoulder angle for now.
            # Shoulder angle: right_hip - right_shoulder - right_elbow (can also use left)
            # Elbow angle: right_shoulder - right_elbow - right_wrist (can also use left)
            # Wrist angle (simplistic): right_elbow - right_wrist - a point slightly above wrist (not easily done without more points)

            # Using right side for front_double_biceps as an example, consistent with a typical bodybuilding pose display
            # Shoulder Angle (approximating arm abduction/flexion relative to torso)
            # Using Right Hip, Right Shoulder, Right Elbow
            rs = self.KEYPOINT_DICT['right_shoulder']
            re = self.KEYPOINT_DICT['right_elbow']
            rh = self.KEYPOINT_DICT['right_hip']
            rw = self.KEYPOINT_DICT['right_wrist']

            shoulder_angle = self.calculate_angle(landmarks, rh, rs, re)
            if shoulder_angle is not None:
                feedback['angles']['R Shoulder'] = shoulder_angle
                if not (pose_rules['shoulder_angle'][0] <= shoulder_angle <= pose_rules['shoulder_angle'][1]):
                    # Debug print before forming correction string
                    print(f"[MOVENET_DEBUG_CORRECTION] pose_type: {pose_type}, rule_key: 'shoulder_angle', rules_for_angle: {pose_rules.get('shoulder_angle')}")
                    feedback['corrections'].append(
                        f"Adjust R Shoulder to {pose_rules['shoulder_angle'][0]}-{pose_rules['shoulder_angle'][1]} deg"
                    )

            elbow_angle = self.calculate_angle(landmarks, rs, re, rw)
            if elbow_angle is not None:
                feedback['angles']['R Elbow'] = elbow_angle
                if not (pose_rules['elbow_angle'][0] <= elbow_angle <= pose_rules['elbow_angle'][1]):
                    feedback['corrections'].append(
                        f"Adjust R Elbow to {pose_rules['elbow_angle'][0]}-{pose_rules['elbow_angle'][1]} deg"
                    )
            # Wrist angle is hard to define meaningfully with current keypoints for this pose, skipping for now.

        elif pose_type == 'side_chest':
            # Assuming side chest often displays left side to judges
            ls = self.KEYPOINT_DICT['left_shoulder']
            le = self.KEYPOINT_DICT['left_elbow']
            lw = self.KEYPOINT_DICT['left_wrist']
            lh = self.KEYPOINT_DICT['left_hip']  # For shoulder angle relative to torso

            # Shoulder angle (e.g. arm flexion/extension in sagittal plane for the front arm)
            # For side chest, the front arm's shoulder angle relative to the torso (hip-shoulder-elbow)
            shoulder_angle = self.calculate_angle(landmarks, lh, ls, le)
            if shoulder_angle is not None:
                feedback['angles']['L Shoulder'] = shoulder_angle
                if not (pose_rules['shoulder_angle'][0] <= shoulder_angle <= pose_rules['shoulder_angle'][1]):
                    feedback['corrections'].append(
                        f"Adjust L Shoulder to {pose_rules['shoulder_angle'][0]}-{pose_rules['shoulder_angle'][1]} deg"
                    )

            elbow_angle = self.calculate_angle(landmarks, ls, le, lw)
            if elbow_angle is not None:
                feedback['angles']['L Elbow'] = elbow_angle
                if not (pose_rules['elbow_angle'][0] <= elbow_angle <= pose_rules['elbow_angle'][1]):
                    feedback['corrections'].append(
                        f"Adjust L Elbow to {pose_rules['elbow_angle'][0]}-{pose_rules['elbow_angle'][1]} deg"
                    )
            # Wrist angle for side chest is also nuanced, skipping detailed check for now.

        elif pose_type == 'back_double_biceps':
            # Similar to front, but from back. We can calculate for both arms or pick one.
            # Let's do right side for consistency with front_double_biceps example.
            rs = self.KEYPOINT_DICT['right_shoulder']
            re = self.KEYPOINT_DICT['right_elbow']
            rh = self.KEYPOINT_DICT['right_hip']
            rw = self.KEYPOINT_DICT['right_wrist']

            shoulder_angle = self.calculate_angle(landmarks, rh, rs, re)
            if shoulder_angle is not None:
                feedback['angles']['R Shoulder'] = shoulder_angle
                if not (pose_rules['shoulder_angle'][0] <= shoulder_angle <= pose_rules['shoulder_angle'][1]):
                    feedback['corrections'].append(
                        f"Adjust R Shoulder to {pose_rules['shoulder_angle'][0]}-{pose_rules['shoulder_angle'][1]} deg"
                    )

            elbow_angle = self.calculate_angle(landmarks, rs, re, rw)
            if elbow_angle is not None:
                feedback['angles']['R Elbow'] = elbow_angle
                if not (pose_rules['elbow_angle'][0] <= elbow_angle <= pose_rules['elbow_angle'][1]):
                    feedback['corrections'].append(
                        f"Adjust R Elbow to {pose_rules['elbow_angle'][0]}-{pose_rules['elbow_angle'][1]} deg"
                    )

        # Clear notes if pose_type was valid and processed, unless specific notes were added by pose logic
        if not feedback['notes']:  # Only clear if no specific notes were added during pose rule processing
            feedback.pop('notes', None)

        return feedback

    def process_frame(self, frame: np.ndarray, pose_type: str = 'front_double_biceps', last_valid_landmarks=None) -> Tuple[np.ndarray, Dict, List[Dict]]:
        """
        Process a single frame, detect pose, and analyze it. Returns frame, analysis, and used landmarks.
        """
        # Detect pose
        frame_with_pose, landmarks = self.detect_pose(frame, last_valid_landmarks=last_valid_landmarks)

        # Analyze pose if landmarks are detected
        analysis = self.analyze_pose(landmarks, pose_type) if landmarks else {'error': 'No pose detected'}

        return frame_with_pose, analysis, landmarks

    def classify_pose(self, landmarks: List[Dict]) -> str:
        """
        Classify the pose based on keypoint positions and angles.
        Returns one of: 'front_double_biceps', 'side_chest', 'back_double_biceps'.
        """
        if not landmarks or len(landmarks) < 17:
            return 'front_double_biceps'  # Default/fallback

        # Calculate angles for both arms
        # Right side
        rs = self.KEYPOINT_DICT['right_shoulder']
        re = self.KEYPOINT_DICT['right_elbow']
        rh = self.KEYPOINT_DICT['right_hip']
        rw = self.KEYPOINT_DICT['right_wrist']
        # Left side
        ls = self.KEYPOINT_DICT['left_shoulder']
        le = self.KEYPOINT_DICT['left_elbow']
        lh = self.KEYPOINT_DICT['left_hip']
        lw = self.KEYPOINT_DICT['left_wrist']

        # Shoulder angles
        r_shoulder_angle = self.calculate_angle(landmarks, rh, rs, re)
        l_shoulder_angle = self.calculate_angle(landmarks, lh, ls, le)
        # Elbow angles
        r_elbow_angle = self.calculate_angle(landmarks, rs, re, rw)
        l_elbow_angle = self.calculate_angle(landmarks, ls, le, lw)

        # Heuristic rules:
        # - Front double biceps: both arms raised, elbows bent, both shoulders abducted
|
287 |
+
# - Side chest: one arm across chest (elbow in front of body), other arm flexed
|
288 |
+
# - Back double biceps: both arms raised, elbows bent, but person is facing away (shoulders/hips x order reversed)
|
289 |
+
|
290 |
+
# Use x-coordinates to estimate facing direction
|
291 |
+
# If right shoulder x < left shoulder x, assume facing front; else, facing back
|
292 |
+
facing_front = landmarks[rs]['x'] < landmarks[ls]['x']
|
293 |
+
|
294 |
+
# Count how many arms are "up" (shoulder angle in expected range)
|
295 |
+
arms_up = 0
|
296 |
+
if r_shoulder_angle and 80 < r_shoulder_angle < 150:
|
297 |
+
arms_up += 1
|
298 |
+
if l_shoulder_angle and 80 < l_shoulder_angle < 150:
|
299 |
+
arms_up += 1
|
300 |
+
elbows_bent = 0
|
301 |
+
if r_elbow_angle and 60 < r_elbow_angle < 130:
|
302 |
+
elbows_bent += 1
|
303 |
+
if l_elbow_angle and 60 < l_elbow_angle < 130:
|
304 |
+
elbows_bent += 1
|
305 |
+
|
306 |
+
# Side chest: one arm's elbow is much closer to the body's midline (x of elbow near x of nose)
|
307 |
+
nose_x = landmarks[self.KEYPOINT_DICT['nose']]['x']
|
308 |
+
le_x = landmarks[le]['x']
|
309 |
+
re_x = landmarks[re]['x']
|
310 |
+
side_chest_like = (abs(le_x - nose_x) < 0.08 or abs(re_x - nose_x) < 0.08)
|
311 |
+
|
312 |
+
if arms_up == 2 and elbows_bent == 2:
|
313 |
+
if facing_front:
|
314 |
+
return 'front_double_biceps'
|
315 |
+
else:
|
316 |
+
return 'back_double_biceps'
|
317 |
+
elif side_chest_like:
|
318 |
+
return 'side_chest'
|
319 |
+
else:
|
320 |
+
# Default/fallback
|
321 |
+
return 'front_double_biceps'
|
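The three pose branches above repeat the same range check and correction string for each joint. As a possible simplification (not part of this commit), a small helper along these lines could express the shared rule; the names check_angle and bounds are hypothetical:

from typing import Dict, Optional, Tuple

def check_angle(feedback: Dict, label: str, angle: Optional[float],
                bounds: Tuple[float, float]) -> None:
    """Record an angle and append a correction if it falls outside bounds."""
    if angle is None:
        return
    feedback['angles'][label] = angle
    lo, hi = bounds
    if not (lo <= angle <= hi):
        feedback['corrections'].append(f"Adjust {label} to {lo}-{hi} deg")

# Usage mirroring the front_double_biceps branch above (hypothetical refactor):
# check_angle(feedback, 'R Shoulder', shoulder_angle, pose_rules['shoulder_angle'])
# check_angle(feedback, 'R Elbow', elbow_angle, pose_rules['elbow_angle'])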
HFup/bodybuilding_pose_analyzer/src/movenet_demo.py
ADDED
@@ -0,0 +1,66 @@
import cv2
import argparse
from movenet_analyzer import MoveNetAnalyzer

def main():
    parser = argparse.ArgumentParser(description='MoveNet Pose Analysis Demo')
    parser.add_argument('--video', type=str, help='Path to video file (optional)')
    parser.add_argument('--model', type=str, default='lightning', choices=['lightning', 'thunder'],
                        help='MoveNet model variant (lightning or thunder)')
    args = parser.parse_args()

    # Initialize the MoveNet analyzer
    analyzer = MoveNetAnalyzer(model_name=args.model)

    # Initialize video capture
    if args.video:
        cap = cv2.VideoCapture(args.video)
    else:
        cap = cv2.VideoCapture(0)  # Use webcam if no video file provided

    if not cap.isOpened():
        print("Error: Could not open video source")
        return

    while True:
        ret, frame = cap.read()
        if not ret:
            break

        # Process frame (process_frame returns frame, analysis, and landmarks; landmarks are unused here)
        frame_with_pose, analysis, _ = analyzer.process_frame(frame)

        # Display analysis results
        if 'error' not in analysis:
            # Display pose type
            cv2.putText(frame_with_pose, f"Pose: {analysis['pose_type']}",
                        (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)

            # Display angles
            y_offset = 60
            for joint, angle in analysis['angles'].items():
                cv2.putText(frame_with_pose, f"{joint}: {angle:.1f}°",
                            (10, y_offset), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
                y_offset += 30

            # Display corrections
            for correction in analysis['corrections']:
                cv2.putText(frame_with_pose, correction,
                            (10, y_offset), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                y_offset += 30
        else:
            cv2.putText(frame_with_pose, analysis['error'],
                        (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)

        # Display the frame
        cv2.imshow('MoveNet Pose Analysis', frame_with_pose)

        # Break the loop if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    main()
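The demo above is driven from the command line, e.g. python movenet_demo.py --video sample_video.mp4 --model thunder, or with no --video argument to read from the webcam. For a single image rather than a stream, the analyzer can be called directly; a minimal sketch (not part of this commit, file names hypothetical), assuming it runs alongside movenet_analyzer.py:

import cv2
from movenet_analyzer import MoveNetAnalyzer

# Analyze one still image with the Thunder variant and save the annotated result.
analyzer = MoveNetAnalyzer(model_name='thunder')
frame = cv2.imread('pose.jpg')  # hypothetical input image
if frame is not None:
    frame_out, analysis, landmarks = analyzer.process_frame(frame, pose_type='side_chest')
    print(analysis.get('angles', {}))
    print(analysis.get('corrections', []))
    cv2.imwrite('pose_annotated.jpg', frame_out)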
HFup/bodybuilding_pose_analyzer/src/pose_analyzer.py
ADDED
@@ -0,0 +1,200 @@
import cv2
import mediapipe as mp
import numpy as np
from typing import List, Dict, Tuple

class PoseAnalyzer:
    # Add MediaPipe skeleton connections as a class variable
    MP_CONNECTIONS = [
        (11, 13), (13, 15),  # Left arm
        (12, 14), (14, 16),  # Right arm
        (11, 12),  # Shoulders
        (11, 23), (12, 24),  # Torso sides
        (23, 24),  # Hips
        (23, 25), (25, 27),  # Left leg
        (24, 26), (26, 28),  # Right leg
        (27, 31), (28, 32),  # Ankles to feet
        (15, 17), (16, 18),  # Wrists to hands
        (15, 19), (16, 20),  # Wrists to pinky
        (15, 21), (16, 22),  # Wrists to index
        (15, 17), (17, 19), (19, 21),  # Left hand
        (16, 18), (18, 20), (20, 22)   # Right hand
    ]

    def __init__(self):
        # Initialize MediaPipe Pose
        self.mp_pose = mp.solutions.pose
        self.pose = self.mp_pose.Pose(
            static_image_mode=False,
            model_complexity=2,  # Using the most accurate model
            min_detection_confidence=0.1,
            min_tracking_confidence=0.1
        )
        self.mp_drawing = mp.solutions.drawing_utils

        # Define key angles for bodybuilding poses
        self.key_angles = {
            'front_double_biceps': {
                'shoulder_angle': (90, 120),  # Expected angle range
                'elbow_angle': (80, 100),
                'wrist_angle': (0, 20)
            },
            'side_chest': {
                'shoulder_angle': (45, 75),
                'elbow_angle': (90, 110),
                'wrist_angle': (0, 20)
            },
            'back_double_biceps': {
                'shoulder_angle': (90, 120),
                'elbow_angle': (80, 100),
                'wrist_angle': (0, 20)
            }
        }

    def detect_pose(self, frame: np.ndarray, last_valid_landmarks=None) -> Tuple[np.ndarray, List[Dict]]:
        """
        Detect pose in the given frame and return the frame with pose landmarks drawn
        and the list of detected landmarks. If detection fails, reuse last valid landmarks if provided.
        """
        # Convert the BGR image to RGB
        rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

        # Process the frame and detect pose
        results = self.pose.process(rgb_frame)

        # Draw the pose landmarks on the frame
        if results.pose_landmarks:
            # Draw all 33 keypoints as bright red, smaller circles, and show index
            for idx, landmark in enumerate(results.pose_landmarks.landmark):
                x = int(landmark.x * frame.shape[1])
                y = int(landmark.y * frame.shape[0])
                if landmark.visibility > 0.1:  # Lowered threshold from 0.3 to 0.1
                    cv2.circle(frame, (x, y), 3, (0, 0, 255), -1)
                    cv2.putText(frame, str(idx), (x+8, y-8), cv2.FONT_HERSHEY_SIMPLEX, 0.4, (255, 255, 255), 1)
            # Draw skeleton lines
            # Convert landmarks to pixel coordinates for easier access
            landmark_points = []
            for landmark in results.pose_landmarks.landmark:
                landmark_points.append((int(landmark.x * frame.shape[1]), int(landmark.y * frame.shape[0]), landmark.visibility))
            for pt1, pt2 in self.MP_CONNECTIONS:
                if pt1 < len(landmark_points) and pt2 < len(landmark_points):
                    x1, y1, v1 = landmark_points[pt1]
                    x2, y2, v2 = landmark_points[pt2]
                    if v1 > 0.1 and v2 > 0.1:
                        cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 255), 2)
            # Convert landmarks to a list of dictionaries
            landmarks = []
            for idx, landmark in enumerate(results.pose_landmarks.landmark):
                landmarks.append({
                    'x': landmark.x,
                    'y': landmark.y,
                    'z': landmark.z,
                    'visibility': landmark.visibility
                })
            return frame, landmarks
        # If detection fails, reuse last valid landmarks if provided
        if last_valid_landmarks is not None:
            return frame, last_valid_landmarks
        return frame, []

    def calculate_angle(self, landmarks: List[Dict], joint1: int, joint2: int, joint3: int) -> float:
        """
        Calculate the angle (in degrees) at joint2 formed by joint1-joint2-joint3.
        """
        # The largest index must be a valid position in the landmarks list
        if len(landmarks) <= max(joint1, joint2, joint3):
            return None

        # Get the coordinates of the three joints
        p1 = np.array([landmarks[joint1]['x'], landmarks[joint1]['y']])
        p2 = np.array([landmarks[joint2]['x'], landmarks[joint2]['y']])
        p3 = np.array([landmarks[joint3]['x'], landmarks[joint3]['y']])

        # Calculate the angle between the two vectors that meet at joint2
        v1 = p1 - p2
        v2 = p3 - p2

        cos_angle = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        # Clip to [-1, 1] to guard against floating-point error before arccos
        angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

        return angle

    def analyze_pose(self, landmarks: List[Dict], pose_type: str) -> Dict:
        """
        Analyze the pose and provide feedback based on the pose type.
        Enhanced: calculates angles for both left and right arms (shoulder, elbow) for all pose types.
        """
        if not landmarks or pose_type not in self.key_angles:
            return {'error': 'Invalid pose type or no landmarks detected'}

        feedback = {
            'pose_type': pose_type,
            'angles': {},
            'corrections': []
        }
        # Indices for MediaPipe 33 keypoints
        LEFT_SHOULDER = 11
        RIGHT_SHOULDER = 12
        LEFT_ELBOW = 13
        RIGHT_ELBOW = 14
        LEFT_WRIST = 15
        RIGHT_WRIST = 16
        LEFT_HIP = 23
        RIGHT_HIP = 24
        LEFT_KNEE = 25
        RIGHT_KNEE = 26
        LEFT_ANKLE = 27
        RIGHT_ANKLE = 28
        # Calculate angles for both arms
        # Shoulder angles (hip-shoulder-elbow)
        l_shoulder_angle = self.calculate_angle(landmarks, LEFT_HIP, LEFT_SHOULDER, LEFT_ELBOW)
        r_shoulder_angle = self.calculate_angle(landmarks, RIGHT_HIP, RIGHT_SHOULDER, RIGHT_ELBOW)
        # Elbow angles (shoulder-elbow-wrist)
        l_elbow_angle = self.calculate_angle(landmarks, LEFT_SHOULDER, LEFT_ELBOW, LEFT_WRIST)
        r_elbow_angle = self.calculate_angle(landmarks, RIGHT_SHOULDER, RIGHT_ELBOW, RIGHT_WRIST)
        # Wrist angles (elbow-wrist-hand index, if available)
        # MediaPipe has no hand-index keypoint here; a pseudo point (e.g., extending the wrist direction) would be needed
        # For now, skip the wrist angle
        # Leg angles (optional)
        l_knee_angle = self.calculate_angle(landmarks, LEFT_HIP, LEFT_KNEE, LEFT_ANKLE)
        r_knee_angle = self.calculate_angle(landmarks, RIGHT_HIP, RIGHT_KNEE, RIGHT_ANKLE)
        # Add angles to feedback (compare against None so a 0-degree angle is not dropped)
        if l_shoulder_angle is not None:
            feedback['angles']['L Shoulder'] = l_shoulder_angle
            if not self.key_angles[pose_type]['shoulder_angle'][0] <= l_shoulder_angle <= self.key_angles[pose_type]['shoulder_angle'][1]:
                feedback['corrections'].append(
                    f"Adjust L Shoulder to {self.key_angles[pose_type]['shoulder_angle'][0]}-{self.key_angles[pose_type]['shoulder_angle'][1]} deg"
                )
        if r_shoulder_angle is not None:
            feedback['angles']['R Shoulder'] = r_shoulder_angle
            if not self.key_angles[pose_type]['shoulder_angle'][0] <= r_shoulder_angle <= self.key_angles[pose_type]['shoulder_angle'][1]:
                feedback['corrections'].append(
                    f"Adjust R Shoulder to {self.key_angles[pose_type]['shoulder_angle'][0]}-{self.key_angles[pose_type]['shoulder_angle'][1]} deg"
                )
        if l_elbow_angle is not None:
            feedback['angles']['L Elbow'] = l_elbow_angle
            if not self.key_angles[pose_type]['elbow_angle'][0] <= l_elbow_angle <= self.key_angles[pose_type]['elbow_angle'][1]:
                feedback['corrections'].append(
                    f"Adjust L Elbow to {self.key_angles[pose_type]['elbow_angle'][0]}-{self.key_angles[pose_type]['elbow_angle'][1]} deg"
                )
        if r_elbow_angle is not None:
            feedback['angles']['R Elbow'] = r_elbow_angle
            if not self.key_angles[pose_type]['elbow_angle'][0] <= r_elbow_angle <= self.key_angles[pose_type]['elbow_angle'][1]:
                feedback['corrections'].append(
                    f"Adjust R Elbow to {self.key_angles[pose_type]['elbow_angle'][0]}-{self.key_angles[pose_type]['elbow_angle'][1]} deg"
                )
        # Optionally add knee angles
        if l_knee_angle is not None:
            feedback['angles']['L Knee'] = l_knee_angle
        if r_knee_angle is not None:
            feedback['angles']['R Knee'] = r_knee_angle
        return feedback

    def process_frame(self, frame: np.ndarray, pose_type: str = 'front_double_biceps', last_valid_landmarks=None) -> Tuple[np.ndarray, Dict, List[Dict]]:
        """
        Process a single frame, detect pose, and analyze it. Returns frame, analysis, and used landmarks.
        """
        # Detect pose
        frame_with_pose, landmarks = self.detect_pose(frame, last_valid_landmarks=last_valid_landmarks)
        # Analyze pose if landmarks are detected
        analysis = self.analyze_pose(landmarks, pose_type) if landmarks else {'error': 'No pose detected'}
        return frame_with_pose, analysis, landmarks
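calculate_angle above returns the angle at the middle joint via the arccos of the normalized dot product. A quick sanity check of that formula (not part of this commit) on simple coordinates:

import numpy as np

# Worked check: joint2 is the vertex; the hip sits straight below the shoulder,
# the elbow straight to its right, so the expected angle is 90 degrees.
p1, p2, p3 = np.array([0.0, 1.0]), np.array([0.0, 0.0]), np.array([1.0, 0.0])
v1, v2 = p1 - p2, p3 - p2
cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
angle = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
print(angle)  # 90.0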
HFup/bodybuilding_pose_analyzer/src/sample_video.mp4
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:16b5cfe3c836a5fba2c46ce4bcf9d241b9a9292647822fbbf767f3db9f1aa0e9
size 1684449
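The three lines above are a Git LFS pointer, not the video itself: they record the pointer spec version, the SHA-256 of the real file, and its size in bytes. A small sketch (not part of this commit) of how a downloaded file could be checked against such a pointer:

import hashlib
import os

def matches_lfs_pointer(path: str, oid_hex: str, size: int) -> bool:
    """Return True if the file at path has the size and SHA-256 recorded in an LFS pointer."""
    if os.path.getsize(path) != size:
        return False
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(1 << 20), b''):
            h.update(chunk)
    return h.hexdigest() == oid_hex

# Example for the pointer above:
# matches_lfs_pointer('sample_video.mp4',
#                     '16b5cfe3c836a5fba2c46ce4bcf9d241b9a9292647822fbbf767f3db9f1aa0e9',
#                     1684449)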
HFup/external/BodybuildingPoseClassifier
ADDED
@@ -0,0 +1 @@
Subproject commit 0af89e6c36e63818059f89e82af6b13686ad25e3
HFup/requirements.txt
ADDED
@@ -0,0 +1,80 @@
absl-py==2.2.2
astunparse==1.6.3
attrs==25.3.0
blinker==1.9.0
certifi==2025.4.26
cffi==1.17.1
charset-normalizer==3.4.2
click==8.1.7
contourpy==1.2.1
cycler==0.12.1
ffmpeg-python==0.2.0
filelock==3.18.0
Flask==3.1.1
flask-cors==5.0.1
flatbuffers==25.2.10
fonttools==4.58.0
fsspec==2025.3.2
future==1.0.0
gast==0.6.0
google-pasta==0.2.0
gunicorn==22.0.0
grpcio==1.71.0
h5py==3.13.0
idna==3.10
itsdangerous==2.2.0
jax==0.4.30
jaxlib==0.4.30
Jinja2==3.1.6
keras==3.9.2
kiwisolver==1.4.7
libclang==18.1.1
Markdown==3.8
markdown-it-py==3.0.0
MarkupSafe==3.0.2
matplotlib==3.9.4
mdurl==0.1.2
mediapipe==0.10.21
ml_dtypes==0.5.1
mpmath==1.3.0
namex==0.0.9
networkx==3.2.1
ngrok==1.4.0
numpy==1.26.4
opencv-contrib-python==4.11.0.86
opencv-python==4.11.0.86
opt_einsum==3.4.0
optree==0.15.0
packaging==25.0
pandas==2.2.3
pillow==11.2.1
protobuf==4.25.7
pycparser==2.22
Pygments==2.19.1
pyparsing==3.2.3
python-dateutil==2.9.0.post0
pytz==2025.2
PyYAML==6.0.2
requests==2.32.3
rich==14.0.0
scipy==1.13.1
seaborn==0.13.2
sentencepiece==0.2.0
six==1.17.0
sounddevice==0.5.1
sympy==1.14.0
tensorboard==2.19.0
tensorboard-data-server==0.7.2
tensorflow==2.19.0
tensorflow-hub==0.16.1
tensorflow-io-gcs-filesystem==0.37.1
termcolor==3.1.0
tf_keras==2.19.0
torch==2.7.0
torchvision==0.22.0
tqdm==4.67.1
typing_extensions==4.13.2
tzdata==2025.2
urllib3==2.4.0
Werkzeug==3.1.3
wrapt==1.17.2
HFup/static/uploads/output.mp4
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a555701da6bea5183cac40ea6f1b45d6fe182db4efc0cfca10ebab60fcdce498
size 261
HFup/static/uploads/output_mediapipe.mp4
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dff942eb4a92e2af3f05573368ffa81cde14add1b0aeb28d7acc76b154aa56f0
size 926873
HFup/static/uploads/output_movenet_lightning.mp4
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:29e0173c04e5eb95a1f951e756a1f48fb56f5fee53afd0f2f812d1716de61bc4
size 557403
HFup/static/uploads/output_movenet_thunder.mp4
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a1aebd72c36462e0725557a154b595d70128b1723a01a33a2f9aa2854084c6a1
size 1757104
HFup/static/uploads/policeb.mp4
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bedaa005f970439d8b6fe99e937027a3dc7c7f7d9ccec319af22344ed06df790
size 7552156
HFup/yolov7
ADDED
@@ -0,0 +1 @@
Subproject commit a207844b1ce82d204ab36d87d496728d3d2348e7
HFup/yolov7-w6-pose.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8394a97f5a5283269028738e80006f3e9835088f00d293108bdee3320f2b0f8d
size 161114789