A comprehensive UI tool for comparing pose estimation models, interpolation methods, and noise filtering techniques for human pose analysis.
PoseCompareUI is a Gradio-based application designed to help researchers and developers compare different pose estimation approaches and post-processing techniques. The tool provides visual comparisons between various pose estimation models and filtering methods, making it easier to evaluate their performance on your own videos.
Compare multiple pose estimation models side by side:
- MediaPipe: Google's lightweight pose estimation framework
- FourDHumans: 3D human mesh reconstruction model
- Sapiens: Multiple variants (0.3b, 0.6b, 1b, 2b) of the Sapiens pose estimation model
Apply and compare different noise reduction filters:
- Original: No filtering applied
- Butterworth: Low-pass Butterworth filter for smooth trajectories
- Chebyshev: Chebyshev Type-I filter for noise reduction
- Bessel: Bessel filter with linear phase response
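As an illustration of how such a filter applies to pose data, the sketch below smooths a single keypoint trajectory with a zero-phase Butterworth low-pass filter via SciPy. This is a minimal example, not the app's implementation; the cutoff frequency, frame rate, and filter order are assumed values for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def smooth_trajectory(xs, cutoff=3.0, fs=30.0, order=4):
    """Zero-phase Butterworth low-pass filter for one keypoint coordinate.

    xs: 1-D array of a joint coordinate over frames.
    cutoff: cutoff frequency in Hz (assumed value, not the app's default).
    fs: video frame rate in Hz.
    """
    b, a = butter(order, cutoff / (fs / 2), btype="low")
    # filtfilt runs the filter forward and backward, so the output has no phase lag,
    # which keeps the smoothed trajectory aligned in time with the video frames.
    return filtfilt(b, a, xs)

# A noisy 1 Hz sine stands in for a joint coordinate; smoothing should bring it
# closer to the clean signal than the raw noisy samples are.
t = np.linspace(0, 2, 60)
clean = np.sin(2 * np.pi * 1.0 * t)
noisy = clean + 0.1 * np.random.default_rng(0).normal(size=t.size)
smoothed = smooth_trajectory(noisy)
```

In practice the same filter would be applied independently to each coordinate of each keypoint across frames.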
Test and visualize various interpolation methods:
- Kalman: Predictive filtering with a Kalman filter
- Wiener: Optimal filtering for stationary signals
- Linear: Simple linear interpolation
- Bilinear: Two-dimensional linear interpolation
- Spline: Smooth curve fitting using splines
- Kriging: Spatial interpolation method
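To show how two of these methods differ, the sketch below (not taken from the app's code) reconstructs a joint coordinate over frames where detection dropped out, using linear and cubic-spline interpolation from NumPy/SciPy:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def fill_gaps(frames, values, all_frames):
    """Interpolate a missing keypoint coordinate over dropped frames.

    frames: frame indices where the joint was detected (increasing).
    values: coordinate values at those frames.
    all_frames: full frame range to reconstruct.
    Returns linear and cubic-spline reconstructions for comparison.
    """
    linear = np.interp(all_frames, frames, values)
    spline = CubicSpline(frames, values)(all_frames)
    return linear, spline

# Quadratic motion (e.g. a hand accelerating downward), detected only on even frames
frames = np.array([0, 2, 4, 6, 8, 10])
values = frames.astype(float) ** 2
all_frames = np.arange(11)
linear, spline = fill_gaps(frames, values, all_frames)
```

On curved motion like this, the spline tracks the true trajectory far more closely than the piecewise-linear fill, which is exactly the kind of difference the Interpolation tab visualizes.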
Extract movement rules and patterns from videos:
- Automatically identifies equipment used in the video
- Classifies movement patterns for different body parts (arms, legs, torso)
- Generates a comprehensive visualization with the original video, pose overlay, and detected features
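The repository's rule extraction is more involved, but the core idea of classifying a body part's movement from pose keypoints can be sketched with a toy joint-angle heuristic. The threshold and labels below are illustrative assumptions, not the rules the module actually uses.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by points a-b-c,
    e.g. shoulder-elbow-wrist for the elbow angle."""
    v1, v2 = np.asarray(a, float) - b, np.asarray(c, float) - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def classify_arm_motion(angles, threshold=40.0):
    """Toy rule: call the movement 'dynamic' if the elbow angle's range over
    the clip exceeds a threshold (assumed value), otherwise 'static'."""
    return "dynamic" if np.ptp(angles) > threshold else "static"

# Elbow angles measured per frame from 2-D keypoints
bent_rep = [joint_angle((0, 1), (0, 0), p) for p in [(1, 0), (0.8, 0.4), (0.5, 0.87)]]
label = classify_arm_motion([170.0, 160.0, 90.0, 80.0])
```

A real pipeline would compute such angles per frame from the estimated keypoints and aggregate them over the clip before applying classification rules.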
Requirements:
- Python 3.10+
- CUDA-capable GPU (recommended)
- FFmpeg
Installation:

1. Clone the repository:

   ```bash
   git clone https://github.com/yourusername/PoseCompareUI.git
   cd PoseCompareUI
   ```

2. Run the setup script to create the conda environment and install dependencies:

   ```bash
   bash setup.sh
   ```

3. Activate the environment:

   ```bash
   conda activate posecompare
   ```
Launch the application with:

```bash
python app.py
```
Then navigate to the provided URL (typically http://127.0.0.1:7860) in your web browser.
1. Upload or Record a Video: Use the upload button or webcam option to provide a video for analysis.
2. Model Comparison Tab:
   - Select the models you want to compare
   - Choose a noise filtering method
   - Adjust the filter window size
   - Click "Process Video"
3. Noise Tab:
   - Select different noise filtering methods to compare
   - Adjust the filter window size
   - Click "Process Video"
4. Interpolation Tab:
   - Select interpolation methods to compare
   - Click "Process Video"
5. Rule Extraction Tab:
   - Click "Extract Rules" to analyze movement patterns
   - View the combined visualization showing the original video, pose estimation, and detected features
Project structure:

- `app.py`: Main application file with the Gradio interface
- `pose_processing.py`: Core pose processing functionality
- `pose_models.py`: Model wrappers for different pose estimation systems
- `sapiens_processor.py`: Implementation for the Sapiens model
- `RuleExtration/`: Module for extracting movement rules from videos
  - `real_time_debug.py`: Real-time analysis and visualization
  - `combine_video.py`: Utilities for combining video outputs
  - `src/`: Source files for rule extraction
    - `pose_estimation.py`: Pose detection and analysis
    - `feature_extraction.py`: Feature extraction from poses
    - `equipment_detection.py`: Equipment detection using YOLOv7
Troubleshooting:
- CUDA/GPU Issues: Make sure you have compatible NVIDIA drivers installed
- Model Loading Errors: Check that model weights are properly downloaded
- Video Processing Failures: Ensure FFmpeg is installed and accessible on your PATH
Acknowledgments:
- MediaPipe from Google
- FourDHumans model
- Sapiens model