# Key2Mesh: MoCap-to-Visual Domain Adaptation for Efficient Human Mesh Estimation from 2D Keypoints (CVPRW24)
Welcome! This is the official implementation of the CVPRW24 paper Key2Mesh: MoCap-to-Visual Domain Adaptation for Efficient Human Mesh Estimation from 2D Keypoints by Bedirhan Uguz, Ozhan Suat, Batuhan Karagoz, and Emre Akbas.
For more details and to see some cool results, check out the project page.
## Data Preparation

To evaluate Key2Mesh on 3DPW, you can use the published `3DPW_test.pt` file under the `data` folder. If you want to reproduce it, download 3DPW, place it under the `data` folder, and run the script `preprocess/preprocess_3DPW.py`.
After downloading the data, copy the necessary files into the `data` directory so that it has the following structure:
```
data/
├── 3DPW/
│   ├── imageFiles
│   └── sequenceFiles
├── 3DPW_test.pt
├── J_regressor_h36m.npy
└── SMPL_NEUTRAL.pkl
```
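As a quick sanity check, a minimal sketch like the following (not part of the repo; the keys of the `.pt` file are printed rather than assumed) can confirm the layout:

```python
# Minimal sketch: verify the expected data layout and peek into the
# preprocessed 3DPW file. Not part of the repo; for inspection only.
from pathlib import Path
import torch

data_dir = Path("data")
for name in ["3DPW/imageFiles", "3DPW/sequenceFiles",
             "3DPW_test.pt", "J_regressor_h36m.npy", "SMPL_NEUTRAL.pkl"]:
    status = "ok" if (data_dir / name).exists() else "MISSING"
    print(f"{name}: {status}")

# Inspect the preprocessed file; its keys are printed rather than assumed.
db = torch.load(data_dir / "3DPW_test.pt", map_location="cpu")
if isinstance(db, dict):
    for key, value in db.items():
        print(key, getattr(value, "shape", type(value)))
```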
For other datasets, we follow the data preparation steps of VIBE and LGD; please refer to the VIBE and LGD repositories.
## Installation

- Install torch 1.12.1 with CUDA 11.3 support, e.g. `pip install torch==1.12.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113`.
- Install torchvision 0.13.1 with CUDA 11.3 support, e.g. `pip install torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113`.
- Then install the remaining dependencies: `pip install -r requirements.txt`
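To confirm the pinned builds afterwards, a minimal check (not part of the repo) is:

```python
# Quick sanity check that the pinned versions were installed with CUDA support.
import torch
import torchvision

print(torch.__version__)          # expect 1.12.1 (e.g. "1.12.1+cu113")
print(torchvision.__version__)    # expect 0.13.1 (e.g. "0.13.1+cu113")
print(torch.version.cuda)         # expect "11.3"
print(torch.cuda.is_available())  # True if a compatible GPU is visible
```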
## Demo

The `demo.py` script demonstrates how to estimate and visualize SMPL body models from 2D keypoints. A sample dataset is provided, which includes raw images and their corresponding OpenPose outputs, serving as a guide for the expected data structure. You can run the demo with the provided example or with your own data by updating the configuration. Simply execute `python demo.py`.
To use your own data:

- Extract 2D keypoints:
  - Install and run OpenPose on your images to extract 2D keypoints, and save the results in JSON format (see the parsing sketch after this list).
- Organize your data similarly to the example:
  - Place your raw images in a folder (e.g., `your_data/imgs`).
  - Save the corresponding OpenPose outputs (JSON files) in a folder (e.g., `your_data/openpose_outs`).
- Update the configuration:
  - Modify the input parameters in the `configs/demo.yaml` file to point to your dataset directory and set the image size.
- Finally, run the demo: `python demo.py`
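For reference, here is a minimal sketch of how an OpenPose JSON file can be read into a keypoint array. It assumes the default BODY_25 model (25 joints stored as flat x, y, confidence triplets) and uses a placeholder file name; it is an illustration, not the loader used by `demo.py`:

```python
# Minimal sketch: read one OpenPose output file into 2D keypoint arrays.
import json
import numpy as np

# Placeholder path; point this at one of your OpenPose JSON outputs.
with open("your_data/openpose_outs/frame_000000_keypoints.json") as f:
    result = json.load(f)

# Each detected person carries a flat [x1, y1, c1, x2, y2, c2, ...] list.
for person in result["people"]:
    kps = np.array(person["pose_keypoints_2d"], dtype=np.float32).reshape(-1, 3)
    xy, conf = kps[:, :2], kps[:, 2]
    print(xy.shape, f"mean confidence: {conf.mean():.2f}")
```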
## Evaluation

To evaluate the model adapted to either the 3DPW or InstaVariety dataset, update the `run` parameter in the `configs/eval_3dpw.yaml` file:

- For 3DPW, set it to `target-3dpw`.
- For InstaVariety, set it to `target-insta`.

Then, run the following command: `python eval_3dpw.py`
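If you prefer to switch models from a script, a minimal sketch using PyYAML (assuming `run` is a top-level key in the config, which is not verified here) could look like:

```python
# Minimal sketch (requires PyYAML): switch between the adapted models by
# setting the `run` field in configs/eval_3dpw.yaml. Assumes `run` is a
# top-level key; adjust the lookup if it is nested.
import yaml

cfg_path = "configs/eval_3dpw.yaml"
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

cfg["run"] = "target-insta"  # or "target-3dpw"

with open(cfg_path, "w") as f:
    yaml.safe_dump(cfg, f)
```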
## Acknowledgements

This code is built on top of the following work: J. Song, X. Chen, and O. Hilliges, "Human Body Model Fitting by Learned Gradient Descent," in ECCV, 2020.
## Citation

If you find our work useful in your research, please consider citing:

```bibtex
@InProceedings{Uguz_2024_CVPR,
    author    = {Uguz, Bedirhan and Suat, Ozhan and Karagoz, Batuhan and Akbas, Emre},
    title     = {MoCap-to-Visual Domain Adaptation for Efficient Human Mesh Estimation from 2D Keypoints},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {1622-1632}
}
```