
Commit ef4fa92

train different datasets in a single main.py. Add TRAINING recipes.

1 parent: 5853034

File tree: 14 files changed (+287, -791 lines)

.gitignore

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,6 @@ checkpoint
 dev
 *.pth.tar
 data/mpii/images
-!data/mpii/mean.pth.tar
 *.json
 *debug*
 *.idea/*
@@ -17,6 +16,7 @@ data/mscoco/images
 miscs/posetrack
 miscs/h36m
 data/h36m
+!mean.pth.tar
 
 __pycache__/
 *.py[cod]

README.md

Lines changed: 7 additions & 6 deletions
@@ -1,5 +1,7 @@
 # PyTorch-Pose
 
+![screenshot](./docs/screenshot.png)
+
 PyTorch-Pose is a PyTorch implementation of the general pipeline for 2D single human pose estimation. The aim is to provide interfaces for training/inference/evaluation, along with dataloaders offering various data augmentation options for the most popular human pose databases (e.g., [the MPII human pose](http://human-pose.mpi-inf.mpg.de), [LSP](http://www.comp.leeds.ac.uk/mat4saj/lsp.html) and [FLIC](http://bensapp.github.io/flic-dataset.html)).
 
 Some code for data preparation and augmentation is borrowed from the [Stacked hourglass network](https://github.com/anewell/pose-hg-train). Thanks to the original author.
@@ -37,12 +39,14 @@ Some code for data preparation and augmentation is borrowed from the [Stacked h
 
 ## Usage
 
+**Please refer to [TRAINING.md](TRAINING.md) for detailed training recipes!**
+
 ### Testing
 You may download our pretrained models (e.g., the [2-stack hourglass model](https://drive.google.com/drive/folders/0B63t5HSgY4SQQ2FBRE5rQ2EzbjQ?usp=sharing)) for a quick start.
 
 Run the following command in a terminal to evaluate the model on the MPII validation split (the train/val split is from [Tompson et al., CVPR 2015](http://www.cims.nyu.edu/~tompson/data/mpii_valid_pred.zip)).
 ```
-CUDA_VISIBLE_DEVICES=0 python example/mpii.py -a hg --stacks 2 --blocks 1 --checkpoint checkpoint/mpii/hg_s2_b1 --resume checkpoint/mpii/hg_s2_b1/model_best.pth.tar -e -d
+CUDA_VISIBLE_DEVICES=0 python example/main.py --dataset mpii -a hg --stacks 2 --blocks 1 --checkpoint checkpoint/mpii/hg_s2_b1 --resume checkpoint/mpii/hg_s2_b1/model_best.pth.tar -e -d
 ```
 * `-a` specifies a network architecture
 * `--resume` will load the weights from a specific model
@@ -77,21 +81,18 @@ You may also evaluate the result by running `python evaluation/eval_PCKh.py` to
 ### Training
 Run the following command in a terminal to train an 8-stack hourglass network on the MPII human pose dataset.
 ```
-CUDA_VISIBLE_DEVICES=0 python example/mpii.py -a hg --stacks 8 --blocks 1 --checkpoint checkpoint/mpii/hg8 -j 4
+CUDA_VISIBLE_DEVICES=0 python example/main.py --dataset mpii -a hg --stacks 8 --blocks 1 --checkpoint checkpoint/mpii/hg8 -j 4
 ```
 Here,
 * `CUDA_VISIBLE_DEVICES=0` identifies the GPU devices you want to use. For example, use `CUDA_VISIBLE_DEVICES=0,1` if you want to use two GPUs with IDs `0` and `1`.
 * `-j` specifies how many workers you want to use for data loading.
 * `--checkpoint` specifies where you want to save the models, the log, and the predictions.
 
-Please refer to `example/mpii.py` for the supported options/arguments.
-
-## To Do List
+## Miscs
 Supported datasets
 - [x] [MPII human pose](http://human-pose.mpi-inf.mpg.de)
 - [x] [Leeds Sports Pose (LSP)](http://sam.johnson.io/research/lsp.html)
 - [x] [MSCOCO (single person)](http://cocodataset.org/#keypoints-challenge2017)
-- [ ] FLIC
 
 Supported models
 - [x] [Stacked Hourglass networks](https://arxiv.org/abs/1603.06937)

TRAINING.md

Lines changed: 125 additions & 0 deletions
# Training Recipe

## Get datasets

### MPII Human Pose Dataset

- Download the [dataset](https://datasets.d2.mpi-inf.mpg.de/andriluka14cvpr/mpii_human_pose_v1.tar.gz) and create a link to the `images` directory at `./data/mpii/images`:
```
ln -s ${MPII_PATH}/images ./data/mpii/images
```
- Download the [annotation file](https://drive.google.com/open?id=1mQrH_yVHeB93rzCfyq5kC9ZYTwZeMsMm) in our JSON format, and save it to `./data/mpii/mpii_annotations.json`.
- You are good to go!
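
A quick way to confirm the annotation download worked is to parse the file with the standard library. This is a hypothetical check, not part of the repo, and the top-level list layout is an assumption:
```python
import json

# Hypothetical sanity check: the annotation file is assumed to parse as a
# top-level JSON list with one record per annotated person.
with open("data/mpii/mpii_annotations.json") as f:
    annotations = json.load(f)
print(type(annotations).__name__, len(annotations))
```
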
### COCO Keypoints 2014/2017

- Download the datasets:
```
cd ./data/mscoco
wget http://images.cocodataset.org/zips/train2014.zip
wget http://images.cocodataset.org/zips/val2014.zip
wget http://images.cocodataset.org/zips/train2017.zip
wget http://images.cocodataset.org/zips/val2017.zip
unzip train2014.zip -d images
unzip val2014.zip -d images
unzip train2017.zip -d images
unzip val2017.zip -d images
rm -rf *.zip
```
- You are good to go!
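
To verify that all four image sets landed where the commands above put them, a hypothetical check (not part of the repo) could count the extracted files:
```python
import os

# Hypothetical check that each image set was extracted under
# data/mscoco/images/ by the unzip commands above.
for split in ("train2014", "val2014", "train2017", "val2017"):
    path = os.path.join("data", "mscoco", "images", split)
    count = len(os.listdir(path)) if os.path.isdir(path) else 0
    print(path, "->", count if count else "MISSING")
```
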
### Leeds Sports Pose (LSP)

- Download the datasets:
```
mkdir -p ./data/lsp/images
cd ./data/lsp/images
wget http://sam.johnson.io/research/lsp_dataset.zip
wget http://sam.johnson.io/research/lspet_dataset.zip
unzip lsp_dataset.zip -d lsp_dataset
unzip lspet_dataset.zip -d lspet_dataset
```
- You are good to go!

## Training

- Example 1: Train from scratch: the ECCV'16 8-stack hourglass network
```
CUDA_VISIBLE_DEVICES=0 python ./example/main.py \
--dataset mpii \
--arch hg \
--stack 8 \
--block 1 \
--features 256 \
--checkpoint ./checkpoint/mpii/hg-s8-b1
```

- Example 2: Train a much faster version of HG (e.g., 1 stack)
```
CUDA_VISIBLE_DEVICES=0 python ./example/main.py \
--dataset mpii \
--arch hg \
--stack 1 \
--block 1 \
--features 256 \
--checkpoint ./checkpoint/mpii/hg-s1-b1
```

- Example 3: Train on COCO 2014/2017 (set the `--year` argument)
```
CUDA_VISIBLE_DEVICES=0 python ./example/main.py \
--dataset mscoco \
--year 2017 \
--arch hg \
--stack 1 \
--block 1 \
--features 256 \
--checkpoint ./checkpoint/mscoco/hg-s1-b1
```

- Example 4: Resume training from a checkpoint (a convenience wrapper is sketched below)
```
CUDA_VISIBLE_DEVICES=0 python ./example/main.py \
--dataset mpii \
--arch hg \
--stack 8 \
--block 1 \
--features 256 \
--checkpoint ./checkpoint/mpii/hg-s8-b1 \
--resume ./checkpoint/mpii/hg-s8-b1/checkpoint.pth.tar
```
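
The resume flag also enables a simple automation pattern. A hypothetical convenience wrapper (not part of the repo) could start Example 1 from scratch when no checkpoint exists yet, and otherwise resume from the last saved one, using only the flags shown above:
```python
import os
import subprocess

# Hypothetical wrapper around ./example/main.py: resume from the last
# checkpoint if one exists, otherwise start training from scratch.
# The checkpoint filename follows Example 4 above.
ckpt_dir = "./checkpoint/mpii/hg-s8-b1"
cmd = [
    "python", "./example/main.py",
    "--dataset", "mpii",
    "--arch", "hg",
    "--stack", "8",
    "--block", "1",
    "--features", "256",
    "--checkpoint", ckpt_dir,
]
last_ckpt = os.path.join(ckpt_dir, "checkpoint.pth.tar")
if os.path.isfile(last_ckpt):
    cmd += ["--resume", last_ckpt]
subprocess.run(cmd, env={**os.environ, "CUDA_VISIBLE_DEVICES": "0"}, check=True)
```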

- **Evaluation** from an existing model: use `-e`
```
CUDA_VISIBLE_DEVICES=0 python ./example/main.py \
--dataset mpii \
--arch hg \
--stack 8 \
--block 1 \
--features 256 \
--checkpoint ./checkpoint/mpii/hg-s8-b1 \
--resume ./checkpoint/mpii/hg-s8-b1/checkpoint.pth.tar \
-e
```

- **Debug**: use `-d` if you want to visualize the keypoints on the images
```
CUDA_VISIBLE_DEVICES=0 python ./example/main.py \
--dataset mpii \
--arch hg \
--stack 1 \
--block 1 \
--features 256 \
--checkpoint ./checkpoint/mpii/hg-s1-b1 \
--resume ./checkpoint/mpii/hg-s1-b1/checkpoint.pth.tar \
-e \
-d
```

The visualized images should look like this:

![screenshot](./docs/screenshot.png)

data/lsp/mean.pth.tar

430 Bytes
Binary file not shown.

data/mscoco/mean.pth.tar

430 Bytes
Binary file not shown.
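
The two new `mean.pth.tar` files are only 430 bytes each. A hypothetical way to peek inside one (the assumption that each stores dataset mean/std statistics for input normalization is unverified):
```python
import torch

# Hypothetical inspection of one of the 430-byte files added by this commit.
# Assumption: it stores dataset mean/std statistics for input normalization;
# the exact structure is unverified, so just print whatever it contains.
meanstd = torch.load("data/lsp/mean.pth.tar")
print(meanstd)
```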

docs/screenshot.png

365 KB
