
This is the official implementation of our AAAI 2022 paper AdaptivePose and the arXiv paper AdaptivePose++.


AdaptivePose

The new_ops branch supports all PyTorch versions.

The current code achieves better performance than the results reported in the papers.

👏👏👏 A compact and powerful single-stage multi-person pose estimation framework:

AdaptivePose: Human Parts as Adaptive Points
Yabo Xiao, Dongdong Yu, Xiaojuan Wang, Guoli Wang, Qian Zhang, Mingshu He
Published at AAAI 2022

AdaptivePose++: A Powerful Single-Stage Network for Multi-Person Pose Regression
Yabo Xiao, Xiaojuan Wang, Dongdong Yu, Kai Su, Lei Jin, Mei Song, Shuicheng Yan, Jian Zhao
arXiv preprint (arXiv:2210.04014)

Highlights

  • Simple: AdaptivePose is an efficient and powerful single-stage multi-person pose estimation pipeline that effectively models the relationship between a human instance and its keypoints in a single forward pass.

  • Generalizable: AdaptivePose achieves competitive performance on crowded scenes and 3D scenes.

  • Fast: AdaptivePose is a very compact multi-person pose estimation (MPPE) pipeline. During inference it eliminates heuristic grouping and requires no refinement or other hand-crafted post-processing except center NMS.

  • Strong: AdaptivePose uses the center feature together with features at adaptive human part-related points to sufficiently encode diverse human poses. Without flip or multi-scale testing, it outperforms existing bottom-up and single-stage pose estimation approaches in both speed and accuracy (a minimal sketch of the decoding idea follows this list).
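
At a high level, inference finds instance centers on a center heatmap (a simple max-pool NMS, as in CenterNet) and then regresses keypoints using the center feature together with features sampled at the predicted adaptive part points. Below is a minimal, illustrative PyTorch sketch of that decoding idea; the tensor names, head layout, and number of part points are assumptions and do not correspond to the repository's actual modules.

import torch
import torch.nn.functional as F

def center_nms(heatmap, kernel=3):
    # Keep only local maxima on the center heatmap (max-pool NMS, as in CenterNet).
    pad = (kernel - 1) // 2
    hmax = F.max_pool2d(heatmap, kernel, stride=1, padding=pad)
    return heatmap * (hmax == heatmap).float()

def decode_centers(heatmap, top_k=20):
    # Pick the top-K center candidates; returns scores and their (x, y) grid coordinates.
    b, _, h, w = heatmap.shape
    heat = center_nms(heatmap).view(b, -1)
    scores, inds = heat.topk(top_k, dim=1)
    ys = torch.div(inds, w, rounding_mode='floor')
    xs = inds % w
    return scores, xs, ys

# Illustrative shapes: a feature map (B, C, H, W), a center heatmap (B, 1, H, W)
# and per-pixel offsets to N adaptive part points (B, 2*N, H, W).
B, C, H, W, N = 1, 64, 128, 128, 7
feat = torch.randn(B, C, H, W)
center_hm = torch.sigmoid(torch.randn(B, 1, H, W))
part_offsets = torch.randn(B, 2 * N, H, W)

scores, xs, ys = decode_centers(center_hm, top_k=5)

# For each detected center, sample features at its adaptive part points (bilinear
# sampling); the center feature plus these part features are what the final
# keypoint-offset regression would consume. Only the sampling step is shown here.
offs = part_offsets[0, :, ys[0], xs[0]].view(2, N, -1)          # (2, N, K)
part_x = xs[0].float().unsqueeze(0) + offs[0]                   # (N, K)
part_y = ys[0].float().unsqueeze(0) + offs[1]
grid_x = part_x / (W - 1) * 2 - 1                               # normalize to [-1, 1]
grid_y = part_y / (H - 1) * 2 - 1
grid = torch.stack([grid_x, grid_y], dim=-1).unsqueeze(0)       # (1, N, K, 2)
part_feats = F.grid_sample(feat, grid, align_corners=True)      # (1, C, N, K)
print(part_feats.shape)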

Main results

Single-stage multi-person pose estimation on the COCO validation set

Speed: please refer to the paper (https://arxiv.org/abs/2112.13635) for inference time details 🚀. The performance below is slightly better than that reported in the paper. Times are measured on a single Tesla V100 and are faster than the speeds reported in the paper.

We found that stacking more 3x3 conv-ReLU layers in each branch can further improve performance.

We employ an OKS loss for the regression head and achieve better performance without any inference overhead, outperforming all bottom-up and single-stage methods at a faster speed 🚀.
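
For reference, OKS here refers to the standard COCO keypoint similarity, OKS = sum_i exp(-d_i^2 / (2 * s^2 * k_i^2)) over labeled keypoints, and an OKS loss is typically 1 - OKS. A minimal sketch of such a loss (the exact formulation used in this repository may differ):

import torch

# Per-keypoint falloff constants (sigmas) from the standard COCO keypoint evaluation.
COCO_SIGMAS = torch.tensor([.26, .25, .25, .35, .35, .79, .79, .72, .72,
                            .62, .62, 1.07, 1.07, .87, .87, .89, .89]) / 10.0

def oks_loss(pred, gt, vis, area, eps=1e-6):
    # pred, gt: (B, 17, 2) keypoint coordinates
    # vis:      (B, 17) visibility mask (1 for labeled keypoints, 0 otherwise)
    # area:     (B,) instance areas used as the scale term s^2
    d2 = ((pred - gt) ** 2).sum(dim=-1)                 # squared distances, (B, 17)
    k2 = (2.0 * COCO_SIGMAS.to(pred.device)) ** 2       # per-keypoint constants k_i^2
    e = d2 / (2.0 * area.unsqueeze(-1) * k2 + eps)
    oks = (torch.exp(-e) * vis).sum(-1) / (vis.sum(-1) + eps)
    return (1.0 - oks).mean()

# Toy usage
pred = torch.randn(4, 17, 2, requires_grad=True)
gt = torch.randn(4, 17, 2)
vis = torch.ones(4, 17)
area = torch.full((4,), 100.0)
oks_loss(pred, gt, vis, area).backward()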

Backbone inp_res AP Flip AP Multi-scale AP download time/ms
DLA-34 512 67.0 67.4 69.2 model 33
HRNet-W32 512 68.6 69.1 71.2 model 46
HRNet-W48 640 71.0 71.5 73.2 model 57
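
The time/ms column is per-image latency measured on a single Tesla V100. A generic way to time a forward pass on GPU with CUDA events (the measurement protocol used for the paper may differ; the tiny network below is only a placeholder):

import torch

def measure_latency_ms(model, inp, warmup=20, iters=100):
    # Average GPU latency of model(inp) in milliseconds, using CUDA events.
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):          # warm up kernels / cudnn autotuning
            model(inp)
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        torch.cuda.synchronize()
        start.record()
        for _ in range(iters):
            model(inp)
        end.record()
        torch.cuda.synchronize()
    return start.elapsed_time(end) / iters

# Toy usage with a placeholder network and a 512x512 input
if torch.cuda.is_available():
    net = torch.nn.Conv2d(3, 64, 3, padding=1).cuda()
    x = torch.randn(1, 3, 512, 512).cuda()
    print('%.2f ms / image' % measure_latency_ms(net, x))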

Single-stage multi-person pose estimation on the CrowdPose test set

Backbone inp_res AP Flip AP Multi-scale AP download time/ms
HRNet-W32 512 67.5 68.0 69.3 model 46
HRNet-W48 640 70.4 71.0 72.6 model 57

Prepare env

The conda environment torch12 can be downloaded directly from torch12.tar.gz. Place it at AdaptivePose/torch12.tar.gz, then run:

source prepare_env.sh

Alternatively, you can set up the environment with:

source prepare_env2.sh

Prepare data and pretrained models

Follow the instructions in DATA.md to set up the datasets, or link the dataset paths under AdaptivePose/data/:

cd AdaptivePose
mkdir -p data/coco
mkdir -p data/crowdpose
ln -s /path_to_coco_dataset/ data/coco/
ln -s /path_to_crowdpose_dataset/ data/crowdpose/
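
After linking the data, a quick sanity check that the COCO keypoint annotations are readable; the annotation path below assumes the standard data/coco/annotations/person_keypoints_val2017.json layout, so adjust it to match your link:

from pycocotools.coco import COCO

# Hypothetical annotation path following the standard COCO layout under data/coco/
ann_file = 'data/coco/annotations/person_keypoints_val2017.json'

coco = COCO(ann_file)
img_ids = coco.getImgIds(catIds=coco.getCatIds(catNms=['person']))
print('%d images with person annotations' % len(img_ids))

ann_ids = coco.getAnnIds(imgIds=img_ids[0], iscrowd=False)
anns = coco.loadAnns(ann_ids)
print('first image: %d person instances, %d labeled keypoints'
      % (len(anns), sum(a['num_keypoints'] for a in anns)))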

The pretrained models can be downloaded from pretrain_models; put them into AdaptivePose/models.

Training and Testing

After preparing the environment and data, you can train or test AdaptivePose with different networks and input resolutions. 🚀 Note that the input resolution can be adjusted according to your requirements to obtain different speed-accuracy trade-offs.

DLA34 with 512 pixels:

cd src
bash main_dla34_coco512.sh

HRNet-W32 with 512 pixels:

cd src
bash main_hrnet32_coco512.sh

HRNet-W48 with 640 pixels:

cd src
bash main_hrnet48_coco640.sh

Running demo

The closer the input aspect ratio is to 1, the faster the inference.

Visualize COCO results (where $EXPNAME, $ARCNAME, and $RES are the experiment name, network architecture, and input resolution used for training):

torch12/bin/python test.py multi_pose_wodet --exp_id $EXPNAME --dataset coco_hp_wodet --resume --not_reg_offset --not_reg_hp_offset --K 20 --not_hm_hp --arch $ARCNAME --input_res $RES --keep_res --debug 1

Visualize a customized image:

torch12/bin/python demo.py multi_pose_wodet --exp_id $EXPNAME --dataset coco_hp_wodet --resume --not_reg_offset --not_reg_hp_offset --K 20 --not_hm_hp --arch $ARCNAME --input_res $RES --keep_res --debug 1 --demo path/to/image_dir --vis_thresh 0.1

Visualize a customized video:

torch12/bin/python demo.py multi_pose_wodet --exp_id $EXPNAME --dataset coco_hp_wodet --resume --not_reg_offset --not_reg_hp_offset --K 20 --not_hm_hp --arch $ARCNAME --input_res $RES --keep_res --debug 1 --demo path/to/xx.mp4 --vis_thresh 0.1 
Demo videos: xyb.2023-01-01.16.44.56.mp4, xyb.2023-01-01.16.51.39.mp4
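
If you would rather overlay results yourself instead of relying on --debug 1, here is a minimal sketch for drawing one pose, assuming the predictions are available as 17 (x, y, score) triples in the COCO keypoint order (the repository's own visualization code may differ):

import cv2
import numpy as np

# COCO 17-keypoint skeleton as pairs of 0-based keypoint indices.
SKELETON = [(15, 13), (13, 11), (16, 14), (14, 12), (11, 12), (5, 11), (6, 12),
            (5, 6), (5, 7), (6, 8), (7, 9), (8, 10), (1, 2), (0, 1), (0, 2),
            (1, 3), (2, 4), (3, 5), (4, 6)]

def draw_pose(img, kps, thresh=0.1):
    # kps: (17, 3) array of (x, y, score) in the COCO keypoint order
    for a, b in SKELETON:
        if kps[a, 2] > thresh and kps[b, 2] > thresh:
            pa = (int(kps[a, 0]), int(kps[a, 1]))
            pb = (int(kps[b, 0]), int(kps[b, 1]))
            cv2.line(img, pa, pb, (0, 255, 0), 2)
    for x, y, s in kps:
        if s > thresh:
            cv2.circle(img, (int(x), int(y)), 3, (0, 0, 255), -1)
    return img

# Toy usage on a blank canvas with random keypoints
canvas = np.zeros((512, 512, 3), dtype=np.uint8)
kps = np.concatenate([np.random.rand(17, 2) * 512, np.ones((17, 1))], axis=1)
cv2.imwrite("pose_vis.png", draw_pose(canvas, kps))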

Develop

AdaptivePose is built upon the CenterNet codebase. If you are interested in training AdaptivePose on a new pose estimation dataset or adding a new network architecture, please refer to DEVELOP.md. Also feel free to email me (xiaoyabo@bupt.edu.cn) with discussions or suggestions.

Citation

If you find this project useful for your research, please use the following BibTeX entries.

@inproceedings{xiao2022adaptivepose,
  title={Adaptivepose: Human parts as adaptive points},
  author={Xiao, Yabo and Wang, Xiaojuan and Yu, Dongdong and Wang, Guoli and Zhang, Qian and He, Mingshu},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={36},
  number={3},
  pages={2813--2821},
  year={2022}
}

@article{xiao2022adaptivepose++,
  title={AdaptivePose++: A Powerful Single-Stage Network for Multi-Person Pose Regression},
  author={Xiao, Yabo and Wang, Xiaojuan and Yu, Dongdong and Su, Kai and Jin, Lei and Song, Mei and Yan, Shuicheng and Zhao, Jian},
  journal={arXiv preprint arXiv:2210.04014},
  year={2022}
}
