trt_pose

Real-time pose estimation accelerated with NVIDIA TensorRT
Want to detect hand poses? Check out the new trt_pose_hand project for real-time hand pose and gesture recognition!

trt_pose is aimed at enabling real-time pose estimation on NVIDIA Jetson. You may find it useful for other NVIDIA platforms as well. Currently the project includes:

  • Pre-trained models for human pose estimation capable of running in real time on Jetson Nano. This makes it easy to detect keypoints such as left_eye, left_elbow, and right_ankle.

  • Training scripts to train on any keypoint task data in MSCOCO format. This means you can experiment with training trt_pose for keypoint detection tasks other than human pose (the task format is sketched below).
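
A keypoint task is described by a JSON file in the MSCOCO category format. As a minimal sketch (assuming the tasks/human_pose/human_pose.json file that ships with this repository), the file lists named keypoints plus a skeleton of links between them:

    import json

    # Load the human pose task description bundled with the repository.
    with open('tasks/human_pose/human_pose.json', 'r') as f:
        human_pose = json.load(f)

    # Named keypoints, e.g. 'nose', 'left_eye', 'left_elbow', 'right_ankle', ...
    print(human_pose['keypoints'])
    # Pairs of keypoint indices that form the skeleton's limbs.
    print(human_pose['skeleton'])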

To get started, follow the instructions below. If you run into any issues, please let us know.

Getting Started

To get started with trt_pose, follow these steps.

Step 1 - Install Dependencies

  1. Install PyTorch and Torchvision. To do this on NVIDIA Jetson, we recommend following this guide.

  2. Install torch2trt

    git clone https://github.com/NVIDIA-AI-IOT/torch2trt
    cd torch2trt
    sudo python3 setup.py install --plugins
  3. Install other miscellaneous packages

    sudo pip3 install tqdm cython pycocotools
    sudo apt-get install python3-matplotlib
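
With the dependencies installed, a quick sanity check (a minimal sketch, not part of the official steps) confirms that the packages import and that CUDA is visible to PyTorch:

    import torch
    import torchvision
    import torch2trt  # imported only to verify the plugin build installed

    print('torch', torch.__version__, '| torchvision', torchvision.__version__)
    # TensorRT acceleration requires a CUDA-capable device.
    print('CUDA available:', torch.cuda.is_available())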

Step 2 - Install trt_pose

git clone https://github.com/NVIDIA-AI-IOT/trt_pose
cd trt_pose
sudo python3 setup.py install
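
As a quick check (again a sketch, not part of the official steps), verify that the package and its compiled C++ post-processing extension import cleanly:

    import trt_pose
    # The plugins module is the C++ extension built by setup.py; if this
    # import fails, the install step above did not complete successfully.
    import trt_pose.plugins

    print('trt_pose installed OK')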

Step 3 - Run the example notebook

We provide a couple of human pose estimation models pre-trained on the MSCOCO dataset. The throughput in frames per second (FPS) is shown for each platform:

Model                                Jetson Nano   Jetson Xavier   Weights
resnet18_baseline_att_224x224_A      22 FPS        251 FPS         download (81MB)
densenet121_baseline_att_256x256_B   12 FPS        101 FPS         download (84MB)

To run the live Jupyter Notebook demo on real-time camera input, follow these steps:

  1. Download the model weights using the link in the table above.

  2. Place the downloaded weights in the tasks/human_pose directory.

  3. Open and follow the live_demo.ipynb notebook.

    You may need to modify the notebook depending on which model you use; a condensed sketch of what it does follows below.
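
For orientation, here is a condensed sketch of the notebook's main flow (a non-authoritative outline; the weights filename is an example, and paths and input resolution should match the model you downloaded):

    import json
    import torch
    import trt_pose.coco
    import trt_pose.models
    from trt_pose.parse_objects import ParseObjects

    # Load the task description (keypoints and skeleton) and build the topology.
    with open('human_pose.json', 'r') as f:
        human_pose = json.load(f)
    topology = trt_pose.coco.coco_category_to_topology(human_pose)

    # Build the model and load the downloaded weights (example filename).
    num_parts = len(human_pose['keypoints'])
    num_links = len(human_pose['skeleton'])
    model = trt_pose.models.resnet18_baseline_att(num_parts, 2 * num_links).cuda().eval()
    model.load_state_dict(torch.load('resnet18_baseline_att_224x224_A_epoch_249.pth'))

    # The notebook then optimizes the model with torch2trt, roughly:
    #   model_trt = torch2trt.torch2trt(model, [data], fp16_mode=True)

    # Run inference on a preprocessed 224x224 frame and parse the confidence
    # maps (cmap) and part affinity fields (paf) into detected people.
    parse_objects = ParseObjects(topology)
    data = torch.zeros((1, 3, 224, 224)).cuda()  # placeholder for a camera frame
    cmap, paf = model(data)
    counts, objects, peaks = parse_objects(cmap.detach().cpu(), paf.detach().cpu())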

See also

  • trt_pose_hand - Real-time hand pose estimation based on trt_pose

  • torch2trt - An easy-to-use PyTorch to TensorRT converter

  • JetBot - An educational AI robot based on NVIDIA Jetson Nano

  • JetRacer - An educational AI racecar using NVIDIA Jetson Nano

  • JetCam - An easy-to-use Python camera interface for NVIDIA Jetson

References

Cao, Zhe, et al. "Realtime multi-person 2d pose estimation using part affinity fields." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017.

Xiao, Bin, Haiping Wu, and Yichen Wei. "Simple baselines for human pose estimation and tracking." Proceedings of the European Conference on Computer Vision (ECCV). 2018.
