Working with Nvidia Tensor RT and Pose Estimation - Part 2

Using a custom PyTorch model with Tensor RT - Part 2⌗
Our Goal: to create a ROS Node for Pose Estimation
Prerequisites⌗
Make sure you have followed the previous tutorial Here to install dependencies.
Make sure you have done the following from the repo:
- downloaded the `/parse/` folder and put it in your `<catkin-workspace>/<your-package>/src/` directory
- downloaded the `plugins.cpp` and `plugins.hpp` files to your `<catkin-workspace>/<your-package>/src/` directory
Parsing model results class⌗
Create the two files `ParseObjects.cpp` and `ParseObjects.hpp` in your `<catkin-workspace>/<your-package>/src/` directory.
Let's fill these out:
`ParseObjects.cpp`
`ParseObjects.hpp`
Parse Class Explanation⌗
- This class exists as an interface to the libraries in the `/parse/` folder, using `plugins.cpp` and `plugins.hpp`
- Our model's result tensors (CMAP and PAF) are converted into object tensors that we can use for drawing, i.e. `Tensor<int> object_counts`, `Tensor<int> objects`, and `Tensor<float> normalized_peaks`
- These three results will be fed into the GPU for drawing in our next post
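To make that interface concrete, here is a minimal sketch of what `ParseObjects.hpp` could look like. The `Tensor` placeholder struct, the constructor arguments, and the method name are assumptions for illustration only; the real class delegates to the routines exposed by `plugins.hpp`, so use the full source on GitHub for the actual signatures.

```cpp
// ParseObjects.hpp -- illustrative sketch only; the real header delegates to
// the parsing routines from plugins.hpp. Types and names here are placeholders.
#pragma once

#include <vector>

// Stand-in for whatever tensor type the /parse/ plugins operate on.
template <typename T>
struct Tensor {
    std::vector<int> shape;  // e.g. {batch, channels, height, width}
    std::vector<T>   data;   // flattened row-major storage
};

// Bundles the three outputs the drawing code will consume.
struct ParsedPose {
    Tensor<int>   object_counts;     // number of detected people per image
    Tensor<int>   objects;           // keypoint indices for each detected person
    Tensor<float> normalized_peaks;  // peak locations normalized to [0, 1]
};

class ParseObjects {
public:
    explicit ParseObjects(int num_parts, int num_links)
        : num_parts_(num_parts), num_links_(num_links) {}

    // Convert the raw network outputs (confidence map + part affinity fields)
    // into object tensors. Internally this would call the peak-finding,
    // score-graph and part-connection routines from the /parse/ folder.
    ParsedPose Parse(const Tensor<float>& cmap, const Tensor<float>& paf);

private:
    int num_parts_;
    int num_links_;
};
```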
poseNet API Class⌗
Create the file `node_imagetaker_posenet.cpp` in your `<catkin-workspace>/<your-package>/src/` directory.
Let's fill this out:
`node_imagetaker_posenet.h`
Add the following to your CMakeLists.txt:
- note the `GLOB` and the `${parseSrc}` for compiling all files in a folder!
- the rest you have seen before
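The key piece is a `file(GLOB ...)` that sweeps up everything in the parse folder. Here is a minimal sketch built around it; the extra source files listed and the linked libraries are assumptions, so adjust them to match your package:

```cmake
# Collect every .cpp under src/parse/ into one variable so we don't have to
# list the files by hand (this is what GLOB and ${parseSrc} are for).
file(GLOB parseSrc "src/parse/*.cpp")

# Build the node together with the parsing sources and plugins.cpp.
add_executable(node_imagetaker_posenet
  src/node_imagetaker_posenet.cpp
  src/ParseObjects.cpp
  src/plugins.cpp
  ${parseSrc}
)

target_link_libraries(node_imagetaker_posenet
  ${catkin_LIBRARIES}
)
```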
PoseNet ROS Node explanation⌗
This node is similar to the other nodes we have explored (ResNet, ImageNet, DetectNet).
It simply declares the bootstrapping code for the ROS node, i.e.:
- Name the node
- Set up publishers and subscribers
- Load the model and optimize it with Tensor RT
When the node receives an image, it will run pose estimation on it and publish an image showing the detected pose points.
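To illustrate that bootstrapping, here is a stripped-down skeleton of such a node. The input topic name and the inference step are placeholders (the real node loads the Tensor RT-optimized model and pushes its output through ParseObjects and the drawing code), so treat this as a sketch rather than the actual `node_imagetaker_posenet.cpp`:

```cpp
// Sketch of the node's bootstrapping code; the inference details are omitted.
#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>

image_transport::Publisher result_pub;

void imageCallback(const sensor_msgs::ImageConstPtr& msg)
{
    // Convert the incoming ROS image to an OpenCV image.
    cv_bridge::CvImagePtr cv_ptr =
        cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::BGR8);

    // Placeholder: run the Tensor RT engine on cv_ptr->image, parse the
    // CMAP/PAF outputs with ParseObjects, and draw the detected pose points
    // onto the image before publishing it.

    result_pub.publish(cv_ptr->toImageMsg());
}

int main(int argc, char** argv)
{
    // 1. Name the node.
    ros::init(argc, argv, "imagetaker_posenet");
    ros::NodeHandle nh;

    // 2. Set up publishers and subscribers (the input topic is an assumption).
    image_transport::ImageTransport it(nh);
    image_transport::Subscriber sub =
        it.subscribe("image_raw", 1, imageCallback);
    result_pub = it.advertise("imagetaker_posenet/posenet_result", 1);

    // 3. Load the model and optimize it with Tensor RT (omitted in this sketch).

    ros::spin();
    return 0;
}
```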
Verify⌗
Run `catkin_make` to compile your code and make sure no errors show up.
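For example, from the root of your workspace (the path below is just an assumption for a typical setup):

```bash
cd ~/catkin_ws    # replace with your <catkin-workspace>
catkin_make
```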
If any errors show up, make a comment and I will try to help you with it
In a terminal run
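Assuming the standard workflow, the first terminal starts the ROS master:

```bash
roscore
```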
In another terminal run
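Then start the node we just built; the package name below is a placeholder for your own package:

```bash
rosrun <your-package> node_imagetaker_posenet
```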
- NOTE: you may see a warning about a camera calibration file, you can ignore it
In a third terminal run
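The third terminal brings up the viewer:

```bash
rosrun rqt_image_view rqt_image_view
```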
In `rqt_image_view`, click the drop-down and select `imagetaker_posenet/posenet_result` to see the pose estimation results in real-time!

Notes and Tips⌗
- Use `rosparam` to set parameters for nodes
- Use `image_publisher` to publish test images, then pipe those images into our node (example commands below)
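For example (the parameter name and file paths below are made up for illustration):

```bash
# set a parameter the node can read at startup
rosparam set /imagetaker_posenet/model_path /path/to/your/model.engine

# publish a test image; remap its topic onto the node's input if needed
rosrun image_publisher image_publisher /path/to/test_image.jpg
```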
GDB Debugging with ROS⌗
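One handy approach is to run the node under gdb using rosrun's `--prefix` option (the package name is a placeholder again):

```bash
rosrun --prefix 'gdb -ex run --args' <your-package> node_imagetaker_posenet
```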
Phew, this was too many all-nighters!
If you have any tips or questions feel free to leave a comment!
NOTE: this code was created as a proof of concept; do not put it in any production code unless you are absolutely sure you know what you are doing.
In my next post, I will explain how we drew the Pose results using CUDA!
Links to source code⌗
Full source code available on GitHub
Sources and Recommended Reading⌗
- Nvidia AI IOT Trt Pose on GitHub
- Tensor RT Home
- Tensor RT Documentation
- Lots of Googling