
ZR300 Skeleton Tracking

idata
Employee

Hello,

Is it possible to use the ZR300 for tracking people via skeleton tracking?

Regards

Niko

19 Replies
MartyG
Honored Contributor III

The RealSense SDK for Linux, which was designed for use with the ZR300, has Person Tracking. Skeleton tracking is a sub-feature of Person Tracking.

https://software.intel.com/sites/products/realsense/sdk/getting_started.html Intel® RealSense™ SDK for Linux: Getting Started

A limitation of the feature, which is classed as Preview level and so has limited complexity, is that the body skeleton and gestures do not work well when the person is close (<0.7m) to an object.

idata
Employee

Thank you MartyG for the fast response. I plan to use the ZR300 on Windows. Despite reading that it is not suited for Windows, the examples did work. Is there any way to use the RealSense SDK for Linux on Windows?

MartyG
Honored Contributor III

The RealSense SDK for Linux is just for Linux.

I know that the open-source librealsense SDK can treat the ZR300 as though it were an R200 camera (the ZR300's IR components are identical to the R200's), so it stands to reason that some R200 applications in the Windows-based RealSense SDK may run with a ZR300. Since the ZR300 is not officially supported in the RealSense SDK, though, I would not normally recommend it.

However, if you do wish to see what functions in the Windows SDK will work with your ZR300, the Windows SDK's '2016 R2' version does have Person Tracking and Skeleton Tracking. The quickest and easiest way to test it would be to run the R200 'Person Tracking' sample that comes with the R2 SDK.

SCoop2
Beginner

Hey! I have the same question. I have actually downloaded the launch files from here: https://github.com/IntelRealSense/realsense_samples_ros GitHub - IntelRealSense/realsense_samples_ros: Sample code illustrating how to develop ROS applications using the Intel®… and I launch the skeleton.launch file from the terminal. I can see the camera feed, and a box is drawn around the person, but it doesn't really detect the skeleton. Should I do something else? I ran the following commands to build the workspace and start the launch file:

mkdir -p catkin_ws/src
cd catkin_ws/src/
catkin_init_workspace
git clone https://github.com/IntelRealSense/realsense_samples_ros
cd ..
catkin_make
source devel/setup.bash
roslaunch realsense_ros_person demo_person_skeleton.launch

Do I maybe have to install the RealSense SDK for Linux first?

My aim here is to use the camera for different kinds of gesture recognition, so it must detect when a person is raising a hand, lowering it, moving it sideways, and so on.

MartyG
Honored Contributor III

The installation instructions on that page say "The Intel RealSense SDK for Linux is used as the base for these ROS node[s]" and follow that with a link to the installation instructions for the SDK. So yes, I would definitely recommend installing the SDK for Linux first.

Having said that, an alternative way to achieve your goals may be to use your RealSense camera with OpenCV. Google 'opencv hand tracking' for details on how to use it to recognize hand gestures.
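To give a flavour of that route, here is a minimal OpenCV sketch of the classic hand-detection pipeline (skin-colour thresholding followed by contour analysis). This is a sketch only: the HSV bounds are assumptions that need tuning for real lighting, and gesture classification from the hull and defects is left as a comment.

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);                    // any colour camera
    if (!cap.isOpened()) return -1;

    cv::Mat frame, hsv, mask;
    while (cap.read(frame))
    {
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
        // Rough skin-tone range - tune these bounds for your lighting
        cv::inRange(hsv, cv::Scalar(0, 30, 60), cv::Scalar(20, 150, 255), mask);

        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        // The largest contour is assumed to be the hand; its convex hull and
        // convexity defects can then be used to count fingers / classify gestures.

        cv::imshow("skin mask", mask);
        if (cv::waitKey(1) == 27) break;        // Esc to quit
    }
    return 0;
}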

SCoop2
Beginner

Hey! Thanks for replying. I have just finished downloading the SDK, since I haven't had access to a screen over the weekend. However, the output remains the same (a bounding box around each detected person, which is then tracked, but no distinctive skeleton features). Has anybody managed to run this launch file? What should show up?

Will try with OpenCV!!

MartyG
Honored Contributor III

The documentation for Person Tracking says that the Skeleton Tracking mode is dependent on Person Tracking mode. The instruction to launch Person Tracking first is:

$ roslaunch realsense_ros_person demo_person_tracking.launch

This information is given in the doc's 'Usage' section, about three-quarters down the page.

https://github.com/IntelRealSense/realsense_samples_ros/blob/kinetic-devel/realsense_ros_person/README.md realsense_samples_ros/README.md at kinetic-devel · IntelRealSense/realsense_samples_ros · GitHub

SCoop2
Beginner

Hello! Yes, I have tried launching the person_tracking.launch file first and then, in another terminal, the skeleton launch file, but the first one stops running and the result is the same...

MartyG
Honored Contributor III

I apologize for the delay in responding. I was carefully researching your question.

Looking again at the documentation, there is a list of parameters that can be added, including one for enabling / disabling skeleton tracking.

~sceletonEnabled

Their spelling, not mine!

I am not sure how parameters are applied in ROS. As they are an override it may be something like:

$ roslaunch realsense_ros_person ~sceletonEnabled
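For what it's worth, ROS private parameters (the ~ prefix) are normally set inside the launch file rather than on the roslaunch command line. A hedged sketch of what that edit might look like, keeping the documentation's spelling; the package, type and node names here are assumptions based on the demo launch files rather than verified sources:

<launch>
  <!-- Sketch only: node/type names are assumed, not taken from the repo -->
  <node pkg="realsense_ros_person" type="realsense_ros_person_node" name="person_tracking" output="screen">
    <param name="sceletonEnabled" value="true"/>
  </node>
</launch>

Adding the <param> line to the person-tracking node inside the existing demo_person_skeleton.launch should have the same effect.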

SCoop2
Beginner

Hey again, and sorry for the delay! I've been trying what you said and investigating what my error may be, and I believe I'm not installing the RealSense SDK correctly, since I apparently have unmet dependencies. I'm currently trying to figure out how to install the SDK correctly, since I think I'm missing some packages. I will keep you posted!

MartyG
Honored Contributor III

Thanks for the update. Good luck!

idata
Employee

Hey MartyG, sadly the skeleton tracking in the R2 SDK did not work with the ZR300. We are currently looking at the D4xx series for our project. I found this post on the GitHub of the RealSense SDK 2.0: https://github.com/IntelRealSense/librealsense/issues/743 Are you going to have skeleton tracking for D435 (SDK 2.0)? · Issue #743 · IntelRealSense/librealsense · GitHub. Do you think this means that skeleton tracking / person tracking is currently not supported by SDK 2.0?

MartyG
Honored Contributor III

I don't know of any plans for Intel to create their own Person / Skeleton tracking solution for SDK 2.0, but you may be able to get that feature in SDK 2.0 using an OpenCV person detection module. You can research this by googling for 'opencv person detection'. Here's an example:

http://mccormickml.com/2013/05/09/hog-person-detector-tutorial/ HOG Person Detector Tutorial · Chris McCormick

Intel are working on their website today (Monday) and links are having trouble launching, so in the meantime you can access the above page by copying and pasting the link below into a browser window instead of clicking on it.

mccormickml.com/2013/05/09/hog-person-detector-tutorial/
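For a concrete starting point, here is a minimal sketch of OpenCV's built-in HOG people detector, the technique the tutorial above covers. It works on any colour stream; the camera index is an assumption:

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);                  // assumed camera index
    if (!cap.isOpened()) return -1;

    cv::HOGDescriptor hog;
    hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());

    cv::Mat frame;
    while (cap.read(frame))
    {
        std::vector<cv::Rect> people;
        hog.detectMultiScale(frame, people);  // one rectangle per detected person
        for (const cv::Rect& r : people)
            cv::rectangle(frame, r, cv::Scalar(0, 255, 0), 2);
        cv::imshow("people", frame);
        if (cv::waitKey(1) == 27) break;      // Esc to quit
    }
    return 0;
}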

SCoop2
Beginner

Hello again!

 

I finally managed to install the RealSense SDK correctly, and I have actually tried running rs_pt_tutorial_3 from realsense_samples; I have managed to detect the pointing gesture via the camera. However, I still haven't managed to run the skeleton.launch file, or even the equivalent gesture recognition launch file (which should also detect the pointing gesture, like rs_pt_tutorial_3 does). It does say mEnableSkeleton=1 when I run the launch file, yet on a previous line I can see 'skeleton disabled', which is confusing... It is true that you need to run the person_tracking module first, but the launch file already launches that node first. Could it be that the files aren't well connected? I have found a file called SkeletonJoints where the different joints of the body are specified, but I can't see where it is called from the main launch file...
MartyG
Honored Contributor III

The SDK's documentation gives this information about pt_tutorial_3:

"This sample app illustrates how to get the body tracking points and detect a pointing gesture. The app will display 6 body points , a "Pointing Detected" alert when the gesture is performed, and the pointing vector."

Does the sample display the 6 body points?

SCoop2
Beginner

Well, I suppose that by 6 points it means the world-coordinate origin and direction, like below, which is what the documentation says it shows:

https://github.com/IntelRealSense/realsense_samples/tree/master/samples/pt_tutorial_3 realsense_samples/samples/pt_tutorial_3 at master · IntelRealSense/realsense_samples · GitHub

These points, however, are only shown when a pointing gesture is performed. It would suit me if I could disable this condition and obtain the coordinate points the whole time, not only when a pointing gesture is performed. That way I'd at least be able to track whether a person has raised their arms, though I'd still need to work out how to detect other skeleton parts. However, I can't figure out from the code what I should edit to obtain that.

Pointing detected, PID: 0
color coordinates: origin(x,y): 250, 141
direction(x,y): 0.2, 0.324
world coordinates: origin(x,y,z): -0.100094, -0.213276, 1.356
direction(x,y,z): 0.421522, 0.678274, -0.098

Pointing detected, PID: 0
color coordinates: origin(x,y): 250, 141
direction(x,y): 0.2, 0.324
world coordinates: origin(x,y,z): -0.100094, -0.213276, 1.356
direction(x,y,z): 0.421522, 0.678274, -0.098
MartyG
Honored Contributor III

If I were editing the script, I would remove the following lines to stop activation being dependent on the gesture module.

LINES 36-46

// Enable Pointing Gesture
ptModule->QueryConfiguration()->QueryGestures()->Enable();
ptModule->QueryConfiguration()->QueryGestures()->EnableAllGestures();
ptModule->QueryConfiguration()->QueryTracking()->Enable();
// Configure enabled Pointing Gesture
if (ptModule->set_module_config(actualModuleConfig) != rs::core::status_no_error)
{
    cerr << "Error : Failed to configure the enabled Pointing Gesture" << endl;
    return -1;
}

LINES 75-77

// Print gesture information
// Color coordinates and world coordinates for both gesture origin and direction.
console_view->on_person_pointing_gesture_info_update(ptModule);

SCoop2
Beginner

Hey!

I have done as you said, and I think I've removed some sort of limitation, since now I can detect coordinates not only when my arm is pointing downwards, but also if I raise it to the side (like drawing a semi-circle from bottom to top, as seen in the picture below). However, it can only detect my arm when I raise it to a little below my shoulders; that is, it doesn't record anything if I raise my arms above my head, for example, and it only detects one arm... Any idea how I could make the code even more flexible? My final aim is to recognize whether a person is using their right or left arm, and also to detect if they have raised their hand (like when you raise a hand to ask a question), or raised their hand to their head, nose, shoulders... So basically I'd like to get the coordinates of the hand, and since I can record the person's face thanks to face tracking, I could determine whether a body part has been touched by comparing both coordinates (or so I think, at least).
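For the touch-detection idea at the end, the comparison itself is just a distance test between two world-coordinate points. A minimal sketch, where Point3D is a hypothetical stand-in for whatever structure the SDK actually returns (world coordinates in metres, as in the console output above):

#include <cmath>

// Hypothetical 3D point standing in for the SDK's world-coordinate output.
struct Point3D { float x, y, z; };

// True when the hand is within `threshold` metres of a face landmark
// (e.g. the nose or an eye reported by face tracking).
bool isTouching(const Point3D& hand, const Point3D& landmark, float threshold = 0.15f)
{
    const float dx = hand.x - landmark.x;
    const float dy = hand.y - landmark.y;
    const float dz = hand.z - landmark.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz) < threshold;
}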

MartyG
Honored Contributor III

Thanks for the great feedback - I'm sure it'll be helpful to others!

Whilst I don't have direct hands-on experience with the RealSense SDK for Linux, if the joint-detection principles are similar to the Windows SDK's then that may give some clues about the remaining issues you are experiencing.

The RealSense camera has to be able to clearly see the joint points of the hand in order to lock on detection and tracking. The strongest detection is achieved when the palm of the hand faces directly towards the camera. Turning the hand side-on reduces the camera's ability to clearly see the joints, meaning that tracking may still work but is more likely to slow or stall. If a detected joint that is being tracked moves outside of the camera lens' view - for example, when lifting the arm above the head - then detection of the joint may be lost until the next time that the hand is within the camera's line of sight.

You may be able to get a fuller range of tracking by moving further away from the camera so that it can see more of your body. The Release Notes for the SDK's Person Library describe the ideal ranges of distance from the camera, height of the camera, etc.

https://software.intel.com/sites/products/realsense/release_notes/person_release_notes.html Release Notes for Intel® RealSense™ SDK for Linux: Release Notes for Intel® RealSense™ for Linux Person Library

In my own full-body tracking project in the Unity game engine, I got around the camera-view limitations by amplifying the amount that the on-screen representation of the arm moved, so that a small real-life arm movement could produce a virtual arm swing large enough to reach above the head without the real hand leaving camera view and causing a tracking stall.
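That amplification is just a gain applied to the tracked joint's displacement from a calibrated rest position. A sketch with illustrative names (not from any SDK):

// Map a small real-world movement onto a larger virtual one.
// restY is the joint's height at calibration; gain > 1 exaggerates movement.
float amplifiedY(float trackedY, float restY, float gain = 2.5f)
{
    return restY + (trackedY - restY) * gain;
}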

Regarding two-handed control: in the Windows SDK at least, this is done by establishing detection of one hand first and then the other. The hands should be kept apart so that they do not cross over each other and cause one of the arms to stall through detection loss, and they should remain within camera view as they move. I found that a kind of 'air drumming' was effective for maintaining two-handed control: moving the hands back and forward rhythmically, so that each hand returns to the camera's line of sight before the camera can register a tracking failure.

Here's a very old video of my project that demonstrates the full-body tracking principle with travel-amplified virtual limbs.

https://www.youtube.com/watch?v=T_V8SNwLTrY 'My Father's Face' Tech Trailer 7 - YouTube
