
Is there any existing method of using the depth camera data to improve the accuracy of the tracking camera?

JTrevors6
Beginner

We are using a T265 tracking camera to provide the x, y, and theta of our robot. The tracking has not been reliable, and we have been searching for ways to improve it.

When taking data strictly from the tracking camera, the reported position was wildly inconsistent, and it was off by a different amount each time we restarted our program. For example, on one run the RealSense was 1.3 meters off its target, on the next run 4.3 meters off, and on the run after that 2.4 meters off. Nothing was changed between these runs.

After some research we found many recommendations to add wheel odometry, so we tried that. With wheel odometry the robot does eventually report its location accurately, but only given time: the output is inconsistent when the robot first arrives at a point, then slowly converges on the correct position over 10-20 seconds. We need data quickly and reliably, so we started looking into compatibility with the RealSense D435i depth camera.

We are now wondering if there is a way to pull distances from the depth camera and apply them to the tracking camera to get more accurate tracking. We have seen some integration between the two cameras, but mainly programs built from the combined output of both, rather than one camera being used to correct the accuracy of the other. The cameras will be mounted at a fixed position and angle on the robot, so does any existing algorithm pull depth data from the D435i and use it to adjust the tracking of the T265? If any other thoughts come to mind, or if clarification is needed on any of this, feel free to comment below. Thanks
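To illustrate the kind of adjustment we have in mind (this is only a rough sketch of the idea, not an existing RealSense algorithm; the known landmark position and the depth-measured range are hypothetical inputs):

```python
import numpy as np

def correct_pose(pose_xy, landmark_xy, measured_range, gain=0.3):
    """Nudge a T265 (x, y) pose estimate toward agreement with a
    depth-measured range to a landmark at a known map position.

    pose_xy: current (x, y) estimate from the tracking camera
    landmark_xy: known (x, y) of a landmark visible to the depth camera
    measured_range: distance to that landmark taken from a D435i depth frame
    gain: 0..1, how strongly to trust the depth measurement
    """
    pose_xy = np.asarray(pose_xy, dtype=float)
    landmark_xy = np.asarray(landmark_xy, dtype=float)
    diff = pose_xy - landmark_xy
    predicted_range = np.linalg.norm(diff)
    if predicted_range < 1e-6:
        return pose_xy  # degenerate case: pose is on top of the landmark
    # Move the pose along the landmark-to-robot direction so the
    # predicted range shifts toward the measured range.
    error = measured_range - predicted_range
    correction = gain * error * (diff / predicted_range)
    return pose_xy + correction
```

With a gain below 1 this acts like a complementary filter, blending the depth measurement into the tracking estimate over successive frames rather than snapping to it.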

MartyG
Honored Contributor III

You may get improved results if you accelerate the robot above its normal speed when starting motion. This helps the T265 build high tracking confidence and gives it enough motion to better understand its surroundings. The speed can then be reduced to normal once high confidence is achieved.

Intel has published a guide to combining the T265 with a depth camera for better tracking. It includes a link to a ROS package that generates a 2D occupancy map from depth images from the depth camera and poses from the tracking camera.

https://www.intelrealsense.com/depth-and-tracking-combined-get-started/

The guide states: "Combining the two devices together gives a robot a broader understanding of the space that it is in, and allow it to create an occupancy map of the environment and navigate through it with ease."
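As a rough illustration of what the occupancy-map idea involves (this is not the ROS package's actual code; the grid layout, frame conventions, and inputs here are assumptions), a depth reading can be dropped into a grid using the pose reported by the tracking camera:

```python
import numpy as np

def update_occupancy(grid, pose, depth_points, resolution=0.05):
    """Mark grid cells hit by depth points as occupied.

    grid: 2D int array (0 = free, 1 = occupied), map origin at cell (0, 0)
    pose: (x, y, theta) of the robot in the map frame (e.g. from the T265)
    depth_points: Nx2 array of (forward, left) points in the robot frame
                  (e.g. deprojected from a D435i depth image)
    resolution: metres per grid cell
    """
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    pts = np.asarray(depth_points, dtype=float)
    # Rotate points into the map frame and translate by the pose.
    map_x = x + c * pts[:, 0] - s * pts[:, 1]
    map_y = y + s * pts[:, 0] + c * pts[:, 1]
    cols = (map_x / resolution).astype(int)
    rows = (map_y / resolution).astype(int)
    # Discard points that fall outside the grid.
    inside = (rows >= 0) & (rows < grid.shape[0]) & \
             (cols >= 0) & (cols < grid.shape[1])
    grid[rows[inside], cols[inside]] = 1
    return grid
```

The key point is that the pose from the tracking camera and the points from the depth camera live in different frames, so every depth reading has to be transformed by the current pose before it can land in the shared map.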
