
Combining multiple depth streams into one

MMend10
Beginner
4,374 Views

Hello,

I know it is possible to view depth streams from multiple cameras. However, these streams are separate.

Right now I need to combine streams from multiple cameras (more specifically, two D415s) to obtain a combined point cloud. The problem is that these streams obviously don't match, due to differences in each camera's position and rotation.

I was wondering whether there is a native, ready-made solution for combining these streams into a single matched one, or whether an additional custom programmatic solution is required to achieve this.

Thank you in advance for whatever answers/attention.

6 Replies
MartyG
Honored Contributor III

The easiest approach may be to modify the 'Align' sample of the RealSense SDK 2.0, which aligns streams. The sample's code automatically looks for streams from all attached cameras.

https://github.com/IntelRealSense/librealsense/tree/master/examples/align

MMend10
Beginner

This may be the best option I've found so far, thank you.

I'll be looking into this solution as quickly as possible and report back soon™

Since this wasn't included in the examples provided with the SDK 2.0 VS project, I never found it, let alone considered it.

MartyG
Honored Contributor III

You are very welcome. You can find the current full list of SDK 2.0 samples at the link below. Further samples are sometimes added when new builds of the SDK 2.0 are released. Good luck!

https://github.com/IntelRealSense/librealsense/tree/master/examples

EMuel
Novice

Hello,

I'm working on the same issue. Did you find a way to combine/align point clouds from two D415s?

I spent weeks trying to (somehow) understand the SDK examples, and had to start learning C++ in the meantime. Now I can project point clouds from two D415s in the same OpenGL window, one on top of the other.

So far, so good. But how do I go further and merge the point clouds?

The rs-align example doesn't really help.

Cheers, Edgar

Anders_G_Intel
Employee

The first and easiest way to do this is to create the individual point clouds and then translate and rotate them using affine transforms.

https://en.wikipedia.org/wiki/Affine_transformation Affine transformation - Wikipedia

Basically, it involves transforming your (X, Y, Z) points to a new rotated set. Many programming languages have this as a predefined function.

Then just rotate/translate the one camera's viewpoint until it looks aligned. Append the two arrays of points, and you have your final point cloud.
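This approach can be sketched in plain C++. To keep the sketch self-contained it uses no librealsense types; the `Point3` struct, the choice of a Y-axis rotation, and the transform parameters are my assumptions — in practice you would tweak the angle and translation by eye until the clouds line up, as described above.

```cpp
#include <cmath>
#include <vector>

// Minimal stand-in for the (X, Y, Z) vertices of a point cloud
// (hypothetical type; adapt to whatever your SDK hands you).
struct Point3 { float x, y, z; };

// Apply a rigid transform: rotate about the Y axis by angleRad,
// then translate by (tx, ty, tz).
Point3 transformPoint(const Point3& p, float angleRad,
                      float tx, float ty, float tz) {
    float c = std::cos(angleRad), s = std::sin(angleRad);
    return {  c * p.x + s * p.z + tx,
              p.y + ty,
             -s * p.x + c * p.z + tz };
}

// Bring the second cloud into the first cloud's coordinate frame,
// then append it, yielding one merged cloud.
std::vector<Point3> mergeClouds(const std::vector<Point3>& cloudA,
                                const std::vector<Point3>& cloudB,
                                float angleRad,
                                float tx, float ty, float tz) {
    std::vector<Point3> merged = cloudA;
    merged.reserve(cloudA.size() + cloudB.size());
    for (const Point3& p : cloudB)
        merged.push_back(transformPoint(p, angleRad, tx, ty, tz));
    return merged;
}
```

A full affine transform would use a 4×4 matrix covering arbitrary rotations; the single-axis rotation here is just the simplest case of the same idea.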

This should work for the most part.

If you want even better results, you should look at packages or techniques that recalibrate the two cameras and bundle-adjust, giving you the best alignment.

Maybe someone else can guide you on this if you need this level of alignment.

EMuel
Novice

Thank you for the advice.

I need the point clouds to match as closely as possible.

I got the transformation working by manipulating the translation and rotation of one cloud. However, this is manual alignment and not precise.

RecFusion provides what I need, and it also saves the transformation values to a file. Using that information might be a quick-and-dirty solution for now. But since I want to record the streams rather than capture a static 3D object, RecFusion is not my choice. I also prefer to program it myself.

So recalibration of the cameras is the next step.

 

RecFusion needs a calibration chart. I will try to:

- find the edges of the chart's rectangle in each device's frame

- compare the distances given by the depth values of those points

- I expect to run into a lot of math

Does anyone know a library that does this "bundle calibration" for a given set of devices?
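One way to sketch the math behind those steps: given three matched, non-collinear 3D chart points seen by both cameras (corners backprojected using the depth values), you can build an orthonormal frame from each triple and recover the rotation and translation between the cameras. This is a minimal sketch with hypothetical names and no noise handling; a real calibration would use many corner points with a least-squares fit (Kabsch/Horn method) or full bundle adjustment.

```cpp
#include <array>
#include <cmath>

struct Vec3 { double x, y, z; };
using Mat3 = std::array<std::array<double, 3>, 3>;  // row-major rotation matrix

Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
Vec3 normalize(Vec3 a) {
    double n = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    return {a.x / n, a.y / n, a.z / n};
}

// Build an orthonormal frame (columns u, v, w) from three non-collinear
// points, e.g. three corners of the calibration chart.
Mat3 frameFrom(Vec3 p0, Vec3 p1, Vec3 p2) {
    Vec3 u = normalize(sub(p1, p0));
    Vec3 w = normalize(cross(u, sub(p2, p0)));
    Vec3 v = cross(w, u);
    return {{{u.x, v.x, w.x}, {u.y, v.y, w.y}, {u.z, v.z, w.z}}};
}

// r = a * b^T: composing one frame with the transpose (inverse) of the other.
Mat3 mulTranspose(const Mat3& a, const Mat3& b) {
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r[i][j] += a[i][k] * b[j][k];
    return r;
}

Vec3 apply(const Mat3& m, Vec3 p) {
    return {m[0][0] * p.x + m[0][1] * p.y + m[0][2] * p.z,
            m[1][0] * p.x + m[1][1] * p.y + m[1][2] * p.z,
            m[2][0] * p.x + m[2][1] * p.y + m[2][2] * p.z};
}

// Recover R and t such that pA = R * pB + t, from the same three chart
// points (a0..a2 in camera A's frame, b0..b2 in camera B's frame).
void estimateRigid(Vec3 a0, Vec3 a1, Vec3 a2,
                   Vec3 b0, Vec3 b1, Vec3 b2,
                   Mat3& R, Vec3& t) {
    R = mulTranspose(frameFrom(a0, a1, a2), frameFrom(b0, b1, b2));
    t = sub(a0, apply(R, b0));
}
```

For the library question, OpenCV (chessboard corner detection and calibration routines) and the Point Cloud Library (registration, including ICP refinement) may be worth a look; both cover parts of this pipeline.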
