
Transforming point cloud from aligned depth frame back into depth coordinates (D415)

_xct
Beginner

Problem:

With RealSense D415, I use color camera data to modify a depth map, then calculate a point cloud from it. In order to do this, I have to align the depth frame to the color frame.

The main issue is that the resulting point cloud is in the coordinate system of the color sensor rather than the left imager, which is where all the other point clouds live.

I want to transform the point cloud back to the coordinate system of the depth sensor.

Current result:

I am trying to use the color->depth extrinsics to transform the point cloud back into the original space, but the result is misaligned.

Color to depth extrinsics:

Transform(

Orientation(

[[ 0.9999483 0.0047962 0.00896312]

[-0.00485901 0.9999637 0.00699891]

[-0.00892923 -0.0070421 0.9999353 ]]

),

Vector(-0.014856948517262936, 0.00011981908755842596, -6.095561184338294e-05)

)
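For reference, this is how I understand a single point should map with those extrinsics (a minimal sketch using the SDK helper; it assumes the usual librealsense convention that A->B extrinsics map points from A's coordinate space into B's, i.e. p_depth = R * p_color + t, with extrinsics obtained as in the code below):

import pyrealsense2 as rs

# Map one 3D point (in metres) from the color sensor's coordinate system
# into the depth (left imager) coordinate system using the SDK helper.
point_in_color = [0.1, 0.0, 1.0]
point_in_depth = rs.rs2_transform_point_to_point(extrinsics, point_in_color)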

Desired result:

The two point clouds must end up in the same coordinate system (in this case they should be identical, since the frameset is the same).

Relevant code:

import numpy as np
import math3d as m3d
import pyrealsense2 as rs

# (camera, postprocessing and PointCloud are my own helper objects)

# not aligned
frameset = camera.pipeline.wait_for_frames()
depth_frame = frameset.get_depth_frame()
color_frame = frameset.get_color_frame()

# get color->depth sensor transform
extrinsics = color_frame.profile.get_extrinsics_to(depth_frame.profile)
orient = m3d.Orientation(np.array(extrinsics.rotation).reshape((3, 3)))
pos = m3d.Vector(extrinsics.translation)
transform = m3d.Transform(orient, pos)

depth_frame = postprocessing.process([depth_frame])  # only spatial filter in here
depth = np.asanyarray(depth_frame.get_data())
points = camera._pc.calculate(depth_frame)
vertices = np.asanyarray(points.get_vertices())
pc = PointCloud(vertices.view(np.float32).reshape((vertices.size, 3)))
pc.save('not_aligned.npy')

# aligned
align = rs.align(rs.stream.color)
# frameset = camera.pipeline.wait_for_frames()  # working with the same frameset to test similarity
frameset = align.process(frameset)
depth_frame = frameset.get_depth_frame()
depth_frame = postprocessing.process([depth_frame])
depth = np.asanyarray(depth_frame.get_data())
points = camera._pc.calculate(depth_frame)
vertices = np.asanyarray(points.get_vertices())
pc = PointCloud(vertices.view(np.float32).reshape((vertices.size, 3)))
pc.save('aligned.npy')

# transform pc back using color->depth extrinsics
pc = transform * pc
pc.save('transformed.npy')  # must be similar to non-aligned, but isn't
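For what it's worth, this is how I compare the results (a sketch; it assumes PointCloud.save simply writes the N x 3 float32 array with np.save, and only compares coarse statistics because the aligned and non-aligned clouds don't have matching point counts):

import numpy as np

not_aligned = np.load('not_aligned.npy')
transformed = np.load('transformed.npy')

# drop invalid points (zero depth deprojects to the origin)
not_aligned = not_aligned[np.linalg.norm(not_aligned, axis=1) > 1e-6]
transformed = transformed[np.linalg.norm(transformed, axis=1) > 1e-6]

# coarse comparison: the centroids should be close if the clouds coincide
print('centroid (not aligned):', not_aligned.mean(axis=0))
print('centroid (transformed):', transformed.mean(axis=0))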

_xct
Beginner

Guess I will talk to myself then.

The point cloud calculator seems to always use the depth sensor intrinsics, which produces garbage point clouds from aligned depth. This is especially noticeable with the D435, which uses different cameras for color and IR, so I am continuing the tests with a D435.

Calculating the point cloud manually with the RGB camera intrinsics produces more realistic results (see the sketch after the intrinsics below). The points are obviously shifted because of the different reference coordinate system.

Now we get to the transformation part. The transform is extracted from the color->depth extrinsics values and applied to the calculated point cloud. This is where I get strange behavior.

Color->Depth extrinsics:
rotation: [0.999981, 0.00394571, 0.00481309, -0.0039441, 0.999992, -0.000344145, -0.00481441, 0.000325155, 0.999988]
translation: [-0.0144745, -0.000329277, -0.000862016]

Transform(
    Orientation(
        [[ 9.9998063e-01  3.9457134e-03  4.8130937e-03]
         [-3.9441027e-03  9.9999219e-01 -3.4414532e-04]
         [-4.8144138e-03  3.2515533e-04  9.9998838e-01]]
    ),
    Vector(-0.014474545605480671, -0.00032927736174315214, -0.0008620155858807266)
)

Color intrinsics: width: 1280, height: 720, ppx: 646.732, ppy: 356.376, fx: 928.967, fy: 928.493, model: Brown Conrady, coeffs: [0, 0, 0, 0, 0]

Depth intrinsics: width: 1280, height: 720, ppx: 636.268, ppy: 365.072, fx: 636.515, fy: 636.515, model: Brown Conrady, coeffs: [0, 0, 0, 0, 0]
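This is roughly how I do the manual deprojection with the color intrinsics mentioned above (a sketch; aligned_depth_frame, color_intrin and depth_scale are assumed to come from the aligned frameset and the depth sensor, and the all-zero Brown Conrady coefficients mean a plain pinhole back-projection is enough):

import numpy as np

# depth in metres, shape (H, W)
depth = np.asanyarray(aligned_depth_frame.get_data()).astype(np.float32) * depth_scale
h, w = depth.shape
u, v = np.meshgrid(np.arange(w), np.arange(h))

# pinhole back-projection with the RGB intrinsics -> points in the color sensor's frame
x = (u - color_intrin.ppx) / color_intrin.fx * depth
y = (v - color_intrin.ppy) / color_intrin.fy * depth
points_color = np.dstack((x, y, depth)).reshape(-1, 3)  # N x 3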

PC from original depth frame (gray) and PC from aligned depth frame (using color intrinsics) (blue):

PC from original depth frame (orange) and PC from aligned depth frame with applied transform (purple). Closer, but not exactly:

PC from original depth frame (pink) and PC from aligned depth frame with applied transform TWICE (green):

This is as close as it gets, but what on earth is happening? Why does applying the extrinsics transformation TWICE produce better results? If I am missing something, please enlighten me. Alternatively, is there another way of getting the aligned depth back into the regular depth-sensor frame, or of aligning color over the depth map (not the other way around, like every tutorial does)?

idata
Employee

Hi Yevhenii,

I apologize for not getting back to you sooner.

I will continue to research this problem for you and will get back once I find a better solution. This might be a calibration issue.

For now, have you tried the rs-align sample from the RealSense SDK 2.0? This sample aligns depth frames with some other stream (and vice versa), so you would be able to align color over depth like you said.

The GitHub page for the rs-align sample can be found here: https://github.com/IntelRealSense/librealsense/tree/master/examples/align

Best,

Sahira

_xct
Beginner

Hi,

I think I solved it yesterday. If I invert the extrinsics rotation matrix, I get the required transform to align the two point clouds. Though I am not sure why the extrinsics transform data is recorded this way.
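In numpy terms, this is what ends up working (a sketch; points_color is the hypothetical N x 3 cloud computed from the aligned depth frame with the color intrinsics, and extrinsics is the color->depth extrinsics from above). My best guess for the "why": rs2_extrinsics stores the rotation in column-major order, so a plain row-major reshape((3, 3)) gives the transpose of the actual rotation, and inverting (= transposing) it recovers the proper matrix:

import numpy as np

# reshape yields the transpose of the true rotation (column-major storage),
# so transpose it back; for a rotation matrix, transpose == inverse
R = np.array(extrinsics.rotation).reshape(3, 3).T
t = np.array(extrinsics.translation)

# apply p_depth = R @ p_color + t row-wise to the N x 3 array
points_depth = points_color @ R.T + t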

"This sample aligns depth frames with some other stream (and vice versa)" - unfortunately, "vice versa" doesn't seem to be the case; it has been mentioned in several places that rs-align only aligns depth data to other streams.
