
Getting complete point cloud from D435

BLync
Beginner

Using Python code, we are getting the point cloud from a D435 capture, but it filters out some points (presumably those with NaN values), so the resulting data set is incomplete. At 1280x720 resolution we would expect 921,600 points, but we typically get around 800,000-900,000. Since the data is generated from an image, the complete point cloud should have structure -- i.e., it would ideally be ordered by row and column according to pixel location. Once some points are filtered out, that structure is lost, and with it the ability to easily match pixels to 3D points and, more importantly, to build a structured mesh from the data. Of course we could use something like Delaunay triangulation to build a mesh, but with structured data this would be unnecessary. Knowing the intrinsic/extrinsic parameters, we could also build our own point cloud from the depth and color images (sketched below), but this should likewise be unnecessary since the SDK is already doing it.

Is it possible to have the point cloud include NaN values when being generated so that we have the complete set of 921600 points where they are structured according to row or column pixel location?
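For reference, here is a rough sketch of the do-it-yourself fallback mentioned above (in C++ against the librealsense2 API rather than Python, with assumed variable names and stream settings, so treat it as an illustration rather than our actual code). It keeps one entry per pixel, writing NaN where the depth is invalid, so the result stays ordered by row and column:

#include <librealsense2/rs.hpp>
#include <librealsense2/rsutil.h>   // rs2_deproject_pixel_to_point
#include <cmath>
#include <vector>

int main()
{
    // Start a depth-only pipeline at the resolution discussed above.
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_DEPTH, 1280, 720, RS2_FORMAT_Z16, 30);
    rs2::pipeline_profile profile = pipe.start(cfg);

    // Intrinsics of the depth stream, needed to deproject pixels to 3D points.
    auto stream = profile.get_stream(RS2_STREAM_DEPTH).as<rs2::video_stream_profile>();
    rs2_intrinsics intr = stream.get_intrinsics();

    rs2::depth_frame depth = pipe.wait_for_frames().get_depth_frame();
    const int w = depth.get_width();
    const int h = depth.get_height();

    // One XYZ triple per pixel, stored row-major: index = row * w + col.
    std::vector<float> xyz(3 * (size_t)w * h, NAN);

    for (int row = 0; row < h; row++) {
        for (int col = 0; col < w; col++) {
            float d = depth.get_distance(col, row);   // metres; 0 means no data
            if (d <= 0.0f)
                continue;                             // keep the NaN placeholder
            float pixel[2] = { (float)col, (float)row };
            rs2_deproject_pixel_to_point(&xyz[3 * ((size_t)row * w + col)], &intr, pixel, d);
        }
    }

    // xyz now holds 1280 * 720 = 921,600 points, valid or NaN, in pixel order.
    return 0;
}

With this layout, the point for pixel (row, col) is always at index row * width + col, which is the structure we are after.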

5 Replies
MartyG
Honored Contributor III

I found a point cloud script for SDK 2.0 (not in Python, unfortunately) that shows how to deal with invalid pixels -- those with a depth value of zero, which the SDK uses to indicate no data -- apparently keeping them and giving them NaN coordinates instead of removing them.

// Set invalid pixels with a depth value of zero, which is used to indicate no data
pcl::PointXYZRGB point;

if (depth_value == 0) {
    if (mUseUncolorizedPoints) {
        point.x = NAN;
        point.y = NAN;
        point.z = NAN;
    }
}

https://github.com/IntelRealSense/librealsense/issues/1027 (Weird glitch SR300 depth aligned to color · Issue #1027 · IntelRealSense/librealsense · GitHub)

BLync
Beginner

Thanks! I'm posting on behalf of another team member and don't have their code in front of me right now, but I know we've been generating the entire point cloud with a single SDK call, whereas the code in the link you posted does what I was hoping to find: it builds the point cloud pixel by pixel and inserts NaN values where there is no good data. This should be exactly what I need!
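For context (and this is my assumption, since I don't have our code in front of me), the one-call approach on our side is most likely something along the lines of the SDK's pointcloud helper, which computes every vertex in a single call:

#include <librealsense2/rs.hpp>

int main()
{
    rs2::pipeline pipe;
    pipe.start();

    rs2::frameset frames = pipe.wait_for_frames();
    rs2::depth_frame depth = frames.get_depth_frame();

    // Single call: the SDK deprojects the whole depth frame at once.
    rs2::pointcloud pc;
    rs2::points points = pc.calculate(depth);

    // Flat array of vertices; how the invalid pixels are handled (kept as
    // zeros or dropped on export) is exactly the question here.
    auto vertices = points.get_vertices();
    size_t count = points.size();
    (void)vertices;
    (void)count;
    return 0;
}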

MartyG
Honored Contributor III

Great! Please let us know later how it works out for you. Good luck!

BLync
Beginner

Hey Marty,

I finally tried implementing this solution in C++ and ran into a few problems that were eventually solved -- so in the end I have a full point cloud, which is what I was aiming for. However, here are two issues I had to deal with while following the code in the link you provided (which was extremely helpful!):

1. The function "deproject" does not seem to be available in the current version of the SDK as far as I can tell. I had to use "rs2_deproject_pixel_to_point" instead but this works fine.

2. Using an if-else structure to catch bad points and insert NaN-valued points into the point cloud was unsuccessful and led to a strange problem with mapping the RGB data onto the cloud. Here are two images: on the left is the point cloud after setting the x-y-z data to NAN for each point where the depth is zero; on the right is the full point cloud without setting any points to NAN. For some reason the NAN data leads to problems with how the RGB data is interpreted (although the x-y-z data itself appears to be fine). It would be nice to have invalid points exist in the cloud without being rendered, which is what the NaN values give you, but for my application I can live with them sitting at the origin instead.

Thanks again for your help. I'm attaching my code below in case someone else wants to use it.

Cheers,

Brian

#include "stdafx.h"
#include <cstdint>
#include <librealsense2/rs.hpp>
#include <librealsense2/rsutil.h>
#include "opencv2/imgproc/imgproc.hpp"
#include <opencv2/highgui/highgui.hpp>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/visualization/cloud_viewer.h>

using namespace cv;
using namespace std;

int main(int argc, char * argv[])
{
    // RealSense pipeline
    rs2::pipeline pipe;
    rs2::config cfg;

    // depth stream config
    cfg.enable_stream(RS2_STREAM_DEPTH, 1280, 720, RS2_FORMAT_Z16, 30);

    // colour stream config
    cfg.enable_stream(RS2_STREAM_COLOR, 1280, 720, RS2_FORMAT_RGB8, 30);

    // start pipeline and get sensor profile data
    auto pipeProfile = pipe.start(cfg);
    auto sensor = pipeProfile.get_device().first<rs2::depth_sensor>();
    auto depthScale = sensor.get_depth_scale();

    // depth and colour streams (depth will be aligned to colour)
    auto align = new rs2::align(RS2_STREAM_COLOR);
    auto depth_stream = pipeProfile.get_stream(RS2_STREAM_DEPTH).as<rs2::video_stream_profile>();
    auto color_stream = pipeProfile.get_stream(RS2_STREAM_COLOR).as<rs2::video_stream_profile>();
    auto depthToColor = depth_stream.get_extrinsics_to(color_stream);
    auto colorToDepth = color_stream.get_extrinsics_to(depth_stream);
    auto depthIntrinsic = depth_stream.get_intrinsics();
    auto colorIntrinsic = color_stream.get_intrinsics();

    // OpenCV and PCL windows
    cv::namedWindow("Image", cv::WINDOW_AUTOSIZE);
    cv::namedWindow("Depth", cv::WINDOW_AUTOSIZE);
    pcl::visualization::CloudViewer viewer("Viewer");

    // main loop (while PCL viewer is open)
    while (!viewer.wasStopped()) {

        // get new frames and align depth to colour
        rs2::frameset frames = pipe.wait_for_frames();
        auto processedFrames = align->process(frames);
        rs2::frame frameDepth = processedFrames.first(RS2_STREAM_DEPTH);
        rs2::frame frameRGB = processedFrames.first(RS2_STREAM_COLOR);

        // frame size
        const int w = frameRGB.as<rs2::video_frame>().get_width();
        const int h = frameRGB.as<rs2::video_frame>().get_height();

        // RGB and depth images in OpenCV format
        cv::Mat image(cv::Size(w, h), CV_8UC3, (void*)frameRGB.get_data(), cv::Mat::AUTO_STEP);
        cv::Mat image_depth(cv::Size(w, h), CV_16U, (uchar*)frameDepth.get_data(), cv::Mat::AUTO_STEP);

        // generate output images for RGB and depth (normalise depth for display)
        cv::Mat image_bgr, image_map, image_nrm;
        cv::cvtColor(image, image_bgr, cv::COLOR_RGB2BGR);
        image_depth.convertTo(image_nrm, CV_32F);
        double dMin, dMax;
        minMaxIdx(image_depth, &dMin, &dMax);
        image_nrm /= 65535.0;
        normalize(image_nrm, image_nrm, 0, 1, NORM_MINMAX);
        image_nrm *= 255;
        image_nrm.convertTo(image_nrm, CV_8U);
        applyColorMap(image_nrm, image_map, cv::COLORMAP_JET);

        // update image windows
        imshow("Image", image_bgr);
        imshow("Depth", image_map);
        waitKey(10);

        // new point cloud
        pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZRGB>);
        int N_bad = 0; // bad point counter (i.e., depth = 0)

        // for each pixel...
        for (int u = 0; u < h; u++) {
            for (int v = 0; v < w; v++) {

                // get depth from image and scale to metres
                uint16_t depth_value = image_depth.at<uint16_t>(u, v);
                float depth_in_meters = depth_value * depthScale;

                // new point
                pcl::PointXYZRGB point;

                // bad points (depth = 0) are counted and parked at the origin;
                // setting their coordinates to NAN instead (as in the linked
                // script) broke the RGB mapping in the viewer, as noted above
                if (depth_value == 0) {
                    N_bad++;
                    point.x = 0.0f;
                    point.y = 0.0f;
                    point.z = 0.0f;
                    point.r = 255;
                    point.g = 255;
                    point.b = 255;
                }
                else {
                    // use intrinsics to map image coordinates to real-world coordinates;
                    // depth has been aligned to the colour stream, so the colour
                    // intrinsics are the ones to deproject with
                    float depth_pixel[2];
                    depth_pixel[0] = v;
                    depth_pixel[1] = u;
                    float depth_point[3];
                    rs2_deproject_pixel_to_point(depth_point, &colorIntrinsic, depth_pixel, depth_in_meters);
                    point.x = depth_point[0];
                    point.y = depth_point[1];
                    point.z = depth_point[2];

                    // get corresponding RGB data (RGB8 format) and map onto point
                    auto pix = image.at<cv::Vec3b>(u, v);
                    point.r = pix[0];
                    point.g = pix[1];
                    point.b = pix[2];
                }

                // add point to cloud
                cloud->points.push_back(point);
            }
        }

        // output cloud to viewer
        viewer.showCloud(cloud);
    }

    // terminate
    return 0;
}
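One more note, as a sketch rather than something from my actual code: since the loop above visits every pixel in row-major order and pushes exactly width x height points, the cloud could also be flagged as "organized" just before handing it to the viewer, so that PCL itself keeps the row/column structure the original question asked about:

// Mark the cloud as organized: one point per pixel, h rows by w columns.
// is_dense = false tells PCL that invalid points may be present.
cloud->width    = w;
cloud->height   = h;
cloud->is_dense = false;
// An organized cloud can then be indexed by pixel, e.g. cloud->at(column, row).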

MartyG
Honored Contributor III

Great report! I'm really glad it worked out for you. Thanks so much for sharing your experience and code with the community.
