
Why do I get different depth images from the SDK and IVCAM?

YDai2
Novice

I tried to get a depth image using VS2012 and the SDK, and I found that the depth image from the SDK is different from the one IVCAM shows. How can I get an IVCAM-like depth image with the SDK? Actually, I want to compute the point cloud without using the QueryVertices function, based on a depth image that reflects the real depth information.

In addition, how can I get the intrinsic matrix of an SR300 camera? Many thanks.

From the SDK (camera facing a wall):

From IVCAM:

And how can I modify the code below?

#include "stdafx.h"
#include <windows.h>
#include <wchar.h>
#include "pxcsensemanager.h"
#include "util_render.h" // SDK-provided utility class used for rendering (packaged in libpxcutils.lib)

// Restrict the capture manager to the n-th connected camera.
void QueryFilterDevice(PXCCaptureManager *filterbydeviceinfo, int n)
{
    PXCSession::ImplDesc desc1 = {};
    desc1.group = PXCSession::IMPL_GROUP_SENSOR;
    desc1.subgroup = PXCSession::IMPL_SUBGROUP_VIDEO_CAPTURE;

    PXCSession *session = PXCSession::CreateInstance();
    PXCSession::ImplDesc desc2;
    session->QueryImpl(&desc1, 0, &desc2);
    wprintf_s(L"Module[%d]: %s\n", 0, desc2.friendlyName);

    PXCCapture *capture = 0;
    session->CreateImpl<PXCCapture>(&desc2, &capture);

    PXCCapture::DeviceInfo dinfo;
    capture->QueryDeviceInfo(n, &dinfo);
    filterbydeviceinfo->FilterByDeviceInfo(&dinfo);

    capture->Release();
    session->Release();
}

int wmain(int argc, WCHAR* argv[]) {
    // initialize the util render
    // UtilRender renderColor(L"Color");
    UtilRender renderDepth(L"Depth");

    // create the PXCSenseManager
    PXCSenseManager *psm = PXCSenseManager::CreateInstance();
    if (!psm) {
        wprintf_s(L"Unable to create the PXCSenseManager\n");
        return 1;
    }

    PXCCaptureManager *cm = psm->QueryCaptureManager();
    QueryFilterDevice(cm, 0);

    // select the depth stream of size 640x480 at 60 fps
    // psm->EnableStream(PXCCapture::STREAM_TYPE_COLOR, 640, 480);
    psm->EnableStream(PXCCapture::STREAM_TYPE_DEPTH, 640, 480, 60);

    // initialize the PXCSenseManager
    if (psm->Init() != PXC_STATUS_NO_ERROR) return 2;

    PXCImage *depthIm; // *colorIm;
    for (int i = 0; /* i < MAX_FRAMES */ ; i++) {
        // This call blocks until all enabled streams are ready;
        // if false is passed, the streams will be unaligned
        if (psm->AcquireFrame(true) < PXC_STATUS_NO_ERROR) break;

        // retrieve all available image samples
        PXCCapture::Sample *sample = psm->QuerySample();

        // retrieve the image or frame by type from the sample
        // colorIm = sample->color;
        depthIm = sample->depth;

        // render the frame
        // renderColor.RenderFrame(colorIm);
        renderDepth.RenderFrame(depthIm);

        // release or unlock the current frame to fetch the next frame
        psm->ReleaseFrame();
    }

    // close the last opened streams and release any session and processing module instances
    psm->Release();
    return 0;
}

3 Replies
idata
Employee

Hi Div3,

Thanks for your interest in the Intel® RealSense™ Platform.

You are right, they do seem to be slightly different; I can only guess that the code used to stream the image is a bit different. I would encourage you to see this thread, https://software.intel.com/en-us/forums/realsense/topic/537872 , where samontab (the creator of IVCAM) comments on this application.

Now, regarding how to calculate a point cloud, I found this article, Intel® RealSense™ Technology and the Point Cloud ( https://software.intel.com/en-us/articles/intel-realsense-technology-and-the-point-cloud ), that I think you might find interesting.
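In case it helps, here is a rough sketch of the kind of back-projection that article describes, done directly from the depth image with a pinhole model. The fx/fy/cx/cy values, the 16-bit millimetre depth format, and the AcquireAccess usage reflect my reading of the SDK, so please double-check them against your SDK version:

// Sketch: back-project a PXCImage depth frame to 3D points with a pinhole model.
// Assumes fx, fy, cx, cy are the depth intrinsics in pixels and that the depth
// buffer holds 16-bit values in millimetres (PIXEL_FORMAT_DEPTH).
#include <vector>
#include "pxcimage.h"

struct Point3D { float x, y, z; };

std::vector<Point3D> DepthToPointCloud(PXCImage *depthIm,
                                       float fx, float fy, float cx, float cy)
{
    std::vector<Point3D> cloud;
    PXCImage::ImageInfo info = depthIm->QueryInfo();
    PXCImage::ImageData data;
    if (depthIm->AcquireAccess(PXCImage::ACCESS_READ,
                               PXCImage::PIXEL_FORMAT_DEPTH, &data) < PXC_STATUS_NO_ERROR)
        return cloud;

    for (int v = 0; v < info.height; v++) {
        // pitches[0] is the row stride in bytes, so step through the row as pxcU16 values
        pxcU16 *row = (pxcU16 *)(data.planes[0] + v * data.pitches[0]);
        for (int u = 0; u < info.width; u++) {
            pxcU16 d = row[u];
            if (d == 0) continue;              // 0 means "no depth" at this pixel
            float z = d * 0.001f;              // millimetres -> metres
            Point3D p;
            p.x = (u - cx) * z / fx;           // standard pinhole back-projection
            p.y = (v - cy) * z / fy;
            p.z = z;
            cloud.push_back(p);
        }
    }
    depthIm->ReleaseAccess(&data);
    return cloud;
}

You would call this on the depthIm you already retrieve from QuerySample(), before ReleaseFrame().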

Finally, regarding your question on how to obtain the intrinsic matrix of the SR300, it seems that RealSense provides a generic set of intrinsic parameters; for more information please check this thread: R200, SR300 Camera matrix ( https://software.intel.com/en-us/forums/realsense/topic/644068 ).
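As a starting point, the device also reports the depth focal length and principal point as properties. A minimal sketch, using the psm object from your code after Init(); QueryDepthFocalLength and QueryDepthPrincipalPoint are the property queries I have in mind, so please verify them against your SDK version:

// Sketch: read the depth intrinsics (fx, fy, cx, cy) from the device after psm->Init().
// The values are reported in pixels for the currently configured depth resolution.
PXCCapture::Device *device = psm->QueryCaptureManager()->QueryDevice();
if (device) {
    PXCPointF32 focal = device->QueryDepthFocalLength();        // fx = focal.x, fy = focal.y
    PXCPointF32 principal = device->QueryDepthPrincipalPoint(); // cx = principal.x, cy = principal.y
    wprintf_s(L"Depth intrinsics: fx=%f fy=%f cx=%f cy=%f\n",
              focal.x, focal.y, principal.x, principal.y);
}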

By the way, I did some depth streams using the SDK and IVCAM (you can see the images down below); if you start to change the laser projector parameters (especially the FilterOption) you might get some interesting results.
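For reference, this is roughly what I mean by changing the projector parameters; SetIVCAMFilterOption and SetIVCAMLaserPower are the device setters I had in mind, and the value ranges below are assumptions on my part, so please check the SDK documentation:

// Sketch: adjust the SR300 (IVCAM) depth-processing options after psm->Init().
PXCCapture::Device *device = psm->QueryCaptureManager()->QueryDevice();
if (device) {
    device->SetIVCAMFilterOption(5);  // smoothing vs. sharpness trade-off; try several values
    device->SetIVCAMLaserPower(16);   // projector power
}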

Facing a wall, SDK:

Facing a wall, IVCAM:

Hope you find this information useful, have a great day!

Best Regards,

 

-Jose P.
YDai2
Novice

Hi Jose, thanks a lot. I will try according to your advice.

idata
Employee

Hi Div3,

We are here to help! Please don't hesitate to come back if any questions come up.

Best Regards,

 

-Jose P.