
CreateColorImageMappedToDepth() alternative

lquad
New Contributor I

Hello,

Some time ago I asked for an alternative to "CreateDepthImageMappedToColor()".

This is the thread:

Now, unfortunately, I have the dual problem: I need an alternative to CreateColorImageMappedToDepth().

So, suppose I have:

- an RGB image, 960*540, from the camera (for example the F200).

- a DEPTH image, 640*480, from the same camera.

What I want is an RGB image 640*480 (or better, 960*540) but aligned to the depth image.

I need that because I want to color a mesh obtained from the depth image.

In other words: last time I asked for your help finding an array of depth values mapped to the colour image; now I need an array of colour values mapped to the depth image.

Is this possible?

Again, the reason I can't use CreateColorImageMappedToDepth() is that it is affected by a terrible memory leak problem.

The last time the solution was to use the "QueryInvUVMap()" function.

I suppose that this time I must use the "dual" function, QueryUVMap(), but how?

I have read the QueryUVMap documentation here: https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/queryuvmap_pxcprojection.html

I think this is a common problem, so I hope someone has already written a fragment of code for it.

Thanks again for your help.

jb455
Valued Contributor II

Sure you can - just do the opposite of what you did before, using QueryUVMap as you said - the code I linked you to last time does exactly that. Have you tried it and had a specific problem?

Also, I don't think there is a memory leak with Create[x]MappedTo[y], as long as you dispose of & release everything properly, so you may find it easier to do it that way.
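For reference, here's a rough sketch of the UVMap approach in C++ (untested, so treat it as pseudocode; it assumes you already have a valid projection plus the depth and color PXCImage pointers from the sample, and that you want a tightly-packed RGB buffer at depth resolution):

    #include <vector>   // plus the usual RealSense SDK headers (pxcimage.h, pxcprojection.h)

    // Build an RGB buffer at depth resolution by sampling the colour image through the UV map.
    PXCImage::ImageInfo dinfo = depth->QueryInfo();
    PXCImage::ImageInfo cinfo = color->QueryInfo();

    // One UV entry per depth pixel; x/y are colour coordinates normalised to 0..1, negative when invalid.
    std::vector<PXCPointF32> uvmap(dinfo.width * dinfo.height);
    projection->QueryUVMap(depth, uvmap.data());

    PXCImage::ImageData cdata;
    color->AcquireAccess(PXCImage::ACCESS_READ, PXCImage::PIXEL_FORMAT_RGB24, &cdata);

    std::vector<unsigned char> mappedColor(dinfo.width * dinfo.height * 3, 0); // stays black where there is no mapping

    for (int i = 0; i < dinfo.width * dinfo.height; i++) {
        int cx = (int)(uvmap[i].x * cinfo.width);
        int cy = (int)(uvmap[i].y * cinfo.height);
        if (cx >= 0 && cy >= 0 && cx < (int)cinfo.width && cy < (int)cinfo.height) {
            unsigned char* src = cdata.planes[0] + cy * cdata.pitches[0] + cx * 3; // RGB24 = 3 bytes per pixel
            mappedColor[i * 3 + 0] = src[0];
            mappedColor[i * 3 + 1] = src[1];
            mappedColor[i * 3 + 2] = src[2];
        }
    }

    color->ReleaseAccess(&cdata);

Each entry of mappedColor then lines up 1:1 with the corresponding depth pixel, which should be what you need for texturing the mesh.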

lquad
New Contributor I

Hello jb455,

OK, it's good news that I can use QueryUVMap().

Before resolving my doubts about that function, I would like to investigate the memory leak with Create[x]MappedTo[y], because if that function works I won't need to call QueryUVMap() at all.

So, what I did is:

class MyHandler : public Intel::RealSense::SenseManager::Handler {
public:
    virtual Intel::RealSense::Status PXCAPI OnNewSample(pxcUID, Intel::RealSense::Capture::Sample *sample) {
        if (projection != NULL) {
            ...
            rgb_image = sample->color;
            rgb_image->AcquireAccess(PXCImage::ACCESS_READ, Intel::RealSense::Image::PIXEL_FORMAT_RGB24, &data_camera_rgb);
            ...
            depth_image = sample->depth;
            depth_image->AcquireAccess(PXCImage::ACCESS_READ, Intel::RealSense::Image::PIXEL_FORMAT_DEPTH, &ddata);
            ...
            // do stuff
            ...
            PXCImage::ImageData data;
            PXCImage* mapped = projection->CreateColorImageMappedToDepth(sample->depth, sample->color);
            mapped->AcquireAccess(PXCImage::ACCESS_READ, &data);
            mapped->ReleaseAccess(&data);
            mapped->Release();
            ...
            // do stuff
            ...
            rgb_image->ReleaseAccess(&data_camera_rgb);
            depth_image->ReleaseAccess(&ddata);
            sm->ReleaseFrame();
        }
        frame_counter++;
        return PXC_STATUS_NO_ERROR;
    }
};

EDIT OF MY POST: I modified this post because, written this way, it seems to work without the leak! If you notice something wrong, tell me. I will reply again when (and if) I am sure about this solution.

jb455
Valued Contributor II

I'm not 100% with the C++ syntax but I think it's like this:

PXCImage* mappedImage = projection->CreateColorImageMappedToDepth(...);
mappedImage->AcquireAccess(...);
DoStuff();
mappedImage->ReleaseAccess(...);
mappedImage->Release();

Edit: Yes, just seen your edit. Looks about right!

jb455
Valued Contributor II

The main disadvantage with CreateColorImageMappedToDepth vs the UVMap method, as I found in my original post, is that the former clips depth values to integers so you lose some precision in the z axis. But if you're happy with ints it's much easier to use!

lquad
New Contributor I

First of all:

- I tested the function; it works without memory leaks.

- here are the results

Thanks again, jb455, for your help.

Now, looking into it further:

jb455 wrote:

The main disadvantage with CreateColorImageMappedToDepth vs the UVMap method, as I found in my original post, is that the former clips depth values to integers so you lose some precision in the z axis. But if you're happy with ints it's much easier to use!

I don't follow you in this sentence: CreateColorImageMappedToDepth creates a new "rgb image", so I don't have depth values in the newly generated image.

Another doubt:

My RGB samples are 960*540. Is it possible to align them to the depth samples (640*480) while keeping the original, higher resolution? Obviously this would increase the overall final quality of my textured meshes.

Finally:

The black spots in the image (for example on my hand) are not a bug in CreateColorImageMappedToDepth but a problem of mine that is already solved.

More details about my application: I copy the newly mapped RGB image into a buffer. This buffer is used by a fragment shader to color a mesh (generated from the depth image).
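In case it's useful to someone, the copy into the buffer is roughly like this (simplified; textureBuffer is just an illustrative name for my tightly-packed width*height*3 buffer, the real code then uploads it to the GPU):

    #include <vector>
    #include <cstring>

    // Simplified copy of the mapped RGB24 image into a tight buffer for the fragment shader.
    // 'mapped' is the image returned by CreateColorImageMappedToDepth().
    PXCImage::ImageInfo info = mapped->QueryInfo();
    PXCImage::ImageData data;
    mapped->AcquireAccess(PXCImage::ACCESS_READ, PXCImage::PIXEL_FORMAT_RGB24, &data);

    std::vector<unsigned char> textureBuffer(info.width * info.height * 3);
    for (int y = 0; y < info.height; y++) {
        // pitches[0] can be larger than width*3, so copy row by row instead of one big memcpy
        std::memcpy(&textureBuffer[y * info.width * 3],
                    data.planes[0] + y * data.pitches[0],
                    info.width * 3);
    }

    mapped->ReleaseAccess(&data);
    mapped->Release();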

jb455
Valued Contributor II

Sorry yes, I was thinking of CreateDepthImageMappedToColour for the depth value clipping.

Ah, if you want to do it in the original colour image resolution, you'll have to do it manually using the InvUVMap and setting the colour of each pixel individually based on its depth value. But then the depth values will be mapped to the colour image, which may screw up your mesh stuff.

lquad
New Contributor I

jb455 wrote:

Ah, if you want to do it in the original colour image resolution, you'll have to do it manually using the InvUVMap and setting the colour of each pixel individually based on its depth value. But then the depth values will be mapped to the colour image, which may screw up your mesh stuff.

I'm sorry again, I don't get this.

Do I have to call InvUVMap or UVMap?

Let's say I want to keep the original, higher resolution. So from an RGB 960*540 I want to obtain an RGB 960*540, but aligned to the depth.

jb455, is the code that does this the one you linked me some time ago, i.e. https://mtaulty.com/2015/04/16/m_15794/ ?

There is some code similar to this:

for (int i = 0; i < rgb_width * rgb_height; i++) {
    int u = (int)(invuvmap[i].x * dwidth);
    int v = (int)(invuvmap[i].y * dheight);
    if ((u >= 0) && (v >= 0) && (u + v * dwidth < dwidth * dheight)) {
        mappedPixels[i] = dPixels[u + v * dwidth];
    }
    else {
        mappedPixels[i] = 0;
    }
}

but adapted to generate an RGB image instead of a depth image?

jb455
Valued Contributor II

If you want it to be in the same resolution as the colour image, it'll have to be mapped to the colour image instead of the depth image, unless you just upscale the mapped depth image.

Because the colour image is (usually) bigger than the depth image, when you map colour to depth you 'lose' colour pixels as each depth pixel covers a larger area than a colour pixel, but when you map depth to colour you 'gain' depth pixels, either by duplication or interpolation (it just occurred to me that I don't know which method it uses).

So if you want to keep all the colour information it'll have to be aligned to the colour image. I suppose if you must keep it aligned to depth, you may be able to do some fancy stuff where you retrieve the colour pixels in between each of the depth pixels and fill them in somehow, but I don't think I'd be able to help you with that!

The linked code maps colour to depth, which you've already done here using the built-in method. If you want to do the inverse, so you have an image aligned to colour, but using the depth data to decide how to colour the pixels (eg, black if there's no depth data, tinted red if out of ideal range, normal colour otherwise) it'll be using the InvUVMap but otherwise fairly similar to the linked code (swapping dwidth for cwidth and dheight for cheight).
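Roughly (and again, my C++ may be a bit off), the colour-aligned version would look something like this. cwidth/cheight are the colour resolution, invuvmap comes from QueryInvUVMap, and dPixels/cPixels/mappedPixels are illustrative names for the depth buffer, the original colour buffer and the output colour buffer:

    // Output at colour resolution, coloured from the original colour image, but using the
    // depth data (via the InvUVMap) to decide what to do with each pixel.
    // invuvmap has one entry per colour pixel; x/y are depth coordinates normalised to 0..1.
    for (int i = 0; i < cwidth * cheight; i++) {
        int u = (int)(invuvmap[i].x * dwidth);
        int v = (int)(invuvmap[i].y * dheight);
        bool hasDepth = (u >= 0) && (v >= 0) && (u + v * dwidth < dwidth * dheight)
                        && (dPixels[u + v * dwidth] > 0);
        if (hasDepth) {
            mappedPixels[i] = cPixels[i];   // valid depth here: keep the original colour
        }
        else {
            mappedPixels[i] = 0;            // no depth data: paint it black (or tint it, etc)
        }
    }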

lquad
New Contributor I

I think I've understood the issue.

I have only one last question regarding the CreateColorImageMappedToDepth function.

The quality of the image generated by this function is of course determined by 1) the resolution of the depth image and 2) the resolution of the color image.

Let's say, in case 1 I have:

-DEPTH 640*480

-COLOR 960*540

-->color image output (aligned) 640*480

case 2:

I have:

-DEPTH 640*480

-COLOR 640*480

-->color image output (aligned) 640*480

The resolution of the output image is the same in both cases. Can I say that the quality of the case 1 output image is better than that of case 2, because the original RGB had a greater resolution? I mean, when the function maps between depth and color, do I get a better output if I have a more detailed color image?

I don't know how CreateColorImageMappedToDepth works internally, but if it "samples" from the original RGB image in order to obtain a new aligned image... I naively think that the greater the resolution of the original RGB image, the better the quality of the new aligned RGB image will be.

Am I wrong?

jb455, thanks for your help again.

jb455
Valued Contributor II

Not sure. If you try it though, please report back as I'm curious now!

idata
Employee

Hello andakkino,

Thank you for your interest in the Intel® RealSense™ Camera F200.

Regarding the two cases that you present, I haven't personally tested something similar, so I'm also curious whether you have come to a conclusion on this matter.

If you have any other questions or updates, don't hesitate to contact us.

Regards,

Andres V.