
QueryVertices(), real world coordinates and shaders

lquad
New Contributor I

Good evening,

this is the second time I'm asking something in this forum. The first time you helped me a lot, and I hope I'll be lucky this time too.

In short, what I want to do is mimic the behavior of the QueryVertices() function inside a shader.

This is because I use the "depth image" produced by the camera (an F200, for the sake of completeness) as a depth map to perturb a plane mesh. The result is a 3D mesh of the object seen by the camera.

So now I have a good "z" component for the mesh of my objects to be rendered, but I don't have any information about the "x" and "y".

I saw that when I use the QueryVertices() function, I obtain an array of 3D points. I did not notice any difference between the "z" component of those points and the value of the corresponding pixel in the depth image passed as argument (I don't have the source code of the QueryVertices() function, I'm only looking at the output, so I may be wrong). But I see that the function returns "x" and "y" components for each point where "z" != 0.

How are these components calculated? If I could do this inside my shader for each vertex of the mesh, it would be perfect.

In summary: how do I obtain the x and y components of a point in an image, given its z component (and, of course, the camera resolution)?

Preferably, to avoid misunderstandings, I don't want to call the QueryVertices() function in my code.
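For reference, here is a minimal sketch of the displacement idea (illustrative only, not my exact shader; texture_depth_sampler, Texcoord and position are the names used in my pipeline):

#version 330
uniform sampler2D texture_depth_sampler; // depth image from the camera
in vec3 position;   // vertex of the plane grid, x/y in pixel units
in vec2 Texcoord;   // coordinate of this vertex in the depth image

void main()
{
    // The 16-bit depth (millimeters) is normalized to [0,1] by the texture,
    // so *65.535 converts it to meters.
    float z = texture(texture_depth_sampler, Texcoord).x * 65.535;
    // The "z" is good, but what should "x" and "y" be in real-world units?
    gl_Position = vec4(position.x, position.y, z, 1.0);
}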

I thank you in advance for your help.

7 Replies
idata
Employee
Hi andakkino,

Let us investigate your case further. We'll post a suggestion for you as soon as we have more information to offer.

Regards,

-Sergio A
MartyG
Honored Contributor III
Calculating XYZ yourself manually is difficult. Not impossible, but not simple. This thread discusses the topic.
lquad
New Contributor I
Hello,

thanks for your replies.

Sergio, I will wait for the results of your further investigation.

MartyG, I looked at that discussion. I will ask my supervisor whether I can use the formulas shown there.

Thanks. I will keep following this thread so I can take the best path to a solution.
idata
Employee
Hi andakkino,

Thank you for your patience. We looked in the RealSense documentation for information that may be relevant to your case. There is a section on coordinate systems and how they work, although it doesn't go into much detail. In case you haven't seen it before, you can take a look at https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/index.html?doc_essential_coordinate_systems.html to see if you can find useful information.

We also found another thread where this topic is discussed: https://software.intel.com/en-us/forums/realsense/topic/560784 . It covers the coordinate systems and has suggestions for calculating x and y.
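In essence, the suggestions in that thread amount to inverting a pinhole camera model. A minimal sketch of the idea in GLSL (ignoring lens distortion; the fx, fy, cx, cy values below are placeholders, the real intrinsics should come from the SDK):

// Back-project pixel (u, v) with depth z (in meters) into camera space.
vec3 deproject(vec2 pixel, float z)
{
    float fx = 475.0, fy = 475.0;  // focal lengths in pixels (placeholders)
    float cx = 320.0, cy = 240.0;  // principal point (placeholders)
    float x = (pixel.x - cx) * z / fx;
    float y = (pixel.y - cy) * z / fy;
    return vec3(x, y, z);
}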

 

 

Hopefully you can find this useful.

Regards,

-Sergio A
idata
Employee
Hi andakkino,

Do you still need assistance with this thread? We'll be waiting for your response.

-Sergio A
lquad
New Contributor I
Good evening,

I'm sorry for the late reply.

First of all, thanks again to everyone for your help. I really appreciate it.

I have three "not perfect" but close solutions. The code is GLSL, but the algorithms are quite clear.

I° solution: I tried to mimic the algorithm used for the Kinect, and then I empirically searched for a suitable parameter for the Intel RealSense (F200).

//float a = 0.00173667; // for Kinect
float a = 0.00226667;   // for Intel RealSense (found empirically)

// In general, gl_Position.z is the depth value from the camera's depth image.
// The 16-bit depth is normalized to [0,1] by the texture, so *65.535 converts it to meters.
gl_Position.z = texture(texture_depth_sampler, Texcoord).x * 65.535;
gl_Position.x = -(position.x - 320.0) * a * gl_Position.z;
gl_Position.y =  (position.y - 240.0) * a * gl_Position.z;
gl_Position.w = 1.0;

Here (320, 240) is the image center of the 640x480 depth image, and a plays the role of 1/focal_length in a pinhole model (for the Kinect value, 1/0.00173667 is about 576, close to the Kinect's focal length in pixels).

II° solution: using the simplified pinhole model you linked; the parameters are obtained using the QueryStreamProjectionParameters() function.

float QVGA_COLS = 640.0;
float QVGA_ROWS = 480.0;
float QVGA_F_X = 477.959;
float QVGA_F_Y = 477.959;
float QVGA_C_X = 321.788;
float QVGA_C_Y = 245.851;

// identities in this case
float _fx = QVGA_F_X;
float _fy = QVGA_F_Y;
float _cx = QVGA_C_X;
float _cy = QVGA_C_Y;

// In general, gl_Position.z is the depth value from the camera's depth image;
// *65.535 converts the texture value [0,1] to meters.
gl_Position.z = texture(texture_depth_sampler, depth_Texcoord).x * 65.535;
// gl_Position.z stays as it is (identity in this case)
gl_Position.x = (-gl_Position.z) * (position.x - _cx) / _fx;
gl_Position.y = (-gl_Position.z) * (_cy - position.y) / _fy;
gl_Position.w = 1.0;

III° solution: I translated into GLSL the formulas used in the rs_project_point_to_pixel() function in librealsense/rsutil.h: https://github.com/IntelRealSense/librealsense/blob/master/include/librealsense/rsutil.h

// Calibration parameters reported by the Intel RealSense for the DEPTH stream
vec2 focal_length = vec2(476.88, 476.88);
vec2 principal_point = vec2(321.973, 254.851);
vec3 radial_distortion_coefficents = vec3(-0.142513, -0.0207289, -0.0121829);
vec2 tangential_distortion_coefficents = vec2(-0.00258705, -0.0000339719);

// In general, gl_Position.z is the depth value from the camera's depth image;
// *65.535 converts the texture value [0,1] to meters.
gl_Position.z = texture(texture_depth_sampler, Texcoord).x * 65.535;

// Normalize the pixel coordinates with the intrinsics...
float x = (position.x - principal_point[0]) / focal_length[0];
float y = (position.y - principal_point[1]) / focal_length[1];

// ...then apply the Brown-Conrady distortion model, as in rsutil.h.
float r2 = x*x + y*y;
float f = 1.0 + radial_distortion_coefficents[0]*r2
              + radial_distortion_coefficents[1]*r2*r2
              + radial_distortion_coefficents[2]*r2*r2*r2;
float ux = x*f + 2.0*tangential_distortion_coefficents[0]*x*y
               + tangential_distortion_coefficents[1]*(r2 + 2.0*x*x);
float uy = y*f + 2.0*tangential_distortion_coefficents[1]*x*y
               + tangential_distortion_coefficents[0]*(r2 + 2.0*y*y);

gl_Position.x = -(ux * gl_Position.z);
gl_Position.y =  (uy * gl_Position.z);
gl_Position.w = 1.0;
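For completeness, here is a sketch of how the II° solution fits into a full vertex shader (the mvp uniform and the attribute names are just how my pipeline names things; yours may differ):

#version 330
uniform sampler2D texture_depth_sampler; // depth image from the camera
uniform mat4 mvp;                        // model-view-projection matrix
in vec3 position;   // grid vertex, x/y in depth-pixel units
in vec2 Texcoord;   // matching coordinate in the depth texture

void main()
{
    float _fx = 477.959, _fy = 477.959;  // depth intrinsics (from the SDK)
    float _cx = 321.788, _cy = 245.851;

    float z = texture(texture_depth_sampler, Texcoord).x * 65.535;
    vec3 p;
    p.z = z;
    p.x = -z * (position.x - _cx) / _fx;
    p.y =  z * (_cy - position.y) / _fy;
    gl_Position = mvp * vec4(p, 1.0);
}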

Now I have another problem, but I'll search for an existing topic first and, if I don't find one, I will open another post.

thanks again to everyone.

PS: the red mesh is actually a cloud of points obtained directly from the QueryVertices() function.

The colored mesh is instead generated by my shaders using OpenGL.

As you can see, the two shapes are almost the same. To obtain exact results, you would need the exact formulas of the QueryVertices() function.

Now my RGB image (with which I color the mesh obtained from the depth image) is not aligned anymore! So this is my problem now.
idata
Employee
Hi andakkino,

Thank you for coming back and sharing your results with the community, we appreciate it. Feel free to open a new thread if help is needed, we'd be happy to assist you.

Regards,

-Sergio A