
Disparity shift

MAbom
Beginner

Hello everyone,

I am learning to use the D415 camera, and I came across the 'Disparity Shift' option, which modifies the Zmin and Zmax values within which the camera can 'see'.

But I am confused about this option! How can the camera's measurable range actually be changed without changing anything physically?

I would appreciate any ideas or thoughts about this issue and maybe some references to read more about it!

Another thing that I can't understand is the 'Depth Unit'. For example, the tuning guide at https://www.intel.com/content/dam/support/us/en/documents/emerging-technologies/intel-realsense-technology/BKMs_Tuning_RealSense_D4xx_Cam.pdf suggests:

we can set the depth unit to 5000 um so that the camera reports a range up to ~325 m. But if we set the disparity shift to 50, then the Z range becomes 30 cm to 110 cm! How is that possible? What am I missing?

Thanks!

MartyG
Honored Contributor III

There is a version of the tuning guide, based on a presentation by the author, which is much more attractively presented and easier to understand.

https://realsense.intel.com/wp-content/uploads/sites/63/BKM-For-Tuning-D435-and-D415-Cameras-Webinar_Rev3.pdf

MAbom
Beginner

I read it too, but unfortunately it didn't help with the issues I am facing!

MartyG
Honored Contributor III

Have you tried the pre-made Depth Quality Tool that is installed on your computer along with the pre-compiled version of RealSense SDK 2.0 (the one where you do not need to compile the source code)? This application lets you expand the 'Depth Visualization' settings in the side panel and move the minimum and maximum distance sliders, so you can see how the min and max distance settings affect the image.

MAbom
Beginner

Thanks for your answer. Yes, I tried it.

But I am interested in why these settings affect the min-max range the way they do.

Moreover, that doesn't explain how changing the 'Disparity Shift' and 'Depth Unit' leads to different range values.

Regards

MartyG
Honored Contributor III

Apologies if I am not giving good explanations. The depth units dictate how far the camera can report depth, or its 'expressive range'. The default depth unit on the 400 Series cameras is 1 mm, meaning they can report depth out to about 65 meters. However, this long range is a disadvantage for close-range scanning. The older RealSense SR300 camera model has a default scale of 1/32nd of a millimeter, allowing a maximum depth sensing range of about two meters but giving it better depth sensing accuracy at close range.

One way to think of it would be people with short-sighted and long-sighted vision. Long-sighted people (representing the 400 Series camera) can see long distances but cannot make out close-up detail so well. Short-sighted people (the SR300) can see close-up details well but cannot see far-away objects clearly - the SR300's maximum depth sensing range is around 2 meters.

By changing the depth units of the camera, you can change how short-sighted or long-sighted its view is.
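If it helps to see that in code, here is a minimal sketch of reading and changing the depth unit, assuming the pyrealsense2 Python bindings (the 0.005 value is just an illustration):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()
depth_sensor = profile.get_device().first_depth_sensor()

# The depth scale is the depth unit in metres (default 0.001 = 1 mm on D400).
print("Depth scale:", depth_sensor.get_depth_scale())

# A larger unit makes the camera more 'long-sighted': 5 mm units extend the
# reportable range to roughly 65535 * 0.005 = ~327 m, at the cost of precision.
if depth_sensor.supports(rs.option.depth_units):
    depth_sensor.set_option(rs.option.depth_units, 0.005)

pipeline.stop()
```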

MAbom
Beginner

Thanks for the explanation.

It was helpful!!

jb455
Valued Contributor II

I'm not sure how the disparity shift works - perhaps it tells the algorithm that matches the left and right images to create the depth map to try more candidates, allowing it to match closer objects.

Depth unit I understand better. The depth image reports depth as a 16-bit integer, so the largest raw value is 65,535. To get the actual depth, we multiply this integer value by the depth unit. So if the depth image value is 10,000 and the depth unit is 0.001 (1 mm), the actual depth at that point is 10 m. The maximum depth value in this case is 65,535 * 0.001 = 65.535 m. Making the depth unit smaller, say 0.0001 (0.1 mm), gives finer depth resolution, but the maximum value becomes 65,535 * 0.0001 = 6.5535 m.

So, changing the depth unit doesn't alter how the camera actually works - the maximum physical range of the camera always stays the same - it just changes the values the camera can return to you. Practically, you're unlikely to get much performance at distances greater than 50 m or so, and the camera is unlikely to be accurate enough to benefit from setting the resolution much below 0.1 mm; though I'd encourage trying it out for yourself to see if either of these suits your use case.
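To illustrate that arithmetic, a minimal sketch assuming the pyrealsense2 bindings and NumPy:

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()

frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()

# The frame holds raw 16-bit values; multiply by the depth unit for metres.
raw = np.asanyarray(depth_frame.get_data())        # dtype uint16
depth_m = raw.astype(np.float32) * depth_scale     # e.g. 10000 * 0.001 = 10 m

print("Max representable depth:", 65535 * depth_scale, "m")
pipeline.stop()
```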

MAbom
Beginner

Thank you very much!

The 'Depth Unit' is quite clear to me now!

I hope someone can clarify the 'Disparity Shift' part!

MartyG
Honored Contributor III

As mentioned in the tuning guide, if disparity shift = 0 then a stereo camera can see infinitely far.

So the way I like to think of disparity shift is as a person standing in front of the camera holding up a board. If the disparity shift is 0, they are standing so far away that the camera cannot see them at all. As the disparity shift is increased, the person holding the board gets closer and closer to the camera, restricting how far ahead the camera can read detail (MaxZ is reducing), until finally MaxZ is so low that the board is right in front of the camera and it can see very little except what is in front of the held-up board.

axie
Beginner

Hi! I am confused about the depth unit. Can you explain the difference between Controls / Depth Units and Advanced Controls / Depth Table / Depth Units?

According to your discussion, I understand that the Depth Units value under Controls is the unit that the integer depth value is multiplied by to get the actual depth.

But what is the meaning of the other Depth Units setting under Advanced Controls?

I will be grateful if you can help me!

MartyG
Honored Contributor III

Intel has no plans to provide documentation for most of the Advanced Mode settings. This is because they interact with each other in complex ways, so Intel controls them automatically with machine learning rather than encouraging users to change the values manually. Users are encouraged instead to use the pre-configured 'Visual Preset' modes to configure these settings. You can select Visual Presets from a drop-down menu near the top of the RealSense Viewer's options side panel.

https://github.com/IntelRealSense/librealsense/wiki/D400-Series-Visual-Presets - D400 Series Visual Presets · IntelRealSense/librealsense Wiki · GitHub
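For what it's worth, the presets can also be applied from code rather than the Viewer; a minimal sketch assuming the pyrealsense2 bindings (High Accuracy is just an example preset):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()
depth_sensor = profile.get_device().first_depth_sensor()

# Apply a pre-configured Visual Preset instead of hand-tuning Advanced Mode.
if depth_sensor.supports(rs.option.visual_preset):
    depth_sensor.set_option(rs.option.visual_preset,
                            int(rs.rs400_visual_preset.high_accuracy))

pipeline.stop()
```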

There is a reference, though, to the Advanced Mode depth units setting in Intel's camera tuning paper. It says:

"It may be necessary to change the 'depth units' in the advanced mode API. By default the [Vision Processor] D4 VPU provides 16 bit depth with a depth unit of 1000 um (1 mm). This means the max range will be ~65 m. However, by changing this to 5000 um, for example, it will be possible to report depth to a max value of 5x65 = 325 m".

https://www.intel.com/content/dam/support/us/en/documents/emerging-technologies/intel-realsense-technology/BKMs_Tuning_RealSense_D4xx_Cam.pdf
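A minimal sketch of reading and changing that Advanced Mode setting, again assuming the pyrealsense2 bindings (the 5000 value mirrors the paper's example, and Advanced Mode is assumed to be enabled already):

```python
import pyrealsense2 as rs

ctx = rs.context()
dev = ctx.query_devices()[0]

# Advanced Mode exposes the Depth Table, whose depthUnits field (micrometres)
# is separate from the Controls / Depth Units option discussed above.
advanced = rs.rs400_advanced_mode(dev)
assert advanced.is_enabled(), "enable Advanced Mode first (e.g. in the Viewer)"

table = advanced.get_depth_table()
print("Current depth units (um):", table.depthUnits)

table.depthUnits = 5000  # 5 mm units: max range ~65535 * 5 mm = ~327 m
advanced.set_depth_table(table)
```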

axie
Beginner

Thanks for your reply! I used the default settings to capture a point cloud, then loaded the point cloud's .ply file into MeshLab. After reconstructing it into an .STL file, I aligned it with another, standard .STL file that was generated by CT reconstruction. I found that my STL file is far too small compared with the standard .STL file.

(The point cloud quality is not so good, because I captured it close to an in-vivo model. But that is OK; I only need a little partial information for the alignment.)

So I am wondering whether the units of the two files are different: one in meters, the other in millimeters. I am trying to change the unit, so that the point cloud is generated in a unit that matches the size of the standard .STL file.

Can you suggest something regarding the depth unit, so that I can align the two STLs with the same unit and the same size? Or is there something wrong in my other operations?

MartyG
Honored Contributor III

Coincidentally, I had another case yesterday where the .ply was too small when imported. In that case, the user was importing the .ply into Blender instead of MeshLab. I referred them to a link where a RealSense community member gave advice on RealSense Viewer settings, including the two different Depth Units under Controls and Advanced.

https://github.com/IntelRealSense/librealsense/issues/2009#issuecomment-403380726 - Extremely Poor .ply file quality · Issue #2009 · IntelRealSense/librealsense · GitHub

axie
Beginner

I solved my problem. I changed the two depth units and the other parameters you pointed me to in the link, but the .ply was still too small. It seems that changing the unit in RealSense doesn't cause a change in MeshLab.

The solution was changing the scale in MeshLab: I multiplied the XYZ coordinates by 1000, so the two files are now in the same unit.
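For anyone who wants to script that rescale instead of doing it in the MeshLab UI, a minimal sketch assuming the Open3D Python library and a hypothetical cloud.ply filename:

```python
import open3d as o3d

# Load the RealSense export (metres), scale to millimetres, save a copy.
pcd = o3d.io.read_point_cloud("cloud.ply")    # hypothetical filename
pcd.scale(1000.0, center=(0.0, 0.0, 0.0))     # metres -> millimetres
o3d.io.write_point_cloud("cloud_mm.ply", pcd)
```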

I also tried Blender, but I find that aligning two objects in MeshLab is more convenient.

Anyway, thanks a lot!

MartyG
Honored Contributor III

I did wonder whether the adjustment should be made in the program where the file is imported, but it was hard to be sure from the information available and I hadn't tested it myself. Thanks for the confirmation!

MPrat3
Beginner

My understanding of the disparity shift is that it is basically the "search range" along epipolar lines. The way depth matching works, you are trying to match two pixels (one in the left view and one in the right view). Their distance apart in pixel space - the disparity - is related to how far away they are in physical space: pixels close together correspond to far objects, and pixels far apart correspond to near objects. By changing the disparity shift value, you effectively move this search range, so searching at small disparities sees far, and searching at large disparities sees near. This fits the vision analogy Marty gave in a previous post.
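To put numbers on this, a minimal sketch of the MinZ/MaxZ arithmetic from the tuning guide, using illustrative (assumed) D415 values: baseline ~55 mm, focal length ~940 px at 1280x720, and the D4 ASIC's fixed search window of 126 disparities:

```python
# Depth from disparity: Z = focal_px * baseline_m / disparity_px.
# The ASIC always searches a window of 126 disparities; the disparity shift
# moves that window from [0, 126] to [shift, shift + 126].
FOCAL_PX = 940.0     # assumed D415 focal length at 1280x720
BASELINE_M = 0.055   # assumed D415 baseline
WINDOW = 126.0

def z_range(shift):
    """Return (MinZ, MaxZ) in metres for a given disparity shift."""
    max_z = float("inf") if shift == 0 else FOCAL_PX * BASELINE_M / shift
    min_z = FOCAL_PX * BASELINE_M / (shift + WINDOW)
    return min_z, max_z

print(z_range(0))    # shift 0:  MinZ ~0.41 m, MaxZ infinite
print(z_range(50))   # shift 50: MinZ ~0.29 m, MaxZ ~1.03 m
```

With those assumed numbers, a shift of 50 gives roughly the 30 cm to 110 cm window mentioned at the top of the thread.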
