Hi. I have an SR300 camera and am trying to align the RGB and depth images. I have been able to do this with the SDK 2.0 examples. The documentation states that the maximum resolution of the depth stream is 640x480 @ 60 Hz, Z16. My question is what happens when the depth image is aligned with a colour image larger than this resolution. Is there supersampling of pixels? I am curious to know what algorithm is used to stretch the low-resolution depth map to the higher image resolution, and what kind of aliasing effects accompany it.
Could you help me in this regard?
Thanks a lot
SDK 2.0 processes images from the SR300 a bit differently from how it handles the 400 Series cameras. The SR300's traits with the SDK are:
- Depth images are always pixel-aligned with infrared images
- The depth and infrared images have identical intrinsics
- The depth and infrared images always use the Inverse Brown-Conrady distortion model
- The extrinsic transformation between depth and infrared is the identity transform
- Pixel coordinates can be used interchangeably between these two streams
- Color images have no distortion
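As a rough illustration of what alignment does per pixel: deproject a depth pixel to a 3D point, transform it by the depth-to-colour extrinsics, and project it into the other image. This is a minimal sketch with made-up intrinsic values (a real SR300 reports its own), using a plain pinhole model and ignoring distortion. With identical intrinsics and an identity extrinsic transform, as between depth and infrared above, a pixel maps back to itself:

```python
def deproject(pixel, depth, intrin):
    """Map a pixel plus a depth value (metres) to a 3D point in the sensor frame."""
    fx, fy, ppx, ppy = intrin
    u, v = pixel
    return ((u - ppx) / fx * depth, (v - ppy) / fy * depth, depth)

def project(point, intrin):
    """Map a 3D point back to a pixel in another sensor's image."""
    fx, fy, ppx, ppy = intrin
    x, y, z = point
    return (x / z * fx + ppx, y / z * fy + ppy)

# Hypothetical intrinsics: fx, fy, ppx, ppy.
intrin = (475.0, 475.0, 320.0, 240.0)

# Depth and infrared share intrinsics and an identity extrinsic transform,
# so the round trip lands on the same pixel coordinate.
pt = deproject((100, 150), 0.5, intrin)
u, v = project(pt, intrin)  # -> (100.0, 150.0)
```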
Hi. Thank you for the reply. But my question was what happens when the resolutions of the depth and colour frames are different.
I thought it would not be possible to align different resolutions, but to my surprise the align function produced a depth image output of size 1920x1080, matching the default colour output.
However, the hardware is restricted to 640x480 for the depth sensor, so this stretching must be done by some algorithm. I want to know if this introduces aliasing in the depth image.
I hope I have communicated my query properly.
My assumption would be that with both the SR300 and 400 Series camera models, the Align processing block handles the alignment of color and depth.
https://github.com/IntelRealSense/librealsense/wiki/Projection-in-RealSense-SDK-2.0#frame-alignment
Here's the code where the alignment happens:
https://github.com/IntelRealSense/librealsense/blob/master/src/proc/align.cpp#L403
Each depth pixel is mapped to a range of 'other' pixels and its value is copied, so no interpolation etc. is done.
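To make the "mapped to a range and copied" behaviour concrete, here is an illustrative sketch (not the SDK code itself): when the target image is larger, each depth pixel covers a rectangle of output pixels and its value is copied into all of them, a constant / nearest-neighbour fill with no interpolation, which is why edges come out blocky:

```python
import numpy as np

def align_upscale(depth, out_shape):
    """Copy each depth pixel into the rectangle of output pixels it covers."""
    oh, ow = out_shape
    dh, dw = depth.shape
    out = np.zeros(out_shape, dtype=depth.dtype)
    for dy in range(dh):
        for dx in range(dw):
            # Rectangle of output pixels this single depth pixel maps onto.
            y0, y1 = dy * oh // dh, (dy + 1) * oh // dh
            x0, x1 = dx * ow // dw, (dx + 1) * ow // dw
            out[y0:y1, x0:x1] = depth[dy, dx]
    return out

depth = np.array([[100, 200],
                  [300, 400]], dtype=np.uint16)
big = align_upscale(depth, (4, 4))
# each source pixel becomes a 2x2 block of identical values
```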
So, if I understood correctly, that means the same depth value is assigned to a range of RGB pixels, right?
Yes. From looking at the code, I'm not sure what would happen if the colour image were smaller than the depth image; I think depth values would be overwritten when multiple depth pixels map to a single colour pixel, so the value you end up with may be the last one mapped there.
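That overwriting can be sketched like this (illustrative only, assuming each write simply replaces the previous one, as described above): when the target is smaller, several depth pixels land on the same output pixel and the last write wins:

```python
import numpy as np

def align_downscale(depth, out_shape):
    """Map each depth pixel to one output pixel; later writes overwrite earlier ones."""
    oh, ow = out_shape
    dh, dw = depth.shape
    out = np.zeros(out_shape, dtype=depth.dtype)
    for dy in range(dh):
        for dx in range(dw):
            out[dy * oh // dh, dx * ow // dw] = depth[dy, dx]
    return out

depth = np.array([[1, 2],
                  [3, 4]], dtype=np.uint16)
small = align_downscale(depth, (1, 1))
# all four depth pixels land on the single output pixel;
# the last write (value 4) wins
```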
Thank you guys. I will also look into the alignment code. But the interpolation here is just constant interpolation. So, to be better off, it seems best that the aligned RGB and depth images are of the same resolution, capped at the maximum resolution of the depth image, which is 640x480 at a 60 Hz refresh rate.
Am I thinking in the right direction?
It depends on the goals of your project, really. In my project we use the highest colour resolution, as that is what the user sees, with the depth image resolution depending on the environment it's used in. A lower depth resolution decreases the minimum distance, which makes short-range use easier; a higher resolution gives better accuracy at longer range.
But if the resolution difference between the RGB and depth images is too high, then the error in depth pixels (especially at the edges) will be very high, right? The edge in the RGB image will not correspond to the edge in the depth image at object boundaries. I saw errors in the range of 40-50 pixels when trying a 640x480 depth image with a 1920x1080 RGB8 image.
True, I hadn't thought about it like that before. You could try some of the post-processing filters; they can do smoothing etc. on the depth points.
Are you saying your alignment was off by 40-50 pixels? I've never seen it that bad before; it's usually 2-3 pixels at most in my experience. Your camera may be miscalibrated if that's the case; you may need to contact Intel support for a replacement, as there is currently no calibration tool for the SR300.
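For a feel of what such smoothing does, here is a hand-rolled 3x3 median filter, purely illustrative and not the SDK's own post-processing blocks, showing how an isolated bad depth value gets suppressed:

```python
import numpy as np

def median3(depth):
    """Replace each interior pixel with the median of its 3x3 neighbourhood."""
    h, w = depth.shape
    out = depth.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.median(depth[y - 1:y + 2, x - 1:x + 2])
    return out

spiky = np.array([[10, 10, 10],
                  [10, 90, 10],
                  [10, 10, 10]], dtype=np.uint16)
smooth = median3(spiky)  # the lone 90 spike is replaced by 10
```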
Let me reconfirm to make sure there is indeed a calibration error. I will get back to this thread if I find anything additional.
Thank you all for the help.
Best regards