
Depth Control Parameters

jb455
Valued Contributor II

Hi,

I'm trying to improve the quality of the depth image I get from each of the cameras (particularly the R200, but also the SR300). I found that the librealsense source has the following method (lines 63-86 of https://github.com/IntelRealSense/librealsense/blob/2e3f2ad658c46703426ad87587de66cf526478be/include/librealsense/rsutil.h, with a similar one for the SR300 at lines 89-121):

static void rs_apply_depth_control_preset(rs_device * device, int preset)
{
    static const rs_option depth_control_options[10] = {
        RS_OPTION_R200_DEPTH_CONTROL_ESTIMATE_MEDIAN_DECREMENT,
        RS_OPTION_R200_DEPTH_CONTROL_ESTIMATE_MEDIAN_INCREMENT,
        RS_OPTION_R200_DEPTH_CONTROL_MEDIAN_THRESHOLD,
        RS_OPTION_R200_DEPTH_CONTROL_SCORE_MINIMUM_THRESHOLD,
        RS_OPTION_R200_DEPTH_CONTROL_SCORE_MAXIMUM_THRESHOLD,
        RS_OPTION_R200_DEPTH_CONTROL_TEXTURE_COUNT_THRESHOLD,
        RS_OPTION_R200_DEPTH_CONTROL_TEXTURE_DIFFERENCE_THRESHOLD,
        RS_OPTION_R200_DEPTH_CONTROL_SECOND_PEAK_THRESHOLD,
        RS_OPTION_R200_DEPTH_CONTROL_NEIGHBOR_THRESHOLD,
        RS_OPTION_R200_DEPTH_CONTROL_LR_THRESHOLD
    };
    ...
}
27 Replies

MartyG
Honored Contributor III

The SR-300's image quality parameters can be altered with the SetIVCamFilterOption instruction.

https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/index.html?setivcamfilteroption_device_pxccapture.html Intel® RealSense™ SDK 2016 R2 Documentation

jb455
Valued Contributor II

Thanks Marty, I'm playing with that now, will report back my findings. There doesn't seem to be anything similar for the R200 though.

MartyG
Honored Contributor III

This forum thread on R200 image quality may be of use to you, if you haven't seen it already. Good luck!

https://software.intel.com/en-us/forums/realsense/topic/616443 Way to configure R200 gain and other parameters

jb455
Valued Contributor II

Ok, so I've tested the SetIVCamFilterOption options at ranges from 15cm to about a metre with my SR300. There doesn't seem to be that much difference between Very Close, Close and Mid range, though perhaps that's not too surprising as they all claim to be good for "up to 2m for SR300". I'm more interested in close ranges so didn't spend much time on Far and Very Far, but in case anyone else is interested, I found that I could get up to about 5 metres if I also set MotionRangeTradeOff to 100 while on Very Far mode.
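For anyone wanting to reproduce this, the combination boils down to something like the sketch below in C#. I'm assuming the IVCAM setters on PXCMCapture.Device from the R2 SDK wrapper, and assuming "Very Far" corresponds to the top value (7) of the 0-7 filter range, so double-check that against the docs:

```csharp
// Hypothetical sketch: get the PXCMCapture.Device from a SenseManager pipeline.
PXCMSenseManager sm = PXCMSenseManager.CreateInstance();
sm.EnableStream(PXCMCapture.StreamType.STREAM_TYPE_DEPTH, 640, 480, 30);
sm.Init();
PXCMCapture.Device device = sm.QueryCaptureManager().QueryDevice();

// "Very Far" filter mode (assumed to be the top of the 0-7 range).
device.SetIVCAMFilterOption(7);

// Trade motion tolerance for range (0-100); 100 got me to roughly 5 metres.
device.SetIVCAMMotionRangeTradeOff(100);
```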

Regarding that thread from the old forum: I won't have any control over the lighting conditions present when my app is being used, so I can't rely on extra ambient IR etc. The camera will be hand-held too so it would be difficult to average depth across a number of frames without having to do some object-tracking stuff too which seems a bit much! The librealsense depth control presets sound pretty much ideal for what I need, so I'm hoping there's a way of getting similar functionality with the SDK.

idata
Employee

Hello jb455,

Thank you for your interest in Intel® RealSense™ Technology.

In the SDK there is no way to control the depth parameters in the same way as the librealsense method. As MartyG kindly pointed out, the links he provided describe the best ways to modify the depth parameters through the SDK (SetIVCamFilterOption and ScenePerception).

Regards,

Andres V.
jb455
Valued Contributor II

Hi Andres,

That's a bit annoying, but thanks for the information.

I've been looking at the EnhanceDepth function (https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/index.html?enhancedepth_photoutils_pxcenhancedphoto.html) as another way to improve the depth image. I've managed to get EnhanceDepth itself working, but I'm struggling to map the resulting depth image (which has different dimensions to the captured depth image) back to the original colour image so I can get depth values for points in the colour image. All the samples I've found show how to create the enhanced image, but not how to actually use it.

I've tried using projection.CreateDepthImageMappedToColor and projection.QueryInvUVMap with the enhanced depth image as an argument, but both just return arrays of zeroes. The docs (https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/index.html?importfrompreviewsample_pxcphoto.html) mention a projection instance being encoded in the depth image, but I can't figure out how to get it out (I've tried QueryMetadata, CreateSerializable and a few other things, but haven't managed to get anything other than null). So I've been using the standard device.CreateProjection instance I use for the normal captured depth/colour images instead, which is what returns the zeroes.
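For reference, what I'm attempting looks roughly like this (variable names are mine; enhancedDepth is the image returned by EnhanceDepth, and the projection comes from the standard device.CreateProjection):

```csharp
// Sketch of the failing mapping attempts. "device", "colorImage" and
// "enhancedDepth" are my own variables from the capture/enhance pipeline.
PXCMProjection projection = device.CreateProjection();

// Attempt 1: remap the enhanced depth image into colour coordinates.
PXCMImage mappedDepth = projection.CreateDepthImageMappedToColor(enhancedDepth, colorImage);

// Attempt 2: inverse UV map, giving a depth coordinate for each colour pixel.
PXCMPointF32[] invUVMap = new PXCMPointF32[colorImage.info.width * colorImage.info.height];
pxcmStatus sts = projection.QueryInvUVMap(enhancedDepth, invUVMap);

// Both come back as all zeroes when fed the enhanced image.
projection.Dispose();
```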

So: is it possible to map between the enhanced depth image and the standard colour image?

Thanks for your help!

James

idata
Employee

Hello James,

Thank you for sharing your investigation and testing process.

I'll need more time to come up with information that you may find relevant. Thank you for your patience.

Regards,

Andres
MartyG
Honored Contributor III

I came across an interesting-looking F200 script that merges depth and color together into a single combined image instead of showing them as separate streams. I don't know if it'd be any use, but I thought I'd throw it your way just in case.

https://mtaulty.com/2015/04/16/m_15794/

idata
Employee

Hello James,

Have you already performed tests setting the Depth Stream Properties in the RealSense SDK to improve the quality of your images? Here is the link to the instruction set: https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/index.html?property_device_pxccapture.html

I'll be waiting for your response.

Regards,

Andres V.
jb455
Valued Contributor II

Hi Andres, thanks for the link.

I've been playing with the properties which have public Query and Set methods, but some of the properties listed on that page don't have them. How can I enumerate the Device.Property object to inspect and change values (in C#)?
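(For context, by "public Query and Set methods" I mean accessor pairs like these, on a PXCMCapture.Device instance; the depth confidence threshold is just one example of a property that does have a pair:)

```csharp
// "device" is a PXCMCapture.Device obtained from the capture manager.
// Properties with public accessor pairs can be read and written directly:
var threshold = device.QueryDepthConfidenceThreshold();
device.SetDepthConfidenceThreshold(threshold);

// Other entries in the PXCMCapture.Device.Property enum have no such
// pair exposed in the C# wrapper, which is what I'm stuck on.
```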

And Marty, thanks, that's actually very useful for something else I'll need to look at soon!

Thanks,

James

MartyG
Honored Contributor III

You're very welcome - I'm glad the script will be useful!

Regarding enumeration of Device.Property, have you seen this page?

https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/property_device_pxccapture.html Property (Advanced, +UWP)

jb455
Valued Contributor II

Yeah, that's the page Andres linked. It doesn't say how to actually access the properties, though!

MartyG
Honored Contributor III

I found a Unity script that uses Device.Property. Maybe you could find some useful insights by looking through that.

Edit: never mind, that script was for the old Perceptual Computing camera. Its use of Device.Property looks remarkably similar to RealSense's, though.

********

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using System.Windows.Forms;

namespace Testing2
{
    public partial class Form1 : Form
    {
        static UtilMPipeline pp = new UtilMPipeline();
        static System.Drawing.Bitmap rgb;
        static System.Drawing.Bitmap binary;

        public Form1()
        {
            InitializeComponent();
            Thread camera = new Thread(new ThreadStart(CameraCapture));
            camera.Start();
        }

        public void CameraCapture()
        {
            pp.EnableImage(PXCMImage.ColorFormat.COLOR_FORMAT_DEPTH);
            pp.EnableImage(PXCMImage.ColorFormat.COLOR_FORMAT_RGB32);
            pp.Init();
            for (;;)
            {
                pp.capture.device.SetProperty(PXCMCapture.Device.Property.PROPERTY_DEPTH_SMOOTHING, 0);
                if (!pp.AcquireFrame(true)) break;
                PXCMImage depth = pp.QueryImage(PXCMImage.ImageType.IMAGE_TYPE_DEPTH);
                PXCMImage color = pp.QueryImage(PXCMImage.ImageType.IMAGE_TYPE_COLOR);

                //******Older method*****
                //color.QueryBitmap(pp.QuerySession(), out rgb);
                //depth.QueryBitmap(pp.QuerySession(), out binary);
                //pictureBox1.Image = binary;

                //******New method*****
                PXCMImage.ImageData data;
                depth.AcquireAccess(PXCMImage.Access.ACCESS_READ_WRITE, out data);
                // The depth image is pointed to by the PXCMImage.ImageData structure.
                pictureBox1.Image = data.ToBitmap(depth.info.width, depth.info.height);
                depth.ReleaseAccess(ref data);
                pp.ReleaseFrame();
            }
            pp.Close();
            pp.Dispose();
        }
    }
}

jb455
Valued Contributor II

Yeah, that must be from before they added the Query() and Set() methods for some of the properties. I don't have access to a device.SetProperty() method.

idata
Employee

Hello James,

I'll investigate a little bit more about how to enumerate the Device.Property object. As soon as I find something that you may find useful, I'll post it here.

Thank you for your patience.

Regards,

Andres V.
idata
Employee

Hello James,

After doing some research, it turns out that only the properties with public Set methods can be modified; the rest of the listed properties can't be changed.

Regards,

Andres V.
idata
Employee

Hello James,

I was wondering if you could share the current state of your project, and tell me if you need further assistance.

I'll be waiting for your response.

Regards,

Andres V.
jb455
Valued Contributor II

Hi Andres,

We actually noticed that using the SR300 on skin, which is a common use case for our app, produces really poor depth data with large waves. The image below shows a common example of the point cloud with the camera about 30cm from my arm. Note how it's only on my skin that the depth is messed up: as soon as it gets onto my sleeve it's fine as we'd expect.

I've spent the last week or so investigating and trying to mitigate this. I've found that setting the laser power to 1 helps but the camera still has to be much further from the subject when looking at skin (~30cm) than it does with other materials (~12cm) to get acceptable depth data. We want to be able to take photos as close as possible to improve accuracy so we'd rather not limit to >30cm but that's the only thing I've found that works. The image below shows the same area after lowering the laser power and setting the filter to "very far" (which doesn't seem to help the depth data but does mean the user can see when the camera is far enough away). It's not perfect though: you can see there's still some waviness on the skin area which shouldn't be there. I also occasionally get 'holes' where there's a small (~2-5mm) irregular oval of depth points which are suddenly a couple of millimetres deeper than the surrounding points where in reality the area is smooth.
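Concretely, the mitigation amounts to something like this (again assuming the IVCAM setters on PXCMCapture.Device, that the laser power range is 0-16 on these cameras, and that "very far" is the top of the 0-7 filter range; all worth verifying against the docs):

```csharp
// "device" is the PXCMCapture.Device for the SR300.
// Lowest non-zero laser power: reduces the wave artefacts on skin.
device.SetIVCAMLaserPower(1);

// "Very far" filter mode: doesn't seem to fix the depth itself, but
// lets the user see when the camera is far enough away.
device.SetIVCAMFilterOption(7);
```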

I'm only seeing these effects using the SR300s (the Intel prototype, Razer Stargazer and Creative BlasterX all have the same effects), but the F200 is fine (until you get really close, under 10cm), as is the R200 (but you can only get to about 30cm anyway).

Have you seen this internally? Any tips to reduce this? You can test using the Camera Explorer app included in the SDK: center the camera on some skin and move the viewpoint around to get to a shallow angle to see the surface topology, as below.

MartyG
Honored Contributor III

I don't know if this will be at all relevant, but I will chip in anyway.

I recently read an article about the Xbox Live Vision camera, the predecessor of Kinect. A developer created a dancing game that tracked the skin tone of the player. Right before the game was due to be demoed on a Jumbotron screen in Times Square, though, they found that this had been a terrible idea, because the quality of the tracking varied with the player's skin tone (the camera relied on the amount of light reflected off the skin). The darker the skin tone, the worse the tracking performed, as there was less reflected light for the camera scan to pick up.

So this makes me wonder if our old friend the IR emitter is responsible for your results with the SR300 - whether it is affecting the perception of light reflected off the skin, if indeed the RealSense camera even processes images that way.

jb455
Valued Contributor II

Yes, my working hypothesis is that it's due to the high IR reflectance of skin: the waves always seem to be oriented vertically relative to the camera, so they probably represent the peaks and troughs of the projected IR pattern. Turning the IR power down to the last step before "off" helps, but I don't know how different lighting conditions will affect it. I've only tested in the office with fluorescent lighting so far; natural lighting etc. may affect things in different ways, so the hardcoded value of 1 may not be best for all situations.

I'm surprised this hasn't been brought up before though, as surely it can't be that uncommon for people to use these cameras on skin?
