
D435 Depth resolution at 1280x270 / 60cm distance.

PBoos2
Beginner

Hello, I'm just unboxing the D435 to see how practical it would be for our work.

I work in robotics and have used 3D cameras a lot since the Kinect 360, for real-world applications (not games).

 

The Kinect One was a TOF (time of flight) camera, and I see the D435 uses a dot pattern like the Kinect 360.

 

Though since its lens angle is a bit narrower, I get a better "zoom" (larger scaled objects).

 

Although the optimization math around it blurs the result (quite a lot), and the dots of the pattern have a certain spacing.

What can be said about the resolution along the x axis of its 2D depth image? For the Kinect One it was very clear: TOF on a 512-pixel-wide image, so it was easy to calculate its resolution.

 

On the old Kinect 360, only around 1/8th of the depth pixels were true depth pixels; all the others were interpolated (a not so widely known fact).

 

Since the D435 appears to use similar techniques to the 360,

 

I wonder what can be said about the D435's true depth resolution at a near range of 60 cm.

 

What would be the minimal object size that could still be resolved in depth?

 

Put simply: how many pixels on a camera frame scanline are truly 100% known, and what percentage is blended (interpolated) depth data?

 

What are the statistical deviations of the depth data over the full frame after the camera has warmed up (e.g. after 10 minutes of use)?

Is this information available? (I could calculate it for one camera, but if it is available as general product info, e.g. averaged over multiple cameras, that would be better; I currently have only one camera.)

Is stereoscopic vision used in combination with the dot pattern as well (i.e. some advantage over the Kinect 360)?

 

Or does 'stereoscopic' mainly describe the RGB view combined with the depth view?
14 Replies
PBoos2
Beginner

Typo in the title: that should have been 720, not 270.

MartyG
Honored Contributor III

Yes, Kinect 2 was a 'time of flight' camera. The 400 Series cameras' projection method is officially known as being Stereo or Stereoscopic because of their left and right imagers. For comparison, the earlier RealSense SR300 camera model uses Coded Light like Kinect 1.

The 400 Series has an error range of less than 1% thanks to its advanced D4 Vision Processor component.

You may also like to refer to the detailed data sheet document for the complete 400 Series.

https://software.intel.com/sites/default/files/Intel_RealSense_Depth_Cam_D400_Series_Datasheet.pdf

An earlier data sheet focused exclusively on the D415 and D435 instead of the full 400 Series is also available.

https://www.mouser.com/pdfdocs/Intel_D400_Series_Datasheet.pdf

Here is a description from the latter data sheet on how the stereo system works.

********

The Intel RealSense depth camera D400 series uses stereo vision to calculate depth. The stereo vision implementation consists of a left imager, right imager, and an optional infrared projector. The infrared projector projects non-visible static IR pattern to improve depth accuracy in scenes with low texture.

The left and right imagers capture the scene and send imager data to the depth imaging processor, which calculates depth values for each pixel in the image by correlating points on the left image to the right image and via shift between a point on the left image and the right image.

The depth pixel values are processed to generate a depth frame. Subsequent depth frames create a depth video stream.
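To illustrate the correlation step described above (this part is not from the data sheet): depth at each pixel follows from the disparity between the matched left and right points as z = f * B / d, where f is the focal length in pixels, B is the baseline between the two imagers, and d is the disparity in pixels. A minimal Python sketch, using assumed example values for f and B rather than real D435 calibration data:

    # Stereo depth from disparity: z = f * B / d.
    # focal_px and baseline_m are illustrative assumptions, not official D435 values.
    def depth_from_disparity(disparity_px, focal_px=640.0, baseline_m=0.05):
        if disparity_px <= 0:
            return float("inf")  # zero disparity = point at infinity
        return focal_px * baseline_m / disparity_px

    print(depth_from_disparity(10.0))  # ~3.2 m for a 10-pixel disparity with these values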

idata
Employee

Can you help decipher this a bit further? I have a very similar question, and it would be super helpful to understand the answer in terms of mm or cm of resolvable data.

How should I be thinking about this, and is there any easily translatable way of getting to an answer from the terms Intel uses? For example, at 1.5 m distance, how many cm or mm of resolution should I expect, and how can I come to that answer for varying distances?

Thanks.
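For what it is worth, a commonly used rule of thumb for stereo depth cameras (an approximation, not an official Intel figure) is that depth error grows with the square of the distance: error ≈ z^2 * s / (f * B), where s is the subpixel matching accuracy, f the focal length in pixels and B the baseline. A hedged sketch with assumed values of roughly a 50 mm baseline, 640 px focal length and 0.08 subpixel accuracy:

    # Rough stereo depth-error estimate: error ~ z^2 * subpixel / (focal * baseline).
    # All constants are assumptions for illustration, not D435 specifications.
    def depth_rms_error_m(z_m, focal_px=640.0, baseline_m=0.05, subpixel=0.08):
        return (z_m ** 2) * subpixel / (focal_px * baseline_m)

    for z in (0.6, 1.0, 1.5, 2.0):
        print(f"{z:.1f} m -> ~{depth_rms_error_m(z) * 1000:.1f} mm")

With these assumed numbers the error would be on the order of 1 mm at 0.6 m and 5-6 mm at 1.5 m, but the real figures depend on the camera's calibration and the scene.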

MartyG
Honored Contributor III

I do not think it is as easy to describe in practice as saying 'at X distance the image quality will be a certain state'. There are all kinds of environmental factors, such as the lighting in a location, that could affect the image results.

According to the data sheet for the 400 Series cameras, "the depth pixel value is a measurement from the parallel plane of the imagers and not the absolute range".
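To illustrate that point: the reported depth is the Z distance to the plane of the imagers, so the straight-line range to a point has to be recovered with the camera intrinsics (librealsense offers rs2_deproject_pixel_to_point for this). A small sketch with placeholder intrinsics:

    import math

    # Convert a plane-parallel depth value at pixel (u, v) into a 3D point and its range.
    # fx, fy, cx, cy are placeholder intrinsics; the real ones come from the camera.
    def deproject(u, v, depth_m, fx=640.0, fy=640.0, cx=640.0, cy=360.0):
        x = (u - cx) / fx * depth_m
        y = (v - cy) / fy * depth_m
        z = depth_m
        return x, y, z, math.sqrt(x * x + y * y + z * z)  # last value is the absolute range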

PBoos2
Beginner

Well, yes, lighting conditions can and do matter. For the Kinect I once saw reports where people simply recorded a few distances.

 

Pointing it at a flat wall and then calculating the errors (by the way, it also had a warm-up error, so some people used extra cooling on those devices).

 

Errors given as standard deviation, max error, and average error.

Since the Kinect used TOF this was easy to work out, but Intel relies on stereoscopic vision,

 

So to measure that, I think the laser dots should be used, so that there is a pattern to work with on an office white/gray wall.

 

As the pattern has an effect that might influence the result.

Next you can put the camera at a certain known distance, record a single frame, and record the distance differences per pixel.

 

Then you take some more frames (a hundred or so) and, per pixel, calculate the averaged distance, the max difference, and the standard deviation.

 

Since this is per pixel, show it in a map where those 3 values can be made visible using some color graphics.

Then you could repeat this over several distances, for example 60, 80, 100, 120 cm.

 

Those are interesting from an industrial point of view, being common conveyor belt sizes (hence my interest in 60 cm).

 

Or just at every 10 cm increment, or at every cm.

Though this is just the Z resolution; since Intel has a lower detail resolution, maybe we need to think of another method for measuring detail at a certain distance.

 

But what I wrote above would already give some indication, I think.
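A minimal sketch of the per-pixel statistics procedure described above, using numpy and pyrealsense2 (the stream settings and frame count are assumptions; any depth source that yields one 2D array per frame would do):

    import numpy as np
    import pyrealsense2 as rs

    # Capture N depth frames of a static flat wall and compute per-pixel statistics.
    N = 100
    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
    profile = pipeline.start(config)
    depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()

    frames = []
    for _ in range(N):
        depth = pipeline.wait_for_frames().get_depth_frame()
        frames.append(np.asanyarray(depth.get_data()).astype(np.float32) * depth_scale)
    pipeline.stop()

    stack = np.stack(frames)                            # shape (N, 720, 1280), in metres
    mean_map = stack.mean(axis=0)                       # average distance per pixel
    std_map = stack.std(axis=0)                         # standard deviation per pixel
    range_map = stack.max(axis=0) - stack.min(axis=0)   # max difference per pixel

The three maps can then be color-mapped and the run repeated at 60, 80, 100 and 120 cm.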
MartyG
Honored Contributor III

It sounds like you've got a testing procedure well worked out.

PBoos2
Beginner

Yes, well, that was at a time when I had much more time to read and test things out.

 

I once did it with a measuring tape for a single spot, but later I found that a university had written reports and created maps as described above, at a single distance.

 

It lacked multiple distances, but it confirmed some of the distortion/noise ideas I had of the device at the time.

Those reports showed interesting slight barrel-like distortions, and a middle spot that required corrections.

 

Only with such reports can you see these errors. I hope Intel can create them, because it is best if you can compare a few cameras of the same model.

(Well, I remember at some point I had several Kinects, but strangely one was particularly bad, much worse than the others; some specific hardware release version.)

PBoos2
Beginner

Thanks. I'm not 100% sure yet, because I've not yet coded against it (that will happen soon), but so far it seems the Kinect One provides more detail.

 

With the Kinect One I was able (with my own software corrections) to get to a 2 mm depth resolution over a scanline, and rather high contrast precision, meaning detecting edges etc. (or even fine facial details).

So far (but I'm only using the Intel Viewer) it seems less detailed: depth resolution +/- 5 mm, object resolution around 1 cm objects at 60 cm distance, quite blob-like. I see roughly 3 times more pixels with a RealSense, but with an estimated 60% less detail; in total less detail, but more pixels to process for robotics applications, which might be a penalty (but with the bonus that it will work outdoors, which the Kinect One didn't).

MartyG
Honored Contributor III

Others have also observed that they could not reproduce with the D-cameras values that they had on Kinect cameras. The 400 Series is undoubtedly far superior technically to the Kinects, though the differences in how they implement projection likely account for a number of differences in results. Areas in which the 400 Series compares negatively will probably be ironed out in time with SDK and firmware updates and new software tools, though the RealSense and Kinect cameras will never be exactly the same.

You can define your own custom visual presets for the camera to re-balance its processing so that some functions are improved at the expense of others. An example is the pre-made High Accuracy preset, which boosts accuracy but lowers the fill rate.

https://github.com/IntelRealSense/librealsense/wiki/D400-Series-Visual-Presets D400 Series Visual Presets · IntelRealSense/librealsense Wiki · GitHub
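For reference, a hedged sketch of selecting that preset from Python with pyrealsense2, assuming the attached device is a D400-series camera that supports the visual_preset option:

    import pyrealsense2 as rs

    # Apply the pre-made High Accuracy visual preset to the depth sensor.
    pipeline = rs.pipeline()
    profile = pipeline.start()
    depth_sensor = profile.get_device().first_depth_sensor()

    if depth_sensor.supports(rs.option.visual_preset):
        depth_sensor.set_option(rs.option.visual_preset,
                                int(rs.rs400_visual_preset.high_accuracy))
    pipeline.stop()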

PBoos2
Beginner

Well, the math might be a bit ahead, but technically TOF is more complex to do.

 

And whichever performs best is technically ahead, but you're right, it has only just been released.

 

Even the images in those articles were not final images; I like that they are very open about it.

 

Much more open than Microsoft ever was, and maybe we can improve this one too.

 

What I rather wonder is: why is it so wobbly at larger scales?

Does the Viewer have a manual? Can those wobbles be removed with altered settings?

And I wonder if it could be improved by some math if it was given known depth areas.

Often with robotics we have a table or conveyor belt with some unused areas.

 

We could simply mark those areas red or so, and tell a customer that they should be kept clean and empty.

 

I also noticed hot air bubbles indoors (or dust?), something I've seen in other industrial TOF depth cams as well.

 

Though those didn't use stereo vision; if you think about it, that is rather strange.

 

Normal RGB vision (my eyes) doesn't show that, but some IR cams have that 'flaw' (then why use such tech?).

 

Or is something else going on here? (Perhaps it's a haunted area.)
MartyG
Honored Contributor III

Yes, Intel has a community-focused approach with the open-source SDK 2.0 software, and has already incorporated community contributions into the SDK.

I do not have experience with image wobble in the 400 Series cameras, but I do encounter it when programming virtual camera views in the Unity game creation engine. In that instance, it tends to be caused by the camera image constantly updating in response to processing what it is seeing. For example, I use a routine that stops the camera view from passing through the surface of objects, so in a confined space where the camera is close to the walls, it tends to bounce the image as it constantly adjusts to prevent passing through the object's surface, until the camera is moved away from the objects into a more open space.

Yesterday I was researching extensively whether a manual exists for the Viewer's settings such as RAU and HDAD, but could not find one. I am waiting for a response from Intel regarding whether such documentation exists yet.

I would recommend de-selecting the 'Enable auto exposure' option in the viewer to see if the wobbles decrease when exposure is switched to manual control.
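As a sketch of what that looks like programmatically (option names as exposed by librealsense; the exposure value is just an assumed example):

    import pyrealsense2 as rs

    # Switch the depth sensor to manual exposure with a fixed value.
    pipeline = rs.pipeline()
    profile = pipeline.start()
    depth_sensor = profile.get_device().first_depth_sensor()

    depth_sensor.set_option(rs.option.enable_auto_exposure, 0)
    depth_sensor.set_option(rs.option.exposure, 8500)  # microseconds, example value
    pipeline.stop()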

Developments in math that are incorporated into the SDK, whether by the RealSense developer team or by community contributions, are sure to refine the camera's capabilities over time. For example, Intel recently demonstrated using four D435 cameras for volumetric capture at the Sundance festival.

https://www.youtube.com/watch?v=9oQDz_cfUlo Intel® RealSense™ at Sundance 2018 - YouTube

https://realsense.intel.com/intel-realsense-volumetric-capture/ Volumetric Capture @ Sundance using Intel RealSense Depth Cameras

Ah yes, spirit orbs. You are correct, these can also be dust motes captured by the camera, which is why professional paranormal researchers are very careful before declaring that such an orb might be a spirit. Cameras can see more of the light spectrum than the human eye can. For example, using depth cameras under fluorescent lights can cause image disruption due to flickering that is hard to see with our own eyes.

PBoos2
Beginner

Maybe I used the wrong word (I'm a Dutch person). The wobbling is: when I go to the point cloud view I get lots of 'vibrations', and those vibration waves seem rather large.

 

I'd assume depth 'noise' would be a bit more random, rather like white noise, and not such large waves.

 

 

A nice idea might be to use constant calibration: CurrentDepth[x,y] = (DepthOld[x,y]*n + CurrentDepth[x,y]) / (n+1), where n >= 1 keeps the division safe,

 

for configurable areas, so the other areas could use them as a reference of known depth.
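A small sketch of that running-average idea over a marked region of interest (numpy; the weight n and the ROI bounds are placeholders):

    import numpy as np

    # Blend a new depth frame into a running estimate for a known, static region:
    # current = (old * n + new) / (n + 1), applied only inside the ROI.
    def update_reference(depth_old, depth_new, roi, n=4):
        y0, y1, x0, x1 = roi
        blended = depth_old.copy()
        blended[y0:y1, x0:x1] = (depth_old[y0:y1, x0:x1] * n +
                                 depth_new[y0:y1, x0:x1]) / (n + 1)
        return blended

librealsense also ships a temporal filter (rs.temporal_filter) that applies similar exponential smoothing to the whole frame.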
MartyG
Honored Contributor III

There was a case with the Kinect where someone was having an issue with their point cloud 'vibrating'.

https://social.msdn.microsoft.com/Forums/expression/en-US/a8f05c99-7dc3-4654-a07c-5f854a341705/kinect-point-cloud-angle-changing-due-to-vibration-of-sensor-accelerometer-issue?forum=kinectsdk Kinect point cloud angle changing due to vibration of sensor - Accelerometer issue?

idata
Employee

Hello PGTART,

 

 

Intel has a documented procedure for testing RealSense depth quality. Please find it at the following link:

 

 

https://www.intel.com/content/www/us/en/support/articles/000026982/emerging-technologies/intel-realsense-technology.html

 

 

Please feel free to use this testing procedure and even compare it to your own. However, keep in mind that the procedure I am sharing with you is how Intel tested their cameras.

 

 

Best Regards,

 

Juan N.

 
