
D435 or D415 for fast movement 0.3-2 meters from camera?

ARoth4
Beginner
2,124 Views

I bought the D435 and it should arrive this week, after months of waiting. I watched some reviews and saw that the D415 performs better at close range.

I bought the D435 because it was advertised as the most similar to the Kinect. My concern is speed, not width, and not depth past 2-3 meters. I need less than 2 meters of width and up to 3 meters of depth. Would it make sense to leave the D435 unpacked and just exchange it for a D415?

Also, is noise a problem? The reviews show noise. Have people been successful in implementing filters for fast movement?


12 Replies
ARoth4
Beginner
402 Views

Also, does the 90 FPS mean it gathers three times the frames of the Kinect? It seems like that would be a substantial difference in the amount of information gathered. Is that right?

MartyG
Honored Contributor III
403 Views

The D435 is indeed the best model for capturing motion because of its fast 'global shutter'. The rolling shutter used in the D415 is suited to capturing static objects and does not cope so well with motion. This is why global shutter cameras are recommended for moving applications such as attaching a camera to a balloon and floating it up into the atmosphere. So you made the correct choice for your needs.

The D435 is affected by depth noise a bit more than the D415 due to its design.

Regarding FPS: a higher FPS such as 90 does involve transmitting more data through the cable, and so uses more bandwidth, than a lower FPS such as 30.

Edit: As an example of motion processing, if you go to 26 minutes 30 seconds into the YouTube video below, you can see a 400 Series camera mounted in a car, capturing the street as the car drives down it.

https://www.youtube.com/watch?v=2BIxXn0DIK0 Intel presents RealSense™ at the IDTechEx Show!: From 3D Printing to Drones - YouTube
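To put a rough number on that bandwidth difference, here is a back-of-envelope sketch. It assumes a 16-bit depth format (2 bytes per pixel, as in the SDK's Z16 format) at the 848x480 resolution mentioned in this thread, and it ignores USB protocol overhead:

```python
def depth_bandwidth_mb_per_s(width, height, fps, bytes_per_pixel=2):
    """Raw depth-stream payload in megabytes per second.
    Assumes a 16-bit depth format (2 bytes/pixel); ignores USB overhead."""
    return width * height * bytes_per_pixel * fps / 1e6

# 90 FPS carries three times the raw data of 30 FPS at the same resolution.
for fps in (30, 60, 90):
    print(f"848x480 @ {fps} FPS: {depth_bandwidth_mb_per_s(848, 480, fps):.1f} MB/s")
```

So a 90 FPS depth stream at 848x480 is on the order of 73 MB/s of raw payload, versus about 24 MB/s at 30 FPS.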

ARoth4
Beginner
402 Views

Thanks. I'm happy I got the right one.

The video didn't really show examples.

With the Kinect I created a few filters to clean small noise off the bitmap and did some averaging of depths, but nothing overwhelming. The raw streamed data is pretty clean. Depths in the center mass sometimes jump a bit when you don't expect them to, but the rendered depth map looks very clean.

I read some comments complaining about the noise of the D series. It may be a small minority of people, and they might not be doing basic filtering, but the images on YouTube do look noisier than the Kinect's. If I really get 3x the frames per second of the Kinect, then maybe I can use that extra information to smooth the data (through averaging). My fear is going down a long road of converting our C# to C++ and creating new filters in order to test the camera, only to find out it's a bit noisier than we can handle.

Thanks for the response.
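The "use the extra frames to average out noise" idea can be sketched as an exponential moving average over successive depth frames. This is only a minimal pure-Python illustration (real frames would be 848x480 buffers of 16-bit values; here they are tiny flat lists, and the alpha value is an arbitrary assumption):

```python
def temporal_average(frames, alpha=0.4):
    """Exponentially smooth a sequence of depth frames (flat lists of mm values).
    alpha near 1 trusts the newest frame; alpha near 0 smooths harder.
    A zero pixel means 'no depth reading' and is passed through unsmoothed."""
    smoothed = list(frames[0])
    for frame in frames[1:]:
        for i, d in enumerate(frame):
            if d == 0 or smoothed[i] == 0:      # invalid depth: don't blend
                smoothed[i] = d
            else:
                smoothed[i] = alpha * d + (1 - alpha) * smoothed[i]
    return smoothed

# A single noisy pixel bouncing around 1000 mm gets pulled toward the mean.
print(temporal_average([[1000], [1040], [960], [1010]]))
```

With more frames per second arriving, the filter converges faster for the same amount of wall-clock latency, which is the practical payoff of the higher FPS.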

MartyG
Honored Contributor III
402 Views

You would not necessarily need to convert to C++, as the 400 Series cameras' SDK 2.0 has a C# wrapper to enable programming in that language. C++ is much more strongly supported in terms of sample programs and online references, but the C# option is there if you wish to take it.

https://github.com/IntelRealSense/librealsense/tree/master/wrappers/csharp librealsense/wrappers/csharp at master · IntelRealSense/librealsense · GitHub

There are environmental adjustments that can be made to reduce disruption to the image, such as turning down the power level of the laser and avoiding using the camera in rooms with fluorescent ceiling strip-lights (which flicker at a frequency that is hard to see with the human eye).
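Turning down the laser power programmatically amounts to reading the sensor's supported range and writing a value into it. The helper below is hypothetical; the actual pyrealsense2 calls (which need a connected camera) are shown only in the comments:

```python
def clamp_laser_power(fraction, min_power, max_power):
    """Map a 0.0-1.0 fraction onto a sensor's supported laser-power range.
    Hypothetical helper: with pyrealsense2 you would read the range via
    sensor.get_option_range(rs.option.laser_power) and apply the result with
    sensor.set_option(rs.option.laser_power, value)."""
    fraction = max(0.0, min(1.0, fraction))
    return min_power + fraction * (max_power - min_power)

# e.g. turning the emitter down to half of a 0-360 range:
print(clamp_laser_power(0.5, 0.0, 360.0))   # -> 180.0
```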

ARoth4
Beginner
402 Views

Good.

Do you know how light the wrapper is? Would I experience a bottleneck, and has it been thoroughly tested? I'll definitely test the camera with it, but I won't know whether any lag is caused by the wrapper or not. I use pointers to a bitmap, which is as low-level as I can get in C#. I'll have to refresh my C++ knowledge on how the same is accomplished, but any extra overhead per pixel rendered is going to create lag. I know C++ is usually faster than C#, and we'll convert completely at some point (if the camera proves itself). Either way we'll try it out with the wrapper.
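For reference, the per-pixel access pattern in question is just a single linear pass over a flat 16-bit depth buffer. The sketch below fakes a tiny frame with the stdlib `array` module; with the real SDK the buffer would come from the depth frame's `get_data()` call (an assumption here is that in practice you would wrap it in something like a numpy array rather than iterate in pure Python):

```python
from array import array

def count_valid(depth_buffer):
    """One linear pass over a flat 16-bit depth buffer (0 = no reading),
    the same access pattern a per-pixel filter would use."""
    return sum(1 for d in depth_buffer if d != 0)

# Fake 2x3 depth frame, values in millimetres:
frame = array("H", [0, 1200, 1185, 0, 1190, 1210])
print(count_valid(frame))   # -> 4
```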

I asked in a response to rjo__

"Also, do you know how the conversion between depth space and camera space(real distance) is accomplished in the SDK? Is it fast (I assume it exists)?"

Maybe you have a thought about that?

Thanks again for the response.

MartyG
Honored Contributor III
402 Views

Lag is more likely to occur in the current versions of the RealSense SDK 2.0 when streaming color. The link below explains this.

ARoth4
Beginner
402 Views

That's an important link.

ROhle
Novice
402 Views

I don't know... I have both. And they are both fantastic.

While the D435's infrared sensors support higher frame rates, the Viewer only supports 30 FPS (and there is probably a good reason for this).

In the D415 all of the sensors are the same, both the two IR sensors and the color sensor. So the color view looks exactly like the IR view, except for the offset between the two views, of course. With the D435, the IR sensors are different from the color sensor (by the way, the color sensor of the D435 is exactly the same as the sensors in the D415... no global shutter). What surprised me was that, at this point in time using Win10 Pro, the field of view of the color image of the D435 is radically different from the field of view of the IR sensors... much more difficult for me to use.

On the plus side, the D435 uses monochromatic IR sensors... to my eye, the images are cleaner and the exposure controls give better results.

I think you will get good results from both... depending on the problem you are trying to solve.

I expect there to be a load of 3rd party add-ons... one that I would expect is a prismatic separator to give a variable baseline. I have tried this without the IR emitter and it could work, but it would require supporting software... this would improve the flexibility of the acquisitions. Don't worry if it doesn't seem like you can get there from here... you can get there from here, but it's going to take time.

ARoth4
Beginner
402 Views

Thanks

Great to hear that there will be 3rd party add ons.

I am not sure what you mean by the Viewer being 30 FPS. I have yet to even unpack the camera and probably won't for a couple of weeks, when I'll have time to do some testing. Does the camera offer close to 3x the FPS of the Kinect (advertised at 30 FPS) as far as depth data goes? In other words, will I have more frames of data per second to analyze? I'm not using the color image at all. I may find it useful if the camera really is significantly faster.

Also, do you know how the conversion between depth space and camera space (real distance) is accomplished in the SDK? Is it fast (I assume it exists)?

Thanks for the response.

MartyG
Honored Contributor III
402 Views

The RealSense Viewer does support 90 FPS, though not at every combination of depth and FPS settings. For example, it can do 90 FPS at 848x480 depth resolution and Y8 IR format. But when selecting higher settings such as 1280x1080 and Y16 IR format, the activation button for the streams ghosts-out to gray and cannot be pressed.

Deprojection takes a 2D pixel location on a stream's images, as well as a depth specified in meters, and maps it to a 3D point location within the stream's associated 3D coordinate space. It is provided by the function rs2_deproject_pixel_to_point.
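For the simple no-distortion case, the math behind that function is plain pinhole-camera geometry, and it is fast (a few multiplications per point). Here is a hedged sketch of that math; the intrinsics values (fx, fy, ppx, ppy) below are illustrative, not real D435 calibration numbers:

```python
def deproject_pixel_to_point(pixel, depth_m, fx, fy, ppx, ppy):
    """Pinhole deprojection: map a 2D pixel plus a depth in meters to a 3D
    point in the camera's coordinate space. Mirrors what
    rs2_deproject_pixel_to_point does for the no-distortion case."""
    u, v = pixel
    x = (u - ppx) / fx * depth_m
    y = (v - ppy) / fy * depth_m
    return (x, y, depth_m)

# A pixel at the principal point maps straight down the optical axis:
print(deproject_pixel_to_point((424, 240), 1.5, 600.0, 600.0, 424.0, 240.0))
# -> (0.0, 0.0, 1.5)
```

In the real SDK the intrinsics come from the stream profile rather than being hard-coded, and lens distortion models add a correction step on top of this.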

ARoth4
Beginner
402 Views

Thanks. Deprojection is the word I was looking for. So, good, I assume it will be comparable to the Kinect.

848x480 is more than the Kinect's depth resolution, so I guess I will get 90 FPS. That's good to know. It seems, on paper, that this camera will be much better.

Hopefully the light source won't be a problem. Can I provide my own light source to "drown out" fluorescent ceiling strip-lights? We are using this in gym environments, so the lighting is something we can't fully control.

MartyG
Honored Contributor III
402 Views

The disruptive effects of lighting tend to be in proportion to how close the camera is to the lighting. The closer a camera is to the light source, the more likely the image is to be flooded with large areas of bright white color. So if the gym has high ceilings and the camera is close to the ground then you may not be affected as badly as if you were using the camera in a room with average height ceilings.

In situations where there is noise on the image, you have a range of options to counter it, including turning off the IR Emitter component to help prevent the IR sensor from becoming saturated with light, and using SDK 2.0's post-processing filters.
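To give a feel for what the spatial side of those post-processing filters does, here is a toy stand-in: a 3x3 median filter that knocks out isolated depth spikes. This is only an illustrative pure-Python sketch, not the SDK's actual (and far more optimized) implementation; edges are left untouched for brevity:

```python
from statistics import median

def median3x3(depth, w, h):
    """Replace each interior pixel of a flat w*h depth buffer with the
    median of its 3x3 neighbourhood; border pixels are passed through."""
    out = list(depth)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neighbours = [depth[(y + dy) * w + (x + dx)]
                          for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y * w + x] = median(neighbours)
    return out

# A lone 4000 mm spike in a flat 1000 mm patch gets replaced by the median.
frame = [1000] * 9
frame[4] = 4000                      # centre pixel of a 3x3 frame
print(median3x3(frame, 3, 3)[4])     # -> 1000
```

The SDK also offers a temporal filter that works across frames, which pairs naturally with the high frame rate discussed earlier in this thread.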
