Hello,
I am working on a RealSense D435 multi-camera setup where I want to render only those 3D points that are close enough to their respective camera.
My current workflow for each camera is:
- Calculate the calibration rotation matrix and translation vector from a circle pattern using OpenCV
- Apply some post-processing filters to reduce aliasing
- Run the align process based on the example
- Calculate the point cloud
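To illustrate how the calibration from the first step is typically used, here is a minimal, self-contained sketch (not taken from the original code; `transform_point`, `Mat3`, and `Vec3` are hypothetical names) of applying the extrinsic rotation R and translation t to bring one camera's points into a common frame, i.e. p' = R·p + t:

```cpp
#include <array>

using Vec3 = std::array<float, 3>;
using Mat3 = std::array<std::array<float, 3>, 3>;

// Hypothetical sketch: apply the extrinsics obtained from the circle-pattern
// calibration (rotation R, translation t) to one point of a camera's cloud,
// mapping it into the shared world frame: p' = R * p + t.
Vec3 transform_point(const Mat3& R, const Vec3& t, const Vec3& p)
{
    Vec3 out{};
    for (int i = 0; i < 3; ++i)
        out[i] = R[i][0] * p[0] + R[i][1] * p[1] + R[i][2] * p[2] + t[i];
    return out;
}
```

In a multi-camera setup this is applied per vertex of each camera's `rs2::points` before the clouds are merged.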
I had an issue with applying post-filtering before align, but I got the answer from another topic. Currently I have a processing_block that does the post-filtering as well as the align and point-cloud calculation.
I also noticed that when I use the aligned video_frame and the aligned depth_frame to generate the point cloud, the result is really strange, so instead I compute the point cloud from the post-filtered depth_frame and use the aligned video_frame for the texture mapping. I don't understand that issue either, but my main problem is that I don't know how to properly collect, during align, the indices of those points that are close enough to the related camera. I noticed that the decimation filter changes the resolution, but I still haven't figured it out.
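As a side note on the distance-clipping goal: librealsense ships an `rs2::threshold_filter` that discards depth samples outside a [min, max] range, which may remove the need to track indices manually. A minimal stdlib-only sketch of what such clipping does to a raw 16-bit depth buffer (the function name and signature here are illustrative, not the library's):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch of distance clipping on a raw depth buffer,
// similar in spirit to what rs2::threshold_filter does.
// depth_scale converts raw units to meters (e.g. 0.001 on the D435).
std::vector<uint16_t> clip_depth(std::vector<uint16_t> depth,
                                 float depth_scale,
                                 float max_dist_m)
{
    for (auto& d : depth) {
        if (d * depth_scale > max_dist_m)
            d = 0; // librealsense treats 0 as "no data", so these pixels drop out
    }
    return depth;
}
```

Clipping the depth frame before `pc.calculate()` means the far points never make it into the cloud, so no separate index list is needed for rendering.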
Here is my code for the align_process:
```cpp
rs2::processing_block align_processing(
    [&](rs2::frameset data,        // Input frameset (from the pipeline)
        rs2::frame_source& source) // Frame pool that can allocate new frames
{
    std::vector<int> point_indicies;
    auto frameset_data = align_frameset(profile, data, point_indicies,
                                        (*filter_config_ptr)->get_process_option("Depth_Clip"));
    // The first element is the aligned video_frame, the second the post-filtered depth_frame
    rs2::frameset aligned_frameset =
        source.allocate_composite_frame({ frameset_data.first, data.get_depth_frame() });
    auto depth_frame = aligned_frameset.get_depth_frame();
    auto rgb_frame   = aligned_frameset.get_color_frame();
    pc.map_to(rgb_frame);
    auto points = pc.calculate(depth_frame);
    data_handler.lock();
    output_d.points    = points;         // rs2::points
    output_d.pois      = point_indicies; // Only these points will be drawn
    output_d.rgb_frame = rgb_frame;
    if (calibration_m.valid) output_d.calibration_m = calibration_m;
    data_handler.unlock();
    source.frame_ready(aligned_frameset);
});
```
In the remove_background() function I store the indices like this:

```cpp
point_indicies.push_back((y / scale) * (width / scale) + (x / scale));
```

where y iterates over the rows, x over the columns, and scale = aligned_depth_frame.get_width() / depth_frame.get_width().
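The index mapping above can be isolated into a small function to make the pitfall visible: the point cloud is built from the decimated depth frame, so a pixel (x, y) from the full-resolution aligned frame has to be divided down by `scale`, and the integer divisions silently floor, which misplaces indices whenever the two widths are not exact multiples. This is a sketch of the formula as described, not the original code:

```cpp
// Hypothetical sketch of the described index mapping: (x, y) are coordinates
// in the full-resolution aligned frame, aligned_width its width, and
// scale = aligned_width / decimated_width. Returns the flat index into the
// decimated depth frame used by pc.calculate(). Beware: every '/' below is
// integer division, so any remainder is discarded.
int decimated_index(int x, int y, int aligned_width, int scale)
{
    int dec_width = aligned_width / scale;
    return (y / scale) * dec_width + (x / scale);
}
```

If the misrendered region in the image corresponds to a consistent offset, checking whether `aligned_width` is an exact multiple of the decimated width would be a first step, since flooring in both the `scale` computation and the per-pixel division compounds the error.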
The result is shown in the attached image: I marked with a red circle the area where the points should not be rendered.
Can someone see where my problem is, or does anyone know a better way to get those indices from the point cloud?
Thanks for your support!
Hello spektro,
Intel relies on the community to help with this type of algorithm implementation question. Unfortunately, it is out of our support scope.
However, this sample may help you find a solution: https://github.com/IntelRealSense/librealsense/tree/master/wrappers/opencv/grabcuts
If you find a solution, please share your results with the community in this thread.
Thank you and best regards,
Eliza