Hello, I am fairly new to the RealSense API. I have a RealSense D415. How do I capture an RGB image and the corresponding point cloud (in PCD format) from a stream and store them? Is there any existing code for this? I am running Ubuntu 17.10.
The RealSense SDK 2.0 software used with the 400 Series cameras comes with a sample point cloud program called 'pointcloud'. It is in the 'Intel RealSense SDK 2.0 > Samples' folder of the SDK. The source code is available here:
https://github.com/IntelRealSense/librealsense/tree/master/examples/pointcloud
The 'capture' sample program in the same folder, meanwhile, shows how to render RGB and depth to the same screen. You may be able to adapt this code to only render the RGB part to the screen.
https://github.com/IntelRealSense/librealsense/tree/master/examples/capture
Thank you, I have seen those. I was wondering how I could save the RGB image and its corresponding point cloud. I don't have any experience with the RealSense API, so any help would be appreciated.
Another thing: how do I get this in the organized PCD point cloud format?
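For reference on the organized-PCD part of the question: the ASCII PCD v0.7 format is simple enough to write by hand. An organized cloud is one whose WIDTH and HEIGHT match the image dimensions (rather than HEIGHT 1), with pixels that have no valid depth stored as NaN so the grid stays intact. A generic sketch, not RealSense-specific:

```cpp
#include <cstddef>
#include <fstream>
#include <string>
#include <vector>

struct PointXYZ { float x, y, z; };

// Write an organized ASCII PCD v0.7 file. 'points' must be in row-major
// (image) order and contain exactly width * height entries; pixels with no
// valid depth should hold NaN to keep the cloud organized.
bool write_organized_pcd(const std::string& path,
                         const std::vector<PointXYZ>& points,
                         int width, int height) {
    if (points.size() != static_cast<std::size_t>(width) * height) return false;
    std::ofstream out(path);
    if (!out) return false;
    out << "# .PCD v0.7 - Point Cloud Data file format\n"
           "VERSION 0.7\n"
           "FIELDS x y z\n"
           "SIZE 4 4 4\n"
           "TYPE F F F\n"
           "COUNT 1 1 1\n"
        << "WIDTH " << width << "\n"
        << "HEIGHT " << height << "\n"
        << "VIEWPOINT 0 0 0 1 0 0 0\n"
        << "POINTS " << width * height << "\n"
        << "DATA ascii\n";
    for (const PointXYZ& p : points)
        out << p.x << " " << p.y << " " << p.z << "\n";
    return true;
}
```

Feeding it the per-pixel points from the depth frame, in row order, yields a cloud that PCL treats as organized because HEIGHT is greater than 1.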
The ability to save the point cloud to a MeshLab-compatible PLY file directly from pyrealsense was added recently.
https://github.com/IntelRealSense/librealsense/blob/development/wrappers/python/examples/export_ply_example.py
https://software.intel.com/en-us/articles/using-librealsense-and-pcl-to-create-point-cloud-data#introduction
So I basically need an equivalent of this for the SDK2. I was using an SR300 before and that was working with this code. Is there a migration guide or some way I can implement this for the D415?
I am not aware of a migration guide from Librealsense 1 (the original Librealsense) to Librealsense 2 (also known as RealSense SDK 2.0), and my research did not turn one up either, unfortunately. Converting the old Librealsense 1 script to SDK 2.0 may be difficult enough that the better option is to adapt the 'RealSense Viewer' program that comes with the RealSense SDK 2.0 to your own needs, since SDK 2.0 is open source. I suggest this because the RealSense Viewer can display point cloud data in real time or export it to a file.
https://github.com/IntelRealSense/librealsense/tree/master/tools/realsense-viewer
Is there any way I can use the D415 with Librealsense 1 instead?
The legacy Librealsense 1 only supports the F200, SR300, R200 and ZR300, unfortunately.
During my research though, I did find information about aligning RGB and depth to a point cloud in SDK 2.0 by using the SDK in combination with ROS.
https://github.com/intel-ros/realsense/tree/development#rgbd-point-cloud
MartyG, thank you, that is helpful. I will look into that. If you come across some other way, please update.
I was able to get it working and doing what I want, but I am unable to control the resolution of the streams. Is there a routine I could call to do that?
Info on setting the resolution in SDK 2.0 was very hard to find. A script in the link below seems to provide the answer.
https://github.com/IntelRealSense/librealsense/blob/master/doc/stepbystep/getting_started_with_openCV.md
A quote from that script:
// Construct a pipeline which abstracts the device
rs2::pipeline pipe;
// Create a configuration for configuring the pipeline with a non-default profile
rs2::config cfg;
// Add desired streams to configuration
cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_BGR8, 30);
// Instruct pipeline to start streaming with the requested configuration
pipe.start(cfg);
Substitute 640 and 480 with the resolution you need (e.g. 1280 and 720 to get 1280x720), and change 30 to a frame rate that the camera supports at that resolution (e.g. 15, 30 or 60).
Seems very hard to find anything to do with SDK 2 xD. Anyway, I tried this before but got an error saying:
"RealSense error calling rs2_pipeline_start_with_config(pipe:0x232ba00, config:0x232e9e0):
Failed to resolve request. No device found that satisfies all requirements"
SDK 2.0 is a continuously evolving product. To quote George Carlin at the end of Bill And Ted's Excellent Adventure in relation to Bill And Ted's terrible guitar playing: "They do get better".
Regarding your error, I could only find one other case so far in which that problem occurred. In that case, the user had the camera plugged into a USB 2.0 port instead of a USB 3.0 port, which the camera requires. Can you confirm please that your camera is in a USB 3.0 port?
It was connected to a USB 3.0 port, but the plug was apparently a bit loose. Fixing it helped. As ever, you have been helpful. Thank you!
Awesome! I'm glad we could find a solution. Have a great day!
The precision of the depth (z) values seems to be on the order of 1e-03 m, whereas it is about 1e-09 m for the x and y values.
For example, all points look similar to this:
-0.033532809 -0.27685541 0.72100002
-0.032393653 -0.27685541 0.72100002
-0.031254496 -0.27685541 0.72100002
-0.03011534 -0.27685541 0.72100002
-0.028976185 -0.27685541 0.72100002
-0.027837032 -0.27685541 0.72100002
-0.026697876 -0.27685541 0.72100002
OR
0.059508596 -0.19579296 0.51200002
0.060317542 -0.19579296 0.51200002
0.061126482 -0.19579296 0.51200002
0.061814461 -0.19541056 0.51100004
0.062499277 -0.19502816 0.51000005
0.063305058 -0.19502816 0.51000005
0.063985132 -0.19464573 0.509
0.064662047 -0.19426332 0.50800002
0.065335803 -0.19388093 0.50700003
0.065875947 -0.1931161 0.505
0.066541806 -0.19273369 0.50400001
0.067204498 -0.19235128 0.50300002
0.067864038 -0.19196889 0.50200003
0.068520412 -0.19158648 0.50100005
0.069311984 -0.19158648 0.50100005
0.069963604 -0.19120406 0.5
0.070753589 -0.19120406 0.5
0.07168667 -0.19158648 0.50100005
Any idea why?
I don't know why for sure. Assuming that the precision of the camera is consistent for both the RGB sensor and the depth imagers, maybe the difference is due to a difference in alignment between the RGB and depth streams. An Intel support agent can probably answer that question better than I can, as they have access to resources that I do not.