
Why can't I find the PXCMImage.ImageData.ToBitmap function in Unity platform (The Intel RealSense SDK 2.0)?

HHe6
Beginner
3,634 Views

I could apply this function in the corresponding C# version, but when I came to the same SDK in Unity, it turned out to be inaccessible. What I am trying to do is save the depth image data as a bitmap for further use. The pictures below show the SDK library references and the script interface. I'll be very grateful for any help with this.

0 Kudos

21 Replies
MartyG
Honored Contributor III
1,577 Views

This instruction is used in the 2016 R2 / 2016 R3 Windows SDKs but unfortunately not in the current RealSense SDK 2.0, as they are totally different architectures. SDK 2.0 is based on the open-source Librealsense SDK.

Are you streaming the camera's data onto a texture in Unity? To save a texture as a bitmap, a workaround may be to save the texture as a PNG using the Unity instruction EncodeToPNG in the link below.

https://docs.unity3d.com/ScriptReference/ImageConversion.EncodeToPNG.html Unity - Scripting API: ImageConversion.EncodeToPNG
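
A minimal sketch of what I mean is below, assuming the stream is already being drawn onto a readable Texture2D (the depthTexture and savePath names here are just placeholders):

using System.IO;
using UnityEngine;

public class SaveTextureExample : MonoBehaviour
{
    // Assumed to be the Texture2D that the RealSense stream is rendered onto
    public Texture2D depthTexture;

    public void SaveAsPng(string savePath)
    {
        // EncodeToPNG reads the CPU-side pixel data and returns the PNG bytes
        byte[] pngBytes = depthTexture.EncodeToPNG();
        File.WriteAllBytes(savePath, pngBytes);
    }
}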

HHe6
Beginner
1,413 Views

Thank you so much Marty! Now I get it. I had already streamed the depth data onto a texture in Unity a few days ago with the R200, in the way you described, but the result was different from what I expected, as shown in the following picture. I could not visually make out the depth distribution of the scene, so I thought maybe the EncodeToPNG function had changed the structure of the depth data. That is why I tried to capture the depth data right at the beginning, when it was generated from the stream, rather than after converting it to a texture object, which is why I asked about the ToBitmap function. Is it a problem with the device, or the wrong settings for the output format? Waiting for your kind reply, best regards!

0 Kudos
MartyG
Honored Contributor III
1,413 Views

Are you streaming a point cloud onto the texture, please?

0 Kudos
HHe6
Beginner
1,413 Views

Here is my code. Actually, I tried to stream the depth image onto the texture and save it to disk at the same time.

//Depth
PXCMImage.ImageInfo depthImgInfo = new PXCMImage.ImageInfo
{
    width = depthWidth,
    height = depthHeight,
    format = PXCMImage.PixelFormat.PIXEL_FORMAT_RGB32
};

PXCMImage depthImg = psm.session.CreateImage(depthImgInfo);
depthImg = sample.depth;

//Texture Depth
if (depthImg != null)
{
    if (depthImageTexture2D == null)
    {
        depthImageTexture2D = new Texture2D(depthWidth, depthHeight, TextureFormat.ARGB32, false);
        DepthMaterial.mainTexture = depthImageTexture2D;
        DepthMaterial.mainTextureScale = new Vector2(-1, -1);
    }
}

PXCMImage.ImageData depthImageData;
depthImg.AcquireAccess(PXCMImage.Access.ACCESS_READ, out depthImageData);
depthImageData.ToTexture2D(0, depthImageTexture2D);

byte[] depth_bytes = depthImageTexture2D.EncodeToJPG();
if (!depthWritten)
{
    depthWritten = true;
    File.WriteAllBytes("C:/Users/Ho Ho/Desktop/Gaze_robotic/Assets/ImageProcessing/" + "DepthImage" + Time.time.ToString() + ".jpg", depth_bytes);
}

depthImg.ReleaseAccess(depthImageData);
depthImageTexture2D.Apply();

The result was like this. And there was another weird problem: I had set the width and height parameters for both the color and depth streams, but the output color image resolution always remained 640*480.

[Header("Color Settings")]

public int colorWidth = 1920;

public int colorHeight = 1080;

public int colorFrameRate = 30;

public Material RGBMaterial;

public bool RGBWritten=false;

[Header("Depth Settings")]

public int depthWidth = 628;

public int depthHeight = 468;

public int depthFrameRate = 30;

public Material DepthMaterial;

public bool depthWritten = false;

//RGB

PXCMImage.ImageInfo colorImgInfo = new PXCMImage.ImageInfo

{

width = colorWidth,

height = colorHeight,

format = PXCMImage.PixelFormat.PIXEL_FORMAT_RGB32

};

PXCMImage colorImg = psm.session.CreateImage(colorImgInfo);

colorImg = sample.color;

//Depth

PXCMImage.ImageInfo depthImgInfo = new PXCMImage.ImageInfo

{

width = depthWidth,

height = depthHeight,

format = PXCMImage.PixelFormat.PIXEL_FORMAT_RGB32

};

0 Kudos
MartyG
Honored Contributor III
1,413 Views

Are you using the above script in the 2016 R2 SDK with an R200 camera and using the EncodeToPNG instruction to save the texture, please?

0 Kudos
HHe6
Beginner
1,413 Views
0 Kudos
MartyG
Honored Contributor III
1,413 Views

The R200 is four-year-old technology and unfortunately does not have the capabilities of the current range of RealSense cameras. The chart below shows the limitations of the available depth resolutions on the R200.

In regard to the depth image quality you are getting: a discussion from 2015 is linked to below, where the user was getting a similar depth image to yours with an R200.

https://software.intel.com/en-us/forums/realsense/topic/603666

Also please bear in mind that the R200 has a maximum depth sensing range of 4 meters.
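
On the color resolution staying at 640*480: as far as I know, creating a PXCMImage from an ImageInfo only describes the image object you allocate - it does not change the capture resolution. In the 2016 R2 SDK the resolution is normally requested when the stream is enabled on the SenseManager, roughly like the sketch below (assuming psm is your PXCMSenseManager, and the request only takes effect if the camera actually supports that mode):

// Request the stream modes before psm.Init() / StreamFrames()
psm.EnableStream(PXCMCapture.StreamType.STREAM_TYPE_COLOR, colorWidth, colorHeight, colorFrameRate);
psm.EnableStream(PXCMCapture.StreamType.STREAM_TYPE_DEPTH, depthWidth, depthHeight, depthFrameRate);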

0 Kudos
HHe6
Beginner
1,413 Views

Thank you so much Marty! I guess the R200 does have some shortcomings, so I have purchased a new D435 camera, and it's on its way. May I ask when the SDK 2.0 documentation will be released? Or is there any recommended document I can read before the official one comes out? Thanks again for your kind help. This is my first time posting a discussion in English, so please bear with my English. Best regards.

0 Kudos
MartyG
Honored Contributor III
1,413 Views

RealSense SDK 2.0's documentation has been available since the launch of the 400 Series and is continually added to. You can find it at the link below. It uses a different format, based on the GitHub developer website, instead of the format that the 2016 R2 documentation used.

https://github.com/IntelRealSense/librealsense/tree/master/doc librealsense/doc at master · IntelRealSense/librealsense · GitHub

For looking up specific instructions, I recommend the site below, which draws from the official documentation files and organizes the information into an easily searchable format.

https://unanancyowen.github.io/librealsense2_apireference/ Intel® RealSense™ Cross Platform API: Main Page

Your English is excellent. I am always happy to discuss in native languages though if it is easier for the user.

0 Kudos
HHe6
Beginner
1,413 Views

Thanks buddy! Now I can dig into the websites you recommended. I think I will come back soon haha!

0 Kudos
MartyG
Honored Contributor III
1,413 Views

You are very welcome. Please return to the forum any time. Good luck!

0 Kudos
HHe6
Beginner
1,413 Views

Hey Marty! Here I go again! Just like the old problem mentioned above, I am trying to save the streaming textures into images for further use, and now I am using the D435. The situation is that the texture display function is provided for both the depth and color streams by the "RsStreamTextureRenderer" script in SDK 2.0 for Unity, and it works successfully, as shown in the picture below.

I am trying to use an external DIY script to extract the textures from the RawImage components rather than modifying the streaming script itself.

0 Kudos
HHe6
Beginner
1,413 Views

Sorry! I accidentally hit "SEND". Let's continue the question.

With the method I described above, I could not get the exact images displayed on the RawImage; everything came out completely black. I think the problem is the texture conversion. What do you think about it? Waiting for your kind reply. Thank you so much!
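
In case it helps, the extraction is something along these lines (a simplified sketch rather than my exact code; rawImage and savePath are placeholders). The idea is to blit the RawImage's texture into a temporary RenderTexture and read it back into a readable Texture2D before encoding:

// Grab the texture currently bound to the RawImage
Texture src = rawImage.texture;

// Make a CPU-readable copy via a temporary RenderTexture
RenderTexture rt = RenderTexture.GetTemporary(src.width, src.height, 0);
Graphics.Blit(src, rt);
RenderTexture previous = RenderTexture.active;
RenderTexture.active = rt;

Texture2D readable = new Texture2D(src.width, src.height, TextureFormat.RGBA32, false);
readable.ReadPixels(new Rect(0, 0, src.width, src.height), 0, 0);
readable.Apply();

RenderTexture.active = previous;
RenderTexture.ReleaseTemporary(rt);

File.WriteAllBytes(savePath, readable.EncodeToPNG());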

0 Kudos
MartyG
Honored Contributor III
1,413 Views

You will have trouble with depth-capturing a black cable, because black or grey objects can absorb the laser light. This is not a problem with RealSense specifically - because it is related to physics, it can happen to any depth-sensing camera. So in areas of the image that are black or grey, you can end up with large black-colored holes in the image, like the cable-shaped hole in your picture.

Dark objects may be able to be captured if you coat them in a spray of a fine spray-on powder such as foot powder or baby powder.

Regarding the missing detail on the rest of your image: does the room that the camera is being used in have fluorescent-type lights such as ceiling strip lights? Unlike bulb lights, these can flicker at frequencies hard to see with the human eye and cause noise in the image. If you have these lights, you can reduce the disruption by setting your stream to a resolution that supports 60 FPS, as this means the camera should be streaming at a frequency closer to the frequency of the fluorescent lights.
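
As a rough illustration of requesting a 60 FPS depth profile outside of the Unity wrapper (in the wrapper itself the profile is, as far as I know, chosen on the RsDevice component in the Inspector), a sketch using the Intel.RealSense C# bindings would look something like this; 848x480 is one of the D435 depth modes that supports 60 FPS:

using Intel.RealSense;

var cfg = new Config();
// Ask for the depth stream at 848x480 and 60 FPS
cfg.EnableStream(Stream.Depth, 848, 480, Format.Z16, 60);

var pipe = new Pipeline();
pipe.Start(cfg);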

Another cause of a large amount of white in an image can be the presence of a strong light source nearby. This could include a strong light (any type of light) on the ceiling above where you are scanning, or a desk lamp.

0 Kudos
HHe6
Beginner
1,413 Views

Thank you very much Marty, the advice is really helpful!

0 Kudos
HHe6
Beginner
1,413 Views

But the trouble I have is related to the conversion from texture to image. The "RsStreamTextureRenderer" script in Unity updates and binds the real-time stream to a RawImage component in the UI whenever a new frame arrives, using the texture.LoadRawTextureData() method. What I am trying to do is save the texture to an image file. The texture looked like this.

I used the EncodeToPNG() method to do the conversion.

if(_stream.ToString()=="Depth" && frameCount%30==0)

File.WriteAllBytes("C:/Users/Ho Ho/Desktop/Gaze_robotic/Assets/ImageProcessing/" + "DepthImage" + Time.time.ToString() + ".png",texture.EncodeToPNG());

else if (_stream.ToString() == "Color" && frameCount%30 == 0)

File.WriteAllBytes("C:/Users/Ho Ho/Desktop/Gaze_robotic/Assets/ImageProcessing/" + "ColorImage" + Time.time.ToString() + ".png",texture.EncodeToPNG());

I could successfully get the color image, but the depth image turned out to be filled with noise. I am really confused about this. Thanks for your kind help, Marty!

0 Kudos
MartyG
Honored Contributor III
1,413 Views

It's hard to say what is happening. If I were writing your script, though, I would make it something like this, separating the creation of the color and depth PNG files into two separate parts instead of using an else statement. This should ensure that both files get created, with filenames starting with ColorImage and DepthImage.

if(_stream.ToString()=="Depth" && frameCount%30==0)

{

File.WriteAllBytes("C:/Users/Ho Ho/Desktop/Gaze_robotic/Assets/ImageProcessing/" + "DepthImage" + Time.time.ToString() + ".png",texture.EncodeToPNG());

}

If (_stream.ToString() == "Color" && frameCount%30 == 0)

{

File.WriteAllBytes("C:/Users/Ho Ho/Desktop/Gaze_robotic/Assets/ImageProcessing/" + "ColorImage" + Time.time.ToString() + ".png",texture.EncodeToPNG());

}

Edit: looking back over the script, it still seems as though it is only going to save one stream at a time, because the _stream variable can only be Color or Depth, not Color and Depth at the same time. So if _stream = Depth, then it would have to change to _stream = Color before the if statement that creates the color stream can activate. It may be easier just to drop the if statements entirely.

Ignore the script below if you meant for only color or only depth to be saved.

You could make activation dependent on a true / false bool instead.

if (SaveImage == true)
{
    // Only allow the instruction to run once by making the bool false
    SaveImage = false;

    File.WriteAllBytes("C:/Users/Ho Ho/Desktop/Gaze_robotic/Assets/ImageProcessing/" + "DepthImage" + Time.time.ToString() + ".png", texture.EncodeToPNG());
    File.WriteAllBytes("C:/Users/Ho Ho/Desktop/Gaze_robotic/Assets/ImageProcessing/" + "ColorImage" + Time.time.ToString() + ".png", texture.EncodeToPNG());
}

0 Kudos
MartyG
Honored Contributor III
1,413 Views

Also, I am not sure if your code is saving the first frame of each stream. If it is, it is good practice to let the stream run for several frames before doing a capture. This gives the camera's exposure enough time to settle down after the start of a stream.
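
As a small sketch of what I mean (framesSeen and the update hook name are just illustrative), you could skip the first second or so of frames before allowing any save:

private int framesSeen = 0;

// Called wherever the new frame / texture update arrives (hypothetical hook)
void OnNewFrame()
{
    framesSeen++;

    // Give the auto-exposure roughly a second (at 30 FPS) to settle before capturing
    if (framesSeen < 30)
        return;

    // ...saving code goes here...
}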

0 Kudos
HHe6
Beginner
1,413 Views

Hey Marty, thanks for your patience! I have tried all the possible solutions, including the tips you just proposed, but the results stayed the same. The depth image is always a noisy one!

0 Kudos
MartyG
Honored Contributor III
1,339 Views

I have looked very hard at this, and I am wondering if the EncodeToPNG instruction is saving the data in an RGB format that is suited to color textures but not to depth-based textures.
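
If that is the cause, one possible workaround (just an untested sketch, assuming you can get hold of the raw DepthFrame from the wrapper's frame callback, and that the camera is using the default 1 mm depth units; depthFrame, savePath and the 4 meter cutoff are illustrative) would be to scale the 16-bit depth values down to an 8-bit grayscale image yourself before encoding, instead of encoding the streaming texture directly:

// Copy the raw 16-bit depth values out of the librealsense frame
ushort[] depthData = new ushort[depthFrame.Width * depthFrame.Height];
depthFrame.CopyTo(depthData);

// Map a chosen range (here 0 - 4 meters) onto 0 - 255 grayscale
const float maxDistanceMeters = 4.0f;
float scale = 255f / (maxDistanceMeters * 1000f);   // assumes default 1 mm depth units

Color32[] pixels = new Color32[depthData.Length];
for (int i = 0; i < depthData.Length; i++)
{
    byte v = (byte)Mathf.Min(255f, depthData[i] * scale);
    pixels[i] = new Color32(v, v, v, 255);
}

Texture2D gray = new Texture2D(depthFrame.Width, depthFrame.Height, TextureFormat.RGBA32, false);
gray.SetPixels32(pixels);
gray.Apply();

File.WriteAllBytes(savePath, gray.EncodeToPNG());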

0 Kudos