
Multiple Camera Calibration (D415) using opencv findHomography

EMuel
Novice

Hi,

I have three D415 cameras and want to calibrate them so that I can later get aligned point clouds from their streams.

I managed to get OpenCV running on my Windows 10 machine and can compute a homography between a given source image and the stream's color frame.

So far so good. But now I'm stuck.

 

How can I recover the rotation and translation from the findHomography result?

This is the result using SURF feature detection with OpenCV. I'm computing

cv::Mat H = findHomography(object_to_search_for, frame_scene, cv::RANSAC);

on each frame in the RealSense Viewer, and then:

// Set up a virtual camera (the focal length here is a guess,
// not the camera's real color intrinsics)
float f = 100, w = 1280, h = 740;
cv::Mat1f K = (cv::Mat1f(3, 3) <<
    f, 0, w / 2,
    0, f, h / 2,
    0, 0, 1);

// decomposeHomographyMat returns up to four candidate solutions
std::vector<cv::Mat> Rs, Ts;
cv::decomposeHomographyMat(H, K, Rs, Ts, cv::noArray());

.....

int cnt = 0;
for (const auto& rsp : Rs) {
    // buffert is a char buffer and plotMat a print helper, both defined elsewhere
    sprintf_s(buffert, "\nRotation ");
    OutputDebugStringA(buffert);
    plotMat(rsp);
    cnt++;
}

......

with the following output:

# serial: 822512060464

# H: 0.942819 -0.348376 616.138989 0.124825 0.729206 338.550335 0.000230 -0.000437 1.000000

Rotation -0.304252 -0.800240 0.516765 -0.828867 0.489754 0.270407 -0.469478 -0.346058 -0.812302

Rotation -0.304252 -0.800240 0.516765 -0.828867 0.489754 0.270407 -0.469478 -0.346058 -0.812302

Rotation 0.993840 -0.105665 -0.033412 0.107171 0.993127 0.047044 0.028212 -0.050335 0.998334

Rotation 0.993840 -0.105665 -0.033412 0.107171 0.993127 0.047044 0.028212 -0.050335 0.998334

# serial: 823112060695

# H: 0.992691 -0.006116 432.967415 0.004298 1.004756 457.600143 -0.000017 0.000006 1.000000

Rotation 0.149869 -0.909308 0.388198 -0.906438 0.030438 0.421240 -0.394853 -0.415008 -0.819670

Rotation 0.149869 -0.909308 0.388198 -0.906438 0.030438 0.421240 -0.394853 -0.415008 -0.819670

Rotation 0.999950 -0.009834 0.001653 0.009835 0.999951 -0.000620 -0.001647 0.000636 0.999998

Rotation 0.999950 -0.009834 0.001653 0.009835 0.999951 -0.000620 -0.001647 0.000636 0.999998

# serial: 821212061501

# H: 0.552248 -0.172998 340.165420 -0.056285 0.763573 316.373767 -0.000248 -0.000196 1.000000

Rotation 0.608867 -0.680016 0.408485 -0.675527 -0.174516 0.716384 -0.415865 -0.712125 -0.565627

Rotation 0.608867 -0.680016 0.408485 -0.675527 -0.174516 0.716384 -0.415865 -0.712125 -0.565627

Rotation 0.991481 -0.127028 0.028791 0.126124 0.991520 0.031326 -0.032526 -0.027428 0.999094

Rotation 0.991481 -0.127028 0.028791 0.126124 0.991520 0.031326 -0.032526 -0.027428 0.999094
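
A side note on interpreting this output: cv::decomposeHomographyMat returns up to four mathematically valid (R, t, n) candidates, which is why the rotations above come in identical pairs, and the translation it recovers is only defined up to scale. The sketch below is a non-authoritative example, assuming the real color intrinsics are read from a running librealsense pipeline (the names intrinsicsToK and feasibleSolutions are illustrative, not from any SDK). It shows how one could build K from the stream profile instead of guessing f = 100, and how cv::filterHomographyDecompByVisibleRefpoints can narrow the four candidates to the physically plausible ones:

#include <vector>
#include <opencv2/calib3d.hpp>
#include <librealsense2/rs.hpp>

// Build the camera matrix from the actual color-stream intrinsics
// (assumes `profile` comes from a started rs2::pipeline).
cv::Mat1d intrinsicsToK(const rs2::pipeline_profile& profile)
{
    rs2_intrinsics i = profile.get_stream(RS2_STREAM_COLOR)
                              .as<rs2::video_stream_profile>()
                              .get_intrinsics();
    return (cv::Mat1d(3, 3) <<
        i.fx, 0.0,  i.ppx,
        0.0,  i.fy, i.ppy,
        0.0,  0.0,  1.0);
}

// Return the indices of the decompositions that keep all matched
// reference points in front of the camera. beforePts/afterPts are
// the same matched keypoints that were passed to findHomography.
std::vector<int> feasibleSolutions(const cv::Mat& H, const cv::Mat& K,
                                   const std::vector<cv::Point2f>& beforePts,
                                   const std::vector<cv::Point2f>& afterPts)
{
    std::vector<cv::Mat> Rs, Ts, Ns;
    cv::decomposeHomographyMat(H, K, Rs, Ts, Ns);
    cv::Mat idx;
    cv::filterHomographyDecompByVisibleRefpoints(Rs, Ns, beforePts, afterPts, idx);
    return std::vector<int>(idx.begin<int>(), idx.end<int>());
}

This usually leaves one or two candidates; knowing the rough orientation of the reference plane lets you pick between them.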

I am quite new to matrix math, so I'm somewhat lost.

Maybe someone can help me get started?

Thanks

1 Solution
MartyG
Honored Contributor III

You do not need to write your own tool to calibrate the cameras, unless camera calibration needs to be built into the application you are writing. The 400 Series cameras have a Dynamic Calibration Tool that can be used to calibrate each camera individually.

https://downloadcenter.intel.com/download/27955/Intel-RealSense-D400-Series-Calibration-Tools-and-API Download Intel® RealSense™ D400 Series Calibration Tools and API

If you do need your own calibration tool, Vicalib may be an option for you. Intel themselves use it with the 400 Series cameras.

https://github.com/arpg/vicalib GitHub - arpg/vicalib: Visual-Inertial Calibration Tool

Regarding multiple-camera 3D point cloud alignment, calibration, and software tools: Vicalib can be used. "It uses a board that you show to each of the cameras in turn, and it establishes overlapping regions to then minimize the pose of each of those together".

Regarding aligning multiple point clouds together, Intel stated in a recent webinar about multiple cameras: "Vicalib can do this, but there is a simpler approach, which will work in 90% of cases. This is to take the point cloud from every one of the cameras and then do an Affine Transform. Basically, just rotate and move the point clouds in 3D space, and then once you've done that, you append the point clouds together and just have one large point cloud".
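
As a minimal sketch of that rotate-move-append idea, assuming the per-camera 3x3 rotation R and translation t come from your own calibration step (the Point3 struct and the appendTransformed name are illustrative, not part of any SDK):

#include <array>
#include <vector>
#include <librealsense2/rs.hpp>

// A 3D point in the shared world frame.
struct Point3 { float x, y, z; };

// Apply a rigid transform (row-major 3x3 rotation R, translation t)
// to every vertex of one camera's cloud and append it to the merged cloud.
void appendTransformed(std::vector<Point3>& merged,
                       const rs2::points& cloud,
                       const std::array<float, 9>& R,
                       const std::array<float, 3>& t)
{
    const rs2::vertex* v = cloud.get_vertices();
    for (size_t i = 0; i < cloud.size(); ++i) {
        if (v[i].z == 0.0f) continue; // zero depth means no data
        merged.push_back({
            R[0] * v[i].x + R[1] * v[i].y + R[2] * v[i].z + t[0],
            R[3] * v[i].x + R[4] * v[i].y + R[5] * v[i].z + t[1],
            R[6] * v[i].x + R[7] * v[i].y + R[8] * v[i].z + t[2] });
    }
}

One camera would be treated as the world origin (identity R, zero t), with the other two clouds transformed into its frame before being appended.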
