Multi-camera alignment using OpenCV - help

Hi there,
I'm trying to align two point clouds from Azure Kinect (k4a) devices.

I'm running charuco_detection on the color cameras, and I'm able to build the board pose matrix in camera space (the "Final 4x4 transform" in the output below).

So far I'm failing to use this data to align the two cameras.
Given cameras A and B, one of my attempts was to apply, on camera B's point cloud:

(camera A transform) * inverse(camera B transform) * point
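
For clarity, here is that composition as a minimal NumPy sketch (untested; `T_a` and `T_b` stand in for the two "Final 4x4 transform" matrices printed in the output below):

```python
import numpy as np

# Board-to-camera poses (placeholders; in practice these are the
# "Final 4x4 transform" matrices from the output below).
T_a = np.eye(4)
T_b = np.eye(4)

# inverse(T_b) takes a camera-B point into board space,
# then T_a takes it from board space into camera-A space.
b_to_a = T_a @ np.linalg.inv(T_b)

p_b = np.array([0.0, 0.0, 1.0, 1.0])  # homogeneous point from camera B's cloud
p_a = b_to_a @ p_b
```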

uniform mat4 cam01_transform;   // board pose in camera B space
uniform mat4 cam00_transform;   // board pose in camera A space
uniform mat4 cv2gl;             // OpenCV -> OpenGL flip (currently unused)

out vec4 fragColor;
void main()
{
    // The input texture holds XYZ positions, not colors; force w = 1.0
    // so the translation part of the 4x4 transforms actually applies.
    vec4 pos = vec4(texture(sTD2DInputs[0], vUV.st).xyz, 1.0);

    // Map camera-B points into camera-A space: B -> board -> A.
    pos = cam00_transform * inverse(cam01_transform) * pos;
    fragColor = TDOutputSwizzle(vec4(pos.xyz, 1.0));
}

Any tips and hints are welcome.

Project file:

Output:

/////// Get and Calculate camera transform //////////////////
Found 4 k4a connected devices:

Current device serial number and ID
Serial: 000137392012
ID: 3

Camera 3x3 matrix (focal and principal)
[1943.6014, 0, 2042.2634;
0, 1943.634, 1558.2002;
0, 0, 1]

Distortion coefficients
[0.56407541;
-2.8563795;
0.00034544262;
9.3485214e-06;
1.7019695;
0.44322005;
-2.6765807;
1.6240268]

Number of markerCorners coordinates: 17
translate (tvec) vector generated from estimatePoseCharucoBoard
[-0.321135, 0.023058, 1.421785]

rotation (rvec) vector generated from estimatePoseCharucoBoard
[-1.629482, -2.033616, 0.512786]

Rodrigues rotation matrix output + transpose
[-0.1750317663359425, 0.9753690835116966, 0.1342349868807432;
0.7951029246634684, 0.2204397962637846, -0.5649934826303578;
-0.5806679085027165, 0.007838823427953889, -0.8141027778372619]

transform cv 2 Opengl
[1, 0, 0, 0;
0, -1, 0, 0;
0, 0, -1, 0;
0, 0, 0, 1]

Final 4x4 transform from Opencv (3x3 transposed Rodrigues matrix + tvec)
[-0.1750317663359425, 0.9753690835116966, 0.1342349868807432, -0.3211353392539955;
0.7951029246634684, 0.2204397962637846, -0.5649934826303578, 0.02305835302761156;
-0.5806679085027165, 0.007838823427953889, -0.8141027778372619, 1.421785033369236;
0, 0, 0, 1]

CV to Opengl matrix * transform4x4 (optional)
[-0.1750317663359425, 0.9753690835116966, 0.1342349868807432, -0.3211353392539955;
-0.7951029246634684, -0.2204397962637846, 0.5649934826303578, -0.02305835302761156;
0.5806679085027165, -0.007838823427953889, 0.8141027778372619, -1.421785033369236;
0, 0, 0, 1]
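
To make the log above easier to follow, this is roughly how the printed matrices are assembled (a sketch using the rvec/tvec of device ID 3; note the `.T`, which the update further down identifies as a mistake):

```python
import cv2
import numpy as np

# rvec/tvec as printed above for device ID 3
rvec = np.array([-1.629482, -2.033616, 0.512786])
tvec = np.array([-0.321135, 0.023058, 1.421785])

R, _ = cv2.Rodrigues(rvec)   # 3x3 rotation matrix from the rotation vector

T = np.eye(4)
T[:3, :3] = R.T              # transposed here, as in the log (see the update below)
T[:3, 3] = tvec              # "Final 4x4 transform"

# Optional OpenCV -> OpenGL flip (negate y and z):
cv2gl = np.diag([1.0, -1.0, -1.0, 1.0])
T_gl = cv2gl @ T             # "CV to Opengl matrix * transform4x4"
```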

/////// Get and Calculate camera transform //////////////////
Found 4 k4a connected devices:

Current device serial number and ID
Serial: 000071692012
ID: 1

Camera 3x3 matrix (focal and principal)
[1940.4982, 0, 2043.9741;
0, 1940.4558, 1572.3068;
0, 0, 1]

Distortion coefficients
[0.44659063;
-2.823009;
0.00058650092;
9.9364013e-05;
1.7742159;
0.32623905;
-2.6353929;
1.6881952]

Number of markerCorners coordinates: 17
translate (tvec) vector generated from estimatePoseCharucoBoard
[-0.259144, -0.038194, 1.264992]

rotation (rvec) vector generated from estimatePoseCharucoBoard
[2.358736, 1.123323, 1.093566]

Rodrigues rotation matrix output + transpose
[0.4017498951636057, 0.7625208839299694, 0.5071083940408503;
0.527392747049635, -0.6453658146018424, 0.5525937528625711;
0.7486346986985521, 0.04544080670453149, -0.6614236320170073]

transform cv 2 Opengl
[1, 0, 0, 0;
0, -1, 0, 0;
0, 0, -1, 0;
0, 0, 0, 1]

Final 4x4 transform from Opencv (3x3 transposed Rodrigues matrix + tvec)
[0.4017498951636057, 0.7625208839299694, 0.5071083940408503, -0.2591442373942539;
0.527392747049635, -0.6453658146018424, 0.5525937528625711, -0.03819378849616589;
0.7486346986985521, 0.04544080670453149, -0.6614236320170073, 1.264991775861966;
0, 0, 0, 1]

CV to Opengl matrix * transform4x4 (optional)
[0.4017498951636057, 0.7625208839299694, 0.5071083940408503, -0.2591442373942539;
-0.527392747049635, 0.6453658146018424, -0.5525937528625711, 0.03819378849616589;
-0.7486346986985521, -0.04544080670453149, 0.6614236320170073, -1.264991775861966;
0, 0, 0, 1]

Hi,
Just an update.
I had made two major mistakes:
1. No need to transpose the Rodrigues rotation matrix.
2. To move from the TD coordinate system to the OpenCV one, it's possible to multiply the x and y axes of the point cloud by -1 (see the sketch below).
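
Put together, the corrected pipeline looks roughly like this (a sketch, untested):

```python
import cv2
import numpy as np

def board_pose(rvec, tvec):
    """4x4 board-to-camera pose from estimatePoseCharucoBoard output."""
    R, _ = cv2.Rodrigues(np.asarray(rvec, dtype=np.float64))
    T = np.eye(4)
    T[:3, :3] = R          # mistake 1 fixed: no transpose
    T[:3, 3] = tvec
    return T

# mistake 2: negate x and y to move between TD and OpenCV coordinates
td2cv = np.diag([-1.0, -1.0, 1.0, 1.0])

# e.g. for homogeneous camera-B points (4xN array, hypothetical):
# aligned = board_pose(rvec_a, tvec_a) @ np.linalg.inv(board_pose(rvec_b, tvec_b)) @ td2cv @ points
```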


Hi,

Thanks for sharing your files / progress, it’s very helpful!

I'm going down the same route and having a hard time wrapping my head around this. I've calibrated / registered 2 Kinects with charuco_detection as well, using the Python script within the SDK. This is the output:

Translation camera B to A:
[[ 0.94879255]
 [ 0.03614175]
 [-0.25404676]]
Rotation camera B to A:
[[ 0.96204389 -0.21654443  0.16607249]
 [ 0.19428802  0.97084434  0.14040455]
 [-0.19163436 -0.10280944  0.97606685]]
Find forward reprojection error (camera B to camera A).
RMS (Radians): 0.0018772057324453067
Find reverse reprojection error (camera A to camera B).
RMS (Radians): 0.1212630650488486
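
If it helps, packing that output into a single 4x4 would look something like this (my understanding, assuming the printed Rt maps camera-B coordinates into camera-A coordinates):

```python
import numpy as np

# Rotation and translation "camera B to A" as printed above
R = np.array([[ 0.96204389, -0.21654443,  0.16607249],
              [ 0.19428802,  0.97084434,  0.14040455],
              [-0.19163436, -0.10280944,  0.97606685]])
t = np.array([ 0.94879255,  0.03614175, -0.25404676])

b_to_a = np.eye(4)
b_to_a[:3, :3] = R
b_to_a[:3, 3] = t

p_b = np.array([0.0, 0.0, 1.0, 1.0])   # homogeneous point in camera B space
p_a = b_to_a @ p_b
```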

And the calibration_blob.json it produces:

{
    "CalibrationInformation": {
        "Cameras": [
            {
                "Intrinsics": {
                    "ModelParameterCount": 14,
                    "ModelParameters": [
                        0.4925431105367309,
                        0.5257006593745118,
                        0.5331877100713321,
                        0.7120542714543393,
                        -1.6367332282402776,
                        -1.1950454163742352,
                        3.639862495467776,
                        -1.8423448134872138,
                        -0.5809292136294578,
                        3.055925373303548,
                        0,
                        0,
                        -0.005154362946103922,
                        0.00943870557293672
                    ],
                    "ModelType": "CALIBRATION_LensDistortionModelBrownConrady"
                },
                "Rt": {
                    "Rotation": [
                        1,
                        0,
                        0,
                        0,
                        1,
                        0,
                        0,
                        0,
                        1
                    ],
                    "Translation": [
                        0,
                        0,
                        0
                    ]
                },
                "SensorHeight": [
                    3072
                ],
                "SensorWidth": [
                    4096
                ]
            },
            {
                "Intrinsics": {
                    "ModelParameterCount": 14,
                    "ModelParameters": [
                        0.4925431105367309,
                        0.5257006593745118,
                        0.5331877100713321,
                        0.7120542714543393,
                        -1.6367332282402776,
                        -1.1950454163742352,
                        3.639862495467776,
                        -1.8423448134872138,
                        -0.5809292136294578,
                        3.055925373303548,
                        0,
                        0,
                        -0.005154362946103922,
                        0.00943870557293672
                    ],
                    "ModelType": "CALIBRATION_LensDistortionModelBrownConrady"
                },
                "Rt": {
                    "Rotation": [
                        0.9620438864646617,
                        -0.2165444281210149,
                        0.1660724876844805,
                        0.19428801724577305,
                        0.9708443381307449,
                        0.1404045493357756,
                        -0.19163435722927002,
                        -0.10280944396901803,
                        0.9760668477927656
                    ],
                    "Translation": [
                        0.9487925512323472,
                        0.03614174754439743,
                        -0.25404675859688797
                    ]
                },
                "SensorHeight": [
                    3072
                ],
                "SensorWidth": [
                    4096
                ]
            }
        ]
    }
}
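
For what it's worth, here's how I'd pull the second camera's Rt back out of that blob (a sketch, assuming the layout shown above):

```python
import json
import numpy as np

# Read the second camera's extrinsics from calibration_blob.json
# (the first camera's Rt is identity, as shown above).
with open("calibration_blob.json") as f:
    blob = json.load(f)

rt = blob["CalibrationInformation"]["Cameras"][1]["Rt"]
T = np.eye(4)
T[:3, :3] = np.array(rt["Rotation"]).reshape(3, 3)  # row-major 3x3
T[:3, 3] = rt["Translation"]
```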

Now I “just” need to apply the transformation to my cams in TD but I can’t figure out how to do that exactly… :thinking: