Note: This Component is now part of the Palette and can be found under Tools>KinectCalibration
Admittedly a bit late to the party, but hopefully helpful anyway: I finally finished this years-old project and packed it into a component. Now that TOPs can be read directly into numpy arrays, this is a pretty clean solution:
A component that lets you calibrate your projector to your Kinect so that you can reproject the skeleton or pointcloud onto the physical scene you are looking at with your Kinect camera.
In essence, I'm following step by step what Elliot Woods did here: youtube.com/watch?v=llQM-OGsETQ (7 years ago)
What you need:
- a Kinect2
- a projector
- a board of some kind that you can project onto (foam core or cardboard)
Step by Step instructions:
- place the Kinect2 and your projector
- on the Kinect Projector Calibration Parameter Page click “Open” to see the control screen
- select the output Monitor which should be your projector
- check that the Projector Resolution in the parameters matches the actual projector resolution
- Click the “Open Checkerboard” parameter to project a checkerboard
- use a piece of cardboard or similar and make sure the checkerboard lands on it. You can adjust the size, position, and brightness of the grid. Double-check that the Kinect's color camera can see the grid
- click “Get Point Pair” to collect the checkerboard corners
- repeat a few times with different poses of the board (try 5-10 different positions)
- If the parameter “Pointpairs Collected” has a sufficient number, click “Calibrate”
- If everything ran ok, click “Open Pointcloud” to reproject the point cloud onto your scene.
- You can also turn off the pointcloud and instead project the Skeleton onto somebody entering the scene
The whole thing acts as a camera, so you can just place it into your scene and render from the viewpoint of your projector, positioned relative to your Kinect camera.
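Rendering from the projector's viewpoint means turning calibration results into a projection matrix for the renderer. As a rough illustration (one common convention, not necessarily what the component does internally), here is a hypothetical helper mapping OpenCV-style pinhole intrinsics (fx, fy, cx, cy) to an OpenGL-style projection matrix:

```python
import numpy as np

def intrinsics_to_projection(fx, fy, cx, cy, width, height,
                             near=0.1, far=100.0):
    """Map OpenCV pinhole intrinsics to an OpenGL-style 4x4
    projection matrix (one common convention; sign and axis
    conventions vary between pipelines)."""
    return np.array([
        [2 * fx / width, 0.0, 1.0 - 2 * cx / width, 0.0],
        [0.0, 2 * fy / height, 2 * cy / height - 1.0, 0.0],
        [0.0, 0.0, -(far + near) / (far - near),
         -2 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ])
```

A point straight ahead of the camera then lands at the center of the image when the principal point sits at the image center.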
The background is quite simple:
In general it's the same idea as camSchnappr: give the OpenCV calibrateCamera function a collection of point pairs: 2D points in screen space and 3D points in world space.
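The point-pair idea can be sketched without OpenCV. A Direct Linear Transform, used here as a simplified stand-in for what calibrateCamera does (it ignores lens distortion and doesn't decompose into intrinsics/extrinsics), solves for a 3x4 projection matrix from matched 2D/3D points:

```python
import numpy as np

def dlt(points_3d, points_2d):
    """Estimate a 3x4 projection matrix P such that
    [u, v, 1]^T ~ P @ [X, Y, Z, 1]^T, via SVD on the stacked
    linear constraints. Needs at least 6 non-degenerate
    (non-coplanar) correspondences."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # The solution is the right singular vector with the
    # smallest singular value, reshaped to 3x4 (defined up to scale).
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 4)
```

This is why the board has to be moved between captures: all the 3D points from a single pose lie on one plane, which is a degenerate configuration for the solver.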
The 2D points are collected from the projected checkerboard itself, while the 3D points are collected from the Kinect pointcloud. To get those 3D points, we run the OpenCV findChessboardCorners function on the color camera image and then use the returned UV positions to look up the corresponding points in the pointcloud.
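That lookup is essentially indexing into the pointcloud image. A minimal sketch, assuming the pointcloud arrives as an (H, W, 3) numpy array aligned with the color camera (the exact shapes and alignment here are assumptions, not the component's actual code):

```python
import numpy as np

def lookup_points(pointcloud, corners_uv):
    """Fetch 3D points for sub-pixel corner positions by rounding
    to the nearest pixel. `pointcloud` is an (H, W, 3) array of
    XYZ values aligned to the color image; `corners_uv` is an
    (N, 2) array of (u, v) pixel coordinates."""
    h, w, _ = pointcloud.shape
    uv = np.rint(corners_uv).astype(int)
    u = np.clip(uv[:, 0], 0, w - 1)  # column index
    v = np.clip(uv[:, 1], 0, h - 1)  # row index
    return pointcloud[v, u]          # (N, 3) world-space points
```

In practice you would also want to reject corners whose depth sample is invalid (zeros or NaNs where the Kinect has no depth), since a single bad pair can throw off the calibration.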
In the component itself, all functionality is contained in the Calibrate Extension.
It has worked well in the cases I have tested here… Let me know what should be improved.
Version 0.5 posted, fixing an issue for Non-Commercial licenses related to the reduced resolution of the Kinect camera.
Version 0.6 posted, fixing an issue for builds 099.2018.27550 and greater, where dumping the resulting matrix into a table had rows and columns switched.