Kinect2 - Projector Calibration

Note: This Component is now part of the Palette and can be found under Tools>KinectCalibration

Admittedly a bit late to the party, but hopefully helpful anyway: I finally finished this years-old project and packed it into a component. With TOPs now readable directly into numpy arrays, this is a pretty clean solution:
A component that lets you calibrate your projector to your Kinect so that you can reproject the skeleton or point cloud onto the physical scene your Kinect camera is looking at.

In essence I’m following step by step what Elliot Woods did here (7 years ago) :laughing:

What do you need:

  • a Kinect2
  • a projector
  • a board of some kind that you can project onto (foam core or cardboard)

Step by Step instructions:

  • place the kinect2 and your projector
  • on the Kinect Projector Calibration Parameter Page click “Open” to see the control screen
  • select the output Monitor which should be your projector
  • check that the Projector Resolution in the parameters matches the actual projector resolution
  • Click the “Open Checkerboard” parameter to project a checkerboard
  • use a piece of cardboard or similar and make sure the checkerboard lands on it. You can adjust the size, position and brightness of the grid - double-check that the Kinect's color camera can see the grid
  • click “Get Point Pair” to collect the checkerboard corners
  • repeat with different poses of the board for a few times (try 5-10 different positions)
  • If the parameter “Pointpairs Collected” has a sufficient number, click “Calibrate”
  • If everything ran ok, click “Open Pointcloud” to reproject the point cloud onto your scene.
  • You can also turn off the pointcloud and instead project the Skeleton onto somebody entering the scene

The whole thing acts as a camera, so you can just place it into your scene and render from the viewpoint of your projector, positioned relative to your Kinect camera.
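To act as a render camera, the calibration result has to be turned into a projection matrix at some point. A minimal sketch of one common convention for converting OpenCV-style intrinsics into an OpenGL-style projection matrix (this is not necessarily what the component does internally; the fx/fy/cx/cy values and clip planes below are made-up examples):

```python
import numpy as np

def intrinsics_to_projection(fx, fy, cx, cy, w, h, near=0.1, far=100.0):
    """Convert OpenCV intrinsics to a GL-style projection matrix.

    One common sign convention; details vary between renderers.
    """
    return np.array([
        [2 * fx / w, 0,          1 - 2 * cx / w,               0],
        [0,          2 * fy / h, 2 * cy / h - 1,               0],
        [0,          0,          -(far + near) / (far - near),
                                 -2 * far * near / (far - near)],
        [0,          0,          -1,                           0],
    ])

# example values: a centered 1500px focal length at 1920x1080
P = intrinsics_to_projection(1500.0, 1500.0, 960.0, 540.0, 1920, 1080)
print(P)
```

With a perfectly centered principal point the off-axis terms in the third column come out as zero; a calibrated projector will usually have a noticeable vertical offset there because of lens shift.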

The background is quite simple:
In general it’s the same idea as camSchnappr: give openCV’s calibrateCamera a collection of point pairs: 2D points in screen space and 3D points in world space.
The 2D points are collected from the projected checkerboard itself, while the 3D points are collected from the Kinect point cloud. To get those 3D points, we run the openCV findChessboardCorners function on the color camera image and then use the returned uv positions to look up the points in the point cloud.
In the component itself, all functionality is contained in the Calibrate Extension.

It worked well in the cases I have tested here… Let me know what should be improved.


Version 0.5 posted fixing an issue for Non-Commercial Licenses related to reduced resolution of the Kinect Camera.

Version 0.6 posted fixing an issue for build 099.2018.27550 and greater where dumping the resulting matrix into a table had rows and columns switched.


Thnx for this.
I am very interested in trying it out. However:

  1. If I load the tox I get a version conflict warning. I use 2018.25850 and it tells me
    I should use 2018.26390. I can't find that version, also not in the experimental builds
  2. If I load the tox into my version, the kinectCalibration CHOP shows an error (no idea
    how to copy the text into this mail)
  3. If I run the calibration anyhow, I can execute the first steps (open the checkerboard, scale it).
    But when hitting the Get Point Pair button, TD crashes
  4. Don't know how to change the position of the grid. Scale and Brightness are ok.

greetings knut

Hi Knut,

sorry for the version conflict, but we just released a new build to download today.
What build were you trying this with, and what was the error the Kinect CHOP was showing? (It doesn't have to be the exact error text - just enough to get an idea of where it might have come from.)


hi markus
the build I used was 25850, the newest from td 99.
I am away from my machine now…so details on
the error text tomorrow…sorry
greetings knut

Hey Markus

  1. This morning with TD 2018.25850 and your .tox the CHOP error message disappeared. Strange.
    I had my PC switched off over night…the only thing that changed (fingers crossed)
    Rest is unchanged: after hitting Get Point Pair, TD closes. I can't find a crash file…
  2. After that I installed 2018.26450.
    No errors from the CHOP but same behaviour: TD silently closes after hitting the Get Point Pair
    button (of course I did the other steps first: projected the checkerboard on a white surface)

I am on windows 10
GPU GTX 1080, Driver: 397.93

greetings knut

…and markus
please let me know what I can do in addition.
I am very much interested in getting this going in
my environment.

Hi Knut,

and I suspect no dmp file created?
Maybe we can trace the error with this script:

import numpy as np
import cv2

# number of inner corners of the projected checkerboard -
# adjust these to match your grid
gridResX, gridResY = 7, 5

# capture a TOP into a numpy array (values come in as 0-1 floats)
img = op('camColor').numpyArray()*255

# remove the alpha channel
img = img[:,:,:3]

# convert to 8 bit
img = img.astype(np.uint8)

# convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)

# find corners
ret, corners = cv2.findChessboardCorners(gray, (gridResX, gridResY), None)

If this closes TouchDesigner, we should comment out lines starting from the bottom.
If it runs we can go further and check if the cornerSubPix or calibrateCamera functions are causing this behavior.


so you want me to put this in a Text DAT and then execute
that Text DAT…correct?

no dmp file created…I searched for touchdesigner99*

yup, if you could create a TOP (can be a constant TOP) and call it camColor and then paste the script into a Text DAT and run it.

any chance of a more interactive session within the next hour?
it’s a 30 min drive for me back to the office.
otherwise I would do it tomorrow morning.


I found a problem using the checkerboard.
Green dots appear only on the left side of the screen.

Is there a way to fix it?

Thank you.


Just wondering if this is in Non-Commercial?
If so, can you try reducing your projector resolution to 1280x720.
Can’t test it right now so mainly guessing…


Yes I am using non-commercial tools.
Even though the resolution is reduced, it does not match exactly.

Thank you so much for releasing this. I have been trying to figure out a better way than importing the calibration from RoomAlive for a while now. This is a much cleaner solution.


I posted an updated version (viewtopic.php?f=22&t=12895#p49372) that fixes the issue with offset grid points when running the Kinect at a lower resolution (mainly when trying this with the Non-Commercial version).

The issue was that I had a fixed offset and orthographic camera width that assumed a camera resolution of 1920x1080.
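A sketch of that kind of fix in general terms (the function name and offset values here are illustrative, not the component's actual code): any pixel constants tuned for a 1920x1080 color camera get scaled by the resolution actually delivered:

```python
# reference resolution the original constants were tuned for
REF_W, REF_H = 1920, 1080

def scale_offsets(offset_x, offset_y, cam_w, cam_h):
    """Scale pixel offsets tuned for 1920x1080 to another resolution."""
    return offset_x * cam_w / REF_W, offset_y * cam_h / REF_H

# e.g. a Non-Commercial build delivering the color camera at 1280x720
x, y = scale_offsets(64, 36, 1280, 720)
print(x, y)
```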




Hi Markus
during the last week I did a lot of calibrations in different locations using your .tox.
Everything worked fine: it is stable, the results are reproducible, the UI is good.
Thnx a lot, great job!
I use this camera for the Kinect directly but also for the Vive tracker as an alternative to
camSchnappr, if I don't have a 3D model of the object.

I will now invest some time to understand what needs to be done in order to get the best results possible using this checkerboard approach with the Kinect.
The quality of the results is not easy to judge today, because this can only be done by projecting
the skeleton on a person and then visually inspecting the image (as far as I know).
I would expect that there is somewhere inside openCV some kind of value that describes the quality or precision of the parameter estimation.
Is this true? Do you plan to make this value available through the UI?
Could you give me a starting point if I wanted to do that on my own?
thnx for your help

Hi Knut,

calibrateCamera returns a value that should indicate how precise the calibration is.
If you go to the DAT called Calibrate and find the function of the same name:

def Calibrate(self):
	fov = 180
	pWidth = int(op('monitors1')[parent.Kinect.par.Monitor+1,'width'])
	pHeight = int(op('monitors1')[parent.Kinect.par.Monitor+1,'height'])
	size = (pWidth,pHeight)
	ret, mtx, dist, rvecs, tvecs = self.calibrateCamera(self.objPoints, self.imgPoints)
	rot, jacob = cv2.Rodrigues(rvecs[0],None)

	extrinsic = self.returnExt(rot, tvecs[0])
	intrinsic = self.returnIntrinsics(mtx, size)

the “ret” should be this value, so if you add:

parent.Kinect.par.Message = 'Calibration Error: {0}'.format(ret)

it should output it to the little Message field on the parameters.

def Calibrate(self):
	fov = 180
	pWidth = int(op('monitors1')[parent.Kinect.par.Monitor+1,'width'])
	pHeight = int(op('monitors1')[parent.Kinect.par.Monitor+1,'height'])
	size = (pWidth,pHeight)
	ret, mtx, dist, rvecs, tvecs = self.calibrateCamera(self.objPoints, self.imgPoints)
	rot, jacob = cv2.Rodrigues(rvecs[0],None)

	extrinsic = self.returnExt(rot, tvecs[0])
	intrinsic = self.returnIntrinsics(mtx, size)

	parent.Kinect.par.Message = 'Calibration Error: {0}'.format(ret)

Will add this in to a later release!

HI Markus
thnx. That just works. I will now play around for a while to find out what this error means in
my environment and if this number allows me to find precise results faster…something
like a criterion to stop calibration, a little more accurate than your advice to collect something like 10-12 point pairs.
Greetings knut