I’m attempting to calibrate a projector using mapamok, a new OpenFrameworks/OpenCV tool, but I’m having trouble, and I think this question applies generally to anyone doing projector calibration.
My process is as follows:
Load a model of my physical scene into mapamok and perform the calibration routine to obtain an estimate of the transformation matrix (GL_MODELVIEW) and the camera’s intrinsic field of view (i.e. the outputs of OpenCV’s calibrateCamera function).
In TouchDesigner, load the same model. Use the Matrix CHOP parameter of a Geometry COMP node to transform it with the transformation matrix derived in step 1. Set the horizontal FOV of my Camera COMP to the value derived in step 1.
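As an aside on step 2 (my sketch, not from the original post): if the intrinsics come back as an OpenCV camera matrix, the horizontal FOV for a Camera COMP follows from the standard pinhole relation; the focal length and resolution below are made-up example values.

```python
import math

# Hypothetical values for illustration: fx is the focal length in pixels
# (from the OpenCV camera matrix), width is the projector's horizontal
# resolution in pixels.
fx = 1500.0
width = 1280

# Pinhole relation: half the image width subtends half the FOV at the camera.
fov_x_deg = math.degrees(2.0 * math.atan2(width / 2.0, fx))
print(round(fov_x_deg, 2))
```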
I would expect the rendered scene to match my physical scene, but it doesn’t. The orientation of the model is correct, but it appears to be off by some translation and/or scale.
I believe the screen resolutions match. Are there other camera intrinsics that I’m neglecting? Or other parameters that have to be taken into account when rendering the scene from a different computer?
Any hunches? People who’ve had success with projector calibration?
It was an intrinsic parameter that I was neglecting: the principal point.
I don’t know of a way to set the principal point of a camera in TouchDesigner without writing a custom shader, so instead I set Mapamok to use the default principal point (w/2, h/2) and not solve for it. This fixed the discrepancy.
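For anyone reproducing this outside mapamok, a sketch of what “use the default principal point and don’t solve for it” means in OpenCV terms (the resolution and focal-length guess are made-up values): you supply an initial camera matrix with cx, cy at the image centre and pass the flags CALIB_USE_INTRINSIC_GUESS | CALIB_FIX_PRINCIPAL_POINT to calibrateCamera so the solver refines focal length but leaves the principal point alone.

```python
import numpy as np

# Illustrative sketch (not mapamok source): an intrinsic guess with the
# principal point fixed at the image centre (w/2, h/2). In OpenCV this would
# be passed as the initial camera matrix to calibrateCamera together with
# flags=CALIB_USE_INTRINSIC_GUESS | CALIB_FIX_PRINCIPAL_POINT.
w, h = 1280, 800          # projector resolution (example values)
f_guess = 1500.0          # rough focal-length guess in pixels (example value)

K = np.array([
    [f_guess, 0.0,     w / 2.0],   # fx, skew, cx
    [0.0,     f_guess, h / 2.0],   #     fy,  cy
    [0.0,     0.0,     1.0],
])
print(K)
```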
Check out Mapamok for your projector calibration, and maybe we’ll have a TouchDesigner version of it soon.
This is great! I’ve always wanted to modify Kyle’s ProCamToolkit to send the final calibration data via OSC to Touch, but never found the time. Did you get mapamok to compile on Windows? Are you sending calibration data via OSC or just manually transferring the values?
Does this mean you’re working on getting the calibration part ported to Touch?
Has anyone else had any luck bringing mapamok into TouchDesigner? I have the output from mapamok below. You can feed the rotation matrix and translation matrix into the Matrix CHOP parameter of the Geometry COMP, and use the FOV for the camera. That isn’t quite enough, though: I think some rotations still need to happen on the camera, but I can’t figure out what my camera rotations should be to finish calibrating the projection.
Well, I have had some success bringing this information into Touch. As originally suggested in this post, I used the rotation and translation matrix as input to the Matrix CHOP parameter on the Geometry COMP, and used the FOV to set the camera’s horizontal field of view. One trick: I had to rotate the camera by 180 degrees on the x-axis, which I’m guessing is because of differences in the coordinate systems between the two.
My current struggle: instead of moving the geometry, I want to leave the geometry at the origin and move the camera instead (to make it easier to add additional cameras in the future). I thought I would need to take the negative of the transpose of the rotation matrix to accomplish this, but no luck so far…
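For what it’s worth, the standard math here (my note, not from the thread): if the extrinsic maps world to camera as x_cam = R·x + t, its inverse is R⁻¹ = Rᵀ with translation −Rᵀ·t. Negating or transposing the rotation alone isn’t enough; the translation has to be rotated back too. A minimal numpy sketch with an arbitrary example rotation:

```python
import numpy as np

# Sketch: inverting a rigid extrinsic [R | t] so the camera moves instead of
# the geometry. The inverse pose is R_inv = R.T and t_inv = -R.T @ t.
def invert_extrinsic(R, t):
    R_inv = R.T
    t_inv = -R.T @ t
    return R_inv, t_inv

# Quick round-trip check with an arbitrary rotation about z plus a translation.
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 2.0, 3.0])

R_inv, t_inv = invert_extrinsic(R, t)
p_world = np.array([0.5, -0.2, 4.0])
p_cam = R @ p_world + t
assert np.allclose(R_inv @ p_cam + t_inv, p_world)  # maps back to world space
```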
I’m trying to follow the process explained here and getting a bit stuck when I get to exporting the matrix data from mapamok to a Matrix CHOP in Touch. Do I find the matrix data in the calibration-advanced.yml file? I’m definitely stretching my knowledge at this point, as this is the first time I’ve used a matrix…
With the new ‘Custom Projection’ parameter, is this process now simpler? What info do you put in the Custom Projection parameter?
I tried this tonight and wasn’t able to get it to work. I used a GLSL MAT and added the standard shader. I wasn’t able to drag it into the Custom Projection field, so I typed it in. I also tried it in the ‘Display/Material’ field. I get an ‘Invalid Source’ warning on the Camera COMP and the following in the GLSL MAT:
Program Link Results:
0(56) : error C1013: function "TDCamToProj" is already defined at 0(66)
0(60) : error C1013: function "TDSOPToProj" is already defined at 0(68)
You should specify only a DAT in the Custom Projection parameter, not a GLSL MAT. The DAT should contain those two functions (you can have other functions and uniforms as needed; you can pass uniforms to it using the GLSL page of the Render TOP). I’ve updated the help to avoid this confusion.
Here’s how I managed to get it to work without transforming the geo, only the camera.
Take the rotation matrix from the yml file.
Pre-multiply it by the following axis-conversion matrix:
1 0 0
0 -1 0
0 0 -1
Transpose the resulting matrix; you thus obtain the inverted rotation matrix. Let’s call it i_rot.
Take the translation vector from the yml file; let’s call it trans.
Compute the scalar multiplication -1 × trans.
Let’s call the result snart.
Pre-multiply snart by i_rot; let’s call the result i_trans.
Append i_trans to i_rot as a fourth column.
Append 0 0 0 1 as a fourth row to the result.
Transpose this, and you’ve got your extrinsic parameters. Put it in a Matrix CHOP in the Pre-Xform tab of the camera.
For the intrinsic parameters (the Matrix CHOP/DAT in the View tab):
Compute the following 4x4 matrix:
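The matrix itself appears to have been posted as an attachment and isn’t reproduced here. For reference, a commonly used construction of an OpenGL-style projection matrix from OpenCV intrinsics is sketched below; this is my assumption about what the poster computed, not the exact attachment, and sign conventions vary between tools.

```python
import numpy as np

# Assumed reconstruction (the original 4x4 was a missing attachment):
# a standard mapping from OpenCV intrinsics (fx, fy, cx, cy in pixels)
# to an OpenGL-style projection matrix. Treat as a sketch; conventions
# differ between tools.
def projection_from_intrinsics(fx, fy, cx, cy, w, h, near, far):
    return np.array([
        [2*fx/w, 0.0,      1 - 2*cx/w,             0.0],
        [0.0,    2*fy/h,   2*cy/h - 1,             0.0],
        [0.0,    0.0,     -(far+near)/(far-near), -2*far*near/(far-near)],
        [0.0,    0.0,     -1.0,                    0.0],
    ])

# Example values only; with the principal point at the image centre the
# off-axis terms in the first two rows vanish.
P = projection_from_intrinsics(1500.0, 1500.0, 640.0, 400.0, 1280, 800, 0.1, 100.0)
print(P)
```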
There’s a little problem with the example I uploaded in my previous post: as with the rotation matrix, we need to pre-multiply the translation vector by the cv_gl matrix.
This time I think it should be OK. mapamok_calibration.tox (1.88 KB)
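Putting the whole recipe together with this correction applied, a numpy sketch might look like the following; the rotation and translation values stand in for the contents of the yml file.

```python
import numpy as np

# Sketch of the recipe above, with the correction applied: both the rotation
# matrix and the translation vector from the yml file get pre-multiplied by
# the OpenCV -> OpenGL axis-conversion matrix before inverting. The rot and
# trans values below are placeholders for the yml contents.
cv_gl = np.diag([1.0, -1.0, -1.0])     # axis conversion (flip y and z)

rot = np.eye(3)                         # rotation matrix from the yml file
trans = np.array([0.5, 1.0, 2.0])       # translation vector from the yml file

i_rot = (cv_gl @ rot).T                 # inverted rotation
snart = -1.0 * (cv_gl @ trans)          # negated, axis-converted translation
i_trans = i_rot @ snart                 # inverted translation

ext = np.eye(4)                         # assemble the 4x4 extrinsic
ext[:3, :3] = i_rot
ext[:3, 3] = i_trans
ext = ext.T                             # final transpose for the Matrix CHOP
print(ext)
```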