Kinect and projector calibration.

Hello everyone,
I am a student, and I am working on a project that involves projecting live interactive visuals onto dancers and a screen behind them. I am using a Kinect v2 for this, but my problem is that I cannot get the Kinect's feed to line up with the projector so that the masking of the dancers is precise.
To get a better idea you can watch this video - [url]https://www.youtube.com/watch?v=_woNBiIyOKI[/url]

Does anyone have any idea how to get this in sync with the minimum amount of setup? I'll only have twenty minutes to calibrate before the performance.

I cannot stress the urgency enough; I have to get this working within two to three days.

Any input is welcome.

Thanking you in advance,
Vinay

Hi Vinay,

As described in this post, you would want to calculate the relative position between the projector and the Kinect camera.

While there is no premade implementation in TouchDesigner, all the tools are there. In particular, there is the tdu.calibrateCamera() method, which can help you with this.

For the calibrateCamera() method to be useful, you need to collect point pairs. One member of a pair is a 2D pixel position, which you already know since it is what you send through the projector. The other member is the corresponding 3D position, which you have to acquire via the Kinect depth camera. As it might be hard to find the projected pixel's position via the infrared camera, you can use the Kinect TOP's Camera Remap parameter to remap the depth camera view into the color camera's space and resolution.
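
To make that concrete, here is a minimal sketch of what the data collection boils down to. The argument order and return format of tdu.calibrateCamera() are assumptions on my part, so double check them against the documentation before using this:

[code]
# Sketch only - the argument order and return format of tdu.calibrateCamera()
# are assumed here, check the docs before relying on it.

# 2D projector pixels you projected (you know these, they are your own patterns)
imgPts = [(120, 340), (800, 200), (1500, 900), (300, 850)]

# matching 3D positions in metres, read from the Kinect depth camera
# (remapped to color camera space via the Kinect TOP's Camera Remap parameter)
objPts = [(0.12, 0.34, 1.80), (0.55, 0.10, 2.10),
          (-0.30, 0.48, 1.95), (-0.05, -0.20, 2.30)]

projRes = (1920, 1080)   # resolution of the projector output

result = tdu.calibrateCamera(imgPts, objPts, projRes)
print(result)            # inspect the intrinsics/extrinsics and hand them to projMat
[/code]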

The biggest problem is perhaps gathering the point positions in the Kinect color camera. There are many ways to go about this. One is using Gray code patterns, which give you the mapping of each projector pixel onto the 3D surface. From this you can pick an arbitrary number of points and retrieve their pixel and 3D positions from the color camera and the depth camera mapping.
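
To illustrate the principle, generating and decoding the stripe patterns is only a few lines. A rough numpy sketch with my own function names:

[code]
import numpy as np

def gray_code_patterns(width, height):
    """Vertical stripe patterns encoding the projector column index as Gray code.
    Project these one after another (repeat with rows for the full 2D mapping)."""
    n_bits = int(np.ceil(np.log2(width)))
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                        # binary -> Gray code
    patterns = []
    for bit in range(n_bits - 1, -1, -1):            # most significant bit first
        stripe = ((gray >> bit) & 1).astype(np.uint8) * 255
        patterns.append(np.tile(stripe, (height, 1)))
    return patterns

def decode_column(bits):
    """bits: the 0/1 sequence seen by one camera pixel, most significant bit first.
    Returns the projector column that lit this camera pixel."""
    gray = 0
    for b in bits:
        gray = (gray << 1) | b
    binary = gray
    while gray:                                      # Gray code -> binary
        gray >>= 1
        binary ^= gray
    return binary
[/code]

Each camera pixel's decoded bit sequence then tells you which projector column (and row) hit it, which is exactly the mapping you need to build the point pairs above.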

For running the point pairs through the calibrateCamera() method, refer to the CamSchnappr tool in the palette.
In there, go to camSchnappr/camSchnappr/secondScreen and find a script called solver. The script passes its values over to the projMat Base COMP, which converts them into a projection matrix and transform parameters for the camera simulating the projector position.
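
One detail that trips people up: camera calibration conventionally returns extrinsics that map world points into the projector's space, while the Camera COMP wants the projector's position in the world, which is the inverse. A small numpy sketch of that step, with placeholder values standing in for the calibration output:

[code]
import numpy as np

# R, t: rotation and translation of the projector extrinsics (world -> projector),
# placeholder values standing in for the calibration result.
R = np.eye(3)
t = np.array([0.0, 0.0, 0.0])

cam_pos = -R.T @ t     # translate parameters for the Camera COMP
cam_orient = R.T       # 3x3 orientation; convert to Euler angles in the
                       # rotation order the Camera COMP uses (xyz by default)
[/code]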

This might be a bit complex with lots of moving parts, but I believe it is doable. Attached is one part of the puzzle: a structured light file for generating and decoding Gray code patterns.
structuredLight.11.toe (7.36 KB)

There is a system from Microsoft designed for exactly this, called the RoomAlive SDK. It generates an XML file that gives you everything you need about the Kinect and the projector extrinsics. However, if you only have a few days and are not well versed in projection geometry, you may want to hire someone on this forum to help you out.

All it needs is someone to parse the XML file to get the camera and projector matrices. It has been on my list for a while, but I have not had time to get to it.
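
In case it helps anyone get started, here is a rough sketch of the parsing side in Python. I only match on local tag names because the file is written by the .NET DataContract serializer with namespaces, and the column-major ordering and the 'pose' tag name are guesses you should verify against your own file:

[code]
import xml.etree.ElementTree as ET
import numpy as np

def local(tag):
    # the file carries DataContract namespaces, so match on bare tag names
    return tag.split('}')[-1]

def read_matrix(elem, shape):
    # collect every <double> leaf under this node, in document order;
    # the matrices appear to be stored column by column, hence order='F' -
    # verify against the values the calibration tool prints out
    vals = [float(e.text) for e in elem.iter() if local(e.tag) == 'double']
    return np.array(vals).reshape(shape, order='F')

root = ET.parse('calibration3.xml').getroot()

for node in root.iter():
    if local(node.tag) == 'ProjectorCameraEnsemble.Projector':
        children = {local(c.tag): c for c in node}
        K = read_matrix(children['cameraMatrix'], (3, 3))  # projector intrinsics
        pose = read_matrix(children['pose'], (4, 4))       # projector pose (4x4, if I read the file right)
        print('intrinsics:\n', K)
        print('pose:\n', pose)
[/code]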

But it will take care of all of the gray code work and should give a very nice mapping solution.

Best of luck to you…

Ah - nice pointer Matt!

Cheers
Markus

Best bet would be not to double post, so that everyone contributes in one place.

Hey Folks,

I'm trying to play around with the Kinect, the RoomAlive SDK, and projectors. I've got the software running, and it produces a perfect result in its own software. I'm having a few issues bringing the camera matrix into Touch. I've attached the XML file for the system I've set up. The pose all looks correct. I'm trying to project the OBJ that the system gives back onto the real-world object.

It’s the section:

<ProjectorCameraEnsemble.Projector>
  <cameraMatrix>

That I’m trying to turn into a projection matrix.

I’m doing this as a bit of a lesson in all things matrix, touch, and projectors. It’s a steep learning curve but very interesting.
calibration3.xml (8.03 KB)

I'm at the same point as you, tallscott. I compiled the RoomAlive tools and got my calibration XML output. I'm just scratching my head over what to do with the results.

I’m guessing the <ProjectorCameraEnsemble.Projector> children are most relevant to us. Can we just plug these values into a table and use that as the custom projection matrix DAT for the camera comp?

What's the next step if you just want to see your Kinect depth image projected back into your room, aligned to the objects in the space? Is the Camera COMP even the right tool to warp the 2D depth image to the projector's point of view?
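
Something like this is what I mean by the table route; completely untested, and 'table1' plus the row layout (row-major vs column-major) are guesses on my part:

[code]
# untested sketch: dump a 4x4 matrix into a Table DAT that the Camera COMP
# could reference; whether it wants rows or columns first still needs checking
def matrix_to_table(mat, dat_name='table1'):
    dat = op(dat_name)
    dat.clear()
    for row in mat:
        dat.appendRow([float(v) for v in row])
[/code]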

I’ll have a play…

C.

Thank you all for your answers, and sorry for the late reply. Considering the amount of time I had left, I decided to drop the idea of using the Kinect TOP for masking the dancers, as I thought it would be simpler and more polished to use the Kinect CHOP instead to position and create interactive visuals. After I finish this project, I will try out your solutions and let you know what happens.

I had another question: how can I connect two computers on a network so that I can use the Touch Out/In CHOPs? Is it as simple as connecting the computers to the same router? I need to do this because the PC I am using doesn't have the Intel USB 3 controller required for the Kinect, whereas my laptop does, but the laptop isn't powerful enough to run all my visuals. So I was thinking I could use my laptop for the Kinect and feed the data into my PC.

Thanking you again,
Vinay

Yes, you can use Touch In/Out CHOPs to send the Kinect data between the machines.
Another option is to buy a USB 3 PCIe card for your PC; they are cheap, around $30. Get one that supports the Kinect v2 and you are all set on your PC. That's what we've had to do for most of the PCs in our office.

Hey Corey,

Yeah, that looks to be the transform matrix. When you export the OBJ, the origin appears to be the Kinect. I tried a couple of different setups to confirm this: the Kinect is always at the origin, and the translations and rotations in the pose appear to be for the projector. The axes are off, so the rotations will first need to be modified to work this out. I found some examples on the forum on taking the mapamok info into Touch. Unfortunately my Python (and T-Script) isn't up to scratch (learning it now) to transfer this across properly (I know what I want to do, I just need to work out how to do it!).

viewtopic.php?f=4&t=3376&p=15219&hilit=mapamok#p15219

I also found this GitHub repo where someone is taking the data into openFrameworks (lines 540-660): github.com/micuat/sharedFace2v2 … /ofApp.cpp

This seems to apply the depth-to-colour transform along with some other values (lines 649 and 656) to get their projector extrinsics.

For the camera matrix, they appear to be using a scaling factor (line 991), but I don't know if this is needed or not. I agree that ProjectorCameraEnsemble > cameraMatrix is our ticket. I just don't know how the 4x4 Touch projection matrix needs to be built from this 3x3 matrix.
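
From what I've been reading, the usual recipe for turning a pinhole camera matrix into an OpenGL-style projection matrix looks roughly like this. Untested on my side, and the signs of the off-centre terms depend on whether the image origin is top-left or bottom-left:

[code]
import numpy as np

def intrinsics_to_projection(K, width, height, near=0.1, far=100.0):
    """4x4 OpenGL-style projection matrix from a 3x3 pinhole camera matrix
    (fx, fy in pixels on the diagonal, principal point cx, cy in the last column)."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    P = np.zeros((4, 4))
    P[0, 0] = 2.0 * fx / width
    P[1, 1] = 2.0 * fy / height
    P[0, 2] = 1.0 - 2.0 * cx / width          # flip these two terms if the
    P[1, 2] = 2.0 * cy / height - 1.0         # image comes out mirrored/offset
    P[2, 2] = -(far + near) / (far - near)
    P[2, 3] = -2.0 * far * near / (far - near)
    P[3, 2] = -1.0
    return P
[/code]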

There is definitely an answer there. When you run the calibration tool, the result obviously works, and it gives the matrix straight away in the output text window.

Scott

Hmmm, that looks very interesting. I'll have a look at replicating what they're doing in that ofx code in Python inside Touch. I'm pretty good with Python but rubbish at vector math and 3D concepts in general, so having some code to work from is a good start.

Failing that, it might be a quick hack to gut that ofx app of everything except the calibration code, add a Spout server and receiver, and use it to manage the Kinect and projector, with Touch handling just content and control. It would be much better to have it all in one, though.

I’ll let you know if I come up with anything. I’m working on some other stuff today but might get lucky…

C.

Update on this: I overestimated my understanding of what needs doing…

If anyone has any advice on where to start I’ll take it!

Specifically, I'm confused about how the transformation matrices should be applied to 2D content generated from the depth image, which I want to map back onto the objects picked up in the depth image.

Say I've got a projector and a Kinect v2 mounted overhead, both pointing straight down, giving a bird's-eye depth image of my floor (acting as my projection canvas) and the people walking across it, and I've filtered the depth image to just the shoulders up (and multiplied it over colour ramps, movies, etc.). How do I use the relationship between the projector and the Kinect depth image that the RoomAlive toolkit just calculated to warp that TOP so the projector projects nicely aligned heads and shoulders back onto the people?

I imagine the transformation matrices just need to be fed into a shader that samples the final output TOP and applies the transformation? I'm only just learning GLSL now and don't quite know where to begin with attempting that.
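
As far as I understand it, the per-pixel math such a shader would have to do is just this (a plain numpy sketch with my own names; whether the pose maps world-to-projector or the other way round in the RoomAlive file decides if it needs inverting first):

[code]
import numpy as np

def project_point(p_world, pose, K):
    """Map a 3D point from Kinect/world space to projector pixel coordinates,
    given a 4x4 projector pose (assumed world -> projector here) and the 3x3
    projector intrinsics K."""
    p = pose @ np.append(p_world, 1.0)   # into projector camera space
    u = K @ p[:3]                        # pinhole projection
    return u[:2] / u[2]                  # projector pixel coordinates

# e.g. a point straight in front of the Kinect at one metre:
# print(project_point(np.array([0.0, 0.0, 1.0]), pose, K))
[/code]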

I'm not sure the transformation matrix CHOP/DAT option in the Camera COMP is helpful for this 2D perspective correction. Would that involve making a sprite that matches the aspect ratio of your projector, zooming your camera until the sprite fills the field of view, and applying the transform that way?

As tallscott pointed out, it looks like some similar wrangling needs to happen to the matrices before they will look correct in Touch, like it did for the mapamok data. Anybody know what actually needs to be done?

Cheers.

C.

Hi…

Referring to the first post, here is a quick and dirty solution:

Use the projector to beam a grid or something into the scene, take a picture with the Kinect TOP, and use the Stoner to separate the projector's area. It's not the smartest solution, but for me it worked well enough…