ofx mapamok, camera calibration - touch integration


I have been playing with projection mapping in Touch, and am interested in the tricky area of projector calibration.

I notice that the massive projects that use mapping all seem to rely on bespoke software for their camera calibration. I’m thinking of camera obscurer and the ISAM project, which both say in their project descriptions that they are able to calibrate the real world against their virtual models quickly using external calibration software.

I was very interested in this project by Kyle McDonald for the openFrameworks platform, which seems to be an excellent calibration technique, and I was wondering whether it could be ported to Touch relatively easily.

github.com/YCAMInterlab/ProCamT … k-(English


Unfortunately I’m not an expert at these things, but it seems that openFrameworks uses OpenGL, as does Touch, so presumably the calibration done in the mapamok application could be exported to Touch somehow?

Presumably their calibration tool creates a shader that warps the vertices appropriately for your individual real-world setting? Would it be possible to save this shader and use it in Touch somehow?
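For what it’s worth, as far as I can tell mapamok’s calibration produces a camera matrix (intrinsics plus extrinsics) rather than a shader as such. Here’s a rough Python sketch, with function names of my own invention, of how OpenCV-style intrinsics could be turned into an OpenGL-style projection matrix and applied to a vertex; sign and row conventions differ between toolkits, so treat this as illustrative only:

```python
# Sketch: converting OpenCV-style camera intrinsics (fx, fy, cx, cy),
# like those a mapamok-style calibration estimates, into an OpenGL
# projection matrix. Conventions vary; this is one common mapping.

def intrinsics_to_gl_projection(fx, fy, cx, cy, w, h, near=0.1, far=100.0):
    """Build a 4x4 projection matrix (row-major, column-vector convention)."""
    return [
        [2.0 * fx / w, 0.0,          1.0 - 2.0 * cx / w,           0.0],
        [0.0,          2.0 * fy / h, 2.0 * cy / h - 1.0,           0.0],
        [0.0,          0.0,          -(far + near) / (far - near), -2.0 * far * near / (far - near)],
        [0.0,          0.0,          -1.0,                         0.0],
    ]

def project(P, v):
    """Apply P to a camera-space point v=(x, y, z); return NDC (x, y)."""
    x, y, z = v
    clip = [row[0] * x + row[1] * y + row[2] * z + row[3] for row in P]
    return clip[0] / clip[3], clip[1] / clip[3]
```

With a centered principal point (cx = w/2, cy = h/2), a point straight ahead of the camera lands at the middle of the screen, which is a quick sanity check that the matrix is wired up right.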

I hope I am not completely off the mark here, and that what I’m talking about is within the realm of how the program works! I know calibration is the hardest part of mapping, but I am trying to avoid adjusting the mesh vertex by vertex to map the texture onto the real-world scene.

Any help is always appreciated.


Did you have any luck?



A new feature in 088, called Custom Projection, was added to the Camera COMP to make it easier to use these custom projection formulas in a global manner.

When can we expect 088?

The beta is out right now for Commercial or Pro users.

It’s still very unclear to me. Does anyone have a working setup?

Yes, I agree.

It would be great to have an example on how to use this custom projection feature.
What are we supposed to plug in? A shader?



The problem I’m having is that mapamok is not giving me the values I’ve measured in real life. The degrees by which my projector is rotated don’t seem to correspond to any values in the .yml file.
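If it helps: I believe the .yml mapamok writes stores the extrinsic rotation as an OpenCV Rodrigues (axis-angle) vector in radians, not as per-axis degrees, which would explain why the numbers don’t match what you measured. Here’s a hedged Python sketch (pure stdlib, names mine) that decodes such a vector into a rotation matrix and then per-axis Euler angles in degrees:

```python
import math

# Sketch: decoding an OpenCV-style Rodrigues (axis-angle) rotation
# vector, as typically found in a calibration .yml, into a rotation
# matrix and then Euler angles in degrees. Euler order is a choice;
# this assumes R = Rz * Ry * Rx.

def rodrigues_to_matrix(r):
    """Convert a 3-element Rodrigues vector (radians) to a 3x3 matrix."""
    theta = math.sqrt(sum(c * c for c in r))
    if theta < 1e-12:
        return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    kx, ky, kz = (c / theta for c in r)  # unit rotation axis
    c_, s = math.cos(theta), math.sin(theta)
    v = 1.0 - c_
    return [
        [kx * kx * v + c_,     kx * ky * v - kz * s, kx * kz * v + ky * s],
        [ky * kx * v + kz * s, ky * ky * v + c_,     ky * kz * v - kx * s],
        [kz * kx * v - ky * s, kz * ky * v + kx * s, kz * kz * v + c_],
    ]

def matrix_to_euler_degrees(R):
    """Decompose R = Rz * Ry * Rx into (rx, ry, rz) in degrees."""
    ry = math.degrees(math.asin(-R[2][0]))
    rx = math.degrees(math.atan2(R[2][1], R[2][2]))
    rz = math.degrees(math.atan2(R[1][0], R[0][0]))
    return rx, ry, rz
```

Note also that OpenCV’s extrinsics describe the world in the camera’s frame, so even after decoding you may need to invert the transform before the angles resemble how the projector itself is rotated.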


Anyone want to share their workflow? I’ve been using 123D Catch with good results, but it doesn’t export the texture maps it creates.

Here’s a port of mapamok to TD: https://forum.derivative.ca/t/088-map-an-object-with-camschnappr/3644/1. We’ve used this component in lots of our shows and it works great. BUT only as long as the actual geometry is pretty close to your model.

I did a workshop for beginners demonstrating the workflow using Kantan and CamSchnappr (the implementation of mapamok in TouchDesigner). Maybe it’ll answer some of your questions?

Scroll down to the end:
derivative.ca/Events/2014/T … hopVideos/

Thank you for sharing that. I’ll be sure to check it out. My current workflow is to get a mesh from 123D Catch and bring it into Houdini Apprentice. I’m able to get the texture map in as well, which helps when building. Next, I snap a large polygon to the mesh, then create smaller and smaller polygons inside it to match the mesh model. Sometimes it’s a polygon mess, so I delete what I’ve created while keeping the points and rebuild the model.

Hey Shawn, how have you found 123D Catch? I’ve been looking at Smart Capture, which it’s based on. Can it replace laser scanning?


123D is remarkable for what it does, certainly more capable than the Kinect for scans. But the meshes are fugly and mangled. I can see how it might be a good starting point for a rebuild, as I think maasta is suggesting. A very exciting tool.