Do you remember when, in 2007, Johnny Chung Lee published his amazing Wiimote head-tracking hack to create a virtual 3D desktop?
I want to build on the concept. I’ve managed to get it working, but it’s kludgy. Is there a better way to apply the anamorphosis warp directly in-camera? I’ve tried deforming the scene geometry, and also rendering the scene out and projecting the texture onto new geometry and re-shooting it, but both approaches are sub-optimal.
Is there a way to skew a scene directly in-camera?
I’ve done a similar project and got very frustrated with the clunkiness of rendering a scene out then applying it to other geometry and re-rendering.
There is no simple way to do this directly in the camera. However, I had luck rendering to a cube map, then passing that and the head-tracking data into a glslTOP that uses gnomonic projection ( https://en.wikipedia.org/wiki/Gnomonic_projection ) to create the anamorphosis effect.
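Roughly, the per-pixel lookup in that shader amounts to: for each output pixel, fire a ray from the tracked head position through that pixel's location on the physical screen, and sample the cube map in that direction. Here's a Python sketch of the idea (the function name and the coordinate setup, screen plane at z = 0 with the head on the +z side, are my assumptions, not code from the actual TOP):

```python
import numpy as np

def gnomonic_ray(u, v, head_pos, screen_w, screen_h):
    """For an output pixel (u, v in 0..1), return the cube-map sample
    direction: the ray from the tracked head position through that
    point on the screen plane. Assumes the screen is centered at the
    origin in the z = 0 plane, with dimensions in the same units as
    head_pos (e.g. meters). This is a per-pixel inverse of the
    gnomonic (rectilinear) projection."""
    # Point on the screen plane corresponding to this pixel
    p = np.array([(u - 0.5) * screen_w, (v - 0.5) * screen_h, 0.0])
    d = p - head_pos               # ray from the eye to the pixel
    return d / np.linalg.norm(d)   # unit direction for the cube-map lookup
```

For example, with the head 0.6 m straight out from the center of the screen, the center pixel's sample direction points straight into the screen; as the head moves off-axis, the whole grid of rays skews, which is exactly the anamorphic warp. In the real glslTOP this runs per fragment on the GPU.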
Here’s the results from my project - https://ianshelanskey.com/category/technology/room-space-augmented-reality/
I’ll try digging up some of the code for you.
Not 100% sure if it’s still functional in the latest builds, but I made a .tox that does this here:
Wow! Malcolm, that is so very cool! I don’t have the Kinect on hand right now, but I tried it with a Constant CHOP for the UV, and it works seamlessly.
The calculatePerspective Execute DAT does exactly what I needed. I couldn’t have done this myself.
Thanks a million, you made my day!
I didn’t know Gnomonic projection was what it was called. I’ll have to look into the glslTOP. I’ll keep you posted. Now back to work!
Your project looks pretty cool, BTW!
Your .tox still works for Kinect 2, very cool. Could you describe what the DAT tables “projMat” and “transMat” do, and how their values are determined? They seem to control the initial camera position, as well as the view size and resolution.
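My guess, for anyone else reading, is that projMat holds a projection matrix and transMat the camera transform. For reference, a head-coupled display typically uses an asymmetric (off-axis) frustum recomputed each frame from the eye's position relative to the screen edges; this sketch builds such a matrix in the standard OpenGL glFrustum layout (this is the generic construction, not code taken from the .tox):

```python
import numpy as np

def off_axis_projection(left, right, bottom, top, near, far):
    """OpenGL-style asymmetric frustum matrix (same layout as glFrustum).
    For head-coupled perspective, left/right/bottom/top are the screen
    edges measured relative to the eye, scaled to the near plane, and
    are recomputed every frame as the head moves."""
    m = np.zeros((4, 4))
    m[0, 0] = 2.0 * near / (right - left)
    m[1, 1] = 2.0 * near / (top - bottom)
    m[0, 2] = (right + left) / (right - left)    # horizontal skew term
    m[1, 2] = (top + bottom) / (top - bottom)    # vertical skew term
    m[2, 2] = -(far + near) / (far - near)
    m[2, 3] = -2.0 * far * near / (far - near)
    m[3, 2] = -1.0
    return m
```

When the eye is centered, left = -right and bottom = -top, so the skew terms are zero and this reduces to an ordinary symmetric perspective matrix; moving the head off-center makes them nonzero, which is what produces the in-camera skew asked about earlier in the thread.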
Also, as someone relatively new to creative coding who wants to learn the Kinect, I saw elburz suggest starting with Shiffman’s Processing & Kinect videos. Would you have anything else to add to that?