Using Kinect for live visuals like this

Hey! Absolute bloody beginner here. I'm trying to produce live visuals like in this video: https://www.youtube.com/watch?v=Ul5rTdYo628
Around 2:23 especially, but really throughout the entire video, it seems they are using the Kinect for motion capture and then stylizing it in this really glowy, smooth, bright, and sometimes glitchy way. Is this possible to do with TouchDesigner? All I could find were tutorials about creating the depth map/point cloud, but it never looks quite like this. Could anyone point me in the right direction?

Pretty cool-looking show! Sinjin Hawke has a very unique sound, and I'm glad to see the visuals really backing that up!

If done in TD (which is very possible), it seems like most of the Kinect work would be using very heavily effected variants of a displaced mesh shader, plus depth culling to remove the background. The starting point is probably the Depth Point Cloud and Color Point Cloud Image options on the Kinect TOP. From there you need to get that information into a material that uses the depth data to push the vertices, and possibly compare depths and remove (or set alpha = 0 on) the ones beyond a certain threshold. This can all be done in a custom GLSL MAT using those two Kinect TOPs as inputs, but these days you can do a whole lot in TOPs before you need to dive into raw shader world (things like Math and Limit TOPs for ranging and clamping your depth pixels). You may even be able to use the PBR MAT's built-in height mapping with Displace Vertices on to get this working without writing any GLSL yourself. A rough sketch of the GLSL route is below.
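To make that concrete, here's a minimal sketch of what the vertex/pixel shader pair in a GLSL MAT could look like. A few assumptions on my part (these are not TD defaults): a Grid SOP with texture coordinates and roughly one vertex per depth pixel, the Kinect TOP's Depth Point Cloud wired in as a sampler named `sPointCloud` on the MAT's Samplers page, a `uMaxDepth` uniform on the Vectors page, and depth in the Z channel in metres. The `TD*` helpers and the `oVert`/`fragColor` pattern follow the current GLSL MAT template.

```glsl
// ---- Vertex shader (GLSL MAT) ----
uniform sampler2D sPointCloud; // Kinect TOP Depth Point Cloud: RGB = camera-space XYZ (assumed name)
uniform float uMaxDepth;       // cull distance in metres (assumed name)

out Vertex {
	vec4 color;
} oVert;

void main()
{
	// Each grid vertex looks up its 3D position from the point-cloud texture.
	vec3 camPos = texture(sPointCloud, uv[0].st).xyz;

	// Flag vertices past the cull distance, plus untracked pixels
	// (which typically come through as 0); the pixel shader discards them.
	float keep = step(abs(camPos.z), uMaxDepth) * step(0.001, abs(camPos.z));
	oVert.color = vec4(vec3(1.0), keep);

	// TDDeform() takes the displaced point into world space;
	// TDWorldToProj() projects it for the current camera.
	vec4 worldPos = TDDeform(camPos);
	gl_Position = TDWorldToProj(worldPos);
}
```

```glsl
// ---- Pixel shader (GLSL MAT) ----
in Vertex {
	vec4 color;
} iVert;

out vec4 fragColor[TD_NUM_COLOR_BUFFERS];

void main()
{
	TDCheckDiscard();
	// Drop fragments belonging to culled (background/untracked) vertices.
	if (iVert.color.a < 0.5)
		discard;
	fragColor[0] = TDOutputSwizzle(iVert.color);
}
```

The glowy/glitchy styling would then mostly be post-processing on the rendered image, e.g. Blur, Level, and Feedback TOPs stacked after the Render TOP.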

I’m sure there are loads of examples on this forum and elsewhere for how to implement this, but those are my brief thoughts on the basics of the setup.

In short, this is entirely possible in TD, and I would put a decent bet on that particular production being TD-based.

Thanks so much for pointing me in the right direction! That has been super helpful; I'll be referencing it in the future. Gotta start learning the basics of the software :slight_smile:

Everything @archo-p said is plausible, but I think there’s a possibility they used Kinect 2 features for getting 3D triangulated meshes that aren’t as neatly accessible in TouchDesigner.

For example, the CalculateMesh method in the Kinect API:
https://docs.microsoft.com/en-us/previous-versions/windows/kinect/dn799275(v=ieb.10)

https://docs.microsoft.com/en-us/previous-versions/windows/kinect/dn782073(v=ieb.10)

I have a feeling they used ofxKinectForWindows2 as a convenient way to use the Kinect API.

Or, foot in mouth… does the Kinect SOP do the mesh?