Some people seem to be wondering how to get this to work, so I figured I’d share this crude workaround.
Start by downloading fred-dev’s GitHub repo KinectV2_Syphon and following its directions. This installs a Syphon app you can open to fire up your Kinect V2 and publish its color, depth, and IR video feeds.
Then, in TouchDesigner, use the Syphon Spout In TOP to pick whichever Kinect feed you want. One major limitation: there’s no point cloud feed.
However! With this super basic tox you’ll have a bare-bones color point cloud base.
Kinect_colorPointCloud_mac.tox (2.8 KB)
Within the tox, adjust the rgbkey to dial in what you want to remove from the depth image. Red is the x axis, green is y, and blue is z. Adjusting the Math TOP’s range helps you get more surgical.
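To make the keying step concrete, here’s a minimal standalone sketch of what the rgbkey plus Math TOP range adjustment effectively does per pixel: keep a depth pixel only if each channel falls inside a chosen range, otherwise black it out. The function name and the threshold values are illustrative, not taken from the tox.

```python
def key_pixel(rgb, lo, hi):
    """Return the pixel if every channel is inside [lo, hi], else black.
    rgb, lo, hi are (r, g, b) tuples of floats in 0..1."""
    inside = all(l <= c <= h for c, l, h in zip(rgb, lo, hi))
    return rgb if inside else (0.0, 0.0, 0.0)

# Example: keep only mid-range depth (blue ~ z in this mapping),
# pass the full x (red) and y (green) range through.
lo = (0.0, 0.0, 0.2)
hi = (1.0, 1.0, 0.8)
print(key_pixel((0.5, 0.5, 0.5), lo, hi))  # inside the range, kept
print(key_pixel((0.5, 0.5, 0.9), lo, hi))  # z out of range, keyed to black
```

Narrowing `lo`/`hi` on the blue channel is the per-pixel equivalent of tightening the Math TOP’s range on z.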
Four things I’m struggling with:
-Aligning the color image (1280x720) to the depth & IR images (512x424)
-Getting the skeletal CHOP data
-Smoothing out the depth image
-Vectorizing to real-world coordinates, similar to this
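On the last point, the usual approach is a standard pinhole-camera unprojection: each depth pixel plus its depth value maps to a camera-space XYZ position. A minimal sketch, assuming approximate intrinsics often quoted for the Kinect V2 depth camera (a calibrated device will differ, so treat `FX`/`FY`/`CX`/`CY` as placeholders):

```python
# Approximate Kinect V2 depth-camera intrinsics (assumed, not calibrated).
FX, FY = 365.456, 365.456   # focal lengths in pixels
CX, CY = 254.878, 205.395   # principal point in pixels

def depth_to_world(u, v, depth_m):
    """Map depth-image pixel (u, v) with depth in metres to camera-space XYZ."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return (x, y, depth_m)

# A pixel at the principal point projects straight down the z axis:
print(depth_to_world(254.878, 205.395, 1.0))  # -> (0.0, 0.0, 1.0)
```

In TouchDesigner this same math could live in a GLSL TOP operating on the depth texture, with the intrinsics passed in as uniforms.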
If anyone has a way to integrate the skeletal data into the Syphon app, that’d be amazing! There’s this Kinect_Smoothing repo, but I don’t know how, or whether, it’s possible to integrate it into the Syphon repo. If anyone has ideas on how to solve these issues, or any obvious improvements to add, this novice would be so grateful!