I am quite new to TouchDesigner, and my team and I (we are multimedia students) are working on an interactive experience for school. We've been trying to fire triggers based on specific movements made by users with the Kinect Azure…
We've been going around in circles for a while and still haven't found a solution.
Can anyone help with this?
With Kinect, this is easiest to do using the skeleton data channels in the Kinect or Kinect Azure CHOP. You can identify where the wrists, feet, and elbows are and monitor those values in CHOPs; when they hit a certain value, fire the trigger. The Logic CHOP is good for turning specific values into an on/off signal, and Math CHOPs will get the values into the range you need.
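To illustrate the "value crosses a bound, flip an on/off state" behavior you'd build with Math and Logic CHOPs, here is a plain-Python sketch (not TouchDesigner API; the thresholds and the wrist-height values are made up). Adding a second, lower "off" threshold gives hysteresis, so jitter near the bound doesn't retrigger:

```python
# Sketch of Logic-CHOP-style on/off triggering with hysteresis.
# Thresholds and input values are hypothetical example numbers.

def make_trigger(on_above, off_below):
    """Return a step function that converts a stream of values
    into 0/1 states, switching on above one bound and off below
    a lower bound so noise near the threshold doesn't flicker."""
    state = {"on": False}

    def step(value):
        if not state["on"] and value > on_above:
            state["on"] = True      # e.g. wrist raised past the bound
        elif state["on"] and value < off_below:
            state["on"] = False     # wrist dropped back down
        return 1 if state["on"] else 0

    return step

# Example: normalized wrist height samples from skeleton tracking
trigger = make_trigger(on_above=0.8, off_below=0.6)
heights = [0.2, 0.5, 0.85, 0.82, 0.79, 0.75, 0.55, 0.3]
states = [trigger(h) for h in heights]
# 0.85 switches the trigger on; it stays on until the value
# falls below 0.6, then switches off
```

In TouchDesigner the same logic is just a Math CHOP to normalize the channel followed by a Logic CHOP, but the hysteresis idea is worth keeping in mind either way.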
If you want to do it purely from the depth camera, you need to start filtering the video images until you can isolate the player; then you could feed that into a Blob Track TOP or CHOP and get values that way, but it takes more image processing and massaging to get right.
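A rough sketch of what that filtering step amounts to, outside TouchDesigner: keep only depth pixels in the player's distance band, then take the centroid of what survives (roughly what a blob tracker reports). The frame and distance band below are made-up example values:

```python
# Hypothetical depth-filtering sketch: threshold a depth frame to the
# player's distance band, then compute the centroid of surviving pixels.

def isolate_and_track(depth, near, far):
    """depth: 2D list of distances (e.g. millimeters).
    Returns the (cx, cy) centroid of pixels within [near, far],
    or None if no pixel falls in the band."""
    xs, ys = [], []
    for y, row in enumerate(depth):
        for x, d in enumerate(row):
            if near <= d <= far:    # keep only the player's band
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Tiny 4x4 "depth frame": background ~4000 mm, player ~1500 mm
frame = [
    [4000, 4000, 4000, 4000],
    [4000, 1500, 1520, 4000],
    [4000, 1510, 1490, 4000],
    [4000, 4000, 4000, 4000],
]
center = isolate_and_track(frame, near=1000, far=2000)
# centroid of the four in-band pixels
```

This is only the crude version; in practice you'd also do background subtraction and noise cleanup before the blob tracking behaves reliably, which is why the skeleton channels are the easier route.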