One of the most useful things you can get from a Kinect is the silhouette of a tracked user, for use as a mask or a generative starting point. I believe the player id is embedded in the depth data coming from the Kinect. Is this data still available in the depth map output by the current version of the Kinect TOP (SDK 1.7)?
If not, I’d love to see an implementation of this. I believe there’s an example included with the Developer Toolkit showing how this information is extracted; it’s also explained in this StackOverflow post.
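For what it’s worth, the extraction itself is simple on the SDK side: with a depth stream opened as NUI_IMAGE_TYPE_DEPTH_AND_PLAYER_INDEX, each 16-bit pixel packs the depth in the upper 13 bits and the player index in the lower 3. A minimal C++ sketch (ExtractPlayerMask is just a name I made up):

```cpp
// A minimal sketch, assuming the raw packed depth buffer from Kinect SDK 1.x
// is available (stream opened as NUI_IMAGE_TYPE_DEPTH_AND_PLAYER_INDEX).
// Each 16-bit pixel packs depth in bits 3-15 and the player index in bits 0-2.
#include <windows.h>
#include <NuiApi.h>

void ExtractPlayerMask(const USHORT* packedDepth, int pixelCount, BYTE* maskOut)
{
    for (int i = 0; i < pixelCount; ++i)
    {
        // SDK macro equivalent to (pixel & 0x0007): 0 = no player, 1-6 = tracked users
        USHORT player = NuiDepthPixelToPlayerIndex(packedDepth[i]);
        maskOut[i] = (player != 0) ? 255 : 0; // white silhouette on black
    }
}
```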
Try using a Kinect TOP connected to a Cache TOP, and cache the incoming Kinect background.
Then, using a Composite TOP set to subtract, you can subtract your cached background Kinect image from your live Kinect feed, which should leave you with a silhouette. An HSV Adjust TOP or other TOPs can clean up the noise.
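For anyone curious what that network is doing per pixel, here’s a rough C++ sketch of the same background-subtraction idea (the function name and threshold are mine; nothing here is SDK or TouchDesigner API):

```cpp
// Rough per-pixel equivalent of the Cache TOP + Composite (subtract) network:
// cache one background depth frame, then threshold the difference against it.
#include <cstdint>
#include <cstdlib>

void SilhouetteBySubtraction(const uint16_t* cachedBackground, const uint16_t* liveDepth,
                             int pixelCount, int threshold, uint8_t* maskOut)
{
    for (int i = 0; i < pixelCount; ++i)
    {
        // Anything that moved far enough from the cached background becomes silhouette.
        int diff = std::abs((int)liveDepth[i] - (int)cachedBackground[i]);
        maskOut[i] = (diff > threshold) ? 255 : 0; // noisy edges; clean up afterwards
    }
}
```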
Hey emintzer, thanks for the reply, but it’s not what I’m looking for. The idea here is that the Kinect can do this actively: using a user’s tracked skeleton and known depth, it can detect whether a given pixel belongs to a person or not. That way it works with any kind of busy background, and even with things moving in front of the user, while still detecting a pretty clean outline.
Doesn’t make much difference to me either way; it’s easy enough to pull out the channel we need if there’s a benefit on your side to keeping it all in one input.
This is done, but didn’t make it into the 12000 series experimental. It’ll show up when we post a 14000 series experimental, which will occur sometime after the 12000 series has gone Official.
I see this feature has been implemented as Player Index in the experimental release. When I try to use it, TouchDesigner freezes up on my machine.
Windows 7 64-bit, TD build 16360 (the 32-bit and 64-bit versions both freeze). NVIDIA GTX 660 graphics card on driver 335.23, Kinect for Windows hardware with SDK 1.8 installed.
How would you go about using this as a mask for the RGB stream?
I’ve played with the tox at viewtopic.php?f=22&t=5909&p=23417&hilit=kinect#p23417 to match the RGB and depth streams (basically there’s some offset/scale on the color stream), but it’s not perfect.
In the depth-with-color sample in the SDK, they use this function:
NuiImageGetColorPixelCoordinateFrameFromDepthPixelFrameAtResolution
Is there a proper way to emulate it in Touch?
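For reference, here’s roughly how the SDK sample uses that call; pSensor, the buffer names, and the 640x480 resolutions are my assumptions, so treat this as a sketch rather than the sample verbatim:

```cpp
// Sketch of mapping packed depth pixels into color-frame coordinates with
// Kinect SDK 1.x, as the depth-with-color sample does.
#include <windows.h>
#include <NuiApi.h>

static const int W = 640, H = 480;

void MapDepthToColor(INuiSensor* pSensor,
                     USHORT* packedDepth,   // depth + player index pixels
                     LONG* colorCoords)     // must hold W * H * 2 LONGs
{
    pSensor->NuiImageGetColorPixelCoordinateFrameFromDepthPixelFrameAtResolution(
        NUI_IMAGE_RESOLUTION_640x480,  // color resolution
        NUI_IMAGE_RESOLUTION_640x480,  // depth resolution
        W * H,                         // number of depth values
        packedDepth,
        W * H * 2,                     // two coordinates (x, y) per depth pixel
        colorCoords);

    // colorCoords[2*i] and colorCoords[2*i + 1] are now the color-frame (x, y)
    // for depth pixel i, so a player mask can be resampled onto the RGB image.
}
```

As far as I can tell there’s no built-in equivalent in Touch, which would explain why a constant offset/scale on the color stream (like the tox above) never lines up perfectly: the true mapping depends on each pixel’s depth.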
Vincent - barakooda was asking about this in another thread. I’ve been trying to figure this out this evening and haven’t got it working yet, but I posted some info here: