I’m new to TD and I’m trying to build a project that uses a Kinect Azure to track up to six people. It needs to draw circles where joints are detected and overlay those circles on top of the RGB camera feed from the Kinect Azure.
I’m having difficulty mapping the joint positions to the RGB feed and need some advice.
Is it best to use the joint worldspace X,Y positions (I don’t really care about visualizing the depth since the circles will be mapped onto the RGB camera feed) or the UV coordinates from the color image positions?
What’s the best way to normalize the joint coordinates (whether from worldspace X,Y positions or UV coordinates)? Will both the worldspace X,Y positions and the UV coordinates have consistent min and max values?
If you plan to overlay the circles on the 2D camera image, then I would recommend using the UV coordinates of the joint rather than worldspace, which you would need to project back into camera space.
The UV values map from 0-1 across the color image, but they are not necessarily clamped to that range: if part of the skeleton is offscreen, the values may go below zero or above one.
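Since the UVs are normalized to the color frame, converting a joint to a pixel position for drawing is just a scale. A minimal sketch, assuming a 1920x1080 color image and a bottom-left UV origin (verify both against your setup):

```python
def uv_to_pixels(u, v, width=1920, height=1080):
    """Convert normalized UV coords to pixel coords on the color image.

    Assumes a bottom-left UV origin (drop the flip if your pipeline
    already uses top-left). Returns (x, y, onscreen); onscreen is False
    when the joint falls outside the frame, since UVs are not clamped.
    """
    x = u * width
    y = (1.0 - v) * height  # flip to a top-left pixel origin
    onscreen = 0.0 <= u <= 1.0 and 0.0 <= v <= 1.0
    return x, y, onscreen
```

For example, `uv_to_pixels(0.5, 0.5)` lands at the image center (960, 540), while a joint at u = -0.1 reports `onscreen` as False, which you can use to skip drawing that circle.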
Thanks @robmc! I just realized that the color camera has a different resolution than the depth camera, and the depth camera is what drives the body tracking, so maybe that explains why I’m struggling with the mapping? The depth camera appears to see more of the physical space in front of it than the color camera, so, for example, 1.0 in the depth camera’s U coordinates is farther left than 1.0 in the color camera’s (the U coordinates are mirrored). Because of this, if I’m tracking my left hand, when my hand appears just at the far left edge of the color image, the tracked position is way off to the left. For reference, the depth mode is set to Wide FOV - 2x2 Binned.
Here is a gif to show what I mean: kinect-Depth-Mapping (hosted at ImgBB). The U and V values in the gif come straight out of a Select CHOP connected to the Kinect Azure CHOP.
Ideally, 0 and 1 would mean the same thing for both the color camera and the depth camera, but this doesn’t seem to be the case? I must be missing something.
The color and depth cameras do have different fields of view, so they will see slightly different things and their coordinates do not match. However, the Kinect Azure CHOP should already be handling this and has separate channels for both images: the regular u/v channels are relative to the color camera image, and the depthu/depthv channels match the depth image.
Alternatively, you can transform either the depth or color image to match the other camera using the ‘Align Image to Other Camera’ parameter on the Kinect Azure TOP.
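To see why a raw depth-space U of 1.0 lands well outside the color frame, you can model each camera as a simple pinhole and map a normalized coordinate through the view angle. This sketch uses nominal horizontal FOVs (120° for the wide depth mode, 90° for color; check the Azure Kinect hardware specs for your actual modes) and ignores the physical offset between the two sensors, which the SDK’s alignment accounts for:

```python
import math

def remap_u(u, fov_src_deg, fov_dst_deg):
    """Map a normalized horizontal coordinate from one camera's frame
    into another's, treating both as pinhole cameras sharing a center.

    Ignores the baseline between the sensors, so this only illustrates
    the FOV mismatch; it is not a replacement for the SDK's alignment.
    """
    half_src = math.tan(math.radians(fov_src_deg) / 2.0)
    half_dst = math.tan(math.radians(fov_dst_deg) / 2.0)
    # Normalized u -> tangent-plane offset -> back to the other camera's u.
    offset = (u - 0.5) * 2.0 * half_src
    return 0.5 + offset / (2.0 * half_dst)

# The far edge of the wide depth FOV falls well past the color frame:
print(remap_u(1.0, 120.0, 90.0))  # ~1.37, i.e. offscreen in color space
```

This matches what you saw: a hand at the edge of the depth camera’s view can sit far outside the 0-1 range of the color image, which is why you want the color-space u/v channels (or the aligned image) rather than raw depth-space coordinates.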
Hope that helps. If you want to post your project file here, we could take a look and see what is going on.
Thanks! I figured out what I was doing wrong: a Math CHOP connected to the Kinect Azure CHOP was remapping the circle position BEFORE I toggled on outputting the color image positions, so those old remap values were way off. I also had “Aspect Correct UVs” toggled on in the Kinect Azure CHOP, which changed the UVs in a way I did not expect.
What is the best option for recording the body position data synced to the color and depth images? I’d like to create a “testing clip” so other people using the Kinect to drive interactivity in their projects can develop without having an actual Kinect connected to their computer. In other words, I would need to simultaneously record the joint position UVs and the color video stream.
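One way to structure that is to record the color stream (e.g. with a Movie File Out TOP) while logging the per-frame joint UVs to a file keyed by frame number, so the two can be re-synced on playback; inside TD, a Record CHOP or File Out CHOP can capture the channel data directly. As a standalone sketch of what such a log might look like (the joint names and file layout here are illustrative, not TD output):

```python
import csv

def write_joint_log(path, frames):
    """Write per-frame joint UVs to CSV, keyed by frame number so rows
    can be re-synced with the recorded video on playback.

    `frames` is an iterable of (frame_number, {joint_name: (u, v)}).
    The joint names used here are placeholders, not Kinect channels.
    """
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame", "joint", "u", "v"])
        for frame, joints in frames:
            for name, (u, v) in joints.items():
                writer.writerow([frame, name, u, v])

# Example: two frames of a single tracked hand.
write_joint_log("test_clip.csv", [
    (0, {"hand_l": (0.42, 0.55)}),
    (1, {"hand_l": (0.44, 0.56)}),
])
```

A playback setup could then step through the CSV in lockstep with the movie file, feeding the same UV channels your project expects from the live Kinect Azure CHOP.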