Fusing Body Tracking Data from Two or More Azure Kinect Cameras?

I’m familiar with this discussion about how to align the camera orientations:
Azure Pointcloud Merger (TouchDesigner forum)
However, what I’m trying to do is use two Femto Mega cameras to capture body tracking data and then solve for the most accurate joint rotations for each user on stage, using joint confidence to fuse the overlapping skeleton data from both cameras into a single data set that I can output to Unreal.

ZED has a great tool developed for this called Fusion (which includes orientation calibration, floor detection, and other controls for managing multi-camera setups), and I was wondering if anyone knows of similar tools developed for cameras that output Azure Kinect data?

Any recommendations for how to approach this problem would be a huge help. I’m still pretty new to TouchDesigner and to setting up these kinds of camera systems in general, and this seems like something way out of my depth to build from scratch haha. I would greatly appreciate any advice anyone has to offer. Is this even realistic or feasible in TD? Thanks!

Hi @albatro5s,

Doesn’t look like the Azure Kinect supports this the way the ZED does.

They mention taking the joint confidence into account when merging skeletons. That might be a good approach, and maybe not even that expensive on the CPU…
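
For a rough idea of what that could look like, here is a minimal Python sketch of per-joint confidence weighting (plain Python, so it could run in a Script CHOP or an Execute DAT). It assumes both skeletons have already been transformed into a shared world space and matched to the same person; the data layout, joint keys, and the `CONFIDENCE_FLOOR` threshold are all made up for illustration.

```python
import math

CONFIDENCE_FLOOR = 0.3  # hypothetical cutoff below which a camera's estimate is ignored

def slerp(q0, q1, t):
    """Spherical interpolation between unit quaternions (w, x, y, z)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:  # flip one quaternion so we blend along the short arc
        q1 = tuple(-c for c in q1)
        dot = -dot
    theta = math.acos(min(dot, 1.0))
    if theta < 1e-6:  # quaternions nearly identical, no need to blend
        return q0
    s = math.sin(theta)
    w0 = math.sin((1.0 - t) * theta) / s
    w1 = math.sin(t * theta) / s
    return tuple(w0 * a + w1 * b for a, b in zip(q0, q1))

def fuse_joint(a, b):
    """Each joint: {'pos': (x, y, z), 'rot': (w, x, y, z), 'conf': 0..1}."""
    ca, cb = a['conf'], b['conf']
    if cb < CONFIDENCE_FLOOR:  # trust camera A alone if B is unsure
        return a
    if ca < CONFIDENCE_FLOOR:
        return b
    w = cb / (ca + cb)  # blend weight toward camera B's estimate
    pos = tuple((1.0 - w) * pa + w * pb for pa, pb in zip(a['pos'], b['pos']))
    return {'pos': pos, 'rot': slerp(a['rot'], b['rot'], w), 'conf': max(ca, cb)}

def fuse_skeletons(skel_a, skel_b):
    """Merge two {joint_name: joint} dicts; joints seen by only one camera pass through."""
    fused = {}
    for name in set(skel_a) | set(skel_b):
        if name in skel_a and name in skel_b:
            fused[name] = fuse_joint(skel_a[name], skel_b[name])
        else:
            fused[name] = skel_a.get(name) or skel_b.get(name)
    return fused
```

Getting the two skeletons into the same space in the first place (the extrinsic calibration) is the harder part; that's what the pointcloud-merger thread you linked is about.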

cheers
Markus
