I’m building an art projection using the ParticlesGPU prebuilt container. I’ve added custom forces that currently follow my mouse. I’ve also integrated a Kinect to track body movement, and I’m trying to replace the mouse input with a tracked point on the Kinect skeleton (the shoulder_c joint).
The issue is that the Kinect joint positions are in 3D world space, but my particle system renders through a different camera, so the two coordinate spaces don’t match. When I move, the tracked point doesn’t line up with the Kinect camera footage at all; it moves with a completely different scale and offset.
Is there a recommended way to map Kinect skeleton joint positions into the particle system’s space (or camera space) so that the movement visually lines up with the Kinect image?
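For context, here’s a minimal sketch of the kind of mapping I think is needed: project the 3D joint through a pinhole camera model into pixel coordinates, then remap those pixels into the particle system’s normalized range. The intrinsic values below are placeholders, not my actual calibration, and the function names are just mine for illustration:

```python
# Hypothetical intrinsics for a Kinect depth camera -- substitute the
# values reported by your own device/SDK.
FX, FY = 365.0, 365.0   # focal lengths in pixels
CX, CY = 256.0, 212.0   # principal point in pixels
W, H = 512, 424         # depth image resolution

def joint_to_uv(x, y, z):
    """Project a camera-space 3D joint (meters) to pixel coordinates
    using the pinhole camera model."""
    u = FX * x / z + CX
    v = FY * y / z + CY
    return u, v

def uv_to_particle_space(u, v):
    """Remap pixel coordinates to a normalized [-1, 1] range
    (y flipped so +y is up, matching a typical render camera)."""
    px = (u / W) * 2.0 - 1.0
    py = 1.0 - (v / H) * 2.0
    return px, py

# A joint straight ahead of the sensor at 1 m lands on the principal
# point, which maps to the center of the normalized range:
u, v = joint_to_uv(0.0, 0.0, 1.0)
px, py = uv_to_particle_space(u, v)   # -> (0.0, 0.0)
```

Is this roughly the right approach, or does the Kinect CHOP already expose image-space (u/v) channels I should be using instead of doing the projection myself?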