converting 3D points to match a 2D view

I’m building an art projection using the ParticlesGPU prebuilt container. I’ve added custom forces that currently follow my mouse. I’ve also integrated a Kinect to track body movement, and I’m trying to replace the mouse input with a tracked point on the Kinect skeleton (specifically the shoulder_c joint).

The issue is that the Kinect joint positions are in 3D space, but my particle system and camera effectively use a different camera view. When I move, the tracked point doesn’t line up with the Kinect camera footage at all; it moves at a completely different scale.

Is there a recommended way to properly map Kinect skeleton joint positions into the particle system’s space (or camera space), so the movement visually lines up with the Kinect image?

Hello,

A little more information about your project would be useful…

But more generally:

– the Kinect camera sits at (0,0,0), while the standard TD camera is at (0,0,5)

Put your TD camera at (0,0,0).

– the Kinect camera points toward positive Z, while the TD camera points toward negative Z

You have to rotate your TD camera by (0, 180, 0).

After that, adjust the FOV of the TD camera to match the Kinect’s field of view.
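If you would rather do the mapping yourself in Python instead of moving the camera, the same idea can be sketched as a perspective projection of a Kinect-space point into normalized screen coordinates. This is only a sketch under assumed intrinsics: the function name is made up, and the 60° horizontal FOV and 16:9 aspect are placeholder values you would replace with your actual camera COMP settings and Kinect intrinsics.

```python
import math

def project_kinect_point(p, fov_deg=60.0, aspect=16 / 9):
    """Project a Kinect-space point (camera at origin, +Z forward)
    to normalized screen coordinates in the range -1..1.

    fov_deg is the horizontal FOV; fov_deg and aspect here are
    placeholder values -- match them to your own camera setup.
    """
    x, y, z = p
    if z <= 0:
        return None  # point is behind the sensor
    half = math.tan(math.radians(fov_deg) / 2)  # half-width at unit depth
    sx = x / (z * half)                         # horizontal NDC
    sy = y / (z * half / aspect)                # vertical NDC, aspect-corrected
    return (sx, sy)
```

A point straight ahead projects to (0, 0), and a point at the edge of the horizontal FOV projects to sx = ±1, which you can then remap to your particle system’s coordinate range.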