Compensate for Kinect ceiling-mounted downward viewing angle?


I have a Kinect Azure and am using both CHOP and TOP data to drive various project elements, both XYZ coordinates and work involving the image itself. The trick is that the Azure is mounted in a hallway at the top corner where the wall meets the ceiling (security-camera style), so users moving their limbs produce data that isn't cleanly lateral/vertical, and the image itself is of course skewed. Using the Azure's TOP image has meant lots of post-processing (Stoner tool, etc.) with limited accuracy and success.

What methods/tools are you using to allow a Kinect Azure or equivalent to be mounted like this but allow for clean interactive development? Thank you for your help!

I don't have an Azure, but for my first Kinect2 show I used Microsoft Kinect Studio extensively to get sample recordings to work from, and also to work out the ideal positioning for the camera. It's best to work really hard to get your camera sensors into the best possible physical position, so consider moving the sensor if you can rather than trying to fix things in post-processing. It's like starting with the strongest possible signal so the signal-to-noise ratio is as high as possible. Good luck.

If you think less about the image and RGB data, and more about the point cloud data from the Azure, you can do a lot to transform this info into something a bit more usable. I'm not sure which aspects of the Kinect data you are using (skeleton tracking, depth image, etc.), but if you get the point cloud and use a Point Transform TOP to rotate and translate that data into something a straight-on camera in your scene can use, you can do a lot to correct for the odd angle your environment forces you to work with.
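The correction the Point Transform TOP applies is just a rotation of the point cloud. A minimal sketch of that math in plain Python, assuming a 35° downward mount angle (an illustrative value, not one from this thread; measure your actual mount):

```python
import math

def level_point(x, y, z, tilt_deg=35.0):
    """Rotate one XYZ point about the X axis to undo a downward
    camera pitch of tilt_deg degrees. The angle is an assumption;
    substitute your measured mount angle."""
    t = math.radians(tilt_deg)
    y2 = y * math.cos(t) - z * math.sin(t)
    z2 = y * math.sin(t) + z * math.cos(t)
    return x, y2, z2
```

In practice you'd dial the equivalent angle into the Point Transform TOP's rotate parameters rather than scripting it per point; this just shows what that rotation is doing to the data.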

Good thinking, thank you! I have been using CHOPs to track the XYZ axes of limbs/head/etc. Is there a way to do this with TOP point cloud data that has been adjusted? I'm thinking a blob tracker won't be granular/specific enough.

I do this by creating a camera that can only see a limited near/far plane, essentially turning it into a bounding box. That is easy enough to blob track, although you won't get the specifics of left hand vs. right hand etc. like you would from skeleton tracking.
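A minimal sketch of that slab idea in plain Python, assuming lists of (x, y, z) points and illustrative near/far thresholds (in TouchDesigner you'd set the camera COMP's near/far planes and feed the render to a Blob Track TOP instead):

```python
def slab_filter(points, near=0.5, far=1.5):
    """Keep only points whose depth (z, in metres) falls inside the
    near/far slab -- the same culling the limited-range camera does.
    Thresholds here are illustrative, not from the thread."""
    return [p for p in points if near <= p[2] <= far]

def centroid(points):
    """Average position of the surviving points: a crude stand-in
    for a blob-track centre."""
    n = len(points)
    return tuple(sum(c) / n for c in zip(*points))
```

As noted above, this gives you a position per blob but no joint identity; for left-hand/right-hand distinctions you'd still need skeleton tracking.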