Hi there.
I have a Kinect mounted slightly overhead and I’m looking for a way to get a depth image as if it were mounted at body height.

There’s a ‘pointWeight’ COMP in the Palette, which looked really promising: it seems to be set up to apply a matrix transform to the points and then recalculate pixel brightness based on distance.

However, the matrix transform there doesn’t seem to do anything. On closer inspection there’s no vertex shader (which I presume is needed to transform the positions), and I also see a few red errors in the Vectors tab of the glslTOP in there, so I wanted to share in case this is just a bug.

If not, then any advice would be welcome. I know how to do this sort of transformation using instancing (like the pointTransform COMP does), but the result I need is a 2D float32 image whose brightness values are based on the new point positions. I also tried deforming a grid with the Kinect depth and then moving the camera position in a Geometry COMP, but I was unable to map distance to brightness and the result was a bit noisy and slow.

Thanks (and apologies if this is obvious - I imagine this is probably a common problem)

The pointWeight component is meant to calculate a weight map or selection mask for a point cloud. By default, all points within a unit sphere will get a value of 1 in the output and all points outside the sphere will get a zero. This output mask can then be used to filter out points in the pointTransform component or using the instance active channel in the geometry component.

The transformation parameters allow you to change the size and position of the unit sphere so that you can control which points are selected. Inside the comp, it works by computing the inverse matrix and passing it to the GLSL pixel shader, which transforms the points into the unit sphere’s space.
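To make the inverse-transform idea concrete, here’s a minimal plain-Python sketch of the per-pixel logic (the names and the uniform-scale-plus-translate transform are my own simplification, not the comp’s actual shader code):

```python
import math

def make_inverse_transform(tx, ty, tz, s):
    # Inverse of a uniform-scale-then-translate transform:
    # undo the translation first, then undo the scale.
    def apply(p):
        return tuple((c - t) / s for c, t in zip(p, (tx, ty, tz)))
    return apply

def point_weight(p, inv):
    # Transform the point into unit-sphere space, then test it:
    # inside the unit sphere -> weight 1, outside -> weight 0.
    x, y, z = inv(p)
    return 1.0 if math.sqrt(x * x + y * y + z * z) <= 1.0 else 0.0

inv = make_inverse_transform(2.0, 0.0, 0.0, 0.5)  # sphere at x=2, radius 0.5
point_weight((2.1, 0.0, 0.0), inv)  # near the sphere's centre -> 1.0
point_weight((0.0, 0.0, 0.0), inv)  # far from the sphere -> 0.0
```

The same test runs per pixel in the shader, reading the point position from the pixel’s RGB.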

Since the point data is stored in each pixel’s color rather than in vertices, we don’t actually need a special vertex shader.

The red errors you noticed inside the comp are left over from a previous iteration and aren’t used anymore. Thanks for pointing them out. We’ll get them cleaned up in the next update.

As far as ideas go for generating a new 2D depth map, I’m not entirely sure what the best technique would be. Rendering the point cloud from a new perspective with a shader that outputs the depth is fairly straightforward, but you’ll be left with a lot of gaps in your image, and working out a good interpolation shader may be complicated.
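A rough sketch of that re-projection idea, assuming a simple orthographic view down -Z (a real implementation would use a proper camera matrix): each point is splatted into a pixel grid keeping the nearest depth, and any pixel no point lands in stays a gap.

```python
def splat_depth(points, width, height, x_range=(-1.0, 1.0), y_range=(-1.0, 1.0)):
    # None marks a gap: a pixel no point projected into.
    depth = [[None] * width for _ in range(height)]
    for x, y, z in points:
        # Map world x/y into pixel coordinates.
        px = int((x - x_range[0]) / (x_range[1] - x_range[0]) * (width - 1))
        py = int((y - y_range[0]) / (y_range[1] - y_range[0]) * (height - 1))
        if 0 <= px < width and 0 <= py < height:
            d = depth[py][px]
            if d is None or z < d:  # keep the closest point per pixel
                depth[py][px] = z
    return depth

dm = splat_depth([(0.0, 0.0, 2.0), (0.0, 0.0, 1.5), (0.9, 0.9, 3.0)], 4, 4)
# Most cells of dm remain None: those are the gaps you'd need to interpolate.
```

This makes the gap problem visible: the sparser the cloud relative to the output resolution, the more `None` cells you need to fill.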

It might be worth looking into the OpenCV libraries.

Is your goal to use thresholdTOP or similar to remove the foreground/background?

If so, you can accomplish this by multiplying your kinectTOP (depth mode) by a rampTOP. Simply make your rampTOP a black-to-white gradient angled opposite to your Kinect’s tilt. You can be very precise by looking up the depth values at the person’s head vs. ankle and adjusting the ramp until these values come out the same. This is the technique I use.
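The math behind that ramp trick can be sketched like this (names and sample values are my own, just for illustration): with a tilted Kinect, a standing person’s head and ankle read different depths, so you multiply each row by a linear gain chosen so the two sampled values match.

```python
def ramp_gain(y, y_head, d_head, y_ankle, d_ankle):
    # Linear gain in y: 1.0 at the head row, d_head / d_ankle at the
    # ankle row, so depth * gain is equal at both rows.
    g_head, g_ankle = 1.0, d_head / d_ankle
    t = (y - y_head) / (y_ankle - y_head)
    return g_head + t * (g_ankle - g_head)

# Hypothetical sampled depth values (normalised 0..1) at two image rows:
d_head, d_ankle = 0.40, 0.50
corrected_head = d_head * ramp_gain(100, 100, d_head, 400, d_ankle)
corrected_ankle = d_ankle * ramp_gain(400, 100, d_head, 400, d_ankle)
# Both corrected values come out equal, flattening the tilt gradient,
# so a single threshold now separates foreground from background.
```

The rampTOP multiply in TouchDesigner is exactly this per-pixel linear gain, just done on the GPU.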

Thanks for the help, guys. Just wanted to let you know I worked this out. Basically, I rendered a point cloud from the Kinect and adjusted my camera position to counteract the physical Kinect mounting position. Then, by rendering the points with a Line MAT, I was able to control their brightness and size based on distance from the camera, so the net result was a depth image from a different virtual angle. It’s not perfect, as there are some gaps and shadows in the Kinect data, but it’s good enough for what I need. Thanks for the tips, much appreciated!
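For anyone finding this later, the distance-to-brightness mapping described above can be sketched like this (the near/far range and camera position are assumptions, not values from the actual project):

```python
import math

def brightness(point, cam, near=0.5, far=4.0):
    # Euclidean distance from the point to the virtual camera,
    # mapped linearly: 1.0 at the near plane, 0.0 at the far plane.
    d = math.dist(point, cam)
    b = (far - d) / (far - near)
    return max(0.0, min(1.0, b))  # clamp to [0, 1]

cam = (0.0, 1.2, 0.0)  # virtual camera at body height, not the mount position
brightness((0.0, 1.2, 2.0), cam)  # a point 2 m in front of the camera
```

In the Line MAT setup this mapping would live in the material’s distance-based width/color controls rather than in Python, but the falloff is the same idea.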