Translating depth data into useful range

Hi there,

I am trying to use the depth information provided by both a Kinect and an Orbbec Femto Mega. Below are two images of me standing in the same place. When I view the data stream in the SDK it looks like the colour image: very sensitive, lots of range, lots of granularity in the data for perceiving depth. But when I view the same data stream in TD, it is blown out in the background and I only have gradient in the first 1.5 m or so. How do I get the depth info that the SDK can see? It exists in the point cloud, so it's coming into TD somehow…

Hello,

It works as expected: it's a 16-bit mono image, so there are no nuanced colors!

To obtain colors:

  • a Math TOP to re-range the input values to 0–1
  • a Lookup TOP set to 8-bit RGBA, with a ramp of your favourite colors
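Outside of TD, the same two steps can be sketched in plain Python with NumPy. This is only an illustration of what the re-range and lookup do, not TouchDesigner code; the `near`/`far` millimetre values and the two-color ramp are my own assumed examples:

```python
import numpy as np

def normalize_depth(depth_mm, near=500, far=4500):
    """Re-range raw 16-bit depth (mm) to 0-1, clamped -- like the Math step.
    near/far are assumed example distances in millimetres."""
    d = (depth_mm.astype(np.float32) - near) / float(far - near)
    return np.clip(d, 0.0, 1.0)

def apply_ramp(normalized, ramp):
    """Map each 0-1 value through an RGBA ramp -- like the Lookup step."""
    idx = (normalized * (len(ramp) - 1)).astype(int)
    return ramp[idx]

# assumed example ramp: 256 RGBA entries fading from blue to red
ramp = np.linspace([0.0, 0.0, 1.0, 1.0], [1.0, 0.0, 0.0, 1.0], 256)

depth = np.array([[500, 2500, 4500]], dtype=np.uint16)  # one-row test image
colored = apply_ramp(normalize_depth(depth), ramp)       # shape (1, 3, 4)
```

Without the re-range, a 16-bit buffer viewed as 0–65535 leaves almost all of the usable depth range crushed into a tiny sliver near black, which is exactly the "blown out" look described above.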

et voilà,

Jacques

kinectColors.1.toe (3.7 KB)