Hi there,
I am trying to use the depth information provided by both a Kinect and an Orbbec Femto Mega. Below are two images of me standing in the same place. When I view the depth stream in the SDK, it looks like the colour image: very sensitive, lots of range, lots of granularity in the data for perceiving depth. But when I view the same stream in TD, it is blown out in the background and I only get a gradient in the first 1.5m or so. How do I get the depth info that the SDK can see? It exists in the point cloud, so it's coming into TD somehow…


