Kinect and RealSense point cloud washed quadrant problem

It seems to me that the Kinect 2 and the RealSense TOPs have some sort of issue. I only see it when I select the point cloud view. Please see the picture attached. It seems like one quadrant of their TOPs is washed out by a solid color, giving the illusion that all pixels are in a similar plane. That is not the case in the room.

INFO: The computer is new, as of this month. I have the latest stable build of TouchDesigner, and the SDKs for Kinect 2 and RealSense.

Does anyone have any experience with this? Any help or pointers would be greatly appreciated.

Have you converted this to a point cloud yet? I’m wondering if the illusion here is partly because of the bit depth of the data.


Yeah, it’s because the data is 32-bit float, so the values go outside the ‘visual’ range. You can use a Math TOP to rerange it so you can ‘see’ the data, but it looks correct from here.

Thank you both for answering.

Matt, when you mention trying the point cloud, do you mean this? Or do you mean instancing geometry with the data from the Kinect?

ben, is there a standard way to bring 32-bit float data into the visual range? I made some numbers up in the ‘From’ and ‘To’ ranges, but I’m not sure if there are standard values that should go there.

Quick update: I used it to instance a bunch of spheres and the data looked fine. As you both suggested, it was just fine. I am still curious if there is a set of correct values to rerange it into the visual range.

A related question:
I am trying to isolate only what happens at a certain depth, i.e. if it is too far or too close, ignore it. I am using the Chroma Key TOP for that. As long as I maintain the data in 32-bit float, is this the best way to clip the depth data?

The range totally depends on the range of the data. Visual range is only 0-1. The values in the point cloud will be in meters, so something 3 meters away will need to be reranged from 0-3 → 0-1 for it to be right at the top end of the brightness.
Basically, set the range based on the size of your area of interest.
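
For reference, here’s a minimal Python sketch of the linear remap that the Math TOP’s From/To Range parameters perform; the near/far bounds are just example values for a 0–3 m area of interest, not standard numbers:

```python
def remap_depth(depth_m, near=0.0, far=3.0):
    """Map a depth value in meters into the 0-1 'visual' range.

    near/far are the bounds of your area of interest (example values);
    a point at `far` meters lands at full brightness (1.0).
    """
    return (depth_m - near) / (far - near)

# Example: a point 1.5 m away in a 0-3 m area of interest
print(remap_depth(1.5))  # 0.5 -> mid grey in the viewer
```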

There’s a lot of ways to tackle this one, so bear with me for a moment.

Do you want a point cloud at a certain depth, or are you only looking for whether objects / a human exist at that depth?

Is this for triggering an event / state / something else? Or is this more of a visualization piece?

How I might approach this would be influenced by some of those pieces. Glad that the point cloud all got sorted though :fire:

malcolm Super helpful, thanks. Knowing that, I can start to plug in numbers based on the location.

raganmd This is more of a trigger type thing. The basic idea is to place the Kinect looking down at a table, clip the range to only show hands moving on top of the surface, and use blob tracking to follow their position.

Ahh - in that case I don’t think I’d actually use the Depth point cloud. Instead, I’d just use the depth image - from there you can threshold based on luminance to only get hands at a specific depth or greater - the data from that process is going to be much more useful for blob tracking than the Depth point cloud in this use case.
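
As a rough illustration of that thresholding step, here’s a small Python/NumPy sketch. It assumes you can get the depth frame as a 2D array of values in meters (for example via a TOP’s numpyArray() method), and the near/far band values are made up for a hands-over-a-table setup:

```python
import numpy as np

def depth_band_mask(depth_m, near=0.6, far=0.9):
    """Return a binary mask keeping only pixels inside a depth band.

    depth_m: 2D array of depth values in meters.
    near/far: hypothetical band where hands hover above the table.
    """
    mask = (depth_m > near) & (depth_m < far)
    return mask.astype(np.float32)  # 1.0 where a hand is, 0.0 elsewhere

# Example with fake data: a tiny "depth frame" in meters
frame = np.array([[1.2, 0.80, 0.70, 1.1],
                  [1.2, 0.75, 0.70, 1.1],
                  [1.2, 1.20, 1.20, 1.1]], dtype=np.float32)
print(depth_band_mask(frame))
```

The resulting black-and-white image is exactly the kind of input blob tracking works well on.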

What’s cool about the point cloud data is that it uses colors to represent XYZ positions. Blue is your depth channel. I believe it has more resolution than the depth output, if I remember correctly.

Although raganmd’s suggestion to use the depth output is a good one. It’s much easier to tailor a grayscale image to set cutoff limits.