Generating Pointclouds from Depth Map / Color Image

Hello! Is it possible to use a depth map (black and white image from a depth sensor, in my case the Kinect for Azure) to generate a pointcloud? Can you also use the color image from the depth sensor to color that same pointcloud? It’s essentially what the KinectAzure CHOP is doing I imagine, but deconstructed.

Any help would be greatly appreciated - thank you!

Hey! There’s actually an example of the Kinect point cloud in the Palette within TD. Take a look under Techniques, there’s kinectAzurePointcloud, and kinectPointcloud. They should help you get started at least!


Thanks for the reply! That’s a great start but it seems it’s still missing one aspect: the ‘in’ node is already taking generated pointcloud data from the azure node, instead of a depth image/sequence. I can’t find a function that’s turning the depth image data INTO the pointcloud, and it seems like I can’t see what’s happening behind the scenes in the azure node. Do you have any more insight into how to achieve this? Thank you!

Ah sure, so the depth image is handed to the geo here; the geo uses the rgb (xyz) data in the image (positionInstances) and instances it as points.

Inside the geo there’s an Add and a Convert SOP that create a single point within the geometry; this is what gets instanced.

Does that make sense?

Still a little confused - apologies. What is the context of the positionInstances node and add1 and convert1 nodes?

@jessekirbs If I understand correctly, you’re trying to get the point cloud (xyz) data directly from the depth image rather than using the point cloud image generated by the kinect azure top?

That conversion is currently done on the Kinect Azure itself, but the basic idea is that you’re projecting rays from the camera position out to the distance indicated by the depth map and putting a point there. The main thing you need here is just the field of view of the camera so that you can figure out the ray angle for each pixel in your depth map.
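
To make that concrete, here’s a rough sketch (plain Python, not TouchDesigner-specific; the function name and the example numbers are placeholders): given a horizontal field of view and the depth image width, you can recover an equivalent focal length in pixels and, from that, the ray angle for any pixel column.

```python
import math

def pixel_to_ray_angle(u, width, fov_h_deg):
    """Horizontal ray angle (radians) for pixel column u of a depth image
    that is `width` pixels wide, given the camera's horizontal FOV."""
    f = (width / 2.0) / math.tan(math.radians(fov_h_deg) / 2.0)  # focal length in pixels
    return math.atan2(u - width / 2.0, f)

# Example: the centre column looks straight ahead, so the angle is ~0.
print(pixel_to_ray_angle(320, 640, 75.0))  # 75 deg is just a made-up FOV here
```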

Hey Rob! Yes, exactly. I have a depth image (or sequence) like so from the Kinect for Azure:

Is there any sort of example setup that demonstrates this technique? Thank you!

I don’t have a touch example off hand, but I think this article goes over the idea fairly well: From depth map to point cloud. How to convert a RGBD image to points… | by yodayoda | Map for Robots | Medium

The article includes python code, but you could also do that in a glsl shader in touch. The Kinect Azure TOP class (kinectazureTOP Class - Derivative) also has a depthCameraIntrinsics member that can retrieve the fx, fy, cx, cy values for the kinect.
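
The core of what the article does can be sketched in a few lines of numpy (an untested illustration of the math rather than the exact Kinect transform, since it ignores lens distortion; fx, fy, cx, cy would come from depthCameraIntrinsics as mentioned above):

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Unproject an H x W depth image (depth measured along the camera Z axis)
    into an H x W x 3 array of XYZ positions using pinhole intrinsics."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth.astype(np.float32)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.dstack((x, y, z))
```

In TouchDesigner the same math could live in a GLSL TOP, with ramp TOPs standing in for the (u - cx) and (v - cy) terms.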


Sorry, but I don’t understand why you need the depth map for a point cloud. The Kinect Azure gives you a point cloud directly. I don’t have one within reach right now, but I’m sure it works: you can take the point cloud and add a Kinect Azure Select TOP with the color aligned to it. Then you can instance spheres using the first TOP for position and the second for color. You don’t need to use the depth map. Here is a quick example with the Kinect V2.

Awesome, I’ll go through this. Thanks, Rob!

hey jacques - I’m dealing with pre-recorded Kinect footage, so I only have the color/depth maps to work with.

You can reproduce a point cloud fairly easily from the depth map, using ramps for the red and green layers. For the color, you have to crop and scale it to match the depth map (not the same resolution and format).
Here is a quick example:
kinectDepth.toe (4.7 KB)
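
Roughly, the ramp technique amounts to something like this in numpy terms (an untested sketch; the exact depth scale is arbitrary):

```python
import numpy as np

def ramps_plus_depth_to_xyz(depth, depth_scale=2.0):
    """Red/green ramps give normalized x/y in 0..1, remapped to -1..1,
    and the depth map drives z. depth_scale is arbitrary."""
    h, w = depth.shape
    r = np.tile(np.linspace(0.0, 1.0, w), (h, 1))    # horizontal ramp
    g = np.tile(np.linspace(0.0, 1.0, h), (w, 1)).T  # vertical ramp
    x = r * 2.0 - 1.0                                # 0..1 -> -1..1
    y = g * 2.0 - 1.0
    z = depth.astype(np.float32) * depth_scale
    return np.dstack((x, y, z))
```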


Thank you very much for this, @jacqueshoepffner! I’ll take a look at this tonight.

@jacqueshoepffner This was a great start, so thanks again for sending this over. I’ve managed to plug in my depth image and color video and it’s certainly creating points in Z-depth; however, it seems that there is no range or blending like you’d see from the Kinect. It’s as if the values of the depth map are clamped to either be extruded or in the background:


Is this something that can be tweaked in your setup to get more robust point clouds with more precise depth per point? Thanks @jacqueshoepffner!

If you look at my project, I use a Math TOP: on the red and green (the ramps), I transform 0/1 to -1/1 so that it’s centered on 0, and for the blue I expand the scale to get a reasonable depth, because all of this is arbitrary.
Have you tried recording the original point cloud? Or are the depth map and color the only things available?

Thanks for the clarification, @jacqueshoepffner! For further tests I will record the direct pointcloud, but for this experiment I only have the depth and color maps available.

Hello @jessekirbs,

I do get your question.
I have been facing the same difficulty.

I think you could start with this:

The trick is to convert two 16-bit floating-point values packed into a single 32-bit integer into a vector of two 32-bit floating-point quantities.
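
As a sketch of that unpacking in numpy (assuming the two 16-bit halves are IEEE half floats packed little-endian, the same convention as GLSL’s unpackHalf2x16; your data may differ):

```python
import numpy as np

def unpack_half2x16(packed):
    """Split a 1-D or 2-D array of uint32 words into pairs of float32 values,
    treating each 16-bit half as an IEEE half-precision float."""
    words = np.ascontiguousarray(packed, dtype=np.uint32)
    halves = words.view(np.float16)              # two float16 per uint32
    return halves.astype(np.float32).reshape(*words.shape, 2)
```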

Hope this helps

Bisou !

Hi,
I am facing the same issue.
I want to record the depth map of an Azure Kinect as a raw depth map (16-bit or 32-bit float mono).
This is much less heavy on performance than capturing the already transformed xyz pointcloud texture.

But then I want/need to transform it to xyz after recording for post processing.
@robmc this seems to be very promising: KinectazureTOP_Class
I am new to Python in TD (last used it when it was mostly TScript).
Could you give me a hint on how to use that method to transform the raw depth TOP into an RGB TOP with proper xyz values (taking the Kinect intrinsics/distortion into account)?

Would be much appreciated!
Thanks

I’ve attached a simple tox I made a while ago that can project an arbitrary depth map into a point cloud using some of the intrinsics. It uses the same general technique mentioned above using scaled ramp TOPs.

Depending on how accurate you need to be, the results are pretty close to the original point cloud; however, when I compare them closely there are definitely some further transformations going on in the kinect that I’m not too clear on.

depthProjection.tox (1.2 KB)
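
If it helps with the Python side, something along these lines could read the intrinsics from a live Kinect Azure TOP and drop them into a table for a network like this to use (untested; ‘kinectazure1’ and ‘intrinsics’ are placeholder operator names, and the attribute names on the intrinsics object are assumed from the kinectazureTOP Class page linked earlier):

```python
# Run from a Text DAT or an Execute DAT inside TouchDesigner.
k = op('kinectazure1')                  # your Kinect Azure TOP
intr = k.depthCameraIntrinsics          # documented member of the TOP class

table = op('intrinsics')                # a Table DAT to hold the values
table.clear()
for name in ('fx', 'fy', 'cx', 'cy'):   # attribute names assumed from the docs
    table.appendRow([name, getattr(intr, name)])
```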


Hi,
thanks for your prompt reply!
Unfortunately I need it to be very precise, so I need to somehow get the actual transform that’s defined in the Azure Kinect SDK, I suppose.

With your approach, the expected input values seem to be in quite a different range from what the Azure intrinsics spit out when calling “depthCameraIntrinsics” for the Azure TOP.
My result is:
cx=323.996826171875, cy=324.98052978515625, fx=504.1566162109375, fy=504.3530578613281
How would I convert them so they fit in your 0-1 normalized values?

Thanks!
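
(For what it’s worth, intrinsics reported in pixel units are usually normalized by dividing by the image resolution; whether that is what the tox above expects is an assumption. With a 640 x 576 depth map, for example:)

```python
width, height = 640, 576                  # example depth resolution
cx_n = 323.996826171875 / width           # ~0.506
cy_n = 324.98052978515625 / height        # ~0.564
fx_n = 504.1566162109375 / width          # ~0.788
fy_n = 504.3530578613281 / height         # ~0.876
```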

Does anybody else have an idea how to get the actual depth-to-xyz transform that’s done internally in the Azure Kinect TOP?