Generating Pointclouds from Depth Map / Color Image

Hello! Is it possible to use a depth map (black and white image from a depth sensor, in my case the Kinect for Azure) to generate a pointcloud? Can you also use the color image from the depth sensor to color that same pointcloud? It’s essentially what the KinectAzure CHOP is doing I imagine, but deconstructed.

Any help would be greatly appreciated - thank you!

Hey! There’s actually an example of the Kinect point cloud in the Palette within TD. Take a look under Techniques, there’s kinectAzurePointcloud, and kinectPointcloud. They should help you get started at least!


Thanks for the reply! That’s a great start but it seems it’s still missing one aspect - The ‘in’ node is already taking generated pointcloud data from the azure node, instead of a depth image/sequence. I can’t find a function that’s turning the depth image data INTO the pointcloud, and it seems like I can’t see what’s happening behind the scenes in the azure node. Do you have any more insight into how to achieve this? Thank you!

Ah sure, so the depth image is handed to the Geo here; the Geo uses the rgb (xyz) data in the image (positionInstances) and instances a point at each of those positions.

Inside the geo there’s an Add and a Convert SOP that create a point within the geometry, this is what gets instanced.
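If it helps to see the idea outside of TD, here's a small numpy sketch of what "one instanced point per pixel" amounts to. The texture values here are made up; in the network, the rgb channels would come from the depth-derived position image feeding positionInstances:

```python
import numpy as np

# Hypothetical 4x4 position texture: each pixel's rgb holds an xyz
# position, the way the depth-derived image drives positionInstances.
pos_tex = np.random.rand(4, 4, 3)

# Instancing the single Add/Convert SOP point once per pixel is
# equivalent to flattening the texture into an (N, 3) point list.
points = pos_tex.reshape(-1, 3)
print(points.shape)  # (16, 3)
```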

Does that make sense?

Still a little confused - apologies. What is the context of the positionInstances node and add1 and convert1 nodes?

@jessekirbs If I understand correctly, you're trying to get the point cloud (xyz) data directly from the depth image rather than using the point cloud image generated by the Kinect Azure TOP?

That conversion is currently done on the Kinect Azure itself, but the basic idea is that you’re projecting rays from the camera position out to the distance indicated by the depth map and putting a point there. The main thing you need here is just the field of view of the camera so that you can figure out the ray angle for each pixel in your depth map.
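A quick numpy sketch of that idea, assuming the depth image is a 2D array of distances in metres. The field-of-view values below are placeholders, not the Azure Kinect's actual FOV; substitute your sensor's real numbers:

```python
import numpy as np

def depth_to_points(depth, fov_h_deg=90.0, fov_v_deg=59.0):
    """Project each depth pixel along its camera ray to get an xyz point."""
    h, w = depth.shape
    # Per-pixel ray angle, spread symmetrically across the field of view
    ang_h = np.deg2rad(np.linspace(-fov_h_deg / 2, fov_h_deg / 2, w))
    ang_v = np.deg2rad(np.linspace(fov_v_deg / 2, -fov_v_deg / 2, h))
    tan_h = np.tan(ang_h)[None, :]   # shape (1, w)
    tan_v = np.tan(ang_v)[:, None]   # shape (h, 1)
    # Place the point at the depth distance along that ray
    x = depth * tan_h
    y = depth * tan_v
    z = depth
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

This treats depth as distance along the z axis (a common depth-map convention); a true radial distance would need an extra cosine correction.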

Hey Rob! Yes, exactly. I have a depth image (or sequence) like so from the Kinect for Azure:

Is there any sort of example setup that demonstrates this technique? Thank you!

I don’t have a touch example off hand, but I think this article goes over the idea fairly well: From depth map to point cloud. How to convert a RGBD image to points… | by yodayoda | yodayoda | Medium

The article includes python code, but you could also do that in a glsl shader in touch. The Kinect Azure TOP class (kinectazureTOP Class - Derivative) also has a depthCameraIntrinsics member that can retrieve the fx, fy, cx, cy values for the kinect.
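For reference, the pinhole-camera unprojection that the article walks through looks roughly like this in numpy, with fx, fy, cx, cy being whatever depthCameraIntrinsics reports for your device (the test values here are placeholders):

```python
import numpy as np

def unproject(depth, fx, fy, cx, cy):
    """Pinhole model: lift each depth pixel (u, v) to camera-space xyz."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)
```

The same per-pixel math ports directly to a GLSL TOP, which is the faster route in TouchDesigner for full-resolution depth streams.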


Sorry, but I don't understand why you need the depth map for a point cloud. The Kinect Azure gives you a point cloud directly. I don't have one at hand right now, but I'm sure you can take the point cloud and a Kinect Azure Select TOP with the color aligned, then instance spheres using the first TOP for position and the second for color. You don't need to use the depth map. Here's a quick example with the Kinect V2.

Awesome, I’ll go through this. Thanks, Rob!

hey jacques - I’m dealing with pre-recorded Kinect footage, so I only have the color/depth maps to work with.

You can reproduce a point cloud fairly easily from the depth map, using ramps for the red and green channels. For the color, you have to crop and scale it to match the depth map (it's not the same resolution and format).
Here is a quick example:
kinectDepth.toe (4.7 KB)


Thank you very much for this, @jacqueshoepffner! I’ll take a look at this tonight.

@jacqueshoepffner This was a great start, so thanks again for sending it over. I've managed to plug in my depth image and color video, and it's certainly creating points in Z-depth. However, there doesn't seem to be any range or blending like you'd see from the Kinect; it's as if the depth-map values are clamped so each point is either fully extruded or stuck in the background:


Is this something that can be tweaked in your setup to get the more robust pointclouds that have more precise depth per point? Thanks @jacqueshoepffner !

If you look at my project, I use a Math TOP on the red and green channels (the ramps) to transform 0/1 into -1/1 so the cloud is centered on 0, and for the blue channel I expand the scale to get a reasonable depth, since all of this is arbitrary.
Have you tried recording the original point cloud? Or are the depth map and color the only things available?
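For anyone following along outside of TD, the Ramp + Math TOP chain amounts to roughly this numpy sketch. The depth scale is arbitrary, as noted, so the value below is just a placeholder to tune:

```python
import numpy as np

def ramp_positions(depth_map, depth_scale=5.0):
    """Build an xyz position map the way the example .toe does:
    red = horizontal ramp, green = vertical ramp, blue = scaled depth."""
    h, w = depth_map.shape
    # Ramp TOPs: 0..1 gradients across x and y
    ramp_x = np.tile(np.linspace(0.0, 1.0, w), (h, 1))
    ramp_y = np.tile(np.linspace(0.0, 1.0, h)[:, None], (1, w))
    # Math TOP: remap 0..1 to -1..1 so the cloud is centered on the origin
    x = ramp_x * 2.0 - 1.0
    y = ramp_y * 2.0 - 1.0
    # Depth stays 0..1; scale it to an (arbitrary) sensible range
    z = depth_map * depth_scale
    return np.stack([x, y, z], axis=-1)
```

Note this is an orthographic mapping (x/y don't spread with distance), which is part of why it looks flatter than the intrinsics-based unprojection mentioned earlier in the thread.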

Thanks for the clarification, @jacqueshoepffner! For further tests I will record the direct pointcloud, but for this experiment I only have the depth and color maps available.