Hello! Is it possible to use a depth map (a black-and-white image from a depth sensor, in my case the Azure Kinect) to generate a point cloud? And can you also use the color image from the same sensor to color that point cloud? I imagine it's essentially what the Kinect Azure CHOP is doing, but deconstructed.
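For context, my understanding is that this amounts to the standard pinhole-camera unprojection: each depth pixel (u, v) with depth z maps to a 3D point using the camera intrinsics. Here is a rough NumPy sketch of what I mean; the intrinsics (fx, fy, cx, cy) are placeholders you would read from the sensor's calibration, and it assumes depth in meters with the color image already registered (aligned) to the depth image:

```python
import numpy as np

def depth_to_pointcloud(depth_m, color, fx, fy, cx, cy):
    """Unproject a depth image (meters) into an N x 6 array of XYZRGB points
    using the pinhole camera model. Assumes `color` is registered to `depth_m`."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    valid = z > 0  # zero depth = no reading from the sensor
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    xyz = np.stack([x[valid], y[valid], z[valid]], axis=1)
    rgb = color[valid].astype(np.float32) / 255.0  # normalize 8-bit color
    return np.hstack([xyz, rgb])

# Tiny synthetic example: a flat gray plane 2 m from the camera.
depth = np.full((4, 4), 2.0)
color = np.full((4, 4, 3), 128, dtype=np.uint8)
points = depth_to_pointcloud(depth, color, fx=2.0, fy=2.0, cx=2.0, cy=2.0)
print(points.shape)  # (16, 6): one XYZRGB point per depth pixel
```

So per point: X = (u - cx) * z / fx, Y = (v - cy) * z / fy, Z = z, and the color is just sampled at the same pixel.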
Any help would be greatly appreciated - thank you!