I currently have a way of doing this that involves shared memory with an outside program, but I think it would be really easy to incorporate into the Kinect TOP. Basically, I’ve found it extremely useful to have the full point cloud from the Kinect (not the depth map, which has the problem of being tied to the Kinect’s field of view). My solution was to encode the point cloud as a floating point texture where R = x, G = y, and B = z. The resulting texture still appears to be photographed from the Kinect’s field of view, except each pixel encodes the true coordinates, in meters, of the point it represents. This has all kinds of useful ramifications. For example, I can matte the point cloud texture with the player index texture and extract just the point cloud of a single person. Then, on the GPU, I can render this point cloud with any field of view or from any camera position. It’s really only a few lines of code to create the texture, and I’d be happy to share the code!
Interesting. We are looking at adding remapping capabilities for the color/depth images. In your solution, if it’s from the color camera’s point of view, how do you decide the Z coordinate?
The texture stores the XYZ coordinates in camera space encoded as RGB values, so the z coordinate I just store in the blue component of the texture. It also has the interesting property of being formatted like the depth image, which makes it useful for reasons I discussed in my previous post. I can send you the full project if you’re interested, but the key part of the code is as follows:
pointCloudSHM->lock();
float *sharedFloatBuffer = (float *)pointCloudHeader->getImage();
memcpy(sharedFloatBuffer, cameraPoints, totalImageSizePointCloud);
pointCloudSHM->unlock();
The function MapDepthFrameToCameraSpace will take the Kinect’s depth frame and generate a point cloud with the XYZ packed into an array of 32-bit floats. Since I’ve set up my shared memory texture as a 32-bit RGB texture, it fits perfectly. The result is a texture with the same resolution as the depth frame that looks similar to it, in the sense that the image you see matches the depth/IR frame of the Kinect, but it carries far more information than the depth frame: every pixel contains the x, y, z coordinates in camera space. I use this in conjunction with this touch network I posted here:
forum.derivative.ca/t/rendering-a-point-cloud-from-a-texture-sampler/5071/1 . It lets me render point clouds using textures as inputs. Below is the code where I set up the buffers/textures:
// set up our shared memory for our point cloud (12 bytes per pixel)
totalImageSizePointCloud = (cDepthWidth * cDepthHeight * 3 * 4);

// header + height x width * number of bytes per pixel
memsize = sizeof(TOP_SharedMemHeader) + totalImageSizePointCloud;
pointCloudSHM = new UT_SharedMem("pointCloud", memsize); // figure out the size based on the OP
pointCloudHeader = (TOP_SharedMemHeader *)pointCloudSHM->getMemory();
pointCloudHeader->height = cDepthHeight;
pointCloudHeader->width = cDepthWidth;
pointCloudHeader->dataFormat = GL_RGB;
pointCloudHeader->pixelFormat = GL_RGB32F_ARB;
pointCloudHeader->dataOffset = depthHeader->calcDataOffset();
pointCloudHeader->magicNumber = TOP_SHM_MAGIC_NUMBER;
pointCloudHeader->version = TOP_SHM_VERSION_NUMBER;
pointCloudHeader->dataType = GL_FLOAT;
cameraPoints = (CameraSpacePoint *)malloc(totalImageSizePointCloud);
The next official build will have a ‘Color Point Cloud’ option in the Kinect TOP that outputs a 1920x1080 texture with XYZ in its RGB channels, relative to the color camera.
Further ones relative to the Depth camera will come next.
Sounds great, thanks! I’m not entirely sure how you get a full 16:9 frame of depth values when the depth sensor is closer to a square aspect ratio. Are they letterboxing the depth data or something?
The posted build now has the feature; here is a sample .toe using it:
pointCloud.toe (6.83 KB)
I’m connecting my Kinect v2!
Well done!
- I see an FPS drop to ~28 avg (GTX 660 Ti) when using the color point cloud option.
- Your example is great; I only had to rotate the camera 180 degrees and change the z position.
(As far as I noticed.)
Keep up the great work!
Would it be a lot of work to add another layer of “player index”, maybe as an alpha channel option?
Or just another output aligned with the 1080p output.
It would help a lot to isolate users…
Exactly what I needed!
What makes me sad is that I want to record the Kinect point cloud output…
The Kinect point cloud needs a pixel format of at least 16-bit float.
Is there no Movie File Out that can export 16-bit video?
This is awesome indeed !
Question though :
How can I record the Color Point Cloud texture in an animated movie file?
I want to synchronize the 2 Point Clouds of 2 Kinects (back/front) to reproduce full 3D.
Did it with 2 stills already
My plan is to sync the 2 videos in editing software, then use them in Touch to generate 2 merged Geos.
Thanks for this @derivative!
This color point cloud feature is awesome, super handy. I’m seeing some artifacts in the cloud, though, because it’s been scaled up and remapped into the color camera point of view. It’s also cropping the top and bottom of the point cloud.
Malcolm, any chance of adding a second “point cloud” texture, but leaving it unmapped and uncropped, straight from the depth view? It’s possible to reproject the depth image into XYZ with a GLSL TOP, but it’d be awesome to have this built right in!
Hello, is there any suggestion on how to align this image with a projector?
I’m aligning the camera manually in 3D space, trying to match the projector position, and I set the camera FOV to a value like the projector’s FOV, but it doesn’t work.
Is there something else to do?
Just curious - are you running two kinect sensors (k4w v2) on the same machine in the same touch process?
I’m looking at the best way to use multiple Kinect v2 sensors in TD. It looks like they may have to run on separate machines, with Touch In/Out around the place…?
How did you go with this? Did you solve your problem? I am currently looking at how to calibrate with the real-world view of the projector.
I was playing with this today and saw that you did a great job of assigning depth to the 1080p RGB texture.
This is nice and very useful, but I’d rather get the 512x424 depth image with the color data. Is there a way to do so? I feel like the color-to-depth mapper is in C, so we can’t use it without a change in the TOP code.
I want to record a performance. Should I record both the point cloud texture and the RGB camera texture, or does the point cloud texture already have the RGB color of the color camera?
The point cloud texture provides the XYZ values of the points.
The color camera only provides the RGB data.
They are two separate textures.
New to TD.
Experimenting with the point cloud and the example offered, thank you…!
Not sure how to explain my issue. I seem to have a large shadow/vacuum of points around the outline of my body that moves to cover the points representing my body the closer I get to the Kinect. Is this an artifact or a byproduct of the process?