How can I export the point cloud data into Houdini? Anyone?
To export to Houdini…there are so many ways.
- Use a Table DAT and write to a .csv
- Write a .chan file that Houdini can read via CHOPs
- Write a sequential .bgeo sequence
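If you go the .chan route, the format is simple enough to write yourself from a Script DAT or an external script: plain text, one line per frame, one whitespace-separated column per channel. A minimal sketch (the file name and data are placeholders):

```python
# Hypothetical sketch: dump per-frame point data to a Houdini .chan file.
# A .chan file is plain text: one line per frame, one whitespace-separated
# column per channel (here tx ty tz for a single tracked point).

def write_chan(path, frames):
    """frames: list of (tx, ty, tz) tuples, one tuple per frame."""
    with open(path, "w") as f:
        for tx, ty, tz in frames:
            f.write(f"{tx:.6f} {ty:.6f} {tz:.6f}\n")

# Example: three frames of a point moving along X
write_chan("points.chan", [(0.0, 1.0, 0.0), (0.5, 1.0, 0.0), (1.0, 1.0, 0.0)])
```

In Houdini you would then load this with a File CHOP and rename the channels to match what your network expects.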
Try outputting it as .exr with the Movie File Out TOP. Set the Movie File Out TOP to output as an Image or an Image Sequence. Of course, make sure your input is a 32-bit floating-point TOP.
Related to much earlier posts in this thread: the next build of the 2019.10000 series will have a Depth Point Cloud output, which is lower resolution but solves some of the artifact issues the color-space point cloud has.
Digging out this old thread: I'm also struggling with this issue on the newest build of TD, using a Kinect Azure.
Is this weird contour shadow caused by the narrow distance between the person and the wall behind them?
I’m not entirely sure which shadow you’re referring to (posting a screenshot might help), but I know there are some artifacts when you remap the color image to the depth camera (using the Align to Other Camera parameter). This is because of the distance between the two lenses on the sensor, and unfortunately there isn’t much that can be done about it (it is worse when you’re closer to the camera).
You can also get some black (unknown depth) outlines in the depth image, which I believe are caused by difficulties capturing the reflected IR light on surfaces angled away from the camera. You could probably fill some of these in with a shader that uses a min or max on the surrounding pixels.
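Not TouchDesigner GLSL, but as a rough illustration of that min/max idea, here is a plain-Python sketch that fills unknown-depth pixels (value 0) from their 3×3 neighbourhood; a GLSL version would do the same gather per pixel in the fragment shader:

```python
# Rough sketch of hole filling: replace each unknown-depth pixel (0) with
# the maximum valid depth among its 3x3 neighbours. Using max pushes holes
# toward the background; min would pull them toward the foreground instead.

def fill_holes(depth):
    h, w = len(depth), len(depth[0])
    out = [row[:] for row in depth]
    for y in range(h):
        for x in range(w):
            if depth[y][x] != 0:
                continue  # valid pixel, leave it alone
            neighbours = [
                depth[ny][nx]
                for ny in range(max(0, y - 1), min(h, y + 2))
                for nx in range(max(0, x - 1), min(w, x + 2))
                if depth[ny][nx] != 0
            ]
            if neighbours:
                out[y][x] = max(neighbours)
    return out

# Tiny depth image with three holes along the person/wall boundary
depth = [
    [1.2, 1.2, 0.0],
    [1.2, 0.0, 2.5],
    [0.0, 2.5, 2.5],
]
filled = fill_holes(depth)
print(filled[1][1])  # centre hole filled from its neighbours -> 2.5
```

Larger holes would need more than one pass (or a bigger kernel), and you'd want to decide per use case whether snapping to background or foreground looks better.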
Thanks for your quick response @robmc!
That was the reason, along with the narrow distance, as you said.
I’ve also found an article on this by Microsoft; these shadows are occlusions that happen due to a lack of information in the background:
Unfortunately I’m not good at writing shaders, and it would take a lot of work to find the right tweaks, but I was planning to get a second Kinect Azure, and I think that syncing them will improve the image; maybe this way I can get rid of these occlusions.
In the meantime I will start learning how to write an appropriate GLSL shader.
No problem. A second camera can definitely help fill in some of the data obstructed in the other camera’s view. Unfortunately, the sensor doesn’t automatically merge the data from both sources, but there are ways to combine it depending on how you plan to use it.
Let me know if you need any more help.
@robmc: Didn’t think I would ask again so quickly, but is there any documentation you would recommend on combining two Kinect point clouds into a single TOP for instancing?
Unfortunately, I’m not aware of any specific documentation on the process at the moment; however, the basic setup isn’t too difficult.
Generally, all you need in your network are your two Kinect Azure nodes in Point Cloud mode, two pointTransform components (from the palette) and a pointMerge component (also from the palette).
This .toe file has the basic layout: pointMergeExample.toe (10.3 KB)
The potentially tricky part is aligning the point clouds from the two cameras so that they appear in a consistent 3D space (otherwise each set of points is positioned relative to the camera that captured them). The point transform components will allow you to shift and rotate the points relative to each camera in order to align them, but there are multiple ways you can figure out the correct values to use here. Depending on how you plan to use the data, you can also choose whether you want to transform both sets of points into a new unified space, or just shift one set to align with a primary camera.
Depending on the level of accuracy you need:
- you can physically measure the camera position and angle
- you can place some sort of reference object (or ideally more than one) in the scene that is visible from both cameras, and then manually shift/rotate the points until they align sufficiently
- or you can use some computer vision algorithms to automatically calculate the orientation of the camera (the kinectCalibration component in the palette does some of this, but was designed for a projector).
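For the manual shift/rotate option, what a pointTransform applies is just a rigid transform. Here is a hedged plain-Python sketch of the math (the angle and offset values are placeholders you would measure or eyeball, and the axis convention is an assumption):

```python
import math

# Sketch of a rigid transform like a pointTransform applies: rotate a point
# cloud about the Y axis (pan), then translate it, so the secondary camera's
# points land in the primary camera's space.

def align_points(points, yaw_deg, offset):
    """Rotate points about Y by yaw_deg degrees, then translate by offset."""
    a = math.radians(yaw_deg)
    c, s = math.cos(a), math.sin(a)
    tx, ty, tz = offset
    out = []
    for x, y, z in points:
        rx = c * x + s * z    # standard Y-axis rotation
        rz = -s * x + c * z
        out.append((rx + tx, y + ty, rz + tz))
    return out

# Example: a point 1m in front of a second camera that faces the first
# camera from 2m away (rotated 180 degrees, shifted 2m along Z)
print(align_points([(0.0, 0.0, 1.0)], 180.0, (0.0, 0.0, 2.0)))
```

The pointTransform component does the full rotate-order/translate stack for you; the point of the sketch is just that the values you measure plug directly into rotate and translate parameters.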
Hope that helps.
That definitely helps, thank you so much!
I will run some tests and eventually post some results here.
@robmc Thanks a ton for posting the pointMerge example. I’m able to get the pointclouds aligned pretty well using this, however I’m having trouble overlaying the Kinect color camera feed on the pointcloud, like in the kinectAzurePointcloud technique. I’ve tried using pointTransform from the kinectAzurePointcloud node, but it doesn’t retain the color overlay. Could you please give some advice about the best way to achieve this? Thank you!
No worries. I’m not sure exactly where you’re running into trouble, but I’ve attached a modified version of the pointMerge example that includes the colour camera data for colouring the point cloud.
The main thing to remember is checking the ‘Align Image to Other Camera’ parameter on the Kinect Azure Select. This will make sure the pixels in the colour image line up with the correct pixels in the point cloud image.
I’m then merging the two colour images using an identical pointMerge component and then feeding them into the pointRender component that sets up the geometry component and material for rendering.
Hope that helps. Let me know if you have any questions on it.
pointMergeColorExample.toe (18.3 KB)
@robmc This was perfect, thank you! Regarding aligning the two (or more) pointclouds, you mentioned this as an option:
you can physically measure the camera position and angle
After getting the measurements for the Kinect’s position, rotation, etc. what would be the best way to utilize this info to align the pointclouds? If there are any examples that would be amazing. Manual calibration is not exactly fun.
I also have some issues regarding manually aligning the pointclouds:
When the alignment is correct, it unfortunately only stays aligned at the reference point (a static object) that I used for the alignment.
When an object moves away from that reference point, the alignment is gone.
Is this due to the curvature of the camera lens, or is there some kind of distortion happening?
I’ve read through a ton of OpenCV tutorials, compiled alignment executables, and tried to hack @snaut’s calibration tool to project the calibration to a Camera COMP instead of a Window COMP, but this alignment process is really too much to wrap my head around atm…
It would be so perfect to have some kind of standardized process for this, just like EF EVE has…
My understanding was that the Kinect software should compensate for the lens distortion in the point cloud mode, and it’s just the extrinsic (position/orientation) calibration you would need to do.
@jessekirbs In theory, you’re looking for the difference in position and angle and using those values in the pointTransform, but unfortunately I don’t have a second camera here right now to really try it out with.
We are looking at some more automated solutions based on the existing calibration tools we have, but I’m not sure of the timeline there yet.
@robmc So I’m able to get good readings on my Kinect’s rotational information through the IMU, and I’ve measured the height and distance from each other. I’m still stumped on exactly how you use this information with the pointTransform nodes. Would you mind going into some more detail about what this would entail? I’m happy to do testing on my end and report back. Thanks, Rob!
Unfortunately, I’ve only got access to one camera now, so it’s a little awkward to test it out. But I’ve attached a quick example by locking a node with the camera in one position and then moving the camera a little and locking it again. I had to link the example via dropbox since it was too large for the forums.
It sounds like you may have already done this part, but the attached network has a rough auto-level system that uses the kinect’s IMU data to adjust the point cloud so that the ground is parallel with the XZ plane and vertical surfaces are aligned with the Y plane.
I’m then using an additional pointTransform to rotate the cloud just around the Y-axis (panning) so that the clouds are facing the same direction, and then I used the XYZ translate parameters to shift the cloud so they overlap (you could use your position measurements here).
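For the auto-level part, the idea is that the IMU's accelerometer reads the gravity vector, so the camera's tilt relative to the ground can be derived from it. A rough plain-Python sketch (the axis convention here is an assumption; the real Kinect Azure IMU axes differ and would need remapping):

```python
import math

# Sketch of deriving camera tilt from an accelerometer gravity reading.
# Assumed convention: Y is up, Z points out of the lens, X is to the right.
# A camera sitting level and upright reads gravity along +Y only.

def tilt_from_gravity(gx, gy, gz):
    """Return (pitch, roll) in degrees from a gravity vector reading."""
    pitch = math.degrees(math.atan2(gz, math.hypot(gx, gy)))  # lens up/down
    roll = math.degrees(math.atan2(gx, gy))                   # lean left/right
    return pitch, roll

# Level, upright camera: gravity is straight down the Y axis
print(tilt_from_gravity(0.0, 9.81, 0.0))  # -> (0.0, 0.0)
```

Feeding the negated pitch/roll into a pointTransform's rotate parameters is what levels the cloud so the ground sits on the XZ plane; note the accelerometer can't tell you the pan (Y-axis) angle, which is why that one still has to be set by hand.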
I don’t know if this is accurate enough for your use case, but hopefully it helps.
An automated calibration system is on our todo list now, but I’m not sure how quickly that will be.
Hello amazing community!
Finally got to this thread because I was searching for any explanation of how to record the Kinect (V2) point cloud data and use it as a loop, for easier experimenting with movements without having to dance in front of it all the time…
Recording it as a movie file doesn’t give me a full-resolution point cloud output, and there is no Z information either…
Is there any solution to record that data properly?
I also tried EXR export, but even with Image Sequence selected on the Movie File Out it only saves a single frame…
I haven’t tried this myself yet, but what format are you recording in?
It is essential to have 32-bit RGB material to get a proper representation of the point cloud. I don’t know whether the Movie File Out TOP natively records any format in 32-bit, but otherwise you could try converting to CHOPs and recording that as an instancing base.
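To see why the bit depth matters, here is a quick plain-Python illustration of what happens to depth values stored in an 8-bit channel versus kept as floats:

```python
# Illustration of why 8-bit recording breaks a point cloud: position data
# squeezed into an 8-bit channel is quantised to 256 steps over the whole
# range, so nearby depth values collapse to the same value, while a 32-bit
# float recording keeps them distinct.

def quantize_8bit(value, lo, hi):
    """Store a float in an 8-bit channel over [lo, hi], then read it back."""
    step = round((value - lo) / (hi - lo) * 255)
    return lo + step / 255 * (hi - lo)

# Two points 2 mm apart in depth, over a 0-5 m capture range:
a, b = 1.234, 1.236
print(quantize_8bit(a, 0.0, 5.0) == quantize_8bit(b, 0.0, 5.0))  # -> True
```

Over a 5 m range, one 8-bit step is about 2 cm, so anything finer is lost, which is why the recorded cloud looks flattened into slices.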