Aligning two Kinect Azure Cameras

Hi!
I’m using two Kinect Azures to cover a larger tracking area. I only need the depth camera, but because of the wide FOV, aligning the two images doesn’t work well. Do you have any idea how to approach this?
Is this possible with just the depth camera, or do you need to use the point cloud? If so, how?

The data is just controlling particles, so the final output is an abstraction of the data.

Thanks a lot in advance,
bileam

I’m not sure if there’s a direct way to merge the two depth images together, but you could definitely align and merge the point clouds from the two cameras, and then render the combined point cloud into a single scene from a new virtual camera position to produce a combined depth map.
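Roughly, the re-projection step could look something like this in numpy. This is just a sketch of the idea, not something from our tools: it assumes you already have a 4x4 transform that maps camera B into camera A’s space and the intrinsics of the virtual camera (the names `T_b_to_a`, `K` and `merge_and_project` are placeholders), and in practice you’d do this on the GPU inside TouchDesigner rather than on the CPU:

```python
import numpy as np

def merge_and_project(points_a, points_b, T_b_to_a, K, width, height):
    """Merge two point clouds and project them into a single depth map.

    points_a, points_b : (N, 3) arrays of XYZ points in each camera's space
    T_b_to_a           : 4x4 transform mapping camera B's points into camera A's space
    K                  : 3x3 intrinsics of the virtual camera used for re-projection
    """
    # Bring camera B's points into camera A's (virtual camera's) coordinate frame
    homog = np.hstack([points_b, np.ones((len(points_b), 1))])
    points_b_in_a = (T_b_to_a @ homog.T).T[:, :3]

    merged = np.vstack([points_a, points_b_in_a])
    merged = merged[merged[:, 2] > 0]          # keep points in front of the camera

    # Project into pixel coordinates
    uv = (K @ merged.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    depth = np.full((height, width), np.inf)
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)

    # Keep the nearest point per pixel (simple z-buffer)
    np.minimum.at(depth, (v[valid], u[valid]), merged[valid, 2])
    depth[np.isinf(depth)] = 0
    return depth
```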

OpenCV has some calibration routines that can be used to align the two point clouds - we’ve been experimenting with building some tools for this, but they’re not quite ready yet.
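As a rough illustration of the kind of call involved (this is a generic sketch, not the tools we’ve been building): given a set of matched 3D points seen by both cameras, `cv2.estimateAffine3D` can recover the transform that maps one cloud into the other. Finding good correspondences is the hard part; the points below are synthetic just so the snippet runs:

```python
import cv2
import numpy as np

# Corresponding 3D points seen by both cameras, e.g. the same physical
# markers measured in each Kinect's coordinate space (synthetic data here)
dst_points = np.random.rand(20, 3).astype(np.float32)               # camera A space
rotation, _ = cv2.Rodrigues(np.float32([0.1, 0.2, 0.05]))           # arbitrary example pose
translation = np.float32([0.5, 0.0, 0.1])
src_points = (dst_points @ rotation.T + translation).astype(np.float32)  # camera B space

# RANSAC-based estimate of the 3x4 affine transform mapping src -> dst
retval, affine_3x4, inliers = cv2.estimateAffine3D(src_points, dst_points)

# Expand to a 4x4 matrix so it can be applied to homogeneous points
T_b_to_a = np.vstack([affine_3x4, [0.0, 0.0, 0.0, 1.0]])
print(T_b_to_a)
```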

Hope that helps - let me know if you’ve got any questions.


@elekktronaut @robmc Hi there. I’ve built a calibration tool for the Azures that works really well for situations where you have enough overlap between Kinects from a similar angle. It does not work when the Kinects are at angles larger than 90 degrees from each other, for instance, since it depends on the point cloud data itself.

The method I’m using is super fast and easy to use, and best of all it is fully automatic, so you don’t need any boards like chessboard patterns or ChArUco. I’ve sent this one to Markus from Derivative, since we were looking into this together a bit. I need to tweak it a bit further to support an arbitrary number of Kinects, so I have not released it publicly yet, but I can share the two-Kinect version in private if you want to give it a shot. Just PM me :slight_smile:


Thank you both! PM’d you @Darien.Brito

I would so much like to also get access to this @Darien.Brito :smiley:

Hi everyone,

I am going down a similar path with two RealSense D435i cameras and am in the process of getting realsense-ros working on Linux. Ahhhh… (Check out this script:
realsense-ros/set_cams_transforms.py at ros1-legacy · IntelRealSense/realsense-ros · GitHub)
It requires termios and tf to communicate with the cameras, which is Unix-only AFAIK. So that is a bit far-fetched ATM, though it will work with N RealSense cams.

I was following the affine transformation (for outside-in) method based on the data from the two IMUs, and was wondering if there is a better way in Touch to merge the point clouds through manipulation of the color channels? No doubt it would also be much more flexible.
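By manipulating the color channels I mean something along these lines: the point cloud TOP stores XYZ positions in the R/G/B channels, so aligning camera B to camera A is essentially a 4x4 matrix multiply over those channels. A minimal numpy sketch of the idea (the function name is made up; inside Touch you’d presumably do this in a GLSL or Script TOP for speed):

```python
import numpy as np

def transform_pointcloud_channels(rgba, T):
    """Apply a 4x4 affine transform to a point cloud image where the
    R, G, B channels hold X, Y, Z positions (as in a point cloud TOP).

    rgba : (H, W, 4) float array grabbed from the TOP
    T    : 4x4 transform, e.g. built from the IMU orientation plus a
           measured translation between the two cameras
    """
    h, w, _ = rgba.shape
    xyz = rgba[..., :3].reshape(-1, 3)

    # Homogeneous multiply, then reshape back into the image layout
    homog = np.hstack([xyz, np.ones((xyz.shape[0], 1), dtype=rgba.dtype)])
    transformed = (homog @ T.T)[:, :3]

    out = rgba.copy()
    out[..., :3] = transformed.reshape(h, w, 3)
    return out
```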

@Darien.Brito I’m wondering if you were using cv::stereoCalibrate() and/or any SIFT/SURF features for alignment (for inside-out)? What’s your approach using the IMU?

Thanks.


Hello there @majinshaoyuindustry I did check into calibration with OpenCV using stereoCalibrate() indeed, but went a different route using Open3D. I did not check SIFT or SURF. Like I wrote in my PM, I’ll post my tool in the forum in the next couple of days, since I received an avalanche of messages about it :slight_smile:
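For anyone curious in the meantime, the general shape of a point cloud registration pipeline in Open3D is roughly the following: FPFH feature matching with RANSAC for a coarse global alignment, then ICP to refine. This is a generic sketch rather than the released tool; the file names and voxel size are placeholders you’d tune to your own captures:

```python
import open3d as o3d

VOXEL = 0.05  # downsample size in meters, tune to your scene scale

def preprocess(pcd):
    """Downsample, estimate normals and compute FPFH features."""
    down = pcd.voxel_down_sample(VOXEL)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=VOXEL * 2, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down,
        o3d.geometry.KDTreeSearchParamHybrid(radius=VOXEL * 5, max_nn=100))
    return down, fpfh

# Point clouds captured from the two Kinects (paths are placeholders)
source = o3d.io.read_point_cloud("kinect_b.ply")
target = o3d.io.read_point_cloud("kinect_a.ply")

src_down, src_fpfh = preprocess(source)
tgt_down, tgt_fpfh = preprocess(target)

# Coarse global alignment from feature matches (RANSAC)
result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    src_down, tgt_down, src_fpfh, tgt_fpfh, True, VOXEL * 1.5,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
    3,
    [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(VOXEL * 1.5)],
    o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

# Refine with point-to-plane ICP on the downsampled clouds
refined = o3d.pipelines.registration.registration_icp(
    src_down, tgt_down, VOXEL, result.transformation,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())

print(refined.transformation)  # 4x4 matrix mapping Kinect B into Kinect A's space
```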

Awesome! Thanks so much @Darien.Brito!

Hello there,

I’ve released a first version of the tool here:


Hi, thanks for this! It works great, although I’m a bit stuck on the specific version of TouchDesigner you used for the build. Any idea if it will be possible to use it in later versions of Touch? I suppose the embedded Python 3.9 in the newer versions prevents the open3d library from being used at the moment.