Hi - can anyone point me in the direction of any resources or advice that might explain how to calibrate and merge 2 depth camera images into a single image? I have 2 overlapping depth images from 2 Orbbec Femto Mega sensors and want to combine them, but obviously the difference in position and angle between the cameras means some calibration is required.
The sensors are synced. I have seen various tools and discussions about doing this with point clouds from multiple Azure Kinect sensors, but that info was quite old, and I can find nothing about combining depth camera images directly.
I am trying to cover a large circular area, and each camera covers most of it when using WFOV. The sensors are mounted too low to use NFOV and still cover the area. For context, this is being used for blob tracking to control an interactive floor projection (maybe there is a better way?). I think I am going to have to go down the Open3D route, but I am hoping there is a simpler way, as I am not familiar with Open3D and I admit I am struggling with the documentation for the tools there.
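For reference, this is roughly what I imagine the Open3D route would look like (untested - the intrinsics, file paths and initial guess are just placeholders, the real values would come from the Femto Mega SDK and a rough measurement of the camera positions): turn each depth frame into a point cloud, use ICP on the overlapping region to estimate the camera-to-camera transform, then merge the clouds.

```python
import numpy as np
import open3d as o3d

# Placeholder intrinsics - real values would come from the sensor SDK
intrinsic = o3d.camera.PinholeCameraIntrinsic(1024, 1024, 504.0, 504.0, 512.0, 512.0)

# Placeholder file paths for one synced depth frame from each camera
depth_a = o3d.io.read_image("camera_a_depth.png")
depth_b = o3d.io.read_image("camera_b_depth.png")

# Back-project each depth image into a point cloud in its own camera frame
pcd_a = o3d.geometry.PointCloud.create_from_depth_image(depth_a, intrinsic)
pcd_b = o3d.geometry.PointCloud.create_from_depth_image(depth_b, intrinsic)

# Rough hand-measured guess for camera B relative to camera A,
# refined with ICP using the overlapping part of the scene
init_guess = np.eye(4)
reg = o3d.pipelines.registration.registration_icp(pcd_b, pcd_a, 0.05, init_guess)

# Bring camera B's cloud into camera A's frame and concatenate
pcd_b.transform(reg.transformation)
merged = pcd_a + pcd_b
o3d.visualization.draw_geometries([merged])
```

Does that look like the right general approach, or is there a simpler, more tool-level way to get a single combined image for blob tracking?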
Thanks!