I’m using two Azure Kinects to cover a larger tracking area. I only need the depth camera, but because of the wide FOV, simply aligning the two depth images doesn’t work well. Do you have any idea how to approach this?
Is this possible with just the depth images, or do I need to merge the point clouds? If so, how?
The data is just controlling particles, so the final output is an abstraction of the data.
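For context, here’s roughly what I imagine the point-cloud route would look like: transform one camera’s points into the other’s coordinate frame with a 4×4 extrinsic (from some external calibration, e.g. a checkerboard both cameras can see) and concatenate. This is just a minimal NumPy sketch of that idea; the function name, matrix, and calibration step are my assumptions, not from the Azure Kinect SDK:

```python
import numpy as np

def merge_point_clouds(pts_a, pts_b, T_b_to_a):
    """Transform camera B's points into camera A's frame and concatenate.

    pts_a, pts_b: (N, 3) arrays of XYZ points from each depth camera.
    T_b_to_a: 4x4 homogeneous extrinsic mapping B's frame into A's
    (hypothetical -- would come from some external calibration step).
    """
    # Append a 1 to each point so the 4x4 transform applies translation too
    homog = np.hstack([pts_b, np.ones((pts_b.shape[0], 1))])  # (N, 4)
    pts_b_in_a = (T_b_to_a @ homog.T).T[:, :3]
    return np.vstack([pts_a, pts_b_in_a])

# Toy example: pretend camera B sits 1 m along X from camera A.
T = np.eye(4)
T[0, 3] = 1.0
a = np.array([[0.0, 0.0, 2.0]])
b = np.array([[0.5, 0.0, 2.0]])
merged = merge_point_clouds(a, b, T)
print(merged)  # B's point lands at x = 1.5 in A's frame
```

Is that the right general direction, or is there a simpler way that stays in depth-image space?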
Thanks a lot in advance,