I’ve been reading about the RealSense cameras as an upgrade to the older Kinect V2 for skeletal tracking.
They seem ideal, since a single computer can handle more than one of them more easily. However, whenever multiple sensors’ depth streams are processed by a skeletal tracking SDK, you run into the problem of dealing with, and intelligently switching between, the duplicate skeleton data each sensor sees for the same person.
So I’m wondering: does the multi-cam mode these sensors support work natively with skeleton tracking? That is, does the SDK ingest one unified data set and produce one cohesive skeleton from it?
Hope that question makes sense!