Multi-cam skeletal tracking using the RealSense cameras?

Hi all,

I’ve been reading about the RealSense cameras as an upgrade from the older Kinect V2 for skeletal tracking.

It seems ideal, since a single computer can handle more than one of them more easily. However, any time multiple sensors’ depth streams are processed by a skeletal tracking SDK, you run into the problem of dealing with, and intelligently switching between, the duplicate skeleton data each of those sensors sees for the same person.

So I’m wondering: does the multi-cam mode these sensors support work natively with skeleton tracking, so that the SDK ingests one unified data set and produces one cohesive skeleton from it?
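
To be a bit more concrete about the “intelligently switching” part, this is roughly the per-frame logic I’m hoping a native multi-cam mode would handle for me (just a sketch with made-up structures, not any real SDK API):

```python
# Rough sketch of the per-frame "switching" logic I'd rather not write myself.
# Skeleton/Joint are stand-in structures, not from any real SDK.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Joint:
    x: float
    y: float
    z: float
    confidence: float  # 0.0 .. 1.0, placeholder scale

@dataclass
class Skeleton:
    camera_id: int
    joints: dict[str, Joint]  # keyed by joint name, e.g. "head", "hand_l"

def pick_best_skeleton(candidates: list[Skeleton]) -> Optional[Skeleton]:
    """From the same person seen by several cameras, keep the view
    with the highest average joint confidence."""
    if not candidates:
        return None
    return max(
        candidates,
        key=lambda s: sum(j.confidence for j in s.joints.values()) / len(s.joints),
    )
```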

Hope that question makes sense!
Thanks.

Hey Lucasm, I don’t know the answer, but I’m interested to hear whether you ended up buying one, and if so, what you think of it. Thanks.

We tried the RealSense, but the skeletal tracking was not great IMO. Definitely way more jittery and harder to use. We ended up going with the Kinect Azures. Not my favorite things either, but the benefit of those is that they guesstimate joints they cannot see, so if a user is detected at all, you can expect all the joints to be there. Even when the guesses don’t match the user, they tend towards a sitting or standing pose, I assume because the SDKs are trained on data like that.
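
For what it’s worth, this is roughly how we deal with the guessed joints: the body tracking SDK tags each joint with a confidence level, so we just hold low-confidence joints at their last reliably seen position instead of trusting the guess. The structures below are placeholders, not the actual SDK binding:

```python
# Sketch: hold "guessed" (low-confidence) joints at their last known good
# position instead of trusting the predicted pose.
# Enum values and structure are placeholders, not the real SDK binding.
from enum import IntEnum

class JointConfidence(IntEnum):
    NONE = 0      # joint not observed at all
    LOW = 1       # occluded, position is predicted rather than measured
    MEDIUM = 2    # observed directly

def smooth_joint(name, position, confidence, last_good):
    """Keep the last reliably seen position when the current one is a guess."""
    if confidence >= JointConfidence.MEDIUM:
        last_good[name] = position
        return position
    # Fall back to the previous good value (or the guess if we never had one).
    return last_good.get(name, position)
```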

Multi-cam I think is a no-go regardless of platform, unless you set it up yourself outside of Touch, but that’s a potentially large task.
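
To give an idea of why it’s a large task: every camera needs an extrinsic calibration into a shared world space before you can even start merging the duplicate skeletons. Very roughly (illustrative structures only):

```python
# Very rough idea of the DIY multi-cam fusion step: bring each camera's
# joints into a shared world space via its extrinsics, then average the
# duplicates per joint. Structures here are illustrative only.
import numpy as np

def to_world(joints_cam: dict[str, np.ndarray], R: np.ndarray, t: np.ndarray):
    """Apply a camera's extrinsics (rotation R, translation t) to its joints."""
    return {name: R @ p + t for name, p in joints_cam.items()}

def fuse(skeletons_world: list[dict[str, np.ndarray]]) -> dict[str, np.ndarray]:
    """Average each joint across however many cameras actually saw it."""
    fused = {}
    for name in set().union(*skeletons_world):
        seen = [s[name] for s in skeletons_world if name in s]
        fused[name] = np.mean(seen, axis=0)
    return fused
```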

Thank you sir! Good to know…

There’s always Brekel. The hardest part, AFAIK, was parsing the data into TouchDesigner. I think it was also pretty power-hungry, but that’s more of an Azure Kinect issue in general:
https://brekel.com/brekel-body-v3/
Pinging @MXZEHN, who has some in-depth experience with this.
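
If it helps, the receiving side in Touch ends up being roughly an OSC In DAT with a callback like the one below, assuming you use Brekel’s OSC output. The address layout is a guess from memory, so check the Brekel docs for the actual message format:

```python
# Sketch of the TouchDesigner side: callbacks DAT for an OSC In DAT.
# Assumed address layout "/bodyN/jointname" with args [x, y, z, ...] -
# verify against Brekel's actual output.

def onReceiveOSC(dat, rowIndex, message, bytes, timeStamp, address, args, peer):
    parts = address.strip('/').split('/')
    if len(parts) < 2 or len(args) < 3:
        return
    key = f'{parts[0]}_{parts[1]}'       # e.g. "body1_head"
    x, y, z = args[0], args[1], args[2]
    table = op('joints')                  # a Table DAT collecting latest values
    if table.row(key) is None:
        table.appendRow([key, x, y, z])
    else:
        table[key, 1], table[key, 2], table[key, 3] = x, y, z
    return
```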