Sync Body Tracking with Multiple Cameras

I have 7 cameras, lined up one after the other. Each camera is connected to a different PC.
I am using the Nvidia Body Track CHOP to get the body tracking information of the users.
I have one more PC that receives the information from all 7 cameras.

I have a red box showing on the user detected by the camera.
I want to know how I can sync all the body tracking information in such a way that if a user starts walking in front of camera 1 and ends up at camera 7, the red box follows the user.

The Nvidia Body Track CHOP uses any TOP to detect and track skeletons. So, as I see it, there are two ways you can do it:

  1. Try to blend and warp the camera views into one big picture and see if that gets you there and the tracking is stable enough. Imho, this is the most stable way to do it.

  2. You can try to match the start and end of the detection views (I assume bounding boxes) for each view and switch views as soon as a person is detected, provided only one person is tracked at a time. When multiple people are tracked, it becomes a tricky business indeed.
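To illustrate the handoff idea in option 2: if the cameras hang side by side, each camera's local x coordinate can be mapped into one shared "wall" coordinate space, so the red box can follow a person across view boundaries. This is only a sketch; the resolution and overlap values are placeholder assumptions to tune on site, not anything from the Body Track CHOP itself.

```python
# Map per-camera bounding-box positions into one shared coordinate space
# spanning all 7 side-by-side views. CAM_WIDTH and OVERLAP are assumptions.

CAM_WIDTH = 1280   # horizontal resolution of each camera view (assumed)
OVERLAP = 100      # pixels shared between neighbouring views (assumed)

def to_global_x(cam_index: int, local_x: float) -> float:
    """Map an x position in camera `cam_index` (0-based, left to right)
    into a single coordinate space covering the whole wall."""
    return cam_index * (CAM_WIDTH - OVERLAP) + local_x

def active_camera(global_x: float) -> int:
    """Pick which camera's detection should currently drive the red box."""
    stride = CAM_WIDTH - OVERLAP
    return max(0, min(6, int(global_x // stride)))
```

The point of subtracting the overlap is that a person standing in the seam gets (nearly) the same global x from both neighbouring cameras, e.g. `to_global_x(0, 1200)` and `to_global_x(1, 20)` both give 1200, so the box doesn't jump at the handoff.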

I honestly think you should try to blend the views. There’s a stitcher component 360 Stitcher Component - Shared .tox Components - TouchDesigner forum (derivative.ca) that helps with this.

This can be used for real-time video stitching. Take a snapshot of the inputs from all the cameras, send it through PTGui to create a .pts file, and then load the .pts file. You can now feed the same cameras into the component and a live stitch will be performed.

I think it could be a way to explore, probably with more panoramic settings.

cheers
sH

I have tried the Nvidia Body Track CHOP; maybe I am wrong, but it only tracks 8 users max.
I used the Fit CHOP to add all my camera views together and tested it, but it only tracked 8 users.

I still don’t understand what you are trying to achieve:

  • how many people do you want to track?
  • is consistency something you’re after or not?

You can use other methods like MediaPipe, which will track as many people as it can see (as far as I know). You need to test it with extra-large images, though.

Have you considered another approach? Something like a 4° lens camera with an ultrawide view?

If the results of your body tracking + Fit CHOP are satisfying for your installation and you just need to track more people, you can try MediaPipe. I don’t see exactly how it would work, though (because of the overlap: the same person can appear in two or even more cameras depending on the space geometry).
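The overlap problem mentioned above can be handled with a simple deduplication pass: once detections from all cameras are mapped into one shared coordinate space, collapse detections that land very close together, since they are likely the same person seen by two neighbouring cameras. A minimal sketch, assuming positions are already in one global space; the merge distance is a guess to tune on site.

```python
# Collapse near-duplicate detections of the same person from overlapping
# camera views. MERGE_DIST is an assumed threshold, not a library constant.

MERGE_DIST = 80.0  # max global-x distance (px) to treat two boxes as one person

def merge_detections(xs):
    """xs: global x positions of all boxes from all cameras this frame.
    Returns one averaged position per merged group."""
    groups = []
    for x in sorted(xs):
        if groups and x - groups[-1][-1] < MERGE_DIST:
            groups[-1].append(x)   # close to previous detection: same person
        else:
            groups.append([x])     # far away: new person
    return [sum(g) / len(g) for g in groups]
```

So two cameras reporting the same person at 100 and 150 px collapse to one detection at 125, while a person at 900 stays separate.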

I just looked at the Body Track CHOP, though, and you can actually determine how many people you want to track, plus a couple of settings to help filter out unwanted results.

You should be all set with this. “Just” stitch the camera views (I would investigate the Stitcher component route) and there you go.

Cheers

I have a room with a long wall, and 7 cameras on the wall to detect users. There will be up to 50 people in the room at a time, so I want to detect them and apply an effect to each user.

Consistency is important.

I have Orbbec cameras right now, and also the ZED 2.

I have tested the Body Track CHOP. It does mention that it can track more, but it only tracks 8; I tested it out.

The issue with MediaPipe is that it does not assign an ID to the detected object, so it’s difficult to track; it only creates a box around a user.
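The missing-ID problem can be worked around outside MediaPipe: keep your own tracker that matches each frame's anonymous boxes to the previous frame's boxes by nearest centroid, and carries IDs over when the match is close enough. This is a minimal sketch of that idea; the class, threshold, and all names here are illustrative, not part of the MediaPipe API.

```python
# Greedy nearest-centroid ID assignment on top of anonymous detection boxes.
# MAX_JUMP is an assumed limit on how far a person moves between frames.

MAX_JUMP = 120.0  # px (assumed; depends on frame rate and camera distance)

class CentroidTracker:
    def __init__(self):
        self.next_id = 0
        self.tracks = {}  # id -> (x, y) centroid from the last frame

    def update(self, centroids):
        """centroids: list of (x, y) box centers for the current frame.
        Returns {id: (x, y)} with IDs carried over where possible."""
        assigned = {}
        unused = dict(self.tracks)
        for c in centroids:
            # closest previous track not yet claimed this frame
            best = min(
                unused.items(),
                key=lambda kv: (kv[1][0] - c[0]) ** 2 + (kv[1][1] - c[1]) ** 2,
                default=None,
            )
            if best is not None and (
                (best[1][0] - c[0]) ** 2 + (best[1][1] - c[1]) ** 2
            ) ** 0.5 < MAX_JUMP:
                tid = best[0]
                del unused[tid]   # claimed: no other box can take this ID
            else:
                tid = self.next_id  # no plausible match: new person
                self.next_id += 1
            assigned[tid] = c
        self.tracks = assigned  # tracks that vanished this frame are dropped
        return assigned
```

Greedy matching like this is crude (a proper solution would use something like Hungarian assignment and keep lost tracks alive for a few frames), but it is often enough to make per-user effects stick to the right person.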

Not sure what the limitation of Body Track is. The graphics card? I never wanted to track more than a couple of people at a time. Perhaps the camera position? If there are a lot of people one behind another (like at a concert), it cannot figure out that this is a person?

The body track CHOP is internally clamped at 8 people based on current limits in the Nvidia Maxine SDK.

It’s not as precise as skeletal body tracking, but I’ve seen users merge depth data from multiple cameras into a single point cloud and then use a top-down blob track on the whole scene to identify people.
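The blob-track approach above can be illustrated with a small sketch: project the merged depth points down onto a top-down occupancy grid of the floor, then label connected groups of occupied cells; each blob is one person (or cluster). The grid and cell sizes are assumptions, and a real setup would also filter blobs by area to reject noise.

```python
# Top-down blob labelling on a merged point cloud, as a pure-Python sketch.
from collections import deque

def occupancy_grid(points, cell=0.25, w=40, h=40):
    """points: (x, z) floor positions in metres. Returns a h-by-w 0/1 grid."""
    grid = [[0] * w for _ in range(h)]
    for x, z in points:
        i, j = int(z / cell), int(x / cell)
        if 0 <= i < h and 0 <= j < w:
            grid[i][j] = 1
    return grid

def label_blobs(grid):
    """4-connected flood fill; returns a list of blobs, each a list of cells."""
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for i in range(h):
        for j in range(w):
            if grid[i][j] and not seen[i][j]:
                blob, q = [], deque([(i, j)])
                seen[i][j] = True
                while q:
                    ci, cj = q.popleft()
                    blob.append((ci, cj))
                    for ni, nj in ((ci + 1, cj), (ci - 1, cj),
                                   (ci, cj + 1), (ci, cj - 1)):
                        if 0 <= ni < h and 0 <= nj < w \
                                and grid[ni][nj] and not seen[ni][nj]:
                            seen[ni][nj] = True
                            q.append((ni, nj))
                blobs.append(blob)
    return blobs
```

Because the view is top-down, people standing behind one another (the concert problem mentioned earlier) still separate into distinct blobs, which is the main appeal of this approach over per-camera skeletal tracking.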
