Hi everyone,
I’m working on an interactive floor installation where visuals are projected on the ground and follow/react to multiple people walking on it. The space will be a 6×5 m area in dim/low-light conditions, with a camera placed at ~2.5 m height to capture the movement.
My issue is how to get people’s positions. The idea is to either get 3D coordinates through skeletons or point clouds, or use optical flow and sidestep the tracking-consistency problem altogether.
I’m considering several approaches and I’d really appreciate some tips and opinions:
Option 1: Zed 2i / Femto Bolt with skeletal tracking
The Zed has a 10 m depth range, the Femto ~6 m. Both CHOPs should work out of the box(?). My only doubt is whether accuracy drops in dim/low-light conditions for the Zed, since it relies on passive stereo, while the Femto Bolt’s time-of-flight sensor brings its own IR illumination (though I could pump up the visuals’ brightness and light the floor).
2: RGB/(IR?) Camera with MediaPipe
Is it viable to obtain multiple people’s coordinates in space this way? I couldn’t find a specific guide. Also, an infrared camera would handle the low-light conditions better.
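On viability: as far as I know the classic MediaPipe Pose solution is single-person, while the newer PoseLandmarker task can return several poses (it takes a num_poses option). Either way, the detector gives pixel coordinates, so you still need to map a detection onto the floor. Here’s a minimal sketch of that step, assuming you can measure four reference points of the 6×5 m area in the camera image; the calibration values and function names are made up for illustration:

```python
import numpy as np

def homography_from_points(src, dst):
    """Direct Linear Transform: solve for the 3x3 homography H mapping
    src -> dst from four 2D point correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The null space of A (smallest singular vector) is the homography.
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def pixel_to_floor(H, px, py):
    """Map an image pixel (e.g. a detected person's hip or ankle
    landmark) to metric floor coordinates."""
    p = H @ np.array([px, py, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Hypothetical calibration: image corners of the playing area (pixels)
# matched to the 6 x 5 m floor rectangle (metres).
img_pts   = [(120, 80), (1180, 95), (1150, 660), (140, 640)]
floor_pts = [(0, 0), (6, 0), (6, 5), (0, 5)]
H = homography_from_points(img_pts, floor_pts)

print(pixel_to_floor(H, 120, 80))  # ~ (0.0, 0.0), the floor origin
```

(With OpenCV available, cv2.getPerspectiveTransform does the same solve.)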
3: Depth camera to Point cloud
I don’t know whether dealing with shadow/occlusion artifacts would complicate things. I did read in a previous post about merging multiple point clouds to fill occluded areas.
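From what I understand, the merge itself is just a rigid transform per camera into a shared world frame, then stacking, so one camera fills the areas the other can’t see. A minimal numpy sketch, with made-up extrinsics for two cameras facing each other across the 6 m axis:

```python
import numpy as np

def transform_cloud(points, T):
    """Apply a 4x4 rigid transform (camera -> world extrinsics)
    to an (N, 3) point cloud."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (homog @ T.T)[:, :3]

def merge_clouds(clouds, extrinsics):
    """Bring each camera's cloud into the shared world frame
    and stack them into one cloud."""
    return np.vstack([transform_cloud(c, T)
                      for c, T in zip(clouds, extrinsics)])

# Hypothetical extrinsics: camera A defines the world frame; camera B
# sits 6 m along x, rotated 180 degrees about the vertical (y) axis
# so it faces camera A.
T_a = np.eye(4)
T_b = np.array([[-1, 0,  0, 6.0],
                [ 0, 1,  0, 0.0],
                [ 0, 0, -1, 0.0],
                [ 0, 0,  0, 1.0]])
cloud_a = np.array([[1.0, 0.0, 2.0]])
cloud_b = np.array([[1.0, 0.0, 2.0]])
merged = merge_clouds([cloud_a, cloud_b], [T_a, T_b])
print(merged)  # camera B's point lands at (5, 0, -2) in the world frame
```

Getting those extrinsics right (by measuring or calibrating the two cameras against each other) seems like the real work here.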
4: Lidar
A Slamtec R1 placed at ankle height, but same issue as above: I’d probably need two of them to fill occlusions.
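If it helps, this is roughly how I picture the lidar pipeline: convert each sweep from polar to Cartesian, then greedily cluster nearby returns into one centroid per person (a second unit’s points could be transformed into the first’s frame the same way as merged point clouds above would be). The function names and thresholds below are illustrative guesses, assuming the SDK hands back angle/range arrays:

```python
import numpy as np

def scan_to_points(angles_rad, ranges_m, max_range=10.0):
    """Convert one lidar sweep (angle, range pairs) into 2D floor
    points, dropping invalid and out-of-range returns."""
    ok = (ranges_m > 0) & (ranges_m < max_range)
    a, r = angles_rad[ok], ranges_m[ok]
    return np.column_stack([r * np.cos(a), r * np.sin(a)])

def cluster_points(points, gap=0.3):
    """Greedy clustering: consecutive scan points closer than `gap`
    metres belong to the same object (e.g. one person's ankles).
    Returns one centroid per cluster."""
    clusters, current = [], [points[0]]
    for p in points[1:]:
        if np.linalg.norm(p - current[-1]) < gap:
            current.append(p)
        else:
            clusters.append(np.mean(current, axis=0))
            current = [p]
    clusters.append(np.mean(current, axis=0))
    return clusters

# Hypothetical single sweep: three hits on one person near (2, 0),
# two hits on another near (0, 3).
angles = np.radians([0.0, 1.0, 2.0, 90.0, 91.0])
ranges = np.array([2.0, 2.0, 2.0, 3.0, 3.0])
people = cluster_points(scan_to_points(angles, ranges))
print(people)  # two centroids, one per person
```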
5: Bail and use Optical flow/blob tracking
Possibly the easier solution, and it could avoid the position-consistency issue. For optimal results the camera would ideally be mounted perpendicular to the floor, capturing a top-down view of the scene. I’d leave this option as a last resort.
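(TouchDesigner’s Blob Track TOP should handle this natively, but to check I understand the idea, here’s a toy sketch of the top-down pipeline: background subtraction, then connected-component labelling to get one centroid per blob. The thresholds are made up.)

```python
import numpy as np
from collections import deque

def blob_centroids(frame, background, thresh=30, min_area=20):
    """Background subtraction + connected-component labelling.
    Returns the (row, col) centroid of each foreground blob."""
    fg = np.abs(frame.astype(int) - background.astype(int)) > thresh
    seen = np.zeros_like(fg, dtype=bool)
    centroids = []
    for r, c in zip(*np.nonzero(fg)):
        if seen[r, c]:
            continue
        # BFS flood fill to collect one connected blob.
        q, blob = deque([(r, c)]), []
        seen[r, c] = True
        while q:
            y, x = q.popleft()
            blob.append((y, x))
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < fg.shape[0] and 0 <= nx < fg.shape[1]
                        and fg[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    q.append((ny, nx))
        if len(blob) >= min_area:  # reject small noise blobs
            centroids.append(tuple(np.mean(blob, axis=0)))
    return centroids

# Synthetic top-down frame: two bright "people" on a dark floor.
bg = np.zeros((100, 100), np.uint8)
frame = bg.copy()
frame[20:30, 20:30] = 255   # person 1
frame[60:75, 50:65] = 255   # person 2
print(blob_centroids(frame, bg))  # ~ (24.5, 24.5) and (67.0, 57.0)
```

With the pixel-to-floor mapping from the MediaPipe option, those centroids could become metric positions too, just without stable per-person identities.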
Which method have you tried, or which would you suggest I focus on? I’d appreciate any insights, experiences, or tips.
Sorry for the length of the post; this project is pushing me out of my comfort zone, so I’m overthinking it a bit.
Thanks in advance for your time!