Interactive Table

Hello everyone,
I am trying to make a touch-interactive table with projection, but I am stuck on which sensor to use.
I was trying to use a webcam with the PyAutoGUI and OpenCV libraries to detect the hand and drive the cursor with it, but the idea is for 4 people to interact with the table at the same time. That would mean using 4 webcams, and I feel that would complicate the programming.
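For reference, this is roughly the single-camera approach I have been testing: a very rough sketch, where a simple background-subtraction mask stands in for real hand detection and the camera frame is mapped 1:1 onto the full screen.

```python
# Minimal sketch: single webcam -> largest moving blob -> cursor position.
import cv2
import pyautogui

screen_w, screen_h = pyautogui.size()
cap = cv2.VideoCapture(0)                       # first webcam
backsub = cv2.createBackgroundSubtractorMOG2()  # crude "hand vs table" mask

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = backsub.apply(frame)
    mask = cv2.medianBlur(mask, 5)              # remove speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)   # assume biggest blob is the hand
        m = cv2.moments(hand)
        if m["m00"] > 0:
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            h, w = mask.shape
            # Map camera coordinates to screen coordinates and move the cursor.
            pyautogui.moveTo(cx / w * screen_w, cy / h * screen_h)
    cv2.imshow("mask", mask)
    if cv2.waitKey(1) & 0xFF == 27:             # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

The obvious limitation is that there is only one OS cursor, so this cannot scale to 4 simultaneous users.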
My question is: has anyone tried something similar and knows which sensor would be best to use?
Thank you all; I would be very grateful if anyone could share their experience.

I usually use a single camera and the built-in Blob Track TOP.

You will be able to track a lot of points at the same time; touch points are basically unlimited on the Pro/Commercial licenses.

If you use an infrared camera, you can avoid the camera seeing any visible projected light.

A number of techniques exist, like FTIR and LLP, for finding fingers using IR light.
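Inside TouchDesigner the Blob Track TOP handles this natively; outside of it, the same idea in plain OpenCV looks roughly like the sketch below, assuming device 0 is an IR camera with visible light filtered out so fingertips show up as bright spots. The threshold and area limits are placeholders you would tune for your setup.

```python
# Rough OpenCV equivalent of blob-tracking bright IR touch points.
import cv2

cap = cv2.VideoCapture(0)                       # assumed IR camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (9, 9), 0)
    # Threshold value is a placeholder; tune it for your camera/exposure.
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    touches = []
    for c in contours:
        area = cv2.contourArea(c)
        if 20 < area < 2000:                    # ignore noise and large reflections
            x, y, w, h = cv2.boundingRect(c)
            touches.append((x + w / 2, y + h / 2))
    # Every bright blob becomes a touch point; there is no hard limit on count.
    print(touches)
    cv2.imshow("ir mask", mask)
    if cv2.waitKey(1) & 0xFF == 27:             # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```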

Here is an example we did last week with a projection wall and a single camera:
https://www.instagram.com/reel/Cq6syWcgA4_/?utm_source=ig_web_copy_link


Thank you @harveymoon. Actually, I am using the Render Pick TOP, which allows certain areas to be triggered by rendered geometry, but I am using the Kinect Azure and running into problems because the projector lens does not cover the same field of view as the Kinect color camera. So interacting with the geometry does not work well: the rendered geometry is not in the same spot in the Kinect color camera's view, and I am stuck on this. Has anyone else run into this problem?
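What I think is missing is a mapping from Kinect colour-camera coordinates into render/projector coordinates before doing the pick. As a rough illustration only (not from the project file, and with placeholder point values), a homography calibration in OpenCV would look something like this:

```python
# Sketch: map Kinect colour-camera pixels into the projected/render image
# with a homography. The four point pairs are placeholders: in practice you
# would click the projected corners as seen by the Kinect and pair them with
# the known render-resolution corners.
import cv2
import numpy as np

# Where the projected image's corners appear in the Kinect colour frame (placeholders).
kinect_pts = np.float32([[312, 188], [1610, 201], [1598, 872], [324, 860]])
# The same corners in render/projector space (e.g. a 1280x720 render).
render_pts = np.float32([[0, 0], [1280, 0], [1280, 720], [0, 720]])

H, _ = cv2.findHomography(kinect_pts, render_pts)

def kinect_to_render(x, y):
    """Map one Kinect colour-camera pixel to render coordinates."""
    pt = np.float32([[[x, y]]])
    out = cv2.perspectiveTransform(pt, H)
    return float(out[0, 0, 0]), float(out[0, 0, 1])

# Example: a hand seen at (900, 500) in the Kinect image, mapped into render
# space where it can be used for picking against the rendered geometry.
print(kinect_to_render(900, 500))
```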
I have uploaded the project below.
testeMesa.6.toe (9.2 KB)