Hello people!
I’m new around here; this is my first post.
I have in mind an installation that would require 4 Kinect sensors (v1, because v2 does not allow multiple sensors on one machine) to get the position of each user’s hands.
But after some research I thought it could work through OpenCV, though I don’t know how to start. I have only minimal notions of Python.
Is it better to use 4 Kinects? Or do you think performance would improve by using 4 video inputs and OpenCV in real time?
If so, do you know where I can find information on doing Video in → cv2 → hand tracking?
Thank you very much in advance to whoever answers.
My two cents: if you can get the new Azure Kinect, I don’t think it has the single-sensor limitation of v2.
Tracking hands is, I think, a job for skeletal tracking, and IMHO Kinect is the solution.
OpenCV relies on feature detection, and AFAIK detecting hands, let alone tracking them, would be difficult if not impossible.
Usually multiple Kinects are used for corridors and long interactive displays.
If you just need the hand position you can rely on the ‘tz’ (depth) value coming from one Kinect v2 (up to 5–6 meters away). Anyway, the difficult part of using multiple Kinects is keeping the user’s ‘id’ when they pass from one Kinect to the next (it needs some custom logic).
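That custom hand-off logic could start as simple as nearest-neighbour matching of user positions in a shared coordinate space. A minimal sketch, not any Kinect API — the function name and the 0.5 m threshold are my own assumptions:

```python
import math
from itertools import count

_next_id = count(1)  # fresh ids for users entering the scene

def assign_ids(tracked, detections, max_dist=0.5):
    """Match this frame's detections (x, z in metres, in a shared world
    space) to previously tracked users by nearest distance; unmatched
    detections get a fresh id.  tracked: {user_id: (x, z)}."""
    updated = {}
    free = dict(tracked)  # users not yet matched this frame
    for pos in detections:
        best_id, best_d = None, max_dist
        for uid, prev in free.items():
            d = math.dist(pos, prev)
            if d < best_d:
                best_id, best_d = uid, d
        if best_id is None:
            best_id = next(_next_id)  # nobody close enough: a new user
        else:
            free.pop(best_id)  # each tracked user is matched at most once
        updated[best_id] = pos
    return updated
```

Calling this once per frame keeps a user’s id stable as long as their consecutive positions stay within `max_dist` of each other, even while they cross from one Kinect’s view into the next (assuming all sensors are calibrated into the same coordinate space).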
In the past I used 3 Kinect v2, but only because I wanted to create a 360° point cloud around a DJ set, and yes, I had to set up 2 clients + 1 server. The clients were streaming the Point Cloud TOP with the Pack TOP over CAT6 Ethernet (crazy).
There are some examples online for hand tracking using RGB cameras:
a recent video
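For the RGB-camera route, here is a minimal sketch of a Video in → cv2 → hand tracking loop. It assumes the MediaPipe Hands library (its `mp.solutions.hands` API) alongside OpenCV, which is my suggestion rather than anything confirmed above; adjust the capture index to your video input:

```python
# Sketch only: needs the opencv-python and mediapipe packages installed.

def to_pixel(x_norm, y_norm, width, height):
    """Convert MediaPipe's normalized landmark coords to pixel coords."""
    return int(x_norm * width), int(y_norm * height)

def main():
    # heavy imports kept inside so the helper above stays importable
    import cv2
    import mediapipe as mp

    cap = cv2.VideoCapture(0)  # 0 = first webcam; swap in your video input
    hands = mp.solutions.hands.Hands(max_num_hands=2)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV delivers BGR
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            h, w = frame.shape[:2]
            for hand in results.multi_hand_landmarks:
                wrist = hand.landmark[0]  # landmark 0 = the wrist
                print(to_pixel(wrist.x, wrist.y, w, h))
        cv2.imshow("hands", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()

# main()  # uncomment to run the live capture loop
```

From there the per-hand pixel (or normalized) coordinates can be sent on to TouchDesigner however you prefer.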
If you want to start with an external Python environment communicating with TouchDesigner through Spout, I suggest you start with these tutorials from @Vasily
Also consider having a look at the Leuze ROD4 (Radar Touch); maybe that device can come in handy as a simpler solution for your installation.
Thank you very much for your answer; I can see I have a lot to learn.
My idea was to use a cube, with one user per side face, so it is limited to 4 participants.
I find the two videos and the Python library super interesting for what I want to do.
I will start learning today with the ML LEGO tutorials.