Kinect Interactions

Hey folks,

I resurrected an older project that I had been working on, and I've realized I'm running into the same problems I had when I last worked on it.

First:
I am using a Kinect for Windows v2 to track the positions of the user's hands, and I am using that to drive the position of an on-screen container holding a hand icon. As the user moves his/her hand around, the idea is that the hand on the screen moves to match. I have this element working, but I also have a couple of buttons on the screen with rollover states. How do I trigger these rollover states as the hand moves over them, and how do I let the user click the buttons using the hand-closed state?
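The hover and click behaviour described above comes down to per-frame hit-testing plus watching for an open-to-closed hand-state transition while hovering. Here is a rough sketch of that logic; it's in Python purely for illustration (a real Kinect v2 app would typically be C# or C++), and the button names, coordinates, and `HandCursor` helper are all made up:

```python
# Sketch of rollover detection and closed-hand clicking for a
# Kinect-driven hand cursor. The cursor position and hand state
# would come from the Kinect body frame each frame.

class Button:
    def __init__(self, name, x, y, w, h):
        self.name = name
        self.rect = (x, y, w, h)
        self.hovered = False

    def contains(self, px, py):
        x, y, w, h = self.rect
        return x <= px <= x + w and y <= py <= y + h


class HandCursor:
    def __init__(self, buttons):
        self.buttons = buttons
        self.was_closed = False  # hand state from the previous frame

    def update(self, px, py, hand_closed):
        """Call once per frame with the cursor position and hand state."""
        clicked = None
        for b in self.buttons:
            over = b.contains(px, py)
            if over and not b.hovered:
                b.hovered = True          # trigger rollover-in visuals here
            elif not over and b.hovered:
                b.hovered = False         # trigger rollover-out visuals here
            # fire a click only on the open -> closed transition while hovering
            if over and hand_closed and not self.was_closed:
                clicked = b.name
        self.was_closed = hand_closed
        return clicked


buttons = [Button("play", 100, 100, 200, 80), Button("quit", 100, 250, 200, 80)]
cursor = HandCursor(buttons)
cursor.update(150, 120, hand_closed=False)        # hover over "play"
print(buttons[0].hovered)                         # True
print(cursor.update(150, 120, hand_closed=True))  # play
```

The key detail is checking the *transition* into the closed state rather than the closed state itself, so holding a fist doesn't fire repeated clicks.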

Second:
I would like to allow the user to “smudge” an image on screen in a finger-painting fashion. So when the user closes his/her fist, any movement of the hand will cause the underlying image to smudge as if it were wet paint. Any suggestions on how I might accomplish this?
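One common way to get a wet-paint smudge is to drag pixel values under a circular brush a fraction of the way along the hand's frame-to-frame movement vector. A minimal sketch, using a grayscale image as a 2D list and made-up brush `radius`/`strength` values:

```python
# Minimal "wet paint" smudge on a grayscale image stored as a 2D list.
# While the fist is closed, each frame of hand movement drags pixels
# under the brush partway along the movement vector (dx, dy).

def smudge(img, cx, cy, dx, dy, radius=2, strength=0.5):
    """Pull pixels near (cx, cy) along the drag vector (dx, dy)."""
    h, w = len(img), len(img[0])
    src = [row[:] for row in img]  # sample from a copy of this frame
    for y in range(max(0, cy - radius), min(h, cy + radius + 1)):
        for x in range(max(0, cx - radius), min(w, cx + radius + 1)):
            if (x - cx) ** 2 + (y - cy) ** 2 > radius * radius:
                continue  # outside the circular brush
            # sample "behind" the drag direction and blend it in
            sx = min(w - 1, max(0, x - dx))
            sy = min(h - 1, max(0, y - dy))
            img[y][x] = (1 - strength) * src[y][x] + strength * src[sy][sx]
    return img

# drag a single bright pixel one step to the right
img = [[0.0] * 5 for _ in range(5)]
img[2][1] = 1.0
smudge(img, cx=2, cy=2, dx=1, dy=0)  # hand moved one pixel to the right
print(img[2][2])                     # picked up paint from behind the drag
```

In a real app you'd run this per channel on the bitmap (or as a fragment shader for speed), with the brush centred on the hand cursor and `(dx, dy)` taken from the cursor's movement since the last frame, only while the hand state is closed.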

Thanks in advance.

Glen

Hey, did you ever get this working in the end, and could you point me in the direction of how you managed to control an image via the Kinect? I'm currently trying to do the same thing, wherein I have an image on screen whose movement, scale, and rotation I control (essentially so it follows the hand with the correct orientation when it's moving left and right).
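For following the hand with the correct orientation, one approach is to derive position, scale, and rotation from two tracked joints (say, wrist and hand tip). A sketch of that idea; the joint coordinates and `base_length` calibration value here are made up, and real values would come from the Kinect body frame after coordinate mapping:

```python
# Derive an image's position, scale, and rotation from two tracked joints.
import math

def image_transform(wrist, hand_tip, base_length=0.1):
    """Return (position, scale, rotation_degrees) for the on-screen image."""
    wx, wy = wrist
    hx, hy = hand_tip
    dx, dy = hx - wx, hy - wy
    rotation = math.degrees(math.atan2(dy, dx))  # orientation of the hand
    scale = math.hypot(dx, dy) / base_length     # relative to a calibrated length
    position = (hx, hy)                          # image follows the hand tip
    return position, scale, rotation

pos, scale, rot = image_transform(wrist=(0.5, 0.5), hand_tip=(0.6, 0.5))
print(rot)  # 0.0 -> hand pointing right, no rotation
```

Using two joints rather than the hand's velocity keeps the rotation stable when the hand pauses; the trade-off is that joint jitter feeds straight into the angle, so some smoothing (e.g. a low-pass filter) is usually needed.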

Looks like you ran the image through a container, but I'm unsure how you've done it!

Cheers

Hello, sorry for the delay in responding. I did make some progress, but I never completed it. Due to the timeline I had to work with, I ended up simplifying the process considerably, to the point where I removed the navigation system entirely. Now that you mention it, though, maybe it's time to resurrect the project. How are you making out with it?

As far as the movement of the hand goes, I believe I captured the UV screen-space position of the hand and then mirrored that onto the object, but I had to do some math to get them lined up, and even then it wasn't 100% accurate. If the user wasn't standing the right distance from the screen, the alignment was less accurate still. It was mentioned to me that it would be better to use vector math to determine where the user was pointing on the screen and move the object to that point; that way the accuracy shouldn't be affected by the user's position.
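The vector-math approach mentioned above can be sketched as a ray-plane intersection: cast a ray from one joint (e.g. the shoulder) through the hand and see where it hits the screen plane. The choice of joints, the plane at z = 0, and the example coordinates are all assumptions; real joint positions would come from Kinect camera space:

```python
# Intersect the shoulder -> hand ray with the screen plane (z == plane_z)
# to find where the user is pointing, independent of where they stand.

def point_on_screen(shoulder, hand, plane_z=0.0):
    """Return the (x, y) hit point on the plane, or None if there isn't one."""
    sx, sy, sz = shoulder
    hx, hy, hz = hand
    dz = hz - sz
    if dz == 0:
        return None  # arm parallel to the screen, no intersection
    t = (plane_z - sz) / dz
    if t < 0:
        return None  # pointing away from the screen
    return (sx + t * (hx - sx), sy + t * (hy - sy))

# shoulder 2 m from the screen, hand half a metre closer, offset right and up
hit = point_on_screen(shoulder=(0.0, 0.0, 2.0), hand=(0.25, 0.1, 1.5))
print(hit)  # (1.0, 0.4): where the ray meets the screen plane
```

The resulting (x, y) is in the screen plane's metric units, so one extra mapping (plane metres to pixels, using the physical screen size and position relative to the sensor) is still needed to place the cursor.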