I’m attempting to drive a skeleton with bound skin, exported from Maya, using the Kinect tracked positions.
I wanted to use single-bone IK chains (what gets created using the “forward kinematics” mode when creating bones) and constrain the IK goal to the tracked positions. However, importing the FBX model creates nulls in Touch, not bones, and the Inverse Kin CHOP only accepts bones.
I don’t want to do rigging/binding in Touch; is there any way to have an imported FBX converted into bones?
I guess the other option is to use the bone rotations, though I’m wondering what the initial pose should look like.
I tried a simple chain like this:
If I plug the values from the Kinect CHOP into a straight bone chain with original angles at 0, I’m not getting good results. I tried adding 90/180 degree offsets to some of the bones, but I’m still not getting very close.
It would be great to have an example skeleton, ready to go, shipped with Touch.
Going to do more research, but appreciate any input!
I seem to recall that doing your rig in Houdini and then bringing it in works fine. I’m going from memory and this was about 2-3 years ago, so I may be wrong.
Using FBX I get roughly the same results from Maya or Houdini: skin weights and hierarchy come through fine, but I end up with nulls instead of bones, making the IK CHOPs useless in Touch.
Otherwise, from Houdini I’ve used .hclassic for geometry, but I don’t think the hierarchy comes through.
I’m going to spend more time on it and I’m sure I’ll eventually figure it out; I was just wondering if there is a tested/recommended workflow.
Bone rotation is the correct way to do this, as that already has all the rotations solved for you. I know we had a sample skeleton at one point but I’m not sure where that is now.
Thanks Malcolm. It makes sense.
Though I was also given some test recorded data that has only the point positions, which is why I was wondering about constraining a rig as well. Otherwise, I guess I need to do the math to derive rotation values myself.
Looking at the rotation data: first, it feels like Right and Left are switched, as if the pose were mirrored.
Also, for the clavicles I’m getting rx and ry at -180 all the time for both left and right, which seems a little weird, because when I set them back to 0,0 the arms point straight up.
And last, the left and right limbs have the same orientation instead of being mirrored.
So for a symmetrical pose I’m getting rz -60 for the left and +60 for the right.
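To experiment, something like the following remap is what I have in mind. This is a plain-Python sketch of the idea rather than actual CHOP code, and the channel names are just placeholders:

```python
def remap_sides(channels):
    """Swap left/right channels and optionally flip rz.
    channels is a dict of name -> value, e.g. {'arm_l:rz': -60.0}.
    Placeholder names and logic -- only a way to test hypotheses
    against the rig."""
    out = {}
    for name, val in channels.items():
        # swap the _l / _r suffixes via a temporary token
        swapped = (name.replace('_l', '_TMP')
                       .replace('_r', '_l')
                       .replace('_TMP', '_r'))
        if swapped.endswith(':rz'):
            val = -val  # try flipping; the rig may expect mirrored or identical values
        out[swapped] = val
    return out
```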
I’ll keep at it for a bit, but if you find a working skeleton that would be awesome.
I’ll probably also have another go at building IK and using positions; it looks like a more flexible approach.
Yeah, I’ve spent days trying to figure out how to get the Kinect to drive an imported rigged model, with no luck… tried swapping sides like you say, changed rotation orders, swapped XYZ data channels, tried taking bone positions and calculating the rotations with an Object CHOP… all to no avail. There’s some very specific setup with Kinect rotations vs. “standard” that I can’t figure out. FWIW, I have been able to drive a rigged character using a stream of FK (rotational) data from a Phase Space system, and it worked fine and very quickly.
So I don’t know, but yeah, it would sure be nice to have a working rigged character driven by the Kinect.
I love matrices! Still a work in progress, but I got the skinned mesh to behave as expected.
I’m using the tracked positions and made my own aim constraint using the lookat Python function, which lets me specify the aim axis (in the attached screenshot, x along the bones).
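Roughly, the math works out like this. This is a standalone Python sketch rather than my exact network; the xyz rotation order and the +y world-up are assumptions to adjust per rig:

```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v)) or 1.0
    return [c / l for c in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def aim_rotation(parent_pos, child_pos, up=(0.0, 1.0, 0.0)):
    """Euler angles (degrees, xyz order) aiming a bone's +x axis
    from parent_pos toward child_pos. Degenerate when the bone is
    parallel to 'up'; pick another up vector for those joints."""
    x = normalize([c - p for c, p in zip(child_pos, parent_pos)])
    z = normalize(cross(x, list(up)))
    y = cross(z, x)
    # The matrix with columns [x y z] equals Rz*Ry*Rx; read the angles back.
    rx = math.degrees(math.atan2(y[2], z[2]))
    ry = math.degrees(math.asin(max(-1.0, min(1.0, -x[2]))))
    rz = math.degrees(math.atan2(x[1], x[0]))
    return rx, ry, rz

# e.g. shoulder at origin, elbow out along +x gives (0, 0, 0)
print(aim_rotation((0, 0, 0), (0.3, 0.0, 0.0)))
```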
I’m about to start a project where I will need to rig an imported 3D character with a Kinect skeleton, and I’d love a push in the right direction from anyone, if possible.
I’ve started messing around with this, and an example rig would be immensely helpful so I can work through how it’s all set up. Right now I’m a bit confused about how the Kinect data relates to the imported rigs I’ve tried from Houdini and Mixamo. For instance, the nulls created in the rig have rotation inputs as well, but the Kinect CHOP only outputs translation values. Unfortunately, my understanding of how Houdini works is limited/non-existent.
I’ve been looking back into this recently, now that the Kinect v2 outputs rotation values. Honestly, I’ve put a fair bit of time into this and I’m still quite lost! Part of the issue may be that I don’t really understand how to use rigged skeletons in TouchDesigner, let alone with a Kinect as a control device. I tried importing the basic cartoon rig from Houdini into Touch, but the rigging a) seemed to be at a completely different scale from the model and b) didn’t deform the character at all when I changed its values. Having heard that Touch shares some similarities with Houdini, I thought doing the Houdini overview and character animation tutorials might help me better understand how things work in Touch, but I’m still not sure if and where that knowledge crosses over.
I also tried importing a Mixamo model; the rigging seemed to import properly and deform the character when changed, but I can’t figure out how to get the values from the Kinect to manipulate the rig in any useful way. Should I be looking at the relative rotation values from the Kinect?
If anyone has an example file showing how to connect the Kinect to a rigged character, that would be immensely helpful (Derivative, maybe?).
We are working on a sample file; hopefully we can get something together soon. Yes, you’d want to use the relative rotations. Attached are some sample files that show a working skeleton, though not rigged/skinned with a mesh. If you change your pane to a Geometry Viewer you’ll be able to see the skeleton in action. Our plan is to release an .fbx file that you can drag and drop into TD, and it’ll instantly start working with the Kinect’s relative rotation channels.
Maybe these will help you a bit though while we work on it.
Requires builds above 30000 of TouchDesigner 088.
Attachment: skel_rel.toe (7.46 KB)
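Until the .fbx is ready, the general idea is to point each bone’s rotate parameters at the matching relative-rotation channels. Here is a rough, untested Python sketch; the operator names and the ‘p1/<bone>:rx’ channel pattern are assumptions, so check what your Kinect CHOP actually outputs:

```python
# Bind each bone COMP's rotation parameters to the matching
# Kinect CHOP channels. 'kinect1', 'skeleton1' and the
# 'p1/<bone>:rx' naming are assumptions -- adjust to your file.
kinect = op('kinect1')
skeleton = op('skeleton1')

for bone in skeleton.findChildren(type=boneCOMP):
    for axis in ('rx', 'ry', 'rz'):
        chan_name = 'p1/{}:{}'.format(bone.name, axis)
        if kinect.chan(chan_name) is None:
            continue  # no tracked counterpart for this bone
        p = getattr(bone.par, axis)
        p.expr = "op('kinect1')['{}']".format(chan_name)
        p.mode = ParMode.EXPRESSION
```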
Thanks Malcolm! That file is helpful for seeing how the Kinect rotation data can be used to build a basic skeleton. How would you suggest changing the lengths of the “bones” in the skeleton to make them look more proportional? I took a shot at it myself using the bone lengths from the Kinect CHOP and have attached the result, but I’m not sure if that’s the smartest/most efficient approach.
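For what it’s worth, one way to derive a length is just the distance between the two tracked joint positions. A sketch along those lines; the ‘p1/<joint>:tx’ channel naming is again an assumption:

```python
import math

def bone_length(kinect, parent, child, player='p1'):
    """Distance between two tracked joints, usable as a bone length.
    The 'p1/<joint>:tx' channel naming is an assumption -- check the
    names your Kinect CHOP actually outputs."""
    def pos(joint):
        return [kinect['{}/{}:t{}'.format(player, joint, a)].eval()
                for a in 'xyz']
    p, c = pos(parent), pos(child)
    return math.sqrt(sum((pa - ca) ** 2 for pa, ca in zip(p, c)))

# e.g. bone_length(op('kinect1'), 'shoulder_l', 'elbow_l')
```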
Looking forward to the FBX example! Thanks again to the Derivative team for your help and responsiveness.