How to export animation/point position data (NVIDIA face tracking) from TouchDesigner?

I’ve been trying out the NVIDIA nodes after getting my new GPU, and I’d like to record my face in TD, then export the movements of the facial tracking points, with the hope of using them to control a facial rig in an external package like Blender. What would be the best way to do this?

Pictured is my very basic face tracking setup, using the U and V values from the points to instance spheres.

A couple of us were looking into this today, but there are so many different face tracking projects out there for Blender that it's hard to say which might be the best option for your case.

Based on what we saw, on the TouchDesigner side it might be simplest to export the data as standard text, and then use Python in Blender to import it and attach it to your rig.

One way to do this in Touch would be to use an Execute DAT script to grab the Face Track CHOP channels you want to save at the end of every frame and append them to a Table DAT. You can then right-click the Table DAT and choose Save Contents to write the data out in various text formats.
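As a rough sketch, the Execute DAT callback could look something like this (the operator names 'facetrack1' and 'record1' and the channel names are assumptions; rename them to match your network):

```python
# Runs inside TouchDesigner, where op() is a built-in.
# Paste into an Execute DAT's callbacks.

CHANNELS = ['p1:u', 'p1:v', 'p2:u', 'p2:v']  # hypothetical channel names

def onFrameEnd(frame):
    face = op('facetrack1')   # the Face Track CHOP
    table = op('record1')     # an empty Table DAT collecting samples
    if table.numRows == 0:
        table.appendRow(['frame'] + CHANNELS)  # write the header row once
    # append the current value of each tracked channel for this frame
    table.appendRow([frame] + [face[name].eval() for name in CHANNELS])
```

Once you stop recording, Save Contents on the Table DAT gives you one row per frame with those channels as columns.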

Once you can read the data in Blender, there are lots of tutorials out there for hooking up a face rig.
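On the Blender side, parsing a table saved from a Table DAT is plain Python; a minimal sketch, assuming a tab-delimited file with a header row of channel names (the file path, bone name, and channel names are made up for illustration, and the bpy calls are left as comments):

```python
import csv

def load_track(path):
    """Parse a tab-delimited table (header row, then one row per frame)
    into {channel_name: [value_per_frame]}."""
    with open(path, newline='') as f:
        rows = list(csv.reader(f, delimiter='\t'))
    header, data = rows[0], rows[1:]
    return {name: [float(row[i]) for row in data]
            for i, name in enumerate(header) if name != 'frame'}

# Inside Blender you would then keyframe a bone per frame, e.g.:
# bone = bpy.data.objects['Rig'].pose.bones['jaw']
# bone.location.x = tracks['p1:u'][f]
# bone.keyframe_insert('location', frame=f)
```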

This one seems to use a rig that matches the landmark data that the Face Track CHOP produces, although it pulls the data from OpenCV: Blender 2.8 facial mocap using OpenCV and webcam - YouTube

I think this project may have some ideas on loading custom text data into Blender: GitHub - cgtinker/blendartrack: blender add-on for importing augmented reality motion tracking data recorded with the mobile application blendartrack

Hopefully that might give you somewhere to start. If you do get something working, we’d love to hear about it.


I have previously created a Blender add-on to control custom parameters over OSC.
Maybe you can use it.
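If you go the OSC route, an OSC Out CHOP in TouchDesigner can stream the channels directly, so no scripting is needed on that side. Just for reference, this is roughly what a single OSC float message looks like on the wire (the address `/face/p1` and port 9001 are made-up examples, not anything the add-on requires):

```python
import socket
import struct

def osc_message(address, *floats):
    """Encode a minimal OSC message with float32 arguments."""
    def pad(b):
        return b + b'\x00' * (-len(b) % 4)   # pad to a 4-byte boundary
    addr = pad(address.encode() + b'\x00')   # null-terminated address
    tags = pad((',' + 'f' * len(floats)).encode() + b'\x00')  # type tags
    args = b''.join(struct.pack('>f', f) for f in floats)     # big-endian
    return addr + tags + args

msg = osc_message('/face/p1', 0.42, 0.87)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ('127.0.0.1', 9001))  # fire-and-forget UDP
```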


Thank you very much for the suggestions. I'm admittedly lacking knowledge and experience with DATs and Python, so this is probably beyond my skillset at present.

Thank you very much for sharing this add-on. I was able to implement it successfully and set up drivers to control the rig. The actual movement is still a work in progress: the rotation axes are correct, but any amount of movement pulls the overall face proportions out of shape and distorts them more than desired. You can also see in the attached video that bones in areas like the eyes and mouth barely move at all. However, if I use something like a Math CHOP in TD to multiply the range of movement, it multiplies all of the bones' ranges, causing further distortion. Is it worth sharing my TD and Blender files in case someone has a better idea of how to set this up?

TouchDesigner to Blender Facial Rig Test 01
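On the Math CHOP issue: a single multiply applies one factor to every channel, which is why the eyes and mouth can't be boosted without distorting everything else. One alternative is to rescale each channel by its own observed range, so subtle landmarks get their full travel; a generic sketch, independent of either package:

```python
def normalize_per_channel(tracks, lo=0.0, hi=1.0):
    """Rescale each channel to [lo, hi] using that channel's own
    min/max, so small movements (eyes, mouth) are not drowned out
    by one global multiplier."""
    out = {}
    for name, values in tracks.items():
        vmin, vmax = min(values), max(values)
        span = (vmax - vmin) or 1.0  # avoid divide-by-zero on flat channels
        out[name] = [lo + (v - vmin) / span * (hi - lo) for v in values]
    return out
```

The same idea can be done per-channel in TD (a Math CHOP with different from/to ranges per selected channel) or in the Blender drivers.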

I think that is due to the difference in default scales between Blender and TouchDesigner; Blender's scale is roughly 10x TouchDesigner's. So you could try shrinking your geometry in Blender as a first simple test.

Thank you for the suggestion. I tried experimenting with different scales, but it doesn't seem to make a difference; the distortion appears to be the same.