Hi everyone.
Can anyone point me in the right direction for a workflow between TD and d3? How would we go about getting interactive signals/generative content into d3?
Thanks!
In the past we just used video capture cards, but if I remember correctly the latest Disguise software now supports NDI (video over Ethernet). TouchDesigner also supports NDI in/out, so this is the easiest way to share live video between the systems.
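If it helps, here's roughly what the TD side looks like if you script it rather than wiring it by hand. This is a minimal sketch: the /project1 path and the render1 TOP are placeholders, and you should double-check parameter names against your TD build's parameter dialog.

```python
# Minimal TouchDesigner Python sketch for an NDI output to Disguise.
# '/project1' and 'render1' are placeholder names (assumptions).
proj = op('/project1')
src = proj.op('render1')                   # hypothetical source TOP to share

ndi = proj.create(ndioutTOP, 'ndi_to_d3')  # create an NDI Out TOP
ndi.inputConnectors[0].connect(src)        # feed the source into the NDI out
ndi.par.active = True                      # start broadcasting the stream
# Set a clear stream name in the NDI Out TOP's parameters so the feed is
# easy to find in Disguise's NDI input list.
```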
If you can, test the NDI connection before your final deployment. On a project in Jan 2019 we found Disguise couldn't handle anything more than a single 4K NDI feed coming from TD before the feed would die inside their server. This was not a network issue: everything was 10GbE end to end and worked perfectly when sent from TD to TD. We think it was because they were still on an older NDI SDK version at the time. Disguise might have improved things since then, but test it first!
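For a quick sanity check that the wire itself isn't the limit, the back-of-envelope math is simple. The bitrates below are rough published full-bandwidth NDI figures, so treat them as assumptions; real throughput varies with content and codec settings:

```python
# Back-of-envelope: how many full-bandwidth NDI feeds fit on a link?
# Bitrates are approximate published NDI figures (assumption).
APPROX_MBPS = {'1080p60': 125, '2160p60': 250}

def feeds_per_link(link_gbps: float, fmt: str, headroom: float = 0.7) -> int:
    """Feeds that fit while keeping `headroom` fraction of the link usable."""
    usable_mbps = link_gbps * 1000 * headroom
    return int(usable_mbps // APPROX_MBPS[fmt])

print(feeds_per_link(10, '2160p60'))  # ~28 feeds on paper
```

A 10GbE link should carry dozens of 4K feeds on paper, which is why a single feed dying pointed at the endpoint's NDI SDK rather than the network.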
It all depends on your desired output composition, where your content is coming from, and how you plan to manipulate it.
As others have mentioned, NDI is great for streaming video layers from TD to Disguise. You can also use OSC (and in theory MIDI and DMX, although I haven't tried those) for control parameters, cues, etc.
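For the OSC side, one common pattern is a CHOP Execute DAT that forwards control channels to the server as they change. A hedged sketch: 'oscout1' is a hypothetical OSC Out DAT pointed at the d3 server's IP and whatever OSC port you've configured in Disguise, and the /td/... address is illustrative, not a fixed Disguise endpoint:

```python
# TouchDesigner sketch: forward control channels to Disguise over OSC.
# Lives in a CHOP Execute DAT watching your control CHOP. 'oscout1' is a
# hypothetical OSC Out DAT aimed at the d3 server (IP/port are assumptions).
def onValueChange(channel, sampleIndex, val, prev):
    # Address scheme is illustrative; map it to what you expose in Disguise.
    op('oscout1').sendOSC('/td/' + channel.name, [float(val)])
    return
```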
I would also look into transport timelines if you want to build things in Disguise and use Touch as more of an input control system for the server.
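In that setup, TD mostly just fires transport commands at the d3 timeline. Something like the sketch below; the /d3/showcontrol/* addresses follow Disguise's OSC transport scheme as I remember it, but confirm them against your server's OSC settings before relying on them:

```python
# Sketch: driving the d3 transport from TD over OSC. Addresses and the
# trigger argument are assumptions; verify against your Disguise version.
osc = op('oscout1')  # hypothetical OSC Out DAT aimed at the d3 server

def play():
    osc.sendOSC('/d3/showcontrol/play', [1])

def stop():
    osc.sendOSC('/d3/showcontrol/stop', [1])

def go_to_cue(cue_number: float):
    osc.sendOSC('/d3/showcontrol/cue', [float(cue_number)])
```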