Anyone played with this? I'm looking at some PTZs with FreeD and NDI (with genlock) and I want to render frames in TouchDesigner that match up with the sync on the cameras.
In my video-centric view that means I want to supply the same sync to my TouchDesigner machine (the main render loop) and the cameras.
I don't really know where I'd get that sync out - maybe the NDI coming off TouchDesigner would be in sync with its output (I'd use Quadro Sync) and I could use that to lock the cameras?
It’s a brave new world!
I think you would do this with a timecode value that you use to sync the content together. The tdSynchro COMP in the palette is intended for that.
There is certainly no way to genlock an NDI Out TOP, since it's not supplied via a 'timed/refreshed' hardware output the way SDI or DisplayPort is. It's just a UDP data stream over a general network.
I think NDI that supports genlock would be dedicated hardware that does NDI.
I'll look at that COMP, thanks. Timecode doesn't help if the clocks don't match, though.
But if the main Touch render loop isn't locked to the GPU, the NDI won't come out in sync? I sort of presumed that if a screen was involved, Touch ran with the 'pull' based on that. The NDI output could be wild relative to that?
And yeah, there's some hardware supporting NDI genlock. Seems to be PTP (Precision Time Protocol) based.
PS I don’t mean synced the way a video signal is, I mean the clocks coinciding enough that there’s an exact 1:1 relationship between video frames and rendered frames.
When a screen is involved, it'll throttle us if we try to generate too many frames, but we are not running 'in-sync' with the monitor in terms of when we start or end processing a frame. The GPU has a queue of 2-3 frames which it'll use to scan out when vsync occurs, and we just add frames to that queue.
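Roughly, the queue behaviour described above looks like this (a toy Python sketch, not TouchDesigner's actual internals - the queue depth and function names are made up for illustration):

```python
from collections import deque

QUEUE_DEPTH = 3  # assumed: GPU keeps 2-3 finished frames for scan-out

def render(frames_to_render, queue_depth=QUEUE_DEPTH):
    """Renderer adds frames freely and is only throttled when the queue is
    full, so the *start* of processing is not locked to vsync scan-out."""
    queue = deque()
    scanned_out = 0
    for i in range(frames_to_render):
        if len(queue) == queue_depth:
            queue.popleft()      # vsync drained the oldest frame
            scanned_out += 1
        queue.append(i)
    # Frames still in the queue will be scanned out over the next few vsyncs.
    return scanned_out, len(queue)

print(render(10))  # → (7, 3)
```

The point of the sketch: the renderer only blocks when the queue is full, so its per-frame timing floats relative to the display's refresh.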
We could in theory stall waiting for an NDI frame to come in, and only start processing a new frame when that occurs. This would sync up the start of processing, but when our frames get shown is still a bit variable. Also, when those frames arrive will be variable, since they rely on the network stack.
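That stall-on-arrival idea can be sketched like this (a hedged Python mock-up: the real NDI receive isn't shown here, so a background thread stands in for the network stream, and all names are invented):

```python
import queue
import threading
import time

incoming = queue.Queue()

def fake_ndi_source(n_frames, interval_s=0.005):
    """Stand-in for an NDI stream; real arrival times would be jittery."""
    for i in range(n_frames):
        incoming.put(i)
        time.sleep(interval_s)
    incoming.put(None)  # end-of-stream marker

def render_loop():
    """Stall until a frame arrives, then process exactly one frame: 1:1."""
    rendered = []
    while True:
        frame = incoming.get()  # blocks: processing is gated on arrival
        if frame is None:
            break
        rendered.append(frame)  # one rendered frame per captured frame
    return rendered

threading.Thread(target=fake_ndi_source, args=(5,), daemon=True).start()
print(render_loop())  # → [0, 1, 2, 3, 4]
```

This locks the frame *count* 1:1 to the source, but as noted above, display timing still floats with the GPU queue and network jitter.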
Hmm ok. So time-slicing, kind of. But - won’t a new frame be created every time one is consumed off the queue? Otherwise occasional frames would be discarded or duplicated. What about with the pro sync features?
In terms of the NDI's exact timing - I sort of don't care. If there's a 1:1 on the number of frames captured and created, there's probably a fixed latency.
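Back-of-envelope for that fixed latency (the frame rate and queue depth here are assumptions, not measured values):

```python
FPS = 60            # assumed camera/render rate
QUEUE_DEPTH = 3     # assumed frames buffered between capture and display

frame_time_ms = 1000 / FPS            # ~16.7 ms per frame at 60 fps
latency_ms = QUEUE_DEPTH * frame_time_ms

print(round(latency_ms, 1))  # → 50.0
```

So with a 1:1 frame relationship, the pipeline delay would be a constant number of frame periods - here roughly 50 ms - which can be compensated with a fixed offset.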