Aha, I think I get it now - thanks. I think it is best when the queue doesn't get backed up at all. I would say that in this area (video I/O), latency is king, and each frame should go through the system as fast as possible - with fixed latency. (If some frame takes longer than it should, then it is my responsibility to optimize it so that it doesn't happen when it shouldn't.)
+1000 for this.
This would be fantastic if it could also be used for the Video Device Out nodes. Right now, via Info CHOPs, it looks like the SDI outputs are genlocked when the appropriate signal is supplied to the reference input. However, even with a sync card, the relationship between the I/O and vsync is decoupled: the sync card manages vsync and therefore the phase of TouchDesigner's rendering, and the physical SDI outputs are phased to the reference input. But even if the same reference signal is supplied to both the Sync II card and the SDI output cards, the result is usually an output buffer of up to 2-3 frames, as reported by Info CHOPs. My guess is that this could be tightened up by having some sort of tag in the opposite direction. I might be thinking in the wrong direction entirely, though!
Hello @malcolm, I have noticed a minor issue with the Info CHOP and timecode. When an Info CHOP is placed inside a tox component (one that is reloaded on start), the Info CHOP doesn't provide timecode information until you dive inside the component and click on it. I would expect it to work right away.
@monty_python, thanks for the report. This will be fixed in builds 2021.14670 and later.
+1 for this idea of making each input frame create precisely one output frame. In this mode, if the cooking takes too long, I would prefer either freezing the current frame or outputting black as a warning, so I know I need to rearchitect the network to make it realtime. The most important thing is to have a consistent latency, so other systems can be adjusted to match. A queue that causes variable / unknown latency would be a big problem.
Yeah, this is something I’m thinking about a lot lately. A different playback mode where the playbar is held back by waiting for a new frame to come in over a capture input, and output doesn’t do any queueing. It won’t make it into the 2021.10000 series of builds, but I’m kicking around the idea for the 2021.30000 series.
AJA timecode support will be available in builds 2021.15410 and later.
@malcolm, may I ask if there is a chance this might make it into later builds of the new experimental branch? Thanks. (I am just hoping this will find its way into this experimental branch - for me personally it is definitely the most anticipated feature in TD...)
I have recently realized this would bring one huge benefit as a side effect. I have been working with some external genlocked devices, and polling data from them usually required a queue and some synchronization mechanism, since the Python code polling the data could run before or after new data was available (its execution start wasn't locked to their timing in any way).
If TD could be genlocked to the input video (with these external devices locked to the same reference signal), one could simply poll the data on Frame Start without having to worry about queues and synchronization - this way data from the last frame would always be polled at the right time.
+1 for this idea of genlocking TouchDesigner to a professional video input. Variable latency is a huge problem in a professional production environment (having consistent latency is extremely important).
Genlocking TD to video input could have yet another very beneficial side effect: the ability to use Spout between TD and UE without frame drops.
This would be a huge plus, as right now it is impossible to have a stable texture transfer from UE to TD (phase shifts and small differences between the clocking of the two cause instability and occasional frame drops). Having both TD and UE locked to the same reference signal (a Blackmagic / AJA video input) would largely eliminate this behavior.
I have tested this theory the other way around - since I have no control over the clocking of TD at the moment, I decided to genlock Unity to TD to see if it helps (I chose Unity because it was easier to implement there than in UE). It worked very nicely, and the Spout transfer became super stable once both programs were in sync.
Based on this, I assume the Spout transfer between TD and UE would stabilize once both programs were locked to the same reference, which would make them run in sync.
How did you genlock Unity to TD?
Spout in particular is tricky since it has no notion of sync at all, it’s just a texture that is written to/read from whenever the app wants. Even if you sync when both apps start their frames using some genlock mechanism, at what point during the 16.6ms of frame time the apps write and read from the Spout texture is totally arbitrary, and can occur at different times within the frame, from frame to frame. The GPU is also free to schedule that work whenever it wants, since there is no imposed sync between the write and read operations. So you are actually just getting lucky here with your Unity to TD Spout sync.
Having said all that, there will be a new ‘Sync To Input Frame’ mode that is operational for AJA devices (so far) in the next 2022.20000 series build we release, which waits for a frame to arrive over an SDI input, then TD’s frame will start operating.
+1 for a synced playback mode
maybe expanded to include syncing to the word clock of an audio input?
I’ve been thinking about this topic a lot lately, and when I saw this thread’s update I really wanted to join the discussion right away! I made a repository for the discussion the other day (using a not-so-beautiful way to implement frame lock): GitHub - yeataro/TD-Framelock_Genlock-Discuss: This repository is for discussing Frame Lock/Genlock methods in TouchDesigner.
But it looks like a new experimental version is about to add this feature, so looking forward to it!
Malcolm, this is going to be sooo amazing - I can’t even say how happy I am right now. Thank you very much for implementing this. I was really hoping this feature would find its way into this year’s experimental branch, so I am super excited to hear this news. I will order an AJA card right away so that I can start testing (however, I have heard it could take a couple of months to get one these days, so it might take some time before I can start).
It was quite simple actually. I created a shared memory block that was accessed both from Python in TD and from C# in Unity. TD would write “1” into this shared memory block on Frame Start. Unity would stall until it found this “1”, then start working on the new frame (and also write “0” back to the shared memory). Before doing this I was getting frame drops several times per 10-second period. After performing this “genlock” I ran the same setup for about half an hour and didn’t observe a single frame drop.
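For readers who want to try the same trick, here is a minimal sketch of the TD side of that handshake. The original setup used a named shared memory block visible to both processes; for illustration this sketch uses an anonymous `mmap`, and the Unity C# spin loop is mimicked in Python. Function names are made up for the example:

```python
import mmap

# One-byte shared flag: TD writes b"1" at frame start, Unity clears it
# to b"0" once it begins rendering its own frame. In the real setup this
# would be a *named* shared memory block so both processes can open it;
# an anonymous mapping is used here just to keep the sketch runnable.
shm = mmap.mmap(-1, 1)

def on_frame_start():
    # Called from TD's Frame Start callback: signal that a new TD frame
    # has begun.
    shm.seek(0)
    shm.write(b"1")

def unity_wait_and_clear():
    # Python stand-in for the Unity C# side: spin until the flag is set,
    # then clear it and start working on the frame.
    shm.seek(0)
    while shm.read(1) != b"1":
        shm.seek(0)
    shm.seek(0)
    shm.write(b"0")
```

As @malcolm notes below, this only aligns frame *starts*; it does not control when within the frame each app touches the Spout texture, so it is a lucky-but-useful alignment rather than a true genlock.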
Thank you very much for the explanation. That makes total sense. I suspected something like this could be happening, but now I understand it much better. You are right, I guess I was just getting lucky, but while thinking about it I imagine something like this might be happening in my case:
Unity is delivering some heavy HDRP scenes that take quite a long time to render - let’s say it writes the final texture to Spout at around 16-18 ms (out of 20 ms per frame, as I was testing at 50 fps).
On the other hand, the TD scene is very light and not doing much work. TD cooks the Spout In TOP really early - let’s say at around 6-8 ms (again out of 20 ms per frame).
That makes roughly a 10 ms gap between Spout writes and reads, which is quite a lot and possibly the reason I was getting stable results over long periods. But yeah, you are right, it is most probably just a super lucky scenario.
@malcolm, I would like to apologize for not providing any info on how this new feature works on my side. Unfortunately it seems impossible to get an AJA card in Europe these days. They are all sold out, with no sign of new supply from AJA coming here. I will dive into testing as soon as I get my hands on one of their cards.
I’m looking into using TouchDesigner for some really simple but real-time audio processing - just a very simple crossfade between the audio embedded in two SDI signals coming in via a capture card.
The thing is that TouchDesigner will sit in a chain of other professional broadcast devices, all synced via the house reference (blackburst), and therefore should also produce a fixed latency in order to be usable… But so far I also seem to get variable latency - probably on frame drops, but I’m not 100% sure.
- Would this (fixed latency for video and audio) become feasible in 2022.20000 using an AJA device in ‘Sync To Input Frame’ mode?
- Or would the separate audio buffers still introduce a variable latency?
- Or should I rather be using a dedicated audio interface card such as an AES/EBU PCI express card?
Hope this is the right place for this question, but feel free to refer me to another thread.
@monty_python: any luck in getting your hands on an AJA card for some testing?
Unfortunately no luck so far. I have tried contacting various AJA resellers but there seems to be a huge shortage of these cards in Europe. Back in March I was told they expect a new supply of Corvid cards in May, but that didn’t happen so I am still waiting…
It wouldn’t be frame locked yet on the output. On that side there is still a queue that has a bit of variance.
I have finally received my AJA card, yay! Last night I tested the ‘Sync To Input Frame’ mode and it seems to work really great! @malcolm, thank you so much for implementing this. The input queue was causing me quite some trouble - this new mode changes everything for me.
I wanted to ask one thing related to disabled realtime mode and the absolute frame. Since realtime mode should be disabled when using ‘Sync To Input Frame’ mode, I was wondering what could be a stable way of getting a (true) frame number that takes dropped frames into account.
Since TD is now locked to input frames, do you think capture_total + frames_dropped (channels from an Info CHOP pointing at the Video Device In) could give me some sort of absolute frame information?
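If that reasoning holds, the arithmetic is simple enough to sketch. This assumes `capture_total` counts frames actually delivered and `frames_dropped` counts frames the card missed - channel names and semantics should be verified against the Info CHOP in your build, and the operator path in the comment is hypothetical:

```python
def absolute_frame(capture_total, frames_dropped):
    # Frames actually captured plus frames dropped along the way gives
    # the total number of frames the source has emitted since capture
    # started - an "absolute" frame index that survives drops.
    return capture_total + frames_dropped

# In TD this might read something like (hypothetical paths/names):
#   info = op('info1')  # Info CHOP pointing at the Video Device In TOP
#   abs_frame = absolute_frame(info['capture_total'], info['frames_dropped'])
```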
Also, I have been wondering about the relation between the point in time when a new frame truly starts (based on a reference signal like tri-level sync, which defines the true start of frame) and the point in time when AJA delivers the new frame to TD (the point when TD starts working on it). I guess this might be a question more for AJA developers, but I was wondering what you think.
Do you think these two points in time are close to each other (meaning AJA emits new-frame events close to its internal genlock), or might there be some delay between them? I am just curious how one might picture TD’s timing phase in relation to the actual reference. It is not critical, but based on some tests I have performed it seemed like there could be around an 8 millisecond delay between them - I am not really sure though, as it is super hard to measure…