Reading timecode from Blackmagic / AJA devices

Please, would it be possible to add timecode capture to the Video Device In TOP (accessible through something like the Info CHOP)?
The Blackmagic SDK supports accessing timecode via IDeckLinkVideoInputFrame::GetTimecode. I am not sure about the AJA SDK, but I assume a similar approach exists there as well.

I guess implementing the timecode reading shouldn’t be extremely difficult, right? It would help tremendously in certain areas… Thanks.

(Forked from “Use BMD / AJA signal as reference for touch frame timer”, since timecode reading was more suitable for a separate RFE.)


+1 for being able to get timecode via Info CHOP/DAT


+1, this would be a nice addition.


I would like to emphasize how important this is with a simple example I have put together. Below is a quick sketch of the setup I used.


(EDIT: Ah, I have spotted that the image says AUX instead of ANC. Sorry about that.)

Here you can see that I am using a professional camera and a Blackmagic video I/O card. Both devices use the same tri-level sync signal as their input reference, ensuring they share exactly the same “clock” and no drift occurs between them.

The camera has a dedicated timecode input - when used, the camera embeds this external timecode into the SDI ancillary data (let’s call this VITC for now; I am not 100% sure these ancillary data contain the timecode embedded as VITC, but I assume they do). This is the data TD can’t read at the moment.

Apart from the dedicated TC input, the camera also has an XLR audio input (for generic microphones). LTC plugged into this port gets embedded into one SDI audio channel.

Here comes the fun part. I have utilized both of these inputs in order to demonstrate the problems with LTC in an audio channel. I would like to show that decoding LTC from audio inside TD isn’t stable and in certain situations can make whole timecode-based synchronization very problematic… In the following examples I am comparing timecode decoded from the audio channel (LTC) with timecode embedded in the SDI ancillary data (VITC).
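(For anyone unfamiliar with what the LTC decoder actually has to do: once the biphase-mark audio signal has been demodulated into an 80-bit frame, the timecode fields are plain BCD. A rough sketch of that extraction, with field positions as I understand the SMPTE 12M layout and helper names of my own invention:)

```python
# Sketch of SMPTE 12M LTC field extraction, assuming the audio signal has
# already been demodulated into a list of 80 bits. Field positions follow
# my reading of the SMPTE 12M layout; helper names are illustrative only.

def _bcd(bits, pos, width):
    """Read a little-endian BCD digit of `width` bits starting at `pos`."""
    return sum(bits[pos + i] << i for i in range(width))

def decode_ltc_frame(bits):
    """Return (hours, minutes, seconds, frames) from an 80-bit LTC frame."""
    assert len(bits) == 80
    frames  = _bcd(bits, 0, 4)  + 10 * _bcd(bits, 8, 2)
    seconds = _bcd(bits, 16, 4) + 10 * _bcd(bits, 24, 3)
    minutes = _bcd(bits, 32, 4) + 10 * _bcd(bits, 40, 3)
    hours   = _bcd(bits, 48, 4) + 10 * _bcd(bits, 56, 2)
    return hours, minutes, seconds, frames

def encode_ltc_frame(h, m, s, f):
    """Inverse of decode_ltc_frame (user bits and flags left zeroed)."""
    bits = [0] * 80
    def put(value, pos, width):
        for i in range(width):
            bits[pos + i] = (value >> i) & 1
    put(f % 10, 0, 4);  put(f // 10, 8, 2)
    put(s % 10, 16, 4); put(s // 10, 24, 3)
    put(m % 10, 32, 4); put(m // 10, 40, 3)
    put(h % 10, 48, 4); put(h // 10, 56, 2)
    # Fixed sync word 0011 1111 1111 1101 occupies bits 64..79
    for i, b in enumerate([0,0,1,1,1,1,1,1,1,1,1,1,1,1,0,1]):
        bits[64 + i] = b
    return bits
```

The important point for this thread is that the decoder only knows where a frame starts by finding that sync word in the audio stream - which is exactly what goes wrong when the audio buffer drifts.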

Since TD can’t read VITC, I have configured the camera’s SDI video output to render the external timecode information on top of the video feed. This way we can be 100% sure that each video frame we get from the Blackmagic I/O card is correctly stamped with the corresponding VITC information (even though it is baked into the video signal, that is fine for this test :slightly_smiling_face:).

Now we can decode LTC from audio and compare it to VITC. Please feel free to take a look at this video to see both timecodes side by side (white is VITC, yellow is LTC, the trail is showing LTC).

Please note that the project in the video is running at 50fps, while the timecode is 25fps (this is a completely normal situation, as SMPTE doesn’t define timecode rates above 30fps - therefore with a 50fps project you have to use 25fps timecode). Please also note that due to my very simple LTC offset compensation (this is more of a “deep” LTC topic and I won’t go into detail about what is happening there), incorrect numbers are reported at limit values (frame 50 should be 0) - but this isn’t really a problem here, you can just ignore it.
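(To be concrete about the 50fps/25fps relationship: each 25fps timecode frame simply spans two consecutive 50fps project frames, so the mapping is a plain integer multiply/divide. A tiny sketch, assuming the project rate is an exact multiple of the timecode rate; function names are my own:)

```python
# Mapping between a 25 fps timecode frame count and a 50 fps project frame
# count, assuming the project rate is an exact integer multiple of the
# timecode rate (which SMPTE's 30 fps ceiling forces for 50/60 fps projects).

def tc_to_project_frame(total_tc_frames, project_fps=50, tc_fps=25):
    """First project frame covered by a given timecode frame count."""
    assert project_fps % tc_fps == 0
    return total_tc_frames * (project_fps // tc_fps)

def project_frame_to_tc(project_frame, project_fps=50, tc_fps=25):
    """Timecode frame count that a given project frame falls into."""
    assert project_fps % tc_fps == 0
    return project_frame // (project_fps // tc_fps)
```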

If you observe the reported values closely, you can see moments where LTC drifts away from VITC and then, after a while, returns to the correct values.


So the question becomes: why is this happening? I believe there are two common cases that cause such behavior. One is V-Sync and the other is a frame-drop situation. When you have V-Sync enabled, the audio input buffer in TD (provided by Blackmagic) is likely to “drift” on various UI-related events, or even without them.

Take a look at this video or this one to see how the buffer is literally “drifting” back and forth. This behavior causes the LTC sync word to arrive in a different frame once drift occurs, resulting in incorrect timecode. This makes it impossible to use LTC decoding with V-Sync. Sometimes it is possible to avoid V-Sync in an entire project, but that isn’t always the case. Take a look here to see the same setup with V-Sync disabled (no drift occurs).

As I have said previously, this also happens in frame-drop situations. This means that if you have a frame-drop, the audio will drift for a short time until it “snaps back into place”. This again renders decoding LTC from the audio channel (provided by the Blackmagic video I/O) useless. All of this could be resolved by reading VITC values directly from the Blackmagic I/O (or AJA).

I am sorry for such a long read, but I wanted to explain why this is important (and it became quite long since it is a complicated topic). I don’t think this is a problem with TD - it is just the way audio is handled in general (please correct me if I am wrong)… If you know of any way of completely eliminating the drift in the input audio buffer, I would be more than happy to try it out. But for now I feel like VITC is the way to go. Thanks for reading.


In builds 2021.12450 and later, Blackmagic devices support sending and receiving VITC timecode. The receiving is done via the Info CHOP pointing to the Video Device In TOP.

Working on adding it to AJA in the near future.


Yes! Thanks so much for implementing this. I am so happy to hear this news :+1: :slight_smile:

I have just tested the new implementation for sending / receiving VITC using Blackmagic cards and it works great - thank you very much once again.

While testing I came across the need to convert between the hours:minutes:seconds:frames, total_frames and total_seconds formats. I was wondering if there is a way to do this most efficiently, but since the LTC In CHOP works only with LTC data input, I quickly realized I can’t use its conversion mechanisms, and therefore I built the conversion manually using a combination of Select CHOPs and Math CHOPs.
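In case it is useful to anyone building the same thing, here is roughly what my manual conversion does, written as plain Python instead of Select/Math CHOPs (non-drop-frame timecode assumed; function names are my own):

```python
# Conversion between hours:minutes:seconds:frames, total_frames and
# total_seconds for non-drop-frame timecode. This mirrors the Select/Math
# CHOP network described above, expressed as plain Python for clarity.

def hmsf_to_total_frames(h, m, s, f, fps=25):
    """Collapse a hours:minutes:seconds:frames timecode into a frame count."""
    return ((h * 60 + m) * 60 + s) * fps + f

def total_frames_to_hmsf(total, fps=25):
    """Expand a frame count back into (hours, minutes, seconds, frames)."""
    f = total % fps
    total //= fps
    s = total % 60
    total //= 60
    m = total % 60
    h = total // 60
    return h, m, s, f

def total_frames_to_seconds(total, fps=25):
    """Frame count as elapsed seconds (a float)."""
    return total / fps
```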

Even though it was perfectly fine to do this conversion manually, it might be useful for some users to have this kind of functionality built directly into the LTC In CHOP in the future. It could work the same way it does at the moment, but it could additionally support input formats like hours:minutes:seconds:frames, total_frames and total_seconds. Once any of these appeared on its input, the user could select which channels to output - utilizing the built-in conversion mechanisms inside this CHOP?

It is true, however, that the name “LTC In” wouldn’t really suggest it could take any data format other than raw LTC… I am not sure if it is a good idea, so feel free to ignore this if you don’t like it - I was just thinking it might help someone with a simple conversion setup :slight_smile:

Yeah, this is on our radar. Ways to generate timecode values without using an LTC signal as well. Possibly a Timecode CHOP that handles a lot of this.


That would be cool, thanks for the info :slight_smile:

I have been playing around with VITC quite a lot and have observed one issue that becomes visible thanks to timecode (therefore I thought this might be a good place to discuss it). VITC itself is working fine - the values are correct at all times - the problem seems to be related to the video input itself.

I am using a camera producing video output with VITC and a DeckLink card for video input. Both devices are locked to the same genlock signal. TD is set to the same framerate as the camera, and therefore the decoded timecode increases linearly in TD without any hiccups.

This works nicely until a frame is dropped in TD (due to longer cook times) - when that happens, there is a chance that the video input gets delayed by a frame. This is a huge problem for me in certain areas where the video input delay must be fixed (a frame-drop is much less of a problem than a changed delay after it happens).

The delay usually returns to its original value on the next frame-drop, or it just snaps back after a while. It depends on the length of the cook time - it can take some time to find a Hog CHOP value that produces this behavior. This happens both in the editor and in perform mode (in the editor it usually happens when moving in or out of components in the network view). I completely disabled V-Sync during my tests - just to be sure it isn’t related.

Is there a way to eliminate changes in the video input delay? I feel like this can’t be a problem with the Blackmagic hardware, since it happens on a frame-drop in TD (and corrects itself on another frame-drop). But I am not really sure what is happening there - I just would like to eliminate this behavior. I am sending a simple scene along with a demo video in the following link. Thanks.

It’s tricky because there are a few queues of frames at play. The capture driver has a queue of a few frames (2-3) and TD has a queue of 1-2 frames. So a frame drop can certainly cause a queue to get backed up, and other things such as phase variance can cause a queue size to change. Really the only way to reconcile the frames is to look at their timecode stamps and ensure you are outputting ones with matching frame numbers. Between machines you’d need to do this using Sync In/Out CHOPs to ‘select’ the frame to output. Jarrett is working on a COMP that uses Cache TOPs to do this, I think it’s ready to be released soon.
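To sketch the reconciliation idea (illustrative only - not the actual COMP’s implementation, and the names are invented): from each device’s small buffer of recent frames, output the pair that shares the newest common timecode frame number.

```python
# Illustrative sketch of reconciling two frame queues by timecode stamp.
# Each queue holds (timecode_frame_number, payload) tuples, oldest first.

def select_matching_frames(queue_a, queue_b):
    """Return the payload pair sharing the newest timecode frame number
    present in both queues, or None if the queues have no frame in common."""
    stamps_b = {tc: payload for tc, payload in queue_b}
    for tc, payload in reversed(queue_a):   # scan newest first
        if tc in stamps_b:
            return payload, stamps_b[tc]
    return None
```

Even if one queue is a frame deeper than the other (e.g. after a drop backed it up), the matching step still pairs frames with identical timecode, at the cost of outputting the older of the two newest frames.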

Aha, I see. I didn’t realize that TD’s input itself has a queue of frames. Even though a setup based around timecode and a Cache TOP sounds like a solution for this, I feel it might add yet another frame of latency, since it would act as another layer of frame queue (on top of the previous ones). Latency is a big struggle for me, as I am targeting a throughput latency (input → TD → output) of a stable 3-4 frames, which is currently impossible to get even with AJA cards. Therefore adding even one more frame of latency to compensate for this wouldn’t really be a great option for me…

Would it be possible to start a discussion about some sort of video-I/O-specific timing approach that would completely eliminate TD’s input queue and instead “genlock” TD to the input video device - so that it would generate precisely one output frame for each input frame it gets from the capture driver? This seems like a very logical thing to do, but maybe I am wrong and missing something important here.

Taken from the UE docs: " In some cases, you may want to go even further, and lock the engine so that it only produces one single frame for each frame of video that comes in through a reference input — we refer to this as genlock."

This might be a very naive approach, but what if this mode let TD start cooking and rendering a new frame directly on a video input event (utilizing something like “frame arrived” events)? It would either finish on time and start the next frame with the next event, or it would drop the following frame(s) to complete it - just like it does now. I guess that with timing based on video input events, phase variance wouldn’t be an issue anymore - this would shave those 1-2 frames (TD’s input queue) off the latency without sacrificing stability.
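As a toy model of what I mean (purely illustrative, not TD internals): each “frame arrived” event either starts a new cook, or gets dropped if the previous cook overran - so frames are never queued and latency stays fixed.

```python
# Toy simulation of "genlocked" cooking: a cook starts on each input-frame
# event; if a cook overruns the frame interval, the next input event(s) are
# dropped instead of being queued. Purely illustrative, not TD internals.

def simulate_genlock(cook_times, frame_interval=20.0):
    """cook_times: per-input-frame cook duration in ms (20 ms = 50 fps).
    Returns 'output' or 'dropped' for each input-frame event."""
    busy_until = 0.0
    results = []
    for i, cook in enumerate(cook_times):
        event_time = i * frame_interval
        if event_time < busy_until:
            results.append("dropped")          # still cooking the previous frame
        else:
            busy_until = event_time + cook     # start cooking on this event
            results.append("output")
    return results
```

The point of the model is that a slow cook costs a dropped frame but never a change in delay, which matches the fixed-latency behavior I am after.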

I imagine this would decouple TD’s rate and timing from drawing to the monitor screen - but I don’t see that as a problem. When you are using professional video I/O, you generally don’t care about the rate or V-Sync of your monitor - the priority is the video I/O.


Yeah, this is something we’ve been talking about internally as well. You’d tag the Video Device In node as being in a ‘genlock’ state, and it would stall until a new frame arrived and then allow cooking to continue. One question is whether you also stall if the output isn’t yet completed by the time you want to start outputting the next frame.


This would be really amazing!

I am not entirely sure I get this. What would happen if you didn’t stall in this case? Would the frame currently being processed (which is taking longer than it should) be dropped in favor of the next frame (or did I misunderstand)?

If the GPU operations that happen during the frame take longer than your frame time, it’s possible for the output queue to get backed up, and that will result in a different latency on the output side. Syncing up that part is another layer of the issue.
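A toy model of that backlog effect (illustrative numbers, not TD internals): a single slow GPU frame pushes completion times later, and the following frames inherit the extra latency until the queue drains.

```python
# Toy model of an output queue backing up: frames are scheduled every
# `interval` ms but completed at actual GPU finish times; one slow frame
# delays subsequent frames' completions, changing the observed latency.

def output_latencies(gpu_times, interval=20.0):
    """gpu_times: per-frame GPU duration in ms. Returns each frame's latency
    in ms: completion time minus its scheduled presentation time."""
    finish = 0.0
    latencies = []
    for i, t in enumerate(gpu_times):
        start = max(finish, i * interval)   # GPU processes frames serially
        finish = start + t
        latencies.append(finish - i * interval)
    return latencies
```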

Aha, I think I get it now - thanks. I think it is best when the queue doesn’t get backed up at all. I would say that in this area (video I/O) latency is “king”, and each frame should go through the system as fast as possible - with fixed latency. (If some frame takes longer than it should, then it is my responsibility to optimize it so that it doesn’t happen when it shouldn’t :slightly_smiling_face:)


+1000 for this.

This would be fantastic if it could also be used for the Video Device Out nodes - right now, via the Info CHOPs, it looks like the SDI outputs are genlocked when the appropriate signal is supplied to the reference input. However, even with a sync card, the relationship between the I/O and V-Sync is decoupled. The sync card manages V-Sync and therefore the phase of TouchDesigner’s rendering, and the physical SDI outputs are phased to the reference input, but even if the same reference input is supplied to the Sync II card and the SDI output cards, the result is usually an output buffer of up to 2-3 frames, as reported by Info CHOPs. My guess is that this could be tightened up by having some sort of tag in the opposite direction. I might be thinking in the wrong direction entirely, though!

Hello @malcolm, I have noticed a minor issue with the Info CHOP & timecode. When an Info CHOP is placed inside a tox component (that is reloaded on start), the Info CHOP doesn’t provide timecode information until you dive inside the component and click on it. I guess it should work right away.

(Win10, 2021.13610, DeckLink 8K Pro, Desktop Video 12.1)
vitc_in_component.1.toe (3.7 KB)
vitc_comp.tox (494 Bytes)

@monty_python, thanks for the report. This will be fixed in builds 2021.14670 and later.


+1 for this idea of making each input frame create precisely one output frame. In this mode, if the cooking takes too long, I would prefer either freezing the current frame or outputting black as a warning, so I know I need to rearchitect the network to make it realtime. The most important thing is to have a consistent latency, so other systems can be adjusted to match. A queue that causes variable / unknown latency would be a big problem.
