Use BMD / AJA signal as reference for the touch frame timer

Hi All,

Would it be possible for touch to use the timing from an SDI signal on a BMD/AJA etc. capture card for the internal timing of touch? This workflow is used in other packages, such as Unreal, to allow syncing the render process to a genlocked signal. I believe they use the actual video feed and not the reference port.

Doing this would allow AR/XR shows, or anything for TV, to be rendered in perfect software sync, with the reference port on the card keeping hardware sync.

Thanks,
Scott


You will need extra hardware for that:

These are two different things. The hardware lock is certainly required for output. But the vast majority of Unreal AR setups with just a single output use the input signal on the capture card to time the frame and keep everything in sync, even with consumer output cards.

It’s a tried and tested solution. If you have multiple output machines, then for sure you need to frame lock between machines.

This is very interesting, but I am not sure if I completely understand the concept of syncing to a genlocked signal. I guess one part is to match the TD cook rate with the input video frame rate. That should essentially mean only one frame is rendered for each incoming frame. That is easy to do.

However, syncing TD to genlock would mean that the internal timing of TD would have to shift (in terms of phase) to match the timing of the input video signal, right?
This sounds like syncing to a genlocked signal would "only" make a difference in latency: TD would cook immediately once a new input frame becomes available, as their timing would be in sync.
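To sketch what I mean (plain Python with made-up names, not how TD is actually implemented): the render loop would block on a "new input frame" signal instead of on its own clock, something like:

```python
import threading

frame_arrived = threading.Event()   # hypothetical flag set by the capture driver

def on_capture_frame():
    """Called by the video I/O driver whenever a new SDI frame lands in the buffer."""
    frame_arrived.set()

def render_loop():
    while True:
        frame_arrived.wait()    # phase-locks the cook to the input signal
        frame_arrived.clear()
        cook_one_frame()        # exactly one output frame per input frame

def cook_one_frame():
    pass                        # placeholder for the actual rendering work
```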

Please correct me if I am wrong, but if this could help video I/O latency (in the case of Blackmagic / AJA cards) I would very much be a fan of such a feature.

Hello,
I am quite curious about the topic of genlocking in terms of video I/O cards (such as Blackmagic, AJA, Bluefish…) and would like to ask for more information on how it actually works at the moment.

I guess right now TD doesn't sync to genlocked video I/O cards when using the Video Device In/Out TOP, right? That would mean the phase of the video I/O card is different from TD's internal cooking phase (they could be set to the same refresh rate, but that doesn't mean they are synced). Thinking about this further, I feel like this could also mean that, due to some very small differences in refresh rate, the phase between TD and the video I/O card might slowly shift and after many hours eventually cause a frame drop.

To better illustrate what I mean, let's imagine that the internal clock of both TD and the video I/O card isn't perfect (I guess it never is) and that there is an extremely small deviation. For example, TD is running at 59.999999 Hz while the video card is running at 60.000001 Hz. You start up your system that needs to be super stable and everything seems to be running just fine. However, half a million seconds later (almost six days) the phase has drifted by a full frame period and a frame drop happens. This could be a nightmare, right? :smile: If I am right here, it could mean that your stable system isn't really that stable (which is a problem).
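A rough back-of-the-envelope check of those (made-up) numbers:

```python
# Two free-running clocks with a tiny rate mismatch (illustrative values, not measurements)
td_rate = 59.999999      # TD cook rate in Hz
card_rate = 60.000001    # video I/O card rate in Hz

slip_per_second = abs(td_rate - card_rate)        # frames of phase slip per second
seconds_to_full_frame = 1.0 / slip_per_second     # time until one whole frame of slip

print(f"{slip_per_second:.0e} frames of slip per second")
print(f"one full frame of slip after ~{seconds_to_full_frame / 3600:.0f} hours "
      f"(~{seconds_to_full_frame / 86400:.1f} days)")
# -> roughly 139 hours, i.e. just under six days, after which a frame drops (or repeats)
```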

Maybe this video could better visualize what I am trying to say. :slightly_smiling_face:
sync_drift.zip (67.4 KB)

I guess right now TD doesn't sync to genlocked video I/O cards when using the Video Device In/Out TOP, right?

And that kinda is the question. @ben Maybe you have some insight.
What I understand is that the driver takes care of capturing and transporting the captured frames to the CPU (or to the GPU with GPU Direct). Touch then takes the frames from the buffer and does stuff with them. The AJA driver, for example, is able to detect the frame rate and format and brings that info into touch. So, in a way, it is always synced. As long as touch runs at the same framerate as the input it is also synced.
But as you said, we normally will not run at a perfect 60 fps, neither the camera (59.94 for NTSC) nor the computer (which was running at 60.01 Hz on some projects, for whatever reason).
And this is where the hardware comes in: sync boards and the black burst input on the SDI grabber.
Basically, with these two we are no longer taking the internal timing from our devices; instead everyone is listening to one conductor piping out a burst at 60 Hz. Even if we do not get a perfect 60 Hz from the source, we still all fire at the same moment and in the same timeframe, so even if we get a deviation because of non-matching frame rates, all devices will reset on the next burst.
This also starts to get more and more interesting when working in broadcast, to reduce flickering and such.

This is something I'm thinking about a lot more after talking to Scott recently too. There are many different things in the pipeline where syncing can occur. You can sync when the content starts getting generated, which is what this RFE is about. This is different from the sync that is required for broadcast to avoid flickering though. That sync is on output, and it keeps the refresh intervals of your output device (SDI output using the genlock connector on the card, or DisplayPort output using Quadro Sync's genlock input) in sync with the shutter intervals of the studio cameras. That can happen even if the content isn't generated in sync with an external signal. So you can already get flicker-free output using the tools in TD, as long as it's ok that the content itself may be off by a frame or two.

What type of sync is needed for a project is very dependent on the project and the content being shown. Slow-moving content, or content only output on a single display, often doesn't need content sync.
Right now, to achieve input/content sync with TD, the tools are the Sync In/Out CHOPs, but those only sync between TD instances, not to an external trigger.

Thanks for the info.

I am not sure whether this relates to sync (between TD and a genlocked video I/O card) as I haven't managed to do a proper test, but in general I have spent quite a lot of time working with Blackmagic cards and I have noticed that even when using a very simple scene, I can very occasionally see a frame drop on the video output. This may or may not be related to this topic, but I thought I might share it here. It isn't something that can be easily spotted as it usually happens once in a whole day. The obvious conclusion would be that there is a performance issue - some CPU / GPU peak, many operators cooking at the same time. However, I sometimes feel like this isn't the case, as the scene itself is very lightweight, the hardware is powerful enough, and not a single frame drop happens for hours. And then it suddenly drops one frame out of nowhere. Having seen this happen many times over the last years, I started to wonder if it could be related to sync between TD and the video I/O card. But as I said, so far I haven't been able to test this properly and it could be related to something totally different. My question is: do you think this could be happening because TD is not synced to the video output?

Another part of my wondering goes to latency, as there is a constant struggle to minimize it as much as possible. :slightly_smiling_face: I am not sure if sync between TD and video I/O could help in this area, but based on what you said I feel like it could make a difference, right?

I guess there is probably some reason why Unreal decided to go this way and added an option to sync the engine to a genlocked video I/O card.

Pushing this RFE: after some research, I think the function required for it is IDeckLinkInput::GetHardwareReferenceClock, which can get the system clock when the genlock signal is triggered.

If it could be exposed on the Info CHOP, there would be an opportunity to use this information to correct the time shift between the posture sensor and the camera's photosensitive element.

That function seems like it just returns the time when it was called, not the time the last genlock signal occurred. If I called it right when a new frame arrived we might get something somewhat consistent, but it's not clear how much variance we'd get.
I do, however, get a timestamp with AJA frames that should be the time the frame arrived over the wire; this is what the 'Sync Group' feature uses. Would that be useful to you, or are you only working with Blackmagic devices?

I am not sure if I correctly understand your goal, but I assume you would like to know exactly when a given frame was captured by the camera sensor? (please feel free to correct me if I am mistaken here)

If so, I might just say that there are various processing delays playing a significant role in the signal transfer. The first processing delay is introduced by the camera itself - there is some delay between the time when photons hit the camera sensor and when the camera outputs the final image on its SDI output. Then there could be some delay introduced by long SDI cables. Last but not least, there is a delay caused by your SDI input device (such as Blackmagic / AJA). This last part is usually the most problematic one, as it takes quite some time compared to the previously mentioned delays. Nevertheless, my point is that even if you got a timestamp provided by Blackmagic / AJA for a specific frame, I don't think it could be used to describe the point in time when that frame was actually captured…

My goal, on the other hand, is to synchronize TD to the video input device (Blackmagic / AJA). So far I feel like there are these types of timing inside of TD (again, feel free to correct me if I am wrong, I am just guessing here):

  1. synchronization to v-sync (determined by the internal clock of the graphics card and monitor)
  2. custom timing - without v-sync (TD tries to clock itself to the desired framerate, but its timing is most probably only as precise as the system's hardware interrupts and scheduler {I believe there is no other way around timing than just sleeping and possibly spin-locking until the system hits the desired time, right? - see the sketch after this list})
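A minimal sketch of that second mode (plain Python, arbitrary names and margin, just to illustrate the sleep-then-spin idea - I have no idea how TD actually does it):

```python
import time

FRAME_PERIOD = 1.0 / 60.0   # target cook period
SPIN_MARGIN = 0.002         # busy-wait for the last ~2 ms (arbitrary value)

def wait_until(deadline):
    """Sleep coarsely, then spin until 'deadline' (a time.perf_counter() value)."""
    while True:
        remaining = deadline - time.perf_counter()
        if remaining <= 0:
            return
        if remaining > SPIN_MARGIN:
            time.sleep(remaining - SPIN_MARGIN)   # accuracy limited by the OS scheduler
        # otherwise fall through and spin for the last couple of milliseconds

deadline = time.perf_counter() + FRAME_PERIOD
for _ in range(600):            # ten seconds' worth of frames, for the example
    wait_until(deadline)
    # ... cook one frame here ...
    deadline += FRAME_PERIOD    # pace against the ideal timeline, not against "now"
```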

With either one of them being used, I feel like TD can't really run synchronously with a video input device (especially in the long run). That just isn't possible, right? One or the other must be a tiny bit faster / slower, and sooner or later there will be a point at which the phase has shifted too much to hold up. I feel like the only way to guarantee that this won't happen (and that eventually no input {or output} frame is dropped) is to synchronize TD to the video input device. I believe this must be quite complicated, but in theory it sounds a lot like v-sync - with the only difference that the video device, instead of the graphics card, is in charge of the timing…

I understand what you said about the delays along the signal chain, but if you are targeting AR/VR or robots, the problem can be simplified to: the time offset between the readings of all the sensors that need to be fused and a single reference clock.

Therefore, the necessary information for this goal is the per-frame offset between the main system clock and the external reference clock. This information is relative rather than absolute - it just pins down the time offset between the two systems, and it keeps your manual or calculated calibration meaningful after the next startup of the same system (or after a long time running).

In fact, I have completed an AR system based on a genlock signal:

It's just that its sensor processing is done on custom hardware, because there it is easier to associate external triggers with the sensor's reading thread (just like Redspy and Mo-Sys do). But for the sake of versatility for others, I still have the goal of building this system in TD.

Sorry, I think I must have made a mistake about it; maybe I should simply try to implement it with the DeckLink SDK and then report back, instead of guessing from the sparse information on the Blackmagic discussion board :sweat_smile:

I have some retired AJA Kona series cards; maybe I can try them (although they are a bit old, I hope they support these functions).

Aha, I think I understand it better now. That means you are going to assume a fixed processing time from sensor → camera output → video device input (this delay should generally be quite stable, so this assumption should work) and then treat only the last part of the process (going from video device input → TD) as a variable delay that you would calculate based on timestamps, right? This sounds good, I hadn't thought of this.

Nice results from your AR system, by the way. Virtual objects seem to hold up quite nicely even with fast camera movements. This makes me wonder - how did you approach the delay calculation in this demo? It seemed to perform quite well, so I was wondering if it is actually necessary to dive into the timestamps?


This system uses the genlock reference source as the trigger to read the sensor value. Because the sensor responds quickly (and its API reports a fused value computed from the raw readings, so the value reported at the moment of the trigger is relatively trustworthy), it can be expected that the value lags the reference source by less than 3 ms; considering the camera shutter speed, this result is visually believable.

Synchronization is a big topic. As for the timestamp, it can only be said that it is meaningful information for sensor fusion correction, but whether to use this method depends on whether it is reasonable for the purpose.

For example, suppose I know the "A" timestamp on the system clock when the external reference is triggered, my sensor is running at 1000 fps, and there is a buffer queue long enough to use.

In my CHOP cook, I get the "B" timestamp of the system clock at that moment, then use "B" - "A" to get the time offset "C" between the external reference trigger and the CHOP cook, and then I read the value "C" ms back from the tail of the queue - the value that occurred when the external clock was triggered.
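Roughly like this in Python (all names hypothetical; the timestamp "A" is assumed to come from whatever exposes the reference trigger, and the queue is the 1000 fps sensor buffer described above):

```python
from collections import deque
import time

SENSOR_RATE_HZ = 1000                   # assumed sensor sample rate
sensor_buffer = deque(maxlen=4000)      # ring buffer of the most recent ~4 s of samples

def on_sensor_sample(value):
    """Called at ~1000 fps by the sensor reading thread (hypothetical)."""
    sensor_buffer.append(value)

def value_at_last_reference(timestamp_a):
    """Return the sensor value closest to the last external reference trigger.

    timestamp_a: system-clock time "A" at which the reference fired.
    """
    if not sensor_buffer:
        return None
    b = time.perf_counter()                         # "B": time of this CHOP cook
    offset_c = b - timestamp_a                      # "C": seconds since the trigger
    samples_back = int(round(offset_c * SENSOR_RATE_HZ))
    samples_back = max(0, min(samples_back, len(sensor_buffer) - 1))
    return sensor_buffer[-1 - samples_back]         # step back "C" worth from the tail
```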

Cool, thanks for the explanation, I get it now. :slight_smile:

Out of curiosity, may I ask one more thing? How do you trigger the event (that reads the sensor value) based on the external reference (genlock)? Do you use some special device for that? Thanks :slight_smile:

EDIT: Oh, I missed the part that already stated it is custom hardware that takes care of the sensor processing. Would it be possible for you to elaborate a bit more on what hardware components you used for this?

This thread is a pure joy to read. Thank you for all that information :slight_smile:

I’m also really interested in using the Video Device Out genlock as a trigger for frames in TD.

For broadcast playback I use On The Air Video, and it uses BMD directly to play frames from video files. These frames are passed to the device on each scan, so there will never be a doubled frame or a tear, and with genlock they can be synchronised into a broadcast vision mixing desk.

It would be great if I could use that frame timing as a trigger to increment frames in a Movie In TOP to create a similar broadcast player.
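As a very rough sketch of that idea with today's tools (hypothetical operator names, and assuming the trigger arrives as a CHOP channel that goes from off to on once per output frame), a CHOP Execute DAT could step a Movie File In TOP whose Play Mode is set to Specify Index:

```python
# CHOP Execute DAT callback (sketch): advance one movie frame per trigger pulse.
# 'moviefilein1' is assumed to have its Play Mode set to 'Specify Index'.

def onOffToOn(channel, sampleIndex, val, prev):
    movie = op('moviefilein1')
    movie.par.index = movie.par.index.eval() + 1   # step to the next frame on each trigger
    return
```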
