This is an issue I have been trying to solve for a long time without success, and it would be great to work out a solution so we can continue using our system in TouchDesigner.
The goal is to sync hardware and frames between an LED display and a cine camera (such as an Alexa, RED, or Sony Venice). For this first step we can consider only a single output of 4K content (H.265, ProRes, etc.); later, a cluster of GPUs genlocked in frame and hardware will be needed for 8K+ outputs.
My first setup used an AJA GEN10 to genlock the video processor, the LED sending card, and the camera at 1080 24PsF, trying vsync on the GPU (A6000) set to On, Adaptive, or Off, and vsync in the Window COMP On or Off. I am also using NVIDIA decode with H.265. In this configuration the genlock sometimes locks in time and sometimes drifts off. I tried every combination for days and, unfortunately, it didn't work.
We also tried sending the reference from the AJA GEN10 to the Nvidia Sync 2 connected to the A6000, passing it through an OptiTrack eSync 2 to make the signal work at 48 Hz (not PsF). Even with the GPU locked to the genlock signal, we didn't see any improvement.
My second attempt was in Unreal, driving the software from the ref in of a DeckLink 4K SDI with a 1080 24PsF genlock signal. That makes the software a slave to that clock (timestamps) and locks the output frames to the camera shutter, and that is exactly what I need in TouchDesigner. Is there any way to make the ref in of the DeckLink 4K the master of TouchDesigner's fps? Or is it possible to slave the movie playback steps to that reference?
If you have any other ideas or solutions, I would be pleased to hear them.
Hey, thanks for your question.
Yeah, the GEN10 (or any signal generator) controls the phase/interval of when work is done, but doesn't control what is output. Different devices have different queues of frames, so although they all process frames at the same time, the frame each one is currently processing may be offset from the others.
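To illustrate why genlocked devices can still show different content, here is a minimal sketch in plain Python (the device/queue model is hypothetical, purely for illustration): every device ticks on the same genlock pulse, but a device with a deeper output pipeline presents an older frame.

```python
# Sketch: two devices share a genlock tick but have different
# output-queue depths, so the frame each one presents is offset.
# The device model here is hypothetical, for illustration only.
from collections import deque

class Device:
    def __init__(self, name, queue_depth):
        self.name = name
        self.queue = deque()
        self.queue_depth = queue_depth
        self.presented = None

    def tick(self, new_frame):
        # Genlock pulse: accept a new frame, and present the oldest
        # one once the internal pipeline (queue) is full.
        self.queue.append(new_frame)
        if len(self.queue) > self.queue_depth:
            self.presented = self.queue.popleft()

gpu = Device("gpu_output", queue_depth=1)      # short pipeline
led = Device("led_processor", queue_depth=3)   # deeper pipeline

for frame in range(10):
    # Both devices tick on the exact same genlock pulse...
    gpu.tick(frame)
    led.tick(frame)

# ...yet they end up presenting different frames.
print(gpu.presented, led.presented)  # → 8 6
```

So even with perfect phase lock, the two outputs are a constant two frames apart here; genlock fixes *when* a frame flips, not *which* frame it is.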
We do support a similar workflow to Unreal via the 'Sync To Input Frame' mode on the Video Device In TOP. However, that is currently only supported on AJA cards.
@novasfronteiras may I ask what output device you are using? Are you outputting frames directly via NVIDIA output, or through a video I/O card?
hi @malcolm and @monty_python
I was finally able to do a lot of testing to understand how this works with Unreal, and I'm back with some updates.
My setup used an AJA sync generator sending 24PsF genlock signals to the Nvidia Sync 2 (Quadro A6000), to the MCTRL4K sending card of the LED displays, and to a Sony Venice camera. With that, the Sync 2 received 48 Hz with the falling-edge configuration set in the control panel; the MCTRL4K received 48 Hz; and the Sony Venice was genlocked at 24 fps.
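As a sanity check on the rates in that chain (plain arithmetic, not tied to any vendor API): 24PsF carries 24 full frames per second as 48 progressive segments, so a sync card triggering on each segment edge sees 48 Hz while the camera still locks at 24 fps.

```python
# Rate relationships for 24PsF genlock distribution (simple arithmetic).
frame_rate = 24          # full frames per second (camera shutter rate)
segments_per_frame = 2   # PsF splits each frame into two segments
segment_rate = frame_rate * segments_per_frame  # what Sync 2 / MCTRL4K see
print(segment_rate)      # → 48

# Every 2nd sync edge corresponds to one full camera frame, so the
# GPU can present one new frame per two edges and stay phase-locked.
edges_per_camera_frame = segment_rate // frame_rate
print(edges_per_camera_frame)  # → 2
```

This is why the falling-edge setting matters: it picks which of the two segment edges per frame the GPU aligns to.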
In Unreal I needed to set up nDisplay to clock its frames using the NVIDIA sync policies. Playback was done with uncompressed EXR files.
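For anyone trying to reproduce that step, the relevant nDisplay setting is the swap sync policy. A rough excerpt in the legacy .cfg syntax might look like this (key names are from memory, so treat this fragment as an assumption and verify against the nDisplay docs for your Unreal version):

```
# Hypothetical legacy nDisplay .cfg excerpt -- verify key names
# against your Unreal version before relying on this.
# swap_sync_policy: 0 = none, 1 = software swap barrier,
#                   2 = NVIDIA swap lock (Quadro Sync)
[general] swap_sync_policy=2
```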
The result was frames totally in sync with the camera shutter, and the virtual production results were perfect.
This is what I would love to replicate in TouchDesigner, using NVIDIA decode of H.265 or any other suggestion, as I already have everything ready with shaders and DMX synchronization.
Is any of the information above helpful for a configuration that works without the AJA cards? They aren't sold here in Brazil, and adopting them would mean changing all our systems.
The equipment we have today that I think we could use:
- Quadro A6000 with Sync2
- AJA or Blackmagic sync generator
- Blackmagic Capture Card 4k
- OptiTrack eSync 2 with genlock and timecode inputs (maybe using the NatNet protocol?)
But if it is known that this can only be done with AJA cards, we will try to add them to our future projects.
Another important point: we were using the PC with the A6000 as a single render node, controlled via multi-user settings from another computer. That way the render node was 100% dedicated to outputting frames and nothing else.
thank you a lot!!
forgot to answer your question:
Yes, we are outputting our frames directly using NVIDIA output.
We also tried adding a video processor between the GPU and the sending card, but it didn't make any difference.
And we never tried using the Blackmagic 4K capture card to output our frames, for example. Do you think that if that card is also genlocked we would get a better result?
I can't find any documentation in the wiki, nor the actual parameter on the Video Device In TOP. Did this ever make it into the current build?
Right, sorry, it's in the 'Transfer Mode' menu. I'll update the docs.