I have a question about how to build a setup where TouchDesigner runs at 59.94fps, but the rendering happens at 29.97fps.
What I’m trying to do
- Capture a live camera feed and use that camera texture as a material for CG rendering
- Keep the latency from the camera input to the final E2 output as low as possible
- From testing, comparing the TouchDesigner FPS and Video Device Out settings at 29.97 vs 59.94, the 59.94fps setup shows lower latency
- However, the red nodes (CG rendering) are heavy, so they can’t maintain 59.94fps — I’d like to run those at 29.97fps
I’d like to build a project with this configuration, but I’m not sure of the correct approach. When I set a Component Time of 29.97 on the red nodes, the camera feed becomes choppy.
I understand that what I’m trying to do doesn’t follow the method recommended for Perfect Playback. Still, is there any way to render at 29.97fps while keeping input/output video processing at 59.94fps, without the rendering frame time affecting the video latency?
(Note: The video signal is PsF, so I’m treating the input as 59.94fps.)
Appreciate any advice!
This is tricky, but feasible with certain creative caveats.
You’re right to add a separate component time with the lower framerate to your rendering sub-component. The sticky bit is that anywhere you want to actually see your smoother video playback applied needs to be outside of the half-FPS area. So if you’re using the camera as a texture source inside a render that’s running at half rate, the result will unfortunately be the equivalent of dropping frames from your input. Even stickier, you may need to run that component as an Engine COMP to truly get it to run at half rate and not affect the full-rate network; I can’t recall.
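If it helps, here’s a minimal Python sketch of the component-time side of this, assuming the heavy CG network lives in a COMP named song1 with component time added (i.e. a Time COMP at song1/local/time); the paths and names are hypothetical:

```python
# Hypothetical layout: the heavy CG network lives in /project1/song1,
# which has component time added (a Time COMP at local/time inside it).
heavy = op('/project1/song1')

# Run the red nodes' timeline at half the project rate.
heavy.op('local/time').par.rate = 29.97

# The rest of the network keeps cooking at the project rate.
# Note: component time changes the timeline the nodes inside see;
# whether it actually halves their cooking is the part that may
# need an Engine COMP instead, as mentioned above.
print(project.cookRate)  # still 59.94 for the main process
```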
If you want to render a mask, or some sort of UV lookup, in your CG, then you could apply the result of that back in the full-rate part of the network to maintain smooth video. That will of course limit your creative options, but it’s perhaps what you’re looking for?
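As a sketch of that wiring in Python (all operator names here are hypothetical): the half-rate component outputs the mask, and the composite with the live camera happens at full rate:

```python
# Hypothetical names: 'videodevin1' is the full-rate camera input,
# 'song1/out1' is the mask rendered at half rate, and 'comp1' is a
# Composite TOP sitting in the full-rate part of the network.
cam  = op('videodevin1')
mask = op('song1/out1')
comp = op('comp1')

# Camera into input 0, half-rate mask into input 1; only the mask
# updates at 29.97, so the video itself stays smooth at 59.94.
comp.inputConnectors[0].connect(cam)
comp.inputConnectors[1].connect(mask)
comp.par.operand = 'multiply'  # use the mask as a matte, for example
```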
Does that make sense?
Thinking a little further, depending on what makes your rendering part heavy… you could feasibly offload certain work to a lower-framerate component (like heavy SOP stuff) and still try to drive the actual Render TOP at full rate. This gets more in-depth, and may not suffice if it is in fact simply your Render TOP, Geo, and MAT complexity that is heavy.
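One hedged sketch of that split, again with hypothetical names: the heavy SOP cooking stays inside the half-rate COMP (exposed through an Out SOP), while the Geometry COMP and Render TOP live at full rate:

```python
# Hypothetical: 'song1' is the half-rate COMP whose first output
# connector is an Out SOP carrying the heavy geometry; 'geo1' is a
# full-rate Geometry COMP with an In SOP inside feeding its display.
heavy_out = op('song1').outputConnectors[0]
geo_in    = op('geo1').inputConnectors[0]

# Wire the half-rate geometry into the full-rate Geo; the SOP only
# recooks at 29.97, but the Render TOP can still draw at 59.94.
geo_in.connect(heavy_out)
```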
@archo-p Thanks for the clear explanation!
So, what I should do is basically this:
- Keep all video input textures processed only on the high-framerate side
- Drive heavy processes like SOPs in a low-framerate Engine COMP as a separate process, return only the essential elements needed for rendering to the high-framerate side, and use those results to composite and render together with the live video at the high framerate
There will be some creative limitations, but this seems like a practical and achievable approach.
Even though there will be some delay when sending data back from the Engine COMP to the main process, that delay should be a roughly constant number of frames, so with a bit of operational adjustment it shouldn’t be an issue.
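For what it’s worth, here is a minimal sketch of the per-song Engine COMP setup in Python (the file path and operator names are hypothetical):

```python
# Hypothetical: each "song" is exported as its own .tox and loaded
# into an Engine COMP, so it cooks in a separate process at its own
# rate while the main process keeps running at 59.94.
eng = op('engine_song1')          # an Engine COMP in the main network
eng.par.file = 'toxes/song1.tox'  # the Engine COMP's Tox File parameter

# The trip into and out of the engine process adds a small, roughly
# constant frame delay that can be compensated for when compositing.
```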
For context — I work on live visuals for music concerts. The red components in my setup correspond to individual “songs.”
Ideally, I wanted to build a simple and unified structure where each low-framerate “song” component receives the video input and returns the rendered result to the high-framerate network.
But in reality, it seems I’ll need to adjust the implementation for each type of visual, and only use Engine COMPs where necessary.
This was super helpful, thanks a lot!