Hi all,
I’m opening a new thread backlinking to this reply (https://forum.derivative.ca/t/export-high-res-video-any-tips/376778/6) to avoid necroposting.
Basically, I had to render an excerpt from a real-time generative AV installation. The piece is computationally expensive and is meant to run on a different machine than mine. The setup involves SuperCollider synthesising audio in real time and sending OSC data to TouchDesigner to drive the visuals. To composite the AV output I used the Render TOP together with audio from the Audio Device In CHOP, routing SuperCollider’s output internally through an ASIO driver.
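For context, the OSC side of the setup is nothing exotic. In practice I send messages from SuperCollider with `NetAddr.sendMsg`, but as a rough stdlib-Python sketch of what goes over the wire (the `/amp` address and port 9000 are just placeholders for whatever the OSC In in TouchDesigner is set to listen on):

```python
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a 4-byte boundary
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    # Minimal OSC 1.0 encoder: address string, type-tag string,
    # then big-endian float32 arguments
    msg = osc_pad(address.encode())
    msg += osc_pad(("," + "f" * len(floats)).encode())
    for f in floats:
        msg += struct.pack(">f", f)
    return msg

# Send an amplitude value to TouchDesigner's OSC In CHOP/DAT over UDP
packet = osc_message("/amp", 0.75)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 9000))
```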
When it came to rendering the excerpt, I had three options:
1. simplify the algorithm so my machine could afford it (but losing coherence with the “real”, on-site outcome of the artwork);
2. keep the algorithm unchanged and render at whatever rate my machine could manage (thus dropping quite a few frames and ending up with a laggy result);
3. disable real-time rendering in TouchDesigner.
Options 1) and 2) were impractical and suboptimal; on top of that, I would also have had to post-produce the result to align audio and visuals, due to the 0.1+ second buffer in the Audio Device In CHOP. Disabling the real-time flag therefore seemed the sensible choice, and I was expecting some kind of automatic alignment of the external audio to the visuals through a buffering system. The result did show pristine visuals, but the audio came out as if time-stretched (pitched down, etc.).
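As a side note, here is my rough mental model of why the audio comes out pitched down (pure speculation on my part, and the numbers below are made up purely for illustration): with real-time off, TouchDesigner cooks frames slower than the timeline rate, while the Audio Device In CHOP keeps capturing at wall-clock rate, so the recorded audio spans the wall-clock render duration rather than the timeline duration.

```python
import math

# Hypothetical numbers, for illustration only:
timeline_fps = 60.0   # project frame rate in TouchDesigner
actual_fps = 20.0     # frames my machine actually cooks per wall-clock second

# With real-time off, each second of timeline takes timeline_fps / actual_fps
# wall-clock seconds to render, so live-captured audio is stretched by:
stretch = timeline_fps / actual_fps

# Playing that stretched audio over the timeline duration lowers the pitch by:
semitones = 12 * math.log2(1 / stretch)

print(f"stretch factor: {stretch:.1f}x")
print(f"pitch shift: {semitones:.1f} semitones")
```

If that model is right, the audio in the export should be stretched by exactly the ratio of the project frame rate to the actual cook rate, which matches the “slowed down and pitched down” result I got.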
I’m wondering whether I’m missing a setting, or whether there are known workarounds for this situation. Just to clarify: SuperCollider runs at a 44.1 kHz sample rate, and the audio exported via options 1) and 2) was perfect.
Thanks!