I have an interesting challenge that I would appreciate some TD guru advice on, please. Most of what I do is audio-reactive, and I have really enjoyed getting some fast, good-looking stereoscopic 3D happening in my Oculus Rift (wired PC HMD) using Markus Heckmann's tutorial and the Oculus Rift TOP and CHOP (thank you, Markus, for a great TD roundtable XIV talk). Now that I have a nice Visual Music animation running at 90 fps in real time with proper tracking of the headset and controllers, as I did after optimizing today, my challenge is: how do I render out a copy that others can also see properly in 3D?
So far I have successfully made low-res mono YouTube VR360 video tests, and yesterday I got decent stereo using an over-under rectilinear technique, watched on my wireless Oculus Quest 2. But I want to get it closer to my wired PC VR Oculus Rift experience (better resolution, frame rate, smoothness, etc., within the limits of YouTube's compression algorithms and Quest 2 performance).
My biggest problem is that I can't really react to the music in real time in my TD Visual Music piece if I record hi-res video out with the real-time clock off. My laptop is too feeble, even at 30 fps. In my experience, none of my audio-reactive programs work well unless they can run at a real-time frame rate. Keeping good audio-visual sync and smooth playback is my biggest challenge.
So what I am wondering is whether a two-phase approach would be possible: in phase one, I author a piece on my Oculus Rift running in real time at 60-90 fps in TouchDesigner, with my music viz/reactivity and all my gestures from head motion and the two Touch hand controllers, while somehow RECORDING or CAPTURING all the important real-time data for later reuse in phase two (non-real-time).
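Conceptually, the recording half of that two-phase idea is just sampling every control channel (head rotation, controller positions, audio analysis values, etc.) once per frame, tagged with the frame number, and writing it to disk. Inside TouchDesigner this is roughly what a Record CHOP or a File Out CHOP writing a .chan file does; the sketch below is generic standalone Python (all names hypothetical) just to make the idea concrete:

```python
import json

class PerformanceRecorder:
    """Capture named control channels once per frame for later replay.

    Generic sketch only -- inside TouchDesigner this role would be
    played by a Record CHOP or a File Out CHOP writing a .chan file.
    """

    def __init__(self):
        self.frames = []  # one dict of channel values per frame

    def capture(self, frame, channels):
        # channels: e.g. {'headRx': 0.12, 'lhand_tx': -0.4, 'audio_level': 0.8}
        self.frames.append({'frame': frame, **channels})

    def save(self, path):
        # Dump the whole performance to disk for the phase-two render.
        with open(path, 'w') as f:
            json.dump(self.frames, f)

# During the live performance, call capture() once per frame:
rec = PerformanceRecorder()
rec.capture(0, {'headRx': 0.0, 'audio_level': 0.8})
rec.capture(1, {'headRx': 0.1, 'audio_level': 0.7})
```

Because each sample carries its frame number rather than a wall-clock time, the phase-two render can step through the data at any speed and stay frame-accurate.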
In phase two, I would turn off the rather heavy-cook-time Oculus Rift TOP, the analysis, and the Oculus CHOPs, and record out my high-res 4K x 4K (or higher) stereo file for editing and uploading as VR360 stereo, while hopefully keeping perfect sync with the effects driven by my audio and gesture "performance." I can always add the audio during editing in FCPX, since in most cases I will work from a pre-recorded audio track/song.
I know you can capture CHOP channels with a Trail CHOP or Gesture CHOP, but I am not familiar with how to play them back non-real-time, if that is even possible. In essence, I need something like a data acquisition recorder, as you might use in a many-channel body-suit performance capture system, but one that TD can later play back as slowly as necessary while rendering out large 3D files, in sync.
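For the playback half, the key is to index the recorded channels by the render's output frame number rather than by the clock, so a render running at one frame every few seconds still pulls exactly the value that was live on that frame. In TD terms this might be a File In CHOP plus a Lookup CHOP driven by the current frame (I'm not certain of the exact operator chain, so treat that as a suggestion); here is the same idea as a generic Python sketch with linear interpolation between recorded samples:

```python
import bisect

class PerformancePlayback:
    """Replay recorded channel samples by frame index, not wall clock,
    so a non-real-time render can run as slowly as it likes and still
    get the value that was live on that frame."""

    def __init__(self, frames):
        # frames: list of dicts like {'frame': 0, 'headRx': 0.0, ...}
        self.frames = sorted(frames, key=lambda s: s['frame'])
        self.index = [s['frame'] for s in self.frames]

    def sample(self, frame, channel):
        # Find the recorded sample at or just before 'frame', then
        # linearly interpolate toward the next one.
        i = bisect.bisect_right(self.index, frame) - 1
        i = max(0, min(i, len(self.frames) - 2))
        a, b = self.frames[i], self.frames[i + 1]
        t = (frame - a['frame']) / (b['frame'] - a['frame'])
        return a[channel] + t * (b[channel] - a[channel])

pb = PerformancePlayback([
    {'frame': 0, 'headRx': 0.0},
    {'frame': 2, 'headRx': 1.0},
])
pb.sample(1, 'headRx')  # -> 0.5, halfway between the two samples
```

The interpolation also means the capture and render frame rates don't have to match exactly, e.g. a 90 fps performance can drive a 60 fps render by sampling at fractional frame positions.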
The Movie File Out TOP is pretty heavy on cook time for large files, so in the past I followed Ben's suggestion and recorded separately on my M1 Mac with an Elgato HD60 S+ for my heaviest TD programs. But I am hoping to figure out how to do this without buying a 4K-to-8K video capture device running on a separate computer. I want to do it all on my laptop, in TouchDesigner, if possible.
So, in summary: does anyone know how to capture all the parameters I need in real time, for later non-real-time playback and control at whatever speed TD needs to write out large Movie File Out files for decent VR360 playback? And if so, are there any example files or tutorials on playing those captured performance OPs back as slowly as necessary?
Any advice is welcome (or, if this is unclear or a hopeless cause, what else can I try?).