Synchronizing real-time and non-real-time processes?

I have an interesting challenge that I would appreciate some TD guru advice on, please. Most of what I do is audio reactive, and I have really enjoyed getting some fast, good 3D stereoscopic stuff happening in my Oculus Rift (wired PC HMD) using Markus Heckmann’s tutorial and the Oculus Rift TOP and CHOP (thank you Markus for a great TD roundtable XIV talk). So once I get a nice Visual Music animation running at 90 fps in REAL TIME with proper tracking of the headset and controllers, as I did after optimizing today, my challenge is: how do I render out a copy that others can also see properly in 3D?

So far I have successfully made low-res YouTube VR360 mono video tests, and yesterday I got decent stereo using the over-under rectilinear technique, watched on my wireless Oculus Quest 2. But I want to get it closer to my wired PC VR Oculus Rift experience (better resolution, frame rate, smoothness, etc., within the limits of YouTube’s compression algorithms and the Quest 2’s performance).

My biggest problem is that I can’t really react properly in real time to the music in my TD Visual Music simulation if I record high-res video out with the real-time clock off. My laptop is too feeble, even at 30 fps. In my past experience, none of my audio-reactive programs work well unless I can run at a real-time frame rate. Keeping good audio-visual sync and smooth playback is my biggest challenge.

So what I am wondering is whether it would be possible to use a two-phase approach, in which I first author a piece on my Oculus Rift running real time at 60-90 fps in TouchDesigner, with my music viz/reactivity and all my gestures from head motion and the two Touch hand controllers, but somehow RECORD or CAPTURE all the important real-time data for later re-use in phase two (non-real-time)?

In phase two, I would turn off the rather heavy-cook-time Oculus Rift TOP, the analysis, and the Oculus CHOPs, and render out my high-res 4K x 4K or higher stereo file for editing and uploading in VR360 stereo, while hopefully keeping perfect sync with the effects from my audio and gesture “performance”. I can always add the audio in editing in FCPX, since in most cases I will work from a pre-recorded audio track/song.

I know you can capture CHOP channels in a Trail CHOP or Gesture CHOP, but I am not familiar with how to play them back non-real-time again, if that is even possible. In essence, I need something like a data-acquisition recorder, such as you might use for a many-channel body-suit performance-capture system, but one that can later be played back as slowly as necessary for TD to render out large 3D files in sync.

The Movie File Out TOP has a pretty heavy cook time for large files, so in the past I used Ben’s suggestion of recording separately on my M1 Mac with an Elgato HD60 S+ for my heaviest TD programs. But I am hoping to figure out how to do this without buying a 4K-to-8K video capture device to run on a separate computer. I want to do it all on my laptop in TouchDesigner, if possible.

So in summary: does anyone know how to capture all the parameters I need in real time for later playback and control, non-real-time, at whatever speed TD needs to write out large Movie File Out files for decent VR360 playback? And if so, are there any example files or tutorials on how to play those performance-capture OPs back as slowly as necessary?

Any advice welcomed (or if this is unclear or a hopeless cause, what else can I try?)

I think what you’re after is entirely possible. There are two big things to think through and study in your network, though.

  1. Is your playback system “deterministic”? Meaning, do things happen in lockstep with the master time variable, and if you “scrubbed” that time variable forwards and backwards over important moments, would events, animations, and systems stick to it, so to speak? (See the sketch after this list.)

  2. What types of data change over the course of a recording? Only CHOP data? DAT and CHOP? etc.?
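
To illustrate the first point, here’s a plain-Python sketch (just an illustration, not a TD network; the function names are made up) of the difference between a value that is a pure function of the master time and one that accumulates state frame by frame:

```python
import math

# Deterministic: a pure function of the master time. Scrubbing the timeline
# forwards or backwards always re-evaluates to the same value (think of an
# LFO or ramp driven directly off absTime.seconds).
def deterministic_value(seconds):
    return math.sin(seconds * 2.0)

# Non-deterministic: an accumulator that depends on the history of frames
# (think Speed CHOP, Feedback TOP, particles). Scrubbing can't reproduce it;
# only playing through from the start, in order, can.
state = 0.0
def accumulated_value(delta_seconds):
    global state
    state += delta_seconds
    return state
```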

If you are only dealing with CHOP data at the source, recording it with a Record CHOP, or something more brute force like saving the CHOP samples to a log file on disk every frame or writing all the values to a giant table every frame, could be a way to save your control channels for playback.
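
As one possible version of the brute-force option, here’s a minimal sketch of an Execute DAT callback that appends every channel of a control CHOP to a Table DAT once per frame; the operator names ‘controls’ and ‘record_table’ are placeholders for whatever is in your network:

```python
# Execute DAT callback: log every channel of a control CHOP to a Table DAT
# once per frame. 'controls' and 'record_table' are placeholder names.

def onFrameEnd(frame):
    src = op('controls')          # CHOP holding the channels to capture
    table = op('record_table')    # Table DAT acting as the recorder

    if table.numRows == 0:
        # first row is a header: the frame number plus one column per channel
        table.appendRow(['frame'] + [c.name for c in src.chans()])

    table.appendRow([absTime.frame] + [c.eval() for c in src.chans()])
```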

If your system is deterministic, it should be fairly easy to switch over to a recorded stream of CHOP channels using a Lookup CHOP, selecting rows of a table, or lines of a log file loaded into a table, etc.
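
And a matching playback sketch under the same placeholder names; it copies the row for the current timeline frame into a Constant CHOP each frame (a Lookup CHOP driven by a ramp would be the no-scripting way to do the same thing):

```python
# Execute DAT callback: play the recorded table back by copying the row that
# matches the current timeline frame into a Constant CHOP. Assumes the
# recording started at frame 1, so rows line up with frames 1:1.
# 'record_table' and 'playback_constant' are placeholder names.

def onFrameStart(frame):
    table = op('record_table')
    const = op('playback_constant')           # Constant CHOP, one value slot per channel

    row = min(int(frame), table.numRows - 1)  # clamp to the end of the recording
    if row < 1:
        return                                # row 0 is the header

    cells = table.row(row)
    for i, cell in enumerate(cells[1:]):      # skip the 'frame' column
        const.par[f'value{i}'].val = float(cell.val)
```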

If your system is partially deterministic, or fairly trigger-based, then you might have more trouble, though maybe not.

If, for example, you have a channel that goes from zero to one and causes some random dynamic effect like particles or physics to happen, that effect may happen differently each playthrough; you just want to make sure you’re recording the channels that initiate those events or triggers as well.

Depending on what type of data you’re trying to record, I could make some more specific suggestions, but you’re definitely on the right track. CHOPs are the easiest to record, but anything can be recorded with enough work, though saving some types of data out in real time can be heavy or RAM hungry. So a lot depends on the setup you have.

Thank you Lucas! Some of my visual music programs are deterministic, with things like an LFO CHOP controlling camera FOV, or the length of lines and stuff like that, which I think would be pretty repeatable and reversible on a frame-by-frame basis, but over half of my 2D ones to date are not, involving particles, feedback, and other triggered reactions.

What types of data change over the course of a recording?

My current thinking is to record everything relevant coming in from the Oculus Rift HMD and the two Touch controllers, such that where I look controls camera position and rotation; so far I have been using the 6DOF info from the hand controllers to drive various parameters scattered throughout the chain. Plus, my biggest driver is the music, with things like Audio Analysis lows, mids, and highs driving various parameters of my choice to map the music to visuals. But will things like the FFT spectrum even work if not running in real time? It is this tight sync with the music I am after. Sometimes I use MIDI, sometimes strictly audio analysis, sometimes both.

Which brings me to another possibility I thought of: trying to record all the relevant parameters into Ableton Live with TDA, if it could capture all the Oculus, MIDI, and audio data fast enough.

The part I am totally green on is how to play back such data, assuming I captured it properly, in a way that lets me record my higher-res Movie File Out videos. I have only spent a few minutes with the Record CHOP and Lookup CHOP. Do you know of any good examples/tutorials on how best to use these for songs up to maybe 5 minutes long, particularly with something like the Movie File Out TOP?
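
For what it’s worth, a rough back-of-envelope check (assuming 60 fps and, say, 40 control channels, both just placeholder numbers) suggests the channel data itself for a 5-minute song stays tiny:

```python
fps = 60
minutes = 5
channels = 40                                   # placeholder channel count

samples_per_channel = fps * 60 * minutes        # 18,000 samples
total_values = samples_per_channel * channels   # 720,000 values
megabytes = total_values * 4 / 1e6              # ~2.9 MB as 32-bit floats
print(samples_per_channel, total_values, round(megabytes, 1))
```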

Thanks for your encouragement regardless; it sounds like at least some of my pieces could be performance-captured in real time and recorded out non-real-time… with recording CHOPs probably the easiest approach…

Yeah, I think if you’re getting no frame drops during live execution, turning off the realtime flag and doing an intensive recording at the same fps you’re running live should, in theory, yield the same results.
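
If you’d rather drive that from a script than the UI, a minimal sketch might look like this (‘moviefileout1’ is a placeholder name for your Movie File Out TOP):

```python
# Offline render pass: drop out of realtime so every frame can take as long
# as it needs to cook, then arm the Movie File Out TOP.
project.realTime = False
op('moviefileout1').par.record = 1

# ...and when the piece has played through:
# op('moviefileout1').par.record = 0
# project.realTime = True
```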

Probably the simplest solution is to record the most upstream data coming into Touch, and when playing things back, just let the whole system play through without interruption. With things like particle effects, it may or may not matter if they play back the same each time, but if it doesn’t, then you can just let them evolve slightly differently on your recording playthrough than on your live one.

You definitely won’t be able to scrub without recording/caching particles and such, but it doesn’t sound like you really need to?

Here’s a simple patch to illustrate recording/resetting chop data:
record_and_playback.1.toe (15.6 KB)

Thank you very much Lucas for taking the time to create this patch! What a guy! I will try it out this weekend and let you know if I succeed at recording some good 3D.

Much obliged!

Sounds good, good luck!

Thanks again! Well, my YouTube VR 3D is still poor after trying all Sunday, but I did succeed in recording some big 3 GB+ 4K x 4K videos with glitch-free sound and playback of the performance per your method. So I can continue to develop this two-phase real-time/non-real-time approach based on your patch above, as it now allows me to record hi-res videos at 60 fps. Much obliged for that technique @lucasm.


Well, I just wanted to close the loop by saying you have solved my issue well enough for me to make a successful VR180 6K recording test for YouTube. If anyone has an HMD, please give it a look to watch in stereo. This is a little 3D stereo test video I made to sort out the issues.

VR180 Test 15

Thanks again to @snaut for the real-time Oculus VR techniques and to @lucasm for solving how to record them non-real-time at higher res as I hoped, even on my feeble laptop.
