We found in the Movie File In TOP's Sequential or Index mode THE way to synchronize our video playback to an external clock, using ramp signals going from 0 to 1.
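Concretely, the index expression on each Movie File In TOP looks roughly like this (a minimal sketch; 'sync_in' and 'movie1' are placeholder operator names, and it assumes Play Mode is set to Specify Index with Index Unit set to Fraction so a 0-1 ramp spans the whole clip):

```python
# Minimal sketch: drive one Movie File In TOP's Index parameter from a 0-1 ramp
# channel arriving over BlackHole. 'sync_in' and 'movie1' are placeholder names.
op('movie1').par.index.expr = "op('sync_in')['ramp1']"
```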
But we have HUGE issues when it comes to reading more than a certain number of movies. We have an Apple Silicon M2 Max with 96 GB RAM.
Using the Locked to Timeline mode doesn't give any issues like that.
But when we have about 18 Movie File In TOPs reading, driven by index or cue… the framerate drops.
It drops…
When we load/start playing a big number at the same time (I mean… 7 playing: no problem. We trigger the load/start of 5 new movies… drop… then the framerate climbs back up and it's OK).
When we have more than about 18. I know it is a big number, but we need 24.
We drive our movies with signals sent through… BlackHole.
The sampling rate + buffer matching is funky.
If I choose 48 kHz and a buffer size of less than 256… no signals come into TD (it works with Max).
And I also have to resample the signals. But that part seems ok.
If you have any clues, tips, or whatever, I'd be very happy to read, test, and implement them.
Are there any hints in the Performance Monitor as to what causes the drop? Does it have trouble supplying frames quickly enough from the hard drive when trying to seek quickly through a movie?
What are the settings on the Tune page, and are the movies always playing forward?
The built-in drive is an SSD (NVMe Express, from what I read); btw, it is the built-in drive of the M2 Max laptop.
(btw, it is the first project after which I'll now advise friends/people/collaborators to even use a macOS platform with TD. I was previously advising the Windows platform!)
The 10 Movie File In TOPs are all set up like this (replicated via a Replicator):
Ok…my first action would be, as an experiment, to re-encode those movies to something which can be decoded on the GPU, such as HAPQ or NotchLC.
Usually when using HAPQ the only limiting factor is drive read speed, which is often not an issue anymore when using the latest NVMe drives.
Read here further about HAP and how to use proper chunked encoding:
Having a look at the Performance Monitor would potentially tell you a lot about where this is struggling. You can make use of its Frame Trigger setting to only report data if a frame takes longer than x ms.
Using HAP, or HAP Q, it is "a bit" better, which is very good.
But I think I have a race condition in my process. Considering the complexity of the trigger system outside of TD, I'd like to keep that trigger system as it is and possibly deal with the race condition inside TD.
I record a movie, and I want to play it as soon as it has been recorded.
Without HAP, it was OK.
With HAP, it appears there is possibly a very, very small delay between the recorded file being closed and then reopened, and that causes a problem.
How could I defer a command in TD?
The triggers are processed in OSC DAT callbacks.
Ideally, I’d “defer” one command in my callbacks code.
Ok, so with "very good" you mean HAPQ solved all your performance issues?
Regarding your new issue - I guess because HAPQ creates much bigger files than Photo/Motion JPEG, perhaps it takes a few frames longer before all the data has been written to disk.
You can defer commands using the run command, like this:
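A minimal example (the operator name 'movie1' is a placeholder):

```python
# Defer a command by N frames without blocking the current callback.
run("op('movie1').par.play = 1", delayFrames=10)

# Objects can also be passed in and referenced via args[] inside the script.
run("args[0].par.play = 1", op('movie1'), delayFrames=10)
```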
Actually, when we load and start playing up to 17 players simultaneously, no problem.
From 18 to 21 players simultaneously loaded and playing… the framerate drops to about 45, then slowly climbs back to 60, but it's not really annoying.
From 21 to 24… it's more noticeable and annoying.
We'll keep on testing for two more days.
Ok for the defer. I have to read about how it works. As far as I understand from just reading the link, it doesn't stop (sleep) the code but just delays THAT command (a bit like threads or asynchronous timers).
That is correct. That's also what I understood you wanted.
Another possible direction (which is perhaps a bit more bulletproof) is to use the Info CHOP channels on your Movie File In TOP to see when the newly generated movie file has loaded and is ready to play. So your callback could trigger a 'loadMovieScript' DAT which loads the new movie file into a Movie File In TOP, then checks the status of the open/opening/open_failed channels and does something like this (a rough sketch follows the list below):
If open_failed is 1, you know the file is not ready yet, so wait x frames (using either a Timer CHOP or a run delay), and then run the loadMovieScript again.
If opening is 1, you know the file is there, but you need to wait a frame until it has finished loading.
If open is 1, you know the new file is ready and loaded, and you can fire a play or other command accordingly.
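A rough sketch of such a loadMovieScript, following the three checks above ('movie1' and 'movie1_info' are placeholder names; 'movie1_info' is an Info CHOP pointing at the Movie File In TOP):

```python
# loadMovieScript (Text DAT): load a freshly recorded file, then poll the
# Info CHOP channels until the Movie File In TOP reports the file is open.
# 'movie1' and 'movie1_info' are placeholder operator names.

def load(path):
    op('movie1').par.file = path
    check(path)

def check(path):
    info = op('movie1_info')                  # Info CHOP referencing 'movie1'
    if info['open_failed'].eval():
        # File not ready on disk yet: wait a few frames, then retry the load.
        run("op('loadMovieScript').module.load(args[0])", path, delayFrames=10)
    elif info['opening'].eval():
        # File found but still opening: check again next frame.
        run("op('loadMovieScript').module.check(args[0])", path, delayFrames=1)
    elif info['open'].eval():
        # File is open and loaded: safe to start playback.
        op('movie1').par.play = 1
```

The OSC callback would then just call op('loadMovieScript').module.load(newFilePath) and return immediately.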
As @snaut said, test with the Performance Monitor so you can see what the current issue is. It could be that you are hitting the limit of your SSD drive.
24 movies x 60 fps x 1280x720 for HAPQ is ~1,327 MB/sec, and I saw in one review that the M2 Max MacBook Pro SSD can do ~1,446 MB/sec, but in your case these bytes need to be read from 24 different locations on the SSD simultaneously.
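For reference, that figure comes from assuming roughly 1 byte per pixel on disk for HAP Q (it is DXT5/BC3-based, so about 4:1 over 8-bit RGBA, ignoring any extra Snappy compression):

```python
# Back-of-the-envelope HAP Q read bandwidth, assuming ~1 byte per pixel on disk.
width, height, fps, movies = 1280, 720, 60, 24
mb_per_sec = width * height * fps * movies / 1e6
print(mb_per_sec)   # ~1327 MB/s
```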
I would also suggest greatly reducing your Pre-Read Frames parameter. This is going to use up a huge amount of memory. The default is 3 for a reason; maybe 10 at most is useful if there is a large variance of decode times across a small span of the movie, but for most cases 3 is good.
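If the TOPs are replicated, something like this could reset them all in one go (the path '/project1/movies' and the parameter name 'prereadframes' are assumptions; check the actual internal name by hovering over the Pre-Read Frames parameter):

```python
# Reset Pre-Read Frames to the default on every replicated Movie File In TOP.
# The container path and the parameter name 'prereadframes' are assumptions.
for m in op('/project1/movies').findChildren(type=moviefileinTOP):
    m.par.prereadframes = 3
```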
Hey @malcolm,
Yes. You spotted something I tweaked and thanks a lot for your answer.
You know how it is.
Some parameters are not totally mastered and known (and even when they are, you try…)
You test.
You tweak.
One seems to have a nice big effect (and it did).
You keep it.
Then you design more, you test, you leave it, and you have to tweak another one.
Etc
Today is a big tests day.
I’ll compile all the suggestions and clean these up!
HAP was the way.
Indeed, a Pre-Read Frames setting of around 5 with GPU decoding was enough.
24 videos playing, loaded on the fly, HAP, 1080p60, perfectly read, and further global video output processing in TD too.
And the big audio setup.
On the same M2 Max computer with 96 GB.
No hiccups, it runs perfectly.
Gosh.
(And you can add the sync from live to TD for each video independently, using signals through BlackHole, omg.)