Reducing Complexity/Cook Load for Live Setup

Hi everyone,

I’m working on a live audiovisual setup using a variety of effect environments and the sceneChanger palette.

I’m trying to have as many environments as possible – at least 10. Once I hit four or five, performance drops quite a bit. Each environment is imported as a .tox component, then copied and pasted into the myScene templates generated by sceneChanger.

I’m using the latest version of Touch (educational) on a Windows 10 PC with the following specs:

Intel(R) Core™ i7-9750H CPU @ 2.60GHz
32GB RAM

and will be using a 1080p projector for display. Every OP’s display flag is off to reduce cook times.

The audio input runs through an Audio Device In CHOP, which is then brought into each environment through a Select CHOP.
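For reference, each scene’s Select CHOP is just pointed at that one shared CHOP by path, roughly like the sketch below (the paths and operator names are placeholders, not my exact network):

```python
# Rough sketch: point each scene's Select CHOP at the single shared
# Audio Device In CHOP, so the audio exists once and is only referenced
# inside each environment. All paths/names below are placeholders.
audio_path = '/project1/audiodevin1'
scenes_root = op('/project1/scenes')

for scene in scenes_root.findChildren(type=containerCOMP, maxDepth=1):
    sel = scene.op('select_audio')   # assumes each scene has a Select CHOP named 'select_audio'
    if sel is not None:
        sel.par.chop = audio_path    # reference the shared audio CHOP by path
```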

I’m not sure whether I’m going about this in the smartest way, whether it’s a hardware limitation, or whether this is just normal. Any and all insights or tips would be much appreciated!

The problem the sceneChanger comp is meant to solve is the cooking of these scenes when they aren’t in use. The problem you are having is that as soon as you drive things in your scene with external data, i.e. audio analysis, that data will cook certain downstream things anyway, negating the performance gains sceneChanger bought you.

My solution to this problem involves merging my external data into a single null inside the scenesLib comp and selecting it into each scene, followed by a Switch CHOP and a “null_data” from which I reference the external data.

I then made a callbacks system for the scenes, with onStart and onExit callbacks for each scene. On exit, all the data from “null_data” is written to a table, then converted back to a CHOP with a DAT to CHOP, which is plugged into the Switch CHOP, and the switch is toggled to select from there. When a scene’s onStart callback is triggered, it flips the switch back to let my external data into the scene again. This system is kind of like the opposite of the gating system @Jarrett is using in sceneChanger, and gives you the same kind of optimization: it gates out the actively cooking data when the scene is not running and replaces it with valid channel references, so that all of your expressions don’t break if the data becomes unavailable.
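A rough sketch of what those callbacks might look like, assuming the operator names below (‘chopto_live’, ‘table_frozen’, ‘switch_data’) and a callback signature that receives the scene comp (adapt both to however your callbacks are actually invoked):

```python
# Freeze/unfreeze sketch for one scene's external data.
# Switch CHOP 'switch_data' feeds 'null_data':
#   input 0 = the live external data (selected from the scenes lib)
#   input 1 = a DAT to CHOP reading the static table 'table_frozen'
# 'chopto_live' is a CHOP to DAT watching the live data.

def onExit(sceneComp):
    # Snapshot the current channel values into the static table, then
    # flip the switch to the frozen branch so nothing cooks while the
    # scene is inactive.
    sceneComp.op('table_frozen').copy(sceneComp.op('chopto_live'))
    sceneComp.op('switch_data').par.index = 1

def onStart(sceneComp):
    # Let the live external data flow back into the scene.
    sceneComp.op('switch_data').par.index = 0
```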

It’s not a super simple setup, but feel free to ask further questions if that was too much to grok.


Thanks much for this detailed and articulate answer. Very helpful.

I’ll mess around with the suggested workflow and let you know if I have questions.


Still in optimization mode and slowly improving performance…

The big snag I’m now facing is the time spent presenting/rendering windows, according to the Performance Monitor (see screenshot).

Anyone have general tips for reducing these times?

I just built 40 FX networks into my live setup, and absolutely everything was driven by audio signals happening outside of the scene changer.

My main hangup was GPU memory overcommitment, and a very simple thing that gave me back some overhead was making sure I wasn’t using any of the inputs I had built into the individual .tox files I imported for this project.

I made sure all my TOP sources and audio triggers were brought inside the container via appropriate Select TOPs/CHOPs. It’s also helpful to enable selective cooking on CHOP nulls where things are cooking continuously.

The other crucial piece was turning off all display flags and viewers.
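If it’s useful, something along these lines run from the Textport (or an Execute DAT) does the bulk of that for me; the root path is a placeholder, and the cooktype parameter/menu value should be double-checked against your build:

```python
# Sketch: walk the project, turn off node viewers and display flags,
# and set Null CHOPs to selective cooking. '/project1' is a placeholder.
root = op('/project1')

for o in root.findChildren():
    o.viewer = False              # node viewer off
    try:
        o.display = False         # display flag off (for families that have one)
    except Exception:
        pass

for n in root.findChildren(type=nullCHOP):
    # 'Cook Type' on the Null CHOP; verify the menu value name in your build
    n.par.cooktype = 'selective'
```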

I’m very interested in the previously mentioned workaround, but just combing over all of my scenes and what feeds into them helped me recover over 60% of allocated GPU mem.
