Frame-accurate Python commands

Hello everyone,

I’m building a permanent installation where we start projections from different media players via UDP commands. Every 20 minutes a new show starts and the media players are started again. I use a Clock CHOP and a Python script to schedule all the commands. The Python script runs every time the ‘minutes’ channel of the Clock CHOP changes, and it checks with an if statement whether the ‘minutes’ channel is 0, 20 or 40. If this returns true, a Python function that starts all the media players (and other devices) is called.
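Roughly sketched, the scheduling logic looks something like this (simplified; the channel name and start_show() stand in for the actual script):

def onValueChange(channel, sampleIndex, val, prev):
    # Only react to the 'minutes' channel of the Clock CHOP
    if channel.name != 'minutes':
        return
    # A new show starts on the hour and every 20 minutes after
    if int(val) in (0, 20, 40):
        start_show()

def start_show():
    # placeholder: send the UDP start commands to every media player here
    pass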

I print the execution of every command to a log file and notice that there’s often a delay of a few milliseconds between two Python commands. It’s also not always the same delay, which causes lip-sync issues (if the delay were always 20 ms it would still be in sync; however, it is mostly 20-30 ms and sometimes even more, like 120 ms).

This raised a few questions for me:

  • Can the frame rate of the project influence this delay?
  • Is there a delay between lines of code in python?
  • Is there a better way to schedule commands in TouchDesigner than using the clock CHOP?

There should only be a noticeable delay between Python lines if you are doing something in that code that causes the system to hang for a bit (like building a command by parsing, or churning over lots of data). If you were simply dispatching a UDP command in those lines with pre-determined data, I don’t really see how a meaningful delay could ensue.

CHOP executes are always less reliable than using callbacks from things like a Timer CHOP, but this doesn’t sound like your issue exactly. If you are indeed getting a delay because of what else you are doing before sending each message, perhaps you could build all the command data first and only send the UDP messages once they’ve all been prepared? I’m just guessing at what is happening; you may want to post an example for better help though.
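As a rough illustration of that idea (the addresses, port and command strings below are made up), prepare every payload up front so the only work between sends is the send itself:

import socket

def start_all_players():
    # 1. Build all payloads before any network traffic happens
    commands = [
        ('10.0.0.11', b'PLAY show_a'),
        ('10.0.0.12', b'PLAY show_b'),
    ]

    # 2. Send them back to back, with no parsing or string building in between
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for ip, payload in commands:
        sock.sendto(payload, (ip, 7000))
    sock.close()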

Thanks for your reply! I will make an example project out of one of the project files in the next few days.
Another question related to this: is there a way to prioritize which OPs are cooked first each frame? If I could make sure that the Text OP that calls all the device functions is cooked first, the delay would be minimized as well.

If you execute the network commands in the context of a Python script (as Pete suggested) you’ll have better luck than trying to mandate execution order in OP world. Try creating an Extension and centralizing your Python code there; it will make it easier to troubleshoot this.
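A minimal sketch of such an Extension (the class, method and parameter names here are just placeholders for whatever your project uses):

class ShowControlExt:
    def __init__(self, ownerComp):
        self.ownerComp = ownerComp

    def StartShow(self):
        # Fire all device starts from one script, so the order is dictated
        # by Python rather than by OP cook order.
        devices = self.ownerComp.op('DEVICES')
        for device in devices.children:
            device.par.Start.pulse()  # assumes each device COMP exposes a custom 'Start' pulse parameter

Once the extension is promoted, everything can be triggered with a single call like op('scripts').StartShow().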

I actually create all the devices as Base OPs and use extensions to control them. However, some of the triggering still happens in OP world. You can find a minimized version of the project attached. The real project contains some more devices and a GUI.

Short explanation about the file: all the devices are inside the “DEVICES” OP. I use the global OP shortcut to call their functions. All scripting to start shows, start devices, trigger functions based on the current time… is inside the “scripts” OP.
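Calls through that global OP shortcut look roughly like this (the device and method names are just examples, not the real ones):

op.DEVICES.op('mediaplayer1').Play()
op.DEVICES.op('projector1').PowerOn()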

DerivativeForum_Demo.toe (28.5 KB)

I’m reviving this thread because I’m having a similar problem, but there is no network stuff involved.

Let’s say I have a basic slideshow where image A transitions to image B; you then have to swap slot A and slot B on the exact same frame that you reset the transition (i.e. initialize the timer). Then B is in the foreground and A is in the background, you replace A, and the process continues.

I have a timer set up that starts the transition, and a CHOP Execute (listening to ‘done_pulse’) that swaps the images when the timer ends. This works perfectly if there is very little stress on the system. But when I try to incorporate some of the more intense transitions, the “swap” and the “reset” don’t happen on the same frame, and you see a flash of the previous frame. (I’m sure I’m not explaining this super clearly, but if you’ve ever done an image transition, you probably know what I mean)

Here’s what happens in the CHOP Execute when done_pulse pulses

def onOffToOn(channel, sampleIndex, val, prev):
    print(f'[{absTime.frame}] transition done. swapping')
    swap = (n % 2) == 1  # n is an integer that increments every transition
    op('select1').par.top = 'slot2' if swap else 'slot1'
    op('select2').par.top = 'slot1' if swap else 'slot2'
    op('timer1').par.Initialize.pulse()

Here is my Parameter Execute that is watching the top parameter of the selects

def onValueChange(par, prev):
	print(f'[{absTime.frame}] changed top value to {par.eval()}')
	return

Sure enough, there is a 5+ frame delay when the system is under pressure. Sometimes the Parameter Execute doesn’t run at all!

[64] transition done. swapping
[69] changed top value to /project1/slot1
[69] changed top value to /project1/slot2

So I guess my question is: how can I guarantee that python commands execute on the exact frame that I call them? If I can’t, what are ways to get around this problem?

I’d love to provide a minimal example of the issue, but I’m having trouble even figuring out how exactly to replicate the conditions where the problem occurs without uploading a very complex TOE, which kind of defeats the purpose. Suggestions welcome.

Take a look at the Hog CHOP.

Thanks, @jesgilbert
I made a minimal example here with the Hog CHOP, but, of course, I can’t recreate the problem I described. I will continue trying, but I’m posting it now in case anyone spots anything else stupid that I am doing.
slideshow.zip (2.1 MB)

to be continued…

Here is another version that actually demonstrates the glitch. I added a more elaborate transition, and I’m upscaling the images considerably (in reality, the images are actually that large)… both of these things add strain to the system.

slideshow.zip (2.1 MB)

Here’s a video of the glitch: Slideshow-Glitch
You should see that, when the particle transition finishes, there is a flash of the previous frame. This is what I’m trying to fix.

Thanks in advance!

Hi @jeffcrouse,


I think the issue is more how the Timer CHOP is set up in the transition network. While the onDone callback is executed and you switch the textures around, the mask actually still sits on the end frame, and hence you see the wrong image for a frame before the timer reinitializes and the channel driving the horizontal phase switches to the expected value.
A simple fix could be to select out the done channel from the Timer CHOP, invert the values (so 0-1 becomes 1-0) and multiply that with timer_fraction, so the transition mask already resets when the timer is finished and not too late, when it is initializing.
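As a rough sketch, the same thing could also be written as a single expression on whatever parameter drives the mask, assuming the Timer CHOP is called timer1 and its Fraction and Done channels are on:

op('timer1')['timer_fraction'] * (1 - op('timer1')['done'])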

I stepped through the transition frame by frame to figure out what was happening here - this can be really useful to catch these kinds of “misbehaviours”.

Hope this makes sense
Markus

Hey @snaut – you’re a genius - thank you so much! That worked.

Stepping through definitely would have helped me diagnose the problem and I’m embarrassed that I didn’t try it. I was convinced that the problem only occurred when the system was stressed, and stepping frame to frame would have removed the stress. In other words, it was a critical misdiagnosis on my part.

I have to say that I’m still not 100% clear on where the 1 frame delay was coming from. Does timer_fraction=1 for one frame after Initialize is pulsed?

Hi @jeffcrouse,

hopefully this illustrates it nicely:

I’m using a Trail CHOP to record a few seconds of timer_fraction and done, as well as the average color (via an Analyze TOP) of the two textures in select1 and select2.

At the start of the last frame of the Timer CHOP, onDone is called, which switches the textures in the two selects; a frame later (during initialization of the Timer CHOP) the timer_fraction is reset to 0.

cheers
Markus


@snaut thanks again for your help and patience – it’s really useful to see how you approach debugging so that I can troubleshoot these things on my own in the future.
