FIXED: The DMX POP is experiencing latency [2025.30770]

In version 2025.30280, there is no noticeable delay,
but in version 2025.30770, I’m encountering latency issues.

Is there a solution to this?

Are you using ArtSync?

Do you notice any difference in the values on the Info DAT of the DMX Out POP between those two TD versions?

Could you attach your project so that we can take a closer look into it?

ArtSync is turned on.
I’ll attach a screenshot and the project file.
Thank you in advance.
dmxPOP3D_Matrix_New.tests.toe (283.0 KB)

We did make some significant changes to ArtSync between those builds, among many other DMX POP changes.

The ArtSync changes do also affect the DMX Out CHOP, so as a first step, I’m curious to know if you experience the same latency with DMX Out CHOP + ArtSync with a similar example.

I was only able to test with a small number of channels,
but I didn’t experience any noticeable delay when using the DMX Out CHOP.

If you’re able to do a comparable (in terms of number of DMX channels) test with the DMX Out CHOP that would be helpful in determining whether or not it is an ArtSync issue. In the meantime, we’ll try and reproduce locally.

I’ve assigned channels and universes using the DMX Fixture POP,
but I wasn’t able to successfully convert that into a CHOP.
Is there a good way to do this?

One way to do it might be to record some input frames using the DMX In CHOP with a routing table, then send those same universes from a DMX Out CHOP with the same routing table. That way, it should still have the same universe layout as constructed by the DMX Fixture POP + DMX Out POP.

I haven’t been able to reproduce yet, but I am also lacking the physical set-up. However, I did make some improvements to the DMX Out POP that should help to improve any latency issues. I’m curious if there is any noticeable improvement to your example – could you please give this build a try and let me know?

If you’re able to do a comparable (in terms of number of DMX channels) test with the DMX Out CHOP that would be helpful in determining whether or not it is an ArtSync issue

Back to this previous comment I had: another way to rule out ArtSync as the issue could be to test without using ArtSync on the DMX Out POP. That might be easier to configure than to replicate your example as a DMX Out CHOP.

I’m sorry — I haven’t been able to check yet, as I’m still not sure how to properly use the DMX Out CHOP.
Even in the latest build, the latency issue still persists.
It seems like turning off ArtSync increases the delay even more.
This also happens in version 30280 — when ArtSync is off, latency occurs there as well.

Thanks for the update. That does lead me to believe that the issue is with ArtSync. I’ll continue to look into it.

I determined the cause of the bug to be from the recent ArtSync rework.

ArtSync is supposed to wait for all ArtDmx packets to be sent then send an ArtSync packet, and one function of that in the DMX Out POP/CHOP is that it wouldn’t send a new bunch of packets before the previous had all successfully been sent (with some timeout constraint). However, due to the bug it could essentially flood the write queue with thousands of ArtDmx packets, which is ultimately what introduced the latency.
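To make the failure mode concrete, here is a minimal sketch in plain Python (not TouchDesigner's actual implementation) of the gating behavior described above: a new frame of ArtDmx packets is only queued once the previous frame has drained, or once a timeout has elapsed. The buggy build effectively lost this gate, so frames kept piling into the write queue faster than the network could drain them. The class and parameter names here are illustrative.

```python
from collections import deque

class SyncedSender:
    def __init__(self, timeout=0.1):
        self.queue = deque()           # pending packets, oldest first
        self.timeout = timeout         # analogous to an ArtSync timeout
        self.deadline = 0.0

    def send_frame(self, packets, now):
        """Queue one frame of ArtDmx packets plus a trailing ArtSync."""
        if self.queue and now < self.deadline:
            return False               # previous frame still in flight: skip
        self.queue.extend(packets)
        self.queue.append(b"ArtSync")  # latch signal after the last ArtDmx
        self.deadline = now + self.timeout
        return True

    def drain(self, n):
        """Simulate the network sending up to n queued packets."""
        for _ in range(min(n, len(self.queue))):
            self.queue.popleft()
```

With the gate in place the queue never holds more than one frame's worth of packets. Without the `if self.queue and now < self.deadline` check, queuing 20-packet frames at 60 fps while draining only 10 packets per tick grows the queue by hundreds of packets per second, which is exactly the backlog that shows up as latency.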

The issue should be fixed in this build; could you please give it a try and confirm? I’ve also added an ArtSync Timeout parameter which allows for further control.

EDIT: fixed in builds 2025.30880+

Thank you very much.
The latency is now gone!


The DMX Out CHOP in 2025.30960 is still experiencing lag issues. They aren't present when using Art-Net, but sACN is affected.

Is the issue also present in 2023.12370? I’m curious if this is a regression or not.

How many universes are you sending and at what rate?

I imagine the latency is caused by the same reason as in the original post, which is that there are more packets being sent per second than can be processed, so the send queue grows until it reaches a size where latency is introduced.

The solution for ArtNet is ArtSync, which will throttle the send rate by only sending a new frame if the previous frame has finished sending. However, sACN does not have a feature like this, so the queue will just grow until it reaches its max size. It might be beneficial for us to add a way to control the max queue size, but you could also try lowering the send rate. We could also add a way to automatically throttle the sACN send rate similar to ArtSync.
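A quick back-of-the-envelope sketch in plain Python (not TouchDesigner internals) of the two mitigations mentioned above: the rate math that shows how fast packets accumulate, and a capped queue that drops stale packets instead of letting latency build. The universe count and frame rate are illustrative assumptions.

```python
from collections import deque

UNIVERSES = 20            # assumed example figures
FPS = 60
SACN_PACKET_BYTES = 638   # full 512-slot E1.31 (sACN) data packet

packets_per_second = UNIVERSES * FPS                        # 1200 packets/s
bytes_per_second = packets_per_second * SACN_PACKET_BYTES   # ~766 kB/s

# A capped send queue: deque(maxlen=...) evicts the oldest entries when
# full, so stale packets are dropped rather than accumulating as latency.
send_queue = deque(maxlen=UNIVERSES * 2)  # keep at most two frames queued

def enqueue_frame(frame_id):
    for universe in range(UNIVERSES):
        send_queue.append((frame_id, universe))
```

If the network can't keep up, a receiver then sees at most two frames of delay rather than an ever-growing backlog; lowering `FPS` reduces `packets_per_second` proportionally, which is the "lower the send rate" option.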

It's hard for me to say exactly, because these systems are deployed remotely and live in production, so I can't really tamper with them. I do know that I've had to switch two systems to Art-Net because using sACN seemed to have a big lag. I also think it's a regression, because I never noticed packet backlogs or lag issues until these experimental 2025 versions.

I've also noticed crashes related to unresolvable addresses. For example, I have a routing table with multiple mDNS DMX receivers (wled-receiver1.local, etc.), some of which don't exist on the network. Instead of clearing the unsendable packets, this causes a crash when attempting to disable the output.

I'm sending at 60 fps, even with small numbers of universes, like 20. I'll try to create some test files, but as I said, it's network dependent and can be tricky to pinpoint.

Here is a simple example file in which, after enabling and then disabling the DMX Out CHOP, there's a massive hang for about 30-60 seconds before it finally catches up. In a more complex project this can cause a crash. This particular issue actually seems to affect both the 2023 and 2025 versions, and both sACN and Art-Net; however, it does seem related to a backlog of packets getting stuck in some kind of retry state. If you simply enable and then disable the DMX Out CHOP, you will trigger the buggy behavior.

dmx out mdns example.toe (4.6 KB)

Okay, so it sounds like the regression happened a while ago then, if it is also an issue in the 2023.10000 series of builds. I will take a look into it, thanks.

@kendrick90

Sorry for the delay in looking at this.

The hang is occurring while trying to resolve all of the lavatube*.local addresses, which take about 2 seconds each. The DMX Out CHOP is actually in an error state for me, but the error isn't output to the CHOP (which is an issue), and it keeps attempting to resolve the addresses on each send attempt, which is what ends up blocking things when the CHOP is deactivated. Are those destination addresses resolvable on your network?
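For illustration, here is a sketch in plain Python (an assumed pattern, not the DMX Out CHOP's actual code) of why unresolvable .local hosts can stall output: a blocking resolver call like `socket.gethostbyname()` waits until mDNS gives up, which can take seconds per host. Resolving on a worker thread and caching the result, including failures, keeps each send attempt from blocking.

```python
import socket
from concurrent.futures import ThreadPoolExecutor, Future

_resolver = ThreadPoolExecutor(max_workers=4)
_cache = {}  # hostname -> IP string, "" for unresolvable, or a pending Future

def resolve_nonblocking(host):
    """Return the host's IP if known, else None without blocking."""
    entry = _cache.get(host)
    if entry is None:                    # first request: start the lookup
        _cache[host] = _resolver.submit(socket.gethostbyname, host)
        return None
    if isinstance(entry, Future):
        if not entry.done():
            return None                  # still resolving: skip this send
        try:
            entry = entry.result()
        except OSError:
            entry = ""                   # remember the failure; don't retry
        _cache[host] = entry
    return entry or None
```

With this pattern, a host that never resolves costs one background lookup instead of a ~2-second stall on every send attempt, and deactivating the output doesn't have to wait on the resolver.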