SRT Latency variations

Hi everyone,
I’ve been playing around with using SRT in an application that requires extremely low latency and made a few observations that I was hoping to get some insight into. We set up a test between two instances of TouchDesigner running over a VPN between a city in California and a city in Texas, and ran the stream at 60 fps. I embedded the frame count in the stream (as per the example in the operator snippets) and had the remote computer send that number back to me via OSC. In this way, I found the round-trip time (though it was hard to determine the one-way latency, as it could have been asymmetric). In any event, I found that it would frequently settle into one of two latency counts: the latency (in frames) seemed to always be either 37 frames or 16 frames (at 60 Hz). Pausing and restarting the timeline on the sender side seemed to cause it to randomly settle into one of the two. Obviously, 16 frames is preferred, but I couldn’t seem to find a way to guarantee that it would connect at that latency.
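
In case it helps anyone reproduce this, the measurement boils down to the sketch below (the function name and defaults are my own, not an actual TouchDesigner callback — in the real patch the echoed frame number arrives via an OSC In operator):

```python
def round_trip_frames(sent_frame, current_frame, fps=60):
    """Round-trip latency given the frame number embedded in the stream
    (echoed back by the receiver over OSC) and the sender's frame number
    at the moment the echo arrives. Returns (frames, milliseconds)."""
    frames = current_frame - sent_frame
    return frames, frames * 1000.0 / fps

# e.g. frame 1000 comes back while the sender is on frame 1016:
# 16 frames round trip, roughly 266.7 ms at 60 fps.
frames, ms = round_trip_frames(1000, 1016)
```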

My question is, why would this be? And how can I tweak this setup to guarantee that the latency stays low? None of the other parameters in either the VideoStreamIN or VideoStreamOUT TOPs seemed to make much of a difference (though if anyone has any advice on the many parameters, I’d love to hear it… the documentation in the wiki is a little light on some of the details of what you’re actually tweaking and how they pertain to either SRT or the H264/H265 codecs).

-michael

Actually, you don’t even need two computers, or even two instances of TouchDesigner, to observe some of the inconsistencies, which makes me think it’s related to encoding (or maybe decoding?). See the attached
SRTLatencyTest.1.toe (10.2 KB)

The math1 CHOP shows the difference between the current frame and the frame at which the video was encoded by videostreamout1. If you hit “Reload” on it, you’ll see this latency change randomly. If you turn off “Play” and wait a few seconds before turning it back on, the video is then delayed for as many seconds as play was off. The odd thing is that the video is being cached somewhere, and adjusting the “Tune” parameters doesn’t seem to affect how big that cache can grow. Is there some way to flush this buffer, or to force it to use a minimal buffer?
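
To put numbers on the pause behavior (illustrative arithmetic only, on the assumption that the encoder keeps sending at the timeline rate while playback is paused):

```python
def backlog_frames(pause_seconds, fps=60):
    """Frames that pile up in the receive/decode buffer while playback
    is paused but the sender keeps encoding at `fps`."""
    return int(pause_seconds * fps)

# Pause for 5 seconds at 60 fps and the stream comes back 300 frames
# (i.e. 5 seconds) behind, which seems to match what I'm seeing.
backlog_frames(5)
```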

Thanks!
michael

Interestingly, I’ve found a way to run the buffer down to a minimal 12-frame latency: split the example into two instances of TouchDesigner, set the timeline on the receiver to run at a slightly higher framerate (say, 61 FPS), wait for the reported round-trip latency to fall and stabilize (the frame count is sent back to the sender via OSC, and the latency can be found by subtracting it from the current frame), and then set the FPS back to 60. At that point, the system seems to stick with the low-latency buffer. This works, but it feels like a hack. Is there a more legitimate way to force a smaller buffer in videostreamin?
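
My back-of-the-envelope reasoning for why the 61 FPS trick works (this is just my guess, nothing official): the receiver consumes one extra frame per second, so any backlog drains at (receiver_fps − sender_fps) frames per second:

```python
def drain_time_seconds(excess_frames, sender_fps=60, receiver_fps=61):
    """Seconds needed to drain a frame backlog when the receiver's
    timeline runs faster than the sender's."""
    return excess_frames / (receiver_fps - sender_fps)

# Going from a 37-frame to a 16-frame buffer (21 excess frames) at
# 61 vs. 60 fps should take about 21 seconds.
drain_time_seconds(37 - 16)
```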

Thanks for the report, I’ve added this to my todo to look into more

I think the main issue here was that the Nvidia decoder’s performance wasn’t keeping up, causing the decode to get backed up. I’ve fixed some synchronization issues that seem to solve this.
Fixes will be in builds 2022.32230 and later. Thanks for the report!

So glad to hear that! Can’t wait to give it a try.
Thanks!
michael