Receiving a jpg from remote machine

Hi there,

I was hoping someone could clarify the most performant method for getting .jpgs from a remote machine into my TouchDesigner network. These .jpgs will represent the frames of a simulation running concurrently with the TouchDesigner session.

All I can think of is using UDP/TCP and converting the bytes in a callback, but that seems like it could be a low-performance method.

Thanks!

Is the remote machine on your same network, or somewhere else on the internet?

On a local network, or even over a VPN, you could use something like NDI - especially handy if you need to stream frames between machines. Another alternative is the Touch In / Out family of operators (there are TOP-specific ops for this operation). These are much easier to work with on a local network, or when sent over a VPN. If you’re looking to get data from a machine on the web, you might consider streaming protocols for this need - the Video Stream In TOP, for example, would help you work with an RTSP stream.

Whoa hi Matthew, my starting point for this stuff has been your interprocess communication post https://matthewragan.com/2015/04/10/thp-494-598-interprocess-communication-touchdesigner/

I’ll do some searching around that term NDI and the associated TOPs. I’ll be on a VPN to access the supercomputer the simulation will be running on, so I suppose that fits.

With regards to the specific in/out operators, do you mean the Touch In TOP? I can’t figure out how to configure my server’s message to work with this for the moment. Or do you mean the NDI In TOP?

Oh, the Video Stream In TOP might be just the thing. I’ll do some learning about how to write an RTSP Python server for transmitting to Touch.

Is your simulation running as a TD process or on another platform?

Touch In / Out Ops are great for Touch -> Touch communication, but aren’t ideal for data streams out of the environment.

NDI In should work - the trick here will be being on the VPN (and having the bandwidth to support the simulation resolution you’re after).
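For a rough sense of whether a VPN link can keep up, a back-of-envelope bandwidth estimate helps. The resolution, frame rate, and compression ratio below are placeholder assumptions for illustration, not numbers from this thread:

```python
# Rough bandwidth estimate for streaming simulation frames over a network.
# All concrete numbers here are example assumptions, not measurements.

def stream_mbps(width, height, fps, bytes_per_pixel, compression_ratio=1.0):
    """Return the approximate stream bandwidth in megabits per second."""
    raw_bytes_per_sec = width * height * bytes_per_pixel * fps
    return raw_bytes_per_sec * 8 / compression_ratio / 1_000_000

# Uncompressed 1080p RGB at 30 fps - far too much for most VPN links:
raw = stream_mbps(1920, 1080, 30, 3)
# The same stream assuming roughly 20:1 JPEG-style compression:
jpeg = stream_mbps(1920, 1080, 30, 3, compression_ratio=20)
```

Running numbers like these early makes it clear how far you may need to scale the simulation resolution down before the VPN becomes the bottleneck.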

How large is your simulation? Another option would be sending a byte array, and then passing that data to a Script TOP to convert it back to a texture. Using an Engine COMP for this might help keep that process from slowing down your main installation.
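One way to sketch the byte-array idea: TCP is a stream with no message boundaries, so each JPEG needs some framing before the TouchDesigner side can reassemble it. Here's a minimal stdlib-only sketch using a 4-byte length prefix - my own convention for illustration, not anything TD-specific:

```python
import struct

def frame_jpg(jpg_bytes: bytes) -> bytes:
    """Prefix a JPEG payload with its length as a 4-byte big-endian integer."""
    return struct.pack(">I", len(jpg_bytes)) + jpg_bytes

def unframe(buffer: bytes):
    """Pull complete frames off the front of a receive buffer.

    Returns (frames, leftover); leftover is kept for the next recv call.
    """
    frames = []
    while len(buffer) >= 4:
        (length,) = struct.unpack(">I", buffer[:4])
        if len(buffer) < 4 + length:
            break  # incomplete frame - wait for more data
        frames.append(buffer[4:4 + length])
        buffer = buffer[4 + length:]
    return frames, buffer
```

On the receive side, each complete frame would then be decoded and handed to a Script TOP to become a texture.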

The other thing to keep in mind is that bit depth is going to matter a lot for simulation data - so you might run into some quantization with an 8-bit JPEG. That might not be a deal breaker, but it’s certainly something to keep an eye on early.
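To put a rough number on the quantization concern: mapping a float field to 8 bits and back introduces a worst-case round-trip error of half a quantization step, i.e. about 1/510 of the value range. A quick stdlib-only check (the [0, 1] range here is an arbitrary example):

```python
def quantize_8bit(value, lo=0.0, hi=1.0):
    """Round a float in [lo, hi] to the nearest of 256 levels, then map back."""
    level = round((value - lo) / (hi - lo) * 255)
    return lo + level / 255 * (hi - lo)

# Worst-case round-trip error is half a step: 1 / (2 * 255) ~= 0.00196
errors = [abs(v / 1000 - quantize_8bit(v / 1000)) for v in range(1001)]
max_err = max(errors)
```

Whether ~0.2% of the data range is acceptable depends entirely on what the simulation values represent - which is why it's worth checking before committing to JPEG.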

Hope that all helps!

Thanks for the reply!

It’s just a Singularity container (basically Docker for our supercomputer), not a TD process.

Right, glad to confirm that.

And do I understand correctly that I can have a TCP server transmitting raw .jpg bytes, and as long as I have the correct remote machine IP in the NDI In TOP’s ‘Extra Search IP’ parameter then I’ll be in good shape? Or do I need to do some more learning about NDI?

The official versions of some of these simulations are quite large (see one collaborator’s star formation simulation https://www.youtube.com/watch?v=3z9ZKAkbMhY), so I’ll be scaling down as far as I have to for things to run at a decent clip. Is sending a byte array to a UDP In DAT an example of what you’re describing? If so, I’m stuck at the step “passing that data to a Script TOP”. Should I attempt to copy the byte array to the Script TOP via copyNumpyArray in the UDP In DAT callback?

I’ll look at the Engine COMP, because all the futzing I’ve done so far tanks my fps almost immediately.

About the bit depth, thanks for the heads up! I’ll try to see if I can ascertain whether that’ll be an issue as soon as possible.

This is such helpful info, it’s like all the things I’ve been wondering since starting with TD.

I think there’s a little more set-up here - I’d check the NDI SDK docs to make sure.

Yes - the Script OPs can be used as write-to buffers in TouchDesigner. A possible workflow here would look like UDP In DAT callback -> Script TOP. Here’s a simple example of that idea:

base_script_top_as_cache.tox (1.1 KB)

The big difference would be that your callback is what would write your array to texture.
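To make the callback -> Script TOP handoff a bit more concrete, here's a hedged sketch of the buffering half, written as plain Python so it runs anywhere. It reassembles JPEGs from arbitrary byte chunks by scanning for the JPEG start-of-image / end-of-image markers (FFD8 / FFD9). The TD-specific step - decoding the bytes and writing them into the Script TOP - only runs inside TouchDesigner, so it's shown as comments:

```python
# Sketch: reassemble complete JPEG frames from arbitrary byte chunks by
# scanning for the JPEG start-of-image (FFD8) and end-of-image (FFD9) markers.
# Inside TouchDesigner this would live in the UDP In DAT callback, and each
# completed frame would be decoded and pushed to the Script TOP, e.g.:
#   img = cv2.imdecode(numpy.frombuffer(frame, numpy.uint8), cv2.IMREAD_COLOR)
#   op('script1').copyNumpyArray(img)   # TD-only; check the Script TOP docs

SOI = b"\xff\xd8"  # JPEG start-of-image marker
EOI = b"\xff\xd9"  # JPEG end-of-image marker

class JpegAssembler:
    def __init__(self):
        self.buffer = b""

    def feed(self, chunk: bytes):
        """Add a chunk of received bytes; return any completed JPEG frames."""
        self.buffer += chunk
        frames = []
        while True:
            start = self.buffer.find(SOI)
            if start < 0:
                self.buffer = b""  # no frame start yet - discard noise
                break
            end = self.buffer.find(EOI, start + 2)
            if end < 0:
                self.buffer = self.buffer[start:]  # keep the partial frame
                break
            frames.append(self.buffer[start:end + 2])
            self.buffer = self.buffer[end + 2:]
        return frames
```

This marker-scanning approach avoids needing a custom framing protocol on the sender, since every valid JPEG already begins with FFD8 and ends with FFD9.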

Glad to help :)

Thanks so much, I think that takes care of everything I was curious about. I’ll maybe put an example file at the bottom of this post later when I have some things working, in case others find it helpful too.