What are the rules for python namespaces in Touch?

Hey there, I’m trying to do particle tracing in TD. I have 2 pandas dataframes with x and y velocities over time. The sizes are around 13,000 coordinates by 1440 minutes, so Table DATs are out of the question unless I want my computer to blow up. By the grace of God I was able to get pandas imported from my conda env, and get these dataframes pickled in. Running from a Text DAT:

import pandas as pd  # imported from the conda env

# pth is a pathlib.Path pointing at the pickle folder, defined earlier
xvel = pd.read_pickle(pth / 'xvel.pkl')
yvel = pd.read_pickle(pth / 'yvel.pkl')
print(xvel, yvel)

All well and good. My question is where do these names xvel and yvel now exist, and how can I reference them from another op? Do I need to run the script another way to save names within the scope of a container or within the global project scope? I haven’t found any info online.

Thank you!

Your text DATs are a run-and-die kind of situation - if you want to hold onto your results, you’ll want to store that data somewhere persistent, like operator storage or as a member of an extension.

You might instead do something like:

xvel = pd.read_pickle(pth/'xvel.pkl')
yvel = pd.read_pickle(pth/'yvel.pkl')

parent().store('xvel', xvel)
parent().store('yvel', yvel)

You’d then be able to fetch those with:

parent().fetch('xvel')
parent().fetch('yvel')

Instead of parent() you can use the path to the operator where the data is stored.

There’s more about storage here:

and here:

Hm ok, thanks. Do you know if it’s expensive to fetch from another op? Like is it going to ‘fetch’ a massive dataframe every single frame that it needs to retrieve values from the dataframe?

For example, I want to somehow be able to use these grids of x and y velocities as forces that drive particles, and have it read down a line on the dataframes every second or something, to simulate the changing flow grids.

Are those grids fixed, or time based? As in, have you done all of the data processing and you’re just fetching the data, or are you doing continual computation on the data as you use it in Touch?

Storage is generally fast, but it also depends on how you’re using the data.

Yeah, it’s all there, the dataframes will be static. They’re just pretty big and I’m hoping I can retrieve the data in realtime.

CHOP data and the Lookup CHOP will be very fast - another option would be to write this data into a Script CHOP as a buffer, and then read back from that operator.

There’s another thread with some info about that approach:

Ok, that buffer idea may be useful. But doesn’t the Lookup CHOP only work with tables?

I’m trying to figure out what format I ultimately want the data in to drive particle forces, I don’t know much about particles. I see a Force input on the particle SOP.

I may want to use 2 chops with 13K samples that pull from the dataframes every time step (second or whatever it will be to represent 1 minute of the simulation). Then somehow these get organized into the XY grid, which will only need to be read from the dataframe once.

The lookup CHOP does act as a kind of lookup table - where the first input is the index for sampling your data and the second input is your data source / lookup table.

The lookup CHOP can also sample multiple channels simultaneously, so your timestep would be your first input, and your data frame the second. You’d then be able to find a corresponding sample based on your time step.

Writing the dataframes into a CHOP buffer would then allow you fast access to their contents. This is a great technique for a large pre-computed data set that you need to sample quickly. The Lookup CHOP will also do some interpolation between values, allowing for smoother animation, which is especially useful in the situation you’re working on.
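Setting the TD operators aside for a moment, the interpolation the Lookup CHOP does between samples is essentially a linear blend of the two neighboring values. A minimal pure-Python sketch of that idea (the `lookup` function and `channel` list here are made up for illustration, not TD API):

```python
def lookup(buffer, index):
    """Sample a channel buffer at a fractional index, linearly
    interpolating between the two neighboring samples - roughly
    what the Lookup CHOP does between stored values."""
    lo = int(index)
    hi = min(lo + 1, len(buffer) - 1)
    frac = index - lo
    return buffer[lo] * (1 - frac) + buffer[hi] * frac

# a tiny stand-in for one channel of velocity data
channel = [0.0, 10.0, 20.0, 30.0]

print(lookup(channel, 1.0))   # exactly on a sample -> 10.0
print(lookup(channel, 1.5))   # halfway between samples -> 15.0
```

This is why stepping your timestep index smoothly (rather than in integer jumps) gives you smoothly changing forces for free.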

The fetch() is just grabbing a reference to the data in the storage. It doesn’t make a copy, so the size of the data stored and fetched does not matter.
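You can convince yourself of the reference semantics with a plain-Python stand-in for operator storage (a dict here, purely for illustration):

```python
# stand-in for operator storage: store/fetch against a dict
storage = {}

big_data = list(range(1_000_000))   # pretend this is a huge dataframe
storage['xvel'] = big_data          # 'store' - no copy is made

fetched = storage['xvel']           # 'fetch' - also no copy
print(fetched is big_data)          # True: same object, so size doesn't matter
```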


Ok, I’m a little confused by the CHOP buffer idea, but it seems the point is to write one value at a time into a CHOP and have it only update the things that depend on it, without rewriting the whole lot of them?

If that’s the case, I don’t think I actually need to use that, since all of the values would update simultaneously. Here’s what I have so far to extract the data from the dataframes, in the setup parameters section of a script CHOP callback:

t = 800  # row (minute) to sample
u = scriptOp['u']
v = scriptOp['v']
for i in range(len(xvel.columns)):
    u[i] = xvel.iloc[t, i]
    v[i] = yvel.iloc[t, i]
# .iloc is just a way to index in pandas if you're not familiar

I get a nice picture of the velocities, but I’m suspicious that something is continuing to cook after loading it once, my laptop is sweating a bit now, and if I change the script it will update automatically. How can I ensure that it doesn’t continue to cook until I pulse setup parameters? Suppressing op viewer doesn’t seem to help.

This gave me issues for some reason, not sure why:

scriptOp['u']= xvel.iloc[t].to_list()
scriptOp['v'] = yvel.iloc[t].to_list()

Here’s a quick look at how I’ve approached this:

Here there’s a script in text1 that does some precomputation and then moves the values to storage. You can inspect storage with an Examine DAT to see the values you’ve stored. For this example I’ve put these pieces into a list to make it easier to move them into a CHOP:

The script CHOP then fetches those values from storage and packs them into channel data:

A Timer CHOP and a Lookup CHOP can then be used to sequence through the data at the speed / interval that you’d like.

Pulse-starting the Timer will move through the Script CHOP’s data.

It’s worth noting that changing pars on the Script CHOP will trigger a cook, which would then re-fill the buffer.


Alternatively, if you want an approach where another script fills your Script CHOP - you could just write directly to the Script CHOP as a CHOP buffer:

import math

xVals = []
yVals = []

samples = 14000
scaler = 0.01

# fill the buffers with some placeholder data
for each in range(samples):
    xVals.append(math.sin(each * scaler))
    yVals.append(math.cos(each * scaler))

scriptOp = op('script2')

scriptOp.clear()

scriptOp.numSamples = samples
tx = scriptOp.appendChan('tx')
ty = scriptOp.appendChan('ty')

tx.vals = xVals
ty.vals = yVals

The real benefit is that your data should then be in a static CHOP that you can look up, similar to using DATs… If your Script CHOP is continually cooking, something else is going on…

Here’s the above example so you can take a closer look:
base_chop_buffer.tox (246.7 KB)
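For the original velocity-grid question, the same buffer-filling pattern applies: pull one row per timestep out of each dataframe and hand the whole row to a channel at once via .vals, rather than writing sample by sample. Stripped of the TD and pandas specifics (plain nested lists stand in for the dataframes here, purely for illustration), the per-timestep row extraction is just:

```python
# plain lists standing in for the xvel/yvel dataframes:
# one row per minute, one column per grid point
xvel = [[0.1, 0.2, 0.3],
        [0.4, 0.5, 0.6]]
yvel = [[1.1, 1.2, 1.3],
        [1.4, 1.5, 1.6]]

t = 1                   # timestep (row) to load
u_vals = list(xvel[t])  # whole row at once
v_vals = list(yvel[t])

print(u_vals)  # [0.4, 0.5, 0.6]
print(v_vals)  # [1.4, 1.5, 1.6]
```

In the real network those row lists would be assigned to the Script CHOP’s channels (e.g. tx.vals / ty.vals as in the buffer example above), giving you one bulk write per timestep instead of 13,000 individual sample writes.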
