Python to DAT performance?

tl;dr - How can we improve performance of moving data from Python to a DAT?

We are developing an interactive installation featuring 2000+ projected flower elements that respond to the presence of visitors. Our current approach is to generate each flower as a geometry instance using the GPU instancing features of the Geo SOP.

All of our business logic is written in Python and currently runs from a single DAT execute. After processing tracking input, our Python class writes data into a DAT table that is linked in the Geo SOP’s “Instance CHOP/DAT” field (see the Geometry COMP page on the Derivative wiki).

Intra-Python data operations are very fast; however, pushing the data from Python to a DAT table is very slow. Example: putting 2000 rows of data from Python into a DAT takes ~30 ms on a fast machine (.toe attached). Some resulting questions:

  1. Are we doing it wrong? Currently we’re concatenating the data and setting table.text to the concatenated string (see the sketch after this list). We tried iterating over cells, rows, and columns, and all were slower than setting table.text in a single Python operation.

  2. Is there a better way to get data directly from Python into the “Instance CHOP/DAT” field of a Geo SOP?

  3. We can call some Python in this field, but it’s unclear if we can construct a DAT or CHOP primitive using Python code alone. Can this be done?
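For reference on item 1, here is a minimal sketch of the concatenate-and-assign approach; the table path ('instanceTable'), the flowers list, and the column layout are placeholders for illustration, not our actual project code:

# Minimal sketch of the "set table.text in one call" approach (item 1 above).
# op('instanceTable'), 'flowers', and the column names are placeholders.
rows = [['tx', 'ty', 'tz', 'scale']]                       # header row
for f in flowers:                                          # flowers: our Python-side objects
    rows.append([str(f.x), str(f.y), str(f.z), str(f.scale)])

# Build one tab/newline-delimited string and assign it in a single operation,
# which was faster than per-cell, per-row, or per-column writes.
op('instanceTable').text = '\n'.join('\t'.join(r) for r in rows)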

Thanks!
PythonTableModPerf-ForumPost.7.toe (10.2 KB)

Hi.
Looking at your example, it seems about a quarter of the time is spent parsing the table, breaking it up, and creating new text with a scrolled version of it?
I assume this is only for the example and the actual production .toe file won’t be parsing the table and shifting its contents?

I might suggest using CHOPs instead.
If you can keep all the values numeric, it might speed things up considerably.
If you have an array of numbers,
you can assign all the samples of a CHOP channel in one line with its .vals member in a Script CHOP.
Example:

# inside the Script CHOP's onCook(scriptOp) callback
a = [1, 2, 3]
c = scriptOp['chan1']
c.vals = a   # assigns all samples of the channel in one call

Feel free to send your toe file to support@derivative.ca if you want some more specific tips on this approach.

Cheers
Rob

Thanks Rob,

Correct - the reading of row data was specific to this redacted example.

Thanks for guidance on the CHOP option. We’ll give this a try.

Will a CHOP with 9 channels of 2000+ samples each be an issue?

Thanks!

Damon

I think it will be less of an issue than 9 × 2000 string conversions.
Some array types, like NumPy arrays, are optimized to be C++ friendly.
If you have a working example using the Channel.vals route, perhaps we can focus on streamlining the internal transfer even more.
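A minimal sketch of that route inside a Script CHOP’s onCook callback; the channel names, sample count, and the random data are placeholders (my assumption, not the poster’s actual data):

# Script CHOP onCook sketch: push a 9 x 2000 NumPy array into channel .vals.
# Channel names, sizes, and the random data are placeholders for illustration.
import numpy as np

def onCook(scriptOp):
    data = np.random.rand(9, 2000)          # stand-in for the real instance data
    scriptOp.clear()
    scriptOp.numSamples = data.shape[1]
    for i in range(data.shape[0]):
        chan = scriptOp.appendChan('chan' + str(i))
        chan.vals = data[i]                 # one bulk assignment per channel
    return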
Cheers
Rob.

Awesome. Will try immediately.