tl;dr - How can we improve performance of moving data from Python to a DAT?
We are developing an interactive installation featuring 2000+ projected flower elements that respond to the presence of visitors. Our current approach is to render each flower as a geometry instance using the GPU instancing features of the Geometry COMP.
All of our business logic is written in Python and currently runs from a single DAT Execute. After processing tracking input, our Python class writes data into a Table DAT that is linked in the Geometry COMP's "Instance CHOP/DAT" field. (Geometry COMP - Derivative)
Intra-Python data operations are very fast; however, pushing the data from Python to a DAT table is very slow. For example, putting 2000 rows of data from Python into a DAT takes ~30 ms on a fast machine (.toe attached). Some resulting questions:
- Are we doing it wrong? Currently we're concatenating the data and setting table.text to the concatenated string. We tried iterating over cells, rows, and columns, and all were slower than setting table.text in a single Python operation.
- Is there a better way to get data directly from Python into the "Instance CHOP/DAT" field of a Geometry COMP?
- We can call some Python in this field, but it's unclear whether we can construct a DAT or CHOP primitive using Python code alone. Can this be done?
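For reference, a minimal sketch of the single-assignment approach described above. The operator name `instances` and the helper are hypothetical; the string-building part is plain Python, while the final assignment assumes it runs inside TouchDesigner, where Table DAT text is tab-delimited with newline-separated rows:

```python
# Build the full table contents in Python first, then assign it to the
# Table DAT in one operation (faster than per-cell/per-row writes).
# 'instances' is a hypothetical Table DAT name linked in the Geometry
# COMP's "Instance CHOP/DAT" parameter.

def rows_to_dat_text(rows):
    """Join rows of values into tab-delimited, newline-separated text,
    the format a Table DAT's .text member accepts."""
    return '\n'.join('\t'.join(str(v) for v in row) for row in rows)

# Inside TouchDesigner this would be, e.g.:
# op('instances').text = rows_to_dat_text(flower_rows)
```

The point of the helper is to keep all concatenation in Python and cross the Python/DAT boundary exactly once per frame.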
Thanks!
PythonTableModPerf-ForumPost.7.toe (10.2 KB)