tl;dr - is the Replicator the right tool for large object counts where each replicated object has internal logic and behaves individually?
We are developing an interactive installation featuring 300-400 projected flower elements that respond to the presence of visitors. Our current approach is to develop 3-5 “species” of flowers and then replicate a mixture of them into the space. Each flower element contains some geo and internal Python script logic that determines its own behavior. An example .toe is attached showing the basic idea, with mouse input substituted for tracking data.
We are finding that the network slows down once a small number of Flower objects are replicated - in some cases at just 20-30 objects total. It strikes me that the Replicator is probably duplicating large amounts of data, and that each object is probably also doing redundant housekeeping, like reading the FBX animation data for the same source object. In testing, no single aspect (script complexity, geo complexity, animation length) stands out as the culprit.
Is there a better approach? I understand that for the attached example a single script could scale a network of “dumb” objects; however, our eventual goal is to give each flower distinct and complex behavior (FBX animation frame, color, hue, vert distortion) based on visitor proximity and type. I’ve looked at Geo instancing but am not sure it applies here.
1 - No matter what we try, the Replicator exhibits serious performance issues above 50-60 moderately simple replicated objects. We assume this is because replication is a full copy and not instancing of any kind. Testing this theory with small tweaks to the geo and to the per-object Python code seems to support it.
2 - We are moving ahead with geometry and texture instancing using the Geo COMP. The drawback is that we’ve had to move the project to 2D assets, since we cannot get complex textures onto the instanced objects and are limited to the instance properties defined on the Instancing tab. The upside is that we can create hundreds or thousands of objects! Our application is an organic field of elements, so this will probably work (a rough sketch of the table-driven setup is below this list).
3 - Any kind of Python logic, once replicated or copied, causes a serious performance hit once object counts get above double digits. I assume this has to do with how the Python runtime is integrated with threading; many small Python calls per frame seem to reduce the framerate significantly. We continue to work on optimizing this part of our project.
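For reference, here is a minimal sketch of the kind of table-driven instancing we mean: a script fills a Table DAT that the Geo COMP’s Instancing tab reads (Instance OP pointed at the table, columns mapped to translate, uniform scale, and color). The operator name and the scatter values are placeholders, not our actual project.

```python
# Minimal sketch: fill a Table DAT with per-instance data for a Geo COMP's
# Instancing tab to read. The 'instances' name, column set, and random
# scatter are all hypothetical stand-ins.
import random

def buildInstanceTable(count=400):
    t = op('instances')
    t.clear()
    t.appendRow(['tx', 'ty', 'scale', 'r', 'g', 'b'])  # header row
    for i in range(count):
        t.appendRow([
            random.uniform(-5.0, 5.0),  # scatter across the field
            random.uniform(-5.0, 5.0),
            random.uniform(0.5, 1.5),   # per-flower size variation
            random.random(),            # per-instance tint
            random.random(),
            random.random(),
        ])

buildInstanceTable()
```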
Would love to hear from anyone who has worked in this area, attempting to create massive arrays of “smart” operators and interact with them programmatically. This is the kind of thing game engines do well, and it would be wonderful to find a similar workflow in Touch.
Hey, sorry about the delay; I just got around to moving up versions now that my last projects are wrapping up.
I looked through it and here are some notes:
Yes, you’re correct: the Replicator makes copies, so it’s as if you had manually copied and pasted each one, except the Replicator did it for you.
Instancing is much faster. If you combine it with the notes below, you should be able to reach your goal.
Here’s where I was able to make a quick improvement to your script and run 100+ flowers decently. You’ve given each flower its own Execute DAT, which, as you’ve noticed, is pretty slow. The goal is to do as much as possible in a single run. In my version I drove both of your flower species off of the same table (for simplicity’s sake), grabbed the update() function you made in each flower type, and compiled them into a single function sitting on the same level as the Render TOP. Then I ran one Execute DAT in /project1/ that looks at the rows in the table, runs a for loop that many times, and uses the iteration number to build the name of each flower. From there I could calculate the distance to both types of flower in a single run.
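As a rough sketch, it looked something like this; the table name, flower naming scheme, Mouse In CHOP, and the scale response are stand-ins for your actual update() logic:

```python
# Sketch of the single consolidated Execute DAT in /project1/, assuming a
# Table DAT 'flowers' (one header row plus one row per flower), Geo COMPs
# named flower1, flower2, ... on the same level, and a Mouse In CHOP
# 'mouse1' standing in for visitor tracking. Names are hypothetical.
import math

def onFrameStart(frame):
    table = op('flowers')
    mx = op('mouse1')['tx'].eval()
    my = op('mouse1')['ty'].eval()

    # one loop over every flower instead of one Execute DAT per flower
    for i in range(1, table.numRows):  # row 0 is the header
        flower = op('flower' + str(i))
        d = math.hypot(flower.par.tx.eval() - mx,
                       flower.par.ty.eval() - my)
        # stand-in response: bloom as the visitor approaches
        flower.par.scale = max(0.2, 1.0 - d)
```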
Lower your point counts if you can get away with it. I used a Polyreduce SOP on both kinds of flowers to lower the point count dramatically. It helped a lot and you could barely notice the difference. If you lower the point count when you’re modelling instead, you’ll have even more control over the look than I did.
Get rid of anything not being used. I noticed a bunch of Filter CHOPs, Switch CHOPs, and a variety of DATs that didn’t serve any purpose, so I deleted them all. It doesn’t help hugely on its own, but once you start scaling, keeping down the number of operators cooking can turn the tide of war.
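The Performance Monitor is the usual way to hunt these down, but if you want a quick scripted pass, something like this run from a Text DAT will list the heaviest cookers (the root path is an assumption):

```python
# Quick audit sketch: print the slowest-cooking operators under a COMP so
# you can see what is worth deleting or disabling. Path is hypothetical.
def listHeavyOps(root='/project1', top=20):
    children = op(root).findChildren(maxDepth=10)
    children.sort(key=lambda o: o.cookTime, reverse=True)
    for o in children[:top]:
        print('%8.3f ms  %s' % (o.cookTime, o.path))

listHeavyOps()
```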
I had another point but I just forgot it; I’ll add it if I remember.
If you can find a way to do it with hardware instancing, that would be a lot faster. You can manipulate most of the data as CHOPs or DATs and then feed it into the instance parameters of the Geo COMP. However, that does limit the variation you can have between instances to transforms (scale/rotate/translate), color, and texture coordinates/assignments.
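For example, you could keep all the per-flower behavior in one Script CHOP that recomputes the instance channels from visitor proximity each time it cooks, and point the Geo COMP’s instance parameters at those channels. A rough sketch, where the grid layout, channel names, response curve, and the 'mouse1' stand-in are all assumptions:

```python
# Script CHOP sketch: one cook recomputes per-instance position, scale,
# and color from visitor proximity; the Geo COMP's Instancing tab reads
# these channels. Layout, names, and the response curve are hypothetical.
import math

N = 400
# static grid scatter, built once
POS = [((i % 20) - 10.0, (i // 20) - 10.0) for i in range(N)]

def onCook(scriptOp):
    scriptOp.clear()
    mx = op('mouse1')['tx'].eval()   # visitor position stand-in
    my = op('mouse1')['ty'].eval()   # (referencing the CHOP also makes
                                     # this re-cook when the input moves)

    names = ('tx', 'ty', 'scale', 'r', 'g', 'b')
    chans = [scriptOp.appendChan(n) for n in names]
    scriptOp.numSamples = N

    vals = {n: [] for n in names}
    for x, y in POS:
        d = math.hypot(x - mx, y - my)
        bloom = max(0.0, 1.0 - d / 5.0)  # closer visitor -> fuller bloom
        vals['tx'].append(x)
        vals['ty'].append(y)
        vals['scale'].append(0.3 + 0.7 * bloom)
        vals['r'].append(bloom)
        vals['g'].append(0.8)
        vals['b'].append(1.0 - bloom)

    for chan, n in zip(chans, names):
        chan.vals = vals[n]
```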