Generative geometries, growing meshes, etc.

Hi there,
I learn A LOT from the forum, and it is very interesting to watch such a dense community.

My question isn’t meant to open Pandora’s box; it’s more an attempt to gather a set of strategies, even if they are only named or described briefly. I know there are MANY answers depending on the initial purpose…

I have explored growing meshes, especially with Processing.

I’d like to explore this and make a kind of prototype with TD.

We could start with a simple idea:

  • an element (a cube, a quad …) as a seed,
  • at each step, a process takes a face of the element and builds a new element from it (extrusion? whatever),
  • the process decides where to grow,
  • the process checks whether growth is allowed, following rules (is there enough space, etc.).

Of course, this is just ONE idea.
But it could help me figure out what could serve as a canvas for it, and also start exploring how I can do procedural / generative work from a Python script that drives nodes, for instance (if that makes sense).
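To make that concrete, here is a minimal sketch of the bookkeeping such a process needs, in plain Python with no TouchDesigner-specific calls; the helper names, the seed, and the “pick a random face” growth rule are only assumptions. The resulting points and quads could then be pushed into a Script SOP or stored in a Table DAT.

```python
# Minimal growth loop (plain Python): keep a list of points and quads,
# and at each step "extrude" one quad along its normal into a new quad.
# The growth rule (random choice) and helper names are illustrative only.
import random

def quad_normal(quad, points):
    # normal from the cross product of two edges (assumes a planar quad)
    ax, ay, az = points[quad[1]]
    bx, by, bz = points[quad[0]]
    cx, cy, cz = points[quad[3]]
    u = (ax - bx, ay - by, az - bz)
    v = (cx - bx, cy - by, cz - bz)
    n = (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
    l = (n[0]**2 + n[1]**2 + n[2]**2) ** 0.5 or 1.0
    return (n[0]/l, n[1]/l, n[2]/l)

def extrude(quad, points, quads, dist=1.0):
    # duplicate the quad's points, offset them along the normal,
    # and register the new cap face (side faces omitted for brevity)
    n = quad_normal(quad, points)
    new_idx = []
    for i in quad:
        x, y, z = points[i]
        points.append((x + n[0]*dist, y + n[1]*dist, z + n[2]*dist))
        new_idx.append(len(points) - 1)
    quads.append(new_idx)

# seed: one unit quad in the XY plane
points = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
quads = [[0, 1, 2, 3]]

for step in range(10):
    face = random.choice(quads)    # "the process decides where to grow"
    extrude(face, points, quads)   # a rule (enough space?) could veto here
```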

I have often seen people pre-generating BIG meshes outside of any software like TD, Processing, or Max, and then using them with a “custom player” that parses the structure and plays back the growing progression.

What would your approach to this be in TD?

I used to use procedural techniques for this kind of thing.
I imagine a Python script somewhere.
But how would it interact with nodes? I mean, referencing etc. seems fine to me, but I’d probably need some kind of asynchronous engine here, like: every n frames I trigger a script, it finds a place in the mesh and then sends all the information to a DAT used as point storage? Or writes to a texture for a GLSL way of processing the points, and maybe for drawing too?
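For the “every n frames” part, the simplest version I can imagine is an Execute DAT whose frame callback runs a growth step and appends the result to a Table DAT; a rough sketch (the operator name 'points' and the growth rule are placeholders, not an established setup):

```python
# Sketch for an Execute DAT: every N frames, run one growth step and append
# the new point to a Table DAT named 'points' (a placeholder name).
# The same values could instead be written into a texture for a GLSL /
# instancing-based pipeline.
N = 30  # grow every 30 frames

def onFrameStart(frame):
    if frame % N != 0:
        return
    # placeholder growth rule: just walk outward along X over time
    x, y, z = frame * 0.01, 0.0, 0.0
    op('points').appendRow([x, y, z])
    return
```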

Any ideas would help light my way here.

Hello Julien,
I had the same question when a musician (Alexandre Augier) asked me to transcribe a Processing project into TD, because Processing was too slow.
Using Python (as in a Script SOP) was not a solution because it’s too slow and you are limited in the number of objects (particles, vertices, etc.).
There are (for me) two ways to explore what you want to do:

  • if you want a truly interactive procedural universe, GLSL is the way. I first use a GLSL Multi TOP to do operations on textures, using a Feedback TOP for velocity and acceleration fields (a CPU-side sketch of that update loop follows after this list). I use the textures produced (sometimes recorded as multilayer EXR) to instance objects (particles or objects), using a vertex shader to move the vertices, a geometry shader to move or grow the primitives, and a pixel shader to apply the material.
  • another way is to build a procedural structure in Houdini, animated over time, export it as Alembic, import it into TD, and play with the time. The heavy geometry computation is pre-baked; you can still change materials and the point of view. Using a geometry shader, you can still move parts of the geometry. You can even (for simple geometry) instance the Alembic and animate it with variations, using premade textures.
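To give an idea of what that first, GLSL route computes each frame, here is a CPU-side analogue in plain Python/NumPy; in TD the same update would live in a GLSL TOP reading its own output through a Feedback TOP, and the field names here are only assumptions:

```python
# CPU analogue of the texture feedback loop: each "pixel" holds one particle's
# state; every frame the velocity is integrated from an acceleration field and
# fed back. In TD this runs in a GLSL TOP with a Feedback TOP; names here are
# placeholders.
import numpy as np

num = 256 * 256                        # one particle per pixel of a 256x256 TOP
pos = np.random.rand(num, 3).astype(np.float32)
vel = np.zeros((num, 3), dtype=np.float32)
dt = 1.0 / 60.0

def accel(p):
    # placeholder acceleration field: pull everything toward the origin
    return -p

for frame in range(600):
    vel += accel(pos) * dt             # sample the acceleration "texture"
    pos += vel * dt                    # integrate; this is the fed-back state
# pos (and vel) would then drive instancing: one instance per pixel/particle
```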
I am preparing tutorials in French explaining the process, but it takes time to make a good tutorial!
Hope that’s comprehensible.
Jacques

Say hello to Alex :)
and thanks again for your approaches.
Everything is very clear.

If I understand correctly, it would be:

  • initial conditions, stored for instance in the layers of an EXR,
  • a more or less generative progression, using Feedback TOPs, GLSL, etc.

The Houdini way is interesting; I took a similar approach with a prototype I did with Gaea (by QuadSpinner) and Max 8.
Basically, I was generating a terrain (actually, a block) and playing with some variables (related to terrain erosion, etc.). Once I had found the ones I wanted to play with, I created a file with all the batch commands and batch-rendered all my frames: one mesh per frame. Then I loaded ALL the meshes into a binary structure in Max, which let me load everything at launch time, travel through all the meshes, and fake a generative animation. Plus, my work integrates disturbances/anomalies/“glitches” at its core, so jumping from one mesh to another like that, with a fragmented timeline, could also be fine.
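The playback side of that idea is simple enough to sketch; something like this, in plain Python (the file pattern, the glitch probability, and the frame rule are assumptions for illustration, not the actual Max patch):

```python
# Sketch of the "custom player" idea: per-frame meshes baked offline are
# listed once, then looked up by frame number, with occasional jumps to
# fake a fragmented / glitched timeline. Paths and rules are placeholders.
import glob
import random

files = sorted(glob.glob('meshes/terrain_*.obj'))   # one baked mesh per frame

def mesh_for_frame(frame, glitch=0.1):
    if not files:
        return None
    # mostly linear playback, but occasionally jump to a random baked frame
    if random.random() < glitch:
        return files[random.randrange(len(files))]
    return files[frame % len(files)]
```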

It addresses the huge amount of time you otherwise have to spend with a generative system (which is so deep to explore and control… as I did with the one I used for the.collapse project).

@nettoyeur posted this link to the TouchDesigner Discord server yesterday, regarding SDF generation in Python. You may find it relevant to this topic, Julien:


@jesgilbert, thanks a lot for this link.
I’ll check this out asap.