I learn A LOT from this forum, and it is very interesting to watch such a dense community.
My question isn’t meant to open Pandora’s box; it’s more an attempt to gather a set of strategies as a reference, even if they are only quoted or described briefly. As far as I know, there are MANY answers depending on the initial purpose…
I have explored growing meshes before, especially with Processing.
I’d like to explore this again and build a kind of prototype in TD.
We could start with a simple idea:
- an element (a cube, a quad…) as a seed,
- at each step, a process takes a face of the element and builds a new element from it (extrusion? whatever),
- the process decides where to grow,
- the process can check whether something is allowed to grow according to rules (is there enough space, etc.).
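The steps above can be sketched in a few lines of plain Python (no TD API; a grid of unit cells stands in for mesh elements, which is my simplification, not the original poster's exact plan):

```python
import random

# Minimal sketch of the seed-and-grow idea on an integer grid.
# Each "element" is a unit cube identified by its (x, y, z) cell;
# growing through a face means occupying the neighbouring cell.
FACES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def grow(steps, seed=(0, 0, 0), rng=random.Random(42)):
    occupied = {seed}
    for _ in range(steps):
        cell = rng.choice(sorted(occupied))   # pick an element of the structure
        face = rng.choice(FACES)              # pick one of its faces
        new = tuple(c + f for c, f in zip(cell, face))
        if new not in occupied:               # rule: is there enough space?
            occupied.add(new)
    return occupied

structure = grow(200)
```

The "rules" step is just an empty-cell test here; any richer predicate (bounding volume, density limit, etc.) would slot into the same `if`.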
Of course, this is ONE idea.
But it could help me figure out what a framework for this might look like, and also start exploring how I can do procedural/generative work from Python driving the nodes, for instance (if that makes sense).
I often see people pre-generating BIG meshes outside of any software like TD, Processing or Max, and then using them with a “custom player” that parses the structure and plays back the growing progression.
What would be your approach to this in TD?
I used to use procedural techniques for this kind of thing.
I imagine Python somewhere.
But how would it interact with the nodes? I mean, referencing etc. seems fine to me, but I’d probably need an asynchronous engine here, something like: every n frames, I trigger a script, it finds a place in the mesh, and then sends all the information to a DAT used as point storage? Or it writes to a texture, for a GLSL way of processing points and maybe drawing too?
Any ideas would help light my way here.
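For what it's worth, the "every n frames, find a place and append to a DAT" idea could look roughly like this in plain Python, with the Table DAT stood in by a list of rows (in TD this logic would live in something like an Execute DAT frame callback; the random-offset "find a place" step is only a placeholder):

```python
import random

N = 30                                   # grow every 30 frames
rng = random.Random(1)
point_table = [("x", "y", "z")]          # header row, like a Table DAT

def on_frame(frame):
    if frame % N != 0:
        return
    base = point_table[-1] if len(point_table) > 1 else ("0", "0", "0")
    # "find a place in the mesh": here, just a random offset from the last point
    new = tuple(str(float(c) + rng.uniform(-1, 1)) for c in base)
    point_table.append(new)

for f in range(1, 301):                  # simulate 300 frames
    on_frame(f)
```

The point is the separation of concerns: the per-frame trigger is cheap, and the table is the shared storage that instancing (or a TOP conversion) can read from.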
I had the same question when a musician (Alexandre Augier) asked me to transcribe a Processing project into TD, because Processing was too slow.
Using Python (as in the Script SOP) was not a solution, because it’s too slow and you are limited in the number of objects (particles, vertices, etc.).
There are (for me) two ways to explore what you want to do:
- if you want a truly interactive procedural universe, GLSL is the way. I first use a compute shader (GLSL Multi TOP) to do operations on textures, using a Feedback TOP for the speed and acceleration fields. I then use the textures produced (sometimes recorded as multilayer EXR) to instance objects (particles or geometry): a vertex shader moves the vertices, a geometry shader moves or grows the primitives, and a pixel shader applies the material.
- another way is to build a procedural structure in Houdini, animate it over time, export it as Alembic, import it into TD and play with the time. The heavy geometry computation is pre-made, but you can still change materials and the point of view. Using a geometry shader, you can still move parts of the geometry. You can even (for simple geometry) instance the Alembic and animate it with variations using premade textures.
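To make the feedback part of the first option concrete, here is a rough stand-in for the Feedback TOP + compute shader loop in plain Python, with the position/velocity "textures" as lists (just a sketch of the ping-pong idea for one particle, not actual GLSL):

```python
# Each cook reads the previous frame's buffers and writes new ones,
# like a Feedback TOP feeding a GLSL Multi TOP.
def step(pos, vel, accel, dt=1.0 / 60):
    new_vel = [v + a * dt for v, a in zip(vel, accel)]
    new_pos = [p + v * dt for p, v in zip(pos, new_vel)]
    return new_pos, new_vel        # becomes next frame's input (the feedback)

pos = [0.0, 0.0, 0.0]
vel = [0.0, 0.0, 0.0]
accel = [0.0, -9.8, 0.0]           # a constant acceleration field
for _ in range(60):                # one second at 60 fps
    pos, vel = step(pos, vel, accel)
```

In the shader version the same update runs per pixel, so millions of particles integrate in parallel; the structure of the loop is identical.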
I’m preparing tutorials in French explaining the process, but it takes time to make a good tutorial!
Hope that’s comprehensible.
Say Hello to Alex
And thanks again for your approaches.
Everything is very clear.
If I understand correctly, it would be:
- initial conditions stored, for instance, in the layers of an EXR,
- a more or less generative progression, using Feedback TOPs, GLSL, etc.
The Houdini way is interesting; I took a similar approach with a prototype I did with Gaea (by QuadSpinner) and Max 8.
Basically, I was generating a terrain (actually a block) and playing with some variables (related to terrain erosion, etc.). Once I had found the ones I wanted to play with, I created a file with all the batch commands and batch-rendered all my frames: one mesh per frame. Then I loaded ALL the meshes into a binary structure in Max, which at launch time let me load everything in order to travel through all the meshes, faking a generative animation. Plus, my work integrates disturbances/anomalies/“glitches” at its core, so jumping from one mesh to another like that, with a fragmented timeline, could also be fine.
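The playback part of that workflow (one pre-baked mesh per frame, fragmented timeline with jumps) can be sketched like this, with the meshes replaced by string placeholders (the frame count and jump probability are made-up values, not from the original project):

```python
import random

meshes = {f: f"mesh_{f:04d}" for f in range(100)}   # one baked mesh per frame
rng = random.Random(7)

def playback(n_steps, jump_chance=0.2):
    order, frame = [], 0
    for _ in range(n_steps):
        if rng.random() < jump_chance:
            frame = rng.randrange(len(meshes))   # glitch: jump in the timeline
        else:
            frame = (frame + 1) % len(meshes)    # normal forward travel
        order.append(meshes[frame])
    return order

sequence = playback(50)
```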
It saves the huge amount of time you would otherwise spend on a generative system (they are so deep to explore and control… as I found with the one I used for the.collapse project).
@nettoyeur posted this link yesterday to the TouchDesigner Discord server, regarding SDF generation in Python. You may find it relevant to this topic, Julien:
@jesgilbert, thanks a lot for this link.
I’ll check this out asap.
I’m struggling with the GLSL way. I understand GLSL, but here I’d need a set of nodes related to what I’m trying to do in order to figure out where to go.
The Houdini procedural way is interesting (and playing further with TD after the “pre-rendering” is what I was doing with Gaea, as I briefly wrote), but I don’t have Houdini here and I don’t know if I need a BIG license for that.
Actually, I’m thinking about pre-generating something (maybe this is what you suggested with Houdini, because indeed that part could be tricky, even if I describe it here as “just something easy”) and then progressively deploying it by truncating the set of points. We have an EXR file storing x, y, z (and maybe more) data. We use it to instance our set of vertices. Then we start, and we travel/truncate/read the EXR to progressively bring all the points to their right positions.
OF COURSE (and I discussed this with some mathematician friends who work on data set organisation), the tricky thing here is “how to truncate?”. If the condition is the z-axis, for instance, and I want the structure to grow along a specific vector, the data set SHOULD be sorted in a very specific way… probably VERY hard to implement for custom structures.
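To illustrate the sorting idea: if the growth direction is a single fixed vector, sorting the points by their projection onto it and then revealing a growing prefix is straightforward (a sketch only; as said above, custom growth structures would be much harder):

```python
# Sort the stored points by their projection onto the growth vector,
# then reveal a growing prefix of the sorted list.
def sort_along(points, direction):
    # projection of each point onto the (unnormalised) growth vector
    key = lambda p: sum(c * d for c, d in zip(p, direction))
    return sorted(points, key=key)

points = [(0, 0, 2), (1, 0, 0), (0, 0, 1), (0, 0, 0)]
ordered = sort_along(points, (0, 0, 1))          # grow along +z

def reveal(ordered, t):
    """t in [0, 1]: fraction of the structure grown so far."""
    return ordered[: max(1, int(t * len(ordered)))]
```

For an EXR-backed workflow, the sort would happen once at bake time, so that a simple row/index threshold at playback time gives the truncation for free.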
That way (mine, I mean the one I mention here, not yours) is probably not the right one, I guess.
I’ll try to answer within my possibilities.
– With Point File In, Point Transform and the TOP operators, you can do a lot with point clouds without any coding.
– Concerning GLSL, I recommend you use compute shaders directly; it’s much easier to manipulate textures and point clouds with them.
– With a compute shader you can also truncate in a meaningful way, because you can order the pixels by proximity (the physical place where a value is buffered in the texture is arbitrary: you decide where to write it, whereas with a pixel shader you write where you read). I made a tutorial in French explaining a little bit how to use it (Windows only, on the Derivative website).
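A rough illustration of "ordering the pixels by proximity", here as a greedy nearest-neighbour chain over the points in plain Python rather than a compute shader (the shader version would instead write each value to a chosen buffer location, but the resulting ordering is the same idea):

```python
import math

# Reorder points so that neighbours in the buffer are also neighbours
# in space, which makes prefix-truncation meaningful.
def order_by_proximity(points):
    remaining = list(points)
    chain = [remaining.pop(0)]                 # start from the first point
    while remaining:
        last = chain[-1]
        nearest = min(remaining, key=lambda p: math.dist(last, p))
        remaining.remove(nearest)
        chain.append(nearest)
    return chain

pts = [(0, 0), (5, 5), (1, 0), (5, 6), (2, 0)]
chain = order_by_proximity(pts)
```

This greedy chain is O(n²) and only an approximation of a good spatial ordering, but it shows why a reordered buffer lets a simple index threshold act as a spatially coherent truncation.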
I am preparing a tutorial on reorganising pixels coming from scans. I made a first recording, but I’m not happy with it and have to redo it.
Perhaps we could have a Zoom exchange in French someday about our activities!
I’ll check these out (particularly the tutorial, which I’m currently looking for on Derivative).
And yes, regarding a discussion on Zoom, I’d be happy to do that.
You can contact me by email email@example.com