Pure 2D vector (or not) drawing strategies with TD?

Hi there,
for those who follow the (not so interesting) story: I started to work with TD 2 months ago.
I’m trying to recover some of the coding feel & strategies I already have from Max and from procedural/text-based frameworks like Processing and openFrameworks.

Here, I’d like to open a thread about drawing strategies.
I mean very basic 2D drawing.

What would be a good starting point for this in TD?

Procedural drawing can be done in pure Python, and actually I’d like that (and port some Processing/Java sketches to Python as exercises).

I checked Drawing in Python - Noah Norman - YouTube
It links Python scripting and SOPs, but I don’t understand the benefit of instancing this whole big machine instead of drawing directly into a texture with Python (which I don’t know how to do in TD, and which is maybe why I don’t understand the aforementioned “benefit”).

Maybe thinking purely in 2D is short-sighted, since it could just happen at the final stage (projecting 3D to 2D).

I’m just trying to evaluate which direction to take for a project, and what the benefits of TD could be here vs Processing (which I know). I’d like to try to do it with TD because, first, it is another way for me to explore TD, and also because I can imagine some benefits.

But as far as I understand the very interesting tutorial by Noah, we have to create a “kind of” framework first (SOPs, tools, maybe line and ellipse functions, recreating everything) and then use it as a template, only modifying the draw-loop part of the scripts. Am I right?

If this post makes sense to some of you, I’d be very interested in how you’d approach this.
Even if it ends with: “you know Processing, use it for this kind of project.”

Hello Julien,
I sent you a Zoom link; it will be easier to discuss directly in French an experience I had with Alex (Processing to TD).


Hi there, I’m unearthing this thread.
We had a great intro/discussion with @jacqueshoepffner and explored other topics, but not entirely this one, which is why I’m bumping the thread today to get more input.

Actually, I could summarize the thing like this:

I want to draw on the screen using procedural code, including arrays of objects, each object being something to draw and carrying methods to change its features (drawing a line, hiding/showing it, fragmenting it, distorting it).

I often did that in Processing, as it is more natural for procedural things.

Here I was driving sound with a basic visual. Each line was drawn progressively. Each line was an “agent” in my code, and at the main() level I controlled the agents. Each “instance” had features and attributes. The main code controlled their drawing, erased the screen, and modified things according to triggers coming from … elsewhere.

I can understand the SOP way, and it is probably the most efficient and natural way here.
But can we do that without SOPs and instancing in TD ?

I totally get that Processing is totally different:
the draw() loop erasing and drawing the new state, the rendering, how we can manipulate primitives (line(), ellipse()…), etc.

Would you suggest going with Processing and forgetting about TD for this kind of project?
I really would like to use TD here too.

What I imagine here is a system that could control SOPs and a Geometry COMP, using instancing… programmatically.

I mean:

  • I’d have a whole system able to instantiate lines and render them
  • I’d have code that controls all the instances in a procedural way.

Something hybrid.
The master control would be the code, and the primitives, the instancing, the rendering would be SOPs.

Good point: this is a project based on lines only. I mean, I wouldn’t instantiate or draw other kinds of things here.

The code (Python, I… guess) would alter the whole thing using for() loops, altering instance coordinates, width, etc.

I think it would be a big mess to handle things that way. As far as I understand, for instance, I couldn’t change the width of a single line instance, and if I wanted a line drawn between two given points… that would involve rotation and scale instead of “just” doing something like line(point1, point2)…

I’m losing myself in the maze of “I want to do what framework A does inside framework B.”
Actually, it is interesting for me to explore what I can do easily here or more easily there.

Hello, to continue our conversation…
I think the metaphor beneath Processing is very different. Perhaps I’m saying false things, and more clever people can correct me. Beyond the code itself, the underlying object is not the same. The primary object in Processing is the screen, “processed” by Java; every frame you are drawing on this screen, even with the 3D engine, adding points, lines, etc. If you want to erase it, you have to redraw the screen. In TD, you are basically in a virtual 3D space, without any image, and you use cameras, lights, textures, etc. to produce one or several images (like the screen in Processing).
The drawback of Processing is that all the calculation is done on the CPU, but I think that, if the number of instances and the number of pixels are not a problem for you, you would be better off staying in Processing to generate the drawing, using OSC and Spout to communicate with TD.
Using TD to generate drawing is interesting if you need many more objects, as in particle or point-cloud systems. There, the massively parallel computation on the GPU can make the difference, but you have to change a lot in your way of dealing with space and time, and the input/output and buffer/memory handling can be a pain to manage.
Very personally, when I see your work, I think it would be easier to use Processing as the image processor. I would be interested to see a little more of your Processing project, to see how it could be adapted to TD.
Interesting conversation…


Hi Julien,

Have you taken a look at the Script TOP? For 2D operations it’s probably the clearest fit for what you describe, given that you can think of a NumPy array as essentially the same as a Jitter matrix.

" The Script TOP can be used to generate a TOP image using a Python script. The core feature it exposes is copyNumpyArray, which takes a NumPy array as input and fills the TOP with the given image. How the NumPy array is generated is entirely up to the script writer: custom code, OpenCV, etc.

The source can be 3 or 4 channels for copyNumpyArray()."
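For a concrete picture of that workflow, here is a minimal sketch of building such an image with NumPy outside of TD; the `draw_line` helper and the canvas size are my own illustrative choices, not TD API. Inside an actual Script TOP, you would build an array like this in the onCook callback and hand it to `scriptOp.copyNumpyArray(canvas)`.

```python
import numpy as np

def draw_line(img, p1, p2, color=(255, 255, 255, 255)):
    """Naively rasterize a segment into an RGBA array by sampling it
    at roughly pixel resolution (illustrative, not optimized)."""
    (x1, y1), (x2, y2) = p1, p2
    steps = max(abs(x2 - x1), abs(y2 - y1), 1)
    for t in np.linspace(0.0, 1.0, steps + 1):
        x = int(round(x1 + t * (x2 - x1)))
        y = int(round(y1 + t * (y2 - y1)))
        if 0 <= y < img.shape[0] and 0 <= x < img.shape[1]:
            img[y, x] = color

# A fresh 256x256 RGBA "frame", cleared each cycle like Processing's background()
canvas = np.zeros((256, 256, 4), dtype=np.uint8)
draw_line(canvas, (10, 10), (200, 120))
```

The clear-then-redraw pattern each frame would mirror the Processing draw() loop discussed earlier in the thread.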

Hi @jesgilbert and thanks for your answer too.
Yes, I get the Script TOP idea. I haven’t dug in yet, but I understand how it works.
And I think it pushes me toward a decision (for this specific project): probably to use Processing, since I need drawing primitives (basically, something that can just draw a line on the screen from two point coordinates, without reinventing the wheel) and, at the same time, something that can render from those primitives.
I thought (I still think) I could use a hybrid method in TD, using SOPs to draw line instances that I could control through my two vertices per line… which isn’t something I know how to do easily with instancing. I know I could instance using translation, rotation and scale arrays… but I’d need to “convert” from two vertices to translate + rotate + scale to get the same result.

Maybe that is reinventing the wheel for this project, and I’m overthinking it instead of just using P5.

@jacqueshoepffner, yes, it is very interesting to work through each framework and compare them before choosing the one that best fits the project requirements. It also gives me more ideas and inspiration, both for this project and for how to use TD.

thanks a lot

Hello, if you don’t want polylines, you could use a Replicator to draw simple lines from a table (or a procedural calculation, or a Python list). Here is a quick and dirty example.
Another solution, for very efficient computation, would be to use two 32-bit textures, one for each line extremity, and a geometry shader to produce a line from each pair of points. I am working on this at the moment; I will post the result.
lineReplicant.toe (6.2 KB)


Another possible strategy: instanced lines, using a TOP or CHOP for position, scale and rotation.
instancedLines1.toe (4.3 KB)


I just played with it, updating the table dynamically, with a TOP used just to generate random values.

I got it and tested it too.
It would need to be adapted to control lines through their vertices.
Actually, going from p1/p2 to translate/rotate/scale is not hard; what I don’t know is which way to go here for good practice/efficiency.
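For reference, the p1/p2 → translate/rotate/scale conversion is a few lines of math. A sketch, assuming the instanced geometry is a unit-length line along +X centered at the origin, and that rotation is around Z in degrees (the axis and unit conventions may need adjusting to match your instancing setup):

```python
import math

def endpoints_to_trs(p1, p2):
    """Convert a line's two endpoints into translate / rotate / scale
    instancing values for a unit line along +X centered at the origin."""
    (x1, y1), (x2, y2) = p1, p2
    tx, ty = (x1 + x2) / 2.0, (y1 + y2) / 2.0  # translate = midpoint
    dx, dy = x2 - x1, y2 - y1
    rz = math.degrees(math.atan2(dy, dx))      # rotate around Z, in degrees
    sx = math.hypot(dx, dy)                    # scale = segment length
    return (tx, ty), rz, sx

# A line from (0, 0) to (2, 0): midpoint (1, 0), rotation 0, length 2
print(endpoints_to_trs((0, 0), (2, 0)))  # ((1.0, 0.0), 0.0, 2.0)
```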

GLSL seems the way (often, if not always)

In that case, I’d have

  • something that updates my vertices according to some rules (= something that can be triggered and altered, and that writes into a texture, e.g. one pixel = the first vertex, and the pixel next to it on the right = the second vertex)
  • something (GLSL) that processes this texture of vertex coordinates and “converts” it into different textures (translate, rotation, scale)
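The first bullet, one pixel per vertex, could be prototyped CPU-side before moving to GLSL. A hedged sketch (the 1 × 2N layout and float32 format are my assumptions; in TD such an array could be handed to a Script TOP via copyNumpyArray, with a 32-bit float pixel format so coordinates aren’t clamped):

```python
import numpy as np

def lines_to_texture(lines):
    """Pack N lines into a 1 x 2N x 3 float32 array: for line i,
    pixel 2*i holds vertex 1 (x, y, z) and pixel 2*i + 1 holds
    vertex 2 -- the "one pixel = one vertex" layout."""
    tex = np.zeros((1, 2 * len(lines), 3), dtype=np.float32)
    for i, (v1, v2) in enumerate(lines):
        tex[0, 2 * i] = v1
        tex[0, 2 * i + 1] = v2
    return tex

lines = [((0, 0, 0), (1, 1, 0)),
         ((-1, 0, 0), (0, 2, 0))]
tex = lines_to_texture(lines)  # shape (1, 4, 3)
```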

And, to conclude, a proposal with two Noise TOPs, using instancing and a geometry shader.
With a compute shader writing the samplers, it would be possible to build algorithms for more interesting designs.
instancedLineGood.1.toe (7.6 KB)


Thanks Jacques.
Actually, I started the prototype in P5, with a clear intention to port it to TD ASAP.

I understand the lines instancing here.
It can be VERY nice to work like this.
As far as I understand, in my TD port I’d need a part that manages the positions of my lines by writing to a texture directly.
I mean, a Python script that drives the system by writing x, y, z as r, g, b.
I don’t know if that part would hurt performance. I imagine it could work, as I’d have no more than 50 line objects on screen (= 100 x, y, z points to update each cycle).

Actually, all my data structures, which are arrays of objects in Processing, would become textures in TD. That could be OK, BUT with pure OOP in Processing I can store MANY other things in my objects (and so in my arrays of objects), and I don’t know how I could do that in TD. DATs? In that case, I’d have to split my data structures into:

  • my vertex positions (x, y, z) and speeds, as textures
  • my objects’ other features, in Table DATs (one row = one “object”, each column a feature/attribute)

I guess… Maybe I should use only DATs at the beginning, for initial porting purposes, and then split things up.
Not sure.

This just-started project (single album? series? live?) involves inter-relations between Max / Live / TD (or P5),

one informing the other, which informs the first one back, etc.

My routines could easily be ported.

  • line controls are basic (moving vertices 1 & 2, or center position + rotation + size)
  • so are the rules for moving them (driven from outside, or by inner rules)

I check line intersections and keep an intersection-state table up to date, with intersection-point speed vectors, etc.

I just have to figure out how to design the whole network.
I’d like to have a master code I could switch (I LOVE the Switch DAT, Merge, etc., as they can switch parts of code, merge them, and so on).

I mean, as far as I understand (and dream), I could use a very modular code setup (which I miss in Processing, where I have to duplicate my project, change some parts and re-save it, like forking it, OR put ALL the options for every custom method in the same project) and use Merge / Switch etc. to recall some functions in some parts.
I just need to port my Java to Python, especially the OOP part, which doesn’t seem too hard.

You can use a Script TOP (in Python) to write the two TOPs driving the GLSL MAT.
I think it would not differ too much from the P5.js version. You can store extra information for each pixel, with a 2- or 3-row TOP instead of one.
The most elegant and powerful approach (if you need more than 10,000 lines) would be to use a compute shader to generate the TOPs; with feedback, you can easily introduce movement into the system. But, since it’s parallel computing, you would need to rewrite the code completely.
I read that you will use the crossing coordinates to drive something; that’s another story, because you have to check all the possible intersections to know which ones really intersect. That’s linear geometry, with some tricks to simplify it.
I will write a prototype with a compute shader (not for macOS…)


OK, got it. I can “encode” some behaviors driven by state (booleans) or other structs (in the C++ sense) as… pixel values, yes.

I also get the global picture. I won’t have that many lines. Actually, I’m working on a graphical sequencer that drives the sound following visual rules (collision detection, line intersections, etc.), and, for instance, I would need one synth per possible collision. For 4 lines, that means 6 synths, since there are at most (n² − n)/2 intersections for n lines (which is smaller than n², fortunately).
This is only the case where I’d have one synth per intersection.
But I could also have groups of n lines per line (I’d duplicate them), just for the visual part, for instance, and still not have too many synths.
BTW, I’d be curious to instantiate A BIG NUMBER of synths in SuperCollider, able to respond to A BIG NUMBER of intersections.

My routines in Processing, on my i9, seem to work very accurately and fast (a double for loop, with the nested index starting from the outer index, to avoid the n² tests).
I also have a states & intersections table that gives me a way to update coordinates, but also to know when a new intersection appears and when intersections die. These can drive events.
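The double loop mentioned above, where the inner index starts from the outer one, can be sketched like this (a toy illustration of the loop shape, not the actual routine):

```python
def all_pairs(n):
    """Enumerate every unordered pair of line indices once: the inner
    index starts at i + 1, giving (n^2 - n) / 2 tests instead of n^2."""
    pairs = []
    for i in range(n):
        for j in range(i + 1, n):
            pairs.append((i, j))
    return pairs

# 4 lines -> 6 candidate intersections, matching (n^2 - n) / 2
print(len(all_pairs(4)))  # 6
```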

I’m thinking about how to structure the whole thing around Live (just for handling synths and samples; possibly it will end up in Max or SuperCollider), with Max for control and TD for visuals.
Actually, driving the controls from TD could be easier in the end, since TD will know all the states at each frame, and sending data from TD to Live (or to Max) is very fast using Shared Mem.

Finally, I was able to finish a project addressing some of your intentions.
It was in harmony with my current research, so I took some time to do it.
I will prepare a tutorial explaining how it works, because it’s a good way to demonstrate geometry and compute shaders.
Quickly, the principles:

  • two noises driving two series of points (but it could be done in a more procedural way, e.g. with a Script TOP)
  • instancing a line with one of the noises; I use a geometry shader that takes the coordinates of the second noise as a 2D sampler to draw the lines
  • a compute shader takes the two noises as samplers and calculates all the possible crossings (it’s easier to calculate even the redundancy), giving the coordinates of each crossing (real or virtual), with the alpha value flagging the real crossings. The output is a square and, in my project, instances a sphere showing each crossing. With a TOP to CHOP, you can also use the data to send it to Max/MSP.
    I think it’s the most elegant solution.
    testLinesCrossing.mov on Vimeo
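For readers following along, the “real vs virtual crossing” test the compute shader performs can be written out CPU-side. A sketch (my own formulation of standard segment intersection, not the shader code), where the boolean plays the role of the alpha flag:

```python
def segment_crossing(p1, p2, p3, p4):
    """Intersection of the infinite lines through segments (p1, p2) and
    (p3, p4). Returns ((x, y), real) where real is True only when the
    crossing lies inside both segments -- a "real" rather than "virtual"
    crossing. Returns None for parallel lines."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if d == 0:
        return None  # parallel: no single crossing point
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / d
    u = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / d
    x, y = x1 + t * (x2 - x1), y1 + t * (y2 - y1)
    real = 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0
    return (x, y), real

# Two diagonals of a square really cross at the center
print(segment_crossing((0, 0), (2, 2), (0, 2), (2, 0)))  # ((1.0, 1.0), True)
```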

@jacqueshoepffner , very interesting.
And of course, I’m interested to dig it.

As I started the project in a very exploratory way, I’m currently at this point:

  • Max does control (moving the lines themselves or enabling the visual system’s inner rules, showing/hiding lines, setting global visual features, etc.) and fires OSC to Processing,
  • Processing owns all the line objects and all the intersections, in a very dynamic and CPU/memory-efficient way, AND calculates intersections, line angles and more, and, depending on its setup and on the events occurring, feeds Live through MIDI.
  • Live plays sounds and can eventually send some sound descriptors back to Processing, to alter the visuals.

I’d like to use Live in this context as it is easy to recall presets, use complex synths, and use MIDI clips for storing sequences.

Actually, I’m deeply thinking about the global architecture right now for many reasons:

  • trying to avoid inter-application communication where it isn’t necessary
  • keeping my code flexible and in one place
  • trying to have the control side do all the calculations and send data to the visual system & the sound system (rather than control system => visual system => sound system)
  • trying to keep TD in the equation

I was thinking about (OPTION 01):

  • Max owning all data related to lines, intersections + global preset control
  • Processing (or TD) for visuals controlled by Max
  • Live for sound controlled by Max

OR (OPTION 02): just “replacing” Processing with TD (all the calculations would be done in TD, including all of what I call the inner rules)

OR (OPTION 03):

  • Max owning all data related to lines, intersections + global preset control AND handling sound too
  • Processing (or TD) for visuals controlled by Max

OPTION 03 is very interesting for me, as the major information from the lines (the lines themselves, intersections, and more) has to trigger/modulate the sounds, and it would save the communication back. Indeed, if Max owns all the data required for triggering/altering sounds, it can basically drive Max synths directly.

If I went with OPTION 03, I would have 2 sub-options:

I know that having P5/TD send data back to Max might seem bad here, but keeping the line data (I mean all the point positions, intersection features, etc.) close to where the visuals are generated is very natural and easy.

Still thinking…

If I use Max to own all the data + calculations, it would mean using instancing and dynamic arrays. I’d have to port ALL the data storage + calculation routines to Max, and then try to optimize them. It could be done easily and naturally with the JS thing (omg no), with Java (a bit better, as it’s already done like this in P5), with Java plus matrices and Gen (even a bit better, even if still on the CPU — but as discussed, considering the number of lines, the CPU could be fine for that work), or with the shader ideas… BUT compute shaders are not available in Jitter. Of course it could be done with texture feedback and a couple of workarounds… but matrix + CPU (jit.gen) would probably be enough here.