Techniques for Z displacement

Hello Everybody,

I’m trying to make a luma-based depth displacement. I’ve kinda made one, and it kinda works…
(see attached)
My main question is why my framerate drops so considerably when the resolution of the grid and movie is increased (they’re linked through the constant DIM parameter).
My machine is not high end but is recent enough and should be fully capable of this task. (MSI GP70 2QE)

My second question, I think, is how I could draw this geometry with different primitives, e.g. triangles or points.

Third… *Actually I might save this one for later…

Coming from a Jitter background I’ve probably gone about this all wrong and will need to start again, but please do have a look, and if anyone would be kind enough to point me in a more productive direction it would be much appreciated…

Thanks in advance,

DP
Mesh Displace.toe (7.32 KB)

Hi DP,

Nice work so far!

The framerate drop isn’t being caused by your GPU; it’s actually the result of a CPU bottleneck. If I increase DIMS to 0.3 and then middle-mouse-click on the DAT to SOP, the info panel shows that the OP takes ~13 ms of CPU cook time. 60 fps translates to ~16 ms per frame, and when you account for everything else that Touch needs to do to render the frame, that 13 ms is too long.

A little bit of math explains why the frame rate drops so rapidly as the resolution of the grid increases. To achieve the depth displacement effect, we’re displacing every point in the grid, and the number of points is equal to the number of rows times the number of columns. For example, a 10x10 grid has 100 points. If we add one more row to make it an 11x10 grid, it now has 110 points. Notice how increasing the row count by one increased the point count by ten. This effect becomes even more pronounced as the numbers get larger: adding a row to a 200x200 grid adds 200 new points to transform! If we wanted a grid that represented a full-sized movie (with a resolution of 1280x720), we would need to transform 921,600 points every frame! Doing this is computationally expensive because the CPU has to process each calculation one at a time.
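As a rough formula, with $R$ rows and $C$ columns (and assuming cook time scales roughly linearly with point count):

$$N = R \times C, \qquad t_{\text{cook}} \propto N$$

So doubling both the row and column counts quadruples the per-frame CPU work.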

There are a couple of roads you can go down to speed things up. One suggestion is to use CHOPs over DATs whenever possible. Matthew has written a great tutorial on z-displacement in TouchDesigner that I recommend you check out: https://matthewragan.com/2014/04/27/inspired-by-rutt-etra-touchdesigner/. The network he builds is very similar to the one you built, but all of the math and surface operations in his implementation are powered by CHOPs instead of DATs.

Another suggestion is to investigate moving some of these calculations onto the GPU. Instancing is one powerful technique you could use to really speed things up. Attached is an example network with a similar flow to the one you built, but it uses instancing instead. A Movie In TOP is converted to monochrome and then into a CHOP. Instead of directly transforming the geometry of a Grid SOP, though, this information is used to instance spheres. This achieves the ‘points’ effect you’re looking for.

Hope that helps! What was your third question?

Nic
mesh_displace_instancing.toe (6.39 KB)

Also have a look at Elburz’s shader example:
https://github.com/nVoid/rutt-etra-TouchDesigner

Another way to do this with CHOP math:
https://matthewragan.com/2014/04/27/inspired-by-rutt-etra-touchdesigner/

Hello,
Confounded newb here.
I’ve taught myself Rhino and taught myself Grasshopper (with much help from online communities and resources), but wow, getting something to happen in TouchDesigner is tough. Last time I invented a project to learn (pulsing a sine wave through a 3D model) I ended up bailing for Grasshopper, which defeated the point of learning TD (though I got what I wanted design-wise).

Again, I’m new to the forum and its etiquette, so apologies for the blog/hijacking, but I decided to set another project and try to get into TD again: import a fragment of a point cloud and try a simple single-axis displacement on the points. I’d love to make a normalized slider (0 to tz, say) and slide from flat to full 3D, or just flatten the damn thing, but I can’t even get the transform to do anything except +/- all points on the z.

Any suggestions on how to get into the ‘head space’ of TD? I can get my points in and render them, but after that I’m lost. Lurk more? Do some rudiments? I’m fascinated by what others have done and would love to have this be part of my creative palette… but I’m pretty stuck without setting up a slow, long-term study course. Any thoughts appreciated, and again, apologies.

There are lots of learning resources these days.

Take a look at Elburz’s book:
http://book.nvoid.com/

Derivative Workshop Videos:
https://www.derivative.ca/Education/WorkshopVideoPrograms.asp

Derivative Learning Materials:
https://www.derivative.ca/Education/ResourcesLearnTeach.asp

My Courses and Tutorials:
https://matthewragan.com/teaching-resources/touchdesigner/

I suspect that digging through some of these fundamentals will help you start thinking like TouchDesigner; then it’ll seem obvious. If you have a project file to post, folks are also pretty good about giving you some pointers that way as well.

If you’re just looking for straight examples of small concepts, check here:
https://github.com/raganmd/TD-Examples

and here specifically for small examples of single item concepts:
https://github.com/raganmd/TD-Examples/tree/master/ragan/forum_and_fb_example_files/tox_files

Thanks for posting the resources!
Looks like long-term study it’s going to be… (!)

Can’t even post a project, as it’s just getting the initial xyz point info into TD, which I somehow managed from my point cloud fragment (I do some heritage/scanning work).

Again, I appreciate your sharing and will give the links some time.

thanks.

j

There are lots of ways to think about encoding data for Touch. Do you have a transport format that you prefer?

I’ve gotten point cloud data into Touch from image files (where the RGB value of a pixel is the xyz coordinate of a point), with XML (this takes a bit of parsing, but you can construct a point cloud this way with the Add SOP), or with B-CHAN data (a table or array can be imported as a CHOP file, and you can construct geometry from there).

What were you thinking about as a data format / structure as your starting point?

lol, spotty internet ate my post…

I have no preferences, just whatever is currently possible to get data in and make a project happen. I save the cloud files as .txt files, which gives space-separated columns and rows containing xyz, rgb, and some other fields. So: in as text, turn it into a table, select xyz, add an Out, and then into particles, since the DAT to SOP gives an option for particles.

Using your reference tutorials, I can now do some simple transformations on the z, but only translating the whole set together up or down… so far. (For some reason, I get no menu options if I try to drag, say, an LFO into a transform, so I have to use a text reference. Maybe I need to reinstall?) I watched your first Arizona class video; very helpful.

I have a couple of avenues I want to look into for this test project. One is moving/vector/animation between two sets of points (I already made a cloud copy with all z data at zero; had to go out to Excel to help do that… is there an evaluation expression to change all the cells in a row to zero??). But what I want is something like an LFO, 0 to 1, to use as a multiplier for the points’ z data. I’ll try making that, but will probably stumble on the script/math expression to make it work. A third option would be to explore using the particles as particles and give them some gravity, perhaps.

So those are the options for the next round…
Will report back.
Thanks again for sharing your videos.

One thing to keep in mind is that whenever possible you’ll want to do math operations in CHOPs or TOPs. Doing your math as a loop on a table isn’t going to yield fast results.

Wow.
Literally just built this in 20 seconds with the same text-file point cloud fragment in Grasshopper in Rhino.

a.jpg

b.jpg

Frustrating, in a way.
Back to the woodshed.

The Add SOP is a fast way to convert a list of numbers into geometry:

raw_geometry.PNG

But the question really becomes: how do you render it? Here are two different looks at that question. In base_point_cloud it’s simply a matter of converting those points into a line and then rendering the line.

In base_point_instances we convert the table to channel data, then use the channel data to instance a piece of geometry.

base_point_cloud_example.tox (266 KB)

Hi Matthew,

I am a big fan and have learned quite a bit from your tutorials & examples.
I tried these 2 methods with a 4.5-million-point table (urban streetscape 3D scan, LiDAR). My goal is to show points only, not a line, though.
In your 2 solutions, the line geometry works at 60 fps by joining all the points, but the instancing slows down to 4 fps… at 720p.
I think I have tried most possible node solutions at this point and am not managing to run my point cloud at any decent rate, whereas I can run a full mesh with 7 million polys at 25 fps… on the HTC Vive with a GTX 980 card.
Anyhow, I wonder what I am doing wrong. I am interested in showing point color (I am getting RGB values for each point) as well as using lights on the point cloud…
In a nutshell, how do I show points in the best way? Do I really need to make Geo instances? I also posted that elsewhere on the forum, asking if GLSL was the point-cloud solution. jonathank said he got it working in GLSL here: vimeo.com/149623954
So no traditional TD node solution there, you think?
all the best, PG

Yes, if you have >50,000 points, GLSL is the answer to make your point cloud run fast on the GPU, which is made for massively parallel calculations. You’re currently running it on your CPU (by using SOPs), which will be too slow for your case.

Try converting your point cloud to a texture where each pixel’s RGB represents x, y, z.
A shader can then read this texture and place the points accordingly in space.
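As a rough illustration (not the code from the Palette example; the sampler name is mine, and it assumes the TDDeform()/TDWorldToProj() helpers that TouchDesigner declares for GLSL MAT vertex shaders), the vertex shader boils down to something like this:

```glsl
// Minimal sketch of a point-cloud vertex shader for a TouchDesigner GLSL MAT.
// 'sPointPosition' is an assumed sampler name: add it on the MAT's Samplers
// page and point it at the TOP whose pixels encode xyz as rgb.
uniform sampler2D sPointPosition;

void main()
{
    // Each vertex carries a texture coordinate (uv[0]) that tells us which
    // pixel in the position texture belongs to it.
    vec3 pos = texture(sPointPosition, uv[0].st).rgb;

    // TDDeform() applies the Geometry COMP's transforms and returns a
    // world-space position; TDWorldToProj() takes that to projection space.
    gl_Position = TDWorldToProj(TDDeform(pos));
}
```

Because every point is transformed in parallel on the GPU, this scales to millions of points where the SOP approach chokes.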

Then you can do stuff like this in TD:
vimeo.com/95365778

For an example of how to use that technique, see particlesGPU in the Palette, or this thread:
viewtopic.php?f=19&t=6069

Hi priam,

I understand your frustration.

A few things to remember about Touch:

  • COMP viewers for all 3D objects are drawn on the CPU, not the GPU. If you have any of these viewers on, you’ll see a decrease in performance.
  • Z-fighting reduces performance dramatically when instancing. If you’re using complex geometry with lots of intersections and overlaps, you’ll need to scale up your source or scale down your instances to avoid problems with your draw calls.
  • If you’re just after drawing points, you can also use a single texture that’s instanced at your points on a quad; the tricky part will be when your camera moves, and dealing with overlapping points.
  • If your standard for drawing is 7 million points, you’ll have to do it in a vertex shader… which still lives in a traditional set of OPs; you’ll just have to write some shader code to get what you want.

Here’s a re-purposed version of the thread that looked at creating point clouds with the Kinect v2:
https://forum.derivative.ca/t/kinect-point-cloud-texture/5847/1

Here a pixel’s RGB describes the xyz position of a point; this example is 9 million points running at 60 fps on my 970M.

Working with GL requires thinking about the world in a normalized space. It will also require encoding your xyz locations as pixels: one pixel is one point, and the RGB channels for that pixel describe its location. You’ll also need another map to represent color. In your color map, the RGB values for a pixel will represent the point’s color.

In the attached example the displacement map and the color map are the same, though they can easily be different.
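This isn’t the code from the attached file, just a sketch of the two-map idea under TouchDesigner’s GLSL MAT conventions (the sampler names are mine):

```glsl
// ---- vertex shader ----
uniform sampler2D sPosition; // each pixel's rgb = one point's xyz
uniform sampler2D sColor;    // each pixel's rgb = that point's color

out Vertex {
    vec4 color;
} oVert;

void main()
{
    vec2 coord = uv[0].st;                    // which pixel is this vertex?
    vec3 pos = texture(sPosition, coord).rgb; // fetch the point's location
    oVert.color = texture(sColor, coord);     // fetch the point's color
    gl_Position = TDWorldToProj(TDDeform(pos));
}

// ---- pixel shader ----
in Vertex {
    vec4 color;
} iVert;

out vec4 fragColor;

void main()
{
    fragColor = TDOutputSwizzle(iVert.color); // draw the point in its color
}
```

Since the same TOP feeds both samplers here, each point’s color literally is its position, but swapping in a second TOP for color is a one-parameter change.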

I would highly recommend learning some GLSL before tackling this head-on; you’ll quickly find that many alterations you want to make will rely on your understanding of OpenGL. Many operations that feel like they should be straightforward will not always be. That is to say, you’re on the right track, but if you’ve been feeling frustrated up to this point, know that the road ahead is harder still, so be willing to find a zen place in the process.

base_color_as_point_position.tox (4.1 KB)

Here’s one more example so you can see what I mean by encoding position as a texture.

Here a SOP is converted to channel data and then to a texture, which is used by the vertex shader.
base_color_as_point_position_from_geometry.tox (4.61 KB)

Hello Matthew,
Hello Nettoyeur,

Thanks a lot for all this,
Matt, the 2 sample files cleared up a few things for me, as did Nettoyeur’s mention of making the GLSL texture to feed the shader.
I went through the evil space Flame, and I also had the Kinect shader made by Malcolm beforehand, so the pieces are getting there. My first mistake was to make a texture only one line tall and hit a 16xxx limit horizontally, so I’ll make a square texture sized to the square root of my 4 million points. I’ll get on that, see the next steps along the way, and keep in touch.
All the best! P