3D volumetric rendering
I’m not following it all yet, but I really like how you build all the infrastructure with operators super-procedurally; it’s much easier to understand. Two things that may make some of it easier:
In /project1/Dragon_Engine_unlocked you have two Layout TOPs with 20 wires going into each. A lot of the multi-input operators in TD now let you specify inputs with a parameter expression, so if in their TOP parameter you put renderpass[1-39:2] and renderpass[2-40:2], you will get all the nodes you want without doing any hand-wiring.
Also, in the 40 Camera COMP replicants (item1-item40), you can put a Render Pass TOP in each, and fetch from the previous itemxx by putting an expression in the Render/Render Pass TOP parameter: op('…/item' + str(parent().digits - 1) + '/renderpass'). Then your Layout TOPs can be set to item[1-39:2]/renderpass.
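For anyone new to TD's pattern matching, here is a rough plain-Python sketch (runnable outside TouchDesigner, names illustrative) of what a pattern like renderpass[1-39:2] expands to: a numeric suffix range with a stride.

```python
# Illustrative sketch of TD's [start-stop:step] pattern expansion;
# not TouchDesigner's actual matcher, just the range it resolves to.
def expand_pattern(base, start, stop, step):
    """Return operator names base1, base3, ... for [start-stop:step]."""
    return [f'{base}{i}' for i in range(start, stop + 1, step)]

odd = expand_pattern('renderpass', 1, 39, 2)   # renderpass1, renderpass3, ...
even = expand_pattern('renderpass', 2, 40, 2)  # renderpass2, renderpass4, ...
```

Each Layout TOP then receives 20 inputs from one expression instead of 20 wires.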
Nice! This is definitely the best “RenderPass Sliced” version of a volumetric setup that I’ve seen.
Just curious, have you tried messing with the RayTK “Volumetric Sampling” PointMapRender setup? It does require using RayTK Operators to generate the content, but there are RayTK equivalents for pretty much every Generator SOP, plus a whole bunch more. And it’s exponentially more performant than any SOP / RenderPass / Camera Slicing method like this will ever be since it’s all turned into a GLSL Shader. https://derivative.ca/community-post/tutorial/raytk-v014-volumetric-sampling-tutorial/64950
I think it was used on this project https://www.nardulistudio.com/virtualsky
And I just used it recently on a much lower res volume sculpture permanently installed in Chicago, but it would’ve run on a volume 1000x bigger without breaking a sweat, and it’s running on a very medium-spec computer. https://www.instagram.com/tv/CYGi2YbLoMo/
@Winfred_Nak Great work here, thanks for sharing!
@greg Thanks, I know that with Python I can reference nodes together; I did that in another version. I shared this one so the community can build on it.
@Peeet Thanks, I have tried RayTK but a normal render setup gives me more possibilities regarding content.
I made this patch for a specific type of 3D volumetric display, the LedPulse Dragon-O, which, unlike other volumetric displays, has an organic placement of the LEDs: there is an offset in x and y on every layer (layers 1-4, then repeating) instead of a straight grid. Because of the offset, they can pack a higher virtual resolution (120x120 pixels on 3x3 meters) into a lower-resolution video signal (60x60 pixels) which drives the LEDs through an LED controller. So rendering only on the spots where the physical LEDs are makes it possible to pack 2 virtual layers into 1 video layer.
This gives a more accurate picture of the shown content.
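To illustrate the packing idea, here is a minimal Python sketch. It is not the actual LedPulse mapping (the real per-layer offsets are hardware-specific); it just shows how two virtual layers with an x/y offset can share one lower-resolution video layer by sampling only the positions where physical LEDs sit.

```python
# Hypothetical example: pack a 120x120 virtual layer into a 60x60 video
# layer by sampling every other virtual pixel, shifted by that layer's
# (dx, dy) offset. Two offsets -> two virtual layers in one video layer.
def pack_offset_layer(virtual, offset):
    """virtual: 120x120 grid (list of rows); offset: (dx, dy) in virtual
    pixels. Returns a 60x60 grid of the LED positions for this layer."""
    dx, dy = offset
    return [[virtual[2 * y + dy][2 * x + dx]
             for x in range(60)] for y in range(60)]

virtual = [[(x, y) for x in range(120)] for y in range(120)]
layer_a = pack_offset_layer(virtual, (0, 0))  # on-grid positions
layer_b = pack_offset_layer(virtual, (1, 1))  # offset positions
```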
The engine slices the picture; by making the slices thinner and the render resolution higher, you can play with the accuracy of the model. I even went up to 480x480 pixels to get a more accurate representation of the physical LED placement. That many render passes is only possible at these low resolutions; the other bottleneck is then the SOP (CPU) side. For my next version I am trying to accept point clouds so everything stays in the TOP (GPU) world.
Normally you would need to render from 3 sides to get an accurate model: when rendering a box as a SOP, the faces the camera looks at edge-on disappear. That is why I render twice: once normally and a second time in wireframe.
The filter container filters out all the unwanted positions that don't represent LEDs, and then converts the result to a lower resolution (not possible by simply lowering the resolution). Another solution would be to use a SOP grid that represents the LEDs and then pixel-map every layer, but at higher resolutions this slows down the computer. See example here: 3D ArtNet mapping - #18 by Winfred_Nak