RFE: Raytraced Reflections

TLDR: wen raytracing?

I know reflections have been talked about in a bunch of threads, but since I don’t see an actual RFE I figured I’d make one and hopefully get a temperature reading.

I’m wondering if built-in and simple-to-get-going raytraced reflections are at all in the cards for the future of TD. What I’m imagining is a new tab on a PBR MAT where you can enable reflectivity and set a few things like how many bounces, ray density, etc.

Now, I’m a total noob about all this stuff (which is one of the reasons I love TD: it allows me to ease into many different areas of visual programming without having to dive off the deep end), but I’m assuming that the switch to Vulkan opens up the door to using RTX features.

TLDR at the bottom :sweat_smile:

I can’t speak to RTX and Vulkan specifically, but I do want to throw in my two cents on the current system vs ray tracing a bit.

In the current system, all geometry, lights, shaders, and cameras get uploaded to the GPU.

Then the render system (Render TOP) transforms geometry into screen space and clips/discards any geometry outside the viewable region. After this step, the GPU is only aware of the triangles still visible on screen, plus the cameras, shaders, lights, etc.

So, at this point, rendering in the fragment shader is “pretty fast” because it’s dealing only with what’s right there in front of us, or nearby in screen space. We can calculate AO this way because it’s a screen-space technique: it uses other info already on the screen and nothing else.
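To make the screen-space idea concrete, here’s a toy sketch (hypothetical, not TD’s actual shader) of depth-buffer-only ambient occlusion: each pixel’s occlusion is estimated purely from how many nearby pixels in the depth buffer sit closer to the camera, i.e. from info already on screen.

```python
# Toy screen-space AO: estimate occlusion for pixel (x, y) using only the
# depth buffer, by counting how many neighbors are closer to the camera.

def ssao(depth, x, y, radius=1, bias=0.05):
    """Fraction of neighbors NOT occluding (x, y): 1.0 = fully open."""
    h, w = len(depth), len(depth[0])
    occluded = total = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == dy == 0:
                continue
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h:
                total += 1
                if depth[ny][nx] < depth[y][x] - bias:  # neighbor is closer
                    occluded += 1
    return 1.0 - occluded / total

# A tiny depth buffer: one deep pixel (0.9) surrounded by near geometry (0.2)
depth = [[0.2, 0.2, 0.2],
         [0.2, 0.9, 0.2],
         [0.2, 0.2, 0.2]]
print(ssao(depth, 1, 1))  # center pixel fully occluded -> 0.0
print(ssao(depth, 0, 0))  # corner pixel fully open -> 1.0
```

The key point: nothing behind the camera (or off screen) can ever contribute here, which is exactly the limitation described above.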

However, if you want an object to reflect something that is behind the camera, that data is gone; in fact, it was never available to the shader in a structure useful for ray tracing in the first place.

The ray tracing side I know less about, but I can give you a very low resolution picture of what happens:

You start by shooting a ray: either a direct camera ray, or a bounce ray looking for secondary lighting information from the scene behind you, above you, etc.

This ray essentially needs to find which triangle in the sea of triangles it will hit (if any), then find where on that triangle it intersects (a ray/triangle intersection test), then query UV information, shader information, and whatever else, then do texture lookups and shading/lighting calculations, and finally write that color back to the originating pixel on your screen.

The challenge, naively speaking, is that the ray has no idea which triangle it wants to check, so it has to loop through every single one and run the expensive ray/triangle intersection test on each, collecting hit information along the way. For big scenes this is obviously a non-starter, as you can have hundreds of thousands of triangles.
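The naive loop above can be sketched in a few lines of Python. This uses the classic Möller–Trumbore ray/triangle intersection test (all helper names here are mine, for illustration):

```python
# Naive ray casting: run a Moller-Trumbore intersection test against every
# triangle in the scene and keep the closest hit. O(n) per ray.

def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def intersect(orig, direc, tri, eps=1e-8):
    """Moller-Trumbore: distance t along the ray, or None on miss."""
    v0, v1, v2 = tri
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direc, e2)
    det = dot(e1, p)
    if abs(det) < eps:            # ray parallel to triangle plane
        return None
    inv = 1.0 / det
    t_vec = sub(orig, v0)
    u = dot(t_vec, p) * inv       # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(t_vec, e1)
    v = dot(direc, q) * inv       # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None

def closest_hit(orig, direc, triangles):
    """Test every triangle, keep the nearest intersection."""
    best = None
    for i, tri in enumerate(triangles):
        t = intersect(orig, direc, tri)
        if t is not None and (best is None or t < best[0]):
            best = (t, i)
    return best

# Ray fired down the -z axis from z=5 at a triangle in the z=0 plane:
tris = [((-1, -1, 0), (1, -1, 0), (0, 1, 0))]
print(closest_hit((0, 0, 5), (0, 0, -1), tris))  # -> (5.0, 0)
```

With the hit’s barycentric coordinates you could then interpolate UVs, normals, etc., as described above, but the `for` loop over all triangles is what kills you at scale.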

A technique used in almost every ray tracer to solve this problem is an acceleration structure (a BVH, etc.). These structures allow the GPU to find which triangle out of maybe hundreds of thousands it wants in only a few steps, relatively speaking. They’re kind of like a really complicated lookup table.
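Here’s a toy sketch of the idea (a minimal median-split BVH, all names hypothetical): triangles are grouped into a tree of axis-aligned bounding boxes, and a ray only descends into boxes it actually hits, pruning most of the scene without testing it.

```python
# A toy BVH: instead of testing a ray against every triangle, test it
# against bounding boxes first and only descend into boxes it hits.

def aabb_hit(orig, direc, lo, hi):
    """Slab test: does the ray intersect the box [lo, hi]?"""
    t_near, t_far = -float('inf'), float('inf')
    for a in range(3):
        if abs(direc[a]) < 1e-12:
            if orig[a] < lo[a] or orig[a] > hi[a]:
                return False
        else:
            t0 = (lo[a] - orig[a]) / direc[a]
            t1 = (hi[a] - orig[a]) / direc[a]
            if t0 > t1:
                t0, t1 = t1, t0
            t_near, t_far = max(t_near, t0), min(t_far, t1)
    return t_near <= t_far and t_far >= 0

def build_bvh(tris, leaf_size=2):
    """Recursively split triangles at the median along the longest axis."""
    lo = tuple(min(v[a] for t in tris for v in t) for a in range(3))
    hi = tuple(max(v[a] for t in tris for v in t) for a in range(3))
    node = {'lo': lo, 'hi': hi}
    if len(tris) <= leaf_size:
        node['tris'] = tris
    else:
        axis = max(range(3), key=lambda a: hi[a] - lo[a])
        tris = sorted(tris, key=lambda t: sum(v[axis] for v in t))
        mid = len(tris) // 2
        node['kids'] = [build_bvh(tris[:mid], leaf_size),
                        build_bvh(tris[mid:], leaf_size)]
    return node

def candidates(node, orig, direc):
    """Collect only triangles whose enclosing boxes the ray hits."""
    if not aabb_hit(orig, direc, node['lo'], node['hi']):
        return []
    if 'tris' in node:
        return list(node['tris'])
    return [t for kid in node['kids'] for t in candidates(kid, orig, direc)]

# 100 small triangles spread along the x axis; the ray reaches only the
# one triangle under it instead of all 100.
tris = [((x, 0, 0), (x + 0.5, 0, 0), (x, 0.5, 0)) for x in range(100)]
root = build_bvh(tris)
cands = candidates(root, (50.1, 0.1, 5.0), (0, 0, -1))
print(len(cands), 'of', len(tris))  # -> 1 of 100
```

Note that `build_bvh` runs up front on the CPU, which is exactly the build cost discussed next.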

However, these structures have to be built before rendering, and that’s quite a CPU-intensive task. You can leave them intact if the scene is static, but if anything moves at all, or geometry deforms, these structures need to be updated/recreated.

There’s all kinds of literature on this topic that is over my head, but that’s the way simplified version.
That said, I have a feeling RTX technology somehow addresses this challenge… Maybe on RTX cards you upload all the geometry, and the specialized hardware assists with rays finding their triangles faster… Maybe… :slight_smile: I’m just guessing at this point.

In addition to the challenge of scene traversal, you also have to shoot a lot of rays to reduce noise in the final render. This takes time, and usually needs to accumulate over several frames/seconds.
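A quick numeric sketch of why accumulation is needed (toy numbers, not a real renderer): averaging N independent noisy samples shrinks the error roughly like 1/sqrt(N), so you need 4x the samples just to halve the noise.

```python
# Averaging noisy per-frame samples of a pixel: error falls off ~1/sqrt(N).
import random

random.seed(1)

def noisy_sample(true_value=0.5, noise=0.3):
    """One 'frame' of a pixel: the true value plus random noise."""
    return true_value + random.uniform(-noise, noise)

errors = {}
for n in (1, 16, 256):
    # average n samples many times and measure the typical remaining error
    trials = [abs(sum(noisy_sample() for _ in range(n)) / n - 0.5)
              for _ in range(200)]
    errors[n] = sum(trials) / len(trials)
    print(f'{n:4d} samples/pixel -> mean error {errors[n]:.4f}')
```

Each 16x jump in sample count only buys about a 4x noise reduction, which is why real-time ray tracers lean so hard on the sampling and denoising tricks below.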

Another hot area of research is low-discrepancy sequences, or in other words, really balanced “noise”: i.e. plain Monte Carlo (white noise) vs. blue noise vs. Halton vs. Sobol, etc. These patterns help ray tracing converge on a less noisy result faster by distributing rays/samples in a more efficient manner.
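To make the “balanced noise” idea concrete, here is the classic Halton sequence, one of the low-discrepancy families mentioned above: a deterministic point set that fills [0,1) more evenly than uniform random numbers.

```python
# Halton low-discrepancy sequence: the i-th sample is the digits of i in
# the given base, mirrored across the decimal point (radical inverse).

def halton(i, base):
    """The i-th element (i >= 1) of the Halton sequence in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

# 2D sample points: base 2 for x, base 3 for y (bases must be coprime).
# Successive points keep landing in the emptiest regions of the square.
points = [(halton(i, 2), halton(i, 3)) for i in range(1, 9)]
for x, y in points:
    print(f'({x:.3f}, {y:.3f})')
```

Feeding rays sample offsets like these, instead of plain `random()`, is one of the ways renderers converge on a clean image with fewer samples.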

Also, we are seeing a rise in denoising technology (we have some in Touch now, and Blender has some too), which, combined with ray tracing, allows better results with far fewer rays.

These are all factors in the ray tracing equation.

All this is to say, ray tracing is not so much a feature as it is an entirely different engine / data pipeline.
So I have a feeling for this to show up in Touch, we’d probably see it as a new set of nodes, or if it was a toggle, it’d be switching to an entirely different engine backend.

My total guess (Derivative, correct me if I’m wrong haha) is that since ray tracing is only somewhat accessible and still quite limited even on very powerful RTX cards, and totally unavailable on lots of other platforms (Mac?), they probably won’t invest many resources into this in the near future; it would be smarter to improve the current rendering system, as it’s “realtime” and accessible to the most users.

The ironic thing about ray tracing is that it’s so much more elegant, and in other ways less complicated… Light transport works much more like it does in real life, and thus the number of screen-space “hacks” we employ to get realistic lighting/shading drops dramatically…

Anyways, like I said, just my two cents. Curious about a temperature reading as well!
