Implementing nDisplay-style “mesh as framebuffer” rendering in TouchDesigner

I’m exploring a way to reproduce Unreal Engine’s nDisplay-style rendering logic inside TouchDesigner, specifically the idea of treating a mesh itself as the display surface rather than projecting onto it after the fact. I know the sweetSpot mapping palette component works conceptually in this direction, but its main downside is the need to render twice due to reprojection.

This nDisplay-style rendering approach is especially relevant for high-resolution curved LED walls, domes/CAVEs, experimental display geometries, and research-oriented custom render pipelines.

My current workflow:
The display surface is baked into a TOP (via a GLSL MAT → Render TOP) where each pixel stores the screen’s world-space position. A GLSL TOP then renders in UV space by sampling this position texture, generating a ray from a defined sweet-spot camera (position + orientation + FOV), and shading the scene content per pixel in a single pass.
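To make the per-texel ray setup concrete, here is a small NumPy stand-in for the GLSL (the function name and the toy 1×2 position bake are mine, not the actual network; orientation and FOV would additionally constrain the frustum, but the direction construction is the core):

```python
import numpy as np

def sweet_spot_rays(positions, cam_pos):
    # positions: (H, W, 3) world-space position bake of the display
    # surface; cam_pos: sweet-spot camera position. Each texel's ray
    # points from the camera through its point on the surface.
    dirs = positions - cam_pos
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    return dirs

# toy 1x2 "position bake": two points on a wall 2 units in front
bake = np.array([[[0.0, 0.0, -2.0], [1.0, 0.0, -2.0]]])
dirs = sweet_spot_rays(bake, cam_pos=np.array([0.0, 0.0, 0.0]))
```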

The main limitation I’m hitting is that the GLSL TOP cannot access the TouchDesigner scene graph or its render buffers. This means I can’t use it directly without an intermediate reprojection pass, which requires a very high resolution to preserve quality while reprojecting.

Are there any plans to expose parts of the renderer or scene data to GLSL TOPs, or to expose parts of the renderer through dummy functions? Even limited access (geometry buffers, depth, lights, etc.) would enable new rendering workflows and custom tools similar to nDisplay-style pipelines, and would do so performantly.

Cool idea - you could do this with the GLSL POP. Its Collisions page gives you hardware ray queries against scene geometry.

Option A: TD handles shading (2 passes)

Setup: a GLSL MAT outputs to multiple color buffers - the UV-space scene render plus the position bake. Feed the scene geometry (with UVs) to the GLSL POP’s Collisions page, and add both TOPs (via Render Select) to its Samplers page.

The GLSL POP samples the position bake for the LED world positions, casts rays with rayQueryEXT, gets the hit UV from the barycentrics, and samples the UV-space render at that UV. TD does all the shading; the ray cast just determines where to sample.
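The barycentric step can be sketched outside GLSL like this (Möller–Trumbore in NumPy; the function name is mine, and on the GLSL side the weights would come back from the ray query rather than being computed by hand):

```python
import numpy as np

def hit_uv(ray_o, ray_d, v0, v1, v2, uv0, uv1, uv2):
    # Moller-Trumbore ray/triangle intersection; on a hit, blend the
    # vertex UVs with the barycentric weights - the same weights the
    # ray query reports for the hit triangle.
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(ray_d, e2)
    det = np.dot(e1, p)
    if abs(det) < 1e-8:          # ray parallel to triangle
        return None
    t = ray_o - v0
    u = np.dot(t, p) / det
    q = np.cross(t, e1)
    v = np.dot(ray_d, q) / det
    if u < 0 or v < 0 or u + v > 1:
        return None              # miss
    return (1 - u - v) * uv0 + u * uv1 + v * uv2
```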

Option B: Shade manually (2 passes)

Setup: just the position bake; no UV-space render needed. Feed the scene geometry to the Collisions page and add the position bake to the Samplers page.

You mentioned you were already shading in your GLSL TOP - same idea, but in the GLSL POP you now have actual scene intersection via rayQueryEXT. Sample the position bake for the LED positions, cast rays, and shade at the hit point yourself.

For true single pass you’d need a C++ plugin.


Having done a fair amount of this stuff - why are you worried about the two passes? It smells a bit like premature optimization.

I’d be very surprised if nDisplay doesn’t use two passes.

Bear in mind - the second pass, just drawing a simple quad mesh, is relatively lightweight compared to drawing the whole scene.

If you’re really determined, take a look at the render mode Derivative added at my request that does a fairly full render but ‘unwrapped’. Possibly there’s some way to tweak that feature by adjusting the UV map of your mesh.

(but I’ll repeat that my instinct is you’re solving a non-problem)

Oh, also there’s a Remap TOP or something like it. In the past I’ve used various tools to make a warp image (ST map is the most common term) with a single render, then locked that map and reused it (make sure you do it in 32-bit float). Does that make sense? Basically you render your second pass with a red/green ramp image and then use the result as the remap input (no ‘render’ in theory).
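If it helps, the locked-map trick boils down to a gather, sketched here in NumPy (nearest-neighbour and a made-up function name; the Remap TOP would do the filtered version of this):

```python
import numpy as np

def apply_stmap(src, stmap):
    # stmap: (H, W, 2) floats in 0..1 - the red/green ramp as seen
    # through the second pass. Each output pixel fetches src at (s, t).
    # Keep the map in 32-bit float or the lookup quantizes visibly.
    h, w = src.shape[:2]
    xs = np.clip(np.rint(stmap[..., 0] * (w - 1)).astype(int), 0, w - 1)
    ys = np.clip(np.rint(stmap[..., 1] * (h - 1)).astype(int), 0, h - 1)
    return src[ys, xs]

# sanity check: an identity map leaves the image untouched
src = np.arange(6.0).reshape(2, 3)
s, t = np.meshgrid(np.linspace(0, 1, 3), np.linspace(0, 1, 2))
out = apply_stmap(src, np.stack([s, t], axis=-1))
```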

I’d point out that almost any TOP you use is probably another ‘pass’ under the hood.

Hello,
Can’t you use these techniques?


Cheers,
Colas


Let me clarify what I mean by passes, because I think this is causing confusion.

My goal is a single real-time, one-to-one-resolution shading pass.
I’m not opposed to preprocessing or baking where it makes sense.

Ray-scene intersections, for example, are cheap to bake if the geometry and the camera/projector transforms are static. Once baked, those intersections don’t need to be recalculated every frame.
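As a toy illustration of that bake/reuse split (all names here are hypothetical - `trace_uv` stands in for whatever performs the actual ray query):

```python
class BakedIntersections:
    # With static geometry and static projector/sweet-spot transforms,
    # the ray/scene hits can be traced once at load time...
    def __init__(self, rays, trace_uv):
        self.uvs = [trace_uv(o, d) for o, d in rays]  # expensive, once

    # ...so each frame only re-shades at the cached hit points.
    def shade(self, sample):
        return [sample(uv) for uv in self.uvs]

baked = BakedIntersections(
    rays=[((0, 0, 0), (0, 0, 1))],
    trace_uv=lambda o, d: (0.5, 0.5),  # dummy intersection result
)
frame = baked.shade(lambda uv: uv[0] + uv[1])  # per-frame shading only
```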

The real issue is shading cost and resolution, not ray queries themselves.

In my case:

  • The projected surfaces have strong foreshortening and sweet-spot dependency
  • I’m dealing with 25+ projectors
  • Because of foreshortening, anamorphic reprojection requires very high render resolution (around 20k in my case) to avoid quality loss

If I do full shading at a high resolution every frame and then reproject, the cost becomes significant. That’s why I’m trying to minimize the very-high-resolution reprojection burden, not necessarily the number of GPU operations in the abstract.
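For a rough sense of why foreshortening inflates the source resolution: to keep about one source texel per output texel, the render along the foreshortened axis has to grow by roughly 1/cos of the grazing angle (a toy model that ignores lens distortion; the numbers below are illustrative, not my actual setup):

```python
import math

def required_source_px(target_px, grazing_deg):
    # 1/cos scaling of texel density under foreshortening
    return target_px / math.cos(math.radians(grazing_deg))
```

At an 80-degree grazing angle, a 4096-pixel-wide target already wants a roughly 23K-wide source, which is the same regime as the ~20K figure above.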

I’m planning to run this fully real-time, so optimization matters.

I’m not claiming that two passes are wrong in general, and I agree systems like nDisplay almost certainly use multiple passes. I also understand the point about premature optimization. The reason I’m pushing on this is the resolution requirement: I’m already close to practical limits, so I’m trying to be deliberate about where the shading work actually happens.

Here is the geometry I’m dealing with right now (you can simplify it as an off-centered cubemap; the 32-bit ST maps in this case are for baked/fast-rendered low-quality previz purposes):

Here is a buffer-light flow with two real-time passes (one bakeable) and a reorderable UV-based output for sweet-spot spherical rendering:


(the autoTestGrid is projected onto a sphere by the custom renderer in this example)

The approaches @choy described using the GLSL POP and rayQueryEXT are interesting, and I’ll dig into Option A if I can. If you have suggestions or see a flaw in my assumptions, I’d appreciate hearing them.