I’m exploring a way to reproduce Unreal Engine’s nDisplay-style rendering logic inside TouchDesigner — specifically the idea of treating a mesh itself as the display surface, rather than projecting onto it after the fact. I know the sweetSpot mapping palette component works conceptually in this direction, but its main downside is the need to render twice because of the reprojection step.
This nDisplay-style approach is especially relevant for high-resolution curved LED walls, domes and CAVEs, experimental display geometries, and research-oriented custom render pipelines.
My current workflow:
The display surface is baked into a TOP (via GLSL MAT → Render TOP), where each pixel stores the screen’s world-space position. A GLSL TOP then renders in UV space by sampling this position texture, generating a ray from a defined sweet-spot camera (position + orientation + FOV), and shading scene content per pixel in a single pass.
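To make the per-pixel ray step concrete, here is a rough sketch of what the GLSL TOP does, assuming the baked position texture is wired into the first input. The uniform names (`uCamPos`, `uCamRot`, `uFovY`) are placeholders I’m inventing for illustration, not real parameters; `sTD2DInputs`, `vUV`, and `TDOutputSwizzle` are the standard GLSL TOP conventions.

```glsl
// Sketch only — uniform names are hypothetical, supplied via custom parameters.
uniform vec3 uCamPos;   // sweet-spot camera world position
uniform mat3 uCamRot;   // world-to-camera rotation basis
uniform float uFovY;    // vertical FOV in radians

out vec4 fragColor;

void main()
{
    // Baked world-space position of this physical screen pixel.
    vec3 surfPos = texture(sTD2DInputs[0], vUV.st).xyz;

    // Ray from the sweet spot through the pixel on the display surface.
    vec3 rayDir = normalize(surfPos - uCamPos);

    // Optionally express the ray in camera space, e.g. to cull pixels
    // outside the sweet-spot frustum.
    vec3 camDir = uCamRot * rayDir;
    vec2 ndc = camDir.xy / (-camDir.z * tan(0.5 * uFovY)); // camDir.z < 0

    // Shade scene content along rayDir here (raymarch, lookup, etc.).
    // Placeholder: visualize the ray direction.
    fragColor = TDOutputSwizzle(vec4(rayDir * 0.5 + 0.5, 1.0));
}
```

The point is that everything after `rayDir` is where real scene shading would need to happen — which is exactly where the limitation below bites.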
The main limitation I’m hitting is that a GLSL TOP cannot access the TouchDesigner scene graph or its render buffers. This means I can’t shade actual scene content there directly; I need an intermediate reprojection pass instead, which requires very high resolution to preserve quality.
Are there any plans to expose parts of the renderer or scene data to GLSL TOPs? Or could parts of the renderer be exposed through stub/dummy functions? Even limited access (geometry buffers, depth, lights, etc.) would enable new rendering workflows and custom tools similar to nDisplay-style pipelines, and do so performantly.

