Please post on my RFE if you’d like this to be added as an official feature:
derivative.ca/Forum/viewtopi … =17&t=9582
So: many systems either render using unwrapped UVs, or let you ‘bake’ your feeds into a flat movie. The common approach is to unwrap the geometry and use its UV coordinates. Why render to a UV map?
- A single-viewpoint render is limited by its angle: only part of the geometry is visible, and pixel density varies across the surface. This method lets you define the rendering resolution through the UV mapping instead.
- If some of your content was produced by another artist or method, you can render to a map and use simple video operations to transition or mix between your content and theirs.
- For depth illusions in projection mapping (trompe l’oeil), you can make the best-quality intermediate render from a specific viewpoint, then map it onto the object for projection.
- Some systems, like pixel-based LED fixtures, are inherently a pixel map; you can render to that map from a physical representation of the LED structure/object.
- You can use TouchDesigner as production software, delivering baked renders for use in Touch or other software.
The key is to ‘stash’ the UV coordinates you need in a custom vertex attribute. You can then re-texture the geometry and draw it any way you like, as normal (the Texture SOP sets UV coordinates), without losing the stashed map.
When you render, you need a custom vertex shader. A starting point can be generated from a Phong MAT or PBR MAT. First make the render look as close to your final product as you can: enabling some render features changes the generated shader, so get close enough that all that’s left is tweaking parameters and light position/angle.
Once you have the GLSL MAT, apply your textures and everything else you need, and make the render using it instead of the donor Phong or PBR MAT.
Now add near the top of your custom vertex shader:
// Use the custom attribute where we stashed the original UV coordinates
in vec3 uvMap;
And at the very end of your shader, right before the closing squiggle } :
// Leave the vVert calculations alone, but change the output position!
gl_Position = vec4((uvMap.st * 2.0) - 1.0, 0.0, 1.0);
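Put together, the modified vertex shader looks something like this minimal sketch. TDDeform(), TDDeformNorm() and TDWorldToProj() are the helpers TouchDesigner’s generated Phong/PBR shaders use, but the exact attribute names and Vertex-block fields depend on your generated shader and version, so treat this as illustrative:

```glsl
in vec3 P;       // built-in vertex position attribute
in vec3 N;       // built-in vertex normal attribute
in vec3 uvMap;   // custom attribute holding the stashed UV coordinates

out Vertex {
    vec3 worldSpacePos;    // consumed by the fragment shader for lighting
    vec3 worldSpaceNorm;
} oVert;

void main()
{
    // Keep the usual world-space calculations so lighting is unchanged.
    vec4 worldSpacePos = TDDeform(P);
    oVert.worldSpacePos = worldSpacePos.xyz;
    oVert.worldSpaceNorm = TDDeformNorm(N);

    // Normally this would be: gl_Position = TDWorldToProj(worldSpacePos);
    // Instead, place the vertex at its UV location in clip space:
    // uv in [0, 1] maps to [-1, 1], depth 0, w = 1.
    gl_Position = vec4((uvMap.st * 2.0) - 1.0, 0.0, 1.0);
}
```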
How it works:
TouchDesigner’s fragment shaders calculate all lighting and shading from the interpolated vVert structure, effectively ignoring gl_Position; but the rasterizer still uses gl_Position to place each vertex on screen. Overriding gl_Position with the stashed UV coordinates (remapped from the [0, 1] UV range to the [-1, 1] clip-space range) draws the geometry flattened into its UV layout, while the shading is still computed as if it sat in its original 3D position.
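To illustrate why this works, here is a sketch of the fragment side, which needs no changes: lighting reads the interpolated world-space data, not the screen position. The uniform uLightDir and the Vertex-block field names here are illustrative, not from the original patch:

```glsl
uniform vec3 uLightDir;   // illustrative light direction uniform

in Vertex {
    vec3 worldSpacePos;
    vec3 worldSpaceNorm;
} iVert;

out vec4 fragColor;

void main()
{
    // Simple diffuse term from world-space data: shading looks as if
    // the surface were still in its original 3D position, even though
    // it is rasterized flat in UV space.
    vec3 n = normalize(iVert.worldSpaceNorm);
    float diff = max(dot(n, normalize(uLightDir)), 0.0);
    fragColor = vec4(vec3(diff), 1.0);
}
```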
It’s a hack (which is why I’d like it built into TouchDesigner) that gets the best of both worlds at the cost of a little GLSL code (not that it was that simple to work out!).
Check it out.
UV unwrapped rendering.zip (11 MB)