# GLSL Camera transformation

so i’m a little lost on the GLSL camera transform space, i’ve written an OSL camera shader with Octane that seems to be fine, but getting a similar port over to GLSL for use in TD is driving me nuts…

OSL:

```
matrix objectTransform = matrix(1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1),
float objectSize = 3.6,
output point pos = P,
output vector dir = I,
output float tMax = 1.0/0.0)
{
    point cubePos = 0;
    // left
    if (u <= 0.5 && v <= 0.5)
    {
        cubePos = point(-0.5, -0.5 + 2 * v, -0.5 + 2 * u);
    }
    // right
    else if (u >= 0.5 && v < 0.5)
    {
        cubePos = point(-0.5 + (u - 0.5) * 2, -0.5 + 2 * v, 0.5);
    }
    // top
    else if (u > 0.5 && v >= 0.5)
    {
        cubePos = point(-0.5 + (u - 0.5) * 2, 0.5, 0.5 - (v - 0.5) * 2);
    }
    else
    {
        tMax = 0;
    }
    matrix objScale = matrix(objectSize);
    matrix finalTrans = objectTransform * objScale;
    point worldPos = transform(finalTrans, cubePos);
    dir = worldPos - P;
}
```

what it’s doing here is simply taking a cube and flattening out its three visible sides into equal quads on render… having a hard time porting that over to GLSL so that my camera renders the flat quads. Not much documentation on using the GLSL stuff with the camera that i can find…

thanks!

Hey,

I’ve never written an OSL shader before, so I apologize if my understanding of the shader you posted is incorrect. What I think it’s doing is basically creating the 3 cube face surfaces, based on the UV position of the current pixel being rendered.

There are two ways to go about doing this in GLSL, depending on what you want to do with this shader in the end. One way is to use just a GLSL TOP, using the input variable ``` vUV ``` in the pixel shader to determine which quadrant you are in, the same way the OSL shader does. The main difference, as I understand OSL, is that an OSL shader creates a surface to be lit at the next stage. A GLSL TOP only has that one stage, so once you’ve decided your side you color the pixel as desired, using parametric rendering algorithms, texturing, etc. From there, it depends what you want to do. Do you want to generate texture coordinates to sample an incoming texture? Do you want to do lighting on the faces? Attached is a simple example of this, but I can expand on it once I know what your next requirement is.
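To make the first approach concrete, here’s a rough sketch of the quadrant test ported to a GLSL TOP pixel shader. Treat it as a starting point rather than the attached example: the `vUV` input and `fragColor` output follow the GLSL TOP conventions as I remember them (check your build’s docs), and the position-as-color output at the end is just a placeholder for whatever texturing or lighting you end up doing.

```glsl
// GLSL TOP pixel shader sketch: pick a cube face from the pixel's UV,
// mirroring the OSL shader's quadrant tests. vUV is provided by the
// GLSL TOP; verify declarations against your build's documentation.
layout(location = 0) out vec4 fragColor;

void main()
{
    float u = vUV.s;
    float v = vUV.t;
    vec3 cubePos = vec3(0.0);
    bool hit = true;

    if (u <= 0.5 && v <= 0.5)        // left face
        cubePos = vec3(-0.5, -0.5 + 2.0 * v, -0.5 + 2.0 * u);
    else if (u >= 0.5 && v < 0.5)    // right face
        cubePos = vec3(-0.5 + (u - 0.5) * 2.0, -0.5 + 2.0 * v, 0.5);
    else if (u > 0.5 && v >= 0.5)    // top face
        cubePos = vec3(-0.5 + (u - 0.5) * 2.0, 0.5, 0.5 - (v - 0.5) * 2.0);
    else
        hit = false;                 // unused upper-left quadrant

    // Placeholder: color by cube-space position; swap in texture
    // sampling or lighting as needed.
    fragColor = hit ? vec4(cubePos + 0.5, 1.0) : vec4(0.0);
}
```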

The other way is rendering a cube generated from a Box SOP. In the vertex shader we can throw away the sides that aren’t facing the camera, and unwrap the sides that are by outputting coordinates in NDC space (normalized device coordinate space). The questions that need answering for this method: how do you decide which face goes to which quadrant? Is the cube rotating, and if so, how do you decide which side goes where when one becomes visible while another becomes hidden? If you want to go this route I’ll need more information.
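For reference, the core of that second approach in the vertex shader looks something like the sketch below. This is a minimal illustration, not the attached example: the `faceID`/`faceUV` attributes are hypothetical names for however you tag the cube’s vertices, and the quadrant layout matches the OSL shader’s (left in the lower-left, right in the lower-right, top in the upper-right).

```glsl
// Vertex shader sketch: remap each wanted cube face to a quadrant of
// NDC space ([-1,1] x [-1,1]) so the render comes out pre-unwrapped.
// faceID and faceUV are hypothetical per-vertex attributes: a face
// index and 0-1 coordinates within that face.
in int faceID;
in vec2 faceUV;

void main()
{
    // Lower-left corner of each face's NDC quadrant.
    vec2 origin;
    if (faceID == 0)      origin = vec2(-1.0, -1.0); // left face
    else if (faceID == 1) origin = vec2( 0.0, -1.0); // right face
    else if (faceID == 2) origin = vec2( 0.0,  0.0); // top face
    else
    {
        // Unwanted faces: push outside the clip volume so they're culled.
        gl_Position = vec4(0.0, 0.0, 2.0, 1.0);
        return;
    }

    // Each quadrant spans 1.0 in NDC, so faceUV maps across it directly.
    gl_Position = vec4(origin + faceUV, 0.0, 1.0);
}
```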

I’ll post an example of the 2nd one in a bit, hope this helps you get started.
unwrapCubeTOP.toe (4.05 KB)

Here is one with the 2nd technique. I’ve assumed you just want to show the 3 positive sides of the cube here, but this can be tweaked. What’s nice about this way is you get to use our lighting system/materials to light the cube as you would normally in the 3D scene, but the results are still unwrapped on output.

I’ll also mention in the upcoming 2018.40000 series of builds there is a built-in Unwrap feature in the Render TOP that will make this easier. You just need to set the texture coordinates of your cube faces to be where you want them output (in 0-1 UV space), and it’ll unwrap the faces based on that.
unwrapCube.toe (8.07 KB)

had a look, yeah, that makes total sense for working in 3d space… let me see if i can better explain the scenario:

this is a 16’ tall “cube” (3 sided) that has a native resolution of 768x768 (64x120 per module), here’s a link to the pixel map we feed into the delivery system and a mock output onto the cube.

there’s two ways of authoring scenes and content for this thing. one way is to print directly to the 2D map, but doing that obviously leaves an awkward seam at the top left if your content isn’t symmetrical to the surrounding quads, like this noise pattern…

the other way would be to project onto a perspective view of the cube at a distance, then “unwrap” the cube back to its 2D pixel-map state… there are a few hacks like corner pinning etc., but i really want the perspective printed onto the cube map as cleanly and efficiently as possible.

here’s the map / template thingie i’ve been working with.
mau5cube.toe (5.19 KB)

Ok right. Ya so the two methods you list are pretty much the same as the two methods I made examples for. In the 2nd one for example you can set a texture as the Projection Map for ‘light1’ and it’ll project onto the cube faces correctly, avoiding seams.

Can you lock the /project1/3D_Cube_Vis/mesh node in your example file? The geometry asset is missing for the cube, but if you lock it it’ll get baked into the .toe file.

Thanks

ah, well… here’s the locked one… also, i kinda cheesed it by hacking the Kantan Mapper… does the same thing i suppose, but, yeah, i’m not a fan of cornerpinning things / warping images.
mau5cube_katan.toe (325 KB)

Here is an example with my 2nd technique merged with your example file. The ‘pre_unwrap_vis’ visualizes the scene as it’s being rendered. The ‘unwrap’ is not a corner pin of that result, but rather another render using the same camera/light/geometry, where the results are projected differently at render-time to flatten them. So you aren’t losing any information to a corner pin operation. If your cube suddenly had a different resolution, you’d just change the resolution of the ‘unwrap’ TOP and you’d still be generating data for your pixels 1:1.

Right now the shader is using the light Projection Map feature to apply the projection onto the cube (TDProjMap() in the pixel shader). If you add more lights it’ll sum the projection maps together, but you can change the code in the pixel shader to do something different, as needed.

There is a visible seam on the ‘post_unwrap_vis’, but this is an artifact of the mipmapping and the fact there is a grey square in the upper left quadrant that those pixels blend with when getting textured onto the cube. It wouldn’t be visible on your actual Cube.
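If that seam ever shows up somewhere it matters, one common workaround (my suggestion, not something the example file does) is to inset the sampling coordinates by half a texel inside each quadrant, so filtering and mip blending can’t pull in pixels from the grey neighbouring region. Names here are hypothetical:

```glsl
// Clamp UVs half a texel inside the quadrant being sampled so that
// filtering/mipmapping can't bleed in from the adjacent quadrant.
// quadMin/quadMax bound the quadrant in 0-1 UV space; texSize is the
// source texture resolution (hypothetical uniform names).
uniform vec2 texSize;

vec4 sampleQuad(sampler2D tex, vec2 uv, vec2 quadMin, vec2 quadMax)
{
    vec2 halfTexel = 0.5 / texSize;
    return texture(tex, clamp(uv, quadMin + halfTexel, quadMax - halfTexel));
}
```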

Let me know if you have any questions.
unwrapCubeMerged.24.toe (8.98 KB)

ah ha. that’s perfect. thanks!