# Export Stereo Equirectangular pano video from Touchdesigner

Hi guys, my first post here.

I’ve recently been trying to export a sequence of stereo equirectangular images from TouchDesigner, for example to be able to watch a 360 stereo video in Cardboard via YouTube. I’m not talking about real time, just a method that works reasonably fast to export and store the final stereo sequence.

The first idea, rendering two equirectangular images with a Render TOP (Cubemap option) followed by a Projection TOP, with the two cameras separated by, say, 0.65 eye distance, doesn’t work: for 360 stereo, the right camera needs to rotate pivoting around the left, or both left and right cameras need to pivot around the center.
[url]http://paulbourke.net/geometry/transformationprojection/[/url]
[url]http://paulbourke.net/stereographics/stereopanoramic/[/url]
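To make the pivoting geometry concrete, here is a minimal Python sketch of the idea from Bourke’s pages: for each viewing longitude, both eyes sit on a small circle around the rig center, offset tangentially to the view direction, so the pair rotates as a unit. The function name and the `ipd` value (0.065 m) are my own illustration, not anything from the .toe file.

```python
import math

def ods_eye_positions(lon_deg, ipd=0.065):
    """For a viewing longitude (degrees), return (left, right) eye
    positions. Both eyes lie on a circle of radius ipd/2 around the
    rig centre, offset along the tangent to the view direction, so
    the stereo pair rotates with the view instead of staying fixed."""
    lon = math.radians(lon_deg)
    # Tangent: the xz-plane view direction rotated 90 degrees (y is up).
    tangent = (-math.sin(lon), 0.0, math.cos(lon))
    r = ipd / 2.0
    left = (-tangent[0] * r, 0.0, -tangent[2] * r)
    right = (tangent[0] * r, 0.0, tangent[2] * r)
    return left, right
```

A fixed pair of cameras is just this function evaluated at a single longitude; correct stereo needs it re-evaluated per viewing direction, which is exactly why one render pass per eye is not enough.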

In Houdini, with a CVEX camera lens script as explained on [url]cgwiki[/url], it’s not difficult: with a raytrace approach it is much easier to transform/rotate the rays. But with a GLSL rasterizer it’s not trivial.

In the .toe example attached, inside the GLSL container, I tried to tweak the Camera COMP’s Custom Projection parameter, applying the equirectangular formula to map camera-space vertex positions to the projection vector result. The equirectangular equations for converting an xyz coordinate to longitude (x) and latitude (y) work (with lots of artifacts at the edges of the image where 0 degrees wraps to 360), but when I try to rotate the camera space based on longitude the result is a blur of polygons. Here is the code so far, and some links in case someone more advanced with GLSL or C++ has any ideas.

[code]vec4 TDSOPToProj(vec4 p)
{
	vec4 projP = uTDMat.worldCamProj * p;
	return projP;
}

vec4 TDCamToProj(vec4 p)
{
	vec4 q = vec4(0.0);
	//q = uTDMat.camInverse * p; // Real world position
	//q = uTDMat.worldCam * q;   // Back to camera space
	q = p; // camera xyz space

	float PI = 3.14159265359;
	float r = length(q.xyz); // length of the position only, not the w component
	float lonxz = atan(q.z, q.x);

	/* My attempt to rotate the camera based on longitude
	   (note: GLSL sin/cos expect radians, so feeding them degrees
	   here is probably part of the problem)
	float deg = degrees(lonxz);
	mat4 rotateY = mat4(cos(deg), 0, sin(deg), 0,
	                    0, 1, 0, 0,
	                    -sin(deg), 0, cos(deg), 0,
	                    0, 0, 0, 1);
	q = rotateY * q;
	*/

	float laty = asin(q.y / r) / (PI / 2.0);
	lonxz = lonxz / PI;
	vec4 projP = vec4(lonxz, laty, 0.0, 1.0);
	//projP = uTDMat.proj * q; // The default return result
	return projP;
}[/code]
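For anyone wanting to poke at the math outside the shader, this is the same xyz-to-NDC mapping as in TDCamToProj above, as a small Python sketch (the function name is mine). It also shows where the edge artifacts come from: a triangle whose vertices straddle the ±180° seam gets vertex longitudes near +1 and −1, so the rasterizer interpolates it across the whole image width.

```python
import math

def cam_to_equirect_ndc(x, y, z):
    """Map a camera-space position to equirectangular NDC,
    mirroring the shader: longitude from atan2(z, x) scaled to
    [-1, 1], latitude from asin(y / r) scaled to [-1, 1]."""
    r = math.sqrt(x * x + y * y + z * z)
    lon = math.atan2(z, x) / math.pi        # [-1, 1]
    lat = math.asin(y / r) / (math.pi / 2)  # [-1, 1]
    return lon, lat
```

Two vertices at z = +eps and z = −eps on the −x side land on opposite edges of the image, which is exactly the seam artifact described above; a raytracer never sees this because it works per pixel, not per vertex.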

I read that not only vertex and fragment shaders but also a geometry shader would be needed, to subdivide the geometry at render time; this GLSL material would then have to be applied to everything that needs to be rendered.

Then I tried something else (in the .toe example, inside the 6_CUBEMAPS container): simulating this rotation for the right eye (placing the left camera at the center) by rendering six cube maps from correct right-eye camera rotations and then using a Cube Map TOP plus Projection TOP to get the equirectangular image. The problem is that there are significant stitches between the views.
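A rough sketch of why the stitches appear, assuming the setup described (function name and `ipd` value are my own illustration): each face’s right-eye camera is offset tangentially for *its* face direction, so neighbouring faces are captured from slightly different viewpoints, and the Projection TOP stitch shows the parallax mismatch at the face borders.

```python
import math

def right_eye_rig(ipd=0.065):
    """One right-eye camera per horizontal cube face (yaw in
    degrees). Each camera gets the tangential eye offset for its
    own face direction, so neighbouring faces see the scene from
    points roughly ipd / sqrt(2) apart -- the parallax mismatch
    that shows up as stitches at the face borders."""
    rig = []
    for yaw in (0, 90, 180, 270):
        a = math.radians(yaw)
        pos = (-math.sin(a) * ipd / 2, 0.0, math.cos(a) * ipd / 2)
        rig.append({"yaw": yaw, "position": pos})
    return rig
```

The up and down faces are even worse off: there is no single correct eye offset for a whole vertical face, since the offset should keep changing with longitude across it.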

Another approach I have tried (the last container in the .toe attachment, 4 EQUI_BLEND), based on this method to export stereo 360 panoramas from Unreal,
[url]Stereo Pano Camera Tutorial for UE4 - YouTube[/url]
is to render four equirectangular images from the right eye, each with a 90-degree rotation, and then blend between them with ramps. This is so far the best result, although there are still some noticeable stitches in between.
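For reference, the ramp blending can be sketched as per-column weights over longitude: each of the four renders gets a triangular weight peaking at its own yaw and falling to zero at its neighbours’ yaws. This is only my reading of the technique; the function name and the normalisation are my own.

```python
def blend_weights(u, yaws=(0, 90, 180, 270)):
    """Per-column weights for blending N equirectangular renders.
    u is the horizontal texture coordinate in [0, 1), i.e.
    longitude in [0, 360). Each render gets a triangular (ramp)
    weight peaking at its own yaw, zero at the neighbouring yaws."""
    lon = u * 360.0
    span = 360.0 / len(yaws)  # 90 degrees between renders
    weights = []
    for yaw in yaws:
        # Shortest angular distance from this column to the render's yaw.
        d = abs((lon - yaw + 180.0) % 360.0 - 180.0)
        weights.append(max(0.0, 1.0 - d / span))
    total = sum(weights)
    return [w / total for w in weights]
```

Halfway between two yaws each render contributes 50%, which is where the remaining stitches live: the two renders being averaged there were taken from eye positions 90 degrees apart.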

I guess the more renders done with this method, covering more degrees, the smoother the result. Ideally we would need a render per degree, 360 renders in total (plus 90 degrees up and down), for the most accurate, smooth result.
There must be a way to do this without eating the RAM alive but that’s as far as I have gone at the moment. Any ideas more than welcome!

Thanks,
David
stereoPano.toe (12 KB)

Hey There!

I have been meaning to respond to this post for a while as I read it when I embarked on the same quest to render “true” stereo equirectangular panoramas from touch.

I have come up with a technique that I think really helps this subject along, in the vein of Unreal’s Kite and Lightning pano render plugin, and along the lines of David’s final conjecture in his post: a render per degree may be what is needed.

I have a big annotation text DAT in the file which should explain quite a bit.

Suffice it to say, there’s no way (right now) to get a reasonably accurate stereo pano without multiple cameras (lots of them), so this method requires multiple frames to accumulate a single image, not unlike a nodal camera capture.
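A minimal sketch of that accumulation idea (my own illustration, not the actual network): render one narrow vertical slice per frame, with the camera yawed to that slice’s longitude (and the per-direction eye offset applied), and paste the slice’s columns into the output image; after N frames the full equirectangular frame is assembled.

```python
def slice_schedule(width, n_slices=360):
    """Yield (frame, yaw_deg, x0, x1) for accumulating one
    equirectangular frame from n_slices narrow renders: on each
    frame, render with the camera yawed to the slice centre and
    copy output columns x0..x1 from that render."""
    cols = width / n_slices
    for i in range(n_slices):
        yaw = (i + 0.5) * 360.0 / n_slices
        x0 = int(round(i * cols))
        x1 = int(round((i + 1) * cols))
        yield i, yaw, x0, x1
```

The trade-off is time rather than RAM: only one render target is alive at a time, but each output frame costs n_slices renders, which is why this feels like a nodal capture rather than a real-time pass.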

There is also an Oculus Rift setup within, with multiple modes as described in the annotation DAT. This is useful for comparing viewing a pano versus actual real-time VR rendering.

Sorry the zip is kind of big; there are images in there to compare things against. I will likely move this sort of thing to GitHub eventually, like all you other awesome Touch wizards.

Hope this is of interest. It’s just a start really, very unoptimized, but it should address many of David’s initial issues in creating these kinds of images.

Cheers
P
PsystemPanoRender.zip (13.1 MB)


Hi!

Following up on this old post: how does your method compare to Unity’s 360 stereo video capture system, based on Google’s Omni-directional Stereo (ODS)?
[url]https://blogs.unity3d.com/2018/01/26/stereo-360-image-and-video-capture/[/url]

I am working on a project where I need to render/export a stereo equirectangular video with alpha from TD, to create a sound-generated geometric layer on top of a captured/stitched 360 video.
Everything pre-rendered.

What did you guys end up using to get the best stereo 360 video out of TD?

Thanks,

Ben


Has anyone solved this?
To me it also looks like the only proper option would be to write a custom projection GLSL DAT for the Camera COMP, using the Google ODS approach mentioned above?

A proper native workflow for this would be really nice, TD Team!