Hello, I'm trying to wrap my brain around some concepts surrounding 360 video / equirectangular projection.
The gist is we're using a 360 camera (Ricoh Theta) to map LEDs in 3D space.
That much is working, though coordinates are based on equirectangular space.
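For reference, here's a minimal sketch (not from the attached file) of how an equirectangular coordinate relates to a direction in 3D space: `u` wraps around as longitude and `v` runs pole to pole as latitude. The y-up, z-forward axis convention is my assumption; match it to whatever your mapping uses.

```python
import math

def equi_uv_to_dir(u, v):
    """Convert equirectangular UV (0..1) to a unit direction vector.

    Assumes u=0.5, v=0.5 is straight ahead (+z), y is up.
    """
    lon = (u - 0.5) * 2.0 * math.pi   # longitude, -pi..pi around the sphere
    lat = (0.5 - v) * math.pi         # latitude, -pi/2 (bottom) .. pi/2 (top)
    return (math.cos(lat) * math.sin(lon),
            math.sin(lat),
            math.cos(lat) * math.cos(lon))
```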
Ideally, if I can make existing flat content conform to that distorted coordinate space, we'll get pseudo-3D output from 2D processes.
I think I have it working by texturing the content onto a sphere, rendering a cube map, then projecting into equirectangular, but I'm not sure if this is accurate.
Here’s that idea combined with an example from the forums that generates a skybox from equirectangular images. Flat content is emulating what an LED chaser animation might look like. 360skybox.toe (6.5 KB)
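One possible shortcut, in case it helps frame the question: the sphere/cubemap round trip can in principle be collapsed into a single per-pixel warp. For each equirectangular output pixel you compute the view direction and project it straight onto the flat content's plane (a gnomonic/rectilinear projection). This is just a sketch of the math, not the attached network; `fov_deg` (how much of the sphere the flat content covers) and the axis convention are assumptions.

```python
import math

def equi_to_flat_uv(u, v, fov_deg=90.0):
    """Map an equirectangular output pixel (u, v in 0..1) to a UV on a
    flat source texture centered straight ahead, or None if the flat
    content isn't visible in that direction.
    """
    # Equirectangular UV -> spherical angles
    lon = (u - 0.5) * 2.0 * math.pi
    lat = (0.5 - v) * math.pi
    # Spherical angles -> 3D direction (y up, z forward)
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    if z <= 0.0:
        return None  # behind the viewer; flat content can't appear here
    # Gnomonic projection onto the z=1 plane, scaled by the half-FOV
    half = math.tan(math.radians(fov_deg) / 2.0)
    s = (x / z) / half
    t = (y / z) / half
    if abs(s) > 1.0 or abs(t) > 1.0:
        return None  # outside the flat content's extent
    return (s * 0.5 + 0.5, t * 0.5 + 0.5)  # texture UV in 0..1
```

In TouchDesigner terms this would be a single GLSL TOP doing the inverse mapping, instead of a render pass, which might be the "more straightforward path" I'm after if the math holds up.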
Is this the correct way to do this? Or does anyone know a more straightforward path?
Any guidance would be appreciated.