088: 3D Compositing Techniques

3D Techniques

This is an example file that demonstrates the creation of a multi-pass 3D world with a few layers of compositing that provide a great deal of control over the final look.

This is the final render…

The following archive is the example file upgraded to version 088 build range 50000…

SkySystemExample_088_50000.zip (10.8 MB)

This is the original example file…

SkySystemExample.zip (10.7 MB)

The purpose of this demonstration is to outline a number of compositing techniques, and in addition, to suggest a pipeline that may be used to combine 2D image elements into a more complex 3D composition.

Also included are techniques for generating a real-time cube map for use as an environment map alongside bump mapping, projection-mapping lights to create a glow effect, and an example of the new real-time connection to Adobe Photoshop CS6.

All textures were found online or painted by hand in Photoshop.

Important Note

Use the parameter target button on parameters as you investigate to see which non-default parameters are used. If no parameters appear, then only default values are used.

Clouds - Sky Dome

There are many approaches to creating a sky that envelops a 3D world. In this case a large half-sphere is placed around the entire scene and textured with an image of a cloudy sky downloaded from Google Images.

Sky Dome Geometry Construction

In simple terms, a sky dome is a video game term for a sky texture and geometry that envelop the entire scene. Another option is a skybox, which uses a cube map to texture a box surrounding the 3D scene. In this case, because no cube map was available and we only wanted to use easy-to-find or easy-to-produce materials, a sphere is better suited to texture mapping with found, regular photographs of the sky.

The below network begins with a sphere SOP using the primitive type "Mesh", set to exactly 40 rows and 40 columns. This sphere is then cut in half using a carve SOP. However, if you were to attempt to texture a flat sky image onto this sphere, the current set of texture coordinates would not work correctly.


The issue here is that the texture method for spheres is parametric by default and follows the uniform distribution of the rows and columns of the mesh. This means that as the rows and columns converge at the poles, the texture pinches as depicted here…


Instead we want to project the texture evenly around the sphere. However, because we are simply downloading a flat photo of sky, the best kind of texture projection is orthographic, facing down the Y axis. You can re-apply texture coordinates to geometry using a texture SOP, which has an orthographic option with the projection down the Y axis. However, this isn't going to be correct either. As depicted below, as the surface angle curves to become parallel with the Y axis, the projected texture stretches at the edges of the dome. So while the top middle of the sphere looks good, the edges are stretched.


It looks like the texture SOP just isn't going to work for this case. The question now is: how do we project orthographically onto a sphere? The answer is that we don't need to. Instead we can build a custom surface that matches the topology of the sphere, but with evenly spaced rows inside a flat orthogonal square, then transfer those coordinates to the sphere's vertices.


The following network reconstructs a mesh of the same topology but with evenly spaced rows making it easy to project orthographic texture coordinates. Each operator is explained…


circle1: A 1 unit radius circle is created with the same number of divisions as the sphere1 SOP has columns.

copy1: The circle is copied the same number of times as the sphere has rows, divided by 2.

script1: A Python script is used to multiply the scale of each circle by its primitive number, reversed and normalized. This means that primitive number 0 will be a 1 unit diameter circle, and each primitive after it is uniformly scaled smaller and smaller, evenly distributed down to the final primitive, which is (1/n) of the original size.

skin1: This SOP simply skins the circles into a polygon mesh.

texture1: Texture coordinates are projected orthographically onto the uniform circle mesh.

primitive1: Sphere meshes in Touch are by default "unrolled". This means there is an extra column of vertices that overlap at the surface end points. See the primitive SOP "Close U" parameter on the "Face/Hull" page. The skin SOP by default skins mesh surfaces as closed. This causes the sphere to have one column of vertices more than the custom skinned surface. Once the primitive is unrolled and the topologies match, the attributes may be transferred from one SOP chain to the other.
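The scaling logic described for script1 above can be sketched in plain Python outside TouchDesigner. This is a minimal illustration of the reversed, normalized primitive-number scale, not the actual script in the file; the function name is hypothetical.

```python
# Standalone sketch of the script1 scaling idea: each copied circle is
# scaled by its primitive number, reversed and normalized, so circle 0
# keeps full size and the last circle shrinks to 1/n of the original.
def circle_scales(n):
    # Primitive i gets scale (n - i) / n
    return [(n - i) / n for i in range(n)]

scales = circle_scales(20)
# scales[0] is the full-size outer circle; each subsequent circle
# is uniformly smaller, down to 1/20 for the last primitive.
```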

To transfer texture coordinates from one set of geometry to another, the vertex SOP is used. The first input of this SOP is the geometry that will be passed along; it maintains its primitive type, connectivity and all attributes. However, the vertex SOP can reference the second input's geometry data and replace either the color or texture coordinates of the first input. In this case, the parameters "mapu", "mapv" and "mapw" are used to access the texture coordinates. The following Python expressions are used to access vertex attributes from the SOP connected to the second input…

mapu = me.inputVertex2.uv[0]
mapv = me.inputVertex2.uv[1]
mapw = me.inputVertex2.uv[2]

Finally, the dome is scaled down in the Y axis to flatten the view overhead. In this case it is scaled by 0.4 in Y using the transform1 SOP. Then transform2 does a final uniform scale up to 1000 units.

To compare the results you can apply the phong MAT called "checkboard_test_mat" to the object called "r3_sky". The generic orthographic projection starts to stretch and pinch as it passes overhead. The custom projection, however, is nice and even as it passes overhead. Yes, there is a hole at the top, but in this case it is not an issue.

Regular Orthographic


Custom Orthographic


Sky Dome Texturing and Material Settings

Repeating and Animating a Texture

The technique used here for creating a repeating sky texture from a still image is quite simple. A cloudy sky texture was found on Google Images. The image is loaded into the movie TOP called "sky_texture_from_internet". The following transform TOP called "transform_mirror_V" has a simple expression 'me.time.frame/2000' in the "ty" parameter on the Transform page. This simply moves the texture from bottom to top over the range of the animation, which is 2000 frames. On the "Tile" parameter page the "Extend" parameter is set to "Mirror". The repeat parameters are also set to a high range so the image can continue to repeat as a mirror of itself for values greater than 1 and less than 0. This is a very simple way to repeat a texture relatively seamlessly: if the texture is stretched enough and the mirror seam passes by infrequently, the mirrored seam is difficult to detect.
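The mirror extend mode can be sketched as a small coordinate-folding function in plain Python. This is an illustration of the general technique, not TouchDesigner's actual implementation; the function name is hypothetical.

```python
# Sketch of "mirror" texture extension: coordinates outside 0..1 are
# folded back on themselves, so the texture repeats as a reflection
# of itself and the seam is hidden.
def mirror_coord(u):
    u = abs(u) % 2.0              # a mirrored repeat has a period of 2
    return 2.0 - u if u > 1.0 else u

# As ty scrolls past 1.0 (driven by me.time.frame/2000 in the TOP),
# sampling with folded coordinates reflects the image instead of
# wrapping it, which is why the repeat looks relatively seamless.
```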

The following level1 TOP is used for finely adjusting the look of the texture before it gets used by the material called "cloudsA". By adjusting this level TOP alone, a great deal of different-looking hazy clouds, overcast skies or eerie, mostly clear skies can be achieved.


To create an evolving mistiness with more dynamic layers of motion, this network branches into 2 layers. The level7 TOP allows for adjustment, then transform7 stretches and offsets the clouds further. This layer also moves at a slightly different rate, using another expression, 'me.time.frame/8000'. Notice how these small offset values create more interesting motion. The original level1 clouds are combined with this new layer using a matte TOP.

The 3rd input of the matte TOP uses the red channel of the 'cloud_mixer' TOP: where the image is red, the first input's image is revealed, while the second input is revealed where the 'cloud_mixer' is black. Notice on the Common parameters page of 'cloud_mixer' the pixel format is set to '8-bit fixed (R)'. Since the red channel is selected as the matte channel in the matte1 TOP, there is no need to calculate 3 channels of noise. This is a simple optimization. Also of note, the 3rd input can be a much lower resolution since it is simply being used as a mixer for the layers. Using a higher resolution in this case would yield very little difference, so calculating lower-resolution noise is good practice.
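Per pixel, the matte operation amounts to a simple linear blend driven by the matte channel. The sketch below is plain Python outside TouchDesigner, with illustrative names, showing the general math rather than the matte TOP's internals.

```python
# Per-pixel sketch of matting: the matte value m (the cloud_mixer red
# channel) blends the two cloud layers.
def matte_mix(a, b, m):
    # m == 1 fully reveals input a; m == 0 fully reveals input b
    return a * m + b * (1.0 - m)
```

Because only one channel of noise drives the blend, computing a single-channel, low-resolution matte is enough, which is exactly the optimization described above.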

Material Parameters

The material for the sky dome is called "cloudsA". It's important to note here that the sky dome material is really just for the cloud texture. The distant stars, and the gradient glow as the sky recedes behind the mountain tops, are achieved with different techniques.

Furthermore, the dynamic shading of the clouds is achieved by "cheating". To the eyes of a "trained" 3D expert, it's likely obvious that the cloud texture is simply a single-layer texture map being stretched over a 3D dome. The clouds are not using volumetric rendering techniques as found in non-real-time rendering engines, or any of the more advanced techniques for shading volumes in real time. To carry out this "cheat" a variety of OpenGL features are employed. These tools come together to create a thin layer of clouds that even reacts to the light of the moon.

The Destination of the Cloud Texture

The cloud texture is sent to two different places. The upper two TOPs, called 'level4' and 'cloud_project_map', are used as a projection map from the light object called 'scene_3Dlight'. The light is used when rendering the foreground trees and ground geometry. This will be discussed in the 3D Lighting section. The 'level8' and 'cloudmap' TOPs are used as the color map in the material called 'cloudsA' for the sky dome geometry.

The cloudsA material is really quite simple. If you toggle the 'non-default' parameters option for this operator you will see the only parameters used are the 'diffuse' color, set to an off-white with a slight yellow, and the constant value, set just above 60%.

Finally, the cloudmap is used both as a color map and an alpha map. The use of the color map is obvious; the alpha map, however, uses the same texture's luminance as an alpha channel. The same effect could have been achieved by clearing the 'alphamap' parameter and adding an alpha channel to the 'cloudmap' itself: inserting a reorder TOP between 'level8' and 'cloudmap' and setting the alpha channel parameter to the luminance of input 1 would yield an almost identical result.
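The luminance-as-alpha idea can be sketched as a one-line weighting of the color channels. This standalone Python sketch assumes standard Rec. 709 luma weights; the actual weights TouchDesigner uses are not specified in this post.

```python
# Sketch of deriving an alpha channel from luminance, as the reorder TOP
# alternative would. Rec. 709 luma weights are assumed here.
def luminance_alpha(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# A pure-white cloud pixel becomes fully opaque, while black sky
# becomes fully transparent, so bright clouds cover the background.
```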

Projection Lights

Lights can act as projectors by projecting textures onto 3D geometry. This is an extremely useful feature and can be used to add interesting effects in ways one might not immediately imagine. In the case of the sky dome, a light is parented to the main camera, then set to "lookat" and track the animating moon object. Like Batman's beacon light, the texture is projected onto the clouds to create a glowing moon effect from the point of view of the camera.

The texture that is actually projected is made in the network depicted below. A radial ramp is created for generating a streaky glow, which is combined with a circular ramp and finally distorted by some noise to generate a glowing effect.
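The circular-ramp part of this glow can be sketched as a simple distance falloff from the centre of the image. This is a plain-Python illustration with hypothetical names, not the ramp TOP's actual formula.

```python
import math

# Sketch of a circular glow ramp like the one feeding 'projection_map':
# brightness falls off linearly with distance from the centre.
def glow_pixel(x, y, cx=0.5, cy=0.5, radius=0.5):
    d = math.hypot(x - cx, y - cy)
    return max(0.0, 1.0 - d / radius)   # 1 at the centre, 0 at the edge

# Sampling a horizontal row of pixels through the centre gives a
# symmetric falloff; distorting such a ramp with noise produces the
# streaky glow described above.
row = [glow_pixel(x / 4, 0.5) for x in range(5)]
```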


The resulting texture in the TOP called 'projection_map' is applied to the light called 'light_moonglow' using the Projector Map parameter reference field. The effect is used very subtly; however, changing the color of the light to something other than blue will make clear how the effect is working.

For clarity, the following image shows this light projection with no alpha or compositing passes on the sky geometry.

Camera Rigging for Multi-pass Rendering and Compositing


There are actually 2 camera objects used in the setup. The main camera is called "cam_3dworld". This camera is used to render the main 3D scene with the trees and ground/water objects. The other camera object is a child of cam_3dworld and is called 'cam_rendernofog'. For the tree objects it was desired to have fog reveal the scene; for the sky and mountain renders no fog is required. In addition, the near and far planes needed to be different for the close foreground objects vs the distant mountain and sky objects.

To make managing two cameras easy, the second camera is simply made a child of the first, so any changes to the transform apply identically to both. In addition, the field of view parameter is referenced using a Python parameter reference.


Using Multiple Render Passes

These cameras are actually used in 5 different renders. First, the foreground geometry is rendered in the render TOP called "render_3d_world". Following this main render, the moon polygon is rendered using a Render Pass TOP in "render_moon". Also using Render Pass TOPs, the sky dome geometry is rendered in "render_clouds", followed finally by the mountains in "render_moutains".

The 5th render generates a real-time cube map in the TOP called “cubemap_generator”. Notice, the “Render Cube Map” option is active. This render TOP renders only the sky and moon objects. This render is not done with a “Render Pass” TOP because the resolution needed to be different than the main render, and render pass TOPs must be rendered at the same resolution as their input. This cubemap is then used in the phong material for the water object material called “mat_water”. The cube map render is referenced in the “environment map” parameter.

It's interesting to notice that we have rendered a few scene elements and, in real time, reassigned the result (in this case a cube map) back onto the material of another object. This workflow is important to understand. Far more complex or strange effects may be achieved by experimenting with this completely open and customizable render pipeline.

3D World Rendering and Lighting

The lighting of the 3D foreground objects is very simple, but it illustrates a variety of important basic features for rendering in OpenGL. The Phong MAT is a very flexible and very powerful OpenGL shader if used correctly. Here are a few features used in the shading of this scene that are important to note…

In this scene, diffuse lighting is applied to the trees, ground and water geometry (r1_trees, r1_forfloor, r1_water). Also of note is that ambient light color is applied to each object. However, if there is no ambient light object (in this scene called 'ambient'), or that ambient light isn't correctly referenced by the render TOP, then the ambient light values applied in the material parameters won't be used.

Play around with the ambient light values for the ambient light object to see how dramatically it changes the look of the scene.

The overall effect, as if moonlight were actually lighting the 3D geometry, is a combination of tricks.

First of all, at the beginning of the animation, the "cam_3dworld" COMP has a fog setup where the far value is set to 0. This means the fog completely overwhelms the scene. So in this case, the fog far value is used to reveal the scene as the moon rises over the mountain tops. Notice how the fog alpha parameter is set to 0. This allows fog to be cast over visible objects, but the fog color will not appear where the alpha channel of the scene render is 0. This is critical for using fog in conjunction with multi-pass scene compositing.
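The reveal trick relies on how a linear fog factor behaves as the far value is animated. As a rough sketch, assuming standard linear fog (the post doesn't state which fog model TouchDesigner uses), with far at 0 every fragment is fully fogged, and raising far gradually uncovers the scene:

```python
# Sketch of a linear fog factor: 0 = no fog, 1 = fully fogged.
def fog_amount(depth, near, far):
    if far <= near:
        return 1.0                        # far at 0 => scene fully fogged
    f = (depth - near) / (far - near)     # 0 at the near plane, 1 at far
    return min(1.0, max(0.0, f))
```

Animating `far` upward over the shot moves every fragment's fog amount from 1 toward 0, which is the "reveal" described above.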

As the 3D render is revealed, a variety of rendering features are used to achieve the look of the foreground forest and water. The first feature we will cover is the generation of shadows. The process for creating shadows in 3D scenes is much easier in TouchDesigner 088. Notice the parameters of the light object "scene_3Dlight": the Shadow Type parameter is set to "Hard", and there is a "Shadow Casters" parameter that allows for the selection of objects that will cast shadows from that light. This is pretty much all you need to do. This shadow-casting light must be referenced in the render TOP.

However, because the shadows are actually generated by a real-time shadow map rendered from the perspective of the light, the "Shadow Resolution" parameter must be balanced against the FOV angle of the light. The FOV angle will need to be scaled relative to the distance of the light. When the camera is moving, setting up a scene that casts shadows can be finicky. In this case the shadow-casting light is actually a child of the camera: as the camera position pans, so does the shadow-casting light. This technique allows for the generation of a reasonably sized shadow map that follows the camera wherever it goes. If there were a desire for the shadows to move with the moon angle, this light position could be animated as well.
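The resolution/FOV trade-off can be made concrete with a little geometry. As a rough sketch (assuming a simple perspective frustum; the names are illustrative), the world-space size covered by one shadow-map texel grows with both the light's FOV and its distance from the scene:

```python
import math

# Sketch of why Shadow Resolution and FOV interact: the shadow map
# covers a region of world space whose width grows with the light's
# FOV and distance, so each texel covers more ground as either grows.
def shadow_texel_size(distance, fov_degrees, resolution):
    coverage = 2.0 * distance * math.tan(math.radians(fov_degrees) / 2.0)
    return coverage / resolution
```

Larger texels mean blockier shadow edges, which is why a light that follows the camera can keep its FOV tight and its shadow map effective.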

A side note while we discuss shadows: the Phong MAT has a useful parameter called "Shadow Color" on the "Advanced" parameters page. This can be used to color the shadows. It's barely used in this scene, but it is very useful when the scene is brighter and more colorful. Also take note of the "Darkness Emit" parameter, another useful feature, though not used in this scene.

In addition to casting shadows, this light also projects the cloud texture that we discussed at the beginning of this post, giving the feel of the clouds casting shadows as well. It's a very subtle effect, but it adds to the overall atmosphere.

It's worth mentioning again: there is nothing physically realistic about the lighting systems here. It's all done by eye and invention, using the available tools creatively to get the job done.

As mentioned already, scene_3Dlight is a child of the camera, but positioned well out in front of and above the camera, facing back. This is an ideal scenario for generating a nice specular highlight. In addition, moving noise is converted into a normal map and applied to the water texture. The same normal texture was copied, locked and used as a bump map for the "mat_ground" MAT. The bump maps add some detail and interest to the behavior of the lighting.

Take notice also of how nicely the environment map generated by the "cubemap_generator" works in conjunction with the animating bump map, creating a water-like surface texture.

It may come as some surprise that there is only a single light for the 3D scene render. The tree material is also using a built-in rim light feature. This is a very powerful and easy way to add greater form to your lighting scenarios. Essentially every object in a scene can have two of its own personal rim lights to accentuate its edges and form. Try adding the other rim light in the "mat_trees" MAT, located on the "Rim" parameter page.

The specular is also used for effect on the ground object. Of particular importance here was to create an effect as if the moonlight were actually catching the ground surface. This is accomplished with the same light that casts the shadows and generates the specular highlight.

Background Mountains - Photoshop Plugin

Matte Painting from Photoshop into a Composited 3D World

The idea behind making this demonstration file began with testing the Photoshop plugin. As we were developing the plugin we considered how such a live connection between Photoshop and TouchDesigner might be used. Clearly it will be useful in live scenarios with talented painters who might touch up masks or textures for projection mapping. This was certainly the original intent.

However, while searching YouTube for "Matte Painting in Photoshop", we stumbled on the world of "Speed Painting". This is a very raw creative process of grabbing content from the web and quickly combining, painting over and mixing it to generate beautiful high-resolution still images.

In the commercial production world, this process is used in conjunction with Adobe After Effects (and other compositing apps) to create matte backdrops for hybrid 2D/3D scenes and motion-tracked live-action shots with video plates. It became clear to us that TouchDesigner was very well suited to this workflow and in many ways brings something new to the table.

Yet "Matte Painting" is nothing new. A very inspirational movie found on YouTube is about Peter Ellenshaw, a "real life" matte painter for Disney films. Watching this movie is truly inspiring, but it also underscores how important it is to have amazing artwork as the foundation of any visual scene. It creates the mood, sets the basis for the visual composition and, in the case of a real-time system like TouchDesigner, adds a visual richness that is virtually free with regard to GPU and CPU resources.

We believe TouchDesigner users will benefit greatly from having a deeper understanding of the matte painting process; there are plenty of videos online to go deeper. Furthermore, we hope matte painters, 3D modellers and animators will discover TouchDesigner as an open, real-time, flexible compositing environment for quickly sketching ideas, testing out visual concepts or quickly combining video and still images with geometry and shaders, using multi-pass render tricks to quickly render videos, if for no other reason than to share their artwork with the world.

As TouchDesigner matures, it is becoming a real-time compositor's experimentation toolkit. The engine is not just a traditional compositor that has been partially accelerated by advances in GPU and CPU technologies; it is purely real-time, designed from the ground up as a real-time compositing engine with a multifaceted focus on interaction and performance as well as integration with music. Now with the new real-time Photoshop connection, the opportunity for the visual artist, painter and compositor to work in a completely fluid environment, similar to the world of the musician, is becoming a reality. With these techniques combined with the already powerful projection systems and the ease of integrating interactivity, TouchDesigner continues to lay the foundation for the future of stage and installation design, blurring the cinematic arts and video game technology with live performance and music in the creation of immersive environments.

How the Plugin Works

OK, so it's not perfect, but it's a good start… There are some issues we hope to resolve, but for now this is the process for setting up a live connection between TouchDesigner 088 and Photoshop CS6.

When Photoshop isn't running, or the correct files aren't loaded into Photoshop, there are "stand-in" files loaded into the movie TOPs called "rgb_in" and "alpha_in". These two images are passed through the "Photoshop In" TOPs. The two "Photoshop In" TOPs are called "photoshopin_moutain" and "photoshopin_moutainalpha".

Why are there separate inputs for RGB vs alpha? Unfortunately the Adobe SDK only supports streaming 3 channels per loaded image, so the alpha channel and the RGB channels must be loaded as separate files.

Another option could be to work only in RGB, start with a green-screen background color, and pull the alpha using an "RGB Key" TOP. We are not Photoshop experts either, so there may be some trick to pass along an alpha layer automatically from an image using a Photoshop script of some kind.

Working in Photoshop

First of all, the ideal setup is a system with two screens. TouchDesigner on one screen and Photoshop on the other. Obviously a Wacom tablet is also very useful, and the pen range can be assigned to work only on the Photoshop screen if desired.


TouchDesigner and Photoshop communicate with each other through the Photoshop "Remote Connections" facilities. To set up this feature use the Photoshop menu "Edit > Remote Connections". A dialog appears with fields for "Service Name" and "Password". Set the Service Name to "Photoshop Server" and the Password to "password". Take note that this password can be anything you want, but it must match the password that is entered into the TouchDesigner "Photoshop In" plugin TOPs.


That should be all that is required to get the live connection working. However, on the TouchDesigner side there are a few important things to be aware of in order to be comfortable painting. First of all, if you are painting the mountains, the fog will get in the way during most of the animation. This can be avoided by dragging the timeline range controls to loop at the end of the animation, when the mountain texture is more clearly visible.


Furthermore, the mountains are not actually occluded by OpenGL fog as the foreground 3D geometry is; it's another compositing trick. To completely bypass the effect of the mountain in darkness, locate the "multiply3" TOP. Here you see an animating ramp causing the texture to reveal from black over the span of the animation. While painting, simply bypass "multiply3" to avoid the mountain darkness, allowing you to paint while the animation continues to loop over the entire range.

The Moon Rigging

The moon is simply a JPEG found on Google Images. A single rectangle geo object called "r2_moon" orbits on the outer back side of the 3D world. The texture is modified to clean up the moon's alpha channel, then it is applied to a material with Emit set to 1. However, one could easily replace the input of the texture TOP "moon_final" with a Photoshop In TOP and have yet another object to interactively paint from Photoshop.

Star Field - Particle Instancing

The star field uses a similar approach to this forum post…



This star field generator network is explained as follows.

The "grid1" SOP: Creates 40,000 points at the coordinates (0,0,0). This is an easy way to simply generate points, though one could use a sphere or any other generator.

The "point1" SOP adds point attributes for color (Cd) and pscale (used for randomizing the scale of the points).

The "sopto4" DAT strips away the primitive mesh topology and simply holds the point attribute data in table form. We do this because we don't care about the mesh connectivity of the points; in fact we don't want the mesh primitive at all. Instead we want a particle system primitive so we can render point sprites.

The “datto3” SOP converts the table point data back into the format we want. In this case we convert the table of point information into a Particle System, and we set “Particle Type” parameter to “Render as Point Sprites”. Now we have the geometry formatted correctly for rendering point sprites.

The "script1" SOP is a custom Python SOP that modifies the point positions. The code is already well documented, but it is included here for reference.

#me is this DAT.
#scriptOP is the OP which is cooking.
import math
import random

def cook(scriptOP):

	#First copy the point data from the first input to this SOP
	scriptOP.copy(scriptOP.inputs[0])

	#Loop through each point in the list of points in 
	#the input of the script1 SOP
	for point in scriptOP.points:

		#A parameter to control the overall scale of 
		#the sphere distribution
		scale = scriptOP.par.value0x

		#Generate a random number between 0 and 1 and scale it by the 
		#script SOP custom parameter for scale
		R = random.random()*scale+scriptOP.par.value0y

		#get PI, THETA and PHI for generation of spherical coordinates
		PI = math.pi
		THETA = (random.random())*PI*2
		PHI = random.random()*PI
		#Math to position a point in a sphere. 
		point.x = R * math.cos( THETA ) * math.sin( PHI )
		point.y = R * math.sin( THETA ) * math.sin( PHI )
		point.z = R * math.cos( PHI )
		#Generate point normals for rendering 
		d = math.sqrt(point.x*point.x + point.y*point.y + point.z*point.z)
		normV = (point.x/d,point.y/d,point.z/d)
		#Assign the normal values to each N attribute for all points
		point.N[0] = normV[0]
		point.N[1] = normV[1]
		point.N[2] = normV[2]

		#Generate some random point colors
		point.Cd[0] = 0.9 + (random.random()*0.5)*0.2
		point.Cd[1] = 0.9 + (random.random()*0.5)*0.2
		point.Cd[2] = 1 + (random.random()*1)
		point.Cd[3] = 1
		#Create a weighted distribution of different sizes for stars 
		#For python experts, I'm sure there is a better way to do this
		sizes = [1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,2,3,3,3,3,6,8,10]

		#Randomly select a size from the sizes list
		point.pscale[0] = random.choice(sizes)

The geometry is now ready to be rendered so it is sent into the stars Geometry Component and a GLSL material called starSprites is applied.

The starSprites shaders are very simple, but they outline the basic code needed to create a custom particle shader in GLSL. Again, the code is well documented but is included here for reference.

sprite_GLSLVertex shader code:

//Create uniform for controlling size of point sprite
uniform float pSize;

//Import the touch attribute pscale
//Notice this was done with varying declarations in GLSL 1.2
in float pscale;

//Define the structure attributes to pass along to the pixel shader
struct Vertex {
	vec2 texCoord0;
	vec4 color;
};

//Declare the struct 
out Vertex vVert;

void main()
{
	//Assign the touch ready color variable to the output vertex structure
	vVert.color = Cd;

	//Deform the vertex positions with object level transforms
	vec3 camSpaceVert = TDDeform(P.xyz).xyz;

	//Transform the vertices from camera space into projection space
	gl_Position = TDCamToProj(camSpaceVert);

	//Assign scale values to the point sprite size attribute gl_PointSize
	gl_PointSize = pscale*pSize;
}

sprite_GLSLPixel shader code:

//Texture map for sprite
uniform sampler2D textureFace;

//This struct holds the variables sent from the vertex shader
struct Vertex {
	vec4 color;
	vec2 texCoord0;
};

// Struct that holds all the interpolated info from the vertices 
in Vertex vVert;

// Output variable for the color
layout(location = 0) out vec4 fragColor;

void main()
{
	//use gl_PointCoord.st, the built-in openGL texture
	//coordinates for point sprites 
	vec4 tcolor = texture(textureFace,gl_PointCoord.st);
	//multiply the vertex color by the texture color
	fragColor.rgb = vVert.color.rgb * tcolor.rgb;

	//set alpha to the texture color
	fragColor.a = tcolor.a;
}

Another parameter to be aware of in the case of the star shader is on the Common page, called "Depth Test"; notice this is turned off, with "Blending (Transparency)" turned on. This essentially shades the sprites ignoring their depth position, simply adding the transparency of each sprite together. This avoids having to actually sort the positions of the points. If you activate "Depth Test" you will notice the Z-depth sorting is out of order and it looks a mess.

Finally, the star object is transformed with null2, null1 and some scale up in the Z axis to create a more galaxy-like shape. The motion was just faked to give a general impression of a Milky Way-like object in time lapse. Kind of effective, in very abstract and crude terms.

The Final Composition

The final composite ends up quite straightforward. The following composite network is explained here…


The foreground 3D world is rendered first, into render_3d_world. Next, the moon, clouds and mountain 3D back-plate get their own renders. The mountains are multiplied by an animating ramp to make it look like they are being revealed by the rising moon. The moon is composited behind the clouds at "over4", and finally the clouds and moon are comped over the stars component.
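The "over" operations in this chain follow the standard premultiplied-alpha compositing rule. As a rough per-channel sketch (plain Python, illustrative names; this is the general formula, not the over TOP's internals):

```python
# Sketch of premultiplied "over" compositing, as used when layering the
# moon behind the clouds and comping both over the stars.
def over(fg, fg_a, bg, bg_a):
    # fg/bg are premultiplied color values; fg_a/bg_a their alphas
    out = fg + bg * (1.0 - fg_a)
    out_a = fg_a + bg_a * (1.0 - fg_a)
    return out, out_a
```

Because each pass carries its own alpha, the order of the over TOPs determines which layer hides which, exactly as described above.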

At "over2" this whole real-time 3D world and 2D composite is simply rendered over a static ramp that gives the feeling of early morning somewhere in the mountains. Of course, the ramp4 night-sky ramp could be animated as well for dramatic effect.


Wow, incredible tutorial, Jarrett. Can’t wait to open this up and sink my teeth in!

Awesome Jarrett. Thank you for this excellent tutorial!

Was the example file you posted removed? I could have sworn I saw a .toe posted during the first few hours of this thread and I wanted to dissect the python script used in the SkyDome tutorial that multiplies the scale of each circle by the primitive number. Thanks!

This is glorious. I am indebted to your genius!

The Zip archive associated with this seems to be broken, has anyone successfully downloaded the example file?


yes same here … gets part way thru downloading then fails … anywhere else to grab the file?
great looking tut tho!

Hmm, works fine for me. Can you try using a different browser?

nice work!

Thank you for this tutorial! How can I remove the fixed camera-relative positioning of the stars? I'm trying to incorporate them into a scene, and I want to be able to fly in and out with a camera and not have them stuck to the camera. Thank you!

The camera is now mobile in the new version just posted at the top of this page. For version 088 build range 50000+.