Fractal Flames with Compute Shader

I’m working on a fractal flame system (aka the Electric Sheep algorithm). The algorithm is described on Wikipedia and in chapter 3 of Evolutionary Visual Art and Design.

This repo can give some intuition for it too: FractalFlame/FractalFlame.pde at master · CodingTrain/FractalFlame · GitHub

It’s a very sequential algorithm. You pick a random point and then apply transformations to that point millions or even billions of times. I think that the Processing demo does the algorithm correctly.
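For intuition, that sequential loop can be sketched on the CPU like this. This is a minimal Python sketch, not the Processing demo's actual code; the two affine transforms, the burn-in count, and the histogram resolution are made-up placeholders:

```python
import random

# Two hypothetical affine transforms making up a toy iterated function system.
# A real flame would also apply nonlinear "variations" after the affine step.
def f0(x, y):
    return 0.5 * x, 0.5 * y

def f1(x, y):
    return 0.5 * x + 0.5, 0.5 * y + 0.5

def chaos_game(iterations, skip=20):
    """Iterate a single random point, accumulating hits in a histogram."""
    hist = {}
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    for i in range(iterations):
        f = random.choice([f0, f1])   # pick a transform at random each step
        x, y = f(x, y)
        if i >= skip:                 # skip early iterations so the point settles onto the attractor
            px, py = int((x + 1) * 32), int((y + 1) * 32)
            hist[(px, py)] = hist.get((px, py), 0) + 1
    return hist

hist = chaos_game(100_000)
```

The key point is that every iteration depends on the previous one, which is exactly what makes this hard to parallelize.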

However, I can imagine making some sacrifices to the algorithm to parallelize it a bit. For example in a compute shader you could have different threads all doing their own particle iteration. The issue then is that the threads are writing to the same buffer, and there could be race conditions if they write to the same location. What’s the way to make sure one thread finishes before another even begins? I couldn’t accomplish it with memoryBarrierShared and barrier().

Suppose you had only one thread: I think you can’t do more than 4096 iterations, because the GLSL compiler prevents loops that long.

So I’m wondering what are some ideas for doing fractal flames with GLSL?

I found this repo too: FractalFlames/FreeParticle.cs at master · nshelton/FractalFlames · GitHub

But I’m not sure how strictly it follows the original algorithm either. It seems to have 1000 particles and each particle is only iterated once per frame?


Ahh, Fractal Flames! Wow, what a throwback. I’m stoked to hear you’re diving into this. I got a lot of enjoyment out of Apophysis 7x back in the day.

Honestly, I haven’t done a ton with compute shaders, but the few times I’ve used them I also wasn’t able to get barriers to work. Or maybe I didn’t understand how to use them properly and they only seemed not to work; I’m not sure.

I guess if every point has to go through a series of transforms, then there’s no way to get around sequencing them? Do you think using a 3D texture would be useful here somehow?

If a point is really just going on a 2D journey around the canvas, do you think there’s a way to pre-calculate the journeys for a lower-res grid of starting points, and then, based on where the randomly picked point starts, lerp between the nearest pre-computed paths? Something like a temporal displacement map. Not sure if that would just result in a lower-fidelity final image, or create more problems than it solves, haha.

Anyways hope you figure something out! This would be super neat.

Found this today while perusing a bit. It seems to be another (quite old) paper on parallelizing this thing to some degree. I imagine you’ve already seen it, but just in case not!


Fun topic! I’ve got a somewhat basic implementation of the Scott Draves algorithm working in a compute shader, simply by having each work-item run a single particle (really just a random coordinate) through a number of iterations via a for-loop with several sequential transforms (about 20 iterations start to converge nicely). The final coordinate gets plotted onto the output buffer with some colour value. With enough work-items and multi-pass rendering you can emulate millions of particles this way. I’m sure there are race conditions and other gotchas I’m not aware of, but I chose to ignore that for now :wink:
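To make that scheme concrete, here’s a rough CPU-side Python sketch of the same idea: each “work-item” iterates its own random point about 20 times and plots only the final coordinate, and multiple passes accumulate into a shared histogram. The transform set (one affine contraction plus the spherical variation), the grid size, and the pass counts are placeholders of my own, not the actual shader’s:

```python
import random

def iterate_particle(n_iters=20):
    """What one work-item does: iterate a random start point, return the final coordinate."""
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    for _ in range(n_iters):
        # placeholder transform: an affine contraction followed by the "spherical" variation
        x, y = 0.5 * x + 0.25, 0.5 * y - 0.25
        r2 = x * x + y * y + 1e-9
        x, y = x / r2, y / r2
    return x, y

def render_pass(n_work_items, hist, size=64):
    """One dispatch: every work-item plots its final point into the shared histogram."""
    for _ in range(n_work_items):
        x, y = iterate_particle()
        px = min(size - 1, max(0, int((x + 2) / 4 * size)))
        py = min(size - 1, max(0, int((y + 2) / 4 * size)))
        hist[py][px] += 1

hist = [[0] * 64 for _ in range(64)]
for _ in range(4):              # multi-pass: 4 dispatches of 10k work-items each
    render_pass(10_000, hist)
```

On the GPU the `hist[py][px] += 1` is where the races live, since many work-items can hit the same cell in the same dispatch.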


Here’s what I got, based on @lucasm’s shared paper.

Fractal-Flame_shared.toe (19.2 KB)

I do a multipass compute shader where each thread in each pass loads a position, does an update, and saves the position back to the texture. The top section of the texture is reserved for loading/saving “position” data. The bottom section (about 90% of the total) is for “painting” the colour according to the position. Eventually this setup might be able to do the trick described in the paper about picking a random position but then doing a predetermined flame iteration. Or maybe it still needs to be in a CUDA top.
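Here’s a rough Python sketch of that texture layout, just to illustrate the load/update/save plus paint flow; the numbers (64×64 texture, top ~10% of rows for positions) and the affine step are invented placeholders, not what’s in the tox:

```python
import random

SIZE = 64          # texture is SIZE x SIZE
POS_ROWS = 6       # top ~10% of rows reserved for position data
N_PARTICLES = POS_ROWS * SIZE

# tex holds (r, g, b) triples; position rows pack (x, y) into r and g
tex = [[(0.0, 0.0, 0.0) for _ in range(SIZE)] for _ in range(SIZE)]

def init_positions():
    for i in range(N_PARTICLES):
        row, col = divmod(i, SIZE)
        tex[row][col] = (random.uniform(-1, 1), random.uniform(-1, 1), 0.0)

def compute_pass():
    """One dispatch: each 'thread' loads its position, updates it, saves it back,
    then paints into the lower region of the same texture."""
    for i in range(N_PARTICLES):
        row, col = divmod(i, SIZE)
        x, y, _ = tex[row][col]
        x, y = 0.5 * x + 0.1, 0.5 * y + 0.3      # placeholder affine flame step
        tex[row][col] = (x, y, 0.0)              # save position back to the top section
        # paint into the bottom section, clamped so it never overwrites position rows
        px = min(SIZE - 1, max(0, int((x + 1) * 0.5 * SIZE)))
        py = min(SIZE - 1, max(POS_ROWS, int((y + 1) * 0.5 * (SIZE - POS_ROWS)) + POS_ROWS))
        r, g, b = tex[py][px]
        tex[py][px] = (r + 1.0, g, b)            # accumulate "paint" in the red channel

init_positions()
for _ in range(10):
    compute_pass()
```

Splitting one texture into a state region and a paint region like this is what lets the positions persist between dispatches.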

This probably has some overhead because the position is written/loaded on each pass. What @prema suggested is better in some regards, but I was worried that a single pass with many, many branches would be bad. I’m not sure if you’re using randomness to pick your transforms.

I’m using Python DEAP (strongly typed genetic programming) to generate the shader code in the “edit_shader” DAT, but I’ve excluded that from this tox for now. The tox has some useful GLSL nonetheless.

Another nuance is picking good affine transformations. It seems to help to pick rotations of 360 degrees divided by some integer. For example, a 60 degree rotation with some random xy offset.


wow nice progress! there’s so much potential in this algo, especially porting it into Touch like this.

I’ll be honest, I fell down a bit of a rabbit hole the other day when you posted this thread, looking at all the different functions, porting some to GLSL, and just applying them to video. I know it’s not exactly the same thing, but some really wacky and fun effects come from just doing one UV transform/iteration. Even more interesting when combining different transforms together.



Polar: (lmao)


(Spherical * 1) + (Handkerchief * 0.5):


Nice work. For those following along, I had uploaded some of the “Variations” from the flame algorithm, written in GLSL. @lucasm used these functions as a kind of Remap TOP.
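For anyone wanting to experiment on the CPU first, here’s a small Python port of a few of those variations, plus a weighted blend like the (Spherical * 1) + (Handkerchief * 0.5) mix above. The formulas follow the Draves & Reckase flame paper, which defines theta with atan2(x, y), arguments in that order:

```python
import math

def spherical(x, y):
    # variation 2: (x, y) / r^2; epsilon guards against the origin
    r2 = x * x + y * y + 1e-12
    return x / r2, y / r2

def handkerchief(x, y):
    # variation 6: r * (sin(theta + r), cos(theta - r))
    r = math.hypot(x, y)
    t = math.atan2(x, y)
    return r * math.sin(t + r), r * math.cos(t - r)

def polar(x, y):
    # variation 5: (theta / pi, r - 1)
    r = math.hypot(x, y)
    t = math.atan2(x, y)
    return t / math.pi, r - 1.0

def blend(x, y, weighted):
    """Weighted sum of variations, e.g. (spherical * 1) + (handkerchief * 0.5)."""
    out_x = out_y = 0.0
    for w, fn in weighted:
        vx, vy = fn(x, y)
        out_x += w * vx
        out_y += w * vy
    return out_x, out_y

x, y = blend(0.3, 0.4, [(1.0, spherical), (0.5, handkerchief)])
```

These translate almost line-for-line into GLSL (`atan(x, y)`, `length(p)`), which is presumably why they work so well as a Remap-style TOP.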

Also here’s a teaser of the genetic algorithm

Variation 39 happens to take two parameters. The genetic algorithm constructs this tree to compose uniforms and predefined functions that lead to these two parameters. A flame is then a collection of several of these trees with a probability assigned to each tree. Each variation/tree also has its own details such as color and pre-transform/post-transform.


this is super cool… thank you for sharing the nuts and bolts of this!