GLSL - Spatial Hashing / Bitonic Sort

Hey all,

Some time ago I came across this very cool video from Sebastian Lague.

I've often run into the need to loop through arbitrary neighboring points that aren't neatly arranged in UV space, so after seeing this I tried to implement it myself (with a lot of help from ChatGPT, notably for the bitonic sort shader). After struggling for a while I managed to get it to work, but the performance wasn't what I had hoped for: seeing him run this on millions of particles, I was expecting to handle a 1024x1024 texture faster than I do.

The heaviest part is the bitonic sort, since I have to run it 105 times for a 128x128 texture, and it gets much heavier the larger I go. So I wanted to share my network to see if anyone could help me make it faster.
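For context, the 105 comes straight from the size of the bitonic network: sorting N = 2^k keys takes k(k+1)/2 compare-exchange passes, so 128x128 = 2^14 keys gives 14*15/2 = 105 passes, and 1024x1024 = 2^20 already means 210 full-resolution passes. Just to make the discussion concrete, here is a minimal sketch of what one of those passes looks like as a fragment shader, with the sort key assumed to sit in the red channel and made-up uStage / uPass uniforms stepped from outside. This is the general pattern, not copied verbatim from my .tox:

```glsl
// One compare-exchange pass of a bitonic sort over a W x W texture,
// read as a flat array of N = W*W keys. Run with uStage stepping
// 2, 4, ... N and, for each uStage, uPass stepping uStage/2 ... 1.
// Hypothetical names; sketch of the general pattern for a GLSL TOP.
uniform int uW;      // texture width, e.g. 128
uniform int uStage;  // current bitonic block size k
uniform int uPass;   // current compare distance j

out vec4 fragColor;

vec4 fetch(int idx) {
    return texelFetch(sTD2DInputs[0], ivec2(idx % uW, idx / uW), 0);
}

void main() {
    int i = int(gl_FragCoord.x) + int(gl_FragCoord.y) * uW;
    int partner = i ^ uPass;               // element we compare against
    vec4 self  = fetch(i);
    vec4 other = fetch(partner);

    bool ascending = ((i & uStage) == 0);  // direction of this bitonic block
    bool iIsLower  = (partner > i);
    // the lower index keeps the smaller key in an ascending block
    bool wantMin = (iIsLower == ascending);
    // tie-break on index so both threads of a pair make opposite choices
    bool selfIsMin = (self.x < other.x) ||
                     (self.x == other.x && i < partner);
    fragColor = (wantMin == selfIsMin) ? self : other;
}
```

With this structure the only real lever is reducing the number of full-resolution passes, which is what the workgroup idea further down is about.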

A few things I noticed and tried:

First of all, here is a snapshot of the performance on a 128x128 texture:

What's weird, first of all, is that if I test this on a duplicate GLSL node that isn't connected to the rest of the network after it, it's notably faster in some cases.

One thing I'm currently trying, after speaking with the friendly robot GPT, is to first do a sorting pass using workgroups, so that buckets of 16x16 (256 elements) can each be sorted in a single pass and then merged and sorted in fewer passes by a second shader, but I'm struggling to make it work as of now. (See the sketch just below.)
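To make that idea concrete, here is the kind of compute shader I'm aiming at, as a sketch with made-up names and an SSBO layout (in a GLSL TOP you'd read/write a texture or image instead): each 256-thread workgroup loads its chunk into shared memory, runs the whole local bitonic sort with barriers between sub-passes, and writes it back, all in one dispatch.

```glsl
#version 430
layout(local_size_x = 256) in;

// x = cell hash, y = point index (made-up layout for the sketch)
layout(std430, binding = 0) buffer Keys { vec2 keys[]; };

shared vec2 lds[256];

void main() {
    uint g = gl_GlobalInvocationID.x;  // global element index
    uint t = gl_LocalInvocationID.x;   // index inside this 256 chunk
    lds[t] = keys[g];
    barrier();

    // full bitonic sort of the local 256 keys, entirely in shared memory
    for (uint k = 2u; k <= 256u; k <<= 1) {
        for (uint j = k >> 1u; j > 0u; j >>= 1u) {
            uint partner = t ^ j;
            if (partner > t) {
                // direction comes from the GLOBAL index so later merge
                // passes still see properly alternating bitonic blocks
                bool ascending = ((g & k) == 0u);
                bool outOfOrder = ascending
                    ? (lds[t].x > lds[partner].x)
                    : (lds[t].x < lds[partner].x);
                if (outOfOrder) {
                    vec2 tmp = lds[t];
                    lds[t] = lds[partner];
                    lds[partner] = tmp;
                }
            }
            barrier(); // sub-passes must not overlap
        }
    }
    keys[g] = lds[t];
}
```

After that, only the merge stages with compare distance >= 256 still have to run as separate full-resolution passes (and their sub-passes with distance < 256 could again drop into shared memory), which is where most of the k(k+1)/2 pass count would be saved.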

Any help is welcome!
(Also, I know that in the 2025 builds there is the Neighbours POP, which is awesome, but I can still quite easily hit performance limits when reaching millions of points, so I'm still wondering if there is a world where we could match Sebastian's performance.)

The .tox is attached below.

Thanks in advance!

EDIT: While cleaning my network to share the .tox, I'm just noticing that the performance hit is much smaller when I look at it in an empty scene; I'm able to sort a 1024x1024 texture in around 5 ms.


It's now the density computation that gets unpredictably heavy, but anyway, any tips on how to improve this are welcome.
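In case it helps diagnose it: the density pass in this kind of setup is basically the 3x3-cell neighbour walk below, and its cost swings because the inner loop length depends on how many points share a cell's run in the sorted table; a few crowded cells (or hash collisions merging two distinct cells into one run) stall the whole dispatch. This is a simplified sketch with a made-up input layout and hash, not lifted from my network:

```glsl
// Density for one point: walk the 3x3 neighbouring cells through the
// sorted (cellHash, pointIndex) table. Assumed layout for the sketch:
//   sTD2DInputs[0] = point positions (xy)
//   sTD2DInputs[1] = sorted (cellHash, pointIndex) pairs
//   sTD2DInputs[2] = first table row of each hash (N if the cell is empty)
uniform float uRadius; // smoothing radius == cell size
uniform int   uW;      // texture width

out vec4 fragColor;

ivec2 texCoord(int idx) { return ivec2(idx % uW, idx / uW); }

int hashCell(ivec2 cell, int n) {
    // must match the hash used when building/sorting the table
    uint h = uint(cell.x) * 15823u + uint(cell.y) * 9737333u;
    return int(h % uint(n));
}

vec2 fetchPos(int idx) {
    return texelFetch(sTD2DInputs[0], texCoord(idx), 0).xy;
}

void main() {
    int N = uW * uW;
    int self = int(gl_FragCoord.x) + int(gl_FragCoord.y) * uW;
    vec2 p = fetchPos(self);
    ivec2 centre = ivec2(floor(p / uRadius));
    float density = 0.0;

    for (int dy = -1; dy <= 1; dy++)
    for (int dx = -1; dx <= 1; dx++) {
        int h = hashCell(centre + ivec2(dx, dy), N);
        int start = int(texelFetch(sTD2DInputs[2], texCoord(h), 0).x);
        // entries with the same hash sit in one contiguous run
        for (int i = start; i < N; i++) {
            vec4 entry = texelFetch(sTD2DInputs[1], texCoord(i), 0);
            if (int(entry.x) != h) break;   // end of this cell's run
            vec2 q = fetchPos(int(entry.y));
            float d = distance(p, q);
            if (d < uRadius) {
                float w = uRadius - d;      // toy falloff, not SPH-exact
                density += w * w;
            }
        }
    }
    fragColor = vec4(density, 0.0, 0.0, 1.0);
}
```

So one thing to check is the cell size vs. the smoothing radius, and the hash/table size, since both directly set that inner loop length.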

spatial_hashing_example.tox (82.9 KB)