Animated point clouds and a workaround for VDB files

Here's a short video that shows a workflow for getting VDB files into TouchDesigner.
I use EmberGen and one of its presets to export a simulation to VDB files.
Next I import them into Blender and use a single-point mesh with the Volume to Mesh modifier.
I adjust the mesh resolution via the voxel amount and export everything as an OBJ sequence.
(FYI: only the density information of the VDB files is exported in this example.)
The OBJ files have position and normal information but no color.
PLY files can export color as well (you need to bake it to vertex colors first), but there's no built-in batch export for PLY files in Blender at the moment.
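
In case someone wants to script that Blender step, here's a rough Python sketch of the idea. Paths, the grid name, voxel amount and frame range are all placeholders, and the OBJ exporter is `bpy.ops.wm.obj_export` in Blender 3.1+ (older versions use `bpy.ops.export_scene.obj`), so treat it as a starting point, not the exact setup I used:

```python
# Rough Blender Python sketch of the workflow above
# (paths, grid name, voxel amount and frame range are placeholders).
import bpy

# Import the VDB sequence (Blender detects the sequence from one file).
bpy.ops.object.volume_import(filepath="/path/to/sim_0001.vdb")
volume = bpy.context.object

# Single-point mesh that carries the Volume to Mesh modifier.
mesh = bpy.data.meshes.new("carrier")
mesh.from_pydata([(0.0, 0.0, 0.0)], [], [])
carrier = bpy.data.objects.new("carrier", mesh)
bpy.context.collection.objects.link(carrier)

mod = carrier.modifiers.new("v2m", type='VOLUME_TO_MESH')
mod.object = volume
mod.grid_name = "density"            # only the density grid is meshed
mod.resolution_mode = 'VOXEL_AMOUNT'
mod.voxel_amount = 64                # adjusts the mesh resolution

# Export one OBJ per frame.
scene = bpy.context.scene
bpy.ops.object.select_all(action='DESELECT')
carrier.select_set(True)
for f in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(f)
    bpy.ops.wm.obj_export(
        filepath=f"/path/to/out/frame_{f:04d}.obj",
        export_selected_objects=True,
    )
```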

Then I import the OBJ sequence into TD for instancing.
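
On the TD side, swapping the OBJ per frame and hooking up instancing can look roughly like this (operator names like `geo1` and `sopto1` are just assumptions for the sketch):

```python
# TouchDesigner Python sketch (operator names are assumptions).
# An expression like this on the File In SOP's File parameter
# cycles a 60-frame OBJ sequence:
#   f"C:/objs/frame_{int(absTime.frame) % 60 + 1:04d}.obj"

geo = op('geo1')
geo.par.instancing = True        # enable instancing on the Geo COMP
geo.par.instanceop = 'sopto1'    # SOP to CHOP carrying the point data
geo.par.instancetx = 'tx'        # map channels to instance translates
geo.par.instancety = 'ty'
geo.par.instancetz = 'tz'
```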

If anyone wants to try this themselves, you can download a 60-frame version of the
OBJ sequence I created. link.

Edit:
Here is another OBJ sequence of an animated character to try out.
link

This is another example using the same technique, where I exported a rig from Blender and mixed it with a Flex simulation in TouchDesigner.

It would be great to be able to import VDB files directly into Touch. Any chance this will be possible in the near future?

… I forgot to mute the sound in the first video before recording, so… if you don't like it, mute it.
If you like it, you can buy it here.


Nice! Another option I played with briefly was to convert the VDB to pixels, export an .exr sequence from Houdini, and then load that into a 3D texture in TD.
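
Not the Houdini setup itself, but the same idea sketched with the OpenVDB Python bindings: copy the density grid into a dense array and write it out slice by slice. File and grid names are placeholders, and writing .exr assumes an EXR-capable imageio backend is installed:

```python
# Sketch: dump a VDB density grid as a stack of float .exr slices,
# which could then be assembled into a 3D texture in TD.
# File/grid names are placeholders.
import numpy as np
import pyopenvdb as vdb
import imageio

grid = vdb.read("sim_0001.vdb", "density")
(i0, j0, k0), (i1, j1, k1) = grid.evalActiveVoxelBoundingBox()
shape = (i1 - i0 + 1, j1 - j0 + 1, k1 - k0 + 1)

# Copy the sparse VDB into a dense numpy array.
dense = np.zeros(shape, dtype=np.float32)
grid.copyToArray(dense, ijk=(i0, j0, k0))

# One .exr per z-slice (assumes an EXR-capable imageio plugin).
for z in range(shape[2]):
    imageio.imwrite(f"slice_{z:04d}.exr", dense[:, :, z])
```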

If someone is bored, that would be something fun to explore as a C++ TOP.

Edit: according to the comments in the link above:
“NanoVDB primarily targets non-dynamic workflows like simulation-sourcing, rendering and collision-detection, as the VDB structure is streamed into a pointerless buffer.
However, GVDB’s dynamic functionality, such as points-to-grid and voxelization, will be handled by an, as yet unannounced, project which will be compatible with NanoVDB workflows. Stay tuned.”

More like this one:

Though it seems they have something new in the works.


I've read a little about GVDB, but not enough to know how it works.
I figured that VDB files must have a smart built-in way to compress their data.
Take the tornado scene, for example: a single-frame VDB is about 6 MB in size and contains everything from
density to smoke, fire, etc. Exporting the same frame to an OBJ (density only) without reducing the geo (points to grid) is about 45 MB, which is a lot more for only part of the data.

Well, there's the hierarchical sparse data structure, plus I think it's compressed on top of that? But yeah, very efficient.
Probably more details somewhere here: OpenVDB - About
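
A quick way to see the sparsity directly is to compare the active voxel count against the dense bounding-box volume, e.g. with the OpenVDB Python bindings (file/grid names are placeholders):

```python
# Sketch: why VDB files stay small - only active voxels are stored.
# File and grid names are placeholders.
import pyopenvdb as vdb

grid = vdb.read("tornado_0001.vdb", "density")
lo, hi = grid.evalActiveVoxelBoundingBox()

dense_voxels = 1
for a, b in zip(lo, hi):
    dense_voxels *= b - a + 1

print("active voxels:    ", grid.activeVoxelCount())
print("dense bbox voxels:", dense_voxels)
# The active count is typically a small fraction of the dense volume,
# and the leaf data is compressed on disk on top of that.
```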

There's the link I was thinking about:

@ 17 min - an example where they show a NanoVDB file used as an emitter source for NVIDIA Flow.
@ 22 min - if I understand this correctly, they can access and tweak parameters of points inside a VDB file/sequence, and to do this they have written their own dedicated language (AX); sounds cool.


Hi Vincent,
I am curious whether slicing through a point cloud on the TOP level (like from an RGBD camera), storing the slices in a 3D texture, and doing the raymarching trick you showed at the TD Summit is a good idea? That would not be a "real" real-time solution, but close to it? On the other hand, I am trying to implement convex hulling…
Shaoyu

A quick example of how well VDB works in real time:
a scene I created in EmberGen, which uses VDB to drive its fluid engine.
I use a volume with 33 million voxels, and this runs at 60 fps on an RTX 2080 Super.
The emitter is a LOD0 MetaHuman mesh I exported from Blender.