MUTEK Japan POPs Retrospective

Hi Guys,

We've had a day to chill after MUTEK, but we used POPs heavily in the workflow for our collaboration with Kara-Lis Coverdale, and it was interesting to build a long (45 min) set with them.

The concept was creatures after dark, and we used analysis of footage and 3D scans. We've done a similar project before using TOPs for the AT&T Discovery District LED screen in Dallas, and the first thing that stands out is how much quicker we can develop things in TouchDesigner and how much less reliant we are on GLSL shaders in the new build (we have exactly one, and it's the optical flow component). It's absolutely incredible, and I think it'll really help artists who aren't necessarily super technical push the boundaries more.

Some fun techniques we used…

Scene 01 - Evening

We used found footage here and ran it through ComfyUI and Depth Anything to generate depth maps from the images. When using POPs with footage it's very useful to have segmentation or depth, and since depth images can now be generated pretty much in real time, I think that'd be a nice-to-have when working with POPs and footage. We had two NotchLC edits running simultaneously, one carrying the colour info and one the depth info, and we used a Delete POP to carve up the depth scan and create particles from the carved depth. Any of the ComfyUI pre-processors for ControlNets are actually really useful for non-AI tasks in TouchDesigner too.
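
For anyone curious what the carve looks like in data terms, here's a minimal plain-numpy sketch (not our actual network, and not TD-specific) of turning a depth map into particle positions and keeping only points past a threshold, which is roughly what the Delete POP was doing on the depth scan:

```python
import numpy as np

def depth_to_points(depth, near=0.2):
    h, w = depth.shape
    # Pixel grid in normalized -1..1 space, similar to how a TOP to POP
    # lays points out across the image plane.
    xs, ys = np.meshgrid(np.linspace(-1, 1, w), np.linspace(1, -1, h))
    pts = np.stack([xs, ys, depth], axis=-1).reshape(-1, 3)
    # The "carve": keep only points beyond a depth threshold.
    return pts[pts[:, 2] > near]

depth = np.random.rand(270, 480).astype(np.float32)  # stand-in for the depth movie
points = depth_to_points(depth)
```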

Scene 02 - Particle Forest

We captured a bunch of Gaussian splats in Ome, near Tokyo, and converted them to PLY files to bring into TouchDesigner. Unfortunately, loading PLYs with a million points currently causes a playback stutter in both TOPs and POPs, as I mentioned in another thread, so we made a small converter tool in TouchDesigner that takes the PLY in as a POP, clips and transforms it to the correct coordinate space, and then saves the position and colour data out as DDS files. We then used the Movie File In TOP to bring those in, and we'd only get one frame of stutter (if even that) when loading a new scan. From there, a TOP to POP converted the data back over. (FYI - if I have a scan set to RGB XYZ with both position and active enabled, it automatically sets all actives to 0. It'd be great if there was a position-only option. No biggie though, as we just set it to custom and did it manually.)
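
As a rough illustration of the converter's save step (operator and file names below are placeholders, and this assumes the clipped and transformed position and colour data have already been brought back into TOPs):

```python
# TOP.save() writes the TOP's current contents to disk; saving as DDS
# is what made the fast Movie File In loading possible for us.
pos = op('plycloud_pos')   # TOP holding XYZ as RGB
col = op('plycloud_col')   # TOP holding point colour

name = 'ome_scan_01'       # hypothetical scan name
pos.save(f'scans/{name}_pos.dds')
col.save(f'scans/{name}_col.dds')
```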

The biggest challenge is blending between point clouds that aren't sorted, as it looks a bit of a mess. A nice way around this would be the ability to compare one point cloud to another, find each point's nearest neighbour, and re-sort the second cloud to match the first to get nice transitions. This might be achievable with the Neighbor POP... I'd be interested if anyone knows how that might work. In the end we faked it by blending to noise and then blending back. It looked okay-ish but could be better.
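
Here's a sketch of that nearest-neighbour re-sort in plain numpy/scipy (outside TD, and with the caveat the Neighbor POP reply below raises: multiple points in A can claim the same point in B, so this is only approximate):

```python
import numpy as np
from scipy.spatial import cKDTree

def resort_to_match(a, b):
    # a, b: (N, 3) position arrays of equal length
    tree = cKDTree(b)
    _, idx = tree.query(a)   # idx[i] = index of the b-point nearest a[i]
    return b[idx]            # b reordered so point i sits near a[i]

a = np.random.rand(1000, 3)
b = np.random.rand(1000, 3)
b_matched = resort_to_match(a, b)
halfway = 0.5 * a + 0.5 * b_matched  # a frame mid-transition
```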

We wanted volumetric light rays and nice bloom, so we dev'd up a Proximity POP setup that took all the points in our scan plus a single point at the light source, and then gave the lines it created a low opacity to get nice light rays. Proximity connections by attribute or group would be great, as we found that if our light got too close we'd get connections between the scan points themselves. We just kept our lights very far away to get around that issue.
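
The filtering we had in mind, sketched in plain numpy: only pairs where one end is the light survive, so nearby scan points never connect to each other:

```python
import numpy as np

light = np.array([0.0, 4.0, 0.0])          # single light-source point
points = np.random.rand(5000, 3) * 2 - 1   # stand-in for the scan
max_dist = 6.0

d = np.linalg.norm(points - light, axis=1)
rays = [(light, p) for p in points[d < max_dist]]  # one low-opacity line each
```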

For bloom, we found Lucas Morgan's bloom component was much more efficient than the Bloom TOP (or at least its reported cook times were), so we ended up using that component instead.

We created the little audio-reactive glowing spheres in that effect with a Delete POP set to *:1000 to down-res our POPs. A random resample POP would make this a little easier, I guess, but otherwise it was great.
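
For comparison, a random pick of the same point count in plain numpy (roughly what a random resample might do), which also avoids any striping a regular stride can produce on sorted clouds:

```python
import numpy as np

points = np.random.rand(1_000_000, 3)  # stand-in cloud
keep = len(points) // 1000             # same count the *:1000 pattern gave us
idx = np.random.default_rng(0).choice(len(points), size=keep, replace=False)
spheres = points[idx]                  # one glowing sphere per kept point
```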

Scene 03 - Fauna

We did a fun little trick where we took depth maps of animals (again in ComfyUI), culled out the background, and then blended between these fake scans. Again we used DDS files, but this time we also came up with a fun little technique: deleting all points outside a thin bounding box, moving that box down the scan in a quantised fashion, and then using the Analyze POP and a trail to get the centre point along the animal's scan data. It ended up creating these nice flowing data graphics that follow the contour of the creature. Really easy to do, and something that would probably have been a bit of a pain before.
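
The slab sweep is simple enough to sketch in plain numpy (this mirrors what the Delete POP + Analyze POP combo was doing, with a trail accumulating the centroids over time):

```python
import numpy as np

def slab_centroids(points, steps=32, thickness=0.05):
    y_min, y_max = points[:, 1].min(), points[:, 1].max()
    centroids = []
    for y in np.linspace(y_min, y_max, steps):          # quantised sweep down the scan
        inside = points[np.abs(points[:, 1] - y) < thickness / 2]
        if len(inside):
            centroids.append(inside.mean(axis=0))       # centre point of the slab
    return np.array(centroids)

cloud = np.random.rand(100_000, 3)  # stand-in for the animal scan
path = slab_centroids(cloud)        # traces the creature's contour
```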

Issues

We had a couple of issues during the show. One isn't relevant here (we just lost audio signal), but the other may be.

Because particle POPs don't yet pause when you pause the timeline (I appreciate it's the very first version, so this is probably being amended), we had to toggle cooking on and off on COMPs to get performance back. When turning cooking back on for a particular COMP, the Movie File In TOP issue I reported earlier would occur: the file wouldn't load, and to reload it I had to clear the file field and then re-enter the filename to refresh the TOP. We started getting bluescreens on that PC today, though, and I'd heard the latest Nvidia drivers (which we ran for the show) have issues, so I've rolled back to a driver from July and will test again to see if that fixes it. We found the issue in rehearsal, so we put a manual refresh in for the show to make sure it kicked in and worked.
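
For reference, the manual refresh was just this (placeholder operator name): clear the file parameter, then set it back, which forces the TOP to load the file again.

```python
mov = op('moviefilein1')      # hypothetical name for the stuck Movie File In TOP
path = mov.par.file.eval()
mov.par.file = ''             # clearing the field...
mov.par.file = path           # ...and re-entering the filename reloads it
```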

We built a camera blending tool that takes the CameraViewport COMP's camera and stores its localTransform into a DAT. There seems to be some issue with getting the local transform of a camera and then decomposing that transform; I'll put together a simple example of this for another thread. The camera appears to be in the correct transform position, but the FOV feels wrong... the camera viewport is set to 45, as are our cameras, so I'm not sure why that would be.
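
A minimal sketch of the store/decompose step, assuming a camera COMP named 'cam1'; localTransform returns a tdu.Matrix, and decompose() splits it into scale, rotate, and translate tuples:

```python
cam = op('cam1')
m = cam.localTransform
s, r, t = m.decompose()       # scale, rotate, translate
print('scale', s, 'rotate', r, 'translate', t)
# FOV isn't part of this matrix; it lives on the camera's own
# parameters (cam.par.fov), which may be where our mismatch crept in.
```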

On to the next project

We're now working on another POPs project where we're interested in bringing DICOM files into TouchDesigner. It's the main format used for CT scans, and currently the workflow for bringing them in is to convert them to an image sequence or PLY using other software, convert that to DDS files, and bring them in through a TOP. It'd be cool to have more options for other volumetric 3D data types there too, like VDBs (the most obvious choice, I guess).
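
As a sketch of the current workaround's first step, using the external pydicom library (not part of TouchDesigner; folder names are placeholders): read each CT slice and write out an image sequence that TD can then pick up and convert.

```python
import numpy as np
import pydicom
from pathlib import Path
from PIL import Image

src, dst = Path('ct_scan'), Path('slices')
dst.mkdir(exist_ok=True)
for i, f in enumerate(sorted(src.glob('*.dcm'))):
    ds = pydicom.dcmread(f)
    px = ds.pixel_array.astype(np.float32)
    px = (px - px.min()) / max(np.ptp(px), 1e-6)       # normalize to 0..1
    img = Image.fromarray((px * 65535).astype(np.uint16))
    img.save(dst / f'{i:04d}.png')                     # 16-bit PNG slice
```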

If we could somehow get a way to store and load POPs in an asynchronous fashion, it'd be amazing.

Another thought was a new MAT that could make volumes with opacity easier to render in a way that feels cloud-like and dense - T3D's style of rendering volumes, to be precise. I'll be looking into this more on my next project, so I can report back with findings there too.

Here’s some images of the networks and looks:
https://drive.google.com/drive/folders/1of-lZGczTgQy8GafE2j5VrFmPENOFlih?usp=sharing

P.S. I'd be happy to walk through the project on a call if anyone wants further details.


Thanks for the feedback and for sharing some screenshots! Glad things mostly worked out with POPs.

This came up recently, we’ll add it for convenience!

Good question, I’ll give that a shot and report back

That’s coming soon!

Noted, definitely on our list


That's really beautiful work. ❤️


I thought about this a bit more. What the Neighbor POP allows you to do now with two inputs is match the first input to the second, but it only works if it's the same point cloud with a different point order.
Otherwise the issue is that you can get the same id multiple times: if one point from the second input is the closest one to multiple points in the first input, it breaks.

So for your problem, I think the most helpful approach is to sort both point clouds along a vector that's perpendicular to the camera plane - sorting by X or Y with the camera looking down Z seems to work well. It might just work because of my contrived example, though.
See the attached toe!

BlendPCSortNeighborPOPMatchIds.toe (7.8 KB)
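
The same sort idea in plain numpy, for anyone who'd rather read it than open the toe:

```python
import numpy as np

# With the camera looking down Z, sort both clouds along X (or Y) so
# matching indices end up spatially close before blending.
a = np.random.rand(1000, 3)
b = np.random.rand(1000, 3)
a_sorted = a[np.argsort(a[:, 0])]
b_sorted = b[np.argsort(b[:, 0])]
halfway = 0.5 * a_sorted + 0.5 * b_sorted
```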


I'm also exploring blending point clouds (in my case, specifically focused on splats). I have tried sorting along a vector perpendicular to the camera (and many other sorting methods), which all break down for more complex scenes, especially if the dimensions are quite different. Even when the sorting is 'working', there are still points that need to be killed or created depending on whether the current point cloud is larger or smaller than the one being transitioned to.

One way I thought about fixing this was to use GLSL and hardcode the output to a specific dimension, then blend through GLSL to treat birth/death as special cases. This kind of works, but it's still not ideal.

The best way I have thought of so far, though I haven't implemented it yet, is a custom spatial hash that would sort points from both clouds into voxels; each point would then search its own voxel for points in the other cloud that it could 'target' during the transition. However, this also has a lot of issues (how to ensure every point is targeted, for example), so I don't know.
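
A rough numpy sketch of that voxel idea (just the matching step, not a working tool; unmatched points, i.e. the birth/death cases, are simply left with no target here):

```python
import numpy as np
from collections import defaultdict

def voxel_key(p, size):
    return tuple(np.floor(p / size).astype(int))

def match_by_voxel(a, b, size=0.1):
    buckets = defaultdict(list)
    for j, p in enumerate(b):
        buckets[voxel_key(p, size)].append(j)
    targets = np.full(len(a), -1)          # -1 = no target found (death case)
    for i, p in enumerate(a):
        bucket = buckets[voxel_key(p, size)]
        if bucket:
            targets[i] = bucket.pop()      # each B point claimed at most once
    return targets

a = np.random.rand(2000, 3)
b = np.random.rand(1500, 3)
targets = match_by_voxel(a, b)             # a[i] blends toward b[targets[i]]
```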