Big point cloud progressive loading and point cloud strategies?

Hi there,
As I start figuring out which way to go in my current project, I need to orient a few ideas, so I'll post some questions related to point clouds.

This first one is about handling a big point cloud, loading and memory.

I'm asking myself these questions while preparing a small space-scanning campaign with a lidar-type scanner.

Would it be better to have ONE HUGE prepared file (well decimated, saved as EXR with x, y, z and other spatial features), loaded once at launch and "truncated" as we travel through the point cloud?

OR

Smaller files, maybe all loaded at launch time too (I have a lot of RAM) and progressively "enabled" instancing-wise? I mean: when I'm in this part of the space, I run all the processing only on that part of the points; when I'm elsewhere, I run it on another part (the parts that don't need to be seen are neither rendered nor calculated, etc.).

As far as I know, point cloud storage is tricky when we want to do proximity (x, y, z) operations, since the information in the point cloud (or EXR textures) is not sorted/organized for that. And actually "well sorted" doesn't mean much on its own… it really depends on how we explore the cloud, I guess.

Basically, I want a traveling camera inside the cloud (let's imagine the point cloud is static, which probably means we can do some calculations beforehand!). I'd like to alter the cloud and grab features from it, but only inside a bounded space just in front of the camera.

Any ideas around this would help a lot.
And thanks @jacqueshoepffner for the very nice discussion we had the other day. I'm trying to set up the small prototype I was talking about.


I'm interested in this too. Something like Potree can handle billions of points, but it's all JavaScript/WebGL. I haven't seen a good C++ library yet.


I don't even know how to reorganize the points in my texture.
Probably I'd need to decide statically on a path in order to traverse the texture along a specific route…


Porting it would be so powerful.

Sounds like a really interesting project. I haven't really tried something like this in Touch yet, but I'd probably look at something like an octree segmentation: put the points from each cube of space into a separate texture, then do some frustum culling to check which areas are in view at any time and only render/process those.

I don't know how dense/sparse your data is, but when building the textures ahead of time you could just set a max texture size based on reasonable EXR load times and then keep splitting your octree until every node fits into that texture size. I think you can use atomic counters in the GLSL TOP to help selectively pull points from one texture and pack them into another.
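Very roughly, the per-cell selection pass could be a simple mask shader along these lines (an untested sketch just to illustrate; the uCellMin/uCellMax uniforms holding the cell's bounding box are my own naming, and I'm assuming the usual GLSL TOP conventions):

uniform vec3 uCellMin;
uniform vec3 uCellMax;

out vec4 fragColor;

void main()
{
    // each pixel of the source EXR holds one point's x, y, z in rgb
    vec3 p = texture(sTD2DInputs[0], vUV.st).xyz;

    // 1.0 if the point lies inside this cell's bounding box, 0.0 otherwise
    float inside = all(greaterThanEqual(p, uCellMin)) &&
                   all(lessThanEqual(p, uCellMax)) ? 1.0 : 0.0;

    fragColor = TDOutputSwizzle(vec4(inside));
}

The packing step would then pull only the masked pixels into the smaller per-cell texture.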

As far as libraries go, I’ve used the Point Cloud Library (PCL) for offline point sorting before in C++, but not in a real-time application.

I haven't worked on segmentation like this yet.
I guess it would reorganize the points according to their x, y, z values.

If it's an offline process, that would be OK.
I don't know if I'd be forced to have a static path; I mean, I don't know whether octree segmentation would be directly tied to a specific path.

I'd really like to build something like this as a prototype, but I wouldn't know where to start.
Maybe we could work on a small prototype together with other interested people?

Yeah, if you've got your source points loaded into a texture in Touch, then you can run a shader to select them by x, y, z position and pack them into new, smaller textures. You'd probably run some sort of loop (maybe in Python) to count the points in a specific area, decide whether you need to split it into smaller areas, capture the points, save the EXR, etc.

Once your points are broken down by cells in your graph (octree or whatever), you can use a general culling algorithm to decide which cells are within view of your camera and only load/process those points. You shouldn't need to use a fixed path, but you might need to put limits on movement speed or view distance to keep performance consistent.

Before you dig too far into it, do you know how many points you’re potentially dealing with and what the bottlenecks are? Is it rendering performance, texture size, GPU mem, CPU mem?


I would be interested in participating in such an experiment, since I also haven't found an appropriate way of dealing with large point cloud sets in realtime.

For tests I was using large point cloud sets from Global Digital Heritage's Sketchfab assets, since these are usually very well scanned and cleaned up, and they're always huge datasets.

However, the issue I was mostly running into was running out of GPU VRAM, since calculating all the point instances and their manipulations in realtime was challenging my RTX 3080. A good way to smooth things out is using pre-filled Cache TOPs for animations; unfortunately that won't be 100% realtime, but it at least gives some headroom for other GPU tasks.

What also helps a lot is using point sprites for the instances instead of 3D geometries such as spheres or boxes: these also save some VRAM, and the visual difference is not that big, tbh.


Hi Rob, I know I may be putting the cart before the horse…
But as I'm new to TD and to point clouds (!), I would just like some directions to follow in order to get started.
It would need to be benchmarked. Actually, I can have custom point clouds in that project and they could be prepared on purpose. I can decimate them or not, targeting no more than 3M points, or 1M, or whatever.

The system I use is an i9-10850K CPU @ 3.60 GHz, 64 GB RAM and an RTX 2080 Super.

I'd like to know if you could point me in the right direction for this.
I didn't know we could run a shader that takes an input texture of one size and outputs a smaller one.

Actually, the idea would be:
EXR with x, y, z layers → shader → smaller EXR.

The shader would also be fed a uniform vec3, which would be the position of the camera, for instance.

It would check the distance between each texture pixel (point) and the camera, and write the pixel if the distance is smaller than n, or zero if it's greater. That output would then drive the instancing of the point geometry.
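Something like this is what I imagine (just a sketch to show the idea, not tested; uCamPos and uMaxDist would be the uniforms I feed in):

uniform vec3 uCamPos;    // camera position fed from the network
uniform float uMaxDist;  // the "n" distance threshold

out vec4 fragColor;

void main()
{
    vec4 point = texture(sTD2DInputs[0], vUV.st);   // x, y, z stored in rgb
    float d = distance(point.xyz, uCamPos);

    // keep the point if it is close enough to the camera, otherwise write zero
    fragColor = TDOutputSwizzle(d < uMaxDist ? point : vec4(0.0));
}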

But in that case, the texture would remain "big".

@julien I think I have an example file lying around where I was cropping larger point clouds… I'll take a look and get back to you on that. I believe it was using atomic counters to track how many pixels were selected from the larger texture. You can also do it by converting to CHOPs, filtering the samples and then uploading back to a TOP, but that will be slower.

To make sure we're on the same page, my thought was that those smaller files would actually be cropped based on world-space boundaries rather than a specific camera position. The camera already crops out points that are outside of its view frustum, so I'm not sure you'd get much overall performance benefit from just pre-cropping the full point texture prior to instancing.

The idea I had in mind was that you'd have a handful of Geo COMPs instancing the smaller point cloud textures closest to the camera, drawn from a set of potentially dozens or hundreds of textures that make up the full cloud. I think this approach only makes sense if you're planning on rendering just a small fraction of the overall cloud at any one time.


Hi @robmc, we're totally thinking along the same lines.

Frustum culling already works fine natively in that case too, I guess.

I'd appreciate it if you could share that thing with atomic counters and GLSL (a compute shader, I guess).
It would be very useful and really interesting (and instructive).

Doing operations on points "around the camera" is something I'd aim at.
Indeed, I'd like to calculate distances and densities around the camera (i.e. in a cropped part of the texture, IF we can rearrange it according to x, y, z criteria).

Getting a prototype of this, even small, would be really interesting.

Hey @julien, I’ve attached an example that shows two ways of extracting parts of one cloud into a smaller one. Both examples use a mask created by glsl3 that writes a 1 for pixels to include and a 0 for those to skip.

For the GPU approach, I did find my old example using atomic counters, but it doesn’t seem 100% reliable. The main shader is in glsl2 and for each pixel of the output it’s basically doing a search on the input to find the next valid pixel. The atomic counter is supposed to keep the parallel searches from interfering. Analyze1 is just counting the number of valid pixels to determine an output size.
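In rough terms, the packing step is doing something like this (a simplified compute-style sketch of the idea, not literally the shader in the file; names like mTDComputeOutputs and the counter binding are assumptions that would need to match your own GLSL TOP setup):

layout (local_size_x = 8, local_size_y = 8) in;

// running output index shared by all invocations
layout (binding = 0) uniform atomic_uint writeIndex;

uniform int uOutWidth;   // width of the packed output texture

void main()
{
    ivec2 src  = ivec2(gl_GlobalInvocationID.xy);
    vec4 point = texelFetch(sTD2DInputs[0], src, 0);     // positions
    float keep = texelFetch(sTD2DInputs[1], src, 0).r;   // 1/0 mask

    if (keep > 0.5)
    {
        // claim the next free slot; the atomic increment keeps parallel
        // invocations from writing to the same output pixel
        uint idx  = atomicCounterIncrement(writeIndex);
        ivec2 dst = ivec2(int(idx) % uOutWidth, int(idx) / uOutWidth);
        imageStore(mTDComputeOutputs[0], dst, point);
    }
}

Note that the order in which slots get claimed isn't deterministic, so the packed points can come out in a different order from run to run.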

The other example uses the TOP to CHOP and CHOP to TOP operators to do all of the filtering on the CPU, and should give you an exact answer.

Hope that helps.

pointextract.toe (5.1 KB)


Hi Rob,
thanks a lot for your answer; I'm replying briefly and belatedly.
I haven't had a chance (time) to dig into it deeply yet.
I'll do it ASAP.

I have many questions about point clouds.
I will post a couple of separate questions to make them easier to reference, answer and discuss.

In this regard I recommend having a look at MeshLab. It's free and I use it as a Swiss-army knife for meshes and point clouds; it's very handy for cleaning up or resampling your data.

Also have a look at PLY files; you can import these as TOPs (Point File In) and process everything in texture shaders on the GPU. With this method a 4K texture gives you roughly 16 million points (one point per pixel).
I did a tutorial recently that shows this workflow; here is the link.


Hello @ship_trap, I've already used MeshLab and it's a really interesting tool.
I'll check out your workflow ASAP.

thanks a lot.

Hi all,

May I suggest some of @paketa12's TOP methods for sorting points? In his latest Plexus example he detects proximity within a specific distance from each point to every other point (a rough sketch of the brute-force idea is below). That was a big step for me in managing collisions on my point clouds per point (I haven't optimized it yet, and it needs a good GPU, aha :) ). You could also use Richard Burns' method (sorry, no idea how to find his @handle, I spent five minutes trying with no luck) to arrange points in the 2D texture according to their position in space, see: 1/3 TouchDesigner Vol.032 Creative Techniques with Point Clouds and Depth Maps - YouTube

Just pointing out: I forgot this may only be available on Patreon and Vimeo.
It's in section 3 of 3 of TDSW's Patreon stream, like the paketa12 one actually (apologies to those who can't access them).
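For anyone curious, the brute-force version of that per-point proximity pass looks roughly like this (my own sketch, not paketa12's actual network; uRadius and the texture layout are assumptions):

uniform float uRadius;   // neighbourhood distance

out vec4 fragColor;

void main()
{
    vec3 p    = texture(sTD2DInputs[0], vUV.st).xyz;
    ivec2 res = textureSize(sTD2DInputs[0], 0);

    // count how many points in the cloud fall within uRadius of this one (including itself)
    float neighbours = 0.0;
    for (int y = 0; y < res.y; y++)
        for (int x = 0; x < res.x; x++)
        {
            vec3 q = texelFetch(sTD2DInputs[0], ivec2(x, y), 0).xyz;
            if (distance(p, q) < uRadius)
                neighbours += 1.0;
        }

    // keep the point and store its local density estimate in alpha
    fragColor = TDOutputSwizzle(vec4(p, neighbours));
}

It's O(N²) over the whole texture, which is why it needs a good GPU on big clouds.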

Hi @robmc

I hope you are doing well,
I decided to test your shader (atomic counter) TOP solution. It works great for geometry, but I'm not managing to match the color TOP to the new texture; the colors get scrambled. I'm attaching my experiment file here:
PointCloudReduceSIZEbyCut.zip (14.2 KB)
Cheers, P

Sorry for the delay… I haven't had a chance to experiment with it yet, but I suspect the problem is that the order of operations isn't guaranteed between the color and position passes, so you're getting different arrangements of pixels in each output.

You can probably solve this by doing everything in a single shader that samples both the position and color together and then writes them out to multiple buffers. There is some information here on how to use multiple output buffers in the GLSL TOP: Write a GLSL TOP - Derivative
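In rough terms, the fix is to make the packing step write both attributes when it claims a slot, something like this (same caveats and assumed names as my earlier sketch; here input 0 = positions, input 1 = colors, input 2 = the mask, with the TOP set to two output buffers):

layout (local_size_x = 8, local_size_y = 8) in;
layout (binding = 0) uniform atomic_uint writeIndex;
uniform int uOutWidth;

void main()
{
    ivec2 src = ivec2(gl_GlobalInvocationID.xy);

    if (texelFetch(sTD2DInputs[2], src, 0).r > 0.5)   // mask says keep this point
    {
        uint idx  = atomicCounterIncrement(writeIndex);
        ivec2 dst = ivec2(int(idx) % uOutWidth, int(idx) / uOutWidth);

        // one claimed slot, both attributes: position and color can't get out of sync
        imageStore(mTDComputeOutputs[0], dst, texelFetch(sTD2DInputs[0], src, 0));
        imageStore(mTDComputeOutputs[1], dst, texelFetch(sTD2DInputs[1], src, 0));
    }
}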

Hope that helps.