Hi there,
as I start figuring out which direction to take in my current project, I need to orient a few ideas and will be posting some questions related to point clouds.
This first one is about handling big point clouds, loading, and memory.
I'm asking because I'm preparing a small scanning campaign of a space with a LiDAR-type scanner.
Would it be better to have one HUGE prepared file (well decimated, saved as EXR for x, y, z and other spatial features), loaded once at launch and "truncated" as we travel through the point cloud?
OR
Smaller files, maybe all loaded at launch time too (I have a lot of RAM) and progressively "enabled" for instancing? I mean: when I'm in one part of the space, I run all processes only on that part of the points; when I'm elsewhere, I run everything on that other part (the parts that don't need to be seen are neither rendered nor calculated, etc.). A rough sketch of what I mean is below.
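Just to make that second option concrete, here is a minimal sketch in plain numpy (no particular toolkit assumed; `points` is an N×3 array, and `chunk_size` / `active_radius` are placeholder values I made up):

```python
import numpy as np

def split_into_chunks(points, chunk_size=5.0):
    """Group points by the grid cell they fall into (cell edge = chunk_size, in scene units)."""
    cell_ids = np.floor(points[:, :3] / chunk_size).astype(np.int64)
    chunks = {}
    for cid in np.unique(cell_ids, axis=0):
        mask = np.all(cell_ids == cid, axis=1)
        chunks[tuple(cid)] = points[mask]
    return chunks

def active_chunks(chunks, cam_pos, chunk_size=5.0, active_radius=1):
    """Return only the chunks within active_radius cells of the camera's cell."""
    cam_cell = np.floor(np.asarray(cam_pos) / chunk_size).astype(np.int64)
    return [pts for cid, pts in chunks.items()
            if np.all(np.abs(np.array(cid) - cam_cell) <= active_radius)]
```

So everything would be split once at launch, and only the chunks returned by `active_chunks` would be fed to instancing and further processing each frame.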
As far as I know, point cloud storage is tricky when we want to do proximity (x, y, z) operations, because the information in the point cloud (or in EXR textures) is not sorted/organized for that. And actually "well sorted" doesn't mean much on its own… it really depends on how we explore the cloud, I guess.
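By "organized for proximity" I mean something like building a spatial index once (which should be fine since the cloud is static), for example a KD-tree via scipy. This assumes scipy is available and is only meant to illustrate the kind of precomputation I have in mind:

```python
import numpy as np
from scipy.spatial import cKDTree

# points: N x 3 array of x, y, z (loaded once at launch from the scan / EXR)
points = np.random.rand(1_000_000, 3) * 100.0   # placeholder data

tree = cKDTree(points)   # built once, since the cloud is static

# later, per frame: indices of all points within 2 units of a query position
idx = tree.query_ball_point([50.0, 20.0, 10.0], r=2.0)
nearby = points[idx]
```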
Basically, I want a traveling camera inside the cloud (let's imagine the point cloud is static, which probably means we can precompute some things!). I'd like to alter the cloud and grab features from it, but only inside a bounded space just in front of the camera.
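For the "bounded space just in front of the camera" part, the naive version I'm picturing is a box in camera space: transform the points into the camera's frame and keep the ones inside a box extending forward. A rough numpy sketch, assuming a world up axis of +Y and placeholder box dimensions:

```python
import numpy as np

def points_in_front_box(points, cam_pos, cam_forward,
                        depth=10.0, half_width=3.0, half_height=3.0):
    """Keep only points inside a box extending `depth` units in front of the camera."""
    f = cam_forward / np.linalg.norm(cam_forward)
    # simple camera basis (assumes the camera isn't looking straight up or down)
    r = np.cross(f, [0.0, 1.0, 0.0]); r /= np.linalg.norm(r)
    u = np.cross(r, f)
    rel = points[:, :3] - cam_pos
    x, y, z = rel @ r, rel @ u, rel @ f
    mask = (z > 0) & (z < depth) & (np.abs(x) < half_width) & (np.abs(y) < half_height)
    return points[mask]
```

The question is really whether this filtering should happen against one big resident cloud or only against the currently enabled chunks.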
Any ideas around this would help a lot.
And thanks @jacqueshoepffner for the very nice discussion we had the other day. I'm trying to set up the small prototype I was talking about.