GPU Usage with higher Pointcloud setups


I’ve got an opportunity to get a higher-end GPU for my current system, which is based on an i7-7700K running at 5 GHz alongside a GTX 1080 Ti SC2.

I am planning on getting either a Quadro RTX 5000 or a TITAN RTX, and my main purpose is to work with higher-count point clouds that get manipulated in real time.

I realized that my current GPU starts to max out its memory when I load higher-count point clouds, which leads to significant frame drops.

So my first thought was to go with the TITAN RTX, but as I also work on projects with 4+ screen setups, I tend more towards the Quadro.

So what do you think? Is 16 GB of memory enough for real-time point cloud manipulation at around 30 fps?

Currently, what is your bottleneck as you increase the point count in your setup: memory or processing power? GPU memory usage can vary a lot depending on how you build things (16-bit vs. 32-bit floats, how many TOPs you need, the resolution of your pipeline, etc.), so you may be able to optimize your project once you start running out.
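To get a rough feel for the memory side, here is a minimal back-of-the-envelope sketch of the VRAM footprint of a single point-cloud texture. The function name and defaults are illustrative (it assumes each point is stored as an RGBA texel), not from any particular API, and it ignores driver overhead and intermediate buffers in your pipeline:

```python
def pointcloud_vram_bytes(num_points, channels=4, bytes_per_channel=4):
    """Rough VRAM footprint of one point-cloud texture.

    channels:          components stored per point (e.g. 4 for RGBA/XYZW)
    bytes_per_channel: 4 for 32-bit float, 2 for 16-bit half float
    """
    return num_points * channels * bytes_per_channel


# 10 million points as 32-bit float RGBA: ~160 MB per copy
full_precision = pointcloud_vram_bytes(10_000_000)

# The same cloud at 16-bit half float: ~80 MB per copy
half_precision = pointcloud_vram_bytes(10_000_000, bytes_per_channel=2)

print(full_precision, half_precision)
```

Note that every extra copy of the cloud in your pipeline (each processing stage that writes a new texture) multiplies this figure, which is often where the real memory pressure comes from rather than the source data itself.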

Having a Quadro has a lot of other advantages though: faster GPU data downloads, more encode/decode sessions for H.264/H.265, proper Mosaic and multi-monitor support, tear-free performance, GPU affinity, etc.