It seems like most of the Nvidia CUDA samples use kernels to write point position values into an array, then map those arrays to GL VBOs and draw them as point vertices.
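For reference, the interop pattern those samples rely on looks roughly like this. This is just a minimal sketch, not code from any particular sample; names like updatePositions and g_vboRes are mine, and error checking is omitted:

```cuda
// Minimal sketch of the CUDA <-> OpenGL VBO interop pattern the SDK samples use.
// Assumes a GL context and a GLuint VBO already exist; identifiers here are illustrative.
#include <GL/gl.h>
#include <cuda_gl_interop.h>

// Kernel writes one float4 position per point directly into the mapped VBO memory.
__global__ void updatePositions(float4* pos, unsigned int n, float t)
{
    unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        pos[i] = make_float4(i * 0.001f, sinf(t + i), 0.0f, 1.0f);
}

cudaGraphicsResource* g_vboRes = 0;

void initInterop(GLuint vbo)
{
    // Register the GL VBO once so CUDA can map it every frame.
    cudaGraphicsGLRegisterBuffer(&g_vboRes, vbo, cudaGraphicsMapFlagsWriteDiscard);
}

void updateFrame(unsigned int numPoints, float t)
{
    float4* dptr = 0;
    size_t   size = 0;

    // Map the VBO into CUDA's address space, write positions, unmap.
    cudaGraphicsMapResources(1, &g_vboRes, 0);
    cudaGraphicsResourceGetMappedPointer((void**)&dptr, &size, g_vboRes);

    updatePositions<<<(numPoints + 255) / 256, 256>>>(dptr, numPoints, t);

    cudaGraphicsUnmapResources(1, &g_vboRes, 0);
    // After unmapping, the VBO is drawn as GL_POINTS by the normal GL path.
}
```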
Is there a fast enough pipeline to convert a 512x512 image of point positions into data that could be rendered in Touch as points, using SOPs or GLSL? Or is there a plan in the near future for a CUDA SOP that outputs vertices from an array of point positions calculated in a kernel? I don't think geometry shaders are fast enough for this, but I haven't tested very thoroughly.
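On the CUDA side, the "image of positions to points" step could be as simple as a kernel that scatters each texel of the 512x512 image into the mapped vertex buffer from the sketch above. This assumes the image is available to CUDA as a float4 texture object; again, names are purely illustrative:

```cuda
// Sketch: copy a 512x512 float4 position image into a point VBO, one texel per vertex.
// posTex is assumed to be a cudaTextureObject_t bound to the position image.
__global__ void positionsImageToPoints(cudaTextureObject_t posTex,
                                       float4* vboPositions,
                                       int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // One texel = one point position; write it to the corresponding vertex slot.
    float4 p = tex2D<float4>(posTex, x + 0.5f, y + 0.5f);
    vboPositions[y * width + x] = p;
}

// Launched over the full 512x512 image, e.g.:
//   dim3 block(16, 16);
//   dim3 grid((512 + 15) / 16, (512 + 15) / 16);
//   positionsImageToPoints<<<grid, block>>>(posTex, dptr, 512, 512);
```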
It seems like, with a fast pipeline for creating thousands of points from an array of positions, it would be pretty trivial to adapt the Nvidia CUDA Fluids and Particles examples for use in Touch. Has anybody already done something like this?
I guess another possible solution would be if the CUDA TOP could output a 3D texture. Then you could write out a 3D volume of point positions and raycast them in a GLSL TOP.