Unfortunately, I don’t have a lidar at home where I’m working right now, but I’ve attached an example that uses a point cloud from a Kinect, which should give you a rough idea of the workflow.
In the file, the point cloud is stored in a locked TOP as image data, with the x, y, z values stored in the red, green and blue channels of the image. The alpha channel is used as a mask: white pixels are rendered and black pixels are ignored.
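To make the layout concrete, here is a small NumPy sketch of how that pixel format works. This is just an illustration of the data layout, not TouchDesigner API code, and the sample values are made up:

```python
import numpy as np

# Each pixel holds one point: x/y/z in the R/G/B channels,
# and a validity mask in alpha (1 = render, 0 = ignore).
points = np.array([
    [0.2, 0.5, 1.0],   # a valid point (meters)
    [0.0, 0.0, 0.0],   # an empty/invalid sample
], dtype=np.float32)
valid = np.array([1.0, 0.0], dtype=np.float32)

# Pack into an RGBA "image" of shape (height, width, 4).
rgba = np.zeros((1, len(points), 4), dtype=np.float32)
rgba[0, :, :3] = points   # R, G, B  ->  x, y, z
rgba[0, :, 3] = valid     # A        ->  mask
```

A 32-bit float pixel format is what makes this work; 8-bit channels would clip and quantize the position data.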
After the source point cloud, I use a Point Transform TOP to shift the cloud closer to the origin, then a GLSL TOP to filter out walls and objects that I don’t want to detect. If you look inside the glsl1_pixel shader, I’m just setting the points that are outside of my detection area to zero so they don’t appear as blobs. Because I’m doing this test in a small area, my detection range is only a 1 meter cube, half a meter above the ground.
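The same filtering idea, expressed in NumPy rather than GLSL, looks like this. The cube bounds here are assumptions based on the description above (1 m cube starting 0.5 m above the ground, centered on the origin), not values read out of the .toe file:

```python
import numpy as np

# Assumed detection volume: 1 m cube, base 0.5 m above the ground (y-up).
lo = np.array([-0.5, 0.5, -0.5])   # x, y, z minimums (meters)
hi = np.array([ 0.5, 1.5,  0.5])   # x, y, z maximums

def filter_points(rgba):
    """rgba: (h, w, 4) array with x/y/z in RGB and a mask in alpha.
    Zeroes every pixel whose point falls outside the detection cube,
    mirroring what the GLSL pixel shader does per-fragment."""
    xyz = rgba[..., :3]
    inside = np.all((xyz >= lo) & (xyz <= hi), axis=-1)
    out = rgba.copy()
    out[~inside] = 0.0   # zeroed points won't show up as blobs
    return out
```

In the shader the same test is a couple of `step()` or comparison calls per fragment, writing `vec4(0.0)` for rejected points.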
After the filter I am left with just the points that are inside that cube. I then pass this cloud of points into a Geometry COMP to be rendered as instances. The camera is positioned above the points, facing down toward the origin, and I’m using a Line MAT to give my points some size so they form more solid blobs.
I then feed the render output into a Blob Track TOP, which uses an OpenCV algorithm to detect blobs in an image. I’ve tweaked the minimum and maximum blob sizes so that it detects the person in the image but not the bit of noise at the bottom right.
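If you’re curious what that size filtering is doing conceptually, here’s a rough stand-in in plain Python/NumPy: label connected white regions in a binary image and discard any that fall outside a size range. This is a simplified sketch of the idea, not the actual OpenCV routine the Blob Track TOP uses, and the size thresholds are arbitrary:

```python
import numpy as np
from collections import deque

def find_blobs(mask, min_size=5, max_size=500):
    """Label 4-connected white regions in a boolean image and keep only
    those whose pixel count is within [min_size, max_size]. Returns a
    list of dicts with each blob's center (x, y) and size in pixels."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    blobs = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                # Flood-fill this region breadth-first.
                queue, pixels = deque([(sy, sx)]), []
                seen[sy, sx] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                # Size filter: this is what rejects the small noise speckles.
                if min_size <= len(pixels) <= max_size:
                    ys, xs = zip(*pixels)
                    blobs.append({"center": (sum(xs) / len(xs),
                                             sum(ys) / len(ys)),
                                  "size": len(pixels)})
    return blobs
```

A single stray bright pixel forms a 1-pixel region and gets dropped by `min_size`, which is exactly why tuning the minimum size hides that noise in the corner.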
The size and position of each blob are available in the Info CHOP to do whatever you want with.
Hopefully this is helpful, and feel free to ask if you’ve got any questions. Your workflow might be a little different if your lidar data is in CHOPs: you can use a CHOP to TOP to convert it, or the Geometry COMP can take CHOPs directly for instancing if you’d rather do your filtering in CHOPs.
pointcloud_blob_detect.toe (1.1 MB)