How to use LIDAR data

Hi everyone, I need your help to manage data from a lidar.

Let me briefly explain what I have done. A colleague and I developed software that reads data from the lidar's serial port (one or more lidars). By entering four points delimiting the area I want to track, I convert the lidar's polar coordinates into Cartesian (x, y) and send them via OSC to TouchDesigner, where I can now see all the points of the movement.
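For anyone following along, the conversion step described above can be sketched in a few lines of Python. This is only an illustration, not the poster's actual code: it assumes the lidar reports range in meters and bearing in degrees, and that the four delimiting points form an axis-aligned rectangle (a general quadrilateral would need a point-in-polygon test instead).

```python
import math

def polar_to_cartesian(distance_m, angle_deg):
    """Convert one lidar sample (range, bearing) to Cartesian (x, y)."""
    theta = math.radians(angle_deg)
    return distance_m * math.cos(theta), distance_m * math.sin(theta)

def inside_area(x, y, x_min, y_min, x_max, y_max):
    """Keep only points inside the area of interest.
    Assumes the four delimiting points form an axis-aligned rectangle."""
    return x_min <= x <= x_max and y_min <= y <= y_max

# Example: a sample 2 m away at a 45-degree bearing
x, y = polar_to_cartesian(2.0, 45.0)
keep = inside_area(x, y, 0.0, 0.0, 3.0, 3.0)
```

Points that pass the area test would then be packed into an OSC message and sent to TouchDesigner (for example with a library such as python-osc).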

My question is: how can I work with these points?
I want to create interactive video projections (on the floor or walls) where there can be one person or many more. How can I use this movement data to make objects move (for example, one sphere per person)?
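One common way to get "one sphere per person" is to give each detected person a stable ID across frames, so the same sphere keeps following the same person. A simple sketch of this idea is greedy nearest-neighbor matching of centroids between consecutive frames; the `max_dist` threshold below is an arbitrary assumption you would tune to how fast people move between frames.

```python
def match_ids(prev, curr, max_dist=0.5):
    """Greedy nearest-neighbor matching of person centroids across frames.
    prev: dict id -> (x, y) from the last frame
    curr: list of (x, y) centroids detected this frame
    Returns dict id -> (x, y); unmatched centroids get fresh ids."""
    assigned = {}
    used = set()
    next_id = max(prev.keys(), default=-1) + 1
    # Pair each previous id with its closest unclaimed new centroid.
    for pid, (px, py) in prev.items():
        best, best_d = None, max_dist
        for i, (cx, cy) in enumerate(curr):
            if i in used:
                continue
            d = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            assigned[pid] = curr[best]
            used.add(best)
    # Any leftover centroid is treated as a new person entering the area.
    for i, c in enumerate(curr):
        if i not in used:
            assigned[next_id] = c
            next_id += 1
    return assigned
```

Each ID can then drive one instanced sphere, and the sphere's position updates as the matched centroid moves.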

Thanks to anyone who can answer.

Are you using a Hokuyo?

No, I use a YDLIDAR G4.
For now I have written software that sends the data into TD over the OSC protocol.

One technique that we’ve had people use is to create geometry instances using your point data and then render them from above and attach a Blob Track TOP to the output to detect people in your scene. I can give you more details if you’re interested.

There has been some work on a Blob Track CHOP that would bypass the need to render the points, but I’m not sure what stage that is in right now.

Yes, it would be great to get some more information. Would you possibly have an example file?

Unfortunately, I don’t have a lidar at home where I’m working right now, but I’ve attached an example that uses a point cloud from a Kinect, which should give you a rough idea of the workflow.

In the file, the point cloud is stored in a locked TOP as image data, where the x, y, z values are stored in the red, green, and blue channels of the image. The alpha channel is used as a mask: white pixels are rendered and black pixels are ignored.

After the source point cloud, I use a Point Transform component to shift the cloud closer to the origin, and then a GLSL TOP to filter out walls and objects that I don’t want to detect. If you look inside the glsl1_pixel shader, I am simply setting the points that fall outside my detection area to zero so they don’t appear as blobs. Because I am testing in a small space, my detection range is only a 1-meter cube, half a meter above the ground.
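The shader's logic can be expressed equivalently in Python for readers who want to see it outside GLSL. This is a sketch, not the shader from the attached file: the box bounds below are assumptions standing in for "a 1 m cube half a meter above the ground" with a y-up axis, and the real shader operates per-pixel on the point-cloud texture rather than on a list.

```python
def filter_points(points, lo=(-0.5, 0.5, -0.5), hi=(0.5, 1.5, 0.5)):
    """Keep points inside the detection box; zero out everything else
    so it contributes nothing to the rendered blobs.
    lo/hi sketch a 1 m cube half a meter above the ground (y-up)."""
    kept = []
    for (x, y, z) in points:
        inside = all(l <= v <= h for v, l, h in zip((x, y, z), lo, hi))
        kept.append((x, y, z) if inside else (0.0, 0.0, 0.0))
    return kept
```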

After the filter I am left with just the points inside that cube. I then pass this cloud of points into the geometry instancing to be rendered. The camera is positioned above the points, facing down toward the origin, and I’m using the Line MAT to give the points some size so they form more solid blobs.

I then feed the render output into the Blob Track TOP, which uses an OpenCV algorithm to detect blobs in an image. I’ve tweaked the minimum and maximum blob sizes so that it detects the person in the image but not the bit of noise at the bottom right.

The size and position of each blob are available in the Info CHOP to do whatever you want with.
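One practical detail when using those blob values: the positions come from the rendered image, so to place a sphere in the scene you map them back to world units under the top-down orthographic camera. A sketch of that mapping, assuming normalized (u, v) coordinates in 0..1 with the origin at the bottom-left (the exact channel names and ranges depend on the Blob Track TOP, and the ortho size here is an assumption to match against your Camera COMP):

```python
def blob_to_world(u, v, ortho_width=2.0, ortho_height=2.0):
    """Map a blob center from normalized image coords (u, v in 0..1)
    back to world meters under a top-down orthographic camera centered
    on the origin. ortho_width/height must match the camera's ortho size."""
    x = (u - 0.5) * ortho_width
    z = (v - 0.5) * ortho_height
    return x, z
```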

Hopefully this is helpful to you, and feel free to ask if you’ve got any questions. Your workflow might be a little different if your lidar data is in CHOPs: you can use a CHOP to TOP to convert it, or the Geometry COMP can instance directly from CHOPs if you want to do your filtering in CHOPs.

pointcloud_blob_detect.toe (1.1 MB)

Thanks for the explanation, you have been very kind. In a few days I will have the lidar at hand and will try what you recommended.
Thanks!

No problem. We also just put out an update yesterday that includes a new Blob Track CHOP that could be useful to you.

https://docs.derivative.ca/Release_Notes#Build_2020.23680_-_May_20.2C_2020