Trouble using Ouster OS1 (TouchDesigner 2021.1) with Blob Track, or gathering positional data and attaching something to it

THE GOAL:

To dynamically illuminate a predetermined path in front of individuals based on their direction of movement. Essentially, if a person changes direction, the lit pathway should adjust and follow their new heading. The system should handle people moving in both directions at the same time. While I have the projection mapping aspect under control, it's the direction detection and dynamic path lighting that I'm focusing on here.

My Problem:

Hello TouchDesigner community,

I've successfully connected to an Ouster OS1-64 (note that it's an older version, which requires TouchDesigner 2021.1) and can access the RGBA panorama with a simple Ouster TOP, giving me a visual representation of what the Ouster perceives. I've also visualized this data in 3D.

However, I'm encountering challenges when attempting to use this data for blob tracking. I'm not entirely sure whether a conversion to 2D is essential, but my understanding is that using just a slice of the 64 lasers (centered vertically) should be sufficient to capture people's positional data for my needs. I'm wondering whether the approach should involve cropping the elevation or isolating a specific laser.

I’m seeking assistance in setting up the required nodes for this functionality. While I anticipate handling calibration aspects such as scale and projection matching, my immediate need is to establish the core functionality. My primary goal is to delve deeper into the visual side of things once this foundation is established.

Moreover, I’m preparing to present this as an art installation for a local festival this Wednesday, attended by approximately 100 enthusiasts. Any guidance, insights, or suggestions would be immensely appreciated!

Thank you in advance for your help!



THIS IS WHAT I THINK WE NEED. I'm new to this, so please review it and tell me what I'm missing, or whether there is a better approach; I don't know this tool that well.

  1. Data Acquisition and Preparation

Ouster TOP: Drag and drop the Ouster TOP into your workspace.

I have the lidar connected and working with the Ouster TOP. I'm not sure how to get the appropriate data out to produce a 2D overhead view of the lidar, or whether we even need that.

  2. Blob Tracking

Blob Track TOP

Not sure if the Blob Track TOP will work with the Ouster OS1.

Adjust parameters based on the environment, focusing on:

Minimum Blob Size and Maximum Blob Size to cater to human sizes.

Threshold to fine-tune detection sensitivity.

Maximum Move Distance to manage fast-moving objects. (probably not necessary)

Toggle Draw Blob Bounds on to visualize the detection. (would love to mask out areas that are not, or shouldn't be, affected)

Adjust the Threshold parameter to get a binary image based on the difference between the background and the primary input.

Cleaning up Blob Detection:

Track individual blob data such as coordinates, ID, and size.

Use this for trajectory predictions, logic assessments, or to trigger events based on blob movement (it can be cheated if necessary). See the sketch below for one way to read this data.
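A minimal sketch of reading tracked blobs from Python, assuming your Blob Track TOP exposes its blobs through an Info DAT (one row per blob, here called 'info1'); the column names used below ('id', 'u', 'v') are assumptions, so check the header row of your own Info DAT and adjust:

```python
# Hypothetical helper: read tracked blobs from an Info DAT on the Blob Track TOP.
# 'info1' and the column names are assumptions -- match them to your network.
def get_blobs(info_dat='info1'):
    dat = op(info_dat)
    if dat is None or dat.numRows < 2:
        return []  # header row only: nothing is being tracked right now
    header = [c.val for c in dat.row(0)]
    blobs = []
    for r in range(1, dat.numRows):
        row = dict(zip(header, [c.val for c in dat.row(r)]))
        blobs.append({
            'id': row.get('id'),
            'u': float(row.get('u', 0)),  # assumed normalized x position in the image
            'v': float(row.get('v', 0)),  # assumed normalized y position in the image
        })
    return blobs
```

You could keep this in a Text DAT and call it from an Execute DAT each frame, or skip Python entirely and wire the same values through CHOPs if that feels more comfortable.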

  3. Providing Directions

Use the blob data to generate guidance. For visual directions:
Be able to attach visuals, geo, or particles to someone's location (see the sketch below).
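As a rough sketch of the "attach something to someone's location" part, assuming the get_blobs() helper above and a Geo COMP named 'geo1' (a placeholder), an Execute DAT could move the geo to the first tracked blob every frame. The 0-1 u/v values are remapped to a hypothetical 10 x 10 unit floor here; swap that mapping for your real calibration later:

```python
# Execute DAT callback (placeholder names): follow the first tracked blob.
FLOOR_SIZE = 10.0  # assumed floor extent in world units -- calibrate on site

def onFrameStart(frame):
    blobs = get_blobs()  # helper from the earlier sketch
    if not blobs:
        return
    b = blobs[0]
    geo = op('geo1')
    geo.par.tx = (b['u'] - 0.5) * FLOOR_SIZE  # center the 0-1 range on the origin
    geo.par.tz = (b['v'] - 0.5) * FLOOR_SIZE
```

The same positions could just as easily drive instancing or a particle emitter instead of a single Geo COMP.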

  4. Output and Display

Integrate a Window COMP to project or display your combined visuals (original data + blob detection + guidance visuals).
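Once the composited TOP is assigned as the Window COMP's operator, the window can also be opened from Python; a tiny sketch, assuming the component is named 'window1':

```python
# Open the output window (the name 'window1' is a placeholder).
op('window1').par.winopen.pulse()
```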

  5. On-Site Adjustments

Once set up, you’d need to make on-site calibrations:

Adjust the Threshold on the Blob Track TOP.

Modify the Minimum Blob Size and Maximum Blob Size parameters, especially if the environment is different from the initial setup.

Blob mask.

Projection map and tracking alignment.

I've only experimented a little with this technique, but I think other users have made this work by rendering the point cloud data from a top-down camera and then running the Blob Track TOP on the resulting 2D image to detect people in your scene.

So, you'd put the Ouster TOP in your scene and have the fields set to X, Y, Z and Active. The layout doesn't really matter when you're working with them as points. I'd then connect that to a GLSL TOP with a simple shader that does some boundary checks on where you expect people to be in your scene. The shader writes the RGB values through untouched, but sets the alpha, i.e. the Active data, to 1 or 0 depending on whether that point is inside your detection zone. So the pseudocode would be something like if( (x > -10) && (x < 10) && (y > 0.5) ) active = 1. This allows you to filter out things like walls or the ground. How complicated your filter gets will depend on how busy your environment is.
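If you'd rather prototype that boundary check in Python before writing GLSL, a Script TOP with numpy can do the same masking on the CPU (slower than the GLSL TOP Rob describes, but easy to tweak); the node names and bounds below are placeholders:

```python
# Script TOP callbacks DAT: pass XYZ (RGB) through untouched and set alpha
# ('Active') to 1 only for points inside the detection zone.
# Keep the pixel format at 32-bit float RGBA so negative XYZ values survive.
import numpy as np

# Placeholder detection-zone bounds in sensor units -- tune these on site.
X_MIN, X_MAX = -10.0, 10.0
Y_MIN, Y_MAX = 0.5, 30.0

def onCook(scriptOp):
    img = scriptOp.inputs[0].numpyArray()  # shape (h, w, 4): x, y, z, active
    x, y = img[..., 0], img[..., 1]
    inside = (x > X_MIN) & (x < X_MAX) & (y > Y_MIN) & (y < Y_MAX)
    img[..., 3] = inside.astype(np.float32)  # alpha drives the 'Active' instancing attribute
    scriptOp.copyNumpyArray(img)
    return
```

Once the logic feels right, porting the same check into the GLSL TOP moves it back onto the GPU.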

You can then set up a point render system with a camera that looks straight down on your scene from above. If you use the image alpha in the ‘Active’ parameter on the Geometry Instance page of your point system, it will only draw points that are inside your detection zone so that you hopefully only have blobs that represent people.

You can then run the Blob Track TOP on the image to track people in your scene.

I've skipped over a lot of implementation details here, but hopefully that gives you an idea. With only one sensor, you will likely run into issues with shadows, where people closer to the sensor block people behind them. I know some users get around this by merging the point clouds from multiple sensors.

Let me know if you have more questions. And, btw I think you can update the firmware on that sensor to one of the 2.0 versions if you want it to work with more recent versions of TouchDesigner.

Okay! Thanks Rob, this is amazing info! I was just thinking of something similar to what you are describing as a potential workaround. I will give it a shot! If you know of a tutorial/file with similar attributes, please share; otherwise I will do my best from what you are suggesting :slight_smile: