Pointcloud transformation in partial area only

Hello everyone,
after the awesome “Napoleon dissolved” tutorial by Markus Heckmann I need to find out if there is a way to transform the point cloud in a partial area only - like using a sphere as a force source in a Particle SOP. With the Threshold TOP the cloud is divided in half and you can control where the cut is done, but that’s not as precise as choosing only the head or the shoulders, or even choosing an area interactively (e.g. with x, y, z values from a Leap Motion). I guess the transformation/selection must be applied to the Point File In TOP - but how? I would be so happy about any tips or advice! Thank you.

Hey,

you can calculate the distance of each point in the pointcloud to a specific position in a few steps:
Subtract the force position, simulated by a Constant TOP, from the point cloud itself. Then use a Math TOP to calculate the length of each point in the cloud by choosing Combine Channels > Length.
With the distance of each point in the cloud to your simulated force position, you can now use a Threshold TOP to create a mask that covers points within a certain range of the force position. Last, multiply the original point cloud with the mask.
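If it helps to see the same math outside of TOPs, here is a rough NumPy sketch of what that chain computes per point (the point count, force position and radius are made-up values, not taken from the attached file):

```python
import numpy as np

# Illustrative point cloud: N points stored as xyz (in a TOP this lives in the RGB channels of a texture).
points = np.random.uniform(-1.0, 1.0, size=(1000, 3))

# The "force" position, i.e. the value a Constant TOP would hold.
force_pos = np.array([0.2, 0.5, 0.0])

# Constant TOP subtracted from the point cloud, then Math TOP "Combine Channels > Length".
offsets = points - force_pos
distances = np.linalg.norm(offsets, axis=1)

# Threshold TOP: 1 inside the radius, 0 outside.
radius = 0.3
mask = (distances < radius).astype(np.float32)

# Multiply the original cloud by the mask: only points inside the radius survive.
selected = points * mask[:, None]
print(f"{int(mask.sum())} of {len(points)} points selected")
```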

Hope this helps
Cheers
Markus
distance2Point.toe (11.3 KB)


Wow! Thank you so much for this helpful answer and example! It worked out very well :smiley:

How can I tell the points selected within my force to follow the force position for a little while?
I guess the current force position somehow needs to be multiplied with the threshold mask and blended back to the original via feedback?

If you’re looking to apply the same movement to each point, then the Point Transform TOP might be useful for you. You can apply any geometric transform to a point cloud texture, and it has a second input that takes a weight (threshold) mask to control which points are affected.

Setting the translate value to a percentage of the distance from the center of your points to the force position and then feeding the output back through a Feedback TOP would push the point cloud towards the force over time.

If you want your point cloud to converge on your force position then you could create a Constant TOP with the force position and subtract your current point cloud from it to create a vector map. You could then scale that, multiply in your threshold mask, and then add it back to your point cloud to make each point move independently towards your force.
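To make that last idea concrete, here’s a very rough NumPy sketch of the subtract/scale/mask/add chain, with the Feedback TOP written as a plain Python loop (the radius and strength values are invented, not from any project file):

```python
import numpy as np

points = np.random.uniform(-1.0, 1.0, size=(1000, 3))   # current point positions
force_pos = np.array([0.2, 0.5, 0.0])                    # the Constant TOP value
strength = 0.1                                           # fraction of the remaining distance to move per frame

# Weight/threshold mask from the distance, as in the earlier distance example.
distances = np.linalg.norm(points - force_pos, axis=1)
mask = (distances < 0.3).astype(np.float32)

for frame in range(60):
    to_force = force_pos - points                 # Constant TOP minus current cloud: the vector map
    step = strength * to_force * mask[:, None]    # scale it and multiply in the threshold mask
    points = points + step                        # add it back; a Feedback TOP carries this into the next frame

print("selected points ended up near the force:", points[mask > 0][:3])
```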

Hope that helps. fyi, there is also a pointWeight component in the palette that you might find useful for generating weight / threshold masks.


Thank you for your answer!
The points should just be a little sticky or magnetic to the moving force - when it moves close to or into the point cloud.
What do you mean by “setting the translate value to a percentage of the distance from the center of your points to the force position”? And how do I get this distance or this percentage? As the solution to my first question I already subtracted my constant from the point cloud to create the threshold mask with a certain distance around my constant. But how do I progress from there?
Feeding through a feedback means: the output of the Point Transform goes 1. into a Feedback and a Level TOP and 2. into an Add TOP?

Honestly I would need examples or screenshots to follow your explanations - guess I am a beginner still in this field.

I’ve attached a small example project that shows some ideas on how you can get the selected portion of the point cloud to follow a target. They’re just to show some concepts and aren’t necessarily the best or only ways of doing it.

In one case I’m using an analyze top to figure out the current average position of the selected point cloud and then using some chops to calculate how much I need to move it towards the target pos. I’m using a point transform with the weight input to apply that movement to just the selected points.
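In rough pseudo-NumPy terms (the positions, weights and blend amount below are made up, not the values from the .toe), that first case looks something like this:

```python
import numpy as np

points = np.random.uniform(-1.0, 1.0, size=(1000, 3))
weights = (np.linalg.norm(points - np.array([0.2, 0.5, 0.0]), axis=1) < 0.3).astype(np.float32)
target = np.array([0.6, 0.4, 0.1])
amount = 0.2   # how far towards the target to move each frame

# Analyze TOP (average) over the selected points gives their current centroid.
selected = points[weights > 0]
centroid = selected.mean(axis=0) if len(selected) else points.mean(axis=0)

# CHOP-style math: the translate needed to move the whole selection towards the target.
translate = amount * (target - centroid)

# Point Transform TOP with a weight input: the same translate, applied only to the selected points.
points = points + translate * weights[:, None]
```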

In the second case I’m using Subtract, Add and Math tops to move each particle towards the target individually. In this simple case, this would lead to the point cloud converging so I added some noise to scatter it out.
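And the second case boils down to roughly this per-frame update (again just a sketch with invented numbers; in the network the loop is the Feedback TOP):

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(-1.0, 1.0, size=(1000, 3))
target = np.array([0.6, 0.4, 0.1])
weights = (np.linalg.norm(points - target, axis=1) < 0.5).astype(np.float32)

# Subtract TOP: per-point vector towards the target; Math TOP scales it down.
to_target = 0.1 * (target - points)

# Noise TOP equivalent: a little scatter so the selected points don't all collapse onto the target.
scatter = 0.02 * rng.standard_normal(points.shape)

# Add TOP: apply the weighted force plus noise; a Feedback TOP carries the result into the next frame.
points = points + (to_target + scatter) * weights[:, None]
```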

In both cases I’m using the pointWeight component to select the points, but in one case it has no falloff so that the selection has hard edges, and the other has a linear falloff so that some points are less affected than others.

Hope that at least gives you a starting point to explore things.

pointCloudFollow.toe (15.5 KB)


Thank you so much for this helpful example and your effort!! And I am sorry for replying so late - I was busy with the theoretical part of my project.

The second way is more interesting in my case and I tried several experiments:
What do you use the Reorder TOP for? I don’t want a fixed area (chosen in the pointWeight) but rather always the region where the sphere/force touches the point cloud, so I tried to put the xyz noise values into the translate of the pointWeight - without success. I guess the feedback system is a problem? Do you have any advice here?

I need the particles to follow the sphere/force for a little while each time it crosses the point cloud. After that they must slowly return to their original position. I guess I need to add the original position, with reduced opacity, in a feedback system to the distorted cloud to move the points back?
Thank you for sharing so many tips!

I’m glad the examples were useful.

The Reorder TOP was just used to insert the weight values from input 2 into the alpha channel of the point cloud.

I’ve thrown together another quick example that might be more what you’re looking for. In this one the weight map is calculated from the current position of the moving target and there is an additional force that moves the points back to their original positions when the target moves away.

Hope it’s useful.

pointCloudFollow.toe (10.9 KB)


Hey Rob,
this is freakin awesome and exactly what I was looking for!! I continued working on it. Thank you so much!
What effects does an alpha channel have in the point cloud - aren’t three coordinates from RGB enough to define the point positions?
The step with the inverse weight as a force seems logical - now that I see it. But comprehending every operator is still a bit hard. I thought some Level TOP with reduced opacity in the feedback system would be necessary to return the points - but it isn’t? Because the forces are added together (in “add1”) and thereby already produce a mixed force of moving away and turning back, right?

Nevertheless, experimenting with hand interaction in VR, I have now modified a part of your first example to fit my purpose - as you can see below. What do you think about it?
Thanks a lot for this conversation - it’s so instructive.

I’m glad it was helpful. The alpha channel is just being used to store a weight value to limit which particles are affected by the force. So, the math2 node in my original example has a ‘pre-multiply rgb by alpha’ operation which basically multiplies the force vector in rgb by the weight in alpha, so particles that are too far from the force and have a weight of zero get cancelled out.

I’m returning the particles to their original positions by subtracting the current positions from their original unaltered positions in the sub2 node. Adding a little bit of that force each frame will push the particles back to their original position.
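Written out as a per-frame update (just a sketch; the pull strength, falloff radius and restore amount below are invented numbers, not the ones in the file), the whole loop is roughly:

```python
import numpy as np

rng = np.random.default_rng(1)
original = rng.uniform(-1.0, 1.0, size=(1000, 3))   # the unaltered point positions
points = original.copy()
target = np.array([0.6, 0.4, 0.1])                   # the moving force/target position

for frame in range(120):
    # Weight from the current distance to the target (stored in the alpha channel in the TOP network).
    weight = np.clip(1.0 - np.linalg.norm(points - target, axis=1) / 0.4, 0.0, 1.0)

    # "Pre-multiply RGB by alpha": the force towards the target, cancelled where the weight is zero.
    pull = 0.1 * (target - points) * weight[:, None]

    # Restoring force in the spirit of sub2: original minus current, a small fraction added each frame.
    restore = 0.05 * (original - points)

    points = points + pull + restore   # the add that a Feedback TOP carries into the next frame

print("max displacement from the original cloud:", np.abs(points - original).max())
```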

You can definitely use the Level TOP to adjust the forces, but personally I find it a little awkward to use when working with particles/forces stored in images, because the math being done was all designed to work on color data (gamma, contrast, brightness, etc.).