I’m starting to experiment with creating AR filters in TouchDesigner and one of the key challenges I’m facing is achieving realistic occlusion. I’m looking for guidance on how to properly implement occlusion effects in my projects.
Specifically, I’m wondering:
What are the most effective methods for creating occlusion in TouchDesigner for AR filters?
Are there any recommended workflows, example networks, or specific operators that I should be using?
Are there any best practices I should be aware of, regarding both the implementation and performance of occlusion?
Occlusion is a pretty general term for objects in front of other objects.
There are many different techniques to achieve this effect, and the right one depends on your conditions and subject matter.
For example, using the Nvidia Background TOP you can create a matte between a human figure and the background, allowing you to composite items behind the figure.
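In TouchDesigner this would just be a Composite/Matte TOP downstream of the Nvidia Background TOP, but the underlying math is a simple alpha-matte blend. Here's a minimal NumPy sketch of that blend (the function name and the toy 2x2 images are mine, not from any TD network):

```python
import numpy as np

def composite_behind_figure(camera, ar_layer, person_matte):
    """Place an AR layer *behind* a person.

    camera, ar_layer: float images, shape (H, W, 3), values 0..1
    person_matte:     float matte, shape (H, W), 1.0 where the person is
                      (e.g. the alpha output of the Nvidia Background TOP)
    """
    m = person_matte[..., None]          # broadcast matte across channels
    # Where the matte is 1 we keep the live camera pixel (the person);
    # elsewhere the AR layer covers the camera background.
    return camera * m + ar_layer * (1.0 - m)

# Tiny 2x2 example: left column is "person", right column is background.
camera = np.ones((2, 2, 3)) * 0.5
ar     = np.zeros((2, 2, 3))
matte  = np.array([[1.0, 0.0],
                   [1.0, 0.0]])
out = composite_behind_figure(camera, ar, matte)
```

The person column keeps the camera pixel (0.5) and the background column shows the AR layer (0.0), i.e. the AR content sits behind the figure.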
More complex occlusions require more complex systems.
For example, if you wanted to walk around a table and have something behind that table, the entire table would need to be detected and added to a layer matte for compositing.
You could use a segmentation algorithm to isolate those objects in 2D camera space and then add them to your composite. Image Segmentation may be a good place to start with this.
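Whatever model you use, the output is typically a per-pixel class map, and turning one class into a layer matte is a one-liner. A sketch, with a made-up class map and class IDs (any segmentation model that outputs class IDs per pixel works the same way):

```python
import numpy as np

# Hypothetical per-pixel class IDs from a segmentation model
# (0 = background, 1 = person, 2 = table, ...).
class_map = np.array([[0, 2, 2],
                      [1, 1, 2],
                      [0, 1, 0]])

TABLE = 2
# Binary layer matte for the table: 1.0 wherever the table was detected.
# Feed this into the same matte-composite step used for the person matte.
table_matte = (class_map == TABLE).astype(np.float32)
```

In TD you'd do the equivalent with a Threshold TOP on the model's output before the Composite TOP.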
If you can track fingers with a Leap Motion or with a hand-tracking algorithm, you can generate a mask based on the expected position of each finger and use this mask to occlude the AR. This is how most headsets try to do it: see Vision Pro Hand Occlusion.
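Rasterizing that mask from tracked joint positions can be as simple as stamping a disc at each joint. A sketch, assuming you already have 2D joint positions in pixel space (the positions below are made up; the function name is mine):

```python
import numpy as np

def finger_mask(h, w, joints, radius=3.0):
    """Build an occlusion mask from tracked finger joints.

    joints: list of (x, y) pixel positions, e.g. from Leap Motion
            or a hand-tracking model.
    Returns a (h, w) float mask that is 1.0 within `radius` px
    of any joint.
    """
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.zeros((h, w))
    for jx, jy in joints:
        d2 = (xs - jx) ** 2 + (ys - jy) ** 2
        # Union of discs: keep the max so overlapping joints merge.
        mask = np.maximum(mask, (d2 <= radius ** 2).astype(float))
    return mask

mask = finger_mask(10, 10, [(2, 2), (7, 7)], radius=1.5)
```

In practice you'd draw capsules between consecutive joints (so whole fingers occlude, not just knuckles) and blur the edge a little so the occlusion boundary isn't hard.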
Phone AR will actually build a 3D understanding of the planes in the space, reconstructing a 3D mirror of the world to use as an occlusion mask. You would also need to localize the camera's position relative to this geometry every frame. This is super tricky, though; a lot of people are working on it now, and it's basically what all the AR glasses teams are focused on.
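Once you have that reconstructed geometry rendered from the localized camera pose, the occlusion itself is a per-pixel depth test: draw the AR pixel only where the AR object is closer to the camera than the real scene. A NumPy sketch of that test (toy depths in metres; the function name is mine):

```python
import numpy as np

def depth_occlude(ar_rgb, ar_depth, scene_depth, camera_rgb):
    """Per-pixel depth test between AR content and reconstructed scene.

    ar_depth, scene_depth: (H, W) distances from the camera in metres.
    scene_depth would come from re-rendering the reconstructed planes
    from the localized camera pose each frame.
    """
    visible = (ar_depth < scene_depth)[..., None].astype(float)
    # AR pixel where the AR object is nearer; camera pixel otherwise.
    return ar_rgb * visible + camera_rgb * (1.0 - visible)

# AR object at 2 m; real wall at 3 m on the left, real pillar at 1 m on the right.
scene = np.array([[3.0, 1.0]])
ar_d  = np.full((1, 2), 2.0)
ar    = np.ones((1, 2, 3))
cam   = np.zeros((1, 2, 3))
out = depth_occlude(ar, ar_d, scene, cam)
```

The AR object shows in front of the wall but is hidden by the nearer pillar. This is exactly what a depth buffer does in a render pass, so in TD you can get it for free by putting the proxy geometry and the AR object in the same Render TOP.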
Another technique I found recently is AI-generated depth maps, which are getting really good: Depth-Anything v2. But I haven't seen anyone do this in realtime or in Touch yet.
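One wrinkle with monocular models like these: they typically output *relative* disparity (bigger = closer, arbitrary scale), so you can't depth-test against metric AR depths directly. A simple workaround is normalizing and thresholding the nearest band into an occluder matte. A sketch with a made-up 2x2 disparity map (the 0.5 cutoff is an arbitrary assumption you'd tune):

```python
import numpy as np

# Hypothetical relative disparity map from a monocular depth model
# (larger = closer; the scale is arbitrary, so normalize first).
disparity = np.array([[0.9, 0.2],
                      [0.8, 0.1]])

d = (disparity - disparity.min()) / (disparity.max() - disparity.min())

# Everything in the nearest half of the normalized range occludes the AR.
occluder_matte = (d > 0.5).astype(np.float32)
```

In TD terms that's a Math TOP (range remap) and a Threshold TOP on the model output.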
Here is a nice walkthrough of re-lighting in Blender (occlusion at 5:10).
If I were going to add occlusion to a project, I would try to eliminate as many additional variables and headaches as possible. Simplify the scene and don't move the camera. If the camera doesn't need to move, you can manually recreate a 3D simulation of the basic parts of the room to occlude your 3D AR effects, e.g. a crude Box SOP stands in for the table, lined up in camera space with the real table or walls. If you do need to move the camera, find a way to track it; then your 3D scene and occluding objects will move with the camera and stay aligned to world space.
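Lining the crude box up with the real table comes down to projecting its 3D corners into camera space and nudging until they land on the table in the video feed. A minimal pinhole-projection sketch of that step (static camera at the origin looking down -Z, no rotation; the focal length and corner position are made-up numbers):

```python
import numpy as np

def project_point(p_world, cam_pos, focal_px, cx, cy):
    """Project a 3D point into pixel coordinates with a simple
    pinhole model (camera at cam_pos, looking down -Z, no rotation).
    Enough to line a crude proxy box up with the real table when
    the camera is static.
    """
    x, y, z = np.asarray(p_world, dtype=float) - np.asarray(cam_pos, dtype=float)
    u = cx + focal_px * x / -z      # -z because the point sits in front of the camera
    v = cy - focal_px * y / -z      # image y grows downward
    return u, v

# A table corner 2 m in front of the camera and 0.5 m to the right,
# with an assumed 800 px focal length and a 1280x720 principal point.
u, v = project_point([0.5, 0.0, -2.0], [0, 0, 0], focal_px=800, cx=640, cy=360)
```

In practice you'd rarely compute this by hand; in TD the Camera COMP does the projection, and you just match its FOV to the real camera and drag the Box SOP until it overlays the table. If the camera moves, a tracked pose replaces the fixed `cam_pos` each frame.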