Passthrough TOP for Quest 3

Hi there,

I’ve been having great luck developing in Touch for Quest 3. I run Virtual Desktop and use a double tap on the left controller’s menu button to switch between the full-3D “in VR” view in SteamVR and my Windows machine with Touch full screen. It’s great!

I’m excited about trying some MR/AR work, but at the moment I don’t have a way to get the passthrough texture from the headset’s cameras. Is this something OpenXR offers as a texture output or similar? Excited to try.

Hi @mtf,

Thank you for the suggestion. I’m moving this into the RFE section of the forum for further consideration.

cheers
Markus

Hello,
I’m just starting to experiment with a Meta Quest 3 I got, and I was curious whether I can use the passthrough images (the ones created by the cameras on the headset that let you see your surroundings for an augmented-reality experience). I want to post-process these images with regular TOPs to make some “AR-ish” effects, but I can’t really find how to get them into TD. It sounds simple, so am I missing something obvious?
Thanks in advance!

Hi @johncaraj,

TouchDesigner currently does not have access to the passthrough cameras of the Meta Quest 3.

I wonder how this would actually work. Is it more a matter of compositing on top of the image, so an image with alpha somehow has to be sent to the Quest? I’m imagining some uncomfortable latency when pulling the video off the Quest and pushing it back up… Something to investigate.
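
As a back-of-envelope check on the latency worry, here is a quick sum in Python. Every per-stage number below is an assumed ballpark, not a measurement:

```python
# Rough round-trip budget for Quest -> PC -> Quest video.
# All per-stage figures are assumptions, not measurements.
stages_ms = {
    "capture + encode on Quest": 15,
    "Wi-Fi transfer to PC": 10,
    "decode + TOP processing in TD": 10,
    "encode + transfer back to Quest": 20,
    "decode + composite on Quest": 10,
}

total_ms = sum(stages_ms.values())
frame_ms = 1000 / 72  # Quest 3 commonly renders at 72 Hz

for stage, ms in stages_ms.items():
    print(f"{stage}: ~{ms} ms")
print(f"total: ~{total_ms} ms (~{total_ms / frame_ms:.1f} frames at 72 Hz)")
```

Even with fairly optimistic numbers, the round trip lands several frames behind the headset’s own passthrough, which is presumably part of why Meta keeps the compositing on-device.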

cheers
Markus

Hm, is there any other way you can suggest to get that footage into TD? I looked into simply casting the in-headset footage to the PC, which is easy, but it won’t work for me (it would create a feedback loop of the post-processing, which I don’t want). And I don’t know if it’s true, but Meta apparently doesn’t allow raw footage from the headset cameras to be accessed, for privacy reasons (though I guess nothing is impossible).

If, let’s say, I figure out how to get that (live) video footage onto my PC, is there any way to then get it into TD?

As far as my project goes, I was thinking of compositing on top, as you say, but I’m not sure yet; I want to experiment and see what works. Actually, in this case latency might even be something positive (it’s my thesis project, more of a performance/experience where the tech limitations are embraced, but it’s still at too early a stage to say whether that would work out).
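
If I do manage to land the feed on my PC (say as an NDI sender, or as a virtual webcam via OBS), my understanding is that pulling it into TD is a one-operator job. Here is an untested sketch; the parameter names are my assumption and may differ between builds:

```python
# Untested sketch: pull a PC-side video feed into TouchDesigner.
# Assumes the Quest stream already reaches the PC as an NDI sender
# or as a virtual capture device (e.g. via OBS).

project = op('/project1')

# Option A: NDI stream
ndi = project.create(ndiinTOP, 'quest_feed')
# Point it at the sender; the Python name of the "Source Name"
# parameter is assumed here and may vary by build:
# ndi.par.name = 'MY-PC (Quest Stream)'

# Option B: virtual capture device
cam = project.create(videodeviceinTOP, 'quest_cam')
# cam.par.device selects the capture device from its menu.
```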

I have never worked with Meta Quest headsets, so I might be wrong, but based on what I have read online, it seems clear that Meta doesn’t allow access to raw camera data on their headsets (just as you mentioned). This is a shame, as it would be a great feature to have.

Apple is unfortunately doing the same thing on the upcoming Apple Vision Pro (at least I think so, based on what I read on their site about visionOS and ARKit): they also don’t allow developers to access any raw camera data. Apple says it is because of privacy, but that doesn’t quite hold up. They could always let users choose whether to allow access to the cameras or not, and people around could tell the headset is recording from the front display turning red or something… There would definitely be a way to handle privacy (if they wanted to).

Anyway, both Apple and Meta have APIs that let you composite your objects over passthrough. Instead of giving you direct access to camera data, they provide APIs that handle the compositing with the cameras for you. More on the Meta-related API here:
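
In the meantime, it should be possible to prototype the same “your content over passthrough” compositing inside TD with any webcam standing in for the headset cameras. A rough sketch; the operator classes are real TD Python, but the Composite TOP parameter value is my assumption:

```python
# Prototype compositor-style passthrough compositing in TouchDesigner:
# a webcam stands in for the headset cameras, a Rectangle TOP for content.
base = op('/project1')

cam = base.create(videodeviceinTOP, 'fake_passthrough')  # stand-in camera feed
fx = base.create(rectangleTOP, 'app_content')            # stand-in overlay with alpha

comp = base.create(compositeTOP, 'over_passthrough')
comp.inputConnectors[0].connect(fx)   # foreground: your content
comp.inputConnectors[1].connect(cam)  # background: the "passthrough"
comp.par.operand = 'over'             # "Operation" parameter; value assumed
```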
