I was looking for advice on the best depth camera (<$800) for interactive art exhibits with particlesGPU and concert visuals made with 3D point clouds.
Right now the winner seems to be the Oak-D as it has IR for low light (i.e., concert venues), laser dot projection for better depth mapping, and a price point that's $150 lower than the Mega.
However, the Femto Mega can operate using the same protocols as the Kinect cameras, which seems like it would make it more interoperable with a lot of existing tutorials and project files made for the Kinect. I'm a complete beginner to TouchDesigner and don't know if that should be as big of a selling point as I think it is.
Also, although PoE ups the price point, I figure it's the only way to achieve the long cable runs between a stage and FOH. If anyone has set up something similar using a USB-C camera without needing a second computer, please let me know.
Thanks for any advice! Also, the datasheet is for the non-PoE version.
Def go for PoE for live shows and long cables. USB extenders will up the price way more and are often unreliable unless you go for the pro fiber ones.
And yes, the OAK is new, but I feel it will have more to offer in the long run because of all the different AI models you can load onto it. If you're a beginner it won't make much difference - you will have to learn either way…
And yes, OAK is very new in the TD world, but any OAK tutorial you can apply in TD as well.
I haven’t done a lot of real world comparisons between the two, but my impression at the moment is that the Orbbec Femto probably has a better quality depth map than the current generation of Oak sensors due to the time of flight system it uses.
The Femto Mega is also a little easier to use than the Oak since it is a single purpose depth camera while the Oak is also a fully programmable co-processor and those extra features come with a little more complexity.
Also, if you plan to use the Femto Mega as a PoE camera, you actually need to use the new Orbbec TOP rather than the Kinect Azure TOP, since the latter doesn't have options for configuring ethernet devices.
Thank you so much for the details about the TOP differences with PoE on the Orbbec, I had not found that info anywhere.
This video https://www.youtube.com/watch?v=tqb_0xAqm3w&t=41s shows the 1st gen Femto compared to an Oak-D Pro, and while the depth map is a lot more accurate, I'm concerned about the depth info quality in dark concert environments more so than the overall quality. Of course I wouldn't be able to use the laser dot projection if it's pointed at an artist's face, but I figure the IR would come in handy.
Additionally, if the Femto Mega over PoE requires the Orbbec TOP as opposed to the Kinect TOP, then I would guess that means the one selling point (imo, plug and play with pre-existing Kinect TD projects/tutorials) is sort of a moot point? Can anyone confirm my understanding?
The Femto Mega uses the same hardware sensor as the Azure Kinect, so it should look the same as that example. I’ve admittedly not tried it in really dark environments, but it is also using IR for the depth map so I would expect it to be similar.
I did not realize the Femto Mega also uses IR! I've put a lot more research into the Oak-D line as their documentation seems to be a little more thorough than Orbbec's.
So I'm trying to compare the Oak-D Select TOP documentation to what you posted here about the Femto situation.
But it's a little unclear to me which TOP has more functionality, assuming PoE (Orbbec TOP only for the Mega).
Overall it seems like the Oak-D is the winner, but I would love to hear dissenting opinions.
If you're talking about overall functionality, there's no question the Oak can do a lot more than the Orbbec. As I mentioned above, it's a fully programmable co-processor, which means you're actually uploading custom programs to the camera that can run various AI models and other processing functions in addition to just returning a depth map. So you can do things like object identification, body/face tracking, edge detection, cropping, etc. directly on the camera.
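To make that concrete, here's a minimal sketch of the kind of program that gets uploaded to the Oak, using Luxonis' depthai Python library outside of TouchDesigner. The node layout is just an illustrative stereo-depth pipeline, not anything taken from the TD integration:

```python
# Minimal DepthAI sketch: build a stereo-depth pipeline that runs on the
# OAK itself and stream the resulting depth frames back to the host.
import depthai as dai

pipeline = dai.Pipeline()

# Left/right mono cameras feed the on-device stereo depth node
mono_left = pipeline.create(dai.node.MonoCamera)
mono_right = pipeline.create(dai.node.MonoCamera)
mono_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
mono_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)

stereo = pipeline.create(dai.node.StereoDepth)
mono_left.out.link(stereo.left)
mono_right.out.link(stereo.right)

# Send the computed depth map back over XLink
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("depth")
stereo.depth.link(xout.input)

with dai.Device(pipeline) as device:
    queue = device.getOutputQueue(name="depth", maxSize=4, blocking=False)
    while True:
        depth_frame = queue.get().getFrame()  # numpy array, depth in millimeters
```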
On the other hand, the Femto Mega is mostly just a plug-and-play depth camera; you give it an IP address, select a resolution and frame rate, and you get back a depth, IR, color or point cloud image. Any further processing you want to do will be done using TouchDesigner nodes.
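By way of comparison, driving the Femto Mega from TD scripting is roughly this involved. This is only a sketch; the parameter names on the Orbbec TOP are placeholders I'm assuming here, so check the TOP's parameter dialog for the real ones:

```python
# Sketch of pointing an Orbbec TOP at a PoE Femto Mega from the textport.
# Parameter names below are placeholder assumptions, not confirmed Orbbec TOP parameters.
orbbec = op('orbbec1')               # an Orbbec TOP in the network
orbbec.par.ipaddress = '10.0.0.50'   # the camera's address on the PoE network
orbbec.par.image = 'depth'           # depth / IR / color / point cloud
orbbec.par.active = True             # start streaming
```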
Robmc, I can't express how much I appreciate the prompt, concise answers! TouchDesigner has me utterly captivated even though I haven't dived into its depths yet, and this was the info I needed to snag a sensor. The plug-and-play aspect has me sold on the Femto Mega.
I'm pretty far away from using either in a production environment, and since both are relatively new my hope is that the TD software will advance such that the Orbbec TOP catches up to the capabilities of the Kinect TOP someday. For now I'll stick to USB if I need the color camera feed from the Femto Mega.
If I ever get deeper into python someday, I’ll snag an Oak-D as it could serve a dual purpose in my robotics applications, but for now I don’t think I would tap into the Oak’s potential.
Also new to TouchDesigner here, and I recently purchased both cameras for the reasons you just described. The Oak-D will almost certainly be the one I aim to use in the actual installation, but the Femto does seem to be a lot easier to play around with at home - it took about a day to get a sense of how to use it properly.
I have a google doc with links to the software and tutorials I used to get the Femto Mega up and running, feel free to message if you would find that helpful.
For anyone finding this post in early 2024: Orbbec does not have the Femto Mega in stock even though the website says it's available (per their receptionist). It looks like the only vendor with reasonable prices is MorpheusTek.
I just wanted to toss my experience in with Oak-D Pro PoE cameras. I am using 3 of them overhead, looking down, to detect players entering the play area and trigger the activation. This is a permanent install.
If you cycle them on/off too frequently due to debugging/restarting Touch, they can get locked up, requiring a power cycle. And in general things can happen and you want to be able to remotely cycle power, so make sure to use a managed PoE switch that lets you cycle PoE ports.
Of the 3 I am using, 1 is slower coming up than the others and is the one that is most likely to get hosed and require a PoE power cycle. Test your cameras and replace any that act differently.
I am not sure if this has been fixed, but if I leave them active on save, then restart the project and the cameras can't be found, it crashes. I made an Execute DAT that disables them on save, and another one that checks whether they show up in the device list before activating them on project start with a small delay.
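The shape of it is roughly this (a rough sketch, not my exact project; the 'oakdevice*' pattern and the 'active' parameter are placeholder names, and the device-list check is left as a stub):

```python
# Execute DAT callbacks (sketch). Operator pattern and parameter names are
# placeholders; substitute the OAK operators in your own network.

def onProjectPreSave():
    # Disable the cameras before the project file is written
    for cam in ops('oakdevice*'):
        cam.par.active = False
    return

def onStart():
    # Give the cameras a few seconds to enumerate, then re-enable them
    run('args[0]()', activateCameras, delayFrames=300)
    return

def activateCameras():
    for cam in ops('oakdevice*'):
        # Placeholder: only activate if the device shows up in the device list
        cam.par.active = True
    return
```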
I have also had them lock up mid-day, requiring a power cycle. I am working on some tools to watch for this and automatically power cycle them if needed without restarting the TD project.
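The rough idea is something like this (a sketch only; 'poe-cli' and the health check are made-up placeholders, not a working tool):

```python
# Watchdog sketch: periodically check each camera and, if it looks dead,
# shell out to the managed switch's CLI to power cycle its PoE port.
# 'poe-cli' is a stand-in for whatever tool your switch vendor provides.
import subprocess

CAMERAS = {'oakdevice1': 1, 'oakdevice2': 2, 'oakdevice3': 3}  # op name -> switch port

def cameraIsAlive(cam):
    # Placeholder health check; swap in whatever signal works for you
    # (comparing successive frames, watching the op's error state, etc.)
    return cam is not None

def cyclePort(port):
    subprocess.run(['poe-cli', 'port', str(port), 'power-cycle'])

def onFrameStart(frame):
    # Execute DAT callback, throttled so the check only runs occasionally
    if frame % 600 != 0:
        return
    for name, port in CAMERAS.items():
        if not cameraIsAlive(op(name)):
            cyclePort(port)
    return
```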
Managed PoE switch:
Command-line tool for the managed PoE switch I am using:
Thank you all for the quality perspectives!
I am looking for a good 3D depth camera for my next project and like @Brad_Emery I was leaning towards the OAK-D Pro due to the large body of information surrounding it and its company. Up until now, I have been using a Kinect V2 for all of my depth-tracking applications but the antiquated technology certainly has its limitations. I was looking at buying a Kinect Azure but they seem to always be out of stock. Orbbec’s Femto Mega looks like a great alternative and so does their Gemini 2L. While I am intrigued by the seemingly limitless potential of the OAK-D Pro, I am also a little intimidated by the learning curve and wonder if I might be better suited using a single-use depth camera like @robmc pointed towards.
In my particular use case, I’d be trying to track rough positions and arm gestures of people ~6m away at night with a projection on the wall behind them. It could be amazing to be able to track these items internally on the OAK-D and use the data to trigger effects in TD but as of now my only experience is with the relatively simple Kinect SDK that already has body-tracking and hand-tracking baked in. While the Kinect OPs are quite robust, I assume the OAK-D integration will take some time what with it being so new. I considered going with Intel RealSense because it has been integrated with TD for quite a while, however with Intel shutting down that department it doesn’t seem like the best option.
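For context, the kind of trigger logic I have in mind is roughly this; it's only a sketch, and the channel names follow the player/joint pattern I'm used to from the Kinect CHOPs, so they're assumptions rather than the exact names any of these devices expose:

```python
# CHOP Execute DAT sketch: fire an effect when player 1 raises a hand above
# their head. Channel names ('p1/hand_l:ty', 'p1/head:ty') are assumed;
# verify them against the skeleton CHOP for whichever camera I end up with.

def onValueChange(channel, sampleIndex, val, prev):
    skeleton = op('skeleton1')                 # the body-tracking CHOP
    hand_y = skeleton.chan('p1/hand_l:ty')
    head_y = skeleton.chan('p1/head:ty')
    if hand_y is None or head_y is None:
        return
    if hand_y.eval() > head_y.eval():
        op('trigger1').par.triggerpulse.pulse()  # placeholder effect trigger
    return
```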
I'd love any advice or opinions as I'm a little stuck right now! Based on this use case where I'll be relatively far away, do you all think I should go with the clearer depth map found in the Femto/Azure Kinect, or will it really matter since I'm trying to track larger body movements and not detailed hand movements? The limitless potential and open-source platform of the OAK-D sound super exciting, and the price point is great as well. It was going to be my choice until I discovered these other options, which are a bit closer to the depth cameras I'm used to.
Thanks in advance for reading through all this; I sincerely appreciate all opinions. Sorry if I should have created a new thread, I just thought it would be cleaner to post here since it's the same topic.
Hello everyone and thanks for all the valuable information.
I am planning a low-light installation and need to sense body data (basically skeleton tracking of arms, legs and position in the room) for an interactive audio setup.
I am wondering which one is the better solution. I would definitely use ethernet in both cases. How are your experiences with the Select CHOPs?
How precise are they with multiple "players" and in dark rooms?
In my case, I got far faster and more accurate results for skeleton tracking in low-light settings with the Kinect v2, which would lead me to believe that the Orbbec Femto Mega is your best bet since it's basically a Kinect Azure.
The Oak-D Pro is fun to use, but the NN models for skeleton tracking are slow, and the example files have them running off of the color camera, which then passes the data to the depth/IR nodes; that made my latency for multiple players quite high.