Microsoft announced the Azure Kinect is dead, with units only shipping until October.
What's the plan going forward? The Orbbec products? I saw discussions elsewhere saying it "wasn't on the roadmap," but I'm hoping the latest announcement causes Derivative to reconsider.
OAK-D support is currently in the Experimental build:
It looks like Windows and MacOS are supported. I’m waiting for an OAK-D to be delivered to test it out.
Here's the announcement, which is incredibly hard to find -.-
Do you have a link for that information?
Cannot find any info on AK being discontinued.
You would hope that, as central as Kinect is to the TD universe, we’d eventually get first class support for the anointed successor. That, and skeletal tracking is just the surface. Having depth and color and skeletal all in a nice synced package is pretty amazing.
As an aside, I actually think this is potentially fantastic news. Kinect always felt too niche for a behemoth like Microsoft, especially once it was divorced from Xbox. Orbbec feels like a better fit for pushing the use cases TD designers care about… Just the addition of PoE in the Femto Mega seems like a huge win. Hello 100 meter runs without hacky converters/extenders! (Anyone know what latency is like on that thing?)
The beauty of Kinect* was the front end giving RGB, 3D skeleton, depth, and other features that the TD Kinect(2) operators could access. The downside is you were at the mercy of their update and marketing cycles: Kinect 2 invalidated the original Kinect, etc. And if the state of the art moved, it might take a while for Kinect to offer anything comparable.
Today there are a number of cameras/sensors that give you point clouds, skeletons, etc.: Orbbec, Zed, RealSense (no, don't go there), Nuitrack's SDK, and so on. Then again, the state of the art with AlphaPose, MediaPipe, etc. has been moving faster than just about anything. You can download Python demos from research projects that do (partial) 3D from a single RGB camera, track skeletons of dozens of people in crowds, etc. And there are new papers out just about monthly.
I've been experimenting with these libraries with the aim of making FOSS smart cameras that are open enough that talented coders can update the processing pipeline to meet new requirements (train an ML system for your poses, hand positions, face, etc.). You can fairly easily use a standard USB camera with Python, NDI, and OSC to get just about all the features you want… if you can afford to DIY the code.
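To illustrate the OSC half of that pipeline, here is a minimal, stdlib-only sketch of packing tracked points into OSC messages and sending them over UDP to TouchDesigner's OSC In CHOP. This is not code from Pose2Art; the address `/pose/p0` and port 7000 are assumptions you would match to your own network setup.

```python
import socket
import struct

def osc_message(address: str, floats: list[float]) -> bytes:
    """Pack a minimal OSC message: null-padded address string,
    type-tag string, then big-endian float32 arguments."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated and padded to a multiple of 4 bytes
        return b + b"\x00" * (4 - len(b) % 4)
    msg = pad(address.encode())
    msg += pad(("," + "f" * len(floats)).encode())
    for f in floats:
        msg += struct.pack(">f", f)
    return msg

# Send one tracked point (normalized x, y, confidence) to TouchDesigner.
# Address and port are assumptions -- match your OSC In CHOP settings.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(osc_message("/pose/p0", [0.42, 0.87, 0.99]), ("127.0.0.1", 7000))
```

In practice you would loop over the landmarks your pose library returns (MediaPipe gives 33 per person) and send one message, or one bundle, per frame.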
Hardware is still a bit of an issue. I tried to run some of these on a Raspberry Pi 4 but only got 8 fps, which is useless for interactivity. I have a Google Coral dev board but got distracted from building it out in favor of getting some basic TD demos working with the PC Python app. I haven't kept up the documentation or even the GitHub repo well, but it's over at GitHub - MauiJerry/Pose2Art: Pose2Art creates an AI Camera run on small 'edge' device that extract Pose and sends tracked points via OSC to TouchDesigner and Unity/UnReal for ART!
Caveat: I live on Maui and am further distracted from TD/art at present by the fires that have wiped out a significant part of our community, including venues where we were planning to try some installations.
It seems Orbbec has a wrapper SDK that emulates the Microsoft SDK, making their cameras a drop-in replacement.
The "Femto Mega" and "Mega I" are shipping now, and the "Femto Bolt" (the most direct replacement) ships in October (about when they anticipate Microsoft's supply of Kinects to run out).
All have identical optics and capabilities.
- Bolt is basically an Azure Kinect clone.
- Mega adds PoE and edge processing.
- Mega I is an IP65 rated version of the Mega.
Tagline on the new marketing is “develop with one, deploy with any”.
I'm doubling down on my "this is a good thing" take. Exciting stuff!
Reached out to Orbbec… The Azure wrapper is not supported in network mode… So still stuck on USB unless someone figures out a direct plugin.