Orbbec Femto Bolt not working correctly in 2023.11290

Hi,
We don’t get a good point cloud from the Orbbec Femto Bolt in TouchDesigner 2023.11290.
We had the same problem when we tried to use the Orbbec SDK in a custom TOP.
The Orbbec SDK examples are working correctly.
orbbec1.zip (366.0 KB)
TouchDesigner is also crashing a lot when changing settings on the Orbbec TOP.
Cheers,
Colas

Thanks for the report. Unfortunately, our development was done using Femto Mega cameras, and there appear to be some undocumented differences with the Bolt that we are looking into.

From what we can tell, the Bolt is starting up in passive IR mode which significantly limits how well the depth camera works. I’ve reported the issue to Orbbec and we hope to have a solution soon.

As far as the crashes are concerned, if you can send any .dmp files to support@derivative.ca, we can look into them further and see what is going on.

We are currently aware of two issues that can lead to crashes: trying to run the Orbbec TOP without installing the Kinect Azure package in the TouchDesigner web installer, or trying to use both the Orbbec TOP and Kinect Azure TOP in the same project.

Is there any hacky solution or otherwise to get that IR enabled, via Python or an external process etc., without disrupting the feed coming into TouchDesigner?

We’re looking to use this on a super tight deadline, unfortunately. I have been trying to get some semblance of a point cloud into TouchDesigner using shared memory out from a Python process and into a Script TOP.

I’ve gotten depth working just fine using this post, but cannot find an efficient way (yet) to convert it to a point cloud and send that instead… or to get hold of the intrinsics and calculate the point cloud in TouchDesigner.
What data does Shared Memory In TOP expect? - General TouchDesigner Discussion - TouchDesigner forum (derivative.ca)
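For reference, here is a minimal sketch of the Python side of that hand-off, using the standard library's `multiprocessing.shared_memory` to publish one depth frame by name so another process can attach and read it. This is just the generic shared-memory round trip; the actual header layout the Shared Mem In TOP expects is described in the linked post, and the frame size and name here are made up for illustration.

```python
import numpy as np
from multiprocessing import shared_memory

# Hypothetical depth frame; the Bolt's depth stream is 16-bit, e.g. 576x640.
depth = (np.random.rand(576, 640) * 4000).astype(np.uint16)

# Writer: create a named block large enough for one frame and copy it in.
shm = shared_memory.SharedMemory(create=True, size=depth.nbytes)
view = np.ndarray(depth.shape, dtype=depth.dtype, buffer=shm.buf)
view[:] = depth

# Reader (normally another process): attach by name and wrap the same bytes.
reader = shared_memory.SharedMemory(name=shm.name)
frame = np.ndarray(depth.shape, dtype=depth.dtype, buffer=reader.buf).copy()

# Drop our view of the buffer before closing, then tear the block down.
del view
reader.close()
shm.close()
shm.unlink()

print(bool((frame == depth).all()))  # True: the reader saw the writer's frame
```

In a real setup the writer would keep the block open and rewrite it every frame, with the reader polling or signalled by some other channel.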

Any insights there?

I’ve got a couple ideas to try on this end, but I had been waiting to hear back from Orbbec to know what the proper approach would be. I’ve reached out via a couple different channels, but don’t have an answer yet.

I would have thought you could switch the camera to active IR using the free OrbbecViewer app, but my understanding is that it still reverts to passive IR when TouchDesigner connects to the camera.

I assume with your timeline you already have the Bolt hardware? Because it should all work fine with the Femto Megas that we’ve been using. I’ve also got Astra and Gemini cameras that appear to be working, but are not as well tested yet.

The quickest thing I can do at the moment is go ahead with the 1.8.3 SDK update, since we’re going to do that anyway, and see if it helps.

As far as generating the point cloud manually, I’m currently taking it directly from the SDK, which does the projection internally, so I’m not too clear on what needs to be done there. We don’t currently expose the intrinsics, but I guess you could access them via Python. There is also a fairly simple depthProjection component in the point clouds section of the palette that gives an example of projecting a depth map into a point cloud.


Got it, thanks! We’ll switch to the Megas :slight_smile: - the Ethernet connection adds some nice future flexibility there as well.

I managed to find a way to get shared memory from a Python process into TouchDesigner via a Script TOP.

Then I began trying to hybridize these two examples:

depth viewer:
pyorbbecsdk/examples/depth_viewer.py at main · orbbec/pyorbbecsdk (github.com)

save pointcloud to disk:
pyorbbecsdk/examples/save_pointcloud_to_disk.py at main · orbbec/pyorbbecsdk (github.com)

I can get both of those examples to work as-is, and was able to send depth into TouchDesigner just fine as well - very fast.

However, the point cloud example (the only one I could find) uses a function that converts the points into a filtered array, dropping any points whose depth == 0. Effectively it’s no longer in an image layout, and it’s quite a bit slower.

So far, I have not found another function that converts depth to xyz whilst leaving it in the image array format.

Are you able to share which functions you use in the Orbbec SDK to achieve the efficient conversion? They seem to match 1:1 with the Python bindings.

I’ve updated to the new SDK now and I’m just waiting on a TouchDesigner build, which I can post here for testing if you’re interested. I’ve been talking to Orbbec, but they aren’t clear on what is going on either, so it may take some experiments to find a solution.

Regarding the point cloud conversion, I’m using the PointCloudFilter class in the C++ SDK to generate the points. I give it the camera during initialization for the intrinsics, and at runtime I pass it raw frame data and get back the point cloud frame. I’m not sure how that corresponds to the Python interface.


Here’s a link to a new build with the 1.8.3 SDK: Dropbox - TouchDesignerWebInstaller.2023.11320.exe

I don’t know if this will make a difference, but you’re welcome to give it a try. The Kinect Azure interface specifically enables the active IR sensor, so it should be working properly.

Orbbec doesn’t seem to have any specific ideas on what is going wrong, so I’m going to try a few other experiments that I will post when available.

Edit: We’ve had one user confirm that this seems to fix the issue, but let us know if anybody is still having problems.

I tried the above link with my Femto Bolt and it works with the Kinect Azure TOP and CHOP. Now it shows body-part tracking, and I can see the player index view too.



Thanks for testing that out. We’ll get it into an official build fairly soon.
