Kinect Azure requests

Hi there,
I assume there is enough demand to make a dedicated thread for Kinect Azure requests a reasonable idea.

Please post your Kinect Azure RFEs here.

Retrieve calibration data

  1. Please add access to the calibration data of the camera sensors, and expose it in the standard OpenCV format as well.

  2. Coordinate transformation should be generalized for any combination.
    A typical example: getting the color/point cloud data projected into the IR (depth) camera space.
    (Currently it is only possible to transform data into the color camera's projected space.)
    (Update: maybe I'm wrong and you do seem to offer this option, but it doesn't seem to work well.)

  3. Getting the current bandwidth sent from the device for each frame would also help.
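To give a sense of the numbers involved in the bandwidth request: assuming uncompressed depth frames, the per-frame payload can be estimated from resolution and bit depth. This is only a back-of-the-envelope sketch, not what the device actually reports; the mode resolutions are from the Azure Kinect hardware documentation, and real USB traffic will differ (color is typically MJPEG-compressed).

```python
# Rough estimate of uncompressed per-frame payload for Azure Kinect depth modes.
# Resolutions per the hardware spec; depth values are 16-bit.
DEPTH_MODES = {
    "NFOV_UNBINNED": (640, 576),
    "NFOV_2X2BINNED": (320, 288),
    "WFOV_UNBINNED": (1024, 1024),
    "WFOV_2X2BINNED": (512, 512),
}

def depth_bandwidth_mbps(mode: str, fps: int = 30) -> float:
    """Uncompressed depth bandwidth in megabits per second."""
    w, h = DEPTH_MODES[mode]
    bytes_per_frame = w * h * 2          # 16-bit depth values
    return bytes_per_frame * fps * 8 / 1e6

print(depth_bandwidth_mbps("NFOV_UNBINNED"))   # 176.9472
```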

Thanks for the suggestions.

I’ve made an RFE on our end regarding making the calibration data accessible. Microsoft does provide functions to get the transform matrices and the optical properties, so it shouldn’t be a problem to put them into an Info CHOP. I’m not familiar with the OpenCV calibration format offhand, but I will take a look into it. The documentation mentions the Brown-Conrady model, which they say is OpenCV compatible.
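For reference, the rational Brown-Conrady model that both OpenCV and the Azure Kinect calibration use maps a normalized image point through radial and tangential terms. The sketch below is a plain-Python illustration of that formula, not code from the SDK or the CHOP:

```python
def distort_brown_conrady(x, y, k=(0.0,) * 6, p=(0.0, 0.0)):
    """Apply the rational Brown-Conrady model to a normalized point (x, y).

    k = (k1..k6) radial coefficients, p = (p1, p2) tangential coefficients,
    in the same layout OpenCV uses.
    """
    k1, k2, k3, k4, k5, k6 = k
    p1, p2 = p
    r2 = x * x + y * y
    # Rational radial factor: numerator over denominator polynomial in r^2.
    radial = (1 + k1 * r2 + k2 * r2**2 + k3 * r2**3) / \
             (1 + k4 * r2 + k5 * r2**2 + k6 * r2**3)
    # Tangential (decentering) terms added on top of the radial scaling.
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

# With all coefficients zero the mapping is the identity:
print(distort_brown_conrady(0.1, -0.2))  # (0.1, -0.2)
```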

In the latest build you can transform any of the images into the other camera space by toggling the ‘Align Image to Other Camera’ button. This will transform color camera images to depth space and vice versa.

We’re just using the SDK’s transformation functions, so we don’t really have any control over the details.

The SDK handles all communications with the device, so I’m not aware of any way to get the bandwidth. We have a frame counter in the Info CHOP that increments when it receives a packet, but I don’t have any information on how large that packet is. If you’re aware of a function for this let me know and I can look into it further.

Azure Kinect Undistort

Please add undistort options for the RGB and depth images in the Kinect Azure camera.

(You may be able to optimize the RGB undistort case.)

It's a foundational building block for many algorithms.

Keep up the great work!
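For anyone curious what undistortion involves: under a Brown-Conrady-style model the inverse mapping has no closed form, so it is commonly solved iteratively per pixel. The helper below is a hypothetical sketch of that idea for a simple radial-only model, not how TouchDesigner or the SDK implements it:

```python
def undistort_point(xd, yd, k1=0.0, k2=0.0, iters=10):
    """Invert simple radial distortion x_d = x * (1 + k1*r^2 + k2*r^4)
    by fixed-point iteration on normalized coordinates."""
    x, y = xd, yd                      # initial guess: the distorted coords
    for _ in range(iters):
        r2 = x * x + y * y
        radial = 1 + k1 * r2 + k2 * r2 * r2
        x = xd / radial                # refine using the current estimate
        y = yd / radial
    return x, y

# Round-trip check with a small radial term:
k1 = -0.1
x, y = 0.3, 0.2
r2 = x * x + y * y
xd, yd = x * (1 + k1 * r2), y * (1 + k1 * r2)
xu, yu = undistort_point(xd, yd, k1=k1)
print(round(xu, 6), round(yu, 6))      # 0.3 0.2
```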

Thanks. We just got our second camera into the office, so I’m currently testing out the synchronization features, but I’ll add it to our RFE list.

Better UI representation of RGB camera exposure time values

Currently there is a continuous slider which does not necessarily represent the actual exposure values.

Below is the mapping for the acceptable RGB camera manual exposure values:

| exp | 2^exp (µs) | 50 Hz (µs) | 60 Hz (µs) |
|----:|-----------:|-----------:|-----------:|
| -11 | 488 | 500 | 500 |
| -10 | 977 | 1250 | 1250 |
| -9 | 1953 | 2500 | 2500 |
| -8 | 3906 | 10000 | 8330 |
| -7 | 7813 | 20000 | 16670 |
| -6 | 15625 | 30000 | 33330 |
| -5 | 31250 | 40000 | 41670 |
| -4 | 62500 | 50000 | 50000 |
| -3 | 125000 | 60000 | 66670 |
| -2 | 250000 | 80000 | 83330 |
| -1 | 500000 | 100000 | 100000 |
| 0 | 1000000 | 120000 | 116670 |
| 1 | 2000000 | 130000 | 133330 |

Add feature:
Disable/enable the microphone and IMU if they aren't needed, to improve reliability.

Re: the exposure: you’re correct that it’s not always showing the correct value. Internally, I believe it just uses the closest valid number based on the other settings. We can probably improve this, but the exposure functions in the SDK didn’t seem to be working correctly before so I was waiting for further updates from Microsoft.
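If the CHOP does snap to the nearest valid value as described, the behaviour would look something like this sketch. The 60 Hz values are copied from the table posted above; `snap_exposure` is a hypothetical helper for illustration, not a documented function:

```python
# Valid RGB manual exposure values (µs) at 60 Hz powerline frequency,
# taken from the mapping table in this thread.
VALID_60HZ = [500, 1250, 2500, 8330, 16670, 33330, 41670,
              50000, 66670, 83330, 100000, 116670, 133330]

def snap_exposure(requested_us: int, valid=VALID_60HZ) -> int:
    """Return the valid exposure value closest to the requested one."""
    return min(valid, key=lambda v: abs(v - requested_us))

print(snap_exposure(10000))   # 8330 (closer than 16670)
print(snap_exposure(60000))   # 66670
```

A UI improvement along these lines could present the valid values as discrete slider stops instead of a continuous range.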

The IMU is automatically disabled unless you turn on the IMU channels in the Kinect Azure CHOP.

I’m not 100% sure about the microphone, because it’s controlled by a separate SDK. You can access it right now with the Audio Device In CHOP. I doubt it is on by default based on how everything else works, but I’ll double check.

I am not sure if I'm just missing the correct toggle in the parameters of the Kinect Azure CHOP, but I would love to keep using the UV coordinates of the joints that you had in Kinect 2. Is it available somewhere?

The UV joint positions aren’t currently available in the Kinect Azure CHOP. The old Kinects produced them automatically, but Microsoft reworked the skeleton tracking quite a bit for the Azure, and that information isn’t readily available in the new version.

However, Microsoft does provide general functions in their SDK for converting 3D points into image space, which should work for reproducing that data.
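Conceptually, that 3D-to-2D conversion is a pinhole projection using the camera intrinsics (plus distortion, omitted here). The sketch below illustrates the idea with made-up example intrinsics; it is not the SDK's actual function:

```python
def project_point(point_3d, fx, fy, cx, cy):
    """Project a 3D camera-space point to pixel coordinates with a simple
    pinhole model (no distortion). fx/fy are focal lengths in pixels,
    cx/cy the principal point; the values used below are examples only."""
    X, Y, Z = point_3d
    if Z <= 0:
        return None                    # behind the camera: no valid pixel
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return u, v

# A point on the optical axis lands at the principal point:
print(project_point((0.0, 0.0, 1000.0), fx=600.0, fy=600.0, cx=320.0, cy=288.0))
# (320.0, 288.0)
```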

I will add it to our feature request list and try to get it into one of our next updates.

Thanks @robmc for the quick response. Fingers crossed for it making the next update :wink: keep up the good work!

Just wanted to let you know that this work has now been done: both color and depth space UVs are available as channels in the Kinect Azure CHOP. There are also Python functions for converting arbitrary 3D points into image space.

The new features will be available in the next update.