RESOLVED: Non normalized version of 16bit fixed pixel format | CP

Hello!

I ran into a very sad moment when I tried to convert the Kinect depth data from

32-bit float (meter scale)
to
16-bit fixed (mm scale).

I use the depth for calculations inside a function that receives 16-bit unsigned pixel values
(a custom OP with the download-to-CPU option).

The new 16-bit fixed texture is clamped to 1 :-(.

What is the recommended workflow?
If it's a real limit at the moment, I will be happy for a fix.
very_sad_moment_16bit_fixed_is_normalized.toe (507.1 KB)

Maybe I was not clear enough:
this issue is more acute when you work on the data in a C++ CPU TOP.

Update:
At the moment I use a Math TOP and multiply by
0.00001525902,
which is 1/65535.
Maybe it will give good enough results.

thanks a lot !
Barak.

Fixed-point textures are by definition clamped to 0-1; that is the range of values they can represent. For 16-bit, that's 65536 different values in equal steps between 0 and 1.

Thanks Malcolm,

What if sometimes we just want to transfer data on the GPU, like in this case a depth map
in mm scale contained in a 16-bit unsigned int…

One of the only flows I can see on the GPU to get an unsigned short that represents Kinect depth
in mm would be:

32-bit (meters) → multiply by 1000 → round value → multiply by 1/65535
→ copy to CPU as R16FIXED.

Any other options?

Hey Barak,
If I’m understanding the question correctly, you are doing those operations in a shader and then looking to get the results back on the CPU, correct?
If so, then yes, that’s the correct way. Shaders work in normalized coordinates for fixed point (since that is what fixed point is), so if you want to write to a 16-bit fixed value in shader, then you need to output a value between 0-1. This will be written to the 16-bit data as 0 == 0 and 1 == 65535.