Hey, thanks for the sample. It’s hard to tell exactly what’s going on with a more complex example such as this, but the most likely reason is that intBitsToFloat() and floatBitsToInt() are not as stable an operation as one would hope.
In particular, if you are generating an integer ID that you want to encode as a float to be saved out into a floating-point buffer, you need to make sure it doesn’t produce a bit pattern that yields undefined results with intBitsToFloat().
Specifically, you can’t have a set of bits that would encode NaN or Inf. Per the documentation: if the encoding of a NaN is passed in *x*, it will not signal and the resulting value will be undefined; if the encoding of a floating-point infinity is passed in parameter *x*, the resulting floating-point value is the corresponding (positive or negative) floating-point infinity.
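To make that concrete, here’s a rough sketch (not from your shader, just the standard IEEE 754 single-precision layout) of how you could reject bit patterns whose exponent field is all ones, which is what encodes both Inf and NaN:

```glsl
// Sketch: reject integer IDs whose bit pattern would decode to Inf or NaN.
// In IEEE 754 single precision, Inf/NaN have all 8 exponent bits (30..23) set.
bool hasAllOnesExponent(int id) {
    return ((id >> 23) & 0xFF) == 0xFF;
}

float encodeId(int id) {
    // Only reinterpret when the pattern decodes to a finite float.
    return hasAllOnesExponent(id) ? 0.0 : intBitsToFloat(id);
}
```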
Additionally, denormalized floats can be flushed to 0 at any time. Per the spec: any denormalized value input into a shader, or potentially generated by any operation in a shader, can be flushed to 0.
This means the integer 0x1, which decodes to a denormalized float, may get turned into 0x0 (and there are many such values).
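The denormal patterns are the complementary case: an exponent field of all zeros with a nonzero mantissa. A check along the same lines (again just a sketch over the standard bit layout):

```glsl
// Sketch: detect bit patterns that decode to denormalized floats,
// which the driver is free to flush to 0 anywhere in the pipeline.
bool isDenormalPattern(int id) {
    int expBits  = (id >> 23) & 0xFF;     // 8-bit exponent field
    int mantissa = id & 0x007FFFFF;       // 23-bit mantissa field
    return expBits == 0 && mantissa != 0; // zero exponent, nonzero mantissa
}
```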
Can your algorithm encode IDs directly as floats, rather than using intBitsToFloat() and back?
You shouldn’t be using 255, since you aren’t working in 8-bit anywhere. The big question is what ranges of values the integers in this algorithm can take, and whether each of those can be expressed uniquely as a float.
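If the IDs fit in 24 bits, one possibility (a sketch, assuming a full 32-bit float render target and non-negative IDs) is to store the ID as an ordinary float value; a 32-bit float represents every integer up to 2^24 exactly, so the round trip is lossless and never produces NaN, Inf, or denormals:

```glsl
// Sketch: store the ID as a plain float value instead of reinterpreting bits.
// Exact for IDs in [0, 2^24], since float has a 24-bit significand.
float encodeIdDirect(int id) {
    return float(id);          // always a normal, finite value
}

int decodeIdDirect(float stored) {
    return int(stored + 0.5);  // round back to the nearest integer (IDs >= 0)
}
```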
Hello friends
I encountered another problem in another shader
It works fine in version 2021, but in version 2023 I either encounter this error or it does not work properly.
It is a simple shader, but I can’t find the problem with it.
The output does not come back as NaN or noise; this time it is a new problem, I think.
Thank you for your help