I just want to confirm that what I am seeing with the ‘SharedMemOut’ Texture Operator is correct.
My TD setup:
The new-project output (the jelly beans) is sent to a resolution operator. That operator scales the image down to something very small (like 2x2 or 2x1 pixels) and also sets the ‘Pixel Format’ parameter on the Common tab to ‘8-bit fixed (Mono)’. The result of this ‘res’ operator is fed to a Shared Mem Out texture operator.
C++:
When I lock and examine the contents of the shared memory on the C++ side, what I would expect is 8 bits (one byte) of data per pixel, for each pixel in the image.
For example, a 2x2 image would have 2 × 2 = four pixels, and each pixel would carry one byte’s worth of data (256 possible values). In hex it would look something like this:
[code]
p1 |p2 |p3 |p4
0x12 0x55 0x2f 0xa1[/code]
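To spell out the arithmetic, here's a trivial sketch of the buffer size I'd expect from that setup (the width/height values are just stand-ins for whatever the res operator outputs):

```cpp
#include <cstddef>

// Expected shared-memory size for an 8-bit mono image:
// one byte per pixel, no extra channels.
constexpr std::size_t expectedBytes(std::size_t width, std::size_t height)
{
    const std::size_t bytesPerPixel = 1;  // 8-bit fixed (Mono)
    return width * height * bytesPerPixel;
}
```

So for 2x2 I'd expect `expectedBytes(2, 2) == 4` bytes total, and for 2x1 just 2 bytes.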
What I actually see in C++ (with the above setup) is:
[code]
p1?                 | p2?                 | p3?                 | p4?
0x00 0x00 0x12 0xff   0x00 0x00 0x55 0xff   0x00 0x00 0x2f 0xff   0x00 0x00 0xa1 0xff[/code]
There is clearly a bunch of extra data there; to my eye, the 8-bit mono format appears to actually be a four-channel format, with two of the channels zeroed out and one (alpha?) set to full strength.
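In case it's useful, this is how I'm pulling the mono values out for now. It's just a sketch built on my assumption that the buffer really is four bytes per pixel with the mono value in the third byte, as the dump above suggests; `buf`, `width`, and `height` are placeholders for whatever the shared-memory API actually hands back:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Extract one byte per pixel from a 4-bytes-per-pixel buffer.
// ASSUMPTION: the mono value sits in the third channel (offset 2),
// matching the hex dump observed above.
std::vector<std::uint8_t> extractMono(const std::uint8_t* buf,
                                      std::size_t width,
                                      std::size_t height)
{
    std::vector<std::uint8_t> mono;
    mono.reserve(width * height);
    for (std::size_t px = 0; px < width * height; ++px)
        mono.push_back(buf[px * 4 + 2]);  // skip the two zeroed channels and alpha
    return mono;
}
```

Running that over the 16 bytes I dumped gives back the four values I originally expected (0x12, 0x55, 0x2f, 0xa1), but obviously I'd rather not copy if one byte per pixel is possible in the first place.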
Is this the intended behavior/pixel/channel layout, or is there something I need to change in order to get one byte per pixel?