FIXED: 2019.19160 Win 10 - c++ TOP (cuda) plugin - getOutputFormat() not working properly when dll is loaded as a custom op


This one is a bit confusing. I've noticed that when the plugin is loaded as a custom op (not using the CPlusPlus TOP), even though the output format looks like what is set in getOutputFormat() (32-bit mono), the cooking time (4x slower) reflects another format (32-bit RGBA).

Here is what I'm doing:

	bool CudaSortTOP::getOutputFormat(TOP_OutputFormat* format, const OP_Inputs* inputs, void* reserved)
	{
		format->redChannel = true;
		format->greenChannel = false;
		format->blueChannel = false;
		format->alphaChannel = false;

		format->bitsPerChannel = 32;
		format->floatPrecision = true;

		return true;
		//return false;
	}


See this video (look at the format in the Info as well as the cooking time; it goes from 1.3 ms to 6 ms on the GPU when the plugin is loaded as a custom op).

Let me know if you can reproduce, thank you!

Also, in general I was thinking that using getOutputFormat() is a bit confusing, since it creates a disconnect between what is selected on the Common page of the OP and the actual output format.

Maybe the output format selection could be grayed out, or made read-only with a mention like "set by OP", on the Common page when this is used?

Also, it seems outputFormat->pixelFormat always returns 0 in execute()? Not sure if that's intended.

As an alternative to setting the format with getOutputFormat(), I'm testing for the selected format in execute(), and this works:

	if (outputFormat->redBits != 32 ||
		outputFormat->greenBits != 0 ||
		outputFormat->blueBits != 0 ||
		outputFormat->alphaBits != 0)
		myError = "TOP Pixel Format should be set to 32-bit float (Mono)";

but I was hoping I could use outputFormat->pixelFormat and test for the specific format, which would be more compact.
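To illustrate what I mean, here is a sketch of the more compact check — note the enum and its value names below are hypothetical stand-ins, not the actual API's definitions:

```cpp
#include <cstdint>
#include <string>

// Hypothetical pixel-format enum standing in for whatever
// outputFormat->pixelFormat actually reports (real values are API-defined).
enum class PixelFormat : uint32_t
{
    Invalid = 0,
    RGBA8Fixed,
    Mono32Float,
    RGBA32Float,
};

// One comparison against the specific format, instead of testing the
// four per-channel bit counts individually. Returns an empty string on
// success, or an error message to display on the node.
std::string checkFormat(PixelFormat pixelFormat)
{
    if (pixelFormat != PixelFormat::Mono32Float)
        return "TOP Pixel Format should be set to 32-bit float (Mono)";
    return {};
}
```
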

Thank you!

Thanks for the reports! The slower performance is fixed; it was caused by a memory allocation happening at the start of the cook, before the texture was reallocated to what getOutputFormat() requested.
pixelFormat is now filled in in all modes as well.

The weird way the pixel format is selected is a legacy thing from back when GPU pixel-format support varied far more between cards. As part of the Vulkan work I'm going to replace this with a more direct pixel format selection.

Thanks for the bug fix Malcolm and thanks for the formats explanation!
Though in the end I reverted to displaying an error message when the wrong format is selected, as I felt the disconnect between what is selected on the Common page of the OP and the actual output format was confusing, as mentioned above.