Edge Detection Difference: macOS vs. Windows (M1 Max vs. Nvidia 4090)

Hi,

Probably should have been obvious/expected given that GPU optimizations may differ, but I just want to check… Is it known/intended behavior that macOS on an M1 Max would create different output than the same project running on Windows with an Nvidia 4090? (Even with just the basic TOPs?)

It seems like the edge detection on Windows/Nvidia is significantly more sensitive and requires way more precise tuning to prevent it from going crazy.

Hey, can you post an example of what you are seeing? It should be the same; they use the same code.

Interesting; I'm definitely getting different results. (Though I'm glad to hear that's not expected, since it wouldn't be ideal for cross-compatibility.)

Here’s a simplified version of the project, along with separate renders of it from macOS and Windows: Dropbox

The problem comes from the ‘Passes’ parameter on the Feedback TOP being set to 10. If you set it to 1, the output will be much more consistent between the two OSs (Windows is closer to correct).
I’ll make that parameter do nothing on the Feedback TOP, since it doesn’t make sense to do multiple passes for feedback.
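In the meantime, here is a minimal sketch of how you could reset that parameter from a TouchDesigner Python script until the fix ships. It assumes the standard scripting name `npasses` for the Common-page ‘Passes’ parameter and searches the whole project for Feedback TOPs; adjust the search root if your project is structured differently.

```python
# Hedged sketch: force the Common-page 'Passes' parameter of every
# Feedback TOP in the project back to 1, so macOS and Windows renders
# stay consistent. Assumes 'npasses' is the scripting name for 'Passes'.
for fb in root.findChildren(type=feedbackTOP):
    if fb.par.npasses.eval() != 1:
        fb.par.npasses = 1
        print('Reset Passes to 1 on', fb.path)
```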