After some adjustments I was able to reproduce this issue in a simple test file. The problem only appeared when I increased the resolution from 1920px wide to 5760px; I'm not sure about other in-between resolutions. The image starts out smooth but progressively becomes jagged as it cycles through the feedback loop.
When the resolution is an exact power of 2 there are no jaggies. I suspect that the memory padding that happens at the driver/GPU level (or whatever they're doing to handle non-power-of-2 resolutions) introduces small errors which are amplified by the feedback loop.
Just tested this as well in version 2023.12370 (Win11, NVIDIA 2080 Ti) and I'm seeing the same output.
It seems to have something to do with the Add TOP being in "interpolate pixels" mode.
As a workaround: if you set the Add TOP to "nearest pixels" mode, the jaggies don't appear.
@timgerritsen Amazing! The workaround worked for me too. I had to make sure every operator inside the feedback loop was set to the "nearest pixel" display mode, but that fixed the issue. Thank you.
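To make the effect of the workaround concrete, here is a rough numpy sketch (not TouchDesigner code; the `eps` offset is an assumed stand-in for the GPU's tiny texture-coordinate error). With linear ("interpolate pixels") filtering every pass perturbs the values slightly and the error compounds around the feedback loop; with "nearest pixel" filtering the image stays bit-exact.

```python
import numpy as np

rng = np.random.default_rng(0)
src = rng.random(64).astype(np.float32)   # stand-in for one scanline of the image
eps = 1e-4                                # assumed tiny off-center sampling error

def sample(img, mode):
    """Resample img at nominally 1:1 coordinates, each shifted by eps texels."""
    x = np.arange(img.size, dtype=np.float64) + eps
    if mode == "nearest":
        return img[np.clip(np.round(x).astype(int), 0, img.size - 1)]
    # linear filtering: blend the two neighbouring texels
    x0 = np.clip(np.floor(x).astype(int), 0, img.size - 1)
    x1 = np.clip(x0 + 1, 0, img.size - 1)
    f = (x - x0).astype(np.float32)
    return (1 - f) * img[x0] + f * img[x1]

for mode in ("nearest", "linear"):
    img = src.copy()
    for _ in range(200):                  # 200 trips around the feedback loop
        img = sample(img, mode)
    print(mode, "max drift after 200 passes:", float(np.abs(img - src).max()))
```

Running this prints zero drift for "nearest" and a steadily accumulated error for "linear", which matches the jaggies only appearing when interpolation is on.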
Logged this so we can have a look. You can reproduce the source issue by adding the image onto itself and then subtracting it again: comparing the result with the original will reveal small remnants that, in a feedback loop, would add up.
The smallest resolution I was able to replicate this with is 3571 x 655.
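Here's a small numpy model of that add-then-subtract test (not TouchDesigner code; each "TOP pass" is assumed to resample its input with linear filtering at coordinates that are off-center by a tiny epsilon). With exact sampling the comparison would be all zeros; with off-center interpolation, small remnants survive.

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.random(4096).astype(np.float32)   # stand-in for one row of the source image
eps = 1e-4                                # assumed sub-texel sampling error

def resample_linear(img):
    """One 'TOP pass': linearly resample at coordinates off-center by eps texels."""
    x = np.arange(img.size, dtype=np.float64) + eps
    x0 = np.clip(np.floor(x).astype(int), 0, img.size - 1)
    x1 = np.clip(x0 + 1, 0, img.size - 1)
    f = (x - x0).astype(np.float32)
    return (1 - f) * img[x0] + f * img[x1]

added      = resample_linear(a) + resample_linear(a)        # Add TOP: A + A
subtracted = resample_linear(added) - resample_linear(a)    # Subtract TOP: (A + A) - A
residue    = subtracted - a                                  # compare against the original

print("max |remnant|:", float(np.abs(residue).max()))   # small but not zero
print("exactly zero?", bool(np.all(residue == 0)))
```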
I guess the thing that strikes me as odd is that the GPU interpolator is doing anything at all, since the resolution isn't being changed.
There must be something that happens only at certain (higher, non-power-of-two) resolutions that triggers the interpolator to kick in and start crunching the values, even when no interpolation is necessary.
The GPU doesn't know we are taking an input image and running a pixel shader on each pixel at exactly 1:1. All it knows is that it's rendering to a WxH output texture, two triangles are being rendered on top of those pixels, and a texture is being applied to those two triangles. Each triangle is independent, so one doesn't know where the other is going to land. Although the GPU knows the resolution of the texture, it has no way of knowing that the triangles will end up rasterizing in such a way that the texture coordinates should land at the center of each source texture pixel. The texture coordinate interpolation itself is going to cause the samples to be slightly off-center due to tiny errors in floating-point math.
If the interpolation is turned on, then all the sampling is going to go through it.
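To illustrate that last point, here's a toy numpy example (the 1e-6 offset is just an assumed stand-in for the rasterizer's floating-point error). Once linear filtering is active, a coordinate that should sit on a texel center but is off by even a minuscule amount blends in the neighbouring texel; nearest-pixel filtering still returns the original value exactly.

```python
import numpy as np

texels = np.array([0.25, 0.75], dtype=np.float32)  # two neighbouring texel values
x = 0.0 + 1e-6                                     # "center of texel 0", off by a hair

linear  = (1.0 - x) * texels[0] + x * texels[1]    # what the interpolator returns
nearest = texels[int(round(x))]                    # what nearest-pixel filtering returns

print("linear :", repr(float(linear)), "-> no longer exactly 0.25")
print("nearest:", repr(float(nearest)), "-> still bit-exact")
```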