# limiting the number of frames on feedback top

hi,
i am trying to use the feedback top for a limited number of frames - let’s say 1000.
is there a way to do this?
i do NOT want to pulse it after 1000 frames, i want the frames to carry on accumulating, but deleting the first - i.e. when frame 1001 comes frame 1 disappears (FIFO).
another way to think about it is to have a 1000-frame trail. tried with the Cache TOP but it’s way too heavy on performance…

Hi!

Can you describe a bit more what you are actually needing?
I mean, are you fine with the typical fading away of past frames one gets when using feedback, or do you really need all of the last 1000 frames composited over each other at 100% opacity?

If you really need the latter, you can’t really do it with a feedback network, I think. You could try building your 1000-tap FIR filter in MATLAB or something and let it try to design an equivalent IIR one, though. But I wouldn’t have a lot of hope.

What you need is a FIR filter instead of an IIR filter, which is what your Cache TOP approach is.

What’s your bottleneck with the Cache Top approach? GPU memory?

i am trying to achieve a ‘long’ trail of frames - that’s why feedback loops with opacity don’t work for me, so as you described it is absolutely correct - 1000 last frames to be composited over each other.
i’m afraid matlab and FIR/IIR are over my head…
attached the example i made of both approaches, but the cache top + replicator starts choking on big numbers…
any help?
feed-cache.toe (10.1 KB)

Hi!

the point is not ‘long’. the point is: if a frame is denoted by f[n], where n is the frame number (so f[n] is the current frame, f[n-1] is the previous frame, etc.), are you ok with the output being:
(in this example i do 3 frames, not 1000)

output = f[n] + f[n-1]*0.5 + f[n-2]*0.25
(so visibility decreases for ‘earlier’ frames)

output = f[n] + f[n-1] + f[n-2]
(so visibility stays constant)
?
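To make the two options concrete, here is a tiny plain-Python sketch (not a TD network; each "frame" is reduced to a single brightness value). The first function behaves like a feedback loop with an attenuating Level TOP (older frames fade geometrically); the second behaves like the Cache TOP / FIFO approach, where the last n frames are summed at full opacity and anything older is dropped:

```python
def iir_feedback(frames, decay=0.5):
    """Feedback-loop style: each new frame is added on top of the
    attenuated accumulator, so older frames fade geometrically."""
    acc = 0.0
    for f in frames:
        acc = f + acc * decay
    return acc

def fir_window(frames, n=3):
    """Cache-style: the last n frames are summed at full opacity;
    anything older is dropped entirely (the FIFO behaviour asked for)."""
    return sum(frames[-n:])

frames = [1.0, 1.0, 1.0, 1.0]
print(iir_feedback(frames))       # 1 + 0.5 + 0.25 + 0.125 = 1.875
print(fir_window(frames, n=3))    # 3.0 (the oldest frame is gone)
```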

Because you can achieve ‘long’ with feedback. You just need to crank up the feedback parameter (so typically the opacity of a Level TOP in the feedback path) to 0.99999,
and maybe make sure you have a high bit depth (e.g. 32-bit).

Make sure you convert to 32bit before the feedback loop, and convert back to 8 bit or whatever after the loop if you do any processing afterwards, because doing a lot of processing in 32bit will eat up your VRAM.
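For intuition on what that opacity value buys you: assuming the opacity acts as a simple per-frame multiplier, a frame's contribution after k frames is opacity**k, so you can solve for either quantity. A small sketch:

```python
import math

def frames_until(opacity, threshold):
    """Number of frames before a frame's weight decays below `threshold`."""
    return math.log(threshold) / math.log(opacity)

def opacity_for(n_frames, threshold):
    """Opacity needed so a frame decays to `threshold` after n_frames."""
    return threshold ** (1.0 / n_frames)

print(round(frames_until(0.99999, 0.5)))   # ~69314 frames to half brightness
print(opacity_for(1000, 0.5))              # ~0.999307 for a 1000-frame half-life
```

So 0.99999 gives a very long tail indeed, which is also why the high bit depth matters: those tiny per-frame differences vanish in 8-bit.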

(i cant look at your file right now, but I will!)

You also might play around with your compositing method in the feedback path. It might help to achieve the effect you are looking for. For example it sounds a bit like ‘over’ is more like what you are looking for than a simple addition.

yes, i am trying to go for the 2nd option:
output = f[n] + f[n-1] + f[n-2] with the ‘over’ top.
otherwise feedback top would be fine…

the part i am trying to get help on is how to delete f[n-3] from the feedback…
i can also accept a drop off behavior after N fully added frames:

output = f[n] + f[n-1] + f[n-2] … +f[n-1000]*0.5+f[n-1001]*0.4+f[n-1002]*0.3 …

no time, but look at this, answering you later
fbExample.toe (4.28 KB)

So here is the longer answer.

The feedback portion of your network was missing attenuation in the feedback path, as far as I could see. Is the answer to this thread that simple? You have a Level TOP in there but you don’t attenuate the opacity, so the feedback loop draws everything and previous frames never disappear. If you set the opacity to 0.999 you get your trail.

The Cache TOP version seems reasonable, I didn’t see any obvious errors; it really does cost a lot of performance.

But i noticed your input image. If you plan to use something like in your example, so something with a solid white color and a black/transparent background, you can easily achieve the effect you are looking for with the toe I uploaded. If you really need exactly 1000 frames, then you will need to calculate the threshold parameter and the feedback coefficient (so the opacity value)…

A cheap way to do this might be to use a 32-bit texture (render) and then bump the alpha up to 999. Step the alpha down each frame with a Channel Mix TOP (alpha is 1/1000). It would hard-cut between 1 and 0 on the last frame, however. You could remap it from 0–1 using a GLSL TOP.

Wouldn’t that also need 999 delays?

I think this is a great use case for the Texture 3D TOP and GLSL. Use Tex3D to cache a 2d Texture Array, from which you can then access each frame for its respective compositing layer using the w texture coordinate.

Unfortunately the Texture 3D TOP does not operate in a FIFO manner when in 2D Texture Array mode. To make up for that, the lookup index is offset by the current frame number. For me the index sync glitches in realtime mode when frames drop, but it works dependably with realtime disabled.
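The offset works like a ring buffer: the Texture 3D TOP overwrites slices in a cycle, so compositing layer i has to look up the slice written `i` frames ago. A plain-Python stand-in for what the GLSL index math would do (function and variable names are mine, for illustration):

```python
def slice_for_layer(current_frame, layer, depth):
    """Array slice holding the frame `layer` steps in the past,
    given a ring buffer of `depth` slices overwritten in a cycle."""
    return (current_frame - layer) % depth

depth = 8
frame = 21                                # absolute frame number driving the offset
print(slice_for_layer(frame, 0, depth))   # 5 -> the newest frame's slice
print(slice_for_layer(frame, 7, depth))   # 6 -> the oldest frame still cached
```

This also shows why dropped frames glitch it: if the absolute frame counter skips ahead while the cache doesn't, the offset no longer lines up with what was actually written.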

I couldn’t get the full 1000-frame buffer going on my GTX 980 Ti, but settled for a nice 3-digit array size. I expect a GPU with higher specs could handle a larger texture array.
BufferComperGLSL.5.toe (6.39 KB)

The Texture 3D TOP is limited to 2048 slices. Enough for a 1000 frame buffer, provided that you have the VRAM for it… 1000 frames at 720p is going to be over 3.5 GB.

Unless there’s something highly specific about the 1000 frame sampling in this it’s likely better to use the feedback TOP.

hi guys,
thanks for the input. all the suggestions are greatly appreciated, but the cache/3d texture and various buffering techniques are all too hard on my performance…
the BufferComperGLSL.5.toe didn’t even load for me and crashed TD.

anyway, i managed to use the threshold trick to limit the feedback because my frames will have alpha… however the content will be video and not geometric, and that’s why i prefer the feedback approach.

for those who might find it helpful, attached is my working demo - but one thing is still not perfect: the threshold makes hard edges that don’t look too good.

@asterix - is this kinda what you meant? if your way works smoother, i would really appreciate it if you could send a .toe…
feed-cache.5.toe (4.43 KB)

Here’s a few different approaches with feedback:

base_feedback_play.tox (3.62 KB)

You’ll notice that some of these work better with video that has incoming alpha, while others work better with a camera feed / traditional video feed.

Hopefully this will give you some additional ideas to experiment with.