Pro tips: Memory leak slowly eating up CPU

Hey everyone! Any pro tips out there on the dark arts of identifying a slow yet steady CPU memory leak? I’m currently roaming around my app with an OP Find DAT (looking at CPU memory), after watching Probe and the Performance Monitor for a sustained period. I can see the CPU slowly being eaten up in the showcooks tool (over 8 hours), but I’m not finding the increasing-CPU culprit in the OP Find DAT… well, not yet. Advice welcome!

FYI, my project utilizes instancing, particle systems, NDI output, as well as the blob tracker and RealSense inputs (I’ve isolated these last two and it doesn’t seem to be them).

Also, in anyone’s experience, are there any operators I should be especially scrutinizing for CPU leakage? Feedback CHOP? Copy CHOP? Particle SOP?

For CPU usage it’s often a CHOP that is endlessly growing in length, such as a Record CHOP, or a SOP whose geometry keeps getting larger and larger.
You should check the Performance Monitor dialog to look for nodes that are taking a long time to cook. That’s the best way to find them. The Probe tool in the palette is also very useful for this.
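
If scanning by hand gets tedious, something along these lines can automate the comparison. This is only a rough sketch, not a built-in tool: it assumes each operator exposes a cpuMemory member (in bytes), and the function and variable names are placeholders. Paste it into the textport, call snapshot(), let the project run for an hour or two, then call report() to list the operators whose CPU memory grew the most.

```python
# Rough sketch: snapshot per-operator CPU memory, then report growth later.
# Assumes OPs expose a .cpuMemory member in bytes; check your build's docs.

baseline = {}

def walk(o):
    """Yield this operator and everything underneath it."""
    yield o
    if o.isCOMP:
        for child in o.children:
            yield from walk(child)

def snapshot():
    """Record the current CPU memory of every operator under root."""
    baseline.clear()
    for o in walk(root):
        baseline[o.path] = o.cpuMemory

def report(top=20):
    """Print the operators whose CPU memory grew the most since snapshot()."""
    grown = []
    for o in walk(root):
        before = baseline.get(o.path)
        if before is not None and o.cpuMemory > before:
            grown.append((o.cpuMemory - before, o.path))
    for delta, path in sorted(grown, reverse=True)[:top]:
        print('{}: +{:.2f} MB'.format(path, delta / 1048576))
```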

Thanks Malcolm, yes that’s what I’ve been attempting to do over the last couple of days. I think perhaps I need to run it for a long period and then check the processes when it’s choking… it’s such a slight leak that it’s hard to identify in the monitors.

As a note, I haven’t used any Record CHOPs… but I am using a couple of Particle SOPs… could they potentially accumulate? The CPU memory use doesn’t seem to change on them when I look at the OPs.

Hmm. Is the leak a CPU memory leak, or are you saying things are getting slower (taking more CPU time)? A CPU memory leak doesn’t necessarily mean things will get slower until you run out of physical RAM on your system.
For a CPU memory leak, the best way to check is to look at the ‘Commit Size’ column in the Windows Task Manager and see if that is going up over time. The other memory-usage columns in the Task Manager aren’t what you want to look at, as they can go down (Windows trims them), which hides a steady leak.
If it does keep climbing, a good way to diagnose it is to delete parts of your network and see which COMPs make the leak stop when they are deleted.
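
If watching Task Manager for hours isn’t practical, a small script can log the commit size for you. A minimal sketch, assuming psutil is available in your TouchDesigner Python environment (you may need to install it) and that on Windows memory_info().vms reflects the pagefile/commit figure; the log path is a placeholder:

```python
# Rough sketch: append the process commit size to a CSV every so often,
# e.g. from a Timer CHOP callback or a throttled Execute DAT callback.
import os
import time
import psutil

LOG_PATH = 'commit_log.csv'  # placeholder path

def log_commit():
    proc = psutil.Process(os.getpid())
    commit_mb = proc.memory_info().vms / 1048576  # vms ~ commit size on Windows
    with open(LOG_PATH, 'a') as f:
        f.write('{},{:.1f}\n'.format(time.strftime('%H:%M:%S'), commit_mb))
```

Graphing that CSV afterwards makes a one-or-two-MB-per-minute creep obvious, and the timestamps let you line it up with what the app was doing at the time.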

Thanks Malcolm, I’ll give the Task Manager option a go; I’m baffled at the moment.

Yes, over a period of 6 hours or so cpu_mem_used creeps up, slowly choking the app… it’s very gradual. I’ve noticed it fluctuates more when it’s going out to NDI.

Thanks Malcolm, so the Commit Size in the Task Manager seems to be stable… perhaps I’m looking at a CPU time problem then, in which case any advice is again welcome!

How high does cpu_mem_used go?

I caught it at around 6000 MB after about 6 hours yesterday; it started at around 3300 MB. The app was running at 50 fps, but by that stage it had dropped to 40 fps as the CPU usage climbed.

I’ve tweaked some things in the app and am doing some fresh road-testing with NDI enabled at the moment, hoping for better results.

I’ve also set up a Python reset function on the blob tracker that fires after 100 blobs (roughly sketched below), in the hope that it might be the culprit.
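
For anyone curious, the reset is along these lines, written as a CHOP Execute DAT. The operator name 'blobtrack1', the channel name 'num_blobs', and the 'Reset' pulse parameter are all placeholders; substitute whatever your blob tracker actually exposes:

```python
# Rough sketch: pulse the blob tracker's reset whenever the blob count
# reaches 100, so any per-blob state it has accumulated gets freed.

def onValueChange(channel, sampleIndex, val, prev):
    if channel.name == 'num_blobs' and val >= 100:
        op('blobtrack1').par.Reset.pulse()
    return
```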

Hey Simon, just a thought, but perhaps after you get your 8-hour test in to fill up your CPU memory, go on a delete rampage, checking the CPU memory after each deletion. Should help you narrow it down the old-fashioned, tried-and-true way :slight_smile:

That might be the ticket, Pete… literally trying everything here. It’s bizarre, it just ticks up a meg or two a minute without fail… no Record CHOPs or anything obviously accumulating… I’ve done my best to look through everything.