Scaling 4096+ Independent Sinewave Oscillators at 48kHz – Optimization Help Needed

Hey everyone,

I’m trying to build an audio synthesis project where I need to generate thousands of sinewaves (4096, to be exact), each with independently controlled, constantly changing frequency and amplitude. My goal is to have these oscillators run at a 48kHz sample rate, but when I drive a single Audio Oscillator CHOP with 4096 frequency channels (named “f#”) and 4096 amplitude channels (named “a#”), my computer just can’t handle it.
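
(For a sense of scale: 4096 oscillators × 48,000 samples per second is roughly 196.6 million per-sample evaluations every second, about 3.3 million per 60Hz frame, before any of the frequency or amplitude modulation is even applied.)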

I’ve looked into a couple of alternative approaches:

Phasor/Lookup Method:
The idea is to use a dynamic phase generator (like a Phasor or even the Phaser CHOP) to integrate the frequency data into a phase value per channel. This phase is then fed into a Lookup CHOP that references a precomputed sine table, and finally multiplied by the amplitude channels. This should be more efficient since it replaces heavy sin() calculations with fast table lookups.
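
To make the math I mean explicit, here’s a minimal sketch of the idea outside of TouchDesigner, in plain NumPy (the table size, oscillator count, and the placeholder frequency/amplitude values are just assumptions for illustration):

```python
import numpy as np

SAMPLE_RATE = 48000
NUM_OSC = 4096
TABLE_SIZE = 4096

# Precomputed sine table (the part a Pattern CHOP would supply).
sine_table = np.sin(2 * np.pi * np.arange(TABLE_SIZE) / TABLE_SIZE)

# Per-oscillator state and control values.
phase = np.zeros(NUM_OSC)                # phase in [0, 1)
freq = np.full(NUM_OSC, 440.0)           # would come from the "f#" channels
amp = np.full(NUM_OSC, 1.0 / NUM_OSC)    # would come from the "a#" channels

def render_block(num_samples):
    """Accumulate phase once per audio sample, look up the table, sum the bank."""
    global phase
    out = np.zeros(num_samples)
    for n in range(num_samples):
        # Integrate frequency into phase: one increment of f / Fs per sample.
        phase = (phase + freq / SAMPLE_RATE) % 1.0
        # Table look-up replaces sin(); nearest-neighbour for simplicity.
        idx = (phase * TABLE_SIZE).astype(np.intp) % TABLE_SIZE
        out[n] = np.sum(amp * sine_table[idx])
    return out

block = render_block(256)  # one small audio block
```

The key point, as I understand it, is that the phase accumulation has to happen once per audio sample, not once per frame.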

Pattern CHOP Method:
Alternatively, you can use the Pattern CHOP to generate multi‑channel waveforms directly using pattern expansion for phase offsets, and then combine that with amplitude control. This is another route aimed at optimizing performance.

However, after setting things up (I connected all of my frequency channels to the Phaser CHOP’s Phase input, then fed that into a Lookup CHOP whose reference input is a Pattern CHOP supplying the sine table), the output of the Lookup CHOP does visibly update, but it only seems to reflect the current values of the frequency channels as a static shape rather than showing the continuous audio-rate phase motion I was expecting.

So here’s where I’m stuck:


multi oscillator.2.toe (4.0 KB)

Am I missing a step in converting these frequency channel values into proper audio-rate phase increments?

Has anyone scaled this kind of setup to 4096 oscillators at 48kHz? If so, what tweaks did you need to get the phase progression (and thus actual audio-rate waveform movement) working?

Are there any known limitations or alternative approaches that might be more feasible, especially since I hope to eventually increase the number of oscillators far beyond 4096?

Any insights, suggestions, or links to prebuilt components/examples that have successfully scaled in this way would be hugely appreciated.

Thanks in advance!

You’ll either have to drastically reduce the number of oscillators for it to work, or render out the result of your additive synthesis process and play it back as a sample.

Thank you for your reply, Owen. Yes, those two things would resolve the dropouts, but then I’d end up with something I’m not really looking for: what I have planned involves thousands of these oscillators, and it needs to be something that can be operated in real time.

I am hoping that someone might know how to get one of the two methods I mentioned working, or might suggest an additional approach I have not thought of yet.

Thank you for your suggestion though 🙂

Unfortunately you’re unlikely to find a solution that doesn’t involve some amount of pre-baking the oscillator information or compromising on sample rate and oscillator count.

Modulating that many sine wave oscillators in real time is asking for a nearly ideal approach to audio synthesis which, if possible, would make most audio synthesis technology largely obsolete.

I don’t know what end goal you’re after but you’ll have to consider a different approach altogether.

To address your post more specifically:

Am I missing a step in converting these frequency channel values into proper audio-rate phase increments?

Yes, you’re missing an audio-rate look-up index channel, which is essentially another oscillator, so it brings you back to your initial scaling problem with an added step.
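
To spell out what that index would have to do, per oscillator and per audio sample (a plain-Python sketch of the recurrence, nothing TouchDesigner-specific; the function and names are just for illustration):

```python
FS = 48000  # audio sample rate

def next_sample(phase, freq, amp, sine_table):
    """One audio-rate step: advance the phase by freq / FS, then look it up."""
    phase = (phase + freq / FS) % 1.0                     # must run 48,000 times per second
    idx = int(phase * len(sine_table)) % len(sine_table)  # the look-up index channel
    return phase, amp * sine_table[idx]
```

A frame-rate channel feeding the Lookup CHOP only ever gives you a static snapshot of that phase, which matches what you’re seeing.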

The only possible way through might be to do it entirely on the GPU instead, only converting back with a TOP to CHOP at the very end of the process. You’d have to run your project at 48,000 frames per second though, so I don’t know how successful that would be either.

I’ll look into your suggestions, thank you. One alternative I have thought of is that, rather than additive synthesis, I could do subtractive synthesis by controlling thousands of bandpass filters on a single white-noise signal. Perhaps that would be less strenuous on my computer. I’ll put some further thought into it and settle for rendering things offline for the time being.
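
In the meantime, this is roughly how I’m thinking of doing the offline render, as a rough NumPy sketch (the control data here is made up; in practice the frequency/amplitude trajectories would be exported from the “f#”/“a#” channels, and the block size and output filename are arbitrary assumptions):

```python
import numpy as np

FS = 48000
NUM_OSC = 4096
DURATION = 10.0      # seconds to pre-render
BLOCK = 1024         # samples per chunk, to keep memory bounded
N = int(FS * DURATION)

# Placeholder control data: one base frequency and gain per oscillator.
base_freqs = np.linspace(50.0, 12000.0, NUM_OSC)
amps = np.full(NUM_OSC, 1.0 / NUM_OSC)

phase = np.zeros(NUM_OSC)
out = np.empty(N, dtype=np.float32)

for start in range(0, N, BLOCK):
    n = np.arange(start, min(start + BLOCK, N))
    t = n / FS
    # Slow frequency modulation, just as a stand-in for "constantly changing" data.
    freqs = base_freqs[:, None] * (1.0 + 0.01 * np.sin(2 * np.pi * 0.1 * t))
    # Integrate frequency into phase within the block, continuing from the last block.
    ph = (phase[:, None] + np.cumsum(freqs / FS, axis=1)) % 1.0
    out[start:start + len(n)] = np.sum(amps[:, None] * np.sin(2 * np.pi * ph), axis=0)
    phase = ph[:, -1]

out.tofile("additive_render_48k_float32.raw")  # bring back in and play as a sample
```

Processing in blocks keeps the memory bounded, and the resulting file can then be brought back in and played as a sample.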