How does the Audio Spectrum CHOP logarithmic scaling work?

When using the Audio Spectrum CHOP with Frequency ↔ Logarithmic scaling set to 0, the sample index corresponds to a frequency, e.g. sample 1,000 represents 1,000 Hz. But when the scaling is set to 1, what is the relationship between sample index and frequency? It doesn't seem to be as simple as the linear frequency response plotted onto a logarithmic x-axis.

What underlying math/algorithms are used to show the frequency response in logarithmic scaling? And is there a general formula I can use to convert sample indices into their corresponding frequency values?
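
For reference, here's my mental model of the two extremes as a minimal sketch (num_samples, min_freq, and max_freq are placeholder names I made up, not actual CHOP parameters):

def linear_index_to_freq(index, num_samples, max_freq):
    # At scaling 0 the mapping appears to be proportional:
    # with max_freq == num_samples, sample 1,000 represents 1,000 Hz.
    return index / num_samples * max_freq

def log_index_to_freq(index, num_samples, min_freq, max_freq):
    # My guess at scaling 1: indices evenly spaced in log-frequency,
    # so each index step multiplies the frequency by a constant ratio.
    return min_freq * (max_freq / min_freq) ** (index / num_samples)

It's the in-between values of the scaling parameter that I can't reproduce.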

Thank you very much in advance for any help!

I’m also very much interested in the underlying math!

Trying to match the scaling with a custom UI to build an interactive audio analyzer tool, but a simple blend between linear and log doesn’t match the visualization of the Audio Spectrum CHOP. I created a custom Script CHOP to convert frequency-based UI handles to SOP space. It works for full log (and full linear, obviously), but not in between; it just doesn’t match.

Here’s how I’m going about the blending between linear and log:

import numpy as np

def interpolated_position(frequency, lower_limit, upper_limit, L):
    if lower_limit <= 0 or frequency <= 0:
        raise ValueError("frequency and lower_limit must be greater than 0 for log")
    if frequency < lower_limit or frequency > upper_limit:
        raise ValueError("frequency out of bounds")

    # Calculate the normalized linear and logarithmic positions
    x_linear = (frequency - lower_limit) / (upper_limit - lower_limit)
    x_log = (np.log(frequency) - np.log(lower_limit)) / (np.log(upper_limit) - np.log(lower_limit))

    # Interpolate between linear and logarithmic based on L (0 = linear, 1 = log)
    return (1 - L) * x_linear + L * x_log
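
For example, placing a 1 kHz handle over a 20 Hz – 20 kHz range:

for L in (0.0, 0.5, 1.0):
    print(L, interpolated_position(1000, 20, 20000, L))
# 0.0 -> ~0.049 (pure linear)
# 0.5 -> ~0.308 (simple midpoint of the two positions)
# 1.0 -> ~0.566 (pure log)

The endpoints at L=0 and L=1 line up, but values like L=0.5 are where it drifts from what the CHOP draws.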

It doesn’t look like it matches the scaling of the Audio Spectrum CHOP. I’m guessing the blending formula is different? The rest of the math is quite straightforward.
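
One idea I want to try (purely a guess on my part, not confirmed as what the CHOP actually does): blend the two index→frequency mappings instead of the two positions, then invert that blended mapping numerically to go from frequency back to position. A hypothetical sketch:

def blended_position(frequency, lower_limit, upper_limit, L, tol=1e-9):
    # Frequency that a normalized position x represents under a blend
    # of the linear and log mappings (the reverse of my function above).
    def freq_at(x):
        f_lin = lower_limit + x * (upper_limit - lower_limit)
        f_log = lower_limit * (upper_limit / lower_limit) ** x
        return (1 - L) * f_lin + L * f_log
    # freq_at is monotonically increasing, so bisect for the position
    # whose blended frequency matches the target.
    a, b = 0.0, 1.0
    while b - a > tol:
        mid = 0.5 * (a + b)
        if freq_at(mid) < frequency:
            a = mid
        else:
            b = mid
    return 0.5 * (a + b)

At L=0 and L=1 this reduces to the same results as the position blend, but in between it bends differently; maybe it lines up better with the CHOP's display.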

Thanks!
