Mapping audio frequency from one audio file to another

Hello, I’m new to TouchDesigner and have been looking into the audio CHOP for the past few days.

I know TD has the capability to do powerful audiovisual work by analyzing parameters from an audio file and controlling the visual output with those parameters. What I’m currently working on, though, is mapping the parameters from one audio file to another audio file (instead of to a visual output).
For example, I’m extracting the frequency from Audio File A, and I want to map these frequency parameters onto Audio File B so that Audio File B ends up with the same frequency as Audio File A (a kind of tuning of Audio File B). I was able to filter out the frequencies I didn’t want and was left with the band range I wanted to map with. But now I’m stuck on how to use this frequency in Audio File B. I thought I could use an Audio Filter CHOP on Audio File B with Band Pass to get the range I want, but it shows an error saying that the parameter can only be a string or number, not an operator.

I’m not sure whether this is doable in TD, since it concentrates more on audiovisuals than on audio tuning / audio analysis. If the recommendation is that it requires a combination with some other software (like Max/MSP), that is also welcome. I’m currently working with TD because this is part of a larger project where I’m using TD to connect some Arduino sensors and Python machine learning, so I think it would be easiest to integrate everything in the same interface.

Thank you.

Hi @karen0317,

I’ll quickly explain the error first: the Filter Cutoff Frequency parameter of the Audio Filter CHOP expects a single float value. When you reference an operator, you are handing the parameter an object that is not meaningful to the parameter itself.

The referenced operator, in your case a CHOP, has a lot of different data attached to it. Most importantly, it can hold multiple channels of values. A single channel can be referenced like this: op('math4')['chan1'].

Now that you have given the parameter a channel, the channel itself can hold multiple values (like an audio waveform), so you also want to specify which sample the parameter should use: for example, op('math4')['chan1'][0] to use the first sample.
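If it helps, here is roughly what that looks like in practice, either typed directly into the Filter Cutoff Frequency parameter in expression mode or run from the Textport (the operator and channel names are just the placeholders from above):

```python
# In the Filter Cutoff Frequency parameter (expression mode), an expression like
#   op('math4')['chan1'][0]
# evaluates to a single float, which is what the parameter expects.

# The same lookup done step by step in the Textport or a script:
analysis = op('math4')       # the referenced CHOP
chan = analysis['chan1']     # one channel of that CHOP
value = chan[0]              # one sample of that channel, a plain float
print(value)
```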

That said, I’m not sure all of this would get you the result you want. What you are describing might be something a piece of software like Melodyne would be able to do. Maybe others like @owenkirby have a better idea here though.

cheers
Markus

I think the desired effect needs to be thought out and described in more detail.

For example: “I want to map the parameters from one audio file to another audio file”.

Which parameters? Mapped how?

What is meant by “extracting the frequency”? Does it mean the “root” frequency, i.e. the “pitch”, or the frequency spectrum as a whole?

To analyze the whole spectrum of an audio file or stream, the Audio Spectrum CHOP can be used. It can also be used to determine the fundamental frequency of a sound by tacking on an Analyze CHOP set to “Index of Maximum”.
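Outside of TD, those two steps (take the spectrum, then find the index of the maximum) look roughly like this in plain Python/numpy, assuming a mono float buffer `samples` at sample rate `sr`:

```python
import numpy as np

def peak_frequency(samples, sr):
    # Frequency (Hz) of the strongest bin in the magnitude spectrum.
    window = np.hanning(len(samples))                  # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(samples * window))   # the "audio spectrum" step
    peak_bin = int(np.argmax(spectrum))                # the "index of maximum" step
    return peak_bin * sr / len(samples)                # bin index -> Hz
```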

Typically, the process of applying the timbral signature of one sound to another is called “convolution” or “vocoding”, and it could easily be handled by a third-party VST.

Your band-pass example describes a kind of single-band vocoder. The more bands there are, placed at various intervals across the spectrum, the more accurate the superposition of one audio stream’s timbre onto the other will be.

With vocoding, the idea is not to modulate the center frequency of these filters but rather to modulate their amplitude based on the incoming amplitude of the first (modulator) signal in those same frequency bands.
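To make that concrete, below is a minimal offline sketch of a channel vocoder in plain Python (numpy/scipy), not a TD network: `modulator` and `carrier` are assumed to be mono float arrays at the same sample rate (the speech and the music in your case), and the band layout and envelope smoothing are just illustrative choices.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def bandpass(x, lo, hi, sr, order=4):
    # Band-pass filter x between lo and hi Hz.
    sos = butter(order, [lo, hi], btype='bandpass', fs=sr, output='sos')
    return sosfilt(sos, x)

def envelope(x, sr, cutoff=30.0):
    # Amplitude envelope: rectify, then smooth with a low-pass filter.
    sos = butter(2, cutoff, btype='lowpass', fs=sr, output='sos')
    return np.maximum(sosfilt(sos, np.abs(x)), 0.0)

def vocode(modulator, carrier, sr, n_bands=16, f_lo=80.0, f_hi=8000.0):
    # Trim both signals to a common length.
    n = min(len(modulator), len(carrier))
    modulator, carrier = modulator[:n], carrier[:n]
    # Band edges spaced logarithmically across the spectrum.
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mod_band = bandpass(modulator, lo, hi, sr)
        car_band = bandpass(carrier, lo, hi, sr)
        # Scale each carrier band by the envelope of the matching modulator band.
        out += car_band * envelope(mod_band, sr)
    return out / np.max(np.abs(out))  # normalize to -1..1
```

In practice, as mentioned above, a third-party vocoder VST is likely the easier route; the sketch only shows what the band-pass plus amplitude-modulation structure looks like.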

Hi @owenkirby, I’m not sure if this makes sense sound-wise, but I was thinking of getting the frequency (as in the whole spectrum) of Audio A and applying it to Audio B, so that Audio B keeps its original notes and melody but is adjusted with the frequency content of Audio A.

I was using the Audio Spectrum CHOP to check the frequency of both Audio A and Audio B, but I didn’t know exactly how to apply the spectrum of Audio A to Audio B. So I tried an Audio Filter CHOP to filter the frequency data from Audio A and apply it to Audio B with a band-pass. However, once I managed to apply it, I realized it just narrowed down the frequency range of Audio B instead of actually mapping the frequency data over the same timeline / waveform. So now I want to look into how I can apply the frequency of Audio A to Audio B.

Also, vocoding is probably the term I’m looking for…! (I don’t know much about sound, so I didn’t know the term before.) The Audio A I’m using is a speech file that I want to transform into the musical form of Audio B. That is why I want to extract some audio characteristics of the speech and apply them to Audio B to make my own music with speech. I’ll look into how I can build a vocoder in TD to see if it helps with what I want to achieve.

Hi @snaut, thank you so much for the explanation! I can see why it was showing errors. Now I’m able to filter out the frequencies with the band-pass.