Dear Sir or Madam,
My name is Verena and I am a student. We are currently working on a project in which we use TouchDesigner. My task is to make our sound change based on the position data of an object.
Imagine a circle of 16 speakers. The object moves within the circle, past the speakers, and we want the sound to play from wherever the object is currently located.
I use the Audio Render CHOP for this, but I can't tell whether it builds up my levels the way I expect. Or rather, I don't quite understand how this node works and responds.
As the Source object I use my moving object, and I have entered the speaker coordinates in a Table DAT. What is the role of the Listener object here?
In the Parameter Dialog of the Audio Render CHOP, top left, you'll see a question mark. Click that and you get to its help page.
There you'll see that the Audio Render CHOP takes a mono audio source as input, plus a 3D transform for both a single Listener (the 'person' who is hearing the sound) and a single Source (the thing that is generating the sound). To translate that into real-world terms: the Listener is you, and the Source is a ringing phone. You can run around the ringing phone, or somebody with the ringing phone can run around you; both are different sound experiences for the Listener (you).
The final output to the speakers is just to simulate the surround audio the Listener hears.
If you right-click the Audio Render CHOP, you'll see an option called Operator Snippets. Click that and an example file for the Audio Render CHOP opens, which also shows the function of the Listener COMP.
In this case the Source is at a static position and the Listener circles around it.
But the way you describe your installation, you would want to place a static Listener COMP in the center (perhaps at txyz 0,0,0), have the Source COMP move in a circle around it, and then position the 16 speakers from your mapping table in a circle around the Listener COMP.
Here's a quick example with a sound source moving in a circle of 2 m radius around a static Listener, and your 16 speakers positioned in a larger circle of 2.5 m radius.
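In case it's easier to rebuild the setup by script, here is a minimal Python sketch (run from a Text DAT or the textport). The operator names 'speakers' and 'source' are just placeholders, and the exact column layout of the mapping table may need adjusting to what the Audio Render CHOP expects:

```python
# Minimal sketch: fill a Table DAT with 16 speaker positions on a 2.5 m circle
# and make a Source COMP orbit the origin at 2 m radius. The Listener COMP
# stays at txyz 0,0,0. 'speakers' and 'source' are placeholder operator names.
import math

table = op('speakers')                 # Table DAT referenced by the Audio Render CHOP mapping
table.clear()
table.appendRow(['x', 'y', 'z'])       # header row; adjust to your mapping table's column layout
for i in range(16):
    angle = 2 * math.pi * i / 16
    table.appendRow([2.5 * math.cos(angle), 0, 2.5 * math.sin(angle)])

src = op('source')                     # the moving Source COMP
for p, expr in (('tx', '2 * math.cos(absTime.seconds)'),
                ('tz', '2 * math.sin(absTime.seconds)')):
    par = getattr(src.par, p)
    par.expr = expr
    par.mode = ParMode.EXPRESSION      # make sure the expression is actually evaluated
```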
For a quick soundcheck on your headphones, switch the Audio Render CHOP to binaural and add an Audio Device Out CHOP.
Hi,
I have the same problem: I need to send different clips to different speakers, but how can I do that? Any thoughts or recommended resources? Sorry, I'm still a new user and I don't know how to find the answer…
Hi Izard,
I am setting up an Audio Render CHOP for 6 speakers with a mapping table.
I am not getting reasonable results with different configurations. The pre-defined setups like 5.1 make sense, but the custom setup does not: in my case only the first channel shows changes, and all the other channels show the same values. I also tried your example, and there the values (I use an Analyze CHOP to measure the maximum of each channel) do not make a lot of sense either.
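For reference, this is roughly how I read the values out (just a quick sketch; 'analyze1' is my Analyze CHOP set to Maximum, sitting after the Audio Render CHOP):

```python
# Print the measured maximum of every channel coming out of the Analyze CHOP.
analyze = op('analyze1')               # placeholder name for the Analyze CHOP
for chan in analyze.chans():
    print(chan.name, chan[0])          # one sample per channel: the maximum level
```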
I am on build 32660 - would you mind opening your file from back then and seeing if it still behaves as it did? Or could it be that Audio Render is broken these days?
Maybe a note for everybody who tries to render to custom sound setups: other than one might think, the Audio Render CHOP only supports one listener position, which mainly makes sense for tracked headsets. It cannot be used to pan between different speakers. It renders the sound for one position, the Listener's, not for the positions of the speakers.
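If you really need per-speaker panning, one workaround (just a sketch, not a built-in feature) is to compute a gain per speaker yourself from the distance between the moving source and each speaker position, for example in a Script CHOP, and multiply your audio by those gains. The operator names 'speakers' and 'source' below are placeholders:

```python
# Script CHOP callback: one gain channel per speaker, based on the distance
# between the moving source and each speaker position from a Table DAT.
# Multiply your mono audio by these gains (e.g. with a Math CHOP) to drive
# the individual speaker outputs.
def onCook(scriptOp):
    scriptOp.clear()
    table = op('speakers')                       # Table DAT: header row, then x/y/z per speaker
    src = op('source')                           # the moving Source COMP
    sx = src.par.tx.eval()
    sy = src.par.ty.eval()
    sz = src.par.tz.eval()
    for i in range(1, table.numRows):            # skip the header row
        x = float(table[i, 'x'].val)
        y = float(table[i, 'y'].val)
        z = float(table[i, 'z'].val)
        dist = ((x - sx)**2 + (y - sy)**2 + (z - sz)**2) ** 0.5
        chan = scriptOp.appendChan('gain{}'.format(i))
        chan.vals = [1.0 / max(dist, 0.1)]       # simple inverse-distance falloff
    return
```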