Excuse me if I’m late to the party, but I just noticed the serious overhaul of the Audio Render CHOP in the 32024 alpha build, specifically the new “simulation mode”… what a surprise!
I won’t have time to experiment with it just yet, but is there any information available about its implementation so that I can daydream about it?
Am I correct in assuming that it now has a more complete implementation of the Steam Audio API?
What is the intended workflow of the “bake” function?
Yep, that’s correct. It uses a more comprehensive Steam Audio workflow that supports multiple sources, static meshes, and a number of other features such as reflectivity. The old workflow will still be available under the “Simple Positional” mode.
Baking creates probes in the scene that store precomputed audio information such as reverb and reflectivity. It’s useful as an optimization, since calculating reflections without baking can result in high cook times depending on the number of rays and bounces. Baking is currently only supported for static listeners, which in the Steam Audio API corresponds to the IPL_BAKEDDATAVARIATION_STATICLISTENER baked data variation.
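To make the bake-vs-realtime tradeoff concrete, here’s a small conceptual sketch (not the actual Steam Audio or TouchDesigner implementation, and all names here are made up for illustration): the expensive reflection simulation runs once per probe at bake time, and at runtime the listener just looks up the nearest probe’s baked data instead of tracing rays every cook.

```python
# Conceptual sketch of listener-centric baking. This is NOT the Steam Audio
# API; it only illustrates why baking cuts cook times: the costly work moves
# from "every frame" to "once per probe".
import math

def expensive_reflection_sim(probe_pos, num_rays=4096, num_bounces=8):
    """Stand-in for the costly ray-traced reverb estimate at one probe.

    A real bake would trace num_rays rays with num_bounces bounces through
    the scene geometry; here we just derive a fake energy value from position.
    """
    x, y, z = probe_pos
    return round(math.exp(-0.1 * (x * x + y * y + z * z)), 6)

def bake(probe_positions):
    """Bake step: run the expensive simulation once per static probe."""
    return {pos: expensive_reflection_sim(pos) for pos in probe_positions}

def runtime_lookup(baked, listener_pos):
    """Runtime step: pick the nearest probe's baked data (cheap per cook)."""
    nearest = min(
        baked,
        key=lambda p: sum((a - b) ** 2 for a, b in zip(p, listener_pos)),
    )
    return baked[nearest]

# Lay out a 5x5 grid of static probes on the ground plane and bake them once.
probes = [(x, 0, z) for x in range(-2, 3) for z in range(-2, 3)]
baked = bake(probes)

# A listener near the origin gets the data baked at probe (0, 0, 0).
print(runtime_lookup(baked, (0.2, 0.0, -0.3)))
```

This mirrors the static-listener restriction mentioned above: because the probes are fixed, the baked data is only valid for listeners at (or near) those probe positions.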
That actually sounds amazing! Thank you @owenkirby for pointing that out. We use the Audio Render CHOP extensively, and I was wondering about a full implementation of an acoustics-like engine. Amazing work, guys! Can’t wait to try it out.