Read more about the workshop and the Resonate Festival here
The workshop was led by Dimitry Napolnov (Sila Sveta) with Barry Threw (Obscura Digital), and Markus Heckmann (Derivative) with support from Greg Hermanovic and Isabelle Rousset (Derivative).
For the workshop we prepared a file in TouchDesigner 088 (Resonate.toe) which contains three parts:
- the Echo Nest component in /resonate/echonest, which sends a music track for analysis to echonest.com and converts the detailed results to animation channels
- the section which creates the visuals for the mapping
- CamSchnappr, which maps the visuals onto the physical object
The Echo Nest Component
Echo Nest describes itself on its website as: “…offer[ing] an incredible array of music data and services for developers to build amazing apps and experiences.” (Source: developer.echonest.com/)
Its developer API accepts music uploads for analysis and returns a detailed description of the track including time signature, key, tempo, timbre, pitch and sequenced information on beats, bars, sections and much more. The documentation of the analyzer can be found here: developer.echonest.com/docs/v4/_ … tation.pdf
To start using Echo Nest, developers must first acquire an API key: developer.echonest.com/account/register
The Echo Nest component comes with a simple user interface where the API key must be entered and a URL to a music track specified.
TouchDesigner will download the music file into temporary storage and start playback (this might take a while depending on the file size).
Clicking the Fetch button on the Echo Nest component user interface will start the process of analyzing the track.
After a short while the analysis should be done and all tables in the Echo Nest component should be filled.
This is a multi-stage process:
- First, Echo Nest is told where the track is located via its Track API Method (developer.echonest.com/docs/v4/track.html#upload). Inside TouchDesigner this is done with a Web DAT (/resonate/echonest/webUpload), using the POST method to submit the request to and fetch the response from Echo Nest. The Web DAT is fed the API key and the music file’s URL.
- TouchDesigner eventually receives a Track ID back from Echo Nest as part of a JSON-formatted response that also contains meta-information like artist, title and more. The Track ID is used in the next stage to request a detailed audio analysis.
- Using the Track Profile API Method (developer.echonest.com/docs/v4/t … ml#profile) with the Web DAT’s Fetch method (/resonate/echonest/profile), Echo Nest returns information on a track given its ID. Besides links to previews, images and further artist information, the response contains a URL to a complete analysis file. TouchDesigner parses the returned JSON for this URL and uses it in yet another Web DAT (/resonate/echonest/audio_summary) to fetch the complete analysis.
- This audio_summary is a JSON-formatted package containing sequenced information on beats, bars, sections and more. It is parsed in TouchDesigner with a Python script (/resonate/echonest/decode) and its content is passed on to a collection of Table DATs.
- The Table DATs are fed into an Animation Component and converted via a Script DAT (/resonate/echonest/animation1/script1) into animation channels and keyframes.
- The output of the Animation Component can now be used in the synth as an animation channel.
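The stages above can be sketched outside TouchDesigner in plain Python. The endpoint path and request parameter names follow the Echo Nest v4 Track API documentation referenced above, but the exact JSON field layout is an assumption based on that documentation, and the actual network calls (which the Web DATs perform) are left out:

```python
import json
import urllib.parse

# Base URL as documented for the Echo Nest v4 API (assumption: http scheme).
API_ROOT = "http://developer.echonest.com/api/v4"

def upload_request(api_key, track_url):
    """Build URL and POST body for the track/upload call (stage 1).
    Parameter names follow the Echo Nest v4 Track API docs."""
    body = urllib.parse.urlencode({
        "api_key": api_key,
        "url": track_url,
        "format": "json",
    })
    return API_ROOT + "/track/upload", body

def extract_track_id(upload_response_text):
    """Pull the Track ID out of the JSON upload response (stage 2).
    The response/track/id nesting is an assumption from the docs."""
    data = json.loads(upload_response_text)
    return data["response"]["track"]["id"]

def extract_analysis_url(profile_response_text):
    """Pull the analysis-file URL out of the track/profile response (stage 3)."""
    data = json.loads(profile_response_text)
    return data["response"]["track"]["audio_summary"]["analysis_url"]

def beats_to_rows(audio_summary_text):
    """Flatten the 'beats' list of the analysis file into Table DAT style
    rows of (start, duration, confidence) (stage 4)."""
    data = json.loads(audio_summary_text)
    return [(b["start"], b["duration"], b["confidence"])
            for b in data["beats"]]
```

In the workshop file, the equivalent of `beats_to_rows` lives in the /resonate/echonest/decode script, and the resulting tables drive the Animation Component.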
Echo Nest has a lot more to offer with regard to song analysis and track recognition; the documentation at developer.echonest.com/docs/v4/index.html gives a good overview of what else is possible.
The Geometry and Animation
The sample installation works with a fairly simple setup of 7 cardboard boxes. The geometry for the 7 boxes is generated with two techniques:
- instancing, with the channels for the instances created in the instanceData Base Component (/resonate/instanceData), and
- using the Copy SOP to create the object.
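To illustrate the instancing side: per-instance translate channels like those produced by a Base Component such as instanceData can be generated with a few lines of Python. The channel names (tx/ty/tz), spacing value and layout below are illustrative assumptions, not taken from the workshop file:

```python
def box_instance_rows(count=7, spacing=1.5):
    """Generate a header row plus one row per box instance with tx/ty/tz
    translate values, centring the row of boxes around the origin.
    In TouchDesigner, rows like these could feed a Geometry COMP's
    instancing parameters via a Table DAT or CHOP channels."""
    rows = [("tx", "ty", "tz")]
    offset = (count - 1) / 2.0  # shift so the middle box sits at x = 0
    for i in range(count):
        rows.append(((i - offset) * spacing, 0.0, 0.0))
    return rows
```

With the defaults this yields 7 instances spread from x = -4.5 to x = 4.5; in the workshop setup the actual channel values would instead be driven by the animation data.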
A series of Render Pass TOPs renders the different visualizations. Besides demonstrating various techniques for applying textures to geometry, the network also shows how shadows are created with the Geometry components shadows and shadows1 and the shadowLight and littleHelper Light components.
The component soundAnalysis (/resonate/soundAnalysis) explores one way to convert the audio waveform, via the Spectrum CHOP, into meaningful animation data.
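The Spectrum CHOP idea can be approximated outside TouchDesigner with a naive DFT: take the magnitudes of a block of samples, sum them into a few coarse bands, and smooth the result so it reads as animation data rather than jitter. This is a stdlib-only sketch of the general technique, not the workshop component:

```python
import cmath

def band_energies(samples, bands=4):
    """Naive DFT magnitudes grouped into coarse bands, similar in spirit
    to feeding a Spectrum CHOP into downstream animation channels."""
    n = len(samples)
    mags = []
    for k in range(n // 2):  # keep only the non-mirrored half of the spectrum
        s = sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
        mags.append(abs(s))
    per_band = max(1, len(mags) // bands)
    return [sum(mags[i:i + per_band]) / per_band
            for i in range(0, per_band * bands, per_band)]

def smooth(values, alpha=0.5, state=0.0):
    """One-pole low-pass (akin to a Filter or Lag CHOP) so the resulting
    channel eases toward new values instead of jumping."""
    out = []
    for v in values:
        state += alpha * (v - state)
        out.append(state)
    return out
```

A Spectrum CHOP does this far more efficiently with an FFT, and in a real network the smoothing would be a Filter or Lag CHOP; the sketch only shows the shape of the data flow from waveform to band energies to eased channels.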
Eventually CamSchnappr is used to map the output onto the physical object.