Just checked out the TD Ableton sync projects. I've got to say, nice job!
Yesterday I started building some M4L devices for sending MIDI and audio control data to TD, so today I thought I would get Ableton in sync with your sync setup. I didn't realize you guys had done such a comprehensive job; I thought it was going to be transport/clock only!
I do have a couple of comments/questions though.
At the moment, the controller knobs on the master sync M4L device can't be connected directly to parameters in Live, correct? The way it's explained on the wiki, an external controller must control TD, and TD in turn controls Ableton parameters. There is a workaround for this using virtual MIDI ports (IAC or MIDI Yoke) and an M4L patch, if a person only wanted to control TD from Ableton.
I noticed that in the template you are using send/return tracks to send actual audio out of Ableton and then converting that audio into controller data in TD. Wouldn't it make more sense to convert the audio in Ableton and then send OSC to TD? Then you could have many channels without needing soundcards/cables, and without any of the latency inherent in audio interfaces. Yesterday I built an M4L device that converts audio to OSC; it's posted below if you would like to check it out.
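For anyone curious what "converting audio to OSC" boils down to, here is a minimal, stdlib-only Python sketch of the idea: compute an RMS envelope per audio block and send it to TouchDesigner as a single-float OSC message. This is NOT the actual M4L device (that is built in Max); the host, port, and the "/audio/env" address are placeholder assumptions that would have to match your OSC In CHOP settings.

```python
# Illustrative sketch only -- not the posted M4L device.
# Sends an RMS amplitude envelope to TD as a one-float OSC message.
import math
import socket
import struct

def rms(block):
    """Root-mean-square amplitude of one block of samples (floats in -1..1)."""
    return math.sqrt(sum(s * s for s in block) / len(block))

def osc_message(address, value):
    """Build a minimal OSC packet carrying one 32-bit big-endian float."""
    def pad(b):
        # OSC strings are null-terminated and padded to a 4-byte boundary
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_envelope(block, host="127.0.0.1", port=10000):
    """Assumed host/port -- point these at the machine running TD."""
    sock.sendto(osc_message("/audio/env", rms(block)), (host, port))
```

Sending one small UDP packet per envelope value is why this approach avoids the soundcard/cable latency mentioned above: the data travels as control values, not as audio.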
I like the idea of the scenes and cues, and using different containers for each scene, but I'm wondering how well it would work for very large comps. My Live project is quite large; I couldn't really imagine multiplying it by 15 for 15 different songs. I guess TD doesn't use resources if the data isn't being displayed (or OPs aren't requesting cooks), but I have still experienced dropped frames when loading large geometry for the first time. I guess I could have a central container with my main networks and then use Select TOPs in the scene containers. Any suggestions?
I noticed that something isn't right in the wiki where it explains how to connect a transport_ramp to a movie to make it loop in time. I had a clip that is 160 frames, but I had to set the range of the Math CHOP to 320 to make the movie loop smoothly (play all frames). The wiki states "Change the 'Units Menu' to 'Index'", but I couldn't find a Units menu.
Hi Keith, I am excited to get into the office and check out how you went about converting audio to OSC to Touch. I chose this approach for the tour when M4L first hit, and I haven't had the chance to dig into the new possibilities with tdAbleton sync. We used my method on tour sparingly, but it seemed stable and workable. I have similar questions about whether the conversion would be most efficient/stable/financially economical in Live, and when you say the words 'soundcards/cables' I cringe at the additional hardware/soundcard considerations, especially on a traveling tour. I am on the fence about which programming environment is better suited for shaping an audio/control stream... lots of testing to be done!
I will post my version later today if I get a chance. My basic metaphor has been 'two hemispheres of the brain': one for audio processing (Mac/Ableton), the other for visuals (PC/Touch), keeping them separate except for pre-conditioned control. Things change so fast I'm not sure this intent is optimal, but it is my current goal. I figure this is simpler for several reasons, including the fact that the PC wouldn't need to process audio and no complex soundcard is needed on the PC. I've been experimenting with both approaches and look forward to the new toolset that Derivative has provided.
I like the idea of an M4L 'effect' that sits on a send channel with no audio send (just OSC out). So much to try... so little time...
That's great, I hope it works for you. I noticed last night, after I posted the device, that because I didn't use 'initial enable' on some of the live.ui objects, the default settings aren't coming up on a new load. I'm changing that now. Also, the two smooth parameters set the number of samples over which to smooth the up and down slopes of a transient, and the gain and offset below them are data gain/offset, not audio. Finally, the live.menu objects all have to be changed manually by editing their items and then saving the device. I was trying to figure out a way to populate the menu with the track names in Live but couldn't really do it.
I have struggled for a couple of years with finding the best audio-to-controller method. I used to have a Max one I was never that stoked on; then last summer I made one in TD that I thought worked pretty well. Now I think this new Max device is comparable to my TD component (maybe better?), so I think I'll just skip the whole audio-input-into-Touch deal.
I'm stoked to see how you implemented your M4L device; more ideas are always better. I also agree that keeping the audio in Live and the video in TD is ideal.
Note: scratch the last paragraph in my original post. I didn't think about the standard unit menus at the end of each parameter line in the parameter window. Sorry, my bad.
Just found something not mentioned in the wiki. If you start a fresh Ableton project, drag and drop the TouchDesignerSync_Master device onto a MIDI track, and then reinitialize it with TD on the proper IP address, TD will chug (the transport ramps do not run smoothly). It took me a few minutes to figure out why.
The Touch FPS number box defaults to 0 when the device is first loaded into a fresh project. It must be set to a higher number like 60.
Actually, I thought this box was telling me the TD frame rate, so it took me a few minutes to realize that I set the frame rate there.
The number box can be set to 60 on load by checking the 'initial enable' box in the inspector.
I guess you will note that I was basically trying to tap into the VU~ object's built-in smoothing to some degree. Very crude and elementary, but it can produce usable amplitude modulation data streams. I also went more towards things like rampsmooth~, like you have going. I think when I go back in next time I would... oh, I dunno... maybe mess around with some of Max's non-signal-based smoothing objects and place them right before the OSC out?
I chose to try mine using compression and filters from Live, so keep that in mind. I guess I was testing the assumption that they would somehow be more efficient than bringing in MSP filters, and perhaps more flexible on the fly. But looking at yours, I'm not sure my theory holds up either way. My Live CPU doesn't seem too incredibly taxed with either approach.
I too have been toying with this signal/control bridge for a while. It gets to be a bit of a grind, and I have to remember what my intentions were in the first place. It seems to me that 'less is more' might still be what I'm after regarding how much of the audio signal I want to work with in Touch, but that is just me... Your filtering presets point out that every audio signal has such a different character: the energy of a subsonic could 'pin the meters', yet we won't even hear it!
Besides audio signal conversion in M4L or Touch, the third option I intend to pursue (someday) would be an Arduino-based 'black box'.
Nice, I never thought of the VU meter. I did briefly check out the meter object, but I don't think it has the options the VU meter has. Tomorrow when I'm more awake I'll investigate your patch further.
As for processing the data further in the Max realm, I have used the line object and a couple of others, but I found nothing beats the Lag and/or Filter CHOPs in TD. If you want some really nice control of transients, check out the Lag CHOP.
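For readers who haven't used it: the appeal of the Lag CHOP here is that it smooths rising and falling values with separate time constants, so the attack and release of a transient can be shaped independently. A minimal Python sketch of that idea (a standard one-pole exponential smoother, not Derivative's exact implementation; parameter names are my own):

```python
# Sketch of Lag-CHOP-style smoothing: separate "lag up" / "lag down" times.
# This is a generic one-pole smoother, not TD's actual code.
import math

def lag_filter(samples, lag_up, lag_down, rate=60.0):
    """Smooth a control stream; lag_up/lag_down are times in seconds
    applied when the input is rising/falling, at `rate` samples per second."""
    out = []
    y = samples[0] if samples else 0.0
    for x in samples:
        lag = lag_up if x > y else lag_down
        # coefficient for a one-pole filter reaching ~63% of a step in `lag` s
        a = 1.0 if lag <= 0 else 1.0 - math.exp(-1.0 / (lag * rate))
        y += a * (x - y)
        out.append(y)
    return out
```

With `lag_up` near zero and a larger `lag_down`, an amplitude envelope snaps up on a hit and decays slowly, which is the kind of transient control being praised above.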
Yes, I had applied the Lag CHOP with good success too. I think ultimately it will come down to a hybrid of smoothing the data, initially in MSP and later in TD. But again, I guess it comes down to what kind of detail one wants in a control stream.
Hello, actually I was fascinated by the TD Ableton Sync project and would like to know which of the two plugins is the most suitable for the Live/TD connection. Could you share a few tips on how the plugins work?
I'm migrating my own setup into the one from the wiki tutorial (derivative.ca/wiki088/index. … chDesigner). Everything seems to work great, but I couldn't figure out how to get the simple audio waveform from Ableton, like I'm used to getting from an Audio File In CHOP. I like having the waveform itself because it makes a great template for visuals in the scene, but I couldn't figure out how to access it from within TouchDesignerAbletonSync.toe.
I thought it would be part of $AUDIO_RAW or $AUDIO_ENV, but when I inspected the "/tdsync/system/audio" network, it just looked like two Audio Device In CHOPs set to my default driver, which is my microphone. Am I supposed to have other hardware or some driver for my sound card?
That's when I found this thread. I threw this plugin into Ableton, and it looks perfect for what I want :) I just assumed that I would use an OSC In CHOP in TD with the IP and port settings from this plugin in Ableton, but I'm obviously missing something and couldn't get any signal from any of the CHOPs I tried.
There isn't a (simple) way to send the audio itself, but you can send a simplified OSC representation with this plugin. There are also some alternatives; try this topic, where I've been asking for something similar: viewtopic.php?f=20&t=6915
Hello, I would like to ask something about sending OSC from Ableton to Touch. First, I have a problem sending the OSC data: I can't make the host local. The Ableton patch has some IP address options I can choose from, and I put that host into the Touch OSC In CHOP, but no info is coming through. Also, I want to know if I can use a Launchpad and simply have a different OSC In for each of the buttons?
My setup is simply two Windows 7 Pro laptops on a wireless router. So far I have the TD Live sync working successfully over IP. However, when I tried this Audio to OSC plugin on my master in Ableton and created an OSC In CHOP in TD using the same IP and port settings from the Ableton plugin, I'm not getting anything out of the CHOP.
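One way to narrow this kind of problem down is to take TD out of the loop entirely and check whether the OSC/UDP packets are even reaching the receiving machine. A small, stdlib-only Python sketch (port 10000 is a placeholder; use whatever port the Ableton plugin is actually set to):

```python
# Debugging helper (not part of the plugin or TD): bind a raw UDP socket on
# the receiving machine and print whatever arrives. If nothing shows up,
# the problem is the network path (IP, port, or firewall), not the CHOP.
import socket

def listen(port=10000, timeout=10.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))   # listen on all interfaces
    sock.settimeout(timeout)
    try:
        data, addr = sock.recvfrom(4096)
        print(f"got {len(data)} bytes from {addr}: {data[:32]!r}")
        return data
    except socket.timeout:
        print("no packets received -- check IP, port, and firewall")
        return None
    finally:
        sock.close()
```

On Windows 7 laptops over wireless, the two usual culprits this catches are the Windows Firewall blocking inbound UDP and the sender targeting 127.0.0.1 instead of the other machine's LAN address.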