timing of midi messages (output from touch)

Hi all,

this is something that’s been bad in Touch for as long as I can remember…

I want to make a midi sequencer in Touch but so far I have never been able to get the timing tight enough to be usable. attached is an example .tox of a super simple one-note sequencer.

the midi output device needs to be set to the windows softsynth or whatever synth you use for midi.

try messing with the period of the timing CHOP (keeping it longer than the peak length parameter in the Trigger CHOP).

for me, it is not the perfectly even-spaced train of notes I am hoping for.

if I do this in PD or Max, it is nice and crisp, but in Touch the timing is way too mushy.

is there a better way to do this? for now, I’m using PD to do the music and Touch to do the visuals but it would be much nicer to do it all inside Touch if things were a bit tighter.


Hey Rod, you forgot the attachment. We’ll take a look if you could upload your example.


oh @#$%$@!

I’ll try to find it or make another one.

sorry 'bout that.


Here 'tis

very simple but I can hear on my laptop that the note events are not evenly spaced. any way to make this better - or maybe it’s only on my machine?

Rod_MIDI_Test01.tox (782 Bytes)

Hey Rod, I have gotten your example working here as you described. The notes seem evenly spaced here on my machine. It sounds a little odd while I’m actually changing the period of the timing, but as soon as I stop playing with the value, the notes sound evenly spaced. I played with the period from 0.3 to 3.0; what range were you working in?

hmmm… I even re-installed windows but same problem. I’m using the same range.

It’s driving me nuts as touch would be perfect for making weird little midi sequencers otherwise.

I’ll try to make a more complex one to see if others can hear the difference. It’s so subjective between listeners. Maybe I’m just a stickler for timing. I guess one way would be to record the audio out from a percussion sound and measure the pulse intervals. Then I could compare different computers.

My 8 MHz Atari used to have quite tight timing, so I doubt that it’s a CPU speed issue.

sigh thanks,


Hey Rodney, I’ve been able to reproduce what you’re seeing. I had a number of applications going in the background, using up CPU, and then retried your example. The MIDI out notes were not evenly spaced and some completely dropped out. I’ll show R&D.

Here is a network that will help test the MIDI response more, also using the built-in MIDI synth.

Instead of using the MIDI Out CHOP, I’m now using the “midi” command in a CHOP Execute DAT.

It caches all the pulses in the channel. The channel sample rate is 600 samples per second, so you can pack up to 300 pulses per second into one CHOP channel (each pulse needs an on sample and an off sample).

However, since TouchDesigner’s frame rate defaults to 60 frames per second, multiple MIDI notes are emitted in one burst when this cooks each frame, so you can’t get timing more precise than 60 Hz. And that doesn’t take into account variance in the operating system and drivers.
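To make the arithmetic concrete, here is a back-of-the-envelope sketch in plain Python (outside TouchDesigner): at a 600 samples/sec channel rate and a 60 fps cook rate, each cook covers 10 channel samples, so any pulses inside that window go out together in one burst.

```python
CHANNEL_RATE = 600   # CHOP samples per second
FRAME_RATE = 60      # TouchDesigner cook rate (default)

samples_per_cook = CHANNEL_RATE // FRAME_RATE  # 10 samples per frame

def pulses_in_cook(pulse_samples, frame):
    """Indices of pulse samples that fall inside a given frame's window."""
    start = frame * samples_per_cook
    end = start + samples_per_cook
    return [s for s in pulse_samples if start <= s < end]

# Two pulses 3 samples (5 ms) apart both land in frame 0,
# so both notes are emitted at the same moment.
print(pulses_in_cook([2, 5], frame=0))  # [2, 5]
```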

Anyway, that’s how I see it!
MIDIOutFast.23.toe (6.71 KB)

This is really cool. I want to understand it better and make strange things with it.

Can someone please translate the following line into python for me:
midi -n 2 10+$F%90 127

Looks like it’s sending a note-on event: channel 2, velocity 127, and a note number from 10+0 to 10+89 (as the frame counter increments).
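For anyone following along, the note-number part of that expression translates to one line of plain Python (`frame` standing in for the `$F` frame counter):

```python
# Equivalent of the Tscript expression 10 + $F % 90:
# note numbers cycle from 10 up to 99, then wrap around.
def note_for_frame(frame):
    return 10 + frame % 90

print(note_for_frame(0))    # 10
print(note_for_frame(89))   # 99
print(note_for_frame(90))   # 10 (wraps around)
```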

Yes, but what I was wondering about was the midi command in Python. I have since learned that there is no such thing as of yet.

Right, sorry, it’s on our to-do list.

Hi All,

I’d like to contribute to this thread in case there have been any fresh takes on getting precise MIDI timing inside TouchDesigner. It’s something that stumped me a year ago and is still stumping me today, after having taken a thorough whack at it.

First of all, I’m fairly confident in saying that the weak spot in these types of approaches is the CHOP Execute DAT. If we take the same pulses coming from @greg’s example and feed them into a Copy CHOP, we can hear much more precise timing within the application itself, probably thanks to the wonders of time-slicing.

Unfortunately, the issue is that the CHOP Execute DAT often has to be used to convert the CHOP channel name/value into MIDI messages.

In my own applications, the CHOP exec DAT is used to split the channel name and value and store each of them into their own constant CHOPs as part of a synthesizer voice allocation algorithm.

The first mistake I made here was not realizing that setting the channel values via parameters affects all samples of that channel, regardless of sample rate etc. This meant that if my input was coming in at 480 samples/sec with 8 samples per channel, I was losing those 8 samples’ worth of timing resolution in my Constant CHOP, which was itself set to 480 samples/sec and 8 samples per channel.
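A quick illustration (plain Python, outside TD) of that resolution loss: writing via a parameter effectively stamps the whole block of samples, while per-sample writing keeps the event at its true position inside the block.

```python
# One 60 fps cook of a 480 samples/sec channel carries 8 samples.
BLOCK = 8

def write_via_parameter(value):
    # Setting a Constant CHOP value via a parameter stamps the whole
    # block: the event effectively lands at the block start.
    return [value] * BLOCK

def write_per_sample(value, sample_index):
    # Per-sample writing keeps the event at its true position
    # inside the block, preserving 1/480 s resolution.
    block = [0.0] * BLOCK
    block[sample_index] = value
    return block

print(write_via_parameter(1.0))   # timing collapsed to block start
print(write_per_sample(1.0, 5))   # event stays at sample 5
```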

So I figured I could modify the CHOP channel per sample using the Script CHOP, following one of Matthew Ragan’s techniques, but in order to do that I still had to use a CHOP Execute DAT, leaving me with the same results as before, which can be seen in the uneven distribution of pulses here:

At this point I’m wondering if anything can be done by either giving the CHOP Execute DAT its own local time (though I’ve never really understood how to make good use of this) or placing the script within an Engine COMP (which I unfortunately can’t test out for myself).

Is there a way to get the CHOP Execute DAT firing at higher than session rates? If not, is it something that can be RFE’d, or is it too tied into how TD works? I’d really (reaaally) love to find a way to get this resolved, ideally without having to use the Engine COMP.

I’ve attached Greg’s file including a few small additions relevant to my post.

MIDIOutFast.28.toe (110.4 KB)

Thank-you for your time!

So I’m able to get better results if I do this:

In your Script CHOP, set the number of samples to whatever’s available that cook:

scriptOp.numSamples = parent().NumSamples

Then shift the output to the current timeslice position.
(I just use a free-running LFO set to 480 Hz to get its start position.)

It’s still susceptible to dropped frames, but the pulses are regularly spaced in this example.

MIDIOutFast.30.toe (122.2 KB)
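The "shift to the current timeslice" idea can be sketched in plain Python like this. Each cook covers a window of absolute samples, and events are written at an offset relative to the window's start sample. (In TD, Rob derives the start position from a free-running 480 Hz LFO; here I just compute it from elapsed time, which is my assumption.)

```python
SAMPLE_RATE = 480

def timeslice(start_time, end_time):
    """Absolute sample indices covered by one cook."""
    start = round(start_time * SAMPLE_RATE)
    end = round(end_time * SAMPLE_RATE)
    return start, end

def place_events(event_samples, start, end):
    """Write each event at its offset inside this cook's block."""
    block = [0.0] * (end - start)
    for s in event_samples:
        if start <= s < end:
            block[s - start] = 1.0
    return block

# One 60 fps frame starting at t = 1.0 s covers samples 480..487.
start, end = timeslice(1.0, 1.0 + 1/60)
print(place_events([482, 485], start, end))
# [0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0]
```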

Further work might involve setting up a queue to make sure samples aren’t dropped.

I’ve attached an example.



Hey Rob, this is incredible!

I haven’t gotten a chance to put it all together yet but I’ve already gotten such great results so far just from having slapped on that script CHOP. I’d post a file up but apparently we can’t upload .wav files here. It’s night and day though!

Could you elaborate a bit on what your queuing system might consist of? I think down the line it won’t be such an issue once using the Engine COMP becomes commonplace, but I’d still like to take a crack at it.

I’m also wondering if you might know of a better way to achieve that “per sample” channel modification than the one I’ve got going on in the base. It’s adding a bit of cook time to my network which I’m hoping to cut back on.

Thanks a bunch, feeling really good about this.

Hi Owen.
Glad it’s useful.

In terms of queuing, I’m not fully sure how your scripting works, but the secret is to store every event, then on each cook pull out only as many samples as the current timeslice length, leaving the rest for next time.
You’ll then want to make sure the queue is never drained past some minimum amount, to allow some buffering for dropped frames, etc. Similarly you’ll want an upper limit on the queue length.
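A minimal sketch of that queuing scheme in plain Python (the class and names are mine, not TD API): store every event, drain only up to the timeslice length per cook, keep a lower bound as a buffer against dropped frames, and cap the queue so it can't grow without limit.

```python
from collections import deque

class SampleQueue:
    def __init__(self, min_fill=8, max_fill=480):
        self.q = deque()
        self.min_fill = min_fill   # keep some buffer for dropped frames
        self.max_fill = max_fill   # cap growth if the consumer stalls

    def push(self, samples):
        for s in samples:
            if len(self.q) < self.max_fill:
                self.q.append(s)   # samples beyond the cap are dropped

    def pull(self, timeslice_len):
        # Only drain down to min_fill, leaving the rest for next cook.
        n = min(timeslice_len, max(0, len(self.q) - self.min_fill))
        return [self.q.popleft() for _ in range(n)]

q = SampleQueue(min_fill=2)
q.push([1, 2, 3, 4, 5, 6])
print(q.pull(8))   # [1, 2, 3, 4] -- two samples held back as buffer
print(q.pull(8))   # [] -- nothing above the minimum yet
```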

In terms of per-sample channel modification, I’m just wondering if you’ve considered using the MIDI In DAT instead of the CHOP? You’ll get a callback for each event, removing the need for the CHOP Execute DAT.


Oof, I think I’ll have to put a pin in that for now.

I have used it for some other applications but generally speaking I’m generating the midi signals in Touch.

The bottleneck here is that I’ve got this voice allocator in my synths that splits the incoming 127 MIDI-formatted CHOP channels into note and velocity values, iterates through a list (whose length is my “max number of voices”) to see which of those voices is available, and sends the values over to two Constant CHOPs (triggers/notes) at the corresponding channels.
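The voice-allocation step described above follows a common pattern; here is a generic Python sketch of it (not the poster's actual code): scan a fixed list of voices for a free one and assign the incoming note/velocity to it.

```python
MAX_VOICES = 4

class Voice:
    def __init__(self):
        self.active = False
        self.note = 0
        self.velocity = 0

voices = [Voice() for _ in range(MAX_VOICES)]

def note_on(note, velocity):
    """Return the index of the voice that took the note, or None if full."""
    for i, v in enumerate(voices):
        if not v.active:
            v.active, v.note, v.velocity = True, note, velocity
            return i
    return None  # all voices busy; a real allocator might steal one

def note_off(note):
    for v in voices:
        if v.active and v.note == note:
            v.active = False

print(note_on(60, 100))  # 0 -- first free voice
print(note_on(64, 90))   # 1
note_off(60)
print(note_on(67, 80))   # 0 -- reuses the freed voice
```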

So what I’ve learned is that, as well as using that Script CHOP trick, I’ve got to be sending these values per sample if I want to retain 480 samples/second resolution.

It’s a start, but I’ve still got to figure out how I can efficiently make use of my voice-allocation algorithm in this context. Using the “while on/off” functions of the CHOP Execute DAT for 127 channels is going to be way too expensive for sure.

Lastly, I realise I’m going off topic now and may start a new thread, but I’m under the impression that changing the sample rate parameter of the MIDI In CHOP doesn’t give the results I’d expect: the number of samples per channel remains 1 regardless of the sample rate.

Thanks again,