Syphon feed to MIDI data - visualaudio sonification

My project idea is to create visualaudio reactions - the opposite of audiovisual reactions, for which there is already lots of information available.

I have a live Syphon feed coming into spoutin1 - this real-time video is controlled via a MIDI controller (not in TD). The final video is output at spoutout.

What I’m trying to achieve next in TD is converting elements of the video feed - colour, axis, pixel data, etc. - into MIDI data (0-127).

This MIDI data will then be sent out to control an external modular synth.

Can anyone point me to any research that may help?

hey @nikfrisbee ,
you’ll need to be more specific about exactly what data you want to get from the video, as I don’t understand what you mean by the terms you mention in relation to video.
color - do you mean the average color of the whole video texture? Then you can use the Analyze TOP, and a TOP to CHOP to convert that to 3 RGB channels you can send over MIDI (see the sketch after this list).
axis - no idea what you mean here?
pixel - what pixel?
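
For reference, reading those averaged channels from Python would look something like this - a minimal sketch, where 'topto1' is a placeholder for whatever your TOP to CHOP is actually named:

```python
# Average colour pipeline: Analyze TOP (set to Average) -> TOP to CHOP.
# The TOP to CHOP exposes the result as channels named r, g, b by default.
rgb = op('topto1')        # placeholder operator name
r = rgb['r'].eval()       # each value is normalized 0-1
g = rgb['g'].eval()
b = rgb['b'].eval()
```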

thank you for your reply, @nettoyeur.

Data conversion fields would be for the five characteristics of sound:
Pitch
Loudness
Timbre
Spatial Location
Duration
Each one of these fields would be sourced from the video feed.

Pitch would need to have a MIDI range - let’s say keyboard notes A1 (21) to C7 (108) - outputting random MIDI notes in this range. Here I would like to use the 3 RGB channels, but not averaged together - I would like to split them into 3 ranges:
RED A1-C3
BLUE D3-A5
GREEN B5-C7
Whichever colour is most present in the video would determine which range the notes are randomly selected from.
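
(For illustration, a rough TD Python sketch of that selection logic - operator names and the exact note numbers are placeholders, not tested:)

```python
import random

# Illustrative split of the note span into three ranges - adjust to taste.
NOTE_RANGES = {'r': (21, 47), 'b': (48, 80), 'g': (81, 108)}

def send_random_note():
    rgb = op('topto1')  # TOP to CHOP carrying the averaged r/g/b (placeholder)
    levels = {c: rgb[c].eval() for c in ('r', 'g', 'b')}
    dominant = max(levels, key=levels.get)   # most present colour
    low, high = NOTE_RANGES[dominant]
    note = random.randint(low, high)         # random note in that range
    op('midiout1').sendNoteOn(1, note)       # MIDI Out CHOP, channel 1
```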

Loudness would control volume and could be derived from the brightness of the image - I’m not 100% on this idea at present, but it’s a starting point for now. CC7 Main Volume.

Timbre - pixel information. I have seen an effect used within TD where the image explodes from pixels into extremely fine pixels. The pixel density would control the timbre.
CC72 Release, CC73 Attack, CC80 Decay to the VCA
CC71 Resonance, CC74 Cutoff Freq. to the filter - again, I’m not 100% sure about the best way to process the video data. I’m completely new to TD and not sure of the limitations, best practices, or the best way to achieve my conceptual ideas.
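
(A hedged sketch of how that might look once the pixel effect is reduced to a single 0-1 "density" value - all names here are hypothetical, and the resonance inversion is just one idea:)

```python
# 'density' is an assumed 0-1 value derived from the pixel-explosion effect
density = op('density1')['density'].eval()   # hypothetical CHOP + channel
cc_val = int(density * 127)                  # scale to the MIDI CC range

midi = op('midiout1')                        # MIDI Out CHOP (placeholder)
midi.sendControl(1, 74, cc_val)              # CC74 Cutoff Freq. to filter
midi.sendControl(1, 71, 127 - cc_val)        # CC71 Resonance, inverted here
```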

Spatial Location - axis data. I was thinking of using the x-y field to control CC10 Pan / CC8 Balance.
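
(Again a sketch, assuming a normalized x position can be derived from the image, e.g. via a Blob Track TOP or some other analysis - names are placeholders:)

```python
x = op('position1')['x'].eval()                   # assumed 0 (left) to 1 (right)
op('midiout1').sendControl(1, 10, int(x * 127))   # CC10 Pan
```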

Duration - a simple clock out.

As I have said above, I’m completely new to code, but I found the visual nature of TouchDesigner very appealing, so I’m wide open to any ideas anyone has to help me achieve visualaudio sonification. The MIDI information will be converted to control voltage outside of TD to control a Serge Modular system. The live video feed image is created via an LZX Industries-style application, and I further modulate this video graphically within TouchDesigner.

Any feedback/comments more than welcome.

thanks for explaining, but this is a bit much to answer in a single forum post :wink:
Perhaps as a total TD beginner you should only bite off what you can chew - start with one small thing, see where you get stuck, and then ask a question about only that specific part. This makes it easier & faster for people on this forum to help you.

So to make it simple, you could start with the brightness of the image, and try to use that to control something. Use the Analyze TOP to get the average color of your video. Use the TOP to CHOP to convert that into 3 RGB channels. You can convert those 3 values into perceived brightness by using a formula from here:
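
For example, the Rec. 709 luma weights are one common perceived-brightness formula - in Python it would look like this (operator name is a placeholder):

```python
# Perceived brightness from the averaged RGB channels (all values 0-1)
rgb = op('topto1')   # TOP to CHOP after the Analyze TOP (placeholder name)
brightness = (0.2126 * rgb['r'].eval()
              + 0.7152 * rgb['g'].eval()
              + 0.0722 * rgb['b'].eval())
```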

(If this gets too complicated, you can also start simpler by controlling something with the amount of red in the image - easy to test by holding a red piece of paper in front of your webcam.)

Once you have converted this brightness (or red channel) into your volume range (use the Select CHOP and the Math CHOP for this), you can send this value out using the MIDI Out CHOP - also see its documentation on how to send specific notes: midioutCHOP Class - Derivative
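
For example, sending a converted value from a script might look like this - a sketch assuming a MIDI Out CHOP named 'midiout1'; adjust names and channels to your setup:

```python
volume = 96                       # your converted brightness value, 0-127
midi = op('midiout1')             # MIDI Out CHOP (placeholder name)
midi.sendControl(1, 7, volume)    # channel 1, CC7 Main Volume
midi.sendNoteOn(1, 60)            # or trigger a note (middle C) instead
```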
