I’ve been putting together an interactive light structure using TD, pixel controllers and LEDs, and have a question regarding ‘global’ and ‘local’ mapping across the structure. My method so far is a combination of techniques and ideas from people such as Ben Voight and Ginger Leigh.
Looking at the pre-vis, the current implementation appears to be mapping well globally across the whole structure. What has so far eluded me is how, in addition to mapping globally, I would go about targeting individual areas with localised visual patterns. For example, the six sets of four vertical strips would each be individually mapped, in addition to the overall global illumination.
I’m not sure if I should be looking to address specific point index ranges in my SOPs, look at the UV image mapping, or do something else entirely.
I’ve attached my project in case it’s easier to shed some light (pardon the pun) on this. Please excuse the somewhat sandbox-y (read: messy) nature of parts of the project at present. andbreath_proto_v4.2.toe (331.7 KB)
Any help and advice would be most gratefully received.
J
One step you may or may not have already thought about is getting your RGB data (that is currently feeding the previz) into a format that can be properly “patched” to the physical, real-life DMX universes and IP addresses that your LED controllers will be set to and expecting.
Currently the order in which the pixels get sampled / the data gets created doesn’t really matter, since it lines up with the order of the XYZ coordinates used for the instancing in the previz. That order could also be fine for patching to your DMX outputs, as long as all of your LED controllers are set up in very uniform chunks of pixels, without any gaps or overfills that would force you to create more complex patching with CHOPs in TouchDesigner.
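Just to illustrate what “uniform chunks” buys you, here’s a rough plain-Python sketch (not TouchDesigner API, and the pixel counts are invented): as long as every controller owns the same contiguous number of pixels, a simple stride is enough to slice the flat RGB stream per controller; the moment one controller has a gap or an odd pixel count, you need per-controller offsets instead.

```python
# Plain-Python sketch (not TouchDesigner API): slicing a flat RGB stream
# into per-controller chunks. Pixel counts here are invented for illustration.

PIXELS_PER_CONTROLLER = 680          # assumption: every controller drives 680 pixels
NUM_CONTROLLERS = 6                  # assumption: one controller per strip group

def controller_slice(flat_rgb, controller_index):
    """Return the RGB channel values belonging to one controller.

    flat_rgb is a list of channel values in the same order the pixels
    were sampled (r0, g0, b0, r1, g1, b1, ...). This simple stride only
    works if every controller owns exactly PIXELS_PER_CONTROLLER
    contiguous pixels, with no gaps or overfills.
    """
    stride = PIXELS_PER_CONTROLLER * 3
    start = controller_index * stride
    return flat_rgb[start:start + stride]

# Example with fake data for all controllers:
flat_rgb = list(range(NUM_CONTROLLERS * PIXELS_PER_CONTROLLER * 3))
print(len(controller_slice(flat_rgb, 0)))   # 2040 channel values for controller 0
```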
One method that helps solve this, and that also lets you create multiple maps which can be mixed together seamlessly, is to have your TOP to CHOP convert TOP data that is already arranged in a format that makes DMX patching easier. I usually arrange the pixels so that every vertical column is one DMX universe (170 pixels). After conversion to CHOPs, a single “Swap Channels and Samples” Shuffle CHOP will interleave the RGBs and give you a set of 510-sample CHOP channels that are ready to pipe directly into your DMX Out CHOP. This way you can have short and long strips without having to worry about using splice / trim CHOPs afterwards to account for blank DMX channels or universes.
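To make the numbers concrete, here’s a hedged plain-Python/numpy sketch of the arithmetic that column-per-universe layout relies on. The 170-pixel / 510-channel figures come from the description above; the universe count and the toy data are made up. It mimics, conceptually, what the “Swap Channels and Samples” shuffle gives you: each column of 170 RGB pixels becomes one flat 510-sample universe.

```python
# numpy sketch (not TouchDesigner API) of the column-per-universe idea.
import numpy as np

PIXELS_PER_UNIVERSE = 170                          # 170 RGB pixels per universe
CHANNELS_PER_UNIVERSE = PIXELS_PER_UNIVERSE * 3    # = 510 DMX channels

# Toy "TOP": height = 170 pixels, width = 4 universes, 3 color channels.
# Values are random stand-ins for actual content.
num_universes = 4
top_rgb = np.random.rand(PIXELS_PER_UNIVERSE, num_universes, 3).astype(np.float32)

# Interleave each column's R, G, B into one flat 510-sample universe,
# which is conceptually what the "Swap Channels and Samples" shuffle does.
universes = [
    top_rgb[:, u, :].reshape(CHANNELS_PER_UNIVERSE)   # r0, g0, b0, r1, g1, b1, ...
    for u in range(num_universes)
]

for u, data in enumerate(universes):
    print(f"universe {u}: {data.size} channels")       # 510 each
```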
You can create point attributes in your SOPs that define the DMX universe and channel numbers, and then use those attributes when instancing Geo COMPs to build the UV textures for a Remap TOP. Each point’s XY location in the unmapped content becomes the red and green values of the UV map texture, and that UV pixel gets placed in the proper row and column (DMX universe / pixel number), so it selects the right pixel from your unmapped content.
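As a rough illustration of that layout logic only (this is not the Remap TOP itself, and the point positions and patch numbers are invented), here’s a small numpy sketch of building such a lookup texture: red and green store where to sample the unmapped content, and the pixel’s row/column encode universe and pixel number.

```python
# numpy sketch (not TouchDesigner API) of building a remap-style UV texture.
import numpy as np

# Invented example points: (x, y) in normalized 0-1 content space,
# plus the DMX patch each point belongs to (universe, pixel index).
points = [
    # (content_u, content_v, universe, pixel)
    (0.10, 0.95, 0, 0),
    (0.10, 0.90, 0, 1),
    (0.55, 0.95, 1, 0),
    (0.55, 0.90, 1, 1),
]

NUM_UNIVERSES = 2
PIXELS_PER_UNIVERSE = 170

# One row per universe, one column per pixel; R and G store where to
# sample the unmapped content, exactly as described above.
uv_map = np.zeros((NUM_UNIVERSES, PIXELS_PER_UNIVERSE, 3), dtype=np.float32)
for content_u, content_v, universe, pixel in points:
    uv_map[universe, pixel, 0] = content_u   # red   = U into unmapped content
    uv_map[universe, pixel, 1] = content_v   # green = V into unmapped content

print(uv_map.shape)   # (2, 170, 3): ready to act as a lookup texture
```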
Then you can have as many maps (UV textures) as you want, with Remap TOPs that each use a different map, and even different textures (content) if you want, to create RGB data already laid out in DMX / universe order. Mix and match those DMX-ordered TOPs with a Composite TOP (Maximum, Add, or whatever you want) and THEN run the result through a single TOP to CHOP to get your final DMX output to send to the LED controllers. This lets you mix many different maps and pieces of content without having to change or re-cook your SOPs (the UV textures don’t need to change unless the DMX patch changes), AND you only have one TOP to CHOP, which is the most expensive operator in most pixel-mapping setups anyway.
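Purely as a conceptual sketch of that last mixing step (again plain numpy, not TouchDesigner API, with an invented universe count): a global layer and a local layer, both already in DMX order, get combined with a per-pixel maximum and only then flattened once into per-universe channel streams, which is the role the single Composite TOP + TOP to CHOP plays.

```python
# numpy sketch (not TouchDesigner API) of mixing DMX-ordered layers and
# flattening once at the end: Composite TOP (Maximum) -> TOP to CHOP.
import numpy as np

PIXELS_PER_UNIVERSE = 170
NUM_UNIVERSES = 6   # invented count for illustration

def fake_layer():
    """Stand-in for one remapped, DMX-ordered TOP: one row per universe."""
    return np.random.rand(NUM_UNIVERSES, PIXELS_PER_UNIVERSE, 3).astype(np.float32)

global_layer = fake_layer()     # e.g. the whole-structure map
local_layer = fake_layer()      # e.g. one set of four vertical strips

# "Composite TOP, Maximum" equivalent: per-pixel max of the two layers.
mixed = np.maximum(global_layer, local_layer)

# Single "TOP to CHOP" equivalent: each universe row becomes a flat
# 510-sample channel (r, g, b interleaved), ready for a DMX output.
dmx_channels = [mixed[u].reshape(-1) for u in range(NUM_UNIVERSES)]
print(len(dmx_channels), dmx_channels[0].size)   # 6 universes, 510 samples each
```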