Let me start by saying that I was introduced to TD only about a week ago, so I am still learning a lot of the basics. I’m currently working with some friends on a volumetric display of around 7000 LEDs that will be installed at a large festival in a couple of months. Prior to being exposed to TD we were writing all our animations in Python, but I’m confident that putting in some time to learn TD and create a module for our display will give us significantly more interactivity and improved visuals. I’m not looking for anyone to walk us step by step through using TD; I’m just looking for guidance on which modules, techniques, or existing solutions we should focus our time on.
First let me give an overview of the technology being used and the LED layout. Our display consists of 100 8’ semi-transparent tubes. The tubes are positioned in a 10 x 10 grid, with each tube spaced 1.5 feet apart. Inside each tube are 64 individually controlled LED pixels. The tubes are all attached to a 16’ x 16’ frame that will be suspended 32’ in the air in the middle of a forest. The end result is a 16’ x 16’ x 8’ volume of light generating visual patterns affected by various sensors and data inputs. Yeah, we think it’s going to be pretty awesome too.
My current thought process is to create a module that accepts the output of a geometry module and feeds pixel data out to the display. This is where I’m hitting the biggest hurdle. Once we have the geometric shape we want to use as a mask to set our pixel values, I can’t easily find the best way to select the specific points within the 3D space of our shape and obtain their color values.
The initial suggestion was to slice our 3D shape into 10 layers and then select the 640 pixel color values needed per slice. I’m not rejecting this method at all; I just feel like there is likely a way of accessing these values by their 3D coordinates, eliminating the need to slice a 3D shape into 10 separate flat images. Again, I’ve only had one week with TD, so I may be way off base on the entire process; feel free to tell me so.
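For what it's worth, the two indexing schemes being weighed here can be sketched in plain Python. This is just an illustration of the bookkeeping, not TD code; the axis choices and function names are my own assumptions based on the 10 x 10 grid of 64-LED tubes described above.

```python
# Hypothetical sketch: mapping each LED's grid position to either a
# (slice, pixel) index (the '10 slices' idea) or one flat buffer index.
# Axis assignments here are assumptions, not the project's actual layout.

GRID_X, GRID_Y, LEDS_PER_TUBE = 10, 10, 64   # 10x10 tubes, 64 LEDs each

def led_to_slice(tube_x, tube_y, led_z):
    """Return (slice_index, pixel_index), treating each of the 10 slices
    as a flat 10 x 64 image (640 pixels), one slice per row of tubes."""
    slice_index = tube_y
    pixel_index = tube_x * LEDS_PER_TUBE + led_z
    return slice_index, pixel_index

def led_to_flat(tube_x, tube_y, led_z):
    """Flat index into a single 6400-value buffer, if you skip slicing."""
    return (tube_y * GRID_X + tube_x) * LEDS_PER_TUBE + led_z
```

Either way the LED count works out to 10 x 10 x 64 = 6400 addressable pixels; the question is only whether the renderer hands you 10 images of 640 pixels or one buffer of 6400.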
Any guidance or suggestions on where we should be focusing our efforts will be greatly appreciated. Thanks!
There are so many ways to approach this. The first thing you need to decide is how you will be addressing your tubes’ LEDs and how you will store/manipulate the color for each tube. In the end you will likely be sending the data to the LEDs from CHOPs or DATs. I’ve attached an example file with some ideas and approaches for you to consider.
In the top example, I have a 10x10 grid and convert that positional data into CHOPs. This is the position of each tube. Then the Noise TOP quickly creates 100 varying colors, which I also convert into r, g, b, a channels, each with 100 samples, i.e. the RGBA colors for each of the 100 tubes. This is then fed into a Geo COMP, which uses geometry instancing to position and color all 100 of your tubes for a pre-viz of your setup.
From this you can take the color channels in topto1 and really do anything with them before sending them out to the LEDs. I’ve also converted that data into DATs so you can see the RGBA values of all 100 tubes in the 100-row Table DAT called chopto1.
Below that I have an unrelated example, but it shows you one way of selecting geometry in 3D space, which sounded like what you were attempting (sorry if I was a bit confused about what you were interested in). It shows how to use bounding areas in the Delete SOP and the Group SOP to select different parts of your geometry. I then throw that geometry into Geo COMPs for rendering with different materials. This approach works for rendering a scene and lets you more easily address certain tubes, but it doesn’t really get you any closer to sending color data out to LEDs.
Anyways, maybe these small examples will spark some ideas and get you started. Also check out the Operator Snippets examples in the Help menu (Experimental builds only) for good ideas on how to use some of these operators. tubes.tox (2.38 KB)
First let me say WOW! I’m blown away by the fact that someone took the time to actually create a TD file to give me some specific examples of what I’m trying to achieve. I have only started to analyze the different modules you put together, but I think you got a pretty good idea of exactly what we are after. I’m going to put on a fresh pot of coffee and dive into this right now. I think your example will help us form a much clearer plan for how we want to attempt this in TD. I will update (and most likely ask a few more questions) as I work through the file.
Regarding the method we will be using to get the values from TD to our pixels: we are using boards that all connect to a server designed for controlling LED pixels. We use our current Python libraries to generate the values for each pixel and then package all the pixel values into TCP packets that get sent to the server connected to the LED boards. (The method, and even the protocol, is very similar to DMX over TCP.)
My thought is to either:
1.) Use the Python scripting ability of TD to make use of our existing Python library that already sends pixel values to the server. The existing library is written in Python 2.7 and not extremely simple to convert to Python 3, but it is definitely doable.
2.) If possible, create a module that takes the pixel color values for our display, packages the data according to the protocol expected by the server, and sends it straight from TD to the server.
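Option 2 boils down to framing the per-pixel RGB values into the binary message the server expects and pushing it over a TCP socket. A minimal sketch, with the caveat that the header layout below (a magic byte plus a payload length) is purely hypothetical; the real framing would have to match the server's actual protocol:

```python
import socket
import struct

def build_packet(pixels):
    """pixels: list of (r, g, b) tuples, one per LED.
    Packs them into one binary message with a hypothetical header:
    1 magic byte (0x7E) followed by a big-endian 16-bit payload length."""
    payload = b''.join(struct.pack('BBB', r, g, b) for r, g, b in pixels)
    header = struct.pack('>BH', 0x7E, len(payload))  # assumed framing
    return header + payload

def send_frame(sock, pixels):
    """Send one full frame over an already-connected TCP socket."""
    sock.sendall(build_packet(pixels))

# Usage (against a real server; address is a placeholder):
# sock = socket.create_connection(('192.168.1.50', 7890))
# send_frame(sock, frame_pixels)
```

The main design point is to build the whole frame in memory and hand it to `sendall()` once, rather than making one small socket write per pixel.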
Again, thanks a ton for the help and examples. I’m now 10x more motivated to use TD on this project, and to learn TD in general.
The advantage of the ‘10 slices’ approach is that you get access to the full set of rendering tools. Just trying to sample the color at one position will tie your hands: you’ll be working in an entirely generative space. If you do 10 slices, you’ll be able to use arbitrary 3D shapes and then also apply post-process steps as needed (on top of purely generative techniques).
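To make the downstream side of that concrete: once each slice has been rendered to a small image, recovering per-LED colors is just array reshaping. A sketch under the assumption that each of the 10 slices arrives as a 10 x 64 RGB array (one row of tubes per slice); how you get those arrays out of the renderer is up to you:

```python
import numpy as np

def slices_to_volume(slices):
    """slices: list of 10 arrays, each shaped (10, 64, 3) -- one rendered
    slice per row of tubes. Stacks them into the full (10, 10, 64, 3)
    volume indexed [row][column][led]."""
    return np.stack(slices, axis=0)

def volume_to_flat(volume):
    """Flatten the volume to a (6400, 3) per-LED color list for output."""
    return volume.reshape(-1, 3)
```

So the rendering pipeline produces ordinary 2D images, and the only volumetric step left is this reshuffle into the output order your LED server wants.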
It boils down to the fact that you’re using GPUs, and GPUs are built to rasterize things. If you don’t rasterize, then you may as well stay in Python (in Touch, or without).
Give it a try! The best thing about Touch is that there are no ‘this or that’ decisions; it’s a continual ‘bit of this, bit of that’ experiment.
First of all, if this installation was at EF this year I want to say I saw it and congrats on pulling off that amazing work of art!
I got word that it was running off of TouchDesigner and meant to come by for a second, closer look, but alas I didn’t make it back in time.
Would you mind sharing more info, or at least some direction, on how one actually communicates with / controls this many LEDs with the hope of achieving 30-60 fps?
I’m building some modular LED panels using a Teensy 3.1 microcontroller (Arduino-based) and the OctoWS2811 library. Right now my hope is to have the Teensy push 3,840 LEDs, but in the 30-60 fps range…
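A quick back-of-envelope check suggests the target is at least plausible on the LED side. Assuming the usual WS2811 figures (800 kbit/s data rate per strip, 24 bits per LED) and OctoWS2811 driving 8 strips in parallel:

```python
# Rough feasibility arithmetic; WS2811 timing figures are assumptions
# (800 kbit/s per strip, 24 bits per LED, 8 parallel strips).
LEDS = 3840
STRIPS = 8
BITS_PER_LED = 24
WS_RATE = 800_000            # bits per second per strip

leds_per_strip = LEDS // STRIPS                        # 480 LEDs per strip
frame_time = leds_per_strip * BITS_PER_LED / WS_RATE   # seconds per refresh
max_fps = 1 / frame_time                               # ~69 fps ceiling

# Host side: raw RGB bytes per second needed to feed 60 fps over USB/serial
host_bytes_per_sec = LEDS * 3 * 60                     # 691,200 B/s
```

So the strips themselves top out just under 70 fps, and the host link needs roughly 0.7 MB/s of sustained throughput; neither is the hard limit if the Python side is the bottleneck.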
This has all been working great in smaller tests. I’ve split my VJ system in Touch into two processes: 1) handles the UI, media, and all controls and calculations, and sends the data out over a Touch Out CHOP; 2) receives the CHOP data through a Touch In CHOP, immediately samples it via a Python script, formats it, groups it into the largest batches of bytes possible, and sends it via serial.
I’ve profiled the Python code that does the heavy lifting, and the .sendBytes() call is by far the largest bottleneck…
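Since per-call overhead usually dominates here, the standard fix is to collapse the frame into one contiguous buffer and make a single write. A minimal sketch of that batching idea; the function names are illustrative and `port` is assumed to be any object with a `write()` method (e.g. a pySerial port):

```python
def frame_to_bytes(pixels):
    """pixels: iterable of (r, g, b) tuples.
    Accumulates the whole frame into one contiguous bytes object so the
    serial layer sees a single large write instead of thousands of tiny ones."""
    buf = bytearray()
    for r, g, b in pixels:
        buf += bytes((r, g, b))
    return bytes(buf)

def write_frame(port, pixels):
    """One write call per frame; 'port' is anything with .write()."""
    port.write(frame_to_bytes(pixels))
```

If `.sendBytes()` has fixed per-call cost, cutting the call count from one-per-pixel to one-per-frame is where virtually all of the win comes from.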
I’m beginning to wonder if I’m approaching this the right way overall.
My code is below!
Any help or direction would be GREATLY appreciated !