UV maps and dynamically changing content

Hi there,

I am working on a large-scale permanent interactive installation that involves projection mapping, and I’m trying to tackle it in pieces. With the help of Matthew Ragan’s tutorial on edge blending and projection mapping, I have successfully set up a virtual scene with a geometry and cameras acting as projectors. I have a few questions at this point:

  1. To map my content on the geometry, should my MovieFileIn TOP be a UV Map of the geometry?

  2. The installation is designed so that when sensors detect people in certain parts of the room, what is being projected on the geometry changes. I will be creating another network that handles this interaction of sensor data coming into Touch and triggering a new image to be displayed. What kind of operator will function as the output from that network / the input for mapping content on the geometry? Can it still be a MovieFileIn?

  3. Is there a way to tell the scale of everything in the geometry viewer? I can see my geometry and cameras but how can I ensure that everything is the same size as it will actually be in real life?

Thanks so much! I know it’s a little confusing - happy to answer questions and provide more information.

Anum

Some rapid-fire answers for you:

No… not exactly. Ideally, your geometry is modeled in another application with a correct set of UVs that represent how a flat video file will be wrapped onto the object. Ian’s got a great tutorial up about UV mapping in Blender and then texturing in TouchDesigner, which helps clear up what that process / pipeline looks like: youtube.com/watch?v=N0jSb6SD8J0

The real question here is your design intention - you could use the movement of people to displace the textures on your geometry, change videos, create trails, or any one of a thousand different ideas. This really comes down to what you want to achieve creatively. If you want to focus on just changing video content, consider building a solid A/B deck and learning how to preload videos to avoid stutters.
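If it helps, here’s a very rough sketch of what the deck-switching side could look like in a CHOP Execute DAT. The operator names (‘deckA’, ‘deckB’, ‘cross1’) and file paths are just placeholders, and in practice you’d ramp the crossfade rather than snap it:

```python
# CHOP Execute DAT callbacks - a minimal A/B deck sketch.
# Assumes two Movie File In TOPs ('deckA', 'deckB') feeding a Cross TOP
# ('cross1'), and a sensor/trigger channel wired into this CHOP Execute DAT.
# All names and paths below are placeholders.

CLIPS = {
    0: 'media/idle_loop.mp4',
    1: 'media/zone_one.mp4',
    2: 'media/zone_two.mp4',
}

def onOffToOn(channel, sampleIndex, val, prev):
    # Cue the next clip on the B deck so it has time to buffer.
    op('deckB').par.file = CLIPS.get(int(val), CLIPS[0])

    # Fade from deck A to deck B. This just snaps to 1 for brevity -
    # in practice drive the Cross TOP's mix with an LFO, Timer CHOP,
    # or animation so the transition is smooth.
    op('cross1').par.cross = 1
    return
```

In a real deck you’d also alternate which Movie File In is the hidden one, so the next clip is always loading while the current one plays.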

There’s lots of interesting discussion in the Slack group around scale and which application to use here. Regardless of tool, I’d start by building the model of your space in an environment that you like. C4D, Blender, Maya - wherever you feel most comfortable. Make sure you include a piece of throw-away geometry at human scale, then import to Touch to make sure you understand how scaling moves between the two applications. It’ll be important to double-check your lensing and throw calculations, and with the correct information about your projectors you can set up a frustum simulation a lot of different ways to get a handle on how that will work, and to sort out where projectors should live. Richard has a nice previs tool up on chopchopchop that might also be worth looking into:
chopchopchop.org/assets/lights/previz
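On the throw calculations, this isn’t tied to any particular projector - it’s just the generic relationship between throw ratio, distance, and image size, with made-up numbers, in case it’s handy for the previs:

```python
# Generic throw math - plug in the throw ratio from your projector's
# spec sheet. The numbers here are only for illustration.

throw_ratio = 1.5        # throw distance / image width
image_width_m = 4.0      # how wide the projected image needs to be

throw_distance_m = throw_ratio * image_width_m
print(f'Projector needs to sit roughly {throw_distance_m:.2f} m from the surface')

# For a 16:9 native projector, the image height follows from the width:
image_height_m = image_width_m * 9 / 16
print(f'Image size: {image_width_m:.2f} m x {image_height_m:.2f} m')
```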

Hope that helps.

I added a TOX example to this post - I’m not sure whether it helps or not.

It has one base that generates random values and sends them over the network. The other base picks up those signals and uses them to show/hide different faces of a box.

Each face of the box is in a cube map; however, whatever geometry you choose will need to be unwrapped somehow. I’d follow Matthew’s suggestion and look into Blender.
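For reference, the receive side of the TOX boils down to something like this - a rough sketch only, and the channel and operator names here are placeholders rather than the exact ones in the file:

```python
# CHOP Execute DAT callbacks - receive-side sketch.
# Assumes sensor channels named sensor1..sensor6 arriving over a
# Touch In / OSC In CHOP, and six Geometry COMPs named face1..face6,
# one per face of the box. All names are placeholders.

def onValueChange(channel, sampleIndex, val, prev):
    # Map 'sensor3' -> 'face3' and toggle that face's render flag
    # when the incoming value crosses a simple threshold.
    face = op(channel.name.replace('sensor', 'face'))
    if face is not None:
        face.render = val > 0.5
    return
```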

Typically I assume that the Touch geometry world space is 1 meter per unit. So if you have a camera in the Touch world at (0, 1, 0), it will be 1 meter up on the Y axis.

When importing from another program, Touch won’t automatically know what scale the file was exported at. So if you export an item in millimeters, it will come in much larger in the Touch environment. Similar issues exist with inch / metric conversions.
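If a model does come in 1000x off (millimeters vs meters), a quick fix is a uniform scale on the Geometry COMP that holds it - ‘geo1’ is just a placeholder name here:

```python
# Millimeters -> meters: scale the container down by 1000.
op('geo1').par.scale = 0.001   # Uniform Scale on the Geometry COMP

# Inches -> meters would be 0.0254 instead:
# op('geo1').par.scale = 0.0254
```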

Again, take Matthew’s advice and use known geometries to confirm that your transformations are working properly when bringing things into TouchDesigner.
fakeSensors.tox (2.88 KB)

Thanks so much, Matthew and Harvey! Extremely helpful for figuring out my next steps.