I’m sure this should be obvious to me, but it’s not. Is CamSchnappr designed for aligning projected images/movie files, or is it just intended for aligning UV materials? I can get it to align UV materials on my model no problem, but I’ve been stuck trying to align an image/movie projected from a light. Is this not what it’s also intended for, or am I missing something? I put my filein1 model into the GeoSOP field and my moviefile into Color Map in CamSchnappr, and the result is a projected image that’s tiled.
Hello,
You can use the boxmap file included in the TD samples, but it’s not well suited for your task.
You need to prepare a proper UV unfolding in a 3D package such as Blender.
I have produced a tutorial on how to do that. It’s very old (and in French) but informative. I must redo an updated one in English!
Thanks Jacques, I’m actually familiar with Blender and unwrapping.
What I’m trying to figure out is if camshnappr is designed for aligning an image/movie projected from a light.
I’m starting with a moviefile created in Blender whose position is camera-matched to a 3D model file, and I’m trying to project the moviefile onto the 3D model, aligning points with the real architecture.
I don’t understand exactly what you’re doing and what you want to do.
I don’t think it works with a projected image, only with a textured material that has proper UV unwrapping.
CamSchnappr calculates the beamer (projector) position and settings by aligning the points, but it doesn’t change anything about the texture. You can align CamSchnappr without using any texture; the texture only helps to visualize the result.
Perhaps if you post your project, it would be easier to understand.
I may be misunderstanding, but if you want to simulate that movie being projected from a given point onto the geo, the TD way of doing this is to use a textureSOP or POP (depending on your network), set its Texture Type mode to “Perspective from Camera”, and then feed it your camschnappr OP (the camschnappr is a Camera COMP with extensions). This will create the projected-from-view texture coordinates that I think you’re looking for.
If your model already has UVs from the camera in Blender baked into it, it should just work. Keep in mind that you will want to add a constantMAT with your final texture to your render geo.
I’ve done extensive work with Blender UVs and camschnappr and would be happy to help you figure it out.
Hi flowb,
I appreciate this and I get it. It’s challenging to explain (add a second language and it makes it even harder).
My model doesn’t have UVs. It’s just an obj (geo) without a material (“Camera_001_convertedtoTD”). The other image I uploaded (animation complex scene0301) is a still from the moviefile that I created in Blender from the position of my actual projector. What I want to do is project this moviefile onto the geo.
Is CamSchnappr the right tool to do this, or am I better off trying to find another way to align my projected moviefile with my model?
Hi,
-
camschnappr is the right tool if the geo you are projecting onto (mostly) matches the model you have and you want to have the ability to reliably land your content in specific spots on the surface.
-
If you do not have texture coordinates on your geo, then there is no way for a renderer to know which pixels of the movie/texture go where on the model, so these would need to be added either in Blender or in TD.
In your image it looks like you have assigned materials and textures to your cubes and cylinders. It sounds like you have rendered a movie from the point of view of your imported camera (Camera_001…) and you want to project the movie from that perspective.
You’ll want to do a couple things here:
First, you’ll want to dive into the imported geo network and add a textureSOP or POP, depending on what kind of network is in there. Then set that up to use texture coordinates from a camera and assign the imported camera to it. This will apply texture coordinates from your imported camera. Then add a constantMAT, set your imported geo’s render MAT to it, and assign your rendered content as the Color Map for that. This should let you test to make sure your content maps to your geo properly.
Once you have your content properly mapped to your geo in TD you can move on to using the camschnappr to properly align the projector to your as-built geo.
If you can share your project or just the geo and a frame from your movie that shows everything we can help you set up a basic network that does all this.
Cheers
I don’t seem to be able to upload the obj but here is a still that I’m feeding into my moviefile1 and my .toe
To share.toe (57.5 KB)
Hi,
I’m afraid the geo is needed. Can you try zipping it up before uploading or DM me and I’ll send you a Dropbox file request
Hi @slurpman ,
I took a look at your blender file. Since you had provided it, I figured it was worthwhile demoing how to do UVs projected from a camera there first. There were some obstacles that are worth highlighting here.
Since the position of the camera in the blender project had moved, I started out by rendering out a new test frame from the camera perspective:
I saw in your project that you had already created a unified copy of all your geo. I assume this was a test to get everything over to TD in a single object. I decided to use this piece of geo to demo Project UV from Camera in Blender.
I selected your “Flattened Geo” object, went to modifiers and picked “UV Project”
The UV Project settings are fairly straightforward. You need to specify the target uv map layer on the object (you had added one called ‘automap’ which I used), specify the camera aspect and the camera.
The modifier takes effect immediately. If we add a material to the geo, we can apply our rendered frame. I’m glad that I tested it in Blender first, because it turned out there was more work to do.
You’ll note that the texture has gaps around the edge of the camera frame in the viewport. The objects in the scene are not adequately subdivided. A full explanation of why this matters might be out of scope for this thread but, to simplify: UV coords on geo are linearly interpolated across each face, but camera projection is perspective-correct (non-linear). The difference between these two mappings is what causes the warping in the image at the margins.
Here’s a wireframe view of your geo, it’s all just verts at corners with no triangles:
In order to properly project perspective-correct UVs onto this, we need to give Blender (or TD, for that matter) more places to write perspective UV values, so that the linear interpolation between them becomes less of an issue:
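To make the interpolation error concrete, here is a toy numerical sketch (not TD or Blender code; the 90° FOV camera, the `project_u` helper, and the example coordinates are all made up for illustration). It projects one edge of a face that spans a large depth range and compares the true projected UV at the edge’s midpoint against what linear interpolation of the vertex UVs would give, then shows how one subdivision shrinks the error:

```python
def project_u(x, z):
    """True projected u for a point in projector-camera space
    (hypothetical camera at the origin looking down +z, 90-degree FOV)."""
    return 0.5 + 0.5 * (x / z)

# Edge of a face receding from the projector: near vertex A, far vertex B.
ax, az = -1.0, 2.0
bx, bz = 1.0, 6.0
u_a, u_b = project_u(ax, az), project_u(bx, bz)

# True projected u at the 3D midpoint of the edge:
mx, mz = (ax + bx) / 2, (az + bz) / 2
u_true = project_u(mx, mz)        # 0.5

# What linear interpolation of the vertex UVs gives at that midpoint:
u_lerp = (u_a + u_b) / 2          # ~0.4167

print(f"error without subdivision: {abs(u_true - u_lerp):.4f}")

# Subdivide once: a new vertex at the midpoint gets its true UV.
# Check the remaining error at the quarter point of the new half-edge.
qx, qz = (ax + mx) / 2, (az + mz) / 2
err = abs(project_u(qx, qz) - (u_a + u_true) / 2)
print(f"error after one subdivision: {err:.4f}")
```

Each subdivision level roughly halves the worst-case error on this edge, which is why the gaps at the frame margins close up as the geo gets denser.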
For good measure I also triangulated all the faces. This is standard practice for processing geo that you want to render for projection since it helps to stabilize the interpolation of UVs across surfaces.
Pro-Tip: Blender lets you assign operations to a “Q” menu that pops up when you press the Q key on your keyboard. Triangulate faces and Subdivide geo are both on my Q menu for this reason.
This object can be directly exported to whatever format you wish and rendered with the sample texture in TD. So in theory, no TD-based project-from-camera texture mapping is required:
You could add a camschnappr to this network and use it to project this scene from a different angle, however you’d probably wind up seeing doubled up image data in your output. If we look at the geo from another angle the issue becomes immediately apparent:
I have used perspective UVs in projection mapping projects for building facades, but we tend to do a couple of extra things. Usually we project perspective from the audience’s point of view so that the perspective looks right from where people are standing. We’ll also often use either Houdini or, lately, Blender geo-nodes to manipulate the “Color” property of points that are doubled so that we can black/mask them out. TouchDesigner’s constantMAT reads point Color data by default, so that mostly tends to just work.
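The masking idea can be sketched in a few lines of plain Python (this is not the actual Houdini/geo-nodes setup; the coarse depth buffer, `project`, and the sample points are hypothetical). Every point is projected into the projector’s image; any point that is farther away than the nearest surface seen through the same “pixel” is a doubled point, and its Color gets set to black:

```python
RES = 8  # very coarse projector-space depth buffer, for illustration only

def project(p):
    """Pinhole projection: hypothetical camera at origin looking down +z, 90-degree FOV."""
    x, y, z = p
    return 0.5 + 0.5 * x / z, 0.5 + 0.5 * y / z, z

def mask_doubled(points, eps=1e-3):
    """Return white for points visible to the projector, black for doubled ones."""
    projected = [project(p) for p in points]
    depth = {}  # (pixel_x, pixel_y) -> nearest z seen through that pixel
    for u, v, z in projected:
        key = (int(u * RES), int(v * RES))
        depth[key] = min(depth.get(key, float("inf")), z)
    colors = []
    for u, v, z in projected:
        key = (int(u * RES), int(v * RES))
        visible = z <= depth[key] + eps
        colors.append((1.0, 1.0, 1.0) if visible else (0.0, 0.0, 0.0))
    return colors

# A near point and a far point that land in the same projector pixel:
pts = [(0.1, 0.0, 2.0), (0.2, 0.0, 4.0)]
print(mask_doubled(pts))  # near point stays white, far one is blacked out
```

In a real setup you would do this per point on the dense geo and write the result into the point Color attribute that constantMAT reads.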
In your case, since the scene has so many objects at variable depths, it might be difficult to pull this off. I know this is somewhat incomplete, but we’re kind of at a crossroads here that depends on how you are planning to show this. You can certainly project this from the UV perspective, and then you will naturally shadow all the doubled-up faces.
Since you are animating the textures in Blender, it seems likely that you would benefit from doing texture baking (aka render to unwrapped texture) in Blender and then applying those in TD instead. This would give you the flexibility to render the scene from any angle, at the cost of some (but not all) freedom to bake perspective effects into your render.
I hope this is helpful. Please let me know if you’d like any of these assets.
It’d be great to get the assets. It’ll take me a bit to sort out what you did here. Seeing the files will probably help.
Thank you so much for your help on this.
I remade my Blender file with the camera in the correct position, plus subdividing/triangulating the geometry, and got it close to working, as you can see here. What I’m missing is how you got the Translate/Rotate values for the camera and the null. You used an OBJ with your filein node, but you can’t export a camera from Blender along with an OBJ, so how did you get your cam position? I tried exporting an FBX with a camera, because I know you get Camera_convertedtoTD and Camera Null nodes inside the FBX. When I did that, the camera may have worked (it’s hard to tell for sure) but the materials didn’t.
After posting I noticed you used a filein POP rather than a SOP, so I switched that in my setup. I don’t actually know what that changes, though.
The other, kind of separate, question I’m wondering about is: when I get this working, will this process work with a setup of two or more projectors? I think the answer is yes; you’d do the whole process again from an offset angle, and as long as the audience is positioned in the right place, you could overlap projector beams and it would work?
Hi,
I actually exported the geo using the unified OBJ and then exported just the camera using fbx. Sorry I didn’t call that out. It’s ok to mix these. Blender’s exporter allows you to export only “Selected Objects”. So you can export the camera by itself, and then just copy it out of the FBX COMP.
POPs are a new class of operator that Derivative is developing to eventually supplant SOPs. In this case, since we’re not really doing much with the geo data after importing it, the difference is trivial. If you are unfamiliar with either, I’d recommend working with POPs, since that gets you fully GPU-accelerated performance and helps you learn the newer workflows.
With regard to multi-projector setups, remember that each projector in your output rig will be represented by another camera that is used to render it. So the doubling of data that you see in your screenshot (box textures on the ground plane) will happen there as well. If this is not an issue then, yes, by all means go ahead. You can use a separate camschnappr for each projector and then use them as the cameras bound to renderTOPs. You would likely want to pick an optimal viewpoint to project UVs from (the audience POV). There’s no need to pre-calculate the projector position in Blender, since this would be done dynamically using camschnappr to match your actual projector layout on set.
As for overlapping projector beams, a lot depends on how your two projectors are set up. The 90-degree corners in your target geo mean that you might not need to “blend” projectors, if you’re fine with brighter and darker regions. If you were hoping to normalize the intensity across the multi-projector setup and fill in shadows cast by one projector with data from another, that is a more complex setup.
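If you do end up needing to blend, the usual trick is a cross-fade ramp across the overlap whose weights sum to full brightness in linear light, not in signal values (projectors apply a gamma). Here is a minimal, hypothetical sketch of that idea; the 2.2 gamma and the linear ramp are simplifying assumptions:

```python
GAMMA = 2.2  # assumed display gamma; real rigs are measured, not assumed

def blend_weights(t):
    """t in [0, 1] across the overlap: 0 = fully projector A, 1 = fully B.
    Split the light linearly, then convert each share back to signal values."""
    a_lin, b_lin = 1.0 - t, t                      # shares sum to 1 in linear light
    return a_lin ** (1 / GAMMA), b_lin ** (1 / GAMMA)

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    a, b = blend_weights(t)
    # combined linear-light output stays constant across the overlap
    print(f"t={t:.2f}  A={a:.3f}  B={b:.3f}  linear sum={a**GAMMA + b**GAMMA:.3f}")
```

Note that at t = 0.5 each projector outputs a signal value around 0.73, not 0.5; feeding a plain linear ramp to both would leave a visible dark band in the overlap.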
Oh man. I don’t know what I’m doing wrong.
I think I’m doing what you’re saying? I’m exporting a unified (joined) OBJ and then exporting just the camera as an FBX. Then I’m diving into the FBX in TD, copying and pasting the camera and the null that hold the Translate and Rotation positions to the main project1 level, and plugging the Camera into the Render node. It makes sense, but somehow it’s not working. I’ve even tried exporting the camera as FBX from your Blender file. I must be missing one tiny thing!
Ah,
I bet I know what’s happening. The default FBX scaling settings in Blender’s exporter will result in Blender’s default unit (meters) being converted into the FBX default (centimeters), which scales the base unit by 100.
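Just to spell out the arithmetic (illustrative numbers only; the camera position here is made up):

```python
# Blender works in meters; the FBX default unit is the centimeter.
# "Applying" units on export multiplies every transform by the ratio.
M_PER_BLENDER_UNIT = 1.0   # 1 Blender unit = 1 m
M_PER_FBX_UNIT = 0.01      # 1 FBX unit = 1 cm
scale = M_PER_BLENDER_UNIT / M_PER_FBX_UNIT   # 100.0

cam_pos_m = (2.0, 3.0, 5.0)                   # hypothetical camera position, meters
cam_pos_fbx = tuple(c * scale for c in cam_pos_m)
print(cam_pos_fbx)  # (200.0, 300.0, 500.0): the camera lands 100x too far out
```

So the imported camera translate values come in 100 times larger than the OBJ geo, which is why nothing lines up.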
The easiest way to avoid this is to change the Scaling settings in the FBX export panel in Blender to apply the FBX units here:
YESSSS!!! That’s it!












