# Convert 3D coordinates to screen coordinates

I have points displaying in 3D space and want to know where they are in a 2D image. I’m a good programmer but… not super mathy. Can someone help me out?

You’ll need access to the projection matrix, the world matrix, and possibly the object matrix (if you want to access points in an object). If you’re not running your own shader, then you might need to make your own camera (projection) matrix or calculate it from the FOV of the camera. I’m not sure, though, about how far behind the screen the camera is.

Either way, once you know the frustum (the angle of the perspective) and the position of the camera behind the screen, you should be able to use trigonometry to get your distance from the center of the screen. Once you have that (or before), you’ll need to convert to screen space (pixel space).
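The trig approach described above can be sketched in plain Python. This is a minimal illustration, not TouchDesigner-specific code; the vertical-FOV convention and the camera-looking-down-negative-Z convention are assumptions:

```python
import math

def project_point_fov(x, y, z, fov_deg, aspect):
    """Project a camera-space point to normalized screen coords (-1..1)
    using only the camera's vertical FOV. The camera sits at the origin
    looking down -Z, so z must be negative for visible points."""
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)  # distance to the image plane
    sx = (f / aspect) * x / -z  # horizontal offset, corrected for aspect ratio
    sy = f * y / -z             # vertical offset
    return sx, sy
```

A point straight ahead of the camera lands at (0, 0), and a point whose angular offset equals half the FOV lands at the edge of the image (±1).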

It could be a bit tricky or really easy, depending… I have 3D scenes that use my own camera inside a shader, where it wouldn’t be too hard because I have access to all that data.

I’m super busy this week but if you remind me next week maybe I can give it a shot and see if I can come up with an example (I’m sure it would be super useful in the future).

cheers
Keith

I’ve been wondering about this myself… since you can access the projection matrix from camera objects now, this should be relatively easy, right? I also lack the math to get this working. Does anyone have an example of this working?

So after some research, I worked out a simple way of doing this that seems to work just fine. You can read about the technique and download the example here: viewtopic.php?f=20&t=5645

With matrices it’s very easy too.
You just get the projection matrix and multiply it by your camera-space position. This will give you a position where X = -1 is the left of the image and 1 is the right of the image. Y = -1 is the bottom of the image and Y = 1 is the top of the image. Values outside -1/1 will not be visible in your render. If you read math articles about this you’ll see that you are supposed to divide all the components by W after a projection transformation, but I do this automatically for you after a matrix multiply so you don’t need to worry about it.
Z = -1 is the near plane and Z = 1 is the far plane. Similarly, values outside -1/1 will not be visible in your render (this is how the near/far clipping planes work)

To get a position into camera space from world space, multiply it by the inverse of the camera’s worldTransform().
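The whole pipeline described above (world space → camera space → projection → divide by W → normalized -1..1 coordinates) can be written out explicitly with NumPy. This is a sketch of the underlying math, not the TouchDesigner API; note that the divide-by-W step, which TouchDesigner does for you automatically, is done by hand here, and the OpenGL-style perspective matrix is an assumption:

```python
import numpy as np

def perspective(fov_deg, aspect, near, far):
    """Build a standard OpenGL-style perspective projection matrix
    (column-vector convention, camera looking down -Z)."""
    f = 1.0 / np.tan(np.radians(fov_deg) / 2.0)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = (2.0 * far * near) / (near - far)
    m[3, 2] = -1.0
    return m

def world_to_ndc(world_pos, cam_world, proj):
    """world space -> camera space (via the inverse of the camera's
    world transform) -> clip space -> NDC (the explicit divide by W)."""
    p = np.append(world_pos, 1.0)            # homogeneous coordinate
    cam_space = np.linalg.inv(cam_world) @ p # into camera space
    clip = proj @ cam_space                  # apply the projection matrix
    return clip[:3] / clip[3]                # divide by W -> -1..1 ranges
```

With the camera at the origin, a point straight ahead projects to X = 0, Y = 0, and points at the edge of the frustum land at ±1, exactly the ranges described above.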


I’ve been operating under the (possibly very false) assumption that, for realtime things, stringing together a few math CHOPs will outperform executing a Python script to evaluate matrices every frame (since there aren’t any CHOPs for working on matrices). That’s mostly what led me to pursue something CHOP-based (it also helped me better understand what’s actually going on).

-michael

Hey y’all,

I have just come to need this as well. I took a look at mji’s component, and it definitely starts to get me there, though my camera and object are in motion, and I can’t depend on the camera facing straight forward all the time. I’m going to try to work out the trig to make mji’s component handle my camera that rotates, pans, and moves, using the Object CHOP, but I was wondering about these matrix techniques.

Outside of the GLSL world, I know I can now get matrices in Python, and I think I can figure out getting the camera-space matrix, but what is the best way to get this screen-space X,Y back to CHOPs if I have to do everything in Python? Is it just a script that’s cooking all the time and writes values to a Constant CHOP? I’m not super savvy on matrices either, so I’m not sure which values I actually care about. Are they just entries [3,0] and [3,1] in my resulting matrix?

Ok, so I think I sorted out a Python script, but I’m still getting very large values for the transform column. I’m also a bit worried about how something like this will perform. Can you store aliases to the constantly updating matrices in a script? Would that help? Here is my code; can you see if I’m doing anything wrong?

[code]camAspectX = 1.0
camAspectY = (1080/1920)

camOP = op('myCam')
targOP = op('myTargetOp')

camProjMatrix = camOP.projection(camAspectX, camAspectY)
#print(camProjMatrix)

camWorldTrans = camOP.worldTransform
targWorldTrans = targOP.worldTransform

camWorldTrans.invert()

camSpaceMatrix = targWorldTrans * camWorldTrans

screenSpaceMatrix = camProjMatrix * camSpaceMatrix

values = screenSpaceMatrix.decompose()

print(values)
print(screenSpaceMatrix)[/code]

Ok, getting a lot closer; I was confused about Positions vs. Matrices. Things are moving into the correct places relatively, but it’s overshooting, so either something is up with my geos or I’m still not quite doing this right. All my internal SOPs are at zero, which they weren’t at first, so that shouldn’t be the problem. Am I doing the aspect stuff right? I know I’m doing things inefficiently for grabbing vectors and such, but the Position object wouldn’t init when I passed it the decomposed translation tuple on its own. Here is the new code:

[code]camAspectX = 1
camAspectY = 1080/1920

screenTable = op('screenTable')
camOP = op(me.var('myCam1'))
targOP = op('myGeoCOMP')

camProjMatrix = camOP.projection(camAspectX, camAspectY)
#print(camProjMatrix)

camWorldTrans = camOP.worldTransform
targWorldTrans = targOP.worldTransform
print(targWorldTrans)

targWorldTransVec = targWorldTrans.decompose()[2]
print(targWorldTransVec)

targWorldPos = tdu.Position(targWorldTransVec[0], targWorldTransVec[1], targWorldTransVec[2])

camWorldTrans.invert()

camSpacePos = camWorldTrans * targWorldPos

screenSpacePos = camProjMatrix * camSpacePos

print(screenSpacePos)

screenTable.clear()
screenTable.appendRow([screenSpacePos[0], screenSpacePos[1], screenSpacePos[2]])[/code]

I don’t think you can use decompose like that, because it’s not giving you a position; it’s giving you the translation portion of a transformation. That may or may not match the final position, depending on the order in which the scale, rotate, and translate are applied.

A more explicit way of doing this is just:

[code]pos = tdu.Position() # starts at 0,0,0

worldSpacePos = worldMat * pos
camSpacePos = camMat * worldSpacePos
ssPos = projMat * camSpacePos[/code]

Thanks Malcolm.

I’m all set now. I was “overshooting” because I wasn’t ranging back to the TOP transform range (-0.5, 0.5). It seems to work great, though a little expensive for my tastes; I’ll probably see what efficiencies I can bring in and reply back.
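The ranging step mentioned above is just a linear remap of the -1..1 projected coordinates. A minimal sketch, assuming a -0.5..0.5 TOP transform range and a bottom-left pixel origin (both conventions are assumptions):

```python
def ndc_to_top_uv(ndc_x, ndc_y):
    """Remap projected coords from -1..1 to the TOP transform
    range of -0.5..0.5 (halve each component)."""
    return ndc_x * 0.5, ndc_y * 0.5

def ndc_to_pixels(ndc_x, ndc_y, width, height):
    """Remap projected coords from -1..1 to pixel coordinates,
    with (0, 0) at the bottom-left of the image."""
    px = (ndc_x * 0.5 + 0.5) * width
    py = (ndc_y * 0.5 + 0.5) * height
    return px, py
```

So the center of the image (0, 0) maps to (0, 0) in the TOP range and to the middle pixel of the render.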

Out of curiosity, did you try the method that I posted? It doesn’t use python and I’d be curious to see how it performs computationally versus this method.

Would it be practical to calculate the coordinates directly in the shader and then write them out to a second color buffer?

mji, I do want to try to get your method working. I did have it functioning, but I still have to implement the math to accommodate the camera moving in all directions as well. It’s a personal project, so I have to put it on hold for a bit. The Python/matrix method is a bit slow for sure, so I’d like to work out something faster in CHOPs.

Keith, I did try rendering the locations to a second color buffer, but not long enough to make it work. It seems to me like setting up a render is a similar time drain to the Python matrix method, but I don’t know for sure.

I’d never posted it, but I had actually developed a version that’s contained neatly within a COMP and does all that, plus takes camera position into account. I just uploaded it to the techniques forum. You can get it here:

That’s fantastic, and it makes total sense to just relate the camera to the object with an Object CHOP, of course!

Thanks for that update, super useful!

I’ve shared a C++ CHOP version here: github.com/DBraun/MatrixCHOP

This plugin is a life-saver thank you for this!

I didn’t see a response with the usual Houdini way to do this, which is the Texture SOP in ‘perspective from camera’ mode.